TextAttack


Average Model Cost: $0.0000

Number of Runs: 533,324

Models by this creator

bert-base-uncased-CoLA

The bert-base-uncased-CoLA model is a text classification model based on BERT (Bidirectional Encoder Representations from Transformers), fine-tuned for the CoLA (Corpus of Linguistic Acceptability) task, which evaluates the grammatical correctness of a sentence. The model takes a sentence as input and predicts whether it is grammatically acceptable or not.
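A minimal sketch of querying this checkpoint, assuming the Hugging Face `transformers` library is installed. The pipeline call downloads the model, so it is kept behind `__main__`; the label-index order in the helper is an assumption (fine-tuned checkpoints often expose generic `LABEL_0`/`LABEL_1` names):

```python
import math

def acceptability(logits):
    # Softmax over the two-way classification head.
    # Index 1 is *assumed* to mean "acceptable".
    m = max(logits)
    probs = [math.exp(x - m) for x in logits]
    total = sum(probs)
    probs = [p / total for p in probs]
    return ("acceptable" if probs[1] >= probs[0] else "unacceptable", probs)

def load_cola_pipeline():
    # Downloads the checkpoint on first use; requires network + transformers.
    from transformers import pipeline
    return pipeline("text-classification",
                    model="textattack/bert-base-uncased-CoLA")

if __name__ == "__main__":
    clf = load_cola_pipeline()
    print(clf("The cat sat on the mat."))
```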


$-/run · 176.8K runs · Huggingface

bert-base-uncased-SST-2

The bert-base-uncased-SST-2 model is a pre-trained text classification model that uses the BERT (Bidirectional Encoder Representations from Transformers) architecture. It has been fine-tuned on the Stanford Sentiment Treebank (SST-2) dataset, which consists of movie reviews labeled with their sentiment (positive or negative). The model can be used directly for binary sentiment classification or as a starting point for further fine-tuning on related text classification tasks.


$-/run · 58.7K runs · Huggingface

bert-base-uncased-MNLI

The bert-base-uncased-MNLI model is a text classification model based on the BERT architecture and fine-tuned on the Multi-Genre Natural Language Inference (MNLI) dataset. It classifies pairs of sentences into three categories: entailment, contradiction, or neutral. Because it models the relationship between two sentences, it can also support downstream tasks that rely on sentence-pair reasoning, such as validating candidate answers in question answering or zero-shot text classification.
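The sentence-pair interface can be sketched as follows (again assuming `transformers`; MNLI checkpoints commonly order the three classes as shown, but that mapping is an assumption for this particular checkpoint):

```python
def nli_label(logits, labels=("entailment", "neutral", "contradiction")):
    # Argmax over the three-way classification head.
    best = max(range(len(logits)), key=lambda i: logits[i])
    return labels[best]

def load_mnli_pipeline():
    # Downloads the checkpoint on first use; requires network + transformers.
    from transformers import pipeline
    return pipeline("text-classification",
                    model="textattack/bert-base-uncased-MNLI")

if __name__ == "__main__":
    clf = load_mnli_pipeline()
    # Premise and hypothesis are passed together as a single pair.
    print(clf({"text": "A man is playing a guitar.",
               "text_pair": "A person is making music."}))
```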


$-/run · 48.2K runs · Huggingface

bert-base-uncased-STS-B

The bert-base-uncased-STS-B model is a variant of BERT fine-tuned for semantic textual similarity (STS). It is trained on the STS-B dataset, which consists of sentence pairs drawn from a variety of sources and annotated with similarity scores ranging from 0 to 5. Given a pair of sentences, the model predicts such a similarity score, and it can be used for applications like paraphrase detection, semantic search, and other settings where measuring the similarity between sentences matters.
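Because STS-B is a regression task, the fine-tuned head emits a single similarity score rather than class probabilities. A sketch (the score clamping is illustrative, and the `function_to_apply="none"` keyword, which returns the raw regression output instead of a softmaxed one, assumes a recent `transformers` version):

```python
def clamp_score(raw, low=0.0, high=5.0):
    # STS-B scores are annotated on a 0-5 scale; clamp the raw output to it.
    return max(low, min(high, raw))

def load_stsb_pipeline():
    # Downloads the checkpoint on first use; requires network + transformers.
    from transformers import pipeline
    return pipeline("text-classification",
                    model="textattack/bert-base-uncased-STS-B",
                    function_to_apply="none")

if __name__ == "__main__":
    clf = load_stsb_pipeline()
    out = clf({"text": "A dog runs in the park.",
               "text_pair": "A dog is running outside."})
    print(clamp_score(out[0]["score"]))
```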


$-/run · 47.6K runs · Huggingface

albert-base-v2-rotten_tomatoes

The albert-base-v2-rotten_tomatoes model is based on ALBERT (A Lite BERT), a parameter-efficient transformer model for natural language processing. It is fine-tuned on the Rotten Tomatoes movie-review dataset. Like the other checkpoints from this creator, it is a sequence classification model: given a review sentence, it predicts whether the sentiment is positive or negative.


$-/run · 40.4K runs · Huggingface

Similar creators