TextAttack
Rank:
Average Model Cost: $0.0000
Number of Runs: 533,324
Models by this creator
bert-base-uncased-CoLA
The bert-base-uncased-CoLA model is a text classification model based on BERT (Bidirectional Encoder Representations from Transformers). It is fine-tuned for the CoLA (Corpus of Linguistic Acceptability) task, which evaluates whether a sentence is grammatically acceptable. The model takes a single sentence as input and predicts whether it is acceptable or not; a usage sketch follows this entry.
$-/run
176.8K
Huggingface
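The snippet below is a minimal sketch of querying this checkpoint with the transformers pipeline API. The hub ID textattack/bert-base-uncased-CoLA and the label convention (LABEL_0 = unacceptable, LABEL_1 = acceptable) are assumptions based on how TextAttack checkpoints are usually published, not details confirmed by this listing.

```python
# Minimal sketch: grammatical-acceptability classification with transformers.
# The hub ID and label meanings below are assumptions, not confirmed here.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="textattack/bert-base-uncased-CoLA",  # assumed hub ID
)

# TextAttack checkpoints typically expose generic labels:
# LABEL_1 is assumed to mean "acceptable", LABEL_0 "unacceptable".
print(classifier("The book was written by the author."))
print(classifier("The book was wrote by the author."))
```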
roberta-base-CoLA
The roberta-base-CoLA model is a text classification model fine-tuned on the CoLA task from the GLUE benchmark using the TextAttack library. It reached an evaluation-set accuracy of 0.8504 after 1 epoch of training.
$-/run
118.3K
Huggingface
bert-base-uncased-SST-2
The bert-base-uncased-SST-2 model is a text classification model that uses the BERT (Bidirectional Encoder Representations from Transformers) architecture. It has been fine-tuned on the Stanford Sentiment Treebank (SST-2) dataset, which consists of movie-review sentences labeled as positive or negative. The model can be used directly for sentiment classification or as a starting point for further fine-tuning; a usage sketch follows this entry.
$-/run
58.7K
Huggingface
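As a sketch of using a checkpoint like this without the pipeline wrapper, the example below loads the tokenizer and model directly and applies a softmax to the logits. The hub ID textattack/bert-base-uncased-SST-2 and the [negative, positive] label order are assumptions.

```python
# Minimal sketch: sentiment probabilities from raw logits (hub ID assumed).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "textattack/bert-base-uncased-SST-2"  # assumed hub ID
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("A touching and beautifully shot film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Column order [negative, positive] follows the usual SST-2 convention
# and is an assumption about this particular checkpoint.
print(torch.softmax(logits, dim=-1))
```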
bert-base-uncased-MNLI
The bert-base-uncased-MNLI model is a text classification model based on the BERT architecture and fine-tuned on the Multi-Genre Natural Language Inference (MNLI) dataset. It classifies pairs of sentences into three categories: entailment, contradiction, or neutral. Because it models the relationship between two sentences, it is a common building block for natural language understanding tasks such as question answering and zero-shot classification; a sentence-pair usage sketch follows this entry.
$-/run
48.2K
Huggingface
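Since MNLI is a sentence-pair task, inputs are passed as text/text_pair rather than a single string. The sketch below assumes the hub ID textattack/bert-base-uncased-MNLI; which of the generic labels maps to entailment, neutral, or contradiction varies between checkpoints and should be read from the model config rather than assumed.

```python
# Minimal sketch: sentence-pair NLI classification (hub ID assumed).
from transformers import pipeline

nli = pipeline("text-classification", model="textattack/bert-base-uncased-MNLI")

# The text-classification pipeline accepts sentence pairs as a dict
# with "text" and "text_pair" keys.
result = nli({"text": "A man is playing a guitar.",
              "text_pair": "A person is making music."})
# Which of LABEL_0/1/2 means entailment, neutral, or contradiction depends on
# the checkpoint's config; inspect nli.model.config.id2label before relying on it.
print(result)
```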
bert-base-uncased-STS-B
The bert-base-uncased-STS-B model is a variant of BERT fine-tuned for semantic textual similarity (STS). Given a pair of sentences, it predicts a similarity score. It is trained on the STS-B dataset, whose sentence pairs are drawn from a variety of sources and annotated with similarity scores ranging from 0 to 5. The predicted scores can be used for paraphrase detection and other applications where measuring sentence similarity matters; a usage sketch follows this entry.
$-/run
47.6K
Huggingface
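Unlike the classification checkpoints above, an STS-B head is typically a single-logit regression, so the raw logit is the similarity score. The sketch below assumes the hub ID textattack/bert-base-uncased-STS-B and such a regression head; if the checkpoint were configured with multiple labels, the squeeze/item call would need adjusting.

```python
# Minimal sketch: semantic similarity as a regression score (hub ID assumed).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "textattack/bert-base-uncased-STS-B"  # assumed hub ID
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Passing two strings encodes them as a sentence pair.
inputs = tokenizer("A plane is taking off.",
                   "An air plane is taking off.",
                   return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # assumes a 1-logit regression head
print(f"similarity: {score:.2f}")  # roughly on the dataset's 0-5 scale
```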
albert-base-v2-rotten_tomatoes
The albert-base-v2-rotten_tomatoes model is based on ALBERT (A Lite BERT), a parameter-efficient transformer for natural language processing. This checkpoint is fine-tuned on the Rotten Tomatoes movie-review dataset for binary sentiment classification: given a review sentence, it predicts whether the sentiment is positive or negative.
$-/run
40.4K
Huggingface
roberta-base-MNLI
$-/run
22.5K
Huggingface
bert-base-uncased-ag-news
The bert-base-uncased-ag-news model is a fine-tuned version of BERT trained for sequence classification on the ag_news dataset. Training ran for 5 epochs with a batch size of 16, a learning rate of 3e-05, and a maximum sequence length of 128; the best evaluation-set accuracy, 0.9514, was reached at epoch 3. A sketch of an equivalent training setup follows this entry.
$-/run
9.8K
Huggingface
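The hyperparameters in the entry map directly onto the standard transformers Trainer API. The sketch below reproduces that configuration (5 epochs, batch size 16, learning rate 3e-05, max length 128) against the Hugging Face ag_news dataset; it is an illustrative reconstruction of an equivalent setup, not the creator's actual training script.

```python
# Sketch of a fine-tuning setup matching the listed hyperparameters;
# an illustrative reconstruction, not TextAttack's training script.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = load_dataset("ag_news")  # 4-class news-topic dataset on the HF hub

def tokenize(batch):
    # Maximum sequence length of 128, as stated in the entry.
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4)

args = TrainingArguments(
    output_dir="bert-base-uncased-ag-news",
    num_train_epochs=5,               # 5 epochs
    per_device_train_batch_size=16,   # batch size 16
    learning_rate=3e-5,               # learning rate 3e-05
)
Trainer(model=model, args=args,
        tokenizer=tokenizer,          # enables dynamic padding in the collator
        train_dataset=dataset["train"],
        eval_dataset=dataset["test"]).train()
```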
bert-base-uncased-rotten_tomatoes
$-/run
6.5K
Huggingface
xlnet-base-cased-rotten-tomatoes
$-/run
4.6K
Huggingface