Cardiffnlp


Average Model Cost: $0.0000

Number of Runs: 7,348,614

Models by this creator

twitter-roberta-base-irony

The twitter-roberta-base-irony model is a text classification model based on the RoBERTa-base architecture. It was trained on a large corpus of tweets and fine-tuned specifically for irony detection, and it performs well on the TweetEval irony-detection benchmark.

Runs: 4.0M · Hugging Face

twitter-roberta-base-sentiment

The twitter-roberta-base-sentiment model is a sentiment analysis model trained on approximately 58 million tweets. It uses the RoBERTa-base architecture and was fine-tuned for sentiment analysis on the TweetEval benchmark. Designed for English text, it classifies a tweet into one of three categories: negative, neutral, or positive.
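As a rough illustration (not the model's official API), the three-class head's raw logits can be turned into a sentiment label with a softmax. The label order below is an assumption, following the common negative/neutral/positive convention:

```python
import math

# Assumed label order for the three-class sentiment head.
LABELS = ["negative", "neutral", "positive"]

def softmax(logits):
    """Convert raw logits into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Return (label, probability) for the highest-scoring class."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

# Hypothetical logits for an upbeat tweet: the last (positive) score dominates.
label, prob = classify([-1.2, 0.3, 2.1])
```

In practice the logits would come from running the model on a tokenized tweet; the post-processing step itself is just this softmax-and-argmax.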

Runs: 1.0M · Hugging Face

twitter-roberta-base-emotion

The twitter-roberta-base-emotion model is a text classification model based on the RoBERTa-base architecture. It was trained on a large corpus of tweets and fine-tuned for emotion recognition using the TweetEval benchmark, classifying a tweet by the emotion it expresses (the TweetEval emotion task covers anger, joy, optimism, and sadness). The model achieves strong performance on emotion recognition tasks.

Runs: 150.2K · Hugging Face

twitter-roberta-base-dec2021-tweet-topic-multi-all

The twitter-roberta-base-dec2021-tweet-topic-multi-all model is a fine-tuned version of cardiffnlp/twitter-roberta-base-dec2021 trained on the tweet_topic_multi dataset. It is trained on the train_all split and validated on the test_2021 split. On test_2021 it achieves a micro F1 of 0.765, a macro F1 of 0.619, and an accuracy of 0.549.
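The gap between the micro and macro F1 scores comes from the averaging order: micro F1 pools true/false positives across all labels, while macro F1 averages per-label F1 scores, so rare labels weigh more heavily in the macro figure. A minimal hand-rolled sketch on toy multi-label data (the label sets below are illustrative only) shows the two aggregations:

```python
def f1(tp, fp, fn):
    """F1 = 2*TP / (2*TP + FP + FN); defined as 0 when the denominator is 0."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def micro_macro_f1(y_true, y_pred, n_labels):
    """Compute (micro F1, macro F1) for multi-label sets of label indices."""
    per_label = []
    TP = FP = FN = 0
    for lbl in range(n_labels):
        tp = sum(1 for t, p in zip(y_true, y_pred) if lbl in t and lbl in p)
        fp = sum(1 for t, p in zip(y_true, y_pred) if lbl not in t and lbl in p)
        fn = sum(1 for t, p in zip(y_true, y_pred) if lbl in t and lbl not in p)
        per_label.append(f1(tp, fp, fn))   # macro: F1 per label, then average
        TP += tp; FP += fp; FN += fn       # micro: pool counts, then one F1
    return f1(TP, FP, FN), sum(per_label) / n_labels

# Toy 3-label example: each tweet's gold/predicted topics as sets of indices.
y_true = [{0, 1}, {2}, {0}]
y_pred = [{0}, {2}, {0, 1}]
micro, macro = micro_macro_f1(y_true, y_pred, 3)
```

Here label 1 is rare and poorly predicted, so the macro score (2/3) falls below the micro score (0.75) — the same asymmetry visible in the model's reported metrics.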

Runs: 23.5K · Hugging Face

twitter-xlm-roberta-base

The twitter-xlm-roberta-base model is a language model trained on approximately 198 million multilingual tweets. It is based on the XLM-RoBERTa-base architecture and supports a range of natural language processing tasks, including masked-token prediction. The model is described and evaluated in its reference paper, and it can be evaluated on Twitter-specific data using the main repository.

Runs: 22.5K · Hugging Face
