Joeddav

Average Model Cost: $0.0000

Number of Runs: 150,642

Models by this creator

distilbert-base-uncased-go-emotions-student

The distilbert-base-uncased-go-emotions-student model is a student classifier distilled from a zero-shot NLI pipeline over the unlabeled GoEmotions dataset. It is intended primarily as a demonstration of how an expensive NLI-based zero-shot model can be distilled into a more efficient student that requires only unlabeled data to train. The model was trained with mixed precision for 10 epochs using the default script arguments. While it may not perform as well as a model trained with full supervision, it can still be used to classify text into the GoEmotions emotion labels.
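
Below is a minimal usage sketch with the Hugging Face transformers pipeline API. It assumes the checkpoint is available on the Hub under the id joeddav/distilbert-base-uncased-go-emotions-student and that the student is loaded as an ordinary text-classification model; the example sentence is illustrative only.

    from transformers import pipeline

    # Assumed Hub id for the distilled student checkpoint
    classifier = pipeline(
        "text-classification",
        model="joeddav/distilbert-base-uncased-go-emotions-student",
    )

    # top_k=3 returns the three highest-scoring GoEmotions labels
    # as a list of {"label": ..., "score": ...} dicts
    print(classifier("I can't believe you remembered my birthday, thank you!", top_k=3))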

$-/run · 94.4K runs · Huggingface

bart-large-mnli-yahoo-answers

The bart-large-mnli-yahoo-answers model is a variant of BART (Bidirectional and Auto-Regressive Transformers) fine-tuned on the MultiNLI natural language inference dataset and then on Yahoo Answers topic classification. It is designed for zero-shot classification, meaning it can assign a class label to an input even when it has not been trained on that label or dataset, and it is particularly suited to sorting Yahoo Answers-style questions into topic categories.
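
A brief sketch of zero-shot topic classification with the transformers zero-shot-classification pipeline follows; the Hub id joeddav/bart-large-mnli-yahoo-answers and the example question and labels are assumptions for illustration, not part of this listing.

    from transformers import pipeline

    # Assumed Hub id for the Yahoo Answers-tuned NLI checkpoint
    classifier = pipeline(
        "zero-shot-classification",
        model="joeddav/bart-large-mnli-yahoo-answers",
    )

    question = "What is the best way to learn to play the guitar?"
    candidate_labels = ["Music", "Sports", "Health", "Computers & Internet"]

    result = classifier(question, candidate_labels)
    # Labels are returned sorted by score, highest first
    print(result["labels"][0], result["scores"][0])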

$-/run · 29.6K runs · Huggingface

xlm-roberta-large-xnli

The xlm-roberta-large-xnli model is a cross-lingual model based on XLM-RoBERTa and fine-tuned on the XNLI dataset for natural language inference, the task of determining the logical relationship between a pair of sentences. Because it understands text in many languages, it can be used for multilingual zero-shot text classification, sentence classification, and semantic matching.
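
The sketch below, again using the transformers zero-shot-classification pipeline, illustrates the multilingual angle: because the model is cross-lingual, the input text and the candidate labels need not share a language. The Hub id joeddav/xlm-roberta-large-xnli and the Spanish example are assumptions for illustration.

    from transformers import pipeline

    # Assumed Hub id for the cross-lingual NLI checkpoint
    classifier = pipeline(
        "zero-shot-classification",
        model="joeddav/xlm-roberta-large-xnli",
    )

    sequence = "¿A quién vas a votar en las elecciones?"  # Spanish input
    candidate_labels = ["politics", "sports", "cooking"]  # English labels

    result = classifier(sequence, candidate_labels)
    print(list(zip(result["labels"], result["scores"])))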

$-/run · 26.6K runs · Huggingface

Similar creators