Models by this creator
The sn-xlm-roberta-base-snli-mnli-anli-xnli model is a Siamese network trained for zero-shot and few-shot text classification. It is based on xlm-roberta-base and was trained on the SNLI, MNLI, ANLI, and XNLI datasets. The model maps sentences and paragraphs to a 768-dimensional dense vector space. It can be used with the Sentence-Transformers library, or with the HuggingFace Transformers library by passing the input through the model and applying the appropriate pooling operation to the contextualized word embeddings.
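To illustrate how a sentence-embedding model like this supports zero-shot classification, here is a minimal sketch of the underlying idea: embed the input text and each candidate label into the same vector space, then pick the label with the highest cosine similarity. The 768-dimensional vectors below are random placeholders standing in for real model output, and the function names are hypothetical, not part of any library.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two dense vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_classify(text_embedding, label_embeddings):
    # Pick the label whose embedding lies closest to the text embedding.
    scores = {label: cosine_similarity(text_embedding, emb)
              for label, emb in label_embeddings.items()}
    return max(scores, key=scores.get), scores

# Placeholder 768-dimensional vectors; in practice these would come
# from encoding the text and label descriptions with the model.
rng = np.random.default_rng(0)
text_emb = rng.standard_normal(768)
labels = {
    "sports": rng.standard_normal(768),
    # "politics" is deliberately constructed near the text embedding.
    "politics": text_emb + 0.1 * rng.standard_normal(768),
}
best, scores = zero_shot_classify(text_emb, labels)
print(best)  # "politics": its embedding was built close to text_emb
```

Because no gradient updates are needed, adding a new class is just a matter of embedding its label description.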
A Siamese network model trained for zero-shot and few-shot text classification. The base model is mpnet-base, and it was trained on SNLI and MNLI. As a sentence-transformers model, it maps sentences and paragraphs to a 768-dimensional dense vector space. With the Sentence-Transformers library installed, the model can be loaded and used directly. Without sentence-transformers, you can use it via HuggingFace Transformers: pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
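The pooling operation referred to above is typically mean pooling: averaging the token embeddings while using the attention mask to exclude padding positions. A self-contained sketch of that step, using small dummy arrays in place of real transformer output:

```python
import numpy as np

def mean_pooling(token_embeddings, attention_mask):
    # token_embeddings: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
    mask = attention_mask[..., np.newaxis].astype(float)  # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=1)        # sum over real tokens only
    counts = np.clip(mask.sum(axis=1), 1e-9, None)        # guard against empty masks
    return summed / counts                                # (batch, hidden)

# Toy example: batch of 1, four tokens (the last is padding), hidden size 3.
emb = np.array([[[1.0, 2.0, 3.0],
                 [3.0, 4.0, 5.0],
                 [5.0, 6.0, 7.0],
                 [9.0, 9.0, 9.0]]])  # padding row, ignored by the mask
mask = np.array([[1, 1, 1, 0]])
print(mean_pooling(emb, mask))  # [[3. 4. 5.]] -- average of the three real tokens
```

With a real model, `token_embeddings` would be the last hidden state from the transformer and `attention_mask` would come from the tokenizer; the pooled vector is the 768-dimensional sentence embedding.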