Cross-encoder
Rank:
Average Model Cost: $0.0000
Number of Runs: 1,246,491
Models by this creator
ms-marco-MiniLM-L-6-v2
ms-marco-MiniLM-L-6-v2 is a cross-encoder trained on the Microsoft Machine Reading Comprehension (MS MARCO) passage ranking dataset. It is based on the MiniLM architecture, a smaller and faster distillation of BERT, and uses six transformer layers. Given a query and a candidate passage, it outputs a single relevance score, which makes it well suited to re-ranking the results of a retrieval system. The model is particularly useful for applications that need fast, accurate relevance scoring at scale.
$-/run
377.8K
Huggingface
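A minimal usage sketch with the sentence-transformers CrossEncoder API; the query and passages below are invented examples:

```python
from sentence_transformers import CrossEncoder

# Load the re-ranker; the model id matches the Hugging Face hub name.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "How many people live in Berlin?"
passages = [
    "Berlin has a population of about 3.7 million people.",
    "Berlin is well known for its museums and nightlife.",
]

# Each (query, passage) pair gets one relevance score;
# higher scores mean the passage is more relevant to the query.
scores = model.predict([(query, p) for p in passages])
for score, passage in sorted(zip(scores, passages), reverse=True):
    print(f"{score:.3f}  {passage}")
```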
ms-marco-MiniLM-L-4-v2
ms-marco-MiniLM-L-4-v2 is a cross-encoder trained on the MS MARCO passage ranking dataset, designed for scoring and re-ranking results in information retrieval systems. It uses the MiniLM architecture with four transformer layers, trading a small amount of ranking quality for higher throughput than the six-layer variant. It is useful for tasks such as question answering retrieval and document ranking.
$-/run
355.1K
Huggingface
ms-marco-MiniLM-L-12-v2
The ms-marco-MiniLM-L-12-v2 model is a cross-encoder for relevance scoring, trained on the Microsoft Machine Reading Comprehension (MS MARCO) dataset. It is based on the MiniLM architecture with twelve transformer layers, making it the largest and slowest of the MiniLM variants listed here, and typically the most accurate. It can be used for retrieval tasks such as document ranking and passage ranking.
$-/run
282.7K
Huggingface
stsb-roberta-base
stsb-roberta-base is a cross-encoder trained for semantic textual similarity, the task of determining the degree of similarity between two pieces of text. It is based on the RoBERTa architecture, a variant of the popular BERT model, and was trained on the STS benchmark dataset. Given two input texts, the model predicts a similarity score between 0 and 1, where higher values indicate greater similarity. It can be fine-tuned on specific datasets to improve performance on related similarity tasks.
$-/run
67.5K
Huggingface
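A short sketch of similarity scoring via the same CrossEncoder API; the sentence pairs are made up for illustration:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/stsb-roberta-base")

# One similarity score per sentence pair, in the range [0, 1].
scores = model.predict([
    ("A man is eating food.", "A man is eating a piece of bread."),
    ("A man is eating food.", "A plane is taking off."),
])
print(scores)  # expect a high score for the first pair, low for the second
```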
nli-deberta-base
The nli-deberta-base model uses DeBERTa, a state-of-the-art language model, for natural language inference (NLI). Given a premise and a hypothesis, it classifies their relationship as entailment, contradiction, or neutral. Because an NLI model can judge whether a text entails an arbitrary hypothesis, it also works for zero-shot classification, assigning sentences to categories it has not been specifically trained on.
$-/run
35.0K
Huggingface
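A sketch of NLI prediction with sentence-transformers; the example pairs are invented, and the label order below follows the published model card (worth verifying against the model's config):

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/nli-deberta-base")

# The model returns three logits per pair, one for each NLI label.
scores = model.predict([
    ("A man is eating pizza.", "A man is eating something."),
    ("A man is eating pizza.", "Nobody is eating."),
])

# Label order per the model card: contradiction, entailment, neutral.
labels = ["contradiction", "entailment", "neutral"]
print([labels[i] for i in scores.argmax(axis=1)])
```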
ms-marco-TinyBERT-L-2-v2
ms-marco-TinyBERT-L-2-v2 is a cross-encoder trained on the MS MARCO (Microsoft Machine Reading Comprehension) dataset. It is a compact distillation of the original BERT model, called TinyBERT, and uses two transformer layers. The model scores query-document pairs for relevance, for example to decide which retrieved documents best answer a given query, and is small enough to re-rank large amounts of text efficiently.
$-/run
33.1K
Huggingface
ms-marco-TinyBERT-L-2
ms-marco-TinyBERT-L-2 is a pre-trained model based on the TinyBERT architecture. It is designed for query-passage relevance scoring and was trained on the MS MARCO dataset. The model is optimized for efficiency and has a small number of layers (L=2). It can be fine-tuned on domain-specific data to improve ranking performance.
$-/run
26.2K
Huggingface
nli-deberta-v3-xsmall
The nli-deberta-v3-xsmall model is a natural language inference model that doubles as a zero-shot classifier: it can assign textual inputs to predefined classes without being trained on those specific classes. It uses the DeBERTa v3 architecture, a transformer-based neural network, to encode a premise-hypothesis pair and predict their relationship, and it was trained on the SNLI and MultiNLI datasets. It can be used in applications such as document classification, sentiment analysis, and intent detection.
$-/run
25.2K
Huggingface
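Since Hugging Face's zero-shot-classification pipeline accepts NLI models, a minimal sketch (with an invented example sentence and labels) looks like:

```python
from transformers import pipeline

# An NLI cross-encoder can back the zero-shot classification pipeline.
classifier = pipeline(
    "zero-shot-classification",
    model="cross-encoder/nli-deberta-v3-xsmall",
)

result = classifier(
    "Apple just announced the newest iPhone X",
    candidate_labels=["technology", "sports", "politics"],
)
print(result["labels"][0])  # highest-scoring label
```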
mmarco-mMiniLMv2-L12-H384-v1
mmarco-mMiniLMv2-L12-H384-v1 is a multilingual cross-encoder trained on MMARCO, a machine-translated multilingual version of the MS MARCO dataset. It uses the mMiniLMv2-L12-H384 architecture and scores query-passage pairs for relevance, making it useful for re-ranking retrieval results in many languages rather than English alone.
$-/run
24.8K
Huggingface
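A sketch of cross-lingual re-ranking, assuming the same CrossEncoder API as above; the English query and German passages are invented examples:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/mmarco-mMiniLMv2-L12-H384-v1")

# Query and passages may be in different languages.
scores = model.predict([
    ("How many people live in Berlin?",
     "Berlin hat rund 3,7 Millionen Einwohner."),
    ("How many people live in Berlin?",
     "Berlin ist bekannt für seine Museen."),
])
print(scores)
```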
nli-deberta-v3-base
The nli-deberta-v3-base model is a pre-trained natural language inference model that can perform zero-shot classification. It is based on the DeBERTa v3 architecture and classifies text against arbitrary candidate labels without task-specific training data. This makes it applicable to a wide range of classification tasks, such as sentiment analysis, topic classification, and intent detection.
$-/run
19.0K
Huggingface