Cross-encoder

Average Model Cost: $0.0000

Number of Runs: 1,246,491

Models by this creator

ms-marco-MiniLM-L-6-v2

ms-marco-MiniLM-L-6-v2 is a cross-encoder trained on the Microsoft Machine Reading Comprehension (MS MARCO) passage-ranking dataset. It is based on the MiniLM architecture, a smaller and faster distillation of BERT. Given a query and a candidate passage, the model outputs a relevance score, which makes it well suited for re-ranking the results of a retrieval system such as a keyword or embedding search. Its small size makes it a good fit for applications that need fast, efficient re-ranking at large scale.

Cost: $-/run

Runs: 377.8K

Source: Huggingface

ms-marco-MiniLM-L-12-v2

The ms-marco-MiniLM-L-12-v2 model is a cross-encoder for relevance scoring. Like its 6-layer sibling, it was trained on the Microsoft Machine Reading Comprehension (MS MARCO) passage-ranking dataset, but it uses 12 transformer layers, trading some speed for higher accuracy. Given a query and a candidate text, it predicts a relevance score, and it can be used for tasks such as document ranking and passage ranking over MS MARCO-style retrieval workloads.

Cost: $-/run

Runs: 282.7K

Source: Huggingface

stsb-roberta-base

stsb-roberta-base is a cross-encoder trained for semantic textual similarity: estimating the degree to which two pieces of text mean the same thing. The model is based on the RoBERTa architecture, a robustly optimized variant of BERT, and was trained on the STS benchmark, whose sentence pairs are annotated on a 0-to-5 similarity scale. Given two input texts, the model predicts a similarity score, with higher values indicating greater similarity. It can be fine-tuned on specific datasets to improve its performance on related text-similarity tasks.

Cost: $-/run

Runs: 67.5K

Source: Huggingface

nli-deberta-v3-xsmall

The nli-deberta-v3-xsmall model is a cross-encoder trained for natural language inference (NLI): given a premise and a hypothesis, it predicts whether the hypothesis is entailed by, contradicts, or is neutral with respect to the premise. It uses the DeBERTa v3 architecture, a transformer-based encoder. Because an NLI model can test whether a text entails an arbitrary label description, it can also perform zero-shot classification, assigning texts to predefined classes without task-specific training, for applications such as topic classification, sentiment analysis, and intent detection.
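The zero-shot use is easiest through the Hugging Face pipeline, which wraps an NLI model and turns candidate labels into hypotheses behind the scenes; a sketch with an illustrative input and label set:

```python
# Zero-shot classification sketch using the transformers pipeline with
# this NLI cross-encoder. Assumes the transformers package is installed.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="cross-encoder/nli-deberta-v3-xsmall",
)

result = classifier(
    "I am really disappointed with this purchase.",
    candidate_labels=["complaint", "praise", "question"],
)

# Labels come back sorted by score, most likely first.
print(result["labels"][0])
```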

Cost: $-/run

Runs: 25.2K

Source: Huggingface