Jplu
Average Model Cost: $0.0000
Number of Runs: 24,596
Models by this creator
tf-camembert-base
The tf-camembert-base model is a French language model based on the CamemBERT architecture. It is designed for fill-mask tasks: given a sentence with a masked word, the model predicts the most likely word to fill in the blank (see the sketch after this entry). Trained on French text, it can be used as a base for a range of French natural language processing tasks.
$-/run
11.1K
Huggingface
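A minimal fill-mask sketch, under assumptions: the checkpoint is fetched from the Hugging Face Hub as jplu/tf-camembert-base (TensorFlow weights), and the tokenizer is borrowed from the original camembert-base repo in case the TF repo does not ship one.

```python
# Hedged sketch: the Hub id "jplu/tf-camembert-base" and the
# camembert-base tokenizer are assumptions, not taken from the listing.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="jplu/tf-camembert-base",
    tokenizer="camembert-base",
    framework="tf",  # load the TensorFlow checkpoint
)

# CamemBERT uses "<mask>" as its mask token; the pipeline returns the
# top candidate tokens for the blank, each with a score.
for prediction in fill_mask("Le camembert est un fromage <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```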
tf-xlm-roberta-base
tf-xlm-roberta-base is a transformer-based language model pretrained on a large corpus of multilingual text. Built on the XLM-RoBERTa architecture, it is designed to understand text in many languages, and it was pretrained with a masked-language-modeling objective, so it can fill in missing words or tokens in a given sentence (see the sketch after this entry). It is useful as a base for tasks such as language understanding, text classification, and cross-lingual transfer.
$-/run
6.1K
Huggingface
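As with the CamemBERT entry, a hedged sketch of the fill-mask use the description mentions; the Hub id jplu/tf-xlm-roberta-base and the xlm-roberta-base tokenizer are assumptions:

```python
# The same pipeline handles many languages because XLM-RoBERTa shares
# one vocabulary across ~100 languages; it also uses "<mask>".
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="jplu/tf-xlm-roberta-base",
    tokenizer="xlm-roberta-base",
    framework="tf",  # load the TensorFlow checkpoint
)

for sentence in ("Paris is the <mask> of France.",
                 "Paris est la <mask> de la France."):
    best = fill_mask(sentence)[0]  # highest-scoring completion
    print(sentence, "->", best["token_str"])
```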
tiny-tf-bert-random
$-/run
4.4K
Huggingface
tf-xlm-roberta-large
$-/run
2.5K
Huggingface
tf-xlm-r-ner-40-lang
$-/run
374
Huggingface
tf-flaubert-small-cased
$-/run
81
Huggingface
tf-flaubert-base-cased
$-/run
18
Huggingface
adel-dbpedia-retrieval
adel-dbpedia-retrieval is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. With the sentence-transformers library installed, the model can be loaded and used directly; without it, you pass your input through the transformer model and then apply the right pooling operation on top of the contextualized word embeddings (both paths are sketched after this entry). For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net. The model was trained with a torch.utils.data.dataloader.DataLoader of length 71 and the beir.losses.margin_mse_loss.MarginMSELoss loss.
$-/run
16
Huggingface
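The card above refers to usage snippets that did not survive extraction. Below is a minimal sketch of both paths, assuming the model is hosted as jplu/adel-dbpedia-retrieval and that mean pooling is the correct pooling operation (the usual sentence-transformers default; the card itself does not say):

```python
# Hedged sketch of the two usage paths the card describes. The Hub id
# "jplu/adel-dbpedia-retrieval" and mean pooling are assumptions.
import torch
from sentence_transformers import SentenceTransformer
from transformers import AutoModel, AutoTokenizer

sentences = ["This is an example sentence.", "Each sentence is converted."]

# Path 1: with sentence-transformers installed.
st_model = SentenceTransformer("jplu/adel-dbpedia-retrieval")
embeddings = st_model.encode(sentences)
print(embeddings.shape)  # (2, 768): one 768-dim dense vector per sentence

# Path 2: plain HuggingFace Transformers. Run the transformer, then apply
# the pooling operation on top of the contextualized word embeddings
# (mean pooling over non-padding tokens, assumed here).
tokenizer = AutoTokenizer.from_pretrained("jplu/adel-dbpedia-retrieval")
model = AutoModel.from_pretrained("jplu/adel-dbpedia-retrieval")

encoded = tokenizer(sentences, padding=True, truncation=True,
                    return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**encoded)[0]  # (batch, seq_len, 768)

mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
print(sentence_embeddings.shape)  # (2, 768)
```

If mean pooling matches the model's configured pooling layer, the two paths produce the same 768-dimensional vectors.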
tf-flaubert-base-uncased
$-/run
15
Huggingface
adel-dbpedia-rerank
$-/run
14
Huggingface