Explosion-testing
Rank:
Average Model Cost: $0.0000
Number of Runs: 35,548
Models by this creator
bert-test
The bert-test model is a language model trained on a masked language modeling task. Based on the BERT architecture, it fills in masked tokens in a given sentence, so it can generate missing words in text and is useful for tasks such as language understanding and text completion.
$-/run
6.1K
Huggingface
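The fill-mask task that bert-test performs can be illustrated with a toy sketch: predict a masked token from its surrounding context. The trigram-count table below stands in for a real BERT network and is purely illustrative, not the actual model.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; a real masked language model is trained on
# billions of tokens, not three sentences.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat slept on the mat",
]

# Count which word appears between each (left, right) neighbour pair.
context_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words) - 1):
        context_counts[(words[i - 1], words[i + 1])][words[i]] += 1

def fill_mask(sentence: str, mask: str = "[MASK]") -> str:
    """Replace the masked position with the most frequent word seen
    in the same (left, right) context."""
    words = sentence.split()
    i = words.index(mask)
    candidates = context_counts[(words[i - 1], words[i + 1])]
    words[i] = candidates.most_common(1)[0][0]
    return " ".join(words)

print(fill_mask("the dog [MASK] on the mat"))
# prints "the dog sat on the mat"
```

A real model scores every vocabulary word with a neural network rather than a count table, but the input/output contract is the same: a sentence with a mask in, ranked candidate words out.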
refined-web-model-test
The refined-web-model-test is a language model designed to generate text from a given prompt or context. The creators have not published details of its training data or capabilities, but it is presumably trained on a large corpus of web text and can generate responses across a wide range of topics and styles.
$-/run
5.8K
Huggingface
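The prompt-driven generation that refined-web-model-test performs can be sketched the same way: repeatedly predict the next token from what came before. A bigram count table over a tiny corpus stands in for the real model's learned weights; this is a toy illustration, not the model itself.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus for demonstration only.
corpus = "the model reads the prompt and the model writes the reply"

# Count which word follows each word.
next_counts = defaultdict(Counter)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    next_counts[prev][nxt] += 1

def generate(prompt: str, steps: int = 3) -> str:
    """Greedily extend the prompt, one most-frequent next word at a time."""
    words = prompt.split()
    for _ in range(steps):
        candidates = next_counts.get(words[-1])
        if not candidates:
            break  # no continuation ever observed for this token
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the prompt", steps=2))
# prints "the prompt and the"
```

Real autoregressive models condition on the whole preceding context rather than just the last word, and sample from a learned distribution instead of picking the single most frequent continuation.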
albert-test
The albert-test model is a language model trained on a large amount of text data. It is designed to fill in missing words or phrases in a given sentence, known as the fill-mask task, using its understanding of language and context to predict the most likely word or phrase for the masked position.
$-/run
5.0K
Huggingface
roberta-test
The roberta-test model is designed to fill in missing words in a given sentence or text. It is based on the RoBERTa architecture and has been fine-tuned on a fill-mask task. This model can be used for various natural language processing tasks such as text completion, language modeling, and question answering. By inputting a sentence with masked words, the model can predict the most probable words to fill in the gaps.
$-/run
5.0K
Huggingface
camembert-test
$-/run
5.0K
Huggingface
xlm-roberta-test
$-/run
5.0K
Huggingface
bert-test-sharded
$-/run
3.7K
Huggingface