Csarron
Rank:
Average Model Cost: $0.0000
Number of Runs: 31,518
Models by this creator
bert-base-uncased-squad-v1
The bert-base-uncased-squad-v1 model uses the BERT architecture for extractive question answering. It is fine-tuned on the Stanford Question Answering Dataset (SQuAD) and, given a question and a passage of context, extracts the answer span from that context. The model is uncased: it does not distinguish between upper- and lowercase letters.
$-/run
21.2K
Huggingface
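A minimal sketch of using this model for extractive QA with the Hugging Face transformers `question-answering` pipeline. The Hub repo id `csarron/bert-base-uncased-squad-v1` is assumed from the creator and model names shown on this page and may differ; running the pipeline downloads the model weights on first use.

```python
# Sketch: extractive QA with a SQuAD-v1 fine-tuned BERT model.
# Assumed repo id: "csarron/bert-base-uncased-squad-v1".
from transformers import pipeline


def answer(question: str, context: str,
           model_id: str = "csarron/bert-base-uncased-squad-v1") -> str:
    """Return the answer span the model extracts from `context`."""
    qa = pipeline("question-answering", model=model_id, tokenizer=model_id)
    # The pipeline returns a dict with "answer", "score", "start", "end".
    result = qa(question=question, context=context)
    return result["answer"]


# Example (downloads the model on first call):
# answer("What is SQuAD built from?",
#        "SQuAD is a reading comprehension dataset built from Wikipedia articles.")
```

Because the model was trained on SQuAD v1, it always returns some span from the context, even when no true answer is present.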
mobilebert-uncased-squad-v2
MobileBERT is a compact version of BERT that performs well across a wide range of natural language processing (NLP) tasks while being lightweight enough for mobile devices with limited resources. This model is fine-tuned for question answering on the SQuAD 2.0 dataset, which, unlike SQuAD v1, also contains unanswerable questions. Given a question and a paragraph of text, the model either extracts an answer from the provided context or indicates that none exists.
$-/run
9.8K
Huggingface
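Since SQuAD 2.0 includes unanswerable questions, the pipeline's `handle_impossible_answer` flag lets this model abstain. A hedged sketch, assuming the Hub repo id `csarron/mobilebert-uncased-squad-v2` (inferred from this page, not confirmed):

```python
# Sketch: QA with MobileBERT fine-tuned on SQuAD 2.0, which allows
# the model to decide that the context contains no answer.
# Assumed repo id: "csarron/mobilebert-uncased-squad-v2".
from transformers import pipeline


def answer_or_abstain(question: str, context: str,
                      model_id: str = "csarron/mobilebert-uncased-squad-v2"):
    """Return the extracted answer, or None when the model abstains."""
    qa = pipeline("question-answering", model=model_id, tokenizer=model_id)
    # handle_impossible_answer=True lets the pipeline return an empty
    # answer string when the model predicts "no answer" (SQuAD 2.0 behavior).
    out = qa(question=question, context=context, handle_impossible_answer=True)
    return out["answer"] or None
```

Compared with the BERT-base model above, MobileBERT trades a small amount of accuracy for a much smaller footprint and faster inference.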
roberta-base-squad-v1
$-/run
280
Huggingface
mobilebert-uncased-squad-v1
$-/run
251
Huggingface
roberta-large-squad-v1
$-/run
44
Huggingface