Sophiebottani
Rank:
Average Model Cost: $0.0000
Number of Runs: 65,809
Models by this creator
deberta_squadnewsqa
deberta_squadnewsqa is a fine-tuned version of microsoft/deberta-v3-base that performs fine-grained question answering on the Squad_v2 and NewsQA datasets. It achieves a loss of 0.9648 on the evaluation set. More information is needed regarding the model description, intended uses and limitations, the training and evaluation data, and the training procedure. Training used a learning rate of 2e-05, a train batch size of 8, an eval batch size of 8, the Adam optimizer, and a single epoch. Framework versions: Transformers 4.28.0, PyTorch 2.0.0, Datasets 2.12.0, Tokenizers 0.13.3.
$-/run
65.8K
Huggingface
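Both training sets are extractive QA datasets, and Squad_v2 additionally contains unanswerable questions; models trained on it typically compare the best answer span against a "no answer" score at the CLS position. A minimal sketch of that span-selection step, with illustrative scores rather than real model logits (the function name and numbers here are not from the model card):

```python
# Sketch of SQuAD v2-style span selection: pick the best (start, end)
# span from per-token scores, but return "no answer" (the span at the
# CLS position, index 0) when the null score wins. Scores are
# illustrative; a real QA model would produce them as logits.

def best_span(start_scores, end_scores, max_len=30):
    """Return (start, end, score) for the best valid span, or
    (0, 0, null_score) when "no answer" scores higher."""
    null_score = start_scores[0] + end_scores[0]  # CLS-position span
    best = (0, 0, null_score)
    for s in range(1, len(start_scores)):
        # Only consider spans that end at or after their start and
        # stay within max_len tokens.
        for e in range(s, min(s + max_len, len(end_scores))):
            score = start_scores[s] + end_scores[e]
            if score > best[2]:
                best = (s, e, score)
    return best if best[2] > null_score else (0, 0, null_score)

# Example: the span (2, 3) outscores the null span at index 0.
print(best_span([1.0, 0.1, 3.0, 0.2], [1.0, 0.0, 0.5, 2.5]))
```

When no span beats the null score, the function falls back to (0, 0, null_score), which is how SQuAD v2 evaluation treats unanswerable questions.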
distilbert_NewsQA_model
distilbert_NewsQA_model is a fine-tuned version of distilbert-base-uncased on the NewsQA dataset, trained specifically to answer questions about English news articles. It achieves a loss of 1.9481 on the evaluation set. More information is needed regarding the model description, intended uses and limitations, and the training and evaluation data.

Training hyperparameters:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

Framework versions: Transformers 4.26.0, PyTorch 1.12.1+cu102, Datasets 2.9.0, Tokenizers 0.13.2.
$-/run
20
Huggingface
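The distilbert_NewsQA_model card lists a linear lr_scheduler_type. A minimal sketch of such a schedule in plain Python, assuming behavior like the Transformers library's linear schedule with warmup (the card does not state a warmup value, so it defaults to zero here):

```python
# Sketch of a linear learning-rate schedule: ramp up to peak_lr over
# warmup_steps, then decay linearly to 0 by total_steps. peak_lr
# defaults to the 2e-05 listed in the model card.

def linear_lr(step, total_steps, peak_lr=2e-05, warmup_steps=0):
    """Learning rate at a given optimizer step under a linear schedule."""
    if step < warmup_steps:
        # Warmup phase: scale linearly from 0 up to peak_lr.
        return peak_lr * step / max(1, warmup_steps)
    # Decay phase: scale linearly from peak_lr down to 0.
    remaining = max(0, total_steps - step)
    return peak_lr * remaining / max(1, total_steps - warmup_steps)

# Example: halfway through training the rate has decayed to half of 2e-05.
print(linear_lr(50, 100))
```

With num_epochs: 3 and train_batch_size: 16, total_steps would be 3 times the number of batches per epoch; the exact value depends on the (unstated) dataset size.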
distilbert_squad_newsqa
$-/run
10
Huggingface