Arbazk
Rank:
Average Model Cost: $0.0000
Number of Runs: 6,814
Models by this creator
maestroqa-distilbert-negative-sentiment
The maestroqa-distilbert-negative-sentiment model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english. On its evaluation set it achieves a loss of 0.8880 and an accuracy of 0.77. The model card does not yet document the model's intended uses and limitations, its training and evaluation data, or the details of its training procedure. The reported training hyperparameters are a learning rate of 2e-05, a train batch size of 16, an eval batch size of 16, a seed of 42, and the Adam optimizer, trained over 5 epochs. The model was built with Transformers 4.25.1, PyTorch 2.0.0, Datasets 2.10.1, and Tokenizers 0.13.2.
$-/run
6.8K
Huggingface
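Since the model is fine-tuned from distilbert-base-uncased-finetuned-sst-2-english, it inherits that model's two-logit classification head, conventionally ordered [NEGATIVE, POSITIVE]. A minimal sketch of how those raw logits become a negative-sentiment probability (the logit values below are made up for illustration, not real model output):

```python
import math

def softmax(logits):
    """Convert raw classifier logits to probabilities (max-shifted for stability)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def negative_probability(logits):
    """Probability mass assigned to the NEGATIVE label (index 0 in SST-2 ordering)."""
    return softmax(logits)[0]

# Illustrative logits, assumed here for the example.
p_neg = negative_probability([2.3, -1.1])
print(round(p_neg, 4))  # → 0.9677
```

This is the same post-processing a `transformers` sentiment pipeline applies before reporting a label and score.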
distilbert-base-uncased-finetuned-cola
$-/run
13
Huggingface
distilbert-base-uncased-finetuned-sst2
Platform did not provide a description for this model.
$-/run
0
Huggingface