Muhtasham
Rank: -
Average Model Cost: $0.0000
Number of Runs: 1,025
Models by this creator
santacoder-finetuned-the-stack-cobol
Platform did not provide a description for this model.
$-/run
455
Huggingface
tiny-vanilla-target-imdb
$-/run
208
Huggingface
santacoder-finetuned-the-stack-assembly
Platform did not provide a description for this model.
$-/run
80
Huggingface
bert-small-finetuned-eoir-privacy-longer30
Platform did not provide a description for this model.
$-/run
49
Huggingface
tiny-mlm-glue-stsb-custom-tokenizer
This model is a fine-tuned version of google/bert_uncased_L-2_H-128_A-2 on an unspecified dataset. It achieves a loss of 7.1017 on the evaluation set. Model description, intended uses & limitations, and training and evaluation data: more information needed.
Training hyperparameters: learning_rate 3e-05, train_batch_size 32, eval_batch_size 32, seed 42, optimizer Adam with betas=(0.9, 0.999) and epsilon=1e-08, lr_scheduler_type constant, num_epochs 200 (see the training-arguments sketch after this entry).
Framework versions: Transformers 4.26.0.dev0, PyTorch 1.13.0+cu116, Datasets 2.8.1.dev0, Tokenizers 0.13.2.
$-/run
45
Huggingface
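The hyperparameters reported on the tiny-mlm-glue-stsb-custom-tokenizer card map directly onto the Hugging Face Trainer configuration. A minimal sketch, assuming the checkpoint was produced with transformers' Trainer (the training script itself is not published here); the output_dir is hypothetical, and only the numeric values come from the card:

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments mirroring the hyperparameters reported on the card.
# output_dir is hypothetical; all numeric values below are taken from the card.
training_args = TrainingArguments(
    output_dir="tiny-mlm-glue-stsb-custom-tokenizer",  # hypothetical path
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=200,
    adam_beta1=0.9,       # Adam betas=(0.9, 0.999) as listed
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```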
bert-small-finetuned-wnut17-ner
$-/run
43
Huggingface
tiny-mlm-glue-sst2-custom-tokenizer
This model is a fine-tuned version of google/bert_uncased_L-2_H-128_A-2 on an unspecified dataset. It achieves a loss of 7.2580 on the evaluation set. Model description, intended uses & limitations, and training and evaluation data: more information needed.
Training hyperparameters: learning_rate 3e-05, train_batch_size 32, eval_batch_size 32, seed 42, optimizer Adam with betas=(0.9, 0.999) and epsilon=1e-08, lr_scheduler_type constant, num_epochs 200.
Framework versions: Transformers 4.26.0.dev0, PyTorch 1.13.0+cu116, Datasets 2.8.1.dev0, Tokenizers 0.13.2.
$-/run
43
Huggingface
bert-tiny-finetuned-xglue-ner
Platform did not provide a description for this model.
$-/run
39
Huggingface
tiny-mlm-glue-sst2
This model is a fine-tuned version of google/bert_uncased_L-2_H-128_A-2 on an unspecified dataset. It achieves a loss of 4.2692 on the evaluation set. Model description, intended uses & limitations, and training and evaluation data: more information needed (see the fill-mask sketch after this entry).
Training hyperparameters: learning_rate 3e-05, train_batch_size 32, eval_batch_size 32, seed 42, optimizer Adam with betas=(0.9, 0.999) and epsilon=1e-08, lr_scheduler_type constant, num_epochs 200.
Framework versions: Transformers 4.26.0.dev0, PyTorch 1.13.0+cu116, Datasets 2.8.1.dev0, Tokenizers 0.13.2.
$-/run
34
Huggingface
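The tiny-mlm-* checkpoints are masked-language models fine-tuned from google/bert_uncased_L-2_H-128_A-2, so they can be queried with the fill-mask pipeline. A minimal sketch, assuming the model is published under the muhtasham namespace on the Hugging Face Hub (the exact repo id is an assumption):

```python
from transformers import pipeline

# Hypothetical repo id; the listing gives only the model name, not the full Hub path.
model_id = "muhtasham/tiny-mlm-glue-sst2"

# fill-mask works because the checkpoint is a BERT-style masked language model.
fill_mask = pipeline("fill-mask", model=model_id)

for prediction in fill_mask("The movie was absolutely [MASK]."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.4f}")
```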
tiny-mlm-rotten_tomatoes-custom-tokenizer
This model is a fine-tuned version of google/bert_uncased_L-2_H-128_A-2 on an unspecified dataset. It achieves a loss of 7.5806 on the evaluation set (the reported losses are compared as perplexities in the sketch after this entry). Model description, intended uses & limitations, and training and evaluation data: more information needed.
Training hyperparameters: learning_rate 3e-05, train_batch_size 32, eval_batch_size 32, seed 42, optimizer Adam with betas=(0.9, 0.999) and epsilon=1e-08, lr_scheduler_type constant, num_epochs 200.
Framework versions: Transformers 4.26.0.dev0, PyTorch 1.13.0+cu116, Datasets 2.8.1.dev0, Tokenizers 0.13.2.
$-/run
29
Huggingface
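The cards above report only a raw evaluation loss. Assuming that number is the mean masked-LM cross-entropy in nats (the Trainer default, though the cards do not state this), it can be converted to perplexity for a rough comparison of the checkpoints:

```python
import math

# Evaluation losses as reported on the model cards above.
# Treating each as mean cross-entropy in nats is an assumption.
eval_losses = {
    "tiny-mlm-glue-stsb-custom-tokenizer": 7.1017,
    "tiny-mlm-glue-sst2-custom-tokenizer": 7.2580,
    "tiny-mlm-glue-sst2": 4.2692,
    "tiny-mlm-rotten_tomatoes-custom-tokenizer": 7.5806,
}

for name, loss in eval_losses.items():
    # Perplexity is exp(cross-entropy); lower is better.
    print(f"{name}: loss={loss:.4f}, perplexity~{math.exp(loss):,.0f}")
```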