Muhtasham

Rank:

Average Model Cost: $0.0000

Number of Runs: 1,025

Models by this creator

santacoder-finetuned-the-stack-cobol

Creator: muhtasham

The platform did not provide a description for this model.

Cost: $-/run
Runs: 455
Platform: Hugging Face
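
A usage sketch may help here: the card gives no description, but the name indicates a SantaCoder checkpoint fine-tuned on the COBOL subset of The Stack, i.e. a causal LM for code generation. The snippet below assumes the checkpoint is published on the Hugging Face Hub as muhtasham/santacoder-finetuned-the-stack-cobol and that it inherits the base SantaCoder requirement for trust_remote_code; neither is confirmed by this listing.

```python
# Hedged sketch: prompting the COBOL fine-tune as a causal code LM.
# The repo id and trust_remote_code need are assumptions, not confirmed above.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="muhtasham/santacoder-finetuned-the-stack-cobol",
    trust_remote_code=True,  # the base SantaCoder ships custom model code
)

# COBOL source is column-sensitive, hence the leading spaces in the prompt.
prompt = (
    "       IDENTIFICATION DIVISION.\n"
    "       PROGRAM-ID. HELLO.\n"
)
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```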

tiny-mlm-glue-stsb-custom-tokenizer

This model is a fine-tuned version of google/bert_uncased_L-2_H-128_A-2 on an unspecified dataset. It achieves a loss of 7.1017 on the evaluation set.

Model description: more information needed.
Intended uses & limitations: more information needed.
Training and evaluation data: more information needed.

Training hyperparameters:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200

Training results: none reported.

Framework versions: Transformers 4.26.0.dev0, PyTorch 1.13.0+cu116, Datasets 2.8.1.dev0, Tokenizers 0.13.2.
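
For orientation, the hyperparameters above map onto the standard Hugging Face Trainer API roughly as follows. This is a minimal sketch under stated assumptions, not the author's actual script: the Trainer's default AdamW optimizer already matches the listed betas and epsilon, and the dataset arguments are placeholders because the card does not name its training data.

```python
# Minimal sketch of the card's stated training configuration.
# Dataset arguments are placeholders: the card says "more information
# needed" for the training and evaluation data.
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "google/bert_uncased_L-2_H-128_A-2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

args = TrainingArguments(
    output_dir="tiny-mlm-glue-stsb-custom-tokenizer",
    learning_rate=3e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=200,
)

trainer = Trainer(
    model=model,
    args=args,
    # Masked-language-modeling collator; masking probability left at default.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer),
    # train_dataset=..., eval_dataset=...,  # unspecified in the card
)
# trainer.train() would then run the 200-epoch fine-tune described above.
```

The same configuration (bar the output directory) is reported for every tiny-mlm model below, so this sketch applies to all of them.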

Cost: $-/run
Runs: 45
Platform: Hugging Face

tiny-mlm-glue-sst2-custom-tokenizer

This model is a fine-tuned version of google/bert_uncased_L-2_H-128_A-2 on an unspecified dataset. It achieves a loss of 7.2580 on the evaluation set.

Model description: more information needed.
Intended uses & limitations: more information needed.
Training and evaluation data: more information needed.

Training hyperparameters:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200

Training results: none reported.

Framework versions: Transformers 4.26.0.dev0, PyTorch 1.13.0+cu116, Datasets 2.8.1.dev0, Tokenizers 0.13.2.

Cost: $-/run
Runs: 43
Platform: Hugging Face

tiny-mlm-glue-sst2

This model is a fine-tuned version of google/bert_uncased_L-2_H-128_A-2 on an unspecified dataset. It achieves a loss of 4.2692 on the evaluation set.

Model description: more information needed.
Intended uses & limitations: more information needed.
Training and evaluation data: more information needed.

Training hyperparameters:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200

Training results: none reported.

Framework versions: Transformers 4.26.0.dev0, PyTorch 1.13.0+cu116, Datasets 2.8.1.dev0, Tokenizers 0.13.2.
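
As a quick inference sketch: a tiny BERT-style MLM like this can be queried through the fill-mask pipeline. The repository id muhtasham/tiny-mlm-glue-sst2 is inferred from the title and is an assumption, as is Hub availability; given the reported evaluation loss, expect rough predictions from these small research checkpoints.

```python
# Hedged sketch: querying a tiny BERT-style MLM. The repo id is inferred
# from the model title and is not confirmed by this listing.
from transformers import pipeline

fill = pipeline("fill-mask", model="muhtasham/tiny-mlm-glue-sst2")

# BERT-style checkpoints use the [MASK] token.
for pred in fill("This movie was absolutely [MASK]."):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```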

Cost: $-/run
Runs: 34
Platform: Hugging Face

tiny-mlm-rotten_tomatoes-custom-tokenizer

This model is a fine-tuned version of google/bert_uncased_L-2_H-128_A-2 on an unspecified dataset. It achieves a loss of 7.5806 on the evaluation set.

Model description: more information needed.
Intended uses & limitations: more information needed.
Training and evaluation data: more information needed.

Training hyperparameters:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200

Training results: none reported.

Framework versions: Transformers 4.26.0.dev0, PyTorch 1.13.0+cu116, Datasets 2.8.1.dev0, Tokenizers 0.13.2.

Cost: $-/run
Runs: 29
Platform: Hugging Face
