Valurank

Average Model Cost: $0.0000

Number of Runs: 2,298

Models by this creator

distilroberta-bias

No description available.

Cost: $-/run

Runs: 375

Platform: Huggingface

distilroberta-offensive

This model is a fine-tuned version of distilroberta-base on an unknown dataset. It achieves the following results on the evaluation set:

Loss: 0.4526
Acc: 0.8975

Model description: More information needed
Intended uses & limitations: More information needed
Training and evaluation data: More information needed

Training procedure
The following hyperparameters were used during training:
learning_rate: 5e-05
train_batch_size: 32
eval_batch_size: 32
seed: 12345
optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
lr_scheduler_type: linear
lr_scheduler_warmup_steps: 16
num_epochs: 20
mixed_precision_training: Native AMP

Framework versions
Transformers 4.11.3
Pytorch 1.10.1
Datasets 1.17.0
Tokenizers 0.10.3
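
A minimal usage sketch follows, assuming the model is published on the Hugging Face Hub as valurank/distilroberta-offensive and that the standard transformers text-classification pipeline applies; the repository id and label names are assumptions, not stated in this listing.

```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub.
# The repository id below is an assumption based on this listing.
classifier = pipeline(
    "text-classification",
    model="valurank/distilroberta-offensive",
)

# Classify a piece of text; the label names come from the model's
# config and are not documented in the listing above.
print(classifier("This is a perfectly friendly sentence."))
```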

Cost: $-/run

Runs: 240

Platform: Huggingface

distilroberta-clickbait

This model is a fine-tuned version of distilroberta-base on a dataset of headlines. It achieves the following results on the evaluation set:

Loss: 0.0268
Acc: 0.9963

Training and evaluation data
The following data sources were used:
32k headlines classified as clickbait/not-clickbait from Kaggle
A dataset of headlines from https://github.com/MotiBaadror/Clickbait-Detection

Training procedure
The following hyperparameters were used during training:
learning_rate: 2e-05
train_batch_size: 32
eval_batch_size: 32
seed: 12345
optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
lr_scheduler_type: linear
lr_scheduler_warmup_steps: 16
num_epochs: 20
mixed_precision_training: Native AMP

Framework versions
Transformers 4.11.3
Pytorch 1.10.1
Datasets 1.17.0
Tokenizers 0.10.3
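
A minimal batch-scoring sketch, assuming the model is hosted on the Hugging Face Hub as valurank/distilroberta-clickbait; the repository id, the example headlines, and the label names resolved via the model config are assumptions for illustration.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Repository id is an assumption based on this listing.
MODEL_ID = "valurank/distilroberta-clickbait"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

headlines = [
    "You Won't Believe What Happened Next",
    "Central bank raises interest rates by 25 basis points",
]

# Tokenize the batch and run a single forward pass without gradients.
inputs = tokenizer(headlines, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax the logits and report the most likely label per headline.
probs = torch.softmax(logits, dim=-1)
for headline, p in zip(headlines, probs):
    idx = int(p.argmax())
    print(f"{headline!r} -> {model.config.id2label[idx]} ({float(p[idx]):.3f})")
```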

Cost: $-/run

Runs: 206

Platform: Huggingface
