huggingface-course


Average Model Cost: $0.0000

Number of Runs: 8,389

Models by this creator

bert-finetuned-squad

The platform did not provide a description for this model; the name indicates a BERT checkpoint fine-tuned on the SQuAD question-answering dataset.

Cost: $-/run · Runs: 4.6K · Platform: Hugging Face
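A minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub as huggingface-course/bert-finetuned-squad (the repo id is inferred from the creator and model names, and the question/context strings are illustrative):

```python
from transformers import pipeline

# Extractive question answering with the fine-tuned checkpoint;
# the repo id is inferred from the creator and model names.
qa = pipeline("question-answering", model="huggingface-course/bert-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="bert-finetuned-squad is a BERT checkpoint fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```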

distilbert-base-uncased-finetuned-imdb

This model is a fine-tuned version of distilbert-base-uncased on the imdb dataset. It achieves the following results on the evaluation set:

- Loss: 2.4264

Model description, intended uses & limitations, and training and evaluation data: more information needed.

Training hyperparameters:

- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP

Framework versions:

- Transformers 4.12.0.dev0
- PyTorch 1.9.1+cu111
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
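The hyperparameters above map directly onto transformers' TrainingArguments. A minimal sketch of how such a run could be configured follows; the output directory name and the per-device batch-size interpretation are assumptions, not taken from the card:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed in the card. "distilbert-imdb-mlm"
# is an illustrative output directory, not from the original card.
args = TrainingArguments(
    output_dir="distilbert-imdb-mlm",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,  # "Native AMP" mixed-precision training
)
```

The same pattern applies, with the batch sizes and epoch counts swapped in, to the other fine-tuning runs listed on this page.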

Cost: $-/run · Runs: 740 · Platform: Hugging Face
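Since the base model is a masked language model adapted to IMDB text, the natural way to try it is a fill-mask pipeline. A sketch, assuming the Hub repo id huggingface-course/distilbert-base-uncased-finetuned-imdb and an illustrative input sentence:

```python
from transformers import pipeline

# Fill-mask with the IMDB-adapted DistilBERT; repo id is inferred.
fill = pipeline(
    "fill-mask",
    model="huggingface-course/distilbert-base-uncased-finetuned-imdb",
)

# [MASK] is the mask token used by the distilbert-base-uncased tokenizer.
for pred in fill("This movie was an absolute [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```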

marian-finetuned-kde4-en-to-fr

This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-fr on the kde4 dataset. It achieves the following results on the evaluation set:

- Loss: 0.8559
- BLEU: 52.9416

Model description, intended uses & limitations, and training and evaluation data: more information needed.

Training hyperparameters:

- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

Framework versions:

- Transformers 4.12.0.dev0
- PyTorch 1.8.1+cu111
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
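The card does not say how the BLEU figure was computed; a sketch of how such a score can be obtained with the present-day evaluate library and SacreBLEU follows (the candidate and reference sentences are illustrative):

```python
import evaluate

# SacreBLEU expects a list of candidate strings and, per candidate,
# a list of reference strings.
bleu = evaluate.load("sacrebleu")
predictions = ["Le fichier a été supprimé."]
references = [["Le fichier a été supprimé."]]
print(bleu.compute(predictions=predictions, references=references)["score"])
```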

Cost: $-/run · Runs: 498 · Platform: Hugging Face
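A usage sketch, assuming the checkpoint is published as huggingface-course/marian-finetuned-kde4-en-to-fr on the Hub; the input string is illustrative:

```python
from transformers import pipeline

# English-to-French translation with the KDE4-adapted Marian model;
# repo id is inferred from the creator and model names.
translator = pipeline(
    "translation",
    model="huggingface-course/marian-finetuned-kde4-en-to-fr",
)
print(translator("Default to expanded threads")[0]["translation_text"])
```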

mt5-finetuned-amazon-en-es

This model is a fine-tuned version of google/mt5-small on an unspecified dataset (the card's dataset field was left as "None"). It achieves the following results on the evaluation set:

- Loss: 3.0285
- Rouge1: 16.9728
- Rouge2: 8.2969
- RougeL: 16.8366
- RougeLsum: 16.851
- Gen Len (mean generated length): 10.1597

Model description, intended uses & limitations, and training and evaluation data: more information needed.

Training hyperparameters:

- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8

Framework versions:

- Transformers 4.12.3
- PyTorch 1.9.1+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3

Cost: $-/run · Runs: 14 · Platform: Hugging Face
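A usage sketch, assuming the checkpoint lives on the Hub as huggingface-course/mt5-finetuned-amazon-en-es; judging by the low Gen Len, the model produces short, review-title-style summaries, and the review text below is illustrative:

```python
from transformers import pipeline

# Short abstractive summaries of product reviews (English or Spanish);
# repo id is inferred from the creator and model names.
summarizer = pipeline(
    "summarization",
    model="huggingface-course/mt5-finetuned-amazon-en-es",
)

review = (
    "Nothing special at all about this product. The notebook is too small "
    "and the cover is so stiff that it is hard to write in."
)
print(summarizer(review)[0]["summary_text"])
```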
