Vumichien

Average Model Cost: $0.0000

Number of Runs: 3,504

Models by this creator

whisper-medium-jp

This model is a fine-tuned version of openai/whisper-medium on the common_voice_11_0 dataset. It achieves the following results on the evaluation set: Loss 0.3029, WER 9.0355. Model description, intended uses and limitations, and training and evaluation data: more information needed.

Training hyperparameters:

- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP

Framework versions: Transformers 4.26.0.dev0, PyTorch 1.13.0+cu117, Datasets 2.7.1.dev0, Tokenizers 0.13.2.
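A minimal sketch of running Japanese transcription with this checkpoint through the transformers pipeline API; the model id vumichien/whisper-medium-jp is inferred from this listing, and sample.wav is a placeholder audio path.

```python
# Sketch: Japanese speech-to-text with the fine-tuned Whisper medium checkpoint.
from transformers import pipeline

# Model id inferred from this listing; not verified here.
asr = pipeline("automatic-speech-recognition", model="vumichien/whisper-medium-jp")

# "sample.wav" is a placeholder path to a Japanese audio clip.
result = asr("sample.wav")
print(result["text"])
```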

Cost: $-/run
Runs: 2.7K
Platform: Huggingface

whisper-large-v2-jp

This model is a fine-tuned version of openai/whisper-large-v2 on the mozilla-foundation/common_voice_11_0 dataset. It achieves the following results on the evaluation set: Loss 0.2352, WER 8.1166, CER 5.0032. Model description, intended uses and limitations, and training and evaluation data: more information needed.

Training hyperparameters:

- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP

Framework versions: Transformers 4.26.0.dev0, PyTorch 1.13.0+cu117, Datasets 2.7.1.dev0, Tokenizers 0.13.2.
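The WER and CER figures above appear to be percentages; a hedged sketch of computing the same style of metrics with the evaluate library, using illustrative placeholder strings rather than real model output:

```python
# Sketch: computing WER/CER in the style of the metrics reported above.
# The prediction/reference pairs are illustrative placeholders.
import evaluate

wer = evaluate.load("wer")
cer = evaluate.load("cer")

predictions = ["this is a transcript"]
references = ["this is the transcript"]

print("WER:", 100 * wer.compute(predictions=predictions, references=references))
print("CER:", 100 * cer.compute(predictions=predictions, references=references))
```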

Cost: $-/run
Runs: 157
Platform: Huggingface

tiny-albert

This model is a fine-tuned version of hf-internal-testing/tiny-albert on an unknown dataset; no evaluation results are reported. Model description, intended uses and limitations, and training and evaluation data: more information needed.

Training hyperparameters:

- optimizer: None
- training_precision: float32

Framework versions: Transformers 4.18.0, TensorFlow 2.8.0, Tokenizers 0.12.1.
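Since the card reports a TensorFlow build and no task head or metrics, a sketch can only demonstrate loading the checkpoint and inspecting hidden states; the model id vumichien/tiny-albert is assumed from this listing.

```python
# Sketch: loading the TensorFlow checkpoint and running a forward pass.
# Model id "vumichien/tiny-albert" is assumed from the listing above.
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("vumichien/tiny-albert")
model = TFAutoModel.from_pretrained("vumichien/tiny-albert")

inputs = tokenizer("Hello world", return_tensors="tf")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence, hidden)
```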

Cost: $-/run
Runs: 49
Platform: Huggingface

mobilebert-uncased-squad-v2

This model (tf-mobilebert-uncased-squad-v2) is a fine-tuned version of csarron/mobilebert-uncased-squad-v2 on an unknown dataset; no evaluation results are reported. Model description, intended uses and limitations, and training and evaluation data: more information needed.

Training hyperparameters:

- optimizer: None
- training_precision: float32

Framework versions: Transformers 4.17.0, TensorFlow 2.8.0, Tokenizers 0.11.6.
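Because the base checkpoint is a SQuAD v2 fine-tune, extractive question answering is the natural use; a minimal sketch, assuming the model id vumichien/mobilebert-uncased-squad-v2 from this listing and an invented question/context pair:

```python
# Sketch: extractive question answering with the SQuAD v2 MobileBERT fine-tune.
# Model id "vumichien/mobilebert-uncased-squad-v2" is assumed from the listing.
from transformers import pipeline

qa = pipeline("question-answering", model="vumichien/mobilebert-uncased-squad-v2")

answer = qa(
    question="What task is the model fine-tuned for?",
    context="MobileBERT here is fine-tuned for extractive QA on SQuAD v2.",
)
print(answer["answer"], answer["score"])
```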

Cost: $-/run
Runs: 33
Platform: Huggingface

trillsson3-ft-keyword-spotting-13

This model is a fine-tuned version of vumichien/nonsemantic-speech-trillsson3 on the superb dataset. It achieves the following results on the evaluation set: Loss 0.3093, Accuracy 0.9153. Model description, intended uses and limitations, and training and evaluation data: more information needed.

Training hyperparameters:

- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20.0
- mixed_precision_training: Native AMP

Framework versions: Transformers 4.23.0.dev0, PyTorch 1.12.1+cu113, Datasets 2.6.1, Tokenizers 0.13.1.
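Keyword spotting is an audio-classification task, so a hedged sketch would route the model through that pipeline; the model id vumichien/trillsson3-ft-keyword-spotting-13 is assumed from this listing, keyword.wav is a placeholder clip, and a custom TRILLsson architecture may additionally require trust_remote_code=True.

```python
# Sketch: keyword spotting with the TRILLsson3 fine-tune on SUPERB.
# Model id assumed from the listing; "keyword.wav" is a placeholder clip.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="vumichien/trillsson3-ft-keyword-spotting-13",
)

# Print each candidate keyword label with its confidence score.
for pred in classifier("keyword.wav"):
    print(pred["label"], round(pred["score"], 3))
```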

Cost: $-/run
Runs: 32
Platform: Huggingface
