Arampacha
Rank:
Average Model Cost: $0.0000
Number of Runs: 6,776
Models by this creator
roberta-tiny
roberta-tiny is a small-scale language model based on the RoBERTa architecture. It is designed for natural language processing tasks such as text classification, text generation, and information retrieval, and is useful for applications that need compact language understanding and generation capabilities.
$-/run
6.4K
Huggingface
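As a rough illustration of the language-understanding use the description mentions, here is a minimal sketch. It assumes the checkpoint is published on the Hugging Face Hub as arampacha/roberta-tiny and includes a masked-language-modeling head; the repo id, task head, and example sentence are assumptions, not stated in the card.

```python
from transformers import pipeline

# Fill-mask with the (assumed) arampacha/roberta-tiny checkpoint.
fill = pipeline("fill-mask", model="arampacha/roberta-tiny")  # assumed repo id

# RoBERTa-style models use "<mask>" as the mask token.
for candidate in fill("The weather today is <mask>."):
    print(f'{candidate["score"]:.3f}  {candidate["token_str"]}')
```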
wav2vec2-xls-r-1b-ka
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the /WORKSPACE/DATA/KA/NOIZY_STUDENT_2/ - KA dataset. It achieves the following results on the evaluation set:
Loss: 0.1022
WER: 0.1527
CER: 0.0221
Model description, intended uses & limitations, and training and evaluation data: more information needed.
Training hyperparameters:
learning_rate: 7e-05
train_batch_size: 16
eval_batch_size: 64
seed: 42
gradient_accumulation_steps: 8
total_train_batch_size: 128
optimizer: Adam with betas=(0.9, 0.98) and epsilon=1e-08
lr_scheduler_type: cosine
lr_scheduler_warmup_ratio: 0.1
training_steps: 4000
mixed_precision_training: Native AMP
Framework versions: Transformers 4.17.0.dev0, PyTorch 1.10.2, Datasets 1.18.4.dev0, Tokenizers 0.11.0
$-/run
90
Huggingface
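The hyperparameters listed above can be expressed with the transformers TrainingArguments API. The sketch below is a hedged reconstruction, not the author's actual training script; the output_dir and anything not listed in the card are assumptions.

```python
from transformers import TrainingArguments

# Mapping the card's hyperparameters onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-1b-ka",   # assumed output directory
    learning_rate=7e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=8,       # 16 * 8 = 128 effective train batch size
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_steps=4000,
    fp16=True,                           # "Native AMP" mixed-precision training
)
```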
wav2vec2-large-xlsr-czech
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Czech using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16 kHz.
Usage: the model can be used directly, without a language model (see the sketch after this entry).
Evaluation: the model can be evaluated on the Czech test data of Common Voice. Test result: 24.56.
Training: the Common Voice train and validation sets were used. The script used for training will be available here soon.
$-/run
66
Huggingface
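A minimal transcription sketch for this checkpoint, assuming it is published on the Hub as arampacha/wav2vec2-large-xlsr-czech; the repo id and the sample file name are assumptions. The input is resampled to 16 kHz as the card requires.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "arampacha/wav2vec2-large-xlsr-czech"  # assumed repo id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a Czech audio clip and resample it to the required 16 kHz.
speech, sample_rate = torchaudio.load("sample_cs.wav")  # hypothetical file
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze(0)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding, i.e. direct use without a language model.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```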
gpt-neo-therapist-small
$-/run
48
Huggingface
whisper-large-uk-2
This model is a fine-tuned version of openai/whisper-large-v2 on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:
eval_loss: 0.2527
eval_wer: 10.0226
eval_runtime: 9610.7996
eval_samples_per_second: 0.747
eval_steps_per_second: 0.023
epoch: 1.8
step: 1098
Model description, intended uses & limitations, and training and evaluation data: more information needed.
Training hyperparameters:
learning_rate: 5e-06
train_batch_size: 32
eval_batch_size: 32
seed: 42
gradient_accumulation_steps: 2
total_train_batch_size: 64
optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
lr_scheduler_type: linear
training_steps: 1500
mixed_precision_training: Native AMP
Framework versions: Transformers 4.26.0.dev0, PyTorch 1.13.1+cu117, Datasets 2.8.0, Tokenizers 0.13.2
$-/run
39
Huggingface
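A minimal inference sketch using the transformers ASR pipeline, assuming the checkpoint is published on the Hub as arampacha/whisper-large-uk-2; the repo id and the audio file name are assumptions.

```python
from transformers import pipeline

# Whisper processes audio in 30-second windows; chunking lets the pipeline
# handle longer recordings.
asr = pipeline(
    "automatic-speech-recognition",
    model="arampacha/whisper-large-uk-2",  # assumed repo id
    chunk_length_s=30,
)

# Transcribe a Ukrainian recording (any format ffmpeg can decode).
result = asr("sample_uk.wav")  # hypothetical file
print(result["text"])
```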
wav2vec2-xls-r-1b-uk
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the /WORKSPACE/DATA/UK/COMPOSED_DATASET/ - NA dataset. It achieves the following results on the evaluation set:
Loss: 0.1092
WER: 0.1752
CER: 0.0323
Model description, intended uses & limitations, and training and evaluation data: more information needed.
Training hyperparameters:
learning_rate: 5e-05
train_batch_size: 16
eval_batch_size: 64
seed: 42
gradient_accumulation_steps: 8
total_train_batch_size: 128
optimizer: Adam with betas=(0.9, 0.98) and epsilon=1e-08
lr_scheduler_type: cosine
lr_scheduler_warmup_ratio: 0.1
training_steps: 12000
mixed_precision_training: Native AMP
Framework versions: Transformers 4.17.0.dev0, PyTorch 1.10.2, Datasets 1.18.4.dev0, Tokenizers 0.11.0
$-/run
35
Huggingface
DialoGPT-medium-simpsons
This is a version of DialoGPT-medium fine-tuned on The Simpsons scripts; a minimal chat sketch follows this entry.
$-/run
29
Huggingface
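A minimal single-turn chat sketch, assuming the checkpoint is published on the Hub as arampacha/DialoGPT-medium-simpsons; the repo id and the prompt are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arampacha/DialoGPT-medium-simpsons"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# DialoGPT expects each dialogue turn to end with the EOS token.
prompt = "Why did you eat my homework?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")

reply_ids = model.generate(
    input_ids,
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens (the model's reply).
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```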
clip-rsicd-v5
$-/run
22
Huggingface
clip-test
This model is a fine-tuned version of openai/clip-vit-base-patch32 on the arampacha/rsicd dataset. It achieves the following results on the evaluation set:
Loss: 4.2656
Model description, intended uses & limitations, and training and evaluation data: more information needed.
Training hyperparameters:
learning_rate: 5e-05
train_batch_size: 64
eval_batch_size: 64
seed: 42
optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
lr_scheduler_type: linear
lr_scheduler_warmup_ratio: 0.1
num_epochs: 3.0
Framework versions: Transformers 4.18.0, PyTorch 1.10.0+cu111, Datasets 2.0.0, Tokenizers 0.11.6
$-/run
21
Huggingface
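A minimal image-text matching sketch, assuming the fine-tuned checkpoint is published on the Hub as arampacha/clip-test; the repo id, image path, and captions are assumptions (the captions echo the remote-sensing domain of arampacha/rsicd).

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "arampacha/clip-test"  # assumed repo id
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("scene.jpg")  # hypothetical aerial/remote-sensing image
captions = [
    "an airport with several planes parked on the tarmac",
    "a dense residential area with many small buildings",
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher probability means the caption matches the image better.
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for caption, p in zip(captions, probs.tolist()):
    print(f"{p:.3f}  {caption}")
```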
wav2vec2-xls-r-300m-ka
$-/run
11
Huggingface