Akhisreelibra
Rank:
Average Model Cost: $0.0000
Number of Runs: 8,677
Models by this creator
bert-malayalam-pos-tagger
The bert-malayalam-pos-tagger model is a fine-tuned version of bert-base-multilingual-uncased, trained on a dataset of part-of-speech-tagged Malayalam sentences. On the evaluation set it achieves a loss of 0.4383, precision of 0.7380, recall of 0.7767, an F1 score of 0.7569, and accuracy of 0.8552. The model card leaves the model description, intended uses and limitations, and the training and evaluation data undocumented.

Training hyperparameters:
- learning rate: 2e-05
- training batch size: 8
- evaluation batch size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- learning rate scheduler: linear
- epochs: 3

Framework versions: Transformers 4.18.0, PyTorch 1.11.0, Datasets 2.1.0, Tokenizers 0.12.1.
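Since the page does not show usage, here is a minimal sketch of running the tagger with the Transformers token-classification pipeline. The Hub model ID below is an assumption derived from the creator handle; verify the exact ID on the Hugging Face Hub before running.

# Minimal sketch: load the POS tagger via the Transformers pipeline.
# The model ID is an assumption based on the creator handle
# "Akhisreelibra"; check the Hugging Face Hub for the exact ID.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="akhisreelibra/bert-malayalam-pos-tagger",
    aggregation_strategy="simple",  # merge word-piece tokens back into words
)

# Tag a Malayalam sentence; each entry carries the predicted tag,
# a confidence score, and character offsets into the input.
for entry in tagger("ഞാൻ മലയാളം പഠിക്കുന്നു"):
    print(entry["word"], entry["entity_group"], round(entry["score"], 3))

With aggregation_strategy="simple", sub-word pieces produced by the multilingual BERT tokenizer are grouped into whole words, so each printed line corresponds to one word and its predicted part-of-speech label.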
$-/run
8.7K
Huggingface
xlmR-finetuned-pos
$-/run
3
Huggingface
mt5-small-finetuned-one-india
Platform did not provide a description for this model.
$-/run
0
Huggingface
malayalam-summariser
$-/run
0
Huggingface
bert-finetuned-ner
$-/run
0
Huggingface
mt5-small-finetuned-amazon-en-es
Platform did not provide a description for this model.
$-/run
0
Huggingface
t5-small-finetuned-xsum
$-/run
0
Huggingface