Padmajabfrl
Rank: Average
Model Cost: $0.0000
Number of Runs: 86,391
Models by this creator
Gender-Classification
The Gender-Classification model is a fine-tuned version of distilbert-base-uncased. It reportedly achieves 100% accuracy on its evaluation set, though the card provides no information about the dataset or training procedure. The model was built with Transformers 4.25.1, PyTorch 1.13.0+cu116, Datasets 2.8.0, and Tokenizers 0.13.2. A hedged inference sketch follows this entry.
$-/run
86.2K
Huggingface
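As a rough illustration of how a classifier like this might be queried, the sketch below loads it through the Transformers pipeline API. The Hub model ID "Padmajabfrl/Gender-Classification" and the example input are assumptions inferred from this listing, not confirmed by it.

# Minimal sketch, assuming the model is published on the Hugging Face Hub
# under the ID "Padmajabfrl/Gender-Classification" (inferred from this page).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Padmajabfrl/Gender-Classification",
)

# Hypothetical input: the card does not document the expected input format,
# but text-classification pipelines accept plain strings.
print(classifier("Alexandra"))
# Output shape: [{'label': ..., 'score': ...}]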
Religion-Classification
$-/run
48
Huggingface
Ethnicity-Model
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set:
Loss: 0.0596
Accuracy: 0.9857
Model description: More information needed
Intended uses & limitations: More information needed
Training and evaluation data: More information needed
Training hyperparameters (see the training sketch after this entry):
learning_rate: 2e-05
train_batch_size: 16
eval_batch_size: 16
seed: 42
optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
lr_scheduler_type: linear
num_epochs: 5
Training results: (table not captured in this listing)
Framework versions: Transformers 4.14.1, PyTorch 1.12.0, Datasets 2.9.0, Tokenizers 0.10.3
$-/run
36
Huggingface
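The hyperparameters above are enough to reconstruct an approximate training setup. The sketch below is an assumption-laden reconstruction, not the author's actual script: the dataset, label set, and label count are undocumented, so tiny placeholder data and num_labels=4 are used purely for illustration.

from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=4,  # assumption: the card never states how many classes exist
)

# Placeholder data standing in for the undocumented training/evaluation sets.
raw = Dataset.from_dict({"text": ["example a", "example b"], "label": [0, 1]})
ds = raw.map(
    lambda b: tokenizer(b["text"], truncation=True, padding=True),
    batched=True,
)

# Hyperparameters copied from the card above; everything else is a default.
args = TrainingArguments(
    output_dir="ethnicity-model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    adam_beta1=0.9,      # Adam betas=(0.9, 0.999) per the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,   # epsilon=1e-08 per the card
)

trainer = Trainer(model=model, args=args, train_dataset=ds, eval_dataset=ds)
trainer.train()
print(trainer.evaluate())  # reports eval_loss; accuracy needs a compute_metrics fn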
distilbert-base-uncased-finetuned_gender_classification
Platform did not provide a description for this model.
$-/run
19
Huggingface
Ethnicity-Classification
$-/run
16
Huggingface
Religion-Classification-Custom-Model
$-/run
12
Huggingface
demo
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set:
Loss: 0.0000
Accuracy: 1.0
Model description: More information needed
Intended uses & limitations: More information needed
Training and evaluation data: More information needed
Training hyperparameters:
learning_rate: 2e-05
train_batch_size: 16
eval_batch_size: 16
seed: 42
optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
lr_scheduler_type: linear
num_epochs: 2
Training results: (table not captured in this listing)
Framework versions: Transformers 4.25.1, PyTorch 1.13.0+cu116, Datasets 2.8.0, Tokenizers 0.13.2
$-/run
11
Huggingface