Models by this creator
The vit-face-expression model is an image classification model designed to identify facial expressions in images. It uses the Vision Transformer (ViT) architecture, which has proven highly effective for image classification tasks. Trained on a large dataset of images with labeled facial expressions, the model classifies images according to the emotions shown on people's faces. This is useful for applications such as emotion recognition in social media posts, sentiment analysis in market research, and facial expression analysis in human-computer interaction.
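In practice such a classifier is usually called through the Hugging Face `pipeline("image-classification", ...)` API, which handles the post-processing. As a minimal sketch of that post-processing step alone: the ViT head emits one logit per class, which is softmaxed into probabilities and mapped to a label. The seven-emotion label set below is an assumption for illustration, not necessarily the model's exact label order.

```python
import math

# Hypothetical label set for illustration; the real model's id2label
# mapping may differ in names and order.
EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def softmax(logits):
    """Convert raw logits to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_expression(logits):
    """Return the highest-probability emotion label and its probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return EMOTIONS[best], probs[best]

# Example: a logit vector where the "happy" class dominates.
label, prob = top_expression([0.1, 0.0, 0.2, 3.0, 0.5, 0.1, 0.0])
```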
vit-pneumonia

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the chest-xray-classification dataset. It achieves the following results on the evaluation set:
- Loss: 0.1086
- Accuracy: 0.9768

Model description
More information needed

Intended uses & limitations
More information needed

Training and evaluation data
More information needed

Training procedure

Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 10

Training results

Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
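The cosine schedule with a 0.25 warmup ratio listed above means the learning rate ramps linearly from 0 to 0.0002 over the first quarter of training steps, then decays to 0 along a cosine curve. In a Transformers training run this is what `get_cosine_schedule_with_warmup` produces; the standalone helper below is a hypothetical sketch of that shape, with the total step count chosen arbitrarily for illustration.

```python
import math

def lr_at(step, total_steps, base_lr=2e-4, warmup_ratio=0.25):
    """Learning rate at a given step: linear warmup, then cosine decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Linear ramp from 0 up to base_lr over the warmup phase.
        return base_lr * step / warmup_steps
    # Cosine decay from base_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# Illustrative total of 1000 steps (the card does not state the real count).
peak = lr_at(250, 1000)   # end of warmup: back at base_lr
final = lr_at(1000, 1000) # end of training: decayed to 0
```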