Edumunozsala

Average Model Cost: $0.0000

Number of Runs: 3,262

Models by this creator

roberta_bne_sentiment_analysis_es

No description available.

$-/run

1.5K runs

Huggingface

vit_base-224-in21k-ft-cifar100

This model was trained using Amazon SageMaker and the Hugging Face Deep Learning container. The base model is the Vision Transformer (base-sized model), a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.

The CIFAR-10 and CIFAR-100 datasets are labeled subsets of the 80 million tiny images dataset, collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class: 50,000 training images and 10,000 test images. This dataset, CIFAR-100, is just like CIFAR-10 except it has 100 classes containing 600 images each, with 500 training images and 100 testing images per class. The 100 classes in CIFAR-100 are grouped into 20 superclasses; each image comes with a "fine" label (the class to which it belongs) and a "coarse" label (the superclass to which it belongs).

This model is intended for image classification. Created by Eduardo Muñoz / @edumunozsala.
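A minimal inference sketch, assuming the checkpoint is published on the Hugging Face Hub as edumunozsala/vit_base-224-in21k-ft-cifar100 (the creator and model name listed above):

```python
# Minimal sketch: image classification with the fine-tuned ViT.
# The Hub id below is assumed from the creator/model names on this page.
from transformers import ViTImageProcessor, ViTForImageClassification
from PIL import Image
import torch

model_id = "edumunozsala/vit_base-224-in21k-ft-cifar100"  # assumed Hub id
processor = ViTImageProcessor.from_pretrained(model_id)
model = ViTForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")  # any RGB image; the processor resizes to 224x224
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])  # one of the 100 CIFAR-100 fine labels
```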

$-/run

1.5K runs

Huggingface

bertin_base_sentiment_analysis_es

A fine-tuned model for sentiment analysis in Spanish. This model was trained using Amazon SageMaker and the Hugging Face Deep Learning container. The base model is BERTIN base, a RoBERTa-base model pre-trained on the Spanish portion of mC4 using Flax, trained by the BERTIN Project.

Reference: Javier De la Rosa, Eduardo G. Ponferrada, Manu Romero, Paulo Villegas, Pablo González de Prado Salas, and María Grandury. "BERTIN: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling." Procesamiento del Lenguaje Natural, vol. 68, 2022. http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6403

Dataset: a collection of about 50,000 movie reviews in Spanish. The dataset is balanced and provides every review in English and in Spanish, with the label in both languages. Train dataset: 42,500; validation dataset: 3,750; test dataset: 3,750.

Intended uses and limitations: this model is intended for sentiment analysis of Spanish corpora. It was fine-tuned specifically on movie reviews, but it can be applied to other kinds of reviews.

Evaluation results: Accuracy = 0.8989, F1 score = 0.8989, Precision = 0.8771, Recall = 0.9218.

Created by Eduardo Muñoz / @edumunozsala. A usage sketch follows below.
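A minimal usage sketch for sentiment analysis, assuming the checkpoint is published as edumunozsala/bertin_base_sentiment_analysis_es; the label names returned depend on the model's config:

```python
# Minimal sketch: Spanish sentiment analysis with the fine-tuned BERTIN model.
# The Hub id below is assumed from the creator/model names on this page.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="edumunozsala/bertin_base_sentiment_analysis_es",
)

print(classifier("La película fue fantástica, la recomiendo totalmente."))
# e.g. [{'label': ..., 'score': ...}] -- labels depend on the model config
```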

$-/run

27 runs

Huggingface

distilroberta-sentence-transformer-test

edumunozsala/distilroberta-sentence-transformer-test is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. With the sentence-transformers library installed, usage takes only a few lines; without it, you pass your input through the transformer model and then apply the right pooling operation on top of the contextualized word embeddings (see the sketch below). For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net. The model was trained with a torch.utils.data.dataloader.DataLoader of length 1125 and the sentence_transformers.losses.MultipleNegativesRankingLoss loss.
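A minimal sketch of both usage paths, using the model id named above; mean pooling is assumed as the pooling operation, which is the usual default for sentence-transformers models:

```python
# Path 1: with the sentence-transformers library.
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer("edumunozsala/distilroberta-sentence-transformer-test")
embeddings = model.encode(sentences)  # numpy array of shape (2, 768)

# Path 2: with plain transformers, pooling the token embeddings yourself.
import torch
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    # Average token embeddings, ignoring padding positions.
    # Mean pooling is assumed here; check the card's pooling config.
    token_embeddings = model_output[0]
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

tokenizer = AutoTokenizer.from_pretrained(
    "edumunozsala/distilroberta-sentence-transformer-test")
hf_model = AutoModel.from_pretrained(
    "edumunozsala/distilroberta-sentence-transformer-test")

encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    output = hf_model(**encoded)
sentence_embeddings = mean_pooling(output, encoded["attention_mask"])  # (2, 768)
```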

$-/run

14 runs

Huggingface

vit_base-224-in21k-ft-cifar10

This model was trained using Amazon SageMaker and the Hugging Face Deep Learning container. The base model is the Vision Transformer (base-sized model), a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.

The CIFAR-10 and CIFAR-100 datasets are labeled subsets of the 80 million tiny images dataset, collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class: 50,000 training images and 10,000 test images. The dataset is divided into five training batches and one test batch, each with 10,000 images. The test batch contains exactly 1,000 randomly selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another; between them, the training batches contain exactly 5,000 images from each class.

This model is intended for image classification. Created by Eduardo Muñoz / @edumunozsala.
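A minimal sketch using the high-level pipeline API, assuming the checkpoint is published as edumunozsala/vit_base-224-in21k-ft-cifar10:

```python
# Minimal sketch: CIFAR-10 classification via the transformers pipeline API.
# The Hub id below is assumed from the creator/model names on this page.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="edumunozsala/vit_base-224-in21k-ft-cifar10",
)

predictions = classifier("example.jpg")  # local path, URL, or PIL image
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")  # scores over the 10 CIFAR-10 classes
```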

$-/run

9 runs

Huggingface
