sshleifer

Rank:

Average Model Cost: $0.0000

Number of Runs: 1,206,112

Models by this creator

distilbart-cnn-12-6


The distilbart-cnn-12-6 model is a pre-trained language model for abstractive text summarization. It is a distilled version of BART fine-tuned on the CNN/DailyMail dataset: given a piece of text, it generates a concise summary. It can be used to automate the summarization of news articles, blog posts, or any other textual content.

$-/run · 731.9K runs · Huggingface
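A summarization model like this is typically loaded through the Hugging Face `transformers` pipeline API. The sketch below shows one plausible way to call the checkpoint; the example article text is illustrative, and the first run downloads the model weights (roughly 1 GB).

```python
from transformers import pipeline

# Load the distilled BART summarizer; weights are fetched on first use.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and was the tallest man-made structure in the world for 41 "
    "years until the Chrysler Building in New York was finished in 1930."
)

# max_length / min_length bound the generated summary in tokens.
result = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```

The pipeline returns a list with one dict per input, each containing a `summary_text` key.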

tiny-distilbert-base-cased-distilled-squad


The tiny-distilbert-base-cased-distilled-squad model is a question-answering model based on the DistilBERT architecture, a smaller and faster version of BERT created by distilling knowledge from the larger model. It has been fine-tuned on SQuAD (the Stanford Question Answering Dataset), in which a model answers questions about a given context passage. Given a passage and a question, the model extracts the span of the passage that answers the question.

$-/run · 31.7K runs · Huggingface
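Extractive question answering with this checkpoint can be sketched with the `transformers` question-answering pipeline. Note that "tiny" checkpoints are only a few megabytes, so they load quickly, but their answer quality is low; they are mainly useful for exercising the QA code path end to end. The question and context below are made up for illustration.

```python
from transformers import pipeline

# Load the tiny distilled SQuAD model (small, fast, low-quality answers).
qa = pipeline(
    "question-answering",
    model="sshleifer/tiny-distilbert-base-cased-distilled-squad",
)

context = "Hugging Face is a company based in New York City."
result = qa(question="Where is Hugging Face based?", context=context)

# The result dict holds the extracted span plus its score and offsets.
print(result)
```

The output is a dict with `answer`, `score`, `start`, and `end` keys, where `start`/`end` are character offsets into the context.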

bart-tiny-random


The bart-tiny-random model is a miniature version of BART (Bidirectional and Auto-Regressive Transformers), a sequence-to-sequence architecture trained to generate human-like text by predicting the next tokens given an input sequence. The tiny-random variant is far smaller than the full model and its weights are randomly initialized, so its output is effectively random; it is chiefly useful for quickly testing and debugging code built on the BART architecture.

$-/run · 15.8K runs · Huggingface
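Because BART is a sequence-to-sequence model, a plausible way to exercise this checkpoint is the `text2text-generation` pipeline from `transformers`, sketched below. The output will be gibberish by design, since the weights are random; the point is to smoke-test the generation code path cheaply.

```python
from transformers import pipeline

# bart-tiny-random is randomly initialized: output is meaningless,
# but loading and generation run in seconds, which suits CI tests.
generator = pipeline("text2text-generation", model="sshleifer/bart-tiny-random")

out = generator("Hello world", max_length=20)
print(out[0]["generated_text"])
```

This pattern is common for integration tests: swap the tiny checkpoint for a full model in production without changing the surrounding code.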

tiny-dbmdz-bert-large-cased-finetuned-conll03-english


The tiny-dbmdz-bert-large-cased-finetuned-conll03-english model is a scaled-down version of BERT (Bidirectional Encoder Representations from Transformers) fine-tuned for named entity recognition (NER) on the CoNLL-2003 dataset. It identifies and classifies named entities such as persons, locations, and organizations in text, which supports tasks like information extraction, information retrieval, and question answering.

$-/run · 12.7K runs · Huggingface
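Token classification with this checkpoint can be sketched with the `transformers` NER pipeline. As with the other tiny checkpoints, the predicted labels will be arbitrary because the weights are small and essentially untrained, but the call pattern is identical to that of a full CoNLL-2003 NER model; the sentence below is illustrative.

```python
from transformers import pipeline

# aggregation_strategy="simple" merges sub-word tokens into whole entities.
ner = pipeline(
    "ner",
    model="sshleifer/tiny-dbmdz-bert-large-cased-finetuned-conll03-english",
    aggregation_strategy="simple",
)

entities = ner("Angela Merkel visited Paris with Siemens executives.")
for ent in entities:
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))
```

Each element of the returned list describes one detected entity, with its label (`entity_group`), surface form, confidence score, and character offsets.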

Similar creators