Mrm8488

Rank:

Average Model Cost: $0.0000

Number of Runs: 1,206,908

Models by this creator

t5-base-finetuned-common_gen

The t5-base-finetuned-common_gen model is Google's T5 model fine-tuned on the CommonGen dataset. CommonGen tests a machine's ability to generate coherent sentences describing everyday scenarios from a given set of common concepts, a task that requires combining relational reasoning with compositional generalization. Given a concept set as input, the fine-tuned model generates a plausible sentence that connects those concepts. The underlying T5 architecture has achieved state-of-the-art results on a wide range of language tasks such as summarization, question answering, and text classification.
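
Below is a minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub as mrm8488/t5-base-finetuned-common_gen and loads with the standard transformers seq2seq API; the input format shown (a space-separated list of concepts) and the generation settings are illustrative assumptions, not taken from the model card.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_id = "mrm8488/t5-base-finetuned-common_gen"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
concepts = "tree plant ground hole dig"  # concept set, space-separated (assumed input format)
inputs = tokenizer(concepts, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=32, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))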

$-/run

562.7K

Huggingface

t5-base-finetuned-summarize-news

The T5-base model has been fine-tuned on the News Summary dataset for abstractive summarization: given the text of a news article, it generates a concise summary. The training data consists of 4,515 examples collected from various news sources, and fine-tuning adapted an existing training script and ran it for additional epochs. The resulting model produces summaries that capture the key information and important details of a news article.
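
A minimal sketch of generating a summary, assuming the checkpoint is available on the Hugging Face Hub as mrm8488/t5-base-finetuned-summarize-news; whether the input needs a "summarize:"-style prefix, and the truncation and length settings below, are assumptions rather than details from the model card.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_id = "mrm8488/t5-base-finetuned-summarize-news"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
article = "Full text of a news article goes here ..."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=150, min_length=30, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))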

$-/run

242.3K

Huggingface

distilroberta-finetuned-financial-news-sentiment-analysis

The distilroberta-finetuned-financial-news-sentiment-analysis model is a text classification model that has been fine-tuned specifically for financial news sentiment analysis. It is based on the DistilRoBERTa architecture, which is a smaller and faster version of the RoBERTa model. This model is designed to analyze the sentiment of financial news articles and classify them as positive, negative, or neutral. It can be used to gain insights into market sentiment and make informed investment decisions.
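
A minimal sketch of classifying a headline, assuming the checkpoint is available on the Hugging Face Hub as mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis and works with the standard text-classification pipeline; the exact label names returned come from the model's config.

from transformers import pipeline
classifier = pipeline(
    "text-classification",
    model="mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis",  # assumed Hub id
)
headline = "Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in the corresponding period."
print(classifier(headline))  # e.g. [{'label': 'positive', 'score': ...}] per the labels described above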

$-/run

170.8K

Huggingface

codebert-base-finetuned-detect-insecure-code

CodeBERT is a language model pre-trained on both programming and natural language; this checkpoint has been fine-tuned to detect insecure code, meaning code that may contain vulnerabilities or other security risks. The model classifies code snippets according to whether they appear to contain insecure code, which can help surface potential security issues during development and prevent security breaches.
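
A minimal sketch of scoring a snippet, assuming the checkpoint is available on the Hugging Face Hub as mrm8488/codebert-base-finetuned-detect-insecure-code and exposes a standard sequence-classification head; the label names it returns (possibly generic ones like LABEL_0 / LABEL_1) depend on the model's config.

from transformers import pipeline
detector = pipeline(
    "text-classification",
    model="mrm8488/codebert-base-finetuned-detect-insecure-code",  # assumed Hub id
)
snippet = "strcpy(buffer, user_input);  /* unbounded copy of user-controlled data */"
print(detector(snippet))  # label indicating whether the snippet looks insecure, with a confidence score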

$-/run

109.0K

Huggingface

bert-spanish-cased-finetuned-ner

The bert-spanish-cased-finetuned-ner model is a token classification model that has been fine-tuned for Named Entity Recognition (NER) tasks in the Spanish language. It is based on the BERT architecture and is trained to identify and classify named entities in Spanish text, such as person names, organization names, and locations. This model can be used to extract named entities from text in Spanish and is particularly useful for information retrieval and text mining tasks.
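
A minimal sketch of extracting entities from Spanish text, assuming the checkpoint is available on the Hugging Face Hub as mrm8488/bert-spanish-cased-finetuned-ner and works with the standard token-classification pipeline:

from transformers import pipeline
ner = pipeline(
    "ner",
    model="mrm8488/bert-spanish-cased-finetuned-ner",  # assumed Hub id
    aggregation_strategy="simple",  # merge word-piece predictions into whole entities
)
print(ner("Gabriel García Márquez nació en Aracataca, Colombia."))
# expected: person/location-style entity groups with scores and character offsets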

$-/run

13.4K

Huggingface

bert-small-finetuned-squadv2

The bert-small-finetuned-squadv2 model is a small BERT variant fine-tuned on the SQuAD v2.0 dataset for extractive question answering: given a question and a paragraph of context, it predicts the most likely answer span by scoring candidate start and end positions within the paragraph. Because SQuAD v2.0 also contains unanswerable questions, the model is trained to handle cases where no answer is present in the context.
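
A minimal sketch of asking a question against a paragraph, assuming the checkpoint is available on the Hugging Face Hub as mrm8488/bert-small-finetuned-squadv2 and works with the standard question-answering pipeline:

from transformers import pipeline
qa = pipeline("question-answering", model="mrm8488/bert-small-finetuned-squadv2")  # assumed Hub id
result = qa(
    question="How many unanswerable questions does SQuAD v2.0 add?",
    context=(
        "SQuAD v2.0 combines the questions in SQuAD 1.1 with over 50,000 "
        "unanswerable questions written adversarially by crowdworkers."
    ),
)
print(result)  # answer text, confidence score, and character offsets within the context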

$-/run

12.7K

Huggingface

Similar creators