Mrm8488
Average Model Cost: $0.0000
Number of Runs: 1,206,908
Models by this creator
t5-base-finetuned-common_gen
The t5-base-finetuned-common_gen model is Google's T5 fine-tuned on the CommonGen dataset, a benchmark that tests whether a machine can generate a coherent sentence describing an everyday scenario from a given set of common concepts. The task requires generative commonsense reasoning, combining relational reasoning over the concepts with compositional generalization. While the underlying T5 architecture achieves state-of-the-art results on language understanding tasks such as summarization, question answering, and text classification, this checkpoint specializes it for concept-to-text generation; a usage sketch follows this entry.
$-/run · 562.7K runs · Huggingface
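Below is a minimal usage sketch with the Hugging Face transformers library. It assumes the checkpoint is published on the Hub as mrm8488/t5-base-finetuned-common_gen and that the input is simply the list of concepts to connect; the prompt format is an assumption, so check the model card.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mrm8488/t5-base-finetuned-common_gen"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Concepts the generated sentence should weave together.
inputs = tokenizer("dog frisbee catch park", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))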
t5-base-finetuned-summarize-news
The T5-base model fine-tuned on the News Summary dataset for news summarization. Given a news article as input, it generates a concise summary that captures the article's key information and important details. The training data consists of 4,515 examples collected from various news sources, and fine-tuning was done by adapting an existing training script and training for additional epochs.
$-/run · 242.3K runs · Huggingface
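A summarization sketch via the transformers pipeline API, again assuming the mrm8488/t5-base-finetuned-summarize-news Hub id; the generation lengths are illustrative.

from transformers import pipeline

summarizer = pipeline("summarization", model="mrm8488/t5-base-finetuned-summarize-news")

article = ("The central bank raised interest rates by a quarter point on Wednesday, "
           "its third hike this year, citing persistent inflation in housing and services.")
print(summarizer(article, max_length=48, min_length=8)[0]["summary_text"])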
distilroberta-finetuned-financial-news-sentiment-analysis
The distilroberta-finetuned-financial-news-sentiment-analysis model is a text classification model that has been fine-tuned specifically for financial news sentiment analysis. It is based on the DistilRoBERTa architecture, which is a smaller and faster version of the RoBERTa model. This model is designed to analyze the sentiment of financial news articles and classify them as positive, negative, or neutral. It can be used to gain insights into market sentiment and make informed investment decisions.
$-/run · 170.8K runs · Huggingface
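A minimal classification sketch, assuming the mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis Hub id; the exact label names come from the checkpoint's config.

from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis",
)

headline = "Shares plunge after the company issues a profit warning"
print(classifier(headline))  # e.g. [{'label': 'negative', 'score': 0.98}]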
codebert-base-finetuned-detect-insecure-code
CodeBERT is a pre-trained language model for source code and natural language. This checkpoint has been fine-tuned to detect insecure code, that is, code that may contain vulnerabilities or other security risks. Given a code snippet, the model classifies whether it contains insecure code, which can help surface potential security issues during software development and prevent security breaches.
$-/run · 109.0K runs · Huggingface
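A sketch of scoring a snippet, assuming the mrm8488/codebert-base-finetuned-detect-insecure-code Hub id; the label names (insecure vs. not) depend on the checkpoint's config.

from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="mrm8488/codebert-base-finetuned-detect-insecure-code",
)

# Classic unbounded copy, a typical insecure-code pattern.
snippet = "strcpy(buffer, user_input);"
print(detector(snippet))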
t5-base-finetuned-question-generation-ap
The t5-base-finetuned-question-generation-ap model is based on the T5 architecture and has been fine-tuned for the task of question generation. Given a passage of text, the model generates relevant questions based on the content of the passage. This can be useful for various applications, such as creating question-answer pairs for training question answering models or generating quizzes.
$-/run · 55.8K runs · Huggingface
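A generation sketch, assuming the mrm8488/t5-base-finetuned-question-generation-ap Hub id and an "answer: ... context: ..." prompt format; the format is an assumption to verify against the model card.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mrm8488/t5-base-finetuned-question-generation-ap"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed prompt format: the target answer plus the passage it comes from.
text = "answer: Paris context: The Treaty of Paris was signed in Paris in 1951."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. a "Where...?" question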
t5-base-finetuned-span-sentiment-extraction
The t5-base-finetuned-span-sentiment-extraction model is a text-to-text model fine-tuned for span sentiment extraction: rather than summarizing the sentiment, it extracts the exact span of the input text that expresses a given sentiment.
$-/run · 20.5K runs · Huggingface
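A sketch, assuming the mrm8488/t5-base-finetuned-span-sentiment-extraction Hub id and a "question: <sentiment> context: <text>" prompt format, both assumptions to verify against the model card.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mrm8488/t5-base-finetuned-span-sentiment-extraction"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed format: the sentiment to locate, then the text to extract from.
text = "question: negative context: The food was great but the service was painfully slow."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. "painfully slow"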
bert-spanish-cased-finetuned-ner
The bert-spanish-cased-finetuned-ner model is a token classification model that has been fine-tuned for Named Entity Recognition (NER) tasks in the Spanish language. It is based on the BERT architecture and is trained to identify and classify named entities in Spanish text, such as person names, organization names, and locations. This model can be used to extract named entities from text in Spanish and is particularly useful for information retrieval and text mining tasks.
$-/run · 13.4K runs · Huggingface
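A token-classification sketch, assuming the mrm8488/bert-spanish-cased-finetuned-ner Hub id; aggregation_strategy="simple" merges word pieces back into whole entities.

from transformers import pipeline

ner = pipeline(
    "ner",
    model="mrm8488/bert-spanish-cased-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Gabriel García Márquez nació en Aracataca, Colombia."))
# e.g. PER: "Gabriel García Márquez", LOC: "Aracataca", LOC: "Colombia"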
bert-small-finetuned-squadv2
The bert-small-finetuned-squadv2 model is a small BERT variant fine-tuned on the SQuAD v2.0 dataset for extractive question answering. Given a question and a context paragraph, it extracts the most likely answer from the paragraph by predicting the start and end positions of the answer span. Because SQuAD v2.0 also includes unanswerable questions, the model must additionally learn to abstain when the context contains no answer; see the sketch below this entry.
$-/run · 12.7K runs · Huggingface
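A question-answering sketch, assuming the mrm8488/bert-small-finetuned-squadv2 Hub id; the same pipeline works unchanged for the spanbert-large-finetuned-squadv1 and bert-medium-finetuned-squadv2 entries below. Setting handle_impossible_answer=True lets the pipeline return an empty answer when the question is unanswerable, which SQuAD v2 models are trained to detect.

from transformers import pipeline

qa = pipeline("question-answering", model="mrm8488/bert-small-finetuned-squadv2")

result = qa(
    question="When was the treaty signed?",
    context="The Treaty of Paris was signed in 1951 after months of negotiation.",
    handle_impossible_answer=True,  # SQuAD v2 includes unanswerable questions
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': '1951'}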
spanbert-large-finetuned-squadv1
The spanbert-large-finetuned-squadv1 model is a pre-trained language model that has been fine-tuned on the SQuAD 1.1 dataset for question-answering tasks. It is based on the SpanBERT model developed by Facebook Research, which improves pre-training by representing and predicting spans of text. This model can be used to generate answers to questions based on a given context.
$-/run · 11.8K runs · Huggingface
bert-medium-finetuned-squadv2
The bert-medium-finetuned-squadv2 model is a medium-sized BERT (Bidirectional Encoder Representations from Transformers) variant fine-tuned on the SQuAD v2 dataset. Like the other SQuAD checkpoints above, it answers a question by predicting the start and end positions of the answer span within a given context paragraph.
$-/run · 7.7K runs · Huggingface
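To make the start/end-position mechanism concrete, here is a sketch that skips the pipeline and reads the span logits directly; it assumes the mrm8488/bert-medium-finetuned-squadv2 Hub id.

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "mrm8488/bert-medium-finetuned-squadv2"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Where was the treaty signed?"
context = "The Treaty of Paris was signed in Paris in 1951."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The QA head emits one logit per token for the answer's start and end positions.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))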