Prithivida
Rank:
Average Model Cost: $0.0000
Number of Runs: 547,667
Models by this creator
parrot_paraphraser_on_T5
parrot_paraphraser_on_T5 is a text-to-text generation model based on the T5 architecture and fine-tuned on a paraphrasing task: given an input text, it generates multiple paraphrased versions of it. Fine-tuning on a large dataset of paraphrase pairs improves the diversity and quality of the paraphrases it produces. The model can be used for applications such as text augmentation, data cleaning, and text synthesis; a usage sketch follows this entry.
$-/run
439.2K
Huggingface
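Below is a minimal sketch of calling the checkpoint with the Hugging Face transformers library. The "paraphrase: " input prefix follows the model card, while the sampling settings are illustrative assumptions rather than tuned values.

```python
# Minimal sketch: generating paraphrases with the transformers library.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("prithivida/parrot_paraphraser_on_T5")
model = AutoModelForSeq2SeqLM.from_pretrained("prithivida/parrot_paraphraser_on_T5")

text = "paraphrase: Can you recommend some upscale restaurants in New York?"
inputs = tokenizer(text, return_tensors="pt")

# Sample several candidate paraphrases instead of taking a single greedy output.
# The sampling settings below are illustrative assumptions, not tuned values.
outputs = model.generate(
    **inputs,
    max_length=64,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    num_return_sequences=3,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```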
grammar_error_correcter_v1
grammar_error_correcter_v1 is a model designed to correct grammatical errors in text: it takes input that may contain errors and generates a corrected version. It is useful wherever grammatically correct text is required, such as when writing essays, articles, or other content; a usage sketch follows this entry.
$-/run
56.2K
Huggingface
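A minimal sketch of calling the checkpoint as a seq2seq model with transformers. The "gec: " task prefix is an assumption drawn from the Gramformer project that wraps this model; check the project page for the exact expected input format.

```python
# Minimal sketch: grammar correction with the seq2seq checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("prithivida/grammar_error_correcter_v1")
model = AutoModelForSeq2SeqLM.from_pretrained("prithivida/grammar_error_correcter_v1")

# The "gec: " prefix is an assumption based on the Gramformer project.
sentence = "gec: He are moving here."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```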
parrot_adequacy_model
parrot_adequacy_model is a text classification model that assesses the adequacy of paraphrased sentences, i.e. whether a paraphrase preserves the meaning and intent of the original sentence. It can be used to evaluate the quality and accuracy of paraphrases in natural language processing tasks such as machine translation, summarization, and data augmentation; a usage sketch follows this entry.
$-/run
28.4K
Huggingface
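The sketch below scores an (original, paraphrase) pair, assuming the checkpoint exposes a standard sequence-classification head; the label set comes from the model's own config and should be verified on the model card.

```python
# Minimal sketch: scoring whether a paraphrase preserves the original meaning.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "prithivida/parrot_adequacy_model"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

original = "Can you recommend some upscale restaurants in New York?"
paraphrase = "Could you suggest a few high-end places to eat in New York?"

# Encode the two sentences as a pair, as for NLI-style classifiers.
inputs = tokenizer(original, paraphrase, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)[0]

# Label names come from the checkpoint's config; check them on the model card.
for label_id, prob in enumerate(probs):
    print(model.config.id2label[label_id], float(prob))
```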
parrot_fluency_model
parrot_fluency_model is a text classification model trained to judge the fluency of a given text: whether a piece of text is well written and grammatically correct, or contains errors and inconsistencies. It is useful for tasks such as proofreading, language learning, and automated writing evaluation; a usage sketch follows this entry.
$-/run
15.0K
Huggingface
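A minimal sketch using the generic text-classification pipeline; the label names and their meanings are defined by the checkpoint's config, so consult the model card before relying on them.

```python
# Minimal sketch: classifying sentence fluency with the generic pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="prithivida/parrot_fluency_model",
)
# Label names and meanings are defined by the checkpoint's config.
print(classifier("He are moving here."))
print(classifier("He is moving here."))
```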
bert-for-patents-64d
The "bert-for-patents-64d" model is a modified version of the BERT for Patents model, which is trained on over 100 million patents. The modified version reduces the dimension of the output embeddings from 768 or 1024 to 64 using Principle Component Analysis (PCA), allowing for more efficient storage of the embeddings while still maintaining comparable performance. The model is commonly used in projects such as Patents4IPPC, which is commissioned by the Joint Research Centre of the European Commission.
$-/run
5.6K
Huggingface
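The sketch below illustrates the dimensionality-reduction idea described above, projecting transformer embeddings down to 64 dimensions with PCA. It demonstrates the technique only; the base checkpoint shown (anferico/bert-for-patents) and the pooling choice are assumptions, not the exact pipeline behind bert-for-patents-64d.

```python
# Minimal sketch of the PCA reduction described above, NOT the exact pipeline
# behind bert-for-patents-64d. The base checkpoint and [CLS] pooling are
# assumptions chosen for illustration.
import torch
from sklearn.decomposition import PCA
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("anferico/bert-for-patents")  # assumption
model = AutoModel.from_pretrained("anferico/bert-for-patents")

texts = [f"Placeholder patent abstract number {i}." for i in range(200)]  # stand-in corpus

with torch.no_grad():
    rows = []
    for t in texts:
        inputs = tokenizer(t, return_tensors="pt", truncation=True)
        # Take the [CLS] vector as the document embedding (one common choice).
        rows.append(model(**inputs).last_hidden_state[:, 0, :].squeeze(0))
embeddings = torch.stack(rows).numpy()  # shape: (200, hidden_size)

# Fit PCA on the corpus and keep the top 64 components for compact storage.
pca = PCA(n_components=64)
reduced = pca.fit_transform(embeddings)  # shape: (200, 64)
print(reduced.shape)
```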
informal_to_formal_styletransfer
Platform did not provide a description for this model.
$-/run
2.6K
Huggingface
formal_to_informal_styletransfer
This model belongs to the Styleformer project; please refer to the GitHub page for details. A combined usage sketch follows the style-transfer entries below.
$-/run
295
Huggingface
passive_to_active_styletransfer
This model belongs to the Styleformer project; please refer to the GitHub page for details.
$-/run
238
Huggingface
active_to_passive_styletransfer
Platform did not provide a description for this model.
$-/run
172
Huggingface
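The four style-transfer checkpoints above are wrapped by the Styleformer library. The sketch below follows the usage pattern shown in the project's README; the style codes and the install command are assumptions that should be verified against the GitHub page.

```python
# Minimal sketch using the Styleformer library, which wraps these checkpoints.
# ASSUMPTIONS: the style codes and API follow the project's README
# (0 = casual-to-formal, 1 = formal-to-casual, 2 = active-to-passive,
# 3 = passive-to-active) and may change; verify on the GitHub page.
# Install (assumption): pip install git+https://github.com/PrithivirajDamodaran/Styleformer.git
from styleformer import Styleformer

# Formal -> casual
sf_casual = Styleformer(style=1)
print(sf_casual.transfer("I would appreciate it if you could respond promptly."))

# Passive -> active
sf_active = Styleformer(style=3)
print(sf_active.transfer("The report was written by the committee."))
```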
ALT_CTRLSum
Platform did not provide a description for this model.
$-/run
29
Huggingface