csebuetnlp

Rank:

Average Model Cost: $0.0000

Number of Runs: 100,705

Models by this creator

mT5_multilingual_XLSum


mT5_multilingual_XLSum is a multilingual summarization model based on the mT5 architecture, fine-tuned on the XLSum dataset. It can summarize text written in many different languages, making it a useful tool for multilingual summarization tasks.
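
As a quick illustration, the snippet below is a minimal sketch of loading the model with the Hugging Face transformers library and summarizing a single article; the decoding settings (beam size, length limits) are illustrative assumptions rather than values prescribed by the model card:

import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Collapse newlines and repeated whitespace before feeding text to the model
whitespace_handler = lambda text: re.sub(r"\s+", " ", re.sub(r"\n+", " ", text.strip()))

article_text = "Replace this with the article you want to summarize."

model_name = "csebuetnlp/mT5_multilingual_XLSum"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)  # slow SentencePiece tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

input_ids = tokenizer(
    [whitespace_handler(article_text)],
    return_tensors="pt",
    truncation=True,
    max_length=512,
)["input_ids"]

# Illustrative decoding settings; tune beam size and max_length for your use case
output_ids = model.generate(
    input_ids=input_ids,
    max_length=84,
    no_repeat_ngram_size=2,
    num_beams=4,
)[0]

print(tokenizer.decode(output_ids, skip_special_tokens=True))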


$-/run

61.5K

Huggingface

mT5_m2m_crossSum


This repository contains the many-to-many (m2m) mT5 checkpoint finetuned on all cross-lingual pairs of the CrossSum dataset. The model summarizes text written in any language into the requested target language. For finetuning details and scripts, see the paper and the official repository.

Available target language names: amharic, arabic, azerbaijani, bengali, burmese, chinese_simplified, chinese_traditional, english, french, gujarati, hausa, hindi, igbo, indonesian, japanese, kirundi, korean, kyrgyz, marathi, nepali, oromo, pashto, persian, pidgin, portuguese, punjabi, russian, scottish_gaelic, serbian_cyrillic, serbian_latin, sinhala, somali, spanish, swahili, tamil, telugu, thai, tigrinya, turkish, ukrainian, urdu, uzbek, vietnamese, welsh, yoruba.

Citation: if you use this model, please cite the accompanying paper.

Using this model in transformers (tested on 4.11.0.dev0): a basic sketch is given below.
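
Since the usage code itself is not reproduced on this page, the following is only a rough sketch of cross-lingual summarization with transformers. The way the target language is selected here (looking up a language-id token in the checkpoint's task_specific_params["langid_map"] and forcing decoding to start from it) reflects my reading of the model card and should be verified against the official repository:

import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

whitespace_handler = lambda text: re.sub(r"\s+", " ", re.sub(r"\n+", " ", text.strip()))

model_name = "csebuetnlp/mT5_m2m_crossSum"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Assumption: the config stores a map of {target_language_name: [index, langid_token]};
# the summary language is chosen by starting decoding from that language's id token.
get_lang_id = lambda lang: tokenizer.convert_tokens_to_ids(
    model.config.task_specific_params["langid_map"][lang][1]
)

article_text = "Replace this with the article you want to summarize."
target_lang = "english"  # any name from the target-language list above

input_ids = tokenizer(
    [whitespace_handler(article_text)],
    return_tensors="pt",
    truncation=True,
    max_length=512,
)["input_ids"]

output_ids = model.generate(
    input_ids=input_ids,
    decoder_start_token_id=get_lang_id(target_lang),
    max_length=84,
    no_repeat_ngram_size=2,
    num_beams=4,
)[0]

print(tokenizer.decode(output_ids, skip_special_tokens=True))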


$-/run

812

Huggingface

banglat5_banglaparaphrase


This repository contains the pretrained checkpoint of the BanglaT5 model finetuned on the BanglaParaphrase dataset. BanglaT5 is a sequence-to-sequence transformer pretrained with the "Span Corruption" objective, and models finetuned from this checkpoint achieve competitive results on the dataset. For finetuning and inference, refer to the scripts in the official GitHub repository of BanglaNLG.

Note: the model was pretrained with a specific text normalization pipeline, and all finetuning scripts in the official repository apply it by default. If you adapt the pretrained model for a different task, make sure the text is normalized with this pipeline before tokenizing to get the best results.

Benchmarks: supervised fine-tuning results are reported on the BanglaParaphrase dataset.

Citation: if you use this model, please cite the accompanying paper.

Using this model in transformers: a basic example is given below.
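
Because the basic example mentioned above is not included on this page, here is a minimal sketch of paraphrasing a sentence with transformers. It assumes the normalization pipeline referred to in the note is the csebuetnlp normalizer package (installed separately), and the generation settings are illustrative:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from normalizer import normalize  # assumed to be the csebuetnlp normalization pipeline

model_name = "csebuetnlp/banglat5_banglaparaphrase"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

sentence = "Replace this with the Bangla sentence you want to paraphrase."

# Normalize the text before tokenizing, matching the pretraining preprocessing
input_ids = tokenizer(normalize(sentence), return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=128, num_beams=4)[0]

print(tokenizer.decode(output_ids, skip_special_tokens=True))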


$-/run

472

Huggingface
