csebuetnlp
Rank: -
Average Model Cost: $0.0000
Number of Runs: 100,705
Models by this creator
mT5_multilingual_XLSum
mT5_multilingual_XLSum is a multilingual abstractive summarization model based on the mT5 architecture and fine-tuned on the XLSum dataset. It summarizes text in any of the languages covered by XLSum, producing the summary in the same language as the input, which makes it a useful tool for multilingual summarization tasks; a usage sketch follows this entry.
$-/run
61.5K
Huggingface
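The listing gives no usage snippet, so here is a minimal sketch in transformers, assuming the checkpoint loads as a standard seq2seq model; the whitespace handling and generation settings are illustrative choices, not values prescribed by the model card:

```python
import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative preprocessing: collapse newlines and runs of whitespace before tokenizing.
def clean_whitespace(text: str) -> str:
    return re.sub(r"\s+", " ", re.sub(r"\n+", " ", text.strip()))

model_name = "csebuetnlp/mT5_multilingual_XLSum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article_text = "Videos claiming that approved vaccines cause autism keep spreading online."  # any supported language

input_ids = tokenizer(
    [clean_whitespace(article_text)],
    return_tensors="pt",
    truncation=True,
    max_length=512,
)["input_ids"]

# Beam search with a short length budget keeps the summary concise; tune as needed.
output_ids = model.generate(
    input_ids=input_ids,
    max_length=84,
    no_repeat_ngram_size=2,
    num_beams=4,
)[0]

print(tokenizer.decode(output_ids, skip_special_tokens=True))
```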
mT5_m2m_crossSum_enhanced
mT5_m2m_crossSum_enhanced is a many-to-many summarization model based on the mT5 architecture, trained to generate concise summaries of given texts. It is an enhanced variant of the mT5_m2m_crossSum checkpoint, intended to improve performance on summarization tasks.
$-/run
31.3K
Huggingface
banglabert
$-/run
4.1K
Huggingface
mT5_m2m_crossSum
mT5-m2m-CrossSum

This repository contains the many-to-many (m2m) mT5 checkpoint fine-tuned on all cross-lingual pairs of the CrossSum dataset. The model summarizes text written in any supported language into the provided target language. For fine-tuning details and scripts, see the paper and the official repository.

Using this model in transformers (tested on 4.11.0.dev0; see the sketch after this entry).

Available target language names: amharic, arabic, azerbaijani, bengali, burmese, chinese_simplified, chinese_traditional, english, french, gujarati, hausa, hindi, igbo, indonesian, japanese, kirundi, korean, kyrgyz, marathi, nepali, oromo, pashto, persian, pidgin, portuguese, punjabi, russian, scottish_gaelic, serbian_cyrillic, serbian_latin, sinhala, somali, spanish, swahili, tamil, telugu, thai, tigrinya, turkish, ukrainian, urdu, uzbek, vietnamese, welsh, yoruba

Citation: if you use this model, please cite the following paper.
$-/run
812
Huggingface
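A hedged sketch of the transformers usage referenced above. It assumes the checkpoint stores a langid_map under config.task_specific_params that maps each target language name listed above to its language token, which is then passed as decoder_start_token_id to force the output language; verify this against the official repository before relying on it.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "csebuetnlp/mT5_m2m_crossSum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Assumption: the config carries a langid_map of {language_name: (index, token)}
# that resolves a target language name to its decoder start token. Check the
# official repository if this lookup fails.
def get_lang_id(lang: str) -> int:
    token = model.config.task_specific_params["langid_map"][lang][1]
    return tokenizer.convert_tokens_to_ids(token)

article_text = "..."   # source text in any supported language
target_lang = "english" # any name from the list above

input_ids = tokenizer(
    [article_text], return_tensors="pt", truncation=True, max_length=512
)["input_ids"]

output_ids = model.generate(
    input_ids=input_ids,
    decoder_start_token_id=get_lang_id(target_lang),  # forces the target language
    max_length=84,
    num_beams=4,
)[0]

print(tokenizer.decode(output_ids, skip_special_tokens=True))
```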
banglat5_nmt_en_bn
$-/run
647
Huggingface
banglat5_nmt_bn_en
$-/run
607
Huggingface
banglat5_banglaparaphrase
banglat5_banglaparaphrase

This repository contains the pretrained checkpoint of the model BanglaT5 fine-tuned on the BanglaParaphrase dataset. BanglaT5 is a sequence-to-sequence transformer model pretrained with the "Span Corruption" objective. Models fine-tuned from this checkpoint achieve competitive results on the dataset. For fine-tuning and inference, refer to the scripts in the official GitHub repository of BanglaNLG.

Note: this model was pretrained using a specific normalization pipeline available here. All fine-tuning scripts in the official GitHub repository use this normalization by default. If you need to adapt the pretrained model for a different task, make sure the text units are normalized using this pipeline before tokenizing, to get the best results. A basic example is given in the sketch after this entry.

Using this model in transformers: see the sketch after this entry.

Benchmarks (supervised fine-tuning): the dataset can be found at BanglaParaphrase.

Citation: if you use this model, please cite the following paper.
$-/run
472
Huggingface
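A minimal sketch of the basic example referenced above, assuming the normalization pipeline is the csebuetnlp normalizer package installable from its GitHub repository, and that a slow (sentencepiece) tokenizer is used, as the BanglaNLG scripts do; treat both as assumptions to verify:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from normalizer import normalize  # assumption: pip install git+https://github.com/csebuetnlp/normalizer

model_name = "csebuetnlp/banglat5_banglaparaphrase"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)  # slow tokenizer, per the BanglaNLG scripts

sentence = "..."  # a Bengali input sentence

# Normalize before tokenizing, as the note above recommends for all BanglaT5 checkpoints.
input_ids = tokenizer(normalize(sentence), return_tensors="pt").input_ids
generated = model.generate(input_ids, max_length=128, num_beams=4)

print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```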
banglabert_large
$-/run
445
Huggingface
mT5_m2o_chinese_simplified_crossSum
Platform did not provide a description for this model.
$-/run
445
Huggingface
mT5_m2o_arabic_crossSum
$-/run
421
Huggingface