
ctu-aic


Average Model Cost: $0.0000

Number of Runs: 6,249

Models by this creator

💎

mbart25-multilingual-summarization-multilarge-cs

The mbart25-multilingual-summarization-multilarge-cs model is a fine-tuned checkpoint of facebook/mbart-large-cc25 for generating multilingual summaries, with a focus on Czech texts. It was trained on a large multilingual summarization dataset consisting mainly of news and daily-mail articles in eight languages. By pairing foreign-language documents with the large number of Czech documents in the training data, the model aims to improve summarization quality specifically for Czech. Supported languages include Czech, English, German, Spanish, French, Russian, and Turkish. Training used the entire training set and 72% of the validation set of the Multilingual large summarization dataset, with cross-entropy loss as the optimization objective. The model was evaluated with ROUGE metrics on the individual test sets, although the specific results are not reproduced here.
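
A minimal usage sketch (untested) follows. It assumes the checkpoint follows the standard facebook/mbart-large-cc25 interface on the Hugging Face Hub; the 512/128 token limits mirror those reported for the companion mT5 model below rather than anything documented for this checkpoint, and the decoding parameters are illustrative defaults.

```python
# Minimal sketch: assumes the standard mbart-large-cc25 interface,
# which includes the "cs_CZ" source-language code.
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_name = "ctu-aic/mbart25-multilingual-summarization-multilarge-cs"
tokenizer = MBartTokenizer.from_pretrained(model_name, src_lang="cs_CZ")
model = MBartForConditionalGeneration.from_pretrained(model_name)

article = "Dlouhý český novinový článek ke shrnutí..."  # text to summarize
inputs = tokenizer(article, max_length=512, truncation=True, return_tensors="pt")

# Generate the summary; beam size and length limit are illustrative.
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```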


$-/run

5.8K

Huggingface

🏋️

mt5-base-multilingual-summarization-multilarge-cs

This model is a fine-tuned checkpoint of google/mt5-base on the Multilingual large summarization dataset, focused on Czech texts, for producing multilingual summaries. The task is multi-sentence summarization in eight languages: by adding other foreign-language documents to a considerable amount of Czech documents, the authors aimed to improve summarization in the Czech language. Each supported language is selected with a dedicated sentinel token: 'cs': '<extra_id_0>', 'en': '<extra_id_1>', 'de': '<extra_id_2>', 'es': '<extra_id_3>', 'fr': '<extra_id_4>', 'ru': '<extra_id_5>', 'tu': '<extra_id_6>', 'zh': '<extra_id_7>'. The Multilingual large summarization dataset consists of 10 sub-datasets, mainly based on news and daily mails; training used the entire training set and 72% of the validation set, with truncation and padding set to 512 tokens for the encoder (input text) and 128 tokens for the decoder (summary). The model was trained with cross-entropy loss, and ROUGE results are reported per individual dataset test set.
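
A minimal usage sketch (untested), assuming the per-language sentinel token from the mapping above is prepended to the source text to select the summary language; the example text and decoding parameters are illustrative.

```python
# Minimal sketch: prepend the language's sentinel token, then generate.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "ctu-aic/mt5-base-multilingual-summarization-multilarge-cs"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

lang_token = "<extra_id_0>"  # 'cs' per the mapping above
text = lang_token + " " + "Dlouhý český článek ke shrnutí..."

# Encoder inputs were truncated/padded to 512 tokens during training and
# decoder targets to 128, so we mirror those limits here.
inputs = tokenizer(text, max_length=512, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```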


$-/run

151

Huggingface

🏋️

m2m100-418M-multilingual-summarization-multilarge-cs

Platform did not provide a description for this model.


$-/run

120

Huggingface

🤯

xlm-roberta-large-xnli-enfever_nli

🦾 Transformer model for natural language inference in Czech ('cs'), fine-tuned on the ctu-aic/enfever_nli dataset and released under the cc-by-sa-4.0 license. The model was trained with the UKPLab sentence_transformers CrossEncoder API, which the authors recommend for inference; it can also be used via Hugging Face transformers. Pull requests are welcome; for major changes, please open an issue first to discuss what you would like to change. The model was trained and uploaded by ullriher (e-mail: ullriher@fel.cvut.cz), and the code was co-developed by the NLP team at the Artificial Intelligence Center of CTU in Prague (AIC). If you find the repository helpful, feel free to cite the authors' publication.
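
A minimal inference sketch (untested), following the card's recommendation to use the sentence_transformers CrossEncoder API; the example sentences are made up, and the meaning of each output class is an assumption to be checked against the model's label configuration.

```python
# Minimal sketch: score an (evidence, claim) pair with the CrossEncoder API.
from sentence_transformers import CrossEncoder

model = CrossEncoder("ctu-aic/xlm-roberta-large-xnli-enfever_nli")

evidence = "Praha je hlavní město České republiky."
claim = "Hlavním městem Česka je Praha."

# predict() takes a list of sentence pairs and returns one score vector
# per pair; label order comes from the model config.
scores = model.predict([(evidence, claim)])
print(scores)
```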


$-/run

33

Huggingface

🔎

mT5_multilingual_XLSum-smesum-2

Platform did not provide a description for this model.


$-/run

24

Huggingface

💎

mbart-sumeczech-claim-extraction

Platform did not provide a description for this model.


$-/run

23

Huggingface

🏋️

xlm-roberta-large-squad2-csfever_v2-f1

Model for natural language inference, trained as part of a bachelor thesis. It can be used via the Transformers and Sentence Transformers libraries.
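
A minimal sketch (untested) of plain Transformers usage, assuming a standard sequence-classification head over (evidence, claim) pairs, as is typical for FEVER-style NLI models; the example sentences are made up and the class semantics must be read from the model config.

```python
# Minimal sketch: classify an (evidence, claim) pair.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "ctu-aic/xlm-roberta-large-squad2-csfever_v2-f1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

evidence = "Praha je hlavní město České republiky."
claim = "Hlavním městem Česka je Praha."

inputs = tokenizer(evidence, claim, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities; label order per config
```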


$-/run

17

Huggingface

✅

xlm-roberta-large-squad2-csfever_nli

Platform did not provide a description for this model.


$-/run

16

Huggingface

⛏️

mt5-base-multilingual-summarization-multilarge-cs-smesum

Platform did not provide a description for this model.


$-/run

15

Huggingface

💎

mbart-at2h-cs-polish-news2

Platform did not provide a description for this model.


$-/run

14

Huggingface
