Klue
Rank:
Average Model Cost: $0.0001
Number of Runs: 191,600
Models by this creator
bert-base
The KLUE BERT base model is a pre-trained transformer-based language model for Korean, developed as part of the Korean Language Understanding Evaluation (KLUE) Benchmark. It can be used for tasks such as topic classification, semantic textual similarity, natural language inference, and named entity recognition. It should not be used to create hostile or misleading content, as it was not trained to be factual or to represent real events. The model has been evaluated on the KLUE Benchmark with task-specific metrics; its limitations and biases are discussed in the associated paper, and its technical specifications and training details are covered in the documentation. A minimal loading sketch follows this entry.
$-/run
149.7K
Huggingface
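The sketch below shows one way to load this model with the Hugging Face transformers library. It assumes the checkpoint is published on the Hub under the id "klue/bert-base" (matching the Hugging Face listing above); the example sentence is just an illustration.

```python
# Minimal sketch: loading KLUE BERT base from the Hugging Face Hub.
# The "klue/bert-base" checkpoint id is assumed from the listing above.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModel.from_pretrained("klue/bert-base")

# Encode a Korean sentence and run a forward pass.
inputs = tokenizer("한국어 문장을 인코딩합니다.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```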
roberta-base
KLUE RoBERTa base is a RoBERTa model pre-trained on the Korean language. It can be used for natural language processing tasks such as text classification, sentiment analysis, and question answering, and has been fine-tuned on various Korean-language tasks. The model should be loaded with BertTokenizer rather than RobertaTokenizer and is available on GitHub; see the sketch after this entry.
$-/run
20.8K
Huggingface
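A minimal loading sketch, highlighting the tokenizer caveat: the KLUE RoBERTa checkpoints ship a BERT-style vocabulary, so BertTokenizer is used instead of RobertaTokenizer. The checkpoint id "klue/roberta-base" is assumed from the listing above.

```python
# Minimal sketch: loading KLUE RoBERTa base with BertTokenizer, per the
# model card note; RobertaTokenizer will not work with this vocabulary.
from transformers import AutoModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("klue/roberta-base")
model = AutoModel.from_pretrained("klue/roberta-base")

inputs = tokenizer("안녕하세요, KLUE RoBERTa입니다.", return_tensors="pt")
outputs = model(**inputs)
```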
roberta-large
KLUE RoBERTa large is a RoBERTa model pretrained on the Korean language. It can be used for natural language processing tasks such as text classification, sentiment analysis, and machine translation. The model is based on the Transformer architecture and has achieved state-of-the-art performance on several Korean NLP benchmarks. Use BertTokenizer instead of RobertaTokenizer with this model; a fine-tuning sketch follows this entry.
$-/run
17.3K
Huggingface
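Since this model is typically fine-tuned for downstream tasks, the sketch below attaches a classification head. The checkpoint id "klue/roberta-large" and the 7-class label count (the KLUE topic-classification / YNAT label set) are assumptions; adjust num_labels for your task.

```python
# Sketch: wrapping klue/roberta-large with a sequence-classification head
# for fine-tuning. num_labels=7 assumes the 7-class KLUE-TC (YNAT) task.
from transformers import AutoModelForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("klue/roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "klue/roberta-large",
    num_labels=7,
)
```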
roberta-small
KLUE RoBERTa small is a RoBERTa model pretrained on the Korean language; see the GitHub repository and the paper for details. Note: use BertTokenizer instead of RobertaTokenizer (AutoTokenizer will load BertTokenizer), as demonstrated in the sketch below.
$-/run
3.8K
Huggingface
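A short sketch verifying the note above: AutoTokenizer on this checkpoint should resolve to a BERT-style tokenizer rather than a RoBERTa one. The checkpoint id "klue/roberta-small" is assumed from the listing above.

```python
# Sketch: confirm that AutoTokenizer loads a BERT-style tokenizer for
# klue/roberta-small, as the model card note indicates.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("klue/roberta-small")
print(type(tokenizer).__name__)  # expected: BertTokenizerFast
```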