Indobenchmark

Average Model Cost: $0.0000

Number of Runs: 696,540

Models by this creator

indobert-base-p1

IndoBERT Base Model (phase1 - uncased) is a state-of-the-art language model for Indonesian based on the BERT model. It is pretrained with masked language modeling (MLM) and next sentence prediction (NSP) objectives, and can be used to extract contextual representations for Indonesian text.
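Extracting contextual representations as described above can be sketched as follows, assuming the Hugging Face `transformers` library and the public model id `indobenchmark/indobert-base-p1`; the example sentence is illustrative:

```python
# Sketch: extract contextual token representations with IndoBERT base (phase 1).
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "indobenchmark/indobert-base-p1"  # assumed Hugging Face model id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

text = "Budi pergi ke pasar."  # Indonesian: "Budi goes to the market."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One 768-dimensional vector per token (BERT-base hidden size).
token_embeddings = outputs.last_hidden_state
print(token_embeddings.shape)
```

The `last_hidden_state` tensor gives one vector per (sub)token; a common choice for a sentence-level representation is the vector at position 0, the `[CLS]` token.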

$-/run · 685.2K runs · Huggingface

indobert-base-p2

IndoBERT Base Model (phase2 - uncased) is a state-of-the-art language model for Indonesian based on the BERT model. It is pretrained with masked language modeling (MLM) and next sentence prediction (NSP) objectives, and can be applied to natural language processing tasks such as text classification, named entity recognition, and sentiment analysis. To use it, load the model and tokenizer, then extract the contextual representation for the given text. The authors of IndoBERT are Bryan Wilie, Karissa Vincentio, Genta Indra Winata, Samuel Cahyawijaya, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, and Ayu Purwarianti.
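Since the model is pretrained with an MLM objective, its masked-token predictions can be probed directly. A minimal sketch, assuming the `transformers` fill-mask pipeline and the model id `indobenchmark/indobert-base-p2` (the example sentence is illustrative):

```python
# Sketch: query the masked-language-modeling head via the fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="indobenchmark/indobert-base-p2")

# Indonesian: "Mother is cooking in the [MASK]."
predictions = fill_mask("Ibu sedang memasak di [MASK].")

for p in predictions:
    # Each prediction carries the proposed token and its probability score.
    print(p["token_str"], round(p["score"], 3))
```

For downstream tasks such as classification, the same checkpoint would instead be loaded with a task-specific head (e.g. `AutoModelForSequenceClassification`) and fine-tuned on labeled data.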

$-/run · 6.9K runs · Huggingface

indobert-large-p1

IndoBERT Large Model (phase1 - uncased) is a state-of-the-art language model for Indonesian based on the BERT model. It is pretrained with masked language modeling (MLM) and next sentence prediction (NSP) objectives. IndoBERT was trained and evaluated by Bryan Wilie, Karissa Vincentio, Genta Indra Winata, Samuel Cahyawijaya, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, and Ayu Purwarianti.

$-/run · 737 runs · Huggingface
