Models by this creator
Bangla-Bert-Base is a pretrained language model for the Bengali language. It is trained with the masked language modeling objective described in BERT, on a combination of a Bengali Common Crawl corpus and a Bengali Wikipedia dump. The vocabulary is built with the BNLP package and contains 102,025 tokens. The architecture follows bert-base-uncased: 12 layers, 768 hidden units, 12 attention heads, and about 110M parameters. The model has been evaluated on language modeling as well as downstream tasks such as Bengali text classification and named entity recognition, where it achieved state-of-the-art results. It can be used for tasks such as masked language modeling with the Bangla BERT tokenizer.
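To make the pretraining objective concrete, here is a toy sketch of BERT-style masked language modeling data preparation, the objective Bangla-Bert-Base is trained with: roughly 15% of input tokens are selected for prediction, and each selected token is replaced by [MASK] 80% of the time, by a random vocabulary token 10% of the time, and kept unchanged 10% of the time. This is a generic illustration of the BERT recipe, not code from the model's training pipeline; the function and variable names are invented for the example.

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15, rng=None):
    """BERT-style masking: return (masked_tokens, labels).

    labels[i] holds the original token where the model must predict it,
    and None at positions that are left out of the loss.
    """
    rng = rng or random.Random(0)  # seeded for reproducibility in this sketch
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)  # model must recover the original token here
            r = rng.random()
            if r < 0.8:
                masked.append("[MASK]")        # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(rng.choice(vocab))  # 10%: random token
            else:
                masked.append(tok)             # 10%: keep unchanged
        else:
            labels.append(None)  # position not used in the MLM loss
            masked.append(tok)
    return masked, labels
```

During pretraining, the model sees the masked sequence and is trained to predict the original tokens at the labeled positions.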
codeswitch-spaeng-pos-lince is a pretrained model for part-of-speech tagging of Spanish-English code-mixed data from the LinCE benchmark. The model was trained for the codeswitch repository: https://github.com/sagorbrur/codeswitch. Installation instructions are provided in that repository, which describes two methods (Method-1 and Method-2) for part-of-speech tagging of Spanish-English mixed data.
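As a sketch of the transformers-based route, the model can be loaded as a token-classification pipeline. The model id below is an assumption based on the creator's Hugging Face namespace; verify it against the repository before relying on it. Loading the model downloads weights, so the import and the heavy work are deferred into the function.

```python
# Hypothetical model id; confirm against https://github.com/sagorbrur/codeswitch
MODEL_ID = "sagorsarker/codeswitch-spaeng-pos-lince"

def build_pos_tagger(model_id=MODEL_ID):
    """Return a token-classification pipeline for Spanish-English POS tagging."""
    # Imports are deferred so this module loads without transformers installed.
    from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                              pipeline)
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForTokenClassification.from_pretrained(model_id)
    return pipeline("token-classification", model=model, tokenizer=tokenizer)

# Example usage (requires network access to download the weights):
#   tagger = build_pos_tagger()
#   tagger("me encanta el weekend con mis friends")
```

The repository's own codeswitch package wraps this same model behind a simpler interface; the pipeline approach above is the generic transformers equivalent.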