Sagorsarker
Rank:
Average Model Cost: $0.0000
Number of Runs: 14,702
Models by this creator
bangla-bert-base
Bangla-Bert-Base is a pretrained language model for Bengali, trained with the masked language modeling objective described in BERT. Its training data combines a Bengali Common Crawl corpus and a Bengali Wikipedia dump. The vocabulary, built with the BNLP package, contains 102,025 tokens. The architecture follows bert-base-uncased: 12 layers, 768 hidden units, 12 attention heads, and 110M parameters. The model has been evaluated on language modeling as well as downstream tasks such as Bengali text classification and named entity recognition, where it achieved state-of-the-art results. It can be used for tasks such as masked language modeling with the Bangla BERT tokenizer.
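As a quick illustration, here is a minimal masked-language-modeling sketch using the Hugging Face transformers library. It assumes the model is published under the ID sagorsarker/bangla-bert-base (an assumption based on the creator and model names listed here):

```python
# Minimal sketch: masked language modeling with Bangla-Bert-Base.
# The model ID "sagorsarker/bangla-bert-base" is assumed from the listing above.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="sagorsarker/bangla-bert-base")

# Predict candidates for the masked token in a Bengali sentence
# ("I sing [MASK] in Bengali.") and print each with its score.
for prediction in fill_mask("আমি বাংলায় [MASK] গাই।"):
    print(prediction["token_str"], prediction["score"])
```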
$-/run
12.7K
Huggingface
codeswitch-hineng-ner-lince
$-/run
1.4K
Huggingface
codeswitch-spaeng-sentiment-analysis-lince
$-/run
111
Huggingface
emailgenerator
$-/run
66
Huggingface
codeswitch-hineng-lid-lince
Platform did not provide a description for this model.
$-/run
62
Huggingface
codeswitch-spaeng-lid-lince
Platform did not provide a description for this model.
$-/run
39
Huggingface
codeswitch-hineng-pos-lince
$-/run
31
Huggingface
mbert-bengali-tydiqa-qa
$-/run
20
Huggingface
codeswitch-spaeng-pos-lince
This is a pretrained model for part-of-speech tagging of Spanish-English code-mixed data, trained on the LinCE dataset. The model was trained for the repository at https://github.com/sagorbrur/codeswitch, which also carries installation instructions for the codeswitch package. It can be used either through the codeswitch package or directly with Hugging Face transformers.
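For illustration, a minimal sketch of the transformers-based route, assuming the Hugging Face model ID sagorsarker/codeswitch-spaeng-pos-lince (inferred from the listing) and the standard token-classification output format:

```python
# Minimal sketch: POS tagging of Spanish-English code-mixed text.
# The model ID "sagorsarker/codeswitch-spaeng-pos-lince" is assumed from the listing.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "sagorsarker/codeswitch-spaeng-pos-lince"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

pos_tagger = pipeline("token-classification", model=model, tokenizer=tokenizer)

# Tag each token in a code-mixed sentence with its predicted POS label.
for token in pos_tagger("I love la comida mexicana"):
    print(token["word"], token["entity"])
```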
$-/run
20
Huggingface