Asafaya
Rank:
Average Model Cost: $0.0000
Number of Runs: 14,641
Models by this creator
bert-base-arabic
The bert-base-arabic model is a pretrained BERT (Bidirectional Encoder Representations from Transformers) base language model for the Arabic language. It was trained on a large corpus of approximately 8.2 billion words, including the Arabic version of OSCAR (filtered from Common Crawl), a recent dump of Arabic Wikipedia, and other Arabic resources, totaling around 95GB of text. The model was trained for 3 million training steps with a batch size of 128. It can be used by installing torch or tensorflow together with the Hugging Face transformers library. The model's performance and further details can be found in the Arabic-BERT paper. The training process was supported by Google, which provided a free TPU, and the model is hosted by Hugging Face.
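The original model card includes a loading snippet that was not captured in this listing. A minimal sketch of what that loading code typically looks like, assuming the checkpoints live at the `asafaya/bert-<size>-arabic` Hub ids and that torch and transformers are installed (the `arabic_bert_id` helper is ours, added for illustration):

```python
# Sketch: loading the Arabic BERT checkpoints with Hugging Face transformers.
# The helper maps a size name to its Hub repo id; the loading function
# requires `pip install torch transformers` and network access to the Hub.

def arabic_bert_id(size: str = "base") -> str:
    """Return the assumed Hub repo id for a given Arabic BERT size."""
    sizes = {"mini", "medium", "base", "large"}
    if size not in sizes:
        raise ValueError(f"unknown size {size!r}; expected one of {sorted(sizes)}")
    return f"asafaya/bert-{size}-arabic"

def load_arabic_bert(size: str = "base"):
    """Download and return (tokenizer, model) for the chosen size."""
    # Imported lazily so the id helper works even without transformers installed.
    from transformers import AutoModel, AutoTokenizer
    repo = arabic_bert_id(size)
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModel.from_pretrained(repo)
    return tokenizer, model
```

`AutoTokenizer` and `AutoModel` resolve the concrete BERT classes from the checkpoint's config, so the same two calls should work for the mini, medium, base, and large variants listed on this page.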
$-/run
7.8K
Huggingface
bert-large-arabic
Arabic BERT Large Model

Pretrained BERT large language model for Arabic. If you use this model in your work, please cite the Arabic-BERT paper.

Pretraining corpus: the arabic-bert-large model was pretrained on ~8.2 billion words: the Arabic version of OSCAR (filtered from Common Crawl), a recent dump of Arabic Wikipedia, and other Arabic resources, which sum up to ~95GB of text.

Notes on training data: the final version of the corpus contains some non-Arabic words inline, which were not removed from sentences since that would affect tasks such as NER. Although non-Arabic characters were lowercased as a preprocessing step, Arabic characters have no upper or lower case, so there are no separate cased and uncased versions of the model. The corpus and vocabulary set are not restricted to Modern Standard Arabic; they also contain some dialectal Arabic.

Pretraining details: the model was trained using Google BERT's GitHub repository on a single TPU v3-8 provided for free by TFRC. The pretraining procedure follows BERT's training settings with some changes: 3M training steps with a batch size of 128, instead of 1M steps with a batch size of 256.

Loading the pretrained model: install torch or tensorflow and the Hugging Face transformers library, then initialize the model directly.

Results: for further details on the model's performance or any other queries, refer to Arabic-BERT.

Acknowledgements: thanks to Google for providing a free TPU for the training process and to Hugging Face for hosting this model on their servers 😊
$-/run
4.7K
Huggingface
bert-mini-arabic
Arabic BERT Mini Model

Pretrained BERT mini language model for Arabic. If you use this model in your work, please cite the Arabic-BERT paper.

Pretraining corpus: the arabic-bert-mini model was pretrained on ~8.2 billion words: the Arabic version of OSCAR (filtered from Common Crawl), a recent dump of Arabic Wikipedia, and other Arabic resources, which sum up to ~95GB of text.

Notes on training data: the final version of the corpus contains some non-Arabic words inline, which were not removed from sentences since that would affect tasks such as NER. Although non-Arabic characters were lowercased as a preprocessing step, Arabic characters have no upper or lower case, so there are no separate cased and uncased versions of the model. The corpus and vocabulary set are not restricted to Modern Standard Arabic; they also contain some dialectal Arabic.

Pretraining details: the model was trained using Google BERT's GitHub repository on a single TPU v3-8 provided for free by TFRC. The pretraining procedure follows BERT's training settings with some changes: 3M training steps with a batch size of 128, instead of 1M steps with a batch size of 256.

Loading the pretrained model: install torch or tensorflow and the Hugging Face transformers library, then initialize the model directly.

Results: for further details on the model's performance or any other queries, refer to Arabic-BERT.

Acknowledgements: thanks to Google for providing a free TPU for the training process and to Hugging Face for hosting this model on their servers 😊
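Since these are masked language models, a natural quick test is masked-word prediction. A sketch using the transformers fill-mask pipeline with the mini variant (the `mask_word` helper and the top-k wrapper are ours; the pipeline call assumes transformers and torch are installed and the Hub is reachable):

```python
# Sketch: masked-word prediction with asafaya/bert-mini-arabic.
# BERT-style checkpoints use the [MASK] placeholder token.

def mask_word(sentence: str, word: str, mask_token: str = "[MASK]") -> str:
    """Replace the first occurrence of `word` with the mask token."""
    if word not in sentence:
        raise ValueError(f"{word!r} not found in sentence")
    return sentence.replace(word, mask_token, 1)

def top_predictions(masked_sentence: str, k: int = 5):
    """Return the top-k token strings the model predicts for the mask."""
    from transformers import pipeline  # lazy import: heavy optional dependency
    fill = pipeline("fill-mask", model="asafaya/bert-mini-arabic", top_k=k)
    return [out["token_str"] for out in fill(masked_sentence)]
```

For example, `top_predictions(mask_word("القدس عاصمة فلسطين", "عاصمة"))` asks the model to fill in the middle word of the sentence.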
$-/run
1.1K
Huggingface
albert-base-arabic
Arabic-ALBERT Base

Arabic edition of the ALBERT base pretrained language model. If you use any of these models in your work, please cite the Arabic-ALBERT work.

Pretraining data: the models were pretrained on ~4.4 billion words: the Arabic version of OSCAR (the unshuffled version of the corpus, filtered from Common Crawl) and a recent dump of Arabic Wikipedia.

Notes on training data: the final version of the corpus contains some non-Arabic words inline, which were not removed from sentences since that would affect tasks such as NER. Although non-Arabic characters were lowercased as a preprocessing step, Arabic characters have no upper or lower case, so there are no separate cased and uncased versions of the model. The corpus and vocabulary set are not restricted to Modern Standard Arabic; they also contain some dialectal Arabic.

Pretraining details: these models were trained using Google ALBERT's GitHub repository on a single TPU v3-8 provided for free by TFRC. The pretraining procedure follows BERT's training settings with some changes: 7M training steps with a batch size of 64, instead of 125K steps with a batch size of 4096.

Results: for further details on the models' performance or any other queries, refer to Arabic-ALBERT.

How to use: install torch or tensorflow and the Hugging Face transformers library, then initialize the model directly.

Acknowledgements: thanks to Google for providing a free TPU for the training process and to Hugging Face for hosting these models on their servers 😊
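The ALBERT cards likewise point to a loading snippet that this listing did not capture. A minimal sketch, assuming the three variants follow the `asafaya/albert-<size>-arabic` Hub id pattern (the id helper is ours, for illustration):

```python
# Sketch: loading the Arabic ALBERT checkpoints (base / large / xlarge).
# Actual loading requires `pip install torch transformers` and Hub access.

def arabic_albert_id(size: str = "base") -> str:
    """Return the assumed Hub repo id for a given Arabic ALBERT size."""
    sizes = {"base", "large", "xlarge"}
    if size not in sizes:
        raise ValueError(f"unknown size {size!r}; expected one of {sorted(sizes)}")
    return f"asafaya/albert-{size}-arabic"

def load_arabic_albert(size: str = "base"):
    """Download and return (tokenizer, model) for the chosen size."""
    from transformers import AutoModel, AutoTokenizer  # lazy optional import
    repo = arabic_albert_id(size)
    return AutoTokenizer.from_pretrained(repo), AutoModel.from_pretrained(repo)
```

As with the BERT checkpoints, the Auto classes pick the right ALBERT tokenizer and model classes from the checkpoint config, so no per-size code is needed.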
$-/run
365
Huggingface
bert-medium-arabic
Arabic BERT Medium Model

Pretrained BERT medium language model for Arabic. If you use this model in your work, please cite the Arabic-BERT paper.

Pretraining corpus: the arabic-bert-medium model was pretrained on ~8.2 billion words: the Arabic version of OSCAR (filtered from Common Crawl), a recent dump of Arabic Wikipedia, and other Arabic resources, which sum up to ~95GB of text.

Notes on training data: the final version of the corpus contains some non-Arabic words inline, which were not removed from sentences since that would affect tasks such as NER. Although non-Arabic characters were lowercased as a preprocessing step, Arabic characters have no upper or lower case, so there are no separate cased and uncased versions of the model. The corpus and vocabulary set are not restricted to Modern Standard Arabic; they also contain some dialectal Arabic.

Pretraining details: the model was trained using Google BERT's GitHub repository on a single TPU v3-8 provided for free by TFRC. The pretraining procedure follows BERT's training settings with some changes: 3M training steps with a batch size of 128, instead of 1M steps with a batch size of 256.

Loading the pretrained model: install torch or tensorflow and the Hugging Face transformers library, then initialize the model directly.

Results: for further details on the model's performance or any other queries, refer to Arabic-BERT.

Acknowledgements: thanks to Google for providing a free TPU for the training process and to Hugging Face for hosting this model on their servers 😊
$-/run
362
Huggingface
albert-xlarge-arabic
Arabic-ALBERT Xlarge

Arabic edition of the ALBERT xlarge pretrained language model. If you use any of these models in your work, please cite the Arabic-ALBERT work.

Pretraining data: the models were pretrained on ~4.4 billion words: the Arabic version of OSCAR (the unshuffled version of the corpus, filtered from Common Crawl) and a recent dump of Arabic Wikipedia.

Notes on training data: the final version of the corpus contains some non-Arabic words inline, which were not removed from sentences since that would affect tasks such as NER. Although non-Arabic characters were lowercased as a preprocessing step, Arabic characters have no upper or lower case, so there are no separate cased and uncased versions of the model. The corpus and vocabulary set are not restricted to Modern Standard Arabic; they also contain some dialectal Arabic.

Pretraining details: these models were trained using Google ALBERT's GitHub repository on a single TPU v3-8 provided for free by TFRC. The pretraining procedure follows BERT's training settings with some changes: 7M training steps with a batch size of 64, instead of 125K steps with a batch size of 4096.

Results: for further details on the models' performance or any other queries, refer to Arabic-ALBERT.

How to use: install torch or tensorflow and the Hugging Face transformers library, then initialize the model directly.

Acknowledgements: thanks to Google for providing a free TPU for the training process and to Hugging Face for hosting these models on their servers 😊
$-/run
219
Huggingface
albert-large-arabic
Arabic-ALBERT Large

Arabic edition of the ALBERT large pretrained language model. If you use any of these models in your work, please cite the Arabic-ALBERT work.

Pretraining data: the models were pretrained on ~4.4 billion words: the Arabic version of OSCAR (the unshuffled version of the corpus, filtered from Common Crawl) and a recent dump of Arabic Wikipedia.

Notes on training data: the final version of the corpus contains some non-Arabic words inline, which were not removed from sentences since that would affect tasks such as NER. Although non-Arabic characters were lowercased as a preprocessing step, Arabic characters have no upper or lower case, so there are no separate cased and uncased versions of the model. The corpus and vocabulary set are not restricted to Modern Standard Arabic; they also contain some dialectal Arabic.

Pretraining details: these models were trained using Google ALBERT's GitHub repository on a single TPU v3-8 provided for free by TFRC. The pretraining procedure follows BERT's training settings with some changes: 7M training steps with a batch size of 64, instead of 125K steps with a batch size of 4096.

Results: for further details on the models' performance or any other queries, refer to Arabic-ALBERT.

How to use: install torch or tensorflow and the Hugging Face transformers library, then initialize the model directly.

Acknowledgements: thanks to Google for providing a free TPU for the training process and to Hugging Face for hosting these models on their servers 😊
$-/run
57
Huggingface
hubert-large-arabic-transcribe
Platform did not provide a description for this model.
$-/run
7
Huggingface