Pedramyazdipoor
Rank:
Average Model Cost: $0.0000
Number of Runs: 62,503
Models by this creator
persian_xlm_roberta_large
The persian_xlm_roberta_large model is a large-scale pre-trained language model for Persian text. It is trained on a wide range of Persian language data to understand and generate text. It can be used for various natural language processing tasks such as question answering, text classification, and text generation.
$-/run
62.5K
Huggingface
parsbert_question_answering_PQuAD
ParsBERT Fine-Tuned for the Question Answering Task

ParsBERT is a monolingual language model based on Google's BERT architecture. It is pre-trained on large Persian corpora with various writing styles from numerous subjects (e.g., scientific texts, novels, news), comprising more than 3.9M documents, 73M sentences, and 1.3B words. In this project I fine-tuned ParsBERT on the PQuAD dataset for the extractive question answering task. Our source code is here. Paper presenting ParsBERT: arXiv:2005.12515. Paper presenting the PQuAD dataset: arXiv:2202.06219.

Introduction
This model is fine-tuned on the PQuAD training set and is ready to use. The long training time encouraged me to publish this model in order to make life easier for those who need it.

Hyperparameters
I set batch_size to 32 due to the GPU memory limitations in Google Colab.

Performance
Evaluated on the PQuAD Persian test set. I also trained for more than 2 epochs, but got worse results. Our XLM-RoBERTa Large outperforms our ParsBERT, but the former is more than 3 times larger than the latter, so comparing the two is not fair.

Question Answering on the Test Set of the PQuAD Dataset

How to use
PyTorch / TensorFlow. Inference code for PyTorch is provided; I leave inference for TensorFlow as an exercise for you :) . There are some considerations for inference:
- The start index of the answer must be smaller than (or equal to) its end index.
- The answer span must lie within the context.
- The selected span must be the most probable choice among the N candidate pairs.

Acknowledgments
We did this project thanks to the fantastic work done by HooshvareLab. We also express our gratitude to Newsha Shahbodaghkhan for facilitating dataset gathering.

Contributors
Pedram Yazdipoor : LinkedIn

Releases
Release v0.2 (Sep 19, 2022)
This is the second version of our ParsBERT for Question Answering on PQuAD.
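The inference considerations listed above (start before end, span inside the context, highest-scoring candidate) can be sketched as a span-selection routine over the model's start/end logits. This is a minimal, framework-agnostic sketch; the function name, arguments, and `max_answer_len` default are illustrative assumptions, not the author's actual implementation.

```python
def select_best_span(start_logits, end_logits, context_start, context_end,
                     max_answer_len=30):
    """Pick the most probable (start, end) answer span, subject to:
      1. start <= end (the start index never follows the end index),
      2. both indices lie within the context token range, and
      3. the pair maximizes start_logit + end_logit over all candidates.
    """
    best_score = float("-inf")
    best_span = None
    for s in range(context_start, context_end + 1):
        # Cap the span length so very long answers are not considered.
        for e in range(s, min(s + max_answer_len, context_end) + 1):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best_score, best_span = score, (s, e)
    return best_span
```

In practice the two logit vectors would come from a forward pass of the fine-tuned QA model, and the returned token indices would be mapped back to a character span in the Persian context via the tokenizer's offset mapping.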
$-/run
16
Huggingface