lmqg

Rank:

Average Model Cost: $0.0000

Number of Runs: 3,575

Models by this creator

t5-base-squad-qag

Platform did not provide a description for this model.


$-/run

456

Huggingface

t5-small-squad-qg-ae

Model Card of lmqg/t5-small-squad-qg-ae

This model is a fine-tuned version of t5-small for question generation and answer extraction, trained jointly on lmqg/qg_squad (dataset_name: default) via lmqg.

Overview
- Language model: t5-small
- Language: en
- Training data: lmqg/qg_squad (default)
- Online Demo: https://autoqg.net/
- Repository: https://github.com/asahi417/lm-question-generation
- Paper: https://arxiv.org/abs/2210.03992

Usage: with lmqg or with transformers.

Evaluation
- Metric (Question Generation): raw metric file
- Metric (Question & Answer Generation): raw metric file
- Metric (Answer Extraction): raw metric file

Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: t5-small
- max_length: 512
- max_length_output: 32
- epoch: 7
- batch: 64
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.15

The full configuration can be found in the fine-tuning config file.

Citation
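Because this checkpoint is trained with prefix_types ['qg', 'ae'], each input carries a task prefix plus a paragraph in which the target span is highlighted. A minimal sketch of that input assembly, assuming the conventional lmqg-style prefix strings and <hl> highlight token (these exact strings are assumptions, not taken from the card):

```python
# Hedged sketch: assembling inputs for a joint QG+AE checkpoint.
# For 'qg' the answer span is highlighted; for 'ae' the target sentence is.
# The prefix strings and the <hl> token are assumed conventions.
HL = "<hl>"
PREFIXES = {"qg": "generate question:", "ae": "extract answer:"}

def build_input(task: str, paragraph: str, span: str) -> str:
    """Wrap `span` in highlight tokens and prepend the task prefix."""
    if task not in PREFIXES:
        raise ValueError(f"unknown task: {task}")
    highlighted = paragraph.replace(span, f"{HL} {span} {HL}", 1)
    return f"{PREFIXES[task]} {highlighted}"

example = build_input(
    "qg",
    "Beyonce lived in Houston during her childhood.",
    "Houston",
)
# -> "generate question: Beyonce lived in <hl> Houston <hl> during her childhood."
```

The same function serves both tasks of the joint model; only the prefix and the highlighted unit change.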


$-/run

426

Huggingface

t5-base-squad-qg

Model Card of lmqg/t5-base-squad-qg

This model is a fine-tuned version of t5-base for the question generation task on lmqg/qg_squad (dataset_name: default) via lmqg.

Overview
- Language model: t5-base
- Language: en
- Training data: lmqg/qg_squad (default)
- Online Demo: https://autoqg.net/
- Repository: https://github.com/asahi417/lm-question-generation
- Paper: https://arxiv.org/abs/2210.03992

Usage: with lmqg or with transformers.

Evaluation
- Metric (Question Generation): raw metric file
- Metric (Question & Answer Generation, Reference Answer): each question is generated from the gold answer. raw metric file
- Metric (Question & Answer Generation, Pipeline Approach): each question is generated from the answer produced by lmqg/t5-base-squad-ae. raw metric file
- Metrics (Question Generation, Out-of-Domain)

Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-base
- max_length: 512
- max_length_output: 32
- epoch: 5
- batch: 16
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15

The full configuration can be found in the fine-tuning config file.

Citation
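The t5-base run pairs a smaller per-step batch with gradient accumulation; a quick arithmetic check shows its effective batch size matches the 64 used in the t5-small run:

```python
# Quick check: per-step batch times accumulation steps gives the effective
# batch size each optimizer update sees, using the values from this card.
batch = 16
gradient_accumulation_steps = 4
effective_batch = batch * gradient_accumulation_steps
print(effective_batch)  # 64
```

Accumulating gradients this way keeps the update statistics of a batch-64 run while fitting the larger t5-base model in memory.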


$-/run

377

Huggingface

mt5-small-jaquad-qg-ae

Model Card of lmqg/mt5-small-jaquad-qg-ae

This model is a fine-tuned version of google/mt5-small for question generation and answer extraction, trained jointly on lmqg/qg_jaquad (dataset_name: default) via lmqg.

Overview
- Language model: google/mt5-small
- Language: ja
- Training data: lmqg/qg_jaquad (default)
- Online Demo: https://autoqg.net/
- Repository: https://github.com/asahi417/lm-question-generation
- Paper: https://arxiv.org/abs/2210.03992

Usage: with lmqg or with transformers.

Evaluation
- Metric (Question Generation): raw metric file
- Metric (Question & Answer Generation): raw metric file
- Metric (Answer Extraction): raw metric file

Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_jaquad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: google/mt5-small
- max_length: 512
- max_length_output: 32
- epoch: 24
- batch: 64
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.15

The full configuration can be found in the fine-tuning config file.

Citation
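The cards point to usage "with lmqg"; a minimal loading sketch, assuming the lmqg package's TransformersQG entry point as documented in the lm-question-generation repository (installed via pip install lmqg; the checkpoint downloads on first use):

```python
# Hedged sketch: loading the joint QG+AE checkpoint through the lmqg toolkit.
# TransformersQG and generate_qa are taken from the lmqg project README;
# the lmqg package itself is an assumed dependency here.
MODEL_ID = "lmqg/mt5-small-jaquad-qg-ae"

def load_model(model_id: str = MODEL_ID):
    # Import lazily so the snippet can be read without lmqg installed.
    from lmqg import TransformersQG
    return TransformersQG(model=model_id)

# Usage (downloads the checkpoint on first call):
#   qag = load_model()
#   pairs = qag.generate_qa("フェルディナント大公は1914年にサラエボで暗殺された。")
#   # -> list of (question, answer) pairs extracted from the paragraph
```

Because this is the joint qg+ae variant, a single generate_qa call handles both answer extraction and question generation, with no gold answers required.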


$-/run

293

Huggingface
