Research-backup

Rank:

Average Model Cost: $0.0000

Number of Runs: 184

Models by this creator

t5-small-squad-qg-no-paragraph


Model Card of research-backup/t5-small-squad-qg-no-paragraph

This model is a fine-tuned version of t5-small for the question generation task on lmqg/qg_squad (dataset_name: default) via lmqg. It is fine-tuned without paragraph information, using only the sentence that contains the answer.

Overview
- Language model: t5-small
- Language: en
- Training data: lmqg/qg_squad (default)
- Online Demo: https://autoqg.net/
- Repository: https://github.com/asahi417/lm-question-generation
- Paper: https://arxiv.org/abs/2210.03992

Usage
With lmqg or with transformers (a usage sketch follows below).

Evaluation
- Metric (Question Generation): raw metric file

Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['sentence_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-small
- max_length: 128
- max_length_output: 32
- epoch: 8
- batch: 64
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.15

The full configuration can be found in the fine-tuning config file.

Citation
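The card's Usage section points to the lmqg library without the accompanying snippet. The sketch below shows how loading and generation might look with lmqg's TransformersQG wrapper; the constructor and generate_q argument names follow the library's usual documented pattern and are assumptions here, not text recovered from this card.

```python
# Minimal sketch, assuming the lmqg package is installed (pip install lmqg)
# and that TransformersQG / generate_q behave as in the library's documentation.
from lmqg import TransformersQG

# Load the sentence-level question generation model from the Hugging Face hub.
model = TransformersQG(language="en", model="research-backup/t5-small-squad-qg-no-paragraph")

# This variant was trained on ['sentence_answer'] inputs, so the context is just
# the sentence containing the answer rather than the full paragraph.
questions = model.generate_q(
    list_context=["William Turner was an English painter who specialised in watercolour landscapes."],
    list_answer=["William Turner"],
)
print(questions)
```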

Runs: 44 · Huggingface

bart-base-squad-qg-default


Model Card of research-backup/bart-base-squad-qg-default

This model is a fine-tuned version of facebook/bart-base for the question generation task on lmqg/qg_squad (dataset_name: default) via lmqg. It is fine-tuned without parameter search (the default configuration is taken from ERNIE-GEN).

Overview
- Language model: facebook/bart-base
- Language: en
- Training data: lmqg/qg_squad (default)
- Online Demo: https://autoqg.net/
- Repository: https://github.com/asahi417/lm-question-generation
- Paper: https://arxiv.org/abs/2210.03992

Usage
With lmqg or with transformers (a usage sketch follows below).

Evaluation
- Metric (Question Generation): raw metric file

Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-base
- max_length: 512
- max_length_output: 32
- epoch: 10
- batch: 32
- lr: 1.25e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.1

The full configuration can be found in the fine-tuning config file.

Citation
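The Usage section names transformers as an entry point but the call itself is missing. Below is a minimal sketch using the standard text2text-generation pipeline; the <hl> answer-highlighting convention reflects lmqg's paragraph_answer input format and is an assumption here, so verify it against the raw model card.

```python
# Minimal sketch, assuming a standard transformers text2text-generation pipeline
# and lmqg's convention of wrapping the answer span in <hl> tokens.
from transformers import pipeline

pipe = pipeline("text2text-generation", model="research-backup/bart-base-squad-qg-default")

# paragraph_answer input: the answer span is highlighted inside the paragraph.
paragraph = (
    "<hl> Beyonce <hl> further expanded her acting career, starring as blues "
    "singer Etta James in the 2008 musical biopic, Cadillac Records."
)
print(pipe(paragraph))
```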

Runs: 17 · Huggingface

bart-base-squad-qg-no-paragraph


Model Card of research-backup/bart-base-squad-qg-no-paragraph

This model is a fine-tuned version of facebook/bart-base for the question generation task on lmqg/qg_squad (dataset_name: default) via lmqg. It is fine-tuned without paragraph information, using only the sentence that contains the answer.

Overview
- Language model: facebook/bart-base
- Language: en
- Training data: lmqg/qg_squad (default)
- Online Demo: https://autoqg.net/
- Repository: https://github.com/asahi417/lm-question-generation
- Paper: https://arxiv.org/abs/2210.03992

Usage
With lmqg or with transformers.

Evaluation
- Metric (Question Generation): raw metric file

Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['sentence_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-base
- max_length: 128
- max_length_output: 32
- epoch: 3
- batch: 64
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15

The full configuration can be found in the fine-tuning config file.

Citation

Runs: 16 · Huggingface

relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-0-child-prototypical


relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-0-child-prototypical

RelBERT fine-tuned from roberta-base on relbert/semeval2012_relational_similarity_v6. Fine-tuning is done via the RelBERT library (see the repository for more detail). It achieves the following results on the relation understanding tasks:

Analogy Question (dataset, full result):
- Accuracy on SAT (full): 0.44385026737967914
- Accuracy on SAT: 0.4391691394658754
- Accuracy on BATS: 0.4952751528627015
- Accuracy on U2: 0.39035087719298245
- Accuracy on U4: 0.4444444444444444
- Accuracy on Google: 0.71

Lexical Relation Classification (dataset, full result):
- Micro F1 score on BLESS: 0.9159258701220431
- Micro F1 score on CogALexV: 0.8164319248826291
- Micro F1 score on EVALution: 0.6321776814734561
- Micro F1 score on K&H+N: 0.9479029004660221
- Micro F1 score on ROOT09: 0.8755875900971483

Relation Mapping (dataset, full result):
- Accuracy on Relation Mapping: 0.7786111111111111

Usage
This model can be used through the relbert library. Install the library via pip and activate the model as in the sketch below.

Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 6
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
- data_level: child_prototypical

The full configuration can be found in the fine-tuning parameter file.

Reference
If you use any resource from RelBERT, please consider citing our paper.
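The card states that the model is activated through the relbert library, but the snippet itself was not carried over. The sketch below follows the library's usual pattern; the RelBERT class and get_embedding call are assumptions taken from that pattern rather than from this card.

```python
# Minimal sketch, assuming the relbert package is installed (pip install relbert)
# and exposes RelBERT.get_embedding as in the library's documentation.
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-0-child-prototypical")

# Embed a word pair as a single relation vector
# (768-dimensional for a roberta-base backbone).
vector = model.get_embedding(["Tokyo", "Japan"])
print(len(vector))
```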

Runs: 16 · Huggingface

bart-large-squad-qg-default


Model Card of research-backup/bart-large-squad-qg-default

This model is a fine-tuned version of facebook/bart-large for the question generation task on lmqg/qg_squad (dataset_name: default) via lmqg. It is fine-tuned without parameter search (the default configuration is taken from ERNIE-GEN).

Overview
- Language model: facebook/bart-large
- Language: en
- Training data: lmqg/qg_squad (default)
- Online Demo: https://autoqg.net/
- Repository: https://github.com/asahi417/lm-question-generation
- Paper: https://arxiv.org/abs/2210.03992

Usage
With lmqg or with transformers.

Evaluation
- Metric (Question Generation): raw metric file

Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-large
- max_length: 512
- max_length_output: 32
- epoch: 10
- batch: 8
- lr: 1.25e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.1

The full configuration can be found in the fine-tuning config file.

Citation

Runs: 15 · Huggingface

bart-base-squad-qg-no-answer


Model Card of research-backup/bart-base-squad-qg-no-answer

This model is a fine-tuned version of facebook/bart-base for the question generation task on lmqg/qg_squad (dataset_name: default) via lmqg. It is fine-tuned without answer information, i.e. it generates a question given only a paragraph (whereas the normal model is fine-tuned to generate a question given a paragraph and an associated answer within it).

Overview
- Language model: facebook/bart-base
- Language: en
- Training data: lmqg/qg_squad (default)
- Online Demo: https://autoqg.net/
- Repository: https://github.com/asahi417/lm-question-generation
- Paper: https://arxiv.org/abs/2210.03992

Usage
With lmqg or with transformers (a usage sketch follows below).

Evaluation
- Metric (Question Generation): raw metric file

Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_sentence']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-base
- max_length: 512
- max_length_output: 32
- epoch: 4
- batch: 32
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15

The full configuration can be found in the fine-tuning config file.

Citation
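Because this variant takes ['paragraph_sentence'] inputs, the prompt is presumably the paragraph with the target sentence, rather than an answer span, highlighted. The sketch below illustrates that reading with the transformers pipeline; the <hl> sentence-highlighting format is inferred from the hyperparameters above and should be checked against the raw model card.

```python
# Minimal sketch: the <hl> markers around the *sentence* (paragraph_sentence input)
# are inferred from the training configuration above, not copied from this card.
from transformers import pipeline

pipe = pipeline("text2text-generation", model="research-backup/bart-base-squad-qg-no-answer")

# Highlight the sentence the question should be about, inside its paragraph.
paragraph = (
    "Beyonce starred in several films. <hl> She played blues singer Etta James "
    "in the 2008 musical biopic, Cadillac Records. <hl>"
)
print(pipe(paragraph))
```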

Runs: 15 · Huggingface

Similar creators