Bioformers

Rank:

Average Model Cost: $0.0000

Number of Runs: 9,297

Models by this creator

bioformer-8L


Platform did not provide a description for this model.


$-/run

4.6K runs

Huggingface

bioformer-8L-mnli


bioformer-cased-v1.0 fine-tuned on the MNLI dataset for 2 epochs. The fine-tuning was performed on two NVIDIA GeForce GTX 1080 Ti GPUs (11 GB). The parameters are:

Evaluation results: eval_accuracy = 0.803973

Speed: in our experiments, the inference speed of Bioformer is 3x as fast as BERT-base/BioBERT/PubMedBERT, and 40% faster than DistilBERT.

More information: The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data. (source: https://huggingface.co/datasets/glue)
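As a sketch of how an MNLI fine-tuned checkpoint like this is typically used with the Hugging Face transformers library: the model scores a (premise, hypothesis) pair and the argmax logit maps to entailment, neutral, or contradiction. The checkpoint id "bioformers/bioformer-8L-mnli" and the hard-coded label ordering below are assumptions, not confirmed by this page; check the model's hub page and config before relying on them.

```python
# Sketch: three-way NLI inference with an MNLI fine-tuned checkpoint.
# The checkpoint id and label ordering are assumptions; verify them
# against the model's Hugging Face page and config.

MNLI_ID2LABEL = {0: "entailment", 1: "neutral", 2: "contradiction"}  # assumed ordering

def logits_to_label(logits, id2label=MNLI_ID2LABEL):
    """Map raw classifier logits to an NLI label via argmax."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return id2label[best]

def predict_nli(premise, hypothesis, model_id="bioformers/bioformer-8L-mnli"):
    """Score a (premise, hypothesis) pair; requires torch + transformers."""
    import torch  # imported lazily so the pure helper above works standalone
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    # MNLI inputs are encoded as a sentence pair: (premise, hypothesis)
    enc = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits[0]
    # Prefer the label names shipped in the model config when available.
    return model.config.id2label[int(logits.argmax())]
```

For a well-trained MNLI model, a call like `predict_nli("A dog runs in the park.", "An animal is outside.")` would be expected to lean toward entailment, but the exact output depends on the checkpoint.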


$-/run

203 runs

Huggingface

bioformer-8L-qnli


bioformer-8L fine-tuned on the QNLI dataset for 2 epochs. The fine-tuning was performed on two NVIDIA GeForce GTX 1080 Ti GPUs (11 GB). The parameters are:

Evaluation results: eval_accuracy = 0.883397

More information: The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into a sentence pair classification task by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of the GLUE benchmark. (source: https://paperswithcode.com/dataset/qnli) Original GLUE paper: https://arxiv.org/abs/1804.07461
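The question-to-sentence-pair framing described above can be sketched the same way: QNLI is a binary task, so the model labels each (question, sentence) pair as entailment (the sentence answers the question) or not_entailment. The checkpoint id "bioformers/bioformer-8L-qnli" and the label ordering are assumptions to verify against the model's config.

```python
# Sketch: binary QNLI inference (does this sentence answer the question?).
# Checkpoint id and label ordering are assumptions; verify against the model config.

QNLI_ID2LABEL = {0: "entailment", 1: "not_entailment"}  # assumed GLUE ordering

def contains_answer(label):
    """QNLI reframes SQuAD: 'entailment' means the sentence answers the question."""
    return label == "entailment"

def predict_qnli(question, sentence, model_id="bioformers/bioformer-8L-qnli"):
    """Classify a (question, sentence) pair; requires torch + transformers."""
    import torch  # lazy imports keep the pure helpers usable on their own
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    # The pair is encoded question-first, mirroring how QNLI was constructed.
    enc = tokenizer(question, sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        pred = int(model(**enc).logits.argmax(dim=-1))
    return model.config.id2label[pred]
```

Because QNLI deliberately removes the lexical-overlap shortcut, a model that scores well here has to do more than match words between the question and the candidate sentence.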


$-/run

29 runs

Huggingface

Similar creators