Allenai

Rank:

Average Model Cost: $0.0000

Number of Runs: 5,795,705

Models by this creator

scibert_scivocab_uncased


scibert_scivocab_uncased is a BERT-based language model pretrained on a large corpus of scientific papers. Unlike the original BERT, it uses SciVocab, a WordPiece vocabulary built from scientific text, so domain terminology tokenizes more cleanly. The model is a strong starting point for natural language processing tasks in the scientific domain.


Cost: $-/run
Runs: 3.3M
Platform: Huggingface
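
For orientation, here is a minimal sketch of using the checkpoint as a feature extractor with the Hugging Face transformers library; the example sentence is an arbitrary placeholder:

```python
# Minimal sketch: SciBERT as a feature extractor
# (pip install transformers torch).
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

sentence = "The mutation increased kinase activity in vitro."  # placeholder text
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model(**inputs)

# Contextual token embeddings: (batch, sequence_length, hidden_size=768)
print(outputs.last_hidden_state.shape)
```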

longformer-base-4096


longformer-base-4096 is a BERT-like transformer model designed specifically for long documents: it processes sequences of up to 4,096 tokens. Longformer combines sliding-window local attention with task-specific global attention, allowing it to capture both local and long-range dependencies in the text. It was initialized from the RoBERTa checkpoint, pretrained with masked language modeling on long documents, and can be fine-tuned for downstream tasks.


Cost: $-/run
Runs: 2.3M
Platform: Huggingface
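
A short sketch of encoding a long document, assuming the transformers library; the repeated sentence is only a placeholder to produce a long input, and the attention mask follows Longformer's convention of flagging at least the first token for global attention:

```python
# Sketch: long-document encoding with Longformer's sparse attention.
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

# Placeholder long input, well past BERT's 512-token limit.
text = " ".join(["Long documents exceed the limits of standard BERT."] * 150)
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)

# All tokens use local (windowed) attention; flagged tokens also attend
# globally. Longformer expects global attention on at least the <s> token.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

outputs = model(**inputs, global_attention_mask=global_attention_mask)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768)
```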

longformer-large-4096


longformer-large-4096 is the large variant of the Longformer encoder. It shares the base model's sparse-attention design and 4,096-token maximum sequence length, but has the capacity of RoBERTa-large, from which it was initialized before masked-language-model pretraining on long documents. It is intended as a pretrained encoder to be fine-tuned on downstream tasks such as long-document classification and question answering.


Cost: $-/run
Runs: 25.4K
Platform: Huggingface
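
As a sketch of the fine-tuning setup, the checkpoint can be wrapped with a task head via transformers; the classification head below (num_labels=2 is an arbitrary choice) is randomly initialized and only becomes meaningful after training:

```python
# Sketch: pretrained Longformer encoder plus an untrained classification
# head, ready for fine-tuning on a long-document task.
from transformers import LongformerForSequenceClassification, LongformerTokenizer

name = "allenai/longformer-large-4096"
tokenizer = LongformerTokenizer.from_pretrained(name)
model = LongformerForSequenceClassification.from_pretrained(name, num_labels=2)

doc = "A very long report ..."  # placeholder; up to 4,096 tokens after tokenization
inputs = tokenizer(doc, return_tensors="pt", truncation=True, max_length=4096)

logits = model(**inputs).logits  # (1, num_labels); meaningful only after fine-tuning
print(logits.shape)
```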

ivila-row-layoutlm-finetuned-s2vl-v2


ivila-row-layoutlm-finetuned-s2vl-v2 is a token classification model from Allen AI's VILA work on structured content extraction from scientific PDFs. It is a LayoutLM model fine-tuned on the S2-VL dataset with the I-VILA method, which injects indicator tokens at visual group boundaries; the "row" variant uses text rows (visual lines) on the page as the layout groups. Given the tokens of a page and their layout, the model predicts a category for each token (title, author, section heading, body text, and so on), which makes it useful for parsing the structure of scientific documents.


Cost: $-/run
Runs: 11.9K
Platform: Huggingface
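
The intended interface is Allen AI's vila package together with real PDF layout data; purely as a rough sketch of the raw transformers loading path, and with placeholder words and bounding boxes, usage might look like this:

```python
# Rough sketch only: the checkpoint is a LayoutLM token classifier, so it
# expects one (x0, y0, x1, y1) box per token, normalized to a 0-1000 grid.
# Real usage goes through Allen AI's vila package with actual PDF layout;
# the words and zeroed boxes here are placeholders.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

name = "allenai/ivila-row-layoutlm-finetuned-s2vl-v2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name)

words = ["Attention", "Is", "All", "You", "Need"]  # placeholder page tokens
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
bbox = torch.zeros(1, enc["input_ids"].shape[1], 4, dtype=torch.long)  # placeholder boxes

logits = model(input_ids=enc["input_ids"], bbox=bbox,
               attention_mask=enc["attention_mask"]).logits
print([model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()])
```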

specter2_adhoc_query


specter2_adhoc_query is an adapter for the SPECTER2 model, which is trained on scientific papers and generates task-specific embeddings for scientific tasks. This adapter targets ad-hoc search, where papers must be retrieved for a short textual query. For papers, the input is the concatenated title and abstract; for queries, the query text itself. SPECTER2 is a BERT base model extended with adapters, trained on over 6 million triplets of scientific paper citations and evaluated on the SciRepEval benchmark. The model and adapter were developed by Allen AI.


Cost: $-/run
Runs: 11.5K
Platform: Huggingface
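
Following the usage documented for SPECTER2, the adapter is mounted on the allenai/specter2_base backbone with the adapters library; a minimal sketch for embedding a query (the query string is a placeholder):

```python
# Sketch: query embedding with the SPECTER2 ad-hoc query adapter
# (pip install adapters transformers torch). The embedding is the
# final-layer [CLS] vector, per the SPECTER2 convention.
from adapters import AutoAdapterModel
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/specter2_base")
model = AutoAdapterModel.from_pretrained("allenai/specter2_base")
model.load_adapter("allenai/specter2_adhoc_query", source="hf",
                   load_as="adhoc_query", set_active=True)

query = "transformers for long scientific documents"  # placeholder query
inputs = tokenizer(query, return_tensors="pt", truncation=True, max_length=512)
embedding = model(**inputs).last_hidden_state[:, 0, :]  # [CLS] embedding
print(embedding.shape)  # (1, 768)
```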

longformer-large-4096-finetuned-triviaqa


longformer-large-4096-finetuned-triviaqa is longformer-large-4096 fine-tuned for question answering on the TriviaQA dataset. Because the Longformer architecture handles sequences of thousands of tokens, the model can search long supporting documents for an answer: given a question and a context, it predicts the start and end of the answer span within the context.


Cost: $-/run
Runs: 9.2K
Platform: Huggingface
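
A minimal sketch of extractive QA with this checkpoint under transformers; the question/context pair is an arbitrary placeholder, and the answer is decoded from the highest-scoring start/end positions:

```python
# Sketch: span-extraction QA. The model scores every token as a possible
# answer start/end; we decode the best span from the context.
import torch
from transformers import LongformerForQuestionAnswering, LongformerTokenizer

name = "allenai/longformer-large-4096-finetuned-triviaqa"
tokenizer = LongformerTokenizer.from_pretrained(name)
model = LongformerForQuestionAnswering.from_pretrained(name)

question = "Who developed SciBERT?"  # placeholder question/context
context = "SciBERT is a pretrained language model for scientific text developed by Allen AI."
enc = tokenizer(question, context, return_tensors="pt")
outputs = model(**enc)

start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(enc["input_ids"][0][start:end]))
```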
