Neulab
Average Model Cost: $0.0000
Number of Runs: 156,551
Models by this creator
codebert-java
CodeBERT-Java is a masked language model trained on a large corpus of Java code. Given a Java snippet with masked-out tokens, it predicts the missing tokens, which makes it useful for tasks such as code completion, code infilling, and scoring generated code (see the usage sketch after this entry).
$-/run · 68.6K runs · Huggingface
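All of the neulab/codebert-* checkpoints are RoBERTa-style masked language models, so the simplest way to try one is the Hugging Face fill-mask pipeline. A minimal sketch (the Java snippet and mask placement are illustrative; the same pattern works for the python, cpp, javascript, and c variants by swapping the model name):

```python
from transformers import pipeline

# Load the Java masked language model from the Hugging Face Hub.
fill_mask = pipeline("fill-mask", model="neulab/codebert-java")

# The model is RoBERTa-based, so the mask token is "<mask>".
for pred in fill_mask("public int add(int a, int b) { return a <mask> b; }"):
    print(pred["token_str"], pred["score"])  # candidate token and its probability
```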
codebert-python
The codebert-python model is a masked language model trained on a large corpus of Python source code. Rather than generating code left to right, it predicts masked-out tokens in a given snippet, so it can suggest completions for missing pieces of a Python function or statement based on the surrounding context (see the sketch after this entry).
$-/run · 57.4K runs · Huggingface
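For more control than the pipeline offers, the same checkpoint can be queried directly with AutoModelForMaskedLM. A minimal sketch (the square-function snippet is an illustrative example, not from the model card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("neulab/codebert-python")
model = AutoModelForMaskedLM.from_pretrained("neulab/codebert-python")

# Mask a single token inside a Python snippet.
code = f"def square(x): return x {tokenizer.mask_token} x"
inputs = tokenizer(code, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the mask position and list the top-5 candidate tokens.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5 = logits[0, mask_pos].topk(5).indices[0]
print([tokenizer.decode(t).strip() for t in top5])  # '*' should rank highly
```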
codebert-cpp
The codebert-cpp model is a language model trained on C++ code with the masked language modeling objective. It was trained for 1,000,000 steps with a batch size of 32 on the codeparrot/github-code-clean dataset. It can be used for various code-related tasks or integrated into other models; for more information, see https://github.com/neulab/code-bert-score.
$-/run · 18.7K runs · Huggingface
codebert-javascript
This is a microsoft/codebert-base-mlm model, trained for 1,000,000 steps (batch size 32) on the masked language modeling task over JavaScript code from the codeparrot/github-code-clean dataset. It is intended for use in CodeBERTScore (https://github.com/neulab/code-bert-score), but can be used for any other model or task; see that repository for details and citation information (a scoring sketch follows this entry).
$-/run · 8.0K runs · Huggingface
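Since several of these checkpoints exist primarily to power CodeBERTScore, a scoring sketch may be more representative than raw mask filling. This follows the code-bert-score README (pip install code-bert-score); the exact language tag used to select the JavaScript backbone is an assumption here:

```python
import code_bert_score

predictions = ["const sum = (a, b) => a + b;"]
references = ["function sum(a, b) { return a + b; }"]

# score() returns per-pair precision, recall, F1, and F3 tensors; the lang
# argument selects the matching neulab/codebert-* backbone ("js" assumed).
precision, recall, f1, f3 = code_bert_score.score(
    cands=predictions, refs=references, lang="js"
)
print(f1)
```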
gpt2-finetuned-wikitext103
$-/run · 2.2K runs · Huggingface
omnitab-large-finetuned-wtq
$-/run · 751 runs · Huggingface
codebert-c
This is a microsoft/codebert-base-mlm model, trained for 1,000,000 steps (batch size 32) on the masked language modeling task over C code from the codeparrot/github-code-clean dataset. It is intended for use in CodeBERTScore (https://github.com/neulab/code-bert-score), but can be used for any other model or task; see that repository for details and citation information.
$-/run · 566 runs · Huggingface
omnitab-large
OmniTab is a table-based question answering model proposed in "OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering"; the original GitHub repository is https://github.com/jzbjyb/OmniTab. neulab/omnitab-large (based on the BART architecture) is initialized from microsoft/tapex-large and continually pretrained on natural and synthetic data (a usage sketch follows this entry).
$-/run · 174 runs · Huggingface
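The OmniTab model card uses the standard TAPEX-style interface in transformers: the tokenizer linearizes a pandas DataFrame together with the question, and the BART-based model generates the answer as text. A minimal sketch (the Olympics table mirrors the example on the model card):

```python
import pandas as pd
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large")

table = pd.DataFrame({
    "year": [1896, 1900, 2008, 2012],
    "city": ["athens", "paris", "beijing", "london"],
}).astype(str)  # the table tokenizer expects string-valued cells

query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")

outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))  # e.g. [' 2008']
```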
reatt-large-nq-fiqa
$-/run · 110 runs · Huggingface
omnitab-large-1024shot-finetuned-wtq-1024shot
neulab/omnitab-large-1024shot-finetuned-wtq-1024shot (based on the BART architecture) is an OmniTab model (see the omnitab-large entry above) initialized from neulab/omnitab-large-1024shot and fine-tuned on WikiTableQuestions in the 1024-shot setting. Usage follows the same pattern as neulab/omnitab-large.
$-/run · 107 runs · Huggingface