Average Model Cost: $0.0000

Number of Runs: 232,081

OpenLLaMA is an open-source reproduction of Meta AI's LLaMA large language model. It provides pre-trained models in 3B, 7B, and 13B parameter sizes, each trained on 1 trillion tokens. The weights are released in both EasyLM and PyTorch formats under the Apache 2.0 license, and the models can be loaded with the Hugging Face Transformers library.

Evaluated with lm-eval-harness, the models perform comparably to the original LLaMA model and to GPT-J. Training used the RedPajama dataset and the EasyLM training framework.

Feedback from the community is encouraged. The development team includes researchers from Berkeley AI Research, and the project acknowledges support from the Google TPU Research Cloud program and collaboration with Stability AI.
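Since the description notes that the weights load through the Hugging Face Transformers library, here is a minimal sketch of doing so. The hub id `openlm-research/open_llama_3b`, the use of the slow `LlamaTokenizer`, and the generation settings are assumptions, not part of this page; `device_map="auto"` additionally requires the `accelerate` package.

```python
# Minimal sketch: loading an OpenLLaMA checkpoint with Hugging Face
# Transformers. The hub id below is an assumed example, not taken
# from this page.
MODEL_PATH = "openlm-research/open_llama_3b"  # assumed hub id

def generate_text(prompt: str, max_new_tokens: int = 32) -> str:
    """Load the model and return a completion for `prompt`."""
    # Imports are deferred so the sketch can be inspected/imported
    # without downloading the multi-GB weights or installing torch.
    import torch
    from transformers import LlamaForCausalLM, LlamaTokenizer

    # The slow LlamaTokenizer is used here; the fast tokenizer has
    # been reported to handle whitespace differently on these
    # checkpoints (an assumption worth verifying for your version).
    tokenizer = LlamaTokenizer.from_pretrained(MODEL_PATH)
    model = LlamaForCausalLM.from_pretrained(
        MODEL_PATH,
        torch_dtype=torch.float16,
        device_map="auto",  # requires the `accelerate` package
    )
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    input_ids = input_ids.to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate_text("Q: What is the largest animal?\nA:"))
```

The heavy work is kept inside the function and the `__main__` guard, so importing the module stays cheap; swap `MODEL_PATH` for the 7B or 13B checkpoint id to load the larger variants.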

