Maintainer: stabilityai

Last updated 5/28/2024

Model Link: View on HuggingFace
API Spec: View on HuggingFace
Github Link: No Github link provided
Paper Link: No paper link provided


Model overview

StableBeluga-7B is a Llama2 7B model fine-tuned on an Orca-style dataset by Stability AI. It builds on Meta's Llama 2 foundation model, with additional fine-tuning to improve its language understanding and generation capabilities. Compared to similar models like StableBeluga2 and StableLM-Tuned-Alpha, StableBeluga-7B has a smaller parameter count but is tailored to produce high-quality responses across a variety of conversational scenarios.

Model inputs and outputs

StableBeluga-7B is a text-to-text model, taking in natural language prompts and generating coherent and relevant responses. The model uses a specific prompt format that includes a system prompt, user prompt, and space for the model's output. This format helps the model understand the context and constraints of the task at hand.


  • System prompt: Provides instructions and guidelines for the model to follow, such as behaving in a helpful and safe manner.
  • User prompt: The user's input or request that the model should respond to.
  • Model response: The generated text output from the model, which aims to be informative, coherent, and aligned with the provided system prompt.
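The three pieces above can be assembled into a single prompt string. The `### System:` / `### User:` / `### Assistant:` headers below follow the prompt format shown on the StableBeluga model card; treat this helper as an illustrative sketch rather than an official API:

```python
def build_prompt(system_prompt: str, user_prompt: str) -> str:
    # Headers follow the prompt format shown on the StableBeluga-7B model
    # card; the trailing "### Assistant:" line leaves space for the model's
    # own output.
    return (
        f"### System:\n{system_prompt}\n\n"
        f"### User:\n{user_prompt}\n\n"
        "### Assistant:\n"
    )

prompt = build_prompt(
    "You are StableBeluga, a helpful and harmless AI assistant.",
    "Write me a haiku about beluga whales.",
)
```

The resulting string can then be tokenized and passed to the model for generation.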


StableBeluga-7B demonstrates strong language understanding and generation capabilities, allowing it to engage in a wide range of conversational tasks. The model can assist with information lookup, task completion, creative writing, and even open-ended discussions. Its fine-tuning on the Orca-style dataset helps it maintain a coherent and consistent personality while providing helpful and engaging responses.

What can I use it for?

StableBeluga-7B can be a valuable tool for developers and researchers working on conversational AI applications. Some potential use cases include:

  • Virtual assistants: Integrate StableBeluga-7B into your virtual assistant to provide high-quality, natural language responses to user queries.
  • Chatbots: Use StableBeluga-7B as the language model behind your chatbot, enabling more engaging and informative conversations.
  • Content generation: Leverage StableBeluga-7B's creative capabilities to generate engaging written content, such as stories, articles, or poetry.

When using StableBeluga-7B in your projects, be sure to follow the STABLE BELUGA NON-COMMERCIAL COMMUNITY LICENSE AGREEMENT provided by the maintainer, Stability AI.

Things to try

One interesting aspect of StableBeluga-7B is its ability to maintain a consistent personality and tone throughout a conversation. Try prompting the model with a series of related queries and observe how it builds upon previous responses, demonstrating coherence and contextual understanding.
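Multi-turn usage is not spelled out on the model card, but one common convention is to replay the completed exchanges in the same header format before the new query. The hypothetical `build_chat_prompt` helper below sketches this approach purely for illustration:

```python
def build_chat_prompt(system_prompt, history, next_user_msg):
    # Multi-turn handling is an assumption here: replaying completed
    # (user, assistant) exchanges in the single-turn header format is one
    # common convention, not a documented feature of the model.
    prompt = f"### System:\n{system_prompt}\n\n"
    for user_msg, assistant_msg in history:
        prompt += f"### User:\n{user_msg}\n\n### Assistant:\n{assistant_msg}\n\n"
    # Leave the final assistant header open for the model to complete.
    prompt += f"### User:\n{next_user_msg}\n\n### Assistant:\n"
    return prompt

chat = build_chat_prompt(
    "You are a friendly travel guide.",
    [("Suggest a city for a weekend trip.", "How about Lisbon?")],
    "What should I see there first?",
)
```

Because each earlier answer is visible in the prompt, the model can refer back to it, which is what makes the coherence experiment above observable.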

Additionally, you can explore the model's creative capabilities by providing open-ended prompts for story generation, poetry writing, or other types of creative text production. Observe how the model generates novel and imaginative content while staying true to the provided guidelines.

This summary was produced with help from an AI and may contain inaccuracies; check the links above to read the original source documents.

Related Models




StableBeluga-13B

StableBeluga-13B is a large language model developed by Stability AI. It is a 13B parameter Llama2 model that has been fine-tuned on an internal Orca-style dataset. This model is part of Stability AI's suite of language models, which also includes similar models like StableBeluga-7B and StableBeluga2. These models are designed to be helpful and safe, with a focus on following instructions and engaging in open-ended conversations.

Model inputs and outputs

StableBeluga-13B is a text-based language model, meaning it takes in text prompts as input and generates text as output. The model is designed to handle a wide range of conversational and task-oriented prompts, from open-ended questions to specific instructions.

  • Text prompts: The model accepts text prompts as input, which can include questions, statements, or instructions.
  • System prompt: The model should be used with a specific system prompt format, which sets the tone and guidelines for the assistant's behavior.
  • Generated text: The model generates coherent and relevant text in response to the input prompts. This can include answers to questions, task completions, and open-ended conversations.
  • Output length: The model can generate up to 256 tokens of text in a single output.

Capabilities

StableBeluga-13B is a powerful language model with a wide range of capabilities. It can engage in open-ended conversations, answer questions, and complete a variety of tasks such as writing poetry, short stories, and jokes. The model has been trained to be helpful and harmless, and will refuse to participate in anything that could be considered harmful.

What can I use it for?

StableBeluga-13B can be used for a variety of applications, such as:

  • Chatbots and conversational assistants: The model can be integrated into chatbots and virtual assistants to provide natural language interactions.
  • Content generation: The model can be used to generate various types of text, such as articles, stories, and creative writing.
  • Question answering: The model can be used to provide answers to a wide range of questions, drawing on its broad knowledge base.
  • Task completion: The model can be used to complete various tasks, such as research, analysis, and problem-solving.

Things to try

Some interesting things to try with StableBeluga-13B include:

  • Engaging in open-ended conversations: Explore the model's conversational abilities by asking it a wide range of questions and prompts, and see how it responds.
  • Experimenting with different prompts: Try providing the model with different types of prompts, such as creative writing prompts, math problems, or instructions for a specific task, and observe how it responds.
  • Evaluating safety and helpfulness: Provide the model with prompts that test its ability to be helpful and harmless, and observe how it responds.
  • Comparing against other models: Compare the performance of StableBeluga-13B to other language models, such as llama2-13b-orca-8k-3319, to understand its relative strengths and weaknesses.

By exploring these capabilities, you can gain a deeper understanding of the potential applications and limitations of this model.



Stable Beluga 2

Stable Beluga 2 is a Llama2 70B model fine-tuned by Stability AI on an Orca-style dataset. It is part of a family of Beluga models, with other variants including StableBeluga 1 - Delta, StableBeluga 13B, and StableBeluga 7B. These models are designed to be highly capable language models that follow instructions well and provide helpful, safe, and unbiased assistance.

Model inputs and outputs

Stable Beluga 2 is an autoregressive language model that takes text as input and generates text as output. It can be used for a variety of natural language processing tasks, such as text generation, summarization, and question answering.

  • Inputs: Text prompts
  • Outputs: Generated text, including responses to questions or instructions

Capabilities

Stable Beluga 2 is a highly capable language model that can engage in open-ended dialogue, answer questions, and assist with a variety of tasks. It has been trained to follow instructions carefully and provide helpful, safe, and unbiased responses. The model performs well on benchmarks for commonsense reasoning, world knowledge, and other important language understanding capabilities.

What can I use it for?

Stable Beluga 2 can be used for a variety of applications, such as:

  • Building conversational AI assistants
  • Generating creative writing or content
  • Answering questions and providing information
  • Summarizing text
  • Providing helpful instructions and advice

The model's strong performance on safety and helpfulness benchmarks makes it well-suited for use cases that require a reliable and trustworthy AI assistant.

Things to try

Some interesting things to try with Stable Beluga 2 include:

  • Engaging the model in open-ended dialogue to see the breadth of its conversational abilities
  • Asking it to provide step-by-step instructions for completing a task
  • Prompting it to generate creative stories or poems
  • Evaluating its performance on specific language understanding benchmarks or tasks

The model's flexibility and focus on safety and helpfulness make it a compelling choice for a wide range of natural language processing applications.



StableBeluga1-Delta

StableBeluga1-Delta is a language model developed by Stability AI that is based on the LLaMA 65B model and has been fine-tuned on an Orca-style dataset. It is part of the Stable Beluga series of models, which also includes StableBeluga2, StableBeluga-13B, and StableBeluga-7B. These models are designed to be helpful and harmless, and have been trained to follow instructions and generate responses in a safe and responsible manner.

Model inputs and outputs

StableBeluga1-Delta is an auto-regressive language model, which means it generates text one token at a time, based on the previous tokens in the sequence. The model takes in a prompt as input and generates a response that continues the prompt.

  • Prompt: A text prompt that provides the starting point for the model to generate a response.
  • Generated text: The model's response, which continues the input prompt.

Capabilities

StableBeluga1-Delta is capable of a variety of language tasks, including generating coherent and contextually relevant text, answering questions, and following instructions. The model has been fine-tuned on a dataset that helps steer it towards safer and more responsible outputs, making it suitable for use in chatbot and conversational AI applications.

What can I use it for?

StableBeluga1-Delta can be used for a variety of applications, such as:

  • Chatbots and virtual assistants: The model can power conversational AI agents, providing helpful and informative responses to users.
  • Content generation: The model can generate text for a variety of purposes, such as stories, poems, or other creative content.
  • Instruction following: The model can follow and complete instructions, making it useful for task-oriented applications.

Things to try

One interesting aspect of StableBeluga1-Delta is its ability to generate responses that adhere to a specific set of instructions or guidelines. For example, you could provide the model with a prompt that includes a system message, like the one shown in the usage example, and see how it generates a response that follows the specified instructions. You could also compare the responses of StableBeluga1-Delta to those of the other Stable Beluga models, or to other language models, to see how fine-tuning on the Orca dataset has affected its outputs.



StableBeluga2-70B-GPTQ

StableBeluga2-70B-GPTQ is a large language model created by Stability AI and quantized using GPTQ techniques by TheBloke. It is based on Stability AI's original StableBeluga2 model. TheBloke provides multiple quantization parameter options, allowing users to balance inference quality and VRAM usage based on their hardware and needs. Similar models include Llama-2-70B-Chat-GPTQ, Llama-2-7B-Chat-GPTQ, and Llama-2-13B-GPTQ, all of which are Llama 2 models quantized by TheBloke.

Model inputs and outputs

  • Inputs: Text prompts of any length, to be completed or continued by the model.
  • Outputs: Coherent, contextual text generated in response to the input prompts, of any desired length.

Capabilities

StableBeluga2-70B-GPTQ is a powerful language model capable of generating high-quality text on a wide range of topics. It can be used for tasks like creative writing, summarization, question answering, and chatbot-style conversations. Despite the model's large size, quantization keeps inference fast and memory-efficient, making it suitable for real-time applications.

What can I use it for?

You can use StableBeluga2-70B-GPTQ for a variety of natural language processing tasks, such as:

  • Content generation: Create original text for blog posts, articles, stories, or scripts.
  • Conversational AI: Build chatbots and virtual assistants with human-like responses.
  • Question answering: Develop intelligent search or query systems to answer user questions.
  • Summarization: Automatically generate concise summaries of long-form text.

The model's versatility and quantization options make it a great choice for research and experimentation (check the model's license terms before any commercial use). By choosing the right quantization parameters, you can optimize the model's performance for your specific hardware and use case.

Things to try

Some interesting things to try with StableBeluga2-70B-GPTQ include:

  • Experimenting with different temperature and top-k/top-p settings to generate more creative or more coherent text.
  • Fine-tuning the model on your own dataset to specialize it for a particular domain or task.
  • Combining it with other models or techniques, such as retrieval-augmented generation, to enhance its capabilities.
  • Exploring the model's limitations by prompting it with challenging or adversarial inputs and observing its responses.

The quantized versions provided by TheBloke offer a convenient way to leverage the power of StableBeluga2 without the full memory requirements of the original. By trying out the various quantization options, you can find the right balance of performance and efficiency for your needs.
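To make the temperature and top-k/top-p experiment concrete, a small helper like the hypothetical `sampling_settings` below keeps two rough presets side by side. The keyword names match Hugging Face transformers' `generate()` API; the numeric values are illustrative starting points, not tuned recommendations for this model:

```python
def sampling_settings(style: str) -> dict:
    # Keyword names match Hugging Face transformers' `generate()` API;
    # the values are illustrative starting points, not tuned
    # recommendations for this model.
    presets = {
        "creative": {"do_sample": True, "temperature": 0.9, "top_p": 0.95, "top_k": 50},
        "coherent": {"do_sample": True, "temperature": 0.3, "top_p": 0.90, "top_k": 20},
    }
    return dict(presets[style])

# e.g. model.generate(**inputs, **sampling_settings("creative"), max_new_tokens=256)
```

Higher temperature and larger top-p/top-k widen the sampling distribution for more varied text; lowering them concentrates probability mass on the most likely continuations.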
