Maintainer: core42

Last updated 5/21/2024


Model Link: View on HuggingFace
API Spec: View on HuggingFace
Github Link: No Github link provided
Paper Link: No paper link provided

Model overview

jais-13b is a 13 billion parameter pre-trained bilingual large language model developed by Inception, Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), and Cerebras Systems. The model is trained on a dataset containing 72 billion Arabic tokens and 279 billion English/code tokens, with the Arabic data iterated over for 1.6 epochs and the English/code for 1 epoch, for a total of 395 billion tokens.

The jais-13b model is based on a transformer decoder-only (GPT-3-style) architecture and uses the SwiGLU non-linearity. It implements ALiBi position embeddings, enabling the model to extrapolate to sequence lengths longer than those seen during training and improving its handling of long contexts.

Compared to similar large language models like XVERSE-13B and Baichuan-7B, jais-13b stands out for its bilingual Arabic-English capabilities, with strong results on Arabic-language benchmarks and on English benchmarks such as MMLU.

Model inputs and outputs


Inputs

  • Text data: The jais-13b model takes text input in either Arabic or English.

Outputs

  • Generated text: The model outputs generated text, in Arabic or English, based on the input prompt.


The jais-13b model performs strongly on standard benchmarks for both Arabic and English language understanding and generation. It achieves state-of-the-art results among openly available models on Arabic benchmarks, while remaining competitive with similarly sized models on English benchmarks such as MMLU.

Some example capabilities of the jais-13b model include:

  • Generating coherent, contextually relevant text in both Arabic and English
  • Answering questions and completing tasks that require understanding of the input text
  • Translating between Arabic and English
  • Summarizing long-form text in both languages

What can I use it for?

The jais-13b model can be used as a foundation for a wide range of NLP applications that require strong language understanding and generation capabilities in both Arabic and English. Some potential use cases include:

  • Developing multilingual chatbots and virtual assistants
  • Building machine translation systems between Arabic and English
  • Automating content generation and summarization for Arabic and English text
  • Powering search and information retrieval systems that handle both languages

To use the jais-13b model, you can follow the provided getting started guide, which includes sample code for loading the model and generating text.
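The loading step can be sketched with the Hugging Face transformers library. This is a minimal sketch, not the official getting-started code: the checkpoint id `core42/jais-13b` is an assumption (confirm it on the model's HuggingFace page), and running it requires a GPU with roughly 26 GB of memory in half precision.

```python
def generate_text(prompt: str, model_id: str = "core42/jais-13b",
                  max_new_tokens: int = 50) -> str:
    """Load jais-13b and generate a continuation of `prompt`.

    The model_id is an assumption -- check the model card for the exact id.
    Imports are done lazily so this module loads without transformers installed.
    """
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,   # half precision to halve memory use
        device_map="auto",           # place layers on available devices
        trust_remote_code=True,      # jais ships custom modeling code
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens,
                             do_sample=False)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

The same function works for Arabic or English prompts, since the model is bilingual.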

Things to try

One interesting aspect of the jais-13b model is its ability to handle long input sequences thanks to the use of ALiBi position embeddings. You could experiment with providing the model with longer prompts or context and see how it performs on tasks that require understanding and reasoning over a larger amount of information.

Another area to explore is fine-tuning the model on specific domains or tasks, such as Arabic-English machine translation or question answering, to further enhance its capabilities in those areas. The Jais and Jais-chat paper discusses these fine-tuning approaches.

Overall, the jais-13b model represents a significant advancement in large language models that can handle both Arabic and English, and provides a powerful foundation for a wide range of multilingual NLP applications.

This summary was produced with help from an AI and may contain inaccuracies - check out the links to read the original source documents!

Related Models




XVERSE-13B

XVERSE-13B is a large language model developed by Shenzhen Yuanxiang Technology. It uses a decoder-only Transformer architecture with an 8K context length, making it suitable for longer multi-round dialogues, knowledge question-answering, and summarization tasks. The model was trained on a diverse dataset of over 3.2 trillion tokens spanning more than 40 languages, including Chinese, English, Russian, and Spanish. It uses a BPE tokenizer with a vocabulary size of 100,534, allowing for efficient multilingual support without the need for additional vocabulary expansion.

Compared to similar models like Baichuan-7B, XVERSE-13B has a larger context length and a more diverse training dataset, making it potentially more versatile in handling longer-form tasks. The model also outperforms Baichuan-7B on several benchmark evaluations, as detailed in the maintainer's description.

Model inputs and outputs

Inputs

  • Text: The model accepts natural language text as input, such as queries, instructions, or conversation history.

Outputs

  • Text: The model generates relevant text as output, such as answers, responses, or summaries.

Capabilities

XVERSE-13B has demonstrated strong performance on a variety of tasks, including language understanding, question answering, and text generation. According to the maintainer's description, the model's large context length and multilingual capabilities make it well-suited for applications such as:

  • Multi-round dialogues: The model's 8K context length allows it to maintain coherence and continuity in longer conversations.
  • Knowledge-intensive tasks: The model's broad training data coverage enables it to draw upon a wide range of knowledge to answer questions and provide information.
  • Summarization: The model's ability to process and generate longer text makes it effective at summarizing complex information.

What can I use it for?

Given its strong performance and versatile capabilities, XVERSE-13B could be useful for a wide range of applications, such as:

  • Conversational AI: The model's dialogue capabilities could be leveraged to build intelligent chatbots or virtual assistants.
  • Question-answering systems: The model's knowledge-processing abilities could power advanced question-answering systems for educational or research purposes.
  • Content generation: The model's text generation capabilities could assist with writing tasks, such as drafting reports, articles, or creative content.

Things to try

One interesting aspect of XVERSE-13B is its large context length, which allows it to maintain coherence and continuity in longer conversations. To explore this capability, you could engage the model in multi-turn dialogues, asking follow-up questions or providing additional context, and observe how well it stays on topic.

Another experiment is to evaluate the model's performance on knowledge-intensive tasks, such as answering questions about a specific domain or summarizing complex information. This can highlight the breadth of the model's training data and its ability to draw on diverse knowledge to tackle challenging problems.
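For multi-turn experiments, the bookkeeping amounts to packing past turns into the context window and evicting the oldest ones when the budget is exceeded. A hypothetical sketch, where the turn format and the character-based budget are illustrative assumptions (a real client would count tokens against the 8K window):

```python
# Hypothetical multi-turn prompt builder with a fixed context budget.
# The "User:/Assistant:" turn format and max_chars value are assumptions.

def build_prompt(history: list[tuple[str, str]], user_msg: str,
                 max_chars: int = 8000) -> str:
    """Concatenate (user, assistant) turns plus the new user message,
    dropping the oldest completed turns until the prompt fits the budget."""
    turns = [f"User: {u}\nAssistant: {a}" for u, a in history]
    turns.append(f"User: {user_msg}\nAssistant:")
    prompt = "\n".join(turns)
    while len(prompt) > max_chars and len(turns) > 1:
        turns.pop(0)                  # evict the oldest completed turn
        prompt = "\n".join(turns)
    return prompt
```

Feeding the model progressively longer histories built this way is a simple way to probe how far its 8K context actually carries coherence.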




jais-13b-chat

jais-13b-chat is a large language model developed by core42. It is the chat-tuned variant of jais-13b, fine-tuned for instruction following and dialogue.

Model inputs and outputs

jais-13b-chat is a text-to-text model, meaning it takes textual inputs and generates textual outputs. The model can engage in open-ended conversations, answer questions, summarize text, and perform a variety of other natural language processing tasks.

Inputs

  • Arbitrary text prompts

Outputs

  • Generated text responses
  • Answers to questions
  • Summaries of input text

Capabilities

jais-13b-chat is a powerful language model that can handle a wide range of natural language tasks. It demonstrates strong capabilities in areas like text generation, question answering, and text summarization.

What can I use it for?

You can use jais-13b-chat for a variety of applications that involve natural language processing, such as chatbots, content creation, and text analysis. The model's versatility makes it a valuable tool for businesses, researchers, and developers who need to work with text-based data.

Things to try

One interesting thing to try with jais-13b-chat is using it for open-ended conversations. Its ability to engage in dialogue and generate coherent, contextual responses is valuable for building conversational interfaces or for exploring the capabilities of large language models.





gpt2

gpt2 is a transformer-based language model created and released by OpenAI. It is the smallest version of the GPT-2 model, with 124 million parameters. Like other GPT-2 models, gpt2 is a causal language model pretrained on a large corpus of English text using a self-supervised objective to predict the next token in a sequence. This allows the model to learn a general understanding of the English language that can be leveraged for a variety of downstream tasks.

The gpt2 model is related to larger GPT-2 variants such as GPT2-Medium, GPT2-Large, and GPT2-XL, which have 355 million, 774 million, and 1.5 billion parameters respectively. These larger models were also developed and released by OpenAI.

Model inputs and outputs

Inputs

  • Text sequence: The model takes a sequence of text as input, which it uses to generate additional text.

Outputs

  • Generated text: The model outputs a continuation of the input text sequence, generating new text one token at a time in an autoregressive fashion.

Capabilities

The gpt2 model is capable of generating fluent, coherent text in English on a wide variety of topics. It can be used for tasks like creative writing, text summarization, and language modeling. However, as the OpenAI team notes, the model does not distinguish fact from fiction, so it should not be used for applications that require the generated text to be truthful.

What can I use it for?

The gpt2 model can be used for a variety of text generation tasks. Researchers may use it to better understand the behaviors, capabilities, and biases of large-scale language models. The model could also be fine-tuned for applications like grammar assistance, auto-completion, creative writing, and chatbots. However, users should be aware of the model's limitations and potential for biased or harmful output, as discussed in the OpenAI model card.

Things to try

One interesting aspect of the gpt2 model is its ability to generate diverse and creative text from a given prompt. You can experiment with providing the model with different types of starting prompts, such as the beginning of a story, a description of a scene, or even a single word, and see what kind of coherent and imaginative text it generates in response. Additionally, you can try fine-tuning the model on a specific domain or task to see how its performance and output change compared to the base model.
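Prompting gpt2 is a one-liner with the Hugging Face pipeline API; `"gpt2"` is the actual Hub checkpoint id, though the sampling settings below are illustrative rather than canonical. A minimal sketch:

```python
# Minimal sketch of prompting gpt2 via the transformers pipeline API.
# Sampling settings (top_p, max_new_tokens) are illustrative assumptions.

def complete(prompt: str, max_new_tokens: int = 40) -> str:
    from transformers import pipeline  # lazy import; first call downloads the weights

    generator = pipeline("text-generation", model="gpt2")
    result = generator(
        prompt,
        max_new_tokens=max_new_tokens,
        do_sample=True,     # sample rather than greedy-decode for diversity
        top_p=0.95,
        num_return_sequences=1,
    )
    return result[0]["generated_text"]
```

Calling `complete("Once upon a time")` with different seeds or `top_p` values is an easy way to see the diversity of the model's continuations.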





ruGPT-3.5-13B

ruGPT-3.5-13B is a large language model developed by ai-forever that was trained on a 300 GB dataset spanning various domains, with an additional 100 GB of code and legal documents. This 13 billion parameter model is the largest in the ruGPT series and was used to train the GigaChat model. Similar models include the mGPT multilingual GPT model, the FRED-T5-1.7B Russian-focused T5 model, and the widely used GPT-2 English language model.

Model inputs and outputs

Inputs

  • Raw Russian text prompts of varying length

Outputs

  • A continuation of the input text, generating new content in Russian

Capabilities

The ruGPT-3.5-13B model demonstrates strong text generation capabilities for the Russian language. It can continue and expand on Russian text prompts, producing fluent and coherent continuations. Because it was trained on a diverse dataset, it can generate text on a wide range of topics.

What can I use it for?

The ruGPT-3.5-13B model could be useful for a variety of Russian-language applications, such as:

  • Chatbots and conversational agents that can engage in open-ended dialogue in Russian
  • Content generation for Russian websites, blogs, or social media
  • Assistants that help with Russian-language tasks like summarization, translation, or question answering

Things to try

One interesting thing to try with the ruGPT-3.5-13B model is experimenting with different generation strategies, such as adjusting the number of beams or the sampling temperature. This can produce more diverse or more controlled outputs depending on the use case.

Another idea is to fine-tune the model on a smaller, domain-specific dataset to adapt it for specialized tasks like generating legal or technical Russian text. The model's large size and broad training make it a strong starting point for further fine-tuning.
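The decoding strategies mentioned above map directly onto keyword arguments of the transformers `generate()` method. A sketch of three illustrative presets (the specific values are assumptions to tune per task, not recommendations from the model's authors):

```python
# Illustrative keyword sets for transformers' generate(); values are
# assumptions meant as starting points for experimentation.

# Deterministic: always picks the most likely next token.
greedy = dict(do_sample=False, num_beams=1)

# Beam search: explores several candidate continuations in parallel,
# useful when you want a single high-likelihood output.
beam_search = dict(do_sample=False, num_beams=5, early_stopping=True)

# Sampling: draws from the (temperature-scaled, nucleus-truncated)
# distribution, trading likelihood for diversity.
sampling = dict(do_sample=True, temperature=0.9, top_p=0.95)
```

Passing one of these dicts as `model.generate(**inputs, **sampling)` makes it easy to compare strategies on the same prompt.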
