Maintainer: replit

Last updated 5/27/2024


  • Model link: View on HuggingFace
  • API spec: View on HuggingFace
  • GitHub link: no GitHub link provided
  • Paper link: no paper link provided


Model overview

replit-code-v1-3b is a 2.7B Causal Language Model developed by Replit that is focused on code completion. It has been trained on a diverse dataset of 20 programming languages, including Markdown, Java, JavaScript, Python, and more, totaling 525B tokens. Compared to similar models like StarCoder and rebel-large, replit-code-v1-3b is tailored specifically for code generation tasks.

Model inputs and outputs

replit-code-v1-3b takes text input and generates text output, with a focus on producing code snippets. The model utilizes advanced techniques like Flash Attention and ALiBi positional embeddings to enable efficient training and inference on long input sequences.

Inputs

  • Text prompts, which can include a mix of natural language and code

Outputs

  • Autoregressive text generation, with a focus on producing valid and relevant code snippets
  • The model can generate multi-line code outputs

Capabilities

replit-code-v1-3b excels at code completion tasks, where it can generate relevant and functional code to extend or complete a given programming snippet. It has been trained on a diverse set of languages, allowing it to handle a wide range of coding tasks.

What can I use it for?

The replit-code-v1-3b model is well-suited for applications that involve code generation or assistance, such as:

  • Integrated development environment (IDE) plugins that provide intelligent code completion
  • Automated code generation tools for rapid prototyping or boilerplate creation
  • Educational or learning platforms that help users learn to code by providing helpful suggestions

Things to try

One interesting thing to try with replit-code-v1-3b is to provide it with a partial code snippet and see how it can complete or extend the code. You could also experiment with providing the model with a natural language description of a programming task and see if it can generate the corresponding code.
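A completion experiment like the one above can be sketched with the Hugging Face transformers library. The loading arguments (including `trust_remote_code=True`) follow the model card, and the sampling settings are illustrative assumptions rather than recommendations from Replit:

```python
# Minimal sketch of code completion with replit-code-v1-3b.
# Loading arguments follow the Hugging Face model card; treat details
# such as trust_remote_code=True as assumptions to verify.

def complete_code(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a completion for a partial code snippet."""
    # Imports kept inside the function so the sketch stays light.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(
        "replit/replit-code-v1-3b", trust_remote_code=True
    )
    model = AutoModelForCausalLM.from_pretrained(
        "replit/replit-code-v1-3b", trust_remote_code=True
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.2,  # low temperature favors the most likely completion
        pad_token_id=tokenizer.eos_token_id,
    )
    # Return only the newly generated portion, not the echoed prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

# A partial snippet mixing natural language and code, as described above.
partial_snippet = "# Return the nth Fibonacci number\ndef fibonacci(n):\n"
```

Calling `complete_code(partial_snippet)` would then ask the model to finish the function body.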

This summary was produced with help from an AI and may contain inaccuracies; check out the links to read the original source documents!

Related Models




replit-code-v1_5-3b


replit-code-v1_5-3b is a 3.3 billion parameter Causal Language Model developed by Replit, Inc. that is focused on code completion. Compared to similar models like replit-code-v1-3b and stable-code-3b, replit-code-v1_5-3b has been trained on a broader set of 30 programming languages and uses a custom trained vocabulary optimized for improved compression and coverage.

Model inputs and outputs

replit-code-v1_5-3b takes text as input and generates text as output. The model can be used to complete partially written code snippets, generate new code, or continue existing code. The context size of the model is 4096 tokens, which allows it to consider a sizable amount of context when generating new text.

Inputs

  • Partial code snippets or text prompts

Outputs

  • Completed code snippets
  • Generated code in one of the 30 supported programming languages

Capabilities

replit-code-v1_5-3b demonstrates strong performance on a variety of coding tasks, from completing simple function definitions to generating more complex program logic. It can be particularly helpful for tasks like filling in missing parts of code, expanding on high-level ideas, and generating boilerplate code. The model's broad language support also makes it a versatile tool for developers working across different programming environments.

What can I use it for?

Developers can use replit-code-v1_5-3b as a foundational model for building a variety of applications that require code generation or completion, such as intelligent code editors, programming assistants, or even low-code/no-code platforms. The model's capabilities could be further enhanced through fine-tuning on domain-specific data or by integrating it with other tools and workflows.

Things to try

Experiment with different decoding techniques and parameters, such as adjusting the temperature, top-k, and top-p values, to see how they impact the quality and diversity of the generated code. You can also try prompting the model with high-level descriptions of functionality and see how it translates those into working code. Additionally, exploring the model's performance across the 30 supported languages could yield interesting insights.
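The decoding knobs mentioned above can be collected into reusable presets. The keyword names below match the transformers `generate()` API, but the specific values are illustrative assumptions to experiment with, not recommendations from Replit:

```python
# Illustrative decoding presets for replit-code-v1_5-3b; the keyword
# names match transformers' generate() API, and the values are
# assumptions for experimentation, not tuned recommendations.

def decoding_config(style: str) -> dict:
    """Return generation kwargs for a given decoding style."""
    presets = {
        # Conservative: near-greedy sampling, good for boilerplate completion.
        "precise": {"do_sample": True, "temperature": 0.2, "top_k": 10, "top_p": 0.9},
        # Creative: wider sampling, more diverse (and riskier) code.
        "diverse": {"do_sample": True, "temperature": 0.9, "top_k": 50, "top_p": 0.95},
    }
    config = dict(presets[style])  # copy so callers can mutate safely
    # Cap output length so prompt plus completion stays within the
    # model's 4096-token context window.
    config["max_new_tokens"] = 256
    return config

# These kwargs would be passed straight through, e.g.:
#   model.generate(**inputs, **decoding_config("precise"))
```

Sweeping between the two presets (or interpolating the values) is a quick way to see how sampling settings change the character of the generated code.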





stable-code-3b


stable-code-3b is a 2.7B parameter decoder-only language model pre-trained on 1.3 trillion tokens of diverse textual and code datasets. Developed by Stability AI, stable-code-3b demonstrates state-of-the-art performance on the MultiPL-E metrics across multiple programming languages compared to models of similar size. It outperforms other code generation models like CodeLlama, Deepseek Coder, and Wizard Coder on tasks in Python, C++, and JavaScript.

Model inputs and outputs

stable-code-3b is a text-to-text model, taking in prompts as input and generating relevant code as output. It can handle long context, with the ability to generate code based on sequences up to 16,384 tokens. The model also supports a "Fill in Middle" (FIM) capability, where it can complete partially written code snippets.

Inputs

  • Text prompts for code generation, up to 16,384 tokens
  • Partial code snippets for the "Fill in Middle" capability

Outputs

  • Generated code in one of the 18 programming languages the model was trained on, including Python, C++, JavaScript, Java, PHP, and Rust

Capabilities

stable-code-3b excels at generating high-quality, functional code across a variety of programming languages. It can be used to write entire programs from scratch, or to fill in missing sections of existing code. The model's strong performance on the MultiPL-E benchmark suggests it can handle a wide range of coding tasks and produce code that is syntactically correct and logically sound.

What can I use it for?

stable-code-3b can be a valuable tool for developers, data scientists, and anyone working with code. It could be used to speed up prototyping and development by automatically generating boilerplate code or completing repetitive tasks. The model could also be fine-tuned on domain-specific datasets to create customized code generation models for specialized applications.

Things to try

Experiment with different prompting techniques to see how stable-code-3b responds. Try providing high-level descriptions of the functionality you want, or give it partially completed code snippets to fill in. You can also try adjusting parameters like temperature and top-k/top-p values during generation to control the creativity and diversity of the output. By exploring the model's capabilities, you can unlock new ways to streamline your coding workflows.
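The "Fill in Middle" capability works by arranging the known code around sentinel tokens; a minimal prompt builder is sketched below. The token names (`<fim_prefix>`, `<fim_suffix>`, `<fim_middle>`) follow the published model card, but verify them against the model's tokenizer before relying on this layout:

```python
# Sketch of a "Fill in Middle" (FIM) prompt for stable-code-3b.
# Sentinel token names (<fim_prefix>, <fim_suffix>, <fim_middle>)
# follow the model card; verify them against the tokenizer before use.

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange known code around a gap for the model to fill in."""
    # The model is expected to generate the missing middle section
    # immediately after the <fim_middle> sentinel.
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# Example: ask the model to fill in a function body between a
# signature and the code that calls it.
prompt = build_fim_prompt(
    prefix="def is_even(n):\n",
    suffix="\n\nprint(is_even(4))\n",
)
```

Tokenizing this prompt and calling `model.generate` should yield the missing body; everything after the `<fim_middle>` token in the output is the model's proposed fill.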





Replit-v2-CodeInstruct-3B


The Replit-v2-CodeInstruct-3B model is a 3 billion parameter AI model developed by teknium that has been fine-tuned on both the CodeAlpaca and GPTeacher Code-Instruct datasets to give it code instruction capabilities. This model builds on the replit-code-v1-3b base model, which was trained on a diverse set of programming languages. The fine-tuning process has given Replit-v2-CodeInstruct-3B the ability to follow code-related instructions and generate relevant responses.

Model inputs and outputs

Inputs

  • Code-related prompts and instructions: The model is designed to accept text-based prompts and instructions related to coding tasks, such as "Write a function that computes the Fibonacci sequence up to n" or "Explain how this code snippet works."

Outputs

  • Generated code and text responses: The model can generate relevant code snippets and text-based responses to address the provided instructions and prompts. The outputs aim to be helpful, informative, and aligned with the user's intent.

Capabilities

The Replit-v2-CodeInstruct-3B model is capable of engaging in a wide range of code-related tasks, such as code completion, code explanation, and generating code from natural language instructions. It can handle prompts across multiple programming languages, including Python, JavaScript, Java, and more. The model's fine-tuning on the CodeAlpaca and GPTeacher datasets has improved its ability to follow instructions and provide helpful, coherent responses.

What can I use it for?

The Replit-v2-CodeInstruct-3B model can be a valuable tool for developers and researchers working on projects that involve code generation, code understanding, and code-related task completion. It can be used to build applications that assist programmers by providing code suggestions, explanations, and solutions to coding problems. Additionally, the model could be further fine-tuned or integrated into educational resources or coding learning tools to support students and beginners in their programming journeys.

Things to try

One interesting thing to try with the Replit-v2-CodeInstruct-3B model is to explore its ability to handle code-related prompts that involve multiple steps or complex instructions. For example, you could ask the model to write a function that solves a specific coding challenge, or to explain the inner workings of a given code snippet in detail. Experimenting with different types of prompts and observing the model's responses can help you better understand its capabilities and limitations.
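Instruction-tuned models like this one usually expect prompts in the template used during fine-tuning. Models trained on CodeAlpaca/GPTeacher data commonly use an Alpaca-style layout, sketched below; the exact header strings are an assumption, so confirm them against the model card before use:

```python
# Sketch of an Alpaca-style prompt for Replit-v2-CodeInstruct-3B.
# CodeAlpaca/GPTeacher fine-tunes commonly use this template, but the
# exact header strings are an assumption; confirm with the model card.

def build_instruct_prompt(instruction: str, user_input: str = "") -> str:
    """Format a coding instruction (and optional input) for the model."""
    prompt = f"### Instruction:\n{instruction}\n"
    if user_input:
        # Optional extra context, e.g. a code snippet to explain.
        prompt += f"### Input:\n{user_input}\n"
    # The model continues generating after the Response header.
    prompt += "### Response:\n"
    return prompt

prompt = build_instruct_prompt(
    "Write a function that computes the Fibonacci sequence up to n"
)
```

Feeding `prompt` to the model (via `model.generate` on the tokenized text) would then produce the code and explanation after the response header.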





starcoderbase-1b


The starcoderbase-1b is a 1 billion parameter language model trained by bigcode on over 80 programming languages from The Stack (v1.2). It uses multi-query attention, a context window of 8,192 tokens, and was trained using the fill-in-the-middle objective on 1 trillion tokens. This model is smaller than the StarCoderBase 15.5B parameter model, but still provides powerful code generation capabilities.

Model inputs and outputs

The starcoderbase-1b model takes in text as input, such as partial code snippets or prompts, and generates additional text to continue or complete the input. The inputs can be in any of the 80+ supported programming languages.

Inputs

  • Text prompts or partial code snippets in any of the 80+ supported programming languages

Outputs

  • Continued or completed code snippets in the same language as the input
  • Text responses that continue or elaborate on the provided input

Capabilities

The starcoderbase-1b model is skilled at generating realistic and coherent code in a wide range of programming languages. It can be used to autocomplete code, generate new functions or classes, fix bugs, and more. While it is not an instruction-following model, using the Tech Assistant prompt can turn it into a capable technical assistant.

What can I use it for?

The starcoderbase-1b model can be used for a variety of tasks in software development and engineering, such as:

  • Code completion: Use the model to autocomplete partially written code snippets or functions.
  • Code generation: Prompt the model with a description or high-level outline and have it generate working code.
  • Bug fixing: Give the model a buggy code snippet and have it attempt to fix the issue.
  • Refactoring: Provide the model with code and ask it to refactor or optimize the implementation.

When using generated code, be sure to review it carefully and ensure it meets your requirements, as the model may produce inefficient or incorrect outputs.

Things to try

Try providing the model with different types of prompts, such as function signatures, pseudo-code, or high-level descriptions of what you want the code to do. Experiment with the fill-in-the-middle technique, which uses special tokens to identify the prefix, middle, and suffix of the input and output. This can help the model better understand the context and generate more coherent code.
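The fill-in-the-middle workflow described above can be sketched as two small helpers: one to mark up the prompt with sentinel tokens, and one to splice the generated middle back into the surrounding code. The sentinel names follow the StarCoder tokenizer; verify them before relying on this layout:

```python
# Sketch of StarCoder-style fill-in-the-middle for starcoderbase-1b.
# Sentinel names (<fim_prefix>, <fim_suffix>, <fim_middle>) follow the
# StarCoder tokenizer; verify them against the model before use.

def fim_prompt(prefix: str, suffix: str) -> str:
    """Mark the prefix and suffix so the model generates the middle."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

def splice(prefix: str, middle: str, suffix: str) -> str:
    """Reassemble the full snippet once the model returns the middle."""
    return prefix + middle + suffix

# Example: leave a gap inside a function body for the model to fill.
prefix = "def add(a, b):\n    "
suffix = "\n    return result\n"
# After model.generate(...) decodes to, say, "result = a + b",
# splice the generated middle back between prefix and suffix:
completed = splice(prefix, "result = a + b", suffix)
```

Keeping the splicing step explicit makes it easy to validate the model's middle (for example by parsing or linting `completed`) before accepting the fill.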
