Maintainer: CalderaAI

Total Score


Last updated 5/28/2024


Model Link: View on HuggingFace
API Spec: View on HuggingFace
Github Link: No Github link provided
Paper Link: No paper link provided


Model overview

The 30B-Lazarus model is the result of an experimental approach to combining several large language models and specialized LoRAs (Low-Rank Adaptations) to create an ensemble model with enhanced capabilities. The composition includes models such as SuperCOT, gpt4xalpaca, and StoryV2, along with the manticore-30b-chat-pyg-alpha and Vicuna Unlocked LoRAs. The maintainer, CalderaAI, indicates that this experimental approach aims to additively apply desired features without paradoxically watering down the model's effective behavior.

Model inputs and outputs

The 30B-Lazarus model is a text-to-text AI model, meaning it takes text as input and generates text as output. The model is primarily instruction-based, with the Alpaca instruct format being the primary input format. However, the maintainer suggests that the Vicuna instruct format may also work.


Inputs

  • Instruction: Text prompts or instructions for the model to follow, often in the Alpaca or Vicuna instruct format.
  • Context: Additional context or information provided to the model to inform its response.

Outputs

  • Generated text: The model's response to the provided input, which can range from short answers to longer, more detailed text.
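As a rough illustration, the Alpaca instruct format wraps the instruction (and optional context) in fixed section headers before the model's response. This minimal sketch assembles such a prompt; the helper function is hypothetical, not part of the model's tooling:

```python
def build_alpaca_prompt(instruction: str, context: str = "") -> str:
    """Assemble a prompt in the Alpaca instruct format."""
    if context:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Summarize the plot in one sentence.",
                             "A scientist merges several models into one.")
print(prompt)
```

The model then continues the text after the `### Response:` header.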


The 30B-Lazarus model is designed to have enhanced capabilities in areas like reasoning, storytelling, and task-completion compared to the base LLaMA model. By combining several specialized models and LoRAs, the maintainer aims to create a more comprehensive and capable language model. However, the maintainer notes that further experimental testing and evaluation is required to fully understand the model's capabilities and limitations.

What can I use it for?

The 30B-Lazarus model could potentially be used for a variety of natural language processing tasks, such as question answering, text generation, and problem-solving. The maintainer suggests that the model may be particularly well-suited for text-based adventure games or interactive storytelling applications, where its enhanced storytelling and task-completion capabilities could be leveraged.

Things to try

When using the 30B-Lazarus model, the maintainer recommends experimenting with different presets and instructions to see how the model responds. They suggest trying out the "Godlike" and "Storywriter" presets in tools like KoboldAI or Text-Generation-WebUI, and adjusting parameters like output length and temperature to find the best settings for your use case. Additionally, exploring the model's ability to follow chain-of-thought reasoning or provide detailed, creative responses to open-ended prompts could be an interesting area to investigate further.
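Temperature, one of the parameters mentioned above, rescales the model's logits before sampling: values below 1.0 sharpen the distribution toward the most likely token, while values above 1.0 flatten it toward uniform. A minimal, framework-free sketch of the effect (the logit values are made up for illustration):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to probabilities, scaled by a sampling temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                   # hypothetical next-token scores
cold = softmax_with_temperature(logits, 0.5)   # lower temperature: more deterministic
hot = softmax_with_temperature(logits, 1.5)    # higher temperature: more diverse
print(cold, hot)
```

At low temperature the top token's probability grows, which suits task-completion; higher temperatures spread probability to less likely tokens, which can help creative storytelling.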

This summary was produced with help from an AI and may contain inaccuracies; check the links above to read the original source documents!

Related Models






SuperCOT-LoRA is a Large Language Model (LLM) that has been fine-tuned using a variety of datasets to improve its ability to follow prompts for Langchain. It was developed by kaiokendev and builds upon the existing LLaMA model by infusing it with datasets focused on chain-of-thought reasoning, code explanations, instructions, and logical deductions. Similar models like Llama-3-8B-Instruct-262k from Gradient also aim to extend the context length and instructional capabilities of large language models. However, SuperCOT-LoRA is specifically tailored towards improving the model's ability to follow Langchain prompts through the incorporation of specialized datasets.

Model inputs and outputs

Inputs

  • Text input, similar to other autoregressive language models

Outputs

  • Generated text, which can include responses to prompts, code explanations, and logical deductions

Capabilities

SuperCOT-LoRA is designed to be particularly adept at following Langchain prompts and producing outputs that are well-suited for use within Langchain workflows. By incorporating datasets focused on chain-of-thought reasoning, code explanations, and logical reasoning, the model has been trained to provide more coherent and contextually appropriate responses when working with Langchain.

What can I use it for?

The SuperCOT-LoRA model can be particularly useful for developers and researchers working on Langchain-based applications. Its specialized training allows it to generate outputs that are tailored for use within Langchain, making it a valuable tool for tasks such as:

  • Building conversational AI assistants that can engage in multi-step logical reasoning
  • Developing code generation and explanation tools that integrate seamlessly with Langchain
  • Enhancing the capabilities of existing Langchain-powered applications with more advanced language understanding and generation

Things to try

One interesting aspect of SuperCOT-LoRA is its potential to improve the coherence and contextual awareness of Langchain-based applications. By leveraging the model's enhanced ability to follow prompts and maintain logical flow, developers could experiment with building more sophisticated question-answering systems, or task-oriented chatbots that can better understand and respond to user intents. Additionally, the model's training on code-related datasets could make it a useful tool for generating and explaining code snippets within Langchain-powered applications, potentially enhancing the developer experience for users.
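The chain-of-thought style that SuperCOT-LoRA was trained on can be sketched without any framework: the prompt simply asks the model to reason step by step before giving a final answer. A minimal, hypothetical template (the helper function is illustrative, not part of Langchain or the model's tooling):

```python
def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question in a simple chain-of-thought instruction."""
    return (
        f"Question: {question}\n"
        "Let's think step by step, then state the final answer "
        "on a line beginning with 'Answer:'.\n"
    )

print(chain_of_thought_prompt(
    "If a train travels 60 km in 45 minutes, what is its speed in km/h?"))
```

Asking for a fixed `Answer:` line also makes the final answer easy to parse programmatically, which is the kind of structured output Langchain workflows rely on.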







Alpacino30b is a merged model that combines features of the Alpaca model with additional capabilities from models focused on chain-of-thought reasoning and storytelling. The maintainer, digitous, describes it as a "triple model merge" that results in a comprehensive boost to Alpaca's reasoning and story writing abilities. It uses Alpaca as the backbone to maintain the instruct format that Alpaca is known for. Similar models like SuperCOT-LoRA also aim to enhance LLaMA models with additional capabilities for better logical reasoning and task completion. These models leverage datasets like Alpaca-CoT, CodeAlpaca, and Conala to fine-tune the base LLaMA model.

Model inputs and outputs

Inputs

  • Text prompt provided in an instruction format

Outputs

  • Detailed, creative text responses that demonstrate improved reasoning and storytelling abilities compared to the base Alpaca model

Capabilities

Alpacino30b exhibits enhanced capabilities in areas like logical reasoning, chain-of-thought problem solving, and creative storytelling. The maintainer provides an example use case of using the model to power an interactive text-based adventure game, where the model can respond with rich, imaginative descriptions that progress the narrative.

What can I use it for?

Projects that could benefit from Alpacino30b's improved reasoning and storytelling skills include interactive fiction, creative writing assistants, and chatbots for engaging conversations. The model could also be useful for research into language model capabilities and prompt engineering. As with any large language model, users should exercise caution and test for potential biases or safety issues before deploying it in production.

Things to try

Experiment with prompts that require the model to demonstrate its chain-of-thought capabilities, such as multi-step reasoning problems or open-ended storytelling tasks. Try providing the model with different character backstories or narrative prompts to see how it can generate coherent and engaging storylines. Additionally, you could explore using the model in a text-based adventure game setting, as suggested by the maintainer, to see how it handles dynamic user interactions and evolving narratives.







alpaca-lora-30b is a large language model based on the LLaMA-30B base model, fine-tuned using the Alpaca dataset to create a conversational AI assistant. It was developed by the researcher chansung and is part of the Alpaca-LoRA family of models, which also includes the alpaca-lora-7b and Chinese-Vicuna-lora-13b-belle-and-guanaco models.

Model inputs and outputs

alpaca-lora-30b is a text-to-text model, taking in natural language prompts and generating relevant responses. The model was trained on a cleaned-up version of the Alpaca dataset (as of 04/06/23).

Inputs

  • Natural language prompts for the model to respond to

Outputs

  • Relevant natural language responses to the input prompts

Capabilities

alpaca-lora-30b can engage in open-ended conversations, answer questions, and complete a variety of language-based tasks. It has been trained to follow instructions and provide informative, coherent responses.

What can I use it for?

alpaca-lora-30b can be used for a wide range of applications, such as chatbots, virtual assistants, and language generation tasks. It could be particularly useful for companies looking to incorporate conversational AI into their products or services.

Things to try

Experiment with different types of prompts to see the range of responses alpaca-lora-30b can generate. You could try asking it follow-up questions, providing it with context about a specific scenario, or challenging it with more complex language tasks.







The alpaca-lora-30B-ggml model is a 30 billion parameter AI model that has been fine-tuned on the Alpaca dataset using the LoRA (Low-Rank Adaptation) technique. This model is a version of the larger LLaMA language model, which was developed by Meta AI. The LoRA fine-tuning was done by the maintainer, Pi3141, to adapt the LLaMA model specifically for conversational and language tasks. This model is designed to be used with Alpaca.cpp, Llama.cpp, and Dalai, which are inference frameworks that can run large language models on CPU and GPU hardware. Similar models include the GPT4 X Alpaca (fine-tuned natively) 13B and the Alpaca (fine-tuned natively) 7B models, which are also fine-tuned versions of large language models designed for conversational tasks.

Model inputs and outputs

Inputs

  • Text: The model takes text input, which can be prompts, questions, or other natural language text.

Outputs

  • Text: The model generates text output, which can be continuations of the input, answers to questions, or other natural language responses.

Capabilities

The alpaca-lora-30B-ggml model is capable of engaging in a wide variety of conversational and language tasks, including answering questions, generating text, and providing explanations on a range of topics. It can be used for tasks like customer service chatbots, personal assistants, and creative writing.

What can I use it for?

The alpaca-lora-30B-ggml model can be used for a variety of natural language processing and generation tasks. Some potential use cases include:

  • Conversational AI: Use the model to build conversational agents or chatbots that can engage in natural language dialog.
  • Content generation: Leverage the model's text generation capabilities to create articles, stories, or other types of written content.
  • Question answering: Use the model to build systems that can answer questions on a wide range of topics.
  • Language modeling: Utilize the model's understanding of language to power applications like text autocomplete or language translation.

Things to try

One interesting thing to try with the alpaca-lora-30B-ggml model is to use it in a few-shot or zero-shot learning scenario. By providing the model with a small number of examples or instructions, you can see how it can generalize to novel tasks or prompts. This can help uncover the model's true capabilities and flexibility beyond its training data. Another interesting experiment would be to combine the alpaca-lora-30B-ggml model with other AI models or techniques, such as retrieval-augmented generation or hierarchical prompting. This could lead to new and innovative applications that leverage the strengths of multiple AI components.
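Few-shot prompting, mentioned above, simply means prepending a handful of worked examples before the new query so the model can generalize the pattern. A minimal sketch, with a hypothetical helper and a made-up antonym task:

```python
def build_few_shot_prompt(examples, query):
    """Prepend worked input/output examples before a new query."""
    parts = [f"Input: {x}\nOutput: {y}\n" for x, y in examples]
    parts.append(f"Input: {query}\nOutput:")
    return "\n".join(parts)

examples = [("cold", "hot"), ("up", "down")]   # illustrative antonym pairs
print(build_few_shot_prompt(examples, "fast"))
```

Passing an empty `examples` list gives the zero-shot variant, where the model must infer the task from the query alone.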
