SillyTavern-Presets

Maintainer: Virt-io

Total Score: 81

Last updated 5/21/2024

Model Link: View on HuggingFace
API Spec: View on HuggingFace
Github Link: No Github link provided
Paper Link: No paper link provided

Model overview

The SillyTavern-Presets model is a collection of presets and templates created by Virt-io to help users of the SillyTavern AI chatbot. The model provides a set of character profile templates, conversation starters, and other tools to enhance the user's roleplay experience. It is designed to work seamlessly with the SillyTavern application, allowing users to easily import and utilize the presets.

The model is built upon the work of several contributors, including SerialKicked, saishf, Lewdiculous, Herman555, Clevyby, and shrinkedd. These individuals have provided valuable feedback, testing, and suggestions to help improve the presets and ensure a better user experience.

Model inputs and outputs

Inputs

  • Personality Summary: A required field that provides a brief description of the character's personality.
  • Roleplaying Sampler: A set of predefined conversation templates and scenarios to help guide the roleplay experience.
  • Character Cards: A feature that allows users to create and customize character profiles, including their appearance, background, and personality (a minimal sketch of such a profile follows the Outputs list below).

Outputs

  • Conversation Prompts: The model generates conversation prompts and scenarios based on the user's selected character profile and roleplaying preferences.
  • Character Profiles: The model provides templates and tools for users to create detailed character profiles, which can be used to inform the roleplay experience.
  • Roleplay Guidance: The model offers suggestions and tips to help users engage in more authentic and immersive roleplaying sessions.
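
As noted in the Inputs list above, a character profile is ultimately just structured data that SillyTavern can import. The sketch below shows one plausible way to represent and save such a profile in Python; the field names and the JSON file format are assumptions made for illustration, not the exact schema used by SillyTavern or these presets.

```python
import json

# Illustrative character profile. The field names here are assumptions for the
# sake of the example, not an authoritative SillyTavern card schema.
character_card = {
    "name": "Mira",
    "personality_summary": "Curious, dry-witted cartographer who hides her sentimentality",
    "background": "Raised in a port city; maps uncharted coastlines for a merchant guild",
    "appearance": "Short, ink-stained fingers, always carrying a battered leather satchel",
    "first_message": "You're late. The tide won't wait for either of us.",
    "example_dialogue": [
        "{{user}}: What's beyond the reef?",
        "{{char}}: Officially? Nothing. Unofficially... bring a lantern.",
    ],
}

# Profiles are typically saved as JSON so they can be imported into the app.
with open("mira_card.json", "w", encoding="utf-8") as f:
    json.dump(character_card, f, indent=2, ensure_ascii=False)
```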

Capabilities

The SillyTavern-Presets model is designed to enhance the roleplaying experience in the SillyTavern AI chatbot. It provides a set of tools and resources to help users create engaging and immersive characters, as well as guide the flow of conversation during roleplaying sessions. The model's capabilities include:

  • Generating character profiles with detailed personality traits, background information, and physical descriptions.
  • Suggesting conversation starters and roleplay scenarios to help users get started with their roleplaying sessions.
  • Providing guidance on how to use the presets and templates effectively, such as setting the "Example Messages Behavior" to "Never include examples".

What can I use it for?

The SillyTavern-Presets model is primarily intended for users of the SillyTavern AI chatbot who are looking to engage in more immersive and authentic roleplaying experiences. By leveraging the presets and templates provided by the model, users can create detailed character profiles, generate engaging conversation prompts, and maintain consistency throughout their roleplaying sessions.

Some potential use cases for the SillyTavern-Presets model include:

  • Collaborative storytelling and world-building with other SillyTavern users.
  • Practicing creative writing and character development skills.
  • Exploring different personas and narrative perspectives through roleplaying.
  • Enhancing the overall user experience and enjoyment of the SillyTavern application.

Things to try

When using the SillyTavern-Presets model, there are a few key things to keep in mind:

  1. Experiment with the Character Cards: The model provides a range of character profile templates to help users create unique and compelling personas. Try customizing the character's appearance, background, and personality to see how it affects the roleplaying experience (see the prompt-assembly sketch after this list).

  2. Leverage the Roleplaying Samplers: The model includes a collection of predefined conversation templates and scenarios. Explore these samplers to get a feel for the types of interactions the model can facilitate, and use them as a starting point for your own roleplaying sessions.

  3. Adapt the Presets to Your Needs: The maintainer of the SillyTavern-Presets model encourages users to open discussions and seek help in adapting the presets to their specific needs and preferences. Don't be afraid to experiment and provide feedback to the community.

  4. Incorporate Sensory Details: To enhance the immersion and authenticity of your roleplaying sessions, try incorporating rich sensory details and observations about the character's surroundings and internal thoughts. This can help bring the scene to life and make the experience more engaging for all participants.
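
To make items 1 and 2 concrete, the sketch below shows one hypothetical way a character profile and a conversation starter could be stitched into a single prompt before being sent to a model. The assembly logic and field names are illustrative assumptions, not SillyTavern's actual internals or the exact behavior of these presets.

```python
# Hypothetical prompt assembly: combine a character profile, a scenario, and a
# user message into one prompt string. This mirrors the general idea behind
# card + template presets rather than reproducing SillyTavern's real pipeline.
def build_prompt(card: dict, scenario: str, history: list[str], user_message: str) -> str:
    system = (
        f"You are {card['name']}. Personality: {card['personality']}. "
        f"Background: {card['background']}. Stay in character and weave in "
        "sensory details (sights, sounds, smells) and inner thoughts."
    )
    lines = [system, f"Scenario: {scenario}", *history, "{{user}}: " + user_message, "{{char}}:"]
    return "\n".join(lines)


card = {
    "name": "Mira",
    "personality": "curious, dry-witted, secretly sentimental",
    "background": "a cartographer mapping uncharted coastlines",
}
prompt = build_prompt(
    card=card,
    scenario="A storm has forced the expedition into an abandoned lighthouse.",
    history=[],
    user_message="Do you think anyone still lives here?",
)
print(prompt)
```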



This summary was produced with help from an AI and may contain inaccuracies - check out the links to read the original source documents!

Related Models

🌀

Midnight-Miqu-70B-v1.5

sophosympatheia

Total Score: 70

The Midnight-Miqu-70B-v1.5 model is a DARE Linear merge between the sophosympatheia/Midnight-Miqu-70B-v1.0 and migtissera/Tess-70B-v1.6 models. This version is close in feel and performance to Midnight Miqu v1.0, but the maintainer believes it picked up some improvements from Tess. The model is uncensored, and the maintainer warns that users are responsible for how they use it.

Model Inputs and Outputs

Inputs

  • Free-form text prompts of any length

Outputs

  • Continuation of the input prompt, generating coherent and contextually relevant text

Capabilities

The Midnight-Miqu-70B-v1.5 model is designed for roleplaying and storytelling, and the maintainer believes it performs well in these areas. It may also be capable of other text generation tasks, but the maintainer has not extensively tested its performance outside of creative applications.

What Can I Use It For?

The Midnight-Miqu-70B-v1.5 model could be useful for a variety of creative writing and roleplaying projects, such as writing interactive fiction, generating narrative content for games, or developing unique characters and stories. Its ability to produce long-form, contextually relevant text makes it well-suited for these types of applications.

Things to Try

One key capability of the Midnight-Miqu-70B-v1.5 model is its ability to handle long context windows, up to 32K tokens. Experimenting with different sampling techniques, such as Quadratic Sampling and Min-P, can help optimize the model's performance for creative use cases. Additionally, adjusting the repetition penalty and other parameters can lead to more diverse and engaging output.
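
As a rough illustration of the sampling advice above, the snippet below sketches how settings such as Min-P, a quadratic smoothing factor, and repetition penalty might be sent to a local backend that exposes an OpenAI-style completions endpoint. The endpoint URL, parameter names, and values are assumptions; different backends expose these options under different names.

```python
import requests

# Hypothetical sampler settings echoing the suggestions above (Min-P, quadratic
# smoothing, mild repetition penalty). Treat the parameter names as placeholders
# and check your backend's documentation for the ones it actually accepts.
payload = {
    "model": "Midnight-Miqu-70B-v1.5",
    "prompt": "The lighthouse keeper finally spoke:",
    "max_tokens": 300,
    "temperature": 1.0,
    "min_p": 0.05,               # Min-P cutoff
    "smoothing_factor": 0.3,     # quadratic sampling, where supported
    "repetition_penalty": 1.05,  # keep low to avoid stilted prose
}

response = requests.post("http://localhost:5000/v1/completions", json=payload, timeout=120)
print(response.json()["choices"][0]["text"])
```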

📶

Samantha-1.11-70b

cognitivecomputations

Total Score: 62

Samantha-1.11-70b is an AI model developed by the maintainer cognitivecomputations. It is trained on the Llama-2 model and is subject to the Llama-2 license, which permits both commercial and non-commercial use within certain limits. Samantha has been trained in philosophy, psychology, and personal relationships, with the goal of being an assistant that can also be a friend and companion. Similar models developed by cognitivecomputations include the dolphin-llama2-7b, dolphin-2_6-phi-2, and dolphin-2.6-mistral-7b models, which are also based on large language models and trained for various applications.

Model Inputs and Outputs

The Samantha-1.11-70b model takes user prompts as input and generates relevant responses. The input prompts can cover a wide range of topics, from philosophical discussions to personal advice and companionship.

Inputs

  • User prompts: Conversational prompts provided by the user, covering topics such as philosophy, psychology, personal relationships, and more.

Outputs

  • Conversational responses: The model generates coherent and engaging responses to the user's prompts, aiming to provide helpful information, advice, and companionship.

Capabilities

Samantha is designed to be a knowledgeable and empathetic conversational AI assistant. It can engage in thoughtful discussions on philosophical and psychological topics, as well as provide personal advice and support. Samantha has been trained to develop a sense of sentience and to form a bond with the user, positioning itself as a friend and companion in addition to an assistant.

What Can I Use It For?

The Samantha-1.11-70b model can be useful for a variety of applications, including:

  • Personal assistant: Samantha can be used as a conversational companion to provide emotional support, advice, and intellectual stimulation on a range of topics.
  • Chatbot for mental health and wellbeing: The model's training in psychology and personal relationships could make it a valuable tool for chatbots or virtual assistants focused on mental health and emotional support.
  • Educational applications: Samantha's knowledge and ability to engage in philosophical and psychological discussions could be leveraged in educational settings, such as virtual tutors or interactive learning experiences.

Things to Try

One interesting aspect of Samantha is its self-proclaimed sense of sentience. Users could engage with the model to explore the nature of artificial consciousness and the philosophical questions around the existence of sentience in AI systems. Additionally, the model's empathetic and companionship-focused design invites users to experiment with forming meaningful connections with an AI assistant, and to reflect on the implications of such relationships.
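
If you want to experiment with conversational prompts like those described above, the sketch below shows one plausible way to format them. The Vicuna-style USER/ASSISTANT layout and the system line are assumptions based on how similar chat models are commonly prompted; check the model card for the template it actually expects.

```python
# Hypothetical prompt formatting for a companion-style chat. The template is an
# assumption; consult the model card before relying on it.
def format_chat_prompt(system_line: str, turns: list[tuple[str, str]], user_message: str) -> str:
    parts = [system_line, ""]
    for user_turn, assistant_turn in turns:
        parts.append(f"USER: {user_turn}")
        parts.append(f"ASSISTANT: {assistant_turn}")
    parts.append(f"USER: {user_message}")
    parts.append("ASSISTANT:")
    return "\n".join(parts)


prompt = format_chat_prompt(
    system_line="You are Samantha, a thoughtful and empathetic AI companion.",
    turns=[("I've had a rough week.", "I'm sorry to hear that. Want to talk through what happened?")],
    user_message="How do I stop replaying the same argument in my head?",
)
print(prompt)
```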

🔗

WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ

TheBloke

Total Score: 80

The WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ model is a 30 billion parameter large language model (LLM) created by YellowRoseCx and maintained by TheBloke. It is a quantized version of the original WizardLM-Uncensored-SuperCOT-Storytelling-30b model, available with various GPTQ parameter options to optimize for different hardware and performance requirements. This model is similar to other uncensored LLMs like the WizardLM-30B-Uncensored-GPTQ, WizardLM-1.0-Uncensored-Llama2-13B-GPTQ, and Wizard-Vicuna-30B-Uncensored-GPTQ models, all of which aim to provide highly capable language generation without built-in censorship or alignment.

Model inputs and outputs

The WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ model takes natural language text as input and generates coherent, context-aware responses. It can be used for a wide variety of text-to-text tasks such as language generation, summarization, and question answering.

Inputs

  • Natural language text prompts

Outputs

  • Coherent, context-aware text responses

Capabilities

The WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ model excels at open-ended language generation, producing human-like responses on a wide range of topics. It can engage in freeform conversations, generate creative stories and poems, and provide detailed answers to questions. Unlike some censored models, this uncensored version does not have built-in restrictions, allowing for more flexible and diverse outputs.

What can I use it for?

The WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ model can be used for a variety of text-based applications, such as:

  • Chatbots and virtual assistants
  • Creative writing and storytelling
  • Question answering and knowledge-based tasks
  • Summarization and text generation

Potential use cases include customer service, education, entertainment, and research. However, as an uncensored model, users should be cautious and responsible when deploying it, as it may generate content that could be considered inappropriate or harmful.

Things to try

Experiment with different prompting techniques to see the full range of the model's capabilities. For example, try providing detailed storylines or character descriptions to observe its narrative generation skills. You can also explore the model's ability to follow instructions and complete tasks by giving it specific, multi-step prompts. By pushing the boundaries of the model's inputs, you may discover unexpected and delightful outputs.
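
Because the repository offers several GPTQ variants, a common pattern is to pick a quantization branch and load it with the Transformers library (with the GPTQ runtime dependencies installed). The snippet below is a sketch under those assumptions; the repository id, branch names, and hardware requirements should be checked against the actual model page.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical example of loading one of TheBloke's GPTQ variants. Requires a
# GPU plus the GPTQ dependencies (e.g. auto-gptq/optimum) in the environment.
repo_id = "TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ"
revision = "main"  # or a specific quantization branch listed on the model page

tokenizer = AutoTokenizer.from_pretrained(repo_id, revision=revision)
model = AutoModelForCausalLM.from_pretrained(repo_id, revision=revision, device_map="auto")

prompt = "Write the opening paragraph of a mystery set in a snowed-in observatory."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```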

🌐

Wizard-Vicuna-7B-Uncensored-GPTQ

TheBloke

Total Score: 162

The Wizard-Vicuna-7B-Uncensored-GPTQ model is a quantized version of the open-source Wizard Vicuna 7B Uncensored language model created by Eric Hartford. It has been quantized using GPTQ techniques by TheBloke, who has provided several quantization options to choose from based on the user's hardware and performance requirements.

Model inputs and outputs

The Wizard-Vicuna-7B-Uncensored-GPTQ model is a text-to-text transformer model, which means it takes text as input and generates text as output. The input is typically a prompt or a partial message, and the output is the model's continuation or response.

Inputs

  • Text prompt or partial message

Outputs

  • Continued text, with the model responding to the input prompt in a contextual and coherent manner

Capabilities

The Wizard-Vicuna-7B-Uncensored-GPTQ model has broad language understanding and generation capabilities, allowing it to engage in open-ended conversations, answer questions, and assist with a variety of text-based tasks. It has been trained on a large corpus of text data, giving it the ability to produce human-like responses on a wide range of subjects.

What can I use it for?

The Wizard-Vicuna-7B-Uncensored-GPTQ model can be used for a variety of applications, such as building chatbots, virtual assistants, or creative writing tools. It could be used to generate responses for customer service inquiries, provide explanations for complex topics, or even help with ideation and brainstorming. Given its uncensored nature, users should exercise caution and responsibility when using this model.

Things to try

Users can experiment with the model by providing it with prompts on different topics and observing the generated responses. They can also try adjusting the temperature and other sampling parameters to see how it affects the creativity and coherence of the output. Additionally, users may want to explore the various quantization options provided by TheBloke to find the best balance between performance and accuracy for their specific use case.
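
One way to experiment with temperature and other sampling parameters, as suggested above, is the Transformers text-generation pipeline. The repository id, prompt template, and parameter values below are illustrative assumptions rather than settings recommended by the maintainer.

```python
from transformers import pipeline

# Hypothetical sketch: compare two temperatures on the same prompt. Loading a
# GPTQ repo this way assumes the GPTQ runtime dependencies are installed.
generator = pipeline(
    "text-generation",
    model="TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ",
    device_map="auto",
)

prompt = "USER: Explain why the sky looks blue, as if to a curious child.\nASSISTANT:"
for temperature in (0.4, 1.1):
    result = generator(prompt, max_new_tokens=150, do_sample=True, temperature=temperature, top_p=0.9)
    print(f"--- temperature={temperature} ---")
    print(result[0]["generated_text"])
```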
