

Maintainer: haor



Last updated 5/17/2024


Model Link: View on HuggingFace
API Spec: View on HuggingFace
Github Link: No Github link provided
Paper Link: No paper link provided


Model overview

The Evt_V2 model is a text-to-image AI model developed by the maintainer haor. It is an experimental model fine-tuned on a dataset of 15,000 images, mostly from the Pixiv daily ranking and some NSFW anime images. The model is based on the animefull-latest base model and exhibits an artistic anime-influenced style.

The model is capable of generating highly detailed images with attributes like "beautiful detailed eyes", "long hair", and "dramatic angle". The examples provided show a range of anime-style characters and scenes, with a focus on portraits and upper body shots. Similar models like Evt_V4-preview and Ekmix-Diffusion also explore anime-influenced text-to-image generation.

Model inputs and outputs


  • Textual prompts: The model takes in textual prompts that describe the desired image, using a combination of specific attributes like character descriptions, scene elements, and artistic styles.


  • Generated images: The model outputs high-quality, artistic anime-style images that match the provided textual prompts.


Capabilities

The Evt_V2 model excels at generating highly detailed, visually striking anime-inspired images. The examples demonstrate the model's ability to produce portraits with expressive eyes, flowing hair, and cinematic lighting and composition. By leveraging the artistic style of the training data, the model is able to imbue the generated images with a distinct anime aesthetic.
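The attribute-tag prompting style described above can be sketched as a small helper. This is a hypothetical illustration, not part of the model's API; the tag names come from the examples mentioned earlier.

```python
# Hypothetical helper: assembles the comma-separated tag prompt that
# anime-style models like Evt_V2 respond to, e.g.
# "beautiful detailed eyes, long hair, dramatic angle".
def build_prompt(subject: str, attributes: list[str]) -> str:
    """Join a subject with attribute tags into a single prompt string."""
    return ", ".join([subject] + attributes)

prompt = build_prompt(
    "1girl",
    ["beautiful detailed eyes", "long hair", "dramatic angle"],
)
# prompt == "1girl, beautiful detailed eyes, long hair, dramatic angle"
```

Keeping the subject first and the stylistic tags after it mirrors how the example prompts in the model card are structured.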

What can I use it for?

The Evt_V2 model could be useful for a variety of applications, such as:

  • Concept art and illustration: The model's ability to generate visually compelling anime-style images makes it a valuable tool for artists and concept designers working on projects with an anime or manga-inspired aesthetic.

  • Character design: The model's skill in rendering detailed character portraits could aid in the development of unique, expressive anime-style characters for various creative projects.

  • Anime-themed content generation: The artistic flair of the Evt_V2 model makes it well-suited for generating images to be used in anime-themed media, such as fan art, webcomics, or promotional materials.

Things to try

Experimenting with different prompt styles and modifiers can help you get the most out of the Evt_V2 model. Try using detailed character descriptions, incorporating specific artistic styles, or combining various scene elements to see how the model responds. Additionally, exploring the interplay between the model's anime-influenced style and more realistic or surreal elements could lead to unique and unexpected results.
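One way to experiment systematically, as suggested above, is to enumerate prompt variants by crossing a base description with style and angle modifiers. The specific tags below are illustrative assumptions, not values prescribed by the model:

```python
# Hypothetical sketch: cross a base description with style and angle
# modifiers to get a grid of prompt variants to compare one at a time.
from itertools import product

base = "portrait of an anime girl, upper body"
styles = ["cinematic lighting", "watercolor", "surreal"]
angles = ["dramatic angle", "front view"]

variants = [f"{base}, {style}, {angle}" for style, angle in product(styles, angles)]
# 3 styles x 2 angles -> 6 prompt variants
```

Generating the grid up front makes it easy to hold everything constant (seed, sampler, steps) while varying only the modifiers, so differences in the outputs can be attributed to the prompt.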

This summary was produced with help from an AI and may contain inaccuracies; check out the links to read the original source documents!

Related Models






Evt_V3

Model overview

The Evt_V3 model is an AI image generation model developed by the maintainer haor. It is based on the previous Evt_V2 model, with 20 epochs of fine-tuning using a dataset of 35,467 images. The model is capable of generating high-quality, highly detailed anime-style images featuring characters with intricate features, expressions, and environments. Compared to the Evt_V2 model, Evt_V3 has been further refined and trained on a larger dataset, resulting in improved quality and consistency of the generated outputs. The model can produce images with a wide range of styles, from detailed character portraits to complex, cinematic scenes.

Model inputs and outputs

  • Text prompts: Descriptions of the desired image, including details about the subject, style, and composition.

  • Generated images: High-quality, highly detailed images in the anime-style format, with a resolution of 512x512 pixels. The model can generate a variety of scenes, characters, and environments, ranging from portraits to complex, multi-element compositions.

Capabilities

The Evt_V3 model is capable of generating detailed, visually striking anime-style images. It can produce characters with intricate facial features, hairstyles, and expressions, as well as complex environments and scenes with elements like detailed skies, water, and lighting. The model's ability to generate such high-quality, cohesive images is a testament to the quality of its training data and fine-tuning process.

What can I use it for?

The Evt_V3 model can be a valuable tool for a variety of creative projects, such as:

  • Concept art and illustrations for anime, manga, or other visual media

  • Character design and development for games, animations, or other storytelling media

  • Generating inspirational or reference images for artists and creatives

  • Producing high-quality, visually striking images for use in marketing, advertising, or social media

As a powerful AI-driven image generation tool, Evt_V3 can help streamline and enhance the creative process, allowing users to quickly explore and refine ideas without the constraints of traditional media.

Things to try

One interesting aspect of the Evt_V3 model is its ability to generate images with a strong sense of atmosphere and mood. By carefully crafting prompts that incorporate elements like "cinematic lighting," "dramatic angle," or "beautiful detailed water," users can create breathtaking, almost cinematic scenes that evoke a particular emotional response or narrative.

Another area to explore is the model's handling of character expressions and poses. The examples provided demonstrate the model's skill in rendering nuanced facial expressions and body language, which can be a crucial element in crafting compelling and believable characters. Experimenting with prompts that focus on these details can yield impactful results.

Overall, the Evt_V3 model offers a rich and versatile set of capabilities that can enable a wide range of creative projects and applications. By exploring the model's strengths and pushing the boundaries of what it can do, users can unlock new possibilities in the world of AI-driven art and design.
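The atmosphere-driven prompting described above can be organized as reusable mood presets. This is a hypothetical sketch: the preset names and the "soft lighting" tag are assumptions for illustration, while the other tags come from the text.

```python
# Hypothetical sketch: preset "mood" tag bundles applied to a base prompt
# to steer the generated scene's atmosphere.
MOODS = {
    "cinematic": ["cinematic lighting", "dramatic angle"],
    "serene": ["beautiful detailed water", "soft lighting"],  # "soft lighting" is an assumption
}

def with_mood(base: str, mood: str) -> str:
    """Append a mood preset's tags to a base prompt."""
    return ", ".join([base] + MOODS[mood])

prompt = with_mood("1girl standing by a lake", "cinematic")
# "1girl standing by a lake, cinematic lighting, dramatic angle"
```

Presets like these make it cheap to re-render the same subject under different atmospheres and compare the emotional effect of each.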







Evt_V4-preview

Model overview

The Evt_V4-preview model is an experimental text-to-image diffusion model created by maintainer haor that is focused on generating animation-style images. It is part of the EVT series, which aims to fine-tune large datasets to produce diverse artistic styles. Compared to previous EVT models, Evt_V4-preview uses an even larger dataset, resulting in images that have a cosine similarity of 85% with the ACertainty model. Similar models include Stable Diffusion v1-4, a general-purpose text-to-image diffusion model, and Epic Diffusion, a highly customized version of Stable Diffusion aimed at producing high-quality results in a wide range of styles.

Model inputs and outputs

  • Prompt: A text description of the desired image, which can include specific details about the content, style, and artistic references.

  • Image: A generated image that corresponds to the provided text prompt. The model can produce images in a variety of artistic styles, including animation-influenced aesthetics.

Capabilities

The Evt_V4-preview model is capable of generating diverse, artistically styled images from text prompts. The model excels at producing anime-inspired artwork, as evidenced by the provided samples that feature detailed characters, fantastical environments, and a vibrant color palette.

What can I use it for?

The Evt_V4-preview model is well-suited for artistic and creative applications, such as generating concept art, character designs, and illustrations. It could be used to quickly produce draft images for creative projects or as a tool for ideation and exploration. However, the model's capabilities are not limited to animation-style art, and it may be able to generate images in a range of other artistic genres as well.

Things to try

One interesting aspect of the Evt_V4-preview model is its potential to generate unique animation-inspired styles that differ from traditional anime or manga aesthetics. Experimenting with different prompts that blend various artistic influences, such as combining anime elements with western comic book styles or surreal, dreamlike compositions, could yield intriguing results. Additionally, trying the model with prompts that focus on less common subject matter, such as sci-fi or fantasy settings, might uncover new creative directions for the model's animation-influenced capabilities.
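Blending artistic influences, as suggested above, amounts to merging the tag lists of two styles into one prompt. The helper and the tag lists below are hypothetical illustrations, not values taken from the model card:

```python
# Hypothetical helper: merge tag lists from two artistic influences into
# one prompt, dropping duplicate tags while keeping their first-seen order.
def blend_styles(tags_a: list[str], tags_b: list[str]) -> str:
    seen: set[str] = set()
    merged: list[str] = []
    for tag in tags_a + tags_b:
        if tag not in seen:
            seen.add(tag)
            merged.append(tag)
    return ", ".join(merged)

anime = ["anime style", "detailed eyes", "vibrant colors"]
western = ["comic book inking", "bold outlines", "vibrant colors"]
prompt = blend_styles(anime, western)
# "anime style, detailed eyes, vibrant colors, comic book inking, bold outlines"
```

Deduplicating while preserving order keeps the blended prompt compact and avoids repeating a tag that both influences happen to share.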







EimisAnimeDiffusion_1.0v

Model overview

The EimisAnimeDiffusion_1.0v is a diffusion model trained by eimiss on high-quality and detailed anime images. It is capable of generating anime-style artwork from text prompts. The model builds upon the capabilities of similar anime text-to-image models like waifu-diffusion and Animagine XL 3.0, offering enhancements in areas such as hand anatomy, prompt interpretation, and overall image quality.

Model inputs and outputs

  • Textual prompts: Text prompts that describe the desired anime-style artwork, such as "1girl, Phoenix girl, fluffy hair, war, a hell on earth, Beautiful and detailed explosion".

  • Generated images: High-quality, detailed anime-style images that match the provided text prompts. The generated images can depict a wide range of scenes, characters, and environments.

Capabilities

The EimisAnimeDiffusion_1.0v model demonstrates strong capabilities in generating anime-style artwork. It can create detailed and aesthetically pleasing images of anime characters, landscapes, and scenes. The model handles a variety of prompts well, from character descriptions to complex scenes with multiple elements.

What can I use it for?

The EimisAnimeDiffusion_1.0v model can be a valuable tool for artists, designers, and hobbyists looking to create anime-inspired artwork. It can be used to generate concept art, character designs, or illustrations for personal projects, games, or animations. The model's ability to produce high-quality images from text prompts makes it accessible for users with varying artistic skills.

Things to try

One interesting aspect of the EimisAnimeDiffusion_1.0v model is its ability to generate images with different art styles and moods by using specific prompts. For example, adding tags like "masterpiece" or "best quality" can steer the model towards producing more polished, high-quality artwork, while negative prompts like "lowres" or "bad anatomy" can help avoid undesirable artifacts. Experimenting with prompt engineering and understanding the model's strengths and limitations can lead to the creation of unique and captivating anime-style images.
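The quality and negative tags mentioned above can be bundled into a small helper that produces a positive/negative prompt pair. The helper itself is hypothetical; the tags are the ones named in the text:

```python
# Hypothetical sketch: build a positive/negative prompt pair using the
# quality tags ("masterpiece", "best quality") and the anti-artifact
# negative tags ("lowres", "bad anatomy") discussed above.
QUALITY_TAGS = ["masterpiece", "best quality"]
NEGATIVE_TAGS = ["lowres", "bad anatomy"]

def prompt_pair(description: str) -> tuple[str, str]:
    """Return (positive_prompt, negative_prompt) for a scene description."""
    positive = ", ".join(QUALITY_TAGS + [description])
    negative = ", ".join(NEGATIVE_TAGS)
    return positive, negative

pos, neg = prompt_pair("1girl, Phoenix girl, fluffy hair")
# pos == "masterpiece, best quality, 1girl, Phoenix girl, fluffy hair"
# neg == "lowres, bad anatomy"
```

Keeping the quality and negative tags in constants makes it easy to reuse the same baseline across every generation and vary only the scene description.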







Animagine XL 2.0

Model overview

Animagine XL 2.0 is an advanced latent text-to-image diffusion model designed to create high-resolution, detailed anime images. It's fine-tuned from Stable Diffusion XL 1.0 using a high-quality anime-style image dataset. This model, an upgrade from Animagine XL 1.0, excels in capturing the diverse and distinct styles of anime art, offering improved image quality and aesthetics. The model is maintained by Linaqruf, who has also developed a collection of LoRA (Low-Rank Adaptation) adapters to customize the aesthetic of generated images. These adapters allow users to create anime-style artwork in a variety of distinctive styles, from the vivid Pastel Style to the intricate Anime Nouveau.

Model inputs and outputs

  • Text prompts: Descriptions of the desired anime-style image, including details about the character, scene, and artistic style.

  • High-resolution anime images: Detailed, anime-inspired images generated from the provided text prompts, typically 1024x1024 pixels or larger.

Capabilities

Animagine XL 2.0 excels at generating diverse and distinctive anime-style artwork. The model can capture a wide range of anime character designs, from colorful and vibrant to dark and moody. It also demonstrates strong abilities in rendering detailed backgrounds, intricate clothing, and expressive facial features.

The inclusion of the LoRA adapters further enhances the model's capabilities, allowing users to tailor the aesthetic of the generated images to their desired style. This flexibility makes Animagine XL 2.0 a valuable tool for anime artists, designers, and enthusiasts who want to create unique and visually striking anime-inspired content.

What can I use it for?

Animagine XL 2.0 and its accompanying LoRA adapters can be used for a variety of applications, including:

  • Anime character design: Generate detailed and unique anime character designs for use in artwork, comics, animations, or video games.

  • Anime-style illustrations: Create stunning anime-inspired illustrations, ranging from character portraits to complex, multi-figure scenes.

  • Anime-themed content creation: Produce visually appealing anime-style assets for use in various media, such as social media, websites, or marketing materials.

  • Anime fan art: Generate fan art of popular anime characters and series, allowing fans to explore and share their creativity.

By leveraging the model's capabilities, users can streamline their content creation process, experiment with different artistic styles, and bring their anime-inspired visions to life.

Things to try

One interesting feature of Animagine XL 2.0 is the ability to fine-tune the generated images through the use of the LoRA adapters. By applying different adapters, users can explore a wide range of anime art styles and aesthetics, from the bold and vibrant to the delicate and intricate.

Another aspect worth exploring is the model's handling of complex prompts. While the model performs well with detailed, structured prompts, it can also generate interesting results when given more open-ended or abstract prompts. Experimenting with different prompt structures and levels of detail can lead to unexpected and unique anime-style images.

Additionally, users may want to explore the model's capabilities in generating dynamic scenes or multi-character compositions. By incorporating elements like action, emotion, or narrative into the prompts, users can push the boundaries of what the model can create, resulting in compelling and visually striking anime-inspired artwork.
