Nitrosocke

Rank:

Average Model Cost: $0.0041

Number of Runs: 165,897

Models by this creator

arcane-diffusion

nitrosocke

arcane-diffusion is a text-to-image model: a Stable Diffusion checkpoint fine-tuned with the Dreambooth technique. It takes a text prompt and generates a corresponding image based on the description provided (a usage sketch follows this entry).

$0.009/run

34.7K

Replicate
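
A minimal sketch of calling this model through the Replicate Python client. The model slug, the `prompt` input name, and the style token are assumptions taken from this listing rather than from the model's published API schema:

```python
import replicate  # requires the REPLICATE_API_TOKEN environment variable to be set

# Assumed slug; older client versions need "nitrosocke/arcane-diffusion:<version-hash>",
# which you can copy from the model's Replicate page.
output = replicate.run(
    "nitrosocke/arcane-diffusion",
    input={"prompt": "arcane style portrait of a sorceress, highly detailed"},
)
print(output)  # typically a list of URLs pointing to the generated image(s)
```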

redshift-diffusion

The redshift-diffusion model is a Stable Diffusion model fine-tuned with Dreambooth for creating 3D-style artwork. Given a text description, it generates a corresponding image rendered in a high-quality 3D aesthetic, providing a distinctive, creative way of producing 3D artwork from plain text.

$-/run

34.5K

Replicate

mo-di-diffusion

The mo-di-diffusion model is a text-to-image model: it takes a textual description as input and generates a corresponding image as output. It is a diffusion model, meaning generation starts from random noise that is iteratively denoised into an image, guided by the text. The model is trained to learn the relationship between textual descriptions and corresponding images, and can then generate new images from new textual inputs (a usage sketch follows this entry).

$-/run

20.9K

Huggingface
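
To make the description above concrete, here is a minimal sketch using the Hugging Face diffusers library. The repo id nitrosocke/mo-di-diffusion and the "modern disney style" prompt token are assumptions based on this listing; check the model card before relying on them:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned checkpoint as a standard Stable Diffusion pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "nitrosocke/mo-di-diffusion",   # assumed Hub repo id
    torch_dtype=torch.float16,
).to("cuda")

# The pipeline starts from random noise and denoises it over `num_inference_steps`
# iterations, guided by the prompt. "modern disney style" is assumed to be the
# style token this model was fine-tuned on.
image = pipe(
    "modern disney style portrait of a red fox",
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("fox.png")
```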

Nitro-Diffusion

Nitro-Diffusion is a multi-style model trained from scratch, giving fine control over style mixing, style weighting, and single-style use when generating images. It is a fine-tuned Stable Diffusion model trained on three art styles simultaneously while keeping each style separate from the others, so it can generate multi-style characters and scenes as well as single-style characters (a prompt-mixing sketch follows this entry). It supports a Gradio web UI for easy use and can be exported to ONNX, MPS, and/or FLAX/JAX. The model is open access under a CreativeML OpenRAIL-M license specifying rights and usage, and video demos of generated images are available.

$-/run

14.6K

Huggingface
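
A short sketch of single-style versus mixed-style prompting with this model, again via diffusers. The Hub id and the style tokens ("archer style", "arcane style", "modern disney style") are assumptions; the model card documents the exact tokens and how to weight them:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "nitrosocke/Nitro-Diffusion",   # assumed Hub repo id
    torch_dtype=torch.float16,
).to("cuda")

# Single style: use one assumed style token.
single_style = "archer style portrait of a detective in a rainy city"
# Mixed style: stack several assumed style tokens in one prompt.
mixed_style = "archer style arcane style modern disney style knight in a forest"

for name, prompt in [("single", single_style), ("mixed", mixed_style)]:
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"nitro_{name}.png")
```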

redshift-diffusion

Redshift-diffusion is a machine learning model that converts text descriptions into images. It is built on the diffusion-model framework: image generation is conditioned on text embeddings, and the model is trained on a large dataset of text-image pairs so that it learns to produce images that are semantically consistent with the given descriptions. The generated images can then be used in applications such as content creation, virtual reality, and more.

$-/run

14.6K

Huggingface

Future-Diffusion

The Future-Diffusion model is a fine-tuned version of the Stable Diffusion 2.0 model, trained on high-quality 3D images with a futuristic sci-fi theme so that it generates images with that aesthetic when prompted with text. It was trained with the diffusers-based Dreambooth method, using prior-preservation loss and the train-text-encoder flag (a sketch of that loss follows this entry). The model is still at an early stage and should be viewed as an experimental prototype. It is available under a CreativeML Open RAIL++-M License.

$-/run

11.5K

Huggingface
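
The entry above mentions Dreambooth training with a prior-preservation loss. The snippet below is an illustrative sketch of that objective as implemented in diffusers-style training scripts, not the author's actual training code:

```python
import torch
import torch.nn.functional as F

# In Dreambooth fine-tuning with prior preservation, each batch stacks instance
# images (the new style data) on top of class images (generic prior samples).
# The UNet's noise predictions are split back apart and two losses are combined.
def dreambooth_loss(noise_pred: torch.Tensor,
                    noise_target: torch.Tensor,
                    prior_loss_weight: float = 1.0) -> torch.Tensor:
    pred_instance, pred_prior = noise_pred.chunk(2, dim=0)
    target_instance, target_prior = noise_target.chunk(2, dim=0)

    instance_loss = F.mse_loss(pred_instance, target_instance)  # learn the new style
    prior_loss = F.mse_loss(pred_prior, target_prior)            # keep the class prior intact
    return instance_loss + prior_loss_weight * prior_loss

# Toy usage with random tensors standing in for UNet noise predictions/targets.
pred = torch.randn(4, 4, 64, 64)
target = torch.randn(4, 4, 64, 64)
print(dreambooth_loss(pred, target).item())
```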

Ghibli-Diffusion

Ghibli-Diffusion is a fine-tuned version of the Stable Diffusion model trained specifically on images from modern Studio Ghibli anime feature films. It generates anime-style images and is used with prompts that include the phrase "ghibli style". The model was trained with diffusers-based Dreambooth training, using prior-preservation loss and the train-text-encoder flag, and can be exported to ONNX, MPS, and/or FLAX/JAX formats. It is open access under a CreativeML OpenRAIL-M license, which specifies certain usage restrictions.

$-/run

9.9K

Huggingface

Arcane-Diffusion

The Arcane-Diffusion model is a fine-tuned version of the Stable Diffusion model trained on images from the TV show Arcane. It generates images in the "arcane" style when the "arcane" token is used in prompts. The model has gone through several versions, each improving the quality and fidelity of the generated images, and it was trained with diffusers using prior-preservation loss. It can be exported to ONNX, MPS, and FLAX/JAX formats (a sketch of one possible ONNX export route follows this entry). Sample images from the model and the training set are provided for reference.

$-/run

9.4K

Huggingface
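
The entry above notes that the checkpoint can be exported to ONNX. One possible route (an assumption, via Hugging Face's optimum ONNX Runtime integration rather than anything documented for this specific repo) looks like this:

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

# Export the PyTorch checkpoint to ONNX on the fly and run it with ONNX Runtime.
# The repo id and the optimum-based export path are assumptions; the model card
# may describe a different route.
pipe = ORTStableDiffusionPipeline.from_pretrained(
    "nitrosocke/Arcane-Diffusion",
    export=True,
)
image = pipe("arcane style portrait of a scientist in a neon-lit lab").images[0]
image.save("arcane_onnx.png")

pipe.save_pretrained("./arcane-diffusion-onnx")  # keep the exported ONNX files
```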

archer-diffusion

Archer-Diffusion is a text-to-image model built by fine-tuning Stable Diffusion with the Dreambooth technique. It produces high-quality images from textual input and is designed to generate visually appealing, coherent images that match the given text descriptions.

$0.032/run

7.5K

Replicate
