lllyasviel

Average Model Cost: $0.0000

Number of Runs: 1,212,831

Models by this creator

control_v11p_sd15_canny

The control_v11p_sd15_canny model is an image-to-image generation model that uses the ControlNet neural network structure to add extra conditions to diffusion models. This checkpoint is conditioned on Canny edges and is a conversion of the original checkpoint into the diffusers format, for use in combination with Stable Diffusion. The model was developed by Lvmin Zhang and Maneesh Agrawala and is licensed under the CreativeML OpenRAIL M license. ControlNet enables conditional inputs such as edge maps, segmentation maps, and keypoints. The checkpoint was trained on Stable Diffusion v1-5, and that version is recommended for use with it.
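
As a rough sketch of how this checkpoint is typically wired up with the diffusers library (the input URL is a placeholder, and the Canny thresholds are illustrative):

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    UniPCMultistepScheduler,
)
from diffusers.utils import load_image

# Load a source image (placeholder URL) and extract Canny edges with OpenCV.
image = np.array(load_image("https://example.com/input.png"))
edges = cv2.Canny(image, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))  # 3-channel edge map

# Attach the ControlNet to a Stable Diffusion v1-5 pipeline, the base model
# this checkpoint was trained against.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

# The edge map fixes the composition; the prompt controls content and style.
result = pipe("a bird on a branch", image=control_image, num_inference_steps=20).images[0]
result.save("canny_out.png")
```

The same pipeline skeleton applies to every checkpoint on this page; only the ControlNet repository id and the preprocessing step change.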

201.8K runs · Huggingface

control_v11f1p_sd15_depth

The "control_v11f1p_sd15_depth" model is an image-to-image translation model that can generate high-quality depth maps from input images. It is trained to accurately estimate the depth information of objects in an image, allowing for applications such as 3D reconstruction, augmented reality, and depth-based image editing. The model utilizes a deep neural network architecture and has been trained on a large dataset to produce visually and geometrically consistent depth maps. It can be used as a tool for various computer vision tasks that require depth estimation.

191.6K runs · Huggingface

sd-controlnet-canny

ControlNet - Canny Version is a neural network structure that adds extra conditions to diffusion models. This checkpoint corresponds to the ControlNet conditioned on Canny edges and is used in combination with Stable Diffusion, enabling conditional inputs like edge maps, segmentation maps, and keypoints. The model was trained on 3 million edge-image/caption pairs for 600 GPU-hours. ControlNet allows end-to-end learning of task-specific conditions and remains robust even with small training datasets; it can be trained on personal devices or scaled to large amounts of data on powerful computation clusters.
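
Since preprocessing and pipeline setup are identical to the v1.1 canny example above, this earlier v1.0 checkpoint is effectively a drop-in swap; only the repository id changes:

```python
import torch
from diffusers import ControlNetModel

# Same usage as the v1.1 canny example; only the checkpoint id differs.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
```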

188.7K runs · Huggingface

control_v11p_sd15_openpose

The control_v11p_sd15_openpose model is a ControlNet, a neural network structure that controls diffusion models by adding extra conditions; this checkpoint is conditioned on Openpose images. It is a conversion of the original checkpoint into the diffusers format and is used in combination with Stable Diffusion. ControlNet enriches the ways large diffusion models can be controlled, allowing conditional inputs such as edge maps, segmentation maps, and keypoints. The checkpoint was trained on Stable Diffusion v1-5 and can also be used with other diffusion models, such as Dreamboothed Stable Diffusion. Openpose 1.1 brings better accuracy, support for more inputs (hands and faces), and an improved training dataset. External dependencies are required to process input images into the auxiliary conditioning.
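
A sketch of the pose-extraction step using the controlnet_aux package (the annotator repository id follows common usage for these checkpoints, and the URL is a placeholder); the hand_and_face flag corresponds to the extra inputs supported by Openpose 1.1:

```python
from controlnet_aux import OpenposeDetector
from diffusers.utils import load_image

image = load_image("https://example.com/person.png")  # placeholder URL

# Pose annotator: with hand_and_face=True it also draws hand and face
# keypoints, which this v1.1 checkpoint understands.
processor = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
control_image = processor(image, hand_and_face=True)
control_image.save("pose.png")
```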

131.5K runs · Huggingface

control_v11p_sd15_lineart

The control_v11p_sd15_lineart model is a ControlNet, a neural network structure used to control diffusion models by adding extra conditions; this checkpoint is conditioned on lineart images, enabling more controlled and specific text-to-image outputs. It was developed by Lvmin Zhang and Maneesh Agrawala and is available under the CreativeML OpenRAIL M license. The checkpoint was trained on Stable Diffusion v1-5, which is the recommended base model, though it can be used with other diffusion models as well; ControlNet can be trained on personal devices or scaled to large computation clusters. For more information and installation instructions, refer to the associated GitHub repository and paper.
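
A sketch of the lineart preprocessing, again via controlnet_aux (placeholder URL); the resulting drawing conditions the same Stable Diffusion pipeline shown earlier:

```python
from controlnet_aux import LineartDetector
from diffusers.utils import load_image

image = load_image("https://example.com/photo.png")  # placeholder URL

# Converts a photo into a line drawing suitable as conditioning input.
processor = LineartDetector.from_pretrained("lllyasviel/Annotators")
control_image = processor(image)
```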

105.6K runs · Huggingface

control_v11p_sd15_scribble

ControlNet v1.1 is a neural network structure that adds extra conditions to diffusion models. This model is conditioned on scribble images and is used in combination with Stable Diffusion to generate images from scribble input. It was trained on Stable Diffusion v1-5 for 200 GPU-hours and is compatible with other diffusion models as well. The training dataset was improved over v1.0 to remove duplicate images, low-quality images, and incorrect prompts, and the model is designed to handle thick scribbles. For more information, refer to the associated GitHub repository and paper.
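
A sketch of producing the scribble condition with controlnet_aux (placeholder URL); the HED detector's scribble mode yields the thick strokes this checkpoint was trained to handle:

```python
from controlnet_aux import HEDdetector
from diffusers.utils import load_image

image = load_image("https://example.com/photo.png")  # placeholder URL

# HED edge detection in scribble mode produces thick, sketch-like strokes.
processor = HEDdetector.from_pretrained("lllyasviel/Annotators")
control_image = processor(image, scribble=True)
```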

105.0K runs · Huggingface

control_v11f1e_sd15_tile

ControlNet v1.1 is a neural network structure that adds extra conditions to diffusion models. This model is conditioned on tiled images, enabling tasks such as super-resolution or regenerating detail in an image at the same size as the input. It was trained on the Stable Diffusion v1.5 checkpoint, which is the recommended base model. ControlNet enables conditional inputs like edge maps, segmentation maps, and keypoints, and has been shown to be robust even with small training datasets; it can be trained on personal devices or scaled to large amounts of data on powerful computation clusters. This checkpoint is a conversion of the original into the diffusers format.
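
A sketch of the super-resolution-style use described above, assuming the img2img variant of the ControlNet pipeline so that a naively upscaled image serves as both source and condition (URL is a placeholder):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

# A small or blurry source image, naively upscaled to the target size.
source = load_image("https://example.com/low_res.png").resize((1024, 1024))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# The upscaled image is both the img2img source and the tile condition;
# at strength=1.0 the ControlNet regenerates plausible fine detail.
result = pipe(
    "best quality, sharp details",
    image=source,
    control_image=source,
    strength=1.0,
    num_inference_steps=30,
).images[0]
result.save("tile_out.png")
```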

69.8K runs · Huggingface
