Jagilley
Rank:
Average Model Cost: $0.0352
Number of Runs: 43,998,337
Models by this creator

controlnet-scribble
The controlnet-scribble model generates detailed images from rough scribbled drawings guided by a text prompt. It is built on ControlNet, a neural network structure that adds spatial conditioning to a pretrained diffusion model (here, Stable Diffusion): the scribble constrains the composition while the prompt controls content and style. This makes it useful wherever a detailed image must be produced from nothing more than a simple sketch and a short description.
$0.044/run
33.6M
Replicate
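All of these models are served through the Replicate API. The sketch below calls controlnet-scribble with the Replicate Python client; the input names (image, prompt) follow the model's published schema at the time of writing, but a version pin may be required, so verify the exact call against the model page.

```python
# pip install replicate; set REPLICATE_API_TOKEN in your environment.
import replicate

# Turn a rough scribble plus a text prompt into a detailed image.
# A version pin ("jagilley/controlnet-scribble:<version>") may be
# required, and input names should be verified against the schema.
output = replicate.run(
    "jagilley/controlnet-scribble",
    input={
        "image": open("scribble.png", "rb"),
        "prompt": "a turtle swimming in a river, photorealistic",
    },
)
print(output)  # URL(s) of the generated image(s)
```

The other variants below follow the same call pattern; a few expose extra preprocessor-specific inputs, illustrated in the sketches that follow.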

controlnet-hough
The controlnet-hough model modifies images using M-LSD line detection. It extracts the straight-line segments of an input image and conditions generation on them, preserving linear structure such as room layouts, furniture edges, and building outlines while a text prompt drives the new appearance.
$0.021/run
8.4M
Replicate

controlnet-canny
The controlnet-canny model is an image-to-image model conditioned on Canny edge detection. It extracts an edge map from the input image and constrains generation to follow it, so the output keeps the input's edge structure while the prompt controls everything else. The Canny detector's thresholds are exposed as parameters, giving users control over how much edge detail is preserved and emphasized.
$0.032/run
691.2K
Replicate
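As a sketch of the threshold control mentioned above: the parameter names low_threshold and high_threshold are assumptions based on the Canny detector's standard knobs and should be checked against the model's schema.

```python
import replicate

# Higher thresholds keep only strong edges; lower ones preserve fine
# detail. Parameter names are assumptions; verify against the schema.
output = replicate.run(
    "jagilley/controlnet-canny",
    input={
        "image": open("photo.png", "rb"),
        "prompt": "a bronze sculpture, studio lighting",
        "low_threshold": 100,   # assumed default around 100
        "high_threshold": 200,  # assumed default around 200
    },
)
```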

controlnet-depth2img
The controlnet-depth2img model is an image-to-image translation model conditioned on depth maps. It estimates a depth map from the input image and generates a new image that respects the same spatial layout. This is useful for image editing, depth-based inpainting, and adding 3D-like effects. Like the other ControlNet variants, it pairs a pretrained diffusion model with a conditioning network trained on paired images and depth maps.
$0.032/run
376.4K
Replicate
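A sketch of depth-faithful restyling. The a_prompt and n_prompt inputs (added and negative prompts) appear across this model family's schemas, but treat the names as assumptions and verify them on the model page.

```python
import replicate

# Restyle an image while keeping its spatial layout: the model derives
# a depth map from the input and conditions generation on it.
output = replicate.run(
    "jagilley/controlnet-depth2img",
    input={
        "image": open("room.jpg", "rb"),
        "prompt": "the same room as a watercolor illustration",
        "a_prompt": "best quality, extremely detailed",  # assumed name
        "n_prompt": "lowres, blurry, worst quality",     # assumed name
    },
)
```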

controlnet-hed
The controlnet-hed model is an image-to-image translation model conditioned on HED (Holistically-Nested Edge Detection) maps, soft edge maps that highlight the boundaries in an image. It extracts a HED map from the input image and generates an output that follows those boundaries, which suits editing tasks where changes must respect the original's edges and silhouettes.
$0.018/run
300.2K
Replicate

controlnet-normal
The controlnet-normal model is an image-to-image translation model conditioned on normal maps, which encode the orientation of surfaces in a scene. It estimates a normal map from the input image and uses it to constrain generation, preserving surface geometry. This is useful for image editing and for generating content with consistent 3D structure, such as augmented reality assets.
$0.037/run
262.3K
Replicate

controlnet-pose
The controlnet-pose model is an image-to-image translation model for images containing people. It detects human poses in the input image and conditions generation on the resulting skeleton map, so the generated subject holds the same pose while the prompt changes identity, clothing, or scene, which opens up a range of creative and practical applications.
$0.039/run
143.4K
Replicate
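A sketch of the typical pose-transfer use: keep the subject's pose, change everything else. Input names are assumed as above.

```python
import replicate

# The detected pose skeleton is preserved; the prompt swaps identity,
# clothing, and scene. Input names assumed from the family's schema.
output = replicate.run(
    "jagilley/controlnet-pose",
    input={
        "image": open("dancer.jpg", "rb"),
        "prompt": "an astronaut dancing on the surface of the moon",
    },
)
```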

controlnet-seg
The controlnet-seg model modifies images using semantic segmentation, the classification of each pixel into a category such as sky, building, or person. Generation is conditioned on the segmentation map, so each labeled region keeps its class and extent, making it easier to restyle specific objects or areas within an image.
$0.037/run
123.5K
Replicate

controlnet
ControlNet is an image-to-image model for modifying images while preserving their structure. Given an input image and a prompt, it makes controlled, fine-grained edits that keep the original composition intact, exposing the conditioning modes of the variants above (edges, depth, pose, and so on) through a single interface.
$0.060/run
50.6K
Replicate
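Because the unified model covers several conditioning modes, its schema includes a selector for the mode; it is shown below as structure, which is an assumption based on the model's published inputs, so verify before use.

```python
import replicate

# One model, several conditioning modes. The "structure" selector is
# an assumption based on the published inputs; verify on the page.
output = replicate.run(
    "jagilley/controlnet",
    input={
        "image": open("input.png", "rb"),
        "prompt": "a futuristic city at dusk",
        "structure": "canny",  # e.g. canny, depth, hed, pose, scribble, seg
    },
)
```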

stable-diffusion-depth2img
The stable-diffusion-depth2img model generates variations of an input image while retaining its shape and depth. It is the depth-conditioned variant of Stable Diffusion: a depth map inferred from the input image conditions the diffusion process, so outputs vary in appearance while keeping the original's spatial structure. This is useful for producing diverse images that must share a layout.
$0.032/run
50.5K
Replicate
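A final sketch for the depth-to-image variant. The prompt_strength input (how strongly the prompt overrides the source image, 0 to 1) is an assumed name, consistent with Stable Diffusion's img2img schemas on Replicate; verify before use.

```python
import replicate

# Generate a variation that keeps the input's shape and depth.
# "prompt_strength" is an assumed parameter name; check the schema.
output = replicate.run(
    "jagilley/stable-diffusion-depth2img",
    input={
        "image": open("portrait.jpg", "rb"),
        "prompt": "a marble statue in a museum, dramatic lighting",
        "prompt_strength": 0.8,
    },
)
```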