
Controlnet Hed


The ControlNet-HED model is an image-to-image translation model designed to modify images using HED (Holistically-Nested Edge Detection) maps. An HED map is a soft-edge representation of an image that highlights its edges and boundaries. Given an input image and a target HED map, the model generates an output image guided by that map: during training it learns the relationship between images and their HED maps, and at inference it applies that learning to produce the modified result. This is useful for tasks such as image editing, where changes need to respect an image's edges and boundaries.
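To make the conditioning signal concrete: real HED is a learned convolutional network (Xie & Tu, 2015) whose output is a soft edge map, with per-pixel values indicating edge strength. As a rough illustration only, not the actual HED algorithm, a gradient-magnitude stand-in can be sketched in a few lines of numpy:

```python
import numpy as np

def soft_edge_map(img: np.ndarray) -> np.ndarray:
    """Rough stand-in for an HED-style soft edge map.

    Real HED is a learned CNN; here we just use gradient magnitude,
    normalized to [0, 1], to illustrate the kind of conditioning
    image ControlNet-HED consumes.
    """
    img = img.astype(np.float32)
    gy, gx = np.gradient(img)   # per-pixel intensity gradients
    mag = np.hypot(gx, gy)      # edge strength
    return mag / mag.max() if mag.max() > 0 else mag

# A tiny grayscale "image" with one vertical boundary:
img = np.zeros((8, 8))
img[:, 4:] = 255.0
edges = soft_edge_map(img)      # bright only near the boundary
```

Values are near 1.0 along the boundary and 0 in the flat regions, which is the shape of signal the model is conditioned on.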

Use cases

The ControlNet-HED model has several possible use cases for a technical audience.

One is image editing and manipulation: the model can make targeted modifications to an image based on its edges and boundaries. For example, if certain edges need to be emphasized or smoothed out, the model can generate an output image with that effect.

Another is computer vision and object detection. Given an input image and a target HED map marking the desired boundaries of an object, the model can generate an output image that highlights and enhances those boundaries, making the object easier to identify and detect in subsequent analysis.

The model can also be used for image augmentation, where variations of an original image are needed to train a machine learning model. By supplying different target HED maps, it can generate a diverse set of modified images that improve the training process.

Speculating on practical products, ControlNet-HED could be integrated into existing image-editing software to give users finer control over their modifications: a user could select specific regions of an image and apply customized changes based on the HED maps, enabling more precise, targeted edits and visually stronger results. In computer vision, it could improve the accuracy and reliability of object-detection pipelines by sharpening edge information, with applications in surveillance, autonomous vehicles, and robotics, where robust detection is crucial. It could also aid image recognition systems by improving the quality of training images, leading to better classification and identification performance.
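For the augmentation scenario above, here is a minimal sketch of how a batch of variant runs might be assembled. The field names `image`, `prompt`, and `seed` are assumptions about this model's input schema (check the API spec on Replicate before relying on them), and the actual run call is only indicated in a comment:

```python
# Hedged sketch: batching inputs for a data-augmentation workflow.
# Field names ("image", "prompt", "seed") are assumed, not confirmed
# against this model's published schema.

def augmentation_inputs(image_url: str, prompt: str, n_variants: int):
    """Build one input payload per variant, differing only by seed,
    so each run yields a different image sharing the same edge structure."""
    return [
        {"image": image_url, "prompt": prompt, "seed": seed}
        for seed in range(n_variants)
    ]

payloads = augmentation_inputs(
    "https://example.com/cat.png", "a cat, photorealistic", 4
)
# Each payload would then be submitted as one run, e.g. via the
# Replicate client's run() call, producing 4 structurally-aligned variants.
```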




Creator Models

Model                       Cost per run    Runs
Controlnet Normal           $0.0368         328,635
Controlnet Pose             $0.0391         168,063
Stable Diffusion Upscaler   $?              3,136
Controlnet Seg              $0.0368         162,983
Controlnet Scribble         $0.0437         37,573,553

Try it!

You can use this area to play around with demo applications that incorporate the Controlnet Hed model. These demos are maintained and hosted externally by third-party creators. If you see an error, message me on Twitter.

Currently, there are no demos available for this model.


Summary of this model and related resources.

Model Name: Controlnet Hed
Description: Modify images using HED maps
Model Link: View on Replicate
API Spec: View on Replicate
Github Link: View on Github
Paper Link: No paper link provided




How much does it cost to run this model? How long, on average, does it take to complete a run?

Cost per Run: $0.0184
Prediction Hardware: Nvidia A100 (40GB) GPU
Average Completion Time: 8 seconds
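The listed cost is consistent with billing by hardware time. Assuming cost scales linearly with run duration (a simplification; actual billing may differ), back-of-envelope budgeting looks like:

```python
# Figures taken from the stats above; linear-scaling is an assumption.
cost_per_run = 0.0184   # USD per run
avg_seconds = 8         # average completion time

rate_per_second = cost_per_run / avg_seconds   # implied A100 rate, USD/s
runs = 1000
batch_cost = runs * cost_per_run               # budget for 1,000 runs
```

So the implied hardware rate is about $0.0023 per second, and a 1,000-image augmentation batch would cost roughly $18.40.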