The controlnet-normal model can be applied to a variety of image-manipulation and content-generation use cases. In image editing, photographers and graphic designers can use the model to enhance or modify images based on the surface detail encoded in a normal map: adjusting the lighting, shadows, or surface texture of an image becomes a matter of supplying an appropriate normal map. The model is also valuable in augmented reality, where it can generate realistic virtual objects or alter the appearance of real-world environments; for instance, virtual furniture or decorations can be seamlessly integrated into a live camera feed. Because it modifies images based on normal maps, the model could streamline workflows in industries that depend on image editing or content creation. Potential products include standalone image-editing software, plugins for popular photo-editing applications, and tools for building AR experiences.
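To make the "appropriate normal map" input concrete, here is a minimal sketch of how a normal map can be derived from a height (depth-like) map using NumPy gradients. This is a generic illustration of what a normal map encodes, not the model's own preprocessing; the function name and the `strength` parameter are assumptions for this example.

```python
import numpy as np

def height_to_normal_map(height: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Convert a 2-D height map to an RGB normal map (uint8, 0..255).

    Each pixel's RGB channels encode the x/y/z components of the surface
    normal, which is the kind of conditioning image a normal-map-based
    ControlNet consumes.
    """
    # Surface slope in each image direction.
    dy, dx = np.gradient(height.astype(np.float64))
    # The normal points against the slope; z is scaled by `strength`.
    nx = -dx * strength
    ny = -dy * strength
    nz = np.ones_like(height, dtype=np.float64)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    normal = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    # Remap components from [-1, 1] to [0, 255] for an 8-bit image.
    return ((normal * 0.5 + 0.5) * 255).astype(np.uint8)

# A tilted plane: constant slope, so every pixel gets the same normal.
ramp = np.tile(np.arange(8, dtype=np.float64), (8, 1))
nmap = height_to_normal_map(ramp)
print(nmap.shape)  # (8, 8, 3)
```

A flat-but-tilted surface produces a uniform normal map, while bumps and creases in the height map show up as local color variation — exactly the cues a normal-conditioned model uses to relight or retexture an image.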
You can use this area to play around with demo applications that incorporate the Controlnet Normal model. These demos are maintained and hosted externally by third-party creators.
Currently, there are no demos available for this model.
Summary of this model and related resources.
Modify images using normal maps
- View on Replicate
- View on GitHub
- No paper link provided
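As a rough sketch of how a Replicate-hosted model like this one is typically invoked from code: the call below uses the Replicate Python client's `replicate.run` interface, but the model slug and input field names are assumptions for illustration, not taken from this page — check the model's Replicate page for its real input schema.

```python
# Hypothetical sketch of calling a normal-map-conditioned model on Replicate.
# The helper below just assembles a request payload; the field names
# ("image", "prompt") are illustrative assumptions.

def build_input(image_url: str, prompt: str) -> dict:
    """Assemble the request payload: a source image plus a text prompt."""
    return {"image": image_url, "prompt": prompt}

payload = build_input(
    "https://example.com/room.png",
    "a cozy living room, warm lighting",
)

# Requires `pip install replicate` and a REPLICATE_API_TOKEN in the
# environment; the model slug here is an assumption:
# import replicate
# output = replicate.run("creator/controlnet-normal", input=payload)

print(sorted(payload))  # ['image', 'prompt']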
How popular is this model, by number of runs? How popular is the creator, by the sum of all their runs?
How much does it cost to run this model? How long, on average, does it take to complete a run?
- Cost per run
- Average completion time
- Hardware: Nvidia A100 (40GB) GPU