Average Model Cost: $0.0151
Number of Runs: 294,889
Models by this creator
tres_iqa is an image quality assessment model. It takes an image as input and returns a quantitative score representing that image's quality. The model can be useful in applications such as image compression, image enhancement, and image classification, where image quality needs to be evaluated. The score can be used to check whether an image meets a given quality standard or to compare the quality of different images.
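A quality score like this is most useful in comparisons. Below is a minimal sketch of comparing two images by score; the scoring backend is injected as a callable (here a stand-in lookup table), since the actual hosted tres_iqa call and its exact schema are not specified above.

```python
# Hypothetical usage sketch for an image-quality-assessment model such as
# tres_iqa. The scoring backend is injected as a callable so the comparison
# logic is independent of any particular hosting API.

def pick_higher_quality(score_fn, url_a, url_b):
    """Return the URL whose image receives the higher quality score.

    score_fn: callable taking an image URL and returning a float score
              (assumed: higher means better quality).
    """
    score_a = score_fn(url_a)
    score_b = score_fn(url_b)
    return url_a if score_a >= score_b else url_b

# Stand-in scorer for illustration only; a real deployment would call the
# hosted tres_iqa model here instead of looking scores up in a dict.
fake_scores = {"a.png": 0.81, "b.png": 0.64}
best = pick_higher_quality(fake_scores.__getitem__, "a.png", "b.png")
print(best)  # a.png
```

Injecting the scorer keeps the comparison logic reusable whether the scores come from a hosted API, a local model, or a cache.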
The Deoldify Image model is a deep learning tool designed to add color to old or black-and-white images. Users provide an image URL, a model name (Artistic), and a render factor. The model processes the image and returns a URL to a colorized version of the input image.
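The three inputs named above can be collected into a request payload before calling the hosted model. The field names and the validation range below are assumptions for illustration, not the model's documented schema.

```python
# Hypothetical payload builder for the Deoldify Image model. The field names
# ("input_image", "model_name", "render_factor") and the accepted
# render_factor range are assumptions, mirroring the inputs described above.

def build_deoldify_request(image_url, render_factor=35):
    if not isinstance(render_factor, int) or not 1 <= render_factor <= 50:
        raise ValueError("render_factor is assumed to be an integer in [1, 50]")
    return {
        "input_image": image_url,
        "model_name": "Artistic",   # the model name given in the description
        "render_factor": render_factor,
    }

payload = build_deoldify_request("https://example.com/old_photo.jpg")
```

Validating the render factor client-side fails fast before any image is uploaded to the hosted model.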
The instruct-pix2pix model is a deep learning model that generates images based on human instructions. It performs instruction-conditioned image-to-image editing: given an image and a natural-language instruction, it produces the correspondingly edited image. The model can be used for a variety of tasks, such as editing images from text descriptions, generating images to specific instructions, and enhancing images according to user preferences.
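Because each edit produces a new image, instructions can be chained, with each edit's output feeding the next call. A minimal sketch, with the hosted model replaced by an injected stand-in (the input field names "image" and "prompt" are assumptions):

```python
# Sketch of chained instruction-driven editing with an instruct-pix2pix-style
# model. `edit_fn` stands in for the hosted model and is injected so the loop
# can be demonstrated without network access.

def apply_instructions(edit_fn, image_url, instructions):
    """Apply each natural-language instruction in turn, feeding the output
    of one edit in as the input of the next. Returns all intermediate results."""
    current = image_url
    results = []
    for instruction in instructions:
        current = edit_fn({"image": current, "prompt": instruction})
        results.append(current)
    return results

# Stand-in editor that just records the edit chain, for illustration only.
fake_edit = lambda inp: f"{inp['image']}+[{inp['prompt']}]"
steps = apply_instructions(fake_edit, "photo.png",
                           ["make it winter", "add a red scarf"])
```

Keeping the intermediate results around makes it easy to show the user each step of the edit chain and roll back an unwanted instruction.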
The model "stable_diffusion_infinite_zoom" uses Runway's Stable Diffusion inpainting model to create an infinite-loop zoom video. Inspired by a Twitter post, it repeatedly inpaints each frame, filling in newly exposed regions with plausible content. The resulting video creates the illusion of an infinitely zooming view.
The Robust Video Matting model is a video-to-video model designed to extract the foreground of a video. The model takes a video link and the desired output type, in this case 'green-screen'. The output is a link to a video in which the separated foreground appears on a green-screen background.
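The two inputs described above can be assembled into a request as follows. Only 'green-screen' comes from the description; the field names and any other output types are assumptions for illustration.

```python
# Hypothetical request builder for a video-matting model like Robust Video
# Matting. "green-screen" is taken from the description above; the
# "alpha-mask" alternative and the field names are assumptions.

ALLOWED_OUTPUT_TYPES = {"green-screen", "alpha-mask"}

def build_matting_request(video_url, output_type="green-screen"):
    if output_type not in ALLOWED_OUTPUT_TYPES:
        raise ValueError(f"unsupported output_type: {output_type!r}")
    return {"input_video": video_url, "output_type": output_type}

request = build_matting_request("https://example.com/clip.mp4")
```

Rejecting unknown output types locally avoids submitting a matting job that the hosted model would refuse.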
The deoldify_image model is an image-to-image model that adds color to old images. It takes black-and-white or faded images and enhances them with vibrant, realistic colorization. The model uses deep learning to analyze the image and generate color information from the patterns and features it identifies, and it is trained on a large dataset of images to learn how to restore color in old photographs.
The stable_diffusion2_upscaling model is an image super-resolution model that utilizes the stable-diffusion V2 technique. It is designed to take low-resolution images and generate high-resolution versions of those images. This model is useful for tasks such as enhancing the quality of images, improving the level of detail in images, or preparing images for further analysis or processing.
The paella_fast_outpainting model is an image-to-image model that performs fast image outpainting. Given an input image, a text prompt, the location in the image where outpainting should take place, and the relative size of the output, it extends the image to create a larger, outpainted version. The model's output is a URL to the new outpainted image.
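The four inputs listed above can be gathered into a single request. The field names, the location vocabulary, and the scale convention below are assumptions based on the description, not the model's actual schema.

```python
# Hypothetical input assembly for a fast-outpainting model such as
# paella_fast_outpainting. Field names and the location vocabulary are
# assumptions based on the inputs listed in the description.

LOCATIONS = {"left", "right", "top", "bottom"}

def build_outpaint_request(prompt, image_url, location, scale=1.5):
    """scale is the size of the output relative to the input (assumed > 1,
    since outpainting enlarges the canvas)."""
    if location not in LOCATIONS:
        raise ValueError(f"unknown location: {location!r}")
    if scale <= 1.0:
        raise ValueError("an outpainted image must be larger than the input")
    return {"prompt": prompt, "image": image_url,
            "location": location, "output_scale": scale}

req = build_outpaint_request("a wider beach scene",
                             "https://example.com/beach.png", "right")
```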
The dichotomous_image_segmentation model is a highly accurate image segmentation model presented at ECCV 2022. It segments an image into two classes, typically separating a foreground object from its background, with great precision. The model may be useful for computer vision tasks that require accurate image segmentation.
Deoldify_video is a model that specializes in adding color to old video footage, renewing and rejuvenating vintage clips through colorization. It takes a link to the old video ("input_video") and a "render_factor" that influences the depth of colorization. The output is a link to the colorized version of the input video.
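With both inputs named ("input_video" and "render_factor"), colorizing a batch of clips becomes a simple loop. A sketch with the hosted model replaced by an injected stand-in, and a default render factor chosen arbitrarily for illustration:

```python
# Sketch of colorizing a batch of old clips with a deoldify_video-style model.
# `colorize_fn` stands in for the hosted model and is injected so the batching
# logic runs without network access; the input field names come from the
# description above, while the default render_factor is an arbitrary choice.

def colorize_clips(colorize_fn, video_urls, render_factor=21):
    """Map each input video URL to the URL of its colorized output."""
    return {url: colorize_fn({"input_video": url,
                              "render_factor": render_factor})
            for url in video_urls}

# Stand-in colorizer that just derives an output name, for illustration only.
fake_colorize = lambda inp: inp["input_video"].replace(".mp4", "_color.mp4")
outputs = colorize_clips(fake_colorize, ["a.mp4", "b.mp4"])
```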