Average Model Cost: $0.0000
Number of Runs: 9,320
Models by this creator
Text2Video is a method developed by the Bot Intelligence Group at CMU for generating videos from a series of language descriptions. It aims to produce videos with a surreal or bizarre visual style by mapping each description to a sequence of visual elements and rendering them into a video. The method combines natural language processing and video generation techniques to create these unconventional videos.
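The description-to-elements-to-frames flow above can be sketched roughly as follows. This is a toy illustration, not CMU's actual implementation: the `SCENE_ELEMENTS` keyword table and the dictionary-based "renderer" are assumptions standing in for the real NLP mapping and video synthesis stages.

```python
# Hypothetical sketch of the Text2Video flow: each language description
# is mapped to visual elements, and each set of elements becomes a frame.

SCENE_ELEMENTS = {  # assumed keyword-to-element mapping, for illustration only
    "dog": "dog_sprite",
    "moon": "moon_backdrop",
    "dancing": "dance_motion",
}

def description_to_elements(description):
    """Map one language description to the visual elements it mentions."""
    words = description.lower().split()
    return [SCENE_ELEMENTS[w] for w in words if w in SCENE_ELEMENTS]

def render_video(descriptions):
    """Render each description's elements as one frame of the video."""
    return [{"frame": i, "elements": description_to_elements(d)}
            for i, d in enumerate(descriptions)]

video = render_video(["A dog dancing", "under the moon"])
print(video)
```

In the real system, the element lookup would be a learned language model and the renderer an actual video generator; the sketch only shows the per-description data flow.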
StyleCLIP-Draw is a text-to-image synthesis model that generates drawings from textual descriptions in a user-specified style, by combining a text encoder with a style encoder. The text encoder converts the input text into a latent representation, while the style encoder maps the input style image into a style latent code. The two latent codes are then combined and decoded into an image by a generator network. Users control the content and style of the generated drawings by modifying the input text and style image.
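The two-encoder architecture described above can be sketched as follows. This is a minimal stand-in, not the real StyleCLIP-Draw code: the toy character-statistics "embeddings" replace the actual learned text and style encoders, and the generator here simply concatenates the two latent codes, which matches only the data flow, not the model internals.

```python
# Hypothetical sketch of the StyleCLIP-Draw data flow: a text latent and a
# style latent are produced separately, combined, and decoded by a generator.

def _toy_encode(s, dim=4):
    """Deterministic toy embedding: character-code statistics per dimension."""
    codes = [ord(c) for c in s]
    return [sum(codes[i::dim]) % 256 / 255.0 for i in range(dim)]

def text_encoder(text):
    # Map the input text into a content latent vector.
    return _toy_encode(text)

def style_encoder(style_image_path):
    # Map the style image (represented here only by its path) to a style code.
    return _toy_encode(style_image_path)

def generator(content_code, style_code):
    # Decode the combined latent codes into an "image" (toy: a flat vector).
    combined = content_code + style_code  # concatenation of the two codes
    return combined

image = generator(text_encoder("a cat in a garden"),
                  style_encoder("van_gogh_starry_night.png"))
print(len(image))  # 4 content dims + 4 style dims
```

Changing only the text argument alters the content half of the combined code, while swapping the style image alters the style half, mirroring the control the description attributes to the user.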