andreasjansson
Rank:
Average Model Cost: $0.0571
Number of Runs: 64,708,756
Models by this creator

clip-features
The clip-features model uses the clip-vit-large-patch14 architecture to extract CLIP features from text and images. Given an image or a piece of text, it returns the corresponding CLIP embedding. These features can then be used for tasks such as image classification, object detection, and image generation. The model provides a compact representation of the input that captures both visual and textual information, enabling cross-modal understanding and analysis.
$0.001/run
49.8M
Replicate
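
A minimal sketch of calling clip-features through Replicate's Python client. The `inputs` parameter name, the newline-separated input format, and the unpinned model reference are assumptions for illustration, not confirmed details of this deployment.

```python
import replicate  # pip install replicate; requires REPLICATE_API_TOKEN in the environment

# Assumed input format: newline-separated entries, each either raw text
# or an image URI. The parameter name "inputs" is an assumption.
output = replicate.run(
    "andreasjansson/clip-features",
    input={"inputs": "a photo of a cat\nhttps://example.com/cat.png"},
)

# Assumed output shape: one embedding per input line.
for item in output:
    print(item)
```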

blip-2
blip-2 answers natural-language questions about images. Given an image and a question, it generates a textual answer. The model was trained on a large dataset of images paired with questions and answers, enabling it to understand image content and respond accurately to a wide range of questions.
$-/run
11.6M
Replicate
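
A hedged example of visual question answering with blip-2 via the Replicate Python client; the `image` and `question` parameter names are assumptions based on the description above.

```python
import replicate  # requires REPLICATE_API_TOKEN in the environment

# Parameter names (image, question) are assumptions, not a confirmed schema.
answer = replicate.run(
    "andreasjansson/blip-2",
    input={
        "image": open("photo.jpg", "rb"),
        "question": "How many people are in this picture?",
    },
)
print(answer)
```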

deepfloyd-if
The DeepFloyd IF model is a non-commercial research model that generates images from text descriptions. It uses deep learning to interpret text prompts and render corresponding visual representations. Make sure you adhere to the licensing terms before using this model.
$0.097/run
1.5M
Replicate
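
A sketch of text-to-image generation with deepfloyd-if; the `prompt` parameter name and the unpinned model reference are assumptions.

```python
import replicate

# "prompt" is the assumed parameter name for the text description.
images = replicate.run(
    "andreasjansson/deepfloyd-if",
    input={"prompt": "an oil painting of a lighthouse at dusk"},
)
print(images)  # typically one or more URLs to generated images
```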

stable-diffusion-inpainting
The stable-diffusion-inpainting model is a deep learning model for inpainting: filling in missing or corrupted parts of an image. Built on Stable Diffusion, it fills a masked region while preserving the overall structure and content of the surrounding image. Trained on a large dataset of images, it produces inpainted results that are visually coherent and plausible.
$0.007/run
859.7K
Replicate
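
A minimal sketch of inpainting through the Replicate Python client; the `image`, `mask`, and `prompt` parameter names are assumptions (a mask marking the region to fill is a common convention, not a confirmed detail of this deployment).

```python
import replicate

# Assumed schema: a source image, a mask of the region to repaint,
# and a prompt describing what should appear there.
output = replicate.run(
    "andreasjansson/stable-diffusion-inpainting",
    input={
        "image": open("room.png", "rb"),
        "mask": open("mask.png", "rb"),
        "prompt": "a potted plant on a wooden table",
    },
)
print(output)
```
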
tile-morph
Tile-Morph creates tileable animations with seamless transitions. Given a sequence of images, it generates smooth transitions between them, producing a seamlessly looping animation that repeats without noticeable breaks or glitches.
$-/run
449.1K
Replicate
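
A hedged sketch of generating a looping animation with tile-morph. The input schema below (endpoint images plus a frame count) is purely hypothetical; the real model's parameter names may differ.

```python
import replicate

# Hypothetical schema: two endpoint images to morph between and a
# frame count. These parameter names are assumptions.
frames = replicate.run(
    "andreasjansson/tile-morph",
    input={
        "image_start": open("tile_a.png", "rb"),
        "image_end": open("tile_b.png", "rb"),
        "num_frames": 24,
    },
)
print(frames)  # assumed: URL(s) to the rendered looping animation
```
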
llama-2-13b-embeddings
The Llama 2 13B with embedding output model takes in prompts separated by "\n\n" and returns an embedding for each, represented as a sequence of floating-point numbers. The embeddings capture the semantic meaning of the input text in a numerical form that can be used for various machine learning applications.
$-/run
172.6K
Replicate
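
A minimal sketch, assuming the prompts are passed as a single "\n\n"-separated string (the separator comes from the description above; the `prompts` parameter name is an assumption).

```python
import replicate

# Two prompts separated by "\n\n", per the description; the parameter
# name "prompts" is an assumption.
embeddings = replicate.run(
    "andreasjansson/llama-2-13b-embeddings",
    input={"prompts": "The cat sat on the mat.\n\nA feline rested on a rug."},
)

# Assumed output: one list of floats per prompt.
print(len(embeddings), len(embeddings[0]))
```
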
illusion
The illusion model is a text-to-image model created by Monster Labs. It layers ControlNet on top of SD 1.5 to generate high-quality images from textual input. Inputs include a seed, an image URL, the output dimensions (width and height), a border size, and a descriptive prompt; users can also set the number of output images, the guidance scale, and a negative prompt listing features to avoid. QR-code content and a QR-code background can be supplied, and the ControlNet conditioning scale and number of inference steps are configurable. The output is a URL (or URLs) linking to the generated image(s).
$-/run
153.4K
Replicate
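
A hedged example using the parameters named in the description above (e.g. `qr_code_content`, `negative_prompt`, `controlnet_conditioning_scale`); the model reference and the default values shown are assumptions.

```python
import replicate

output = replicate.run(
    "andreasjansson/illusion",
    input={
        "prompt": "a medieval city street, oil painting",
        "negative_prompt": "ugly, disfigured, low quality, blurry, nsfw",
        "qr_code_content": "https://example.com",
        "width": 768,
        "height": 768,
        "num_inference_steps": 40,
        "guidance_scale": 7.5,
        "controlnet_conditioning_scale": 1.5,
    },
)
print(output)  # URL(s) to the generated image(s)
```
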
illusion
The "illusion" model is a Text-to-Image type AI developed by Monster Labs. It is based on the ControlNet on top of SD 1.5 and is designed to generate an image based on the given prompt while excluding characteristics listed in the negative prompt. Input parameters include image characteristics like seed, width, border, height, guidance scale, qr_code_content, and qrcode_background, as well as prompts. The output of the model is a URL leading to the generated image. The example shows that the model can generate medieval city street scenes as oil paintings, excluding features such as being ugly, disfigured, low-quality, blurry, or not safe for work.
$-/run
130.4K
Replicate

stable-diffusion-animation
The stable-diffusion-animation model is a text-to-image model that creates an animation by interpolating between two prompts. It uses Stable Diffusion to generate the animation's frames, gradually transitioning from one prompt to the other, which allows smooth and coherent animations to be produced from text input.
$0.159/run
108.9K
Replicate
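
A sketch of prompt-interpolation animation; `prompt_start`, `prompt_end`, and `num_animation_frames` are assumed parameter names matching the two-prompt behavior described above.

```python
import replicate

# Assumed two-prompt interpolation schema; parameter names are hypothetical.
animation = replicate.run(
    "andreasjansson/stable-diffusion-animation",
    input={
        "prompt_start": "a sapling in spring",
        "prompt_end": "a towering oak in autumn",
        "num_animation_frames": 20,
    },
)
print(animation)  # assumed: URL to the rendered animation
```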

musicgen-looper
musicgen-looper generates fixed-BPM audio loops from text prompts. Given a text description, it returns audio that loops cleanly at the requested tempo, making it useful for generating background music or loops for various applications.
$0.308/run
21.5K
Replicate
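
A hedged sketch of loop generation; the `prompt` and `bpm` parameter names are assumptions consistent with the fixed-BPM, text-prompt behavior described above.

```python
import replicate

# "prompt" and "bpm" are assumed parameter names.
loop = replicate.run(
    "andreasjansson/musicgen-looper",
    input={"prompt": "dusty lo-fi hip hop with warm Rhodes chords", "bpm": 90},
)
print(loop)  # assumed: URL(s) to the generated audio loop
```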