Lambdalabs

Average Model Cost: $0.0000

Number of Runs: 67,500

Models by this creator

sd-image-variations-diffusers

The Stable Diffusion Image Variations model generates variations of an input image by conditioning on CLIP image embeddings instead of text. It is a fine-tuned version of Stable Diffusion trained on the LAION Improved Aesthetics dataset. The current version was trained in two stages and provides better image quality and CLIP-rated similarity than the original release. It is intended for research purposes, such as studying the safe deployment of generative models, understanding model limitations and biases, generating artworks, and building educational or creative tools. The model should not be used to create or disseminate harmful or offensive content, and it has known limitations and biases: it does not achieve perfect photorealism and performs poorly on complex composition tasks. A safety checker screens outputs for NSFW content.
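
No usage snippet survives in this listing, so here is a minimal sketch of running the model with the diffusers library. The StableDiffusionImageVariationPipeline class is the standard diffusers entry point for this checkpoint; the "v2.0" revision and the CLIP preprocessing constants are assumptions taken from the upstream model card.

```python
import torch
from diffusers import StableDiffusionImageVariationPipeline
from PIL import Image
from torchvision import transforms

# Load the image-variations pipeline (revision name assumed from the HF model card).
pipe = StableDiffusionImageVariationPipeline.from_pretrained(
    "lambdalabs/sd-image-variations-diffusers", revision="v2.0"
)
pipe = pipe.to("cuda")

# Preprocess the input the way CLIP expects: resize to 224x224 and
# normalize with the CLIP mean/std (values assumed from the model card).
tform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224), antialias=False),
    transforms.Normalize(
        [0.48145466, 0.4578275, 0.40821073],
        [0.26862954, 0.26130258, 0.27577711],
    ),
])
im = Image.open("path/to/image.jpg").convert("RGB")
inp = tform(im).unsqueeze(0).to("cuda")

# Generate one variation of the input image.
out = pipe(inp, guidance_scale=3.0)
out.images[0].save("variation.jpg")
```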

Cost: $-/run

Runs: 51.8K

Platform: Huggingface

sd-pokemon-diffusers

The sd-pokemon-diffusers model is a variant of Stable Diffusion fine-tuned on Pokémon images. It was trained on BLIP-captioned Pokémon images using 2x A6000 GPUs on Lambda GPU Cloud for approximately 15,000 steps, which took about 6 hours and cost around $10. The model lets users generate their own Pokémon character from a plain text prompt, with no prompt engineering required. It was trained by Justin Pinkney at Lambda Labs.
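
For reference, a minimal sketch of generating a character with the diffusers library follows; the model id comes from this listing, and the prompt and sampling settings are illustrative assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned Pokémon checkpoint (fp16 is an optional memory saving;
# full precision also works).
pipe = StableDiffusionPipeline.from_pretrained(
    "lambdalabs/sd-pokemon-diffusers", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Any plain-language prompt works; no prompt engineering is required.
prompt = "Yoda"
image = pipe(prompt, guidance_scale=7.5).images[0]
image.save("pokemon.png")
```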

Cost: $-/run

Runs: 13.9K

Platform: Huggingface

pythia-1.4b-deduped-synthetic-instruct

This model was created by finetuning EleutherAI/pythia-1.4b-deduped on the Dahoas/synthetic-instruct-gptj-pairwise dataset. You can try a demo of the model hosted on Lambda Cloud.

Model Details

Finetuned by: Lambda
Model type: Transformer-based Language Model
Language: English
Pre-trained model: EleutherAI/pythia-1.4b-deduped
Dataset: Dahoas/synthetic-instruct-gptj-pairwise
Library: transformers
License: Apache 2.0

Prerequisites

Running inference with the model takes ~4GB of GPU memory.

Training

The model was trained on Dahoas/synthetic-instruct-gptj-pairwise. We split the original dataset into train (the first 32,000 examples) and validation (the remaining 1,144 examples) subsets. We finetuned the model for 4 epochs. This took 2 hours on 8x A100 80GB GPUs, with batch_size_per_gpu set to 8 (a global batch size of 64) and a learning rate of 0.00002, decayed linearly to zero at the last training step. You can find a Weights and Biases record here.
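
The Quick Start snippet and its sample output were lost in extraction. A minimal sketch of running inference with transformers follows; the model id is inferred from this listing, and the generation parameters are illustrative assumptions rather than values from the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "lambdalabs/pythia-1.4b-deduped-synthetic-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to("cuda")

# The synthetic-instruct dataset pairs a prompt with a response, so a
# plain instruction works as input.
prompt = "How do I make an omelette?"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

# max_new_tokens and sampling settings are illustrative, not from the model card.
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```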

Cost: $-/run

Runs: 195

Platform: Huggingface

miniSD-diffusers

Training details

Fine-tuned from the stable-diffusion 1.4 checkpoint as follows:

22,000 steps fine-tuning only the attention layers of the unet (learning rate=1e-5, batch size=256)
66,000 steps training the full unet (learning rate=5e-5, batch size=552)

GPUs were provided by Lambda GPU Cloud. The model was trained on LAION Improved Aesthetics 6plus using https://github.com/justinpinkney/stable-diffusion; the original checkpoint is available here.

License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

You can't use the model to deliberately produce or share illegal or harmful outputs or content.
The authors claim no rights on the outputs you generate; you are free to use them, but you are accountable for their use, which must not go against the provisions set in the license.
You may redistribute the weights and use the model commercially and/or as a service. If you do, be aware that you must include the same use restrictions as in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).

Please read the full license here.
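
The original Usage section's code snippet did not survive extraction. Below is a minimal sketch of generating an image with the diffusers library; the model id comes from this listing, while the 256x256 output size and fp16 loading are assumptions based on the upstream model card.

```python
import torch
from diffusers import StableDiffusionPipeline

# Model id taken from this listing; fp16 is an optional memory saving.
pipe = StableDiffusionPipeline.from_pretrained(
    "lambdalabs/miniSD-diffusers", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# miniSD targets 256x256 output (assumed from the upstream model card),
# so pass the size explicitly.
image = pipe(
    "a photograph of an astronaut riding a horse",
    height=256,
    width=256,
).images[0]
image.save("minisd.png")
```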

Cost: $-/run

Runs: 145

Platform: Huggingface

pythia-2.8b-deduped-synthetic-instruct

This model was created by finetuning EleutherAI/pythia-2.8b-deduped on the Dahoas/synthetic-instruct-gptj-pairwise dataset. You can try a demo of the model hosted on Lambda Cloud.

Model Details

Finetuned by: Lambda
Model type: Transformer-based Language Model
Language: English
Pre-trained model: EleutherAI/pythia-2.8b-deduped
Dataset: Dahoas/synthetic-instruct-gptj-pairwise
Library: transformers
License: Apache 2.0

Prerequisites

Running inference with the model takes ~7GB of GPU memory. The Quick Start usage mirrors the 1.4b model above, with the model id swapped.

Training

The model was trained on Dahoas/synthetic-instruct-gptj-pairwise. We split the original dataset into train (the first 32,000 examples) and validation (the remaining 1,144 examples) subsets. We finetuned the model for 4 epochs. This took 5 hours on 8x A100 80GB GPUs, with batch_size_per_gpu set to 2 (a global batch size of 16) and a learning rate of 0.00001, decayed linearly to zero at the last training step. You can find a Weights and Biases record here.

Cost: $-/run

Runs: 32

Platform: Huggingface

pythia-6.9b-deduped-synthetic-instruct

This model was created by finetuning EleutherAI/pythia-6.9b-deduped on the Dahoas/synthetic-instruct-gptj-pairwise dataset. You can try a demo of the model hosted on Lambda Cloud.

Model Details

Finetuned by: Lambda
Model type: Transformer-based Language Model
Language: English
Pre-trained model: EleutherAI/pythia-6.9b-deduped
Dataset: Dahoas/synthetic-instruct-gptj-pairwise
Library: transformers
License: Apache 2.0

Prerequisites

Running inference with the model takes ~17GB of GPU memory.

Training

The model was trained on Dahoas/synthetic-instruct-gptj-pairwise. We split the original dataset into train (the first 32,000 examples) and validation (the remaining 1,144 examples) subsets. We finetuned the model for 4 epochs with the help of DeepSpeed. This took 6 hours on 8x A100 80GB GPUs, with batch_size_per_gpu set to 8 (a global batch size of 64) and a learning rate of 0.000005, decayed linearly to zero at the last training step. You can find a Weights and Biases record here.
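
The Quick Start block is missing here as well. Rather than repeat the 1.4b example, the sketch below shows a memory-conscious way to load this larger checkpoint; half-precision loading and device_map="auto" are assumptions, not instructions from the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "lambdalabs/pythia-6.9b-deduped-synthetic-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Loading in fp16 roughly halves the footprint versus fp32; device_map="auto"
# (requires the accelerate package) spreads the weights across available GPUs
# if one card is not enough.
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Write a haiku about GPUs.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```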

Cost: $-/run

Runs: 19

Platform: Huggingface
