Generative AI Models for Creatives

Stable Diffusion

Ready To Use
Illustration
Graphic Design
Diffusion
Drawing
Outpainting
Character Design
Creature Design
Inpainting

Create a new picture from a text input

Stable Diffusion is a machine learning model that generates digital images from natural language descriptions. It was developed by the CompVis group at LMU Munich and later extended through a collaboration of Stability AI, LMU Munich, and Runway. It can also be used for other tasks, such as image-to-image translation guided by a text prompt.
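The core idea behind a diffusion model can be sketched in a few lines: training data is gradually corrupted with Gaussian noise, and the model learns to reverse that corruption. The toy snippet below (plain NumPy, with a made-up noise schedule) illustrates only the forward noising process, not the actual Stable Diffusion implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(x, betas, rng):
    """Noise a sample x step by step; returns the full trajectory."""
    xs = [x]
    for beta in betas:
        # Blend the current sample with fresh Gaussian noise.
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.normal(size=x.shape)
        xs.append(x)
    return xs

x0 = np.ones(4)                      # stand-in for a clean image
betas = np.linspace(1e-4, 0.5, 50)   # illustrative noise schedule
traj = forward_diffuse(x0, betas, rng)
# After many steps the sample is close to pure Gaussian noise;
# the trained model learns to run this process in reverse.
print(len(traj))  # 51 states: the original plus one per step
```

In the real model this process runs in a learned latent space rather than on pixels, which is what makes generation at high resolution tractable.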

Unlike proprietary models such as DALL-E, Stable Diffusion makes its source code and pre-trained weights publicly available, although its license prohibits certain harmful use cases. Critics have raised ethical concerns about the model, citing its potential for creating deepfakes and questioning the legality of training on a dataset containing copyrighted content without the consent of the original artists.

Model details
Author: stability.ai
Published in: 2022
Architecture: Diffusion
License: CreativeML Open RAIL-M