Stable Diffusion is a machine learning model that generates digital images from natural language descriptions. It was developed at LMU Munich and later extended through a collaboration between StabilityAI, LMU, and Runway. The model also supports other tasks, such as image-to-image translation guided by a text prompt.
Unlike models such as DALL-E, Stable Diffusion makes its source code and pre-trained weights publicly available, although its license prohibits certain harmful use cases. Critics have raised ethical concerns about the model, citing its potential for creating deepfakes and questioning the legality of generating images with a model trained on a dataset containing copyrighted content used without the consent of the original artists.