AI · 7 min read · 15 September 2022

Stable Diffusion and What Open Source AI Actually Means

Stability AI released Stable Diffusion openly in August 2022. Within weeks it was running on consumer hardware, generating images that would have seemed impossible a year earlier. The implications were not simple.

Tags: Stable Diffusion, AI, Image Generation, Open Source

Stable Diffusion was released by Stability AI on 22 August 2022, and within days it was everywhere. Not as a cloud service but as a download you could run on your own computer. If you had a recent GPU with enough VRAM, you could generate images locally. No API costs, no content policies, no data sent to a third party.

The image quality was startling for a model that ran on consumer hardware. It was not as good as DALL-E 2, which OpenAI was still gating behind a waitlist, but it was close enough that the gap between what anyone could run and what only a lab could serve had narrowed dramatically.

The open source release changed what AI image generation meant. Midjourney was a cloud service. DALL-E was a cloud service. You generated images, they kept the images, they set the rules. Stable Diffusion was software you owned. You ran it locally. Nobody saw what you generated.

The community moved fast. Fine-tunes appeared within weeks: models trained on specific artists' styles, and models for specific domains such as fantasy art, photorealism, and anime. The Hugging Face model hub became the de facto registry for community models, and the tooling ecosystem (ComfyUI, Automatic1111's web UI) made the models accessible to people who could not run Python from a command line.

The ethical questions were real and not cleanly resolved. Models fine-tuned on specific artists' styles produced output that looked like those artists' work, without their consent and potentially reducing demand for their services. Whether training on publicly available data was fair use or a violation of creative rights was a genuine, open dispute.

The copyright question extended to the generated images themselves. Who owned a Stable Diffusion image: the company that trained the model, the user who wrote the prompt, or the artists whose work was in the training data? By the end of 2022 these questions were before courts and legislatures in multiple jurisdictions.

What the Stable Diffusion release demonstrated conclusively was that open source AI was viable at the frontier of capability. The argument that safety required keeping powerful models proprietary now had to contend with a model that was already out and could not be recalled. The debate over whether to open source powerful AI models became concrete in 2022 and remains active.
