Stable Diffusion: Prompt Guide and Examples

What a week, huh? A few days ago, Stability.ai released the new AI art model Stable Diffusion. It is about as powerful as DALL-E 2, but open source and open to the public through Dream Studio, where anyone gets 50 free uses just by signing up with an email address.

Since it is open source, anyone with 5 GB of GPU VRAM can download it and get unlimited uses (and Emad Mostaque, Stability.ai’s founder, has said that more efficient models are coming), so expect to keep seeing headlines about AI art for a while.

I am tired of repeating the same old speech, but thinking back to how primitive models were just a year and a half ago, with DALL-E and other VQ-VAE-based models, this is completely insane. I can only imagine what applications artists and other users will come up with in the near future by leveraging StableDiffusion’s embeddings and its text-to-image capabilities, let alone what the next generation of models will be able to do.

Extrapolating from how much this field has grown in the last 18 months, I wouldn’t be surprised if in two more years you could write a script for a comic book, feed it to some large language encoder and a text-to-image model like this one, and get a fully illustrated, style-coherent graphic novel. The same would apply to frames for an animated movie or a storyboard.

Are we really that close to something so big? I feel like the technology is there if enough compute and budget were allocated, but I am not sure whether someone will do it. I don’t see any obvious blockers or barriers to the next generation of models being even bigger or understanding style better.

Given this context, many people are concerned that some artists may lose their jobs. After lots of discussion on Reddit and at parties, I will try to summarize my current opinion on the topic.

For use cases where having a human artist brings the least value, I think text-to-image models will dominate the market. But for those cases we already had stock images. For instance, if I am adorning a random blog post, I’d rather use a free stock image in the header than pay an artist for a new professional photo, as I don’t think my readers care that much (see “I replaced all our blog thumbnails using DALL·E 2” for an example).

Especially if this site were monetized and the big picture were just there to make you scroll further down and generate more engagement and ad views.


For cases where the artist’s vision matters, like original paintings for decorating my home, or the panels of a graphic novel, I think StableDiffusion (or DALL-E 2, for that matter) is still far from beating humans. So far.

However, I guess many freelance artists who work for commissions may find less demand for their work as bloggers or random people get their art itch scratched by AI art. I would love to hear the opinion of artists from that sort of market on this, as I am quite ignorant of how the whole process works (for instance, what kind of people commission art in the first place).

I think models like this can also enhance artists’ work. Say you are asked to draw 5 illustrations of a character doing different things. You could use DALL-E to get 5 relevant background scenes in 5 minutes, then spend your time adding the characters and some details on top of them. AI art models are significantly better at drawing background scenes than action and characters, so this combines the best capabilities of human and machine.

Compare against drawing the whole thing from scratch.

Obviously, style matching would not be easy, but once we reach a point where you can supply your own mini dataset of backgrounds to condition on and style-transfer from, this could let artists work much faster.

Now, is this a good thing? I guess it is a double-edged sword: will artists who leverage AI art models to become more productive drive down the general price of art? Or will the new supply generate new demand? I am not an economist, nor an artist, so speculating further would be futile, and I will leave the rest to my dear readers.

Speculate away, please.

Extra reflection on the topic: as I said on Reddit, I would worry that the artist may eventually be removed from some parts of the design loop if user or artist feedback could be used as the reward for a reinforcement learning agent.

Something like asking users “did you like this city’s design / this building? 1 to 10”, and using the answers for a policy gradient or similar algorithm (see the sketch below).
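
To make that concrete, here is a purely illustrative sketch of what such a feedback loop could look like: a REINFORCE-style policy-gradient update where the 1-to-10 user rating is the reward. Nothing like this ships with StableDiffusion; `model.generate` and every other name here is a hypothetical placeholder.

```python
import torch

# Hypothetical sketch: user ratings of generated designs become rewards for a
# policy-gradient (REINFORCE) update. `model.generate` is a placeholder for a
# generative model that returns a sample and the log-probability of drawing it.
def reinforce_step(model, optimizer, prompt, user_rating, baseline=5.0):
    image, log_prob = model.generate(prompt)  # sample a design, keep log p(sample)
    reward = user_rating - baseline           # center the 1-10 rating around a baseline
    loss = -reward * log_prob                 # score-function gradient estimator
    optimizer.zero_grad()
    loss.backward()                           # gradients flow through log_prob only
    optimizer.step()
    return image, reward
```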

I am reminded of Evolution through Large Models, where a similar approach is used on auto-generated instances produced through genetic mutation algorithms. Imagine the same, but for text-to-image.

Stable Diffusion Art

As in my DALL-E 2 article, I tried the same prompts I had already tried with Craiyon or Guided Diffusion (I know, that’s so early 2022!) to show just how much these systems have improved.

My focus was on fantasy, science fiction, and steampunk illustrations, because that is what I like, but I also experimented with more complex scenes and descriptions, to see how well StableDiffusion understands things like scene composition, prepositions and element interactions.

As with DALL-E 2, I found the model yields much better results for prompts that describe static scenes rather than ask for a certain action or verb to be performed, especially if there are no humanoids or moving characters in them.

For everything else, though, the results were astounding. I feel like StableDiffusion beats DALL-E 2 in character design and realism but loses in the beauty of its landscapes and backgrounds, though take this with 10% certainty, as it is only my intuition and likely biased by my prompt choices.

One thing I’ve found helps a lot in getting more beautiful results is thinking of which exact visual effects would add to the picture, and specifying them. For instance, for fantasy illustrations, adding fireflies or sparkles usually helps. For landscapes, I like naming specific flowers like azaleas, and for buildings, naming features like a column, a fountain, or anything else that grounds the picture in a certain time and place, even if the detail is only tangential to the whole picture.

In my opinion, in this generation of models (unlike CLIP or GLIDE), these details can steer the model even better than vague cues like “4k”. See Appendix A: Stable Diffusion Prompt Guide for how I chose most of my prompts, plus some advice.

Prompt Examples and Experiments

I will begin with some scenes I had already tried with other models. These are some of the Stable Diffusion prompts I liked best.

“A digital illustration of a steampunk library with clockwork machines, 4k, detailed, trending in artstation, fantasy vivid colors”

[3 generated images]

I like the last one especially.

“A digital illustration of a steampunk flying machine in the sky with cogs and mechanisms, 4k, detailed, trending in artstation, fantasy vivid colors”

[5 generated images]

It still struggles with scene composition, though. Here are a ferret and a badger (which the model turned into another ferret) fencing with swords.

[Generated image]

As expected, some of the best prompts that had worked for DALL-E 2 and Craiyon worked great with Stable Diffusion too.

A digital Illustration of the Babel tower, 4k, detailed, trending in artstation, fantasy vivid colors

[3 generated images]

Cluttered house in the woods, anime, oil painting, high resolution, cottagecore, ghibli inspired, 4k

[4 generated images]

In my opinion, both prompts yielded better images here than they did for DALL-E 2, but you can be the judge of that.

A digital illustration of a medieval town, 4k, detailed, trending in artstation, fantasy

[3 generated images]

A medieval town with disco lights and a fountain, by Josef Thoma, matte painting trending on artstation HQ, concept art

[3 generated images]

A digital illustration of a treetop house with fireflies, vivid colors, 4k, fantasy [,/organic]

[5 generated images]

Paintings of Landscapes

Going back to the comparison with DALL-E 2 and Craiyon, I tried asking for landscape paintings of mansions, castles, and general garden scenes. I expected a more classical feel, as opposed to the more digital-illustration-like air of the images above.

A beautiful castle beside a waterfall in the woods, by Josef Thoma, matte painting, trending on artstation HQ

[3 generated images]

A beautiful mansion with flowered gardens and a fountain, digital illustration, 4k, detailed, bokeh

[3 generated images]

A beautiful mansion beside a waterfall in the woods, by josef thoma, matte painting, trending on artstation HQ

[2 generated images]

A beautiful mansion with flowered gardens and a fountain, painting, oil on canvas, 4k, detailed, thomas cole

[3 generated images]

A beautiful mansion with flowered gardens and a fountain, painting, oil on canvas, 4k, detailed, bokeh

[4 generated images]

Style cue: Steampunk / Clockpunk

A digital illustration of a steampunk robot [/with cogs and clockwork by Josef Thoma], 4k, detailed, trending in artstation, fantasy vivid colors

[6 generated images]

Recommended Style Cue: Beksinski

I tried adding the name of the painter Beksinski as a style cue, and the results were mixed: many were blocked by StableDiffusion’s content policy, which I guess means there was something awful in them, but the survivors looked amazing.

[6 generated images]

Anthropomorphic Animals (Mostly dressed as adventurers)

One thing I struggled to get right with other models was anthropomorphic animals, especially if I also asked for medieval, steampunk, or fantasy clothes. My dream of drawing a Mouse Guard party with DALL-E would never come to fruition. With StableDiffusion, one trick that worked for me was to prompt “ferret wearing a pirate costume” instead of, say, “ferret with pirate clothes” or “ferret dressed as a pirate”.

Then I also got a prompt from Twitter and iterated on it: “cute and adorable [animal], wearing [clothes], steampunk/clockpunk/fantasy…”, plus style cues.

This one worked like a charm. Rather than telling you the prompt I used for each individual picture, I will just show you the ones I liked best, so you can see what possibilities exist by tweaking a prompt like that one (I guess you can deduce the animal, etc. from the images themselves).

Example prompt: “Cute and adorable ferret wizard, wearing coat and suit, steampunk, lantern, anthropomorphic, Jean-Baptiste Monge, oil painting”

In general, removing Jean-Baptiste Monge didn’t change results that much, swapping “coat and suit” for “tailcoat” gave me the results I liked best, and adding “portrait” turned all the paintings into humans. The “lantern” keyword was copied from Twitter, but it didn’t really bring much to the table (and most of these animals aren’t carrying a lantern because of that). A small sweep over variants like these is sketched below.
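
If you want to explore tweaks of a template like this systematically, a tiny script helps. A sketch of mine (the animal/outfit/style lists are just example tweaks from this section, not an official recipe):

```python
# Sketch for sweeping variations of the Twitter prompt template above;
# the option lists are example tweaks from this section, nothing official.
from itertools import product

animals = ["ferret", "badger", "mouse"]
outfits = ["coat and suit", "tailcoat", "pirate costume"]
styles = ["steampunk", "clockpunk", "fantasy"]

prompts = [
    f"Cute and adorable {animal} wizard, wearing {outfit}, {style}, "
    "anthropomorphic, Jean-Baptiste Monge, oil painting"
    for animal, outfit, style in product(animals, outfits, styles)
]
print(len(prompts))  # 27 prompt variants to try
```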

[20 generated images]

I really went crazy with these, and this is just my selection, so you can imagine how many I tried.

Forest Scenes

Another thing I love about StableDiffusion (which DALL-E 2 also gets right) is how well it renders textures. I can imagine a 3D artist using one of these models to enhance their own 3D objects by creating many different textures quickly and combining them with domain knowledge.

Because of that, and my love for moss, I made many forest scenes with abundant growth and moss. Many of them look like Magic: The Gathering illustrations. I will not post the prompts, as each one was different, but they mostly followed my digital fantasy illustration template.

[7 generated images]

StableDiffusion was the first AI art model with which I successfully got a centaur. Not a deformed monstrosity, not a horse, not a weird human. A real centaur! That made me happy, and I had to share it.

I honestly made a lot more illustrations I loved (around 400 total, I think?), but I guess most readers will get bored long before they finish scrolling this post, so I will not keep you any longer.

These are the last ones, I swear.

[2 generated images]

Before we reach the end, I want to raise a concern and propose a challenge. No matter what I tried, I could not make either DALL-E 2 or StableDiffusion draw characters in the style of JoJo’s Bizarre Adventure (or Araki in general). I tried the obvious style cues and others, and none worked. So if any of you manages to make one of these models draw SpongeBob SquarePants in the style of JoJo’s, or any other recognizable character, you will get a thousand internet points from me.

Appendix A: Stable Diffusion Prompt Guide

In general, the best Stable Diffusion prompts will have this form:

“A [type of picture] of a [main subject], [style cues]”

Some types of picture include digital illustration, oil painting (usually good results), matte painting, 3D render, and medieval map.

The main subject can be anything you’re thinking of, but StableDiffusion still struggles with compositionality, so it shouldn’t be more than one or two main things (say, a beaver wearing a suit, or a cat samurai with a pet pug). The main subject should be mostly composed of adjectives and nouns. Avoid verbs, as Stable Diffusion has a hard time interpreting them correctly.

Style cues can be anything you want to condition the image on. I wouldn’t add too many, maybe only 1 to 3. These can really vary a lot but some good ones are: concept art, steampunk, trending in artstation, good composition, hyper realistic, oil on canvas, vivid colors.

Additionally, adding the name of an artist as a cue will make the picture look like something that artist made, though it may condition the image’s contents, especially if that artist had narrow themes (Beatrix Potter gets you spurious rabbits, for instance).
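
If you assemble prompts programmatically, the template is trivial to encode. A minimal sketch (`build_prompt` is my own hypothetical helper, not part of any Stable Diffusion tooling), reproducing one of the prompts from this post:

```python
# Minimal sketch of the template above; `build_prompt` is a hypothetical
# helper of mine, not part of any Stable Diffusion tooling.
def build_prompt(picture_type, main_subject, style_cues):
    return f"A {picture_type} of {main_subject}, {', '.join(style_cues)}"

prompt = build_prompt(
    "digital illustration",
    "a steampunk library with clockwork machines",
    ["4k", "detailed", "trending in artstation", "fantasy vivid colors"],
)
# -> "A digital illustration of a steampunk library with clockwork machines,
#     4k, detailed, trending in artstation, fantasy vivid colors"
```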

For a detailed guide to crafting the best Stable Diffusion prompts, see A Guide to Writing Prompts for Text-to-Image AI.

You can also find many great prompts with the Lexica.art prompt search engine: find images you like, then tweak their prompts.

Appendix B: Resources and Links

Given how much has happened lately (even John Oliver is talking about DALL-E!), here are some other articles you may want to read.

  • Stable Diffusion: The Most Important AI Art Model Ever covering the more social/economic side of this.
  • A traveler’s guide to the latent space: a guide on prompt engineering that goes really in depth. I haven’t actually read the whole thing.
  • A guide to Writing Prompts for Text-to-Image AI: The best quick primer I’ve found on prompt engineering and writing prompts for DALL-E 2/StableDiffusion or any other text-to-image AI.
  • Art Prompts: My Experiments with Mini DALL-E: My first post on text-to-image AI, where I included my own AI art prompt guide. Here you can see how far we’ve come and how fast.
  • DALL-E 2 Experiments: The post I wrote two weeks ago when DALL-E 2 beta release was news and StableDiffusion hadn’t come out yet. See if you can spot the same prompts’ different results.
  • How to Draw: Where a user uses StableDiffusion’s img2img version to convert an MSPaint drawing into a realistic sci-fi image.
  • Image2Image StableDiffusion, available for free on Replicate. You can draw a rough sketch of what you want in jspaint (the browser copy of MSPaint), upload it to Stable Diffusion img2img, and use it as a starting point for your AI art. It is amazing! See also Stable Diffusion is a really big deal by Simon Willison, which came out a little after I wrote this post, when Stability.ai released the img2img StableDiffusion model.
  • High-performance image generation using Stable Diffusion in KerasCV: Stable Diffusion was finally ported to Keras. It runs smoothly on both GPU and CPU if you have Keras installed, and it is the version I’ve been using to make AI art on my local computer (see the sketch after this list).
  • Lexica.art: a great search engine for prompts and artworks. It is an amazing resource for finding good images and seeing which prompts generated them, which you can then copy and tune to your needs. An easy way to build on the best Stable Diffusion prompts other people have already found.
  • If you like anime, Waifu Diffusion is a text-to-image diffusion model fine-tuned on high-quality anime images, using Stable Diffusion as a starting point. It generates anime illustrations, and it’s awesome.
  • The Illustrated Stable Diffusion explains how Stable Diffusion works, step by step and through different levels of abstraction, and has great illustrations.
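
Since that KerasCV port is what I run locally, here is a minimal sketch of generating images with it, assuming keras_cv and TensorFlow are installed (the prompt is one from this post; the weights download on first use):

```python
import keras_cv
from PIL import Image

# Build the KerasCV Stable Diffusion model at the standard 512x512 resolution.
model = keras_cv.models.StableDiffusion(img_width=512, img_height=512)

# Generate a small batch of images for one of the prompts from this post.
images = model.text_to_image(
    "A digital illustration of a steampunk library with clockwork machines, "
    "4k, detailed, trending in artstation, fantasy vivid colors",
    batch_size=3,
)

# `images` is a uint8 array of shape (batch, 512, 512, 3); save each one.
for i, array in enumerate(images):
    Image.fromarray(array).save(f"steampunk_library_{i}.png")
```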

If you liked this article, please share it with someone you think will like reading it too. I wrote this for you guys.
