this post was submitted on 08 Feb 2024
987 points (98.9% liked)

Funny: Home of the Haha

[–] dipshit@lemmy.world 21 points 7 months ago (2 children)

An AI / LLM only tries to predict the next word or token. It cannot understand or reason; it can only sound like someone who knows what they are talking about. You said elephants, so it gave you elephants. The "no" modifier makes sense to us but not to the AI. It could, if we programmed it with if/then statements, but that's not an LLM, that's just coding.
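To illustrate what "only predicts the next token" means, here is a toy sketch. The probability table is entirely made up; a real model learns these distributions across billions of parameters rather than storing them in a lookup table:

```python
# Toy next-token "model": a hand-written table of conditional probabilities.
# Purely illustrative; no real LLM works from a table like this.
probs = {
    ("a", "room"): {"with": 0.6, "without": 0.4},
    ("room", "without"): {"elephants": 0.9, "windows": 0.1},
}

def next_token(context):
    """Greedily pick the most likely continuation of the last two tokens."""
    dist = probs.get(tuple(context[-2:]), {})
    return max(dist, key=dist.get) if dist else None
```

Note that the table happily continues "room without" with "elephants": statistically, text mentioning elephants is usually *about* elephants, which is exactly the failure mode being discussed.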

AI is really, really good at bullshitting.

[–] Turun@feddit.de 30 points 7 months ago* (last edited 7 months ago) (2 children)

AI / LLM only tries to predict the next word or token

This is not wrong, but also absolutely irrelevant here. You can be against AI, but please make the argument based on facts, not by parroting some distantly related talking points.

Current image generation is powered by diffusion models. Their inner workings are completely different from those of large language models. The part failing here in particular is the text encoder (CLIP). If you learn how it works and think about it, you'll be able to deduce why the image generator is forced to draw this image.

Edit: Because it's an obvious limitation, negative prompts have existed pretty much since diffusion models came out.
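For the curious: negative prompts work through classifier-free guidance. At each denoising step the model predicts noise twice, once conditioned on the prompt and once on the negative (or empty) prompt, and the sampler steers away from the latter. A numpy sketch with made-up numbers (real predictions come from a U-Net conditioned on text embeddings):

```python
import numpy as np

def guided_noise(eps_cond, eps_uncond, scale=7.5):
    """Classifier-free guidance: push the noise prediction toward the
    prompt-conditioned estimate and away from the unconditional (or
    negative-prompt) estimate."""
    return eps_uncond + scale * (eps_cond - eps_uncond)

# Made-up 2-d stand-ins for the model's noise predictions:
eps_room     = np.array([1.0, 0.0])   # conditioned on "empty room"
eps_elephant = np.array([0.2, 0.5])   # conditioned on negative prompt "elephants"
```

With "elephants" as the negative prompt, the guided prediction is pushed away from the elephant direction, which is why subtraction works where the word "no" in the prompt does not.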

[–] dipshit@lemmy.world 5 points 7 months ago (1 children)

Does the text encoder use natural language processing? I assumed it was working similarly to how an LLM would.

[–] Turun@feddit.de 5 points 7 months ago (1 children)

No, it does not. At least not in the same way that generative pre-trained transformers do. It does handle natural language, though.

The research is all open source if you want details. For Stable Diffusion you'll find plenty of pretty graphs that show how the different parts interact.

[–] dipshit@lemmy.world 4 points 7 months ago (1 children)

There would still need to be a corpus of text and some supervised training of a model on that text in order to “recognize” with some level of confidence what the text represents, right?

I understand the image generation works differently. As I gather, it starts with noise and a random seed, and then, via learned networks, the model takes pathways ("automagic" goes here) based on what was recognized in the text with NLP; something in the end like "elephant (subject), 100% confidence; big room (background), 75% confidence; windows (background), 75% confidence". I assume it then "merges" the things it thinks make up those tokens with the noise and (more "automagic" goes here) puts them where they need to go.
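The "starts with noise" part can be sketched as a toy loop; all numbers here are made up, and a real diffusion model predicts the noise with a trained U-Net instead of nudging toward a fixed target:

```python
import numpy as np

rng = np.random.default_rng(0)   # the random seed
x = rng.normal(size=(8, 8))      # start from pure noise
target = np.ones((8, 8))         # stand-in for the image the network is steered toward

for step in range(50):
    # Toy "denoising" step: move a little toward the target each iteration.
    # A real model predicts the noise in x and removes a fraction of it,
    # with the text conditioning deciding what "the target" looks like.
    x = x + 0.1 * (target - x)
```

After enough steps the noise has been refined into (in this toy case) the target; the repeated small corrections are the essence of diffusion sampling.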

[–] Turun@feddit.de 2 points 7 months ago

There would still need to be a corpus of text and some supervised training of a model on that text in order to “recognize” with some level of confidence what the text represents, right?

Correct. The CLIP encoder is trained on images and their corresponding descriptions, and therefore learns the names for the things in images.

And now it is obvious why this prompt fails: there are no images of empty rooms tagged as "no elephants". This can be fixed by adding a negative prompt, which subtracts the concept of "elephants" from the image in one of the automagical steps.
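The failure follows from how CLIP places text and images in a shared embedding space. A sketch with made-up 2-d vectors (real CLIP embeddings have hundreds of dimensions): because captions containing "elephant" overwhelmingly accompany pictures of elephants, a phrase like "no elephants" still lands near the elephant direction.

```python
import numpy as np

def cosine(a, b):
    """Similarity score CLIP-style embeddings are compared with."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings, chosen only to illustrate the geometry:
text_no_elephants = np.array([0.9, 0.2])  # "no elephants" still points elephant-ward
img_elephant      = np.array([0.8, 0.1])
img_empty_room    = np.array([0.1, 0.9])
```

In this toy geometry the text "no elephants" scores higher against the elephant image than against the empty room, which is the deduction the parent comment hints at.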

[–] Z4rK@lemmy.world 3 points 7 months ago (1 children)

These examples are not using Stable Diffusion directly, though. They use an LLM to create a generative image prompt for DALL-E / SD, which then gets executed. In none of these examples are we shown the actual prompt.

If you instead instruct the LLM to first show the text prompt, review it, make sure it does not include any elephants, revise it if necessary, and only then generate the image, you'll get much better results. Now, ChatGPT is horrible at following instructions like these if you don't set up the prompt very specifically, but it will still follow more of them internally.
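The review loop described above can be sketched as follows; `llm` is a hypothetical callable standing in for a chat model, and the revision prompts are invented for illustration:

```python
def make_image_prompt(request, llm, max_revisions=3):
    """Draft an image prompt, then review and revise it before use.
    `llm` is a hypothetical str -> str callable wrapping a chat model."""
    prompt = llm(f"Write an image-generation prompt for: {request}")
    for _ in range(max_revisions):
        # Review step: inspect the draft before it reaches the image model,
        # and ask for a rewrite if it mentions elephants at all.
        if "elephant" not in prompt.lower():
            break
        prompt = llm(f"Rewrite this prompt so it never mentions elephants: {prompt}")
    return prompt
```

The point is that the check happens in ordinary code, where "does the prompt contain 'elephant'" is trivial, rather than hoping the image model understands negation.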

Anyway, the issue in all the examples above does not stem from Stable Diffusion, but from the LLM generating an ineffective prompt for the Stable Diffusion model by attempting to include some simple negative word for elephants, which does not work well.

[–] Turun@feddit.de 2 points 7 months ago

If you prompt Stable Diffusion for "a room without elephants in it", you'll get elephants. You need to add elephants to the negative prompt to get a room without them. I don't think LLMs have been given the ability to set negative prompts.

[–] DarkThoughts@fedia.io 1 points 7 months ago

That's what negative prompts are for in those image-generating AIs (I have never used DALL-E, so no idea if it supports negative prompts). I guess you could have an LLM interpret a sentence like OP's to extract possible positive and negative prompts based on sentence structure, but that would always be less accurate than just specifying them separately.

Once you spend some time with those chatbot LLMs, you notice very quickly just how fucking stupid they actually are. And unfortunately, things like larger context / token sizes won't change that, and would scale incredibly badly in terms of hardware anyway. When you regenerate replies a few times, you get a sense of how much guesswork they make, and how often they completely misinterpret the previous tokens (including your last reply).

So yeah, they're definitely really good at bullshitting. It can be fun, but it is absolutely not what I'd call "AI", because there's simply no intelligence behind it, and it's certainly pretty overhyped (not to say there aren't genuinely useful applications for these algorithms).
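The "extract positive and negative prompts from sentence structure" idea could be sketched very naively like this; a real system would need an LLM or proper parsing, and this only handles the simplest sentence shapes:

```python
import re

def split_prompt(text):
    """Naive sketch: pull 'no X' / 'without X' phrases out of a request
    and turn them into a negative prompt. Illustrative only; it breaks on
    anything more complex than 'no <word>' or 'without <word>'."""
    negatives = re.findall(r"\b(?:no|without)\s+(\w+)", text)
    positive = re.sub(r"\s*\b(?:no|without)\s+\w+", "", text).strip(" ,.")
    return positive, ", ".join(negatives)
```

Even this crude split would serve the image model better than passing "no elephants" through verbatim, since diffusion models act on the negative prompt by subtraction rather than by understanding negation.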