[–] RememberTheApollo_@lemmy.world 8 points 9 months ago (1 children)

I think most of us understand that, and this exercise is a demonstration of that issue. These AIs do have “negative” prompts, so if you asked one to draw a room and it kept giving you elephants, you could add “-elephants” (or whatever the “no” syntax is for that particular AI) and hope it overrules whatever references it is using to put elephants in the room. It’s not always successful.
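
For concreteness, here's a minimal sketch of that negative-prompt idea, assuming the Hugging Face diffusers library and a public Stable Diffusion checkpoint; the model name and prompts are only illustrative:

```python
# Negative prompt with Stable Diffusion via diffusers; a sketch, not the
# exact setup any particular site or service uses.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a cozy living room, warm lighting, photo",
    negative_prompt="elephant, animals",  # the "-elephants" idea
    guidance_scale=7.5,
).images[0]

image.save("room_without_elephants.png")
```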

[–] fidodo@lemmy.world 2 points 9 months ago* (last edited 9 months ago)

I think the main point here is that image generation AI doesn't understand language: it weights pixels based on tags, and yes, you can give negative weights too. That becomes more evident if you ask it for anything positional or logical; it isn't designed to understand that.
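
Roughly how those negative weights enter the sampler, as a generic classifier-free guidance sketch rather than any particular library's API (`predict_noise` is a stand-in for the denoising model):

```python
# Generic classifier-free guidance with a negative prompt. predict_noise is a
# placeholder: predict_noise(latents, t, text_emb) returns the predicted noise
# for that text conditioning.
def guided_noise(predict_noise, latents, t, pos_emb, neg_emb, guidance_scale=7.5):
    noise_neg = predict_noise(latents, t, neg_emb)  # negative-prompt branch
    noise_pos = predict_noise(latents, t, pos_emb)  # positive-prompt branch
    # The negative branch is the baseline; the scaled difference pushes the
    # denoising step toward the prompt and away from the negative tags.
    return noise_neg + guidance_scale * (noise_pos - noise_neg)
```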

LLMs are, though, so you could combine the tools: the LLM commands the image generator and could even create a seed image to apply positional logic. I was surprised to find that asking ChatGPT to generate a room without elephants via DALL·E also failed. I would expect it to convert the user query into tags rather than feed it in raw.
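
A hedged sketch of that combination: a chat model rewrites the raw request into positive and negative tags, which you would then hand to an image generator that accepts negative prompts (like the diffusers pipeline above). The model name and JSON schema here are assumptions for illustration, not how ChatGPT actually wires up DALL·E:

```python
# Sketch: use a chat model to turn a raw request into explicit tags before the
# image model ever sees it. Model name, schema, and the downstream pipeline
# are assumptions.
import json
from openai import OpenAI

client = OpenAI()

def query_to_tags(user_request: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any instruction-following chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the user's image request as JSON with two fields: "
                    "'prompt' (comma-separated tags that SHOULD appear) and "
                    "'negative_prompt' (comma-separated tags that must NOT appear)."
                ),
            },
            {"role": "user", "content": user_request},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

tags = query_to_tags("a photo of a room with no elephants in it")
# e.g. {"prompt": "empty living room, photo", "negative_prompt": "elephant"}
# image = pipe(prompt=tags["prompt"], negative_prompt=tags["negative_prompt"]).images[0]
```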