this post was submitted on 27 Jan 2024
32 points (100.0% liked)

NotAwfulTech

367 readers

a community for posting cool tech news you don’t want to sneer at

non-awfulness of tech is not required or else we wouldn’t have any posts

founded 1 year ago

Remember how we were told that genAI learns "just like humans", that the law supposedly has nothing to say about fair use, and that, I guess, all art is now owned by big tech companies?

Well, of course it's not true. Exploiting a few of the ways in which genAI is not like human learners, artists can filter their digital art in such a way that if a genAI tool consumes it, it actively reduces the quality of the model, undoing generalization and bleeding into neighboring concepts.

Can an AI tool be used to undo this obfuscation? Yes. At scale, however, doing so drives compute costs higher and higher. This also looks like an improvable method, not a dead end: adversarial input design is a growing field of machine learning, with more and more techniques becoming widely available. Think of it as a sort of "cryptography for semantics", in the sense that it imposes asymmetric work on AI consumers while leaving the human eye much less affected.
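If you're curious what this looks like mechanically, here is a minimal sketch of the general idea (this is not the Glaze/Nightshade code, which isn't open; CLIP is just a stand-in encoder, and the function name, step count, and eps budget are all illustrative): push an image's features toward an unrelated concept while keeping the pixel-level changes tiny.

```python
import torch
import clip  # OpenAI's CLIP, used here only as a stand-in feature extractor
from torchvision.transforms import Normalize

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()  # full precision keeps the gradient step simple
normalize = Normalize((0.48145466, 0.4578275, 0.40821073),
                      (0.26862954, 0.26130258, 0.27577711))  # CLIP's own stats

def poison(image, target, steps=200, eps=4 / 255, lr=1e-2):
    """Nudge `image` (a [1, 3, 224, 224] tensor in [0, 1]) so its features
    drift toward `target`'s features, within an L-infinity budget of eps."""
    image, target = image.to(device), target.to(device)
    delta = torch.zeros_like(image, requires_grad=True)
    with torch.no_grad():
        target_feat = model.encode_image(normalize(target))
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feat = model.encode_image(normalize((image + delta).clamp(0, 1)))
        # make the perturbed image look, to the encoder, like the target concept
        loss = 1 - torch.cosine_similarity(feat, target_feat).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the change visually negligible
    return (image + delta).clamp(0, 1).detach()
```

The eps budget is the whole trick: small enough that a human sees the same picture, large enough that the model's feature space gets dragged somewhere else entirely.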

Now we just need labor laws to catch up.

Wouldn't it be funny if generative AI not only failed to deliver a boring dystopia, but the proliferation and expansion of this and similar techniques to protect human meaning eventually put a lot of grifters out of business?

We must have faith in the dark times. Share this with your artist friends far and wide!

[–] corbin@awful.systems 13 points 10 months ago (11 children)

I can't endorse Glaze or Nightshade, sorry. If literally nothing else, it's not Free Software and it's offered with a nasty license:

You are not permitted to … reverse engineer the Software …

You are not permitted to … permit … any part of the Software … to be combined with or become incorporated in any other software …

So I'm not allowed to have the discussion I'm currently having, nor to include it in any Linux distro. To me, that's useless at best and malicious at worst. Ironic, considering that their work directly builds upon Stable Diffusion.

Also, Nightshade will be ineffective as an offensive tool. Quoting from their paper:

… the perturbations we optimized on poison images are able to perturb image’s features in text-to-image models, but they have limited impact on the features extracted by alignment models. … We note that it might be possible for model trainers to customize an alignment model to ensure high transferability with poison sample generation, thus making it more effective at detecting poison samples.

This is not only an admission of failure but a roadmap for anybody who wants to work around Nightshade. Identify poisoned images by using an "alignment model," which correlates images with sets of labels, to test whether an image is poorly labeled; if the image appears well-labeled to a human but not to an alignment model, then it may be poisoned and will need repair/corroboration from alternate sources.
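In code terms, that roadmap is roughly the following (a hypothetical sketch, not anything the paper ships; CLIP stands in for the customized alignment model they describe, and the 0.2 cutoff is invented):

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def alignment_score(image_path: str, caption: str) -> float:
    """Cosine similarity between an image and its caption in the alignment model's space."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    text = clip.tokenize([caption]).to(device)
    with torch.no_grad():
        img = model.encode_image(image)
        txt = model.encode_text(text)
        img = img / img.norm(dim=-1, keepdim=True)
        txt = txt / txt.norm(dim=-1, keepdim=True)
    return (img @ txt.T).item()

def flag_suspects(pairs, threshold=0.2):
    """pairs: iterable of (image_path, caption). The cutoff would need to be
    calibrated on known-clean data; 0.2 is a placeholder."""
    return [(p, c) for p, c in pairs if alignment_score(p, c) < threshold]
```

Run something like that once over a scraped dataset and poisoned samples should cluster at the bottom of the score distribution, which is exactly why I don't think this holds up as an offensive tool.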

I also ranted about this on Mastodon.

[–] locallynonlinear@awful.systems -3 points 10 months ago* (last edited 10 months ago) (7 children)

Ha! Nope, not buying it.

nasty license … Ironic, considering that their work directly builds upon Stable Diffusion

Funny you mention licenses, since Stable Diffusion and the other leading AI models were built on labor exploitation. When this issue is finally settled by law, history will not look kindly on you.

So I’m not allowed to have the discussion I’m currently having

Doesn't seem to prevent you from doing it anyways. Does any license slow you down? Nope.

nor to include it in any Linux distro

Not sure that's true, but it's also unnecessary. Artists don't care about this or need it to be. I think it's a disingenuous argument, made in the astronaut suit you wear on the high horse drawn from work you stole from other people.

This is not only an admission of failure but a roadmap for anybody who wants to work around Nightshade.

Sounds like an admission of success given that you have to step out of the shadows to tell artists on Mastodon not to use it because, ahem, license issues?????????

No. Listen. The point is to alter the economics, to make training on images from the internet actively dangerous. It doesn't even take much: a small amount of actively poisoned data forces future models to use alignment filtering to bypass it, increasing the (thin) marginal costs of training models that cheat people out of their work.
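Back-of-envelope, with every number made up purely to show the shape of the asymmetry:

```python
# Illustrative figures only; real costs depend on the model, hardware, and data.
n_images = 5_000_000_000            # rough scale of a scraped training set
filter_sec_per_image = 0.01         # one alignment-model pass per image (hypothetical)
retrains_per_year = 4               # each retrain repeats the filtering pass

trainer_gpu_hours = n_images * filter_sec_per_image * retrains_per_year / 3600
artist_gpu_hours = 100 * 60 / 3600  # an artist poisoning ~100 images at ~60 s each, once

print(f"trainer: ~{trainer_gpu_hours:,.0f} GPU-hours per year just to filter")
print(f"artist:  ~{artist_gpu_hours:.1f} GPU-hours, one time")
```

The artist pays once; the trainer pays on every pass over everyone's data. That's the economics.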

Shame on you dude.

If you want to hurt the capitalists, consider exfiltrating weights directly, as was done with LLaMA, to ruin their moats.

Good luck on competing in the arms race to use other people's stuff.

@self@awful.systems can we ban the grifter?

[–] 200fifty@awful.systems 7 points 10 months ago

I don't think they were defending AI necessarily, just saying they had objections to the specific technique used by these tools. I do think that not open-sourcing the thing is probably defensible given that it exists in an adversarial context, but the technical concerns are worth being aware of.
