Remember how we were told that genAI learns "just like humans", that the law can't touch it because of fair use, and that, I guess, all art now belongs to big tech companies?
Well, of course it's not true. By exploiting a few of the ways in which genAI is *not* like human learners, artists can filter their digital art so that if a genAI tool consumes it, it actively degrades the model, undoing generalization and bleeding into neighboring concepts.
Can an AI tool be used to undo this obfuscation? Yes. At scale, though, doing so drives compute costs ever higher. And this looks like an improvable method, not a dead end: adversarial input design is a growing field of machine learning, with more and more techniques becoming widely available. Think of it as a sort of "cryptography for semantics", in the sense that it imposes asymmetrical work on AI consumers while leaving the human eye much less affected.
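For intuition only, here's a minimal sketch of the kind of adversarial perturbation this relies on. To be clear, this is not Glaze or Nightshade's actual method: the resnet18 stand-in encoder, the `poison` helper, and all of its parameters are things I made up for illustration. The idea is just to nudge an image's embedding toward a different concept while keeping the pixel change small enough that a human barely notices:

```python
import torch
import torchvision.models as models

# Frozen stand-in encoder; a real attack would target the encoder the
# image generator itself trains against.
encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()  # expose penultimate features as "semantics"
encoder.eval()

def poison(image, target_image, eps=8 / 255, steps=100, lr=1e-2):
    """Add a small (L-inf bounded) perturbation to `image` so its embedding
    drifts toward `target_image`'s, i.e. toward the wrong concept.

    Both arguments: float tensors of shape (3, H, W) with values in [0, 1].
    """
    with torch.no_grad():
        target = encoder(target_image.unsqueeze(0))
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = encoder((image + delta).clamp(0, 1).unsqueeze(0))
        loss = torch.nn.functional.mse_loss(emb, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the change nearly invisible
    return (image + delta).detach().clamp(0, 1)
```

The asymmetry is the point: producing the perturbation costs one short gradient-descent run per image, while reliably detecting or scrubbing it across a billion-image scrape costs the consumer far more.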
Now we just need labor laws to catch up.
Wouldn't it be funny if generative AI not only failed to deliver its boring dystopia, but the proliferation of this and similar techniques for protecting human meaning eventually put a lot of grifters out of business?
We must have faith in the dark times. Share this with your artist friends far and wide!
I can't endorse Glaze or Nightshade, sorry. If literally nothing else, they're not Free Software, and they're offered under a nasty license:
So I'm not allowed to have the discussion I'm currently having, nor can the software be included in any Linux distro. To me, that's useless at best and malicious at worst. Ironic, considering that their work directly builds upon Stable Diffusion.
Also, Nightshade will be ineffective as an offensive tool. Quoting from their paper:
This is not only an admission of failure but a roadmap for anybody who wants to work around Nightshade. Identify poisoned images by using an "alignment model" (which correlates images with sets of labels) to test whether an image is poorly labeled: if an image appears well-labeled to a human but not to the alignment model, it may be poisoned and will need repair or corroboration from alternate sources.
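To make the roadmap concrete, here's a hedged sketch of that filtering loop, using CLIP as the alignment model; the checkpoint choice, the threshold, and the `dataset` variable are all my own illustrative stand-ins, not anything from the paper:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def alignment_score(path: str, caption: str) -> float:
    """How strongly the alignment model thinks the caption fits the image."""
    inputs = processor(text=[caption], images=Image.open(path),
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        return model(**inputs).logits_per_image.item()

dataset = [("cat.jpg", "a photo of a cat")]  # (path, caption) pairs, illustrative

# A low score under a caption a human finds plausible marks a candidate
# poisoned image, which then needs the manual repair/corroboration step.
suspects = [(p, c) for p, c in dataset if alignment_score(p, c) < 20.0]
```

Note that the cheap part is the scoring pass; the expensive part is whatever comes after "may be poisoned".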
I also ranted about this on Mastodon.
Ha! Nope, not buying it.
Funny you mention licenses, since Stable Diffusion and the other leading AI models were built on labor exploitation. When this issue is finally settled in law, history will not look kindly on you.
Doesn't seem to prevent you from doing it anyway. Does any license slow you down? Nope.
Not sure that's true, but it's also unnecessary. Artists don't care about this, and don't need it to be true. I think it's a disingenuous argument, made in the astronaut suit you wear on the high horse drawn from work you stole from other people.
Sounds like an admission of success, given that you have to step out of the shadows to tell artists on Mastodon not to use it because of, ahem, license issues???
No. Listen. The point is to alter the economics, to make training on images scraped from the internet actively dangerous. It doesn't even take much: a small amount of actively poisoned internet data forces future models to use alignment filtering to bypass it, raising the (thin) marginal costs of training for everyone cheating people out of their work.
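Back-of-envelope on what that alignment pass alone adds to a scrape; every number below is an assumption I'm pulling out of the air:

```python
images = 5_000_000_000        # assumed scrape size, LAION-ish
scores_per_gpu_second = 200   # assumed CLIP-style scoring throughput
usd_per_gpu_hour = 2.0        # assumed rental price

gpu_hours = images / scores_per_gpu_second / 3600
print(f"{gpu_hours:,.0f} GPU-hours, ~${gpu_hours * usd_per_gpu_hour:,.0f}")
# -> 6,944 GPU-hours, ~$13,889
```

And that's only the automated pass; every image it flags still needs human repair or corroboration from alternate sources, which doesn't come cheap or fast.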
Shame on you dude.
Good luck competing in the arms race to use other people's stuff.
@self@awful.systems can we ban the grifter?
I certainly comply with software licenses, so yes, that does slow me down. As they pointed out, the license precludes it from appearing in Linux distros and such.
Incidentally, I've gotten into very long and stupid arguments with people about Stable Diffusion's Definitely Not Open Fucking Source license.