OpenAI says it’s “impossible” to create useful AI models without copyrighted material
(arstechnica.com)
What you count as "one" example is arbitrary. In terms of pixels, you're looking at millions right now.
The ability to train faster using fewer examples in real time, similar to what an intelligent human brain can do, is definitely a goal of AI research. But right now, we may be seeing from AI what a below-average human brain could accomplish with hundreds of lifetimes to study.
I mean, no, if you only ever look at public domain stuff you literally wouldn't know the state of the art, which is historically happening for profit. Even the most untrained artist "doing their own thing" watches Disney/Pixar movies and listens to copyrighted music.
If we're going by the number of pixels being viewed, then you have to use the same measure for both humans and AIs - and because AIs have to look at billions of images while humans do not, the AI still requires far more pixels than a human does.
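Even under that pixel-counting framing, the gap is easy to estimate. A rough back-of-envelope sketch, where every count is an assumed order-of-magnitude figure for the sake of the argument, not a measurement:

```python
# Back-of-envelope comparison of pixels consumed. All counts below are
# illustrative assumptions, not measured values.

human_images_seen = 10_000_000     # rough guess: images a person might view in a lifetime
ai_images_seen = 2_000_000_000     # rough guess: images in a large training set
pixels_per_image = 512 * 512       # a common training resolution

human_pixels = human_images_seen * pixels_per_image
ai_pixels = ai_images_seen * pixels_per_image

print(f"human: ~{human_pixels:.1e} pixels")
print(f"AI:    ~{ai_pixels:.1e} pixels")
print(f"ratio: ~{ai_pixels // human_pixels}x")  # the AI's pixel budget dwarfs the human's
```

Under these assumed numbers the AI consumes around 200 times more pixels, and the real training sets are bigger still.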
And humans don't require the most modern art in order to learn to draw at all. Sure, if they want to compete with modern artists, they would need to look at modern artists (for which educational fair use exists, and again the quantity of art being used by the human for this purpose is massively lower than what an AI uses - a human does not need to consume billions of artworks from modern artists in order to learn what the current trends are). But a human could learn to draw, paint, sculpt, etc. purely by looking at public domain and creative commons works, because the process for drawing, say, the human figure (with the right number of fingers!) has not changed in hundreds of years. A human can also just... go outside and draw things they see themselves, because the sky above them and the tree across the street aren't copyrighted. And in fact, I'd argue that a good artist should go out and find real things to draw.
OpenAI's argument is literally that their AI cannot learn without using copyrighted materials in vast quantities - too vast for them to simply compensate all the creators. So it genuinely is not comparable to a human, because humans can, in fact, learn without using copyrighted material. If OpenAI's argument is actually that their AI can't compete commercially with modern art without using copyrighted works, then they should be honest about that - but then they'd be showing their hand, wouldn't they?
Which is the literal goal of Dall-E, SD, etc.
They could definitely learn some amount of skill, I agree. I'd be very interested to see the best an AI could achieve using only PD and CC content. But you'd agree that it would look very different from modern art, just as an alien who had only consumed earth media from 100+ years ago would be unable to relate to us.
Yeah, I'd consider that PD/CC content that such an AI would easily have access to. But obviously the real sky is something entirely different from what is depicted in Starry Night, Star Wars, or H.P. Lovecraft's description of the cosmos.
Yeah, I'd consider that a strong claim on their part; what they really mean is, it's the easiest way to make progress in AI, and we wouldn't be anywhere close to where we are without it.
And you could argue "convenient that it both saves them money, and generates money for them to do it this way", but I'd also point out that the alternative is they keep the trained models closed source, never using them publicly until they advance the tech far enough that they've literally figured out how to build/simulate a human brain that is able to learn as quickly and human-like as you're describing. And then we find ourselves in a world where one or two corporations have this incredible proprietary ability that no one else has.
Personally, I'd rather live in the world where the information about how to do all of this isn't kept for one or two corporations to profit from: the version where they publish their work publicly, early, and often, show that it works, and people are able to reproduce it, open source it, train their own models, and advance the technology in a space where anyone can use it.
You could hypothesize a middle ground where they do the research but aren't allowed to profit from it without licensing every bit of data they train on. But the reality of AI research is that it only happens to the extent that it generates revenue. It's been that way for the entire history of AI. Douglas Hofstadter has been asking deep, important questions about AI as it relates to consciousness for some 60 years (e.g. GEB, I Am a Strange Loop), but there's a reason he didn't discover LLMs and tech companies did. That's not to say his writings are meaningless; in fact, I think they're more important than ever. But he was never going to get to this point with a small team of grad students, a research grant, and some public domain datasets.
So, it's hard to disagree with OpenAI there: AI definitely wouldn't be where it is without them doing what they've done. And I'm a firm believer that unless we figure our shit out with energy generation soon, the earth will be an uninhabitable wasteland. We're playing a game of climb the Kardashev scale, we opted for the "burn all the fossil fuels as fast as possible" strategy, and now we're at the point where we either spend enough energy fast enough to figure out the tech needed to survive this, or we suffocate on the fumes. The clock is ticking, and AI may be our best bet at saving the human race that doesn't involve an inordinate number of people dying.
OpenAI are not going to make the source code for their model accessible to all to learn from. This is 100% about profiting from it themselves. And using copyrighted data to create open source models would seem to violate the very principles the open source community stands for - namely that everybody contributes what they agree to, and everything is published under a licence. If the basis of an open source model is a vast quantity of training data from a vast quantity of extremely pissed off artists, at least some of the people working on that model are going to have a "are we the baddies?" moment.
The AI models are also never going to produce a solution to climate change that humans will accept. We already know what the solution is, but nobody wants to hear it, and expecting anyone to listen to ChatGPT and suddenly change their minds about using fossil fuels is ludicrous. And an AI that is trained specifically on knowledge about the climate and technologies that can improve it, with the purpose of innovating some hypothetical technology that will fix everything without humans changing any of their behaviour, categorically does not need the entire contents of ArtStation in its training data. AIs that are trained to do specific tasks, like the ones trained to identify new antibiotics, are trained on a very limited set of data, most of which is not protected by copyright and any that is can be easily licenced because the quantity is so small - and you don't see anybody complaining about those models!
OpenAI isn't the only company doing this, nor is their specific model the knowledge that I'm referring to.
It is already being used to further fusion research beyond anything we've been able to do with standard algorithms.
Then it's not a solution. That's like telling your therapist, "I know how to fix my relationship, my partner just won't do it!"
Lol. Yeah, I agree, that's never going to work.
That's a strong claim to make. Regardless of the ethics involved, or the problems AI can solve today, the fact is we're seeing rapid advances in AI research as a direct result of these ethically dubious models.
In general, I'm all for the capitalist method of artists being paid their fair share for the work they do, but on the flip side, I see a very possible mass extinction event on the horizon, which could cause suffering the likes of which humanity has never seen. If we assume that is the case, and we assume AI has a chance of preventing it, then I would prioritize that over people's profits today. And I think it's perfectly reasonable to say I'm wrong.
And then there's the problem of actually enforcing any sort of regulation, which would be so much more difficult than people here are willing to admit. There's basically nothing you can do even if you wanted to. Your Carlin example is exactly the defense a company would use: "I guess our AI just happened to create a movie that sounds just like Paul Blart, but we swear it's never seen the film. Great minds think alike, I guess, and we sell only the greatest of minds".
It isn't wrong to use copyrighted works for training. Let me quote an article by the EFF here:
and
What you want would swing the doors open for corporate interference: hindering competition, stifling unwanted speech, and monopolization like nothing we've seen before. There are very good reasons people have these rights, and we shouldn't be trying to change this. Ultimately, it's apparent to me that you are in favor of these things: that you believe artists deserve a monopoly on ideas and non-specific expression, to the detriment of anyone else. If I'm wrong, please explain to me how.
Humans benefit from years of evolutionary development and corporeal bodies to explore and interact with their world before they're ever expected to produce complex art. AIs need huge datasets to learn patterns to make up for this disadvantage. Nobody pops out of the womb with fully formed fine motor skills, pattern recognition, understanding of cause and effect, shapes, comparison, counting, vocabulary related to art, and spatial reasoning. Datasets are huge and filled with image-caption pairs because they have to teach models all of this from scratch. AI isn't human, and we shouldn't judge it against humans, just like we don't judge boats on their rowing ability.
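For concreteness, the image-caption pairs mentioned above have roughly this shape; the file paths and captions here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class CaptionedImage:
    image_path: str  # where the pixel data lives
    caption: str     # the text the model learns to associate with it

# A toy slice of a dataset; real ones hold hundreds of millions of pairs.
dataset = [
    CaptionedImage("imgs/0001.jpg", "a ripe banana on a wooden table"),
    CaptionedImage("imgs/0002.jpg", "a child's crayon drawing of a house"),
    CaptionedImage("imgs/0003.jpg", "a starry night sky over a village"),
]

# Training loops over (pixels, caption) pairs to learn, from scratch,
# associations a human absorbs through embodied experience.
for example in dataset:
    print(example.image_path, "->", example.caption)
```

The sheer volume of pairs is standing in for the embodied learning a model never gets.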
AIs don't require most modern art in order to learn to make images either, but the range of expression would be limited, just like a human's in this situation. You can see this in cave paintings and early sculptures. They wouldn't be limited to the same degree, but they would still be limited.
It took us 100,000 years to get from cave drawings to Leonardo da Vinci. This is just another step for artists, like the camera obscura was in the past. It's important to remember that early humans were as smart as we are; they just lacked the interconnectivity to exchange ideas that we have.
I think the difference in artistic expression between modern humans and humans in the past comes down to the material available (like the actual material to draw with).
Humans can draw without ever seeing any image. Blind people can create art and draw things because we have a different understanding of the world around us than AI has. No human artist needs to look at a thousand, or even one, picture of a banana to draw one.
The way AI sees and "understands" the world and how it generates an image is fundamentally different from how the human brain conveys the object banana into an image of a banana.
That is definitely a difference, but even that is a kind of information shared between people, and information itself is what gives everyone something to build on. That gives them a basis on which to advance understanding, instead of wasting time coming up with the same things themselves every time.
Humans don't need representations of things in images because they have the opportunity to interact with the genuine article, and in situations when that is impractical, they can still fall back on images to learn. Someone without sight from birth can't create art the same way a sighted person can.
That's the beauty of it all, despite that, these models can still output bananas.
Humans learn mostly from real life. Go touch some grass