Hi, it's me, the author!
First of all, thanks for reading.
In the article I explain that this isn't exactly what authors do: reading and writing are inherently human activities, and the consumption and processing of massive amounts of data (far more than a human with a photographic memory could process in a hundred million lifetimes) is a completely different process.
I also point out that I don't have a problem with LLMs as a concept, and I'm actually excited about what they can do, but that they are inherently different from humans and should be treated as such by the law.
My main point is that authors should have the ability to decree that they don't want their work used as training data for megacorporations to profit from without their consent.
So, yes in a way it is about money, but the money in question being the money OpenAI and Meta are making off the backs of millions of unpaid and often unsuspecting people.
I think it's an interesting topic, thanks for the article.
It does start to raise some interesting questions. If an author doesn't want their book to be ingested by an LLM, then what is acceptable? Should all LLMs now be ignorant of that work? What about summaries or reviews of that work?
What if an LLM could extrapolate what's in the book from a summary? Or write a book similar to the original? Does that become a new work, or does it still fall under copyright?
I do fear that copyright laws will muddy the waters and slow down the development of LLMs, having a greater impact than any government standards ever will!
I'm all for muddy waters and slow development of LLMs at this juncture. The world is enough of a capitalist horrorshow and so far all this tech provides is a faster way to accelerate the already ridiculously wide class divide. Just my cynical luddite take of the day...