this post was submitted on 08 Jan 2024
407 points (96.1% liked)
Technology
Sure but who is at fault?
If I manually type an entire New York Times article into this comment box, and Lemmy distributes it all over the internet... that's clearly a breach of copyright. But are the developers of the open source Lemmy Software liable for that breach? Of course not. I would be liable.
Obviously Lemmy should (and does) take reasonable steps (such as defederation) to help manage illegal use... but that's the extent of their liability.
All NYT needed to do was show OpenAI how they got the AI to output that content, and I'd expect OpenAI to proactively find a solution. I don't think the courts will look kindly on NYT's refusal to collaborate and find some way to resolve this without a lawsuit. A friend of mine tried to settle a case once, but the other side refused and it went all the way to court. The court found that my friend had been in the wrong (as he freely admitted all along), but it also ordered the other side to compensate my friend for his legal costs (including time spent gathering evidence). In the end, my friend got the outcome he was hoping for, and the guy who "won" the lawsuit lost close to a million dollars.
They might look down upon that but I doubt they’ll rule against NYT entirely. The AI isn’t a separate agent from OpenAI either. If the AI infringes on copyright, then so does OpenAI.
Copyright applies to the reproduction of a work, so if they build a machine that is capable of reproducing works (and they did), then they are liable for it.
Seems like the solution here is to train the model not to output copyrighted works, and maybe to train a sub-system to detect such output and stop the main chatbot from responding with it.
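That kind of detection sub-system could be approximated with something as simple as n-gram overlap against the protected corpus. Here's a minimal sketch; all the function names and the threshold are hypothetical illustrations, not anything OpenAI is known to use:

```python
# Hypothetical output filter: flag draft responses that reproduce
# long verbatim spans from a corpus of protected articles.

def ngrams(text: str, n: int = 8):
    """Yield word n-grams from text, lowercased."""
    words = text.lower().split()
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def build_index(protected_texts, n: int = 8):
    """Index every n-gram that appears in the protected corpus."""
    index = set()
    for doc in protected_texts:
        index.update(ngrams(doc, n))
    return index

def looks_like_reproduction(candidate: str, index: set, n: int = 8,
                            threshold: float = 0.5) -> bool:
    """Flag a draft response if too many of its n-grams match the
    protected corpus -- a crude verbatim-copy detector, easily
    defeated by paraphrasing but cheap to run before every reply."""
    grams = list(ngrams(candidate, n))
    if not grams:
        return False
    hits = sum(1 for g in grams if g in index)
    return hits / len(grams) >= threshold
```

A real system would need fuzzy matching (paraphrases and small edits slip straight past exact n-gram lookup), but even this toy version shows the shape of the idea: check the output against the source material before it leaves the chatbot.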
That is for sure not the case. The modern world is bursting with machines capable of reproducing copyrighted works, and their manufacturers are not liable for copyright violations carried out by users of those machines. You're using at least one of those machines to read this comment. This stuff was decided around the time VCRs were invented.
Sorry, I meant the unlicensed reproduction of those works via machine. I missed a word, but it's important. Most machines do not reproduce works in unlicensed ways, especially not by themselves. Then there's the question of users: yes, if a user utilizes a machine to reproduce a work, that's on the user. However, a machine doesn't usually produce a copyrighted work by itself, because that production is illegal. The VCR is fine as a product because the VCR itself doesn't violate copyright; the user does, via its inputs. If the NYT input its own material and then received it back, that's obviously fine. If it didn't, though, that's illegal reproduction.
So here I expect the court will say that OpenAI has no right to reproduce the work in full or in amounts not covered by fair use and must take measures to prevent the reproduction of irrelevant portions of articles. However, they’ll likely be able to train their AI off of publicly available data so long as they don’t violate anyone’s TOS.
I am not familiar with the judicial system. It sounds to me like OpenAI wants to get hold of the evidence the NYT collected beforehand.