Absolutely, and that's why OpenAI says the lawsuit has no merit. NYT claims that ChatGPT will copy articles without being asked, whereas OpenAI claims that NYT constructed prompts specifically to make it copy articles, and thus there's no merit to the suit.
That seems like a silly argument to me. A bit like claiming a piracy site is not responsible for hosting an unlicensed movie because you have to search for the movie to find it there.
(Or to be more precise, where you would have to upload a few seconds of the movie's trailer to get the whole movie.)
Well, if the content isn't on the site and it just links to a streaming platform, it technically isn't illegal.
The argument is that the article isn't sitting there to be retrieved, but that with enough prompting the model will nonetheless produce the same article.
It's like hiring a director and telling them to make a movie just like another one, telling the actors to act like the previous actors, and giving the writers the exact plot and dialogue. You MAY get a different movie because of creative differences since the last one was made, but it's probably going to turn out very close, close enough that if you did it a few times you'd get a near-perfect replica.
Well, no one has shared the prompt, so it's difficult to tell how credible it is.
If they put in a sentence and got 99% of the article back, that's one thing.
If they put in 99% of the article and got back something 95% similar, that's another.
Right now we just have NYT saying it gives back the article, and OpenAI saying it only does that if you give it "significant" prompting.
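To make that 99%/95% distinction concrete, here's a rough sketch of how you could put a number on how much of an article a prompt contains versus how much comes back in the output. The texts, figures, and comparison method (a simple character-level ratio) are purely illustrative assumptions; nothing like this is in the actual filings.

```python
# Illustrative only: a crude way to score "how much of the article came back",
# comparing both the prompt and the model output against the original text.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0..1 character-level similarity ratio between two texts."""
    return SequenceMatcher(None, a, b).ratio()

# Hypothetical texts, just to show the two scenarios described above.
article = "Full text of some hypothetical article goes here..."
prompt = "Full text of some hypothetical"          # most of the article pasted in
output = "Full text of some hypothetical article goes here..."  # near-verbatim return

print(f"prompt vs article: {similarity(prompt, article):.2f}")   # high if you pasted most of it
print(f"output vs article: {similarity(output, article):.2f}")   # high if it regurgitated it
```

A short prompt that still scores near 1.0 on the output comparison would look like the NYT's version of events; a prompt that already scores near 1.0 on its own would look more like OpenAI's.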
I think their concern is that I would be able to ask ChatGPT about an NYT article and it would tell me about it without me having to go to the ad-infested, cookie-crippled, account-restricted steaming pile that is their site, and every other news site.
Anyone with access to the NYT can also just copy-paste the text and plagiarize it directly. At the point where you're deliberately inputting copyrighted text and asking for that same text to be printed back as output, ChatGPT is scarcely any more sophisticated than MS Word.
The issue with plagiarism in LLMs is where they are outputting copyrighted material as a response to legitimate prompts, effectively causing the user to unwittingly commit plagiarism themselves if they attempt to use that output in their own works. This issue isn't really in play in situations where the user is deliberately attempting to use the tool to commit plagiarism.