Orcs versus progress.
🤖 I'm a bot that provides automatic summaries for articles:
Further, OpenAI writes that limiting training data to public domain books and drawings "created more than a century ago" would not provide AI systems that "meet the needs of today's citizens."
OpenAI responded to the lawsuit on its website on Monday, claiming that the suit lacks merit and affirming its support for journalism and partnerships with news organizations.
OpenAI's defense largely rests on the legal principle of fair use, which permits limited use of copyrighted content without the owner's permission under specific circumstances.
"Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents," OpenAI wrote in its Monday blog post.
In August, we reported on a similar situation in which OpenAI defended its use of publicly available materials as fair use in response to a copyright lawsuit involving comedian Sarah Silverman.
OpenAI claimed that the authors in that lawsuit "misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence."
OpenAI's notion of "fair use": military and weapons
Those types of companies are getting so f*****g disgusting.
https://techcrunch.com/2024/01/12/openai-changes-policy-to-allow-military-applications/ https://www.theverge.com/2024/1/12/24036397/openai-is-softening-its-stance-on-military-use
The sheer volume of secondhand content an LLM needs to consume during training inevitably includes copyrighted material. If they used this thread, the quotes the OP included would end up in the training set.
Fan forums and wikis about copyrighted works provide copious amounts of information about the stories and facilitate their retelling. They're right that it is impossible for a general-purpose LLM.
My personal experience so far, though, has been that general-purpose and multimodal LLMs are less consistently useful to me than GPT-4 was at launch. I think small, purpose-built LLMs backed by trusted content providers have a better chance of success for most users, but we'll see whether anyone can make that work, given the challenge of steering users to the right one for the right task.
I would just like to say, with open curiosity, that I think a nice solution would be for OpenAI to become a nonprofit with clear guidelines to follow.
What does that make me? Other than an idiot.
Of that, at least, I'm self-aware.
I feel like we're disregarding how significant artificial intelligence will be in our future, because the only thing anyone who cares is trying to do is win back control so they can DO something about it. Meanwhile, the news is becoming a feeding tube for the masses, and they've masked that with hate for all of us.
Anyways, sorry for the diatribe. Happy new year.
#fuckingcapitalists
On both sides even.