Apparently, stealing other people's work to create a product for money is now "fair use," according to OpenAI, because they are "innovating" (stealing). Yeah. Move fast and break things, huh?

"Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials," wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit "misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence."

[–] hascat@programming.dev 8 points 10 months ago (2 children)

That's not the point though. The point is that the human comedian and the AI both benefit from consuming creative works covered by copyright.

[–] Phanatik@kbin.social 14 points 10 months ago (1 children)

Yeah, except a machine is owned by a company and doesn't consume the same way. It breaks down copyrighted works into data points so it can find the best way of putting those data points together again. If you understand anything at all about how these models work, you know they do not consume media the way we do. It is not an entity with a thought process or consciousness (despite what the misleading marketing of "AI" would have you believe); it's an optimisation algorithm.
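A toy sketch of that "data points" idea in Python (purely illustrative: real models learn billions of neural-network weights rather than raw counts, so treat this as the shape of the process, not OpenAI's actual pipeline):

```python
from collections import Counter

# "Break down" a copyrighted work into data points:
# map each word to an integer token ID.
text = "the cat sat on the mat because the cat was tired"
words = text.split()
vocab = {w: i for i, w in enumerate(dict.fromkeys(words))}
ids = [vocab[w] for w in words]  # the work, reduced to numbers

# "Put those data points together again": tally which token
# tends to follow which. Training a real model optimises
# billions of parameters toward this same statistical goal.
transitions = Counter(zip(ids, ids[1:]))
print(transitions.most_common(3))
```

The "training" here is just counting; in a real model the counting is replaced by gradient-descent optimisation over the same kind of token stream.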

[–] chahk@beehaw.org 16 points 10 months ago (1 children)

It's a glorified autocomplete.
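Literally so. Here's a complete next-word "autocomplete" in a dozen lines of Python, assuming a toy corpus; scaling the word-pair counts up to a trillion learned parameters is, loosely speaking, the modern version:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Tally which word follows which -- a miniature "language model".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# Autocomplete: repeatedly append the most likely next word.
word, out = "the", ["the"]
for _ in range(5):
    word = following[word].most_common(1)[0][0]
    out.append(word)
print(" ".join(out))  # prints: the cat sat on the cat
```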

[–] Phanatik@kbin.social 5 points 10 months ago (1 children)

It's so funny that this is treated as something new. Autocomplete-style suggestions were Grammarly's whole schtick since before ChatGPT, so how different is Grammarly's AI, really?

[–] vexikron@lemmy.zip 4 points 10 months ago (1 children)

Here is the bigger picture: the vast majority of tech-illiterate people think something is AI because, duh, it's called AI.

It's literally just the power of branding and marketing on the minds of poorly informed humans.

Unfortunately this is essentially a reverse Turing Test.

The vast majority of humans know nothing about AI, and a huge share of them can barely tell the difference (at least in some of its current forms) between the output of what is basically brute-force, internet-wide plagiarism-and-synthesis software and content actually created by humans.

To me this basically means that about 99% of the time, most humans are effectively NPCs, doing genuinely creative and unpredictable things only very rarely.

[–] intensely_human@lemm.ee 1 points 10 months ago (3 children)

I call it AI because it’s artificial and it’s intelligent. It’s not that complicated.

The thing we have to remember is how scary and disruptive AI is. That fear makes it uncomfortable to acknowledge that AI is emerging into our world, and the discomfort pushes us to want to ignore it.

It’s called denial, and it’s the best explanation for why people aren’t willing to acknowledge that LLMs are AI.

[–] vexikron@lemmy.zip 3 points 10 months ago

It meets almost none of the conceptions of intelligence at all.

It is not capable of abstraction.

It is capable of brute-forcing similarities across vast amounts of images and text, and then producing a wide array of text and images whose elements reasonably well match a wide array of descriptors.

This is convincing to many people that it has a large knowledge set.

But that is not abstraction.

It is not capable of logic.

Again, it is only capable of brute-force analyzing an astounding amount of content and then producing what is essentially the consensus view on answers to common logical problems.

Ask it any complex logical question that has never been answered on the internet before and it will output irrelevant or inaccurate nonsense, most likely because it found an answer to a similar but not identical question.

The same goes for reasoning, planning, critical thinking and problem solving.

If you ask it to do any of these things in a highly specific situation, even giving it as much information as possible, and your situation is novel or simply too complex, it will again spit out a nonsense answer that is inadequate and faulty. It just draws elements together from the closest things it was trained on, so the result is nearly certain to be contradictory or entirely dubious, because it cannot account for a particularly uncommon constraint, or for constraints that are rarely faced simultaneously.

It is not creative, in the sense of being able to generate something genuinely novel.

All it does is plagiarize elements of things that are popular and well represented in its training data and then attempt to mix them together; it will never generate a new art style or a new genre of music.

It does not even really infer things; it is not genuinely capable of inference.

It simply has a massive, astounding data set, and the ability to synthesize elements from this in a convincing way.

In conclusion, you have no idea what you are talking about, and you yourself are literally one of the people who has failed the reverse Turing Test, likely because you are not very well versed in the technical details of how this stuff actually works. That proves my point: you believe it is AI simply because of its branding, with no critical thought applied whatsoever.

[–] ParsnipWitch@feddit.de 1 points 10 months ago

Current models aren't intelligent, not even by the flimsy and imprecise definition of intelligence we currently have.

I wanted to post a whole rant, but then saw vexikron already did, so I'll spare you xD

[–] vexikron@lemmy.zip 9 points 10 months ago* (last edited 10 months ago) (1 children)

And human comedians regularly get called out when they outright steal others' material and present it as their own.

The word for this is plagiarism.

And within OpenAI's framework, when it's used in a relevant commercial context, they are functionally operating, and profiting off of, the world's most comprehensive plagiarism software.

[–] intensely_human@lemm.ee 1 points 10 months ago

They get called out when they use others' work as a template, not as training data.