this post was submitted on 11 Jan 2024
242 points (100.0% liked)

Technology

Apparently, stealing other people's work to create a product for money is now "fair use" according to OpenAI, because they are "innovating" (stealing). Yeah. Move fast and break things, huh?

"Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials," wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit "misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence."

[–] vexikron@lemmy.zip 3 points 10 months ago

It meets almost none of the accepted conceptions of intelligence.

It is not capable of abstraction.

It is capable of brute-force matching of similarities across vast amounts of images and text, and of then presenting text and images whose elements reasonably well emulate a wide array of descriptors.

This convinces many people that it has a large body of knowledge.

But that is not abstraction.

It is not capable of logic.

It is only capable, again, of brute-force analysis of an astounding amount of content, from which it produces essentially the consensus view on answers to common logical problems.

Ask it any complex logical question that has never been answered on the internet before, and it will output irrelevant or inaccurate nonsense, most likely an answer to a similar but not identical question.
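To make the "consensus, not reasoning" point concrete, here is a deliberately crude toy: a bigram generator that always emits the most frequent continuation seen in its training corpus. This is nothing like a real transformer, and the corpus and function names are made up for illustration, but it shows the failure mode being described: the output is whatever the training data says most often, not the product of logic.

```python
from collections import Counter, defaultdict

def train(corpus):
    # Count, for each word, which words follow it and how often.
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def generate(follows, start, length=5):
    # Greedily emit the single most common continuation -- the "consensus".
    out = [start]
    for _ in range(length):
        if out[-1] not in follows:
            break
        out.append(follows[out[-1]].most_common(1)[0][0])
    return " ".join(out)

corpus = [
    "socrates is a man",
    "socrates is mortal",
    "socrates is a man",
]
model = train(corpus)
print(generate(model, "socrates"))  # -> "socrates is a man" (the majority pattern)
```

Ask this toy anything outside its corpus and it either stalls or parrots the nearest frequent pattern; scale the same statistical idea up by many orders of magnitude and you get outputs that look fluent while still being pattern retrieval.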

The same goes for reasoning, planning, critical thinking and problem solving.

If you ask it to do any of these things in a highly specific situation, even giving it as much information as possible, and your situation is novel or simply too complex, it will again just spit out a nonsense answer. That answer will be inadequate and faulty, because the model just draws elements together from the closest things it has been trained on, and it will almost certainly be contradictory or entirely dubious, since it cannot account for a particularly uncommon constraint, or for constraints that are rarely faced simultaneously.

It is not creative, in the sense of being able to generate something novel or new.

All it does is plagiarize elements of things that are popular and well represented in its training data, and then attempt to mix them together; it will never generate a new art style or a new genre of music.

It does not even really infer things; it is not genuinely capable of inference.

It simply has a massive, astounding data set, and the ability to synthesize elements from this in a convincing way.

In conclusion, you have no idea what you are talking about, and you yourself literally are one of the people who has failed the reverse Turing Test. That is likely because you are not well versed in the technical details of how this stuff actually works, which proves my point: you simply believe it is AI because of its branding, with no critical thought applied whatsoever.