this post was submitted on 13 Sep 2023

Technology

Avram Piltch is the editor in chief of Tom's Hardware, and he's written a thoroughly researched article breaking down the promises and failures of LLM AIs.

[–] FlapKap@feddit.dk 23 points 1 year ago (5 children)

I like the point about LLMs interpolating data while humans extrapolate. I think that sums up a key difference in "learning". It's also an interesting point that we anthropomorphise ML models by using words such as learning or training, but I wonder if there are other, better words to use. Fitting?

[–] RickRussell_CA@beehaw.org 13 points 1 year ago

"Plagiarizing" 😜

[–] amju_wolf@pawb.social 10 points 1 year ago (2 children)

Aren't interpolation and extrapolation effectively the same thing, given a complex enough system?

[–] CanadaPlus@lemmy.sdf.org 2 points 1 year ago

Depending on the geometry of the state space, very literally yes. Think about a sphere: there's a straight line passing from Denver to Guadalajara, roughly hitting Delhi on the way. Is Delhi in between them (interpolation), or behind one as seen from the other (extrapolation)? Kind of both, unless you move the goalposts by adding distance limits on interpolation, which could themselves be broken by another geometry.

[–] maynarkh@feddit.nl 2 points 1 year ago (1 child)

No: repeated extrapolation eventually results in making everything that ever could be made, while constant interpolation would result in creating the same "average" work over and over.

The difference is infinite vs zero variety.
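A toy numeric sketch of that distinction, assuming a simple 1-D linear fit (the example and all names in it are illustrative, not from the thread): interpolation evaluates the fitted model inside the range of the data, while extrapolation pushes it far beyond.

```python
import numpy as np

# Fit a line to two points on y = 2x (hypothetical toy data).
x = np.array([0.0, 1.0])
y = np.array([0.0, 2.0])
slope, intercept = np.polyfit(x, y, 1)  # returns highest-degree coeff first

# Interpolation: evaluate inside the data range [0, 1].
inside = slope * 0.5 + intercept    # 1.0, bounded by the observed data

# Extrapolation: evaluate far outside the data range.
outside = slope * 10.0 + intercept  # 20.0, well beyond anything observed

print(inside, outside)
```

Any query between the data points lands between the observed outputs, which is the "zero variety" sense above; extrapolation has no such bound.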

[–] CanadaPlus@lemmy.sdf.org 1 points 1 year ago* (last edited 1 year ago)

Fun fact: an open interval is topologically isomorphic to the entire number line. In practice they're often different, but you started talking about limits ("eventually"), where that will definitely come up.

[–] brie@beehaw.org 8 points 1 year ago

What about tuning, to align with "finetuning"?

[–] frog@beehaw.org 6 points 1 year ago

I also like the point about interpolation vs extrapolation. It's demonstrated when you look at art history (or the history of any other creative field). Humans don't look at paintings and create something that's predictable based on those paintings. They go "what happens when I take that idea and go even further?" An LLM could never have invented Cubism after looking at Paul Cézanne's paintings, but Pablo Picasso did.

[–] lloram239@feddit.de 2 points 1 year ago

That's not a limitation of ML, just of how it is commonly used. You can take every parameter the neural network recognizes, tweak it, make it bigger or smaller, recombine it with other stuff, and marvel at the results. That's how we got origami porn, (de)cartoonify AI, QR code art, Balenciaga, dancing statues, or my 5-minute attempt at reinventing Cubism (tell the AI to draw cubes over a depth map).