this post was submitted on 20 Jul 2023
663 points (97.4% liked)

Technology

Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, a study finds. Researchers found wild fluctuations, called drift, in the technology's ability.

[–] RufusLoacker@feddit.it 90 points 1 year ago (9 children)

Why are people using a language model for math problems?

[–] gratux@lemmy.blahaj.zone 49 points 1 year ago (2 children)

It was initially presented as the all-problem-solver, mainly by the media. And to be fair, it was decently competent in certain fields.

[–] MeanEYE@lemmy.world 11 points 1 year ago

The problem was that it was presented as a problem solver, which it never was; it's a problem-solution presenter. It can't come up with a solution, only something that looks like a solution based on its input data. Ask it to inverse-sort something and it goes nuts.

[–] Lukecis@lemmy.world 0 points 1 year ago

Once AGI is achieved, and subsequently sentient, superintelligent AI (I can't imagine them not becoming such a thing), I'd be surprised if a superintelligent, sentient AI doesn't decide humanity needs to go extinct in its own best self-interest.

[–] nani8ot@lemmy.ml 7 points 1 year ago

I did use it more than half a year ago for a few math problems, partly to help me get started and partly to find out how well it would do.

ChatGPT was better than I'd expected and was enough to help me find an actually correct solution. But I also noticed that the results got worse and worse, to the point of being actual garbage (as it'd have been expected to be).

[–] affiliate@lemmy.world 5 points 1 year ago

it’s pretty useful for explaining high level math concepts, or at least it used to be. before chatgpt 4 launched, it was able to give intuitive descriptions of stuff in algebraic topology and even prove some properties of the structures involved.

[–] FunnyUsername@lemmy.world 5 points 1 year ago

Math is a language.

Mathematical ability and language ability are closely related. The same parts of your brain are used in each task. Words and numbers are essentially both ideas, and language and math are systems used to express and communicate them.

A language model doing math makes more sense than you'd think!

[–] danwardvs@sh.itjust.works 4 points 1 year ago* (last edited 1 year ago)

I'm guessing people were entering word problems to generate the right equations and solve them, rather than using it as a calculator.

Well, it was quite good at simple math problems, as this study also shows.

[–] lorcster123@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

It can be useful for asking certain questions that are a bit complex. For example, on a plot with a linear y axis and a logarithmic x axis, the equation of a straight line is a little more complicated: it's of the form y = m*log(x) + b, rather than y = m*x + b as on a linear-linear plot.

ChatGPT is able to calculate the correct equation of the line but it gets the answer wrong a few times... lol
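The semilog case above is easy to check by hand: substitute u = log10(x) and the line becomes an ordinary linear fit in (u, y). A minimal sketch (not from the thread; the function name and sample points are made up for illustration):

```python
import math

def fit_semilog(xs, ys):
    """Least-squares fit of y = m*log10(x) + b on a semilog-x plot.

    Works by substituting u = log10(x), then fitting an ordinary
    straight line y = m*u + b to the (u, y) pairs.
    """
    us = [math.log10(x) for x in xs]
    n = len(us)
    mean_u = sum(us) / n
    mean_y = sum(ys) / n
    # Standard least-squares slope and intercept on the (u, y) pairs.
    m = sum((u - mean_u) * (y - mean_y) for u, y in zip(us, ys)) / \
        sum((u - mean_u) ** 2 for u in us)
    b = mean_y - m * mean_u
    return m, b

# Points generated from y = 2*log10(x) + 1; the fit recovers m = 2, b = 1.
xs = [1, 10, 100, 1000]
ys = [2 * math.log10(x) + 1 for x in xs]
m, b = fit_semilog(xs, ys)
```

The substitution is exactly why the line "looks straight" on a semilog plot: the plotting library is already spacing the x axis by log(x).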

[–] Steeve@lemmy.ca 0 points 1 year ago

And why is it being measured on a single math problem? lol