this post was submitted on 25 May 2024
775 points (97.1% liked)


The research from Purdue University, first spotted by news outlet Futurism, was presented earlier this month at the Computer-Human Interaction Conference in Hawaii and looked at 517 programming questions on Stack Overflow that were then fed to ChatGPT.

“Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose,” the new study explained. “Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style.”

Disturbingly, programmers in the study didn’t always catch the mistakes being produced by the AI chatbot.

“However, they also overlooked the misinformation in the ChatGPT answers 39% of the time,” according to the study. “This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.”
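Back-of-envelope, combining those two headline numbers (and assuming, purely for illustration, that they're independent, which the study doesn't claim):

```python
# Rough combination of the study's headline figures.
# Assumes independence between "answer is wrong" and "reader overlooks
# the error", which the study does not claim -- illustrative only.

p_incorrect = 0.52   # share of ChatGPT answers with incorrect information
p_overlooked = 0.39  # share of time participants missed the misinformation

p_wrong_and_missed = p_incorrect * p_overlooked
print(f"~{p_wrong_and_missed:.0%} of answers would be wrong AND go unnoticed")
# ~20% of answers would be wrong AND go unnoticed
```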

[–] Furbag@lemmy.world 9 points 6 months ago* (last edited 6 months ago) (2 children)

People downvote me when I point this out in response to "AI will take our jobs" doomerism.

[–] Leate_Wonceslace@lemmy.dbzer0.com 11 points 6 months ago (5 children)

I mean, AI eventually will take our jobs, and with any luck it'll be a good thing when that happens. Just because ChatGPT v3 (or whatever) isn't up to the task doesn't mean v12 won't be.

[–] NoLifeGaming@lemmy.world 6 points 6 months ago (1 children)

I'm not so sure about the "it'll be good" part. I'd like to imagine a world where people don't have to work because everything is done by robots, but in reality you'll have some companies making trillions while everyone else goes hungry, poor, and homeless.

Yes, that's exactly the scenario we need to avoid. Automated gay space communism would be ideal, but social democracy might do in a pinch. A sufficiently well-designed tax system coupled with a robust welfare system should make the transition survivable, but the danger in making that our goal is that it leaves private firms enough political power to reverse the changes.

[–] Furbag@lemmy.world 4 points 6 months ago

Yes, this is also true. I see things like UBI as an inevitable necessity, because AI and automation in general will eliminate the need for most companies to employ humans. Our capitalist system is set up so that a person can sell their ability to work and provide value to the owner class. If that dynamic is ever challenged on a fundamental level, it will violently collapse: people who can't get jobs because a robot replaced them will either reject automation to preserve the status quo, or embrace a new dynamic that provides for the population's basic needs without requiring them to be productive.

But the way managers talk about AI makes it sound like the techbros have convinced everybody that AI is far more powerful than it currently is: a glorified chatbot with access to unfiltered Google search results.

[–] reksas@sopuli.xyz 4 points 6 months ago

It could be a good thing, but the price for that is making being unemployed okay.

[–] smnwcj@fedia.io 2 points 6 months ago

This begs some reflection: what is a "job", functionally? What would be needed for losing it to be good?

I suspect a system built around jobs would not eradicate jobs, just change them.

[–] assassin_aragorn@lemmy.world 1 points 6 months ago (1 children)

That's if it's possible for AI to reach that level, though. We shouldn't take for granted that it's possible.

I was really humbled when I learned that a cubic mm of human brain matter took over a petabyte to map. It suggests to me that AI is nowhere close to the level you're describing.
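For a rough sense of scale (my assumptions: ~1 PB per cubic mm, as reported, and an adult brain volume of roughly 1.2 million cubic mm):

```python
# Rough scale-up of the brain-mapping data volume -- both inputs are
# approximations: ~1 PB per cubic mm (as reported for the mapped sample)
# and ~1.2e6 cubic mm for an adult human brain.

bytes_per_mm3 = 1e15       # ~1 petabyte per cubic millimetre
brain_volume_mm3 = 1.2e6   # approximate adult brain volume

total_bytes = bytes_per_mm3 * brain_volume_mm3
print(f"~{total_bytes / 1e21:.1f} zettabytes to map a whole brain")
# ~1.2 zettabytes to map a whole brain
```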

[–] Leate_Wonceslace@lemmy.dbzer0.com 1 points 6 months ago* (last edited 6 months ago) (1 children)

> It suggests to me that AI

This is a fallacy. Specifically, I think you're committing the informal fallacy of confusing necessary and sufficient conditions. That is to say, we know that if we can reliably simulate a human brain, then we can make an artificial sophont (this is true by mere definition). However, we have no idea what the minimum hardware requirements are for a sufficiently optimized program that runs a sapient mind. Note: I am setting aside what the definition of sapience is, because if you ask 2 different people you'll get 20 different answers.
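In symbols, the point is roughly this (my notation, not anything formal from the literature):

```latex
% Sufficient vs. necessary, in symbols:
%   Sim => AI       (a working brain simulation suffices for sapience)
%   AI =/=> Sim     (sapience does not require simulating a brain)
% Hence the hardware cost of simulation is an UPPER bound on the
% hardware cost of AI, not a lower bound.
\[
  \text{Sim} \implies \text{AI},
  \qquad
  \text{AI} \not\implies \text{Sim},
  \qquad
  \therefore\; \mathrm{cost}_{\min}(\text{AI}) \le \mathrm{cost}(\text{Sim}).
\]
```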

> We shouldn't take for granted it's possible.

I'm pulling from a couple decades of philosophy and conservative estimates of the upper limits of what's possible, as well as some decently-founded plans on how it's achievable. Suffice it to say, after immersing myself in these discussions for as long as I have, I'm pretty thoroughly convinced that AI is not only possible but likely.

The canonical argument goes something like this: if brains are magic, we cannot say if humanlike AI is possible. If brains are not magic, then we know that natural processes can create sapience. Since natural processes can create sapience, it is extraordinarily unlikely that it will prove impossible to create it artificially.

So with our main premise (AI is possible) cogently established, we need to ask the question: since it's possible, will it be done, and if not, why not? There are a great many advantages to AI, and while there are many risks, the barrier to entry for making progress is shockingly low. We are talking about the potential to create an artificial god, with all the wonders and dangers that implies. It's like a nuclear weapon if you didn't need to source the uranium: everyone wants to have one, and no one wants their enemy to decide what it gets used for.

So everyone has the incentive to build it (it's really useful), and everyone has a very powerful disincentive against forbidding the research (there's no way to stop everyone who wants to build one, so a ban would only stop the people who'd listen, i.e. the ones most likely to make a friendly AI). So what possible scenario would mean strong general AI (let alone the simpler things that'd replace everyone's jobs) never gets developed? The answers range from total societal collapse to extinction, all of which are worse than a bad transition to full automation.
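A toy payoff matrix makes that incentive structure concrete (the payoff numbers are invented purely for illustration):

```python
# Toy 2-player "build or abstain" game. Payoffs are made up, just to
# show why "build" comes out as a dominant strategy in the argument above.

# payoffs[(my_move, their_move)] = my payoff
payoffs = {
    ("build", "build"): 1,      # everyone has one; uneasy balance
    ("build", "abstain"): 3,    # I get the advantage
    ("abstain", "build"): -3,   # my rival decides what it's used for
    ("abstain", "abstain"): 0,  # status quo (hard to enforce)
}

for their_move in ("build", "abstain"):
    best = max(("build", "abstain"), key=lambda me: payoffs[(me, their_move)])
    print(f"If the other side plays {their_move!r}, my best reply is {best!r}")
# "build" is the best reply either way, i.e. a dominant strategy.
```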

So either AI steals everyone's job or something worse happens.

[–] assassin_aragorn@lemmy.world 2 points 6 months ago (1 children)

Thanks for the detailed and thought-provoking response. I stand corrected. I appreciate the depth you went into!

You're welcome! I'm always happy to learn someone re-evaluated their position in light of new information that I provided. 🙂

[–] magic_lobster_party@kbin.run 4 points 6 months ago

Even if AI were able to answer all questions 100% accurately, it wouldn't mean much either way. Most of programming is making adjustments to old code while ensuring nothing breaks. Gonna be a while before AI can do that reliably.
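The day-to-day shape of that work is less "write a function from scratch" and more like this sketch (hypothetical names, just to illustrate the change-plus-regression-test loop):

```python
# Minimal sketch of the "adjust old code without breaking it" workflow.
# All names here are hypothetical -- the point is the shape of the task:
# pin down the current behaviour with tests, then make the change.

def legacy_price(quantity: int, unit_price: float) -> float:
    """Old code: flat pricing, no discounts."""
    return quantity * unit_price

def price_with_bulk_discount(quantity: int, unit_price: float) -> float:
    """The 'adjustment': 10% off orders of 100+ units, while
    preserving the old behaviour for smaller orders."""
    total = quantity * unit_price
    return total * 0.9 if quantity >= 100 else total

# Regression checks: small orders must match the legacy behaviour exactly.
for qty in (1, 50, 99):
    assert price_with_bulk_discount(qty, 2.0) == legacy_price(qty, 2.0)

# The new behaviour is covered too.
assert price_with_bulk_discount(100, 2.0) == 180.0
print("all checks pass")
```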