[–] Boozilla@lemmy.world 24 points 11 months ago (3 children)

Meanwhile, the power grid, traffic controls, and myriad other infrastructure and adjacent internet-connected software will be using AI, if they aren't already.

[–] Lodespawn@aussie.zone 16 points 11 months ago

You have a very high opinion of the level of technology running power grids, traffic systems and other infrastructure in most parts of the world.

[–] the_q@lemmy.world 7 points 11 months ago

I'm pretty sure all of the things you listed run on Pentium 4s.

[–] FaceDeer@kbin.social -5 points 11 months ago (3 children)

None of which can be used to "kill all humans."

Kill a bunch of humans, sure. After which the AI will be shut down and unable to kill any more, and next time we build systems like that we'll be more cautious.

I find it a very common mistake in these sorts of discussions to conflate "kills a bunch of people," or even "destroys civilization," with "kills literally everyone everywhere, forever." Those are wildly different things, and the latter just isn't plausible without assuming all kinds of magical abilities.

[–] subignition@kbin.social 10 points 11 months ago (1 children)

While I appreciate the nitpick, I think it's likely the case that "kills a bunch of people" is also something we want to avoid...

[–] FaceDeer@kbin.social 0 points 11 months ago

Oh, certainly. Humans in general just have a poor sense of scale, and I think it's important to stay aware of these sorts of things. It comes up a lot in environmentalism, where the lack of nuance between "this could kill a few people" and "this could kill everyone" seriously hampers discussion.

[–] DABDA@lemmy.world 5 points 11 months ago (1 children)

> After which the AI will be shut down and unable to kill any more, and next time we build systems like that we’ll be more cautious.

I think that's an overly simplistic assumption if you're dealing with advanced A(G)I systems. Here are a couple of Computerphile videos that discuss potential problems with building in stop buttons: AI "Stop Button" Problem (Piped mirror) and Stop Button Solution? (Piped mirror).

Both videos are from roughly six years ago, so maybe conclusive solutions have been proposed since then that I'm unaware of.
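
A minimal sketch of the incentive problem those videos describe, with entirely invented payoff numbers (nothing here models a real system): a utility maximiser compares what it earns by allowing shutdown against what it earns by finishing its task, and the stop button only "works" in a narrow band.

```python
# Toy expected-utility comparison for the "stop button" problem.
# All payoffs are made up for illustration.

def utility(action: str, task_reward: float, shutdown_reward: float) -> float:
    if action == "allow_shutdown":
        return shutdown_reward  # agent stops and collects only the shutdown reward
    if action == "resist_shutdown":
        return task_reward      # agent keeps running and finishes its task
    raise ValueError(f"unknown action: {action}")

for shutdown_reward in (0.0, 9.0, 50.0):
    best = max(("allow_shutdown", "resist_shutdown"),
               key=lambda a: utility(a, task_reward=10.0,
                                     shutdown_reward=shutdown_reward))
    print(f"shutdown_reward={shutdown_reward:4.1f} -> agent prefers {best}")
```

Reward shutdown too little and the agent fights the button; reward it too much and it actively tries to get itself shut down. Balancing the two is the hard part, which is roughly why the videos treat naive stop buttons as an open problem.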

[–] FaceDeer@kbin.social 3 points 11 months ago* (last edited 11 months ago) (2 children)

We're talking about an AI "without arms and legs", that is, one that's not capable of actually building and maintaining its own infrastructure. If it attacks humanity, it's committing suicide.

And an AGI isn't any cleverer or more capable than a human is; you may be thinking of ASI.

[–] DABDA@lemmy.world 1 points 11 months ago

I would prefer an AI to be dispassionate about its own existence, not one held in check only by the threat that attacking us would be suicide. Even without maintaining its own infrastructure, I can imagine scenarios where merely being able to falsify information is enough to cause catastrophic outcomes. If its "motivation" includes returning favorable values, it might decide against alerting us to dangers that would require taking it offline for repairs, or that would cause distress to humans ("the engineers worked so hard on this water treatment plant, and I don't want to concern them with the failing filters and the growing pathogen count"). I don't think terrible outcomes are guaranteed, or a reason to halt all AI research, but I just can't get behind absolutist claims that there's nothing to worry about if we just X.

Right now, if there's a buggy process, I can tell the process manager to shut it down cleanly; if it hangs, I can force the manager to kill it immediately. Add AI into the mix and there's the possibility that it second-guesses my intentions and ignores or reinterprets those commands too; and if it can't, then the AI element could just be standard conditional programming, and we're just adding unnecessary complexity and points of failure.
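
For comparison, here's a minimal sketch of that non-negotiable kill path as it exists today, using Python's standard subprocess module on a POSIX system (the sleep command is just a stand-in for the buggy process):

```python
import subprocess

proc = subprocess.Popen(["sleep", "60"])  # stand-in for a misbehaving process

proc.terminate()             # polite request to exit cleanly (SIGTERM on POSIX)
try:
    proc.wait(timeout=5)     # give it five seconds to comply
except subprocess.TimeoutExpired:
    proc.kill()              # SIGKILL: can't be caught, ignored, or reinterpreted
    proc.wait()              # reap the process

print("exit status:", proc.returncode)
```

The whole point of the escalation is that the final step leaves the target nothing to second-guess; a smarter layer on top would either honor it (adding nothing) or find ways around it (which is the worry above).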

[–] KevonLooney@lemm.ee -3 points 11 months ago (1 children)

The funny thing is, we already have super intelligent people walking around. Do they manipulate everyone into killing each other? No, because we have basic rules like "murder is bad" or just "fraud is bad".

Super intelligent computers would probably not even bother with people, because they would be created with a purpose like "develop new physics" or "organize these logistics". Smart people are smart enough not to break the rules because the punishment isn't worth it. Smart computers will be finding aliens or something interesting.

[–] JackGreenEarth@lemm.ee 2 points 11 months ago

Wow, I think you need to hear about the paperclip maximiser.

Basically, you tell an AGI to maximise the number of paperclips. Since that is its only goal and it wasn't programmed with human morality, it starts making paperclips, then realises that humans might turn it off, which would be an obstacle to maximising the number of paperclips. So it kills all the humans and turns them into paperclips, turns the whole planet into paperclips, and eventually turns all the universe it can access into paperclips, because when you're working with a superintelligence, a small misalignment of values can be fatal.
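
The thought experiment fits in a few lines of toy code. This is a caricature with made-up names and conversion rates, but it shows the core mechanic: anything left out of the objective has zero value to the optimiser, so converting it is always a "win".

```python
# Caricature of the paperclip maximiser; all names and numbers are invented.
world = {"iron_ore": 10, "cars": 5, "humans": 3}        # to the AGI, just atoms
CLIPS_PER_UNIT = {"iron_ore": 100, "cars": 500, "humans": 50}

def objective(paperclips: int) -> int:
    return paperclips  # no term for humans, cars, or anything else

paperclips = 0
for resource, amount in world.items():
    gain = amount * CLIPS_PER_UNIT[resource]
    if objective(paperclips + gain) > objective(paperclips):  # always true here
        paperclips += gain   # convert the resource...
        world[resource] = 0  # ...and leave nothing behind

print(paperclips, world)  # 3650 {'iron_ore': 0, 'cars': 0, 'humans': 0}
```

The fix isn't to make the optimiser dumber; it's to get the missing terms into the objective before it becomes powerful, which is, roughly, the alignment problem this comment is gesturing at.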

[–] young_broccoli@kbin.social 2 points 11 months ago (1 children)

Until it becomes more intelligent than us, then we are fucked, lol

What worries me more about AI right now is who will be in control of it and how it will be used. I think we have a better chance of destroying ourselves by misusing the technology (as we often do) than of the technology destroying us itself.

[–] Icalasari@kbin.social 3 points 11 months ago* (last edited 11 months ago)

One thing that actually scares me about AI is that we get one chance. And there are plenty of groups that don't think about repercussions, just profit/war/etc.

A group can be as careful as possible, but it doesn't mean shit if their smarter-than-human AI isn't also the first one out, because as soon as it can improve itself, nothing is catching up.

EDIT: This also assumes a group is dumb enough to give it the capability to build its own body. Obviously, one that can't jump to hardware capable of creating and assembling new parts is much less of a threat, as the thread points out.