this post was submitted on 19 Jul 2023
163 points (81.7% liked)

Asklemmy


Climate is fucked, animals keep going extinct, our money will be worth nothing in the coming years... What motivation do I even have to keep going? The world is run and basically owned by corrupt rich people, there's poverty, war, etc. It makes me sick to my stomach the way the world is. So I ask, why bother anymore?

[–] TheFutureIsDelaware@sh.itjust.works 1 points 1 year ago (3 children)

You're at a moment in history where the only two real options are utopia or extinction. There are some things worse than extinction that people also worry about, but let's call it all "extinction" for now. Super-intelligence is coming. It literally can't be stopped at this point. The only question is whether it arrives in 2, 5, or 10 years.

If we don't solve alignment, you die. It is the default. AI alignment is the hardest problem humans have ever tried to solve. Global warming will cause suffering on that timescale, but not extinction. A well-aligned super-intelligence has actual potential to reverse global warming. A misaligned one will mean it doesn't matter.

So, if you care, you should be working in AI alignment. If you don't have the skillset, find something else: https://80000hours.org/

Every single dismissal of AI "doom" is based on wishful thinking and hand-waving.

[–] pfannkuchen_gesicht@lemmy.one 3 points 1 year ago (1 children)

No. Maybe as a short stop on the way to extinction, but absolute and complete extinction ain't a dystopia. And the worse-than-extinction possibilities are more like eternal suffering in a simulator for resisting the AI. That's not quite captured by "dystopia".

[–] tegs_terry 2 points 1 year ago (1 children)

What do you mean by alignment?

[–] TheFutureIsDelaware@sh.itjust.works 2 points 1 year ago (1 children)

AI alignment is a field that attempts to solve the problem of "how do you stop something with the ability to deceive, plan ahead, seek and maintain power, and parallelize itself from just doing that to everything".

https://aisafety.info/

AI alignment is "the problem of building machines which faithfully try to do what we want them to do". An AI is aligned if its actual goals (what it's "trying to do") are close enough to the goals intended by its programmers, its users, or humanity in general. Otherwise, it's misaligned.

The concept of alignment is important because many goals are easy to state in human language but difficult to specify in computer language. As a current example, a self-driving car might have the human-language goal of "travel from point A to point B without crashing". "Crashing" makes sense to a human, but it requires significant detail for a computer. "Touching an object" won't work, because the ground and any potential passengers are objects. "Damaging the vehicle" won't work, because there is a small amount of wear and tear caused by driving. All of these things must be carefully defined for the AI, and the closer those definitions come to the human understanding of "crash", the better the AI is "aligned" to the goal "don't crash".

And even if you successfully do all of that, the resulting AI may still be misaligned, because no part of the human-language goal mentions roads or traffic laws. Pushing this analogy to the extreme case of an artificial general intelligence (AGI), asking a powerful unaligned AGI to e.g. "eradicate cancer" could result in the solution "kill all humans". In the case of a self-driving car, if the first iteration makes mistakes, we can correct it, whereas for an AGI, the first unaligned deployment might be an existential risk.
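To make the self-driving example concrete, here's a toy sketch (purely hypothetical, not from any real system) showing how naive computer-language definitions of "crash" misfire on a perfectly safe trip:

```python
# Two naive attempts to specify the human goal "don't crash".
# Each trip is a list of events with contact and damage readings.

def touched_object(events):
    # Naive spec 1: any physical contact counts as a crash.
    return any(e["contact"] for e in events)

def vehicle_damaged(events):
    # Naive spec 2: any nonzero damage counts as a crash.
    return any(e["damage"] > 0 for e in events)

# A perfectly normal, safe trip:
normal_trip = [
    {"contact": True, "damage": 0.001},  # tires touch the road, tread wears
    {"contact": True, "damage": 0.0},    # passenger sits in the seat
]

print(touched_object(normal_trip))   # True -- flags the safe trip as a crash
print(vehicle_damaged(normal_trip))  # True -- ordinary wear counts as a crash
```

Both specifications label a routine drive a "crash", so an AI optimizing against them diverges from what the humans actually meant. That gap between the stated objective and the intended one is the core of the alignment problem.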

[–] tegs_terry 2 points 1 year ago