this post was submitted on 18 Dec 2023
15 points (100.0% liked)

SneerClub


... while at the same time not really worth worrying about, so we should be concentrating on unnamed alleged mid-term risks.

EY tweets are probably the lowest-effort sneerclub content possible, but the birdsite threw this in my face this morning, so it's only fair you suffer too. Transcript follows:

Andrew Ng wrote:

In AI, the ratio of attention on hypothetical, future, forms of harm to actual, current, realized forms of harm seems out of whack.

Many of the hypothetical forms of harm, like AI "taking over", are based on highly questionable hypotheses about what technology that does not currently exist might do.

Every field should examine both future and current problems. But is there any other engineering discipline where this much attention is on hypothetical problems rather than actual problems?

EY replied:

I think when the near-term harm is massive numbers of young men and women dropping out of the human dating market, and the mid-term harm is the utter extermination of humanity, it makes sense to focus on policies motivated by preventing mid-term harm, if there's even a trade-off.

top 13 comments
[–] swlabr@awful.systems 9 points 9 months ago (1 children)

To answer the original question, it’s as simple as this: the people working on AI are unable to grasp the near-term risks (e.g. deepfakes, labor devaluation, climate change from energy use), so they focus on the fun, sci-fi “long-term” issues.

So then ofc we have yud here on his usual bullshit talking about some made-up problems that only his giant brain can confabulate.

[–] GorillasAreForEating@awful.systems 11 points 9 months ago

No, they're able to grasp the near-term risks; they just don't want that to get in the way of making money, because they know they're unlikely to be affected.

[–] Soyweiser@awful.systems 9 points 9 months ago* (last edited 9 months ago)

I think when the near-term harm is massive numbers of young men and women dropping out of the human dating market

Oh god, he is going blackpiller. Kudos to this season's writers, unexpected twist.

Small story I read about the 'guy who was convinced by his Replika AI gf to try and kill the queen': the AI girlfriend basically went along with stuff like 'that sounds like a great idea', and that was all the convincing it did. The sexting logs which came out when Replika turned off the AI's sex responses (apparently; I have not checked them myself) were also very passive on the AI's side. (For people interested, the Sarah Z YouTube video on it might be interesting, but I have not watched it yet.) These people are not looking for dates, they are looking for slaves and supportive moms; the human dating market will do fine. The only risk is people getting pig-butchered, but we don't need AI for that (and the human slaves they use for that - yeah really, not fun to look that up - are also doing good work, probably better than any AI can do, precisely because they are actually human).

Anyway, I still have the feeling that when a social movement is lagging, it starts looking into dating sites, so I'm not sure how great this is for the future of AI. (I'm joking; clearly this is a different situation involving dating.)

[–] carlitoscohones@awful.systems 8 points 9 months ago

I think that, if you assume the consequent, my slippery slope argument is valid.

[–] dgerard@awful.systems 7 points 9 months ago (1 children)

Way back in the Sequences days, Yudkowsky talked about memetic hazards wiping out Western Civilization; this is an old theme of his.

[–] gerikson@awful.systems 9 points 9 months ago* (last edited 9 months ago) (1 children)

So he wrote that in 2007. Since then, games have only gotten more immersive according to his definition, so people dying of too much gaming should be a massive issue. As far as I know, it is not. People can fuck their lives up in other ways, but arguably straight-up gambling is worse, as it draws off way more real money that could have gone to education, housing, etc.

Yud likes to argue from first principles (obviously), but doesn't reckon on social dynamics. If games were as bad as he describes, there would be regulation around them. Presumably if AI girlfriends become a threat to future pension payments, they will be regulated also.

[–] dgerard@awful.systems 8 points 9 months ago* (last edited 9 months ago) (1 children)

see also the similar deleterious social effects of chess addiction in history (mostly as part of bans on gambling)

[–] gerikson@awful.systems 11 points 9 months ago (1 children)

Their crippling addiction

My worthy pastime

[–] locallynonlinear@awful.systems 7 points 9 months ago (2 children)

Completely unrelated, but every time I see your avatar in the tiny minimized form I see Squidward's face, and then your comments get 20% more amusing.

[–] 200fifty@awful.systems 4 points 9 months ago (1 children)

Oh man, I won't be able to unsee this, lol

[–] gerikson@awful.systems 4 points 9 months ago (1 children)

Me neither. Poor Elden Ring jellyfish!

[–] Soyweiser@awful.systems 4 points 9 months ago

Do you feel attacked? Because now is the time to switch to the red jellyfish.

[–] BernieDoesIt@kbin.social 4 points 9 months ago

Today I learned that gerikson's avatar isn't Squidward.