this post was submitted on 29 Aug 2023
28 points (100.0% liked)

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

[–] TerribleMachines@awful.systems 18 points 1 year ago (2 children)

My worry in 2021 was simply that the TESCREAL bundle of ideologies itself contains all the ingredients needed to “justify,” in the eyes of true believers, extreme measures to “protect” and “preserve” what Bostrom’s colleague, Toby Ord, describes as our “vast and glorious” future among the heavens.

Golly gee, those sure are all the ingredients for white supremacy these folks are playing around with. Good job there are no signs of racism... right, right?!?!

In other news, I find it wild that big Yud has gone on an arc from "I will build an AI to save everyone" to "let's do a domestic terrorism against AI researchers." He should be careful, someone might think this is displaced rage at his own failure to make any kind of intellectual progress while academic AI researchers have passed him by.

(Idk if anyone remembers how salty he was when AlphaGo showed up and crapped all over his "symbolic AI is the only way" mantra, but it's pretty funny to me that the very group of people he used to call incompetent are a "threat" to him now that they're successful. Schoolyard bully stuff and wotnot.)

[–] froztbyte@awful.systems 8 points 1 year ago* (last edited 1 year ago) (2 children)

In other news, I find it wild that big Yud has gone on an arc from "I will build an AI to save everyone" to "let's do a domestic terrorism against AI researchers." He should be careful, someone might think this is displaced rage at his own failure to make any kind of intellectual progress while academic AI researchers have passed him by.

disclaimer/framing: the 'ole yudster only came to my attention fairly recently, so the following is observation/speculation (and I'll need some more evidence/visibility to see if the guess pans out)

a few years ago I happened to deal with someone who was a hell of a grifter - in intensity, scope, and impact. it was primarily through that experience that I gained a handle on a number of things that've served me well in spotting the same patterns elsewhere. some things I've been observing under that light:

  1. he's clearly talking out of his ass almost all the time
  2. shell game applies
  3. I think 'ole yuddy is aware that he's not as clever as he claims he is, and is very salty about that[0]


(1) and (2) mean he has to continuously keep ahead of the marks ^W rats. the guy is fairly clearly widely read/informed, and can manage to deal with some amount of complexity[1] in concepts. but because of (3), he can never be as right as he wants to be, so he has to keep pivoting the grift to a new base before he gets egg on his face. his method for doing this is "abandon all hope", but practically it's an attempt to retcon history, and likely if anyone tried to really engage him on it he'd get ragey and accuse them of working from "outdated information" or some other shit (because lol who needs to acknowledge their own past actions amirite)[2]

[0] - this is a guess from my side, but all his "imagine a world in which einstein wasn't exceptional, because there's many of them" shit comes across to me that way. anyone else?

[1] - not very well, of course, this is why the multi-million word vomits exist, but "some".

[2] - this is something I've seen with narcissists a lot - they can never be wrong, and "making them" be wrong (i.e. simply providing proof of past actions/statements) gets them going nuclear

[–] TerribleMachines@awful.systems 5 points 1 year ago (1 children)

My perspective is a little different (from having met him): I think he genuinely believed a lot of what he said, at one point at least... but you're pretty much spot on in all the ways that matter. He's a really bad person, of the "should probably be in jail for crimes" kind.

[–] froztbyte@awful.systems 4 points 1 year ago (1 children)

The line between “actually believes $x” and “appears to actually believe $x” can be made heeeeeella fuzzy (and people in that space take advantage of that)

Curious about the latter half of your remarks. Is that opinion, or something grounded in other knowledge that isn’t widely known yet?

[–] TerribleMachines@awful.systems 8 points 1 year ago (1 children)

Good point with the line! Some of the best liars are good at pretending to themselves they believe something.

I don't think it's widely known, but it is known (old sneerclub posts about it somewhere), that he used to feed the people he was dating LSD and try to convince them they "depended" on him.

First time I met him, in a professional setting, he had his (at the time) wife kneeling at his feet wearing a collar.

Do I have hard proof he's a criminal? Probably not, at least not without digging. Do I think he is? Almost certainly.

[–] self@awful.systems 7 points 1 year ago (2 children)

First time I met him, in a professional setting, he had his (at the time) wife kneeling at his feet wearing a collar.

hold on, you can’t just write this paragraph and then continue on as if it’s not a whole damn thing

ah yes the first time I met yud he non-consensually involved me in his bondage play with his wife (which he somehow incorporated into a business meeting)

[–] TerribleMachines@awful.systems 7 points 1 year ago (1 children)

😅 honestly I don't know what else to say, the memory haunts me to this day. I think that was the point when I went from "huh, the rats make weirdly dumb mistakes considering they've made posts about exactly these kinds of errors" to "wait, there's something really sinister going on here"

[–] earthquake@lemm.ee 3 points 1 year ago* (last edited 1 year ago) (1 children)

Can you say where and when this happened without doxxing yourself? Was anyone else around while he and his wife were doing this?

[–] TerribleMachines@awful.systems 1 year ago

Best not to, for exactly that reason, but I know I wasn't the only one who experienced it, by any means!

[–] cstross@wandering.shop 5 points 1 year ago (1 children)

@self Oh, that's like the time I met Young Moldbug at his student house and his first words were, "let me show you the lizard room!"

He was so proud of his room full of giant lizards (and the odd snake).

So proud.

[–] dgerard@awful.systems 6 points 1 year ago

that's literally the most endearing and human thing I've ever heard about Yarvin

[–] BrickedKeyboard@awful.systems -2 points 1 year ago

Personally I imagine him as the leader of a flying saucer cult where an alien vehicle is suddenly, actually arriving. He's running around panicking, tearing his hair out, because this wasn't what he planned - he just wanted money and bitches as a cult leader. It's one thing to say the aliens will beam every cult member up and take them to paradise, but if you see a multi-kilometer alien vehicle actually approaching earth, whatever its intentions, no one is getting taken to paradise...

[–] BrickedKeyboard@awful.systems 0 points 1 year ago* (last edited 1 year ago) (1 children)

academic AI researchers have passed him by.

Just to be pedantic, it wasn't academic AI researchers. The current era of AI began here: https://www.npr.org/2012/06/26/155792609/a-massive-google-network-learns-to-identify

Since 2012, academic AI researchers have not had the compute hardware to contribute to frontier AI research, except for some who worked at corporate giants (mostly DeepMind) and then went back into academia.

They are getting more hardware now, but the hardware required to stay relevant and to develop a capability that commercial models don't already have keeps increasing. Table stakes are now something like 10,000 H100s, or about $250-500 million in hardware.

https://www.semianalysis.com/p/google-gemini-eats-the-world-gemini
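
(For scale, a quick back-of-the-envelope sketch of where that figure comes from. The per-unit price range is my assumption - roughly $25k-50k per H100 depending on volume and configuration - not a number from the linked article.)

```python
# Back-of-the-envelope: what "10,000 H100s" costs in hardware alone.
# Unit prices are assumptions (roughly list price down to volume
# pricing); networking, power, and datacenter buildout are excluded.
num_gpus = 10_000
price_low, price_high = 25_000, 50_000  # assumed USD per H100

low = num_gpus * price_low    # 250,000,000
high = num_gpus * price_high  # 500,000,000
print(f"${low:,} to ${high:,}")  # $250,000,000 to $500,000,000
```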

I am not sure MIRI tried any meaningful computational experiments. They came up with unrunnable algorithms that theoretically might work but would need nearly infinite compute.
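
(To make "unrunnable" concrete: the usual example is Solomonoff-induction-flavored prediction, the family of ideal-but-uncomputable algorithms that AIXI-style theorizing leans on. Here's a toy sketch - entirely my own construction, not anything from MIRI - where a deliberately tiny "program" space of repeating bit-patterns stands in for all Turing machines. Even this toy doubles in cost with every extra bit of allowed program length; the ideal version enumerates infinitely many programs.)

```python
from itertools import product

# Toy illustration of why Solomonoff-style induction is unrunnable:
# predict the next bit of a sequence by enumerating EVERY "program"
# (here: every bitstring up to max_len, read as a repeating pattern)
# and weighting each consistent one by 2**-length. The real thing
# enumerates all Turing machines, so the hypothesis space - and the
# runtime - is unbounded.

def toy_solomonoff_predict(observed: str, max_len: int = 12) -> float:
    """P(next bit is '1' | observed), marginalized over all patterns."""
    weight_one = weight_total = 0.0
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            pattern = "".join(bits)
            # Extend the repeating pattern past the observed prefix.
            stream = pattern * (len(observed) // length + 2)
            if stream.startswith(observed):
                w = 2.0 ** -length  # shorter programs get more prior mass
                weight_total += w
                if stream[len(observed)] == "1":
                    weight_one += w
    return weight_one / weight_total if weight_total else 0.5

print(toy_solomonoff_predict("101010"))  # ~0.89: "10" repeating dominates
```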

[–] TerribleMachines@awful.systems 9 points 1 year ago (1 children)

As you were being pedantic, allow me to be pedantic in return.

Admittedly, you might know something I don't, but I would describe Andrew Ng as an academic. These kinds of industry partnerships, like the one in that article you referred to, are really, really common in academia. In fact, it's how a lot of our research gets done. We can't do research if we don't have funding, and so a big part of being an academic is persuading companies to work with you.

Sometimes companies really, really want to work with you, and sometimes you've got to provide them with a decent value proposition. This isn't just AI research either, but very common in statistics, as well as biological sciences, physics, chemistry, well, you get the idea. Not quite the same situation in humanities, but eh, I'm in STEM.

Now, in terms of universities having the hardware: certainly these days there is no way a university will have even close to the compute power that a large company like Google has access to. Though even back in 2012 (and well before), universities had supercomputers; it was pretty common to have a resident supercomputer that you'd use. For me (my background's originally in physics), back then we had a supercomputer in our department, the only one at the university, and people from other departments would occasionally ask to run stuff on it. A simpler time.

It's less that universities don't have access to that compute power; it's more that they just don't run server farms. So we pay Google or Amazon and so on for it, like everyone in the corporate world (except, of course, the companies that run those servers, who still have to cover costs and lost revenue). Sometimes that's subsidized by working with a big tech company, but it isn't always.

I'm not even going to get into the history of AI/ML algorithms and the role of academic contributions there, and I don't claim that the industry played no role; but the narrative that all these advancements are corporate just ain't true, compute power or no. We just don't shout so loud or build as many "products."

Yeah, you're absolutely right that MIRI didn't try any meaningful computational experiments that I've seen. As far as I can tell, their research record is... well, staring at ceilings and thinking up vacuous problems. I actually once (back when I flirted with the cult) went to a seminar that big Yud himself delivered, and he spent the whole time talking about qualia; when someone asked him if he could describe a research project he was actively working on, he refused to, on the basis that it was "too important to share."

"Too important to share"! I've honestly never met an academic who doesn't want to talk about their work. Big Yud is a big let down.

[–] blakestacey@awful.systems 8 points 1 year ago (1 children)

A joke I heard in the last century: Give a professor a nickel and they'll talk for an hour. Give 'em a quarter and you'll be in real trouble.

It's true, I'm terrible for it myself 😅