this post was submitted on 04 Apr 2024
24 points (100.0% liked)

SneerClub

A while back, I set myself the project of figuring out how much of the MIT undergrad physics curriculum could be taught from free online books. The answer, so far, is more than I had anticipated but much less than what we deserve. But working on that, along with a few other conversations, has got me to wondering. We've seen TESCREAL types be just plain wrong about science many times over the years. Harry Potter and the Methods of Rationality botches Punnett squares and pretty much everything more advanced than that. LessWrong demonstrably has no filter against old-school math crankery. The (ahem) leading light of "effective accelerationism" just plays Mad Libs with physics words. Yudkowsky's declarations about organic chemistry boggle the educated mind. They even manage to be weird about theoretical computer science — what we might call the "lambda calculus is super-Turing!" school of TESCREAL.

Sometimes, the difference between a TESCREAL understanding of science and a legitimate one comes from having studied the subject in a formal way. But not every aspiring autodidact with an interest in molecular biology or the theoretical limits of computation is a lost cause!

So, then: What books come down upon the superficial TESCREAL version of cool things like a ton of scientific bricks? What are the texts that one withdraws from an inside coat pocket and slides across the table, saying "This here is the good shit"?

[–] JohnBierce@awful.systems 7 points 7 months ago (2 children)
  • The Structure of Scientific Revolutions, by Thomas Kuhn

A bit of philosophy of science is useful immunization against Rationalist bullshit. Maybe not on its own, but it helps.

[–] YouKnowWhoTheFuckIAM@awful.systems 12 points 7 months ago* (last edited 7 months ago) (2 children)

My graduate degree was in philosophy of science, and I wouldn’t suggest Kuhn or, indeed, much philosophy of science as a salve for this particular problem. For much of the 20th century, the philosophy of science primarily theorised about two main sets of data: (1) idealised physics, which is to say the “final” theories of physics; (2) historical case studies, which is to say the experimental and theoretical debates which produced those theories. These are two distinct strands of research (Kuhn belongs to, and plays an important role in introducing, the second), but perspicacious observers will note that neither of them deals with people who get science wrong; rather, they deal with either what counts as “scientific knowledge”, or how it is that scientific “knowledge” is produced.

Now, understanding a little better how scientific knowledge is produced, or even that it is produced at all, could be a preliminary inoculation against behaving as if it is intuited, Yudkowsky-style, as if given by a beam of pink energy from the future. Or, in a twist of which many Kuhn readers have fallen afoul, it can be the radicalisation of a would-be “paradigmatic” thinker, who therefore learns that “normal” scientific knowledge is always local, partial, and primarily intended for the NPC types who populate laboratories. If I wanted to turn somebody with the quintessential rationalist personality into a monstrous basilisk-wraith, I would give them Kuhn.

I’m not one for delivering the usual bromides against Kuhn’s supposed sloppiness (I think the treatment he has received has been selective and unkind), but there are also better, more recent works in the same vein (and, naturally, Feyerabend did Kuhn better anyway). If I wanted to give somebody “the good shit” from philosophy of science, I would give them Nancy Cartwright, Ian Hacking, and Bas van Fraassen. But the problem remains - how do I explain to these people that they aren’t participating in scientific discourse at all? - after all, as we get more and more recent, even the very moderate non-objectivisms of Cartwright, Hacking, van Fraassen et al. become diluted as, in practical terms, much of philosophy of science converges on the project of once again reifying a now-complicated picture of scientific knowledge in the teeth of perceived worries about its objectivity.

Why is this a problem? Well the pragmatic image of science with which your rationalist is liable to come away from these texts is one in which the body of the whole thing is incredibly complex and everything has its role, including that of the rationalist. With Kuhn we will have deepened their appreciation of their own importance, and with the non-objectivists we will have challenged their STEMacism only to supply their project with an undeserved aura of validity!

(I here leave out the really technical stuff, naturally. Much of philosophy of science is concerned with resolving particular puzzles in particular areas. This is of course a lot more difficult and more worth doing than any grand project we might have in mind, but it can’t help the people we’re discussing.)

Only the hardcore realists remain, but what do they have to offer? Idealised physical models! This simply cannot help us at all.

Hell, if they’re anything like a gamut of arseholes I’ve run into over the years, at least a few of them proudly trumpet that back at the turn of the century Bruno Latour was expressing regret about the critical project in STS, and that it’s the only thing of his they’ve ever read.

The great demarcatory projects are, mostly, a thing of the past, but really this is what we need. Problematically, for the last 50 years it has been widely agreed that they were wrong, and that there was no real standard of demarcation between “science” and other modes of thought. Nonetheless, and ignoring that there is one good Popperian still alive to do it, we can’t use Popper - that’s absurdly dangerous territory - but we do have Lakatos.

Now that’s an idea I could have put at the top. We have to ignore that, as before, people don’t really believe in “degenerating research programmes” anymore (although perhaps philosophy of science is just a little too close to science to say so). But you know what? Fuck it. Make them read Lakatos.

But it won’t help, because their research programme is almost tailor-made to outrun scientific testing. Along with history of science, which I advocate because it shows science in its particulars, the real solution is to starve the cult of oxygen. It’s an attritional war of pointing out that this is bullshit in its particulars.

[–] titotal@awful.systems 13 points 7 months ago (1 children)

The committed Rationalists often point out the flaws in science as currently practiced: the p-hacking, the financial incentives, etc. Feeding them more data about where science goes awry will only make them more smug.

The real problem with the Rationalists is that they *think they can do better*, that knowing a few cognitive fallacies and logic tricks will make you better than the doctors at medicine, better than the quantum physicists at quantum physics, etc.

We need to explain that yes, science has its flaws, but it still shits all over pseudo-Bayesianism.

[–] YouKnowWhoTheFuckIAM@awful.systems 12 points 7 months ago (1 children)

Well this is where I was going with Lakatos. Among the large scale conceptual issues with rationalist thinking is that there isn’t any understanding of what would count as a degenerating research programme. In this sense rationalism is a perfect product of the internet era: there are far too many conjectures being thrown out and adopted at scale on grounds of intuition for any effective reality-testing to take place. Moreover, since many of these conjectures are social, or about habits of mind, and the rationalists shape their own social world and their habits of mind according to those conjectures, the research programme(s) they develop is (/are) constantly tested, but only according to rationalist rules. And, as when the millenarian cult has to figure out what its leader got wrong about the date of the apocalypse, when the world really gets in the way it only serves as an impetus to refine the existing body of ideas still further, according to the same set of rules.

Indeed, the success of LLMs illustrates another problem with making your own world, for which I’m going to cheerfully borrow the term “hyperstition” from the sort of cultural theorists of whom I’m usually wary. “Hyperstition” is, roughly speaking, where something which otherwise belongs to imagination is manifested in the real world by culture. LLMs (like Elon Musk’s projects) are a good example of hyperstition gone awry: rationalist AI science fiction manifested an AI programme in the real world, and hence immediately supplied the rationalists with all the proof they needed that their predictions were correct in general, if not in exact detail.

But absent the hyperstitional aspect, LLMs would have been much easier to spot as by and large a fraudulent cover for mass data-theft and the suppression of labour. Certainly they don’t work as artificial intelligence, and the stuff that does work (I’m thinking radiology, although who knows when the big news is going to come out that that isn’t all it’s been cracked up to be), i.e. transformers and unbelievable energy-spend on data-processing, doesn’t even superficially resemble “intelligence”. With a sensitive critical eye, and an open environment for thought, this should have been, from early on, easily sufficient evidence, alongside the brute mechanicality of the linguistic output of ChatGPT, to realise that the prognostic tools the rationalists were using lacked either predictive or explanatory power.

But rationalist thought had shaped the reality against which these prognoses were supposed to be tested, and we are still dealing with people committed to the thesis that skynet is, for better or worse, getting closer every day.

Lakatos’s thesis about degenerating research programmes asks us to predict novel evidence and look for corroborative evidence. The rationalist programme does exactly the opposite. It predicts corroborative evidence, and looks for novel evidence which it can feed back into its pseudo-Bayesian calculator. The novel evidence is used to refine the theory, and the predictions are used to corroborate a (foregone) interpretation of what the facts are going to tell us.

Now, I would say, more or less with Lakatos, that this isn’t an amazingly hard and fast rule, and it’s subject to different interpretations. But it’s a useful tool for analysing what’s happening when you’re trying to build a way of thinking about the world. The pseudo-Bayesian tools, insofar as they have any impact at all, almost inevitably drag the project into degeneration, because they have no tool for assessing whether the “hard core” of their programme can be borne out by facts.

[–] dgerard@awful.systems 6 points 7 months ago (2 children)

(I’m thinking radiology, although who knows when the big news is going to come out that that isn’t all it’s been cracked up to be)

yes, this is a specific area i have a note to self to look into

[–] mountainriver@awful.systems 5 points 7 months ago

From what I have read, it can be a support as long as:

  • It is trained on local data, from the machine and procedures normally used.
  • The accuracy is regularly tested (because any variation, whether in equipment or procedures, changes the input data).
  • It is understood as a tool that gives suggestions for the radiologist, not a replacement.

Of course, it cannot be better than the best radiologists around. So the question is whether it is worth it, compared with, for example, hiring more staff.

[–] hairyvisionary@fosstodon.org 2 points 7 months ago (1 children)

@dgerard @YouKnowWhoTheFuckIAM About a decade ago I was working with (kinda sorta) a guy who wanted to do a start-em-up that would involve machine recognition of situations from electrocardiograph recordings, in real time, so as to give the cardio outpatient early warning that they should call for help. At that time the buzzword was Machine Learning, but I also looked and found the published research to be voluminous and ongoing for some decades.

[–] hairyvisionary@fosstodon.org 3 points 7 months ago

@dgerard @YouKnowWhoTheFuckIAM But the most interesting thing I found was the flash cards. You see, we've been training meat-based neural networks to do this for a while. Now I wonder what I would find if I looked into radiology.

[–] JohnBierce@awful.systems 2 points 6 months ago

Oh geez, just saw this response, feel really bad I missed it - you put a ton of effort into it! (And I'm overwhelmed with work right now, so I can't reply in the depth it deserves, alas!)

In short, though: Your arguments largely make sense to me, and I'm reasonably persuaded by them! I also think Kuhn has been treated worse than he deserves - yes, others have surpassed him since, but few of them are as approachable to laymen as he is, and that's worth something, imho. (I'm also kinder to Jared Diamond than many folks for similar reasons. Yeah, he fucked a lot of stuff up, but he got a lot of laymen - including me, before I started my studies in geology - interested in environmental history, so at the very least he deserves that nod.) And I'd agree that Feyerabend did better than Kuhn! (Maybe not on layman approachability, but he's not that much tougher than Kuhn - I certainly had no trouble, and I'm a dilettante in philosophy of science.)

Wish I had time for a longer (and very belated) reply, but thanks for the great response!

And is the "beam of pink energy from the future" a reference to Philip K. Dick's VALIS, by any chance?

[–] gerikson@awful.systems 7 points 7 months ago (1 children)

Reading a bit about the history of science is good too. For some reason, TESCREAL types are like the Whig historians: science is a constant march towards this, the best of all possible worlds.

I read a small monograph years ago about the history of plate tectonics, and it was clear to me that, far from being deluded Bible-huggers, the people whose ideas preceded the modern understanding of how continents form were grappling with the evidence as they saw it.

Also this overview of "dying sun" SF points out that in the late 19th/early 20th century, what powered the Sun was entirely unknown! https://www.typebarmagazine.com/2024/03/24/science-fiction-and-the-death-of-the-sun/ [1]

Considering that much TESCREAL discourse is less about science and more about science fiction, maybe the focus should be on pointing out the many ways in which SF tech goes wrong...


[1] as an aside, I got that link from HN, and the discussions are typically shallow, like most HN discussions about SF https://news.ycombinator.com/item?id=39911155

[–] JohnBierce@awful.systems 4 points 6 months ago (1 children)

Super late response (sorry!), but yeah, history of science is great stuff. And your point about TESCREALS engaging with science fiction over science is entirely spot-on. (Which was me as a teenager. There but for the grace of god go I...)

Btw, if you want to read a FANTASTIC book dealing with people grappling with plate tectonics, John McPhee's Pulitzer-winning Annals of the Former World spans literal decades of interviews with geologists, and you get to start with geologists being deeply skeptical of this newfangled plate tectonics (not dismissive, but not convinced of the breadth of its explanatory power) and watch it become fully accepted science over the course of the book.

[–] gerikson@awful.systems 5 points 6 months ago (1 children)

I have listened to the entirety of John McPhee's geological books as audiobooks, which is more entertaining than it sounds.

I think the concept of geological Deep Time is very humbling, and it kind of grounds the human condition in a weird way.

[–] JohnBierce@awful.systems 3 points 6 months ago (1 children)

John McPhee's so goddamn good, one of the best nonfiction writers out there. The absolute master of nonfiction narrative structure, imho.

And yeah, Deep Time is... a hell of a trip.