this post was submitted on 04 Jun 2024
28 points (100.0% liked)
SneerClub
989 readers
59 users here now
Hurling ordure at the TREACLES, especially those closely related to LessWrong.
AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)
This is sneer club, not debate club. Unless it's amusing debate.
[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]
founded 1 year ago
MODERATORS
For once I would just like to see an explanation from the AI doomers of how, given the limited capacities of Turing-style machines and P != NP (assuming it holds; if it doesn't, the limited-capacities argument falls apart, but then we don't need AI for stuff to go to shit, since P = NP probably breaks a lot of encryption methods), AGI can be an existential risk. By definition it cannot surpass the limits of Turing machines via any of the proposed hypercomputational methods (because then Turing machines would be hyper-Turing and the whole classification structure comes crashing down).
I'm not a smart computer scientist myself (though I did learn some of the theory, as evidenced above), but I'm constantly amazed at how our hyper-hyped tech scene seems not to know that our computing paradigm has fundamental limits. Everything touched by Musk has this problem in the extreme: capacity problems in Starlink, Shannon-theoretically impossible compression demands for Neuralink, and everything related to his Tesla/AI autonomous driving/robots thing. (To further make this an anti-Musk rant: he also claimed AI would solve chess. Solving chess is a computational problem (it has been done for a 7x7 board, iirc); it just costs a lot of computation time (more than we have). If AI could solve chess, it would sidestep that time, making it a super-Turing thing, which makes Turing machines super-Turing. I also can't believe that of all the theoretical hypercomputing methods, we are going with the oracle method (the machine just conjures up the right answer, no idea how), the one I have always mocked personally. That is theoretically impossible and would have massive implications for all of computer science. Sorry, rant over.)
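To put a number on the chess point: the usual back-of-the-envelope estimate (Shannon's, which assumes roughly 35 legal moves per position and games of about 80 plies) already dwarfs any conceivable compute budget. A two-line sketch, with those figures as stated assumptions:

```python
# Shannon-style estimate of the chess game tree.
# Assumed figures: ~35 legal moves per position, ~80 plies per game.
branching_factor = 35
plies = 80
game_tree_size = branching_factor ** plies  # on the order of 10**123

print(f"~10^{len(str(game_tree_size)) - 1} games to brute-force")
```

Even at a fanciful 10^18 positions per second, that is around 10^105 seconds of work; the universe is roughly 4 x 10^17 seconds old. No "IQ" sidesteps that arithmetic, which is the whole point.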
Anyway, these people are not engineers or computer scientists; they are bad science fiction writers. Sorry for the slightly unrelated rant, it had been stuck as a splinter in my mind for a while now. And I guess that typing it out and 'telling it to earth' like this makes me feel less ranty about it.
E: of course the fundamental limits apply to both sides of the argument, so both the 'AGI will kill the world' shit and the 'AGI will bring us to a posthuman utopia of a googol humans in post-scarcity' shit seem unlikely. Unprecedented benefits? No. (Also, I'm ignoring physical limits here, a secondary problem that would severely limit the singularity even if P = NP.)
E2: looks at title of OPs post, looks at my post. Shit, the loons ARE at it again.
No, they never address this. And as someone who works on large-scale optimization problems for a living, I do think it's difficult for the public to understand that no, a 10000-IQ super machine will not be able to just "solve these problems" in a nanosecond like Yud thinks. And it's not as if the super machine can simply avoid having to solve them. No. NP-hard problems are fucking everywhere. (Fun fact: for many problems of interest, even approximating the solution to a given accuracy is NP-hard, so heuristics don't even help.)
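A toy illustration of the blowup, using subset sum (a classic NP-hard problem; the instance and numbers below are mine, just for demonstration):

```python
from itertools import combinations

def subset_sum_bruteforce(nums, target):
    """Exhaustive search: examines up to 2**len(nums) subsets."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

# Fine for 6 numbers (2**6 = 64 subsets to check)...
result = subset_sum_bruteforce([3, 34, 4, 12, 5, 2], 9)
print(result, sum(result))  # a subset summing to 9

# ...but each extra number doubles the worst case: 100 numbers
# already means up to 2**100 (about 1.3e30) subsets, "IQ" be damned.
print(2 ** 100)
```

Clever solvers prune enormously in practice, but (assuming P != NP) the worst case stays exponential no matter who, or what, is running the search.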
I've often found myself frustrated that more computer scientists, who should know better, simply do not address this point. If verifying solutions is exponentially easier than coming up with them for many difficult problems (all signs point to yes), and if a superintelligent entity actually did exist (I mean, does a SAT solver count as a superintelligent entity?), it would probably be EASY to control, since it would have to spend eons and massive amounts of energy coming up with its WORLD_DOMINATION_PLAN.exe. You wouldn't be able to hide a supercomputer doing that massive calculation, and someone running the machine, seeing it output TURN ALL HUMANS INTO PAPER CLIPS, would say, 'ah, we are missing a constraint here; it thinks this optimization problem is unbounded'. This happens literally all the time in practice. Not the world-domination part, but a poorly defined optimization problem that is unbounded. And again, it's easy to check that the solution is nonsense.
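The "missing constraint means unbounded" failure mode is easy to demonstrate even without a real solver. A deliberately crude 1-D sketch (everything here is hypothetical and for illustration only, not any real solver's API): drop the capacity constraint from a toy profit model and the "optimum" runs off to infinity, which is trivial to detect and flag as a modeling bug.

```python
def maximize_1d(objective, feasible, x0=1.0, blowup=1e12):
    """Crude 1-D climb: keep doubling x while feasible and improving.
    If the objective sails past `blowup`, report the problem as unbounded;
    in practice that almost always means a constraint is missing."""
    x = x0
    while feasible(2 * x) and objective(2 * x) > objective(x):
        x *= 2
        if objective(x) > blowup:
            return "unbounded", x
    return "optimal", x

# Intended model: profit = 5 * units, subject to units <= capacity.
# With the capacity constraint accidentally dropped, only units >= 0 remains:
status, _ = maximize_1d(lambda u: 5 * u, lambda u: u >= 0)
print(status)  # the sanity check fires; a human fixes the model

# With the constraint in place, the climb stops at capacity:
status, best = maximize_1d(lambda u: 5 * u, lambda u: 0 <= u <= 1000)
print(status, best)
```

Real solvers (LP/MIP codes) report unboundedness the same way, as a status code, and nobody mistakes it for an instruction to liquidate humanity.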
I know Francois Chollet (THE GOAT) has talked about how there are no unending exponentials: the faster the growth, the faster you hit constraints IRL (running out of data, running out of chips, running out of energy, etc.). And I've definitely heard professional shitposter Pedro Domingos explicitly discuss how NP-hardness strongly implies EA/LW-type thinking is straight-up fantasy. But it's a short list of people I can think of off the top of my head who have discussed this.
Edit: bizarrely, one person I didn't mention who has gone down this line of thinking is Ilya Sutskever; however, he has come to some frankly... uh... strange conclusions: the only way to explain the successful performance of ML, he argues, is to conclude that neural nets are Kolmogorov minimizers, i.e., by optimizing for loss over a training set you are doing compression, which done optimally means solving an undecidable problem. Nice theory. Definitely not motivated by bad sci-fi mysticism imbued with pure distilled hopium. But from my armchair-psychologist POV, it seems he implicitly acknowledges that for his fantasy to come true he needs to escape the limitations of Turing machines, so he has to somehow shoehorn a method for hypercomputation into Turing machines. Smh, this is the kind of behavior reserved for aging physicists, amirite lads? Yet in 2023 it seemed like the whole world was succumbing to this gaslighting. He was giving this lecture to auditoriums filled with tech bros, shilling this line of thinking to thunderous applause. I have olde CS prof friends who were like: don't we literally have mountains of evidence this is straight-up crazy talk? Like, you can train an ANN to perform addition, and if you can look me straight in the eyes and say the absolute mess of weights that results looks anything like a Kolmogorov minimizer, then I know you are trying to sell me a bag of shit.
"Computational complexity does not work that way!" is one of those TESCREAL-zone topics that I wish I had better reading recommendations for.
Bah Gawd! That man has a family!
Oh god, I'm not alone in thinking this, thank you! I'm not going totally crazy!
I got you homie
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀