this post was submitted on 17 Sep 2023

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


the writer Nina Illingworth, whose work has been a constant source of inspiration, posted this excellent analysis of the reality of the AI bubble on Mastodon (featuring a shout-out to the recent articles on the subject from Amy Castor and @dgerard@awful.systems):

Naw, I figured it out; they absolutely don't care if AI doesn't work.

They really don't. They're pot-committed; these dudes aren't tech pioneers, they're money muppets playing the bubble game. They are invested in increasing the valuation of their investments and cashing out; it's literally a massive scam. Reading a bunch of stuff by Amy Castor and David Gerard finally got me there in terms of understanding it's not real and they don't care. From there it was pretty easy to apply a historical analysis of the last 10 bubbles, who profited, at which point in the cycle, and where the real money was made.

The plan is more or less to foist AI on establishment actors who don't know their ass from their elbow, causing investment valuations to soar, and then cash the fuck out before anyone really realizes it's total gibberish and unlikely to get better at the rate and speed they were promised.

Particularly in the media, it's all about adoption and cashing out, not actually replacing media. Nobody making decisions and investments here particularly wants an informed populace, after all.

the linked mastodon thread also has a very interesting post from an AI skeptic who used to work at Microsoft and seems to have gotten laid off for their skepticism

[–] fasterandworse@awful.systems 14 points 1 year ago* (last edited 1 year ago) (2 children)

I’ve got this absolutely massive draft document where I’ve tried to articulate what this person explains in a few sentences. The gradual removal of immediate purpose from products has become deliberate. This combination of conceptual solutions to conceptual problems gives the business a free pass from any kind of distinct accountability. It is a product that has potential to have potential. AI seems to achieve this better than anything ever before. Crypto is good at it but it stumbles at the cash-out point so it has to keep cycling through suckers. AI can just keep chugging along on being “powerful” for everything and nothing in particular, and keep becoming more powerful, without any clear benchmark of progress.

Edit: just uploaded this clip of Ralph Nader in 1971 talking about the frustration of being told of benefits that you can’t really grasp https://youtu.be/CimXZJLW_KI

[–] dgerard@awful.systems 8 points 1 year ago (1 children)

this is also the marketing for quantum computing. Yes, there is a big money market for quantum computers in 2023. They still can't reliably factor 35.

[–] fasterandworse@awful.systems 8 points 1 year ago* (last edited 1 year ago) (1 children)

shit, I forgot about quantum computing. If you don't game, do video production, or render 3D models, you're upgrading your computer to keep up with the demands of client-side rendered web apps and the operating system that loads up the same Excel that has existed for 30 years.

Lust for computing power is a great match for AI

[–] dgerard@awful.systems 7 points 1 year ago (3 children)

i have literally upgraded computers in the past decade purely to get ones that can take more RAM, because the web now sends 1000 characters of text as a virtual machine written in javascript rather than anything so tawdry as HTML and CSS

[–] self@awful.systems 6 points 1 year ago (1 children)

the death of server-side templating and the lie of server-side rendering (which practically just ships the same virtual machine to you but with a bunch more shit tacked on that doesn’t do anything) really has done fucked up things to the web
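To make the "same virtual machine, plus more shit" point concrete, here's a minimal sketch of what a typical hydrating SSR endpoint returns, assuming a React-style stack; the component name App and the path /bundle.js are hypothetical. The server renders the HTML once, but the response still embeds the serialized state and the entire client bundle so the browser can re-run the identical code to "hydrate" markup it already has.

```typescript
// Minimal sketch of a hydrating SSR endpoint, assuming a React-style stack.
// "App" and "/bundle.js" are hypothetical; the shape of the output is the point.
import { createElement } from "react";
import { renderToString } from "react-dom/server";
import { App } from "./App";

export function renderPage(data: { headline: string }): string {
  // The server renders the markup once...
  const html = renderToString(createElement(App, data));

  // ...but the response still carries the serialized state and the whole
  // client bundle, so the browser downloads and executes the same app again
  // just to re-attach event handlers to markup it was already given.
  return `<!doctype html>
<html>
  <body>
    <div id="root">${html}</div>
    <script>window.__STATE__ = ${JSON.stringify(data)};</script>
    <script src="/bundle.js"></script>
  </body>
</html>`;
}
```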

[–] 200fifty@awful.systems 6 points 1 year ago* (last edited 1 year ago) (2 children)

as someone who never really understood The Big Deal With SPAs (aside from, like, google docs or whatever) i'm at least taking solace in the fact that like a decade later people seem to be coming around to the idea that, wait, this actually kind of sucks

[–] dgerard@awful.systems 4 points 1 year ago (1 children)

React doesn't have to suck for the user (lemmy is fast) but ...

[–] fasterandworse@awful.systems 4 points 1 year ago (1 children)

this is the thing.

6 degrees of transpiler separation.

[–] dgerard@awful.systems 6 points 1 year ago (1 children)

when your web page is actually an app written in JS, the commercial temptation to load it up with as many trackers as will fit is overwhelming

[–] fasterandworse@awful.systems 8 points 1 year ago

October 2012 - I still consider this to be one of the most unacknowledged milestones in the enshittification of the web: https://web.archive.org/web/20121003000922/http://www.google.com/tagmanager/

"Digital marketing made (much) easier. Want to focus on marketing instead of marketing technology? Google Tag Manager lets you add and update your website tags, easily and for free, whenever you want, without bugging the IT folks."
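For context, the whole pitch fits in one loader snippet. The sketch below is a rough TypeScript re-creation, not Google's verbatim code, and "GTM-XXXXXX" is a placeholder: IT pastes one script tag once, and from then on whatever tags marketing configures in the web console get injected into every page with no further deploy.

```typescript
// Rough sketch (not Google's verbatim snippet) of what a tag-manager loader
// amounts to. The container id "GTM-XXXXXX" is a placeholder.
function loadTagManager(containerId: string): void {
  // the dataLayer is a global queue that marketing-configured tags read from
  (window as any).dataLayer = (window as any).dataLayer || [];
  (window as any).dataLayer.push({ event: "gtm.js", "gtm.start": Date.now() });

  // one async script tag; everything configured in the container UI
  // (analytics, ad pixels, arbitrary custom HTML/JS) gets injected from here,
  // without bugging the IT folks again
  const script = document.createElement("script");
  script.async = true;
  script.src =
    "https://www.googletagmanager.com/gtm.js?id=" + encodeURIComponent(containerId);
  document.head.appendChild(script);
}

loadTagManager("GTM-XXXXXX");
```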

[–] raktheundead@fedia.io 5 points 1 year ago

Every day, we pay the price for embracing a homophobe's 10-day hack comprising a shittier version of Lisp.

[–] fasterandworse@awful.systems 5 points 1 year ago (1 children)

The internet document transfer protocol needs a separation of page and app

[–] fasterandworse@hci.social 3 points 1 year ago (1 children)

@fasterandworse@awful.systems @self @trisweb I didn’t know masto picked up lemmy posts like this

[–] trisweb@m.trisweb.com 4 points 1 year ago (2 children)

@fasterandworse@hci.social @fasterandworse@awful.systems @self Yep, it's all the same protocol. It's pretty weird though; no indication of what platform the post really came from or how it was intended to be viewed. I could see that being useful first-class information for the reader on whatever platform they're reading from.

Trying to remember how I even got this post. Did you boost it from your masto account?
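For what it's worth, a federated Lemmy post travels as (roughly) a generic ActivityStreams object. The sketch below is hand-written for illustration, not a captured payload, and the ids are placeholders: there's an object type ("Page" here, versus the "Note" that Mastodon statuses use), but no field saying "this is a Lemmy post" or how it's meant to be presented.

```typescript
// Hand-written illustration, not a captured federation payload; ids are
// placeholders. Note the generic "type" and the absence of any field naming
// the originating platform or its intended presentation.
const lemmyPostAsFederated = {
  "@context": "https://www.w3.org/ns/activitystreams",
  type: "Page",                                    // Mastodon statuses are "Note"
  id: "https://awful.systems/post/000000",         // placeholder
  attributedTo: "https://awful.systems/u/someone", // placeholder actor
  name: "post title",
  content: "<p>post body as HTML</p>",
  published: "2023-09-17T00:00:00Z",
  to: ["https://www.w3.org/ns/activitystreams#Public"],
};
```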

[–] fasterandworse@hci.social 4 points 1 year ago (1 children)

@trisweb @fasterandworse@awful.systems @self yeah I figured the activitypub protocol used some kind of content type definition to control where stuff was appropriately published… I never got around to actually reading the docs.

I have no idea how it came to your feed. I found it because you boosted it!

[–] self@awful.systems 8 points 1 year ago (1 children)

as an open source federated protocol, ActivityPub and all the apps built on top of it are required to have a layer of jank hiding just under the surface

[–] dgerard@awful.systems 6 points 1 year ago (2 children)

ActivityPub is a protocol for software to fail to talk to each other

@self has tapped Lemmy with carefully aimed hammers in a few places so that we federate both ways with Mastodon, which has been pretty cool actually

[–] trisweb@m.trisweb.com 3 points 1 year ago (1 children)

@dgerard @self Oh interesting, so is this Lemmy instance special in this regard?

[–] fasterandworse@hci.social 3 points 1 year ago (1 children)
[–] trisweb@m.trisweb.com 3 points 1 year ago

@fasterandworse @self Fediverse problems. This could... use improvement. But it's cool that it works!

[–] swlabr@awful.systems 14 points 1 year ago

100% on point. More people must remember that everything we know about large companies' operations is still completely valid. Leadership doesn't understand any of the technologies at play, even at a high level: they don't think in terms of black boxes; they think in black, amorphous miasmas of supposed function, or "vibes" for short. They are concerned with a few metrics going up or down every quarter. As long as the number goes up, they get paid, the dopamine hits, and everyone stays happy.

The AI miasma (mAIasma? miasmAI?) in particular is near perfect. Other technologies only held a finite amount of potential to be hyped, meaning execs had to keep looking for their next stock price bump. AI is infinitely hypeable since you can promise anything with it, and people will believe you thanks to the smoke and mirrors it procedurally pumps out today.

I have friends who have worked in plenty of large corporations and have experience/understanding of the worthlessness of executive leadership, but they don't connect that to AI investment and thus don't see the grift. It's sometimes exasperating.

[–] tetranomos@awful.systems 7 points 1 year ago* (last edited 1 year ago) (2 children)

what i'm trying to understand is the bridge between the quite damning works like Artificial Intelligence: A Modern Myth by John Kelly, R. Scha elsewhere, G. Ryle at the advent of the Cognitive Revolution, deriving many of the same points as L. Wittgenstein, and then there's PMS Hacker, a daunting read, indeed, that bridge between these counter-"a.i." authors, and the easy think substance that seems to re-emerge every other decade? how is it that there are so many resolutely powerful indictments, and they are all being lost to what seems like a digital dark age? is it that the kool-aid is too good, that the sauce is too powerful, that the propaganda is too well funded? or is this all merely par for the course in the development of a planet that becomes conscious of all its "hyperobjects"?

[–] bitofhope@awful.systems 7 points 1 year ago

I don't claim to know any better than you, but my intuition says it's the funding, combined with the fact that even understanding what the claims are takes a fair bit of technical sophistication, let alone understanding why they're bullshit. The ever-soaring levels of inequality (constant record highs for a couple of generations at least) make it hard to realize just how much power the technocrats hold over public perception, and it can take a full lecture to explain even to an educated and intelligent person how exactly the sentences the computer man utters are a crock of shit.

And more cynically, for some people it's the old saw about not understanding things when your paycheck depends on it.

It's a self-repairing problem, since you can't fool most people forever, but the sooner people stop buying into it, the better.

[–] fasterandworse@awful.systems 4 points 1 year ago

Sometimes it seems like wilful ignorance