self

joined 1 year ago
[–] self@awful.systems 16 points 3 days ago

the linked Buttondown article deserves highlighting because, as always, Emily M Bender knows what’s up:

If we value information literacy and cultivating in students the ability to think critically about information sources and how they relate to each other, we shouldn't use systems that not only rupture the relationship between reader and information source, but also present a worldview where there are simple, authoritative answers to questions, and all we have to do is to just ask ChatGPT for them.

(and I really should start listening to Mystery AI Hype Theater 3000 soon)

also, this stood out, from the OpenAI/Common Sense Media (ugh) presentation:

As a responsible user, it is essential that you check and evaluate the accuracy of the outputs of any generative AI tool before you share it with your colleagues, parents and caregivers, and students. That includes any seemingly factual information, links, references, and citations.

this is such a fucked framing of the dangers of informational bias, algorithmic racism, and the laundering of fabricated data through the false authority of an LLM. framing it as an issue where the responsible party is the non-expert user is a lot like saying “of course you can diagnose your own ocular damage, just use your eyes”. it’s very easy to perceive the AI as unbiased in situations where the bias agrees with your own, and that is incredibly dangerous to marginalized students. and as always, it’s gross how targeted this is: educators are used to being the responsible ones in the room, and this might feel like yet another responsibility to take on — but that’s not a reasonable way to handle LLMs as a source of unending bullshit.

[–] self@awful.systems 27 points 3 days ago (1 children)

Lack of familiarity with AI PCs leads to what the study describes as "misconceptions," which include the following: 44 percent of respondents believe AI PCs are a gimmick or futuristic; 53 percent believe AI PCs are only for creative or technical professionals; 86 percent are concerned about the privacy and security of their data when using an AI PC; and 17 percent believe AI PCs are not secure or regulated.

ah yeah, you just need to get more familiar with your AI PC so you stop caring what a massive privacy and security risk both Recall and Copilot are

lol @ 44% of the study’s participants already knowing this shit’s a desperate gimmick though

[–] self@awful.systems 14 points 4 days ago

per capita: your mom

[–] self@awful.systems 7 points 4 days ago (1 children)

fuck me that is some awful fucking moderation. I can’t imagine being so fucking bad at this that I:

  • dole out a ban for being rude to a fascist
  • dole out a second ban because somebody in the community did some basic fucking due diligence and found out one of the accounts defending the above fascist has been just a gigantic racist piece of shit elsewhere, surprise
  • in the process of the above, I create a safe space for a fascist and her friends

but for so many of these people, somehow that’s what moderation is? fucking wild, how the fuck did we get here

[–] self@awful.systems 10 points 4 days ago* (last edited 4 days ago)

a better-thought-out announcement is coming later today, but our WriteFreely instance at gibberish.awful.systems has reached a roughly production-ready state (and you can hack on its frontend by modifying the templates, pages, static, and less directories in this repo and opening a PR)! awful.systems regulars can ask for an account and I'll DM an invite link!

[–] self@awful.systems 14 points 5 days ago (3 children)

most of the dedicated Niantic (Pokemon Go, Ingress) game players I know figured the company was using their positioning data and phone sensors to help make better navigational algorithms. well surprise, it’s worse than that: they’re doing a generative AI model that looks to me like it’s tuned specifically for surveillance and warfare (though Niantic is of course just saying this kind of model can be used for robots… seagull meme, “what are the robots for, fucker? why are you being so vague about who’s asking for this type of model?”)

[–] self@awful.systems 16 points 5 days ago

another absolutely fucked thing about the gotcha interview is, they never stop at just one. if you somehow read the interviewer’s mind and asspull the expected (not “correct”, mind you) answer, they’ll just go “huh” and instantly pivot to a different instant-fail gotcha. the point of the gotcha interview isn’t candidate selection; the point is that the asshole interviewer has power over the candidate, and can easily use gotchas to fabricate technical-sounding reasons for rejecting suitable candidates they personally just don’t like.

shit like this is one reason our industry is full of fucking assholes; they select for their own by any practical means. it’s reminiscent of those rigged, impossible “literacy tests” they used to give voters in the south (that is, the southern US), where almost every question was a gotcha designed so that a poll worker could exclude Black voters at effectively their own discretion, complete with a bullshit paper trail in case anyone questioned the process.

(also, how many of these assholes send candidates down a rabbit hole wasting time answering questions unrelated to the position when they don’t get the gotcha right? I swear that’s happened to me more than once, and I can only imagine it’s so nobody asks why most of the interviews are so short)

[–] self@awful.systems 9 points 5 days ago

I start every new day by screaming this is the remix, as required by law

[–] self@awful.systems 14 points 6 days ago

I can now believe that they might have considered that, and are probably hoping he’ll come down in their favour now that he’s coming in.

I’m betting there are some juicy logs of something like this happening in the C++ Alliance Slack’s secret #unfiltered channel, given that its typical content allegedly consists of abuse directed towards marginalized people, coordination of harassment and misinformation on community platforms, and various other fash shit. it’s weird that a Slack so closely associated with the “professional and respectful” ISO C++ committee would host two (including the diet version, #on-topic) of what are essentially chan-style trolling channels for people who think they’re adults on an official Slack. maybe it shouldn’t be too surprising, since one committee member felt comfortable posting an extensive, unhinged COVID conspiracy rant on the WG21 mailing list — with a community like that, it’s just a matter of time before the assholes in charge go mask-off.

and for anyone who hasn’t read the article yet, the above isn’t even the worst shit you’ll learn about, it’s a fucking rollercoaster (and there are some details alluded to that you’ll only pick up on a second reading too)

[–] self@awful.systems 9 points 6 days ago (1 children)

We’re sure phrases like “novel types of debt structures” won’t give you flashbacks to the 2008 financial crisis.

I feel like John taking the Soy Sauce for the first time and knowing how many people are gonna explode

[–] self@awful.systems 7 points 1 week ago

It is the fourth book in the John Dies at the End series

oh damn, I just gave the (fun but absolute mess of a) movie another watch and was wondering if they ever wrote more stories in the series — I knew they wrote a sequel to John Dies at the End, but I lost track of it after that. it looks like I’ve got a few books to pick up!

[–] self@awful.systems 7 points 1 week ago

how are you this fucking mid?

 

(via Timnit Gebru)

Although the board members didn’t use the language of abuse to describe Altman’s behavior, these complaints echoed some of their interactions with Altman over the years, and they had already been debating the board’s ability to hold the CEO accountable. Several board members thought Altman had lied to them, for example, as part of a campaign to remove board member Helen Toner after she published a paper criticizing OpenAI, the people said.

The complaints about Altman’s alleged behavior, which have not previously been reported, were a major factor in the board’s abrupt decision to fire Altman on Nov. 17, according to the people. Initially cast as a clash over the safe development of artificial intelligence, Altman’s firing was at least partially motivated by the sense that his behavior would make it impossible for the board to oversee the CEO.

For longtime employees, there was added incentive to sign: Altman’s departure jeopardized an investment deal that would allow them to sell their stock back to OpenAI, cashing out equity without waiting for the company to go public. The deal — led by Joshua Kushner’s Thrive Capital — values the company at almost $90 billion, according to a report in the Wall Street Journal, more than triple its $28 billion valuation in April, and it could have been threatened by tanking value triggered by the CEO’s departure.

huh, I think this shady AI startup whose product is based on theft that cloaks all its actions in fake concern for humanity might have a systemic ethics problem

 

in spite of popular belief, maybe lying your ass off on the orange site is actually a fucking stupid career move

for those who don’t know about Kyle, see our last thread about Cruise. the company also popped up a bit recently when we discussed general orange site nonsense — Paully G was doing his best to make Cruise look like an absolute success after the safety failings of their awful self-driving tech became too obvious to ignore last month

 

this article is incredibly long and rambly, but please enjoy as this asshole struggles to select random items from an array in presumably Javascript for what sounds like a basic crossword app:

At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling. I made some false starts using what I could gather, and while I did my thing—programming—Ben told GPT-4 what he wanted and got code that ran perfectly.

Fine: commands like those are notoriously fussy, and everybody looks them up anyway.

ah, the NP-complete problem of just fucking pulling the file into memory (there’s no way this clown was burning a rainforest asking ChatGPT for a memory-optimized way to do this), selecting a random item between 0 and the array’s length minus 1, and maybe storing that index in a second array if you want to guarantee uniqueness. there’s definitely not literally thousands of libraries for this if you seriously can’t figure it out yourself, hackerman

I returned to the crossword project. Our puzzle generator printed its output in an ugly text format, with lines like "s""c""a""r""*""k""u""n""i""s""*" "a""r""e""a". I wanted to turn output like that into a pretty Web page that allowed me to explore the words in the grid, showing scoring information at a glance. But I knew the task would be tricky: each letter had to be tagged with the words it belonged to, both the across and the down. This was a detailed problem, one that could easily consume the better part of an evening.

fuck it’s convenient that every example this chucklefuck gives of ChatGPT helping is for incredibly well-treaded toy and example code. wonder why that is? (check out the author’s other articles for a hint)
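and the “tricky” tagging task that “could easily consume the better part of an evening”? it’s two grid scans. a minimal sketch (the 2D-array grid representation and `"*"` block marker are my assumptions about the puzzle format, not the article’s actual data):

```javascript
// tag each letter of a crossword grid with the across and down words it belongs to;
// grid is a 2D array of single characters where "*" marks a blocked square
function tagGrid(grid) {
  const rows = grid.length, cols = grid[0].length;
  const tag = grid.map(row => row.map(() => ({ across: null, down: null })));
  // scan each row left to right, collecting maximal runs of letters as across words
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; ) {
      if (grid[r][c] === "*") { c++; continue; }
      let start = c, word = "";
      while (c < cols && grid[r][c] !== "*") word += grid[r][c++];
      for (let k = start; k < c; k++) tag[r][k].across = word;
    }
  }
  // same scan down each column for the down words
  for (let c = 0; c < cols; c++) {
    for (let r = 0; r < rows; ) {
      if (grid[r][c] === "*") { r++; continue; }
      let start = r, word = "";
      while (r < rows && grid[r][c] !== "*") word += grid[r++][c];
      for (let k = start; k < r; k++) tag[k][c].down = word;
    }
  }
  return tag;
}
```

a detailed problem, sure, but the kind you solve once with a pencil and then type in.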

I thought that my brother was a hacker. Like many programmers, I dreamed of breaking into and controlling remote systems. The point wasn’t to cause mayhem—it was to find hidden places and learn hidden things. “My crime is that of curiosity,” goes “The Hacker’s Manifesto,” written in 1986 by Loyd Blankenship. My favorite scene from the 1995 movie “Hackers” is

most of this article is this type of fluffy cringe, almost like it’s written by a shitty advertiser trying and failing to pass themselves off as a relatable techy

 

having recently played and refunded a terrible “modern” text adventure, I’ve had the urge to revisit my favorite interactive fiction author, Andrew Plotkin aka Zarf. here’s a selection of recommendations from his long list of works:

 

I found this searching for information on how to program for the old Commodore Amiga’s HAM (Hold And Modify) video mode and you gotta touch and feel this one to sneer at it, cause I haven’t seen a website this aggressively shitty since Flash died. the content isn’t even worth quoting as it’s just LLM-generated bullshit meant to SEO this shit site into the top result for an existing term (which worked), but just clicking around and scrolling on this site will expose you to an incredible density of laggy, broken full screen animations that take way too long to complete and block reading content until they’re done, alongside a long list of other good design sense violations (find your favorites!)

bonus sneer: arguably I’m finally taking up Amiga programming as an escape from all this AI bullshit. well fuck me I guess cause here’s one of the vultures in the retrocomputing space selling an enshittified (and very ugly) version of AmigaOS with a ChatGPT app and an AI art generator, cause not even operating on a 30 year old computer will spare me this bullshit:

like fuck man, all I want to do is trick a video chipset from 1985 into making pretty colors. am I seriously gonna have to barge screaming into another German demoscene IRC channel?

 

the writer Nina Illingworth, whose work has been a constant source of inspiration, posted this excellent analysis of the reality of the AI bubble on Mastodon (featuring a shout-out to the recent articles on the subject from Amy Castor and @dgerard@awful.systems):

Naw, I figured it out; they absolutely don't care if AI doesn't work.

They really don't. They're pot-committed; these dudes aren't tech pioneers, they're money muppets playing the bubble game. They are invested in increasing the valuation of their investments and cashing out, it's literally a massive scam. Reading a bunch of stuff by Amy Castor and David Gerard finally got me there in terms of understanding it's not real and they don't care. From there it was pretty easy to apply a historical analysis of the last 10 bubbles, who profited, at which point in the cycle, and where the real money was made.

The plan is more or less to foist AI on establishment actors who don't know their ass from their elbow, causing investment valuations to soar, and then cash the fuck out before anyone really realizes it's total gibberish and unlikely to get better at the rate and speed they were promised.

Particularly in the media, it's all about adoption and cashing out, not actually replacing media. Nobody making decisions and investments here, particularly wants an informed populace, after all.

the linked mastodon thread also has a very interesting post from an AI skeptic who used to work at Microsoft and seems to have gotten laid off for their skepticism

 

a surprisingly good Atari 2600 demo by XAYAX, originally presented at Revision 2014

 

Netrunner is a collectible card game with a very long history. in short:

  • its first edition was designed by the Magic: The Gathering guy (with about as many greed and scarcity mechanics as Magic) and took place in the same universe as Cyberpunk 2077
  • the second edition was published by Fantasy Flight Games, replaced the scarcity mechanics with Living Card Game expansion packs (you get all the cards in the set with one purchase) and a sliding window for tournament play card validity, and switched universes and names to Android: Netrunner
  • the game went entirely out of print once Fantasy Flight dropped it
  • the current “edition” of the game and its rules are maintained by a non-profit cooperative named Nullsignal (formerly NISEI), who also continued the story started in Android: Netrunner.

because the game is maintained by a non-profit (and actually appropriately fairly anti-corporate) cooperative, playing Netrunner ranges from free to relatively cheap:

  • any recognizable proxy is valid even in tournament play with the right (opaque-backed) sleeves. this means that you can print out Nullsignal’s cards at home and sleeve them with a little bit of card stock for rigidity and be ready for tournament play. this also means you can sleeve a post-it note for the same effect, so long as both players can recognize which card you’re supposed to be playing
  • you can buy a boxed set from Nullsignal if you’d like high quality cards, and they’ve also got on-demand manufacturing set up through DriveThruCards and MakePlayingCards
  • or you can forget physical cards entirely and play on jinteki.net, a free service that lets you play an online game of Netrunner using every card ever published by Fantasy Flight and Nullsignal. the designers at Nullsignal also use Jinteki to beta test and pre-release sets, so you may also get access to cards that don’t physically exist yet

the gameplay of Netrunner is fucking great: it’s an asymmetric card game where one player is a corporation (or their sysadmin at least) and the other is a runner trying to hack and bring down that corporation. the gameplay feels a lot like a mix between a shell game, the bluffing parts of poker, the better bits of Magic (most of the rules you need are on the cards), and an aggressive cat and mouse struggle, all at once. it’s actually one of my favorite ways that decking and ICE have been translated into gameplay mechanics.

Nullsignal also does a great job on the story, art, and aesthetic of their new cards. modern Netrunner has a distinctive feel to it, but it’s clear that the folks behind it understand how to make good cyberpunk.

 

Hypnospace Outlaw is that funny meme game with the pizza dance. it’s also a leftist parody of the California Ideology and some of the factors that led to the bursting of the dot com bubble. crucially, it’s also a whole lot of fun to play — it’s a very good point and click mystery adventure that takes place on a faithfully rendered and authentic-feeling version of a networked computer in the 90s, crafted by someone who absolutely knew what they were doing with the time period and aesthetic.

above all, it’s one of the better cyberpunk games I’ve played, though I can’t really explain why without spoiling the ending. Hypnospace Outlaw can be finished fairly quickly, so I encourage anyone who hasn’t to give it a play or at least watch a playthrough from a non-annoying YouTuber. ending spoilers follow:

Hypnospace Outlaw ending spoilers: it goes without saying that sleeptime computing in Hypnospace is a limited and janky but still revolutionary brain-computer interface, and in effect what you’re doing during the whole game is a precursor to netrunning. in fact, Hypnospace in general is a perfect prelude to a Gibsonian cyberpunk dystopia.

as demonstrated in the last chapter of the game, sleeptime computing tech is fatal when pushed beyond its limits, as Merchantsoft demonstrated like only a short-sighted and greedy startup in 1999 could. Dylan even spends 20 solid years blaming a hacker for the lives he took fucking with tech he barely understood. the tech behind sleeptime computing is most likely outlawed after 1999, or its use is at least heavily stigmatized.

at the same time, the promise behind Hypnospace remains alluring as fuck. in the last chapter of the game, you join up with a nostalgic effort to archive all of Hypnospace from the cache memory in your repaired moderator headband. the allure goes beyond nostalgia though: with the 90s ideas stripped away, even a janky BCI is incredibly useful. you can imagine high-frequency traders, drone pilots, and similar assholes being particularly interested in the illegal tech that replaces sleep with the ability to very efficiently do their jobs 24/7. cyberdeck tech being strictly regulated and only available to high-level corpos and obsessed hackers is a key component of classic cyberpunk.

and hey, while we’re on the topic of the worst people in the world adopting illegal tech, did you finish the (excellent) M1NX and Leaky Piping side plots? cause if you did, you’ll know that sleeptime computing doesn’t actually let you sleep — it severely limits the amount of time you spend in REM sleep, but users don’t realize that because they’re still physically resting. so those high-frequency traders, drone pilots, and other assholes who’ve adopted habitual sleeptime computing use are also slowly going insane from a lack of REM sleep, and chances are they don’t know it because all the evidence was released right before the Mindcrash

in short, these are all the precursor chemicals you need for a cyberpunk future.

the game’s author, Jay Tholen, is currently in progress on its sequel, Dreamsettler. I can’t wait for more good cyberpunk.

 

there’s an alternate universe version of this where musk’s attendant sycophants and bodyguard have to fish his electrocuted/suffocated/crushed body out from the crawlspace he wedged himself into with a pocket knife

 

404media continues to do devastatingly good tech journalism

What Kaedim’s artificial intelligence produced was of such low quality that at one point in time “it would just be an unrecognizable blob or something instead of a tree for example,” one source familiar with its process said. 404 Media granted multiple sources in this article anonymity to avoid retaliation.

this is fucking amazing. the company tries to hide it as a QA check, but they’re really just paying 3D modelers $1-$4 a pop to churn out models in 15 minutes while they pretend the work’s being done by an AI, and now I’m wondering what other AI startups have also discovered this shitty dishonest growth hack

 

this is a computer that’s almost entirely without graphical capabilities, so here’s a demo featuring animations and sound someone did last year
