self

joined 1 year ago
[–] self@awful.systems 7 points 1 week ago

how are you this fucking mid?

[–] self@awful.systems 20 points 1 week ago

In fact, they’re getting plenty of free publicity for using AI to make it thanks to articles like these

good thing there’s no such thing as bad publicity, otherwise this shit would be fucking embarrassing

[–] self@awful.systems 7 points 1 week ago

I hereby vow to be extremely firm and extremely realistic

[–] self@awful.systems 8 points 1 week ago

finally my pyramid scheme is coming to fruition

sneerchain, coming soon to an awful system near you

[–] self@awful.systems 7 points 1 week ago (3 children)

fucking… maybe if I edit it in nobody will notice

maybe they’ll pay even more for the director’s cut

[–] self@awful.systems 12 points 1 week ago (6 children)

what kind of leftist would I be if I didn’t?

[–] self@awful.systems 12 points 1 week ago (6 children)

no part of me is surprised that every famous FOSS personality has a ridiculously bad age of consent take, but it is a real fucking problem that this keeps happening

[–] self@awful.systems 15 points 1 week ago (1 children)

The funniest part was probably them blaming anime avatars for trans people.

holy fuck, I’ve seen anti-trans accounts on mastodon spew this same shit and I’m kind of surprised it’s a wider conservative conspiracy theory and not something local to mastodon

just kidding the funniest part was the guy calling trans people the “priest class” of urbanism

leftists don’t go to church on Sundays, they just trans themselves and then bikeshed about bike sheds

[–] self@awful.systems 9 points 1 week ago

a lot of projects with crypto ties have been doing this kind of hedge — they now emphasize data sovereignty instead of web3, decentralization, and other poisoned buzzwords, but they never really answer why crypto assholes would invest in a company if not to do crypto asshole shit when the time is right

[–] self@awful.systems 8 points 1 week ago

fuck yes, why wasn’t my college this cool? all I got was an AI elective taught by a guy whose proudest achievement was having the only remaining Genera license on campus

[–] self@awful.systems 12 points 1 week ago (8 children)

this is leftist infighting between Mondayists and Tomorrow-Evaporists and I won’t allow it in this space

[–] self@awful.systems 13 points 1 week ago (1 children)

all the replies anthropomorphizing the LLM cause it generated something creepy and they don’t know why aren’t surprising, but for some reason this one really pisses me off:

I just checked out the conversation and it looks legit. So weird. I cannot imagine why it would generate a completion like this. Tell your brother to buy a lottery ticket.

an LLM generating absolute garbage that happens to be abusive in some way is a lottery ticket event, is it? I had no idea lottery wins happened that fucking frequently

 

kinda glad I bounced off of the suckless ecosystem when I realized how much their config mechanism (C header files and a recompile cycle) fucking sucked

 

 

A Brief Primer on Technofascism

Introduction

It has become increasingly obvious that some of the most prominent and monied people and projects in the tech industry intend to implement many of the same features and pursue the same goals that are described in Umberto Eco’s Ur-Fascism(4); that is, these people are fascists and their projects enable fascist goals. However, it has become equally obvious that those fascist goals are being pursued using a set of methods and pathways that are unique to the tech industry, and which appear to be uniquely crafted to force both Silicon Valley corporations and the venture capital sphere to embrace fascist values. The name that fits this particular strain of fascism the best is technofascism (with thanks to @future_synthetic), frequently shortened for convenience to techfash.

Some prime examples of technofascist methods in action exist in cryptocurrency projects, generative AI, large language models, and a particular early example of technofascism named Urbit. There are many more examples of technofascist methods, but these were picked because they clearly demonstrate what outwardly separates technofascism from ordinary hype and marketing.

The Unique Mechanisms of Technofascism

Disassociation from technological progress or success

Technofascist projects are almost always entirely unsuccessful at achieving their stated goals, and rarely involve any actual technological innovation. This is because the marketed goals of these projects are not their real, fascist aims.

Cryptocurrencies like Bitcoin are frequently presented as innovative, but all blockchain-based technologies are, in fact, inefficient distributed databases based on Merkle trees, a very old technology to which blockchains add little practical value. Blockchains are so impractical that they have provably failed to achieve any of the goals cryptocurrency corporations have marketed since the public release of Bitcoin(6).
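To make concrete just how old and simple the underlying technique is, here is a minimal sketch (my own illustration, not any cryptocurrency's actual code) of the hash-linking at the core of every blockchain: an append-only list in which each entry commits to the hash of the entry before it. Everything a blockchain adds beyond this is consensus machinery.

```typescript
// Minimal hash-chained ledger: each block commits to the hash of the previous
// block, so tampering with any historical entry invalidates every later hash.
// Real blockchains additionally commit each block's transactions via a Merkle
// tree, but the chaining itself is just this.
import { createHash } from "node:crypto";

interface Block {
  index: number;
  prevHash: string;
  data: string;
  hash: string;
}

function sha256(s: string): string {
  return createHash("sha256").update(s).digest("hex");
}

function appendBlock(chain: Block[], data: string): Block {
  const prev = chain[chain.length - 1];
  const index = prev ? prev.index + 1 : 0;
  const prevHash = prev ? prev.hash : "0".repeat(64);
  const hash = sha256(`${index}:${prevHash}:${data}`);
  const block = { index, prevHash, data, hash };
  chain.push(block);
  return block;
}

function verify(chain: Block[]): boolean {
  return chain.every((block, i) => {
    const expectedPrev = i === 0 ? "0".repeat(64) : chain[i - 1].hash;
    return (
      block.prevHash === expectedPrev &&
      block.hash === sha256(`${block.index}:${block.prevHash}:${block.data}`)
    );
  });
}

const chain: Block[] = [];
appendBlock(chain, "genesis");
appendBlock(chain, "alice pays bob 1 coin");
console.log(verify(chain)); // true
chain[1].data = "alice pays mallory 1000 coins";
console.log(verify(chain)); // false: the tampered entry no longer matches its hash
```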

Statement of world-changing goals, to be achieved without consent

Technofascist goals are never small-scale. Successful tech projects are usually narrowly focused in order to limit their scope(9), but technofascist projects invariably have global ambitions (with no real attempt to establish a roadmap of humbler goals), and equally invariably attempt to achieve those goals without the consent of anyone outside of the project, usually via coercion.

This type of coercion and consent violation is best demonstrated by example. In cryptocurrency, a line of thought that has been called the Bitcoin Citadel(8) has become common in several communities centered around Bitcoin, Ethereum, and other cryptocurrencies. Generally speaking, this is the idea that in a near-future post-collapse society, the early adopters of the cryptocurrency at hand will rule, while late and non-adopters will be enslaved. In keeping with technofascism’s disdain for the success of its marketed goals, this monstrous idea ignores the fact that cryptocurrencies would be useless in a post-collapse environment with a fractured or non-existent global computer network.

AI and TESCREAL groups demonstrate this same pattern by simultaneously positioning large language models as both an existential threat on the verge of becoming a hostile godlike sentience and the key to unlocking a brighter (see: more profitable) future for the faithful of the TESCREAL in-group. In this case, the consent violation is exacerbated by large language models and generative AI necessarily being trained on mass volumes of textual and artistic work taken without permission(1).

Urbit positions itself as the inevitable future of networked computing, but its admitted goal is to technologically implement a neofeudal structure where early adopters get significant control over the network and how it executes code(3, 12).

Creation and furtherance of a death cult

In the fascist ideology Eco describes, life is “lived for struggle” and everyone is indoctrinated into a cult of heroism that is closely linked with a cult of death(4). The same indoctrination drives what I will refer to as a death cult, in which a technofascist project is simultaneously positioned as a world-ending problem and as the solution to that same problem (which would not exist without the efforts of technofascists) for a select, enlightened few.

The death cult of technofascism is demonstrated with perfect clarity by the closely-related ideologies surrounding Large Language Models (LLMs), Artificial General Intelligence (AGI), and the bundle of ideas known as TESCREAL (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism)(5).

We can draw examples of this death cult from the previous section. In the concept of the Bitcoin Citadel, cryptocurrencies are idealized as both the cause of the coming collapse and the in-group’s source of power after it(6). In TESCREAL belief, Artificial General Intelligence (AGI) will end the world unless it is “aligned with humanity” by members of the death cult, who handle the AGI with the proper religious fervor(11).

While Urbit does not technologically structure itself as a death cult, its community and network are structured to be a highly effective incubator for other death cults(2, 7, 10).

Severance of our relationship with truth and scientific research

Destruction and redefinition of historical records

This can be viewed as a furtherance of technofascism’s goal of destroying our ability to perceive the truth, but it must be called out that technofascist projects have a particular interest in distorting our remembrance of history, making history effectively mutable in order to cover for technofascism’s failings.

Parasitization of existing terminology

As part of the process of generating false consensus and covering for the many failings of technofascist projects, existing terminology is often taken and repurposed to suit the goals of the fascists.

One obvious example is the popular term crypto, which until relatively recently referred to cryptography, an extremely important branch of mathematics. Cryptocurrency communities have now adopted the term, and have deliberately used the resulting confusion to falsely imply that cryptocurrencies, like cryptography, are an important tool in software architecture.

Weaponization of open source and the commons

One of the distinctive traits that separates ordinary capitalist exploitation from technofascism is the subversion and weaponization of the efforts of the open source community and the development commons.

One notable weapon used by many technofascist projects to achieve absolute control while maintaining the illusion that the work being undertaken is an open source community effort is what I will call forking hostility. This is a concerted effort to make forking the project infeasible, and it takes two forms.

Its technological form is accomplished via network effects; good examples are large cryptocurrency projects like Bitcoin and Ethereum, which cannot practically be forked because any blockchain without majority consensus is highly vulnerable to attacks, and in any case is much less valuable than the larger chain. Urbit maintains technological forking hostility via its aforementioned implementation of neofeudal network resource allocation.
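To illustrate how exposed a minority fork is, the following sketch uses the simplified catch-up probability from the Bitcoin whitepaper; the hashrate figures are invented for the example and do not describe any real fork.

```typescript
// Probability that an attacker controlling fraction `q` of a chain's hashrate
// eventually rewrites a block that is `z` confirmations deep (the simplified
// gambler's-ruin result from the Bitcoin whitepaper, ignoring the Poisson
// correction for blocks mined during the confirmation window).
function catchUpProbability(q: number, z: number): number {
  const p = 1 - q; // honest fraction of hashrate
  return q >= p ? 1 : Math.pow(q / p, z);
}

// Invented numbers: suppose a fork keeps only 5% of the original network's
// hashrate. Hostile miners worth just 10% of the original network then hold
// a two-thirds majority on the fork (0.10 / (0.10 + 0.05)).
const attackerShareOnFork = 0.10 / (0.10 + 0.05);
console.log(catchUpProbability(attackerShareOnFork, 6)); // 1: the attacker always catches up
console.log(catchUpProbability(0.3, 6)); // ~0.006 where the attacker is a genuine minority
```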

The second form of forking hostility is social: technofascist open source communities are notable for aggressively telling dissenters to “just fork it, it’s open source” while just as aggressively punishing anyone who attempts a fork with threats, hacking attempts (such as the aforementioned blockchain attacks), ostracization, and other severe social repercussions. These responses are distinctive in their uniformity, which is rare even among the most toxic of regular open source communities.

Implementation of racist, biased, and prejudiced systems

References

[1] Bender, Emily M. and Hanna, Alex, AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype, Scientific American, 2023.

[2] Broderick, Ryan, Inside Remilia Corporation, the Anti-Woke DAO behind the Doomed Milady Maker NFT, Fast Company, 2022.

[3] Duesterberg, James, Among the Reality Entrepreneurs, The Point Magazine, 2022.

[4] Eco, Umberto, Ur-Fascism, The Anarchist Library, 1995.

[5] Gebru, Timnit and Torres, Emile, SaTML 2023 - Timnit Gebru - Eugenics and the Promise of Utopia through AGI, 2023.

[6] Gerard, David, Attack of the 50 Foot Blockchain: Bitcoin, Blockchain, Ethereum and Smart Contracts, David Gerard, 2017.

[7] Gottsegen, Will, Everything You Always Wanted to Know about Miladys but Were Afraid to Ask, 2022.

[8] Munster, Ben, The Bizarre Rise of the ’Bitcoin Citadel’, Decrypt, 2021.

[9] Scope Creep, Wikipedia, 2023.

[10] How to Start a Secret Society, 2022.

[11] Torres, Emile P., The Acronym behind Our Wildest AI Dreams and Nightmares, Truthdig, 2023.

[12] Yarvin, Curtis, 3-intro.txt, GitHub, 2010.

submitted 1 year ago* (last edited 1 year ago) by self@awful.systems to c/techtakes@awful.systems
 

no excerpts yet cause work destroyed me, but this just got posted on the orange site. apparently a couple of urbit devs realized urbit sucks actually. interestingly they correctly call out some of urbit’s worst points (like its incredibly high degree of centralization), but I get the strong feeling that this whole thing is an attempt to launder urbit’s reputation while swapping out the fascists in charge

e: I also have to point out that this is written from the insane perspective that anyone uses urbit for anything at all other than an incredibly inefficient message board and a set of interlocking crypto scams

e2: I didn’t link it initially, but the orange site thread where I found this has heated up significantly since then

 

Science shows that the brain and the rest of the nervous system stops at death. How that relates to the notion of consciousness is still pretty much unknown, and many neuroscientists will tell you that. We haven't yet found an organ or process in the brain responsible for the conscious mind that we can say stops at death.

no matter how many neuroscientists I ask, none of them will tell me which part of the brain contains the soul. the orange site actually has a good sneer for this:

You don't need to know which part of the brain corresponds to a conscious mind when they entire brain is dead.

a lot of the rest of the thread is the most braindead right-libertarian version of Pascal’s Wager I’ve ever seen:

Ultimately, it's their personal choice, with their money, and even if they spend $100,000 on paying for it, or more, it doesn't mean they didn't leave other assets or things for their descendants.

By making a moral claim for why YOU decide that spending that money isn't justified, you're going down one very arrogant and ultimately silly road of making the same claim to so many other things people spend money and effort they've worked hard for on specific personal preferences, be they material or otherwise.

Maybe you buying a $700,000 house vs. a $600,000 house is just as idiotic then? Do you really need the extra floor space or bathrooms?

Where would you draw a line? Should other once-implausible life enhancement therapies that are now widely used and accepted also be forsaken? How about organ transplants? Gene therapy? highly expensive cancer treatments that all have extended life beyond what was previously "natural" for many people? Often these also start first as speculative ideas, then experiments, then just options for the rich, but later become much more widely available.

and therefore the only rational course of action is to put $100,000 straight into the pockets of grifters. how dare I make any value judgments at all about cryonicists based on their extreme distaste for the scientific method, consistent history of failure, and use of extremely exploitative marketing?

 

The problem is that today's state of the art is far too good for low hanging fruit. There isn't a testable definition of GI that GPT-4 fails that a significant chunk of humans wouldn't also fail so you're often left with weird ad-hominins ("Forget what it can do and results you see. It's "just" predicting the next token so it means nothing") or imaginary distinctions built on vague and ill defined assertions ( "It sure looks like reasoning but i swear it isn't real reasoning. What does "real reasoning" even mean ? Well idk but just trust me bro")

a bunch of posts on the orange site (including one in the linked thread with a bunch of mask-off slurs in it) are just this: techfash failing to make a convincing argument that GPT is smart, and whenever it’s proven it isn’t, it’s actually that “a significant chunk of people” would make the same mistake, not the LLM they’ve bullshitted themselves into thinking is intelligent. it’s kind of amazing how often this pattern repeats in the linked thread: GPT’s perceived successes are puffed up to the highest extent possible, and its many(, many, many) failings are automatically dismissed as something that only makes the model more human (even when the resulting output is unmistakably LLM bullshit)

This is quite unfair. The AI doesn't have I/O other than what we force-feed it through an API. Who knows what will happen if we plug it into a body with senses, limbs, and reproductive capabilities? No doubt somebody is already building an MMORPG with human and AI characters to explore exactly this while we wait for cyborg part manufacturing to catch up.

drink! “what if we gave the chatbot a robot body” is my favorite promptfan cliche by far, and this one has it all! virtual reality, cyborgs, robot fucking, all my dumbass transhumanist favorites

There's actually a cargo cult around downplaying AI.

The high level characteristics of this AI is something we currently cannot understand.

The lack of objectivity, creativity, imagination, and outright denial you see on HN around this topic is staggering.

no, you’re all the cargo cult! I asked my cargo and it told me so

 

Running llama-2-7b-chat at 8 bit quantization, and completions are essentially at GPT-3.5 levels on a single 4090 using 15gb VRAM. I don't think most people realize just how small and efficient these models are going to become.

[cut out many, many paragraphs of LLM-generated output which prove… something?]

my chatbot is so small and efficient it only fully utilizes one $2000 graphics card per user! that’s only 450W for as long as it takes the thing to generate whatever bullshit it’s outputting, drawn by a graphics card that’s priced so high not even gamers are buying them!

you’d think my industry would have learned anything at all from being tricked into running loud, hot, incredibly power-hungry crypto mining rigs under their desks for no profit at all, but nah

not a single thought spared for how this can’t possibly be any more cost-effective for OpenAI either; just the assumption that their APIs will somehow always be cheaper than the hardware and energy required to run the model
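for the sake of argument, here’s a hedged back-of-envelope; every number in it is an assumption, not a measurement, but it gives a rough feel for what “one 4090 per user” costs before you even count cooling, the rest of the machine, or the fact that the card mostly sits idle:

```typescript
// Back-of-envelope cost of dedicating one 4090-class card to a single user's
// completions. All figures are assumptions for illustration only.
const cardPriceUsd = 2000;         // headline price from the quoted comment
const cardPowerKw = 0.45;          // ~450 W under sustained inference load
const electricityUsdPerKwh = 0.15; // assumed power price
const amortizationYears = 3;       // assumed useful life of the card
const hoursPerYear = 24 * 365;

const hardwareUsdPerHour = cardPriceUsd / (amortizationYears * hoursPerYear);
const powerUsdPerHour = cardPowerKw * electricityUsdPerKwh;

console.log(hardwareUsdPerHour); // ≈ 0.076 USD/hour just to own the card
console.log(powerUsdPerHour);    // ≈ 0.068 USD/hour to feed it while it generates
// roughly $0.14/hour per dedicated user, before cooling, hosting, or the
// rest of the machine
```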

 

I defederated us from two lemmy instances:

  • exploding-heads: transphobia
  • basedcount: finally I get to ban most of r/PoliticalCompassMemes in one go
 

we suffered some extremely unexpected downtime while I deployed a trivial change (a reverse proxy from http://awful.systems/archives to http://these.awful.systems/archives) to prod

the downtime was unrelated to the deployment change; instead, it seems like lemmy-ui started crashing because it couldn't render the app icons it uses when saved as a home screen app on mobile. it uses a fairly heavy dependency to do this, and has no error handling in case the source icon data is corrupt, which causes it to crash on every request (resulting in a 503 Service Unavailable error for everyone who tried to access awful.systems during this outage)

I don't know how that corruption occurred or why it was persistent (the app icon data should be fully static as part of the Nix store as far as I know), so until I can dig in I've disabled generating app icons for our instance. since it seems like we're the first ones to hit this bug, I'll do my best to keep the patch upstreamable so other lemmy instances can benefit from the fix
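for anyone curious, here's a rough sketch (hypothetical names, not lemmy-ui's actual code) of the kind of guard that would have kept one chunk of corrupt icon data from 503ing the whole instance:

```typescript
// Hypothetical sketch of the missing error handling: if icon generation blows
// up on corrupt source data, log it and serve a static fallback instead of
// letting the exception turn every request into a 503.
import { promises as fs } from "node:fs";

const DEFAULT_ICON = "./static/default-icon.png"; // assumed fallback asset

async function appIconHandler(
  renderIcon: (source: Buffer, size: number) => Promise<Buffer>, // whatever heavy image library is in use
  source: Buffer,
  size: number,
): Promise<Buffer> {
  try {
    return await renderIcon(source, size);
  } catch (err) {
    // corrupt or unreadable icon data: complain in the logs, don't take the site down
    console.error("app icon generation failed, serving default:", err);
    return fs.readFile(DEFAULT_ICON);
  }
}
```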

 

the r/SneerClub archives are finally online! this is an early v1 which contains 1,940 posts grabbed from the Reddit UI using Bulk Downloader for Reddit. this encompasses both the 1000 most recent posts on r/SneerClub as well as a set of popular historical posts

as a v1, you'll notice a lot of jank. known issues are:

  • this won't work at all on mobile because my css is garbage. it might not even work on anyone else's screen; good luck!
  • as mentioned above, only 1,940 posts are in this release. there's a full historical archive of r/SneerClub sourced from pushshift at the archive data git repo (or clone git://these.awful.systems/sneer-archive-data.git); the remaining work here is to merge the BDFR and pushshift data into the same JSON format so the archives can pull in everything
  • markdown is only rendered for posts and first-level comments; everything else just gets the raw markdown. I couldn't figure out how to make miller recursively parse JSON, so I might have to write some javascript for this (there's a rough sketch of that recursive pass at the end of this post)
  • likewise, comments display a unix epoch instead of a rendered time
  • searching happens locally in your browser, but only post titles and authors are indexed to keep download sizes small
  • speaking of, there's a much larger r/SneerClub archive that includes the media files BDFR grabbed while archiving. it's a bit unmanageable to actually use directly, but is available for archival purposes (and could be included as part of the hosted archive if there's demand for it)

if you'd like the source code for the r/SneerClub archive static site, it lives here (or clone git://these.awful.systems/sneer-archive-site.git)
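and here's the rough sketch of that recursive markdown pass; the field names are guesses at what the merged BDFR/pushshift JSON will look like, and the renderer is pluggable (marked or anything else that turns markdown into HTML):

```typescript
// Walk a nested comment tree and render the markdown body at every depth,
// not just the first level. Field names are assumptions about the merged
// archive JSON, not a description of the current format.
interface ArchivedComment {
  author: string;
  created_utc: number;   // unix epoch, still needs rendering into a date
  body: string;          // raw markdown from the archive
  body_html?: string;    // rendered HTML, filled in by this pass
  replies: ArchivedComment[];
}

function renderCommentTree(
  comments: ArchivedComment[],
  render: (markdown: string) => string,
): ArchivedComment[] {
  return comments.map((comment) => ({
    ...comment,
    body_html: render(comment.body),
    // recurse so second-level and deeper comments get rendered too
    replies: renderCommentTree(comment.replies ?? [], render),
  }));
}
```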

 

RationalWiki is a highly biased cancel community which has attacked people like Scott Aaronson and Scott Alexander before.

Background on the authors according to a far-left website.

Let's at least be honest.

That is profiling work. (Not just "Ad hominem".)

The clash with the name "rational-wiki" is too strong not to be noted.

as the infrastructure admin of a highly biased far-left cancel community that attacks people like Scott Aaronson and Scott Alexander: mmm delicious

for bonus sneers, see the entire rest of the thread for the orange site’s ideas on why they don’t need therapy:

I was about to start psychotherapy last month, I ask my family's friend therapist If he could recommend me where to go. So he interviewed me for about 30 mins and ask me about all my problems.

A week later he send me the number of the therapist. I didnt write her yet, I think I dont need it as badly as before.

Those 30 mins were key. I am highly introspective and logical, I only needed to orderly speak my problems.

to quote Key & Peele: motherfucker, that’s called a job

 

hey let’s see what the people who killed and buried hacker culture think should go in the jargon file!

If the spirit of the original Jargon file was to be a living document, alas, it failed to keep with the times.

Hackers at large have moved away from Lisp despite Paul Graham and other evangelists […]

Hackers also have moved away from academia at large, and 9-5 jobs at tech behemoths are more natural habitats for them, which also shaped the lingo. I mean, there’s a whole layer of slang usually pertinent to outsourcing agencies and to cubicle farms.

I can’t wait for the corporate-approved jargon file, with any hint of anti-capitalism replaced with fun words and quotes from billionaires to share as the soul leaves my body

So in order for the document to evolve, we need a system to determine consensus. Everyone who cares runs a program on their computer that joins the network and registers their intent. With each proposed change, a query goes out to the network, and it's up to everyone on the network to say yea or nay to the proposal. With enough "yea"s, the document is updated.

...this is starting to sound like a blockchain, isn't it.

for the absolute sake of fuck. coming soon: HackerDAO! collect 10xer tokens and finally prove to the junior devs why corporate gives you so many points to crunch on! vote on fun new jargon, but only if it’s crypto-related! surely you’re hacker enough to be on the pump side of this pump and dump!
