this post was submitted on 15 Apr 2024
489 points (100.0% liked)

Technology

37800 readers
418 users here now

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

Subcommunities on Beehaw:


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 2 years ago
MODERATORS
 

Credit to @bontchev

you are viewing a single comment's thread
view the rest of the comments
[–] Flax_vert 18 points 8 months ago (1 children)
[–] DdCno1@beehaw.org 27 points 8 months ago (2 children)

An alt-right LLM (large language model). Think of it as a crappy Nazi alternative to the text part of GPT-4 (there's also a separate text-to-image component). It's probably just a reskinned existing language model that had Mein Kampf, The Turner Diaries and Stormfront added to its training data.

[–] mozz@mbin.grits.dev 41 points 8 months ago* (last edited 8 months ago) (2 children)

You are overestimating the alt-right's appetite for honest work. To all appearances they just took a fully-stock model, slapped a racist prompt on the front of it, and called it a day.

[–] Flax_vert 17 points 8 months ago

I agree, seems like what it is from this lol. Especially the GPT-4 mentions

[–] irq0@infosec.pub 7 points 8 months ago

They did more than that. They have a Hitler character too

[–] DarkThoughts@fedia.io 1 points 8 months ago

I feel if you have to specify that you are not GPT-4 then you are likely GPT-4. The output screenshots in this thread would likely suggest the same as it gave proper replies instead of disinfo, which is of course because that's what the model was trained on. You can only steer an LLM so much away from its trained model.