this post was submitted on 25 Nov 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this - this one was a bit late, I got distracted)

[–] sailor_sega_saturn@awful.systems 11 points 1 day ago* (last edited 1 day ago) (6 children)

I woke up and immediately read about something called "Defense Llama". The horrors are never ceasing: https://theintercept.com/2024/11/24/defense-llama-meta-military/

Scale AI advertised their chatbot as being able to:

apply the power of generative AI to their unique use cases, such as planning military or intelligence operations and understanding adversary vulnerabilities

However, their marketing material, as is tradition, includes an example of terrible advice. Which is not great, given it's about blowing up a building "while minimizing collateral damage".

Scale AI's response to the news pointing this out was to complain that everyone took their murderbot marketing material seriously:

The claim that a response from a hypothetical website example represents what actually comes from a deployed, fine-tuned LLM that is trained on relevant materials for an end user is ridiculous.

[–] BlueMonday1984@awful.systems 13 points 1 day ago (5 children)

On the one hand, that spectacular failure could potentially dissuade the military from buying in and prolonging this bubble. On the other hand, having an accountability sink for war crimes would be a tempting offer to your average army.

[–] istewart@awful.systems 4 points 5 hours ago

The eventual war crimes trials will very likely reveal that "AI targeting" has already been used as an accountability sink for a premeditated ethnic cleansing policy in Gaza.
