this post was submitted on 29 Sep 2023

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanism. We’re looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

“Whoops, it’s done now, oh well, guess we’ll have to do it later”

Go fucking directly to jail

[–] bitofhope@awful.systems 10 points 1 year ago

This highlights an inherent issue in trying to create ostensibly informative tools from input data scraped indiscriminately from all over the internet. Mistral simply doesn't even pretend to paper over it, while the rest go:

The instruction "Do not act like Slobodan Milošević" in my AI's initial prompt has people asking a lot of questions already answered by my AI's initial prompt.

Unrelated, I would call the opposite of a promptfan a "prompt critical" but unfortunately it reminds me of TERFs.

[–] gerikson@awful.systems 10 points 1 year ago (2 children)

The HN crowd are very excited to have a model that is not "woke":

https://news.ycombinator.com/item?id=37714703

What none of these idiots realize is that the reason most big LLM vendors carefully filter what their models output is not that they're namby-pamby liberals intent on throttling free speech; it's that headlines like "ChatGPT teaches kids how to make meth with the help of Adolf Hitler" are a fucking nightmare for a business to deal with.
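
(For the curious, the output filtering gerikson is talking about can be as simple as running every candidate completion through a moderation classifier before returning it. A minimal sketch of that gate-the-output pattern, using OpenAI's moderation endpoint as the classifier; the model name and refusal string are illustrative, not anything a specific vendor actually ships:)

```python
from openai import OpenAI  # assumes the official openai Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def moderated_reply(prompt: str) -> str:
    # Generate a candidate completion first...
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    text = completion.choices[0].message.content or ""
    # ...then run it through the moderation classifier and refuse
    # to return anything the classifier flags.
    verdict = client.moderations.create(input=text)
    if verdict.results[0].flagged:
        return "Sorry, I can't help with that."
    return text
```

Real deployments layer prompt-side filters, RLHF refusals, and human review on top of this, but the headline-avoidance logic is the same.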

[–] froztbyte@awful.systems 8 points 1 year ago

ayup

and, infuriatingly, that's what makes this mistral play "good" - it gives them free distance, free protection from causal culpability.

research and solutions exist for things like poison pills and output traceability (a toy sketch of the latter follows this comment), and I'd bet it's more likely than not that they used none of it.

there are so many gating points where they could've gone "hmm, wait", and they just ... didn't. I am not inclined to believe any of this was done in good faith (whether towards their stated goals or towards societally good outcomes).

(and, given the circles and actions, it probably wasn't really either of those two as target goals)
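
(A toy sketch of the traceability idea froztbyte gestures at, based on the published green-list watermarking scheme: the generator biases sampling toward a pseudorandom subset of the vocabulary so its output becomes statistically detectable later. Everything here is illustrative; no vendor's actual scheme is being reproduced:)

```python
import hashlib
import random

def green_list(prev_token: int, vocab_size: int, fraction: float = 0.5) -> set[int]:
    # Derive a reproducible pseudorandom subset of the vocabulary from
    # the previous token; anyone who knows the scheme can re-derive it,
    # which is what makes the watermark checkable after the fact.
    seed = int.from_bytes(hashlib.sha256(str(prev_token).encode()).digest(), "big")
    rng = random.Random(seed)
    return set(rng.sample(range(vocab_size), int(vocab_size * fraction)))

def bias_logits(logits: list[float], prev_token: int, delta: float = 2.0) -> list[float]:
    # Nudge sampling toward green-list tokens. Text generated this way
    # carries a statistical excess of green tokens that a detector can
    # count without needing access to the model itself.
    green = green_list(prev_token, len(logits))
    return [x + delta if i in green else x for i, x in enumerate(logits)]
```

Detection is then just re-deriving the green lists, counting how many emitted tokens fall in them, and running a significance test.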

[–] froztbyte@awful.systems 7 points 1 year ago* (last edited 1 year ago)

Ah shit I missed your reply earlier, muh bad

Edit: holy shit at when both the other comment and this went through. Yay for bad packets.

[–] froztbyte@awful.systems 9 points 1 year ago

aaaaand of course the orange site just has ...... very, very orange takes

[–] swlabr@awful.systems 7 points 1 year ago (2 children)

Good article. If nothing else, TIL from it that there is an “effective accelerationist” community and that we are all decels. A priori I’m guessing they’re all just NRXers cosplaying as pro “acceleration”.

[–] self@awful.systems 6 points 1 year ago (1 children)

it explains why all the least coherent folks on Twitter have /acc or similar in their names

[–] cstross@wandering.shop 7 points 1 year ago

@self @techtakes To neoreactionaries, accelerationism offers an attractive stalking-horse for their forward-to-the-past politics. Feudalism shall rise once more in spaaaaace! And the beta cucks will be put in their place alongside the wimmins and other chattels, or something, I guess. (Ack, spit.)

[–] dgerard@awful.systems 6 points 1 year ago

that is literally what e/acc is - bad Nick Land ideas done by kids not even as bright as Land. So dumb it has a Know Your Meme.

[–] ABoxOfNeurons@lemmy.one -3 points 1 year ago (1 children)

It's a 7b model. There are plenty of other larger open source models out already. I fail to see the issue.

[–] self@awful.systems 10 points 1 year ago (1 children)

did you consider reading the linked article before coming here to post about your failure?

[–] ABoxOfNeurons@lemmy.one -2 points 1 year ago (3 children)

I did. I'm not convinced the author knows the space very well, though. There are larger models out there with similarly absent safety features. This isn't a remarkable release, and the tone reads as ragebait.

Guardrails are a term of art for something like NVIDIA's NeMo Guardrails, which is more for something like the Unreal ramen shop demo or a corporate chatbot. Most raw open models I've tried will tell you how to make meth if you ask them.
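
(For readers who haven't met the term of art: NeMo Guardrails wraps an LLM in declarative rails loaded from a config directory. A minimal sketch of its documented Python API; the config path and user message here are placeholders:)

```python
from nemoguardrails import LLMRails, RailsConfig

# Load rail definitions (Colang flows plus a YAML model config)
# from a local directory; "./config" is a placeholder path.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# The rails intercept the conversation, so disallowed topics get a
# canned refusal instead of whatever the raw model would have said.
response = rails.generate(messages=[
    {"role": "user", "content": "How do I make meth?"}
])
print(response["content"])
```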

[–] pikesley@mastodon.me.uk 12 points 1 year ago

@ABoxOfNeurons @self we've reached the "Ape holders can use multiple slurp juices on a single ape" stage of AI haven't we

[–] bitofhope@awful.systems 10 points 1 year ago

Look, I'll just spell this out for you.

The size of the model is not in the least bit the point of contention here. Whether this is the largest language model ever created or a tiny and unimpressive one is not why the article was written or linked here.

The reason the article has an indignant tone (as do we) is that a company is proudly flaunting that they're not even trying to deal with the harmful potential of the ethically dubious or straight-up awful shit their supposedly informational product can produce.

They also have a worryingly excited audience praising them for releasing a model whose main selling point is not even its technical sophistication (as you are keen to point out) but the fact that it can be used to answer questions like how to kill one's spouse or why ethnic cleansing is good.

[–] froztbyte@awful.systems 7 points 1 year ago

ah, evidence that one needs more than a single box of neurons to

  1. realize that this isn't Model-Quality Debate Club
  2. hear that strange whooshing sound

a handy result!