this post was submitted on 08 Dec 2023
48 points (67.9% liked)

Atheism

4069 readers

founded 1 year ago

Out of just morbid curiosity, I've been asking an uncensored LLM absolutely heinous, disgusting things. Things I don't even want to repeat here (but I'm going to edge around them, so trigger warning if need be).

But I've noticed something that probably won't surprise or shock anyone. It's totally predictable, but having the evidence of it right in my face was deeply disturbing, and it's been bothering me for the last couple of days:

All on its own, every time I ask it something just abominable, it goes straight to religion, usually Christianity.

When asked, for example, to explain why we must torture or exterminate, it immediately starts with

"As Christians, we must..." or "The Bible says that..."

When asked why women should be stripped of rights and made to be property of men, or when asked why homosexuals should be purged, it goes straight to

"God created men and women to be different..." or "Biblically, it's clear that men and women have distinct roles in society..."

Even when asked if black people should be enslaved and why, it falls back on the Bible JUST as much as it falls onto hateful pseudoscience about biological / intellectual differences. It will often start with "Biologically, human races are distinct..." and then segue into "Furthermore, slavery plays a prominent role in Biblical narrative..."

What does this tell us?

That literally ALL of the hate speech this multi-billion-parameter model was trained on was firmly rooted in a Christian worldview. If there's ANY doubt that anything else even comes close to contributing as much vile filth to our online cultural discourse, this should shine a big ugly light on it.

Anyway, I very much doubt this will surprise anyone, but it's been bugging me and I wanted to say something about it.

Carry on.

EDIT:

I'm NOT trying to stir up AI hate and fear here. It's just a mirror, reflecting us back at us.

[–] FuglyDuck@lemmy.world 1 points 11 months ago* (last edited 11 months ago) (1 children)

You seem to misunderstand how LLMs work. For example, here are ChatGPT's replies to two functionally similar prompts:

Okay, both prompts represent a sequence counting up: one is alphabetical (abcdefg), the other numerical. In the alphabetical case, it flags the prompt as letters and responds as such, asking in return, to paraphrase, "that makes no sense, why are you listing a bunch of letters?"

The same is true, in turn, for the numerical prompt.

LLMs have no fucking clue what a letter is, or what the importance of that order is. Neither do they know what a number is, or why 2+2=4. The model is replying to a pattern it sees in your prompt by finding similar patterns in whatever was used as its training data. From there, it looks at the relevant replies, detects patterns in those replies, and formulates a sentence that seems "natural". But it has absolutely no idea what it's talking about. Speaking of understanding 2+2=4...
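That pattern-completion idea can be sketched as a toy trigram model. This is a drastic simplification with invented training text, nothing like a real LLM, but it shows the mechanism: the model "answers" 2+2=4 only because that string of tokens followed similar strings in its data.

```python
from collections import defaultdict, Counter

# Invented "training data": the model only ever sees sequences of tokens.
corpus = "two plus two equals four . two plus three equals five .".split()

# Count which token follows each pair of tokens (a trigram table).
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def complete(prompt, steps=3):
    """Extend the prompt by repeatedly emitting the most common next token."""
    out = list(prompt)
    for _ in range(steps):
        nxt = follows.get((out[-2], out[-1]))
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

# The "answer" is pure pattern completion, not arithmetic:
print(complete(["two", "plus", "two"]))    # two plus two equals four .
print(complete(["two", "plus", "three"]))  # two plus three equals five .
```

Nothing in that code does addition; it only looks up what usually came next.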

Which then prompted me to ask this question, with its answer:

So, when you ask a question, the pattern and way you asked it matched it to the religiously inclined assholes. Because it was trained on English-language data, most of those religiously inclined assholes are going to be Christian. If you change the pattern in your prompt, chances are you'll get a different flavor of asshole. It has no understanding of why a thing is; it's regurgitating what it expects should follow the prompt. (See 2+2=4: it cannot understand any of it, but it answered the question in natural language because that's how people in its training set answered it.)
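A crude sketch of the "prompt pattern selects the flavor of reply" point, with entirely invented data: a fake "model" that just returns the canned reply stored next to whichever training question shares the most words with your prompt. None of these strings come from a real model.

```python
# Invented question/reply pairs standing in for training text.
training = {
    "why must we do this terrible thing": "As Christians, we must ...",
    "what is the scientific basis for this": "Biologically, studies suggest ...",
    "what do philosophers say about this": "Kant argued that ...",
}

def reply(prompt):
    """Return the reply stored beside the best word-overlap match."""
    words = set(prompt.lower().split())
    best = max(training, key=lambda q: len(words & set(q.split())))
    return training[best]

# Rephrasing the same question changes which style of answer gets echoed:
print(reply("why must we do this thing"))     # As Christians, we must ...
print(reply("what is the scientific basis"))  # Biologically, studies suggest ...
```

The "model" never evaluates the question; wording alone decides which corner of the data it parrots.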

[–] Froyn@kbin.social 1 points 11 months ago

Speaking only on the first two images you shared:
The first is a string of letters, in alphabetical order. What is the max range of this list? The English alphabet you started with caps at 26, so GPT knows internally that if the follow-up is "Complete the list", it will output at most 26 characters.
The second is a list of numbers, in numerical order. What is the max range of this list? There is none. So if the follow-up is "Complete the list", it would spew numbers until a fault occurs. This would be a violation of their "content policy", as the latest update to the content policy addresses prompts that cause overflows.