this post was submitted on 24 Jun 2023
99 points (100.0% liked)


There are lots of articles about bad use cases of ChatGPT that Google has already enabled for decades.

Want to get bad medical advice for the weird pain in your belly? Google can tell you it's cancer, no problem.

Do you want to know how to make drugs without a lab? Google even gives you links to stores where you can buy the materials for it.

Want some racism/misogyny/other evil content? Google is your ever helpful friend and garbage dump.

What's the difference apart from ChatGPT's inability to link to existing sources?

Edit: Just to clear things up: this post is specifically not about the new use cases that come from AI. Sure, Google cannot automatically churn out semi-non-functional mini programs, and Google will not write an entire fake paper for me. I am specifically talking about the "This will change the world" articles that cover things ChatGPT can do but Google can already do just as well.

[–] justgohomealready@lemmy.pt 14 points 1 year ago (2 children)

The problem with ChatGPT is that it allows content creation to be automated.

Imagine a single guy using ChatGPT to control thousands of social media bots that answer in a human-like way and are able to follow conversations and context, but that all defend the same point of view.

Or imagine a single guy controlling thousands of "local news blogs" that have a constant stream of fresh AI-generated content (both articles and comments), once again all pushing the same narrative.

That is the main problem with things like ChatGPT if left uncontrolled: they allow anyone to create their own "troll farm".

[–] Ganbat@lemmynsfw.com 6 points 1 year ago (2 children)

I just wish humans could be not awful for once in our history. You know what I've done with ChatGPT? I had it help me convert a big Python function into a one-line delta, and got it to write a short horror story about a man eating a can of beans in a theater, amongst other silly things. But everyone suffers, and things get harder to make use of because of power-mad scumbags who see everything as a means to gain control of others.
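
(For illustration only: a minimal sketch of the kind of conversion mentioned above, assuming "one-line delta" refers to collapsing a multi-line Python function into a one-line lambda. The function and its logic are invented for this example, not taken from the comment.)

```python
# Hypothetical multi-line function: sum the squares of the even numbers in a list.
def sum_even_squares(values):
    total = 0
    for v in values:
        if v % 2 == 0:
            total += v * v
    return total

# The same logic collapsed into a single line, the kind of rewrite ChatGPT can help with.
sum_even_squares_oneliner = lambda values: sum(v * v for v in values if v % 2 == 0)

print(sum_even_squares([1, 2, 3, 4]))            # 20
print(sum_even_squares_oneliner([1, 2, 3, 4]))   # 20
```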

[–] PlasticExistence@beehaw.org 8 points 1 year ago

Hell is other people

[–] diannetea@beehaw.org 3 points 1 year ago

I had it write a rap about the poop emoji

I mostly use it for silly things or to give me ideas, and I might try it again if we get back into running a game in Foundry VTT, because the macro I had it generate while messing around looked like it should work fine, and I'm lazy.

But I can't imagine using it for serious stuff ever. Maybe in a few years.

[–] squaresinger@feddit.de 1 points 1 year ago (1 children)

But that was possible before. It's just that these bots would copy content, add random words, and run it through Google Translate, translating it to a different language and back again. That already did the trick for the last 10 or so years.
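
(Roughly how that old trick works, as a minimal sketch: `translate()` below is a hypothetical stand-in, not a real Google Translate API call, and the stub simply returns its input so the example runs.)

```python
import random

def translate(text, src, dest):
    # Hypothetical stand-in for a machine-translation call; a real bot would
    # call an actual translation service here. This stub returns the text
    # unchanged so the sketch stays runnable.
    return text

def spin(original):
    # Insert a random filler word, then round-trip the text through another
    # language and back, which was typically enough to make copied content
    # look "new" to naive duplicate detection.
    words = original.split()
    words.insert(random.randrange(len(words) + 1), "really")
    noisy = " ".join(words)
    return translate(translate(noisy, "en", "de"), "de", "en")

print(spin("Copied article text goes here."))
```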

[–] justgohomealready@lemmy.pt 1 points 1 year ago

Those bots wouldn't pass the Turing test, that's for sure. Pure spam like you're describing is one thing; arguing with an AI (and losing) without even being aware that it's an AI on the other side is another.