[–] KingThrillgore@lemmy.ml 76 points 6 months ago (2 children)

Generative AI has really become a poison. It'll be worse once generative AI is trained on its own output.
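
(A toy numpy sketch of that feedback loop, often called "model collapse"; the "model" here is just a Gaussian refit to its own samples, so the numbers are purely illustrative:)

```python
# Toy illustration of training on your own output ("model collapse").
# The "model" is a Gaussian refit to its own samples each generation;
# with small sample sizes, the fitted spread drifts toward zero.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=20)  # real, human-made data

for generation in range(30):
    mu, sigma = data.mean(), data.std()
    # The next generation trains only on the previous generation's output.
    data = rng.normal(loc=mu, scale=sigma, size=20)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: sigma = {sigma:.3f}")
```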

[–] Simon@lemmy.dbzer0.com 33 points 6 months ago* (last edited 6 months ago) (3 children)

Here's my prediction. Over the next couple of decades the internet is going to be so saturated with fake shit and fake people that it'll become impossible to use effectively, like cable television. After this happens for a while, someone is going to create a fast, private internet, like a whole new protocol, and it's going to require ID verification (fortunately automated by AI) to use. Your name, age, country, and state are all public to everybody else and embedded into the protocol.
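
(A toy sketch of what "identity embedded into the protocol" could mean in practice; the ID authority, its key, and the field names are all made up for illustration, using the Python cryptography package:)

```python
# Purely illustrative: a hypothetical ID authority signs a user's public
# identity fields, and every peer checks the signature before connecting.
import json

from cryptography.hazmat.primitives.asymmetric import ed25519

# The (hypothetical) ID authority's keypair.
authority_key = ed25519.Ed25519PrivateKey.generate()
authority_pub = authority_key.public_key()

# Public identity fields embedded in every connection, per the prediction.
identity = {"name": "Jane Doe", "age": 34, "country": "US", "state": "OR"}
blob = json.dumps(identity, sort_keys=True).encode()
signature = authority_key.sign(blob)

# Any peer verifies the attestation before accepting the connection.
authority_pub.verify(signature, blob)  # raises InvalidSignature if forged
print("identity verified:", identity)
```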

The new 'humans only' internet will be the new streaming, and eventually it'll take over the web (until they eventually figure out how to ruin that too). In the meantime, they'll continue to exploit the infested hellscape internet because everybody's grandma and grandpa are still on it.

[–] treadful@lemmy.zip 48 points 6 months ago (1 children)

I would rather wade through bots than exist on a fully doxxed Internet.

[–] rottingleaf@lemmy.zip 3 points 6 months ago

Yup. I have my own prediction: that humanity will finally understand the wisdom of the PGP web of trust and use it for friend-to-friend networks over the Internet. After all, you can exchange public keys by scanning QR codes; it's very intuitive now.
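
(For a sense of how little plumbing that takes, here's a minimal sketch, assuming the cryptography and qrcode Python packages and an Ed25519 key standing in for a real PGP key:)

```python
# Generate a keypair and render the public key as a QR code that a friend
# can scan to add you to their web of trust.
import base64

import qrcode
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

# Generate a signing keypair (stand-in for a real PGP key).
private_key = ed25519.Ed25519PrivateKey.generate()
public_bytes = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

# Encode the public key so it fits comfortably in a QR code.
payload = "wot-key:" + base64.b64encode(public_bytes).decode()

# A friend scans this image and pins the key locally.
qrcode.make(payload).save("my_public_key.png")
print("Share my_public_key.png; key:", payload)
```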

That would be cool. No bots. Unfortunately, corps, govs, and other such mythical demons really want to be able to automate influencing public opinion, so this won't happen until the Web's potential for such influence is sucked dry. That is, until nobody in their right mind would use it.

[–] Baylahoo@sh.itjust.works 10 points 6 months ago

That sounds very reasonable as a prediction. I could see it being a pretty interesting Black Mirror episode. I would love for it to stay fiction, though.

[–] blusterydayve26@midwest.social 16 points 6 months ago* (last edited 6 months ago) (1 children)

You’re two years late.

Maybe not for the reputable ones, that's 2026, but these shysters have been digging out the bottom of the swimming pool for years.

https://theconversation.com/researchers-warn-we-could-run-out-of-data-to-train-ai-by-2026-what-then-216741

[–] k110111@feddit.de 2 points 6 months ago (1 children)

New models already train on synthetic data. It's already a solved problem.

[–] blusterydayve26@midwest.social 1 points 6 months ago (1 children)

Is it really a solution, though, or is it just GIGO (garbage in, garbage out)?

For example, GPT-4 is about as biased as the medical literature it was trained on; it's no less biased than its training input, which makes it less accurate than humans:

https://www.thelancet.com/journals/landig/article/PIIS2589-7500(23)00225-X/fulltext

[–] k110111@feddit.de 1 points 6 months ago

All the latest models are trained on synthetic data generated by GPT-4, even the newer versions of GPT-4 itself. OpenAI realized it too late and had to edit their license after Claude launched. Human-generated data could only get us so far; the recent Phi-3 models, which manage to perform very well for their size (3B parameters), can only achieve this feat because of synthetic data generated by AI.
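
(The pattern is basically distillation. Here's a toy numpy sketch of the idea; the linear "teacher" and "student" are stand-ins for real models:)

```python
# Toy sketch of training on synthetic data: a teacher fitted on real data
# labels fresh inputs, and a student learns from those teacher labels
# alone, never seeing a human-made label.
import numpy as np

rng = np.random.default_rng(0)

# Real, human-made data: y = 3x + noise.
x_real = rng.uniform(-1, 1, size=(200, 1))
y_real = 3 * x_real[:, 0] + rng.normal(0, 0.1, size=200)

# Teacher: least-squares fit on the real data.
w_teacher = np.linalg.lstsq(x_real, y_real, rcond=None)[0]

# Synthetic dataset: new inputs labeled by the teacher, not by humans.
x_syn = rng.uniform(-1, 1, size=(1000, 1))
y_syn = x_syn @ w_teacher

# Student: trained purely on the teacher's outputs.
w_student = np.linalg.lstsq(x_syn, y_syn, rcond=None)[0]
print("teacher:", w_teacher, "student:", w_student)
```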

I didn't read the paper you mentioned, but recent LLMs have progressed a lot, not just on benchmarks but also when evaluated by real humans.