this post was submitted on 30 Oct 2023
525 points (95.5% liked)

AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather

The real risk of AI isn't that it'll kill you. It's that a small group of billionaires will control the tech forever.

you are viewing a single comment's thread
[–] photonic_sorcerer@lemmy.dbzer0.com 65 points 1 year ago (4 children)

This is why we need large-scale open-source AI efforts, even if it scares the everliving shit out of me.

[–] uriel238@lemmy.blahaj.zone 8 points 1 year ago (2 children)

AI safety experts are worried that capitalists will be too eager to get AGI first and will discard caution (friendly AI principles) for mad science.

And I, for one, welcome our new robot overlords!

[–] zbyte64@lemmy.blahaj.zone 4 points 1 year ago (1 children)

Any AI safety expert who believes these oligarchs are going to get AGI and not some monkey's paw is also drinking the Kool-Aid.

[–] uriel238@lemmy.blahaj.zone 1 points 1 year ago

Actually, AI safety experts are worried that corporations are just interested in getting technology that achieves specific ends, and don't care that it's dangerous or insufficiently tested. Our rate of industrial disasters kinda demonstrates their views on risk.

For now, we are careening towards giving smart drones autonomy to detect, identify, target and shoot weapons at enemies long before they're smart enough to build flat-packed furniture from the IKEA visual instructions.

[–] PsychedSy@sh.itjust.works 3 points 1 year ago (1 children)

If we have to choose between corporations and the government ruling us with AI, I think I'm just gonna take a bullet.

[–] Kedly@lemm.ee 1 points 1 year ago (1 children)

Anarchy will never exist as anything but the exception to the rule. Governments are a form of power that the population can at least influence; weaker government will always mean stronger nobility or corporations.

[–] PsychedSy@sh.itjust.works 1 points 1 year ago (1 children)

We're failing at influencing now.

You may think you're choosing the best yoke, but I'd prefer none.

[–] Kedly@lemm.ee 1 points 1 year ago

Maybe in the future we can go back to smaller tribes/groups of people that take care of each other, but in the world as it exists today? An entity will come by sooner or later to conquer said group. We influence our government FAR better than we influence a corporation or dictator. Right now we need an equalizing big power, and democratic governments at least have to pretend to work for their people. Which, again, corporations and dictators do not.

[–] frezik@midwest.social 8 points 1 year ago (1 children)

I've been thinking about how to do that. The code for most AI is pretty basic and uninteresting; it's mostly massaging the input into something usable. Companies could open source their entire code base without letting anything important out.

The dataset is the real problem. Say you want to classify fruit to check if it's ripe enough for harvesting. You'll need a whole lot of pictures of your preferred fruit, both ripe and not ripe. You'll want people who know the fruit to label those images, and then you can feed them into a model. It's a lot of work, and it needs to attract a bunch of people to volunteer their time, largely the sort of people who haven't traditionally been a part of open source software.
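
A rough sketch of what that labeling-then-training workflow could look like, assuming volunteers sort photos into ripe/unripe folders. The folder layout, model choice, and hyperparameters here are illustrative assumptions, not anything from the thread:

```python
# Illustrative sketch only: volunteers sort photos into fruit_dataset/ripe/ and
# fruit_dataset/unripe/, and a small pretrained network is fine-tuned on them.
# Paths, model choice, and hyperparameters are assumptions for the example.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# ImageFolder infers the two classes from the subfolder names.
dataset = datasets.ImageFolder("fruit_dataset", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a pretrained backbone and swap the final layer for 2 classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The code really is the boring part; the volunteer labeling effort is where the value sits.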

[–] pinkdrunkenelephants@lemmy.cafe -2 points 1 year ago (2 children)

If we set up some kind of blockchain to just pay people to honestly differentiate between pictures, it could be done.

[–] echodot 9 points 1 year ago (1 children)

There is no problem in this world so serious that someone will not suggest blockchain as a potential solution.

[–] Corkyskog@sh.itjust.works -3 points 1 year ago

You're being hyperbolic and silly. Find me a solution to mass shootings or racism using blockchain.

[–] ICastFist@programming.dev 4 points 1 year ago (1 children)

Nah, using reCAPTCHA is the way to get free labor for that training.

[–] errer@lemmy.world 6 points 1 year ago

Might be one of the key democratizing forces us plebs will have…I do suggest people try out some of the open solutions out there already just to have that skill in their back pockets (e.g. GPT4All).
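
For anyone who wants to try, here's a minimal sketch of what running a local model through GPT4All's Python bindings can look like; the model filename is just an example and the exact API may vary by version:

```python
# Minimal sketch using the GPT4All Python bindings (pip install gpt4all).
# The model filename is an example; GPT4All will download it on first use.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # example model, swap for any you like

with model.chat_session():
    reply = model.generate(
        "In two sentences, why does open-source AI matter?",
        max_tokens=128,
    )
    print(reply)
```

Everything runs locally, which is the point: no API keys, no one else's servers.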

[–] r3df0x@7.62x54r.ru 3 points 1 year ago

Yep. As dangerous as that could be, it's better than centralizing it. There are already systems like GPT4All that come with good models that are slower than things like ChatGPT but work similarly well.