this post was submitted on 22 Feb 2024
238 points (93.1% liked)
Technology
Forgive me if I think any kind of nuclear reaction should not be handled by what we’re calling “AI.” It could hallucinate that it’s winning a game of chess by causing a nuclear blast.
That’s not how AI or nuclear fusion work.
Okay, you're forgiven; that doesn't change the fact that your opinion is flawed, however.
Putting aside your lack of knowledge of nuclear energy and AI systems, do you honestly think scientists are stupid enough to give a non-deterministic system complete control over critical systems? No, they are merely taking suggestions from it, with hard limits on what it can do.
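The "suggestions with hard limits" arrangement described above could be sketched roughly like this. This is a hypothetical illustration, not the reactor's actual control code: the parameter names, limit values, and `clamp_suggestion` function are all invented for the example.

```python
# Hypothetical sketch: an AI model only *suggests* actuator setpoints,
# and a deterministic supervisor clamps every suggestion to hard safety
# bounds before anything reaches the hardware.

# Assumed, illustrative limits -- not real reactor parameters.
SAFE_LIMITS = {
    "coil_current_ka": (0.0, 50.0),
    "heating_power_mw": (0.0, 10.0),
}

def clamp_suggestion(suggestion: dict) -> dict:
    """Force each suggested value into its hard limit."""
    safe = {}
    for name, value in suggestion.items():
        lo, hi = SAFE_LIMITS[name]
        safe[name] = min(max(value, lo), hi)
    return safe

# The model can "hallucinate" any number it likes; the supervisor
# ignores anything outside the safety envelope.
print(clamp_suggestion({"coil_current_ka": 999.0, "heating_power_mw": -3.0}))
```

The point is that the non-deterministic component never has authority: the deterministic clamp always has the last word.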
Setting aside the matter of "AI", this is a fusion reactor, not fission, so there's no scenario in which this can possibly cause an explosion. The absolute worst case scenario is that containment fails and the plasma melts and destroys the electromagnets and superconductors of the containment vessel before dissipating. It would be a very expensive mistake to repair and the reactor would be out of commission until it was fixed, but in terms of danger to anyone not literally standing right next to the reactor there is none. Even someone standing next to the reactor would probably be in more danger from the EM fields of a correctly functioning reactor than they would be from the plasma of a failed one.
Whatever you read that convinced you this is what an AI hallucination is needs a better editing pass.
Error builds upon error. It’s cursed from the start. When you factor in poisoned data, it never had a chance.
It’s not here yet because we aren’t advanced enough to make it happen. Dress it up in whatever way the owner class can swallow. That’s the truth. Dead on arrival
It seems like you are taking criticisms of LLMs and applying them to something that is very different. What poisoned data do you imagine this model having in the future?
That is a criticism of LLMs because new generations are being trained on writing that could be the output of LLMs, which can degrade the model. What suggests to you that this fusion reactor will be using synthetic fusion reactor data to learn when to stop itself?
That isn’t how any of this works…
You can’t just assume every AI works exactly the same. Especially since the term “AI” is such a vague and generalized definition these days.
The hallucinations you’re talking about, for one, are referring to LLMs and their losing track of the narrative when they are required to hold too much “in memory.”
Poisoned data isn’t even something an AI of this sort would really encounter unless intentional sabotage took place. It’s a private program training on private data, so where does the opportunity for intentionally bad data come from?
And errors don’t necessarily build on errors. These are models that predict 30 seconds into the future by using known physics and estimated outcomes. They can literally check their predictions in 30 seconds if the need arises, but honestly why would they? Just move on to the next calculation from virgin data and estimate the next outcome, and the next, and the next.
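The "predict ahead from fresh data, then verify against reality" loop described above could look roughly like this. It is a toy sketch under stated assumptions: the `predict` function is an invented stand-in for the real physics model, and the measurements and tolerance are made up for illustration.

```python
# Hypothetical sketch: each forecast starts from a *fresh* sensor
# measurement, so one bad prediction does not feed into the next.

def predict(state: float) -> float:
    """Toy 30-second forecast: assumes a simple 1% linear drift."""
    return state * 1.01

def control_loop(measurements: list[float], tolerance: float = 0.05) -> list[bool]:
    """For each new measurement, check the previous forecast against
    what actually happened, then forecast again from the fresh data."""
    checks = []
    forecast = None
    for measured in measurements:
        if forecast is not None:
            # error is measured against reality, not against older forecasts
            checks.append(abs(forecast - measured) <= tolerance * measured)
        forecast = predict(measured)  # restart from fresh data every step
    return checks

print(control_loop([100.0, 101.0, 102.0]))
```

Because every step re-anchors on a real measurement, errors cannot accumulate the way they do when a model is fed its own previous output.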
On top of all that… this isn’t even dangerous. It’s not like anyone is handing the detonator for a nuke to an AI and saying “push the button when you think is best.” The worst outcome is “no more power” which is scary if you run on electricity, but mildly frustrating if you’re a human attempting to achieve fusion.
Me, when I confidently spread misinformation about topics I don't even have a surface level understanding of.
Do you really think we're at I, Robot levels of AI right now?