this post was submitted on 29 Jan 2024
262 points (100.0% liked)
Technology
Anything a human can be trained to do, a neural network can be trained to do.
Yes, there will be a lack of trained humans for those positions... but spinning up enough "senior engineers" will be as easy as moving a slider on a cloud computing interface... or a remote API call... done by whichever NN comes to replace the people from HR.
Cue the humanoid robots.
Better yet: outsource the creation of "qualified oversight", and just download/subscribe to some when needed.
Citation needed
Humans are neural networks... you can cite me on that.
(Notice I didn't say anything about the complexity, structure, or fundamental functioning of a human neural network. Everything points to modern artificial NNs being somewhat tangential to human ones... but also to there being some overlap already, and to that overlap being increasable.)
Humans are a lot more than the mathematical abstraction that is a neural network.
You could say that you believe that any computational task a human brain can accomplish, a neural network can also accomplish. That simply assumes that all of the higher-level structures (the different parts of the brain allocated to particular tasks, the way it encodes and interacts with memories and absorbs new skills, the variety of chemical signals that communicate more than a single 0-to-1 number sent through each neuron-to-neuron connection) are abstractable within the mathematical construct of a neural network in some doable way. But that's (a) not at all obvious to me, (b) not at all the same as simply asserting that we've got it all tackled now that we can do some great stuff with neural networks, and (c) not implying anything at all about how soon it'll happen (it could take 5 years, or 500, although my feeling is on the shorter side as well).
Artificial NNs are simulations (not "abstractions") of animal and human neural networks... so, by definition, humans are not more than a neural network.
Not how it works.
Animal neurons respond like a clamping function: a constant zero output up to some threshold, above which they start releasing neurotransmitters as a function of the input values. Artificial NNs have been able to simulate that for a while.
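That clamping behavior (zero up to a threshold, then output growing with the input) is essentially what ReLU-style activations model. A minimal sketch, with an illustrative threshold value:

```python
import numpy as np

def thresholded_relu(x, threshold=0.5):
    # Zero below the threshold; above it, output grows with the input.
    # A crude model of a neuron's firing response.
    return np.where(x > threshold, x - threshold, 0.0)

inputs = np.array([-1.0, 0.2, 0.5, 1.5])
outputs = thresholded_relu(inputs)  # only the last input clears the threshold
```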
Still, for a long time it was thought that copying the human connectome and simulating it would be required before we'd start seeing human-like behaviors.
Then, some big surprises came from a few realizations:
There are still a couple things to tackle:
The first one is kind of getting solved by attention heads and self-reflection, but I'd imagine that adding extra layers that "surface" deeper states into shallower ones might be a closer approach.
The second one... right now we have LoRAs, which work more like psychedelics or psychoactive drugs, in a "bulk" kind of way... with surprisingly good results, but still.
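For context, a LoRA leaves the pretrained weights frozen and learns only a small low-rank correction on top of them, which is why it acts in that "bulk" way. A minimal sketch with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # model width, low rank (r << d)

W = rng.normal(size=(d, d))          # frozen pretrained weight matrix
A = rng.normal(size=(d, r)) * 0.01   # small trainable factor
B = np.zeros((r, d))                 # zero-init, so the update starts as a no-op

def lora_forward(x):
    # Only A and B would be trained; W stays untouched.
    return x @ W + x @ A @ B

x = rng.normal(size=(1, d))
y = lora_forward(x)
# 2*d*r trainable numbers instead of d*d: a cheap, coarse adjustment
```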
Where it will really start getting solved is with massive-scale neuromorphic hardware accelerators the size of a 1TB microSD card (a proof of concept is already here: https://www.science.org/doi/10.1126/science.ade3483 ), which could cut training times by 10 orders of magnitude. Shoving those into a billion smartphones, then into some humanoid robots, is when the NN age will really get started.
Whether that's going to take more or less than 5 years, it's hard to say, but surely everyone is trying as hard as possible to make it less.
Then, imagine a "trainee" humanoid robot with maybe 1,000 of those accelerators, whose trained NN for whatever task can be copied over to as many simple "worker" robots as needed. Imagine a company spending a few billion USD training a wide range of those NNs, then offering a per-core subscription to other companies... at a fraction of the cost of similarly trained humans.
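The copy step in that scenario is cheap because a trained network's knowledge is just its parameter arrays; cloning it onto N workers is a memory copy, not a new training run. A toy sketch with made-up layer shapes:

```python
import numpy as np

# Hypothetical "trainee" network: after training, its knowledge is just arrays.
trained = {"layer1": np.random.rand(64, 64), "layer2": np.random.rand(64, 10)}

# Deploying to worker robots is a memory copy, not a second training run.
workers = [{name: w.copy() for name, w in trained.items()} for _ in range(5)]
# Every worker starts out behaviorally identical to the trainee.
```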
TL;DR: we haven't seen nothing yet.
Imma stop you right there
What's the neural net that implements storing and retrieving a specific memory within the neural net after being exposed to it once?
Remember, you said not more than a neural net -- anything you add to the neural net to make that happen shouldn't be needed, because humans can do it, and they're not more than a neural net.
Recurrent (cyclic) neural networks.
https://en.m.wikipedia.org/wiki/Recurrent_neural_network
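A minimal sketch of that recurrence, with random untrained weights, showing how the hidden state carries information from earlier inputs forward:

```python
import numpy as np

rng = np.random.default_rng(42)
hidden_size, input_size = 4, 3

W_h = rng.normal(scale=0.5, size=(hidden_size, hidden_size))  # state -> state
W_x = rng.normal(scale=0.5, size=(hidden_size, input_size))   # input -> state

def rnn_step(h, x):
    # h is the loop: the network's running "memory" of everything seen so far.
    return np.tanh(W_h @ h + W_x @ x)

h = np.zeros(hidden_size)
for x in rng.normal(size=(6, input_size)):  # feed a sequence of 6 inputs
    h = rnn_step(h, x)
# h now depends on the entire sequence, not just the last input
```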
Any other questions?
I think we're gonna have to agree to disagree as to the nature of neural net technology. I'm clearly not talking about optimizing handwriting recognition.
That's unfortunate, I wish we could agree on something instead. Anyway, let's leave it so, then. ✌️
We don't even know what consciousness or sentience is, or how the brain really works. The hundreds of millions spent on trying to accurately simulate a rat's brain (the Blue Brain Project) have not brought us much closer, and there may yet be quantum effects in the brain that we are barely beginning to recognise (https://phys.org/news/2022-10-brains-quantum.html).
I get that you are excited, but it really does not help anyone to exaggerate the efficacy of the AI field today. You should read some of Brooks' enlightening writing, like Elephants Don't Play Chess, or the aeroplane analogy (https://rodneybrooks.com/an-analogy-for-the-state-of-ai/).
Where did I exaggerate anything?
We know more than you might realize. For instance, consciousness correlates with the differentiation (∆) between separate brain areas; when they all go into sync, consciousness is lost. We see similar behavior with NNs.
It's nice that you mentioned quantum effects, since the NN models all require a certain degree of randomness ("temperature") to return the best results.
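("Temperature" here is ordinary pseudo-random sampling over the model's output distribution; a minimal sketch:)

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    # Low temperature -> nearly deterministic argmax;
    # high temperature -> nearly uniform (more "creative") picks.
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.1]
greedy_ish = sample_with_temperature(logits, 0.1, rng)   # almost always picks index 0
random_ish = sample_with_temperature(logits, 10.0, rng)  # close to a uniform pick
```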
Therein lies the problem. Current NNs have overcome the limitations of 1:1 accurate simulations by solving only for the relevant parts, then increasing the parameter counts to a point where they perform better than the original thing.
It's kind of a brute force approach, but the results speak for themselves.
I'm afraid the "state of the art" in 2020 was not the same as the "state of the art" in 2024. We have a new tool: LLMs. They are the glue needed to bring all the siloed AIs together, a change as radical as that from air flight to spaceflight.
The human brain is the most complex object in the known universe. We are only scratching the surface of it right now. Discussions of consciousness and sentience are more a domain of philosophy than anything else. The true innovations in AI will come from neurologists and biologists, not from computer scientists or mathematicians.
Quantum effects are not randomness. Emulating quantum effects is possible, they can be understood empirically, but it is very slow. If intelligence relies on quantum effects, then we will need to build whole new types of quantum computers to build AI.
Well, there we agree. In that the results are very limited I suppose that they do speak for themselves 😛
This is what I mean by exaggeration. I'm an AI proponent, I want to see the field succeed. But this is nothing like the leap forward some people seem to think it is. It's a neat trick with some interesting if limited applications. It is not an AI. This is no different from when Minsky believed that by the end of the 70s we would have "a machine with the general intelligence of an average human being", which is exactly the sort of over-promising that led to the AI field having a terrible reputation and all the funding drying up.
Come on. This is a gross exaggeration. Neural nets are incredibly limited. Try getting them to even open a door. If we someday come up with a true general AI that really can do what you say, it will be as similar to today's neural nets as a space shuttle is to a paper aeroplane.
https://www.youtube.com/watch?v=wXxrmussq4E
Have you not been paying attention to robotics recently? Opening doors is a solved problem with consumer grade hardware and software at this point.
I wouldn't say $74k is consumer grade, but Spot is very cool. I doubt that it is purely a neural net, though; there is probably a fair bit of actionism at work.
For now there is: AI vs. Stairs, you may need to wait for a future video for "AI vs. Doors" 🤷
BTW, that is a rudimentary neural network.
I've seen a million of such demos but simulations like these are nothing like the real world. Moravec's paradox will make neural nets look like toddlers for a long time to come yet.
Well, that particular demo is more of a cockroach than a toddler; the neural network used seems to have fewer than a million weights.
Moravec's paradox holds true on two fronts:
But keep in mind that was in 1988, about 20 years before the first 1024-core multi-TFLOP GPU was designed, and that by training an NN, we're brute-forcing away the lack of a formal description of the algorithm.
We're now looking toward neuromorphic hardware on the trillion-"core" scale; computing resources will soon become a non-issue, and the lack of a formal description will only be as much of a problem as it is for a toddler... until you copy the first trained NN to an identical body and re-training costs drop to O(0)... which is much less than training even a million toddlers at once.
I'm assuming you're being facetious. If not...well, you're on the cutting edge of MBA learning.
There are still some things that just don't make it into books, drawings, or written content. It's one of the drawbacks humans have: we keep some things in our brains that just never make it to paper. I say this as someone who has encountered conditions in the field that have no literature on their effects. In the niches and corners of any practical field there are just a few people who do certain types of work, and some of them never write down their experiences. It's frustrating as a human doing the work, but it would not necessarily be so for an ML assistant, unless it gains a new ability to understand and identify where solutions don't exist and to go perform expansive research to extend the knowledge. More importantly, it needs the operators holding the purse to approve that expenditure, trusting that the ML output is correct and not asking it to extrapolate in lieu of testing. Will AI/ML be there in 20 years to pick up the slack, put its digital foot down stubbornly, and point out that lives are at risk? Even as a proponent of ML/AI, I'm not convinced that kind of output is likely, or even desired by the owners and users of the technology.
I think AI/ML can reduce errors and save lives. I also think it is limited in the scope of risk assessment where there are no documented conditions on which to extrapolate failure mechanisms. Heck, humans are bad at that, too - but maybe more cautious/less confident and aware of such caution/confidence. At least for the foreseeable future.
ISO 9001 would like to talk to all those people and have them either document, or see the door. Not really cutting edge, more of a basic business certification to even dream about bidding for any government related project (then, people still lie and don't keep everything documented... and shit happens, but such are people).
Get a humanoid learning robot, you'll have a log of everything it experienced at the end of the day, with exact timestamps, photos, and annotations.
Auto-GPT does it. The operator's purse is why it doesn't get used much more 😉