this post was submitted on 14 Feb 2024
671 points (95.5% liked)
Technology
This shit is just analog computing though, right? Like at its base, we're just reproducing analog computation in a digital environment and then framing it a million different ways, like we've been doing since the seventies. We've actually had this shit since the first computers, which were analog. The whole reason we moved to digital, though, is that the results were easier to break down and parse, and we had control over every step of the process to confirm it was correct, and that it would be correct every time. A clearer sense of limitations and constraints, basically.
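To make that "control over every step" point concrete, here's a toy contrast (my own illustration, not anything from a real machine): a "digital" sum is exact and checkable at every step, while an "analog" one picks up a little noise on every operation, so you can never confirm an individual step was right.

```python
import random

random.seed(1)

def analog_add(a, b, noise=0.01):
    # every "analog" operation carries a small error term (made-up noise model)
    return a + b + random.gauss(0.0, noise)

digital = sum(range(100))   # exact, identical every run
analog = 0.0
for i in range(100):
    analog = analog_add(analog, i)

print(digital, analog)      # 4950 vs. something close to, but not exactly, 4950
```

The digital result can be verified step by step; the analog one can only be bounded.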
Now I'm not entirely against analog computing as a matter of fact, right, in fact I think it can be pretty cool if we recognize it for what it is, but at the same time I can't help thinking the level of hype around it is fucking insane. Primarily because it's not easily controllable. Not in the sense that we're gonna somehow invent a rogue AI that will kill us all, or whatever garbage, but in the sense that, while you can get reproducible results easily enough (such is the nature of computation), it is very hard to control what the output of a given neural network actually is. You can process loads of information extremely quickly, but what use is that if I don't know whether the solution is correct, or just a kind of ballpark figure? That's the main issue.
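That "reproducible but only a ballpark figure" distinction is easy to demo with a toy (my own sketch, not from any real system): a one-weight "network" trained by gradient descent to multiply by 2. With a fixed seed it gives the exact same answer every run, but that answer is still only approximately right, while the plain algorithm is exact by construction.

```python
import random

random.seed(0)
w = random.gauss(0.0, 1.0)                  # random initial weight
xs = [random.uniform(-1, 1) for _ in range(100)]

# a few steps of gradient descent on mean squared error against y = 2x
for _ in range(20):
    grad = sum(2 * (w * x - 2.0 * x) * x for x in xs) / len(xs)
    w -= 0.5 * grad

exact = 2.0 * 3.0    # the basic, more efficient algorithm: just multiply
approx = w * 3.0     # the trained model's estimate: close, but not exact

print(exact, approx)
```

Run it twice and `approx` is identical both times (reproducible), but it never quite equals 6.0 (not exactly correct), and nothing in the loop tells you how far off it is without checking against the real answer.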
Again, fine if we recognize that, but I don't think we're anywhere close to just randomly inventing a rogue consciousness. Not from what I've seen. We're still barely good at image recognition and generation in an actually complicated environment, and even then it's pretty hard to get specifically what you want, partially because hype is driving so much of the development at this point and the implementations are bunk and, again, kind of uncontrollable. Venture capital jumping down this thing's throat has partially blocked its airway, as I see it. Still a potentially useful technology, but a million stupid tech demos and image generators for nonsensical memes to flood everyone with is the dumbest shit imaginable, and even dumber than that is the number of venture capitalists I see who want to somehow monetize it.
And so I have to ask, right: if I want a robot to sort little plastic beads by color, do I get a large language model on that, or do I just run a basic, more efficient algorithm that filters the beads to a certain color range as recorded by the camera, and that's it? Do I want to translate a sentence with AI, or do I want to manually run a straight word-to-word conversion, maybe adjusted over a couple of passes to check whether it contextually makes sense, with something like a Markov chain? Trick question: they're both the same approach. AI has just done it in a way where I can apply a kind of broader paintbrush to the thing and get my results a little faster and with a little less thought, even if I have less control over it.
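For the bead case, the "basic and more efficient algorithm" could be as small as a nearest-reference-color check. This is a hypothetical sketch (the bin names and RGB values are made up), but every step of it is inspectable, which is the whole point:

```python
# Made-up reference readings for each bin, as the camera would report them
REFERENCE = {
    "red":   (200, 40, 40),
    "green": (40, 180, 60),
    "blue":  (50, 60, 200),
}

def sort_bead(rgb):
    """Return the bin whose reference color is nearest in RGB space."""
    def dist2(a, b):
        # squared Euclidean distance between two RGB triples
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REFERENCE, key=lambda name: dist2(rgb, REFERENCE[name]))

print(sort_bead((210, 50, 35)))   # a reddish reading lands in the "red" bin
```

No training, no weights, and if a bead lands in the wrong bin you can read off exactly which distance comparison put it there.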