this post was submitted on 11 Nov 2024
48 points (98.0% liked)

philosophy


I don’t know how there aren’t a myriad of problems associated with attempting to emulate the brain, especially with the end goal of destroying livelihoods and replacing one indentured servant with another. In fact, that’s what prompted this post: an advertisement for a talk hosted by my alma mater’s philosophy department asking what happens when we see LLMs discover phenomenological awareness.

I admit that I don’t have a ton of formal experience with philosophy, but I took one course in college that will forever be etched into my brain. Essentially, my professor explained to us the concept of a neural network and how, with more computing power, researchers hope to emulate the brain and establish a consciousness baseline against which to compare a human’s subjective experience.

This didn’t use to be the case, but in a particular sector, most people’s jobs are now just showing up at work, getting on a computer, and having whatever (completely unregulated and resource-devouring) LLM give them answers they could find themselves, quicker. And shit like Neuralink exists, and I think the next step will be to offer that with a ChatGPT integration or some dystopian shit.

Call me crazy, but I don’t think humans are as special as we think we are, and our pure arrogance wouldn’t stop us from creating another self and causing that self to suffer. Hell, we collectively decided to slaughter en masse another collective group with feelings (animals) to appease our tastebuds, a lot of us are thoroughly entrenched in our digital boxes because opting out would mean losing things we take for granted, and any discussion of these topics is taboo.

Data-obsessed weirdos are a genuine threat to humanity, and consciousness emulation never should have been a conversation piece in the first place without first understanding its downstream implications. Feeling like a certified Luddite these days.

WhyEssEff@hexbear.net 20 points 1 month ago (last edited 1 month ago)

As a data science undergrad who generally knows how these models work, I’d say LLMs are fundamentally not built in a way that could achieve a measure of consciousness.

Large language models are probability-centric models. At each step they essentially ask: given my one quintillion sentences and one quadrillion paragraphs on hand, which word probably comes next, given the current chain of output and the given input? This makes them really good at producing something that is voiced coherently. But this is not reasoning; it is parroting: a chain of dice rolls, weighted by all writing ever, that produces something which reads like a good output against the words of the input.
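
To make the "chain of dice rolls" point concrete, here’s a minimal toy sketch in Python. The word table and probabilities are invented for illustration; a real model computes them from its training corpus and billions of parameters, not a hard-coded dictionary.

```python
import random

# Toy illustration of next-word prediction as a weighted dice roll.
# The table below is a made-up stand-in for the model's learned weights.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "philosophized": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
}

def next_word(context):
    """Pick the next word by sampling from the weighted distribution."""
    probs = next_word_probs[context]
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Each call is just a sample; nothing in here "knows" what a cat is.
print(next_word(("the", "cat")))
print(next_word(("cat", "sat")))
```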

The entire idea behind prompt engineering is that these models cannot do internal reasoning, so you have to trick them into talking around themselves, writing the lines of logic out into the output so the model can then reference that text in its own context.
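
Here’s a rough sketch of what that "talking around itself" looks like in practice. The `ask_llm` function and the prompt wording are hypothetical placeholders, not any particular library’s API; the point is only that the "reasoning" is extra text we coax the model into generating.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever LLM API you like."""
    return "<model output for: " + prompt + ">"

question = "A train leaves at 3pm going 60 km/h. How far has it travelled by 5pm?"

# Plain prompt: the model has to jump straight to an answer, token by token.
direct = ask_llm(question)

# "Chain-of-thought" style prompt: we ask the model to write its steps into
# the output itself, so later tokens can condition on that text. The "logic"
# lives in the generated words, not in any internal reasoning process.
step_by_step = ask_llm(question + "\nWork through this step by step before giving the answer.")
```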

I do not think AGI, or whatever they’re calling Star Trek-tier AI, will arise out of LLMs and transformer models; I think it is fundamentally folly. What I see as fundamental elements of consciousness are either not covered by them at all (such as subjectivity) or remain sorely lacking even despite the advances in development (such as cognition). Call me a cynic, but I just truly think it’s not going to come out of genAI (as we’ve generally understood the technology behind it for the past couple of years) or further research into it.