This is an automated archive.
The original was posted on /r/singularity by /u/Blizzwalker on 2024-01-24 02:55:26+00:00.
The speculation and debate evident in this thread are probably both good and necessary. Going in circles, however, is not helpful. One ongoing debate is how we will know when we have achieved AGI. Given how slippery words are, many of the terms that come up repeatedly in these discussions are ambiguous. Understanding, consciousness, reasoning, sentience, and self-awareness all get used without agreement on their definitions, if they can even be defined clearly. Maybe we should measure progress in computation by functional ability instead of always wanting machines to somehow mimic the human mind (i.e., artificial intelligence). So, as some say, AGI need not be proven conscious or shown to have understanding, as long as it can get tasks done across the range of human activity.
It seems, however, that a machine with some of those elusive qualities, like self-awareness, might be more powerful than one lacking them. Don't we want the most powerful machine? Or isn't the machine that designs better machines going to produce the most powerful machine? Now we are back to the problem of defining such terms. If we can't define these qualities, how can we measure them?

Take the question of whether LLMs have understanding. If I prompt one to generate a poem, it gives the line "With lips as red as a rose". If it can use the word red in context, pick out red in an image, and generate a red image, doesn't it understand "redness"? But understanding can mean lots of things. It reminds me of the Mary's Room thought experiment (Jackson). Mary can only see in black and white, born with a visual processing deficit. Yet as a neuroscientist she knows everything about visual processing and color vision, the frequencies of the color spectrum, and so on. One day she has a brain operation and is cured. They take the bandages off and she sees a red tomato. Doesn't she now have some knowledge she didn't have before? In a similar way, if an LLM just digests language about the color red, does it understand red the way we do?

Maybe current models need something added before they can equal or surpass us, even though they may already be disruptive. And the same fuzziness about what we are looking for, and how to measure it, will continue to dog us. I guess one day soon it will be so different that it won't matter.