this post was submitted on 21 Sep 2023
300 points (100.0% liked)
Technology
Problem is: who is in charge of writing down the thing that is being voted on? E.g. "we need to protect the children" will get my yes vote. However, that is very unspecific, and the specific thing could be "we are scanning every text message and sent file on every one of your devices to compare them against child sexual abuse images, as we need to protect the children". I wouldn't vote for that. So whoever gets to frame the question that is being asked still directs what is happening.
Get an AI to analyze each poll and compare it to whatever preferences/indications you gave it, then output a yay/nay. For the lazy, let it automatically cast the vote for you.
Whoever gets an AI capable of holding more context, and of fooling others' AIs, will direct what's happening... but it will become increasingly difficult.
OMG fucking techbros. Yes, technology can be useful for many a thing. It can alleviate social issues by providing legit wealth, and it can shape society through its own form (e.g. the interactions made possible and encouraged by a social network).
It won't, however, and can't, bring about utopia for us. To shape technology such that it shapes us in beneficial ways, we'd have to fucking know what we want, at which point we wouldn't have the issue in the first place. Society, as a superorganism, will first have to understand human nature.
Go outside. Talk to people in the real world. Use the faculties nature has given you, your body, your mind, both reason and instinct, to fix shit; don't pray to some technological spectre to deliver us from the evil you're displacing.
Look, I don't want to pull the faculties card, so I'll tell you a very easy thing to do: go to your city hall, or whatever public place you have with ALL THE LAWS applicable to you personally, and read them ALL. Just once, no more.
Then go outside, and find a single person who has done the same, with whom you can have even a remote chance of talking about the real and full consequences of any single law change proposal.
If you do that... congratulations, you're better than a whole law firm with a hundred lawyers taken all together. And congratulate the other person for being one of the only two people in the whole world who can do it too.
For the rest of us, having an AI read all that stuff, then make it compare whatever we think we want, with what the result of changing even a couple words would be, much less 50 or 200 pages of amendments, is not about bringing some "utopia"... it's about having a fighting chance of not stepping on a landmine in a quagmire at night in the middle of a tornado.
Right now, we have to pray to a party, a bunch of representatives, all their staff, their lobbyists, and several law firms. I'd rather pray to a single "technological spectre" that I could turn off and on again, as many times as I wanted.
An AI being able to do that kind of analysis would be an AGI. Also: Garbage in, garbage out. Without knowledge of the system you cannot know what you actually want.
Let's take NIMBYs as an example: a municipality wants to drop parking minimums, fund public transport, and start a couple of medium-density housing/commercial developments around new tram stops in the suburbs. The goals: fix its own finances (no longer having to subsidise infrastructure in low-density areas with taxes from high-density land), save money for suburbanites (cars are expensive, and those tram stops are at most a short bike ride away from everywhere), and generally make the municipality nicer and more liveable. Suburbia is up in arms, because suburbanites are, well, not necessarily idiots, but they don't understand city planning.
The issue here is not one of having time to read through statutes, but of a) knowing what you want, and b) trusting that decision-makers aren't part of the "big public transport" conspiracy out to kill car manufacturers and your god-given right, as a god-damned individual, to sit in a private steel box four hours a day while commuting, without even being able to play Flappy Bird while doing it.
Even if your AI were able to see through all that and advise our NIMBYs that the new development is in their interest, why would the NIMBYs trust it any more than they trust politicians? Sure, the scheme would save on steel and other resources, but who's to say the AI doesn't want to use all that steel to produce more paper clips?
Questions to answer here revolve around trust, the atomisation of society, and alienation. AI ain't going to help with that.