Currently, our algorithms rely on maximising engagement and personalising content, keeping us on platforms while eroding our ability to maintain social interaction with other people. We're also antagonistic toward each other, often holding polarising views and responding emotionally to headlines or opinion pieces without understanding what the article or opinion is actually about. All the algorithm cares about is whether we comment, upvote or downvote something.
We all live in different worlds because our feeds are personalised. We all get angry at differing opinions - "How can that guy believe that?!" Hopefully, in this long-ass post, we can have a healthy discussion about how my LLM idea could be implemented and what its implications are.
There are limitations to the idea, but the goal is to improve social media so that it doesn't hurt us mentally and acts as a bridge to real social interaction rather than a replacement for it. How that happens is outside this post's scope, but it will give us clues. It starts with eliminating what makes social media divide us: engagement-maximising, ultra-personalised, extremism-inducing algorithms.
How an AI can be useful for content moderation and enforcement
What if an LLM could understand and moderate content based on the platform's rules and those of its communities? Claims like "mRNA vaccinations cause autism" could be met instantly with a correction citing credible sources. You could ask it for extra sources or an explanation of why vaccines don't cause autism, and I bet an LLM could do a much better job of convincing someone, because it's not telling a frightened mother that she's an idiot.
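To make that concrete, here's a minimal sketch of such a moderation pass, assuming an OpenAI-style chat API. The model name, rule text, and prompt are illustrative placeholders, not a real deployment:

```python
# A minimal sketch, assuming an OpenAI-style chat API.
# Model name, rules and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PLATFORM_RULES = (
    "1. No medical misinformation (e.g. vaccine-autism claims).\n"
    "2. No harassment or bullying.\n"
)

def review_comment(comment: str) -> str:
    """Flag rule violations and draft a polite, sourced correction."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable instruction-following model
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a community moderator. Check the comment against "
                    "these rules:\n" + PLATFORM_RULES +
                    "If it violates a rule, reply with a respectful correction "
                    "that cites credible sources. Otherwise reply PASS."
                ),
            },
            {"role": "user", "content": comment},
        ],
    )
    return response.choices[0].message.content

print(review_comment("mRNA vaccinations cause autism"))
```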
Censorship
It can censor content that clearly violates the rules while remaining impartial, so nobody can influence its decisions and it takes no sides. This includes censoring illegal and explicit content per platform rules (and community rules, so far as they don't infringe on other rules) without any human moderator. If a community wants people to be kind to each other (as an example), an LLM can enforce that because it understands the instruction. How that happens, and whether it involves censoring "unkind discussions and bullying" in a way that limits freedom of speech, is up for debate.
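Here's a hedged sketch of how that layering could be structured: platform rules always apply, and community rules apply only on top of them. The keyword "classifier" below is a toy stand-in for what would really be an LLM judgement call:

```python
# Layered enforcement sketch: platform rules are non-negotiable,
# community rules stack on top. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str = "no violation"

PLATFORM_RULES = {
    "no illegal content": ["counterfeit", "credible threat"],
    "no explicit content": ["nsfw"],
}

def violates(post: str, keywords: list[str]) -> bool:
    """Toy stand-in: a real system would ask an LLM whether the rule applies."""
    return any(k in post.lower() for k in keywords)

def moderate(post: str, community_rules: dict[str, list[str]]) -> Verdict:
    for rule, kws in PLATFORM_RULES.items():    # the non-negotiable layer
        if violates(post, kws):
            return Verdict(False, f"platform rule: {rule}")
    for rule, kws in community_rules.items():   # e.g. "be kind to each other"
        if violates(post, kws):
            return Verdict(False, f"community rule: {rule}")
    return Verdict(True)

print(moderate("you absolute idiot", {"be kind": ["idiot", "moron"]}))
```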
Giving communities the power to change the degree of AI enforcement
On Reddit, we could allow communities to change what the AI moderator does, but never remove its core function: addressing misinformation by creating an equivalent of Community Notes or simply responding to people. For example, imagine a community where the AI enforces rules more heavily than elsewhere, or one where it acts as a mediator of debates - say, locking a thread before an argument goes too far - while others stay lax. Science communities would prefer stricter moderation, while art communities might prefer more freedom, where the AI's role is minimal. But every community would have an AI that responds to misinformation, so there's a social contract for us humans: if we're bullshitting, we're not getting away with it or getting rewarded for it.
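A per-community config for this could be as simple as the hypothetical sketch below. The field names are invented for illustration; the point is that enforcement is tunable, but the misinformation response is not:

```python
# Hypothetical per-community moderation config. Field names are invented.
from dataclasses import dataclass, field

@dataclass
class ModerationConfig:
    enforcement_level: str = "standard"   # "minimal" | "standard" | "strict"
    mediate_debates: bool = False         # lock threads that spiral out of control
    extra_rules: list[str] = field(default_factory=list)
    # The core function is not configurable: every community gets it.
    respond_to_misinformation: bool = field(default=True, init=False)

science = ModerationConfig(enforcement_level="strict", mediate_debates=True,
                           extra_rules=["claims must cite sources"])
art = ModerationConfig(enforcement_level="minimal")
```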
Problems
It comes with a few problems, namely:
- Whether Reddit's rules are in the interests of the users AND of the virtual environment,
- Whether communities have their followers' interests in mind and intend to follow the rules,
- Whether an AI model should enforce rules at all, especially if there is room for error,
- HOW an AI can do this without infringing on free speech and expression,
- Whether giving access to "propaganda" is a good idea for an AI (present or future) that might not have our interests at heart,
- Whether communities want to be fact-checked (think religious or conspiracy communities),
- Privacy, since the LLM/AI will be collecting data about you - see Personalised Feeds,
- And whether any of it is in the interests of the social media platform itself.
But most of this requires experiments to verify what works and what doesn't. That should become increasingly feasible as AI improves and automated content moderation becomes the norm.
Personalised Feeds
With personalised feeds, an AI that actually understands content means you'd get content based on goals you explicitly set, or on what's associated with what you already watch, without extreme views being pushed on you. It could also occasionally present opposing or newer views, or unrelated content, as exposure and an opportunity to explore, while not driving entirely off the rails from what you're used to. It might offer a convincing argument against your views, allowing you to question everything while maintaining a sense of sanity. You can choose to hold on to your views, or explore further when the rare opportunity comes. The AI would learn more about you, know your limits and interests, and respond to what you want in your feed, as long as it complies with community and platform rules. Advertisements would get filtered much better too - but that comes with big problems of its own.
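As a sketch of how such a feed could work, here's one hypothetical ranking loop with occasional "exploration" slots. The `relevance` and `counterpoint` scorers stand in for LLM calls; everything here is an assumption, not an existing recommender:

```python
# Goal-driven feed with occasional exposure slots. All names are illustrative.
import random

EXPLORE_RATE = 0.1  # roughly 1 in 10 slots shows something outside your bubble

def build_feed(posts, goals, relevance, counterpoint, n=20):
    """
    relevance(post, goals)    -> float: assumed LLM scoring of fit to stated goals
    counterpoint(post, goals) -> float: how well a post challenges the user's views
    """
    ranked = sorted(posts, key=lambda p: relevance(p, goals), reverse=True)
    challengers = sorted(posts, key=lambda p: counterpoint(p, goals), reverse=True)
    feed, seen = [], set()
    for post in ranked:
        if len(feed) >= n:
            break
        pick = post
        if random.random() < EXPLORE_RATE and challengers:
            pick = challengers.pop(0)  # surface an opposing or novel view
        if pick not in seen:
            seen.add(pick)
            feed.append(pick)
    return feed

# Toy usage with string posts and keyword "scorers":
print(build_feed(
    posts=["cat video", "dog thread", "politics hot take"],
    goals=["pets"],
    relevance=lambda p, g: float(("cat" in p) or ("dog" in p)),
    counterpoint=lambda p, g: float("politics" in p),
))
```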
It can already happen now
Some of this can already happen: the GPT Store is a pathway in that direction, and APIs that platforms like Reddit could use already exist or are on the way. We can already experiment and find a healthy equilibrium by giving those abilities to individual communities.
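For instance, OpenAI's hosted moderation endpoint already exists and could serve as the platform-rules layer today; community-specific rules would still need a custom prompt like the earlier sketch:

```python
# Screening a post with OpenAI's moderation endpoint before it hits a feed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.moderations.create(
    input="Example post text to screen before publishing.",
)

flagged = result.results[0].flagged
print("blocked by platform layer" if flagged else "passes to the community layer")
```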
It should make social media better
Ideally, this creates a world where social media doesn't consume your life by pulling you into extreme views or ultra-negative exchanges that damage your mental health. You want a world where doom-scrolling into a void of despair and hopelessness isn't your only option, and where you don't end up arguing with some random guy because you're lonely, ignored by society, or angry about the status quo - or because the guy just sucks. The first step is eliminating what divides us: social media in its current form.