this post was submitted on 29 Aug 2023
772 points (96.8% liked)
Technology
you are viewing a single comment's thread
I'll agree that ISPs should not be in the business of policing speech, buuuut
I really think it's about time platforms and hosts were held responsible for the content on their platforms, particularly when, in their quest to monetize that content, they promote antisocial outcomes like the promulgation of conspiracy theories, hate, and straight-up crime.
For example, Meta is not, at the moment, modding down outright advertising and sales of stolen credit cards. Meta is also selling information with which to target voters... to foreign entities.
The issue with this is that holding tech companies liable for every possible infraction would mean platforms like Lemmy and Mastodon can't exist, because they could be sued out of existence.
That concern was the basis for Section 230 of the 1996 Communications Decency Act, which is in effect in the USA but is not the law in places like, say, the EU. It made sense at the time, but today it is desperately out of date.
Today we understand that by absolving platforms like Meta of their duty of care to take reasonable steps not to harm their customers, the law lets their profit motive guide them to look the other way when their platform is used to disseminate disinformation about vaccines that gets people killed, that the money has them protecting Nazis, and that algorithms intended to promote engagement become a tool not just for advertisers but for propagandists and information-warfare operators.
I'm not particularly persuaded that reform to Section 230 of the Communications Decency Act in the US would doom nonprofit social media like most of the fediverse. If you look around at all, most of it already follows a well-considered duty-of-care standard that provides its operators substantial legal protection from liability for what third parties post to their platforms. Also, if you consider even briefly, that is the standard in effect in much of Europe, and social media still exists there; it's just less profitable and has fewer Nazis.
I think, if this is the case, we generally need to regulate how algorithms work. We need actual legislation, not just lawsuit buttons. Also, Meta can slither its way out of any lawsuit; this would really only affect small Mastodon instances.
I feel like you can't really change 230; you need to legislate differently instead. There is room for more criminal liability when things go wrong, I think. But civil suits in the US can be really bogus. Without Section 230, someone could likely sue a Mastodon instance for "turning their kid trans" and win.
I'm with you on the legislate differently part.
The background of Section 230(c)(2) is an unfortunate 1995 court ruling (Stratton Oakmont v. Prodigy) that held that if you moderate any content whatsoever, you should be regarded as its publisher (and therefore ought to be legally liable for whatever awful nonsense your users put on your platform). This created a perverse incentive for web forum operators (and the then-fledgling social media industry) not to moderate content at all in order to gain immunity from liability, and that in turn transformed broad swathes of the social internet into an unmoderated cesspool full of Nazis, conspiracy theories, and vaccine disinformation, all targeting people without the critical thinking faculties to process it responsibly.
The intent of 230(c)(2) was to make platform operators feel safe to moderate harmful content, but it also protects them if they don't. The result is a wild west, if you will, in which it's perfectly legal for social media operators in the USA to look the other way when known-unlawful use of their platforms (like advertising stolen goods, or sex trafficking, or coordinating a coup attempt, or making porn labeled 'underage' searchable) goes on.
It was probably done in good faith, but in hindsight it was naïve and carved out the American internet as a magical zone of no-responsibility.
This is not really what 230 does; sites still face criminal liability where needed. If I made a site that hosted illegal content, I could still be arrested and have my server seized. Repealing 230 would legit just let Ken Paxton launch a multi-state lawsuit suing a long list of queer Mastodon instances for "transing minors." Without 230 it would be lawsuit land, and sites would censor anything that wasn't cat photos in an effort to avoid getting sued. Lawsuits are expensive even when you win. If you want to make social media companies deal with something, you have to set up criminal liability, not repeal 230. Section 230 just protects sites from civil suits, not criminal ones.
Publishers are held responsible.
Internet service providers, as defined in the 1996 CDA, are not.