this post was submitted on 30 Nov 2024

Technology


This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Meta is actively helping self-harm content to flourish on Instagram by failing to remove explicit images and encouraging those engaging with such content to befriend one another, according to a damning new study that found its moderation “extremely inadequate”.

Danish researchers created a private self-harm network on the social media platform, including fake profiles of people as young as 13, and shared 85 pieces of self-harm-related content of gradually increasing severity, including blood, razor blades and encouragement of self-harm.

The aim of the study was to test Meta’s claim that it had significantly improved its processes for removing harmful content, which it says now use artificial intelligence (AI). The tech company claims to remove about 99% of harmful content before it is reported.

But Digitalt Ansvar (Digital Accountability), an organisation that promotes responsible digital development, found that in the month-long experiment not a single image was removed.

When Digitalt Ansvar created its own simple AI tool to analyse the content, it was able to automatically identify 38% of the self-harm images and 88% of the most severe. This, the organisation said, showed that Instagram had access to technology able to address the issue but “has chosen not to implement it effectively”.
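The study does not publish the tool itself, only its per-severity detection rates. As an illustration of what such figures mean (not the researchers' actual code or data), here is a minimal sketch of computing the share of images a classifier flags within each severity tier; the severity labels and sample values are hypothetical:

```python
from collections import defaultdict

def detection_rates(items):
    """Compute the share of flagged images per severity tier.

    `items` is a list of (severity, flagged) pairs, where severity is a
    label such as "mild" or "severe" and flagged indicates whether the
    classifier caught that image. Returns {severity: fraction caught}.
    """
    totals = defaultdict(int)
    caught = defaultdict(int)
    for severity, flagged in items:
        totals[severity] += 1
        if flagged:
            caught[severity] += 1
    return {s: caught[s] / totals[s] for s in totals}

# Hypothetical labels -- NOT the study's dataset.
sample = (
    [("mild", False)] * 5 + [("mild", True)] * 3
    + [("severe", True)] * 7 + [("severe", False)] * 1
)
rates = detection_rates(sample)  # e.g. {"mild": 0.375, "severe": 0.875}
```

A figure like "88% of the most severe" is this per-tier fraction: of the images labelled most severe, the share the classifier flagged.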
