this post was submitted on 02 Jul 2023

Beehaw Support

Hey all,

Moderation philosophy posts started out as a personal exercise to put down some of the thoughts on running communities that I'd learned over the years. As they continued, I began to involve the other admins more heavily in the writing and brainstorming. This most recent post involved a lot of moderator voices as well, which is super exciting! This is a community, and we want voices at all levels to represent the community and how it's run.

This is probably the first of several posts on moderation philosophy, how we make decisions, and an exercise to bring additional transparency to how we operate.

[–] kalanggam@beehaw.org 4 points 1 year ago

It seems like it’s implying that the real problem is that people are too sensitive / too easily offended and not the person initiating the harmful content.

"Safe space" as a term comes from particular considerations about marginalized, especially LGBTQ+, people inhabiting academic space. The use of "safe" isn't necessarily about the participants' sensitivities so much as it's in reference to a facilitator's (such as a teacher) trustworthiness. As a queer person, can I come out to the person facilitating this space (and, possibly, to the others in this space) without fear of identity-based psychological/emotional or physical harm/violence? And can I trust that this facilitator will respect my identity and not harm me in any way?

"Sanitized space"


well, that isn't really a term which comes from anywhere. We created it as a convenience for drawing comparison between other types of space.

Of course, the "paradox of tolerance" is something many of us are well acquainted with, and I think it's always relevant when talking about bigotry. A space can't be safe, sanitized, brave, accountable, tolerant, etc. unless we, as a rule, do not tolerate bigotry.

The problem with a "tolerant space" is that simple tolerance (with respect to identity) can imply some level of disagreeableness. Many people, especially queer people and people of color, don't want to just be tolerated, as this can convey that our identities are something to be 'put up with' or 'endured' by others, when it should be bigotry that is the actual burden. In this case, what I personally want is acceptance and affirmation: to have my identity accepted, to have difference welcomed, and to be affirmed in my experiences (especially with discrimination and bigotry).

Of course, you could say that leaving certain harmful content up makes a platform less tolerant, but as was raised in our philosophy article, what is the bar for harmful? Many of those who wrote this post, including myself, are frequent targets for bigotry, but our personal standards for 'harmful' aren't universally applicable. Plus, it's a lot harder to gauge the harm of long-form posts/comments than to moderate, say, messages in a chatroom.

The other aspect of this is: I hate having to wait on moderators and admins to take action, and I don't want to put all the onus of shaping the space on them. It's glaring to me when other users see harmful content and say or do nothing about it. A moderator can remove the content and ban the offending person, but that doesn't get rid of the sour taste in my mouth that the people alongside me saw no need to act, which raises questions about their trustworthiness in handling other, less clear-cut instances of bigotry or more subtle prejudice.

I've been in spaces before that were highly vigilant about removing bigots and their speech, but even with those removed, what about the attitudes of everyone else in the space? If they don't take the right tone or approach toward bigotry before a moderator acts, it's harder to trust that they'll listen when one of them does something less obviously harmful.

You may personally not have seen anything harmful on here, but I have seen things I would consider outright or subtly harmful, some of them directed at people like me. Honestly, I feel more assured when I can see that others have shown strong resistance to that kind of speech; that is what I'm really looking for when determining whether a space is safe for me. Whether the content itself gets removed after the fact becomes of less consequence to me.

I'm personally more a fan of building an "accountable space":

Accountability means being responsible for yourself, your intentions, words, and actions. It means entering a space with good intentions, but understanding that aligning your intent with action is the true test of commitment.

Accountable space guidelines allow for allies and marginalized communities to agree on a set of actionable behaviours/actions during the discussion to show allyship in real-time and after the event. It allows participants to align their well-meaning intentions with impact through a collective set of guidelines.

Accountable space guidelines do not place an unfair burden of bravery. They do not create mythical promises of safety and unicorns. They place an equal amount of onus for all to behave equitably and inclusively, to foster a deeper understanding of diverse lived experiences in real-time. (source)

[–] StereoTypo@beehaw.org 1 point 1 year ago

I really like that term, "Accountable Space". Thanks for introducing me!