Difficult Speech in Feminist Communities
(This essay was originally published in 2017, as part of the Berkman Klein Center’s Perspectives on Harmful Speech Online collection. In the interest of making it more broadly sharable, I’m now posting it here.)
Many feminist communities online have developed sets of practices to accommodate, moderate and regulate speech. As we consider the implications of hateful speech on our online communities, it is vital that we also reflect upon how communities deliberatively deal with wanted yet complicated topics, and whether these practices can provide models for formulating and regulating speech according to community-developed norms. This essay discusses one such set of models – interventions against what I call “difficult speech.”
Difficult speech is speech that is wanted yet may also cause discomfort or harm in a community with a shared set of norms. For example, a trans person in a community aimed at trans folks might want to discuss their body as part of seeking advice on dysphoria (a psychological condition of distress stemming from one’s body not matching one’s gender). However, for other trans folks, a person’s recounting of their feelings about their body may be something that they cannot read without having suicidal thoughts. The issue is further complicated by the fact that what causes difficulty for a person on a bad day might be perfectly fine a few days later. This variability, both across people and over time, creates unique moderation needs. In writing this piece, I reviewed a small number (~5) of feminist sites, including both blogs with moderated comments sections and forums/private community spaces, to see how they deal with difficult speech. Content warnings and multiple channels with redirection are two approaches to this moderation that were common across the surveyed spaces.
Using Content Warnings to Offset the Impact of Difficult Speech
Perhaps the most obvious method of dealing with difficult speech is the “content warning” or “trigger warning.” Content warnings are literal statements of the content of the following text or images – for example, if a text contained a first-person account of sexual assault, a content warning might say “sexual assault.” (Generally, the term “content warning” is considered broader than “trigger warning,” so I will use it here.)
Content warnings are not unique to feminist communities, but they tend to be more common in feminist spaces than elsewhere. Warnings can be used in a variety of circumstances, for content containing anything from depictions of rape to manifestations of white supremacy. In some communities, warnings are deployed along with tags that make the difficult material unreadable unless moused over (“spoiler tags”). If material is not obscured, a content warning can be paired with a note about how long the warning remains in effect (“CN: police violence, next 4 paragraphs”).
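To make the mechanics concrete, here is a minimal sketch of how a post might be wrapped behind a content warning on an HTML-based forum. The function name and the use of a collapsible <details> element are my own illustrative choices, not any particular platform’s implementation.

```python
# Minimal sketch: hide a post behind a content warning, assuming an
# HTML-based forum where collapsed text can be rendered with <details>.
# Names here are illustrative, not taken from any specific platform.
import html

def wrap_with_content_warning(body: str, warning: str) -> str:
    """Return HTML that hides `body` until the reader clicks the warning."""
    return (
        "<details>"
        f"<summary>CN: {html.escape(warning)}</summary>"
        f"<p>{html.escape(body)}</p>"
        "</details>"
    )

# The warning is visible up front; the difficult material appears only
# when the reader explicitly chooses to expand it.
print(wrap_with_content_warning(
    "A first-person account of an encounter with police violence...",
    "police violence",
))
```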
Communities often engage in discursive practices around what kinds of content require a warning – allowing autonomy and discussion over shared values. Commonly chosen content warnings among the feminist communities surveyed include “sexual assault,” “transphobia,” “racism,” “war on agency” (reproductive rights), and “Nazis.” As this list demonstrates, the potential options are broad, and often depend on the needs and characteristics of the members of the community.
Using Multiple Channels to Respond to Difficult Speech
Some communities use a combination of multiple channels and conversation redirection to handle difficult speech. For example, there might be two channels for a particular issue: #bodyissues and #bodyissues-unfiltered. When someone wants to talk about something that others might find difficult, whether explicitly named in community guidelines or simply understood as a sensitive topic, they might post in #bodyissues with a content warning and a pointer – “I want to talk about a dysmorphia thing in unfiltered, if you’re up for listening meet me there.” Users who feel able to offer support can then view #bodyissues-unfiltered to read and comment. Users who are not worried about potential triggers can also follow the unfiltered channel as part of their daily community interactions.
Finally, a user who is finding a conversation taking place in the #bodyissues channel difficult can ask other users to move to #bodyissues-unfiltered. This allows for greater situational reactivity than a traditional content warning system.
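The following sketch models that two-channel pattern in miniature. The channel names, data structures, and message wording are hypothetical; real communities implement this with whatever rooms or tags their chat platform provides.

```python
# Rough sketch of the filtered / unfiltered channel pattern described above.
from collections import defaultdict

class Community:
    def __init__(self):
        self.channels = defaultdict(list)      # channel name -> posts
        self.subscriptions = defaultdict(set)  # user -> channels they read

    def subscribe(self, user: str, channel: str) -> None:
        self.subscriptions[user].add(channel)

    def post_difficult(self, user: str, topic: str, warning: str, body: str) -> None:
        """Leave a warned pointer in the filtered channel; put the full text
        in the matching '-unfiltered' channel for readers who opt in."""
        filtered, unfiltered = f"#{topic}", f"#{topic}-unfiltered"
        self.channels[filtered].append(
            f"{user}: CN {warning} -- I want to talk about this in {unfiltered}, "
            f"if you're up for listening meet me there."
        )
        self.channels[unfiltered].append(f"{user}: {body}")

    def timeline(self, user: str) -> list[str]:
        """A user only sees posts from channels they have chosen to read."""
        return [post
                for channel in self.subscriptions[user]
                for post in self.channels[channel]]

community = Community()
community.subscribe("alex", "#bodyissues")             # wants the pointer only
community.subscribe("sam", "#bodyissues-unfiltered")   # opts in to full posts
community.post_difficult("jo", "bodyissues", "dysphoria",
                         "I've been struggling with how my body looks lately...")
print(community.timeline("alex"))  # sees only the warned pointer
print(community.timeline("sam"))   # sees the unfiltered post
```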
Platforms, Affordances and Regulation
One notable characteristic of the aforementioned interventions for dealing with difficult speech is that they rely on platforms having particular affordances, and on making those affordances accessible to moderators – the power to ban members, to create multiple channels, and to block out speech (for example, with spoiler tags). Thus, difficult speech interventions may not be possible in communities that operate on platforms without these affordances. For example, a community on Facebook could not use spoiler tags, as the platform does not support them.
Additionally, difficult speech interventions can be undermined by more traditional moderation actions taken by platforms. For example, imagine a racial slur is used in the context of explaining a recent experience and asking for reassurance. If an appropriate content warning is used, the harmful effects on members of the targeted community may be mitigated. Nevertheless, the post containing the slur might trigger a “shadowban” or “time out” from the platform because of the language – resulting in far fewer people seeing the post, the exact opposite of what the poster may need.
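The tension can be illustrated with a deliberately naive sketch of a platform-level filter that flags posts purely on keyword matching and never consults the community-level warning attached to them. Nothing here corresponds to any real platform’s moderation code; the blocked-term list and field names are placeholders.

```python
# Illustrative only: a naive keyword filter that ignores community context
# such as an attached content warning.
BLOCKED_TERMS = {"<slur>"}   # placeholder; a real platform would maintain this list

def platform_filter(post: dict) -> str:
    """Return a moderation action, looking only at the post's raw text."""
    if any(term in post["text"].lower() for term in BLOCKED_TERMS):
        return "shadowban"   # the post is silently shown to far fewer people
    return "show"

post = {
    "content_warning": "racist harassment",  # community-level mitigation
    "text": "Today someone called me a <slur> on the bus and I'm shaken...",
}
# The filter never reads post["content_warning"], so this warned,
# support-seeking post is suppressed anyway.
print(platform_filter(post))
```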
As I write this essay, Mastodon, an alternative social network, has been rapidly gaining popularity. Mastodon supports content warnings, and users on different Mastodon servers have engaged in robust debate over what content deserves warnings, from politics to porn. Whether Mastodon ends up going the way of forgotten social networks like Diaspora or Ello or becomes widely adopted, it is notable that content warnings are now being integrated directly into platforms.
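For readers curious what that platform-level integration looks like in practice, here is a short sketch of posting a status with a content warning through Mastodon’s REST API, where the warning is carried in the spoiler_text field. The instance URL, access token, and example text are placeholders.

```python
# Sketch: posting a Mastodon status with a content warning via the REST API.
import requests

INSTANCE = "https://mastodon.example"   # placeholder instance
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"      # placeholder OAuth token

response = requests.post(
    f"{INSTANCE}/api/v1/statuses",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    data={
        "spoiler_text": "US politics",  # shown to readers up front
        "status": "The text readers see only after expanding the warning.",
        "visibility": "unlisted",
    },
)
response.raise_for_status()
print(response.json()["id"])
```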
Since much regulation of speech is bound up in legal frameworks and debates over banned terms, community adaptations to difficult speech, like those taking place on feminist platforms or on Mastodon, suggest a new way forward for dealing with harmful speech online.