In this article I will give an analysis of what I believe to be sound principles in the advocacy of content warnings. I will be focusing on the core ideas and rationale behind content warnings, as well as the benefits that they may have to creators, by placing responsibility on consumers for the media that they consume.
[Ed.: this article is a companion piece to our recent point/counterpoint articles looking at trigger warnings and safe spaces within furry, Trigger Warnings, Safe Spaces, and Fandom and Of course trigger warnings and safe spaces are a good thing….]
Content warnings and appropriate behaviour
Imagine that I jumped out in front of you shouting “AH!” in order to surprise you. If we were friends, and I knew you were disposed towards laughing off jokes such as this, then such an act would likely be deemed acceptable.
Now, imagine that I am a complete stranger, and you are walking down the street, minding your own business. If I were to jump out and scare you, your reaction would most likely be less than positive. You may feel angry, annoyed, or genuinely frightened. If you were very elderly or frail, you may even have a heart attack. Hence, this sort of behaviour is not acceptable towards people we do not know, people who have not expressed a willingness to be startled, or people whose likely reactions we cannot predict.
Thus, in the first example, if you know somebody very well, and you can reasonably assume that they will be alright with the joke, there is nothing wrong with it. In the second example, this sort of behaviour is wrong, due to the consequences being unknown, and potentially very negative.
To take this further, imagine you are reading something I have written, and, without warning, a graphic scene of something hideous is presented; rape, for example. I do not know you, nor do I know your disposition towards such a thing. It is, in my mind, akin to jumping out and scaring a stranger, in that it is an unexpected event that may lead to negative consequences for them. If somebody has had an experience with rape, from which they are recovering, this unexpected scare could be deeply retraumatising. For this reason, it is usually important to give people some idea of what they are going to experience before they read, watch, or play a piece of media.
Assuming the creator has given an adequate picture of what the work will involve, they no longer bear any responsibility for the potentially traumatic or dangerous content. The responsibility is passed to the consumer: if rape is involved in a work, and this has been adequately indicated, then it becomes the responsibility of the audience to know whether they are able to consume it. Think of it like an allergy warning: if an allergy warning is displayed on food packaging, then anybody with that allergy can be considered at fault if they eat the food (assuming the warning is adequate).
In my mind, a content warning is not a means of shutting down a discussion, but a way of putting responsibility on the audience. To take an example that I am very passionate about: when thinking philosophically, ideas and opinions are sometimes passed around that are going to make some people uncomfortable. A content warning is, in effect, saying, “If you cannot handle this material, you have no place here”. For example, if somebody is easily offended by topics of religion, then they really have no place participating in any serious discussion on the philosophy of religion. This is, in my view, the best use for content warnings, and it also indicates where they ought to be used, and where they ought not.
To go back to my allergy example: If you are allergic to peanuts, you have no right to complain if you eat a clearly labelled bag of peanuts and end up in the hospital.
Why on earth would you want somebody in a discussion who is unable to properly handle that discussion? In this way, a content creator ought not to be held accountable for somebody being offended. If an appropriate warning is provided, the creator allows for more open discussion within the agreed-upon space.
The under-appreciated value of common-sense
When it comes to content warnings, there’s a simple principle that I feel is severely lacking: Common sense.
It’s reasonable to expect people to know what may or may not be considered offensive. If it is entirely within the realm of common sense that something might be traumatic, then not much effort is required to put a brief, clear warning onto it.
Similarly, on the other side of the debate, it should also be a matter of common sense as to what it is reasonable to expect people to put content warnings on. If somebody were terrified of bananas, for example, then whilst encountering one might be traumatic for them, the fear is so uncommon and irregular that it would be unreasonable to expect creators to cater to it. Logically, if we warned people of bananas just because somebody might be afraid of them, would we not then need to list every fruit that appears in a work?
There therefore ought to be a general principle of common sense as to what ought to and ought not to have a content warning.
The limits of freedom, and why censorship is not just acceptable, but occasionally morally required
My final point will immediately rub people the wrong way. The word “freedom” is tossed around so liberally these days, and people are so willing to fight for this vague concept, that few will recognise its limits.
You do not have complete freedom of speech, and you are not free to spread any information you wish. This is a fact. In certain cases, it is illegal and wrong to give out the name of a suspected criminal, due to the fact that, if it later turns out that they are innocent, their life will still be negatively affected.
A second example of this is that you cannot openly give people instructions on how to make certain explosive devices. Even if you know such a thing, you are not at liberty to share that information. Most people will agree that this is a sensible limit to freedom.
However, this can be taken further, in my mind. Some pieces of media exist purely to spread hate, or to disrupt society. I will give two examples of pieces of media which I feel ought to be censored, followed by reasons as to why:
“Kill the Faggot” was a game put up on Steam which was nothing more than a homophobic murder simulator. Eventually, due to its nature, the game was removed from the service. This game’s entire purpose was to be hateful and offensive.
More recently, a photo of a man holding an iPad was edited to make it appear as if he were holding a Quran, with bombs strapped to him. This image is, obviously, Islamophobic, and its intent is to generate hate.
To formulate this in a more philosophical way: The question is whether the benefits of free-speech to a society are enough to justify the existence of extreme forms of hate-speech. Famously, Mill’s advocacy of free-speech has been challenged as being contradictory to his “Harm Principle.”
Put simply, the Harm Principle allows government authority to intervene in the freedom of citizens when they are likely to cause harm to one another. It is unclear exactly what Mill meant by “harm”, but if somebody were exercising their free speech to persuade others to commit violent crimes against another group of people within society, then the Harm Principle would seem to require censoring those opinions, contradicting a commitment to complete freedom of speech.
Thus, we must all ask the question of whether we want completely free speech, even if it allows for extreme hate speech, and for media which may cause a great deal of harm to others.
In my mind, the ethics of belief play an important role in answering this question.
When we believe something, we take it to be true. Philosophically, it is said that “belief aims at truth”; viz., believing P means that P is taken to be true. You cannot simultaneously believe something and also think it to be untrue.
If we take something to be true, it is likely that we will act in an appropriate manner. For example, if I believe that it is raining, I am more likely to wear a coat. In the case of these two pieces of media, if people believe that Muslims are more likely to commit acts of terrorism, they are more likely to act in discriminatory ways.
Media alters beliefs. Beliefs inform behaviour. Behaviour affects everybody. The media that others consume, and the beliefs that others hold, are therefore in the interest of everybody. If a piece of media is likely to form untrue beliefs that will lead to severely negative consequences, then it ought to be censored. For example, a piece of media that is designed to recruit young people into terrorist organisations, by making them believe it is the right thing to do, may have severely negative effects for others, and should, therefore, be censored.
Ergo, in certain cases, when the effects will be severely negative, it is in the best interest of everyone that certain things be censored. To argue against this is a very difficult challenge, as one of two things would need to be shown. Either it must be shown that there is no scenario in which banning somebody’s speech would cause society less harm than allowing it; or it must be shown that a) free speech has intrinsic worth, and b) that intrinsic worth always outweighs any consequences the speech could bring. There is no deontological maxim for free speech, nor is there a maxim that says content warnings are always bad. Sometimes certain things need to be censored, and, at other times, certain things within various pieces of media require content warnings.
I’ll close by clarifying where I stand: Content warnings are in the interest of everybody, though they ought not to be treated unreasonably by those who are for or against them. Certain media has no “right” to exist, and other media ought to be properly labelled. Responsibility ought to lie with the consumer where appropriate warnings are provided.