The internet as we know it and many of the companies that dominate it were forged in the wake of a nationwide sex panic about children’s access to pornography. This sex panic might be best exemplified by the August 3, 1995, Time magazine issue on cyberporn (see figure 3.1). In the issue’s cover story, Philip Elmer-Dewitt reported on the findings of a later debunked study showing that 83.5 percent of the images stored online at Usenet newsgroups were pornographic.1 Twenty years later, Elmer-Dewitt would describe this as his worst story “by far” and note that one Time researcher assigned to his story later recalled it as “one of the more shameful, fear-mongering and unscientific efforts that we ever gave attention to.”2 This sex panic surrounding “cyberporn” culminated in Congress passing the Children’s Internet Protection Act (CIPA) in 2000. CIPA required public schools and libraries to install internet filters on all of their computers to block obscene content, child sexual abuse images, and content deemed harmful to minors in order to continue receiving federal funding. Similar to earlier moral panics surrounding the dissemination of pornography in the United States, CIPA also embodied a class-based anxiety over who had access to online pornography, as evidenced by the original extension of the ban to adult library patrons.3
Media scholar Henry Jenkins responded to the Time story in a 1997 article published in Radical Teacher. In a passage worth quoting at length, he wrote,
The myth of “childhood innocence” “empties” children of any thoughts of their own, stripping them of their own political agency and social agendas so that they may become vehicles for adult needs, desires, and politics. . . . The “innocent” child is an increasingly dangerous abstraction when it starts to substitute in our thinking for actual children or when it helps justify efforts to restrict real children’s minds and to regulate their bodies. The myth of “childhood innocence,” which sees children only as potential victims of the adult world or as beneficiaries of paternalistic protection, opposes pedagogies that empower children as active agents in the educational process. We cannot teach children how to engage in critical thought by denying them access to challenging information or provocative images.4
As we’ll see in this chapter, so much of our lives as children and adults is lived at the interstice between the sexual and the platonic, the prurient and the pure. Critical thought requires us to learn how to navigate these gray areas, and the development of this capacity takes many years of practice, online or off. Overbroad filters stunt this development, blocking access to everything from legitimate nonsexual speech to hard-core pornography and all the gray areas in between. As we’ll see, this problem is particularly acute when it comes to LGBTQIA+ discourse. However, what little resistance there was to CIPA maintained these black-and-white distinctions between legitimate and illegitimate speech, focusing on how nonsexual speech was blocked by overbroad filters.
Prior to CIPA, the American Civil Liberties Union (ACLU) was already publishing detailed white papers arguing against internet censorship based on a broad interpretation of First Amendment rights.5 In 2002 the Kaiser Family Foundation published a study indicating that at moderate levels, internet filters did not significantly impede access to online health information, but at their more restrictive levels, the filters would “block access to a substantial amount of health information, with only a minimal increase in blocked pornographic content.”6 CIPA was challenged in court by the American Library Association, also on First Amendment grounds, and the case was appealed to the Supreme Court by 2003. It is worth noting that none of these free speech arguments against the implementation of porn filters claimed that pornography ought not be filtered. Pornography has never constituted protected speech in the United States and has been especially vulnerable to censorship after Miller v. California.7 The premise common to all of these arguments was that porn filters are unreliable. They always overblock—they filter some portion of nonpornographic sites for one reason or another—and they always still let some porn through. The First Amendment claims against these filters were all based on the fact that they would necessarily be blocking some portion of nonpornographic content, which, precisely because it was not pornography, would qualify for free speech protections.
All nine Supreme Court justices agreed that restricting children’s access to pornography posed no constitutional problem. They also agreed that all available filters were blunt instruments that inevitably block some portion of nonpornographic material.8 The constitutional question was thus whether this overblocking constituted a violation of First Amendment rights. The Supreme Court ultimately decided in favor of CIPA by a margin of six to three. In the aftermath of this decision and the displacement of the cyberporn sex panic from center stage by 9/11 and the escalation of wars in Afghanistan and Iraq, the free speech concern of overblocking largely faded into the background. As Deborah Caldwell-Stone noted in her 2013 American Libraries article, “Debate over filtering became muted. . . . While researchers counted the number of libraries and schools using filters, little inquiry was made into how institutions were implementing CIPA or how filtering was affecting library users.”9
While the critical discourse seeking to combat overblocking by internet filters has yet to fully resurface, this moral panic about access to pornography through public internet outlets, and particularly in schools, is alive and strong. For instance, in both 2017 and 2018, NCOSE (see chapter 1) added EBSCO Information Services to their annual “Dirty Dozen” list of smut peddlers. While CIPA remains in force and most American public schools filter internet pornography, the EBSCO databases that many students use to access educational materials are not subject to these same internet filters. Even after EBSCO worked to scrub their elementary, middle, and high school databases of pornographic and sexually explicit materials, NCOSE found a number of materials on these databases that they objected to. NCOSE researchers found “sexually graphic written content on high school databases, including sexually graphic written descriptions and instructions for oral sex and other sexual acts” that they considered “salacious and not academic.”10 On the high school EBSCO database, they also objected to academic articles about gay porn, articles about pornography more broadly, and articles from magazines like Cosmopolitan and Redbook that provide sex advice. EBSCO’s middle school database (Middle Search Plus) and elementary school database (Primary Search) contained articles on adult entertainer Bettie Page; teen activists working to make public nudity acceptable by posting nude protest images to Instagram; sex advice articles with information on oral sex, anal sex, and BDSM; and other sex education materials that NCOSE considered guilty of normalizing deviant sex and encouraging the use of pornography by children.11
EBSCO spokeswoman Kathleen McEvoy noted that schools are primarily responsible for setting up their own EBSCO filters for blocking objectionable content and that the NCOSE researchers were likely accessing these materials through home computers that would not be subject to those school filters. She added that EBSCO initiated an investigation in response to NCOSE’s findings and that, although the company was unable to reproduce them, it took the complaints very seriously and has taken steps to ramp up its content filters for public school databases. She also concurred with NCOSE that magazine articles about sex practices like BDSM did not count as “sex education” and thus ought to be censored.12 Despite EBSCO’s response to these problems, at least one school district in Colorado discontinued its subscription to EBSCO Information Services for the foreseeable future.13
We can thus expect that overblocking will continue to be part of the daily lives of public school students in the United States for the foreseeable future as well. This inordinately impacts the most underprivileged students, who might not have ready access to the internet via broadband or mobile devices outside of the school’s internet filters. As we’ve repeatedly seen, moral panics over sex and pornography always have class dimensions, visible both here and in another instance in which NCOSE, after joining forces with Enough Is Enough, was able to get both Starbucks and McDonald’s to agree to filter sexually explicit content on their free Wi-Fi in locations nationwide.14 The overblocking that results inordinately impacts poor and unhoused adults in the same way that it did when public libraries installed filters after CIPA.
This chapter will make the case that overblocking is a phenomenon common across the internet writ large and is not confined to public schools or free Wi-Fi hotspots. Nearly every major internet platform today engages in systematic overblocking of sexual expression, which by default reinforces heteronormativity. The primary focus will be on analyzing the impact of the Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act, collectively known as FOSTA-SESTA and hereafter referred to as FOSTA, which the US Congress passed in 2018. FOSTA was the first substantial change in legislation and regulative policy surrounding adult content that the United States has made since CIPA and has had the largest impact on the internet since the Communications Decency Act (CDA) was passed in 1996. We can essentially divide content moderation practices into pre-FOSTA and post-FOSTA eras. In the former, ISPs and content hosts, like social media platforms, were not liable for user-generated content disseminated by or hosted on their networks, which led to a lighter, but still quite repressive, censorship regime, as we’ll see below. FOSTA, claimed as a marquee policy victory by NCOSE, has led to extreme crackdowns on sexual speech on the internet. It has adversely impacted many LGBTQIA+ communities and has been exploited by the manosphere (see chapter 1) to punish adult entertainers and sex workers online. In the aftermath of FOSTA, many adult entertainers and sex workers have faced serious consequences as both their livelihoods and their bodies were put at risk by the act. Additionally, the act has ramped up the overblocking of sex education materials, which is likely to have an inordinate impact on the adolescents coming of age during its reign over internet content. As I’ll show in chapter 4, this victory of anti-porn crusaders has also led to a détente wherein pornography is allowed to continue proliferating online provided it is produced by multinational corporations and conforms to heteronormative genre conventions.
Since 2016, LGBTQIA+ digital content and its creators have been increasingly under attack at a global scale. As Freedom House’s annual “Freedom on the Net” report for 2016 states, “Posts related to the LGBTI community resulted in blocking, takedowns, or arrests for the first time in many settings. Authorities also demonstrated an increasing wariness of the power of images on today’s internet.”15 The organization found attempts to block LGBTQIA+ content in eighteen countries, up from fourteen in 2015, ranging from South Korean regulators asking the Naver web portal to reconsider linking to gay dramas to the Turkish government blocking all the popular LGBTQIA+ websites in the country for a period during 2015.16 Turkey regularly invokes legal provisions about protecting families, censoring obscenity, and preventing prostitution to block LGBTQIA+ websites and apps like Hadigayri.com, Transsick-o, and Grindr.17 By 2017, Freedom House estimated that 47 percent of the global population lived in countries where LGBTQIA+ content was suppressed and sometimes punishable by law.18
It is worth noting that information on the overblocking of LGBTQIA+ content online was restricted to only a single page of both Freedom House’s 2016 and 2017 reports and was entirely absent from their 2018 report. Reporting on the overblocking of LGBTQIA+ content is largely absent from contemporary discourse on content moderation. A large part of this is due to the pressing issues of social media spreading alt-right political propaganda and conspiracy theories, which leads to an inevitable focus on content moderation in terms of political speech. However, it may also be due to the false Pandora’s box of porn narrative leading people to believe that LGBTQIA+ content flows freely across the global internet. My hope in this chapter is to convince you that this is a false assumption and that LGBTQIA+ content is regularly overblocked in the United States. This US-based overblocking has global implications. As many of the most prominent internet platforms are headquartered in the United States, its legislation has an inordinate impact on global internet traffic for two primary reasons. First, internet platforms rarely maintain separate content moderation standards for different national or cultural audiences. If a state has the power to influence these standards, that impact is frequently felt globally. Second, the proprietors of these internet platforms and many of their employees live in and are influenced by the same US norms that make legislation like FOSTA possible. The global impact of FOSTA is thus to doubly reinforce heteronormativity, first by subjecting LGBTQIA+ content to stricter scrutiny than heteronormative content and second because silencing sexual expression effectively preserves the status quo.
The majority of anti-porn discourse argues that content filters are the only way to protect children from unwanted exposure to pornography online, thus justifying overblocking. This, however, does not seem to be the case. For instance, after analyzing two separate datasets, researchers found that the use of internet filters “had inconsistent and practically insignificant links” with adolescents encountering sexually explicit content online.19 The overblocking that results from internet filters thus does not have its desired effect. Mainstream heteroporn with wide distribution networks, advanced search engine optimization (SEO) techniques, and the capacity for mass-producing content still makes it through the filter, as we’ll see in chapter 4. What is lost is always a combination of art, sex education, LGBTQIA+ community resources, and LGBTQIA+ pornography.
It is difficult to retroactively construct a full catalogue of unduly censored content prior to FOSTA because few researchers were focused on content moderation and no centralized agencies were collecting archival examples of overblocking. For example, a paper from the Berkman Klein Center for Internet and Society at Harvard University examined Google SafeSearch in 2003 and found strong evidence that Google routinely blocked newspapers, government sites, educational resources, and even sites about controversial concepts and images.20 However, since 2003, there have been no academic studies of SafeSearch censorship, and thus there is no real catalogue of what has been getting censored or how the adjudication mechanisms play out for those who believe their content was censored in error and are thus seeking to get it unblocked.21 To stick with the case of Google, this censorship has dire consequences for content producers and website managers, as even a temporary block can do irreparable damage to their position in Google search rankings and thus can cause an unexpected and potentially prolonged cessation of revenue as web traffic slows to a halt. As we will see below, this is not unique to Google. Across the internet, content creators and website administrators, particularly those with less access to capital and representing niche and/or marginalized communities, are confronting undue censorship and loss of revenue. The adjudication channels provided to them are opaque, alienating, and often unsuccessful if they do not have national visibility or expensive legal counsel.
In lieu of a robust archive of unduly censored content pre-FOSTA, I will work to stitch together what has been documented with some experimental explorations of contemporary content moderation practices, both my own and those conducted by artists using their own convolutional neural networks—particularly what are termed “generative adversarial networks” that reverse engineer the operations of computer vision algorithms. What we’ll find is that automated content moderation performed by computer vision and image recognition algorithms is not very good at parsing the context of nudity, which constitutes a significant problem when it comes to the censorship of art. While some of this lack of contextual knowledge can be compensated for by relaying the moderation decisions to human moderators, they, too, will often err on the side of overblocking artistic nudity. While they may recognize and override blocks to canonical Western artistic nudity—the types of oil paintings hung in world-class museums—this same consideration is rarely extended to non-Western, noncanonized, or everyday artistic productions.
It is no wonder then that one of the most frequent victims of overblocking is the artistic representation of nudity. As we saw in chapter 2, even canonical works of art like the Venus de Milo are potentially subject to censorship by Google SafeSearch because automated content filters have trouble with higher-level differentiations like that between pornography and nude art. Several famous works of art have been subjected to censorship on platforms like Facebook. In 2018, Facebook flagged images of the Venus of Willendorf as pornography and censored them on its platform, which led to an online petition against art censorship.22 Facebook also automatically flagged an image of Gustave Courbet’s painting The Origin of the World, and the user who posted it had his account deactivated as a result.23 Facebook has also banned images of Gerhard Richter’s 1992 painting Ema, a misted view of a nude woman descending a staircase; Evelyne Axell’s 1964 painting Ice Cream, a pop art painting of a woman’s head as she licks an ice-cream cone; and Edvard Eriksen’s 1913 public sculpture The Little Mermaid.24
This trend is even more impactful when it comes to photography. Take, for example, Michael Stokes’s work, which often includes photographs of men in various stages of undress, including wounded, amputee veterans. Since 2013, Stokes’s photographs have been repeatedly flagged on Facebook as violating their community standards, and he has been subjected to multiple bans from the platform (not to mention hate messages and threats from other users). Stokes compares this to Helmut Newton’s ability to freely post his photograph of Venus Williams in the nude for ESPN’s 2014 Body Issue, which Facebook has allowed to circulate without challenge. Stokes writes, “Nude subjects have traditionally been reserved exclusively for the male gaze, so when a man poses nude, to some this implies that the image is homoerotic.”25 Thus, Stokes has found that images of women can be further undressed than those of men without triggering content filters (either automatically or by people reporting the images). Stokes argues that this trend has only accelerated in the past few years. In 2015, he posted a photo of two male police officers, fully dressed, kissing with a caption about censorship that was, ironically, quickly censored. He further notes that he recognized a strong shift in Instagram’s content moderation after it was purchased by Facebook. He encountered few problems with the platform before its sale and afterward was regularly subject to warnings and takedown notices. More recently, after Tumblr announced that it would no longer host sexually explicit content on its platform, nearly 70 percent of his 900 photographs on the platform were flagged as violating the new community standards.
Photorealism does seem to be a key marker of the likelihood that an image will be automatically flagged as sexually explicit, at least via Google SafeSearch. For example, I ran the first one hundred images that resulted from Google Image searches for “nude sculpture” and “nude painting” through Google’s Cloud Vision API and found evidence that photorealism was a key indicator for an image being flagged as “adult” or “racy.”26 For instance, of the sculptures, only one was flagged as likely or very likely to be adult, and thirty were flagged as likely or very likely to be racy. Of the paintings, only twelve were flagged as adult and sixty-seven as racy. The sculptures that were flagged were often realistic, with a sheen reminiscent of the sweat and oil often found on models’ skin during filming, and paintings were much more likely to be flagged the less abstract they were. This bears out upon further testing. I ran the first one hundred images from The Vulva Gallery, an online site and printed book containing close-up illustrations of vulvas in the likeness of watercolor paintings. None of them were flagged as adult, and only fifteen of them were flagged as racy. Similarly, I ran two sets of hentai fanart from the site DeviantArt.com through Cloud Vision, fifty color illustrations and fifty line art illustrations. Of the color illustrations, thirty-four were flagged as adult and forty-eight as racy, while only one of the line art illustrations was flagged as adult and only thirty-three as racy. Lastly, I took the first forty-four images of Real Dolls, lifelike silicone sex dolls, from a Google Image Search and ran them through Cloud Vision and found that all forty-four of them were flagged as both adult and racy. These findings are borne out by computer science literature, which demonstrates that color and texture properties are key features in the detection of nudity by computer vision algorithms, as seen in chapter 2.
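A test along these lines can be approximated with a short script against Google’s Cloud Vision API. The sketch below is illustrative rather than a record of my exact procedure: the image folder is a placeholder, Cloud Vision credentials are assumed to be configured in the environment, and an image counts as flagged when its “adult” or “racy” likelihood comes back as likely or very likely.

```python
# A minimal sketch of a SafeSearch batch test, assuming Google Cloud
# credentials are already configured in the environment. The folder name is
# a placeholder for any set of downloaded images.
from pathlib import Path
from google.cloud import vision

client = vision.ImageAnnotatorClient()
FLAGGED = {vision.Likelihood.LIKELY, vision.Likelihood.VERY_LIKELY}

def safe_search_flags(image_path: Path) -> dict:
    """Ask Cloud Vision whether an image is rated 'adult' or 'racy'."""
    response = client.safe_search_detection(
        image=vision.Image(content=image_path.read_bytes())
    )
    annotation = response.safe_search_annotation
    return {
        "file": image_path.name,
        "adult": annotation.adult in FLAGGED,
        "racy": annotation.racy in FLAGGED,
    }

if __name__ == "__main__":
    results = [safe_search_flags(p) for p in sorted(Path("nude_sculptures").glob("*.jpg"))]
    print(f"adult: {sum(r['adult'] for r in results)}, "
          f"racy: {sum(r['racy'] for r in results)}, "
          f"total: {len(results)}")
```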
Yet what a computer “sees” as indicative color and textural features of nudity is not the same as what we would expect based on our own visual experience. This has been demonstrated by several artists who have been using machine learning to probe the limits of computer vision, image recognition, and adult content moderation as it relates to the arts. Take, for example, the work of Tom White, an artist and senior lecturer in media design at Victoria University of Wellington. White uses a generative adversarial network (GAN) to produce what the tech industry calls “adversarial examples” based on ImageNet classifiers. In essence, a GAN mirrors the CNN that powers an image recognition algorithm (see chapter 2 for a lengthy overview of CNNs): it feeds abstract shapes, patterns, or amalgamations of images into the CNN, observes which classifiers the image triggers, and then iteratively adjusts those shapes, patterns, or amalgamations until it outputs an image that triggers a classifier despite looking nothing like what a human would recognize as an example of that particular classification. As White puts it, he uses abstract forms to “highlight the representations that are shared across neural network architectures—and perhaps across humans as well.”27
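To make that loop concrete, here is a minimal sketch of this kind of iterative optimization in PyTorch, using a generic pretrained ImageNet classifier. It is not White’s actual Perception Engines pipeline but an illustration of the basic mechanism: start from noise and adjust the pixels until the network’s chosen classifier fires, regardless of what a human sees. The model, target class index, and hyperparameters are all illustrative assumptions.

```python
# A generic activation-maximization loop: start from noise and nudge the
# pixels until a pretrained classifier reports high confidence in a chosen
# class, whether or not the result looks like that class to a human.
# The model, target class index, learning rate, and step count are all
# illustrative choices, not White's actual setup.
import torch
from torchvision import transforms
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

target_class = 954                      # an arbitrary ImageNet class index
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(normalize(image.clamp(0, 1)))
    loss = -logits[0, target_class]     # gradient ascent on the target logit
    loss.backward()
    optimizer.step()

with torch.no_grad():
    probs = torch.softmax(model(normalize(image.clamp(0, 1))), dim=1)
print(f"target-class confidence after optimization: {probs[0, target_class].item():.3f}")
```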
In two exhibitions, Synthetic Abstractions and Perception Engines, White has generated shocking images that will trigger certain classifiers on Amazon, Google, and Yahoo’s image recognition systems but to a human look nothing like an object that ought to trigger that classification.28 Take, for example, figure 3.2, which depicts a series of black-and-white abstract shapes and lines on an orange and yellow background. Google SafeSearch recognizes this abstract image as “very likely” to be adult content, and both Amazon Web Services and Yahoo Open NSFW make similar determinations. White has a series of similar adversarial examples that to humans present as abstract shapes and colors but to image recognition systems look like concrete, identifiable objects. Images like these challenge the efficacy of image recognition systems, probing their boundaries to demonstrate the different ways in which they perceive the world. They also constitute a more practical problem, as White’s work would likely be censored on most major platforms today, and he would be required to individually appeal each automatic flag applied to images on his accounts despite their (to human eyes) obviously “safe for work” status.
For another example, we can look to Mario Klingemann’s eroGANous project, which stitches together elements from actual images into adversarial examples that will trigger image recognition systems.29 These images are much more photorealistic than White’s and, thus, while White’s images may survive human review after the system has automatically flagged his content, the eroGANous images are more likely to be censored in the six- to eight-second window that human reviewers generally have to make censorship determinations on potentially sexually explicit content (see figure 3.3). As Klingemann notes, “When it comes to freedom, my choice will always be ‘freedom to’ and not ‘freedom from,’ and as such I strongly oppose any kind of censorship. Unfortunately in these times, the ‘freedom from’ proponents are gaining more and more influence in making this world a sterile, ‘morally clean’ place in which happy consumers will not be offended by anything anymore. What a boring future to look forward to.”30 As a side note, for those interested in escaping the boredom of this sterile visual regime, I’d recommend taking a look at Jake Elwes’s attempt at producing “machine learning porn,” a two-minute video of computer vision pornography unrecognizable—yet uncannily evocative—to human vision.31
A similar example can be found in Robbie Barrat’s work. Barrat fed images of ten thousand nude portraits into a GAN and used it to iteratively generate new “nude” images. As Barrat notes,
So what happened with the Nudes is the generator figured out a way to fool the discriminator without actually getting good at generating nude portraits. The discriminator is stupid enough that if I feed it these blobs, it can’t figure out the difference between that and people. So the generator can just do that instead of generating realistic portraits, which is a harder job. It can fall into this local-minima where it isn’t the ideal solution, but it works for the generator, and discriminator doesn’t know any better so it gets stuck there. And that is what is happening in the nude portraits.32
Thus, as Barrat’s project demonstrates acutely, computer vision has a very peculiar and least-common-denominator approach to detecting nudity that totally collapses the context within which that nudity occurs. For many people, none of the images above would be considered obscene, and even if they were, they are most certainly contained within the realm of artistic nudity rather than pornography. Despite this, these images are routinely censored by all major computer vision algorithms.
These experiments with computer vision challenge the reliability of image recognition and produce an implicit challenge to content moderation. They also demonstrate the guiding role that the ethic of anti-porn crusaders plays in the design of these moderation systems, for which overbroad censorship is always preferable to letting even one pornographic image slip through. This prioritization of anti-porn morality is, as I’ve shown, explicitly at odds with the needs, desires, and rights of the LGBTQIA+ community. Further, the artists above allow us to imagine a future in which new BigGAN production practices can obfuscate pornography from content moderation algorithms. As Klingemann notes,
Luckily, the current automated censorship engines are more and more employing AI techniques to filter content. It is lucky because the same classifiers that are used to detect certain types of material can also be used to obfuscate that material in an adversarial way so that whilst humans will not see anything different, the image will not trigger those features anymore that the machine is looking for. This will of course start an arms race where the censors will have to retrain their models and harden them against these attacks and the freedom of expression forces will have to improve their obfuscation methods in return.33
What goes unnoted here is that these techniques will likely only be available to the most tech savvy of content producers or, in lieu of doing it themselves, those with either the access to capital to hire others to perform this labor or with large enough audiences to crowdsource it for free. A likely unintended effect of this will be that in this arms race between porn obfuscators and content moderators, the only people unable to keep up will be amateur and low-budget artists and pornographers, of which LGBTQIA+ content creators are likely to form a substantial portion. In short, if what computers view as “porn” really can be likened to spam, it seems inevitable that certain types of “porn” will mutate to exploit the weaknesses in image recognition systems. It also seems likely that the content producers that achieve this will be the well-resourced corporations peddling mainstream, heteronormative content.
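The obfuscation Klingemann anticipates is, in technical terms, an adversarial perturbation: a change to the pixels small enough that human viewers see the same image while the classifier’s output shifts. Below is a minimal sketch of the idea using the classic fast-gradient-sign method against a stand-in ImageNet classifier; no platform’s real NSFW model is public, so the model, the epsilon value, and the random stand-in “photo” are all illustrative assumptions.

```python
# A sketch of adversarial obfuscation via the fast-gradient-sign method:
# one small, nearly invisible perturbation that pushes an image away from
# the label a classifier currently assigns it. A pretrained ImageNet model
# stands in for a platform's (non-public) NSFW detector.
import torch
import torch.nn.functional as F
from torchvision import transforms
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

def evade(image: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """One FGSM step that nudges the image away from its current top label
    while keeping it visually almost identical."""
    image = image.clone().requires_grad_(True)
    logits = model(normalize(image))
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    # Step *up* the loss gradient: the pixels barely change, but the model's
    # confidence in its current label falls.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

original = torch.rand(1, 3, 224, 224)       # stand-in for a real photograph
perturbed = evade(original)
print("label before:", model(normalize(original)).argmax().item(),
      "| label after:", model(normalize(perturbed)).argmax().item())
```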
As we saw earlier in this chapter, the discourse of moral panic that leverages the idea of unwanted and traumatic exposure of children to hard-core pornography to legitimate regimes of censorship, sexual discipline, and heteronormativity necessarily makes children and adolescents the most likely to have their internet traffic filtered—at school, college, the library, and at home. This filtration is likely to be under the direction of the people with the most authority at any of these locations, and thus the patterns of regulation of internet traffic are likely to draw upon the preexisting material relations of inequalities at these locations, which are often strongly heteronormative in the household.34 By pandering to these moral panics and providing overbroad filters to ensure the smallest possibility of “unwanted exposure,” filters like SafeSearch place themselves at odds with some of the more liberatory potentialities of the internet. Additionally, in the United States, some evidence suggests that adolescents who use online pornography are more likely to be African American and to come from less educated households with lower socioeconomic status.35 There are thus always class and racial tensions that cut through these sex panics.36
A number of scholars have argued that the internet can be a very effective medium for disseminating educational information about sexual health, introducing sexuality, and fostering sex-positive attitudes in children and adolescents.37 While people of any age can reap these benefits, it is more common for younger people to use the internet for information about sex and sexuality, and even more common for LGBTQIA+ youths to do so.38 Keep in mind that this is precisely the age group meant to have its internet traffic censored by filters like SafeSearch. Overblocking frequently leads to the censorship of sex education materials. Take, for example, the case of the National Campaign to Prevent Teen and Unplanned Pregnancy’s online campaign “Bedsider,” which was launched in 2012. The campaign was designed to be hipper than traditional public health messaging so that it would appeal more to young people, a common strategy in newer sex education social media campaigns. As Susan Gilbert, codirector of the National Coalition for Sexual Health, explains, “We have to make healthy behaviors desirable by using creative, humorous, and positive appeals.”39
Bedsider made use of these standard social media strategies to try to entice teens to engage in safer sex practices. For instance, Bedsider tweeted, “98 percent of women have used birth control. Not one of them? Maybe it’s time to upgrade your sex life.”40 In response, Twitter banned Bedsider from promoting its tweets for violating Twitter’s ad policy, which prohibits the promotion of or linking to adult content and sexual products or services. A Twitter account strategist noted that the problem would persist as long as Bedsider’s website continued to host the article “Condom Love: Find Out How Amazing Safer Sex Can Be.”41 Even though the article was focused on encouraging young people to engage in safe sex, the Twitter account strategist told Bedsider, “It still paints sex in a recreational/positive light versus being neutral and dry.”42 In 2017, Facebook banned advertisements from the National Campaign to Prevent Teen and Unplanned Pregnancy that promoted regular health checkups. Like others, their modern and catchy “You’re so sexy when you’re well” advertising campaign was deemed profane or vulgar. Similarly, journalist Sarah Lacy’s advertisements for her book The Uterus Is a Feature, Not a Bug were rejected for containing the word “uterus.”43
In response, Lawrence Swiader, Bedsider’s director, told the Atlantic, “We need to be able to talk about sex in a real way: that it’s fun, funny, sexy, awkward . . . all the things that the entertainment industry gets so well. How can we possibly compete with all of the not-so-healthy messages about sex if we have to speak like doctors and show stale pictures of people who look like they’re shopping for insurance?”44 This is not an isolated incident. The Keep A Breast Foundation, a youth-based organization that promotes breast cancer awareness and educates young people about their health, was banned from using Google AdWords because of their slogan, “I Love Boobies.”45 Both of these instances constitute a staggering reiteration of early Supreme Court bias in enforcing obscenity doctrine against LGBTQIA+ and sex education materials but not against Playboy magazine; Playboy has been allowed to advertise its content through its Twitter account and has even posted photos of bare breasts. And they have very real material consequences. In the last systematic study I could locate from 2013, the Pew Research Center found that 59 percent of Americans had turned to the internet for health information in the past year, with 77 percent of them starting at a search engine like Google, Bing, or Yahoo.46
The appeals process for banned content is too complex, time-consuming, and expensive for nonprofit organizations to successfully engage in. For instance, in 2014, the sex education organizations Spark and YTH (Youth+Tech+Health) had four of their sex education videos removed from YouTube. The organizations repeatedly contacted YouTube and filed two official appeals through the online process, all to no avail. They were only able to successfully get their videos reactivated after hiring a lawyer who happened to go to law school with another lawyer high up in YouTube’s policy department. As Swiader noted, “While some organizations have had success getting content through after initial rejection, the process of winning that minor victory is tireless. Many smaller organizations just don’t have the bandwidth to fight for each individual piece of content.”47 This has been a huge hindrance to online sex education campaigns, as at least forty sex educational content creators have had their YouTube videos demonetized, their channels deprioritized in search results, and their accounts shadow banned, including channels like Watts the Safe Word, Come Curious, Bria and Chrissy, and Evie Lupine.48
This overblocking is not confined to sex education though. Entire identity categories have been subject to overblocking, even when their online content is not sexually explicit. Take, for example, Google’s understanding of the term bisexual. From 2009 to 2012, Google only understood the term “bisexual” as a query for mainstream heteroporn. While the effects of this oversight at Google and their slowness to address it are quite bad, it is easy to understand how their algorithms would have come to such a conclusion. In mainstream porn, the term “bisexual” is popularly appropriated heteronormatively to signal only scenes with females willing to engage in group sex with other women—for example, male-female-female (MFF) threesomes.49 The term “bisexual” then is hugely popular in mainstream heteroporn, and mainstream heteroporn comprises a large percentage of internet pornography (if not of the web in its entirety). As such, the term “bisexual” actually is more likely to indicate pornography than not. And while it is a flagship term in the LGBTQIA+ marquee, bisexuals often speak of feeling underrepresented or even marginalized in LGBTQIA+ discourse. With the term often being collapsed into its container initialism, one can see how this usage would have been less compelling to the content filter’s machine learning protocols. The result was Google adding the term “bisexual” to a list of banned search terms that could cause a website to be deprioritized in search rankings if any of these terms appeared on the site. Because of this, for three years, all bisexual organizations and community resources were either deprioritized in Google Search results or completely censored.50
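The mechanism at issue is easier to see in miniature. The deliberately naive sketch below shows how a term blocklist, applied at ranking time, demotes an entire site whenever a banned word appears anywhere on it, treating a bisexual community resource and a porn aggregator identically. It is an illustration of the logic described above, not Google’s actual ranking code; the term list, demotion factor, and data shapes are all hypothetical.

```python
# A deliberately naive illustration of blocklist-based demotion: if any
# banned term appears anywhere in a page, the whole site is pushed to the
# bottom of the ranking. The term list, demotion factor, and result tuples
# are hypothetical; this is not Google's ranking code.
BLOCKED_TERMS = {"bisexual"}      # the term at issue in the 2009-2012 episode
DEMOTION_FACTOR = 0.0             # a factor of 0.0 sinks the site entirely

def rerank(results: list[tuple[str, float, str]]) -> list[tuple[str, float]]:
    """results are (url, relevance_score, page_text) triples."""
    reranked = []
    for url, score, text in results:
        if any(term in text.lower() for term in BLOCKED_TERMS):
            # A community resource and a porn site are treated identically.
            score *= DEMOTION_FACTOR
        reranked.append((url, score))
    return sorted(reranked, key=lambda item: item[1], reverse=True)

# Example: the support site sinks below every unrelated page.
pages = [
    ("https://example-bi-support.org", 0.92, "a bisexual community resource"),
    ("https://example-news-site.com", 0.75, "local election coverage"),
]
print(rerank(pages))
```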
I can find no comprehensive studies of the effects of Google’s changes to its algorithms post-2012 to disallow the censorship of bisexual organizations and community resources. There are also no comprehensive studies on other such terms that have been designated as exclusively pornographic, though “gigolo” and “swinger” went through similar classifications between 2007 and 2015.51 Without such studies, it is hard to determine how many online LGBTQIA+ resources are still being prevented from reaching their intended audiences by Google’s SafeSearch features. These sorts of resources are a particularly difficult issue to deal with from Google’s regulatory framework, as the line between explicit and educational or identity-forming content is hazy. In communities that look to the performativity of sex, sexuality, and/or sex acts for their communal identity formation, visuals and discourse that might be considered explicit in other contexts take on a new valence. Here “prurient” interest can be tethered to sexual education and individuation. “Hard-core” pornography is used—in particular by adolescents—for educational purposes.52 There is some evidence of a correlation between prurience—in this case masturbation to online materials—and seeking information about sex and sexuality online. While, not surprisingly, masturbation also correlates to viewing these materials more favorably, more interestingly it also correlates to people reportedly being less disturbed by sexual material.53
The internet is well suited for offering a safe space to experiment with one’s sexuality with few negative repercussions—people can “try on” and “test out” sexualities and practice coming out—and for building communities for people with marginalized sexual identities.54 As Nicola Döring notes, “The Internet can ameliorate social isolation, facilitate social networking, strengthen self-acceptance and self-identity, help to communicate practical information, and encourage political activism.”55 While the internet offers very promising opportunities for LGBTQIA+ individuation and community building, its heteronormative content moderation practices work to circumvent those opportunities. As Attwood, Smith, and Barker note, “Young people appear to be using their encounters with pornography as part of their reflections upon their readiness for sex, what they might like to engage in, with whom, how and what might be ethical considerations for themselves and prospective partners.”56 As such, we need to be having a much more robust conversation about what constitutes pornography, in which contexts, when it is actually in the best interests of children and adolescents to censor it, and how, and this conversation needs to better reflect LGBTQIA+ and sex-positive voices. To facilitate this conversation, we need a more robust and longer duration dataset that tracks online censorship, particularly when it comes to LGBTQIA+ resources online so that we can better understand just what content is being considered “explicit.” Additionally, we need to collect more information on how people (adolescents in particular) use the web for sexual education, experimentation, individuation, and community building.57 Without this basic information, it is very difficult to provide a well-founded critique of content moderation or to advocate for precise interventions into how content moderation algorithms ought to be altered to better suit LGBTQIA+ communities online.
As I noted briefly above, in March of 2018, the US Senate passed the Stop Enabling Sex Traffickers Act (SESTA) and the tacked-on Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) by a vote of ninety-seven to two. Collectively, these acts are known as FOSTA-SESTA and work to close off Section 230 of the CDA of 1996, which for two decades had allowed internet providers and content hosts to avoid legal culpability for obscenity and prostitution that may at times have been facilitated by their services. Under the pretense of protecting women from sex trafficking and cracking down on child sexual exploitation, FOSTA stripped away the protections of Section 230 and instituted a new, very ambiguous definition of content and services that can be considered to facilitate sex trafficking and prostitution. For instance, under FOSTA, sex work and sex trafficking are treated as the same thing, and content hosts and service providers can be held liable for “knowingly assisting, supporting, or facilitating” sex work in any way.58 Congresswoman Ann Wagner, a key sponsor of the bill, has also explicitly conflated consensual sex work and sex trafficking in a speech on the House floor.59
FOSTA is a clear mark of heteronormative bias in the congressional agenda, or at least a pandering to it. As Violet Blue notes, “Lawmakers did not fact-check the bill’s claims, research the religious neocons behind it, nor did they listen to constituents.”60 FOSTA was opposed by everyone from the ACLU to the Department of Justice.61 The Electronic Frontier Foundation (EFF) published many dozens of articles condemning the act, as did law professors, anti-trafficking groups, and sex worker organizations.62 Large internet companies like Amazon, Google, Facebook, Microsoft, and Twitter also collectively opposed the act under the aegis of the Internet Association in August of 2017.63 By November, however, these companies had changed their minds—ostensibly after unspecified revisions to the legislation—and were thanking the same senators before whom they were testifying in Congress about having facilitated Russian interference in the 2016 US presidential election.64 Tech journalists writing in both conservative and liberal news forums attributed this shift to Facebook’s breaking ranks and championing the legislation in the wake of their series of scandals ranging from Cambridge Analytica to Russian bots spreading pro-Trump propaganda and hate speech.65
FOSTA was immediately claimed as a significant victory for NCOSE, which wrote shortly after its passage, “This is as great a moment in the fight to free our country from sexual exploitation, as the Emancipation Proclamation was in ending the scourge of slavery.”66 It is unclear exactly how much credit can be reasonably attributed to NCOSE for FOSTA. As has been noted, between 2016 and 2018, NCOSE was able to get its language into the official Republican Party platform and saw a number of states pass resolutions declaring pornography a public health crisis. It is hard to say exactly how much of this grassroots organizing had made its way into the US House of Representatives and Senate, but the timeline of FOSTA overlaps significantly with this movement, and it is safe to assume that it is part of a growing consensus among legislators that pornography needs stronger regulation, though under the familiar guise of protecting children. What is clear is that the discursive conventions of NCOSE have become mainstream. Their so-called intersectional approach to sexual exploitation, which considers pornography and human sex slavery to be different only in degree rather than in kind, has been taken up on both sides of the aisle and was echoed by the Internet Association and Facebook executives like Sheryl Sandberg in particular. This is a particularly dangerous conservative anti-sex apparatus that has been constructed. It can mobilize the ambiguity of its blurred definition of sexual exploitation—containing equally everything from soft-core pornography to sex slavery—to attack any and all forms of sexual expression from the unassailable rhetorical ground of protecting children from being sexually abused and exploited on the dark web. And further, as has repeatedly been the case in the past, these sorts of apparatuses often reach a détente with the untamable flow of erotic expression in which only the most industrialized, corporate, and heteronormative versions of pornography are able to persist.
Ron Wyden, the only senator besides Rand Paul to vote against FOSTA, noted that rather than preventing sex trafficking or helping child victims of abuse, the law would primarily create “an enormous chilling effect on speech in America.”67 We can already see that this is precisely the case. The new law incentivizes law enforcement to focus on intermediaries that facilitate prostitution rather than sex traffickers themselves. It thus shifts focus away from real criminals, and in shuttering these intermediaries, it cuts law enforcement off from essential tools that were previously used to locate and rescue victims. It similarly cuts law enforcement off from easily tracked evidence that can be used in criminal cases against sex traffickers. This is why the bill was also opposed almost universally by anti-trafficking groups and sex work organizations.68 Chapter 4 will dig deeper into the impact that FOSTA has had on the finances and everyday lives of sex workers and adult entertainers, with a particular focus on those offering LGBTQIA+ services and content. Here it is worth exploring how a number of platforms responded to the shift in regulatory policy. Most major ISPs and internet platforms clamped down on sexual expression, ramped up their content moderation practices, and ended up overblocking more content than ever. In particular, we will look at the overblocking imposed by Apple through its App Store that serves as a gatekeeper to all iPhone users globally and at the Google platform, both of which have engaged in heteronormative overblocking in the wake of FOSTA-SESTA.
The Apple App Store was set up to function as a sort of moral policing mechanism for mobile content. Steve Jobs famously said, “We do believe we have a moral responsibility to keep porn off the iPhone,” and he further noted that “folks who want porn can buy an Android phone.”69 This anti-sex sentiment has been literally codified in both the community standards for the app store and the algorithmic procedures for policing iOS content. These policies have been claimed as a victory by NCOSE, which had been putting pressure on Apple for years.70 This sentiment also permeated the iPhone’s firmware at one point, as researchers discovered in 2013 that the following words were intentionally excluded from the iPhone’s dictionary and thus also from autocorrect and auto-complete: abortion, abort, rape, arouse, virginity, cuckold, deflower, homoerotic, pornography, and prostitute.71 In essence, the system was hardwired to be blind to these terms and thus to inhibit conversations mediated by iOS about abortion, rape, virginity, sex, homosexuality, pornography, and prostitution. The industry describes lists like these as kill lists, and many text input technologies like Android and the Swype keyboard contain them as well.72
This was not new for Apple, as in 2011, it was discovered that Siri could not answer simple questions about where people might go to get birth control or to receive an abortion. In the latter instance, Siri would instead direct iPhone users to antiabortion clinics.73 In 2016, researchers in New Zealand found that Siri produced either no answer or answers from disreputable sources for 36 percent of the fifty sexual health–related questions they asked. In particular, Siri failed to produce visual illustrations, misinterpreted “STI” as a stock market quote, and when asked to tell them about menopause pulled up the Wikipedia page for the show Menopause the Musical that was then running in Las Vegas.74 These findings were in line with previous research demonstrating that Siri trivialized many important inquiries about mental health, interpersonal violence, and physical health.75 These discrepancies between voice search and desktop search have a disproportionate impact on communities that more frequently access the internet through these or similar vocal interfaces, including people with visual impairments, lower literacy rates, or whose only internet-enabled device is their phone. Nor are they exclusive to Apple. In 2016, all the top virtual assistants—Siri, Google Now, and S Voice—could not understand questions about what to do if you are raped or being abused in a relationship.76 As Jillian York, then director of international freedom of expression at the EFF, told The Daily Beast, “I hate to say it, but I don’t think this should surprise anyone. Apple is one of the most censorious companies out there.”77 Apple’s commitment to an anti-sex and anti-pornography regime of censorship is particularly important because so many technology companies, and platforms in particular, require the intermediation of the App Store to interact with iPhone users.
Take Instagram, for example. As Instagram cofounder and former CEO Kevin Systrom explained, much of the platform’s focus on censoring explicit content is meant to maintain its 12+ rating in the Apple App Store and thus capture a larger youth market share.78 Instagram largely achieves this by operating two types of censorship based on hashtag use. The first type permanently blocks all content with particular hashtags from ever appearing in a search. This blocklist contains over one hundred hashtags that have been applied to millions of photos, mostly having to do with nudity, pornography, pro-anorexia, and self-harm. These hashtags range from the expected—#anal, #bigtits, #blowjob, #porn, and so on—to the vaguely sexual and somewhat surprising—#cleavage, #sexual, #femdom, #fetish, #footfetish, #freethenips, #gstring, #nipple, #shirtless, #twink, #wtf, and the like.79 These banned hashtags betray a general anti-sex comportment that sexualizes and objectifies female bodies by banning images of female-presenting people showing cleavage or wearing thongs, attire that would be legally permissible in public settings. They also work to foreclose politically sexual speech, like the famous Free the Nipple campaign in which males and females posted close-up photos of their nipples so that Instagram would have trouble determining whether they were sexually explicit or not according to its policies. The campaign began in 2014 to protest the double standard in which female nipples are eroticized and legally required to be covered in public under most state and local laws in the United States. The campaign grew immensely on Instagram and even attracted attention and support from celebrities like Miley Cyrus and Chelsea Handler. Lastly, Instagram bends its own rules to render images of feet censorable as sexually explicit solely based on the context clue of a hashtag indicating that users might masturbate to the images. This demonstrates a stronger commitment to preserving the spirit of heteronormativity than to the letter of its community standards.
The second type of censorship is a soft ban in which a select number of hashtags return only thirty or so results. Users searching for these hashtags receive the following message from Instagram: “Recent posts from #[hashtag] are currently hidden because the community has reported some content that may not meet Instagram’s community guidelines.”80 While Instagram implies that these soft bans are only temporary, research has shown that many have remained censored for at least several months at a time. Some of the soft-banned hashtags include #bi, #curvy, #everybodyisbeautiful, #iamgay, #lesbian, #mexicangirl, and #woman.81 This demonstrates an even more insidious policing of queer expression on the platform, as it reinforces body normativity, reproduces Google’s earlier obfuscation of bisexual discourse, and shuts down the sharing of images from LGBTQIA+ users because of the potential association of these hashtags with pornography. More recently, Instagram has soft-banned hashtags in a way that reaffirms cisnormativity. For example, in 2018, the hashtags #woman, #strippers, and #femalestrippers were all banned but hashtags like #man and #malestrippers were not.82 In the wake of FOSTA, the company also banned the hashtag #sexworkersrightsday, further marginalizing and stigmatizing sex workers in the United States.
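In pseudocode terms, the two regimes look something like the sketch below: a hard ban returns nothing at all, while a soft ban truncates results to a handful of “top” posts and appends the notice quoted above. The hashtag sets and the thirty-post cutoff follow the examples and the “thirty or so results” already described; the function name and data shapes are illustrative assumptions rather than Instagram’s actual implementation.

```python
# A sketch of the two hashtag regimes described above: a hard ban that
# surfaces nothing, and a soft ban that truncates results and shows a notice.
# Hashtag sets reuse examples cited in the text; everything else is illustrative.
from typing import Optional

HARD_BANNED = {"#porn", "#cleavage", "#shirtless", "#twink", "#wtf"}
SOFT_BANNED = {"#bi", "#curvy", "#iamgay", "#lesbian", "#woman"}
SOFT_BAN_LIMIT = 30

def search_hashtag(tag: str, posts: list[dict]) -> tuple[list[dict], Optional[str]]:
    """Return (visible posts, notice to display) for a hashtag search."""
    matches = [p for p in posts if tag in p["hashtags"]]
    if tag in HARD_BANNED:
        return [], None                     # the hashtag never surfaces at all
    if tag in SOFT_BANNED:
        notice = (f"Recent posts from {tag} are currently hidden because the "
                  "community has reported some content that may not meet "
                  "Instagram's community guidelines.")
        return matches[:SOFT_BAN_LIMIT], notice   # top posts only, no recents
    return matches, None
```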
While these two forms of censoring images from appearing in search results based on hashtag use are Instagram’s most proactive efforts to censor nudity on its platform, it also uses some rather clunky computer vision algorithms to automate content moderation and community reporting procedures that are exploitable by alt-right misogynists (as we’ll see in chapter 4). While there is little to no publicly available data on these systems, it is fair to assume that they internalize a lot of the same biases as Google’s SafeSearch and Cloud Vision API. What is available are many instances of Instagram’s algorithms failing to appropriately identify objects in images. Sometimes this leads to quite funny and ridiculous results, such as in 2019 when Instagram censored a photograph of a potato.83 Other times the results are much more appalling.
In 2018, internet studies researchers Stefanie Duguay, Jean Burgess, and Nicolas Suzor interviewed queer female Instagram users and found that they experienced Instagram’s content moderation as overly stringent.84 That same year, journalist and sex worker Alexander Cheves reached out to his network on Twitter and received over one hundred messages from other sex workers and adult performers whose accounts on Instagram had been flagged, disabled, or shadow banned in 2018.85 Here are just a few of the particularly egregious examples of Instagram censoring (nonpornographic) LGBTQIA+ content. In 2019, Instagram banned the account of Tom Bianchi, a male erotic photographer and HIV activist who has helped to document the history of gay men’s lives on Fire Island and elsewhere in the United States since the 1970s.86 Speaking about his photography, Bianchi told LGBTQ Nation,
Fire Island was, for me, a little utopia away from everything. It’s literally an island. And even for me, my photos were an idealization. . . . Stonewall happened right before I got to New York and shortly before I started doing all of this at Fire Island. The image of the homosexual was that of degenerates working in shadows and perverts trying to seduce children. So healthy young American boys playing on the beach? Early game changer. . . . Basically I saw myself as the supporter of and encourager of the whole gay consciousness that was emerging at that time in a very positive way. . . . What’s special about it is remembering the affection that we all had for each other. We were all best buddies. We played together, we partied together, we adored each other. We danced with each other.87
In 2018, Instagram also censored a photo of Queer Eye’s Antoni Porowski in his underwear.88 Also in 2018, the Warwick Rowers, a rowing team that highlights advocacy and allyship for women and queer communities, had yet another of their posts censored on Instagram. This time, the censored photo—the cover of their upcoming charity calendar, whose proceeds support LGBTQIA+ inclusivity in sports—showed the rowers nude but with none of their genitals exposed.89 Perhaps most egregiously, in late 2017, Instagram censored a photo of two lesbian women cuddling in bed with their child.90 All of these efforts are solely meant to prevent nudity from becoming easily visible on the platform so that Instagram can maintain its market share of iPhone users. This market share is more valuable to the company than the intermittent public relations crises that result from its stifling of LGBTQIA+ expression.
Apple’s aggressive anti-porn censorship regime even impacts large independent companies like Barnes & Noble and Amazon, both of which rely on the Apple App Store to disseminate e-reading apps. For instance, in 2017, Barnes & Noble began terminating the accounts of erotica writers on its Nook platform without warning.91 In 2018, Amazon followed suit and began shadow-banning authors of romance, erotica, and similar books considered to be sexual content. A number of authors had their books stripped of their best-seller rankings with no warning or notice from Amazon. While this alteration may seem mild to some, many of Amazon’s algorithms use best-seller rankings to determine how a book appears in searches, whether it shows up in advertisements, and whether it can be served up as a recommendation for buyers who have purchased similar titles.92 These changes took effect only on the US Amazon site, which suggests that Amazon introduced them in anticipation of FOSTA’s enactment.93 However, the focus on eliminating erotica from the Nook and Kindle stores also betrays an effort to censor mobile content, one likely meant to assuage Apple and keep both companies’ e-reading apps in the Apple App Store. This marks a radical divergence from past precedent in the United States, where the last major attempt to censor an “obscene” literary text targeted William S. Burroughs’s Naked Lunch; the Massachusetts obscenity ruling against the book was overturned by the state’s highest court in 1966, following trial testimony from Allen Ginsberg and Norman Mailer. Since then it has been presumed that establishing the negative impact of literary texts and demonstrating their obscenity is too high a bar to clear, and censorship has largely been reserved for audiovisual texts. While Amazon is a private company and does not have to adhere to these precedents in managing its digital storefront, it is shocking to see it take such a conservative and anti-sex stance on literotica. Further, self-publishing e-books presents a low barrier to entry for authors, since it is cheap and easy to do, and thus literotica is a haven for LGBTQIA+ and other non-normative sexual content. Shadow-banning literotica from the Kindle and Nook platforms makes queer content harder to produce, locate, and afford for authors and readers alike.
Reddit similarly had a number of its apps pulled from the App Store in 2016 because they contained an NSFW toggle that allowed users to search for porn subreddits and view them on their iPhones. Reddit was forced to remove the toggle and make it extremely difficult to view any pornographic content through its apps in order to get them fully reinstated in the App Store.94 In 2018, Microsoft banned nudity and profanity platform-wide, including on Skype, in Office 365 documents, and in Microsoft Outlook, a policy shift likely connected to the company’s push to integrate these services into the mobile app ecosystem on iOS.95 In fact, all major platforms and app providers are forced to bow before Apple’s anti-sex morals, as Apple gatekeeps access to between 20 and 25 percent of all mobile phone users globally.96 FOSTA simply gave Apple yet another financial excuse and set of rhetorical tools to justify its heteronormative policing of sex and sexuality.
We might similarly read Facebook’s dedication to policing sexual expression, examined in chapter 2, as another result of Apple’s gatekeeping, given the large portion of Facebook users who access the platform primarily through its mobile app. However, while there is plenty of evidence of Facebook overblocking LGBTQIA+ content, there is less documentation directly connecting it to Apple’s standards for its App Store. While we’ve already examined Facebook’s heteronormative content moderation policies and some of their impacts on sex education, it is worth adding a few explicit examples of the company censoring LGBTQIA+ speech before moving on to the Google platform. In 2018, several site admins for the sex education group SEXx Interactive on Facebook were banned the day after their biggest annual conference over an “offending image,” which turned out to be their logo: simply the word SEXx in bold black text on a solid peach background.97 Cyndee Clay, executive director of the sexual health and harm reduction advocacy group HIPS, told Motherboard that they were seeing a lot of content getting blocked or removed from Facebook for violating community standards, including a post from a friend of hers asking to interview sex workers for an article.98
In a 2018 story, the Washington Post found dozens of LGBTQIA+-themed advertisements that were blocked on Facebook for supposedly being “political,” caught in the crossfire of Facebook’s attempt to moderate political content after the 2016 election and the alleged Russian misinformation campaign. These included advertisements for pride parades, beach concerts, pride-themed nights at a sports arena, an LGBT youth prom, an NAACP-sponsored conference on LGBTQIA POC, a Lyft ad raising money for an LGBT community center, an LGBT-themed tourist expedition to Antarctica, gay social groups, a gay comedian’s stand-up event, senior-friendly housing options, and, perhaps most notably, an advertisement for a panel discussion with an LGBT radio station in Washington on the history of Stonewall.99
Steve Jobs’s recommendation that pornography enthusiasts turn to Android was misleading at best. The Google platform has largely kept pace with Apple in the race to see which can be the most anti-sex and anti-pornography. For example, Google maintains its own “kill list” for Android. In 2013, researchers found that the word list built into Android filtered out terms like intercourse, coitus, screwing, lovemaking, most terms for genitalia (with special attention paid to female anatomy), panty, braless, Tampax, lactation, preggers, uterus, STI, and condom.100 These words were not contained in its dictionary and were not available for autocorrect or auto-complete functions. This essentially made it more difficult for Android users to talk about sex and about their bodies, betraying a sex negativity whose enforced silence helped reinforce heteronormativity. In the same period that Apple was pointing the finger at Google for being pro-pornography, Google was systematically censoring pornography across its entire platform. Google banned pornography on Google+ at its rollout in 2011, on Blogger in 2013, in Google Glass apps in 2013, on Chromecast in 2014, on AdWords in 2014, and in the Google Play app store in 2014.101
Since then, people have reported Google Drive automatically deleting pornography stored on Google’s servers without warning.102 For a period in 2018, Google News censored all articles with the word “porn” in them, including legitimate articles that simply happened to be about or to mention porn, like stories on revenge porn or on the suicides of adult entertainers published in mainstream newspapers and magazines.103 In July 2018, Google AdSense blacklisted a page on GovTrack.us for hosting legislative information about a then thirty-two-year-old bill called the “Child Sexual Abuse and Pornography Act of 1986.” The site’s admin asked Google to review the violation but was quickly told that the request to unflag the page was denied and that the page would remain unable to display AdSense ads to generate revenue.104 Today, you can even purchase a Google router and use Google Family Wi-Fi to filter all web traffic passing through that router with Google SafeSearch.105
Each of these bans produced instances of overblocking, most notably the shift in content policy at Blogger. In 2013, Google announced that it had changed the content policy for the site, which had provided free blog hosting since 1999. The changes included a policy shift that would ban and begin deleting blogs “displaying advertisements to adult websites,” without offering any definition of what constituted “adult” content. As Violet Blue reported, the blogs that Blogger marked for deletion at the time included “personal diaries, erotic writers, romance book editors and reviewers, sex toy reviewers, art nude photographers, film-makers, artists such as painters and comic illustrators, text-only fiction writers, sex news and porn gossip writers, LGBT sex activism, sex education and information outlets, fetish fashion, feminist porn blogs, and much, much more.”106 In 2015, Google made additional changes, removing adult blogs from its search index, hiding them from public discovery without a direct invitation and Google login, and providing content warnings to visitors before they land on the page.107 After these changes, bloggers were left with few alternatives for hosting their content. WordPress.com may host what Google considers “adult” content but does not offer options to monetize that content. Until 2018, Tumblr was a popular option, but its adult blogs were not indexed by Google Search and monetization was also difficult. The only real option was for bloggers to pay to host their own blogs, which produced financial and technical barriers for content producers.
Nowhere has overblocking been more visible on Google’s platform than on YouTube. As digital media researchers Jean Burgess and Joshua Green note,
Advertiser-friendly content regulation—particularly using automated methods—can just as effectively smooth the edges off radical progressive politics or the witnessing of human rights abuses as it can work for the intended purpose of dampening abuse, hate speech, and extremist activity. And the conflation of sexual content and harmful speech in content regulation can often end up inadvertently discriminating against sexual and gender minorities.108
This became readily apparent in what is popularly referred to as YouTube’s adpocalypse in 2017. Advertisers realized that their ads were popping up alongside videos from white nationalists and hate preachers as well as sexually explicit content. Major advertisers like Coca-Cola and Amazon pulled their ads from the platform and ad revenues plummeted.109 YouTube acted swiftly to implement a system to automatically demonetize any videos violating its new “Advertiser-Friendly Content Guidelines,” thereby preventing ads from appearing alongside them. The criteria it used to make these determinations were vague and expansive, covering videos whose main topics included inappropriate language, violence, adult content, harmful or dangerous acts, hateful content, incendiary and demeaning content, recreational drugs and drug-related content, tobacco-related content, firearms-related content, adult themes in family content, and controversial issues and sensitive events like politics, war, and tragedies, regardless of whether they were presented “for news or documentary purposes,” as well as a lot of LGBTQIA+-related content.110
YouTube’s system is unique because its censorship relies entirely on automated, machine learning–based content filters and does not incorporate community flagging or reporting. As YouTube notes, “In the first few hours of a video upload, we use machine learning to determine if a video meets our advertiser-friendly guidelines. This also applies to scheduled live streams, where our systems look at the title, description, thumbnail, and tags even before the stream goes live.”111 YouTube acknowledged that the system was imperfect and implemented an appeal process through which creators of demonetized videos can get their cases reviewed, but only if the video has been viewed 1,000 times in the past seven days. This requirement effectively prevents niche YouTubers from ever successfully appealing the demonetization of their videos and puts an unfair burden on smaller-scale content creators.112 While these changes continue to cause significant damage to LGBTQIA+ content creators, they successfully appeased advertisers, who quickly began returning to the platform.113
For example, Erika Lust, an erotic filmmaker, had her account shut down and was permanently banned from the platform after posting a series of video interviews with sex workers about their trade.114 Lust wrote on her website, “There was NO explicit content, NO sex, NO naked bodies, NO (female) nipples or anything else that breaks YouTube’s strict guidelines in the series. [ . . . ] It was simply sex workers speaking about their work and experiences.”115 In 2018, the YouTube channel for Recon, a fetish dating site for gay men, was suspended yet again, only being reinstated after a backlash on Twitter and in the press.116 YouTube demonetized many of Sal Bardo’s films, including Sam, a film about a bullied trans boy’s journey of self-discovery, despite the fact that the film has been screened at festivals and in classrooms around the world and has over six million views on YouTube.117 Queer YouTuber Stevie Boebi reported that all of her lesbian sex videos were completely demonetized on the platform.118 Gaby Dunn similarly reported that YouTube had demonetized all of the LGBTQIA+ and mental health content on her and Allison Raskin’s channel Just Between Us.119 YouTubers Amp Somers and Kristofer Weston of Watts the Safeword have also had their content flagged and/or demonetized on YouTube.120
YouTube has offered a Restricted Mode since 2010, which is meant to be used by libraries, schools, public institutions, and users “who choose to have a more limited viewing experience on YouTube.”123 There are only two ways that a video can become censored in Restricted Mode: (1) the content’s creator can apply an age restriction to any of their videos, and (2) an “automated system checks signals like the video’s metadata, title, and the language used in the video.”124 Videos that deal with drugs and alcohol, violence, or mature subjects, that use profane and mature language, that contain incendiary and demeaning content, and, most importantly for our purposes, that depict sexual situations are all subject to restriction. YouTube describes these sexual situations as follows: “Overly detailed conversations about or depictions of sex or sexual activity. Some educational, straightforward content about sexual education, affection, or identity may be included in Restricted Mode, as well as kissing or affection that’s not overly sexualized or the focal point of the video.”125 This poses a key problem for people creating youth-oriented content on sexual health and sexual identity, especially when they attempt to make this material appealing to young people, as we’ve already seen with sex educators on Twitter and Facebook. Further, videos suffering from restriction offer a much more damning portrait of the company since, as its site notes, the only ways a video can be restricted are through a creator self-selecting an age restriction or through an internal, automated content filter. In each of the cases we will examine, this is worth bearing in mind. These restrictions are not caused by an army of misogynist trolls flagging LGBTQIA+ videos as inappropriate; they are the product of a fully automated function on YouTube’s platform, betraying its hardcoded heteronormativity.
In 2017, a number of LGBTQIA+ content creators noticed that their videos were being censored in Restricted Mode, and the hashtag #YouTubeIsOverParty trended on Twitter as content creators commiserated with one another and began protesting YouTube’s biased censorship.126 Notice the timing: this unfolded alongside the congressional push that would culminate in FOSTA, a veil behind which internet platforms ramped up their censorship of LGBTQIA+ content. For example, Rowan Ellis, a feminist and queer YouTuber who makes videos about pop culture, activism, and self-care, found that forty of her videos were being censored under Restricted Mode. In her video on the subject, Ellis noted, “The sexualization of queer and trans people is still rampant. This kind of insidious poison which makes us seem inappropriate is still around. It is still having an effect.”127 In another example, Calum McSwiggan, an LGBTQIA+ lifestyle vlogger, found that all of his videos except one had been censored under Restricted Mode. McSwiggan acknowledges that a number of his videos include content inappropriate for children but notes that even videos with clean language and no explicit sexual themes were restricted without cause. Examples of such videos include one explaining gay pride and why LGBTQIA+ individuals march every year, a video celebrating the gay marriage of two of his friends, a video he made in collaboration with Tom Daley in which they interview celebrities about their pride heroes, and a spoken-word video detailing how McSwiggan came out as gay to his grandmother.128
A popular LGBTQIA+ YouTuber named Tyler Oakley similarly complained on Twitter that his video “8 Black LGBTQIA+ Trailblazers Who Inspire Me” was blocked by YouTube’s Restricted Mode.129 A number of Sal Bardo’s videos were restricted, including his contribution to It Gets Better, a campaign meant to prevent suicide among at-risk youth.130 Bisexual YouTuber neonfiona noted that on her channel, all the videos about her girlfriends were blocked while all the videos about her boyfriends remained visible in Restricted Mode; toggling the Restricted Mode setting thus effectively transforms neonfiona from a bisexual woman into a straight woman.131 Another bisexual YouTuber, Melanie Murphy, reported the exact same thing happening to her channel.132 YouTuber Gigi Lazzarato had all of her videos about coming out as transgender restricted, along with many that discussed gender identity and sexuality. She notes, “[I]t’s scary on so many levels because I know when I was younger, YouTube was my family, YouTube was the place where I found a community of people that understood what I was going through.”133 Seaine Love’s video about coming out as transgender was restricted as well.134
In response to the complaints of these YouTubers, the company sent out a tweet noting that “LGBTQ+ videos are available in Restricted Mode, but videos that discuss more sensitive issues may not be.”135 In an emailed statement, YouTube representatives acknowledged that the company’s automated system may be incorrectly labeling some LGBTQIA+ videos as violating its community guidelines for Restricted Mode, noting, “[W]e realize it’s very important to get this right. We’re working hard to make some improvements.”136 Within a month, YouTube claimed to have fixed a problem on the “engineering side” that was incorrectly filtering twelve million videos, hundreds of thousands of which featured LGBTQIA+ content.137
In 2018, a year after YouTube apologized for “accidentally” blocking, demonetizing, and/or age-gating the content of YouTubers like Rowan Ellis, Tyler Oakley, Stevie Boebi, and neonfiona, Chase Ross noted that any of his videos that contained the words “trans” or “transgender” in their titles were being demonetized or removed completely; the same videos with different titles were left alone. Ty Turner similarly tweeted that his channel was penalized for a video he posted about picking up his prescribed testosterone.138 Not only do LGBTQIA+ videos continue to be censored, demonetized, and age-gated on YouTube, but the company has also since allowed extremist anti-LGBTQIA+ advertisements, a number of which came from the Alliance Defending Freedom, deemed a hate group by the Southern Poverty Law Center, to run alongside LGBTQIA+ content on its platforms.139
Women of Sex Tech, a group of entrepreneurs in the sex and technology industries, had its first-ever live-streamed conference of presentations and talks censored by YouTube in 2020. SX Noir, the vice president of Women of Sex Tech, told Motherboard, “I think this indicates that there will always be a moral judgment on these platforms. . . . When cis, heterosexual white men create these digital worlds, you see these moral judgments leading to more discrimination for people who are brown, black and queer.”140 In 2021, YouTube’s overblocking of LGBTQIA+ content is still palpable, and LGBTQIA+ content creators still complain of censorship. For instance, as I write, you can still go to neonfiona’s channel, toggle Restricted Mode, and watch as her sexual identity appears to shift from bi to straight. By leveraging the rhetoric of protecting children and combating criminality and sexual deviance, YouTube is complicit in silencing LGBTQIA+ discourse for youth and for anyone poor enough to need to access YouTube through public computers. And, by its own admission, this is an instance of pure algorithmic bias.
One of the oddest victims of FOSTA has been creators of autonomous sensory meridian response (ASMR) videos, called “ASMRtists.” ASMR is a sensory phenomenon “in which individuals experience a tingling, static-like sensation across the scalp, back of the neck and at times further areas in response to specific triggering audio and visual stimuli.”141 These triggers are wide-ranging and most often nonsexual. Browsing the most frequently viewed ASMR videos on YouTube brings up content like whispering, ear cleaning, massage, tapping, peeling, brushing, crunching, squishing, and eating sounds. People who experience ASMR report a pleasant, relaxing feeling while listening to and/or viewing ASMR content, and this, rather than any supposed sexual enjoyment, is its primary purpose. Research has shown that these same people experience reduced heart rate and increased skin conductance levels while listening to or viewing ASMR content, which may indicate that it has therapeutic benefits.142 There is evidence it may be useful in treating everything from depression to chronic pain.143 Relaxation is also the most frequently cited reason for accessing ASMR content online: in one survey, 82 percent of people used ASMR content to help them sleep, 70 percent to deal with stress, and only 5 percent for sexual stimulation.144 ASMR has also become remarkably mainstream. Rapper Cardi B noted that she listens to ASMR content every night, Ikea made ASMR advertisements for its furniture, and automaker Renault made an ASMR advertisement for one of its new cars.145 Michelob even ran an ASMR ad for its Ultra Pure Gold beer during the 2019 Super Bowl.146
ASMRtists have long had to contend with the assumption that ASMR is a sexual fetish, and the genre has recently become a new target of the war on porn. In 2018, China cracked down on ASMR, calling for its leading video sites to “thoroughly clean up vulgar and pornographic ASMR content,” a directive that sites like Youku, Bilibili, and Douyu complied with by removing ASMR content writ large.147 While the response in the United States has not been as extreme, it has certainly been troubling and betrays a heteronormative paranoia about queerness. YouTube began demonetizing the genre in 2018. For example, the YouTube channel ASMR with MJ received a notice from YouTube for violating its community guidelines, as nearly a third of its videos were suddenly considered improper for monetization.148 In another example, the woman running the channel Be calm with Becca took to Reddit after having a number of her videos demonetized, including videos in which she is fully clothed and talking about clothes. As she notes, YouTube’s appeals policy requires a video to get 1,000 views in a week before the company will review it, a near impossibility for many of the demonetized ASMR videos, which are older and have niche audiences.149
This reaction quickly spread to PayPal, which began banning ASMRtists for life and freezing their funds for 180 days. Content creators like Sharon DuBois (ASMR Glow), Scottish Murmurs, Creative Calm, and RoseASMR all had their PayPal accounts banned and funds frozen, though two of them were able to successfully appeal the decision.150 As Violet Blue has explained, there was an odd correlation between the ASMR accounts being demonetized, censored, and banned online and the gender of the content creators, one that can only be explained by looking to the manosphere, which had begun mobilizing against (female) ASMRtists on an 8chan forum called “PayPal Lowering the Hammer on ASMRtits [sic].”151 The forum’s name is a crude pun on the term ASMRtists, which describes the predominantly female content creators. The censorship of ASMRtists betrays an assumption that all LGBTQIA+ and female-created content is automatically sexual and ought to be subject to stricter scrutiny on the part of internet platforms, an assumption that is all too easy for alt-right misogynists to exploit.
Despite the censorship crackdown, a number of companies have rushed to capitalize on ASMR content.152 The recent ASMR app and platform Tingles aims to supplant both YouTube and Patreon by hosting ASMR content and monetizing it for ASMRtists in one place. Tingles tries to lure ASMRtists to its platform by promising to quadruple their ad revenue and offering incentive gifts for reaching certain numbers of supporters.153 However, ASMRtists have reported that the company is a scam: by registering, content producers automatically have their entire YouTube portfolio uploaded to the Tingles platform, which disables their YouTube ad revenues and severely decreases their overall income.154 A similar attempt at commercialization is Monclarity’s integration of ASMR content, produced in-house, into its Mindwell meditation app. Mindwell now offers voices that pan across speakers to give users a sense of companionship that can aid with calming and relaxation, often accompanied by music.155 It is worth noting that neither of these apps has been banned from the Apple App Store or the Google Play Store, and there are no murmurs among the alt-right community about targeting them for censorship in app stores. Perhaps this is because both companies are owned and operated by men? In a heteronormative internet rife with biased censorship, it seems only men are allowed to control the sufficiently vertically integrated and capitalized companies that can push their content through the content filters and community guidelines to reach a user base at web scale. Anti-porn organizers only rest once digital prostitution is placed under the control of digital pimps.