Moderation... Moderation... Moderation
A letter underscoring the importance of moderation tools and practices on internet products.
Hello everybody! Angelo here- I published this letter because I refuse to be ruled by a fear of putting out potentially substandard work. As high as my standards are, I realize I can't objectively gauge my own output. If you are willing to read some of my drafts or ideas and help me raise the bar for this and future letters, please reach out and send me a DM on Twitter. With that being said- please enjoy this month's newsletter on moderation on the internet.
From Facebook to the nascent app Clubhouse, I wholeheartedly believe that our efforts to achieve disintermediation in our communication environments left us with inadequate tools to scale non-violent communication. Regrettably, the systems for internet communication have instead scaled up the reach of near-violent communication, stoking existing trends of disinformation that have led to physical harm. As a result, there remains an asymmetric advantage for certain types of speech over others. To be clear, I am not asserting that mere words or opinions are violence; when I say violent communication, I mean speech that actively calls for physical harm or promotes blatantly racist ideals. I have written before that our digital platforms favor a specific meta defined by various business goals. The meta is composed of a few ingredients: partially algorithms, partially topic and focus, but mostly human behavior. I assert that consistently viewing a platform's content and policy work as a cost center costs companies. This view has second-order social effects, and it blinds companies to what could be a profit center instead.
What I am about to mention is not particularly new; policy teams at every single tech company have been commenting on this landscape and the precarious position we find ourselves in. Now, however, this trend is impacting the corporate environment with potentially adverse effects. As we saw at the first debate of the 2020 Presidential election cycle, rules that don't get enforced are merely suggestions. And while plenty of think-pieces have been fairly put forward by journalists and tech policymakers alike, many of them miss the historical perspective: the tools older products used to provide and the moderation experience of both the users and the admins of communities. The moderation crisis is a predicament, not a problem to be solved with a wave of the hand; as such, we can only observe outcomes.
The Status Quo
Popular search engines and social media sites rely on one central component of their business models for content produced on their platforms: near-zero content acquisition costs. Meaning, a company like Twitter doesn't have to pay people to post content on its network, unlike Spotify, which might pay specific big-name creators to help with user acquisition. Less often mentioned are the acquisition cost and the maintenance cost. To break down the former for those unfamiliar: an upstart like TikTok, which at one point held no brand value, would pay for ads to convince people to watch and then become active producers of content. That is a line item on the company's balance sheet; the maintenance costs are the costs of doing business that eat into the margins on that content. Example costs that eat into the margin include copyright enforcement, platform rule enforcement, and the storage costs of hosting said information.
It is no secret that "The Blue App" isn't exactly the company's future as it enters the maintenance stage of the product life cycle. In addition, Facebook implicitly recognizes that its flagship site needs to seem impartial with an increasingly politicized Department of Justice wishing to revise Section 230. The website's demographics have made it a perfect home for conservative-leaning posts, which serve as an excellent foil for anyone discussing politics on the site. Around 2016, discounting foreign influence operations, plenty of conservative talk show hosts were able to get their start on the platform during the famed "pivot to video." Around that time, Facebook Groups became one of the largest draws to the site. Groups of all stripes flocked onto the platform, becoming one of the largest drivers of Facebook's growth and a reliable way for it to recover churned users. I witnessed the rise of multiple internet communities on the FB platform, like Flat Earth Discussion Group, PHP, Hackathon Hackers, Post Aesthetics, Thunderdome(s), and Subtle Asian Traits.
We can view these communities in the context of what the internet once was: a network of hard-to-discover islands. Over time, ever-larger islands were built that served as connection hubs and then transformed into places to spend time in and of themselves. Before the meme pages and FB Groups that now dominate most time spent online, many smaller yet popular websites like YTMND, eBaumsWorld, Digg, and Newgrounds hosted and facilitated controversial content, but they were less networked, which limited the reach of that content. Because of these websites' admin-based focus, however, users expected that content would be moderated or cataloged to prevent harm to users and viewers.
Like many communities, these FB Groups ranged from benign conspiracy theorizing to general interest, each with its own quirks. Initially, as a group forms, most members understand the group's original context and can pick up on the community's founding principles; new members absorb the norms without intervention. That dynamic breaks down as the group achieves exponential growth. Marketers see the group as an easily accessible audience for promoting goods, and post quality declines as posts turn off-topic.
There are few options for recourse against bad actors on this product. There is an auto-moderator that filters specific posts, but it has its caveats, as we have seen with regex capture on Twitter related to COVID: it can be woefully inaccurate, unable to judge culture or understand what is acceptable speech. An automated system enforcing non-violent speech might miss the nuance of an ironic thread on "men are trash." As a result, members develop an aversion to using the reporting system for fear of affecting the group's status. Because these platforms are mostly automated, a ban simply occurs, with no support system to talk to. In the pursuit of appeasing a vague admin-facing quality score, trust in the system decays.
(As an aside, the fact that the only way people can get recourse over an account action is by knowing someone who can escalate an internal ticket is a travesty in and of itself, and it damages platform trust.)
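To make the auto-moderator's failure mode concrete, here is a minimal sketch of a regex-style filter; the patterns, the `auto_moderate` helper, and the sample posts are hypothetical and purely illustrative, not any platform's actual system:

```python
import re

# Hypothetical block-list patterns; real platform filters are far more
# elaborate, but the failure mode is the same: pattern matching has no
# sense of irony, reclamation, or cultural context.
BLOCK_PATTERNS = [
    re.compile(r"\bmen are trash\b", re.IGNORECASE),
    re.compile(r"\bcure[sd]?\s+covid\b", re.IGNORECASE),
]

def auto_moderate(post_text: str) -> str:
    """Return 'remove' if any blocked pattern matches, else 'allow'."""
    for pattern in BLOCK_PATTERNS:
        if pattern.search(post_text):
            return "remove"
    return "allow"

# An ironic in-group joke gets removed (false positive)...
print(auto_moderate("lol men are trash, anyway here is my dog"))
# ...while a genuinely harmful post that avoids the exact phrases sails
# through (false negative).
print(auto_moderate("drinking bleach will totally fix the virus"))
```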
Over time, notable members depart, and new members shift the Overton window of the group. A local township FB Group might pick up the local citizen with a bit too much time on their hands who shares articles from The Greyzone. The recommendation system overweights the number of contributions to the group, and the group becomes a gateway to harmful groups and ideas as moderation becomes nearly impossible due to a lack of tooling and of trust in the platform. Admins lose visibility into posts as the number of posts increases. Not to mention that amateur community managers often let the discourse get out of hand, whether for ideological reasons or because they lack the training to be good moderators. What occurs is a dereliction of duty on both ends.
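As a toy illustration of that overweighting, consider a group-recommendation score that leans entirely on raw contribution volume. The scoring function, field names, and numbers below are my own assumptions, not Facebook's actual ranking:

```python
from dataclasses import dataclass

@dataclass
class GroupStats:
    name: str
    posts_last_week: int   # raw contribution volume
    open_reports: int      # unresolved member reports

def recommendation_score(group: GroupStats) -> float:
    # Volume-only scoring: member reports are never consulted.
    return float(group.posts_last_week)

groups = [
    GroupStats("Township Neighbors", posts_last_week=40, open_reports=1),
    GroupStats("Real Truth News 24/7", posts_last_week=900, open_reports=65),
]

# The noisy, heavily reported group ranks first and gets recommended more,
# which is exactly the gateway effect described above.
for group in sorted(groups, key=recommendation_score, reverse=True):
    print(group.name, recommendation_score(group))
```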
Now we find ourselves mired in a landscape where those running the large platforms that host communities of all sorts face immense user distrust of their moderation practices, which limits their willingness to implement and enforce more rules. Harmful content is then free to spread because those platforms' operators fear appearing partial.
Inadequacies
This isn't a Groups-specific issue; it is a problem found on most major platforms, like Reddit and Twitter. As flaws in the product meta-game become apparent, they are exploited by trolls and hostile actors. As social products shifted from admin-hosted sites to a user-centric focus meant to promote user-generated content, moderation was abstracted away from the users themselves.
As great as anti-spam systems written in Haskell are, the stopgap behind the algorithms is underpaid contractors subjected to hundreds of hours of content with no time to make meaningful decisions about what should stay and what should go. To note, I don't subscribe to the alarmist framing of The Social Dilemma, which assumes that human behavior was excellent until the first internet-enabled mobile phone appeared. But removing friction from new mediums of communication requires those mediums to reintroduce friction by design to keep communities' character from being weaponized.
These platforms face unique challenges: WhatsApp, YouTube, and LiveLeak have served as hotbeds of terrorist recruitment, hosting content that has driven groups to participate in genocide and join militias. The platforms are currently ill-equipped to deal with popular WhatsApp group chats numbering in the hundreds of people and, conversely, with the hundreds of thousands of Facebook Groups holding millions of members.
As a moderator of two FB Groups that serve thousands of people, I find the moderation actions available are limited to kick, ban, and mute. Those actions only go so far when you have alts flooding the member join queue. Not to mention that the reporting system puts your group at risk of being delisted, and FB has had no issue taking down groups that posted ironic content.
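For contrast, here is a rough sketch of the kind of join-queue triage I wish the platform exposed to admins; the signals and thresholds are hypothetical, not a real Facebook feature, but even crude heuristics beat approving alts by hand:

```python
from dataclasses import dataclass

@dataclass
class JoinRequest:
    username: str
    account_age_days: int
    mutual_members: int       # friends already in the group
    answered_questions: bool  # filled out the group's entry questions

def screen(request: JoinRequest) -> str:
    """Triage a join request into approve / review / reject."""
    if request.account_age_days < 7 and request.mutual_members == 0:
        return "reject"   # looks like a throwaway alt
    if not request.answered_questions:
        return "review"   # route to a human moderator
    return "approve"

queue = [
    JoinRequest("longtime_lurker", 1500, 12, True),
    JoinRequest("user83921734", 2, 0, False),
]
for request in queue:
    print(request.username, "->", screen(request))
```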
Let's assume that Facebook's leadership has earnestly tried to clamp down on all hostile behavior. The first issue is that it's nearly impossible for contractors from Cognizant to have the cultural context to understand harmful posts from all geos, as shown in Ethiopia. The teams who draft policy guidelines are viewed as liabilities rather than as an opportunity to clean up the platform's content in a way that makes users feel safe and secure. Users want to know they are protected while still being free to express ideas and memes that may be edgy but aren't harmful: a precarious balance.
Internet moderators of yore understood this problem well; forum software was sold to admins and focused on admins' needs. You could scope users' permissions, set up incentive structures to promote good behavior, and aggressively practice conversational aikido. Popular general-interest forums on the edge of acceptability, like Something Awful, moderated aggressively (a disgruntled former member spawned a fork called 4chan, which promoted the opposite methodology). (Link uses NSFW language) Products like vBulletin and Invision have convoluted yet finely controlled settings to shape communities and empower admins to cultivate the group as they see fit. In contrast, today there is a minimal toolset for dealing with bad actors, whether on public posts or in groups.
Companies need to understand that under-investing in user safety can be the death of a platform. People forget that the consistent bad press MySpace received was one reason Facebook could capitalize on the trust other social networks squandered and build out its social graph. Facebook might find itself in a similar position of weakness.
The difference between the older websites once considered "the cesspools of the internet" that held violent content and the larger platforms that serve as a loudspeaker for various groups is the level of responsibility they have to the global community. Imageboards, a.k.a. chans, are limited economically by the advertiser unfriendliness of their websites, which caps the reach of those communities. In contrast, other platforms can pour accelerant on content through their recommendation systems. By shifting focus away from moderators, from popular posts to whole communities on various platforms, the lack of granular control leaves communities open to being hijacked or, worse yet, lets people follow the rabbit hole into harmful content.
An Unfortunate Lesson
Unfortunately, the lack of moderator controls in one early-access app led to a situation where virulent anti-Semitic comments propagated on one of the holiest days of the Jewish faith, Yom Kippur. Clubhouse (not to be confused with Clubhouse, the enterprise SaaS product) is a social audio application that allows people to host rooms and invite others to listen in on conversations.
The room swelled to 300 listeners and speakers; by the nature of Clubhouse's simple home screen ranking, that size brought it to the top of the page, giving the conversation even more exposure. People appalled by the comments found that the report button did nothing except open a support ticket, and the room was allowed to persist well into the evening.
When moderation takes the back seat, human nature takes the front seat, allowing individuals with unsavory views to congregate.
The Clubhouse incident is a smaller-scale case study of the thousands of hate-speech infractions allowed to fester on platforms like YouTube and Facebook. Plenty of proponents of ethnonationalism have found a home on these platforms because the reporting tools fail to grasp the cultural context of those views, largely due to the impossible burden placed on contractors who cannot be expected to understand every group's context. We have already seen such behavior spill over in Ethiopia and India.
However, the actions underway by social apps of every size show an encouraging direction. With the recent news of U.S. leadership going through a superspreader event, Twitter Comms mentioned removing all posts that call for an individual's death, though it remains doubtful whether that protection will be afforded to everyone. Facebook announced a sweeping set of improvements to Groups, a product under fire for harboring far-right militias; disappointingly, it will now allow posts from Groups that people aren't a part of to appear in users' news feeds. To Clubhouse's credit, it has a community of people who are more than willing to step up to the plate and enforce rules. Still, much more needs to be done.
While researching this post, I was astonished by how much of the janitorial work of maintaining our information landscape is handled by unpaid volunteers. Such moderators receive intense amounts of vitriol. What's more impressive is that people insist on doing the job because they care about the communities they serve.
Even more remarkable are the actions undertaken on Reddit by interim CEO Ellen Pao in 2014-2015. After a spate of revenge porn was shared on specific subreddits, the measures implemented also removed other subreddits, like /r/fatpeoplehate, that perpetuated harassment. The move was highly unpopular, with several anti-Ellen Pao posts reaching the front page, and it led to her eventual resignation. It shows one source of resistance: doing the right thing for users can upset those who like to test the spirit of the policies by posting risky content to test the waters.
I think we find ourselves in a massive context collapse contributing to the global breakdown of a short-lived period of civility in liberal democracies. I am not here to assert the cause of that trend, but one has to note the power these products, alongside existing media tools, have over national conversations and public opinion. It is becoming increasingly clear that the age of low margins on user-generated content needs to end. Companies of all sorts need to understand that user safety is a draw to their products, and the next wave of social apps will take advantage of this to create experiences that win the next billion users. Companies that abdicate their responsibility to moderate the information environment drive another nail into an already polarizing civil atmosphere.