
Algospeak and the Harm Reduction Consequences of Content Moderation

In the dynamic and ever-evolving social media landscape, a curious linguistic phenomenon has emerged: “algospeak.” This digitally native language employs coded words with alternative meanings to discuss topics that are overly moderated on social media, thereby avoiding detection by algorithms. The growth of such language reflects an ongoing battle between users and Big Tech companies. On TikTok, it’s common to find people swapping out the term “suicide” for expressions like “unalive”; emojis or strangely spelt words are also frequently used. Algospeak has permeated all forms of political discourse on social media, but the concept isn’t new: it is more like a modernised version of Cockney rhyming slang.

But algospeak is more than just creative wordplay; it’s the intentional and skilful navigation of an intricate maze of content moderation that can silence discussions on sanctioned topics. In a world where the legality of drugs is constantly evolving, platforms face the challenge of striking a balance between enforcing community guidelines and fostering open dialogue. As restrictive interpretations of content moderation policies often stifle nuanced discussions, the unintended consequences of content moderation for crucial discussions about drug-related topics become starkly apparent.

How content moderation works

Most major social media platforms, like Meta or TikTok, take a somewhat blanket approach to content moderation. TikTok employs real-time content moderation through algorithms that continuously scan and analyse user-generated content to swiftly identify and address community guideline violations. These systems flag content based on a variety of signals, including keywords, images, titles, descriptions, and audio.
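
To illustrate why algospeak works against this kind of filtering, here is a minimal, hypothetical Python sketch of a blocklist-style keyword check; the blocked terms and the function are invented for illustration, since real systems combine learned classifiers over text, image and audio signals rather than a fixed list.

import re

# Hypothetical, heavily simplified keyword filter of the kind described above.
# Real platforms combine many more signals and learned models, not a fixed blocklist.
BLOCKED_TERMS = {"suicide", "overdose", "cocaine"}

def flag_post(text: str) -> bool:
    """Return True if the post contains any blocked keyword."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return bool(tokens & BLOCKED_TERMS)

print(flag_post("where to find suicide prevention support"))  # True: flagged despite helpful intent
print(flag_post("where to find unalive prevention support"))  # False: algospeak slips past the filter

A filter like this cannot tell a harm reduction post from a harmful one, and it misses any coded substitute it has never seen, which is exactly the gap algospeak exploits.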

What shapes content differently on TikTok, however, is how the platform regulates content visibility. Unlike traditional content moderation, which primarily focuses on removal or filtering, visibility moderation is about intentionally promoting, amplifying, or prioritising content deemed more relevant and appropriate. This has been a prominent trend in content moderation beyond TikTok: Human Rights Watch has uncovered how Palestine-related content on Meta-owned Instagram and Facebook has been systematically censored. Suppressing content related to politics, marginalised communities, drugs, or crime, irrespective of its context and the audience it is trying to reach, raises questions about how a supposedly “social” network is guided more by those running the platform than by the members who compose it.
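
As a rough sketch of what visibility moderation means in practice, assuming an invented scoring step (the categories and weights below are illustrative, not drawn from TikTok’s or Meta’s systems): the post is never removed, its reach is simply scaled down at ranking time.

# Hypothetical visibility-moderation step: content is not removed,
# it is ranked down so that far fewer feeds ever show it.
# Categories and weights are illustrative only.
DOWNRANK_WEIGHTS = {
    "drugs": 0.1,      # drug-related posts reach ~10% of their usual audience
    "politics": 0.5,
    "neutral": 1.0,
}

def expected_reach(base_reach: int, category: str) -> int:
    """Scale a post's expected reach by its assigned moderation category."""
    return int(base_reach * DOWNRANK_WEIGHTS.get(category, 1.0))

print(expected_reach(10_000, "neutral"))  # 10000 -- distributed normally
print(expected_reach(10_000, "drugs"))    # 1000  -- quietly suppressed rather than deleted

Because nothing is deleted, users may never know their post was suppressed, which is part of what makes this form of moderation so hard to contest.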


Different platforms, different controls

In the past, algorithm changes on YouTube have sparked concerns among creators of drug education channels. Creators like PsychedSubstance and The Drug Classroom, who offer informative content about drug safety and harm reduction, saw their advertising revenue and view counts significantly impacted by YouTube’s algorithm changes. These changes, implemented after advertisers pulled out over concerns about ads being shown against extremist content, have limited the monetisation of some videos, affecting the channels’ ability to survive and reach a wider audience. In some cases, YouTube has even deleted entire channels promoting harm reduction.

Similar dynamics can be seen across platforms. Instagram, for example, has consistently posed challenges for content related to cannabis and cannabis businesses. Once a rarity in the U.S.- and Canada-based industry, the deletion of cannabis business accounts has become a recurring issue. Similar challenges face ketamine-related clinics and businesses, which must comply with medicines regulations requiring products to be promoted in a balanced manner. While it is valid to ensure that advertisers respect existing regulations around the promotion of medical products, these substances are also consumed recreationally, and that practice needs to be addressed too. Preventing harm reduction information from being disseminated means that drug-related content is controlled either by advertising rules – which generate a substantial portion of platforms’ revenue – or by restrictive community safety guidelines.

These decisions by platforms spark a broader conversation about who the internet is for, and how to find the appropriate balance between regulatory measures and preserving open channels, particularly in spaces where responsible discourse is crucial.


Content moderation vs context moderation

While TikTok moderates content in real time through algorithms, context moderation considers the surrounding circumstances and nuanced meanings of content. In the past, platforms like Facebook, Twitter (X) and YouTube employed tens of thousands of workers to review content on a case-by-case basis, considering factors such as intent, historical behaviour, and the broader conversation when assessing potentially problematic content. Having some level of human oversight can help strike a more balanced approach to content moderation, keeping platforms’ brands safe while still enabling nuanced discussions. But relying on algorithms is much more cost-effective than employing thousands of moderators (even if they are paid next to nothing).

It’s no wonder that the allure of algorithms has become irresistible for platforms. The cost-effectiveness and efficiency they promise make them a tempting choice – and, unlike human moderators, they don’t become traumatised by the content they’re moderating. But platforms’ overreliance on them ultimately comes at their own expense. While automation brings speed and scale, it often lacks the nuanced understanding and contextual knowledge that only human oversight can provide. The need for nuanced content moderation is amplified by the shifting landscape of drug policy reform. As the legality of substances like cannabis and psychedelics has changed over time, so too have the legal and cultural contexts surrounding them. Unfortunately, current moderation structures often lack the flexibility to respond to these shifts. Harm reduction, with its context-specific emphasis on mitigating risks, requires an equally context-sensitive approach that clashes with the rigid guidelines and automated filtering currently employed by many platforms.
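
A hedged sketch of the hybrid approach argued for here, assuming a simple two-stage pipeline (the thresholds, fields and context signals are invented for illustration, not any platform’s real workflow): an algorithm scores content cheaply at scale, and borderline cases are escalated to a human reviewer who can weigh intent, history and context.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    author_prior_violations: int
    is_harm_reduction_account: bool   # illustrative context signal

def algorithmic_score(post: Post) -> float:
    """Crude automated risk score based on keywords alone."""
    risky_terms = ("suicide", "overdose", "cocaine")
    hits = sum(term in post.text.lower() for term in risky_terms)
    return min(1.0, hits * 0.5)

def moderate(post: Post) -> str:
    score = algorithmic_score(post)
    if score < 0.5:
        return "allow"
    # Instead of auto-removing, escalate to a human reviewer who can weigh
    # context: intent, the author's history, and who the post is trying to reach.
    if post.is_harm_reduction_account and post.author_prior_violations == 0:
        return "allow (human review: educational context)"
    return "remove (human review: guideline violation)"

post = Post("how to reduce overdose risk", author_prior_violations=0,
            is_harm_reduction_account=True)
print(moderate(post))   # allow (human review: educational context)

The design point is that the expensive, context-aware judgement is reserved for the small fraction of posts the algorithm cannot classify confidently, which is how human oversight could scale without reviewing everything.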


Can a balance be found?

Striking a balance between the efficiency of automated algorithms and the depth of human insight is not just a choice for platforms but a necessity, especially when it comes to navigating the complexities of drug discourse and education. In some ways, algospeak is shielded by algorithmic content moderation, as coded language can bypass automated controls to reach wider audiences. And given the global outsourcing of content moderation, human moderators often lack full contextual information on a post, and may restrict drug-related content based on the global prohibition of drugs, even when those drugs are decriminalised or legal in the place where the post was made.

And while international policies offer easier enforcement, particularly with the rise of artificial intelligence, they often result in a one-size-fits-all approach that fails to consider national or regional nuances. Developing content moderation practices at the national and regional level will be key to reflecting changing laws and behaviours, whether around drugs or any other practice or product. This limitation is recognised by Reddit, which takes a community-driven approach to moderation. The platform highlights how context-specific moderation can work: those most invested in a community have a say in moderating its content while still upholding Reddit-wide values. But larger general-interest platforms like Twitter (X) might struggle to replicate Reddit’s model of micro-community moderation at a massive scale.

Ultimately, the best solution to content moderation requires time and serious investment, two things that an industry built off the back of unregulated, exponential growth despises. Only through community organising or government regulation will social media companies change their ways. And until they are forced to collaborate with subject experts – like harm reduction organisations – to develop context-appropriate content moderation policies, we cannot expect systemic change. Whether through wilful ignorance or deliberate choice, for now, social media companies will continue to moderate their massive communities in-house and with little to no community accountability.

