Artificial Intelligence (AI) tools have quickly become a staple of the internet: unprecedented investment in the technology, surfacing as chatbots, research assistants and more, has pushed it into nearly every aspect of political, economic, and social life. There is growing interest in applying AI to drug-related fields because of its perceived effectiveness at “predicting outcomes”, especially across large population samples; AI researchers believe its potential for identifying drug-related risk can benefit those looking to develop early intervention or prevention strategies.
However, experts in AI technology policy have raised important concerns about the largely unquestioned proliferation of new technologies and what their impact may be, especially in health. This is particularly important for research that engages with drug-related topics, from monitoring people who use drugs to harm reduction and drug prevention.
Risk prediction
There’s growing evidence of AI being used to process biometric data, with real-life health and social implications. One such area is health systems’ risk assessment of people who use drugs (PWUD), or of those judged likely to use drugs.
From social media profile scraping to real-time location surveillance, AI is increasingly used to track people’s drug-using habits (or their discussions of consumption), predict related behaviours and ultimately recommend or take actions. While much of this technology is still at a conceptual or research stage, it eerily mimics surveillance technology employed in the defence sector, expanding what counts as “risky” behaviour and which groups are seen to engage in it.
One study suggests using wearable devices to track people’s “relapses” into cannabis use. It recommends long-term passive monitoring of a person’s stress indicators – such as a rapid heart rate at night or changes in physical activity – to infer that they have used cannabis. Passive monitoring is a methodology that has been used in the past to surveil groups perceived as threats; it does not rely on self-reported data collection, the approach usually preferred when gathering sensitive data, such as illegal activity or personal health information. The study notes that a substantial body of literature on the topic has already developed, which points towards a growing disregard for autonomy in the prevention process.
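To make concrete the kind of heuristic such passive monitoring implies, the sketch below shows a toy version in Python: a single night of elevated heart rate and reduced activity is enough to raise a flag. Every threshold, field name and the notion of a personal “baseline” here are assumptions made for illustration, not details taken from the study.

```python
# Illustrative sketch only: a toy passive-monitoring heuristic of the kind
# described above. Thresholds, field names and the single-flag logic are
# hypothetical and are not drawn from the study itself.
from dataclasses import dataclass

@dataclass
class NightlyReading:
    resting_heart_rate: float  # beats per minute, averaged overnight
    activity_minutes: float    # minutes of recorded movement that day

def flag_possible_use(latest: NightlyReading,
                      baseline_hr: float,
                      baseline_activity: float) -> bool:
    """Flag a night as 'suspicious' if heart rate rises and activity drops
    relative to a personal baseline (purely illustrative thresholds)."""
    hr_spike = latest.resting_heart_rate > baseline_hr * 1.15
    activity_drop = latest.activity_minutes < baseline_activity * 0.6
    return hr_spike and activity_drop

# One restless night is enough to trigger an alert - exactly the ambiguity
# the article returns to below.
print(flag_possible_use(NightlyReading(78, 20), baseline_hr=62, baseline_activity=45))  # True
```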
Some research suggests using machine learning models to detect correlations between populations’ socioeconomic factors, mental health indicators and environmental stressors in order to “identify vulnerabilities” to drug addiction. This could be done by analysing health records, online activity or a mixture of both – in essence, predicting whether people will use drugs and become addicted to them.
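A minimal sketch of what such a “vulnerability” classifier amounts to in practice is shown below. The features, training data and labels are entirely synthetic, invented for illustration; none of the cited research’s actual models or datasets are reproduced here.

```python
# Hedged sketch: a minimal stand-in for the kind of risk classifier described
# above. Features, data and labels are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical proxy features: [income_decile, anxiety_score, neighbourhood_stress]
X = np.array([
    [2, 8, 7],
    [9, 2, 1],
    [3, 7, 6],
    [8, 3, 2],
])
y = np.array([1, 0, 1, 0])  # 1 = labelled "at risk" in the synthetic training data

model = LogisticRegression().fit(X, y)

# The fitted model now scores unseen individuals on the same proxy features -
# the step at which population-level correlations harden into predictions
# about a specific person.
print(model.predict_proba([[4, 6, 5]])[0, 1])
```

The point of the sketch is not the model itself but the move it makes: correlations observed across a population are turned into a score attached to an individual, which can then be used to justify an “early intervention”.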
While these exercises in risk assessment and prediction may seem innocuous in themselves, their data collection methods can operate as surveillance mechanisms, and their findings can easily be manipulated to suggest early interventions on those deemed a risk – whether to health systems or to security.
Chloé Berthélémy, a Senior Policy Advisor at European Digital Rights (EDRi), a network of organisations advocating for digital rights across Europe, suggested that “the use of risk assessment algorithms is a dangerous slippery slope”: predictive AI models have a high potential for racial, economic and gender bias in the assertions they make. The consequences could be huge for those surveilled: treatment access or insurance coverage could be denied, or social welfare benefits removed, if someone is deemed to be engaging in risky behaviours or criminal activities. People could even face arrest if wearable devices trigger alarms due to suspected drug use, or be targeted by additional policing because of past criminal activities. The potential applications for surveilling people who use or are involved with drugs are endless.
Online surveillance
The use of these tools is particularly concerning in online spaces, especially on social media, where people share content and may admit to illegal activities in the belief that encryption keeps them secure. One paper claims that AI tools can analyse social media posts to identify indications of intent to use drugs, despair, or other “warning signs” that serve as “digital indicators” of potential drug use. It goes on to suggest that:
“digital phenotypes extracted from social media language can be a screening tool for identifying … patients at high risk for drop-out at the beginning of treatment engagement. This work also sets a foundation for dynamic continuous monitoring during treatment to better understand the day to day factors that go beyond whether someone is likely to succeed on day one.”
These phenotypes are patterns of language that correlate with patterns of behaviour. They were used to identify at-risk patients within the study, but there is as yet no evidence of their accuracy or usefulness at a larger scale.
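As a rough illustration of how thin such a “digital indicator” can be, the sketch below scores a post against a hand-picked word list. The words, weights and threshold are invented for this example and bear no relation to the paper’s actual model.

```python
# Illustrative only: a crude stand-in for screening social media language as a
# "risk signal". Word list, weights and threshold are invented for this sketch.
RISK_WEIGHTS = {"hopeless": 2.0, "relapse": 1.5, "craving": 1.0, "alone": 0.5}

def screen_post(text: str, threshold: float = 2.0) -> bool:
    """Return True if a post's summed keyword weights cross the threshold."""
    score = sum(RISK_WEIGHTS.get(word, 0.0) for word in text.lower().split())
    return score >= threshold

# A single post mentioning feeling "hopeless" and "alone" is flagged -
# illustrating how little evidence can sit behind a "digital indicator".
print(screen_post("feeling hopeless and alone tonight"))  # True
```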
Biopolitical monitoring of PWUD
Both the examples of AI-powered wearables and predictive tools evoke the biopolitical criticism of totalitarian societies that emerged long before AI did. Biopolitics has for decades highlighted how modern societies develop tools to identify groups or behaviours deemed threats, risks or crimes, deploying population control mechanisms to surveil and curtail them where possible. AI has increased the scale at which these assessments can be made; it can also recommend actions, fully removing humans from decision-making processes that can radically alter the lives of those surveilled.
As is often the case with emerging tools, there is much ambiguity about the indicators used to identify risk – particularly around drug use. A rapid heart rate at night could mean anything; a person’s past history of drug use or demographic profile does not dictate their future actions. Much also remains unclear about how AI models “reason” – how they arrive at the recommendations they make. That is why it is particularly concerning to see researchers study potential implementations before properly discussing what the implications of their work may be, or what it would be like to be subjected to the control mechanisms they develop.
Berthélémy commented that “the underlying logic of these systems remains [the] control of vulnerable people. The power dynamics in these contexts [regarding wearable devices], where one is dependent on the other for support or survival, severely undermine the ability to obtain genuinely free consent to data collection. Furthermore, the risks of repurposing the data for even more harmful purposes are high.”
This illustrates why EDRi takes a largely sceptical stance towards the use of AI for social welfare purposes, such as monitoring. Moreover, any individual's ability to give informed consent to surveillance is put into question when that surveillance is framed socially as a positive tool for welfare and harm reduction.
Drugbots
AI tools, bound by clear limits and guidelines on data privacy and use, can be of use to harm reduction. Ivan Roman and Josh Torrance, the developers of Drugbot, an AI-powered chatbot operated by Cranstoun, a British drug treatment organisation, both highlighted the importance of privacy in the drug world.
As Torrance said, “the user needs to trust Drugbot, and it's our responsibility to build and uphold that trust”. This included being transparent with users about how their data is handled, limiting location permissions, and making a focused effort to train the model to provide evidence-based harm reduction advice and deal with sensitive information.
This optimistic approach highlights the benefits that technological advancements and AI can have in widening access to information and support. For example, Roman outlined how Drugbot processes users’ location data and responses to warn them of dangerous contaminants in surrounding drug markets. Though collecting and using location data could pose a privacy risk to the user, the tool’s emphasis on transparency and user consent could mitigate that risk.
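A minimal sketch of how a harm reduction chatbot might act on consented, coarse location data is shown below. This is not Drugbot’s actual implementation; the alert data, region names and function are assumptions made for illustration.

```python
# A sketch of consent-first contaminant alerts, NOT Drugbot's real code.
# Alert contents and regions are invented examples.
from typing import List, Optional

ACTIVE_ALERTS = {
    # region -> list of (substance, contaminant) pairs, e.g. from a drug-checking service
    "bristol": [("heroin", "nitazenes")],
    "glasgow": [("benzodiazepine tablets", "bromazolam")],
}

def contaminant_warnings(region: Optional[str]) -> List[str]:
    """Return warnings for the user's region; do nothing if location is withheld."""
    if region is None:  # user declined to share location: no lookup, nothing stored
        return []
    return [
        f"Warning: {substance} circulating near {region} has been found to contain {contaminant}."
        for substance, contaminant in ACTIVE_ALERTS.get(region.lower(), [])
    ]

print(contaminant_warnings("Bristol"))
print(contaminant_warnings(None))  # consent-first: no region, no warning
```

The design choice the developers describe – consent and minimal location permissions – shows up here as the None branch: if no location is shared, no lookup happens at all.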
The very word surveillance implies an absence of choice, and therefore of power. Monitoring of at-risk populations must be done with consent and transparency in mind in order to maximise impact and to respect PWUD’s choice to use technologies that help keep them safe. Punishment and policing carried out through AI-powered risk assessment and harm reduction tools rely on a level of passive user engagement that further strips PWUD of their autonomy and safety.
Questions remain on applications
How can surveillance and interactions with LLMs and other AI tools shape our experiences of talking and learning about drugs online? While Drugbot is one example of a product built by people interested in and concerned about harm reduction, it is important to note that their ethos may not be shared by everyone looking to study or monitor people who use drugs.
While much of the work on predicting drug use and reducing risky behaviours is still in its infancy, its potential applications are truly a slippery slope of surveillance, privacy abuses and population control. Some of the studies highlighted here show that the process of putting these ideas into practice has already begun. Those creating these models need to stop and consider the implications of the work they are conducting, and the very real, potentially life-changing consequences it can create.


