The recent launch of Drugbot, a new Artificial Intelligence (AI) tool from British drug treatment charity Cranstoun, has highlighted how these emerging technologies may come to benefit the world of harm reduction. If used properly, and coupled with expert knowledge and a human touch, AI could become a valuable addition to the harm reduction toolbox.
How does Drugbot work?
Drugbot was developed in an iterative process between Cranstoun and Substancy. Working together, they created an initial version of an AI Large Language Model (LLM) built on carefully screened and selected data sources, ensuring that its information is grounded in evidence. The AI tool draws from these sources to provide tailored answers to users’ questions on drugs, their interactions, and the British laws that govern their use.
Before releasing it to the public, Cranstoun conducted extensive testing of the bot’s outputs, circulating an initial version among drug use and harm reduction experts for validation and critique. According to Cranstoun, Drugbot was tested with over 20,000 messages to validate the recommendations it made on different drugs, drug use scenarios, and any user concerns that may arise from their use. At the moment, Drugbot only works in English and in the UK.

Refining AI for harm reduction
Josh Torrance, a Cranstoun consultant who worked on Drugbot, told TalkingDrugs that the development of such a harm reduction AI model needed extensive fine-tuning to ensure that it responded accurately. Providing information on safer drug use leaves little room for error.
A key factor in ensuring the accuracy of Drugbot’s responses was to draw its advice only from a limited number of trusted sources. Currently, Drugbot’s database includes resources from Crew 2000, Drugs and Me, DrugScience, Drugwatch, Exchange Supplies, Psychonautwiki, Reagent Tests UK, UCC Today and Wikipedia.
“The online resources it draws from are excellent. They’re very high-quality resources. They’re credible. They approach things from a harm reductionist approach,” Torrance commented.
AI bots can provide genuinely useful results: in one study, clinicians who analysed AI responses to questions on drug use judged those responses to be of high quality.
“The Drugbot database draws on a diverse array of trusted sources, from crowdsourced information to insights written by domain experts,” Dr Ivan Romano, founder of Substancy, commented. He explained that Drugbot works like a “decision tree of AI models, each specialised in one task”. When a user asks something, one AI model analyses the question and routes it to the relevant question category (for example, purchasing drugs, dosage, or consumption method), where a specialised model then provides an answer based on its database of sources.
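To make the routing idea concrete, here is a minimal, hypothetical Python sketch of a “decision tree of AI models”. The category names and keyword matching are illustrative stand-ins only; in Drugbot each stage would itself be an AI model, and none of this is Substancy’s actual code.

```python
# Illustrative sketch of a two-stage "decision tree of AI models".
# Categories and keywords are invented; in the real system each stage
# would be an LLM call rather than simple keyword matching.

CATEGORY_KEYWORDS = {
    "dosage": ["dose", "dosage", "how much"],
    "interactions": ["mix", "combine", "interaction", "together"],
    "consumption_method": ["inject", "snort", "smoke", "swallow"],
    "uk_law": ["legal", "law", "police", "class a"],
}

def classify(question: str) -> str:
    """Stage 1: route the question to a category (an AI model in Drugbot)."""
    q = question.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(word in q for word in keywords):
            return category
    return "general"

def answer(question: str) -> str:
    """Stage 2: hand off to the model specialised in the routed category."""
    category = classify(question)
    if category == "general":
        # Mirrors the escalation behaviour described later in the article.
        return "Not sure -- consider contacting a local harm reduction service."
    # In the real system, a category-specific model answers from vetted sources.
    return f"[{category} model] answer grounded in that category's sources"

print(answer("Is it safe to mix alcohol and ketamine?"))  # routes to "interactions"
```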
It’s crucial for AI harm reduction models to limit the data they pull from when answering questions to reduce the potential for drug-related misinformation. Researchers have highlighted that AI-derived information could downplay the harms of using certain drugs, amplify existing misinformation, or not identify dangerous combinations. For people to trust AI in harm reduction, it needs to produce answers that are evidence-based and non-judgemental; this requires training models on specific data.
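As a rough illustration of that source-limiting principle, the sketch below restricts retrieval to a whitelist of vetted sources and instructs the model to refuse when those sources don’t cover a question. The corpus format, source names and keyword scoring are assumptions for illustration, not Drugbot’s actual design; real systems typically use embedding-based retrieval.

```python
# Illustrative sketch: answers are grounded only in passages from a
# whitelist of vetted sources, and the model is told to refuse otherwise.
# Source names and corpus entries are invented for this example.

TRUSTED_SOURCES = {"Crew 2000", "Drugs and Me", "DrugScience"}

CORPUS = [
    {"source": "DrugScience", "text": "mixing depressants increases overdose risk"},
    {"source": "RandomBlog", "text": "this combination is totally safe"},  # excluded
]

def retrieve(question: str, corpus: list[dict], k: int = 3) -> list[dict]:
    """Naive keyword retrieval restricted to whitelisted sources only."""
    words = set(question.lower().split())
    scored = [
        (len(words & set(doc["text"].lower().split())), doc)
        for doc in corpus
        if doc["source"] in TRUSTED_SOURCES
    ]
    return [doc for score, doc in sorted(scored, key=lambda p: -p[0])[:k] if score > 0]

def build_prompt(question: str) -> str:
    """Tell the model to answer only from retrieved passages, or refuse."""
    passages = retrieve(question, CORPUS)
    context = "\n".join(f"[{d['source']}] {d['text']}" for d in passages)
    return (
        "Answer using ONLY the passages below. If they do not cover the "
        f"question, say so and suggest a local service.\n\n{context}\n\nQ: {question}"
    )

print(build_prompt("is mixing depressants risky?"))
```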
Other harm reduction bots follow the same path: dib, created by the Australian Alcohol and Drug Foundation (ADF) in May 2025, only draws its information from the ADF website. Dotahn Caspi, ADF’s Digital Manager, told TalkingDrugs that dib was extensively tested by drug clinicians and people with lived and living experience of drug use to refine its tone and accuracy.

Will AI replace harm reduction humans?
Given harm reduction’s historical work in community-building and mutual aid, there are some fears that AI tools could come to replace the need for human interactions.
However, this isn’t an outcome that Cranstoun wants nor foresees.
“We know that for many people it can take a lot of confidence to come and speak to someone in one of our drug and alcohol services about their drug and alcohol use,” Megan Jones, Director of New Business and Services at Cranstoun, said.
“The Drugbot is seeking to break down some of those barriers and will be able to suggest that people do seek in person face-to-face help too.”
Torrance backed this up, highlighting that certain user queries would often trigger AI responses recommending getting in touch with a human. For this article, I tested Drugbot’s advice on injecting drugs, tapering off heroin, and mental health support for drug use. In all instances, the AI gave some initial advice while also recommending that users seek support from relevant organisations. In emergency scenarios (e.g. struggling to breathe), Drugbot suggested contacting emergency services.
As Torrance put it, “this is a humble LLM” which should not be seen as anything more than that.
“There is absolutely not a future that I can see where this replaces any kind of human harm reductionist. It’s not a replacement for asking colleagues with a decade of experience.”
While AI bots are excellent tools for collecting and partially analysing information, people still need to exercise caution around any advice given. As the editor of harm reduction forum Bluelight highlighted, AI can never replace the value of community knowledge and support. It also cannot fully understand people’s use contexts, nor replace first-hand experience of accessing drug markets (like recognising signs of adulterants). New digital tools are best placed alongside human care and support, not as a replacement for it. This is especially important as some AI tools have been shown to prioritise maximising engagement, possibly manipulating users’ feelings rather than providing unbiased and accurate information.
Other applications of AI in harm reduction
Across the Atlantic, another AI tool has been developed with promising results. Toxibot, run by the Argentinian Association of Harm Reduction (ARDA), was created by Pablo Ferreyra and Aníbal Sacco to empower people who use drugs with rapid information. Toxibot is a Spanish-language bot that works on WhatsApp: it’s essentially a phone number that provides harm reduction advice, drug information and interactions, as well as access to ARDA’s drug checking results. In its first year, around 30,000 people used Toxibot; usage spikes at weekends, particularly when large parties or festivals are taking place.
Through several commands, Toxibot also acts as an interface to a version of ChatGPT trained for harm reduction: users can ask Toxibot questions that are answered by AI, with additional resources coming from reputable Spanish-language harm reduction organisations (namely ARDA, Argenpills, Energy Control and Echele Cabeza). Nonetheless, ARDA warns that its AI model is still experimental, that it does not give legal advice, and that the organisation is not in control of its replies.
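For illustration, a bot like this boils down to a command dispatcher sitting on top of a messaging number. The sketch below is hypothetical: the command names and handlers are invented, not Toxibot’s documented interface.

```python
# Hypothetical sketch of a WhatsApp-style command dispatcher.
# Command names (/ask, /resultados) and replies are invented here.

def ask_ai(question: str) -> str:
    # Would forward the question to the harm reduction-trained AI model.
    return f"(AI answer to {question!r}, with ARDA's experimental-model caveat)"

def lookup_results(query: str) -> str:
    # Would query the registry of drug checking results.
    return f"(drug checking results matching {query!r})"

def handle_message(text: str) -> str:
    """Route an incoming message to the right handler."""
    if text.startswith("/ask "):
        return ask_ai(text.removeprefix("/ask "))
    if text.startswith("/resultados "):
        return lookup_results(text.removeprefix("/resultados "))
    return "Available commands: /ask <question>, /resultados <substance>"

print(handle_message("/ask que pasa si mezclo MDMA y alcohol?"))
```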

Where Toxibot stands out is in its integration with the Argentinian party and drug checking scene. The bot has specific commands that drug checking organisations use to submit the results of reagent test kits for tested substances, especially pills. They record each sample’s photo, weight, colour and location, alongside its reagent result; this information can then be accessed by anyone who messages Toxibot, or through ARDA’s registry of reagent results.
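Based on the fields described above (photo, weight, colour, location and reagent result), a submitted record might look something like the hypothetical sketch below; ARDA’s actual schema may differ.

```python
# Sketch of the kind of record a drug checking organisation might submit
# through the bot's commands. Field names and schema are assumptions for
# illustration only.

from dataclasses import dataclass

@dataclass
class ReagentResult:
    sold_as: str        # what the substance/pill was sold as
    photo_url: str      # image of the sample
    weight_mg: float
    colour: str
    location: str       # city, venue or festival where it was collected
    reagent: str        # e.g. "Marquis"
    reaction: str       # observed colour change and its interpretation

# Example entry that anyone messaging the bot could later look up:
sample = ReagentResult(
    sold_as="MDMA pill", photo_url="https://example.org/pill.jpg",
    weight_mg=310.0, colour="blue", location="Buenos Aires",
    reagent="Marquis", reaction="purple/black (consistent with MDMA)",
)
```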
Ferreyra and Sacco believe that AI tools are a great anonymous way to support those working in criminalised environments, where publicly providing harm reduction resources could cost a party its licence or lead to arrests.
“The bot doesn’t replace; it complements the work of harm reduction. Argentinian drug laws are some of the most regressive in Latin America,” both said. “The more we can do to make the work of harm reductionists easy, for party promoters to keep people safe without a lot of papers or information visibly out, the better.”