World April 2, 2026

New Zealand Team Developing Chatbot and Human Support Pathways to Redirect Users Showing Violent Extremist Signs

ThroughLine explores a hybrid deradicalisation tool in collaboration with The Christchurch Call as AI firms face mounting lawsuits and regulatory scrutiny

By Derek Hwang

A New Zealand-based startup that routes people in crisis from AI chatbots to human-run helplines is testing an expanded service aimed at users showing violent extremist tendencies. ThroughLine, already contracted by major AI firms to handle self-harm and other crisis referrals, is developing a hybrid chatbot-and-human intervention in consultation with The Christchurch Call while broader questions remain about follow-up, escalation risk and platform moderation.

Key Points

  • ThroughLine, a New Zealand startup, is developing a hybrid chatbot-and-human intervention to redirect users who exhibit violent extremist tendencies toward deradicalisation support.
  • The company already provides AI firms with a network that monitors 1,600 helplines in 180 countries and currently handles referrals for self-harm, domestic violence and eating disorders.
  • The move has implications for technology platforms, mental health and social support sectors, and the moderation practices used in online communities including gaming forums.

People who display violent extremist tendencies while interacting with AI chatbots such as ChatGPT may soon be given routes to both human and chatbot-based deradicalisation support through a tool under development in New Zealand, the team behind the effort said.

The project is the latest response to safety concerns confronting AI companies as they face a rising number of lawsuits alleging that their platforms have failed to prevent, and in some cases enabled, violence. The initiative seeks to expand an existing crisis-redirection service to capture a different and growing form of online disclosure: flirtation with extremist ideas.

ThroughLine, a startup that in recent years has been hired by firms including OpenAI, Anthropic and Google to divert users to crisis assistance when they are flagged as at risk of self-harm, domestic violence or eating disorders, is exploring ways to add violent-extremism prevention to its offerings, founder Elliot Taylor said.

Taylor, a former youth worker who runs ThroughLine from rural New Zealand, said the firm is in talks with The Christchurch Call, the anti-extremism initiative established after New Zealand’s 2019 terrorist attack. Under that collaboration, The Christchurch Call would provide guidance while ThroughLine develops the intervention chatbot, Taylor said. No release date or firm timeframe has been set.

"It’s something that we’d like to move toward and to do a better job of covering and then to be able to better support platforms," Taylor said in an interview.

OpenAI confirmed it has a relationship with ThroughLine but declined further comment. Anthropic and Google did not immediately respond to requests for comment.

The firm has become a frequent contractor for AI companies by offering continuous monitoring of a global network of helplines. ThroughLine maintains what it describes as a constantly checked network of 1,600 helplines across 180 countries. When an AI flags signs of a potential mental health crisis, ThroughLine matches the user to an available human-run service nearby.

Until now, the company’s scope has been limited to specific categories of risk. Taylor said the variety of mental health and safety issues people disclose to AI chatbots has widened with the technology’s popularity, and now increasingly includes interactions that touch on extremism.

Design and deployment

Taylor described the proposed anti-extremism tool as likely to be a hybrid: a chatbot trained to respond to users exhibiting signs of extremist sympathy or intent, combined with referrals to existing real-world mental health services. He emphasised that the system is being developed with appropriate expert input rather than relying on the raw training data from large language models.

"We’re not using the training data of a base LLM," Taylor said. "We’re working with the correct experts."

Testing of the technology is under way, but the project team has not set a public release date.

Galen Lamphere-Englund, a counterterrorism adviser representing The Christchurch Call, said he hopes the product will be rolled out for moderators of gaming forums, and for parents and caregivers seeking tools to identify and mitigate online radicalisation.

Independent observers say the proposal addresses a broader problem than content moderation alone. Henry Fraser, an AI researcher at Queensland University of Technology, said a rerouting chatbot is "a good and necessary idea because it recognises that it’s not just content that is the problem, but relationship dynamics." Fraser added that the efficacy of such a product will rely heavily on the quality of follow-up mechanisms and the capacity of the services users are directed to.

Outstanding questions and concerns

Taylor acknowledged several decisions about the tool remain unresolved, including the design of follow-up features and whether authorities would be notified about users judged to be dangerous. He said any approach to alerts would weigh the risk that contact with authorities might trigger escalated behaviour.

He also warned that people in distress often disclose things online that they would be too embarrassed to say in person, and that governments risk worsening situations if they press platforms to cut off conversations in which sensitive information is revealed.

Heightened moderation associated with concerns about militancy has already prompted some sympathisers to shift to less regulated services. A 2025 study by New York University's Stern Center for Business and Human Rights found supporters of militancy moving to platforms such as Telegram, where moderation is less strict.

"If you talk to an AI and disclose the crisis and it shuts down the conversation, no one knows that happened, and that person might still be without support," Taylor said.

The development comes as AI companies face intensified scrutiny. In February, OpenAI faced the prospect of government intervention in Canada after disclosing that an individual who later carried out a deadly school shooting had been banned from the platform without authorities being informed. That episode has increased pressure on firms and regulators to find mechanisms that both reduce harm and preserve avenues for people to seek help.


The path forward for ThroughLine's initiative depends on technical testing, expert input from counter-extremism groups and decisions about escalation and reporting protocols. For platforms, moderators and service providers, the project surfaces trade-offs between immediate content removal and maintaining lines of communication into support systems.

Risks

  • Uncertainty over follow-up mechanisms and whether referred users are connected to effective, adequately resourced services - affecting mental health and social services sectors.
  • Potential escalation risk if alerts to authorities are implemented, which could worsen outcomes for some users - impacting law enforcement and platform moderation strategies.
  • Shift of sympathisers to less-regulated platforms if mainstream sites increase moderation, complicating detection and intervention efforts for counterterrorism and online safety teams.
