When AI Fights Terror, Who Defends the Innocent?

AI is increasingly being deployed in the global war on terror: scanning online activity, flagging individuals, and even automating watchlists. But what happens when the algorithm gets it wrong?

In the name of national security, governments around the world are quietly using AI to identify potential threats. While this sounds efficient, the results can be chilling: innocent people profiled, flagged, and interrogated on the basis of flawed or biased data.

The problem lies in how these systems are trained. An algorithm is only as good as the data fed to it, and when that data reflects historic prejudice or incomplete information, the system amplifies injustice rather than preventing it.
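
To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python. Every group name, rate, and parameter below is invented purely for illustration; it models no real system. Two groups engage in suspicious activity at exactly the same rate, but one was historically watched three times as closely, so a system trained on the resulting flag counts "learns" that the over-watched group is riskier and directs even more scrutiny its way.

```python
import random

random.seed(42)

# Hypothetical setup: both groups have the SAME true rate of suspicious
# activity; the only difference is how hard each was historically watched.
TRUE_RATE = 0.01                       # identical for groups A and B
HIST_SCRUTINY = {"A": 1.0, "B": 3.0}   # group B was watched 3x as closely

def historical_flag_rate(group, n=100_000):
    """Past flags reflect scrutiny, not just behaviour: the more a group
    was watched, the more of its (mostly innocent) members got flagged."""
    flags = 0
    for _ in range(n):
        suspicious = random.random() < TRUE_RATE
        watched = random.random() < 0.10 * HIST_SCRUTINY[group]
        # A watched person is flagged if genuinely suspicious, or via a
        # 2% false positive. Unwatched people are never flagged at all.
        if watched and (suspicious or random.random() < 0.02):
            flags += 1
    return flags / n

# "Training": the system learns per-group risk scores from the biased record.
learned_risk = {g: historical_flag_rate(g) for g in ("A", "B")}
print("Learned risk scores:", learned_risk)

# Deployment: future scrutiny is allocated in proportion to learned risk,
# so the historically over-watched group gets watched even harder.
total = sum(learned_risk.values())
for g, score in learned_risk.items():
    print(f"Group {g} receives {score / total:.0%} of future scrutiny, "
          f"despite identical true rates of suspicious activity")
```

The numbers are toys, but the shape of the failure is real: nowhere does this code look at what anyone actually did differently, yet its output treats one group as roughly three times as dangerous, and that output becomes the training data for the next round.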

At DDI, we investigate how AI is being used in counter-terrorism and raise public awareness of its dangers. We advocate for transparency, oversight, and accountability in all algorithmic systems—especially those that determine someone’s freedom or fate.

Security without justice is just another form of control.
