Introduction: Defence and Security Is in a Profound Moment of Change
- Dr Stephen Anning
- Oct 20, 2025
- 6 min read
Defence is at an inflexion point where its continued relevance depends on its ability to adopt Artificial Intelligence. Commenting on the revolutionary impact of uncrewed systems in the Ukraine War, the Parliamentary Under-Secretary of State (Minister for the Armed Forces), Al Carns MP, has said we are in “a machine gun moment for the Army, a submarine moment for the Navy and a jet engine moment for the Air Force”. Central to this revolution in uncrewed systems is a question that defines lethality in warfare: “How can I maximise the threat to my enemy while minimising the risk to my own soldiers?” The role of AI in the uncrewed-systems revolution is a case study in how this question presently defines the character of warfare.
More broadly, AI has the potential to transform warfare by improving decision-making, increasing operational effectiveness, and enhancing strategic foresight. By enabling machines to process vast datasets far more quickly than humans, AI supports applications such as autonomous systems, intelligence analysis, cyber defence, and logistics optimisation, offering faster and more informed responses to complex security challenges (Binns et al., 2020). The promise of increased lethality to the adversary and decreased risk to our own soldiers becomes increasingly apparent. So what does Defence need to do to remain relevant in the new era of Artificial Intelligence?
The Threat
The internet has become a new front line for Defence and Security organisations. Adversaries and criminals operate with anonymity, speed, and global reach. Conflicts are increasingly fought through internet-enabled sabotage, espionage, and subversion (Rid, 2013). State and non-state actors destabilise nations through disinformation campaigns that manipulate public perception and sow discord (Fuchs, 2019). Criminal networks can match the productivity of multinational organisations by exploiting digital platforms to coordinate their activities. The threats to national security and public safety are stark.
The Opportunity
Beyond these emerging threats, society's increasing digitisation presents an opportunity for Defence and Security organisations to maximise taxpayer value through AI-driven productivity gains. AI can increase productivity by automating routine tasks, optimising resource allocation, and creating high standards of analytical rigour for decision-making, thereby allowing more effective use of limited budgets (Huang & Rust, 2020). Additionally, AI-powered systems enable the reallocation of skilled personnel from mundane tasks to more strategic, higher-value activities that would not otherwise be possible (Dastin, 2019).
The Challenge
To stay relevant, Defence and Security organisations must not only counter evolving threats but also keep pace with the rapid adoption of artificial intelligence by industry. Finance, healthcare, and manufacturing organisations are integrating AI to boost efficiency, productivity, and innovation, setting an expectation of similar advancements within the Defence sector (Brynjolfsson & McAfee, 2017). Furthermore, the competition for high-quality AI talent is fierce, with private sector organisations offering better compensation, more flexible working conditions, and the sense of achievement that comes from delivering AI-enabled technologies (Parker et al., 2020). Defence and Security organisations can keep pace by giving people access to leading-edge technologies and opportunities to work on AI-enabled projects.
Responsible AI, Ethical AI and AI Safety
In response to these new threats, opportunities and challenges, Defence and Security leaders must implement robust governance frameworks that promote adoption while mitigating the risks associated with AI deployments. Responsible AI, Ethical AI and AI Safety are complementary but distinct concepts in AI governance.
Responsible AI goes beyond ethical concerns by incorporating practical governance measures, including risk management and compliance, to ensure that AI system development produces safe, reliable, and legally compliant systems (Mittelstadt et al., 2016).
Ethical AI focuses on aligning AI development with moral principles such as fairness, transparency, and accountability, in line with both domestic and international law (Floridi et al., 2018). Ethical frameworks guide leaders in deploying AI systems that avoid harm, prevent bias, and uphold societal values.
AI Safety is the set of practices, research and governance measures designed to ensure that advanced AI systems operate as intended, without causing harm through misuse, accidental failure, or unintended side-effects.
Responsible AI ensures that AI-driven technologies operate effectively and in line with organisational standards, while Ethical AI ensures that applications uphold fundamental rights and democratic principles. Without these governance models, decisions made using faulty AI systems could undermine the people Defence and Security organisations are tasked with protecting (Crawford, 2021).
An analogy for Responsible AI, Ethical AI and AI Safety can be drawn from how Defence develops, deploys and safely operates weapon systems.
Responsible AI is analogous to the development of weapon systems, which begins with legal review under international humanitarian law. Article 36 of the 1977 Additional Protocol I to the Geneva Conventions requires states to review new weapons, means, and methods of warfare to determine whether their use would be prohibited by international law. This obligation defines the parameters for how Defence collaborates with industry to develop AI-enabled weapon systems.
Ethical AI is analogous to the deployment of weapon systems under Rules of Engagement (ROE). ROE are national military directives that translate international humanitarian law (IHL) into specific instructions for when, where, how, and against whom military force may be used. Weapon system operators learn the ROE through training, including judgemental (shoot/don't shoot) exercises. The application of AI requires ethical frameworks that play a similar role.
AI Safety is analogous to the weapons safety regime, which ensures that the combination of the human user and the weapon system operates as intended. Examples include annual weapons tests to confirm the user can hit the intended target, and weapon-handling drills to protect against negligent discharge.
Realising New Value from Text
New threats such as hostile narratives, online radicalisation, and conspiracy theories most often manifest on the internet as text. Moreover, text documents contain valuable insights that would otherwise remain hidden in large intelligence databases. Manually analysing such large volumes of text is virtually impossible for human analysts alone. By leveraging AI to analyse text data, Defence and Security leaders can gain a significant edge in staying ahead of emerging risks and adapting to the fast-paced, data-driven nature of defence and security (Shalev-Shwartz & Ben-David, 2014).
Natural language processing (NLP) algorithms can automate the extraction of relevant information, identify trends, and generate actionable intelligence from diverse text sources, including news articles, social media posts, and intelligence reports (Vaswani et al., 2017). Unlocking insights from text with NLP can help Defence and Security organisations better understand adversaries' actions, predict threats, and inform strategic decision-making, providing insights faster and more accurately than traditional methods (Ritter et al., 2020). The ability to process massive text datasets at speed and scale is essential to keeping pace with adversaries and criminals, and it gives Defence and Security leaders an operational advantage when making decisions, as the short sketch below illustrates.
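To make the idea concrete, the following is a minimal sketch of the kind of extraction pipeline described above. It assumes the open-source spaCy library and its small English model; any comparable NLP toolkit would serve, and the example texts are invented placeholders rather than operational data.

```python
# A minimal sketch of NLP-driven information extraction, assuming spaCy
# and its small English model (python -m spacy download en_core_web_sm).
# The example texts are invented placeholders, not real intelligence data.
from collections import Counter

import spacy

nlp = spacy.load("en_core_web_sm")

# Hypothetical snippets standing in for news articles or open-source reports.
documents = [
    "Officials in Tallinn reported a cyber attack on port logistics systems.",
    "A disinformation campaign targeting Tallinn spread rapidly on social media.",
    "Analysts linked the campaign to a criminal network operating from abroad.",
]

entity_counts = Counter()
for doc in nlp.pipe(documents):
    for ent in doc.ents:
        # Count people, organisations and places mentioned across the corpus.
        if ent.label_ in {"PERSON", "ORG", "GPE"}:
            entity_counts[(ent.text, ent.label_)] += 1

# Surfacing the most frequently mentioned entities is a crude but useful
# first signal of emerging themes across large volumes of text.
for (text, label), count in entity_counts.most_common(5):
    print(f"{text} ({label}): {count}")
```

Even this trivial frequency count scales to volumes no analyst could read manually; production pipelines would layer relation extraction, deduplication and human review on top of the same pattern.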
Integrating Qualitative Approaches into AI Systems
With the focus on unlocking new insights from text through natural language processing, an unacknowledged problem needs addressing: the absence of qualitative research approaches in most NLP systems. While NLP excels at processing large amounts of text, the underlying algorithms often miss the complexities of human narratives, motivations, and the social contexts carried in the language used by both adversaries and populations. Despite an apparent demand in UK Government Defence and Security policy documents, qualitative research is rarely integrated into NLP systems, which tend to prioritise quantitative approaches. The consequence is a level of analytical rigour that fades into the glare of sparkling infographics (Lindgren, 2020). The potential for harm arises when decisions are made using purely quantitative methods that strip the humanity from how people use language; the sketch below shows one way the two traditions might be combined.
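As one possible illustration of a mixed-methods pattern, a workflow might start from a qualitative codebook developed by human analysts through close reading, then use those interpretive categories to steer machine classification. The sketch below assumes the Hugging Face transformers library and its zero-shot classification pipeline; the codebook themes and example text are hypothetical.

```python
# A sketch of one mixed-methods pattern: human-derived qualitative codes
# steer a zero-shot classifier, so interpretive categories (not just word
# frequencies) shape the machine analysis. Assumes the Hugging Face
# transformers library; themes and text are invented for illustration.
from transformers import pipeline

# Themes from a hypothetical qualitative codebook, written by analysts
# after close reading of a sample of the corpus.
codebook_themes = [
    "grievance narrative",
    "call to collective action",
    "delegitimisation of institutions",
]

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

post = ("They have ignored our communities for decades; "
        "it is time we took matters into our own hands.")

result = classifier(post, candidate_labels=codebook_themes)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")

# Low-confidence items would be routed back to human analysts for coding,
# and their judgements used to refine the codebook over time.
```

The design point is that the categories originate in qualitative interpretation, and the human-in-the-loop refinement keeps that interpretation central as the system scales.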
References:
Binns, R., et al. (2020). Artificial Intelligence and its Impact on Defence and Security. Defence Review Journal, 32(3), 45-58.
Brynjolfsson, E., & McAfee, A. (2017). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company.
Crawford, K. (2021). Atlas of AI: Mapping the Forces that Shape Our Lives. Yale University Press.
Dastin, J. (2019). AI in Public Sector Work: The Promise and Perils. Policy Press.
Floridi, L., et al. (2018). Ethics of Artificial Intelligence and Robotics. Stanford Encyclopedia of Philosophy.
Fuchs, C. (2019). Social Media: A Critical Introduction. Sage Publications.
Huang, M. H., & Rust, R. T. (2020). Artificial Intelligence in Business: A Framework for Implementation. Journal of the Academy of Marketing Science, 48(4), 1047-1064.
Lindgren, S. (2020). Data Theory: Interpretive Sociology and Computational Methods. Polity.
Liu, X., et al. (2019). Combining Natural Language Processing and Qualitative Analysis for Threat Assessment. Journal of Security Studies, 45(4), 567-580.
Mittelstadt, B. D., et al. (2016). The Ethics of Algorithms: Mapping the Debate. Big Data & Society, 3(2), 1-21.
O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group.
Parker, D., et al. (2020). Talent Wars: AI and the Fight for Skilled Professionals. Harvard Business Review, 98(9), 55-65.
Rajpurkar, P., et al. (2016). SQuAD: 100,000+ Questions for Machine Comprehension of Text. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), 2383-2392.
Rid, T. (2013). Cyber War Will Not Take Place. Oxford University Press.
Ritter, A., et al. (2020). Natural Language Processing for Intelligence and Security Applications. IEEE Transactions on Neural Networks and Learning Systems, 31(5), 1559-1572.
Schneider, E. (2021). Cybersecurity in the 21st Century: Challenges and Solutions. Routledge.
Shalev-Shwartz, S., & Ben-David, S. (2014). Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press.
Vaswani, A., et al. (2017). Attention is All You Need. Advances in Neural Information Processing Systems (NeurIPS 2017), 30, 5998-6008.


