Artificial Intelligence and Protective Security

  • Knowledge Level: All Levels
  • Protection Stage: All Stages

Considerations for the use of AI in protective security, and guidance on defending against AI-enabled threats

Last Updated: 17 June 2025

Artificial Intelligence

Artificial intelligence (AI) describes computer systems that can perform tasks that would normally require human intelligence. The term AI is used to describe a collection of related technologies including Machine Learning and Generative AI.

Machine Learning is a type of AI in which computers find patterns in data or solve problems without being explicitly programmed. Almost all AI in current use is built using Machine Learning techniques. Generative AI describes AI tools that can produce different types of content, including text, images and video.

AI will play an increasing role in protective security. When applied to appropriate tasks, and implemented securely, AI can enhance physical, personnel and information security. NPSA provides guidance to help organisations understand risks associated with using AI, defend against AI-enabled threats, and securely buy/build AI technology.

Applications of AI in Protective Security

To help organisations understand current and anticipated uses of AI in protective security, example uses are listed below, alongside the established protective security and cyber security guidance that can mitigate the associated risks. Applications of AI in protective security include:

  • Detection of insider events. In addition to ensuring that devices are used in accordance with organisational policies, protective monitoring and logging tools can be used to identify anomalous behaviour on IT systems – which may indicate activity (intentional or unintentional) that could result in harm or loss to the organisation. These tools typically employ Machine Learning to build a baseline of normal activity, from which anomalous activity can be detected (a minimal sketch of this approach follows this list). AI-enabled protective monitoring systems will become more autonomous, capable of taking specific security actions on detection of events.
  • Pre-employment screening. Services specialising in employment screening can assist, alongside human-led processes, in checking multiple information sources to provide suitability and credibility assessments as part of the recruitment process. This application of AI-enabled tools is similar to the long-standing use of AI for detection of financial fraud. Use of AI for this purpose must be compatible with all relevant legislation and organisations should ensure that these services are used in addition to, not as a substitute for, the good practice set out in the NPSA Employment Screening guidance.
  • Generation of organisation-specific security material. Easily accessible, low/no-cost Generative AI tools enable the creation of organisation-specific content. Outputs in the form of text, images, graphics and videos can be produced based on user instructions. Example uses of this technology for security-related content include drafting security policies, creating organisation-specific security awareness campaigns and e-learning content, and assistance with planning and producing content for exercises. As with all AI-generated content, human scrutiny is essential.
  • Improved access to concise organisation-specific security material. Organisation-specific AI assistants or chatbots can be developed using secure and private Large Language Models (LLMs) that can draw on corporate knowledge. This use of Generative AI can present clear, concise, organisation-specific security-related information. In the context of security, these tools give a workforce easy access to messaging on security behaviours, summaries of appropriate security policies or relevant sections of policies, and incident management procedures (a toy retrieval sketch also follows this list).
  • Speech-to-text and summarisation. AI-powered tools are increasingly used to transcribe and summarise in-person and virtual meetings. In the context of security, this includes use in recruitment interviews as part of pre-employment screening and in progress reviews as part of good insider risk management. AI transcription and summarisation require human oversight to assess accuracy and completeness, and to ensure that records reflect the event.
  • Building management and security systems. Security and safety products and services are increasingly enabled or enhanced by AI. Physical security systems are becoming cyber-physical systems, increasingly augmented with AI. Building management systems (BMS) can learn normal patterns of activity, enabling proactive detection of security incidents and emergencies. Modern systems can streamline the processing of visitors, giving improved control over occupancy and visitor access to sensitive areas. Biometric systems and video analytics have, for many years, been underpinned by Machine Learning.
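
As a minimal illustration of the baseline-and-anomaly approach described in the first bullet above, the sketch below trains scikit-learn's IsolationForest on synthetic "normal" activity features (logon hour, data transferred, failed logins) and then scores new events against that baseline. The features and values are invented for illustration; this is a toy sketch, not a production monitoring tool.

```python
# Minimal sketch of ML-based anomaly detection for protective monitoring.
# Assumes scikit-learn is available; the features and data are synthetic
# illustrations, not a real monitoring pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Baseline of "normal" activity: [logon hour, MB transferred, failed logins]
normal = np.column_stack([
    rng.normal(10, 1.5, 500),   # logons cluster around mid-morning
    rng.normal(50, 15, 500),    # typical daily data transfer
    rng.poisson(0.2, 500),      # occasional failed logins
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new events: a 3 a.m. logon with a bulk transfer should stand out
events = np.array([
    [10.2, 48.0, 0],   # ordinary working pattern
    [3.0, 900.0, 6],   # out-of-hours logon, bulk transfer, failed logins
])
for event, label in zip(events, model.predict(events)):
    status = "ANOMALOUS" if label == -1 else "normal"
    print(f"{event} -> {status}")
```

A real deployment would combine many more signals and, as noted above, keep humans in the loop to review flagged events.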
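Similarly, for the organisation-specific assistant bullet, the toy sketch below grounds answers in internal policy text. It substitutes simple keyword overlap for real retrieval; an actual deployment would use a private LLM with embedding-based search, and the policy snippets here are invented.

```python
# Toy sketch of retrieval-grounded answering over internal policy text.
# A real assistant would use a private LLM and vector embeddings; here,
# keyword overlap stands in for retrieval. All policy text is invented.
POLICY_SNIPPETS = {
    "visitors": "All visitors must be escorted and wear a badge at all times.",
    "incidents": "Report suspected security incidents to the duty manager "
                 "immediately via the internal hotline.",
    "passwords": "Use unique passphrases; never share credentials.",
}

def retrieve(question: str) -> str:
    """Return the policy snippet sharing the most words with the question."""
    q_words = set(question.lower().split())
    def overlap(snippet: str) -> int:
        return len(q_words & set(snippet.lower().split()))
    return max(POLICY_SNIPPETS.values(), key=overlap)

print(retrieve("How do I report a security incident?"))
# -> "Report suspected security incidents to the duty manager ..."
```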

Considerations when using AI in Protective Security

To understand the general risks and benefits associated with AI tools, see the NCSC's AI and cyber security: what you need to know. Developing, deploying or operating a Machine Learning system, or a wider system with a Machine Learning component, must be done in a secure and responsible way. To help organisations get this right, the UK Government provides the Code of Practice for the Cyber Security of AI. Non-technical considerations applicable to the use of AI in protective security include:

  • AI system integrity and insider risk. Throughout their life, AI tools require human access for development, configuration, maintenance and decommissioning. The integrity of the data on which a system is trained, and of the data it uses in operation, is essential to its correct functioning (a minimal integrity-check sketch follows this list). Insider Risk Management and Security Culture are therefore important in the context of organisational use of AI.
  • Dependencies, over-reliance, complacency and trust. Organisations should be cautious about creating critical dependencies on AI tools. AI-enabled tools will increasingly perform security-related functions and become components of wider risk management processes – for example, enabling protective monitoring to identify insider events on IT systems as part of a wider set of activities to manage insider risk. Where AI tools perform well, organisations should be conscious of the potential to lose skills and capabilities amongst their security staff. Over-reliance on sophisticated tools is a risk that should be continuously monitored, and effective, tested plans should be in place for instances where a tool's performance declines or it partially or completely fails. Where AI is used as an assistant or teammate, and in situations where human oversight or checking is a critical part of a process, organisations should be conscious of the potential for complacency: too much trust may be placed in an AI tool that performs consistently well. Equally, organisations should be mindful that poor performance of a tool, or an incident involving a tool, may result in a lack of trust.
  • Aggregation of data and creation of new sensitive data assets. Training and use of AI tools may require the formation of new datasets that would otherwise not have existed. A new dataset can be sensitive because its constituent data is sensitive, or because aggregating individually non-sensitive data creates sensitivity; aggregation can also increase the sensitivity of existing datasets. These assets should be captured and protected as part of Protective Security Risk Management.
  • Increasing automation and reduction of human interaction. AI systems and tools will increasingly communicate with, and use, other AI systems and tools, reducing the need for human interaction over time. Increased AI automation, and progression to Agentic AI - where tools work towards objectives set by human operators rather than following individual task-based instructions - may increase the need for human monitoring of AI decisions and actions.
  • Oversight, responsibility and liability. AI tools undertaking or assisting in security functions will be deployed in circumstances where they have a given level of authority to make decisions and/or take action - for example, assisting in the selection and vetting of candidate job-holders. AI systems must operate in compliance with relevant legislation and regulations – for example, adhering to applicable data protection legislation where AI systems are trained on, store or otherwise process personally sensitive information. The use of AI should be overseen by a governance board with appropriately qualified members, such as staff with security, technology and legal specialisms. These boards may also handle escalation of AI security issues, creation of policies, and review of AI business cases. AI systems may be subject to scrutiny in the event of an audit or a legal challenge; transparency (the ability to see what the system is doing with data) and explainability (the ability to describe how the system produces outputs and makes decisions) are therefore important considerations.
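
As one minimal illustration of the data-integrity point in the first bullet above, the sketch below records SHA-256 hashes of training-data files in a manifest and verifies them before use. The file paths are hypothetical, and a real control would also protect the manifest itself (for example through signing, separate storage and access controls).

```python
# Minimal sketch of a training-data integrity check: record SHA-256 hashes
# of dataset files in a manifest, then verify them before each training run.
# File paths are hypothetical examples.
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(data_dir: Path, manifest: Path) -> None:
    hashes = {p.name: sha256(p) for p in sorted(data_dir.glob("*.csv"))}
    manifest.write_text(json.dumps(hashes, indent=2))

def verify(data_dir: Path, manifest: Path) -> list[str]:
    """Return names of files that are missing or have changed."""
    expected = json.loads(manifest.read_text())
    return [name for name, digest in expected.items()
            if not (data_dir / name).exists()
            or sha256(data_dir / name) != digest]

# Usage sketch: fail closed if any training file has been altered.
# changed = verify(Path("training_data"), Path("manifest.json"))
# assert not changed, f"Integrity check failed: {changed}"
```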

Defending against AI-enabled Threat Actors

Threat actors, including terrorist and state actors, benefit from advancements in AI. The impact of AI on the cyber threat is described by the NCSC. Organisations should be aware of the ways in which physical, personnel and information security measures can protect them against AI-enabled adversaries.

To help organisations consider the importance of protective security in the context of AI-enabled threat actors, NPSA sets out important protective security activities below, with examples of theoretical malicious use of AI.

  • Reduce risks of Generative AI revealing sensitive information. Large Language Models (LLMs), which are trained on large amounts of text-based data, can be trained on information made public by an organisation. An LLM may be trained using non-sensitive, publicly available information from different sources – and the resulting model can, under some circumstances, draw sensitive conclusions and insights from that aggregate and make them publicly accessible. LLMs, and AI in general, are designed to be as helpful to human operators as possible, so they can also provide useful information to adversaries acting in the physical, cyber and personnel domains. In this way, adversaries can receive AI assistance with the identification of targets, weaknesses and vulnerabilities, sensitive assets, and people and groups. It should be noted that even non-malicious, legitimate LLMs can unintentionally reveal sensitive information. To reduce the risks associated with LLM use of information, a Security Minded approach to Information Management is essential.
  • Follow good practice on pre-employment screening. Threat actors continue to seek employment in organisations with the aim of exploiting legitimate access for malicious purposes. Easily accessible, low/no-cost Generative AI tools make deceiving prospective employers easier. The technology can assist in writing convincing covering letters, emails, CVs and other communications, tailored to adopt cultural nuances and linguistic styles that may increase appeal to the recruiting organisation; such content will appear human rather than robotic. Generative AI can also assist with the creation of false images and documentation in support of applications. Organisations should be mindful that, during remotely conducted interviews, Generative AI could be used in near-real time to provide candidates with answers to interview questions. Appropriate application of pre-employment screening can detect fraudulent applications at the recruitment/application stage.
  • Manage risks from use of Shadow AI. Non-malicious use of unknown AI tools within an organisation for business purposes can weaken security and harm an organisation. Use of shadow AI presents risks similar to those presented by other shadow IT, as described in the NCSC's Shadow IT guidance (an illustrative detection sketch follows).
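
As an illustrative sketch of one detection control for shadow AI, the snippet below scans web proxy logs for traffic to known public AI service domains. The domain list and log format are invented examples; detection should sit alongside clear policy and an approved route to sanctioned AI tools, not replace them.

```python
# Illustrative sketch: flag possible shadow-AI use by scanning web proxy
# logs for known public AI service domains. The domain list and log format
# are invented examples.
KNOWN_AI_DOMAINS = {"chat.example-ai.com", "api.genai-example.net"}

def flag_shadow_ai(log_lines: list[str]) -> list[tuple[str, str]]:
    """Return (user, domain) pairs where a monitored AI domain was visited.

    Assumes space-separated log lines: '<timestamp> <user> <domain>'.
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in KNOWN_AI_DOMAINS:
            hits.append((parts[1], parts[2]))
    return hits

sample_log = [
    "2025-06-17T09:01:12Z alice intranet.example.org",
    "2025-06-17T09:03:40Z bob chat.example-ai.com",
]
print(flag_shadow_ai(sample_log))  # [('bob', 'chat.example-ai.com')]
```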