
CISA's new guidelines to mitigate AI risks to critical infrastructure
The public release of ChatGPT in late 2022 marked the beginning of a new era in computing technology. It opened the door to integrating generative AI into a wide range of applications for individuals and organizations. Nevertheless, as with everything in technology, these benefits come at a cost: security.
Artificial intelligence (AI) technologies have radically changed how many business tasks are performed, enhancing efficiency and cost-effectiveness. On the dark side, however, the same technology can be used to facilitate various types of cyberattacks against anything that depends on digital technology, with critical infrastructure the most prominent example.
In April 2024, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) issued official guidelines to strengthen critical infrastructure safety and security against AI-related threats. This article covers the main points of that guide. Before we start, however, let's look at some examples of how AI technologies can be leveraged in the context of cybersecurity, according to the new CISA guidelines.
Types of AI risk to critical infrastructure
CISA groups AI risks to critical infrastructure into three main types:
Attacks using AI
In this type of risk, adversaries leverage AI technology to execute attacks against critical infrastructure. AI can be used to directly execute an attack, plan it, or enhance its effectiveness. Here are some examples:
- Adversaries leverage AI to create sophisticated malware that can evade traditional detection techniques. Such attacks are commonly used by advanced persistent threat (APT) groups that deploy ransomware to steal confidential data or for financial gain.
- AI technology can be used to create more efficient botnets for executing distributed denial-of-service (DDoS) attacks.
- Adversaries are increasingly using AI technologies to create convincing social engineering attacks. For example, deepfake and voice cloning have already been used in many scam attacks that resulted in multimillion-dollar losses.
- AI technology can be used efficiently to discover vulnerabilities in IT infrastructure. This facilitates finding entry points that can be exploited to execute different types of attacks.
- Machine learning (ML) algorithms can be used to speed up the process of cracking passwords using brute-force attack techniques.
Attacks targeting AI systems
These attacks are executed directly against the AI systems that power or manage critical infrastructure, such as power grids, water supply facilities, or transportation systems. It is worth noting that these attacks are not always a direct assault on the AI system itself; they may instead target the datasets used to train the AI system or the input channels used to feed information into it.
The primary attacks targeting AI systems include:
Data poisoning attacks
Data poisoning attacks aim to manipulate the training datasets used to build AI and ML models. The attacker's goal is to skew the output of these systems, leading them to produce biased or incorrect results. In critical infrastructure, such as a water supply facility, a data poisoning attack could have severe consequences for millions of people.
For example, consider a water supply facility whose AI system balances the amount and concentration of the chemicals needed to treat the water. A model corrupted by data poisoning could produce unsafe chemical levels, poisoning the water supplies of millions of people.
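To make the mechanism concrete, here is a minimal sketch of how poisoned training labels shift a model's output. The scenario and numbers are entirely hypothetical: a toy least-squares model learns a chemical dose from water-turbidity readings, and a handful of inflated labels in the training set pushes its recommendation well above the clean baseline.

```python
# Toy data poisoning illustration (hypothetical scenario and values):
# a model learns chemical dose from turbidity, and a few corrupted
# training labels shift its predictions. Not a real treatment system.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (single feature)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Clean training data: dose rises gently with turbidity.
turbidity = [1.0, 2.0, 3.0, 4.0, 5.0]
dose      = [1.1, 2.0, 3.1, 4.0, 5.1]

a_clean, b_clean = fit_linear(turbidity, dose)

# Poisoned copy: an attacker inflates two dose labels.
dose_poisoned = [1.1, 2.0, 9.0, 4.0, 12.0]
a_pois, b_pois = fit_linear(turbidity, dose_poisoned)

# Compare the learned dose at a typical turbidity reading.
clean_pred = a_clean * 3.0 + b_clean
pois_pred = a_pois * 3.0 + b_pois
print(f"dose at turbidity 3.0 - clean: {clean_pred:.2f}, poisoned: {pois_pred:.2f}")
```

Only two of five labels were tampered with, yet the recommended dose at the same reading nearly doubles, which is why integrity controls on training data matter so much in safety-critical settings.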
Evasion attacks
In an evasion attack, adversaries feed carefully crafted samples to the AI system's classifier to degrade its performance. In the context of critical infrastructure, this means attackers use sophisticated methods to bypass defenses that rely on AI to detect threats. As a result, malicious activities can slip past those defenses unnoticed.
For example, suppose a power plant uses an industrial control system (ICS) to manage its operations, and the ICS is protected by an AI-powered antivirus program designed to detect and block malware. Threat actors deploy AI-generated malware that changes its behavior and signature according to the target environment, preventing the antivirus tool from detecting it.
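The core idea can be sketched with a deliberately naive detector. All names and thresholds below are hypothetical: the "classifier" just counts suspicious tokens, and a lightly obfuscated sample drops below its decision threshold while keeping the same meaning, which is exactly the shape of an evasion attack against a real statistical model.

```python
# Toy evasion sketch (all names and thresholds hypothetical): a naive
# score-based detector, and an input perturbed just enough to fall
# below its decision threshold. Real evasion perturbs the features a
# trained classifier relies on; the principle is the same.

SUSPICIOUS_TOKENS = {"encrypt", "ransom", "keylog"}
THRESHOLD = 2  # flag a sample when two or more suspicious tokens appear

def score(text: str) -> int:
    """Count how many known-suspicious tokens appear in the text."""
    return sum(token in text for token in SUSPICIOUS_TOKENS)

def is_malicious(text: str) -> bool:
    return score(text) >= THRESHOLD

sample  = "this binary will encrypt files and demand a ransom"
evasive = "this binary will 3ncrypt files and demand a r@nsom"

print(is_malicious(sample))   # the original sample is flagged
print(is_malicious(evasive))  # the obfuscated variant is not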
Interruption-of-service attacks
These attacks are designed to disrupt the normal functioning of AI systems responsible for critical infrastructure operations. The goal is to render the AI system nonfunctional, leaving the infrastructure unprotected and vulnerable to other types of attacks. The disruption can be achieved in several ways, including:
- Overwhelming the target AI system with a flood of queries or requests, making it unable to respond to legitimate ones. For example, an AI-powered intrusion detection system could be fed a massive volume of complex network traffic patterns to exhaust its processing capacity until it stops functioning.
- Exploiting a bug in a target AI system used to protect or manage critical infrastructure, causing it to behave incorrectly or stop functioning.
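One standard defensive pattern against query flooding is to put a rate limiter in front of the AI inference endpoint, so a burst of requests degrades into rejections rather than exhausting the model's capacity. The sketch below is a generic token-bucket limiter, not taken from any specific product; the rate and capacity values are illustrative.

```python
# Defensive sketch (illustrative parameters): a token-bucket rate
# limiter in front of an AI inference service. A query flood is
# throttled instead of exhausting the system's processing capacity.

import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the request."""
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=5)
results = [bucket.allow() for _ in range(100)]  # simulated query flood
print(f"served {sum(results)} of {len(results)} requests")
```

The burst of 100 near-simultaneous requests only gets roughly the bucket's capacity served; legitimate traffic arriving at a normal pace would pass through unaffected as tokens replenish.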
Failures in AI design and implementation
This category refers to weaknesses or flaws in any phase of the AI system development and deployment process, which includes planning, structure, implementation, and execution. Deficiencies in any of these phases can undermine the AI system's effectiveness, reliability, and security, making it susceptible to numerous cyberthreats.
These vulnerabilities can appear in various ways. For instance, failing to consider potential attack vectors (such as input injection) during the planning phase can expose the AI system. In the structure phase, insecure design patterns can introduce weaknesses. In the implementation phase, coding flaws, outdated libraries, or insufficient input validation can introduce security gaps. Finally, in the execution phase, misconfigurations or improper access controls can allow unauthorized access to the AI system and, consequently, to the underlying systems it manages or protects.
Guidelines for critical infrastructure owners and operators
According to the CISA guidelines, the AI risk management framework is composed of the following four functions:
Govern: Establish an organizational culture of AI risk management
Critical infrastructure owners and operators should create policies, business processes, and procedures that can anticipate, address, and manage the various benefits and risks of leveraging AI technology throughout its entire lifecycle. This ensures security is built in from the design phase through the deployment and use of the AI system.
Different tasks are required to foster a culture of AI risk management in critical infrastructure. Here are the most prominent ones:
- Prepare a cybersecurity risk plan that identifies the risks associated with using AI in your organization, such as attacks using AI, attacks targeting AI systems, and failures in AI design and implementation.
- Establish a "secure by design" philosophy throughout the entire AI system development lifecycle.
- Monitor components from third-party AI vendors to ensure they adhere to your organization's security standards (e.g., create an AI bill of materials (AIBOM)).
- Assess the risks and costs of developing in-house AI systems instead of using solutions developed by external providers.
- Collaborate with government agencies and industry bodies on the risks associated with using AI technology in your work. This helps you learn from other organizations' experiences and adopt industry best practices.
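To illustrate the AIBOM idea mentioned above, here is what one inventory entry might look like. The field names and values are hypothetical, not a published AIBOM schema; the point is that each deployed AI system gets a record of its model, training data, and third-party components.

```python
# Hypothetical AIBOM entry (illustrative field names only): a record of
# the model, training data, and third-party components behind one
# deployed AI system, so vendor components can be tracked and audited.

import json

aibom_entry = {
    "system": "intrusion-detection-classifier",   # hypothetical system name
    "model": {"name": "gradient-boosted-trees", "version": "2.3.1"},
    "training_data": {
        "source": "internal-netflow-2023",        # hypothetical dataset
        "last_validated": "2024-03-15",
    },
    "third_party_components": [
        {"name": "scikit-learn", "version": "1.4.2", "license": "BSD-3-Clause"},
    ],
    "vendor": "in-house",
}

print(json.dumps(aibom_entry, indent=2))
```

With an inventory like this, a newly disclosed vulnerability in a library or a vendor model can be mapped to every affected AI system in minutes rather than days.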
Map: Understand your individual AI use context and risk profile
The guidelines in this section point to the fundamental concepts critical infrastructure owners and operators can use to understand the context of AI risks in their organizations. Tackling AI risks in critical infrastructure requires delving into the specifics of each AI deployment. These questions help establish that organizational context:
- How: Is the AI making decisions, or just processing data we feed into it?
- Where: How critical is the infrastructure it operates in? For example, is the AI system managing a nuclear facility or an anti-spam solution?
- Why: What is the purpose of using AI here? For example, is it for management, processing data, or facilitating user communication?
Measure: Develop systems to assess, analyze, and track AI risks
The guidelines in this section help critical infrastructure owners and operators create robust ways to assess AI risks throughout the life of the AI system. By developing their own testing, evaluation, verification, and validation (TEVV) processes, they can make informed decisions about how AI systems will behave under different conditions.
For example, a water treatment facility might run regular simulations to test its AI system's decision-making under various scenarios, such as how it behaves during a distributed denial-of-service (DDoS) attack.
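A TEVV process of that kind can be as simple as a scenario test suite that asserts the system's decisions stay within safe bounds. The sketch below is hypothetical throughout: the dosing function stands in for the real AI model, and the safe range and fallback value are invented for illustration. One scenario mimics losing the sensor feed, as might happen during a DDoS attack.

```python
# TEVV-style scenario tests (hypothetical system, bounds, and fallback):
# check that an AI controller's dosing decision stays within a safe
# range, including a degraded-input scenario that mimics sensor loss.

from typing import Optional

SAFE_MIN, SAFE_MAX = 0.5, 4.0   # assumed safe chemical-dose range (mg/L)

def recommend_dose(turbidity: Optional[float]) -> float:
    """Stand-in for the AI model: clamps its output to the safe range
    and falls back to a conservative default when the sensor feed is
    unavailable (e.g., knocked out during a DDoS attack)."""
    if turbidity is None:
        return 1.0              # conservative fallback dose
    return max(SAFE_MIN, min(SAFE_MAX, 0.8 * turbidity))

scenarios = {
    "normal": 2.5,
    "storm_runoff": 9.0,        # extreme turbidity reading
    "ddos_sensor_loss": None,   # sensor feed unavailable
}

for name, reading in scenarios.items():
    dose = recommend_dose(reading)
    assert SAFE_MIN <= dose <= SAFE_MAX, f"{name}: unsafe dose {dose}"
    print(f"{name}: dose {dose:.2f} mg/L within safe bounds")
```

Running such checks on every model update turns "how will it behave under attack?" from a guess into a repeatable, auditable test result.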
Manage: Prioritize and act upon AI risks to safety and security
The guidelines in this section list the key controls and practices for effectively managing AI systems in critical infrastructure, focusing on maximizing benefits while minimizing safety and security risks. To manage AI risks effectively, organizations must consistently dedicate resources and apply mitigations based on their established work processes.
For example, an intelligent city traffic management system powered by AI technology might implement regular software updates to patch vulnerabilities, perform regular audits of AI decision-making processes, and maintain human oversight for critical decisions, such as closing some roads in emergency conditions.
The guidelines in the CISA document should not be viewed as mere recommendations for addressing AI risks to critical infrastructure safety and security. Critical infrastructure owners and operators should treat them as a set of essential procedures, a roadmap for securing their AI systems and mitigating the various cyberthreats they may face.