On Wednesday, the US Cybersecurity and Infrastructure Security Agency (CISA) and partners from the Five Eyes intelligence alliance issued a new warning on the security dangers inherent in deploying AI models with Operational Technology (OT) systems that manage critical infrastructure.
The joint guidance tackles AI’s integration in infrastructure areas, such as power grids, water treatment facilities, and transportation, and how AI could compromise the safety and stability of OT systems.
The guidance, “Principles for the Secure Integration of Artificial Intelligence in Operational Technology,” was developed by CISA, the FBI, and the National Security Agency (NSA), together with partners from Australia, Canada, Germany, the Netherlands, New Zealand, and the UK.
AI holds “tremendous promise for enhancing the performance and resilience” of OT environments, but it “must be matched with vigilance,” according to CISA Acting Director Madhu Gottumukkala.
AI tools based on machine learning (ML) and large language models (LLMs) can boost the efficiency of OT security operations for industrial systems, but such deployments also create new attack surfaces that operators must mitigate before implementation.
https://www.youtube.com/watch?v=EwoKgJ8BCwU
Moving Through Vulnerabilities
The joint document moves beyond merely warning that hackers could abuse AI, explicitly cautioning against threats arising from AI being used within OT systems for industrial automation.
Integrating AI into OT systems can expand the attack surface through increased connectivity and reliance on third-party, vendor-managed components, creating visibility gaps, the agencies warned.
“Understand the correctness of AI system results to support continued safe operation of systems in an OT environment,” the document highlighted.
“It is vital for critical infrastructure owners and operators to understand the states where an AI system can fail to produce accurate and reliable results. This understanding includes expectations for false positives and false negatives in the system’s performance, and how the false positives compare to the base rate of true positives,” it added.
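To illustrate why that base-rate comparison matters, consider a detector with a seemingly low false-positive rate applied to a stream of mostly benign OT events. The short Python sketch below uses hypothetical numbers (not taken from the guidance) to show how few alerts may correspond to real incidents:

```python
# Hypothetical illustration of the base-rate effect on alert quality.
# All numbers are made up for the example, not taken from the CISA guidance.

events_per_day = 100_000        # OT telemetry events evaluated daily
base_rate = 0.0005              # fraction of events that are real incidents
true_positive_rate = 0.95       # detector catches 95% of real incidents
false_positive_rate = 0.01      # detector flags 1% of benign events

real_incidents = events_per_day * base_rate
benign_events = events_per_day - real_incidents

true_alerts = real_incidents * true_positive_rate
false_alerts = benign_events * false_positive_rate

precision = true_alerts / (true_alerts + false_alerts)
print(f"Alerts per day: {true_alerts + false_alerts:.0f}")
print(f"Share of alerts that are real incidents: {precision:.1%}")
# With these assumptions, under 5% of alerts reflect real incidents,
# which is exactly the kind of operator overload the guidance warns about.
```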
The guidance also flags risks of AI integration in manufacturing OT systems related to model drift, poor training data quality, and operator overload from noisy or incorrect alerts.
CISA’s Executive Assistant Director for Cybersecurity, Nick Andersen, said applying the four key principles in the guidance will help ensure that AI integration into industrial control systems is carried out in a secure and responsible manner.
The four principles cover:
- Understanding AI by training employees on its risks, benefits, and development practices within the OT security architecture.
- Assessing the usage of AI by determining if AI is appropriate for a use case based on operational needs and potential system impacts.
- Establishing AI governance by defining clear roles and responsibilities and conducting continuous testing and compliance audits.
- Maintaining safety and security by implementing oversight and transparency across OT and ICS environments and updating incident response plans to account for AI.
Companies must contractually obligate OT security vendors to disclose any embedded AI features, allowing operators to disable or limit those functions.
The guidance warns, “critical infrastructure owners and operators should review how they are integrating the AI system into their existing procedures and create new safe use and implementation procedures that focus on the AI system integration into the OT environment.”
Implementing Essential Safeguards
The agencies recommend that operators develop oversight mechanisms to manage risk, including human-in-the-loop protocols that prevent AI models from taking potentially dangerous actions without human intervention.
AI systems must have “failsafe mechanisms that enable AI systems to fail gracefully without disrupting critical operations.”
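As a rough sketch of what such safeguards might look like in practice, the Python example below gates an AI recommendation behind operator approval and falls back to a safe state when confidence is low. The class and function names are illustrative assumptions, not part of any vendor API or the guidance itself:

```python
# Hypothetical sketch of human-in-the-loop approval and graceful failure
# for an AI-assisted OT recommendation. All names are illustrative only.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str          # e.g. "reduce_pump_speed"
    confidence: float    # model's self-reported confidence, 0..1

def request_operator_approval(rec: Recommendation) -> bool:
    """Stand-in for an HMI prompt; a real system would route this to an operator console."""
    answer = input(f"Apply '{rec.action}' (confidence {rec.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def apply_action(action: str) -> None:
    print(f"Applying control action: {action}")

def hold_last_safe_state() -> None:
    print("AI output rejected or unavailable; holding last known safe setpoints.")

def handle_recommendation(rec: Recommendation, min_confidence: float = 0.9) -> None:
    # Fail gracefully: low-confidence or unapproved outputs never reach the process.
    if rec.confidence < min_confidence:
        hold_last_safe_state()
        return
    if request_operator_approval(rec):   # human-in-the-loop gate
        apply_action(rec.action)
    else:
        hold_last_safe_state()

if __name__ == "__main__":
    handle_recommendation(Recommendation(action="reduce_pump_speed", confidence=0.93))
```

The key design choice, in line with the guidance, is that the AI output is advisory: nothing reaches the process without an explicit human decision, and any failure path defaults to the last known safe state rather than halting operations.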
The agencies also urge operators to strengthen OT security compliance and data governance frameworks before any AI initiative begins, given the sensitivity of the OT data used to train models; this includes enforcing strict access controls and ensuring that data stored off premises remains secure. The document also stresses the need for operators to continuously validate AI’s compliance with regulatory and safety requirements.
As the White House’s AI Action Plan acknowledged previously, “the use of AI in cyber and critical infrastructure exposes those AI systems to adversarial threats,” making these safeguards important for maintaining system availability and functionality across OT networks.