A Practical Reading of CISA Guidance for Using AI in OT: Controls You Can Implement This Quarter


Most teams treat CISA guidance as a PDF to acknowledge; the advantage goes to the teams that turn it into vendor contract clauses, model/data boundaries, and OT-specific monitoring on day one.

CISA’s AI guidance is only useful when it becomes concrete policies, procurement requirements, and technical guardrails that reduce attack surface.

A practical checklist you can implement this quarter for AI in OT:

1) Data boundaries
– Classify OT data and explicitly define what can/can’t leave the site
– Prohibit training on your telemetry by default; allow only with written approval
– Require encryption in transit and at rest; define retention and deletion SLAs
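The default-deny posture in the data-boundary items can be sketched as a small policy check. This is a minimal illustration, not a real product: the tag names, classification labels, and the approvals registry are all assumptions for the example.

```python
# Sketch of a default-deny export policy for OT data.
# Dataset names, labels, and approvals below are illustrative.

APPROVED_EXPORTS = {("vibration_summary", "vendor-cloud")}  # backed by written approval

CLASSIFICATION = {
    "setpoint_history": "restricted",   # never leaves the site
    "alarm_log": "restricted",
    "vibration_summary": "internal",    # may leave only with approval
}

def may_leave_site(dataset: str, destination: str) -> bool:
    """Default deny: unknown data is treated as restricted,
    and even internal data needs a recorded approval to leave."""
    level = CLASSIFICATION.get(dataset, "restricted")
    if level == "restricted":
        return False
    return (dataset, destination) in APPROVED_EXPORTS
```

The useful property is the fallback: anything not explicitly classified is treated as restricted, so a new tag can never silently become exportable.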

2) Access and identity
– Separate AI tooling accounts from operator engineering accounts
– Enforce MFA, least privilege, and time-bound access for vendors
– Log every model prompt, action, and data access path (and where possible, block high-risk actions)
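The "log everything, block high-risk actions" item can be sketched as an audit-and-gate wrapper that sits between the AI tooling and the control system. The action names and the JSONL log format here are assumptions for illustration.

```python
# Sketch: append-only audit of every AI-initiated action, with an
# outright block on high-risk ones. Action names are illustrative.
import json
import time

HIGH_RISK_ACTIONS = {"write_setpoint", "modify_logic", "change_alarm_threshold"}

def audit_and_gate(account: str, action: str, payload: dict,
                   log_path: str = "ai_audit.jsonl") -> bool:
    """Record the attempt, then return whether the action may proceed."""
    entry = {
        "ts": time.time(),
        "account": account,
        "action": action,
        "payload": payload,
        "blocked": action in HIGH_RISK_ACTIONS,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")  # one audit record per line
    return not entry["blocked"]
```

Note the ordering: the attempt is logged before the allow/deny decision is returned, so blocked attempts still leave evidence for the SOC.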

3) OT monitoring and detection
– Add AI-related telemetry to your OT SOC use cases: new outbound flows, new service accounts, unusual historian queries
– Monitor for model-driven changes to setpoints, logic, recipes, or alarm thresholds
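A SOC use case for model-driven setpoint changes can be as simple as two conditions: who made the change, and whether the new value sits inside the engineering-validated band. The account names, tag, and band below are assumptions for the sketch.

```python
# Sketch of a detection rule for model-driven setpoint changes.
# Service accounts and validated bands are illustrative.

AI_SERVICE_ACCOUNTS = {"ai-svc-01"}
VALIDATED_BAND = {"reactor_temp_sp": (80.0, 120.0)}  # engineering-approved range

def setpoint_alerts(change: dict) -> list[str]:
    """Return alert reasons for a single setpoint-change event."""
    alerts = []
    if change["account"] in AI_SERVICE_ACCOUNTS:
        alerts.append("setpoint change by AI service account")
    lo, hi = VALIDATED_BAND.get(change["tag"], (float("-inf"), float("inf")))
    if not lo <= change["new_value"] <= hi:
        alerts.append("setpoint outside validated band")
    return alerts
```

Separating AI tooling accounts from operator accounts (section 2) is what makes the first condition cheap to write.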

4) Procurement and contracts
– Contractually require SBOMs, vulnerability disclosure timelines, and patch SLAs
– Define model update controls: change notice, rollback plan, and validation in a test environment
– Require documented data lineage and a clear boundary between customer data and vendor training data
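The model-update controls can be enforced mechanically as a release gate: the new version deploys only if the change record carries every contractually required artifact. The field names here are assumptions, not a standard schema.

```python
# Sketch of a release gate for vendor model updates: deployment is
# allowed only when the contract's required evidence is present.
# Field names are illustrative.

REQUIRED_EVIDENCE = {"change_notice", "rollback_plan", "test_env_validation", "sbom"}

def update_approved(change_record: dict) -> bool:
    """True only if every required artifact is present and non-empty."""
    provided = {key for key, value in change_record.items() if value}
    return REQUIRED_EVIDENCE <= provided
```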

5) Supply chain and architecture
– Prefer on-prem or tightly scoped edge deployments for sensitive environments
– Segment AI components like any other critical OT asset; restrict egress by default
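"Restrict egress by default" reduces to an allowlist check on destination network and port; whatever enforces it (firewall, proxy, host policy), the logic looks like this. The CIDR and port below are assumptions for the example.

```python
# Sketch of default-deny egress for an AI segment: only flows on the
# allowlist pass. The CIDR and port are illustrative.
import ipaddress

EGRESS_ALLOWLIST = [("10.20.30.0/24", 443)]  # e.g. an on-prem model registry

def egress_allowed(dst_ip: str, dst_port: int) -> bool:
    """True only if the destination matches an allowlisted network and port."""
    dst = ipaddress.ip_address(dst_ip)
    return any(
        dst in ipaddress.ip_network(net) and dst_port == port
        for net, port in EGRESS_ALLOWLIST
    )
```

Anything not on the list, including vendor telemetry endpoints added after deployment, fails closed until it is reviewed and allowlisted.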

If you’re adopting AI in OT this year, which of these is hardest in your environment: data boundaries, monitoring, or vendor contract language?