Most teams treat CISA guidance as a PDF to file away.
Treat it like an architecture spec: if you can’t point to the control in your OT network, you don’t have “AI security” — you have AI exposure.
Here’s a lightweight checklist to turn AI-in-OT principles into implementable controls:
1) Asset + data inventory
– Where are AI models running (edge gateway, historian tier, cloud)?
– What OT data feeds them (tags, logs, images), and where does it leave the plant?
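A minimal sketch of what that inventory can look like as a record you can actually audit. Field names are illustrative assumptions, not a CISA schema:

```python
# Sketch of an AI-in-OT inventory record; field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    name: str                  # e.g. "anomaly-detector-01"
    tier: str                  # "edge-gateway" | "historian" | "cloud"
    model_version: str
    ot_data_feeds: list[str] = field(default_factory=list)  # tags, logs, images
    data_leaves_plant: bool = False    # does any feed cross the plant boundary?
    egress_destination: str | None = None

inventory = [
    AIAssetRecord(
        name="anomaly-detector-01",
        tier="edge-gateway",
        model_version="2.3.1",
        ot_data_feeds=["historian:pump_vibration", "plc:line4_temps"],
        data_leaves_plant=True,
        egress_destination="vendor-cloud.example.com",
    ),
]

# Quick audit: every asset whose data leaves the plant must name a destination.
for asset in inventory:
    if asset.data_leaves_plant and not asset.egress_destination:
        print(f"GAP: {asset.name} exports OT data with no documented destination")
```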
2) Data handling controls
– Classify OT data; define allowed uses (training vs inference).
– Minimize retention; encrypt in transit/at rest; restrict exports.
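One way to make "allowed uses" enforceable rather than aspirational: a policy table your pipeline checks before any data is used. Classes and retention caps below are placeholder assumptions:

```python
# Hedged sketch: OT data classes mapped to allowed AI uses and retention caps.
POLICY = {
    "process-telemetry":     {"allowed_uses": {"inference"}, "max_retention_days": 30},
    "anonymized-telemetry":  {"allowed_uses": {"inference", "training"}, "max_retention_days": 365},
    "camera-imagery":        {"allowed_uses": {"inference"}, "max_retention_days": 7},
}

def check_use(data_class: str, use: str, retention_days: int) -> list[str]:
    """Return policy violations for a proposed data use (empty list = OK)."""
    rule = POLICY.get(data_class)
    if rule is None:
        return [f"unclassified data class: {data_class}"]
    violations = []
    if use not in rule["allowed_uses"]:
        violations.append(f"{use} not permitted for {data_class}")
    if retention_days > rule["max_retention_days"]:
        violations.append(f"retention {retention_days}d exceeds {rule['max_retention_days']}d cap")
    return violations

print(check_use("process-telemetry", "training", 90))
# -> ['training not permitted for process-telemetry', 'retention 90d exceeds 30d cap']
```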
3) Model and pipeline access
– Separate service accounts; least privilege; MFA for consoles.
– Signed artifacts; controlled model promotion (dev/test/prod).
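"Controlled promotion" can be as simple as a gate that refuses any artifact not in a signed release manifest. The manifest format and paths here are assumptions for illustration:

```python
# Sketch of a promotion gate: a model moves to prod only if its SHA-256
# digest matches the release manifest (which you'd sign and fetch from
# your registry; inlined here for illustration).
import hashlib
from pathlib import Path

RELEASE_MANIFEST = {
    "anomaly-detector-2.3.1.onnx": "9f2c...replace-with-real-digest...",
}

def promote(artifact: Path) -> bool:
    expected = RELEASE_MANIFEST.get(artifact.name)
    if expected is None:
        print(f"BLOCK: {artifact.name} not in release manifest")
        return False
    actual = hashlib.sha256(artifact.read_bytes()).hexdigest()
    if actual != expected:
        print(f"BLOCK: digest mismatch for {artifact.name}")
        return False
    print(f"OK: {artifact.name} verified; promote to prod")
    return True
```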
4) Network segmentation
– Place AI components in a dedicated zone.
– Limit flows to required protocols/ports; one-way where feasible.
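The segmentation policy itself should be writable as a default-deny flow table. Zones, protocols, and ports below are placeholders, not a recommended ruleset:

```python
# Illustrative default-deny flow table for the AI zone.
ALLOWED_FLOWS = {
    # (src_zone, dst_zone, protocol, port)
    ("historian", "ai-zone", "tcp", 443),    # data feed into the AI zone
    ("ai-zone", "scada-dmz", "tcp", 8443),   # results out, one direction only
}

def flow_permitted(src: str, dst: str, proto: str, port: int) -> bool:
    return (src, dst, proto, port) in ALLOWED_FLOWS

# Anything not explicitly listed is denied, including AI-zone -> PLC.
print(flow_permitted("ai-zone", "plc-network", "tcp", 502))  # False
```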
5) Monitoring + detection
– Log model access, prompts/inputs, outputs, and admin actions.
– Alert on abnormal data pulls, sudden model changes, new egress paths.
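A sketch of what "log it and alert on it" looks like at the service level: structured audit events plus a naive volume threshold. Event fields and the threshold are assumptions to tune against your own baseline:

```python
# Sketch: structured audit logging for an AI service + a volume-based alert.
import json
import logging
import time

log = logging.getLogger("ai-audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def audit(event_type: str, **fields):
    log.info(json.dumps({"ts": time.time(), "event": event_type, **fields}))

PULL_THRESHOLD = 10_000  # rows per request; tune to your historian baseline

def on_data_pull(user: str, rows: int, destination: str):
    audit("data_pull", user=user, rows=rows, destination=destination)
    if rows > PULL_THRESHOLD:
        audit("ALERT_abnormal_pull", user=user, rows=rows)

on_data_pull("svc-ai-inference", rows=250_000, destination="vendor-cloud.example.com")
```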
6) Supplier and integration risk
– Require SBOM/model provenance; patch SLAs; remote access controls.
– Validate connectors to PLC/HMI/historian; document trust boundaries.
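Supplier requirements only bite if something checks them at delivery time. A minimal procurement gate, with evidence names as illustrative assumptions:

```python
# Hedged sketch: check a vendor delivery for the artifacts your contract requires.
REQUIRED_EVIDENCE = {"sbom", "model_provenance", "patch_sla", "remote_access_policy"}

def vet_supplier(delivery: dict) -> list[str]:
    """Return the evidence items missing from a vendor delivery."""
    return sorted(REQUIRED_EVIDENCE - set(delivery))

delivery = {"sbom": "sbom.spdx.json", "patch_sla": "30d-critical"}
print(vet_supplier(delivery))  # ['model_provenance', 'remote_access_policy']
```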
7) Safety and fail-safe behavior
– Define what the AI can and cannot actuate.
– Ensure manual override; graceful degradation to known-safe mode.
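"Define what the AI can and cannot actuate" translates directly into an actuation guard: an explicit allowlist with bounds, and a known-safe fallback for anything out of range. Tags and limits are placeholders:

```python
# Sketch of an actuation guard: the AI may only write allowlisted tags,
# within bounds; out-of-range commands degrade to a known-safe setpoint.
ACTUATION_ALLOWLIST = {
    "line4/fan_speed_pct": (20.0, 80.0),   # (min, max) the AI may command
}
SAFE_DEFAULTS = {"line4/fan_speed_pct": 50.0}

def guarded_actuate(tag: str, value: float) -> float:
    """Clamp AI commands to the allowlist; unknown tags are refused outright."""
    if tag not in ACTUATION_ALLOWLIST:
        raise PermissionError(f"AI may not actuate {tag}")
    lo, hi = ACTUATION_ALLOWLIST[tag]
    if not (lo <= value <= hi):
        return SAFE_DEFAULTS[tag]   # graceful degradation to known-safe
    return value

print(guarded_actuate("line4/fan_speed_pct", 95.0))  # -> 50.0 (safe default)
```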
8) Incident response for AI in OT
– Run playbooks for: data exfiltration, model tampering, prompt injection, model drift.
– Pre-stage rollback models; isolate the AI zone without halting operations.
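"Pre-stage rollback" means the swap is one step, not a project. A sketch, with paths as illustrative assumptions:

```python
# Sketch of pre-staged rollback: pin the last known-good model so an
# operator can swap it in during an incident without halting the process.
import shutil
from pathlib import Path

ACTIVE = Path("models/active.onnx")
KNOWN_GOOD = Path("models/known_good/anomaly-detector-2.2.0.onnx")

def rollback():
    shutil.copy2(KNOWN_GOOD, ACTIVE)   # restore the pinned known-good model
    print("rolled back to", KNOWN_GOOD.name)
    # Isolating the AI zone is then a firewall change (flip the flow table
    # from step 4 to deny-all egress); production keeps running because the
    # control loop does not depend on the AI.
```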
If you had to prove AI-in-OT security in 30 minutes, which of these would you struggle to evidence?