Contrarian take: the safest SIS is the one you can still patch, monitor, and validate.
Too many SIS environments get a security pass because they’re “safety-critical.”
That logic is backwards.
If a cyber event can change logic, blind diagnostics, or disrupt comms, your safety case is now conditional on security you didn’t specify.
ISA/IEC 62443 gives a practical way out: define Security Levels (SLs) at SIS zone boundaries and turn risk talk into engineering requirements.
What that looks like in practice:
– Define SIS zones/conduits explicitly (SIS controller, engineering workstation, diagnostics, vendor remote access)
– Assign a target SL (SL-T) for each zone based on credible threat capability, not comfort level
– Translate each SL into design requirements: segmentation, authentication, hardening, logging, backup/restore, update strategy (see the sketch after this list)
– Make it testable: FAT/SAT cybersecurity checks, periodic validation, evidence for MOC and audits (see the gap check after this list)
– Assign ownership: who maintains accounts, patches, monitoring, and exception handling
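For illustration only, here is a minimal Python sketch of how zone/conduit definitions, target SLs, and the resulting design requirements can live as engineering data instead of slideware. The zone names, SL-T values, and the SL-to-requirement mapping are made-up placeholders, not normative ISA/IEC 62443 content — your risk assessment decides the real values.

```python
from dataclasses import dataclass, field

# Illustrative sketch: zone names, SL-T values, and requirement mappings below
# are placeholders, not a recommendation for any specific plant.

@dataclass
class Zone:
    name: str
    target_sl: int                                  # SL-T, 1..4 per ISA/IEC 62443
    assets: list[str] = field(default_factory=list)

@dataclass
class Conduit:
    name: str
    from_zone: str
    to_zone: str
    allowed_flows: list[str] = field(default_factory=list)

# Hypothetical mapping from target SL to design measures the project must
# implement and later verify.
SL_REQUIREMENTS = {
    2: ["network segmentation", "unique user accounts", "host hardening baseline",
        "central logging", "tested backup/restore"],
    3: ["multi-factor auth for remote access", "application allowlisting",
        "signed firmware/updates", "continuous monitoring of SIS conduits"],
}

def design_requirements(zone: Zone) -> list[str]:
    """Collect every measure required at or below the zone's target SL."""
    reqs: list[str] = []
    for sl in sorted(SL_REQUIREMENTS):
        if sl <= zone.target_sl:
            reqs.extend(SL_REQUIREMENTS[sl])
    return reqs

sis_zone = Zone("SIS controllers", target_sl=3, assets=["logic solver", "IO"])
ews_zone = Zone("SIS engineering workstation", target_sl=2, assets=["EWS"])
ews_conduit = Conduit("EWS-to-SIS", ews_zone.name, sis_zone.name,
                      allowed_flows=["engineering protocol, change-window only"])

for z in (sis_zone, ews_zone):
    print(z.name, "SL-T", z.target_sl, "->", design_requirements(z))
```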
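Continuing the same sketch, a hypothetical FAT/SAT gate: compare the measures required by a zone's SL-T against collected test evidence and report whatever is unverified. The evidence entries are invented examples.

```python
# Builds on the Zone and design_requirements definitions from the sketch above.

def fat_sat_gaps(zone: Zone, evidence: dict[str, str]) -> list[str]:
    """Return required measures with no recorded test evidence."""
    return [req for req in design_requirements(zone) if req not in evidence]

evidence_log = {
    "network segmentation": "FAT test 4.2, firewall ruleset review",
    "unique user accounts": "account audit report",
}
gaps = fat_sat_gaps(sis_zone, evidence_log)
print("FAT/SAT gaps for", sis_zone.name, ":", gaps or "none")
```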
Security levels aren’t bureaucracy. They’re how you prove the safety function still holds under cyber conditions.
If your SIS is “off-limits” to security engineering, it’s also off-limits to assurance.
How are you defining SIS security boundaries and target SLs today?