From IT AD to historian ransomware: the dual-homing pivot path most teams don’t model end-to-end

If your historian can talk both ways, assume an attacker will use it as a router.

Here’s the pivot path I see repeatedly when incidents cross from IT into OT:

1) AD compromise (IT)
– Phished creds or token theft lands an attacker on a workstation/server.
– They enumerate AD, find service accounts, remote management paths, and “who talks to the historian.”

2) Lateral movement to the historian (the choke point)
– The historian is trusted, always-on, and connected to everything that matters.
– Dual-homed networking or shared credentials turn it into the bridge.

3) Ransomware on the historian = encrypted visibility
– Even before PLCs are touched, operations lose trending, alarms, reports, and context.
– Recovery is slow because historians often sit outside normal backup discipline.

4) Pivot into OT
– From the historian host, attackers reuse credentials, remote tools, or open routes to reach engineering workstations, HMIs, jump hosts, and OT management services.

Three places to stop this early:
A) Kill the credential chain
– Separate identity boundaries for OT, no AD trust shortcuts, rotate and scope service accounts, remove shared local admin.

B) Break the network bridge
– True segmentation between IT and OT, tightly controlled conduits, deny-by-default, and avoid dual-homed “convenience” paths.

C) Make the historian resilient
– One-way data transfer patterns where possible (data diode / brokered replication), immutable backups, and tested restore procedures.
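The deny-by-default conduit idea in (B) can be sketched as a tiny rule check. This is a minimal illustration, not any firewall's actual syntax; the zone names, port, and rule set are hypothetical:

```python
# Minimal deny-by-default conduit check between IT and OT zones.
# Zone names, ports, and rules are illustrative, not from any product.
ALLOWED_CONDUITS = {
    # (src_zone, dst_zone, port): reason the conduit exists
    ("it_dmz", "ot_historian", 443): "brokered historian replication",
}

def conduit_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
    """Deny by default: traffic passes only if an explicit conduit exists."""
    return (src_zone, dst_zone, port) in ALLOWED_CONDUITS

print(conduit_allowed("it_dmz", "ot_historian", 443))   # explicit conduit: allowed
print(conduit_allowed("it_corp", "ot_historian", 445))  # no rule: denied by default
```

The point of the shape: there is no "allow everything else" branch, so a dual-homed "convenience" path has to be written down and justified before it can exist.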

Most teams model IT ransomware and OT safety separately. The historian is where those stories merge.

Where does your historian live in the trust model: a sensor, or a router?

Modern web-based HMIs: Do they really add attack surface—or just make the existing one visible?

Hot take: the “web HMI = more insecure” claim is usually an architecture problem, not a technology problem.

Browser-based HMIs don’t magically create new risk. They often expose the risk you already had in thick clients: weak identity, flat networks, slow patching, and unclear ownership.

If you’re evaluating a web HMI, don’t debate web vs. native. Ask what you are actually deploying.

Key design choices that determine real-world risk:

1) Identity and access
– Central IdP, MFA for remote access
– Role-based access, least privilege
– Separate operator vs engineer privileges

2) Session handling
– Short-lived tokens, rotation, timeouts
– No shared accounts, no “always logged in” kiosks without compensating controls

3) Network exposure
– No direct internet path to OT
– DMZ, reverse proxy, allow-listing
– Remote access via VPN/ZTNA with device posture

4) Update and vulnerability cadence
– Who patches what, and how fast
– SBOM, dependency scanning, signed builds
– Documented rollback and maintenance windows

5) Observability
– Central logs, auth events, configuration change trails
– Alerting that someone actually reads
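The session-handling point (2) is the one most often hand-waved, so here is a minimal sketch of short-lived tokens with an idle timeout. The lifetimes and field names are assumptions for illustration, not a real HMI product's API:

```python
# Sketch: short-lived session tokens with an absolute lifetime and an
# idle timeout. TTL values and field names are illustrative.
TOKEN_TTL = 15 * 60     # absolute lifetime: 15 minutes
IDLE_TIMEOUT = 5 * 60   # idle timeout: 5 minutes

def issue_token(now: float) -> dict:
    return {"issued_at": now, "last_seen": now}

def token_valid(token: dict, now: float) -> bool:
    """Reject tokens that are too old or have been idle too long."""
    if now - token["issued_at"] > TOKEN_TTL:
        return False
    if now - token["last_seen"] > IDLE_TIMEOUT:
        return False
    return True

t = issue_token(now=0)
print(token_valid(t, now=60))   # fresh and recently used: valid
print(token_valid(t, now=400))  # idle past 5 minutes: invalid
```

An "always logged in" kiosk is exactly what this breaks, which is why kiosks need explicit compensating controls rather than an exemption.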

Modernization is not the risky part. Unclear boundaries are.

If you want a quick gut check: show me your auth model, network zones, and update process and I’ll show you your risk.

What’s the hardest part for your team today: identity, network segmentation, or patching?

In OT, risk isn’t a buzzword: operationalize (threat × vulnerability × asset) into a weekly prioritization loop

Most OT programs fail because they rank vulnerabilities, not risk.

Flip it: start with your assets and credible threats, then decide which vulnerabilities actually matter enough to fix this week.

When every finding is “critical,” nothing gets done. The backlog becomes political, and engineering, IT, and operations debate severity instead of impact.

A simple, repeatable model breaks the stalemate:
Risk = Threat × Vulnerability × Asset

Turn that into a weekly loop:
1) Asset: Pick the top systems that keep product moving and people safe (not everything).
2) Threat: Agree on the few credible scenarios that could realistically hit those assets (not theoretical CVSS fear).
3) Vulnerability: Only then map weaknesses that enable those scenarios.
4) Score: Use a consistent 1–5 scale for each factor. Multiply. Rank.
5) Commit: Fix the top 5–10 items this week. Everything else waits.
6) Review: Capture what changed in the environment, threats, or compensating controls and rescore next week.
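The weekly loop above can be sketched in a few lines. The findings and their 1–5 scores are invented for illustration; the only real content is the multiply-and-rank step:

```python
# Weekly risk scoring: Risk = Threat x Vulnerability x Asset, each 1-5.
# The findings and scores below are illustrative placeholders.
findings = [
    {"name": "Flat network to line PLCs",  "threat": 4, "vuln": 5, "asset": 5},
    {"name": "Outdated office print server", "threat": 3, "vuln": 4, "asset": 1},
    {"name": "Shared vendor login on HMI", "threat": 5, "vuln": 4, "asset": 4},
]

for f in findings:
    f["risk"] = f["threat"] * f["vuln"] * f["asset"]

# Rank, commit to the top items this week; everything else waits.
ranked = sorted(findings, key=lambda f: f["risk"], reverse=True)
for f in ranked:
    print(f["risk"], f["name"])
```

Note how the print server scores 12 despite a "real" vulnerability: the asset factor keeps low-consequence findings out of this week's commitment, which is the whole point of the model.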

Outcome: shared language across OT engineering, IT security, and operations, and a prioritized plan tied to real-world impact.

If your OT backlog feels permanent, stop asking “Which vulnerabilities are worst?”
Start asking “Which asset-threat paths are most likely and most damaging this week?”

ISA/IEC 62443 for SIS: stop treating Safety Systems as “off-limits” and start applying security levels like an engineering spec

Contrarian take: the safest SIS is the one you can still patch, monitor, and validate.

Too many SIS environments get a security pass because they’re “safety-critical.”
That logic is backwards.

If a cyber event can change logic, blind diagnostics, or disrupt comms, your safety case is now conditional on security you didn’t specify.

ISA/IEC 62443 gives a practical way out: define Security Levels at SIS boundaries and turn risk talk into engineering requirements.

What that looks like in practice:
– Define SIS zones/conduits explicitly (SIS controller, engineering workstation, diagnostics, vendor remote access)
– Assign target SL based on credible threat capability, not comfort level
– Translate SL into design requirements: segmentation, authentication, hardening, logging, backup/restore, update strategy
– Make it testable: FAT/SAT cybersecurity checks, periodic validation, evidence for MOC and audits
– Assign ownership: who maintains accounts, patches, monitoring, and exception handling
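One way to make "translate SL into design requirements" concrete is a cumulative mapping you can hand to FAT/SAT. The control lists below are illustrative shorthand, not the normative 62443-3-3 requirement text:

```python
# Sketch: turn a target Security Level into a testable requirements list.
# Control names are illustrative, not the normative standard text.
SL_REQUIREMENTS = {
    1: ["basic authentication", "zone boundary documented"],
    2: ["unique accounts", "segmentation enforced at conduits", "event logging"],
    3: ["MFA for engineering access", "hardened hosts",
        "backup/restore tested", "continuous monitoring"],
}

def requirements_for(target_sl: int) -> list:
    """Cumulative: a target SL inherits everything required below it."""
    reqs = []
    for level in range(1, target_sl + 1):
        reqs.extend(SL_REQUIREMENTS[level])
    return reqs

print(requirements_for(2))
```

The value is the shape, not the content: once each SIS zone carries a target SL, the checklist it implies is explicit, ownable, and auditable.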

Security levels aren’t bureaucracy. They’re how you prove the safety function still holds under cyber conditions.

If your SIS is “off-limits” to security engineering, it’s also off-limits to assurance.

How are you defining SIS security boundaries and target SLs today?

BeyondTrust RS/PRA command injection (CVE-2026-1731): why Zero Trust is necessary but not sufficient for remote support tools

Zero Trust won’t save you from a vulnerable admin tool by itself.

Ask one question:
If this box is compromised, what’s the maximum damage it can do in 10 minutes?

A command injection in a privileged remote support platform collapses the trust boundary. The “helpdesk tool” becomes:
– Immediate privileged code execution
– Credential access and session hijack potential
– Fast lateral movement across managed endpoints

Zero Trust helps only if it is translated into hard controls that shrink blast radius:
– Least privilege for the platform service accounts and integrations
– Network segmentation so the tool cannot reach everything by default
– Just-in-time access for technicians and elevated actions
– Isolation: dedicated jump hosts, separate admin planes, restricted egress
– Application allowlisting and controlled script execution
– Session recording and strong audit logs that cannot be tampered with
– Compensating monitoring: alert on unusual commands, new tool binaries, and rapid host-to-host pivots
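The allowlisting and controlled-script-execution controls reduce to one idea: the platform runs only what is explicitly approved. A minimal sketch, with a hypothetical command set:

```python
# Sketch of controlled command execution for a remote support session:
# allowlist the binaries a technician may invoke, alert on everything else.
# The allowed set is illustrative.
ALLOWED_COMMANDS = {"ipconfig", "ping", "systeminfo"}

def check_command(command_line: str) -> str:
    binary = command_line.split()[0].lower()
    if binary in ALLOWED_COMMANDS:
        return "allow"
    return "block-and-alert"   # unusual commands feed compensating monitoring

print(check_command("ping 10.0.0.5"))
print(check_command("powershell -enc ..."))
```

Even when the platform itself is vulnerable, this kind of control caps what those 10 minutes of compromise can execute on managed endpoints.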

Remote support is operationally critical. Treat it like a Tier 0 asset.
Design it so compromise is survivable, not catastrophic.

Poland’s energy-sector cyber incident: the overlooked OT/ICS gaps that still break most “enterprise” security programs

Contrarian take: If your OT security plan looks like your IT plan (patch faster, add more agents, buy another SIEM), you’re probably increasing risk.

Critical infrastructure incidents rarely fail because of exotic malware.
They fail because IT-first controls don’t translate to OT realities: uptime constraints, legacy protocols, safety interlocks, and always-on vendor access.

Where most “enterprise security” programs still break in OT/ICS:

1) Asset visibility that stops at the switch
If you can’t answer “what is this PLC/HMI, what talks to it, and what would break if it changes,” you’re operating blind.

2) Remote access governance built for convenience, not safety
Shared vendor accounts, always-on VPNs, no session recording, no time bounds, no approvals. This is the common entry point.

3) Segmentation designed for org charts, not process safety
Flat networks and dual-homed boxes turn a small intrusion into plant-wide impact. Segment by function and consequence, then control the conduits.

4) Monitoring that can’t see OT protocols
If telemetry is only Windows logs and SIEM alerts, you’ll miss the real story on Modbus, DNP3, IEC 60870-5-104, OPC, and proprietary vendor traffic.

5) Patch expectations that ignore outage windows and certification
In OT, “just patch” can equal downtime. Compensating controls and risk-based maintenance matter.

If you lead security or build products for critical infrastructure: start with asset inventory, remote access, and safety-driven segmentation. Reduce risk without disrupting operations.

What OT/ICS gap do you see most often in the field?

Just‑In‑Time, time‑bound access for OT: the fastest way to cut vendor and admin risk without slowing operations

The goal isn’t “least privilege” on paper. It’s least time.

If an account can exist forever, it will be used forever (and eventually abused).

In OT environments, persistent accounts and always‑on remote access are still common. They also show up repeatedly as root causes in incidents:
– Shared vendor logins that never expire
– Standing admin rights “just in case”
– Remote tunnels left open after maintenance
– Accounts that outlive the contract, but not the risk

Just‑In‑Time (JIT) + time‑bound access changes the default:
Access is requested, approved, logged, and automatically revoked.

What you gain immediately:
– Smaller blast radius when credentials are exposed
– Clear audit trails for who accessed what, when, and why
– Faster offboarding for vendors and rotating staff
– Fewer exceptions that turn into permanent backdoors

The key is designing around OT realities:
– Support urgent break/fix with pre-approved workflows
– Time windows aligned to maintenance shifts
– Offline/limited-connectivity options where needed
– Access that’s scoped to assets and tasks, not “the whole site”
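The "requested, approved, logged, automatically revoked" flow can be sketched as a grant that carries its own expiry, checked on every use. Names and the two-hour window are illustrative:

```python
# Sketch of just-in-time, time-bound access: every grant carries an expiry
# and is checked on each use, so revocation needs no manual cleanup.
# User, asset, and window values are illustrative.
def grant_access(user: str, asset: str, now: float, minutes: int) -> dict:
    return {"user": user, "asset": asset, "expires_at": now + minutes * 60}

def access_allowed(grant: dict, user: str, asset: str, now: float) -> bool:
    return (grant["user"] == user
            and grant["asset"] == asset     # scoped to one asset, not the site
            and now < grant["expires_at"])  # expired grants simply stop working

g = grant_access("vendor_tech", "plc_line_3", now=0, minutes=120)
print(access_allowed(g, "vendor_tech", "plc_line_3", now=3600))   # in window
print(access_allowed(g, "vendor_tech", "plc_line_3", now=10800))  # auto-expired
```

The design choice worth noticing: expiry is enforced at check time, not by a cleanup job, so a forgotten grant cannot become a permanent backdoor.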

If you’re still managing vendor access with permanent accounts and manual cleanup, JIT is one of the highest-impact controls you can deploy without slowing operations.

Where are persistent accounts still hiding in your OT environment today?

Virtual patching isn’t a fix — it’s a risk-managed bridge (if you treat it like one)

Most teams use virtual patching as a comfort blanket.

In OT, you often cannot patch immediately. Uptime commitments, safety certification, vendor approvals, and fragile legacy stacks make “just patch it” unrealistic.

Virtual patching can be a smart short-term control, but only if you treat it like a bridge, not a destination.

What it actually reduces:
– Exploitability of known vulnerabilities by blocking specific traffic patterns or behaviors
– Exposure window while you coordinate downtime, testing, and vendor support

What it does not fix:
– The vulnerable code still exists
– Local exploitation paths, misconfigurations, and unknown variants may remain
– Asset and protocol blind spots can turn into false confidence

Disciplined virtual patching looks like:
1) Define an expiration date and an owner (when does “temporary” end?)
2) Specify coverage: which CVEs, which assets, which protocol functions, which zones
3) Validate with proof, not assumptions: replay exploit attempts, run vulnerability scans where safe, confirm logs/alerts, verify rule hit rates
4) Run a parallel plan to remove it: patch path, compensating controls, maintenance window, rollback plan
5) Reassess after changes: new firmware, new routes, new vendors, new threats
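Step 1 of the discipline (an expiration date and an owner) is easy to automate. A minimal sketch, using placeholder CVE IDs and field names:

```python
from datetime import date

# Sketch: track each virtual patch as a bridge with an owner, coverage
# statement, and expiry, then flag anything that quietly became permanent.
# CVE IDs, dates, and coverage text are placeholders.
virtual_patches = [
    {"cve": "CVE-0000-0001", "owner": "ot-sec", "expires": date(2024, 6, 1),
     "coverage": "blocks one function code toward zone 3 PLCs"},
    {"cve": "CVE-0000-0002", "owner": "ot-sec", "expires": date(2025, 1, 15),
     "coverage": "drops malformed writes at the conduit"},
]

def overdue(patches: list, today: date) -> list:
    """Anything past its expiry is 'permanent temporary security'."""
    return [p["cve"] for p in patches if today > p["expires"]]

print(overdue(virtual_patches, today=date(2024, 9, 1)))
```

If this report is empty every week, the bridge is being walked across; if entries linger, "temporary" has already ended and nobody noticed.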

If you can’t explain what your virtual patch blocks, how you tested it, and when it will be removed, you have not reduced risk. You’ve created permanent temporary security.

How are you validating virtual patching effectiveness in your OT environment?

Stop chasing “bad actors” in OT: baseline behavior to catch “normal” actions at the wrong time, place, or sequence

The most dangerous OT insider events won’t trip alerts because nothing looks “malicious.”

In many incidents, the user is legitimate:
Operators, contractors, engineers.
Credentials are valid.
Commands are allowed.

The risk is in small deviations that create big safety and uptime impact.

Instead of hunting for “bad actors,” define safe normal per role and process, then alert on drift.

What to baseline in OT:
– Time: out-of-shift changes, weekend maintenance that wasn’t scheduled
– Place: new asset touchpoints (a contractor suddenly interacting with SIS-adjacent systems)
– Sequence: unusual command chains (mode changes followed by setpoint edits, repeated downloads, rapid start/stop loops)
– Pace: bursts of commands, retry storms, “workarounds” that bypass standard steps
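The time and place baselines above can be sketched as a per-role lookup that flags drift rather than "maliciousness." Role names, shift hours, and asset names are invented for illustration:

```python
# Sketch: "safe normal" per role; alert on drift, not on signatures.
# Roles, shift hours, and asset names are illustrative.
BASELINES = {
    "contractor": {"hours": range(8, 17),
                   "assets": {"hmi_line_1", "hmi_line_2"}},
    "engineer":   {"hours": range(6, 22),
                   "assets": {"eng_ws", "hmi_line_1", "hmi_line_2", "plc_line_1"}},
}

def drift_alerts(role: str, hour: int, asset: str) -> list:
    baseline = BASELINES[role]
    alerts = []
    if hour not in baseline["hours"]:
        alerts.append("out-of-shift activity")
    if asset not in baseline["assets"]:
        alerts.append("new asset touchpoint")
    return alerts

# A contractor on a SIS-adjacent system at 02:00: valid login, allowed
# command, and still two deviations from normal.
print(drift_alerts("contractor", hour=2, asset="sis_eng_ws"))
```

Sequence and pace baselines follow the same pattern, just keyed on command order and rate instead of hour and asset.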

What this enables:
– Detection of insider risk without relying on signatures
– Fewer false positives because “normal” is defined by your plant’s reality
– Earlier intervention before a deviation becomes a safety or downtime event

If your OT monitoring mostly looks for known indicators of compromise, you are missing the events that look like routine work.

Question for OT security and operations leaders: do you have role-based behavioral baselines, or are you still alerting on isolated events?

Maritime OT security isn’t “remote OT with worse Wi‑Fi” — it’s a moving, intermittently connected supply chain

Contrarian take: If your maritime OT strategy starts with patch cadence and endpoint agents, you’re already behind.

Ships, offshore platforms, and port equipment don’t behave like always-on plants.
They run with:
– Long offline windows between port calls and stable links
– Satellite bandwidth constraints and high latency
– Third-party vendor access across multiple owners and charterers
– Safety-critical systems where “just patch it” is not a plan

That combination creates invisible exposure: configuration drift, unverified vendor actions, and monitoring gaps that only surface after the vessel reconnects.

What to design for instead:
1) Disconnected-by-default controls
Local logging, local detection, local time sync, and store-and-forward telemetry
2) Vendor trust boundaries
Brokered access, least privilege by task, session recording, and break-glass workflows
3) Provable state while offline
Baselines, signed change packages, asset identity, and tamper-evident logs
4) Risk-based maintenance windows
Patch only when it’s safe, testable, and operationally feasible; compensate with segmentation and allowlisting
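The store-and-forward telemetry pattern in (1) is simple to sketch: everything lands in a local buffer first and is forwarded only when the link is up. The class and event strings are illustrative, not a real agent's API:

```python
# Sketch of store-and-forward telemetry: record locally while offline,
# flush when connectivity returns. Event strings are illustrative.
class TelemetryBuffer:
    def __init__(self):
        self.pending = []

    def record(self, event: str):
        self.pending.append(event)  # always lands locally first

    def flush(self, link_up: bool) -> list:
        """Forward everything on reconnect; otherwise keep buffering."""
        if not link_up:
            return []
        sent, self.pending = self.pending, []
        return sent

buf = TelemetryBuffer()
buf.record("auth: vendor login on nav console")
buf.record("config: ballast controller setpoint change")
print(buf.flush(link_up=False))  # still offline: nothing sent, events retained
print(buf.flush(link_up=True))   # both events forwarded on reconnect
```

A production version would also need bounded storage and tamper-evident records, per point (3), but the ordering is the key decision: local first, forward later.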

Maritime OT security is less about perfect visibility and more about maintaining safety and assurance when connectivity disappears.

If you’re building a maritime OT program, start with: What must still be true when the vessel is offline?