BeyondTrust RS/PRA command injection (CVE-2026-1731): why Zero Trust is necessary but not sufficient for remote support tools

Zero Trust won’t save you from a vulnerable admin tool by itself.

Ask one question:
If this box is compromised, what’s the maximum damage it can do in 10 minutes?

A command injection in a privileged remote support platform collapses the trust boundary. The “helpdesk tool” becomes a direct path to:
– Immediate privileged code execution
– Credential access and session hijack potential
– Fast lateral movement across managed endpoints

Zero Trust helps only if it is translated into hard controls that shrink blast radius:
– Least privilege for the platform service accounts and integrations
– Network segmentation so the tool cannot reach everything by default
– Just-in-time access for technicians and elevated actions
– Isolation: dedicated jump hosts, separate admin planes, restricted egress
– Application allowlisting and controlled script execution
– Session recording and strong audit logs that cannot be tampered with
– Compensating monitoring: alert on unusual commands, new tool binaries, and rapid host-to-host pivots
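
On that last point, here is a minimal sketch of what “alert on rapid host-to-host pivots” can look like, written in Python against a made-up session-event shape of (technician, target_host, timestamp). The window and threshold are illustrative; real field names depend on your platform’s log export.

from collections import defaultdict
from datetime import timedelta

PIVOT_WINDOW = timedelta(minutes=10)   # illustrative window
PIVOT_THRESHOLD = 5                    # distinct hosts reached inside the window

def flag_rapid_pivots(events):
    """Yield technicians whose support sessions touch unusually many hosts in a short window.

    events: iterable of (technician, target_host, timestamp) tuples.
    """
    by_tech = defaultdict(list)
    for tech, host, ts in sorted(events, key=lambda e: e[2]):
        by_tech[tech].append((ts, host))

    for tech, touches in by_tech.items():
        for i, (start_ts, _) in enumerate(touches):
            window_hosts = {h for ts, h in touches[i:] if ts - start_ts <= PIVOT_WINDOW}
            if len(window_hosts) >= PIVOT_THRESHOLD:
                yield tech, start_ts, sorted(window_hosts)
                break  # one alert per technician is enough for this sketch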

Remote support is operationally critical. Treat it like a Tier 0 asset.
Design it so compromise is survivable, not catastrophic.

Poland’s energy-sector cyber incident: the overlooked OT/ICS gaps that still break most “enterprise” security programs

Contrarian take: If your OT security plan looks like your IT plan (patch faster, add more agents, buy another SIEM), you’re probably increasing risk.

Critical infrastructure incidents rarely fail because of exotic malware.
They fail because IT-first controls don’t translate to OT realities: uptime constraints, legacy protocols, safety interlocks, and always-on vendor access.

Where most “enterprise security” programs still break in OT/ICS:

1) Asset visibility that stops at the switch
If you can’t answer “what is this PLC/HMI, what talks to it, and what would break if it changes,” you’re operating blind.

2) Remote access governance built for convenience, not safety
Shared vendor accounts, always-on VPNs, no session recording, no time bounds, no approvals. This is the common entry point.

3) Segmentation designed for org charts, not process safety
Flat networks and dual-homed boxes turn a small intrusion into plant-wide impact. Segment by function and consequence, then control the conduits.

4) Monitoring that can’t see OT protocols
If telemetry is only Windows logs and SIEM alerts, you’ll miss the real story on Modbus, DNP3, IEC 60870-5-104, OPC, and proprietary vendor traffic (a minimal inspection example follows this list).

5) Patch expectations that ignore outage windows and certification
In OT, “just patch” can equal downtime. Compensating controls and risk-based maintenance matter.
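
To make gap #4 concrete: protocol-aware monitoring means looking at function codes, not just flows. A minimal sketch in Python using only the standard Modbus/TCP header layout; it is a toy inspector to illustrate the idea, not a substitute for a purpose-built OT monitoring sensor.

import struct

# Modbus function codes that change process state (writes).
WRITE_FUNCTION_CODES = {5, 6, 15, 16}  # write coil/register, write multiple coils/registers

def is_modbus_write(frame: bytes) -> bool:
    """Return True if a Modbus/TCP frame carries a write function code.

    Frame layout (MBAP header + PDU):
      transaction id (2 bytes) | protocol id (2) | length (2) | unit id (1) | function code (1) | data...
    """
    if len(frame) < 8:
        return False
    _, proto_id, _, _, func = struct.unpack(">HHHBB", frame[:8])
    return proto_id == 0 and func in WRITE_FUNCTION_CODES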

If you lead security or build products for critical infrastructure: start with asset inventory, remote access, and safety-driven segmentation. Reduce risk without disrupting operations.

What OT/ICS gap do you see most often in the field?

Just‑In‑Time, time‑bound access for OT: the fastest way to cut vendor and admin risk without slowing operations

The goal isn’t “least privilege” on paper. It’s least time.

If an account can exist forever, it will be used forever (and eventually abused).

In OT environments, persistent accounts and always‑on remote access are still common. They also show up repeatedly as root causes in incidents:
– Shared vendor logins that never expire
– Standing admin rights “just in case”
– Remote tunnels left open after maintenance
– Accounts that outlive the contract they were created for

Just‑In‑Time (JIT) + time‑bound access changes the default:
Access is requested, approved, logged, and automatically revoked.
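
Conceptually, the record behind that workflow is small. A minimal sketch in Python; the field names, structure, and default four-hour window are illustrative, not taken from any specific PAM or remote access product.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AccessGrant:
    account: str
    asset: str          # scoped to a specific asset/task, not "the whole site"
    approved_by: str
    expires_at: datetime

    def is_active(self, now: datetime | None = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return now < self.expires_at

def grant_access(account: str, asset: str, approver: str, hours: int = 4) -> AccessGrant:
    """Create a grant that expires automatically; revocation is the default, not a cleanup task."""
    return AccessGrant(
        account=account,
        asset=asset,
        approved_by=approver,
        expires_at=datetime.now(timezone.utc) + timedelta(hours=hours),
    )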

What you gain immediately:
– Smaller blast radius when credentials are exposed
– Clear audit trails for who accessed what, when, and why
– Faster offboarding for vendors and rotating staff
– Fewer exceptions that turn into permanent backdoors

The key is designing around OT realities:
– Support urgent break/fix with pre-approved workflows
– Time windows aligned to maintenance shifts
– Offline/limited-connectivity options where needed
– Access that’s scoped to assets and tasks, not “the whole site”

If you’re still managing vendor access with permanent accounts and manual cleanup, JIT is one of the highest-impact controls you can deploy without slowing operations.

Where are persistent accounts still hiding in your OT environment today?

Virtual patching isn’t a fix — it’s a risk-managed bridge (if you treat it like one)

Most teams use virtual patching as a comfort blanket.

In OT, you often cannot patch immediately. Uptime commitments, safety certification, vendor approvals, and fragile legacy stacks make “just patch it” unrealistic.

Virtual patching can be a smart short-term control, but only if you treat it like a bridge, not a destination.

What it actually reduces:
– Exploitability of known vulnerabilities by blocking specific traffic patterns or behaviors
– Exposure window while you coordinate downtime, testing, and vendor support

What it does not fix:
– The vulnerable code still exists
– Local exploitation paths, misconfigurations, and unknown variants may remain
– Asset and protocol blind spots can turn into false confidence

Disciplined virtual patching looks like:
1) Define an expiration date and an owner (when does “temporary” end?)
2) Specify coverage: which CVEs, which assets, which protocol functions, which zones
3) Validate with proof, not assumptions: replay exploit attempts, run vulnerability scans where safe, confirm logs/alerts, verify rule hit rates
4) Run a parallel plan to remove it: patch path, compensating controls, maintenance window, rollback plan
5) Reassess after changes: new firmware, new routes, new vendors, new threats
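
One way to keep that discipline visible is to track every virtual patch as a record with an owner and an expiry, and to report on anything overdue. A minimal sketch in Python; the fields mirror the checklist above and are illustrative only.

from dataclasses import dataclass
from datetime import date

@dataclass
class VirtualPatch:
    rule_id: str
    cves: list[str]
    assets: list[str]        # which zones/assets the rule actually covers
    owner: str               # who is accountable for removing it
    expires: date            # when "temporary" ends
    validated: bool = False  # proof: replayed exploit, confirmed alerts/hit rates
    removal_plan: str = ""   # patch path, maintenance window, rollback

def overdue(patches: list[VirtualPatch], today: date | None = None) -> list[VirtualPatch]:
    """Return virtual patches past their expiration date, i.e. 'permanent temporary security'."""
    today = today or date.today()
    return [p for p in patches if p.expires < today]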

If you can’t explain what your virtual patch blocks, how you tested it, and when it will be removed, you have not reduced risk. You’ve created permanent temporary security.

How are you validating virtual patching effectiveness in your OT environment?

Stop chasing “bad actors” in OT: baseline behavior to catch “normal” actions at the wrong time, place, or sequence

The most dangerous OT insider events won’t trip alerts because nothing looks “malicious.”

In many incidents, the user is legitimate:
Operators, contractors, engineers.
Credentials are valid.
Commands are allowed.

The risk is in small deviations that create big safety and uptime impact.

Instead of hunting for “bad actors,” define safe normal per role and process, then alert on drift.

What to baseline in OT:
– Time: out-of-shift changes, weekend maintenance that wasn’t scheduled
– Place: new asset touchpoints (a contractor suddenly interacting with SIS-adjacent systems)
– Sequence: unusual command chains (mode changes followed by setpoint edits, repeated downloads, rapid start/stop loops)
– Pace: bursts of commands, retry storms, “workarounds” that bypass standard steps
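
As a sketch of what “alert on drift” means in code, here is a minimal Python check covering three of those dimensions (time, place, pace). The roles, assets, and thresholds are placeholders you would replace with your plant’s reality.

from datetime import datetime

# Illustrative per-role baseline; values are assumptions, not product defaults.
BASELINES = {
    "contractor": {
        "shift_hours": range(7, 19),          # 07:00-18:59 local
        "allowed_assets": {"PLC-12", "HMI-03"},
        "max_commands_per_min": 20,
    },
}

def drift_reasons(role: str, asset: str, ts: datetime, cmds_last_minute: int) -> list[str]:
    """Return which baseline dimensions this action violates (time, place, pace)."""
    baseline = BASELINES.get(role)
    if baseline is None:
        return ["no baseline defined for role"]
    reasons = []
    if ts.hour not in baseline["shift_hours"]:
        reasons.append("out-of-shift activity")
    if asset not in baseline["allowed_assets"]:
        reasons.append("new asset touchpoint")
    if cmds_last_minute > baseline["max_commands_per_min"]:
        reasons.append("command burst")
    return reasons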

What this enables:
– Detection of insider risk without relying on signatures
– Fewer false positives because “normal” is defined by your plant’s reality
– Earlier intervention before a deviation becomes a safety or downtime event

If your OT monitoring mostly looks for known indicators of compromise, you are missing the events that look like routine work.

Question for OT security and operations leaders: do you have role-based behavioral baselines, or are you still alerting on isolated events?

Maritime OT security isn’t “remote OT with worse Wi‑Fi” — it’s a moving, intermittently connected supply chain

Contrarian take: If your maritime OT strategy starts with patch cadence and endpoint agents, you’re already behind.

Ships, offshore platforms, and port equipment don’t behave like always-on plants.
They run with:
– Long offline windows between port calls and stable links
– Satellite bandwidth constraints and high latency
– Third-party vendor access across multiple owners and charterers
– Safety-critical systems where “just patch it” is not a plan

That combination creates invisible exposure: configuration drift, unverified vendor actions, and monitoring gaps that only surface after the vessel reconnects.

What to design for instead:
1) Disconnected-by-default controls
Local logging, local detection, local time sync, and store-and-forward telemetry (sketched after this list)
2) Vendor trust boundaries
Brokered access, least privilege by task, session recording, and break-glass workflows
3) Provable state while offline
Baselines, signed change packages, asset identity, and tamper-evident logs
4) Risk-based maintenance windows
Patch only when it’s safe, testable, and operationally feasible; compensate with segmentation and allowlisting
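
For the store-and-forward piece, the core idea fits in a few lines. A minimal Python sketch, assuming an uplink send() callable supplied by your platform and an in-memory queue; a real shipboard implementation would persist events to disk so they survive a reboot.

import json
from collections import deque

class StoreAndForwardBuffer:
    """Log locally, flush when the link returns."""

    def __init__(self, send, max_events: int = 10_000):
        self._send = send                       # e.g. an uplink client provided by the platform
        self._queue = deque(maxlen=max_events)  # oldest events drop first if the link stays down

    def record(self, event: dict) -> None:
        self._queue.append(json.dumps(event))

    def flush(self) -> int:
        """Attempt delivery; stop at the first failure and keep the rest for later."""
        sent = 0
        while self._queue:
            payload = self._queue[0]
            try:
                self._send(payload)
            except OSError:
                break                           # link dropped again; retry on next flush
            self._queue.popleft()
            sent += 1
        return sent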

Maritime OT security is less about perfect visibility and more about maintaining safety and assurance when connectivity disappears.

If you’re building a maritime OT program, start with: What must still be true when the vessel is offline?

Stop looking for “bad actors”—use behavioral baselines to catch insider risk in OT before it becomes downtime

Most insider detection fails because it hunts intent.
OT needs to hunt anomalies that predict impact.

In industrial environments, “insiders” are often trusted technicians, engineers, and contractors.
Their actions look legitimate until one small change turns into:
– an unsafe state
– a quality excursion
– unplanned downtime

That’s why the winning question isn’t “who is malicious?”
It’s: “What behavior would cause an unsafe state if repeated at scale?”

Behavioral baselines help you answer that without relying on malware signatures or perfect asset inventories.
You’re not trying to label a person.
You’re watching for deviations in:
– what changed
– when it changed
– from where it changed
– how often it changed
– which systems are being touched outside normal patterns

Examples of high-signal OT deviations:
– new engineering workstation talking to a controller it never touched before
– a contractor account executing the same write operation across multiple PLCs (a sketch of this check follows the list)
– after-hours logic changes followed by disabled alarms or altered setpoints
– a burst of “normal” commands at an abnormal rate
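
The contractor fan-out example is easy to turn into a heuristic. A minimal Python sketch over (account, controller, operation, timestamp) events; the event shape, window, and threshold are assumptions, not defaults from any monitoring product.

from collections import defaultdict
from datetime import timedelta

FAN_OUT_WINDOW = timedelta(minutes=30)
FAN_OUT_THRESHOLD = 3   # distinct controllers receiving the same write from one account

def fan_out_alerts(events):
    """Flag accounts repeating the same write operation across multiple controllers.

    events: iterable of (account, controller, operation, timestamp) tuples.
    May fire more than once per account as more controllers are touched.
    """
    seen = defaultdict(list)   # (account, operation) -> [(timestamp, controller), ...]
    for account, controller, operation, ts in sorted(events, key=lambda e: e[3]):
        key = (account, operation)
        seen[key].append((ts, controller))
        recent = {c for t, c in seen[key] if ts - t <= FAN_OUT_WINDOW}
        if len(recent) >= FAN_OUT_THRESHOLD:
            yield account, operation, sorted(recent)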

Outcome: earlier detection, fewer escalations, and interventions before production feels it.

If you could baseline one behavior in your OT environment to reduce risk fast, what would it be?

Ships are the harshest edge-case for OT: how salt, satellite links, and vendor handoffs create remote-by-default attack paths

Contrarian take: if your OT security assumes stable connectivity and on-site admins, it’s not a security program, it’s a lab demo.

Maritime OT lives in the worst possible assumptions:
– Intermittent satellite links and high latency
– Tiny patch windows tied to port calls and class rules
– Vendors doing remote support through shifting handoffs (ship crew, management company, OEM, integrator)
– Physical exposure: shared spaces, swapped laptops, removable media, and “temporary” networks that become permanent

That combination creates remote-by-default attack paths:
A single weak credential, a poorly controlled remote session, or an untracked engineering workstation can outlive the voyage.

A sea-ready baseline looks different:
1) Design for comms failure: local logging, local detection, and store-and-forward telemetry
2) Treat remote access as a product: per-vendor isolation, just-in-time access, recorded sessions, and strong device identity
3) Patch like aviation: plan by voyage/port cycle, pre-stage updates, verify by checksum (see the sketch after this list), and prove rollback
4) Control the engineering toolchain: signed configs, golden images, USB governance, and offline recovery media
5) Clarify accountability at handoff points: who owns credentials, approvals, and emergency access when the link drops
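
The checksum step in point 3 is the cheapest of these to automate. A minimal Python sketch, assuming the expected SHA-256 hash arrives out of band (for example in the voyage work package): apply the update only if this returns True.

import hashlib

def verify_prestaged_update(path: str, expected_sha256: str, chunk_size: int = 1 << 20) -> bool:
    """Verify a pre-staged update file against the checksum published ashore."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest().lower() == expected_sha256.lower()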

If you build for the ship, you’ll usually harden every remote industrial site.

What’s the biggest OT security failure mode you’ve seen offshore: connectivity, patching, third-party access, or physical exposure?

Securing oil & gas IoT wireless (LoRaWAN, LTE, NB-IoT, Zigbee, Wi‑Fi, BLE): a threat model + control map by layer

Contrarian take: choosing the “most secure” wireless standard won’t save you.

Most breaches aren’t about the radio name. They’re about weak identity, mismanaged keys, and poor segmentation across device, network, and cloud.

If you’re deploying LoRaWAN, LTE/NB‑IoT, Zigbee, Wi‑Fi, or BLE in oil & gas, a faster way to make decisions is to map:
1) Typical attack paths per wireless option
2) Minimum viable controls per layer

Control map by layer (works across all of the above):

Device layer
– Unique device identity (no shared credentials)
– Secure boot + signed firmware; locked debug ports
– Key storage in secure element/TPM where possible
– OTA updates with rollback and provenance

Radio / link layer
– Strong join/onboarding; ban default keys and weak pairing modes
– Replay protection and message integrity enabled
– Rotate keys; define key ownership (vendor vs operator) and lifecycle

Network layer
– Segment OT/IoT from enterprise and from each other (zone/asset based)
– Private APN/VPN for cellular; gateways isolated and hardened
– Least-privilege routing; deny-by-default; egress controls

Cloud / platform layer
– Per-device authN/authZ; short-lived tokens; mutual TLS where feasible (sketched after the map)
– Secrets management, KMS/HSM, and audit logging
– Tight IAM, data minimization, and secure API gateways

Operations
– Asset inventory and certificate/key rotation schedule
– Detection for rogue gateways/devices, unusual join rates, and data exfil
– Incident playbooks that include field swap, rekey, and revocation
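
To ground the “mutual TLS where feasible” line in the cloud/platform layer, here is a minimal Python sketch of a platform-side TLS context that refuses devices without a valid certificate, using only the standard library. The certificate paths and the device CA are placeholders.

import ssl

def build_mutual_tls_context(server_cert: str, server_key: str, device_ca: str) -> ssl.SSLContext:
    """Build a TLS server context that requires a client (device) certificate.

    device_ca is the operator- or vendor-run CA that issues per-device certificates.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.load_cert_chain(certfile=server_cert, keyfile=server_key)
    ctx.load_verify_locations(cafile=device_ca)
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject devices without a valid certificate
    return ctx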

Procurement should spend less time asking “Which wireless is most secure?” and more time asking:
Who provisions identity? How are keys rotated and revoked? How is segmentation enforced end-to-end?

If you want, I can share a one-page threat model + control checklist by radio type and layer.

EDR for air-gapped ICS: a requirements-first selection checklist (and why “agent-based” is the wrong starting point)

Stop asking “Which EDR is best?” Start asking “Which EDR can survive our maintenance windows, offline updates, and safety requirements without creating new downtime risk?”

Air-gapped doesn’t mean risk-free. It means different failure modes:
– Limited connectivity
– Strict change control
– Safety-critical uptime

In OT, “agent-based vs agentless” is the wrong first filter. Start with requirements that match plant reality, then evaluate architectures.

A requirements-first checklist for air-gapped ICS EDR:
1) Deployment model: can it be installed, approved, and rolled back within change control?
2) Offline updates: signed packages (see the sketch after this checklist), deterministic upgrades, no cloud dependency, clear SBOM and versioning.
3) Resource impact: CPU/RAM/disk caps, no surprise scans, predictable scheduling around maintenance windows.
4) Telemetry in an offline world: local buffering, store-and-forward, export via removable media, and clear data formats.
5) Forensics readiness: timeline and process tree visibility, integrity of logs, evidence handling that fits your procedures.
6) Recovery and containment: safe isolation actions, kill/deny options that won’t trip safety systems or stop critical processes.
7) Coverage of OT endpoints: legacy Windows, embedded boxes, HMIs, engineering workstations, plus vendor support lifecycles.
8) Auditability: repeatable reporting, configuration drift detection, and approvals traceability.
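
For point 2, signature verification of an offline package is straightforward to prove during evaluation. A minimal Python sketch assuming the third-party cryptography package and an Ed25519 vendor signing key; the actual signing scheme is whatever your EDR vendor documents.

# Assumes: pip-installed 'cryptography' package; vendor_pubkey is the raw 32-byte Ed25519 public key
# distributed through a trusted offline channel.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def verify_update_package(package: bytes, signature: bytes, vendor_pubkey: bytes) -> bool:
    """Return True only if the package was signed by the vendor key we trust offline."""
    try:
        Ed25519PublicKey.from_public_bytes(vendor_pubkey).verify(signature, package)
        return True
    except InvalidSignature:
        return False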

If the tool assumes always-on connectivity, frequent updates, or “we’ll tune it later,” it’s not OT-ready.

Select the EDR that fits the plant, not the plant that fits the EDR.