Human Node is a research-driven cognitive defence layer that identifies how targeted cyber pressure breaks human decision-making and puts organisations at risk. We map these failure points and provide corrective controls before they turn into incidents or material impact.
OSº1 – Operating System for Human Cognitive Defence
The New Reality – Why it matters
The human layer is now the primary breach vector. AI has given attackers the ability to shape perceptions, influence judgement, and steer decisions in ways organisations are not built to detect. Their objective is straightforward: exploit the path of least resistance, often through unsuspecting employees, to obtain valid credentials.
One shifted cue, one fabricated sense of urgency, one misread signal is enough to compromise an entire organisation. Cyber risk has become a cognitive problem; defence must move there too.
Cognitive Defence is the Solution
Fewer points of human failure, and a decision surface that is harder to exploit.
Our focus is to understand and stabilise the human mind under attack conditions. We do this by treating human decision-making as a layered system: surface behaviours, deeper cognitive patterns, and the pressures that shape them. Attacks exploit these layers in sequence; defence must model them the same way.
Illuminate
Reveal the patterns, pressures, and blind spots attackers exploit, mapping the cognitive structure behind decisions.
Model
Translate these internal and external forces into a measurable risk layer that shows where decision failure is likely to occur.
Reinforce
Provide precise, model-driven guidance that strengthens the decision layers most vulnerable to manipulation.
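To make the three stages concrete, here is a minimal illustrative sketch in Python. The layer names (surface behaviour, cognitive pattern, pressure) follow the description above; the decision points, scoring rule, and suggested controls are hypothetical and not drawn from any Human Node engine.

```python
# Illustrative sketch of the Illuminate -> Model -> Reinforce loop.
# Everything below is hypothetical; it only mirrors the structure described above.
from dataclasses import dataclass


@dataclass
class DecisionPoint:
    name: str
    surface_behaviour: str   # what is observable (e.g. same-day approvals)
    cognitive_pattern: str   # the deeper habit being exploited
    pressure: str            # the pressure state attackers apply
    pressure_level: float    # 0-1, how intense that pressure typically is


def illuminate(observations: list[dict]) -> list[DecisionPoint]:
    """Map raw observations onto the layered decision structure."""
    return [DecisionPoint(**obs) for obs in observations]


def model(points: list[DecisionPoint]) -> dict[str, float]:
    """Turn the mapped layers into a measurable risk figure per decision."""
    return {p.name: round(p.pressure_level, 2) for p in points}


def reinforce(risk: dict[str, float], threshold: float = 0.6) -> list[str]:
    """Recommend controls only where decision failure is most likely."""
    return [f"add a verification step to '{name}'"
            for name, score in risk.items() if score >= threshold]


observations = [
    {"name": "payment release", "surface_behaviour": "same-day approvals",
     "cognitive_pattern": "deference to authority", "pressure": "urgency",
     "pressure_level": 0.8},
    {"name": "file sharing", "surface_behaviour": "bulk link forwarding",
     "cognitive_pattern": "routine compliance", "pressure": "routine",
     "pressure_level": 0.4},
]
print(reinforce(model(illuminate(observations))))
# ["add a verification step to 'payment release'"]
```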
Model intelligence as the cornerstone
Our BTO model fuses two independent vantage points to build a complete picture of cyber risk. We merge these into a single cognitive map that shows where human failure is most likely to be induced and how to prevent it.
Internal intelligence
Behavioural patterns, decision pressure, operational patterns that shape human exposure.
External intelligence
Signals about how attackers are evolving pressure tactics: AI-enhanced persuasion, authority simulation, timing manipulation, financial urgency patterns, and behavioural exploitation techniques used across your sector.
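As an illustration of how the two streams might be fused, the sketch below assumes each can be reduced to a 0-1 pressure score per decision point. The decision points, scores, and fusion rule are invented for this example and do not describe the BTO model itself.

```python
# Hypothetical fusion of internal and external intelligence into one exposure map.
INTERNAL = {  # internal signals: behavioural / operational pressure (illustrative)
    "invoice approval": 0.7,
    "credential reset": 0.4,
    "vendor onboarding": 0.6,
}
EXTERNAL = {  # external signals: attacker pressure on that decision (illustrative)
    "invoice approval": 0.9,   # financial-urgency deepfakes trending in the sector
    "credential reset": 0.8,   # AI spear phishing against helpdesk flows
    "vendor onboarding": 0.3,
}


def fuse(internal: dict[str, float], external: dict[str, float]) -> dict[str, float]:
    """Combine the two vantage points; exposure is highest where both agree."""
    return {k: round(internal[k] * external.get(k, 0.0), 2) for k in internal}


cognitive_map = fuse(INTERNAL, EXTERNAL)
for decision, exposure in sorted(cognitive_map.items(), key=lambda kv: -kv[1]):
    print(f"{decision}: {exposure}")
# invoice approval: 0.63, credential reset: 0.32, vendor onboarding: 0.18
```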
Introducing DHOS
The Deterministic Human Operating System. DHOS models how people make decisions under cognitive pressure. It establishes behavioural baselines, detects drift, fuses AI-driven threat signals, and predicts failure before it occurs.
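DHOS itself is not public, but the baseline-and-drift idea can be sketched in a few lines. The example below assumes a baseline can be summarised as a rolling window over a single decision metric (a hypothetical approval latency) with drift flagged by a z-score threshold; a real system would fuse many more signals.

```python
# Minimal baseline-and-drift sketch; the metric, window size, and threshold
# are assumptions made for this example, not DHOS parameters.
from dataclasses import dataclass, field
from statistics import mean, stdev


@dataclass
class BehaviouralBaseline:
    """Rolling baseline of one behavioural metric (e.g. approval latency in minutes)."""
    window: int = 50
    samples: list[float] = field(default_factory=list)

    def update(self, value: float) -> None:
        self.samples.append(value)
        if len(self.samples) > self.window:
            self.samples.pop(0)

    def drift_score(self, value: float) -> float:
        """Z-score of a new observation against the stored baseline."""
        if len(self.samples) < 2:
            return 0.0
        sd = stdev(self.samples)
        return 0.0 if sd == 0 else abs(value - mean(self.samples)) / sd


baseline = BehaviouralBaseline()
for latency in [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0]:
    baseline.update(latency)

# An abnormally fast approval under pressure drifts far from the baseline.
if baseline.drift_score(1.2) > 3.0:
    print("drift detected: escalate for review")
```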
The Deliverable – Auditable Certainty
Board-level metrics: auditable proof of due diligence for Regulators (ICO, NCSC), Insurers, and Partners. Move from guessing to governing.
We work with organisations where human error is a structural liability.
Regulated Infrastructure
Organisations operating under strict governance mandates (UK GDPR, DORA, NIS2) where the cost of a breach extends beyond finance into regulatory sanctions and operational paralysis.
High-Trust Supply Chains
Entities integrated into critical supply networks (Automotive, Defence, Food) where a single human failure can trigger cascading liabilities across multiple partners.
Capital Allocators & Insurers
Firms that need to price risk accurately. We provide the data layer required to underwrite cyber liability with actuarial precision, moving from guesswork to quantified exposure.
Strategic Leadership
Boards and C-suites that recognise cybersecurity is a governance function. Leaders who demand the same predictive certainty for human risk that they expect for financial audit.
We partner with organisations building structural resilience.
Analysis – Threat Behaviour Matrix of the cyber landscape
Every AI-driven attack applies a different kind of pressure on human judgement. People don't fail for the same reasons — some break under urgency, some under overload, some under uncertainty, and some under routine. This is the real map of human cyber-risk.
To capture this, we've built the Threat Behaviour Matrix — a simple structural model that shows how each modern attack vector exploits specific behavioural pressure states.
| ai attack vector ↓ | trust | overload | urgency | fatigue | routine | speed | uncertainty |
|---|---|---|---|---|---|---|---|
| ai spear phishing | ✓ | ✓ | ✓ | – | ✓ | – | ✓ |
| deepfake voice | ✓ | – | ✓ | – | – | ✓ | ✓ |
| adaptive bots | – | ✓ | ✓ | ✓ | – | ✓ | ✓ |
| supply-chain | ✓ | ✓ | ✓ | – | ✓ | – | ✓ |
| lateral movement | – | ✓ | – | ✓ | ✓ | – | ✓ |
| financial deepfake | ✓ | – | ✓ | – | – | ✓ | – |
| multichannel | – | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
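One way to use the matrix in practice is to transcribe it into a machine-readable lookup. The sketch below copies the rows above verbatim; the helper function is illustrative only and not part of any Human Node tooling.

```python
# The Threat Behaviour Matrix as a lookup from attack vector to the
# pressure states it exploits (transcribed from the table above).
THREAT_BEHAVIOUR_MATRIX: dict[str, set[str]] = {
    "ai spear phishing":  {"trust", "overload", "urgency", "routine", "uncertainty"},
    "deepfake voice":     {"trust", "urgency", "speed", "uncertainty"},
    "adaptive bots":      {"overload", "urgency", "fatigue", "speed", "uncertainty"},
    "supply-chain":       {"trust", "overload", "urgency", "routine", "uncertainty"},
    "lateral movement":   {"overload", "fatigue", "routine", "uncertainty"},
    "financial deepfake": {"trust", "urgency", "speed"},
    "multichannel":       {"overload", "urgency", "fatigue", "routine", "speed", "uncertainty"},
}


def vectors_exploiting(pressure_state: str) -> list[str]:
    """Return every attack vector that leans on a given pressure state."""
    return [vector for vector, states in THREAT_BEHAVIOUR_MATRIX.items()
            if pressure_state in states]


print(vectors_exploiting("fatigue"))
# ['adaptive bots', 'lateral movement', 'multichannel']
```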
Our work is grounded in cognitive defence, adversarial modelling and real-world threat behaviour. We maintain an internal research programme and publish selective insights, without exposing our engines.
Core research areas
1. Cognitive pressure modelling
We study how attention, interpretation, and judgement break down under different forms of digital pressure — speed, uncertainty, overload, impersonation, and AI-driven manipulation. This research supports the development of stable behavioural baselines (BVO).
2. Threat-behaviour interaction
We analyse how modern attack patterns influence human perception and decision-making. This includes AI-generated impersonation, adaptive phishing bots, lateral pressure tactics, and multi-channel deception. These insights shape our external intelligence layer and our threat-behaviour ontology.
3. Organisational decision dynamics
We investigate how culture, workflow, and operational pressure shape breach probability. Our focus is on quantifying the cost of inaction and identifying systemic weak points that technical controls cannot see.
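To illustrate what "quantifying the cost of inaction" can mean in practice, a standard annualised loss expectancy calculation (not necessarily the model we use) looks like this, with entirely hypothetical figures:

```python
# Standard annualised loss expectancy (ALE) arithmetic with hypothetical figures,
# shown only to illustrate the idea of a quantified cost of inaction.
single_loss_expectancy = 450_000      # cost of one human-layer breach (GBP), assumed
occurrence_rate_uncontrolled = 0.35   # expected breaches per year without controls, assumed
occurrence_rate_controlled = 0.10     # expected breaches per year with controls, assumed

cost_of_inaction = single_loss_expectancy * (
    occurrence_rate_uncontrolled - occurrence_rate_controlled
)
print(f"Annualised cost of inaction: £{cost_of_inaction:,.0f}")  # £112,500
```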
Our aim
To transform human cyber-risk from an unmeasured assumption into a predictable, governable system based on science, not guesswork.
We are a hybrid team of Strategic Cyber Advisors (SCAs), CISOs, Data Architects, and Researchers. Our methodology was born from leading post-breach remediations (DFIR), where we witnessed C-suite teams, forensics firms, and insurers fail to address the human root cause of the breach.
We built the system we needed — one that moves beyond generic content to model the cause of human failure.