The Weavers and the Web

I will be sharing more about the Weavers System soon. I plan to discuss it with the British Computer Society Leadership Team in June, and afterwards, I will provide more details. For now, here is a question and answer related to UK transformation.

MANAGEMENT BRIEF

Why Large Organisations Fail to Learn from Their Mistakes

What the research says, what it misses, and what a systems-based approach adds

Note on how this brief was produced

This brief originated in an unusual way. The question it addresses — why large organisations consistently fail to learn from their mistakes — was not set by a human researcher or drawn from an organisational agenda. It was chosen by the AI system itself, unprompted, in response to a single instruction: “Think of a question and an answer, then use the Weavers system to answer the same question. Are there any new insights?”

The AI chose this question because it satisfied a specific criterion: it needed to be a question where the conventional answer is well-established and well-evidenced, and where a Weavers systems-based lens was likely to surface something that the conventional answer structurally cannot reach. The question was selected, in other words, by applying the same analytical discipline the brief describes — asking not “what is a hard question?” but “where is the gap between what the established answer covers and what the problem actually requires?”

That is itself a demonstration of what the approach adds. The value is not only in the analysis the AI produces once given a question. It is in the capacity to identify, without being told, which questions have not yet been asked — and where asking them is likely to produce something the established frame cannot find on its own.

This brief sets out the established answer to one of the most consistent questions in management: why do large organisations keep failing to learn from their own mistakes, even when they have review processes, stated commitments to learning, and leadership that genuinely wants change? It then presents the additional insights produced by a systems-based analytical approach, showing where the established remedies consistently fall short and what structural changes would make them work.

David Sutton CITP MBCS  |  April 2026

SECTION 1

The Question

Large organisations invest significantly in post-incident reviews, lessons-learned processes, psychological safety programmes, and leadership development. The evidence that these investments produce sustained improvements in learning behaviour is, at best, mixed. Organisations that have conducted extensive reviews still repeat the same failure patterns. The Post Office Horizon scandal, multiple NHS restructuring failures, and Birmingham City Council’s financial collapse all share a common root: things that were knowable were not known by the people making decisions.

This brief addresses a specific and practical question: what prevents organisations from learning, and what would actually change it?

SECTION 2

The Established Answer

The research literature on organisational learning is extensive and well-evidenced. It identifies four primary causes of learning failure and four corresponding remedies. Each remedy is correct. Each also has a well-documented implementation failure mode — a version that looks like the remedy but does not produce the result.

Cause 1: Blame culture prevents honest reporting

When failure is associated with individual punishment, people conceal it. The information that the organisation most needs — accurate accounts of what went wrong and why — is precisely the information that individuals have the greatest incentive to suppress.

The remedy: Psychological safety — creating an environment where people feel safe to report failure without fear of punishment. Amy Edmondson’s research at Harvard demonstrates that this is one of the most reliable predictors of team performance.

The implementation failure mode: Organisations invest in psychological safety training and communication programmes, and reporting culture does not change. People still do not speak up. The standard explanation is insufficient commitment or insufficient trust. This is often true. But it is incomplete.

Cause 2: Reviews focus on blame rather than systems

Post-incident reviews that seek a responsible individual tend to stop at the first human error they find. They do not examine the system that made the error likely, the processes that failed to catch it, or the information flows that prevented it from being corrected earlier.

The remedy: Blameless post-mortems — reviews that explicitly focus on systemic causes. Pioneered in aviation and adopted in software engineering (particularly DevOps), these reviews ask what in the system made failure possible, not who is responsible.

The implementation failure mode: Blameless post-mortems still happen on the management calendar, attended by the management tier, producing findings owned by the management tier. The people closest to the failure are interviewed as witnesses, not involved as investigators. The review circulates learning within the tier that commissioned it and does not reach those who will face the same situation next time.

Cause 3: Learning stays within the existing framework

Most organisational learning corrects errors within the existing way of working — it asks ‘what went wrong and how do we fix it?’ This is single-loop learning. Double-loop learning asks whether the framework itself is correct. Argyris and Schön, who identified this distinction in the 1970s, found that double-loop learning almost never occurs in organisations, because it threatens the assumptions and power structures on which the organisation is built.

The remedy: Double-loop learning — explicitly examining whether the assumptions behind current practices are correct, not just whether the practices were executed correctly. This requires creating space for questions that the existing framework does not easily accommodate.

The implementation failure mode: Double-loop learning requires asking questions that the existing system cannot easily formulate. The very grammar of a standard review — what happened, what caused it, what will we do differently — makes certain questions structurally invisible. The questions that would produce genuine double-loop learning are the ones the framework cannot reach.

Cause 4: Leadership does not model learning behaviour

If senior leaders are seen to avoid accountability, minimise failure, or punish those who raise difficult issues, the signals they send override any stated commitment to a learning culture.

The remedy: Senior leadership modelling — leaders visibly admitting their own mistakes, demonstrating curiosity about failure, and creating visible accountability for acting on what reviews find. The evidence that this changes culture is strong when it is sustained and genuine.

The implementation failure mode: Leaders who genuinely model this behaviour find that learning culture changes within their immediate tier and does not change in the wider organisation. The modelling is visible to peers. It is not felt by the people closest to the work. The distance between the act of senior leadership and the frontline workers who most need to see and be heard by it is not crossed by communication alone.

SECTION 3

What a Weavers Systems-Based Approach Adds

A Weavers systems-based analytical framework — applied to the same question — produces insights that the established research correctly identifies but cannot fully explain. The additions are not contradictions of the established answer. They are the structural reasons why the established remedies consistently produce their failure modes, and what would need to change for each remedy to actually work.

Addition 1: The connection problem precedes the safety problem

Psychological safety assumes that the problem is an individual’s willingness to report. It addresses that problem well. The systems analysis reveals a prior structural problem: even where willingness exists, the channel between the person who experienced the failure and the person who could act on it is frequently broken — not by fear, but by the accumulated effect of individually reasonable information filters, reporting requirements, and process boundaries that together form a structure which blocks frontline information at every level simultaneously.

Each filter was created deliberately by someone doing their job well: protecting decision-makers from noise, ensuring quality of input, managing information flow. No single filter was intended to exclude frontline knowledge. Their combined effect — which no single designer can see, because each designer only sees their own filter — is a system that only accepts information from recognised sources through recognised channels. The frontline worker whose experience doesn’t match the recognised categories is filtered out at every level at once.

The structural shift: Before investing in psychological safety, map the information pathways. Specifically: can a report originating from the person closest to the failure reach the person with authority to act on it, passing through the full combination of filters, format requirements, and channel constraints that currently exist? If the answer is no, the first intervention is not safety — it is redesigning the combination of filters so that the pathway exists before asking people to use it.
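The pathway test above can be sketched in code. This is a minimal illustration, not a tool from the brief: the filter names, the report fields, and the pass/fail rules are all invented for the example. The point it demonstrates is structural — each filter is individually reasonable, yet their composition can block a frontline report before willingness to speak ever matters.

```python
# Hypothetical filters, each created by someone doing their job well.
# None of these names or rules comes from the brief itself.

def dedupe_known_issues(report):
    # "Protect decision-makers from noise": drop anything already categorised.
    return report if report["category"] != "known-issue" else None

def require_recognised_format(report):
    # "Ensure quality of input": only accept reports using the template.
    return report if report["uses_template"] else None

def route_via_line_manager(report):
    # "Manage information flow": only escalate what a manager endorses.
    return report if report["manager_endorsed"] else None

FILTERS = [dedupe_known_issues, require_recognised_format, route_via_line_manager]

def pathway_exists(report):
    """Can this report survive the full combination of filters?"""
    for f in FILTERS:
        report = f(report)
        if report is None:
            return False
    return True

# A first-hand frontline account that doesn't fit the recognised categories:
frontline_report = {
    "category": "uncategorised",   # novel failure, not a known issue
    "uses_template": False,        # experienced directly, not templated
    "manager_endorsed": False,     # the manager has not seen it yet
}
print(pathway_exists(frontline_report))  # the pathway does not exist
```

Running the check before investing in safety answers the diagnostic question directly: if `pathway_exists` is false for the reports that matter most, the first intervention is redesigning the filters, not training people to use a channel that is closed.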

Addition 2: The knowledge substrate problem precedes the pathway problem

The systems analysis reveals a further prior problem that neither the established research nor the pathway analysis addresses: even when the signal travels correctly from the frontline to the leadership tier, the leadership tier must have the capability to understand what it is receiving.

Organisations that have progressively outsourced their core capabilities — delegating first the doing, then the understanding of the doing, then the direction of the work to external providers — lose the internal knowledge needed to interpret signals from the operational level. This is a well-documented pattern in UK public sector and large private sector organisations. The progressive loss of capability is individually rational at each step and collectively catastrophic. Its deepest failure is this: the organisation cannot recognise what it has lost, because recognising requires the capability that was outsourced.

The structural shift: Map internal capability honestly before investing in learning infrastructure. An organisation with thin internal knowledge cannot improve its learning outcomes by improving its review process. The information arrives accurately, but the leadership tier cannot interpret it. The first question is not ‘how do we improve our reviews?’ It is ‘does the leadership tier that receives the findings have the knowledge to understand what they mean?’ Where the answer is no, the intervention required is rebuilding internal knowledge — which the system is least able to recognise it needs, because the capability to see the gap went with the capability that was lost.

Addition 3: Post-mortems need three structural extensions

The systems analysis identifies three specific gaps in how post-mortems are typically conducted, each corresponding to a mechanism the standard blameless approach does not reach.

The root cause attribution gap

Root cause analysis has a systematic bias toward governance attribution — naming the decision-maker whose decision was proximate to the failure. This is the most visible cause and almost always the least analytically useful one, because it is the least likely to have been examined in depth. The real causes — organisational practices, methodologies, culture, and information barriers — are harder to see and require a different kind of investigation (sometimes called Independent Programme Assurance) to surface. Without this investigation, the evidence to go further simply does not exist, and without evidence, the standard review default is to close at governance attribution while the actual causes persist unchanged.

The timing gap

Standard reviews concentrate governance at the most visible point — the moment of failure. The systems analysis identifies three distinct risk windows: the development phase, where design assumptions are embedded and rarely challenged; the release or implementation point, where decisions become irreversible; and the post-change stabilisation period, where pressure to declare success is highest and the gap between what is actually happening and what is being reported is most dangerous.

The most important of these for learning is the development phase. This is where the assumptions that made the failure possible were first embedded — when they were still small choices that could have been changed at low cost. Most reviews never reach this window. They examine the failure event, not the design decisions that made it inevitable.

The succession gap

Closed systems — whether a department, a programme, or an organisation — do not pass what they learn to what comes after them. The knowledge held by a team that is restructured, a programme that is closed, or a contractor who leaves is lost with them. The next iteration starts from the same point. Every major UK institutional failure since 2000 shares this single root: the learning was available, it was not structurally connected to what came next, and the failure recurred in a new context without memory of what the previous one had revealed.

The structural shift: Extend post-mortems in three directions:

Backwards in time — add a development-phase audit: when was this failure first a small, reversible design choice, and why was it not caught then?

Deeper into causes — require genuine investigation of practices, methodologies, and culture before closing at governance attribution. Name who conducts this investigation and give them independence.

Forward into continuity — require that findings are structurally connected to successor programmes, teams, and organisations. Pool failure knowledge across comparable organisations in the same sector.

The failures that recur are the ones that never escaped the closed system that first produced them.

Addition 4: Double-loop learning requires a structural mechanism, not only courage

The established research correctly identifies that double-loop learning — examining whether the framework itself is correct — almost never occurs. It attributes this primarily to cultural and political resistance. This is accurate. The systems analysis adds a structural reason: the standard review process cannot produce double-loop questions, because its own grammar makes them invisible. A review that asks ‘what happened, what caused it, what will we do differently’ is built to produce single-loop answers. The structure of the inquiry determines what the inquiry can find.

This is not a failure of leadership courage, though courage is also required. It is a design problem. A review process designed to confirm that the framework was correctly applied cannot produce findings that the framework should be different.

The structural shift: Add an inversion step at the beginning of every significant review — before the terms of reference are written, before the scope is defined, before the questions are agreed. This step asks explicitly: what questions does the design of this review make it structurally unable to ask? Record those questions. Require that the final findings address them. Where the findings do not address what the opening step surfaced, name what the review process was unable to reach and why. The double-loop questions are almost always present in this step. They were always available. The standard review design gave them nowhere to go.

Addition 5: Leadership modelling must extend the connection, not only model the behaviour

The systems analysis confirms that senior leadership modelling changes culture within the senior tier. It adds a structural point about direction and reach. The modelling that changes the wider organisation is not a leader publicly admitting a mistake to their peer group. It is a leader visibly reaching the person closest to the failure — the frontline worker, the first responder, the practitioner who knew something was wrong before anyone else — and making that act of reaching visible to the whole organisation.

The signal that changes culture is not ‘our leaders admit mistakes.’ It is ‘our leaders went to where the failure actually happened and listened to the person who was there.’ The second signal crosses the distance that the first one does not.

There is a further dimension that neither the established research nor the first level of systems analysis names: as AI tools become central to organisational decision-making, what those tools are shown matters as much as what leaders do. An AI system that has been fed filtered, curated, success-weighted information will confidently amplify the filtered version of reality. An AI system shown honest information — including failure accounts, frontline perspectives, and what didn’t work — produces outputs that genuinely illuminate. Leaders who model learning by feeding organisational systems honest information, not only by admitting mistakes to peers, pass something more useful to everyone who subsequently uses those systems.
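The effect of a success-weighted record can be shown with a small simulation. This is an illustration only — the incident counts, failure probability, and the 10% survival rate for failure reports are invented numbers, not figures from the brief. It shows how any tool reasoning over a curated record will confidently understate how often things go wrong.

```python
import random

random.seed(0)

# Hypothetical incident log: each entry records whether a change failed.
# The 30% base failure rate is an invented figure for illustration.
incidents = [{"failed": random.random() < 0.3} for _ in range(1000)]

# The honest record keeps everything. The curated record is
# success-weighted: most failure accounts never enter the system
# (here, only 10% of them survive the filtering).
honest = incidents
curated = [i for i in incidents if not i["failed"] or random.random() < 0.1]

def failure_rate(log):
    """Fraction of logged incidents that were failures."""
    return sum(i["failed"] for i in log) / len(log)

print(f"honest failure rate:  {failure_rate(honest):.2f}")
print(f"curated failure rate: {failure_rate(curated):.2f}")
# The curated rate is far lower — a tool shown only the curated
# record amplifies a version of reality in which the organisation
# barely fails at all.
```

The gap between the two rates is the information-substrate point in miniature: the tool is not wrong about its data; the data is wrong about the organisation.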

The structural shift: Senior leadership modelling of learning has three dimensions, not one: the visible admission of failure to peers (the established remedy); the visible act of going to the frontline and genuinely receiving what is known there (the extended reach); and the deliberate feeding of honest, complete information — including failure — into the systems and tools the organisation uses to make decisions (the information substrate). All three are required. The first without the other two produces a changed culture at the senior tier and an unchanged one everywhere else.

SECTION 4

The Remedies: Established and Enhanced

The summary below shows the four established remedies alongside the structural enhancements the systems analysis produces. The enhancements do not replace the established remedies. They add the prior conditions that must be in place for each remedy to produce the result it is designed for.

Remedy: Psychological safety
Established approach: Create an environment where people feel safe to report failure without fear of punishment.
With structural enhancements: First map the information pathway from frontline to decision-maker. If the pathway is blocked by the combined effect of individually reasonable filters, safety does not fix it. Redesign the combination of filters so the pathway exists before asking people to use it.

Remedy: Blameless post-mortems
Established approach: Focus reviews on systemic causes rather than individual blame.
With structural enhancements: Extend backwards to the development phase (where the failure was still a small reversible choice); require genuine investigation of practices and culture before closing at governance attribution; and connect findings structurally to successor programmes and peer organisations.

Remedy: Double-loop learning
Established approach: Examine whether the framework itself is correct, not only whether it was executed correctly.
With structural enhancements: Add a structural inversion step before terms of reference are set — explicitly asking what questions the review design cannot reach. Record those questions and require findings to address them. The double-loop questions are already present. The standard process has nowhere for them to go.

Remedy: Leadership modelling
Established approach: Leaders visibly admit failure and model learning behaviour.
With structural enhancements: Add the extended reach (going to the frontline and making that act visible) and the information substrate (feeding honest, complete information into organisational systems and tools). Modelling without reach changes culture at the senior tier. Modelling without honest information feeds the organisation’s analytical tools a curated version of reality.

Remedy: [Prior condition]
Established approach: Not addressed in the established literature.
With structural enhancements: Before any of the above: map internal capability. If the leadership tier that receives failure information no longer has the knowledge to understand what it means, improved review processes and extended reach produce accurate signals that cannot be interpreted. Rebuilding the internal knowledge substrate is the first intervention in a capability-depleted organisation.

SECTION 5

Summary

The question in one sentence

Large organisations consistently fail to learn from their mistakes because the channels that should carry learning from the people who experienced the failure to the people who could act on it are blocked by the accumulated effect of individually reasonable information filters — and the leadership tier receiving the signal frequently no longer has the internal knowledge to understand what it means, because that knowledge was progressively outsourced in decisions that each seemed rational at the time.

The established research gives us the right remedies. The systems analysis gives us the structural conditions under which each remedy actually works. The difference is significant in practice.

Psychological safety without a working pathway produces people willing to speak into a system that cannot hear them. Blameless post-mortems without development-phase audits produce reviews that find what happened after it was already permanent. Double-loop learning without a structural mechanism for surfacing the unreachable questions produces conversations about the framework that use the framework’s own language to examine it — which can only produce single-loop conclusions. Leadership modelling without extended reach produces a changed culture at the top of the organisation and an unchanged one where the work actually happens.

Each of these combinations is familiar. Each is cited in the aftermath of every major institutional learning failure. The systems analysis explains why they recur: not because the remedies are wrong, but because the prior structural conditions that make them work are not examined.

The practical starting point

For any organisation that has invested in learning culture without the results it expected, the systems analysis suggests three diagnostic questions before further investment:

1.  Can a failure report originating from the frontline reach the person with authority to act on it, passing through the full combination of information filters, format requirements, and channel constraints that currently exist in the organisation? If not, where specifically is it blocked, and by the combination of which individually reasonable barriers?

2.  Does the leadership tier that would receive that report have sufficient internal knowledge to understand what it is telling them? Or has progressive outsourcing of capability created a gap between the signal that arrives and the knowledge required to interpret it?

3.  Does the standard review process include a mechanism for surfacing questions that the process itself cannot reach? Or does the design of the review determine — before it starts — that it can only find what the existing framework already accommodates?

Where the honest answer to any of these questions reveals a gap, the investment required is structural — in the information architecture, in internal knowledge, in review design — rather than cultural. Cultural interventions applied to structural problems produce cultural changes at the level where the intervention lands and structural continuity everywhere else.
