TMI Research Library
Transformation Science Monograph Series · B1 (2025)


The Emergence of Transformation Science

Our Response to the Seventy Percent

Authors: Jordan Vallejo and the Transformation Management Institute™ Research Group

Status: Monograph B1 | October 2025

Introduction

For more than four decades, organizations across industries have reported a stable pattern: many large transformations do not deliver the results they set out to achieve. Popular summaries compressed this into a “seventy-percent failure rate” (Beer and Nohria 2000; McKinsey & Company 2018; Boston Consulting Group 2020). Subsequent research questioned how consistently that figure had been measured (Hughes 2011). The precise number remains contested. The recurrence of shortfall does not.

This monograph treats “the seventy percent” as shorthand for a disciplined observation. As transformation scope increases, systems repeatedly reach a point where people can no longer stay coordinated on what is happening, what has changed, what counts as done, and who can settle ambiguity when it appears. Work continues, but it stops being comparable across teams. The same words begin to point to different realities. Decisions begin to depend on who you ask and when you ask them.

Transformation Science emerged to explain that condition.

Transformation shortfall is often framed as an execution gap: insufficient alignment, insufficient leadership, insufficient adoption. In practice, many shortfalls begin earlier. They begin when the change creates more interpretive-event load than the environment is prepared to support. People are asked to move faster than the system can keep reality checkable, decisions legible, and updates consistent across roles and time. In Transformation Science, these breakdowns are tracked through interpretive events: the recurring cycles where signals are evaluated against declared reference conditions and routed into decisions, corrections, and closure.

This is not a claim about effort. It is a claim about load.

The Emergence

A transformation alters more than tasks or workflows. It changes the conditions under which work remains comparable, decisions remain legible, and outcomes remain attributable. Criteria shift. Ownership boundaries move. Definitions evolve. Interfaces change. Authority for settlement is redistributed. These changes rarely occur one at a time. They accumulate, and they often accelerate.

When the environment can keep up, the change feels difficult but navigable. People can still answer basic questions without improvising: What are we doing now? What is the current rule? Which artifact is the source of truth? Who can approve the exception? What happens when two instructions conflict? In a stable environment, these answers travel. A new hire can learn them. A partner team can rely on them. A correction can be made once and stay made.

When the environment cannot keep up, organizations adapt in predictable ways. Teams narrow the set of sources they trust. Local operating definitions emerge because the work needs a usable baseline. Exceptions persist because they are operationally necessary and nobody has the time or authority to settle them cleanly. Workarounds become routine because “the official way” no longer produces consistent outcomes. Communication increases, but it does not converge. People hear more words and gain less clarity.

The earliest signs show up in ordinary work, not in post-mortems. Decisions begin to rely on fewer checkable references over time. Definitions shift without clear revision trails. Exceptions stay active without an expiration rule or a stable owner. Updates are announced but do not fully arrive in the artifacts people actually use. People start asking the same questions again, not because they forgot, but because the answer keeps changing depending on context.

Over time, the organization develops two parallel realities: the one described in documents and the one people use to survive the day. Both can look reasonable. They just do not match.

Two mechanisms make this condition durable.

The first is the loss of clear limits on what things mean across the system. A policy, priority label, risk category, or readiness threshold can be interpreted in several workable ways at once. Tools allow multiple pathways that lead to different outcomes. Exceptions accumulate without being folded back into the baseline. In that environment, local teams do the rational thing: they build their own reliable interpretation so they can keep shipping work. Once that local baseline hardens, getting back to a shared baseline requires rebuilding the limits, not issuing another clarification.

The second is the failure of decisions to fully settle before they come back. A decision is made, but the rationale does not travel. A correction is announced, but the old version continues to operate in downstream workflows. A “final” answer remains revisitable because new information arrives and there is no clear method for integrating it without reopening the whole question. The result is repeat work, repeat meetings, repeat friction. People stop treating resolution as real. They begin relying on private judgment because the environment is no longer dependable as a settling mechanism.

When these conditions persist, inconsistency starts lasting longer and spreading farther. It becomes normal for two teams to operate from different baselines while believing they are aligned. It becomes normal for a handoff to fail because each side is working from a different version of “what we decided.” The system still functions, but it functions through local consistency rather than shared structure.

Viewed this way, the longstanding transformation shortfall pattern reflects overload, not merely deficits of motivation or culture. Many transformation programs expand scope, interfaces, and novelty at the same time. They increase the volume of signals people must interpret and the pace of decisions people must make, while the environment’s capacity to keep reality checkable and outcomes comparable lags behind.

Over time, organizations normalize workaround infrastructure. They build unofficial routes around unclear rules. They preserve local baselines because local baselines are how work stays possible. These adaptations keep things moving, but they also make shared understanding harder to restore later, because the organization has learned to function without it.

Transformation Science emerged from treating this pattern as law-like behavior. It can be observed in deliverables, traced through rework cycles, and governed through structural intervention. It does not require speculation about intent. It requires attention to the conditions that allow interpretation to stay consistent across roles, time, and interfaces.

Transformation Science models transformation as the behavior of an organization under changing interpretive conditions. It names a target that earlier transformation disciplines often implied but rarely formalized: keeping shared understanding reproducible while the system becomes something new. It treats consistency as an engineered condition rather than a hoped-for outcome. It allows emerging instability to be detected early through everyday system behavior rather than explained after the fact.

Transformation Science describes the system. Transformation Management governs it in motion.

Transformation Managers work to preserve stable, checkable understanding while systems become. They reduce unbounded variance, restore clear limits where meanings have multiplied, and help decisions settle in ways that actually propagate into daily work. They watch for the moment when local coping becomes durable structure, and they intervene before the organization reorganizes around parallel baselines.

The AI era intensifies these dynamics by increasing variation and throughput faster than many organizations can scale verification and settlement. AI lowers the cost of producing plausible artifacts and recommendations. It accelerates decision cycles. It multiplies outputs that must be evaluated and integrated into workflows that were already operating near their limit. Used without deliberate design, it raises output velocity without raising the system’s capacity to confirm what is real and to carry corrections forward, and confusion grows in the gap.

AI does not introduce a new kind of instability. It compresses the timeline.

Used as infrastructure rather than acceleration, AI can support stability by improving verification, enforcing consistent rules in routine work, and carrying updates into the operational artifacts people depend on. Used without governance, it produces motion without meaning.

The coming decade will distinguish institutions that treat shared understanding as an engineered condition from those that rely on it to emerge informally. Transformation Science arose to explain a persistent constraint in organizational becoming. Transformation Management exists to govern that constraint in practice.

Citation

Vallejo, J. (2025). Monograph B1: The Emergence of Transformation Science. Transformation Science Monograph Series. Transformation Management Institute.

References

Beer, Michael, and Nitin Nohria. “Cracking the Code of Change.” Harvard Business Review, May–June 2000.

Boston Consulting Group. Flipping the Odds of Digital Transformation Success. BCG Report, 2020.

Hughes, Mark. “Do 70 Per Cent of All Organisational Change Initiatives Really Fail?” Journal of Change Management 11, no. 4 (2011): 451–464.

Kotter, John P. Leading Change. Harvard Business School Press, 1996.

McKinsey & Company. Unlocking Success in Digital Transformations. McKinsey Global Survey, 2018.