
The Unit of Value Problem
Why AI Deployment is a Management Discipline Before It Is a Technology Decision
Executive Summary
Two compounding failures are driving the wave of AI investment disappointments now surfacing across enterprise portfolios.
The first is the failure identified in this series' earlier papers: organisations are treating AI as a cost-reduction tool rather than a value-creation instrument, comparing token costs to salary lines rather than asking what unit of business value a given investment produces.
The second failure is upstream of the first, and it is a management failure rather than a financial one. Organisations attempting to deploy AI typically cannot answer a question that good management should have been able to answer long before any AI vendor appeared. They cannot say, for a given role, process, or function, what unit of value it produces, what good performance looks like in measurable terms, or how its contribution connects to the outcomes the business exists to generate.
This is the Unit of Value Problem. It is not a problem AI created. AI merely makes it impossible to ignore.
The organisations that deploy AI well are not the ones with the best technology or the largest budgets. They are the ones where management already knows what every significant function is actually for. AI deployment, done properly, is a consequence of that clarity. Done without it, AI is an expensive way to automate processes whose value no one has examined and whose failure no one will be able to diagnose.
Before any organisation asks "where should we apply AI?", it must first ask "do we know what this process produces, and do we know how to measure it?" Those questions are management questions. They have always been management questions. The AI era has raised the cost of not answering them to a level that is no longer ignorable.
Part One: The Question AI Forces You to Answer
The Hidden Assumption in Every AI Business Case
Every AI deployment rests on a hypothesis, usually implicit, that the process being automated produces something worth producing. The business case says: this currently costs X in labour; AI can do it for Y, where Y is less than X; therefore deploy.
That calculation contains a buried assumption that almost no one examines: that the process being automated is producing value worth the cost X in the first place.
If the process is not generating meaningful value relative to its cost, automating it does not create a return. It creates a cheaper version of something that should perhaps not exist at all. The AI deployment looks like a success on the cost line and registers nothing as a failure on the value line, because the value was never measured to begin with.
This explains a significant portion of the AI deployments that have produced disappointing returns. The projects succeeded technically. The automation worked. The headcount came out. And then, somewhere between twelve and thirty-six months later, the organisation noticed that certain things had stopped happening, certain client relationships had deteriorated, or the business had lost the ability to respond to conditions it had never encountered before. The value that was lost never appeared in any ledger, because it had never been counted in the first place.
Why the Cost Comparison Is the Wrong Frame
The instinct to compare AI cost to labour cost is understandable. Cost is visible. Salary lines are in the budget. Token costs are in the API bill. The arithmetic is legible.
But cost is a measure of input. What organisations need to evaluate is cost per unit of output, and that requires a unit: something you can count, assess, or at minimum meaningfully describe.
A human role produces value in several ways simultaneously. Some of it is transactional: tasks completed, documents processed, tickets resolved. Some of it is relational: the trust maintained with a client, the informal knowledge passed to a colleague, the judgment exercised in a situation that the process documentation did not anticipate. And some of it is adaptive: the person who notices something is wrong before the data shows it, who flags a pattern, who brings context from outside the immediate workflow.
Transactional value is easiest to measure and therefore the part most likely to appear in an AI business case. Relational and adaptive value is harder to quantify, slower to lose, and almost completely absent from the cost comparison. A business case built only on transactional value is systematically incomplete. The error is not in the arithmetic; it is in the definition of what is being measured.
Cost Per Unit of Value: The Right Frame
If AI enables your team to produce four times the output at one and a half times the cost, the unit economics are compelling even though the absolute cost has risen. That is a more honest calculation than the salary comparison, but it requires knowing what the unit is.
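A minimal sketch of that arithmetic, using the illustrative figures above (the indexed costs and unit counts are assumptions, not data from any real team):

```python
def cost_per_unit(total_cost: float, units_of_value: float) -> float:
    """Cost per unit of value: total fully loaded cost divided by units produced."""
    return total_cost / units_of_value

# Illustrative, indexed figures: a team whose fully loaded cost is 1.0
# producing 100 units of value per period.
baseline_cost, baseline_units = 1.0, 100.0

# With AI assistance: absolute cost rises to 1.5x, output rises to 4x.
augmented_cost, augmented_units = 1.5, 400.0

print(cost_per_unit(baseline_cost, baseline_units))    # 0.01 per unit
print(cost_per_unit(augmented_cost, augmented_units))  # 0.00375 per unit

# Cost per unit falls by more than 60% even though absolute spend rises by 50%,
# which is exactly what a salary-line comparison fails to show.
```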
If the unit of value includes relational and adaptive dimensions that AI cannot produce, the comparison is between a partial capability and a whole one. Choosing the cheaper partial capability is a decision to stop producing the parts of value you did not bother to count. That is not a cost saving. It is an unacknowledged reduction in what the business does.
Part Two: The Management Failure That Preceded the AI Failure
Why Organisations Cannot Answer the Foundational Question
Ask a manager to articulate what unit of value their team produces. Ask them how they would know if performance improved by twenty percent. Ask them which activities in their function connect directly to revenue, client retention, or competitive advantage, and which are primarily administrative.
In the average organisation, these questions land as unusual, even provocative. Managers can describe what their teams do. They can explain the processes, the deliverables, the workflow. But the connection between those activities and the measurable outcomes the business cares about is often assumed rather than examined, implicit rather than explicit, understood vaguely in aggregate rather than specified at the level of the role or the process.
This is a systemic failure, not a personal one. Performance management systems tend to measure activity rather than value. Budget processes typically allocate headcount to functions without requiring a demonstrated connection between the headcount and the outcomes it produces. Strategic planning exercises routinely define goals at the top of the organisation and rely on a chain of assumption and delegation to connect those goals to the work of individual teams.
None of this has been fatal, even if it has sometimes been barely tolerable, because human labour is self-organising in ways that partially compensate for definitional vagueness. Hire a capable person, give them a rough sense of what success looks like, and they will find ways to create value even when the brief is incomplete. They bring judgment and contextual intelligence that fills the gaps. The organisation gets value it did not specify because the human supplied it without being asked.
An AI system does not do this. It does exactly what it is designed to do. If the metric it is optimising for drifts away from the outcome it was meant to serve, it will not notice. If the context shifts, it will not adapt. A vague brief produces consistent wrong outputs at scale, with no one in a position to notice or correct it. The self-correcting properties of human labour simply are not available. Deploying AI into a function where value has not been properly defined embeds the vagueness in the system, where it compounds silently.
The Value Attribution Gap
In many organisations, the connection between a given function and business value is real but indirect. Legal, people, finance — none of these generate revenue directly. Their contribution operates through mechanisms that are several steps removed from the outcomes on the income statement.
When these functions are subject to AI deployment decisions, the business case compares the cost of the human team to the cost of the AI alternative. Because the value contribution is not directly attributed, it is not included in the comparison. The function looks like a cost centre, and the AI case looks straightforward.
The problem surfaces later. The legal team was managing risk through relationships and judgment as much as through documentation. The people function was maintaining organisational health through practices that no workflow captured. Finance was providing interpretation and context, not just numbers.
These contributions were real. They were not in the business case because no one had done the work of attributing them. Their absence shows up not in a single dramatic failure but in a gradual degradation of organisational capability that takes months or years to diagnose — by which point the AI deployment has long since been counted as a success.
This Is a Management Problem, Not a Technology Problem
It is tempting to frame the Unit of Value Problem as a measurement challenge that better analytics could solve. That framing lets management off the hook too easily.
The inability to specify what value a function produces reflects a gap in management clarity about what the organisation is actually trying to produce and how each part of the enterprise contributes to that production. No AI system can supply that clarity from the outside. It has to come from management: from the people responsible for the function, in conversation with the people responsible for the organisation's strategic direction, working through the specific connection between this team's work and the outcomes the business depends on.
That conversation is a management act. It requires managers who can distinguish between activity and value, between input and outcome, between what a function does and what it is for. It requires executives who demand that clarity before approving technology investment, not after.
Requiring managers to say what their functions are for before they are authorised to change how those functions operate is not bureaucracy. It is the basic accountability that technology investment demands.
Part Three: A Discipline for Knowing What You Have
The Value Inventory
Before any AI deployment decision, an organisation needs a working answer to four questions about each function or process under consideration.
The first is the output question: what does this process produce? Not what activities does it perform, but what is the thing it creates, resolves, or moves forward? The answer should be specific enough that two people could independently assess whether the output exists and whether it is good.
The second is the connection question: how does this output contribute to an outcome the business cares about? This requires tracing the chain from the immediate output of the process to the metrics, relationships, or capabilities that the organisation depends on. For some processes that chain is short and direct. For others it is long and indirect, and the work of making it explicit is itself a form of management discipline.
The third is the measurement question: how would you know if this process were performing better or worse? The answer does not have to be a precise metric. It can be a proxy, a leading indicator, a qualitative judgment made on a structured basis. What it cannot be is "we would just know" — an answer that protects vagueness rather than resolving it.
The fourth is the replacement question: if this process were automated, which dimensions of its current value output would the automated version produce, and which would it not? This is the question AI business cases routinely skip. It requires being specific about what the human contribution actually consists of across the transactional, relational, and adaptive dimensions described earlier.
Organisations that can answer these four questions for a given function are ready to make an AI deployment decision for it. Those that cannot are not ready, regardless of what the technology can do.
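For teams that want to capture those answers in a consistent form, one possible shape for the record is sketched below; the field names and the example entry are illustrative only, not a prescribed template.

```python
from dataclasses import dataclass, field

@dataclass
class ValueInventoryEntry:
    """Working record of the four value-inventory questions for one function."""
    function: str
    output: str       # the output question: what the process produces
    connection: str   # the connection question: how the output reaches a business outcome
    measurement: str  # the measurement question: how better or worse performance would show up
    not_replicated: list[str] = field(default_factory=list)  # the replacement question: value an automated version would not produce

# Illustrative entry only; the content would come from the responsible manager
# in conversation with the accountable executive.
example = ValueInventoryEntry(
    function="Client onboarding",
    output="A completed, verified onboarding file per new client",
    connection="Onboarding quality drives time-to-first-revenue and early attrition",
    measurement="Cycle time, rework rate, ninety-day client attrition",
    not_replicated=["Judgment on ambiguous documentation",
                    "Early warning when a new client relationship is uneasy"],
)
print(example.function, "->", len(example.not_replicated),
      "value dimensions automation would not cover")
```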
Categorising Value for AI Readiness
Not all value is equally tractable to AI. A working taxonomy distinguishes three categories.
Transactional value is produced by processing, transforming, or routing well-defined inputs to well-defined outputs. Speed and consistency matter most, the unit of value is usually easy to define, and AI deployment can be evaluated against clear criteria. This category is where AI genuinely earns its cost.
Relational value is produced through sustained human interaction: trust built over time, context accumulated through relationship, judgment exercised by someone who knows the parties involved. AI can support the humans doing this work. It cannot do the work itself. An AI business case that treats relational value as equivalent to transactional value is not a conservative estimate — it is just wrong.
Adaptive value is produced by the capacity to notice and respond to things the organisation has not encountered before. It lives in the humans who understand the system well enough to know when it is failing, who can act outside the prescribed workflow, and who carry the institutional memory to contextualise novel situations. This category is the most dangerous to eliminate because its value is most invisible in normal conditions and most critical in abnormal ones.
An honest AI readiness assessment assigns each function a profile across these three categories. Predominantly transactional functions are strong AI candidates. Predominantly relational or adaptive functions are candidates for AI augmentation, not replacement. Functions that carry significant adaptive value are governance risks if automated without deliberate attention to what replaces that human capacity.
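As a schematic illustration of that assignment, the sketch below assumes each function has been profiled as rough proportions of transactional, relational, and adaptive value, and applies a simple dominance rule; the thresholds are arbitrary placeholders, not part of the framework.

```python
def ai_readiness(transactional: float, relational: float, adaptive: float) -> str:
    """Map a function's value profile (proportions summing to roughly 1.0) to a
    readiness category. The 0.25 and 0.6 thresholds are illustrative placeholders."""
    assert abs(transactional + relational + adaptive - 1.0) < 1e-6, "profile should sum to 1.0"
    if adaptive >= 0.25:
        return "governance risk: design the replacement for adaptive capacity before automating"
    if transactional >= 0.6:
        return "deployment candidate"
    return "augmentation candidate"

print(ai_readiness(0.80, 0.15, 0.05))  # deployment candidate
print(ai_readiness(0.30, 0.50, 0.20))  # augmentation candidate
print(ai_readiness(0.40, 0.30, 0.30))  # governance risk: ...
```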
Cost Per Unit of Value in Practice
The Cost per Business Outcome (CBO) framework introduced in the companion paper on AI cost governance provides the economic structure for evaluating AI deployments once the unit of value has been defined. The relationship between the two frameworks is sequential.
CBO requires a business outcome to calculate. Defining that business outcome is the work of the value inventory. Without it, CBO is an elegant formula applied to an undefined quantity — it produces a number that feels precise but means nothing.
The practical sequence is: define the unit of value for the process; identify which dimensions of that value AI can produce and at what quality; calculate the fully loaded cost per unit for AI delivery versus human delivery across those dimensions; account separately for the value dimensions AI cannot produce and the cost of replacing them through other means; then make the deployment decision on the basis of the complete picture.
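A worked sketch of that calculation follows, with purely illustrative numbers; the cost figures, unit counts, and the separate replacement cost for the value dimensions AI does not produce are all assumptions.

```python
def fully_loaded_cost_per_unit(delivery_cost: float,
                               replacement_cost: float,
                               units_of_value: float) -> float:
    """Delivery cost plus the cost of replacing value dimensions this delivery
    mode does not produce, divided by units of value produced."""
    return (delivery_cost + replacement_cost) / units_of_value

# Illustrative annual figures for one process (not data from any deployment).
human_cost, units = 1_000_000, 10_000  # human delivery covers all value dimensions
ai_cost = 300_000                      # AI delivery matches the transactional output only
relational_adaptive_cover = 400_000    # retained staff and oversight for what AI does not produce

print(fully_loaded_cost_per_unit(human_cost, 0, units))                       # 100.0 per unit
print(fully_loaded_cost_per_unit(ai_cost, 0, units))                          # 30.0 per unit (incomplete picture)
print(fully_loaded_cost_per_unit(ai_cost, relational_adaptive_cover, units))  # 70.0 per unit (complete picture)

# The naive comparison overstates the saving; the complete picture is still
# favourable here, but only because the missing value dimensions were costed
# rather than silently dropped.
```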
This is more work than the standard business case. It is also the only version of the business case that will hold up when someone asks, a year later, whether the deployment actually worked.
Part Four: What This Changes About Management
The New Management Accountability
The AI era does not make management less important. It makes a particular kind of management more important and renders another kind structurally redundant.
Management as information aggregation and execution supervision is substantially absorbed by AI. As argued in the first paper of this series, the manager who primarily exists to gather reports, synthesise data, and ensure that tasks are being performed is managing a function that AI performs better.
What remains — and what becomes harder, not easier — is management as value definition and governance. In an AI-native organisation, the manager's job is to articulate what this function is for, how its value connects to the outcomes the organisation depends on, and whether the AI systems doing the work are producing that value correctly. This requires conceptual clarity that execution management never demanded: the ability to distinguish value from activity, outcome from output, what a function is for from what it currently does. And it requires the willingness to surface and resolve the vagueness that organisations have tolerated for decades because human labour was forgiving enough to work despite it.
Management Education for the AI Era
The standard management curriculum is largely oriented toward execution: how to manage projects, how to run processes, how to lead teams through defined tasks toward defined deliverables. That orientation served the era it was designed for. It does not serve this one.
What managers actually need now is the ability to think in terms of value chains rather than process flows — to understand how the work of their function connects to the outcomes the business depends on, and to specify that connection clearly enough that an AI system could be designed to serve it. They also need to understand AI systems well enough to govern them, which means knowing what those systems are optimising for and recognising when that optimisation is producing the wrong result.
None of this requires technical AI expertise. It requires rigorous thinking about purpose, value, and accountability — the kind that good management has always demanded and that organisations have allowed to atrophy because the immediate cost of that atrophy was low. The AI era has made the cost high. The organisations that invest in building this capability now will be substantially ahead of those that wait for failure to make the case for them.
The Prior Question
If you cannot answer the value inventory questions for a function, you should not be deploying AI to that function. The problem is not that the technology is unready. The problem is that you are not ready. The inability to specify what a function produces and how that output connects to business outcomes is a gap that has to be closed by management before any technology deployment begins. Deploying AI into that gap does not resolve it. It embeds it in an automated system where it will be invisible, uncorrectable, and compounding.
This is a hard discipline to enforce. There is commercial pressure to move quickly, competitive pressure to deploy before rivals, and cultural pressure to appear AI-forward. All of those pressures are real, and none of them changes the underlying logic. Moving fast into AI without doing this foundational work does not put you ahead. It puts you on an unstable foundation that will require expensive reconstruction later — if you recognise the problem at all.
Part Five: The Connection to Governance and Cost
Where This Paper Sits in the Series
The three papers in this series address connected failures at different levels of the organisation.
The first paper argues that the dominant AI investment narrative — that AI is a labour replacement and margin improvement tool — is wrong because it misunderstands what organisations are for. The human functions being eliminated are not costs to be reduced. They are the governance layer, the cultural capital, and the adaptive capacity that make an enterprise worth having. Strip them out and you have a more efficient machine that has lost the capacity to be trusted, corrected, or directed.
The second paper argues that the cost governance failure is structural. Replacing a predictable owned cost with a variable rented one, without the attribution infrastructure to understand what each workflow is producing relative to what it consumes, is a cost restructuring with embedded risks that financial models are pricing incorrectly. The Cost per Business Outcome framework is the mechanism for making AI economics legible and governable.
This paper argues that both of those failures share a common origin: attempting to deploy AI without first doing the management work of understanding what each function produces, how that value connects to the outcomes the business depends on, and whether the AI investment will preserve or quietly eliminate dimensions of value that the business cannot do without.
The governance argument and the cost governance argument both require a prior definition of what value is being produced. The Unit of Value Problem is the first problem. The others cannot be solved without solving it.
The Sequence That Actually Works
The sequence for AI deployment that produces durable returns, rather than short-term cost reductions followed by medium-term capability erosion, runs as follows.
Start with the value inventory. For each function under consideration, answer the four questions: what does it produce, how does that connect to business outcomes, how would you measure performance, and what dimensions of its value output would an automated system not replicate.
Use the value inventory to categorise functions by AI readiness. Transactional-dominant functions are deployment candidates. Relational- and adaptive-dominant functions are augmentation candidates. Functions with significant adaptive value require explicit governance design before automation proceeds.
Build the CBO framework on the foundation of the value inventory. Now that the unit of value is defined, calculate the fully loaded cost of producing that unit via AI versus human delivery, across all dimensions of value, not just transactional ones.
Design the governance layer. Identify the humans who will govern each AI deployment: who decides what the system is optimising for, who monitors whether it is producing the right value, who can intervene when it is not, and who is accountable when something goes wrong.
Then deploy, with the measurement infrastructure in place to know whether the deployment is producing the value it was designed to produce.
This sequence is slower to start and much faster to produce durable returns. Deploying first and discovering the value gaps later is fast to start and expensive to unwind. The organisations that have learned this lesson have mostly learned it the hard way.
Conclusion: The Discipline That Was Always Required
The Unit of Value Problem is not new. Management theory has gestured toward it for decades, in the language of value chains and activity-based costing and strategic alignment. But the cost of leaving it unresolved has been low enough that organisations could tolerate the vagueness. Human labour is forgiving. People figure out how to be useful even when the brief is incomplete. The gap between what a role is supposed to produce and what it actually produces gets filled, quietly, by capable people exercising judgment.
AI does not fill that gap. It falls into it.
You should not automate what you cannot define. If you cannot say what a process produces, how that output connects to something the business depends on, and what you would lose by replacing it, you do not understand it well enough to replace it. That is not a limitation of the technology. It is a gap in the management that has to be closed before the technology question is even relevant.
This obligation predates AI. Every organisation that has ever hired someone without being clear about what that person was for has been living with the same problem, just cheaply enough to ignore it. The AI era has not created a new management discipline. It has made the cost of skipping the old one impossible to absorb.
Appendix: The Value Inventory — Questions for Every Function Before AI Deployment
These questions should be answered by the manager responsible for the function, in conversation with the executive responsible for the business outcomes it serves, before any AI deployment decision is made.
On what the function produces:
What is the specific output of this function? If two people independently assessed whether a good output existed, what would they be looking at?
Which of those outputs are transactional (processing, transforming, routing defined inputs to defined outputs), relational (sustained human interaction, trust, contextual judgment), or adaptive (noticing novel situations, applying judgment outside the prescribed workflow)?
What proportion of the function's total value output falls into each category?
On how value connects to business outcomes:
Trace the chain from this function's output to an outcome on the income statement, the balance sheet, or the risk register. How many steps does that chain have? Who else in the chain is relying on this function performing well?
If this function performed twenty percent better, which business outcome would improve? How much, and over what time period?
If this function disappeared entirely for ninety days, which business outcomes would deteriorate, and when would you first notice?
On measurement:
How do you currently know whether this function is performing well or poorly? Is that measure a direct measure of value output, or a measure of activity?
What proxy or leading indicator could you use to assess performance if direct measurement is not available?
What would a performance improvement of twenty percent look like in concrete terms?
On AI readiness:
Which specific outputs of this function could an AI system produce at equivalent or greater quality? Which could it not?
What relational or adaptive value is currently carried by the humans in this function that an AI system would not replicate? What is that value worth, and what would replace it?
If you automated the transactional dimensions of this function and retained the humans for relational and adaptive work, what would those humans' roles look like, and is that a viable operating model?
On governance:
Who would be responsible for governing the AI system deployed into this function? Do they understand the function's value logic well enough to know when the system is producing the wrong output?
What monitoring exists or needs to exist to detect whether the AI deployment is producing the intended value?
What is the recovery path if the AI deployment fails to produce the expected value? Is that path available, or have the human capabilities required for it already been eliminated?
This white paper is the third in a series on AI-native management and investment. The first paper, "What Organizations Are Now For," addresses the structural case for human governance in an AI-native enterprise. The second paper, "The Margin Killer," addresses AI cost governance and the Cost per Business Outcome framework. Together, the three papers argue that successful AI deployment requires organisational clarity, economic discipline, and management accountability — in that order.
This series was developed by Newthistle Consulting LLC. Newthistle helps organisations become genuinely AI-native, embedding AI into strategy, operations, and culture in ways that create durable competitive advantage. For further discussion, contact us at newthistle.com.


