Writing

The Memo She Couldn’t Write

What senior law partners are actually transferring when they hand over a matter — and why the form we give them was never designed to carry it.

The following is adapted from a real scenario one of my senior clients recently faced. Names have been changed.

Anika Sharma closed the matter on a Thursday afternoon. Fourteen months. A conglomerate restructuring its overseas holdings through Singapore, regulatory path layered across RBI, FEMA, and SEBI, with tax structuring across three jurisdictions. And a promoter family whose three siblings had spent the first half of the matter pretending to disagree about structure and the second half discovering that what they were really disagreeing about was control.

The final call ran twenty minutes. The chairman thanked Anika personally. Her associates exhaled. Someone in Mumbai sent a cake.

The next Monday, she opened her calendar and blocked a Saturday morning, two weeks out, to write the handoff note Rohan would need. Rohan had been on the matter since month two. Sixth-year. Trusted by the client, and quietly being positioned for partnership. The next phase would be his to run. The system would move the file forward. The team would brief him. Anika would give him the rest.

That was three Saturdays ago. The note still wasn’t written. It had defeated her three times, and the reason was not that the matter was unusually complicated.

The problem was that the file was not the thing she needed to transfer.

The chronology was in the record. The advice was in the record. The draft structures, the approvals, the tracked risks, the calls, the changes in position, the outstanding questions — all of it was somewhere.

Any competent team could surface the material. Increasingly, any decent system could too.

What Rohan needed from Anika was not another route into the file. He needed the thing that had made the file intelligible while the matter was alive.


On the first Saturday, she wrote the version she knew how to write in her sleep. Deal overview. Regulatory path and the decisions behind it. Tax structure, rationale, risks. Governance outcomes. Lessons learned. It took three hours, and when she read it back she knew it was not what Rohan needed. It was fine. It would pass any partner’s review. It would fail Rohan in the next phase of the work, and with the next family.

On the second Saturday, she tried to be more honest. She stopped writing about formal structure and started writing about the actual family. The older brother’s quiet authority over the younger sister’s capital. The mother’s late intervention, which had appeared in governance language but was really about inheritance. The subtle shift in the chairman’s own reading of his children in month nine, which had changed what became possible in month ten. Halfway through the third paragraph, she noticed what she was writing, and who might read it, and she stopped.

On the third Saturday, she split the difference. Formal memo, with a section at the end called “Judgment Notes.” Two paragraphs in, the notes read either like truisms — pay close attention to family dynamics in promoter-led restructurings — or like specifics no one could use — the younger brother’s position softened after the October dinner at the farmhouse. Neither was what she meant. She closed the laptop and made tea.

This was the fourth Saturday, and she had not yet opened the document.


What had carried the matter for fourteen months was not in any document or file.

What carried the matter was a live state: a compact, current mental model of what the matter was really about, which assumptions were still safe, what had shifted since the last call, where the real pressure was building, and where her judgment was likely to be needed next.

That was how Anika had run the matter. She had not been reconstructing it from scratch every morning. She had been re-entering it. Every Monday, re-entering. Every call, re-entering. Every time a sibling said something that sounded procedural but was actually about authority, re-entering. Every time the regulatory path moved because the family position had moved half an inch, re-entering.

She was not carrying a record in her head. She was carrying a maintained live state. The memo was asking her to do the opposite. It was asking her to flatten a live state into a static record.

And between Thursday evening and the first Saturday, something had started to happen that most firms feel all the time and almost never name.

The state had started to cool.

And every day made it worse. The family kept moving. The world kept moving. The next phase of the matter was already developing its own internal logic. Her own vantage point was changing too.

What she was trying to hand over was not just difficult to write. It was decaying.

That is where firms lose more than elegance in a handoff.


When live state fails to transfer, the cost rarely shows up as a dramatic mistake. It shows up as drag.

The next lawyer takes longer to become truly oriented than the file should require. The client keeps calling the original partner “just to sanity-check one thing.” The original partner stays half-attached to a matter that was supposedly handed over. The team re-learns what the firm already paid to learn once.

Most firms do not describe the problem that way. They call it uneven succession. Or chemistry. Or seasoning. Or partner centrality. Or the client being unusually attached.

Sometimes that is true. But more often, the firm is watching a transfer failure and calling it a talent or client problem.

On the fourth Saturday, Anika finally saw why the note would not come. She had been trying to produce an artifact. What Rohan needed was re-entry at altitude.

He needed the thread that had made the matter legible while it was live. He needed the last clean cut of that thread before it cooled. He needed to know what the matter had actually been about beneath its procedural surface, where that thread stood on Thursday afternoon, and what he should watch for first as the current world refreshed around him.

So she opened a blank document and wrote four short paragraphs.

She wrote what the matter had really been about. She wrote where that crucial thread stood at close. She wrote where the thread was likely to first reveal itself in the next phase.

Then she wrote the line that finally sounded like the matter she had actually been running: the younger brother does not yet know that October settled something for his sister that he still thinks is open; watch the first time they disagree on a procedural point. It will not be about procedure.

That was the note. Shorter than any handoff memo she had ever written. And the first honest transfer she had managed in four Saturdays of trying.

On Monday morning, Rohan will open it. The file will tell him what changed since the close that Thursday. The team will tell him what’s moved with the family. The current record will do its job.

But Anika’s note will let him enter higher. He will not see the matter exactly as she saw it. But he will not have to climb from ground level. He will step into it with the live thread in his hands.

A firm that can reliably create re-entry at altitude is doing more than improving handoff hygiene. It is learning how to make scarce senior-partner judgment travel — across phases, across successors, across clients, and eventually across partners and practices — without flattening it into process.

That is a much more interesting capability than most firms are currently discussing.

Every top firm will keep getting faster at the task layer. Documents will be reviewed faster. Summaries will improve. Diligence will compress. Translation and news updates may even become trivial. The record will get easier to access, easier to search, and easier to synthesize.

All of that matters. None of it is the real divide.

The real divide is forming somewhere else: between firms that merely accelerate tasks and firms that preserve and compound live judgment.

Firms where partners do not have to rebuild state from scratch after every interruption. Firms whose clients feel continuity at the seam instead of a dip in altitude.

And eventually, firms whose best partner judgment becomes a genuine one-firm capability.

In Indian law, where the hardest mandates are often shaped by fast-moving, nuanced dynamics — regulatory sequencing, capital structure, family succession, domestic and foreign markets, timing, and judgment across several moving threads at once — that difference will not stay hidden for long.

Senior partners have always known more than their memos could hold.

The firms that pull away over the next few years will not be the ones asking for better memos. They will be the ones that finally ask for the right object.

Bud Bhattacharyya is founder and principal of re:compound, where he works with senior leaders at expert-led firms on messy, ambiguous, high-stakes problems.



What Apollo Knew That Silicon Valley Forgot

What makes a good team isn’t what its members get right. It’s whether they get things wrong in different ways.

In the early 1960s, Apollo engineers were confronting a brutal navigation fact: no single instrument could tell the truth continuously. The inertial system of gyroscopes and accelerometers operated continuously, but accuracy drifted over time. Optical star sightings could be highly precise, but they were intermittent and required crew focus. Ground tracking was powerful, but it depended on geometry, coverage, and timing. The problem was not finding a perfect sensor. It was keeping the best possible estimate of the spacecraft’s state alive as different kinds of evidence came and went.

Around the same time, Rudolf Kalman gave engineers a new language for exactly that kind of problem: recursive state estimation under uncertainty. You do not ask which instrument is “the real one.” You maintain a running estimate of the hidden state — where the vehicle is, how fast it is moving, how your current estimate is likely to be drifting — and you update that estimate whenever new evidence arrives.

The Kalman filter is not just a clever averaging trick. It is a disciplined way of deciding, moment by moment, how much a new measurement should move your estimate of reality. A continuous source that tends to drift should not be trusted in the same way as an intermittent source that is locally precise. A ground-based reading during rendezvous should not be treated like a crew optical sighting during coast. The system works because different inputs fail differently, and because the governing logic knows that.

The algorithm has landed

The Kalman filter is an algorithm for combining imperfect, heterogeneous information. It takes readings from multiple sensors — each with its own characteristic pattern of error — and produces a single estimate of the truth that is better than any individual sensor could provide.

The key word is characteristic. The Kalman filter doesn’t just average its inputs, because the inputs are often fundamentally different. It maintains an explicit model of how each sensor is wrong. The IMU drifts over time: the filter knows this and trusts it less as time passes. The optical sextant is intermittent but precise: the filter waits for its readings and weights them heavily when they arrive. The ground signal is noisy but absolute: the filter uses it to correct accumulated drift.
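To make the mechanic concrete, here is a minimal one-dimensional sketch in Python. It is illustrative only: the noise figures are invented, a real inertial drift model would track a bias term rather than plain noise, and Apollo's actual implementation was far richer. But it shows the core move: each measurement shifts the estimate in proportion to how much that source deserves to be trusted.

```python
import numpy as np

# Minimal 1D Kalman-style filter (illustrative; not Apollo's actual code).
# Two sources that fail differently: a continuous but noisy "inertial"
# reading, and an intermittent but precise "optical" fix.

rng = np.random.default_rng(0)

true_pos = 0.0
x_est, p_est = 0.0, 1.0            # state estimate and its variance
q = 0.05                           # process noise: how fast the truth drifts
r_inertial, r_optical = 2.0, 0.1   # assumed measurement variances

def update(x, p, z, r):
    """Move the estimate toward measurement z, weighted by trust in z."""
    k = p / (p + r)                # Kalman gain: low r means high trust
    return x + k * (z - x), (1 - k) * p

for t in range(100):
    true_pos += rng.normal(0, np.sqrt(q))   # the world moves
    p_est += q                              # uncertainty grows between fixes

    # Continuous inertial-style reading: noisy, arrives every step.
    z = true_pos + rng.normal(0, np.sqrt(r_inertial))
    x_est, p_est = update(x_est, p_est, z, r_inertial)

    # Optical-style fix: precise, arrives only every tenth step,
    # and moves the estimate hard when it does.
    if t % 10 == 0:
        z = true_pos + rng.normal(0, np.sqrt(r_optical))
        x_est, p_est = update(x_est, p_est, z, r_optical)
```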

What makes this work is that these error patterns are not systematically correlated with one another.

When two sensors with correlated errors measure the same thing, they might agree, but that agreement is almost worthless. You cannot tell whether both are right or both are wrong in the same direction for the same reason.

Uncorrelated errors produce disagreement that a well-governed system can interpret. Correlated errors produce agreement that no system can see past.
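A few lines of simulation make the point quantitative. The numbers are arbitrary; the shape of the result is what matters. Averaging two independent sensors halves the error variance, while averaging two highly correlated ones barely improves it.

```python
import numpy as np

# Averaging two sensors halves error variance only when their errors are
# independent. With correlation rho, Var of the mean = (1 + rho) * s^2 / 2.

rng = np.random.default_rng(1)
sigma, rho, n = 1.0, 0.9, 200_000

# Independent errors: averaging genuinely reduces uncertainty.
e1 = rng.normal(0, sigma, n)
e2 = rng.normal(0, sigma, n)
print(np.var((e1 + e2) / 2))   # ~0.5

# Correlated errors: a shared component dominates both sensors.
shared = rng.normal(0, sigma, n)
f1 = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.normal(0, sigma, n)
f2 = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.normal(0, sigma, n)
print(np.var((f1 + f2) / 2))   # ~0.95: agreement, but little new information
```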

The agent-count fallacy

This is the part of the Apollo story Silicon Valley keeps forgetting: the goal is not to create more inputs. The goal is to improve the estimate. What matters is not how many signals are in the system, but whether they are differently wrong, and whether the governing logic knows how to exploit the difference.

That distinction matters now because much of today’s agentic AI boom is built on a simpler intuition: if one model is useful, several models interacting should be more useful. More debate. More critique. More synthesis. More apparent corroboration.

But that is only true if the additional agents contribute genuinely different error structure, and if the system can govern the interaction without losing the state of the work. Otherwise, you do not get a better estimate of reality. You get a busier system that mistakes repeated agreement for independent confirmation.

Let me scope this. The claim is not about all work.

When a task is well-specified, decomposable, or highly parallelizable — extract these fields, classify these documents, transform this code — more agents can help. The problem is largely defined before the work begins.

The claim is about work where the problem itself is unstable while you solve it: diagnosing an unclear failure, synthesizing a research landscape, figuring out what the strategy question really is before answering it.

In December 2025, Google Research, Google DeepMind, and MIT published a controlled study of 180 agent configurations across three model families. They found that multi-agent setups degraded performance by 39 to 70 percent on sequential reasoning tasks. Coordination overhead grew faster than linearly past three to four agents. Independent agents amplified errors up to 17 times. And once a single agent could reach 45 percent accuracy on a task, adding more agents produced diminishing or negative returns.

The reason is quite specific. You might think that giving agents different roles solves the problem. One agent could be the generator, another the critic, a third the synthesizer. But to get their characteristic errors to be genuinely uncorrelated, you need agents that, if I had to venture an informed guess, were shaped by fundamentally different developmental histories. Such differences are surprisingly scarce among current top models.

And when you combine highly correlated error sources, it doesn’t matter how many you add. You don’t reduce uncertainty. You amplify shared blind spots with increasing confidence.


The mathematics of getting things wrong

The math underneath this is simple enough to be dangerous.

Communication paths between N agents grow as N(N−1)/2. Three agents, three paths. Six agents, fifteen. Ten agents, forty-five. Useful reasoning capacity might grow linearly. Coordination tax grows quadratically. On hard knowledge problems, the curves cross early.
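A toy calculation shows how early the crossing can come. The per-agent capacity and per-path cost below are invented constants; the point is the shape of the curve, not the values.

```python
# Toy crossover: useful capacity grows linearly with agents, while the
# coordination tax grows with pairwise paths, N(N-1)/2. The constants
# here are made up for illustration.

capacity_per_agent = 1.0
cost_per_path = 0.3

for n in range(1, 11):
    paths = n * (n - 1) // 2
    net = capacity_per_agent * n - cost_per_path * paths
    print(n, paths, round(net, 2))
# With these constants, net value peaks at three to four agents,
# then declines as the quadratic tax overtakes the linear gain.
```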

In 1975, Frederick Brooks described something similar in human teams in The Mythical Man-Month: adding people to a late software project makes it later (credit to Ron Miller for seeing this connection first).

But the coordination cost isn’t even the real problem. The real problem is the error structure.

A single powerful model has a characteristic error profile. It gets certain things wrong in certain ways. Make it more powerful and it gets those things wrong less often, but the shape of its wrongness doesn’t change. It’s like improving the resolution of a camera that’s pointed in slightly the wrong direction. The pictures get sharper. They don’t turn the camera.

Now replicate that model three times and have the copies converse. You have three cameras pointed in the same direction. They’ll agree enthusiastically on what they see. Their shared blind spot is invisible to all of them because it’s the same blind spot. And their agreement will feel like corroboration when it’s actually just correlation.

This is the architecture of most multi-agent AI systems being built today. And it is, in my assessment, a terrible configuration for finding the truth in complex, ambiguous, high-stakes situations.

The smallest architecture that works

I was not surprised to find that my lived experience as a practitioner matches the math. For complex problems, the optimal configuration is often three agents with genuinely uncorrelated errors, under clear governance, connected by a communication protocol that is sparse, explicit, and checkpointed.

Further: explicit checkpoints between phases rather than agents reacting to each other’s transient output. Sparse communication topology — chains and hubs before full meshes. And critically, one clear governor deciding which branch survives, what counts as evidence, and whether the system is actually converging or just becoming more articulate.

That governor role is the real bottleneck. The models are good enough that generation isn’t the constraint. Governing the loop is. And on most serious work today, that governor is still a human — not because the models are useless, but almost the opposite. They’re useful enough that the missing function becomes obvious.
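To make the topology explicit, here is a deliberately toy sketch. The three agents are stubs standing in for real models, and the governor is reduced to a stopping rule; nothing here is a published protocol. What it shows is the structure: agents communicate only through explicit checkpoints, the topology is a chain rather than a mesh, and a single governor decides when the loop ends.

```python
from dataclasses import dataclass

# Toy sketch of a sparse, checkpointed chain with one governor.
# The agent functions are stubs standing in for real models with
# genuinely different error profiles.

@dataclass
class Checkpoint:
    phase: str      # which step produced this
    content: str    # the explicit, durable output of that step

def generate(task: str) -> Checkpoint:
    return Checkpoint("generate", f"candidate answer for: {task}")

def critique(cp: Checkpoint) -> Checkpoint:
    return Checkpoint("critique", f"objections to: {cp.content}")

def synthesize(gen: Checkpoint, crit: Checkpoint) -> Checkpoint:
    return Checkpoint("synthesize", f"{gen.content}, revised under {crit.content}")

def governor_accepts(history: list[Checkpoint]) -> bool:
    # The governor (on serious work today, usually a human) decides
    # which branch survives and whether the loop is converging.
    # Stubbed here as a simple round limit.
    return sum(cp.phase == "synthesize" for cp in history) >= 2

history: list[Checkpoint] = [generate("the strategy question")]
while not governor_accepts(history):
    crit = critique(history[-1])       # agents react to checkpoints,
    history.append(crit)               # never to transient output
    history.append(synthesize(history[0], crit))

final = history[-1]                    # the last surviving checkpoint
```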

The same principle shows up in human teams. A team of three strategists will produce elegant strategy with shared blind spots. A team comprising a strategist, an operator, and a technologist — three “agents” with genuinely different error profiles and “training” histories — will produce something less polished but more likely to survive contact with reality.


The uncomfortable implication

If this is right, then most multi-agent AI systems are being built with exactly the wrong design — adding more similar agents to the pool, instead of thinking about how each agent is differently shaped, and whether the governance conditions are sufficient to exploit the difference.

A hundred agents drawn from the same model family, debating in a shared state, will produce the most confident, most articulate, most thoroughly corroborated wrong answer you have ever seen. A single human and a single AI, each wrong about different things, under governance conditions that maintain honesty, mutual admissibility, purpose, and calibrated self-uncertainty, will find the truth more often — because the truth is what’s left when uncorrelated errors are systematically scrubbed.

NASA understood this in 1960. They didn’t try to build a perfect sensor. They built a system composed of sensors that were imperfect in different ways, governed by an algorithm that knew exactly how to exploit those differences.

It might be time to remember what Apollo’s engineers understood so well.



The AI Layoff Trap Is a Fiduciary Problem

Over-automation destroys value for shareholders, not just for workers.

A paper making the rounds in economics circles this spring deserves more serious attention from boards than it has been getting. The AI Layoff Trap, by Brett Hemenway Falk at the University of Pennsylvania and Gerry Tsoukalas at Boston University, models what happens when firms in a competitive industry each decide how much of their workforce to replace with AI. The math is clean and the conclusion unsettling. Under competition, each firm’s privately optimal automation rate exceeds the rate that would maximize aggregate industry profit. Every firm captures its own cost savings but bears only a fraction of the demand destruction that displaced workers’ lost spending creates. The result is a Prisoner’s Dilemma. Rational firms, with perfect foresight, race past the point where automation still serves their own collective interest. Standard policy instruments — universal basic income, capital income taxes, worker equity, upskilling programs, Coasian bargaining — fail to correct the distortion because they operate on the wrong margin. Only a Pigouvian tax on automation, the authors conclude, can realign private and social incentives.

The intelligent public reading of the paper, distinct from the social-media mob version, has tended to emphasize two findings. The first is the Prisoner’s Dilemma structure itself — the elegant mathematical demonstration that rationality plus competition produces collective self-destruction. The second is the policy conclusion that market mechanisms cannot self-correct. Serious commentary has mostly treated the paper as a labor problem awaiting a public-policy response, or as a macroeconomic warning about where aggregate employment is heading if nothing intervenes.

What has received less attention are the underlying fiduciary implications. Falk and Tsoukalas show that over-automation is not a transfer from workers to shareholders. It is deadweight loss. Both sides are worse off. Their Proposition 2 establishes Pareto dominance — the Nash equilibrium where firms compete themselves into excessive automation is strictly dominated by the cooperative outcome, even when a planner places zero weight on worker welfare. Workers lose wage income directly. Firm owners lose profits. No redistribution between the two groups can make the Nash outcome efficient. The destroyed value is captured by no one.
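For readers who want the game-theoretic shape without the paper's machinery, a toy two-firm version looks like this. The numbers are mine, invented for illustration and not drawn from Falk and Tsoukalas; only their ordering matters.

```python
# Toy two-firm payoff matrix with invented numbers (not from the paper).
# Each firm chooses to restrain or automate. Automating saves costs
# privately but destroys demand that both firms sell into.

payoffs = {
    # (firm_A, firm_B): (profit_A, profit_B)
    ("restrain", "restrain"): (10, 10),
    ("restrain", "automate"): (6, 12),
    ("automate", "restrain"): (12, 6),
    ("automate", "automate"): (7, 7),
}

# Whatever B does, A earns more by automating (12 > 10 and 7 > 6), and
# symmetrically for B. So (automate, automate) is the Nash equilibrium,
# even though (restrain, restrain) pays both firms strictly more.
for b_choice in ("restrain", "automate"):
    assert payoffs[("automate", b_choice)][0] > payoffs[("restrain", b_choice)][0]
assert payoffs[("restrain", "restrain")] > payoffs[("automate", "automate")]
```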

The Trap Is a Fiduciary Problem

A note on terminology before the argument. I am using “fiduciary” in the substantive sense in which serious governance practice understands the duty to shareholders, which is broader than what is legally enforceable under the business-judgment rule.

Read properly, the paper does not describe a labor market externality that boards can ignore while policy sorts out compensation for displaced workers. It describes a fiduciary problem. Boards that approve aggressive automation programs under the standard fiduciary framework — cost savings good, margin improvement good, narrative of strategic focus good — participate in an industry-level race in which their own shareholders are among the losers. The fiduciary case against over-automation is stronger than the compassion case, because the math shows that compassion and fiduciary rigor point at the same answer rather than at different ones.

This is not a small reframe. The standard fiduciary read on AI-driven layoffs is that they release capital that boards have a duty to redeploy to its highest-value use. The Falk-Tsoukalas math says that when every firm in an industry does this simultaneously, the redeployed capital produces lower aggregate profit than if firms had collectively restrained. The phrase “duty to redeploy” is doing uninspected work. Duty to redeploy where? Measured against what benchmark?

The argument that this is a genuine fiduciary problem and not a public policy problem rests on recognizing that the aggregate mechanism is not the only mechanism at work. Individual firms face a specific near-term fiduciary concern that the paper’s long-run aggregate math does not directly address, and a specific long-run fiduciary opportunity that the paper’s model cannot capture because it treats automation as a scalar choice.

What the Math Actually Requires

Start with what automation does in the Falk-Tsoukalas model. Firms choose a single number — the fraction of tasks to automate. The cost savings are bounded by the labor costs being replaced. The demand destruction is proportional to the wages displaced. Under these assumptions, the race-to-the-bottom logic is clean and the policy conclusion is the natural corrective.

The assumption doing the most work is scalar automation. Firms in the model face one dimension of choice: how much to automate. In reality, firms face a different and richer choice: not how much, but how to deploy the gains.

Consider three options for what a firm does with the capital released by automating a portion of its workforce. The first is to return the savings to shareholders through dividends, buybacks, or margin improvement — the default in the absence of explicit governance. The second is to recompound the savings into deeper task-layer automation, spending what the first round freed on the next round of efficiency gains. The third is to redeploy the savings into the altitudes of work that the task layer has been chronically understaffing: senior relationship management, judgment under complexity, mentorship that develops next-generation capability, institutional memory that compounds across decades, cross-domain translation that surfaces non-obvious opportunities, the formation of the human capital that distinguishes a firm that survives a crisis from one that does not.

The first two options fit the Falk-Tsoukalas model. They treat the automation gain as either cash to distribute or budget to reinvest in more automation. The third option is qualitatively different. It reinvests the gain into human capability at a level the firm could not previously afford to staff because the task layer was absorbing all available labor.

The shift from task layer to altitude layer is not a humanist supplement to fiduciary analysis. It is what the fiduciary math actually requires, for a structural reason that makes the case sharper than a general preference for capability over cost.

Cost reduction has a low and well-defined ceiling. You cannot save more than the labor costs you replace, and in practice you save less because some human work does not automate at useful quality. The savings are bounded, knowable, and fully priced by the market the moment the layoff is announced.

Capability investment is also bounded — diminishing returns are real and the asymptote is finite. But the asymptote sits at a much higher level than the cost-reduction ceiling, and it is currently unknown because the AI-paired human capability multiplier is still being established. The value of superior senior judgment, durable client relationships, institutional knowledge that compounds, mentorship chains that build successor capability — these scale with the quality and intensity of investment rather than with the size of the workforce being replaced, and they have not yet been priced because the multipliers are still being sorted out.

In a stable economic regime, this asymmetric payoff structure would still favor capability investment in principle but would be difficult to price in practice. You would roughly know what a great senior banker was worth, what a great research scientist was worth, what a great relationship manager was worth. The ceiling on capability returns would be high but established, and competitors would converge on it over time.

That is not the regime we are in. AI changes the multiplier on human capability itself. A senior partner who has figured out how to deploy AI at the judgment layer is not worth 1.2x what a senior partner was worth five years ago. The multiplier could be 3x, or 10x, or something else entirely. The firms that figure out what the AI-paired human capability multiplier looks like for their industry are establishing what the new ceiling is. Competitors who figure it out later have to rebuild capability against firms that have been compounding for years. First-mover advantage in capability formation is specifically what makes the altitude-redeployment choice fiduciarily sharp right now, in ways it would not have been in a stable multiplier regime and will not be a decade from now.

So the fiduciary choice facing any board approving AI-driven capital allocation is not cost-reduction-for-shareholders versus capability-investment-for-workers. It is a bounded, known, fully-priced return on the one hand and an unbounded, unknown, first-mover option on the new ceiling on the other. Under the asymmetric-payoff structure, it is not even close.

What the Paper Got Right, What It Held Fixed

I want to be precise about what I am arguing and what I am not.

Falk and Tsoukalas are correct that uniform automation under current competitive conditions is a Prisoner’s Dilemma producing deadweight loss. The mathematics of their Proposition 1 and the Prisoner’s Dilemma characterization in Section 3.2 are sound. The demonstration that the over-automation wedge is strictly increasing in the number of competitors is a real and important result. The proof that standard policy instruments — upskilling, UBI, capital income taxes, worker equity, Coasian bargaining — fail to correct the distortion because they operate on profit levels rather than on the per-task automation margin is correct. The Red Queen dynamic in which higher AI productivity widens rather than narrows the wedge is robust.

The paper’s mechanism belongs to a tradition of aggregate demand externality analysis that runs from Keynes through modern macroeconomics. The specific multilateral-externality structure is grounded in the big-push literature of Rosenstein-Rodan and Murphy, Shleifer, and Vishny, and in Cooper and John’s coordination-failure framework. What Falk and Tsoukalas add is bringing this tradition into the task-based automation literature of Acemoglu and Restrepo, where the aggregate-demand feedback channel has been structurally absent on the implicit assumption that wage flexibility and new task creation keep the labor-market-product-market loop stable. The paper is rigorous, its contribution is real, and it is being absorbed into the discourse less thoughtfully than it deserves.

What the paper holds fixed is deployment. Automation in their model is a quantity, not a direction. The paper concludes that only a Pigouvian tax can correct the distortion because within a scalar-automation framework, every private response operates on the wrong margin. Introduce deployment heterogeneity and the picture changes. Firms that redeploy gains to altitudes are not merely picking a different point on the scalar-automation curve. They are stepping off the curve entirely, into a space the paper’s equilibrium analysis does not capture.

Nothing in this argument requires the paper to be wrong. It requires only that the paper’s model abstracts from a firm-level choice that materially affects firm-level welfare — and that this abstraction, reasonable for the paper’s purposes, is precisely where the fiduciary argument lives.

There is also a second-order point worth making briefly. The Falk-Tsoukalas catastrophe depends on parameter choices that a recent replication by Jeremy McEntire shows are substantively contested. Under equally defensible parameters closer to realistic sectoral structures, the same model produces stability or mild under-automation rather than catastrophe. I will not take a position on which parameter set is closer to reality, because the fiduciary argument I am making does not depend on it. Whether the aggregate mechanism produces catastrophic, modest, or negligible deadweight loss, the firm-level choice between bounded-floor cost reduction and unbounded-ceiling capability investment during a multiplier-discovery window remains structurally what it is.

What This Looks Like in Practice

Consider HSBC, which is executing the most aggressive AI-driven restructuring of any major global bank and has been clear about what it is doing. The firm has announced workforce reductions of up to 20,000 positions, created a Chief AI Officer role for the first time, targeted roughly $1.5 billion in annualized cost savings by the end of 2026, and explicitly described the restructuring as making the matrix simpler. CEO Georges Elhedery has publicly framed the strategy as “keeping human judgement, decision-making, and accountability at the core” — language whose meaning depends entirely on what proportions the freed capital is allocated across.

The HSBC board faces, in concentrated form, the proportions question every board approving AI-driven workforce reduction is facing.

The default proportion captures the savings as margin improvement and returns the capital to shareholders through dividends or buybacks. By the Falk-Tsoukalas analysis, this contributes to the industry-level race that imposes demand losses across the ecosystem, including on HSBC’s own retail banking customer base. By the fiduciary argument developed above, it trades the known bounded savings for the foregone first-mover return on what HSBC could be building with that capital at this specific moment.

A second proportion redeploys the savings into another round of task-layer automation. Deeper AI in compliance, in customer service, in risk modeling, in middle-office operations. This compounds the cost-reduction story and accelerates the industry-level trap. It does nothing for the bank’s capability at the altitudes where genuine competitive advantage lives — senior relationship management, complex deal structuring, cross-border navigation, regulatory foresight, the specific judgment that distinguishes a bank that survives the next crisis from one that does not.

The third proportion deliberately redeploys freed resources into the altitudes the task layer has been suppressing. Relationship managers who finally have time to know their clients rather than process their documents. Senior credit officers who can develop integrative judgment rather than rubber-stamp model outputs. Risk functions that can see across markets rather than drown in compliance workflow. Management teams that can think three years ahead rather than managing this quarter. Mentorship chains that build next-generation senior bankers rather than the current attrition curve where the task layer burns people out before judgment can develop. Relationship depth, judgment quality, institutional memory — the things AI-paired human capability is newly able to multiply in ways that have not yet been priced because the multipliers are still being sorted out.

The fiduciary-sound choice is to weight the third proportion much more heavily than the default would. And it is exactly the kind of governance decision that does not get made without explicit board attention, because every local operational incentive pushes toward the first two. The board is the only place where the long-horizon capability question can be held against the quarterly cost-reduction pressure. This is what the Technology and Operations Committee of a bank like HSBC is specifically positioned to govern, and it is what most boards facing similar choices are not currently governing.

What IBM Is Doing

Against the backdrop of the largest single-year wave of technology-sector layoffs on record, IBM is doing the opposite of what most of its peers are doing.

In 2025, IBM’s stock rose approximately 38 percent, meaningfully outperforming both the Dow Jones Industrial Average and the S&P 500. The gain was not driven by layoff-and-buyback financial engineering. It was driven by measurable acceleration of IBM’s AI book of business — from approximately $1 billion in Q1 2024 to $5 billion by end of 2024 to over $12 billion by end of 2025 — anchored in a specific strategy that CEO Arvind Krishna has been publicly articulating for several years. IBM is not building frontier models. It is not participating in the hyperscaler data-center arms race. It is using AI to unlock enterprise productivity in regulated industries through its watsonx platform and its consulting business, with consulting engagements designing compliance and judgment-layer solutions and watsonx implementing them.

In February 2026, Chief Human Resources Officer Nickle LaMoreaux announced that IBM would triple US entry-level hiring in 2026. Her framing, at the Charter Leading with AI Summit, was explicit: “And yes, it’s for all these jobs that we’re being told AI can do.” The jobs would look different — developers spending less time on standard coding and more on customer interaction, HR staff supervising AI outputs rather than processing tickets — but the hiring commitment was specifically counter-cyclical to the industry pattern.

LaMoreaux’s reasoning, publicly stated, was the multiplier-discovery argument: “If we don’t continue to invest in entry-level hires, what happens in 3–5 years? There’s no pipeline; the well simply dries up.” This is exactly the first-mover-window logic. IBM is betting that the firms skipping entry-level capability investment now will face talent-pipeline collapse by 2028–2030, and that the firms building that pipeline now will have the capability base their competitors cannot quickly replicate.

IBM’s 2026 stock performance has been complicated, and I want to acknowledge the complication honestly rather than present a cleaner story than the facts support. The stock is down roughly 22 percent year to date as of late April. The decline reflects specific pressures on IBM’s legacy mainframe business (Anthropic’s claim that Claude Code could modernize COBOL triggered a 13 percent single-day drop in February) and the market’s reaction to recent earnings. Q1 2026 earnings beat top and bottom line expectations but did not produce the software-segment acceleration analysts had hoped for; the stock fell roughly 9 percent on the reaction. The decline is not the market punishing the capability-investment strategy. The 2025 outperformance was the market rewarding it, and the 2026 pressure comes from specific legacy-business and software-segment concerns unrelated to whether the strategy is right.

What IBM illustrates is that the altitude-redeployment strategy is not theoretical. It is being executed by a major public firm with publicly stated reasoning that matches the first-mover-window argument precisely. The market rewarded it in 2025 when its results became measurable; the 2026 pressure, as noted above, reflects legacy-business concerns rather than a verdict on the strategy. The fiduciary case for the strategy does not rest on IBM’s month-to-month stock performance. It rests on whether, five years from now, the firms that invested in capability during the multiplier-discovery window have compounded advantages their competitors cannot catch up to. IBM has placed that bet explicitly. Most of its peers have placed the opposite bet explicitly.

The Near-Term Fiduciary Concern

There is also a shorter-horizon fiduciary issue that boards approving AI-driven layoffs should be pricing and mostly are not. This concern is independent of the long-horizon capability bet. It binds in the next twelve to twenty-four months on the specific decisions being made right now, regardless of how the multiplier-discovery question resolves.

Forrester Research’s Predictions 2026 report projects that roughly half of AI-attributed layoffs across all functions will be quietly reversed within twelve months, with jobs returning offshore or at lower wages. The prediction is not speculation. It rests on Forrester’s survey finding that 55 percent of employers already regret their AI-related layoff decisions, on documented reversal cases including Klarna’s high-profile rehiring of customer service agents after its AI replacement failed on quality, and on measured workforce AI-readiness gaps showing that only 16 percent of workers had the skills needed to work effectively with AI in 2025 and only 23 percent of organizations were investing in the training that would change this. Gartner, working from different methodology, forecasts that 50 percent of companies that attributed customer service headcount reductions to AI will rehire staff by 2027. The two findings have different denominators and different time horizons, but they describe overlapping phenomena and they converge on the same operational picture: a substantial share of AI-attributed layoffs is going to be reversed within the near term.

Across the population of AI-attributed layoffs being approved today, the aggregate reversal rate is going to land near a coin flip. The probability for any specific layoff depends on the function, the AI quality, and the training investment, with customer service and data-heavy back-office work at the high end and senior judgment work essentially unaffected. But for the population as a whole, this is not a tail risk. When the reversals happen, they will happen at cost — severance already paid, rehiring costs, productivity gaps during the intervening period, reputational damage from the implicit admission that the original decision was wrong, and the specific human cost of careers destroyed for savings that did not materialize.

This is fiduciary evidence, not just operational risk. The expected value of an AI-driven layoff decision approved today is meaningfully lower than what appears on the original board memo, because the memo is almost never pricing the reversal probability in. A board doing fiduciary analysis in the substantive sense is obligated to include known reversal probabilities in the expected-value calculation. Most are not. This is a specific failure of the analytical framework, and it binds independently of any view about long-horizon capability dynamics.
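A back-of-envelope version of that expected-value calculation, using entirely hypothetical numbers, shows how much a priced-in reversal probability can move the answer:

```python
# Hypothetical numbers: what a board memo shows versus what it shows
# once a reversal probability is priced in.

projected_savings = 100.0   # annualized savings on the memo (units arbitrary)
p_reversal = 0.5            # Forrester/Gartner-order reversal probability
reversal_cost = 60.0        # severance, rehiring, productivity gap, reputation
partial_savings = 30.0      # savings captured before the reversal

ev_memo = projected_savings
ev_priced = ((1 - p_reversal) * projected_savings
             + p_reversal * (partial_savings - reversal_cost))

print(ev_memo)    # 100.0
print(ev_priced)  # 35.0 -- the same decision, roughly a third as valuable
```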

What Boards Should Actually Do

The governance implication of all of this is specific. Boards governing AI-driven capital allocation should ask, and require management to answer, questions that current governance processes typically do not surface.

The first is where the freed capital is actually going. Not at the level of the approved budget category, which will always show “strategic reinvestment” or “productivity capital” or some equivalent, but at the level of the actual deployment. How much of the savings is flowing to margin improvement or shareholder returns, how much to additional task-layer automation, and how much to investment in altitude capabilities that the task layer has been suppressing? The three options are different paths with different fiduciary implications, and boards that treat them as interchangeable under the aggregate heading of “redeployment” are not governing the decision that actually matters.

The second is what capability the firm is building during the multiplier-discovery window. AI-paired human capability is being established now across industries. The firms that figure out what the new multipliers look like for their sector are establishing the ceiling competitors will have to catch up to. Boards should be able to answer what their firm is doing, specifically, to figure out the multiplier for its industry and what capability base it is building against the possibility that the multiplier turns out to be large.

The third is what the reversal probability is on the layoffs being approved. If Forrester and Gartner are right that roughly half of AI-driven layoffs get reversed within a year, boards approving such layoffs should be explicitly pricing that probability. The correct fiduciary question is not “will these layoffs produce the savings management is projecting” but “what is the probability-weighted expected value, including the specific costs of reversal in the scenarios where the projected AI capability does not materialize at expected quality.”

The fourth is whether the decision is being made against a benchmark that actually matches the moment. The traditional fiduciary framework rewards layoff-and-buyback patterns because they have historically correlated with margin improvement and shareholder returns. That correlation held under different industry conditions than the ones that apply during a technological transition that is establishing new capability multipliers. Boards applying the historical framework to the current moment are optimizing against a benchmark that the moment itself has invalidated.

These are not soft-governance questions. They are fiduciary questions at the specific margin where automation decisions are actually made. The boards that see them and act on them will govern the next decade of their industries differently from the boards that do not. Twenty years from now, the distinction between firms that used AI to build capability and firms that used AI to strip cost will not look like a difference in style or values. It will look like the difference between firms that are still in the industry and firms that are not.

The AI layoff trap is a fiduciary problem. The fiduciary case against over-automation is stronger than the compassion case, not despite the math but because of it. The firms that see this and act on it — by redeploying automation gains to the altitudes where AI-paired human capability is now newly multiplicable, by pricing the reversal probability in their near-term decisions, by governing the question of where the freed capital actually goes rather than accepting the default answer — are making the choice their boards are specifically positioned to make. The rest are waiting for a Pigouvian tax that Falk and Tsoukalas can show is necessary but that no actual policy apparatus is close to delivering. By the time the policy catches up, the firms that acted on the fiduciary case will have compounded for a decade against the firms that did not.

That is the window that is open now. It will not be open indefinitely.

Bud Bhattacharyya is founder and principal of re:compound. BA Economics and BS Computer Science, University of Pennsylvania. MBA, Harvard Business School.



The Most Expensive Thing Your Organization Throws Away Every Day

AI made generation cheap. The new bottleneck is preserving the state of understanding from which the next real move can begin.

Most knowledge is not lost in dramatic moments. It dies in ordinary rooms, after real progress, when no one preserves the conditions for continuing it.

We have a childish picture of knowledge loss.

We imagine flames. We imagine conquerors burning libraries. We imagine a brilliant mathematician dying with the proof half-finished.

Those losses are real. They deserve to be mourned.

But they are not where most knowledge is destroyed.

Most knowledge dies in much more ordinary places: conference rooms, Zoom calls, strategy offsites, research sessions, late-night working meetings that seemed productive at the time. It dies whenever two or three people, thinking together, produce something none of them walked in with — and then fail to preserve it in a form that can actually be continued.

What gets lost is not information.

We have never been better at recording information. Every meeting can be transcribed. Every conversation can be summarized. Every action item can be logged, tagged, and filed. Organizations today can produce immaculate records of what happened.

And yet the thing that mattered most is usually missing: the live, half-formed, still-moving understanding that made the next non-obvious move possible.

That is the waste.

And I think it is one of the most expensive forms of waste in modern organizations.

A room becomes smarter than anyone in it

Take a productive meeting. Not the kind everyone hates. A genuinely good one.

Four people are working on a problem that has resisted easy answers. Forty minutes in, something shifts. Two ideas that had been living in separate corners of the organization suddenly collide. A third thing appears: a reframing, a connection, a new way of seeing the problem.

The room changes temperature.

People interrupt each other, but in the good way. Someone starts a sentence that someone else finishes. Objections stop being defensive and start becoming constructive. For ten minutes, the room is thinking at a level none of the individuals in it could have reached alone.

If you have done serious work — consulting, research, product strategy, investing, science, anything where judgment matters more than procedure — you know this moment.

A room becomes smarter than anyone in it.

Now watch what happens next.

The first thing that dies is the thought that had not yet finished forming

Meetings end on the schedule of calendars, not on the schedule of insight.

So the first casualty is usually the thing that was still becoming.

Not an unassigned action item. Something deeper. A connection that was one exchange away from becoming explicit. A reframing that needed another five minutes of pressure to crystallize. An integration that was still moving when the clock ran out.

This is the hardest loss to see, because the thing was never fully visible. No one can point to the transcript and say, “There. That is what we lost.”

It was not a conclusion that failed to get recorded. It was a process that failed to complete.

And because it never fully arrived, it never gets mourned. It simply vanishes, as if it had never been forming at all.

The second thing that dies is the insight that arrived from the side

In the middle of the productive stretch, someone says something that opens a door onto an entirely different problem.

It was not on the agenda. It was not why anyone came. But it is real — a genuine flash of clarity about something that has been opaque for months, made possible by this particular collision of people, context, and timing.

Everyone feels it.

Someone says, “That’s interesting — we should come back to that.”

They never do.

The meeting has an objective, and this wasn’t it. The insight becomes nobody’s responsibility. It does not appear in the next steps. It does not appear in the summary. Within days it has dissolved back into the ambient noise from which it briefly emerged.

Organizations lose extraordinary amounts of knowledge this way: not by rejecting good ideas, but by failing to catch the ones that arrive sideways.

The third thing that dies is the shared state that made the insight possible

The people in that room did not produce the breakthrough from a standing start.

They arrived carrying prior conversations, failed attempts, broken assumptions, tacit background, accumulated frustration, partial pattern-recognition, and a sense — often hard to verbalize — of where the dead ends already were.

In other words, the room was not just a set of individuals with opinions. It was a temporary cognitive system with a shared state.

That state took time to build.

A week later, someone opens the meeting summary and tries to pick up the thread.

The summary says the right words. It records the decisions. It captures the recommendations. It lists the next steps.

But it does not preserve the state that made those words meaningful.

“Explore the partnership model” may have been electric on Tuesday. By the following Monday, in a document, it is inert. The phrase survived. The significance did not.

Organizations are constantly preserving the nouns and losing the meaning.

Then comes the most familiar mistake: the summary

Someone writes the meeting up. They do a good job. It is clear, organized, thorough. By conventional standards, it is excellent.

And it is almost useless for the purpose that matters most.

Because a summary is an artifact.

It is a polished, self-contained record of what happened.

But what the work actually needed was not a record. It needed a restart mechanism. Something that could bring a person — or better, a group — back into the state from which the next real move could emerge.

Those are different objects.

One is designed to inform.
The other is designed for re-entry.

We are very good at producing the first kind.
We are astonishingly bad at producing the second.

So organizations do something perverse every day: they get close to a real integration, fail to preserve the conditions for continuing it, and then file an excellent summary documenting that it once existed.

A summary is built to inform. A checkpoint is built to resume.

This is not just a meeting problem

It happens everywhere serious thought happens.

You read a book and, on page 147, the author’s argument collides with something you have been wrestling with for months. For a few seconds, something new exists in your mind that neither you nor the author could have produced alone.

Then you put the book down.

You may remember that something interesting happened. You will probably not be able to reconstruct the exact integration that was forming.

It happens in conversations that change how you see something and then evaporate before you can hold them.

It happens in lectures where a student’s half-formed question contains the seed of an idea the professor has not yet had.

It happens in marriages, research labs, boardrooms, hospitals, design reviews, investment committees, and founders’ walks home from dinner.

This is one of the most common forms of knowledge destruction in human life.

It’s not very dramatic, and it’s rarely mourned.

But it’s constant. And at a civilizational scale.

Why this matters much more now

For most of history, the main bottleneck in knowledge work was generation.

Writing the first draft was hard. Running the analysis was hard. Producing the options was hard. Gathering the research was hard. Even a mediocre synthesis could take serious time and effort.

That bottleneck has now broken.

AI has made generation cheap.

Cheap drafts. Cheap analyses. Cheap summaries. Cheap options. Cheap recommendations. Cheap text everywhere.

Most organizations think this means they are entering an age of accelerated intelligence.

Some are.

Many are simply entering an age of accelerated artifact production.

That is not the same thing.

If you pour cheap generation into a system that does not know how to preserve live understanding, you do not get compounding intelligence. You get a prolific amnesiac.

And this is why the waste I have described is about to become far more expensive.

The more candidate outputs you can generate, the more pressure you place on the genuinely scarce functions: evaluation, integration, continuation, and memory of the right kind.

We are generating faster than we can integrate.

That is the real constraint now.

Not production. Not documentation. Not even access to ideas.

The real constraint is whether a person or an organization can recognize when something genuinely new has emerged, let it change the shared understanding, and preserve enough of that changed state for the next integration to begin.

The companies that solve this will not merely move faster. They will compound understanding.

The ones that do not will produce more, summarize more, archive more, and understand less.

The distinction that matters: artifact vs checkpoint

I think one of the least visible and most important distinctions in knowledge work is the distinction between an artifact and a checkpoint.

An artifact is a finished object. A report. A summary. A deck. A memo. Something that stands alone.

A checkpoint is different. A checkpoint is built not primarily to inform, but to resume. Its job is to shorten the path back to the state where the work became interesting.

A good checkpoint does not merely tell you what was said.

It helps you recover things like: the question the room was actually answering by the end, which is rarely the one on the agenda; the assumptions that broke along the way, and what broke them; the connection that was still forming when the clock ran out; and where the pressure was building, so the next move can start from there.

That is a different discipline from note-taking.
It is a different discipline from documentation.
And it is very different from asking AI for “a good summary.”

Most organizations do not know the difference yet.

That is why so much expensive intelligence leaves so little residue.

The new frontier is not generation

The organizations that win in the next decade will not be the ones that produce the most text, the most slides, or the most AI outputs.

They will be the ones that learn how to preserve, re-enter, and continue real thought.

That sounds abstract until you notice how much money is already being burned by the opposite.

A strategy team has the meeting that almost got there.
A research group has the conversation that nearly cracked the frame.
A founder and an operator finally see the real bottleneck — and then lose it to the weekly cadence.
A company buys AI tools to accelerate work and ends up accelerating the production of artifacts into systems that still do not know how to remember.

This is not a side issue in knowledge work.

It is the issue.

Because the most valuable thing an organization ever produces is often not the deliverable.

It is the changed state of understanding from which better deliverables, better decisions, and better discoveries become possible.

And that is the thing we are currently worst at keeping.

The first step is just to see the waste clearly.

To see that every time a room reached a higher level of understanding and no one preserved the conditions for continuing it, something valuable was not merely undocumented.

It was destroyed.

And then an excellent summary was filed in its place.

Closing note

I’m interested in a problem most organizations still cannot see clearly: how to preserve and continue real thinking now that AI has made generation cheap. That problem sits at the intersection of strategy, workflow, memory, and judgment. It is also increasingly central to my consulting work. Reply here, DM me, or visit my practice website if you’re working on it too.



The Constant of Integration

On emotion, mathematics, and the specific thing that cannot be transmitted when two people feel the same thing.

There is a particular kind of evening in our house. The kids are asleep. My wife and I are in the kitchen or on the couch, and one of us has just gotten off the phone with a parent — mine in India, hers in Australia — and we are sitting with whatever that call left behind. No parent is sick tonight. But all are older than they were last year, and they live on opposite sides of the world from us and from each other, and the children we are raising together have grandparents they will know mostly through screens and the occasional long-haul flight.

What happens in those evenings is hard to describe precisely, which is why I am trying.

My wife and I are feeling the same thing. I can tell from how she is sitting. She can tell from how I am sitting. The feeling is shared, really shared. It’s not performed for each other, not politely matched. The specific weight of having elderly parents on the wrong side of the planet while raising small children in a third country is pressing on both of us in the same way.

And at the same time, in the same silence, we both know we are not feeling it from the same place.

Her parents are her parents. She grew up in Queensland, in a family I know only through her. My parents are my parents. I was born in Kolkata, moved to England as a small child, moved back to India at twelve, came to the United States at eighteen, and the specific shape of what it means to have my mother and father growing old in Kolkata while I am in Atlanta is a shape she understands from the outside, with love and with effort, but not from the inside. Our children are mixed. Our worries about our parents are, in one sense, a single family worry — and in another sense, completely, structurally, not.

Neither of us tries to close the distance on those evenings. I think we both know, without saying so, that the distance is not a failure of communication. It is something else. It was there before we knew we both felt it. It was still there after. We can feel the same thing and still not be in the same place.

I want to explain why.

The observation is everywhere once you look for it

Once I started paying attention, I began noticing the same structure in ordinary places.

A book you read at twenty-five does not land the same way at forty. The words are identical. What you bring to them is not. Some books stay shut to you until the thing they are about has happened to you, and then, suddenly, they are open — and they are not the same book other people are reading.

Telling someone what happened to you is not the same as them knowing what it was like. You can describe the hospital room in detail. They can understand every sentence. They cannot be where you were while it was happening, and the gap between understanding and having stood there is exactly the gap that matters.

Two people who have been married for forty years can sit in a kitchen and not speak and be in the same place. Two strangers at a dinner party can talk to each other all night and be in different ones. We usually say the couple communicates well. But that is imprecise. What the couple has is enough shared context to make the communication work.

These are not different observations. They are the same observation viewed from different angles. There is a specific structural property at work, and I want to get precise about what it is.


The math names what it is

There is a mathematical structure that names what this is, and if you will bear with me briefly, I will show you why it matters. If you already know calculus intuitively, skip ahead a few paragraphs.

Consider a ball rolling past you at this very moment at a given speed. You know how fast it is moving. From the speed alone, you cannot tell where it started. It could have started its roll ten feet away or a hundred feet away. It could have been rolling for three seconds or for three minutes. The speed tells you everything about how the ball is changing. It tells you nothing about where it began.

In the language of calculus, what you have there is a rate of change. The operation that gives you a rate of change from a full description of where the ball has been is called differentiation. Differentiation extracts how something is changing and discards where it started. The inverse operation, integration, tries to do the reverse: to recover the full story, given only the rate of change. But the starting point is gone. Integration cannot get it back. It can only tell you that the ball has been moving this way, along one of a family of possible paths that all have the same shape and sit at different heights. To pick the actual path, you need something outside the rate of change — a fact about where it started, which the math calls a boundary condition.

In calculus, this missing piece has a name. It is called the constant of integration, written as a plus-C (+C) at the end of every antiderivative. It is not a notational quirk. It is the mathematical signature of a specific kind of information loss: the loss of starting conditions, which integration cannot recover from the rate of change alone.
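For readers who want the rolling ball in symbols, here is the standard single-variable statement. This is first-semester calculus, not anything specific to this essay:

\[
v(t) = \frac{dx}{dt}
\quad\Longrightarrow\quad
x(t) = \int v(t)\,dt = F(t) + C
\]

where \(F\) is any antiderivative of \(v\). Every choice of \(C\) gives a path with the same shape at a different height, and \(v\) alone can never fix \(C\). Only an outside fact can:

\[
x(0) = x_0 \quad\Longrightarrow\quad C = x_0 - F(0).
\]

The second line is the part the rest of this essay is about: the piece that cannot be reconstructed from the rate of change.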

I am going to use this simple picture — single-variable calculus, a ball rolling on a line, +C as a single undetermined constant — as the cleanest available window onto a general structural property: a forward operation that loses information, an inverse that cannot recover what was lost without data supplied from outside, and a specific form that the discarded information takes. This property is not confined to single-variable calculus. It shows up, in richer and richer forms, across multivariate calculus, functional analysis, the geometry physicists use for high-dimensional state spaces, and probably in many places for which the formalism has not yet been assembled. When I say +C in what follows, I do not mean a single number. I mean an element of a whole family of functions — a point in a high-dimensional space of boundary conditions whose value pins down where on a much richer solution manifold a given trajectory actually sits. The principle is the same as in high school calculus. The dimensionality is not.

Now hold that, and think about what an emotion or feeling is.

An emotion, as you feel it, is the output of something that behaves like an integration running over your entire history. Your body and mind have been keeping track of what has happened to you — not as a list of events, but as an accumulated state that folds together bodily sensation, memory, context, relationship, and meaning into whatever you are feeling right now. The feeling at this moment is the current value of that long accumulation. The +C of that accumulation is your history — the specific starting conditions and accumulated boundary conditions that made this particular integration land at this particular value. And the +C here is not a number. It is whatever the high-dimensional analogue of a number is in a space rich enough to hold a life.

Two people can feel the same thing — can land at the same current value at the same moment — and have arrived there from different values of +C. They are on parallel paths with identical rates of change, sitting at different heights in a space neither of them can fully see. Neither of them, from the feeling alone, can reconstruct the other’s starting point.

This is what I mean when I say that the feeling can be sent but the history cannot be sent with the feeling. It is not a limitation of language. It is not a fact about consciousness being private. It is a structural property of the operation that produced the feeling. Integration over history is many-to-one: many different histories can land at the same felt output. The output does not uniquely determine what produced it. To recover what produced it, you need something outside the integration itself — the boundary conditions, which live in the person who did the integrating, not in the feeling they can share.
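If code is a more natural picture for you than calculus, here is a toy sketch of the many-to-one point. It is plain Python written for this essay, and it illustrates the structure only; nothing in it is a model of emotion:

    # A "history" is a starting condition plus a sequence of increments.
    # Integration here is just accumulation.

    def integrate(start, increments):
        """Accumulate increments onto a starting state; return the current value."""
        state = start
        for delta in increments:
            state += delta
        return state

    # Different starts, different paths, same output: the output alone
    # cannot tell you which history produced it.
    history_a = (10.0, [1.0, -2.0, 3.0])   # starts at 10, ends at 12
    history_b = (0.0,  [5.0,  5.0, 2.0])   # starts at 0,  ends at 12
    assert integrate(*history_a) == integrate(*history_b) == 12.0

    # Identical increments (the same "rate of change") from different starts:
    # parallel paths at different heights. The shared increments are what can
    # be matched between two people; the starts are the +C that cannot be sent.
    same_rate_a = integrate(3.0, [1.0, 1.0, 1.0])   # 6.0
    same_rate_b = integrate(8.0, [1.0, 1.0, 1.0])   # 11.0
    assert same_rate_b - same_rate_a == 8.0 - 3.0   # offset is the difference in +C

The assert lines are the whole argument: the value 12.0 does not determine which history produced it, and the two paths with identical increments stay separated by exactly the difference in their starting conditions.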

What this actually means for communication

If this is right, a number of things we ordinarily treat as separate start to look like the same thing.

Emotional communication between two people is not transmission of felt states. It is pattern-matching between two integrations. When someone tells you they are grieving, and you recognize the shape of what they are describing because you have also grieved, their grief has not somehow been sent across the gap. Your own grief, with its own +C, is being activated as a template. The template is similar enough to theirs that the match produces something that feels like understanding. It is understanding. But the understanding is mostly yours, running on your history, triggered by a signal from them specific enough to wake the right pattern.

Shared history deepens emotional communication for a specific reason. The +C values of two people who have lived through enough of the same things converge. Their boundary conditions become similar. The same felt signals point at recognizable territory in each of them, because each of them has enough of the same territory to recognize. It is not that they are better at communicating. It is that the integrations they are each running have more of the same starting conditions, so fewer of the signals between them get lost in translation.

Therapy is slow for the same reason. What good therapy does is surface boundary conditions — the specific, accumulated, often unnamed conditions under which you began integrating the thing you are now integrating. The feeling alone does not point at them. The feeling is the output. The therapist is helping you recover what integration cannot: the starting conditions that would make the current state interpretable to you. Without those, you can feel your feeling as clearly as you like and still not know what it means.

And this is why the forty-year marriage and the dinner party strangers are on different sides of the same line. The couple has been integrating side by side for a long time. Their +C values have converged — not identically, but enough that the same signals between them point at enough shared territory that silence can carry meaning. The strangers can exchange identical descriptions and remain in different places because their histories have not produced the boundary conditions that would make the descriptions mean the same thing.

None of this requires that feeling be mystical, or that selves be private in some deep metaphysical sense. It just requires that integration over history have the structural property integration has.


The same structure elsewhere

The structural property shows up in another domain, and I mention it because the second instance suggests the property is not specific to emotion but general to minds that accumulate over histories.

Senior professional judgment — the kind that takes decades to develop and cannot be transmitted to a new colleague through briefings or documents — has the same shape. What the senior person carries is not information about the situation. It is an integration: an accumulated read of how things move, what breaks, where the real pressure sits, which signals matter and which do not. The output — their judgment about what to do next — is transmissible. What cannot be transmitted is the +C: the boundary conditions accumulated over years of cases, decisions, mistakes, corrections, and quiet watching, which make their current read interpretable as judgment rather than as a guess. A new colleague with the same current information is on a parallel path at a different height. No amount of briefing can supply what only integration over time produces.

It is why senior judgment is expensive. It is why it gets lost when experienced people leave. It is why institutional culture cannot be written down, though many organizations keep trying. It is why experience is not a quantity of things known but a specific kind of accumulation that produces boundary conditions no one can shortcut.

Same property. Minds that integrate over history produce outputs that under-determine what produced them. The world runs on such integrations, and most of what we call communication is pattern-matching between them.

What I am actually asking

What I have offered here is a phenomenological observation pointed at a structural claim, and the structural claim is not finished.

The claim I want to put on the table, for the people who can do something with it, is that there is an informational layer sitting between neurology and phenomenology that our current questions are not quite reaching. The neurological side describes the substrate. The phenomenological side describes the experience. In between, something is happening that has the shape of a generalized integration — forward operators that lose information, inverses defined only up to a family of boundary conditions, current states that under-determine the trajectories that produced them. I suspect the mathematics that would describe it properly lives somewhere in the neighborhood of multivariate and variational calculus, operator theory, and the high-dimensional geometric tools physics has developed for other purposes. I suspect some of it has not been assembled for this yet. I am sure it is not single-variable calculus, which is why I have used single-variable calculus only as the cleanest picture of the structural property, not as the claim.

If cognitive and affective scientists are already asking these questions, I would like to know. If they are not, I would like them to. The phenomenon is visible in ordinary life — in the kitchen after the phone call, in the reader who meets a book at the right age, in the senior colleague whose judgment cannot be handed over. Something is producing these patterns, and the something has a shape. The shape is worth the math.


My wife and I, on those evenings in our kitchen, are doing what any two people who love different parents, and who love each other, do. We are feeling the same thing and recognizing that what produced the feeling in each of us cannot be sent across the space between us. The silence is honest. The distance is structural. There is nothing either of us could say to close it, because the thing that would close it is not sayable in the first place — it is the whole shape of each of our lives up to that moment, which neither of us can hand to the other.

And yet we sit together. The feeling is shared, even if the history is not. What passes between us is not nothing. It is what matters.

That, I think, is what emotional communication actually is. We are matching patterns across the gap. Each of us is running our own version of the worry, on our own boundary conditions, and the signals between us are close enough that each of us can locate where the other is, approximately, without being able to reach them exactly.

That is most of what there is, between people who are paying entire, honest attention. It is not less than we thought. It is just more specific than we usually name.

Bud Bhattacharyya is founder and principal of re:compound, where he works with senior leaders at expert-led firms on messy, ambiguous, high-stakes problems.

Read on Substack →

Your judgment is the layer this is built for.

If something here lands close to live work you’re carrying, that is what the practice is for.

Start a conversation

Please reach out first to elissa@recompound.ai

Who’s behind re:compound

re:compound was built for the gap between AI adoption and actual performance change — the gap where most organizations have invested heavily in tools and training, but the work that depends on senior judgment hasn’t changed at all.

Bud works on the methodology of human-AI collaboration for complex knowledge work — specifically the question of how senior judgment compounds rather than resets with every interaction.

  • McKinsey & Company
  • Bridgewater Associates
  • Deloitte (Internal Strategy & Transformation)
  • Vega Factor & Primed to Perform (NYT Bestseller)
  • B.S. Computer Science & B.A. Economics, Penn
  • MBA, Harvard Business School

Elissa leads operations and practice, bringing a background in high-stakes clinical operations where judgment under complexity is the daily operating condition and the margin for error is real.

  • Founder, re:compound
  • Head of Operations & Practice
  • High-Stakes Clinical Operations
  • Australia · UK · US
  • Bachelor of Nursing, University of South Australia