# How Firms That Kill Taste Generate Their Own Competitors

The Entity Substitution Problem documents how labor protections dissolve when the entities carrying them die. The essay focuses on external competitive pressure — AI-native firms outperforming legacy enterprises until the legacy firm enters bankruptcy. But a faster pathway exists, and it runs through the front door.

Firms that adopt AI at scale cannot distinguish the curation function — priority judgment about what is worth doing — from organizational obstruction. Management sees someone saying "no" to the swarm's output and concludes they are blocking productivity. The curators get cut. What follows is a two-stroke engine of self-inflicted entity substitution. First, unfiltered swarm output accumulates structural damage that manifests as product degradation in every domain requiring human judgment. Second, the senior talent who can see the degradation coming leaves voluntarily — and those departures, not the layoffs, generate the AI-native competitors that entity-substitute the firm that failed to retain them.

The enshittification engine is the missing connector between the Orchestration Class and Entity Substitution. The Orchestration Class essay defines who carries the curation function. This essay documents what happens when that function is eliminated — and why the result is entity substitution generated from within.

Confidence calibration: 60–70% that the enshittification spiral operates as the primary quality degradation mechanism in AI-adopting firms that eliminate the curation layer. 50–60% that the competitive boomerang — voluntary departure of senior talent into AI-native ventures — constitutes a material entity substitution pathway within five years.


The Function Nobody Can See

The critical human function in AI-augmented production is not execution. Swarms handle execution with increasing competence — Cognition reports Devin merging hundreds of thousands of pull requests at a 67% success rate, Cursor's team demonstrated a multi-agent system pushing 1,000 commits per hour, Amazon's Q tool compressed Java migration from 50 developer-days to hours per application. [Measured — vendor-reported] The function is not coordination, either. Coordination is compressing rapidly as platforms internalize workflow orchestration into click-to-deploy products.

The critical function is curation — priority judgment about what is worth doing. Not "how do we solve this problem?" but "is this the right problem?" Not "can the swarm build this feature?" but "should the swarm build this feature?"

Curation manifests organizationally as saying no. The senior engineer who reviews 127 pull requests while peers review 30 — and rejects 40 of them. The staff architect who kills a feature proposal because it introduces architectural debt that will cost ten times more to unwind than the feature is worth. The principal who looks at the swarm's enthusiastic output and says: this is sophisticated-looking garbage built on a broken decomposition.

This function is invisible for the same reason the Dissipation Veil (essay soon) operates: it produces value through absence. The system that didn't collapse. The feature that didn't ship. The architectural decision that prevented six months of technical debt remediation. No dashboard measures prevented disasters. No OKR tracks rejected proposals. The curator's highest-value output is the decision that never becomes a line item.

Performance review systems compound the blindness. They measure output — code shipped, tickets closed, features delivered. The person who produces less visible output because they are filtering the swarm's output for the entire team appears, on the metrics, to be less productive than the person who rubber-stamps everything. In organizations under cost pressure — which, given the Ratchet's consumption of operating cash flow, now means virtually every AI-investing enterprise — the person who looks less productive gets cut first.

The pattern is measurable. When Meta, Google, Microsoft, Amazon, and Block executed their major layoffs between 2023 and 2026, the role category hit hardest was not junior engineers (those were frozen out through hiring pipeline exclusion, a Structural Exclusion mechanism). It was not senior technical ICs, who — per a rigorous SSRN study analyzing 62 million workers across 285,000 firms — saw employment remain largely unchanged following AI adoption. [Measured] The category hit hardest was the organizational judgment layer: product managers, program managers, directors, general managers. Meta reportedly flattened from approximately 300 VPs to 250. Google reportedly eliminated roughly a third of managers with fewer than three direct reports. Microsoft targeted a 10:1 engineer-to-manager ratio, up from approximately 5.5:1. Block removed an estimated 200 managers and eliminated the general manager role entirely. [Estimated — consistent across industry reporting but precise figures derive from analyst and insider accounts rather than public filings]

These are the people who decided what to build, not how to build it. They are the curation layer. And they are precisely the layer that management, under the Ratchet's cost pressure, cannot distinguish from overhead.


The Taste Deficit

When the curation layer is removed, the swarm does not stop producing. It accelerates. The volume of output increases because the bottleneck — the person who said "no" — is gone. What degrades is not the quantity of output but its structural coherence.

The evidence is now substantial enough to quantify.

GitClear analyzed 211 million changed lines of code between 2020 and 2024 and found an eight-fold increase in multi-line duplicate code blocks (five or more duplicated lines). Code churn — code discarded within two weeks of being written — increased dramatically. Refactoring collapsed, with moved lines falling from roughly 24% of changes to under 10%. The year 2024 was the first in which copy-pasted code frequency exceeded moved code frequency, reversing two decades of DRY principles. [Measured] CodeRabbit's analysis of 470 real-world GitHub pull requests found AI-authored code contained 1.7 times more issues and 75% higher logic and correctness errors, with substantially worse security, readability, and performance scores across every measured dimension. [Measured] A Carnegie Mellon difference-in-differences study of GitHub repositories found that after AI assistant adoption, static analysis warnings rose on the order of 30% and code complexity rose over 40%, with initial productivity gains vanishing within months. [Measured — approximate figures from derivative coverage of working paper]

The METR randomized controlled trial — the gold-standard study design — delivered the most provocative finding. Sixteen experienced open-source developers completed tasks 19% slower when allowed to use AI tools than when working without them (95% CI: [-40%, -2%]). Those same developers estimated they were roughly 20% faster. In pre-study surveys, economics experts had forecast a 39% speedup; ML experts forecast 38%. Everyone was wrong in the same direction. [Measured]
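The scale of the misperception is easier to see as arithmetic. The sketch below uses a hypothetical 10-hour task scaled to the reported percentages; the task time is an assumption, only the percentages come from the study:

```python
# Illustrative arithmetic only: the 10-hour baseline is hypothetical,
# scaled to the METR-reported percentages (19% slower actual, ~20% faster perceived).

baseline_hours = 10.0                    # hypothetical task time without AI tools
actual_hours = baseline_hours * 1.19     # 19% slower with AI assistance
perceived_hours = baseline_hours * 0.80  # developers' self-estimate: ~20% faster

# The perception gap: developers believe they saved time they actually lost.
perceived_saving = baseline_hours - perceived_hours  # ~2.0 hours "saved"
actual_cost = actual_hours - baseline_hours          # ~1.9 hours lost

gap = actual_hours - perceived_hours
print(f"perceived saving: {perceived_saving:.1f} h, actual extra cost: {actual_cost:.1f} h")
print(f"perception gap: {gap:.1f} h ({gap / baseline_hours:.0%} of baseline)")
```

On these assumed numbers, perceived and actual time diverge by nearly four hours on a ten-hour task — and no standard productivity dashboard measures the divergence.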

This is what the taste deficit looks like from the inside. The output feels productive. The metrics register activity. The dashboards light up. But the structural quality — the coherence that emerges from someone exercising judgment about what belongs and what doesn't — degrades silently.

Stanford's Social Media Lab and BetterUp Labs gave it a name: workslop. Their research found 40% of workers received workslop in the prior month, with workers estimating roughly 15% of the content they receive qualifies as workslop — much of it AI-generated. Each instance costs an average of one hour and 56 minutes to identify and remediate — but that remediation cost assumes someone with the judgment to identify workslop is still present. [Measured] When the curators are gone, the workslop accumulates without anyone flagging it. It becomes the new baseline. What the Ratchet essay identified as architectural waste generating artificial token demand now has its organizational corollary: architectural waste generating artificial output volume that registers as productivity.

The case studies confirm the mechanism at company scale.

Sonos laid off 7% of staff in 2023 and subsequently launched a catastrophic app redesign in May 2024 — missing features, accessibility failures, extensive bugs. Revenue declined 16% in fiscal Q4 2024. The CEO resigned. Multiple executives pledged to forgo bonuses tied to recovery efforts. The company then announced further layoffs amid the fallout, cutting deeper into the workforce to fund the recovery from the damage caused by the first round of cuts. [Case Study — Illustrative]

CrowdStrike reduced QA-adjacent roles. Former employees told reporters that speed had become the priority over quality control. A faulty update subsequently crashed 8.5 million Windows devices worldwide, triggering a $500 million Delta Airlines lawsuit and estimated billions in direct global losses. The people who would have caught the defective update were no longer there to catch it. [Case Study — Illustrative]

Klarna cut from 5,527 to approximately 3,000 employees — roughly a 45% reduction — while growing revenue sharply and more than doubling revenue per employee over the same period. The efficiency metrics were spectacular. Then CEO Sebastian Siemiatkowski admitted that cost had been too dominant an evaluation factor and that the result was lower quality. The company began rehiring human customer service agents after customers complained about robotic AI responses. The budget channel described in the Dissipation Veil (essay soon) operated: headcount was cut, AI spending absorbed the freed resources, and when the deployment underperformed, the headcount was already gone. [Measured]


Block: The Mechanism in Real Time

Block is the case study the theory predicted, unfolding in real time.

Jack Dorsey's stated rationale evolved revealingly over eleven months. His March 2025 internal memo explicitly said that the restructuring was not trying to replace people with AI. By February 2026, the framing had shifted entirely: 4,000 workers — nearly 40% of the total workforce — were cut, with Dorsey publicly declaring that most companies would make similar cuts within a year. The AI narrative was adopted retrospectively to justify a restructuring that multiple analysts identified as something else entirely. Former head of communications Aaron Zamost called it organizational bloat wearing an AI costume. Mizuho analyst Dan Dolev agreed that the vast majority of cuts were not due to AI. Oxford Economics found many AI-attributed layoffs were corrections for COVID-era over-hiring. By one analysis, fewer than 5% of 2025 tech layoffs explicitly cited AI as a factor, while separate recruiter surveys suggest a majority of hiring managers have used AI as narrative cover for cuts motivated by other pressures. [Estimated — survey and tracker synthesis, not independently audited]

Block's early financial metrics appear to validate the cuts: Q4 2025 gross profit grew 22–24% year-over-year, and the company achieved its "Rule of 40" target for the first time. But transaction losses reportedly increased as a share of gross profit — a potential early signal of degraded risk judgment in the payment processing pipeline. [Measured for gross profit growth; Estimated for transaction loss ratio] The February 2026 cuts are ten days old at the time of writing. The enshittification engine's prediction is specific: within 12 months, Block will show measurable product quality degradation in domains requiring human judgment — fraud detection, compliance, merchant dispute resolution — while maintaining or improving metrics in algorithmically optimizable domains like payment routing efficiency. The prediction is falsifiable. The clock is running.

The Meta Exception

The strongest counter to the enshittification thesis is Meta. After cutting 22% of its headcount — from 87,000 to approximately 70,800 — Meta's stock tripled in 2023. Operating margins expanded from 25% to 42%. AI-driven feed improvements increased Facebook time spent by 8% and Instagram by 6%. Zuckerberg claimed the company executes better and faster. [Measured]

This is not a case that can be dismissed or explained away. Meta genuinely improved its financial performance after eliminating a substantial fraction of its organizational judgment layer.

But the exception defines the boundary rather than destroying the thesis. Meta's post-layoff success concentrates in a specific domain: algorithmic optimization of advertising targeting and content engagement. These are pattern-matching tasks — precisely the domain where AI substitutes for human judgment most effectively. The algorithm does not need taste to maximize click-through rates. It needs data and compute.

The enshittification spiral operates in domains where the output requires judgment about qualitative coherence: customer-facing reliability (Sonos), security and compliance (CrowdStrike, and Block's $255 million in regulatory fines), human-judgment-dependent services (Klarna), and novel product design. It does not operate — or operates far more slowly — in domains where quality is measurable, feedback loops are tight, and optimization targets are well-defined.

The honest formulation: AI can curate where the objective function is clear. It cannot curate where the objective function is ambiguous, contested, or requires integration across domains that the training data does not connect. The Orchestration Class essay identified this as the boundary between workflow assembly and system governance. The enshittification engine runs when firms eliminate system governance because they mistake it for workflow assembly.

A Forrester survey reported that 55% of employers who executed AI-driven layoffs now regret the decision. [Estimated — survey data, not independently audited] A Glassdoor analysis of 304 layoff events across 197 companies found that layoffs drop employer ratings by 0.13 stars, with highly rated companies losing 0.22 stars. Recovery takes two or more years. [Measured] The damage is not hypothetical. It is measurable, persistent, and — critically — it takes longer to appear than the quarterly earnings cycle that rewards the initial cuts.


The Boomerang Runs Through Retention, Not Termination

The competitive threat does not come from fired employees. It comes from people who choose to leave.

The cases are dramatic. Anthropic's founding team — Dario and Daniela Amodei plus five former OpenAI co-founders — voluntarily departed in 2021. The company they built now carries a $380 billion valuation and $14 billion in annualized revenue. It is OpenAI's primary competitor. [Measured] Perplexity AI's founder left Google Brain, DeepMind, and OpenAI voluntarily. The company reached a $20 billion valuation and pioneered AI search features; amid growing competition from AI-search entrants, some recent measurements show Google's search share falling below 90% for the first time in fifteen years. [Measured] Mistral AI's three founders — from DeepMind and Meta — departed voluntarily and built a company valued at $14 billion, making all three billionaires. [Measured] CB Insights reportedly tracked 14 AI startups led by former Google employees that collectively raised on the order of $15 billion and reached a combined valuation exceeding $70 billion. [Estimated — paywalled report]

No major case of a fired employee building a successful AI-native competitor against their former employer exists in the current data. The boomerang mechanism is real, but the trigger is knowledge-driven departure, not displacement.

This matters for the theory because it reveals the enshittification engine's second stroke. The quality degradation caused by eliminating the curation layer does not just damage products. It damages retention. The senior engineers and architects who can see the structural damage accumulating — the growing code churn, the declining architectural coherence, the workslop that nobody is filtering — are precisely the people with the skills to leave and compete. They leave not because they were fired but because they can see what the organization is becoming and they don't want to be inside it when the consequences arrive.

The AI-native ventures they build carry a structural advantage the research quantifies. Revenue per employee at the top AI-native firms averages roughly $3.5 million versus approximately $600,000 at established SaaS companies — a gap on the order of 5–6x. [Estimated — composite from VC and analyst reports with limited company-specific disclosures] But this figure requires an important caveat: AI-native companies substitute compute costs for headcount costs. Anthropic is not currently profitable, with infrastructure costs reportedly consuming far more than revenue. Several AI coding startups are understood to have compute costs exceeding their top line. [Estimated] The revenue-per-employee metric captures headcount efficiency but not profitability. The boomerang ventures achieve extraordinary output per person while running margins that no legacy enterprise would accept.

This creates a specific competitive dynamic. The AI-native venture can underprice the legacy firm on a per-project basis because its labor costs are negligible, even though its compute costs are enormous. The legacy firm, carrying both labor costs and AI infrastructure costs (the Ratchet ensures they cannot shed the latter), faces cost pressure from both directions. The entity substitution pathway runs not through bankruptcy court — the mechanism described in the Entity Substitution essay — but through market share erosion driven by competitors who used to work there and who carry institutional knowledge of exactly where the legacy firm's vulnerabilities lie.
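The asymmetry can be sketched numerically. Every figure below — team sizes, rates, compute spend, overhead — is a hypothetical assumption chosen for illustration, not data from any firm discussed here:

```python
# Illustrative cost comparison. All figures are invented assumptions,
# not actual financials of any company named in this essay.

def project_cost(labor_hours: float, hourly_rate: float,
                 compute_spend: float, overhead_rate: float = 0.0) -> float:
    """Total delivery cost for one project: labor + compute + overhead on labor."""
    labor = labor_hours * hourly_rate
    return labor + compute_spend + labor * overhead_rate

# Legacy firm: large human team, AI infrastructure it cannot shed (the Ratchet),
# plus legacy overhead (benefits, facilities, management layers).
legacy = project_cost(labor_hours=2000, hourly_rate=120,
                      compute_spend=30_000, overhead_rate=0.40)

# AI-native venture: skeleton crew orchestrating swarms, a far heavier compute
# bill, near-zero legacy overhead.
native = project_cost(labor_hours=200, hourly_rate=150,
                      compute_spend=120_000, overhead_rate=0.05)

print(f"legacy bid floor:    ${legacy:,.0f}")
print(f"AI-native bid floor: ${native:,.0f}")
# Even with a 4x larger compute bill, the native venture can underbid,
# because labor plus overhead dominates the legacy cost base.
```

Under these assumed parameters the native venture's bid floor is well under half the legacy firm's, despite spending four times as much on compute — the structure of the advantage, not the specific numbers, is the point.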

California's non-compete ban is the essential legal enabler. Virtually all major boomerang cases originate there. In states with enforceable non-competes, the mechanism is legally blocked for most workers. [Measured] Entity substitution via voluntary departure operates primarily in legal environments that permit post-employment competition — a jurisdictional variable that shapes where the enshittification engine's second stroke can fire.


The Shadow Automation Accelerant

The enshittification engine has a shadow fuel source that compounds the damage.

IDC's 2025 global survey found approximately 39% of EMEA employees use unauthorized AI tools at work, with over half unwilling to admit it formally. Sensitive data fed to AI tools reportedly increased from roughly 10% to over 25% in one year. [Estimated — survey data] Workers automate their own roles not from malice but from professional self-preservation — they feel compelled to maintain productivity expectations that were set before the curation layer was removed, when someone else was filtering the work before it reached them.

The result is a second invisible degradation channel. The firm believes a human is exercising judgment on the output. The human is routing the work through an AI tool and passing the output through with minimal review. The organizational assumption — that a human curation layer exists between AI execution and production deployment — is false. But no metric reveals this because the output looks professional, the volume looks productive, and the person producing it has every incentive to maintain the illusion.

When the firm eventually discovers the shadow automation — and it will, through a quality failure, a security breach, or a compliance audit — the standard response is termination of the individual. But the structural damage has already accumulated. Months or years of decisions made without human priority judgment are embedded in the codebase, the product architecture, the customer relationships, the compliance record. The shadow automation does not cause the enshittification. It accelerates the enshittification that the removal of the formal curation layer initiated.

The METR finding captures this acceleration mechanism precisely: developers using AI tools believed they were 20% faster while actually being 19% slower. The perception gap is not carelessness. It is the absence of the curation function that would have measured the structural quality of the output rather than the speed of its production.

The Self-Reinforcing Loop

The enshittification engine is not a one-time event. It is a reinforcing cycle.

The cycle runs as follows. The Ratchet creates budget pressure. Management cuts the curation layer because its value is invisible. Swarm output volume increases — the dashboards look better than ever. Quality degrades in domains requiring judgment, but the degradation presents as structural drift, not acute failure (the Dissipation Veil at organizational scale). The senior talent who would have caught the drift sees it accumulating and leaves voluntarily. Their departure removes more curation capacity. The remaining staff, now carrying heavier loads without the judgment infrastructure that previously supported them, adopt shadow AI tools to maintain throughput. The shadow AI further degrades quality. Management, seeing high output volumes and not seeing the quality degradation, concludes the AI-first strategy is working. The next round of cuts targets whoever is left who says "no."
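The loop can be sketched as a toy difference-equation model. All coefficients below are invented for illustration; only the feedback structure follows the cycle described above:

```python
# Toy model of the reinforcing cycle. Every coefficient is an assumption
# chosen for illustration; only the feedback structure mirrors the essay.

def run_cycle(quarters: int = 8):
    curation = 1.0    # curation capacity (1.0 = fully staffed judgment layer)
    quality = 1.0     # structural quality in judgment-dependent domains
    shadow_ai = 0.05  # share of work routed through unreviewed AI tools
    history = []
    for _ in range(quarters):
        # Budget pressure cuts whatever curation looks like overhead.
        curation *= 0.85
        # Quality drifts downward when output outruns the remaining filter.
        quality -= 0.08 * (1.0 - curation)
        # Senior talent that sees the drift leaves, removing more curation.
        attrition = 0.05 * (1.0 - quality)
        curation = max(0.0, curation - attrition)
        # Remaining staff adopt shadow AI to keep up, degrading quality further.
        shadow_ai = min(1.0, shadow_ai + 0.10 * (1.0 - curation))
        quality = max(0.0, quality - 0.03 * shadow_ai)
        history.append((round(curation, 3), round(quality, 3), round(shadow_ai, 3)))
    return history

for q, (cur, qual, shadow) in enumerate(run_cycle(), start=1):
    print(f"Q{q}: curation={cur:.3f} quality={qual:.3f} shadow_ai={shadow:.3f}")
# Each state feeds the next: less curation -> lower quality -> more attrition
# -> more shadow AI -> lower quality still. The loop never self-corrects.
```

Whatever coefficients one assumes, the qualitative behavior is the same: curation and quality fall monotonically while shadow AI adoption rises, which is the structural claim the three leading indicators below are meant to track.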

Each turn of this cycle produces three measurable outputs: declining product quality in judgment-dependent domains, increasing voluntary attrition among senior talent, and increasing shadow AI adoption among remaining staff. All three are leading indicators of the enshittification spiral. All three are currently observable in firms that have executed major curation-layer cuts.

The cycle terminates in one of three ways. The firm experiences a catastrophic quality failure (CrowdStrike), acknowledges the damage and reverses course by rehiring humans (Klarna), or completes the enshittification and loses market position to competitors — including competitors staffed by the people who left (the entity substitution pathway). In no scenario does the firm successfully operate without the curation function in domains that require it. The question is how much damage accumulates before the absence becomes undeniable.


What Would Prove This Wrong

The enshittification engine thesis specifies what would falsify it.

Defeat Condition 1: Product quality holds without the curation layer. If companies that eliminated senior technical and organizational judgment roles show stable or improving product quality metrics over 12 or more months — measured by customer satisfaction, bug rates, security incidents, and architectural coherence — the curation function is not as irreducible as the thesis claims. Early evidence from Sonos, CrowdStrike, and Klarna points the other direction, but the 12-month window has not closed for Block's February 2026 cuts. This is directly observable. [Falsification timeline: March 2027 for Block; ongoing for the broader sample]

Defeat Condition 2: AI agent systems achieve reliable priority judgment. If tools like Devin, Cursor, or Claude Code demonstrate consistent ability to identify which problems are worth solving — not just solve problems they are given — the human curation function is transitional. Current evidence from Cognition's own assessment ("Devin can't independently tackle an ambiguous coding project"), Cursor's team conclusion ("taste, judgment, and direction came from humans"), and the ICLR 2025 finding that LLMs are not suited as standalone planners all indicate this condition is not met as of March 2026. [Measured] But agent capabilities are improving rapidly, and the boundary may shift.

Defeat Condition 3: The boomerang does not produce competitive advantage. If AI-native startups founded by voluntary departures from legacy firms do not achieve superior market position within five years, the retention-failure pathway does not produce entity substitution. The Anthropic, Perplexity, and Mistral cases are dramatic but may not generalize — 90% of all startups fail, and 70% of VC-backed startups don't return investor capital. [Measured] The boomerang is real in individual cases. Whether it operates at scale sufficient to constitute a systemic entity substitution pathway remains an open empirical question.

Defeat Condition 4: Shadow automation does not produce measurable quality degradation. If unfiltered AI output is indistinguishable from human-curated output in production environments, the taste deficit is imaginary. GitClear, CodeRabbit, and the CMU study all suggest otherwise, but these measure code quality in open-source repositories. Enterprise production environments with proprietary codebases and different quality requirements may show different patterns. [Falsification timeline: 12–18 months of enterprise code quality audits]


Where This Connects

The enshittification engine is not an isolated phenomenon. It is the organizational mechanism that connects two pillars of the Theory of Recursive Displacement.

The Orchestration Class defines the skill set: decomposition judgment, failure diagnosis in probabilistic systems, risk arbitration. This essay documents what happens when that skill set is eliminated from the organization — not because it was automated but because it was misidentified as overhead.

The Entity Substitution Problem describes entity substitution as an external competitive process: legacy firms carrying labor obligations die, AI-native replacements that never assumed those obligations capture the market. This essay adds an internal pathway: the legacy firm generates its own competitors by failing to retain the talent that understood what quality required.

The Ratchet explains why the curation layer gets cut in the first place. When hyperscalers consume 90–100% of operating cash flow on AI infrastructure, the budget pressure on every other line item becomes existential. The curation function — invisible, unmeasurable by standard metrics, manifesting as saying "no" — is the easiest line item to eliminate. The workslop dynamics documented in the Ratchet essay are the organizational-level expression of the enshittification engine.

The Dissipation Veil (essay soon) explains why the quality degradation is invisible until it becomes irreversible. The same mechanism that prevents political systems from seeing AI displacement as acute crisis prevents management from seeing the taste deficit accumulating. The damage presents as structural drift — gradually declining code quality, slowly rising bug counts, incrementally degrading customer satisfaction — rather than as a discrete failure that would trigger corrective action. By the time the damage becomes visible, the people who could have prevented it are gone.

The Adversarial Equilibrium Trap describes the legal sector parallel: bilateral AI escalation produces cost inflation rather than efficiency gains. The enshittification engine is the enterprise parallel — unilateral AI adoption without curation produces quality deflation rather than productivity gains. Both are cases where the absence of the judgment function transforms a theoretically efficiency-enhancing technology into a structurally degrading one.

The Wage Signal Collapse connects through the retention mechanism. As wage signals degrade for the roles that carry the curation function — as organizations eliminate the titles, compress the compensation, and fail to recognize the skill in their performance systems — the curators' incentive to stay diminishes. The wage signal for "saying no to the swarm" is zero, because no organization has figured out how to price the absence of disasters.

This essay describes the mechanism. The predecessor essays describe the conditions that create it and the consequences it produces. Together, they answer a question that the framework had not yet addressed: How does entity substitution happen when nobody goes bankrupt?

It happens because the firm kills its own taste. The product degrades. The people who could fix it leave. And the competitors they become never had to carry the legacy obligations that the firm was trying to shed when it cut the curation layer in the first place.


tylermaddox.info · Theory of Recursive Displacement — Mechanism Connector · March 2026
