What AI Is Actually Doing with Your IP

Part 2 of 10 in "From Currency to Compounding," the Enterprise AI & IP Governance series. This installment establishes the series' central thesis: intellectual property is your currency, and AI either compounds it or dilutes it.

In This Post, You’ll Learn: 

  • What generative AI is doing with your enterprise IP.

  • Why most enterprises cannot quantify their AI risk exposure or ROI.

  • How the same structural blindness that created the 2008 derivatives crisis is now emerging inside enterprise AI deployments.

Enterprises are feeding AI their most valuable assets, and most cannot answer a basic question: what is it actually doing with them? This isn't an AI capability problem. It's an IP visibility problem. And the same blindness that creates liability also destroys ROI, because you cannot compound an asset you cannot see. 


Leading up to 2008, a mortgage was a governed asset. Reviewed, rated, approved. Banks had risk frameworks, compliance processes, and regulatory oversight. The underlying assets were visible and understood. 

The market had turned those assets into currency — bundling, securitizing, and trading against them — and built derivatives from that currency. Mortgage-backed securities. Collateralized debt obligations. Derivatives of derivatives. Each layer was created from governed inputs — but the derivatives themselves were ungoverned. No one tracked the lineage. There was no provenance — no documented chain from the governed asset to the derivative it produced. No one could reconstruct what was inside them. The derivatives multiplied faster than any human review process could follow. 

When defaults started rising and the market began to unravel, every institution asked the same question: “What is my risk exposure?” No one could answer with any precision. The provenance didn’t exist. They knew the risk was significant. They couldn’t see how deep it went. Bear Stearns, gone. Lehman Brothers, the largest bankruptcy in American history. AIG, requiring $182 billion to survive. A housing correction became a global collapse — not because the assets failed, but because no one could see inside the derivatives fast enough to contain it. 

The enterprise AI economy isn’t facing a single systemic collapse. It’s facing something more distributed and no less destructive — Gartner predicts more than 2,000 “death by AI” lawsuits by the end of 2026, each one targeting an enterprise that deployed AI on governed assets and couldn’t answer the same question: what is my risk exposure? 

This same pattern is running again. Not at the scale of global finance. But at the scale of every enterprise operating AI on its intellectual property without the infrastructure to trace what it consumed or to know whether what it produced is an asset or a liability.

The Enterprise IP Asset Base 

A governed asset becomes currency the moment an economy builds on it. Gold. Mortgages. Enterprise intellectual property.  

IP has been currency for as long as enterprises have competed on what they know. But no one has treated it that way — which means no one has built the infrastructure to govern it as one. 

The operational procedures, brand guidelines, product specifications, training content, customer data, and institutional knowledge that define how an enterprise competes — that’s the intellectual property asset base. It’s reviewed, approved, and governed. Legal clears it. Brand signs off. Compliance blesses it. The process is sound. It’s been sound for decades. Governance at human speed. 

When AI interacts with that asset base, the speed changes. Every interaction is a transaction — but now transactions happen continuously, autonomously, at machine speed. Something is consumed. Something is produced. Value is either compounded or diluted. And no one is tracking which. 

Ninety-five percent of U.S. enterprises are now deploying generative AI on that asset base. Sixty-one percent lack the tools to prevent proprietary data from being shared with external AI systems. Fewer than one in four can connect AI activity to business outcomes. And ninety-nine percent have encountered disrupted or derailed AI projects. 

These aren’t capability failures. They’re visibility failures. 

The currency is being spent. There is no ledger.

The Controlled AI Deployment Case: What Breaks Governance

Consider the controlled case. Your enterprise deploys an AI agent to handle customer questions about your return policy. You do the work the supply side demands — curate the training data, validate responses, test edge cases, build confidence that the agent answers appropriately within the boundaries you designed. That confidence is real. It’s earned. 

It’s also a snapshot. 

Failure Mode 1: Model Drift 

The model changed last Tuesday. The AI company shipped a new version — or quietly revised the existing one. The update may improve performance. It may degrade it. Without infrastructure to assess the impact on your specific deployment, you won’t fully know which. And the version you tested against? It may not exist in six months. AI companies retire prior versions faster than most enterprises realize. The supply-side confidence you built has an expiration date that someone else controls.
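One way to catch this failure mode is to record the exact model version your validation snapshot was built against and flag any mismatch at deployment time. The sketch below is illustrative only; the version strings and function names are hypothetical, not a vendor API.

```python
# Hypothetical sketch: detect when the deployed model no longer matches the
# version your validation work was performed against. All values illustrative.
TESTED_SNAPSHOT = {
    "model": "vendor-chat-model",
    "version": "2024-06-01",      # the version you curated, validated, edge-tested
    "validated_on": "2024-06-10",
}

def check_version(deployed_version: str) -> str:
    """Compare the live model version against the validated snapshot."""
    if deployed_version != TESTED_SNAPSHOT["version"]:
        return (f"DRIFT: deployed version {deployed_version} differs from tested "
                f"version {TESTED_SNAPSHOT['version']}; prior validation no longer applies")
    return "OK: deployment matches the validated snapshot"

print(check_version("2024-08-15"))
```

The check is trivial; the discipline is not. It only works if someone recorded the tested version in the first place — which is exactly the ledger most enterprises lack.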

Failure Mode 2: Unvalidated Derivatives 

Real users don’t stay inside your test scenarios. A customer asks a question that combines your return policy with a warranty claim in a way no one anticipated. The AI produces a response — a derivative minted from your governed IP — that was never validated. You tested for the questions you could predict. You cannot test for every derivative question a human will ask. Every novel interaction produces an unvalidated output. 

Failure Mode 3: Ungoverned Actors 

Not every user is asking in good faith. Some are probing for information they shouldn’t have. Some are manipulating the AI into revealing proprietary content or cornering it into commitments your enterprise never authorized. Social media and gaming platforms learned this years ago — you cannot release a system to the public and hope users behave. You build moderation infrastructure, behavioral monitoring, real-time intervention. Enterprise AI has none of it. There’s no distinction between a legitimate customer query and someone extracting your IP. 

Three failure modes. Model changes with unknown impact. Unvalidated derivatives. Bad actors. And this is the controlled case — one agent, one use case, one set of IP, deployed with the full discipline of supply-side governance. 

Now consider what’s actually happening across the enterprise. 

The Enterprise-Wide AI IP Visibility Problem

Sales is feeding pricing strategy into generative AI to draft proposals. Marketing is running brand guidelines through it to generate campaign copy. Legal is pasting contract language in for analysis. Engineers are running proprietary code through it. Executives are feeding board presentations in for summarization.  

Every department, every employee, every tool — each one consuming governed IP and producing ungoverned derivatives. Not because anyone decided to deploy AI on that IP. But because the tools are there and the productivity gain is real. 

And increasingly, they’re not just using AI as a tool — they’re outsourcing their thinking to it. The analyst who used to interpret the data and apply judgment now asks AI to do both. The manager who used to draft the recommendation and weigh the tradeoffs now prompts AI for the answer. The human thinking that once served as an implicit layer of governance — the judgment, the context, the institutional knowledge applied at the point of creation — is being replaced by AI outputs that carry none of it. Gartner predicts that by 2026, atrophy of critical thinking skills due to generative AI use will push 50% of global organizations to require AI-free skills assessments. The derivative doesn’t just lack provenance from the source IP. It lacks the human judgment that used to be part of the chain. 

And here’s where the 2008 parallel tightens. The marketing team’s AI-generated copy gets pasted into a sales deck. That deck gets fed into another AI tool for a customer summary. The customer summary gets used to update a training module. Each generation is a derivative of a derivative — further from the original governed source, with no provenance chain connecting any of it. 
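The chain described above is recoverable only if each derivative records its parent. A minimal sketch of what such a provenance ledger could look like, using hypothetical names and the marketing-to-training-module chain from the paragraph:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: each AI-generated derivative records the asset it was
# minted from, so the lineage back to the governed source can be walked later.
@dataclass
class DerivativeRecord:
    record_id: str
    source_id: Optional[str]   # parent record; None means governed source IP
    producer: str              # the tool or team that minted this derivative

class ProvenanceLedger:
    def __init__(self) -> None:
        self._records: dict[str, DerivativeRecord] = {}

    def mint(self, record_id: str, source_id: Optional[str], producer: str) -> None:
        self._records[record_id] = DerivativeRecord(record_id, source_id, producer)

    def lineage(self, record_id: str) -> list[str]:
        """Walk parent pointers from a derivative back to the governed source."""
        chain: list[str] = []
        current: Optional[str] = record_id
        while current is not None:
            chain.append(current)
            current = self._records[current].source_id
        return chain

# The chain from the paragraph: guidelines -> campaign copy -> deck summary -> training module
ledger = ProvenanceLedger()
ledger.mint("brand-guidelines-v3", None, "governed source")
ledger.mint("campaign-copy-0412", "brand-guidelines-v3", "marketing AI tool")
ledger.mint("deck-customer-summary", "campaign-copy-0412", "sales AI tool")
ledger.mint("training-module-update", "deck-customer-summary", "L&D AI tool")

print(ledger.lineage("training-module-update"))
```

With records like these, the fourth-generation derivative traces back to its governed source in one call. Without them, the chain is gone the moment the copy is pasted.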

The chatbot was one mortgage. Generative AI across the enterprise is the entire derivatives market.  

The governed assets are producing ungoverned derivatives. At scale. At speed. With no infrastructure tracking the lineage. 

From Information Transmission to AI Creation Speed 

This isn’t the first time the world has moved faster than its governance infrastructure. 

For most of human history, information moved at the speed a person could walk or a ship could sail — months to cross an ocean. Governance moved at the same pace. When the telegraph and railroad compressed communication to days, new governance systems emerged. When radio compressed it to hours, governance became architectural — classification systems, intelligence compartmentalization, operational security designed for speed. When the internet compressed it to seconds, enterprises spent two decades building the infrastructure to match — firewalls, SOC 2, data privacy frameworks, compliance automation. 

Every generation of speed required a corresponding generation of governance. And every time, the infrastructure caught up. Not instantly. Not without cost. But it caught up because the pattern is recognizable and the solution is buildable. 

AI is bringing a novel dimension to the governance problem. Every previous speed increase was about transmitting what humans produced — moving existing information faster. AI doesn’t just transmit. It creates. It produces new information, new content, new decisions from the assets it consumes and the instructions it’s given.  

The governance challenge isn’t “how do we review faster.” It’s “how do we govern an entity that creates derivative IP autonomously, continuously, at a speed where human review is structurally impossible.” 

The shift isn’t from slow to fast. It’s from transmission speed to creation speed. That’s a definable problem. And definable problems get solved — the same way every previous governance gap has been solved. With infrastructure.

The Wild: Where AI-Generated IP Goes Ungoverned

Enterprises govern their intellectual property. They always have. Legal reviews it. Brand approves it. Compliance signs off. It becomes the authoritative version. That gate is real and it works. 

But governance ends at the gate.  

Once IP is approved, there is no infrastructure governing what happens to it — who accesses it, what version they use, what AI does with it, or what derivatives it produces. This is the YouTube problem at enterprise scale. YouTube built Content ID — a $100 million infrastructure investment that has processed billions of copyright claims and paid $12 billion to rightsholders. And it still can’t fully govern the derivative problem. Users remix, sample, clip, and create derivatives faster than any system can track. 

Enterprise IP has the same structural gap. The approval was a snapshot. What happens after the snapshot is the Wild. 

The problem starts at deployment. The moment AI is released into the operating environment, it enters the Wild. AI doesn’t wait for the next review cycle. It pulls from wherever it has access — the approved version, the draft, the archived copy, the annotated version someone shared in a Teams channel. And it’s constantly learning. AI is designed for continuous improvement — absorbing new inputs, refining responses, evolving with every interaction. That’s the point. But continuous improvement without the infrastructure to manage both the AI and the IP it operates on doesn’t produce stability. It produces drift — ungoverned evolution away from the source of truth, with no mechanism to detect the distance. The governed input enters an ungoverned process that never stops changing. 

And then the real gap opens. Every interaction generates a derivative — a customer response, a training recommendation, a strategic summary, a decision framework. Each one was created from the enterprise’s governed assets. Each one was never reviewed, never classified, never recorded. Each one is currency minted from your IP. 

That’s the Wild. 

The Wild isn’t chaos. It isn’t a failure of diligence. It’s the structural gap between governance built for transmission speed and AI that produces derivatives at creation speed. Every AI interaction widens it. Every untracked output lives there. The Wild is where most of what enterprise AI produces exists today — invisible to the organization, unconnected to the source, unmeasured in its impact. 

In 2008, the Wild was a $60 trillion shadow derivatives market. In enterprise AI, the Wild is every piece of derivative IP your systems have produced that no one can trace back to its source. The scale is different. The structure of the blindness is the same.  

When litigation or regulatory inquiry demands the chain — and Mobley v. Workday shows it will — the enterprise operating in the Wild cannot produce what was never recorded. 

AI governance requires IP governance. You cannot govern what AI produces without governing what AI consumes. One without the other isn’t governance. It’s the illusion of governance. And the Wild grows in the gap between the two. 

The Hard Problem: Why Enterprise AI Governance Fails 

OpenAI appears to understand part of this problem. This week, they announced multiyear partnerships with McKinsey, BCG, Accenture, and Capgemini to deploy their Frontier enterprise platform. Their own words: the limiting factor for AI value isn’t model intelligence — it’s how agents are built and run inside organizations. 

The supply side of AI just admitted that the demand side is the hard problem. Capgemini's chief strategy and development officer put it simply: if it was a walk in the park, OpenAI would have done it by themselves. 

But consulting firms reorganize. They redesign operating models, restructure workflows, retrain teams. What they don’t produce is continuous governance infrastructure for every AI interaction with enterprise IP. They can tell you how to deploy AI. They cannot tell you what AI did with your IP after you deployed it. 

The National Bureau of Economic Research quantified the gap. In a survey of nearly 6,000 CEOs, CFOs, and executives across the U.S., U.K., Germany, and Australia, 70% of firms actively use AI. The technology is deployed. More than 80% report no measurable impact. Yet the same executives forecast significant productivity gains ahead. They believe the value is there. They can't see it. Yet. 

You cannot measure what you cannot see. You cannot value what you cannot track. And you cannot compound an asset that lives in the Wild.

The Pattern: Governed Assets Producing Ungoverned Derivatives 

After 2008, the response was structural. Regulators mandated transparency. The market built clearing houses, derivative reporting, and new standards. It cost trillions to learn the lesson. But the infrastructure was built because the alternative — governed assets producing ungoverned derivatives at scale, with no visibility into what was inside them — was unsustainable. 

IP is currency. AI’s interactions with IP create transactions. Those transactions are producing derivatives at creation speed. The governance infrastructure was built for transmission speed. 

The gap between them is the Wild. And the Wild is growing with every interaction. 

The pattern is familiar. Every speed increase in history required new governance infrastructure, and every time, the infrastructure was built. Every episode of governed assets producing ungoverned derivatives at scale ended the same way — with the infrastructure that should have existed before the crisis. 

The question isn’t whether the infrastructure will be built. It will be. The question is whether your enterprise begins building governance disciplines now, or waits for the cost of operating without them to arrive first. 

If litigation, regulators, or your board asked today: 

  • What IP did this AI system consume? 

  • Was it authoritative? 

  • What derivative did it produce? 

  • Who acted on it? 

  • What was the outcome? 

Could you answer?  

Most enterprises cannot. And until they can, AI is operating in the Wild. 
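The five questions above map directly onto a record structure. A minimal sketch, with all field names and values hypothetical, of what one ledger entry per AI interaction could look like:

```python
from dataclasses import dataclass, asdict

# Hypothetical sketch: one ledger entry per AI interaction, one field per
# audit question from the list above. All names and values are illustrative.
@dataclass
class AITransaction:
    consumed_ip: str     # What IP did this AI system consume?
    authoritative: bool  # Was it the approved, current version?
    derivative: str      # What derivative did it produce?
    acted_on_by: str     # Who acted on it?
    outcome: str         # What was the outcome?

txn = AITransaction(
    consumed_ip="return-policy v2.4, approved 2024-09",
    authoritative=True,
    derivative="customer response #88213",
    acted_on_by="support agent queue",
    outcome="refund issued within policy",
)

# If every AI interaction wrote a record like this, answering litigation or a
# board inquiry becomes a lookup instead of a forensic reconstruction.
print(asdict(txn))
```

The structure is deliberately unremarkable. The gap is not that such a record is hard to design; it is that almost no enterprise writes one.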


Next in the series:  

The price tag. The $1.5 Billion Wake-Up Call: the market price of ungoverned AI, set in courtrooms and attorney general offices. 

 

This is Post 2 of the "From Currency to Compounding" series on enterprise AI governance.  

Read Post 1: IP as Currency — The Missing Half of AI Governance

About the Author 

Ken Herfurth is the Founder and CEO of Ander, a performance intelligence company. With 30 years in engineering and C-suite roles across financial services and technology, he has spent more than two decades building and operating enterprise IP systems. He writes about demand-side AI governance as an operator, with a firsthand view of the widening gap between what AI vendors promise and what enterprises can actually control. 

 

Key Terms Defined 

Enterprise IP (Intellectual Property): 
The governed operational knowledge of an organization: policies, procedures, product specifications, training content, compliance documentation, brand standards, and institutional knowledge. 

Derivative IP: 
Any new content, recommendation, decision, or output generated by AI systems based on enterprise IP inputs. 

Creation Speed: 
The rate at which AI generates new derivative outputs. Faster than human review cycles can govern. 

Risk Exposure (in AI Governance): 
The inability to trace what IP was consumed, how it was transformed, and what actions resulted. 

 

Frequently Asked Questions 

What is enterprise AI governance? Enterprise AI governance is the infrastructure that tracks how AI systems consume intellectual property, generate derivatives, and influence decisions across the organization. 

Why can't enterprises see what AI is doing with their intellectual property? Governance in most enterprises ends at the point of approval: when content is reviewed, cleared, and published. There is no infrastructure governing what happens after: which version AI accesses, what it produces from that content, or how those derivatives propagate across the organization. AI operates at creation speed; enterprise governance was built for transmission speed. The gap between them is where most AI output lives — untracked and unconnected to its source. 

What is an ungoverned AI derivative? An ungoverned AI derivative is any content, recommendation, decision, or output that an AI system produces from enterprise IP without an auditable chain connecting it to the governed source material. Examples include AI-generated customer responses, strategic summaries, training content, and proposals — outputs that were never reviewed, classified, or recorded, but that can carry legal and reputational liability.