IP as Currency: The Missing Half of AI Governance

Part 1 of 10 in From Currency to Compounding: The Enterprise AI & IP Governance Series. This opening article establishes the central thesis of enterprise AI governance: intellectual property is your currency, and AI either compounds it or dilutes it.

In This Article, You’ll Learn: 

  • Why most enterprise AI governance efforts are structurally incomplete, and what’s missing from the equation. 

  • How liability, insurance exclusions, and landmark cases are redefining who owns AI risk, and why deployers can no longer assume insulation from vendor accountability. 

  • Why intellectual property is not just content, but enterprise currency — and what happens when AI compounds it without transaction-level governance. 

Every economic inflection point reveals the difference between activity and infrastructure. In 1740, fire exposed the fragility of pooled risk without governance. Today, AI is exposing the fragility of intellectual property without transaction discipline. Enterprises are deploying generative systems at scale, but the governance architecture for what those systems consume and produce has not caught up. This article begins at that inflection point with a structural thesis: intellectual property is your currency, and until it is governed as such, AI will multiply uncertainty before it multiplies value. 

AI is in its Charleston moment 

In 1740, a fire destroyed over 300 buildings in Charleston, South Carolina — and with them, America’s first attempt at insurance. The Friendly Society for the Mutual Insuring of Houses Against Fire had been pooling premiums since 1735. But it had no mechanism to assess risk, no standards for the properties it covered, and no capital structure to absorb a catastrophic loss. When the claims came, members couldn’t pay. The company dissolved. 

Twelve years later, Benjamin Franklin studied that failure and built something different. The Philadelphia Contributionship didn’t just collect premiums. It sent surveyors to inspect buildings before insuring them. It set standards still relevant today. It refused to cover properties that didn’t meet them. It governed the risk before it priced the risk. 

That company is still operating today. It’s the oldest property insurer in America. 

AI is in its Charleston moment.

The Insurance Industry Just Sent a Signal 

In January 2026, the Verisk ISO endorsements went into effect — CG 40 47, CG 40 48, CG 35 08. These endorsements give insurers the ability to exclude generative AI claims from standard commercial general liability policies. The filing was announced in July 2025. Adoption was reported by October. By January, the exclusions were live.  

And the trajectory is broadening — W. R. Berkley has implemented an absolute exclusion covering any use or deployment of AI in any form across its D&O, E&O, and fiduciary liability products. 

The insurance industry didn’t do this on a whim. They did it because they can’t price what they can’t see. And right now, they can’t see what’s happening inside the enterprises deploying AI. 

The Liability Shift Nobody Prepared For

Here’s what most enterprise leaders still haven’t internalized: the courts are not confining liability to the companies that build AI. They are extending it to the companies that deploy it.

Key cases establishing deployer liability:

In Mobley v. Workday, a federal court ruled that Workday — a leading human capital management platform whose AI-powered screening tools are used by employers among its more than 11,000 customers — can be held liable as an agent performing a traditional hiring function on behalf of its employer-customers. In May 2025, the court granted conditional certification of a nationwide collective action, opening the case to every rejected applicant over 40 since September 2020. The principle cuts both directions: the vendor that builds the AI carries liability, and the employer that deploys it cannot escape liability by delegating the decision to a third party. Employer-specific suits are expected to follow.

California’s employment regulations, effective October 2025, make it explicit: employers are responsible for discriminatory decisions made by automated systems, even when those systems are provided by third-party vendors. 

The pattern isn’t limited to the United States.  

A Canadian tribunal held Air Canada liable for its chatbot’s misrepresentation — ruling that all content on a company’s platform, whether generated by a human or an AI, is the company’s responsibility. Regulatory frameworks from the EU to individual U.S. states are converging on the same principle. The enterprise that deploys the agent owns the outcome. 

In Bartz v. Anthropic, the parties agreed to a $1.5 billion settlement — the largest publicly reported copyright recovery in U.S. history — covering roughly 500,000 pirated works acquired from shadow libraries and used as AI training data. 

The liability exposure doesn’t end with training data. When enterprises deploy AI on content whose provenance, attribution, and rights haven’t been established — and most haven’t — every derivative the AI produces inherits that uncertainty. 

The consensus is global and accelerating. 

The Supply Side Has a Standard. The Demand Side Does Not. 

What’s remarkable about the current moment is how much intelligent infrastructure is being built on the supply side. Companies like AIUC are creating rigorous certification frameworks for AI agents — testing them for security, safety, and reliability before enterprises deploy them. MITRE, Cisco, UiPath, and Stanford are contributing to standards that will likely become the SOC-2 equivalent for AI systems. This is exactly the right work at exactly the right time. 

But here’s the structural gap: certifying that an AI agent is safe tells you nothing about whether the enterprise is using it responsibly. 

An AI agent can pass every adversarial test in a lab environment and still cause catastrophic harm in production — not because the agent failed, but because the enterprise fed it ungoverned content, lacked a designated source of truth, had no mechanism to track derivative actions, or employed humans who couldn’t distinguish between a confident hallucination and a valid recommendation. 

The supply side has a standard. The demand side — every enterprise deploying these agents in operations that touch customers, employees, and partners — has nothing.

IP is Enterprise Currency: The Core Thesis

Here’s the thesis I’ve been building toward for years, and it’s never been more urgent: 

Enterprise intellectual property is currency. 

Not metaphorically. Structurally. Every enterprise’s most valuable and most vulnerable asset is the knowledge, content, processes, and data that AI systems now consume and transform at scale. Product specifications. Safety protocols. Compliance documentation. Learning and training materials. Customer-facing language. Brand guidelines. Legal-approved terms. 

All of this is a company’s IP. And when AI interacts with it, those interactions are a continuous flow of transactions. 

According to Ocean Tomo’s 2025 Intangible Asset Market Value study, intangible assets — the knowledge, intellectual property, processes, and human capital that define an enterprise — now constitute 92% of S&P 500 market capitalization. In 1975, that number was 17%. The value of the modern enterprise is its IP and the people who create and operate on it. When AI interacts with those assets without governance, the enterprise cannot distinguish value creation from value dilution — and neither can its auditors, its insurers, or its board. 

Just as financial currency needs infrastructure to transact safely — exchanges, clearinghouses, audit trails, risk scores — IP consumed by AI needs the same infrastructure. Without it, every AI interaction with enterprise IP is an ungoverned transaction. No rules. No audit trail. No risk scoring. No way for an underwriter to price the risk.

That’s why insurers are excluding AI from standard policies. Not because AI is inherently dangerous, but because the transaction infrastructure producing the necessary governance data doesn’t exist yet. 

What AI Governance Actually Requires 

This isn’t theoretical. Here’s what operational governance actually requires — and what a recent article from MIT Sloan identified as the critical gap in enterprise AI deployment: 

Source of Truth Designation. Most enterprises have IP scattered across a broad range of systems accumulated over years of technology decisions, acquisitions, and departmental preferences. They may have designated sources of truth — but those sources often conflict with each other across departments and systems, with no mechanism to reconcile what’s current. And even when the source is clean, the derivatives produced from it overwhelm the capacity of humans to review. AI doesn’t reduce either problem. It multiplies both. 

Cross-System IP Governance. Your enterprise IP lives across dozens of systems — DAMs, learning platforms, file shares, content management systems, email, collaboration tools. Each manages IP assets in isolation. None of them knows that the product spec in one system contradicts the updated safety protocol in another, or that a training manual references a superseded compliance procedure. And none can tell you which version your AI agent consumed when it generated that customer response. Governance requires a layer that sits across all your IP systems, establishes what is true, maps the relationships between artifacts, and tracks what AI consumes.
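The cross-system layer described above can be sketched in miniature: a registry that records which version of each artifact is current in each system, and which versions other artifacts were written against. All system names, artifact identifiers, and the registry shape here are hypothetical assumptions; this is a sketch of the idea, not an implementation.

```python
# Hypothetical cross-system registry keyed by (system, artifact_id).
# Each entry records the artifact's current version and any versioned
# references it makes to artifacts held in other systems.
registry = {
    ("dam", "product-spec-104"): {"version": 3, "references": []},
    ("lms", "training-manual-7"): {
        "version": 1,
        # This manual was written against version 2 of the product spec,
        # which the DAM has since superseded with version 3.
        "references": [("dam", "product-spec-104", 2)],
    },
}

def stale_references(registry):
    """Return artifacts that cite a superseded version of another artifact."""
    stale = []
    for key, entry in registry.items():
        for system, artifact, cited_version in entry["references"]:
            current = registry.get((system, artifact), {}).get("version")
            if current is not None and cited_version < current:
                stale.append({
                    "artifact": key,
                    "cites": (system, artifact),
                    "cited_version": cited_version,
                    "current_version": current,
                })
    return stale
```

Running `stale_references(registry)` flags the training manual: it cites version 2 of a spec whose current version is 3 — exactly the kind of cross-system contradiction no single system can see on its own.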

Human Competency Measurement. When AI recommends and a human acts, the outcome depends on whether that human was equipped to interpret the recommendation, authorized to act on it, and exercised independent judgment in doing so. Did they have the domain knowledge to evaluate the output — or did they outsource their thinking to a confident-sounding answer? Did they have the authority to make that decision? Did they override appropriately? Did they escalate when confidence was low? Did they recognize a hallucination? This isn’t about training people to use AI. It’s about measuring whether the humans in the loop have the competence, the authority, and the critical thinking to operate safely in an AI-augmented environment. 
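One way to make that measurement concrete is to mine human-in-the-loop event logs for behavioral signals: did the human escalate when the AI's confidence was low, and how often did they accept output that later proved wrong? The event fields, the confidence threshold, and the metric names below are illustrative assumptions, not an established standard.

```python
def competency_signals(events, low_confidence=0.5):
    """Summarize two behavioral signals from human-in-the-loop event logs.

    Each event is assumed to carry 'ai_confidence' (0-1), 'human_action'
    ('accept', 'override', or 'escalate'), and 'outcome' ('correct' or
    'incorrect'). Both the schema and the 0.5 threshold are illustrative.
    """
    low = [e for e in events if e["ai_confidence"] < low_confidence]
    escalated = sum(1 for e in low if e["human_action"] == "escalate")
    # "Blind accepts": the human accepted AI output that turned out wrong.
    blind_accepts = sum(
        1 for e in events
        if e["human_action"] == "accept" and e["outcome"] == "incorrect"
    )
    return {
        "low_confidence_escalation_rate": escalated / len(low) if low else None,
        "blind_accept_rate": blind_accepts / len(events) if events else None,
    }
```

A high blind-accept rate is the log-level fingerprint of a human who outsourced their thinking to a confident-sounding answer; a healthy escalation rate on low-confidence output is the opposite.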

Derivative Action Tracking. When AI interacts with enterprise IP, it produces derivatives — recommendations, decisions, generated content, automated actions. Depending on the level of autonomy, the AI may recommend and a human decides. The AI may act and a human reviews. A business system may execute with no human touchpoint at all. In every case, human-in-the-loop governance applies — but the audit trail must capture the full chain: what IP was consumed, what the AI produced, what action was taken, by whom or by what system, and what outcome resulted. This is the same chain-of-custody infrastructure that pharmaceutical and food safety regulations have required for decades. The difference is that now it applies to intellectual property — and almost no enterprise can produce it today. When litigation or regulatory inquiry demands it — and cases like Mobley v. Workday show that it will — the enterprise that lacks this infrastructure cannot comply with discovery, let alone demonstrate governance. 
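The chain of custody described above maps naturally to a per-transaction audit record: what IP was consumed, what was produced, what action followed, by whom or what, and with what outcome. A minimal sketch follows; the field names and example values are illustrative assumptions, not a production schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DerivativeAction:
    """One link in a chain-of-custody audit trail for an AI transaction.

    All field names here are illustrative; a production schema would be richer.
    """
    ip_consumed: list   # (artifact_id, version) pairs the AI read
    output_hash: str    # fingerprint of what the AI produced
    action: str         # what was done with the output
    actor: str          # human user ID, or a system ID if fully autonomous
    autonomy: str       # "recommend", "act-with-review", or "autonomous"
    outcome: str = "pending"  # recorded after the fact
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example: an AI-drafted customer response, sent after human review.
record = DerivativeAction(
    ip_consumed=[("product-spec-104", 3)],
    output_hash="sha256:demo-digest",  # placeholder, not a real digest
    action="customer-response-sent",
    actor="user:4412",
    autonomy="act-with-review",
)
```

Whatever the storage substrate, the point is the same one pharmaceutical and food-safety chains of custody settled decades ago: each record must survive as evidence, because discovery will ask for exactly this chain.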

The Complete Equation 

The AI economy needs two things to function safely at scale: certified agents and governed enterprises. Supply-side standards and demand-side infrastructure. The agent needs to be trustworthy. The enterprise needs to be responsible. 

When you have both — when you can see how the agent was built AND how the enterprise is using it — you have the most complete risk picture available. And that’s the picture an underwriter needs to price the risk, an enterprise needs to demonstrate compliance, and a regulator needs to evaluate accountability. 

But governance infrastructure isn’t only defensive. The enterprise that can see how AI interacts with its IP — what’s consumed, what’s produced, what’s working — can measure and compound the value of every asset AI touches. Risk and value are two views through the same lens. Governance produces both. 

One without the other is incomplete. Certifying the agent without governing the enterprise is like certifying an aircraft without licensing the pilot. Licensing the pilot without certifying the aircraft is equally dangerous. Aviation is the safest industry in the world because it governs both sides. AI needs the same discipline. 

The complete equation is what makes AI insurable. And insurable AI is what unlocks the next wave of adoption — not recklessly, not paralyzed by liability fear, but with the confidence infrastructure that every transformative technology in history has eventually required. 

The insurance industry didn’t respond to the automobile with prohibition. They built the infrastructure — the Insurance Institute for Highway Safety, crash testing, risk scoring — that made an inherently dangerous technology safe enough to transform civilization. Franklin didn’t respond to fire with bans. He built the governance infrastructure that made property insurable — and that infrastructure is still standing 274 years later. 

We’re at that same inflection point with AI. The supply side is being built. The demand side is not. And the gap is widening faster than most enterprises realize. 

Next in the Series 

What AI Is Actually Doing with Your IP: why most enterprises cannot answer the simplest visibility questions about their own deployments.

 

About the Author 

Ken Herfurth is the Founder and CEO of Ander, a performance intelligence company. With 30 years in engineering and C-suite roles across financial services and technology, he has spent more than two decades building and operating enterprise IP systems. He writes about demand-side AI governance as an operator, with a firsthand view of the widening gap between what AI vendors promise and what enterprises can actually control. 

 

Frequently Asked Questions About AI and IP Governance 

Is AI governance the same as model governance? 

No. Model governance evaluates how an AI system was built — bias testing, security controls, reliability benchmarks, and performance thresholds. IP governance evaluates how your enterprise uses that system — what intellectual property it consumes, what derivatives it produces, and whether those interactions are traceable and defensible. You need both. One without the other is incomplete. 

 

Who is liable when enterprise AI causes harm? 

Courts are increasingly assigning responsibility to the enterprise deploying the AI — not just the vendor that built it. If your organization uses AI in hiring, customer communications, underwriting, or compliance decisions, you own the outcome. Delegating the decision to a model does not delegate accountability. 

 

Why are insurers excluding generative AI from coverage? 

Because insurers price risk based on visibility. If an enterprise cannot produce a transaction-level record of what AI consumed, produced, and executed, the risk cannot be quantified. And what cannot be quantified cannot be priced. Exclusions are not ideological. They are actuarial. 

 

What does “IP as currency” actually mean? 

It means your intellectual property (policies, procedures, training materials, brand standards, product documentation, proprietary data) is the asset base AI operates on. When AI interacts with it, it creates transactions and derivatives. Without infrastructure to track those transactions, you cannot measure value creation or risk exposure. Currency ungoverned is currency at risk.