For years, cybersecurity and privacy conversations have revolved around a familiar acronym: PII - Personally Identifiable Information. Regulations, breach notifications, security programs, and tooling have all been designed to protect data that can identify an individual. Names, Social Security numbers, health records, financial data. We know what PII is, why it matters, and what happens when it’s exposed.
But the rise of generative AI has quietly created a new and largely unaddressed category of risk. One that doesn’t neatly fit into existing regulatory definitions, yet may be just as damaging when mishandled.
It’s time we name it.
I propose a new term: BII - Business Identifiable Information.
What Is BII?
Business Identifiable Information (BII) is non-public information that uniquely identifies, describes, or provides operational insight into an organization—its strategy, operations, customers, intellectual property, or decision-making processes.
BII isn’t about identifying a person. It’s about identifying the business.
Examples include:
- Strategic plans, roadmaps, and partnership or acquisition discussions
- Customer lists, contract terms, and pricing
- Proprietary source code, designs, and other intellectual property
- Internal operational data, processes, and playbooks
- Records of how and why key business decisions are made
None of this is PII. Much of it isn’t regulated by privacy law. Yet exposure of this information can result in competitive harm, contractual violations, legal liability, reputational damage, or security compromise.
And today, BII is being exposed in a new way, often unintentionally, through everyday employee use of AI tools.
AI Has Changed the Data Risk Equation
Tools like Microsoft Copilot, Google Gemini, and ChatGPT are rapidly becoming embedded in daily workflows. Employees use them to summarize documents, draft emails, analyze spreadsheets, troubleshoot issues, and generate code. In isolation, each use case seems harmless. In aggregate, they represent a massive new data egress channel.
The problem isn’t malicious intent. It’s convenience combined with weak data governance.
Employees paste internal documents into AI prompts. They upload spreadsheets to “get insights.” They ask AI to rewrite customer communications or explain internal processes. Often, they don’t fully understand where that data goes, how it’s retained, or whether it may be used to train models.
When permissions are overly broad, data classification is immature, and AI usage policies are vague or nonexistent, BII quietly leaks outside the organization’s control plane.
No breach. No alert. No incident response.
Just exposure.
Why BII Matters More Than Ever
Traditional security models focus on keeping attackers out. BII risk is different. It assumes the user is authorized, the action is intentional, and the tool is approved or at least tolerated.
That makes BII exposure harder to detect and harder to govern.
Unlike PII breaches, BII exposure may not trigger regulatory notifications. But the business impact can be just as severe:
- Competitive harm when strategy, pricing, or roadmaps leak
- Breach of contracts, NDAs, or customer commitments
- Legal liability
- Reputational damage with customers and partners
- Security compromise when architecture details or source code are exposed
In many cases, organizations don’t even realize what they’ve exposed until the consequences surface later.
Rethinking Governance for the AI Era
Recognizing BII as a distinct risk category forces a necessary shift in thinking. It moves the conversation beyond compliance checklists and toward intentional information governance.
That means:
- Extending data classification beyond personal data to cover strategic, operational, and customer information
- Tightening overly broad permissions so AI tools can only reach what they should
- Replacing vague or nonexistent AI usage policies with clear, practical guidance for employees
- Building visibility into where business information goes once it leaves internal systems
Most importantly, it requires leadership to acknowledge that not all sensitive data is personal and not all damaging exposure is regulated.
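To make the idea of a technical guardrail concrete, here is a minimal, purely illustrative sketch in Python of a pre-prompt check that looks for internal classification markers before text is sent to an external AI tool. The marker strings, function names, and block-or-allow behavior are all assumptions made for illustration; a real program would rest on proper data classification and DLP tooling rather than simple pattern matching.

```python
# Illustrative sketch only: a hypothetical pre-prompt guard that checks text
# for internal classification markers before it is sent to an external AI tool.
# The marker strings and the block/allow policy below are assumptions, not a real product.
import re

# Hypothetical classification labels an organization might stamp on internal content.
BII_MARKERS = [
    r"\bINTERNAL ONLY\b",
    r"\bCONFIDENTIAL\b",
    r"\bCUSTOMER CONTRACT\b",
    r"\bROADMAP\b",
]

def find_bii_markers(prompt_text: str) -> list[str]:
    """Return any classification markers found in text bound for an AI prompt."""
    return [m for m in BII_MARKERS if re.search(m, prompt_text, re.IGNORECASE)]

def guard_prompt(prompt_text: str) -> bool:
    """Allow the prompt only if no markers are present; otherwise block it."""
    hits = find_bii_markers(prompt_text)
    if hits:
        print(f"Blocked: prompt contains classified content markers {hits}")
        return False
    return True

if __name__ == "__main__":
    guard_prompt("Summarize this CONFIDENTIAL customer contract for me.")  # blocked
    guard_prompt("Draft a polite follow-up email to a vendor.")            # allowed
```

The point is not the pattern matching itself but where the control sits: between the employee’s convenience and the organization’s control plane.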
Naming the Risk Is the First Step
Security leaders often say, “You can’t protect what you don’t understand.” The same applies here.
By naming Business Identifiable Information, we give organizations a shared language to discuss a growing blind spot in their risk models. BII helps explain why AI-related data exposure feels dangerous even when no PII is involved and why existing controls often fall short.
PII will always matter. But in an AI-driven workplace, BII may be the more immediate, more pervasive, and more underestimated risk.
Organizations that recognize and address BII now will be far better positioned to harness AI safely without sacrificing control of the very information that defines the business.