Corporate policies on HR, Financial & Legal data in ChatGPT, Claude & Gemini
Based on research across 50+ industry sources · Cisco · McKinsey · Netskope · LayerX · Cyberhaven · OpenAI · Anthropic · Google
Despite policies, 47% of employees use unsanctioned personal AI accounts for work. The average enterprise experiences 223 sensitive-data incidents per month, double 2024 levels.
Enterprises are rapidly migrating from public LLMs to private/enterprise deployments. Legal and financial firms now lead this shift, driven by confidentiality, compliance, and governance requirements.
Source: Bloomberry analysis of 76,000 companies (Oct 2025)
Source: Cisco 2024 Data Privacy Benchmark Study (2,600 organizations globally)
Most enterprises classify data into four tiers to determine what may be used with frontier LLMs:

Tier 1 · Public
  HR: Job descriptions, public org charts, published DEI stats
  Financial: Published earnings, public SEC filings, investor presentations
  Legal: Public case law, published regulations, public court filings

Tier 2 · Internal
  HR: Anonymized workforce stats, policy documents, general onboarding materials
  Financial: Budget templates (no figures), generic financial models, process workflows
  Legal: Generic contract templates, standard legal process docs, compliance checklists

Tier 3 · Confidential
  HR: Salaries, disciplinary records, health/benefits data, performance reviews
  Financial: Forecasts, investor materials, M&A data, non-public earnings figures
  Legal: Client matters, privileged communications, settlement negotiations

Tier 4 · Restricted
  HR: Biometric data, medical records, Social Security numbers, immigration status
  Financial: Insider-trading-sensitive MNPI, audit findings, card/payment transaction data
  Legal: Privileged attorney comms, sealed case material, regulatory investigation details
Samsung: Full ban after engineers uploaded sensitive source code to ChatGPT (2023). Now building secured internal AI environments.
Restricted employee use of ChatGPT to prevent leakage of confidential product information. Building internal AI tools.
Restricted employee use of ChatGPT. Building internally governed AI solutions while prohibiting external sensitive data sharing.
Blocked ChatGPT from corporate systems to prevent loss of control over customer data and source code.
Blocked public AI tools outright. As a defense contractor, all sensitive national security data must stay fully isolated.
No GenAI during coding; no company or client data may be uploaded to GenAI tools without explicit permission from leadership.
Amazon: Warned employees not to share confidential code or data with external AI. Promotes its internal CodeWhisperer tool instead.
Temporarily blocked ChatGPT system-wide while building governance framework. Conditionally approved Google Cloud Platform AI.
Cited "unacceptable risks" from ChatGPT. Exploring Microsoft Copilot and Google Gemini in controlled, in-tenant environments instead.
Many Am Law 100 firms restrict client data in public LLMs while adopting private legal AI tools such as Harvey AI and Casetext.
Despite corporate bans, employees routinely share sensitive data through personal, unmonitored AI accounts
Source: Cyberhaven 2024 Shadow AI Report · % of total sensitive data flowing to unapproved AI tools
ChatGPT (consumer): May be used for model training (opt-out available). Must never be used for sensitive enterprise data.
ChatGPT Enterprise / API: NOT used for training by default. Zero Data Retention available. Customer-controlled encryption keys (EKM).
Claude (consumer): As of 2025, consumer chats are used for training unless you opt out. Opting in extends retention significantly.
Claude Enterprise / API: NOT used for training by default. Enterprise and API channels are fully isolated from consumer data handling.
Gemini free/consumer: data may be reviewed by humans. 94.4% of workplace Gemini usage is via personal accounts.
Vertex AI / Google Workspace: data not used for model training. Full in-tenant processing within your Google environment.
GDPR (EU): Strict rules on processing employee personal data. Requires an explicit legal basis. Applies cross-border.
CCPA/CPRA (California): Employees have rights over their personal data. Cannot share without consent. Applies to all CA employees.
HIPAA: Health data (even in HR contexts such as benefits and medical leave) is protected. Requires a BAA with AI vendors.
EEOC guidance (US): AI use in hiring must not discriminate. LLMs used for resume screening or hiring carry legal bias liability risk.
GDPR: Requires a legal basis for processing; data minimization; purpose limitation. Applies to employee data in LLMs. Strict cross-border transfer restrictions.
EU AI Act: High-risk AI systems (HR, credit scoring, legal) face strict transparency and human-oversight requirements. Enforcement expanding 2025–2026.
HIPAA: Applies to HR health/benefits data. Requires a BAA with any AI vendor. OpenAI and Google both offer healthcare BAAs for enterprise tiers.
SOX: Controls over financial data access and processing. Using public LLMs for financial workflows may create audit and internal-controls deficiencies.
SEC Regulation FD: Prohibits selective disclosure of MNPI. Inputting material non-public financial information into AI tools (which may involve human reviewers and retained logs) may constitute improper disclosure.
CCPA/CPRA: Employees and customers have data rights. California now explicitly includes employees under consumer privacy rights. Applies to all CA-based employees.
The market is bifurcating: public tools for general tasks, private/enterprise deployments for anything sensitive
Attorney-client privilege and client financial confidentiality can be inadvertently waived by disclosure to a third-party LLM
Must demonstrate full audit trails of decisions made and data accessed
AI hallucinations in contracts or financial advice carry direct legal and financial liability
Client data residency requirements demand in-jurisdiction, in-perimeter processing
Map all HR, financial, and legal data to Public / Internal / Confidential / Restricted tiers. Publish and enforce clear rules: Restricted data never enters any external LLM API.
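To make that rule concrete, here is a minimal Python sketch of such a tier gate; the category names, tier assignments, and helper function are hypothetical illustrations, not a standard taxonomy.

```python
from enum import IntEnum

class Tier(IntEnum):
    PUBLIC = 1        # e.g., job descriptions, published earnings
    INTERNAL = 2      # e.g., policy documents, budget templates
    CONFIDENTIAL = 3  # e.g., salaries, forecasts, client matters
    RESTRICTED = 4    # e.g., SSNs, MNPI, privileged attorney comms

# Hypothetical category-to-tier map; a real one comes from your data inventory.
CATEGORY_TIERS = {
    "job_description": Tier.PUBLIC,
    "policy_document": Tier.INTERNAL,
    "performance_review": Tier.CONFIDENTIAL,
    "privileged_communication": Tier.RESTRICTED,
}

def allowed_destination(category: str) -> str:
    """Apply the rule above: Restricted data never enters any external LLM API."""
    tier = CATEGORY_TIERS.get(category, Tier.RESTRICTED)  # default-deny unknowns
    if tier is Tier.RESTRICTED:
        return "none"              # never leaves the perimeter
    if tier is Tier.CONFIDENTIAL:
        return "private_llm_only"  # in-tenant or on-premise deployments only
    return "enterprise_llm_ok"     # sanctioned enterprise accounts

print(allowed_destination("performance_review"))  # -> private_llm_only
```

Defaulting unknown categories to Restricted keeps the gate fail-safe: anything not yet classified is treated as if it must stay inside the perimeter.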
73.8% of ChatGPT workplace usage happens through personal accounts. Provision enterprise accounts for all employees. Block personal AI use on corporate devices via DLP tools.
For HR PII, financial MNPI, and legal privileged data, use private/on-premise LLMs (Azure OpenAI, AWS Bedrock VPC, self-hosted Llama). Keep sensitive data inside your perimeter.
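As one sketch of the in-perimeter pattern, the snippet below calls an Azure OpenAI deployment through the openai Python SDK, so requests go to your own Azure resource rather than a consumer service; the endpoint, deployment name, and API version are placeholders to replace with your resource's values.

```python
import os
from openai import AzureOpenAI  # openai >= 1.x SDK

# Placeholder endpoint and key — point these at your own Azure resource.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # example GA version; use your resource's version
)

response = client.chat.completions.create(
    model="your-gpt4o-deployment",  # the *deployment* name, not the model family
    messages=[{"role": "user", "content": "Summarize this internal policy draft."}],
)
print(response.choices[0].message.content)
```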
Create a cross-functional AI governance board (Legal, IT, HR, Finance, Compliance). Define RACI clearly. Only 9% of enterprises had mature AI governance in 2024, making this a competitive differentiator.
Only 24% of employees received any AI training. Staff must understand: what constitutes sensitive data, the difference between consumer and enterprise AI, and the legal consequences of shadow AI.
Deploy Data Loss Prevention tools that detect and block sensitive data before it reaches AI APIs. Microsoft Purview + Defender for Cloud Apps is a widely deployed enterprise stack for this purpose.
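As a toy illustration of that pre-filter idea (nowhere near Purview-class coverage), here is a Python sketch that scans outbound prompts for a few sensitive-data patterns before any AI API call; the regexes and function names are simplified assumptions.

```python
import re

# Illustrative patterns only — production DLP uses far richer classifiers.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "salary_figure": re.compile(r"(?i)\bsalary\b.{0,20}\$\s?\d[\d,]+"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

def guarded_send(prompt: str, send):
    """Refuse the outbound AI call if the prompt trips any pattern."""
    hits = scan_prompt(prompt)
    if hits:
        raise PermissionError(f"Blocked by DLP pre-filter: {hits}")
    return send(prompt)

print(scan_prompt("Her salary is $185,000 and SSN is 123-45-6789"))
# -> ['ssn', 'salary_figure']
```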
Use CASB and network monitoring to discover unapproved AI tools. The average enterprise has 1,200 unauthorized apps. Treat shadow AI discovery as an ongoing program, not a one-time exercise.
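A minimal sketch of the discovery side, assuming proxy logs can be exported as CSV with user and host columns (a hypothetical schema) and a hand-maintained domain list; in practice a CASB app catalog would drive this.

```python
import csv
from collections import Counter

# Assumed watchlist of consumer AI endpoints; extend from CASB app catalogs.
AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "claude.ai", "gemini.google.com"}

def shadow_ai_report(proxy_log_csv: str) -> Counter:
    """Count requests per user to known consumer AI endpoints."""
    hits = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["host"] in AI_DOMAINS:
                hits[row["user"]] += 1
    return hits

# Example: the ten heaviest personal-account users, prioritized for outreach.
for user, count in shadow_ai_report("proxy.csv").most_common(10):
    print(user, count)
```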
For HIPAA-adjacent HR data, require Business Associate Agreements. For all vendors, sign Data Processing Addenda. Verify SOC 2 and ISO 27001 certifications. Audit sub-processors annually.