
Your Data Lakehouse Is Now Your AI Data Plane. Is the Governance Designed In?

May 5, 2026 | 3 Min Read | AI and ML

Most enterprises are building their AI programs on top of a data foundation that was never designed to be governed at AI speed.

Data lakehouses, including Databricks, Snowflake, and Delta Lake, have become the default substrate for enterprise AI and analytics. They are where training data lives, where RAG pipelines retrieve context, where Copilots and agentic workflows find the information they need to function. That consolidation is genuinely useful. But it creates a governance problem that most security and compliance teams are still working to solve: sensitive data accumulates in lakehouses faster than classification and access controls keep up, and AI systems inherit whatever access their underlying service accounts allow.

What is the data lakehouse AI governance problem?

The data lakehouse AI governance problem is the gap between what AI systems can technically access in a lakehouse environment and what security teams have classified, controlled, and approved for AI use. This gap exists because lakehouses were originally designed for analytics and data science workloads — not as AI retrieval infrastructure. When GenAI copilots and agents were layered on top, they inherited access to years of accumulated data that was never evaluated through an AI risk lens.

In practical terms: a service account that powers a Databricks workflow may have read access to tables containing customer PII, financial records, and HR data accumulated over years of pipeline operations. An AI agent running under that service account can retrieve and surface any of it. The security team may have no visibility into what sensitive data those tables contain, because classification was never applied at lakehouse scale.


Why AI changes the stakes for lakehouse data security

Before generative AI, ungoverned sensitive data in a lakehouse was a slow-moving compliance risk. A human analyst who encountered data they should not have seen was an isolated incident with a clear remediation path. Generative AI changes the failure mode entirely.

When an AI agent traverses a knowledge base or a Copilot retrieves context from a lakehouse, it can synthesize sensitive information across thousands of records in milliseconds and surface it in a response — without any of the friction that would have slowed a human analyst. The blast radius of a single over-permissioned service account is no longer an audit finding. It is a live exposure in the AI systems running in production today.

The scale of this risk is now quantified. According to a March 2026 Check Point Research threat report, 1 in every 28 GenAI prompts poses a high risk of sensitive data leakage, and 91% of organizations using GenAI regularly are affected. A January 2026 Netskope Cloud and Threat Report found that GenAI data violations have more than doubled year-over-year, with personal cloud and unsanctioned GenAI tool usage creating policy violations that most security teams cannot detect without comprehensive cross-platform monitoring.

The AI security layer has to move upstream — to the data itself, before any model touches it.

What data lakehouse AI readiness requires

Data lakehouse AI readiness is the state in which an organization can continuously answer three questions about its lakehouse environment:

  1. What sensitive data exists in this environment, how is it classified, and where does it sit relative to AI system access?
  2. What can each AI system — Copilot, RAG pipeline, agent, fine-tuning job — actually reach, based on the underlying identity's permissions?
  3. When data moves between environments (Snowflake to Databricks, S3 to a Bedrock knowledge base), do security posture, access controls, and sensitivity classification travel with it?
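The first two questions can be sketched as a join between two inventories a DSPM-style scan might produce. Everything here — table names, identity names, the data shapes — is invented for illustration, not a real platform API:

```python
# Hypothetical sketch: answering questions 1 and 2 by joining a
# classification inventory with identity grants. All names are invented.

classification = {                      # Q1: what sensitive data exists, and where
    "prod.customers":   {"classes": ["PII"], "platform": "snowflake"},
    "prod.payments":    {"classes": ["PCI"], "platform": "databricks"},
    "analytics.events": {"classes": [],      "platform": "databricks"},
}

grants = {                              # Q2: what each AI identity can read
    "svc-copilot":      ["prod.customers", "analytics.events"],
    "svc-rag-pipeline": ["prod.payments"],
}

def sensitive_reach(identity):
    """Tables the identity can read that carry any sensitivity class."""
    return [t for t in grants.get(identity, ())
            if classification.get(t, {}).get("classes")]

print(sensitive_reach("svc-copilot"))   # ['prod.customers']
```

Question 3 (whether classification travels with data across environments) needs lineage tracking on top of this join, which a static snapshot like the one above cannot express.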

Most enterprise environments cannot answer all three today. Not because the data is hidden, but because classification was never applied at lakehouse scale, and access mapping was never built to track AI agent identities alongside human ones.

David Stuart, Sr. Director of Product Marketing at Sentra, describes the pattern consistently across enterprise conversations: "Security and compliance teams often become the bottleneck on large-scale AI programs — not because they want to slow things down, but because they genuinely cannot sign off on a program they cannot see clearly. The lakehouse is full of years of pipeline data. Nobody knows exactly what sensitive data is in there, how it's classified, or what the AI can actually reach."

How enterprises are closing the lakehouse AI governance gap

The organizations deploying AI fastest without creating compliance exposure share a specific set of data governance capabilities applied to their lakehouse environments.

Continuous, in-place classification at petabyte scale. Traditional classification approaches that copy data into a separate security platform break down at lakehouse scale, both economically and architecturally. Effective lakehouse governance requires classification that runs inside the environment where data already lives — scanning without moving data, maintaining classification continuously as new data arrives, and updating the inventory in near real time rather than on a quarterly audit cycle.

Context-aware classification, not pattern matching. Regex-based classification misses too much in the unstructured and semi-structured data that lakehouses accumulate — documents, log files, JSON records with embedded PII, free-text fields alongside structured columns. Classification models that understand the meaning of a document rather than just the patterns within it reach accuracy above 95%, which is the threshold compliance teams need to rely on the output without manual review.

Identity-to-data access mapping for AI agents. Standard access reviews map human identities to resources. AI agents operate under service principals and application identities that may have accumulated permissions across an entire lakehouse without any review. Mapping those identities to the sensitive data they can reach — and flagging where that reach exceeds what an AI use case requires — closes the blast radius gap that traditional IAM reviews miss.

Access that follows data lineage. When data moves from a production lakehouse into a RAG vector store or a fine-tuning dataset, the access controls from the source environment do not automatically transfer. Governance that tracks data lineage across those movements — and reconciles access in the destination environment — prevents classification and access controls from becoming stale the moment data flows to a new system.

Sentra's approach to lakehouse governance addresses all four capabilities. Scanning happens in place within the customer environment, so sensitive data never leaves the perimeter during classification. An independent third-party audit by Expedia validated Sentra's classification accuracy above 98%, and at scale Sentra scanned nine petabytes of data in under 72 hours without degrading accuracy. The first scan delivers real findings without configuration or custom rules.

Frequently asked questions: Data lakehouse AI security

What is the difference between lakehouse data security and traditional cloud data security (CNAPP/CSPM)? Traditional cloud data security focuses on storage configurations, bucket policies, and network controls. Lakehouse data security adds classification of the data itself at both the entity and file level — understanding which sensitive records exist within tables and files, and their business purpose — and maps that classification to AI system access. This matters because AI agents bypass application-level controls and retrieve data directly based on underlying identity permissions.

How does DSPM apply to data lakehouse environments? Data Security Posture Management (DSPM) applied to lakehouses continuously discovers and classifies sensitive data within Databricks, Snowflake, Delta Lake, and similar platforms, assesses its risk, then maps identified sensitive data to the identities that can access it. This gives security teams an accurate, current inventory of what AI systems can reach and where access creates risk — replacing periodic manual audits with continuous automated governance.

What sensitive data typically accumulates in enterprise lakehouses? In production lakehouse environments, scans commonly surface customer PII from CRM and event pipelines, financial records from transaction systems, health-adjacent information from HR and benefits platforms, authentication credentials in log files and configuration data, and internal documents ingested from file storage integrations. Most of these categories arrive through legitimate pipeline operations rather than misconfiguration.

How do you govern what AI agents can access in a lakehouse? Governing AI agent access in a lakehouse requires three steps: classifying the sensitive data in the environment so you know what exists, mapping the service accounts and application identities that AI agents run under to the data those identities can reach, and applying least-privilege controls and remediation to remove access that exceeds the AI use case's requirements. Continuous monitoring then flags when access drift occurs — when new data is added or permissions change — rather than waiting for the next periodic review.
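The "continuous monitoring" step above can be sketched as a diff between an approved baseline and today's grants for an AI agent's service identity. The identity and table names are illustrative, not from any real platform:

```python
# Hedged sketch of access-drift detection: compare today's grants against
# an approved baseline and flag anything new. Names are invented.

baseline = {"svc-agent": {"sales.crm_contacts"}}
today    = {"svc-agent": {"sales.crm_contacts", "hr.salaries"}}

drift = {ident: today[ident] - baseline.get(ident, set()) for ident in today}

for ident, added in drift.items():
    if added:
        # New, unreviewed access: a candidate for least-privilege remediation.
        print(f"{ident} gained unreviewed access to: {sorted(added)}")
```

Run on a schedule, a check like this flags drift when it happens rather than at the next periodic review.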

What is the cost of continuous lakehouse classification at petabyte scale? In-place, agentless classification at petabyte scale is approximately ten times more cost-efficient than approaches that extract and analyze data in a vendor's external infrastructure.

The question to answer before your next AI deployment

For most organizations, the honest answer to "is our lakehouse AI-ready?" is: not yet. The good news is that this is a solvable problem, and it is being solved in production environments today at petabyte scale.

The organizations moving fastest on AI are not the ones waiting until governance is perfect. They are the ones running continuous classification now so that governance improves in parallel with deployment — and so that when the compliance team asks what the AI can reach, there is a clear, current, accurate answer.

The question is not whether to govern the data underneath your AI program. The question is whether you do it before the first incident or after.

To see how Sentra maps sensitive data in lakehouse environments, schedule a demo.

David Stuart is Senior Director of Product Marketing for Sentra, a leading cloud-native data security platform provider, where he is responsible for product and launch planning, content creation, and analyst relations. Dave is a 20+ year security industry veteran having held product and marketing management positions at industry luminary companies such as Symantec, Sourcefire, Cisco, Tenable, and ZeroFox. Dave holds a BSEE/CS from University of Illinois, and an MBA from Northwestern Kellogg Graduate School of Management.


Latest Blog Posts

Yair Cohen
May 14, 2026 | 4 Min Read | Data Security

The OpenLoop Health Breach: An Aggregator's Inconsistent Data Security Exposes 716,000 Patients and 120+ Brands

The quick take: The OpenLoop Health breach isn't just another data leak. It's a massive failure in multi-tenant security. A single intrusion into a shared provider exposed 716,000 patients across 120 downstream healthcare companies.

One attack. One unauthorized session lasting less than 24 hours. Names, addresses, dates of birth, and medical records for 716,000 patients were exposed. A threat actor took this data from a company most patients had never heard of.

HHS confirmed the incident in May 2026. It occurred on January 7-8. OpenLoop provides the white-label clinical and operational infrastructure for telehealth brands like Remedy Meds and Fridays.

One breach. One shared layer. 120 separate companies affected.

What Happened: A Single Aggregation Point for 120 Downstream Brands

OpenLoop's business model is designed to be invisible. Healthcare companies use their platform to build virtual care programs. Patients interact with brands like JoinFridays, unaware that a shared backend aggregates their clinical data.

That model creates significant operational efficiency. It also creates a significant data security problem.

OpenLoop aggregates PHI from over 120 organizations. This data must be classified by sensitivity and mapped to specific clients. It requires strict access controls to isolate tenant data. Breach notification filings suggest the data was not segmented at the storage or access layers. It was aggregated, so the attacker took everything.

The specific attack vector is not public. Forensic timelines show access on January 7 and exfiltration by January 8. The attacker moved quickly. There was no lateral movement required because the data was accessible and easy to take.

Why This Keeps Happening: Third-Party Data Aggregators as Invisible Risk

Healthcare organizations spend significant resources securing their own systems. HIPAA compliance programs, annual risk assessments, penetration tests, vendor reviews. But those programs typically examine the primary vendor relationship, not the full stack.

HHS reports that healthcare breaches exposed 167 million records in 2024. Third-party breaches account for a disproportionate share of these incidents. The Change Healthcare breach is the primary example of how one clearinghouse can impact nearly every U.S. insurer.

OpenLoop is a smaller version with the same structural problem. When a third party aggregates sensitive data at scale, they become a high-value, single-point target. And because the data belongs to the third party's clients, not the third party itself, the classification and governance posture of that data often reflects neither the originating client's standards nor a sufficient security investment by the aggregator.

Gartner calls this "shadow PHI." This is protected health information outside the governance perimeter of the responsible organization. It is stored by intermediaries without continuous, consistent data classification controls.

The patients of Remedy Meds, MEDVi, and Fridays did not know OpenLoop existed. Their data did not show up in OpenLoop's public-facing privacy disclosures. And yet it was there, aggregated, accessible, and ultimately exfiltrated.

What Would Have Changed the Outcome

  1. Identify Inventory Gaps: Continuous discovery would have surfaced the concentration of multi-tenant PHI in shared stores. This identifies which datasets belong to which clients and confirms if they are appropriately segmented.
  2. Flag Co-mingled PHI: Sentra's classification layer flags co-mingled regulated records. This is a critical posture signal that warrants immediate remediation rather than being buried in a report.
  3. Analyze Identity and Access: Continuous analysis shows which service accounts and API keys have read access. Least privilege enforcement would have significantly reduced the blast radius of compromised credentials.
  4. Map Data Lineage: Lineage mapping provides real-time answers about compromise impact. Security teams need to know exactly how many records are reachable on demand.
  5. Consistent Data Labeling: Universal classification tags, applied automatically across disparate sensitive data stores, enable effective remediation actions that protect data privacy.
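Point 2 above — flagging co-mingled regulated records — can be illustrated with a toy check: sample records from one store and count the distinct tenants present. Field names and tenant labels are assumptions for the sketch, not real data:

```python
# Minimal illustration of co-mingling detection: one store, many clients'
# PHI. Record shape and tenant names are invented for this sketch.

sampled_records = [
    {"tenant": "remedy-meds", "contains": "dob"},
    {"tenant": "fridays",     "contains": "address"},
    {"tenant": "medvi",       "contains": "medical_record"},
]

tenants = {r["tenant"] for r in sampled_records}
co_mingled = len(tenants) > 1           # the blast-radius pattern described above

if co_mingled:
    print(f"co-mingled store: {len(tenants)} tenants share a single data store")
```

A single store serving many tenants is exactly the pattern that turned one intrusion into 120 downstream incidents.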

These controls detect and address exposure risk before a breach. While they may not stop every initial access vector, they materially reduce the blast radius with proactive risk management. Visible governance turns a massive incident into a contained event.

What to Do Now

If your organization relies on third-party platforms that aggregate or process sensitive data on your behalf, four things are worth doing this week:

1. Map your data supply chain. Identify every third-party or SaaS vendor that receives, processes, or stores PHI, PII, or regulated data on your behalf. This includes infrastructure providers, not just application vendors.

2. Ask your BAA partners about their data classification posture. A Business Associate Agreement establishes legal accountability. It does not guarantee that your patients' data is classified, segmented, and access-controlled inside the partner's environment. Ask specifically: can they show you where your data lives, who can access it, and how it is isolated from other clients' data?

3. Audit your own aggregation points. Most organizations have internal equivalents of the OpenLoop problem: data lakes, data warehouses, or shared analytics environments where sensitive data from multiple business units or customer segments has been aggregated without consistent classification or access segmentation. Run an inventory.

4. Review your incident response scope. The OpenLoop breach required notifications in Texas, California, Rhode Island, and other states. If a third party was breached and your customers' data was in scope, your incident response obligations may be triggered even without direct access to your own systems. Know your notification posture.

Longer term, consider Data Security Posture Management (DSPM), which is the discipline of continuously discovering, classifying, and governing sensitive data across a distributed data estate — exactly the kind of visibility that a multi-tenant health infrastructure provider needs to avoid what happened here.

Sentra maps sensitive data exposures across your entire environment. This includes all third-party integrations. Start with a data estate inventory. Request a demo.

Nikki Ralston
May 14, 2026 | 3 Min Read | AI and ML

What Does AI Data Readiness Actually Look Like at Scale? Lyft, SoFi, and Expedia Will Demonstrate at Gartner SRM 2026

Most organizations I talk to have the same answer when I ask what their AI sees: "We're not entirely sure."

That's not a technology problem. It's a data governance problem - and it's the most consequential unsolved problem in enterprise security right now.

AI doesn't discriminate. Copilot, cloud-based agents, and internal LLMs can access everything their users can access, and synthesize it in seconds. Years of overpermissioned, unclassified data that security teams have been meaning to clean up is now directly in the path of AI systems that move faster than any previous tool your organization has deployed.

The good news is some organizations have actually solved this. At the Gartner Security & Risk Management Summit this June, three of them are sharing exactly how.

The AI Data Readiness Problem Is Bigger Than Most Teams Realize

Here's what I see repeatedly across security programs. Organizations are deploying AI faster than they're governing the data underneath it.

The data estate didn't get cleaned up before Copilot rolled out. Shadow data stores weren't fully catalogued before the internal agent went live. Classification policies that worked fine for DLP weren't built to handle the access patterns that AI introduces.

When AI systems traverse a knowledge base, they don't stay in their lane - they surface whatever they can reach. If sensitive customer records, financial data, or PII are accessible to a user, they're accessible to that user's AI tools. And AI doesn't just retrieve; it synthesizes and presents, which means the exposure risk compounds.

Governing AI data readiness means knowing three things with accuracy and continuity:

What sensitive data exists and where it lives. Not from a six-month-old scan. From a continuously maintained inventory that reflects the environment as it actually is today.

Who and what can access it. Not just humans; AI agents, service accounts, automated pipelines. The access surface for AI is substantially wider than traditional access models account for.

Whether it's classified correctly before AI touches it. Classification is the foundation. It's what DLP runs on. It's what Copilot safety controls enforce against. If the labels are wrong or missing, every downstream control fails.

Expedia operates 450 petabytes of cloud data. Lyft and SoFi each manage 70+ petabytes. These aren't edge cases — they're the environments where AI data readiness problems are biggest, and where solving them produces the most visible results.

What You'll Hear at Gartner SRM 2026

Sentra is at Gartner SRM all week — June 1 through 3 at National Harbor — and we've built the week around the practitioners who've done this work, not around slides about why it matters.

Here's what's on the calendar.

Wednesday, June 3: Gartner Solution Provider Session

From Data Risk to AI Ready: The Lyft & Expedia Playbook
11:15–11:45 AM | Gartner Solution Provider Stage | Maryland C Ballroom

Hear from the Lyft CISO and Expedia on how they tackled the AI data readiness challenge in 100+ petabyte environments - classifying, governing, and securing the data sprawl already in the path of their AI initiatives. As AI proliferates across the enterprise, the data underneath it becomes the greatest unmanaged risk. In this session, experts share the decisions, tradeoffs, and tools that built their foundation - and what it made possible at scale. Walk away knowing the data readiness essentials so your AI initiative succeeds.

If you're at Gartner SRM, this is the one solution provider session you won't want to miss on Wednesday.

Use the Gartner Agenda App to register for:
From Data Risk to AI Ready: The Lyft & Expedia Playbook
11:15–11:45 AM, Wednesday, June 3, 2026

Monday–Wednesday Morning Roundtables

Invite-Only Breakfast Sessions | Sentra Meeting Suite

These small-group sessions are the intimate version of the stage conversation — tailored to the specific attendee group, with real back-and-forth on what's working and what isn't.

Monday, June 1 | 8:00–8:45 AM (Breakfast)
Lyft CISO Chaim Sanders on how Lyft built continuous data readiness and governance in a 70+ petabyte environment. How they classified at scale, where they found the unexpected exposure, and what they'd do differently.

Tuesday, June 2 | 8:00–8:45 AM (Breakfast)
Expedia Distinguished Architect Payam Chychi on governing a 450-petabyte environment — the sprawl problem, the AI data access challenge, and the architecture decisions that made classification actionable.

Wednesday, June 3 | 8:00–8:45 AM (Breakfast)
SoFi Sr. Manager of Product Security Engineering Zach Schulze on making 70+ PB of cloud data AI-ready — including how they combined Sentra DSPM with Wiz CSPM to reduce noise and govern safely.

Seats are limited and these sessions fill fast. Register at the Gartner SRM 2026 event page →

Tuesday, June 2: CISO Executive Dinner

7:30–9:30 PM | Grace's Mandarin | National Harbor

An invitation-only dinner with a small group of security leaders, including the Lyft CISO and security teams from Expedia and SoFi. Small tables. No presentations. The kind of conversation that only happens when the right people are in the right room.

If you'd like to be considered for an invitation, reach out directly via the event page or connect with your Sentra contact.

Monday–Wednesday: Executive 1:1 Briefings

8:00 AM–5:00 PM | Sentra Private Meeting Suite

For security leaders who want to apply the Lyft, SoFi, and Expedia learnings to their own environment — what AI readiness actually means given your data estate, your AI initiatives, and where your exposure lives. Sessions are led by Sentra's head of product or customer implementations. No slides. Just the right conversation.

Book a 1:1 briefing →

All Week: Live Demos at Booth #222

See how Sentra discovers, classifies, and secures the data already in the path of your AI. The demo is built around your questions — bring the hard ones. The team onsite has worked with some of the largest data environments in the world.

Book a demo at the booth →

Why This Matters Right Now

Gartner SRM is the right venue for this conversation, and 2026 is the right year to have it.

AI deployment accelerated faster than most security teams anticipated. The governance frameworks, classification foundations, and access controls that data-driven AI requires were, in many cases, not in place when the rollout happened. Now those teams are working backward — trying to understand what their AI can actually reach, and whether the data feeding it is classified accurately enough to trust.

The organizations presenting at our events this week tackled this problem at a scale that most enterprises haven't reached yet. What they learned applies regardless of environment size: classification has to happen before AI touches the data, not after. The inventory has to reflect reality continuously, not periodically. And governing AI access requires a fundamentally different approach than governing human access.

If you're at Gartner SRM and this is the problem your organization is working on, the sessions above are worth your time.

See the full schedule and register at sentra.io/gartner-srm-2026 →

Dan Gutstadt
May 13, 2026 | 8 Min Read | Data Security

How to Manage Data Access in the Cloud: A Practical Guide to Cloud Data Access Governance

Most security teams can now answer: “Where does our sensitive data live?”
Far fewer can confidently answer: “Who can access it right now and how will that change in the next hour?”

That gap between knowing where your data is and knowing who can reach it under what conditions is what cloud data access governance is designed to close. And in 2026, with cloud data estates sprawling across dozens of accounts, AI agents processing sensitive workloads, and identity-based attacks accounting for the majority of cloud breaches, that gap is no longer a theoretical risk. It’s an operational emergency waiting to happen.

This guide is written for security architects, cloud security engineers, and data security leaders who already understand IAM, DSPM, and basic cloud security controls—and are ready for the practical, implementation-level guidance on how to make data access governance actually work across complex, multi-cloud environments.

Why Managing Cloud Data Access Is So Hard

Cloud data access feels like a solvable problem. You have IAM. You have policies. You have role assignments. And yet, organizations consistently find themselves exposed—not because they lack tools, but because those tools were never designed to answer data-level access questions at the scale and speed cloud environments demand.

Here’s what’s actually driving the challenge:

Identity Sprawl at Machine Scale

Modern cloud environments don’t just have thousands of human users—they have tens of thousands of non-human identities: service accounts, Lambda functions, CI/CD pipelines, third-party integrations, and increasingly, AI agents like Copilot.

Every one of these identities carries some level of data entitlement. Most of them carry far more access than they need.

Shadow Data and ROT Expanding the Attack Surface

Sensitive data doesn’t stay where you put it. It moves. It gets copied into test environments, replicated into analytics pipelines, exported to SaaS tools, and forgotten in deprecated storage buckets.

This shadow data—and the redundant, obsolete, and trivial (ROT) data that piles up over time—silently expands your data attack surface without triggering a single IAM alert.

IAM Operates at the Wrong Layer

IAM is foundational and non-negotiable. But IAM was built to manage access to resources and services—not to specific tables, columns, files, or records within those resources.

Granting a role access to a BigQuery dataset doesn’t tell you which tables contain PII, which columns are restricted under GDPR, or whether that role was ever actually used. IAM gives you the plumbing; it doesn’t tell you what flows through the pipes.
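The layer mismatch can be made concrete with a toy example: a hypothetical dataset-level grant next to a column catalog. The grant's vocabulary stops at the dataset; only a data-aware layer can say which columns inside it are restricted. All names and shapes here are invented:

```python
# Illustration of "IAM operates at the wrong layer". The grant and the
# column catalog below are hypothetical, not any real cloud API.

iam_grant = {"role": "analyst", "resource": "marketing_dataset", "action": "read"}

column_catalog = {
    "marketing_dataset.users": {
        "email":     {"classes": ["PII"]},   # restricted content
        "signup_ts": {"classes": []},        # harmless content
    }
}

# IAM answers "can the role read the dataset?" -- yes, regardless of content.
can_read = iam_grant["action"] == "read"

# Only a data-aware layer can answer "does that grant reach PII columns?"
pii_columns = [col for table in column_catalog.values()
               for col, meta in table.items() if "PII" in meta["classes"]]

print(can_read, pii_columns)            # True ['email']
```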

The Authorization Gap

This is the core problem cloud data access governance is built to solve.

The authorization gap is the difference between what users, applications, and AI systems can access and what they should access under least-privilege and zero trust principles.

The gap grows every time data is copied, a role is inherited, a permissions boundary drifts, or an AI agent is granted broad read access to accelerate onboarding. Without a data-first governance layer that continuously maps access to sensitivity, the gap widens invisibly—until a breach makes it visible.
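As a minimal model, the authorization gap for each identity is the set of stores it can access minus the set it should access under least privilege. The identities and store names below are invented for illustration:

```python
# Toy model of the authorization gap: per identity, CAN minus SHOULD.
# Entitlements are invented; a real system derives them from IAM state.

can = {
    "etl-svc":  {"s3://prod", "s3://archive", "s3://hr-exports"},
    "ai-agent": {"s3://prod", "s3://hr-exports"},
}
should = {
    "etl-svc":  {"s3://prod", "s3://archive"},
    "ai-agent": {"s3://prod"},
}

gap = {ident: can[ident] - should.get(ident, set()) for ident in can}

# The governance goal is to drive every set difference back toward empty
# continuously, not once per audit cycle.
print(gap)
```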

Foundational Concepts: IAM, DSPM, and Data Access Governance

Before outlining a practical lifecycle, it’s worth defining the three pillars that effective cloud data access governance rests on—and how they interact.

Identity and Access Management (IAM)

IAM is the authentication and authorization backbone of any cloud security architecture. Whether implemented through AWS IAM, Azure Entra ID, Google Cloud IAM, or enterprise identity platforms like Okta and CyberArk, IAM handles who can authenticate, what permissions they carry, and how access is administered and audited.

Best-in-class IAM implementations incorporate SSO, MFA, Zero Trust network segmentation, and automated access reviews. These are necessary conditions for cloud data security—but not sufficient ones.

Data Security Posture Management (DSPM)

DSPM continuously discovers and classifies sensitive data across cloud infrastructure, SaaS platforms, data warehouses, and on-premises systems.

It evaluates each data store’s posture:

  • Is the data encrypted at rest and in transit?
  • Is logging enabled?
  • Is the bucket publicly accessible?
  • Is PII stored in a geography that violates data residency requirements?

The output is a continuously updated data inventory with risk scores—giving security teams the data-aware context that IAM alone cannot provide.

Data Access Governance (DAG)

DAG is the policies, processes, and enforcement controls that ensure only authorized identities—humans, applications, and AI agents—can access, modify, or distribute sensitive data, and only in ways that align with least-privilege and compliance requirements.

DAG is the bridge between IAM (which manages resource access) and DSPM (which understands data sensitivity and exposure). It uses DSPM’s classification context to answer the operational question: Given that this data store contains PHI regulated under HIPAA, who should be allowed to query it, under what conditions, and how do we enforce that continuously?

DSPM, DAG, and DDR Together

Together, DSPM, DAG, and Data Detection and Response (DDR) form a unified architecture for modern cloud data security:

  • DSPM: Discover, classify, and evaluate the posture of sensitive data. Key question answered: What sensitive data exists, where, and how exposed is it?
  • DAG: Govern and enforce least-privilege access to sensitive data. Key question answered: Who should have access, and are current permissions aligned?
  • DDR: Monitor runtime access and detect/respond to anomalous behavior. Key question answered: Is access being used as expected, and are there active threats?
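One way to read the three layers is as a pipeline. Every function body below is a stand-in to show the division of labor, not a real API:

```python
# Illustrative-only pipeline: DSPM produces classified findings, DAG checks
# them against access policy, DDR watches runtime behavior. All shapes and
# thresholds are invented.

def dspm_scan():
    """Discover and classify (DSPM). Finding shape is hypothetical."""
    return [{"store": "s3://exports", "classes": ["PHI"], "public": True}]

def dag_evaluate(finding):
    """Enforce policy (DAG): PHI must never be publicly reachable."""
    return "violation" if finding["public"] and "PHI" in finding["classes"] else "ok"

def ddr_monitor(access_event):
    """Detect anomalous runtime access (DDR): crude volume threshold."""
    return access_event["bytes_read"] > 10**9

results = [dag_evaluate(f) for f in dspm_scan()]
print(results)                          # ['violation']
```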

A Lifecycle for Managing Cloud Data Access

Effective cloud data access governance is not a one-time project. It’s a continuous lifecycle—and organizations that treat it as a periodic audit will perpetually find themselves behind.

Here is the six-stage lifecycle that closes the authorization gap at scale.

Stage 1: Discover and Classify Data

You cannot govern what you cannot see.

Automated, agentless discovery should scan all data stores across clouds (AWS, Azure, GCP), data warehouses (BigQuery, Snowflake, Redshift), managed databases, object storage (S3, GCS, Azure Blob), and SaaS platforms on a continuous basis. The goal is a complete, always-current data inventory—not a snapshot that’s stale the moment it’s taken.

Classification should go beyond pattern matching. Effective classification:

  • Identifies sensitive data categories: PII, PHI, PCI card data, intellectual property, financial records
  • Assigns business context: department ownership, environment (prod vs. dev), geography, regulatory domain
  • Surfaces shadow data: sensitive files in forgotten buckets, test databases with production data, unsanctioned SaaS exports

Key metric: Sentra has processed 9 PB of data in under 72 hours and scanned 100 PB environments for approximately $40,000—demonstrating that comprehensive in-environment discovery is operationally feasible, even at hyperscale.
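To make the classification step concrete, here is a minimal sketch of pattern-based detection combined with business-context tagging. This is illustrative only: production classifiers layer many detectors, validation logic, and ML models on top of regexes, and every pattern and field name below is hypothetical.

```python
import re

# Hypothetical detectors; real classifiers validate matches (e.g., Luhn
# checks for card numbers) rather than trusting raw regex hits.
DETECTORS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def classify(sample_text, store_metadata):
    """Return detected sensitivity categories plus business context."""
    categories = {name for name, rx in DETECTORS.items() if rx.search(sample_text)}
    return {
        "categories": sorted(categories),
        # Business context comes from tags/metadata, not content scanning.
        "environment": store_metadata.get("environment", "unknown"),
        "owner": store_metadata.get("owner", "unclassified"),
        # A store with no registered owner is a shadow-data candidate.
        "shadow_data": store_metadata.get("owner") is None,
    }

result = classify(
    "Contact jane@example.com, SSN 123-45-6789",
    {"environment": "prod"},
)
print(result["categories"])   # detected categories
print(result["shadow_data"])  # no registered owner
```

The key design point is the second half of the function: sensitivity comes from content, but ownership, environment, and regulatory domain come from metadata, and both are needed before access decisions can be prioritized.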

Stage 2: Map Identities, Access Paths, and Posture

With a complete data inventory in hand, the next step is building a data-access graph: a normalized map of which identities (users, groups, roles, service accounts, AI agents) have what level of access to which sensitive data stores, through which paths.

This means normalizing entitlements across:

  • Cloud IAM roles and policies (AWS, Azure, GCP)
  • Data platform permissions (BigQuery datasets, Snowflake roles, Redshift schemas)
  • SaaS app roles (Salesforce profiles, M365 sharing settings, Workday security groups)
  • Non-human identities: service accounts, workload identities, OAuth tokens, AI agent credentials

Simultaneously, evaluate posture for each sensitive store: encryption state, audit logging status, backup coverage, external exposure (public endpoints, cross-account sharing), and regulatory boundary alignment.
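The data-access graph described above can be sketched as a set of normalized entitlement edges plus group expansion. All identities, store names, and grants below are made up for illustration; a real graph would be populated from cloud IAM, warehouse grants, and SaaS role exports.

```python
# Hypothetical normalized entitlements: (identity, data store, access level).
ENTITLEMENTS = [
    ("alice@corp.com",        "snowflake:FINANCE.PAYROLL", "read"),
    ("role:analysts",         "bigquery:prod.customers",   "read"),
    ("svc-etl",               "s3://data-lake/raw/",       "write"),
    ("agent:support-copilot", "bigquery:prod.customers",   "read"),
]
# Group/role memberships that must be expanded to see effective access.
MEMBERSHIP = {"bob@corp.com": ["role:analysts"]}

def effective_access(identity):
    """All stores an identity can reach, directly or via group membership."""
    principals = {identity, *MEMBERSHIP.get(identity, [])}
    return {(store, level) for who, store, level in ENTITLEMENTS if who in principals}

# Bob has no direct grants, but reaches prod.customers through role:analysts.
print(effective_access("bob@corp.com"))
```

Note that the AI agent appears as just another identity in the graph; that uniformity is what lets the later stages govern copilots with the same machinery as humans.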

Stage 3: Prioritize Risks and Identify Toxic Combinations

Not all access misconfigurations are equal. A security group with overly broad access to a low-sensitivity analytics table is a low-priority finding. The same group with access to an unencrypted S3 bucket containing 50 million Social Security Numbers is a critical incident waiting to happen.

Toxic combinations—the highest-priority risk patterns in data access governance—emerge from the intersection of:

| Risk Factor | Example |
|---|---|
| High data sensitivity | PCI cardholder data, PHI, employee SSNs |
| Broad access scope | All-users groups, wildcard IAM policies, inherited super-roles |
| External exposure | Publicly accessible buckets, externally shared folders |
| Anomalous behavior signals | Bulk downloads, after-hours queries, unusual geographic access |
| AI agent over-reach | Copilot with access to unmasked HR records or financial models |

DSPM risk scores combined with DAG access analytics should surface these combinations automatically, prioritized by potential blast radius.
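A toxic-combination check is, at its core, additive scoring over the risk factors in the table above. The weights and findings below are hypothetical, not any vendor's actual model; the point is that co-occurring factors, not any single one, drive priority.

```python
# Illustrative weights per risk factor; a real model would calibrate these
# against sensitivity class, record counts, and blast radius.
WEIGHTS = {"sensitivity": 4, "broad_scope": 3, "external_exposure": 3, "anomalous": 2}

def risk_score(finding):
    return sum(w for factor, w in WEIGHTS.items() if finding.get(factor))

findings = [
    {"name": "analytics table, wide group", "broad_scope": True},
    {"name": "public bucket with SSNs", "sensitivity": True,
     "broad_scope": True, "external_exposure": True},
]

# Rank findings so the toxic combination surfaces first.
ranked = sorted(findings, key=risk_score, reverse=True)
print([(f["name"], risk_score(f)) for f in ranked])
```

The broadly-shared analytics table alone scores low; the same breadth combined with sensitivity and external exposure jumps to the top of the queue, which matches the prioritization logic in the text.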

Stage 4: Enforce Least Privilege and Remediate Access

This is where governance moves from analysis to action.

Remediation at the data layer involves:

  • Removing over-broad group memberships: Eliminating all-users, domain-wide, or project-level access grants where dataset- or table-level access is appropriate
  • Cleaning up dormant accounts and stale keys: Revoking access for users, service accounts, or API keys that haven’t been used in 30, 60, or 90 days
  • Fixing misaligned shares and labels: Correcting externally shared folders containing sensitive data; applying classification labels that trigger downstream DLP and access policies
  • Eliminating shadow and ROT data: Deleting or archiving sensitive data that has no legitimate active use—which both reduces attack surface and, in Sentra’s experience, drives approximately 20% cloud storage cost reduction for typical customers

Effective remediation requires tight integration between the DAG layer and enforcement points: IAM platforms, cloud-native DLP tools, data warehouse access controls, and masking/row-level security policies.
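One of the simplest remediation queries, sweeping for dormant credentials, can be sketched in a few lines. The key IDs and the fixed reference date are placeholders; the idle thresholds mirror the 30/60/90-day windows mentioned above.

```python
from datetime import datetime, timedelta, timezone

# Fixed "now" so the example is deterministic; real sweeps use wall-clock time.
NOW = datetime(2026, 5, 1, tzinfo=timezone.utc)

def stale_credentials(credentials, max_idle_days=90):
    """Return IDs of credentials unused for longer than the idle window."""
    cutoff = NOW - timedelta(days=max_idle_days)
    return [c["id"] for c in credentials if c["last_used"] < cutoff]

creds = [
    {"id": "svc-reporting-key", "last_used": datetime(2026, 4, 20, tzinfo=timezone.utc)},
    {"id": "old-etl-key",       "last_used": datetime(2025, 11, 2, tzinfo=timezone.utc)},
]
print(stale_credentials(creds))  # ['old-etl-key']
```

In practice the output of a sweep like this feeds a revocation workflow rather than an immediate delete, so owners can re-justify access before it is removed.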

Stage 5: Monitor Access and Respond in Real Time

Governance doesn’t end at policy enforcement. Identities evolve, data moves, and attackers adapt.

Data Detection and Response (DDR) provides the runtime visibility layer that DSPM and DAG cannot supply on their own. DDR monitors data access events continuously:

  • Queries executed against sensitive tables in BigQuery, Snowflake, or Redshift
  • File reads and downloads from S3, GCS, or SharePoint
  • API calls accessing sensitive records in SaaS applications
  • Bulk exports, unusual query volumes, or access from anomalous geolocations

When suspicious patterns emerge—an analyst querying 10x their normal data volume, a service account accessing tables outside its defined scope, or an AI agent traversing ACLs it was never meant to reach—DDR triggers guided or automated responses: access suspension, alert escalation, or automated IAM policy revocation.
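The "10x normal volume" trigger mentioned above reduces to a baseline comparison. The identities and baseline numbers here are invented; real DDR baselines are learned from historical access telemetry rather than hard-coded.

```python
# Hypothetical per-identity baselines (rows read per day), learned from history.
BASELINE_ROWS_PER_DAY = {"alice@corp.com": 5_000, "svc-etl": 2_000_000}

def is_anomalous(identity, rows_today, multiplier=10):
    """Flag identities whose daily volume exceeds N times their baseline."""
    baseline = BASELINE_ROWS_PER_DAY.get(identity)
    if baseline is None:
        return True  # unknown identity: suspicious by default
    return rows_today > multiplier * baseline

print(is_anomalous("alice@corp.com", 60_000))  # 12x baseline
print(is_anomalous("svc-etl", 2_500_000))      # within normal range
```

The asymmetry matters: a service account reading 2.5M rows is routine, while an analyst reading 60K is not, which is why per-identity baselines beat global thresholds.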

Stage 6: Review, Audit, and Iterate

The final stage closes the loop. Periodic access reviews—grounded in actual usage data rather than static role assignments—are how organizations progressively tighten their least-privilege posture over time.

Effective access reviews should:

  • Use behavioral data (who actually accessed what, when) to challenge standing permissions
  • Generate audit-ready evidence for PCI DSS 4.0 log review requirements, GDPR accountability obligations, HIPAA access control audits, and SOC 2 Type II certifications
  • Feed findings back into Stage 4 remediation workflows to create a continuous improvement cycle

Implementing Least-Privilege Access in Practice: Platform Patterns

The lifecycle above describes what to do. This section covers how to implement least-privilege access in the cloud data platforms most security architects deal with day to day.

Designing Roles and Scopes

The most common mistake in cloud data access design is defaulting to project- or account-level roles because they’re easier to administer. Project-wide BigQuery Data Viewer access to all datasets in a GCP project—granted because a data scientist needed access to one analytics table—is a textbook authorization gap.

Guiding principles for role design:

  • Grant at the narrowest scope possible: prefer column-level over table-level, table-level over dataset-level, and never default to project-wide grants
  • Create purpose-built data roles rather than repurposing infrastructure roles (e.g., a dedicated FINANCE_ANALYST_RO Snowflake role, not a shared SYSADMIN-derived role)
  • Separate ingestion/ETL roles from read/analytics roles; separate production roles from sandbox roles
  • Never use account owner or project admin roles for routine data operations

Object Storage: S3 and GCS Patterns

| Pattern | Implementation | What It Prevents |
|---|---|---|
| Dedicated storage integration roles | IAM roles with scoped STORAGE_ALLOWED_LOCATIONS (Snowflake external stages) | Broad bucket access from warehouse integrations |
| Granular S3 bucket policies | s3:GetObject scoped to specific prefixes, not s3:* on arn:aws:s3:::* | Wildcard policies exposing entire accounts |
| Block public access by default | S3 Block Public Access settings enforced at account level | Accidental public bucket exposure |
| No hard-coded credentials | IAM roles and instance profiles; no long-lived access keys in application code | Credential exfiltration from code repositories |
| Object-level logging | S3 Server Access Logging or CloudTrail data events enabled on sensitive buckets | Blind spots in DDR and audit trails |

Common pitfalls: Overly broad ETL roles that carry s3:* access across all buckets; shared Glue or Spark job roles that accumulate permissions over time; lifecycle policies that fail to delete sensitive data in staging prefixes.
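For concreteness, here is what a prefix-scoped bucket policy looks like, expressed as a Python dict for readability. The account ID, role, bucket, and prefix are placeholders; the structure follows the standard S3 bucket policy format.

```python
import json

# Prefix-scoped read policy: the opposite of the s3:* / arn:aws:s3:::* anti-pattern.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ReadOnlyFinanceExports",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/finance-analyst"},
        "Action": ["s3:GetObject"],  # a single action, not "s3:*"
        # Scoped to one prefix in one bucket, not the whole account.
        "Resource": "arn:aws:s3:::corp-data-lake/exports/finance/*",
    }],
}
print(json.dumps(policy, indent=2))
```

Pairing a policy like this with S3 Block Public Access at the account level covers two rows of the table above at once.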

Data Warehouse Patterns: BigQuery

BigQuery’s IAM model is powerful but frequently misconfigured at scale.

Recommended BigQuery access architecture:

| Access Type | Recommended Scope | IAM Role |
|---|---|---|
| Analysts (read-only) | Dataset level | roles/bigquery.dataViewer at dataset, not project |
| Engineers (read/write) | Dataset or table level | roles/bigquery.dataEditor scoped to target dataset |
| Pipelines/ETL | Dataset or table level | Custom role with minimum required permissions |
| Admins | Project level, with audit | roles/bigquery.admin restricted to named individuals |

Advanced controls to implement:

  • Column-level security: BigQuery policy tags enable column masking and fine-grained access by data classification—PII columns tagged and masked for default consumers, accessible in raw form only through approved roles
  • Row-level security: Row access policies (filter expressions) limit which records specific identities can query within a shared table
  • Authorized views: Expose constrained projections of sensitive tables without granting underlying table access
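The column-masking behavior that policy tags provide natively can be illustrated with a small sketch. This is a simulation of the concept, not BigQuery's implementation; the tag names, column names, and role names are all hypothetical.

```python
# Hypothetical classification tags per column; None means unclassified.
COLUMN_TAGS = {"email": "pii", "ssn": "pii", "order_total": None}
# Roles permitted to see each tag's raw values.
UNMASK_ROLES = {"pii": {"privacy-approved"}}

def mask_row(row, requester_roles):
    """Return the row with tagged columns masked unless the requester is cleared."""
    out = {}
    for col, value in row.items():
        tag = COLUMN_TAGS.get(col)
        if tag and not (UNMASK_ROLES.get(tag, set()) & set(requester_roles)):
            out[col] = "***"
        else:
            out[col] = value
    return out

row = {"email": "jane@example.com", "ssn": "123-45-6789", "order_total": 42.5}
print(mask_row(row, ["analyst"]))           # PII masked by default
print(mask_row(row, ["privacy-approved"]))  # raw values for cleared roles
```

This is also the enforcement mechanism behind the JEA pattern later in this article: default consumers see masked values, and only an approved role sees raw PII.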

Data Warehouse Patterns: Snowflake

Snowflake’s role hierarchy is a common source of “access debt”—the accumulated, under-managed entitlements that make toxic combinations difficult to detect manually.

Snowflake access hygiene framework:

| Issue | Symptom | Remediation |
|---|---|---|
| Super-roles | Single role with access to all databases and schemas | Decompose into environment- and domain-specific roles |
| Dormant roles | Roles granted but unused for 90+ days | Revoke and require re-justification |
| Role hierarchy sprawl | Inherited permissions cascade unexpectedly through GRANT ROLE TO ROLE | Map full effective permissions; audit inheritance chains |
| Shared ETL credentials | One SYSADMIN-level user running all pipelines | Dedicated service users per pipeline with scoped permissions |
| Production data in dev | Dev databases containing real customer records | DSPM discovery to identify and quarantine; masking in non-prod |

DSPM platforms like Sentra can identify toxic combinations in Snowflake—for example, a broadly granted analyst role that, through role inheritance, carries access to unmasked PII tables in a production schema—and guide targeted remediation without requiring a full role architecture rebuild.
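Resolving what a role can actually do through GRANT ROLE TO ROLE chains is a transitive-closure walk over the grant graph. The role names and privileges below are made up; the traversal logic is the point.

```python
# Hypothetical grant graph: role -> roles granted TO it (whose privileges it inherits).
ROLE_GRANTS = {
    "ANALYST": ["REPORTING"],
    "REPORTING": ["RAW_PII_READER"],
}
# Privileges granted directly to each role.
DIRECT_PRIVS = {"RAW_PII_READER": {"SELECT on PROD.CUSTOMERS_PII"}}

def effective_privileges(role, seen=None):
    """Resolve all privileges a role holds through inheritance chains."""
    seen = seen or set()
    if role in seen:
        return set()  # guard against grant cycles
    seen.add(role)
    privs = set(DIRECT_PRIVS.get(role, set()))
    for inherited in ROLE_GRANTS.get(role, []):
        privs |= effective_privileges(inherited, seen)
    return privs

# ANALYST was never granted PII access directly, but inherits it two hops away.
print(effective_privileges("ANALYST"))
```

This two-hop inheritance is exactly the pattern that makes toxic combinations hard to spot manually: no single grant statement looks dangerous on its own.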

Managed Databases and SaaS

For managed relational databases (Amazon RDS, Google Cloud SQL, Azure SQL):

  • Maintain separate application users (minimal SELECT/INSERT/UPDATE on specific schemas) and analytics users (read-only, ideally pointing to read replicas)
  • Avoid all-powerful shared users like root or master for routine operations
  • Rotate credentials using secrets managers (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault) rather than static passwords

For SaaS platforms (Salesforce, M365, Workday):

Application-native role management is necessary but insufficient. A DAG/DSPM layer that normalizes cross-platform access and correlates identity-to-data across SaaS apps provides the unified visibility that app-by-app administration cannot.

Governing AI and Copilot Data Access

No guide to cloud data access governance in 2026 is complete without addressing the category of identity that most organizations have not yet learned to treat as a security problem: AI agents and copilots.

AI systems—whether internal LLM deployments, third-party copilots integrated into SaaS workflows, or autonomous agents connected to data warehouses—operate as high-privilege data consumers. They query broadly. They often have access granted for convenience rather than least-privilege design. And unlike human users, their access behavior is harder to baseline and anomaly-detect without purpose-built tooling.

The AI data access problem in practice:

  • Copilots integrated into M365 or Salesforce may inherit user-level permissions—including access to sensitive files, emails, and records the user has accumulated over years
  • AI agents connected to BigQuery or Snowflake for RAG pipelines may have schema-wide SELECT permissions intended for development that were never scoped down before production deployment
  • AI systems that generate code or SQL may exfiltrate schema information as part of their normal operation, even without directly accessing data records

Governing AI identities requires the same lifecycle applied to human identities:

  1. Inventory: Discover all AI agents and copilot integrations with data access—including shadow AI deployments
  2. Classify: Map which sensitive datasets each AI agent can reach, with what level of access, through which credentials
  3. Constrain: Apply least-privilege access; use classification labels and policy tags to enforce data boundaries (e.g., AI agents cannot access raw PII, only masked or synthetic equivalents)
  4. Monitor: Apply DDR to AI access patterns; establish baselines and alert on deviations (bulk reads, unusual schema traversal, access to tables outside defined scope)
  5. Govern: Treat AI agent access provisioning and review with the same rigor as human privileged access—including JIT elevation for sensitive operations

Comparison: Key Approaches to Cloud Data Access Governance

| Approach | Strengths | Limitations |
|---|---|---|
| IAM-only governance | Mature tooling; cloud-native integration; widely understood | No data-layer visibility; doesn't distinguish sensitive from non-sensitive data; authorization gap grows as data sprawls |
| DSPM without DAG | Excellent data discovery and risk visibility; surfaces exposure | Identifies problems but doesn't enforce access changes; no continuous remediation workflow |
| DAG without DSPM | Can enforce access policies and manage entitlements | Without data classification context, policy decisions lack sensitivity-aware prioritization |
| Manual access reviews | Meets minimum compliance bar; human judgment applied | Slow, resource-intensive, stale between cycles; can't keep pace with cloud environment velocity |
| DSPM + DAG + DDR (unified) | Continuous discovery, data-aware enforcement, runtime detection; closes the authorization gap end-to-end | Requires integrated platform or well-orchestrated toolchain; initial discovery and classification effort at deployment |

Just-Enough and Just-In-Time Access for Cloud Data

Standing privileges—long-lived, always-on access to sensitive data—are the single largest contributor to breach blast radius in cloud environments. When a privileged identity is compromised, standing access means the attacker inherits everything, immediately. Just-enough access (JEA) and just-in-time (JIT) access are the practical alternatives.

Just-Enough Access (JEA)

JEA means users and systems receive access calibrated to their actual role requirements—not the role requirements of their team, their manager’s interpretation of their role, or what was convenient to grant six months ago.

In practice, JEA for data teams typically means:

  • Default access to masked or aggregated versions of sensitive data (e.g., tokenized PII, row-sampled datasets, pre-aggregated analytics views)
  • Explicit approval workflows for access to raw, highly sensitive data—triggered on demand, logged, and time-bounded
  • Policy tag enforcement at the data warehouse layer (BigQuery policy tags, Snowflake data classification tags) that dynamically apply masking based on the requesting identity’s clearance level

This shifts the burden from “deny access by default and re-grant manually” to “grant minimal access by default and elevate via audited workflow”—which is operationally sustainable at scale.

Just-In-Time (JIT) Access

JIT goes further: rather than maintaining standing access (even minimal access), high-sensitivity operations trigger temporary elevation for a defined window, with automatic revocation when the window closes or the task completes.

JIT access workflow for cloud data:

  1. Analyst requests access to production PII dataset for incident investigation
  2. Request triggers approval workflow (manager + data owner)
  3. Upon approval, JIT system grants time-bound IAM binding (e.g., 4-hour window)
  4. Access is logged in full; queries are captured for audit trail
  5. At window expiration, IAM binding is automatically revoked
  6. DDR monitors for anomalous behavior during the access window
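The core mechanics of a JIT grant, a time-bound binding that stops authorizing access once the window closes, can be sketched as follows. Identity, dataset, and timestamps are placeholders; real JIT systems revoke the underlying IAM binding rather than just checking expiry at query time.

```python
from datetime import datetime, timedelta, timezone

class JitGrant:
    """Toy time-bound grant: approval opens a window, expiry closes it."""

    def __init__(self, identity, dataset, approved_at, window_hours=4):
        self.identity = identity
        self.dataset = dataset
        self.expires_at = approved_at + timedelta(hours=window_hours)

    def allows(self, identity, dataset, at):
        # Access requires a matching identity, matching dataset, and an open window.
        return (identity == self.identity
                and dataset == self.dataset
                and at < self.expires_at)

approved = datetime(2026, 5, 1, 9, 0, tzinfo=timezone.utc)
grant = JitGrant("analyst@corp.com", "prod.pii_customers", approved)

print(grant.allows("analyst@corp.com", "prod.pii_customers", approved + timedelta(hours=2)))
print(grant.allows("analyst@corp.com", "prod.pii_customers", approved + timedelta(hours=5)))
```

The second check fails not because anyone revoked anything manually, but because expiry is built into the grant itself, which is what makes JIT sustainable at scale.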

Cloud-native JIT tooling includes GCP’s Privileged Access Manager (PAM), AWS IAM Identity Center with temporary permission sets, and enterprise PAM platforms like CyberArk and BeyondTrust. DSPM and DAG platforms provide the data sensitivity signals that make JIT decisions meaningful—the system knows whether the dataset being requested contains regulated PHI, its current exposure posture, and whether the requesting identity has a legitimate business justification based on their historical access patterns.

Zero Standing Privilege: The Target State

For the highest-sensitivity data environments—customer PII stores, financial records, regulated health data—the target architecture is zero standing privilege: no human identity holds persistent access to raw sensitive data. All access is JIT-elevated, time-bounded, and fully audited.

This is not achievable overnight for most organizations, but it is the direction of travel. The maturity model below provides a practical path.

Cloud Data Access Governance Maturity Model

| Maturity Level | Posture | Key Characteristics |
|---|---|---|
| Level 1: Ad Hoc | Reactive | Access granted on request; no consistent least-privilege enforcement; no data classification; periodic manual audits |
| Level 2: Defined | Policy-driven | IAM roles defined by team/function; some data classification; access reviews on a fixed schedule (quarterly/annual) |
| Level 3: Managed | DSPM-informed | Continuous data discovery and classification; data-access graph mapped; toxic combinations identified; remediation tracked |
| Level 4: Governed | DAG-enforced | Least-privilege enforced at data layer; JEA implemented; access reviews driven by usage data; SaaS and AI covered |
| Level 5: Optimized | Continuous | Zero standing privilege for sensitive data; JIT elevation with automated provisioning/revocation; DDR with automated response; AI agents governed like human identities |

Frequently Asked Questions

What is cloud data access governance?
Cloud data access governance is the set of policies, processes, and technical controls that ensure only authorized identities—humans, applications, and AI agents—can access sensitive cloud data, under conditions aligned with least-privilege, zero trust, and compliance requirements. It bridges IAM (resource-level access control) and DSPM (data discovery and classification) to enforce data-first access management continuously.

How is data access governance different from IAM?
IAM manages access to cloud resources and services at the infrastructure layer. Data access governance operates at the data layer—it understands what data is sensitive, who should be allowed to access it based on that sensitivity, and whether current permissions are aligned with least-privilege requirements. IAM is a necessary component of DAG, but DAG extends IAM with data-awareness and continuous enforcement.

What is the authorization gap?
The authorization gap is the difference between what identities can access (based on their current permissions) and what they should access under least-privilege principles. The gap grows as data is copied, roles accumulate permissions over time, and access is granted for convenience without ongoing review. DSPM and DAG together are designed to continuously measure and close this gap.

What is DSPM and how does it relate to data access governance?
Data Security Posture Management (DSPM) continuously discovers and classifies sensitive data across cloud environments, evaluating each data store’s security posture—encryption, exposure, logging, regulatory alignment. DSPM provides the data intelligence layer that makes access governance decisions meaningful: rather than reviewing permissions in the abstract, DAG uses DSPM context to understand which sensitive data is behind which permissions, and prioritizes remediation accordingly.

What does least-privilege data access mean in practice?
Least-privilege data access means granting identities the minimum level of access—to the most narrowly scoped data resource—required to perform their legitimate function. In practice, this means dataset-level (not project-level) access in BigQuery, domain-specific roles (not inherited super-roles) in Snowflake, prefix-scoped (not bucket-wide) policies in S3, and time-bounded JIT elevation rather than standing access to highly sensitive data.

How should AI agents be governed in a data access governance framework?
AI agents and copilots should be treated as first-class identities in the data access governance lifecycle. This means inventorying all AI agents with data access, mapping which sensitive datasets they can reach, constraining access using classification labels and policy tags, monitoring their data access behavior with DDR, and applying JIT elevation patterns for AI-initiated access to high-sensitivity data—just as you would for privileged human users.

What is Just-In-Time (JIT) access for cloud data?
JIT access is a pattern where sensitive data access is granted temporarily—for a defined window tied to a specific task or incident—rather than maintained as a standing permission. JIT workflows typically require approval, generate a full audit trail, and automatically revoke access when the window closes. JIT is increasingly considered the target state for access to regulated and high-sensitivity data in zero trust architectures.

How do you implement data access governance across multiple clouds?
Multi-cloud data access governance requires a platform that can normalize entitlements across cloud-native IAM systems (AWS IAM, Azure Entra ID, GCP IAM), data warehouse permission models (BigQuery, Snowflake, Redshift), and SaaS applications into a unified data-access graph. This graph, enriched with DSPM classification context, enables consistent least-privilege enforcement and risk prioritization regardless of which cloud or platform the data lives in.

What compliance frameworks require cloud data access governance?
PCI DSS 4.0 requires access control reviews and log monitoring for cardholder data environments. GDPR mandates demonstrable controls over who can access personal data and the ability to audit access history. HIPAA requires access controls, audit controls, and integrity controls for PHI. SOC 2 Type II requires evidence of access control design and operating effectiveness. Cloud data access governance—particularly when backed by continuous DSPM and DAG—provides the evidentiary foundation for all of these frameworks.

Conclusion: Closing the Authorization Gap with a Data-First Approach

The trajectory of cloud data risk runs in one direction: more data, more identities, more movement, more exposure. IAM alone cannot keep pace. Periodic audits cannot keep pace. One-time DSPM scans cannot keep pace.

What can keep pace is a continuous, data-first governance lifecycle—one that starts with knowing where your sensitive data lives, extends to mapping every identity that can reach it, enforces least-privilege access at the data layer, and monitors runtime behavior to detect and respond to threats as they emerge.

The authorization gap is not a theoretical problem. It is the documented precondition for most major cloud data breaches. Closing it requires treating data access governance as an operational discipline, not a compliance checkbox—and building the architecture to support it at the speed and scale cloud environments demand.

For a deeper look at how Sentra’s DSPM, DAG, and DDR capabilities work together to close the authorization gap across cloud, SaaS, and AI environments, explore our Data Access Governance solution page, DSPM overview, and Data Detection and Response documentation.
