
How SLMs (Small Language Models) Make Sentra’s AI Faster and More Accurate

November 6, 2025 | 4 Min Read

The LLM Hype, and What’s Missing

Over the past few years, large language models (LLMs) have dominated the AI conversation. From writing essays to generating code, LLMs like GPT-4 and Claude have proven that massive models can produce human-like language and reasoning at scale.

But here's the catch: not every task needs a 70-billion-parameter model. Parameters are computationally expensive - they require both memory and processing time.

At Sentra, we discovered early on that the work our customers rely on - accurate, scalable classification of massive data flows - isn't about writing essays or generating text. It's about making decisions fast, reliably, and cost-effectively across dynamic, real-world data environments. Large language models (LLMs) are excellent general problem-solvers, but applying them to this kind of classification creates a lot of unnecessary computational overhead.

That's why we've shifted our focus toward Small Language Models (SLMs) - compact, specialized models purpose-built for a single task: understanding and classifying data efficiently. By running hundreds of SLMs in parallel on regular CPUs, Sentra delivers faster insights, stronger data privacy, and a dramatically lower total cost of AI-based classification - one that scales with our customers' business, not their cloud bill.

What Is an SLM?

An SLM is a smaller, domain-specific version of a language model. Instead of trying to understand and generate any kind of text, an SLM is trained to excel at a particular task, such as identifying the topic of a document (what the document is about or what type of document it is), or detecting sensitive entities within documents, such as passwords, social security numbers, or other forms of PII.

In other words: if an LLM is a generalist, an SLM is a specialist. At Sentra, we use SLMs that are tuned and optimized for security data classification, allowing them to process high volumes of content with remarkable speed, consistency, and precision. These SLMs are based on standard open-source models, then trained on data curated by Sentra to achieve the level of accuracy that only Sentra can guarantee.

From LLMs to SLMs: A Strategic Evolution

Like many in the industry, we started by testing LLMs to see how well they could classify and label data. They were powerful, but also slow, expensive, and difficult to scale. Over time, it became clear that LLMs are too big and too expensive to run across customer data at the scale required for Sentra to remain a viable, cost-effective classification solution.

Instead, Sentra's pipeline is built from multiple SLMs, each handling a focused part of the process: initial categorization, text extraction from documents and images, and sensitive-entity classification. The SLMs are not only accurate (even more accurate than LLMs classifying via prompts) - they also run efficiently on standard CPUs, and they run inside the customer's environment, as part of Sentra's scanners.

The Benefits of SLMs for Customers

a. Speed and Efficiency

SLMs process data faster because they’re lean by design. They don’t waste cycles generating full sentences or reasoning across irrelevant contexts. This means real-time or near-real-time classification, even across millions of data points.

b. Accuracy and Adaptability

SLMs can act as pre-trained "zero-shot" classifiers: regardless of the data they were trained on, they can categorize an arbitrary set of entities and document labels without being trained on each one specifically. This is possible because modern language models capture deep natural-language understanding during pre-training, which transfers to new labels at inference time.
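
Sentra's production models are proprietary, but the zero-shot idea is easy to see with open-source tooling. The sketch below, which assumes the Hugging Face transformers library and a generic public model, labels a document against an arbitrary set of candidate types without any task-specific training; the sample text and labels are illustrative only.

```python
# Minimal illustration of zero-shot document classification with an
# open-source model. The model name and candidate labels are examples,
# not Sentra's production configuration.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # a common open zero-shot baseline
)

document = "Employee: Jane Doe, SSN: 078-05-1120, Salary: $120,000"
candidate_labels = ["HR record", "invoice", "source code", "marketing copy"]

result = classifier(document, candidate_labels)
# result["labels"] is sorted by score; the top label is the predicted type
print(result["labels"][0], round(result["scores"][0], 3))
```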

Even so, Sentra fine-tunes these models to further increase classification accuracy, using a large curated set of tagged data that resembles the data our customers typically encounter.

Our feedback loops ensure that model performance only gets better over time - a direct reflection of our customers’ evolving environments.

c. Cost and Sustainability

Because SLMs are compact, they require less compute power, which means lower operational costs and a smaller carbon footprint. This efficiency allows us to deliver powerful AI capabilities to customers without passing on the heavy infrastructure costs of running massive models.

d. Security and Control

Unlike LLMs hosted on external APIs, SLMs can be run within Sentra’s secure environment, preserving data privacy and regulatory compliance. Customers maintain full control over their sensitive information - a critical requirement in enterprise data security.

A Quick Comparison: SLMs vs. LLMs

The difference between SLMs and LLMs becomes clear when you look at their performance across key dimensions:

| Factor | SLMs | LLMs |
| --- | --- | --- |
| Speed | Fast, optimized for classification throughput | Slower and more compute-intensive for large-scale inference |
| Cost | Cost-efficient | Expensive to run at scale |
| Accuracy (for simple tasks) | Optimized for classification | Comparable, but with unnecessary overhead |
| Deployment | Lightweight, easy to integrate | Complex and resource-heavy |
| Adaptability (with feedback) | Continuously fine-tuned; can be fine-tuned per customer | Harder to customize; fine-tuning is costly |
| Best Use Case | Classification, tagging, filtering | Reasoning and analysis, generation, synthesis |

Continuous Learning: How Sentra’s SLMs Grow

One of the most powerful aspects of our SLM approach is continuous learning. Each Sentra customer project contributes valuable insights, from new data patterns to evolving classification needs. These learnings feed back into our training workflows, helping us refine and expand our models over time.

While not every model retrains automatically, the system is built to support iterative optimization: as our team analyzes feedback and performance, models can be fine-tuned or extended to handle new categories and contexts.

The result is an adaptive ecosystem of SLMs that becomes more effective as our customer base and data diversity grow, ensuring Sentra’s AI remains aligned with real-world use cases.

Sentra’s Multi-SLM Architecture

Sentra’s scanning technology doesn’t rely on a single model. We run many SLMs in parallel, each specializing in a distinct layer of classification:

  1. Embedding models that convert data into meaningful vector representations
  2. Entity Classification models that label sensitive entities
  3. Document Classification models that label documents by type
  4. Image-to-text and speech-to-text models that are able to process non-textual data into textual data

This layered approach allows us to operate at scale - quickly, cheaply, and with great results. In practice, that means faster insights, fewer errors, and a more responsive platform for every customer.
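
As a rough conceptual sketch (not Sentra's actual scanner code), the layers above compose into a simple pipeline: extract text, embed it, then run the document-level and entity-level classifiers. Every function here is a hypothetical placeholder standing in for a dedicated SLM.

```python
# Conceptual sketch of a layered, multi-model classification pipeline.
# The stages mirror the list above; every function is a placeholder,
# not Sentra's actual scanner API.
from dataclasses import dataclass, field

@dataclass
class Finding:
    doc_type: str
    entities: list[str] = field(default_factory=list)

def extract_text(blob: bytes, mime_type: str) -> str:
    """Image-to-text / speech-to-text layer (an OCR or ASR model would run here)."""
    return blob.decode("utf-8", errors="ignore")

def embed(text: str) -> list[float]:
    """Embedding model: map text to a vector for downstream classifiers."""
    return [float(len(text))]  # stand-in for a real embedding model

def classify_document(vector: list[float], text: str) -> str:
    """Document-classification SLM: label the document type."""
    return "HR record" if "salary" in text.lower() else "other"

def classify_entities(text: str) -> list[str]:
    """Entity-classification SLM: flag sensitive entities (PII, secrets, ...)."""
    return ["SSN"] if "ssn" in text.lower() else []

def scan(blob: bytes, mime_type: str) -> Finding:
    text = extract_text(blob, mime_type)
    vector = embed(text)
    return Finding(
        doc_type=classify_document(vector, text),
        entities=classify_entities(text),
    )

print(scan(b"Employee salary letter. SSN: 078-05-1120", "text/plain"))
```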

The Future of AI Is Specialized

We believe the next frontier of AI isn't about who can build the biggest model; it's about who can build the most efficient, adaptive, and secure ones.

By embracing SLMs, Sentra is pioneering a future where AI systems are purpose-built, transparent, and sustainable. Our approach aligns with a broader industry shift toward task-optimized intelligence - models that do one thing extremely well and can learn continuously over time.

Conclusion: The Power of Small

At Sentra, we’ve learned that in AI, bigger isn’t always better. Our commitment to SLMs reflects our belief that efficiency, adaptability, and precision matter most for customers. By running thousands of small, smart models rather than a single massive one, we’re able to classify data faster, cheaper, and with greater accuracy - all while ensuring customer privacy and control.

In short: Sentra’s SLMs represent the power of small, and the future of intelligent classification.


Explore Gilad’s insights, drawn from his extensive experience in R&D, software engineering, and product management. With a strategic mindset and hands-on expertise, he shares valuable perspectives on bridging development and product management to deliver quality-driven solutions.


Latest Blog Posts

Dan Gutstadt | May 13, 2026 | 8 Min Read | Data Security

How to Manage Data Access in the Cloud: A Practical Guide to Cloud Data Access Governance


Most security teams can now answer: “Where does our sensitive data live?”
Far fewer can confidently answer: “Who can access it right now and how will that change in the next hour?”

That gap between knowing where your data is and knowing who can reach it under what conditions is what cloud data access governance is designed to close. And in 2026, with cloud data estates sprawling across dozens of accounts, AI agents processing sensitive workloads, and identity-based attacks accounting for the majority of cloud breaches, that gap is no longer a theoretical risk. It’s an operational emergency waiting to happen.

This guide is written for security architects, cloud security engineers, and data security leaders who already understand IAM, DSPM, and basic cloud security controls—and are ready for the practical, implementation-level guidance on how to make data access governance actually work across complex, multi-cloud environments.

Why Managing Cloud Data Access Is So Hard

Cloud data access feels like a solvable problem. You have IAM. You have policies. You have role assignments. And yet, organizations consistently find themselves exposed—not because they lack tools, but because those tools were never designed to answer data-level access questions at the scale and speed cloud environments demand.

Here’s what’s actually driving the challenge:

Identity Sprawl at Machine Scale

Modern cloud environments don't just have thousands of human users—they have tens of thousands of non-human identities: service accounts, Lambda functions, CI/CD pipelines, third-party integrations, and increasingly, AI agents and copilots.

Every one of these identities carries some level of data entitlement. Most of them carry far more access than they need.

Shadow Data and ROT Expanding the Attack Surface

Sensitive data doesn’t stay where you put it. It moves. It gets copied into test environments, replicated into analytics pipelines, exported to SaaS tools, and forgotten in deprecated storage buckets.

This shadow data—and the redundant, obsolete, and trivial (ROT) data that piles up over time—silently expands your data attack surface without triggering a single IAM alert.

IAM Operates at the Wrong Layer

IAM is foundational and non-negotiable. But IAM was built to manage access to resources and services—not to specific tables, columns, files, or records within those resources.

Granting a role access to a BigQuery dataset doesn’t tell you which tables contain PII, which columns are restricted under GDPR, or whether that role was ever actually used. IAM gives you the plumbing; it doesn’t tell you what flows through the pipes.

The Authorization Gap

This is the core problem cloud data access governance is built to solve.

The authorization gap is the difference between what users, applications, and AI systems can access and what they should access under least-privilege and zero trust principles.

The gap grows every time data is copied, a role is inherited, a permissions boundary drifts, or an AI agent is granted broad read access to accelerate onboarding. Without a data-first governance layer that continuously maps access to sensitivity, the gap widens invisibly—until a breach makes it visible.

Foundational Concepts: IAM, DSPM, and Data Access Governance

Before outlining a practical lifecycle, it’s worth defining the three pillars that effective cloud data access governance rests on—and how they interact.

Identity and Access Management (IAM)

IAM is the authentication and authorization backbone of any cloud security architecture. Whether implemented through AWS IAM, Azure Entra ID, Google Cloud IAM, or enterprise identity platforms like Okta and CyberArk, IAM handles who can authenticate, what permissions they carry, and how access is administered and audited.

Best-in-class IAM implementations incorporate SSO, MFA, Zero Trust network segmentation, and automated access reviews. These are necessary conditions for cloud data security—but not sufficient ones.

Data Security Posture Management (DSPM)

DSPM continuously discovers and classifies sensitive data across cloud infrastructure, SaaS platforms, data warehouses, and on-premises systems.

It evaluates each data store’s posture:

  • Is the data encrypted at rest and in transit?
  • Is logging enabled?
  • Is the bucket publicly accessible?
  • Is PII stored in a geography that violates data residency requirements?

The output is a continuously updated data inventory with risk scores—giving security teams the data-aware context that IAM alone cannot provide.

Data Access Governance (DAG)

DAG is the policies, processes, and enforcement controls that ensure only authorized identities—humans, applications, and AI agents—can access, modify, or distribute sensitive data, and only in ways that align with least-privilege and compliance requirements.

DAG is the bridge between IAM (which manages resource access) and DSPM (which understands data sensitivity and exposure). It uses DSPM’s classification context to answer the operational question: Given that this data store contains PHI regulated under HIPAA, who should be allowed to query it, under what conditions, and how do we enforce that continuously?

DSPM, DAG, and DDR Together

Together, DSPM, DAG, and Data Detection and Response (DDR) form a unified architecture for modern cloud data security:

| Layer | Function | Key Question Answered |
| --- | --- | --- |
| DSPM | Discover, classify, and evaluate posture of sensitive data | What sensitive data exists, where, and how exposed is it? |
| DAG | Govern and enforce least-privilege access to sensitive data | Who should have access, and are current permissions aligned? |
| DDR | Monitor runtime access and detect/respond to anomalous behavior | Is access being used as expected, and are there active threats? |

A Lifecycle for Managing Cloud Data Access

Effective cloud data access governance is not a one-time project. It’s a continuous lifecycle—and organizations that treat it as a periodic audit will perpetually find themselves behind.

Here is the six-stage lifecycle that closes the authorization gap at scale.

Stage 1: Discover and Classify Data

You cannot govern what you cannot see.

Automated, agentless discovery should scan all data stores across clouds (AWS, Azure, GCP), data warehouses (BigQuery, Snowflake, Redshift), managed databases, object storage (S3, GCS, Azure Blob), and SaaS platforms on a continuous basis. The goal is a complete, always-current data inventory—not a snapshot that’s stale the moment it’s taken.

Classification should go beyond pattern matching. Effective classification:

  • Identifies sensitive data categories: PII, PHI, PCI card data, intellectual property, financial records
  • Assigns business context: department ownership, environment (prod vs. dev), geography, regulatory domain
  • Surfaces shadow data: sensitive files in forgotten buckets, test databases with production data, unsanctioned SaaS exports

Key metric: Sentra has processed 9 PB of data in under 72 hours and scanned 100 PB environments for approximately $40,000—demonstrating that comprehensive in-environment discovery is operationally feasible, even at hyperscale.
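
To make the classification step concrete, here is a deliberately simplified sketch of scanning one bucket for likely PII with boto3. It is a toy: real DSPM classification uses ML models and business context rather than a couple of regexes, and the bucket name is a hypothetical placeholder.

```python
# Toy sketch of Stage 1: scan objects in a bucket and flag likely PII.
# Real classifiers go far beyond regexes; the bucket name is hypothetical.
import re
import boto3

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

s3 = boto3.client("s3")
bucket = "example-data-lake"  # hypothetical

inventory = []
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        # Sample the first 64 KB of each object and look for sensitive patterns
        body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read(65536)
        text = body.decode("utf-8", errors="ignore")
        labels = [name for name, rx in (("SSN", SSN), ("EMAIL", EMAIL)) if rx.search(text)]
        if labels:
            inventory.append({"key": obj["Key"], "labels": labels})

print(f"{len(inventory)} objects with likely sensitive data")
```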

Stage 2: Map Identities, Access Paths, and Posture

With a complete data inventory in hand, the next step is building a data-access graph: a normalized map of which identities (users, groups, roles, service accounts, AI agents) have what level of access to which sensitive data stores, through which paths.

This means normalizing entitlements across:

  • Cloud IAM roles and policies (AWS, Azure, GCP)
  • Data platform permissions (BigQuery datasets, Snowflake roles, Redshift schemas)
  • SaaS app roles (Salesforce profiles, M365 sharing settings, Workday security groups)
  • Non-human identities: service accounts, workload identities, OAuth tokens, AI agent credentials

Simultaneously, evaluate posture for each sensitive store: encryption state, audit logging status, backup coverage, external exposure (public endpoints, cross-account sharing), and regulatory boundary alignment.
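
A minimal illustration of the data-access graph idea, assuming you already have an entitlement export and a classification inventory to join (every record below is a made-up stand-in for what IAM and DSPM exports would supply):

```python
# Minimal sketch of a data-access graph: join identity->resource grants
# (from IAM exports) with a resource->sensitivity inventory (from DSPM).
from collections import defaultdict

grants = [  # (identity, resource, access level) - illustrative records
    ("role/analysts", "s3://example-data-lake/hr/", "read"),
    ("svc/etl-pipeline", "bq://prod.customers", "read-write"),
    ("agent/support-copilot", "sf://salesforce/contacts", "read"),
]

inventory = {  # resource -> classification labels
    "s3://example-data-lake/hr/": {"PII", "SSN"},
    "bq://prod.customers": {"PII"},
    "sf://salesforce/contacts": {"PII", "EMAIL"},
}

access_graph: dict[str, list[tuple[str, str, set[str]]]] = defaultdict(list)
for identity, resource, level in grants:
    access_graph[identity].append((resource, level, inventory.get(resource, set())))

# Answering "who can reach SSNs right now?"
for identity, edges in access_graph.items():
    for resource, level, labels in edges:
        if "SSN" in labels:
            print(f"{identity} has {level} access to {resource} ({labels})")
```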

Stage 3: Prioritize Risks and Identify Toxic Combinations

Not all access misconfigurations are equal. A security group with overly broad access to a low-sensitivity analytics table is a low-priority finding. The same group with access to an unencrypted S3 bucket containing 50 million Social Security Numbers is a critical incident waiting to happen.

Toxic combinations—the highest-priority risk patterns in data access governance—emerge from the intersection of:

| Risk Factor | Example |
| --- | --- |
| High data sensitivity | PCI cardholder data, PHI, employee SSNs |
| Broad access scope | All-users groups, wildcard IAM policies, inherited super-roles |
| External exposure | Publicly accessible buckets, externally shared folders |
| Anomalous behavior signals | Bulk downloads, after-hours queries, unusual geographic access |
| AI agent over-reach | Copilot with access to unmasked HR records or financial models |

DSPM risk scores combined with DAG access analytics should surface these combinations automatically, prioritized by potential blast radius.
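
One way to think about this prioritization is a simple multiplicative score over the factors in the table above. The weights and sample findings below are illustrative assumptions, not a documented Sentra scoring formula:

```python
# Sketch of prioritizing findings by "toxic combinations":
# sensitivity x access scope x exposure x behavior. Weights are illustrative.
SENSITIVITY = {"PUBLIC": 0, "INTERNAL": 1, "PII": 3, "PHI": 4, "PCI": 4}

def risk_score(finding: dict) -> int:
    score = SENSITIVITY.get(finding["sensitivity"], 1)
    if finding["broad_access"]:        # all-users groups, wildcard policies
        score *= 3
    if finding["externally_exposed"]:  # public bucket, external share
        score *= 4
    if finding["anomalous_activity"]:  # bulk reads, odd hours, new geos
        score *= 2
    return score

findings = [
    {"resource": "s3://example/customer-ssns", "sensitivity": "PII",
     "broad_access": True, "externally_exposed": True, "anomalous_activity": False},
    {"resource": "bq://analytics.page_views", "sensitivity": "INTERNAL",
     "broad_access": True, "externally_exposed": False, "anomalous_activity": False},
]

for f in sorted(findings, key=risk_score, reverse=True):
    print(risk_score(f), f["resource"])
```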

Stage 4: Enforce Least Privilege and Remediate Access

This is where governance moves from analysis to action.

Remediation at the data layer involves:

  • Removing over-broad group memberships: Eliminating all-users, domain-wide, or project-level access grants where dataset- or table-level access is appropriate
  • Cleaning up dormant accounts and stale keys: Revoking access for users, service accounts, or API keys that haven’t been used in 30, 60, or 90 days
  • Fixing misaligned shares and labels: Correcting externally shared folders containing sensitive data; applying classification labels that trigger downstream DLP and access policies
  • Eliminating shadow and ROT data: Deleting or archiving sensitive data that has no legitimate active use—which both reduces attack surface and, in Sentra’s experience, drives approximately 20% cloud storage cost reduction for typical customers

Effective remediation requires tight integration between the DAG layer and enforcement points: IAM platforms, cloud-native DLP tools, data warehouse access controls, and masking/row-level security policies.
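
As a sketch of the dormant-credentials item, the snippet below uses the AWS IAM API via boto3 to flag access keys unused for 90+ days. It only prints findings; a real workflow would route deactivation through review and change control, as noted above.

```python
# Sketch of the "dormant credentials" remediation step: flag IAM access
# keys unused for 90+ days. Deactivation is left commented out (dry run).
from datetime import datetime, timedelta, timezone
import boto3

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            last = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
            # Fall back to the key's creation date if it was never used
            last_used = last["AccessKeyLastUsed"].get("LastUsedDate", key["CreateDate"])
            if last_used < cutoff:
                print(f"stale: {user['UserName']} / {key['AccessKeyId']} "
                      f"(last used {last_used:%Y-%m-%d})")
                # iam.update_access_key(UserName=user["UserName"],
                #                       AccessKeyId=key["AccessKeyId"],
                #                       Status="Inactive")
```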

Stage 5: Monitor Access and Respond in Real Time

Governance doesn’t end at policy enforcement. Identities evolve, data moves, and attackers adapt.

Data Detection and Response (DDR) provides the runtime visibility layer that DSPM and DAG cannot supply on their own. DDR monitors data access events continuously:

  • Queries executed against sensitive tables in BigQuery, Snowflake, or Redshift
  • File reads and downloads from S3, GCS, or SharePoint
  • API calls accessing sensitive records in SaaS applications
  • Bulk exports, unusual query volumes, or access from anomalous geolocations

When suspicious patterns emerge—an analyst querying 10x their normal data volume, a service account accessing tables outside its defined scope, or an AI agent traversing ACLs it was never meant to reach—DDR triggers guided or automated responses: access suspension, alert escalation, or automated IAM policy revocation.
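
A toy version of the volume-anomaly signal: compare each identity's query count today against its rolling baseline and flag large deviations. The threshold and data shapes are assumptions; production DDR correlates many more signals than volume alone.

```python
# Toy runtime-detection sketch: flag identities whose daily query volume
# far exceeds their historical baseline.
from collections import Counter
from statistics import mean

def anomalies(history: list[Counter], today: Counter, factor: float = 10.0) -> list[str]:
    flagged = []
    for principal, count in today.items():
        baseline = mean(day.get(principal, 0) for day in history) or 1.0
        if count > factor * baseline:
            flagged.append(f"{principal}: {count} queries vs baseline {baseline:.1f}")
    return flagged

# Illustrative inputs: 30 days of per-identity query counts from audit logs
history = [Counter({"analyst@corp": 40, "svc-etl": 500}) for _ in range(30)]
today = Counter({"analyst@corp": 620, "svc-etl": 510})

print(anomalies(history, today))  # flags analyst@corp, not svc-etl
```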

Stage 6: Review, Audit, and Iterate

The final stage closes the loop. Periodic access reviews—grounded in actual usage data rather than static role assignments—are how organizations progressively tighten their least-privilege posture over time.

Effective access reviews should:

  • Use behavioral data (who actually accessed what, when) to challenge standing permissions
  • Generate audit-ready evidence for PCI DSS 4.0 log review requirements, GDPR accountability obligations, HIPAA access control audits, and SOC 2 Type II certifications
  • Feed findings back into Stage 4 remediation workflows to create a continuous improvement cycle

Implementing Least-Privilege Access in Practice: Platform Patterns

The lifecycle above describes what to do. This section covers how to implement least-privilege access in the cloud data platforms most security architects deal with day to day.

Designing Roles and Scopes

The most common mistake in cloud data access design is defaulting to project- or account-level roles because they’re easier to administer. Project-wide BigQuery Data Viewer access to all datasets in a GCP project—granted because a data scientist needed access to one analytics table—is a textbook authorization gap.

Guiding principles for role design:

  • Grant at the narrowest scope possible: column or table rather than dataset, and dataset rather than project
  • Create purpose-built data roles rather than repurposing infrastructure roles (e.g., a dedicated FINANCE_ANALYST_RO Snowflake role, not a shared SYSADMIN-derived role)
  • Separate ingestion/ETL roles from read/analytics roles; separate production roles from sandbox roles
  • Never use account owner or project admin roles for routine data operations

Object Storage: S3 and GCS Patterns

| Pattern | Implementation | What It Prevents |
| --- | --- | --- |
| Dedicated storage integration roles | IAM roles with scoped STORAGE_ALLOWED_LOCATIONS (Snowflake external stages) | Broad bucket access from warehouse integrations |
| Granular S3 bucket policies | s3:GetObject scoped to specific prefixes, not s3:* on arn:aws:s3:::* | Wildcard policies exposing entire accounts |
| Block public access by default | S3 Block Public Access settings enforced at account level | Accidental public bucket exposure |
| No hard-coded credentials | IAM roles and instance profiles; no long-lived access keys in application code | Credential exfiltration from code repositories |
| Object-level logging | S3 Server Access Logging or CloudTrail data events enabled on sensitive buckets | Blind spots in DDR and audit trails |

Common pitfalls: Overly broad ETL roles that carry s3:* access across all buckets; shared Glue or Spark job roles that accumulate permissions over time; lifecycle policies that fail to delete sensitive data in staging prefixes.

Data Warehouse Patterns: BigQuery

BigQuery’s IAM model is powerful but frequently misconfigured at scale.

Recommended BigQuery access architecture:

| Access Type | Recommended Scope | IAM Role |
| --- | --- | --- |
| Analysts (read-only) | Dataset level | roles/bigquery.dataViewer at dataset, not project |
| Engineers (read/write) | Dataset or table level | roles/bigquery.dataEditor scoped to target dataset |
| Pipelines/ETL | Dataset or table level | Custom role with minimum required permissions |
| Admins | Project level, with audit | roles/bigquery.admin restricted to named individuals |
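
For example, granting an analyst read access at the dataset level (rather than project-wide) can be done directly through the google-cloud-bigquery client; the project, dataset, and email below are hypothetical placeholders.

```python
# Sketch of granting dataset-scoped (not project-wide) read access in
# BigQuery. Project, dataset, and email are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()
dataset = client.get_dataset("example-project.finance_analytics")

entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",                 # dataset-level read, not a project role
        entity_type="userByEmail",
        entity_id="analyst@example.com",
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])  # persist the ACL change
```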

Advanced controls to implement:

  • Column-level security: BigQuery policy tags enable column masking and fine-grained access by data classification—PII columns tagged and masked for default consumers, accessible in raw form only through approved roles
  • Row-level security: Row access policies (filter expressions) limit which records specific identities can query within a shared table
  • Authorized views: Expose constrained projections of sensitive tables without granting underlying table access

Data Warehouse Patterns: Snowflake

Snowflake’s role hierarchy is a common source of “access debt”—the accumulated, under-managed entitlements that make toxic combinations difficult to detect manually.

Snowflake access hygiene framework:

| Issue | Symptom | Remediation |
| --- | --- | --- |
| Super-roles | Single role with access to all databases and schemas | Decompose into environment- and domain-specific roles |
| Dormant roles | Roles granted but unused for 90+ days | Revoke and require re-justification |
| Role hierarchy sprawl | Inherited permissions cascade unexpectedly through GRANT ROLE TO ROLE | Map full effective permissions; audit inheritance chains |
| Shared ETL credentials | One SYSADMIN-level user running all pipelines | Dedicated service users per pipeline with scoped permissions |
| Production data in dev | Dev databases containing real customer records | DSPM discovery to identify and quarantine; masking in non-prod |

DSPM platforms like Sentra can identify toxic combinations in Snowflake—for example, a broadly-granted analyst role that, through role inheritance, carries access to unmasked PII tables in a production schema—and guide targeted remediation without requiring a full role architecture rebuild.

Managed Databases and SaaS

For managed relational databases (Amazon RDS, Google Cloud SQL, Azure SQL):

  • Maintain separate application users (minimal SELECT/INSERT/UPDATE on specific schemas) and analytics users (read-only, ideally pointing to read replicas)
  • Avoid all-powerful shared users like root or master for routine operations
  • Rotate credentials using secrets managers (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault) rather than static passwords

For SaaS platforms (Salesforce, M365, Workday):

Application-native role management is necessary but insufficient. A DAG/DSPM layer that normalizes cross-platform access and correlates identity-to-data across SaaS apps provides the unified visibility that app-by-app administration cannot.

Governing AI and Copilot Data Access

No guide to cloud data access governance in 2026 is complete without addressing the category of identity that most organizations have not yet learned to treat as a security problem: AI agents and copilots.

AI systems—whether internal LLM deployments, third-party copilots integrated into SaaS workflows, or autonomous agents connected to data warehouses—operate as high-privilege data consumers. They query broadly. They often have access granted for convenience rather than least-privilege design. And unlike human users, their access behavior is harder to baseline and anomaly-detect without purpose-built tooling.

The AI data access problem in practice:

  • Copilots integrated into M365 or Salesforce may inherit user-level permissions—including access to sensitive files, emails, and records the user has accumulated over years
  • AI agents connected to BigQuery or Snowflake for RAG pipelines may have schema-wide SELECT permissions intended for development that were never scoped down before production deployment
  • AI systems that generate code or SQL may exfiltrate schema information as part of their normal operation, even without directly accessing data records

Governing AI identities requires the same lifecycle applied to human identities:

  1. Inventory: Discover all AI agents and copilot integrations with data access—including shadow AI deployments
  2. Classify: Map which sensitive datasets each AI agent can reach, with what level of access, through which credentials
  3. Constrain: Apply least-privilege access; use classification labels and policy tags to enforce data boundaries (e.g., AI agents cannot access raw PII, only masked or synthetic equivalents)
  4. Monitor: Apply DDR to AI access patterns; establish baselines and alert on deviations (bulk reads, unusual schema traversal, access to tables outside defined scope)
  5. Govern: Treat AI agent access provisioning and review with the same rigor as human privileged access—including JIT elevation for sensitive operations
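
Step 3 above (constraining access by classification) can be illustrated with a toy policy gate: before an AI agent's query executes, check the classification labels on the target tables against the agent's clearance and deny by default. The registries and label names are illustrative assumptions, not a specific product API.

```python
# Hypothetical policy gate: refuse AI agent queries against tables whose
# classification labels exceed the agent's clearance. Deny-by-default.
CLASSIFICATION = {
    "warehouse.hr.employees": {"PII", "SALARY"},
    "warehouse.hr.employees_masked": {"MASKED"},
    "warehouse.web.page_views": set(),
}

AGENT_CLEARANCE = {
    "support-copilot": {"MASKED"},           # may only see masked data
    "finance-agent": {"MASKED", "SALARY"},
}

def allow_query(agent: str, tables: list[str]) -> bool:
    cleared = AGENT_CLEARANCE.get(agent, set())
    # Every label on every target table must be within the agent's clearance;
    # unclassified tables are treated as blocked.
    return all(CLASSIFICATION.get(t, {"UNCLASSIFIED"}) <= cleared for t in tables)

print(allow_query("support-copilot", ["warehouse.hr.employees"]))         # False
print(allow_query("support-copilot", ["warehouse.hr.employees_masked"]))  # True
```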

Comparison: Key Approaches to Cloud Data Access Governance

| Approach | Strengths | Limitations |
| --- | --- | --- |
| IAM-only governance | Mature tooling; cloud-native integration; widely understood | No data-layer visibility; doesn't distinguish sensitive from non-sensitive data; authorization gap grows as data sprawls |
| DSPM without DAG | Excellent data discovery and risk visibility; surfaces exposure | Identifies problems but doesn't enforce access changes; no continuous remediation workflow |
| DAG without DSPM | Can enforce access policies and manage entitlements | Without data classification context, policy decisions lack sensitivity-aware prioritization |
| Manual access reviews | Meets minimum compliance bar; human judgment applied | Slow, resource-intensive, stale between cycles; can't keep pace with cloud environment velocity |
| DSPM + DAG + DDR (unified) | Continuous discovery, data-aware enforcement, runtime detection; closes the authorization gap end-to-end | Requires integrated platform or well-orchestrated toolchain; initial discovery and classification effort at deployment |

Just-Enough and Just-In-Time Access for Cloud Data

Standing privileges—long-lived, always-on access to sensitive data—are the single largest contributor to breach blast radius in cloud environments. When a privileged identity is compromised, standing access means the attacker inherits everything, immediately. JEA and JIT are the practical alternatives.

Just-Enough Access (JEA)

JEA means users and systems receive access calibrated to their actual role requirements—not the role requirements of their team, their manager’s interpretation of their role, or what was convenient to grant six months ago.

In practice, JEA for data teams typically means:

  • Default access to masked or aggregated versions of sensitive data (e.g., tokenized PII, row-sampled datasets, pre-aggregated analytics views)
  • Explicit approval workflows for access to raw, highly sensitive data—triggered on demand, logged, and time-bounded
  • Policy tag enforcement at the data warehouse layer (BigQuery policy tags, Snowflake data classification tags) that dynamically apply masking based on the requesting identity’s clearance level

This shifts the burden from “deny access by default and re-grant manually” to “grant minimal access by default and elevate via audited workflow”—which is operationally sustainable at scale.

Just-In-Time (JIT) Access

JIT goes further: rather than maintaining standing access (even minimal access), high-sensitivity operations trigger temporary elevation for a defined window, with automatic revocation when the window closes or the task completes.

JIT access workflow for cloud data:

  1. Analyst requests access to production PII dataset for incident investigation
  2. Request triggers approval workflow (manager + data owner)
  3. Upon approval, JIT system grants time-bound IAM binding (e.g., 4-hour window)
  4. Access is logged in full; queries are captured for audit trail
  5. At window expiration, IAM binding is automatically revoked
  6. DDR monitors for anomalous behavior during the access window

Cloud-native JIT tooling includes GCP’s Privileged Access Manager (PAM), AWS IAM Identity Center with temporary permission sets, and enterprise PAM platforms like CyberArk and BeyondTrust. DSPM and DAG platforms provide the data sensitivity signals that make JIT decisions meaningful—the system knows whether the dataset being requested contains regulated PHI, its current exposure posture, and whether the requesting identity has a legitimate business justification based on their historical access patterns.
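
On AWS, the time-bound part of this pattern can be sketched with STS: assume a narrowly scoped investigation role for a fixed window, after which the credentials expire on their own. The role ARN is hypothetical, and this omits the approval and audit steps a real JIT workflow adds through the PAM tooling mentioned above.

```python
# Sketch of time-bound elevation via AWS STS: temporary credentials for a
# scoped role that expire automatically (assumes the role's max session
# duration allows 4 hours). Role ARN and session name are hypothetical.
import boto3

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/pii-incident-readonly",
    RoleSessionName="incident-4521-analyst",
    DurationSeconds=4 * 3600,  # credentials self-expire; no standing access
)

creds = resp["Credentials"]
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
# The analyst works through this session only; at expiry, access is gone.
print("temporary access expires at", creds["Expiration"])
```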

Zero Standing Privilege: The Target State

For the highest-sensitivity data environments—customer PII stores, financial records, regulated health data—the target architecture is zero standing privilege: no human identity holds persistent access to raw sensitive data. All access is JIT-elevated, time-bounded, and fully audited.

This is not achievable overnight for most organizations, but it is the direction of travel. The maturity model below provides a practical path.

Cloud Data Access Governance Maturity Model

| Maturity Level | Posture | Key Characteristics |
| --- | --- | --- |
| Level 1: Ad Hoc | Reactive | Access granted on request; no consistent least-privilege enforcement; no data classification; periodic manual audits |
| Level 2: Defined | Policy-driven | IAM roles defined by team/function; some data classification; access reviews on a fixed schedule (quarterly/annual) |
| Level 3: Managed | DSPM-informed | Continuous data discovery and classification; data-access graph mapped; toxic combinations identified; remediation tracked |
| Level 4: Governed | DAG-enforced | Least-privilege enforced at data layer; JEA implemented; access reviews driven by usage data; SaaS and AI covered |
| Level 5: Optimized | Continuous | Zero standing privilege for sensitive data; JIT elevation with automated provisioning/revocation; DDR with automated response; AI agents governed like human identities |

Frequently Asked Questions

What is cloud data access governance?
Cloud data access governance is the set of policies, processes, and technical controls that ensure only authorized identities—humans, applications, and AI agents—can access sensitive cloud data, under conditions aligned with least-privilege, zero trust, and compliance requirements. It bridges IAM (resource-level access control) and DSPM (data discovery and classification) to enforce data-first access management continuously.

How is data access governance different from IAM?
IAM manages access to cloud resources and services at the infrastructure layer. Data access governance operates at the data layer—it understands what data is sensitive, who should be allowed to access it based on that sensitivity, and whether current permissions are aligned with least-privilege requirements. IAM is a necessary component of DAG, but DAG extends IAM with data-awareness and continuous enforcement.

What is the authorization gap?
The authorization gap is the difference between what identities can access (based on their current permissions) and what they should access under least-privilege principles. The gap grows as data is copied, roles accumulate permissions over time, and access is granted for convenience without ongoing review. DSPM and DAG together are designed to continuously measure and close this gap.

What is DSPM and how does it relate to data access governance?
Data Security Posture Management (DSPM) continuously discovers and classifies sensitive data across cloud environments, evaluating each data store’s security posture—encryption, exposure, logging, regulatory alignment. DSPM provides the data intelligence layer that makes access governance decisions meaningful: rather than reviewing permissions in the abstract, DAG uses DSPM context to understand which sensitive data is behind which permissions, and prioritizes remediation accordingly.

What does least-privilege data access mean in practice?
Least-privilege data access means granting identities the minimum level of access—to the most narrowly scoped data resource—required to perform their legitimate function. In practice, this means dataset-level (not project-level) access in BigQuery, domain-specific roles (not inherited super-roles) in Snowflake, prefix-scoped (not bucket-wide) policies in S3, and time-bounded JIT elevation rather than standing access to highly sensitive data.

How should AI agents be governed in a data access governance framework?
AI agents and copilots should be treated as first-class identities in the data access governance lifecycle. This means inventorying all AI agents with data access, mapping which sensitive datasets they can reach, constraining access using classification labels and policy tags, monitoring their data access behavior with DDR, and applying JIT elevation patterns for AI-initiated access to high-sensitivity data—just as you would for privileged human users.

What is Just-In-Time (JIT) access for cloud data?
JIT access is a pattern where sensitive data access is granted temporarily—for a defined window tied to a specific task or incident—rather than maintained as a standing permission. JIT workflows typically require approval, generate a full audit trail, and automatically revoke access when the window closes. JIT is increasingly considered the target state for access to regulated and high-sensitivity data in zero trust architectures.

How do you implement data access governance across multiple clouds?
Multi-cloud data access governance requires a platform that can normalize entitlements across cloud-native IAM systems (AWS IAM, Azure Entra ID, GCP IAM), data warehouse permission models (BigQuery, Snowflake, Redshift), and SaaS applications into a unified data-access graph. This graph, enriched with DSPM classification context, enables consistent least-privilege enforcement and risk prioritization regardless of which cloud or platform the data lives in.

What compliance frameworks require cloud data access governance?
PCI DSS 4.0 requires access control reviews and log monitoring for cardholder data environments. GDPR mandates demonstrable controls over who can access personal data and the ability to audit access history. HIPAA requires access controls, audit controls, and integrity controls for PHI. SOC 2 Type II requires evidence of access control design and operating effectiveness. Cloud data access governance—particularly when backed by continuous DSPM and DAG—provides the evidentiary foundation for all of these frameworks.

Conclusion: Closing the Authorization Gap with a Data-First Approach

The trajectory of cloud data risk runs in one direction: more data, more identities, more movement, more exposure. IAM alone cannot keep pace. Periodic audits cannot keep pace. One-time DSPM scans cannot keep pace.

What can keep pace is a continuous, data-first governance lifecycle—one that starts with knowing where your sensitive data lives, extends to mapping every identity that can reach it, enforces least-privilege access at the data layer, and monitors runtime behavior to detect and respond to threats as they emerge.

The authorization gap is not a theoretical problem. It is the documented precondition for most major cloud data breaches. Closing it requires treating data access governance as an operational discipline, not a compliance checkbox—and building the architecture to support it at the speed and scale cloud environments demand.

For a deeper look at how Sentra’s DSPM, DAG, and DDR capabilities work together to close the authorization gap across cloud, SaaS, and AI environments, explore our Data Access Governance solution page, DSPM overview, and Data Detection and Response documentation.

David Stuart | May 12, 2026 | 3 Min Read | AI and ML

Daybreak Answers the Vulnerability Question. Here's the One It Doesn't.


A month after Anthropic announced Mythos and Project Glasswing, OpenAI launched Daybreak.

The competitive framing is hard to avoid. Two frontier AI labs, one month apart, both building systems designed to find and fix vulnerabilities before attackers can exploit them. The Hacker News called it OpenAI taking on Anthropic in the AI cybersecurity race. That framing is accurate but slightly misses the point for security teams evaluating what to do with either of them.

Both tools are solving a real and important problem: the window between a vulnerability being discoverable and being exploited has collapsed. As OpenAI's own announcement noted, AI can now compress hours of security analysis into minutes. The goal is to get defenders to vulnerabilities before attackers do. Daybreak builds editable threat models from actual codebases, validates findings in isolated environments, and proposes patches for human review. That is a genuinely useful capability.

But there are two separate questions in play here, and it's worth being precise about which one Daybreak answers.

THE QUESTION DAYBREAK ANSWERS

Daybreak answers, “What vulnerabilities exist in your code, and how do we fix them faster than an attacker can exploit them?”

That is the right question for a vulnerability management platform. It's the offense-versus-defense race that Mythos dramatized and Daybreak responds to. If you can identify and remediate a vulnerability before an attacker has a working exploit, you've won that exchange.

THE QUESTION DAYBREAK DOESN'T ANSWER

Daybreak doesn't answer, “If a vulnerability is exploited before it's patched, what does the attacker reach?”

This is the blast radius question. And it's the question that determines whether a successful exploit becomes a contained incident or a material breach.

The answer depends entirely on what sensitive data is accessible from the compromised position. What's in the codebase environment, what service accounts have access to, what data flows through the infrastructure Daybreak is analyzing. Vulnerability detection doesn't map sensitive data to identities. It doesn't tell you whether a compromised CI/CD pipeline has access to a production database containing customer PII. It doesn't tell you what an AI agent operating in that environment can reach and synthesize.

These are data governance questions. And they require a different kind of answer.

THE AI AGENT ACCESS PROBLEM

There's a second dimension here that I think is underappreciated in the Daybreak coverage.

Daybreak - like every AI security agent - needs access to your environment to do its job. Codebases, repositories, infrastructure configurations, build pipelines. That access is necessary for the tool to work. And it means that the data those environments contain becomes part of the access footprint of the AI agent operating in them.

Most organizations haven't fully inventoried what sensitive data lives in their development and security infrastructure. Production credentials in configuration files. Customer data in test environments that were never properly cleaned. PII that migrated into a repository through an integration nobody fully audited. This data exists in most large enterprise environments, not because of negligence, but because data accumulates faster than it gets classified.

Before you bring an AI agent into those environments - any AI agent, not just Daybreak - the governance question needs an answer: "What sensitive data is in here, who can reach it, and is that access picture appropriate for an AI system to operate within?"

WHAT THIS MEANS FOR SECURITY TEAMS DEPLOYING DAYBREAK

Three things worth doing before or alongside a Daybreak deployment:

First, classify what's in the environments Daybreak will access. Codebases and CI/CD pipelines accumulate sensitive data that isn't always visible in a standard data inventory. Running a classification pass before bringing an AI agent in tells you what's there and what the exposure looks like if that environment is compromised.

Second, map what Daybreak's service account can reach. The blast radius of any compromise - including a compromise of Daybreak itself or a prompt injection against it - is bounded by what its operating identity can access. Scoping that access to the minimum necessary before deployment is the right architecture.

Third, know what each patch is protecting. Daybreak's value is highest when you know which vulnerabilities, if exploited, would expose the most sensitive data. That prioritization requires a continuous, current picture of where sensitive data lives in your environment - so that a critical vulnerability in a system with no sensitive data downstream gets triaged differently from one with a direct path to regulated records.

THE PACE OF THIS IS ACCELERATING

Mythos in April. Daybreak in May. The AI security capability race is compressing timelines for everyone.

Organizations that haven't yet built a continuous, current picture of their sensitive data estate are running out of runway to do it before AI security agents are operating inside their environments. The governance work - classification, identity-to-data mapping, access rationalization - is the foundation that makes all of these tools safer to deploy and more effective when they find something.

Vulnerability tools tell you where the door is. Data security tells you what's in the room. Both questions matter. The pace of the AI security race means you need to be working on both at the same time.

---

FREQUENTLY ASKED QUESTIONS

What is OpenAI Daybreak?

OpenAI Daybreak is a cybersecurity initiative launched May 11, 2026 that combines GPT-5.5 and Codex Security to help organizations identify, validate, and remediate software vulnerabilities. It builds editable threat models from enterprise codebases, validates likely vulnerabilities in isolated environments, and proposes patches for human review. Access is currently limited — organizations must request a vulnerability scan or contact OpenAI sales.

How is Daybreak different from Anthropic Mythos?

Both platforms use frontier AI to find and exploit vulnerabilities — Mythos focuses on autonomous zero-day discovery, while Daybreak is positioned more as a developer-integrated defense platform with a broader partner ecosystem. Anthropic has emphasized restricted access and high-risk vulnerability discovery; OpenAI is taking a broader platform approach tied to enterprise development workflows. Both address the vulnerability discovery question; neither addresses the blast radius question of what data is accessible if a vulnerability is exploited.

What does Daybreak mean for enterprise data security?

Daybreak requires feeding AI agents access to your codebase and infrastructure environments. Before deploying any AI security agent, organizations should classify what sensitive data lives in those environments, map what the agent's operating identity can access, and ensure that access reflects least privilege. The same access that makes these tools effective makes them part of your data attack surface.

What is the blast radius question in cybersecurity?

Blast radius refers to the scope of damage from a successful exploit — specifically, what sensitive data becomes accessible to an attacker who gains a foothold through a vulnerability. Vulnerability tools like Daybreak address how to find and fix vulnerabilities faster. Data Security Posture Management (DSPM) addresses what an attacker reaches if a vulnerability is exploited before it's patched — which is determined by how sensitive data is distributed, classified, and access-controlled across the environment.

How does DSPM complement AI vulnerability tools like Daybreak?

DSPM continuously discovers and classifies sensitive data across cloud, SaaS, and on-premises environments, maps which identities can access sensitive stores, and identifies overpermissioned access. In a Daybreak deployment, DSPM answers three questions: what sensitive data lives in the environments Daybreak will access, what can Daybreak's operating identity reach, and which vulnerabilities are highest priority because they have a direct path to regulated or sensitive data. DSPM and vulnerability management address sequential parts of the same problem — not competing solutions.

Daybreak and Mythos are compressing the vulnerability window on both sides. The organizations best positioned to respond aren't the ones scrambling to understand their data exposure after an exploit — they're the ones who already have a continuous, current picture of what sensitive data lives in their environments, what every identity can reach, and where access needs to be tightened before an AI agent touches it.

See how Sentra maps sensitive data across your cloud, SaaS, and development environments — and what your blast radius actually looks like today. Schedule a Demo →

Ron Reiter | May 8, 2026 | 3 Min Read | Data Security

Mythos Is Already Here. The Question Is What Attackers Will Find.


I've spent a lot of time thinking about what Mythos actually changes — and what it doesn't.

The vulnerabilities Mythos found are not new in nature. They're variations of known vulnerability classes — buffer overflows, race conditions, memory corruption. These aren't novel attack categories. What's new is the speed and scale at which Mythos surfaces them. In pre-release testing, Mythos Preview autonomously developed working exploits for Mozilla Firefox vulnerabilities 181 times, compared to the prior model's two successful attempts out of several hundred. That isn't an incremental improvement. It's a different class of capability.

Modeled scenarios show attackers discovering the majority of new vulnerabilities within a few years, meaning defenders increasingly respond to issues adversaries may already know about. The core challenge shifts from finding vulnerabilities faster to fixing them faster.

That's the real strategic shift. And for data security specifically, it has a specific implication that I think is underappreciated in the current conversation.

PATCH SPEED IS NECESSARY. IT'S NOT SUFFICIENT.

When a Mythos-class tool helps an attacker gain initial access — through a zero-day in a browser, an OS, an unpatched server — the next thing that determines outcome is what they find. What data is accessible from the compromised position. What identities and service accounts can be traversed. What sensitive records sit in environments with overly broad permissions.

Most security conversations right now are about accelerating patch cycles, which is the right instinct. A 2025 report found that over 45% of discovered security vulnerabilities in large organizations remain unpatched after 12 months. Closing that gap matters enormously. But patching controls the entry point. It doesn't control the blast radius once someone is in.

The blast radius question is a data question.

WHAT ATTACKERS FIND WHEN THEY GET IN

The uncomfortable truth is that most organizations don't have a comprehensive, current answer to: what sensitive data is accessible from any given position in my environment?

Data accumulates in ways that security teams don't fully track. Salesforce orgs fill up with PII from integrations that nobody audited. Lakehouses absorb years of production data pipelines. Cloud storage buckets get misconfigured and forgotten. Service accounts accumulate permissions that outlive the workflows they were created for. And increasingly, AI agents and copilots run under those service accounts — meaning whatever a service account can reach, the AI can retrieve and synthesize.

This isn't a hypothetical. It's the operational reality in most enterprises I talk to.

When Mythos-class capabilities become more widely available — and Anthropic's own estimate is that similar capabilities will proliferate from other AI labs within six to eighteen months — the attack surface question becomes not just "what vulnerabilities can be exploited" but "what data becomes accessible when they are." Those are different problems with different solutions.

WHAT ACTUALLY CHANGES YOUR RISK PROFILE

Assume breach. Not as a thought experiment, but as the operating reality it now is.

Given that, the most meaningful thing you can do in the next 90 days isn't buy another scanner. It's get a clear, continuous answer to where your sensitive data actually lives — across cloud, SaaS, data warehouses, and the AI systems layered on top of them — and make sure the access picture reflects least privilege, not accumulated permissions from three years of workflow changes.

That means:

Knowing what's in your environment, continuously. Not a quarterly scan. Not a point-in-time audit. When Mythos-class tools can find and exploit a vulnerability overnight, a quarterly data inventory is operationally useless. You need to know what sensitive data exists and where it lives as a continuous fact, not a periodic report.

Understanding what each identity can reach. The blast radius of any successful exploit is bounded by what the compromised identity — human or service account or AI agent — can access. If that access picture isn't mapped to sensitive data at the record level, you can't assess exposure or contain it quickly after a breach.

Eliminating data that shouldn't be where it is. The most effective way to reduce Mythos-era blast radius is to not have sensitive data sitting in places it doesn't need to be. Redundant copies of regulated records, production data that migrated to dev environments, PII sitting in SaaS tools it arrived in through integration workflows — this is the data that causes the notifications, the regulatory exposure, and the headlines. Getting rid of it before an attacker finds it is categorically better than discovering it during incident response.

THE PART OF THIS CONVERSATION THAT ISN'T GETTING ENOUGH ATTENTION

Most of the Mythos coverage has focused, reasonably, on the vulnerability discovery side. That's where the dramatic capability jump is visible. But the quieter implication is about what happens after discovery and exploitation — which is where data security actually determines outcome.

"The window between a vulnerability being discovered and being exploited by an adversary has collapsed — what once took months now happens in minutes with AI," according to one Project Glasswing partner. If that compression applies equally to time-to-exploit, it applies equally to time-to-data. The faster an attacker can reach a compromised system, the faster they reach whatever's accessible from it.

This is the Mythos implication for data security teams: the window for containment is shrinking, and continuous data visibility is how you make that window matter.

---

FREQUENTLY ASKED QUESTIONS

What is Claude Mythos Preview?

Claude Mythos Preview is an AI model announced by Anthropic in April 2026 capable of autonomously discovering and exploiting zero-day vulnerabilities across every major operating system and browser, at a speed and scale that significantly exceeds human security researchers.

Is Mythos publicly available?

Anthropic has withheld general release, citing offensive risk. Access has been granted to approximately 40 organizations through Project Glasswing, a defensive security consortium. Anthropic estimates comparable capabilities will emerge from other AI labs within 6 to 18 months.

What does "assume breach" mean in a Mythos context?

Assume breach means designing your security posture around the expectation that attackers will get in — focusing less on prevention at the perimeter and more on limiting what they find inside. In a Mythos context, where exploit development can happen overnight, assume breach shifts from a framework to an operating reality.

How does data visibility reduce breach blast radius?

Blast radius — the scope of damage from a successful breach — is determined by what sensitive data is accessible from a compromised position, not by the exploit itself. Organizations with continuous, comprehensive data classification and least-privilege access governance can identify what was exposed quickly and contain the damage. Organizations without it typically discover their exposure during incident response, when it's too late.

What is DSPM and how does it help with Mythos preparedness?

Data Security Posture Management (DSPM) is a continuous monitoring discipline that discovers and classifies sensitive data across cloud, SaaS, and on-premises environments, maps access to that data, and identifies where sensitive records are exposed to over-permissioned identities or misconfigured controls. In a Mythos-era threat model, DSPM provides the continuous data inventory that makes blast radius assessment and containment possible.

