Updates From the Front Lines of Data Security

Nikki Ralston | March 9, 2026 | 4 Min Read

7 Data Loss Prevention Best Practices to Cut False Positives and Blind Spots

Most security leaders aren’t asking for “more DLP.” They’re asking why the DLP they already own is noisy, brittle, and still misses real risk. You turn on endpoint, email, and network DLP. You import PCI and PII templates. Within weeks, users complain that normal work is blocked, so policies get relaxed or disabled. Analysts drown in meaningless alerts. Meanwhile, you know there are blind spots in SaaS, cloud data stores, and AI tools that DLP never sees.

The problem usually isn’t that you bought the “wrong” DLP. It’s that DLP is doing too much on its own: trying to discover sensitive data, understand business context, and enforce policies in one step. To improve how your DLP performs, you have to separate those responsibilities and give DLP the data intelligence it has always been missing.

This guide walks through seven data loss prevention best practices that cut false positives, close blind spots, and make the DLP you already own work with DSPM.

1. Start with a specific DLP problem, not a vague mandate

Many DLP programs are born from a broad requirement like “prevent data loss” or “achieve compliance.” That sounds reasonable, but it’s too fuzzy to drive design decisions. If everything is “data loss,” every event looks important and tuning turns into guesswork. Instead, define one or two sharp, testable problems to solve in the next 90 days.

For example:

  • Reduce DLP false positives by 50% while maintaining coverage across email and collaboration tools.
  • Eliminate unknown PHI exposures in Microsoft 365 and Google Workspace before the next HIPAA audit.
  • Stop real customer data from leaking into lower environments and AI training pipelines.

Once you frame the goal concretely, a few things fall into place. You know what to measure (false-positive rate, blind-spot coverage, number of mis‑labeled data stores). You can see which parts are posture problems (where data lives, how it’s labeled, who can touch it) and which are pure enforcement. And you have a clear way to tell whether the program is actually improving, rather than just “having DLP turned on.” In short, give your DLP initiative a narrow, measurable purpose before you touch any rules.

2. Fix classification before you tune DLP rules

Almost every struggling DLP deployment eventually discovers the same truth: it doesn’t really have a DLP problem, it has a classification problem. Traditional DLP leans heavily on pattern matching and static dictionaries. In modern environments, that leads to constant mistakes:

  • Internal IDs or ticket numbers mistaken for card data or SSNs
  • Highly sensitive business documents missed because they don’t match canned patterns
  • Each product (endpoint DLP, email DLP, CASB) trying to re‑implement classification in its own silo

This is exactly the gap DSPM is designed to fill. A platform like Sentra DSPM continuously:

  • Discovers sensitive data at scale across cloud, SaaS, data warehouses, on‑prem stores, and AI pipelines, without copying it out of your environment
  • Classifies that data using multi‑signal, AI‑driven models that combine entity‑level signals (PII, PCI, PHI fields, secrets) with file‑level semantics (document type, business function, domain)
  • Labels assets consistently, for example, by auto‑applying Microsoft Purview Information Protection (MPIP) labels that downstream tools, including DLP, can consume

Once you trust the labels, DLP can stop trying to “guess” sensitivity from raw content and location. Policies get simpler and more stable because they key off well‑defined labels instead of brittle regular expressions.

Best practice: before you tweak another DLP rule, invest in getting classification right with DSPM, then let DLP enforce on the resulting labels.

3. Reduce DLP false positives with labels and context

“Reduce DLP false positives” is one of the most common reasons security teams revisit their DLP strategy. Most false positives come from two root causes:

  • Over‑broad content rules that match anything vaguely sensitive
  • Lack of business context: who the user is, which system they’re in, where the data is going, and whether that’s normal behavior

The first step is to move to label‑driven policies wherever possible. Instead of “block anything that looks like a credit card number,” write rules like “block sending files labeled PCI to personal email domains” or “quarantine emails with PHI labels sent outside approved partners.” DSPM plus accurate labeling makes that possible at scale.
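To make the shift from regex rules to label‑driven rules concrete, here is a minimal sketch. The label names, event fields, and personal‑domain list are illustrative assumptions, not Sentra or Purview policy syntax.

```python
# Illustrative label-driven DLP rule (hypothetical field and label names,
# not a real Sentra or Microsoft Purview policy API).

PERSONAL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}  # assumed org-maintained list

def evaluate_event(event: dict) -> str:
    """Decide an action from the file's label and destination, not from regex matches."""
    label = event.get("label")              # e.g. "PCI", "PHI" - applied upstream by DSPM
    dest_domain = event.get("dest_domain")  # e.g. "gmail.com"
    approved_partner = event.get("dest_is_approved_partner", False)

    if label == "PCI" and dest_domain in PERSONAL_DOMAINS:
        return "block"       # "block sending files labeled PCI to personal email domains"
    if label == "PHI" and not approved_partner:
        return "quarantine"  # "quarantine emails with PHI labels sent outside approved partners"
    return "allow"

# A PCI-labeled file headed to a personal mailbox gets blocked:
print(evaluate_event({"label": "PCI", "dest_domain": "gmail.com"}))  # -> "block"
```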

The second step is to bring in more context. A file labeled Confidential going to a known external auditor is very different from that same file going to a new personal Dropbox account at 2 a.m.

When you combine labels with:

  • Identity and role
  • Channel (email, web, SaaS, AI)
  • Destination and geography
  • Simple behavior analytics (volume, unusual time, unusual location)

you can reserve hard blocks and escalations for situations that actually look risky.

Finally, you need a real feedback loop. Let users override certain DLP prompts with a required justification and log “reported false positives.” Review those regularly with business owners. That feedback is invaluable for tightening rules where they truly matter and relaxing them where they are just creating friction. In practice, enforce on labels first, then refine with business context and user feedback, instead of trying to make regexes infinitely smarter.

4. Treat DSPM and DLP as a single system, not a “DSPM vs DLP” choice

If you search for “DSPM vs DLP,” you’ll find plenty of comparison articles and vendor takes. From the customer’s side, though, the most useful framing is not “which one?” but “what does each do, and how do they work together?”

At a high level:

  • DSPM focuses on data-at-rest intelligence: it shows what sensitive data you have, where it resides, who and what can access it, how it’s configured, and whether that posture is acceptable for your risk and compliance requirements.
  • DLP focuses on data-in-motion enforcement: it monitors data leaving (or moving within) the organization via email, endpoints, web, SaaS, and APIs, and decides what to block, encrypt, or just log based on policies.

When you connect them, you get a closed loop:

  1. DSPM discovers, classifies, and labels sensitive data consistently across cloud, SaaS, on‑prem, and AI.
  2. Data access governance uses that context to right‑size permissions and remediate over‑exposure.
  3. DLP and related controls enforce label‑driven policies at the edges, with far fewer false positives and blind spots.

DSPM doesn’t replace DLP; it makes DLP accurate, scalable, and cloud/AI‑ready. Takeaway: stop framing it as DSPM versus DLP. Your DLP will only be as good as the DSPM feeding it.

5. Bring SaaS, cloud, and AI into scope for DLP

Most older DLP programs were built around email and endpoints. But in cloud‑first organizations, the riskiest data flows now run through:

  • Cloud and object storage (S3, GCS, Azure Blob)
  • Data warehouses and lakes (Snowflake, BigQuery, Databricks)
  • SaaS platforms (M365, Google Workspace, Box, Salesforce, Slack, Teams)
  • AI systems (M365 Copilot, Gemini for GWS, Bedrock, custom RAG apps)

Trying to bolt classic inline DLP controls onto all of those surfaces is expensive and incomplete. You’ll still miss shadow data, lower environments that contain real customer data, and AI pipelines that consume sensitive content by design.

DSPM gives you a more scalable pattern:

  • Inventory and classify sensitive data where it sits across cloud, SaaS, and AI.
  • Use that intelligence to drive native controls: MPIP labels and Microsoft Purview DLP, CASB/SSE policies, Snowflake dynamic masking, IAM/CIEM, and AI guardrails.

For example, a healthcare organization might combine:

  • Sentra’s DSPM to discover PHI in Google Drive, M365, Salesforce, and Snowflake
  • Auto‑labeling of that PHI so Purview and DLP can enforce correctly
  • AI‑aware classification to govern which labeled data copilots and agents are allowed to see


See How Valenz Health Uses DSPM to Protect PHI Across AWS, Azure, and Modern Data Platforms

Similarly, the DLP for Google Workspace story shows how cloud‑native, DSPM‑powered classification is essential to make platform DLP effective for unstructured collaboration content. Best practice: treat SaaS, cloud, and AI as first‑class DLP surfaces, and use DSPM to make them visible and governable before you try to enforce.

6. Design DLP policies for real workflows, then harden them

Many DLP programs fail not because the tools are weak, but because the policies were designed for whiteboards, not for real users.

Very often:

  • The ruleset is too broad, with dozens of overlapping controls per channel
  • Business stakeholders had little input, so workflows break in production
  • There’s no staged rollout path; policies jump straight from “off” to “block”

A better pattern is to treat DLP policies as something you product‑manage. Start by expressing a very small set of core policies in business terms, independent of channel.

For example:

  • “Regulated data (PII, PCI, PHI) must not leave specific regions or approved partners.”
  • “Files labeled Highly Confidential must never be shared to personal email or cloud domains.”
  • “AI assistants and copilots may only access data labeled Internal or below.”

Then map those policies onto channels with graduated responses:

  • Log only (for simulation and tuning)
  • User prompts (“This file is labeled Confidential; are you sure?”)
  • Override with justification (captured for review)
  • Hard block + ticket for the riskiest conditions

Throughout, involve legal, compliance, HR, and business owners. If DLP events could lead to performance conversations or disciplinary action, you don’t want those stakeholders to be surprised by how the system behaves.
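To show how the graduated responses above can be captured as configuration rather than scattered per‑channel rules, here is a hypothetical policy table; the stage names, labels, and condition strings are assumptions for illustration, not any product’s policy syntax.

```python
# Hypothetical channel-agnostic policy table: business-level rules, each with a rollout stage.
# Stages mirror the graduated responses above:
# log_only -> prompt -> override_with_justification -> block.

POLICIES = [
    {"name": "regulated-data-egress",
     "labels": {"PII", "PCI", "PHI"},
     "condition": "destination outside approved regions or partners",
     "stage": "log_only"},                     # start in simulation while tuning
    {"name": "highly-confidential-sharing",
     "labels": {"Highly Confidential"},
     "condition": "destination is a personal email or cloud domain",
     "stage": "override_with_justification"},  # user can proceed, but the reason is captured
    {"name": "ai-assistant-access",
     "labels": {"Confidential", "Highly Confidential"},
     "condition": "requester is an AI assistant or copilot",
     "stage": "block"},                        # copilots may only access Internal or below
]

def stage_for(policy_name: str) -> str:
    """Look up the current enforcement stage for a named policy."""
    return next(p["stage"] for p in POLICIES if p["name"] == policy_name)

print(stage_for("regulated-data-egress"))  # -> "log_only"
```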

Ready to get started? Read: How to Build a Modern DLP Strategy That Actually Works: DSPM + Endpoint + Cloud DLP

Key idea: roll out label‑driven policies gently, let reality teach you where controls can be strict, and only then lock them down.

7. Measure DLP like a product, not a checkbox

If your goal is to “supercharge DLP so it performs better,” you need to know how it’s performing now, and how changes affect it. That means treating DLP like a product with KPIs, not a compliance box you either have or don’t.

High‑performing teams tend to track four categories:

  • Coverage: percentage of data stores under DSPM visibility; proportion of sensitive assets correctly labeled; number of major SaaS and cloud platforms within scope.
  • Quality: false positive and false negative rates by policy and channel; serious incidents discovered outside DLP that should have triggered it.
  • Operational impact: mean time to detect and respond to data‑loss incidents; analyst hours spent per week on DLP triage; number of issues auto‑remediated via workflows (auto‑labeling, auto‑revoking access, auto‑quarantining content).
  • Business alignment: frequency of stakeholder requests to disable or bypass policies; time to prepare for audits compared to prior years.

Sentra’s data security platform gives you much of this telemetry out of the box through its unified inventory, access graph, and integration hooks into SIEM/SOAR, IAM, DLP, SSE/CASB, and ITSM. Bottom line: you can’t fix what you can’t measure. Decide which DLP metrics matter to your organization and revisit them as you evolve your DSPM + DLP architecture.
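As one small illustration of product‑style measurement, the sketch below computes a false‑positive rate per policy from analyst dispositions; the alert format is a made‑up assumption, not a real export schema.

```python
# Toy KPI calculation: false-positive rate by policy, from analyst-dispositioned alerts.
from collections import defaultdict

alerts = [  # illustrative shape only
    {"policy": "pci-egress", "disposition": "false_positive"},
    {"policy": "pci-egress", "disposition": "true_positive"},
    {"policy": "phi-sharing", "disposition": "false_positive"},
    {"policy": "phi-sharing", "disposition": "false_positive"},
]

counts = defaultdict(lambda: {"fp": 0, "total": 0})
for alert in alerts:
    counts[alert["policy"]]["total"] += 1
    if alert["disposition"] == "false_positive":
        counts[alert["policy"]]["fp"] += 1

for policy, c in counts.items():
    print(f"{policy}: {c['fp'] / c['total']:.0%} false positives across {c['total']} alerts")
```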

What “Supercharge Your DLP” means in practice

When teams say “we need to fix our DLP,” they usually don’t mean “rip everything out.” They mean:

  • “We don’t trust the alerts we get.”
  • “We know there are blind spots in cloud, SaaS, and AI.”
  • “We’re tired of fighting with brittle rules that don’t reflect how the business actually works.”

Supercharging DLP in the cloud and AI era starts with data intelligence. That means:

  • Using DSPM to discover and classify sensitive data everywhere
  • Applying consistent labels that encode business meaning
  • Wiring those labels into the DLP and access controls you already own

From there, DLP can finally do what it was always meant to do: prevent real data loss, at scale, without paralyzing your organization or your AI initiatives. That’s the real promise behind “Supercharge Your DLP.” You don’t start over, you make the DLP you already have smarter, quieter where it should be, and louder where it counts.

Kristin Grimes & David Stuart | March 9, 2026 | 3 Min Read

Meet Sentra at RSAC 2026: AI Data Readiness, Continuous Compliance, and Modern DLP in Action

RSAC 2026 is shaping up to be one of the most important RSA Conferences to date, especially for security teams navigating AI adoption, Copilot readiness, and large-scale data governance. At RSA Conference 2026 in San Francisco, Sentra is bringing together security leaders from major enterprises across financial services and global consumer industries to discuss how modern enterprises are preparing their data for AI, strengthening governance, and rethinking DLP in an AI-driven world.

If you’re attending RSAC 2026, here’s where to find us, and why it matters.

CISO AI Copilot Readiness Roundtables at RSAC 2026

March 23–26 | W Hotel | Steps from Moscone

AI assistants like Microsoft Copilot and Google Gemini are transforming how employees access enterprise data. What once required manual searches across drives, mailboxes, and SaaS applications can now be surfaced instantly.

That shift is powerful, but it also forces CISOs to confront a difficult question: is our data actually AI-ready?

During RSAC 2026, Sentra is hosting closed-door CISO AI Copilot Readiness Roundtables, bringing together security leaders from major enterprises across financial services and global consumer industries. These sessions are intentionally intimate and designed for candid peer discussion rather than vendor presentations.

No slides. No marketing decks. Just real-world insights on what’s working and what isn’t as organizations operationalize AI securely. Register for a roundtable.

AI Data Readiness for 70+ PB: Lessons from a Leading Financial Platform at RSAC 2026

March 24 | 7:45 AM – 9:00 AM

Preparing data for AI at scale is not theoretical, especially when you're dealing with more than 70 petabytes of data.

In this RSAC 2026 session, a former Director of Product Security from a leading digital financial platform will share how their organization approached AI data readiness using Sentra. The session will explore how large financial institutions can gain visibility into massive data environments, reduce exposure risk, and enable Copilot and machine learning adoption without compromising governance.

If you're managing AI adoption in a complex, high-scale environment, this session offers practical lessons grounded in real-world enterprise execution. Register for the session.

Continuous Compliance with AI Visibility: Lessons from a Major Mortgage Institution at RSAC 2026

March 25 | 12:00 PM – 1:00 PM

For a $500B U.S. mortgage institution, compliance is not a one-time event; it’s a continuous obligation.

In this RSA Conference 2026 session, a CISO from one of the largest mortgage lenders in the United States will share how their organization uses Sentra to gain visibility into sensitive data, automate Jira masking workflows, and transform compliance from a reactive burden into a proactive advantage.

As regulatory expectations increase around AI systems and data governance, continuous compliance becomes a strategic capability rather than just an audit checkbox. Register for the session.

A Global Enterprise Blueprint for Modern DLP Compliance at RSAC 2026

Global enterprises face an even more complex challenge: governing data consistently across Azure, Snowflake, Microsoft 365, and Purview, while preparing for AI and Copilot integration. At RSAC 2026, data security leaders from one of the world’s largest consumer brands will share how they built a governance framework that integrates large data catalogs with modern DLP controls. The session explores how traditional policy-based DLP can evolve into a model that combines deep data intelligence with enforcement aligned to business context.

For organizations operating across regions and platforms, this blueprint offers a practical path forward. Register for the session.

Visit Sentra at Booth #N4607 at RSA Conference 2026

If you’re walking the floor at RSAC 2026, stop by Booth N4607 to explore how Sentra enables AI-ready data security.

Our team will be showcasing how organizations can:

  • Eliminate risk from AI agents and ML model adoption
  • Discover unknown sensitive data exposures
  • Add AI-powered intelligence to improve DLP precision

Rather than simply layering new policies on top of old systems, we’ll demonstrate how DSPM and DLP can work together in a unified architecture. Book a Demo at Booth N4607.

Executive Briefings at RSAC 2026

For security leaders looking to go deeper, Sentra is offering private briefings during RSA Conference 2026. These sessions provide the opportunity to discuss real-world data security challenges, proven best practices, and lessons learned from enterprise deployments.

Each discussion is tailored to your environment, whether your focus is AI readiness, exposure reduction, or continuous compliance. Schedule a Personal Briefing.

Special Events During RSAC 2026

The Women in Security Documentary

March 24 & 25 | AMC Metreon 16

Just steps from Moscone Center, join us for a special screening celebrating women redefining leadership in cybersecurity. The red carpet begins at 4:00 PM, with the screening starting at 4:45 PM.

Register Now

Sentra + Defensive Networks RSA Dinner

March 25 | 7:00 PM | The Tavern, San Francisco

We’re hosting an intimate, relationship-centered dinner for security leaders navigating today’s most pressing AI and data security challenges. Designed for meaningful dialogue and peer exchange, this event offers space for authentic conversation beyond the conference floor.

Why AI Data Security Defines RSAC 2026

The defining theme of RSA Conference 2026 is clear: AI has changed the security equation. AI systems do not create new data, but they dramatically increase its discoverability, accessibility, and movement. That reality exposes gaps between visibility and enforcement that many organizations have tolerated for years. To secure AI adoption, organizations need more than isolated tools. They need continuous data intelligence, context-aware enforcement, and feedback between the two. That is the architecture Sentra is bringing to RSAC 2026.

See You at RSA Conference 2026

If you’re attending RSAC 2026 in San Francisco, we’d love to connect.

📍 Booth N4607
📅 March 23–26, 2026
📍 Moscone Center

Join us to explore how AI-ready data security becomes practical, measurable, and operational, not just theoretical.

Mark Kiley | March 8, 2026 | 5 Min Read

Florida Information Protection Act (FIPA): 30‑Day Data Breach Deadline and Compliance Checklist

When I talk to CISOs and privacy leaders in Florida, the conversation usually starts the same way:

“We know we should be better prepared for a breach. But the 30‑day deadline under FIPA… that’s what keeps us up at night.”

I get it. On paper, Florida’s Information Protection Act of 2014 (FIPA), codified in Florida Statutes § 501.171, is just another notification law. In real life, that 30‑day requirement to notify affected Floridians (and sometimes the Attorney General and credit bureaus) collides with the messy reality of cloud data sprawl, legacy systems, and half‑documented SaaS.

In this post, I want to walk through FIPA the way I explain it in one‑on‑one conversations:

  • What FIPA actually says, in plain language
  • Why the 30‑day breach clock is so unforgiving
  • The patterns I see in Florida across healthcare, insurance, and travel/hospitality
  • How a data‑centric approach and DSPM specifically changes the game

I’m not your lawyer (you should definitely loop them in), but I am someone who spends a lot of time working with Florida‑based teams trying to operationalize this law.

What FIPA actually requires (without the legalese)

FIPA was passed to “better protect Floridians’ personal information” and to force businesses and government entities to do two big things:

  1. Take reasonable measures to protect personal information
  2. Notify people quickly when something goes wrong

The law lives in § 501.171 of the Florida Statutes. The core ideas are:

  • If you’re a covered entity (a business or government entity that “acquires, maintains, stores, or uses” personal information), you have to secure that data and follow FIPA’s rules when there’s a breach.
  • If you experience a breach involving Florida residents’ personal information, you usually have to notify them within 30 days of determining a breach occurred, with a narrow option for a 15‑day extension if you can show good cause to the Attorney General.
  • If 500 or more Florida residents are affected, you also have to notify the Florida Attorney General within that same 30‑day window.
  • If more than 1,000 residents are affected, you must notify the nationwide credit reporting agencies (think Equifax, Experian, TransUnion) as well.

On top of that, FIPA imposes:

  • Data security obligations: “reasonable measures” to protect and secure personal information in electronic form.
  • Disposal requirements: you must take reasonable measures to dispose of customer records containing personal information when no longer needed, by shredding, erasing, or otherwise making the data unreadable.
  • Civil penalties for failure to notify, up to $500,000 per breach depending on how long you delay.

The Florida Attorney General’s own guidance makes the intent clear: FIPA isn’t just about writing a nice policy; it’s about timely, meaningful transparency when Floridians’ data is at risk.

What “personal information” means under FIPA

One thing that trips teams up is how broad Florida’s definition of “personal information” really is.

Under § 501.171, personal information generally means a Florida resident’s first name or first initial and last name in combination with one or more of these data elements, when not encrypted:

  • Social Security number
  • Driver’s license, ID card, passport, military ID, or similar government identifier
  • Financial account number, credit or debit card number plus any required code, PIN, or password needed to access the account
  • Information about a person’s medical history, mental or physical condition, or medical treatment or diagnosis by a healthcare professional
  • Health insurance policy numbers, subscriber IDs, or unique identifiers used by a health insurer
  • A username or email address combined with a password or security question/answer that would permit access to an online account

So if you’re in Florida healthcare, insurance, banking, or even e‑commerce, FIPA isn’t just about raw SSNs. It picks up:

  • Patient portal credentials
  • Online banking logins
  • Health plan IDs
  • Medical billing data

And it doesn’t stop there: the University of Florida’s privacy office, for example, explicitly points out that FIPA’s definition covers both medical and financial identifiers, plus account credentials.

This matters, because it means you can’t treat “regulated data” as just PHI or PCI. FIPA cares about all of those elements.

What counts as a “breach” and when the 30‑day clock starts

FIPA defines a “breach of security” (or “breach”) as unauthorized access of data in electronic form containing personal information.

A few important nuances I always emphasize:

  • The access has to be unauthorized. Good‑faith access by an employee or agent for legitimate business purposes isn’t a breach as long as the data isn’t misused or further disclosed.
  • The data in question has to contain personal information as Florida defines it—so you need to know what’s actually stored where.
  • Encrypted data generally doesn’t trigger a breach unless the encryption keys or methods themselves are compromised.

The 30‑day notification deadline doesn’t start the moment your EDR fires an alert. It starts when you “determine that a breach has occurred” or have reason to believe it has.

And this is where reality bites:

  • To “determine that a breach occurred,” you have to scope the incident: what system, what data, which individuals, what type of information.
  • The Attorney General and courts will absolutely look at whether you dragged your feet on that determination. FIPA allows a short extension (15 days) if you show good cause in writing, but it doesn’t give you months to figure things out.

I’ve yet to meet a Florida CISO who feels like 30 days is generous. For most, it’s barely enough time if they don’t have good visibility going in.

What notice actually looks like in Florida

Once you’ve determined you have a FIPA breach, here’s what notice looks like in practice.

Notice to individuals

You must notify each affected Florida resident as expeditiously as possible and without unreasonable delay, but no later than 30 days after you determine a breach occurred (unless law enforcement asks you to delay, or you get that 15‑day AG extension).

The notice has to include at least:

  • The date or estimated date range of the breach
  • A description of the personal information that was accessed
  • Contact information for your organization so people can ask questions or get help

You can send notice by mail or email, depending on how you normally communicate with that person, with substitute notice (website + media) allowed when certain cost or scale thresholds are met.

Notice to the Attorney General

If 500 or more Florida residents are affected, you must also notify the Florida Attorney General’s Office within that same 30‑day window.

That notice must include:

  • A synopsis of the events
  • The number of affected residents
  • Any services you’re offering (like credit monitoring)
  • A copy of what you sent to consumers
  • Contact information for someone at your organization who can answer follow‑up questions

And if the AG asks, you also need to be able to provide things like police or incident reports, your internal breach policies, and the steps you’ve taken to fix the problem.

Notice to credit bureaus

If more than 1,000 individuals are notified, you must also notify all nationwide consumer reporting agencies about the timing, distribution, and content of the notice.
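To make the thresholds and deadlines above concrete, here is a small helper that, given a breach‑determination date and an affected‑resident count, summarizes which FIPA notices are likely in play. It is a simplification for illustration only, not legal advice; your counsel decides what actually applies.

```python
# Simplified FIPA notification planner (illustration only, not legal advice).
from datetime import date, timedelta

def fipa_notice_plan(determined_on: date, fl_residents_affected: int,
                     ag_extension_granted: bool = False) -> dict:
    """Summarize notice obligations under Fla. Stat. § 501.171 (simplified)."""
    deadline = determined_on + timedelta(days=30)  # notify individuals within 30 days
    if ag_extension_granted:                       # narrow 15-day extension for good cause
        deadline += timedelta(days=15)

    return {
        "individual_notice_deadline": deadline,
        "notify_attorney_general": fl_residents_affected >= 500,  # 500 or more residents
        "notify_credit_bureaus": fl_residents_affected > 1000,    # more than 1,000 notified
    }

print(fipa_notice_plan(date(2026, 3, 8), fl_residents_affected=73000))
```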

Why this is so hard for Florida organizations in 2026

Most of the teams I work with in Florida aren’t struggling because they don’t care about FIPA. They’re struggling because, when something bad happens, they can’t answer three basic questions fast enough:

  1. What data was actually in the affected systems?
    • Was it just emails and low‑risk metadata?
    • Or did that S3 bucket / SQL database / M365 site hold SSNs, health data, insurance IDs, or account credentials for Florida residents?
  2. How many Floridians are actually impacted?
    • Do we have 73 residents involved, or 73,000?
    • Can we reliably separate Florida addresses from the rest of the world for notification purposes?
  3. Was the data really “unsecured”?
    • Was it properly encrypted with keys stored separately?
    • Do we have logs that show whether an attacker actually exfiltrated data, or just probed the perimeter?

The 30‑day clock feels brutal because you’re trying to do all of that from a cold start. Digging through logs, reconstructing schemas, pulling sample rows, manually joining data to geography, arguing about what “personal information” means asset by asset.

I see this especially clearly in Florida’s core industries:

  • Healthcare teams trying to line up FIPA with HIPAA’s 60‑day breach rule and HHS obligations.
  • Insurers and health plans juggling FIPA alongside sector‑specific regulations and contractual obligations.
  • Travel and hospitality brands sitting on huge volumes of guest data: IDs, payment details, loyalty credentials, all of which can qualify as personal information under FIPA.

When you already have patchy visibility, the law’s timeline just exposes that weakness and creates crushing pressure for security, privacy, and GRC teams.

How a data‑centric approach and DSPM change the equation

This is why I keep coming back to data‑centric security and Data Security Posture Management (DSPM) in conversations about FIPA.

Instead of starting each incident from zero, a DSPM platform like Sentra gives you an always‑on, high‑accuracy answer to:

  • What sensitive data do we have?
  • Where does it live (down to specific buckets, tables, and documents)?
  • How sensitive is it, based on FIPA, HIPAA, PCI, and other regimes?
  • Who can actually access it, including users, service accounts, and AI tools?

That changes the FIPA conversation in a few ways:

  • Before an incident, you can see where Florida‑defined “personal information” has ended up—especially in cloud storage, data lakes, and collaboration tools—and fix obvious exposures (like unencrypted data or over‑permissioned access) long before someone breaks in.
  • During an incident, you’re not guessing which assets in the blast radius actually contain personal information; you already know. That lets you scope affected systems and residents much faster.
  • After an incident, you have a defensible record of what you did, why you did it, and how you’re preventing a repeat. This is exactly what the AG and auditors tend to ask for.

And because DSPM is agentless and API‑driven, you don’t have to slow your developers down with heavy‑weight deployments. It fits into the cloud‑native world most Florida organizations already live in.

If you’re curious how this looks in a highly regulated, fast‑moving environment, the SoFi DSPM story with Sentra is a good parallel, even though it’s financial services, not Florida healthcare or hospitality. They had to solve the same problems: data sprawl, regulatory pressure, and the need to move quickly without losing control.

A FIPA‑ready checklist I walk through with Florida teams

When I’m sitting with a Florida customer and FIPA is on the agenda, we usually work through some version of this:

  1. Do we really know where FIPA‑defined personal information lives across our environment?
    Not just in the EHR, policy admin system, or booking engine, but in data lakes, backup buckets, BI tools, and SaaS.

  2. Can we tell, with confidence, how many Florida residents are in those datasets?
    If an S3 bucket in us‑east‑1 is compromised, can we quickly identify the Florida slice?

  3. Do we have a FIPA‑aware incident playbook?
    One that explicitly calls for:
    • Pulling DSPM data to identify affected systems and data types
    • Running a structured risk assessment around “breach of security”
    • Triggering the right notices (residents; AG if ≥500; CRAs if >1,000) inside 30 days

  4. Are we shrinking our FIPA exposure over time?
    Are we cleaning up old copies, tightening access, and encrypting the right things?

When those answers are “yes,” the 30‑day clock feels a lot less like a panic button, and a lot more like a tight but manageable SLA.

Final thought (and a practical next step)

FIPA isn’t going away. If anything, the broader trend in Florida is toward more privacy and security scrutiny, not less.

My honest view, after a lot of conversations in this state, is that the only sustainable way to live with that 30‑day breach deadline is to stop treating data security as an abstract perimeter problem and start treating it as a continuous, data‑centric discipline. That’s exactly what Sentra’s DSPM platform is built for.

If this resonates and you’re looking at FIPA wondering how you’d really perform under a 30‑day clock, let’s make it concrete.

See how Sentra can show you exactly where FIPA‑defined personal information lives today, what’s exposed, and how to cut your breach‑response time from weeks to days. Request a Sentra demo.

Ron Reiter | March 8, 2026 | 3 Min Read

Why Audio and Video Files Are Your Next Big Risk

Every enterprise security team knows how to scan documents, spreadsheets, and databases for sensitive data. But what about the thousands of call recordings sitting on your file servers? The Zoom meetings archived in cloud storage? The voicemails accumulating in your communications infrastructure?

Audio and video files represent the fastest-growing category of unstructured data in the enterprise, and for most organizations, they remain completely invisible to data security programs. That gap is not just an oversight. It is a liability.

The Explosion of Audio and Video Data

The modern enterprise generates an extraordinary volume of audio and video content. Customer service centers record every call. Sales teams capture prospect conversations. HR departments archive interview recordings. Legal teams store depositions and witness interviews. And since the shift to hybrid work, nearly every meeting produces a recording.

This content is rich with sensitive information. A single customer service call might include a spoken Social Security number, a credit card number read aloud for verification, an account number, and a full name and address. A recorded executive meeting might contain confidential M&A discussions, unreleased financial results, or strategic plans that constitute material nonpublic information. A telehealth session captures protected health information that falls squarely under HIPAA.

Yet the vast majority of Data Loss Prevention (DLP) and Data Security Posture Management (DSPM) solutions simply skip these files. They were built for text. They parse documents, scan databases, and index emails, but when they encounter an MP4 or a WAV file, they move on. The result is a massive blind spot that grows larger every quarter.

How Sentra Scans Audio and Video at Scale

Sentra closes this gap with purpose-built audio and video scanning capabilities that bring the same depth of sensitive data discovery to media files as organizations already expect for documents and databases.

Broad Format Coverage

Sentra supports more than 20 audio formats — including MP3, WAV, FLAC, AAC, OGG, OPUS, WMA, M4A, AIFF, AMR, APE, AU, CAF, DTS, AC3, ALAC, PCM, WV, RA, SDP, and many more — along with 15+ video formats such as MP4, MKV, AVI, MOV, WebM, FLV, WMV, MPG/MPEG, 3GP/3G2, VOB, ASF, MXF, OGV, M4V, and F4V. This is not a narrow proof of concept limited to a handful of common codecs. It is production-grade coverage designed for the diversity of formats found in real enterprise environments.

ML-Powered Transcription and Extraction

At the core of Sentra's media scanning is a dedicated ML server that performs audio transcription using advanced machine learning models. For video files, Sentra automatically extracts the audio track and routes it through the same transcription pipeline. The transcribed text then flows into Sentra's full classification and extraction engine, where it is analyzed against hundreds of data classifiers to identify PII, financial data, healthcare information, credentials, and other sensitive content.

This entire process runs inside your cloud environment, using streaming-based processing that avoids sending media files to Sentra’s SaaS and minimizes any persistence of sensitive audio.
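The general pattern described here (extract the audio track, transcribe it, then classify the text) can be sketched with open‑source pieces. The snippet below uses ffmpeg and the open‑source Whisper model purely as stand‑ins; they are assumptions for illustration, not Sentra’s actual pipeline, models, or classifiers.

```python
# Illustrative audio/video scanning pipeline: extract audio -> transcribe -> classify text.
# Uses ffmpeg and the `openai-whisper` package as stand-ins; NOT Sentra's implementation.
import re
import subprocess
import whisper  # pip install openai-whisper

def extract_audio(video_path: str, wav_path: str = "audio.wav") -> str:
    """Pull the audio track out of a video container with ffmpeg."""
    subprocess.run(["ffmpeg", "-y", "-i", video_path, "-vn", "-ar", "16000", "-ac", "1", wav_path],
                   check=True)
    return wav_path

def classify(text: str) -> list[str]:
    """Tiny stand-in for a real classification engine: flag two obvious spoken-data patterns."""
    findings = []
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", text):
        findings.append("possible SSN")
    if re.search(r"\b(?:\d[ -]?){13,16}\b", text):
        findings.append("possible payment card number")
    return findings

model = whisper.load_model("base")  # general-purpose speech-to-text model
transcript = model.transcribe(extract_audio("support_call.mp4"))["text"]
print(classify(transcript))
```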

Where This Matters Most

Financial Services

Regulatory requirements in financial services make audio scanning not just useful, but essential. MiFID II mandates the recording and monitoring of communications related to client orders, including voice calls. Dodd-Frank imposes similar requirements on swap dealers and major swap participants. SEC and FINRA recordkeeping rules require broker-dealers to retain and supervise communications, and those rules have expanded to cover a widening range of channels.

Trading floor recordings, client advisory calls, and internal communications all potentially contain material nonpublic information, account details, and transaction data. Without the ability to scan this content, compliance teams are operating with an incomplete picture of where sensitive data lives.

Healthcare

Telehealth has moved from a pandemic stopgap to a permanent fixture of care delivery. Every virtual appointment generates a recording that may contain diagnoses, treatment plans, medication names, patient identifiers, and insurance details — all of which constitute protected health information under HIPAA. Healthcare organizations that scan their document repositories but ignore their telehealth archives are leaving a significant compliance gap unaddressed.

Legal

Law firms and corporate legal departments handle some of the most sensitive information in any organization. Deposition recordings, witness interviews, settlement discussions, and privileged attorney–client conversations are routinely captured as audio or video files. A single misplaced recording can constitute a privilege waiver or a data breach. Knowing exactly what sensitive content these files contain is a prerequisite for proper data governance.

Customer Service and Sales

Contact centers are among the largest producers of audio data in any enterprise. Every recorded call is a potential repository of customer PII - names, addresses, phone numbers, account numbers, and payment card data spoken aloud during verification procedures. Organizations subject to PCI DSS have a particular obligation to understand where cardholder data exists, and that includes call recordings where a customer reads their card number to an agent.

Corporate Communications

The post-pandemic workplace runs on recorded meetings. Zoom, Microsoft Teams, and Google Meet archives grow continuously, containing everything from routine standups to board-level strategy sessions. These recordings may capture discussions about personnel matters, financial performance, product roadmaps, and partnership negotiations. They are a rich and largely unmonitored source of sensitive data exposure.

Closing the Last Major Gap in Data Discovery

Most organizations have invested heavily in scanning their structured and semi-structured data stores. They have cataloged their databases, indexed their document repositories, and classified their cloud storage. But the audio and video content accumulating across their infrastructure remains a blind spot - not because it is unimportant, but because the tooling to scan it simply did not exist at enterprise scale.

Sentra changes that equation. By extending the same rigorous data discovery and classification capabilities to dozens of audio and video formats, with ML-powered transcription running inside your cloud environment, Sentra enables security and compliance teams to achieve genuine visibility into their complete data estate. The sensitive data in your call recordings, meeting archives, and video files is not going away. If anything, the volume is accelerating. The question is whether your data security program can see it.

David Stuart | March 7, 2026 | 4 Min Read

Microsoft Copilot Chat Incident: A Wake-Up Call for AI Assistant Security in Microsoft 365

The recent Microsoft Copilot Chat incident, in which enterprise users reportedly saw AI-generated summaries that included confidential content from Drafts and Sent Items despite sensitivity labels and DLP policies, has reignited a critical conversation about AI assistant security.

Microsoft clarified that Copilot did not bypass underlying access controls. But that explanation only addresses part of the problem. The real issue isn’t whether Microsoft Copilot broke security controls. It's that Copilot inherits user permissions, and can apply its extensive abilities to uncover data the user may have long forgotten (or never properly secured in the first place).

Copilot didn’t create new risks; it surfaced existing exposure - instantly, at scale, and in a way that made it visible. For organizations deploying Microsoft Copilot, that distinction matters.

Why the Microsoft Copilot Incident Matters More Than It Appears

Microsoft Copilot operates within the permissions of the signed-in user. On paper, that sounds safe. In reality, it means Copilot can access everything the user can access - across years of accumulated data.

In a typical Microsoft 365 environment, that includes:

  • Emails stretching back years
  • Linked SharePoint Online documents
  • OneDrive folders shared broadly across teams
  • External guest-accessible sites
  • Archived projects no one has reviewed in years

When Copilot summarizes a mailbox, it can follow embedded links into SharePoint and OneDrive. If those linked files contain overshared financials, HR investigations, contracts, or regulated data, Copilot can surface insights from them in seconds.

Previously, this data exposure existed quietly in the background. AI assistants remove friction:

  • No need to manually search multiple systems
  • No need to remember file locations
  • No need to understand organizational silos

A single natural-language prompt can traverse it all.

That is the shift. And that is the risk.

AI Assistants Change the Data Risk Model

Traditional enterprise security assumes that risk is constrained by human effort. Data may technically be accessible, but if it requires time, institutional knowledge, or manual searching, exposure is limited.

AI assistants like Microsoft Copilot eliminate those barriers.

Instead of asking, “Who has access to this file?” organizations must now ask:

What can an AI assistant synthesize from everything a user can access?

This is a fundamentally different security model.

The Microsoft Copilot Chat incident demonstrated that even when sensitivity labels and DLP policies are in place, unexpected AI-generated outputs can undermine confidence. The concern is not only regulatory exposure; it’s also reputational, operational, and a matter of executive trust in AI initiatives.

Why Sensitivity Labels and DLP Are Not Sufficient for Copilot Security

Many organizations rely on Microsoft Purview, sensitivity labels, and Data Loss Prevention (DLP) policies to control how information is handled in Microsoft 365.

Those tools are essential, but they are not enough on their own.

In real-world environments:

  • Labels are inconsistently applied
  • Legacy data predates modern classification policies
  • SharePoint sites remain broadly accessible long after projects end
  • OneDrive folders accumulate stale and redundant files
  • Linked documents inherit exposure from misconfigured parent sites

AI assistants operate on access reality, not policy intention. If sensitive data is accessible (even unintentionally), Copilot can surface it. The Copilot Chat incident did not reveal a failure of AI. It revealed a failure of data posture alignment.

Microsoft Copilot Requires AI Data Readiness

Before enabling Copilot broadly across Microsoft 365, organizations need what can be described as AI Data Readiness.

AI Data Readiness means achieving continuous visibility into:

  • Where sensitive data lives
  • How it is shared internally and externally
  • Which SharePoint and OneDrive assets are overshared
  • Whether classification matches actual content
  • What historical data remains unnecessarily accessible

Without this foundation, Copilot becomes a force multiplier for hidden exposure.

With it, Copilot becomes a productivity accelerator.

DSPM: The Missing Layer in Secure Microsoft Copilot Deployment

Data Security Posture Management (DSPM) provides the continuous, data-centric visibility required for secure AI adoption.

Rather than focusing solely on permissions or labels, DSPM answers deeper questions:

  • What sensitive and regulated data exists across Microsoft 365?
  • Where is it exposed?
  • What is its purpose? 
  • Who can access it?
  • How does it move?
  • Is it properly classified and governed?

Sentra’s DSPM-driven approach continuously discovers and classifies sensitive data across SharePoint Online, OneDrive, cloud storage, and SaaS platforms. Using AI-enhanced classification, it differentiates routine collaboration documents from high-risk assets such as HR investigations, financial statements, intellectual property, and regulated PII or PHI.

This creates a unified, context-rich map of enterprise data exposure, the exact context Copilot relies on when generating responses.

From Visibility to Remediation

Once visibility exists, security teams can act with precision.

Instead of broadly restricting Copilot access, which reduces productivity, organizations can surgically reduce risk by:

  • Identifying overexposed SharePoint sites containing sensitive data
  • Detecting OneDrive folders shared with large groups or external guests
  • Removing stale, redundant, and “ghost” data
  • Reconciling missing or misaligned sensitivity labels
  • Aligning MPIP and DLP controls with actual content reality

The result is not AI avoidance. It is controlled AI expansion.

The Strategic Shift: Treat Copilot Security as a Data Problem

The Microsoft Copilot Chat incident should not trigger panic. It should trigger maturity.

AI assistants reflect the state of your data. If your Microsoft 365 environment contains overshared, misclassified, or stale sensitive information, AI will surface it.

Organizations that succeed with Microsoft Copilot will be those that:

  • Audit their Microsoft 365 data exposure continuously
  • Reduce unnecessary access before enabling AI at scale
  • Align labels, policies, and actual content
  • Limit AI blast radius through data posture improvements
  • Treat AI adoption as a data governance transformation

The conversation should move from “Is Copilot safe?” to:

Is our data posture ready for Copilot?

When DSPM underpins AI adoption, Copilot shifts from potential liability to competitive advantage.

Final Thought: AI Assistants Don’t Create Risk - They Reveal It

The Microsoft Copilot incident is not an isolated anomaly. It is an early indicator of how AI assistants will reshape enterprise security assumptions. Copilot can only summarize what users already have access to. If access is overly broad, outdated, or misconfigured, AI will expose that reality faster than any audit ever could.

Organizations that invest in AI Data Readiness today will not only prevent future incidents; they will also accelerate secure AI transformation across Microsoft 365.

Alejandro Hernández | March 6, 2026 | 4 Min Read

From Observing to Operating: How Sentra's MCP Server Turns DSPM Into an AI-Driven Security Operations Platform

DSPM Has a Labor Problem

Every security team knows the cycle: an alert fires, you open a dashboard, click through four screens to understand the context, pivot to a second tool to check who has access, cross-reference a spreadsheet to determine the data's sensitivity, then manually update the alert status. Multiply that by dozens of alerts a day, and your team's most experienced engineers spend more time navigating tools than actually improving security posture.

The data security industry invested heavily in visibility. We can tell you where your PII lives, which buckets are public, and how many identities can reach your crown jewels. But visibility without action is just a more sophisticated way to worry. The gap between seeing a problem and resolving it remains filled with manual work, context switching, and tribal knowledge locked in senior engineers' heads.

What if an AI agent could do the navigation, the correlation, and the remediation for you, and you could just tell it what you need in plain English?

What Is MCP, and Why Should Security Teams Care?

The Model Context Protocol (MCP) is an open standard that connects AI assistants like Claude to external tools and data sources. Think of it as a universal adapter: instead of building custom integrations for every AI workflow, MCP provides a standardized way for AI agents to discover and call tools, read data, and execute operations.

For security teams, MCP means you can interact with your entire security platform through natural language. No more memorizing API endpoints, constructing filter syntax, or building one-off scripts. You describe what you need, and the AI agent chains together the right API calls to deliver it.
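For readers who have not seen MCP code, here is a minimal tool definition using the FastMCP helper from the official MCP Python SDK; the `get_open_alerts` tool and the API endpoint it calls are hypothetical placeholders, not part of Sentra’s published server.

```python
# Minimal MCP server sketch using FastMCP from the official MCP Python SDK.
# The tool body and API endpoint below are hypothetical placeholders.
import os
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sentra-demo")

@mcp.tool()
def get_open_alerts(severity: str = "high") -> list[dict]:
    """Return open alerts at the given severity from a (hypothetical) data security API."""
    resp = httpx.get(
        "https://api.example-dspm.local/v1/alerts",  # placeholder URL
        params={"status": "open", "severity": severity},
        headers={"Authorization": f"Bearer {os.environ['SENTRA_API_KEY']}"},
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so an AI client can discover and call the tool
```

An AI assistant connected to this server discovers `get_open_alerts` automatically and can call it in response to a plain-English request such as "show me open high-severity alerts."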

But here's the critical distinction: not all MCP servers are created equal.

Some MCP implementations expose a handful of read-only catalog queries that are useful for asking "what data do I have?" but powerless when you need to actually do something about what you find. Read-only MCP servers give you a conversational interface to a dashboard. That's a UX improvement, not a paradigm shift.

Sentra's MCP server is fundamentally different.

What Sentra's MCP Server Actually Does

Sentra's MCP server exposes 130+ tools across 13+ security domains, covering not just queries but write operations, composite investigations, and guided workflows. It's not just a chatbot layer on top of a dashboard. It's a full security operations interface.

| Capability | Read-Only MCP Servers | Sentra MCP Server |
| --- | --- | --- |
| Data catalog queries | Yes | Yes |
| Alert/threat investigation | No | Yes — full triage chains |
| Write operations | No | 11 tools across 6 safety tiers |
| Composite investigation tools | No | Yes — multi-step in one call |
| Guided workflow prompts | No | 5 pre-built security workflows |
| Identity & access analysis | No | Full graph traversal |
| Compliance audit prep | No | Framework-level readiness |
| Policy management | No | Create, enable, disable policies |
| Scan triggering | No | On-demand store and asset scans |
| DSAR processing | No | End-to-end request tracking |
| AI asset risk assessment | No | Dedicated AI/ML asset tools |

The difference is the gap between observing and operating. Sentra's MCP server closes the loop from detection to response.

Real Workflow: One Prompt, One Complete Policy Audit

Here's a real prompt a security engineer used during a policy noise reduction exercise:

"Audit all enabled security policies. For each policy, show me how many open alerts it generates and its severity. Identify policies that generate more than 50 low-severity alerts, those are candidates for tuning. For the noisiest policy, show me a sample violated assets so I can determine if it's misconfigured. Then disable that policy and resolve its existing alerts."

Behind the scenes, the MCP server chains 6+ tools to fulfill this request:

  1. `policies_get_all` -- Retrieves all enabled policies with severity metadata
  2. `policies_get_policy_incidents_count` -- Gets open alert counts per policy
  3. `alerts_get_all_external` -- Fetches alerts filtered to the noisiest policy
  4. `alerts_get_violated_store_data_assets_by_alert` -- Shows sample violated assets for review
  5. `policy_change_status` -- Disables the misconfigured policy (write operation)
  6. `alert_transition` -- Resolves existing alerts with reason "false_positive" (write operation)

No script. No runbook. No context switching between tabs. A single natural language prompt drove an end-to-end audit-to-remediation workflow that would typically take an engineer 30-60 minutes of manual work.
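If you scripted the same chain yourself, it might look like the sketch below. `call_tool(name, ...)` is a hypothetical stand‑in for an MCP client invocation, the tool names come from the list above, and the response shapes are assumed for illustration.

```python
# Hypothetical script mirroring the agent's tool chain; `call_tool` stands in for an
# MCP client call, and response shapes are assumed for illustration.

def audit_noisy_policies(call_tool) -> None:
    policies = call_tool("policies_get_all", enabled=True)
    open_counts = {p["id"]: call_tool("policies_get_policy_incidents_count", policy_id=p["id"])
                   for p in policies}

    # Tuning candidates: low-severity policies generating more than 50 open alerts.
    noisy = [p for p in policies if p["severity"] == "low" and open_counts[p["id"]] > 50]
    noisiest = max(noisy, key=lambda p: open_counts[p["id"]])

    alerts = call_tool("alerts_get_all_external", policy_id=noisiest["id"], status="open")
    sample = call_tool("alerts_get_violated_store_data_assets_by_alert", alert_id=alerts[0]["id"])
    print(sample)  # a human reviews the sample before any write operation runs

    call_tool("policy_change_status", policy_id=noisiest["id"], enabled=False)   # write op
    for alert in alerts:
        call_tool("alert_transition", alert_id=alert["id"], status="resolved",
                  reason="false_positive")                                        # write op
```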

This is what "from observing to operating" looks like in practice.

6 Ready-to-Use Prompts for Data Security Posture Management

The policy audit above is just one example. Sentra's MCP server supports a progression from simple queries to complex, multi-tool operations:

Quick status check: "Show me open alerts by severity and our current security rating." Two tools fire, you get a snapshot in seconds.

Compliance audit preparation: "Prepare HIPAA compliance evidence: show all controls, our compliance score, open violations, and data classification coverage for PHI." The compliance_audit_prep workflow prompt chains 6+ tools into an audit-ready report.

Alert triage and resolution: "Investigate alert abc-123: what data is at risk, who has access, is this recurring? If it's a false positive, resolve it with a comment explaining why." The investigate_alert composite tool gathers details, blast radius, and history in one call. Then write operations close the loop.

Identity access review: "Show me all external identities with access to high-sensitivity stores. For the riskiest one, map the full access graph from identity to roles to stores to assets." Identity search, graph traversal, and sensitivity analysis, all through conversation.

Board-ready security briefing: "Prepare my quarterly board briefing: posture trends for 90 days, compliance status by framework, open alerts by severity, security rating trend, and top 5 recommendations." The security_posture_summary composite tool pulls dashboard, alerts, ratings, compliance, risk distribution, and sensitivity data in one call.

AI data risk assessment: "Show me all AI-related assets, what sensitive data they contain, who has access to training data, and whether there are security alerts on those stores." Dedicated AI/ML asset tools surface machine learning risks that traditional DSPM tools miss.

Enterprise-Grade Architecture

Conversational doesn't mean casual. Sentra's MCP server is built for production security operations:

  • Connection pooling via a shared httpx.AsyncClient with keep-alive for sustained performance
  • Automatic retry with exponential backoff for rate limits (429) and server errors (5xx)
  • SSRF protection that blocks requests to private/metadata IP ranges
  • 6-tier write operation hierarchy -- from additive-only comments (Tier 1) up to destructive operations requiring explicit safety confirmation (Tier 6)
  • Feature flag control -- all write operations gated by SENTRA_ENABLE_WRITE_OPS, disabled with a single environment variable
  • UUID validation on all identifier parameters before HTTP calls are made
  • Error sanitization that strips internal details (hostnames, file paths) from client-facing responses
  • TLS-native deployment with certificate configuration for direct HTTPS serving
  • API key authentication on the MCP endpoint itself, separate from Sentra API credentials
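Two of the guardrails above, retry with exponential backoff and feature‑flag gating of write operations, are patterns worth seeing in miniature. The sketch below is illustrative only; it is not Sentra’s source, and the API URL is a placeholder.

```python
# Illustrative guardrails: exponential backoff on 429/5xx and env-var gating of write ops.
import asyncio
import os
import httpx

WRITE_OPS_ENABLED = os.getenv("SENTRA_ENABLE_WRITE_OPS", "false").lower() == "true"

async def request_with_backoff(client: httpx.AsyncClient, method: str, url: str,
                               retries: int = 4, **kwargs) -> httpx.Response:
    """Retry rate-limited (429) and server-error (5xx) responses with exponential backoff."""
    for attempt in range(retries):
        resp = await client.request(method, url, **kwargs)
        if resp.status_code != 429 and resp.status_code < 500:
            return resp
        await asyncio.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
    return resp

async def disable_policy(client: httpx.AsyncClient, policy_id: str) -> None:
    """Example write operation, refused unless the feature flag is explicitly enabled."""
    if not WRITE_OPS_ENABLED:
        raise PermissionError("write operations disabled; set SENTRA_ENABLE_WRITE_OPS=true")
    resp = await request_with_backoff(
        client, "POST", f"https://api.example-dspm.local/v1/policies/{policy_id}/disable")
    resp.raise_for_status()
```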

Getting Started

Three deployment paths, from local development to production:

Claude Desktop (local, stdio): Add Sentra's MCP server to your Claude Desktop configuration. Point it at your Sentra API key, and start asking questions. Zero infrastructure required.

Claude Code / Cursor (developer workflow): Run the MCP server alongside your IDE. Security engineers get conversational access to Sentra while they work, without switching contexts.

Docker (production, HTTP transport): Deploy as a containerized service with TLS, API key authentication, and CORS controls. Multiple AI agents or team members can connect to a single shared instance.

All three paths expose the same 130+ tools, 11 write operations, 5 guided workflows, and 2 composite investigation tools.

The Future of Data Security Operations Is Conversational

The security industry spent the last decade building visibility. We can see everything. The challenge now is turning that visibility into action at the speed modern environments demand. Sentra's MCP server represents a fundamental shift: from dashboards you read to agents that operate. From runbooks that describe steps to AI that executes them. From alert fatigue to conversational triage and resolution.

The tools are real. The write operations are real. The workflows are real. And they're available today.

Investigate, triage, and resolve - not just query. That's the difference between an MCP server that observes and one that operates.

Sentra's MCP server is available now for existing customers. Schedule a Demo to see how it works.

<blogcta-big>

Read More
Ron Reiter
Ron Reiter
March 5, 2026
4
Min Read

Sentra Can Now Parse AutoCAD DWG Files - Here’s Why That Matters for Data Security

Sentra Can Now Parse AutoCAD DWG Files - Here’s Why That Matters for Data Security

Walk into any aerospace, defense, semiconductor or industrial design organization and you’ll find one file format everywhere: AutoCAD’s DWG. These drawings are the blueprints for missiles, fabs, turbines, containment domes and critical infrastructure. They’re also one of the biggest blind spots in most data security programs. Traditional DSPM and DLP tools see a DWG as a big opaque blob: “binary, probably sensitive, treat with caution.” That’s no longer good enough if you are operating under ITAR, EAR or handling multi‑billion‑dollar IP assets.

This is why we built native DWG parsing into Sentra. We now read AutoCAD DWG files directly, with no AutoCAD license, no intermediate conversion and no third‑party libraries. For the first time, security and compliance teams can discover, classify and monitor the sensitive data hiding inside CAD drawings across cloud storage, file shares and engineering data lakes.

Why DWG Has Been Invisible to Security

As a CTO I’ve sat in many reviews where teams are confident they know where PII lives and where source code lives. When I ask, “What about your CAD drawings?” the room usually goes quiet.

DWG is a proprietary binary format, engineered for performance and fidelity, not for generic content inspection. Security tools that rely on text extraction or simple file signatures can’t see anything meaningful inside it. On top of that, CAD is often considered “engineering’s problem.” Drawings live on legacy engineering servers, PLM systems, or “temporary” project shares that never get decommissioned. When those repositories are lifted and shifted to S3, Azure Blob or SharePoint, security inherits terabytes of DWG files with almost no insight into what they actually contain.

Regulations add more pressure. ITAR and EAR talk about “technical data,” but the tooling most teams use for export‑control compliance was built around PDFs and Office documents, not native CAD formats. The result is predictable: either every DWG is treated as maximally toxic—which paralyzes engineering—or they’re collectively ignored, which is worse.

We wanted to break that stalemate by making DWG as transparent to security teams as a Word document.

What’s Really Inside a DWG File?

A DWG file is far more than geometry. It’s a container for rich metadata, text and structural elements that describe both the design and its context.

Sentra’s parser now extracts several key categories of information:

  • Document properties such as author, “last saved by,” creation and modification timestamps, total editing time and revision counters. This tells you who touched a drawing and when.
  • Title block attributes where engineering teams encode drawing numbers, project IDs, revision codes, department names, approvers and—crucially—export control markings like ECCN codes and ITAR statements.
  • Text content from notes, MText blocks, labels and callouts. This is where you see manufacturing tolerances, material specifications, part numbers and phrases like “COMPANY CONFIDENTIAL” or “EXPORT CONTROLLED.”
  • Layer names, which engineers often use to signal sensitivity or ownership:
    ITAR-CONTROLLED, PROPRIETARY, CLIENT-CONFIDENTIAL, CLASSIFIED-GEOMETRY, and so on.
  • Application metadata such as the AutoCAD version, build and locale that created the file. That can help tie drawings back to specific offices or workstation groups.
  • File dependencies and paths including fonts, external references (xrefs), plot configurations and linked drawings. These paths routinely expose server names, share names, usernames and department structures.

If you’re an attacker, that metadata is a reconnaissance goldmine. If you’re running security for a regulated engineering environment, it’s exactly the context you’ve been missing.

Why DWG Data Is Exceptionally Sensitive

Literal blueprints of your IP

In many organizations, DWGs are the most literal representation of intellectual property that exists. They encode the shape of a missile fin, the trace layout of a secure ASIC, or the reinforcement pattern of a containment vessel. A leaked drawing isn’t a description of the product—it is the product. Unlike a slide deck or a spec sheet, a DWG often contains everything a capable adversary needs to replicate or attack the system. That makes these files high‑value targets for nation‑state actors and sophisticated competitors.

Export control and regulatory risk

For companies operating under ITAR and EAR, DWGs are typically where export‑controlled “technical data” actually lives.

The ECCN code or ITAR statement is rarely in the filename or the folder name. It’s embedded in the title block attributes and in annotations on the page. A single file with those markings sitting in an uncontrolled S3 bucket, or shared via a public link, can trigger a regulatory violation with multi‑million‑dollar consequences and long‑term impact on your ability to win future contracts.

Because Sentra parses DWGs directly, we can programmatically answer questions like:

  • “Show me every DWG in our cloud environment that contains an ITAR statement or ECCN code.”
  • “Where exactly are those files stored, and who can access them?”

That’s impossible to do reliably if you treat DWGs as opaque binary blobs.
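To give a flavor of how such a query becomes possible once DWG contents are extracted, consider a sketch that checks parsed title-block attributes, text entities, and layer names for export-control markings. The data structure, field names, and sample paths here are hypothetical illustrations, not Sentra's schema or output.

```python
import re

# Illustrative output of a DWG parser: extracted metadata and text for one drawing.
drawing = {
    "path": "s3://engineering-archive/F35-wing/bracket_rev_C.dwg",
    "title_block": {"DRAWING_NO": "48-221-007", "ECCN": "9E515", "APPROVED_BY": "jsmith"},
    "text_entities": [
        "EXPORT CONTROLLED - ITAR",
        "MATERIAL: TI-6AL-4V, TOLERANCE +/- 0.005",
    ],
    "layers": ["DIMENSIONS", "ITAR-CONTROLLED", "NOTES"],
}

# Patterns that suggest export-controlled or proprietary content.
MARKING_PATTERNS = [
    re.compile(r"\bITAR\b", re.IGNORECASE),
    re.compile(r"\bEAR99?\b", re.IGNORECASE),
    re.compile(r"\bECCN\b|\b\d[A-E]\d{3}\b"),            # e.g. 9E515
    re.compile(r"EXPORT\s+CONTROLLED", re.IGNORECASE),
    re.compile(r"COMPANY\s+CONFIDENTIAL", re.IGNORECASE),
]

def find_markings(dwg: dict) -> list[str]:
    """Return every export-control or proprietary marking found in the drawing."""
    haystacks = (
        list(dwg["title_block"].keys())
        + list(dwg["title_block"].values())
        + dwg["text_entities"]
        + dwg["layers"]
    )
    hits = []
    for text in haystacks:
        for pattern in MARKING_PATTERNS:
            match = pattern.search(str(text))
            if match:
                hits.append(match.group(0))
    return sorted(set(hits))

print(drawing["path"], "->", find_markings(drawing))
```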

Supply‑chain exposure

Drawings don’t stay within a single company. They flow between primes, subcontractors, design houses, manufacturers and integration partners. Each stop along that chain leaves traces: author names, revision histories, local file paths, department identifiers. When you ingest a partner’s DWG, you’re often ingesting their sensitive operational metadata as well as your own IP. That creates both an obligation to protect it and an opportunity for attackers to learn about everyone involved in your programs.

People and infrastructure reconnaissance

From an attacker’s perspective, seemingly benign fields like “Last saved by” or dependency paths like \\ENGSERVER03\Projects\F35-Wing\Stress\ are a treasure map. They reveal usernames, project names, server names and network topology.

From a defender’s perspective, that same metadata is invaluable for incident response and insider‑risk investigations—if you can see it.

How Security Teams Are Already Using DWG Parsing

Let me make this more concrete with a few patterns we’re seeing in early deployments.

Discovering export‑controlled drawings in cloud storage

An aerospace manufacturer had migrated years of engineering history from on‑premises file servers into S3 and Azure Blob. They knew “there’s a lot of CAD in there,” but they couldn’t distinguish a generic fixture drawing from a file that actually carried ITAR or EAR restrictions.

With Sentra scanning those buckets, they can now automatically identify DWGs whose title blocks or annotations contain ITAR statements, ECCN codes or proprietary markings. That means they can focus remediation and access reviews on the subset of drawings that are actually regulated, instead of blanket‑treating every DWG the same way.

Engineers get fewer unnecessary reviews. Security gets a precise map of where controlled technical data lives in cloud storage.

Monitoring technical data exfiltration via collaboration platforms

Another customer, an energy company, shares drawings with EPC contractors through SharePoint, OneDrive and Box. Hundreds of DWGs move every week. Previously, they had no idea whether the files shared externally described generic mounting brackets or detailed layouts of protected infrastructure.

By parsing DWGs inline as they pass through those platforms, Sentra can now flag drawings whose contents match sensitive keywords, export‑control markings, or proprietary statements. Security teams see alerts like “DWG with ITAR language shared with external account” rather than “some DWG went out,” which is what most tools can tell you today.

Building a defensible ITAR audit trail

A defense contractor we work with has to periodically prove to auditors that all ITAR‑controlled technical data is stored and processed only in approved regions and systems. Historically they relied on manual attestations from engineering teams and small sample reviews.

Now they scan every DWG in scope with Sentra. We generate an inventory of all drawings that contain ITAR or EAR markings, map each file to its exact storage location and access control set, and surface any out‑of‑policy placements. When an auditor asks “Show us where your ITAR technical data is,” they can answer with data, not with a slide deck.

How Our DWG Parser Works

From an engineering standpoint, we wanted a solution that was:

  • Native: no dependence on AutoCAD or closed‑source SDKs.
  • Wide‑ranging: support for virtually all real‑world DWG files.
  • Predictable: deterministic behavior at petabyte scale.

We implemented a parser that reads the binary DWG format directly, supporting AutoCAD versions from 2000 through 2024 (formats AC1015 through AC1032). There’s no AutoCAD installation required anywhere in the environment. We don’t convert files to DXF, PDF or images. We don’t send data to external services.

All parsing happens where Sentra runs—inside the customer’s cloud accounts or VPCs—so sensitive technical data never leaves their control.
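To give a flavor of what “reading the format directly” starts with: the first six bytes of a DWG file are an ASCII tag identifying the format release. A minimal stand-alone check (illustrative only; Sentra's parser goes far deeper than the header) looks like this:

```python
# The first six bytes of a DWG file are an ASCII tag identifying the format release.
DWG_VERSIONS = {
    b"AC1015": "AutoCAD 2000-2002",
    b"AC1018": "AutoCAD 2004-2006",
    b"AC1021": "AutoCAD 2007-2009",
    b"AC1024": "AutoCAD 2010-2012",
    b"AC1027": "AutoCAD 2013-2017",
    b"AC1032": "AutoCAD 2018-2024",
}

def dwg_version(path: str) -> str:
    with open(path, "rb") as f:
        tag = f.read(6)
    return DWG_VERSIONS.get(tag, f"unknown or unsupported DWG tag: {tag!r}")

print(dwg_version("bracket_rev_C.dwg"))
```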

Closing the Gap Between “Stored” and “Understood”

DWG support is part of a broader direction for Sentra. As more specialized workloads move to the cloud (EDA, PLM, simulation, scientific computing), the number of proprietary and domain‑specific file formats in your environment explodes.

Most security tools weren’t built for that world. They know how to read emails and office documents. They can fingerprint code repositories. But they look at a DWG, a GDSII, or a proprietary simulation output and shrug.

The reality is simple:

You cannot secure data you don’t understand.

Understanding means being able to answer, at scale, not only “Where is this file?” but “What is inside this file, and how sensitive is it?”

For organizations in aerospace, defense, energy, manufacturing and other technical industries, DWG files are often where your most tightly regulated and most commercially valuable data lives. Being able to automatically discover and classify that content is not a nice‑to‑have. It’s a compliance requirement that has been hiding in plain sight.

If you want to see what’s actually hiding in your own drawings, the easiest next step is to run a focused assessment: pick a few representative buckets or repositories, let Sentra scan the DWGs in place, and look at the inventory of export‑controlled and proprietary designs that surfaces.

My experience is that once you see those results, you’ll never look at “just another CAD file” the same way again.

<blogcta-big>


Read More
Nikki Ralston
Nikki Ralston
February 25, 2026
3
Min Read

SOC 2 Without the Spreadsheet Chaos: Automating Evidence for Regulated Data Controls

SOC 2 Without the Spreadsheet Chaos: Automating Evidence for Regulated Data Controls

SOC 2 has become table stakes for cloud‑native and SaaS organizations. But for many security and GRC teams, each SOC 2 cycle still feels like starting from scratch: hunting for the latest access reviews, exporting encryption settings from multiple consoles, proving backups and logs exist - per data set, per environment. If your SOC 2 evidence process is still a patchwork of spreadsheets and screenshots, you’re not alone. The missing piece is a data‑centric view of your controls, especially around regulated data.

Why SOC 2 Evidence Is So Hard in Cloud and SaaS Environments

Under SOC 2, trust service criteria like Security, Availability, and Confidentiality translate into specific expectations around data:

Is sensitive or regulated data discovered and classified consistently?

Are core controls (encryption, backup, access, logging) actually in place where that data lives?

Can you show continuous monitoring instead of point‑in‑time screenshots?

In a typical multi‑cloud/SaaS environment:

  • Sensitive data is scattered across S3, databases, Snowflake, M365/Google Workspace, Salesforce, and more.
  • Different teams own pieces of the puzzle (infra, security, data, app owners).
  • Legacy tools are siloed by layer (CSPM for infra, DLP for traffic, privacy catalog for RoPA).

So when SOC 2 comes around, you spend weeks assembling a story instead of being able to show a trusted, provable compliance posture at the data layer.

The Data‑First Approach to SOC 2 Evidence

Instead of treating SOC 2 as a separate project, leading teams are aligning it with their data security posture management (DSPM) strategy:

  1. Start from the data, not from the infrastructure
  • Build a unified inventory of sensitive and regulated data across IaaS, PaaS, SaaS, and on‑prem.
  • Enrich each store with sensitivity, residency, and business context.

  2. Attach control posture to each data store
  • For each regulated data store, track encryption status, backup configuration, access model, and logging/monitoring coverage as posture attributes.

  3. Generate SOC‑aligned evidence from the same system
  • Use the regulated‑data inventory plus posture engine to produce SOC 2‑friendly reports and CSVs, rather than collecting evidence manually for each audit cycle.

This is exactly the pattern that modern data security platforms like Sentra are implementing.

How Sentra Helps Security and GRC Teams Automate SOC 2 Evidence

Sentra sits across your data estate and focuses on regulated data, with capabilities that map directly onto SOC 2 evidence needs:

Comprehensive data‑store discovery and classification
Agentless discovery of data stores (managed and unmanaged) across multi‑cloud and on‑prem, combined with high‑accuracy classification for regulated and business‑critical data.

Data‑centric security posture
For each store, Sentra tracks security properties (including encryption, backup, logging, and access configuration) and surfaces gaps where sensitive data is insufficiently protected.

Framework‑aligned reporting
SOC 2 and other frameworks can be represented as report templates that pull directly from Sentra’s inventory and posture attributes, giving GRC teams “audit‑ready” exports without rebuilding evidence from scratch.

The result is that you can prove control over regulated data, for SOC 2 and beyond, with far less manual overhead.

Mapping SOC 2 Criteria to Data‑Level Evidence

Here’s how a data‑first posture shows up in SOC 2:

CC6.x (Logical and Physical Access Controls)

Evidence: Identity‑to‑data mapping showing which users/roles can access which sensitive datasets across cloud and SaaS.

CC7.x (Change Management / Monitoring)

Evidence: Data Detection & Response (DDR) signals and anomaly analytics around access to crown‑jewel data; logs that tie back to sensitive data stores.

CC8.x (Risk Mitigation)

Evidence: Risk‑prioritized view of data stores based on sensitivity and missing controls, plus remediation workflows or automatic labeling/tagging to tighten upstream policies.

When this data‑level view is in place, SOC 2 becomes evidence selection rather than evidence construction.

A Repeatable SOC 2 Playbook for Security, GRC, and Privacy

To operationalize this approach, many teams follow a recurring pattern:

  1. Define a “regulated data perimeter” for SOC 2: Identify which clouds, SaaS platforms, and on‑prem stores contain in‑scope data (PII, PHI, PCI, financial records).

  2. Instrument with DSPM: Deploy a data security platform like Sentra to discover, classify, and map access to that data perimeter.

  3. Connect GRC to the same source of truth: Have GRC and privacy teams pull their SOC 2 evidence from the same inventory and posture views Security uses for day‑to‑day risk management.

  4. Continuously refine controls: Use posture and DDR insights to reduce exposure, close misconfigurations, and improve your next SOC 2 cycle before it starts.

The more you lean on a shared, data‑centric foundation, the easier it becomes to maintain a trusted, provable compliance posture across frameworks, not just SOC 2.

Turning SOC 2 From a Project Into a Capability

Ultimately, the goal is to stop treating SOC 2 as a once-a-year project and start treating it as an ongoing capability embedded into how your organization operates. Security, GRC, and privacy teams should work from a single, unified view of regulated data and controls. Evidence should always be a few clicks away - not the result of a month-long scramble. And every audit should strengthen your data security posture, not distract from it. If you’re still managing compliance in spreadsheets, it’s worth asking what it would take to make your SOC 2 posture something you can prove on demand.

Ready to end the fire drills and move to continuous compliance? Book a Demo 

<blogcta-big>

Read More
Adi Voulichman
Adi Voulichman
February 23, 2026
4
Min Read

How to Discover Sensitive Data in the Cloud

How to Discover Sensitive Data in the Cloud

As cloud environments grow more complex in 2026, knowing how to discover sensitive data in the cloud has become one of the most pressing challenges for security and compliance teams. Data sprawls across IaaS, PaaS, SaaS platforms, and on-premise file shares, often duplicating, moving between environments, and landing in places no one intended. Without a systematic approach to discovery, organizations risk regulatory exposure, unauthorized AI access, and costly breaches. This article breaks down the key methods, tools, and architectural considerations that make cloud sensitive data discovery both effective and scalable.

Why Sensitive Data Discovery in the Cloud Is So Difficult

The core problem is visibility. Sensitive data (PII, financial records, health information, intellectual property) doesn't stay in one place. It gets copied from production to development environments, ingested into AI pipelines, backed up across regions, and shared through SaaS applications. Each transition creates a new exposure surface. Several patterns compound the problem:

  • Toxic combinations: High-sensitivity data behind overly permissive access configurations creates dangerous scenarios that require continuous, context-aware monitoring, not just point-in-time scans.
  • Shadow and ROT data: Redundant, obsolete, or trivial data inflates cloud storage costs and expands the attack surface without adding business value.
  • Multi-environment sprawl: Data moves across cloud providers, regions, and service tiers, making a single unified view extremely difficult to maintain.

What Are Cloud DLP Solutions and How Do They Work?

Cloud Data Loss Prevention (DLP) solutions discover, classify, and protect sensitive information across cloud storage, applications, and databases. They operate through several interconnected mechanisms:

  • Scan and classify: Pattern matching, machine learning, and custom detectors identify sensitive content and assign classification labels (e.g., public, confidential, restricted).
  • Enforce automated policies: Context-aware rules trigger encryption, masking, or access restrictions based on classification results.
  • Monitor data movement: Continuous tracking of transfers and user behaviors detects anomalies like unusual download patterns or overly broad sharing.
  • Integrate with broader controls: Many DLP tools work alongside CASBs and Zero Trust frameworks for end-to-end protection.

The result is enhanced visibility into where sensitive data lives and a proactive enforcement layer that reduces breach risk while supporting regulatory compliance.

What Is Google Cloud Sensitive Data Protection?

Google Cloud Sensitive Data Protection is a cloud-native service that automatically discovers, classifies, and protects sensitive information across Cloud Storage buckets, BigQuery tables, and other Google Cloud data assets.

Core Capabilities

  • Automated discovery and profiling: Scans projects, folders, or entire organizations to generate data profiles summarizing sensitivity levels and risk indicators, enabling continuous monitoring at scale.
  • Detailed data inspection: Performs granular analysis using hundreds of built-in detectors alongside custom infoTypes defined through dictionaries, regular expressions, or contextual rules.
  • De-identification techniques: Supports redaction, masking, and tokenization, making it a strong foundation for data governance within the Google Cloud ecosystem.
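For teams exploring the service, a minimal inspection call with the google-cloud-dlp Python client might look like the sketch below. The project ID, sample text, and the custom EMPLOYEE_BADGE infoType are placeholders for your own values.

```python
from google.cloud import dlp_v2

# Placeholders: replace with your project ID and the content you want to inspect.
project_id = "my-project"
parent = f"projects/{project_id}/locations/global"

client = dlp_v2.DlpServiceClient()

inspect_config = {
    # Built-in detectors plus a custom regex infoType for an internal identifier.
    "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "CREDIT_CARD_NUMBER"}],
    "custom_info_types": [
        {"info_type": {"name": "EMPLOYEE_BADGE"}, "regex": {"pattern": r"EMP-\d{6}"}}
    ],
    "min_likelihood": dlp_v2.Likelihood.POSSIBLE,
    "include_quote": True,
}

item = {"value": "Reach jane.doe@example.com, card 4111 1111 1111 1111, badge EMP-204981"}

response = client.inspect_content(
    request={"parent": parent, "inspect_config": inspect_config, "item": item}
)

for finding in response.result.findings:
    print(finding.info_type.name, finding.likelihood.name, finding.quote)
```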

How Sensitive Data Protection’s Data Profiler Finds Sensitive Information

Sensitive Data Protection’s data profiler automates scanning across BigQuery, Cloud SQL, Cloud Storage, Vertex AI datasets, and even external sources like Amazon S3 or Azure Blob Storage (for eligible Security Command Center customers). The process starts with a scan configuration defining scope and an inspection template specifying which sensitive data types to detect.

The profiler output covers several dimensions:

  • Granularity levels: project, table, and column for structured data; bucket or container for file stores
  • Statistical insights: null value percentages, data distributions, predicted infoTypes, sensitivity and risk scores
  • Scan frequency: on a schedule you define, and automatically when data is added or modified
  • Integrations: Security Command Center and Dataplex Universal Catalog for IAM refinement and data quality enforcement

These profiles give security and governance teams an always-current view of where sensitive data resides and how risky each asset is.

Understanding Sensitive Data Protection Pricing

Sensitive Data Protection primarily uses per-GB profiling charges, billed based on the amount of input data scanned, with minimums and caps per dataset or table. Certain tiers of Security Command Center include organization-level discovery as part of the subscription, but for most workloads several factors directly influence total cost:

  • Data volume: larger datasets and full scans cost more. Scope discovery to high-risk data stores first.
  • Scan frequency: recurring scans accumulate costs quickly. Scan only new or modified data.
  • Scan complexity: multiple or custom detectors require more processing. Filter irrelevant file types before scanning.
  • Integration overhead: compute, network egress, and encryption keys add cost. Minimize cross-region data movement during scans.

For organizations operating at petabyte scale, these factors make it essential to design discovery workflows carefully rather than running broad, undifferentiated scans.

Tracking Data Movement Beyond Static Location

Static discovery, knowing where sensitive data sits right now, is necessary but insufficient. The real risk often emerges when data moves: from production to development, across regions, into AI training pipelines, or through ETL processes.

  • Data lineage tracking: Captures transitions in real time, not just periodic snapshots.
  • Boundary crossing detection: Flags when sensitive assets cross environment boundaries or land in unexpected locations.
  • Practical example: Detecting when PII flows from a production database into a dev environment is a critical control, and requires active movement monitoring.

This is where platforms differ significantly. Some tools focus on cataloging data at rest, while more advanced solutions continuously monitor flows and surface risks as they emerge.
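As a toy illustration of boundary-crossing detection (not any particular product's implementation), imagine flagging copy events where classified data leaves production for a less-controlled environment:

```python
from dataclasses import dataclass

@dataclass
class CopyEvent:
    source_store: str
    source_env: str            # e.g. "production"
    dest_store: str
    dest_env: str              # e.g. "development"
    classifications: set[str]  # labels attached to the copied data

SENSITIVE = {"PII", "PHI", "PCI"}

def boundary_violations(events: list[CopyEvent]) -> list[CopyEvent]:
    """Flag sensitive data leaving production for a less-controlled environment."""
    return [
        e for e in events
        if e.source_env == "production"
        and e.dest_env != "production"
        and e.classifications & SENSITIVE
    ]

events = [
    CopyEvent("orders-db", "production", "analytics-sandbox", "development", {"PII"}),
    CopyEvent("assets-bucket", "production", "assets-replica", "production", {"PII"}),
]
for v in boundary_violations(events):
    print(f"ALERT: {v.classifications} copied from {v.source_store} to {v.dest_store}")
```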

How Sentra Approaches Sensitive Data Discovery at Scale

Sentra is built specifically for the challenges described throughout this article. Its agentless architecture connects directly to cloud provider APIs without inline components on your data path and operates entirely in-environment, so sensitive data never leaves your control for processing. This design is critical for organizations with strict data residency requirements or preparing for regulatory audits.

Key Capabilities

  • Unified multi-environment coverage: Spans IaaS, PaaS, SaaS, and on-premise file shares with AI-powered classification that distinguishes real sensitive data from mock or test data.
  • DataTreks™ mapping: Creates an interactive map of the entire data estate, tracking active data movement including ETL processes, migrations, backups, and AI pipeline flows.
  • Toxic combination detection: Surfaces sensitive data behind overly broad access controls with remediation guidance.
  • Microsoft Purview integration: Supports automated sensitivity labeling across environments, feeding high-accuracy labels into Purview DLP and broader Microsoft 365 controls.

What Users Say (Early 2026)

Strengths:

  • Classification accuracy: Reviewers note it is “fast and most accurate” compared to alternatives.
  • Shadow data discovery: “Brought visibility to unstructured data like chat messages, images, and call transcripts” that other tools missed.
  • Compliance facilitation: Teams report audit preparation has become significantly more manageable.

Considerations:

  • Initial learning curve with the dashboard configuration.
  • On-premises capabilities are less mature than cloud coverage, relevant for organizations with significant legacy infrastructure.

Beyond security, Sentra's elimination of shadow and ROT data typically reduces cloud storage costs by approximately 20%, extending the business case well beyond compliance.

For teams looking to understand how to discover sensitive data in the cloud at enterprise scale, Sentra's Data Discovery and Classification offers a comprehensive starting point, and its in-environment architecture ensures the discovery process itself doesn't introduce new risk.

<blogcta-big>

Read More
Yair Cohen
Yair Cohen
Jonathan Kreiner
Jonathan Kreiner
February 20, 2026
4
Min Read

Thinking Beyond Policies: AI‑Ready Data Protection

Thinking Beyond Policies: AI‑Ready Data Protection

AI assistants, SaaS, and hybrid work have made data easier than ever to discover, share, and reuse. Tools like Gemini for Google Workspace and Microsoft 365 Copilot can search across drives, mailboxes, chats, and documents in seconds - surfacing information that used to be buried in obscure folders and old snapshots.

That’s great for productivity, but dangerous for data security.

Traditional, policy‑based DLP wasn’t designed to handle this level of complexity. At the same time, many organizations now use DSPM tools to understand where their sensitive data lives, but still lack real‑time control over how that data moves on endpoints, in browsers, and across SaaS.

Together, Sentra and Orion close this gap: Sentra brings next‑gen, context-driven DSPM; Orion brings next‑gen, behavior‑driven DLP. The result is end‑to‑end, AI‑ready data protection from data store to last‑mile usage, creating a learning, self‑improving posture rather than a static set of controls.

Why DSPM or DLP Alone Isn’t Enough

Modern data environments require two distinct capabilities: deep data intelligence and real-time enforcement informed by business context.

DSPM solutions provide a data-centric view of risk. They continuously discover and classify sensitive data across cloud, SaaS, and on-prem environments. They map exposure, detect shadow data, and surface over-permissioned access. This gives security teams a clear understanding of what sensitive data exists, where it resides, who can access it, and how exposed it is.

DLP solutions operate where data moves - on endpoints, in browsers, across SaaS, and in email. They enforce policies and prevent exfiltration as it happens. 

Without rich data context like accurate sensitivity classification, exposure mapping, and identity-to-data relationships, DLP solutions often rely on predefined rules or limited signals to decide what to block, allow, or escalate.

DLP can be enforced, but its precision depends on the quality of the data intelligence behind it.

In AI-enabled, multi-cloud environments, visibility without enforcement is insufficient - and enforcement without deep data understanding lacks precision. To protect sensitive data from discovery by AI assistants, misuse across SaaS, or exfiltration from endpoints, organizations need accurate, continuously updated data intelligence, real-time, context-aware enforcement, and feedback between the two layers. 

That is where Sentra and Orion complement each other.

Sentra: Data‑Centric Intelligence for AI and SaaS

Sentra provides the data foundation: a continuous, accurate understanding of what you’re protecting and how exposed it is.

Deep Discovery and Classification

Sentra continuously discovers and classifies sensitive data across cloud‑native platforms, SaaS, and on‑prem data stores, including Google Workspace, Microsoft 365, databases, and object storage. Under the hood, Sentra uses AI/ML, OCR, and transcription to analyze both structured and unstructured data, and leverages rich data class libraries to identify PII, PHI, PCI, IP, credentials, HR data, legal content, and more, with configurable sensitivity levels.

This creates a live, contextual map of sensitive data: what it is, where it resides, and how important it is.

Reducing Shadow Data and Exposure

Sentra helps teams clean up the environment before AI and users can misuse it. 

It uncovers shadow data and obsolete assets that still carry sensitive content, highlights redundant or orphaned data that increases exposure without adding business value, and supports collaborative remediation workflows across security, data, and app owners.

Access Governance and Labeling for AI and DLP

Sentra turns visibility into governance signals. It maps which identities have access to which sensitive data classes and data stores, exposing overpermissioning and risky external access, and driving least‑privilege by aligning access rights with sensitivity and business needs.

To achieve this, Sentra automatically applies and enforces:

  • Google Labels across Google Drive, powering Gemini controls and DLP for Drive
  • Microsoft Purview Information Protection (MPIP) labels across Microsoft 365, powering Copilot and DLP policies

These labels become the policy fabric downstream AI and DLP engines use to decide what can be searched, summarized, or shared.

Orion: Behavior‑Driven DLP That Thinks Beyond Policies

Orion replaces reliance on static policies with a set of intelligent, context-aware, proprietary AI agents.

AI Agents That Understand Context

Orion’s agents collect rich context about data, identity, environment, and business relationships.

This includes mapping data lineage and movement patterns from source to destination, a contextual understanding of identities (role, department, tenure, and more), environmental context (geography, network zone, working hours), external business relationships (vendor/customer status), Sentra’s data classification, and more. 

Based on this rich, business-aware context, Orion’s agents detect indicators of data loss and stop potential exfiltrations before they become incidents. That means a full alignment between DLP and how your business actually operates, rather than how it was imagined in static policies.

Unified Coverage Where Data Moves

Orion is designed as a unified DLP solution, covering: 

  • Endpoints
  • SaaS applications
  • Web and cloud
  • Email
  • On‑prem and storage, including channels like print

From initial deployment, Orion quickly provides meaningful detections grounded in real behavior, not just pattern hits. Security teams then get trusted, high‑quality alerts.

Better Together: End‑to‑End, AI‑Ready Protection

Individually, Sentra and Orion address critical yet distinct challenges. Together, they create a closed loop:

Sentra → Orion: Smarter Detections

Sentra gives Orion high‑quality context:

  • Which assets are truly sensitive, and at what level.
  • Where they live, how widely they’re exposed, and which identities can reach them.
  • Which documents and stores carry labels or policies that demand stricter treatment.

Orion uses this information to prioritize and enrich detections, focusing on events involving genuinely high‑risk data. It can then adapt behavior models to each user and data class, improving precision over time.

Orion → Sentra: Real‑World Feedback

Orion’s view into actual data movement feeds back into Sentra, exposing data stores that repeatedly appear in risky behaviors and serve as prime candidates for cleanup or stricter access governance. It also highlights identities whose actions don’t align with their expected access profile, feeding Sentra’s least‑privilege workflows. This turns data protection into a self‑improving system instead of a set of static controls.

What this means for Security and Risk Teams

With Sentra and Orion together, organizations can:

  • Securely adopt AI assistants like Gemini and Copilot, with Sentra controlling what they can see and Orion controlling how data is actually used on endpoints and SaaS.
  • Eliminate shadow data as an exfil path by first mapping and reducing it with Sentra, then guarding remaining high‑risk assets with Orion until they’re remediated.
  • Make least‑privilege real, with Sentra defining who should have access to what and Orion enforcing that principle in everyday behavior.
  • Provide auditors and boards with evidence that sensitive data is discovered, governed, and protected from exfiltration across both data platforms and endpoints.

Instead of choosing between “see everything but act slowly” (DSPM‑only) and “act without deep context” (DLP‑only), Sentra and Orion let you do both well - with one data‑centric brain and one behavior‑aware nervous system.

Ready to See Sentra + Orion in Action?

If you’re looking to secure AI adoption, reduce data loss risk, and retire legacy DLP noise, the combination of Sentra DSPM and Orion DLP offers a practical, modern path forward.

See how a unified, AI‑ready data protection architecture can look in your environment by mapping your most critical data and exposures with Sentra, and letting Orion protect that data as it moves across endpoints, SaaS, and web in real time.

Request a joint demo to explore how Sentra and Orion together can help you think beyond policies and build a data protection program designed for the AI era.

<blogcta-big>

Read More
Meni Besso
Meni Besso
February 19, 2026
3
Min Read

Automating Records of Processing Activities (ROPA) with Real Data Visibility

Automating Records of Processing Activities (ROPA) with Real Data Visibility

Enterprises managing sprawling multi-cloud environments struggle to keep ROPA (Records of Processing Activities) reporting accurate and up to date for GDPR compliance. As manual, spreadsheet-based workflows hit their limits, automation has become essential - not just to save time, but to build confidence in what data is actually being processed across the organization.

Recently, during a strategy session, a leading GDPR-regulated customer shared how they are using Sentra to move beyond manual ROPA processes. By relying on Sentra’s automated data discovery, AI-driven classification, and environment-aware reporting, the organization has operationalized a high-confidence ROPA across ~100 cloud accounts. Their experience highlights a critical shift: ROPA as a trusted source of truth rather than a checkbox exercise.

Why ROPA Often Comes Up Short in Practice

For many organizations, maintaining a ROPA satisfies a regulatory requirement, but the resulting record is rarely a reliable one.

As the customer explained:

“What I’ve often seen is the ROPA or the records of processing activity being something that is a very checkbox thing to do. And that’s because it’s really hard to understand what data you actually have unless you literally go and interrogate every database.”

Without direct visibility into cloud data stores, ROPA documentation often relies on assumptions, interviews, and outdated spreadsheets. This approach doesn’t scale and creates risk during audits, due diligence, and regulatory inquiries, especially for companies operating across multiple clouds or growing through acquisition.

From Guesswork to a High-Confidence ROPA

The same customer described how Sentra fundamentally changed their approach:

“What Sentra allowed us to do is really have what I’ll describe as a high confidence ROPA. Our ROPA wasn’t guesswork, it was based on actual information that Sentra had gone out, touched our databases, looked inside them, identified the specific types of data records, and then gave us that inventory of what we had.”

By directly scanning databases and cloud data stores, Sentra replaces assumptions with facts. ROPA reports are generated from live discovery results, giving compliance teams confidence that they can accurately attest to:

  • What personal data they hold
  • Where it resides
  • How it is processed
  • And how it is governed

This transforms ROPA from a static document into a defensible, audit-ready asset.

The Need for Automated ROPA Reporting at Scale

Manual ROPA reporting becomes unmanageable as cloud environments expand. Organizations with dozens or hundreds of cloud accounts quickly face gaps, inconsistencies, and outdated records. Industry research shows that privacy automation can reduce manual ROPA effort by up to 80% and overall compliance workload by 60%. But effective automation requires focus. Reporting must concentrate on production environments, where real customer data lives, rather than drowning teams in noise from test or development systems.

As the customer’s privacy champion on this project explains:

“What I’m interested in is building a data inventory that gives me insight from a privacy point of view on what kind of customer data we are holding.”

This shift toward privacy-focused inventories ensures ROPA reporting stays meaningful, actionable, and aligned with regulatory intent.

How Sentra Enables Template-Driven, Environment-Aware ROPA Reporting

Sentra’s reporting framework allows organizations to create custom ROPA templates tailored to their regulatory, operational, and business needs. These templates automatically pull from continuously updated discovery and classification results, ensuring reports stay accurate as environments evolve.

A critical component of this approach is environment tagging. By clearly distinguishing production systems from non-production environments, Sentra ensures ROPA reports reflect only systems that actually process personal data. This reduces reporting noise, improves audit clarity, and aligns with modern GDPR automation best practices.

The result is ROPA reporting that is both scalable and precise - without requiring manual filtering or spreadsheet maintenance.

Solving the Data Classification Problem with Context-Aware AI

Accurate ROPA automation depends on intelligent data classification. Many tools rely on basic pattern matching, which often leads to false positives, such as mistaking airline or airport codes for regulated personal data in HR or internal systems.

Sentra addresses this challenge with AI-based, context-aware classification that understands how data is structured, where it appears, and how it is used. Rather than flagging data solely based on patterns, Sentra analyzes context to reliably distinguish between regulated personal data and non-regulated business data.

This approach dramatically reduces false positives and gives privacy teams confidence that ROPA reports reflect real regulatory exposure - without manual cleanup, lookup tables, or ongoing tuning.

What Sets Sentra Apart for ROPA Automation

While many platforms claim to support ROPA automation, few can deliver accurate, production-ready reporting across complex cloud environments. Sentra stands out through:

  • Agentless data discovery
  • Native multi-cloud support (AWS, Azure, GCP, and hybrid)
  • Context-aware AI classification
  • Data-centric inventory of all customer regulated data
  • Flexible, customizable ROPA reporting templates
  • Strong handling of inconsistent metadata and environment tagging

As the customer summarized:

“It’s no longer a checkbox exercise. It’s a very high confidence attestation of what we definitely have. That visibility allowed us to comply with GDPR in a much more comprehensive way.”

Conclusion

ROPA automation is not just about efficiency, it’s about trust. By grounding ROPA reporting in real data discovery, environment awareness, and AI-driven classification, Sentra enables organizations to replace guesswork with confidence.

The result is a scalable, defensible ROPA that reduces manual effort, lowers compliance risk, and supports long-term privacy maturity.

Interested in seeing high-confidence ROPA automation in action? Book a demo with Sentra to learn how you can turn ROPA into a living source of truth for GDPR compliance.

<blogcta-big>

Read More
David Stuart
David Stuart
February 18, 2026
3
Min Read

Entity-Level vs. File-Level Data Classification: Effective DSPM Needs Both

Entity-Level vs. File-Level Data Classification: Effective DSPM Needs Both

Most security teams think of data classification as a single capability. A tool scans data, finds sensitive information, and labels it. Problem solved. In reality, modern data environments have made classification far more complex.

As organizations scale across cloud platforms, SaaS apps, data lakes, collaboration tools, and AI systems, security teams must answer two fundamentally different questions:

  1. What sensitive data exists inside this asset?
  2. What is this asset actually about?

These questions represent two distinct approaches:

  • Entity-level data classification
  • File-level (asset-level) data classification

A well-functioning Data Security Posture Management (DSPM) program requires both.

What Is Entity-Level Data Classification?

Entity-level classification identifies specific sensitive data elements within structured and unstructured content. Instead of labeling an entire file as sensitive, it determines exactly which regulated entities are present and where they appear. These entities can include personal identifiers, financial account numbers, healthcare codes, credentials, digital identifiers, and other protected data types.

This approach provides precision at the field or token level. By detecting and validating individual data elements, security teams gain measurable visibility into exposure - including how many sensitive values exist, where they are located, and how they are used. That visibility enables targeted controls such as masking, redaction, tokenization, and DLP enforcement. In cloud and AI-driven environments, where risk is often tied to specific identifiers rather than document categories, this level of granularity is essential.

Examples of Entity-Level Detection

Entity-level classifiers detect atomic data elements such as:

  • Personal identifiers (names, emails, Social Security numbers)
  • Financial data (credit card numbers, IBANs, bank accounts)
  • Healthcare markers (diagnoses, ICD codes, treatment terms)
  • Credentials (API keys, tokens, private keys, passwords)
  • Digital identifiers (IP addresses, device IDs, user IDs)

This level of granularity enables precise policy enforcement and measurable risk assessment.

How Entity-Level Classification Works

High-quality entity detection is not just regex scanning. Effective systems combine multiple validation layers to reduce false positives and increase accuracy:

  • Deterministic patterns (regular expressions, format checks)
  • Checksum validation (e.g., Luhn algorithm for credit cards)
  • Keyword and proximity analysis
  • Dictionaries and structured reference tables
  • Natural Language Processing (NLP) with Named Entity Recognition
  • Machine learning models to suppress noise

This multi-signal approach ensures detection works reliably across messy, real-world data.
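A simplified sketch of this layering, using credit card numbers as the example entity: a deterministic pattern proposes candidates, a Luhn checksum filters out look-alikes, and nearby keywords supply context. This is an illustration of the technique, not a production detector.

```python
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")
CONTEXT_KEYWORDS = ("card", "visa", "mastercard", "payment", "pan")

def luhn_valid(number: str) -> bool:
    """Checksum validation: rejects most random digit strings that merely look like cards."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return len(digits) >= 13 and checksum % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Pattern match, then validate the checksum, then require a nearby context keyword."""
    findings = []
    for match in CARD_PATTERN.finditer(text):
        candidate = match.group(0)
        if not luhn_valid(candidate):
            continue  # fails checksum: likely an order ID or phone number
        window = text[max(0, match.start() - 40): match.end() + 40].lower()
        if any(keyword in window for keyword in CONTEXT_KEYWORDS):
            findings.append(candidate)
    return findings

print(find_card_numbers("Customer paid with card 4111 1111 1111 1111 on 2026-01-12."))
```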

When Entity-Level Classification Is Essential

Entity-level classification is essential when security controls depend on the presence of specific data elements rather than broad document categories. Many policies are triggered only when certain identifiers appear together, such as a Social Security number paired with a name, or when regulated financial or healthcare data exceeds defined thresholds. In these cases, security teams must accurately locate, validate, and quantify sensitive fields to enforce controls effectively.

This precision is also required for operational actions such as masking, redaction, tokenization, and DLP enforcement, where controls must be applied to exact values instead of entire files. In structured data environments like databases and warehouses, entity-level classification enables column- and table-level visibility, forming the basis for exposure measurement, risk scoring, and access governance decisions.

However, entity-level detection does not explain the broader business context of the data. A credit card number may appear in an invoice, a support ticket, a legal filing, or a breach report. While the identifier is the same, the surrounding context changes the associated risk and the appropriate response.

This is where file-level classification becomes necessary.

What Is File-Level (Asset-Level) Data Classification?

File-level classification determines the semantic meaning and business context of an entire data asset.

Instead of asking what sensitive values exist, it asks:

What kind of document or dataset is this? What is its business purpose?

Examples of File-Level Classification

File-level classifiers identify attributes such as:

  • Business domain (HR, Legal, Finance, Healthcare, IT)
  • Document type (NDA, invoice, payroll record, resume, contract)
  • Business purpose (compliance evidence, client matter, incident report)

This context is essential for appropriate governance, access control, and AI safety.

How File-Level Classification Works

File-level classification relies on semantic understanding, typically powered by:

  • Small and Large Language Models (SLMs/LLMs)
  • Vector embeddings for topic similarity
  • Confidence scoring and ensemble validation
  • Trainable models for organization-specific document types

This allows systems to classify documents even when sensitive entities are sparse, masked, or absent.

For example, an employment contract may contain limited PII but still require strict access controls because of its business context.
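One simple way to approximate this kind of semantic classification is to compare a document's embedding against prototype descriptions of each business domain. The sketch below uses a small public sentence-embedding model purely as an example; real DSPM engines layer confidence scoring, ensembles, and trainable models on top.

```python
from sentence_transformers import SentenceTransformer, util

# Any sentence-embedding model works here; this small public one is just an example.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Prototype descriptions for each business domain / document type.
prototypes = {
    "HR / employment contract": "employment agreement between employer and employee, salary, termination, benefits",
    "Finance / invoice": "invoice with line items, amounts due, payment terms, billing address",
    "Legal / NDA": "non-disclosure agreement, confidential information, receiving party, disclosing party",
}
proto_embeddings = {label: model.encode(text, convert_to_tensor=True) for label, text in prototypes.items()}

def classify_document(text: str) -> tuple[str, float]:
    doc_embedding = model.encode(text, convert_to_tensor=True)
    scored = {label: float(util.cos_sim(doc_embedding, emb)) for label, emb in proto_embeddings.items()}
    best = max(scored, key=scored.get)
    return best, scored[best]   # label plus a similarity score usable as a confidence signal

label, score = classify_document(
    "This Agreement sets out the terms of employment, annual compensation, and notice period..."
)
print(label, round(score, 2))
```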

When File-Level Classification Is Essential

File-level classification becomes essential when security decisions depend on business context rather than just the presence of sensitive strings. For example, enforcing domain-based access controls requires knowing whether a document belongs to HR, Legal, or Finance - not just whether it contains an email address or account number. The same applies to implementing least-privilege access models, where entire categories of documents may need tighter controls based on their purpose.

File-level classification also plays a critical role in retention policies and audit workflows, where governance rules are applied to document types such as contracts, payroll records, or compliance evidence. And as organizations adopt generative AI tools, semantic understanding becomes even more important for implementing AI governance guardrails, ensuring copilots don’t ingest sensitive HR files or privileged legal documents.

That said, file-level classification alone is not sufficient. While it can determine what a document is about, it does not precisely locate or quantify sensitive data within it. A document labeled “Finance” may or may not contain exposed credentials or an excessive concentration of regulated identifiers; only entity-level detection can accurately measure those risks.

Entity-Level vs. File-Level Classification: Key Differences

Entity-level classification:

  • Detects specific sensitive values
  • Enables masking, redaction, and DLP
  • Works well for structured data
  • Provides precise risk signals
  • Lacks semantic understanding of purpose

File-level classification:

  • Identifies document meaning and context
  • Enables context-aware governance
  • Strong for unstructured documents
  • Provides business intent and domain context
  • Lacks granular entity visibility

Each approach solves a different security problem. Relying on only one creates blind spots or false positives. Together, they form a powerful combination.

Why Using Only One Approach Creates Security Gaps

Entity-Only Approaches

Tools focused exclusively on entity detection can:

  • Flag isolated sensitive values without context
  • Generate high alert volumes
  • Miss business intent
  • Treat all instances of the same entity as equal risk

A payroll file and a legal complaint may both contain Social Security numbers — but they represent different governance needs.

File-Only Approaches

Tools focused only on semantic labeling can:

  • Identify that a document belongs to “Finance” or “HR”
  • Apply domain-based policies
  • Enable context-aware access

But they may miss:

  • Embedded credentials
  • Excessive concentrations of regulated identifiers
  • Toxic combinations of data types (e.g., PII + healthcare terms)

Without entity-level precision, risk scoring becomes guesswork.

How Effective DSPM Combines Both Layers

The real power of modern Data Security Posture Management (DSPM) emerges when entity-level and file-level classification operate together rather than in isolation. Each layer strengthens the other. Context can reinforce entity validation: for example, a dense concentration of financial identifiers helps confirm that a document truly belongs in the Finance domain or represents an invoice. At the same time, entity signals can refine context. If a file is semantically classified as an invoice, the system can apply tighter validation logic to account numbers, totals, and other financial fields, improving accuracy and reducing noise.

This combination also enables more intelligent policy enforcement. Instead of relying on brittle, one-dimensional rules, security teams can detect high-risk combinations of data. Personal identifiers appearing within a healthcare context may elevate regulatory exposure. Credentials embedded inside operational documents may signal immediate security risk. An unusually high concentration of identifiers in an externally shared HR file may indicate overexposure. These are nuanced risk patterns that neither entity-level nor file-level classification can reliably identify alone.

When both layers inform policy decisions, organizations can move toward true risk-based governance. Sensitivity is no longer determined solely by what specific data elements exist, nor solely by what category a document falls into, but by the intersection of the two. Risk is derived from both what is inside the data and what the data represents.

This dual-layer approach reduces false positives, increases analyst trust, and enables more precise controls across cloud and SaaS environments. It also becomes essential for AI governance, where understanding both sensitive content and business context determines whether data is safe to expose to copilots or generative AI systems.
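A toy policy sketch shows the idea: risk is computed from entity-level counts and the file-level domain together. The thresholds, labels, and field names here are illustrative only, not a prescribed scoring model.

```python
from dataclasses import dataclass, field

@dataclass
class AssetProfile:
    domain: str                      # file-level signal, e.g. "HR", "Finance", "Healthcare"
    shared_externally: bool
    entity_counts: dict[str, int] = field(default_factory=dict)  # entity-level signal

def risk_level(asset: AssetProfile) -> str:
    """Toy dual-layer policy: risk comes from the entities and the document's context together."""
    pii = asset.entity_counts.get("PII", 0)
    creds = asset.entity_counts.get("CREDENTIAL", 0)
    health = asset.entity_counts.get("HEALTH", 0)

    if creds > 0:                                   # credentials anywhere are urgent
        return "critical"
    if pii > 0 and (asset.domain == "Healthcare" or health > 0):
        return "high"                               # PII in a healthcare context: regulated exposure
    if asset.domain == "HR" and asset.shared_externally and pii > 50:
        return "high"                               # externally shared HR file dense with identifiers
    if pii > 0:
        return "medium"
    return "low"

print(risk_level(AssetProfile("HR", True, {"PII": 180})))              # high
print(risk_level(AssetProfile("Finance", False, {"CREDENTIAL": 2})))   # critical
```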

What to Look for in a DSPM Classification Engine

Not all DSPM platforms treat classification equally.

When evaluating solutions, security leaders should ask:

  • Does the platform classify and validate sensitive entities beyond basic regex?
  • Can it semantically identify document type and business domain?
  • Are entity-level and file-level signals tightly integrated?
  • Can policies reason across both layers simultaneously?
  • Does risk scoring incorporate both precision and context?

The goal is not simply to “classify data,” but to generate actionable, risk-aligned data intelligence.

The Bottom Line

Modern data estates are too complex for single-layer classification models. Entity-level classification provides precision, identifying exactly what sensitive data exists and where.

File-level classification provides context - understanding what the data is and why it exists.

Together, they enable accurate risk detection, effective policy enforcement, least-privilege access, and AI-safe governance. In today’s cloud-first and AI-driven environments, data security posture management must go beyond isolated detections or broad labels. It must understand both the contents of data and its meaning - at the same time.

That’s the new standard for data classification.

<blogcta-big>

Read More
Ariel Rimon
Ariel Rimon
Daniel Suissa
Daniel Suissa
February 16, 2026
4
Min Read

How Modern Data Security Discovers Sensitive Data at Cloud Scale

How Modern Data Security Discovers Sensitive Data at Cloud Scale

Modern cloud environments contain vast amounts of data stored in object storage services such as Amazon S3, Google Cloud Storage, and Azure Blob Storage. In large organizations, a single data store can contain billions (or even tens of billions) of objects. In this reality, traditional approaches that rely on scanning every file to detect sensitive data quickly become impractical.

Full object-level inspection is expensive, slow, and difficult to sustain over time. It increases cloud costs, extends onboarding timelines, and often fails to keep pace with continuously changing data. As a result, modern data security platforms must adopt more intelligent techniques to build accurate data inventories and sensitivity models without scanning every object.

Why Object-Level Scanning Fails at Scale

Object storage systems expose data as individual objects, but treating each object as an independent unit of analysis does not reflect how data is actually created, stored, or used.

In large environments, scanning every object introduces several challenges:

  • Cost amplification from repeated content inspection at massive scale
  • Long time to actionable insights during the first scan
  • Operational bottlenecks that prevent continuous scanning
  • Diminishing returns, as many objects contain redundant or structurally identical data

The goal of data discovery is not exhaustive inspection, but rather accurate understanding of where sensitive data exists and how it is organized.

The Dataset as the Correct Unit of Analysis

Although cloud storage presents data as individual objects, most data is logically organized into datasets. These datasets often follow consistent structural patterns such as:

  • Time-based partitions
  • Application or service-specific logs
  • Data lake tables and exports
  • Periodic reports or snapshots

For example, the following objects are separate files but collectively represent a single dataset:

logs/2026/01/01/app_events_001.json
logs/2026/01/02/app_events_002.json
logs/2026/01/03/app_events_003.json

While these objects differ by date, their structure, schema, and sensitivity characteristics are typically consistent. Treating them as a single dataset enables more accurate and scalable analysis.

Analyzing Storage Structure Without Reading Every File

Modern data discovery platforms begin by analyzing storage metadata and object structure, rather than file contents.

This includes examining:

  • Object paths and prefixes
  • Naming conventions and partition keys
  • Repeating directory patterns
  • Object counts and distribution

By identifying recurring patterns and natural boundaries in storage layouts, platforms can infer how objects relate to one another and where dataset boundaries exist. This analysis does not require reading object contents and can be performed efficiently at cloud scale.

Configurable by Design

Sampling can be disabled for specific data sources, and the dataset grouping algorithm can be adjusted by the user. This allows teams to tailor the discovery process to their environment and needs.


Automatic Grouping into Dataset-Level Assets

Using structural analysis, objects are automatically grouped into dataset-level assets. Clustering algorithms identify related objects based on path similarity, partitioning schemes, and organizational patterns. This process requires no manual configuration and adapts as new objects are added. Once grouped, these datasets become the primary unit for further analysis, replacing object-by-object inspection with a more meaningful abstraction.
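A highly simplified version of this grouping idea is to normalize the variable path segments (dates, sequence numbers) into a dataset key. Production clustering is considerably more sophisticated, but the sketch below shows the principle on the log example above:

```python
import re
from collections import defaultdict

def dataset_key(object_key: str) -> str:
    """Collapse variable path segments (dates, sequence numbers) into a dataset pattern."""
    key = re.sub(r"\b\d{4}/\d{2}/\d{2}\b", "<date>", object_key)   # date partitions
    key = re.sub(r"_\d+(?=\.\w+$)", "_<n>", key)                    # trailing sequence numbers
    return key

objects = [
    "logs/2026/01/01/app_events_001.json",
    "logs/2026/01/02/app_events_002.json",
    "logs/2026/01/03/app_events_003.json",
    "exports/customers/2026/01/01/snapshot_001.parquet",
]

datasets: dict[str, list[str]] = defaultdict(list)
for obj in objects:
    datasets[dataset_key(obj)].append(obj)

for pattern, members in datasets.items():
    print(f"{pattern}: {len(members)} objects")
# logs/<date>/app_events_<n>.json: 3 objects
# exports/customers/<date>/snapshot_<n>.parquet: 1 objects
```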

Representative Sampling for Sensitivity Inference

After grouping, sensitivity analysis is performed using representative sampling. Instead of inspecting every object, the platform selects a small, statistically meaningful subset of files from each dataset.

Sampling strategies account for factors such as:

  • Partition structure
  • File size and format
  • Schema variation within the dataset

By analyzing these samples, the platform can accurately infer the presence of sensitive data across the entire dataset. This approach preserves accuracy while dramatically reducing the amount of data that must be scanned.
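
A hedged sketch of what partition-aware sampling might look like: pick a couple of objects per partition of a dataset rather than scanning everything. Real sampling strategies would also account for file size, format, and schema variation, as noted above.

```python
import random

def sample_dataset(members, per_partition=2, seed=42):
    """Pick a small, partition-aware sample of objects from one dataset.

    Illustrative only: a real platform would also weigh file size, format,
    and schema variation before inferring sensitivity for the whole dataset."""
    rng = random.Random(seed)
    by_partition = {}
    for key in members:
        # Treat the directory portion of the key as the partition.
        partition = key.rsplit("/", 1)[0]
        by_partition.setdefault(partition, []).append(key)
    sample = []
    for keys in by_partition.values():
        sample.extend(rng.sample(keys, min(per_partition, len(keys))))
    return sample
```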

Handling Non-Standard Storage Layouts

In some environments, storage layouts may follow unconventional or highly customized naming schemes that automated grouping cannot fully interpret. In these cases, manual grouping provides additional precision. Security analysts can define logical dataset boundaries, often supported by LLM-assisted analysis to better understand complex or ambiguous structures. Once defined, the same sampling and inference mechanisms are applied, ensuring consistent sensitivity assessment even in edge cases.

Scalability, Cost, and Operational Impact

By combining structural analysis, grouping, and representative sampling, this approach enables:

  • Scalable data discovery across millions or billions of objects
  • Predictable and significantly reduced cloud scanning costs
  • Faster onboarding and continuous visibility as data changes
  • High confidence sensitivity models without exhaustive inspection

This model aligns with the realities of modern cloud environments, where data volume and velocity continue to increase.

From Discovery to Classification and Continuous Risk Management

Dataset-level asset discovery forms the foundation for scalable classification, access governance, and risk detection. Once assets are defined at the dataset level, classification becomes more accurate and easier to maintain over time. This enables downstream use cases such as identifying over-permissioned access, detecting risky data exposure, and managing AI-driven data access patterns.

Applying These Principles in Practice

Platforms like Sentra apply these principles to help organizations discover, classify, and govern sensitive data at cloud scale - without relying on full object-level scans. By focusing on dataset-level discovery and intelligent sampling, Sentra enables continuous visibility into sensitive data while keeping costs and operational overhead under control.

<blogcta-big>

Read More
Elie Perelman
February 13, 2026
3
Min Read

Best Data Access Governance Tools

Managing access to sensitive information is becoming one of the most critical challenges for organizations in 2026. As data sprawls across cloud platforms, SaaS applications, and on-premises systems, enterprises face compliance violations, security breaches, and operational inefficiencies. Data Access Governance Tools provide automated discovery, classification, and access control capabilities that ensure only authorized users interact with sensitive data. This article examines the leading platforms, essential features, and implementation strategies for effective data access governance.

Best Data Access Governance Tools

The market offers several categories of solutions, each addressing different aspects of data access governance. Enterprise platforms like Collibra, Informatica Cloud Data Governance, and Atlan deliver comprehensive metadata management, automated workflows, and detailed data lineage tracking across complex data estates.

Specialized Data Access Governance (DAG) platforms focus on permissions and entitlements. Varonis, Immuta, and Securiti provide continuous permission mapping, risk analytics, and automated access reviews. Varonis identifies toxic combinations by discovering and classifying sensitive data, then correlating classifications with access controls to flag scenarios where high-sensitivity files have overly broad permissions.

User Reviews and Feedback

Varonis

  • Detailed file access analysis and real-time protection capabilities
  • Excellent at identifying toxic permission combinations
  • Learning curve during initial implementation

BigID

  • AI-powered classification with over 95% accuracy
  • Handles both structured and unstructured data effectively
  • Strong privacy automation features
  • Technical support response times could be improved

OneTrust

  • User-friendly interface and comprehensive privacy management
  • Deep integration into compliance frameworks
  • Robust feature set requires organizational support to fully leverage

Sentra

  • Effective data discovery and automation capabilities (January 2026 reviews)
  • Significantly enhances security posture and streamlines audit processes
  • Reduces cloud storage costs by approximately 20%

Critical Capabilities for Modern Data Access Governance

Effective platforms must deliver several core capabilities to address today's challenges:

Unified Visibility

Tools need comprehensive visibility across IaaS, PaaS, SaaS, and on-premises environments without moving data from its original location. This "in-environment" architecture ensures data never leaves organizational control while enabling complete governance.

Dynamic Data Movement Tracking

Advanced platforms monitor when sensitive assets flow between regions, migrate from production to development, or enter AI pipelines. This goes beyond static location mapping to provide real-time visibility into data transformations and transfers.

Automated Classification

Modern tools leverage AI and machine learning to identify sensitive data with high accuracy, then apply appropriate tags that drive downstream policy enforcement. Deep integration with native cloud security tools, particularly Microsoft Purview, enables seamless policy enforcement.

Toxic Combination Detection

Platforms must correlate data sensitivity with access permissions to identify scenarios where highly sensitive information has broad or misconfigured controls. Once detected, systems should provide remediation guidance or trigger automated actions.
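
As a rough illustration of the idea (the dataset names, principals, and sensitivity levels below are hypothetical), toxic-combination detection boils down to joining a sensitivity inventory with an access listing and flagging high-sensitivity datasets reachable by broad principals:

```python
# Correlate a (hypothetical) sensitivity inventory with an access listing and
# flag datasets where high-sensitivity data is reachable by broad principals.

SENSITIVITY = {"customer_pii": "high", "marketing_assets": "low"}     # dataset -> sensitivity
ACCESS = {
    "customer_pii": ["all-employees@corp", "data-eng@corp"],          # dataset -> principals
    "marketing_assets": ["all-employees@corp"],
}
BROAD_PRINCIPALS = {"all-employees@corp", "everyone", "public"}

def find_toxic_combinations(sensitivity, access, broad):
    findings = []
    for dataset, level in sensitivity.items():
        exposed = broad.intersection(access.get(dataset, []))
        if level == "high" and exposed:
            findings.append({"dataset": dataset, "exposed_to": sorted(exposed)})
    return findings

print(find_toxic_combinations(SENSITIVITY, ACCESS, BROAD_PRINCIPALS))
# [{'dataset': 'customer_pii', 'exposed_to': ['all-employees@corp']}]
```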

Infrastructure and Integration Considerations

Deployment architecture significantly impacts governance effectiveness. Agentless solutions connecting via cloud provider APIs offer zero impact on production latency and simplified deployment. Some platforms use hybrid approaches combining agentless scanning with lightweight collectors when additional visibility is required.

Key integration areas, considerations, and example capabilities:

  • Microsoft Ecosystem: Native integration with Microsoft Purview, Microsoft 365, and Azure (for example, Varonis monitors Copilot AI prompts and enforces consistent policies)
  • Data Platforms: Direct remediation within platforms such as Snowflake (for example, BigID automatically enforces dynamic data masking and tagging)
  • Cloud Providers: API-based scanning without performance overhead (for example, Sentra’s agentless architecture scans environments without deploying agents)

Open Source Data Governance Tools

Organizations seeking cost-effective or customizable solutions can leverage open source tools. Apache Atlas, originally designed for Hadoop environments, provides mature governance capabilities that, when integrated with Apache Ranger, support tag-based policy management for flexible access control.

DataHub, developed at LinkedIn, features AI-powered metadata ingestion and role-based access control. OpenMetadata offers a unified metadata platform consolidating information across data sources with data lineage tracking and customized workflows.

While open source tools provide foundational capabilities (metadata cataloging, data lineage tracking, and basic access controls), achieving enterprise-grade governance typically requires additional customization, integration work, and infrastructure investment. The software is free, but self-hosting means accounting for the operational costs and expertise needed to maintain these platforms.

Understanding the Gartner Magic Quadrant for Data Governance Tools

Gartner's Magic Quadrant assesses vendors on ability to execute and completeness of vision. For data access governance, Gartner examines how effectively platforms define, automate, and enforce policies controlling user access to data.

<blogcta-big>

Read More
Gilad Golani
David Stuart
February 12, 2026
4
Min Read

How to Supercharge Microsoft Purview DLP and Make Copilot Safe by Fixing Labels at the Source

For organizations invested in Microsoft 365, Purview and Copilot now sit at the center of both data protection and productivity. Purview offers rich DLP capabilities, along with sensitivity labels that drive encryption, retention, and policy. Copilot promises to unlock new value from content in SharePoint, OneDrive, Teams, and other services.

But there is a catch. Both Purview DLP and Copilot depend heavily on labels and correct classification.

If labels are missing, wrong, or inconsistent, then:

  • DLP rules fire in the wrong places (creating false positives) or miss critical data (worse!).
  • Copilot accesses content you never intended it to see and can inadvertently surface it in responses.

In many environments, that’s exactly what’s happening. Labels are applied manually. Legacy content, exports from non‑Microsoft systems, and AI‑ready datasets live side by side with little or no consistent tagging. Purview has powerful controls; it just doesn’t always have the accurate inputs it needs.

The fastest way to boost performance of Purview DLP and make Copilot safe is to fix labels at the source using a DSPM platform, then let Microsoft’s native controls do the work they’re already good at.

The Limits of M365‑Only Classification

Purview’s built-in classifiers understand certain patterns and can infer sensitivity from content inside the Microsoft 365 estate. That can be useful, but it doesn’t solve two big problems.

First, PHI, PCI, PII, and IP often originate in systems outside of M365: core banking platforms, claims systems, Snowflake, Databricks, and third‑party SaaS applications. When that data is exported or synced into SharePoint, OneDrive, or Teams, it often arrives without accurate labels.

Second, even within M365, there are years of accumulated documents, emails, and chat history that have never been systematically classified. Applying labels retroactively is time‑consuming and error‑prone if you rely on manual tagging or narrow content rules, and without contextual analysis of the unstructured files in which the data lives, it is extremely difficult to apply precise sensitivity labels. When you add Copilot (or any AI agent or assistant) into the mix, mislabeling or blind spots in classification can quickly turn into AI‑driven data exposure. The stakes are higher, and so is the need for a more robust foundation.

Using DSPM to Fix Labels at the Source

A DSPM platform like Sentra plugs into your environment at a different layer. It connects not just to Microsoft 365, but also to cloud providers, data warehouses, SaaS applications, collaboration tools, and AI platforms. It then builds a cross‑environment view of where sensitive data lives and what it contains, based on multi‑signal, AI‑assisted classification that’s tuned to your business context.

Once it has that view, Sentra can automatically apply or correct Microsoft Purview Information Protection (MPIP) labels across M365 content and, where appropriate, back into other systems. Instead of relying on spotty manual tagging and local heuristics, you get labels that reflect a consistent, enterprise‑wide understanding of sensitivity.

Supercharging Microsoft Purview DLP with Sentra



Those labels become the language that Purview DLP, encryption, retention, and Copilot controls understand. You are effectively giving Microsoft’s native tools a richer, more accurate map of your data, enabling them to confidently apply appropriate controls and streamline remediations.

Making Purview DLP Work Smarter

When labels are trustworthy, Purview DLP policies become easier to design and maintain. Rather than creating sprawling rule sets that combine patterns, locations, and exceptions, you can express policies in simple, label‑centric terms:

  • “Encrypt and allow PHI sent to approved partners; block PHI sent anywhere else.”
  • “Block Highly Confidential documents shared with external accounts; prompt for justification when Internal documents leave the tenant.”

DSPM’s role is to ensure that content carrying PHI or other regulated data is actually labeled as such, whether it started life in M365 or came from elsewhere. Purview then enforces DLP based on those labels, with far fewer false positives and far fewer edge cases. During rollout, you can run new label‑driven policies in audit mode to observe how they would behave, work with business stakeholders to adjust where necessary, and then move the most critical rules into full enforcement.
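
To illustrate why label-centric policies stay simple, here is a toy decision function; the label names, domains, and actions are hypothetical, and in practice Purview DLP evaluates MPIP labels natively rather than through custom code:

```python
# A minimal sketch of a label-centric DLP decision, mirroring the policies above.
# Label names, domains, and actions are hypothetical.

APPROVED_PARTNERS = {"partner-hospital.example"}

def evaluate_email(label: str, recipient_domain: str) -> str:
    if label == "PHI":
        return "encrypt-and-allow" if recipient_domain in APPROVED_PARTNERS else "block"
    if label == "Highly Confidential" and recipient_domain != "corp.example":
        return "block"
    if label == "Internal" and recipient_domain != "corp.example":
        return "prompt-for-justification"
    return "allow"

print(evaluate_email("PHI", "partner-hospital.example"))  # encrypt-and-allow
print(evaluate_email("Internal", "gmail.com"))            # prompt-for-justification
```

The rule set reads almost exactly like the policy statements above, which is the point of fixing labels first.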

Keeping Copilot Inside the Guardrails

Copilot adds another dimension to this story. By design, it reads and reasons over large swaths of your content, then generates responses or summaries based on that content. If you don’t control what Copilot can see, it may surface PHI in a chat about scheduling, or include sensitive IP in a generic project update.

Here again, labels should be the control plane. Once DSPM has ensured that sensitive content is labeled accurately and consistently, you can use those labels to govern Copilot:

  • Limit Copilot’s access to certain labels or sites, especially those holding PHI, PCI, or trade secrets.
  • Restrict certain operations (such as summarization or sharing) when output would be based on Highly Confidential content.
  • Exclude specific labeled datasets from Copilot’s index entirely.

Because DSPM also tracks where labeled data moves, it can alert you when sensitive content is copied into a location with different Copilot rules. That gives you an opportunity to remediate before an incident, rather than discovering the issue only after a problematic AI response.
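
The same label-as-control-plane idea can be sketched for indexing decisions. The labels and items below are hypothetical, and real enforcement happens through Purview and Copilot’s own controls; this only shows the shape of the guardrail:

```python
# Illustrative only: decide which labeled items an AI assistant's index should
# include, based on label guardrails. Label names and items are hypothetical.

EXCLUDED_LABELS = {"PHI", "PCI", "Trade Secret", "Highly Confidential"}

def copilot_indexable(items):
    """items: iterable of dicts like {"path": ..., "label": ...}."""
    return [item for item in items if item.get("label") not in EXCLUDED_LABELS]

docs = [
    {"path": "sites/hr/benefits_overview.docx", "label": "Internal"},
    {"path": "sites/clinical/patient_notes.docx", "label": "PHI"},
]
print([d["path"] for d in copilot_indexable(docs)])
# ['sites/hr/benefits_overview.docx']
```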

A Practical Path for Microsoft‑Centric Organizations

For organizations that have standardized on Microsoft 365, the message is not “replace Purview” or “turn off Copilot.” It’s to recognize that Purview and Copilot need a stronger foundation of data intelligence to act safely and predictably.

That foundation comes from pairing DSPM and auto‑labeling with Purview’s native capabilities, which combined enable you to:

  1. Discover and classify sensitive data across your full estate, including non‑Microsoft sources.
  2. Auto‑apply MPIP labels so that M365 content is tagged accurately and consistently.
  3. Simplify DLP and Copilot policies to be label‑driven rather than pattern‑driven.
  4. Iterate in audit mode before expanding enforcement.

Once labels are fixed at the source, you can lean on Purview DLP and Copilot with much more confidence. You’ll spend less time chasing noisy alerts and unexpected AI behavior, and more time using the Microsoft ecosystem the way it was intended: as a powerful, integrated platform for secure productivity.

Ready to supercharge Purview DLP and make M365 Copilot safe by fixing labels at the source? Schedule a Sentra demo.

<blogcta-big>

Read More
Ward Balcerzak
February 11, 2026
3
Min Read

Best Data Classification Tools in 2026: Compare Leading Platforms for Cloud, SaaS, and AI

As organizations navigate the complexities of cloud environments and AI adoption, the need for robust data classification has never been more critical. With sensitive data sprawling across IaaS, PaaS, SaaS platforms, and on-premise systems, enterprises require tools that can discover, classify, and govern data at scale while maintaining compliance with evolving regulations. The best data classification tools not only identify where sensitive information resides but also provide context around data movement, access controls, and potential exposure risks. This guide examines the leading solutions available today, helping you understand which platforms deliver the accuracy, automation, and integration capabilities necessary to secure your data estate.

Key considerations when evaluating data classification tools:

  • Classification Accuracy: AI-powered classification engines that distinguish real sensitive data from mock or test data to minimize false positives
  • Platform Coverage: Unified visibility across cloud, SaaS, and on-premises environments without moving or copying data
  • Data Movement Tracking: Ability to monitor how sensitive assets move between regions, environments, and AI pipelines
  • Integration Depth: Native integrations with major platforms such as Microsoft Purview, Snowflake, and Azure to enable automated remediation

What Are Data Classification Tools?

Data classification tools are specialized platforms designed to automatically discover, categorize, and label sensitive information across an organization's entire data landscape. These solutions scan structured and unstructured data, from databases and file shares to cloud storage and SaaS applications, to identify content such as personally identifiable information (PII), financial records, intellectual property, and regulated data subject to compliance frameworks like GDPR, HIPAA, or CCPA.

Effective data classification tools leverage machine learning algorithms, pattern matching, metadata analysis, and contextual awareness to tag data accurately. Beyond simple discovery, these platforms correlate classification results with access controls, data lineage, and risk indicators, enabling security teams to identify "toxic combinations" where highly sensitive data sits behind overly permissive access settings. This contextual intelligence transforms raw classification data into actionable security insights, helping organizations prevent data breaches, meet compliance obligations, and establish the governance guardrails necessary for secure AI adoption.

Top Data Classification Tools

Sentra

Sentra is a cloud-native data security platform specifically designed for AI-ready data governance. Unlike legacy classification tools built for static environments, Sentra discovers and governs sensitive data at petabyte scale inside your own environment, ensuring data never leaves your control.

What Users Like:

  • Classification accuracy and contextual risk insights consistently praised in January 2026 reviews
  • Speed and precision of classification engine described as unmatched
  • DataTreks capability creates interactive maps tracking data movement, duplication, and transformation
  • Distinguishes between real sensitive data and mock data to prevent false positives

Key Capabilities:

  • Unified visibility across IaaS, PaaS, SaaS, and on-premise file shares without moving data
  • Deep Microsoft integration leveraging Purview Information Protection with 95%+ accuracy
  • Identifies toxic combinations by correlating data sensitivity with access controls
  • Tracks data movement to detect when sensitive assets flow into AI pipelines
  • Eliminates shadow and ROT data, typically reducing cloud storage costs by ~20%

BigID

BigID uses AI-powered discovery to automatically identify sensitive or regulated information, continuously monitoring data risks with a strong focus on privacy compliance and mapping personal data across organizations.

What Users Like:

  • Exceptional data classification capabilities highlighted in January 2026 reviews
  • Comprehensive data-discovery features for privacy, protection, and governance
  • Broad source connectivity across diverse data environments

Varonis

Varonis specializes in unstructured data classification across file servers, email, and cloud content, providing strong access monitoring and insider threat detection.

What Users Like:

  • Detailed file access analysis and real-time protection
  • Actionable insights and automated risk visualization

Considerations:

  • Learning curve when dealing with comprehensive capabilities

Microsoft Purview

Microsoft Purview delivers exceptional integration for organizations invested in the Microsoft ecosystem, automatically classifying and labeling data across SharePoint, OneDrive, and Microsoft 365 with customizable sensitivity labels and comprehensive compliance reporting.

Nightfall AI

Nightfall AI stands out for real-time detection capabilities across modern SaaS and generative AI applications, using advanced machine learning to prevent data exfiltration and secret sprawl in dynamic environments.

Other Notable Solutions

Forcepoint takes a behavior-based approach, combining context and user intent analysis to classify and protect data across cloud, network, and endpoints, though its comprehensive feature set requires substantial tuning and comes with a steeper learning curve.

Google Cloud DLP excels for teams pursuing cloud-first strategies within Google's environment, offering machine-learning content inspection that scales seamlessly but may be less comprehensive across broader SaaS portfolios.

Atlan functions as a collaborative data workspace emphasizing metadata management, automated tagging, and lineage analysis, seamlessly connecting with modern data stacks like Snowflake, BigQuery, and dbt.

Collibra Data Intelligence Cloud employs self-learning algorithms to uncover, tag, and govern both structured and unstructured data across multi-cloud environments, offering detailed reporting suited to enterprises requiring holistic data discovery with strict compliance oversight.

Informatica leverages AI to profile and classify data while providing end-to-end lineage visualization and analytics, ideal for large, distributed ecosystems demanding scalable data quality and governance.

Evaluation Criteria for Data Classification Tools

Selecting the right data classification tool requires careful assessment across several critical dimensions:

Classification Accuracy

The engine must reliably distinguish between genuine sensitive data and mock or test data to prevent false positives that create alert fatigue and waste security resources. Advanced solutions employ multiple techniques including pattern matching, proximity analysis, validation algorithms, and exact data matching to improve precision.
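
A small example of why layering validation on top of pattern matching matters (a sketch, not any vendor's engine): a bare 16-digit pattern happily flags mock data, while adding a Luhn checksum rejects numbers that cannot be real card numbers:

```python
import re

# Pattern matching alone flags test data like "1234 5678 9012 3456";
# a Luhn checksum pass rejects numbers that cannot be valid card numbers.

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    return [m.group() for m in CARD_PATTERN.finditer(text) if luhn_valid(m.group())]

print(find_card_numbers("test: 1234 5678 9012 3456, real-looking: 4111 1111 1111 1111"))
# ['4111 1111 1111 1111']
```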

Platform Coverage

The best solutions scan IaaS, PaaS, SaaS, and on-premise file shares without moving data from its original location, using metadata collection and in-environment scanning to maintain data sovereignty while delivering centralized governance. This architectural approach proves especially critical for organizations subject to strict data residency requirements.

Automation and Integration

Look for tools that automatically tag and label data based on classification results, integrate with native platform controls (such as Microsoft Purview labels or Snowflake masking policies), and trigger remediation workflows without manual intervention. The depth of integration with your existing technology stack determines how seamlessly classification insights translate into enforceable security policies.

Data Movement Tracking

Modern tools must monitor how sensitive assets flow between regions, migrate across environments (production to development), and feed into AI systems. This dynamic visibility enables security teams to detect risky data transfers before they result in compliance violations or unauthorized exposure.

Scalability and Performance

Evaluate whether the solution can handle your data volume without degrading scan performance or requiring excessive infrastructure resources. Consider the platform's ability to identify toxic combinations, correlating high-sensitivity data with overly permissive access controls to surface the most critical risks requiring immediate remediation.

Best Free Data Classification Tools

For organizations seeking to implement data classification without immediate budget allocation, two notable free options merit consideration:

Imperva Classifier: Data Classification Tool is available as a free download (requiring only email submission for installation access) and supports multiple operating systems including Windows, Mac, and Linux. It features over 250 built-in search rules for enterprise databases such as Oracle, Microsoft SQL, SAP Sybase, IBM DB2, and MySQL, making it a practical choice for quickly identifying sensitive data at risk across common database platforms.

Apache Atlas represents a robust open-source alternative originally developed for the Hadoop ecosystem. This enterprise-grade solution offers comprehensive metadata management with dedicated data classification capabilities, allowing organizations to tag and categorize data assets while supporting governance, compliance, and data lineage tracking needs.

While free tools offer genuine value, they typically require more in-house expertise for customization and maintenance, may lack advanced AI-powered classification engines, and often provide limited support for modern cloud and SaaS environments. For enterprises with complex, distributed data estates or strict compliance requirements, investing in a commercial solution often proves more cost-effective when factoring in total cost of ownership.

Making the Right Choice for Your Organization

Selecting among the best data classification tools requires aligning platform capabilities with your specific organizational context, data architecture, and security objectives. User reviews from January 2026 provide valuable insights into real-world performance across leading platforms.

When evaluating solutions, prioritize running proof-of-concept deployments against representative samples of your actual data estate. This hands-on testing reveals how well each platform handles your specific data types, integration requirements, and performance expectations. Develop a scoring framework that weights evaluation criteria according to your priorities, whether that's classification accuracy, automation capabilities, platform coverage, or integration depth with existing systems.

Consider your organization's trajectory alongside current needs. If AI adoption is accelerating, ensure your chosen platform can discover AI copilots, map their knowledge base access, and enforce granular behavioral guardrails on sensitive data. For organizations with complex multi-cloud environments, unified visibility without data movement becomes non-negotiable. Enterprises subject to strict compliance regimes should prioritize platforms with proven regulatory alignment and automated policy enforcement.

The data classification landscape in 2026 offers diverse solutions, from free and open-source options suitable for organizations with strong technical teams to comprehensive commercial platforms designed for petabyte-scale, AI-driven environments. By carefully evaluating your requirements against the strengths of leading platforms, you can select a solution that not only secures your current data estate but also enables confident adoption of AI technologies that drive competitive advantage.

<blogcta-big>

Read More
Yair Cohen
February 9, 2026
4
Min Read

DSPM vs DLP vs DDR: How to Architect a Data‑First Stack That Actually Stops Exfiltration

Many security stacks look impressive at first glance. There is a DLP agent on every endpoint, a CASB or SSE proxy watching SaaS traffic, EDR and SIEM for hosts and logs, and perhaps a handful of identity and access governance tools. Yet when a serious incident is investigated, it often turns out that sensitive data moved through a path nobody was really watching, or that multiple tools saw fragments of the story but never connected them.

The common thread is that most stacks were built around infrastructure, not data. They understand networks, workloads, and log lines, but they don’t share a single, consistent understanding of:

  • What your sensitive data is
  • Where it actually lives
  • Who and what can access it
  • How it moves across cloud, SaaS, and AI systems

To move beyond that, security leaders are converging on a data‑first architecture that brings together four capabilities: DSPM (Data Security Posture Management), DLP (Data Loss Prevention), DAG (Data Access Governance), and DDR (Data Detection & Response) in a unified model.

Clarifying the Roles

At the heart of this architecture is DSPM. DSPM is your data‑at‑rest intelligence layer. It continuously discovers data across clouds, SaaS, on‑prem, and AI pipelines, classifies it, and maps its posture: configurations, locations, access paths, and regulatory obligations. Instead of a static inventory, you get a living view of where sensitive data resides and how risky it is.

DLP sits at the edges of the system. Its job is to enforce policy on data in motion and in use: emails leaving the organization, files uploaded to the web, documents synced to endpoints, content copied into SaaS apps, or responses generated by AI tools. DLP decides whether to block, encrypt, quarantine, or simply log based on policies and the context it receives.

DAG bridges the gap between “what” and “who.” It’s responsible for least‑privilege access: understanding which human and machine identities can access which datasets, whether they really need that access, and what toxic combinations exist when sensitive data is exposed to broad groups or powerful service accounts.

DDR closes the loop. It monitors access to and movement of sensitive data in real time, looking for unusual or risky behavior: anomalous downloads, mass exports, unusual cross‑region copies, suspicious AI usage. When something looks wrong, DDR triggers detections, enriches them with data context, and kicks off remediation workflows.

When these four functions work together, you get a stack that doesn’t just warn you about potential issues; it actively reduces your exposure and stops exfiltration in motion.

Why “DSPM vs DLP” Is the Wrong Framing

It’s tempting to think of DSPM and DLP as competing answers to the same problem. In reality, they address different parts of the lifecycle. DSPM shows you what’s at risk and where; DLP controls how that risk can materialize as data moves.

Trying to use DLP as a discovery and classification engine is what leads to the noise and blind spots described in the previous section. Conversely, running DSPM without any enforcement at the edges leaves you with excellent visibility but too little control over where data can go.

DSPM and DAG reduce your attack surface; DLP and DDR reduce your blast radius. DSPM and DAG shrink the pool of exposed data and over‑privileged identities. DLP and DDR watch the edges and intervene when data starts to move in risky ways.

A Unified, Data‑First Reference Architecture

In a data‑first architecture, DSPM sits at the center, connected API‑first into cloud accounts, SaaS platforms, data warehouses, on‑prem file systems, and AI infrastructure. It continuously updates an inventory of data assets, understands which are sensitive or regulated, and applies labels and context that other tools can use.

On top of that, DAG analyzes which users, groups, service principals, and AI agents can access each dataset. Over‑privileged access is identified and remediated, sometimes automatically: by tightening IAM roles, restricting sharing, or revoking legacy permissions. The result is a sharp reduction in the number of places where a single identity can cause significant damage.

DLP then reads the labels and access context from DSPM and DAG instead of inferring everything from scratch. Email and endpoint DLP, cloud DLP via SSE/CASB, and even platform‑native solutions like Purview DLP all begin enforcing on the same sensitivity definitions and labels. Policies become more straightforward: “Block Highly Confidential outside the tenant,” “Encrypt PHI sent to external partners,” “Require justification for Customer‑Identifiable data leaving a certain region.”

DDR runs alongside this, monitoring how labeled data actually moves. It can see when a typically quiet user suddenly downloads thousands of PHI records, when a service account starts copying IP into a new data store, or when an AI tool begins interacting with a dataset marked off‑limits. Because DDR is fed by DSPM’s inventory and DAG’s access graph, detections are both higher fidelity and easier to interpret.
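
As a toy illustration of the DDR idea (identities, labels, and thresholds below are invented), a detection can be as simple as comparing today's access volume for labeled data against a per-identity baseline and enriching the alert with data context:

```python
# Compare today's access volume to a per-identity baseline and raise a
# detection enriched with data context. Thresholds and identities are made up.

BASELINE_DAILY_RECORDS = {"svc-reporting": 200, "jane.doe": 50}

def detect_anomalies(todays_activity, multiplier=10):
    """todays_activity: list of dicts {"identity", "dataset_label", "records"}."""
    detections = []
    for event in todays_activity:
        baseline = BASELINE_DAILY_RECORDS.get(event["identity"], 100)
        if event["records"] > baseline * multiplier and event["dataset_label"] in {"PHI", "PCI"}:
            detections.append({
                "identity": event["identity"],
                "label": event["dataset_label"],
                "records": event["records"],
                "reason": f"volume {event['records']} exceeds {multiplier}x baseline ({baseline})",
            })
    return detections

print(detect_anomalies([{"identity": "jane.doe", "dataset_label": "PHI", "records": 4200}]))
```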

From there, integration points into SIEM, SOAR, IAM/CIEM, ITSM, and AI gateways allow you to orchestrate end‑to‑end responses: open tickets, notify owners, roll back risky changes, block certain actions, or update policies.

Where Sentra Fits

Sentra’s product vision aligns directly with this data‑first model. Rather than treating DSPM, DAG, DDR, and DLP intelligence as separate products, Sentra brings them together into a single, cloud‑native data security platform.

That means you get:

  • DSPM that discovers and classifies data across cloud, SaaS, on‑prem, and AI
  • DAG that maps and rationalizes access to that data
  • DDR that monitors sensitive data in motion and detects threats
  • Integrations that feed this intelligence into DLP, SSE/CASB, Purview, EDR, and other controls

In other words, Sentra is positioned as the brain of the data‑first stack, giving DLP and the rest of your security stack the insight they need to actually stop exfiltration, not just report on it afterward.

<blogcta-big>

Read More
Yair Cohen
February 5, 2026
3
Min Read

OpenClaw (MoltBot): The AI Agent Security Crisis Enterprises Must Address Now

OpenClaw, previously known as MoltBot, isn't just another cybersecurity story - it's a wake-up call for every organization. With over 150,000 GitHub stars and more than 300,000 users in just two months, OpenClaw’s popularity signals a huge change: autonomous AI agents are spreading quickly and dramatically broadening the attack surface in businesses. This is far beyond the risks of a typical ChatGPT plugin or a staff member pasting data into a chatbot. These agents live on user machines and servers with shell-level access, file system privileges, live memory control, and broad integration abilities, usually outside IT or security’s purview.

Older perimeter and endpoint security tools weren’t built to find or control agents that can learn, store information, and act independently in all kinds of environments. As organizations face this shadow AI risk, the need for real-time, data-level visibility becomes critical. Enter Data Security Posture Management (DSPM): a way for enterprises to understand, monitor, and respond to the unique threats that OpenClaw and its next-generation kin pose.

What makes OpenClaw different - and uniquely dangerous - for security teams?

OpenClaw runs by setting up a local HTTP server and agent gateway on endpoints. It provides shell access, automates browsers, and links with over 50 messaging platforms. But what really sets it apart is how it combines these features with persistent memory. That means agents can remember actions and data far better than any script or bot before. Palo Alto Networks describes this as the 'lethal trifecta' - direct access to private data, exposure to untrusted content, and communication outside the organization - compounded here by persistent memory.

This risk isn't hypothetical. OpenClaw’s skill ecosystem functions like an unguarded software supply chain. Any third-party 'skill' a user adds to an agent can run with full privileges, opening doors to vulnerabilities that original developers can’t foresee. While earlier concerns focused on employees leaking information to public chatbots, tools like OpenClaw operate quietly at system level, often without IT noticing.

From theory to reality: OpenClaw exploitation is active and widespread

This threat is already real. OpenClaw’s design has exposed thousands of organizations to actual attacks. For instance, CVE-2026-25253 is a severe remote code execution flaw caused by a WebSocket validation error, with a CVSS score of 8.8. It lets attackers compromise an agent with a single click (critical OpenClaw vulnerability).

Attackers wasted no time. The ClawHavoc malware campaign, for example, distributed more than 341 malicious 'skills' through OpenClaw’s official marketplace, pushing info-stealers and RATs directly into vulnerable environments. Over 21,000 exposed OpenClaw instances have turned up on the public internet, often protected by nothing stronger than a weak password, or no authentication at all. Researchers even found plaintext password storage in the code. The risk is both immediate and persistent.

The shadow AI dimension: why you’re likely exposed

One of the trickiest parts of OpenClaw and MoltBot is how easily they run outside official oversight. Research shows that more than 22% of enterprise customers have found MoltBot operating without IT approval. Agents connect with personal messaging apps, making it easy for employees to use them on devices IT doesn’t manage, creating blind spots in endpoint management.

This reflects a bigger shift: 68% of employees now access free AI tools using personal accounts, and 57% still paste sensitive data into these services. The risks tied to shadow AI keep rising, and so does the cost of breaches: incidents involving unsanctioned AI tools now average $670,000 higher than those without. No wonder experts at Palo Alto, Straiker, Google Cloud, and Intruder strongly advise enterprises to block or at least closely watch OpenClaw deployments.

Why classic security tools are defenseless - and why DSPM is essential

Despite many advances in endpoint, identity, and network defense, these tools fall short against AI agents such as OpenClaw. Agents often run code with system privileges and communicate independently, sometimes over encrypted or unfamiliar channels. This blinds existing security tools to what internal agent 'skills' are doing or what data they touch and process. The attack surface now includes prompt injection through emails and documents, poisoning of agent memory, delayed attacks, and natural language input that bypasses static scans.

The missing link is visibility: understanding what data any AI agent - sanctioned or shadow - can access, process, or send out. Data Security Posture Management (DSPM) responds to this by mapping what data AI agents can reach, tracing sensitive data to and from agents everywhere they run. Newer DSPM features such as real-time risk scoring, shadow AI discovery, and detailed flow tracking help organizations see and control risks from AI agents at the data layer (Sentra DSPM for AI agent security).

Immediate enterprise action plan: detection, mapping, and control

Security teams need to move quickly. Start by scanning for OpenClaw, MoltBot, and other shadow AI agents across endpoints, networks, and SaaS apps. Once you know where agents are, check which sensitive data they can access by using DSPM tools with AI agent awareness, such as those from Sentra (Sentra’s AI asset discovery). Treat unauthorized installations as active security incidents: reset credentials, investigate activity, and prevent agents from running on your systems following expert recommendations.
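
As a rough, hedged starting point for endpoint discovery, the sketch below inventories running processes whose names or command lines match agent-related patterns; the pattern list is hypothetical and would need tuning, and a real program would combine this with network, SaaS, and DSPM signals:

```python
import psutil  # pip install psutil

# Inventory running processes whose names or command lines match patterns
# associated with local AI agents. The pattern list is hypothetical - tune it
# to the agents you actually care about.

SUSPECT_PATTERNS = ("openclaw", "moltbot")  # illustrative substrings only

def find_suspect_agents():
    hits = []
    for proc in psutil.process_iter(attrs=["pid", "name", "cmdline"]):
        haystack = " ".join(
            [proc.info.get("name") or ""] + (proc.info.get("cmdline") or [])
        ).lower()
        if any(pattern in haystack for pattern in SUSPECT_PATTERNS):
            hits.append(proc.info)
    return hits

if __name__ == "__main__":
    for hit in find_suspect_agents():
        print(hit)
```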

For long-term defense, add continuous shadow AI tracking to your operations. Let DSPM keep your data inventory current, trace possible leaks, and set the right controls for every workflow involving AI. Sentra gives you a single place to find all agent activity, see your actual AI data exposure, and take fast, business-aware action.

Conclusion

OpenClaw is simply the first sign of what will soon be a string of AI agent-driven security problems for enterprises. As companies use AI more to boost productivity and automate work, the chance of unsanctioned agents acting with growing privileges and integrations will continue to rise. Gartner expects that by 2028, one in four cyber incidents will stem from AI agent misuse - and attacks have already started to appear in the news.

Success with AI is no longer about whether you use agents like OpenClaw; it’s about controlling how far they reach and what they can do. Old-school defenses can’t keep up with how quickly shadow AI spreads. Only data-focused security, with total AI agent discovery, risk mapping, and ongoing monitoring, can provide the clarity and controls needed for this new world. Sentra's DSPM platform offers precisely that. Take the first steps now: identify your shadow AI risks, map out where your data can go, and make AI agent security a top priority.

<blogcta-big>

Read More
David Stuart
January 28, 2026
3
Min Read

Data Privacy Day: Why Discovery Isn’t Enough

Data Privacy Day is a good reminder for all of us in the tech world: finding sensitive data is only the first step. But in today’s environment, data is constantly moving - across cloud platforms, SaaS applications, and AI workflows. The challenge isn’t just knowing where your sensitive data lives; it’s also understanding who or what can touch it, whether that access is still appropriate, and how it changes as systems evolve.

I’ve seen firsthand that privacy breaks down not because organizations don’t care, but because access decisions are often disconnected from how data is actually being used. You can have the best policies on paper, but if they aren’t continuously enforced, they quickly become irrelevant.

Discovery is Just the Beginning

Most organizations start with data discovery. They run scans, identify sensitive files, and map out where data lives. That’s an important first step, and it’s necessary, but it’s far from sufficient. Data is not static. It moves, it gets copied, it’s accessed by humans and machines alike. Without continuously governing that access, all the discovery work in the world won’t stop privacy incidents from happening.

The next step, and the one that matters most today, is real-time governance. That means understanding and controlling access as it happens. 

Who can touch this data? Why do they have access? Is it still needed? And crucially, how do these permissions evolve as your environment changes?

Take, for example, a contractor who needs temporary access to sensitive customer data. Or an AI workflow that processes internal HR information. If those access rights aren’t continuously reviewed and enforced, a small oversight can quickly become a significant privacy risk.

Privacy in an AI and Automation Era

AI and automation are changing the way we work with data, but they also change the privacy equation. Automated processes can move and use data in ways that are difficult to monitor manually. AI models can generate insights using sensitive information without us even realizing it. This isn’t a hypothetical scenario, it’s happening right now in organizations of all sizes.

That’s why privacy cannot be treated as a once-a-year exercise or a checkbox in an audit report. It has to be embedded into daily operations, into the way data is accessed, used, and monitored. Organizations that get this right build systems that automatically enforce policies and flag unusual access - before it becomes a problem.

Beyond Compliance: Continuous Responsibility

The companies that succeed in protecting sensitive data are those that treat privacy as a continuous responsibility, not a regulatory obligation. They don’t wait for audits or compliance reviews to take action. Instead, they embed privacy into how data is accessed, shared, and used across the organization.

This approach delivers real results. It reduces risk by catching misconfigurations before they escalate. It allows teams to work confidently with data, knowing that sensitive information is protected. And it builds trust - both internally and with customers because people know their data is being handled responsibly.

A New Mindset for Data Privacy Day

So this Data Privacy Day, I challenge organizations to think differently. The question is no longer “Do we know where our sensitive data is?” Instead, ask:

“Are we actively governing who can touch our data, every moment, everywhere it goes?”

In a world where cloud platforms, AI systems, and automated workflows touch nearly every piece of data, privacy isn’t a one-time project. It’s a continuous practice, a mindset, and a responsibility that needs to be enforced in real time.

Organizations that adopt this mindset don’t just meet compliance requirements, they gain a competitive advantage. They earn trust, strengthen security, and maintain a dynamic posture that adapts as systems and access needs evolve.

Because at the end of the day, true privacy isn’t something you achieve once a year. It’s something you maintain every day, in every process, with every decision. This Data Privacy Day, let’s commit to moving beyond discovery and audits, and make continuous data privacy the standard.

<blogcta-big>

Read More
David Stuart
Nikki Ralston
January 27, 2026
3
Min Read

DSPM Dirty Little Secrets: What Vendors Don’t Want You to Test

Discover What DSPM Vendors Try to Hide

Your goal in running a data security/DSPM POV is to evaluate all important performance and cost parameters so you can make the best decision and avoid unpleasant surprises. Vendors, on the other hand, are looking for a ‘quick win’ and will often suggest shortcuts like using a limited test data set and copying your data to their environment.

 On the surface this might sound like a reasonable approach, but if you don’t test real data types and volumes in your own environment, the POV process may hide costly failures or compliance violations that will quickly become apparent in production. A recent evaluation of Sentra versus another top emerging DSPM exposed how the other solution’s performance dropped and costs skyrocketed when deployed at petabyte scale. Worse, the emerging DSPM removed data from the customer environment - a clear controls violation.

If you want to run a successful POV and avoid DSPM buyers' remorse you need to look out for these "dirty little secrets".

Dirty Little Secret #1:
‘Start small’ can mean ‘fails at scale’

The biggest 'dirty secret' is that scalability limits are hidden behind the 'start small' suggestion. Many DSPM platforms cannot scale to modern petabyte-sized data environments. Vendors try to conceal this architectural weakness by encouraging small, tightly scoped POVs that never stress the system and create false confidence. Upon broad deployment, this weakness is quickly exposed as scans slow and refresh cycles stretch, forcing teams to drastically reduce scope or frequency. This failure is fundamentally architectural - the platforms lack parallel orchestration and elastic execution - proving that the 'start small' advice was a deliberate tactic to avoid exposing an inevitable bottleneck. In a recent POV, Sentra successfully scanned 10x more data in approximately the same time as the alternative:

Dirty Little Secret #2:
High cloud cost breaks continuous security

Another reason some vendors try to limit the scale of POVs is to hide the real cloud cost of running them in production. They often use brute-force scanning that reads excessive data, consumes massive compute resources, and is architecturally inefficient. This is easy to mask during short, limited POVs, but quickly drives up cloud bills in production. The resulting cost pressure forces organizations to reduce scan frequency and scope, quietly shifting the platform from continuous security control to periodic inventory. Ultimately, tools that cannot scale scanners efficiently on demand - or that must scan infrequently to stay affordable - trade essential security for cost, proving they are only affordable when they are not fully utilized. In a recent POV run on 100 petabytes of data, Sentra proved to be 10x more operationally cost effective to run:

Dirty Little Secret #3:
‘Good enough’ accuracy degrades security

Accuracy is fundamental to Data Security Posture Management (DSPM) and should not be compromised. While a few points' difference may not seem like a deal breaker, every percentage point of classification accuracy can dramatically affect all downstream security controls. Costs increase as manual intervention is required to address false positives. When organizations automate controls based on these inaccuracies, the DSPM platform becomes a source of risk. Confidence is lost. The secret is kept safe because the POV never validates the platform's accuracy against known sensitive data.

In a recent POV Sentra was able to prove less than one percent rate of false positives and false negatives:

DSPM POV Red Flags 

  • Copy data to the vendor environment for a “quick win”
  • Limit features or capabilities to simplify testing
  • Artificially reduce the size of scanned data
  • Restrict integrations to avoid “complications”
  • Limit or avoid API usage

These shortcuts don’t make a POV easier - they make it misleading.

Four DSPM POV Requirements That Expose the Truth

If you want a DSPM POV that reflects production reality, insist on these requirements:

1. Scalability

Run discovery and classification on at least 1 petabyte of real data, including unstructured object storage. Completion time must be measured in hours or days - not weeks.

2. Cost Efficiency

Operate scans continuously at scale and measure actual cloud resource consumption. If cost forces reduced frequency or scope, the model is unsustainable.

3. Accuracy

Validate results against known sensitive data. Measure false positives and false negatives explicitly. Accuracy must be quantified and repeatable.
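
One way to make this requirement concrete during a POV is to seed known sensitive files and score the tool's findings against them; the file names below are hypothetical, but the point is that accuracy becomes a measurable number rather than an impression:

```python
# Compare the tool's findings against a seeded ground-truth set and report
# false positives / negatives plus precision and recall.

def accuracy_report(ground_truth_sensitive, tool_flagged, all_files):
    tp = ground_truth_sensitive & tool_flagged
    fp = tool_flagged - ground_truth_sensitive
    fn = ground_truth_sensitive - tool_flagged
    precision = len(tp) / len(tool_flagged) if tool_flagged else 0.0
    recall = len(tp) / len(ground_truth_sensitive) if ground_truth_sensitive else 0.0
    return {
        "false_positives": len(fp),
        "false_negatives": len(fn),
        "precision": round(precision, 3),
        "recall": round(recall, 3),
        "fp_rate_of_all_files": round(len(fp) / len(all_files), 4) if all_files else 0.0,
    }

truth = {"s3://pov/seeded_phi_01.csv", "s3://pov/seeded_pci_02.json"}
flagged = {"s3://pov/seeded_phi_01.csv", "s3://pov/mock_data.csv"}
print(accuracy_report(truth, flagged, all_files={*truth, *flagged, "s3://pov/readme.txt"}))
```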

4. Unstructured Data Depth

Test long-form, heterogeneous, real-world unstructured data including audio, video, etc. Classification must demonstrate contextual understanding, not just keyword matches.

A DSPM solution that only performs well in a limited POV will lead to painful, costly buyer’s regret. Once in production, the failures in scalability, cost efficiency, accuracy, and unstructured data depth quickly become apparent.

Getting ready to run a DSPM POV? Schedule a demo.

<blogcta-big>

Read More
David Stuart
January 27, 2026
4
Min Read

DSPM for Modern Fintech: From Masking to AI-Aware Data Protection

Fintech leaders, from digital-first banks to API-driven investment platforms, face a major data dilemma today. With cloud-native architectures, real-time analytics, and the rapid integration of AI, the scale, speed, and complexity of sensitive data have skyrocketed. Fintech platforms are quickly surpassing what legacy Data Loss Prevention (DLP) and Data Security Posture Management (DSPM) tools can handle.

Why? Fintech companies now need more than surface-level safeguards. They require true depth: AI-driven data classification, dynamic masking, and fluid integrations across a massive tech stack that includes Snowflake, AWS Bedrock, and Microsoft 365. Below, we look at why DSPM in financial services is at a defining moment, what recurring pain points exist with traditional, and even many emerging, tools, and how Sentra is reimagining what the modern data protection stack should deliver.

The Pitfalls of Legacy DLP and Early DSPM in Fintech

Legacy DLP wasn’t built for fintech’s speed or expanding data footprint. These tools focus on rigid rules and tight boundaries, which aren’t equipped to handle petabyte-scale, multi-cloud, or AI-powered environments. Early DSPM tools brought some improvements in visibility, but problems persisted: incomplete data discovery, basic classification, lots of manual steps, and limited support for dynamic masking.

For fintech companies, this creates mounting regulatory risk as compliance pressures rise, and slow, manual processes lead to both security and operational headaches. Teams waste hours juggling alerts and trying to piece together patchwork fixes, often resorting to clunky add-on masking tools. The cost is obvious: a scattered protection strategy, long breach response times, and constant exposure to regulatory issues - especially as environments get more distributed and complex.

Why "Good Enough" DSPM Isn’t Enough Anymore

Change in fintech moves faster than ever, and the DSPM market for financial services is growing at breakneck speed. But as financial applications get more sophisticated, and with cloud and AI adoption soaring, the old "good enough" DSPM falls short. Sensitive data is everywhere now: 82% of breaches happen in the cloud, with 39% stretching across multi-cloud or hybrid setups, according to The Future of Data Security: Why DSPM is Here to Stay. Enterprise data is set to exceed 181 zettabytes by 2025, raising the stakes for automation, real-time classification, and tight integration with core infrastructure.

AI and automation are no longer optional. To effectively reduce risk and keep compliance manageable and truly auditable, DSPM systems need to automate classification, masking, remediation, and reporting as a central part of operations, not as last-minute additions.

Where Most DSPM Solutions Fall Short

Fintech organizations often struggle to scale legacy or early DSPM and DLP products, especially those similar to emerging DSPM or large CNAPP vendors. These tools might offer broad control and AI-powered classification, but they usually require too much manual orchestration to achieve full remediation, only automate certain pieces of the workflow, and rely on separate masking add-ons.

That leads to gaps in AI and multi-cloud data context, choppy visibility, and much of the workflow stuck in manual gear, a recipe for persistent exposure of sensitive data, especially in fast-moving fintech environments.

Fintech buyers, especially those scaling quickly, also point to a crucial need: ensuring DSPM tools natively and deeply support platforms like Snowflake, AWS Bedrock, and Macie. They want automated, business-driven policy enforcement without constantly babysitting the system.

Sentra’s Next-Gen DSPM: AI-Native, Masking-Aware, and Stack-Integrated for Fintech

Sentra was created with these modern fintech challenges in mind. It offers real-time, continuous, agentless classification and deep context for cloud, SaaS, and AI-powered environments.

What makes Sentra different?

  • Petabyte-scale agentless discovery: Always-on, friction-free classification, with no heavy infrastructure or manual tweaks.
  • AI-native contextualization: Pinpoints sensitive data at a business level and connects instantly with masking policies across Snowflake, Microsoft Purview, and more.
  • Automation-driven compliance: Handles everything from discovery to masking to changing permissions, with clear, auditable reporting.
  • Integrated for modern stacks: Ready-made, with out-of-the-box connections for Snowflake, Bedrock, Microsoft 365, and the wider AWS/fintech ecosystem.

More and more fintech companies are switching to Sentra DSPM to achieve true cross-cloud visibility and meet regulations without slowing down. By plugging into fintech data flows and covering AI model pipelines, Sentra lets organizations use DSPM with the same speed as their business.

Building a Future-Ready DSPM Strategy in Financial Services

Managing and protecting sensitive data is a competitive edge for fintech, not just a security concern. With compliance rising up the agenda - 84% of IT and security leaders now list it as a top driver - your DSPM investments need to focus on automation, consistent visibility, and enforceable policies throughout your architecture.

Next-gen DSPM means: less busywork, no more juggling between masking and classification tools, and instant, actionable insight into data risk, wherever your information lives. In other words, you spend less time firefighting, move faster, and can assure partners and customers that their data is in good hands.

See How SoFi

Request a demo and technical assessment to discover how Sentra’s AI-aware DSPM can speed up both your compliance and your innovation.

Conclusion

Legacy data protection simply can’t keep up with the size, complexity, and regulatory demands of financial data today. DSPM is now table stakes - as long as it’s automated, built with AI at its core, and actively reduces risk in real time, not just points it out.

Sentra helps you move forward confidently: always-on, agentless classification, automated fixes and masking, and deep stack integration designed for the most complex fintech systems. As you build the future of financial services, your DSPM should make it easier to stay compliant, agile, and protected - no matter how quickly your technology changes.

<blogcta-big>

Read More
Ariel Rimon
Ariel Rimon
January 21, 2026
4
Min Read

Cloud Security 101: Essential Tips and Best Practices

Cloud Security 101: Essential Tips and Best Practices

Cloud security in 2026 is about protecting sensitive data, identities, and workloads across increasingly complex cloud and multi-cloud environments. As organizations continue moving critical systems to the cloud, security challenges have shifted from basic perimeter defenses to visibility gaps, identity risk, misconfigurations, and compliance pressure. Following proven cloud security best practices helps organizations reduce risk, prevent data exposure, and maintain continuous compliance as cloud environments scale and evolve.

Cloud Security 101

At its core, cloud security aims to protect the confidentiality, integrity, and availability of data and services hosted in cloud environments. This requires a clear grasp of the shared responsibility model, where cloud providers secure the underlying physical infrastructure and core services, while customers remain responsible for configuring settings, protecting data and applications, and managing user access.

Understanding how different service models affect your level of control is crucial:

  • Software as a Service (SaaS): Provider manages most security controls; you manage user access and data
  • Platform as a Service (PaaS): Shared responsibility for application security and data protection
  • Infrastructure as a Service (IaaS): You control most security configurations, from OS to applications

Modern cloud security demands cloud-native strategies and automation. Leveraging tools like infrastructure as code, Cloud Security Posture Management (CSPM), and Cloud Workload Protection Platforms helps organizations keep pace with the dynamic, scalable nature of cloud environments. Integrating security into the development process through a "shift left" approach enables teams to detect and remediate vulnerabilities early, before they reach production.

Cloud Security Tips for Beginners

For those new to cloud security, starting with foundational practices builds a strong defense against common threats.

Control Access with Strong Identity Management

  • Use multi-factor authentication on every login to add an extra layer of security
  • Apply the principle of least privilege by granting users and applications only the permissions they need
  • Implement role-based access control across your cloud environment
  • Regularly review and audit identity and access policies
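
As a concrete illustration of least privilege, the sketch below attaches a narrowly scoped inline policy to a role using boto3. The role name, policy name, and bucket ARN are hypothetical; the point is that the role can read one reports bucket and nothing else.

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical least-privilege policy: read-only access to a single reports bucket.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

# Attach the scoped-down inline policy instead of a broad managed policy.
iam.put_role_policy(
    RoleName="analytics-readonly",
    PolicyName="reports-bucket-read-only",
    PolicyDocument=json.dumps(least_privilege_policy),
)
```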

Secure Your Cloud Configurations

Regularly audit your cloud settings and use automated tools like CSPM to continuously scan for misconfigurations and risky exposures. Protecting sensitive data requires encrypting information both at rest and in transit using strong standards such as AES-256, ensuring that even if data is intercepted, it remains unreadable. Follow proper key management practices by regularly rotating keys and avoiding hard-coded credentials.
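
For teams that want to see what an automated misconfiguration check looks like in practice, here is a minimal CSPM-style sketch that lists S3 buckets with no default server-side encryption configured. It assumes AWS credentials are already set up; newer buckets are encrypted by default, so this particular check matters mainly for older accounts and is shown only as an illustration of continuous configuration scanning.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def unencrypted_buckets():
    """Return buckets with no default server-side encryption configuration."""
    missing = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                missing.append(name)
            else:
                raise
    return missing

for name in unencrypted_buckets():
    print(f"Bucket without default encryption: {name}")
```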

Monitor and Detect Threats Continuously

  • Consolidate logs from all cloud services into a centralized system
  • Set up real-time monitoring with automated alerts to quickly identify unusual behavior
  • Employ behavioral analytics and threat detection tools to continuously assess your security posture
  • Develop, document, and regularly test an incident response plan
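
A simple way to picture behavioral detection on centralized logs: compare each identity's latest activity against its own baseline and alert on large spikes. The records, field names, and threshold below are illustrative only.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical, already-centralized access records: (day, principal, sensitive_reads)
access_log = [
    ("2026-01-10", "svc-reporting", 120),
    ("2026-01-11", "svc-reporting", 130),
    ("2026-01-12", "svc-reporting", 2400),   # sudden spike worth an alert
    ("2026-01-10", "alice@example.com", 4),
    ("2026-01-11", "alice@example.com", 6),
    ("2026-01-12", "alice@example.com", 5),
]

history = defaultdict(list)
for day, principal, reads in access_log:
    history[principal].append(reads)

SPIKE_FACTOR = 3  # arbitrary threshold for this sketch

for principal, reads in history.items():
    baseline = mean(reads[:-1]) if len(reads) > 1 else reads[-1]
    latest = reads[-1]
    if baseline and latest > SPIKE_FACTOR * baseline:
        print(f"ALERT: {principal} read {latest} sensitive objects (baseline ~{baseline:.0f})")
```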

Security Considerations in Cloud Computing

Before adopting or expanding cloud computing, organizations must evaluate several critical security aspects. First, clearly define which security controls fall under the provider's responsibility versus your own. Review contractual commitments, service level agreements, and compliance with data privacy regulations to ensure data sovereignty and legal requirements are met.

Data protection throughout its lifecycle is paramount. Evaluate how data is collected, stored, transmitted, and protected with strong encryption both in transit and at rest. Establish robust identity and access controls, including multi-factor authentication and role-based access, to guard against unauthorized access.

Conducting a thorough pre-migration security assessment is essential:

  • Inventory workloads and classify data sensitivity
  • Map dependencies and simulate attack vectors
  • Deploy CSPM tools to continuously monitor configurations
  • Apply Zero Trust principles—always verify before granting access

Finally, evaluate the provider's internal security measures such as vulnerability management, routine patching, security monitoring, and incident response capabilities. Ensure that both the provider's and your organization's incident response and disaster recovery plans are coordinated, guaranteeing business continuity during security events.

Cloud Security Policies

Organizations should implement a comprehensive set of cloud security policies that cover every stage of data and workload protection.

Policy Type | Key Requirements
Data Protection & Encryption | Classify data (public, internal, confidential, sensitive) and enforce encryption standards for data at rest and in transit; define key management practices
Access Control & Identity Management | Implement role-based access controls, enforce multi-factor authentication, and regularly review permissions to prevent unauthorized access
Incident Response & Reporting | Establish formal processes to detect, analyze, contain, and remediate security incidents with clearly defined procedures and communication guidelines
Network Security | Define secure architectures including firewalls, VPNs, and native cloud security tools; restrict and monitor network traffic to limit lateral movement
Disaster Recovery & Business Continuity | Develop strategies for rapid service restoration including regular backups, clearly defined roles, and continuous testing of recovery plans
Governance, Compliance & Auditing | Define program scope, specify roles and responsibilities, and incorporate continuous assessments using CSPM tools to enforce regulatory compliance

Cloud Computing and Cyber Security

Cloud computing fundamentally shifts cybersecurity away from protecting a single, static perimeter toward securing a dynamic, distributed environment. Traditional practices that once focused on on-premises defenses, like firewalls and isolated data centers, must now adapt to an infrastructure where applications and data are continuously deployed and managed across multiple platforms.

Security responsibilities are now shared between cloud providers and client organizations. Providers secure the core physical and virtual components, while clients must focus on configuring services effectively, managing identity and access, and monitoring for vulnerabilities. This dual responsibility model demands clear communication and proactive management to prevent issues like misconfigurations or exposure of sensitive data.

The cloud's inherent flexibility and rapid scaling require automated and adaptive security measures. Traditional manual monitoring can no longer keep pace with the speed at which applications and resources are provisioned or updated. Organizations are increasingly relying on AI-driven monitoring, multi-factor authentication, machine learning, and other advanced techniques to continuously detect and remediate threats in real time.

Cloud environments expand the attack surface by eliminating the traditional network boundary. With data distributed across multiple redundant sites and accessed via numerous APIs, new vulnerabilities emerge that require robust identity- and data-centric protections. Security measures must now encompass everything from strict encryption and access controls to comprehensive logging and incident response strategies that address the unique risks of multi-tenant and distributed architectures. For additional insights on protecting your cloud data, visit our guide on cloud data protection.

Securing Your Cloud Environment with AI-Ready Data Governance

As enterprises increasingly adopt AI technologies in 2026, securing sensitive data while maintaining complete visibility and control has become a critical challenge. Sentra's cloud-native data security platform addresses these challenges by delivering AI-ready data governance and compliance at petabyte scale. Unlike traditional approaches that require data to leave your environment, Sentra discovers and governs sensitive data inside your own infrastructure, ensuring data never leaves your control.

Cost Savings: By eliminating shadow and redundant, obsolete, or trivial (ROT) data, Sentra not only secures your organization for the AI era but also typically reduces cloud storage costs by approximately 20%.

The platform enforces strict data-driven guardrails while providing complete visibility into your data landscape, where sensitive data lives, how it moves, and who can access it. This "in-environment" architecture replaces opaque data sprawls with a regulator-friendly system that maps data movement and prevents unauthorized AI access, enabling enterprises to confidently adopt AI technologies without compromising security or compliance.

Implementing effective cloud security tips requires a holistic approach that combines foundational practices with advanced strategies tailored to your organization's unique needs. From understanding the shared responsibility model and securing configurations to implementing robust access controls and continuous monitoring, each element plays a vital role in protecting your cloud environment. As we move further into 2026, the integration of AI-driven security tools, automated governance, and comprehensive data protection measures will continue to define successful cloud security programs. By following these cloud security tips and maintaining a proactive, adaptive security posture, organizations can confidently leverage the benefits of cloud computing while minimizing risk and ensuring compliance with evolving regulatory requirements.

<blogcta-big>

Read More
Romi Minin
Romi Minin
Nikki Ralston
Nikki Ralston
January 20, 2026
4
Min Read

How to Choose a Data Access Governance Tool

How to Choose a Data Access Governance Tool

Introduction: Why Data Access Governance Is Harder Than It Should Be

Data access governance should be simple: know where your sensitive data lives, understand who has access to it, and reduce risk without breaking business workflows. In practice, it’s rarely that straightforward. Modern organizations operate across cloud data stores, SaaS applications, AI pipelines, and hybrid environments. Data moves constantly, permissions accumulate over time, and visibility quickly degrades. Many teams turn to data access governance tools expecting clarity, only to find legacy platforms that are difficult to deploy, noisy, or poorly suited for dynamic, fast-proliferating cloud environments.

A modern data access governance tool should provide continuous visibility into who and what can access sensitive data across cloud and SaaS environments, and help teams reduce overexposure safely and incrementally.

What Organizations Actually Need from Data Access Governance

Before evaluating vendors, it’s important to align on outcomes, not just features. Most teams are trying to solve the same core problems:

  • Unified visibility across cloud data stores, SaaS platforms, and hybrid environments
  • Clear answers to “which identities have access to what, and why?”
  • Risk-based prioritization instead of long, unmanageable lists of permissions
  • Safe remediation that tightens access without disrupting workflows

Tools that focus only on periodic access reviews or static policies often fall short in dynamic environments where data and permissions change constantly.

Why Legacy and Over-Engineered Tools Fall Short

Many traditional data governance and IGA tools were designed for on-prem environments and slower change cycles. In cloud and SaaS environments, these tools often struggle with:

  • Long deployment timelines and heavy professional services requirements
  • Excessive alert noise without clear guidance on what to fix first
  • Manual access certifications that don’t scale
  • Limited visibility into modern SaaS and cloud-native data stores

Overly complex platforms can leave teams spending more time managing the tool than reducing actual data risk.

Key Capabilities to Look for in a Modern Data Access Governance Tool

1. Continuous Data Discovery and Classification

A strong foundation starts with knowing where sensitive data lives. Modern tools should continuously discover and classify data across cloud, SaaS, and hybrid environments using automated techniques, not one-time scans.

2. Access Mapping and Exposure Analysis

Understanding data sensitivity alone isn’t enough. Tools should map access across users, roles, applications, and service accounts to show how sensitive data is actually exposed.

3. Risk-Based Prioritization

Not all exposure is equal. Effective platforms correlate data sensitivity with access scope and usage patterns to surface the highest-risk scenarios first, helping teams focus remediation where it matters most.
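
One way to picture risk-based prioritization is a simple scoring function that combines sensitivity, access scope, and exposure signals. The weights, fields, and findings below are invented for illustration; real platforms derive them from classification, identity analysis, and usage data.

```python
from dataclasses import dataclass

SENSITIVITY_WEIGHT = {"public": 0, "internal": 1, "confidential": 3, "regulated": 5}

@dataclass
class Exposure:
    data_store: str
    sensitivity: str            # classification label
    identities_with_access: int
    externally_shared: bool
    accessed_last_90_days: bool

def risk_score(e: Exposure) -> float:
    score = SENSITIVITY_WEIGHT[e.sensitivity]
    score *= 1 + e.identities_with_access / 50   # broad access scope amplifies risk
    if e.externally_shared:
        score *= 2                               # external exposure doubles priority
    if not e.accessed_last_90_days:
        score *= 1.5                             # stale-but-exposed data is easily forgotten
    return round(score, 1)

findings = [
    Exposure("s3://hr-exports", "regulated", 180, True, False),
    Exposure("snowflake.CRM.CONTACTS", "confidential", 25, False, True),
    Exposure("gdrive:/marketing", "internal", 400, True, True),
]

for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{risk_score(f):>6}  {f.data_store}")
```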

4. Low-Friction Deployment

Look for platforms that minimize operational overhead:

  • Agentless or lightweight deployment models
  • Fast time-to-value
  • Minimal disruption to existing workflows

5. Actionable Remediation Workflows

Visibility without action creates frustration. The right tool should support guided remediation, tightening access incrementally and safely rather than enforcing broad, disruptive changes.

How Teams Are Solving This Today

Security teams that succeed tend to adopt platforms that combine data discovery, access analysis, and real-time risk detection in a single workflow rather than stitching together multiple legacy tools. For example, platforms like Sentra focus on correlating data sensitivity with who or what can actually access it, making it easier to identify over-permissioned data, toxic access combinations, and risky data flows, without breaking existing workflows or requiring intrusive agents.

The common thread isn’t the tool itself, but the ability to answer one question continuously:

“Who can access our most sensitive data right now, and should they?”

Teams using these approaches often see faster time-to-value and more actionable insights compared to legacy systems.

Common Gotchas to Watch Out For

When evaluating tools, buyers often overlook a few critical issues:

  • Hidden costs for deployment, tuning, or ongoing services
  • Tools that surface risk but don’t help remediate it
  • Point-in-time scans that miss rapidly changing environments
  • Weak integration with identity systems, cloud platforms, and SaaS apps

Asking vendors how they handle these scenarios during a pilot can prevent surprises later.
Download The Dirt on DSPM POVs: What Vendors Don’t Want You to Know

How to Run a Successful Pilot

A focused pilot is the best way to evaluate real-world effectiveness:

  1. Start with one or two high-risk data stores
  2. Measure signal-to-noise, not alert volume
  3. Validate that remediation steps work with real teams and workflows
  4. Assess how quickly the tool delivers actionable insights

The goal is to prove reduced risk, not just improved reporting.

Final Takeaway: Visibility First, Enforcement Second

Effective data access governance starts with visibility. Organizations that succeed focus first on understanding where sensitive data lives and how it’s exposed, then apply controls gradually and intelligently. Combining DAG with DSPM is an effective way to achieve this.

In 2026, the most effective data access governance tools are continuous, risk-driven, and cloud-native, helping security teams reduce exposure without slowing the business down.

Frequently Asked Questions (FAQs)

What is data access governance?

Data access governance is the practice of managing and monitoring who can access sensitive data, ensuring access aligns with business needs and security requirements.

How is data access governance different from IAM?

IAM focuses on identities and permissions. Data access governance connects those permissions to actual data sensitivity and exposure, and alerts when violations occur.

How do organizations reduce over-permissioned access safely?

By using risk-based prioritization and incremental remediation instead of broad access revocations.

What should teams look for in a modern data access governance tool?

This question comes up frequently in real-world evaluations, including Reddit discussions where teams share what’s worked and what hasn’t. Teams should prioritize tools that give fast visibility into who can access sensitive data, provide context-aware insights, and allow incremental, safe remediation - all without breaking workflows or adding heavy operational overhead. Cloud- and SaaS-aware platforms tend to outperform legacy or overly complex solutions.

<blogcta-big>

Read More
Yair Cohen
Yair Cohen
Nikki Ralston
Nikki Ralston
January 19, 2026
3
Min Read

One Platform to Secure All Data: Moving from Data Discovery to Full Data Access Governance

One Platform to Secure All Data: Moving from Data Discovery to Full Data Access Governance

The cloud has changed how organizations approach data security and compliance. Security leaders have mostly figured out where their sensitive data is, thanks to data security posture management (DSPM) tools. But that's just the beginning. Who can access your data? What are they doing with it?

Workloads and sensitive assets now move across multi-cloud, hybrid, and SaaS environments, increasing the need for control over access and use. Regulators, boards, and customers expect more than just awareness. They want real proof that you are governing access, lowering risk, and keeping cloud data secure. The next priority is here: shifting from just knowing what data you have to actually governing access to it. Sentra provides a unified platform designed for this shift.

Why Discovery Alone Falls Short in the Cloud Era

DSPM solutions make it possible to locate, classify, and monitor sensitive data almost anywhere, from databases to SaaS apps. This visibility is valuable, particularly as organizations manage more data than ever. Over half of enterprises have trouble mapping their full data environment, and 85% experienced a data loss event in the past year.

But simply seeing your data won’t do the job. DSPM can point out risks, like unencrypted data or exposed repositories, but it usually can’t control access or enforce policies in real time. Cloud environments change too quickly for static snapshots and scheduled reviews. Effective security means not only seeing your data but actively controlling who can reach it and what they can do.

Data Access Governance: The New Frontier for Cloud Data Security

Data Access Governance (DAG) covers processes and tools that constantly monitor, control, and audit who can access your data, how, and when, wherever it lives in the cloud.

Why does DAG matter so much now? Consider some urgent needs:

  • Compliance and Auditability: 82% of organizations rank compliance as their top cloud concern. Data access controls and real-time audit logs make it possible to demonstrate compliance with GDPR, HIPAA, and other data laws.
  • Risk Reduction: Cloud environments change constantly, so outdated access policies quickly become a problem. DAG enforces least-privilege access, supports just-in-time permissions, and lets teams quickly respond to risky activity.
  • AI and New Threats: As generative AI becomes more common, concerns about misuse and unsupervised data access are growing. Forty percent of organizations now see AI as a data leak risk.

DAG gives organizations a current view of “who has access to my data right now?” for both employees and AI agents, and allows immediate changes if permissions or risks shift.

The Power of a Unified, Agentless Platform for DSPM and DAG

Why should security teams look for a unified platform instead of another narrow tool? Most large companies use several clouds, with 83% managing more than one, but only 34% have unified compliance. Legacy tools focused on discovery or single clouds aren’t enough.

Sentra’s agentless, multi-cloud solution meets these needs directly. With nothing extra to install or maintain, Sentra provides:

  • Automated discovery and classification of data in AWS, Azure, GCP, and SaaS
  • Real-time mapping and management of every access, from users to services and APIs
  • Policy-as-code for dynamic enforcement of least-privilege access
  • Built-in detection and response that moves beyond basic rules

This approach combines data discovery with ongoing access management, helping organizations save time and money. It bridges the gaps between security, compliance, and DevOps teams. GlobeNewswire projects the global market for unified data governance will exceed $15B by 2032. Companies are looking for platforms that can keep things simple and scale with growth.

Strategic Benefits: From Reduced Risk to Business Enablement

What do organizations actually achieve with cloud-native, end-to-end data access governance?

  • Operational Efficiency: Replace slow, manual reviews and separate tools. Automate access reviews, policy enforcement, and compliance, all in one platform.
  • Faster Remediation and Lower TCO: Real-time alerts pinpoint threats faster, and automation speeds up response and reduces resource needs.
  • Future-Proof Security: Designed to handle multi-cloud and AI demands, with just-in-time access, zero standing privilege, and fast threat response.
  • Business Enablement and Audit Readiness: Central visibility and governance help teams prepare for audits faster, gain customer trust, and safely launch digital products.

In short, a unified platform for DSPM and DAG is more than a tech upgrade, it gives security teams the ability to directly support business growth and agility.

Why Sentra: The Converged Platform for Modern Data Security

Sentra covers every angle: agentless discovery, continuous access control, ongoing threat detection, and compliance, all within one platform. Sentra unites DSPM, DAG, and Data Detection & Response (DDR) in a single solution.

With Sentra, you can:

  • Stop relying on periodic reviews and move to real-time governance
  • See and manage data across all cloud and SaaS services
  • Make compliance easier while improving security and saving money

Conclusion

Data discovery is just the first step to securing cloud data. For compliance, resilience, and agility, organizations need to go beyond simply finding data and actually managing who can use it. DSPM isn’t enough anymore, full Data Access Governance is now a must.

Sentra’s agentless platform gives security and compliance teams a way to find, control, and protect sensitive cloud data, with full oversight along the way. Make the switch now and turn cloud data security into an asset for your business.

Looking to bring all your cloud data security and access control together? Request a Sentra demo to see how it works, or watch a 5-minute product demo for more on how Sentra helps organizations move from discovery to full data governance.

<blogcta-big>

Read More
Gilad Golani
Gilad Golani
January 18, 2026
3
Min Read

False Positives Are Killing Your DSPM Program: How to Measure Classification Accuracy

False Positives Are Killing Your DSPM Program: How to Measure Classification Accuracy

As more organizations move sensitive data to the cloud, Data Security Posture Management (DSPM) has become a critical security investment. But as DSPM adoption grows, a big problem is emerging: security teams are overwhelmed by false positives that create too much noise and not enough useful insight. If your security program is flooded with unnecessary alerts, you end up with more risk, not less.

Most enterprises say their existing data discovery and classification solutions fall short, primarily because they misclassify data. False positives waste valuable analyst time and deteriorate trust in your security operation. Security leaders need to understand what high-quality data classification accuracy really is, why relying only on regex fails, and how to use objective metrics like precision and recall to assess potential tools. Here’s a look at what matters most for accuracy in DSPM.

What Does Good Data Classification Accuracy Look Like?

To make real progress with data classification accuracy, you first need to know how to measure it. Two key metrics - precision and recall - are at the core of reliable classification. Precision tells you the share of correct positive results among everything identified as positive, while recall shows the percentage of actual sensitive items that get caught. You want both metrics to be high. Your DSPM solution should identify sensitive data, such as PII or PCI, without generating excessive false or misclassified results.

The F1-score adds another perspective, blending precision and recall for a single number that reflects both discovery and accuracy. On the ground, these metrics mean fewer false alerts, quicker responses, and teams that spend their time fixing problems rather than chasing noise. "Good" data classification produces consistent, actionable results, even as your cloud data grows and changes.
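
For readers who want the math spelled out, here is a small sketch that turns confusion-matrix counts into precision, recall, and F1. The counts are hypothetical examples of what a labeled proof-of-value sample might produce.

```python
def classification_metrics(tp: int, fp: int, fn: int) -> dict:
    """Precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical result: 950 true PII detections, 50 false alarms, 30 missed records.
print(classification_metrics(tp=950, fp=50, fn=30))
# precision 0.95, recall ~0.97, F1 ~0.96
```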

The Hidden Cost of Regex-Only Data Discovery

A lot of older DSPM tools still depend on regular expressions (regex) to classify data in both structured and unstructured systems. Regex works for certain fixed patterns, but it struggles with the diverse, changing data types common in today’s cloud and SaaS environments. Regex can't always recognize if a string that “looks” like a personal identifier is actually just a random bit of data. This results in security teams buried by alerts they don’t need, leading to alert fatigue.
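
A tiny example makes the limitation obvious: a pattern-only rule cannot tell whether a string that matches an SSN format is really an SSN. The strings below are made up for illustration.

```python
import re

# A naive SSN pattern of the kind regex-only classifiers rely on.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

samples = [
    "Customer SSN on file: 512-44-9087",   # genuinely sensitive
    "Order tracking id: 123-45-6789",      # matches the pattern, but is not an SSN
    "Shipped firmware build 4.17 today",   # no match
]

for text in samples:
    if SSN_PATTERN.search(text):
        print(f"FLAGGED: {text}")  # flags both of the first two, context-blind
```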

Far from helping, regex-heavy approaches waste resources and make it easier for serious risks to slip through. As privacy regulations become more demanding and the average breach now costs $4.4 million, according to the annual "Cost of a Data Breach Report" by IBM and the Ponemon Institute, ignoring precision and recall is becoming increasingly costly.

How to Objectively Test DSPM Accuracy in Your POC

If your current DSPM produces more noise than value, a better method starts with clear testing. A meaningful proof-of-value (POV) process uses labeled data and a confusion matrix to calculate true positives, false positives, and false negatives. Don’t rely on vendor promises. Always test their claims with data from your real environment. Ask hard questions: How does the platform classify unstructured data? How much alert noise can you expect? Can it keep accuracy high even when scanning huge volumes across SaaS, multi-cloud, and on-prem systems? The best DSPM tool cuts through the clutter, surfacing only what matters.

Sentra Delivers Highest Accuracy with Small Language Models and Context

Sentra’s DSPM platform raises the bar by going beyond regex, using purpose-built small language models (SLMs) and advanced natural language processing (NLP) for context-driven data classification at scale. Customers and analysts consistently report that Sentra achieves the highest classification accuracy for PII and PCI, with very few false positives.

Gartner Review - Sentra received 5 stars

How does Sentra get these results without data ever leaving your environment? The platform combines multi-cloud discovery, agentless install, and deep contextual awareness - scanning extensive environments and accurately discerning real risks from background noise. Whether working with unstructured cloud data, ever-changing SaaS content, or traditional databases, Sentra keeps analysts focused on real issues and helps you stay compliant. Instead of fighting unnecessary alerts, your team sees clear results and can move faster with confidence.

Want to see Sentra DSPM in action? Schedule a Demo.

Reducing False Positives Produces Real Outcomes

Classification accuracy has a direct impact on whether your security is efficient or overwhelmed. With compliance rules tightening and threats growing, security teams cannot afford DSPM solutions that bury them in false positives. Regex-only tools no longer cut it - precision, recall, and truly reliable results should be standard.

Sentra’s SLM-powered, context-aware classification delivers the trustworthy performance businesses need, changing DSPM from just another alert engine to a real tool for reducing risk. Want to see the difference yourself? Put Sentra’s accuracy to the test in your own environment and finally move past false positive fatigue.

<blogcta-big>

Read More
Ward Balcerzak
Ward Balcerzak
January 14, 2026
4
Min Read

The Real Business Value of DSPM: Why True ROI Goes Beyond Cost Savings

The Real Business Value of DSPM: Why True ROI Goes Beyond Cost Savings

As enterprises scale cloud usage and adopt AI, the value of Data Security Posture Management (DSPM) is no longer just about checking a tool category box. It’s about protecting what matters most: sensitive data that fuels modern business and AI workflows.

Traditional content on DSPM often focuses on cost components and deployment considerations. That’s useful, but incomplete. To truly justify DSPM to executives and boards, security leaders need a holistic, outcome-focused view that ties data risk reduction to measurable business impact.

In this blog, we unpack the real, measurable benefits of DSPM, beyond just cost savings, and explain how modern DSPM strategies deliver rapid value far beyond what most legacy tools promise. 

1. Visibility Isn’t Enough - You Need Context

A common theme in DSPM discussions is that tools help you see where sensitive data lives. That’s important, but it’s only the first step. Real value comes from understanding context: who can access the data, how it’s being used, and where risk exists in the wider security posture. Organizations that stop at discovery often struggle to prioritize risk and justify spend.

Modern DSPM solutions go further by:

  • Correlating data locations with access rights and usage patterns
  • Mapping sensitive data flows across cloud, SaaS, and hybrid environments
  • Detecting shadow data stores and unmanaged copies that silently increase exposure
  • Linking findings to business risk and compliance frameworks

This contextual intelligence drives better decisions and higher ROI because teams aren’t just counting sensitive data, they’re continuously governing it.

2. DSPM Saves Time and Shrinks Attack Surface Fast

One way DSPM delivers measurable business value is by streamlining functions that used to be manual, siloed, and slow:

  • Automated classification reduces manual tagging and human error
  • Continuous discovery eliminates periodic, snapshot-only inventories
  • Policy enforcement reduces time spent reacting to audit requests

This translates into:

  • Faster compliance reporting
  • Shorter audit cycles
  • Rapid identification and remediation of critical risks

For security leaders, the speed of insight becomes a competitive advantage, especially in environments where data volumes grow daily and AI models can touch every corner of the enterprise.

3. Cost Benefits That Matter, but with Context

Lately I’m hearing many DSPM discussions break down cost components like scanning compute, licensing, operational expenses, and potential cloud savings. That’s a good start because DSPM can reduce cloud waste by identifying stale or redundant data, but it’s not the whole story.

 

Here’s where truly strategic DSPM differs:

Operational Efficiency

When DSPM tools automate discovery, classification, and risk scoring:

  • Teams spend less time on manual reports
  • Alert fatigue drops as noise is filtered
  • Engineers can focus on higher-value work

Breach Avoidance

Data breaches are expensive. According to industry studies, the average cost of a data breach runs into millions, far outweighing the cost of DSPM itself. A DSPM solution that prevents even one breach or major compliance failure pays for itself tenfold.
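
As a back-of-the-envelope illustration (all numbers here are assumptions you should replace with your own estimates), breach avoidance can be framed as expected loss avoided versus program cost:

```python
# Illustrative-only inputs.
annual_dspm_cost = 250_000            # hypothetical license plus operations
expected_breach_cost = 4_400_000      # in line with published breach-cost studies
annual_breach_probability = 0.20      # assumed likelihood without better data controls
risk_reduction = 0.50                 # assumed reduction from finding and fixing exposures

expected_loss_avoided = expected_breach_cost * annual_breach_probability * risk_reduction
roi = (expected_loss_avoided - annual_dspm_cost) / annual_dspm_cost

print(f"Expected annual loss avoided: ${expected_loss_avoided:,.0f}")   # $440,000
print(f"Simple ROI on DSPM spend: {roi:.0%}")                           # 76%
```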

Compliance as a Value Center

Rather than treating compliance as a cost center, consider that DSPM:

  • Reduces audit overhead
  • Provides automated evidence for frameworks like GDPR, HIPAA, and PCI DSS
  • Improves confidence in reporting accuracy

That’s a measurable business benefit CFOs can appreciate and boards expect.

4. DSPM Reduces Risk Vector Multipliers Like AI

One benefit that’s often under-emphasized is how DSPM reduces risk vector multipliers, the factors that amplify risk exponentially beyond simple exposure counts.

In 2026 and beyond, AI systems are increasingly part of the risk profile. Modern DSPM helps reduce the heightened risk from AI by:

  • Identifying where sensitive data intersects with AI training or inference pipelines
  • Governing how AI tools and assistants can access sensitive content
  • Providing risk context so teams can prevent data leakage into LLMs

This kind of data-centric, contextual, and continuous governance should be considered a requirement for secure AI adoption, no compromise.

5. Telling the DSPM ROI Story

The most convincing DSPM ROI stories aren’t spreadsheets, they’re narratives that align with business outcomes. The key to building a credible ROI case is connecting metrics, security impact, and business outcomes:

Metric | Security Impact | Business Outcome
Faster discovery & classification | Fewer blind spots | Reduced breach likelihood
Consistent governance enforcement | Fewer compliance issues | Lower audit cost
Contextual risk scoring | Better prioritization | Efficient resource allocation
AI governance | Controlled AI exposure | Safe innovation

By telling the story this way, security leaders can speak in terms the board and executives care about: risk reduction, compliance assurance, operational alignment, and controlled growth.

How to Evaluate DSPM for Real ROI

To capture tangible return, don’t evaluate DSPM solely on cost or feature checklists. Instead, test for:

1. Scalability Under Real Load

Can the tool discover and classify petabytes of data, including unstructured content, without degrading performance?

2. Accuracy That Holds Up

Poor classification undermines automation. True ROI requires consistent, top-performing accuracy rates.

3. Operational Cost Predictability

Beware of DSPM solutions that drive unexpected cloud expenses due to inefficient scanning or redundant data reads.

4. Integration With Enforcement Workflows

Visibility without action isn’t ROI. Your DSPM should feed DLP, IAM/CIEM, SIEM/SOAR, and compliance pipelines (ticketing, policy automation, alerts).
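
Integration does not need to be exotic. As a hedged sketch, a DSPM finding can be pushed as JSON to whatever SIEM, SOAR, or ticketing webhook your teams already use; the endpoint and field names below are placeholders, not a real product API.

```python
import json
import urllib.request

# Hypothetical finding produced by a DSPM scan; field names are illustrative.
finding = {
    "id": "finding-2031",
    "data_store": "s3://hr-exports",
    "classification": "PHI",
    "issue": "publicly readable bucket containing regulated data",
    "severity": "critical",
}

WEBHOOK_URL = "https://siem.example.com/api/ingest"  # placeholder endpoint

request = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps(finding).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print("Forwarded finding, HTTP status:", response.status)
```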

ROI Is a Journey, Not a Number

Costs matter, but value lives in context. DSPM is not just a cost center, it’s a force multiplier for secure cloud operations, AI readiness, compliance, and risk reduction. Instead of seeing DSPM as another tool, forward-looking teams view it as a fundamental decision support engine that changes how risk is measured, prioritized, and controlled.

Ready to See Real DSPM Value in Your Environment?

Download Sentra’s “DSPM Dirty Little Secrets” guide, a practical roadmap for evaluating DSPM with clarity, confidence, and production reality in mind.

👉 Download the DSPM Dirty Little Secrets guide now

Want a personalized walkthrough of how Sentra delivers measurable DSPM value?
👉 Request a demo

<blogcta-big>

Read More
Ofir Yehoshua
Ofir Yehoshua
January 13, 2026
3
Min Read

Why Infrastructure Security Is Not Enough to Protect Sensitive Data

Why Infrastructure Security Is Not Enough to Protect Sensitive Data

For years, security programs have focused on protecting infrastructure: networks, servers, endpoints, and applications. That approach made sense when systems were static and data rarely moved. It’s no longer enough.

Recent breach data shows a consistent pattern. Organizations detect incidents, restore systems, and close tickets, yet remain unable to answer the most important questions regulators and customers often ask:

Where does my sensitive data reside?

Who or what has access to this data and are they authorized?

Which specific sensitive datasets were accessed or exfiltrated?

Infrastructure security alone cannot answer those questions.

Infrastructure Alerts Detect Events, Not Impact

Most security tooling is infrastructure-centric by design. SIEMs, EDRs, NDRs, and CSPM tools monitor hosts, processes, IPs, and configurations. When something abnormal happens, they generate alerts.

What they do not tell you is:

  • Which specific datasets were accessed
  • Whether those datasets contained PHI or PII
  • Whether sensitive data was copied, moved, or exfiltrated

Traditional tools monitor the "plumbing" (network traffic, server logs, etc.). While they can flag that a database was accessed by an unauthorized IP, they often cannot distinguish between an attacker downloading a public template and one downloading a table containing 50,000 Social Security numbers. An alert about a system is not the same as understanding the exposure of the data stored inside it. Without that context, incident response teams are forced to infer impact rather than determine it.

The “Did They Access the Data?” Problem

This gap becomes pronounced during ransomware and extortion incidents.

In many cases:

  • Operations are restored from backups
  • Infrastructure is rebuilt
  • Access is reduced
  • (Hopefully!) attackers are removed from the environment

Yet organizations still cannot confirm whether sensitive data was accessed or exfiltrated during the dwell time.

Without data-level visibility:

  • Legal and compliance teams must assume worst-case exposure
  • Breach notifications expand unnecessarily
  • Regulatory penalties increase due to uncertainty, not necessarily damage

The inability to scope an incident accurately is not a tooling failure during the breach, it is a visibility failure that existed long before the breach occurred. Under regulations like GDPR or CCPA/CPRA, if an organization cannot prove that sensitive data wasn’t accessed during a breach, they are often legally required to notify all potentially affected parties. This ‘over-notification’ is costly and damaging to reputation.

Data Movement Is the Real Attack Vulnerability

Modern environments are defined by constant data movement:

  • Cloud migrations
  • SaaS integrations
  • App dev lifecycles
  • Analytics and ETL pipelines
  • AI and ML workflows

Each transition creates blind spots.

Legacy platforms awaiting migration often exist in a “wait state” with reduced monitoring. Data copied into cloud storage or fed into AI pipelines frequently loses lineage and classification context. Posture may vary and traditional controls no longer apply consistently. From an attacker’s perspective, these environments are ideal. From a defender’s perspective, they are blind spots.

Policies Are Not Proof

Most organizations can produce policies stating that sensitive data is encrypted, access-controlled, and monitored. Increasingly, regulators are moving from point-in-time audits to requiring continuous evidence of control.  

Regulators are asking for evidence:

  • Where does PHI live right now?
  • Who or what can access it?
  • How do you know this hasn’t changed since the last audit?

Point-in-time audits cannot answer those questions. Neither can static documentation. Exposure and access drift continuously, especially in cloud and AI-driven environments.

Compliance depends on continuous control, not periodic attestation.

What Data-Centric Security Actually Requires

Accurately proving compliance and scoping breach impact requires security visibility that is anchored to the data itself, not the infrastructure surrounding it.

At a minimum, this means:

  • Continuous discovery and classification of sensitive data
  • Consistent compliance reporting and controls across cloud, SaaS, On-Prem, and migration states
  • Clear visibility into which identities, services, and AI tools can access specific datasets
  • Detection and response signals tied directly to sensitive data exposure and movement

This is the operational foundation of Data Security Posture Management (DSPM) and Data Detection and Response (DDR). These capabilities do not replace infrastructure security controls; they close the gap those controls leave behind by connecting security events to actual data impact.

This is the problem space Sentra was built to address.

Sentra provides continuous visibility into where sensitive data lives, how it moves, and who or what can access it, and ties security and compliance outcomes to that visibility. Without this layer, organizations are forced to infer breach impact and compliance posture instead of proving it.

Why Data-Centric Security Is Required for Today's Compliance and Breach Response

Infrastructure security can detect that an incident occurred, but it cannot determine which sensitive data was accessed, copied, or exfiltrated. Without data-level evidence, organizations cannot accurately scope breaches, contain risk, or prove compliance, regardless of how many alerts or controls are in place. Modern breach response and regulatory compliance require continuous visibility into sensitive data, its lineage, and its access paths. Infrastructure-only security models are no longer sufficient.

Want to see how Sentra provides complete visibility and control of sensitive data?

Schedule a Demo

<blogcta-big>

Read More
Yair Cohen
Yair Cohen
January 9, 2026
3
Min Read
Data Security

How to Prevent Data Breaches in Healthcare and Protect PHI

How to Prevent Data Breaches in Healthcare and Protect PHI

Preventing data breaches in healthcare is no longer just about stopping cyberattacks. In 2026, the greater challenge is maintaining continuous visibility into where protected health information (PHI) lives, how it is accessed, and how it is reused across modern healthcare environments governed by HIPAA compliance requirements.

PHI no longer resides in a single system or under the control of one team. It moves constantly between cloud platforms, electronic health record (EHR) systems, business associates, analytics environments, and AI tools used throughout healthcare operations. While this data sharing enables better patient care and operational efficiency, it also introduces new healthcare cybersecurity risks that traditional, perimeter-based security controls were never designed to manage.

From Perimeter Security to Data-Centric PHI Protection

Many of the most damaging healthcare data breaches in recent years have shared a common root cause:

limited visibility into sensitive data and unclear ownership across shared environments.

Over-permissioned identities, long-lived third-party access, and AI systems interacting with regulated data without proper governance can silently expand exposure until an incident forces disruptive containment measures. Protecting PHI in 2026 requires a data-centric approach to healthcare data security. Instead of focusing only on where data is stored, organizations must continuously understand what sensitive data exists, who can access it, and how that access changes over time. This shift is foundational to effective HIPAA compliance, resilient incident response, and the safe adoption of AI in healthcare.

The Importance of Data Security in Healthcare

Healthcare organizations continue to face disproportionate risk from data breaches, with incidents carrying significant financial, operational, and reputational consequences. Recent industry analyses show that healthcare remains the costliest industry for data breaches, with the average breach costing approximately $7.4 million globally in 2025 and exceeding $10 million per incident in the U.S., driven by regulatory penalties and prolonged recovery efforts.

The scale and complexity of healthcare breaches have also increased. As of late 2025, hundreds of large healthcare data breaches affecting tens of millions of individuals had already been reported in the U.S. alone, including incidents tied to shared infrastructure and third-party service providers. These events highlight how a single exposure can rapidly expand across interconnected healthcare ecosystems.

Importantly, many recent breaches are no longer caused solely by external attacks. Instead, they stem from internal access issues such as over-permissioned identities, misdirected data sharing, and long-lived third-party access, risks now amplified by analytics platforms and AI tools interacting directly with regulated data. As healthcare organizations continue to adopt new technologies, protecting PHI increasingly depends on controlling how sensitive data is accessed, shared, and reused over time, not just where it is stored.

Healthcare Cybersecurity Regulations & Standards

For healthcare organizations, it is especially crucial to protect patient data and follow industry rules. Transitioning to the cloud shouldn't disrupt compliance efforts. But staying on top of strict data privacy regulations adds another layer of complexity to managing healthcare data.

Below are some of the top healthcare cybersecurity regulations relevant to the industry.


Health Insurance Portability and Accountability Act of 1996 (HIPAA)

HIPAA is pivotal in healthcare cybersecurity, mandating compliance for covered entities and business associates. It requires regular risk assessments and adherence to administrative, physical, and technical safeguards for electronic Protected Health Information (ePHI).

HIPAA, at its core, establishes national standards to protect sensitive patient health information from being disclosed without the patient's consent or knowledge. For leaders in healthcare data management, understanding the nuances of HIPAA's Titles and amendments is essential. Particularly relevant are Title II (HIPAA Administrative Simplification) and its Privacy Rule and Security Rule.

HHS 405(d)

HHS 405(d) regulations, under the Cybersecurity Act of 2015, establish voluntary guidelines for healthcare cybersecurity, embodied in the Healthcare Industry Cybersecurity Practices (HICP) framework. This framework covers email, endpoint protection, access management, and more.

Health Information Technology for Economic and Clinical Health (HITECH) Act

The HITECH Act, enacted in 2009, enhances HIPAA requirements, promoting the adoption of healthcare technology and imposing stricter penalties for HIPAA violations. It mandates annual cybersecurity audits and extends HIPAA regulations to business associates.

Payment Card Industry Data Security Standard (PCI DSS)

PCI DSS applies to healthcare organizations processing credit cards, ensuring the protection of cardholder data. Compliance is necessary for handling patient card information.

Quality System Regulation (QSR)

The Quality System Regulation (QSR), enforced by the FDA, focuses on securing medical devices, requiring measures like access prevention, risk management, and firmware updates. Proposed changes aim to align QSR with ISO 13485 standards.

Health Information Trust Alliance (HITRUST)

HITRUST, a global cybersecurity framework, aids healthcare organizations in aligning with HIPAA guidelines, offering guidance on various aspects including endpoint security, risk management, and physical security. Though not mandatory, HITRUST serves as a valuable resource for bolstering compliance efforts.

Preventing Data Breaches in Healthcare with Sentra

Sentra’s Data Security Posture Management (DSPM) automatically discovers and accurately classifies your sensitive patient data. By seamlessly building a well-organized data catalog, Sentra ensures all your patient data is secure, stored correctly and in compliance. The best part is, your data never leaves your environment.

Discover and Accurately Classify your High Risk Patient Data

Discover and accurately classify your high-risk patient data with ease using Sentra. Within minutes, Sentra empowers you to uncover and comprehend your Protected Health Information (PHI), spanning patient medical history, treatment plans, lab tests, radiology images, physician notes, and more. 

Seamlessly build a well-organized data catalog, ensuring that all your high-risk patient data is securely stored and compliant. As a cloud-native solution, Sentra enables you to scale security across your entire data estate. Your cloud data remains within your environment, putting you in complete control of your sensitive data at all times.

Sentra Reduces Data Risks by Controlling Posture and Access

Sentra is your solution for reducing data risks and preventing data breaches by efficiently controlling posture and access. With Sentra, you can enforce security policies for sensitive data, receiving alerts to violations promptly. It detects which users have access to sensitive Protected Health Information (PHI), ensuring transparency and accountability. Additionally, Sentra helps you manage third-party access risks by offering varying levels of access to different providers. Achieve least privilege access by leveraging Sentra's continuous monitoring and tracking capabilities, which keep tabs on access keys and user identities. This ensures that each user has precisely the right access permissions, minimizing the risk of unauthorized data exposure.

Stay on Top of Healthcare Data Regulations with Sentra

Sentra’s Data Security Posture Management (DSPM) solution streamlines and automates the management of your regulated patient data, preparing you for significant security audits. Gain a comprehensive view of all sensitive patient data, allowing our platform to automatically identify compliance gaps for proactive and swift resolution.

Sentra dashboard showing issues grouped by compliance frameworks, such as HIPAA, along with the overall compliance posture

Easily translate your compliance requirements for HIPAA, GDPR, and HITECH into actionable rules and policies, receiving notifications when data is copied or moved between regions. With Sentra, running compliance reports becomes a breeze, providing you with all the necessary evidence, including sensitive data types, regulatory controls, and compliance status for relevant regulatory frameworks.

Conclusion: From Perimeter Security to Continuous Data Governance

Healthcare organizations can no longer rely on perimeter-based controls or periodic audits to prevent data breaches. As PHI spreads across cloud platforms, business associates, and AI-driven workflows, the risk is no longer confined to a single system, it’s embedded in how data is accessed, shared, and reused.

Protecting PHI in 2026 requires continuous visibility into sensitive data and the ability to govern it throughout its lifecycle. This means understanding what regulated data exists, who has access to it, and how that access changes over time - across internal teams, third parties, and AI systems. Without this level of insight, compliance with HIPAA and other healthcare regulations becomes reactive, and incident response becomes disruptive by default.

A data-centric security model allows healthcare organizations to reduce their breach impact, limit regulatory exposure, and adopt AI safely without compromising patient trust. By shifting from static controls to continuous data governance, security and compliance teams can move from guessing where PHI lives to managing it with confidence.

To learn more about how you can enhance your data security posture, schedule a demo with one of our data security experts.

<blogcta-big>

Read More
Yair Cohen
Yair Cohen
January 5, 2026
4
Min Read
Data Security

How Does DSPM Safeguard Your Data When You Have CSPM/CNAPP

How Does DSPM Safeguard Your Data When You Have CSPM/CNAPP

After debuting in Gartner’s 2022 Hype Cycle, Data Security Posture Management (DSPM) has quickly become a transformative category and hot security topic. DSPM solutions are popping up everywhere, both as dedicated offerings and as add-on modules to established cloud native application protection platforms (CNAPP) or cloud security posture management (CSPM) platforms.

But which option is better: adding a DSPM module to one of your existing solutions or implementing a new DSPM-focused platform? On the surface, activating a module within a CNAPP/CSPM solution that your team already uses might seem logical. But, the real question is whether or not you can reap all of the benefits of a DSPM through an add-on module. While some CNAPP platforms offer a DSPM module, these add-ons lack a fully data-centric approach, which is required to make DSPM technology effective for a modern-day business with a sprawling data ecosystem. Let’s explore this further.

How are CNAPP/CSPM and DSPM Different?

While CNAPP/CSPM and DSPM seem similar and can be complementary in many ways, they are distinctly different in a few important ways. DSPMs are all about the data — protecting it no matter where it travels. CNAPP/CSPMs focus on detecting attack paths through cloud infrastructure. So naturally, they tie specifically to the infrastructure and lack the agnostic approach of DSPM to securing the underlying data.

Because a DSPM focuses on data posture, it applies to additional use cases that CNAPP/CSPM typically doesn’t cover. This includes data privacy and data protection regulations such as GDPR, PCI-DSS, etc., as well as data breach detection based on real-time monitoring for risky data access activity. Lastly, data at rest (such as abandoned shadow data) would not necessarily be protected by CNAPP/CSPM since, by definition, it’s unknown and not an active attack path.

Capability | DSPM | CSPM | CNAPP
Data discovery & classification | Deep and contextual | Limited | Limited
Shadow data detection | Supported | Not supported | Not supported
On-prem & hybrid support | Supported | Not supported | Not supported
Infrastructure misconfigurations | Not supported | Supported | Supported
AI & privacy use cases | Supported | Not supported | Not supported

What is a Data-Centric Approach?

A data-centric approach is the foundation of your data security strategy that prioritizes the secure management, processing, and storage of data, ensuring that data integrity, accessibility, and privacy are maintained across all stages of its lifecycle. Standalone DSPM takes a data-centric approach. It starts with the data, using contextual information such as data location, sensitivity, and business use cases to better control and secure it. These solutions offer preventative measures, such as discovering shadow data, preventing data sprawl, and reducing the data attack surface.

Data detection and response (DDR), often offered within a DSPM platform, provides reactive measures, enabling organizations to monitor their sensitive assets and detect and prevent data exfiltration. Because standalone DSPM solutions are data-centric, many are designed to follow data across a hybrid ecosystem, including public cloud, private cloud, and on-premises environments. This is ideal for the complex environments that many organizations maintain today.

What is an Infrastructure-Centric Approach?

An infrastructure-centric solution is focused on optimizing and protecting the underlying hardware, networks, and systems that support applications and services, ensuring performance, scalability, and reliability at the infrastructure level. Both CNAPP and CSPM use infrastructure-centric approaches. Their capabilities focus on identifying vulnerabilities and misconfigurations in cloud infrastructure, as well as some basic compliance violations. CNAPP and CSPM can also identify attack paths and use several factors to prioritize which ones your team should remediate first. While both solutions can enforce policies, they can only offer security guardrails that protect static infrastructure. In addition, most CNAPP and CSPM solutions only work with public cloud environments, meaning they cannot secure private cloud or on-premises environments.

How Does a DSPM Add-On Module for CNAPP/CSPM Work?

Typically, when you add a DSPM module to CNAPP/CSPM, it can only work within the parameters set by its infrastructure-centric base solution. In other words, a DSPM add-on to a CNAPP/CSPM solution will also be infrastructure-centric. It’s like adding chocolate chips to vanilla ice cream; while they will change the flavor a bit, they can’t transform the constitution of your dessert into chocolate ice cream. 

A DSPM module in a CNAPP or CSPM solution generally has one purpose: helping your team better triage infrastructure security issues. Its sole functionality is to look at the attack paths that threaten your public cloud infrastructure, then flag which of these would most likely lead to sensitive data being breached. 

However, this functionality comes with a few caveats. While CSPM and CNAPP have some data discovery capabilities, they use very basic classification functions, such as pattern-matching techniques. This approach lacks context and granularity and requires validation by your security team. 

In addition, the DSPM add-on can only perform this data discovery within infrastructure already being monitored by the CNAPP/CSPM solution. So, it can only discover sensitive data within known public cloud environments. It may miss shadow data that has been copied to local stores or personal machines, leaving risky exposure gaps.

Why Infrastructure-Centric Solutions Aren’t Enough

So, what happens when you only use infrastructure-centric solutions in a modern cloud ecosystem? While these solutions offer powerful functionality for defending your public cloud perimeter and minimizing misconfigurations, they miss essential pieces of your data estate - shadow data copied to local stores or personal machines, data in private cloud and on-premises environments, and abandoned data that sits outside any active attack path.

In addition, DSPM modules within CNAPP/CSPM platforms lack the context to properly classify sensitive data beyond easily identifiable examples, such as Social Security or credit card numbers. But the data stores at today’s businesses often contain more nuanced personal or product/service-specific identifiers that could pose a risk if exposed. Examples include a serial number for a product that a specific individual owns or a medical ID number as part of an EHR. Some sensitive assets might even be made up of “toxic combinations,” in which the sensitivity of seemingly innocuous data classes increases when combined with specific identifiers.

For example, a random 9-digit number alongside a headshot photo and expiration date is likely a sensitive passport number. Ultimately, DSPM built into a CSPM or CNAPP solution only sees an incomplete picture of risk. This can leave any number of sensitive assets unknown and unprotected in your cloud and on-prem environments.
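To make the toxic-combination idea concrete, here is a minimal, illustrative sketch in Python. The data class names, combination rules, and elevated labels are invented for this example; they are not Sentra's classification model.

```python
# Illustrative only: a toy rule that elevates sensitivity when otherwise
# low-risk data classes appear together in the same asset.
from typing import Set

# Hypothetical combination rules: if every class in the key set co-occurs,
# treat the asset as the more sensitive label on the right.
TOXIC_COMBINATIONS = {
    frozenset({"nine_digit_number", "headshot_photo", "expiration_date"}): "passport_record",
    frozenset({"product_serial_number", "customer_name"}): "customer_owned_device",
    frozenset({"medical_id_number", "date_of_birth"}): "protected_health_information",
}

def classify_asset(detected_classes: Set[str]) -> str:
    """Return an elevated label if a toxic combination is present."""
    for combo, elevated_label in TOXIC_COMBINATIONS.items():
        if combo <= detected_classes:
            return elevated_label
    return "low_sensitivity"

# A lone 9-digit number is ambiguous; combined with a photo and an
# expiration date it is almost certainly a passport record.
print(classify_asset({"nine_digit_number"}))
print(classify_asset({"nine_digit_number", "headshot_photo", "expiration_date"}))
```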

Dedicated DSPM Completes the Data Security Picture

A dedicated, best-of-breed DSPM solution like Sentra, on the other hand, offers rich, contextual information about all of your sensitive data - no matter where it resides, how your business uses it, or how nuanced it is. 

Rather than just defending the perimeters of known public cloud infrastructure, Sentra finds and follows your sensitive data wherever it goes.

Here are a few of Sentra’s unique capabilities that complete your picture of data security:

  • Comprehensive, security-focused data catalog of all sensitive data assets across the entire data estate (IaaS, PaaS, SaaS, and On-Premises)
  • Ability to detect unmanaged, mislocated, or abandoned data, enabling your team to reduce your data attack surface, control data sprawl, and remediate security/privacy policy violations
  • Movement detection to surface out-of-policy data transformations that violate residency and security policies or that inadvertently create exposures
  • Nuanced discovery and classification, such as row/column/table analysis capabilities that can uncover uncommon personal identifiers, toxic combinations, etc.
  • Rich context for understanding the business purpose of data to better discern its level of sensitivity
  • Lower false positive rates due to deeper analysis of the context surrounding each sensitive data store and asset
  • Automation for remediating a variety of data posture, compliance, and security issues

All of this complex analysis requires a holistic, data-centric view of your data estate - something that only a standalone DSPM solution can offer. And when deployed together with a CNAPP or CSPM solution, a standalone DSPM platform can bring unmatched depth and context to your cloud data security program. It also provides unparalleled insight to facilitate prioritization of issue resolution.

Why DSPM Is Essential for Modern Data Security

DSPM, CSPM, and CNAPP each play an important role in modern cloud security, but they are designed to solve fundamentally different problems. CSPM and CNAPP focus on securing cloud infrastructure by identifying misconfigurations and attack paths, while DSPM is purpose-built to protect sensitive data itself - regardless of where that data lives or how it moves across environments.

As organizations manage increasingly complex data estates spanning public cloud, private cloud, SaaS, and on-premises systems, infrastructure-centric security alone is no longer sufficient. Sensitive data, shadow data, and nuanced “toxic combinations” require continuous discovery, contextual classification, and data-centric monitoring that only a dedicated DSPM solution can provide.

When deployed alongside CSPM or CNAPP, a standalone DSPM platform completes the data security picture by adding deep visibility into data risk, enabling stronger compliance with privacy regulations, and reducing the overall data attack surface. For organizations looking to protect sensitive data at scale, while supporting modern use cases like AI and analytics - DSPM is a critical foundation of an effective enterprise data security strategy.

To learn more about Sentra’s approach to data security posture management, read about how we use LLMs to classify structured and unstructured sensitive data at scale.

<blogcta-big>

Read More
Yair Cohen
Yair Cohen
December 28, 2025
3
Min Read

What CISOs Learned in 2025: The 5 Data Security Priorities Coming in 2026

What CISOs Learned in 2025: The 5 Data Security Priorities Coming in 2026

2025 was a pivotal year for Chief Information Security Officers (CISOs). As cyber threats surged and digital acceleration transformed business, CISOs gained more influence in boardrooms but also took on greater accountability. The old model of perimeter-based defense has ended. Security strategies now focus on resilience and real-time visibility with sensitive data protection at the core.

As 2026 approaches, CISOs are turning this year’s lessons into a proactive, AI-smart, and business-aligned strategy. This article highlights the top CISO priorities for 2026, the industry’s shift from prevention to resilience, and how Sentra supports security leaders in this new phase.

Lessons from 2025: Transparency, AI Risk, and Platform Resilience

Over the past year, CISOs encountered high-profile breaches and shifting demands. According to the Splunk 2025 CISO Report, an impressive 82% reported direct interactions with CEOs, and 83% regularly attended board meetings. Still, only 29% of board members had cybersecurity experience, leading to frequent misalignment around budgets, innovation, and staffing.

The data is clear: 76% of CISOs expected a significant cyberattack, but 58% felt unprepared, as reported in the Proofpoint 2025 Voice of the CISO Report. Many CISOs struggled with overwhelming tool sprawl and alert fatigue; 76% named these as major challenges. The rapid growth in cloud, SaaS, and GenAI environments left major visibility gaps, especially for unstructured and shadow data. Most of all, CISOs concluded that resilience - quick detection, rapid response, and keeping the business running - matters more than just preventing attacks. This shift is changing the way security budgets will be spent in 2026.

The Evolution of DSPM: From Inventory to Intelligent, AI-Aware Defense

First generation data security posture management (DSPM) tools focused on identifying assets and manually classifying data. Now, CISOs must automatically map, classify, and assign risk scores to data - structured, unstructured, or AI-generated - across cloud, on-prem and SaaS environments, instantly. If organizations lack this capability, critical data remains at risk (Data as the Core Focus in the Cloud Security Ecosystem).

AI brings both opportunity and risk. CISOs are working to introduce GenAI security policies while facing challenges like data leakage, unsanctioned AI projects, and compliance issues. DSPM solutions that use machine learning and real-time policy enforcement have become essential.

The Top Five CISO Priorities in 2026

  1. Secure and Responsible AI: As AI accelerates across the business, CISOs must ensure it does not introduce unmanaged data risk. The focus will be on maintaining visibility and control over sensitive data used by AI systems, preventing unintended exposure, and establishing governance that allows the company to innovate with AI while protecting trust, compliance, and brand reputation.
  2. Modern Data Governance: As sensitive data sprawls across on-prem, cloud, SaaS, and data lakes, CISOs face mounting compliance pressure without clear visibility into where that data resides. The priority will be establishing accurate classification and governance of sensitive, unstructured, and shadow data - not only to meet regulatory obligations, but to proactively reduce enterprise risk, limit blast radius, and strengthen overall security posture.
  3. Tool Consolidation: As cloud and application environments grow more complex, CISOs are under pressure to reduce data sprawl without increasing risk. The priority is consolidating fragmented cloud and application security tools into unified platforms that embed protection earlier in the development lifecycle, improve risk visibility across environments, and lower operational overhead. For boards, this shift represents both stronger security outcomes and a clearer return on security investment through reduced complexity, cost, and exposure.
  4. Offensive Security/Continuous Testing: One-time security assessments can no longer keep pace with AI-driven and rapidly evolving threats. CISOs are making continuous offensive security a core risk-management practice, regularly testing environments across hardware, cloud, and SaaS to expose real-world vulnerabilities. For the board, this provides ongoing validation of security effectiveness and reduces the likelihood of unpleasant surprises from unknown exposures. Some exciting new AI red team solutions are appearing on the scene, such as 7ai, Mend.io, Method Security, and Veria Labs.
  5. Zero Trust Identity Governance: Identity has become the primary attack surface, making advanced governance essential rather than optional. CISOs are prioritizing data-centric, Zero Trust identity controls to limit excessive access, reduce insider risk, and counter AI-enabled attacks. At the board level, this shift is critical to protecting sensitive assets and maintaining resilience against emerging threats.

These areas show a greater need for automation, better context, and clearer reporting for boards.

Sentra Enables Secure and Responsible AI with Modern Data Governance

As AI becomes central to business strategy, CISOs are being held accountable for ensuring innovation does not outpace security, governance, or trust. Secure and Responsible AI is no longer about policy alone; it requires continuous visibility into the sensitive data flowing into AI systems, control over shadow and AI-generated data, and the ability to prevent unintended exposure before it becomes a business risk.

At the same time, Modern Data Governance has emerged as a foundational requirement. Exploding data volumes across cloud, SaaS, data lakes, and on-prem environments have made traditional governance models ineffective. CISOs need accurate classification, unified visibility, and enforceable controls that go beyond regulatory checkboxes to actively reduce enterprise risk.

Sentra brings these priorities together by giving security leaders a clear, real-time understanding of where sensitive data lives, how it is being used - including by AI - and where risk is accumulating across the organization. By unifying DSPM and Data Detection & Response (DDR), Sentra enables CISOs to move from reactive security to proactive governance, supporting AI adoption while maintaining compliance, resilience, and board-level confidence.

Looking ahead to 2026, the CISOs who lead will be those who can see, govern, and secure their data everywhere it exists and ensure it is used responsibly to power the next phase of growth. Sentra provides the foundation to make that possible.

Conclusion

The CISO’s role in 2025 shifted from putting out fires to driving change alongside business leadership. Expectations will keep rising in 2026; balancing board expectations, the opportunities and threats of AI, and constant new risks takes a smart platform and real-time clarity.

Sentra delivers the foundation and intelligence CISOs need to build resilience, stay compliant, and fuel data-powered AI growth with secure data. Those who can see, secure, and respond wherever their data lives will lead. Sentra is your partner to move forward with confidence in 2026.

<blogcta-big>

Read More
Meni Besso
Meni Besso
December 23, 2025
Min Read
Compliance

How to Scale DSAR Compliance (Without Breaking Your Team)

How to Scale DSAR Compliance (Without Breaking Your Team)

Data Subject Access Requests (DSARs) are one of the most demanding requirements under privacy regulations such as GDPR and CPRA. As personal data spreads across cloud, SaaS, and legacy systems, responding to DSARs manually becomes slow, costly, and error-prone. This article explores why DSARs are so difficult to scale, the key challenges organizations face, and how DSAR automation enables faster, more reliable compliance.

Privacy regulations are no longer just legal checkboxes; they are a foundation of customer trust. In today’s data-driven world, individuals expect transparency into how their personal information is collected, used, and protected. Organizations that take privacy seriously demonstrate respect for their users, strengthening trust, loyalty, and long-term engagement.

Among these requirements, DSARs are often the most complex to support. They give individuals the right to request access to their personal data, typically with a strict response deadline of 30 days. For large enterprises with data scattered across cloud, SaaS, and on-prem environments, even a single request can trigger a frantic search across multiple systems, manual reviews, and legal oversight - quickly turning DSAR compliance into a race against the clock, with reputation and regulatory risk on the line.

What Is a Data Subject Access Request (DSAR)?

A Data Subject Access Request (DSAR) is a legal right granted under privacy regulations such as GDPR and CPRA that allows individuals to request access to the personal data an organization holds about them. In many cases, individuals can also request information about how that data is used, shared, or deleted.

Organizations are typically required to respond to DSARs within a strict timeframe, often 30 days, and must provide a complete and accurate view of the individual’s personal data. This includes data stored in databases, files, logs, SaaS platforms, and other systems across the organization.

Why DSAR Requests Are Difficult to Manage at Scale

DSARs are relatively manageable for small organizations with limited systems. At enterprise scale, however, they become significantly more complex. Personal data is no longer centralized. It is distributed across cloud platforms, SaaS applications, data lakes, file systems, and legacy infrastructure. Privacy teams must coordinate with IT, security, legal, and data owners to locate, review, and validate data before responding. As DSAR volumes increase, manual processes quickly break down, increasing the risk of delays, incomplete responses, and regulatory exposure.

Key Challenges in Responding to DSARs

Data Discovery & Inventory

For large organizations, pinpointing where personal data resides across a diverse ecosystem of information systems, including databases, SaaS applications, data lakes, and legacy environments, is a complex challenge. The presence of fragmented IT infrastructure and third-party platforms often leads to limited visibility, which not only slows down the DSAR response process but also increases the likelihood of missing or overlooking critical personal data.

Linking Identities Across Systems

A single individual may appear in multiple systems under different identifiers, especially if systems have been acquired or integrated over time. Accurately correlating these identities to compile a complete DSAR response requires sophisticated identity resolution and often manual effort.
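As a simplified illustration of the correlation step - real identity resolution also has to handle fuzzy matches and conflicting records - the sketch below links records that share any identifier into one cluster. The systems and identifiers are invented for the example.

```python
# Illustrative union-find over shared identifiers: records from different
# systems that share an email, user ID, or customer number collapse into
# one identity cluster for DSAR purposes.
from collections import defaultdict

records = [
    {"system": "crm",     "ids": {"email:jane@example.com", "customer:C-1001"}},
    {"system": "billing", "ids": {"customer:C-1001", "user:jdoe"}},
    {"system": "support", "ids": {"email:jane@example.com"}},
    {"system": "crm",     "ids": {"email:someone.else@example.com"}},
]

parent = {}

def find(x):
    """Find the cluster root for an identifier, creating it on first use."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(a, b):
    """Merge the clusters containing identifiers a and b."""
    parent[find(a)] = find(b)

# Link every identifier within a record; records that share any identifier
# are transitively linked into the same cluster.
for rec in records:
    ids = list(rec["ids"])
    for other in ids[1:]:
        union(ids[0], other)

clusters = defaultdict(list)
for rec in records:
    clusters[find(next(iter(rec["ids"])))].append(rec["system"])

# Jane's CRM, billing, and support records end up in one cluster; the
# unrelated record stays in its own cluster.
print(dict(clusters))
```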


Unstructured Data Handling

Unlike structured databases, where data is organized into labeled fields and can be efficiently queried, unstructured data (like PDFs, documents, and logs) is free-form and lacks consistent formatting. This makes it much harder to search, classify, or extract relevant personal information.

Response Timeliness

Regulatory deadlines force organizations to respond quickly, even when data must be gathered from multiple sources and reviewed by legal teams. Manual processes can lead to delays, risking non-compliance and fines.

Volume & Scalability

While most organizations can handle an occasional DSAR manually, spikes in request volume - driven by events like regulatory campaigns or publicized incidents - can overwhelm privacy and legal teams. Without scalable automation, organizations face mounting operational costs, missed deadlines, and an increased risk of inconsistent or incomplete responses.


The Role of Data Security Platforms in DSAR Automation

Sentra is a modern data security platform dedicated to helping organizations gain complete visibility and control over their sensitive data. By continuously scanning and classifying data across all environments (including cloud, SaaS, and on-premises systems) Sentra maintains an always up-to-date data map, giving organizations a clear understanding of where sensitive data resides, how it flows, and who has access to it. This data map forms the foundation for efficient DSAR automation, enabling Sentra’s DSAR module to search for user identifiers only in locations where relevant data actually exists - ensuring high accuracy, completeness, and fast response times.

Data Security Platform example of US SSN finding

Another key factor in managing DSAR requests is ensuring that sensitive customer PII doesn’t end up in unauthorized or unintended environments. When data is copied between systems or environments, it’s essential to apply tokenization or masking to prevent unintentional sprawl of PII. Sentra helps identify misplaced or duplicated sensitive data and alerts when it isn’t properly protected. This allows organizations to focus DSAR processing within authorized operational environments, significantly reducing both risk and response time.

Smart Search of Individual Data

To initiate the generation of a Data Subject Access Request (DSAR) report, users can submit one or more unique identifiers—such as email addresses, Social Security numbers, usernames, or other personal identifiers—corresponding to the individual in question. Sentra then performs a targeted scan across the organization’s data ecosystem, focusing on data stores known to contain personally identifiable information (PII). This includes production databases, data lakes, cloud storage services, file servers, and both structured and unstructured data sources.

Leveraging its advanced classification and correlation capabilities, Sentra identifies all relevant records associated with the provided identifiers. Once the scan is complete, it compiles a comprehensive DSAR report that consolidates all discovered personal data linked to the data subject, which can be downloaded as a PDF for manual review or securely retrieved via Sentra’s API.

DSAR Requests

Establishing a DSAR Processing Pipeline

Large organizations that receive a high volume of DSAR (Data Subject Access Request) submissions typically implement a robust, end-to-end DSAR processing pipeline. This pipeline is often initiated through a self-service privacy portal, allowing individuals to easily submit requests for access or deletion of their personal data. Once a request is received, an automated or semi-automated workflow is triggered to handle the request efficiently and in compliance with regulatory timelines.

  1. Requester Identity Verification: Confirm the identity of the data subject to prevent unauthorized access (e.g., via email confirmation or secure login).

  2. Mapping Identifiers: Collect and map all known identifiers for the individual across systems (e.g., email, user ID, customer number).

  3. Environment-Wide Data Discovery (via Sentra): Use Sentra to search all relevant environments — cloud, SaaS, on-prem — for personal data tied to the individual. Sentra’s automated discovery and classification identify exactly where to search.

  4. DSAR Report Generation (via Sentra): Compile a detailed report listing all personal data found and where it resides.

  5. Data Deletion & Verification: Remove or anonymize personal data as required, then rerun a search to verify deletion is complete.

  6. Final Response to Requester: Send a confirmation to the requester, outlining the actions taken and closing the request.

Sentra plays a key role in the DSAR pipeline by exposing a powerful API that enables automated, organization-wide searches for personal data. The search results can be programmatically used to trigger downstream actions like data deletion. After removal, the API can initiate a follow-up scan to verify that all data has been successfully deleted.
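To illustrate how such a pipeline could be wired together, here is a rough Python sketch. The base URL, endpoint paths, payload fields, and response shapes are hypothetical placeholders for illustration only - they are not Sentra's documented API - and the deletion step is delegated to whatever process owns the data.

```python
# Hypothetical DSAR pipeline sketch: search, act on findings, then re-scan
# to verify deletion. Endpoints and fields are invented placeholders.
import time
import requests

API = "https://dspm.example.com/api/v1"          # placeholder base URL
HEADERS = {"Authorization": "Bearer <token>"}    # placeholder credentials

def start_dsar_search(identifiers):
    """Kick off an environment-wide search for the data subject's identifiers."""
    resp = requests.post(f"{API}/dsar/searches",
                         json={"identifiers": identifiers}, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["search_id"]

def wait_for_results(search_id, poll_seconds=30):
    """Poll until the scan finishes, then return the list of findings."""
    while True:
        resp = requests.get(f"{API}/dsar/searches/{search_id}", headers=HEADERS)
        resp.raise_for_status()
        body = resp.json()
        if body["status"] == "completed":
            return body["findings"]   # each finding: data store, location, data classes
        time.sleep(poll_seconds)

def run_dsar(identifiers, delete_record):
    """Discover, delete via the caller-supplied handler, then verify."""
    findings = wait_for_results(start_dsar_search(identifiers))
    for finding in findings:
        delete_record(finding)        # downstream deletion handled by the data owner
    # Re-scan to verify deletion: a clean second pass closes the request.
    remaining = wait_for_results(start_dsar_search(identifiers))
    return len(remaining) == 0
```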

Benefits of DSAR Automation 

With privacy regulations constantly growing, and DSAR volumes continuing to rise, building an automated, scalable pipeline is no longer a luxury - it’s a necessity.


  • Automated and Cost-Efficient: Replaces costly, error-prone manual processes with a streamlined, automated approach.
  • High-Speed, High-Accuracy: Sentra leverages its knowledge of where PII resides to perform targeted searches across all environments and data types, delivering comprehensive reports in hours—not days.
  • Seamless Integration: A powerful API allows integration with workflow systems, enabling a fully automated, end-to-end DSAR experience for end users.

By using Sentra to intelligently locate PII across all environments, organizations can eliminate manual bottlenecks and accelerate response times. Sentra’s powerful API and deep data awareness make it possible to automate every step of the DSAR journey - from discovery to deletion - enabling privacy teams to operate at scale, reduce costs, and maintain compliance with confidence. 

Turning DSAR Compliance into a Scalable Advantage with Automation

As privacy expectations grow and regulatory pressure intensifies, DSARs are no longer just a compliance checkbox; they are a reflection of how seriously an organization treats user trust. Manual, reactive processes simply cannot keep up with the scale and complexity of modern data environments, especially as personal data continues to spread across cloud, SaaS, and on-prem systems.

By automating DSAR workflows with a data-centric security platform like Sentra, organizations can respond faster, reduce compliance risk, and lower operational costs - all while freeing privacy and legal teams to focus on higher-value initiatives. In this way, DSAR compliance becomes not just a regulatory obligation, but a measure of operational maturity and a scalable advantage in building long-term trust.

<blogcta-big>

Read More
Dean Taler
Dean Taler
December 22, 2025
3
Min Read

Building Automated Data Security Policies for 2026: What Security Teams Need Now

Building Automated Data Security Policies for 2026: What Security Teams Need Now

Learn how to build automated data security policies that reduce data exposure, meet GDPR, PCI DSS, and HIPAA requirements, and scale data governance across cloud, SaaS, and AI-driven environments as organizations move into 2026.

As 2025 comes to a close, one reality is clear: automated data security and governance programs are a must-have to truly leverage data and AI. Sensitive data now moves faster than human review can keep up with. It flows across multi-cloud storage, SaaS platforms, collaboration tools, logging pipelines, backups, and increasingly, AI and analytics workflows that continuously replicate data into new locations. For security and compliance teams heading into 2026, periodic audits and static policies are no longer sufficient. Regulators, customers, and boards now expect continuous visibility and enforcement.

This is why automated data security policies have become a foundational control, not a “nice to have.”

In this blog, we focus on how data security policies are actually used at the end of 2025, and how to design them so they remain effective in 2026.

You’ll learn:

  • The most important compliance and risk-driven policy use cases
  • How organizations operationalize data security policies at scale
  • Practical examples aligned with GDPR, PCI DSS, HIPAA, and internal governance

Why Automated Data Security Policies Matter Heading into 2026

The direction of regulatory enforcement and threat activity is consistent:

  • Continuous compliance is now expected, not implied
  • Overexposed data is increasingly used for extortion, not just theft
  • Organizations must prove they know where sensitive data lives and who can access it

Recent enforcement actions have shown that organizations can face penalties even without a breach, simply for storing regulated data in unapproved locations or failing to enforce access controls consistently.

Automated data security policies address this gap by continuously evaluating:

  • Data sensitivity
  • Access scope
  • Storage location and residency

and by surfacing violations in near real time, as sketched below.
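As a minimal illustration of what such a data-aware check can look like, here is a Python sketch. The asset fields, policy names, and conditions are invented for the example and do not represent Sentra's policy model.

```python
# Illustrative only: a toy policy engine that evaluates classified data
# assets and returns the policies they violate. Field names are invented.
from dataclasses import dataclass

@dataclass
class DataAsset:
    name: str
    sensitivity: str                    # e.g. "pii", "pci", "phi", or "none"
    region: str                         # storage region, e.g. "eu-west-1"
    contains_eu_personal_data: bool = False
    shared_externally: bool = False
    access_scope: str = "restricted"    # or "org-wide", "public"

POLICIES = [
    ("Sensitive data shared with external users",
     lambda a: a.sensitivity != "none" and a.shared_externally),
    ("Overly broad internal access to sensitive data",
     lambda a: a.sensitivity != "none" and a.access_scope == "org-wide"),
    ("EU personal data stored outside approved EU regions",
     lambda a: a.contains_eu_personal_data and not a.region.startswith("eu-")),
]

def evaluate(asset: DataAsset) -> list[str]:
    """Return the names of every policy this asset violates."""
    return [name for name, check in POLICIES if check(asset)]

finding = DataAsset(name="customer-exports.csv", sensitivity="pii",
                    region="us-east-1", contains_eu_personal_data=True,
                    shared_externally=True)
print(evaluate(finding))
```

In practice, the same checks run continuously as data is created, moved, or shared, rather than once per audit cycle.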

Three Data Security Policy Use Cases That Deliver Immediate Value

As organizations prepare for 2026, most start with policies that reduce data exposure quickly.

1. Limiting Data Exposure and Ransomware Impact

Misconfigured access and excessive sharing remain the most common causes of data exposure. In cloud and SaaS environments, these issues often emerge gradually, and go unnoticed without automation.

High-impact policies include:

  • Sensitive data shared with external users: Detect files containing credentials, PII, or financial data that are accessible to outside collaborators.
  • Overly broad internal access to sensitive data: Identify data shared with “Anyone in the organization,” significantly increasing exposure during account compromise.

These policies reduce blast radius and help prevent data from becoming leverage in extortion-based attacks.

2. Enforcing Secure Data Storage and Handling (PCI DSS, HIPAA, SOC 2)

Compliance violations in 2025 rarely result from intentional misuse. They happen because sensitive data quietly appears in the wrong systems.

Common policy findings include:

  • Payment card data in application logs or monitoring tools: A persistent PCI DSS issue, especially in modern microservice environments.
  • Employee or patient records stored in collaboration platforms: PII and PHI often end up in user-managed drives without appropriate safeguards.

Automated policies continuously detect these conditions and support fast remediation, reducing audit findings and operational risk.

3. Maintaining Data Residency and Sovereignty Compliance

As global data protection enforcement intensifies, data residency violations remain one of the most common and costly compliance failures.

Automated policies help identify:

  • EU personal data stored outside approved EU regions: A direct GDPR violation that is common in multi-cloud and SaaS environments.
  • Cross-region replicas and backups containing regulated data: Secondary storage locations frequently fall outside compliance controls.

These policies enable organizations to demonstrate ongoing compliance, not just point-in-time alignment.

What Modern Data Security Policies Must Do (2026-Ready)

As teams move into 2026, effective data security policies share three traits:

  1. They are data-aware: Policies are based on data sensitivity - not just resource labels or storage locations.
  2. They operate continuously: Policies evaluate changes as data is created, moved, shared, or copied into new systems.
  3. They drive action: Every violation maps to a remediation path: restrict access, move data, or delete it.

This is what allows security teams to scale governance without slowing the business.

Conclusion: From Static Rules to Continuous Data Governance

Heading into 2026, automated data security policies are no longer just compliance tooling; they are a core layer of modern security architecture.

They allow organizations to:

  • Reduce exposure and ransomware risk
  • Enforce regulatory requirements continuously
  • Govern sensitive data across cloud, SaaS, and AI workflows

Most importantly, they replace reactive audits with real-time data governance.

Organizations that invest in automated, data-aware security policies today will enter 2026 better prepared for regulatory scrutiny, evolving threats, and the continued growth of their data footprint.

<blogcta-big>

Read More
Ward Balcerzak
Ward Balcerzak
December 17, 2025
3
Min Read

How CISOs Will Evaluate DSPM in 2026: 13 New Buying Criteria for Security Leaders

How CISOs Will Evaluate DSPM in 2026: 13 New Buying Criteria for Security Leaders

Data Security Posture Management (DSPM) has quickly become part of mainstream security, gaining ground on older solutions and newer categories like XDR and SSE. Beneath the hype, most security leaders share the same frustration: too many products promise results but simply can't deliver in the messy, large-scale settings that enterprises actually have. The DSPM market is expected to jump from $1.86B in 2024 to $22.5B by 2033, giving buyers more choice - and greater pressure - to demand what really sets a solution apart for the coming years.

Instead of letting vendors dictate the RFP, what if CISOs led the process themselves? Fast-forward to 2026 and the checklist a CISO uses to evaluate DSPM solutions barely resembles the checklists of the past. Here are the 13 criteria everyone should insist on - criteria most vendors would rather you ignore, but industry leaders like Sentra are happy to highlight.

Why Legacy DSPM Evaluation Fails Modern CISOs

Traditional DSPM/DCAP evaluations were all about ticking off feature boxes: Can it scan S3 buckets? Show file types? But most CISOs I meet point to poor data visibility as their biggest vulnerability. It's already obvious that today’s fragmented, agent-heavy tools aren’t cutting it.

So, what’s changed for 2026? Massive data volumes, new unstructured formats like chat logs or AI training sets, and rapid cloud adoption mean security leaders now need a different class of protection.

The right platform:

  • Works without agents, everywhere you operate
  • Focuses on bringing real, risk-based context - not just adding more alerts
  • Automates compliance and fixes identity/data governance gaps
  • Manages both structured and unstructured data across the whole organization

Old evaluation checklists don’t come close. It’s time to update yours.

The 13 DSPM Buying Criteria Vendors Hope You Don’t Ask

Here’s what should be at the heart of every modern assessment, especially for 2026:

  1. Is the platform truly agentless, everywhere? Agent-based designs slow you down and block coverage. The best solutions set up in minutes, with absolutely no agents - across SaaS, IaaS, or on-premises - and will always discover any unknown and shadow data.
  2. Does it operate fully in-environment? Your data needs to stay in your cloud or region - not copied elsewhere for analysis. In-environment processing guards privacy, simplifies compliance, and matches global regulations (Cloud Security Alliance).
  3. Can it accurately classify unstructured data (>98% accuracy)? Most tools stumble outside of databases. Insist on AI-powered classification that understands language, context, and sensitivity. This covers everything from PDF files to Zoom recordings to LLM training data.
  4. How does it handle petabyte-scale scanning, and will it break the bank? Legacy options get expensive as data grows. You need tools that can scan quickly and stay cost-effective across multi-cloud and hybrid environments at massive scale.
  5. Does it unify data and identity governance? Very few platforms support both human and machine identities - especially for service accounts or access across clouds. Only end-to-end coverage breaks down barriers between IT, business, and security.
  6. Can it surface business-contextualized risk insights? You need more than technical vulnerability. Leading platforms map sensitive data by its business importance and risk, making it easier to prioritize and take action.
  7. Is deployment frictionless and multi-cloud native? DSPM should work natively in AWS, Azure, GCP, and SaaS, no complicated integrations required. Insist on fast, simple onboarding.
  8. Does it offer full remediation workflow automation? It’s not enough to raise the alarm. You want exposures fixed automatically, at scale, without manual effort.
  9. Does this fit within my Data Security Ecosystem? Choose only platforms that integrate with and enrich your current data governance stack so every tool operates from the same source of truth without adding operational overhead.
  10. Are compliance and security controls bridged in a unified dashboard? No more switching between tools. Choose platforms where compliance and risk data are combined into a single view for GRC and SecOps.
  11. Does it support business-driven data discovery (e.g., by project, region, or owner)? You need dynamic views tied to business needs, helping cloud initiatives move faster without adding risk, so security can become a business enabler.
  12. What’s the track record on customer outcomes at scale? Actual results in complex, high-volume settings matter more than demo promises. Look for real stories from large organizations.
  13. How is pricing structured for future growth? Beware of pricing that seems low until your data doubles. Look for clear, usage-based models so expansion won’t bring hidden costs.

Agentless, In-Environment Power: Why It’s the New Gold Standard

Agentless, in-environment architecture removes hassles with endpoint installs, connectors, and worries about where your data goes. Gartner has highlighted that this approach reduces regulatory headaches and enables fast onboarding. As organizations keep adding new cloud and hybrid systems, only these platforms can truly scale for global teams and strict requirements.

Sentra’s platform keeps all processing inside your environment. There’s no need to export your data, offering peace of mind for privacy, sovereignty, and speed. With regulations increasing everywhere, this approach isn’t just helpful; it’s essential.

Classification Accuracy and Petabyte-Scale Efficiency: The Must-Haves for 2026

Unstructured data is growing fast, and workloads are now more diverse than ever. The difference between basic scanning and real, AI-driven classification is often the difference between protecting your company or ending up on the breach list. Leading platforms, including Sentra, deliver over 95% classification accuracy by using large language models and in-house methods across both structured and unstructured data.

Why are speed and scale so important? Old-school solutions were built with smaller data volumes in mind. Today, DSPM platforms must quickly and affordably identify and secure data in vast environments. Sentra’s scanning is both fast and affordable, keeping up as your data grows. To learn more about these challenges, read: Reducing Cloud Data Attack Risk.

Don’t Settle: Redefining Best-in-Class DSPM Buying Criteria for 2026

Many vendors are still only comfortable offering the basics, but the demands facing CISOs today are anything but basic. Combining identity and data governance, multi-cloud support that works out of the box, and risk insights mapped to real business needs - these are the essential elements for protecting today’s and tomorrow’s data. If a solution doesn’t check all 13 boxes, you’re already limiting your security program before you start.

Need a side-by-side comparison for your next decision? Request a personalized demo to see exactly how Sentra meets every requirement.

Conclusion

With AI further accelerating data growth, security teams can’t afford to settle for legacy features or generic checklists. By insisting on meaningful criteria - true agentless design, in-environment processing, precise AI-driven classification, scalable affordability, and business-first integration - CISOs set a higher standard for both their own organizations and the wider industry.

Sentra is ready to help you raise the bar. Contact us for a data risk assessment, or to discuss how to ensure your next buying decision leads to better protection, less risk, and a stronger position for the future.

Continue the Conversation

If you want to go deeper into how CISOs are rethinking data security, I explore these topics regularly on Guardians of the Data, a podcast focused on real-world data protection challenges, evolving DSPM strategies, and candid conversations with security leaders.

Watch or listen to Guardians of the Data for practical insights on securing data in an AI-driven, multi-cloud world.

<blogcta-big>

Read More
Nikki Ralston
Nikki Ralston
Romi Minin
Romi Minin
December 16, 2025
3
Min Read

Sentra Is One of the Hottest Cybersecurity Startups

Sentra Is One of the Hottest Cybersecurity Startups

We knew we were on a hot streak, and now it’s official.

Sentra has been named one of CRN’s 10 Hottest Cybersecurity Startups of 2025. This recognition is a direct reflection of our commitment to redefining data security for the cloud and AI era, and of the growing trust forward-thinking enterprises are placing in our unique approach.

This milestone is more than just an award. It shows our relentless drive to protect modern data systems and gives us a chance to thank our customers, partners, and the Sentra team whose creativity and determination keep pushing us ahead.

The Market Forces Fueling Sentra’s Momentum

Cybersecurity is undergoing major changes. With 94% of organizations worldwide now relying on cloud technologies, the rapid growth of cloud-based data and the rise of AI agents have made security both more urgent and more complicated. These shifts are creating demands for platforms that combine unified data security posture management (DSPM) with fast data detection and response (DDR).

Industry data highlights this trend: over 73% of enterprise security operations centers are now using AI for real-time threat detection, leading to a 41% drop in breach containment time. The global cybersecurity market is growing rapidly, estimated to reach $227.6 billion in 2025, fueled by the need to break down barriers between data discovery, classification, and incident response (2025 cybersecurity market insights). In 2025, organizations will spend about 10% more on cyber defenses, which will only increase the demand for new solutions.

Why Recognition by CRN Matters and What It Means

Landing a place on CRN’s 10 Hottest Cybersecurity Startups of 2025 is more than publicity for Sentra. It signals we truly meet the moment. Our rise isn’t just about new features; it’s about helping security teams tackle the growing risks posed by AI and cloud data head-on. This recognition follows our mention as a CRN 2024 Stellar Startup, a sign of steady innovation and mounting interest from analysts and enterprises alike.

Being on CRN’s list means customers, partners, and investors value Sentra’s straightforward, agentless data protection that helps organizations work faster and with more certainty.

Innovation Where It Matters: Sentra’s Edge in Data and AI Security

Sentra stands out for its practical approach to solving urgent security problems, including:

  • Agentless, multi-cloud coverage: Sentra identifies and classifies sensitive data and AI agents across cloud, SaaS, and on-premises environments without any agents or hidden gaps.
  • Integrated DSPM + DDR: We go further than monitoring posture by automatically investigating and responding to incidents, so security teams can act quickly (see why DSPM+DDR matters).
  • AI-driven advancements: Features like domain-specific AI classifiers for unstructured data, advanced AI classification leveraging SLMs, and Data Security for AI Agents and Microsoft M365 Copilot help customers stay in control as they adopt new technologies (Sentra’s AI-powered innovation).

With new attack surfaces popping up all the time, from prompt injection to autonomous agent drift, Sentra’s architecture is built to handle the world of AI.

A Platform Approach That Outpaces the Competition

There are plenty of startups aiming to tackle AI, cloud, and data security challenges. Companies like 7AI, Reco, Exaforce, and Noma Security have been in the news for their funding rounds and targeted solutions. Still, very few offer the kind of unified coverage that sets Sentra apart.

Most competitors stick to either monitoring SaaS agents or reducing SOC alerts. Sentra does more by providing both agentless multi-cloud DSPM and built-in DDR. This gives organizations visibility, context, and the power to act in one platform. With features like Data Security for AI Agents, Sentra helps enterprises go beyond managing alerts by automating meaningful steps to defend sensitive data everywhere.

Thanks to Our Community and What’s Next

This honor belongs first and foremost to our community: customers breaking new ground in data security, partners building solutions alongside us, and a team with a clear goal to lead the industry.

If you haven’t tried Sentra yet, now’s a great time to see what we can do for your cloud and AI data security program. Find out why we’re at the forefront: schedule a personalized demo or read CRN’s full 2025 list for more insight.

Conclusion

Being named one of CRN’s hottest cybersecurity startups isn’t just a milestone. It pushes us forward toward our vision - data security that truly enables innovation. The market is changing fast, but Sentra’s focus on meaningful security results hasn't wavered.

Thank you to our customers, partners, investors, and team for your ongoing trust and teamwork. As AI and cloud technology shape the future, Sentra is ready to help organizations move confidently, securely, and quickly.

<blogcta-big>

Read More
Meni Besso
Meni Besso
December 15, 2025
3
Min Read

AI Governance Starts With Data Governance: Securing the Training Data and Agents Fuelling GenAI

AI Governance Starts With Data Governance: Securing the Training Data and Agents Fuelling GenAI

Generative AI isn’t just transforming products and processes - it’s expanding the entire enterprise risk surface. As C-suite executives and security leaders rush to unlock GenAI’s competitive advantages, a hard truth is clear: effective AI governance depends on solid, end-to-end data governance.

Sensitive data is increasingly used for model training and autonomous agents. If organizations fail to discover, classify, and secure these resources early, they risk privacy breaches, regulatory violations, and reputational damage. To make GenAI safe, compliant, and trustworthy from the start, data governance for generative AI needs to be a top boardroom priority.

Why Data Governance is the Cornerstone of GenAI Trustworthiness and Safety

The opportunities and risks of generative AI depend not only on algorithms, but also on the quality, security, and history of the underlying data. AWS reports that 39% of Chief Data Officers see data cleaning, integration, and storage as the main barriers to GenAI adoption, and 49% of enterprises make data quality improvement a core focus for successful AI projects (AWS Enterprise Strategy - Data Governance). Without strong data governance, sensitive information can end up in training sets, leading to unintentional leaks or model behaviors that break privacy and compliance.

Regulatory requirements, such as the Generative AI Copyright Disclosure Act, are evolving fast, raising the pressure to document data lineage and make sure unauthorized or non-compliant datasets stay out. In the world of GenAI, governance goes far beyond compliance checklists. It’s essential for building AI that is safe, auditable, and trusted by both regulators and customers.

New Attack Surfaces: Risks From Unsecured Data and Shadow AI Agents

GenAI adoption increases risk. Today, 79% of organizations have already piloted or deployed agentic AI, with many using LLM-powered agents to automate key workflows (Wikipedia - Agentic AI). But if these agents, sometimes functioning as "shadow AI" outside official oversight, access sensitive or unclassified data, the fallout can be severe.

In 2024, over 30% of AI data breaches involved insider threats or accidental disclosure, according to Quinnox Data Governance for AI. Autonomous agents can mistakenly reveal trade secrets, financial records, or customer data, damaging brand trust. The risk multiplies rapidly if sensitive data isn’t properly governed before flowing into GenAI tools. To stop these new threats, organizations need up-to-the-minute insight and control over both data and the agents using it.

Frameworks and Best Practices for Data Governance in GenAI

Leading organizations now follow data governance frameworks that match changing regulations and GenAI's technical complexity. Standards like NIST AI Risk Management Framework (AI RMF) and ISO/IEC 42001:2023 are setting the benchmarks for building auditable, resilient AI programs (Data and AI Governance - Frameworks & Best Practices).

Some of the most effective practices:

  • Managing metadata and tracking full data lineage
  • Using data access policies based on role and context
  • Automating compliance with new AI laws
  • Monitoring data integrity and checking for bias

A strong data governance program for generative AI focuses on ongoing data discovery, classification, and policy enforcement - before data or agents meet any AI models. This approach helps lower risk and gives GenAI efforts a solid base of trust.

Sentra’s Approach: Proactive Pre-Integration Discovery and Continuous Enforcement

Many tools only secure data after it’s already being used with GenAI applications. This reactive strategy leaves openings for risk. Sentra takes a different path, letting organizations discover, classify, and protect sensitive data sources before they interact with language models or agentic AI.

By using agentless, API-based discovery and classification across multi-cloud and SaaS environments, Sentra delivers immediate visibility and context-aware risk scoring for all enterprise data assets. With automated policies, businesses can mask, encrypt, or restrict data access depending on sensitivity, business requirements, or audit needs. Continuous monitoring tracks which AI agents are accessing data, making granular controls and fast intervention possible. These processes help stop shadow AI, keep unauthorized data out of LLM training, and maintain compliance as rules and business needs shift.

Guardrails for Responsible AI Growth Across the Enterprise

The future of GenAI depends on how well businesses can innovate while keeping security and compliance intact. As AI regulations become stricter and adoption speeds up, Sentra’s ability to provide ongoing, automated discovery and enforcement at scale is critical. Further reading: AI Automation & Data Security: What You Need To Know.

With Sentra, organizations can:

  • Stop unapproved or unchecked data from being used in model training
  • Identify shadow AI agents or risky automated actions as they happen
  • Support audits with complete data classification
  • Meet NIST, ISO, and new global standards with ease

Sentra gives CISOs, CDOs, and executives a proactive, scalable way to adopt GenAI safely, protecting the business before any model training even begins.

AI Governance Starts with Data Governance

AI governance for generative AI starts, and is won or lost, at the data layer. If organizations don’t find, classify, and secure sensitive data first, every other security measure remains reactive and ineffective. As generative AI, agent automation, and regulatory demands rise, a unified data governance strategy isn’t just good practice, it’s an urgent priority. Sentra gives security and business teams real control, making sure GenAI is secure, compliant, and trusted.

<blogcta-big>

Read More
Ward Balcerzak
Ward Balcerzak
December 11, 2025
3
Min Read

US State Privacy Laws 2026: DSPM Compliance Requirements & What You Need to Know

US State Privacy Laws 2026: DSPM Compliance Requirements & What You Need to Know

By 2026, American data privacy will look very different as a wave of new state laws redefines what it means to protect sensitive information. Organizations face a regulatory maze: more than 20 states will soon require not only “reasonable security” but also Data Protection Impact Assessments (DPIAs), explicit limits on data collection, and, in some cases, detailed data inventories. These requirements are quickly becoming standard, and ignoring them simply isn’t an option. The risk of penalties and enforcement actions is climbing fast.

But through all these changes, one major question remains: How can any organization comply if it doesn’t even know where its most sensitive data is? Data Security Posture Management (DSPM) has become the solution, making data visibility and automation central for meeting ongoing compliance needs.

Mapping the New Wave of State Privacy Mandates

Several state privacy laws going into effect in 2025 and 2026 are raising the stakes for compliance. Kentucky, Indiana, and Rhode Island’s new laws, effective January 1, 2026, require both security measures and DPIAs for handling high-risk or sensitive data. Minnesota’s law stands out even more: it moves past earlier vague “reasonable” security language and mandates comprehensive data inventories.

Beyond Minnesota’s explicit inventory requirement, other key states include Maryland, with strict data minimization rules, and Tennessee, which gives organizations an affirmative defense if they’ve adopted a NIST-aligned privacy program. These requirements mean organizations now need to track what data they collect, know exactly where it’s stored, and show evidence of compliance when asked. If your organization operates in more than one state, keeping up with this web of laws will soon become impossible without dedicated solutions (US consumer privacy laws 2025 update).

Why Data Visibility is Now Foundational to Compliance

To meet DPIA, minimization, and security safeguard rules, you need full visibility into where sensitive or regulated data lives - and how it moves across your environment. Recent privacy laws are moving closer to GDPR-like standards, with DPIAs required not only for biometric data but also for broad categories like targeted advertising and profiling. Minnesota leads with its clear requirement for full data inventories, setting the standard that you can’t prove compliance unless you understand your data (US cybersecurity and data privacy review and outlook 2025).

This shift puts DSPM front and center: you now need ongoing discovery and classification of your entire sensitive data footprint. Without a strong data foundation, organizations will find it hard to complete DPIAs, handle audits, or defend themselves in investigations.

Automation: The Only Viable Path for Assessment and Audit Readiness

State privacy rules are getting more complicated, and many enforcement authorities are shortening or removing 'right-to-cure' periods. That means manual compliance simply won’t keep up. Automation is now the only way to manage compliance as regulations tighten (5 trends to watch: 2025 US data privacy & cybersecurity).

With DSPM and automation, organizations get ongoing discovery, real-time data classification, and instant evidence collection - all required for fast DPIAs and responsive audits. For companies facing regulators or preparing for multi-state oversight, this means you already have the proof and documentation you need. Relying on spreadsheets or one-time assessments at this point only increases your risk.

Sentra: Your Strategic Bridge to Privacy Law Compliance

Sentra’s DSPM platform is built to tackle these expanding privacy law requirements. The agentless platform covers AWS, Azure, GCP, SaaS, and hybrid environments, removing both visibility gaps and the hassle found in older solutions (Sentra: DSPM for compliance use cases).

With continuous, automated discovery and data classification, you always know exactly where your sensitive data is, how it moves, and how it’s being protected. Sentra’s integrated Data Detection & Response (DDR) catches and fixes risks or policy violations early, closing gaps before regulators - or attackers - can take advantage (Sensitive data exposure insight). Combined with clear reporting and on-demand audit documentation, Sentra helps you meet new state privacy laws and stay audit-ready, even as your business or data needs change.

Conclusion

The arrival of new state privacy laws in 2025 and 2026 is changing how organizations must handle sensitive data. Security safeguards, DPIAs, minimization, and full inventories are now required - not just nice-to-have.

DSPM is now a compliance must-have. Without complete data visibility and automation, following the web of state rules isn’t just difficult - it’s impossible. Sentra’s agentless, multi-cloud platform keeps your organization continuously informed, giving compliance, security, and privacy teams the control they need to keep up with new regulations.

Want to see how your organization stacks up for 2026 laws? Book a DSPM Compliance Readiness Assessment or check out Sentra’s automated DPIA tools today.

<blogcta-big>

Read More
David Stuart
David Stuart
Gilad Golani
Gilad Golani
December 4, 2025
3
Min Read

Zero Data Movement: The New Data Security Standard that Eliminates Egress Risk

Zero Data Movement: The New Data Security Standard that Eliminates Egress Risk

Cloud adoption and the explosion of data have boosted business agility, but they’ve also created new headaches for security teams. As companies move sensitive information into multi-cloud and hybrid environments, old security models start to break down. Shuffling data for scanning and classification adds risk, piles on regulatory complexity, and drives up operational costs.

Zero Data Movement (ZDM) offers a new architectural approach, reshaping how advanced Data Security Posture Management (DSPM) platforms provide visibility, protection, and compliance. This post breaks down what makes ZDM unique, why it matters for security-focused enterprises, and how Sentra’s agentless, scalable design delivers a genuinely zero data movement DSPM.

Defining Zero Data Movement Architecture

Zero Data Movement (ZDM) sets a new standard in data security. The premise is straightforward: sensitive data should stay in its original environment for security analysis, monitoring, and enforcement. Older models require copying, exporting, or centralizing data to scan it, while ZDM ensures that all security actions happen directly where data resides.

ZDM removes egress risk - shrinking the attack surface and reducing regulatory issues. For organizations juggling large cloud deployments and tight data residency rules, ZDM isn’t just an improvement - it's essential. Groups like the Cloud Security Alliance and new privacy regulations are moving the industry toward designs that build in privacy and non-stop protection.

Risks of Data Movement: Compliance, Cost, and Egress Exposure

Every time data is copied, exported, or streamed out of its native environment, new risks arise. Data movement creates challenges such as:

  • Egress risk: Data at rest or in transit outside its original environment increases the risk of breach, especially as those environments may be less secure.
  • Compliance and regulatory exposure: Moving data across borders or different clouds can break geo-fencing and privacy controls, leading to potential violations and steep fines.
  • Loss of context and control: Scattered data makes it harder to monitor everything, leaving gaps in visibility.
  • Rising total cost of ownership (TCO): Scanning and classification can incur heavy cloud compute costs - so efficiency matters. Exporting or storing data, especially shadow data, drives up storage, egress, and compliance costs as well.

As more businesses rely on data, moving it unnecessarily only increases the risk - especially with fast-changing cloud regulations.

Legacy and Competitor Gaps: Why Data Movement Still Happens

Not every security vendor practices true zero data movement, and the differences are notable. Products from Cyera, Securiti, or older platforms still require temporary data exporting or duplication for analysis. This might offer a quick setup, but it exposes users to egress risks, insider threats, and compliance gaps - problems that are worse in regulated fields.

Competitors like Cyera often rely on shortcuts that fall short of ZDM’s requirements. Securiti and similar providers depend on connectors, API snapshots, or central data lakes, each adding potential risk and spreading data further than necessary. With ZDM, security operations like monitoring and classification happen entirely locally, removing the need to trust external storage or aggregation.

The Business Value of Zero Data Movement DSPM

Zero data movement DSPM changes the equation for businesses:

  • Designed for compliance: Data remains within controlled environments, shrinking audit requirements and reducing breach likelihood.
  • Lower TCO and better efficiency: Eliminates hidden expenses from extra storage, duplicate assets, and exporting to external platforms.
  • Regulatory clarity and privacy: Supports data sovereignty, cross-border rules, and new zero trust frameworks with an egress-free approach.

Sentra’s agentless, cloud-native DSPM provides these benefits by ensuring sensitive data is never moved or copied. And Sentra delivers these benefits at scale - across multi-petabyte enterprise environments - without the performance and cost tradeoffs others suffer from. Real scenarios show the results: financial firms keep audit trails without data ever leaving allowed regions. Healthcare providers safeguard PHI at its source. Global SaaS companies secure customer data at scale, cost-effectively while meeting regional rules.

Future-Proofing Data Security: ZDM as the New Standard

With data volumes expected to hit 181 zettabytes in 2025, older protection methods that rely on moving data can’t keep up. Zero data movement architecture meets today's security demands and supports zero trust, metadata-driven access, and privacy-first strategies for the future.

Companies wanting to avoid dead ends should pick solutions that offer unified discovery, classification and policy enforcement without egress risk. Sentra’s ZDM architecture makes this possible, allowing organizations to analyze and protect information where it lives, at cloud speed and scale.

Conclusion

Zero Data Movement is more than a technical detail - it's a new architectural standard for any organization serious about risk control, compliance, and efficiency. As data grows and regulations become stricter, the old habits of moving, copying, or centralizing sensitive data will no longer suffice.

Sentra stands out by delivering a zero data movement DSPM platform that's agentless, real-time, and truly multi-cloud. For security leaders determined to cut egress risk, lower compliance spending, and get ahead in privacy, ZDM is the clear path forward.

<blogcta-big>

Read More
Charles Garlow
Charles Garlow
December 3, 2025
3
Min Read

Petabyte Scale is a Security Requirement (Not a Feature): The Hidden Cost of Inefficient DSPM

Petabyte Scale is a Security Requirement (Not a Feature): The Hidden Cost of Inefficient DSPM

As organizations scramble to secure their sprawling cloud environments and deploy AI, many are facing a stark realization: handling petabyte-scale data is now a basic security requirement. With sensitive information multiplying across multiple clouds, SaaS, and AI-driven platforms, security leaders can't treat true data security at scale as a simple add-on or upgrade.

At the same time, speeding up digital transformation means higher and less visible operational costs for handling this data surge. Older Data Security Posture Management (DSPM) tools, especially those boasting broad, indiscriminate scans as evidence of their scale, are saddling organizations with rising cloud bills, slowdowns, and dangerous gaps in visibility. The costs of securing petabyte-scale data are now economic and technical, demanding efficiency instead of just scale. Sentra solves this with a highly-efficient cloud-native design, delivering 10x lower cloud compute costs.

Why Petabyte Scale is a Security Requirement

Data environments have exploded in both size and complexity. For Fortune 500 companies, fast-growing SaaS providers, and global organizations, data exists across public and hybrid clouds, business units, regions, and a stream of new applications.

Regulations such as GDPR, HIPAA, and rules from the SEC now demand current data inventories and continuous proof of risk management. In this environment, defending data at the petabyte level is now essential. Failing to classify and monitor this data efficiently means risking compliance and losing business trust. Security teams are feeling the strain. I meet security teams every day, and too many of them still struggle with data visibility and are already seeing cracks form in their current toolsets as data scales.

The Hidden Cost of Inefficient DSPM: API Calls and Egress Bills

How DSPM tools perform scanning and discovery drives the real costs of securing petabyte-scale data. Some vendors highlight their capacity to scan multiple petabytes daily. But here's the reality: scanning everything, record by record, relying on huge numbers of API calls, becomes very expensive as your data estate grows.

Every API call can rack up costs, and all the resulting data egress and compute add up too. Large organizations might spend tens of thousands of dollars each month just to track what’s in their cloud. Even worse, older "full scan" DSPM strategies jam up operations with throttling, delays, and a flood of alerts that bury real risk. These legacy approaches simply don’t scale, and organizations relying on them end up paying more while knowing less.
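
To see how quickly this compounds, here is a rough, back-of-the-envelope cost model. The per-request and egress rates are illustrative placeholders, not actual cloud pricing - substitute your provider’s current numbers.

```python
# Back-of-the-envelope cost model for "scan everything" DSPM.
# Rates below are illustrative placeholders, NOT actual cloud pricing.
def monthly_scan_cost(objects: int, avg_object_gb: float,
                      price_per_1k_requests: float = 0.0004,
                      egress_price_per_gb: float = 0.09,
                      full_rescans_per_month: int = 4) -> float:
    requests = objects * full_rescans_per_month
    egress_gb = objects * avg_object_gb * full_rescans_per_month
    return (requests / 1000) * price_per_1k_requests + egress_gb * egress_price_per_gb

# Example: 50M objects averaging 5 MB, rescanned weekly, every object pulled out of its environment.
print(f"${monthly_scan_cost(50_000_000, 0.005):,.0f} per month")  # roughly $90,000 at these rates
```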

 

Cyera’s "Petabyte Scale" Claims: At What Cloud Cost?

Cyera promotes its tool as an AI-native, agentless DSPM that can scan as much as 2 petabytes daily. While that’s an impressive technical achievement, the strategy of scanning everything leads directly to massive cloud infrastructure costs: frequent API hits, heavy egress, and big bills from AWS, Azure, and GCP.

At scale, these charges don’t just appear on invoices, they can actually stop adoption and limit security’s effectiveness. Cloud operations teams face API throttling, slow results, and a surge in remediation tickets as risks go unfiltered. In these fast-paced environments, recognizing the difference between a real threat and harmless data comes down to speed. The Bedrock Security blog points out how inefficient setups buckle under this weight, leaving teams stuck with lagging visibility and more operational headaches.

Sentra’s 10x Efficiency: Optimized Scanning for Real-World Scale

Sentra takes another route to manage the costs of securing petabyte-scale data. It combines agentless discovery with scanning guided by context and metadata, using pattern recognition and an AI-driven clustering algorithm designed to detect machine-generated content such as log files, invoices, and similar data types. By intelligently sampling data within each cluster, Sentra delivers efficient scanning while keeping scanning costs down.

This approach lets scanning be prioritized by risk and business value instead of rescanning the same data over and over; it skips unnecessary API calls, lowers egress, and keeps cloud bills in check.
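
The sketch below illustrates the general cluster-then-sample pattern under simple assumptions (a cheap metadata signature as the cluster key and random sampling). Sentra’s actual clustering and sampling logic is proprietary and considerably more sophisticated.

```python
# Hedged sketch of cluster-then-sample scanning (a generic illustration, not Sentra's algorithm):
# group similar machine-generated files, deep-scan a sample from each cluster,
# and propagate the resulting label to the rest of the cluster.
import random
from collections import defaultdict

def cluster_key(file_meta: dict) -> tuple:
    # Cheap metadata-based signature; a real system would use learned embeddings.
    return (file_meta["extension"], file_meta["schema_hash"], round(file_meta["size_kb"], -2))

def scan_with_sampling(files: list[dict], sample_rate: float, deep_scan) -> dict[str, str]:
    clusters = defaultdict(list)
    for f in files:
        clusters[cluster_key(f)].append(f)

    labels = {}
    for members in clusters.values():
        sample = random.sample(members, max(1, int(len(members) * sample_rate)))
        # Prefer any sensitive verdict found in the sample over "non-sensitive".
        cluster_label = max((deep_scan(f) for f in sample), key=lambda lbl: lbl != "non-sensitive")
        for f in members:  # propagate the sampled verdict to the whole cluster
            labels[f["path"]] = cluster_label
    return labels
```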

Large organizations gain a 10x efficiency edge: quicker classification of data, instant visibility into actual threats, lower operational expenses, and less demand on the network. By focusing attention only where it matters, Sentra matches data security posture management to the demands of current cloud growth and regulatory requirements.

This makes it possible for organizations to hit regulatory and audit targets without watching expenses spiral or opening up security gaps. Sentra offers multiple sampling levels - Quick (default), Moderate, Thorough, and Full - allowing customers to tailor their scanning strategy to balance cost and accuracy. For example, a highly regulated environment can be configured for a full scan, while less-regulated environments can use more efficient sampling. Petabyte-scale security thus gives users complete control over their data estate and becomes operationally and financially sustainable, rather than a technical milestone with a hidden cost.
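
As a simple illustration, a per-environment scan policy might map those levels to sampling rates like this. The level names come from the product; the rates and environment names below are hypothetical.

```python
# Hypothetical per-environment scan policy using the sampling levels named above.
# The level names come from the product; rates and environment tags are assumptions.
SAMPLING_LEVELS = {"quick": 0.05, "moderate": 0.25, "thorough": 0.60, "full": 1.00}

SCAN_POLICY = {
    "pci-prod":        "full",      # highly regulated: scan every object
    "analytics-stage": "moderate",
    "dev-sandboxes":   "quick",     # low risk: light sampling keeps costs down
}

def sample_rate_for(environment: str) -> float:
    return SAMPLING_LEVELS[SCAN_POLICY.get(environment, "quick")]
```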

Efficiency is Non-Negotiable

Fortune 500 companies and digital-first organizations can’t treat efficiency as optional. Inefficient DSPM tools pile on costs, drain resources, and let vulnerabilities slip through, turning their security posture into a liability once scale becomes a factor. Sentra’s platform shows that efficiency is security: with targeted scanning, real context, and unified detection and response, organizations gain clarity and compliance while holding down expenses.

Don’t let your data protection approach crumble under petabyte-scale pressure. See what Sentra can do, reduce costs, and keep essential data secure - before you end up responding to breaches or audit failures.

Conclusion

Securing data at the petabyte level isn't some future aspiration - it's the standard for enterprises right now. Treating it as a secondary feature isn’t just shortsighted; it puts your company at risk, financially and operationally.

The right DSPM architecture brings efficiency, not just raw scale. Sentra delivers real-time, context-rich security posture with far greater efficiency, so your protection and your cloud spending can keep up with your growing business. Security needs to grow along with scale. Rising costs and new risks shouldn’t grow right alongside it.

Want to see how your current petabyte security posture compares? Schedule a demo and see Sentra’s efficiency for yourself.

<blogcta-big>

Read More
Shiri Nossel
Shiri Nossel
December 1, 2025
4
Min Read

How Sentra Uncovers Sensitive Data Hidden in Atlassian Products

How Sentra Uncovers Sensitive Data Hidden in Atlassian Products

Atlassian tools such as Jira and Confluence are the beating heart of software development and IT operations. They power everything from sprint planning to debugging production issues. But behind their convenience lies a less-visible problem: these collaboration platforms quietly accumulate vast amounts of sensitive data, often over years, that security teams can’t easily monitor or control.

The Problem: Sensitive Data Hidden in Plain Sight

Many organizations rely on Jira to manage tickets, track incidents, and communicate across teams. But within those tickets and attachments lies a goldmine of sensitive information:

  • Credentials and access keys to different environments.
  • Intellectual property, including code snippets and architecture diagrams.
  • Production data used to reproduce bugs or validate fixes — often in violation of data-handling regulations.
  • Real customer records shared for troubleshooting purposes.

This accumulation isn’t deliberate; it’s a natural byproduct of collaboration. However, it results in a long-tail exposure risk - historical tickets that remain accessible to anyone with permissions.

The Insider Threat Dimension

Because Jira and Confluence retain years of project history, employees and contractors may have access to data they no longer need. In some organizations, teams include offshore or external contributors, multiplying the risk surface. Any of these users could intentionally or accidentally copy or export sensitive content at any moment.

Why Sensitive Data Is So Hard to Find

Sensitive data in Atlassian products hides across three levels, each requiring a different detection approach:

  1. Structured Data (Records): Every ticket or page includes structured fields - reporter, status, labels, priority. These schemas are customizable, meaning sensitive fields can appear unpredictably. Security teams rarely have visibility or consistent metadata across instances.

  2. Unstructured Data (Descriptions & Discussions): Free-text fields are where developers collaborate — and where secrets often leak. Comments can contain access tokens, internal URLs, or step-by-step guides that expose system details.
  3. Unstructured Data (Attachments): Screenshots, log files, spreadsheets, code exports, or even database snapshots are commonly attached to tickets. These files may contain credentials, customer PII, or proprietary logic, yet they are rarely scanned or governed.
A Jira issue screenshot (with sensitive content redacted) illustrating these three levels.

The Challenge for Security Teams

Traditional security tools were never designed for this kind of data sprawl. Atlassian environments can contain millions of tickets and pages, spread across different projects and permissions. Manually auditing this data is impractical. Even modern DLP tools struggle to analyze the context of free text or attachments embedded within these platforms.

Compliance teams face an uphill battle: GDPR, HIPAA, and SOC 2 all require knowing where sensitive data resides. Yet in most Atlassian instances, that visibility is nonexistent.

How Sentra Solves the Problem

Sentra takes a different approach. Its cloud-native data security platform discovers and classifies sensitive data wherever it lives - across SaaS applications, cloud storage, and on-prem environments. When connecting your Atlassian environment, Sentra delivers visibility and control across every layer of Jira and Confluence.

Comprehensive Coverage

Sentra delivers consistent data governance across SaaS and cloud-native environments. When connected to Atlassian Cloud, Sentra’s discovery engine scans Jira and Confluence content to uncover sensitive information embedded in tickets, pages, and attachments, ensuring full visibility without impacting performance.

In addition, Sentra’s flexible architecture can be extended to support hybrid environments, providing organizations with a unified view of sensitive data across diverse deployment models.

AI-Based Classification

Using advanced AI models, Sentra classifies data across all three tiers:

  • Structured metadata, identifying risky fields and tags.
  • Unstructured text, analyzing ticket descriptions, comments, and discussions for credentials, PII, or regulated data.
  • Attachments, scanning files like logs or database snapshots for hidden secrets.

This contextual understanding distinguishes between harmless content and genuine exposure, reducing false positives.
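
For a sense of what even a basic sweep looks like, here is a hedged sketch that pulls recent Jira issues over Atlassian’s Cloud REST API and checks free text against a few secret patterns. The endpoint and field names reflect the public API as commonly documented but should be verified against your instance, and Sentra’s classification relies on AI models rather than regex alone.

```python
# Hedged sketch: sweep recent Jira issues for obvious secrets in free text.
# Endpoint and field names follow Atlassian's public Cloud REST API as commonly documented -
# treat them as assumptions and verify against your instance.
import re
import requests

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password_hint":  re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def find_secrets_in_issues(base_url: str, auth: tuple, jql: str = "updated >= -30d"):
    resp = requests.get(
        f"{base_url}/rest/api/2/search",
        params={"jql": jql, "fields": "summary,description", "maxResults": 100},
        auth=auth,          # (email, api_token) basic auth for Jira Cloud
        timeout=30,
    )
    resp.raise_for_status()
    for issue in resp.json().get("issues", []):
        text = (issue["fields"].get("description") or "") + issue["fields"].get("summary", "")
        hits = [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]
        if hits:
            yield issue["key"], hits
```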

Full Lifecycle Scanning

Sentra doesn’t just look at new tickets - it scans the entire historical archive to detect legacy exposure, while continuously monitoring for ongoing changes. This dual approach helps security teams remediate existing risks and prevent future leaks.

The Real-World Impact

Organizations using Sentra gain the ability to:

  • Prevent accidental leaks of credentials or production data in collaboration tools.
  • Enforce compliance by mapping sensitive data across Jira and Confluence.
  • Empower DevOps and security teams to collaborate safely without stifling productivity.

Conclusion

Collaboration is essential, but it should never compromise data security. Atlassian products enable innovation and speed, yet they also hold years of unmonitored information. Sentra bridges that gap by giving organizations the visibility and intelligence to discover, classify, and protect sensitive data wherever it lives, even in Jira and Confluence.

<blogcta-big>

Read More
Gilad Golani
Gilad Golani
November 27, 2025
3
Min Read

Unstructured Data Is 80% of Your Risk: Why DSPM 1.0 Vendors, Like Varonis and Cyera, Fail to Protect It at Petabyte Scale

Unstructured Data Is 80% of Your Risk: Why DSPM 1.0 Vendors, Like Varonis and Cyera, Fail to Protect It at Petabyte Scale

Unstructured data is the fastest-growing, least-governed, and most dangerous class of enterprise data. Emails, Slack messages, PDFs, screenshots, presentations, code repositories, logs, and the endless stream of GenAI-generated content — this is where the real risk lives.

The unstructured data dilemma is this: 80% of your organization’s data is essentially invisible to your current security tools, and the volume is climbing by up to 65% each year. This isn’t just a hypothetical - it’s the reality for enterprises as unstructured data spreads across cloud and SaaS platforms. Yet most Data Security Posture Management (DSPM) solutions - often called DSPM 1.0 - were never built to handle this explosion at petabyte scale. Legacy vendors and first-generation players like Cyera, in particular, were never designed to handle unstructured data at scale. Their architectures, classification engines, and scanning models break under real enterprise load.

Looking ahead to 2026, unstructured data security risk stands out as the single largest blind spot in enterprise security. If overlooked, it won’t just cause compliance headaches and soaring breach costs - it could put your organization in the headlines for all the wrong reasons.

The 80% Problem: Unstructured Data Dominates Your Risk

The Scale You Can’t Ignore - Over 80% of enterprise data is unstructured

  • Unstructured data is growing 55-65% per year; by 2025, the world will store more than 180 zettabytes of it.
  • 95% of organizations say unstructured data management is a critical challenge, yet less than 40% of data security budgets address this high-risk area.

Unstructured data is everywhere: cloud object stores, SaaS apps, collaboration tools, and legacy file shares. Unlike structured data in databases, it often lacks consistent metadata, access controls, or even basic visibility. This “dark data” is behind countless breaches, from accidental file exposures and overshared documents to sensitive AI training datasets left unmonitored.

The Business Impact - The average breach now costs $4-4.9M, with unstructured data often at the center.

  • Poor data quality, mostly from unstructured sources, costs the U.S. economy $3.1 trillion each year.
  • More than half of organizations report at least one non-compliance incident annually, with average costs topping $1M.

The takeaway: unstructured data isn’t just a storage problem.

Why DSPM 1.0 Fails: The Blind Spots of Legacy Approaches

Traditional Tools Fall Short in Cloud-First, Petabyte-Scale Environments

Legacy DSPM and DCAP solutions, such as Varonis or Netwrix, were built for an era when data lived on-premises, followed predictable structures, and grew at a manageable pace.

In today’s cloud-first reality, their limitations have become impossible to ignore:

  • Discovery Gaps: Agent-based scanning can’t keep up with sprawling, constantly changing cloud and SaaS environments. Shadow and dark data across platforms like Google Drive, Dropbox, Slack, and AWS S3 often go unseen.
  • Performance Limits: Once environments exceed 100 TB - and especially as they reach petabyte scale - these tools slow dramatically or miss data entirely.
  • Manual Classification: Most legacy tools rely on static pattern matching and keyword rules, causing them to miss sensitive information hidden in natural language, code, images, or unconventional file formats.
  • Limited Automation: They generate alerts but offer little or no automated remediation, leaving security teams overwhelmed and forcing manual cleanup.
  • Siloed Coverage: Solutions designed for on-premises or single-cloud deployments create dangerous blind spots as organizations shift to multi-cloud and hybrid architectures.

Example: Collaboration App Exposure

A global enterprise recently discovered thousands of highly sensitive files—contracts, intellectual property, and PII—were unintentionally shared with “anyone with the link” inside a cloud collaboration platform. Their legacy DSPM tool failed to identify the exposure because it couldn’t scan within the app or detect real-time sharing changes.

Further, even emerging DSPM tools often rely on pattern matching or LLM-based scanning. These approaches also fail for three reasons:

  • Inaccuracy at scale: LLMs hallucinate, mislabel, and require enormous compute.
  • Cost blow-ups: Vendors pass massive cloud bills back to customers or incur inordinate compute cost.
  • Architectural limitations: Without clustering and elastic scaling, large datasets overwhelm the system.

This is exactly where Cyera and legacy tools struggle - and where Sentra’s SLM-powered classifier thrives with >99% accuracy at a fraction of the cost.

The New Mandate: Securing Unstructured Data in 2026 and Beyond

GenAI and stricter privacy laws (GDPR, CCPA, HIPAA) have raised the stakes for unstructured data security. Gartner now recommends Data Access Governance (DAG) and AI-driven classification to reduce oversharing and prepare for AI-centric workloads.

What Modern Security Leaders Need

  • Agentless, Real-Time Discovery: No deployment hassles, continuous visibility, and coverage for unstructured data stores no matter where they live.
  • Petabyte-Scale Performance: Scan, classify, and risk-score all data, everywhere it lives.
  • AI-Driven Deep Classification: Use of natural language processing (NLP), Domain-specific  Small Language Models (SLMs), and context analysis for every unstructured format.
  • Automated Remediation: Playbooks that fix exposures, govern permissions, and ensure compliance without manual work.
  • Multi-Cloud & SaaS Coverage: Security that follows your data, wherever it goes.

Sentra: Turning the 80% Blind Spot into a Competitive Advantage

Sentra was built specifically to address the risks of unstructured data in 2026 and beyond. There are nuances involved in solving this, and selecting an appropriate solution is key to a sustainable approach. Here’s what sets Sentra apart:
 

  • Agentless Discovery Across All Environments: Instantly scans and classifies unstructured data across AWS, Azure, Google, M365, Dropbox, legacy file shares, and more - no agents required, no blind spots left behind.
  • Petabyte-Tested Performance: Designed for Fortune 500 scale, Sentra keeps speed and accuracy high across petabytes, not just terabytes.
  • AI-Powered Deep Classification: Our platform uses advanced NLP, SLMs, and context-aware algorithms to classify, label, and risk-score every file - including code, images, and AI training data, not just structured fields.
  • Continuous, Context-Rich Visibility: Real-time risk scoring, identity and access mapping, and automated data lineage show not just where data lives, but who can access it and how it’s used.
  • Automated Remediation and Orchestration: Sentra goes beyond alerts. Built-in playbooks fix permissions, restrict sharing, and enforce policies within seconds.
  • Compliance-First, Audit-Ready: Quickly spot compliance gaps, generate audit trails, and reduce regulatory risk and reporting costs.

During a recent deployment with a global financial services company, Sentra uncovered 40% more exposed sensitive files than their previous DSPM tool. Automated remediation covered over 10 million documents across three clouds, cutting manual investigation time by 80%.

Actionable Takeaways for Security Leaders 

1. Put Unstructured Data at the Center of Your 2026 Security Plan: Make sure your DSPM strategy covers all data, especially “dark” and shadow data in SaaS, object stores, and collaboration platforms.

2. Choose Agentless, AI-Driven Discovery: Legacy, agent-based tools can’t keep up, and underperforming emerging tools may not scale adequately. Look for continuous, automated scanning and classification that scales with your data.

3.  Automate Remediation Workflows: Visibility is just the start; your platform should fix exposures and enforce policies in real time.

4.  Adopt Multi-Cloud, SaaS-Agnostic Solutions: Your data is everywhere, and your security should be too. Ensure your solution supports all of your unstructured data repositories.

5.  Make Compliance Proactive: Use real-time risk scoring and automated reporting to stay ahead of auditors and regulators.

    

Conclusion: Ready for the 80% Challenge?

With petabyte-scale, cloud-first data, ignoring unstructured data risk is no longer an option. Traditional DSPM tools can’t keep up, leaving most of your data - and your business - vulnerable. Sentra’s agentless, AI-powered platform closes this gap, delivering the discovery, classification, and automated response you need to turn your biggest blind spot into your strongest defense. See how Sentra uncovers your hidden risk - book an instant demo today.

Don’t let unstructured data be your organization’s Achilles’ heel. With Sentra, enterprises finally have a way to secure the data that matters most.

<blogcta-big>

Read More
David Stuart
David Stuart
Nikki Ralston
Nikki Ralston
November 24, 2025
3
Min Read

Third-Party OAuth Apps Are the New Shadow Data Risk: Lessons from the Gainsight/Salesforce Incident

Third-Party OAuth Apps Are the New Shadow Data Risk: Lessons from the Gainsight/Salesforce Incident

The recent exposure of customer data through a compromised Gainsight integration within Salesforce environments is more than an isolated event - it’s a sign of a rapidly evolving class of SaaS supply-chain threats. Even trusted AppExchange partners can inadvertently create access pathways that attackers exploit, especially when OAuth tokens and machine-to-machine connections are involved. This post explores what happened, why today’s security tooling cannot fully address this scenario, and how data-centric visibility and identity governance can meaningfully reduce the blast radius of similar breaches.

A Recap of the Incident

In this case, attackers obtained sensitive credentials tied to a Gainsight integration used by multiple enterprises. Those credentials allowed adversaries to generate valid OAuth tokens and access customer Salesforce orgs, in some cases with extensive read capabilities. Neither Salesforce nor Gainsight intentionally misconfigured their systems. This was not a product flaw in either platform. Instead, the incident illustrates how deeply interconnected SaaS environments have become and how the security of one integration can impact many downstream customers.

Understanding the Kill Chain: From Stolen Secrets to Salesforce Lateral Movement

The attackers’ pathway followed a pattern increasingly common in SaaS-based attacks. It began with the theft of secrets - likely API keys, OAuth client secrets, or other credentials that often end up buried in repositories, CI/CD logs, or overlooked storage locations. Once in hand, these secrets enabled the attackers to generate long-lived OAuth tokens, which are designed for application-level access and operate outside MFA or user-based access controls.

What makes OAuth tokens particularly powerful is that they inherit whatever permissions the connected app holds. If an integration has broad read access, which many do for convenience or legacy reasons, an attacker who compromises its token suddenly gains the same level of visibility. Inside Salesforce, this enabled lateral movement across objects, records, and reporting surfaces far beyond the intended scope of the original integration. The entire kill chain was essentially a progression from a single weakly-protected secret to high-value data access across multiple Salesforce tenants.
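
The mechanics are worth spelling out: with a leaked client secret, an attacker can mint valid tokens through a standard OAuth 2.0 client-credentials exchange, with no user and no MFA in the loop. The sketch below is generic and illustrative - the token URL and parameters are placeholders, not Salesforce- or Gainsight-specific values.

```python
# Illustrative only: a generic OAuth 2.0 client-credentials exchange, showing why a leaked
# client secret is enough to mint valid tokens that inherit the connected app's scopes.
# The token URL is a placeholder, not a Salesforce- or Gainsight-specific endpoint.
import requests

def mint_token(token_url: str, client_id: str, client_secret: str) -> dict:
    resp = requests.post(
        token_url,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,          # no user, no MFA - the app *is* the identity
            "client_secret": client_secret,  # if this leaks, so does everything the app can read
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # access_token plus whatever scopes the connected app was granted
```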

Why Traditional SaaS Security Tools Missed This

Incident response teams quickly learned what many organizations are now realizing: traditional CASBs and CSPMs don’t provide the level of identity-to-data context necessary to detect or prevent OAuth-driven supply-chain attacks.

CASBs primarily analyze user behavior and endpoint connections, but OAuth apps are “non-human identities” - they don’t log in through browsers or trigger interactive events. CSPMs, in contrast, focus on cloud misconfigurations and posture, but they don’t understand the fine-grained data models of SaaS platforms like Salesforce. What was missing in this incident was visibility into how much sensitive data the Gainsight connector could access and whether the privileges it held were appropriate or excessive. Without that context, organizations had no meaningful way to spot the risk until the compromise became public.

Sentra Helps Prevent and Contain This Attack Pattern

Sentra’s approach is fundamentally different because it starts with data: what exists, where it resides, who or what can access it, and whether that access is appropriate. Rather than treating Salesforce or other SaaS platforms as black boxes, Sentra maps the data structures inside them, identifies sensitive records, and correlates that information with identity permissions including third-party apps, machine identities, and OAuth sessions.

One key pillar of Sentra’s value lies in its DSPM capabilities. The platform identifies sensitive data across all repositories, including cloud storage, SaaS environments, data warehouses, code repositories, collaboration platforms, and even on-prem file systems. Because Sentra also detects secrets such as API keys, OAuth credentials, private keys, and authentication tokens across these environments, it becomes possible to catch compromised or improperly stored secrets before an attacker ever uses them to access a SaaS platform.

OAuth 2.0 Access Token

Another area where this becomes critical is the detection of over-privileged connected apps. Sentra continuously evaluates the scopes and permissions granted to integrations like Gainsight, identifying when either an app or an identity holds more access than its business purpose requires. This type of analysis would have revealed that a compromised integrated app could see far more data than necessary, providing early signals of elevated risk long before an attacker exploited it.
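
Conceptually, that check is a simple diff between granted and required scopes, as in the hedged sketch below. The scope names are hypothetical, and a real analysis would also weigh what data those scopes can reach.

```python
# Hedged sketch of over-privilege detection: compare the scopes a connected app was granted
# against the scopes its business purpose actually requires. Scope names are hypothetical.
def excess_scopes(granted: set[str], required: set[str]) -> set[str]:
    return granted - required

granted = {"api", "refresh_token", "full"}   # what the org approved years ago
required = {"api"}                           # what the integration uses day to day
risky = excess_scopes(granted, required)
if risky:
    print(f"Connected app holds {len(risky)} scope(s) beyond its business purpose: {sorted(risky)}")
```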

Sentra further tracks the health and behavior of non-human identities. Service accounts and connectors often rely on long-lived credentials that are rarely rotated and may remain active long after the responsible team has changed. Sentra identifies these stale or overly permissive identities and highlights when their behavior deviates from historical norms. In the context of this incident type, that means detecting when a connector suddenly begins accessing objects it never touched before or when large volumes of data begin flowing to unexpected locations or IP ranges.

Finally, Sentra’s behavior analytics (part of DDR) help surface early signs of misuse. Even if an attacker obtains valid OAuth tokens, their data access patterns, query behavior, or geography often diverge from the legitimate integration. By correlating anomalous activity with the sensitivity of the data being accessed, Sentra can detect exfiltration patterns in real time—something traditional tools simply aren’t designed to do.
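
A highly simplified version of that idea looks like the sketch below: compare today’s activity for a non-human identity against its rolling baseline and flag new objects or volume spikes. The thresholds and fields are arbitrary assumptions, not Sentra’s DDR logic.

```python
# Simplified anomaly check in the spirit of DDR (not Sentra's actual analytics):
# flag a non-human identity when it reads objects it never touched before, or far more
# rows than its rolling baseline. Thresholds are arbitrary assumptions.
def is_anomalous(today: dict, baseline: dict, volume_multiplier: float = 5.0) -> bool:
    new_objects = set(today["objects"]) - set(baseline["objects"])
    volume_spike = today["rows_read"] > baseline["avg_rows_read"] * volume_multiplier
    return bool(new_objects) or volume_spike

baseline = {"objects": {"Account", "Case"}, "avg_rows_read": 20_000}
today = {"objects": {"Account", "Case", "Contact", "Opportunity"}, "rows_read": 400_000}
print(is_anomalous(today, baseline))  # True: new objects touched and a 20x volume spike
```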

The 2026 Outlook: More Incidents Are Coming

The Gainsight/Salesforce incident is unlikely to be the last of its kind. The speed at which enterprises adopt SaaS integrations far exceeds the rate at which they assess the data exposure those integrations create. OAuth-based supply-chain attacks are growing quickly because they allow adversaries to compromise one provider and gain access to dozens or hundreds of downstream environments. Given the proliferation of partner ecosystems, machine identities, and unmonitored secrets, this attack vector will continue to scale.

Prediction:
Unless enterprises add data-centric SaaS visibility and identity-aware DSPM, we should expect three to five more incidents of similar magnitude before summer 2026.

Conclusion

The real lesson from the Gainsight/Salesforce breach is not to reduce reliance on third-party SaaS providers - modern business would grind to a halt without them. The lesson is that enterprises must know where their sensitive data lives, understand exactly which identities and integrations can access it, and ensure those privileges are continuously validated. Sentra provides that visibility and contextual intelligence, making it possible to identify the risks that made this breach possible and help prevent the next one.

<blogcta-big>

Read More
David Stuart
David Stuart
November 24, 2025
3
Min Read

Securing Unstructured Data in Microsoft 365: The Case for Petabyte-Scale, AI-Driven Classification

Securing Unstructured Data in Microsoft 365: The Case for Petabyte-Scale, AI-Driven Classification

The modern enterprise runs on collaboration and nothing powers that more than Microsoft 365. From Exchange Online and OneDrive to SharePoint, Teams, and Copilot workflows, M365 hosts a massive and ever-growing volume of unstructured content: documents, presentations, spreadsheets, image files, chats, attachments, and more.

Yet unstructured = harder to govern. Unlike tidy database tables with defined schemas, unstructured repositories fill up with ambiguous content types, buried duplicates, and unused legacy files. It’s in these stacks that sensitive IP, model training data, or derivative work can quietly accumulate, and then leak.

Consider this: one recent study found that more than 81% of IT professionals report data-loss events in M365 environments. And to make matters worse, according to the International Data Corporation (IDC), 60% of organizations do not have a strategy for protecting their critical business data that resides in Microsoft 365.

Why Traditional Tools Struggle

  • Built-in classification tools (e.g., M365’s native capabilities) often rely on pattern matching or simple keywords, and therefore struggle with accuracy, context, scale and derivative content.

  • Many solutions only surface that a file exists and carries a type label - but stop short of mapping who or what can access it, its purpose, and what its downstream exposure might be.

  • GenAI workflows now pump massive volumes of unstructured data into copilots, knowledge bases, training sets - creating new blast radii that legacy DLP or labeling tools weren’t designed to catch.

What a Modern Platform Must Deliver

  1. High-accuracy, petabyte-scale classification of unstructured data (so you know what you have, where it sits, and how sensitive it is). And it must keep pace with explosive data growth and do so cost efficiently.

  2. Unified Data Access Governance (DAG) - mapping identities (users, service principals, agents), permissions, implicit shares, federated/cloud-native paths across M365 and beyond.
  3. Data Detection & Response (DDR) - continuous monitoring of data movement, copies, derivative creation, AI agent interactions, and automated response/remediation.

How Sentra addresses this in M365

Assets contain plain text credit card numbers

At Sentra, we’ve built a cloud-native data-security platform specifically to address this triad of capabilities - and we extend that deeply into M365 (OneDrive, SharePoint, Teams, Exchange Online) and other SaaS platforms.

  • A newly announced AI Classifier for Unstructured Data accelerates and improves classification across M365’s unstructured repositories (see: Sentra launches breakthrough unstructured-data AI classification capabilities).

  • Petabyte-scale processing: our architecture supports classification and monitoring of massive file estates without astronomical cost or time-to-value.

  • Seamless support for M365 services: read/write access, ingestion, classification, access-graph correlation, detection of shadow/unmanaged copies across OneDrive and SharePoint—plus integration into our DAG and DDR layers (see our guide: How to Secure Regulated Data in Microsoft 365 + Copilot).

  • Cost-efficient deployment: designed for high scale without breaking the budget or massive manual effort.

The Bottom Line

In today’s cloud/AI era, saying “we discovered the PII in our M365 tenant” isn’t enough.

The real question is: Do I know who or what (user/agent/app) can access that content, what its business purpose is, and whether it’s already been copied or transformed into a risk vector?


If your solution can’t answer that, your unstructured data remains a silent, high-stakes liability, and resolving concerns becomes a very costly, resource-draining burden. By embracing a platform that combines classification accuracy, petabyte-scale processing, unified DSPM + DAG + DDR, and deep M365 support, you move from “hope I’m secure” to “I know I’m secure.”
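
As a small illustration of the access question, the sketch below uses Microsoft Graph to flag files shared with “anyone with the link.” The field names follow Graph’s documented driveItem permission object (link.scope == "anonymous"), but verify against current docs; authentication setup is omitted, and this is a toy example rather than how Sentra performs access mapping.

```python
# Hedged sketch: flag "anyone with the link" sharing on a OneDrive/SharePoint file via
# Microsoft Graph. Field names follow Graph's documented permission object but should be
# verified against current docs; token acquisition is omitted for brevity.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def anonymous_links(token: str, drive_id: str, item_id: str) -> list[str]:
    resp = requests.get(
        f"{GRAPH}/drives/{drive_id}/items/{item_id}/permissions",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return [
        p["link"]["webUrl"]
        for p in resp.json().get("value", [])
        if p.get("link", {}).get("scope") == "anonymous"   # shared with anyone holding the URL
    ]
```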

Want to see how it works in a real M365 setup? Check out our video or book a demo.

<blogcta-big>

Read More
Ofir Yehoshua
Ofir Yehoshua
November 17, 2025
4
Min Read

How to Gain Visibility and Control in Petabyte-Scale Data Scanning

How to Gain Visibility and Control in Petabyte-Scale Data Scanning

Every organization today is drowning in data - millions of assets spread across cloud platforms, on-premises systems, and an ever-expanding landscape of SaaS tools. Each asset carries value, but also risk. For security and compliance teams, the mandate is clear: sensitive data must be inventoried, managed and protected.

Scanning every asset for security and compliance is no longer optional - it’s the line between trust and exposure, between resilience and chaos.

Many data security tools promise to scan and classify sensitive information across environments. In practice, doing this effectively and at scale demands more than raw ‘brute force’ scanning power. It requires robust visibility and management capabilities: a cockpit view that lets teams monitor coverage, prioritize intelligently, and strike the right balance between scan speed, cost, and accuracy.

Why Scan Tracking Is Crucial

Scanning is not instantaneous. Depending on the size and complexity of your environment, it can take days - sometimes even weeks - to complete. Meanwhile, new data is constantly being created or modified, adding to the challenge.

Without clear visibility into the scanning process, organizations face several critical obstacles:

  • Unclear progress: It’s often difficult to know what has already been scanned, what is currently in progress, and what remains pending. This lack of clarity creates blind spots that undermine confidence in coverage.

  • Time estimation gaps: In large environments, it’s hard to know how long scans will take because so many factors come into play - the number of assets, their size, the type of data (structured, semi-structured, or unstructured), and how much scanner capacity is available. As a result, predicting when you’ll reach full coverage is tricky. This becomes especially stressful when scans need to be completed before a fixed deadline, like a compliance audit.

    "With Sentra’s Scan Dashboard, we were able to quickly scale up our scanners to meet a tight audit deadline, finish on time, and then scale back down to save costs. The visibility and control it gave us made the whole process seamless”, said CISO of Large Retailer.
  • Poor prioritization: Not all environments or assets carry the same importance. Yet without visibility into scan status, teams struggle to balance historical scans of existing assets with the ongoing influx of newly created data, making it nearly impossible to prioritize effectively based on risk or business value.

Sentra’s End-to-End Scanning Workflow

Managing scans at petabyte scale is complex. Sentra streamlines the process with a workflow built for scale, clarity, and control that features:

1. Comprehensive Asset Discovery

Before scanning even begins, Sentra automatically discovers assets across cloud platforms, on-premises systems, and SaaS applications. This ensures teams have a complete, up-to-date inventory and visual map of their data landscape, so no environment or data store is overlooked.

Example: New S3 buckets, a freshly deployed BigQuery dataset, or a newly connected SharePoint site are automatically identified and added to the inventory.

Comprehensive Asset Discovery with Sentra

2. Configurable Scan Management

Administrators can fine-tune how scans are executed to meet their organization’s needs. With flexible configuration options - such as the number of scanners, sampling rates, and prioritization rules - teams can strike the right balance between scan speed, coverage, and cost control.

For instance, compliance-critical assets can be scanned at full depth immediately, while less critical environments can run at reduced sampling to save on compute consumption and costs.
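
The planning math behind those trade-offs is straightforward, as the illustrative sketch below shows. The throughput figures and sampling rate are assumptions, not Sentra’s scheduler.

```python
# Illustrative scan-planning math (not Sentra's scheduler): given scanner throughput,
# sampling rate, and the remaining backlog, estimate when coverage completes and whether
# an audit deadline is reachable. All numbers are assumptions.
def days_to_full_coverage(remaining_tb: float, scanners: int,
                          tb_per_scanner_per_day: float, sampling_rate: float) -> float:
    effective_tb = remaining_tb * sampling_rate
    return effective_tb / (scanners * tb_per_scanner_per_day)

# 2 PB backlog, 8 scanners at ~5 TB/day each, "moderate" sampling of 25%:
print(f"{days_to_full_coverage(2048, 8, 5, 0.25):.1f} days")   # ~12.8 days

# Tight audit deadline? Scale scanners up, then back down (as in the customer quote above).
print(f"{days_to_full_coverage(2048, 32, 5, 0.25):.1f} days")  # ~3.2 days
```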

3. Real-Time Scan Dashboard

Sentra’s unified Scan Dashboard provides a cockpit view into scanning operations, so teams always know where they stand. Key features include:

  • Daily scan throughput correlated with the number of active scanners, helping teams understand efficiency and predict completion times.
  • Coverage tracking that visualizes overall progress and highlights which assets remain unscanned.
  • Decision-making tools that allow teams to dynamically adjust, whether by adding scanner capacity, changing sampling rates, or reordering priorities when new high-risk assets appear.
Real-Time Scan Dashboard with Sentra

Handling Data Changes

The challenge doesn’t end once the initial scans are complete. Data is dynamic: new files are added daily, existing records are updated, and sensitive information shifts locations. Sentra’s activity feeds give teams the visibility they need to understand how their data landscape is evolving and adapt their data security strategies in real time.


Conclusion

Tracking scan status at scale is complex but critical to any data security strategy. Sentra provides an end-to-end view and unmatched scan control, helping organizations move from uncertainty to confidence with clear prediction of scan timelines, faster troubleshooting, audit-ready compliance, and smarter, cost-efficient decisions for securing data.

<blogcta-big>

Read More
Ward Balcerzak
Ward Balcerzak
November 12, 2025
4
Min Read
Data Security

Best DSPM Tools: Top 9 Vendors Compared

Best DSPM Tools: Top 9 Vendors Compared

Enhanced DSPM Adoption Is the Most Important Data Security Trend of 2026

Over the past few years, organizations have realized that traditional security tools can’t keep pace with how data moves and grows today. Exploding volumes of sensitive data now sprawl across multi-cloud environments, SaaS platforms, and AI systems, often without full visibility by the teams responsible for securing it. Unstructured data presents the greatest risk - representing over 80% of corporate data.

That’s why Data Security Posture Management (DSPM) has become a critical part of the modern security stack. DSPM tools help organizations automatically discover, classify, monitor, and protect sensitive data - no matter where it lives or travels.

But in 2026, the data security game is changing. Many DSPMs can tell you what your data is, but more is needed. Leading DSPM platforms are going beyond visibility. They’re delivering real-time, AI-enhanced contextual business insights, automated remediation, and accurate, AI-aware protection that scales with your dynamic data.

AI-enhanced DSPM Capabilities in 2026

Not all DSPM tools are built the same. The top platforms share a few key traits that define the next generation of data security posture management:

  • Continuous discovery and classification at scale: Real-time visibility into all sensitive data across cloud, SaaS, and on-prem systems, with the efficiency, at petabyte scale, to allow for scanning frequency commensurate with business risk.
  • Contextual risk analysis: Understanding what data is sensitive, who can access it, and how it’s being used - and understanding the business context around data so that appropriate actions can be taken.
  • Automated remediation: Native capabilities and integration with systems that correct risky configurations or excessive access automatically.
  • Integration and scalability: Seamless connections to CSPM, SIEM, IAM, ITSM, and SOAR tools to unify data risk management and streamline workflows.
  • AI and model governance: Capabilities to secure data used in GenAI agents, copilot assistants, and pipelines.

Top DSPM Tools to Watch in 2026

Based on recent analyst coverage, market growth, and innovation across the industry, here are the top DSPM platforms to watch this year, each contributing to how data security is evolving.

1. Sentra

As a cloud-native DSPM platform, Sentra focuses on continuous data protection, not just visibility. It discovers and accurately classifies sensitive data in real time across all cloud environments, while automatically remediating risks through policy-driven automation.

What sets Sentra apart:

  • Continuous, automated discovery and classification across your entire data estate - cloud, SaaS, and on-premises.
  • Business-context insights that understand the purpose of data, accurately linking data, identity, and risk.
  • Automatic learning that discerns customer-unique data types and continuously improves labeling over time.
  • Petabyte scaling and low compute consumption for 10x cost efficiency.
  • Automated remediation workflows and integrations to fix issues instantly.
  • Built-in coverage for data flowing through AI and SaaS ecosystems.

Ideal for: Security teams looking for a cloud-native DSPM platform built for scalability in the AI era with automation at its core.

2. BigID

A pioneer in data discovery and classification, BigID bridges DSPM and privacy governance, making it a good choice for compliance-heavy sectors.


Ideal for: Organizations prioritizing data privacy, governance, and audit readiness.

3. Prisma Cloud (Palo Alto Networks)

Prisma’s DSPM offering integrates closely with CSPM and CNAPP components, giving security teams a single pane of glass for infrastructure and data risk.


Ideal for: Enterprises with hybrid or multi-cloud infrastructures already using Palo Alto tools.

4. Microsoft Purview / Defender DSPM

Microsoft continues to invest heavily in DSPM through Purview, offering rich integration with Microsoft 365 and Azure ecosystems. Note: Sentra integrates with Microsoft Purview Information Protection (MPIP) labeling and DLP policies.

Ideal for: Microsoft-centric organizations seeking native data visibility and compliance automation.

5. Securiti.ai

Positioned as a “Data Command Center,” Securiti unifies DSPM, privacy, and governance. Its strength lies in automation, compliance visibility, and SaaS coverage.


Ideal for: Enterprises looking for an all-in-one governance and DSPM solution.

6. Cyera

Cyera has gained attention for serving the SMB segment with its DSPM approach. It uses LLMs for data context, supplementing other classification methods, and provides integrations to IAM and other workflow tools.


Ideal for: Small/medium growing companies that need basic DSPM functionality.

7. Wiz

Wiz continues to lead in cloud security, having added DSPM capabilities into its CNAPP platform. They’re known for deep multi-cloud visibility and infrastructure misconfiguration detection.

Ideal for: Enterprises running complex cloud environments looking for infrastructure vulnerability and misconfiguration management.

8. Varonis

Varonis remains a strong player for hybrid and on-prem data security, with deep expertise in permissions and access analytics and focus on SaaS/unstructured data.


Ideal for: Enterprises with legacy file systems or mixed cloud/on-prem architectures.

9. Netwrix

Netwrix’s platform incorporates DSPM-related features into its auditing and access control suite.

Ideal for: Mid-sized organizations seeking DSPM as part of a broader compliance solution.

Emerging DSPM Trends to Watch in 2026

  1. AI Data Security: As enterprises adopt GenAI, DSPM tools are evolving to secure data used in training and inference.

  2. Identity-Centric Risk: Understanding and controlling both human and machine identities is now central to data posture.

  3. Automation-Driven Security: Remediation workflows are becoming the differentiator between “good” and “great.”

  4. Market Consolidation: Expect to see CNAPP, legacy security, and cloud vendors acquiring DSPM startups to strengthen their coverage.

How to Choose the Right DSPM Tool

When evaluating a DSPM solution, align your choice with your data landscape and goals:

  • Cloud-Native Company: Choose tools designed for cloud-first environments (like Sentra, Securiti, Wiz).
  • Compliance Priority: Platforms like Sentra, BigID, or Securiti excel in privacy and governance.
  • Microsoft-Heavy Stack: Purview and Sentra DSPM offer native integration.
  • Hybrid Environment: Consider Varonis, Prisma Cloud, or Sentra for extended visibility.
  • Enterprise Scalability: Evaluate deployment ease, petabyte scalability, cloud resource consumption, scanning efficiency, etc. (Sentra excels here.)

Pro Tip: Run a proof of concept (POC) across multiple environments to test scalability, accuracy, and operational cost effectiveness before full deployment.

Final Thoughts: DSPM Is About Action

The best DSPM tools in 2026 share one core principle: they help organizations move from visibility to action.

At Sentra, we believe that the future of DSPM lies in continuous, automated data protection:

  • Real-time discovery of sensitive data at scale
  • Context-aware prioritization for business insight
  • Automated remediation that reduces risk instantly

As data continues to power AI, analytics, and innovation, DSPM ensures that innovation never comes at the cost of security. See how Sentra helps leading enterprises protect data across multi-cloud and SaaS environments.

<blogcta-big>

Read More
Gilad Golani
Gilad Golani
November 6, 2025
4
Min Read

How SLMs (Small Language Models) Make Sentra’s AI Faster and More Accurate

How SLMs (Small Language Models) Make Sentra’s AI Faster and More Accurate

The LLM Hype, and What’s Missing

Over the past few years, large language models (LLMs) have dominated the AI conversation. From writing essays to generating code, LLMs like GPT-4 and Claude have proven that massive models can produce human-like language and reasoning at scale.

But here's the catch: not every task needs a 70-billion-parameter model. Parameters are computationally expensive - they require both memory and processing time.

At Sentra, we discovered early on that the work our customers rely on - accurate, scalable classification of massive data flows - isn’t about writing essays or generating text. It’s about making decisions fast, reliably, and cost-effectively across dynamic, real-world data environments. While large language models (LLMs) are excellent at solving general problems, using them for this kind of work creates a lot of unnecessary computational overhead.

That’s why we’ve shifted our focus toward Small Language Models (SLMs) - compact, specialized models purpose-built for a single task - understanding and classifying data efficiently. By running hundreds of SLMs in parallel on regular CPUs, Sentra can deliver faster insights, stronger data privacy, and a dramatically lower total cost of AI-based classification that scales with their business, not their cloud bill.

What Is an SLM?

An SLM is a smaller, domain-specific version of a language model. Instead of trying to understand and generate any kind of text, an SLM is trained to excel at a particular task, such as identifying the topic of a document (what the document is about or what type of document it is), or detecting sensitive entities within documents, such as passwords, social security numbers, or other forms of PII.

In other words: If an LLM is a generalist, an SLM is a specialist. At Sentra, we use SLMs that are tuned and optimized for security data classification, allowing them to process high volumes of content with remarkable speed, consistency, and precision. These SLMs are based on standard open source models, but trained with data that was curated by Sentra, to achieve the level of accuracy that only Sentra can guarantee.

From LLMs to SLMs: A Strategic Evolution

Like many in the industry, we started by testing LLMs to see how well they could classify and label data. They were powerful, but also slow, expensive, and difficult to scale. Over time, it became clear: LLMs are too big and too expensive to run across customer data for Sentra to remain a viable, cost-effective solution for data classification.

Each SLM handles a focused part of the process: initial categorization, text extraction from documents and images, and sensitive entity classification. The SLMs are not only accurate (even more accurate than LLMs classifying using prompts) - they can run on standard CPUs efficiently, and they run inside the customer’s environment, as part of Sentra’s scanners.

The Benefits of SLMs for Customers

a. Speed and Efficiency

SLMs process data faster because they’re lean by design. They don’t waste cycles generating full sentences or reasoning across irrelevant contexts. This means real-time or near-real-time classification, even across millions of data points.

b. Accuracy and Adaptability

SLMs are pre-trained “zero-shot” language models that can categorize and classify generically, without needing to be trained on a specific task in advance. That is what “zero-shot” means: regardless of the data it was trained on, the model can classify an arbitrary set of entities and document labels without training on each one specifically. This is possible because modern language models capture deep natural-language understanding at the training stage.
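
As a generic illustration of zero-shot labeling - using an open-source model that is not one of Sentra’s, with an example document and label set chosen for this post - the idea looks like this:

```python
# Generic illustration of zero-shot document labeling with a compact open-source model.
# facebook/bart-large-mnli is only an example; it is not one of Sentra's models, and
# Sentra's SLMs are further fine-tuned on curated data as described below.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

doc = "Attached is the Q3 payroll export with employee names, bank accounts and salaries."
labels = ["payroll record", "source code", "marketing copy", "system log"]

result = classifier(doc, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 2))  # highest-confidence label
```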

Zero-shot capability aside, Sentra fine-tunes these models to further increase classification accuracy, curating a very large set of tagged data that resembles the type of data our customers usually run into.

Our feedback loops ensure that model performance only gets better over time - a direct reflection of our customers’ evolving environments.

c. Cost and Sustainability

Because SLMs are compact, they require less compute power, which means lower operational costs and a smaller carbon footprint. This efficiency allows us to deliver powerful AI capabilities to customers without passing on the heavy infrastructure costs of running massive models.

d. Security and Control

Unlike LLMs hosted on external APIs, SLMs can be run within Sentra’s secure environment, preserving data privacy and regulatory compliance. Customers maintain full control over their sensitive information - a critical requirement in enterprise data security.

A Quick Comparison: SLMs vs. LLMs

The difference between SLMs and LLMs becomes clear when you look at their performance across key dimensions:

  • Speed: SLMs are fast and optimized for classification throughput; LLMs are slower and more compute-intensive for large-scale inference.
  • Cost: SLMs are cost-efficient; LLMs are expensive to run at scale.
  • Accuracy (for simple tasks): SLMs are optimized for classification; LLMs are comparable but carry unnecessary overhead.
  • Deployment: SLMs are lightweight and easy to integrate; LLMs are complex and resource-heavy.
  • Adaptability (with feedback): SLMs are continuously fine-tuned, with the ability to fine-tune per customer; LLMs are harder to customize and costly to fine-tune.
  • Best use case: SLMs for classification, tagging, and filtering; LLMs for reasoning, analysis, generation, and synthesis.

Continuous Learning: How Sentra’s SLMs Grow

One of the most powerful aspects of our SLM approach is continuous learning. Each Sentra customer project contributes valuable insights, from new data patterns to evolving classification needs. These learnings feed back into our training workflows, helping us refine and expand our models over time.

While not every model retrains automatically, the system is built to support iterative optimization: as our team analyzes feedback and performance, models can be fine-tuned or extended to handle new categories and contexts.

The result is an adaptive ecosystem of SLMs that becomes more effective as our customer base and data diversity grow, ensuring Sentra’s AI remains aligned with real-world use cases.

Sentra’s Multi-SLM Architecture

Sentra’s scanning technology doesn’t rely on a single model. We run many SLMs in parallel, each specializing in a distinct layer of classification:

  1. Embedding models that convert data into meaningful vector representations
  2. Entity Classification models that label sensitive entities
  3. Document Classification models that label documents by type
  4. Image-to-text and speech-to-text models that convert non-textual data into text

This layered approach allows us to operate at scale - quickly, cheaply, and with great results. In practice, that means faster insights, fewer errors, and a more responsive platform for every customer.
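
A toy sketch of that layered idea is shown below. Each stage would be a separate specialist model in practice; the function bodies here are placeholders rather than Sentra’s implementation.

```python
# Toy sketch of chaining small specialist stages (placeholders, not Sentra's implementation):
# text extraction, then document classification, then entity classification.
from dataclasses import dataclass, field

@dataclass
class Finding:
    doc_type: str = "unknown"
    entities: list[str] = field(default_factory=list)

def extract_text(raw: bytes) -> str:            # image-to-text / speech-to-text stage
    return raw.decode("utf-8", errors="ignore")

def classify_document(text: str) -> str:        # document classification SLM (stubbed)
    return "invoice" if "invoice" in text.lower() else "unknown"

def classify_entities(text: str) -> list[str]:  # entity classification SLM (stubbed)
    return ["credit_card"] if "card number" in text.lower() else []

def classify(raw: bytes) -> Finding:
    text = extract_text(raw)
    return Finding(doc_type=classify_document(text), entities=classify_entities(text))

print(classify(b"Invoice #42 - card number ending 4242"))
```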

The Future of AI Is Specialized

We believe the next frontier of AI isn’t about who can build the biggest model, it’s about who can build the most efficient, adaptive, and secure ones.

By embracing SLMs, Sentra is pioneering a future where AI systems are purpose-built, transparent, and sustainable. Our approach aligns with a broader industry shift toward task-optimized intelligence - models that do one thing extremely well and can learn continuously over time.

Conclusion: The Power of Small

At Sentra, we’ve learned that in AI, bigger isn’t always better. Our commitment to SLMs reflects our belief that efficiency, adaptability, and precision matter most for customers. By running thousands of small, smart models rather than a single massive one, we’re able to classify data faster, cheaper, and with greater accuracy - all while ensuring customer privacy and control.

In short: Sentra’s SLMs represent the power of small, and the future of intelligent classification.

<blogcta-big>

Read More
David Stuart
David Stuart
November 3, 2025
4
Min Read
Data Security

Safeguarding Data Integrity and Privacy in the Age of AI-Powered Large Language Models (LLMs)

Safeguarding Data Integrity and Privacy in the Age of AI-Powered Large Language Models (LLMs)

In the burgeoning realm of artificial intelligence (AI), Large Language Models (LLMs) have emerged as transformative tools, enabling the development of applications that revolutionize customer experiences and streamline business operations. These sophisticated models, trained on massive volumes of text data, can generate human-quality text, translate languages, write creative content, and answer complex questions.

Unfortunately, the rapid adoption of LLMs - coupled with their extensive data consumption - has introduced critical challenges around data integrity, privacy, and access control during both training and inference. As organizations operationalize LLMs at scale in 2025, addressing these risks has become essential to responsible AI adoption.

What’s Changed in LLM Security in 2025

LLM security in 2025 looks fundamentally different from earlier adoption phases. While initial concerns focused primarily on prompt injection and output moderation, today’s risk profile is dominated by data exposure, identity misuse, and over-privileged AI systems.

Several shifts now define the modern LLM security landscape:

  • Retrieval-augmented generation (RAG) has become the default architecture, dynamically connecting LLMs to internal data stores and increasing the risk of sensitive data exposure at inference time.
  • Fine-tuning and continual training on proprietary data are now common, expanding the blast radius of data leakage or poisoning incidents.
  • Agentic AI and tool-calling capabilities introduce new attack surfaces, where excessive permissions can enable unintended actions across cloud services and SaaS platforms.
  • Multi-model and hybrid AI environments complicate data governance, access control, and visibility across LLM workflows.

As a result, securing LLMs in 2025 requires more than static policies or point-in-time reviews. Organizations must adopt continuous data discovery, least-privilege access enforcement, and real-time monitoring to protect sensitive data throughout the LLM lifecycle.
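
As a concrete illustration of the first shift, here is a minimal sketch of a guardrail for a RAG pipeline: retrieved chunks are screened by a sensitivity classifier before they ever reach the model context. The retriever, LLM client, and classifier are hypothetical stand-ins for whatever components an organization actually runs.

```python
# Minimal sketch of screening RAG context for sensitive data (illustrative only).
def answer_with_rag(question: str, retriever, llm, is_sensitive) -> str:
    candidates = retriever.search(question, top_k=10)

    # Drop any retrieved chunk flagged as sensitive (PII, secrets, regulated data)
    # so it never enters the prompt sent to the model.
    safe_chunks = [c for c in candidates if not is_sensitive(c.text)]

    context = "\n\n".join(c.text for c in safe_chunks[:5])
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm.complete(prompt)
```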

Challenges: Navigating the Risks of LLM Training

Against this backdrop, the training of LLMs often involves the use of vast datasets containing sensitive information such as personally identifiable information (PII), intellectual property, and financial records. This concentration of valuable data presents a compelling target for malicious actors seeking to exploit vulnerabilities and gain unauthorized access.

One of the primary challenges is preventing data leakage or public disclosure. LLMs can inadvertently disclose sensitive information if not properly configured or protected. This disclosure can occur through various means, such as unauthorized access to training data, vulnerabilities in the LLM itself, or improper handling of user inputs.

Another critical concern is avoiding overly permissive configurations. LLMs can be configured to allow users to provide inputs that may contain sensitive information. If these inputs are not adequately filtered or sanitized, they can be incorporated into the LLM's training data, potentially leading to the disclosure of sensitive information.

Finally, organizations must be mindful of the potential for bias or error in LLM training data. Biased or erroneous data can lead to biased or erroneous outputs from the LLM, which can have detrimental consequences for individuals and organizations.

OWASP Top 10 for LLM Applications

The OWASP Top 10 for LLM Applications identifies and prioritizes critical vulnerabilities that can arise in LLM applications. Among these, LLM03 Training Data Poisoning, LLM06 Sensitive Information Disclosure, LLM08 Excessive Agency, and LLM10 Model Theft pose significant risks that cybersecurity professionals must address. Let's dive into these:

LLM03: Training Data Poisoning

LLM03 addresses the vulnerability of LLMs to training data poisoning, a malicious attack where carefully crafted data is injected into the training dataset to manipulate the model's behavior. This can lead to biased or erroneous outputs, undermining the model's reliability and trustworthiness.

The consequences of LLM03 can be severe. Poisoned models can generate biased or discriminatory content, perpetuating societal prejudices and causing harm to individuals or groups. Moreover, erroneous outputs can lead to flawed decision-making, resulting in financial losses, operational disruptions, or even safety hazards.


LLM06: Sensitive Information Disclosure

LLM06 highlights the vulnerability of LLMs to inadvertently disclosing sensitive information present in their training data. This can occur when the model is prompted to generate text or code that includes personally identifiable information (PII), trade secrets, or other confidential data.

The potential consequences of LLM06 are far-reaching. Data breaches can lead to financial losses, reputational damage, and regulatory penalties. Moreover, the disclosure of sensitive information can have severe implications for individuals, potentially compromising their privacy and security.

LLM08: Excessive Agency

LLM08 focuses on the risk of LLMs exhibiting excessive agency, meaning they may perform actions beyond their intended scope or generate outputs that cause harm or offense. This can manifest in various ways, such as the model generating discriminatory or biased content, engaging in unauthorized financial transactions, or even spreading misinformation.

Excessive agency poses a significant threat to organizations and society as a whole. Supply chain compromises and excessive permissions to AI-powered apps can erode trust, damage reputations, and even lead to legal or regulatory repercussions. Moreover, the spread of harmful or offensive content can have detrimental social impacts.

LLM10: Model Theft

LLM10 highlights the risk of model theft, where an adversary gains unauthorized access to a trained LLM or its underlying intellectual property. This can enable the adversary to replicate the model's capabilities for malicious purposes, such as generating misleading content, impersonating legitimate users, or conducting cyberattacks.

Model theft poses significant threats to organizations. The loss of intellectual property can lead to financial losses and competitive disadvantages. Moreover, stolen models can be used to spread misinformation, manipulate markets, or launch targeted attacks on individuals or organizations.

Recommendations: Adopting Responsible Data Protection Practices

To mitigate the risks associated with LLM training data, organizations must adopt a comprehensive approach to data protection. This approach should encompass data hygiene, policy enforcement, access controls, and continuous monitoring.

Data hygiene is essential for ensuring the integrity and privacy of LLM training data. Organizations should implement stringent data cleaning and sanitization procedures to remove sensitive information and identify potential biases or errors.
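
As one simple example of a data-hygiene step, the sketch below redacts obvious PII patterns from text before it enters a training corpus. Real pipelines rely on far richer classifiers; these regexes are illustrative only.

```python
# Minimal PII-redaction sketch for training-data hygiene (illustrative only).
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    # Replace each detected pattern with a labeled placeholder.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [EMAIL REDACTED], SSN [SSN REDACTED].
```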

Policy enforcement is crucial for establishing clear guidelines for the handling of LLM training data. These policies should outline acceptable data sources, permissible data types, and restrictions on data access and usage.

Access controls should be implemented to restrict access to LLM training data to authorized personnel and identities only, including any third-party apps that may connect. This can be achieved through role-based access control (RBAC), zero-trust IAM, and multi-factor authentication (MFA) mechanisms.
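
A minimal sketch of what such a least-privilege check might look like is shown below; the roles and dataset sensitivity tags are illustrative assumptions.

```python
# Minimal RBAC-style check before granting access to a training dataset (illustrative).
ROLE_PERMISSIONS = {
    "ml-engineer": {"synthetic", "public"},
    "data-steward": {"synthetic", "public", "pii"},
}

def can_access(role: str, dataset_tags: set) -> bool:
    allowed = ROLE_PERMISSIONS.get(role, set())
    # Grant access only if every sensitivity tag on the dataset is permitted for the role.
    return dataset_tags.issubset(allowed)

print(can_access("ml-engineer", {"pii"}))   # False: blocked from PII training data
print(can_access("data-steward", {"pii"}))  # True
```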

Continuous monitoring is essential for detecting and responding to potential threats and vulnerabilities. Organizations should implement real-time monitoring tools to identify suspicious activity and take timely action to prevent data breaches.

Solutions: Leveraging Technology to Safeguard Data

In the rush to innovate, developers must remain keenly aware of the inherent risks involved with training LLMs if they wish to deliver responsible, effective AI that does not jeopardize their customers’ data. Specifically, it is a foremost duty to protect the integrity and privacy of LLM training data sets, which often contain sensitive information.

Preventing data leakage or public disclosure, avoiding overly permissive configurations, and negating bias or error that can contaminate such models should be top priorities.

Technological solutions play a pivotal role in safeguarding data integrity and privacy during LLM training. Data security posture management (DSPM) solutions can automate data security processes, enabling organizations to maintain a comprehensive data protection posture.

DSPM solutions provide a range of capabilities, including data discovery, data classification, data access governance (DAG), and data detection and response (DDR). These capabilities help organizations identify sensitive data, enforce access controls, detect data breaches, and respond to security incidents.

Cloud-native DSPM solutions offer enhanced agility and scalability, enabling organizations to adapt to evolving data security needs and protect data across diverse cloud environments.

Sentra: Automating LLM Data Security Processes

Having to worry about securing yet another threat vector should give overburdened security teams pause. But help is available.

Sentra has developed a data privacy and posture management solution that can automatically secure LLM training data in support of rapid AI application development.

The solution works in tandem with AWS SageMaker, GCP Vertex AI, and other AI development environments to support secure data usage within ML training activities. It combines key capabilities including DSPM, DAG, and DDR to deliver comprehensive data security and privacy.

Its cloud-native design discovers all of your data and ensures good data hygiene and security posture through policy enforcement, least-privilege access to sensitive data, and monitoring with near real-time alerting on suspicious identity (user/app/machine) activity, such as data exfiltration, to thwart attacks or malicious behavior early. The solution frees developers to innovate quickly and lets organizations operate with agility, confident that their customer data and proprietary information will remain protected.

LLMs are now also built into Sentra’s classification engine and data security platform to provide unprecedented classification accuracy for unstructured data. Learn more about Large Language Models (LLMs) here.

Conclusion: Securing the Future of AI with Data Privacy

AI holds immense potential to transform our world, but its development and deployment must be accompanied by a steadfast commitment to data integrity and privacy. Protecting the integrity and privacy of data in LLMs is essential for building responsible and ethical AI applications. By implementing data protection best practices, organizations can mitigate the risks associated with data leakage, unauthorized access, and bias. Sentra's DSPM solution provides a comprehensive approach to data security and privacy, enabling organizations to develop and deploy LLMs with speed and confidence.

If you want to learn more about Sentra's Data Security Platform and how LLMs are now integrated into our classification engine to deliver unmatched accuracy for unstructured data, request a demo today.

<blogcta-big>

Read More
Aarti Gadhia
Aarti Gadhia
October 27, 2025
3
Min Read
Data Security

My Journey to Empower Women in Cybersecurity

My Journey to Empower Women in Cybersecurity

Finding My Voice: From Kenya to the Global Stage

I was born and raised in Kenya, the youngest of three and the only daughter. My parents, who never had the chance to finish their education, sacrificed everything to give me opportunities they never had. Their courage became my foundation.

At sixteen, my mother signed me up to speak at a community event, without telling me first! I stood before 500 people and spoke about something that had long bothered me: there were no women on our community board. That same year, two women were appointed for the first time in our community’s history. This year, I was recognized as a Community Leader at the Global Gujrati Gaurav Awards in BC for my work educating seniors on cyber safety and helping many immigrants secure jobs.

I didn’t realize it then, but that moment would define my purpose: to speak up for those whose voices aren’t always heard.

From Isolation to Empowerment

When I moved to the UK to study Financial Economics, I faced a different kind of challenge - isolation. My accent made me stand out, and not always in a good way. There were times I felt invisible, even rejected. But I made a promise to myself in those lonely moments that no one else should feel the same way.

Years later, as a founding member of WiCyS Western Affiliate, I helped redesign how networking happens at cybersecurity events. Instead of leaving it to chance, we introduced structured networking that ensured everyone left with at least one new connection. It was a small change, but it made a big difference. Today, that format has been adopted by organizations like ISC2 and ISACA, creating spaces where every person feels they belong. 

Breaking Barriers and Building SHE

When I pivoted into cybersecurity sales after moving to Canada, I encountered another wall. I applied for a senior role and failed a personality test, one that unfairly filtered out many talented women. I refused to accept that. I focused on listening, solving real customer challenges, and eventually became the top seller. That success helped eliminate the test altogether, opening doors for many more women who came after me. That experience planted a seed that would grow into one of my proudest initiatives: SHE (Sharing Her Empowerment).

It started as a simple fireside chat on diversity and inclusion - just 40 seats over lunch. Within minutes of sending the invite, we had 90 people signed up. Executives moved us into a larger room, and that event changed everything. SHE became our first employee resource group focused on empowering women, increasing representation in leadership, and amplifying women’s voices within the organization. Even with just 19% women, we created a ripple effect that reached the boardroom and beyond.

SHE showed me that when women stand together, transformation happens.

Creating Pathways for the Next Generation

Mentorship has always been close to my heart. During the pandemic, I met incredible women who were trying to break into cybersecurity but kept facing barriers. I challenged hiring norms, advocated for fair opportunities, and helped launch internship programs that gave women hands-on experience. Today, many of them are thriving in their cyber careers, a true reflection of what’s possible when we lift as we climb.

Through Standout to Lead, I partnered with Women Get On Board to help women in cybersecurity gain board seats. Watching more women step into decision-making roles reminds me that leadership isn’t about titles, it’s about creating pathways for others.

Women in Cybersecurity: Our Collective Story

This year, I’m deeply honored to be named among the Top 20 Cybersecurity Women of the World by the United Cybersecurity Alliance. Their mission - to empower women, elevate diverse voices, and drive equity in our field - mirrors everything I believe in.

I’m also thrilled to be part of the upcoming documentary premiere, “The WOMEN IN SECURITY Documentary,” proudly sponsored by Sentra, Amazon WWOS, and Pinkerton among others. This film shines a light on the fearless women redefining what leadership looks like in our industry.

As a member of Sentra’s community, I see the same commitment to visibility, inclusion, and impact that has guided my journey. Together, we’re not just securing data, we’re securing the future of those who will lead next.

Asante Sana – Thank You

My story, my safari, is still being written. I’ve learned that impact doesn’t come from perfection, but from purpose. Whether it’s advocating for fairness, mentoring the next generation, or sharing our stories, every step we take matters.

To every woman, every underrepresented voice in STEM, and everyone who’s ever felt unseen - stay authentic, speak up, and don’t be afraid of the outcome. You might just change the world.

Join me and the Sentra team at The WOMEN IN SECURITY Documentary Premiere, a celebration of leadership, resilience, and the voices shaping the future of our industry.

Save your seat at The Women in Security premiere here (spots are limited).

Follow Sentra on LinkedIn and YouTube for more updates on the event and stories that inspire change.

<blogcta-big>

Read More
Ward Balcerzak
Ward Balcerzak
October 20, 2025
3
Min Read
Data Security

2026 Cybersecurity Budget Planning: Make Data Visibility a Priority

2026 Cybersecurity Budget Planning: Make Data Visibility a Priority

Why Data Visibility Belongs in Your 2026 Cybersecurity Budget

As the fiscal year winds down and security leaders tackle cybersecurity budget planning for 2026, you need to decide how to use every remaining 2025 dollar wisely and how to plan smarter for next year. The question isn’t just what to cut or keep; it’s what creates measurable impact. Across programs, data visibility and DSPM deliver provable risk reduction, faster audits, and clearer ROI, making them priority line items whether you’re spending down this year or shaping next year’s plan. Some teams discover unspent funds after project delays, postponed renewals, or slower-than-expected hiring. Others are already deep in planning mode, mapping next year’s security priorities across people, tools, and processes. Either way, one question looms large: where can a limited security budget make the biggest impact - right now and next year?

Across the industry, one theme is clear: data visibility is no longer a “nice-to-have” line item, it’s a foundational control. Whether you’re allocating leftover funds before year-end or shaping your 2026 strategy, investing in Data Security Posture Management (DSPM) should be part of the plan.

As Bitsight notes, many organizations look for smart ways to use remaining funds that don’t roll over. The goal isn’t simply to spend, it’s to invest in initiatives that improve posture and provide measurable, lasting value. And according to Applied Tech, “using remaining IT funds strategically can strengthen your position for the next budget cycle.”

That same principle applies in cybersecurity. Whether you’re closing out this year or planning for 2026, the focus should be on spending that improves security maturity and tells a story leadership understands. Few areas achieve that more effectively than data-centric visibility.

(For additional background, see Sentra’s article on why DSPM should take a slice of your cybersecurity budget.)

Where to Allocate Remaining Year-End Funds (Without Hurting Next Year’s Budget)

It’s important to utilize all of your 2025 budget allocations because finance departments frequently view underspending as a sign of overfunding, leading to smaller allocations next year. Instead, strategic security teams look for ways to convert every remaining dollar into evidence of progress.

That means focusing on investments that:

  • Produce measurable results you can show to leadership.
  • Strengthen core program foundations: people, visibility, and process.
  • Avoid new recurring costs that stretch future budgets.

Top Investments That Pay Off

1. Invest in Your People

One of the strongest points echoed by security professionals across industry communities: the best investment is almost always your people. Security programs are built on human capability. Certifications, practical training, and professional growth not only expand your team’s skills but also build morale and retention, two things that can’t be bought with tooling alone.

High-impact options include:

  • Hands-on training platforms like Hack The Box, INE Skill Dive, or Security Blue Team, which develop real-world skills through simulated environments.
  • Professional certifications (SANS GIAC, OSCP, or cloud security credentials) that validate expertise and strengthen your team’s credibility.
  • Conference attendance for exposure to new threat perspectives and networking with peers.
  • Cross-functional training between SOC, GRC, and AppSec to create operational cohesion.

In practitioner discussions, one common sentiment stood out: training isn’t just an expense, it’s proof of leadership maturity.

As one manager put it, “If you want your analysts to go the extra mile during an incident, show you’ll go the extra mile for them when things are calm.”

2. Invest in Data Visibility (DSPM)

While team capability drives execution, data visibility drives confidence. In recent conversations among mid-market and enterprise security teams, Data Security Posture Management (DSPM) repeatedly surfaced as one of the most valuable investments made in the past year, especially for hybrid-cloud environments.

One security leader described it this way:

“After implementing DSPM, we finally had a clear picture of where sensitive data actually lived. It saved our team hours of manual chasing and made the audit season much easier.”

That feedback reflects a growing consensus: without visibility into where sensitive data resides, who can access it, and how it’s secured, every other layer of defense operates partly in the dark.

Tip: If your remaining 2025 budget won’t cover a full DSPM deployment, scope an initial implementation now and expand to full coverage in 2026.

DSPM solutions provide that clarity by helping teams:

  • Map and classify sensitive data across multi-cloud and SaaS environments.
  • Identify access misconfigurations or risky sharing patterns.
  • Detect policy violations or overexposure before they become incidents.

Beyond security operations, DSPM delivers something finance and leadership appreciate: measurable proof. Dashboards and reports make risk tangible, allowing CISOs to demonstrate progress in data protection and compliance.

The takeaway: DSPM isn’t just a good way to use remaining funds, it’s a baseline investment every forward-looking security program should plan for in 2026 and beyond.

3. Invest in Testing

Training builds capability. Visibility builds understanding. Testing builds credibility.

External red team, purple team, or security posture assessments continue to be among the most effective ways to validate your defenses and generate actionable findings.

Security practitioners often point out that testing engagements create outcomes leadership understands:

“Training is great, but it’s hard to quantify. An external assessment gives you findings, metrics, and a roadmap you can point to when defending next year’s budget.”

Well-scoped assessments do more than uncover vulnerabilities—they benchmark performance, expose process gaps, and generate data-backed justification for continued investment.

4. Preserve Flexibility with a Retainer

If your team can’t launch a new project before year-end, a retainer with a trusted partner is an efficient way to preserve funds without waste. Retainers can cover services like penetration testing, incident response, or advisory hours, providing flexibility when unpredictable needs arise. This approach, often recommended by veteran CISOs, allows teams to close their books responsibly while keeping agility for the next fiscal year.

5. Strengthen Your Foundations

Not every valuable investment requires new tools. Several practitioners emphasized the long-term returns from process improvements and collaboration-focused initiatives:

  • Threat modeling workshops that align development and security priorities.
  • Framework assessments (like NIST CSF or ISO 27001) that provide measurable baselines.
  • Automation pilots to eliminate repetitive manual work.
  • Internal tabletop exercises that enhance cross-team coordination.

These lower-cost efforts improve resilience and efficiency, two metrics that always matter in budget conversations.

How to Decide: A Simple, Measurable Framework

When evaluating where to allocate remaining or future funds, apply a simple framework:

  1. Identify what’s lagging. Which pillar - people, visibility, or process - most limits your current effectiveness?
  2. Choose something measurable. Prioritize initiatives that produce clear, demonstrable outputs: reports, dashboards, certifications.
  3. Aim for dual impact. Every investment should strengthen both your operations and your ability to justify next year’s funding.

Final Thoughts

A strong security budget isn’t just about defense, it’s about direction. Every spend tells a story about how your organization prioritizes resilience, efficiency, and visibility.

Whether you’re closing out this year’s funds or preparing your 2026 plan, focus on investments that create both operational value and executive clarity. Because while technologies evolve and threats shift, understanding where your data is, who can access it, and how it’s protected remains the cornerstone of a mature security program.

Or, as one practitioner summed it up: “Spend on the things that make next year’s budget conversation easier.”

DSPM fits that description perfectly.

<blogcta-big>

Read More
Meni Besso
Meni Besso
October 15, 2025
3
Min Read
Compliance

Hybrid Environments: Expand DSPM with On-Premises Scanners

Hybrid Environments: Expand DSPM with On-Premises Scanners

Data Security Posture Management (DSPM) has quickly become a must-have for organizations moving to the cloud. By discovering, classifying, and protecting sensitive data across SaaS apps and cloud services, DSPM gave security teams visibility into data risks they never knew they had before.

But here’s the reality: most enterprises aren’t 100% cloud. Legacy file shares, private databases, and hybrid workloads still hold massive amounts of sensitive data. Without visibility into these environments, even the most advanced DSPM platforms leave critical blind spots.

That’s why DSPM platform support is evolving - from cloud-only to truly hybrid.

The Evolution of DSPM

DSPM emerged as a response to the visibility problem created by rapid cloud adoption. As organizations moved to cloud services, SaaS applications, and collaboration platforms, sensitive data began to sprawl across environments at a pace traditional security tools couldn’t keep up with. Security teams suddenly faced oversharing, inconsistent access controls, and little clarity on where critical information actually lived.

DSPM helped fill this gap by delivering a new level of insight into cloud data. It allowed organizations to map sensitive information across their environments, highlight risky exposures, and begin enforcing least-privilege principles at scale. For cloud-native companies, this represented a huge leap forward - finally, there was a way to keep up with constant data changes and movements, helping customers safely adopt the cloud while maintaining data security best practices and compliance, without slowing innovation.

But for large enterprises, the model was incomplete. Decades of IT infrastructure meant that vast amounts of sensitive information still lived in legacy databases, file shares, and private cloud environments. While DSPM gave them visibility in the cloud, it left everything else in the dark.

The Blind Spot of On-Prem & Private Data

Despite rapid cloud adoption and digital transformation progress, large organizations still rely heavily on hybrid and on-prem environments, since data movement to the cloud can be a years-long process. On-premises file shares such as NetApp ONTAP, SMB, and NTFS, alongside enterprise databases like Oracle, SQL Server, and MySQL, remain central to operations. Private cloud applications are especially common in regulated industries like healthcare, finance, and government, where compliance demands keep critical data on-premises.

To scan on-premises data, many DSPM providers offer partial solutions that take ephemeral ‘snapshots’ of that data and temporarily move it to the cloud (either within the customer’s environment, as Sentra does, or to the vendor’s cloud, as some others do) for classification analysis. This can satisfy some requirements, but it is often seen as a compliance risk for very sensitive or private data that must remain on-premises. What’s left are two untenable alternatives: ignoring the data, which leaves serious visibility gaps, or relying on manual techniques, which do not scale.

These approaches were clearly not built for today’s security or operational requirements. Sensitive data is created and proliferates rapidly, which means it may be unclassified, unmonitored, and overexposed - but how do you even know? From a compliance and risk standpoint, DSPM without on-prem visibility is like watching only half the field while leaving the other half open to attackers or accidental exposure.

Expanding with On-Prem Scanners

Sentra is changing the equation. With the launch of its on-premise scanners, the platform now extends beyond the cloud to hybrid and private environments, giving organizations a single pane of glass for all their data security.

With Sentra, organizations can:

  • Discover and classify sensitive data across traditional file shares (SMB, NFS, CIFS, NTFS) and enterprise databases (Oracle, SQL Server, MySQL, PostgreSQL, MongoDB, MariaDB, IBM DB2, Teradata).
  • Detect and protect critical data as it moves between on-prem and cloud environments.
  • Apply AI-powered classification and enforce Microsoft Purview labeling consistently across environments.
  • Strengthen compliance with frameworks that demand full visibility across hybrid estates.
  • Choose the deployment model that best fits their security, compliance, and operational requirements.

Crucially, Sentra’s architecture allows customers to ensure private data always remains in their own environment. They need not move data outside their premises and nothing is ever copied into Sentra’s cloud, making it a trusted choice for enterprises that require secure, private data processing.

Extending the Hybrid Vision

This milestone builds on Sentra’s proven track record as the only cloud-native data security platform that guarantees data always remains within the customer’s cloud environments - never copied or stored in Sentra’s cloud.

Now, Sentra’s AI-powered classification and governance engine can also be deployed in organizations that require onsite data processing, giving them the flexibility to protect both structured and unstructured data across cloud and on-premises systems.

By unifying visibility and governance across all environments while maintaining complete data sovereignty, Sentra continues to lead the next phase of DSPM, one built for modern, hybrid enterprises.

Real-World Impact

Picture a global bank with modern customer-facing websites and mobile applications hosted in the public cloud, providing agility and scalability for digital services. At the same time, the bank continues to rely on decades-old operational databases running in its private cloud - systems that power core banking functions such as transactions and account management. Without visibility into both, security teams can’t fully understand the risks these stores pose, enforce least privilege, prevent oversharing, or ensure compliance.

With hybrid DSPM powered by on-prem scanners, that same bank can unify classification and governance across every environment - cloud or on-prem, and close the gaps that attackers or AI systems could otherwise exploit.

Conclusion

DSPM solved the cloud problem. But enterprises aren’t just in the cloud, they’re hybrid. Legacy systems and private environments still hold critical data, and leaving them out of your security posture is no longer an option.

Sentra’s on-premise scanners mark the next stage of DSPM evolution: one unified platform for cloud, on-prem, and private environments. With full visibility, accurate classification, and consistent governance, enterprises finally have the end-to-end data security they need for the AI era. Because protecting half your data is no longer enough.

<blogcta-big>

Read More
Meni Besso
Meni Besso
October 9, 2025
4
Min Read
Compliance

GDPR Compliance Failures Lead to Surge in Fines

GDPR Compliance Failures Lead to Surge in Fines

In recent years, the landscape of data privacy and protection has become increasingly stringent, with regulators around the world cracking down on companies that fail to comply with local and international standards.

The latest high-profile case involves TikTok, which was recently fined a staggering €530 million ($600 million) by the Irish Data Protection Commission (DPC) for violations related to the General Data Protection Regulation (GDPR). This is a wake-up call for multinational companies.

Graph showing the rise of GDPR fines from 2018-2025

What is GDPR?

The General Data Protection Regulation (GDPR) is a data protection law that came into effect in the EU in May 2018. Its goal is to give individuals more control over their personal data and unify data protection rules across the EU.

GDPR gives extra protection to special categories of sensitive data. Both 'controllers' (who decide how data is processed) and 'processors' (who act on their behalf) must comply. Joint controllers may share responsibility when multiple entities manage data.

Who Does the GDPR Apply To?

GDPR applies to both EU-based and non-EU organizations that handle the data of EU residents. The regulation requires organizations to obtain clear consent for data collection and processing, and it gives individuals rights to access, correct, and delete their data. Organizations must also ensure strong data security and report any data breaches promptly.

What Are Data Subject Access Requests (DSARs)?

One of the core rights granted to individuals under GDPR is the ability to understand and control how their personal data is used. This is made possible through Data Subject Access Requests (DSARs).

A DSAR allows any EU resident to request access to the personal data an organization holds about them. In response, the organization must provide a comprehensive overview, including:

  • What personal data is being processed
  • The purpose of processing
  • Data sources and recipients
  • Retention periods
  • Information about automated decision-making

Organizations are required to respond to DSARs within one month, making them a time-sensitive and resource-intensive obligation, especially for companies with complex data environments.

What Are the Penalties for Non-Compliance with GDPR?

Non-compliance with the General Data Protection Regulation (GDPR) can result in substantial penalties.

Article 83 of the GDPR establishes the fine framework, which includes the following:

Maximum Fine: The maximum fine for GDPR non-compliance can reach up to 20 million euros, or 4% of the company’s total global turnover from the preceding fiscal year, whichever is higher.

Alternative Penalty: In certain cases, the fine may be set at 10 million euros or 2% of the annual global revenue, as outlined in Article 83(4).

Additionally, individual EU member states have the authority to impose their own penalties for breaches not specifically addressed by Article 83, as permitted by the GDPR’s flexibility clause.
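
To make the Article 83 ceilings concrete, the small sketch below works out the maximum exposure for a given global turnover; the turnover figure is an illustrative example.

```python
# Article 83 fine ceilings: the higher of a fixed amount or a percentage of turnover.
def gdpr_max_fine(turnover_eur: float, severe: bool = True) -> float:
    if severe:   # Article 83(5): up to 20 million euros or 4% of global annual turnover
        return max(20_000_000, 0.04 * turnover_eur)
    else:        # Article 83(4): up to 10 million euros or 2% of global annual turnover
        return max(10_000_000, 0.02 * turnover_eur)

# Example: a company with 5 billion euros of global turnover faces a ceiling of
# 200 million euros for a severe violation - far above the 20 million euro floor.
print(f"{gdpr_max_fine(5_000_000_000):,.0f}")  # 200,000,000
```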

So far, the largest fine issued under GDPR went to Meta in 2023: €1.2 billion ($1.3 billion) for violations related to data transfers. We’ll delve into the details of that case shortly.

Can Individuals Be Fined for GDPR Breaches?

While fines are typically imposed on organizations, individuals can be fined under certain circumstances. For example, if a person is self-employed and processes personal data as part of their business activities, they could be held responsible for a GDPR breach. However, UK-GDPR and EU-GDPR do not apply to data processing carried out by individuals for personal or household activities. 

According to GDPR Chapter 1, Article 4, “any natural or legal person, public authority, agency, or body” can be held accountable for non-compliance. This means that GDPR regulations do not distinguish significantly between individuals and corporations when it comes to breaches.

Specific scenarios where individuals within organizations may be fined include:

  • Obstructing a GDPR compliance investigation.
  • Providing false information to the ICO or DPA.
  • Destroying or falsifying evidence or information.
  • Obstructing official warrants related to GDPR or privacy laws.
  • Unlawfully obtaining personal data without the data controller's permission.

The Top 3 GDPR Fines and Their Impact

1. Meta - €1.2 Billion ($1.3 Billion), 2023

In May 2023, Meta, the U.S. tech giant, was hit with a staggering €1.2 billion ($1.3 billion) fine by the Irish Data Protection Commission for violating GDPR regulations concerning data transfers between the E.U. and the U.S. This massive penalty came after the E.U.-U.S. Privacy Shield Framework, which previously provided legal cover for such transfers, was invalidated in 2020, when the Court of Justice of the EU found that the framework failed to offer sufficient protection for EU citizens against government surveillance. This fine now stands as the largest ever under GDPR, surpassing Amazon’s 2021 record.

2. Amazon - €746 million ($781 million), 2021

Next is Amazon at number two. In 2021, Amazon Europe received the second-largest GDPR fine to date from Luxembourg’s National Commission for Data Protection (CNPD). The fine was imposed after it was determined that the online retailer was storing advertisement cookies without obtaining proper consent from its users.

3. TikTok – €530 million ($600 million), 2025

In May 2025, the Irish Data Protection Commission (DPC) fined TikTok for failing to protect user data from unlawful access and for violating GDPR rules on international data transfers. The investigation found that TikTok allowed EU users’ personal data to be accessed from China without ensuring adequate safeguards, breaching GDPR’s requirements for cross-border data protection and transparency. The DPC also cited shortcomings in how TikTok informed users about where their data was processed and who could access it. The case reinforced regulators’ focus on international data transfers and children’s privacy on social media platforms.

The Implications for Global Companies

The growing frequency of such fines sends a clear message to global companies: compliance with data protection regulations is non-negotiable. As European regulators continue to enforce GDPR rigorously, companies that fail to implement adequate data protection measures risk facing severe financial penalties and reputational harm.

Uber’s case illustrates the same point: the company’s failure to use appropriate mechanisms for data transfers, such as Standard Contractual Clauses, led to significant repercussions. This situation emphasizes the importance of staying current with regulatory changes, such as the introduction of the E.U.-U.S. Data Privacy Framework, and ensuring that all data transfer practices are fully compliant.

How Sentra Helps Organizations Stay Compliant with GDPR

Sentra helps organizations maintain GDPR compliance by effectively tagging data belonging to European citizens.

When EU citizens' Personally Identifiable Information (PII) is moved or stored outside of EU data centers, Sentra will detect and alert you in near real-time. Our continuous monitoring and scanning capabilities ensure that any data violations are identified and flagged promptly.

Example of EU citizens PII stored outside of EU data centers
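
To illustrate the kind of residency rule involved, here is a minimal sketch that flags data stores holding EU-resident PII outside approved EU regions. The inventory structure and region names are illustrative assumptions, not Sentra’s API.

```python
# Minimal data-residency check: flag EU PII stored outside approved EU regions (illustrative).
EU_REGIONS = {"eu-west-1", "eu-central-1", "europe-west1"}

def residency_violations(inventory):
    return [
        store for store in inventory
        if "EU_PII" in store.get("tags", []) and store.get("region") not in EU_REGIONS
    ]

stores = [
    {"name": "prod-users-db", "region": "eu-central-1", "tags": ["EU_PII"]},
    {"name": "analytics-dump", "region": "us-east-1", "tags": ["EU_PII"]},
]
print([s["name"] for s in residency_violations(stores)])  # ['analytics-dump']
```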

Unlike traditional methods where data replication can obscure visibility and lead to issues during audits, Sentra provides ongoing visibility into data storage. This proactive approach significantly reduces the risk by alerting you to potential compliance issues as they arise.

Sentra automatically classifies localized data - in this case, EU data. Below you can see an example of how we do this.

Sentra's automatic classification of localized data

The Rise of Compliance Violations: A Wake-up Call

The increasing number of compliance violations and the related hefty fines should serve as a wake-up call for companies worldwide. As the regulatory environment becomes more complex, it is crucial for organizations to prioritize data protection and privacy. By doing so, they can avoid costly penalties and maintain the trust of their customers and stakeholders.

Solutions such as Sentra provide a cost-effective means to ensure sensitive data always has the right posture and security controls - no matter where the data travels - and can alert on exceptions that require rapid remediation. In this way, organizations can remain regulatory compliant, avoid the steep penalties for violations, and ensure the proper, secure use of data throughout their ecosystem.

To learn more about how Sentra's Data Security Platform can help you stay compliant, avoid GDPR penalties, and ensure the proper, secure use of data, request a demo today.

<blogcta-big>

 

Read More
Shiri Nossel
Shiri Nossel
September 28, 2025
4
Min Read
Compliance

The Hidden Risks Metadata Catalogs Can’t See

The Hidden Risks Metadata Catalogs Can’t See

In today’s data-driven world, organizations are dealing with more information than ever before. Data pours in from countless production systems and applications, and data analysts are tasked with making sense of it all - fast. To extract valuable insights, teams rely on powerful analytics platforms like Snowflake, Databricks, BigQuery, and Redshift. These tools make it easier to store, process, and analyze data at scale.

But while these platforms are excellent at managing raw data, they don't solve one of the most critical challenges organizations face: understanding and securing that data.

That’s where metadata catalogs come in.

Metadata Catalogs Are Essential But They’re Not Enough

Metadata catalogs such as AWS Glue, Hive Metastore, and Apache Iceberg are designed to bring order to large-scale data ecosystems. They offer a clear inventory of datasets, making it easier for teams to understand what data exists, where it’s stored, and who is responsible for it.

This organizational visibility is essential. With a good catalog in place, teams can collaborate more efficiently, minimize redundancy, and boost productivity by making data discoverable and accessible.

But while these tools are great for discovery, they fall short in one key area: security. They aren’t built to detect risky permissions, identify regulated data, or prevent unintended exposure. And in an era of growing privacy regulations and data breach threats, that’s a serious limitation.

Different Data Tools, Different Gaps

It’s also important to recognize that not all tools in the data stack work the same way. For example, platforms like Snowflake and BigQuery come with fully managed infrastructure, offering seamless integration between storage, compute, and analytics. Others, like Databricks or Redshift, are often layered on top of external cloud storage services like S3 or ADLS, providing more flexibility but also more complexity.

Metadata tools have similar divides. AWS Glue is tightly integrated into the AWS ecosystem, while tools like Apache Iceberg and Hive Metastore are open and cloud-agnostic, making them suitable for diverse lakehouse architectures.

This variety introduces fragmentation, and with fragmentation comes risk. Inconsistent access policies, blind spots in data discovery, and siloed oversight can all contribute to security vulnerabilities.

The Blind Spots Metadata Can’t See

Even with a well-maintained catalog, organizations can still find themselves exposed. Metadata tells you what data exists, but it doesn’t reveal when sensitive information slips into the wrong place or becomes overexposed.

This problem is particularly severe in analytics environments. Unlike production environments, where permissions are strictly controlled, or SaaS applications, which have clear ownership and structured access models, data lakes and warehouses function differently. They are designed to collect as much information as possible, allowing analysts to freely explore and query it.

In practice, this means data often flows in without a clear owner and frequently without strict permissions. Anyone with warehouse access, whether users or automated processes, can add information, and analysts typically have broad query rights across all data. This results in a permissive, loosely governed environment where sensitive data such as PII, financial records, or confidential business information can silently accumulate. Once present, it can be accessed by far more individuals than appropriate.

The good news is that the remediation process doesn’t require a heavy-handed approach. Often, it’s not about managing complex permission models or building elaborate remediation workflows. The crucial step is the ability to continuously identify sensitive data, understand where it lives, and then take the correct action, whether that involves removal, masking, or locking it down.
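
As a rough illustration of that remediation step, the sketch below locates flagged values in a warehouse table and masks them in place. The connection object, table layout, and detector are illustrative assumptions, not a specific product API.

```python
# Minimal masking sketch for sensitive values found in an analytics table (illustrative).
def mask_column(conn, table: str, column: str, detector) -> int:
    rows = conn.execute(f"SELECT id, {column} FROM {table}").fetchall()
    masked = 0
    for row_id, value in rows:
        if value and detector(value):  # detector flags PII, card numbers, secrets, ...
            conn.execute(
                f"UPDATE {table} SET {column} = ? WHERE id = ?",
                ("***MASKED***", row_id),
            )
            masked += 1
    conn.commit()
    return masked
```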

How Sentra Bridges the Gap Between Data Visibility & Security

This is where Sentra comes in.

Sentra’s Data Security Posture Management (DSPM) platform is designed to complement and extend the capabilities of metadata catalogs - not just to address their limitations, but to elevate your entire data security strategy. Instead of replacing your metadata layer, Sentra works alongside it, enhancing your visibility with real-time insights and powerful security controls.

Sentra scans across modern data platforms like Snowflake, S3, BigQuery, and more. It automatically classifies and tags sensitive data, identifies potential exposure risks, and detects compliance violations as they happen.

With Sentra, your metadata becomes actionable.

sentra dashboard datasets

From Static Maps to Live GPS

Think of your metadata catalog as a map. It shows you what’s out there and how things are connected. But a map is static. It doesn’t tell you when there’s a roadblock, a detour, or a collision. Sentra transforms that map into a live GPS. It alerts you in real time, enforces the rules of the road, and helps you navigate safely no matter how fast your data environment is moving.

Conclusion: Visibility Without Security Is a Risk You Can’t Afford

Metadata catalogs are indispensable for organizing data at scale. But visibility alone doesn’t stop a breach. It doesn’t prevent sensitive data from slipping into the wrong place, or from being accessed by the wrong people.

To truly safeguard your business, you need more than a map of your data—you need a system that continuously detects, classifies, and secures it in real time. Without this, you’re leaving blind spots wide open for attackers, compliance violations, and costly exposure.

Sentra turns static visibility into active defense. With real-time discovery, context-rich classification, and automated protection, it gives you the confidence to not only see your data, but to secure it.

See clearly. Understand fully. Protect confidently with Sentra.

<blogcta-big>

Read More
Ward Balcerzak
Ward Balcerzak
Meni Besso
Meni Besso
September 25, 2025
3
Min Read

Sentra Achieves TX-RAMP Certification: Demonstrating Leadership in Data Security Compliance

Sentra Achieves TX-RAMP Certification: Demonstrating Leadership in Data Security Compliance

Introduction

We’re excited to announce that Sentra has officially achieved TX-RAMP certification, a significant milestone that underscores our commitment to delivering trusted, compliant, and secure cloud data protection.

The Texas Risk and Authorization Management Program (TX-RAMP) establishes rigorous security standards for cloud products and services used by Texas state agencies. Achieving this certification validates that Sentra meets and exceeds these standards, ensuring our customers can confidently rely on our platform to safeguard sensitive data.

For agencies and organizations operating in Texas, this means streamlined procurement, faster adoption, and the assurance that Sentra’s solutions are fully aligned with state-mandated compliance requirements. For our broader customer base, TX-RAMP certification reinforces Sentra’s role as a trusted leader in data security posture management (DSPM) and our ongoing dedication to protecting data everywhere it lives.

What is TX-RAMP?

The Texas Risk and Authorization Management Program (TX-RAMP) is the state’s framework for evaluating the security of cloud solutions used by public sector agencies. Its goal is to ensure that organizations working with Texas state data meet strict standards for risk management, compliance, and operational security.

TX-RAMP certification focuses on key areas such as:

  • Audit & Accountability: Ensuring system activity is monitored, logged, and reviewed.
  • System Integrity: Protecting against malicious code and emerging threats.
  • Access Control: Managing user accounts and privileges with least-privilege principles.
  • Policy & Governance: Establishing strong security policies and updating them regularly.

By certifying vendors, TX-RAMP helps agencies reduce risk, streamline procurement, and ensure sensitive state and citizen data is well protected.

Why TX-RAMP Certification Matters

For Texas agencies, TX-RAMP certification means trust and speed. Working with a certified partner like Sentra simplifies procurement, reduces onboarding time, and provides confidence that solutions meet the state’s toughest security requirements.

For enterprises and organizations outside Texas, this milestone is just as meaningful. TX-RAMP certification validates that Sentra’s DSPM platform can meet and go beyond some of the most demanding compliance frameworks in the U.S. It’s another proof point that when customers choose Sentra, they are choosing a solution built with security, accountability, and transparency at its core.

Sentra’s Path to TX-RAMP Certification

Achieving TX-RAMP certification required proving that Sentra’s security controls align with strict state requirements.

Some of the measures that demonstrate compliance include:

  • Audit and Accountability: Continuous monitoring and quarterly reviews of audit logs under SOC 2 Type II governance.
  • System and Information Integrity: Endpoint protection and weekly scans to prevent, detect, and respond to malicious code.
  • Access Control: Strong account management practices using Okta, BambooHR, MFA, and quarterly access reviews.
  • Change Management and Governance: Structured SDLC processes with documented requests, multi-level approvals, and complete audit trails.

Together, these safeguards show that Sentra doesn’t just comply with TX-RAMP - we exceed the requirements, embedding security into every layer of our operations and platform.

What This Means for Sentra Customers

For Texas agencies, TX-RAMP certification makes it easier and faster to adopt Sentra’s platform, knowing that it has already been vetted against the state’s most stringent standards.

For global enterprises, it’s another layer of assurance: Sentra’s DSPM solution is designed to stand up to the highest levels of compliance practice, giving customers confidence that their most sensitive data is secure - wherever it lives.

Conclusion

Earning TX-RAMP certification is a major milestone in Sentra’s journey, but it’s only part of our broader mission: building trust through security, compliance, and innovation.

This recognition reinforces Sentra’s role as a leader in data security posture management (DSPM) and gives both public sector and private enterprises confidence that their data is safeguarded by a platform designed for the most demanding environments.

<blogcta-big>

Read More
Kristin Grimes
Kristin Grimes
Ryda Stegenga
Ryda Stegenga
September 21, 2025
3
Min Read

Sentra on the Road: Where to Find Us This October

Sentra on the Road: Where to Find Us This October

October is shaping up to be a big month for Sentra! From coast to coast, our team will be meeting with security leaders to share insights on securing sensitive data - no matter where it travels.

If you’re attending one of these top cybersecurity conferences, we’d love to connect and show you how Sentra helps organizations embrace innovation while keeping data secure. Here’s where you can find us this month:

Hou.Sec.Con: September 30–October 1, Houston, TX

We’re kicking off in Texas at Hou.Sec.Con, one of the region’s most anticipated security conferences. It’s a hub for IT and cybersecurity professionals looking to explore new ways to defend against today’s evolving threats.

Stop by and learn how Sentra helps organizations protect sensitive data across cloud environments.

Trace3 Evolve: September 30–October 3, Las Vegas, NV

Next up is Trace3 Evolve, where IT leaders and innovators gather to discuss the future of enterprise technology. With cloud adoption accelerating, conversations around data security, compliance, and innovation are more important than ever.

Meet our team to see how Sentra makes securing sensitive data simple and scalable.

GuidePoint GPSEC Security Forum: October 3, Dallas, TX

We’re heading back south to the GuidePoint GPSEC Security Forum in Dallas, which will bring together industry leaders, cybersecurity experts, and technology innovators for a full day of impactful conversations, networking, and hands-on learning. This conference will dive into today’s most pressing security challenges through dynamic keynote speakers, engaging breakout sessions, and a bustling vendor fair.

Whether you're dealing with data sprawl, compliance complexity, or risk visibility, Sentra will be on-site to show how our platform helps reduce risk and strengthen security posture without slowing innovation.

GrrCON: October 2–3, Grand Rapids, MI

Heading north, we’ll be at GrrCON, a favorite for security practitioners, researchers, and executives alike. Known for its community-driven feel, this event fosters knowledge-sharing and collaboration.

Let’s chat about modern approaches to cloud data security and how to mitigate risk without slowing innovation.

Innovate Cybersecurity Summit: October 5–7, Scottsdale, AZ

We’re excited to join the Innovate Cybersecurity Summit, where industry leaders explore solutions to today’s toughest challenges in data protection and cyber defense.

Learn how Sentra empowers organizations to gain visibility into their sensitive data and take proactive steps to secure it.

FS-ISAC Scottsdale: October (Dinner & Meetings)

We will be in Scottsdale during FS-ISAC, a premier financial services cybersecurity community event.

Sentra will be hosting a private dinner where attendees can connect in an intimate setting. We’ll also be available for 1:1 meetings to discuss how Sentra helps financial institutions protect sensitive data and comply with complex regulatory requirements.

This is a great chance to meet our team and hear how we partner with organizations to balance innovation and data protection.

Gartner Symposium: October 20–23, Orlando, FL

One of the year’s biggest IT events, the Gartner Symposium brings together CIOs, CISOs, and technology leaders to discuss the future of digital business.

Sentra will be on-site at Booth #748, where our team will showcase how a data-first security approach empowers organizations to innovate confidently while ensuring sensitive information remains protected. Stop by to connect with our experts and learn how Sentra helps enterprises stay secure in the cloud era.

NYC Google Event: October 21, New York, NY

We’ll also be in New York City at the Google Event, connecting with forward-thinking organizations adopting cutting-edge cloud technologies.

Discover how Sentra seamlessly integrates with Google Cloud to protect sensitive data wherever it lives.

InfoSec World: October 27–29, Lake Buena Vista, FL

Late in the month we’ll be at InfoSec World, a leading cybersecurity event bringing together professionals from across industries.

Stop by to learn how Sentra helps organizations strengthen data security strategies and stay ahead of regulatory demands.

GuidePoint GPSEC Security Forum: October 29, Philadelphia, PA

We’re closing out October at the GuidePoint GPSEC Security Forum in Philadelphia. This annual event brings together security professionals, technology partners, and thought leaders for a full day of collaboration and learning.

Hosted at Convene at Commerce Square, the forum will run from 8:00 a.m. to 5:00 p.m. ET and features a rich agenda, including:

  • A keynote from a leading cybersecurity expert
  • Breakout sessions exploring today’s most pressing security challenges
  • A panel of CISOs sharing practical strategies and real-world insights
  • A showcase of more than 70 technology vendors driving innovation in security

The day wraps up with a networking reception, providing attendees with the opportunity to connect with peers, exchange ideas, and continue important conversations in a more relaxed setting. Sentra is proud to participate in this event and contribute to the dialogue on securing sensitive data in an increasingly complex landscape.

Why These Events Matter

Cybersecurity is a team sport. By joining these events, Sentra isn’t just sharing our vision for protecting sensitive data, we’re also listening, learning, and collaborating with the community to address the most pressing challenges in cloud security.

From data discovery and classification to continuous monitoring and protection, Sentra helps organizations embrace innovation without compromising on security.

Connect with Sentra This October

Will you be at one of these events? Let’s meet!

Schedule a meeting with Sentra or visit our team at any of the conferences listed above. We’d love to show you how we can help your organization protect sensitive data and move faster with confidence.

See you on the road this October!

<blogcta-big>

Read More
Aviv Zisso
Aviv Zisso
August 26, 2025
4
Min Read
Data Security

Global Travel Platform Secures Petabytes of Cloud Data in 30 Days

Global Travel Platform Secures Petabytes of Cloud Data in 30 Days

Introduction

Cloud-first travel platforms handle massive volumes of customer data every day, from booking details to payment information. With petabytes of data spread across hundreds of cloud accounts, the stakes couldn’t be higher: customer trust, regulatory pressure (PCI DSS, GDPR), and business reputation are always on the line.

This is the story of how a global travel platform took action to ensure the highest level of customer data protection and set out to gain complete visibility and full control of its data estate, securing petabytes of sensitive information across 600+ cloud accounts in just 30 days.

At a Glance: Securing Petabytes at Scale

The Challenge

  • 100s of PBs of sensitive customer data
  • 600+ cloud accounts, 150K+ data stores
  • Manual tagging, blind spots, reactive DLP
  • Compliance risks (PCI DSS, GDPR)

The Solution

  • Sentra’s agentless DSPM platform
  • Automated discovery & AI-driven classification
  • Real-time data mapping and compliance alignment
  • Partnership-driven support and fast deployment

The Results

  • Full visibility across petabytes of data in 30 days
  • Streamlined governance across 600+ cloud accounts
  • Dramatic reduction in false positives & alert fatigue
  • Stronger compliance with PCI DSS & GDPR
  • Data security transformed into a strategic advantage

The Challenge: Data Visibility at Scale

The travel tech company’s cloud footprint had grown rapidly, and its security practices needed to catch up. Relying on legacy Data Loss Prevention (DLP) tools left the security team in a reactive posture: alerts were triggered only after data had already left the environment. In the high-velocity world of digital travel, “too late” is not an acceptable outcome.

Manual tagging compounded the problem. It was slow, resource-intensive, inconsistent across teams, and prone to human error. With more than 600 cloud accounts and hundreds of petabytes of data in motion, the organization sought a reliable way to answer the most fundamental security questions:

  • What sensitive data do we have?
  • Where is it stored?
  • Who has access to it?

Answers to these three foundational questions would enable them to lock down exposure risk, misconfigurations, and regulatory noncompliance for sensitive customer information, including payment card data and personal identifiers.

Sentra Data Security: Scalable, Accurate, Agentless

After evaluating a wide mix of DLP and DSPM vendors, the company selected Sentra for its ability to deliver scale, accuracy, and scan efficiency.

  • Agentless discovery allowed rapid deployment across the entire multi-cloud environment without adding operational friction.
  • AI-driven classification replaced error-prone manual tagging, enabling sensitive data to be labeled consistently and accurately.
  • Regulatory mapping ensured risks were tied directly to frameworks such as PCI DSS and GDPR, making compliance reviews easier and faster.
  • Smart scanning lowered cloud compute costs and provided more timely results.

Just as importantly, Sentra’s customer success and engineering teams worked closely with the company. Rapid support and the ability to deliver custom features strengthened the partnership and accelerated adoption.

Implementation: Tackling Complexity Head-On

Securing hundreds of petabytes across more than 600 cloud accounts, 150K+ data stores, and 25K data storage locations was no small feat. The implementation involved coordination with six internal stakeholder teams.

Sentra’s engineering team collaborated directly with the customer to fine-tune scanning for high-memory formats and optimize scanning cycles. This ensured that even as the environment expanded, sensitive data could still be discovered, classified, and secured in near real time.

Despite the scope and complexity, deployment was completed on schedule. Within weeks, the company moved from chasing alerts to uncovering exposures proactively. Manual tagging errors were eliminated, and governance workflows became more consistent across business units.

Real Business Impact: From Reactive to Proactive

The shift in outcomes was dramatic. Within months, the security team achieved the visibility they sought. Instead of reacting to alerts, they were proactively discovering risks and preventing incidents before they escalated.

Key results included:

  • Discovery of sensitive data that had previously gone unnoticed
  • Streamlined governance across 600+ cloud accounts
  • Automated classification that reduced false positives and alert fatigue
  • Improved compliance posture with PCI DSS and GDPR

As one security engineering manager put it:

“The Sentra speed and support really stood out. We were able to quickly transform our approach from reactive alerts to proactive discovery. We’re not just detecting potential risks anymore; we’re gaining a comprehensive inventory of our data landscape across hundreds of petabytes, enabling us to truly protect our most critical assets.”

Sentra for Travel Tech: Setting the Pace

For travel technology companies, customer trust and agility are everything. Every transaction, every booking, every passenger record carries sensitive information that must be protected. At this scale, manual processes and reactive tools simply cannot keep up.

By adopting Sentra’s cloud-native DSPM platform, this global travel leader gained real-time visibility into its vast, fast-moving data estate. Booking and flight details, payment card data, and personal identifiers could now be classified automatically and governed consistently without slowing the pace of innovation.

What had once been a compliance bottleneck became a strategic advantage.

Bottom Line: Data Security is a Competitive Edge

The journey of this global travel platform illustrates what’s possible when scale, automation, and accuracy come together. In just 30 days, the company moved from dangerous blind spots to full visibility and control over petabytes of sensitive data.

But this is about more than one company’s success story. In the AI-powered economy, where data volumes are exploding and regulatory demands are intensifying, innovation speed without security is a liability. The leaders of the next decade will be those who can combine agility with trusted data security.

Sentra’s DSPM platform gives organizations the ability to:

  • Discover and classify sensitive data automatically
  • Map risks directly to compliance frameworks
  • Move from reactive alerts to proactive governance
  • Scale confidently across complex, cloud-first environments

This is about more than just compliance. For consumer industries like travel and hospitality, retail, financial services, and any enterprise that runs on data, it’s about protecting customer trust, unlocking innovation, and gaining a true competitive edge.

Discover how Sentra can help your organization secure its cloud data estate at scale.

<blogcta-big>

Read More
Meni Besso
Meni Besso
August 21, 2025
3
Min Read
Compliance

NYDFS 2.0: New Cybersecurity Requirements and Enforcement

NYDFS 2.0: New Cybersecurity Requirements and Enforcement

NYDFS Steps Up Enforcement

The New York State Department of Financial Services (NYDFS) has long been one of the most influential regulators in the financial sector, but over the past two years, it’s made one thing crystal clear: cybersecurity is no longer a back-office IT concern, it’s a regulatory priority.

In response to growing threats, increasing reliance on third-party services, and persistent operational risks, NYDFS has tightened its expectations around how financial institutions protect sensitive data. And it’s backing that stance with real financial consequences.

Just ask PayPal or OneMain Financial, two major firms hit with multimillion-dollar penalties for cybersecurity lapses. These weren’t headline-grabbing breaches or ransomware attacks; they were the result of basic control failures, delayed reporting, and repeated gaps in governance.

What do a $2M fine for PayPal and a $4.25M penalty for OneMain have in common?

Weak cybersecurity practices, and a regulator that’s no longer willing to wait for companies to catch up.

The Recent Crackdowns: PayPal and OneMain

a. PayPal – $2M Civil Penalty (January 2025)

In January 2025, NYDFS announced a $2 million penalty against PayPal for violations of its cybersecurity regulations under Part 500. The enforcement focused on failures to report a cybersecurity event in a timely manner and gaps in maintaining certain required controls.

The incident involved unauthorized access to over 34,000 user accounts, exposing sensitive personal data including tax IDs and financial information. NYDFS emphasized that PayPal’s delayed reporting and lack of specific security measures put both consumers and the broader financial ecosystem at risk.

What it signals: No company - not even a digital-native fintech giant - is immune from enforcement. The bar is rising, and NYDFS expects organizations to report, respond, and remediate swiftly and transparently.

b. OneMain Financial – $4.25M Fine (May 2023)

In May 2023, NYDFS fined OneMain Financial $4.25 million after discovering systemic cybersecurity deficiencies, including improperly stored passwords, insufficient multi-factor authentication, and inadequate third-party risk management.

Even more concerning: many of these issues were identified in earlier audits and hadn’t been fully addressed. NYDFS made it clear that repeated inaction wouldn’t be tolerated.

What it signals: It’s not just about responding to one-off incidents — regulators are watching for long-term security maturity. Ongoing hygiene, policy enforcement, and consistent control testing are now table stakes.

What’s Changing: NYDFS 2.0 (Part 500 Amendments)

These enforcement actions aren’t just about past violations; they’re a preview of what’s to come.

With the rollout of the NYDFS Second Amendment to Part 500, also known as NYDFS 2.0, covered entities - especially those classified as Class A companies - are facing a new set of enforceable expectations.

Key new requirements include:

  • Annual independent audits of cybersecurity programs
  • Mandatory multi-factor authentication (MFA) for all systems
  • Stronger access control policies, including role-based access
  • Board-level or senior executive oversight of cybersecurity governance

Full enforcement kicks in on November 1, 2025. At that point, these aren’t just checkboxes; they’re compliance requirements with real financial and reputational risk for falling short.

The message is clear: NYDFS is no longer satisfied with written policies and best-effort intentions. It's expecting demonstrated outcomes, measurable control, and leadership accountability.

The Broader Message: Enforcement Is the New Default

NYDFS isn’t the only regulator stepping up, but it’s arguably the most proactive, and most willing to act. These recent fines signal a broader shift across the industry: compliance is no longer about having good intentions or written policies. Regulators are now focused on evidence of execution, real controls, timely reporting, and provable outcomes.

In other words, enforcement is the new default. This shift reframes cybersecurity from a purely technical issue to a board-level governance challenge. It's not enough for IT or security teams to manage risk in isolation. Executive leadership, legal, and compliance functions all need to be aligned — and accountable.

If your organization is treating cybersecurity as just a tech responsibility, you’re behind.

What Organizations Should Do Now

The message from regulators is clear, and now is the time to act.

Here are four practical steps your team can take to stay ahead:

  • Audit your current posture against NYDFS Part 500. Focus especially on:
    • Incident reporting timelines
    • MFA coverage
    • Access controls
    • Third-party risk assessments

  • Prioritize visibility across your environment
    You can’t protect what you can’t see. Ensure you have continuous insight into where sensitive data lives, who can access it, and how it moves across cloud, SaaS, and on-prem systems.

  • Document everything
    Have clear records of your policies, security controls, vendor assessments, incident response processes, and risk decisions. If you had to prove your compliance tomorrow, could you?

  • Benchmark your controls against recent enforcement
    If PayPal and OneMain were fined for these issues, ask yourself:
    How would our program hold up under similar scrutiny?

Final Thoughts: Read the Signals Now, Not After a Fine

The writing is on the wall - NYDFS is raising the bar, and other regulators are likely to follow. This is your opportunity to get ahead of the curve, rather than scrambling after the fact.

Take these fines for what they are: a warning shot and a roadmap. Organizations that prepare now - with tighter controls, better visibility, and cross-functional ownership - won’t just avoid penalties. They’ll be more resilient, more trusted, and better equipped to lead in a high-risk landscape.

If you’re not sure where to start, use these enforcement cases as a prompt for an internal review. And if you want to go deeper, we’ve put together a compliance checklist that can help you assess where you stand.

Better to find the gaps now before NYDFS does.

<blogcta-big>

Read More
Ward Balcerzak
Ward Balcerzak
August 18, 2025
4
Min Read
Data Security

CISO Challenges of 2025 and How to Overcome Them

CISO Challenges of 2025 and How to Overcome Them

The evolving digital landscape for cloud-first companies presents unprecedented challenges for chief information security officers (CISOs). The rapid adoption of AI-powered systems and the explosive growth of cloud-based deployments have expanded the attack surface, introducing novel risks and threats.

 

According to IBM's 2024 "Cost of a Data Breach Report," the average cost of a cloud data breach soared to $4.88 million - prompting a crucial question: Is your organization prepared to secure its expanding digital footprint? 

Regulatory frameworks and data privacy standards are in a constant state of flux, requiring CISOs to stay agile and proactive in their approach to compliance and risk management.

This article explores the top challenges facing CISOs today, illustrated by real-world incidents, and offers actionable solutions for them. By understanding these pressing concerns, organizations can stay proactive and secure their environments effectively.

Top Modern Challenges Faced by CISOs

Modern CISO concerns stem from a combination of technical complexity, workforce behavior, and external threats. Below, we explore these challenges in detail.

1. AI and Large Language Model (LLM) Data Protection Challenges

AI tools like large language models (LLMs) have become integral to modern organizations; however, they have also introduced significant risks to data security. In 2024, for example, Microsoft's AI system, Copilot, was manipulated to exfiltrate private data and automate spear-phishing attacks, revealing vulnerabilities in AI-powered systems.

Furthermore, insider threats have increased as employees misuse AI tools to leak sensitive data. For instance, the AI malware Imprompter exploited LLMs to facilitate data exfiltration, causing data loss and reputational harm. 

Robust governance frameworks that restrict unauthorized AI system access and implementation of real-time activity monitoring are essential to mitigate such risks.

2. Unstructured Data Management

Unstructured data (e.g., text, images, audio, and video files) is increasingly stored across cloud platforms, making it difficult to secure. Take the high-profile 2022 incident involving Turkish carrier Pegasus Airlines, in which a misconfigured AWS S3 bucket exposed 6.5 TB of unstructured data - roughly 23 million files.

This incident highlighted the dangers of poorly managed unstructured data, which can lead to severe reputational damage and potential regulatory penalties. Addressing this challenge requires automated classification and encryption tools to secure data at scale. In addition, real-time classification and encryption ensure sensitive information remains protected in diverse, dynamic environments.
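Teams can also verify the specific failure mode behind incidents like this one - a bucket left open to the public - directly in code. Below is a minimal sketch, using boto3, that checks whether S3 Block Public Access is fully enabled on every bucket in an account; the check is illustrative and complements, rather than replaces, automated classification and encryption tooling.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_is_locked_down(bucket_name: str) -> bool:
    """Return True only if all four Block Public Access settings are enabled."""
    try:
        config = s3.get_public_access_block(Bucket=bucket_name)["PublicAccessBlockConfiguration"]
    except ClientError as err:
        # No configuration at all means the bucket relies solely on ACLs and policies.
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            return False
        raise
    return all(config.values())

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    status = "OK" if bucket_is_locked_down(name) else "REVIEW: public access not fully blocked"
    print(f"{name}: {status}")
```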

3. Encryption and Data Labeling

Encryption and data labeling are vital for protecting sensitive information, yet many organizations struggle to implement them effectively. 

IBM's 2024 “Cost of a Data Breach Report” reveals that companies that have implemented security AI and automation “extensively” have saved an average of $2.2 million compared to those without these technologies.

 

The EU’s General Data Protection Regulation (GDPR) highlights the importance of data labeling and classification, requiring organizations to handle personal data appropriately based on its sensitivity. These measures are essential for protecting sensitive information and complying with all relevant data protection regulations.

Companies can enforce data protection policies more effectively by adopting dynamic encryption technologies and leveraging platforms that support automated labeling.

4. Regulatory Compliance and Global Standards

The expanding intricacies of data privacy regulations, such as GDPR, CCPA, and HIPAA, pose significant challenges for CISOs. In 2024, Microsoft and Google faced lawsuits for the unauthorized use of personal data in AI training, underscoring the financial and reputational risks of non-compliance.

Companies must leverage compliance automation tools and centralized management systems to navigate these complexities and streamline regulatory adherence.

5. Explosive Data Growth

The exponential growth of data creates immense opportunities but also heightens security risks. 

As organizations generate and store more data, legacy security measures often fall short, exposing critical vulnerabilities. Advanced, cloud-native, and scalable platforms help organizations scale their data protection strategies alongside data growth, offering real-time monitoring and automated controls to mitigate risks effectively.

6. Insider Threats

Both intentional and accidental insider threats remain among the most difficult challenges for CISOs to address. 

In 2024, a North Korean IT worker, hired unknowingly by an American company, stole sensitive data and demanded a cryptocurrency ransom. This incident exposed vulnerabilities in remote hiring processes, resulting in severe operational and reputational consequences. 

Combatting insider threats requires sophisticated behavior analytics and activity monitoring tools to detect and respond to anomalies early. Security platforms should provide enhanced visibility into user activity, enabling organizations to mitigate such risks and secure their data proactively.

7. Shadow Data

In the race to adopt new cloud and AI-powered tools, users are often generating, storing, and transmitting sensitive data in services that the security team never approved or even knew existed. This includes everything from unofficial file-sharing apps to unsanctioned SaaS platforms and ad hoc API integrations.

The result is shadow IT, shadow SaaS, and ultimately, shadow data: sensitive or regulated information that lives outside the visibility of traditional security tools. Without knowing where this data resides or how it’s being accessed, CISOs cannot protect it. These unknown data flows introduce real compliance, privacy, and security risk.

It is critical to expose and classify this hidden data in real time, in order to give security teams the visibility they need to secure what was previously invisible.

Overcoming the Challenges: A CISO's Playbook in 6 Steps

CISOs can follow a structured, data-driven, step-by-step playbook to navigate the hurdles of modern cybersecurity and data protection. However, in today's dynamic data landscape, simply checking off boxes is no longer sufficient—leaders must understand how each critical data security measure interconnects, creating a unified, forward-thinking strategy.

Before diving into these steps, it's important to note why they matter now more than ever: Emerging data technologies, rapidly evolving data regulations, and escalating insider threats demand an adaptable, holistic, and data-centric approach to security. By integrating these core elements with robust data analytics, CISOs can build an ecosystem that addresses current vulnerabilities and anticipates future data risks.

1. First, Develop a Scalable Security Strategy 

A strategic security roadmap should integrate seamlessly with organizational goals and data governance frameworks, guaranteeing that risk management, data integrity, and business priorities align. 

Accurately classifying and continuously monitoring data assets, even as they move throughout the organization, is a must to achieve sustainable scale. This solid data foundation empowers organizations to quickly pivot in response to emerging threats, keeping them agile and resilient.

The next step is key, as the right mindset is a must.

2. Build a Security-First Culture

Equip employees with the knowledge and tools to secure data effectively; regular data-focused training sessions and awareness initiatives help reduce human error and mitigate insider threats before they become critical risks. By fostering a culture of shared data responsibility, CISOs transform every team member into a first line of defense. 

This approach ensures that everyone is on the same page toward prioritizing data security. 

3. Leverage Advanced Tools and Automation

Utilize state-of-the-art platforms for comprehensive data discovery, real-time monitoring, automation, and visibility. By automating routine security tasks and delivering instant data-driven insights, these features empower CISOs to stay on top of new threats and make decisions based on the latest data. 

Naturally, even the best tools and automation require a strategic, data-centric approach to yield optimal results.

4. Implement Zero-Trust Principles 

Implement a zero-trust approach that verifies every user, device, and data transaction, ensuring zero implicit trust within the environment. Understand who has access to what data, and implement least privilege access. Continuous identity and device validation boosts security for both external and internal threats. 

Positioning zero trust as a core principle tightens data access controls across the entire ecosystem, but organizations must remain vigilant to the most recent threats.

5. Evaluate and Update Cybersecurity Frameworks

Regularly assess security policies, procedures, and data management tools to ensure alignment with the latest trends and regulatory requirements. Keep a current data inventory, and monitor all changes. Ongoing reviews maintain relevance and effectiveness, preventing outdated defenses from becoming liabilities.

For optimal data security, cross-functional collaboration is key.

6. Encourage Cross-Departmental Collaboration

Work closely with other teams, including IT, legal, compliance, and data governance, to ensure a unified and practical approach to data security challenges. Cooperation among stakeholders accelerates decision-making, streamlines incident response, and underscores the importance of security as a shared enterprise objective.

By adopting this data-centric playbook, CISOs can strengthen their organization's security posture, respond to threats quickly, and reduce the likelihood and impact of breaches. Platforms such as Sentra provide robust, data-driven tools and capabilities to execute this strategy effectively, enabling CISOs to confidently handle complex cybersecurity landscapes. When these steps intertwine, the result is a resilient defense that adapts to the ever-shifting digital landscape - empowering leaders to stay one step ahead.

The Sentra Edge

Sentra is an advanced data security platform that offers the strategic insights and automated capabilities modern CISOs need to navigate evolving threats without compromising agility or compliance. Sentra integrates seamlessly with existing processes, empowering security leaders to build holistic programs that anticipate new risks, reinforce best practices, and protect data in real time.

Below are several key areas where Sentra's approach aligns with the thought leadership necessary to stay ahead of modern cybersecurity challenges.

Secure Structured Data

Structured data - in tables, databases, and other organized repositories - forms the backbone of an organization’s critical assets. At Sentra, we prioritize structured data management first and foremost, ensuring automation drives our security strategy. While securing structured data might seem straightforward, rapid data proliferation can quickly overwhelm manual safeguards, exposing your data. By automating data movement tracking, continuous risk and security posture assessments, and real-time alerts for policy violations, organizations can offload these burdensome yet essential tasks.

This automation-first approach not only strengthens data security but also ensures compliance and operational efficiency in today’s fast-paced digital landscape.

Secure Unstructured Data

Securing text, images, video, and other unstructured data is often challenging in cloud environments. Unstructured data is particularly vulnerable when organizations lack automated classification and encryption, creating blind spots that bad actors can exploit.

 

In response, Sentra underscores the importance of continuous data discovery, labeling, and protection—enabling CISOs to maintain visibility over their dynamic cloud assets and reduce the risk of inadvertent exposure.

Navigate Complex Regulations

Modern data protection laws, such as GDPR and CCPA, demand rigorous compliance structures that can strain security teams. Sentra's approach highlights centralized governance and real-time reporting, helping CISOs align with ever-shifting global standards.

 

By automating repetitive compliance tasks, organizations can focus more energy on strategic security initiatives, ensuring they remain nimble even as regulations evolve.

Tackle Insider Threats

Insider threats—accidental and malicious—remain one of the most challenging hurdles for CISOs. Sentra advocates a multi-layered strategy that combines behavior analytics, anomaly detection, and dynamic data labeling; this offers proactive visibility into user actions, enabling security leaders to detect and neutralize insider risks early. 

Such a holistic posture helps mitigate breaches before they escalate and preserves organizational trust.

Be Prepared for Future Risks

AI-driven attacks and large language model (LLM) vulnerabilities are no longer theoretical—they are rapidly emerging threats that demand forward-thinking responses. Sentra's focus on robust data control mechanisms and continuous monitoring means CISOs have the tools they need to safeguard sensitive information, whether it's accessed by human users or AI systems. 

This outlook helps security teams adapt quickly to the next wave of challenges. By emphasizing strategic insights, proactive measures, and ongoing adaptation, Sentra exemplifies an industry-leading approach that empowers CISOs to navigate complex data security landscapes without losing sight of broader organizational objectives.

Conclusion

As new threat vectors emerge and organizations face mounting pressures to protect their data, the role of CISO will become even more critical. Addressing modern challenges requires a proactive and strategic approach, incorporating robust security frameworks, cutting-edge tools, and a culture of vigilance.

Sentra's platform is a comprehensive data security solution designed to empower CISOs with the tools they need to navigate this complex landscape. By addressing key hurdles such as AI risks, structured and unstructured data management, and compliance, Sentra enables companies to stay on top of evolving risks and safeguard their operations. The modern CISO role is more demanding than ever, but the right tools make all the difference. Discover how Sentra's cloud-native approach empowers you to conquer pressing security challenges.

<blogcta-big>

Read More
Yogev Wallach
Yogev Wallach
August 11, 2025
4
Min Read
AI and ML

How to Secure Regulated Data in Microsoft 365 Copilot

How to Secure Regulated Data in Microsoft 365 Copilot

Microsoft 365 Copilot is a game-changer, embedding generative AI directly into your favorite tools like Word, Outlook, and Teams, and giving productivity a huge boost. But for governance, risk, and compliance (GRC) officers and CISOs, this exciting new innovation also brings new questions about governing sensitive data.

So, how can your organization truly harness Copilot safely without risking compliance? What are Microsoft 365 Copilot security best practices?

Frameworks like NIST’s AI Risk Management and the EU AI Act offer broad guidance, but they don't prescribe exact controls. At Sentra, we recommend a practical approach: treat Copilot as a sensitive data store capable of serving up data (including highly sensitive, regulated information).

This means applying rigorous data security measures to maintain compliance. Specifically, you'll need to know precisely what data Copilot can access, secure it, clearly map access, and continuously monitor your overall data security posture.

We tackle Copilot security through two critical DSPM concepts: Sanitization and Governance.

1. Sanitization: Minimize Unnecessary Data Exposure

Think of Copilot as an incredibly powerful search engine. It can potentially surface sensitive data hidden across countless repositories. To prevent unintended leaks, your crucial first step is to minimize the amount of sensitive data Copilot can access.

Address Shadow Data and Oversharing

It's common for organizations to have sensitive data lurking in overlooked locations or within overshared files. Copilot's incredible search capabilities can suddenly bring these vulnerabilities to light. Imagine a confidential HR spreadsheet, accidentally shared too broadly, now easily summarized by Copilot for anyone who asks.

The solution? Conduct thorough data housekeeping. This means identifying, archiving, or deleting redundant, outdated, or improperly shared information. Crucially, enforce least privilege access by actively auditing and tightening permissions – ensuring only essential identities have access to sensitive content.
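To make the auditing step concrete, here is a minimal sketch of how a team might walk a SharePoint or OneDrive drive through the Microsoft Graph API and flag items carrying broad sharing links. It assumes you already hold a Graph access token with Files.Read.All permission; the drive ID placeholder and the decision to flag only "anonymous" and "organization" link scopes are illustrative.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<token acquired via MSAL>"          # assumption: app already authenticated
DRIVE_ID = "<target SharePoint or OneDrive drive id>"  # hypothetical placeholder

headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def broad_permissions(item_id: str):
    """Yield sharing links on a drive item whose scope is wider than named users."""
    url = f"{GRAPH}/drives/{DRIVE_ID}/items/{item_id}/permissions"
    for perm in requests.get(url, headers=headers).json().get("value", []):
        link = perm.get("link") or {}
        if link.get("scope") in ("anonymous", "organization"):
            yield link.get("scope"), link.get("webUrl")

# Walk the top level of the drive and report items shared too broadly.
# (A production script would also follow @odata.nextLink pages and recurse into folders.)
children = requests.get(f"{GRAPH}/drives/{DRIVE_ID}/root/children", headers=headers).json()
for item in children.get("value", []):
    for scope, web_url in broad_permissions(item["id"]):
        print(f"{item['name']}: shared with scope '{scope}' -> {web_url}")
```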

How Sentra Helps

Sentra's DSPM solution leverages advanced AI technologies (like OCR, NER, and embeddings) to automatically discover and classify sensitive data across your entire Microsoft 365 environment. Our intuitive dashboards quickly highlight redundant files, shadow data, and overexposed folders. What's more, we meticulously map access at the identity level, clearly showing which users can access what specific sensitive data – enabling rapid remediation.

For example, in the screenshot below, you'll see a detailed view of an identity (Jacob Simmons) within our system. This includes a concise summary of the sensitive data classes they can access, alongside a complete list of accessible data stores and data assets.

sentra dspm identity access

2. Governance: Control AI Output to Prevent Data Leakage

Even after thorough sanitization, some sensitive data must remain accessible within your environment. This is where robust governance comes in, ensuring that Copilot's output never becomes an unintentional vehicle for sensitive data leakage.

Why Output Governance Matters

Without proper controls, Copilot could inadvertently include sensitive details in its generated content or responses, leading to unauthorized sharing, unchecked sensitive data sprawl, or severe regulatory breaches. The recent EchoLeak vulnerability, for instance, starkly demonstrated how attackers might exploit AI-generated outputs to silently leak critical information.

Leveraging DLP and Sensitivity Labels

Microsoft 365’s Purview Information Protection and DLP policies are powerful tools that allow organizations to control what Copilot can output. Properly labeled sensitive data, such as documents marked “Confidential – Financial,” prompts Copilot to restrict content output, providing users only with references or links rather than sensitive details.

Sentra’s Governance Capabilities

Sentra automatically classifies your data and intelligently applies MPIP sensitivity labels, directly powering Copilot’s critical DLP policies. Our platform integrates seamlessly with Microsoft Purview, ensuring sensitive files are accurately labeled based on flexible, custom business logic. This guarantees that Copilot's outputs remain fully compliant with your active DLP policies.

Below is an example of Sentra’s MPIP label automation in action, showing how we place sensitivity labels on data assets that contain Facebook profile URLs and credit card numbers belonging to EU citizens, which were modified in the past year:

Additionally, our continuous monitoring and real-time alerts empower organizations to immediately address policy violations – for instance, sensitive data with missing or incorrect MPIP labels – helping you maintain audit readiness and seamless compliance alignment.

sentra mpip label automation sensitive data microsoft purview information protection automation

A Data-Centric Security Approach to AI Adoption

By strategically combining robust sanitization and strong governance, you ensure your regulated data remains secure while enabling safe and compliant Copilot adoption across your organization. This approach aligns directly with the core principles outlined by NIST and the EU AI Act, effectively translating high-level compliance guidance into actionable, practical controls.

At Sentra, our mission is clear: to empower secure AI innovation through comprehensive data visibility and truly automated compliance. Our cutting-edge solutions provide the transparency and granular control you need to confidently embrace Copilot’s powerful capabilities, all without risking costly compliance violations.

Next Steps

Adopting Microsoft 365 Copilot securely doesn’t have to be complicated. By leveraging Sentra’s comprehensive DSPM solutions, your organization can create a secure environment where Copilot can safely enhance productivity without ever exposing your regulated data.


Ready to take control? Contact a Sentra expert today to learn more about seamlessly securing your sensitive data and confidently deploying Microsoft 365 Copilot.

<blogcta-big>

Read More
Yair Cohen
Yair Cohen
Gilad Golani
Gilad Golani
August 5, 2025
4
Min Read
Data Security

How Automated Remediation Enables Proactive Data Protection at Scale

How Automated Remediation Enables Proactive Data Protection at Scale

Scaling Automated Data Security in Cloud and AI Environments

Modern cloud and AI environments move faster than human response. By the time a manual workflow catches up, sensitive data may already be at risk. Organizations need automated remediation to reduce response time, enforce policy at scale, and safeguard sensitive data the moment it becomes exposed. Comprehensive data discovery and accurate data classification are foundational to this effort. Without knowing what data exists and how it's handled, automation can't succeed.

Sentra’s cloud-native Data Security Platform (DSP) delivers precisely that. With built-in, context-aware automation, data discovery, and classification, Sentra empowers security teams to shift from reactive alerting to proactive defense. From discovery to remediation, every step is designed for precision, speed, and seamless integration into your existing security stack.

Automated Remediation: Turning Data Risk Into Action

Sentra doesn't just detect risk, it acts. At the core of its value is its ability to execute automated remediation through native integrations and a powerful API-first architecture. This lets organizations immediately address data risks without waiting for manual intervention.

Key Use Cases for Automated Data Remediation

Sensitive Data Tagging & Classification Automation

Sentra accurately classifies and tags sensitive data across environments like Microsoft 365, Amazon S3, Azure, and Google Cloud Platform. Its Automation Rules Page enables dynamic labels based on data type and context, empowering downstream tools to apply precise protections.

Sensitive Data Tagging and Classification Automation in Microsoft Purview
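To illustrate how classification results can drive downstream enforcement, here is a minimal sketch that writes a classification tag onto an Amazon S3 object so that other tools can key policies off it. The bucket, key, and tag names are hypothetical examples, not Sentra’s actual tag schema.

```python
import boto3

s3 = boto3.client("s3")

def tag_as_sensitive(bucket: str, key: str, data_class: str) -> None:
    """Attach a classification tag to an S3 object so downstream policies can act on it."""
    s3.put_object_tagging(
        Bucket=bucket,
        Key=key,
        Tagging={
            "TagSet": [
                {"Key": "data-classification", "Value": data_class},  # e.g. "pci", "pii"
                {"Key": "classified-by", "Value": "dspm-scan"},       # illustrative provenance tag
            ]
        },
    )

# Example: a scan flags a report as containing cardholder data.
tag_as_sensitive("example-finance-bucket", "reports/2025/q3-billing.csv", "pci")
```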

Automated Access Revocation & Insider Risk Mitigation

Sentra identifies excessive or inappropriate access and revokes it in real time. With integrations into IAM and CNAPP tools, it enforces least-privilege access. Advanced use cases include Just-In-Time (JIT) access via SOAR tools like Tines or Torq.

Enforced Data Encryption & Masking Automation

Sentra ensures sensitive data is encrypted and masked through integrations with Microsoft Purview, Snowflake DDM, and others. It can remediate misclassified or exposed data and apply the appropriate controls, reducing exposure and improving compliance.

Integrated Remediation Workflow Automation

Sentra streamlines incident response by triggering alerts and tickets in ServiceNow, Jira, and Splunk. Context-rich events accelerate triage and support policy-driven automated remediation workflows.

Architecture Built for Scalable Security Automation

Cloud & AI Data Visibility with Actionable Remediation

Sentra provides visibility across AWS, Azure, GCP, and M365 while minimizing data movement. It surfaces actionable guidance, such as missing logging or improper configurations, for immediate remediation.

Dynamic Policy Enforcement via Tagging

Sentra’s tagging flows directly into cloud-native services and DLP platforms, powering dynamic, context-aware policy enforcement.

API-First Architecture for Security Automation

With a REST API-first design, Sentra integrates seamlessly with security stacks and enables full customization of workflows, dashboards, and automation pipelines.
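As a rough illustration of what an API-first integration can look like, the sketch below polls an alerts endpoint and forwards each finding to a ticketing webhook. The base URL, endpoint paths, and field names are hypothetical placeholders, not Sentra’s documented API; a real integration should follow the platform’s OpenAPI specification.

```python
import requests

SENTRA_API = "https://api.example-sentra-tenant.com/v1"  # hypothetical base URL
API_TOKEN = "<api token>"                                 # assumption: issued by the platform
headers = {"Authorization": f"Bearer {API_TOKEN}"}

# Hypothetical: fetch open, high-severity data exposure alerts.
alerts = requests.get(
    f"{SENTRA_API}/alerts",
    params={"severity": "high", "status": "open"},
    headers=headers,
    timeout=10,
).json()

for alert in alerts.get("items", []):
    # Hypothetical: forward each finding to an existing ticketing webhook.
    requests.post(
        "https://ticketing.example.com/api/issues",       # placeholder ITSM endpoint
        json={
            "title": f"Sensitive data exposure: {alert.get('dataStore')}",
            "description": alert.get("summary", ""),
            "labels": ["data-security", "automated-remediation"],
        },
        timeout=10,
    )
```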

Why Sentra for Automated Remediation?

Sentra offers a unified platform for security teams that need visibility, precision, and automation at scale. Its advantages include:

  • No agents or connectors required
  • High-accuracy data classification for confident automation
  • Deep integration with leading security and IT platforms
  • Context-rich tagging to drive intelligent enforcement
  • Built-in data discovery that powers proactive policy decisions
  • OpenAPI interface for tailored remediation workflows

These capabilities are particularly valuable for CISOs, Heads of Data Security, and AI Security teams tasked with securing sensitive data in complex, distributed environments. 

Automate Data Remediation and Strengthen Cloud Security

Today’s cloud and AI environments demand more than visibility, they require decisive, automated action. Security leaders can no longer afford to rely on manual processes when sensitive data is constantly in motion.

Sentra delivers the speed, precision, and context required to protect what matters most. By embedding automated remediation into core security workflows, organizations can eliminate blind spots, respond instantly to risk, and ensure compliance at scale.

<blogcta-big>

Read More
Ward Balcerzak
Ward Balcerzak
July 30, 2025
3
Min Read
Data Security

How Sentra is Redefining Data Security at Black Hat 2025

How Sentra is Redefining Data Security at Black Hat 2025

As we move deeper into 2025, the cybersecurity landscape is experiencing a profound shift. AI-driven threats are becoming more sophisticated, cloud misconfigurations remain a persistent risk, and data breaches continue to grow in scale and cost.

In this rapidly evolving environment, traditional security approaches are no longer enough. At Black Hat USA 2025, Sentra will demonstrate how security teams can stay ahead of the curve through data-centric strategies that focus on visibility, risk reduction, and real-time response. Join us on August 4-8 at the Mandalay Bay Convention Center in Las Vegas to learn how Sentra’s platform is reshaping the future of cloud data security.

Understanding the Stakes: 2024’s Security Trends

Recent industry data underscores the urgency facing security leaders. Ransomware accounted for 35% of all cyberattacks in 2024 - an 84% increase over the prior year. Misconfigurations continue to be a leading cause of cloud incidents, contributing to nearly a quarter of security events. Phishing remains the most common vector for credential theft, and the use of AI by attackers has moved from experimental to mainstream.

These trends point to a critical shift: attackers are no longer just targeting infrastructure or endpoints. They are going straight for the data.

Why Data-Centric Security Must Be the Focus in 2025

The acceleration of multi-cloud adoption has introduced significant complexity. Sensitive data now resides across AWS, Azure, GCP, and SaaS platforms like Snowflake and Databricks. However, most organizations still struggle with foundational visibility - not knowing where all their sensitive data lives, who has access to it, or how it is being used.

Sentra’s approach to Data Security Posture Management (DSPM) is built to solve this problem. Our platform enables security teams to continuously discover, identify, classify, and secure sensitive data across their cloud environments, and to do so in real time, without agents or manual tagging.

Sentra at Black Hat USA 2025: What to Expect

At this year’s conference, Sentra will be showcasing how our DSPM and Data Detection and Response (DDR) capabilities help organizations proactively defend their data against evolving threats. Our live demonstrations will highlight how we uncover shadow data across hybrid and multi-cloud environments, detect abnormal access patterns indicating insider threats, and automate compliance mapping for frameworks such as GDPR, HIPAA, PCI-DSS, and SOX. Attendees will also gain visibility into how our platform enables data-aware threat detection that goes beyond traditional SIEM tools.

In addition to product walkthroughs, we’ll be sharing real-world success stories from our customers - including a fintech company that reduced its cloud data risk by 60% in under a month, and a global healthtech provider that cut its audit prep time from three weeks to just two days using Sentra’s automated controls.

Exclusive Experiences for Security Leaders

Beyond the show floor, Sentra will be hosting a VIP Security Leaders Dinner on August 5 - an invitation-only evening of strategic conversations with CISOs, security architects, and data governance leaders. The event will feature roundtable discussions on 2025’s biggest cloud data security challenges and emerging best practices.

For those looking for deeper engagement, we’re also offering one-on-one strategy sessions with our experts. These personalized consultations will focus on helping security leaders evaluate their current DSPM posture, identify key areas of risk, and map out a tailored approach to implementing Sentra’s platform within their environment.

Why Security Teams Choose Sentra

Sentra has emerged as a trusted partner for organizations tackling the challenges of modern data security. We were named a "Customers’ Choice" in the Gartner Peer Insights Voice of the Customer report for DSPM, with a 98% recommendation rate and an average rating of 4.9 out of 5. GigaOm also recognized Sentra as a Leader in its 2024 Radar reports for both DSPM and Data Security Platforms.

More importantly, Sentra is helping real organizations address the realities of cloud-native risk. As security perimeters dissolve and sensitive data becomes more distributed, our platform provides the context, automation, and visibility needed to protect it.

Meet Sentra at Booth 4408

Black Hat USA 2025 offers a critical opportunity for security leaders to re-evaluate their strategies in the face of AI-powered attacks, rising cloud complexity, and increasing regulatory pressure. Whether you are just starting to explore DSPM or are looking to enhance your existing security investments, Sentra’s team will be available for live demos, expert guidance, and strategic insights throughout the event.

Visit us at Booth 4408 to see firsthand how Sentra can help your organization secure what matters most - your data.

Register or Book a Session

<blogcta-big>

Read More
Ron Reiter
Ron Reiter
July 27, 2025
3
Min Read
Data Security

How the Tea App Got Blindsided on Data Security

How the Tea App Got Blindsided on Data Security

A Women‑First Safety Tool - and a Very Public Breach

Tea is billed as a “women‑only” community where users can swap tips, background‑check potential dates, and set red or green flags. In late July 2025 the app rocketed to No. 1 in Apple’s free‑apps chart, boasting roughly four million users and a 900,000‑person wait‑list.

On 25 July 2025 a post on 4chan revealed that anyone could download the contents of an open Google Firebase Storage bucket holding verification selfies and ID photos. Technology reporters quickly verified the report and confirmed the bucket had no authentication or even listing restrictions.

What Was Exposed?

About 72,000 images were taken. Roughly 13,000 were verification selfies that included driver’s license or passport photos; the rest - about 59,000 - were images, comments, and DM attachments from more than two years ago. No phone numbers or email addresses were included, but the IDs and face photos are now mirrored on torrent sites, according to public reports.

What Tea App data was exposed

Tea’s Official Response

On 27 July Tea posted the following notice to its Instagram account:

We discovered unauthorized access to an archived data system. If you signed up for Tea after February 2024, all your data is secure.

This archived system stored about 72,000 user‑submitted images – including approximately 13,000 selfies and selfies that include photo identification submitted during account verification. These photos can in no way be linked to posts within Tea.

Additionally, 59,000 images publicly viewable in the app from posts, comments, and direct messages from over two years ago were accessed. This data was stored to meet law‑enforcement standards around cyberbullying prevention.

We’ve acted fast and we’re working with trusted cyber‑security experts. We’re taking every step to protect this community – now and always.

(Full statement: instagram.com/theteapartygirls)

How Did This Happen?

At the heart of the breach was a single, deceptively simple mistake: the Firebase bucket that stored user images had been left wide open to the internet and even allowed directory‑listing. Whoever set it up apparently assumed that the object paths were obscure enough to stay hidden, but obscurity is never security. Once one curious 4chan user stumbled on the bucket, it took only minutes to write a script that walked the entire directory tree and downloaded everything. The files were zipped, uploaded to torrent trackers, and instantly became impossible to contain. In other words, a configuration left on its insecure default setting turned a women‑safety tool into a privacy disaster.

What Developers and Security Teams Can Learn

For engineering teams, the lesson is straightforward: always start from “private” and add access intentionally. Google Cloud Storage supports Signed URLs and Firebase Auth rules precisely so you can serve content without throwing the doors wide open; using those controls should be the norm, not the exception. Meanwhile, security leaders need to accept that misconfigurations are inevitable and build continuous monitoring around them.
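For Google Cloud specifically, the “private by default” pattern usually means serving objects through short-lived signed URLs rather than opening the bucket. Here is a minimal sketch with the google-cloud-storage client, assuming a service account capable of signing and an illustrative bucket name:

```python
from datetime import timedelta
from google.cloud import storage

client = storage.Client()  # assumes application default credentials are configured
bucket = client.bucket("example-verification-uploads")  # illustrative bucket name

def short_lived_url(object_name: str) -> str:
    """Return a URL granting read access to a single object for 15 minutes."""
    blob = bucket.blob(object_name)
    return blob.generate_signed_url(
        version="v4",
        expiration=timedelta(minutes=15),
        method="GET",
    )

# The app hands this URL to an authenticated user instead of exposing the bucket.
print(short_lived_url("selfies/user-12345.jpg"))
```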

Modern Data Security Posture Management (DSPM) platforms watch for sensitive data, like face photos and ID cards, showing up in publicly readable locations and alert the moment they do. Finally, remember that forgotten backups or “archive” buckets often outlive their creators’ attention; schedule regular audits so yesterday’s quick fix doesn’t become tomorrow’s headline.

How Sentra Would Have Caught This

Had Tea’s infrastructure been monitored by a DSPM solution like Sentra, the open bucket would have triggered an alert long before a stranger found it. Sentra continuously inventories every storage location in your cloud accounts, classifies the data inside so it knows those JPEGs contain faces and government IDs, and correlates that sensitivity with each bucket’s exposure. The moment a bucket flips to public‑read - or worse, gains listing permissions - Sentra raises a high‑severity alert or can even automate a rollback of the risky setting. In short, it spots the danger during development or staging, before the first user uploads a selfie, let alone before a leak hits 4chan. And, in case of a breach (perhaps by an inadvertent insider), Sentra monitors data accesses and movement and can alert when unusual activity occurs.

The Bottom Line


One unchecked permission wiped out the core promise of an app built to keep women safe. This wasn’t some sophisticated breach, it was a default setting left in place, a public bucket no one thought to lock down. A fix that would’ve taken seconds ended up compromising thousands of IDs and faces, now mirrored across the internet.

Security isn’t just about good intentions. Least-privilege storage, signed URLs, automated classification, and regular audits aren’t extras - they’re the baseline. If you’re handling sensitive data and not doing these things, you’re gambling with trust. Eventually, someone will notice. And they won’t be the only ones downloading.

<blogcta-big>

Read More
Ron Reiter
Ron Reiter
July 22, 2025
3
Min Read
Data Security

CVE-2025-53770: A Wake-Up Call for Every SharePoint Customer

CVE-2025-53770: A Wake-Up Call for Every SharePoint Customer

A vulnerability like this doesn’t just compromise infrastructure, it compromises trust. When attackers gain unauthenticated access to SharePoint, they’re not just landing on a server. They’re landing on contracts, financials, customer records, and source code - the very data that defines your business.

The latest zero-day targeting Microsoft SharePoint is a prime example. It’s not only critical in severity - it’s being actively exploited in the wild, giving threat actors a direct path to your most sensitive data.

Here’s what we know so far.

What Happened in the Sharepoint Zero-Day Attack?

On July 20, 2025, CISA confirmed that attackers are actively exploiting CVE-2025-53770, a remote-code-execution (RCE) zero-day that affects on-premises Microsoft SharePoint servers.

The flaw is unauthenticated and rated CVSS 9.8, letting threat actors run arbitrary code and access every file on the server - no credentials required.

Security researchers have tied the exploits to the “ToolShell” attack chain, which steals SharePoint machine keys and forges trusted ViewState payloads, making lateral movement and persistence dangerously easy.

Microsoft has issued temporary guidance (enabling AMSI, deploying Defender AV, or isolating servers) while it rushes a full patch. Meanwhile, CISA has added CVE-2025-53770 to its Known Exploited Vulnerabilities (KEV) catalog and urges immediate mitigations.

Why Exploitation Is Alarmingly Easy

Attackers don’t need stolen credentials, phishing emails, or sophisticated malware. A typical adversary can move from a list of targets to full SharePoint server control in four quick moves:

  1. Harvest likely targets in bulk
    Public scanners like Censys, Shodan, and certificate transparency logs reveal thousands of company domains exposing SharePoint over HTTPS. A few basic queries surface sharepoint.* subdomains or endpoints responding with the SharePoint logo or the X-SharePointHealthScore header.

  2. Check for a SharePoint host
    If a domain like sharepoint.example.com shows the classic SharePoint sign-in page, it’s likely running ASP.NET and listening on TCP 443—indicating a viable target.

  3. Probe the vulnerable endpoint
    A simple GET request to /_layouts/15/ToolPane.aspx?DisplayMode=Edit should return HTTP 200 OK (instead of redirecting to login) on unpatched servers. This confirms exposure to the ToolShell exploit chain.

  4. Send one unauthenticated POST
    The vulnerability lies in how SharePoint deserializes __VIEWSTATE data. With a single forged POST request, the attacker gains full RCE—no login, no MFA, no further interaction.

That’s it. From scan to shell can take under five minutes, which is why CISA urged admins to disconnect public-facing servers until patched.
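For defenders, the probe in step 3 doubles as a quick self-check. Below is a minimal sketch an administrator might run against their own on-prem SharePoint hosts only; the host list is a placeholder, and a 200 response is merely an indicator that the ToolShell mitigations may be missing, not a definitive verdict.

```python
import requests

# Placeholder list of your own SharePoint hosts - probe only systems you are authorized to test.
HOSTS = ["sharepoint.example.com"]
PROBE_PATH = "/_layouts/15/ToolPane.aspx?DisplayMode=Edit"

for host in HOSTS:
    url = f"https://{host}{PROBE_PATH}"
    try:
        resp = requests.get(url, allow_redirects=False, timeout=10)
    except requests.RequestException as exc:
        print(f"{host}: unreachable ({exc})")
        continue
    if resp.status_code == 200:
        print(f"{host}: returns 200 without auth - review mitigations for CVE-2025-53770")
    else:
        print(f"{host}: responded {resp.status_code} (likely redirecting to login)")
```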

Why Data Security Leaders Should Care

SharePoint is where contracts, customer records, and board decks live. An RCE on the platform is a direct path to your crown jewel data:

  • Unbounded blast radius: Compromised machine keys let attackers impersonate any user and exfiltrate sensitive files at scale.
  • Shadow exposure: Even if you patch tomorrow, every document the attacker touched today is already outside your control.
  • Compliance risk: GDPR, HIPAA, SOX, and new AI-safety rules all require provable evidence of what data was accessed and when.

While vulnerability scanners stop at “patch fast,” data security teams need more visibility into what was exposed, how sensitive it was, and how to contain the fallout. That’s exactly what Sentra’s Data Security Posture Management (DSPM) platform delivers.

How Sentra DSPM Neutralizes the Impact of CVE-2025-53770

  • Continuous data discovery & classification: Sentra’s agentless scanner pinpoints every sensitive file - PII, PHI, intellectual-property, even AI model weights - across on-prem SharePoint, SharePoint Online, Teams, and OneDrive. No blind spots.
  • Posture-driven risk mapping: Sentra pinpoints sensitive data sitting on exploitable servers, open to the public, or granted excessive permissions, then automatically routes actionable alerts into your security team’s existing workflow platform.
  • Real-time threat detection: Sentra’s Data Detection and Response (DDR) instantly flags unusual access patterns to sensitive data, enabling your team to intervene before risk turns into breach.
  • Blast-radius analysis: Sentra shows which regulated data could have been accessed during the exploit window - crucial for incident response and breach notifications.
  • Automated workflows: Sentra integrates with Defender, Microsoft Purview, Splunk, CrowdStrike, and all leading SOARs to quarantine docs, rotate machine keys, or trigger legal hold—no manual steps required.
  • Attacker-resilience scoring: Executive dashboards translate SharePoint misconfigurations into dollar-value risk reduction and compliance posture—perfect for board updates after high-profile CVEs.

What This Means for Your Security Team

CVE-2025-53770 won’t be the last time attackers weaponize a collaboration platform you rely on every day. With Sentra DSPM, you know exactly where your sensitive data is, how exposed it is, and how to shrink that exposure continuously.

With Sentra DSPM, you gain more than visibility. You get the ability to map your most sensitive data, detect threats in real time, and respond with confidence - all while proving compliance and minimizing business impact.

It’s not just about patching faster. It’s about defending what matters most: your data.

Get our checklist on how Sentra DSPM helps neutralize SharePoint zero-day risks and protects your most critical data before the next exploit hits.

<blogcta-big>

Read More
Nikki Ralston
David Stuart
July 13, 2025
4
Min Read
Data Security

Securing the Cloud: Advanced Strategies for Continuous Data Monitoring

In today's digital world, data security in the cloud is essential. You rely on popular observability tools to track availability, performance, and usage—tools that keep your systems running smoothly. However, as your data flows continuously between systems and regions, you need a layer of security that delivers granular insights without disrupting performance.

 

Cloud service platforms provide the agility and efficiency you expect; however, they often lack the ability to monitor real-time data movement, access, and risk across diverse environments. 

This blog post explains how cloud data monitoring strategies protect your data while addressing issues like data sprawl, data proliferation, and unstructured data challenges. Along the way, we will share practical information to help you deepen your understanding and strengthen your overall security posture.

Why Real-Time Cloud Monitoring Matters

In the cloud, data does not remain static. It shifts between environments, services, and geographical locations. As you manage these flows, a critical question arises: "Where is my sensitive cloud data stored?" 

Knowing the exact location of your data in real-time is crucial for mitigating unauthorized access, preventing compliance issues, and effectively addressing data sprawl and proliferation. 

Risk of Data Misplacement: When Data Is Stored Outside Approved Environments

Misplaced data refers to information stored outside its approved environment. This can occur when data is in unauthorized or unverified cloud instances or shadow IT systems. Such misplacement heightens security risks and complicates compliance efforts.

 

A simple table can clarify the differences in risk levels and possible mitigation strategies for various data storage environments:

Data Location         | Approved Environment | Risk Level | Example Mitigation Strategy
Authorized Cloud      | Yes                  | Low        | Regular Audits
Shadow IT Systems     | No                   | High       | Immediate remediation
Unsecured File Shares | No                   | Medium     | Enhanced access controls

Risk of Insufficient Monitoring: Gaps in Real-Time Visibility of Rapid Data Movements

The high velocity of data flows in vast cloud environments makes tracking data challenging, and traditional monitoring methods may fall short. 

The rapid data movement means that data proliferation often outstrips traditional monitoring efforts. Meanwhile, the sheer volume, variety, and velocity of data require risk analysis tools that are built for scale. 

Legacy systems typically struggle with these issues, making it difficult for you to maintain up-to-date oversight and achieve a comprehensive security posture. Explore Sentra's blog on data movement risks for additional details.

Limitations of Legacy Data Security Solutions

When evaluating how to manage and monitor cloud data, it’s clear that traditional security tools fall short in today’s complex, cloud-native environments.

Older security solutions (built for the on-prem era!) were designed for static environments, while today's dynamic cloud demands modern, more scalable approaches. Legacy data classification methods, as discussed in this Sentra analysis, also fail to manage unstructured data effectively.

Let’s take a deeper look at their limitations:

  • Inadequate data classification: Traditional data classification often relies on manual processes that fail to keep pace with real-time cloud operations. Manual classification is inefficient and prone to error, making it challenging to quickly identify and secure sensitive information.
    • Such outdated methods particularly struggle with unstructured data management, leaving gaps in visibility.
  • Scalability issues: As your enterprise grows and embraces the cloud, the volume of data you must handle also grows exponentially. When this happens, legacy systems cannot keep up. They lag behind and are slow to respond to potential risks, exposing your company to possible security breaches.
    • Modern requirements for cloud data management and monitoring call for solutions that scale with your business.
  • High operational costs: Maintaining outdated security tools can be expensive. Legacy systems often incur high operational costs due to manual oversight, taxing cloud compute consumption, and inefficient processes. 
    • These costs can escalate quickly, especially compared to cloud-native solutions offering automation, efficiency, and streamlined management.

To address these risks, it's essential to have a strategy that shows you how to monitor data as it moves, ensuring that sensitive files never end up in unapproved environments.

Best Practices for Cloud Data Monitoring and Protection

In an era of rapidly evolving cloud environments, implementing a cohesive cloud data monitoring strategy that integrates actionable recommendations is essential. This approach combines automated data discovery, real-time monitoring, robust access governance, and continuous compliance validation to secure sensitive cloud data and address emerging threats effectively.

Automated Data Discovery and Classification

Implementing an agentless, cloud-native solution enables you to continuously discover and classify sensitive data without any performance drawbacks. Automation significantly reduces manual errors and delivers real-time insights for robust and efficient data monitoring.

Benefits include:

  • Continuous data discovery and classification
  • Fewer manual interventions
  • Real-time risk assessment
  • Lower operational costs through automation
  • Simplified deployment and ongoing maintenance
  • Rapid response to emerging risks with minimal disruption

By adopting a cloud-native data security platform, you gain deeper visibility into your sensitive data without adding system overhead.
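To make the idea concrete, here is a deliberately simple sketch of pattern-based discovery and classification over an S3 bucket. It is illustrative only - a platform like Sentra layers trained models and business context on top of this - and the bucket name and regex patterns are assumptions for the example:

```python
# Minimal, pattern-based discovery/classification sketch (not production-grade).
import re
import boto3

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
}

def classify_bucket(bucket: str, max_bytes: int = 1_000_000) -> None:
    """List objects in a bucket and report which PII patterns each one contains."""
    s3 = boto3.client("s3")
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read(max_bytes)
            text = body.decode("utf-8", errors="ignore")
            hits = [name for name, rx in PII_PATTERNS.items() if rx.search(text)]
            if hits:
                print(f"{obj['Key']}: contains {', '.join(hits)}")

classify_bucket("example-data-lake-bucket")  # placeholder bucket name
```

Even this toy version shows why automation matters: the scan runs continuously and deterministically, where manual tagging drifts out of date within days.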

Real-Time Data Movement Monitoring

To prevent breaches, real-time cloud monitoring is critical. Receiving real-time alerts will empower you to take action quickly and mitigate threats in the event of unauthorized transfers or suspicious activities. 

A well-designed monitoring dashboard can visually display data flows, alert statuses, and remediation actions—all of which provide clear, actionable insights. Alerts can also flow directly to remediation platforms such as ITSM or SOAR systems.

In addition to real-time dashboards, implement automated alerting workflows that integrate with your existing incident response tools. This ensures immediate visibility when anomalies occur for a swift and coordinated response. Continuous monitoring highlights any unusual data movement, helping security teams stay ahead of threats in an environment where data volumes and velocities are constantly expanding.
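As a rough illustration of such an alerting workflow, the snippet below forwards a data-movement finding to an incident-response webhook. The endpoint URL and payload fields are hypothetical placeholders, not a documented Sentra, ITSM, or SOAR API:

```python
# Hedged sketch: push a data-movement alert into an existing IR/SOAR workflow.
import json
import urllib.request

SOAR_WEBHOOK = "https://soar.example.com/api/alerts"  # placeholder endpoint

def send_alert(asset: str, finding: str, severity: str = "high") -> None:
    """POST a minimal alert payload to the team's incident-response webhook."""
    payload = {"asset": asset, "finding": finding, "severity": severity}
    req = urllib.request.Request(
        SOAR_WEBHOOK,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(f"Alert delivered, status {resp.status}")

send_alert(
    asset="s3://example-data-lake-bucket/exports/customers.csv",
    finding="Sensitive file copied to an unapproved region",
)
```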

Robust Access Governance

Only authorized parties should be able to access and utilize sensitive data. Maintain strict oversight by enforcing least privilege access and performing regular reviews. This not only safeguards data but also helps you adhere to the compliance requirements of any relevant regulatory standards.

 

A checklist for robust governance might include:

  • Implementation of role-based and attribute-based access control
  • Periodic access audits
  • Integration with identity management systems
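To make the periodic-audit item above concrete, here is a small sketch that flags S3 buckets whose public-access protections are not fully enabled - the kind of check that can feed an access review. It uses boto3, assumes read permissions on the account, and is not a complete governance audit:

```python
# Periodic access audit sketch: flag buckets without full public-access blocks.
import boto3
from botocore.exceptions import ClientError

def audit_public_access() -> None:
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            open_settings = [k for k, v in cfg.items() if not v]
        except ClientError:
            open_settings = ["no public access block configured"]
        if open_settings:
            print(f"[!] {name}: review required ({', '.join(open_settings)})")

audit_public_access()
```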

Ensuring Compliance and Data Privacy

Adhering to data privacy regulations that apply to your sector or location is a must. Continuous monitoring and proactive validation will help you identify and address compliance gaps before your organization is hit with a security breach or legal violation. Sentra offers actionable steps related to various regulations to solidify your compliance posture.

Integrating automated compliance checks into your security processes helps you meet regulatory requirements. To learn more about scaling your security infrastructure, refer to Sentra’s guide to achieving exabyte-scale enterprise data security.

Beyond tools and processes, cultivating a security-minded culture is critical. Conduct regular training sessions and simulated breach exercises so that everyone understands how to handle sensitive data responsibly. Encouraging active participation and accountability across the organization solidifies your security posture, bridging the gap between technical controls and human vigilance.

Sentra Addresses Cloud Data Monitoring Challenges

Sentra's platform complements your current observability tools, enhancing them with robust data security capabilities. Let’s explore how Sentra addresses common challenges in cloud data monitoring.

Exabyte-Scale Mastery: Navigating Expansive Data Ecosystems

Sentra’s platform is designed to handle enormous data volumes with ease. Its distributed architecture and elastic scaling provide comprehensive oversight and ensure high performance as data proliferation intensifies, regardless of data volume.

Key features:

  • Distributed architecture for high-volume data
  • Elastic scaling for dynamic cloud environments
  • Integration with primary cloud services

Seamless Automation: Transforming Manual Workflows into Continuous Security

By automating data discovery, classification, and monitoring, Sentra eliminates the need for extensive manual intervention. This streamlined approach provides uninterrupted protection and rapid threat response. 

Automation is essential for addressing the challenges of data sprawl without compromising system performance.

Deep Insights & Intelligent Validation: Harnessing Context for Proactive Risk Detection

Sentra distinguishes itself by providing deep contextual analysis of your data. Its intelligent validation process efficiently detects anomalies and prioritizes risks, enabling precise and proactive remediation. 

This capability directly addresses the primary concern of achieving continuous, real-time monitoring and ensuring precise, efficient data protection.

Unified Security: Integrating with your Existing Systems for Enhanced Protection

One of the most significant advantages of Sentra's platform is its seamless integration with your current SIEM and SOAR tools. This unified approach allows you to maintain excellent observability with your trusted systems while benefiting from enhanced security measures without any operational disruption.

Conclusion

Effective cloud data monitoring is achieved by blending the strengths of your trusted observability tools with advanced security measures. By automating data discovery and classification, establishing real-time monitoring, and enforcing robust access governance, you can safeguard your data against emerging threats. 

Elevate your operations with an extra layer of automated, cloud-native security that tackles data sprawl, proliferation, and compliance challenges. After carefully reviewing your current security and identifying any gaps, invest in modern tools that provide visibility, protection, and resilience.

Maintaining cloud security is a continuous task that demands vigilance, innovation, and proactive decision-making. Integrating solutions like Sentra's platform into your security framework will offer robust, scalable protection that evolves with your business needs. The future of your data security is in your hands, so take decisive steps to build a safer, more secure cloud environment.

<blogcta-big>

Read More
Asaf Kochan
July 9, 2025
3
Min Read
Data Security

Data Security in 2025: Why DSPM Is Now a Business Imperative

At RSAC 2025, I had the opportunity to speak with Adrian Sanabria about one of the most pressing and complex challenges facing security teams today: data security. Since then, the urgency around the future of data security has only intensified.

We're watching a major inflection point unfold across industries. Organizations are generating and storing more data than ever, while simultaneously adopting AI at a pace that outstrips most security programs. At the same time, regulators are enforcing data privacy with increasing sharpness. These trends all converge on one critical question:

 

Do you know where your sensitive data is - and who can access it?

If the answer is no, then it's time to rethink your approach.

Data Is Now the Most Valuable - and Most Volatile - Asset

For years, security tools have operated largely without visibility into the data itself. We've focused on endpoints, perimeters, and identities - all essential layers. But in 2025, that’s no longer sufficient.

Data is now the most valuable - and most volatile - asset most companies have. We’re seeing this in breach investigations, where the root cause often traces back to unmonitored or duplicated sensitive data left in the wrong place. We're seeing it in AI deployments, where teams rush to fine-tune models or deploy copilots without knowing what's inside the datasets they’re exposing. And we’re certainly seeing it in regulatory fines, many of which stem from nothing more than storing customer data longer than necessary, in the wrong place, or in unsecured formats.

What all of this underscores is a simple truth: you can’t protect what you can’t see.

The Role of DSPM in the Future of Data Security

At Sentra, we’ve built our platform around a core philosophy: Data Security Posture Management (DSPM) is not just a security tool - it’s the future of data security and an enabler of responsible innovation. The foundation starts with sensitive data discovery. Most organizations are surprised by how much sensitive data exists outside expected systems - in backups, temporary stores, or SaaS apps that were never properly offboarded. From there, classification adds context. It’s not enough to label something as “PII”; we need to understand how sensitive it is, who owns it, how it is being used, and how it should be governed.

We built Sentra as a cloud-native solution from day one. That means it works across IaaS, SaaS, PaaS, and even on-prem environments without needing agents or pulling data outside the customer’s environment. That last point is non-negotiable for us. As a security company, we believe strongly that extracting customer data for analysis creates unnecessary risk and liability.

To support classification at scale, especially for unstructured data, we developed our own language models using open-source LLMs. This provides the deep contextual understanding needed to accurately label large volumes of data all while maintaining cost efficiency and avoiding unnecessary compute overhead.

AI, Risk, and Responsibility in Data Security

One of the biggest shifts we’re seeing in the market is how AI has elevated data security from a technical concern to a boardroom issue. Security teams are now being asked to approve large-scale data usage for AI training, RAG systems, copilots, and internal assistants. But very few have the tools to answer basic questions about what’s in those datasets.

I’ve worked with customers who only realized after deploying AI that they had been exposing medical records, credentials, or confidential meeting data to the model. Once it’s in, you can’t pull it back. That’s why data classification and risk detection must come before any AI integration.

This is precisely the use case we had in mind when we built Sentra’s Data Security for AI Module. It helps teams scan, assess, and verify the contents of data before it ever touches a model. The goal isn’t to slow down innovation - it’s to make it safer, auditable, and repeatable.

Proactive Risk Management Helps Enterprises Ship Faster

One of the most exciting developments we’ve seen for the future of data security is how quickly Sentra’s data security platform becomes a strategic asset for enterprise data risk management. Time to value is fast - in many cases, our customers discover major data risks just days after deployment. But beyond those early wins, the real power lies in alignment.

When security leaders can map data to risk, compliance, and governance frameworks, and do so continuously, they’re no longer operating reactively. They’re enabling the business, helping teams ship faster with fewer unknowns, and building trust around how AI and data are managed.

At scale, this kind of maturity is the difference between organizations that can confidently embrace generative AI and those that will always be playing catch-up.

A Final Word

From my time in the Israeli Defense Forces and Unit 8200 to helping enterprises build modern security programs, I’ve seen one truth over and over again: data left behind is data exposed. The volume may grow, the threats may change, but this principle doesn’t.

In 2025, securing data is no longer an aspiration, it’s a baseline. Whether you’re preparing for your next AI initiative, facing regulatory audits, or just trying to get visibility into sprawling cloud environments, DSPM should be your first step. At Sentra, we’re proud to help lead this change. And we believe the organizations that take control of their data today will be the ones best positioned to lead tomorrow.

<blogcta-big>

Read More
Team Sentra
July 2, 2025
3
Min Read
Data Security

Data Blindness: The Hidden Threat Lurking in Your Cloud

“If you don’t know where your sensitive data is, how can you protect it?”

It’s a simple question, but for many security and compliance teams, it’s nearly impossible to answer. When a Fortune 500 company recently paid millions in fines due to improperly stored customer data on an unmanaged cloud bucket, the real failure wasn’t just a misconfiguration. It was a lack of visibility.

Some in the industry are starting to refer to this challenge as "data blindness".

What Is Data Blindness?

Data Blindness refers to an organization’s inability to fully see, classify, and understand the sensitive data spread across its cloud, SaaS, and hybrid environments.

It’s not just another security buzzword. It’s the modern evolution of a very real problem: traditional data protection methods weren’t built for the dynamic, decentralized, and multi-cloud world we now operate in. Legacy DLP tools or one-time audits simply can’t keep up.

Unlike general data security issues, Data Blindness speaks to a specific kind of operational gap: you can’t protect what you can’t see, and most teams today are flying partially blind.

Why Data Blindness Is Getting Worse

What used to be a manageable gap in visibility has now escalated into a full-scale operational risk. As organizations accelerate cloud adoption and embrace SaaS-first architectures, the complexity of managing sensitive data has exploded. Information no longer lives in a few centralized systems, it’s scattered across AWS, Azure, and GCP instances, and a growing stack of SaaS tools, each with its own storage model, access controls, and risk profile.

At the same time, shadow data is proliferating. Sensitive information ends up in collaboration platforms, forgotten test environments, and unsanctioned apps - places that rarely make it into formal security inventories. And with the rise of generative AI tools, a new wave of unstructured content is being created and shared at scale, often without proper visibility or retention controls in place.

To make matters worse, many organizations are still operating with outdated identity and access frameworks. Stale permissions and misconfigured policies allow unnecessary access to critical data, dramatically increasing the potential impact of both internal mistakes and external breaches.

In short, the cloud hasn’t just moved the data, it’s multiplied it, fragmented it, and made it harder than ever to track. Without continuous, intelligent visibility, data blindness becomes the default.

The Hidden Risks of Operating Blind

When teams don’t have visibility into where sensitive data lives or how it moves, the consequences stack up quickly:

  • Compliance gaps: Regulations like GDPR, HIPAA, and PCI-DSS demand accurate data inventories, privacy adherence, and prompt response to DSARs. Without visibility, you risk fines and legal exposure.

  • Breach potential: Blind spots become attack vectors. Misplaced data, overexposed buckets, or forgotten environments are easy targets.

  • Wasted resources: Scanning everything (just in case) is expensive. Without prioritization, teams waste cycles on low-risk data.

  • Trust erosion: Customers expect you to know where their data is and how it’s protected. Data blindness isn’t a good look.

Do You Have Data Blindness? Here Are the Signs

  • Your security team can’t confidently answer, “Where is our most sensitive data and who has access to it?”

  • Data inventories are outdated, or built on manual tagging and spreadsheets.

  • You’re still relying on legacy DLP tools with poor context and high false positives.

  • Incident response is slow because it’s unclear what data was touched or how sensitive it was.

Sound familiar? You’re not alone.

Breaking Free from Data Blindness

Solving data blindness starts with visibility, but real progress comes from turning that visibility into action. Modern organizations need more than one-off audits or static reports. They need continuous data discovery that scans cloud, SaaS, and on-prem environments in real time, keeping up with the constant movement of data.

But discovery alone isn’t enough. Classification must go beyond content analysis, it needs to be context-aware, taking into account where the data lives, who has access to it, how it’s used, and why it matters to the business. Visibility must extend to both structured and unstructured data, since sensitive information often hides in documents, PDFs, chat logs, and spreadsheets. And finally, insights need to be integrated into existing security and compliance workflows. Detection without action is just noise.

How Sentra Solves Data Blindness

At Sentra, we give security and privacy teams the visibility and context they need to take control of their data - without disrupting operations or moving it out of place. Our cloud-native DSPM (Data Security Posture Management) platform scans and classifies data in-place across cloud, SaaS, and on-prem environments, with no agents or data removal required.

Sentra uses AI-powered, context-rich classification to achieve over 95% accuracy, helping teams identify truly sensitive data and prioritize what matters most. We provide full coverage of structured and unstructured sources, along with real-time insights into risk exposure, access patterns, and regulatory posture, all with a cost-efficient scanning model that avoids unnecessary compute usage.

One customer reduced their shadow data footprint by 30% in just a few weeks, eliminating blind spots that their legacy tools had missed for years. That’s the power of visibility, backed by context, at scale.

The Bottom Line: Awareness Is Step One

Data Blindness is real, but it’s also solvable. The first step is acknowledging the problem. The next is choosing a solution that brings your data out of the dark, without slowing down your teams or compromising security.

If you’re ready to assess your current exposure or just want to see what’s possible with modern data security, you can take a free data blindness assessment, or talk to our experts to get started.

<blogcta-big>

Read More
Yoav Regev
June 12, 2025
3
Min Read
Data Security

Why Sentra Was Named Gartner Peer Insights Customer Choice 2025

When we started Sentra three years ago, we had a hypothesis: organizations were drowning in data they couldn't see, classify, or protect. What we didn't anticipate was how brutally honest our customers would be about what actually works, and what doesn't.

This week, Gartner named Sentra a "Customer's Choice" in their Peer Insights Voice of the Customer report for Data Security Posture Management. The recognition is based on over 650 verified customer reviews, giving us a 4.9/5 rating with 98% willing to recommend us.

The Accuracy Obsession Was Right

The most consistent theme across hundreds of reviews? Accuracy matters more than anything else.

"97.4% of Sentra's alerts in our testing were accurate! By far the highest percentage of any of the DSPM platforms that we tested."

"Sentra accurately identified 99% of PII and PCI in our cloud environments with minimal false positives during the POC."

But customers don't just want data discovery—they want trustworthy data discovery. When your DSPM tool incorrectly flags non-sensitive data as critical, teams waste time investigating false leads. When it misses actual sensitive data, you face compliance gaps and real risk. The reviews validate what we suspected: if security teams can't trust your classifications, the tool becomes shelf-ware. Precision isn't a nice-to-have—it's everything.

How Sentra Delivers Time-to-Value

Another revelation: customers don't just want fast deployment, they want fast insights.

"Within less than a week we were getting results, seeing where our sensitive data had been moved to."

"We were able to start seeing actionable insights within hours."

I used to think "time-to-value" was a marketing term. But when you're a CISO trying to demonstrate ROI to your board, or a compliance officer facing an audit deadline, every day matters. Speed isn’t a luxury in security, it’s a necessity. Data breaches don't wait for your security tools to finish their months-long deployment cycles. Compliance deadlines don't care about your proof-of-concept timeline. Security teams need to move at the speed of business risk.

The Honesty That Stings (And Helps)

But here's what really struck me: our customers were refreshingly honest about our shortcomings.

"The chatbot is more annoying than helpful."

"Currently there is no SaaS support for something like Salesforce."

"It's a startup so it has all the advantages and disadvantages that those come with."

As a founder, reading these critiques was... uncomfortable. But it's also incredibly valuable. Our customers aren't just users, they're partners in our product evolution. They're telling us exactly where to invest our engineering resources.

The Salesforce integration requests, for instance, showed up in nearly every "dislike" section. Message received. We're shipping SaaS connectors specifically because it’s a top priority for our customers.

What Gartner Customer Choice Trends Reveal About the DSPM Market

Analyzing 650 reviews across 9 vendors revealed something fascinating about our market's maturity. Customers aren't just comparing features, they're comparing outcomes.

The traditional data security playbook focused on coverage: "How many data sources can you scan?" But customers are asking different questions:

  • How accurate are your findings?
  • How quickly can I act on your insights?
  • How much manual work does this actually eliminate?

This shift from inputs to outcomes suggests the DSPM market is maturing rapidly. 

The Gartner Voice of the Customer Validated

Perhaps the most meaningful insight came from what customers didn't say. I expected more complaints about deployment complexity, integration challenges, or learning curves. Instead, review after review mentioned how quickly teams became productive with Sentra.

"It was also the fastest set up."

"Quick setup and responsive support."

"The platform is intuitive and offers immediate insights."

This tells me we're solving a real problem in a way that feels natural to security teams. The best products don't just work, they feel inevitable once you use them.

The Road Ahead: Learning from Gartner Choice Recognition

These reviews crystallized our 2025 roadmap priorities:

1. SaaS-First Expansion: Every customer asked for broader SaaS coverage. We're expanding beyond IaaS to support the applications where your most sensitive data actually lives. Our mission is to secure data everywhere.

2. AI Enhancement: Our classification engine is industry-leading, but customers want more. We're building contextual AI that doesn't just find data, it understands data relationships and business impact.

3. Remediation Automation: Customers love our visibility but want more automated remediation. We're moving beyond recommendations to actual risk mitigation.

A Personal Thank You

To the customers who contributed to our Sentra Gartner Peer Insights success: thank you. Building a startup is often a lonely journey of best guesses and gut instincts. Your feedback is the compass that keeps us pointed toward solving real problems.

To the security professionals reading this: your honest feedback (both praise and criticism) makes our products better. If you're using Sentra, please keep telling us what's working and what isn't. If you're not, I'd love to show you what earned us Customer Choice 2025 recognition and why 98% of our customers recommend us.

The data security landscape is evolving rapidly. But with customers as partners and recognition like Gartner Peer Insights Customer Choice 2025, I'm confident we're building tools that don't just keep up with threats, they help organizations stay ahead of them.

<blogcta-big>

Read More
Yogev Wallach
June 11, 2025
5
Min Read
AI and ML

Secure AI Adoption for Enterprise Data Protection: Are You Prepared?

In today’s fast-moving digital landscape, enterprise AI adoption presents a fascinating paradox for leaders: AI isn’t just a tool for innovation; it’s also a gateway to new security challenges. Organizations are walking a tightrope: Adopt AI to remain competitive, or hold back to protect sensitive data.
With nearly two-thirds of security leaders even considering a ban on AI-generated code due to potential security concerns, it’s clear that this tension is creating real barriers to AI adoption.

A data-first security approach provides solid guarantees for enterprises to innovate with AI safely. Since AI thrives on data - absorbing it, transforming it, and creating new insights - the key is to secure the data at its very source.

Let’s explore how data security for AI can build robust guardrails throughout the AI lifecycle, allowing enterprises to pursue AI innovation confidently.

Data Security Concerns with AI

Every AI system is only as strong as its weakest data link. Modern AI models rely on enormous data sets for both training and inference, expanding the attack surface and creating new vulnerabilities. Without tight data governance, even the most advanced AI models can become entry points for cyber threats.

How Does AI Store And Process Data?

The AI lifecycle includes multiple steps, each introducing unique vulnerabilities. Let’s consider the three main high-level stages in the AI lifecycle:

  • Training: AI models extract and learn patterns from data, sometimes memorizing sensitive information that could later be exposed through various attack vectors.
  • Storage: Security gaps can appear in model weights, vector databases, and document repositories containing valuable enterprise data.
  • Inference: This prediction phase introduces significant leakage risks, particularly with retrieval-augmented generation (RAG) systems that dynamically access external data sources.

Data is everywhere in AI. And if sensitive data is accessible at any point in the AI lifecycle, ensuring complete data protection becomes significantly harder.

AI Adoption Challenges

Reactive measures just won’t cut it in the rapidly evolving world of AI. Proactive security is now a must. Here’s why:

  1. AI systems evolve faster than traditional security models can adapt.

New AI models (like DeepSeek and Qwen) are popping up constantly, each introducing novel attack surfaces and vulnerabilities that can change with every model update.

Legacy security approaches that merely react to known threats simply can't keep pace, as AI demands forward-thinking safeguards.

  2. Reactive approaches usually try to remediate at the last second.

Reactive approaches usually rely on low-latency inline monitoring of AI output - the last step in a chain of failures that lead to data loss and exfiltration, and the hardest point at which to prevent data-related incidents.

Instead, data security posture management (DSPM) for AI addresses the issue at its source, mitigating and remediating sensitive data exposure and enforcing a least-privilege, multi-layered approach from the outset.

  3. AI adoption is highly interoperable, expanding risk surfaces.

Most enterprises now integrate multiple AI models, frameworks, and environments (on-premise AI platforms, cloud services, external APIs) into their operations. These AI systems dynamically ingest and generate data across organizational boundaries, challenging consistent security enforcement without a unified approach.

Traditional security strategies, which only respond to known threats, can’t keep pace. Instead, a proactive, data-first security strategy is essential. By protecting information before it reaches AI systems, organizations can ensure AI applications process only properly secured data throughout the entire lifecycle and prevent data leaks before they materialize into costly breaches.

Of course, you should not stop there: You should also extend the data-first security layer to support multiple AI-specific controls (e.g., model security, endpoint threat detection, access governance).

What Are the Security Concerns with AI for Enterprises?

Unlike conventional software, AI systems continuously learn, adapt, and generate outputs, which means new security risks emerge at every stage of AI adoption. Without strong security controls, AI can expose sensitive data, be manipulated by attackers, or violate compliance regulations.

For organizations pursuing AI for organization-wide transformation, understanding AI-specific risks is essential:

  • Data loss and exfiltration: AI systems essentially share information contained in their training data and RAG knowledge sources and can act as a “tunnel” through existing data access governance (DAG) controls, with the ability to find and output sensitive data that the user is not authorized to access.
    In addition, Sentra’s rich best-of-breed sensitive data detection and classification empower AI to perform DLP (data loss prevention) measures autonomously by using sensitivity labels.
  • Compliance & privacy risks: AI systems that process regulated information without appropriate controls create substantial regulatory exposure. This is particularly true in heavily regulated sectors like healthcare and financial services, where penalties for AI-related data breaches can reach millions of dollars.
  • Data poisoning: Attackers can subtly manipulate training and RAG data to compromise AI model performance or introduce hidden backdoors, gradually eroding system reliability and integrity.
  • Model theft: Proprietary AI models represent significant intellectual property investments. Inadequate security can leave such valuable assets vulnerable to extraction, potentially erasing years of AI investment advantage.
  • Adversarial attacks: These increasingly prevalent threats involve strategic manipulations of AI model inputs designed to hijack predictions or extract confidential information. Adequate machine learning endpoint security has become non-negotiable.

All these risks stem from a common denominator: a weak data security foundation allowing for unsecured, exposed, or manipulated data.

The solution? A strong data security posture management (DSPM) coupled with comprehensive visibility into the AI assets in the system and the data they can access and expose. This will ensure AI models only train on and access trusted data, interact with authorized users and safe inputs, and prevent unintended exposure.

AI Endpoint Security Risks

Organizations seeking to balance innovation with security must implement strategic approaches that protect data throughout the AI lifecycle without impeding development.

Choosing an AI security solution: ‘DSPM for AI’ vs. AI-SPM

When evaluating security solutions for AI implementation, organizations typically consider two primary approaches:

  • Data security posture management (DSPM) for AI implements data-related AI security features while extending capabilities to encompass broader data governance requirements. ‘DSPM for AI’ focuses on securing data before it enters any AI pipeline and the identities that are exposed to it through Data Access Governance. It also evaluates the security posture of the AI in terms of data (e.g., a CoPilot with access to sensitive data, that has public access enabled).
  • AI security posture management (AI-SPM) focuses on securing the entire AI pipeline, encompassing models and MLOps workflows. AI-SPM features include AI training infrastructure posture (e.g., the configuration of the machine on which training runs) and AI endpoint security.

While both have merits, ‘DSPM for AI’ offers a more focused safety net earlier in the failure chain by protecting the very foundation on which AI operates - data. Its key functionalities include data discovery and classification, data access governance, real-time leakage and anomalous “data behavior” detection, and policy enforcement across both AI and non-AI environments.

Best Practices for AI Security Across Environments

AI security frameworks must protect various deployment environments—on-premise, cloud-based, and third-party AI services. Each environment presents unique security challenges that require specialized controls.

On-Premise AI Security

On-premise AI platforms handle proprietary or regulated data, making them attractive for sensitive use cases. However, they require stronger internal security measures to prevent insider threats and unauthorized access to model weights or training data that could expose business-critical information.

Best practices:

  • Encrypt AI data at multiple stages—training data, model weights, and inference data. This prevents exposure even if storage is compromised.
  • Set up role-based access control (RBAC) to ensure only authorized parties can gain access to or modify AI models.
  • Perform AI model integrity checks to detect any unauthorized modifications to training data or model parameters (protecting against data poisoning).
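As a minimal illustration of the integrity-check idea, the sketch below hashes a model-weights file and compares it against a digest recorded at release time. The file path and expected digest are placeholders:

```python
# Model integrity check sketch: detect unauthorized changes to stored weights.
import hashlib
from pathlib import Path

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Known-good digest recorded at training/release time (placeholder value).
EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"

actual = sha256_of("models/credit-risk-v3.bin")  # hypothetical weights file
if actual != EXPECTED:
    print("ALERT: model weights differ from the approved build - investigate.")
```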

Cloud-Based AI Security

While home-grown cloud AI services offer enhanced abilities to leverage proprietary data, they also expand the threat landscape. Since AI services interact with multiple data sources and often rely on external integrations, they can lead to risks such as unauthorized access, API vulnerabilities, and potential data leakage.  

Best practices:

  • Follow a zero-trust security model that enforces continuous authentication for AI interactions, ensuring only verified entities can query or fine-tune models.
  • Monitor for suspicious activity via audit logs and endpoint threat detection to prevent data exfiltration attempts.
  • Establish robust data access governance (DAG) to track which users, applications, and AI models access what data.
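One simple way to act on those audit logs is to baseline per-identity read volumes and flag outliers. The toy example below assumes a list of already-parsed log events with made-up field names; real detection (as in Sentra's DDR) is far more nuanced:

```python
# Toy anomaly check over parsed audit-log events (illustrative only).
from collections import Counter

def flag_anomalous_readers(events: list[dict], threshold: float = 2.0) -> list[str]:
    """Flag identities whose read count exceeds `threshold` times the average."""
    reads = Counter(e["identity"] for e in events if e.get("action") == "read")
    if not reads:
        return []
    avg = sum(reads.values()) / len(reads)
    return [who for who, count in reads.items() if count > threshold * avg]

# Hypothetical parsed events: one service identity reading far above its peers.
sample = (
    [{"identity": "svc-ai-agent", "action": "read"}] * 500
    + [{"identity": "analyst-1", "action": "read"}] * 20
    + [{"identity": "analyst-2", "action": "read"}] * 15
)
print(flag_anomalous_readers(sample))  # ['svc-ai-agent']
```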

Third-Party AI & API Security

Third-party AI models (like OpenAI's GPT, DeepSeek, or Anthropic's Claude) offer quick wins for various use cases. Unfortunately, they also introduce shadow AI and supply chain risks that must be managed due to a lack of visibility.

Best practices:

  • Restrict sensitive data input to third-party AI models using automated data classification tools.
  • Monitor external AI API interactions to detect if proprietary data is being unintentionally shared.
  • Implement AI-specific DSPM controls to ensure that third-party AI integrations comply with enterprise security policies.
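To illustrate the first practice above, the sketch below masks obvious PII in a prompt before it is sent to an external model. The patterns are deliberately simple placeholders; a production control would rely on a full classification engine rather than a handful of regexes:

```python
# Pre-send redaction sketch: strip obvious PII from prompts bound for external AI APIs.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b(?:\d[ -]*?){13,16}\b"), "[CARD]"),
]

def redact(prompt: str) -> str:
    """Replace matched sensitive values with placeholder tokens."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

safe_prompt = redact(
    "Summarize the dispute raised by jane.doe@example.com, card 4111 1111 1111 1111."
)
print(safe_prompt)  # "Summarize the dispute raised by [EMAIL], card [CARD]."
```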

Common AI implementation challenges arise when organizations attempt to maintain consistent security standards across these diverse environments. For enterprises navigating a complex AI adoption, a cloud-native DSPM solution with AI security controls offers a solid AI security strategy.

The Sentra platform is adaptable, consistent across environments, and compliant with frameworks like GDPR, CCPA, and industry-specific regulations.

Use Case: Securing GenAI at Scale with Sentra

Consider a marketing platform using generative AI to create branded content for multiple enterprise clients—a common scenario facing organizations today.

Challenges:

  • AI models processing proprietary brand data require robust enterprise data protection.
  • Prompt injections could potentially leak confidential company messaging.
  • Scalable security that doesn't impede creative workflows is a must. 

Sentra’s data-first security approach tackles these issues head-on via:

  • Data discovery & classification: Specialized AI models identify and safeguard sensitive information.
Figure 1: A view of the specialized AI models that power data classification at Sentra
  • Data access governance (DAG): The platform tracks who accesses training and RAG data, and when, establishing accountability and controlling permissions at a granular level.  In addition, access to the AI agent (and its underlying information) is controlled and minimized.
  • Real-time leakage detection: Sentra’s best-of-breed data labeling engine feeds internal DLP mechanisms that are part of the AI agents (as well as external 3rd-party DLP and DDR tools).  In addition, Sentra monitors the interaction between the users and the AI agent, allowing for the detection of sensitive outputs, malicious inputs, or anomalous behavior.
  • Scalable endpoint threat detection: The solution protects API interactions from adversarial attacks, securing both proprietary and third-party AI services.
  • Automated security alerts: Sentra integrates with ServiceNow and Jira for rapid incident response, streamlining security operations.

The outcome: Sentra provides a scalable DSPM solution for AI that secures enterprise data while enabling AI-powered innovation, helping organizations address the complex challenges of enterprise AI adoption.

Takeaways

AI security starts at the data layer - without securing enterprise data, even the most sophisticated AI implementations remain vulnerable to attacks and data exposure. As organizations develop their data security strategies for AI, prioritizing data observability, governance, and protection creates the foundation for responsible innovation.

Sentra's DSPM provides cutting-edge AI security solutions at the scale required for enterprise adoption, helping organizations implement AI security best practices while maintaining compliance with evolving regulations.

Learn more about how Sentra has built a data security platform designed for the AI era.

<blogcta-big>

Read More
Ward Balcerzak
May 15, 2025
3
Min Read
Data Security

Why I Joined Sentra: A Data Defender’s Journey

After nearly two decades immersed in cybersecurity, spanning Fortune 500 enterprises, defense contractors, manufacturing giants, consulting, and the vendor ecosystem, I’ve seen firsthand how elusive true data security remains. I've built and led data security programs from scratch in some of the world’s most demanding environments. But when I met the team from Sentra, something clicked in a way that’s rare in this industry.

Let me tell you why I joined Sentra and why I’m more excited than ever about the future of data security.

From Visibility to Vulnerability

In every role I've held, one challenge has consistently stood out: understanding data.
Not just securing it but truly knowing what data we have, where it lives, how it moves, how it's used, and who touches it. This sounds basic, yet it’s one of the least addressed problems in security.

Now layer on the proliferation of cloud environments and SaaS sprawl (not to mention the growing number of AI agents). Traditional approaches simply don’t cut it. Most organizations either ignore cloud data discovery altogether or lean on point solutions that can’t scale, lack depth, or require endless manual tuning and triage.

That’s exactly where Sentra shines.

Why Sentra?

When I first engaged with Sentra, what struck me was that this wasn’t another vendor trying to slap a new UI on an old problem. Sentra understands the problem deeply and is solving it holistically across all environments. They’re not just keeping up; they’re setting the pace.

The AI-powered data classification engine at the heart of Sentra’s platform is, quite frankly, the best I’ve seen in the market. It automates what previously required a small army of analysts and does so with an accuracy and scale that’s unmatched. It's not just smart, it’s operationally scalable.

But technology alone wasn’t what sold me. It was the people.
The Sentra founders are visionaries who live and breathe this space. They’re not building in a vacuum, they’re listening to customers, responding to real-world friction, and delivering solutions that security teams will actually adopt. That’s rare. That’s powerful.

And finally, there’s the culture. Sentra radiates innovation, agility, and relentless focus on impact. Every person here knows the importance of their role and how it aligns with our mission. That energy is infectious and it’s exactly where I want to be.

Two Decades. One Mission: Secure the Data.

At Sentra, I’m bringing the scars, stories, and successes from almost 20 years “in the trenches”:

  • Deep experience building and maturing data security programs within highly regulated, high-stakes environments

  • A commitment to the full people-process-technology stack, because securing data isn’t just about tools

  • A background stitching together integrated solutions across silos and toolsets

  • A unique perspective shaped by my time as a practitioner, leader, consultant, and vendor

This blend helps me speak the language of security teams, empathize with their challenges, and design strategies that actually work.

Looking Ahead

Joining Sentra isn’t just the next step in my career; it’s a chance to help lead the next chapter of data security. We’re not here to incrementally improve what exists. We’re here to rethink it. Redefine it. Solve it.

If you’re passionate about protecting what matters most - your data - I’d love to connect.

This is more than a job; it’s a mission. And I couldn’t be prouder to be part of it.

<blogcta-big>

Read More
David Stuart
Meni Besso
May 5, 2025
4
Min Read
Compliance

What the HIPAA Compliance Updates Mean for Your Security

The Health Insurance Portability and Accountability Act (HIPAA) has long been a cornerstone of safeguarding sensitive health information in the U.S., particularly electronic protected health information (ePHI). As healthcare organizations continue to face growing cybersecurity challenges, ensuring the protection of ePHI has never been more critical. 

In response, for the first time in two decades, the U.S. Department of Health and Human Services (HHS) has proposed significant amendments to the HIPAA Security Rule, aimed at strengthening cybersecurity measures across the healthcare sector. These proposed changes are designed to address emerging threats and ensure that healthcare organizations have robust systems in place to protect patient data from unauthorized access and potential breaches. This blog presents the major changes that are coming soon and how you can prepare for them.

Instead of considering compliance as a one-time effort, with Sentra you can monitor your compliance status at any given moment, streamline reporting, and remediate compliance violations instantly.

How Sentra Can Help You Stay Compliant

Sentra’s data security platform equips healthcare organizations with the necessary tools to stay compliant with the new HIPAA Security Rule amendments. By providing continuous monitoring of ePHI data locations and assessing associated risks, Sentra helps organizations maintain full visibility and control over sensitive data.

Key Benefits of Using Sentra for HIPAA Compliance:

  • Automated Data Discovery & Classification: Instantly locate and classify ePHI across cloud and on-prem environments.
  • Real-time Risk Assessment: Continuously assess vulnerabilities and flag security gaps related to HIPAA requirements.
  • Access Control & Encryption Monitoring: Ensure compliance with mandatory MFA, encryption policies, and access termination requirements.
  • Smart Compliance Alerts: Sentra doesn’t just detect generic cloud misconfigurations. Instead, it pinpoints security issues affecting sensitive data, helping teams focus on what truly matters.

Without a solution such as Sentra, organizations waste valuable time manually searching for and classifying sensitive data, diverting key employees from higher-priority security tasks. With Sentra, security teams gain an ongoing, real-time dashboard that ensures efficient compliance and faster risk mitigation.

What You Need to Know About the Proposed HIPAA Security Rule Updates

The latest proposed updates to the HIPAA Security Rule represent some of the most significant changes in years. These updates aim to modernize data protection practices and ensure healthcare organizations are better equipped to handle today’s security challenges. Below are the key highlights compliance and security teams should focus on:

Mandatory Implementation Specifications
All implementation specifications under the HIPAA Security Rule will become mandatory. Covered entities and business associates must now fully comply with all safeguards—no more "addressable" exceptions.

Stricter Encryption Requirements
Encryption of electronic protected health information (ePHI) will be required both at rest and in transit. Organizations must ensure encryption is in place across all systems handling sensitive data.

Required Multifactor Authentication (MFA)
MFA will become mandatory to protect access to ePHI. This added security layer significantly reduces the risk of unauthorized access and credential compromise.

Network Segmentation for Threat Containment
Organizations must implement network segmentation to isolate sensitive systems and limit the spread of cyber threats in the event of a breach.

Timely Termination of Access
Access to ePHI must be revoked within 24 hours when an employee leaves or changes roles. This reduces the risk of insider threats and unauthorized access.
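As a simple illustration of how this could be operationalized, the sketch below cross-checks an HR termination feed against accounts that still hold ePHI access and flags anyone past the 24-hour window. The data sources and field names are hypothetical:

```python
# Hedged sketch: flag ePHI access that survived past the 24-hour termination window.
from datetime import datetime, timedelta, timezone

def overdue_revocations(terminations: list[dict], active_ephi_accounts: set[str]) -> list[str]:
    """Return user IDs terminated more than 24 hours ago that still have ePHI access."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=24)
    return [
        t["user_id"]
        for t in terminations
        if t["user_id"] in active_ephi_accounts and t["terminated_at"] < cutoff
    ]

# Hypothetical inputs pulled from an HR feed and an access inventory.
hr_feed = [{"user_id": "jdoe", "terminated_at": datetime(2025, 1, 2, tzinfo=timezone.utc)}]
ephi_access = {"jdoe", "asmith"}
print(overdue_revocations(hr_feed, ephi_access))  # ['jdoe']
```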

Comprehensive Documentation Requirements
Healthcare organizations must maintain detailed, up-to-date documentation of all security policies, procedures, risk assessments, and incident response plans.

Asset Inventories and Network Mapping
Annual updates to technology asset inventories and network maps will be required to ensure accurate tracking of where and how ePHI is stored and transmitted.

Enhanced Risk Analysis
Organizations must conduct regular, thorough risk assessments to identify vulnerabilities and assess threats across all systems that interact with ePHI.

Stronger Incident Response Plans
Entities must be able to restore lost systems and data within 72 hours after a cyber incident. Regular testing and refinement of incident response protocols will be essential.

Annual Compliance Audits
Healthcare organizations will be required to conduct annual audits of their HIPAA Security Rule compliance, covering all technical and administrative safeguards.

Mandatory Technical Controls
Technical safeguards like anti-malware tools, firewalls, and port restrictions must be in place and regularly reviewed to protect systems from evolving threats.

What’s Next?

The proposed changes to the HIPAA Security Rule are currently in the Notice of Proposed Rulemaking (NPRM) stage, with a 60-day public comment period that opened on January 6, 2025. During this period, stakeholders can provide feedback on the amendments, which may influence the final rule. Organizations should actively monitor the comment period, engage in the feedback process, and stay informed on any potential adjustments before the rule is finalized.

Steps Organizations Should Take Now:

  • Review the proposed changes and understand how they impact your current security posture.
  • Engage in the public comment process to share concerns or recommendations.
  • Start assessing security gaps to align with HIPAA’s evolving compliance requirements.

Conclusion

The new HIPAA compliance amendments represent a major shift in how healthcare organizations must protect electronic Protected Health Information (ePHI). The introduction of enhanced encryption standards, mandatory multi-factor authentication (MFA), and stricter access control measures means organizations must act swiftly to maintain compliance and reduce cybersecurity risks.

Compliance is not just about meeting regulations, it is about efficiency. Organizations relying on manual processes to locate and secure sensitive data waste valuable time and resources, making compliance efforts less effective.

With Sentra, healthcare organizations gain a powerful, automated data security solution that:

  • Eliminates manual data discovery by providing a real-time, continuous inventory of sensitive data.
  • Prioritizes relevant data security risks instead of overwhelming teams with unnecessary alerts.
  • Ensures compliance readiness by automating key processes like access control monitoring and encryption verification.

Now is the time for healthcare organizations to take proactive steps toward compliance. Stay informed, participate in the public comment process, and start implementing security enhancements today.

To learn how Sentra can help your organization achieve HIPAA compliance efficiently, request a demo today and take control of your sensitive data.

<blogcta-big>

Read More
Yoav Regev
April 23, 2025
3
Min Read
Data Security

Your AI Is Only as Secure as Your Data: Celebrating a $100M Milestone

Over the past year, we’ve seen an incredible surge in enterprise AI adoption. Companies across industries are integrating AI agents and generative AI into their operations to move faster, work smarter, and unlock innovation. But behind every AI breakthrough lies a foundational truth: AI is only as secure as the data behind it.

At Sentra, securing that data has always been our mission, not just to prevent breaches and data leaks, but to empower prosperity and innovation with confidence and control.

Data Security: The Heartbeat of Your Organization

As organizations push forward with AI, massive volumes of data - often sensitive, regulated, or business-critical - are being used to train models or power AI agents. Too often, this happens without full visibility or governance.


The explosion of the data security market reflects how critical this challenge has become. At Sentra, we’ve long believed that a Data Security Platform (DSP) must be cloud-native, scalable, and adaptable to real-world enterprise environments. We’ve been proud to lead the way, and our continued growth, especially among Fortune 500 customers, is a testament to the urgency and relevance of our approach.

Scaling for What's Next

With the announcement of our $50 million Series B funding round, bringing our total funding to over $100 million, we’re scaling Sentra to meet the moment. We're building on strong customer momentum, with revenue more than tripling year-over-year, and we’re using this investment to grow our team, strengthen our platform, and continue defining what modern data security looks like.

We’ve always said security shouldn’t slow innovation - it should fuel it. And that’s exactly what we’re enabling.

It's All About the People


At the end of the day, it’s people who build it, scale it, and believe in it. I want to extend a heartfelt thank you to our investors, customers, and, most importantly, our team. It’s all about you! Your belief in Sentra and your relentless execution make everything possible. We couldn’t do it without each and every one of you.

We’re not just building a product, we’re setting the gold standard for data security, because securing your data is the heartbeat of your organization!

Innovation without security isn’t progress. Let’s shape a future where both go together!

<blogcta-big>

Read More
David Stuart
David Stuart
April 3, 2025
3
Min Read
Data Security

The Rise of Next-Generation DSPs

The Rise of Next-Generation DSPs

Recently there has been a significant shift from standalone Data Security Posture Management (DSPM) solutions to comprehensive Data Security Platforms (DSPs). These platforms integrate DSPM functionality, but also encompass access governance, threat detection, and data loss prevention capabilities to provide a more holistic data protection solution. Additionally, the critical role of data in AI and LLM training requires holistic data security platforms that can manage data sensitivity, ensure security and compliance, and maintain data integrity.

This consolidation will improve security effectiveness and help organizations manage the growing complexity of their IT environments. Originally more of governance and compliance tools, DSPs have evolved into a critical necessity for organizations managing sensitive data in sprawling cloud environments. With the explosion of cloud adoption, stricter regulatory landscapes, and the increasing sophistication of cyber threats, DSPs will continue to evolve to address the monumental data scale expected.

DSP Addressing Modern Challenges in 2025

As the threat landscape evolves, DSPs are shifting to address modern challenges. New trends such as AI integration, real-time threat detection, and cloud-native architectures are transforming how organizations approach data security. DSPM is no longer just about ensuring compliance and proper data governance; it’s about mitigating all data risks, monitoring for new threats, and proactively resolving them in real time.

Must-Have DSP Features for 2025

Over the years, Data Security Platforms (DSPs) have evolved significantly, with a range of providers emerging to address the growing need for robust data security in cloud environments. Initially, smaller startups began offering innovative solutions, and in 2024, several of these providers were acquired, signaling the increasing demand for comprehensive data protection. As organizations continue to prioritize securing their cloud data, it's essential to carefully evaluate DSP solutions to ensure they meet key security needs. When assessing DSP options for 2025, certain features stand out as critical for ensuring a comprehensive and effective approach to data security.

Below are the must-have features for any DSP solution in the coming year:

  1. Cloud-Native Architecture

Modern DSPs are built for the cloud and address vast data scale with cloud-native technologies that leverage provider APIs and functions. This allows data discovery and classification to occur autonomously within the customer's cloud environment, leveraging existing compute resources. Agentless approaches reduce administrative burden as well.
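
To make the agentless, API-driven approach concrete, here is a minimal sketch (not Sentra's implementation) that enumerates S3 buckets and samples a few object keys from each using the provider's API. It assumes the boto3 SDK and read-only AWS credentials are already configured; a real discovery pipeline would go further and feed sampled content to classification, but the same API-driven pattern applies.

```python
# Illustrative sketch of agentless, API-driven discovery (not Sentra's
# implementation): enumerate S3 buckets and sample a few object keys from
# each. Assumes the boto3 SDK and read-only AWS credentials are configured.
import boto3
from botocore.exceptions import ClientError

def sample_bucket_objects(max_keys: int = 5) -> dict:
    """Return a small sample of object keys for every bucket we can read."""
    s3 = boto3.client("s3")
    inventory = {}
    for bucket in s3.list_buckets().get("Buckets", []):
        name = bucket["Name"]
        try:
            resp = s3.list_objects_v2(Bucket=name, MaxKeys=max_keys)
            inventory[name] = [obj["Key"] for obj in resp.get("Contents", [])]
        except ClientError as err:
            inventory[name] = [f"<inaccessible: {err.response['Error']['Code']}>"]
    return inventory

if __name__ == "__main__":
    for bucket_name, keys in sample_bucket_objects().items():
        print(bucket_name, keys)
```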

  2. AI-Based Classification

AI has revolutionized data classification, providing context-aware accuracy exceeding 95%. By understanding data in its unique context, AI-driven DSP solutions ensure the right security measures are applied without overburdening teams with false positives.

  3. Anomaly Detection and Real-Time Threat Detection

Anomaly detection, powered by Data Detection and Response (DDR), identifies unusual patterns in data usage to spotlight risks such as ransomware and insider threats. Combined with real-time, data-aware detection of suspicious activities, modern DSP solutions proactively address cloud-native vulnerabilities, stopping breaches before they unfold and ensuring swift, effective action.
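
As a rough illustration of the idea behind volume-based anomaly detection, the sketch below flags access counts that deviate sharply from a historical baseline. The counts, threshold, and simple z-score model are assumptions for illustration, not how any particular DDR engine works.

```python
# Toy illustration of volume-based anomaly detection in the spirit of DDR:
# flag read counts that sit far above a historical baseline. The data,
# threshold, and z-score model are simplified assumptions.
from statistics import mean, stdev

def flag_anomalies(baseline_counts, new_counts, threshold_sigmas=3.0):
    """Return values in new_counts that exceed the baseline mean by N sigmas."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts) or 1.0  # avoid dividing by zero
    return [c for c in new_counts if (c - mu) / sigma > threshold_sigmas]

# Quiet baseline of hourly object reads, then a sudden spike.
baseline = [12, 9, 14, 11, 10, 13, 12, 11]
print(flag_anomalies(baseline, [10, 480]))  # -> [480]
```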

  4. Automatic Labeling

Manual tagging is cumbersome and time-consuming. When evaluating DSP solutions, look for ones that automate data tagging and labeling and integrate seamlessly with Data Loss Prevention (DLP), Secure Access Service Edge (SASE), and governance platforms. This reduces errors and accelerates compliance processes.

  5. Data Zones and Perimeters

As data moves across cloud environments, maintaining control is paramount. Leading DSP solutions monitor data movement, alerting teams when data crosses predefined perimeters or storage zones, ensuring compliance with internal and external policies.

  6. Automatic Remediation and Enforcement

Automation extends to remediation, with DSPs swiftly addressing data risks like excessive permissions or misconfigurations. By enforcing protection policies across cloud environments, organizations can prevent breaches before they occur.

The Business Case for DSP in 2025

Proactive Security

Cloud-native DSP represents a shift from reactive to proactive security practices. By identifying and addressing risks early, and across their entire data estate from cloud to on-premises, organizations can mitigate potential threats and strengthen their security posture.

Regulatory Compliance

As regulations such as GDPR and CCPA continue to evolve, DSPM solutions play a crucial role in simplifying compliance by automating data discovery and labeling. This automation reduces the manual effort required to meet regulatory requirements. In fact, 84% of security and IT professionals consider data protection frameworks like GDPR and CCPA to be mandatory for their industries, emphasizing the growing need for automated solutions to ensure compliance.

The Rise of Gen AI

The rise of Gen AI is expected to be a main theme in 2025. Gen AI is a driver for data proliferation in the cloud and for a transition between legacy data technologies and modern ones that require an updated data security program.

Operational Efficiency

By automating repetitive tasks, DSPM significantly reduces the workload for security teams. This efficiency allows teams to focus on strategic initiatives rather than firefighting. According to a 2024 survey, organizations using DSPM reported a 40% reduction in time spent on manual data management tasks, demonstrating its impact on operational productivity.

Future-Proofing Your Organization with Cloud-Native DSP

To thrive in the evolving security landscape, organizations must adopt forward-looking strategies. Cloud-native DSP tools integrate seamlessly with broader security frameworks, ensuring resilience and adaptability. As technology advances, features like predictive analytics and deeper AI integration will further enhance capabilities.

Conclusion

Data security challenges are only becoming more complex, but new Data Security Platforms (DSPs) provide the tools to meet them head-on. Now is the time for organizations to take a hard look at their security posture and consider how DSPs can help them stay protected, compliant, and trusted. DSPs are quickly becoming essential to business operations, influencing strategic decisions and enabling faster, more secure innovation.

Ready to see it in action?

Request a demo to discover how a modern DSP can strengthen your security and support your goals.

<blogcta-big>

Read More
Meni Besso
Meni Besso
April 3, 2025
3
Min Read
Compliance

The Need for Continuous Compliance

The Need for Continuous Compliance

As compliance breaches rise and hefty fines follow, establishing and maintaining strict compliance has become a top priority for enterprises. However, compliance isn't a one-time or even periodic task, or something you can set and forget. To stay ahead, organizations are embracing continuous compliance - a proactive, ongoing strategy to meet regulatory requirements and uphold security standards.

Let’s explore what continuous compliance is, the advantages it offers, some challenges it may present, and how Sentra can help organizations achieve and sustain it.

What is Continuous Compliance?

Continuous compliance is the ongoing process of monitoring a company’s security practices and applying appropriate controls to ensure they consistently meet regulatory standards and industry best practices. Instead of treating compliance as a one-time task, it involves real-time monitoring and advanced data protection strategies to catch and address non-compliance issues as they happen. It also includes maintaining a complete inventory of where your data is at all times, what risks and security posture are associated with it, and who has access to it. This proactive approach, including continuous compliance testing to verify controls are working effectively, ensures you are always ‘audit ready’ and helps avoid last-minute fixes before audits or cyber attacks. The result is continuous security across the organization.

Why Do Companies Need Continuous Compliance?

Continuous compliance is essential for companies to ensure they are always aligned with industry regulations and standards, reducing the risk of violations and penalties. 

Here are a few key reasons why it's crucial:

  1. Regulatory Changes: Compliance standards frequently evolve. Continuous compliance monitoring ensures companies can adapt quickly to new regulations without major disruptions.
  2. Avoiding Fines and Penalties: Non-compliance can lead to hefty fines and regulatory enforcement, legal actions or even loss of licenses. Staying compliant helps avoid these risks.
  3. Protecting Reputation: Data breaches, especially in industries dealing with sensitive data, can damage a company’s reputation. Continuous compliance helps protect established trust with customers, partners, and stakeholders.
  4. Reducing Security Risks: Many compliance frameworks are designed to enhance data security. Continuous compliance works alongside automated remediation capabilities to ensure a company’s security posture is always up to date, reducing the risk of data breaches.
  5. Operational Efficiency: Automated, continuous compliance monitoring can streamline processes, reducing manual audits and interventions, saving time and resources.

For modern businesses, especially those managing sensitive data in the cloud, a continuous cloud compliance strategy is critical to maintaining a secure, efficient, and trusted operation.

Cost Considerations for Compliance Investments

Investing in continuous compliance can lead to significant long-term savings. By maintaining consistent compliance practices, organizations can avoid the hefty fines associated with non-compliance, minimize resource surges during audits, and reduce the impacts of breaches through early detection. Continuous compliance provides security and financial predictability, often resulting in more manageable and predictable expenses.

In contrast, periodic compliance can lead to fluctuating costs. While expenses may be lower between audits, costs typically spike as audit dates approach. These spikes often result from hiring consultants, deploying temporary tools, or incurring overtime charges. Moreover, gaps between audits increase the risk of undetected non-compliance or security breaches, potentially leading to significant unplanned expenses from fines or mitigation efforts.

When evaluating cost implications, it's crucial to look beyond immediate expenses and consider the long-term financial impact. Continuous compliance not only offers a steadier expenditure pattern but also potential savings through proactive measures. On the other hand, periodic compliance can introduce cost variability and financial uncertainties associated with risk management.

Challenges of Continuous Compliance

  1. Keeping Pace with Technological Advancements: The fast-evolving tech landscape makes compliance a moving target. Organizations need to regularly update their systems to stay in line with new technology, ensuring compliance procedures remain effective. This requires investment in infrastructure that can adapt quickly to these changes. Additionally, keeping up with emerging security risks requires continuous threat detection and response strategies, focusing on real-time compliance monitoring and adaptive security standards to safeguard against new threats.
  2. Data Privacy and Protection Across Borders: Global organizations face the challenge of navigating multiple, often conflicting, data protection regulations. To maintain compliance, they must implement unified strategies that respect regional differences while adhering to international standards. This includes consistent data sensitivity tagging and secure data storage, transfer, and processing, with measures like encryption and access controls to protect sensitive information.
  3. Internal Resistance and Cultural Shifts: Implementing continuous compliance often meets internal resistance, requiring effective change management, communication, and education. Building a compliance-oriented culture, where it’s seen as a core value rather than a box-ticking exercise, is crucial.

Organizations must be adaptable, invest in the right technology, and create a culture that embraces compliance. This both helps meet regulatory demands and also strengthens risk management and security resilience.

How You Can Achieve Continuous Compliance With Sentra

First, Sentra's automated data discovery and classification engine takes a fraction of the time and effort it would take to manually catalog all sensitive data. It’s far more accurate, especially when using a solution that leverages LLMs to classify data with more granularity and rich context. It’s also more responsive to the frequent changes in your modern data landscape.

Sentra can also automate the process of identifying regulatory violations and ensuring adherence to compliance requirements using pre-built policies that update and evolve with compliance changes (including policies that map to common compliance frameworks). It ensures that sensitive data stays within the correct environments and doesn’t travel to regions in violation of retention policies or without data encryption, as sketched below.
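
The snippet below is a hedged illustration of the kind of residency and encryption checks such policies encode, not Sentra's actual policy engine. It assumes boto3, AWS credentials, and a hypothetical list of allowed regions.

```python
# Hedged illustration of the kind of pre-built policy described above (not
# Sentra's policy engine): flag S3 buckets outside an allowed-region list or
# without default encryption. Assumes boto3, AWS credentials, and a
# hypothetical residency policy.
import boto3
from botocore.exceptions import ClientError

ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}  # hypothetical policy

def audit_buckets() -> list:
    s3 = boto3.client("s3")
    findings = []
    for bucket in s3.list_buckets().get("Buckets", []):
        name = bucket["Name"]
        # LocationConstraint is None for buckets in us-east-1
        loc = s3.get_bucket_location(Bucket=name).get("LocationConstraint") or "us-east-1"
        if loc not in ALLOWED_REGIONS:
            findings.append((name, f"stored outside allowed regions: {loc}"))
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError:
            findings.append((name, "no default encryption configured"))
    return findings

if __name__ == "__main__":
    for name, issue in audit_buckets():
        print(f"{name}: {issue}")
```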

In contrast, manually tracking data inventory is inefficient, difficult to scale, and prone to errors and inaccuracies. This often results in delayed detection of risks, which can require significant time and effort to resolve as compliance audits approach.

<blogcta-big>

Read More
Ran Shister
Ran Shister
Sapir Gottdiner
Sapir Gottdiner
March 27, 2025
3
Min Read
Sentra Case Study

Empowering Users to Self-Protect Their Data

Empowering Users to Self-Protect Their Data

In today’s rapidly evolving cybersecurity landscape, protecting sensitive cloud data requires more than deploying advanced security tools; it demands operationalized data security and empowered users. Organizations must reduce alert fatigue, gain visibility into sensitive data exposure, and enable data owners to take action without slowing the business.

In a recent discussion with Sapir Gottdiner, Cyber Security Architect at Global-e, we explored how Global-e approaches cloud data security using automation and Data Security Posture Management (DSPM). Operating across multiple regions and complying with strict regulations such as GDPR, PCI, and SOC 2, Global-e needed a scalable way to identify sensitive data, manage risk, and streamline remediation - without overburdening security teams.

This customer-driven perspective highlights how DSPM-powered automation and user enablement can transform data protection from a reactive process into a proactive, scalable security strategy.

Automating Security Tasks for Efficiency

“One of the primary challenges faced by any security team is keeping pace with the volume of security alerts and the effort required to address them,” said Sapir. Automating tasks that strain limited human resources is crucial for efficiency. For example, sensitive data should only exist in certain controlled environments, as improper data handling can lead to vulnerabilities. By leveraging DSPM, which acts as a validation tool, organizations can automate the detection of sensitive information stored in incorrect locations and initiate remediation processes without human intervention.

Strengthening Sensitive Data Protection

One concern identified in the discussion was data in Microsoft OneDrive that may contain sensitive information and be accessible to unauthorized personnel. To mitigate this, organizations should automate the creation of support tickets (in Jira, for instance) for security incidents, ensuring critical and high-risk alerts are addressed immediately. Assigning these incidents to the relevant departments and data owners ensures accountability and prompt resolution. Additionally, identifying the type and location of sensitive data enables organizations to implement precise fixes, reducing exposure risks.
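
As a sketch of that kind of automation, the snippet below opens a Jira ticket for a high-severity finding via Jira's REST API. The site URL, project key, and credentials are placeholders, and this is an assumed workflow rather than Global-e's actual integration.

```python
# Sketch of auto-creating a Jira ticket for a high-severity data exposure
# finding via Jira's REST API. The site URL, project key, and credentials are
# placeholders; this is an assumed workflow, not Global-e's actual integration.
import requests

JIRA_URL = "https://your-company.atlassian.net"   # placeholder
AUTH = ("security-bot@example.com", "api-token")  # placeholder credentials

def open_security_ticket(summary: str, description: str, project_key: str = "SEC") -> str:
    payload = {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Task"},
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "SEC-123"

# Example: a OneDrive folder with PII is shared with external guests.
# open_security_ticket("Sensitive OneDrive folder shared externally",
#                      "PII detected in a folder accessible to guest accounts.")
```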

Risk Management and Process Improvement

Permissioning is equally important, and organizations must establish clear procedures and policies for managing authentication credentials. In most cases, applying different actions to different levels of risk keeps the business running without interruption: quick access revocation for low-risk cases, with manual verification required for critical credentials.

Furthermore, proper data storage is an important protection factor, given sovereignty regulations, data proliferation, and similar concerns. Implementing well-defined data mapping strategies, systematically applying proper hygiene, and ensuring data resides in the correct locations will minimize security gaps. For the future, Sapir envisions smart data mapping within O365 and deeper integrations with automated remediation workflow tools to further enhance security posture.

Continuous Review and Training

Sapir also suggests that to ensure compliance and effective security management, organizations should conduct monthly security reviews. These reviews help define when to close or suppress alerts, preventing unnecessary effort on minor issues. Additionally, policies should align with infrastructure security and regulatory compliance requirements such as GDPR, PCI, and SOC 2. Expanding security training programs is another essential step, equipping users with knowledge of the proper storage and handling of controlled data and how to avoid common security missteps. Empowering users to self-police and self-remediate allows lean security teams to scale data protection operations more efficiently.

Enhancing Communication and Future Improvements

Operationalizing data security is an ongoing effort that blends automation, process refinement, and user education. As Global-e’s experience shows, empowering users and data owners to self-protect and self-remediate sensitive data allows security teams to scale their impact while maintaining strong compliance and governance.

Since implementing Sentra’s DSPM solution, Global-e has significantly strengthened its cloud data security posture. The organization now has greater visibility into sensitive data exposure, faster remediation workflows, and reduced operational overhead for its security, IT, DevOps, and engineering teams - all while remaining compliant with global regulatory requirements.

By shifting data security closer to the people who create and use the data, and supporting them with the right DSPM tools and automation, organizations can reduce risk, improve efficiency, and build a culture of shared responsibility for data protection. User-driven data security isn’t just more scalable, it’s a competitive advantage.

<blogcta-big>

Read More
Meni Besso
Meni Besso
March 19, 2025
4
Min Read
Data Loss Prevention

Data Loss Prevention for Google Workspace

Data Loss Prevention for Google Workspace

We know that Google Workspace (formerly known as G Suite) and its assortment of services, including Gmail, Drive, Calendar, Meet, Docs, Sheets, Slides, Chat, and Vids, is a powerhouse for collaboration.

But the big question is: Do you know where your Google Workspace data is, whether it's secure, and who has access to it?

While Google Workspace has become an indispensable pillar in cloud operations and collaboration, its widespread adoption introduces significant security risks that businesses simply can't afford to ignore. To optimize Google Workspace data protection, enterprises must know how Google Workspace protects and classifies data. Knowing the scope, gaps, limitations, and silos of Google Workspace data protection mechanisms can help businesses strategize more effectively to mitigate data risks and ensure more holistic data security coverage across multi-cloud estates.

The Risks of Google Workspace Security

As with any dynamic cloud platform, Google Workspace is susceptible to data security risks, the most dangerous of which can do more than just undercut its benefits. Primarily, businesses should be concerned about the exposure of sensitive data nested within large volumes of unstructured data. For instance, if an employee shares a Google Drive folder or document containing sensitive data but with suboptimal access controls, it could snowball into a large-scale data security disaster. 

Without comprehensive visibility into sensitive data exposures across Google Workspace applications, businesses risk serious security threats. Besides sensitive data exposure, these include exploitable vulnerabilities, external attacks, human error, and shadow data. Complex shared responsibility models and unmet compliance policies also loom large, threatening the security of your data. 

To tackle these risks, businesses must prioritize and optimize data security across Google Workspace products while acknowledging that Google is rarely the sole platform an enterprise uses.

How Does Google Store Your Data?

To understand how to protect sensitive data in Google Workspace, it's essential to first examine how Google stores and manages this data. Why? Because the intricacies of data storage architectures and practices have significant implications for your security posture. 

Here are three steps to help you understand and optimize your data storage in Google Workspace:

1. Know Where and How Google Stores Your Data

  • Google stores your files in customized servers in secure data centers.
  • Your data is automatically distributed across multiple regions, guaranteeing redundancy and availability.

2. Control Data Retention

  • Google retains your Workspace data until you or an admin deletes it.
  • Use Google Vault to manage retention policies and set custom retention rules for emails and files.
  • Regularly review and clean up unnecessary stored data to reduce security risks.

3. Secure Your Stored Data

  • Enable encryption for sensitive files in Google Drive.
  • Restrict who can view, edit, and share stored documents by implementing access controls.
  • Monitor data access logs to detect unauthorized access.
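
For the last point, a minimal example of pulling recent Drive access events with the Google Admin SDK Reports API might look like the following; the service account file, admin email, and setup are assumptions you would adapt to your own tenant.

```python
# Assumed setup: a service account with domain-wide delegation, the
# google-api-python-client and google-auth libraries, and an admin user to
# impersonate. File names, scopes, and emails below are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.reports.audit.readonly"]

creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("admin@example.com")  # placeholder admin user

reports = build("admin", "reports_v1", credentials=creds)
response = reports.activities().list(
    userKey="all", applicationName="drive", maxResults=50
).execute()

for activity in response.get("items", []):
    actor = activity.get("actor", {}).get("email")
    for event in activity.get("events", []):
        print(actor, event.get("name"))  # e.g. "view", "download", "edit"
```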

How Does Google Workspace Classify Your Data?

Google’s built-in classification tools are an acceptable starting point. However, they fall short of securing and classifying all unstructured data across complex cloud environments. This is because today's cloud attack surface expands across multiple providers, making security more complex than ever before. Consequently, Google's myopic classification often snowballs into bigger security problems, as data moves. Because of this evolving attack surface across multi-cloud environments, risk-ridden shadow data and unstructured data fester in Google Workspace apps. 

The Issue of Unstructured Data

It’s important to remember that most enterprise data is unstructured. Unstructured data refers to data that isn’t stored in standardized or easily manageable formats. In Google Workspace, this could be data in a Gmail draft, multimedia files in Google Drive, or other informal exchanges of sensitive information between Workspace apps. 

For years, unstructured data has been a nightmare for businesses to map, manage, and secure. Unstructured document stores and employee GDrives are hot zones for data risks. Native Google Drive data classification capabilities can be a useful source of metadata to support a more comprehensive external data classification solution. A cloud-native DSP solution can map, classify, and organize sensitive data, including PHI, PCI, and business secrets, across both Google Workspace and cloud platforms that Google's built-in capabilities do not cover, like AWS S3.

How Does Google Workspace Protect Your Data?

Like its built-in classification mechanisms, Google's baseline security features, such as encryption and access controls, are good for simple use cases but aren't capable enough to fully protect complex environments. 

For both the classification and security of unstructured data, Google’s native tools may not suffice. A robust data loss prevention (DLP) solution should ideally do the trick for unstructured data. However, Google Workspace DLP alone and other protection measures (formerly referred to as G Suite data protection) are unlikely to provide holistic data security, especially in dynamic cloud environments.

Google Native Tool Challenges

Google’s basic protection measures don't tackle the full spectrum of critical Google Workspace data risks because they can't permeate unstructured documents, where sensitive data may reside in various protected states.

For example, an employee's personal Google Drive can potentially house exposed and exploitable sensitive data that can slip through Google's built-in security mechanisms. It’s also important to remember that Google Workspace data loss prevention capabilities do nothing to protect critical enterprise data hosted in other cloud platforms. 

Ultimately, while Google provides some security controls, they alone don’t offer the level of protection that today’s complex cloud environments demand. To close these gaps, businesses must look to complement Google’s built-in capabilities and invest in robust data security solutions.

Only a highly integrable data security tool with advanced AI and ML capabilities can protect unstructured data across Google Workspace’s diverse suite of apps, and further, across the entire enterprise data estate. This has become mandatory since multi-cloud architectures are the norm today.

A Robust Data Security Platform: The Key to Holistic Google Workspace Data Protection 

The speed, complexity, and rapid evolution of multi-cloud and hybrid cloud environments demand more advanced data security capabilities than Google Workspace’s native storage, classification, and protection features provide. 

It is becoming increasingly difficult to mitigate the risks associated with sensitive data.

To successfully remediate these risks, businesses urgently need robust data security posture management (DSPM) and data detection and response (DDR) solutions - preferably all in one platform. There's simply no other way to guarantee comprehensive data protection across Google Workspace. Furthermore, as mentioned earlier, most businesses don't exclusively use Google platforms. They often mix and match services from cloud providers like Google, Azure, and AWS.

In other words, besides limited data classification and protection, Google's built-in capabilities won't be able to extend into other branches of an enterprise's multi-cloud architecture. And having siloed data security tools for each of these cloud platforms increases costs and further complicates administration that can lead to critical coverage gaps. That's why the optimal solution is a holistic platform that can fill the gaps in Google's existing capabilities to provide unified data classification, security, and coverage across all other cloud platforms.

Sentra: The Ultimate Cloud-Agnostic Data Protection and Classification Solution 

To truly secure sensitive data across Google Workspace and beyond, enterprises need a cloud-native data security platform. That’s where Sentra comes in. It hands you enterprise-scale data protection by seamlessly integrating powerful capabilities like data discovery and classification, data security posture management (DSPM), data access governance (DAG), and data detection and response (DDR) into an all-in-one, easy-to-use platform.

By combining rule-based and large language model (LLM)-based classification, Sentra ensures accurate and scalable data security across Workspace apps like Google Drive—as well as data contained in apps from other cloud providers. This is crucial for any enterprise that hosts its data across disparate cloud platforms, not just Workspace. To classify unstructured data across these platforms, Sentra leverages supervised AI training models like BERT. It also uses zero-shot classification techniques to zero in on and accurately classify unstructured data. 
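
For illustration only, a zero-shot pass over a text snippet with an off-the-shelf Hugging Face pipeline looks roughly like this; the model choice and candidate labels below are arbitrary examples, not Sentra's production stack.

```python
# Illustrative only: a zero-shot classification pass using an off-the-shelf
# Hugging Face pipeline. The model and candidate labels are arbitrary
# examples, not Sentra's production stack. Requires the transformers library.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

snippet = "Patient John Doe, DOB 04/12/1988, was prescribed 20mg of Lipitor."
labels = ["protected health information", "payment card data",
          "source code", "marketing copy"]

result = classifier(snippet, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```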

Sentra is particularly useful for anyone asking business-, industry-, or geography-specific data security questions such as “Does Google Workspace have HIPAA compliance frameworks?” and “Is my organization's use of Google Workspace GDPR-compliant?” The short answer to these questions: Integrate Sentra with your Google Workspace apps and you will see. 

Boost Your Google Workspace Data Protection with Sentra

By integrating Sentra with Google Workspace, companies can leverage AI-driven insights to distinguish employee data from customer data, ensuring a clearer understanding of their information landscape. Sentra also identifies customer-specific data types, such as personally identifiable information (PII), protected health information (PHI), product IDs, private codes, and localization requirements. Additionally, it detects toxic data combinations that may pose security risks.

Beyond insights, Sentra provides robust data protection through comprehensive inventorying and classification of unstructured data. It helps organizations right-size permissions, expose shadow data, and implement real-time detection of sensitive data exposure, security breaches, and suspicious activity, ensuring a proactive approach to data security.

No matter where your unstructured data resides, whether in Google Drive or any other cloud service, Sentra ensures it is accurately identified, classified, and protected with over 95% precision.

If you’re ready to take control of your data security, book a demo to discover how Sentra’s AI-driven protection secures your most valuable information across Google Workspace and beyond.

<blogcta-big>

Read More
Ron Reiter
Ron Reiter
March 4, 2025
4
Min Read
AI and ML

AI in Data Security: Guardian Angel or Trojan Horse?

AI in Data Security: Guardian Angel or Trojan Horse?

Artificial intelligence (AI) is transforming industries, empowering companies to achieve greater efficiency, and maintain a competitive edge. But here’s the catch: although AI unlocks unprecedented opportunities, its rapid adoption also introduces complex challenges, especially for data security and privacy.

 

How do you accelerate transformation without compromising the integrity of your data? How do you harness AI’s power without it becoming a threat?

For security leaders, AI presents this very paradox. It is a powerful tool for mitigating risk through better detection of sensitive data, more accurate classification, and real-time response. However, it also introduces complex new risks, including expanded attack surfaces, sophisticated threat vectors, and compliance challenges.

As AI becomes ubiquitous and enterprise data systems become increasingly distributed, organizations must navigate the complexities of the big-data AI era to scale AI adoption safely. 

This article explores the key data security challenges introduced by AI and outlines practical strategies organizations can use to protect sensitive data while safely scaling AI adoption.

AI and Data Security in 2025: Why the Risk Profile Has Changed

AI-driven systems in 2025 operate at a fundamentally different scale than earlier generations. Generative AI, large language models (LLMs), and AI-powered automation now interact directly with sensitive enterprise data across cloud platforms, SaaS tools, and distributed data stores.

Several shifts define the modern AI data security landscape:

  • AI systems increasingly consume and generate sensitive data in real time
  • Employees regularly interact with external and internal AI tools
  • Cloud-native and multi-environment data architectures reduce centralized visibility
  • Threat actors now leverage AI to automate and amplify attacks

As a result, traditional perimeter-based or reactive security approaches are no longer sufficient. Organizations must adopt continuous data discovery, governance, and monitoring to manage AI-driven data risk effectively.

The Emerging Challenges for Data Security with AI

AI-driven systems rely on vast amounts of data, but this reliance introduces significant security risks - both from internal AI usage and external client-side AI applications. As organizations integrate AI deeper into their operations, security leaders must recognize and mitigate the growing vulnerabilities that come with it.

Below, we outline the four biggest AI security challenges that will shape how you protect data and how you can address them.

1. Expanded Attack Surfaces

AI’s dependence on massive datasets, often unstructured and spread across cloud environments, creates an expansive attack surface. This data sprawl increases exposure to adversarial threats, such as model inversion attacks, where bad actors can reverse-engineer AI models to extract sensitive attributes or even re-identify anonymized data.

To put this in perspective, an AI system trained on healthcare data could inadvertently leak protected health information (PHI) if improperly secured. As adversaries refine their techniques, protecting AI models from data leakage must be a top priority.

For a detailed analysis of this challenge, refer to NIST’s report, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations.

2. Sophisticated and Evolving Threat Landscape

The same AI advancements that enable organizations to improve detection and response are also empowering threat actors. Attackers are leveraging AI to automate and enhance malicious campaigns, from highly targeted phishing attacks to AI-generated malware and deepfake fraud.

According to StrongDM's “The State of AI in Cybersecurity Report,” 65% of security professionals believe their organizations are unprepared for AI-driven threats. This highlights a critical gap: while AI-powered defenses continue to improve, attackers are innovating just as fast - if not faster. Organizations must adopt AI-driven security tools and proactive defense strategies to keep pace with this rapidly evolving threat landscape.

3. Data Privacy and Compliance Risks

AI’s reliance on large datasets introduces compliance risks for organizations bound by regulations such as GDPR, CCPA, or HIPAA. Improper handling of sensitive data within AI models can lead to regulatory violations, fines, and reputational damage. One of the biggest challenges is AI’s opacity: in many cases, organizations lack full visibility into how AI systems process, store, and generate insights from data. This makes it difficult to prove compliance, implement effective governance, or ensure that AI applications don’t inadvertently expose personally identifiable information (PII). As regulatory scrutiny on AI increases, businesses must prioritize AI-specific security policies and governance frameworks to mitigate legal and compliance risks.

4. Risk of Unintentional Data Exposure

Even without malicious intent, generative AI models can unintentionally leak sensitive or proprietary data. For instance, employees using AI tools may unknowingly input confidential information into public models, which could then become part of the model’s training data and later be disclosed through the model’s outputs. Generative AI models, especially large language models (LLMs), are particularly susceptible to data extrapolation attacks, where adversaries manipulate prompts to extract hidden information.

Techniques like “divergence attacks” on ChatGPT can expose training data, including sensitive enterprise knowledge or personally identifiable information. The risks are real, and the pace of AI adoption makes data security awareness across the organization more critical than ever.

For further insights, explore our analysis of “Emerging Data Security Challenges in the LLM Era.”

Top 5 Strategies for Securing Your Data with AI

To integrate AI responsibly into your security posture, a proactive approach is essential. Below we outline five key strategies to maximize AI’s benefits while mitigating the risks posed by evolving threats. When implemented holistically, these strategies will empower you to leverage AI’s full potential while keeping your data secure.

1. Data Minimization, Masking, and Encryption

The most effective way to reduce risk exposure is by minimizing sensitive data usage whenever possible. Avoid storing or processing sensitive data unless absolutely necessary. Instead, use techniques like synthetic data generation and anonymization to replace sensitive values during AI training and analysis.

When sensitive data must be retained, data masking techniques, such as name substitution or data shuffling, help protect confidentiality while preserving data utility. However, if data must remain intact, end-to-end encryption is critical. Encrypt data both in transit and at rest, especially in cloud or third-party environments, to prevent unauthorized access.
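
A minimal masking sketch, assuming a simple email masker and a column shuffler, might look like the following; production masking should also be policy-driven and applied before data leaves a controlled environment.

```python
# A minimal masking sketch, assuming a simple email masker and a column
# shuffler. Production masking should be policy-driven and applied before
# data leaves a controlled environment.
import random

def mask_email(email: str) -> str:
    """Keep the domain, hide most of the local part (e.g. j***@example.com)."""
    local, _, domain = email.partition("@")
    return f"{local[:1]}***@{domain}" if domain else "***"

def shuffle_column(values: list) -> list:
    """Break row-level linkage by shuffling a column's values."""
    shuffled = values[:]
    random.shuffle(shuffled)
    return shuffled

print(mask_email("jane.doe@example.com"))                       # j***@example.com
print(shuffle_column(["4111-1111", "5500-0004", "3400-0009"]))  # order randomized
```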

2. Data Governance and Compliance with AI-SPM

Governance and compliance frameworks must evolve to account for AI-driven data processing. AI Security Posture Management (AI-SPM) tools help automate compliance monitoring and enforce governance policies across hybrid and cloud environments. 

AI-SPM tools enable:

  • Automated data lineage mapping to track how sensitive data flows through AI systems.
  • Proactive compliance monitoring to flag data access violations and regulatory risks before they become liabilities.

By integrating AI-SPM into your security program, you ensure that AI-powered workflows remain compliant, transparent, and properly governed throughout their lifecycle.

3. Secure Use of AI Cloud Tools

AI cloud tools accelerate AI adoption, but they also introduce unique security risks. Whether you’re developing custom models or leveraging pre-trained APIs, choosing trusted providers like Amazon Bedrock or Google’s Vertex AI ensures built-in security protections. 

However, third-party security is not a substitute for internal controls. To safeguard sensitive workloads, your organization should:

  • Implement strict encryption policies for all AI cloud interactions.
  • Enforce data isolation to prevent unauthorized access.
  • Regularly review vendor agreements and security guarantees to ensure compliance with internal policies.

Cloud AI tools can enhance your security posture, but always review your AI providers' security guarantees (e.g., OpenAI's security and privacy page) to ensure alignment with your company's security policies.

4. Risk Assessments and Red Team Testing

While offline assessments provide an initial security check, AI models behave differently in live environments—introducing unpredictable risks. Continuous risk assessments are critical for detecting vulnerabilities, including adversarial threats and data leakage risks.

Additionally, red team exercises simulate real-world AI attacks before threat actors can exploit weaknesses. A proactive testing cycle ensures AI models remain resilient against emerging threats.

To maintain AI security over time, adopt a continuous feedback loop, incorporating lessons learned from each assessment to strengthen your AI systems.

5. Organization-Wide AI Usage Guidelines

AI security isn’t just a technical challenge—it’s an organizational imperative. To democratize AI security, companies must embed AI risk awareness across all teams.

  • Establish clear AI usage policies based on zero trust and least privilege principles.
  • Define strict guidelines for data sharing with AI platforms to prevent shadow AI risks.
  • Integrate AI security into broader cybersecurity training to educate employees on emerging AI threats.

By fostering a security-first culture, organizations can mitigate AI risks at scale and ensure that security teams, developers, and business leaders align on responsible AI practices.

Key Takeaways: Moving Towards Proactive AI Security 

AI is transforming how we manage and protect data, but it also introduces new risks that demand ongoing vigilance. By taking a proactive, security-first approach, you can stay ahead of AI-driven threats and build a resilient, future-ready AI security framework.

AI integration is no longer optional for modern enterprises; it is both inevitable and transformative. While AI offers immense potential, particularly in security applications, it also introduces significant risks, especially around data security. Organizations that fail to address these challenges proactively risk increased exposure to evolving threats, compliance failures, and operational disruptions.

By implementing strategies such as data minimization, strong governance, and secure AI adoption, organizations can mitigate these risks while leveraging AI’s full potential. A proactive security approach ensures that AI enhances—not compromises—your overall cybersecurity posture. As AI-driven threats evolve, investing in comprehensive, AI-aware security measures is not just a best practice but a competitive necessity. Sentra’s Data Security Platform provides the necessary visibility and control, integrating advanced AI security capabilities to protect sensitive data across distributed environments.

To learn how Sentra can strengthen your organization’s AI security posture with continuous discovery, automated classification, threat monitoring, and real-time remediation, request a demo today.

<blogcta-big>

Read More
Yoav Regev
Yoav Regev
January 15, 2025
3
Min Read

The Importance of Data Security for Growth: A Blueprint for Innovation

The Importance of Data Security for Growth: A Blueprint for Innovation

“For whosoever commands the sea commands the trade; whosoever commands the trade of the world commands the riches of the world, and consequently the world itself.” — Sir Walter Raleigh.

For centuries, power belonged to those who ruled the seas. Today, power belongs to those who control and harness their data’s potential. But let’s face it—many organizations are adrift, overwhelmed by the sheer volume of data and rushing to keep pace in a rapidly shifting threatscape. Navigating these waters requires clarity, foresight, and the right tools to stay afloat and steer toward success. Sound familiar? 

In this new reality, controlling data now drives success. But success isn’t just about collecting data, it’s about being truly data-driven. For modern businesses, data isn’t just another resource. Data is the engine of growth, innovation, and smarter decision-making.

Yet many leaders still grapple with critical questions:

  • Are you really in control of your data?
  • Do you make decisions based on the insights your data provides?
  • Are you using it to navigate toward long-term success?

In this blog, I’ll explore why mastering your data isn’t just a strategic advantage but the foundation of survival, success, and prosperity in today’s competitive market. I’ll also break down how forward-thinking organizations are using comprehensive Data Security Platforms to navigate this new era where speed, innovation, and security can finally coexist.

The Role of Data in Organizational Success

Data drives innovation, fuels growth, and powers smart decision-making. Businesses use data to develop new products, improve customer experiences, and maintain a competitive edge. But let’s be clear, collecting vast amounts of data isn’t enough. True success comes from securing it, understanding it, and putting it to work effectively.

If you don’t fully understand or protect your data, how valuable can it really be?

Organizations face a constant barrage of threats: data breaches, shadow data, and excessive access permissions. Without strong safeguards, these vulnerabilities don’t just pose risks—they become ticking time bombs.

For years, controlling and understanding your data was impossible—it was a complex, imprecise, expensive, and time-consuming process that required significant resources. Today, for the first time ever, there is a solution. With innovative approaches and cutting-edge technology, organizations can now gain the clarity and control they need to manage their data effectively!

With the right approach, businesses can transform their data management from a reactive process to a competitive advantage, driving both innovation and resilience. As data security demands grow, these tools have evolved into something much more powerful: comprehensive Data Security Platforms (DSPs). Unlike basic solutions, you can expect a data security platform to deliver advanced capabilities such as enhanced access control, real-time threat monitoring, and holistic data management. This all-encompassing approach doesn’t just protect sensitive data—it makes it actionable and valuable, empowering organizations to thrive in an ever-changing landscape.

Building a strong data security strategy starts with visionary leadership. It’s about creating a foundation that not only protects data but enables organizations to innovate fearlessly in the face of uncertainty.

The Three Key Pillars for Securing and Leveraging Data

1. Understand Your Data

The foundation of any data security strategy is visibility. Knowing where your data is stored, who has access to it, and what sensitive information it contains is essential. Data sprawl remains a challenge for many organizations. The latest tools, powered by automation and intelligence, provide unprecedented clarity by discovering, classifying, and mapping sensitive data. These insights allow businesses to make sharper, faster decisions to protect and harness their most valuable resource.

Beyond discovery, advanced tools continuously monitor data flows, track changes, and alert teams to potential risks in real-time. With a complete understanding of their data, organizations can shift from reactive responses to proactive management.

2. Control Your Data

Visibility is the first step; control is the next. Managing access to sensitive information is critical to minimizing risk. This involves identifying overly broad permissions and ensuring that access is granted only to those who truly need it.

Having full control of your data becomes even more challenging when data is copied or moved between environments—such as from private to public or from encrypted to unencrypted. This process creates "similar data," in which data that was initially secure becomes exposed to greater risk by being moved into a lower environment. Data that was once limited to a small, regulated group of identities (users) then becomes accessible by a larger number of users, resulting in a significant loss of control.

Effective data security strategies go beyond identifying these issues. They enforce access policies, automate corrective actions, and integrate with identity and access management systems to help organizations maintain a strong security posture, even as their business needs change and evolve. In addition to having robust data identification methods, it’s crucial to prioritize the implementation of access control measures. This involves establishing Role-based Access Control (RBAC) and Attribute-based Access Control (ABAC) policies, so that the right users have permissions at the right times.
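
To make the ABAC idea concrete, here is a toy policy check in which access depends on the user's role, department, the data's sensitivity label, and the request context; all attribute names and rules are hypothetical.

```python
# A toy ABAC-style check to make the idea concrete: access is granted only
# when role, department, data sensitivity, and request context all satisfy
# the policy. Attribute names and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    role: str             # e.g. "analyst", "admin"
    department: str       # e.g. "finance"
    data_label: str       # e.g. "pci", "public"
    from_corp_network: bool

def is_allowed(req: AccessRequest) -> bool:
    if req.data_label == "public":
        return True
    # Sensitive data: require the owning department, a privileged role,
    # and a trusted network location.
    return (
        req.department == "finance"
        and req.role in {"analyst", "admin"}
        and req.from_corp_network
    )

print(is_allowed(AccessRequest("analyst", "finance", "pci", True)))    # True
print(is_allowed(AccessRequest("analyst", "marketing", "pci", True)))  # False
```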

3. Monitor Your Data

Real security goes beyond awareness—it demands a dynamic approach. Real-time monitoring doesn’t just detect risks and threats; it anticipates them. By spotting unusual behaviors or unauthorized access early, businesses can preempt incidents and maintain trust in an increasingly volatile digital environment. Advanced tools provide visibility into suspicious activities, offer real-time alerts, and automate responses, enabling security teams to act swiftly. This ongoing oversight ensures that businesses stay resilient and adaptive in an ever-changing environment.

Being Fast and Secure

In today’s competitive market, speed drives success—but speed without security is a recipe for disaster. Organizations must balance rapid innovation with robust protection.

Modern tools streamline security operations by delivering actionable insights for faster, more informed risk responses. A comprehensive Data Security Platform goes further by integrating security workflows, automating threat detection, and enabling real-time remediation across multi-cloud environments. By embedding security into daily processes, businesses can maintain agility while protecting their most critical assets.

Why Continuous Data Security is the Key to Long-Term Growth

Data security isn’t a one-and-done effort—it’s an ongoing commitment. As businesses scale and adopt new technologies, their data environments grow more complex, and security threats continue to evolve. Organizations that continuously understand and control their data are poised to turn uncertainty into opportunity. By maintaining this control, they sustain growth, protect trust, and future-proof their success.

Adaptability is the foundation of long-term success. A robust data security platform evolves with your business, providing continuous visibility, automating risk management, and enabling proactive security measures. By embedding these capabilities into daily operations, organizations can maintain speed and agility without compromising protection.

In today’s data-driven world, success hinges on making informed decisions with secure data. Businesses that master continuous data security will not only safeguard their assets but also position themselves to thrive in an ever-changing competitive landscape.

Conclusion: The Critical Link Between Data Security and Success

Data is the lifeblood of modern businesses, driving growth, innovation, and decision-making. But with this immense value comes an equally immense responsibility: protecting it. A comprehensive data security platform goes beyond the basics, unifying discovery, classification, access governance, and real-time protection into a single proactive approach. True success in a data-driven world demands more than agility—it requires mastery. Organizations that embrace data security as a catalyst for innovation and resilience are the ones who will lead the way in today’s competitive landscape.

The question is: Will you lead the charge or risk being left behind? The opportunity to secure your future starts now.

Final thought: In my work with organizations across industries, I’ve seen firsthand how those who treat data security as a strategic enabler, rather than an obligation, consistently outperform their peers. The future belongs to those who lead with confidence, clarity, and control.

If you're interested in learning how Sentra's Data Security Platform can help you understand and protect your data to drive success in today’s competitive landscape, request a demo today.

<blogcta-big>

Read More
Yair Cohen
Yair Cohen
Aviv Zisso
Aviv Zisso
January 13, 2025
4
Min Read
Data Security

Automating Sensitive Data Classification in Audio, Image and Video Files

Automating Sensitive Data Classification in Audio, Image and Video Files

The world we live in is constantly changing, with innovation and technology advancing at an unprecedented pace. Yet, in the midst of all this progress, vast amounts of critical data continue to be stored in various formats, often scattered across network file shares or cloud storage. Not just structured documents such as PDFs, text files, or PowerPoint presentations - we're talking about audio recordings, video files, x-ray images, engineering charts, and so much more.

How do you truly understand the content hidden within these formats? 

After all, many of these files could contain your organization’s crown jewels—sensitive data, intellectual property, and proprietary information—that must be carefully protected.

Importance of Extracting and Understanding Unstructured Data

Extracting and analyzing data from audio, image and video files is crucial in a data-driven world. Media files often contain valuable and sensitive information that, when processed effectively, can be leveraged for various applications.

  • Accessibility: Transcribing audio into text helps make content accessible to people with hearing impairments and improves usability across different languages and regions, ensuring compliance with accessibility regulations.
  • Searchability: Text extraction enables indexing of media content, making it easier to search and categorize based on keywords or topics. This becomes critical when managing sensitive data, ensuring that privacy and security standards are maintained while improving data discoverability.
  • Insights and Analytics: Understanding the content of audio, video, or images can help derive actionable insights for fields like marketing, security, and education. This includes identifying sensitive data that may require protection, ensuring compliance with privacy regulations, and protecting against unauthorized access.
  • Automation: Automated analysis of multimedia content supports workflows like content moderation, fraud detection, and automated video tagging. This helps prevent exposure of sensitive data and strengthens security measures by identifying potential risks or breaches in real-time.
  • Compliance and Legal Reasons: Accurate transcription and content analysis are essential for meeting regulatory requirements and conducting audits, particularly when dealing with sensitive or personally identifiable information (PII). Proper extraction and understanding of media data help ensure that organizations comply with privacy laws such as GDPR or HIPAA, safeguarding against data breaches and potential legal issues.

Effective extraction and analysis of media files unlocks valuable insights while also playing a critical role in maintaining robust data security and ensuring compliance with evolving regulations.

Cases Where Sensitive Data Can Be Found in Audio & MP4 Files

In industries such as retail and consumer services, call centers frequently record customer calls for quality assurance purposes. These recordings often contain sensitive information like personally identifiable information (PII) and payment card data (PCI), which need to be safeguarded. In the media sector, intellectual property often consists of unpublished or licensed videos, such as films and TV shows, which are copyrighted and require protection with rights management technology. However, it's common for employees or apps to extract snippets or screenshots from these videos and store them on personal drives or in unsecured environments, exposing valuable content to unauthorized access.

Another example is when intellectual property or trade secrets are inadvertently shared through unsecured audio or video files, putting sensitive business information at risk - or simply a leakage of confidential information such as non-public sales figures for a publicly traded company. Serious damage can occur to a public company if a bad actor gets hold of an internal audio or video call recording in which forecasts or other non-public sales figures are discussed. This would likely be a material disclosure requiring regulatory reporting (i.e., for SEC 4-day material breach disclosure compliance).

Discover Sensitive Data in MP4s and Audio with Sentra

AI-powered technologies that extract text from images, audio, and video are built on advanced machine learning models like Optical Character Recognition (OCR) and Automatic Speech Recognition (ASR).

OCR converts visual text in images or videos into editable, searchable formats, while ASR transcribes spoken language from audio and video into text. These systems are fueled by deep learning algorithms trained on vast datasets, enabling them to recognize diverse fonts, handwriting, languages, accents, and even complex layouts. At scale, cloud computing enables the deployment of these AI models by leveraging powerful GPUs and scalable infrastructure to handle high volumes of data efficiently. 
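To make the pipeline concrete, here is a minimal sketch, assuming the open-source Whisper ASR library and a hypothetical call_recording.mp3, that transcribes an audio file and flags PII-like patterns in the transcript. It illustrates only the ASR-plus-detection flow; production classifiers rely on far richer models and context.

```python
import re
import whisper  # open-source ASR: pip install openai-whisper

# Illustrative PII patterns; real classifiers use far richer models and context.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_recording(path: str) -> dict:
    """Transcribe an audio file and flag PII-like patterns in the transcript."""
    model = whisper.load_model("base")            # small, general-purpose ASR model
    transcript = model.transcribe(path)["text"]   # spoken audio -> text
    findings = {name: pat.findall(transcript) for name, pat in PII_PATTERNS.items()}
    return {name: hits for name, hits in findings.items() if hits}

if __name__ == "__main__":
    print(scan_recording("call_recording.mp3"))   # hypothetical file name
```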

The Sentra Cloud-Native Platform integrates serverless computing, distributed processing, and API-driven architectures, allowing it to run these ML models on demand. This seamless scaling ensures fast, accurate text extraction across its global user base.

Sentra is rapidly adopting advancements in AI-driven text extraction. A few examples of recent advancements are Optical Character Recognition (OCR) that works seamlessly on dynamic video streams and robust Automatic Speech Recognition (ASR) models capable of transcribing multilingual and domain-specific content with high accuracy. Additionally, innovations in pre-trained transformer models, like Vision-Language and Speech-Language models, enable context-aware extractions, such as identifying key information from complex layouts or detecting sentiment in spoken text. These breakthroughs are pushing the boundaries of accessibility and automation across industries, and enable data security and privacy teams to achieve what was previously thought impossible.

[Screenshot: Data at Risk - Data Activity Overview, showing a large volume of sensitive data copied into a shared drive]

Sentra: An Innovator in Sensitive Data Discovery within Video & Audio

Sentra’s innovative approach to sensitive data discovery goes beyond traditional text-based formats, leveraging advanced ML and AI algorithms to extract and classify data from audio, video, and images. Sentra’s solution contextualizes multimedia content to highlight what matters most for your unique needs, delivering instant answers with a single click—capabilities we believe set us apart as the only DSPM solution offering this level of functionality.

As threats continue to evolve across multiple vectors, including text, audio, and video, solution providers must constantly adopt new techniques for accurate classification and detection. AI plays a critical role in enhancing these capabilities, offering powerful tools to improve precision and scalability. Sentra is committed to driving innovation by leveraging these advanced technologies to keep data secure.

Want to see it in action? Request a demo today and discover how Sentra can help you protect sensitive data wherever it resides, even in image and audio formats.

<blogcta-big>

Read More
Team Sentra
Team Sentra
December 9, 2024
3
Min Read
Data Security

8 Holiday Data Security Tips for Businesses

8 Holiday Data Security Tips for Businesses

As the end of the year approaches and the holiday season brings a slight respite to many businesses, it's the perfect time to review and strengthen your data security practices. With fewer employees in the office and a natural dip in activity, the holidays present an opportunity to take proactive steps that can safeguard your organization in the new year. From revisiting access permissions to guarding sensitive data access during downtime, these tips will help you ensure that your data remains protected, even when things are quieter.

Here's how you can bolster your business’s security efforts before the year ends:

  1. Review Access and Permissions Before the New Year
    Take advantage of the holiday downtime to review data access permissions in your systems. Ensure employees only have access to the data they need, and revoke permissions for users who no longer require them (or worse, are no longer employees). It's a proactive way to start the new year securely.
  2. Limit Access to Sensitive Data During Holiday Downtime
    With many staff members out of the office, review who has access to sensitive data. Temporarily restrict access to critical systems and data for those not on active duty to minimize the risk of accidental or malicious data exposure during the holidays.
  3. Have a Data Usage Policy
    With the holidays bringing a mix of time off and remote work, it’s a good idea to revisit your data usage policy. Creating and maintaining a data usage policy ensures clear guidelines for who can access what data, when, and how, especially during the busy holiday season when staff availability may be lower. By setting clear rules, you can help prevent unauthorized access or misuse, ensuring that your data remains secure throughout the holidays, and all the way to 2025.
  4. Eliminate Unnecessary Data to Reduce Shadow Data Risks
    Data security risks increase as long as data remains accessible. With the holiday season bringing potential distractions, it's a great time to review and delete any unnecessary sensitive data, such as PII or PHI, to prevent shadow data from posing a security risk as the year wraps up.
  5. Apply Proper Hygiene to Protect Sensitive Data
    For sensitive data that must exist, be certain to apply proper hygiene such as masking/de-identification, encryption, and logging to ensure the data isn’t improperly disclosed (a simple masking sketch follows this list). With holiday sales, year-end reporting, and customer gift transactions in full swing, ensuring sensitive data is secure is more important than ever. Many data stores have native tools that can assist (e.g., Snowflake DDM, Purview MIP).
  6. Monitor Third-Party Data Access
    Unchecked third-party access can lead to data breaches, financial loss, and reputational damage. The holidays often mean new partnerships or vendors handling seasonal activities like marketing campaigns or order fulfillment. Keep track of how vendors collect, use, and share your data. Create an inventory of vendors and map their data access to ensure proper oversight, especially during this busy time.
  7. Monitor Data Movement and Transformations
    Data is dynamic and constantly on the move. Monitor whenever data is copied, moved from one environment to another, crosses regulated perimeters (e.g., GDPR), or is ETL-processed, as these activities may introduce new sensitive data vulnerabilities. The holiday rush often involves increased data activity for promotions, logistics, and end-of-year tasks, making it crucial to ensure new data locations are secure and configurations are correct.
  8. Continuously Monitor for New Data Threats
    Despite our best protective measures, bad things happen. A user’s credentials are compromised. A partner accesses sensitive information. An intruder gains access to our network. A disgruntled employee steals secrets. The holiday season’s unique pressures and distractions increase the likelihood of these incidents. Watch for anomalies by continually monitoring data activity and alerting whenever suspicious activity occurs, so you can react swiftly to prevent damage or leakage, even amid the holiday bustle.
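As referenced in tip 5, here is a minimal, illustrative Python sketch of one hygiene technique: masking common identifiers before data is shared or copied. The patterns and sample values are assumptions; production environments typically rely on purpose-built masking/DLP tooling or native data store features rather than ad hoc scripts.

```python
import re

# Illustrative masking rules for common identifiers (assumed patterns).
MASK_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "***-**-****"),             # SSN
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<redacted-email>"),  # email
]
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def mask_card(match: re.Match) -> str:
    digits = re.sub(r"\D", "", match.group(0))
    return "*" * (len(digits) - 4) + digits[-4:]   # keep only the last 4 digits

def mask(text: str) -> str:
    for pattern, replacement in MASK_RULES:
        text = pattern.sub(replacement, text)
    return CARD.sub(mask_card, text)

print(mask("Card 4111 1111 1111 1111, SSN 123-45-6789, jane@example.com"))
```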

Wrapping Up the Year with Stronger Data Security

By taking the time to review and update your data security practices before the year wraps up, you can start the new year with confidence, knowing that your systems are secure and your data is protected. Implementing these simple but effective measures will help mitigate risks and set a strong foundation for 2025. Don't let the holiday season be an excuse for lax security - use this time wisely to ensure your organization is prepared for any data security challenges the new year may bring.

Visit Sentra's demo page to learn more about how you can ensure your organization can stay ahead and start 2025 with a stronger data security posture.

<blogcta-big>

Read More
Romi Minin
Romi Minin
December 5, 2024
3
Min Read
Data Security

Top Data Security Resolutions

Top Data Security Resolutions

As we reflect on 2024, a year marked by a surge in cyber attacks, we are reminded of the critical importance of prioritizing data security. Widespread breaches in various industries, such as the significant Ticketmaster data breach impacting 560 million users, have highlighted vulnerabilities and led to both financial losses and damage to reputations. In response, regulatory bodies have imposed strict penalties for non-compliance, emphasizing the importance of aligning security practices with industry-specific regulations.

By September 2024, GDPR fines totaled approximately €2.41 billion, significantly surpassing the total penalties issued throughout 2023. This reflects stronger enforcement across sectors and a heightened focus on data protection compliance. Entering 2025, the dynamic threat landscape demands a proactive approach. Technology's rapid advancement and cybercriminals' adaptability require organizations to stay ahead. The importance of bolstering data security cannot be overstated, given potential legal consequences, reputational risks, and disruptions to business operations that a data breach can cause.

The data security resolutions for 2025 outlined below serve as a guide to fortify defenses effectively. Compliance with regulations, reducing attack surfaces, governing data access, safeguarding AI models, and ensuring data catalog integrity are crucial steps. Adopting these resolutions enables organizations to navigate the complexities of data security, mitigating risks and proactively addressing the evolving threat landscape.

Adhere to Data Security and Compliance Regulations

The first data security resolution you should keep in mind is aligning your data security practices with industry-specific data regulations and standards. Data protection regulatory requirements are becoming more stringent (for example, the recent SEC requirement that public US companies disclose a material breach within four business days). Penalties for non-compliance are also increasing.

With the explosive growth of cloud data, it is incumbent upon regulated organizations to implement effective data security controls while keeping pace with the dynamic business climate. One way to achieve this is by adopting Data Security Posture Management (DSPM), which automates cloud-native discovery and classification, improving accuracy and reporting timeliness. Sentra supports more than a dozen leading frameworks for policy enforcement and streamlined reporting.

Reduce Attack Surface by Protecting Shadow Data and Enforcing Data Lifecycle Policies

As cloud adoption accelerates, data proliferates. This data sprawl, also known as shadow data, brings with it new risks and exposures. When a developer moves a copy of the production database into a lower environment for testing purposes, do all the same security controls and usage policies travel with it? Likely not. 

Organizations must institute security controls that stay with the data - no matter where it goes. Additionally, automating policies for redundant, obsolete, or trivial (ROT) data can offload the arduous task of ‘policing’ data security, ensuring data remains protected at all times and allowing the business to innovate safely. This has the added bonus of avoiding unnecessary data storage expenditure.

Implement Least Privilege Access for Sensitive Data

Organizations can reduce their attack surface by limiting access to sensitive information. This applies equally to users, applications, and machines (identities). Data Access Governance (DAG) offers a way to implement policies that alert on and can enforce least privilege data access automatically. This has become increasingly important as companies build cloud-native applications, with complex supply chain / ecosystem partners, to improve customer experience. DAG often works in concert with IAM systems, providing added context regarding data sensitivity to better inform access decisions. DAG is also useful if a breach occurs - allowing responders to rapidly determine the full impact and reach (blast radius) of an exposure event to more quickly contain damages.

Protect Large Language Model (LLM) Training by Detecting Security Risks

AI holds immense potential to transform our world, but its development and deployment must be accompanied by a steadfast commitment to data integrity and privacy. Protecting the integrity and privacy of data in Large Language Models (LLMs) is essential for building responsible and ethical AI applications. By implementing data protection best practices, organizations can mitigate the risks associated with data leakage, unauthorized access, and bias/data corruption. Sentra's Data Security Posture Management (DSPM) solution provides a comprehensive approach to data security and privacy, enabling organizations to develop and deploy LLMs with speed and confidence.

Ensure the Integrity of Your Data Catalogs

Enrich data catalog accuracy for improved governance with Sentra's classification labels and automatic discovery. Companies with data catalogs (from leading providers such as Alation, Collibra, Atlan) and data catalog initiatives struggle to keep pace with the rapid movement of their data to the cloud and the dynamic nature of cloud data and data stores. DSPM automates the discovery and classification process - and can do so at immense scale - so that organizations can accurately know at any time what data they have, where it is located, and what its security posture is. DSPM also provides usage context (owner, top users, access frequency, etc.) that enables validation of information in data catalogs, ensuring they remain current, accurate, and trustworthy as the authoritative source for their organization. This empowers organizations to maintain security and ensure the proper utilization of their most valuable asset—data!

How Sentra’s DSPM Can Help Achieve Your 2025 Data Security Resolutions

By embracing these resolutions, organizations can gain a holistic framework to fortify their data security posture. This approach emphasizes understanding, implementing, and adapting these resolutions as practical steps toward resilience in the face of an ever-evolving threat landscape. Staying committed to these data security resolutions can be challenging, as nearly 80% of individuals tend to abandon their New Year’s resolutions by February. However, having Sentra’s Data Security Posture Management (DSPM) by your side in 2025 makes it far easier to keep these data security resolutions and continually refine your organization's data security strategy.

To learn more, schedule a demo with one of our experts.

<blogcta-big>

Read More
Gilad Golani
Gilad Golani
November 28, 2024
3
Min Read
Data Security

New Healthcare Cyber Regulations: What Security Teams Need to Know

New Healthcare Cyber Regulations: What Security Teams Need to Know

Why New Healthcare Cybersecurity Regulations Are Critical

In today’s healthcare landscape, cyberattacks on hospitals and health services have become increasingly common and devastating. For organizations that handle vast amounts of sensitive patient information, a single breach can mean exposing millions of records, causing not only financial repercussions but also risking patient privacy, trust, and care continuity.

Top Data Breaches in Hospitals in 2024: A Year of Costly Cyber Incidents

In 2024, a series of high-profile data breaches in the healthcare sector exposed critical vulnerabilities and emphasized the urgent need for stronger cybersecurity measures in 2025 and beyond. Among the most significant incidents was the breach at Change Healthcare, Inc., which resulted in the exposure of 100 million records. As one of the largest healthcare data breaches in history, this event highlighted the challenges of securing patient data at scale and the immense risks posed by hacking incidents. Similarly, HealthEquity, Inc. suffered a breach impacting 4.3 million individuals, underscoring the vulnerabilities associated with healthcare business associates who manage data for multiple organizations. Finally, Concentra Health Services, Inc. experienced a breach that compromised nearly 4 million patient records, raising critical concerns about the adequacy of cybersecurity defenses in healthcare facilities. These incidents have significantly impacted patients and providers alike, reinforcing the need for robust cybersecurity measures and stricter regulations to protect sensitive data.

New York’s New Cybersecurity Reporting Requirements for Hospitals

In response to the growing threat of cyberattacks, many healthcare organizations and communities are implementing stronger cybersecurity protections. In October, New York State took a significant step by introducing new cybersecurity regulations for general hospitals aimed at safeguarding patient data and reinforcing security measures across healthcare systems. Under these regulations, hospitals in New York must report any “material cybersecurity incident” to the New York State Department of Health (NYSDOH) within 72 hours of discovery.

This 72-hour reporting window aligns with other global regulatory frameworks, such as the European Union’s GDPR and the SEC’s requirements for public companies. However, its application in healthcare represents a critical shift, ensuring incidents are addressed and reported promptly.

The rapid reporting requirement aims to:

  • Enable the NYSDOH to assess and respond to cyber incidents across the state’s healthcare network.
  • Help mitigate potential fallout by ensuring hospitals promptly address vulnerabilities.
  • Protect patients by fostering transparency around data breaches and associated risks.

For hospitals, meeting this requirement means refining incident response protocols to act swiftly upon detecting a breach. Compliance with these regulations not only safeguards patient data but also strengthens trust in healthcare services.

With these regulations, New York is setting a precedent that could reshape healthcare cybersecurity standards nationwide. By emphasizing proactive cybersecurity and quick incident response, the state is establishing a higher bar for protecting sensitive data in healthcare organizations, inspiring other states to potentially follow as well.

HIPAA Updates and the Role of HHS

While New York leads with immediate, state-level action, the Department of Health and Human Services (HHS) is also working to update the HIPAA Security Rule with new cybersecurity standards. These updates, expected to be proposed later this year, will follow a lengthy regulatory process, including a notice of proposed rulemaking, a public comment period, and the eventual issuance of a final rule. Once finalized, healthcare organizations will have time to comply.

In the interim, the HHS has outlined voluntary cybersecurity goals, announced in January 2024. While these recommendations are a step forward, they lack the urgency and enforceability of New York’s state-level regulations. The contrast between the swift action in New York and the slower federal process highlights the critical role state initiatives play in bridging gaps in patient data protection.

Together, these developments—New York’s rapid reporting requirements and the ongoing HIPAA updates—show a growing recognition of the need for stronger cybersecurity measures in healthcare. They emphasize the importance of immediate action at the state level while federal efforts progress toward long-term improvements in data security standards.

Penalties for Healthcare Cybersecurity Non-Compliance in NY

Non-compliance with any health law or regulation in New York State, including cybersecurity requirements, may result in penalties. However, the primary goal of these regulations is not to impose financial penalties but to ensure that healthcare facilities are equipped with the necessary resources and guidance to defend against cyberattacks. Under Section 12 of health law regulations in New York State, violations can result in civil penalties of up to $2,000 per offense, with increased fines for more severe or repeated infractions. If a violation is repeated within 12 months and poses a serious health threat, the fine can rise to $5,000. For violations directly causing serious physical harm to a patient, penalties may reach $10,000. A portion of fines exceeding $2,000 is allocated to the Patient Safety Center to support its initiatives. These penalties aim to ensure compliance, with enforcement actions carried out by the Commissioner or the Attorney General. Additionally, penalties may be negotiated or settled under certain circumstances, providing flexibility while maintaining accountability.

Importance of Prioritizing Breach Reporting

With the rapid digitization of healthcare services, regulations are expected to tighten significantly in the coming years. HIPAA, in particular, is anticipated to evolve with stronger privacy protections and expanded rules to address emerging challenges. Healthcare providers must make cybersecurity a top priority to protect patients from cyber threats. This involves adopting proactive risk assessments, implementing strong data protection strategies, and optimizing breach detection, response, and reporting capabilities to meet regulatory requirements effectively.

Data Security Platforms (DSPs) are essential for safeguarding sensitive healthcare data. These platforms enable organizations to locate and classify patient information, such as lab results, prescriptions, personally identifiable information, or medical images - across multiple formats and environments, ensuring comprehensive protection and regulatory compliance.

Breach Reporting With Sentra

A proper classification solution is essential for understanding the nature and sensitivity of your data at all times. With Sentra, you gain a clear, real-time view of your data's classification, making it easier to determine if sensitive data was involved in a breach, identify the types of data affected, and track who had access to it. This ensures that your breach reports are accurate, comprehensive, and aligned with regulatory requirements.

Sentra can help you adhere to many compliance frameworks, including PCI, GDPR, SOC 2, and more, that may apply to your sensitive data as it travels around the organization. It will automatically alert you to violations, provide insight into the impact of any compromise, help you prioritize associated risks, and integrate with common IR tools to streamline remediation. Sentra automates these processes so you can focus your energies on eliminating risks.

[Screenshot: Data Breach Report, November 2024]

If you want to learn more about Sentra's Data Security Platform, and how you can get started with adhering to the different compliance frameworks, please visit Sentra's demo page.

<blogcta-big>

Read More
Ron Reiter
Ron Reiter
November 17, 2024
5
Min Read
AI and ML

Enhancing AI Governance: The Crucial Role of Data Security

Enhancing AI Governance: The Crucial Role of Data Security

In today’s hyper-connected world, where big data powers decision-making, artificial intelligence (AI) is transforming industries and user experiences around the globe. Yet, while AI technology brings exciting possibilities, it also raises pressing concerns, particularly related to security, compliance, and ethical integrity. 

As AI adoption accelerates, fueled by increasingly vast and unstructured data sources, organizations seeking to secure AI deployments (and investments) must establish a strong AI governance initiative with data governance at its core.

This article delves into the essentials of AI governance, outlines its importance, examines the challenges involved, and presents best practices to help companies implement a resilient, secure, and ethically sound AI governance framework centered around data.

What is AI Governance?

AI governance encompasses the frameworks, practices, and policies that guide the responsible, safe, and ethical use of AI systems across an organization. Effective AI governance integrates technical elements—data, models, and code—with human oversight for a holistic framework that evolves alongside an organization’s AI initiatives.

Embedding AI governance, along with related data security measures, into organizational practices not only guarantees responsible AI use but also supports long-term success in an increasingly AI-driven world.

With an AI governance structure rooted in secure data practices, your company can:

  • Mitigate risks: Ongoing AI risk assessments can proactively identify and address potential threats, such as algorithmic bias, transparency gaps, and potential data leakage; this ensures fairer AI outcomes while minimizing reputational and regulatory risks tied to flawed or opaque AI systems.
  • Ensure strict adherence: Effective AI governance and compliance policies create clear accountability structures, aligning AI deployments and data use with both internal guidelines and the broader regulatory landscape such as data privacy laws or industry-specific AI standards.
  • Optimize AI performance: Centralized AI governance provides full visibility into your end-to-end AI deployments, from data sources and engineered feature sets to trained models and inference endpoints; this facilitates faster and more reliable AI innovations while reducing security vulnerabilities.
  • Foster trust: Ethical AI governance practices, backed by strict data security, reinforce trust by ensuring AI systems are transparent and safe, which is crucial for building confidence among both internal and external stakeholders.

A robust AI governance framework means your organization can safeguard sensitive data, build trust, and responsibly harness AI’s transformative potential, all while maintaining a transparent and aligned approach to AI.

Why Data Governance Is at the Center of AI Governance

Data governance is key to effective AI governance because AI systems require high-quality, secure data to properly function. Accurate, complete, and consistent data is a must for AI performance and the decisions that guide it. Additionally, a strong data access governance platform enables organizations to navigate complex regulatory landscapes and mitigate ethical concerns related to bias.

Through a structured data governance framework, organizations can not only achieve compliance but also leverage data as a strategic asset, ultimately leading to more reliable and ethical AI outcomes.

Risks of Not Having a Data-Driven AI Governance Framework

AI systems are inherently complex, non-deterministic, and highly adaptive—characteristics that pose unique challenges for governance. 

Many organizations face difficulty blending AI governance with their existing data governance and IT protocols; however, a centralized approach to governance is necessary for comprehensive oversight.

Without a data-centric AI governance framework, organizations face risks such as:

  • Opaque decision-making: Without clear lineage and governance, it becomes difficult to trace and interpret AI decisions, which can lead to unethical, discriminatory, or harmful outcomes.
  • Data breaches: AI systems rely on large volumes of data, making rigorous data security protocols essential to avoid leaks of sensitive information across an extended attack surface covering both model inputs and outputs. 
  • Regulatory non-compliance: The fast-paced evolution of AI regulations means organizations without a governance framework risk large penalties for non-compliance and potential reputational damage. 

For more insights on managing AI and data privacy compliance, see our tips for security leaders.

Implementing AI Governance: A Balancing Act

While centralized, robust AI governance is crucial, implementing it successfully poses significant challenges. Organizations must find a balance between driving innovation and maintaining strict oversight of AI operations.

A primary issue is ensuring that governance processes are both adaptable enough to support AI innovation and stringent enough to uphold data security and regulatory compliance. This balance is difficult to achieve, particularly as AI regulations vary widely across jurisdictions and are frequently updated. 

Another key challenge is the demand for continuous monitoring and auditing. Effective governance requires real-time tracking of data usage, model behavior, and compliance adherence, which can add significant operational overhead if not managed carefully.

To address these challenges, organizations need an adaptive governance framework that prioritizes privacy, data security, and ethical responsibility, while also supporting operational efficiency and scalability.

Frameworks & Best Practices for Implementing Data-Driven AI Governance

While there is no universal model for AI governance, your organization can look to established frameworks, such as the AI Act or OECD AI Principles, to create a framework tailored to your own risk tolerance, industry regulations, AI use cases, and culture.

Below we explore key data-driven best practices—relevant across AI use cases—that can best help you structure an effective and secure data-centric AI governance framework.

Adopt a Lifecycle Approach

A lifecycle approach divides oversight into stages. Implementing governance at each stage of the AI lifecycle enables thorough oversight of projects from start to finish following a multi-layered security strategy. 

For example, in the development phase, teams can conduct data risk assessments, while ongoing performance monitoring ensures long-term alignment with governance policies and control over data drift.

Prioritize Data Security

Protecting sensitive data is foundational to responsible AI governance. Begin by achieving full visibility into data assets, categorizing them by relevance, and then assigning risk scores to prioritize security actions.

An advanced data risk assessment combined with data detection and response (DDR) can help you streamline risk scoring and threat mitigation across your entire data catalog, ensuring a strong data security posture.

Adopt a Least Privilege Access Model

Restricting data access based on user roles and responsibilities limits unauthorized access and aligns with a zero-trust security approach. By ensuring that sensitive data is accessible only to those who need it for their work via least privilege, you reduce the risk of data breaches and enhance overall data security.

Establish Data Quality Monitoring

Ongoing data quality checks help maintain data integrity and accuracy, meaning AI systems will be trained on high-quality data sets and serve quality requests. Implement processes for continuous monitoring of data quality and regularly assess data integrity and accuracy; this will minimize risks associated with poor data quality and improve AI performance by keeping data aligned with governance standards.
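As a starting point, ongoing quality checks can be as simple as the sketch below, which assumes a hypothetical training_data.csv and an arbitrary 5% missing-value threshold. Real pipelines would add schema validation, freshness checks, and drift detection on top of this.

```python
import pandas as pd

# Minimal data quality check over a hypothetical training dataset.
df = pd.read_csv("training_data.csv")

null_rates = df.isnull().mean()              # fraction of missing values per column
duplicate_rows = int(df.duplicated().sum())  # exact duplicate records

issues = []
for column, rate in null_rates.items():
    if rate > 0.05:                          # assumed threshold: 5% missing
        issues.append(f"{column}: {rate:.1%} missing values")
if duplicate_rows:
    issues.append(f"{duplicate_rows} duplicate rows")

print("\n".join(issues) if issues else "Data quality checks passed")
```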

Implement AI-Specific Detection and Response Mechanisms

Continuous monitoring of AI systems for anomalies in data patterns or performance is critical for detecting risks before they escalate. Anomaly detection for AI deployments can alert security teams in real time to unusual access patterns or shifts in model performance. Automated incident response protocols guarantee quick intervention, maintaining AI output integrity and protecting against potential threats.

A data security posture management (DSPM) tool allows you to incorporate continuous monitoring with minimum overhead to facilitate proactive risk management.

Conclusion

AI governance is essential for responsible, secure, and compliant AI deployments. By prioritizing data governance, organizations can effectively manage risks, enhance transparency, and align with ethical standards while maximizing the operational performance of AI.

As AI technology evolves, governance frameworks must be adaptive, ready to address advancements such as generative AI, and capable of complying with new regulations, like the UK GDPR.

To learn how Sentra can streamline your data and AI compliance efforts, explore our data security platform guide.

Or, see Sentra in action today by signing up for a demo.

<blogcta-big>

Read More
David Stuart
David Stuart
November 7, 2024
3
Min Read
Sentra Case Study

Understanding the Value of DSPM in Today’s Cloud Ecosystem

Understanding the Value of DSPM in Today’s Cloud Ecosystem

As businesses accelerate their digital growth, the complexity of securing sensitive data in the cloud is growing just as fast. Data moves quickly and threats are evolving even faster; keeping cloud environments secure has become one of the biggest challenges for security teams today.

In The Hacker News’ webinar, Benny Bloch, CISO at Global-e, and David Stuart, Senior Director of Product Marketing at Sentra, discuss the challenges and solutions associated with Data Security Posture Management (DSPM) and how it's reshaping the way organizations approach data protection in the cloud.

The Shift from Traditional IT Environments to the Cloud

Benny highlights how the move from traditional IT environments to the cloud has dramatically changed the security landscape. 

"In the past, we knew the boundaries of our systems. We controlled the servers, firewalls, and databases," Benny explains. However, in the cloud, these boundaries no longer exist. Data is now stored on third-party servers, integrated with SaaS solutions, and constantly moved and copied by data scientists and developers. This interconnectedness creates security challenges, as it becomes difficult to control where data resides and how it is accessed. This transition has led many CISOs to feel a loss of control.

 

As Benny points out, "When using a SaaS solution, the question becomes, is this part of your organization or not? And where do you draw the line in terms of responsibility and accountability?"

The Role of DSPM in Regaining Control

To address this challenge, organizations are turning to DSPM solutions. While Cloud Security Posture Management (CSPM) tools focus on identifying infrastructure misconfigurations and vulnerabilities, they don’t account for the movement and exposure of data across environments. DSPM, on the other hand, is designed to monitor sensitive data itself, regardless of where it resides in the cloud.

David Stuart emphasizes this difference: "CSPM focuses on your infrastructure. It’s great for monitoring cloud configurations, but DSPM tracks the movement and exposure of sensitive data. It ensures that security protections follow the data, wherever it goes."

For Benny, adopting a DSPM solution has been crucial in regaining a sense of control over data security. "Our primary goal is to protect data," he says. "While we have tools to monitor our infrastructure, it’s the data that we care most about. DSPM allows us to see where data moves, how it’s controlled, and where potential exposures lie."

Enhancing the Security Stack with DSPM

One of the biggest advantages of DSPM is its ability to complement existing security tools. For example, Benny points out that DSPM helps him make more informed decisions about where to prioritize resources. "I’m willing to take more risks in environments that don’t hold significant data. If a server has a vulnerability but isn’t connected to sensitive data, I know I have time to patch it."

By using DSPM, organizations can optimize their security stack, ensuring that data remains protected even as it moves across different environments. This level of visibility enables CISOs to focus on the most critical threats while mitigating risks to sensitive data.

A Smooth Integration with Minimal Disruption

Implementing new security tools can be a challenge, but Benny notes that the integration of Sentra’s DSPM solution was one of the smoothest experiences his team has had. "Sentra’s solution is non-intrusive. You provide account details, install a sentinel in your VPC, and you start seeing insights right away," he explains. Unlike other tools that require complex integrations, DSPM offers a connector-less architecture that reduces the need for ongoing maintenance and reconfiguration. This ease of deployment allows security teams to focus on monitoring and securing data, rather than dealing with the technical challenges of integration.

The Future of Data Security with Sentra’s DSPM

As organizations continue to rely on cloud-based services, the need for comprehensive data security solutions will only grow. DSPM is emerging as a critical component of the security stack, offering the visibility and control that CISOs need to protect their most valuable assets: data.

By integrating DSPM with other security tools like CSPM, organizations can ensure that their cloud environments remain secure, even as data moves across borders and infrastructures. As Benny concludes, "You need an ecosystem of tools that complement each other. DSPM gives you the visibility you need to make informed decisions and protect your data, no matter where it resides."

This shift towards data-centric protection is the future of AI-era security, helping organizations stay ahead of threats and maintain control over their ever-expanding digital environments.

Want to learn more about DSPM? Request a demo today!

Read More
Daniel Suissa
Daniel Suissa
November 7, 2024
3
Min Read
Data Security

Top 5 GCP Security Tools for Cloud Security Teams

Top 5 GCP Security Tools for Cloud Security Teams

Like its primary competitors Amazon Web Services (AWS) and Microsoft Azure, Google Cloud Platform (GCP) is one of the largest public cloud vendors in the world – counting companies like Nintendo, eBay, UPS, The Home Depot, Etsy, PayPal, 20th Century Fox, and Twitter among its enterprise customers. 

In addition to its core cloud infrastructure – which spans some 24 data center locations worldwide - GCP offers a suite of cloud computing services covering everything from data management to cost management, from video over the web to AI and machine learning tools. And, of course, GCP offers a full complement of security tools – since, like other cloud vendors, the company operates under a shared security responsibility model, wherein GCP secures the infrastructure, while users need to secure their own cloud resources, workloads and data.

To assist customers in doing so, GCP offers numerous security tools that natively integrate with GCP services. If you are a GCP customer, these are a great starting point for your cloud security journey.

In this post, we’ll explore five important GCP security tools security teams should be familiar with. 

Security Command Center

GCP’s Security Command Center is a fully-featured risk and security management platform – offering GCP customers centralized visibility and control, along with the ability to detect threats targeting GCP assets, maintain compliance, and discover misconfigurations or vulnerabilities. It delivers a single pane view of the overall security status of workloads hosted in GCP and offers auto discovery to enable easy onboarding of cloud resources - keeping operational overhead to a minimum. To ensure cyber hygiene, Security Command Center also identifies common attacks like cross-site scripting, vulnerabilities like legacy attack-prone binaries, and more.

Chronicle Detect

GCP Chronicle Detect is a threat detection solution that helps enterprises identify threats at scale. Chronicle Detect’s next-generation rules engine operates ‘at the speed of search’ using the YARA-L detection language, which was specially designed to describe threat behaviors. Chronicle Detect can identify threat patterns - ingesting logs from multiple GCP resources, then applying a common data model to a petabyte-scale set of unified data drawn from users, machines and other sources. The utility also uses threat intelligence from VirusTotal to automate risk investigation. The end result is a complete platform to help GCP users better identify risk, prioritize threats faster, and fill in the gaps in their cloud security.

Event Threat Detection

GCP Event Threat Detection is a premium service that monitors organizational cloud-based assets continuously, identifying threats in near-real time. Event Threat Detection works by monitoring the cloud logging stream - API call logs and actions like creating, updating, reading cloud assets, updating metadata, and more. Drawing log data from a wide array of sources that include syslog, SSH logs, cloud administrative activity, VPC flow, data access, firewall rules, cloud NAT, and cloud DNS – the Event Threat Detection utility protects cloud assets from data exfiltration, malware, cryptomining, brute-force SSH, outgoing DDoS and other existing and emerging threats.

Cloud Armor

The Cloud Armor utility protects GCP-hosted websites and apps against denial of service and other cloud-based attacks at Layers 3, 4, and 7. This means it guards cloud assets against the type of organized volumetric DDoS attacks that can bring down workloads. Cloud Armor also offers a web application firewall (WAF) to protect applications deployed behind cloud load balancers – and protects these against pervasive attacks like SQL injection, remote code execution, remote file inclusion, and others. Cloud Armor is an adaptive solution, using machine learning to detect and block Layer 7 DDoS attacks, and allows extension of Layer 7 protection to include hybrid and multi-cloud architectures.

Web Security Scanner

GCP’s Web Security Scanner was designed to identify vulnerabilities in App Engines, Google Kubernetes Engines (GKEs), and Compute Engine web applications. It does this by crawling applications at their public URLs and IPs that aren't behind a firewall, following all links and exercising as many event handlers and user inputs as it can. Web Security Scanner protects against known vulnerabilities like plain-text password transmission, Flash injection, mixed content, and also identifies weak links in the management of the application lifecycle like exposed Git/SVN repositories. To monitor web applications for compliance control violations, Web Security Scanner also identifies a subset of the critical web application vulnerabilities listed in the OWASP Top Ten Project.

 

Securing the cloud ecosystem is an ongoing challenge, partly because traditional security solutions are ineffective in the cloud – if they can even be deployed at all. That’s why the built-in security controls in GCP and other cloud platforms are so important.

The solutions above, and many others baked-in to GCP, help GCP customers properly configure and secure their cloud environments - addressing the ever-expanding cloud threat landscape.

<blogcta-big>

Read More
Haim Roth
Haim Roth
October 28, 2024
3
Min Read
Data Security

Spooky Stories of Data Breaches

Spooky Stories of Data Breaches

As Halloween approaches, it’s the perfect time to dive into some of the scariest data breaches of 2024. Just like monsters hiding in haunted houses, cyber threats quietly move through the digital world, waiting to target vulnerable organizations.

The financial impact of cyberattacks is immense. Cybersecurity Ventures estimates global cybercrime will reach $9.5 trillion in 2024 and $10.5 trillion by 2025. Ransomware, the top threat, is projected to cause damages rising from $42 billion in 2024 to $265 billion by 2031.

If those numbers didn’t scare you, the 2024 Verizon Data Breach Investigations Report highlights that out of 30,458 cyber incidents, 10,626 were confirmed data breaches, with one-third involving ransomware or extortion. Ransomware has been the top threat in 92% of industries and, along with phishing, malware, and DDoS attacks, has caused nearly two-thirds of data breaches in the past three years.

Let's explore some of the most spine-tingling breaches of 2024 and uncover how they could have been avoided.

Major Data Breaches That Shook the Digital World

The Dark Secrets of National Public Data

The latest National Public Data breach is staggering: just this summer, a hacking group claimed to have stolen 2.7 billion personal records, potentially affecting nearly everyone in the United States, Canada, and the United Kingdom, including American Social Security numbers. The group published portions of the stolen data on the dark web, and while experts are still analyzing how accurate and complete the information is (there are only about half a billion people across the US, Canada, and UK), it's likely that most, if not all, Social Security numbers have been compromised.

The Haunting of AT&T

AT&T faced a nightmare when hackers breached their systems, exposing the personal data of 7.6 million current and 65.4 million former customers. The stolen data, including sensitive information like Social Security numbers and account details, surfaced on the dark web in March 2024.

Change Healthcare Faces a Chilling Breach

In February 2024, Change Healthcare fell victim to a massive ransomware attack that exposed the personal information of millions of individuals, with 145 million records exposed. This breach, one of the largest in healthcare history, compromised names, addresses, Social Security numbers, medical records, and other sensitive data. The incident had far-reaching effects on patients, healthcare providers, and insurance companies, prompting many in the healthcare industry to reevaluate their security strategies.

The Nightmare of Ticketmaster

Ticketmaster faced a horror of epic proportions when hackers breached their systems, compromising 560 million customer records. This data breach included sensitive details such as payment information, order history, and personal identifiers. The leaked data, offered for sale online, put millions at risk and led to potential federal legal action against their parent company, Live Nation.

How Can Organizations Prevent Data Breaches: Proactive Steps

To mitigate the risk of data breaches, organizations should take proactive steps. 

  • Regularly monitor accounts and credit reports for unusual activity.
  • Strengthen access controls by minimizing over-privileged users.
  • Review permissions and encrypt critical data to protect it both at rest and in transit (see the encryption sketch below).
  • Invest in real-time threat detection tools and conduct regular security audits to help identify vulnerabilities and respond quickly to emerging threats.
  • Implement Data Security Posture Management (DSPM) to detect shadow data and ensure proper data hygiene (e.g., encryption, masking, activity logging).

These measures, including multi-factor authentication and routine compliance audits, can significantly reduce the risk of breaches and better protect sensitive information.
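For the "encrypt critical data" step above, here is a minimal sketch using the Python cryptography library's Fernet recipe. The sample value and key handling are illustrative only; in practice keys live in a KMS or secrets manager, and cloud-native encryption features often handle this for you.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Minimal sketch of symmetric encryption at rest. In practice the key would be
# generated once and stored in a KMS or secrets manager, never next to the data.
key = Fernet.generate_key()
fernet = Fernet(key)

ciphertext = fernet.encrypt(b"4111 1111 1111 1111")   # sensitive value at rest
plaintext = fernet.decrypt(ciphertext)                 # authorized read path

assert plaintext == b"4111 1111 1111 1111"
print(ciphertext.decode()[:32], "...")
```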

Best Practices to Secure Your Data 

Enough of the scary news, how do we avoid these nightmares?

Organizations can defend themselves starting with Data Security Posture Management (DSPM) tools. By finding and eliminating shadow data, identifying over-privileged users, and monitoring data movement, companies can significantly reduce their risk of facing these digital threats.

Looking at these major breaches, it's clear the stakes have never been higher. Each incident highlights the vulnerabilities we face and the urgent need for strong protection strategies. Learning from these missteps underscores the importance of prioritizing data security.

As technology continues to evolve and regulations grow stricter, it’s vital for businesses to adopt a proactive approach to safeguarding their data. Implementing proper data security measures can play a critical role in protecting sensitive information and minimizing the risk of future breaches.

Sentra: The Data Security Platform for the AI era

Sentra enables security teams to gain full visibility and control of data, as well as protect against sensitive data breaches across the entire public cloud stack. By discovering where all the sensitive data is, how it's secured, and where it's going, Sentra reduces the 'data attack surface', the sum of all places where sensitive or critical data is stored or traveling to.

Sentra’s cloud-native design combines powerful Data Discovery and Classification, DSPM, DAG, and DDR capabilities into a complete Data Security Platform (DSP). With this, Sentra customers achieve enterprise-scale data protection and answer the important questions about their data. Sentra DSP provides a crucial layer of protection distinct from other infrastructure-dependent layers. It allows organizations to scale data protection across multi-clouds to meet enterprise demands and keep pace with ever-evolving business needs. And it does so very efficiently - without creating undue burdens on the personnel who must manage it.

Read More
Haim Roth
Haim Roth
October 1, 2024
3
Min Read
Data Security

5 Cybersecurity Tips for Cybersecurity Awareness Month

5 Cybersecurity Tips for Cybersecurity Awareness Month

Secure our World: Cybersecurity Awareness Month 2024

As we kick off October's Cybersecurity Awareness Month and think about this year’s theme, “Secure Our World,” it’s important to remember that safeguarding our digital lives doesn't have to be complex. Simple, proactive steps can make a world of difference in protecting yourself and your business from online threats.

In many cases, these simple steps relate to data — the sensitive information about users’ personal and professional lives. As a business, you are largely responsible for keeping your customers' and employees’ data safe. Starting with cybersecurity is the best way to ensure that this valuable information stays secure, no matter where it’s stored or how you use it.

Keeping Personal Identifiable Information (PII) Safe

Data security threats are more pervasive than ever today, with cybercriminals constantly evolving their tactics to exploit vulnerabilities. From phishing attacks to ransomware, the risks are not just technical but also deeply personal, especially when it comes to protecting Personal Identifiable Information (PII).

Cybersecurity Awareness Month is a perfect time to reflect on the importance of strong data security. Businesses, in particular, can contribute to a safer digital environment through Data Security Posture Management (DSPM). DSPM helps businesses - big and small alike -  monitor, assess, and improve their security posture, ensuring that sensitive data, such as PII, remains protected against breaches. By implementing DSPM, businesses can identify weak spots in their data security and take action before an incident occurs, reinforcing the idea that securing our world starts with securing our data.

Let's take this month as an opportunity to Secure Our World by embracing these simple but powerful DSPM measures to protect what matters most: data.

5 Cybersecurity Tips for Businesses

  1. Discover and Classify Your Data: Understand where all of your data resides, how it’s used, and its levels of sensitivity and protection. By leveraging data discovery and classification tools, you can maintain complete visibility and control over your business’s data, reducing the risks associated with shadow data (unmanaged or abandoned data).
  2. Ensure data always has a good risk posture: Maintain a strong security stance by ensuring your data always has a good posture through Data Security Posture Management (DSPM). DSPM continuously monitors and strengthens your data’s security posture (readiness to tackle potential cybersecurity threats), helping to prevent breaches and protect sensitive information from evolving threats.
  3. Protect Private and Sensitive Data: Keep your private and sensitive data secure, even from internal users. By implementing Data Access Governance (DAG) and utilizing techniques like data de-identification and masking, you can protect critical information and minimize the risk of unauthorized access.
  4. Embrace Least-Privilege Control: Control data access through the principle of least privilege — only granting access to the users and systems who need it to perform their jobs. By implementing Data Access Governance (DAG), you can limit access to only what is necessary, reducing the potential for misuse and enhancing overall data security.
  5. Continual Threat Monitoring for Data Protection: To protect your data in real-time, implement continual monitoring of new threats. With Data Detection and Response (DDR), you can stay ahead of emerging risks, quickly identifying and neutralizing potential vulnerabilities to safeguard your sensitive information.

How Sentra Helps Secure Your Business’s World

Today, a business's “world” is extremely complex and ever-changing. Users can easily move, change, or copy data and connect new applications/environments to your ecosystem. These factors make it challenging to pinpoint where your data resides and who has access to it at any given moment. 

Sentra helps by giving businesses a vantage point of their entire data estate, including multi-cloud and on-premises environments. We combine all of the above practices—granular discovery and classification, end-to-end data security posture management, data access governance, and continuous data detection and response into a single platform.

To celebrate Cybersecurity Awareness Month, check out how our data security platform can help improve your security posture.

<blogcta-big>

Read More
David Stuart
David Stuart
September 25, 2024
3
Min Read
Data Security

Top Advantages and Benefits of DSPM

Top Advantages and Benefits of DSPM

Addressing data protection in today’s data estates requires innovative solutions. Data in modern environments moves quickly, as countless employees in a given organization can copy, move, or modify sensitive data within seconds. In addition, many organizations operate across a variety of on-premises environments, along with multiple cloud service providers and technologies like PaaS and IaaS. Data quickly sprawls across this multifaceted estate as team members perform daily tasks.

Data Security Posture Management (DSPM) is a key technology that meets these challenges by discovering and classifying sensitive data and then protecting it wherever it goes. DSPM helps organizations mitigate risks and maintain compliance across a complex data landscape by focusing on the continuous discovery and monitoring of sensitive information. 

If you're not familiar with DSPM, you can check out our comprehensive DSPM guide to get up to speed. But for now, let's delve into why DSPM is becoming indispensable for modern cloud enterprises.

Why is DSPM Important?

DSPM is an innovative cybersecurity approach designed to safeguard and monitor sensitive data as it traverses different environments. This technology focuses on the discovery of sensitive data across the entire data estate, including cloud platforms such as SaaS, IaaS, and PaaS, as well as on-premises systems. DSPM assesses exposure risks, identifies who has access to company data, classifies how data is used, ensures compliance with regulatory requirements like GDPR, PCI-DSS, and HIPAA, and continuously monitors data for emerging threats.

As organizations scale up their data estate and add multiple cloud environments, on-prem databases, and third-party SaaS applications, DSPM also helps them automate key data security practices and keep pace with this rapid scaling. For instance, DSPM offers automated data tags that help businesses better understand the deeper context behind their most valuable assets — regardless of location within the data estate. It leverages integrations with other security tools (DLP, CNAPP, etc.) to collect this valuable data context, allowing teams to confidently remediate the security issues that matter most to the business.

What are the Benefits of DSPM?

DSPM empowers all security stakeholders to monitor data flow, access, and security status, preventing risks associated with data duplication or movement in various cloud environments. It simplifies robust data protection, making it a vital asset for modern cloud-based data management.

Now, you might be wondering, why do we need another acronym? 

Let's explore the top five benefits of implementing DSPM:

1) Sharpen Visibility When Identifying Data Risk

DSPM enables you to continuously analyze your security posture and automate risk assessment across your entire landscape. It can detect data concerns across all cloud-native and unmanaged databases, data warehouses, data lakes, data pipelines, and metadata catalogs. By automatically discovering and classifying sensitive data, DSPM helps teams prioritize actions based on each asset’s sensitivity and relationship to policy guidelines.

Automating the data discovery and classification process takes a fraction of the time and effort required to manually catalog all sensitive data. It’s also far more accurate, especially when using a DSPM solution that leverages LLMs to classify data with more granularity and rich metadata. In addition, it ensures that you stay up-to-date with the frequent changes in your modern data landscape.

2) Strengthen Adherence to Security & Compliance Requirements

DSPM can also automate the process of identifying regulatory violations and ensuring adherence to custom and pre-built policies (including policies that map to common compliance frameworks). By contrast, manually implementing policies is prone to errors and inaccuracies. It’s common for teams to misconfigure policies that either overalert and inhibit daily work or miss significant user activities and changes to access permissions.

Instead, DSPM offers policies that travel with your data and automatically reveal compliance gaps. It ensures that sensitive data stays within the correct environments and doesn’t travel to regions that violate retention policies or lack data encryption.

3) Improve Data Access Governance

Many DSPM solutions also offer data access governance (DAG). This functionality enforces the appropriate access permissions for all user identities, third parties, and applications within your organization. DAG automatically ensures that the proper controls follow your data, mitigating risks such as excessive permissions, unauthorized access, inactive or unused identities and API keys, and improper provisioning/deprovisioning for services and users.

By using DSPM to govern data access, teams can achieve least privilege within an ever-changing and growing data ecosystem.
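
To make this concrete, here is a minimal, hypothetical sketch of the kind of check a data access governance process automates: flagging identities that hold access to sensitive stores but have not used it recently. The grant fields and the 90-day idle threshold are illustrative assumptions, not Sentra functionality.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: flag identities with unused access to sensitive data stores.
# Grant fields and the inactivity threshold are assumptions for illustration.
grants = [
    {"identity": "svc-report", "store": "customers-db", "sensitive": True,
     "last_used": datetime(2024, 1, 5)},
    {"identity": "analyst-1", "store": "customers-db", "sensitive": True,
     "last_used": datetime(2024, 8, 1)},
]

def stale_sensitive_grants(grants, now, max_idle_days=90):
    """Return grants to sensitive stores that have been idle longer than the threshold."""
    cutoff = now - timedelta(days=max_idle_days)
    return [g for g in grants if g["sensitive"] and g["last_used"] < cutoff]

for g in stale_sensitive_grants(grants, now=datetime(2024, 9, 1)):
    print(f"Review or revoke: {g['identity']} -> {g['store']}")
```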


4) Minimize your Data Attack Surface

DSPM also enables teams to detect unmanaged sensitive data, including mislocated, shadow, or duplicate assets. Its powerful data detection capabilities ensure that sensitive data, such as historical assets stored within legacy apps, development test data, or information within shadow IT apps, doesn’t go unnoticed in a lower environment. By automatically finding and classifying these unknown assets, DSPM minimizes your data attack surface, controls data sprawl, and better protects your most valuable assets from breaches and leaks.


5) Protect Data Used by LLMs

DSPM also extends to LLM applications, enabling you to maintain a strong risk posture as your team adopts new technologies. It considers LLMs as part of the data attack surface, applying the same DAG and data discovery/classification capabilities to any training data leveraged within these applications. 

By including LLMs in your overarching data security approach, DSPM helps alleviate GenAI data privacy concerns and sets up your organization for future success as these technologies continue to evolve.

Enhance Your DSPM Strategy with Sentra

Sentra offers an AI-powered DSPM platform that moves at the speed of data, enabling you to strengthen your data risk posture across your entire hybrid ecosystem. Our platform can identify and mitigate data risks and threats with deep context, map identities to permissions, prevent exfiltration with a modern DLP, and maintain a rich data catalog with details on both known and unknown data. 

In addition, our platform runs autonomously and requires only minimal administrative support. It also adds a layer of security by discovering and intelligently categorizing all data without removing it from your environment. 

Conclusion

DSPM is quickly becoming an essential tool for modern cloud enterprises, offering comprehensive benefits to the complex challenges of data protection. By focusing on discovering and monitoring sensitive information, DSPM helps organizations mitigate risks and maintain compliance across various environments, including cloud and on-premises systems.

The rise of DSPM in the past few years highlights its importance in enhancing security. It allows security teams to monitor data flow, access, and status, effectively preventing data duplication or movement risks. With advanced threat detection, improved compliance and governance, detailed access control, rapid incident response, and seamless integration with cloud services, DSPM provides significant benefits and advantages over other data security solutions. Implementing DSPM is a strategic move for organizations aiming to fortify their data protection strategies in today's digital landscape.

To learn more about DSPM, request a demo today.

<blogcta-big>

Read More
Yoav Regev
Yoav Regev
August 28, 2024
3
Min Read
Data Security

Sentra’s 3-Year Journey: From DSPM to Data Security Platform

Sentra’s 3-Year Journey: From DSPM to Data Security Platform

If you had searched for "DSPM" on Google three years ago, you likely would have only found information related to a dspm manufacturing website… But in just a few short years, the concept of Data Security Posture Management (DSPM) has evolved from an idea into a critical component of modern cybersecurity for enterprises.

Let’s rewind to the summer of 2021. Back then, when we were developing what would become Sentra and our DSPM solution, the term didn’t even exist. All that existed was the problem - data was being created, moved and duplicated in the cloud, and its security posture wasn’t keeping pace. Organizations didn’t know where all of their data was, and even if they could find it, its level of protection was inadequate for its level of sensitivity.

After extensive discussions with CISOs and security experts, we recognized a critical gap between data security and modern environments, one further exacerbated by the fast pace of AI. Addressing this gap wasn’t just important—it was essential. Through these conversations, we identified the need for a new approach, which led to the creation of the DSPM concept.

 

It was thrilling to hear my Co-Founder and VP Product, Yair Cohen, declare for the first time, “the world’s first DSPM is coming in 2021.” We embraced the term "Data Security Posture Management," now widely known as "DSPM."

Why DSPM Has Become an Essential Tool

Today, DSPM has become mainstream, helping organizations safeguard their most valuable asset: their data.

"Three years ago, when we founded Sentra, we dreamed of creating a new category called DSPM. It was a huge bet to pursue new budgets, but we believed that data security would be the next big thing due to the shift to the cloud. We could never have imagined that it would become the world’s hottest security category and that the potential would be so significant."

-Ron Reiter, Co-Founder and CTO, Sentra

This summer, Gartner released its 2024 Hype Cycle for Data Security, and DSPM is in the spotlight for good reason. Gartner describes DSPM as having "transformative" potential, particularly for addressing long-standing data security challenges. As companies rapidly move to the cloud, DSPM solutions are gaining traction by filling critical visibility gaps.

The best DSPM solutions offer coverage across multi-cloud and on-premises environments, creating a unified approach to data security. DSPM plays a pivotal role in the modern cybersecurity landscape by providing organizations with real-time visibility into their data security posture. It helps identify, prioritize and mitigate risks across the entire data estate. By continuously monitoring data movement and access patterns, DSPM ensures that any policy violations or deviations from normal behavior are quickly flagged and addressed, preventing potential breaches before they can cause damage.

DSPM is also critical in maintaining compliance with data protection regulations. As organizations handle increasingly complex data environments, meeting regulatory requirements becomes more challenging. DSPM simplifies this process by automating compliance checks and providing clear insights into where sensitive data resides, how it’s being used, and who has access to it. This not only helps organizations avoid hefty fines but also builds trust with customers and stakeholders by demonstrating a commitment to data security and privacy.

In a world where data privacy and security threats rank among the biggest challenges facing society, DSPM provides a crucial layer of protection. Businesses, individuals, and governments are all at risk, with sensitive information constantly under threat.

 

That’s why we are committed to developing our data security platform, which ensures your data remains secure and intact, no matter where it travels.

From DSPM to Data Security Platform in the AI Age

We began with a clear understanding of the critical need for Data Security Posture Management (DSPM) to address data proliferation risks in the evolving cloud landscape. As a leading data security platform, Sentra has expanded its capabilities based on our customers’ needs to include Data Access Governance (DAG), Data Detection and Response (DDR), and other essential tools to better manage data access, detect emerging threats, and assist organizations in their journey to implement Data Loss Prevention (DLP). We now do this across all environments (IaaS, PaaS, SaaS, and On-Premises).

We continue to evolve. In a world rapidly changing with advancements in AI, our platform offers the most comprehensive and effective data security solutions to keep pace with the demands of the AI age. As AI reshapes the digital landscape, it also creates new vulnerabilities, such as the risk of data exposure through AI training processes. Our platform addresses these AI-specific challenges, while continuing to tackle the persistent security issues from the cloud era, providing an integrated solution that ensures data security remains resilient and adaptive.

DSPMs facilitate swift AI development and smooth business operations by automatically securing LLM training data. Integrations with platforms like AWS SageMaker and GCP Vertex AI, combined with features such as DAG and DDR, ensure robust data security and privacy. This approach supports responsible AI applications and reduces risks such as breaches and bias. So, Sentra is no longer only a DSPM solution; it’s a data security platform. Today, we provide holistic solutions that allow you to locate any piece of data and access all the information you need. Our mission is to continuously build and enhance the best data security platform, empowering organizations to move faster and succeed in today’s digital world. 

Success Driven by Our Amazing People

We’re proud that Sentra has emerged as a leader in the data security industry, making a significant impact on how organizations protect their data. Our success is driven by our incredible team; their hard work, dedication, and energy are the foundation of everything we do. From day one, our people have always been our top priority. It's inspiring to see our team work tirelessly to transform the world of data security and build the best solution out there.

This team of champions never stops innovating, inspiring, and striving to be the best version of themselves every day.

Their passion is evident in their work, as shown in recent projects that they initiated, from the new video series, “Answering the Most Searched DSPM Questions”, to a behind-the-scenes walkthrough of our data security platform, and more.

We’re excited to continue to push the boundaries of what’s possible in data security.

A heartfelt thank you to our incredible team, loyal customers, supportive investors, and dedicated partners. We’re excited to keep driving innovation in data security and to continue our mission of making the digital world a safer place for everyone.

<blogcta-big>

Read More
Daniel Suissa
Daniel Suissa
August 26, 2024
3
Min Read
Data Security

Overcoming Gartner’s Obstacles for DSPM Mass Adoption

Overcoming Gartner’s Obstacles for DSPM Mass Adoption

Gartner recently released its much-anticipated 2024 Hype Cycle for Data Security, and the spotlight is shining bright on Data Security Posture Management (DSPM). Described as having a "transformative" potential, DSPM is lauded for its ability to address long-standing data security challenges. 

DSPM solutions are gaining traction to fill visibility gaps as companies rush to the cloud. Best-of-breed solutions provide coverage across multi-cloud and on-premises environments, offering a holistic approach that can become the authoritative inventory of data for an organization - and a useful, up-to-date source of contextual detail that informs other security stack tools such as DLPs, CSPMs/CNAPPs, data catalogs, and more, enabling them to work more effectively. Learn more about this in our latest blog, Data: The Unifying Force Behind Disparate GRC Functions.

However, as with any emerging technology, Gartner also highlighted several obstacles that could hinder its widespread adoption. In this blog, we’ll dive into these obstacles, separating the legitimate concerns from those that shouldn't deter any organization from embracing DSPM—especially when using a comprehensive solution like Sentra.

Obstacle 1: Scanning the Entire Infrastructure for Data Can Take Days to Complete

This concern holds some truth, particularly for organizations managing petabytes of data. Full infrastructure scans can indeed take time. However, this doesn’t mean you're left twiddling your thumbs waiting for results. With Sentra, insights start flowing while the scan is still in progress. Our platform is designed to alert you to data vulnerabilities as they’re detected, ensuring you're never in the dark for long. So, while the scan might take days to finish, actionable insights are available much sooner. And scans for changes occur continuously so you’re always up to date.

Obstacle 2: Limited Integration with Security Controls for Remediation

Gartner pointed out that DSPM tools often integrate with a limited set of security controls, potentially complicating remediation efforts. While it’s true that each security solution prioritizes certain integrations, this is not a challenge unique to DSPM. Sentra, for instance, offers dozens of built-in integrations with popular ticketing systems and data remediation tools. Moreover, Sentra enables automated actions like auto-masking and revoking unauthorized access via platforms like Okta, seamlessly fitting into your existing workflow processes and enhancing your cloud security posture.

Obstacle 3: DSPM as a Function within Broader Data Security Suites

Another obstacle Gartner identified is that DSPM is sometimes offered merely as a function within a broader suite of data security offerings, which may not integrate well with other vendor products. This is a valid concern. Many cloud security platforms are introducing DSPM modules, but these often lack the discovery breadth and classification granularity needed for robust and accurate data security.

Sentra takes a different approach by going beyond surface-level vulnerabilities. Our platform uses advanced automatic grouping to create "Data Assets"—groups of files with similar structures, security postures, and business functions. This allows Sentra to reduce petabytes of cloud data into manageable data assets, fully scanning all data types daily without relying on random sampling. This level of detail and continuous monitoring is something many other solutions simply cannot match.
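
The grouping idea can be illustrated with a small sketch. This is not Sentra's algorithm; it simply shows, under assumed field names, how files sharing a column signature could be collapsed into one logical asset so that posture checks run per group rather than per file.

```python
from collections import defaultdict

# Hypothetical sketch: group files that share the same column signature into one
# logical "data asset", so posture checks run per group instead of per file.
files = [
    {"path": "s3://bucket/orders/2024-01.csv", "columns": ["order_id", "email", "total"]},
    {"path": "s3://bucket/orders/2024-02.csv", "columns": ["order_id", "email", "total"]},
    {"path": "s3://bucket/logs/app.log",       "columns": ["timestamp", "message"]},
]

def group_into_assets(files):
    """Map each distinct column signature to the list of files that share it."""
    assets = defaultdict(list)
    for f in files:
        signature = tuple(sorted(f["columns"]))
        assets[signature].append(f["path"])
    return assets

for signature, paths in group_into_assets(files).items():
    print(f"asset {signature}: {len(paths)} file(s)")
```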

Obstacle 4: Inconsistent Product Capabilities Across Environments

Gartner also highlighted the varying capabilities of DSPM solutions, especially when it comes to mapping user access privileges and tracking data across different environments—on-premises, cloud services, and endpoints. While it’s true that DSPM solutions can differ in their abilities, the key is to choose a platform designed for multi-cloud and hybrid environments. Sentra is built precisely for this purpose, offering robust capabilities to identify and protect data across diverse environments (IaaS, PaaS, SaaS, and On-premises), ensuring consistent security and risk management no matter where your data resides.

Conclusion

While Gartner's 2024 Hype Cycle for Data Security outlines several obstacles to DSPM adoption, many of these challenges are either surmountable or less significant than they might first appear. With the right DSPM solution, organizations can effectively overcome these obstacles and harness the full transformative power of DSPM.

Curious about how Sentra can elevate your data security? 

<blogcta-big>

Read More
David Stuart
David Stuart
August 22, 2024
3
Min Read
Data Security

Data: The Unifying Force Behind Disparate GRC Functions

Data: The Unifying Force Behind Disparate GRC Functions

In the ever-evolving world of cybersecurity, a common thread weaves its way through the seemingly disconnected disciplines of data security, data privacy, and compliance: data. This critical element forms the cornerstone of each function, yet existing solutions often fall short in fostering a holistic approach to data governance and security.

This blog delves into the importance of data as the unifying force behind disparate GRC (Governance, Risk & Compliance) functions. We'll explore how a data-centric approach can overcome the limitations of traditional solutions, paving the way for a more efficient and secure future.

The Expanding Reach of DSPM: Evidence from the Hype Cycle

Gartner's Hype Cycles serve as an insightful snapshot of emerging trends within the cybersecurity landscape. Both the "2024 Hype Cycle for Data Security" and the "2024 Gartner Hype Cycle for Cyber-Risk Management" highlight Data Security Posture Management (DSPM) as a key area of focus. This analyst perspective signifies a significant shift, recognizing DSPM as a discipline, not merely a set of features within existing security solutions. It's a recognition that data security is fundamental to achieving all GRC objectives.

Traditionally, data security has been the domain of security teams and Chief Information Security Officers (CISOs). Data privacy, on the other hand, resides with Chief Data Privacy Officers (CDPOs). Compliance, a separate domain altogether, falls under the responsibility of Chief Compliance Officers (CCOs). This siloed approach often leads to a disjointed view of data security and privacy, creating vulnerabilities and inefficiencies.

Data: The Universal Element

Data, however, transcends these functional boundaries. It's the universal element that binds security, privacy, and compliance together. Regardless of its form – financial records, customer information, intellectual property – securing data forms the foundation of a strong security posture. 

Identity, too, plays a crucial role in data security. Understanding user access and behavior is critical for data security and compliance. An effective data security solution will require deep integration with identity management to ensure proper access controls and policy enforcement.

Imagine a Venn diagram formed by the three disciplines: Data Security (CISO), Data Privacy (CDPO), and Compliance (CCO). At the center, where all three circles intersect, lies the critical element – Data. Each function operates within its own domain yet shares ownership of data at its core.

While these functions may seem distinct, the underlying element—data—connects them all. Data is the common thread woven throughout every GRC activity. It's the lifeblood of any organization, and its security and privacy are paramount. We can't talk about securing data without considering privacy, and compliance often hinges on controls that safeguard sensitive data.

For a truly comprehensive approach, organizations need a standardized method for classifying data based on its sensitivity. This common ground allows each GRC function to view and manage data through a shared lens. A unified data discovery and classification layer increases the chances of collaboration among these functions - and DSPM provides this layer.

Existing Solutions Fall Short in a Dynamic Landscape

Traditional GRC solutions often fall short due to their myopic nature. They cater primarily to a single function – data security, data privacy, or compliance – leaving a fragmented landscape.

These solutions also struggle to keep pace with the dynamic nature of data. Data volumes are constantly growing, changing formats, and moving across diverse platforms. Mapping such a dynamic resource can be a nightmare with traditional approaches. Here at Sentra, we've explored this challenge in detail in a previous blog, Understanding Data Movement to Avert Proliferation Risks.

A New Approach: Cloud-Native DSPM for Agility and Scalability

The future of GRC demands a new approach, one that leverages the unifying force of data. Enter cloud-native Data Security Posture Management (DSPM) solutions, specifically designed for scalability and agility. This new breed of platforms offers several key advantages:

  • Comprehensive Data Discovery: The platform actively identifies all data across your organization, regardless of location or format. This holistic view provides a solid foundation for understanding and managing your data security posture.
  • Consistent Data Classification: With a central platform, data classification becomes a unified process. Sensitive data can be identified and flagged consistently across various functions, ensuring consistent handling.
  • Pre-built Integrations: Streamline your workflows with seamless integrations to existing tools across your organization, such as data catalogs, Incident Response (IR) platforms, IT Service Management (ITSM) systems, and compliance management solutions.

Towards a Unified Data Governance and Security Platform

The need for best-of-breed DSPM solutions like Sentra will remain strong to meet the ever-expanding requirements of data security and privacy. However, a future where GRC functionalities are more closely integrated is also emerging.

We're already witnessing a shift in our own customer base, where initial deployments for one specific use case have evolved into broader platform adoption for multiple use cases. Organizations are beginning to recognize the value of a unified platform for data governance and security.

Imagine a future where data officers, application owners, developers, compliance officers, and security teams all utilize a common data governance and security platform. This platform would be built on a foundation of consistent data sensitivity definitions, promoting a shared understanding of data security risks and responsibilities across the entire organization.

This interconnected future is closer than you might think. By embracing the unifying power of data and leveraging cloud-native DSPM solutions, organizations can achieve a more holistic and unified approach to GRC. With data at the center, everyone wins: security, privacy, and compliance all benefit from a more collaborative and data-driven approach.

At Sentra, we believe the inclusion of DSPM in multiple hype cycles signifies the increasing importance of these solutions for security teams worldwide. As DSPM solutions become more integrated into cybersecurity strategies, their impact on enhancing overall security posture is becoming increasingly evident.

Curious about how Sentra can elevate your data security? 

Talk to our data security experts and request a demo today.

<blogcta-big>

Read More
Roy Levine
Roy Levine
August 12, 2024
3
Min Read
Data Security

How Contextual Data Classification Complements Your Existing DLP

How Contextual Data Classification Complements Your Existing DLP

Using data loss prevention (DLP) technology is a given for many organizations. Because these solutions have historically been the best way to prevent data exposure, many organizations already have DLP solutions deeply entrenched within their infrastructure and security systems to assist with data discovery and classification.

However, as we discussed in a previous blog post about embracing cloud DLP and DSPM, traditional DLP often struggles to keep up with disparate cloud environments and the sheer volume of data that comes with them. As a result, many teams experience false alarms and alert fatigue — not to mention extensive manual tuning — as they try to get their DLP solutions to work with their cloud-based or hybrid data ecosystems. Yet simply ripping out and replacing these solutions isn’t an option for most organizations, as they are costly and play such a significant role in security programs.

 

Many organizations need a complementary solution instead of a replacement for their DLP — something that will improve the effectiveness and accuracy of their existing data discovery and “border control” security technologies.

Contextual data classification can play this role with cloud-aware functionality that discovers all data, identifies which data is at risk, and gauges the actions cloud users take, differentiating between routine activities and anomalies that could indicate actual threats. This insight can then be used to better harden the policies and controls governing data movement.

Why Cloud Data Security Requires More than DLP

While traditional data loss prevention (DLP) technology plays an integral role in many businesses’ data security approaches, it can start to falter when used within a cloud environment. Why? DLP uses pre-defined patterns to detect suspicious activity. Often, this doesn’t work in the context of regular cloud activities. Here are the two main ways that DLP conflicts with the cloud:

Perimeter-Based Security Controls

DLP was originally created for on-premises environments with a clearly defensible perimeter. A DLP solution can only see general patterns, such as a file getting sent, shared, or copied, and cannot capture nuanced information beyond this. So, a DLP solution often flags routine activities (e.g., sharing data with third-party applications) as suspicious in the data discovery process. When the DLP blocks these everyday actions, it impedes business velocity and alerts the security team needlessly.

In modern cloud-first organizations, data needs to move freely to and from the cloud in order to meet dynamic business demands. DLP is often too restrictive (or, conversely, too permissive) since it lacks a fundamental understanding of data sensitivity and only sees data when it moves. As a result, it misses the opportunity to protect data at rest. If too restrictive, it can disrupt business. If too permissive, it can miss numerous insider, supply chain, or other threats that look like authorized activity to the DLP.

Limited Classification Engines

The classification engines built into traditional DLPs are limited to common data types, such as social security or credit card numbers. As a result, they can miss nuanced, sensitive data, which is more common in a cloud ecosystem. For example, passport numbers stored alongside the passport holders’ names could pose a risk if exposed, while either the names or numbers on their own are not a risk. Or, DLP solutions could miss intellectual property or trade secrets, a form of data that wasn’t even stored online twenty years ago but is now prevalent in cloud environments.
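
The limitation described above is essentially a co-occurrence problem, which a context-aware classifier handles by scoring combinations of signals rather than single patterns. The sketch below is a hypothetical illustration only; the passport regex, the stand-in name list, and the character window are assumptions.

```python
import re

# Hypothetical sketch: flag a record only when two weak signals co-occur.
# The passport pattern and the name list are illustrative assumptions.
PASSPORT = re.compile(r"\b[A-Z]{1,2}\d{6,8}\b")
KNOWN_NAMES = {"jane doe", "john smith"}  # stand-in for a real name detector

def is_risky(record: str, window: int = 80) -> bool:
    """Return True when a passport-like value appears close to a known name."""
    text = record.lower()
    for match in PASSPORT.finditer(record):
        start = max(0, match.start() - window)
        context = text[start:match.end() + window]
        if any(name in context for name in KNOWN_NAMES):
            return True
    return False

print(is_risky("Passenger: Jane Doe, passport X1234567"))       # True
print(is_risky("Reference code X1234567 with no name nearby"))  # False
```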

Data unique to the industry or specific business may also be missed if proper classifiers don’t detect it. The ability to tailor classifiers for these proprietary data types is very important (but often absent in commercial DLP offerings!).

Because of these limitations, many businesses see a gap between traditional DLP solutions' discovery and classification patterns and the realities of a multi-cloud and/or hybrid data estate.

Existing DLP solutions ultimately can’t comprehend what’s going on within a cloud environment because they don’t understand the following pieces of information:

  • Where sensitive data exists, whether within structured or unstructured data. 
  • Who uses it and how they use it in an everyday business context. 
  • Which data is likely sensitive because of its origins, neighboring data, or other unique characteristics.

Without this information, the DLP technology will likely flag non-risky actions as suspicious (e.g., blocking services in IaaS/PaaS environments) and overlook legitimate threats (e.g., exfiltration of unstructured sensitive data). 

Improve Data Security with Sentra’s Contextual Data Classification

Adding contextual data classification to your DLP can provide this much-needed context. Sentra’s DSPM solution offers data classification functionality that can work alongside or feed your existing DLP technology. We leverage LLM-based algorithms to accurately understand the context of where and how data is used, then detect when any sensitive data is misplaced or misused based on this information. Applicable sensitivity tags can be sent via API directly to the DLP solution for actioning. 

When you integrate Sentra into your existing DLP solution, our classification engine will tag and label files, and then add this rich, contextual information as metadata.

 

Here are some examples of how our technology complements and extends the abilities of DLP solutions:

  1. Sentra can discover nuanced proprietary, sensitive data and detect new identifiers such as “transaction ID” or “intellectual property.” 
  2. Sentra can use exact data matching to detect whether data was partially copied from production and flag it as sensitive (a sketch of this general technique follows the list).
  3. Sentra can detect when a given file likely contains business context because of its owner, location, etc. For example, a file taken from the CEO’s Google Drive or from a customer’s data lake can be assumed to be sensitive.  
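
For the exact data matching mentioned in item 2, a minimal sketch of the general technique looks like this: fingerprint known production values, then check whether any fingerprint reappears in a scanned file. The salt, hashing choice, and tokenization here are assumptions for illustration, not Sentra's implementation.

```python
import hashlib

# Hypothetical sketch of exact data matching (EDM): fingerprint known production
# values, then check other files for any reuse of those exact values.
SALT = b"example-salt"  # assumption; a real deployment would manage secrets properly

def fingerprint(value: str) -> str:
    return hashlib.sha256(SALT + value.strip().lower().encode()).hexdigest()

def build_index(production_values: list[str]) -> set[str]:
    return {fingerprint(v) for v in production_values}

def find_copied_values(file_tokens: list[str], index: set[str]) -> list[str]:
    """Return tokens from a scanned file whose fingerprints match production data."""
    return [t for t in file_tokens if fingerprint(t) in index]

index = build_index(["4111111111111111", "jane.doe@example.com"])
print(find_copied_values(["hello", "jane.doe@example.com"], index))
```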

In addition, we offer a simple, agentless deployment and prioritize the security of your data by keeping it all within your environment during scanning.

Watch a one-minute video to learn more about how Sentra discovers and classifies nuanced, sensitive data in a cloud environment.

<blogcta-big>

Read More
Ron Reiter
Ron Reiter
June 26, 2024
3
Min Read
Data Security

AI & Data Privacy: Challenges and Tips for Security Leaders

AI & Data Privacy: Challenges and Tips for Security Leaders

Balancing Trust and Unpredictability in AI

AI systems represent a transformative advancement in technology, promising innovative progress across various industries. Yet, their inherent unpredictability introduces significant concerns, particularly regarding data security and privacy. Developers face substantial challenges in ensuring the integrity and reliability of AI models amidst this unpredictability.

This uncertainty complicates matters for buyers, who rely on trust when investing in AI products. Establishing and maintaining trust in AI necessitates rigorous testing, continuous monitoring, and transparent communication regarding potential risks and limitations. Developers must implement robust safeguards, while buyers benefit from being informed about these measures to mitigate risks effectively.

AI and Data Privacy

Data privacy is a critical component of AI security. As AI systems often rely on vast amounts of personal data to function effectively, ensuring the privacy and security of this data is paramount. Breaches of data privacy can lead to severe consequences, including identity theft, financial loss, and erosion of trust in AI technologies. Developers must implement stringent data protection measures, such as encryption, anonymization, and secure data storage, to safeguard user information.

The Role of Data Privacy Regulations in AI Development

Data privacy regulations are playing an increasingly significant role in the development and deployment of AI technologies. As AI continues to advance globally, regulatory frameworks are being established to ensure the ethical and responsible use of these powerful tools.

  • Europe:

The European Parliament has approved the AI Act, a comprehensive regulatory framework designed to govern AI technologies. This Act is set to be completed by June and will become fully applicable 24 months after its entry into force, with some provisions becoming effective even sooner. The AI Act aims to balance innovation with stringent safeguards to protect privacy and prevent misuse of AI.

  • California:

In the United States, California is at the forefront of AI regulation. A bill concerning AI and its training processes has progressed through legislative stages, having been read for the second time and now ordered for a third reading. This bill represents a proactive approach to regulating AI within the state, reflecting California's leadership in technology and data privacy.

  • Self-Regulation:

In addition to government-led initiatives, there are self-regulation frameworks available for companies that wish to proactively manage their AI operations. The National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) and the ISO/IEC 42001 standard provide guidelines for developing trustworthy AI systems. Companies that adopt these standards not only enhance their operational integrity but also position themselves to better align with future regulatory requirements.

  • NIST Model for a Trustworthy AI System:

The NIST model outlines key principles for developing AI systems that are ethical, accountable, and transparent. This framework emphasizes the importance of ensuring that AI technologies are reliable, secure, and unbiased. By adhering to these guidelines, organizations can build AI systems that earn public trust and comply with emerging regulatory standards.

Understanding and adhering to these regulations and frameworks is crucial for any organization involved in AI development. Not only do they help in safeguarding privacy and promoting ethical practices, but they also prepare organizations to navigate the evolving landscape of AI governance effectively.

How to Build Secure AI Products

Ensuring the integrity of AI products is crucial for protecting users from potential harm caused by errors, biases, or unintended consequences of AI decisions. Safe AI products foster trust among users, which is essential for the widespread adoption and positive impact of AI technologies. These technologies have an increasing effect on various aspects of our lives, from healthcare and finance to transportation and personal devices, making it such a critical topic to focus on. 

How can developers build secure AI products?

  1. Remove sensitive data from training data (pre-training): Addressing this task is challenging due to the vast amounts of data involved in AI training and the lack of automated methods to detect all types of sensitive data (a minimal sketch of this step follows the list).
  2. Test the model for privacy compliance (pre-production): Like any software, AI models undergo both manual and automated tests before production. But how can teams guarantee that sensitive data isn’t exposed during testing? Developers must explore innovative approaches to automate this process and ensure continuous monitoring of privacy compliance throughout the development lifecycle.
  3. Implement proactive monitoring in production: Even with thorough pre-production testing, no model can guarantee complete immunity from privacy violations in real-world scenarios. Continuous monitoring during production is essential to promptly detect and address any unexpected privacy breaches. Leveraging advanced anomaly detection techniques and real-time monitoring systems can help developers identify and mitigate potential risks promptly.
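
For step 1, a minimal, hypothetical redaction pass over training records might look like the sketch below. The patterns and placeholder tokens are assumptions; a real pipeline would rely on far richer detection than a handful of regexes.

```python
import re

# Hypothetical sketch: redact obvious PII from training records before training.
# Patterns and placeholder tokens are assumptions for illustration only.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD>"),
]

def scrub(record: str) -> str:
    """Replace matched sensitive values with placeholder tokens."""
    for pattern, token in REDACTIONS:
        record = pattern.sub(token, record)
    return record

def scrub_dataset(records: list[str]) -> list[str]:
    return [scrub(r) for r in records]

print(scrub_dataset(["Ticket from jane@example.com re: card 4111 1111 1111 1111"]))
```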

Secure LLMs Across the Entire Development Pipeline With Sentra

Gain Comprehensive Visibility and Secure Training Data (Sentra’s DSPM)

  • Automatically discover and classify sensitive information within your training datasets.
  • Protect against unauthorized access with robust security measures.
  • Continuously monitor your security posture to identify and remediate vulnerabilities.

Monitor Models in Real Time (Sentra’s DDR)

  • Detect potential leaks of sensitive data by continuously monitoring model activity logs.
  • Proactively identify threats such as data poisoning and model theft.
  • Seamlessly integrate with your existing CI/CD and production systems for effortless deployment.

Finally, Sentra helps you effortlessly comply with industry regulations like NIST AI RMF and ISO/IEC 42001, preparing you for future governance requirements. This comprehensive approach minimizes risks and empowers developers to confidently state:

"This model was thoroughly tested for privacy safety using Sentra," fostering trust in your AI initiatives.

As AI continues to redefine industries, prioritizing data privacy is essential for responsible AI development. Implementing stringent data protection measures, adhering to evolving regulatory frameworks, and maintaining proactive monitoring throughout the AI lifecycle are crucial.
 

By prioritizing strong privacy measures from the start, developers not only build trust in AI technologies but also maintain ethical standards essential for long-term use and societal approval.

<blogcta-big>

Read More
Meni Besso
Meni Besso
June 18, 2024
4
Min Read
Compliance

Understanding the FTC Data Breach Reporting Requirements

Understanding the FTC Data Breach Reporting Requirements

More Companies Need to Report Data Breaches

In a significant move towards enhancing data security and transparency, new data breach reporting rules have taken effect for various financial institutions. Since May 13, 2024, non-banking financial institutions, including mortgage brokers, payday lenders, and tax preparation firms, must report data breaches to the Federal Trade Commission (FTC) within 30 days of discovery. This new mandate, part of the FTC's Safeguards Rule, expands the breach notification requirements to a broader range of financial entities not overseen by the Securities and Exchange Commission (SEC). 

Furthermore, by June 15, 2024, smaller reporting companies—those with a public float under $250 million or annual revenues under $100 million—must comply with the SEC’s new cybersecurity incident reporting rules, aligning their disclosure obligations with those of larger corporations. These changes mark a significant step towards enhancing transparency and accountability in data breach reporting across the financial sector.

How Can Financial Institutions Secure Their Data?

Understanding and tracking your sensitive data is fundamental to robust data security practices. The first step in safeguarding data is detecting and classifying what you have. It's far easier to protect data when you know it exists. This allows for appropriate measures such as encryption, controlling access, and monitoring for unauthorized use. By identifying and mapping your data, you can ensure that sensitive information is adequately protected and compliance requirements are met.

Identify Sensitive Data: Data is constantly moving, which makes it a challenge to know exactly what data you have and where it resides. This includes customer information, financial records, intellectual property, and any other data deemed sensitive. An automated data classification tool is a crucial first step, and its coverage should extend to ‘shadow’ data that may not be well known or well managed.

Data Mapping: Create and maintain an up-to-date map of your data landscape. This map should show where data is stored, processed, and transmitted, and who has access to it. It helps in quickly identifying which systems and data were affected by a breach and the impact blast radius (how extensive the damage is).
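
As a concrete illustration of the mapping idea, the sketch below keeps a minimal inventory record per data store and answers the first investigation question: which sensitive stores can a compromised identity reach? The field names are assumptions for illustration, not a real catalog schema.

```python
# Hypothetical sketch: a minimal data map used to answer "what did this breach touch?".
# Field names are assumptions for illustration, not a real catalog schema.
data_map = [
    {"store": "billing-db", "location": "aws:us-east-1", "classifications": ["PII", "PCI"],
     "accessed_by": ["billing-svc", "finance-team"]},
    {"store": "marketing-bucket", "location": "aws:eu-west-1", "classifications": ["PII"],
     "accessed_by": ["marketing-app"]},
]

def blast_radius(compromised_identity: str, data_map: list[dict]) -> list[dict]:
    """Return the data stores a compromised identity could reach, with their classifications."""
    return [entry for entry in data_map if compromised_identity in entry["accessed_by"]]

for entry in blast_radius("billing-svc", data_map):
    print(entry["store"], entry["classifications"], entry["location"])
```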

"Your Data Has Been Breached, Now What?"

When a data breach occurs, the immediate response is critical in mitigating damage and addressing the aftermath effectively. The investigation phase is particularly crucial as it determines the extent of the breach, the type and sensitivity of the data compromised, and the potential impact on the organization.

A key challenge during the investigation phase is understanding where the sensitive data was located at the time of the data breach and why or how existing controls were insufficient. 

Without a proper data classification process or solution in place, it is difficult to ascertain the exact locations of the sensitive data or the applicable security posture at the time of the breach within the short timeframe required by the SEC and FTC reporting rules. 

Here's a breakdown of the essential steps and considerations during the investigation phase:

1. Develop Appropriate Posture Policies and Enforce Adherence:

Establish policies that alert on and help enforce appropriate security posture and access controls. These can be out-of-the-box policies that map to common compliance frameworks, or custom policies for unique business or privacy requirements. Monitor for policy violations and initiate appropriate remediation actions (which can include ticket issuance, escalation notification, and automated access revocation or de-identification).
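
A minimal, hypothetical sketch of such a posture policy is shown below: it flags sensitive stores that are unencrypted or publicly reachable and hands each violation to a remediation hook (a print statement standing in for ticketing or access revocation). The store fields are assumptions for illustration.

```python
# Hypothetical sketch of a posture policy: sensitive stores must be encrypted
# and must not be publicly accessible. Fields and the remediation hook are
# assumptions standing in for real ticketing or access-revocation integrations.
stores = [
    {"name": "customers-db", "sensitive": True, "encrypted": True,  "public": False},
    {"name": "exports-bucket", "sensitive": True, "encrypted": False, "public": True},
]

def evaluate_policy(store: dict) -> list[str]:
    violations = []
    if store["sensitive"] and not store["encrypted"]:
        violations.append("sensitive data stored without encryption")
    if store["sensitive"] and store["public"]:
        violations.append("sensitive data publicly accessible")
    return violations

def remediate(store_name: str, violation: str) -> None:
    # Stand-in for opening a ticket, notifying an owner, or revoking access.
    print(f"[ALERT] {store_name}: {violation}")

for store in stores:
    for violation in evaluate_policy(store):
        remediate(store["name"], violation)
```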

2. Conduct the Investigation: Determine Data Breach Source:

Identify how the breach occurred. This could involve phishing attacks, malware, insider threats, or vulnerabilities in your systems.

According to the FTC, it is critical to clearly describe what you know about the compromise. This includes:

  • How it happened
  • What information was taken
  • How the thieves have used the information (if you know)
  • What actions you have taken to remedy the situation
  • What actions you are taking to protect individuals, such as offering free credit monitoring services
  • How to reach the relevant contacts in your organization

Create a Comprehensive Plan: Additionally, develop a communication plan that reaches all affected audiences, such as employees, customers, investors, business partners, and other stakeholders.

Affected and Duplicated Data: Ascertain which data sets were accessed, altered, or exfiltrated. This involves checking logs, access records, and utilizing forensic tools. Assess if sensitive data has been duplicated or moved to unauthorized locations. This can compound the risk and potential damage if not addressed promptly.

How Sentra Helps Automate Compliance and Incident Response

Sentra’s Data Security Posture Management solution provides organizations with full visibility into their data’s locations (including shadow data) and an up-to-date data catalog with classification of sensitive data. Sentra provides this without any complex deployment or operational work; this is achieved through a cloud-native, agentless architecture that uses cloud provider APIs and mechanisms.

Below you can see the different data stores on the Sentra dashboard.

Sentra Dashboard data stores

Sentra Makes Data Access Governance (DAG) Easy

Sentra helps you understand which users have access to what data and enriches metadata catalogs for comprehensive data governance. Accurate classification of cloud data provides advanced classification labels, including business context regarding the purpose of the data, and automatic discovery, enabling organizations to gain deeper insights into their data landscape. This enhances data governance while also providing a solid foundation for informed decision-making.

Sentra's detection capabilities can pinpoint over-permissioned access to sensitive data, prompting organizations to swiftly rein it in. This proactive measure not only mitigates the risk of potential breaches but also elevates the overall security posture of the organization by helping to institute least-privilege access.

Below you can see an example of a user’s access and privileges to sensitive data.

An example of a user’s access and privileges to which sensitive data

Breach Reporting With Sentra

Having a proper classification solution helps you understand what kind of data you have at all times.

With Sentra, it's easier to pull the information needed for an accurate report: whether sensitive data was present at the time of the breach, what kind of data it was, and who or what had access to it.

Example of Sentra's Data Breach Report

To learn more about how you can gain full coverage and an up-to-date data catalog with classification of sensitive data, schedule a live demo with our experts.

<blogcta-big>

 

Read More
Meni Besso
Meni Besso
June 10, 2024
3
Min Read
Compliance

Key Practices for Responding to Compliance Framework Updates

Key Practices for Responding to Compliance Framework Updates

Most privacy, IT, and security teams know the pain of keeping up with ever-changing data compliance regulations. Because data security and privacy-related regulations change rapidly over time, it can often feel like a game of “whack-a-mole” for organizations to keep up. Plus, in order to adhere to compliance regulations, organizations must know which data is sensitive and where it resides. This can be difficult, as data in the typical enterprise is spread across multiple cloud environments, on-premises stores, SaaS applications, and more. Not to mention that this data is constantly changing and moving.

While meeting a long list of constantly evolving data compliance regulations can seem daunting, there are effective ways to set a foundation for success. By starting with data security and hygiene best practices, your business can better meet existing compliance requirements and prepare for any future changes.

Recent Updates to Common Data Compliance Frameworks 

The average organization comes into contact with several voluntary and mandatory compliance frameworks related to security and privacy. Here’s an overview of the most common ones and how they have changed in the past few years:

Payment Card Industry Data Security Standard (PCI DSS)

What it is: PCI DSS is a set of over 500 requirements for strengthening security controls around payment cardholder data. 

Recent changes to this framework: In March 2022, the PCI Security Standards Council announced PCI DSS version 4.0. It officially went into effect in Q1 2024. This newest version has notably stricter standards for defining which accounts can access environments containing cardholder data and authenticating these users with multi-factor authentication and stronger passwords. This update means organizations must know where their sensitive data resides and who can access it.  

U.S. Securities and Exchange Commission (SEC) 4-Day Disclosure Requirement

What it is: The SEC’s 4-day disclosure requirement is a rule that requires more established SEC registrants to disclose a material cybersecurity incident within four business days of determining that it is material.

Recent changes to this framework: This disclosure rule took effect in December 2023. Several Fortune 500 organizations have since disclosed cybersecurity incidents, including a description of the nature, scope, and timing of the incident. Additionally, the SEC requires that the affected organization disclose which assets were impacted by the incident. This new requirement significantly increases the implications of a cyber event, as organizations risk more reputational damage and customer churn when an incident happens.

In addition, the SEC will require smaller reporting companies to comply with these breach disclosure rules in June 2024. In other words, these smaller companies will need to adhere to the same breach disclosure protocols as their larger counterparts.

Health Insurance Portability and Accountability Act (HIPAA)

What it is: HIPAA is a set of safeguards that protect patient information through stringent disclosure and privacy standards.

Recent changes to this framework: Updated HIPAA guidelines have been released recently, including voluntary cybersecurity performance goals created by the U.S. Department of Health and Human Services (HHS). These recommendations focus on data security best practices such as strengthening access controls, implementing incident planning and preparedness, using strong encryption, conducting asset inventory, and more. Meeting these recommendations strengthens an organization’s ability to adhere to HIPAA, specifically protecting electronic protected health information (ePHI).

General Data Protection Regulation (GDPR) and EU-US Data Privacy Framework

What it is: GDPR is a robust data privacy framework in the European Union. The EU-US Data Privacy Framework (DPF) adds a mechanism that enables participating organizations to meet the EU requirements for transferring personal data to third countries.

Recent changes to this framework: The GDPR continues to evolve as new data privacy challenges arise. Recent changes include the EU-U.S. Data Privacy framework, enacted in July 2023. This new framework requires that participating organizations significantly limit how they use personal data and inform individuals about their data processing procedures. These new requirements mean organizations must understand where and how they use EU user data.

National Institute of Standards and Technology (NIST) Cybersecurity Framework

What it is: The NIST Cybersecurity Framework is a voluntary guideline that provides recommendations to organizations for managing cybersecurity risk. However, companies that do business with or are part of the U.S. government, including agencies and contractors, are required to comply with NIST.

Recent changes to this framework: NIST recently released its 2.0 version. Changes include a new core function, “govern,” which brings in more leadership oversight. It also highlights supply chain security and executing more impactful cyber incident responses. Teams must focus on gaining complete visibility into their data so leaders can fully understand and manage risk.    

ISO/IEC 27001:2022

What it is: ISO/IEC 27001 is a certification that requires businesses to achieve a level of information security standards. 

Recent changes to this framework: ISO 27001 was revised in 2022. While this revision consolidated many of the controls listed in the previous version, it also added 11 brand-new ones, such as data leakage protection, monitoring activities, data masking, and configuration management. Again, these additions highlight the importance of understanding where and how data gets used so businesses can better protect it.

California Consumer Privacy Act (CCPA)

What it is: CCPA is a set of mandatory regulations for protecting the data privacy of California residents.

Recent changes to this framework: The CCPA was amended in 2023 with the California Privacy Rights Act (CPRA). This new edition includes new data rights, such as consumers’ rights to correct inaccurate personal information and limit the use of their personal information. As a result, businesses must have a stronger grasp on how their CA users’ data is stored and used across the organization.

2024 FTC Mandates

What it is: The Federal Trade Commission (FTC)’s new mandates require some businesses to disclose data breaches to the FTC as soon as possible — no later than 30 days after the breach is discovered. 

Recent changes to this framework: The first of these new data breach reporting rules is the Standards for Safeguarding Customer Information (Safeguards Rule), which took effect in May 2024. The Safeguards Rule puts disclosure requirements on non-banking financial institutions and financial institutions that aren’t required to register with the SEC (e.g., mortgage brokers, payday lenders, and vehicle dealers). 

Key Data Practices for Meeting Compliance

These frameworks are just a portion of the ever-changing compliance and regulatory requirements that businesses must meet today. Ultimately, it all goes back to strong data security and hygiene: knowing where your data resides, who has access to it, and which controls are protecting it. 

To gain visibility into all of these areas, businesses must operationalize the following actions throughout their entire data estate:

  • Discover data in both known and unknown (shadow) data stores.
  • Accurately classify and organize discovered data so the most sensitive assets can be adequately protected.
  • Monitor and track access keys and user identities to enforce least privilege access and to limit third-party vendor access to sensitive data.
  • Detect and alert on risky data movement and suspect activity to gain early warning into potential breaches.

Sentra enables organizations to meet data compliance requirements with data security posture management (DSPM) and data access governance (DAG) that travel with your data. We help organizations gain a clear view of all sensitive data, identify compliance gaps for fast resolution, and easily provide evidence of regulatory controls in framework-specific reports. 

Find out how Sentra can help your business achieve data and privacy compliance requirements.

If you want to learn more, request a demo with our data security experts.

Read More
David Stuart
David Stuart
May 28, 2024
3
Min Read
Data Security

Retail Data Breaches: How to Secure Customer Data With DSPM

Retail Data Breaches: How to Secure Customer Data With DSPM

In 2023, the average cost of a retail data breach reached $2.96 million, with the retail sector representing 6% of global data breaches, a rise from 5% in the prior year. 

Consequently, retail now ranks as the 8th most frequently targeted industry in cyber attacks, climbing from 10th place in 2022. According to the Sophos State of Ransomware in Retail report, ransomware affected 69% of retail enterprises in 2023. Nearly 75% of these ransomware incidents led to data encryption, marking an increase from 68% and 54% in the preceding two years. Yet, these breaches aren't merely a concern for retailers alone; they pose a severe threat to customer confidence at large. 


It is crucial for retailers to focus on data security since the retail sector serves such a large community (and is therefore a huge target for fraud, account compromise, and similar abuse). Retailers, increasingly conducting business online, are subject to evolving privacy and credit card regulations designed to protect consumers. One compromise or breach event can prove disastrous to the customer trust that retailers may have built over years.

With the evolving cyber threats, the proliferation of cloud computing, and the persistent risk of human error, retailers confront a multifaceted security landscape. Retailers should take proactive measures, and gain a deeper understanding of the potential risks in order to properly harden their defenses.


The year 2024 had just begun when VF Corporation, a global apparel and footwear giant, experienced a significant breach. This incident served as a stark reminder of the far-reaching consequences of ransomware attacks in the retail industry. Approximately 35 million individuals, including employees, customers, and vendors, were affected. Personal information such as names, addresses, and Social Security numbers fell into the hands of malicious actors, emphasizing the urgent need for retailers to secure sensitive data.

How to Secure Customer Data

Automatically Discover, Classify and Secure All Customer Data

Automatically discovering, classifying, and securing all customer data is essential for businesses today. Sentra offers a comprehensive retail data security solution, uncovering sensitive customer data such as personally identifiable information (PII), cardholder data, payment account information, and order details across both known and unknown cloud data stores. 

With Sentra's Data Security Posture Management (DSPM) solution, no sensitive data is left undiscovered; the platform provides extensive coverage of data assets, custom data classes, and detailed cataloging of tables and objects. This not only ensures compliance but also supports data-driven decision-making through safe collaboration and data sharing. As a cloud-native solution, Sentra offers full coverage across major platforms like AWS, Azure, Snowflake, GCP, and Office 365, as well as on-premise file shares and databases. Your cloud data remains within your environment, ensuring you retain control of your sensitive data at all times.

Comply with Data Security and Privacy Regulations

Ensuring compliance with data security and privacy regulations is paramount in today's business landscape. With Sentra’s DSPM solution, you can streamline the process of preparing for security audits concerning customer and credit card/account data. Sentra’s platform efficiently identifies compliance discrepancies, enabling swift and proactive remediation measures.

You can also simplify the translation of requirements from various regulatory frameworks such as PCI-DSS, GDPR, CCPA, DPDPA, among others, using straightforward rules and policies. For instance, you'll receive notifications if regulated data is transferred between regions or to an insecure environment. 
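As a rough illustration of the kind of rule such a policy encodes (this is a generic sketch, not Sentra's policy engine or syntax; the framework names, regions, and asset fields are assumptions), a data-residency check can be expressed as a comparison between where regulated data was found and where it is allowed to live:

```python
# Minimal sketch of a data-residency rule over a simple, hypothetical asset inventory.
ALLOWED_REGIONS = {
    "GDPR": {"eu-west-1", "eu-central-1"},   # EU personal data must stay in EU regions
    "PCI-DSS": {"us-east-1"},                # cardholder data confined to the PCI-scoped region
}

assets = [
    {"name": "customers_export.csv", "data_classes": ["GDPR"], "region": "us-east-1"},
    {"name": "orders.parquet", "data_classes": ["PCI-DSS"], "region": "us-east-1"},
]

def residency_violations(assets):
    """Yield (asset, framework, region) triples where regulated data sits outside its allowed regions."""
    for asset in assets:
        for framework in asset["data_classes"]:
            allowed = ALLOWED_REGIONS.get(framework)
            if allowed and asset["region"] not in allowed:
                yield asset["name"], framework, asset["region"]

for name, framework, region in residency_violations(assets):
    print(f"ALERT: {framework} data in '{name}' found in disallowed region {region}")
```

In practice the inventory and the allowed-region mapping would come from continuous discovery and the applicable regulatory framework rather than hard-coded dictionaries.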

Sentra Dashboard Issues showing top compliance frameworks

Furthermore, our system detects specific policy violations, such as PCI-DSS violations where classified information, including credit card and bank account numbers, is publicly accessible or located outside of a PCI-compliant environment. Finally, we generate comprehensive compliance reports containing all necessary evidence, including sensitive data categories, regulatory measures, security posture, and the status of relevant regulatory standards.

Mitigate Supply Chain Risks and Emerging Threats

Addressing supply chain risks and emerging threats is critical for safeguarding your organization. Sentra leverages real-time threat monitoring through Data Detection and Response (DDR) to prevent fraud, data exfiltration, and breaches, thereby reducing downtime and keeping sensitive customer data secure.

Sentra dashboard example of sensitive data accessed from suspicious IP address

Sentra’s DSPM solution offers automated detection capabilities to alert you when third parties gain access to sensitive account and customer data, empowering you to take immediate action. By implementing least privilege access based on necessity, we help minimize supply chain risks, ensuring that only authorized individuals can access sensitive information. 

Additionally, Sentra’s DSPM enables you to enforce security posture and retention policies, thereby mitigating the risks associated with abandoned data. You'll receive instant alerts regarding suspicious data movements or accesses, such as those from unknown IP addresses, enabling you to promptly investigate and respond. In the event of a breach, our solution facilitates swift evaluation of its impact and enables you to initiate remedial actions promptly, thereby limiting potential damage to your organization.
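To make the "unknown IP address" alert concrete, here is a minimal sketch of that kind of check (illustrative only, not Sentra's detection logic; the network ranges and event fields are assumptions) that flags access events originating outside a set of known corporate and vendor networks:

```python
import ipaddress

# Hypothetical allowlist of known corporate/vendor networks and sample access events.
KNOWN_NETWORKS = [ipaddress.ip_network(cidr) for cidr in ("10.0.0.0/8", "203.0.113.0/24")]

access_events = [
    {"principal": "vendor-svc", "source_ip": "203.0.113.17", "object": "s3://orders/export.csv"},
    {"principal": "unknown", "source_ip": "198.51.100.9", "object": "s3://orders/export.csv"},
]

def is_known(ip: str) -> bool:
    """Return True if the source IP falls inside any known network range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in KNOWN_NETWORKS)

for event in access_events:
    if not is_known(event["source_ip"]):
        # A real system would raise an alert or open an incident rather than print.
        print(f"ALERT: sensitive object {event['object']} accessed from unknown IP {event['source_ip']}")
```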


Read More
David Stuart
David Stuart
May 6, 2024
3
Min Read
Data Security

Securing Your Microsoft 365 Environment with Sentra

Securing Your Microsoft 365 Environment with Sentra

Picture this scenario: a senior employee at your organization has access to a restricted folder in SharePoint that contains sensitive data. Another employee needs access to a specific document in the folder and asks the senior employee for help. To save time, the senior employee simply copies the entire document and drops it into a folder with less stringent access controls so the other employee can easily access it. Because of this action taken by the senior employee, which only took seconds to complete, there’s now a copy of sensitive data — outside a secure folder and unknown to the data security team. 

The Sentra team hears repeatedly that Microsoft 365 services, like SharePoint, are a pressing concern for data security teams because this type of data proliferation is so common. While Microsoft services like OneDrive, SharePoint, Office Online, and Teams drive productivity and collaboration, they also pose a unique challenge for data security teams: identifying and securing the constantly changing data landscape without inhibiting collaboration or slowing down innovation. 

Today’s hybrid environments — including Microsoft 365 services — present many new security challenges. Teams must deal with vast and dynamic data within SharePoint, coupled with explosive cloud growth and data movement between environments (cloud to on prem or vice versa). They must also find ways to discover and secure the unstructured sensitive data stored within Microsoft 365 services.

Legacy, connector- and agent-based solutions can’t fit the bill — they face performance and scaling constraints and are an administrative nightmare for teams trying to keep pace. Instead, teams need a data security solution that can automatically comprehend unstructured data in several formats and is more responsive and reliable than legacy tools. 

A cloud-native approach is one viable, scalable solution to address the multitude of security challenges that complex, modern environments create. It provides versatile, agile protection for the multi-cloud, hybrid, SaaS (i.e., Microsoft), and on-prem environments that comprise a business’s operations. 

The Challenge of Protecting Your Microsoft 365 Environment

When employees use Microsoft 365, they can copy, move, or delete data instantly, making it challenging to keep track of where sensitive data resides and who has access to it. For instance, sensitive data can easily be stored improperly or left behind in a OneDrive after an employee leaves an organization. This is commonplace when using Teams and/or SharePoint for document collaboration. This misplaced sensitive data can become ammunition for an insider threat, such as a disgruntled employee who wants to damage the company.

Assets contain plain text credit card numbers

Defending your Microsoft 365 environment against these risks can be difficult because Microsoft 365 stores data, such as Teams messages or OneDrive documents, in a free-form layout. It’s far more challenging to classify this unstructured data than it is to classify structured data because it doesn’t follow a clear schema and formatting protocol. For instance, in a structured database, sensitive information like names and birthdates would be stored in neighboring columns labeled “names” and “birthdates.” However, in an unstructured data environment like Microsoft 365, someone might share their birthdate or other PII in a quick Teams message to an HR staff member, which is then stored in SharePoint behind the scenes. 

In addition, unstructured data lacks context. Some data is only considered sensitive under certain conditions. For example, 9-digit passport numbers alone wouldn’t pose a significant risk if exposed, while a combination of passport numbers and the identity of the passport holders would. Structured databases make it easy to see these relationships, as they likely contain column titles (e.g., “passport number,” “passport holder name”) or other clear schemas. Unstructured file repositories, on the other hand, might have all of this information buried in documents with a free-form block of text, making it especially difficult for teams to understand the context of each data asset fully.
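As a toy illustration of why context matters in unstructured data (and not a description of Sentra's classifiers; the patterns and window size below are assumptions), a simple detector can treat a 9-digit number as sensitive only when it appears near passport-related wording and a name-like token:

```python
import re

# Purely illustrative proximity rule: a 9-digit number alone is low risk,
# but one appearing near the word "passport" and a personal name is treated as PII.
PASSPORT_NUM = re.compile(r"\b\d{9}\b")
NAME_HINT = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")

def looks_like_passport_pii(text: str, window: int = 80) -> bool:
    """Flag a 9-digit number only when passport wording and a name appear nearby."""
    for match in PASSPORT_NUM.finditer(text):
        nearby = text[max(0, match.start() - window): match.end() + window]
        if "passport" in nearby.lower() and NAME_HINT.search(nearby):
            return True
    return False

print(looks_like_passport_pii("Order id 123456789 confirmed."))                     # False
print(looks_like_passport_pii("Passport 123456789 issued to Jane Doe, exp 2031."))  # True
```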

Protection Measures to Address Microsoft 365 Data Risks

Today’s businesses must get ahead of these challenges by instituting best practices such as least privilege access, or else face consequences such as violating compliance regulations or putting sensitive data at risk of exposure.

Since sensitive data is far more nuanced and complex to discern in Microsoft 365, businesses need a cloud-native solution that identifies the subtle signs associated with sensitive data in unstructured cloud environments and takes appropriate action to protect it. 

Sentra’s Integration with Microsoft 365

Sentra’s data security posture management (DSPM) platform enables secure collaboration and file sharing across services such as SharePoint, OneDrive, Teams, OneNote, and Office Online.

Its new integration with Microsoft 365 offers unmatched discovery and classification capabilities for security, data owner, and risk management teams to secure data — not stopping activity but allowing it to happen securely. Here are a few of the features we offer teams using Microsoft 365:

Advanced ML/AI analysis for accurate data discovery.

Sentra’s data security platform can autonomously discover data across your entire environment, including shadow data (i.e., misplaced, abandoned, or unknown data) or migrated data (data that may have sprawled to a lesser protected environment). It can then accurately rank data sensitivity levels by conducting in-depth analysis based on nuanced contextual information such as metadata, location, neighboring assets, and file path.

Sensitive data that is stored on-premise was found in a cloud environment

This contextual approach differs from traditional security methods, which rely on very prescriptive data formats and overlook unstructured data that doesn’t fit into these formats. Sentra’s high level of accuracy minimizes the number of false positives, requiring less hands-on validation from your team.

Use case scenario: An employee has set up their company OneDrive account to be directly accessible through their personal computer’s central file system. While working on personal tasks on their computer, this employee accidentally saves their child’s medical paperwork inside the company OneDrive rather than a personal file. To prevent this situation, Sentra can discover and notify the appropriate users if PII is residing in a OneDrive business account and violating company policy.
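A heavily simplified sketch of contextual ranking follows (the signals, weights, and field names are assumptions for illustration, not Sentra's model); it shows how metadata such as file path, location, and neighboring assets can be weighted alongside content matches to prioritize review:

```python
# Illustrative-only scoring of an asset's sensitivity from contextual signals.
def sensitivity_score(asset: dict) -> int:
    score = 0
    if asset.get("content_matches"):              # e.g., detected PII or card numbers
        score += 3
    if "/hr/" in asset.get("path", "") or "/finance/" in asset.get("path", ""):
        score += 2                                # sensitive business area in the file path
    if asset.get("neighbors_sensitive"):          # nearby assets already classified sensitive
        score += 2
    if asset.get("location") == "public_share":   # exposure raises priority
        score += 1
    return score

doc = {"path": "/sites/hr/onboarding/passport_scans.pdf",
       "content_matches": ["passport_number"],
       "neighbors_sensitive": True,
       "location": "public_share"}
print(sensitivity_score(doc))  # 8 -> high priority for review
```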

Precise data classification to support remediation. 

After discovering sensitive data, Sentra classifies the data using data context classes. This granular classification level provides rich usage context and enables teams to perform better risk prioritization, sensitivity analysis, and control actioning. Its data context classes can identify very specific types of data: configuration, log, tabular, image, etc. By labeling their resources with this level of precision and context, businesses can better understand usage and determine which files are more likely to contain sensitive information and which are not.

In addition, Sentra consolidates classified data security findings from across your entire data estate into a single platform. This includes insights from multiple cloud environments, SaaS platforms, and on-premises data stores. Sentra offers a centralized, always-up-to-date data catalog and visualizations of data movement between environments.

Use case scenario: An employee requests access to a SharePoint folder containing a nonsensitive document. A senior employee authorizes access without realizing that sensitive documents are also stored within this folder. To prevent this type of excessive privileged access, Sentra labels sensitive documents, emails, and other Microsoft file formats so your team can enforce access policies and take the correct actions to secure these assets. 

Guardrails to enforce data hygiene across your environment.

Sentra also enforces data hygiene best practices across your Microsoft 365 environment, proactively preventing staff from taking risky actions or going against company policies.

For instance, it can determine excessive access permissions and alert on these violations. Sentra can also monitor sharing permissions to enforce least privilege access on sensitive files.

Use case scenario: During onboarding, a new junior employee is given access permissions across Microsoft 365 services. By default, they now have access to confidential intellectual property stored in SharePoint, even though they’ll never need this information in their daily work. To prevent this type of excessive access control, Sentra can enforce more stringent access controls for sensitive SharePoint folders.
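Conceptually, this guardrail amounts to comparing who actually needs a sensitive folder against who can reach it. The following is a minimal sketch under assumed roles, folders, and grant records (not Sentra's access model or Microsoft 365 APIs):

```python
# Illustrative check for excessive access to sensitive SharePoint folders.
REQUIRED_ROLES = {"sharepoint:/sites/legal/ip-portfolio": {"legal", "executive"}}

grants = [
    {"folder": "sharepoint:/sites/legal/ip-portfolio", "user": "new.hire@corp.com", "role": "engineering"},
    {"folder": "sharepoint:/sites/legal/ip-portfolio", "user": "gc@corp.com", "role": "legal"},
]

def excessive_grants(grants):
    """Yield grants whose role is not in the folder's required-role set."""
    for grant in grants:
        allowed = REQUIRED_ROLES.get(grant["folder"])
        if allowed is not None and grant["role"] not in allowed:
            yield grant

for grant in excessive_grants(grants):
    print(f"Flag: {grant['user']} ({grant['role']}) can access {grant['folder']} outside least privilege")
```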

Automation to accelerate incident response.

Sentra also supports automated incident response with early breach detections. It can identify data similarities to instigate an investigation of potentially risky data proliferation. In addition, it provides real-time alerting when any anomalous activity occurs within the environment and supports incident investigation and breach impact analysis with automated remediation and in-product guidance. Sentra also integrates with data catalogs and other incident response/ITSM tools to quickly alert the proper teams and kick off the right response processes. 

Use case example: An employee who was just laid off feels disgruntled with the company. They decide to go into SharePoint and start a large download of several files containing intellectual property. To protect your data from these types of internal threats, Sentra can immediately detect and alert you to suspicious activities, such as unusually large downloads, within your Microsoft 365 environment.

DSPM, the Key to Securing Microsoft 365

After talking with many customers and prospects facing challenges securing Microsoft 365, the Sentra team has seen the significance of a DSPM platform compatible with services like SharePoint, OneDrive, and Office Online. We prioritize bringing all data, including assets buried in your Microsoft 365 environment, into view so you can better safeguard it without slowing down innovation and collaboration. 

Dive deeper into the world of Data Security Posture Management (DSPM) and discover how it helps organizations secure their entire data estate, including cloud, on-prem, and SaaS data stores (like Microsoft 365).

To learn more about Sentra's DSPM, and how you can secure your entire data estate, visit Sentra's demo page.


Read More
David Stuart
David Stuart
April 30, 2024
4
Min Read
Data Security

How to Meet the Security Challenges of Hybrid Data Environments

How to Meet the Security Challenges of Hybrid Data Environments

It’s an age-old question at this point: should we operate in the cloud or on premises? But for many of today’s businesses, it’s not an either-or question, as the answer is both.

Although cloud has been the ‘latest and greatest’ for the past decade, very few organizations rely on it completely, and that’s probably not going to change anytime soon. According to a survey conducted by Foundry in 2023, 70% of organizations have brought some cloud apps or services back to on premises after migration due to security concerns, budget/cost control, and performance/reliability issues. 

But at the same time, the cloud is still growing in importance within organizations. Gartner projects that public cloud spending will increase by 20.4% in just the next year. With all of this in mind, it’s safe to say that most businesses are leveraging a hybrid approach and will continue to do so for a long time. 

But where does this leave today’s data security professionals, who must simultaneously secure cloud and on prem operations? The key to building a robust data security approach and future-proofing your hybrid organization is to adopt cloud-native data security that serves both areas equally well and, importantly, can match the expected cloud growth demands of the future.

On Prem Data Security Considerations

Because on premises data stores are here to stay for most organizations, teams must consider how they will respond to the unique challenges of on prem data security. Let’s dive into two areas that are unique to on premises data stores and require specific security considerations:

Network-Attached Storage (NAS) and File Servers

File shares, such as SMB (CIFS), NFS and FTP, play an integral role in making on prem data accessible. However, the specific structure and data formats used within file servers can pose challenges for data security professionals, including:

  • Identifying where sensitive data is stored and preventing its sprawl to unknown locations.
  • Nested or inherited permissions structures that could lead to overly permissive access.
  • Ensuring security and compliance across massive amounts of data that change continuously.

On Prem Databases With Structured and Unstructured Data

The variety in on prem databases also brings security challenges. Different databases, such as MSSQL, Oracle, PostgreSQL, MongoDB, and MySQL, use different data structures. Security professionals often struggle to compile structured, unstructured, and semi-structured data from these different sources to monitor their data security posture continuously. ETL operations do the heavy lifting, but this can lead to further obfuscation of the underlying (and often sensitive!) data. Plus, access control is managed separately within each of these databases, making it hard to institute least privilege.

Businesses need to use data security solutions that can scan all of these distinct data store and data types, centralize security administration for these disparate storage areas, and respond to security issues commonly appearing in hybrid environments, such as misconfigurations, weak security, data proliferation, and compliance violations. Legacy on-premises or cloud-only solutions won't cut it in these situations, as they aren't adapted to work with these specific considerations.

Cloud Data Security Considerations

In addition to all these on prem data and storage variations, most organizations also leverage multiple cloud environments. This reality makes managing a holistic view of data security even more complex. A single organization might use several different cloud service providers (AWS, Azure, Google Cloud Platform, etc.), along with a variety of data lakes and data warehouses (e.g., Snowflake). Each of these platforms has a unique architecture and must be managed separately, making it challenging to centralize data security efforts.

Here are a few aspects of cloud environments that data security professionals must consider:

Massive Data Attack Surface

Because it’s so easy to move, change, or modify data in the cloud, data proliferates at an unprecedented speed. This leads to a huge attack surface of unregulated and unmonitored data. Security professionals face a new challenge in the cloud: securing data regardless of where it resides. But this can prove to be difficult when security teams might not even know that a copied or modified version of sensitive data exists in the first place. This organizational data that exists outside the centralized and secured data management framework, known as shadow data, poses a considerable threat to organizations, as they can’t protect what they don’t know.

Business Agility

In addition, security teams must figure out how to secure cloud data without slowing down other teams’ innovation and agility in the cloud. In many cases, teams must copy cloud data to complete their daily tasks. For example, a developer might need to stage a copy of production data for test purposes, or a business intelligence analyst might need to mine a copy of production data for new revenue opportunities. They must learn how to enforce critical policies without gatekeeping sensitive data that teams need to access for the business to succeed. 

Variety in Data Store Types

Cloud infrastructure often includes a variety of data store types as well. This includes cloud computing infrastructure such as IaaS, PaaS, DBaaS, application development components such as repositories and live applications, and, in many cases, several different public cloud providers. Each of these data stores exists in a silo, making it challenging for data security professionals to gain a centralized view of the entire organization’s data security posture. 

Unifying Cloud and On Prem Hybrid Environments With Cloud-Native Data Security

Because of its massive scale, dynamic nature, and service-oriented architecture, cloud infrastructure is more complex to secure than on prem. Generally speaking, anyone with a username and password for a cloud instance can access most of the data inside it by default. In other words, you can’t just secure its boundaries as you would with on premises data. And because new cloud instances are so easy to spin up, there are no assurances that a new cloud asset, which may contain data copies, will have the same protections as the original.

Because of this complexity, legacy tools originally created for on prem environments, such as traditional data loss prevention (DLP), just won’t cut it in cloud environments. Yet cloud-only security offerings, such as those from the cloud service providers themselves, exclude the unique aspects of on premises environments or may be myopic in what they support. Instead, organizations must consider solutions that address both on prem and multi-cloud environments simultaneously. The answer lies in cloud-native data security that supports both.

Because it’s built for the complexity of the cloud but includes support for on prem infrastructure, a cloud-native data security platform can follow your data across your entire hybrid environment and compile complex security posture information into a single location. Sentra approaches this concept in a unique way, enabling teams to see data similarity and movement between on prem and cloud stores. By understanding data movement, organizations can minimize the risks associated with data sprawl, while simultaneously securely enabling the business.

With a unified platform, teams can see a complete picture of their data security posture without needing to jump back and forth between the contexts and differing interfaces of on premises and cloud tools. A centralized platform also enables teams to consistently define and enforce policies for all types of data across all types of environments. In addition, it makes it easier to generate audit-ready reports and feed data into remediation tools from a single integration point.


Sentra’s Cloud-Native Approach to Hybrid Environments

Sentra offers a cloud-native data security posture management (DSPM) solution for monitoring various data types across all environments — from premises to SaaS to public cloud.

This is a major development, as our solution uniquely enables security teams to…

  • Automatically discover all data without agents or connectors, including data within multiple cloud environments, NFS / SMB File Servers, and both SQL/NoSQL on premises databases.
  • Compile information inside a single data catalog that lists sensitive data and its security and compliance posture.
  • Receive alerts for misconfigurations, weak encryptions, compliance violations, and much more.
  • Identify duplicated data between environments, including on prem, cloud, and SaaS, enabling organizations to clean up unused data, control sprawl and reduce risks.
  • Track access to sensitive data stores from a single interface and ensure least privilege access.

Plus, when you use Sentra, your data never leaves your environment - it remains in place, secure and without disruption. We leverage native cloud serverless processing functions (e.g., AWS Lambda) to scan your cloud data. For on premises, we scan all data within your secure networks and only send metadata to the Sentra cloud platform for further reporting and analysis.

Sentra also won’t interrupt your production flow of data, as it works asynchronously in both cloud and on premises environments (it scans on prem by creating temporary copies to scan in the customer cloud environment).
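The in-account, metadata-only pattern described above can be pictured as a small serverless function that reads an object inside the customer's own account and emits only metadata, never the content, for analysis. This is a hedged sketch of that pattern assuming a standard S3 event trigger; it is not Sentra's implementation, and the metadata fields are illustrative:

```python
import hashlib
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Sketch of an in-account scan: read the object locally, forward only metadata."""
    record = event["Records"][0]["s3"]            # assumes the standard S3 event notification shape
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    metadata = {
        "bucket": bucket,
        "key": key,
        "size_bytes": len(body),
        "sha256": hashlib.sha256(body).hexdigest(),   # content stays in-account; only a digest leaves
        "contains_digits": any(ch.isdigit() for ch in body.decode("utf-8", errors="ignore")),
    }
    # In a real pipeline this metadata would be posted to the analysis backend, not printed.
    print(json.dumps(metadata))
    return metadata
```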

Dive deeper into how Sentra’s data security posture management (DSPM) helps hybrid organizations secure data everywhere. 

To learn more about DSPM, schedule a demo with one of our experts.

Read More
Meni Besso
Meni Besso
April 11, 2024
4
Min Read
Compliance

How PCI DSS 4.0 Improves Your Security Posture

How PCI DSS 4.0 Improves Your Security Posture

The release of PCI DSS 4.0 marks a fundamental shift in how organizations are expected to protect payment card data. Compliance is no longer about passing periodic audits or maintaining static controls, it now requires a continuous, risk-based approach to securing cardholder data, especially in cloud and third-party environments.

With expanded requirements, increased focus on cloud service providers, and 51 future-dated controls becoming mandatory by March 31, 2025, organizations that store, process, or transmit cardholder data must rethink how they approach PCI compliance. Traditional, point-in-time methods struggle to keep up with dynamic cloud data, shadow PAN, and evolving access patterns.

In this blog, we break down the most important changes in PCI DSS 4.0, explain what the March 2025 deadline really means, and show how organizations can operationalize PCI compliance through continuous visibility, monitoring, and protection of sensitive payment card data.

Understanding PCI DSS v4.0

PCI DSS v4.0 brings several notable updates, emphasizing a more comprehensive and risk-based approach to data security. Companies in the payment card ecosystem must take note of these changes to ensure they remain compliant and resilient against evolving threats.

Increased Focus on Cloud and Service Providers

One of the key highlights of PCI DSS v4.0 is its focus on cloud environments and third-party service providers. As more businesses leverage cloud services for storing and processing payment data, it's imperative to extend security controls to these environments.

Expanded Scope of Requirements

With the proliferation of digital transactions, PCI DSS v4.0 expands the scope of requirements to address emerging technologies and evolving threats. The standard now covers a broader range of systems, applications, and processes involved in payment card transactions.

Emphasis on Risk-Based Approach

Recognizing that not all security threats are created equal, PCI DSS v4.0 places a greater emphasis on a risk-based approach to security. Organizations should assess risks systematically and prioritize security measures based on potential impact and likelihood of occurrence.

Enhanced Focus on Data Protection

From encryption and access control to data retention policies, organizations are expected to implement robust measures to prevent unauthorized access and data breaches. This will help mitigate the risk of data theft and ensure compliance with regulatory standards.

New PCI DSS 4.0 Release Implementation by March 2025

Of the 64 new requirements, 51 are future-dated due to their complexity and/or cost of implementation. This is relevant for any business that stores, processes, or transmits cardholder data.

Further, it is crucial to focus on establishing a continuous process:

  • Automated log analysis for threat detection (Req: 10.4.1.1)
  • On-going review of access to sensitive data (Req: 7.2.4)
  • Detection of stored PAN anywhere it is not expected (Req: 12.10.7)

How Sentra Helps Comply With PCI DSS 4.0

Below are a few examples of how Sentra can assist you in complying with PCI DSS 4.0 by continuously monitoring your environment for threats and vulnerabilities.

In today's threat landscape, security is an ongoing process. PCI DSS v4.0 emphasizes the importance of continuous monitoring and testing to detect and respond to security incidents in real-time. By implementing automated monitoring tools and conducting regular security assessments, organizations can proactively identify vulnerabilities and address them before they are exploited by attackers.

PCI DSS 4.0 New Requirements and How Sentra Solves Them

Requirement 10.4.1.1: Automated mechanisms are used to perform audit log reviews.
How Sentra solves it: Sentra's Data Detection and Response (DDR) module continuously monitors logs from sensitive data stores, identifying threats and anomalies in real time that may indicate potential data breaches or unauthorized access to sensitive data.

Requirement 7.2.4: All user accounts and related access privileges, including third party/vendor accounts, are reviewed as follows:

  • At least once every six months.
  • Ensure user accounts and access remain appropriate based on job function.
  • Any inappropriate access is addressed.
  • Management acknowledges that access remains appropriate.

How Sentra solves it: Sentra's Data Security Posture Management (DSPM) data access module frequently scans your sensitive data stores, mapping out the various identities with access to your data, including third-party entities, internal users, and applications. This aids in ensuring least privilege access and allows for the analysis of each identity's security posture through a risk-based approach.

Requirement 12.10.7: Incident response procedures are in place, to be initiated upon the detection of stored PAN anywhere it is not expected, and include:

  • Determining what to do if PAN is discovered outside the CDE, including its retrieval, secure deletion, and/or migration into the currently defined CDE, as applicable.
  • Identifying whether sensitive authentication data is stored with PAN.
  • Determining where the account data came from and how it ended up where it was not expected.
  • Remediating data leaks or process gaps that resulted in the account data being where it was not expected.

How Sentra solves it: Sentra's scanning and classification engine detects all types of sensitive data, including PII, digital identities, and financial data, especially PAN, across all your cloud accounts. It highlights potential "shadow data" suspected of being misplaced. Additionally, Sentra's DataTreks module tracks the movement of sensitive data across accounts, regions, and environments, helping you understand the root cause and take preventive steps.
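For requirement 12.10.7 in particular, detecting stored PAN where it is not expected typically combines a digit-pattern match with a Luhn checksum to cut false positives. The following is a generic sketch of that widely used technique, not Sentra's classification engine:

```python
import re

# Candidate PANs: 13-19 digits with optional space or hyphen separators.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pan(text: str):
    """Yield digit strings that look like primary account numbers (PAN)."""
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            yield digits

print(list(find_pan("note: test card 4111 1111 1111 1111 stored in export.csv")))
```

Real classification engines layer more context (file type, surrounding fields, issuer ranges) on top of this basic check.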

Use Sentra's Reporting Capabilities to Adhere to PCI DSS

Here you can see a detected S3 bucket that contains credit card numbers and personal information that are not properly encrypted.

This is an example of how Sentra raises a threat alert in real time, detecting suspicious activity in a sensitive AWS S3 bucket.

In the dashboard below, you can see open security issues grouped by different compliance frameworks.

Proactive Integration of New Compliance Controls

Sentra remains vigilant in staying up to date with changes in PCI-DSS, GDPR, CCPA and other compliance frameworks. To ensure continuous compliance and security, Sentra actively monitors updates and integrates new controls as they become available. This proactive approach allows users to automate the validation process on an ongoing basis, ensuring that they always adhere to the latest standards and maintain a robust security posture.

Implementation Timeline and Best Practices

It's essential for relevant companies to understand the implementation timeline for PCI DSS v4.0. With a two-phase approach, certain requirements are future-dated due to their complexity or cost of implementation. However, it's crucial not to overlook these future requirements, as they will eventually become mandatory for compliance.

These requirements will be considered best practices until March 31, 2025, after which they will become obligatory. This transition period allows organizations to gradually adapt to the new standards while ensuring they meet current compliance requirements.

Conclusion

PCI DSS 4.0 represents a clear move away from checkbox compliance toward continuous protection of cardholder data. As the payment card industry continues to evolve, so must the security measures used to protect sensitive data. PCI DSS v4.0 marks a significant step forward in enhancing data security and resilience against emerging threats. Understanding the key changes and implementation timeline is crucial for companies to proactively adapt to the new standard and maintain compliance in an ever-changing regulatory landscape.

Sentra plays a pivotal role in this ongoing compliance effort. Its comprehensive features align closely with the requirements of PCI DSS v4.0, providing automated log analysis for threat detection, ongoing review of access to sensitive data, and detection of stored PAN outside expected locations. Through Sentra's Data Detection and Response (DDR) module, organizations can continuously monitor logs from sensitive data stores, identifying threats and anomalies in real-time, thus aiding in compliance with PCI DSS 4.0 requirements such as automated log reviews.

Furthermore, Sentra's Data Security Posture Management (DSPM) module facilitates the review of user accounts and access privileges, ensuring that access remains appropriate based on job function and addressing any inappropriate access, in line with PCI DSS v4.0 requirements. In addition, Sentra's scanning and classification engine, coupled with its DataTreks module, assists in incident response procedures by detecting all types of sensitive data, including PAN, across cloud accounts and tracking the movement of sensitive data, aiding in the remediation of data leaks or process gaps.

By leveraging these capabilities, organizations can streamline their compliance efforts, mitigate risks, and maintain the security and integrity of cardholder data in accordance with PCI DSS v4.0 requirements.


Read More
Ran Shister
Ran Shister
April 10, 2024
4
Min Read
Data Sprawl

Understanding Data Movement to Avert Proliferation Risks

Understanding Data Movement to Avert Proliferation Risks

Understanding the perils your cloud data faces as it proliferates throughout your organization and ecosystems is a monumental task in the highly dynamic business climate we operate in. Being able to see data as it is being copied and travels, monitor its activity and access, and assess its posture allows teams to understand and better manage the full effect of data sprawl.

 

It ‘connects the dots’ for security analysts who must continually evaluate true risks and threats to data so they can prioritize their efforts. Data similarity and movement are important behavioral indicators in assessing and addressing those risks. This blog will explore this topic in depth.

What Is Data Movement

Data movement is the process of transferring data from one location or system to another – from A to B. This transfer can be between storage locations, databases, servers, or network locations. Copying data from one location to another is simple; however, data movement can get complicated when managing volume, velocity, and variety.

  • Volume: Handling large amounts of data.
  • Velocity: Overseeing the pace of data generation and processing.
  • Variety: Managing a variety of data types.

How Data Moves in the Cloud

Data moves freely and can be shared anywhere. The way organizations leverage data is an integral part of their success. Although there are many business benefits to moving and sharing data (at a rapid pace), there are also many concerns that arise, mainly dealing with privacy, compliance, and security. Data needs to move quickly and securely, and it must have the proper security posture at all times.

These are the main ways that data moves in the cloud:

1. Data Distribution in Internal Services: Internal services and applications manage data, saving it across various locations and data stores.

2. ETLs: Extract, Transform, Load (ETL) processes combine data from multiple sources into a central repository known as a data warehouse. This centralized view supports applications in aggregating diverse data points for organizational use.

3. Developer and Data Scientist Data Usage: Developers and data scientists utilize data for testing and development purposes. They require both real and synthetic data to test applications and simulate real-life scenarios to drive business outcomes.

4. AI/ML/LLM and Customer Data Integration: The utilization of customer data in AI/ML learning processes is on the rise. Organizations leverage such data to train models and apply the results across various organizational units, catering to different use-cases.

What Is Misplaced Data

"Misplaced data" refers to data that has been moved from an approved environment to an unapproved environment. For example, a folder that is stored in the wrong location within a computer system or network. This can result from human error, technical glitches, or issues with data management processes.

 

When data is stored in an environment that is not approved for that type of data, it can lead to data leaks, security breaches, compliance violations, and other negative outcomes.

With companies adopting more cloud services, and being challenged with properly managing the subsequent data sprawl, having misplaced data is becoming more common, which can lead to security, privacy, and compliance issues.

The Challenge of Data Movement and Misplaced Data

Organizations strive to secure their sensitive data by keeping it within carefully defined and secure environments. The pervasive data sprawl faced by nearly every organization in the cloud makes it challenging to effectively protect data, given its rapid multiplication and movement.

Leveraging data for various purposes that enhance and grow the business is encouraged for productivity. However, with the advantages come disadvantages: there are risks to having multiple owners and duplicate data.

To address this challenge, organizations can analyze similar data patterns to gain a comprehensive understanding of how data flows within the organization. This helps security teams first gain visibility into those movement patterns, then identify whether the movement is authorized, and finally protect the data accordingly and determine which unauthorized movement should be blocked.

This proactive approach allows them to position themselves strategically. It can involve ensuring robust security measures for data at each location, re-confining it by relocating, or eliminating unnecessary duplicates. Additionally, this analytical capability proves valuable in scenarios tied to regulatory and compliance requirements, such as ensuring GDPR-compliant data residency.

 Identifying Redundant Data and Saving Cloud Storage Costs

The identification of similarities empowers Chief Information Security Officers (CISOs) to implement best practices, steering clear of actions that lead to the creation of redundant data.

Detecting redundant data helps reduce cloud storage costs and improves operational efficiency through targeted and prioritized remediation efforts that focus on the critical data risks that matter.

This not only enhances data security posture, but also contributes to a more streamlined and efficient data management strategy.

“Sentra has helped us to reduce our risk of data breaches and to save money on cloud storage costs.”

-Benny Bloch, CISO at Global-e

Security Concerns That Arise

  1. Data Security Posture Variations Across Locations: Addressing instances where similar data, initially secure, experiences a degradation in security posture during the copying process (e.g., transitioning from private to public, or from encrypted to unencrypted).
  2. Divergent Access Profiles for Similar Data: Exploring scenarios where data, previously accessible by a limited and regulated set of identities, now faces expanded access by a larger number of identities (users), resulting in a loss of control.
  3. Data Localization and Compliance Violations: Examining situations where data, mandated to be localized in specific regions, is found to be in violation of organizational policies or compliance rules (with GDPR as a prominent example). By identifying similar sensitive data, we can pinpoint these issues and help users mitigate them.
  4. Anonymization Challenges in ETL Processes: Identifying issues in ETL processes where data is not only moved but also anonymized. Pinpointing similar sensitive data allows users to detect and mitigate anonymization-related problems.
  5. Customer Data Migration Across Environments: Analyzing the movement of customer data from production to development environments. This can be used by engineers to test real-life use-cases.
  6. Data Democratization and Movement Between Cloud and Personal Stores: Investigating instances where users export data from organizational cloud stores to personal drives (e.g., OneDrive) for purposes of development, testing, or further business analysis. Once this data is moved to personal data stores, it is typically less secure, because these personal drives are less monitored and protected and are controlled by the individual employee rather than the security and dev teams. These personal drives may be susceptible to security issues arising from misconfiguration, user mistakes, or insufficient knowledge.

How Sentra’s DSPM Helps Navigate Data Movement Challenges

  1. Discover and accurately classify the most sensitive data and provide extensive context about it, for example:
  • Where it lives
  • Where it has been copied or moved to
  • Who has access to it
  2. Highlight misconfigurations by correlating similar data that has a different security posture. This helps you pinpoint the issue and adjust it to the right posture.
  3. Quickly identify compliance violations, such as GDPR violations when European customer data moves outside of the allowed region, or when financial data moves outside a PCI-compliant environment.
  4. Identify access changes, which helps you understand the correct access profile by correlating similar data pieces that have different access profiles.

For example, the same data may be well protected in a specific environment and accessible by only two specific users. When that data moves to a developer environment, it can then be accessed by the whole data engineering team, which increases risk.
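Conceptually, this check boils down to comparing the access profiles of two assets that hold the same data. A small sketch with assumed identities (not Sentra's access model):

```python
# Illustrative comparison of who can reach two copies of the same data.
prod_access = {"svc-billing", "dba-alice"}
dev_access = {"svc-billing", "dba-alice", "eng-team-1", "eng-team-2", "contractor-bob"}

newly_exposed = dev_access - prod_access  # identities that gained access via the copy
if newly_exposed:
    print(f"Access expanded on the copied data; review these identities: {sorted(newly_exposed)}")
```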

Leveraging Data Security Posture Management (DSPM) and Data Detection and Response (DDR) tools proves instrumental in addressing the complexities of data movement challenges. These tools play a crucial role in monitoring the flow of sensitive data, allowing for the swift remediation of exposure incidents and vulnerabilities in real-time. The intricacies of data movement, especially in hybrid and multi-cloud deployments, can be challenging, as public cloud providers often lack sufficient tooling to comprehend data flows across various services and unmanaged databases.

 

Our innovative cloud DLP tooling takes the lead in this scenario, offering a unified approach by integrating static and dynamic monitoring through DSPM and DDR. This integration provides a comprehensive view of sensitive data within your cloud account, offering an updated inventory and mapping of data flows. Our agentless solution automatically detects new sensitive records, classifies them, and identifies relevant policies. In case of a policy violation, it promptly alerts your security team in real time, safeguarding your crucial data assets.

In addition to our robust data identification methods, we prioritize the implementation of access control measures. This involves establishing Role-based Access Control (RBAC) and Attribute-based Access Control (ABAC) policies, so that the right users have permissions at the right times.


Identifying Data Movement With Sentra

Sentra has developed different methods to identify data movements and similarities based on the content of two assets. Our advanced capabilities allow us to pinpoint fully duplicated data, identify similar data, and even uncover instances of partially duplicated data that may have been copied or moved across different locations. 

Moreover, we recognize that changes in access often accompany the relocation of assets between different locations. 

As part of Sentra’s Data Security Posture Management (DSPM) solution, we proactively manage and adapt access controls to accommodate these transitions, maintaining the integrity and security of the data throughout its lifecycle.

These are the 3 methods we are leveraging:

  1. Hash similarity - Using each asset's unique content identifier (hash) to locate it across the different data stores of the customer environment.
  2. Schema similarity - Locating exact or similar schemas that indicate there might be similar data in them, then leveraging other metadata and statistical methods to simplify the data and find the necessary correlations.
  3. Entity matching similarity - Detecting when parts of files or tables are copied to another data asset, for example an ETL that extracts only some columns from a table into a new table in a data warehouse.

Another example: if PII is found in a lower environment, Sentra can detect whether it is real or mock customer PII, based on whether the same PII was also found in the production environment.
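A stripped-down sketch of the first two methods follows (illustrative only; the asset names and thresholds are assumptions, and Sentra's production logic is more involved). Hash similarity groups assets by a content digest, and schema similarity compares the column sets of two tables:

```python
import hashlib
from collections import defaultdict

def content_hash(data: bytes) -> str:
    """Content digest used as the asset's unique identifier."""
    return hashlib.sha256(data).hexdigest()

def duplicate_groups(assets: dict) -> list:
    """Group asset names whose content hashes match exactly (hash similarity)."""
    groups = defaultdict(list)
    for name, data in assets.items():
        groups[content_hash(data)].append(name)
    return [names for names in groups.values() if len(names) > 1]

def schema_similarity(cols_a: set, cols_b: set) -> float:
    """Jaccard similarity between two tables' column sets (schema similarity)."""
    return len(cols_a & cols_b) / len(cols_a | cols_b)

assets = {
    "s3://prod/users.csv": b"id,email\n1,a@x.com",
    "s3://dev/users_copy.csv": b"id,email\n1,a@x.com",
}
print(duplicate_groups(assets))                                     # both names grouped together
print(schema_similarity({"id", "email", "ssn"}, {"id", "email"}))   # ~0.67, likely related tables
```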

PII found in a lower environment

Conclusion

Understanding and managing data sprawl are critical tasks in the dynamic business landscape. Monitoring data movement, access, and posture enables teams to comprehend the full impact of data sprawl, connecting the dots for security analysts assessing true risks and threats.

Sentra addresses the challenge of data movement by utilizing advanced methods like hash, schema, and entity similarity to identify duplicate or similar data across different locations. Sentra's holistic Data Security Posture Management (DSPM) solution not only enhances data security but also contributes to a streamlined data management strategy. 

The identified challenges and Sentra's robust methods emphasize the importance of proactive data management and security in the dynamic digital landscape.

To learn more about how you can enhance your data security posture, schedule a demo with one of our experts.


Read More
David Stuart
David Stuart
March 11, 2024
4
Min Read
Data Loss Prevention

It's Time to Embrace Cloud DLP and DSPM

It's Time to Embrace Cloud DLP and DSPM

What’s the best way to prevent data exfiltration or exposure? In years past, the clear answer was often data loss prevention (DLP) tools. But today, the answer isn’t so clear — especially in light of the data democratization trend and for those who have adopted multi-cloud or cloud-first strategies.

 

Data loss prevention (DLP) emerged in the early 2000s as a way to secure web traffic, which wasn’t encrypted at the time. Without encryption, anyone could tap into data in transit, creating risk for any data that left the safety of on-premise storage. As Cyber Security Review describes, “The main approach for DLP here was to ensure that any sensitive data or intellectual property never saw the outside web. The main techniques included (1) blocking any actions that copy or move data to unauthorized devices and (2) monitoring network traffic with basic keyword matching.”

Although DLP has evolved for securing endpoints, email and more, its core functionality has remained the same: gatekeeping data within a set perimeter. But, this approach simply doesn’t perform well in cloud environments, as the cloud doesn’t have a clear perimeter. Instead, today’s multi-cloud environment includes constantly changing data stores, infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS) and more.

And thanks to data democratization, people across an organization can access all of these areas and move, change, or copy data within seconds. Cloud applications do so as well—even faster.

Traditional DLP tools weren’t built for cloud-native environments and can cause significant challenges for today’s organizations. Data security teams need a new approach, purpose-built for the realities of the cloud, digital transformation and today’s accelerated pace of innovation.

Why Traditional DLP Isn’t Ideal for the Cloud

Traditional DLPs are often unwieldy for the engineers who must work with the solution and ineffective for the leaders who want to see positive results and business continuity from the tool. There are a few reasons why this is the case:

1. Traditional DLP tools often trigger false alarms.

Traditional DLPs are prone to false positives. Because they are meant to detect any sensitive data that leaves a set perimeter, these solutions tend to flag normal cloud activities as security risks. For instance, traditional DLP is notorious for erroneously blocking apps and services in IaaS/PaaS environments. These “false positives” disrupt business continuity and innovation, which is frustrating for users who want to use valuable cloud data in their daily work. Not only do traditional DLPs block the wrong signals, but they also overlook the right ones, such as suspicious activities happening over cloud-based applications like Slack, Google Drive or generative AI/LLM apps. Plus, traditional DLP doesn’t follow data as users move, change or copy it, meaning it can easily miss shadow data.

2. Traditional DLP tools cause alert fatigue.

In addition, these tools lack detailed data context, meaning that they can’t triage alerts based on severity. Combine this factor with the high number of false positives, and teams end up with an overwhelming list of alerts that they must sort manually. This reality leads to alert fatigue and can cause teams to overlook legitimate security issues.

3. Traditional DLP tools rely on lots of manual intervention.

Traditional DLP deployment and maintenance take up lots of time and resources for a cloud-based or hybrid organization. For instance, teams must often install several legacy agents and proxies across the environment to make the solution work accurately. Plus, these legacy tools rely on clear-cut data patterns and keywords to uncover risk. These patterns are often hidden or nonexistent because data is frequently disguised or transformed as it exists in or moves to cloud environments. This means that teams must manually tune their DLP solution to align with what their sensitive cloud data actually looks like. In many cases, this manual intervention is very difficult—if not impossible—since many cloud pipelines rely on ETL data, which isn’t easy to manually alter or inspect.

Additionally, today’s organizations use vast amounts of unstructured data within cloud file shares such as SharePoint. They must parse through tens or even hundreds of petabytes of this unstructured data, making it challenging to find hidden sensitive data. Traditional DLP solutions lack the technology that would make this process far easier, such as AI/ML analysis.

Cloud DLP: A Cloud-Native Approach to Data Loss Prevention

Because the cloud is so different from traditional, on-premise environments, today’s cloud-based and hybrid organizations need a new solution. This is where a cloud DLP solution comes into the picture. We are seeing lots of cloud DLP tools hit the market, including solutions that fall into two main categories:

SaaS DLP products that leverage APIs to provide access control. While these products help to protect from loss within some SaaS applications, they are limited in scope, only covering a small percentage of the cloud services that a typical cloud-native organization uses. These limitations mean that a SaaS DLP product can’t provide a truly comprehensive view of all cloud data or trace data lineage if it’s not based in the cloud. 

IaaS + PaaS DLP products that focus on scanning and classifying data. Some of these tools are simply reporting tools that uncover data but don’t take action to remediate any issues. This still leaves extra manual work for security teams. Other IaaS + PaaS DLP offerings include automated remediation capabilities but can cause business interruptions if the automation occurs in the wrong situation.  

To directly address the limitations inherent in traditional DLPs and avoid these pitfalls, next-generation cloud DLPs should include the following:

  • Scalability in complex, multi-cloud environments
  • Automated prioritization for detected risks based on rich data context
  • Auto-detection and remediation capabilities that use deep context to correct configuration issues, creating efficiency without blocking everyday activities
  • Integration and workflows that are compatible with your existing environments
  • Straightforward, cloud-native agentless deployment without extensive tuning or maintenance


Comparing Cloud DLP, DSPM, and DDR:

Security use case
  • Cloud DLP: Data leakage prevention
  • DSPM: Data posture improvement, compliance
  • DDR: Threat detection and response

Environments
  • Cloud DLP: SaaS, cloud storage, apps
  • DSPM: Public cloud, SaaS, and on-premises
  • DDR: Public cloud, SaaS, networks

Risk prioritization
  • Cloud DLP: Limited - based only on predefined policies, not on discovered data or data context
  • DSPM: Analyzes data context, access controls, and vulnerabilities
  • DDR: Threat activity context such as anomalous traffic, volume, and access

Remediation
  • Cloud DLP: Block or redact data transfers, encryption, alerts
  • DSPM: Alerts, IR/tool integration and workflow initiation
  • DDR: Alerts, revoke users/access, isolate data breach

Further Enhancing Cloud DLP by Integrating DSPM & DDR

While Cloud Data Loss Prevention (DLP) helps to secure data in multi-cloud environments by preventing loss, DSPM and DDR capabilities can complete the picture. These technologies add contextual details, such as user behavior, risk scoring and real-time activity monitoring, to enhance the accuracy and actionability of data threat and loss mitigation. Data Security Posture Management (DSPM) enforces good data hygiene no matter where the data resides. It takes a proactive approach, significantly reducing data exposure by preventing employees from taking risky actions in the first place. Data Detection and Response (DDR) alerts teams to the early warning signs of a breach, including suspicious activities such as data access by an unknown IP address. By bringing together Cloud DLP, DSPM and DDR, your organization can establish holistic data protection with both proactive and reactive controls. There is already much overlap in these technologies. As the market evolves, it is likely they will continue to combine into holistic cloud-native data security platforms.  


Sentra’s data security platform brings a cloud-native approach to DLP by automatically detecting and remediating data risks at scale. Built for complex multi-cloud and on-premises environments, Sentra empowers you with a unified platform to prioritize all of your most critical data risks in near real-time.

Request a demo to learn more about our cloud DLP, DSPM and DDR offerings.


Read More
Ron Reiter
Ron Reiter
March 5, 2024
3
Min Read
AI and ML

New AI-Assistant, Sentra Jagger, Is a Game Changer for DSPM and DDR

New AI-Assistant, Sentra Jagger, Is a Game Changer for DSPM and DDR

Evolution of Large Language Models (LLMs)

In the early 2000s, search engines such as Google and Yahoo gained widespread popularity. Users found them to be convenient tools, effortlessly bringing a wealth of information to their fingertips. Fast forward to the 2020s, and Large Language Models (LLMs) are pushing productivity to the next level. LLMs remove the learning curve, seamlessly bridging the gap between technology and the user.

LLMs create a natural interface between the user and the platform. By interpreting natural language queries, they effortlessly translate human requests into software actions and technical operations. This simplifies technology to make it close to invisible. Users no longer need to understand the technology itself, or how to get certain data — they can just input any query, and LLMs will simplify it.

Revolutionizing Cloud Data Security With Sentra Jagger

Sentra Jagger is an industry-first AI assistant for cloud data security, built on Large Language Model (LLM) technology.

It enables users to quickly analyze and respond to security threats, cutting task times by up to 80%. It answers data security questions and assists with policy customization and enforcement, settings configuration, creating new data classifiers, and compliance reporting. By reducing the time needed to investigate and address security threats, Sentra Jagger enhances operational efficiency and reinforces security measures.

Empowering security teams, users can access insights and recommendations on specific security actions using an interactive, user-friendly interface. Customizable dashboards, tailored to user roles and preferences, enhance visibility into an organization's data. Users can directly inquire about findings, eliminating the need to navigate through complicated portals or ancillary information.

Benefits of Sentra Jagger

  1. Accessible Security Insights: Simplified interpretation of complex security queries, offering clear and concise explanations in plain language to empower users across different levels of expertise. This helps users make informed decisions swiftly and confidently take appropriate actions.
  2. Enhanced Incident Response: Clear steps to identify and fix issues, making the process faster, minimizing downtime and damage, and restoring normal operations promptly.
  3. Unified Security Management: Integration with existing tools, creating a unified security management experience and providing a complete view of the organization's data security posture. Jagger also speeds solution customization and tuning.

Why Sentra Jagger Is Changing the Game for DSPM and DDR

Sentra Jagger is an essential tool for simplifying the complexities of both Data Security Posture Management (DSPM) and Data Detection and Response (DDR) functions. DSPM discovers and accurately classifies your sensitive data anywhere in the cloud environment, understands who can access this data, and continuously assesses its vulnerability to security threats and risk of regulatory non-compliance. DDR focuses on swiftly identifying and responding to security incidents and emerging threats, ensuring that the organization’s data remains secure. With their ability to interpret natural language, LLMs, such as Sentra Jagger, serve as transformative agents in bridging the comprehension gap between cybersecurity professionals and the intricate worlds of DSPM and DDR.

Data Security Posture Management (DSPM)

When it comes to data security posture management (DSPM), Sentra Jagger empowers users to articulate security-related queries in plain language, seeking insights into cybersecurity strategies, vulnerability assessments, and proactive threat management.

Meet Sentra Jagger, your new data security assistant

The language models not only comprehend the linguistic nuances but also translate these queries into actionable insights, making data security more accessible to a broader audience. This democratization of security knowledge is a pivotal step forward, enabling organizations to empower diverse teams (including privacy, governance, and compliance roles) to actively engage in bolstering their data security posture without requiring specialized cybersecurity training.

Data Detection and Response (DDR)

In the realm of data detection and response (DDR), Sentra Jagger contributes to breaking down technical barriers by allowing users to interact with the platform to seek information on DDR configurations, real-time threat detection, and response strategies. Our AI-powered assistant transforms DDR-related technical discussions into accessible conversations, empowering users to understand and implement effective threat protection measures without grappling with the intricacies of data detection and response technologies.

The integration of LLMs into the realms of DSPM and DDR marks a paradigm shift in how users will interact with and comprehend complex cybersecurity concepts. Their role as facilitators of knowledge dissemination removes traditional barriers, fostering widespread engagement with advanced security practices. 

Sentra Jagger is a game changer, making advanced technological knowledge more inclusive and allowing organizations and individuals to fortify their cybersecurity practices with unprecedented ease. It helps security teams better communicate with and integrate within the rest of the business. As AI-powered assistants continue to evolve, so will their impact on reshaping the accessibility and comprehension of intricate technological domains.

How CISOs Can Leverage Sentra Jagger 

Consider a Chief Information Security Officer (CISO) in charge of cybersecurity at a healthcare company. To assess the security policies governing sensitive data in their environment, the CISO leverages Sentra’s Jagger AI assistant. If the CISO, let's call her Sara, needs to review the Sentra policy page, instead of navigating it manually she can simply ask Jagger, "What policies are defined in my environment?" In response, Jagger provides a comprehensive list of policies, including their names, descriptions, active issues, creation dates, and status (enabled or disabled).

Sara can then add a custom GDPR-related policy simply by describing it, for example: "Add a policy that tracks European customer information moving outside of Europe." Sentra Jagger will translate the request, using Natural Language Processing (NLP), into a Sentra policy and inform Sara about potentially non-compliant data movement based on the newly added policy.

Upon thorough review, Sara identifies a need for a new policy: "Create a policy that monitors instances where credit card information is discovered in a datastore without audit logs enabled." Sentra Jagger initiates the process of adding this policy by prompting Sara for additional details and confirmation. 

The LLM-assistant, Sentra Jagger, communicates, "Hi Sara, it seems like a valuable policy to add. Credit card information should never be stored in a datastore without audit logs enabled. To ensure the policy aligns with your requirements, I need more information. Can you specify the severity of alerts you want to raise and any compliance standards associated with this policy?" Sara responds, stating, "I want alerts to be raised as high severity, and I want the AWS CIS benchmark to be associated with it."

Having captured all the necessary information, Sentra Jagger compiles a summary of the proposed policy and sends it to Sara for her review and confirmation. After Sara confirms the details, Sentra Jagger seamlessly incorporates the new policy into the system. This streamlined interaction enhances the efficiency of policy management for CISOs, enabling them to easily navigate, customize, and implement security measures in their organizations.

Creating a policy with Sentra Jagger
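
To make this concrete, here is a minimal Python sketch of the kind of structured object Sara's confirmed request could resolve into once Jagger has gathered the severity and compliance details. The class, field names, and values are illustrative assumptions for this walkthrough only, not Sentra's actual policy schema or API.

# Hypothetical sketch: the structured policy a confirmed natural-language
# request might produce. Field names and values are assumptions for
# illustration, not Sentra's real schema or API.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DataSecurityPolicy:
    name: str
    description: str
    severity: str                       # "high", per Sara's answer
    compliance_standards: list = field(default_factory=list)
    enabled: bool = True

# The assistant's clarifying questions fill in the fields that the original
# natural-language request left open.
policy = DataSecurityPolicy(
    name="Credit card data in unaudited datastores",
    description=("Alert when credit card information is discovered in a "
                 "datastore that does not have audit logs enabled."),
    severity="high",
    compliance_standards=["AWS CIS Benchmark"],
)

# Summary the assistant could present back for review and confirmation.
print(json.dumps(asdict(policy), indent=2))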

Conclusion 

The advent of Large Language Models (LLMs) has changed the way we interact with and understand technology. Building on the legacy of search engines, LLMs eliminate the learning curve, seamlessly translating natural language queries into software and technical actions. This innovation removes friction between users and technology, making intricate systems nearly invisible to the end user.

For Chief Information Security Officers (CISOs) and ITSecOps teams, LLMs offer a game-changing approach to cybersecurity. By interpreting natural language queries, Sentra Jagger bridges the comprehension gap between cybersecurity professionals and the intricate worlds of DSPM and DDR. This democratization of security knowledge allows organizations to empower a wider audience to actively engage in bolstering their data security posture and responding to security incidents, revolutionizing the cybersecurity landscape.

To learn more about Sentra, schedule a demo with one of our experts.

Read More
Yoav Regev
February 20, 2024
3
Min Read
Data Security

Emerging Data Security Challenges In the LLM Era


In April of 2023, it was discovered that several Samsung employees reportedly leaked sensitive data via OpenAI’s chatbot ChatGPT. The data leak included the source code of software responsible for measuring semiconductor equipment. This leak emphasizes the importance of taking preventive measures against future breaches associated with Large Language Models (LLMs).

LLMs generate responses based on the data they continuously receive, which can unintentionally expose confidential information. Even though OpenAI specifically tells users not to share “any sensitive information in your conversations”, ChatGPT and other LLMs are simply too useful to ban for security reasons. You wouldn’t ban an employee from using Google or an engineer from GitHub. Business productivity (almost) always comes first.

This means that the risks of spilling company secrets and sharing sensitive data with LLMs are not going anywhere. And you can be sure that more generative AI tools will be introduced to the workplace in the near future.

“Banning chatbots one by one will start feeling ‘like playing whack-a-mole’ really soon.”

  • Joe Payne, the CEO of insider risk software solutions provider Code42.


In many ways, the effect of LLMs on data security is similar to the changes we saw 10-15 years ago when companies started moving their data to the cloud.

Broadly speaking, we can say there have been three ‘eras’ of data and data security…

The Era of On-Prem Data

The first was the era of on-prem data. For most of the history of computing, enterprises stored their data in on-prem data centers and secured access to sensitive data by fortifying the perimeter. The data also wasn’t going anywhere on its own. It lived on company servers and was managed by company IT teams, who controlled who could access anything on those systems.

The Era of the Cloud

Then came the next era - the cloud. Suddenly, corporate data wasn’t static anymore. Data was free and could be shared anywhere - engineers, BI tools, and data scientists were accessing and moving this free-flowing data to drive the business forward. How you leverage your data became an integral part of a company’s success. While the business benefits were clear, this created a number of concerns - particularly around privacy, compliance, and security. Data needed to move quickly, securely, and maintain the proper security posture at all times.

The challenge was that security teams were now struggling with basic questions about the data, like:

  • Where is my data? 
  • Who has access to it? 
  • How can I comply with regulations? 

It was during this era that Data Security Posture Management (DSPM) emerged as a solution to this problem. By ensuring that data always had proper access controls wherever it traveled, DSPM promised to address security and compliance issues for enterprises with fast-moving cloud data.

And while we were answering these questions, a new era emerged, with a host of new challenges. 

The Era of AI

The rise of Large Language Models (LLMs) as indispensable business tools in just the past few years has introduced a new dimension to data security challenges. It has significantly amplified the issues of the cloud era, presenting an unprecedented and rapidly growing problem. While LLMs have accelerated business operations to new heights, they have also taken cloud risk to another level.

While securing data in the cloud was a challenge, at least you controlled your cloud, to some degree. You could decide who could access it, and when. You could decide what data to keep and what to remove. That has all changed as LLMs and AI play a larger role in company operations.

Globally, and specifically in the US, organizations are facing the challenge of managing these new AI initiatives efficiently while maintaining speed and ensuring regulatory compliance. CEOs and boards are increasingly urging companies to leverage LLMs and AI and use them as databases. However, there is limited understanding of the associated risks, and it is difficult to control what data is fed into these models. The ultimate goal is to mitigate and prevent such exposures effectively.


LLMs are a black box. You don't know what data your engineers are feeding into them, and you can’t be sure that users won’t be able to manipulate your LLMs into disclosing sensitive information. For example, an engineer training a model might accidentally use real customer data that now exists somewhere in the LLM and might be inadvertently disclosed. Or an LLM-powered chatbot might have a vulnerability that leads it to respond to an inquiry with sensitive company data. This is the challenge facing data security teams in this new era.

How can you know what the LLM has access to, how it’s using that data, and who it’s sharing that data with?

Solving The Challenges of the Cloud and AI Eras at the Same Time

Adding to the complexity for security and compliance professionals is that we’re still dealing with the challenges from the cloud era. Fortunately, Data Security Posture Management (DSPM) has adapted to solve these eras’ primary data security headaches.

For data in the cloud, DSPM can discover your sensitive data anywhere in the cloud environment, understand who can access this data, and assess its vulnerability to security threats and risk of regulatory non-compliance. Organizations can harness advanced technologies while keeping privacy and compliance seamlessly integrated into their processes. Further, DSPM tackles issues such as finding shadow data, identifying sensitive information with inadequate security postures, discovering duplicate data, and ensuring proper access control.

For the LLM data challenges, DSPM can automatically secure LLM training data, facilitating swift AI application development and letting the business run as smoothly as possible.

A DSPM solution that integrates with platforms like AWS SageMaker and GCP Vertex AI, as well as other AI IDEs, can ensure secure data handling during ML training. Full integration with capabilities like Data Access Governance (DAG) and Data Detection and Response (DDR) provides a robust approach to data security and privacy.
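
As an illustration of what "securing training data" can mean in practice, below is a minimal Python sketch that flags obviously sensitive values in a local ./training_data directory before a dataset is handed to a platform like SageMaker or Vertex AI. The directory path and regex patterns are simplified assumptions for this example; a real DSPM classifier relies on far richer, context-aware detection rather than a handful of regular expressions.

# Minimal pre-training scan: flag files containing obvious sensitive values
# before they are used for ML training. Patterns and paths are illustrative
# assumptions, not a production-grade classifier.
import re
from pathlib import Path

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_training_files(data_dir):
    """Return a list of findings: which file contains which kind of value."""
    findings = []
    for path in Path(data_dir).rglob("*.csv"):
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                findings.append({"file": str(path), "type": label})
    return findings

if __name__ == "__main__":
    # Flag (or block) the training job if anything sensitive turns up.
    for issue in scan_training_files("./training_data"):
        print(f"Sensitive data ({issue['type']}) found in {issue['file']}")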

AI has the remarkable capacity to reshape our world, yet this must be balanced with a firm dedication to maintaining data integrity and privacy. Ensuring data integrity and privacy in LLMs is crucial for the creation of ethical and responsible AI applications. By utilizing DSPM, organizations are equipped to apply best practices in data protection, thereby reducing the dangers of data breaches, unauthorized access, and bias. This approach is key to fostering a safe and ethical digital environment as we advance in the LLM era.

To learn more about DSPM, request a demo today.


Read More