
Key Practices for Responding to Compliance Framework Updates

June 10, 2024 | 3 Min Read | Compliance

Most privacy, IT, and security teams know the pain of keeping up with ever-changing data compliance regulations. Because data security and privacy regulations change so rapidly, keeping up can feel like a game of “whack-a-mole.” And to adhere to compliance regulations at all, organizations must know which data is sensitive and where it resides. That is difficult when, as in the typical enterprise, data is spread across multiple cloud environments, on-premises stores, SaaS applications, and more - and is constantly changing and moving.

While meeting a long list of constantly evolving data compliance regulations can seem daunting, there are effective ways to set a foundation for success. By starting with data security and hygiene best practices, your business can better meet existing compliance requirements and prepare for any future changes.

Recent Updates to Common Data Compliance Frameworks 

The average organization comes into contact with several voluntary and mandatory compliance frameworks related to security and privacy. Here’s an overview of the most common ones and how they have changed in the past few years:

Payment Card Industry Data Security Standard (PCI DSS)

What it is: PCI DSS is a set of over 500 requirements for strengthening security controls around payment cardholder data. 

Recent changes to this framework: In March 2022, the PCI Security Standards Council announced PCI DSS version 4.0, which officially went into effect in Q1 2024. This newest version sets notably stricter standards for defining which accounts can access environments containing cardholder data and for authenticating those users with multi-factor authentication and stronger passwords. This update means organizations must know where their sensitive data resides and who can access it.
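
As a concrete starting point, a team could script a quick audit of which identities lack MFA before tightening access to the cardholder data environment. The sketch below is a minimal, hypothetical example using AWS's boto3 SDK; it checks only MFA enrollment and is in no way a complete PCI DSS control:

```python
# Hypothetical sketch: list IAM users with no MFA device registered,
# a common first step before tightening access to a cardholder data
# environment. Checks enrollment only; not a complete PCI DSS control.
import boto3

iam = boto3.client("iam")

def users_missing_mfa() -> list[str]:
    """Return IAM user names that have no MFA device registered."""
    missing = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            devices = iam.list_mfa_devices(UserName=user["UserName"])
            if not devices["MFADevices"]:
                missing.append(user["UserName"])
    return missing

if __name__ == "__main__":
    for name in users_missing_mfa():
        print(f"PCI DSS 4.0 gap: user '{name}' has no MFA device")
```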

U.S. Securities and Exchange Commission (SEC) 4-Day Disclosure Requirement

What it is: The SEC’s 4-day disclosure requirement is a rule that requires SEC registrants (initially, larger filers) to disclose a material cybersecurity incident within four business days of determining that it is material.

Recent changes to this framework: This disclosure rule took effect in December 2023. Since then, several Fortune 500 organizations have had to disclose cybersecurity incidents, including a description of the nature, scope, and timing of each incident; the SEC also requires affected organizations to disclose which assets were impacted. This new requirement significantly raises the stakes of a cyber event, as organizations risk more reputational damage and customer churn when an incident happens.

In addition, the SEC will require smaller reporting companies to comply with these breach disclosure rules in June 2024. In other words, these smaller companies will need to adhere to the same breach disclosure protocols as their larger counterparts.

Health Insurance Portability and Accountability Act (HIPAA)

What it is: HIPAA is a U.S. law that protects patient information through stringent disclosure and privacy safeguards.

Recent changes to this framework: Updated HIPAA guidance has been released recently, including voluntary cybersecurity performance goals created by the U.S. Department of Health and Human Services (HHS). These recommendations focus on data security best practices such as strengthening access controls, implementing incident planning and preparedness, using strong encryption, and conducting asset inventory. Meeting these recommendations strengthens an organization’s ability to adhere to HIPAA, specifically in protecting electronic protected health information (ePHI).

General Data Protection Regulation (GDPR) and EU-US Data Privacy Framework

What it is: GDPR is a robust data privacy regulation in the European Union. The EU-US Data Privacy Framework (DPF) adds a mechanism that enables participating U.S. organizations to meet the EU’s requirements for receiving personal data transferred out of the EU.

Recent changes to this framework: The GDPR landscape continues to evolve as new data privacy challenges arise. Recent changes include the EU-U.S. Data Privacy Framework, which entered into force in July 2023. The framework requires participating organizations to significantly limit how they use personal data and to inform individuals about their data processing procedures. These requirements mean organizations must understand where and how they use EU user data.

National Institute of Standards and Technology (NIST) Cybersecurity Framework

What it is: The NIST Cybersecurity Framework is a voluntary set of guidelines that provides recommendations to organizations for managing cybersecurity risk. However, companies that do business with or are part of the U.S. government, including agencies and contractors, are required to comply with NIST standards.

Recent changes to this framework: NIST released version 2.0 of the framework in February 2024. Changes include a new core function, “Govern,” which brings in more leadership oversight. Version 2.0 also highlights supply chain security and more effective cyber incident response. Teams must focus on gaining complete visibility into their data so leaders can fully understand and manage risk.

ISO/IEC 27001:2022

What it is: ISO/IEC 27001 is an international standard for information security management; organizations can be certified against it to demonstrate that they meet its information security requirements.

Recent changes to this framework: ISO 27001 was revised in 2022. While the revision consolidated many of the controls listed in the previous version, it also added 11 brand-new ones, such as data leakage prevention, monitoring activities, data masking, and configuration management. Again, these additions highlight the importance of understanding where and how data gets used so businesses can better protect it.

California Consumer Privacy Act (CCPA)

What it is: CCPA is a set of mandatory regulations for protecting the data privacy of California residents.

Recent changes to this framework: The CCPA was amended by the California Privacy Rights Act (CPRA), which took effect in 2023. The amendment adds new data rights, such as consumers’ rights to correct inaccurate personal information and to limit the use of their sensitive personal information. As a result, businesses must have a stronger grasp on how their California users’ data is stored and used across the organization.

2024 FTC Mandates

What it is: The Federal Trade Commission (FTC)’s new mandates require some businesses to disclose data breaches to the FTC as soon as possible — no later than 30 days after the breach is discovered. 

Recent changes to this framework: The first of these new data breach reporting rules, under the Standards for Safeguarding Customer Information (Safeguards Rule), took effect in May 2024. The Safeguards Rule places disclosure requirements on non-banking financial institutions that aren’t required to register with the SEC (e.g., mortgage brokers, payday lenders, and vehicle dealers).

Key Data Practices for Meeting Compliance

These frameworks are just a portion of the ever-changing compliance and regulatory requirements that businesses must meet today. Ultimately, it all goes back to strong data security and hygiene: knowing where your data resides, who has access to it, and which controls are protecting it. 

To gain visibility into all of these areas, businesses must operationalize the following actions throughout their entire data estate (a brief illustrative sketch follows the list):

  • Discover data in both known and unknown (shadow) data stores.
  • Accurately classify and organize discovered data so the most sensitive assets can be adequately protected.
  • Monitor and track access keys and user identities to enforce least privilege access and to limit third-party vendor access to sensitive data.
  • Detect and alert on risky data movement and suspicious activity to gain early warning of potential breaches.
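
To make the first two practices concrete, here is a minimal, hypothetical sketch that lists the objects in a single S3 bucket and flags likely-sensitive content with simple regex patterns. The bucket name and patterns are illustrative assumptions, and production-grade classification requires far more context than regex alone:

```python
# Hypothetical sketch of discovery + classification over one S3 bucket.
# Bucket name and regex patterns are illustrative only; real classifiers
# sample intelligently and parse file formats rather than reading raw bytes.
import re
import boto3

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

s3 = boto3.client("s3")

def classify_bucket(bucket: str) -> dict[str, list[str]]:
    """Map object key -> list of sensitive-data labels found in it."""
    findings = {}
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            text = body.decode("utf-8", errors="ignore")
            labels = [name for name, rx in PATTERNS.items() if rx.search(text)]
            if labels:
                findings[obj["Key"]] = labels
    return findings

print(classify_bucket("example-data-bucket"))  # hypothetical bucket name
```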

Sentra enables organizations to meet data compliance requirements with data security posture management (DSPM) and data access governance (DAG) that travel with your data. We help organizations gain a clear view of all sensitive data, identify compliance gaps for fast resolution, and easily provide evidence of regulatory controls in framework-specific reports. 

Find out how Sentra can help your business meet data and privacy compliance requirements.

If you want to learn more, request a demo with our data security experts.

Meni is an experienced product manager and the former founder of Pixibots (a mobile applications studio). Over the past 15 years, he has gained expertise in various industries, including e-commerce, cloud management, dev tools, mobile games, and more. He is passionate about delivering high-quality technical products that are intuitive and easy to use.


Latest Blog Posts

David Stuart and Nikki Ralston

February 4, 2026 | 3 Min Read

DSPM Dirty Little Secrets: What Vendors Don’t Want You to Test

Discover What DSPM Vendors Try to Hide

Your goal in running a data security/DSPM proof of value (POV) is to evaluate all important performance and cost parameters so you can make the best decision and avoid unpleasant surprises. Vendors, on the other hand, are looking for a ‘quick win’ and will often suggest shortcuts like using a limited test data set or copying your data to their environment.

On the surface this might sound like a reasonable approach, but if you don’t test real data types and volumes in your own environment, the POV process may hide costly failures or compliance violations that will quickly become apparent in production. A recent evaluation of Sentra versus another top emerging DSPM exposed how the other solution’s performance dropped and costs skyrocketed when deployed at petabyte scale. Worse, the emerging DSPM removed data from the customer environment - a clear controls violation.

If you want to run a successful POV and avoid DSPM buyers' remorse, you need to look out for these "dirty little secrets".

Dirty Little Secret #1:
‘Start small’ can mean ‘fails at scale’

The biggest 'dirty secret' is that scalability limits hide behind the 'start small' suggestion. Many DSPM platforms cannot scale to modern petabyte-sized data environments. Vendors try to conceal this architectural weakness by encouraging small, tightly scoped POVs that never stress the system and so create false confidence. Upon broad deployment, the weakness is quickly exposed as scans slow and refresh cycles stretch, forcing teams to drastically reduce scope or frequency. The failure is fundamentally architectural - a lack of parallel orchestration and elastic execution - proving that the 'start small' advice was a deliberate tactic to avoid exposing the platform’s inevitable bottleneck. In a recent POV, Sentra successfully scanned 10x more data than the alternative in approximately the same time.
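
To see what 'parallel orchestration' means in practice, consider the difference between scanning objects one at a time and fanning the work out across a pool of workers. This is a minimal, hypothetical sketch; scan_object() and the object keys are stand-ins for a real scanning engine:

```python
# Minimal sketch of parallel scan orchestration: fan object scans out
# across a worker pool instead of scanning serially. scan_object() and
# the object keys are illustrative stand-ins for a real scan engine.
from concurrent.futures import ThreadPoolExecutor

def scan_object(key: str) -> tuple[str, bool]:
    """Pretend to scan one object; return (key, contains_sensitive_data)."""
    # A real engine would fetch and classify the object's content here.
    return key, key.endswith(".csv")

objects = [f"datalake/part-{i:05d}.{'csv' if i % 7 == 0 else 'log'}"
           for i in range(10_000)]

flagged = []
with ThreadPoolExecutor(max_workers=64) as pool:
    # Elastic execution would also grow or shrink the pool with the backlog.
    for key, hit in pool.map(scan_object, objects):
        if hit:
            flagged.append(key)

print(f"{len(flagged)} of {len(objects)} objects flagged for review")
```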

Dirty Little Secret #2:
High cloud cost breaks continuous security

Another reason some vendors try to limit the scale of POVs is to hide the real cloud cost of running them in production. They often use brute-force scanning that reads excessive data, consumes massive compute resources, and is architecturally inefficient. This is easy to mask during short, limited POVs, but it quickly drives up cloud bills in production. The resulting cost pressure forces organizations to reduce scan frequency and scope, quietly shifting the platform from a continuous security control to a periodic inventory. Ultimately, tools that cannot scale scanners efficiently on demand - or that must scan infrequently - trade essential security for cost, proving they are only affordable when they are not fully utilized. In a recent POV run on 100 petabytes of data, Sentra proved to be 10x more cost-effective to operate.

Dirty Little Secret #3:
‘Good enough’ accuracy degrades security

Accuracy is fundamental to Data Security Posture Management (DSPM) and should not be compromised. While a few points’ difference may not seem like a deal breaker, every percentage point of classification accuracy can dramatically affect all downstream security controls. Costs increase as manual intervention is required to address false positives, and when organizations automate controls based on these inaccuracies, the DSPM platform itself becomes a source of risk. Confidence is lost. The secret stays safe because the POV never validates the platform's accuracy against known sensitive data.

In a recent POV, Sentra demonstrated a rate of false positives and false negatives below one percent.
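
Measuring this is straightforward once you have labeled ground truth. The sketch below, with made-up inputs, shows the arithmetic a POV report should make explicit:

```python
# Hypothetical sketch: validate classifier output against a labeled
# ground-truth set and report FP/FN rates explicitly. Both dicts are
# illustrative stand-ins for real POV results.
ground_truth = {"doc1": True, "doc2": False, "doc3": True, "doc4": False}
predicted    = {"doc1": True, "doc2": True,  "doc3": False, "doc4": False}

fp = sum(predicted[k] and not truth for k, truth in ground_truth.items())
fn = sum(not predicted[k] and truth for k, truth in ground_truth.items())
negatives = sum(not v for v in ground_truth.values())
positives = sum(v for v in ground_truth.values())

print(f"False positive rate: {fp / negatives:.1%}")  # FP / actual negatives
print(f"False negative rate: {fn / positives:.1%}")  # FN / actual positives
```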

DSPM POV Red Flags

Be wary when a vendor suggests any of the following:

  • Copy data to the vendor environment for a “quick win”
  • Limit features or capabilities to simplify testing
  • Artificially reduce the size of scanned data
  • Restrict integrations to avoid “complications”
  • Limit or avoid API usage

These shortcuts don’t make a POV easier - they make it misleading.

Four DSPM POV Requirements That Expose the Truth

If you want a DSPM POV that reflects production reality, insist on these requirements:

1. Scalability

Run discovery and classification on at least 1 petabyte of real data, including unstructured object storage. Completion time must be measured in hours or days - not weeks.

2. Cost Efficiency

Operate scans continuously at scale and measure actual cloud resource consumption. If cost forces reduced frequency or scope, the model is unsustainable.
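
Even a back-of-the-envelope model, using entirely made-up rates, shows why scan efficiency dominates cost at scale:

```python
# Back-of-the-envelope cost model with illustrative, made-up numbers.
# Scan cost scales with bytes read, so re-reading unchanged data is costly.
GB_PER_PB = 1_000_000            # decimal units, fine for rough estimates
estate_gb = 100 * GB_PER_PB      # a 100 PB data estate
cost_per_gb_scanned = 0.0004     # assumed $ per GB scanned (hypothetical)

full_rescan = estate_gb * cost_per_gb_scanned
incremental = 0.02 * estate_gb * cost_per_gb_scanned  # ~2% of data changed

print(f"Full rescan:        ${full_rescan:,.0f}")   # $40,000 per pass
print(f"Incremental rescan: ${incremental:,.0f}")   # $800 per pass
```

A platform that must re-read everything on every pass either blows up the bill or gets run less often; either way, the 'continuous' in continuous security quietly disappears.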

3. Accuracy

Validate results against known sensitive data. Measure false positives and false negatives explicitly. Accuracy must be quantified and repeatable.

4. Unstructured Data Depth

Test long-form, heterogeneous, real-world unstructured data, including audio, video, and more. Classification must demonstrate contextual understanding, not just keyword matching.
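
A simple way to probe this during a POV is to feed the tool documents where a sensitive keyword appears without any actual sensitive value. This hypothetical sketch shows the distinction a context-aware classifier must make:

```python
# Illustrative sketch: both snippets mention "SSN", but only one exposes
# a value. A context-aware check pairs the keyword with a value pattern.
import re

SSN_VALUE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def keyword_match(text: str) -> bool:
    return "ssn" in text.lower()

def context_aware(text: str) -> bool:
    # Flag only when the keyword co-occurs with a plausible SSN value.
    return keyword_match(text) and bool(SSN_VALUE.search(text))

policy_doc = "Never store an SSN in plaintext."       # no actual exposure
leaked_doc = "Customer SSN on file: 123-45-6789."     # real exposure

print(keyword_match(policy_doc), context_aware(policy_doc))  # True False
print(keyword_match(leaked_doc), context_aware(leaked_doc))  # True True
```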

A DSPM solution that only performs well in a limited POV will lead to painful, costly buyer’s regret. Once in production, the failures in scalability, cost efficiency, accuracy, and unstructured data depth quickly become apparent.

Getting ready to run a DSPM POV? Schedule a demo.


David Stuart

January 28, 2026 | 3 Min Read

Data Privacy Day: Why Discovery Isn’t Enough

Data Privacy Day is a good reminder for all of us in the tech world: finding sensitive data is only the first step. But in today’s environment, data is constantly moving - across cloud platforms, SaaS applications, and AI workflows. The challenge isn’t just knowing where your sensitive data lives; it’s also understanding who or what can touch it, whether that access is still appropriate, and how it changes as systems evolve.

I’ve seen firsthand that privacy breaks down not because organizations don’t care, but because access decisions are often disconnected from how data is actually being used. You can have the best policies on paper, but if they aren’t continuously enforced, they quickly become irrelevant.

Discovery is Just the Beginning

Most organizations start with data discovery. They run scans, identify sensitive files, and map out where data lives. That’s an important first step, and it’s necessary, but it’s far from sufficient. Data is not static. It moves, it gets copied, it’s accessed by humans and machines alike. Without continuously governing that access, all the discovery work in the world won’t stop privacy incidents from happening.

The next step, and the one that matters most today, is real-time governance. That means understanding and controlling access as it happens. 

Who can touch this data? Why do they have access? Is it still needed? And crucially, how do these permissions evolve as your environment changes?

Take, for example, a contractor who needs temporary access to sensitive customer data. Or an AI workflow that processes internal HR information. If those access rights aren’t continuously reviewed and enforced, a small oversight can quickly become a significant privacy risk.
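
Reduced to a minimal sketch, 'continuously reviewed and enforced' might look like the loop below; the grant records are hypothetical stand-ins for what an identity provider or cloud IAM API would return:

```python
# Hypothetical sketch of a continuous access review: flag grants that
# have expired or gone unused. Records are illustrative stand-ins for
# identity-provider or cloud IAM API output.
from datetime import datetime, timedelta

NOW = datetime(2026, 1, 28)
STALE_AFTER = timedelta(days=30)

grants = [
    {"principal": "contractor-42", "resource": "s3://customer-pii",
     "expires": datetime(2026, 1, 1), "last_used": datetime(2025, 12, 20)},
    {"principal": "hr-ai-workflow", "resource": "hr-database",
     "expires": datetime(2026, 6, 1), "last_used": datetime(2025, 11, 1)},
]

for g in grants:
    if g["expires"] < NOW:
        print(f"REVOKE {g['principal']} on {g['resource']}: grant expired")
    elif NOW - g["last_used"] > STALE_AFTER:
        print(f"REVIEW {g['principal']} on {g['resource']}: unused for 30+ days")
```

Run on a schedule, or on every permission change, a loop like this is the difference between a policy on paper and one that is actually enforced.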

Privacy in an AI and Automation Era

AI and automation are changing the way we work with data, but they also change the privacy equation. Automated processes can move and use data in ways that are difficult to monitor manually. AI models can generate insights using sensitive information without us even realizing it. This isn’t a hypothetical scenario; it’s happening right now in organizations of all sizes.

That’s why privacy cannot be treated as a once-a-year exercise or a checkbox in an audit report. It has to be embedded into daily operations, into the way data is accessed, used, and monitored. Organizations that get this right build systems that automatically enforce policies and flag unusual access - before it becomes a problem.

Beyond Compliance: Continuous Responsibility

The companies that succeed in protecting sensitive data are those that treat privacy as a continuous responsibility, not a regulatory obligation. They don’t wait for audits or compliance reviews to take action. Instead, they embed privacy into how data is accessed, shared, and used across the organization.

This approach delivers real results. It reduces risk by catching misconfigurations before they escalate. It allows teams to work confidently with data, knowing that sensitive information is protected. And it builds trust - both internally and with customers - because people know their data is being handled responsibly.

A New Mindset for Data Privacy Day

So this Data Privacy Day, I challenge organizations to think differently. The question is no longer “Do we know where our sensitive data is?” Instead, ask:

“Are we actively governing who can touch our data, every moment, everywhere it goes?”

In a world where cloud platforms, AI systems, and automated workflows touch nearly every piece of data, privacy isn’t a one-time project. It’s a continuous practice, a mindset, and a responsibility that needs to be enforced in real time.

Organizations that adopt this mindset don’t just meet compliance requirements, they gain a competitive advantage. They earn trust, strengthen security, and maintain a dynamic posture that adapts as systems and access needs evolve.

Because at the end of the day, true privacy isn’t something you achieve once a year. It’s something you maintain every day, in every process, with every decision. This Data Privacy Day, let’s commit to moving beyond discovery and audits, and make continuous data privacy the standard.


David Stuart

January 27, 2026 | 4 Min Read

DSPM for Modern Fintech: From Masking to AI-Aware Data Protection

Fintech leaders, from digital-first banks to API-driven investment platforms, face a major data dilemma today. With cloud-native architectures, real-time analytics, and the rapid integration of AI, the scale, speed, and complexity of sensitive data have skyrocketed. Fintech platforms are quickly surpassing what legacy Data Loss Prevention (DLP) and Data Security Posture Management (DSPM) tools can handle.

Why? Fintech companies now need more than surface-level safeguards. They require true depth: AI-driven data classification, dynamic masking, and fluid integrations across a massive tech stack that includes Snowflake, AWS Bedrock, and Microsoft 365. Below, we look at why DSPM in financial services is at a defining moment, what recurring pain points exist with traditional (and even many emerging) tools, and how Sentra is reimagining what the modern data protection stack should deliver.

The Pitfalls of Legacy DLP and Early DSPM in Fintech

Legacy DLP wasn’t built for fintech’s speed or expanding data footprint. These tools focus on rigid rules and tight boundaries, which aren’t equipped to handle petabyte-scale, multi-cloud, or AI-powered environments. Early DSPM tools brought some improvements in visibility, but problems persisted: incomplete data discovery, basic classification, lots of manual steps, and limited support for dynamic masking.

For fintech companies, this creates mounting regulatory risk as compliance pressures rise, and slow, manual processes lead to both security and operational headaches. Teams waste hours juggling alerts and trying to piece together patchwork fixes, often resorting to clunky add-on masking tools. The cost is obvious: a scattered protection strategy, long breach response times, and constant exposure to regulatory issues - especially as environments get more distributed and complex.

Why "Good Enough" DSPM Isn’t Enough Anymore

Change in fintech moves faster than ever, and DSPM for the financial services sector is growing at breakneck speed. But as financial applications get more sophisticated, and with cloud and AI adoption soaring, the old "good enough" DSPM falls short. Sensitive data is everywhere now: 82% of breaches involve data stored in the cloud, with 39% stretching across multi-cloud or hybrid setups, according to The Future of Data Security: Why DSPM is Here to Stay. Global data is set to exceed 181 zettabytes by 2025, raising the stakes for automation, real-time classification, and tight integration with core infrastructure.

AI and automation are no longer optional. To effectively reduce risk and keep compliance manageable and truly auditable, DSPM systems need to automate classification, masking, remediation, and reporting as a central part of operations, not as last-minute additions.

Where Most DSPM Solutions Fall Short

Fintech organizations often struggle to scale legacy or early DSPM and DLP products, especially those from emerging DSPM or large CNAPP vendors. These tools might offer broad control and AI-powered classification, but they usually require too much manual orchestration to achieve full remediation, only automate certain pieces of the workflow, and rely on separate masking add-ons.

That leads to gaps in AI and multi-cloud data context, choppy visibility, and much of the workflow stuck in manual gear - a recipe for persistent exposure of sensitive data, especially in fast-moving fintech environments.

Fintech buyers, especially those scaling quickly, also point to a crucial need: ensuring DSPM tools natively and deeply support platforms like Snowflake, AWS Bedrock, and Macie. They want automated, business-driven policy enforcement without constantly babysitting the system.

Sentra’s Next-Gen DSPM: AI-Native, Masking-Aware, and Stack-Integrated for Fintech

Sentra was created with these modern fintech challenges in mind. It offers real-time, continuous, agentless classification and deep context for cloud, SaaS, and AI-powered environments.

What makes Sentra different?

  • Petabyte-scale agentless discovery: Always-on, friction-free classification, with no heavy infrastructure or manual tweaks.
  • AI-native contextualization: Pinpoints sensitive data at a business level and connects instantly with masking policies across Snowflake, Microsoft Purview, and more.
  • Automation-driven compliance: Handles everything from discovery to masking to changing permissions, with clear, auditable reporting.
  • Integrated for modern stacks: Ready-made, with out-of-the-box connections for Snowflake, Bedrock, Microsoft 365, and the wider AWS/fintech ecosystem.

More and more fintech companies are switching to Sentra DSPM to achieve true cross-cloud visibility and meet regulations without slowing down. By plugging into fintech data flows and covering AI model pipelines, Sentra lets organizations use DSPM with the same speed as their business.

Building a Future-Ready DSPM Strategy in Financial Services

Managing and protecting sensitive data is a competitive edge for fintech, not just a security concern. With compliance rising up the agenda - 84% of IT and security leaders now list it as a top driver - your DSPM investments need to focus on automation, consistent visibility, and enforceable policies throughout your architecture.

Next-gen DSPM means: less busywork, no more juggling between masking and classification tools, and instant, actionable insight into data risk, wherever your information lives. In other words, you spend less time firefighting, move faster, and can assure partners and customers that their data is in good hands.

See How SoFi

Request a demo and technical assessment to discover how Sentra’s AI-aware DSPM can speed up both your compliance and your innovation.

Conclusion

Legacy data protection simply can’t keep up with the size, complexity, and regulatory demands of financial data today. DSPM is now table stakes - as long as it’s automated, built with AI at its core, and actively reduces risk in real time, not just points it out.

Sentra helps you move forward confidently: always-on, agentless classification, automated fixes and masking, and deep stack integration designed for the most complex fintech systems. As you build the future of financial services, your DSPM should make it easier to stay compliant, agile, and protected - no matter how quickly your technology changes.

