
Empowering Users to Self-Protect Their Data

March 27, 2025
3 Min Read
Sentra Case Study

In today’s rapidly evolving cybersecurity landscape, protecting sensitive cloud data requires more than deploying advanced security tools; it demands operationalized data security and empowered users. Organizations must reduce alert fatigue, gain visibility into sensitive data exposure, and enable data owners to take action without slowing the business.

In a recent discussion with Sapir Gottdiner, Cyber Security Architect at Global-e, we explored how Global-e approaches cloud data security using automation and Data Security Posture Management (DSPM). Operating across multiple regions and complying with strict regulations such as GDPR, PCI, and SOC 2, Global-e needed a scalable way to identify sensitive data, manage risk, and streamline remediation - without overburdening security teams.

This customer-driven perspective highlights how DSPM-powered automation and user enablement can transform data protection from a reactive process into a proactive, scalable security strategy.

Automating Security Tasks for Efficiency

“One of the primary challenges faced by any security team is keeping pace with the volume of security alerts and the effort required to address them,” said Sapir. Automating tasks that strain limited human resources is crucial for efficiency. For example, sensitive data should only exist in certain controlled environments, as improper data handling can lead to vulnerabilities. By leveraging DSPM, which acts as a validation tool, organizations can automate the detection of sensitive information stored in incorrect locations and initiate remediation processes without human intervention.

Strengthening Sensitive Data Protection

A concern identified in the discussion was Microsoft OneDrive data that may contain sensitive information being accessible to unauthorized personnel. To mitigate this, organizations should automate the creation of support tickets (in Jira, for instance) for security incidents, ensuring critical and high-risk alerts are addressed immediately. Assigning these incidents to the relevant departments and data owners ensures accountability and prompt resolution. Additionally, identifying the type and location of sensitive data enables organizations to implement precise fixes, reducing exposure risks.

Risk Management and Process Improvement

Permissioning is equally important, and organizations must establish clear procedures and policies for managing authentication credentials. In most cases, matching the response to the level of risk avoids business interruption: low-risk cases can receive quick, automatic access revocation, while critical credentials require manual verification.

Furthermore, proper data storage is an important protection factor, given sovereignty regulations, data proliferation, and similar pressures. Implementing well-defined data mapping strategies, systematically applying proper data hygiene, and ensuring data resides in the correct locations will minimize security gaps. For the future, Sapir envisions smart data mapping within O365 and deeper integrations with automated remediation workflow tools to further enhance security posture.

Continuous Review and Training

Sapir also suggests that to ensure compliance and effective security management, organizations should conduct monthly security reviews. These reviews help define when to close or suppress alerts, preventing unnecessary effort on minor issues. Additionally, policies should align with infrastructure security and regulatory compliance requirements such as GDPR, PCI, and SOC 2. Expanding security training programs is another essential step, equipping users with the knowledge to properly store and handle controlled data and avoid common security missteps. Empowering users to self-police and self-remediate allows lean security teams to scale data protection operations more efficiently.

Enhancing Communication and Future Improvements

Operationalizing data security is an ongoing effort that blends automation, process refinement, and user education. As Global-e’s experience shows, empowering users and data owners to self-protect and self-remediate sensitive data allows security teams to scale their impact while maintaining strong compliance and governance.

Since implementing Sentra’s DSPM solution, Global-e has significantly strengthened its cloud data security posture. The organization now has greater visibility into sensitive data exposure, faster remediation workflows, and reduced operational overhead for its security, IT, DevOps, and engineering teams - all while remaining compliant with global regulatory requirements.

By shifting data security closer to the people who create and use the data, and supporting them with the right DSPM tools and automation, organizations can reduce risk, improve efficiency, and build a culture of shared responsibility for data protection. User-driven data security isn’t just more scalable, it’s a competitive advantage.

<blogcta-big>

Ran is a passionate product and customer success leader with over 12 years of experience in the cybersecurity sector. He combines extensive technical knowledge with a strong passion for product innovation, research and development (R&D), and customer success to deliver robust, user-centric security solutions. His leadership journey is marked by proven managerial skills, having spearheaded multidisciplinary teams towards achieving groundbreaking innovations and fostering a culture of excellence. He started at Sentra as a Senior Product Manager and is currently the Head of Technical Account Management, located in NYC.


Nikki Ralston and Romi Minin
March 12, 2026
4 Min Read

How to Protect Sensitive Data in GCP

Protecting sensitive data in Google Cloud Platform has become a critical priority for organizations navigating cloud security complexities in 2026. As enterprises migrate workloads and adopt AI-driven technologies, understanding how to protect sensitive data in GCP is essential for maintaining compliance, preventing breaches, and ensuring business continuity. Google Cloud offers a comprehensive suite of native security tools designed to discover, classify, and safeguard critical information assets.

Key GCP Data Protection Services You Should Use

Google Cloud Platform provides several core services specifically designed to protect sensitive data across your cloud environment:

  • Cloud Key Management Service (Cloud KMS) enables you to create, manage, and control cryptographic keys for both software-based and hardware-backed encryption. Customer-Managed Encryption Keys (CMEK) give you enhanced control over the encryption lifecycle, ensuring data at rest and in transit remains secured under your direct oversight.
  • Cloud Data Loss Prevention (DLP) API automatically scans data repositories to detect personally identifiable information (PII) and other regulated data types, then applies masking, redaction, or tokenization to minimize exposure risks.
  • Secret Manager provides a centralized, auditable solution for managing API keys, passwords, and certificates, keeping secrets separate from application code while enforcing strict access controls.
  • VPC Service Controls creates security perimeters around cloud resources, limiting data exfiltration even when accounts are compromised by containing sensitive data within defined trust boundaries.

Getting Started with Sensitive Data Protection in GCP

Implementing effective data protection begins with a clear strategy. Start by identifying and classifying your sensitive data using GCP's discovery and profiling tools available through the Cloud DLP API. These tools scan your resources and generate detailed profiles showing what types of sensitive information you're storing and where it resides.

Define the scope of protection needed based on your specific data types and regulatory requirements, whether handling healthcare records subject to HIPAA, financial data governed by PCI DSS, or personal information covered by GDPR. Configure your processing approach based on operational needs: use synchronous content inspection for immediate, in-memory processing, or asynchronous methods when scanning data in BigQuery or Cloud Storage.

Implement robust Identity and Access Management (IAM) practices with role-based access controls to ensure only authorized users can access sensitive data. Configure inspection jobs by selecting the infoTypes to scan for, setting up schedules, choosing appropriate processing methods, and determining where findings are stored.
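As an illustration of what such an inspection-job configuration can look like, here is a sketch of a DLP job trigger expressed as a Python dict. The field names follow the DLP v2 `JobTrigger` resource as commonly documented, but the bucket URL, infoTypes, likelihood threshold, and schedule are placeholder assumptions; verify everything against the current API reference before using it.

```python
# Illustrative DLP v2 job-trigger configuration (a sketch, not a verified
# production config; bucket, infoTypes, and schedule are assumptions).
job_trigger = {
    "inspect_job": {
        "storage_config": {
            # Scan everything under a (placeholder) Cloud Storage bucket.
            "cloud_storage_options": {"file_set": {"url": "gs://your-bucket/**"}}
        },
        "inspect_config": {
            # Which infoTypes to look for, and the minimum match likelihood.
            "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "CREDIT_CARD_NUMBER"}],
            "min_likelihood": "POSSIBLE",
        },
        # Where findings go; DLP supports several actions (save to BigQuery,
        # publish to Pub/Sub, publish a summary to Security Command Center, ...).
        "actions": [{"publish_summary_to_cscc": {}}],
    },
    # Re-scan daily (86,400 seconds).
    "triggers": [{"schedule": {"recurrence_period_duration": {"seconds": 86400}}}],
}
```

A configuration like this would typically be passed to `CreateJobTrigger`; the schedule ensures new or changed objects are inspected on a recurring basis rather than only once.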

Using Google DLP API to Discover and Classify Sensitive Data

The Google DLP API provides comprehensive capabilities for discovering, classifying, and protecting sensitive data across your GCP projects. Enable the DLP API in your Google Cloud project and configure it to scan data stored in Cloud Storage, BigQuery, and Datastore.

Inspection and Classification

Initiate inspection jobs either on demand using methods like InspectContent or CreateDlpJob, or schedule continuous monitoring using job triggers via CreateJobTrigger. The API automatically classifies detected content by matching data against predefined "info types" or custom criteria, assigning confidence scores to help you prioritize protection efforts. Reusable inspection templates enhance classification accuracy and consistency across multiple scans.
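To make the inspect-and-classify idea concrete, here is a toy stand-in for infoType matching in plain Python. It is illustrative only: the detector patterns and confidence labels are assumptions, not the Cloud DLP API (the real API assigns likelihood levels such as LIKELY and VERY_LIKELY and ships hundreds of built-in infoTypes).

```python
import re

# Toy infoType-style detectors -- illustrative assumptions, not DLP's own patterns.
DETECTORS = {
    "EMAIL_ADDRESS": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US_SOCIAL_SECURITY_NUMBER": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def inspect(text: str) -> list[dict]:
    """Return findings: an infoType name, the matched quote, and a rough confidence."""
    findings = []
    for info_type, pattern in DETECTORS.items():
        for match in pattern.finditer(text):
            findings.append({
                "info_type": info_type,
                "quote": match.group(),
                # Crude heuristic: longer, more structured matches score higher.
                "confidence": "HIGH" if len(match.group()) >= 11 else "MEDIUM",
            })
    return findings

findings = inspect("Contact jane@example.com, SSN 123-45-6789")
```

The real API does this at scale across Cloud Storage, BigQuery, and Datastore, but the shape of the output (findings with an infoType, a quote, and a confidence) is the same concept.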

De-identification Techniques

Once sensitive data is identified, apply de-identification techniques to protect it:

  • Masking (obscuring parts of the data)
  • Redaction (completely removing sensitive segments)
  • Tokenization
  • Format-preserving encryption

These transformation techniques ensure that even if sensitive data is inadvertently exposed, it remains protected according to your organization's privacy and compliance requirements.
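The first three techniques above can be sketched in a few lines of plain Python. These are conceptual stand-ins, not DLP API calls: the function names and the salt value are assumptions, and format-preserving encryption is omitted because it requires a dedicated algorithm (such as FF1) rather than a one-liner.

```python
import hashlib

def mask(value: str, visible: int = 4) -> str:
    """Masking: obscure all but the last few characters."""
    return "*" * (len(value) - visible) + value[-visible:]

def redact(text: str, segment: str) -> str:
    """Redaction: remove the sensitive segment entirely."""
    return text.replace(segment, "[REDACTED]")

def tokenize(value: str, salt: str = "per-dataset-secret") -> str:
    """Tokenization: replace the value with a stable, non-reversible token."""
    return "tok_" + hashlib.sha256((salt + value).encode()).hexdigest()[:12]

masked = mask("4111111111111111")  # all but the last four digits obscured
```

Note that tokenization here is deterministic (the same input always yields the same token), which preserves joins and analytics on de-identified data while keeping the original value unrecoverable.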

Preventing Data Loss in Google Cloud Environments

Preventing data loss requires a multi-layered approach combining discovery, inspection, transformation, and continuous monitoring. Begin with comprehensive data discovery using the DLP API to scan your data repositories. Define scan configurations specifying which resources and infoTypes to inspect and how frequently to perform scans. Leverage both synchronous and asynchronous inspection approaches. Synchronous methods provide immediate results using content.inspect requests, while asynchronous approaches using DlpJobs suit large-scale scanning operations. Apply transformation methods, including masking, redaction, tokenization, bucketing, and date shifting, to obfuscate sensitive details while maintaining data utility for legitimate business purposes.

Combine de-identification efforts with encryption for both data at rest and in transit. Embed DLP measures into your overall security framework by integrating with role-based access controls, audit logging, and continuous monitoring. Automate these practices using the Cloud DLP API to connect inspection results with other services for streamlined policy enforcement.

Applying Data Loss Prevention in Google Workspace for GCP Workloads

Organizations using both Google Workspace and GCP can create a unified security framework by extending DLP policies across both environments. In the Google Workspace Admin console, create custom rules that detect sensitive patterns in emails, documents, and other content. These policies trigger actions like blocking sharing, issuing warnings, or notifying administrators when sensitive content is detected.

Google Workspace DLP automatically inspects content within Gmail, Drive, and Docs for data patterns matching your DLP rules. Extend this protection to your GCP workloads by integrating with Cloud DLP, feeding findings from Google Workspace into Cloud Logging, Pub/Sub, or other GCP services. This creates a consistent detection and remediation framework across your entire cloud environment, ensuring data is safeguarded both at its source and as it flows into or is processed within your Google Cloud Platform workloads.

Enhancing GCP Data Protection with Advanced Security Platforms

While GCP's native security services provide robust foundational protection, many organizations require additional capabilities to address the complexities of modern cloud and AI environments. Sentra is a cloud-native data security platform that discovers and governs sensitive data at petabyte scale inside your own environment, ensuring data never leaves your control. The platform provides complete visibility into where sensitive data lives, how it moves, and who can access it, while enforcing strict data-driven guardrails.

Sentra's in-environment architecture maps how data moves and prevents unauthorized AI access, helping enterprises securely adopt AI technologies. The platform eliminates shadow and ROT (redundant, obsolete, trivial) data, which not only secures your organization for the AI era but typically reduces cloud storage costs by approximately 20 percent. Learn more about securing sensitive data in Google Cloud with advanced data security approaches.

Understanding GCP Sensitive Data Protection Pricing

GCP Sensitive Data Protection operates on a consumption-based, pay-as-you-go pricing model. Your costs reflect the actual amount of data you scan and process, as well as the number of operations performed. When estimating your budget, consider several key factors:

How each cost factor affects pricing:

  • Data Volume: the primary cost driver; larger datasets or more frequent scans lead to higher bills
  • Operation Frequency: continuous scanning with detailed detection policies generates more processing activity
  • Feature Complexity: the specific features and policies enabled can add to processing requirements
  • Associated Resources: network or storage fees may accumulate when data processing integrates with other services

To better manage spending, estimate your expected data volume and scan frequency upfront. Apply selective scanning or filtering techniques, such as scanning only changed data or using file filters to focus on high-risk repositories. Utilize Google's pricing calculator along with cost monitoring dashboards and budget alerts to track actual usage against projections. For organizations concerned about how sensitive cloud data gets exposed, investing in proper DLP configuration can prevent costly breaches that far exceed the operational costs of protection services.
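The selective-scanning idea above can be sketched as a simple pre-filter: only re-scan objects that changed since the last run, and only file types worth inspecting. The field names (`name`, `updated`) are assumptions standing in for your object-store listing metadata (for example, GCS object listings), and the extension allow-list is illustrative.

```python
from datetime import datetime, timedelta, timezone

# Illustrative allow-list of file types worth sending to DLP inspection.
SCANNABLE = {".csv", ".json", ".txt", ".sql", ".parquet"}

def files_to_scan(objects: list[dict], last_scan: datetime) -> list[str]:
    """Keep only objects modified since the last scan, with a scannable extension."""
    return [
        o["name"] for o in objects
        if o["updated"] > last_scan
        and any(o["name"].lower().endswith(ext) for ext in SCANNABLE)
    ]
```

Filtering like this before submitting inspection jobs keeps scan volume (and therefore cost) proportional to what actually changed, rather than to the total size of your repositories.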

Successfully protecting sensitive data in GCP requires a comprehensive approach combining native Google Cloud services with strategic implementation and ongoing governance. By leveraging Cloud KMS for encryption management, the Cloud DLP API for discovery and classification, Secret Manager for credential protection, and VPC Service Controls for network segmentation, organizations can build robust defenses against data exposure and loss.

The key to effective implementation lies in developing a clear data protection strategy, automating inspection and remediation workflows, and continuously monitoring your environment as it evolves. For organizations handling sensitive data at scale or preparing for AI adoption, exploring additional GCP security tools and advanced platforms can provide the comprehensive visibility and control needed to meet both security and compliance objectives. As cloud environments grow more complex in 2026 and beyond, understanding how to protect sensitive data in GCP remains an essential capability for maintaining trust, meeting regulatory requirements, and enabling secure innovation.

<blogcta-big>

David Stuart and Romi Minin
March 11, 2026
4 Min Read

Data Security Governance in the Age of Cloud and AI

Cloud adoption, SaaS expansion, and GenAI applications are transforming how organizations approach data security governance. What was once primarily a compliance exercise is now a strategic priority. In fact, 67% of security leaders say information protection and data governance are top priorities, because they directly affect how companies protect sensitive data, manage risk, and support digital growth.

What Is Data Security Governance?

Data security governance is the framework of policies, technologies, and processes organizations use to protect sensitive data, control access, ensure regulatory compliance, and reduce risk across cloud, SaaS, and on-prem environments. It combines data discovery, classification, access governance, monitoring, and incident response to ensure that the right users can access the right data - securely and responsibly. As data environments expand across cloud platforms, SaaS applications, and AI systems, effective governance helps organizations maintain visibility, enforce policies, and respond quickly to emerging threats.

Quick Answer: What Makes Data Security Governance Effective?

Effective data security governance programs typically include five key elements:

  • Continuous data discovery and classification
  • Strong data access governance
  • AI-driven monitoring and risk detection
  • Zero trust security controls
  • Clear policies supported by a security-first culture

Organizations that combine these capabilities gain better visibility into sensitive data, reduce exposure risks, and strengthen compliance across complex cloud environments. But the landscape is evolving quickly. Security leaders must manage growing cloud ecosystems, keep up with complex regulations, and respond to new threats while maintaining business agility. Sentra offers a streamlined approach: unified, agentless data security governance that connects visibility, automation, and intelligent threat response.

Here are five steps to building an effective governance program in 2026 and beyond.

1. Lay the Foundation: Build a Governance Program That Evolves

Effective data security governance begins with a strong organizational foundation. As data environments expand across cloud platforms, SaaS applications, and AI systems, organizations need structured governance programs that define how sensitive data is discovered, classified, accessed, and protected.

Adoption is rapidly increasing. Today, 71% of organizations report having a formal data governance program in place, reflecting growing recognition that coordinated governance improves data quality, analytics, and compliance outcomes. However, effective data security governance frameworks cannot remain static. They must evolve alongside business operations, regulatory requirements, and emerging technologies.

Organizations should establish:

  • Clear data ownership and accountability
  • Policies for data classification, access control, and retention
  • Strong collaboration between security teams, IT, data teams, and business stakeholders

Security leaders should also conduct regular governance reviews, measure risk reduction and compliance outcomes, and continuously refine policies as data usage expands. Sentra helps organizations strengthen this foundation by providing unified visibility into sensitive data across cloud, SaaS, and on-prem environments, enabling teams to align governance policies with real-world data risk.

2. Automate Data Security Governance with AI

Manual governance processes cannot scale with today’s massive data volumes and complex cloud environments.

Leading organizations are increasingly adopting AI-driven data security governance to automate critical tasks such as:

  • Sensitive data discovery and classification
  • Automated metadata tagging
  • Anomaly detection and threat identification
  • Policy enforcement and data masking

These capabilities embed security directly into operational workflows and significantly reduce manual overhead. Sentra combines Data Security Posture Management (DSPM), Data Access Governance (DAG), and Data Detection & Response (DDR) into a unified, agentless platform.

Security teams gain:

  • Real-time visibility into sensitive cloud and SaaS data
  • Detailed access mapping across identities and systems
  • Rapid remediation for misconfigurations and excessive permissions

This automation allows security teams to focus on strategy instead of constant reactive firefighting.

3. Implement Zero Trust and Manage GenAI & SaaS Data Exposure

The rapid adoption of GenAI and SaaS tools introduces new governance challenges. Many organizations face risks from shadow AI, where employees use AI tools (e.g., Copilot) without security oversight. Gartner predicts that 40% of enterprises will experience security or compliance incidents due to “shadow AI” by 2030. Modern data security governance frameworks should apply zero trust principles, which assume risk is always present.

Key practices include:

  • Inventory and monitor both sensitive data and the AI tools accessing it
  • Continuous monitoring of data access behavior
  • Detecting unusual activity and privilege misuse
  • Identifying excessive permissions and dormant accounts

Sentra’s automated risk scans and access controls help organizations quickly detect exposures and ensure both traditional and AI-generated data remain governed and protected.

4. Unify Identity Governance and Privacy Controls

As automation and AI expand, the distinction between human and machine identities is becoming increasingly blurred. Modern data security governance programs must manage both. Many data breaches originate from credential misuse, excessive permissions, or compromised identities, making identity governance a critical part of protecting sensitive data.

Effective programs should:

  • Map identities to the data they access
  • Enforce least-privilege access controls
  • Monitor identity activity across environments
  • Automate privacy and data protection policies

Sentra enables organizations to unify identity governance with data security by mapping every user, application, and machine identity to sensitive data assets. This reduces risk, strengthens compliance, and limits the impact of credential abuse or privilege creep.

5. Foster a Security-First Culture and Business Alignment

Technology alone cannot ensure effective data security governance. People and processes are equally critical. Organizations that succeed build a security-first culture where employees understand policies, participate in training, and recognize their role in protecting data.

Leading organizations embed governance responsibilities across departments, aligning security with:

  • Digital transformation initiatives
  • Regulatory compliance requirements
  • ESG commitments
  • Customer trust and brand reputation

Sentra customers achieve this by integrating governance into everyday business workflows, enabling innovation while maintaining strong risk controls.

Key Takeaways: Building Effective Data Security Governance

  • Data security governance protects sensitive information through policies, monitoring, and access controls across cloud and SaaS environments.
  • Modern governance programs rely on AI-driven automation for classification, monitoring, and risk detection.
  • Zero trust security models help detect abnormal data access and reduce risk from excessive permissions.
  • Identity governance ensures both human and machine identities only access the data they need.
  • Strong governance requires both technology and organizational alignment.

Conclusion

In the age of cloud computing, SaaS expansion, and AI innovation, data security governance has become a critical driver of secure business growth. Organizations that combine strong governance foundations, AI-driven automation, zero trust principles, and identity-aware security can better protect sensitive data while enabling innovation. By following these five steps and adopting unified platforms, companies can reduce risk, maintain compliance, and confidently scale their digital initiatives.

Want to unify data visibility, automate governance, and secure cloud and AI data? Schedule a personalized demo to see how Sentra’s DSPM + DDR platform accelerates modern data security governance.

<blogcta-big>

Dean Taler
March 11, 2026
3 Min Read

Archive Scanning for Cloud Data Security: Stop Ignoring Compressed Files

If you care about cloud data security, you cannot afford to treat compressed files as opaque blobs. Archive scanning for cloud data security is no longer a nice‑to‑have — it’s a prerequisite for any credible data security posture.

Every environment I’ve seen at scale looks the same: thousands of ZIP files in S3 buckets, TAR.GZ backups in Azure Blob, JARs and DEBs in artifact repositories, and old GZIP‑compressed database dumps nobody remembers creating. These archives are the digital equivalent of sealed boxes in a warehouse. Most tools walk right past them.

Attackers don’t.

Archives: Where Sensitive Data Goes to Disappear

Think about how your teams actually use compressed files:

  • An engineer zips up a project directory — complete with .env files and API keys — and uploads it to shared storage.
  • A DBA compresses a production database backup holding millions of customer records and drops it into an internal bucket.
  • A departing employee packs a folder of financial reports into a RAR file and moves it to a personal account.

None of this is hypothetical. It happens every day, and it creates a perfect hiding place for:

  • Bulk data exfiltration – a single ZIP can contain thousands of PII‑rich documents, financial reports, or IP.
  • Nested archives – ZIP‑inside‑ZIP‑inside‑TAR.GZ is normal in automated build and backup pipelines. One‑layer scanners never see what’s inside.
  • Password‑protected archives – if your tool silently skips encrypted ZIPs, you’re ignoring what could be the highest‑risk file in your environment.
  • Software artifacts with secrets – JARs and DEBs often carry config files with embedded credentials and tokens.
  • Old backups – that three‑year‑old compressed backup may contain an unmasked database nobody has reviewed since it was created.

If your data security platform cannot see inside compressed files, you don’t actually have end‑to‑end data visibility. Full stop.

Why Archive Scanning for Cloud Data Security Is Hard

The problem isn’t just volume — it’s structure and diversity.

Real cloud environments contain:

  • ZIP / JAR / CSZ
  • RAR (including multi‑part R00/R01 sets)
  • 7Z
  • TAR and TAR.GZ / TAR.BZ2 / TAR.XZ
  • Standalone compression formats like GZIP, BZ2, XZ/LZMA, LZ4, ZLIB
  • Package formats like DEB that are themselves layered archives

Most legacy tools treat all of this as “a file with an unknown blob of bytes.” At best, they record that the archive exists. They don’t recursively extract layers, don’t traverse internal structures, and don’t feed the inner files back into the same classification engine they use for documents or databases.

That gap becomes larger every quarter, as more data gets compressed to save money and speed up transfer.

How Sentra Does Archive Scanning All the Way Down

In Sentra, we treat archives and compressed files as first‑class citizens in the parsing and classification pipeline.

Full Archive and Compression Format Coverage

Our archive scanning engine supports the full range of formats we see in real‑world cloud workloads:

  • ZIP (including JAR and CSZ)
  • RAR (including multi‑part sets)
  • 7Z
  • TAR
  • GZ / GZIP
  • BZ2
  • XZ / LZMA
  • LZ4
  • ZLIB / ZZ
  • DEB and other layered package formats

Each reader is implemented as a composite reader. When Sentra encounters an archive, we don’t just log its presence. We:

  1. Open the archive.
  2. Iterate every entry.
  3. Hand each inner file back into the global parsing pipeline.
  4. If the inner file is itself an archive, we repeat the process until there are no more layers.

A TAR.GZ containing a ZIP containing a CSV with customer records is not an edge case. It’s Tuesday. Sentra will find the CSV and classify the records correctly.
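The recursion described above can be sketched with Python's standard library. This is a minimal illustration of the technique, not Sentra's engine: it only handles ZIP and TAR variants (the real pipeline covers many more formats), but it shows the core pattern of in-memory, depth-limited recursive extraction.

```python
import io
import tarfile
import zipfile

def walk_archive(name: str, data: bytes, depth: int = 0, max_depth: int = 10):
    """Yield (path, bytes) for every leaf file, recursing into nested archives in memory."""
    if depth > max_depth:  # guard against archive bombs
        return
    if zipfile.is_zipfile(io.BytesIO(data)):
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            for info in zf.infolist():
                if not info.is_dir():
                    yield from walk_archive(f"{name}/{info.filename}",
                                            zf.read(info), depth + 1, max_depth)
    elif _is_tar(data):
        # Mode "r" transparently handles .tar, .tar.gz, .tar.bz2, and .tar.xz.
        with tarfile.open(fileobj=io.BytesIO(data)) as tf:
            for member in tf.getmembers():
                if member.isfile():
                    yield from walk_archive(f"{name}/{member.name}",
                                            tf.extractfile(member).read(),
                                            depth + 1, max_depth)
    else:
        yield name, data  # a leaf file: hand it to the classification pipeline

def _is_tar(data: bytes) -> bool:
    try:
        with tarfile.open(fileobj=io.BytesIO(data)):
            return True
    except (tarfile.TarError, EOFError, OSError):
        return False
```

Every inner file flows back through the same entry point, so a CSV three layers down is classified exactly like a CSV sitting loose in a bucket.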

Encryption Detection Without Decryption

Password‑protected archives are dangerous precisely because they’re opaque.

When Sentra hits an encrypted ZIP or RAR, we don’t shrug and move on. We detect encryption by inspecting archive metadata and entry‑level flags, then surface:

  • That the archive is encrypted
  • Where it lives
  • How large it is

We don’t attempt to brute‑force passwords or exfiltrate content. But we do make encrypted archives visible so they can be governed: flagged as high‑risk, pulled into investigations, or subject to separate key‑management policies.
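For ZIP files specifically, encryption detection from metadata alone is straightforward: bit 0 of each entry's general-purpose flag field marks traditional ZIP encryption. A minimal sketch of the idea (not Sentra's implementation) using Python's standard library:

```python
import io
import zipfile

def report_encrypted_zip(name: str, data: bytes) -> dict:
    """Flag encrypted ZIP entries from central-directory metadata, without decrypting."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        # Bit 0 of the general-purpose flag is set on encrypted entries.
        encrypted = any(info.flag_bits & 0x1 for info in zf.infolist())
    return {"archive": name, "encrypted": encrypted, "size_bytes": len(data)}
```

No content is read or decrypted; the report comes entirely from entry-level flags, which is exactly what makes the archive governable even when it stays opaque.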

Intelligent File Prioritization Inside Archives

Not every file inside an archive has the same risk profile. A tarball full of binaries and images is very different from one full of CSVs and PDFs.

Sentra implements file‑type–aware prioritization inside archives. We scan high‑value targets first — formats associated with PII, PCI, PHI, or sensitive business data — before we get to low‑risk assets.

This matters when you’re scanning multi‑gigabyte archives under time or budget constraints. You want the most important findings first, not after you’ve chewed through 40,000 icons and object files.
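As a sketch of what file-type-aware ordering looks like, consider sorting archive entries by an extension-based risk tier. The tiers below are illustrative assumptions, not Sentra's actual ordering:

```python
import os

# Illustrative risk tiers: lower numbers are scanned first.
RISK_TIER = {
    ".csv": 0, ".xlsx": 0, ".sql": 0,   # structured data: often PII/PCI/PHI
    ".pdf": 1, ".docx": 1, ".json": 1,  # documents and configs
    ".log": 2, ".txt": 2,               # free text
}
LOW_PRIORITY = 3                         # binaries, images, icons, object files

def scan_order(entries: list[str]) -> list[str]:
    """Order archive entries so likely-sensitive formats are inspected first."""
    return sorted(entries,
                  key=lambda p: RISK_TIER.get(os.path.splitext(p)[1].lower(),
                                              LOW_PRIORITY))
```

Under a time or budget cap, processing `scan_order(...)` front to back means the database dump surfaces before the ten thousandth icon does.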

In‑Memory Processing for Security and Speed

All archive processing in Sentra happens in memory. We don’t unpack archives to temporary disk locations or leave extracted debris lying around in scratch directories.

That gives you two benefits:

  • Performance – we avoid disk I/O overhead when dealing with massive archives.
  • Security – we don’t create yet another copy of the sensitive data you’re trying to control.

For a data security platform, that design choice is non‑negotiable.

Compliance: Auditors Don’t Accept “We Skipped the Zips”

Regulations like GDPR, CCPA, HIPAA, and PCI DSS don’t carve out exceptions for compressed files. If personal health information is sitting in a GZIP’d database dump in S3, or cardholder data is archived in a ZIP on a shared drive, you are still accountable.

Auditors won’t accept “we scanned everything except the compressed files” as a defensible position.

Sentra’s archive scanning closes this gap. Across major cloud providers and archive formats, we give you end‑to‑end visibility into compressed and archived data — recursively, intelligently, and without blind spots.

Because the most dangerous data exposure in your cloud is often the one hiding just one ZIP layer deep.
