All Resources
In this article:
minus iconplus icon
Share the Blog

AI in Data Security: Guardian Angel or Trojan Horse?

March 4, 2025
4
Min Read
AI and ML

Artificial intelligence (AI) is transforming industries, empowering companies to achieve greater efficiency, and maintain a competitive edge. But here’s the catch: although AI unlocks unprecedented opportunities, its rapid adoption also introduces complex challenges, especially for data security and privacy.

 

How do you accelerate transformation without compromising the integrity of your data? How do you harness AI’s power without it becoming a threat?

For security leaders, AI presents this very paradox. It is a powerful tool for mitigating risk through better detection of sensitive data, more accurate classification, and real-time response. However, it also introduces complex new risks, including expanded attack surfaces, sophisticated threat vectors, and compliance challenges.

As AI becomes ubiquitous and enterprise data systems become increasingly distributed, organizations must navigate the complexities of the big-data AI era to scale AI adoption safely. 

This article explores the key data security challenges introduced by AI and outlines practical strategies organizations can use to protect sensitive data while safely scaling AI adoption.

AI and Data Security in 2025: Why the Risk Profile Has Changed

AI-driven systems in 2025 operate at a fundamentally different scale than earlier generations. Generative AI, large language models (LLMs), and AI-powered automation now interact directly with sensitive enterprise data across cloud platforms, SaaS tools, and distributed data stores.

Several shifts define the modern AI data security landscape:

  • AI systems increasingly consume and generate sensitive data in real time
  • Employees regularly interact with external and internal AI tools
  • Cloud-native and multi-environment data architectures reduce centralized visibility
  • Threat actors now leverage AI to automate and amplify attacks

As a result, traditional perimeter-based or reactive security approaches are no longer sufficient. Organizations must adopt continuous data discovery, governance, and monitoring to manage AI-driven data risk effectively.

The Emerging Challenges for Data Security with AI

AI-driven systems are driven by vast amounts of data, but this reliance introduces significant security risks - both from internal AI usage and external client-side AI applications. As organizations integrate AI deeper into their operations, security leaders must recognize and mitigate the growing vulnerabilities that come with it.

Below, we outline the four biggest AI security challenges that will shape how you protect data and how you can address them.

1. Expanded Attack Surfaces

AI’s dependence on massive datasets, often unstructured and spread across cloud environments—creates an expansive attack surface. This data sprawl increases exposure to adversarial threats, such as model inversion attacks, where bad actors can reverse-engineer AI models to extract sensitive attributes or even re-identify anonymized data.

To put this in perspective, an AI system trained on healthcare data could inadvertently leak protected health information (PHI) if improperly secured. As adversaries refine their techniques, protecting AI models from data leakage must be a top priority.

For a detailed analysis of this challenge, refer to NIST’s report,Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations.

2. Sophisticated and Evolving Threat Landscape

The same AI advancements that enable organizations to improve detection and response are also empowering threat actors. Attackers are leveraging AI to automate and enhance malicious campaigns, from highly targeted phishing attacks to AI-generated malware and deepfake fraud.

According to StrongDM's “The State of AI in Cybersecurity Report,” 65% of security professionals believe their organizations are unprepared for AI-driven threats. This highlights a critical gap: while AI-powered defenses continue to improve, attackers are innovating just as fast - if not faster. Organizations must adopt AI-driven security tools and proactive defense strategies to keep pace with this rapidly evolving threat landscape.

3. Data Privacy and Compliance Risks

AI’s reliance on large datasets introduces compliance risks for organizations bound by regulations such as GDPR, CCPA, or HIPAA. Improper handling of sensitive data within AI models can lead to regulatory violations, fines, and reputational damage. One of the biggest challenges is AI’s opacity, in many cases, organizations lack full visibility into how AI systems process, store, and generate insights from data. This makes it difficult to prove compliance, implement effective governance, or ensure that AI applications don’t inadvertently expose personally identifiable information (PII). As regulatory scrutiny on AI increases, businesses must prioritize AI-specific security policies and governance frameworks to mitigate legal and compliance risks.

4. Risk of Unintentional Data Exposure

Even without malicious intent, generative AI models can unintentionally leak sensitive or proprietary data. For instance, employees using AI tools may unknowingly input confidential information into public models, which could then become part of the model’s training data and later be disclosed through the model’s outputs. Generative AI models, especially large language models (LLMs)—are particularly susceptible to data extrapolation attacks, where adversaries manipulate prompts to extract hidden information.

Techniques like “divergence attacks” on ChatGPT can expose training data, including sensitive enterprise knowledge or personally identifiable information. The risks are real, and the pace of AI adoption makes data security awareness across the organization more critical than ever.

For further insights, explore our analysis of “Emerging Data Security Challenges in the LLM Era.”

Top 5 Strategies for Securing Your Data with AI

To integrate AI responsibly into your security posture, companies today need a proactive approach is essential. Below we outline five key strategies to maximize AI’s benefits while mitigating the risks posed by evolving threats. When implemented holistically, these strategies will empower you to leverage AI’s full potential while keeping your data secure.

1. Data Minimization, Masking, and Encryption

The most effective way to reduce risk exposure is by minimizing sensitive data usage whenever possible. Avoid storing or processing sensitive data unless absolutely necessary. Instead, use techniques like synthetic data generation and anonymization to replace sensitive values during AI training and analysis.

When sensitive data must be retained, data masking techniques, such as name substitution or data shuffling—help protect confidentiality while preserving data utility. However, if data must remain intact, end-to-end encryption is critical. Encrypt data both in transit and at rest, especially in cloud or third-party environments, to prevent unauthorized access.

2. Data Governance and Compliance with AI-SPM

Governance and compliance frameworks must evolve to account for AI-driven data processing. AI Security Posture Management (AI-SPM) tools help automate compliance monitoring and enforce governance policies across hybrid and cloud environments. 

AI-SPM tools enable:

  • Automated data lineage mapping to track how sensitive data flows through AI systems.
  • Proactive compliance monitoring to flag data access violations and regulatory risks before they become liabilities.

By integrating AI-SPM into your security program, you ensure that AI-powered workflows remain compliant, transparent, and properly governed throughout their lifecycle.

3. Secure Use of AI Cloud Tools

AI cloud tools accelerate AI adoption, but they also introduce unique security risks. Whether you’re developing custom models or leveraging pre-trained APIs, choosing trusted providers like Amazon Bedrock or Google’s Vertex AI ensures built-in security protections. 

However, third-party security is not a substitute for internal controls. To safeguard sensitive workloads, your organization should:

  • Implement strict encryption policies for all AI cloud interactions.
  • Enforce data isolation to prevent unauthorized access.
  • Regularly review vendor agreements and security guarantees to ensure compliance with internal policies.

Cloud AI tools can enhance your security posture, but always review the guarantees of your AI providers (e.g., OpenAI's security and privacy page) and regularly review vendor agreements to ensure alignment with your company’s security policies.

4. Risk Assessments and Red Team Testing

While offline assessments provide an initial security check, AI models behave differently in live environments—introducing unpredictable risks. Continuous risk assessments are critical for detecting vulnerabilities, including adversarial threats and data leakage risks.

Additionally, red team exercises simulate real-world AI attacks before threat actors can exploit weaknesses. A proactive testing cycle ensures AI models remain resilient against emerging threats.

To maintain AI security over time, adopt a continuous feedback loop—incorporating lessons learned from each assessment to strengthen your AI systems

5. Organization-Wide AI Usage Guidelines

AI security isn’t just a technical challenge—it’s an organizational imperative. To democratize AI security, companies must embed AI risk awareness across all teams.

  • Establish clear AI usage policies based on zero trust and least privilege principles.
  • Define strict guidelines for data sharing with AI platforms to prevent shadow AI risks.
  • Integrate AI security into broader cybersecurity training to educate employees on emerging AI threats.

By fostering a security-first culture, organizations can mitigate AI risks at scale and ensure that security teams, developers, and business leaders align on responsible AI practices.

Key Takeaways: Moving Towards Proactive AI Security 

AI is transforming how we manage and protect data, but it also introduces new risks that demand ongoing vigilance. By taking a proactive, security-first approach, you can stay ahead of AI-driven threats and build a resilient, future-ready AI security framework.

AI integration is no longer optional for modern enterprises, it is both inevitable and transformative. While AI offers immense potential, particularly in security applications, it also introduces significant risks, especially around data security. Organizations that fail to address these challenges proactively risk increased exposure to evolving threats, compliance failures, and operational disruptions.

By implementing strategies such as data minimization, strong governance, and secure AI adoption, organizations can mitigate these risks while leveraging AI’s full potential. A proactive security approach ensures that AI enhances—not compromises—your overall cybersecurity posture. As AI-driven threats evolve, investing in comprehensive, AI-aware security measures is not just a best practice but a competitive necessity. Sentra’s Data Security Platform provides the necessary visibility and control, integrating advanced AI security capabilities to protect sensitive data across distributed environments.

To learn how Sentra can strengthen your organization’s AI security posture with continuous discovery, automated classification, threat monitoring, and real-time remediation, request a demo today.

<blogcta-big>

Discover Ron’s expertise, shaped by over 20 years of hands-on tech and leadership experience in cybersecurity, cloud, big data, and machine learning. As a serial entrepreneur and seed investor, Ron has contributed to the success of several startups, including Axonius, Firefly, Guardio, Talon Cyber Security, and Lightricks, after founding a company acquired by Oracle.

Subscribe

Latest Blog Posts

Team Sentra
Team Sentra
April 24, 2026
3
Min Read
AI and ML

Patchwork AI Security vs. Purpose-Built Protection: Thoughts on Cyera’s Ryft Acquisition

Patchwork AI Security vs. Purpose-Built Protection: Thoughts on Cyera’s Ryft Acquisition

Yesterday’s news that Cyera is acquiring Ryft, a two-year-old startup building automated data lakes for AI agents, is the latest sign of how fast the agentic AI security market is moving. It’s also Cyera’s fourth acquisition in five years, on the heels of Trail Security and Otterize, a clear signal that the company is trying to buy its way into new narratives as quickly as they emerge.

For security and data leaders, the question isn’t “Is agentic AI important?” It absolutely is. The question is: What’s the real cost of stitching together yet another acquisition into an already complex platform?

The hidden cost of rapid, piecemeal integrations

On paper, adding Ryft gives Cyera a new story around “agentic AI security.” In practice, it creates a familiar set of integration problems:

  • Multiple architectures to reconcile
    Trail Security, Otterize, and now Ryft were all built as independent products with their own data models, UX patterns, and engineering roadmaps. Four acquisitions in five years means customers are effectively buying an integration project that’s still in progress, not a single, mature platform.

  • Gaps, overlaps, and inconsistent controls
    Every acquired module has its own blind spots and strengths. Until they’re truly unified, you get overlapping coverage in some areas, gaps in others, and policy engines that don’t behave consistently across cloud, SaaS, and on-prem.

  • Slower time-to-value for AI initiatives
    AI programs move quickly; integrations do not. Each acquisition has to be wired into discovery, classification, policy, reporting, access control, and remediation workflows before it delivers real value. That’s measured in quarters and years, not weeks.

  • Operational drag on security teams
    When you tie together multiple acquired engines, you often see scan-based coverage, noisy false positives, and limited self-serve reporting that still depends on the vendor’s team to interpret results. That’s the opposite of what already stretched security teams need as they take on AI data risk.

The Ryft deal fits this pattern. It’s a high-priced bet on an early-stage team with a small set of digital-native customers, not a proven, enterprise-scale AI data security engine. That’s fine as a venture bet. It’s more problematic when packaged as an answer for Fortune 500 AI governance.

Why agentic AI security can’t be bolted on

Agentic AI changes the risk profile of enterprise data:

  • Agents traverse structured and unstructured data across cloud, SaaS, and on-prem.
  • They act on behalf of identities, often chaining tools and APIs in ways that are hard to predict.
  • The blast radius of a misconfiguration or over-permissioned identity grows dramatically once agents are in the loop.

Trying to solve that by bolting an AI data lake acquisition onto a legacy, scan-based DSPM engine is risky. You’re adding another moving part on top of a system that already struggles with:

  • Point-in-time scans instead of real-time, continuous coverage
  • High false positives without strong prioritization
  • Shallow support for hybrid and on-prem environments
  • Vendor-controlled workflows instead of customer-controlled, self-serve reporting

If the underlying platform can’t continuously understand where sensitive data lives, which identities can touch it, and how that access is used, then adding an “AI data lake” on the side doesn’t fix the fundamentals. It just adds another place for risk to hide.

A different path: Sentra’s purpose-built, real-time platform

At Sentra, we took a different approach from day one: build a single, in-place, real-time data security platform, not a patchwork of stitched-together acquisitions.

A few principles guide the way we think about AI and data security:

  • One unified architecture
    Sentra is a purpose-built, unified platform, not an assortment of logos held together by integration roadmaps. There’s one architecture, one data model, one roadmap, and one team focused entirely on DSPM and AI data security, rather than a set of acquired point products that still need to be woven together.

  • Proven for real AI workloads today
    Our platform is already securing real AI workloads in production environments, rather than depending on the future maturation of a seed-stage acquisition. AI data security for us is not a sidecar story. It's built into how we discover, classify, govern, and remediate risk across your estate.

  • Higher-precision signal, not more noise
    Sentra delivers higher classification precision (4.9 vs. 4.7 stars on Gartner) and couples that with workflows your team controls, not processes that require vendor intervention every time you need a new report or policy tweak.

  • Complete coverage for complex environments
    Modern enterprises aren’t cloud-only. Sentra provides full coverage across IaaS, PaaS, SaaS, and on-premises from a single platform, built for hybrid and legacy-heavy environments as much as for cloud-native stacks.

In other words, while some vendors are racing to acquire their way into the next AI buzzword, Sentra is focused on delivering trustworthy, real-time, identity-aware data security that you can put in front of a CISO and a data platform owner today.

What to ask your vendors now

If you’re evaluating Cyera (or any vendor riding the latest AI acquisition wave), a few concrete questions can cut through the noise:

  1. How many acquisitions have you done in the last five years, and which parts of my deployment depend on those integrations actually working?
  2. What’s fully integrated and running in production today vs. what’s still on the roadmap?
  3. Are my AI and non-AI data risks handled by the same platform, policies, and reporting, or by separate acquired modules?
  4. Do you provide continuous coverage and identity-aware controls across cloud, SaaS, and on-prem, or am I still relying on periodic scans and partial visibility?

The AI security market doesn’t need more logos; it needs fewer moving parts, better signals, and real-time control over how data is used by humans and agents alike.

That’s the standard Sentra is building for and the lens through which we view every new acquisition announcement in this space.

Read More
Ron Reiter
Ron Reiter
April 24, 2026
3
Min Read
Data Security

Sentra Now Supports Solidworks 3D CAD Files – Protecting the Digital Blueprint in the Age of AI

Sentra Now Supports Solidworks 3D CAD Files – Protecting the Digital Blueprint in the Age of AI

Walk into any advanced manufacturing, aerospace, defense, or industrial design shop and you’re just as likely to see Solidworks as you are AutoCAD. The models, assemblies, and drawings built in Solidworks are the digital blueprints for everything from turbine blades and medical devices to satellites and weapons systems.

Earlier this year we announced native support for AutoCAD DWG files, making an entire class of previously opaque CAD data visible to security and compliance teams for the first time. Now we’re extending that same deep visibility to Solidworks 3D CAD files, so you can protect the IP and regulated technical data hiding inside your .sldprt, .sldasm, and related content—without slowing engineering down.

And as AI accelerates design cycles, that visibility is no longer optional.

AI is Supercharging Design – and Expanding the Blast Radius

Design teams are pushing faster than ever:

  • Generative design tools propose entire families of parts and assemblies.
  • Copilots summarize requirements, suggest changes, and draft documentation off CAD models.
  • PLM-integrated agents automatically create downstream artifacts—quotes, NC programs, service manuals—based on 3D designs.
  • RAG-style internal assistants answer questions using a mix of project docs, CAD files, and simulation outputs.

All of this is powerful. It also multiplies the ways sensitive CAD data can leak:

  • Entire assemblies uploaded to unmanaged AI tools “just to explore options.”
  • Export-controlled models referenced in prompts and ending up in long‑lived AI data lakes.
  • Supplier and customer CAD shared into external copilots with little visibility into who—or what agent—can access it.
  • Rich metadata from CAD (usernames, project codes, server paths, partner names) silently turned into reconnaissance material.

If you don’t understand what’s inside your CAD, where it lives, and which identities and AI agents can reach it, AI doesn’t just speed up design—it speeds up IP disclosure, compliance failures, and supply‑chain exposure.

CAD Has Been a Blind Spot for Security

Most traditional DSPM and DLP tools still treat specialized engineering formats as a big binary blob: “probably sensitive, treat with caution.” That may have been acceptable when CAD lived on a handful of on‑prem engineering servers.

It’s not acceptable when:

  • Decades of CAD history have been lifted and shifted into S3, Azure Blob, or SharePoint.
  • ITAR/EAR “technical data” now lives side‑by‑side with everyday project files in cloud object stores.
  • Those same repositories feed downstream systems—PLM, MES, AI assistants—where traditional security tools have little or no visibility.

We built native DWG parsing into Sentra to break that stalemate, making CAD content as transparent to security teams as a Word document. Solidworks 3D CAD support is the next logical step.

What’s Really Inside a Solidworks 3D CAD File?

Like DWG, a Solidworks file is far more than geometry. It’s a container for rich metadata, text, and structural context that describes both what you’re building and how it fits into regulated programs and commercial IP. Our Solidworks support is designed to surface that security‑relevant context—without requiring CAD tools, manual exports, or data movement.

Similar to what we do for DWG, Sentra can extract and analyze key elements, including:

  • Document properties
    Authors, “last saved by,” creation and modification timestamps, total editing time, and revision counters—signals that help you understand who is touching sensitive designs and when.

  • Custom properties and configuration metadata
    Project IDs, part and assembly numbers, revision codes, program names, business units, and export‑control or classification markings encoded as custom properties or notes.

  • Text content and annotations
    Notes, callouts, PMI, and embedded text that often contain material specifications, tolerances, customer names, contract IDs, and phrases like “COMPANY CONFIDENTIAL,” “EXPORT CONTROLLED,” or ITAR statements.

  • Assembly structure and component names
    Which parts roll up into which assemblies, and how those components are named—critical when you need to understand which physical systems a given sensitive model belongs to.

  • File dependencies and paths
    References to drawings, configurations, libraries, and external resources that routinely expose server names, share paths, usernames, and department structures—goldmine context for attackers, but also for incident response and insider‑risk investigations.

For organizations operating under ITAR and EAR, this is where truly export‑controlled technical data actually lives—not in the folder name, but in the title blocks, annotations, and metadata attached to models and drawings.

Turning Solidworks Models into Actionable Security Signals

By parsing Solidworks 3D CAD files in place, inside your own cloud accounts or VPCs, Sentra can now treat them as first‑class citizens in your data security program—just like we do for DWG and other specialized formats.

That unlocks concrete use cases, such as:

  • Finding export‑controlled or highly sensitive designs in cloud storage
    Automatically surface Solidworks files whose metadata, annotations, or custom properties contain ITAR statements, ECCN codes, proprietary markings, or customer‑confidential labels—so you can focus remediation on the drawings and models that are actually regulated.

  • Mapping who (and what) can access critical designs
    Combine CAD‑aware classification with Sentra’s DSPM and DAG capabilities to answer:
    Where are our most sensitive Solidworks assemblies stored, and which identities, service principals, and AI agents can currently reach them?

  • Monitoring AI and collaboration workflows for IP exposure
    Track when Solidworks files that contain regulated or high‑value IP are moved into AI data lakes, shared via collaboration platforms, or accessed by non‑human identities—so DDR policies can flag, quarantine, or route for review before they turn into public incidents.

  • Building a defensible audit trail for CAD‑resident technical data
    Maintain an inventory of Solidworks files that contain export‑control markings or IP‑critical content, tie each file to its exact storage location and access controls, and surface any out‑of‑policy placements—so when auditors ask “Where is your technical data?”, you can answer with data, not slideware.

Closing the Gap Between “Stored” and “Understood” for 3D CAD

As workloads like EDA, PLM, simulation, and AI‑assisted design move deeper into the cloud, the number of specialized formats in your environment explodes. Most tools still only truly understand emails, office documents, and a narrow slice of structured data.

The reality is simple: you cannot secure data you don’t understand. Understanding means being able to answer, at scale, not just “Where is this file?” but “What is inside this file, how sensitive is it, and how is AI amplifying its risk?”

For organizations whose crown‑jewel IP and export‑controlled technical data live in Solidworks 3D CAD, that’s the gap Sentra is now closing.

If you want to see what’s actually hiding inside your own Solidworks models and assemblies, the easiest next step is to run a focused assessment: pick a few representative buckets or repositories, let Sentra scan those CAD files in place, and review the inventory of regulated and high‑value designs that surfaces.

Chances are, once you’ve seen that map—and how it connects to your AI initiatives—you’ll never look at “just another CAD file” the same way again.

Read More
Yair Cohen
Yair Cohen
David Stuart
David Stuart
April 15, 2026
3
Min Read
Data Sprawl

Fiverr Data Breach: Beyond Misconfigured Buckets and the Data Sprawl That Made It Inevitable

Fiverr Data Breach: Beyond Misconfigured Buckets and the Data Sprawl That Made It Inevitable

Fiverr’s recent data breach/data exposure left tax forms, IDs, contracts, and even credentials publicly accessible and indexed by Google via misconfigured Cloudinary URLs.

This post explains what happened, why data sprawl across third-party services made it inevitable, and how to prevent the next Fiverr-style leak.

The Fiverr data breach is a textbook case of sensitive data sprawl and misconfigured third‑party infrastructure: highly sensitive documents (including tax returns, IDs, health records, and even admin credentials) were stored on Cloudinary behind unauthenticated, non‑expiring URLs, then surfaced via public HTML so Google could index them—remaining accessible for weeks after initial disclosure and hours after public reporting. This isn’t a zero‑day exploit; it’s a failure to understand where regulated data lives, how it rapidly proliferates and is shared across services, and whether controls like signed URLs, authentication, and proper indexing rules are actually in place.

In practical terms, what happened in the Fiverr data breach?

– Sensitive documents (tax returns, IDs, contracts, even credentials) were stored on Cloudinary behind unauthenticated, non-expiring URLs.

– Some of those URLs were linked from public HTML, allowing Google and other search engines to index them.

– As a result, private Fiverr user data became publicly searchable, long before regulators or affected users were notified.

What the Fiverr Data Breach Reveals About Third-Party Data Sprawl

What makes this kind of data exposure - like the Fiverr data leak - so damaging is that it collapses the boundary between “internal work product” and “public web content.” The same files that power everyday workflows—tax filings, medical notes, penetration test reports, admin credentials—suddenly become discoverable to anyone with a search engine, long before regulators or affected users even know there’s a problem. As enterprises lean on third‑party processors, media platforms, and SaaS for collaboration, the real risk isn’t a single misconfigured bucket; it’s the absence of continuous visibility into where sensitive data actually resides and who—human or machine—can reach it.

Sentra is built to restore that visibility and hygiene baseline across the entire data estate, including cloud storage, SaaS platforms, AI data lakes, and media services like the one at the center of this incident. By running discovery and classification in‑environment—without copying customer data out—Sentra builds a live inventory of sensitive assets, from tax forms and IDs to health and financial records, even in unstructured PDFs and images brought into scope via OCR and transcription. On top of that, Sentra continuously identifies redundant, obsolete, and toxic (ROT) data, so organizations can eliminate unnecessary copies that amplify the blast radius when something does go wrong, and set enforceable policies like “no GLBA‑covered data on unauthenticated public endpoints” before the next Cloudinary‑style exposure ever materializes.

If you’re asking “How do we avoid a Fiverr-style data breach on our own SaaS and media stack?”, the starting point is continuous visibility into where sensitive data lives, how it moves into services like Cloudinary, and who or what (including AI agents) can access it.

How to Prevent a Fiverr-Style Data Leak Across SaaS, Storage, and Media Services

Where traditional controls stop at the perimeter, Sentra ties data to identities and access paths, including AI agents, copilots, and service principals. Lineage‑driven maps show how data moves—from a storage bucket into a search index, from a document library into a media processor—so entitlements can follow data automatically and public or over‑privileged links can be revoked in a targeted way, rather than taking an entire service offline. On that foundation, Sentra orchestrates automated actions and remediation: quarantining exposed files, tombstoning toxic copies, removing public links, and routing rich, contextual tickets to owners when human judgment is required—all through existing tools like DLP, IAM, ServiceNow, Jira, Slack, and SOAR instead of standing up a parallel enforcement stack.

Doing this at “Fiverr scale” requires more than point tools; it demands a platform that is accurate, scalable, and cost‑efficient enough to run continuously and scale across multi-hundred petabyte environments. Sentra’s in‑environment architecture and small‑model approach have already scanned 8–9 petabytes in under 4–5 days at 95–98% accuracy—an order‑of‑magnitude faster and cheaper than extraction‑based alternatives—while keeping customer data inside their own accounts. That efficiency means enterprises can maintain continuous scanning, labeling, and remediation across hundreds of petabytes and multiple clouds without turning governance into a budget‑breaking project, and can generate audit‑grade evidence that sensitive data was governed properly over time—not just at the last assessment.

Incidents like the Fiverr data breach are a warning shot for the AI era, where copilots, internal agents, and search experiences will happily surface whatever the underlying permissions and data quality allow. As AI adoption accelerates, the only sustainable defense is a baseline of automated, continuous data protection: accurate classification, durable hygiene, identity‑aware access, automated remediation, and economically viable, always‑on governance that keeps pace with rapidly expanding and evolving data estates. You can’t secure AI—or avoid the next “public and searchable” headline—without first understanding and continuously governing the data that AI and its surrounding services can see. As AI pushes boundaries (and challenges security teams!), there is no time like now to ensure data remains protected.


Fiverr data breach FAQ

  • Was my Fiverr data exposed in the breach?
    Fiverr and independent researchers have confirmed that some user documents—including tax forms, IDs, invoices, and credentials—were publicly accessible and indexed by Google via misconfigured Cloudinary URLs. Whether your specific files were exposed depends on what you shared and how Fiverr stored it, but the safest assumption is that any sensitive document shared on the platform may have been at risk.

  • What made the Fiverr data breach possible?
    The root cause wasn’t a zero-day exploit; it was data sprawl across third-party infrastructure plus weak controls: public, non-expiring Cloudinary URLs, public HTML linking to those URLs, and no continuous visibility into where regulated data lived or who could reach it.

  • How can enterprises prevent similar leaks?
    By continuously discovering and classifying sensitive data across cloud storage, SaaS, and media services; cleaning up ROT; enforcing policies like “no GLBA-covered data on unauthenticated public endpoints”; and tying access to identities so public links and over-privileged routes can be revoked automatically. 

Read more about the Fiverr Data Breach

Detailed news coverage of the Fiverr data breach and Cloudinary misconfiguration (Cybernews)

Independent analysis of the Fiverr data exposure via public Cloudinary URLs (CyberInsider)

Read More
Expert Data Security Insights Straight to Your Inbox
What Should I Do Now:
1

Get the latest GigaOm DSPM Radar report - see why Sentra was named a Leader and Fast Mover in data security. Download now and stay ahead on securing sensitive data.

2

Sign up for a demo and learn how Sentra’s data security platform can uncover hidden risks, simplify compliance, and safeguard your sensitive data.

3

Follow us on LinkedIn, X (Twitter), and YouTube for actionable expert insights on how to strengthen your data security, build a successful DSPM program, and more!

Before you go...

Get the Gartner Customers' Choice for DSPM Report

Read why 98% of users recommend Sentra.

White Gartner Peer Insights Customers' Choice 2025 badge with laurel leaves inside a speech bubble.