
New AI Assistant, Sentra Jagger, Is a Game Changer for DSPM and DDR

March 5, 2024 · 3 Min Read · AI and ML

Evolution of Large Language Models (LLMs)

In the early 2000s, search engines such as Google and Yahoo gained widespread popularity. Users found them to be convenient tools, effortlessly bringing a wealth of information to their fingertips. Fast forward to the 2020s, and Large Language Models (LLMs) are pushing productivity to the next level. LLMs skip the learning stage entirely, seamlessly bridging the gap between technology and the user.

LLMs create a natural interface between the user and the platform. By interpreting natural language queries, they effortlessly translate human requests into software actions and technical operations. This simplifies technology to the point of being nearly invisible. Users no longer need to understand the technology itself or how to retrieve specific data; they can simply phrase a request in plain language, and the LLM handles the rest.

Revolutionizing Cloud Data Security With Sentra Jagger

Sentra Jagger is an industry-first AI assistant for cloud data security, built on a Large Language Model (LLM).

It enables users to quickly analyze and respond to security threats, cutting task times by up to 80%. It answers data security questions and handles tasks such as policy customization and enforcement, settings configuration, creation of new data classifiers, and compliance reporting. By reducing the time needed to investigate and address security threats, Sentra Jagger enhances operational efficiency and reinforces security measures.

Sentra Jagger empowers security teams: users can access insights and recommendations on specific security actions through an interactive, user-friendly interface. Customizable dashboards, tailored to user roles and preferences, enhance visibility into an organization's data. Users can query findings directly, eliminating the need to navigate complicated portals or sift through ancillary information.

Benefits of Sentra Jagger

  1. Accessible Security Insights: Simplified interpretation of complex security queries, with clear, concise explanations in plain language that serve users at every level of expertise. This helps users make informed decisions swiftly and take appropriate action confidently.
  2. Enhanced Incident Response: Clear steps to identify and fix issues, making remediation faster, minimizing downtime and damage, and restoring normal operations promptly.
  3. Unified Security Management: Integration with existing tools, creating a unified security management experience and a complete view of the organization's data security posture. Jagger also speeds solution customization and tuning.

Why Sentra Jagger Is Changing the Game for DSPM and DDR

Sentra Jagger is an essential tool for simplifying the complexities of both Data Security Posture Management (DSPM) and Data Detection and Response (DDR) functions. DSPM discovers and accurately classifies your sensitive data anywhere in the cloud environment, understands who can access this data, and continuously assesses its vulnerability to security threats and risk of regulatory non-compliance. DDR focuses on swiftly identifying and responding to security incidents and emerging threats, ensuring that the organization’s data remains secure. With their ability to interpret natural language, LLMs, such as Sentra Jagger, serve as transformative agents in bridging the comprehension gap between cybersecurity professionals and the intricate worlds of DSPM and DDR.

Data Security Posture Management (DSPM)

When it comes to data security posture management (DSPM), Sentra Jagger empowers users to articulate security-related queries in plain language, seeking insights into cybersecurity strategies, vulnerability assessments, and proactive threat management.

Meet Sentra Jagger, your new data security assistant

The language models not only comprehend the linguistic nuances but also translate these queries into actionable insights, making data security more accessible to a broader audience. This democratization of security knowledge is a pivotal step forward, enabling organizations to empower diverse teams (including privacy, governance, and compliance roles) to actively engage in bolstering their data security posture without requiring specialized cybersecurity training.

Data Detection and Response (DDR)

In the realm of data detection and response (DDR), Sentra Jagger contributes to breaking down technical barriers by allowing users to interact with the platform to seek information on DDR configurations, real-time threat detection, and response strategies. Our AI-powered assistant transforms DDR-related technical discussions into accessible conversations, empowering users to understand and implement effective threat protection measures without grappling with the intricacies of data detection and response technologies.

The integration of LLMs into the realms of DSPM and DDR marks a paradigm shift in how users will interact with and comprehend complex cybersecurity concepts. Their role as facilitators of knowledge dissemination removes traditional barriers, fostering widespread engagement with advanced security practices. 

Sentra Jagger is a game changer, making advanced technological knowledge more inclusive and allowing organizations and individuals to fortify their cybersecurity practices with unprecedented ease. It helps security teams better communicate with and integrate into the rest of the business. As AI-powered assistants continue to evolve, so will their impact, reshaping the accessibility and comprehension of intricate technological domains.

How CISOs Can Leverage Sentra Jagger 

Consider a Chief Information Security Officer (CISO) in charge of cybersecurity at a healthcare company. To assess the security policies governing sensitive data in her environment, the CISO leverages Sentra's Jagger AI assistant. Instead of manually navigating the Sentra policy page, the CISO, let's call her Sara, can simply query Jagger, asking, "What policies are defined in my environment?" In response, Jagger provides a comprehensive list of policies, including their names, descriptions, active issues, creation dates, and status (enabled or disabled).

Sara can then add a custom policy related to GDPR, by simply describing it. For example, "add a policy that tracks European customer information moving outside of Europe". Sentra Jagger will translate the request using Natural Language Processing (NLP) into a Sentra policy and inform Sara about potential non-compliant data movement based on the recently added policy.
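The outcome of that translation step can be pictured as a structured policy object the platform can enforce. Here is a minimal sketch of the idea; the `Policy` fields, rule syntax, and the keyword-matching stand-in for the LLM are all hypothetical, not Sentra's actual schema or implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Hypothetical structured policy an assistant might emit from plain language."""
    name: str
    description: str
    rule: dict                      # machine-readable condition the policy engine evaluates
    severity: str = "medium"
    compliance: list = field(default_factory=list)
    enabled: bool = True

def translate_request(request: str) -> Policy:
    """Toy stand-in for the LLM step: map one known phrasing to a policy.
    A real assistant would use a language model, not keyword matching."""
    if "european customer information" in request.lower():
        return Policy(
            name="EU customer data egress",
            description="Track European customer information moving outside of Europe",
            rule={
                "data_class": "customer_pii",
                "data_residency": "EU",
                "condition": "destination_region NOT IN (EU)",
            },
        )
    raise ValueError("request not understood")

policy = translate_request(
    "add a policy that tracks European customer information moving outside of Europe"
)
print(policy.name)  # EU customer data egress
```

The point of the sketch is the shape of the output: whatever the user types, the assistant must land on a machine-enforceable rule before the platform can act on it.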

Upon thorough review, Sara identifies a need for a new policy: "Create a policy that monitors instances where credit card information is discovered in a datastore without audit logs enabled." Sentra Jagger initiates the process of adding this policy by prompting Sara for additional details and confirmation. 

The LLM-assistant, Sentra Jagger, communicates, "Hi Sara, it seems like a valuable policy to add. Credit card information should never be stored in a datastore without audit logs enabled. To ensure the policy aligns with your requirements, I need more information. Can you specify the severity of alerts you want to raise and any compliance standards associated with this policy?" Sara responds, stating, "I want alerts to be raised as high severity, and I want the AWS CIS benchmark to be associated with it."

Having captured all the necessary information, Sentra Jagger compiles a summary of the proposed policy and sends it to Sara for her review and confirmation. After Sara confirms the details, the LLM-assistant, Sentra Jagger seamlessly incorporates the new policy into the system. This streamlined interaction with LLMs enhances the efficiency of policy management for CISOs, enabling them to easily navigate, customize, and implement security measures in their organizations.
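The prompt-for-details exchange above is essentially slot filling: the assistant checks which required fields of the draft policy are still missing and asks only about those. A minimal sketch, with illustrative field names that are not Sentra's actual API:

```python
# Required policy fields and the follow-up question to ask when each is missing.
REQUIRED_FIELDS = {
    "severity": "What severity should alerts be raised as?",
    "compliance": "Which compliance standards should be associated with this policy?",
}

def missing_questions(draft: dict) -> list[str]:
    """Return a follow-up question for every required field not yet provided."""
    return [q for f, q in REQUIRED_FIELDS.items() if not draft.get(f)]

draft = {"name": "Credit card data without audit logs", "severity": None, "compliance": []}
print(missing_questions(draft))   # both follow-up questions

# Sara's answers fill the slots; once no questions remain, the assistant
# can summarize the policy and ask for final confirmation.
draft.update(severity="high", compliance=["AWS CIS"])
print(missing_questions(draft))   # []
```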

Creating a policy with Sentra Jagger

Conclusion 

The advent of Large Language Models (LLMs) has changed the way we interact with and understand technology. Building on the legacy of search engines, LLMs eliminate the learning curve, seamlessly translating natural language queries into software and technical actions. This innovation removes friction between users and technology, making intricate systems nearly invisible to the end user.

For Chief Information Security Officers (CISOs) and ITSecOps, LLMs offer a game-changing approach to cybersecurity. By interpreting natural language queries, Sentra Jagger bridges the comprehension gap between cybersecurity professionals and the intricate worlds of DSPM and DDR. This democratization of security knowledge allows organizations to empower a wider audience to actively engage in bolstering their data security posture and responding to security incidents, revolutionizing the cybersecurity landscape.

To learn more about Sentra, schedule a demo with one of our experts.

Discover Ron’s expertise, shaped by over 20 years of hands-on tech and leadership experience in cybersecurity, cloud, big data, and machine learning. As a serial entrepreneur and seed investor, Ron has contributed to the success of several startups, including Axonius, Firefly, Guardio, Talon Cyber Security, and Lightricks, after founding a company acquired by Oracle.


Latest Blog Posts

Team Sentra · April 24, 2026 · 3 Min Read · AI and ML

Patchwork AI Security vs. Purpose-Built Protection: Thoughts on Cyera’s Ryft Acquisition

Yesterday’s news that Cyera is acquiring Ryft, a two-year-old startup building automated data lakes for AI agents, is the latest sign of how fast the agentic AI security market is moving. It’s also Cyera’s fourth acquisition in five years, on the heels of Trail Security and Otterize, a clear signal that the company is trying to buy its way into new narratives as quickly as they emerge.

For security and data leaders, the question isn’t “Is agentic AI important?” It absolutely is. The question is: What’s the real cost of stitching together yet another acquisition into an already complex platform?

The hidden cost of rapid, piecemeal integrations

On paper, adding Ryft gives Cyera a new story around “agentic AI security.” In practice, it creates a familiar set of integration problems:

  • Multiple architectures to reconcile
    Trail Security, Otterize, and now Ryft were all built as independent products with their own data models, UX patterns, and engineering roadmaps. Four acquisitions in five years means customers are effectively buying an integration project that’s still in progress, not a single, mature platform.

  • Gaps, overlaps, and inconsistent controls
    Every acquired module has its own blind spots and strengths. Until they’re truly unified, you get overlapping coverage in some areas, gaps in others, and policy engines that don’t behave consistently across cloud, SaaS, and on-prem.

  • Slower time-to-value for AI initiatives
    AI programs move quickly; integrations do not. Each acquisition has to be wired into discovery, classification, policy, reporting, access control, and remediation workflows before it delivers real value. That’s measured in quarters and years, not weeks.

  • Operational drag on security teams
    When you tie together multiple acquired engines, you often see scan-based coverage, noisy false positives, and limited self-serve reporting that still depends on the vendor’s team to interpret results. That’s the opposite of what already stretched security teams need as they take on AI data risk.

The Ryft deal fits this pattern. It’s a high-priced bet on an early-stage team with a small set of digital-native customers, not a proven, enterprise-scale AI data security engine. That’s fine as a venture bet. It’s more problematic when packaged as an answer for Fortune 500 AI governance.

Why agentic AI security can’t be bolted on

Agentic AI changes the risk profile of enterprise data:

  • Agents traverse structured and unstructured data across cloud, SaaS, and on-prem.
  • They act on behalf of identities, often chaining tools and APIs in ways that are hard to predict.
  • The blast radius of a misconfiguration or over-permissioned identity grows dramatically once agents are in the loop.

Trying to solve that by bolting an AI data lake acquisition onto a legacy, scan-based DSPM engine is risky. You’re adding another moving part on top of a system that already struggles with:

  • Point-in-time scans instead of real-time, continuous coverage
  • High false positives without strong prioritization
  • Shallow support for hybrid and on-prem environments
  • Vendor-controlled workflows instead of customer-controlled, self-serve reporting

If the underlying platform can’t continuously understand where sensitive data lives, which identities can touch it, and how that access is used, then adding an “AI data lake” on the side doesn’t fix the fundamentals. It just adds another place for risk to hide.

A different path: Sentra’s purpose-built, real-time platform

At Sentra, we took a different approach from day one: build a single, in-place, real-time data security platform, not a patchwork of stitched-together acquisitions.

A few principles guide the way we think about AI and data security:

  • One unified architecture
    Sentra is a purpose-built, unified platform, not an assortment of logos held together by integration roadmaps. There’s one architecture, one data model, one roadmap, and one team focused entirely on DSPM and AI data security, rather than a set of acquired point products that still need to be woven together.

  • Proven for real AI workloads today
    Our platform is already securing real AI workloads in production environments, rather than depending on the future maturation of a seed-stage acquisition. AI data security for us is not a sidecar story. It's built into how we discover, classify, govern, and remediate risk across your estate.

  • Higher-precision signal, not more noise
    Sentra delivers higher classification precision (4.9 vs. 4.7 stars on Gartner) and couples that with workflows your team controls, not processes that require vendor intervention every time you need a new report or policy tweak.

  • Complete coverage for complex environments
    Modern enterprises aren’t cloud-only. Sentra provides full coverage across IaaS, PaaS, SaaS, and on-premises from a single platform, built for hybrid and legacy-heavy environments as much as for cloud-native stacks.

In other words, while some vendors are racing to acquire their way into the next AI buzzword, Sentra is focused on delivering trustworthy, real-time, identity-aware data security that you can put in front of a CISO and a data platform owner today.

What to ask your vendors now

If you’re evaluating Cyera (or any vendor riding the latest AI acquisition wave), a few concrete questions can cut through the noise:

  1. How many acquisitions have you done in the last five years, and which parts of my deployment depend on those integrations actually working?
  2. What’s fully integrated and running in production today vs. what’s still on the roadmap?
  3. Are my AI and non-AI data risks handled by the same platform, policies, and reporting, or by separate acquired modules?
  4. Do you provide continuous coverage and identity-aware controls across cloud, SaaS, and on-prem, or am I still relying on periodic scans and partial visibility?

The AI security market doesn’t need more logos; it needs fewer moving parts, better signals, and real-time control over how data is used by humans and agents alike.

That’s the standard Sentra is building for and the lens through which we view every new acquisition announcement in this space.

Ron Reiter · April 24, 2026 · 3 Min Read · Data Security

Sentra Now Supports Solidworks 3D CAD Files – Protecting the Digital Blueprint in the Age of AI

Walk into any advanced manufacturing, aerospace, defense, or industrial design shop and you’re just as likely to see Solidworks as you are AutoCAD. The models, assemblies, and drawings built in Solidworks are the digital blueprints for everything from turbine blades and medical devices to satellites and weapons systems.

Earlier this year we announced native support for AutoCAD DWG files, making an entire class of previously opaque CAD data visible to security and compliance teams for the first time. Now we’re extending that same deep visibility to Solidworks 3D CAD files, so you can protect the IP and regulated technical data hiding inside your .sldprt, .sldasm, and related content—without slowing engineering down.

And as AI accelerates design cycles, that visibility is no longer optional.

AI is Supercharging Design – and Expanding the Blast Radius

Design teams are pushing faster than ever:

  • Generative design tools propose entire families of parts and assemblies.
  • Copilots summarize requirements, suggest changes, and draft documentation off CAD models.
  • PLM-integrated agents automatically create downstream artifacts—quotes, NC programs, service manuals—based on 3D designs.
  • RAG-style internal assistants answer questions using a mix of project docs, CAD files, and simulation outputs.

All of this is powerful. It also multiplies the ways sensitive CAD data can leak:

  • Entire assemblies uploaded to unmanaged AI tools “just to explore options.”
  • Export-controlled models referenced in prompts and ending up in long‑lived AI data lakes.
  • Supplier and customer CAD shared into external copilots with little visibility into who—or what agent—can access it.
  • Rich metadata from CAD (usernames, project codes, server paths, partner names) silently turned into reconnaissance material.

If you don’t understand what’s inside your CAD, where it lives, and which identities and AI agents can reach it, AI doesn’t just speed up design—it speeds up IP disclosure, compliance failures, and supply‑chain exposure.

CAD Has Been a Blind Spot for Security

Most traditional DSPM and DLP tools still treat specialized engineering formats as a big binary blob: “probably sensitive, treat with caution.” That may have been acceptable when CAD lived on a handful of on‑prem engineering servers.

It’s not acceptable when:

  • Decades of CAD history have been lifted and shifted into S3, Azure Blob, or SharePoint.
  • ITAR/EAR “technical data” now lives side‑by‑side with everyday project files in cloud object stores.
  • Those same repositories feed downstream systems—PLM, MES, AI assistants—where traditional security tools have little or no visibility.

We built native DWG parsing into Sentra to break that stalemate, making CAD content as transparent to security teams as a Word document. Solidworks 3D CAD support is the next logical step.

What’s Really Inside a Solidworks 3D CAD File?

Like DWG, a Solidworks file is far more than geometry. It’s a container for rich metadata, text, and structural context that describes both what you’re building and how it fits into regulated programs and commercial IP. Our Solidworks support is designed to surface that security‑relevant context—without requiring CAD tools, manual exports, or data movement.

Similar to what we do for DWG, Sentra can extract and analyze key elements, including:

  • Document properties
    Authors, “last saved by,” creation and modification timestamps, total editing time, and revision counters—signals that help you understand who is touching sensitive designs and when.

  • Custom properties and configuration metadata
    Project IDs, part and assembly numbers, revision codes, program names, business units, and export‑control or classification markings encoded as custom properties or notes.

  • Text content and annotations
    Notes, callouts, PMI, and embedded text that often contain material specifications, tolerances, customer names, contract IDs, and phrases like “COMPANY CONFIDENTIAL,” “EXPORT CONTROLLED,” or ITAR statements.

  • Assembly structure and component names
    Which parts roll up into which assemblies, and how those components are named—critical when you need to understand which physical systems a given sensitive model belongs to.

  • File dependencies and paths
    References to drawings, configurations, libraries, and external resources that routinely expose server names, share paths, usernames, and department structures—goldmine context for attackers, but also for incident response and insider‑risk investigations.

For organizations operating under ITAR and EAR, this is where truly export‑controlled technical data actually lives—not in the folder name, but in the title blocks, annotations, and metadata attached to models and drawings.
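Once a file's properties, notes, and annotations have been extracted as text, flagging markings like these reduces to pattern matching. A rough sketch of that last step; the patterns and sample strings are illustrative only, and Sentra's actual parser and classifiers are not shown:

```python
import re

# Illustrative export-control and confidentiality marking patterns.
MARKING_PATTERNS = [
    re.compile(r"\bITAR\b", re.IGNORECASE),
    re.compile(r"\bEAR99\b", re.IGNORECASE),
    re.compile(r"\bECCN\s*[0-9][A-E][0-9]{3}\b", re.IGNORECASE),  # e.g. ECCN 9E003
    re.compile(r"COMPANY\s+CONFIDENTIAL", re.IGNORECASE),
    re.compile(r"EXPORT\s+CONTROLLED", re.IGNORECASE),
]

def find_markings(extracted_text: str) -> list[str]:
    """Return every marking that matches the extracted properties,
    notes, and annotations of a CAD file."""
    return [m.group(0) for p in MARKING_PATTERNS for m in p.finditer(extracted_text)]

sample = (
    "Title block: EXPORT CONTROLLED - ECCN 9E003. "
    "Custom property: program=Falcon, marking=COMPANY CONFIDENTIAL"
)
print(find_markings(sample))
```

The hard part in practice is the extraction itself, not the matching: the markings only become searchable once the CAD container has been parsed into text.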

Turning Solidworks Models into Actionable Security Signals

By parsing Solidworks 3D CAD files in place, inside your own cloud accounts or VPCs, Sentra can now treat them as first‑class citizens in your data security program—just like we do for DWG and other specialized formats.

That unlocks concrete use cases, such as:

  • Finding export‑controlled or highly sensitive designs in cloud storage
    Automatically surface Solidworks files whose metadata, annotations, or custom properties contain ITAR statements, ECCN codes, proprietary markings, or customer‑confidential labels—so you can focus remediation on the drawings and models that are actually regulated.

  • Mapping who (and what) can access critical designs
    Combine CAD‑aware classification with Sentra’s DSPM and DAG capabilities to answer:
    Where are our most sensitive Solidworks assemblies stored, and which identities, service principals, and AI agents can currently reach them?

  • Monitoring AI and collaboration workflows for IP exposure
    Track when Solidworks files that contain regulated or high‑value IP are moved into AI data lakes, shared via collaboration platforms, or accessed by non‑human identities—so DDR policies can flag, quarantine, or route for review before they turn into public incidents.

  • Building a defensible audit trail for CAD‑resident technical data
    Maintain an inventory of Solidworks files that contain export‑control markings or IP‑critical content, tie each file to its exact storage location and access controls, and surface any out‑of‑policy placements—so when auditors ask “Where is your technical data?”, you can answer with data, not slideware.

Closing the Gap Between “Stored” and “Understood” for 3D CAD

As workloads like EDA, PLM, simulation, and AI‑assisted design move deeper into the cloud, the number of specialized formats in your environment explodes. Most tools still only truly understand emails, office documents, and a narrow slice of structured data.

The reality is simple: you cannot secure data you don’t understand. Understanding means being able to answer, at scale, not just “Where is this file?” but “What is inside this file, how sensitive is it, and how is AI amplifying its risk?”

For organizations whose crown‑jewel IP and export‑controlled technical data live in Solidworks 3D CAD, that’s the gap Sentra is now closing.

If you want to see what’s actually hiding inside your own Solidworks models and assemblies, the easiest next step is to run a focused assessment: pick a few representative buckets or repositories, let Sentra scan those CAD files in place, and review the inventory of regulated and high‑value designs that surfaces.

Chances are, once you’ve seen that map—and how it connects to your AI initiatives—you’ll never look at “just another CAD file” the same way again.

Yair Cohen and David Stuart · April 15, 2026 · 3 Min Read · Data Sprawl

Fiverr Data Breach: Beyond Misconfigured Buckets and the Data Sprawl That Made It Inevitable

Fiverr’s recent data exposure left tax forms, IDs, contracts, and even credentials publicly accessible and indexed by Google via misconfigured Cloudinary URLs.

This post explains what happened, why data sprawl across third-party services made it inevitable, and how to prevent the next Fiverr-style leak.

The Fiverr data breach is a textbook case of sensitive data sprawl and misconfigured third‑party infrastructure: highly sensitive documents (including tax returns, IDs, health records, and even admin credentials) were stored on Cloudinary behind unauthenticated, non‑expiring URLs, then surfaced via public HTML so Google could index them—remaining accessible for weeks after initial disclosure and hours after public reporting. This isn’t a zero‑day exploit; it’s a failure to understand where regulated data lives, how it rapidly proliferates and is shared across services, and whether controls like signed URLs, authentication, and proper indexing rules are actually in place.

In practical terms, what happened in the Fiverr data breach?

– Sensitive documents (tax returns, IDs, contracts, even credentials) were stored on Cloudinary behind unauthenticated, non-expiring URLs.

– Some of those URLs were linked from public HTML, allowing Google and other search engines to index them.

– As a result, private Fiverr user data became publicly searchable, long before regulators or affected users were notified.

What the Fiverr Data Breach Reveals About Third-Party Data Sprawl

What makes this kind of data exposure, like the Fiverr data leak, so damaging is that it collapses the boundary between “internal work product” and “public web content.” The same files that power everyday workflows—tax filings, medical notes, penetration test reports, admin credentials—suddenly become discoverable to anyone with a search engine, long before regulators or affected users even know there’s a problem. As enterprises lean on third‑party processors, media platforms, and SaaS for collaboration, the real risk isn’t a single misconfigured bucket; it’s the absence of continuous visibility into where sensitive data actually resides and who—human or machine—can reach it.

Sentra is built to restore that visibility and hygiene baseline across the entire data estate, including cloud storage, SaaS platforms, AI data lakes, and media services like the one at the center of this incident. By running discovery and classification in‑environment—without copying customer data out—Sentra builds a live inventory of sensitive assets, from tax forms and IDs to health and financial records, even in unstructured PDFs and images brought into scope via OCR and transcription. On top of that, Sentra continuously identifies redundant, obsolete, and toxic (ROT) data, so organizations can eliminate unnecessary copies that amplify the blast radius when something does go wrong, and set enforceable policies like “no GLBA‑covered data on unauthenticated public endpoints” before the next Cloudinary‑style exposure ever materializes.

If you’re asking “How do we avoid a Fiverr-style data breach on our own SaaS and media stack?”, the starting point is continuous visibility into where sensitive data lives, how it moves into services like Cloudinary, and who or what (including AI agents) can access it.

How to Prevent a Fiverr-Style Data Leak Across SaaS, Storage, and Media Services

Where traditional controls stop at the perimeter, Sentra ties data to identities and access paths, including AI agents, copilots, and service principals. Lineage‑driven maps show how data moves—from a storage bucket into a search index, from a document library into a media processor—so entitlements can follow data automatically and public or over‑privileged links can be revoked in a targeted way, rather than taking an entire service offline. On that foundation, Sentra orchestrates automated actions and remediation: quarantining exposed files, tombstoning toxic copies, removing public links, and routing rich, contextual tickets to owners when human judgment is required—all through existing tools like DLP, IAM, ServiceNow, Jira, Slack, and SOAR instead of standing up a parallel enforcement stack.

Doing this at “Fiverr scale” requires more than point tools; it demands a platform that is accurate, scalable, and cost‑efficient enough to run continuously and scale across multi-hundred petabyte environments. Sentra’s in‑environment architecture and small‑model approach have already scanned 8–9 petabytes in under 4–5 days at 95–98% accuracy—an order‑of‑magnitude faster and cheaper than extraction‑based alternatives—while keeping customer data inside their own accounts. That efficiency means enterprises can maintain continuous scanning, labeling, and remediation across hundreds of petabytes and multiple clouds without turning governance into a budget‑breaking project, and can generate audit‑grade evidence that sensitive data was governed properly over time—not just at the last assessment.

Incidents like the Fiverr data breach are a warning shot for the AI era, where copilots, internal agents, and search experiences will happily surface whatever the underlying permissions and data quality allow. As AI adoption accelerates, the only sustainable defense is a baseline of automated, continuous data protection: accurate classification, durable hygiene, identity‑aware access, automated remediation, and economically viable, always‑on governance that keeps pace with rapidly expanding and evolving data estates. You can’t secure AI—or avoid the next “public and searchable” headline—without first understanding and continuously governing the data that AI and its surrounding services can see. As AI pushes boundaries (and challenges security teams!), there is no time like now to ensure data remains protected.


Fiverr data breach FAQ

  • Was my Fiverr data exposed in the breach?
    Fiverr and independent researchers have confirmed that some user documents—including tax forms, IDs, invoices, and credentials—were publicly accessible and indexed by Google via misconfigured Cloudinary URLs. Whether your specific files were exposed depends on what you shared and how Fiverr stored it, but the safest assumption is that any sensitive document shared on the platform may have been at risk.

  • What made the Fiverr data breach possible?
    The root cause wasn’t a zero-day exploit; it was data sprawl across third-party infrastructure plus weak controls: public, non-expiring Cloudinary URLs, public HTML linking to those URLs, and no continuous visibility into where regulated data lived or who could reach it.

  • How can enterprises prevent similar leaks?
    By continuously discovering and classifying sensitive data across cloud storage, SaaS, and media services; cleaning up ROT; enforcing policies like “no GLBA-covered data on unauthenticated public endpoints”; and tying access to identities so public links and over-privileged routes can be revoked automatically. 

Read more about the Fiverr Data Breach

Detailed news coverage of the Fiverr data breach and Cloudinary misconfiguration (Cybernews)

Independent analysis of the Fiverr data exposure via public Cloudinary URLs (CyberInsider)

What Should I Do Now:

  1. Get the latest GigaOm DSPM Radar report - see why Sentra was named a Leader and Fast Mover in data security. Download now and stay ahead on securing sensitive data.
  2. Sign up for a demo and learn how Sentra’s data security platform can uncover hidden risks, simplify compliance, and safeguard your sensitive data.
  3. Follow us on LinkedIn, X (Twitter), and YouTube for actionable expert insights on how to strengthen your data security, build a successful DSPM program, and more!
