
Navigating the SEC's New Cybersecurity and Incident Disclosure Rules

January 11, 2024
4 Min Read
Compliance

The U.S. Securities and Exchange Commission (SEC) recently adopted stringent cybersecurity and incident disclosure rules, placing heightened emphasis on the need for robust incident detection, analysis, and reporting processes.

Under these new rules, public companies find themselves under a microscope, obligated to promptly disclose any cybersecurity incident deemed material. The disclosure must provide a detailed account of the incident's nature, scope, and timing within a stringent four-business-day window from the determination that the incident is material. In essence, companies must now detect incidents swiftly, analyze them thoroughly, and deliver a comprehensive report on the potential impact of a data breach to shareholders and investors.
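To make the four-business-day clock concrete, here is a toy sketch of how a compliance team might compute the filing deadline from the date an incident is determined to be material. It is an illustration only (it skips weekends but ignores market holidays), not legal guidance:

```python
from datetime import date, timedelta

def disclosure_deadline(materiality_date: date, business_days: int = 4) -> date:
    """Estimate the disclosure deadline: four business days after the
    incident is determined material. Simplified: weekends are skipped,
    but US market holidays are ignored."""
    deadline = materiality_date
    remaining = business_days
    while remaining > 0:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Monday-Friday count as business days
            remaining -= 1
    return deadline

# Materiality determined on Thursday, Jan 11, 2024 -> deadline Wednesday, Jan 17
print(disclosure_deadline(date(2024, 1, 11)))  # 2024-01-17
```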

SEC's Decisive Actions in 2023: A Wake-Up Call for CISOs

The SEC's resolute stance on cybersecurity became clear with two major actions in the latter half of 2023. In July, the SEC adopted rules, effective December 18, mandating the disclosure of "material" threat or breach incidents within a four-business-day window. Simultaneously, annual reporting on cybersecurity risk management, strategy, and governance became the new norm. These actions underscore the SEC's commitment to getting tough on cybersecurity, prompting Chief Information Security Officers (CISOs) and their teams to broaden their focus to the boardroom. The evolving threat landscape now demands a business-centric approach, aligning cybersecurity concerns with overarching organizational strategies.

Adding weight to the SEC's commitment, in October, SolarWinds Corporation and its CISO, Timothy G. Brown, were charged with fraud and internal control failures relating to allegedly known cybersecurity risks and vulnerabilities. This marked a historic moment: it was the first time the SEC brought cybersecurity enforcement claims against an individual. SolarWinds' case, in which the company disclosed only "generic and hypothetical risks" while facing specific security issues, serves as a stark reminder of the SEC's intolerance toward non-disclosure and intentional fraud in the cybersecurity domain. It's evident that the SEC's cybersecurity mandates are reshaping compliance norms.

This blog will delve into the intricacies of these rules, their implications, and how organizations, led by their CISOs, can proactively meet the SEC's expectations.

Implications for Compliance Professionals

Striking the Balance: Over-Reporting vs. Under-Reporting

Compliance professionals must navigate the fine line between over-reporting and under-reporting, a task akin to a high-stakes tightrope walk.

Over-Reporting: The consequences of hyper-vigilance shouldn't be underestimated. Reporting every incident, regardless of its material impact, might instigate unwarranted panic in the market. This overreaction could lead to a domino effect, causing a downturn in stock prices and inflicting reputational damage.

Under-Reporting: On the flip side, failing to report within the prescribed time frame has its own set of perils. Regulatory penalties loom large, and the erosion of investor trust becomes an imminent risk. The SEC's strict adherence to disclosure timelines emphasizes the need for precision and timeliness in reporting.

Market Perception

Shareholder & Investor Trust: Balancing reporting accuracy is crucial for maintaining shareholder and investor trust. Over-reporting may breed skepticism and lead to potential divestment, while delayed reporting can erode trust and raise questions about the organization's cybersecurity commitment.

Regulatory Compliance: The SEC mandates timely and accurate reporting. Failure to comply incurs penalties, impacting both finances and the organization's regulatory standing. Regulatory actions, combined with market fallout, can significantly affect the long-term reputation of the organization.

Strategies for Success

The Day Before: Minimizing the Impact of a Data Breach

To minimize the impact of a data breach, the first crucial step is knowing the locations of your sensitive data. Identifying and mapping this data within your infrastructure, along with proper classification, lays the foundation for effective protection and risk mitigation.

Data Security Posture Management (DSPM) provides advanced tools and processes to actively monitor, analyze, and fortify the security posture of your sensitive data, ensuring robust protection in the face of evolving threats. A DSPM solution:

  • Discovers every piece of data you have and classifies the different data types in your organization (a simplified sketch of this step follows below).
  • Automatically detects risks to your sensitive data (including data movement) and guides remediation.
  • Aligns your data protection practices with security regulations and best practices, incorporating compliance measures for handling personally identifiable information (PII), protected health information (PHI), credentials, and other sensitive data.
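As a simplified illustration of that discovery-and-classification step (this is not Sentra's engine, and the bucket name, patterns, and file handling are assumptions for the example), a naive scan of a cloud storage bucket might look like:

```python
import re
import boto3  # assumes AWS credentials are configured

# Naive PII patterns for illustration only; real classifiers use far more context
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_bucket(bucket_name: str) -> dict:
    """Return a map of object key -> detected PII classes (sketch only)."""
    s3 = boto3.client("s3")
    findings = {}
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket_name):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket_name, Key=obj["Key"])["Body"].read()
            text = body.decode("utf-8", errors="ignore")
            hits = [name for name, rx in PII_PATTERNS.items() if rx.search(text)]
            if hits:
                findings[obj["Key"]] = hits
    return findings

print(classify_bucket("example-customer-data"))  # hypothetical bucket name
```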

From encryption to access controls, adopting a comprehensive security approach safeguards your organization against potential breaches. It’s crucial to conduct a thorough risk assessment to measure vulnerabilities and potential threats to your data. Understanding the risks allows for targeted and proactive risk management strategies.

An example of a security posture score, which includes the data and issues overview, highlighting the top data classes at risk.

The Day After: Maximizing the Pace of Handling the Impact (Reputation, Finances, Recovery)

In the aftermath of a breach, a data catalog with sensitivity rankings helps you assess the materiality of the breach and supports quick resolution and reporting within the four-business-day window.

Swift incident response is also paramount, and it starts with a rapid plan for mitigating the impact on reputation, finances, and overall recovery. This is where the data catalog comes into play again: it helps you understand which data was extracted, facilitating quick and accurate resolution. The next step for the 'day after' is actively managing your organization's reputation through transparent communication and decisive action, which helps rebuild trust and credibility.

An example of a complete catalog, showing the data stores, the account, the sensitivity and category of the data, as well as the data context.
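As a rough sketch, an entry in a catalog like the one above can be modeled as a structured record per data store; the field names here are illustrative, not Sentra's schema:

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One data store in a sensitivity-ranked data catalog (illustrative schema)."""
    store: str          # e.g., an S3 bucket or a database
    account: str        # owning cloud account
    sensitivity: str    # e.g., "Restricted", "Confidential", "Public"
    categories: list[str] = field(default_factory=list)  # PII, PHI, credentials...
    context: str = ""   # business context: who uses it and why

# During incident response, filter the catalog to what the attacker touched:
catalog = [
    CatalogEntry("s3://prod-invoices", "acct-1234", "Restricted", ["PII", "financial"]),
    CatalogEntry("s3://marketing-assets", "acct-1234", "Public"),
]
breached = {"s3://prod-invoices"}
material = [e for e in catalog if e.store in breached and e.sensitivity == "Restricted"]
print([e.store for e in material])  # the data classes driving the materiality call
```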

Finally, always conduct a comprehensive post-incident analysis for valuable insights, and enhance future security measures through a continuous improvement cycle. Building resilience into your cybersecurity framework by proactively adapting and fortifying defenses best positions your organization to withstand future challenges. Adhering to these strategies enables organizations to navigate the cybersecurity landscape effectively, minimizing risks, ensuring compliance, and responding swiftly to potential incidents.

Empowering Compliance in the Face of SEC Regulations with Sentra’s DSPM

Sentra's DSPM solution both discovers and classifies sensitive data and aligns seamlessly with the SEC's cybersecurity and incident disclosure rules. Its real-time monitoring swiftly identifies potential breaches, offering a critical head start within the four-business-day disclosure window.

Efficient impact analysis enables compliance professionals to gauge materiality and consequences for shareholders during reporting. Sentra's DSPM streamlines incident analysis processes, adapting to each organization's needs. Having a "Data Catalog" aids in understanding breach materiality for quick resolution and reporting, while detailed reports ensure SEC compliance.

By integrating Sentra, organizations meet regulatory demands, fortify data security, and navigate evolving compliance requirements. As the SEC shapes the cybersecurity landscape, Sentra guides organizations toward a future where proactive incident management is a strategic imperative.

To learn more, schedule a demo with one of our experts.


Meni is an experienced product manager and the former founder of Pixibots (a mobile applications studio). Over the past 15 years, he has gained expertise in industries such as e-commerce, cloud management, dev tools, and mobile games. He is passionate about delivering high-quality technical products that are intuitive and easy to use.


Latest Blog Posts

David Stuart
Nikki Ralston
November 24, 2025
3 Min Read

Third-Party OAuth Apps Are the New Shadow Data Risk: Lessons from the Gainsight/Salesforce Incident

The recent exposure of customer data through a compromised Gainsight integration within Salesforce environments is more than an isolated event - it’s a sign of a rapidly evolving class of SaaS supply-chain threats. Even trusted AppExchange partners can inadvertently create access pathways that attackers exploit, especially when OAuth tokens and machine-to-machine connections are involved. This post explores what happened, why today’s security tooling cannot fully address this scenario, and how data-centric visibility and identity governance can meaningfully reduce the blast radius of similar breaches.

A Recap of the Incident

In this case, attackers obtained sensitive credentials tied to a Gainsight integration used by multiple enterprises. Those credentials allowed adversaries to generate valid OAuth tokens and access customer Salesforce orgs, in some cases with extensive read capabilities. Neither Salesforce nor Gainsight intentionally misconfigured their systems. This was not a product flaw in either platform. Instead, the incident illustrates how deeply interconnected SaaS environments have become and how the security of one integration can impact many downstream customers.

Understanding the Kill Chain: From Stolen Secrets to Salesforce Lateral Movement

The attackers' pathway followed a pattern increasingly common in SaaS-based attacks. It began with the theft of secrets: likely API keys, OAuth client secrets, or other credentials that often end up buried in repositories, CI/CD logs, or overlooked storage locations. Once in hand, these secrets enabled the attackers to generate long-lived OAuth tokens, which are designed for application-level access and operate outside MFA or user-based access controls.

What makes OAuth tokens particularly powerful is that they inherit whatever permissions the connected app holds. If an integration has broad read access, which many do for convenience or legacy reasons, an attacker who compromises its token suddenly gains the same level of visibility. Inside Salesforce, this enabled lateral movement across objects, records, and reporting surfaces far beyond the intended scope of the original integration. The entire kill chain was essentially a progression from a single weakly protected secret to high-value data access across multiple Salesforce tenants.
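To see why a leaked client secret is so dangerous, consider the standard OAuth 2.0 client-credentials grant. Anyone holding the secret can mint valid tokens, with no user login or MFA in the loop. The endpoint and values below are generic placeholders, not real Salesforce or Gainsight details:

```python
import requests

# Hypothetical token endpoint and credentials -- placeholders only.
TOKEN_URL = "https://login.example.com/oauth2/token"

def mint_token(client_id: str, client_secret: str) -> str:
    """Standard client-credentials grant: a secret alone yields a bearer token.
    No user login, no MFA -- which is exactly what the attackers exploited."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

# The token inherits every scope granted to the connected app, so an
# over-privileged integration hands the attacker broad read access.
```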

Why Traditional SaaS Security Tools Missed This

Incident response teams quickly learned what many organizations are now realizing: traditional CASBs and CSPMs don’t provide the level of identity-to-data context necessary to detect or prevent OAuth-driven supply-chain attacks.

CASBs primarily analyze user behavior and endpoint connections, but OAuth apps are “non-human identities” - they don’t log in through browsers or trigger interactive events. CSPMs, in contrast, focus on cloud misconfigurations and posture, but they don’t understand the fine-grained data models of SaaS platforms like Salesforce. What was missing in this incident was visibility into how much sensitive data the Gainsight connector could access and whether the privileges it held were appropriate or excessive. Without that context, organizations had no meaningful way to spot the risk until the compromise became public.

Sentra Helps Prevent and Contain This Attack Pattern

Sentra’s approach is fundamentally different because it starts with data: what exists, where it resides, who or what can access it, and whether that access is appropriate. Rather than treating Salesforce or other SaaS platforms as black boxes, Sentra maps the data structures inside them, identifies sensitive records, and correlates that information with identity permissions including third-party apps, machine identities, and OAuth sessions.

One key pillar of Sentra’s value lies in its DSPM capabilities. The platform identifies sensitive data across all repositories, including cloud storage, SaaS environments, data warehouses, code repositories, collaboration platforms, and even on-prem file systems. Because Sentra also detects secrets such as API keys, OAuth credentials, private keys, and authentication tokens across these environments, it becomes possible to catch compromised or improperly stored secrets before an attacker ever uses them to access a SaaS platform.
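As a rough illustration of that kind of secret detection (production scanners add entropy analysis and hundreds of signatures), a few regex rules swept over a repository tree might look like:

```python
import re
from pathlib import Path

# Illustrative signatures only; production scanners ship many more rules.
SECRET_RULES = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9._~+/-]{20,}\b"),
}

def scan_tree(root: str) -> list[tuple[str, str]]:
    """Walk a directory and report (path, rule) pairs for suspected secrets."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for rule, rx in SECRET_RULES.items():
            if rx.search(text):
                findings.append((str(path), rule))
    return findings

print(scan_tree("./repo"))  # e.g., [('repo/.env', 'aws_access_key')]
```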

An example of an OAuth 2.0 access token.

Another area where this becomes critical is the detection of over-privileged connected apps. Sentra continuously evaluates the scopes and permissions granted to integrations like Gainsight, identifying when an app or an identity holds more access than its business purpose requires. This type of analysis would have revealed that a compromised integration could see far more data than necessary, providing early signals of elevated risk long before an attacker exploited it.
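At its core, that over-privilege check is a set difference between the scopes an app holds and the scopes its business purpose requires; the scope names below are generic placeholders:

```python
def excessive_scopes(granted: set[str], required: set[str]) -> set[str]:
    """Scopes the connected app holds but does not need -- each one is
    attack surface if the app's token is ever stolen."""
    return granted - required

granted = {"read:accounts", "read:contacts", "read:all_objects", "api:full"}
required = {"read:accounts", "read:contacts"}  # what the integration really uses
print(excessive_scopes(granted, required))  # {'read:all_objects', 'api:full'}
```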

Sentra further tracks the health and behavior of non-human identities. Service accounts and connectors often rely on long-lived credentials that are rarely rotated and may remain active long after the responsible team has changed. Sentra identifies these stale or overly permissive identities and highlights when their behavior deviates from historical norms. In the context of this incident type, that means detecting when a connector suddenly begins accessing objects it never touched before or when large volumes of data begin flowing to unexpected locations or IP ranges.

Finally, Sentra's behavior analytics (part of DDR) help surface early signs of misuse. Even if an attacker obtains valid OAuth tokens, their data access patterns, query behavior, or geography often diverge from the legitimate integration's. By correlating anomalous activity with the sensitivity of the data being accessed, Sentra can detect exfiltration patterns in real time - something traditional tools simply aren't designed to do.
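A toy version of that behavioral baseline, flagging a connector the first time it reads object types outside its historical footprint or moves an unusual volume of data (the threshold and names are invented for the example):

```python
class ConnectorBaseline:
    """Tracks which object types a non-human identity normally touches
    and how much data it normally moves per day (toy model)."""
    def __init__(self, history_objects: set[str], avg_daily_mb: float):
        self.known_objects = history_objects
        self.avg_daily_mb = avg_daily_mb

    def alerts(self, objects_today: set[str], mb_today: float) -> list[str]:
        out = []
        new = objects_today - self.known_objects
        if new:
            out.append(f"new object types accessed: {sorted(new)}")
        if mb_today > 5 * self.avg_daily_mb:  # arbitrary 5x spike threshold
            out.append(f"volume spike: {mb_today:.0f} MB vs ~{self.avg_daily_mb:.0f} MB/day")
        return out

baseline = ConnectorBaseline({"Account", "Contact"}, avg_daily_mb=50)
print(baseline.alerts({"Account", "Contact", "OpportunityReport"}, mb_today=900))
```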

The 2026 Outlook: More Incidents Are Coming

The Gainsight/Salesforce incident is unlikely to be the last of its kind. The speed at which enterprises adopt SaaS integrations far exceeds the rate at which they assess the data exposure those integrations create. OAuth-based supply-chain attacks are growing quickly because they allow adversaries to compromise one provider and gain access to dozens or hundreds of downstream environments. Given the proliferation of partner ecosystems, machine identities, and unmonitored secrets, this attack vector will continue to scale.

Prediction:
Unless enterprises add data-centric SaaS visibility and identity-aware DSPM, we should expect three to five more incidents of similar magnitude before summer 2026.

Conclusion

The real lesson from the Gainsight/Salesforce breach is not to reduce reliance on third-party SaaS providers, as modern business would grind to a halt without them. The lesson is that enterprises must know where their sensitive data lives, understand exactly which identities and integrations can access it, and ensure those privileges are continuously validated. Sentra provides that visibility and contextual intelligence, making it possible to identify the risks that made this breach possible and helping to prevent the next one.


David Stuart
November 24, 2025
3 Min Read

Securing Unstructured Data in Microsoft 365: The Case for Petabyte-Scale, AI-Driven Classification

The modern enterprise runs on collaboration and nothing powers that more than Microsoft 365. From Exchange Online and OneDrive to SharePoint, Teams, and Copilot workflows, M365 hosts a massive and ever-growing volume of unstructured content: documents, presentations, spreadsheets, image files, chats, attachments, and more.

Yet unstructured data is harder to govern. Unlike tidy database tables with defined schemas, unstructured repositories fill up with ambiguous content types, buried duplicates, and unused legacy files. It's in these stacks that sensitive IP, model training data, or derivative work can quietly accumulate, and then leak.

Consider this: one recent study found that more than 81% of IT professionals report data-loss events in M365 environments. To make matters worse, according to the International Data Corporation (IDC), 60% of organizations do not have a strategy for protecting their critical business data that resides in Microsoft 365.

Why Traditional Tools Struggle

  • Built-in classification tools (e.g., M365's native capabilities) often rely on pattern matching or simple keywords, and therefore struggle with accuracy, context, scale, and derivative content (see the sketch after this list).

  • Many solutions only surface that a file exists and carries a type label - but stop short of mapping who or what can access it, its purpose, and what its downstream exposure might be.

  • GenAI workflows now pump massive volumes of unstructured data into copilots, knowledge bases, training sets - creating new blast radii that legacy DLP or labeling tools weren’t designed to catch.
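To see why raw pattern matching falls short, as noted in the first item above, consider how a simple credit-card regex behaves without context:

```python
import re

card_rx = re.compile(r"\b(?:\d[ -]?){13,16}\b")

print(bool(card_rx.search("Customer card: 4111 1111 1111 1111")))       # True: real hit
print(bool(card_rx.search("Order #4111111111111111 (internal ID)")))    # True: false positive
print(bool(card_rx.search("Card ending in 1111, full PAN in scan.png")))  # False: miss
```

The middle case is a false positive on an internal ID, and the last case is a miss because the number lives inside an image - exactly the accuracy and context gaps described above.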

What a Modern Platform Must Deliver

  1. High-accuracy, petabyte-scale classification of unstructured data (so you know what you have, where it sits, and how sensitive it is). It must also keep pace with explosive data growth, and do so cost-efficiently.

  2. Unified Data Access Governance (DAG) - mapping identities (users, service principals, agents), permissions, implicit shares, and federated/cloud-native paths across M365 and beyond (see the sketch after this list).
  3. Data Detection & Response (DDR) - continuous monitoring of data movement, copies, derivative creation, AI agent interactions, and automated response/remediation.
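As a toy illustration of the access-governance idea in item 2, an access graph can be reduced to identity-to-resource edges that you can query for any sensitive asset (all names below are invented):

```python
from collections import defaultdict

# Edges: identity -> resources it can reach (directly or via a share/group).
access = defaultdict(set)
access["user:alice"].update({"sp:finance-site", "od:alice-drive"})
access["group:contractors"].add("sp:finance-site")
access["app:copilot-agent"].update({"sp:finance-site", "teams:deals-channel"})

def who_can_access(resource: str) -> list[str]:
    """Invert the graph: list every identity with a path to the resource."""
    return sorted(i for i, res in access.items() if resource in res)

print(who_can_access("sp:finance-site"))
# ['app:copilot-agent', 'group:contractors', 'user:alice']
```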

How Sentra Addresses This in M365

An example of Sentra detecting assets that contain plain-text credit card numbers.

At Sentra, we’ve built a cloud-native data-security platform specifically to address this triad of capabilities - and we extend that deeply into M365 (OneDrive, SharePoint, Teams, Exchange Online) and other SaaS platforms.

  • A newly announced AI Classifier for Unstructured Data accelerates and improves classification across M365’s unstructured repositories (see: Sentra launches breakthrough unstructured-data AI classification capabilities).

  • Petabyte-scale processing: our architecture supports classification and monitoring of massive file estates without astronomical cost or lengthy time-to-value.

  • Seamless support for M365 services: read/write access, ingestion, classification, access-graph correlation, detection of shadow/unmanaged copies across OneDrive and SharePoint - plus integration into our DAG and DDR layers (see our guide: How to Secure Regulated Data in Microsoft 365 + Copilot).

  • Cost-efficient deployment: designed for high scale without breaking the budget or massive manual effort.

The Bottom Line

In today's cloud/AI era, saying "we discovered the PII in our M365 tenant" isn't enough.

The real question is: Do I know who or what (user/agent/app) can access that content, what its business purpose is, and whether it’s already been copied or transformed into a risk vector?


If your solution can't answer that, your unstructured data remains a silent, high-stakes liability, and resolving concerns becomes a costly, resource-draining burden. By embracing a platform that combines classification accuracy, petabyte-scale processing, unified DSPM + DAG + DDR, and deep M365 support, you move from "hope I'm secure" to "I know I'm secure."

Want to see how it works in a real M365 setup? Check out our video or book a demo.


Ofir Yehoshua
November 17, 2025
4 Min Read

How to Gain Visibility and Control in Petabyte-Scale Data Scanning

Every organization today is drowning in data - millions of assets spread across cloud platforms, on-premises systems, and an ever-expanding landscape of SaaS tools. Each asset carries value, but also risk. For security and compliance teams, the mandate is clear: sensitive data must be inventoried, managed, and protected.

Scanning every asset for security and compliance is no longer optional; it's the line between trust and exposure, between resilience and chaos.

Many data security tools promise to scan and classify sensitive information across environments. In practice, doing this effectively and at scale demands more than raw 'brute force' scanning power. It requires robust visibility and management capabilities: a cockpit view that lets teams monitor coverage, prioritize intelligently, and strike the right balance between scan speed, cost, and accuracy.

Why Scan Tracking Is Crucial

Scanning is not instantaneous. Depending on the size and complexity of your environment, it can take days - sometimes even weeks - to complete. Meanwhile, new data is constantly being created or modified, adding to the challenge.

Without clear visibility into the scanning process, organizations face several critical obstacles:

  • Unclear progress: It’s often difficult to know what has already been scanned, what is currently in progress, and what remains pending. This lack of clarity creates blind spots that undermine confidence in coverage.

  • Time estimation gaps: In large environments, it's hard to know how long scans will take because so many factors come into play: the number of assets, their size, the type of data (structured, semi-structured, or unstructured), and how much scanner capacity is available. As a result, predicting when you'll reach full coverage is tricky. This becomes especially stressful when scans need to be completed before a fixed deadline, like a compliance audit (see the back-of-the-envelope sketch after this list).

    "With Sentra’s Scan Dashboard, we were able to quickly scale up our scanners to meet a tight audit deadline, finish on time, and then scale back down to save costs. The visibility and control it gave us made the whole process seamless”, said CISO of Large Retailer.
  • Poor prioritization: Not all environments or assets carry the same importance. Yet without visibility into scan status, teams struggle to balance historical scans of existing assets with the ongoing influx of newly created data, making it nearly impossible to prioritize effectively based on risk or business value.
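The time-estimation problem called out above reduces to simple arithmetic once per-scanner throughput is measurable. A back-of-the-envelope sketch (all numbers are illustrative):

```python
def scan_days(total_tb: float, scanners: int, tb_per_scanner_per_day: float) -> float:
    """Estimate days to full coverage, assuming scanners parallelize cleanly
    and ignoring new data created while the scan runs."""
    return total_tb / (scanners * tb_per_scanner_per_day)

# 2 PB estate, each scanner handling ~5 TB/day:
print(scan_days(2000, 8, 5))   # 50 days -> too slow for the audit
print(scan_days(2000, 25, 5))  # 16 days -> scale up, then scale back down
```

This is exactly the lever the quoted CISO pulled: scale scanners up to hit the audit date, then scale back down to control cost.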

Sentra’s End-to-End Scanning Workflow

Managing scans at petabyte scale is complex. Sentra streamlines the process with a workflow built for scale, clarity, and control that features:

1. Comprehensive Asset Discovery

Before scanning even begins, Sentra automatically discovers assets across cloud platforms, on-premises systems, and SaaS applications. This ensures teams have a complete, up-to-date inventory and visual map of their data landscape, so no environment or data store is overlooked.

Example: New S3 buckets, a freshly deployed BigQuery dataset, or a newly connected SharePoint site are automatically identified and added to the inventory.

Comprehensive Asset Discovery with Sentra
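A minimal sketch of what automated discovery can look like for a single provider (AWS via boto3; real discovery spans many clouds, SaaS apps, and on-prem systems):

```python
import boto3  # assumes AWS credentials are configured

def discover_s3_buckets() -> list[dict]:
    """Enumerate S3 buckets so new data stores enter the inventory automatically."""
    s3 = boto3.client("s3")
    return [
        {"type": "s3_bucket", "name": b["Name"], "created": b["CreationDate"].isoformat()}
        for b in s3.list_buckets()["Buckets"]
    ]

# Run on a schedule and diff against the existing inventory to catch new stores.
print(discover_s3_buckets())
```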

2. Configurable Scan Management

Administrators can fine-tune how scans are executed to meet their organization's needs. With flexible configuration options - such as the number of scanners, sampling rates, and prioritization rules - teams can strike the right balance between scan speed, coverage, and cost control.

For instance, compliance-critical assets can be scanned at full depth immediately, while less critical environments can run at reduced sampling to save on compute consumption and costs.
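Such a policy might be expressed as a small configuration like the following; the keys and values are hypothetical, not Sentra's actual format:

```python
# Hypothetical scan policy: full-depth scanning where compliance demands it,
# reduced sampling elsewhere to control compute cost.
scan_policy = {
    "scanners": 12,
    "rules": [
        {"match": {"tag": "compliance-critical"}, "sampling": 1.0, "priority": "high"},
        {"match": {"env": "dev"}, "sampling": 0.1, "priority": "low"},
        {"match": {}, "sampling": 0.3, "priority": "normal"},  # default rule
    ],
}
```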

3. Real-Time Scan Dashboard

Sentra’s unified Scan Dashboard provides a cockpit view into scanning operations, so teams always know where they stand. Key features include:

  • Daily scan throughput correlated with the number of active scanners, helping teams understand efficiency and predict completion times.
  • Coverage tracking that visualizes overall progress and highlights which assets remain unscanned.
  • Decision-making tools that allow teams to dynamically adjust, whether by adding scanner capacity, changing sampling rates, or reordering priorities when new high-risk assets appear.
Real-Time Scan Dashboard with Sentra

Handling Data Changes

The challenge doesn't end once the initial scans are complete. Data is dynamic: new files are added daily, existing records are updated, and sensitive information shifts locations. Sentra's activity feeds give teams the visibility they need to understand how their data landscape is evolving and adapt their data security strategies in real time.


Conclusion

Tracking scan status at scale is complex but critical to any data security strategy. Sentra provides an end-to-end view and unmatched scan control, helping organizations move from uncertainty to confidence with clear prediction of scan timelines, faster troubleshooting, audit-ready compliance, and smarter, cost-efficient decisions for securing data.
