Top 6 Azure Security Tools, Features, and Best Practices

November 7, 2022
6 Min Read

The rapid growth of cloud computing has changed how organizations operate, and many now rely on the cloud to run their daily business operations. The cloud offers a single place for storing, processing, and accessing data, so it is no surprise that organizations have come to depend on its convenience.

However, as dependence on cloud service providers grows, so does the need for security. Sensitive data must be monitored and safeguarded against potential threats. Remember that security is a shared responsibility: even though your cloud provider secures the underlying platform, that protection is not absolute. Understanding the security features of your particular cloud service provider is therefore essential.

Introduction to Microsoft Azure Security Services

Image of Microsoft Azure, explaining how to strengthen security posture with Azure

Microsoft Azure offers services and tools for businesses to manage their applications and infrastructure. It also provides robust security measures to protect sensitive data, maintain privacy, and mitigate potential threats.

This article covers Azure’s security features and tools that help organizations and individuals safeguard their data while they continue to innovate and grow.

Microsoft offers a collective set of security features, services, tools, and best practices to protect cloud resources. Let’s start with the layers of security that make up Azure’s defense in depth.

The Layers of Security in Microsoft Azure:

  • Physical Security - Microsoft Azure rests on a strong foundation of physical security. It operates state-of-the-art data centers worldwide with strict physical access controls, protecting Azure's infrastructure against unauthorized physical access.
  • Network Security - Virtual networks, network security groups (NSGs), and distributed denial-of-service (DDoS) protection create isolated, secure network environments. Azure's network security mechanisms secure data in transit and protect against unauthorized network access, and the Azure VPN Gateway secures connections between on-premises networks and Azure resources.
  • Identity and Access Management (IAM) - Azure provides identity and access management capabilities to control and secure access to cloud resources. Azure Active Directory (Azure AD) is a centralized identity management platform that lets organizations manage user identities, enforce strong authentication methods, and implement fine-grained access controls through role-based access control (RBAC).
  • Data Security - Azure Storage Service Encryption (SSE) encrypts data at rest, while Azure Disk Encryption secures virtual machine disks. Azure Key Vault provides a secure, centralized location for managing cryptographic keys and secrets.
  • Threat Detection and Monitoring - Azure Security Center provides a centralized view of security recommendations, threat intelligence, and real-time security alerts. Azure Sentinel, Azure's cloud-native SIEM, helps teams quickly detect, investigate, and respond to security incidents.
  • Compliance and Governance - Azure Policy lets organizations define and enforce compliance controls across their Azure resources. Azure also holds a broad set of compliance certifications and adheres to industry-standard security frameworks.

Now let's look at six of these tools and features in more detail, along with their key capabilities and best practices.

Azure Active Directory Identity Protection

Image of Azure’s Identity Protection page, explaining what is identity protection

Identity Protection is a cloud-based service in the Azure AD suite. It helps organizations protect user identities and detect potential security risks. Using machine learning and security signals from a variety of sources, it provides proactive, adaptive protection and can identify risky sign-ins, compromised credentials, and malicious or suspicious user behavior.

Key Features

1. Risk-Based User Sign-In Policies

Identity Protection lets organizations define risk-based policies for user sign-ins. These policies evaluate user behavior, sign-in patterns, and device information to assess the risk level of each sign-in attempt. Based on that risk assessment, organizations can enforce additional security measures, such as requiring multi-factor authentication (MFA), blocking the sign-in, or forcing a password reset.
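Risk-based policies are typically expressed as conditional access policies with sign-in risk conditions. The sketch below shows one way to create such a policy through Microsoft Graph, assuming a hypothetical app registration with the Policy.ReadWrite.ConditionalAccess permission and the azure-identity and requests packages; treat it as an illustration rather than a production script.

```python
# Hedged sketch: create a risk-based conditional access policy via Microsoft Graph.
# Tenant/app IDs are hypothetical placeholders.
import requests
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<app-client-id>",
    client_secret="<app-client-secret>",
)
token = credential.get_token("https://graph.microsoft.com/.default").token

policy = {
    "displayName": "Require MFA for medium and high sign-in risk",
    "state": "enabledForReportingButNotEnforced",  # start in report-only mode
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "signInRiskLevels": ["medium", "high"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {token}"},
    json=policy,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```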

2. Risky User Detection and Remediation

The service detects and alerts organizations about potentially compromised or risky user accounts. It analyzes various signals, such as leaked credentials or suspicious sign-in activities, to identify anomalies and indicators of compromise. Administrators can receive real-time alerts and take immediate action, such as resetting passwords or blocking access, to mitigate the risk and protect user accounts.
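To illustrate the detection side, the following hedged sketch pulls high-risk accounts from the Identity Protection API in Microsoft Graph (IdentityRiskyUser.Read.All permission assumed); the tenant and app placeholders are hypothetical and it reuses the credential pattern shown above.

```python
# Hedged sketch: list risky users flagged by Azure AD Identity Protection.
import requests
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential("<tenant-id>", "<app-client-id>", "<app-client-secret>")
token = credential.get_token("https://graph.microsoft.com/.default").token

resp = requests.get(
    "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers",
    headers={"Authorization": f"Bearer {token}"},
    params={"$filter": "riskLevel eq 'high'"},  # only surface high-risk accounts
)
resp.raise_for_status()

for user in resp.json().get("value", []):
    print(user["userPrincipalName"], user["riskLevel"], user["riskState"])
```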

Best Practices

  • Educate Users About Identity Protection - Educating users is crucial for maintaining a secure environment. Most large organizations now provide security training to raise user awareness, helping users protect their identities, recognize phishing attempts, and follow security best practices.
  • Regularly Review and Refine Policies - Regularly assess your Azure AD Identity Protection policies and refine them as the threat landscape and your organization's security requirements evolve.

Azure Firewall

Image of Azure Firewall page, explaining what is Azure Firewall

Azure Firewall is a cloud-based, managed network security service that acts as a barrier between your Azure virtual networks and the internet. It provides centralized network security and protection against unauthorized access and threats, and it operates at both the network and application layers, allowing you to define and enforce granular access control policies.

It enables organizations to control inbound and outbound traffic for virtual networks and for on-premises networks connected through Azure VPN or ExpressRoute, and it can filter traffic by source and destination IP address, port, protocol, and even fully qualified domain name (FQDN).

Key Features

1. Network and Application-Level Filtering

This feature allows organizations to define rules based on IP addresses (source and destination), including ports, protocols, and FQDNs. Moreover, it helps organizations filter network and application-level traffic, controlling inbound and outbound connections.
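As a concrete illustration, the hedged sketch below adds a network rule collection to an existing firewall with the azure-mgmt-network SDK. The resource names are hypothetical and model or field names can differ between SDK versions, so treat it as an outline of the classic rule-collection style rather than a drop-in script.

```python
# Hedged sketch: append an "allow HTTPS outbound" network rule to an Azure Firewall.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    AzureFirewallNetworkRule,
    AzureFirewallNetworkRuleCollection,
    AzureFirewallRCAction,
)

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Fetch the current firewall definition so existing rule collections are preserved.
fw = client.azure_firewalls.get("prod-rg", "prod-firewall")
fw.network_rule_collections = fw.network_rule_collections or []

fw.network_rule_collections.append(
    AzureFirewallNetworkRuleCollection(
        name="allow-web-outbound",
        priority=200,
        action=AzureFirewallRCAction(type="Allow"),
        rules=[
            AzureFirewallNetworkRule(
                name="https-to-saas",
                protocols=["TCP"],
                source_addresses=["10.0.1.0/24"],
                destination_addresses=["*"],
                destination_ports=["443"],
            )
        ],
    )
)

# begin_create_or_update performs a full PUT of the firewall resource.
client.azure_firewalls.begin_create_or_update("prod-rg", "prod-firewall", fw).result()
```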

2. Fully Stateful Firewall

Azure Firewall is fully stateful, which means it automatically allows return traffic for established connections without requiring additional rules. This simplifies rule management and ensures that legitimate traffic flows smoothly.

3. High Availability and Scalability

Azure Firewall is highly available and scalable. It scales automatically as network traffic demand increases and provides built-in availability across multiple availability zones.

Best Practices

  • Design an Appropriate Network Architecture - Plan your virtual network architecture carefully to ensure proper placement of Azure Firewall. Consider network segmentation, subnet placement, and routing requirements to enforce security policies and control traffic flow effectively.
  • Implement Network Traffic Filtering Rules - Define granular network traffic filtering rules based on your specific security requirements. Start with a default-deny approach and allow only necessary traffic. Regularly review and update firewall rules to maintain an up-to-date and effective security posture.
  • Use Application Rules for Fine-Grained Control - Leverage Azure Firewall's application rules to allow or deny traffic to specific FQDNs over specific protocols and ports. This lets organizations enforce granular access control to the applications used within their network.

Azure Resource Locks

Image of Azure Resource Locks page, explaining how to lock your resources to protect your infrastructure

Azure Resource Locks is a Microsoft Azure feature that lets you lock Azure resources to prevent accidental deletion or modification. It provides an additional layer of control and governance over your Azure resources, helping mitigate the risk of unintended changes to, or deletion of, critical resources.

Key Features

Two types of locks can be applied:

1. CanNotDelete (Delete)

This lock type allows authorized users to read and modify a resource but prevents them from deleting it.

2. ReadOnly

This lock type provides the highest level of protection: authorized users can read the resource but cannot modify or delete it, ensuring the resource remains completely unaltered.
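As a quick illustration, here is a minimal sketch of applying a CanNotDelete lock to a resource group with the azure-mgmt-resource SDK; the subscription ID, resource group, and lock name are hypothetical placeholders.

```python
# Hedged sketch: lock a resource group against accidental deletion.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource.locks import ManagementLockClient

lock_client = ManagementLockClient(DefaultAzureCredential(), "<subscription-id>")

lock_client.management_locks.create_or_update_at_resource_group_level(
    resource_group_name="prod-rg",
    lock_name="do-not-delete",
    parameters={
        "level": "CanNotDelete",  # use "ReadOnly" to also block modifications
        "notes": "Protects production resources from accidental deletion.",
    },
)
```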

Best Practices

  • Establish a Clear Governance Policy - Develop a governance policy that outlines the use of Resource Locks within your organization. The policy should define who has the authority to apply or remove locks and when to use locks, and any exceptions or special considerations.
  • Leverage Azure Policy for Lock Enforcement - Use Azure Policy alongside Resource Locks to enforce compliance with your governance policies. Azure Policy can automatically apply locks to resources based on predefined rules, reducing the risk of misconfiguration.

Azure Secure SQL Database Always Encrypted

Image of Azure Always Encrypted page, explaining how it works

Azure Secure SQL Database Always Encrypted is a feature of Microsoft Azure SQL Database that adds a dedicated layer of protection for sensitive data. It protects data at rest and in transit, and it ensures that even database administrators and other privileged users cannot see the plaintext values of encrypted columns.

Key Features

1. Client-Side Encryption

Always Encrypted enables client applications to encrypt sensitive data before sending it to the database. As a result, the data remains encrypted throughout its lifecycle and can be decrypted only by an authorized client application.
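For example, here is a minimal sketch of a client application connecting to Azure SQL Database with Always Encrypted enabled, using pyodbc and the Microsoft ODBC driver. The server, database, and column names are hypothetical, and if the column master key is stored in Azure Key Vault the driver also needs its key store provider settings configured.

```python
# Hedged sketch: read an Always Encrypted column from a client application.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=customers;"
    "UID=app_user;PWD=<password>;"
    "ColumnEncryption=Enabled;"  # the driver encrypts/decrypts transparently
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    # The ssn column stays encrypted at rest and in transit; only clients with
    # access to the column master key can see its plaintext values.
    cursor.execute("SELECT customer_id, ssn FROM dbo.Customers")
    for customer_id, ssn in cursor.fetchall():
        print(customer_id, ssn)
```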

2. Column-Level Encryption

Always Encrypted allows you to selectively encrypt individual columns in a database table rather than encrypting the entire database. It gives organizations fine-grained control over which data needs encryption, allowing you to balance security and performance requirements.

3. Encryption Opaque to the Database Server

The database server stores only the encrypted values, so the data remains protected even if the database itself is compromised. The server never sees the plaintext and cannot decrypt it.

Best Practices

Encryption keys are at the heart of Always Encrypted, so plan and manage them carefully. Consider the following best practices.

  • Use a Secure and Centralized Key Management System - Store encryption keys in a safe and centralized location, separate from the database. Azure Key Vault is a recommended option for managing keys securely.
  • Implement Key Rotation and Backup - Regularly rotate encryption keys to mitigate the risk of key compromise, and establish a key backup strategy so that encrypted data can still be recovered if a key is lost or becomes inaccessible.
  • Control Access to Encryption Keys - Ensure that only authorized individuals or applications have access to the encryption keys. Applying the principle of least privilege and robust access control will prevent unauthorized access to keys.

Azure Key Vault

Image of Azure Key Vault page

Azure Key Vault is a cloud service provided by Microsoft Azure that helps safeguard cryptographic keys, secrets, and sensitive information. It is a centralized storage and management system for keys, certificates, passwords, connection strings, and other confidential information required by applications and services. It allows developers and administrators to securely store and tightly control access to their application secrets without exposing them directly in their code or configuration files.

Key Features

1. Key Management

Key Vault provides a secure key management system that allows you to create, import, and manage cryptographic keys for encryption, decryption, signing, and verification.
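As a brief illustration, this minimal sketch creates and retrieves an RSA key with the azure-keyvault-keys SDK; the vault URL and key name are hypothetical.

```python
# Hedged sketch: create and fetch a cryptographic key in Azure Key Vault.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

key_client = KeyClient(
    vault_url="https://my-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Create (or add a new version of) an RSA key used for envelope encryption.
rsa_key = key_client.create_rsa_key("app-encryption-key", size=2048)
print(rsa_key.name, rsa_key.key_type)

# Later, fetch the key by name for use with a CryptographyClient.
fetched = key_client.get_key("app-encryption-key")
```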

2. Secret Management

It enables you to securely store and manage secrets such as passwords, API keys, connection strings, and other sensitive information, all encrypted at rest within the vault.
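Here is a minimal sketch of the same idea for secrets, using the azure-keyvault-secrets SDK; the vault URL and secret name are hypothetical.

```python
# Hedged sketch: store a secret once and read it at runtime.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

secret_client = SecretClient(
    vault_url="https://my-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Store the secret once (e.g., from a deployment pipeline)...
secret_client.set_secret("sql-connection-string", "Server=...;Database=...;")

# ...and read it at runtime instead of hard-coding it in config files.
conn_string = secret_client.get_secret("sql-connection-string").value
```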

3. Certificate Management

Key Vault supports the storage and management of X.509 certificates, allowing you to securely store, manage, and retrieve certificates for application use.

4. Access Control

Key Vault provides fine-grained access control to manage who can perform operations on stored keys and secrets. It integrates with Azure Active Directory (Azure AD) for authentication and authorization.

Best Practices

  • Centralized Secrets Management - Consolidate all your application secrets and sensitive information in Key Vault rather than scattering them across different systems or configurations. This simplifies management and reduces the risk of accidental exposure.
  • Use RBAC and Access Policies - Implement role-based access control (RBAC) and define granular access policies to govern who can perform operations on Key Vault resources. Follow the principle of least privilege, granting only the necessary permissions to users or applications.
  • Secure Key Vault Access - Restrict access to Key Vault resources to trusted networks by using virtual network service endpoints or private endpoints; this helps prevent unauthorized access from the public internet.

Azure AD Multi-Factor Authentication

Image of Azure AD Multi-Factor Authentication page, explaining how it works

Azure AD Multi-Factor Authentication (MFA) is a security feature provided by Microsoft Azure that adds an extra layer of protection to user sign-ins and helps safeguard against unauthorized access to resources. Users must provide additional authentication factors beyond just a username and password.

Key Features

1. Multiple Authentication Methods

Azure AD MFA supports a range of authentication methods, including phone calls, text messages (SMS), mobile app notifications, mobile app verification codes, email, and third-party authentication apps. This flexibility allows organizations to choose the methods that best suit their users' needs and security requirements.
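For visibility into which methods users have actually registered, the hedged sketch below lists a user's registered authentication methods through Microsoft Graph (UserAuthenticationMethod.Read.All permission assumed); the user principal name and credential placeholders are hypothetical.

```python
# Hedged sketch: list the authentication methods registered for one user.
import requests
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential("<tenant-id>", "<app-client-id>", "<app-client-secret>")
token = credential.get_token("https://graph.microsoft.com/.default").token

resp = requests.get(
    "https://graph.microsoft.com/v1.0/users/alice@contoso.com/authentication/methods",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

for method in resp.json().get("value", []):
    # @odata.type distinguishes e.g. the Authenticator app, phone (SMS/call),
    # and FIDO2 methods registered for the user.
    print(method["@odata.type"])
```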

2. Conditional Access Policies

Azure AD MFA integrates with conditional access policies, allowing organizations to define the specific conditions under which MFA is required - for example, based on user location, device trust, application sensitivity, and sign-in risk level. This granular control helps organizations strike a balance between security and user convenience.
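To make this concrete, here is a sketch of a conditional access policy body that requires MFA for all users on all cloud apps; it could be created through Microsoft Graph in the same way as the risk-based policy shown earlier. The excluded break-glass account is a hypothetical placeholder, and starting in report-only mode before enforcing is generally advisable.

```python
# Hedged sketch: conditional access policy body enforcing MFA for all users.
require_mfa_policy = {
    "displayName": "Require MFA for all users",
    "state": "enabledForReportingButNotEnforced",  # switch to "enabled" after review
    "conditions": {
        "users": {
            "includeUsers": ["All"],
            "excludeUsers": ["<break-glass-account-object-id>"],  # hypothetical
        },
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}
```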

Best Practices

  • Enable MFA for All Users - Implement a company-wide policy that enforces MFA for all users, regardless of their roles or privileges. This ensures consistent, comprehensive security across the organization.
  • Use Risk-Based Policies - Leverage Azure AD Identity Protection and its risk-based policies to dynamically adjust the level of authentication required based on the perceived risk of each sign-in attempt. This helps balance security and user experience by applying MFA only when it is needed.
  • Implement Multi-Factor Authentication for Privileged Accounts - Ensure that all privileged accounts, such as administrators and IT staff, are protected with MFA. These accounts have elevated access rights and are prime targets for attackers. Enforcing MFA adds an extra layer of protection to prevent unauthorized access.

Conclusion

In this post, we introduced the importance of cloud security, driven by our growing dependence on cloud providers. We then walked through the layers of security in Azure to understand its landscape, and explored six of its key tools and features: Azure Active Directory Identity Protection, Azure Firewall, Azure Resource Locks, Azure Secure SQL Database Always Encrypted, Azure Key Vault, and Azure AD Multi-Factor Authentication, giving an overview of each along with its key features and the best practices your organization can apply.

Ready to go beyond native Azure tools?

While Azure provides powerful built-in security features, securing sensitive data across multi-cloud environments requires deeper visibility and control.

Request a demo with Sentra to see how our platform complements Azure by discovering, classifying, and protecting sensitive data - automatically and continuously.


Discover Ron’s expertise, shaped by over 20 years of hands-on tech and leadership experience in cybersecurity, cloud, big data, and machine learning. As a serial entrepreneur and seed investor, Ron has contributed to the success of several startups, including Axonius, Firefly, Guardio, Talon Cyber Security, and Lightricks, after founding a company acquired by Oracle.

Latest Blog Posts

David Stuart and Nikki Ralston
November 24, 2025
3 Min Read

Third-Party OAuth Apps Are the New Shadow Data Risk: Lessons from the Gainsight/Salesforce Incident

The recent exposure of customer data through a compromised Gainsight integration within Salesforce environments is more than an isolated event - it’s a sign of a rapidly evolving class of SaaS supply-chain threats. Even trusted AppExchange partners can inadvertently create access pathways that attackers exploit, especially when OAuth tokens and machine-to-machine connections are involved. This post explores what happened, why today’s security tooling cannot fully address this scenario, and how data-centric visibility and identity governance can meaningfully reduce the blast radius of similar breaches.

A Recap of the Incident

In this case, attackers obtained sensitive credentials tied to a Gainsight integration used by multiple enterprises. Those credentials allowed adversaries to generate valid OAuth tokens and access customer Salesforce orgs, in some cases with extensive read capabilities. Neither Salesforce nor Gainsight intentionally misconfigured their systems. This was not a product flaw in either platform. Instead, the incident illustrates how deeply interconnected SaaS environments have become and how the security of one integration can impact many downstream customers.

Understanding the Kill Chain: From Stolen Secrets to Salesforce Lateral Movement

The attackers’ pathway followed a pattern increasingly common in SaaS-based attacks. It began with the theft of secrets - likely API keys, OAuth client secrets, or other credentials that often end up buried in repositories, CI/CD logs, or overlooked storage locations. Once in hand, these secrets enabled the attackers to generate long-lived OAuth tokens, which are designed for application-level access and operate outside MFA or user-based access controls.

What makes OAuth tokens particularly powerful is that they inherit whatever permissions the connected app holds. If an integration has broad read access, which many do for convenience or legacy reasons, an attacker who compromises its token suddenly gains the same level of visibility. Inside Salesforce, this enabled lateral movement across objects, records, and reporting surfaces far beyond the intended scope of the original integration. The entire kill chain was essentially a progression from a single weakly-protected secret to high-value data access across multiple Salesforce tenants.

Why Traditional SaaS Security Tools Missed This

Incident response teams quickly learned what many organizations are now realizing: traditional CASBs and CSPMs don’t provide the level of identity-to-data context necessary to detect or prevent OAuth-driven supply-chain attacks.

CASBs primarily analyze user behavior and endpoint connections, but OAuth apps are “non-human identities” - they don’t log in through browsers or trigger interactive events. CSPMs, in contrast, focus on cloud misconfigurations and posture, but they don’t understand the fine-grained data models of SaaS platforms like Salesforce. What was missing in this incident was visibility into how much sensitive data the Gainsight connector could access and whether the privileges it held were appropriate or excessive. Without that context, organizations had no meaningful way to spot the risk until the compromise became public.

Sentra Helps Prevent and Contain This Attack Pattern

Sentra’s approach is fundamentally different because it starts with data: what exists, where it resides, who or what can access it, and whether that access is appropriate. Rather than treating Salesforce or other SaaS platforms as black boxes, Sentra maps the data structures inside them, identifies sensitive records, and correlates that information with identity permissions including third-party apps, machine identities, and OAuth sessions.

One key pillar of Sentra’s value lies in its DSPM capabilities. The platform identifies sensitive data across all repositories, including cloud storage, SaaS environments, data warehouses, code repositories, collaboration platforms, and even on-prem file systems. Because Sentra also detects secrets such as API keys, OAuth credentials, private keys, and authentication tokens across these environments, it becomes possible to catch compromised or improperly stored secrets before an attacker ever uses them to access a SaaS platform.

OAuth 2.0 Access Token

Another area where this becomes critical is the detection of over-privileged connected apps. Sentra continuously evaluates the scopes and permissions granted to integrations like Gainsight, identifying when either an app or an identity holds more access than its business purpose requires. This type of analysis would have revealed that a compromised integrated app could see far more data than necessary, providing early signals of elevated risk long before an attacker exploited it.

Sentra further tracks the health and behavior of non-human identities. Service accounts and connectors often rely on long-lived credentials that are rarely rotated and may remain active long after the responsible team has changed. Sentra identifies these stale or overly permissive identities and highlights when their behavior deviates from historical norms. In the context of this incident type, that means detecting when a connector suddenly begins accessing objects it never touched before or when large volumes of data begin flowing to unexpected locations or IP ranges.

Finally, Sentra’s behavior analytics (part of DDR) help surface early signs of misuse. Even if an attacker obtains valid OAuth tokens, their data access patterns, query behavior, or geography often diverge from the legitimate integration. By correlating anomalous activity with the sensitivity of the data being accessed, Sentra can detect exfiltration patterns in real time—something traditional tools simply aren’t designed to do.

The 2026 Outlook: More Incidents Are Coming

The Gainsight/Salesforce incident is unlikely to be the last of its kind. The speed at which enterprises adopt SaaS integrations far exceeds the rate at which they assess the data exposure those integrations create. OAuth-based supply-chain attacks are growing quickly because they allow adversaries to compromise one provider and gain access to dozens or hundreds of downstream environments. Given the proliferation of partner ecosystems, machine identities, and unmonitored secrets, this attack vector will continue to scale.

Prediction:
Unless enterprises add data-centric SaaS visibility and identity-aware DSPM, we should expect three to five more incidents of similar magnitude before summer 2026.

Conclusion

The real lesson from the Gainsight/Salesforce breach is not to reduce reliance on third-party SaaS providers as modern business would grind to a halt without them. The lesson is that enterprises must know where their sensitive data lives, understand exactly which identities and integrations can access it, and ensure those privileges are continuously validated. Sentra provides that visibility and contextual intelligence, making it possible to identify the risks that made this breach possible and help to prevent the next one.


David Stuart
November 24, 2025
3 Min Read

Securing Unstructured Data in Microsoft 365: The Case for Petabyte-Scale, AI-Driven Classification

The modern enterprise runs on collaboration and nothing powers that more than Microsoft 365. From Exchange Online and OneDrive to SharePoint, Teams, and Copilot workflows, M365 hosts a massive and ever-growing volume of unstructured content: documents, presentations, spreadsheets, image files, chats, attachments, and more.

Yet unstructured = harder to govern. Unlike tidy database tables with defined schemas, unstructured repositories flood in with ambiguous content types, buried duplicates, or unused legacy files. It’s in these stacks that sensitive IP, model training data, or derivative work can quietly accumulate, and then leak.

Consider this: one recent study found that more than 81% of IT professionals report data-loss events in M365 environments. And to make matters worse, according to the International Data Corporation (IDC), 60% of organizations do not have a strategy for protecting their critical business data that resides in Microsoft 365.

Why Traditional Tools Struggle

  • Built-in classification tools (e.g., M365’s native capabilities) often rely on pattern matching or simple keywords, and therefore struggle with accuracy, context, scale and derivative content.

  • Many solutions only surface that a file exists and carries a type label - but stop short of mapping who or what can access it, its purpose, and what its downstream exposure might be.

  • GenAI workflows now pump massive volumes of unstructured data into copilots, knowledge bases, training sets - creating new blast radii that legacy DLP or labeling tools weren’t designed to catch.

What a Modern Platform Must Deliver

  1. High-accuracy, petabyte-scale classification of unstructured data (so you know what you have, where it sits, and how sensitive it is). And it must keep pace with explosive data growth and do so cost efficiently.

  2. Unified Data Access Governance (DAG) - mapping identities (users, service principals, agents), permissions, implicit shares, federated/cloud-native paths across M365 and beyond.
  3. Data Detection & Response (DDR) - continuous monitoring of data movement, copies, derivative creation, AI agent interactions, and automated response/remediation.

How Sentra Addresses This in M365

Assets contain plain text credit card numbers

At Sentra, we’ve built a cloud-native data-security platform specifically to address this triad of capabilities - and we extend that deeply into M365 (OneDrive, SharePoint, Teams, Exchange Online) and other SaaS platforms.

  • A newly announced AI Classifier for Unstructured Data accelerates and improves classification across M365’s unstructured repositories (see: Sentra launches breakthrough unstructured-data AI classification capabilities).

  • Petabyte-scale processing: our architecture supports classification and monitoring of massive file estates without astronomical cost or time-to-value.

  • Seamless support for M365 services: read/write access, ingestion, classification, access-graph correlation, detection of shadow/unmanaged copies across OneDrive and SharePoint—plus integration into our DAG and DDR layers (see our guide: How to Secure Regulated Data in Microsoft 365 + Copilot).

  • Cost-efficient deployment: designed for high scale without breaking the budget or massive manual effort.

The Bottom Line

In today’s cloud/AI era, saying “we discovered the PII in our M365 tenant” isn’t enough.

The real question is: Do I know who or what (user/agent/app) can access that content, what its business purpose is, and whether it’s already been copied or transformed into a risk vector?


If your solution can’t answer that, your unstructured data remains a silent, high-stakes liability and resolving concerns becomes a very costly, resource-draining burden. By embracing a platform that combines classification accuracy, petabyte-scale processing, unified DSPM + DAG + DDR, and deep M365 support, you move from “hope I’m secure” to “I know I’m secure.”

Want to see how it works in a real M365 setup? Check out our video or book a demo.


Ofir Yehoshua
November 17, 2025
4 Min Read

How to Gain Visibility and Control in Petabyte-Scale Data Scanning

Every organization today is drowning in data - millions of assets spread across cloud platforms, on-premises systems, and an ever-expanding landscape of SaaS tools. Each asset carries value, but also risk. For security and compliance teams, the mandate is clear: sensitive data must be inventoried, managed and protected.

Scanning every asset for security and compliance is no longer optional; it’s the line between trust and exposure, between resilience and chaos.

Many data security tools promise to scan and classify sensitive information across environments. In practice, doing this effectively and at scale demands more than raw ‘brute force’ scanning power. It requires robust visibility and management capabilities: a cockpit view that lets teams monitor coverage, prioritize intelligently, and strike the right balance between scan speed, cost, and accuracy.

Why Scan Tracking Is Crucial

Scanning is not instantaneous. Depending on the size and complexity of your environment, it can take days - sometimes even weeks - to complete. Meanwhile, new data is constantly being created or modified, adding to the challenge.

Without clear visibility into the scanning process, organizations face several critical obstacles:

  • Unclear progress: It’s often difficult to know what has already been scanned, what is currently in progress, and what remains pending. This lack of clarity creates blind spots that undermine confidence in coverage.

  • Time estimation gaps: In large environments, it’s hard to know how long scans will take because so many factors come into play: the number of assets, their size, the type of data (structured, semi-structured, or unstructured), and how much scanner capacity is available. As a result, predicting when you’ll reach full coverage is tricky. This becomes especially stressful when scans need to be completed before a fixed deadline, like a compliance audit.

    "With Sentra’s Scan Dashboard, we were able to quickly scale up our scanners to meet a tight audit deadline, finish on time, and then scale back down to save costs. The visibility and control it gave us made the whole process seamless”, said CISO of Large Retailer.
  • Poor prioritization: Not all environments or assets carry the same importance. Yet without visibility into scan status, teams struggle to balance historical scans of existing assets with the ongoing influx of newly created data, making it nearly impossible to prioritize effectively based on risk or business value.

Sentra’s End-to-End Scanning Workflow

Managing scans at petabyte scale is complex. Sentra streamlines the process with a workflow built for scale, clarity, and control that features:

1. Comprehensive Asset Discovery

Before scanning even begins, Sentra automatically discovers assets across cloud platforms, on-premises systems, and SaaS applications. This ensures teams have a complete, up-to-date inventory and visual map of their data landscape, so no environment or data store is overlooked.

Example: New S3 buckets, a freshly deployed BigQuery dataset, or a newly connected SharePoint site are automatically identified and added to the inventory.

Comprehensive Asset Discovery with Sentra

2. Configurable Scan Management

Administrators can fine-tune how scans are executed to meet their organization’s needs. With flexible configuration options - such as the number of scanners, sampling rates, and prioritization rules - teams can strike the right balance between scan speed, coverage, and cost control.

For instance, compliance-critical assets can be scanned at full depth immediately, while less critical environments can run at reduced sampling to save on compute consumption and costs.

3. Real-Time Scan Dashboard

Sentra’s unified Scan Dashboard provides a cockpit view into scanning operations, so teams always know where they stand. Key features include:

  • Daily scan throughput correlated with the number of active scanners, helping teams understand efficiency and predict completion times.
  • Coverage tracking that visualizes overall progress and highlights which assets remain unscanned.
  • Decision-making tools that allow teams to dynamically adjust, whether by adding scanner capacity, changing sampling rates, or reordering priorities when new high-risk assets appear.
Real-Time Scan Dashboard with Sentra

Handling Data Changes

The challenge doesn’t end once the initial scans are complete. Data is dynamic: new files are added daily, existing records are updated, and sensitive information shifts locations. Sentra’s activity feeds give teams the visibility they need to understand how their data landscape is evolving and adapt their data security strategies in real time.


Conclusion

Tracking scan status at scale is complex but critical to any data security strategy. Sentra provides an end-to-end view and unmatched scan control, helping organizations move from uncertainty to confidence with clear prediction of scan timelines, faster troubleshooting, audit-ready compliance, and smarter, cost-efficient decisions for securing data.

