
Use Redshift Data Scrambling for Additional Data Protection

May 3, 2023
8 Min Read

According to IBM, a data breach in the United States cost companies an average of $9.44 million in 2022. It is now more important than ever for organizations to protect confidential information. Data scrambling, which can add an extra layer of security to data, is one approach to accomplishing this. 

In this post, we'll analyze the value of data protection, look at the potential financial consequences of data breaches, and talk about how Redshift Data Scrambling may help protect private information.

The Importance of Data Protection

Data protection is essential to safeguard sensitive data from unauthorized access. Identity theft, financial fraud, and other serious consequences are all possible results of a data breach. Data protection is also crucial for compliance reasons. Sensitive data must be protected by law in several sectors, including government, banking, and healthcare. Heavy fines, legal problems, and business loss may result from failure to abide by these regulations.

Attackers employ many techniques, including phishing, malware, and insider threats, to gain access to confidential information. For example, a phishing attack may lead to the theft of login credentials, and malware may infect a system, opening the door for additional attacks and data theft. 

So how can you protect yourself against these attacks and minimize your data attack surface?

What is Redshift Data Masking?

Redshift data masking is a technique used to protect sensitive data in Amazon Redshift, a cloud-based data warehousing and analytics service. Redshift data masking involves replacing sensitive data with fictitious, realistic values to protect it from unauthorized access or exposure. Used in conjunction with other security measures, such as access control and encryption, Redshift data masking helps create a comprehensive data protection plan.
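As a minimal illustration of one simple form of masking, assuming a hypothetical employee table with employee_id and ssn columns, a view can expose fixed characters in place of most of each value:

-- Illustrative only: expose a masked view instead of the raw column
CREATE VIEW employee_masked AS
SELECT employee_id,
       'XXX-XX-' || RIGHT(ssn, 4) AS ssn_masked
FROM employee;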


What is Redshift Data Scrambling?

Redshift data scrambling protects confidential information in a Redshift database by altering original data values using algorithms or formulas, creating unrecognizable data sets. This method is beneficial when sharing sensitive data with third parties or using it for testing, development, or analysis, ensuring privacy and security while enhancing usability. 

The technique is highly customizable, allowing organizations to select the desired level of protection while maintaining data usability. Redshift data scrambling is cost-effective, requiring no additional hardware or software investments, providing an attractive, low-cost solution for organizations aiming to improve cloud data security.

Data Masking vs. Data Scrambling

Data masking involves replacing sensitive data with fictitious but realistic values. Data scrambling, on the other hand, changes the original data values using an algorithm or a formula to generate a new set of values.

In some cases, data scrambling can be used as part of data masking techniques. For instance, sensitive data such as credit card numbers can be scrambled before being masked to enhance data protection further.
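The following hypothetical sketch shows that combination: a credit card value is first scrambled (using a UDF like the scramble function defined later in this post) and then masked so that only the last four characters of the scrambled value are exposed. The payments table and card_number column are assumptions for illustration.

-- Hypothetical: scramble first, then mask all but the last four characters of the scrambled value
SELECT 'XXXX-XXXX-XXXX-' || RIGHT(scramble(card_number), 4) AS protected_card
FROM payments;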

Setting up Redshift Data Scrambling

Having gained an understanding of Redshift and data scrambling, we can now look at how to set it up. Enabling data scrambling in Redshift requires several steps.

To achieve data scrambling in Redshift, SQL queries are utilized to invoke built-in or user-defined functions. These functions utilize a blend of cryptographic techniques and randomization to scramble the data.

The following steps use example code to illustrate how to set it up:

Step 1: Create a new Redshift cluster

Create a new Redshift cluster or use an existing cluster if available. 


Step 2: Define a scrambling key

Define a scrambling key that will be used to scramble the sensitive data.

 
SET session my_scrambling_key = 'MyScramblingKey';

In this code snippet, we are defining a scrambling key by setting a session-level parameter named <inlineCode>my_scrambling_key<inlineCode> to the value <inlineCode>MyScramblingKey<inlineCode>. This key will be used by the user-defined function to scramble the sensitive data.

Step 3: Create a user-defined function (UDF)

Create a user-defined function in Redshift that will be used to scramble the sensitive data. 


CREATE FUNCTION scramble(input_string VARCHAR)
RETURNS VARCHAR
IMMUTABLE
AS $$
    # Placeholder logic: shift each character by the matching key character.
    # Replace this with your own scrambling algorithm.
    scramble_key = 'MyScramblingKey'
    if input_string is None:
        return None
    scrambled = ''
    for i, ch in enumerate(input_string):
        key_ch = scramble_key[i % len(scramble_key)]
        # Keep the result within printable ASCII
        scrambled += chr((ord(ch) + ord(key_ch)) % 95 + 32)
    return scrambled
$$ LANGUAGE plpythonu;

Here, we are creating a UDF named <inlineCode>scramble<inlineCode> that takes a string input and returns the scrambled output. The function is defined as <inlineCode>IMMUTABLE<inlineCode>, which means that it will always return the same result for the same input, which is important for data scrambling. The body above contains only placeholder character-shifting logic; you will need to supply your own scrambling logic.

Step 4: Apply the UDF to sensitive columns

Apply the UDF to the sensitive columns in the database that need to be scrambled.


UPDATE employee SET ssn = scramble(ssn);

For example, this applies the <inlineCode>scramble<inlineCode> UDF to a column named <inlineCode>ssn<inlineCode> in a table named <inlineCode>employee<inlineCode>. The <inlineCode>UPDATE<inlineCode> statement calls the <inlineCode>scramble<inlineCode> UDF and replaces the values in the <inlineCode>ssn<inlineCode> column with the scrambled values.

Step 5: Test and validate the scrambled data

Test and validate the scrambled data to ensure that it is unreadable and unusable by unauthorized parties.


SELECT ssn, scramble(ssn) AS scrambled_ssn
FROM employee;

In this snippet, we are running a <inlineCode>SELECT<inlineCode> statement to retrieve the <inlineCode>ssn<inlineCode> column and the corresponding scrambled value using the <inlineCode>scramble<inlineCode> UDF. We can compare the original and scrambled values to ensure that the scrambling is working as expected. Note that this comparison is only meaningful before the <inlineCode>UPDATE<inlineCode> in Step 4 has overwritten the column, or when it is run against a copy of the original data.

Step 6: Monitor and maintain the scrambled data

To monitor and maintain the scrambled data, we can regularly check the sensitive columns to ensure that they are still scrambled and that there are no vulnerabilities or breaches. We should also maintain the scrambling key and UDF to ensure that they are up-to-date and effective.
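As a simple sketch of such a check, assuming the employee table from the earlier steps, the following query counts rows whose ssn values still match a plain SSN pattern and should therefore return zero:

-- Illustrative spot check: rows still matching a real SSN pattern indicate unscrambled data
SELECT COUNT(*) AS unscrambled_rows
FROM employee
WHERE ssn ~ '^[0-9]{3}-[0-9]{2}-[0-9]{4}$';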

Different Options for Scrambling Data in Redshift

Selecting a data scrambling technique involves balancing security levels, data sensitivity, and application requirements. Various general algorithms exist, each with unique pros and cons. To scramble data in Amazon Redshift, you can use the following Python code samples in conjunction with a library like psycopg2 to interact with your Redshift cluster. Before executing the code samples, you will need to install the psycopg2 library:


pip install psycopg2

Random

Utilizing a random number generator, the Random option quickly secures data, although its susceptibility to reverse engineering limits its robustness for long-term protection.


import random
import string
import psycopg2

def random_scramble(data):
    scrambled = ""
    for char in data:
        scrambled += random.choice(string.ascii_letters + string.digits)
    return scrambled

# Connect to your Redshift cluster
conn = psycopg2.connect(host='your_host', port='your_port', dbname='your_dbname', user='your_user', password='your_password')
cursor = conn.cursor()
# Fetch data from your table
cursor.execute("SELECT sensitive_column FROM your_table;")
rows = cursor.fetchall()

# Scramble the data (one scrambled string per original value)
scrambled_rows = [random_scramble(row[0]) for row in rows]

# Update the table, matching each original value to its scrambled replacement
cursor.executemany("UPDATE your_table SET sensitive_column = %s WHERE sensitive_column = %s;", [(scrambled, row[0]) for scrambled, row in zip(scrambled_rows, rows)])
conn.commit()

# Close the connection
cursor.close()
conn.close()

Shuffle

The Shuffle option enhances security by rearranging data characters. However, it remains prone to brute-force attacks, despite being harder to reverse-engineer.


import random
import psycopg2

def shuffle_scramble(data):
    data_list = list(data)
    random.shuffle(data_list)
    return ''.join(data_list)

conn = psycopg2.connect(host='your_host', port='your_port', dbname='your_dbname', user='your_user', password='your_password')
cursor = conn.cursor()

cursor.execute("SELECT sensitive_column FROM your_table;")
rows = cursor.fetchall()

scrambled_rows = [shuffle_scramble(row[0]) for row in rows]

cursor.executemany("UPDATE your_table SET sensitive_column = %s WHERE sensitive_column = %s;", [(scrambled, row[0]) for scrambled, row in zip(scrambled_rows, rows)])
conn.commit()

cursor.close()
conn.close()

Reversible

The Reversible option scrambles characters in a way that can be undone with a key, posing a greater challenge to attackers, but it is still vulnerable to brute-force attacks. We’ll use the Caesar cipher as an example.


import psycopg2

def caesar_cipher(data, key):
    encrypted = ""
    for char in data:
        if char.isalpha():
            shift = key % 26
            if char.islower():
                encrypted += chr((ord(char) - 97 + shift) % 26 + 97)
            else:
                encrypted += chr((ord(char) - 65 + shift) % 26 + 65)
        else:
            encrypted += char
    return encrypted

conn = psycopg2.connect(host='your_host', port='your_port', dbname='your_dbname', user='your_user', password='your_password')
cursor = conn.cursor()

cursor.execute("SELECT sensitive_column FROM your_table;")
rows = cursor.fetchall()

key = 5
encrypted_rows = [caesar_cipher(row[0], key) for row in rows]
cursor.executemany("UPDATE your_table SET sensitive_column = %s WHERE sensitive_column = %s;", [(encrypted, row[0]) for encrypted, row in zip(encrypted_rows, rows)])
conn.commit()

cursor.close()
conn.close()

Custom

The Custom option enables users to create tailor-made algorithms to resist specific attack types, potentially offering superior security. However, the development and implementation of custom algorithms demand greater time and expertise.
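As a minimal sketch of what a custom approach might look like, assuming the same your_table/sensitive_column setup as the earlier samples, the function below applies a keyed substitution over a fixed character set. The key and character set are illustrative placeholders, not a recommendation.

import psycopg2

def custom_scramble(data, key='MyScramblingKey'):
    # Keyed substitution: shift each character within a fixed character set
    charset = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
    scrambled = ''
    for i, ch in enumerate(data):
        if ch in charset:
            shift = ord(key[i % len(key)])
            scrambled += charset[(charset.index(ch) + shift) % len(charset)]
        else:
            # Leave characters outside the set (dashes, spaces) unchanged
            scrambled += ch
    return scrambled

conn = psycopg2.connect(host='your_host', port='your_port', dbname='your_dbname', user='your_user', password='your_password')
cursor = conn.cursor()

cursor.execute("SELECT sensitive_column FROM your_table;")
rows = cursor.fetchall()

scrambled_rows = [custom_scramble(row[0]) for row in rows]
cursor.executemany("UPDATE your_table SET sensitive_column = %s WHERE sensitive_column = %s;", [(scrambled, row[0]) for scrambled, row in zip(scrambled_rows, rows)])
conn.commit()

cursor.close()
conn.close()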

Best Practices for Using Redshift Data Scrambling

There are several best practices that should be followed when using Redshift Data Scrambling to ensure maximum protection:

Use Unique Keys for Each Table

To limit the impact if one key is compromised, each table should be scrambled with its own key rather than a single shared key. Following the session-parameter pattern from Step 2, you might, for example, define a separate key per table (the names below are illustrative):


SET session employee_scrambling_key = 'EmployeeScramblingKey';
SET session customer_scrambling_key = 'CustomerScramblingKey';

Encrypt Sensitive Data Fields 

Sensitive data fields such as credit card numbers and social security numbers should be encrypted to provide an additional layer of security. Redshift does not expose a built-in column-level <inlineCode>ENCRYPT<inlineCode> SQL function, so field-level encryption is typically applied before the data is loaded or through a user-defined function, similar to the <inlineCode>scramble<inlineCode> UDF above. For example, with a hypothetical <inlineCode>encrypt_value<inlineCode> UDF you could encrypt a credit card number field like this:


-- encrypt_value is a hypothetical UDF you would define, similar to scramble above
SELECT encrypt_value('1234-5678-9012-3456', 'your_encryption_key_here');

Use Strong Encryption Algorithms

Strong encryption algorithms such as AES-256 should be used to provide the strongest protection. Redshift supports AES-256 encryption for data at rest and in transit; at-rest encryption is enabled at the cluster level rather than per column, for example by creating the cluster with encryption turned on using the AWS CLI:


aws redshift create-cluster \
    --cluster-identifier my-encrypted-cluster \
    --node-type dc2.large \
    --cluster-type single-node \
    --master-username awsuser \
    --master-user-password 'YourSecurePassword1' \
    --encrypted \
    --kms-key-id your_kms_key_id

Control Access to Encryption Keys 

Access to encryption keys should be restricted to authorized personnel to prevent unauthorized access to sensitive data. You can achieve this by using AWS KMS (Key Management Service) to manage your encryption keys and granting only specific principals permission to use them. Here's an example of granting a single IAM user decrypt access to a key using KMS in Python:


import boto3

kms = boto3.client('kms')

key_id = 'your_key_id_here'
grantee_principal = 'arn:aws:iam::123456789012:user/jane'

response = kms.create_grant(
    KeyId=key_id,
    GranteePrincipal=grantee_principal,
    Operations=['Decrypt']
)

print(response)

Regularly Rotate Encryption Keys 

Regular rotation of encryption keys ensures that any compromised keys do not provide unauthorized access to sensitive data. In AWS KMS you can turn on automatic key rotation, which rotates the key material once a year. Here's how to enable annual key rotation for a key using the AWS CLI:

 
aws kms enable-key-rotation --key-id your_key_id_here

Turn on logging 

To track user access to sensitive data and identify any unwanted access, logging must be enabled. When you activate audit logging in Amazon Redshift, the SQL commands executed on your cluster are logged, including queries that access sensitive data and data-scrambling operations. You can then examine these logs for unusual access patterns or suspicious activity.

User activity logging is controlled by the <inlineCode>enable_user_activity_logging<inlineCode> parameter in the cluster's parameter group (together with audit logging enabled on the cluster), rather than by a SQL statement. For example, you can turn it on with the AWS CLI:

aws redshift modify-cluster-parameter-group \
    --parameter-group-name your_parameter_group \
    --parameters ParameterName=enable_user_activity_logging,ParameterValue=true

The <inlineCode>stl_query<inlineCode> system table can be used to review logged queries once logging has been enabled. For instance, a query like the one below displays recent queries that touched a certain table:
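-- Illustrative: filter the logged query text for a table name (the employee table from the earlier steps)
SELECT query, starttime, TRIM(querytxt) AS querytxt
FROM stl_query
WHERE querytxt ILIKE '%employee%'
ORDER BY starttime DESC
LIMIT 50;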

Monitor Performance 

Data scrambling is often a resource-intensive practice, so it’s good to monitor CPU usage, memory usage, and disk I/O to ensure your cluster isn’t being overloaded. In Redshift, you can use the <inlineCode>svl_query_summary<inlineCode> and <inlineCode>svl_query_report<inlineCode> system views to monitor query performance. You can also use Amazon CloudWatch to monitor metrics such as CPU usage and disk space.
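As a simple sketch, assuming you have a query ID from stl_query, you can check whether a scrambling query spilled to disk or processed an unexpected number of rows:

-- Illustrative: per-step statistics for one query (replace 12345 with a real query ID)
SELECT query, seg, step, maxtime, rows, is_diskbased
FROM svl_query_summary
WHERE query = 12345
ORDER BY seg, step;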


Establishing Backup and Disaster Recovery

To prevent data loss in the event of a disaster, backup and disaster recovery mechanisms should be put in place. Automated backups and manual snapshots are two of the backup and recovery methods offered by Amazon Redshift. Automated snapshots are taken roughly every eight hours by default. 

Moreover, you can always take a manual snapshot of your cluster. In the case of a failure or disaster, your cluster can be restored from these backups and snapshots. Snapshots are managed through the console, API, or CLI rather than SQL; for example, to take a manual snapshot with the AWS CLI:

aws redshift create-cluster-snapshot \
    --cluster-identifier your_cluster \
    --snapshot-identifier your_snapshot_name

To restore a snapshot into a new cluster, you can use the <inlineCode>restore-from-cluster-snapshot<inlineCode> command. For example:


aws redshift restore-from-cluster-snapshot \
    --cluster-identifier new_cluster_name \
    --snapshot-identifier your_snapshot_name

Frequent Review and Updates

To ensure that data scrambling procedures remain effective and up-to-date with the latest security requirements, it is crucial to consistently review and update them. This process should include examining backup and recovery procedures, encryption techniques, and access controls.

In Amazon Redshift, you can assess access controls by inspecting users, groups, and their associated permissions in system catalog tables such as <inlineCode>pg_user<inlineCode> and <inlineCode>pg_group<inlineCode>. It is essential to confirm that only authorized individuals have access to sensitive information.
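As a quick sketch, the following lists users along with their superuser and create-database privileges:

-- Illustrative: review which users hold elevated privileges
SELECT usename, usesuper, usecreatedb
FROM pg_user
ORDER BY usename;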

To review how columns are defined, use the <inlineCode>pg_table_def<inlineCode> system view, which shows the data type and compression encoding of each column in your tables (note that column encoding is compression, not encryption). Ensure that sensitive data fields are protected with robust encryption methods, such as AES-256, at the cluster level or before the data is loaded.
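For example, a minimal check of the employee table's column definitions (assuming its schema is on your search_path):

-- Illustrative: inspect column types and compression encodings
SELECT "column", type, encoding
FROM pg_table_def
WHERE tablename = 'employee';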

If you manage Redshift backups with AWS Backup, the AWS CLI commands <inlineCode>aws backup list-backup-plans<inlineCode> and <inlineCode>aws backup list-backup-vaults<inlineCode> let you review your backup plans and vaults and evaluate backup and recovery procedures. Make sure your backup and recovery procedures are properly configured and up-to-date.
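A quick sketch of that review with the AWS CLI (credentials and region are assumed to be configured):

# List configured backup plans and vaults for review
aws backup list-backup-plans
aws backup list-backup-vaults

# Redshift's own snapshots can be reviewed directly as well
aws redshift describe-cluster-snapshots --cluster-identifier your_cluster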

Decrypting Data in Redshift

There are different options for decrypting data, depending on the encryption method used and the tools available. The decryption process mirrors the encryption process: usually a custom UDF is used to decrypt the data. Let's look at one example of reversing data that was scrambled with a substitution cipher.

Step 1: Create a UDF with decryption logic for substitution


CREATE FUNCTION decrypt_substitution(ciphertext varchar) RETURNS varchar
IMMUTABLE AS $$
    alphabet = 'abcdefghijklmnopqrstuvwxyz'
    substitution = 'ijklmnopqrstuvwxyzabcdefgh'
    if ciphertext is None:
        return None
    plaintext = ''
    for ch in ciphertext:
        # Look up the ciphertext character in the substitution alphabet
        index = substitution.find(ch)
        if index == -1:
            # Characters outside the substitution alphabet are left unchanged
            plaintext += ch
        else:
            # Map it back to the original alphabet at the same position
            plaintext += alphabet[index]
    return plaintext
$$ LANGUAGE plpythonu;

Step 2: Move the data back after truncating and applying the decryption function


TRUNCATE original_table;
INSERT INTO original_table (column1, decrypted_column2, column3)
SELECT column1, decrypt_substitution(encrypted_column2), column3
FROM temp_table;

In this example, encrypted_column2 is the encrypted version of column2 in the temp_table. The decrypt_substitution function is applied to encrypted_column2, and the result is inserted into the decrypted_column2 in the original_table. Make sure to replace column1, column2, and column3 with the appropriate column names, and adjust the INSERT INTO statement accordingly if you have more or fewer columns in your table.

Conclusion

Redshift data scrambling is an effective tool for additional data protection and should be considered as part of an organization's overall data security strategy. In this blog post, we looked at the importance of data protection and how it can be integrated effectively into the data warehouse. We then covered the difference between data scrambling and data masking before diving into how to set up Redshift data scrambling.

As you become accustomed to Redshift data scrambling, you can strengthen your security posture with different techniques for scrambling data and best practices including encryption, logging, and performance monitoring. By adhering to these recommendations and using an efficient strategy, organizations can improve their data security posture management (DSPM) and reduce the risk of potential breaches.

<blogcta-big>

Veronica is a security researcher at Sentra. She brings a wealth of knowledge and experience as a cybersecurity researcher, with a focus on researching major cloud provider services and AI infrastructures for data-related threats and techniques.
