
Use Redshift Data Scrambling for Additional Data Protection

May 3, 2023 | 8 Min Read

According to IBM, a data breach in the United States cost companies an average of $9.44 million in 2022. It is now more important than ever for organizations to protect confidential information. Data scrambling, which adds an extra layer of security to data, is one way to accomplish this.

In this post, we'll analyze the value of data protection, look at the potential financial consequences of data breaches, and talk about how Redshift Data Scrambling may help protect private information.

The Importance of Data Protection

Data protection is essential to safeguard sensitive data from unauthorized access. A data breach can lead to identity theft, financial fraud, and other serious consequences. Data protection is also crucial for compliance: in several sectors, including government, banking, and healthcare, sensitive data must be protected by law, and failure to abide by these regulations can result in heavy fines, legal problems, and lost business.

Attackers employ many techniques, including phishing, malware, and insider threats, to gain access to confidential information. For example, a phishing attack may lead to stolen login credentials, and malware may infect a system, opening the door for further attacks and data theft.

So how can you protect yourself against these attacks and minimize your data attack surface?

What is Redshift Data Masking?

Redshift data masking is a technique used to protect sensitive data in Amazon Redshift, a cloud-based data warehousing and analytics service. It involves replacing sensitive data with fictitious but realistic values to protect it from unauthorized access or exposure. Redshift data masking can be combined with other security measures, such as access control and encryption, to create a comprehensive data protection plan.
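
As a minimal illustration, assuming an <inlineCode>employee</inlineCode> table with <inlineCode>employee_id</inlineCode> and <inlineCode>ssn</inlineCode> columns (both example names), a masked view can expose only the last four digits of a social security number:

CREATE VIEW employee_masked AS
SELECT employee_id,
       'XXX-XX-' || RIGHT(ssn, 4) AS ssn
FROM employee;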


What is Redshift Data Scrambling?

Redshift data scrambling protects confidential information in a Redshift database by altering original data values using algorithms or formulas, creating unrecognizable data sets. This method is beneficial when sharing sensitive data with third parties or using it for testing, development, or analysis, ensuring privacy and security while enhancing usability. 

The technique is highly customizable, allowing organizations to select the desired level of protection while maintaining data usability. Redshift data scrambling is cost-effective, requiring no additional hardware or software investments, providing an attractive, low-cost solution for organizations aiming to improve cloud data security.

Data Masking vs. Data Scrambling

Data masking involves replacing sensitive data with fictitious but realistic values. Data scrambling, on the other hand, changes the original data values using an algorithm or formula to generate a new set of values.

In some cases, data scrambling can be used as part of data masking techniques. For instance, sensitive data such as credit card numbers can be scrambled before being masked to enhance data protection further.

Setting up Redshift Data Scrambling

Now that we understand Redshift and data scrambling, let's look at how to set it up. Enabling data scrambling in Redshift requires several steps.

To scramble data in Redshift, SQL queries invoke built-in or user-defined functions. These functions blend cryptographic techniques and randomization to scramble the data.

The following steps use example code to show how to set it up:

Step 1: Create a new Redshift cluster

Create a new Redshift cluster or use an existing cluster if available. 


Step 2: Define a scrambling key

Define a scrambling key that will be used to scramble the sensitive data.

 
SET app.my_scrambling_key = 'MyScramblingKey';

In this code snippet, we define a scrambling key by setting a session context variable named <inlineCode>app.my_scrambling_key</inlineCode> to the value <inlineCode>MyScramblingKey</inlineCode>; Redshift requires custom session variables to use a two-part <inlineCode>prefix.name</inlineCode> form. This key will be used when scrambling the sensitive data.
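
You can verify the variable for the current session with <inlineCode>current_setting</inlineCode>:

SELECT current_setting('app.my_scrambling_key');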

Step 3: Create a user-defined function (UDF)

Create a user-defined function in Redshift that will be used to scramble the sensitive data. 


CREATE OR REPLACE FUNCTION scramble(input_string VARCHAR)
RETURNS VARCHAR
STABLE
AS $$
    # Example scrambling logic: shift each character by an amount
    # derived from the key. Replace this with your own algorithm.
    scramble_key = 'MyScramblingKey'
    if input_string is None:
        return None
    scrambled = ''
    for i, ch in enumerate(input_string):
        shift = ord(scramble_key[i % len(scramble_key)])
        # Keep the output within printable ASCII
        scrambled += chr((ord(ch) - 32 + shift) % 95 + 32)
    return scrambled
$$ LANGUAGE plpythonu;

Here, we create a UDF named <inlineCode>scramble</inlineCode> that takes a string input and returns the scrambled output. The function is defined as <inlineCode>STABLE</inlineCode>, meaning it returns the same result for the same input, which is important for data scrambling. Note that Redshift UDFs are written in SQL or Python (<inlineCode>plpythonu</inlineCode>), not PL/pgSQL; the character-shifting logic above is only a placeholder for your own scrambling logic.
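
Before applying the function to a table, you can sanity-check it on a literal value:

SELECT scramble('123-45-6789');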

Step 4: Apply the UDF to sensitive columns

Apply the UDF to the sensitive columns in the database that need to be scrambled.


UPDATE employee SET ssn = scramble(ssn);

Here, we apply the <inlineCode>scramble</inlineCode> UDF to a column named <inlineCode>ssn</inlineCode> in a table named <inlineCode>employee</inlineCode>. The <inlineCode>UPDATE</inlineCode> statement calls the <inlineCode>scramble</inlineCode> UDF and overwrites the values in the <inlineCode>ssn</inlineCode> column with the scrambled values.

Step 5: Test and validate the scrambled data

Test and validate the scrambled data to ensure that it is unreadable and unusable by unauthorized parties.


SELECT ssn, scramble(ssn) AS scrambled_ssn
FROM employee;

In this snippet, we run a <inlineCode>SELECT</inlineCode> statement to retrieve the <inlineCode>ssn</inlineCode> column alongside the corresponding value returned by the <inlineCode>scramble</inlineCode> UDF. We can compare the original and scrambled values to ensure that the scrambling works as expected.

Step 6: Monitor and maintain the scrambled data

To monitor and maintain the scrambled data, regularly check the sensitive columns to ensure that they are still scrambled and that there are no vulnerabilities or breaches. Also maintain the scrambling key and UDF to keep them up to date and effective.
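
For example, a simple check (assuming SSNs follow the usual ###-##-#### pattern) can flag rows that appear to have slipped through unscrambled:

SELECT COUNT(*) AS suspicious_rows
FROM employee
WHERE ssn ~ '^[0-9]{3}-[0-9]{2}-[0-9]{4}$';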

Different Options for Scrambling Data in Redshift

Selecting a data scrambling technique involves balancing security levels, data sensitivity, and application requirements. Various general algorithms exist, each with unique pros and cons. To scramble data in Amazon Redshift, you can use the following Python code samples together with a library like psycopg2 to interact with your Redshift cluster. Before running the samples, install the psycopg2 library:


pip install psycopg2

Random

Utilizing a random number generator, the Random option quickly obscures data, but its robustness is limited: the output preserves the original value's length, and because the replacement is random, the original data cannot be recovered if it is ever needed.


import random
import string
import psycopg2

def random_scramble(data):
    # Replace every character with a random letter or digit
    return ''.join(random.choice(string.ascii_letters + string.digits) for _ in data)

# Connect to your Redshift cluster
conn = psycopg2.connect(host='your_host', port='your_port', dbname='your_dbname',
                        user='your_user', password='your_password')
cursor = conn.cursor()

# Fetch data from your table
cursor.execute("SELECT sensitive_column FROM your_table;")
rows = cursor.fetchall()

# Pair each scrambled value with the original for the UPDATE
params = [(random_scramble(row[0]), row[0]) for row in rows]

# Update the data in the table
cursor.executemany(
    "UPDATE your_table SET sensitive_column = %s WHERE sensitive_column = %s;",
    params,
)
conn.commit()

# Close the connection
cursor.close()
conn.close()

Shuffle

The Shuffle option enhances security by rearranging the characters of each value. Although it is harder to reverse-engineer than simple substitution, it remains prone to brute-force attacks.


import random
import psycopg2

def shuffle_scramble(data):
    # Randomly reorder the characters of the value
    data_list = list(data)
    random.shuffle(data_list)
    return ''.join(data_list)

conn = psycopg2.connect(host='your_host', port='your_port', dbname='your_dbname',
                        user='your_user', password='your_password')
cursor = conn.cursor()

cursor.execute("SELECT sensitive_column FROM your_table;")
rows = cursor.fetchall()

# Pair each shuffled value with the original for the UPDATE
params = [(shuffle_scramble(row[0]), row[0]) for row in rows]

cursor.executemany(
    "UPDATE your_table SET sensitive_column = %s WHERE sensitive_column = %s;",
    params,
)
conn.commit()

cursor.close()
conn.close()

Reversible

By scrambling characters in a way that can be reversed with a key, the Reversible method poses a greater challenge to attackers but is still vulnerable to brute-force attacks. We’ll use the Caesar cipher as an example.


import psycopg2

def caesar_cipher(data, key):
    # Shift alphabetic characters by the key; leave other characters as-is
    encrypted = ""
    for char in data:
        if char.isalpha():
            shift = key % 26
            if char.islower():
                encrypted += chr((ord(char) - 97 + shift) % 26 + 97)
            else:
                encrypted += chr((ord(char) - 65 + shift) % 26 + 65)
        else:
            encrypted += char
    return encrypted

conn = psycopg2.connect(host='your_host', port='your_port', dbname='your_dbname',
                        user='your_user', password='your_password')
cursor = conn.cursor()

cursor.execute("SELECT sensitive_column FROM your_table;")
rows = cursor.fetchall()

key = 5
# Pair each encrypted value with the original for the UPDATE
params = [(caesar_cipher(row[0], key), row[0]) for row in rows]
cursor.executemany(
    "UPDATE your_table SET sensitive_column = %s WHERE sensitive_column = %s;",
    params,
)
conn.commit()

cursor.close()
conn.close()

Custom

The Custom option enables users to create tailor-made algorithms to resist specific attack types, potentially offering superior security. However, the development and implementation of custom algorithms demand greater time and expertise.
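
As one sketch of a custom approach (illustrative, not a hardened design), a keyed HMAC scrambles values deterministically, so the same input always yields the same output, which preserves joins across tables; the function and key names here are examples:

import hashlib
import hmac

SECRET_KEY = b'MyScramblingKey'  # example key; manage it securely in practice

def hmac_scramble(data):
    # One-way keyed transform: stable for equal inputs,
    # infeasible to reverse without the key
    digest = hmac.new(SECRET_KEY, data.encode('utf-8'), hashlib.sha256)
    return digest.hexdigest()

print(hmac_scramble('123-45-6789'))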

Best Practices for Using Redshift Data Scrambling

There are several best practices that should be followed when using Redshift Data Scrambling to ensure maximum protection:

Use Unique Keys for Each Table

To ensure that the data is not compromised if one key is exposed, each table should be scrambled with its own unique key. One way to manage per-table keys (a sketch; the alias name is an example) is to give each table its own AWS KMS key alias:


aws kms create-alias \
    --alias-name alias/scramble-key-employee \
    --target-key-id your_key_id_here

Encrypt Sensitive Data Fields 

Sensitive data fields such as credit card numbers and social security numbers should be encrypted to provide an additional layer of security. Redshift does not provide a built-in column-level ENCRYPT SQL function, so field-level encryption is typically performed before the data is loaded, for example with AWS KMS. Here's a sketch of encrypting a credit card number with boto3 (the key ID is a placeholder):


import boto3

kms = boto3.client('kms')

response = kms.encrypt(
    KeyId='your_key_id_here',
    Plaintext=b'1234-5678-9012-3456',
)
ciphertext = response['CiphertextBlob']  # store this value instead of the plaintext

Use Strong Encryption Algorithms

Strong encryption algorithms such as AES-256 should be used to provide the strongest protection. Redshift encrypts data at rest with AES-256 when cluster encryption is enabled, and uses SSL/TLS for data in transit. Encryption at rest is configured at the cluster level rather than per column, for example when creating the cluster (identifiers and credentials are placeholders):


aws redshift create-cluster \
    --cluster-identifier my-cluster \
    --node-type ra3.xlplus \
    --number-of-nodes 2 \
    --master-username awsuser \
    --master-user-password 'YourPassword1' \
    --encrypted \
    --kms-key-id your_key_id_here

Control Access to Encryption Keys 

Access to encryption keys should be restricted to authorized personnel to prevent unauthorized access to sensitive data. You can achieve this by using AWS KMS (Key Management Service) to manage your encryption keys and granting only the minimum required operations. Here's an example of granting an IAM user decrypt-only access to a key using KMS in Python:


import boto3

kms = boto3.client('kms')

key_id = 'your_key_id_here'
grantee_principal = 'arn:aws:iam::123456789012:user/jane'

response = kms.create_grant(
    KeyId=key_id,
    GranteePrincipal=grantee_principal,
    Operations=['Decrypt']
)

print(response)
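
To confirm which grants exist on the key, you can list them:

response = kms.list_grants(KeyId=key_id)
for grant in response['Grants']:
    print(grant['GrantId'], grant['Operations'])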

Regularly Rotate Encryption Keys 

Regular rotation of encryption keys ensures that a compromised key does not grant indefinite access to sensitive data. In AWS KMS, automatic annual rotation is enabled directly on the key rather than through a key policy. Here's how to turn it on, and how to verify it, using the AWS CLI:


aws kms enable-key-rotation --key-id your_key_id_here
aws kms get-key-rotation-status --key-id your_key_id_here

Turn on logging 

To track user access to sensitive data and identify any unwanted access, logging must be enabled. When you activate query logging in Amazon Redshift, all SQL commands executed on your cluster are logged, including queries that access sensitive data and data-scrambling operations. You can then examine these logs for unusual access patterns or suspicious activity.

Query logging is controlled by the <inlineCode>enable_user_activity_logging</inlineCode> parameter in your cluster's parameter group rather than by a SQL statement. You can set it with the AWS CLI (the parameter group name is a placeholder):


aws redshift modify-cluster-parameter-group \
    --parameter-group-name your_parameter_group \
    --parameters ParameterName=enable_user_activity_logging,ParameterValue=true

Once query logging has been enabled, the <inlineCode>stl_query</inlineCode> system table can be used to retrieve the logs. For instance, the SQL query shown below displays all queries that touched a certain table:
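
SELECT query, starttime, querytxt
FROM stl_query
WHERE querytxt ILIKE '%employee%'
ORDER BY starttime DESC;

Here <inlineCode>employee</inlineCode> is an example table name; STL tables retain only a few days of history, so export the logs if you need longer retention.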

Monitor Performance 

Data scrambling is often a resource-intensive practice, so it’s good to monitor CPU usage, memory usage, and disk I/O to ensure your cluster isn’t being overloaded. In Redshift, you can use the <inlineCode>svl_query_summary</inlineCode> and <inlineCode>svl_query_report</inlineCode> system views to monitor query performance. You can also use Amazon CloudWatch to monitor metrics such as CPU utilization and disk space.
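
For example, this query (a sketch using <inlineCode>svl_query_summary</inlineCode> columns) surfaces the longest-running query steps and whether they spilled to disk:

SELECT query, maxtime, rows, is_diskbased
FROM svl_query_summary
ORDER BY maxtime DESC
LIMIT 10;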


Establishing Backup and Disaster Recovery

In order to prevent data loss in the case of a disaster, backup and disaster recovery mechanisms should be put in place. Amazon Redshift offers several backup and recovery methods, including automated backups and manual snapshots. By default, automated snapshots are taken roughly every eight hours or after every 5 GB per node of data changes.

Moreover, you can always take a manual snapshot of your cluster. In the case of a failure or disaster, your cluster can be restored from these backups and snapshots. Snapshots are managed through the console or the AWS CLI rather than SQL; use this command to take a manual snapshot:


aws redshift create-cluster-snapshot \
    --cluster-identifier your_cluster \
    --snapshot-identifier your_snapshot_name

To restore from a snapshot, you restore it into a new cluster. For example:


aws redshift restore-from-cluster-snapshot \
    --cluster-identifier new_cluster_name \
    --snapshot-identifier your_snapshot_name

Frequent Review and Updates

To ensure that data scrambling procedures remain effective and up-to-date with the latest security requirements, it is crucial to consistently review and update them. This process should include examining backup and recovery procedures, encryption techniques, and access controls.

In Amazon Redshift, you can assess access controls by inspecting users, groups, and their associated permissions in the <inlineCode>pg_user</inlineCode> and <inlineCode>pg_group</inlineCode> system catalog views. It is essential to confirm that only authorized individuals have access to sensitive information.
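
For example, to check superuser flags and group membership:

SELECT usename, usesuper FROM pg_user;
SELECT groname FROM pg_group;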

To analyze encryption and encoding settings, use the <inlineCode>pg_catalog.pg_attribute</inlineCode> system catalog table, which lets you inspect the data type and compression encoding of each column in your tables. Ensure that sensitive data fields are protected with robust encryption methods, such as AES-256.
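
The <inlineCode>pg_table_def</inlineCode> view offers a friendlier summary of each column's type and encoding (the table name is an example, and the table's schema must be on your <inlineCode>search_path</inlineCode>):

SELECT "column", type, encoding
FROM pg_table_def
WHERE tablename = 'employee';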

The AWS CLI commands <inlineCode>aws backup list-backup-plans</inlineCode> and <inlineCode>aws backup list-backup-vaults</inlineCode> let you review your backup plans and vaults and evaluate backup and recovery procedures. Make sure your backup and recovery procedures are properly configured and up to date.

Decrypting Data in Redshift

There are different options for decrypting data, depending on the encryption method used and the tools available. The decryption process mirrors encryption: typically, a custom UDF is used to decrypt the data. Let’s look at one example of reversing a substitution-cipher scramble.

Step 1: Create a UDF with decryption logic for substitution


CREATE FUNCTION decrypt_substitution(ciphertext varchar) RETURNS varchar
IMMUTABLE AS $$
    alphabet = 'abcdefghijklmnopqrstuvwxyz'
    substitution = 'ijklmnopqrstuvwxyzabcdefgh'
    plaintext = ''
    for ch in ciphertext:
        # Find where this character sits in the substitution alphabet
        index = substitution.find(ch)
        if index == -1:
            # Characters outside the cipher alphabet pass through unchanged
            plaintext += ch
        else:
            # Map back to the original alphabet
            plaintext += alphabet[index]
    return plaintext
$$ LANGUAGE plpythonu;

Step 2: Move the data back after truncating and applying the decryption function


TRUNCATE original_table;
INSERT INTO original_table (column1, decrypted_column2, column3)
SELECT column1, decrypt_substitution(encrypted_column2), column3
FROM temp_table;

In this example, encrypted_column2 is the encrypted version of column2 in the temp_table. The decrypt_substitution function is applied to encrypted_column2, and the result is inserted into the decrypted_column2 in the original_table. Make sure to replace column1, column2, and column3 with the appropriate column names, and adjust the INSERT INTO statement accordingly if you have more or fewer columns in your table.

Conclusion

Redshift data scrambling is an effective tool for additional data protection and should be considered part of an organization's overall data security strategy. In this blog post, we looked at the importance of data protection and how scrambling can be integrated effectively into the data warehouse. We then covered the difference between data scrambling and data masking before diving into how to set up Redshift data scrambling.

As you grow accustomed to Redshift data scrambling, you can strengthen your security with different scrambling techniques and best practices, including encryption, logging, and performance monitoring. By following these recommendations and adopting an efficient strategy, organizations can improve their data security posture management (DSPM) and reduce the risk of breaches.

<blogcta-big>

Veronica is a security researcher at Sentra. She brings a wealth of knowledge and experience as a cybersecurity researcher. Her main focus is researching major cloud provider services and AI infrastructures for data-related threats and techniques.
