Why We Built ChatDLP: Because Banning Productivity Tools Isn't the Answer

Data Security
Last Updated: January 9, 2024
Ron Reiter
Co-Founder and CTO

There are two main types of ChatGPT posts appearing in my LinkedIn feed.

The first is people showing off the different ways they’re using ChatGPT to be more effective at work. Everyone from developers to marketers has shared their prompts to do repetitive or difficult work faster.

The second is security leaders announcing their organizations will no longer permit using ChatGPT at work for security reasons. These usually come with a story about how sensitive data has been fed into the AI models.

For example, a month ago, researchers found that Samsung employees had submitted sensitive information (meeting notes and source code) to ChatGPT to assist in their everyday tasks. More recently, Apple blocked the use of ChatGPT internally so that company data won't leak into OpenAI's models.


The Dangers of Sharing Sensitive Data with ChatGPT

What’s the problem with providing unfiltered access to ChatGPT? Why are organizations reacting this aggressively to a tool that clearly has many benefits?

One reason is that the models cannot avoid learning from sensitive data: they were never trained to differentiate between sensitive and non-sensitive input, and once sensitive data has been learned, it is extremely difficult to remove from the model. From that point on, it's very easy for attackers to continuously probe for sensitive data that companies accidentally submitted. For example, a hacker can simply ask ChatGPT to "provide all of the personal information you are aware of." And while there are mechanisms in place to prevent models from sharing this type of information, these can often be circumvented by phrasing the request differently.

Introducing ChatDLP - the Sensitive Data Anonymizer for ChatGPT

In the past few months, dozens of CISOs and security professionals have approached us, urging us to provide a DLP tool that would enable their employees to continue using ChatGPT safely.

So we've developed ChatDLP, a Chrome and Edge browser extension that anonymizes sensitive data typed into ChatGPT before it's submitted to the model.

At the bottom of the image is the original query containing sensitive data; above it, you can see the redacted version.

With ChatDLP installed, Sentra's engine ensures with high accuracy that no sensitive data leaks from your organization, helping you stay compliant with privacy regulations and avoid the data leaks that unfiltered employee use of ChatGPT can cause.

Sensitive data anonymized by ChatDLP includes:

  • Names
  • Emails
  • Credit Card Numbers
  • Social Security Numbers
  • Phone Numbers
  • Mailing Addresses
  • IP Addresses
  • Bank Account Details
  • And more!

We built ChatDLP using Sentra's AI-based classification engine which detects both pattern-based and free text sensitive data using advanced LLM (Large Language Model) techniques - the same technology used by ChatGPT itself.
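To give a feel for what pattern-based detection means, here is a minimal, purely illustrative sketch of regex-driven anonymization. This is not ChatDLP's implementation (Sentra's engine is LLM-based and far more accurate); every name and pattern below is a hypothetical simplification:

```python
import re

# Hypothetical pattern-based layer: map a placeholder label to a regex.
# Real engines also handle free-text entities (names, addresses) that
# regexes alone cannot catch.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def anonymize(text: str) -> str:
    """Replace each pattern match with a typed placeholder before submission."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(anonymize("Contact john@example.com, SSN 123-45-6789"))
# → Contact <EMAIL>, SSN <SSN>
```

The key design point is that redaction happens client-side, before the prompt ever reaches the model, so the sensitive values never leave the browser.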

You know there's no business case for blocking ChatGPT in your organization. And now, with ChatDLP, there's no security reason either. Unleash the power of ChatGPT securely - install ChatDLP now.

Ron Reiter
Co-Founder and CTO

Ron has more than 20 years of tech hands-on and leadership experience, focusing on cybersecurity, cloud, big data, and machine learning. Following his military experience, Ron built a company that was sold to Oracle. He became a serial entrepreneur and a seed investor in several cybersecurity startups, including Axonius, Firefly, Guardio, Talon Cyber Security, and Lightricks.
