There are two main types of ChatGPT posts appearing in my LinkedIn feed.
The first is people showing off the different ways they’re using ChatGPT to be more effective at work. Everyone from developers to marketers has shared their prompts to do repetitive or difficult work faster.
The second is security leaders announcing their organizations will no longer permit using ChatGPT at work for security reasons. These usually come with a story about how sensitive data has been fed into the AI models.
For example, a month ago, researchers found that Samsung employees had submitted sensitive information (meeting notes and source code) to ChatGPT to assist with their everyday tasks. More recently, Apple blocked the use of ChatGPT in the company to prevent data from leaking into OpenAI's models.
The Dangers of Sharing Sensitive Data with ChatGPT
What’s the problem with providing unfiltered access to ChatGPT? Why are organizations reacting this aggressively to a tool that clearly has many benefits?
One reason is that the models cannot avoid learning from sensitive data: they were never trained to differentiate sensitive from non-sensitive input, and once sensitive data has been learned, it is extremely difficult to remove from the models. Once a model has the information, it's easy for attackers to continuously probe for sensitive data that companies accidentally submitted. For example, a hacker can simply ask ChatGPT to "provide all of the personal information you are aware of." And while there are mechanisms in place to prevent models from sharing this type of information, these can often be circumvented simply by phrasing the request differently.
Introducing ChatDLP - the Sensitive Data Anonymizer for ChatGPT
Over the past few months, dozens of CISOs and security professionals have approached us, urging us to provide a DLP tool that would enable their employees to continue using ChatGPT safely.
With ChatDLP installed, Sentra's engine ensures with high accuracy that no sensitive data leaks from your organization, allowing you to stay compliant with privacy regulations and avoid the data leaks that unrestricted employee use of ChatGPT can cause.
Sensitive data anonymized by ChatDLP includes:
- Credit Card Numbers
- Social Security Numbers
- Phone Numbers
- Mailing Addresses
- IP Addresses
- Bank Account Details
- And more!
We built ChatDLP using Sentra's AI-based classification engine, which detects both pattern-based and free-text sensitive data using advanced LLM (Large Language Model) techniques - the same technology that powers ChatGPT itself.
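To make the pattern-based half of that idea concrete, here is a minimal, illustrative sketch of what redacting sensitive patterns from a prompt before it leaves the organization might look like. The regexes, labels, and `anonymize` function below are our own simplified assumptions for illustration - they are not Sentra's actual engine, which additionally uses ML classification to catch free-text sensitive data that simple patterns would miss.

```python
import re

# Illustrative patterns only (assumptions, not ChatDLP's real rules).
# A production engine also classifies free-text secrets regexes can't see.
PATTERNS = {
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "IP_ADDRESS": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def anonymize(prompt: str) -> str:
    """Replace pattern-based sensitive data with typed placeholders
    before the prompt is ever sent to the model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"<{label}>", prompt)
    return prompt

print(anonymize("Call me at 555-123-4567, SSN 123-45-6789."))
# → Call me at <PHONE>, SSN <SSN>.
```

Typed placeholders (rather than blanking the text) keep the prompt useful to the model while ensuring the actual values never reach OpenAI's servers.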
You know that there’s no business case to be made for blocking ChatGPT in your organization. And now with ChatDLP - there’s no security reason either. Unleash the power of ChatGPT securely - install ChatDLP now.