
Data Leakage Detection for AWS Bedrock

July 15, 2024
4 Min Read
Data Security

Amazon Bedrock is a fully managed service that streamlines access to top-tier foundation models (FMs) from premier AI startups and Amazon, all through a single API. This service empowers users to leverage cutting-edge generative AI technologies by offering a diverse selection of high-performance FMs from innovators like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon itself. Amazon Bedrock allows for seamless experimentation and customization of these models to fit specific needs, employing techniques such as fine-tuning and Retrieval Augmented Generation (RAG).

 

Additionally, it supports the development of agents capable of performing tasks with enterprise systems and data sources. As a serverless offering, it removes the complexities of infrastructure management, ensuring secure and easy deployment of generative AI features within applications using familiar AWS services, all while maintaining robust security, privacy, and responsible AI standards.
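Because Bedrock exposes every hosted model through one API, switching providers mostly means changing the model ID and the provider-specific request body. A hedged sketch of what this looks like with boto3, where the model ID, region, and prompt are placeholders and the body schema shown is the one Anthropic models expect on Bedrock:

```python
import json

def build_claude_request(prompt: str, max_tokens: int = 256) -> str:
    """Build the JSON request body Bedrock expects for an Anthropic Claude
    model. Each provider on Bedrock defines its own body schema; this one
    follows Anthropic's messages format."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

# With AWS credentials configured, the invocation call itself is uniform
# across providers (region and model ID below are illustrative):
#
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   resp = client.invoke_model(
#       modelId="anthropic.claude-3-haiku-20240307-v1:0",
#       body=build_claude_request("Summarize our Q2 results."),
#   )
#   print(json.loads(resp["body"].read()))
```

The same `invoke_model` call would serve a Cohere or Meta model; only the body builder changes.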

Why Are Enterprises Using AWS Bedrock?

Enterprises are increasingly using AWS Bedrock for several key reasons:

  • Diverse Model Selection: Offers access to a curated selection of high-performing foundation models (FMs) from both leading AI startups and Amazon itself, providing a comprehensive range of options to suit various use cases and preferences. This diversity allows enterprises to select the most suitable models for their specific needs, whether they require language generation, image processing, or other AI capabilities.
  • Streamlined Integration: Simplifies the process of adopting and integrating generative AI technologies into existing systems and applications. With its unified API and serverless architecture, enterprises can seamlessly incorporate these advanced AI capabilities without the need for extensive infrastructure management or specialized expertise. This streamlines the development and deployment process, enabling faster time-to-market for AI-powered solutions.
  • Customization Capabilities: Facilitates experimentation and customization, allowing enterprises to fine-tune and adapt the selected models to better align with their unique requirements and data environments. Techniques such as fine-tuning and Retrieval Augmented Generation (RAG) enable enterprises to refine the performance and accuracy of the models, ensuring optimal results for their specific use cases.
  • Security and Compliance Focus: Prioritizes security, privacy, and responsible AI practices, providing enterprises with the confidence that their data and AI deployments are protected and compliant with regulatory standards. By leveraging AWS's robust security infrastructure and compliance measures, enterprises can deploy generative AI applications with peace of mind.

AWS Bedrock Data Privacy & Security Concerns

The rise of AI technologies, while promising transformative benefits, also introduces significant security risks. As enterprises increasingly integrate AI services such as AWS Bedrock into their operations, they face challenges related to data privacy, model integrity, and ethical use. AI systems, particularly those involving generative models, can be susceptible to adversarial attacks, unintended data extraction, and bias, which can lead to compromised data security and regulatory violations.

Training Data Concerns

Training data is the backbone of machine learning and artificial intelligence systems. The quality, diversity, and integrity of this data are critical for building robust models. However, there are significant risks associated with inadvertently using sensitive data in training datasets, as well as the unintended retrieval and leakage of such data. 

These risks can have severe consequences, including breaches of privacy, legal repercussions, and erosion of public trust.

Accidental Usage of Sensitive Data in Training Sets

Inadvertently including sensitive data in training datasets can occur for various reasons, such as insufficient data vetting, poor anonymization practices, or errors in data aggregation. Sensitive data may encompass personally identifiable information (PII), financial records, health information, intellectual property, and more.

 

The consequences of training models on such data are multifaceted:

  • Data Privacy Violations: When models are trained on sensitive data, they might inadvertently learn and reproduce patterns that reveal private information. This can lead to direct privacy breaches if the model outputs or intermediate states expose this data.
  • Regulatory Non-Compliance: Many jurisdictions have stringent regulations regarding the handling and processing of sensitive data, such as GDPR in the EU, HIPAA in the US, and others. Accidental inclusion of sensitive data in training sets can result in non-compliance, leading to heavy fines and legal actions.
  • Bias and Ethical Concerns: Sensitive data, if not properly anonymized or aggregated, can introduce biases into the model. For instance, using demographic data can inadvertently lead to models that discriminate against certain groups.

These risks require strong security measures and responsible AI practices to protect sensitive information and comply with industry standards. AWS Bedrock provides a ready solution to power foundation models, and Sentra provides a complementary solution to ensure the compliance and integrity of the data these models use and output. Let’s explore how this combination works and how each component delivers its respective capability.

Prompt Response Monitoring With Sentra

Sentra can detect sensitive data leakage in near real-time by scanning and classifying all prompt responses generated by AWS Bedrock, analyzing them with Sentra’s Data Detection and Response (DDR) security module.

Data exfiltration might occur if AWS Bedrock prompt responses are used to return data outside of an organization, for example via a chatbot interface connected directly to a user-facing application.

By analyzing the prompt responses, Sentra can ensure that both sensitive data acquired through fine-tuning models and data retrieved using Retrieval-Augmented Generation (RAG) methods are protected. This protection is effective within minutes of any data exfiltration attempt.

To activate the detection module, there are three prerequisites:

  1. The customer should enable AWS Bedrock Model Invocation Logging to an S3 destination (instructions here) in the customer environment.
  2. A new Sentra tenant should be created and set up for the customer.
  3. The customer should install the Sentra copy Lambda using Sentra’s CloudFormation template for its DDR module (documentation provided by Sentra).
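For the first prerequisite, model invocation logging is configured through Bedrock's management API. A hedged sketch of the logging configuration shape, assuming the `PutModelInvocationLoggingConfiguration` API and with a placeholder bucket name (the actual call, shown in the comment, requires boto3 and AWS credentials):

```python
def invocation_logging_config(bucket: str, prefix: str = "bedrock-logs/") -> dict:
    """Shape of the loggingConfig payload that directs Bedrock model
    invocation logs (prompts and responses) to an S3 destination.
    Bucket name and key prefix here are placeholders."""
    return {
        "s3Config": {"bucketName": bucket, "keyPrefix": prefix},
        "textDataDeliveryEnabled": True,       # log text prompts/responses
        "imageDataDeliveryEnabled": False,
        "embeddingDataDeliveryEnabled": False,
    }

# With boto3 and credentials configured, enabling it would look like:
#
#   import boto3
#   bedrock = boto3.client("bedrock", region_name="us-east-1")
#   bedrock.put_model_invocation_logging_configuration(
#       loggingConfig=invocation_logging_config("my-bedrock-invocation-logs")
#   )
```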

Once the prerequisites are fulfilled, Sentra will automatically analyze the prompt responses and will be able to provide real-time security threat alerts based on the defined set of policies configured for the customer at Sentra.

Here is the full flow which describes how Sentra scans the prompts in near real-time:

  1. Sentra’s setup uses an AWS Lambda function to handle new files uploaded to the S3 bucket configured in the customer’s cloud, which logs all responses from AWS Bedrock prompts. When a new file arrives, the Lambda function copies it into Sentra’s prompt response buckets.
  2. Next, another S3 trigger kicks off enrichment of each response with extra details needed for detecting sensitive information.
  3. Our real-time data classification engine then gets to work, sorting the data from the responses into categories like emails, phone numbers, names, addresses, and credit card info. It also identifies the context, such as intellectual property or customer data.
  4. Finally, Sentra uses this classified information to spot any sensitive data. We then generate an alert and notify our customers, also sending the alert to any relevant downstream systems.
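Step 1 of the flow above, the S3-triggered copy, can be sketched as a small Lambda handler. This is an illustrative reconstruction, not Sentra's actual implementation; the destination bucket name is a placeholder, and the S3 client is injected as a parameter so the copy logic can be exercised without AWS credentials (a real Lambda handler takes `(event, context)` and creates the client with `boto3.client("s3")` at module scope):

```python
import urllib.parse

DEST_BUCKET = "sentra-prompt-responses"  # placeholder bucket name

def handler(event, s3_client):
    """Copy each newly created invocation-log object referenced in the
    S3 event notification into the analysis bucket."""
    copied = []
    for record in event["Records"]:
        src_bucket = record["s3"]["bucket"]["name"]
        # S3 event keys are URL-encoded; decode before using them.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        s3_client.copy_object(
            Bucket=DEST_BUCKET,
            Key=key,
            CopySource={"Bucket": src_bucket, "Key": key},
        )
        copied.append(key)
    return copied
```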
[Diagram: data flow from the customer’s AWS cloud to Sentra]

Sentra can push these alerts downstream into 3rd party systems, such as SIEMs, SOARs, ticketing systems, and messaging systems (Slack, Teams, etc.).

Sentra’s data classification engine provides three methods of classification:

  • Regular expressions
  • List classifiers
  • AI models

Further, Sentra allows customers to add their own classifiers for business-specific needs, on top of the 150+ data classifiers Sentra provides out of the box.
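To make the first method concrete, here is a minimal sketch of regular-expression classification: each classifier pairs a label with a pattern, and a response is tagged with every label that matches. The patterns are simplified illustrations, not Sentra's production classifiers:

```python
import re

# Simplified illustrative patterns; production classifiers are far stricter
# (checksum validation for cards, context windows, etc.).
CLASSIFIERS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> dict:
    """Return every classifier label that matched, with the matched spans."""
    hits = {}
    for label, pattern in CLASSIFIERS.items():
        matches = pattern.findall(text)
        if matches:
            hits[label] = matches
    return hits
```

List classifiers work the same way with exact term lookups, while the AI-model method replaces the pattern match with a trained classifier's prediction.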

Sentra’s sensitive data detection also provides a control for setting a threshold on the amount of sensitive data exfiltrated through Bedrock over time (similar to a rate limit), reducing false positives for non-critical exfiltration events.
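The rate-limit-style threshold can be sketched as a sliding window: alert only when the total sensitive matches seen within the window exceed a configured limit. This is an illustrative sketch of the idea, not Sentra's policy engine:

```python
import time
from collections import deque
from typing import Optional

class ExfilThreshold:
    """Fire an alert only when sensitive matches within a sliding time
    window exceed a configured limit (rate-limit-style thresholding)."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.events = deque()  # (timestamp, match_count) pairs

    def record(self, count: int, now: Optional[float] = None) -> bool:
        """Record `count` sensitive matches; return True if the window
        total now exceeds the limit and an alert should fire."""
        now = time.time() if now is None else now
        self.events.append((now, count))
        # Drop events that have aged out of the sliding window.
        while self.events and self.events[0][0] < now - self.window:
            self.events.popleft()
        return sum(c for _, c in self.events) > self.limit
```

A low-volume trickle of matches stays below the limit, while a burst within the window trips the alert.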

Example threat: sensitive customer data found in an Amazon Bedrock response

Conclusion

There is a pressing push for AI integration and automation to enable businesses to improve agility, meet growing cloud service and application demands, and deliver better user experiences, all while minimizing risk. Early warning of potential sensitive data leakage or breach is critical to achieving this goal.

Sentra's data security platform can be used across the entire development pipeline to classify, test, and verify that models do not leak sensitive information, serving developers while also helping them build confidence among their buyers. By adopting Sentra, organizations gain the ability to build out automation for business responsiveness and improved experiences, confident that their most important asset, their data, will remain secure.

If you want to learn more, request a live demo with our data security experts.


Discover Ron’s expertise, shaped by over 20 years of hands-on tech and leadership experience in cybersecurity, cloud, big data, and machine learning. As a serial entrepreneur and seed investor, Ron has contributed to the success of several startups, including Axonius, Firefly, Guardio, Talon Cyber Security, and Lightricks, after founding a company acquired by Oracle.


Latest Blog Posts

Mark Kiley
April 6, 2026
3 Min Read

North Carolina Data Breach Notification Law: Requirements, Timelines, and Checklist for 2026

North Carolina has been ahead of the curve on breach notification. Its Identity Theft Protection Act (N.C. Gen. Stat. Chapter 75, Article 2A) sets clear requirements for how quickly organizations must notify residents and the Attorney General when personal information is exposed in a security incident.

For security and compliance leaders operating in or with NC, the big challenge isn’t just understanding the law on paper; it’s being able to answer, with evidence, exactly what data was exposed, where it lived, and who was affected when an incident hits.

This guide breaks down:

  • Who the NC breach law applies to
  • What “personal information” means under NC law
  • What counts as a security breach
  • Notification requirements and timelines
  • A practical checklist to operationalize NC breach readiness
  • How Data Security Posture Management (DSPM) makes this manageable at cloud scale

Who the North Carolina breach law applies to

North Carolina’s Identity Theft Protection Act applies broadly to any business that owns or licenses personal information of NC residents or conducts business in NC and holds personal information, whether computerized or not.

That includes:

  • NC‑headquartered organizations
  • Out‑of‑state organizations holding NC residents’ personal data
  • Both private sector and, for certain provisions, state and local agencies

If your organization stores customer, employee, or patient data for NC residents—especially in healthcare, financial services, insurance, education, retail, or SaaS—you should assume the law applies.

What “personal information” means in North Carolina

Under N.C. Gen. Stat. § 75‑61 and § 75‑65, “personal information” (PI) is defined as a person’s first name or first initial and last name in combination with any one of several sensitive data elements, when that data is not encrypted or redacted.

Common examples include:

  • Social Security numbers
  • Driver’s license, state ID, or passport numbers
  • Financial account numbers, credit or debit card numbers, plus any required security code, access code, or password for the account
  • Biometric data and other unique identifiers that can be used to access financial resources or uniquely identify an individual

Certain electronic identifiers (like usernames, email addresses, or internet account numbers) can also qualify as PI if they would permit access to a financial account or resources when combined with a password or other credentials.

For security teams, the takeaway is straightforward but difficult in practice: anything that can be used to impersonate, financially exploit, or uniquely identify an NC resident must be treated as regulated data.

What counts as a “security breach” in NC?

North Carolina defines a “security breach” as an incident of unauthorized access to and acquisition of unencrypted and unredacted records or data containing personal information where illegal use has occurred or is reasonably likely to occur, or that creates a material risk of harm to a consumer.

A few important nuances:

  • Good‑faith access by employees or agents is not a breach, as long as the information is used only for legitimate business purposes and is not subject to further unauthorized disclosure.
  • Encrypted data is generally not considered breached unless the encryption keys or confidential process needed to unlock the data are also compromised.
  • North Carolina guidance explicitly recognizes identity theft and financial harm as key risk factors when determining whether notice is required.

In practice, many organizations err on the side of treating any credible unauthorized access to PI as a potential breach until a risk assessment proves otherwise.

Notification requirements and timelines

Once your organization discovers or is notified of a breach involving NC residents’ PI, several notification obligations may apply.

1. Notice to affected individuals

Businesses must notify affected NC residents “without unreasonable delay” after discovery of the breach, taking into account law enforcement needs and time to determine the scope of the breach and restore system integrity.

The notice must be clear and conspicuous and include at least:

  • A general description of the incident
  • The type of personal information involved
  • A description of measures taken to protect the information from further unauthorized access
  • A contact telephone number for more information
  • Advice to review account statements and monitor free credit reports
  • Contact details for the major consumer reporting agencies, the Federal Trade Commission, and the NC Attorney General’s Office

Notice can be provided by:

  • Written notice
  • Electronic notice (if the consumer has agreed to electronic communications)
  • Telephonic notice
  • Substitute notice (email + prominent website posting + statewide media) if costs or scale exceed statutory thresholds.

2. Notice to the North Carolina Attorney General

If a business provides notice to affected individuals, it must also notify the Consumer Protection Division of the NC Attorney General’s Office without unreasonable delay.

That notice must describe:

  • The nature of the breach
  • The number of NC consumers affected
  • Steps taken to investigate the breach and prevent future incidents
  • Timing, distribution, and content of consumer notices

The NC Department of Justice maintains guidance and contact information on its Security Breach Information page.

3. Notice to consumer reporting agencies

If you notify more than 1,000 individuals at one time, you must also notify all nationwide consumer reporting agencies of the timing, distribution, and content of the consumer notice, without unreasonable delay.

Penalties and enforcement

A violation of North Carolina’s breach notification requirements is considered an unfair or deceptive trade practice under N.C. Gen. Stat. § 75‑1.1, enforced by the Attorney General.

Key points:

  • The AG can seek injunctive relief, civil penalties, and other remedies.
  • Individuals may have a private right of action if they are injured as a result of the violation.
  • Repeated or willful noncompliance can significantly increase exposure, especially if regulators view your security practices as unreasonable given your size and risk profile.

For many boards and CISOs, the reputational damage and downstream regulatory scrutiny from a mishandled NC breach can matter as much as direct financial penalties.

Why NC breach readiness is hard in 2026

On paper, NC’s requirements look straightforward: discover breach → determine scope → notify affected people and regulators promptly. The complexity comes from the “determine scope” step:

  • Cloud sprawl: Sensitive data sprawls across object storage (e.g., S3, GCS, Azure Blob), data warehouses, SaaS apps, and backups.
  • Shadow and legacy data: Old exports, test copies, and forgotten file shares often have the most complete—and poorly protected—PII sets.
  • Multi‑cloud and hybrid: Different platforms expose different telemetry; correlating it to “which NC residents were affected?” can take weeks.
  • AI and unstructured data: Chat logs, support transcripts, and AI training sets now routinely contain PI but are rarely tracked like systems of record.

Without always‑on, accurate visibility into where personal data lives and how it’s exposed, NC’s expectation of “without unreasonable delay” can collide with your ability to answer basic questions:

  • Which datasets in the affected environment contained NC residents’ PI?
  • Exactly what types of PI were present (SSNs, account numbers, health data)?
  • Who had access, and were they over‑permissioned?

This is where Data Security Posture Management (DSPM) becomes a practical foundation rather than a buzzword.

How DSPM helps operationalize North Carolina breach requirements

Data Security Posture Management (DSPM) focuses on continuously discovering, classifying, and assessing the risk posture of sensitive data—wherever it lives across cloud, SaaS, and hybrid environments.

A mature DSPM program gives NC‑regulated organizations the ability to:

  1. Maintain a live inventory of NC residents’ PI
    • Automatically discover data stores containing PI across cloud, SaaS, and on‑prem.
    • Classify data as PII/PHI/PCI, tagged by geography or residency where possible.
    • See at a glance which systems hold North Carolina‑resident data and what types.

    → Learn more: Data Security Posture Management (DSPM)

  2. Assess exposure and “material risk of harm” quickly
    • Understand whether affected datasets were encrypted and how keys are managed (critical under NC’s definition of breach).
    • See who had effective access to PI (including service accounts and AI agents), not just theoretical permissions.
    • Identify misconfigurations like public buckets, overly broad access policies, or data in high‑risk regions.

    → Related reading: Cloud Data Security Means Shrinking the Data Attack Surface

  3. Accelerate incident scoping and notification decisions
    • When a storage location, SaaS tenant, or account is compromised, instantly surface:
      • Which tables/buckets/files contained NC‑defined personal information
      • How many unique NC residents were likely impacted
      • Whether encryption, masking, or tokenization meaningfully reduced risk
    • Use this as the factual backbone of your AG and consumer notifications.

    → Related use case: Keep Your Cloud Data Compliant

  4. Continuously reduce breach blast radius in NC and beyond
    • Proactively remove ROT (redundant, obsolete, trivial) data and risky legacy copies that amplify NC breach scope.
    • Automate remediation workflows—tightening access, encrypting high‑risk data stores, and enforcing retention policies.
    • Generate evidence for audits and regulator inquiries about your ongoing data protection program.

    → Deep dive: Manage Data Security and Compliance Risks with DSPM

A practical NC breach‑readiness checklist

To align with North Carolina’s data breach law and make incident response defensible:

  1. Map your NC footprint

    • Identify which systems hold NC residents’ PI and tag them accordingly in your asset inventory/DSPM.
  2. Deploy continuous discovery and classification

    • Move from annual spreadsheets to ongoing, automated detection of PI across cloud, SaaS, and on‑prem data stores.
  3. Define “material risk of harm” criteria

    • Involve legal, compliance, and security to define when access to PI triggers NC notification duties, incorporating encryption and key‑management posture.
  4. Pre‑draft NC‑specific notification templates

    • Include all NC‑required content elements and keep them updated with current AG and FTC contact information.
  5. Integrate DSPM findings with IR playbooks

    • Ensure your incident response runbooks explicitly call DSPM to:
      • Enumerate affected data stores
      • Classify PI types
      • Estimate affected NC residents
  6. Test NC‑specific tabletop exercises

    • Run at least one scenario per year involving NC residents’ data, including simulated AG notification, CRA notification, and evidence collection.
  7. Document your “no‑notice” decisions

    • If you determine a particular incident does not require notice under NC law, document the risk assessment and supporting data (encryption status, access logs, etc.) and retain it.
→ NC DOJ guidance: Security Breach Information – NC DOJ
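Checklist step 5, wiring DSPM findings into IR playbooks, can be sketched as a scoping helper that an incident runbook calls. The `DataStore` record and `scope_breach` function here are purely hypothetical illustrations of the idea; Sentra's real API differs:

```python
from dataclasses import dataclass

@dataclass
class DataStore:
    """Hypothetical DSPM inventory record for one data store."""
    name: str
    pi_types: set            # e.g. {"ssn", "account_number"}
    nc_resident_count: int   # estimated unique NC residents with PI here
    encrypted: bool

def scope_breach(inventory, affected_names):
    """Summarize NC-notification-relevant facts for the stores inside
    an incident's blast radius: PI types, resident estimate, encryption."""
    hit = [s for s in inventory if s.name in affected_names]
    return {
        "affected_stores": sorted(s.name for s in hit),
        "pi_types": set().union(*(s.pi_types for s in hit)) if hit else set(),
        "estimated_nc_residents": sum(s.nc_resident_count for s in hit),
        "all_encrypted": all(s.encrypted for s in hit),
    }
```

The output maps directly onto the three scoping questions in the checklist: which stores held NC-defined PI, roughly how many residents were affected, and whether encryption mitigates the notification duty.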

Ready to see your North Carolina breach exposure in real time? Request a Sentra demo.

Mark Kiley
April 1, 2026
5 Min Read

HIPAA + North Carolina Identity Theft Protection Act: A Data Security Guide for Hospitals and Health Systems

Quick refresher: HIPAA Breach Notification Rule

Under HIPAA, a breach is “the acquisition, access, use, or disclosure of unsecured PHI in a manner not permitted” by the Privacy Rule, unless a documented risk assessment shows a low probability that the PHI has been compromised.

Key HIPAA breach notification requirements (at a high level):

  • To affected individuals: Without unreasonable delay and no later than 60 days after discovery
  • To HHS (OCR):
    • For breaches affecting 500+ individuals in a state: contemporaneously with individual notice
    • For smaller breaches: annually, within 60 days of the end of the calendar year
  • To the media: For breaches affecting 500+ residents of a state or jurisdiction

HIPAA is focused specifically on PHI: information related to an individual’s health status, provision of care, or payment for care that can identify the individual.

North Carolina’s Identity Theft Protection Act for healthcare

North Carolina’s Identity Theft Protection Act requires any business that owns or licenses NC residents’ personal information, including hospitals and health systems, to notify affected individuals, and in many cases the Attorney General and consumer reporting agencies, after security breaches involving “personal information.”

What counts as “personal information” in NC

The Act defines “personal information” as a person’s first name or first initial and last name plus any one of several sensitive data elements, when not encrypted or redacted. For healthcare providers, that can include:

  • Social Security numbers (often present in registration and billing)
  • Driver’s license or state ID numbers
  • Financial account or payment card numbers with any required codes or passwords
  • Health insurance policy numbers or other unique identifiers used by a health insurer
  • Biometric data and other identifiers that can be used to access financial accounts or uniquely identify an individual

Crucially, NC “personal information” is not limited to PHI. It picks up employee PII, guarantor or subscriber information, and login credentials for portals and billing systems that might fall outside HIPAA’s PHI definition.

What NC considers a “security breach”

A “security breach” under N.C. Gen. Stat. § 75‑65 means unauthorized access to and acquisition of unencrypted and unredacted data containing personal information where illegal use has occurred or is reasonably likely to occur, or that creates a material risk of harm to a consumer.

  • Good‑faith access by an employee or agent is not a breach, as long as the information is used only for legitimate purposes and not further disclosed.
  • Encrypted data generally does not trigger notice unless the keys or process to decrypt are also compromised.

The NC Department of Justice offers additional guidance and emphasizes prompt notice and risk‑based assessment of harm.

HIPAA vs. NC Identity Theft Protection Act: Where they overlap and differ

For hospitals and health systems, HIPAA and NC law often apply at the same time—but they do not cover exactly the same datasets or impose identical obligations.

When both laws apply

Both HIPAA and NC law will typically apply when:

  • PHI of North Carolina residents is exposed in a way that meets each law’s definition of “breach” or “security breach”; and
  • The data is unsecured (e.g., unencrypted PHI or keys compromised) and there is a realistic risk of misuse.

In these scenarios, you’ll need to:

  • Conduct a HIPAA risk assessment of compromise
  • Assess material risk of harm under NC law
  • Issue timely notices that satisfy both HIPAA and NC content/timing requirements

Because HIPAA allows up to 60 days, while NC expects notice “without unreasonable delay” after discovery (subject to law enforcement delay and scoping needs), the stricter timeline will often be driven by your ability to determine the scope of affected NC residents and data types.

Where NC reaches further than HIPAA

NC’s Identity Theft Protection Act covers several scenarios HIPAA alone might not fully address:

  1. Employee and non‑patient PII
    • Employee payroll and HR records, including SSNs, DL numbers, and bank information
    • Volunteer and contractor data used for background checks or credentialing
  2. Patient‑adjacent financial and identity data
    • Guarantor and subscriber information that may be outside your designated record set
    • Payment card and bank data tied to hospital billing systems
  3. Credentials and portal access
    • Patient portal usernames and passwords
    • Staff credentials or MFA secrets that can be used to access systems containing PI or PHI
  4. Non‑PHI systems still holding NC personal information
    • Legacy billing, call center, or marketing platforms
    • Shadow IT and SaaS apps adopted by specific departments

Where HIPAA may focus your teams on clinical systems and PHI, NC law forces you to widen the lens to all personal information you hold about NC residents—across clinical, financial, HR, and digital engagement ecosystems.

Practical implications for NC hospitals and health systems

Taken together, HIPAA and NC breach law create three core operational challenges:

  1. You must know where NC residents’ PHI and PII actually live
    • EHR and core clinical systems are just the start.
    • PHI and NC “personal information” frequently spill into:
      • Data warehouses and analytics platforms
      • Imaging archives, document management, and fax servers
      • Email, file‑sharing, and collaboration tools (e.g., M365, Google Workspace)
      • AI‑related logs and training data (chatbots, scribes, coding assistants)
  2. You must be able to rapidly scope “who was affected and how”
    • For NC residents specifically, you need to answer:
      • Which datasets in the compromised environment held NC‑defined personal information?
      • Were those data encrypted, masked, or tokenized—and were the keys safe?
      • How many distinct NC residents were affected and what types of data were involved (PHI vs financial vs credentials)?
  3. You must manage multiple, overlapping clocks and audiences
    • HIPAA’s 60‑day clock
    • NC’s “without unreasonable delay” expectation for residents and the Attorney General
    • Potential media and CRA notifications (HIPAA for large breaches; NC for >1,000 individuals via credit bureaus)

Without a unified, data‑centric view, most health systems are left stitching together EHR logs, DLP alerts, and manual exports to approximate impact—burning precious weeks while both clocks are running.

Why DSPM is becoming foundational for HIPAA + NC compliance

Data Security Posture Management (DSPM) is emerging as the foundation for modern healthcare data security because it focuses on what HIPAA and NC regulators ultimately care about: what sensitive data you have, where it lives, how it’s protected, and who can get to it.

A mature DSPM platform should enable hospitals and health systems to:

1. Continuously discover and classify PHI + NC personal information

  • Agentless connections into cloud storage, data warehouses, M365, and SaaS, as well as on‑prem file shares and databases.
  • Accurate classification for:
    • PHI (clinical notes, lab results, imaging reports)
    • Financial identifiers (account numbers, payment cards, insurance IDs)
    • Identity data (SSNs, DL numbers, biometrics)
    • Credentials and secrets present in logs or unstructured content

→ Learn more: Data Security Posture Management (DSPM)

2. Map effective access and exposure, not just where data sits

  • Understand who actually has access to PHI and NC personal information—including clinicians, back‑office staff, vendors, and AI agents—across all environments.
  • Highlight over‑permissioned roles, stale accounts, and risky sharing patterns that increase breach scope before incidents occur.

→ Related: One Platform to Secure All Data: Moving from Data Discovery to Full Data Access Governance

3. Accelerate HIPAA and NC breach scoping

When an account, bucket, VM, or SaaS tenant is compromised, DSPM should make it possible to:

  • Instantly see which data stores in that blast radius contain PHI or NC personal information
  • Break down data types by regulation (HIPAA PHI, NC PI, PCI, etc.)
  • Estimate unique NC residents impacted and the kinds of harm they may face (identity theft, financial fraud, clinical privacy)

This enables coordinated notifications that satisfy:

  • HIPAA (OCR, media, and affected individuals)
  • North Carolina (residents, Attorney General, and credit bureaus where applicable)

→ Deep dive: Manage Data Security and Compliance Risks with DSPM

4. Proactively shrink breach impact before it happens

Finally, DSPM isn’t just for incident response. For NC hospitals, it should support:

  • Data minimization: Identifying redundant or obsolete PHI and PII, especially in analytics sandboxes, exports, and backups
  • Stronger encryption coverage: Ensuring sensitive records are encrypted at rest and in transit, with keys managed in line with both HIPAA security and NC expectations around encryption and “unusable” data.
  • Least‑privilege access: Systematically tightening access to sensitive datasets—particularly those combining PHI and NC‑defined personal information—so any single incident affects fewer people.

→ Related reading: Cloud Data Security Means Shrinking the Data Attack Surface

A unified playbook for HIPAA and North Carolina breach readiness

For NC hospitals and health systems, a pragmatic approach looks like this:

  1. Inventory your regulated data universe
    • PHI (HIPAA) and NC‑defined personal information across clinical, financial, HR, and digital systems.
  2. Deploy continuous DSPM across cloud, SaaS, and on‑prem
    • Move from point‑in‑time questionnaires and manual spreadsheets to always‑on discovery and classification.
  3. Align your HIPAA risk assessment and NC “material harm” criteria
    • Use shared evidence (classification, encryption posture, access analytics) to drive consistent decisions.
  4. Update incident response plans to include NC‑specific steps
    • Explicit branches for: notifying NC residents, the NC Attorney General, and relevant consumer reporting agencies.
  5. Run joint table‑tops (HIPAA + NC)
    • Simulate a multi‑system breach impacting NC residents and walk through every step from detection to notification.
  6. Measure and improve over time
    • Track metrics like “time to scope affected datasets” and “time to identify affected NC residents” as core readiness KPIs.

By embedding a data‑centric security posture—supported by DSPM—into daily operations, NC hospitals can turn overlapping HIPAA and state obligations from a scramble into a repeatable, defensible process.

See how leading health systems are unifying HIPAA and NC breach readiness with DSPM.

Get a live walkthrough of how Sentra discovers PHI and NC‑defined personal information across EHR, cloud, and SaaS—and how it accelerates incident scoping and notification. Request a Sentra demo.

Alejandro Hernández
March 23, 2026
5 Min Read

Sentra MCP Server: AI-Driven Data Security Operations

The Gap Between Seeing and Doing

Data Security Posture Management has delivered on its promise of visibility. Organizations know where their sensitive data lives, which stores are misconfigured, and how many identities can reach their crown jewels. But a fundamental gap remains: the distance between seeing a security problem and resolving it is still measured in manual steps, context switches, and tribal knowledge.

Security teams spend disproportionate time on operational toil -- navigating dashboards, correlating data across screens, constructing API queries, and manually updating alert statuses. Every alert triage requires the same sequence of clicks. Every compliance audit requires the same series of exports. Every access review requires the same chain of lookups.

The Sentra MCP Server closes this gap by exposing the full breadth and depth of the Sentra platform through the Model Context Protocol (MCP), an open standard that enables AI agents to discover and call tools programmatically. This turns every security operation -- from a simple status check to a multi-step investigation with remediation -- into a natural language conversation.

Unlike read-only MCP implementations that provide a conversational interface to data catalogs, the Sentra MCP Server is a complete security operations platform. It reads, investigates, correlates, and acts. It chains multiple API calls into coherent workflows. And it does so with enterprise-grade safety controls that put security teams in command of what the AI agent can do.

Core thesis: AI-driven DSPM doesn't just tell you what's wrong -- it investigates, triages, and helps you fix it.

How It Works

The Sentra MCP Server sits between AI agents (Claude Desktop, Claude Code, Cursor, or any MCP-compatible client) and the Sentra API, translating natural language requests into precise API call chains.


Architecture highlights:

  • Auto-generated tools: The MCP server parses Sentra's OpenAPI specification at startup and dynamically creates tool wrappers using closures with inspect.Signature -- no code generation or exec() required. This means new API endpoints are automatically exposed as tools when the spec is updated.
  • Unified request pipeline: All tools -- read and write -- flow through a shared HTTP client with connection pooling, automatic retry with exponential backoff for rate limits (429) and server errors (5xx), and consistent error handling.
  • Safety-first write operations: Write tools are organized into a 6-tier hierarchy from additive-only to destructive, gated behind a feature flag, with UUID validation and explicit safety confirmations for high-risk operations.
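The closure-based tool generation can be sketched as follows. The spec fragment, helper names, and paths here are illustrative stand-ins, not Sentra's actual code; the point is that each OpenAPI operation becomes a plain Python callable with an explicit, introspectable signature:

```python
import inspect

def make_tool(op_id, method, path, params):
    """Build a callable tool wrapper for one OpenAPI operation."""
    def tool(**kwargs):
        # In the real server this would issue an HTTP request;
        # here we just echo the resolved call for illustration.
        resolved = path.format(**{k: kwargs[k] for k in params if k in kwargs})
        return {"operation": op_id, "method": method, "path": resolved}
    # Attach an explicit signature so an MCP client can introspect
    # parameter names without code generation or exec().
    tool.__name__ = op_id
    tool.__signature__ = inspect.Signature(
        [inspect.Parameter(p, inspect.Parameter.KEYWORD_ONLY) for p in params]
    )
    return tool

# Hypothetical spec fragment standing in for Sentra's OpenAPI document.
spec = {
    "alerts_get": ("GET", "/v3/alerts/{alert_id}", ["alert_id"]),
    "policies_get_all": ("GET", "/v3/policies", []),
}
tools = {op: make_tool(op, m, p, ps) for op, (m, p, ps) in spec.items()}

print(inspect.signature(tools["alerts_get"]))  # (*, alert_id)
```

Because the wrappers are generated at startup from whatever the spec contains, adding an endpoint to the spec is enough to expose it as a tool.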

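The retry behavior in the unified request pipeline might look something like this minimal sketch (function names, attempt counts, and delays are assumptions, not the server's actual implementation):

```python
import time

# Transient statuses worth retrying: rate limits and server errors.
RETRYABLE = {429, 500, 502, 503, 504}

def request_with_retry(send, max_attempts=4, base_delay=0.5):
    """Retry transient failures with exponential backoff.

    `send` is any zero-argument callable returning (status_code, body).
    """
    for attempt in range(max_attempts):
        status, body = send()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    return status, body

# Simulated endpoint: rate-limited twice, then succeeds.
responses = iter([(429, "slow down"), (429, "slow down"), (200, "ok")])
status, body = request_with_retry(lambda: next(responses), base_delay=0)
print(status, body)  # 200 ok
```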
Capability Deep Dive

Read Operations by Domain

The Sentra MCP Server exposes read operations across every domain of the Sentra platform:

| Domain | Tool Count | Example Operations |
|---|---|---|
| Alerts | ~20 | List alerts, filter by severity/status, get trends, compliance aggregation, risk ratings, affected assets |
| Threats | ~5 | List threats, filter by MITRE tactic, get threat details |
| Data Stores | ~20 | Inventory stores, filter by type/region/sensitivity, aggregated risk, scan status, top data classes |
| Data Assets | ~10 | Search assets, count by type, export, file extensions, classification findings |
| Data Insights & Classes | ~15 | Data class distribution, group by account/region/store type/environment, dictionary values |
| Identity & Access | ~15 | Search/count identities, accessible stores/assets, full access graphs, permission metadata |
| Connectors | ~5 | List connectors, filter by type, associated connectors |
| Policies | ~5 | List policies, filter, incident counts |
| Compliance | ~5 | Framework compliance aggregation, control mappings, security ratings, rating trends |
| Audit Logs | ~4 | Activity feed, aggregated logs, entity-specific logs, activity histograms |
| DSAR | ~3 | List DSAR requests, request details, download reports |
| AI Assets | ~2 | List AI/ML assets, asset details |
| Dashboard & Sensitivity | ~3 | Dashboard summary, sensitivity overview, scan status |

Every tool includes enhanced descriptions that guide the AI agent on when to use it, what parameters to pass, how to construct filters, and what follow-up tools to chain for deeper investigation.

Write Operations: The 6-Tier Hierarchy

Write operations are the key differentiator. They transform the MCP server from a query interface into an operations platform. Each tier represents increasing impact and corresponding safety controls:

| Tier | Category | Tools | Impact | Safety Controls |
|---|---|---|---|---|
| 1 | Additive Only | alert_add_comment, threat_add_comment | Append-only, no state change | Max 1000 chars, cannot delete |
| 2 | State Changes | alert_transition, threat_transition | Changes alert/threat status | Validated status + reason enums |
| 3 | Scan Triggers | scan_data_store, scan_data_asset | Triggers classification scans | Rate-aware, async execution |
| 4 | Configuration | policy_change_status, policy_create | Modifies security policy config | UUID validation, full policy schema validation |
| 5 | Metadata Updates | data_store_update_description, data_store_update_custom_tags | Updates store metadata | Input length limits, JSON validation |
| 6 | Destructive | data_class_purge | Irreversible deletion of all detections | Requires confirm="PURGE" safety gate |

All 11 write tools are gated by the SENTRA_ENABLE_WRITE_OPS environment variable (default: enabled). Setting it to false completely removes all write tools from the MCP server, leaving a read-only interface.
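As a rough illustration of how the feature flag and the tier-6 safety gate could compose (the function name, return value, and error messages are hypothetical; only the environment variable and the confirm="PURGE" convention come from the description above):

```python
import os

# Write operations are enabled unless the flag is explicitly set to false.
WRITE_OPS_ENABLED = os.environ.get("SENTRA_ENABLE_WRITE_OPS", "true").lower() == "true"

def purge_data_class(class_id, confirm=None):
    """Tier-6 destructive operation: refuses to run without an explicit gate."""
    if not WRITE_OPS_ENABLED:
        raise PermissionError("write operations are disabled")
    if confirm != "PURGE":
        raise ValueError('destructive operation requires confirm="PURGE"')
    return f"purged all detections for data class {class_id}"

print(purge_data_class("c0ffee", confirm="PURGE"))
```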

Why this matters: Read-only MCP servers can tell you "this policy generates 200 low-severity alerts." The Sentra MCP Server can tell you that and then disable the policy and resolve its alerts -- in the same conversation.

Composite Investigation Tools

Two composite tools chain multiple API calls into single-invocation investigations:

`investigate_alert(alert_id)` -- Full alert triage in one call:

  1. Retrieves alert details (severity, policy, timestamps)
  2. Fetches affected data assets
  3. Gets alert status change history (recurring?)
  4. Pulls store context (type, region, owner, sensitivity)
  5. Maps accessible identities (blast radius)

`security_posture_summary()` -- Complete security overview:

  1. Dashboard summary metrics
  2. Open alerts aggregated by severity
  3. Overall security rating
  4. Compliance status across frameworks
  5. Risk distribution across data stores
  6. Sensitivity summary

These tools collapse what would otherwise be 5-6 sequential API calls into a single invocation, dramatically reducing latency and context window usage for the AI agent.
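The chaining pattern can be sketched as follows. The `api` stub and its method names are invented for illustration and do not mirror Sentra's internal client; the shape of the report follows the five steps listed above:

```python
def investigate_alert(alert_id, api):
    """Chain several read calls into one structured triage report."""
    alert = api.get_alert(alert_id)                      # step 1: alert details
    return {
        "alert": alert,
        "affected_assets": api.get_affected_assets(alert_id),      # step 2
        "history": api.get_status_history(alert_id),               # step 3: recurring?
        "store": api.get_store(alert["store_id"]),                 # step 4: store context
        "blast_radius": api.get_accessible_identities(alert["store_id"]),  # step 5
    }

# Minimal stub so the sketch runs without a live API.
class StubAPI:
    def get_alert(self, aid): return {"id": aid, "severity": "high", "store_id": "s1"}
    def get_affected_assets(self, aid): return ["asset-1", "asset-2"]
    def get_status_history(self, aid): return ["open"]
    def get_store(self, sid): return {"id": sid, "region": "us-east-1"}
    def get_accessible_identities(self, sid): return ["role/admin"]

report = investigate_alert("a1", StubAPI())
print(sorted(report))
```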

Guided Workflow Prompts

Five MCP prompts provide pre-built, step-by-step instructions that guide the AI agent through complex security workflows:

| Prompt | Parameters | Workflow |
|---|---|---|
| triage_alert | alert_id | 6-step alert investigation: details, affected assets, store context, blast radius, history, sensitivity |
| security_posture_overview | none | 7-step executive briefing: dashboard, alerts, rating, compliance, risk, sensitivity, threats |
| compliance_audit_prep | framework (optional) | 6-step audit preparation: compliance overview, controls, violations, classification, access, encryption |
| investigate_identity | identity_id | 5-step identity deep dive: details, accessible stores, accessible assets, access graph, related threats |
| investigate_data_store | store_id | 7-step store assessment: details, sensitivity, asset count, access list, alerts, scan status, data classes |

Prompts serve as expert runbooks encoded directly into the MCP server. A junior security analyst using these prompts follows the same investigation methodology as a senior engineer.

Use Cases

UC1: Quick Security Status Check

Persona: Security operations analyst starting their shift

Prompt:

"Show me all open alerts by severity and our current security rating."

Tools used: alerts_get_open_alerts_aggregated, alerts_get_risks_security_rating

Value: Instant situational awareness. No dashboard navigation, no login sequence. A 2-second question replaces a 5-minute morning routine.

UC2: Compliance Readiness Assessment

Persona: GRC analyst preparing for an upcoming HIPAA audit

Prompt:

"Prepare HIPAA compliance evidence: show our compliance score, all HIPAA-related controls and their status, any open violations, and data classification coverage for PHI across all data stores."

Tools used: alerts_get_frameworks_compliance_aggregation, alerts_get_framework_controls_mapping, alerts_get_all_external (filtered), data_insights_get_all (filtered for PHI), data_stores_get_all_external (filtered)

Value: Audit preparation that typically takes a full day compressed into a single conversational session. The output is structured for direct inclusion in audit evidence packages.

UC3: Alert Triage and Resolution

Persona: Security engineer responding to an overnight alert

Prompt:

"Investigate alert 7a3f9c21-4b8e-4d2a-9f1c-8e7d6a5b4c3d. Walk me through what happened, what data is at risk, who can access it, and whether this has happened before. If it's a false positive, resolve it and add a comment explaining why."

Tools used: investigate_alert (composite), alert_add_comment (write), alert_transition (write)

Value: End-to-end triage and resolution in one conversation. The composite tool gathers all context in a single call, and write operations close the loop -- no need to switch to the Sentra UI.

UC4: Identity Access Review

Persona: Security architect conducting a quarterly access review

Prompt:

"Show me all external identities with access to high-sensitivity data stores. For the identity with the broadest access, map the full access graph from identity to roles to stores to assets. Flag any stores with open alerts."

Tools used: search_identities (filtered), get_data_access_identities_by_id_accessible_stores, get_data_access_identities_by_id_graph, alerts_get_all_external (filtered per store)

Value: Access reviews that require correlating identity data, store sensitivity, role chains, and alert status -- all unified into a single investigation flow. The graph traversal reveals access paths that flat permission reports miss.

UC5: Policy Noise Reduction (Hero Example)

Persona: Security operations lead tuning policy configurations

Prompt:

"Audit all enabled security policies. For each, show how many open alerts it generates and its severity. Identify policies generating more than 50 low-severity alerts -- those are candidates for tuning. For the noisiest policy, show me sample violated assets so I can verify if it's misconfigured. Then disable that policy and resolve its existing alerts as false positives."

Tools used:

  1. policies_get_all -- Retrieve all enabled policies
  2. policies_get_policy_incidents_count -- Alert counts per policy
  3. alerts_get_all_external -- Alerts filtered to the noisiest policy
  4. alerts_get_violated_store_data_assets_by_alert -- Sample violated assets
  5. policy_change_status -- Disable the misconfigured policy (write)
  6. alert_transition -- Resolve existing alerts as false positives (write)

Value: This is the workflow that defines the difference between observing and operating. A read-only MCP server stops at step 4. Sentra's MCP server completes the full audit-to-remediation cycle, reducing policy noise that would otherwise consume analyst hours every week.

UC6: M&A Data Security Due Diligence

Persona: CISO assessing an acquisition target's data security posture

Prompt:

"We're acquiring Company X. Their AWS connector is 'companyX-aws-prod'. Give me a full data security due diligence report: all data stores in that account, sensitivity levels, open alerts and threats, access permissions, and compliance gaps. Flag anything that would be a deal risk."

Tools used: lookup_connector_by_name, data_stores_get_all_external (filtered), data_stores_get_store_asset_sensitivity, alerts_get_all_external (filtered), threats_get_all_external (filtered), get_data_access_stores_by_id_accessible_identities, alerts_get_frameworks_compliance_aggregation

Value: M&A due diligence that would require a dedicated workstream compressed into a structured assessment. The connector-scoped view ensures the analysis is precisely bounded to the acquisition target's infrastructure.

UC7: Board-Ready Security Briefing

Persona: CISO preparing for a quarterly board presentation

Prompt:

"Prepare my quarterly board security briefing: security rating trend over 90 days, current compliance status by framework, open alerts by severity with quarter-over-quarter comparison, data-at-risk trends, sensitivity summary, and top 5 prioritized recommendations."

Tools used: security_posture_summary (composite), alerts_get_risks_security_rating_trend, alerts_get_trends, alerts_get_data_at_risk_trends, data_stores_get_data_stores_aggregated_by_risk

Value: Board materials that tell a story: where we were, where we are, what we've improved, and what we need to prioritize next. The AI agent synthesizes data from 6+ tools into a narrative suitable for non-technical audiences.

UC8: AI Data Risk Assessment

Persona: AI governance lead assessing training data risk

Prompt:

"Show me all AI-related assets Sentra has discovered. For each, what sensitive data classes are present, who has access to the training data stores, and are there any open security alerts? Summarize the risk posture for our AI/ML workloads."

Tools used: get_all_ai_assets_api_data_access_ai_assets_get, get_ai_asset_by_id_api_data_access_ai_assets__asset_id__get, get_data_access_stores_by_id_accessible_identities, alerts_get_all_external (filtered)

Value: As organizations scale AI initiatives, visibility into what sensitive data feeds AI models becomes critical. This workflow surfaces PII, PHI, or proprietary data in training pipelines before it becomes a regulatory or reputational risk.

Prompt Showcase Gallery

The following prompts are designed to be used directly with any MCP-compatible AI agent connected to the Sentra MCP Server. Each demonstrates a complete workflow with the tools that fire behind the scenes.

Prompt 1: Full Alert Investigation with Remediation


Tools that fire:

  • alerts_get -- Alert details and policy info
  • alerts_get_data_assets_by_alert -- Affected data assets
  • data_stores_get_store -- Store details including sensitivity
  • get_data_access_stores_by_id_accessible_identities -- Blast radius
  • alertchangelog_get_alert_changelog_status_change_by_alert_id -- Recurrence check
  • alert_transition -- Status change (write)
  • alert_add_comment -- Investigation notes (write)

Expected output: A structured investigation report with severity assessment, impact analysis, blast radius, recurrence history, and confirmed remediation action.

Prompt 2: Compliance Audit Evidence Package


Tools that fire:

  • alerts_get_frameworks_compliance_aggregation -- Framework scores
  • alerts_get_framework_controls_mapping -- Control-level detail
  • alerts_get_all_external -- Open violations by control
  • get_coverage_metrics_api_scan_hub_visibility_coverage_get -- Scan coverage
  • count_identities -- Identity totals
  • search_identities -- Identity type breakdown
  • alerts_get_risks_security_rating_trend -- Rating trend

Expected output: A multi-section evidence package with quantified compliance metrics, identified gaps, and trend data demonstrating continuous improvement.

Prompt 3: Identity Blast Radius Analysis


Tools that fire:

  • get_identity_by_id_api_data_access_identities__identity_id__get -- Identity profile
  • get_data_access_identities_by_id_accessible_stores -- Accessible stores
  • data_stores_get_store_asset_sensitivity -- Per-store sensitivity
  • get_data_access_identities_by_id_graph -- Full access graph
  • threats_get_all_external -- Threats on accessible stores
  • alerts_get_all_external -- Alerts on accessible stores
  • get_data_access_identities_by_id_accessible_assets -- Top sensitive assets

Expected output: A risk-scored blast radius report with the identity's complete reach across the data estate, active threats in the blast zone, and a prioritized recommendation.

Prompt 4: Data Store Security Deep Dive


Tools that fire:

  • data_stores_get_store -- Store profile
  • data_stores_get_store_asset_sensitivity -- Sensitivity breakdown
  • data_stores_get_store_assets_count -- Asset count
  • datastorecontroller_getfileextensionsbydatastoreid -- File type breakdown
  • get_data_access_stores_by_id_accessible_identities -- Identity access
  • alerts_get_all_external -- Open alerts (filtered)
  • data_stores_get_store_scan_status -- Scan status
  • data_stores_get_data_stores_aggregated_by_risk -- Risk context
  • data_store_update_custom_tags -- Apply review tags (write)
  • data_store_update_description -- Update description (write)

Expected output: A comprehensive store security assessment with metadata updates applied directly to the store record for audit trail purposes.

Prompt 5: Weekly Security Operations Digest


Tools that fire:

  • alerts_get_trends -- Alert trend data
  • alerts_get_open_alerts_aggregated -- Current severity breakdown
  • threats_get_all_external -- Recent critical/high threats
  • alerts_get_frameworks_compliance_aggregation -- Compliance scores
  • data_stores_get_data_stores_aggregated_by_risk -- High-risk stores
  • get_assets_scanned_api_scan_hub_visibility_assets_scanned_get -- Scan coverage
  • security_posture_summary -- Overall posture

Expected output: A formatted weekly digest suitable for team distribution, with trend comparisons, prioritized actions, and metrics that track security operations performance.

Competitive Differentiation

Sentra vs. Read-Only Metadata MCP Servers

| Dimension | Read-Only MCP Servers | Sentra MCP Server |
|---|---|---|
| Tool count | 5–20 data catalog tools | 130+ tools across 13+ domains |
| Operations | Read-only queries | Read + 11 write operations |
| Investigation depth | Single-tool lookups | Multi-step composite investigations |
| Guided workflows | None | 5 pre-built security prompts |
| Security domains | Data catalog only | Alerts, threats, identity, compliance, DSAR, AI assets, policies, and more |
| Write operations | None | Comment, transition, scan, policy management, metadata updates |
| Safety controls | N/A | 6-tier hierarchy, feature flags, UUID validation, safety gates |
| Deployment options | Desktop only | Desktop, CLI, Docker with TLS |

Five Key Differentiators

1. Operational depth, not just observational breadth. The 11 write operations across 6 safety tiers transform the MCP server from a query interface into an operations platform. Security teams don't just find problems -- they fix them.

2. Composite investigation tools. The investigate_alert and security_posture_summary tools chain 5-6 API calls into single invocations. This isn't just convenience -- it reduces AI agent round trips, lowers latency, and keeps conversation context focused on analysis rather than data gathering.

3. Guided workflow prompts. Five pre-built prompts encode expert investigation methodologies directly into the MCP server. A junior analyst following the triage_alert prompt performs the same investigation as a senior engineer.

4. Full security domain coverage. From DSAR processing to AI asset risk assessment to MITRE ATT&CK threat mapping to identity graph traversal -- the Sentra MCP Server covers security operations end to end, not just the data catalog slice.

5. Enterprise-grade safety architecture. Write operations aren't an afterthought. The 6-tier hierarchy, feature flag gating, UUID validation, and explicit safety gates (like requiring confirm="PURGE" for destructive operations) ensure that conversational access doesn't compromise operational safety.

Security and Governance

The Sentra MCP Server is designed for enterprise security environments where the tools themselves must meet the same security standards as the data they protect.

Authentication and Authorization

  • Sentra API authentication via X-Sentra-API-Key header on all outbound API calls
  • MCP endpoint authentication via X-MCP-API-Key header for HTTP transport (prevents unauthorized agent connections)
  • API key permissions inherit from the Sentra platform -- the MCP server cannot exceed the privileges of the configured API key

Input Validation

  • UUID validation on all identifier parameters (alert_id, threat_id, policy_id, class_id) before HTTP calls are made
  • Input length limits on all string parameters (1000 chars for comments, 2000 chars for descriptions)
  • JSON schema validation for policy creation and tag updates
  • Enum validation for status transitions (only valid statuses and reasons accepted)
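A minimal sketch of this validation layer. The limits and the UUID check follow the description above, but the status set, function name, and error messages are hypothetical:

```python
import uuid

# Hypothetical allowed transition statuses; real enums live in the Sentra API.
VALID_STATUSES = {"open", "in_progress", "resolved", "dismissed"}
MAX_COMMENT_LEN = 1000

def validate_transition(alert_id, status, comment=""):
    """Validate inputs before any HTTP call is made."""
    try:
        uuid.UUID(alert_id)                       # UUID validation
    except ValueError:
        raise ValueError(f"not a UUID: {alert_id!r}")
    if status not in VALID_STATUSES:              # enum validation
        raise ValueError(f"invalid status: {status!r}")
    if len(comment) > MAX_COMMENT_LEN:            # input length limit
        raise ValueError("comment exceeds 1000 characters")
    return True

print(validate_transition("7a3f9c21-4b8e-4d2a-9f1c-8e7d6a5b4c3d", "resolved"))
```

Rejecting malformed input locally keeps bad requests from ever reaching the API, which also makes the agent's error messages immediate and specific.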

Network Security

  • SSRF protection blocks requests to private IP ranges (169.254.x, 10.x, 172.16-31.x, 192.168.x) and cloud metadata endpoints
  • HTTPS enforcement for all non-localhost connections
  • TLS-native deployment with certificate and key configuration for direct HTTPS serving
  • CORS controls with configurable origin allowlists for HTTP transport
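A simplified version of such an SSRF/HTTPS check, using Python's standard `ipaddress` module; the helper is illustrative, not the server's actual code (note that the private and link-local ranges listed above map directly onto `is_private` and `is_link_local`):

```python
import ipaddress
from urllib.parse import urlparse

# Cloud metadata endpoints blocked by name as well as by address.
BLOCKED_HOSTS = {"169.254.169.254", "metadata.google.internal"}

def is_allowed_url(url):
    """Reject private ranges and metadata endpoints; enforce HTTPS off-localhost."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if host in BLOCKED_HOSTS:
        return False
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Hostname, not an IP literal: require HTTPS unless it's localhost.
        return parsed.scheme == "https" or host == "localhost"
    if addr.is_loopback:
        return True  # localhost is exempt from the HTTPS requirement
    return not (addr.is_private or addr.is_link_local)

print(is_allowed_url("https://app.sentra.io/api"))  # True
```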

Operational Safety

  • Feature flag gating (SENTRA_ENABLE_WRITE_OPS) enables or disables all write operations with a single environment variable
  • 6-tier write hierarchy ensures destructive operations require explicit safety confirmation
  • Error sanitization strips internal details (hostnames, file paths, stack traces) from error responses returned to clients
  • Audit trail -- all write operations are recorded in Sentra's audit log, maintaining full traceability

Container Security

  • Docker deployment with non-root user, read-only filesystem, and resource limits
  • Health endpoint (/health) for orchestrator readiness probes, accessible without authentication

Deployment Options

| Deployment Mode | Transport | Authentication | Use Case |
|---|---|---|---|
| Claude Desktop | stdio | Sentra API key only | Individual security analyst, local development |
| Claude Code / Cursor | stdio | Sentra API key only | Developer workflow integration, IDE-embedded security |
| Docker (Production) | HTTP (streamable-http) | Sentra API key + MCP API key + TLS | Team-shared instance, production security operations |

Prerequisites

  • Python 3.11+ (or Docker)
  • Sentra API key with v3 access
  • Network access to your Sentra instance (typically https://app.sentra.io)

Quick Start (Claude Desktop)

Add to your Claude Desktop MCP configuration:

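A typical entry might look like the following. The `mcpServers` shape is the standard Claude Desktop convention, but the module name `sentra_mcp_server` is a placeholder; consult the official setup docs for the exact command. The environment variables match the Configuration Reference below.

```json
{
  "mcpServers": {
    "sentra": {
      "command": "python",
      "args": ["-m", "sentra_mcp_server"],
      "env": {
        "SENTRA_API_KEY": "<your-api-key>",
        "SENTRA_BASE_URL": "https://app.sentra.io"
      }
    }
  }
}
```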

Production Deployment (Docker with TLS)

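A hedged sketch of what such a deployment could look like, using the environment variables from the Configuration Reference below. The image name, port mapping, and certificate paths are assumptions; adjust them to your registry and certificate locations:

```shell
# Run the MCP server as a hardened container: non-root user,
# read-only filesystem, TLS served directly by the server.
docker run -d --name sentra-mcp \
  --read-only --user 1000:1000 \
  -p 8443:8000 \
  -v /etc/sentra/tls:/tls:ro \
  -e SENTRA_API_KEY="$SENTRA_API_KEY" \
  -e SENTRA_MCP_TRANSPORT=streamable-http \
  -e SENTRA_MCP_API_KEY="$SENTRA_MCP_API_KEY" \
  -e SENTRA_MCP_SSL_CERTFILE=/tls/server.crt \
  -e SENTRA_MCP_SSL_KEYFILE=/tls/server.key \
  -e SENTRA_MCP_CORS_ORIGINS="https://ops.example.com" \
  sentra/mcp-server:latest
```

The `/health` endpoint can then back your orchestrator's readiness probe, as noted under Container Security.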

Configuration Reference

| Environment Variable | Default | Description |
|---|---|---|
| SENTRA_API_KEY | (required) | Sentra API key for platform access |
| SENTRA_BASE_URL | https://app.sentra.io | Sentra API base URL |
| SENTRA_ENABLE_WRITE_OPS | true | Enable/disable all write operations |
| SENTRA_MCP_TRANSPORT | stdio | Transport mode: stdio, streamable-http, sse |
| SENTRA_MCP_API_KEY | (none) | API key required for HTTP transport authentication |
| SENTRA_MCP_HOST | 0.0.0.0 | HTTP transport bind address |
| SENTRA_MCP_PORT | 8000 | HTTP transport port |
| SENTRA_MCP_PATH | /mcp | HTTP transport endpoint path |
| SENTRA_MCP_SSL_CERTFILE | (none) | TLS certificate file path |
| SENTRA_MCP_SSL_KEYFILE | (none) | TLS private key file path |
| SENTRA_MCP_CORS_ORIGINS | (none) | Comma-separated allowed CORS origins |
| SENTRA_MCP_MODE | full | full (all tools) or cursor (priority subset) |

Call to Action

For Existing Sentra Customers

The MCP server is available today. Deploy it alongside your existing Sentra instance and start using natural language to investigate alerts, prepare compliance reports, and manage security operations. Contact your Sentra account team for deployment guidance and best practices.

For Security Teams Evaluating DSPM

The Sentra MCP Server demonstrates what modern data security operations look like: conversational, automated, and end-to-end. Request a demo to see how AI-driven security operations can reduce alert triage time, accelerate compliance preparation, and close the gap from detection to response.

For Security Engineers

The MCP server is open for customization. Add your own tools, create custom prompts that encode your organization's investigation methodologies, and integrate with your existing security workflows. The architecture is designed for extensibility -- every tool registered through the OpenAPI spec is automatically available, and custom tools can be added alongside the auto-generated ones.

The future of data security operations is conversational. Investigate, triage, and resolve -- not just query.

To see the Sentra MCP Server in action, request a demo.
