Shiri Nossel
Shiri is a Product Manager at Sentra with a background in engineering and data analysis. Before joining Sentra, she worked at ZoomInfo and in fast-paced startups, where she gained experience building products that scale. She’s passionate about creating clear, data-driven solutions to complex security challenges and brings curiosity and creativity to everything she does, both in and out of work.
Shiri's Data Security Posts

How Sentra Uncovers Sensitive Data Hidden in Atlassian Products
Atlassian tools such as Jira and Confluence are the beating heart of software development and IT operations. They power everything from sprint planning to debugging production issues. But behind their convenience lies a less-visible problem: these collaboration platforms quietly accumulate vast amounts of sensitive data, often over years, that security teams can’t easily monitor or control.
The Problem: Sensitive Data Hidden in Plain Sight
Many organizations rely on Jira to manage tickets, track incidents, and communicate across teams. But within those tickets and attachments lies a goldmine of sensitive information:
- Credentials and access keys to different environments.
- Intellectual property, including code snippets and architecture diagrams.
- Production data used to reproduce bugs or validate fixes — often in violation of data-handling regulations.
- Real customer records shared for troubleshooting purposes.
This accumulation isn’t deliberate; it’s a natural byproduct of collaboration. However, it creates a long-tail exposure risk - historical tickets that remain accessible to anyone who still holds permissions on the project.
The Insider Threat Dimension
Because Jira and Confluence retain years of project history, employees and contractors may have access to data they no longer need. In some organizations, teams include offshore or external contributors, multiplying the risk surface. Any of these users could intentionally or accidentally copy or export sensitive content at any moment.
Why Sensitive Data Is So Hard to Find
Sensitive data in Atlassian products hides across three levels, each requiring a different detection approach:
- Structured Data (Records): Every ticket or page includes structured fields - reporter, status, labels, priority. These schemas are customizable, meaning sensitive fields can appear unpredictably. Security teams rarely have visibility or consistent metadata across instances.
- Unstructured Data (Descriptions & Discussions): Free-text fields are where developers collaborate — and where secrets often leak. Comments can contain access tokens, internal URLs, or step-by-step guides that expose system details.
- Unstructured Data (Attachments): Screenshots, log files, spreadsheets, code exports, or even database snapshots are commonly attached to tickets. These files may contain credentials, customer PII, or proprietary logic, yet they are rarely scanned or governed.
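To make the detection problem concrete, here is a minimal, hedged sketch of how one might sweep Jira Cloud issues for credential patterns via Atlassian's public REST search endpoint. This is illustrative only - it is not Sentra's implementation, and the regexes and environment variables are placeholders:

```python
import os
import re
import requests

# Placeholder credentials; a real scanner would use a dedicated service account.
JIRA_BASE = os.environ["JIRA_BASE_URL"]  # e.g. https://yourorg.atlassian.net
AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_API_TOKEN"])

# A few illustrative secret patterns; production detectors are far richer.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    "bearer_token": re.compile(r"(?i)bearer\s+[a-z0-9._\-]{20,}"),
}

def scan_issues(jql: str = "order by updated desc", max_results: int = 50) -> None:
    """Fetch issues via Jira's REST search API and flag text matching secret patterns."""
    resp = requests.get(
        f"{JIRA_BASE}/rest/api/2/search",
        params={"jql": jql, "fields": "summary,description", "maxResults": max_results},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    for issue in resp.json().get("issues", []):
        fields = issue["fields"]
        text = " ".join(str(v) for v in (fields.get("summary"), fields.get("description")) if v)
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                print(f"{issue['key']}: possible {name}")

if __name__ == "__main__":
    scan_issues()
```

Even a toy sweep like this only covers structured fields and free text; attachments require downloading and parsing each file, which is where purpose-built scanning earns its keep.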
The Challenge for Security Teams
Traditional security tools were never designed for this kind of data sprawl. Atlassian environments can contain millions of tickets and pages, spread across different projects and permissions. Manually auditing this data is impractical. Even modern DLP tools struggle to analyze the context of free text or attachments embedded within these platforms.
Compliance teams face an uphill battle: GDPR, HIPAA, and SOC 2 all require knowing where sensitive data resides. Yet in most Atlassian instances, that visibility is nonexistent.
How Sentra Solves the Problem
Sentra takes a different approach. Its cloud-native data security platform discovers and classifies sensitive data wherever it lives - across SaaS applications, cloud storage, and on-prem environments. When connected to your Atlassian environment, Sentra delivers visibility and control across every layer of Jira and Confluence.
Comprehensive Coverage
Sentra delivers consistent data governance across SaaS and cloud-native environments. When connected to Atlassian Cloud, Sentra’s discovery engine scans Jira and Confluence content to uncover sensitive information embedded in tickets, pages, and attachments, ensuring full visibility without impacting performance.
In addition, Sentra’s flexible architecture can be extended to support hybrid environments, providing organizations with a unified view of sensitive data across diverse deployment models.
AI-Based Classification
Using advanced AI models, Sentra classifies data across all three tiers:
- Structured metadata, identifying risky fields and tags.
- Unstructured text, analyzing ticket descriptions, comments, and discussions for credentials, PII, or regulated data.
- Attachments, scanning files like logs or database snapshots for hidden secrets.
This contextual understanding distinguishes between harmless content and genuine exposure, reducing false positives.
Full Lifecycle Scanning
Sentra doesn’t just look at new tickets; it scans the entire historical archive to detect legacy exposure while continuously monitoring for ongoing changes. This dual approach helps security teams remediate existing risks and prevent future leaks.
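As a rough sketch of this dual approach (illustrative only, not Sentra's internals), a scanner can pair a one-time backfill over the full archive with a lightweight poll restricted to recently updated items. The `fetch_page` callable and its signature are hypothetical stand-ins for any paged source, such as a Jira JQL search:

```python
import time
from typing import Callable, List

# Stand-in for any paged data source, e.g. a Jira JQL search; hypothetical signature.
FetchPage = Callable[[str, int], List[dict]]

def backfill(fetch_page: FetchPage) -> None:
    """One-time sweep of the entire historical archive, oldest items first."""
    start_at = 0
    while True:
        page = fetch_page("order by created asc", start_at)
        if not page:
            break
        # ...classify each item in `page` for sensitive content here...
        start_at += len(page)

def monitor(fetch_page: FetchPage, interval_sec: int = 300) -> None:
    """Continuous loop that re-scans only items touched since the last poll."""
    while True:
        fetch_page(f"updated >= -{interval_sec // 60}m", 0)
        time.sleep(interval_sec)
```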
The Real-World Impact
Organizations using Sentra gain the ability to:
- Prevent accidental leaks of credentials or production data in collaboration tools.
- Enforce compliance by mapping sensitive data across Jira and Confluence.
- Empower DevOps and security teams to collaborate safely without stifling productivity.
Conclusion
Collaboration is essential, but it should never compromise data security. Atlassian products enable innovation and speed, yet they also hold years of unmonitored information. Sentra bridges that gap by giving organizations the visibility and intelligence to discover, classify, and protect sensitive data wherever it lives, even in Jira and Confluence.
<blogcta-big>

The Hidden Risks Metadata Catalogs Can’t See
In today’s data-driven world, organizations are dealing with more information than ever before. Data pours in from countless production systems and applications, and data analysts are tasked with making sense of it all - fast. To extract valuable insights, teams rely on powerful analytics platforms like Snowflake, Databricks, BigQuery, and Redshift. These tools make it easier to store, process, and analyze data at scale.
But while these platforms are excellent at managing raw data, they don't solve one of the most critical challenges organizations face: understanding and securing that data.
That’s where metadata catalogs come in.
Metadata Catalogs Are Essential But They’re Not Enough
Metadata catalogs such as AWS Glue, Hive Metastore, and Apache Iceberg are designed to bring order to large-scale data ecosystems. They offer a clear inventory of datasets, making it easier for teams to understand what data exists, where it’s stored, and who is responsible for it.
This organizational visibility is essential. With a good catalog in place, teams can collaborate more efficiently, minimize redundancy, and boost productivity by making data discoverable and accessible.
But while these tools are great for discovery, they fall short in one key area: security. They aren’t built to detect risky permissions, identify regulated data, or prevent unintended exposure. And in an era of growing privacy regulations and data breach threats, that’s a serious limitation.
Different Data Tools, Different Gaps
It’s also important to recognize that not all tools in the data stack work the same way. For example, platforms like Snowflake and BigQuery come with fully managed infrastructure, offering seamless integration between storage, compute, and analytics. Others, like Databricks or Redshift, are often layered on top of external cloud storage services like S3 or ADLS, providing more flexibility but also more complexity.
Metadata tools have similar divides. AWS Glue is tightly integrated into the AWS ecosystem, while tools like Apache Iceberg and Hive Metastore are open and cloud-agnostic, making them suitable for diverse lakehouse architectures.
This variety introduces fragmentation, and with fragmentation comes risk. Inconsistent access policies, blind spots in data discovery, and siloed oversight can all contribute to security vulnerabilities.
The Blind Spots Metadata Can’t See
Even with a well-maintained catalog, organizations can still find themselves exposed. Metadata tells you what data exists, but it doesn’t reveal when sensitive information slips into the wrong place or becomes overexposed.
This problem is particularly severe in analytics environments. Unlike production environments, where permissions are strictly controlled, or SaaS applications, which have clear ownership and structured access models, data lakes and warehouses function differently. They are designed to collect as much information as possible, allowing analysts to freely explore and query it.
In practice, this means data often flows in without a clear owner and frequently without strict permissions. Anyone with warehouse access, whether users or automated processes, can add information, and analysts typically have broad query rights across all data. This results in a permissive, loosely governed environment where sensitive data such as PII, financial records, or confidential business information can silently accumulate. Once present, it can be accessed by far more individuals than appropriate.
The good news is that the remediation process doesn't require a heavy-handed approach. Often, it's not about managing complex permission models or building elaborate remediation workflows. The crucial step is the ability to continuously identify sensitive data, understand where it lives, and then take the correct action, whether that involves removal, masking, or locking it down.
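As one concrete illustration: if flagged data lives in Snowflake, locking down a sensitive column can use Snowflake's documented dynamic data masking DDL. The sketch below applies such a policy through the snowflake-connector-python package; the connection details, table, column, and role names are all hypothetical:

```python
import snowflake.connector  # pip install snowflake-connector-python

# Hypothetical connection details; real deployments pull these from a secrets manager.
conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="SECURITY_ADMIN",
    password="<from-secrets-manager>",
    warehouse="ADMIN_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

# Standard Snowflake dynamic data masking: unmask only for an approved role.
CREATE_POLICY = """
CREATE MASKING POLICY IF NOT EXISTS email_mask AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('PII_READER') THEN val ELSE '***MASKED***' END
"""
APPLY_POLICY = "ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask"

cur = conn.cursor()
try:
    cur.execute(CREATE_POLICY)  # define the policy once
    cur.execute(APPLY_POLICY)   # attach it to the column flagged as PII
finally:
    cur.close()
    conn.close()
```

The heavy lifting here is not the DDL itself but knowing, continuously, which tables and columns need it.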
How Sentra Bridges the Gap Between Data Visibility & Security
This is where Sentra comes in.
Sentra’s Data Security Posture Management (DSPM) platform is designed to complement and extend the capabilities of metadata catalogs, not just to address their limitations, but to elevate your entire data security strategy. Instead of replacing your metadata layer, Sentra works alongside it, enhancing your visibility with real-time insights and powerful security controls.
Sentra scans across modern data platforms like Snowflake, S3, BigQuery, and more. It automatically classifies and tags sensitive data, identifies potential exposure risks, and detects compliance violations as they happen.
With Sentra, your metadata becomes actionable.

From Static Maps to Live GPS
Think of your metadata catalog as a map. It shows you what’s out there and how things are connected. But a map is static. It doesn’t tell you when there’s a roadblock, a detour, or a collision. Sentra transforms that map into a live GPS. It alerts you in real time, enforces the rules of the road, and helps you navigate safely, no matter how fast your data environment is moving.
Conclusion: Visibility Without Security Is a Risk You Can’t Afford
Metadata catalogs are indispensable for organizing data at scale. But visibility alone doesn’t stop a breach. It doesn’t prevent sensitive data from slipping into the wrong place, or from being accessed by the wrong people.
To truly safeguard your business, you need more than a map of your data—you need a system that continuously detects, classifies, and secures it in real time. Without this, you’re leaving blind spots wide open for attackers, compliance violations, and costly exposure.
Sentra turns static visibility into active defense. With real-time discovery, context-rich classification, and automated protection, it gives you the confidence to not only see your data, but to secure it.
See clearly. Understand fully. Protect confidently with Sentra.
<blogcta-big>

EU AI Act Compliance: What Enterprise AI 'Deployers' Need to Know
The EU AI Act isn't just for model builders. If your organization uses third-party AI tools like Microsoft Copilot, ChatGPT, and Claude, you're likely subject to EU AI Act compliance requirements as a "deployer" of AI systems. While many security leaders assume this regulation only applies to companies developing AI systems, the reality is far more expansive.
The stakes are significant. The EU AI Act officially entered into force on August 1, 2024; however, for deployers of high-risk AI systems, most obligations will not be fully enforceable until August 2, 2026. Once active, the Act employs a tiered penalty structure: non-compliance with prohibited AI practices can draw fines of up to €35 million or 7% of global revenue, while violations of high-risk obligations (the most likely risk for deployers) can reach up to €15 million or 3% of global revenue, emphasizing the need for early preparation.
For security leaders, this presents both a challenge and an opportunity. AI adoption can drive significant competitive advantage, but doing so responsibly requires robust risk management and strong data protection practices. In other words, compliance and safety are not just regulatory hurdles; they’re enablers of trustworthy and effective AI deployment.
Why the Risk-Based Approach Changes Everything for Enterprise AI
The EU AI Act establishes a four-tier risk classification system that fundamentally changes how organizations must think about AI governance. Unlike traditional compliance frameworks that apply blanket requirements, the AI Act's obligations scale based on risk level.
The critical insight for security leaders: classification depends on use case, not the technology itself. A general-purpose AI tool like ChatGPT or Microsoft Copilot starts as "minimal risk" but becomes "high-risk" based on how your organization deploys it. This means the same AI platform can have different compliance obligations across different business units within the same company.
Deployer vs. Developer: Most Enterprises Are "Deployers"
The EU AI Act establishes distinct responsibilities for two main groups: AI system providers (those who develop and place AI systems on the market) and deployers (those who use AI systems within their operations).
Most enterprises today, especially those using third-party tools such as ChatGPT, Copilot, or other AI services, are deployers. This means they face compliance obligations related to how they use AI, not necessarily how it was built.
Providers bear primary responsibility for:
- Risk management systems
- Data governance and documentation
- Technical transparency and conformity assessments
- Automated logging capabilities
For security and compliance leaders, this distinction is critical. Vendor due diligence becomes a key control point, ensuring that AI providers can demonstrate compliance before deployment.
However, being a deployer does not eliminate obligations. Deployers must meet several important requirements under the Act, particularly when using high-risk AI systems, as outlined below.
The Hidden High-Risk Scenarios
Security teams must map AI usage across the organization to identify high-risk deployment scenarios that many organizations overlook.
When AI Use Becomes “High-Risk”
Under the EU AI Act, risk classification is based on how AI is used, not which product or vendor provides it. The same tool, whether ChatGPT, Microsoft Copilot, or any other AI system, can fall into a high-risk category depending entirely on its purpose and context of deployment.
Examples of High-Risk Use Cases:
AI systems are considered high-risk when they are used for purposes such as:
- Biometric identification or categorization of individuals
- Operation of critical infrastructure (e.g., energy, water, transportation)
- Education and vocational training (e.g., grading, admission decisions)
- Employment and worker management, including access to self-employment
- Access to essential private or public services, including credit scoring and insurance pricing
- Law enforcement and public safety
- Migration, asylum, and border control
- Administration of justice or democratic processes
Illustrative Examples
- Using ChatGPT to draft marketing emails → Not high-risk
- Using ChatGPT to rank job candidates → High-risk (employment context)
- Using Copilot to summarize code reviews → Not high-risk
- Using Copilot to approve credit applications → High-risk (credit scoring)
In other words, the legal trigger is the use case, not the data type or the brand of tool. Processing sensitive data like PHI (Protected Health Information) may increase compliance obligations under other frameworks (like GDPR or HIPAA), but it doesn’t itself make an AI system high-risk under the EU AI Act; the function and impact of the system do.
Even seemingly innocuous uses like analyzing customer data for business insights can become high-risk if they influence individual treatment or access to services.
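Because classification hinges on the use case, some teams encode this triage as a simple lookup before any deeper legal review. A minimal sketch, where the category tags paraphrase Annex III and the helper names are our own invention, not anything defined by the Act:

```python
from dataclasses import dataclass

# First-pass triage keyed on use case, not on tool or vendor.
# Categories paraphrase Annex III of the EU AI Act; illustrative, not legal advice.
HIGH_RISK_USE_CASES = {
    "biometric_identification",
    "critical_infrastructure_operation",
    "education_grading_or_admissions",
    "employment_screening_or_evaluation",
    "credit_scoring_or_insurance_pricing",
    "law_enforcement",
    "migration_asylum_border_control",
    "administration_of_justice",
}

@dataclass
class AIDeployment:
    tool: str      # e.g. "ChatGPT", "Microsoft Copilot"
    use_case: str  # normalized use-case tag chosen by the reviewer

def is_high_risk(deployment: AIDeployment) -> bool:
    """The same tool can be high-risk or not, depending solely on its use case."""
    return deployment.use_case in HIGH_RISK_USE_CASES

# Same tool, different obligations:
assert not is_high_risk(AIDeployment("ChatGPT", "marketing_copy_generation"))
assert is_high_risk(AIDeployment("ChatGPT", "employment_screening_or_evaluation"))
```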
The "shadow high-risk" problem represents a significant blind spot for many organizations. Employees often deploy AI tools for legitimate business purposes without understanding the compliance implications. A marketing team using AI to analyze customer demographics for targeting campaigns may unknowingly create high-risk AI deployments if the analysis influences individual treatment or access to services.
The “Shadow High-Risk” Problem
Many organizations face a growing blind spot: shadow high-risk AI usage. Employees often deploy AI tools for legitimate business tasks without realizing the compliance implications. A marketing team analyzing customer demographics to target campaigns, for instance, may unknowingly cross that line.
For example, an HR team using a custom-prompted ChatGPT to filter or rank job applicants inadvertently creates a high-risk deployment under Annex III of the Act. While simple marketing copy generation remains "limited risk," any AI use that evaluates employees or influences recruitment triggers the full weight of high-risk compliance. Without visibility, such cases can expose organizations to significant fines.
The Eight Critical Deployer Obligations for High-Risk AI Systems
1. AI System Inventory & Classification
Organizations must maintain comprehensive inventories of AI systems documenting vendors, use cases, risk classifications, data flows, system integrations, and current governance maturity. Security teams must implement automated discovery tools to identify shadow AI usage and ensure complete visibility.
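A hypothetical sketch of what one inventory entry might capture; the schema and field names are illustrative, not a prescribed format:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AISystemRecord:
    """One row of an AI system inventory; field names are illustrative."""
    name: str                  # e.g. "Copilot - engineering org"
    vendor: str                # provider of the underlying system
    use_case: str              # normalized use-case tag
    risk_class: str            # "minimal" | "limited" | "high" | "prohibited"
    data_flows: List[str] = field(default_factory=list)    # systems it reads/writes
    integrations: List[str] = field(default_factory=list)  # connected platforms
    governance_owner: str = "" # accountable team or person
```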
2. Data Governance for AI
For high-risk AI systems, deployers who control the input data must ensure that the data is relevant and sufficiently representative for the system’s intended purpose.
This responsibility includes maintaining data quality standards, tracking data lineage, and verifying the statistical properties of datasets used in training and operation, but only where the deployer has control over the input data.
3. Continuous Monitoring
System monitoring represents a critical security function requiring continuous oversight of AI system operation and performance against intended purposes. Organizations must implement real-time monitoring capabilities, automated alert systems for anomalies, and comprehensive performance tracking.
4. Logging & Retention
Organizations must maintain automatically generated logs for minimum six-month periods, with financial institutions facing longer retention requirements. Logs must capture start and end dates/times for each system use, input data and reference database information, and identification of personnel involved in result verification.
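A hedged sketch of what such a log record and retention check could look like; the Act specifies what to capture, not a schema, so every name here is our own:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

RETENTION = timedelta(days=183)  # at least six months; financial institutions may need longer

@dataclass
class AIUsageLogEntry:
    """Captures the elements the Act calls out: session window, inputs, verifier."""
    started_at: datetime
    ended_at: datetime
    input_reference: str  # pointer to the input data / reference database used
    verified_by: str      # person involved in verifying the results

def must_retain(entry: AIUsageLogEntry, now: Optional[datetime] = None) -> bool:
    """True while the entry is still inside the minimum retention window."""
    now = now or datetime.now(timezone.utc)
    return now - entry.ended_at < RETENTION

entry = AIUsageLogEntry(
    started_at=datetime(2025, 1, 10, 9, 0, tzinfo=timezone.utc),
    ended_at=datetime(2025, 1, 10, 9, 5, tzinfo=timezone.utc),
    input_reference="s3://hr-data/candidates-2025-01.csv",  # hypothetical
    verified_by="jane.doe@example.com",
)
print(must_retain(entry))
```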
5. Workplace Notification
Workplace notification requirements mandate informing employees and representatives before deploying AI systems that monitor or evaluate work performance. This creates change management obligations for security teams implementing AI-powered monitoring tools.
6. Incident Reporting
Serious incident reporting requires immediate notification to both providers and authorities when AI systems directly or indirectly lead to death, serious harm to a person's health, serious and irreversible disruption of critical infrastructure, infringement of fundamental rights obligations, or serious harm to property or the environment. Security teams must establish AI-specific incident response procedures.
7. Fundamental Rights Impact Assessments (FRIAs)
Organizations using high-risk AI systems must conduct FRIAs before deployment. FRIAs are mandatory for public bodies, organizations providing public services, and specific use cases like credit scoring or insurance risk assessment. Security teams must integrate FRIA processes with existing privacy impact assessments.
8. Vendor Due Diligence
Organizations must verify AI provider compliance status throughout the supply chain, assess vendor security controls adequacy, negotiate appropriate service level agreements for AI incidents, and establish ongoing monitoring procedures for vendor compliance changes.
Recommended Steps for Security Leaders
Once you’ve identified which AI systems may qualify as high-risk under the EU AI Act, the next step is to establish a practical roadmap for compliance and governance readiness.
While the Act does not prescribe an implementation timeline, organizations should take immediate, proactive measures to prepare for enforcement. The following are Sentra’s recommended best practices for AI governance and security readiness, not legal deadlines.
1. Build an AI System Inventory: Map all AI systems in use, including third-party tools and internal models. Automated discovery can help uncover shadow AI use across departments.
2. Assess Vendor and Partner Compliance: Evaluate each vendor’s EU AI Act readiness, including whether they follow relevant Codes of Practice or maintain clear accountability documentation.
3. Identify High-Risk Use Cases: Map current AI deployments against EU AI Act risk categories to flag high-risk systems for closer governance and oversight.
4. Strengthen AI Data Governance: Implement standards for data quality, lineage, and representativeness (where the deployer controls input data). Align with existing data protection frameworks such as GDPR and ISO 42001.
5. Conduct Fundamental Rights Impact Assessments (FRIA): Integrate FRIAs into your broader risk management and privacy programs to proactively address potential human rights implications.
6. Enhance Monitoring and Incident Response: Deploy continuous monitoring solutions and integrate AI-specific incidents into your SOC playbooks.
7. Update Vendor Contracts and Accountability Structures: Include liability allocation, compliance warranties, and audit rights in contracts with AI vendors to ensure shared accountability.
Author’s Note:
These steps represent Sentra’s interpretation and recommended framework for AI readiness, not legal requirements under the EU AI Act. Organizations should act as soon as possible, regardless of when they begin their compliance journey.
Critical Deadlines Security Leaders Can't Miss
August 2, 2025: GPAI transparency requirements are already in effect, requiring clear disclosure of AI-generated content, copyright compliance mechanisms, and training data summaries.
August 2, 2026: Full high-risk AI system compliance becomes mandatory, including registration in EU databases, implementation of comprehensive risk management systems, and complete documentation of all compliance measures.
Ongoing enforcement: Prohibited practices enforcement is active immediately with €35 million maximum penalties or 7% of global revenue.
From Compliance Burden to Competitive Advantage
The EU AI Act represents more than a regulatory requirement; it's an opportunity to establish comprehensive AI governance that enables secure, responsible AI adoption at enterprise scale. Security leaders who act proactively will gain competitive advantages through enhanced data protection, improved risk management, and the foundation for trustworthy AI innovation.
Organizations that view EU AI Act compliance as merely a checklist exercise miss the strategic opportunity to build world-class AI governance capabilities. The investment in comprehensive data discovery, automated classification, and continuous monitoring creates lasting organizational value that extends far beyond regulatory requirements. Understanding data security posture management (DSPM) reveals how these capabilities enable faster AI adoption, reduced risk exposure, and enhanced competitive positioning in an AI-driven market.
Organizations that delay implementation face increasing compliance costs, regulatory risks, and competitive disadvantages as AI adoption accelerates across industries. The path forward requires immediate action on AI discovery and classification, strategic technology platform selection, and integration with existing security and compliance programs. Building a data security platform for the AI era demonstrates how leading organizations are establishing the technical foundation for both compliance and innovation.
Ready to transform your AI governance strategy? Understanding your obligations as a deployer is just the beginning; the real opportunity lies in building the data security foundation that enables both compliance and innovation.
Schedule a demonstration to discover how comprehensive data visibility and automated compliance monitoring can turn regulatory requirements into competitive advantages.
<blogcta-big>