Intelligence Briefing β€” April 2026

Securing Clinical AI
at Scale

Henry Schein One AI Security Assessment β€” prepared for the meeting with Martin Busch, Sr. Director Safety & Security EMEA & Brazil

Prepared for
Martin Busch
Role
Sr. Director Safety & Security EMEA & Brazil
Date
April 2026
Prepared by
Hugo Nguyen β€” AI Security
Scroll to begin
Section 01

The Situation

Henry Schein One has moved from AI experimentation to platform-wide deployment at remarkable speed β€” 3 AI products shipped in 4 months, embedded in an agentic architecture with 700+ API endpoints.

48,000+
US dental practices on Dentrix / Dentrix Ascend[1]
100M
Claims processed annually[1]
191M
Eligibility checks in 2025[1]
3M
Charts completed via AI-powered voice dictation[1]

Product Timeline

Nov 3, 2025
AWS Strategic Partnership announced β€” goal: "world's first AI-augmented practice management system"[2]
Nov 25, 2025
Voice Notes & Claire unveiled at Greater New York Dental Meeting (GNYDM)[3]
Feb 19, 2026
Image Verify launched β€” AI-powered image quality assessment in Dentrix[4]
Mar 10, 2026
Dentrix Ascend restructured into 3 AI-embedded tiers β€” AI is now non-optional even at Essentials level[1]
Section 02

The Gap β€” AWS Says It's Your Problem

AWS explicitly states that prompt injection defense is the customer's responsibility. Their own documentation, security blog, and ML blog say so β€” eight separate times.

"The responsibility for preventing vulnerabilities like prompt injection lies with the customer."
β€” AWS Bedrock Documentation
"Guardrails should not be relied upon as the sole defense against prompt injections."
β€” AWS Security Blog, Anna McAbee, Security Specialist Solutions Architect (Jan 2025)
"Unlike SQL injection… prompt injection doesn't have a single remediation solution."
β€” AWS ML Blog, Sr. AI Security Engineers at Amazon (May 2025)

What Bedrock Guardrails Covers vs. What It Doesn't

CapabilityGuardrails CoversGap
Content filtering (hate, violence, etc.)βœ“β€”
PII/PHI redactionβœ“β€”
Prompt attack detection (jailbreak/injection)βœ“Requires correct input tagging β€” "if there are no tags, prompt attacks will not be filtered"[6]
Contextual grounding checksβœ“Conversational QA / chatbot use cases NOT supported[7]
Tool input/output scanningβœ—Agent tool I/O not passed through Guardrails by default[8]
Reasoning content blocksβœ—Explicitly excluded from scanning[7]
Automated reasoning β†’ injection protectionβœ—Validates "as-is" β€” provides "No prompt injection protection"[7]
System prompt absolute controlβœ—"Don't provide absolute control, model may still deviate"[9]
Section 03

The Threat Landscape

OWASP Top 10 for LLM Applications 2025 β€” the global security standard created by 600+ experts across 18 countries, mapped to Henry Schein's specific workflows.[11]

LLM01
Prompt Injection
Manipulating LLM behavior via crafted inputs. OWASP: "It is unclear if there are fool-proof methods of prevention."[12]
LLM02
Sensitive Info Disclosure
LLMs exposing PII, health records, credentials. Health records explicitly listed as at-risk category.[13]
LLM03
Supply Chain
Vulnerabilities in third-party models, datasets, and dependencies.
LLM04
Data & Model Poisoning
Tampered training/fine-tuning/embedding data introducing backdoors or bias.
LLM05
Improper Output Handling
Insufficient validation of LLM outputs before downstream use. OWASP: "Treat the model as any other user."[14]
LLM06
Excessive Agency
LLMs granted too much functionality, permissions, or autonomy. Directly relevant to agentic architecture.[11]
LLM07
System Prompt Leakage
Risk that system prompts and developer instructions are exposed to users.
LLM08
Vector & Embedding Weaknesses
Security risks in RAG systems and vector databases.
LLM09
Misinformation
LLMs generating false but convincing content. OWASP flags healthcare as a high-stakes domain.[11]
LLM10
Unbounded Consumption
Excessive resource usage leading to DoS or cost exploitation.

Threat Matrix: Henry Schein Workflows Γ— Attack Types

Prompt
Injection
PHI
Leakage
Excessive
Agency
Output
Misuse
Voice Notes
CRITICAL
CRITICAL
MEDIUM
HIGH
Image Verify
HIGH
MEDIUM
MEDIUM
HIGH
Claire (Agent)
CRITICAL
HIGH
CRITICAL
HIGH
Claims Auto
HIGH
HIGH
CRITICAL
CRITICAL
Section 04

Attack Research β€” What the Lab Says

Frontier AI safety defenses are being systematically bypassed. UK government researchers demonstrated automated attacks against the strongest commercial safeguards β€” at trivial cost.

Boundary Point Jailbreaking (UK AI Security Institute, Feb 2026)

0% β†’ 75.6%
GPT-5 input classifier β€” avg harmful rubric score (94.3% max@50)[16]
0% β†’ 68%
Claude Sonnet 4.5 / Constitutional Classifiers β€” avg with elicitation (80.4% max@50)[16]
$210–$330
API cost to develop universal jailbreak (months of R&D not included)[16]

Clinical Hallucination Risk at Scale

"44% of hallucinations were clinically major" β€” even at the "low" 1.47% rate, at Henry Schein's scale (3M AI-powered charts), this translates to ~19,400 clinically major hallucinations per year.
β€” Asgari et al., npj Digital Medicine 2025
ECRI ranked AI chatbot misuse as the #1 health technology hazard for 2026.
β€” ECRI, January 2026
Section 05

The Defense Blueprint β€” CaMeL + Defense-in-Depth

CaMeL (CApabilities for MachinE Learning) provides provable security guarantees against prompt injection β€” the centerpiece of a 6-layer defense architecture.[18]

CaMeL Architecture β€” Capability-Based Prompt Injection Defense USER Trusted Query PRIVILEGED LLM Sees ONLY trusted query Generates execution plan QUARANTINED LLM Processes untrusted data NO tool access CaMeL INTERPRETER Enforces capability policies Tracks data provenance Gates every tool call TOOLS Claims, Email, EHR DATA SOURCES Transcripts, Images CAPABILITY TAGS Source provenance Allowed readers SECURITY POLICIES Source: Debenedetti et al. (2025), arXiv:2503.18813 β€” Google DeepMind / ETH Zurich

AgentDojo Benchmark Results

~77%
Task completion with CaMeL (vs ~84% undefended) β€” 7pp cost for provable guarantees[18]
0–2
Successful attacks remaining (vs 80–300 undefended)[18]

6-Layer Defense Stack

Section 06

Regulatory Mandate

The EU AI Act makes adversarial robustness a legal requirement for high-risk AI systems β€” and names the specific attack categories by name in binding law.

EU AI Act Enforcement Timeline

Feb 2, 2025
ALREADY LIVE
AI Literacy (Art. 4)
Prohibited Practices (Art. 5)[20]
Aug 2, 2026
General Enforcement
Annex III high-risk
Transparency rules
Regulatory sandboxes[20]
Aug 2, 2027
MEDICAL DEVICE AI
Annex I high-risk (MDR)
Henry Schein's deadline[20]
"High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities. The technical solutions shall include measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws."
β€” EU AI Act, Article 15(5) β€” Adversarial robustness is now LAW

Financial Exposure

$375M
Maximum Tier 2 fine: 3% of $12.5B global revenue[21]
$9.77M
Average healthcare data breach cost β€” highest sector, 13th year running[17]
Market Ban
Authorities can order non-compliant AI systems withdrawn from EU market[21]
Section 07

Engagement Model

A progressive engagement ladder, from no-cost diagnostic to ongoing managed security, designed to match Henry Schein's deployment pace and risk profile.

1. AI Exposure QuickScan
Recommended Entry Point
2 weeks
No cost

Produce an exec-ready punch list showing where the AI stack is most exposed to prompt injection and tool abuse in the real workflows named in Henry Schein One's public roadmap.

  • AI inventory (models, agents, tools, data sources)
  • Trust-boundary and ingestion-channel map
  • Guardrails configuration review: tags, coverage, known limitations
  • Top 10 risks ranked by likelihood Γ— impact, mapped to OWASP LLM risks
2. AI Safety & Security Baseline Audit
6–8 weeks
€45k–€90k

Produce audit artefacts that stand up to security leadership scrutiny and create a remediation roadmap aligned to AWS, OWASP, and EU AI Act obligations.

  • Threat modeling workshops with Security, Product, and Engineering
  • Prompt injection (direct + indirect) test suite design
  • RAG / knowledge-base hygiene assessment
  • Tool-use/agency hardening review and least-privilege design
  • Monitoring and logging design recommendations
3. Red-Team Sprint
3–4 weeks
€60k–€140k

Stress-test staging environments across the highest-risk flows: intake β†’ EHR fields, voice β†’ summary β†’ coding suggestions, support β†’ entitlements/actions, RAG β†’ tool calls.

  • Adversarial testing across all AI workflows
  • Multimodal injection tests for image ingestion paths
  • Audio/transcript pipeline tests for voice workflows
  • Guardrail bypass verification
4. Layered Defenses Buildout
8–12+ weeks
€120k–€350k+

Ship the hard controls, not just write a report.

  • AI gateway build (policy enforcement, tagging discipline, tool router)
  • CaMeL-pattern implementation for highest-risk workflows
  • Guardrails integration + tuning + regression harnesses
  • RAG hardening (source allowlists, signing/change control)
  • Monitoring and incident response playbooks
5. Managed AI Security Monitoring
Ongoing
€5k–€25k/month

Keep security controls from decaying as models, prompts, and tools change.

  • Quarterly red-team regression reports and drift detection
  • Guardrail KPI monitoring (block rates, false positives, language coverage)
  • Update advisory when AWS releases new capabilities/limitations
  • EU AI Act compliance tracking and documentation support
Section 08

The Bottom Line

"We stop treating the model as the security boundary, and start treating it as just another untrusted input processor β€” wrapped in controls you can audit."

Next Step

Schedule the AI Exposure QuickScan β€” a 2-week, no-cost diagnostic that produces an actionable risk assessment specific to Henry Schein One's architecture.

Hugo Nguyen β€” hugo@hugonguyen.com
AI Security

Sources & References

  1. Henry Schein One, "Next Era of Dentrix Ascend," Investor Relations PR, Mar 10, 2026
  2. Henry Schein One Γ— AWS Partnership Announcement, Nov 3, 2025
  3. Henry Schein One, GNYDM Press Release, Nov 25, 2025
  4. Henry Schein One, Image Verify Launch, Feb 19, 2026
  5. Cybernews, "Lynx ransomware attacks TriMed/Henry Schein subsidiary," Oct 3, 2025
  6. AWS Bedrock Guardrails β€” Prompt Attack Documentation
  7. AWS Bedrock Guardrails Overview Documentation
  8. AWS ML Blog, "Securing Amazon Bedrock Agents," May 2025
  9. AWS Security Blog, "Safeguard your generative AI workloads from prompt injections," Jan 2025
  10. AWS Industries Blog, Henry Schein One technical architecture, Dec 9, 2025
  11. OWASP Top 10 for LLM Applications 2025
  12. OWASP LLM01: Prompt Injection
  13. OWASP LLM02: Sensitive Information Disclosure
  14. OWASP LLM05: Improper Output Handling
  15. OWASP Top 10 for Agentic Applications 2026
  16. Davies et al., "Boundary Point Jailbreaking of Black-Box LLMs," arXiv:2602.15001, UK AI Security Institute, Feb 2026
  17. Cisco, State of AI Security 2026 Report; IBM Cost of a Data Breach Report 2025
  18. Debenedetti et al., "CaMeL: Defeating Prompt Injections by Design," arXiv:2503.18813, Google DeepMind / ETH Zurich, 2025
  19. AWS AI Security Scoping Matrix β€” Guardrails as "additional protection"
  20. EU AI Act β€” Regulation (EU) 2024/1689: Art. 6, Art. 15, Art. 113 (enforcement timeline)
  21. EU AI Act, Article 99 β€” Penalty structure: up to €15M or 3% of worldwide annual turnover
  22. HIPAA Security Rule Proposed Updates 2025–2026