Henry Schein One AI Security Assessment – prepared for the meeting with Martin Busch, Sr. Director Safety & Security EMEA & Brazil
Henry Schein One has moved from AI experimentation to platform-wide deployment at remarkable speed: 3 AI products shipped in 4 months, embedded in an agentic architecture with 700+ API endpoints.
AWS explicitly states that prompt injection defense is the customer's responsibility. Their own documentation, security blog, and ML blog say so, eight separate times.
"The responsibility for preventing vulnerabilities like prompt injection lies with the customer." – AWS Bedrock Documentation
"Guardrails should not be relied upon as the sole defense against prompt injections." – AWS Security Blog, Anna McAbee, Security Specialist Solutions Architect (Jan 2025)
"Unlike SQL injection… prompt injection doesn't have a single remediation solution." – AWS ML Blog, Sr. AI Security Engineers at Amazon (May 2025)
| Capability | Guardrails Covers | Gap |
|---|---|---|
| Content filtering (hate, violence, etc.) | ✓ | None |
| PII/PHI redaction | ✓ | None |
| Prompt attack detection (jailbreak/injection) | ✓ | Requires correct input tagging: "if there are no tags, prompt attacks will not be filtered"[6] |
| Contextual grounding checks | ✓ | Conversational QA / chatbot use cases NOT supported[7] |
| Tool input/output scanning | ✗ | Agent tool I/O not passed through Guardrails by default[8] |
| Reasoning content blocks | ✗ | Explicitly excluded from scanning[7] |
| Automated reasoning – injection protection | ✗ | Validates "as-is"; provides "No prompt injection protection"[7] |
| System prompt absolute control | ✗ | "Don't provide absolute control, model may still deviate"[9] |
OWASP Top 10 for LLM Applications 2025 – the global security standard created by 600+ experts across 18 countries, mapped to Henry Schein's specific workflows.[11]
Frontier AI safety defenses are being systematically bypassed. UK government researchers demonstrated automated attacks against the strongest commercial safeguards, at trivial cost.
"44% of hallucinations were clinically major" – even at the "low" 1.47% rate, at Henry Schein's scale (3M AI-powered charts), this translates to ~19,400 clinically major hallucinations per year. – Asgari et al., npj Digital Medicine 2025
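The headline figure can be reproduced directly from the two published rates:

```python
charts_per_year = 3_000_000       # Henry Schein scale cited above
hallucination_rate = 0.0147       # the "low" 1.47% rate (Asgari et al.)
major_fraction = 0.44             # share of hallucinations that were clinically major

major_per_year = charts_per_year * hallucination_rate * major_fraction
print(round(major_per_year))      # ~19,400 clinically major hallucinations per year
```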
ECRI ranked AI chatbot misuse as the #1 health technology hazard for 2026. – ECRI, January 2026
CaMeL (CApabilities for MachinE Learning) provides provable security guarantees against prompt injection – the centerpiece of a 6-layer defense architecture.[18]
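A minimal sketch of the capability idea behind CaMeL (this is an illustration of the principle, not the authors' implementation): values derived from untrusted content carry a tag, and the tool layer refuses to let tainted values flow into security-critical arguments, regardless of what the model asks for. The `send_email` tool and the addresses are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Value:
    data: str
    trusted: bool  # capability tag: set by the user/policy, or derived from untrusted content?

def send_email(recipient: Value, body: Value) -> str:
    # Policy enforced in code, not in the prompt: the recipient must be a
    # trusted value. Untrusted document content may appear in the body,
    # but it can never choose where the mail goes.
    if not recipient.trusted:
        raise PermissionError("recipient derived from untrusted data")
    return f"sent to {recipient.data}"

user_choice = Value("bob@example.com", trusted=True)          # picked by the user
from_document = Value("attacker@evil.example", trusted=False)  # e.g. injected via RAG
```

The guarantee comes from the interpreter-level check, so a successful prompt injection can still only act within the capabilities its tainted data carries.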
The EU AI Act makes adversarial robustness a legal requirement for high-risk AI systems, and names the specific attack categories in binding law.
"High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities. The technical solutions shall include measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws." – EU AI Act, Article 15(5). Adversarial robustness is now law.
A progressive engagement ladder, from no-cost diagnostic to ongoing managed security, designed to match Henry Schein's deployment pace and risk profile.
Produce an exec-ready punch list showing where the AI stack is most exposed to prompt injection and tool abuse in the real workflows named in Henry Schein One's public roadmap.
Produce audit artefacts that stand up to security leadership scrutiny and create a remediation roadmap aligned to AWS, OWASP, and EU AI Act obligations.
Stress-test staging environments across the highest-risk flows: intake → EHR fields, voice → summary → coding suggestions, support → entitlements/actions, RAG → tool calls.
Ship the hard controls, not just a report.
Keep security controls from decaying as models, prompts, and tools change.
"We stop treating the model as the security boundary, and start treating it as just another untrusted input processor, wrapped in controls you can audit."
Schedule the AI Exposure QuickScan: a 2-week, no-cost diagnostic that produces an actionable risk assessment specific to Henry Schein One's architecture.
Hugo Nguyen β hugo@hugonguyen.com
AI Security