In 2009, the smartcard industry learned a costly lesson about the gap between what vendors claimed and what their hardware actually delivered. Multiple Common Criteria certified products failed dramatically when independent researchers applied the same side-channel analysis techniques that the certification labs used — but with more realistic attacker models and longer measurement campaigns. The lesson wasn't that Common Criteria was useless — it was that the threat model had to match the actual attack capability, not the theoretical one.

I'm thinking about that history now because the AI chip industry is repeating it at a larger scale and faster pace.

AI accelerators are being deployed in critical infrastructure, medical devices, defense systems, and financial services — contexts where hardware security failures have consequences far beyond data breaches. The certification infrastructure that exists to evaluate this security — primarily Common Criteria (ISO/IEC 15408) for security evaluation and IEC 62443 for industrial control systems — is rarely being used. And when it is used, the evaluation targets are usually narrower than they should be.

What Common Criteria Actually Evaluates (And What It Doesn't)

Common Criteria is a framework for evaluating the security of IT products. An evaluated product is analyzed against a Security Target, typically conformant to a defined Protection Profile — a document that specifies what threats the product must defend against, what security functions it must implement, and what assurance measures must be applied.

The key concept in Common Criteria is the Evaluation Assurance Level (EAL), ranging from EAL1 (functionally tested) to EAL7 (formally verified design and tested). For hardware security products, you typically see EAL4 through EAL5+ for products deployed in sensitive contexts. Critically, an EAL4-certified product is not "more secure" than an EAL3 product in an absolute sense — it's "evaluated more thoroughly against its Protection Profile." The Protection Profile defines the scope.

For AI accelerators, the relevant Protection Profiles are limited. There are existing Protection Profiles for security ICs, smartcards, and cryptographic modules, but a Protection Profile specifically for AI accelerator security — covering model protection, inference integrity, and side-channel resistance — is not standardized. This means that even if a vendor achieves a high EAL rating, the evaluation may not cover the threats most relevant to their deployment context.

This is the core problem: the certification framework can only evaluate against defined criteria, and the criteria for AI hardware security are lagging the threat landscape by years.

IEC 62443 and Its Relevance to AI Accelerators

IEC 62443 is a family of standards for industrial automation and control system security. It was designed for SCADA systems, PLCs, and industrial networks — not AI accelerators. But its concepts are directly applicable.

The standard defines Security Levels (SL) from 0 (no security requirement) to 4 (security against nation-state level attacks with extended resources). It provides a framework for:

  • Security policies and procedures for asset owners (the 62443-2 series)
  • System-level security requirements and security levels (62443-3-3)
  • Component-level security requirements for product suppliers (62443-4-2)

The relevance to AI accelerators is the security level framework. When you deploy an AI accelerator in an industrial control context — say, running inference on a model that controls a manufacturing process — the security requirements should be defined in IEC 62443 terms. What SL does the deployment require? Does the accelerator meet that SL?

This sounds abstract, but it has practical implications. An AI accelerator deployed in a power grid management system has different security requirements than one deployed in a consumer recommendation engine. The IEC 62443 framework gives you a vocabulary for specifying those requirements and evaluating whether hardware meets them.
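To make that concrete, here is a minimal sketch of what a procurement-time check against IEC 62443 security levels might look like, assuming the deployment's target SLs and the vendor's claimed capability SLs have been expressed per the standard's seven foundational requirements (FR1–FR7). The specific SL values and the deployment profile below are illustrative assumptions, not figures from any real evaluation.

```python
# Minimal sketch of an IEC 62443-style capability check (illustrative only).

# The seven foundational requirements (FR1-FR7) defined by IEC 62443:
# identification & authentication control, use control, system integrity,
# data confidentiality, restricted data flow, timely response to events,
# and resource availability.
FOUNDATIONAL_REQUIREMENTS = ["IAC", "UC", "SI", "DC", "RDF", "TRE", "RA"]

def sl_shortfalls(required: dict, achieved: dict) -> list:
    """Return the foundational requirements where the component's claimed
    capability SL falls below the deployment's target SL."""
    return [fr for fr in FOUNDATIONAL_REQUIREMENTS
            if achieved.get(fr, 0) < required.get(fr, 0)]

# Hypothetical example: a grid-control inference deployment targeting SL 3
# for integrity and confidentiality, checked against a vendor-claimed profile.
target_sl  = {"IAC": 2, "UC": 2, "SI": 3, "DC": 3, "RDF": 2, "TRE": 2, "RA": 2}
claimed_sl = {"IAC": 2, "UC": 2, "SI": 2, "DC": 1, "RDF": 2, "TRE": 2, "RA": 2}

print(sl_shortfalls(target_sl, claimed_sl))  # -> ['SI', 'DC']
```

The point of the exercise is not the code; it's that the gap between required and claimed security levels becomes an explicit, auditable artifact of the procurement process rather than an implicit assumption.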

For my own work — particularly the hardware trojan detection research that went into Trusted AI hardware patents — this framework is how you translate academic security properties into procurement requirements. The defense industrial base has been using IEC 62443 for hardware procurement for years; the AI industry is just starting to catch up.

The Side-Channel Gap in Current Certifications

Here's where I need to be direct: most commercially available AI accelerators — those from NVIDIA, AMD, Intel, Google, and Amazon — have not undergone Common Criteria evaluation for side-channel resistance. Their security evaluations, where they exist, focus on software-level isolation, firmware security, and supply chain provenance. Physical side-channel attacks like differential power analysis (DPA) are not part of the standard evaluation scope.

This is not an accident. Side-channel evaluation is expensive, time-consuming, and requires specialized equipment and expertise. For a chip vendor shipping millions of units, certifying against side-channel attacks would be a massive investment with uncertain return. The current market doesn't reward it — buyers aren't asking, and the evaluation criteria don't require it.

The exception is the automotive and defense supply chains, where formal security evaluation requirements do exist and side-channel testing is sometimes specified. This is why FPGAs used in defense applications went through more rigorous evaluation than commercial AI accelerators — the procurement requirements were stricter.

What Certification Would Actually Require for AI Accelerators

For a meaningful AI accelerator security certification, the evaluation would need to cover:

Model confidentiality and integrity: The accelerator must prevent unauthorized extraction of model weights and protect against inference manipulation. This requires hardware-level isolation, secure boot, and memory encryption — areas where some AI accelerators do have security features, though often with gaps.

Side-channel resistance: Physical attacks including power analysis, electromagnetic analysis, and timing attacks. A meaningful evaluation requires a defined threat model (what attacker capabilities are assumed?), a defined attack methodology (what measurements and analysis are in scope?), and a defined resistance threshold (how much leakage is acceptable?).

Supply chain integrity: The hardware must be traceable through its manufacturing and distribution chain to prevent counterfeits or modified components. This connects to the supply chain security work I described in AI Hardware Supply Chain Security.

Secure update and lifecycle management: The accelerator must support secure firmware updates and have a defined end-of-life process for security support.

For side-channel specifically, the evaluation would need to specify:

  • Which side-channel attack classes are in scope (DPA, timing, EM)
  • What measurement conditions are assumed (location, equipment, access)
  • What constitutes a passing result (statistical leakage thresholds)

The research community has proposed methodologies for this — the non-invasive attack test methods referenced by FIPS 140-3 (ISO/IEC 17825), the ISO/IEC 20085 series on side-channel test tooling — but none have been adapted into a Protection Profile for AI hardware.
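As an illustration of what a "statistical leakage threshold" can mean in practice, here is a minimal sketch of a TVLA-style fixed-vs-random leakage test: a Welch's t-test at each sample point of a power trace, with the conventional |t| > 4.5 flag. The trace shapes, synthetic data, and threshold are assumptions for illustration, not a mandated pass/fail criterion from any existing certification.

```python
import numpy as np

def tvla_t_statistic(fixed_traces: np.ndarray, random_traces: np.ndarray) -> np.ndarray:
    """Welch's t-statistic at each sample point between two sets of power traces.

    Both inputs have shape (n_traces, n_samples): one set captured while the
    device processed a fixed input, the other while it processed random inputs.
    """
    mean_f, mean_r = fixed_traces.mean(axis=0), random_traces.mean(axis=0)
    var_f, var_r = fixed_traces.var(axis=0, ddof=1), random_traces.var(axis=0, ddof=1)
    n_f, n_r = fixed_traces.shape[0], random_traces.shape[0]
    return (mean_f - mean_r) / np.sqrt(var_f / n_f + var_r / n_r)

def leaky_samples(fixed_traces, random_traces, threshold=4.5):
    """Indices of sample points exceeding the conventional |t| > 4.5 flag."""
    t = tvla_t_statistic(fixed_traces, random_traces)
    return np.where(np.abs(t) > threshold)[0]

# Illustrative usage with synthetic traces; in a real evaluation these would
# come from an oscilloscope or EM-probe measurement campaign.
rng = np.random.default_rng(0)
fixed = rng.normal(size=(1000, 5000))
random_in = rng.normal(size=(1000, 5000))
print(leaky_samples(fixed, random_in))  # expect few or no flagged points here
```

A certification scheme would still have to fix the harder parameters — how many traces, captured under what access conditions, over which operations — which is exactly the threat-model work that no AI-hardware Protection Profile has done yet.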

The Gap Between What's Required and What's Deployed

The practical situation is this: AI accelerators are being deployed in high-stakes contexts — medical imaging, industrial control, financial services, defense — without formal security certification that covers their actual threat model. The certifications that exist (Common Criteria, IEC 62443) weren't designed for this context, and the threat model for physical attacks on neural network inference is still being defined by researchers.

I spent six years on hardware trojan detection, and the pattern I've seen repeatedly is that formal evaluation requirements come after high-profile security failures, not before them. The smartcard industry got Common Criteria requirements after differential power analysis attacks on deployed payment cards. The industrial control industry got IEC 62443 requirements after Stuxnet. The AI hardware industry will likely get formal certification requirements after a publicly documented attack that demonstrates the gap.

The question for organizations deploying AI accelerators is whether to wait for that moment or to proactively specify security requirements that go beyond what vendors currently offer. For high-value deployments — proprietary models, critical infrastructure, defense contexts — the case for proactive requirements is strong.

A Practical Framework for Evaluating AI Accelerator Security

If you're procuring AI accelerators for a sensitive deployment, here's what to ask vendors:

  1. What physical security testing has been performed? If the answer is "none" or "we rely on software isolation," that's a signal about the gap between claims and reality.

  2. Is side-channel resistance specified in any evaluation report? Look for non-invasive attack testing under FIPS 140-3 (ISO/IEC 17825) or ISO/IEC 20085-based test tooling for cryptographic operations, and ask whether similar methods apply to the model's inference operations.

  3. What's the supply chain assurance model? Hardware attestation — which I covered in detail in Hardware Root-of-Trust in Cloud AI — is the mechanism. Does the accelerator support it?

  4. What security certifications does the hardware have? Common Criteria (which Protection Profile?), IEC 62443 (which SL?), ISO 26262 (for automotive), or other sector-specific certifications.

  5. What is the vulnerability disclosure and patching timeline? Hardware security vulnerabilities often can't be patched in the field — once silicon is deployed, the only mitigation is architectural isolation. Understanding what happens when a vulnerability is discovered is critical.

For the AI industry to develop the certification infrastructure it needs, there needs to be demand from the buyer side — procurement requirements that specify side-channel evaluation, supply chain attestation, and defined security levels. The vendors will respond when the buyers ask. Right now, the buyers mostly aren't asking.