
AEDT compliance without a rebuild.

NYC Local Law 144, Illinois AIVIA, and EU AI Act Annex III — translated. What your stack actually needs, and what it doesn't.

By Jon Senger · Founder, CertAIn · April 2026 · 5-minute read

The setting

Three jurisdictions have now written three different laws regulating AI in hiring. If you hire in any of them — and if you hire at medium or larger scale in the United States or Europe, you probably do — your stack has to account for all three. Most recruiting teams we talk to are aware of this, and most are handling it via contract addendums, not architecture.

This post is about what changes when you approach it the other way around. Architecture first, paperwork second.

The three frameworks, in one paragraph each

### NYC AEDT — Local Law 144

Effective July 2023. Covers automated employment decision tools used for candidates applying to jobs in New York City. Three core requirements: a bias audit conducted by an independent auditor within the last year before use, public availability of a summary of that audit, and a disclosure to candidates that an AEDT is being used. Penalties are per violation, per day, and they add up fast. The auditable unit is the AEDT output itself.

### Illinois AIVIA (plus expanded provisions)

The original Artificial Intelligence Video Interview Act went into effect in 2020 for AI analyzing video interviews; expanded AI-in-hiring provisions have followed. The core requirements that matter for a recruiting stack: notify candidates that AI will be used in evaluating their application, explain (in plain language) what the AI is doing, obtain consent, and provide the ability to request that the evaluation not be shared beyond necessary reviewers. The auditable unit is the candidate-facing notice and consent trail.

### EU AI Act — Annex III, high-risk

The EU AI Act classifies AI systems used in the recruitment or selection of natural persons as high-risk (Annex III, point 4). High-risk systems must demonstrate: human oversight, risk management documentation, logging sufficient for post-hoc investigation, transparency toward the end user, and conformity assessment before market placement. The auditable unit is the architecture itself — not a single output.

What these three laws actually have in common

Three different jurisdictions, three different statutes, but the substantive requirements collapse to a short list:

1. No automated decisioning. A human must make the hiring decision. The AI can recommend; it cannot decide.
2. A record of what the AI did, attributable to an individual candidate and a specific job. Not a log blob — something a regulator or an auditor can actually read.
3. A candidate-facing notice explaining that AI is involved, in plain language.
4. Bias monitoring, with some form of structured data that an auditor can analyze.
5. The ability to explain — or at least document — the AI's reasoning, in response to a candidate challenge or a regulatory inquiry.

That's the common substrate. Everything else is jurisdictional flavor — annual cadence, specific disclosure language, auditor independence requirements.

The "addendum" approach, and why it gets fragile

Most recruiting teams today handle these requirements via contracts and manual processes: a DPA addendum with the vendor, a candidate consent paragraph added to the application flow, a quarterly manual export of "AI outputs" to Excel for spot-checking. That works — until it doesn't.

The addendum approach gets fragile in a few predictable places:

  • The human-oversight claim can't be verified. If the architecture permits an AI output to become a rejection without human review — even as a default configurable by the admin — then the claim that a human decided is aspirational, not architectural. A regulator will ask for evidence. "Our terms say we don't auto-reject" is not evidence.
  • The log data isn't what an auditor wants. Ad-hoc logging written to application.log, indexed by a log aggregator, isn't the same thing as a structured, queryable record of every AI output by candidate × JD. The bias-audit export that AEDT requires has a specific shape that free-form logs can't supply without cleanup.
  • Disclosure language drifts. A candidate-facing notice in the application flow can be edited by a marketer next quarter without legal review, and now it doesn't match the AIVIA requirement. A tenant-editable template with version control catches this.
  • The answer to "why was this candidate rejected" is a vendor black box. If the AI vendor's output is a score with no reasoning, the answer to a candidate's or a regulator's "why" is "proprietary model" — which is not an acceptable answer in any of the three regimes.

The rebuild everyone fears when reading the legislation cold is what happens when those fragility points fail at once and a company has to retrofit a compliance architecture onto a product that wasn't designed for it. That's the expensive version of this.

The architectural version

The cheap version is to build the substrate once, correctly, and let the jurisdictional flavor attach to it via configuration rather than redesign.

Here's what that looks like, in practical terms. (CertAIn is built this way; other vendors can be too. The architectural shape is not proprietary — it's just engineering discipline.)

1. Codify human oversight as a constraint, not a policy. No code path in the product can turn an AI output into a final hiring decision without a logged human action. The tenant admin can't flip a switch to "auto-reject below threshold" because the switch doesn't exist. CertAIn exposes this at Settings → Compliance → AI Oversight as a read-only, exportable page — the document a GC can hand to an auditor.

2. Log every AI output to an append-only structured table. Not application logs — a database table with tenant, user, candidate, JD, action type, model version, and timestamp. Append-only, indexed, queryable. The AEDT bias audit data export reads from that table. So does any regulatory inquiry.

3. Make the candidate notice a tenant-editable template with versioning. Tenants' legal teams own the language; the product ships a sensible default; changes are versioned so a past candidate always knows which version of the notice they saw. CertAIn stores this in a dedicated column (tenants.ai_disclosure_template) and surfaces it in the candidate-facing flow and as a downloadable PDF.

4. Produce reasoning, not scores. If the AI output is a paragraph of reasoning per candidate — grounded in the resume, naming gaps, proposing next steps — then the answer to "why was this candidate ranked where they were ranked" is the output itself. A score forces you to construct an after-the-fact explanation; reasoning is already the explanation.

5. Ship the bias audit export as a standard feature, not a paid add-on. Structured CSV of AI action outcomes by candidate × JD, demographic-free by design, ready for an independent auditor. CertAIn's bias_audit export does this through the same async pipeline as the portability export — tenants download it, hand it to their auditor, done.
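To make items 2 and 5 concrete, here is a minimal sketch of the append-only AI-action table and the bias-audit export that reads from it. Every name here (`ai_actions`, the column set, `log_ai_action`, `bias_audit_csv`) is an illustrative assumption, not CertAIn's actual schema, and SQLite stands in for whatever relational store your stack uses.

```python
import csv
import io
import sqlite3

# Illustrative schema for item 2: an append-only, structured record of AI outputs.
# Table and column names are hypothetical, not CertAIn's actual schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ai_actions (
        id            INTEGER PRIMARY KEY AUTOINCREMENT,
        tenant_id     TEXT NOT NULL,
        user_id       TEXT NOT NULL,
        candidate_id  TEXT NOT NULL,
        jd_id         TEXT NOT NULL,
        action_type   TEXT NOT NULL,   -- e.g. 'rank', 'screen', 'summarize'
        model_version TEXT NOT NULL,
        output        TEXT NOT NULL,   -- the reasoning paragraph, not a bare score
        created_at    TEXT NOT NULL DEFAULT (datetime('now'))
    )
""")
# Enforce append-only at the database layer: UPDATE and DELETE are rejected.
conn.execute("""
    CREATE TRIGGER ai_actions_no_update BEFORE UPDATE ON ai_actions
    BEGIN SELECT RAISE(ABORT, 'ai_actions is append-only'); END
""")
conn.execute("""
    CREATE TRIGGER ai_actions_no_delete BEFORE DELETE ON ai_actions
    BEGIN SELECT RAISE(ABORT, 'ai_actions is append-only'); END
""")

def log_ai_action(tenant, user, candidate, jd, action, model, output):
    """Record one AI output, attributable to a candidate and a JD."""
    conn.execute(
        "INSERT INTO ai_actions (tenant_id, user_id, candidate_id, jd_id,"
        " action_type, model_version, output) VALUES (?, ?, ?, ?, ?, ?, ?)",
        (tenant, user, candidate, jd, action, model, output),
    )

def bias_audit_csv(tenant):
    """Item 5: a structured, demographic-free export by candidate x JD."""
    rows = conn.execute(
        "SELECT candidate_id, jd_id, action_type, model_version, created_at"
        " FROM ai_actions WHERE tenant_id = ? ORDER BY created_at",
        (tenant,),
    ).fetchall()
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["candidate_id", "jd_id", "action_type",
                     "model_version", "created_at"])
    writer.writerows(rows)
    return buf.getvalue()
```

The triggers are the design choice worth copying: append-only is enforced by the database itself, not by convention, so the audit trail a regulator reads cannot have been quietly edited after the fact.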

That's the five-item architectural checklist. If your current stack has all five, your compliance posture for all three regimes is largely an exercise in paperwork. If it's missing two or three, the posture is fragile and the paperwork is covering for that — which is where most teams are today.
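The first item on that checklist, oversight as a constraint rather than a policy, can be sketched as a type-level invariant. This is an illustrative sketch under assumed names (`AIRecommendation`, `HumanDecision`, `record_decision`), not CertAIn's actual API; the point is that the only constructor of a final decision demands a human reviewer, so "auto-reject below threshold" is unrepresentable rather than merely disabled.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIRecommendation:
    """What the AI is allowed to produce: a recommendation, never a decision."""
    candidate_id: str
    jd_id: str
    reasoning: str          # a paragraph of reasoning, not a bare score

@dataclass(frozen=True)
class HumanDecision:
    """The only decision type in the system. Note the mandatory reviewer."""
    candidate_id: str
    jd_id: str
    reviewer_id: str        # the logged human action a regulator asks about
    outcome: str            # 'advance' or 'reject'
    decided_at: str

def record_decision(rec: AIRecommendation,
                    reviewer_id: str,
                    outcome: str) -> HumanDecision:
    """Turn a recommendation into a decision. There is deliberately no
    variant that accepts an AIRecommendation alone: no reviewer, no decision."""
    if outcome not in ("advance", "reject"):
        raise ValueError(f"unknown outcome: {outcome!r}")
    return HumanDecision(
        candidate_id=rec.candidate_id,
        jd_id=rec.jd_id,
        reviewer_id=reviewer_id,
        outcome=outcome,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )
```

Because the constraint lives in the type signature, the evidence a GC hands an auditor is the code path itself, not a policy document claiming the switch is turned off.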

What this doesn't replace

Architecture doesn't replace:

  • Your independent bias auditor. AEDT requires one. We can hand them the data; we can't be them.
  • Your GC's review of the specific regulations. The above is a practitioner's summary, not legal advice. Your jurisdiction, your facts, your counsel.
  • Your own governance program. The architecture makes compliance cheaper to operate, not optional.

The short version

The three regulatory regimes — AEDT, AIVIA, EU AI Act — share a common substrate that can be built once, correctly, into the architecture of a recruiting tool. Vendors that didn't do this are covering the gap with contract addendums and manual processes. That works until it doesn't. If you're scoping an AI recruiting tool right now, the "compliance architecture" section of the evaluation deserves as much weight as the feature list — because retrofitting it later is the thing nobody wants to do.

CertAIn was built around this substrate from the first line of the spec. If that matters to your evaluation, we'll walk through it with your GC.


Take CertAIn for a run on a real JD.