Glossary

AI hiring that explains itself.

A definition of the category CertAIn is building — and how to tell it apart from score-based or chat-based AI hiring products.

Definition

What it is.

AI hiring that explains itself is a category of AI recruiting tools whose signature output is a paragraph of defensible reasoning per candidate: written by the AI, attached to the candidate record, exportable for audit, and usable by the recruiter to defend the hiring decision to a hiring manager, end client, candidate, or regulator.

Two architectural commitments

  1. Reasoning is the primary output. Not a score with a tooltip. Not a chat reply that disappears at the end of a session. A written paragraph, versioned, logged, and attached to the candidate record.
  2. Every output is a recommendation to a human. Never an automated decision. The architecture cannot be configured to make a hiring choice without human intervention, a load-bearing constraint for NYC Local Law 144 (AEDT), Illinois AIVIA, and EU AI Act Annex III.
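The two commitments above can be sketched as a data model: an append-only, versioned reasoning log, and a decision field that only an identified human can set. This is a minimal illustrative sketch, not CertAIn's actual schema; all names and fields here are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

# Hypothetical sketch: class and field names are illustrative assumptions,
# not CertAIn's real data model.

@dataclass(frozen=True)
class ReasoningEntry:
    """One AI-written paragraph of reasoning: versioned, timestamped, exportable."""
    version: int
    written_at: datetime
    text: str

@dataclass
class CandidateRecord:
    candidate_id: str
    reasoning_log: List[ReasoningEntry] = field(default_factory=list)
    human_decision: Optional[str] = None  # only ever set via record_decision()

    def add_reasoning(self, text: str) -> ReasoningEntry:
        # Commitment 1: reasoning is appended and versioned, never overwritten.
        entry = ReasoningEntry(
            version=len(self.reasoning_log) + 1,
            written_at=datetime.now(timezone.utc),
            text=text,
        )
        self.reasoning_log.append(entry)
        return entry

    def record_decision(self, decision: str, decided_by: str) -> None:
        # Commitment 2: the AI only recommends; a hiring choice requires
        # an identified human, so it cannot be automated away.
        if not decided_by:
            raise ValueError("a hiring decision requires a named human")
        self.human_decision = decision
```

The frozen reasoning entries and the `decided_by` check are what make the log audit-friendly: reasoning versions accumulate rather than overwrite, and no code path records a decision without a human attached.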

What it is not

  • A match-score UI with a sentence of justification underneath.
  • A chat window that answers questions about candidates but doesn’t persist the reasoning.
  • A ranking engine whose output is the position number, not the reasoning.
  • An auto-rejector that filters candidates before a human sees them.

Why the category matters in 2026

Three regulatory regimes are live for AI in hiring: NYC Local Law 144 (AEDT) requires bias audits of automated employment decision tools; Illinois AIVIA regulates AI use in hiring decisions affecting Illinois candidates; EU AI Act Annex III classifies AI systems used in recruitment as high-risk. Every one of them reduces to a single operational question: can you show your work? A product category organized around producing readable reasoning is the category that answers that question — and the one enterprise buyers, general counsels, and candidates increasingly expect.


See the category in practice. The trial is free for 30 days; no card required.