AI in HR: The Hype vs. The Reality

Most vendors will tell you AI is revolutionizing people management. Here's an honest, evidence-based breakdown of what actually works — and what's mostly marketing.
Lisa Ann Carr, CCP®, M.Ed, CPC | April 2026 | 8 min read
Let me say something that might be unpopular in a room full of HR tech vendors: most AI implementations in HR are not delivering what they promised.
That doesn't mean AI has no place in our function. It has a significant one. But the gap between the pitch deck and the reality — between what the software promises and what HR teams actually experience — is wide enough to drive a benefits bus through.
I've spent several years in HR and Total Rewards, working across US and Canadian jurisdictions, managing everything from benefits renewals to compensation architecture. I've also built AI-powered HR tools from scratch. So when I say I understand both sides of this conversation, I mean it.
Here's my honest breakdown of AI in HR — claim by claim.
Recruiting: where AI scores its clearest wins — and its worst misses
“AI eliminates bias in hiring”
⚠ MOSTLY HYPE
AI doesn't eliminate bias — it relocates it. The model learns from historical hiring data, which already encodes the biases of whoever made past decisions. Amazon scrapped its internal AI recruiting tool in 2018 after discovering it systematically downgraded resumes from women. The lesson wasn't unique to Amazon. It was a preview of the systemic problem.
If your historical promotions skewed male, senior, or from certain university networks, your AI-powered screening tool will learn to replicate exactly that pattern — efficiently and at scale. The bias doesn't disappear. It gets automated.
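The mechanism is easy to demonstrate. Below is a toy screener in Python that "learns" from a handful of invented past-hire records by counting shared resume keywords. Every name and number is made up, and no real screening tool is this crude, but the failure mode is the same: proxy features of the historical in-group dominate the score.

```python
from collections import Counter

# Toy illustration of bias replication: "train" a screener on past hiring
# outcomes by counting which resume keywords past hires shared, then score
# new candidates by keyword overlap. All data here is invented.
past_hires = [
    {"keywords": {"university_a", "golf", "python"}},
    {"keywords": {"university_a", "python", "sql"}},
    {"keywords": {"university_a", "golf", "sql"}},
]
weights = Counter(k for h in past_hires for k in h["keywords"])

def score(resume_keywords: set) -> int:
    """Sum the historical frequency of each keyword on the resume."""
    return sum(weights[k] for k in resume_keywords)

# Two equally qualified candidates; one lacks the historical in-group marker.
insider = {"python", "sql", "university_a"}
outsider = {"python", "sql", "university_b"}
print(score(insider), score(outsider))  # the proxy feature dominates
```

Nothing in the training data says "prefer university A" — the preference is inferred from who was hired before, which is exactly how the bias gets automated.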
“AI dramatically speeds up high-volume screening”
✅ LARGELY TRUE
This one holds up well. ATS platforms with AI resume parsing genuinely cut screening time by 50–75% for roles receiving hundreds of applications. The ROI is measurable, especially for hourly, early-career, and volume-hire positions where human review of every submission isn't realistic.
The key caveat: speed is only valuable if the criteria being applied are sound. Garbage in, garbage out — just faster.
“AI video interviews can predict job performance”
⚠ MOSTLY HYPE — AND GROWING LEGAL EXPOSURE
Some vendors claim their platforms can assess candidate suitability by analyzing tone of voice, facial expressions, and word choice. The peer-reviewed evidence is thin. Illinois' Artificial Intelligence Video Interview Act, in effect since January 2020, requires disclosure and consent. New York City's Local Law 144 requires bias audits of automated employment decision tools. Regulatory risk is rising fast — and HR will own the exposure.
Performance management: the measurement trap
“AI can flag early flight-risk signals”
✅ LARGELY TRUE — WITH IMPORTANT CONDITIONS
People analytics platforms can identify patterns — tenure, engagement scores, promotion lag, manager tenure — that correlate with attrition. When HR acts on the insight, retention outcomes improve. The model is only as good as the quality of data feeding it. And "acting on the insight" still requires a skilled HR professional with relationship capital and context.
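As a rough illustration of what "identifying patterns" means in practice, here is a minimal logistic scoring sketch in Python. The features echo the signals mentioned above, but the feature names, weights, and bias term are all invented for demonstration; a real platform would fit them from historical workforce data.

```python
import math

# Illustrative flight-risk score: a logistic function over a few of the
# signals discussed above. Weights are hypothetical, not fitted values.
WEIGHTS = {
    "years_since_promotion": 0.55,   # promotion lag raises risk
    "engagement_score": -0.80,       # higher engagement (1-5) lowers risk
    "manager_tenure_years": -0.15,   # a stable manager slightly lowers risk
}
BIAS = 1.0

def flight_risk(employee: dict) -> float:
    """Return an attrition probability in [0, 1]."""
    z = BIAS + sum(WEIGHTS[k] * employee[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

at_risk = {"years_since_promotion": 4, "engagement_score": 2.1, "manager_tenure_years": 0.5}
settled = {"years_since_promotion": 1, "engagement_score": 4.5, "manager_tenure_years": 6}
print(round(flight_risk(at_risk), 2), round(flight_risk(settled), 2))
```

The model only ranks people; deciding what conversation to have with the person at the top of the list is the part that still belongs to HR.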
“AI can objectively evaluate employee performance”
⚠ MOSTLY HYPE
Productivity monitoring tools that track keystrokes, email volume, Zoom camera-on time, and active screen time capture activity — not impact. The highest performers on many teams do their best work in deep, focused blocks that look invisible to an activity monitor. These tools generate serious employee relations risk when applied broadly, and in Canada they can trigger privacy and disclosure obligations under PIPEDA or substantially similar provincial legislation.
“The most productive people on your team may score the worst on an AI activity monitor. That should tell you something about the limits of the tool.”
Compensation: where I spend most of my time, and where AI is genuinely useful
“AI can accelerate market pricing and benchmarking”
✅ LARGELY TRUE
What used to take weeks of survey analysis can now take hours. Job matching, percentile calculations, and band modelling are genuinely faster with modern compensation tools. The interpretation — what to do with the data in your specific organizational context, with your specific talent strategy and budget constraints — still requires a skilled comp professional. The tool accelerates the work. It doesn't replace the judgment.
This is exactly the gap I built CompAlchemist's navigator suite to address. The tools do the heavy lifting — the math, the logic, the compliance cross-referencing — and leave the decision-making in experienced hands.
COMPALCHEMIST TOOL 🧭 Job Levelling Navigator — US & Canadian Editions
AI-powered job levelling, pay band modelling, and pay transparency readiness — built by a CCP® with 15+ years in North American total rewards. Available in Individual and Consultant editions.
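For readers who want to see the percentile mechanics behind market pricing, here is a minimal Python sketch. The survey figures are invented, and the ±20% band spread is an illustrative assumption — not a recommendation, and not a description of how any particular tool works.

```python
import statistics

# Hypothetical survey matches for one benchmark job (annual base salaries).
market_data = [78_000, 82_500, 85_000, 88_000, 91_000, 95_500, 99_000, 104_000, 112_000]

# Market reference points: P25 / P50 / P75, the percentiles comp teams
# most often anchor bands to. quantiles(n=4) returns the three cut points.
p25, p50, p75 = statistics.quantiles(market_data, n=4)

# A simple band model: midpoint at the market median, +/-20% spread.
# The spread is an illustrative assumption only.
midpoint = p50
band_min = round(midpoint * 0.80)
band_max = round(midpoint * 1.20)
print(f"P25 {p25:,.0f} | P50 {p50:,.0f} | P75 {p75:,.0f}")
print(f"Band: {band_min:,} - {midpoint:,.0f} - {band_max:,}")
```

The arithmetic is the easy part — which survey matches are valid, which percentile to anchor to, and how wide the band should be for your structure are the judgment calls the paragraph above is about.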
“AI will automate pay equity analysis”
🔭 EMERGING — WATCH THIS SPACE
Regression-based pay equity tools are becoming more accessible. But legal defensibility, jurisdiction-specific rules — Ontario's Pay Equity Act, BC's incoming pay transparency requirements, the EU Pay Transparency Directive — and the difference between statistical significance and practical equity require human judgment. AI is a powerful co-pilot in pay equity work. It is not the pilot.
“AI-generated job descriptions eliminate gender bias in pay”
⚠ MOSTLY HYPE — USEFUL BUT MISUNDERSTOOD
AI absolutely can flag gendered language in job descriptions — “ninja,” “rockstar,” “aggressive growth targets.” That's genuinely useful. But JD language is downstream of pay decisions, not upstream. Rewriting a job description doesn't fix a compensation structure built on undervalued job families or a flawed job evaluation methodology.
Employee wellbeing: the most personal — and most sensitive — application
“AI chatbots can support employee mental health”
PARTIALLY TRUE — HANDLE WITH CARE
Tools that triage employees to appropriate mental health resources faster are genuinely valuable. Reducing EAP access friction matters. But an AI chatbot is not a therapist. Employees with serious mental health needs who interact with an AI first — and receive delayed or misrouted care — are at real risk. Duty to accommodate obligations in Canada (under provincial human rights codes) don't pause because the first touchpoint was automated.
COMPALCHEMIST TOOL ⚖️ Canadian Addiction & Recovery Navigator
Navigate duty to accommodate, duty to inquire, EI Sickness Benefits, PIPEDA privacy obligations, and Cannabis Act accommodation requirements — all in one interactive tool built for Canadian HR professionals.
“AI-powered benefits navigation reduces employee frustration”
✅ LARGELY TRUE — ONE OF THE CLEANER WINS
Natural language benefits chatbots genuinely reduce HR admin load and help employees find the right plan or program faster. ROI is measurable through call deflection rates and benefits utilization data. This is AI doing what it does best: answering known questions at scale, freeing HR professionals to handle the edge cases and complex situations that require real judgment.
The regulatory arc is the most important story in HR tech right now
The EU AI Act, NYC Local Law 144, Illinois' AI Video Interview Act, and Canada's proposed Artificial Intelligence and Data Act (AIDA) are all moving toward mandatory auditing, disclosure, and in some cases employee consent for AI used in people decisions.
This is not a future concern. It is an active compliance risk in 2026.
HR is the function that will own this exposure — not IT, not Legal alone. If you're implementing AI tools that touch hiring, performance, or termination decisions, you need to know what legislation applies in your jurisdiction, what disclosure obligations you have, and whether your vendor has undergone a third-party bias audit.
COMPALCHEMIST TOOL 📋 Leave Navigator — US & Canadian Editions
Jurisdiction-specific leave entitlements, accommodation obligations, and compliance checkpoints across federal and provincial/state legislation — built at full parity for both US and Canadian HR professionals.
What this actually means for HR professionals
Four things to do differently starting now:
Treat AI as a speed and scale tool — not a decision-making tool. Use it to process volume faster. Keep humans in the loop for anything with legal, equity, or individual impact.
Ask vendors the hard questions: Who audits the model? How often? What's the bias testing methodology? Can they show you peer-reviewed outcomes data — not just client case studies they curated themselves?
Know your regulatory exposure. If your AI tools touch hiring, performance, or benefits decisions, identify which legislation applies in your jurisdiction today — not when a complaint arrives.
Reinvest the time AI saves. The biggest ROI from AI in HR isn't replacing judgment. It's freeing you to apply your judgment where it matters most — complex accommodation cases, strategic comp decisions, and employee conversations that no algorithm can replicate.
AI didn't make expert HR judgment obsolete. It made it more valuable. The practitioners who understand both the tool and the limits of the tool will be the ones HR leaders trust most in the years ahead.
That's what CompAlchemist is built on: tools that are expert-built and AI-powered — so that the human using them can be unmistakably human where it counts.
About the author
Lisa Ann Carr, CCP® · M.Ed · CPC
Director of People Operations | Founder, CompAlchemist
Lisa brings cross-border US and Canadian HR and Total Rewards experience. As founder of CompAlchemist, she builds AI-powered HR tools that help practitioners work smarter without losing the human judgment that matters most.
Expert-built. AI-powered. Unmistakably human.