“It told me I was AI.”
That’s a line one U.S. job seeker wrote into our open-ended survey question about the strangest experience they’d had with hiring software. They weren’t a bot. They were a person applying for a job, evaluated by an algorithm that decided they didn’t seem human enough.
A few years ago, that would have been an outlier. It isn’t anymore.
In February 2022, SHRM put the share of U.S. organizations using AI or automation anywhere in HR at roughly one in four.
By October 2024, ResumeBuilder’s survey of 948 business leaders had it at 51% in hiring specifically—82% in resume screening, with 67% of those companies admitting the tools could introduce bias.
HireVue alone ran more than 20 million one-way video interviews in the first quarter of 2024. Adoption isn’t the question anymore. What that adoption looks like from the candidate side is.
In April 2026, Enhancv surveyed 1,066 U.S. job seekers about the receiving end of AI hiring. Half of them (50.5%) had been rejected at least once in the past year without a single word from a human. Of that group, 63.8% said they believed a machine made the call. Only 9.7% of the full sample said an employer had ever clearly told them AI was involved.
Everyone else was guessing. Most still are.
The data below is what that silence looks like at scale. One in three candidates has walked away from a job rather than sit through a one-way AI interview. Nearly half are using AI of their own to handle the ones they didn’t walk away from. The rest are guessing—at the rules, at the rubric, at what the machine wants from them. Most never had a way to know.
Key takeaways
- 50.5% of U.S. job seekers got at least one rejection with zero human feedback in the past year. Among that group, 63.8% believe an AI was responsible for the decision.
- 68.5% say AI was never disclosed to them. Only 9.7% were clearly told. 84.7% of candidates are operating in the dark.
- 31.4% have abandoned a job application or declined an interview specifically because of a one-way AI video or chatbot screening.
- 79.1% of those abandoned roles paid under $100k. AI gating drives off lower-paid workers fastest.
- 47.7% of candidates agree AI hiring tools are biased against their age, race, gender, or background. Only 25.8% disagree—agreement leads disagreement nearly 2:1.
- Neurodivergent candidates feel the bias more sharply: 53.4% net agree (vs. 45.8% non-ND), and 18.5% strongly agree (vs. 12.7%).
- 49.6% of candidates have used AI to “optimize” their automated interview—5.6% to feed them answers live during the interview itself.
- Counter-intuitive: candidates told about AI “in the fine print” agree it’s biased at 65%, nearly double the 35.8% rate among candidates who don’t know AI was used at all. Half-disclosure produces more distrust than none.
Let’s take a closer look at the data.
Are job rejections going silent in 2026?
Hiring used to come with a sentence. A “We went with another candidate.” A phone call. A form letter. Even an automated email signed with a recruiter’s name. The wording varied, but it was confirmation that a person, somewhere, had read the application.
Most candidates aren’t getting that anymore.
Have you received a job rejection with zero human feedback in the past 12 months?
| Response | Response share |
|---|---|
| Yes | 50.5% |
| No | 40.8% |
| Maybe | 6.6% |
| Not sure | 2.2% |
More than half of the respondents have been rejected without a single word from a human in the past year. Add the “Maybe” bucket and the share rises to 57.1%.
It doesn’t fall on everyone equally. Younger candidates are roughly twice as likely as those over 55 to be on the receiving end of it. This suggests that AI gating concentrates at the entry and early-career tiers where applicant volume—and resume parsing—runs hottest.
Here’s the silent rejection rate by age group:
- 25–34 (Late Gen Z + Millennial): 62.2%
- 18–24 (Gen Z): 61.5%
- 35–44 (Millennial mid-career): 47.9%
- 45–54 (Gen X): 40.7%
- 55+ (Boomer): 32%
The pattern repeats by industry. Functions with high applicant volume and structured intake processes, like product management (80.0%), consulting (73.3%), finance (59.4%), customer support (59.1%), sales (57.6%), and software engineering (53.3%), show the heaviest silent-rejection rates.
Healthcare, where licensure and credentialing still pull humans into the loop, sits lowest of the top ten at 44.3%.
Who’s responsible—the human or the machine?
When candidates are asked who they think made the call, the algorithm wins by a wide margin.
Of the 538 candidates who actually received a no-human-feedback rejection, 63.8% blamed an AI. Only 13.9% thought a person was behind it. That’s a 4.6-to-1 split blaming the algorithm.
22.3% say they have no way of knowing how they got rejected. When candidates can’t tell whether a human or a system rejected them, they tend to assume the worst. And once that assumption is in place, every rejection that follows lands on the algorithm by default—whether the algorithm earned it or not.
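For readers who want to check the arithmetic behind those figures, here is a minimal sketch. The 1,066 sample size and the percentages come straight from the survey; the rounding logic is ours:

```python
# Verify the figures above from the reported survey numbers.
n_total = 1066                       # full sample
n_silent = round(n_total * 0.505)    # 50.5% got a no-feedback rejection
print(n_silent)                      # → 538 respondents

blame_ai, blame_human = 63.8, 13.9   # % of that group blaming AI vs. a person
print(round(blame_ai / blame_human, 1))  # → 4.6, the "4.6-to-1" split
```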
Only 1 in 10 candidates were told an AI was involved
In theory, the disclosure problem is solved. New York City’s AEDT law, Illinois and Maryland’s video-interview rules, the EU AI Act—all of them require some version of telling the candidate when an algorithm is in the loop.
The candidates we surveyed don’t seem to be hearing it.
Did the company explicitly disclose that AI was used to evaluate your application?
| Disclosure status | Response share |
|---|---|
| No | 68.5% |
| I don’t know | 16.2% |
| Yes—clearly stated | 9.7% |
| Yes—in the fine print | 5.6% |
Less than 10% of candidates were clearly told an AI was evaluating them. Combined with the 16.2% who don’t know, 84.7% of applicants are operating without a basic answer to the question of who—or what—is reading their resume.
That changes the math of every rejection that follows. A candidate who doesn’t know AI was used can’t ask why it scored them the way it did. They can’t request human review. They can’t even tell whether the disclosure law on the books in their state was followed. The black box stays black.
Why nearly a third of candidates are walking away from AI interviews
Some candidates have stopped waiting to be rejected. Nearly a third (31.4%) told us they’ve walked away from a job altogether rather than sit through a one-way AI video or chatbot screening.
335 of 1,066 chose to give up the role rather than perform for the camera with no one on the other side. And it isn’t the candidates with the most options walking away the fastest. It’s the candidates with the fewest.
The under-$100k middle is absorbing the AI gate
Of the 658 respondents who told us what the abandoned role paid, 79.1% were under-$100k jobs.
Salary range of abandoned role
| Salary band | Share of abandoned roles |
|---|---|
| Under $50k | 38.4% |
| $50k – $100k | 40.7% |
| $100k – $200k | 18.2% |
| $200k+ | 2.6% |
Cut by personal income, the gap gets sharper. People earning over $200k abandon AI interviews at just 17.5%—about half the rate of every other band. Candidates with the most leverage rarely face the one-way AI gate. The ones with the least face it the most, and walk anyway.
Younger candidates walk away faster, too: 36% of 18–24s and 35% of 25–34s have given up at least one AI screening, against 21% of those 55 and over.
Put the income and age cuts together, and you get a hiring funnel that’s losing exactly the demographic most employers say they can’t recruit fast enough.
Half of candidates already think AI hiring is biased against them
Whether or not the algorithms actually are biased, the candidates have already made up their minds.
We asked them how much they agreed with this statement: “AI hiring tools are programmed with biases that make it harder for someone of my age, race, gender, or background to get hired.”
47.7% agree to some extent. 25.8% disagree. Almost half of U.S. candidates think the system is rigged against them, and they outnumber the people who disagree by close to two to one.
The bias is loudest among neurodivergent job-seekers
Roughly 23% of our respondents identify as neurodivergent—diagnosed, self-identified, or still figuring it out. Their answers on the bias question stand out.
Neurodivergent candidates are 7.6 points more likely to agree overall, and about 46% more likely to strongly agree, that AI hiring tools are biased against them.
That gap matters. The behaviors most one-way interview platforms score on—eye contact, vocal pacing, response speed, an even smile—are exactly the behaviors that read differently for many neurodivergent candidates. The system is grading them against a default they don’t fit, and they can tell.
Gen Z and boomers are the most suspicious. Millennials are the least.
The age cut isn’t linear. It bends.
Candidates over 55 and those under 25 see themselves as the most filtered out (the older end reading themselves as too expensive or too senior, the younger end as too inexperienced).
The 35–44 group, who happen to fit the demographic that hiring algorithms have the most training data for, are the least suspicious.
The disclosure paradox: transparency makes things worse
The strangest cut in the data comes when you cross bias agreement with whether the company actually disclosed the AI in the first place.
Net agreement that AI is biased, by disclosure status
| Disclosure status | Net agree |
|---|---|
| Yes—in the fine print | 65% |
| Yes—clearly stated | 52.4% |
| No (not disclosed) | 48.3% |
| I don’t know | 35.8% |
The more candidates know an AI is there, the less they trust it. Fine-print disclosure produces the highest bias agreement of any group—65%. “I don’t know” produces the lowest, at 35.8%. The closer the disclosure is to a legal hedge, the angrier the candidates get.
This is the part of the data we’d push hardest if we were running the policy conversation. The fight has been about whether to disclose. The data says the fight should be about how.
A line of fine print at the bottom of an application doesn’t read as transparency to a candidate. It reads as an admission that something is being hidden, written in the kind of language that’s there to protect a company in court, not to inform anyone applying. Half-disclosure is worse than none.
Half of candidates are now bringing AI of their own
If you believe the algorithm screening you is unfair, the rational thing to do is bring an algorithm of your own. A lot of respondents have.
Just under half (49.6%) now use AI in some form during the hiring process. The bulk of that, 44%, is preparation: practice interview answers, mock questions, scripted bullets.
Roughly one in 18 candidates say they’ve gone further. They’ve used AI live during an interview to feed themselves answers while a camera was on.
It looks like a Gen Z story until you sort by job level
By age, the live-cheating rate descends predictably:
- 18–24: 8.0%
- 25–34: 6.3%
- 35–44: 5.8%
- 45–54: 5.3%
- 55+: 1%
The headline read is the easy one—Gen Z is most willing to game the system. But cross-cut by professional level, the picture flips.
Live AI-cheating rate, by professional level
| Professional level | Live cheating rate |
|---|---|
| Senior executive/C-suite | 8.6% |
| Unemployed/currently seeking | 8.5% |
| Mid-level management | 7.9% |
| Freelancer/gig worker | 6% |
| Entry-level | 1.8% |
C-suite executives admit to live AI-cheating at 8.6%. Entry-level candidates do it at 1.8%. The top of the org chart is gaming the screening at almost five times the rate of the bottom. Pick your read on why this is happening.
Senior candidates probably encounter more one-way AI interviews to begin with, they almost certainly have more to lose from a bad take, and they may simply be less worried about being caught. Pick more than one. They aren’t mutually exclusive.
Whatever the explanation, the “Gen Z is cheating their way in” framing falls apart on the data. The cheating tracks closer to seniority than to age.
What AIs are actually telling job seekers
At the end of the survey we left an open-ended question: What’s the strangest or most inhuman piece of feedback or instruction you’ve ever gotten from an AI during the hiring process?
Here are themes that came back the most:
Most-cited bizarre experiences with hiring AI
| Theme | Mentions |
|---|---|
| Told they look or sound like AI / a robot | 12 |
| Instant rejection (within seconds of submitting) | 11 |
| Glitches, errors, frozen interview tools | 8 |
| Eye contact/“look at the camera” instructions | 7 |
| “Smile/be enthusiastic” coaching | 7 |
| No feedback at all | 5 |
| Penalized for human pauses (“um”, hesitations) | 3 |
| Word salad or nonsensical feedback | 3 |
None of these are huge in raw count, but the shape of the complaint is consistent. The job seekers who run into an AI job interview tend to describe a system that asks them to perform humanity on a script. Smile here. Look there. Don’t pause. Don’t um. And then it grades them on it.
Below are some pull-quotes from Enhancv’s survey:
- The second I submitted I got denied.
- The AI stated that I was required to ensure my eyes remained fixed on the camera lens.
- I had a prospective job ask me to say specific phrases out loud before beginning the interview.
- I was rejected because I filled a questionnaire about my mental health and they said I wasn’t a good fit.
- Stating that I had no industry experience when I’ve worked in the industry over 10 years with matching job titles.
- An AI during an interview screening asked me to describe what I am most proud of as a human. I thought that was a strange question for a computer to ask me.
- Told me to ‘better align with an ideal personality profile’ without explaining what that meant, which made the whole experience feel less like a genuine evaluation and more like trying to guess what a machine wanted.
- I think most AIs are biased against people with specific names or backgrounds that would lead them to be less apt to be white.
Final thoughts: hiring has gone dark
The findings suggest a definitive shift in the candidate–employer contract. Half of U.S. job-seekers are now being rejected by something they can’t see, can’t question, and were rarely told existed. Almost half believe that something is biased against them. Nearly a third refuse to submit to it at all. And among the half who do, a meaningful share are now meeting AI with AI.
For employers, the lesson is that opaque AI hiring is expensive. It hands job seekers the worst-case interpretation of every “no,” pushes lower-paid talent out of the funnel, and breeds the exact mistrust that fuels the next generation of evasion tactics.
Transparency is no longer the polite version of compliance; it’s the price of running a hiring funnel that candidates will still walk into.
Methodology
This report is based on a survey of 1,066 U.S. job seekers conducted in April 2026. The instrument contained 13 questions: five demographic, seven closed experience questions, and one open-ended free-text response.
Sample composition:
- 37.2% entry-level/individual contributors
- 36.1% mid-level managers
- 10.1% actively unemployed and seeking
- 9.8% freelancers/gig workers
- 6.7% senior executives or C-suite
Age distribution:
- 25–44: 63%
- 45+: 28%
- 18–24: 8.5%
Roughly 64% reported household income or target salary under $100k.
Industry coverage:
- Healthcare (10.8%)
- Education (10.5%)
- Finance (9%)
- Customer support (8.7%)
- Operations/project management (8.3%)
- IT/DevOps (8%)
The top ten industries account for ~76% of the sample.
Bases for percentages reflect the answered-only count for each question, with blanks excluded.
Margin of error: ±3.0% at the 95% confidence level for full-sample figures.
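The stated margin of error is consistent with the standard worst-case formula for a simple random sample. A minimal check (the formula and z-value are the conventional ones for 95% confidence, not taken from the survey instrument):

```python
# Worst-case margin of error for a proportion estimated from n responses:
# MoE = z * sqrt(p * (1 - p) / n), with p = 0.5 (maximum variance)
# and z = 1.96 for a 95% confidence level.
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1066)
print(f"{moe:.1%}")  # → 3.0%
```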