The AI interview preparation problem is no longer theoretical. Right now, the candidate you're interviewing next week is having a conversation with ChatGPT that goes something like this: "I have an interview at [company] for a [role]. Here's the job description. Here's my resume. Generate 20 behavioral questions they might ask me and write ideal STAR-format answers for each one."
This takes about fifteen minutes. The output is indistinguishable from that of someone who has spent years developing the self-awareness and communication skills to answer interview questions well. The candidate memorizes it, practices it, and walks into your interview with a complete script for every question in the standard playbook.
If your interview is built around standard questions, you are no longer evaluating a candidate. You are evaluating their ability to have a good conversation with ChatGPT. Those are different skills, and only one of them predicts job performance.
This is different from traditional interview prep — and the difference matters
Candidates have always prepared for interviews. Flash cards for common questions. Mock interviews with friends. Reading books on behavioral interview technique. This is fine and expected — the interview has always been a game with known rules, and preparation is part of playing it.
What's changed is the scale and quality of the preparation available. Traditional prep produces a candidate who has thought about their experiences and organized them into stories. That's work that requires genuine reflection on what they've actually done. The output is their real experiences, better articulated.
AI candidate prep is categorically different. A model generating answers from a job description and resume doesn't reflect on real experiences — it constructs plausible experiences that fit the question. It produces answers that are structurally perfect, emotionally calibrated, and completely divorced from what the candidate actually did. The candidate's job is simply to memorize and deliver.
The result is that interview signals that used to mean something — articulate answers, specific examples, apparent self-awareness — are now table stakes that any candidate with a good internet connection can produce. These signals no longer differentiate. They've been inflated to the point of uselessness.
Which questions are fully scriptable, and which aren't
The scriptability spectrum

Fully scriptable:

- "Tell me about a time you had to manage a difficult stakeholder."
- "Describe a situation where you failed and what you learned."
- "Tell me about your greatest professional achievement."
- "Where do you see yourself in five years?"
- "Why do you want to work here?"

Not scriptable in advance:

- "The platform migration on your resume — what specifically did you decide NOT to migrate, and why?"
- "You were there in Q3 of last year when the architecture decision was made. What was the argument you lost?"
- "Walk me through a decision you made that looked right at the time but turned out to be wrong. Not the lesson — the actual decision process."

ChatGPT can answer every question in the first list in five seconds. The questions in the second list require actual knowledge of what the candidate did — knowledge that only exists if the candidate actually did it. An AI-coached candidate can still answer these questions, but the answer will be evasive, vague, or will require them to improvise a story they haven't rehearsed. Under gentle follow-up, the seams show.
The structural difference is personalization. Generic questions can be scripted generically. Questions built from the specific candidate's specific resume, targeting specific claims they made — those can't be scripted in advance because they couldn't have been anticipated. The candidate doesn't know which claim you noticed, which metric you're skeptical of, or which gap in their timeline you're curious about.
ChatGPT interview coaching: what it actually produces
It's worth being concrete about what AI-coached candidates sound like, because it changes how you listen. The outputs of AI interview coaching have a recognizable texture:
- Structural perfection at the expense of specificity. STAR-format answers that hit every beat cleanly, but with numbers that feel approximate, timelines that feel rounded, and outcomes that are always positive. Real experiences have ragged edges — the metric that moved a different direction than expected, the stakeholder who wasn't satisfied, the project that ended before the measurable outcome materialized. AI coaching smooths these out.
- Identical energy across all answers. A candidate telling stories from memory varies in engagement — they light up on things they're proud of, go quieter on things they're not. A candidate delivering memorized answers maintains consistent, practiced enthusiasm that doesn't track the content.
- Pivots under follow-up. When you ask a scripted candidate to go deeper on something specific — "You said your team grew by 40% — what was the first hire you personally made, and how did that go?" — the answer often pivots back to higher-level generalities. This is where the script runs out. Real experience has infinite depth; scripted experience doesn't.
The real AI interview preparation problem isn't cheating — it's signal degradation
It's tempting to frame AI-assisted interview preparation as a form of dishonesty. That framing is a distraction. Candidates aren't lying; they're preparing thoroughly with the best available tools. The problem isn't their behavior — it's that the interview format they're optimizing against was already unreliable, and AI has made the unreliability catastrophic.
The core issue: interviews evaluate performance under the specific conditions of an interview, not performance under the conditions of the actual job. AI coaching has widened this gap to the point where the two things being measured — interview performance and job performance — are nearly uncorrelated for candidates who prepare well.
This isn't a new problem. The research literature on unstructured interviews has shown for decades that they're poor predictors of job performance. Hiring managers favor candidates who remind them of themselves, who tell polished stories, who communicate confidence. These variables predict interviewability, not capability. AI coaching has simply made interviewability a skill everyone can acquire cheaply.
The response is not to try to detect AI-coached candidates — that's a losing game. It's to ask questions that measure something AI coaching can't manufacture: genuine depth of experience, specific memory of real events, and the ability to think through novel problems in real time.
Three techniques that still surface real signal
1. Resume-anchored questions
Pick a specific claim from the resume and ask about it in a way that requires genuine detail. Not "Tell me about your experience with X" — that's answerable from a script. Instead: "You list this project prominently. Walk me through a specific technical decision you made that you now think was wrong." Or: "What did the system look like before you touched it? Not conceptually — specifically."
The specificity is the key. Scripted answers operate at the level of narrative arc. Resume-anchored follow-up questions require operating at the level of actual memory, which can't be faked without effort that's hard to sustain in a real-time conversation.
2. Live problem-solving
Give the candidate a problem that resembles what they'll actually work on and watch them think through it in real time. This doesn't have to be a formal case study — it can be as simple as "Here's a situation we've dealt with recently. How would you approach it?" The point is that there's no pre-prepared answer. The candidate has to think, and how they think is itself the signal.
AI coaching can teach candidates to structure their thinking out loud, which is useful but not sufficient. The substance of what they think — whether their intuitions are good, whether they ask clarifying questions, whether they're aware of tradeoffs — that still reflects genuine expertise.
3. Follow-up on the answer you actually got, not the one you expected
The easiest way to break out of scripted interview territory is to follow up on what the candidate just said, specifically. "You mentioned your team disagreed on the approach — what was the argument you lost?" "You said the migration was mostly successful. What's the mostly?" "You described this as a learning experience — what would you have done differently at month two, not month twelve?"
Scripts can't anticipate their own follow-ups. A candidate who is pulling from memory can go wherever you take them. A candidate who is reciting a prepared narrative will either follow the script off a cliff or will have to visibly improvise. Both are informative.
AskSharp generates questions AI-coached candidates can't script
Every question is built from the candidate's specific resume. Generic prep doesn't survive it.
The practical implication for hiring teams
The interview format that worked five years ago is broken. This isn't a reason to abandon interviews — it's a reason to rebuild what you actually do in them. The core change is moving from questions that evaluate how well candidates can tell stories about themselves, to questions that evaluate whether they actually did what they claimed to have done.
That shift requires preparation that most hiring managers don't currently do: reading the resume carefully enough to identify specific claims worth testing, building questions that are personalized to this candidate rather than drawn from a template, and knowing in advance what a strong answer looks like so the evaluation isn't happening in real time while you're also managing a conversation.
The good news is that AI-coached candidates haven't solved this problem. They've only solved the problem of generic interviews. A well-prepared interviewer with resume-specific questions still has a decisive advantage over a well-prepared candidate with a generic script. The arms race isn't over — but winning it requires upgrading the preparation behind your questions, not swapping in a new question bank.
If you want to run interviews that surface real signal in the era of AI candidate prep, AskSharp builds your interview kit from the resume — probe areas, targeted questions, and an answer cheat sheet — in 30 seconds. It's free to start.