Technical interviews are a black box — candidates are usually told if they made it through to the next round, but they rarely find out why.
The lack of feedback isn’t just frustrating for candidates; it’s also bad for business. Our research shows that 43% of all candidates consistently underestimate their technical interview performance, and 25% of all candidates consistently think they failed when they actually passed.
Why are these numbers important? Because giving direct feedback to successful candidates can do wonders for increasing your close rate.
Providing feedback makes it more likely that the candidates you’re looking for today will join your team, but it’s also critical to hiring the people you might need later. The results of technical interviews are erratic: according to our data, only about 25% of candidates perform consistently from interview to interview.
This means that a candidate you reject today could be someone you want to hire in 6 months.
But won’t we get sued?
I’ve polled founders, hiring managers, recruiters, and employment lawyers to understand why everyone who’s ever taken interviewer training has been told in no uncertain terms not to give feedback.
The main reason: companies are afraid of being sued.
As it turns out, literally zero companies (at least in the US) have ever been sued by an engineer who received constructive feedback after an interview.
Many cases are settled out of court, which makes hard data difficult to come by, but given what we know, the likelihood of being sued after providing helpful feedback is extremely slim.
And here’s the thing: people don’t get defensive because they failed — they get defensive because they don’t understand why, and they feel powerless.
What about candidates who get defensive?
For every interviewer on our platform, we track two key metrics: the candidate’s experience and the interviewer’s calibration.
The candidate experience score is a measure of how likely someone is to return after speaking with a particular interviewer. The interviewer calibration score tells us whether a particular interviewer is too strict or too lenient, based on how well their candidates do in subsequent real job interviews. If someone consistently gives good scores to candidates who then fail real job interviews, they’re being too lenient, and vice versa.
When you put these two scores together, you can reason about the value of giving honest feedback. Below is a graph of the average candidate experience score as a function of interviewer accuracy, with data from over 1,000 different interviewers (roughly 100,000 interviews):
The candidate experience score peaks right at the point where interviewers aren’t too strict or too lenient, but are, in Goldilocks terms, “just right.” After that, it falls off pretty dramatically on both sides.
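To make the calibration idea concrete, here’s a minimal sketch of how one might compute a calibration score from outcome data. The function name, record format, and scoring formula are all illustrative assumptions, not interviewing.io’s actual methodology:

```python
# Hypothetical sketch: estimating interviewer calibration from outcome data.
# All names and the formula below are illustrative, not the platform's real method.

def calibration_score(interview_records):
    """Compare an interviewer's verdicts with candidates' subsequent
    real job interview outcomes.

    interview_records: list of (interviewer_said_pass, passed_real_interview)
    Returns a value in [-1.0, 1.0]:
      > 0  -> leaning too lenient (passes candidates who later fail)
      < 0  -> leaning too strict  (fails candidates who later pass)
      ~ 0  -> well calibrated
    """
    if not interview_records:
        return 0.0
    # Verdicts that proved too generous: said pass, candidate later failed.
    too_lenient = sum(1 for said_pass, real_pass in interview_records
                      if said_pass and not real_pass)
    # Verdicts that proved too harsh: said fail, candidate later passed.
    too_strict = sum(1 for said_pass, real_pass in interview_records
                     if not said_pass and real_pass)
    return (too_lenient - too_strict) / len(interview_records)

# Example: an interviewer who passes several candidates who then fail for real.
records = [(True, False), (True, False), (True, True), (False, False)]
print(calibration_score(records))  # 0.5 -> leaning too lenient
```

Under this toy scoring, the “just right” Goldilocks point in the graph corresponds to scores near zero, with the candidate experience penalty growing as the score drifts in either direction.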