Interview Champion › Get started

How answer scoring works

After each answer (Practice mode) or at the end of a session, the app scores you on five dimensions, each out of 10.

  • Clarity — structure and flow. Rambling, repetition, or unclear sequencing pulls this down.
  • Relevance — does the answer actually address this question? Tangential content is penalised heavily.
  • Depth — specific evidence. Names, numbers, dates, tools, decisions. Vague answers cap at 4-5.
  • Role fit — how well the answer signals suitability for the named role. Generic answers that could apply to any job sit at 4-5.
  • Impact — tangible outcomes. "We improved things" is a 4. "Cut p99 latency 40%, saved £50k/year" is a 9.

Calibration anchors:

  • 1-3: doesn't answer the question, or is empty/incoherent.
  • 4-5: vague generalities, no specifics.
  • 6-7: solid but unremarkable.
  • 8-9: specific, structured, real evidence, tailored.
  • 10: outstanding — vivid, quantified, perfectly tailored. Rare.
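
The anchors above amount to a simple score-to-band mapping. As an illustrative sketch (the band labels here paraphrase the rubric; the app's internal naming is not documented):

```python
def band(score: int) -> str:
    """Map a 1-10 dimension score to its calibration band.

    Labels paraphrase the rubric; boundaries follow the anchors above.
    """
    if not 1 <= score <= 10:
        raise ValueError("dimension scores run from 1 to 10")
    if score <= 3:
        return "doesn't answer / incoherent"
    if score <= 5:
        return "vague generalities, no specifics"
    if score <= 7:
        return "solid but unremarkable"
    if score <= 9:
        return "specific, structured, tailored"
    return "outstanding"
```

For example, `band(4)` falls in the "vague generalities" band, which is where generic, could-apply-to-any-job answers land.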

The overall score is a weighted average: relevance and impact are weighted slightly higher.

Alongside the scores you'll get a feedback paragraph (a named verdict plus the single most important improvement), a list of strategic strengths the answer demonstrates, and a list of precise tweaks: exact words to swap or specific data to insert. Treat the tweaks as a punch list.

The scoring AI is designed to differentiate sharply. If you're getting 6/10 across every dimension regardless of effort, raise it as a bug — the model should be giving you spread.