What State-of-the-Art AI Interviewers Can Do
The latest evidence shows AI-led interviews are now high quality at scale. The newest LSE results are the strongest benchmark to date, and findings from other studies point in the same direction: people engage more, share more, and often prefer the conversational format.

Can AI conduct interviews at a quality level that is useful for serious decision making? And do people want to participate in AI-led interviews? The current state of evidence suggests the answer to both questions is yes, especially when the interviewer is designed well and uses voice interaction.
Latest Benchmark: New LSE Evidence (2026)
The most important new result comes from Geiecke and Jaravel's February 7, 2026 paper, which evaluates AI-led interviews across multiple applications and interview modes (Geiecke & Jaravel 2026).
In blind transcript evaluations by sociology PhD students, AI text interviews scored 2.93 on a 1-5 scale where 3 equals an average human expert (in comparable online-text conditions). In the same research, voice AI interviews scored 3.93 against the online-text benchmark and 3.50 against a face-to-face benchmark, with face-to-face human interviews scoring 3.53.
The key implication is that voice interaction is a major quality lever that brings AI interviewing close to in-person expert performance for short semi-structured interviews.
Do People Like Talking to AI?
Across the same paper's core applications, respondents preferred AI interviews over human interviews by at least 2:1. Examples include meaning-in-life interviews (42.51% AI vs 18.36% human), French election interviews (49.48% vs 15.62%), and education-and-occupation interviews (41% vs 17%).
In a representative U.S. study on meaning in life (n=462), AI-led interviews also produced 148% more words than open text fields (471 vs 190). In matched transcript comparisons, expert evaluators judged AI transcripts more informative in 75% of cases.
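The reported figures are easy to verify with a quick calculation. The sketch below (a plain illustration, not code from the study) reproduces the 148% word-count gain and confirms that each of the preference splits cited above is at least 2:1 in favor of AI:

```python
# Word counts from the LSE study: 471 words in AI-led interviews
# vs 190 in open text fields.
ai_words = 471
text_field_words = 190

# Relative increase: (471 - 190) / 190 ~= 1.479, i.e. ~148% more words.
increase_pct = (ai_words - text_field_words) / text_field_words * 100
print(f"{increase_pct:.0f}% more words")  # prints "148% more words"

# Preference shares (AI %, human %) from the paper's core applications.
preferences = {
    "meaning in life": (42.51, 18.36),
    "French election": (49.48, 15.62),
    "education and occupation": (41.0, 17.0),
}
for topic, (ai, human) in preferences.items():
    # Each ratio comes out at or above 2:1.
    print(f"{topic}: {ai / human:.1f}:1")
```

Each ratio works out between roughly 2.3:1 and 3.2:1, consistent with the "at least 2:1" summary in the paper.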
More Evidence
This pattern is consistent with prior HCI research. As early as 2020, Xiao et al. found that conversational AI survey formats elicited longer answers and richer disclosure than traditional open-ended web forms (Xiao et al. 2020).
The broader emerging literature on AI-led interviews reports similar themes. Interview quality depends heavily on follow-up behavior, conversational flow, and non-judgmental interaction design.
What Good AI Interviewing Looks Like
High-performing systems consistently apply core qualitative interview principles: non-directive questioning, eliciting concrete examples, cognitive empathy, non-judgmental tone, one-question turns, and clear topical focus.
In practice, a well-designed AI interviewer can combine human-level interview quality with consistency and scale.
References
Geiecke, F., & Jaravel, X. (2026, February 7). Conversations at Scale: Robust AI-led Interviews. London School of Economics. https://www.xavierjaravel.com/_files/ugd/bacd2d_ca55b68204ad4f5395905952ad0f3153.pdf.
Xiao, Z., Zhou, M. X., Liao, Q. V., Mark, G., Chi, C., Chen, W., & Yang, H. (2020). Tell Me About Yourself: Using an AI-Powered Chatbot to Conduct Conversational Surveys with Open-ended Questions. ACM Transactions on Computer-Human Interaction, 27(3), Article 15. https://doi.org/10.1145/3381804.