WSJ: Companies Bring Back In-Person Technical Interviews to Combat AI Assistance
-
Technical interviews, which often involve real-time coding, have become one of the biggest hiring challenges for employers, The Wall Street Journal reports. During online interviews, candidates are increasingly using AI tools to feed them answers “behind the scenes.”
This trend is pushing companies to return to traditional hiring methods. Cisco and McKinsey have started holding more in-person meetings with candidates. Google has also reinstated on-site interviews for certain roles, primarily to verify programming skills. “We make sure to have at least one round of in-person meetings,” CEO Sundar Pichai told Lex Fridman in an interview.
Mike Kyle, Director of Technology Recruiting at Coda Search/Staffing, told WSJ that the share of companies requiring in-person interviews rose from 5% in 2024 to 30% in 2025. “Everything has come full circle,” he said.
-
It was only a matter of time before companies started tightening their hiring processes. Remote technical interviews made sense during the pandemic, but the rise of AI “whispering” answers in real time has created a credibility problem. When you hire a developer, you’re not just evaluating what they can answer; you’re trying to assess how they think, debug, and approach problems under pressure. In-person interviews force candidates to rely on their own skills, while also giving hiring managers a better sense of personality and collaboration style. AI tools aren’t going away, but companies clearly want to see the human behind the résumé again.
-
While the return to in-person interviews makes sense for skill verification, I doubt we’ll ever fully go back to the old model. Many top candidates prefer remote opportunities, and forcing everyone into an office for a single coding test risks losing great talent. A more balanced approach might be hybrid verification: for example, pairing supervised, proctored online coding challenges with an in-person collaborative problem-solving session. That way, you reduce AI “cheating” without sacrificing accessibility or global reach. The bigger question is: in a world where AI is part of everyday workflows, should we be testing people without it, or testing how well they use it?