Overview
Technical interviewing evaluates candidates' engineering capabilities through practical assessment. Done well, it predicts job success and creates a positive candidate experience. Done poorly, it filters on interview preparation rather than actual skill.
Modern technical interviews have moved beyond whiteboard algorithm puzzles. Companies like Google, Stripe, and Shopify now use take-home projects, pair programming, and system design discussions that better reflect real work.
The key insight: interview performance and job performance are different skills. Your process should optimize for signal about how candidates will actually perform, not how well they've memorized sorting algorithms. This requires thoughtful interview design, trained interviewers, and consistent evaluation criteria.
The Problem with Traditional Technical Interviews
For decades, technical interviews meant one thing: whiteboard coding. A candidate stands at a board, marker in hand, trying to implement a binary tree traversal while three engineers watch silently. This format has one advantage—it's easy to administer—and countless disadvantages.
Whiteboard interviews test a specific skill: performing under artificial pressure while writing syntactically correct code without autocomplete, documentation, or the ability to run it. This skill rarely correlates with actual engineering work, where developers have IDEs, Stack Overflow, and the ability to iterate.
Research consistently shows weak correlation between algorithm puzzle performance and job success. Google famously studied their own hiring data and found that brainteaser questions had zero predictive value. Yet many companies continue using formats that filter for interview preparation rather than engineering ability.
The result? Top engineers who haven't practiced LeetCode get rejected, while candidates who've spent months on interview prep get hired—and sometimes struggle in the actual role. Your interview process might be selecting for the wrong signal.
Interview Format Options
Live Coding (Real Environment)
Modern live coding moves away from whiteboards to actual development environments. Candidates use their own machines or a shared environment such as CoderPad or CodeSandbox, often over a video call with screen sharing.
When it works well:
- Problems are realistic and scoped appropriately (30-60 minutes)
- Candidates can use their preferred tools and can Google
- The interviewer acts as a collaborative partner, not silent observer
- Focus is on problem-solving approach, not syntactic perfection
Common pitfalls:
- Problems that require obscure algorithms most engineers never use
- Artificial constraints (no documentation, no IDE features)
- Interviewers who watch silently instead of engaging
- Problems designed to have "gotcha" moments
Best practice: Use problems similar to tasks the candidate would do in the first week. If you're hiring a frontend engineer, have them build a component. If you're hiring a backend engineer, have them design an API endpoint.
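For instance, a backend live-coding exercise scoped to 30-60 minutes might ask the candidate to implement pagination for an API response. A minimal sketch of one possible reference solution (the function name and response shape are illustrative, not a prescribed format):

```python
from math import ceil

def paginate(items, page=1, per_page=10):
    """Return one page of results plus the metadata an API client needs.

    A realistic, tightly scoped exercise: the candidate handles
    boundaries (empty input, out-of-range pages) rather than
    recalling a memorized algorithm.
    """
    if page < 1 or per_page < 1:
        raise ValueError("page and per_page must be positive")
    total = len(items)
    total_pages = max(1, ceil(total / per_page))
    start = (page - 1) * per_page
    return {
        "items": items[start:start + per_page],
        "page": page,
        "per_page": per_page,
        "total": total,
        "total_pages": total_pages,
    }
```

A problem like this leaves room for a collaborative conversation: how should the API behave for a page past the end, and what metadata does the client actually need?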
Take-Home Projects
Candidates complete a project on their own time, typically 2-6 hours of work. This gives candidates a realistic environment and removes performance anxiety, but requires significant time investment.
When it works well:
- Time expectations are clear and respected (state "2-4 hours")
- The project relates to actual work they'd do
- Evaluation criteria are defined before review
- Follow-up discussion focuses on decisions and trade-offs
Common pitfalls:
- Projects that actually take 15+ hours for quality work
- No clear evaluation criteria (leads to subjective review)
- Expecting "production quality" from a 4-hour project
- Using candidates' work without compensation
Best practice: Create a realistic scope that senior engineers can complete in the stated time. Pay candidates for extensive take-homes (4+ hours). Have them walk through their decisions in a follow-up call.
System Design Interviews
System design rounds ask candidates to architect a system—"design Twitter's feed" or "design a URL shortener." These work well for senior roles where architectural thinking matters.
When it works well:
- The problem is appropriate for the level (don't ask juniors to design Netflix)
- Discussion is collaborative, not a presentation
- Evaluation focuses on communication and trade-off reasoning
- The problem relates to your actual technical challenges
Common pitfalls:
- Expecting one "right answer" (there isn't one)
- Problems so abstract they don't reveal real thinking
- Not calibrating difficulty to seniority level
- Interviewers who don't understand the domain themselves
Best practice: Use problems related to your domain. Let candidates ask clarifying questions—this mirrors real engineering work. Evaluate their reasoning process, not whether they match your solution.
Pair Programming
Candidates work alongside an engineer on a real or realistic problem. This simulates actual collaboration and reveals communication style, problem-solving approach, and how they handle ambiguity.
When it works well:
- The "pair" is genuinely collaborative, not just watching
- The problem has multiple valid approaches
- Focus includes communication, not just code output
- Time allows for meaningful progress (60-90 minutes)
Common pitfalls:
- The interviewer dominates or stays silent
- Artificial problems that don't reflect real work
- Evaluation based only on output, not process
- Insufficient time to build rapport
Best practice: Use real tickets from your backlog (simplified if needed). Have the interviewer be an active participant who answers questions and offers hints when stuck.
What Good Technical Interviews Look Like
Job-Relevant Problems
The single most important factor: does your interview test what the candidate will actually do? If you're hiring a Rails developer, test Rails knowledge—not abstract algorithms. If you're hiring for a data-heavy role, have them work with data.
Map your interview questions to actual job tasks:
| Job Responsibility | Good Assessment | Poor Assessment |
|---|---|---|
| Build REST APIs | Design and implement an endpoint | Reverse a linked list |
| Debug production issues | Give them a bug to investigate | Whiteboard a sorting algorithm |
| Write React components | Build an interactive component | Implement binary search |
| Optimize database queries | Analyze and improve slow queries | Solve a dynamic programming puzzle |
Clear Expectations
Before any technical round, candidates should know:
- Format and duration
- Tools they'll use (or should bring)
- What they'll be evaluated on
- Whether they can use documentation/Google
This isn't "giving away the answer"—it's setting candidates up to demonstrate their actual abilities instead of testing how well they guess what you want.
Structured Evaluation
Every interviewer should evaluate against the same criteria using the same rubric. Without structure, interviews devolve into "vibes" and are vulnerable to bias. A structured evaluation includes:
- Defined criteria: What signals are you looking for? Code quality? Communication? Problem-solving approach?
- Rating scale: What does "meets bar" look like? What's exceptional?
- Evidence requirements: Interviewers must cite specific observations, not just feelings
Trained Interviewers
Untrained interviewers make inconsistent, biased decisions. Invest in interviewer training that covers:
- Your evaluation criteria and rubrics
- Calibration on what "good" looks like at each level
- Bias awareness and mitigation techniques
- How to be a good interview partner (not interrogator)
- Legal considerations (questions to avoid)
Red Flags in Interview Processes
From the Candidate's Perspective
Developers share interview experiences—bad ones spread through networks and damage your employer brand. Watch for these warning signs that you're creating poor experiences:
Process red flags:
- No timeline or next steps communicated
- Multiple rounds with no feedback
- Moving goalposts ("just one more round")
- Ghosting after interviews complete
Technical red flags:
- Algorithm puzzles unrelated to the role
- "Gotcha" questions designed to trick
- Interviewers who seem disengaged or hostile
- Unrealistic time constraints
Culture red flags:
- Disrespect for candidate's time
- Unprepared interviewers
- Inconsistent information from different interviewers
- No opportunity to ask questions
Warning Signs Your Process Isn't Working
Symptoms of a broken technical interview process:
- High candidate drop-off during interviews
- Offers rejected citing "interview experience"
- New hires who struggle despite "passing" interviews
- Strong candidates from referrals who fail your process
- Interviewers who disagree dramatically on the same candidate
- Lack of diversity in your engineering team
Candidate Experience Matters
Why Experience Affects Outcomes
Top engineers have options. They're interviewing at multiple companies simultaneously and will choose based on total experience—not just compensation. A poor interview experience signals:
- "This is how they treat people"
- "They're disorganized"
- "They don't respect my time"
- "I'd be working with these interviewers"
Studies show candidates who have positive interview experiences are 38% more likely to accept offers and significantly more likely to recommend the company—even if they don't get hired.
Elements of Positive Experience
Before the interview:
- Clear communication about process and timeline
- Interview prep materials (what to expect)
- Accommodations offered proactively (different times, formats)
During the interview:
- Prepared, engaged interviewers
- Time for candidate questions
- Realistic problems that showcase skills
- Collaborative, not adversarial atmosphere
After the interview:
- Prompt follow-up (ideally within 48 hours)
- Constructive feedback (when possible)
- Clear next steps or closure
Time Respect
The total time investment candidates make is significant. Consider:
- Resume review and application: 30-60 minutes
- Initial screen: 30-60 minutes
- Technical phone screen: 60-90 minutes
- Take-home project: 3-8 hours
- On-site interviews: 4-6 hours
- Travel time (if applicable): 2-8 hours
A typical process asks for 10-20 hours of unpaid work. Top candidates comparing three opportunities might invest 60+ hours in job searching. Respect this investment by being efficient, communicative, and making decisions quickly.
Structured vs. Unstructured Interviews
The Case for Structure
Unstructured interviews—where each interviewer asks different questions and evaluates subjectively—are nearly useless. Research shows unstructured interviews have predictive validity of about 0.2 (barely better than chance). Structured interviews reach 0.5-0.6.
| Aspect | Unstructured | Structured |
|---|---|---|
| Questions | Varies by interviewer | Same for all candidates |
| Evaluation | "Gut feeling" | Defined rubric |
| Bias | High | Reduced |
| Legal risk | Higher | Lower |
| Predictive validity | ~0.2 | ~0.5-0.6 |
Implementing Structure
1. Define what you're evaluating:
Create a scorecard with specific competencies: problem-solving, code quality, communication, technical depth, etc.
2. Design questions per competency:
Each question should target a specific skill area with defined criteria for what "meets bar" looks like.
3. Train interviewers on the rubric:
Calibration sessions where interviewers evaluate the same candidate (real or mock) help align standards.
4. Collect evidence, not impressions:
Interviewers must cite specific observations: "Candidate identified the edge case involving null input" not "Candidate seemed smart."
5. Debrief systematically:
Discuss each competency area before reaching overall decisions. Avoid anchoring on first opinions.
Building Your Interview Process
Designing for Your Needs
Start with these questions:
- What does success look like in this role at 6 months?
- What skills are truly essential vs. trainable?
- What's your candidate's likely interview fatigue level?
- What signals have predicted success (or failure) historically?
Sample Process for Mid-Level Engineer
- Recruiter screen (30 min): Background, motivation, logistics
- Technical screen (60 min): Fundamentals discussion with an engineer
- Practical exercise (90 min): Build something realistic
- System design/architecture (45 min): For mid-level and above
- Team fit (45 min): Collaboration and communication
Total candidate time: ~4.5 hours over 1-2 weeks
Iterating on Your Process
Track metrics to improve:
- Time-to-hire
- Offer acceptance rate
- Candidate satisfaction scores
- New hire performance at 6 months
- Interview-to-offer ratio
- Diversity metrics through the funnel
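Several of these metrics are simple ratios between funnel stages. A minimal sketch of computing them from stage counts (the stage names and numbers are invented for illustration):

```python
def funnel_metrics(counts):
    """Compute pass-through rates between adjacent hiring-funnel stages.

    `counts` maps stage name -> number of candidates reaching that stage,
    in funnel order. Returns each stage's conversion from the previous one.
    """
    stages = list(counts)
    rates = {}
    for prev, curr in zip(stages, stages[1:]):
        rates[f"{prev} -> {curr}"] = counts[curr] / counts[prev]
    return rates

# Illustrative numbers only
pipeline = {"applied": 200, "tech_screen": 60, "onsite": 20, "offer": 8, "accepted": 6}
metrics = funnel_metrics(pipeline)
# metrics["offer -> accepted"] is the offer acceptance rate
```

Tracking these rates over time, and broken down by demographic group, is what reveals where your process leaks strong candidates.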
Survey candidates who complete your process (both hired and rejected). Their feedback reveals problems you can't see internally.