AI-Powered Career Platform for Interview Preparation

Time: 2024

Duration: 6 months

My role:

I worked as a Product Designer on Datalynn’s first AI-powered career platform, leading a key design iteration focused on improving user retention and engagement across the interview experience.

Responsibility:

Led the redesign of the interview setup flow, interview report dashboard, and guided practice questions to improve clarity, personalization, and repeat engagement. Collaborated closely with PMs and engineers to iterate based on user feedback and usage insights.


Time & Status:

May 2024 - December 2024


HIGHLIGHT


From Practice to Performance

Actionable Performance Insights

Immediate feedback builds awareness.

Interview Progress & Insights Hub

Long-term patterns reveal growth.

AI-Assisted Practice & Question Bank

Guided practice helps close skill gaps.

CONTEXT


Identifying Friction Across the Mock Interview Journey

High Drop-Off During Setup Step

42%

of users dropped off at the onboarding page.

By tracking how users navigated the product, we found major friction in onboarding and early setup, reducing activation, confidence, and long-term engagement.


Low Engagement After the Interview

40%


of users never practiced beyond their first mock interview.

27%

decline in retention from Day 7 to Day 30.

After completing a mock interview, low retention and return rates suggested that users didn’t receive enough feedback or direction to continue practicing.


Business Goal


Increase retention after the first mock interview by encouraging continued practice and sustained engagement.


How might we design an interview experience that helps users prepare better, improve continuously, and feel more confident?

RESEARCH


Analyzing User Feedback Across the End-to-End User Flow

Methodology

We conducted moderated usability testing sessions with 12 participants to understand how users set up, completed, and reviewed a mock interview. Sessions were conducted remotely via Zoom, where participants completed interview tasks and provided qualitative feedback afterward.


Task List & Completion Rates

We measured task completion time and success rates across the end-to-end mock interview flow to identify where users experienced the most friction, and found that interview setup took significantly longer than other steps.


Insights from user feedback

Through moderated user interviews, we gathered qualitative feedback and synthesized recurring patterns into key pain points and user-driven needs.


Identify the problems

Overwhelming Setup

Users felt confused because all onboarding fields were packed into a single, cluttered page.


No Post-Interview Insight

Users had no way to review answers or understand how they performed.


Limited Practice Options

There was no structured question bank for users to practice or reinforce skills.


Low Confidence in Their Performance

Without feedback or guidance, users were unclear about their performance.


Rebuilding the User Flow to Find Opportunities

During research, I learned that users didn’t just need a mock interview; they needed a clear path to improvement. The original product operated as a one-off interaction, offering no structured setup, no performance insights, and no way to continue practicing.

I reframed the experience as a positive reinforcement loop where clearer feedback builds confidence, encouraging more practice and continuous improvement, and focused the redesign on three key stages of the flow: setup, feedback, and practice & improvement.

TARGET USERS


Motivated Beginners Seeking Practical Growth

To ensure our solution effectively meets user needs, we conducted extensive research to better understand our target audience.


Focus 1: Setup

A Guided and Customizable Setup Experience

BEFORE

Overwhelming Single-Page Setup

All onboarding tasks (job info, resume, setup instructions) were packed into a single long page.

High cognitive load — users didn’t know where to start or what was required.

No guiding structure, no sense of progress.


Design Process

Reorganize information


I reorganized scattered information into a 3-step flow and added key inputs to enable more customized interview practice with less cognitive load.

Design Exploration


During the setup flow design, I explored two structural approaches.

Multi-page stepper

Option 1

Reduced cognitive load

Clear sense of progress and momentum

Stronger perceived guidance

Single-page accordion

Option 2

Fast access for experienced users

High cognitive load for first-time users

Unclear progress and completion state

I ultimately chose the multi-page stepper to reduce complexity, guide users through critical decisions sequentially, and improve the quality of inputs that power personalized interview feedback.

AFTER

Guided 3-Step Onboarding

Introduced a 3-step stepper that breaks information into focused chunks.

Added a Background Questionnaire to capture deeper context.

Introduced customizable interview settings for a tailored experience.

Customizable Background Questions


To enable more personalized interview practice, I designed a customizable Background Questionnaire that captures deeper context about each user during setup.

Focus 2: Get Feedback

A Centralized Space to Review, Reflect, and Improve

I designed a centralized space to help users review results, reflect on performance, and improve over time.

The system consists of two connected parts: Post-Interview Recap for immediate feedback, and Performance Tracking Over Time to surface patterns and guide long-term improvement.

Post-Interview Recap

BEFORE

There was no place to review results or track improvement.

No interview record to review after the interview

No strengths-and-weaknesses insights into performance

No learning feedback or recommendations to help users improve future interviews


Design Process

The original experience ended at interview completion, breaking the feedback → reflection → improvement loop. To close the loop, we introduced an Interview Recap.

Design Exploration: Where Should the Interview Recap Live?

Interview History → Recap

Option 1

Supports reflection anytime

Enables comparison across interviews

Scales well as a system

Delayed feedback if user never returns

Immediate Post-Interview Recap

Option 2

Fresh context

Strong learning momentum

Cognitive fatigue

Users may want to exit quickly

I chose the Interview History–based recap because it supports reflection at the user’s pace, enables comparison across interviews, and scales into a long-term learning system without adding cognitive load at the moment of completion.

Design Exploration: How Should We Present the Interview Recap?

Sidebar to Select Questions

Option 1

High visibility of all questions

Fast to skim or compare feedback across multiple questions

Competes with feedback content

Less scalable on smaller screens

Top Dropdown Question Selector

Option 2

Keeps focus on the feedback itself

Reduced cognitive load

Scales better

Less immediate visibility of all questions

I explored both a sidebar and a dropdown approach for navigating interview questions. While the sidebar offered faster access, it competed with the feedback content and encouraged shallow scanning. I ultimately chose a top dropdown to keep users focused on one question at a time, supporting deeper reflection and a calmer review experience.

AFTER

Comprehensive Interview Record

The Interview History Dashboard allows users to revisit all past interviews.

The AI-Powered Q&A Analysis provides detailed insights for each question.

Performance Feedback Dashboard

While users could review individual mock interviews, they struggled to see patterns across sessions or understand what to improve next. Interview prep isn’t about fixing one answer—it’s about recognizing recurring strengths and weaknesses over time. To address this, I designed a Performance Feedback Dashboard that synthesizes interview data across sessions and highlights actionable areas for improvement.

BEFORE

No structured feedback to understand performance over time

Users had no clear record of past interviews to review or compare progress over time.

Feedback lacked context, leaving users unsure why they received certain scores.

There were no actionable recommendations to help users improve future interviews.


Design Process

Connecting User Questions to Product Features

1. Design Exploration: Interview Metrics

Option 1 - KPI Cards

Fast to scan

Low cognitive load

Sets context without overwhelming

Limited detail

Option 2 - Trend Charts

Shows progress over time clearly

More analytical

Too heavy for first glance

Requires interpretation

I chose KPI cards because they let users quickly understand overall progress at a glance without adding cognitive load or distracting from deeper analysis below.

2. Design Exploration: Skill Breakdown Visualization

Option 1 - Bar Chart 

High clarity and readability

Easy comparison across skills

Scales well with data changes

Less visually expressive

Option 2 - Radar Chart

Holistic visual snapshot

Harder to compare precise values

Increased cognitive load

Poor scalability

I ultimately chose a bar chart because it offers clearer comparisons, scales better, and more directly supports actionable decision-making.

3. Design Exploration: Recurring Feedback

Option 1 - Frequency List

Highlights patterns clearly

Easy to scan

Low emotional load

Lacks context per instance

Option 2 - Tag Cloud

Visually expressive

Imprecise

Hard to act on

I chose a frequency-based list to help users recognize systemic issues without re-reading every comment.

4. Design Exploration: Actionable Suggestions

Option 1 - Checklist with Target Skill

Clear next steps

Encourages action

Easy to revisit

Less personalized than conversational guidance

Option 2 - AI Coach Chat

Feels personalized and supportive

Can adapt guidance based on user responses

Good for exploration and deeper understanding

Harder to scan quickly

Users may not know what to ask next

I chose the checklist because it delivers clear, actionable guidance when users have low cognitive energy, while AI chat remains better for optional, deeper coaching.

AFTER

Actionable Performance Feedback Dashboard

Clear Skill Insights — See strengths and gaps at a glance.

Pattern-Based Feedback — Identify recurring strengths and issues.

Actionable Next Steps — Get focused recommendations to improve.

Focus 3: Practice & Improve

Practice and Improve with Feedback

BEFORE

Limited Practice Options

Users could not practice questions, only read them passively.

Users had no answer analysis, so they couldn't gauge performance.

Question cards were hard to skim, with unclear labels and structure.


Design Process

Designing the Question Card for Fast Problem Identification

When browsing interview practice questions, users struggled to quickly determine whether a question matched their current skill level.

I designed the question card using layered labels to help users quickly identify which problems a question solves, reducing decision friction and supporting targeted practice.

AFTER

A Better Question Bank for Practice and Feedback

Question cards are clearly labeled and filterable, making browsing easier.

Practice mode supports audio and text, enabling flexible, anytime practice.

AI provides answer analysis with strengths, weaknesses, and tips, giving actionable feedback.

IMPACTS AND RESULTS

Successful MVP Launch


Contributed to the design of Datalynn’s MVP, resulting in a successful product launch.

Accelerated Design Efficiency


Contributed to developing a scalable design system that reduced design-to-development handoff times by 40%, significantly speeding up feature releases and product iterations.


DESIGN SYSTEM


Defining the guiding principles

During the MVP launch, we developed a design system to establish a cohesive and consistent design language for Datalynn.


Icons

Typography

Colors

Components

© Jinlan Huang 2026
