
AI-Powered Experience Design

Leah Wellness

When someone in mental distress turns to an AI for help, every piece of jargon is a reason to close the tab. Every unexplained recommendation is a reason not to book. My work on Leah was about making the clinical world feel human — at the exact moment people needed it most.

Leah Wellness Hero Mockup — Desktop + Mobile

Role

UX Designer

Timeline

8 Months

Team

1 PM, 2 Engineers, 1 Researcher

Tools

Figma, Jira


The Project

Leah Wellness is an early-stage AI startup helping people find and book mental health providers — replacing the fragmented, multi-step manual search (Psychology Today → provider website → third-party intake form) with a single, AI-guided experience.

I joined as the sole UX designer during the 0-to-1 phase, responsible for the end-to-end design of the web onboarding flow — from first landing to booking a consult.

The users: people actively seeking mental health support, often in distress, with low tolerance for friction, jargon, or anything that feels clinical. The design challenge wasn't simplifying a form — it was getting vulnerable users to trust an AI with their mental health.


Deep Dive 01

Translating Clinical Complexity into Human Clarity

The Problem

The clinical intake contained 10 questions packed with therapeutic jargon — CBT, DBT, humanistic therapy, emotion-focused approaches. For users seeking help for the first time, encountering terms they didn't understand created a new problem on top of an existing one.

The original design asked users to self-select from a dense checkbox list with no guidance. Open-ended text fields made people anxious about whether they were "saying it right."

"Unclear what each question was asking."

— Yan, user testing

"Not knowing how long the form would take — or how to answer — was enough to make me leave."

— Rana, user testing

The Design Decisions

The conversational format was set. My job was to make it actually work for users who were already emotionally depleted — by removing every micro-friction point inside the flow.

01

Tooltips on every clinical term

Plain-language explanations on hover — so users could understand what they were selecting without needing a therapy background.

02

Chip selectors over open text fields

Structured options reduced the anxiety of "am I saying this right?" — with free text always available as a fallback.

03

Visible progress indicator

A "Step X of 10" label made the commitment feel finite. Users could see the end — which testing showed was enough to reduce early drop-off.

04

"Skip for now" on every sensitive question

Permission to move forward without perfection — especially important for questions about trauma, identity, and mental health history.

From form to chat interface comparison

Deep Dive 02

Visualizing Trust in AI Recommendations

Provider Card Front
Provider Card Back

The Tension

"Give me something on a silver platter."

— Rana, overwhelmed by provider choices

"I don't believe in AI care plans because I don't know if it's accurate or private."

— Serena, skeptical of AI recommendations

The first user needed less information. The second needed more. Any card that tried to serve both at once would serve neither.

The Decision

The PM's direction was a single-sided card. I pushed back — two distinct user modes can't share one surface without breaking trust for both. A double-sided card lets users control how deep they go.

Front — Quick viability check

Insurance, cost, availability, and a one-line match reason — everything for a 3-second decision.

Back — Trust-building detail

The AI's full reasoning in plain language, plus provider bio — for users who need to understand why before they'll commit.


Deep Dive 03

Designing for Decision Fatigue

The Challenge

Users arrive at the results page having just completed a 10-question intake. They're already cognitively spent. Showing the wrong information first — or burying the details that matter most — is enough to lose them before they book.

"I need to know if they take my insurance before I look at anything else."

— Multiple users, usability testing

The Design Decisions

The question wasn't what to show — it was what to show first. I mapped the card front to answer four questions in the order users actually ask them.

01

Is this person relevant to me?

Match score + specialty — lead with the AI's confidence signal.

02

Can I afford this?

Insurance coverage + estimated cost — the #1 drop-off point if left unanswered.

03

Is this person available?

Next availability — removes the fear of going through matching only to find no open slots.

04

Why did the AI pick them?

One-line match reason — answers the trust question before the user has to ask it.


Outcome

Intake Length

<10

Minutes

The step-by-step format cut completion time to well under the original form's 25+ minutes — observed consistently across test sessions.

What Testing Showed

"This is better than my usual process — Psychology Today, then the provider's website, then a separate intake form. It's all in one place."

— Jen, usability testing

Across multiple usability sessions, users consistently described the experience as less overwhelming than existing alternatives. The conversational intake reduced the sense of clinical interrogation, and the transparent match card gave users language to evaluate recommendations rather than simply accept or reject them.

Reflection

1

Trust is earned through transparency. Users didn't resist AI recommendations — they resisted unexplained ones. Making the reasoning visible changed the dynamic entirely.

2

Jargon is a drop-off point, not just a UX inconvenience. Every clinical term without an explanation is a moment where a user questions whether this product is for them.

3

AI's rough edges need design too. In mental healthcare, latency isn't a loading spinner — it's anxiety. Next time, I would design empathetic waiting states and explicit privacy reassurances for sensitive data.
