Diagnosing a 75% Day 1 uninstall rate and redesigning the pharmacy experience end to end.
Case Study - Health and Pharma
Frank Ross had a pharmacy app that 75% of users uninstalled within Day 1. I diagnosed the behavioural and structural causes through real analytics, Play Store review mining, heuristic evaluation, and competitive research - then redesigned the end-to-end experience from home screen to post-purchase. This case study documents the discovery, decisions, and design process behind that work.
Context
Emami Frank Ross Ltd is one of India's oldest pharmacy chains - over 200 physical stores across Kolkata and nearby cities, a strong brand legacy, and a loyal offline customer base. The company had built a mobile app to capture online sales and expand beyond West Bengal into cities like Bengaluru and Mysore.
But the app was not working. The data told a damning story before I wrote a single wireframe.
- 75% of users uninstalled within Day 1
- 4% customer retention rate
- 30% cart abandonment rate
- 9% search-to-conversion rate
These were not isolated design problems. They were symptoms of a product that failed to earn trust, reduce friction, or give users a reason to return. My job was to find out why - and fix it from the ground up.
Role
I was the solo Product Designer, working alongside a Project Manager and Product Manager at Tenovia. I owned the complete design process - discovery, research synthesis, information architecture, wireframes, high-fidelity UI, prototype, and developer handoff.
Before sketching anything, I ran a working alignment session with the PM to agree on scope, constraints, timeline, and what "success" meant in measurable terms. That conversation defined the KPIs that every design decision was later judged against.
Research
I approached discovery in three layers - the analytics, the product itself, and the market. Each layer either confirmed or challenged what the previous one suggested.
I started with the app's own analytics before forming any hypotheses. The uninstall data showed a cliff at Day 1. Users were not churning after a bad experience over time - they were leaving immediately after installation. That told me the problem was first-session clarity and trust, not feature depth or long-term engagement.
The new vs returning user trend revealed a more complex picture. New user acquisition was declining month over month, which made the returning-user ratio appear to rise - a false positive. The product was not retaining users better; it was failing to attract new users fast enough to grow the denominator.
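This false positive is easy to reproduce. The sketch below uses entirely made-up numbers (not Frank Ross data) to show how the returning-user ratio climbs even when retention is flat, purely because shrinking acquisition shrinks the denominator:

```python
# Illustrative sketch with synthetic numbers: a rising returning-user
# ratio can mask declining acquisition rather than signal retention gains.

def returning_ratio(new_users: int, returning_users: int) -> float:
    """Share of the month's active users who are returning users."""
    return returning_users / (new_users + returning_users)

# New-user acquisition falls month over month; the returning cohort
# stays roughly flat.
months = [
    {"month": "Jan", "new": 10_000, "returning": 2_000},
    {"month": "Feb", "new": 7_000,  "returning": 2_000},
    {"month": "Mar", "new": 4_000,  "returning": 1_900},
]

for m in months:
    ratio = returning_ratio(m["new"], m["returning"])
    print(f'{m["month"]}: returning ratio = {ratio:.1%}')

# The ratio climbs (16.7% -> 22.2% -> 32.2%) even though the returning
# cohort has not grown -- the denominator is shrinking, not the
# numerator improving.
```

Reading the ratio alongside absolute new-user counts, rather than in isolation, is what exposed the trend as an acquisition problem.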



I ran a structured heuristic evaluation of the existing app against Nielsen's 10 usability principles. What I found was not subtle. The navigation used pharmaceutical jargon - OTC, Rx - that everyday consumers could not parse. Search returned irrelevant results. Product pages were missing basic purchase-decision information: expiry dates, return policies, delivery timelines. The checkout required 5+ clicks for actions that should have taken one.
I cross-referenced all of this with Play Store and App Store reviews to validate which friction points users were actually expressing in their own words.
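The review-mining pass can be approximated with a simple keyword-frequency count. The term list and sample reviews below are hypothetical stand-ins, not the actual dataset - the point is the technique of tallying which friction themes recur across reviews:

```python
# Hypothetical sketch of the review-mining pass: count friction-related
# keywords across app-store reviews. Keyword list and sample reviews are
# illustrative, not the real corpus.
import re
from collections import Counter

FRICTION_TERMS = {"search", "slow", "crash", "checkout", "delivery",
                  "confusing", "login", "refund"}

def mine_reviews(reviews: list[str]) -> Counter:
    """Count how many reviews mention each friction keyword."""
    counts: Counter = Counter()
    for review in reviews:
        # Deduplicate per review so one rant doesn't skew a theme.
        tokens = set(re.findall(r"[a-z]+", review.lower()))
        counts.update(tokens & FRICTION_TERMS)
    return counts

sample = [
    "Search never finds my medicine, very confusing app",
    "Checkout is slow and confusing",
    "Delivery took a week and checkout kept failing",
]
print(mine_reviews(sample))
```

Ranking themes by review count, then matching them against the heuristic-evaluation findings, showed which problems users were actually voicing in their own words.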
"I was really let down by the app - it did not meet my expectations in terms of design and user-friendliness."

I analysed how 1mg and PharmEasy - the category leaders - handled the same user flows. Both had invested heavily in trust signals, search quality, and simplified prescription workflows. Frank Ross was competing in the same category without the same foundational UX in place.
Secondary research on Indian pharmacy app consumers consistently surfaced the same priorities: convenience, product authenticity, fast delivery, and an interface that does not require medical literacy to navigate.
Google Trends confirmed the gap - Frank Ross had strong offline brand recognition in West Bengal, but negligible digital search presence compared to Apollo Pharmacy and 1mg nationally.

Research Synthesis
Based on the analytics, review mining, and competitive research, I defined the primary user - not a healthcare professional, but an everyday urban consumer who wants to order medicines from home without confusion or anxiety. They are not looking for clinical depth. They want simplicity, reassurance, and speed.


Strategy
One of the first things I did - before any wireframe - was work with the PM to lock in specific, measurable KPIs. Any design decision that could not be traced back to one of these metrics needed a stronger justification. This kept design debates grounded in outcomes, not preferences.
| Metric | Baseline | Target |
|---|---|---|
| Conversion rate from search | 9% | +15% |
| Products added to cart from search | 17% | +30% |
| Items added to cart from category page | 15% | +33% |
| Average order value | INR 685 | INR 705 |
| Cart abandonment rate | 30% | -25% |
| Coupon apply rate | 40% | +75% |
| Customer retention rate | 4% | +100% |
| App uninstall rate | 75% | -10% |
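For post-launch tracking, the table above can be encoded directly. One assumption in this sketch: the targets are read as relative changes from baseline (e.g. "+15%" on a 9% conversion rate means reaching roughly 10.35%), which is my interpretation rather than a stated convention:

```python
# Minimal sketch of encoding the KPI table for measurement. Figures mirror
# the table; reading targets as *relative* changes from baseline is an
# assumption.

KPIS = {
    # metric: (baseline, relative_target, higher_is_better)
    "search_conversion": (0.09, +0.15, True),
    "cart_abandonment":  (0.30, -0.25, False),
    "retention":         (0.04, +1.00, True),
    "uninstall_rate":    (0.75, -0.10, False),
}

def target_value(baseline: float, relative_target: float) -> float:
    """Absolute value the metric must reach to hit its relative target."""
    return baseline * (1 + relative_target)

def target_met(metric: str, observed: float) -> bool:
    baseline, rel, higher_is_better = KPIS[metric]
    goal = target_value(baseline, rel)
    return observed >= goal if higher_is_better else observed <= goal

print(target_value(0.09, 0.15))           # search conversion goal: ~10.35%
print(target_met("uninstall_rate", 0.60)) # 60% observed vs 67.5% goal
```

Encoding direction (higher- vs lower-is-better) alongside each metric keeps dashboards from silently celebrating a drop in a metric that was supposed to rise.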
Structure
Before touching any UI, I restructured the IA around how users think about their health needs - not how a pharmacy categorises its inventory. Categories sorted by pharmaceutical type made internal sense but failed users who think in terms of symptoms and conditions.
I relabelled all Level 1 categories in plain consumer language, sorted them by purchase frequency, and restructured the bottom navigation. The Articles link - receiving 0.5% of all clicks - was replaced with My Orders, which is critical for less tech-savvy users navigating the post-purchase experience.

Design
Every screen change was tied to a specific behavioural or business problem identified in research. Here is the reasoning behind each major decision.

Outcome
An interactive prototype was built in Figma to simulate the real experience - micro-interactions and screen transitions included - giving the client and development team a precise picture of intended behaviour at every touchpoint.
Validation Plan
The design was fully completed and handed off to the client team. Post-implementation measurement was outside my project scope. If I were to validate this work, here is exactly how I'd approach measurement - in three phases, each building on the last.
The 75% Day 1 uninstall was the most urgent problem. Everything in month one is about whether the new experience earns enough trust to keep users past that critical first session.
Once retention stabilises, the focus shifts to the purchase funnel. Are users who stay actually buying? Are the friction reductions translating into revenue?
Month three is about durability - whether the improvements hold over time and whether users are forming habits around the product.
The KPIs were defined upfront precisely so that any team implementing this work could measure against a clear benchmark from day one - not retrofit success criteria after the fact.
Reflection
Analytics before assumptions. The Day 1 uninstall cliff told me exactly where to focus before I'd spoken to a single user or drawn a single wireframe. In healthcare apps, the stakes of getting the first session wrong are higher than in most categories - trust is harder to rebuild once broken. Data is not a substitute for empathy, but it is an indispensable guide to where empathy needs to go first.
Language is a design decision, not a copy edit. Relabelling "OTC" and "Rx" to plain-language consumer categories was not wordsmithing - it was a navigation fix that changed how users could find what they needed. Every label that requires domain knowledge is a friction point for the majority of your users. In consumer health especially, plain language is a trust signal.
Defining KPIs before designing makes every decision defensible. Having specific metrics agreed upfront - with the PM and stakeholders - meant design debates were grounded in business outcomes, not personal preference. It also gave me a clear prioritisation framework: fix the things with the highest uninstall-driving impact first, optimise for conversion second, build for retention third.