2 designers + Cross-functional R&D team | 2 weeks | athenahealth | 2015

feature overview

Adding an interactive feedback panel to search within the Epocrates mobile app encouraged users to vent their frustration with our initially buggy feature, and helped provide us with valuable, actionable data about what to fix first. 




We were pleasantly surprised to receive a number of positive comments on New Search, which was a welcome morale boost for the team. More importantly, we were also able to collect extremely valuable data to prioritize search engine algorithm fixes, as well as content issues and suggestions for our Medical Informatics team.

We had over 40,000 interactions with the feedback smileys during their first month! A handful of users even found the prompt so inviting that they felt compelled to complete it every time they searched. (We added a 'hide smileys' option in a later release to accommodate these folks.)

In addition to the hard data that helped the team troubleshoot and refine the algorithm, actively engaging with our users provided additional benefits. It helped decrease tensions surrounding the major changes in search, and helped our team stay closer in touch, and in sync, with our users' needs.

A ticker with the latest user feedback as it happened was added to an always-on performance scoreboard in the Search Team room, elevating the importance and visibility of meeting users' needs as a key metric of success.

After technical bugs caused our App Store rating to take a rare dive from 5 stars, we were able to swap out a positive-feedback screen for a prompt asking "happy" users to rate us on the Store. It was vital to get long-standing, satisfied users to rate us in order to offset the users motivated by technical issues to leave a low review. The in-feedback prompt was much more effective than our other efforts to restore the rating.



In order to increase engagement with our app, and to fix a long-standing usability challenge, an engineering team began to develop a search engine to replace the existing basic lookup functionality. While the lookup lived underneath a standard search box, it performed a much simpler task than a modern search engine: it matched a user's entry, letter for letter, against a single list of article titles, and displayed only the titles that matched exactly. Switching to a search engine that could handle a more diverse range of user inputs, and eventually give more sophisticated results, would be a big win for users.
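The difference between the legacy lookup and a search engine can be sketched roughly in Python. This is purely illustrative; the titles and function names here are invented, not from the actual Epocrates client.

```python
# Rough illustration of the two behaviors described above; titles and
# function names are invented, not from the actual Epocrates client.

TITLES = ["Amoxicillin", "Amoxicillin/Clavulanate", "Aspirin", "Atorvastatin"]

def legacy_lookup(query, titles=TITLES):
    """Old behavior: match the entry letter for letter against title starts."""
    q = query.lower()
    return [t for t in titles if t.lower().startswith(q)]

def token_search(query, titles=TITLES):
    """Search-engine-style behavior: match any word within a title."""
    q = query.lower()
    return [t for t in titles
            if any(tok.startswith(q) for tok in t.lower().replace("/", " ").split())]
```

A query like "clav" returns nothing from the legacy lookup, because no title *begins* with those letters, while even this toy search finds "Amoxicillin/Clavulanate". A real engine goes further still: fuzzy matching, synonyms, and ranked results.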

However, when our search engine was in its early days of development, the results it was spitting out weren't quite right yet, and it was very slow. And because the old lookup lived on the client, it worked online and offline, and showed results instantaneously as you typed. The new search would require an internet connection, and even once we improved the performance issues it would be a bit slower than the old system.

Despite what the team thought were release-blocking technical and user experience deficiencies, leadership slated the feature for general release.

In addition to the technical hurdles, UX had joined the team, with extremely limited purview, just before the release. We quickly realized we were about to burden our users with a really big user experience change. The search box would look similar, but the navigation would have to behave very differently in order to switch from lookup to search. Performance problems plus a major interaction change would present a user satisfaction challenge for any app, but the bar was set even higher for our app due to its context of use and history. Epocrates is a work tool for busy doctors, so its users are used to navigating the app many times a day, relying on muscle memory to complete a task in mere seconds. Most users have been using the app for many years (an average of 8.5 years!), and have grown accustomed to it being largely static.

We realized that given the technical issues, the lack of onboarding, and the UX changes, we weren't just going to be moving our users' cheese, we were switching them to a raw diet without asking them! While we continued to advocate for a delay in the release, we also worked to design "handrails" to help users mitigate the unpleasantness as they waited for technical fixes to be implemented and tried to make sense of the new paradigm.


How could we prevent an overly aggressive rollout of this functionality change from completely alienating our users? How could we decrease the pain of the change process?


Luckily, we had advocated hard to fit in some user research to help validate, quantify, and better understand our concerns about the feature rollout. We ran surveys with a beta user group, as well as in-person interviews, to help us prioritize which issues were the most urgent. After identifying the key insights (pink post-its), we developed How Might We's (green) for each, and then ideated different possible approaches to answer each HMW. After converging on our top approach options, we then ideated around each (orange).

We knew that we were moving users' cheese with this change. So we posed two How Might We's in response to this: 

HMW Acknowledge the transition and offer to help? 
HMW Make them feel comfortable and at home? 

While many of our possible fixes were out of scope for our first release, we were able to get buy-in and resources to incorporate an explicit ask for feedback in the first GA release.

ideation & iteration

Once we decided to move forward with an active ask for feedback, we began ideating on the style and UI. We wanted something more engaging and clear than our legacy app-wide feedback:


We started off considering adding a personal touch, with a team member directly asking for feedback, à la Mint. In our testing that felt too distracting and ad-like for our no-nonsense physician users, so we continued to iterate.

How else could we convey a more human touch, and show our physician users that we "get" them: that we know they rely on our app, and that these minor setbacks don't feel minor to them?

After several other iterations, we drew inspiration from the classic Pain Scale, a graphical healthcare hallmark:


We started off with a direct analogy, asking "How painful was this search?" Even though we were worried about user backlash from a buggy search engine, copy that might prime users to assume search was bad seemed like a nudge in the wrong direction, especially if they missed the reference! So we worked on the language, simplifying while hopefully retaining a nod to the scale:



We were a bit torn on how detailed to make the feedback request. There was concern that users might not engage if we required them to input more information than just a rating. But we were also concerned a rating alone wouldn't give users a chance to share more details (or vent), and limiting the feedback to a simple selection would forfeit the chance to collect additional valuable data from users about their needs and reactions.

We chose to go with a two-step feedback process: first users would choose their rating right on the results screen, and then they would be prompted to share more details on an overlay screen. They could still keep the rating even if they dismissed the detail screen, as we assumed most users would.

This approach seemed to strike a good compromise between ensuring high engagement with feedback and gathering as much information from willing users as possible.


This approach came with one minor drawback: what if a user misclicked on a feedback face? We were interpreting the "Close" option on the second screen as declining to provide additional details, not as an undo of the rating.

We definitely wanted to capture the happy/meh/sad face on a search, even if a user didn't answer the follow-up questions or add any text. While one piece of unconfirmed click data alone could just be a misclick, and might be hard to interpret, in aggregate this little microinteraction provided us with a lot of information. Given the frequency of use of our app, and of the search function, we had no trouble at all picking out terms that were receiving a below-average rating.
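The kind of aggregation this enabled can be sketched as follows. The scoring, threshold, and data below are hypothetical, not our actual analytics pipeline:

```python
# Hypothetical sketch: map each smiley tap to a score and flag search
# terms whose average rating falls below the overall average.
from collections import defaultdict
from statistics import mean

SCORES = {"happy": 1.0, "meh": 0.5, "sad": 0.0}

def below_average_terms(events):
    """events: iterable of (search_term, face) pairs from the feedback smileys."""
    by_term = defaultdict(list)
    for term, face in events:
        by_term[term].append(SCORES[face])
    overall = mean(s for scores in by_term.values() for s in scores)
    return sorted(term for term, scores in by_term.items()
                  if mean(scores) < overall)
```

With tens of thousands of taps per month, even this naive per-term average is enough to surface the queries that are consistently disappointing users.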

However, to ensure users also had the option to undo an unintended rating, we revised the second screen. The primary actions would still be "Close" (keep the rating) and "Send" (details and rating, enabled only once details were entered), but we would also provide a secondary "undo rating" option.
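The resulting flow logic can be summarized in a small model. The names and structure here are hypothetical; the real implementation was native mobile code:

```python
# Hypothetical model of the two-step feedback flow: a face tap records
# the rating immediately; the detail overlay's "Close" keeps it, the
# secondary "undo rating" clears it, and "Send" is enabled only once
# details have been entered.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackSession:
    rating: Optional[str] = None   # "happy" | "meh" | "sad"
    details: str = ""

    def tap_face(self, face: str) -> None:
        self.rating = face         # step 1: captured right on the results screen

    def close_overlay(self) -> None:
        pass                       # declining details keeps the rating

    def undo_rating(self) -> None:
        self.rating = None         # secondary escape hatch for misclicks

    def type_details(self, text: str) -> None:
        self.details = text

    def can_send(self) -> bool:
        return bool(self.details)  # "Send" disabled until details exist
```

The key design choice is that dismissal and undo are separate actions: closing the overlay is cheap and common, so it must not destroy the rating we already captured.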



While the microinteractions within the feedback flow are nothing remarkable, it was a hard sell in the midst of a release push to "bother" getting these subtle details right. However, we felt it was critical to turn over a new leaf with Epocrates and ensure that as we worked on a section of the app, we brought it fully up to the high standards of modern app design. We wanted feedback to be as easy and delightful to use as a top consumer app.


We also felt a pleasant interaction was absolutely critical to ensuring users were happy to give feedback repeatedly. Working on the transitions and timing, the subtle touch-indicator change as you tap a face, and adding a quick but genuinely expressed thank-you combined to make the feedback experience feel meaningful.