
An Adventure in Solving Customer Problems and Tidying-up Metrics - Part 1

Updated: Apr 28, 2023

A core part of a Product Manager's role is to understand customers' needs so that PMs can build the solutions customers need, where and how they need them, and make the right information visible when customers need it.


[Image: Venn diagram showing what, when, and where the information is needed]


The tricky parts of uncovering the “what”, “when”, and “where” include:

  1. Uncovering the underlying “need” from the “want” brought up by customers.

  2. Discovering qualitative or quantitative data that represent your customer base.

  3. Synthesizing the “why” behind any data you see.

Customer needs influence customer behavior, so let's dig into that aspect a bit more. For example, launching a feature that customers need isn't enough to solve the customer problem. You also need to understand where customers are likely to encounter the problem, where they will go to troubleshoot it, and accordingly where you can provide or announce the new feature.


I want to share my learnings from one such experience about understanding customer behavior.



Retrospecting from an Experience


A few years ago, I had the chance to improve the customer experience at my employer, as measured by the number of customer support tickets my business unit (BU) received. The project resulted in halving the total normalized support tickets.

[Image: Graph showing customer behavior]

My journey was full of ups and downs because of all the missteps we made along the way. Thankfully, the experimental approach of figuring things out on our own taught me a few things about understanding customer behavior. There is still a lot for me to learn and I am no expert, but I wanted to share my experience and learnings along the way. Since the experience was spread across many incremental learnings, and long articles do not work well with email, I have split the article across multiple parts. If you haven't already, this would be a good time to subscribe to my Substack so that you do not miss any part!

This retrospective helped me think of areas of improvement and strengths to persist in. I hope it will help you too in areas such as how to:

  1. Get quality data on the points of confusion customers face

  2. Use quantitative data and qualitative perspectives to understand customer needs or behavior

  3. Visualize customer behavior - covered in part 7 (WIP)

  4. Test customer behavior and gather usage metrics across websites, emails, or offline documents - covered in part 4 (WIP)

  5. Choose between different options to increase customer awareness - covered in part 5 (WIP)

  6. Align stakeholders for making process changes - covered in part 2

To make the retrospective understandable, I've used numbers to illustrate data-informed decisions, restricted to what was available or understandable at each point. At the same time, to keep company information confidential, the numbers are made up. I hope this also helps you see one perspective of a project through its many phases, along with the collateral - documents, graphs, data, interview questions, pitch decks - created throughout the project.


[Image: Illustration of people divided into detractors, passives, and promoters]


Anecdotal Issues and Solutions


The project that resulted in halving support tickets started with a small premise. The customer support team started it as a way to be heard by my business unit's leadership and to accelerate improvements that would make customers' and the support team's lives easier. The support team lead, a support specialist, my BU's leadership, and a few Product Managers started with a group of 20-30 tickets that the support team brought up. These shared a common theme: customers reaching out to customer support to request updates to their contact preferences or address details. The first-glance solution seemed to be to enable self-service capabilities through the product's web portal so customers could update the details on their own.


This set of tickets was interesting because:

  1. Anecdotally, such tickets came in very frequently.

  2. Such tickets could not be resolved by the support team themselves and required escalation to another operational (Ops) team via a different ticketing system, with a slower SLO (service-level objective, the non-contractual version of an SLA) on the expected time to resolution.

  3. Most of these tickets could be eliminated by enabling self-service via our web portal.

So, the first thing a peer PM, our leadership, and I decided was to review these 20-30 tickets and build the self-service feature.


Problem — If we build this feature, how many tickets, or what percentage of tickets in a month, would it eliminate? No idea. It might be more or less than 20-30.


Historic Data and Category Tags


Similar to the self-service feature discussed above, there were one or two other features we prioritized based on small groups of tickets. While we were still trying to understand the potential impact of these features, the engineering teams had already moved forward with building them.


To understand the impact of the features we were building, we looked at the tags (or issue areas or categories) on customer support tickets. Support tickets are created when customers send questions to the company via web forms or emails. Support agents apply a category tag to a ticket when they resolve and close it. The category tags can give insights into the types and frequency of issues faced by our customers.


Examples of tools that support agents in your company could use to respond to customer support inquiries and tag tickets include Freshdesk, Zoho Desk, Zendesk, HappyFox, ServiceNow, Salesforce, Freshservice, and Agiloft.
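If your tool exposes an API, you can pull these tags programmatically instead of exporting reports by hand. Below is a minimal Python sketch assuming a Zendesk-style REST API (the /api/v2/tickets.json endpoint and the per-ticket "tags" field); the other tools expose similar data under different endpoints, and the subdomain and token here are placeholders.

```python
# Minimal sketch: count how often each category tag appears across tickets,
# assuming a Zendesk-style REST API. Subdomain and token are placeholders.
from collections import Counter

import requests

SUBDOMAIN = "yourcompany"  # hypothetical Zendesk subdomain
AUTH = ("agent@example.com/token", "YOUR_API_TOKEN")  # email/token auth


def count_ticket_tags() -> Counter:
    tag_counts: Counter = Counter()
    url = f"https://{SUBDOMAIN}.zendesk.com/api/v2/tickets.json"
    while url:
        resp = requests.get(url, auth=AUTH, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        for ticket in data["tickets"]:
            tag_counts.update(ticket.get("tags", []))  # tally each tag
        url = data.get("next_page")  # follow pagination until exhausted
    return tag_counts


if __name__ == "__main__":
    for tag, count in count_ticket_tags().most_common(10):
        print(f"{tag}: {count}")
```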


When I looked at the data, I realized that

  1. The categories had overlaps i.e. some tickets could fit in category 1 as well as category 2 just as easily,

  2. The categories were not exhaustive, i.e. they did not cover all possibilities, and

  3. The distribution of tickets across categories showed that some categories were vague.

The data was, therefore, not useful.


[Image: Graph showing categories from the old taxonomy]


To make sense of this data, I now needed to go beyond the category counts and look at the tickets one by one. I assumed one month's worth of data to be representative, which came to about 1,000 tickets. Engineering managers, designers, fellow Product Managers, and I systematically read through each of those 1,000 tickets and tagged it with the specific customer problem the customer faced.


[Image: Pie chart showing ticket distribution per customer problem]


Above, you can see a visual of the manually reviewed tickets grouped into about 200 customer problems. That is far too many distinct problems to act on at this level of granularity, so grouping them further might help find the biggest ones we should solve. This exercise also helped us understand the percentage of tickets we were targeting to solve with the self-service features.
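The ranking step itself is straightforward once the manual tagging is done. Here is a small, hypothetical Python sketch of how you might rank the ~200 customer problems by frequency and see each problem's cumulative share of tickets; the CSV layout (ticket_id, problem) is an assumption standing in for however your review sheet is actually stored.

```python
# Sketch: rank manually tagged customer problems by ticket frequency and
# print each problem's cumulative share, Pareto-style, to surface the
# biggest problems worth solving. Assumes a CSV with a "problem" column.
import csv
from collections import Counter


def rank_problems(path: str = "tagged_tickets.csv") -> None:
    counts: Counter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["problem"]] += 1  # one row per reviewed ticket

    total = sum(counts.values())
    running = 0
    for problem, n in counts.most_common():
        running += n
        print(f"{problem:40s} {n:5d} tickets  {running / total:6.1%} cumulative")


rank_problems()
```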


However, past tickets alone were not sufficient to understand the likely impact of the feature. And since the ticket category tags were vague, new ticket volumes would not explain the feature's impact either.


Problem — How many tickets did the feature launch reduce?


Revamping the Ticket Categories


Given the drawbacks of the old categories and the deep-dive into the 1,000 customer support tickets, I had the opportunity to lead a revamp of the category tags. I’ll refer to the system of the category tags as a taxonomy.


First, let us think about the users of the taxonomy of category tags. The taxonomy needed to satisfy four user segments, each of whom had different needs:

  1. Customers,

  2. Support agents,

  3. Product managers, and

  4. Upper management.


[Image: Needs of customers, support agents, product managers, and upper management]


Let us discuss the needs of the different users of the taxonomy so that we can understand what the revamp needs to focus on:

  1. Customers want to file a ticket as quickly as possible, but also in a way that gets them a response quickly. They want to select as relevant a category as possible, even though they know nothing about most of the categories. So the categories visible to customers should be easy to understand for someone creating a ticket for the first time.

  2. Support agents need a quick way to tag a ticket since they are constrained on the time available to solve it. The fewer the categories, the faster they can tag. On the other hand, support agents can spend time upfront ramping up on the categories and can learn to navigate unintuitive category names through practice and repetition.

  3. Product Managers want valuable ticket data. In-depth categories that still allow accurate classification of each ticket make the data valuable. More focused categories yield the frequency of each customer complaint, which helps in feature prioritization and in tracking adoption.

  4. Executives in upper management want an easy way to differentiate a severe problem from a minor confusion. They also want to reduce external influences on the metrics, since ticket volumes are output metrics, not input metrics, and they want the metrics to reflect the impact of the team's efforts.

To meet these varied needs, we can use a multi-level taxonomy. The constraints identified are:

  1. The high-level categories can be chosen by customers, so they should be intuitive at first glance.

  2. The mid-level categories are chosen by support agents and some customers, so they need to be intuitive as well but should also provide high-level insights to PMs.

  3. The low-level categories are chosen by support agents, so there can be more of them; they should help PMs and upper management understand the different types of problems customers face.

The revamp focused on providing a taxonomy that is

  1. Collectively exhaustive,

  2. Mutually exclusive (MECE),

  3. Precise instead of vague, and

  4. Concise, not expansive.

Although the team and I had tagged 1,000 tickets into 200 customer problems, we needed to compress this list into a taxonomy that followed the principles identified above. For comparison, the old taxonomy had fewer than 10 categories.


[Image: Graph showing low-level categories of customer problems]

You can see a distribution of the 200 customer problems grouped by low-level categories above. Utilizing the constraints above and iteratively combining the 200 customer problems, we crystallized 15 low-level categories. We then grouped these 15 categories into 4 mid-level categories.
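To make the structure concrete, here is a hedged Python sketch of the three-level mapping as plain data, with two checks that mirror the principles above: each low-level category belongs to exactly one mid-level category (mutual exclusivity of the hierarchy), and every observed customer problem maps to a known low-level category (exhaustiveness). The category and problem names are illustrative, not the real taxonomy.

```python
# Sketch of a mid-level -> low-level taxonomy plus a problem-to-category map,
# with MECE-style validation. All names below are made up for illustration.
TAXONOMY = {
    "Account & profile": ["Update contact details", "Login issues"],
    "Billing": ["Incorrect charge", "Refund request"],
    # ... 4 mid-level and 15 low-level categories in total
}

PROBLEM_TO_LOW = {
    "customer cannot change mailing address": "Update contact details",
    "customer double-charged on renewal": "Incorrect charge",
    # ... ~200 customer problems in total
}


def validate(taxonomy: dict, problem_map: dict) -> None:
    low_to_mid: dict = {}
    for mid, lows in taxonomy.items():
        for low in lows:
            # Mutual exclusivity: a low-level category has exactly one parent.
            assert low not in low_to_mid, f"{low!r} appears under two mid-level categories"
            low_to_mid[low] = mid
    for problem, low in problem_map.items():
        # Exhaustiveness: every tagged problem resolves to a known category.
        assert low in low_to_mid, f"{problem!r} maps to unknown category {low!r}"
    print(f"{len(taxonomy)} mid-level, {len(low_to_mid)} low-level categories; all problems covered")


validate(TAXONOMY, PROBLEM_TO_LOW)
```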


[Image: Graph showing mid-level categories of customer problems]

The new taxonomy met the constraints of being MECE and was approved by the team that implements taxonomy changes, but it still needed testing to confirm that support agents would find it easy to use.


Problem — how do we test how easy the taxonomy is for support agents to use, without implementing it?


Testing the New Category Taxonomy


Along with the support specialist and the team leads of the different customer support teams, I set up 10 mock tickets and a prototype ticket categorization system. (The support specialist's priority was to help the customer support team by coordinating between the support team and the product managers.) I invited support agents from different geographies, as well as the Ops team referenced earlier, to review the 10 sample tickets and categorize them. I built this in Google Sheets, letting each participant choose a high-level category and then one of the corresponding mid-level and low-level categories. Then I asked each of them for the rationale behind the category they chose.


This user testing with mock data and a mock system gave me valuable feedback, helped me get buy-in from the support agents, and let me tweak and finalize the taxonomy.
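One way to quantify that feedback (not necessarily how we did it) is to measure how often agents agree on the category for each mock ticket; low agreement on a ticket flags categories that are confusing. A minimal sketch, with hypothetical mock-test responses:

```python
# Sketch: per-ticket agreement rate across agents in the mock categorization
# test. The responses dict below is invented example data.
from collections import Counter

# mock ticket id -> category chosen by each participating agent
responses = {
    "mock-1": ["Update contact details"] * 4 + ["Login issues"],
    "mock-2": ["Incorrect charge"] * 5,
}

for ticket, picks in responses.items():
    top_category, top_votes = Counter(picks).most_common(1)[0]
    agreement = top_votes / len(picks)  # share of agents on the modal category
    print(f"{ticket}: {agreement:.0%} agreed on {top_category!r}")
```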

Lastly, the revamped taxonomy was implemented by the team specializing in maintaining our support tool.


Problem — Although we looked at about 1,000 tickets from one month of data, there were about 10,000 pending tickets in another support channel that weren’t analyzed. Why so many? What to do about it?


Another problem — the new taxonomy is implemented in the multiple support tools we use, but how will support requesters and responders across geographies and support channels ramp up on it?


Takeaways


Solving anecdotal customer problems is one of the common pitfalls I've seen Product Managers fall into, and I hope the above experience helps you think of ways to move beyond anecdotes and toward impact-based prioritization. Having a lot of data but not knowing what is in it is another challenge many PMs face nowadays, given the growth in the amount of data we capture from customer usage. The above could help you think of ways to generate value from that data, whether for tracking product adoption metrics or for finding the most important customer problems to solve. What else did you take away? What would you suggest I tweak in content or tone as I share the rest of the learnings?


Next Up…


In part 2, we will look at what to do with the 10,000 tickets that have not been analyzed. Do we start manual review again? Do we ignore this support channel and move on? On the other hand, how will every support tool user know about the new taxonomy, and how will they react to it?

Originally published at https://harshalpatil.substack.com on Apr 1, 2021

