Details matter. Don’t get me wrong, I wasn’t arguing against the value of customer insights. My concern centred specifically on the questions being asked. We seemed to be “leading the witness” by framing their problems through our own understanding of the service. Basically, we weren’t listening.
I’ve learned over the years that when teams set aside pride in what they’ve created, there’s an incredible opportunity to learn. If they are in fact open to that opportunity, outcomes begin to shift from defending output to discussing solutions grounded in the actual needs of the customer.
This isn’t a unique approach. In fact, it mirrors the scientific method of understanding nature as outlined by Richard Feynman. When evolving products and services, a good experience is defined by first testing assumptions, or, as Feynman describes them, our best ‘guesses’, about why people succeed or fail. It’s from this understanding that we can iterate on a foundation of objectivity, not on a subjective preference for design or a preferred method of development.
Context is the Kingdom
While there’s pressure to ship or improve services, there’s rarely a shared understanding of terms or of how to measure improvements both qualitatively and quantitatively. Although content has taken a back seat to high-fidelity mock-ups and speed, I still believe in the adage, “Content is king, but context is the kingdom.” In other words, without a shared understanding of what we mean when we describe troubles or issues, we can’t ask the better questions that lead to an improved customer experience.
The following is an example of how I’ve learned to guide discussions with customers toward that shared understanding. It doesn’t draw from a single experience as a consultant or employee, but rather from a consistent pattern of discussions and interactions over my career.
Questioning for Context
I was invited to several conversations with a range of customers. In the first round I didn’t interject, but kept detailed notes of what was shared. In particular, I focused on the recurring themes in the troubles customers were expressing.
Patterns that would impact our interactions quickly emerged. More importantly, customers were inconsistent in what they described as troubles in using the products, even while using similar phrasing. A few examples included:
- I think this is causing a bad user experience
- That’s not what I would call that category
- I can’t find that information because that’s not what our company calls it
- I’m not seeing the level of engagement I was expecting from users
In the next round of customer interviews, working from this feedback, I began to question for clarity whenever anyone brought up these phrases. While customers had their own opinions about each, very few could tie them back to analytics or to issues raised by their employees. After asking for everyone’s perspective, I would then clarify my understanding. For example, in one instance I recall saying:
“Please correct me if I’m wrong but it sounds like you’re making assumptions as to the issues your employees are experiencing.” After some clarification they agreed, so I continued. “That’s great! I say that because this gives me a starting point to see if what you’re guessing (drawing on Richard Feynman’s video) is true.”
I then went on to outline all of the different phrases I had heard and how I believed I could assist them going forward. For example, I contrasted their informal use of ‘user experience’ with the value of doing usability testing.
By engaging in a range of usability tests, we could work together to define ‘engagement’ and what specifically they wished to measure. In turn, this could start discussions around changes to features or services they would like to see incorporated, or even removed altogether.
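Once a team does agree on a working definition, it helps to write the metric down unambiguously so everyone is measuring the same thing. As a purely hypothetical sketch (the event names and the notion of a “meaningful action” below are illustrative, not drawn from any real customer engagement), ‘engagement’ might be operationalized as the share of sessions containing at least one meaningful action:

```python
# Hypothetical sketch: operationalizing "engagement" as a measurable metric.
# The event names and the set of "meaningful actions" are illustrative;
# the point is that the team agrees on them explicitly before measuring.

MEANINGFUL_ACTIONS = {"search", "save_item", "share", "complete_task"}

def engagement_rate(sessions):
    """Fraction of sessions that include at least one meaningful action.

    `sessions` is a list of sessions, each a list of event-name strings.
    """
    if not sessions:
        return 0.0
    engaged = sum(
        1 for events in sessions
        if any(event in MEANINGFUL_ACTIONS for event in events)
    )
    return engaged / len(sessions)

sessions = [
    ["page_view", "search", "save_item"],  # engaged
    ["page_view", "page_view"],            # not engaged
    ["share"],                             # engaged
    ["page_view"],                         # not engaged
]
print(engagement_rate(sessions))  # 0.5 under this definition
```

The value of a sketch like this isn’t the arithmetic; it’s the conversation it forces. Deciding which events count as “meaningful” is exactly the kind of shared definition these interviews aimed to surface.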
In bridging the divide between subjective assumptions and objective truths, trust grew stronger and open dialogue became the norm. Don’t get me wrong, it wasn’t all smooth sailing with this or many other customers; the tide always rises. However, by questioning for clarity and for greater context early, most storms were weathered before the next update.
As several customers have shared with me over the years, the greatest lesson they learned in these interactions was that they were asking the wrong questions. They weren’t considering their users, and in many cases they realized they were revealing their own bias about what constituted a good outcome.
In several instances they were focused only on analytics from the product, or only on the qualitative issues they heard from their employees. In essence, they lacked a “balanced” perspective because they hadn’t taken all of the information into account.
This is a critical lesson for all businesses today. When companies don’t ask the right questions, or don’t open up to the possibility that their own assumptions are incorrect, they dramatically limit their chances of success.
An example of this was outlined by author Kevin Ashton in his book “How to Fly a Horse: The Secret History of Creation, Invention, and Discovery.” In one historical account, Ashton outlined the problem with making the bicycle and the airplane work, and it had nothing to do with speed.
“Cycling was a new fashion in the 1890s. Bicycles are miracles of equilibrium. They are not easy to build or ride. When we cycle, we make constant adjustments to stay balanced. When we turn, we abandon this balance by steering and leaning, then recover it once our turn is complete. The problem with the bicycle is not motion, it is balance.

…The Wright brothers …saw an airplane as a bicycle with wings. The problem of the aircraft is not flying; like the bicycle, it is balance.

…The Wrights solved the problem by studying birds. A bird is buffeted by wind when it glides. It balances by raising one wingtip and lowering the other. The wind turns the wings like sails on a windmill until the bird regains equilibrium.”

How to Fly a Horse: The Secret History of Creation, Invention, and Discovery by Kevin Ashton