Get off the Review Roller Coaster

I was five years old the first time I went to Disney World. I stood on my tiptoes next to the ruler to see if I was tall enough to ride Space Mountain and barely squeaked by. I have always loved a thrill, and roller coasters have been a favorite of mine since that first day at Disney. But there is a time and place for the ups and downs that come with that kind of ride. In product development, the roller coaster of reviews is one I not only don't recommend; I believe it distorts the way the team consumes feedback at this critical touch point with the market.

The product development team has all been there: working on a project for months, hours upon hours spent by authors or subject matter experts and product developers to create a version of the product that is certainly not final, but ready for market feedback. It is an exciting time. Each outreach to the market builds up the product itself and builds the relationships that will endure beyond the product development process. The team has crafted the right mix of questions for the reviews, involved the right potential customers, and the results are coming in…

And so begins the roller coaster ride…

One review comes in. They love it!

Next review comes in. They hate it!

Next review comes in. Up.

Next review comes in. Down.

Next review comes in. Up.

Next review comes in. Down.

Get control of this ride. You have only one chance to read reviews with a fresh set of eyes. Below are recommendations for getting the most out of this critical feedback from your market.

Wait! Read all reviews at one time.

I have been guilty of this, but eventually learned: pause and resist the temptation to open the first couple of emails to see what the "market" thinks. A collection of reviews can give insight into what the market thinks; one review in isolation is one person's opinion. Wait until the bulk of the reviews are in so you can see whether a particular review is an outlier or matches the majority of the feedback. If you read results as they trickle in, you miss the perspective that comes with receiving all the feedback at once. You still get the ups and downs of each review, but you get them right on top of each other, and you can see the connections between them in a way you can't when you read reviews individually as they arrive.

Read each individual review separately.

Survey software makes it easy to jump straight to a particular question and see everyone's responses at once. I highly recommend this when you go deeper into the feedback, but on first read it is best to read all of one reviewer's responses together. Their answers to individual questions make much more sense, and nuance surfaces that would not be visible otherwise. Who hasn't read a review and realized the reviewer's response is less about anything you asked them to evaluate and more a gripe with the industry in general? Or a review where it later comes out that the reviewer had their own project denied and thinks their idea was better than what you are now building? I have received these kinds of responses (among many others). You can sense when things outside the product under review are influencing a particular reviewer's responses. It doesn't discount the review, but it gives it a color you wouldn't get if you read it in isolation, and it makes some statements make more sense.

Don't try to get immediately to the point.

Faced with pages of thoughtful, detailed responses, I would want to (and, until I learned better, sometimes actually would) start each review by jumping straight to the answer to this question, or a variation of it: "How likely are you to use this product?" before reading the rest of the review.

It's natural. As developers, we want to quickly and easily ascertain whether the customers we just involved in our product review are real potential users or not. We have a product launch date, and we want to count up the number of people who will use our product at that time to give us confidence we are on track.

Don't do this. Resist the temptation. Why?

Because reading that response first colors your perspective on the rest of the review. What if a customer responds that he or she is very likely to use your product? You will read the rest of the review, the good and the bad, knowing the bad can't be that bad because he or she is likely to use the product anyway. It's easier to discount negative feedback when you know it isn't detrimental to the customer using your product. If the potential customer will still use it, it can't be that big a deal, right?

In the same vein, if the customer wrote the exact same responses but had answered that he or she was unlikely to use your product, you would interpret them very differently. You might listen more closely to the negative feedback, knowing it was enough to keep the prospective customer from using your product, or you might dismiss the comments altogether, thinking this may just not be your customer anyway: if this person is never going to come along with us, let's find other customers who will.

Don't cheat yourself out of reading the detailed feedback in an unbiased way. The exact same comments take on very different meanings through the lens of "potential user" or not. You only get to do this once.

Product review questionnaires are a critical touch point with your market, and they are most impactful when you involve the right people at the right time and plan strategic follow-up. How you listen to and interpret that feedback is just as critical.

Are your reviews garnering the feedback you need to drive the development of products your markets want to use? Contact us. We can help.