It’s Not a Research Problem — It’s a Business Problem

I find it hard to believe that any business owner wakes up and says “I think we need an awareness and perception study to understand the drivers of preference for our brand.” Rather, they are probably wondering what their customers think of their products and how they can increase sales. See the difference? The first looks at the world from a research POV, while the other states a business problem–a common one at that.

But we, as researchers, can get quite hung up on being statistically accurate when we design research studies, rather than taking a step back and putting on our business hats. Case in point: I worked for a market research firm that designed a pricing study for a major software company with three (yes, count them, three!) models–including a conjoint (and everyone who knows me knows that just the term ‘conjoint’ makes the hair on the back of my neck stand up).

While the head of research was a super-smart, PhD-type guy, he didn’t stop to consider that the business model for selling this software was negotiated sales through third parties (referred to as Value-Added Resellers, or VARs). So, overkill was the word of the day for that project. Had the firm stepped back and thought about the business problem of what to charge for a software package, it could have been a simple triangulation of (1) secondary research on what competitors charge for similar products, (2) how much VARs think the software is worth, and (3) how important the software is in customers’ overall IT strategy.
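For the curious, here is a minimal sketch of how such a triangulation might be combined, assuming a simple weighted blend; all of the figures, weights, and the triangulate_price name below are my own illustrative assumptions, not numbers from the actual project.

```python
# Hypothetical sketch of a simple pricing "triangulation".
# Every number, weight, and name here is an illustrative assumption,
# not a figure from the actual engagement.

def triangulate_price(competitor_prices, var_estimates, strategic_importance, base_weight=0.5):
    """Blend three signals into a rough price anchor.

    competitor_prices    -- prices competitors charge for similar products (secondary research)
    var_estimates        -- what VARs think the software is worth
    strategic_importance -- 0.0-1.0 rating of how central the software is to customers' IT strategy
    """
    competitor_anchor = sum(competitor_prices) / len(competitor_prices)
    var_anchor = sum(var_estimates) / len(var_estimates)

    # Simple blend of the two price anchors...
    blended = base_weight * competitor_anchor + (1 - base_weight) * var_anchor

    # ...nudged up or down by strategic importance (a +/-10% swing,
    # purely as an illustrative assumption).
    return blended * (0.9 + 0.2 * strategic_importance)


if __name__ == "__main__":
    price = triangulate_price(
        competitor_prices=[950, 1100, 1025],   # secondary research
        var_estimates=[900, 1000],             # VAR interviews
        strategic_importance=0.7,              # customer interviews
    )
    print(f"Rough price anchor: ${price:,.2f}")
```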

So, as researchers, let’s put on our business hats first!


DIY Deja Vu

A week doesn’t go by in #MRX land (Market Research in the Twitterverse) without someone bashing the Do-It-Yourself phenomenon of survey production.

This reminds me of my somewhat distant past in graphic design. I was a Marketing Communications Manager for a large paper company at the time–when “Desktop Publishing” invaded the hallowed halls of commercial design and production.

At that time Graphic Designers were artists, using pen and paper, specifying fonts and photos, laying out designs with non-photo blue pencil. There were type houses whose sole job was to produce beautiful type on sheets of paper for the layout artists to cut and arrange. No WYSIWYG here–typesetting was command code only. Think HTML for the dark ages.

Then a little company named Adobe created PostScript, enabling Aldus to create PageMaker–the first page layout program.

Suddenly anyone could be a ‘designer’ because they had a new tool that enabled them to lay out type and images. Graphic Designers balked at this idea and many resisted the change. But, in time, using a computer to do layouts became de rigueur, and those who didn’t have innate talent or training in design fell by the wayside.

It wasn’t enough to just use the tools anymore…

The same is true today in market research. The advent of tools like Google Surveys, Survey Monkey and Survey Gizmo (all good products for basic data collection needs) has made my industry balk at those who send out a “Survey Monkey” to their customers. Companies feel they are engaging with their customers by sending out a survey…except that many of these ‘surveys’ are riddled with biased questions, too many open-ended responses (see this post all about those!), pages and pages of characteristic grids, and many of the other painful survey mistakes that make us market researchers cringe.

At some point, as it did for graphic designers, simply using the tool won’t be enough. We just have to be patient and keep educating companies about what ‘good research’ is. Until then, let the teeth-gnashing continue.


Beyond Black Belt and the Art of Continuous Improvement

After 12 weeks of grueling work, I recently tested for and was awarded my first-degree black belt in Kenpo Karate. My sensei likes to remind us that becoming a black belt is just another step in the journey and continuing on is just as important as achieving that initial milestone.

“Continuous Improvement” isn’t just a buzz phrase in Martial Arts; it’s a mantra. No matter how experienced you are, there is always something you can improve–whether it’s your stances (toe/heel alignment!), your blocks and punches, or learning a new form. I feel I have vastly improved from where I was 15 months ago when I first earned my black belt, and I am excited to keep up my journey–even though my next advancement is at least two years away.

How does this relate to the world of Market Research? If we #MRX practitioners do not continuously improve and innovate, new technologies and ‘paradigm shifts’ will overtake us and we will be left behind. The days of yearly tracking studies, asking respondents to think about their past purchases, and clients who rely on us to do all their market research are fading away. DIY surveys, communities, polls, real-time transactional data (#bigdata) are all here NOW. We can either embrace them and learn how to harness them, or we can be left by the wayside.

Like continuously improving in Kenpo, I plan to keep updating and practicing new market research methods.


The Dark Side of Customer Satisfaction Surveys

In the past month I have had the ‘pleasure’ of dealing with two different car dealerships…once for service and once for purchasing a new car. Trust me, the latter was way more enjoyable. (Secret: use a broker so you don’t have to haggle.)

While one dealer specializes in German cars and the other in Japanese cars, the one thing they have in common is an overwhelming need to make sure I’m ‘satisfied’.

As some readers know, I’m a sucker for taking surveys–it’s fun to guess exactly what the data collector is really after–and I’m pretty blunt when it comes to customer satisfaction surveys.

So, back to my German car servicing experience. Not great. And, hoping to make an impression, I was very detailed in my responses. And the dealer’s response? A phone call with a lengthy explanation of why they do business the way they do. And a follow-up email making sure I had received the phone call. An apology? Nope.

Japanese car buying experience. OK. It took a long time to get the paperwork done, but the salesman was very nice, showed us all of the nifty features on my new car, and even paired our cell phones so they work hands-free. But he mentioned not once, not twice, but three times that we would be getting a call about his service and that, on a 5-point scale, a 4 was considered a failure.

This is where the ‘dark side’ rears its ugly head. Both dealerships are under scrutiny from their US affiliates to provide exceptional customer service, and instead of thinking of ways to improve their service, they both focus on the survey results.

These results-oriented surveys miss the mark by a wide margin, but we see them time and time again. And it’s not limited to customer-service scenarios. Think about the ‘high-stakes testing’ prevalent in education (where I spend a fair amount of pro-bono time)–student test scores are being used for teacher evaluations and school performance indicators, forcing a ‘teach to the test’ mentality.

This will only change when the focus is not on the survey results (or test scores) in isolation, but rather on what can be done to improve the experience.


Check your emotions at the door…

When soliciting feedback (whether via a survey, poll or discussion), it’s a good idea to leave emotions out of the research design if you want useful, objective results.

I do a lot of work, the majority of it pro bono, for our local school district. It’s not that I’m totally altruistic, although I like to think I am at times. Really, it’s because I want folks to have good, useful, objective feedback so they can make smart decisions about children’s education. And in order to make smart decisions–whether they are about advanced learning programs or how the PTA should spend generous parental donations–unbiased feedback is required.

I’m learning the hard way that the topic of children’s education is up there with money, religion and politics as a minefield of emotions.

So, how can one best approach this emotional battlefield?

First off, please, please don’t use “Pros and Cons” to elicit feedback. One person’s pros are inevitably someone else’s cons… A better choice is to use an unbiased statement and ask people whether they agree or disagree (in market research terminology, a Likert scale). You can determine whether a topic is a ‘pro’ or a ‘con’ by how strongly people agree, or conversely disagree, with your statement.
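For the spreadsheet-minded, here is a minimal sketch of how agree/disagree responses could be scored, assuming a standard 5-point coding (1 = strongly disagree through 5 = strongly agree); the neutral cutoff and the classify_statement name are my own assumptions, not anything prescribed above.

```python
# Hypothetical sketch: classify a statement as a 'pro' or 'con' from Likert responses.
# The 5-point coding (1 = strongly disagree ... 5 = strongly agree) and the
# neutral midpoint of 3.0 are assumptions for illustration only.

from statistics import mean

def classify_statement(responses, neutral=3.0):
    """Return ('pro' | 'con' | 'mixed', average score) for one statement."""
    avg = mean(responses)
    if avg > neutral + 0.25:        # clearly more agreement than disagreement
        label = "pro"
    elif avg < neutral - 0.25:      # clearly more disagreement
        label = "con"
    else:
        label = "mixed"             # opinions cluster around neutral
    return label, avg


if __name__ == "__main__":
    # Example responses to a neutral statement such as
    # "The advanced learning program serves all students well."
    responses = [5, 4, 4, 2, 5, 3, 4]
    label, avg = classify_statement(responses)
    print(f"Average {avg:.2f} -> treated as a '{label}'")
```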

Avoid ‘loaded’ questions and statements. You know, the ones where the author’s inherent bias is already worked into the question…like “have you always listened to that awful, noisy rock and roll?” Neutrality is your friend…think like Switzerland.

Finally, be sure to listen to all stakeholders in the process. Even though I balked at collaborative survey-writing, getting buy-in from everyone ensured my own biases were kept in check.

So…even if it’s ‘for the kids’, be sure your research design is impartial.
