DIY Deja Vu

A week doesn’t go by in #MRX land (Market Research in the Twitterverse) without someone bashing the Do-It-Yourself phenomenon of survey production.

This reminds me of my somewhat distant past in graphic design. I was a Marketing Communications Manager for a large paper company at the time, when the advent of “Desktop Publishing” invaded the hallowed halls of commercial design and production.

At that time Graphic Designers were artists, using pen and paper, specifying fonts and photos, laying out designs with non-photo blue pencil. There were type houses whose sole job was to produce beautiful fonts on sheets of paper for the layout artists to cut and arrange. No WYSIWYG here: typesetting was command code only. Think HTML for the dark ages.

Then a little company named Adobe created PostScript, enabling Aldus to create PageMaker, the first page layout program.

Suddenly anyone could be a ‘designer’ because they had a new tool that let them lay out type and images. Graphic Designers balked at this idea and many resisted the change. But, in time, using a computer to do layouts became de rigueur, and those who didn’t have innate talent or training in design fell by the wayside.

It wasn’t enough to just use the tools anymore…

The same is true today in market research. The advent of tools like Google Surveys, Survey Monkey and Survey Gizmo (all good products for basic data collection needs) has made my industry balk at those who send out a “Survey Monkey” to their customers. Companies feel they are engaging with their customers by sending out a survey…except that many of these ‘surveys’ are full of biased questions, too many open-ended responses (see this post all about those!), pages and pages of characteristic grids, and many of the other painful survey mistakes we market researchers cringe at.

At some point, as it did for graphic designers, merely using the tool won’t be enough. We just have to be patient and keep educating companies about what ‘good research’ is. Until then, let the teeth gnashing continue.


Beyond Black Belt and the Art of Continuous Improvement

After 12 weeks of grueling work, I recently tested for and was awarded my first-degree black belt in Kenpo Karate. My sensei likes to remind us that becoming a black belt is just another step in the journey, and continuing on is just as important as achieving that initial milestone.

“Continuous Improvement” isn’t just a buzz phrase in Martial Arts; it’s a mantra. No matter how experienced you are, there is always something you can improve, whether it’s your stances (toe/heel alignment!), your blocks and punches, or a new form. I feel I have vastly improved from where I was 15 months ago when I first earned my black belt, and I am excited to keep up my journey, even though my next advancement is at least two years away.

How does this relate to the world of Market Research? If we #MRX practitioners do not continuously improve and innovate, new technologies and ‘paradigm shifts’ will overtake us. The days of yearly tracking studies, of asking respondents to recall their past purchases, and of clients who rely on us to do all their market research are fading away. DIY surveys, communities, polls, and real-time transactional data (#bigdata) are all here NOW. We can either embrace them and learn how to harness them, or we can be left by the wayside.

Just as I continuously improve in Kenpo, I plan to keep learning and practicing new market research methods.


My Cereal Doesn’t Have a Personality…

What is it that compels researchers to anthropomorphize objects? I recently took a survey asking me to assign personality traits to cereal…yep, cereal.

I understand from an advertising point of view that the client would like to learn what respondents associate with a particular cereal brand, but sorry, a cereal brand is not “Intelligent, Independent, Approachable, or Trendy”. It’s cereal. Oats and flakes. Nothing more.

OK, maybe “Healthy” would be a more appropriate choice…

So, what about assigning descriptions that actually fit the product? Then I could rate a cereal brand on being healthy, convenient, vitamin-rich and other characterizations that fit the product.

Not “Fun”…breakfast really isn’t all that much fun.



Consensus is one of those words you either love or hate. If you live in the Pacific NW (especially Seattle), it’s a way of life.

After my last, somewhat harsh post about collaborative survey design, I’m happy to report that not only did we manage to create a pretty useful survey (OK, there’s a teeny tiny issue of bad parallel constructs, but heck…it satisfied some loud constituents), but within 24 hours we achieved almost a 20% response rate. And no loud complaining about the survey on our local school blogs.

So, while gaining consensus can be long and painful (think root canal pain), it really can offer some positive results.


“Does this survey make my rear end look big?”…and the perils of collaborative survey design.

I’m currently working on another pro-bono project for a task force of…30 people. Yes, you read that correctly: thirty people. When I heard there were that many folks, my thoughts immediately turned to the perils of ‘group-think’ and collaborative decision-making.

Turning a major initiative over to thirty people is like asking all of your friends (30 of them!) whether your new dress is flattering. Plan for 30 different opinions. In my case, I’m steeling myself for line edits galore and massive wordsmithing of characteristic statements in the survey. Let’s just hope everyone is OK with a 7-point Likert scale.

In our initial meeting, task force members were somewhat clear on the overall objective, but were definitely not in agreement as to what they ‘needed to know’ in order to make recommendations, which is my first step in research design. We spent so much time trying to figure that out that we had very little time to explore the characteristics of a successful program. In fact, while we did manage to come up with a long list of ideal program characteristics, we didn’t have a chance to rank them, so the task force is expected to complete this ranking exercise via email this week.

I’m envisioning thirty nit-picky changes to each statement. I hope I’m wrong.

Lesson learned: if you can’t make the group smaller, make the task timelines very short and concrete. Consensus among 30 people is not going to happen, so keep them focused on the outcome, not the steps to get there.