DIY Deja Vu

A week doesn’t go by in #MRX land (Market Research in the Twitterverse) without someone bashing the Do-It-Yourself phenomenon of survey production.

This reminds me of my somewhat distant past in graphic design. I was a Marketing Communications Manager for a large paper company at the time, when “Desktop Publishing” invaded the hallowed halls of commercial design and production.

At that time, Graphic Designers were artists, using pen and paper, specifying fonts and photos, laying out designs with non-photo blue pencil. There were type houses whose sole job was to produce beautiful fonts on sheets of paper for the layout artists to cut and arrange. No WYSIWYG here–typesetting was command code only. Think HTML for the dark ages.

Then a little company named Adobe created PostScript, enabling Aldus to create PageMaker–the first page layout program.

Suddenly anyone could be a ‘designer’ because they had a new tool that enabled them to lay out type and images. Graphic Designers balked at this idea and many resisted the change. But, in time, using a computer to do layouts became de rigueur and those who didn’t have innate talent or training in design fell by the wayside.

It wasn’t enough to just use the tools anymore…

The same is true today in market research. The advent of tools like Google Surveys, Survey Monkey and Survey Gizmo (all good products for basic data collection needs) has made my industry balk at those who send out a “Survey Monkey” to their customers. Companies feel they are engaging with their customers by sending out a survey…except that many of these ‘surveys’ are full of biased questions, too many open-ended responses (see this post all about those!), pages and pages of characteristic grids, and many of the other painful survey mistakes that make us market researchers cringe.

At some point, like it did for graphic designers, using the tool won’t be enough. We just have to be patient and keep educating companies about what is ‘good research’. Until then, let the teeth gnashing continue.


Check your emotions at the door…

When soliciting feedback (whether through a survey, a poll or a discussion), it’s a good idea to leave emotions out of the research design if you want useful, objective results.

I do a lot of work, the majority of it pro bono, for our local school district. It’s not that I’m totally altruistic, although I like to think I am at times. Really, it’s because I want folks to have good, useful, objective feedback so they can make smart decisions about children’s education. And in order to make smart decisions, whether they are about advanced learning programs or how the PTA should spend generous parental donations, unbiased feedback is required.

I’m learning the hard way that the topic of children’s education is up there with money, religion and politics as a minefield of emotions.

So, how can one best approach this emotional battlefield?

First off, please, please don’t use “Pros and Cons” to elicit feedback. One person’s pros are inevitably someone else’s cons… A better choice would be to use an unbiased statement and ask people whether they agree or disagree (in market research terminology, a Likert scale). You can determine whether a topic is a ‘pro’ or a ‘con’ by how much someone agrees, or conversely disagrees, with your statement.
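For the numerically inclined, a quick sketch: assuming a 5-point agreement scale coded 1 to 5, tallying a single Likert item to see whether a statement leans ‘pro’ or ‘con’ might look something like this in Python (the responses below are made up for illustration).

# Hypothetical example: scoring one Likert item on a 5-point scale,
# where 1 = Strongly disagree and 5 = Strongly agree. Data is invented.
responses = [4, 5, 3, 2, 5, 4, 4, 1, 5, 3]

mean_score = sum(responses) / len(responses)
midpoint = 3  # the neutral point on a 5-point scale

if mean_score > midpoint:
    leaning = "pro"      # respondents tend to agree with the statement
elif mean_score < midpoint:
    leaning = "con"      # respondents tend to disagree
else:
    leaning = "neutral"

print(f"Mean agreement: {mean_score:.1f} -> leans {leaning}")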

Avoid ‘loaded’ questions and statements. You know, the ones where the author’s inherent bias is already worked into the question…like “Have you always listened to that awful, noisy rock and roll?” Neutrality is your friend…think like Switzerland.

Finally, be sure to listen to all stakeholders in the process. Even though I balked at collaborative survey-writing, getting buy-in from everyone ensured my own biases were kept in check.

So…even if it’s ‘for the kids’, be sure your research design is impartial.


My Cereal Doesn’t Have a Personality…

What is it that compels researchers to anthropomorphize objects? I recently took the survey below, which asked me to assign personality traits to cereal…yep, cereal.

I understand from an advertising point of view that the client would like to learn what respondents associate with a particular cereal brand, but sorry, a cereal brand is not “Intelligent, Independent, Approachable, or Trendy”. It’s cereal. Oats and flakes. Nothing more.

OK, maybe ‘Healthy’ would be a more appropriate choice…

So, what about assigning descriptions that actually fit the product? Then I could rate a cereal brand on being healthy, convenient, vitamin-rich and other characterizations that genuinely apply.

Not “Fun”…breakfast really isn’t all that much fun.


Consensus

Consensus is one of those words that can go either way–you either love it or hate it. If you live in the Pacific NW (especially Seattle) it’s a way of life.

After my last, somewhat harsh post about collaborative survey design, I’m happy to report that not only did we manage to create a pretty useful survey (OK, there’s a teeny tiny issue of bad parallel constructs, but heck…it satisfied some loud constituents), but within 24 hours we achieved almost a 20% response rate. And, no loud complaining about the survey on our local school blogs.

So, while gaining consensus can be long and painful (think root canal pain), it really can offer some positive results.


“Does this survey make my rear end look big?”…and the perils of collaborative survey design.

I’m currently working on another pro-bono project for a task force of…30 people. Yes, you read correctly–thirty people. When I heard there were that many folks, my thoughts immediately turned to the perils of ‘group-think’ and collaborative decision-making.

Turning a major initiative over to thirty people is like asking all of your friends (30 of them!) whether your new dress is flattering. Plan for 30 different opinions. In my case, I’m steeling myself for line edits galore and massive wordsmithing of characteristic statements in the survey. Let’s just hope everyone is OK with a 7-point Likert scale.

In our initial meeting, task force members were somewhat clear on the overall objective, but were definitely not in agreement as to what they ‘needed to know’ in order to make recommendations (my first step in research design). We spent so much time trying to figure that out that we had very little time to explore the characteristics of a successful program. In fact, while we did manage to come up with a long list of ideal program characteristics, we didn’t have a chance to rank them, so the task force is expected to accomplish this ranking exercise via email this week.

I’m envisioning thirty nit-picky changes to each statement. I hope I’m wrong.

Lesson learned: if you can’t make the group smaller, make the task timelines very short and concrete. Consensus among 30 people is not going to happen, so keep them focused on the outcome, not the steps to get there.
