Check your emotions at the door…

When soliciting feedback (whether via a survey, poll or discussion), if you want useful, objective results it’s a good idea to leave emotions out of the research design.

I do a lot of work, the majority of it pro bono, for our local school district. It’s not that I’m totally altruistic, although I like to think I am at times. Really, it’s because I want folks to have good, useful, objective feedback so they can make smart decisions about children’s education. And in order to make smart decisions, whether they’re about advanced learning programs or how the PTA should spend generous parental donations, unbiased feedback is required.

I’m learning the hard way that the topic of children’s education is up there with money, religion and politics as a minefield of emotions.

So, how can one best approach this emotional battlefield?

First off, please, please don’t use “Pros and Cons” to elicit feedback. One person’s pros are inevitably someone else’s cons… A better choice is to use an unbiased statement and ask people how much they agree or disagree (in market research terminology, a Likert scale). You can then determine whether a topic is a ‘pro’ or a ‘con’ by how strongly respondents agree, or conversely disagree, with your statement.
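
To make the tally concrete, here’s a minimal sketch of scoring one statement on a 5-point agreement scale. The statement, responses, and threshold below are purely illustrative, and a real analysis would look at the full distribution of answers, not just the average.

```python
# Minimal sketch: classifying a topic as a 'pro' or 'con' from Likert responses.
# Assumes a 5-point agreement scale (1 = strongly disagree ... 5 = strongly agree).
# The statement and response values are made up for illustration.

from statistics import mean

statement = "The PTA should continue funding instrumental music."
responses = [5, 4, 4, 2, 5, 3, 4]  # one number per respondent

avg = mean(responses)
midpoint = 3  # the neutral point on a 5-point scale

if avg > midpoint:
    verdict = "pro"      # respondents lean toward agreement
elif avg < midpoint:
    verdict = "con"      # respondents lean toward disagreement
else:
    verdict = "neutral"  # respondents are split down the middle

print(f"Average agreement: {avg:.1f} -> {verdict}")
```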

Avoid ‘loaded’ questions and statements. You know, the ones where the author’s inherent bias is already worked into the question… like “have you always listened to that awful, noisy rock and roll?” Neutrality is your friend… think like Switzerland.

Finally, be sure to listen to all stakeholders in the process. Even though I balked at collaborative survey-writing, getting buy-in from everyone ensured my own biases were kept in check.

So…even if it’s ‘for the kids’, be sure your research design is impartial.


Making Trade-offs

I’ve just finished putting together a survey for one of our local schools. With state budgets in dire shape, school districts will be cutting back funding, leaving parents/PTAs to make up the gap. I could discuss the inequity of this issue until I’m blue in the face, but that really doesn’t have much to do with research.

How people make trade-offs is an interesting topic for researchers. Clients are always interested in how their customers choose Product A over Product B, or which features are most important (oooh, there’s that rating/ranking concept again!). In the case of our local school, it’s which programs are parents’ top priorities for the PTA to continue funding.

There are 9 programs for parents to prioritize, encompassing everything from drama classes to teaching assistants, with all kinds of things in between: a librarian assistant, playground equipment, instrumental music, etc. Which ones to fund are not easy choices for the PTA to make, nor for parents.

And, keeping the respondents in mind (in this case, public elementary school parents), the survey can’t be long, nor can it be very complex (no conjoint analysis here for you hard-core statisticians). With that in mind, I developed a simple ranking and budget-allocation model for the survey. I asked the parents to rank the 9 programs in order of importance, and then gave them a hypothetical budget ($1000) to allocate across those same 9 items. In addition, I assigned a dollar value to each item so parents could understand the ramifications of choosing one item over another.

In this case we really needed both questions. Ranking the items lets the PTA understand what is most important to parents if money were no object, or if fundraising wildly exceeded expectations, while having parents allocate hypothetical budget dollars helps bring their priorities back in line with reality.
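
For the curious, here’s a rough sketch of how the two questions could be tallied side by side. The program names, ranks, and dollar figures are all made up, and I’ve only listed a few of the 9 programs to keep things short.

```python
# Rough sketch of tallying the two questions together. Assumes each response
# carries a rank (1 = most important ... 9 = least) and a dollar allocation
# summing to $1000 across the programs. All data below is illustrative.

from statistics import mean

programs = ["drama", "teaching assistants", "librarian assistant",
            "playground equipment", "instrumental music"]  # 9 in the real survey

# Each respondent: {program: (rank, dollars allocated)}
responses = [
    {"drama": (3, 100), "teaching assistants": (1, 400),
     "librarian assistant": (4, 100), "playground equipment": (5, 100),
     "instrumental music": (2, 300)},
    {"drama": (5, 50), "teaching assistants": (1, 500),
     "librarian assistant": (3, 150), "playground equipment": (4, 100),
     "instrumental music": (2, 200)},
]

# Compare each program's average rank (pure importance) against its average
# dollar allocation (importance constrained by a budget).
for program in programs:
    avg_rank = mean(r[program][0] for r in responses)
    avg_dollars = mean(r[program][1] for r in responses)
    print(f"{program:22s} avg rank {avg_rank:.1f}  avg allocation ${avg_dollars:.0f}")
```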

And, hopefully, it inspires them to donate more money…


Rating ≠ Ranking

We researchers love ratings and rankings–learning how important ideas, attributes or budget items are is often the core of what we do.

There are many ways to understand importance–two that are used quite often are ratings and rankings.

BIG WORD OF CAUTION… they are not the same. In fact, the only things they have in common are the first two letters.

You rate something, usually by importance, on a 3-point, 5-point, 7-point or even a 9-point scale, with 1 usually ‘least important’ and your top point ‘most important’. Ratings are not mutually exclusive: many items can be rated the same. They’re useful for a large series of items where ranking would be too difficult.

You rank a series of ideas in order of importance as well. But rankings are mutually exclusive, with each number in the ‘scale’ used only once.

An easy way to think of this: on a 5-point scale, many items can be rated a 5, but only one can be ranked a 5.
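
If you think in code, the difference boils down to a simple validity check: a rating is any set of values on the scale, ties and all, while a ranking must be a permutation. A quick sketch (the 5-item scale below is just for illustration):

```python
# Minimal sketch of the rating vs. ranking distinction, assuming a 5-point
# scale over 5 items. The example responses are made up.

def is_valid_rating(values, top=5):
    # Ratings: each item is scored independently from 1 to top; ties are fine.
    return all(1 <= v <= top for v in values)

def is_valid_ranking(values):
    # Rankings: each number 1..n is used exactly once (a permutation).
    return sorted(values) == list(range(1, len(values) + 1))

print(is_valid_rating([5, 5, 3, 5, 1]))   # True: many items can be rated a 5
print(is_valid_ranking([5, 5, 3, 5, 1]))  # False: 5 appears more than once
print(is_valid_ranking([5, 4, 3, 2, 1]))  # True: each rank used exactly once
```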

I just finished a survey where I gave the client those two options: a rating or a ranking. Both were valid choices; both would give the client the information they needed to make a smart decision. This particular client was hosting the survey themselves and elected to use the ranking question…

…but changed the wording to ‘rating’. Grrrr.
