The Dark Side of Customer Satisfaction Surveys

In the past month I have had the ‘pleasure’ of dealing with two different car dealerships…once for service and once for purchasing a new car. Trust me, the latter was way more enjoyable. (Secret: use a broker so you don’t have to haggle.)

While one dealer specializes in German cars and the other in Japanese cars, the one thing they have in common is an overwhelming need to make sure I’m ‘satisfied’.

As some readers know, I’m a sucker for taking surveys–it’s fun to guess exactly what the data collector is really after–and I’m pretty blunt when it comes to customer satisfaction surveys.

So, back to my German car servicing experience. Not great. And, hoping to make an impression, I was very detailed in my responses. And the dealer’s response? A phone call with a lengthy explanation of why they do business the way they do. And a follow-up email making sure I’d received the phone call. An apology? Nope.

Japanese car buying experience. OK. Took a long time to get the paperwork done, but the salesman was very nice, showed us all of the nifty features on my new car, and even paired our cell phones so they work hands-free. But he mentioned not once, not twice, but three times that we would be getting a call about his service and that, on a 5-point scale, a 4 was considered a failure.

This is where the ‘dark side’ rears its ugly head. Both dealerships are under scrutiny from their US affiliates to provide exceptional customer service and, instead of thinking of ways to improve that service, they both focus on the survey results.
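To see why a 4 feels like a failure to the dealer, here is a minimal sketch of “top-box” scoring, the kind of metric many manufacturers are said to grade dealers on, where only a perfect 5 counts. The numbers, cutoff, and function names are illustrative assumptions, not any particular manufacturer’s actual formula.

```python
# Illustrative "top-box" scoring: only perfect scores count toward the metric.
# The cutoff and the sample responses below are made up for demonstration.

def top_box_score(responses, top=5):
    """Percent of responses at the maximum value -- a 4 counts the same as a 1."""
    if not responses:
        return 0.0
    return 100.0 * sum(1 for r in responses if r >= top) / len(responses)

def average_score(responses):
    """Plain average, which at least gives some credit for a 4."""
    return sum(responses) / len(responses) if responses else 0.0

# Hypothetical survey results for one dealership
responses = [5, 5, 4, 4, 4, 3, 5]

print(f"Top-box score: {top_box_score(responses):.0f}%")   # ~43% -- looks like failure
print(f"Average score: {average_score(responses):.1f}/5")  # ~4.3 -- looks perfectly fine
```

The gap between a ‘failing’ 43% and a respectable 4.3 average is exactly why a salesman coaches customers so hard for 5s.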

These results-oriented surveys miss the mark by a wide margin, but we see them time and time again. And it’s not just confined to customer-service scenarios. Think about the ‘high-stakes testing’ prevalent in education (where I spend a fair amount of pro-bono time)–student test scores are being used for teacher evaluations and school performance indicators, forcing a ‘teach to the test’ mentality.

This will only change when the focus is not on the survey results (or test scores) in isolation, but rather on what can be done to improve the experience.


Check your emotions at the door…

When soliciting feedback (whether via a survey, poll, or discussion), it’s a good idea to leave emotions out of the research design if you want useful, objective results.

I do a lot of work, the majority of it pro bono, for our local school district. It’s not that I’m totally altruistic, although I like to think I am at times. Really, it’s because I want folks to have good, useful, objective feedback so they can make smart decisions about children’s education. And, in order to make smart decisions–whether they’re about advanced learning programs or how the PTA should spend generous parental donations–unbiased feedback is required.

I’m learning the hard way that the topic of children’s education is up there with money, religion and politics as a minefield of emotions.

So, how can one best approach this emotional battlefield?

First off, please, please don’t use “Pros and Cons” to elicit feedback. One person’s pros are inevitably someone else’s cons…A better choice is to use an unbiased statement and ask people whether they agree or disagree (in market research terminology, a Likert scale). You can determine whether a topic is a ‘pro’ or a ‘con’ by how much someone agrees, or conversely disagrees, with your statement.
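To make the “how much someone agrees” part concrete, here is a small sketch of how Likert responses might be tallied into a net-agreement figure. The statement wording, scale labels, thresholds, and numbers are all hypothetical, just one common way of summarizing this kind of data.

```python
# Hypothetical Likert item and responses; the wording and numbers are made up.
STATEMENT = "The advanced learning program should be expanded to all grades."

# Standard 5-point agreement scale
SCALE = {1: "Strongly disagree", 2: "Disagree", 3: "Neutral",
         4: "Agree", 5: "Strongly agree"}

responses = [5, 4, 4, 2, 3, 5, 1, 4, 2, 4]

agree = sum(1 for r in responses if r >= 4)
disagree = sum(1 for r in responses if r <= 2)

# Net agreement: positive means the group leans 'pro', negative leans 'con'
net = 100.0 * (agree - disagree) / len(responses)

print(STATEMENT)
print(f"Agree: {agree}, Disagree: {disagree}, Net agreement: {net:+.0f}%")
```

A positive net figure marks the topic as something the group treats as a ‘pro’, a negative one as a ‘con’, without the survey author ever having to decide which is which.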

Avoid ‘loaded’ questions and statements. You know the ones, where the author’s inherent bias is already worked into the question…like “Have you always listened to that awful, noisy rock and roll?” Neutrality is your friend…think like Switzerland.

Finally, be sure to listen to all stakeholders in the process. Even though I balked at collaborative survey-writing, getting buy-in from everyone ensured my own biases were kept in check.

So…even if it’s ‘for the kids’, be sure your research design is impartial.
