The evolution of market research has not necessarily seen the strongest research genes prevail, but the rise of online research presents a new opportunity for fundraisers interested in finding out more about those they market to, says Tod Norman
In the early days of market research – we’re talking the 1950s through the 1980s – the statisticians who put together research projects were a white-coated bunch. They took research very seriously, and saw it as a science. As a result, they invented and deployed wonderful research tools and theories. But to make them work, they needed huge sample sizes, huge computers – and huge budgets.
By the 1990s, these scientists (and the big-budget projects they thrived on) were history. The focus group had grabbed the attention of clients, and big-scale projects with proprietary tools disappeared as quantitative research became commoditised.
Through both periods, fundraisers typically ignored research. In the earlier years, we didn’t understand the value of research when we had testing, and later, when we did start trying to use research – particularly qualitative – we found it to be a dismal tool. Bluntly, people didn’t do what they said they would; so why waste the money?
But times have changed. The low-cost, high-volume opportunities that the internet offers have brought research back to the table for fundraisers. But without the thinking of the early pioneers, will it be any more effective than it was before?
The fundamental problem with researching direct marketing is that people simply can’t predict what they will respond to with any real accuracy. They can tell us what they like, what something makes them feel, or think, or even who they think would like the ad. But, to cite the proverbial example, no one in a focus group has ever said they would be more likely to respond to a pack if it had a pen in it.
Despite this, a lot of major organisations in the sector are now taking the findings of ‘traditional’ research to market, and putting out DM based on the findings. Good luck to them. I hope they have airtight contracts.
But if your research agency doesn’t know what monadic means, then your chances of success are mighty slim. And that’s being generous.
The old men in white coats knew what monadic meant. It meant showing respondents only one idea, and asking questions about it and it alone. It meant not showing respondents two or more ideas, and asking them to decide which idea they thought was better.
In other words, it meant getting respondents to respond, rather than judge.
The difference is both significant and provable.
The reason is simple. When people respond to a stimulus, they use all, some, or none of their mental faculties (the last of these being in the realm of autonomic reflexes). Yes, sometimes we sit down and think things through – but we often don’t. Sometimes we even do the opposite of what we think is right: St Paul said, ‘The good I would I do not’, recognising that we often betray not only our thoughts but our beliefs and values when presented with particularly tempting stimuli.
When we judge something, we use an entirely different set of faculties; we think, we consider, we ponder. It’s an entirely different process, resulting in what is usually a rational decision, even if it is rational only in that it compares the different emotions each stimulus created.
So by showing people a range of ideas and asking them to assess those ideas against each other – either directly or simply by presenting them with a choice – we prevent them from reporting their actual inclination to respond.
The white coats knew this. That’s why they only trusted monadic studies. They covered a range of ideas by using a range of samples. It was expensive. But it gave them the right answers.
Tod Norman is a founder of Watson Phillips Norman