I was listening to Colorado Public Radio the other day. In a story about the challenges facing today’s high school students, the host said, “Twenty percent of Denver Public School students deal with some sort of mental illness.”

My first thought was “Wow! One in five students. That’s terrible!” But the part of me that is a social researcher then asked: “Says who? How large was the sample? How are they defining mental illness? Was this a study based on student self-reporting or on observation of student behavior? This is a story about high school students, so how much of the sample consisted of elementary and middle school students? What hypothesis did the researchers begin with? Who conducted the study? How qualified were the researchers? Might there be research bias because of who paid for the study?” And on and on.

Now, before you accuse me of being anal about all this, remember that the results of this study, and others like it, are used to justify spending taxpayer dollars on programs related to the issue. Likewise, we use statistics to persuade others to make all sorts of decisions every day: “Four out of five doctors prefer . . .” “Nine out of ten customers recommend . . .” “Raising the sales tax by one tenth of one percent will lift 270,000 children out of poverty.”

Does all this sound familiar? We tend to play fast and loose with statistics of all kinds, and that can get us into real trouble. Chances are, you do it yourself. There is a natural human desire to be the authority in the room, and quoting statistics helps us fulfill that role. But how often do any of us ask about the veracity of the statistics we hear? Even when they are well researched, are they appropriate for the particular argument? A statistic that tugs at my heartstrings will not help me make a rational decision. Yet that is a common tactic for many attempting to raise money or pass legislation.

Surprisingly, very few scientific or social studies are replicated by others to check their veracity. As a result, we are left to trust that those conducting the research have collected accurate data, employed the correct analysis, and reported their findings without bias. Sadly, we are seeing more and more evidence that research is being conducted with a particular outcome in mind. The discredited blood-testing research conducted by Theranos comes to mind, or the “hockey stick” controversy over allegedly manipulated climate data. Additionally, the media, political parties, activists, corporations and others commission research that feeds a particular agenda. The result is that many of us now take quoted statistics with a grain of salt because we’ve seen evidence of manipulation in the past.

Statistics, used with appropriate context and clarity, can be extremely powerful in helping us make informed decisions. But without proper effort to verify their accuracy, we can easily be misled, intentionally or unintentionally. Sometimes we want to believe something so much that we look for evidence (statistics) that supports our desired outcome. That’s a form of confirmation bias.

When we rely on the research and reporting of others to make significant decisions, we should kick our “crap detectors” into high gear whenever statistics come into play. We all have a responsibility to look at quoted statistics with a skeptical eye, whether we’re news reporters, politicians, corporate leaders or citizens in the voting booth. After all, our outcomes, large and small, depend upon it.