My husband and I were recently rewatching the first episode of Boardwalk Empire. "Jimmy" was complaining that the newspaper "got it all wrong." His wife replied, "It has to be true for them to print it."

Whoa! I don’t trust the media that much, especially printed media. Allow me to illustrate my point.

Every day you see headlines in newspapers, magazines, and professional journals which read: "Study Proves X Causes Y," "X May Lead to Z," "A New Study Shows 50,000 Deaths are Caused by X Each Year According to Dr. or Agency Know-it-all," "X is a Risk Factor for Y and Z," and so on.

The study may not be mentioned in the headline, but in the body of the article the writer will try to bolster their version of the truth by mentioning or citing a study. But to me the question remains: does this study (or these studies) really prove anything? All too often we get little information on the study itself. Even when there is information about the study, not understanding how to evaluate that information leaves us at a loss. Even more frequently there is confusion over terms, such as believing a "risk factor" is the same as a "cause."

People with persistent pain have enough challenges to face. They don't need to have myths, slanted opinions, and misunderstandings invoked as "the truth." Many people (including providers) do not understand that being published in a peer-reviewed journal does not guarantee that a study is good or valid research, or that it has strong evidence to support its conclusions. The writers of these articles may not be able or willing to evaluate this either. It is too big a project to go into all the ins and outs of research interpretation in a mere blog post; however, I hope I can share enough information to help others recognize when a study hasn't really proved anything and begin to ask the question: why are "journalists" even paying attention to a particular study?

Language can be a problem. As I mentioned before, a risk factor does not mean cause; it just means something is more likely to be a contributing factor. Something else may be the cause, there may be multiple things acting as risk factors, and just because a link is seen does not mean one thing causes the other (in statistical terms: correlation does not equal causation). My husband teaches statistics, and the example he uses is this: in Florida, when the number of new boat licenses issued increases, there is an increase in the deaths of manatees as well. Boat licenses do not kill manatees, so they are not a cause. There is a correlation, however: manatee deaths rise as new licenses rise. The terms "proof," "proved," "substantiated," "confirmed," "disproved," and "supports" are all absolutes which need multiple studies with strong evidence behind them to be justified. A modifier needs to be added, such as "may," "could," or "might." Of course, many writers will simply ignore the modifier, especially if the subject being "proved" coincides with their personal opinion or commercial appeal.
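
For readers who like to see this play out in numbers, here is a small sketch in Python (with my own made-up figures, not real Florida data) in which a hidden third factor, overall boating activity, drives both new licenses and manatee deaths. The two numbers move together even though one does not cause the other:

```python
# A minimal sketch with made-up numbers (not real Florida data).
# A hidden third factor (seasonal boating activity) drives BOTH new
# boat licenses and manatee deaths, so the two correlate even though
# licenses themselves cause no deaths.
import random
import statistics

random.seed(1)

years = 30
activity = [random.uniform(50, 150) for _ in range(years)]          # hidden driver
licenses = [round(10 * a + random.gauss(0, 40)) for a in activity]  # tracks activity
deaths = [round(0.3 * a + random.gauss(0, 5)) for a in activity]    # also tracks activity

r = statistics.correlation(licenses, deaths)  # Pearson's r, Python 3.10+
print(f"Correlation between new licenses and manatee deaths: {r:.2f}")
# Prints a strong positive correlation, yet issuing more licenses
# (without changing boating activity) would not change deaths at all.
```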

So, how do we tell if the article’s conclusions and headlines are justified?

First, we need to know if this study is based on a survey, statistics, original research, quantitative research (something that can be measured) or qualitative research (used to gain an understanding of underlying reasons, opinions, and motivations), a study of studies, or an anecdote (someone's story).

Surveys are not studies. Webster's defines a survey as a method of asking (many people) a question or a series of questions in order to gather information about what most people do or think about something. There is little control over who answers the questions or why. Surveys can give us general information and a general idea, but nothing more than that. As for the statistics within a survey: by themselves, they cannot prove anything; we need to know how, when, and where they were obtained. We also have to keep in mind that statistics can be manipulated to indicate something which actually may not be true.
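
To show how easily the same statistic can be dressed up, here is a tiny worked example (the numbers are invented purely for illustration). A risk that goes from 1 in 10,000 to 2 in 10,000 can be reported as a scary "100% increase" or as a change of one-hundredth of a percentage point; both framings describe exactly the same data:

```python
# A minimal sketch with invented numbers, purely to show framing.
baseline_rate = 1 / 10_000   # 1 case per 10,000 people before
new_rate = 2 / 10_000        # 2 cases per 10,000 people after

relative_change = (new_rate - baseline_rate) / baseline_rate * 100   # in percent
absolute_change = (new_rate - baseline_rate) * 100                   # in percentage points

print(f"Relative framing: risk rose {relative_change:.0f}%")         # "100% increase!"
print(f"Absolute framing: risk rose {absolute_change:.2f} percentage points")  # 0.01
```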

When evaluating studies, two important issues to look at are size and method. How many people were involved in the study? A small group really is not going to tell you very much, and what it does tell you can be way off the mark. That brings us directly to method, as it explains not only how the study was conducted but also how participants were chosen.
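
To illustrate why size matters, here is another small, purely illustrative simulation (made-up numbers again). If 30% of people in a large population truly improve with a treatment, tiny studies drawn from that population bounce all over the place, while larger ones settle close to the real answer:

```python
# A minimal sketch using simulated data, not a real study.
# Assume 30% of a large population truly improves with a treatment.
import random

random.seed(2)
TRUE_RATE = 0.30

def run_study(n):
    """Return the improvement rate observed in one simulated study of n people."""
    improved = sum(1 for _ in range(n) if random.random() < TRUE_RATE)
    return improved / n

for size in (10, 100, 1000):
    results = [run_study(size) for _ in range(5)]
    print(size, ["{:.0%}".format(r) for r in results])
# Studies of 10 people can swing from roughly 10% to 60%, while
# studies of 1,000 people stay close to the true 30%.
```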

Looking through existing records (whatever the subject), a method also known as "data mining," is a weaker way of doing a study when compared to performing a controlled study, such as an experiment. Though it is gaining popularity, this method may not only paint an incomplete picture; a wrong conclusion might be drawn if it doesn't consider extenuating circumstances or correlations with other factors, or, most importantly, if information is missing or was entered into the record incorrectly.

An example of this method that is most bothersome for me happens to come from the Centers for Disease Control and Prevention (CDC), where death certificates are data mined. They conclude that there is an increase in the number of deaths related to opioids, and these results have been taken as gospel and cited by many others. It is important to understand that one of the weaknesses of this "study" is that it did not differentiate between opioids prescribed out of medical necessity and deaths involving substance use disorder, a mental health disorder, a suicidal act, or opioids combined with alcohol or other sedating medications such as benzodiazepines (which should be held equally at fault for the death). My favorite example of missing information is when the primary cause of death (from the medical examiner's report) was reported as opioid toxicity or respiratory depression while the life-limiting disease process was listed as secondary. Hello, doesn't everyone stop breathing at the end of life? Allow me to share a story (anecdote) to illustrate:

One year ago, a 92-year-old gentleman was dying of multiple causes. He did not want to be resuscitated and wished to die at home. He took too much liquid morphine (whether intentionally or accidentally is moot). The ambulance was called in spite of his wishes, and he was resuscitated by the paramedics even though the DNR order was there. He was then admitted to the hospital against his will. Four days later he died. "Opioid toxicity" was entered as the cause of death.

When we talk about how "strong" a study is, size and method are again key factors. For evaluating a study or a meta-analysis (a review of multiple studies), a "strength of the evidence" scale is used. These scales state just how much confidence we can have in a study: the higher on the scale, the stronger the evidence, with Level I methods carrying the strongest evidence. A thorough resource is the Johns Hopkins Nursing Evidence-Based Practice: Levels of Evidence:

  • Level I
    • Experimental study, randomized controlled trial (RCT)
    • Systematic review of RCTs, with or without meta-analysis
  • Level II
    • Quasi-experimental study
    • Systematic review of a combination of RCTs and quasi-experimental, or quasi-experimental studies only, with or without meta-analysis.
  • Level III
    • Non-experimental study
    • Systematic review of a combination of RCTs, quasi-experimental and non-experimental, or non-experimental studies only, with or without meta-analysis.
    • Qualitative study or systematic review, with or without meta-analysis
  • Level IV
    • Opinion of respected authorities and/or nationally recognized expert committees/consensus panels based on scientific evidence.
    • Includes:
      • Clinical practice guidelines
      • Consensus panels
  • Level V
    • Based on experiential and non-research evidence.
    • Includes:
      • Literature reviews
      • Quality improvement, program or financial evaluation
      • Case reports
      • Opinion of nationally recognized expert(s) based on experiential evidence

You will notice the lowest level contains "opinion of nationally recognized experts." This is another area where journals, magazines, and especially newspapers fail miserably. They rarely identify whether their "expert" truly is an expert (e.g., they frequently use an addiction medicine doctor as an expert on pain management). This person may be an expert in his or her own mind.

You may also notice that anecdotal stories are not listed at all. This is because they are proof of nothing except one person's beliefs concerning their own experience. When someone says, "I became addicted to narcotics when I was prescribed them for pain," it is a little like saying that because there was a curse on the tomb of King Tut, anyone who died after exposure to the tomb, even 30 years later, died because of that curse.

So, be forewarned as you continue to read or listen to the news, read a medical journal article, or follow information shared on social networks: just because something is in a study, or the study is published, does not mean it should be believed. My message to the media: "Don't tell me a study proves anything unless you tell me how it was derived."

I hope this will help you to understand these articles and studies a little better and give you more confidence when you choose to refute the erroneous messages the media wants to send. Better yet, challenge their assumptions, and please share with me by commenting back when you do! We have a lot to lose when we remain silent.

 
