Article: Tips For Evaluating Research Studies

By Pamela A. Popper, Ph.D., N.D.
Wellness Forum Health

Can you share some information about how to determine which research studies are reliable and which are not? I’m determined to get informed about my own healthcare issues, particularly after some of my previous interactions with doctors, but I’m having some trouble figuring out the good from the bad.

Great question, and the complete answer would take several hours to explain. Since space is limited in this format, I’ll share my top eight tips for evaluating research studies here. 

Differentiate correlation from cause-and-effect relationships. Correlation means that two factors co-exist, but it does not mean that one causes the other. It is much easier to establish correlation than to prove cause-and-effect relationships. An example is the correlation between the increasing consumption of beverages from plastic bottles and the increasing incidence of cancer. While it may be true that both are increasing, this does not mean that drinking beverages from plastic bottles causes cancer. Observations and relationships are interesting and may justify detailed research studies, but should not, by themselves, be the basis for making healthcare-related decisions.

Understand the difference between expressing data in relative vs absolute terms. The benefits of drugs and supplements are often reported in relative terms instead of absolute terms because relative reporting makes many things seem like “medical breakthroughs” when they often have very little impact on health. For example, assume that in a clinical trial for a new drug that reduces the risk of bone fractures, the incidence of fractures in the placebo group is 2% and the incidence in the group taking the drug is 1%. Expressed in relative terms, the new drug reduces the risk of fracture by 50% (the 1-point drop divided by the 2% baseline: 1% ÷ 2% = 50%), and 50% sounds impressive. But expressed in absolute terms, which is the real benefit to you, the patient considering the drug, the risk reduction is only 1 percentage point (2% − 1% = 1%). This does not sound nearly as impressive, which is why relative numbers are commonly used. After considering the significant side effects associated with most drugs, these small absolute benefits often seem hardly worth it to many consumers.
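For readers who like to check the numbers themselves, the arithmetic above can be written out in a few lines. This is only an illustrative sketch using the 2% and 1% figures from the fracture example; the function name is my own.

```python
def risk_reductions(control_rate, treated_rate):
    """Return (absolute, relative) risk reduction for two event rates."""
    arr = control_rate - treated_rate   # absolute risk reduction
    rrr = arr / control_rate            # relative risk reduction
    return arr, rrr

# Fracture example from the text: 2% in the placebo group, 1% on the drug.
arr, rrr = risk_reductions(0.02, 0.01)
print(f"Absolute reduction: {arr:.0%}")   # 1% -- the real benefit to the patient
print(f"Relative reduction: {rrr:.0%}")   # 50% -- the headline number
```

Both numbers describe exactly the same trial; the only difference is the denominator, which is why the same result can sound either trivial or dramatic.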

Some findings are statistically significant, but who cares? Statistical significance refers to sorting out whether differences between groups in a study are real or due to chance. When a difference between groups is statistically significant, the findings are considered worth reporting, but this does not necessarily mean that the results are important or worth acting on. For example, a study that looked at the effects of adding olive oil or nuts to the daily diet on the risk of cardiac events concluded that eating nuts reduced the risk of heart attack by 1% and eating olive oil reduced it by 0.6%. While the results were statistically significant, a 0.6%–1% reduction in risk is virtually meaningless to a person trying to avoid a heart attack.
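To see how a tiny difference can still come out “statistically significant,” here is a rough sketch of a standard two-proportion z-test on made-up numbers (these are not data from the olive-oil study): 200 heart attacks among 10,000 people in one group versus 150 among 10,000 in the other.

```python
import math

def two_proportion_z(events_a, n_a, events_b, n_b):
    """Two-sided two-proportion z-test; returns (z statistic, p-value)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    p_pool = (events_a + events_b) / (n_a + n_b)          # pooled event rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))            # two-sided p-value
    return z, p_value

# Hypothetical trial: 2% event rate vs. 1.5% event rate, 10,000 per group.
z, p_value = two_proportion_z(200, 10_000, 150, 10_000)
# Statistically significant (p < 0.01), yet the absolute difference
# between the groups is only half a percentage point.
```

With large enough groups, almost any difference clears the significance bar; significance tells you the difference is probably real, not that it matters.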

Pay attention to study design. While there are many researchers with great integrity, it has become a common practice to structure studies in order to show a particular result. For example, a study that involves lowering fat consumption from 40% to 30% may show no difference in health outcomes, but the reason is not that lowering fat is not important – it’s that fat consumption must be reduced significantly more in order to result in health improvement. A good analogy would be looking at the effects of speed on death rates in automobile accidents. If studies show that accidents that take place when cars are traveling 90 miles per hour almost always result in death and the same is true for accidents involving speeds of 80 miles per hour, one could report that driving slower does not matter. It really does matter, but only when speed is reduced significantly more, say to 30 miles per hour.

Improvement in surrogate markers may not have anything to do with long-term health. Many drugs and supplements lower cholesterol, fasting glucose, or blood pressure, or reduce pain. However, most do not change health outcomes. For example, statin drugs lower plasma cholesterol levels, but the average reduction in the risk of heart attack or stroke as a result of taking them is less than 1%. In other words, statins improve the results of lab tests, but not health outcomes for most people. This is important because what most people want is long-term health and a longer life, not just better test results.
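One way to make a sub-1% absolute benefit concrete is the “number needed to treat” (NNT): the reciprocal of the absolute risk reduction. A minimal sketch with illustrative rates (not figures from any specific statin trial):

```python
def number_needed_to_treat(control_rate, treated_rate):
    """How many people must take the drug for one person to benefit."""
    return 1 / (control_rate - treated_rate)

# Illustrative rates: 3% event rate untreated vs. 2% treated (a 1-point ARR).
nnt = number_needed_to_treat(0.03, 0.02)
print(round(nnt))  # 100 people treated so that one avoids the event
```

An absolute reduction below 1 percentage point means an NNT above 100: more than a hundred people take the drug, with its side effects, for one person to avoid the event.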

Reductionism is pervasive. Reductionism involves focusing on the effect of single foods or single nutrients, and reporting that one food or nutrient can change health outcomes. Study design can sometimes create the illusion that a single nutrient or food has an effect on health, usually by measuring its effect on surrogate markers. But generalizing these results to infer that individual foods or nutrients will have significant benefits for the general public is very misleading. An example would be a study showing that taking vitamin C pills increases plasma vitamin C levels, and then reporting that since people who do not have cancer have higher plasma vitamin C levels, taking vitamin C pills can reduce the risk of cancer.

Single studies don’t mean much; look at the preponderance of the evidence. Unfortunately, the medical journals are cluttered with studies supporting almost any claim one wants to make, and this unfortunate state of affairs has made it easy to mislead people. For example, some studies published in reputable medical journals show that cigarette smoking does not increase the risk of lung cancer. The problem is that most studies show that smoking does increase the risk, so citing a few studies supporting a claim while ignoring most studies on a particular topic is misleading. The dairy industry is famous for this – sponsored research will result in one study that shows a particular outcome. The industry then uses that study to promote its products even though almost every other study conducted on the same topic shows a different result.

The media often misreports study results. I’ll be kind here and attribute some of the misreporting to the fact that journalists cannot be experts on everything; since most have little knowledge about nutrition, health, and medicine, it is difficult for them to do more than report what they are told. This means it is very important to read at least the abstract of a study covered in consumer-oriented media before forming an opinion or taking any action. Most journals publish abstracts online for free. When I check the original studies, I am regularly flabbergasted at how often research findings are misreported, not only by journalists but also by health professionals who write articles and blogs and create other materials. Don’t rely on a report of the study; read the study itself.

I deliver a workshop, which has become quite popular, that educates consumers about how to read scientific information, and from time to time in response, individuals have suggested that perhaps published research has become meaningless. I agree that there is a lot of misbehavior and that consumers should develop a healthy skepticism when looking at research. But I’m not ready to throw the baby out with the bathwater yet. There is much good information to be found in scientific journals, and new and valuable information is being added every day. The key is to be a discerning consumer of health and medical information so that you can make informed decisions about your health.

For more guidance on how to read and interpret research and health information, take our courses. Wellness Forum Health membership includes 4 hours of lectures on informed medical consumerism and the InforMED Video Platform has several more hours of content on this topic. For more information, email pampopper@msn.com
