
We all know how frustrating it is to see the media sensationalizing every new study that comes out – chocolate or coffee can change from “good” to “evil” twice in a week if there’s not a lot of other news to care about! But at the same time, it’s hard not to feel a little niggle of worry every time a new study supposedly “proves” the dangers of Paleo staple foods like egg yolks or animal fat. Telling the good research from the bad is tricky for non-scientists, but to help you get a handle on which studies might not be worth your worry, here are 7 simple red flags to watch out for:
1. “Associated With”
“Associated with” means exactly what it says: associated with.
It does not mean “caused by.”
Unfortunately, the news media often assumes that it does mean “caused by,” and proceeds to draw all kinds of unwarranted conclusions from data that just don’t support them.
The worst offenders here are the studies that read like this:
[Disease X] is associated with [diet pattern mostly found among people who don’t care about their health].
For example: “Heart disease is associated with saturated fat consumption.”
Or
Reduced risk of [disease X] is associated with [diet pattern mostly found among people who do care about their health].
For example: “Reduced risk of diabetes is associated with whole grain consumption.”
These studies tell us one thing and one thing only: people who care about their health tend to be healthier than people who don’t.
Think about it this way: what kind of person eats a lot of saturated fat? In a country where we’ve been telling everyone that saturated fat will kill you slowly for the past 30 years, the vast majority of people who eat a lot of saturated fat are people who just don’t care about their health. They smoke. They drink. They aren’t getting that saturated fat from pastured butter or grass-fed beef; they’re getting it from Dairy Queen.
It shouldn’t come as any surprise that these people have more heart disease, but it’s silly to pin that on “saturated fat” when clearly there’s so much else going on in their diet.
The same thing goes for diabetes and whole grains: after 30 years of hearing that whole grains are the healthiest food you could eat, people who care about their health are (mostly) the people who eat the most whole grains. They avoid junk food and cook at home. They exercise regularly. They’re probably richer and more likely to be white (meaning that they probably have access to better healthcare). So is their lower risk of diabetes really an effect of their whole grain consumption?
It’s just not very convincing when you look at it this way, but this is the kind of research that gets used all the time to make the case against Paleo staples like fat and cholesterol. So don’t get worried the next time you see a report finding that some disease is “associated with” fat consumption: chances are pretty good that it doesn’t apply to you.
2. Animal or Test Tube Studies
Studies in rats and mice have a lot of advantages (they’re quick, they’re cheap, and they don’t raise as many ethical issues as studies in human subjects). But the results are not always relevant to human beings – and that goes double when the dose of a substance in animal studies is absurdly high. For example, rats fed 60% of their diet as fructose will become obese and develop fatty liver disease, but no free-living human being would ever eat a diet 60% fructose by calories.
Test tube studies (also called “in vitro” studies) are even harder to interpret. Isolated cells in test tubes just don’t act the way real human bodies do. If scientists purify and concentrate one particular extract of one particular phytonutrient in broccoli, and find that it kills purified and extracted cancer cells in a petri dish, it doesn’t follow that eating broccoli will cure cancer.
That’s not to say that these studies are useless: many of them are very interesting jumping-off points for future research. But interpret them with a big grain of salt and always remember that this is preliminary research, not conclusive evidence.
3. Tiny Sample Sizes
An effect observed in 3 out of 4 total study subjects is quite likely to just be a fluke. To properly control for the effects of random chance, studies should ideally use a much larger sample size. This doesn’t make very small studies useless, but it does mean that they should be taken cautiously and with their limitations in mind.
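To see how easily chance alone can produce an “effect” in a tiny sample, here’s a quick back-of-the-envelope calculation (a hypothetical illustration, assuming a treatment with no real effect where each subject has a 50/50 chance of improving on their own):

```python
from math import comb

# Hypothetical illustration: a treatment with NO real effect, where each
# subject improves by pure chance with probability 0.5. How likely is it
# that at least 3 of 4 subjects improve anyway?
p = 0.5   # assumed chance of spontaneous improvement (illustration only)
n = 4     # total subjects

prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3, n + 1))
print(f"P(at least 3 of 4 improve by chance alone) = {prob:.4f}")  # 0.3125
```

In other words, under these assumptions a completely useless treatment would look like a 3-out-of-4 success almost a third of the time – which is exactly why tiny studies need to be taken with caution.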
4. Surrogate Markers
“Surrogate markers” are changes that researchers think indicate an increased or decreased risk of a disease. For example, if someone is doing a study on bone health, they might use bone mineral density as a surrogate marker for risk of osteoporosis.
The problem with this approach is that bone mineral density isn’t actually a very good indicator of osteoporosis risk. So if that’s the only thing you measure, then your study isn’t very informative at all!
There are plenty of other cases like this (another example: the traditional “risk factors” for heart disease in men are often applied to women even though they don’t necessarily work the same way), so when you’re reading a study, pay careful attention to what the reported result was. Was it something that definitely matters (like death from heart disease, rate of fractures, or lifespan), or was it a substitute or “risk marker” that may or may not be very closely connected with the actual disease?
5. Conflicts of Interest
You can usually find these even if you don’t have access to the full text of a paper, and they’re often damning. For example, you can read in the article about sugar how most of the studies “proving” that sugar doesn’t cause weight gain were funded by Coke and other food industry giants. But of course a Coke-funded study is going to “discover” that Coke doesn’t cause obesity!
This effect has been well-documented in other areas as well. For years and years, studies funded by the tobacco lobby found that cigarettes didn’t cause cancer. Studies on the flu vaccine are more likely to find it effective if they’re funded by industry groups. It’s just not realistic to expect research to be objective with such an obvious conflict of interest.
Again, this raises some serious questions about how far you can trust these studies. A good bet is to do some more research and see if you can find a study on the same topic that doesn’t come with industry funding attached. Note the differences and similarities, and be skeptical about what you accept as “truth.”
6. No Placebo Control
A placebo group is a group of subjects who get an intervention that the researchers believe to be completely ineffective: a sugar pill, a sham surgery, or something like that. This is important for a well-designed study, because it helps account for the power of the placebo effect. Basically, if people think they’re getting medicine, about 30% of them tend to feel better regardless of whether they actually are getting medicine or not. So if your new painkiller helps 30% of patients, but you didn’t test it against a placebo group, you have no idea whether it was really the painkiller at work or just the placebo effect.
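The problem is easy to see with a toy simulation (a sketch using made-up numbers – the ~30% placebo response rate is the rough figure mentioned above, not data from any real trial):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical sketch: a "trial" where the painkiller has NO real effect,
# but ~30% of subjects report relief anyway (the placebo effect).
PLACEBO_RATE = 0.30   # assumed placebo response rate, illustration only
n_subjects = 1000

relieved = sum(random.random() < PLACEBO_RATE for _ in range(n_subjects))
print(f"{relieved}/{n_subjects} subjects report relief from a sugar pill")

# Without a placebo control group, a ~30% response to the real drug
# would look identical to this chance baseline.
```

A well-designed trial compares the treatment group *against* this baseline; only the improvement beyond the placebo group counts as evidence that the drug itself works.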
7. Abstract Distortion
The abstract is the only part of the study that many people have access to, since the rest is often behind a paywall. But just reading the abstract does not give you all the information in the full study. Often, researchers hide data they don’t like deep in the text itself, or draw “conclusions” in the abstract that aren’t actually warranted from the study results.
A great example of this is studies showing that “high-fat” diets cause obesity in rats and mice. From the abstract, you’d think this is a huge blow against Paleo. But if you read into the full text, you’ll often find that the fat in that “high-fat” diet is corn or soybean oil: brimming with inflammatory PUFA and not something a Paleo diet should include! The abstract looks relevant, but the results actually aren’t.
That doesn’t make abstracts useless; it just means that you should be aware that you aren’t getting the whole story when you read them.
Conclusion
As much as we wish they were, even scientific studies aren’t always accurate or unbiased. It pays to be skeptical of what you read, and studies are no exception. You don’t have to be a doctor yourself to look out for these 7 “danger signs” of a study that might not be as conclusive as the news media claims, so keep your eyes open – and when you see a headline screaming that ___________ causes/cures ___________, go take a look at the actual study behind the scenes, and be very skeptical about how accurate that claim really is.