Bad Science or Bad Reporting?

July 5th, 2011

I was under the weather over the July 4th holiday weekend and didn’t get a lot done here on the blog, professionally, or socially.  Well, my cat enjoyed the extra time with me, even if I sometimes surprised her with a big sneeze.  I have a few things I meant to write about at the end of last week, and will stick to that schedule even though I have more stuff now (e.g. the redshift 7 quasar).

There are two stories that got some attention last week — way more than they should have.  I want to bring them up and discuss them briefly and explain what went wrong, if I can, or at least complain about it enough that anyone reading this will be less likely to be sucked in or support those who, well, do the sucking.

First was a story, and not the first one on this subject, about how drinking diet soda makes you fat.  Fatter than drinking regular soda, anyway.

Well, that’s a hypothesis at best, not a conclusion, even if it turns out to be true in the end, and it’s garbage at worst.  Unfortunately it’s reported with confidence and will be taken as a so-called “scientific fact” by too many, and may well cause some at-risk people to change their diets and negatively impact their health.  (I suppose it could have a positive impact, but it would be lucky if it turned out that way.)

What researchers actually found was that people who drink diet soda have larger waistlines than people who don’t, and that this remains true over time.

That’s IT.  Correlation does not equal causation.  You can form a hypothesis from a correlation, and then go test it in various ways, but the correlation alone can’t establish a causal relationship.  Let me give a counterexample that’s also been controversial but is different: the correlation between CO2 and global temperatures.  In that case, the correlation is actual evidence in support of the CO2 increase causing global warming, because it was predicted decades previously, and a host of other competing hypotheses have failed their predictions.  Deniers of man-made climate change often like to say that “correlation doesn’t mean causation,” which is true, but they act as if they don’t know that the causality was predicted long before anyone had found a correlation, as if climatologists had foolishly invented the idea after seeing a single correlation and ignored alternative explanations.  And the correlation in this case still doesn’t prove causality, but it is a piece of evidence in support.

I used to drink real soda, then switched to diet soda when I noticed I was putting on weight.  I’ve been up and down since then, but if I switch back to regular soda or even to healthy fruit juice, full of sugar, I usually gain weight right away.  That’s anecdotal, but contrary to the claims of the article, and makes me suspicious.  I’ve also seen a lot of people who justify getting dessert, or supersizing a meal, if they have it with a diet drink.  Is it the diet drink causing the increasing waistline?  I would not say so.

The way to test the claim (not a valid conclusion, merely a hypothesis at best) is to take a random group of people (random enough to match the overall demographics) and have half drink diet soda and the other half regular.  Ideally it would be blinded, so that they don’t know which group they’re in (we’ll need a convincing diet soda!).  Then we see if there’s a difference down the line.
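
Here’s a minimal sketch of that design in Python, with hypothetical names and a placeholder outcome measure standing in for years of follow-up; nothing in it comes from the actual study:

```python
import random
import statistics

# A minimal sketch of the randomized, blinded design described above.
# measure_waistline_change() is a placeholder for real follow-up data.

def assign_arms(participants):
    """Randomly split participants into a diet arm and a regular arm."""
    shuffled = participants[:]
    random.shuffle(shuffled)  # randomization balances the demographics
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def measure_waistline_change(person):
    # Placeholder: in a real study this is years of measurement.
    # Here there is no built-in soda effect, only individual noise.
    return random.gauss(0, 1)

diet_arm, regular_arm = assign_arms(list(range(1000)))
print(statistics.mean(measure_waistline_change(p) for p in diet_arm))
print(statistics.mean(measure_waistline_change(p) for p in regular_arm))
# If diet soda really drove waistlines, the arms would diverge; with no
# built-in effect, they come out about the same.
```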

Picking out people who have self-selected diet or regular soda and following them over time cannot tell us whether diet soda helps to control weight.  As in my case, some people who have weight issues pick diet soda and continue to have weight issues.  Some people who have no weight concerns drink regular and continue to maintain their weight.  Some people who drink regular soda may refrain from snacking and desserts because of that choice, while the diet soda drinkers let themselves splurge.  It’s just not that easy with people.  The study sounds like it may have been worth doing, but I don’t see how it can reach the conclusion reported in the story.  I haven’t read the actual paper, and it’s possible that the authors were a lot more cautious than the pop science article I linked to above, so let’s share the blame.  It’s a study with results that are being oversold in the media, perhaps to the detriment of the public’s health.
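
To make the self-selection problem concrete, here’s a toy simulation (all numbers invented) in which diet soda has zero causal effect, yet people with a stronger tendency to gain weight are more likely to choose it:

```python
import math
import random
import statistics

# Toy simulation of self-selection; every number here is invented.
# Diet soda has ZERO causal effect on waistline in this model.

random.seed(42)
diet_waists, regular_waists = [], []
for _ in range(10_000):
    weight_tendency = random.gauss(0, 1)               # latent confounder
    p_diet = 1 / (1 + math.exp(-2 * weight_tendency))  # heavier -> likelier to pick diet
    waistline = 34 + 2 * weight_tendency + random.gauss(0, 1)  # no soda term at all
    (diet_waists if random.random() < p_diet else regular_waists).append(waistline)

print("diet drinkers:   ", round(statistics.mean(diet_waists), 2))
print("regular drinkers:", round(statistics.mean(regular_waists), 2))
# The diet drinkers show larger waistlines even though the drink does
# nothing, which is exactly the pattern the observational study found.
```

Randomize the assignment instead, as in the sketch above, and the gap disappears.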

If a follow-up study comes out in a few years and finds a different conclusion, people will lose confidence in science some more.  It’ll be like eggs: good for you this year and bad for you the next, with science losing either way.  Look, it’s normal for our scientific knowledge to change, to improve, and in some cases to get flipped on its head, but it doesn’t happen that dramatically very often with good science fairly reported.  It happens all the damn time with studies of limited value being overinterpreted, whether by the scientists themselves or by science reporters who ought to know better but too rarely do.

The second story is worse, in my opinion, dragging science into gender politics in a way that’s unfair to men and women alike.

An article headline in Time magazine reads “Why Women are Better at Everything.”  Ugh.  The proposition isn’t true.  I could certainly believe that on average women have more natural talent than men in some areas, and vice versa, but I’d suspect training counts for more in general, and that unfair generalizations often cause more trouble than they’re worth.

The reporter gets her headline from a Wall Street Journal column in which the male author drops that bit of wisdom, presumably as a self-deprecating aside, while discussing a study of male vs. female investing patterns in which women did better.  So it went from a single study in which the women outdid the men, to the more general “women are better at investing,” to the wild and unsupportable “women are better at everything.”

All of this is discussed in the context of testosterone being the culprit.  Testosterone makes men take bigger risks than they should, on average, and underperform women (who carry smaller amounts of the stuff), at least when it comes to investing.  And everything else.  Ugh.

Well, it may be that limiting risk was the secret to the success of women in investing in this one study.  Taking bigger risks might be valued in a different market.  Or it may be that average performance isn’t that interesting, and we only care about hitting it big.  Or, given the smaller number of women in the study, it may be that we’re comparing the best women to the average men (the only woman in a class probably works harder and does better than the average man).  Or any number of other things.

It’s a correlation between performance and gender in one study that may not be causal.  It’s the same damn thing as the first story, only on overdrive because of gender issues.  If you want to do this right, you get two random groups of men and women, matched in demographics and background, give them some investment funds, and see who wins out in the end.  You’d want a long test with lots of people to make sure you could detect a statistically significant difference that wasn’t muddled by big outliers (big losers and winners way outside the norm), and run it over enough time that the result could be a general statement applying to a range of market behavior.  My dad made 180% one year in the market back in the late 1990s, but a few years later he underperformed dramatically in a different sort of market with different behavior.  Or maybe it was his testosterone increasing as he got older…
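
To show what I mean about outliers, here’s a quick sketch with invented returns: both groups are drawn from the same distribution, but a couple of 180%-style years in one group drag the averages apart while the medians barely move.

```python
import random
import statistics

# Invented annual returns (percent) for two hypothetical groups of
# investors drawn from the same distribution; group_b gets two huge winners.

random.seed(1)
group_a = [random.gauss(7, 5) for _ in range(200)]
group_b = [random.gauss(7, 5) for _ in range(198)] + [180.0, 150.0]

print("means:  ", round(statistics.mean(group_a), 1), round(statistics.mean(group_b), 1))
print("medians:", round(statistics.median(group_a), 1), round(statistics.median(group_b), 1))
# The means split apart because of two lucky outliers; the medians stay
# put.  A robust statistic (or a big enough sample) keeps a handful of
# extreme wins or losses from dictating a "who's better" headline.
```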

My point is that “science” is explicitly dragged into these articles again, unfairly.  The study is what it is.  It isn’t what it isn’t: a scientific finding that one sex is better at everything than the other.  That’s an off-the-cuff line from a columnist being inflated into a headline.

We’ve been worrying for years about girls good at math suddenly losing interest and underperforming.  Now we’ve started worrying about boys underperforming and our universities being dominated by women.  There are certainly gender issues that science can and should address.  I believe good science can and should be done on just about any topic where knowing more would be useful or fundamental.

Bad science, badly reported, hurts us as a society and hurts our trust in science.

Given time, science as a whole fixes itself, stumbles forward, and we as a group gain a deeper knowledge of how and why things work the way they do in our world.

Given time, do enough individually bad studies, or at least badly reported studies, damage science in the public sphere?

Where is the system to fix this perception?  Because I’m telling you right now that I don’t see it.  Top scientific journals rarely if ever run replication studies.  Newspapers and magazines rarely run retractions or follow-up stories with the same fanfare and prominence that the original controversial and badly reported finding gets.  The public just gets confused and becomes susceptible to purveyors of junk science pushing their own biased agendas.

Science is the only way of generating reliable new knowledge, imperfect as it is.  Everything else is useless, or worse than useless.

Anyway, I saw these two stories within a day or two of each other last week and they left me sort of angry and depressed.  It’s more and more of a soundbite world.  Now you just see a link somewhere, a headline, and file away that science has shown that women are better at everything, or that your diet Coke is making you fat, and continue on with your busy day just trying to get by.  But that junk sinks in, adding to the confusion of an already busy life, and sometimes leading to the rejection of our best source of knowledge: science.

Is there any way of keeping it from getting worse?  I’m all ears.
