Monday, March 18, 2013

Being wrong may be the quickest route to being a successful academic

Wired has a new article by Nassim N. Taleb that notes some dangers of poorly mining large data repositories - a fresh look at what I see as a known, but largely ignored, issue.

Beware the Big Errors of 'Big Data' | Wired Opinion | Wired.com:
We’re more fooled by noise than ever before, and it’s because of a nasty phenomenon called “big data.” With big data, researchers have brought cherry-picking to an industrial level.
Modernity provides too many variables, but too little data per variable. So the spurious relationships grow much, much faster than real information.
In other words: Big data may mean more information, but it also means more false information.
Continue reading at Wired.com
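
Taleb's point is easy to demonstrate. Here's a minimal sketch (mine, not code from his article): generate columns of pure noise and count how many pairs of variables pass a naive significance test. Every "relationship" it finds is spurious, and the count grows with the number of variables.

```python
# A toy version of Taleb's cherry-picking point (my sketch, not code
# from the article): every variable here is pure noise, so every
# "significant" correlation counted below is spurious.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_obs = 50  # little data per variable

for n_vars in (10, 50, 100):
    data = rng.standard_normal((n_obs, n_vars))  # all noise, no signal
    hits = sum(
        pearsonr(data[:, i], data[:, j])[1] < 0.05
        for i in range(n_vars)
        for j in range(i + 1, n_vars)
    )
    pairs = n_vars * (n_vars - 1) // 2
    print(f"{n_vars:3d} variables: {pairs:4d} pairs tested, "
          f"{hits:3d} spurious 'findings' at p < 0.05")
```

Roughly 5% of all pairs clear the p < 0.05 bar by chance alone, so at 100 variables you already have a couple of hundred publishable-looking "findings" in data that contains nothing at all.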

This is one more contribution to a growing topic that deserves much more attention.


Late in 2010, The Atlantic published "Lies, Damned Lies, and Medical Science," which reported on the work of Dr. John Ioannidis.
He chose to publish one paper, fittingly, in the online journal PLoS Medicine, which is committed to running any methodologically sound article without regard to how “interesting” the results may be. In the paper, Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time. Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right.
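The paper in question is "Why Most Published Research Findings Are False" (PLoS Medicine, 2005), and the core of the proof is a base-rate calculation. Here is a rough Python transcription (the parameters follow the paper; the code is my sketch):

```python
# A rough transcription of the formula in Ioannidis's 2005 PLoS
# Medicine paper "Why Most Published Research Findings Are False"
# (parameter names follow the paper; the code itself is my sketch).
def ppv(R, alpha=0.05, beta=0.2, u=0.0):
    """Positive predictive value: the chance a claimed finding is true.

    R     -- prior odds that a tested relationship is actually real
    alpha -- false-positive rate (the significance threshold)
    beta  -- false-negative rate (1 minus statistical power)
    u     -- bias: fraction of would-be negative results that get
             reported as positive anyway
    """
    true_positives = (1 - beta) * R + u * beta * R
    false_positives = alpha + u * (1 - alpha)
    return true_positives / (true_positives + false_positives)

# Plausible hypothesis, unbiased methods: findings are usually right.
print(ppv(R=1.0))           # ~0.94
# Exciting but unlikely hypothesis, modest bias: usually wrong.
print(ppv(R=0.1, u=0.2))    # ~0.26
print(ppv(R=0.05, u=0.3))   # ~0.11
```

Nothing in the arithmetic is exotic; it's the low prior odds of "exciting" hypotheses, combined with a little wiggle room, that pushes most claimed findings into the wrong column.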
Ioannidis is also mentioned in the New Yorker's "The Truth Wears Off":

The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
I added the emphasis because of a recent blog post from Sylvia McLain, aka Girl, Interrupting, "The pressure of high-impact":
Regardless of whether the high-impact factor journal imperative is fair or will even be used in the upcoming research assessment framework (REF); it certainly feels like high-impact papers are of the utmost importance. You can feel it in the water and I suspect there are many academics out there who truly believe that 40 papers in Nature are the only mark of a good research career.
I have really enjoyed many recent blogs by senior, established academics out there about the problems with impact factors, the REF and h-indexes. Athene Donald, Stephen Curry and Dorothy Bishop have all written about this extensively and thoughtfully.
The implication is that for academics to get funding, they should publish in broadly read journals - which means trying to broaden a paper's appeal beyond their area of expertise.
From "The Truth Wears Off" it's apparent the high impact journals will often find the most broadly appealing articles to be those that are wrong.
