What’s real and what’s not

Neuroscience today is considered one of the hottest fields to work in. A simple PubMed search indicates that more than 70,000 neuroscience papers are published every year. Times Higher Education placed neuroscience among the top three fields in terms of citation averages over the years 2000–2010. Neuroscience beats Space Science and (hold your breath) Computer Science by a big margin. This settles the issue for neurogeeks as to “What’s hot and what’s not?”
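If you want to reproduce such a count yourself, here is a minimal sketch using Biopython’s Entrez interface to PubMed. The query term, years and e-mail address are placeholder choices of my own, not the exact search behind the figure above.

```python
# Sketch: counting PubMed hits per year with Biopython's Entrez API.
# The query term and e-mail address are placeholders, not the original search.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI asks for a contact address

def pubmed_count(term, year):
    """Return the number of PubMed records matching `term` in a given year."""
    handle = Entrez.esearch(db="pubmed", term=term,
                            mindate=str(year), maxdate=str(year),
                            datetype="pdat", retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

for year in range(2008, 2013):
    print(year, pubmed_count("neuroscience", year))
```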

However, in this post, I pose to the reader a different question – “What’s real and what’s not?”

I begin with a snippet from an article in The New Yorker published about two years ago:

The craziness of the hypothesis was the point: Schooler knows that precognition lacks a scientific explanation. But he wasn’t testing extrasensory powers; he was testing the decline effect. “At first, the data looked amazing, just as we’d expected,” Schooler says. “I couldn’t believe the amount of precognition we were finding. But then, as we kept on running subjects, the effect size”—a standard statistical measure—“kept on getting smaller and smaller.”

The article goes on to explore possible reasons why scientific findings get invalidated over time – a phenomenon with a rather innocuous name, the decline effect. One possible reason cited is publication bias, a preference by journal editors for articles with positive results.

While this has been recognized as a problem for some time now, the author concludes that it goes deeper than that. A more recent and rather alarming trend among many researchers is what has been termed “significance chasing”. Quoting the article:

According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,”

Thus, while high-impact journals have a tendency to publish the least expected findings, the customary significance threshold remains the same even for these highly unlikely results. This has often resulted in highly publicized flukes that negatively affect the field until they are proven wrong (which is itself difficult because of publication bias).
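To see why this matters, here is a small simulation of my own (not from the article). Every “study” below contains no real effect at all, yet an analyst who tries twenty outcome measures and reports the best one will clear Fisher’s 5% threshold far more often than 5% of the time, and with impressively large effect sizes to boot.

```python
# Toy simulation of "significance chasing": no real effect exists, but the
# analyst tests many measures and reports whichever gives the smallest p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_measures, n_subjects = 1000, 20, 30

false_positives = 0
winning_effects = []
for _ in range(n_studies):
    # two groups drawn from the SAME distribution -> any "effect" is noise
    a = rng.normal(size=(n_measures, n_subjects))
    b = rng.normal(size=(n_measures, n_subjects))
    t, p = stats.ttest_ind(a, b, axis=1)
    best = np.argmin(p)
    if p[best] < 0.05:                      # Fisher's "magical" threshold
        false_positives += 1
        # absolute mean difference of the cherry-picked comparison
        winning_effects.append(abs(a[best].mean() - b[best].mean()))

print("studies declaring a 'significant' finding:",
      false_positives / n_studies)          # far above the nominal 5 %
print("mean reported effect (the true effect is 0):",
      np.mean(winning_effects))
```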

On a related note, I find it extremely hard to believe studies which rely heavily on correlations to support their findings. Remember the age-old saying? Correlation does not imply causation. But there is more to it. In a witty blog post, Dean Abbott describes what is popularly known as the Redskins rule. In a nutshell, the outcome of a particular American football game was able to predict the outcome of US presidential elections with close to 100% accuracy. This was even before the era of Nate Silver and his advanced predictive models. It is obvious that the correlation is spurious. Here, the problem lies not with the statistics but with the quality of the data itself.
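A quick toy calculation (my own numbers, not Abbott’s) shows why such a rule is bound to exist somewhere: with enough candidate “predictors” lying around, one of them will match a short series of election outcomes almost perfectly by pure chance.

```python
# Toy illustration of a spurious "predictor" found by searching many candidates.
import numpy as np

rng = np.random.default_rng(42)
n_elections = 18          # a short outcome series, roughly like the Redskins rule
n_candidates = 100_000    # football games, stock indices, butter production...

outcomes = rng.integers(0, 2, size=n_elections)                     # election results
predictors = rng.integers(0, 2, size=(n_candidates, n_elections))   # pure coin flips

accuracy = (predictors == outcomes).mean(axis=1)
# the best chance "predictor" usually gets 17 or 18 of the 18 outcomes right
print("best chance 'predictor' accuracy:", accuracy.max())
```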

If the Redskins rule sounds like a far cry from your everyday research, consider this article recently published in Nature. It puts a big question mark on many connectivity studies (largely based on correlations!) of autism and related brain disorders. Quoting from the article:

But three studies published in 2012 have come to the same conclusion: head motion leads to systematic biases in fMRI-based analyses of functional connectivity [2, 3, 4]. Specifically, motion makes it appear as if long-range connections are weaker than they really are, and that short-range connections are stronger than they really are.

The authors further explain that since autistic children put into fMRI scanners are more likely to move, the findings of these connectivity studies may well be artefacts.
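To get an intuition for how this could happen, here is a deliberately crude toy model of mine (far simpler than real fMRI data), in which a motion-linked nuisance signal is shared by nearby voxels but acts as extra noise between distant ones; the correlations then shift exactly in the direction the studies describe.

```python
# Toy model: a motion-related artefact inflates "short-range" correlations and
# dilutes genuine "long-range" correlations. Not a model of real fMRI physics.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 500
motion = np.abs(rng.normal(size=n_timepoints))       # a fake head-motion trace

# Two DISTANT regions with a genuine shared signal; motion adds unrelated noise
shared = rng.normal(size=n_timepoints)
far_a = shared + rng.normal(size=n_timepoints) + motion * rng.normal(size=n_timepoints)
far_b = shared + rng.normal(size=n_timepoints) + motion * rng.normal(size=n_timepoints)

# Two NEARBY regions with no genuine coupling, but a common motion artefact
near_a = rng.normal(size=n_timepoints) + motion
near_b = rng.normal(size=n_timepoints) + motion

print("long-range correlation (true coupling, diluted):",
      np.corrcoef(far_a, far_b)[0, 1])
print("short-range correlation (no true coupling, inflated):",
      np.corrcoef(near_a, near_b)[0, 1])
```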

Adding another twist to the already well-known publication bias, a recent study concluded that certain brain areas receive more attention in higher-impact journals than other regions do. In an almost comical turn, the authors coin a term for brain regions which receive less attention – “low impact voxels”. Quantifying their claims, they state:

Leading the way in ignominy was the secondary somatosensory area (Z = –4.4, P < 5 × 10⁻⁶), but the supplementary motor area was almost equally disgraced (Z = –4.25, P < 10⁻⁶). Researchers unfortunate enough to find activity in these regions can expect to be published in a journal with approximately half the impact of their most celebrated colleagues (mean impact factors of approximately 5 compared with approximately 9).

As the authors themselves note, there may be simpler explanations for this trend. Still, it is natural to suspect that it could indeed have nudged many researchers towards focusing on specific brain areas.
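For the curious, here is a rough sketch of how one could probe such a claim on one’s own literature sample, using a simple rank test on made-up impact factors; the actual study’s analysis was, of course, different and far more thorough.

```python
# Sketch: do papers reporting region X land in lower-impact journals?
# The impact factors below are fabricated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# fake impact factors: papers reporting S2 vs. papers reporting other regions
if_s2 = rng.lognormal(mean=np.log(5), sigma=0.4, size=60)
if_other = rng.lognormal(mean=np.log(9), sigma=0.4, size=60)

u, p = stats.mannwhitneyu(if_s2, if_other, alternative="less")
print("median IF (S2 papers):   ", np.median(if_s2))
print("median IF (other papers):", np.median(if_other))
print("one-sided Mann-Whitney p:", p)
```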

[Figure: Brain regions that correlate positively (red) and negatively (blue) with journal impact factor]

As I conclude, I return to my question. How do I, as a young researcher, know which of the 70,000 articles have found something real? Is a given finding real, or just the fantasy of a creative mind? Media attention is hardly something to go by, as a recent debate indicates. Data and code sharing can mitigate these shortcomings to some extent. At the end of the day, the onus lies with the individual investigator to critically question the findings and carefully examine the premise of an article before rushing to publication. I would suggest a simple talisman to refer to when in doubt – “Is this real or is it not?”


Acknowledgements: Many thanks to my supervisor, Prof. Lauri Parkkonen, for providing useful feedback and helpful comments.

About Me:

Mainak Jas, Masters Student
Machine Learning and Data Mining
Department of Information and Computer Sciences
Aalto University, School of Science

More about me: http://ltl.tkk.fi/wiki/Mainak_Jas
