Data, Data, Everywhere . . . Especially in My File Drawer


Barbara A. Spellman (published in Perspectives on Psychological Science, January 2012, Vol. 7, No. 1, pp. 58–59)

I don’t know about you, but most of my data are not published in good journals, or even in bad journals; most of my data are sitting in my file drawer.1 And most of those data have never even been sent to a journal. Some data are from novel studies that just “didn’t work.” Some are from studies my coauthors and I now call “pilot studies”—studies we did before the ones that “worked” and were published. Some are from actual or conceptual replications of other people’s research—a few of those worked and many of those did not. The successful replications are unpublishable; journals reject such research saying “But we already knew that.” Of course, the failures to replicate are also unpublishable; we all learned that our first week in graduate school.2 I’m told that the justification for that practice is that “there are a lot of reasons why a good published study will fail to replicate.”

These days, however, many of us are concerned with the flip side of that statement: There are lots of reasons why a bad result (i.e., one that incorrectly rejects the null hypothesis) will get published. I’m not talking about deliberately miscoding or inventing data as has been recently alleged against a couple of highly visible psychologists. And I’m not simply talking about the “random” set of Type I errors that are likely to occur (see Rosenthal, 1979). Rather, I’m talking about well-intentioned scientists making well-intentioned (although biased) decisions that lead to incorrect results.

A selection of cleverly titled articles over the last few years has made the argument well, pointing to problems in how research is run, analyzed, reported, evaluated, reviewed, and selected for publication.3 The first of these, published in PLoS Medicine and therefore not specific to psychology research, was Ioannidis’s (2005) article “Why Most Published Research Findings Are False.” Perspectives on Psychological Science (PPS) later published a controversial paper by Vul, Harris, Winkielman, and Pashler (2009) titled “Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition,” which had originally been titled “Voodoo Correlations in Social Neuroscience.” And 2011 was a busy year for publications about problems in our science, not only in the news but also in our journals. Generalizing from the Vul et al. paper, in early 2011, PPS published Fiedler’s (2011) “Voodoo Correlations Are Everywhere—Not Only in Neuroscience.” “The (Mis)reporting of Statistical Results in Psychology Journals” by Bakker and Wicherts (2011) appeared in Behavior Research Methods. Most recently, Psychological Science has published Simmons, Nelson, and Simonsohn’s (2011) paper “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant.” So, yes, because it’s now time to worry about what we do actually publish, it’s also time to revisit our thoughts about what we choose not to publish.

So, what can be done and what is being done? There are now more journals, typically online, that make the review process quicker and more open. In addition, various method-related websites have sprung up where people can, for example, post nonrefereed research or “register” experiments before they are run. In Fall 2011, I created a website where you can post information about your attempted replications of published studies, regardless of whether they succeeded or failed.4 At the same time, Hal Pashler and colleagues created a website that allows exactly the same thing. When we discovered our “replication” in early November, while both sites were still in development (I sent my link to Hal as part of beta testing), we were astonished. The sites were very similar in tone and content. As I write this introduction, we have plans to combine the sites. By the time you read this introduction, you should be able to use our final product [Ed:]. Note that in addition to posting attempted replications, you will be able to post comments and questions. Posted replications must be signed and must have met ethical guidelines for research. Browsing may be done anonymously.

Meanwhile, PPS gets many submissions about scientific methodology. Because it is not a “methods journal” per se, most are politely rejected. But because methodology is something we have in common across the field, sometimes we have published single papers or sets of papers on method-related issues.

This issue contains several articles reacting to these recent events and publications. Some address the problems; some address potential solutions. In “Short, Sweet, and Problematic? The Rise of the Short Report in Psychological Science,” Ledgerwood and Sherman discuss the pluses and minuses of the trend toward shorter and faster publications. One of the minuses, of course, is the problem of false positives, which is taken up in more detail by Bertamini and Munafò in “Bite-Size Science and Its Undesired Side Effects.” Hegarty and Walton give us reason to worry about Journal Impact Factors as proxies for scientific merit in “The Consequences of Predicting Scientific Impact in Psychology Using Journal Impact Factors.”

The final paper in this issue is Chan and Arvey’s “Meta-Analysis and the Development of Knowledge.” It describes the many ways that meta-analyses can be useful (with lots of examples and not a lot of math). I am a fan of meta-analyses and look forward to PPS getting more meta-analysis manuscripts that include more unpublished research.5 I encourage people who post to the replication website to collaborate on such endeavors.

We all know that science proceeds not only by the accretion of new facts but also by the weeding out of what was once falsely believed. I hope this new website will provide a place to discuss what is robust and what is not, to discover and report limiting conditions on our findings, and to provide more complete input to the meta-analyses that we so badly need—and, therefore, to help us improve our theories. We should not feel attacked when other scientists report failures to replicate our work; it’s not an accusation that we did something wrong. Rather, we should see failures to replicate—and successful replications—first as compliments, because people thought our work was worth paying attention to and spending time on, and second as providing more pieces to the puzzle that is the field of psychology.


I would like to thank Tony Greenwald, Greg Mitchell, Brian Nosek, Hal Pashler, and Jeff Sherman for never-dull discussions of what can and should be done.


  • 1. Okay, my more recent data are scattered across computer files.

  • 2. Very occasionally, a major psychology journal will publish a systematic set of studies that fail to replicate some phenomenon.

  • 3. Of course, this selection is not exhaustive. For example, PPS published an entire special issue on ways of improving the practice of psychological science in January 2009.

  • 4. Here’s the disclaimer: This site has no affiliation with PPS or the Association for Psychological Science.

  • 5. Of course, such unpublished research would need to be evaluated as to quality by authors of the meta-analyses.

Spellman, B. A. (2012). Introduction to the Special Section: Data, Data, Everywhere . . . Especially in My File Drawer. Perspectives on Psychological Science, 7(1), 58–59. DOI: 10.1177/1745691611432124