FAQs

What is the purpose of this website?


To provide a quick and easy way for people to post the results of replication attempts in experimental psychology (defined broadly, to include cognitive, developmental, perceptual, social and other fields of psychological experimentation) whether these attempts succeeded or failed.


Why is this important?


There is rapidly increasing recognition of the "File Drawer Problem" (the fact that most failures to replicate remain within the investigator's own "file drawer"). This problem can (indeed, almost certainly does) produce a misleading scientific literature. Consequently, a significant amount of what we think we know based on the experimental literature is undoubtedly wrong. Existing scientific practice provides very poor mechanisms for bringing these cases to light. A few journals specializing in non-replications have sprung up, but few people want to invest the time to write a full-length report about a series of studies they found disappointing; more often, they simply abandon the topic, with scarcely anyone ever hearing about it.


Is the file drawer problem really that serious or widespread in its implications?


Yes. By some estimates, a sizable proportion of the positive findings reported in the modern scientific literature are likely to be Type I errors (false positives).


Does the website accept successful replications, or only non-replications?


Both. Neither is well represented in the published journal literature, and both are useful to the field.


Can I post results obtained in a class project?


By all means!  With appropriate supervision, replication efforts not only make highly instructive class projects for undergraduate and graduate classes--they can also allow students to make a meaningful contribution to the field by providing evidence on the replicability of particular results (see the forthcoming article on this by Mike Frank and Rebecca Saxe).  When uploading class projects to PsychFileDrawer, please check to indicate that the replication attempt was a class project.  It would also be useful to add comments regarding the level and type of instructor supervision, and the instructor's degree of confidence in the fidelity with which the experimental protocol was implemented.


What is meant by a "failure to replicate" or a "successful replication"?


This is a good question.  Not every failure to find a significant difference where a prior study did find a difference can reasonably be labeled a "failure to replicate".  For example, a trend in the same direction--especially if the attempt does not have enormous statistical power--may be fully consistent with the existence of a real nonzero effect of just the sort implied in the study being replicated.  And while a significant positive result that matches a published positive result is clearly a successful replication, it is not so clear how to regard a trend at, say, p=.25.


On PsychFileDrawer.org we allow users to use common sense and characterize their results as they see fit, but we encourage everyone to reserve the label "failure to replicate" for a very marked departure from a previous result.  A difference in the opposite direction, or an observed difference very close to zero, in a study of comparable power to the original, probably deserves that label.  Conversely, a very pronounced trend or a significant difference in the same direction as a published result deserves to be called a "successful replication".


If users choose to provide a t or F statistic along with the degrees of freedom (under the header "Detailed Description of Method/Results"), a meta-analyst can compute an effect size and potentially aggregate effect sizes across different studies, as illustrated below.  While many have argued that a discrete boundary between "significant" and "not significant" may not be the optimal way to think about a collection of experimental results, we do not believe that this is the place to demand that people compute statistics they may not be familiar with (anyone who wishes to provide extra information can do so in the Detailed Description of Method/Results section).
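

As a rough illustration (our own sketch, not something the site requires): for a two-group, between-subjects t-test with roughly equal group sizes, standard conversions recover an effect size from nothing more than the reported t and its degrees of freedom.

```latex
% Standard conversions from a reported t statistic and df to effect sizes,
% assuming a two-group, between-subjects design with roughly equal group
% sizes (df = n_1 + n_2 - 2).
d \approx \frac{2t}{\sqrt{df}}, \qquad
r = \sqrt{\frac{t^{2}}{t^{2} + df}}
```

For example, a posting that reports t(38) = 2.10 corresponds to d ≈ 0.68 and r ≈ 0.32; an F statistic with one numerator degree of freedom can be converted the same way, since F = t² in that case.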


For thoughtful discussions of the issues mentioned here, see links on our page on statistical issues in reporting null results.


Do I have to give my name to post a result?


Yes. This website will publish only reports of experiments that a named individual presently or formerly working at a university or research center stands behind.


Do I have to give an email address?


Yes, and in using this site you agree to respond to reasonable questions about your methods (although you need not comply with requests that would impose a significant burden on you, such as re-analyzing your data).  Note that our site converts your email address to an image and displays the image rather than text, to prevent spammers from harvesting email addresses from the site.


I have failed to replicate a study, but I am not sure whether I feel comfortable putting up the only posting on the topic--do you have any suggestions?  


Yes!  If you have failed to replicate a study and would like to find out if others have had a similar experience--but aren't sure yet about making a posting on the website--you can use PsychFileDrawer's article-specific private networking tool (click here to access the tool).  The tool lets you indicate what published report you failed to replicate, and in the event that anyone else indicates (or has already indicated) that they too failed to replicate the same study, the software will automatically put you in touch with each other.  This way, you can discuss your findings among yourselves.  Of course, PsychFileDrawer hopes that when this happens, you will ultimately choose to post all the results on the site, so the field can be informed.  However, using the tool does not commit you to posting anything--nor does it cause your identity to be displayed anywhere on this website.


If I post a non-replication, am I accusing the original investigator of incompetence or worse?


Certainly not! In many cases, failures to replicate probably reflect unsuspected boundary conditions or interactions. Some also represent Type II errors (the replication attempt simply missing a real effect). If this archive becomes widely used, its contents should provide useful suggestions about what such boundary conditions might be.


Is it perhaps unfair to original investigators to publish results online that question their findings without giving them an opportunity to reply?


The website promotes transparency and discussion by automatically creating a discussion forum centered around each posting.  This allows anyone, including the author of the original target article, to comment on replication attempts (postings by authors of target articles are highlighted).


Why should I post my non-replication?


Responsible investigators who are committed to advancing their field and who have conducted solid studies with negative results are normally eager for others to have the benefit of knowing about their results. However, since the payoffs to an individual are modest, we have focused on making the submission process quick and convenient. We estimate that it should take about 15 minutes to make a posting (although those posting can provide more elaborate materials if they wish).


If I post a replication attempt, can I later publish a full-length report in a journal?


It is up to a journal to decide what sort of prior dissemination of results precludes publishing an article about the same study.  Journals often agree to publish results that have been briefly described in published conference abstracts, and we see no reason that a report on PsychFileDrawer.org should be treated differently--but this matter is not up to us.


Do you review submissions, and if so, how?


Submissions are reviewed by a member of the PsychFileDrawer project, but they are not peer reviewed.  Postings that do not appear credible, or that are otherwise inappropriate, will be rejected.  However, the contents of all postings are the sole responsibility of the authors--see Terms of Service.


Who holds the copyright to postings?


Postings are published under the Creative Commons Attribution License, which means (roughly) that anyone is free to reproduce the contents so long as they attribute the work (mentioning the author(s) and PsychFileDrawer.org).


I want to cite a posting--how do I do that?


When you view the details of any posting, you will see a link labeled "How to Cite this Posting"; clicking it shows how to cite the posting in APA and other publication formats.


There are journals of non-replications--how is PsychFileDrawer different from those?


Yes, there are several journals specializing in non-replications. PsychFileDrawer differs from these in that it is specifically aimed at those who do not wish to take the time to prepare a full-length report of their replication attempt--a group that we believe probably includes the majority of those who have carried out replication attempts.


Do I have to upload my raw data?


No, but we recommend it. Doing so will be useful for other researchers who may be interested in conducting further analyses of the data (e.g., screening for outliers, or examining moderating variables that might explain divergent results).


If I make a posting, will it disappear in a few years?


We are committed to keeping the archive going for a minimum of five years, but we hope to make it a self-sustaining enterprise.


How will people find postings on this website?


We expect that besides our own search functions, search engines will help direct people to postings on our site.


Why do you ask about the date and place the work was done, who actually ran the subjects, and so on?


We want all postings on this website to be serious and potentially verifiable, and we believe that asking for concrete details about the experiment will help ensure this.


I think this website is a terrific thing--what can I do to help?


Two things: First, tell other experimental psychologists about it.  Second, please link to the website from your university webpage--this will increase PsychFileDrawer's visibility to search engines.



PsychFileDrawer recently created a webpage to allow users to nominate and vote for a high-priority list of studies that need replication.  What are the goals of this effort?


When the list is complete, we will urge users to conduct replications of the listed studies and to upload their results to PsychFileDrawer.  By highlighting the need for these replications, we believe the list will help prod researchers to undertake replication attempts and will demonstrate to editors the community's interest in replication efforts.