Reproducibility in research

 

 

Clarissa França Dias Carneiro

  https://orcid.org/0000-0001-8127-0034

 Olavo Bohrer Amaral

  https://orcid.org/0000-0002-4299-8978

Universidade Federal do Rio de Janeiro
Instituto de Bioquímica Médica Leopoldo de Meis

 

 

Introduction

Science progresses by building knowledge cumulatively. For this to occur efficiently, each piece of acquired knowledge must be robust and reliable, which in turn involves performing reproducible experiments, observations, and analyses. The process of verification and correction of published science, however, occurs in a non-systematic way, which means that reproducibility is not guaranteed by scientific publication in its current format.

It should be noted that the reproducibility of a scientific finding can be defined in many ways, and that there is no consensus on the use of the terms “reproducible” and “replicable” (1,2). In this chapter, we will use “reproducibility” and “replicability” interchangeably, indicating that a similar result is obtained when collecting new data under conditions similar to those in the original study. However, some sources propose different uses of the two terms to distinguish the reproducibility of analyses based on the same data from those based on new experiments or observations (3).

In recent years, data on the reproducibility of published findings in some areas of research has become available. In 2005, epidemiologist John Ioannidis analyzed published replications of highly cited articles in clinical research that presented either greater methodological rigor or a larger sample size than the original articles. He found that 21% of the replications contradicted the original study, while another 21% found effects smaller than those initially described (4). Between 2011 and 2012, the pharmaceutical companies Bayer and Amgen released data from internal attempts to replicate experiments from the academic literature. Of the 67 studies replicated by Bayer, only 21% had been completely reproduced (5), while Amgen could successfully replicate 11% of 53 articles (6).

In experimental psychology, several warning signs about the low reproducibility of published findings emerged in the early 2010s (7,8). In 2015, the results of a large systematic replication of studies in cognitive and social psychology were released, which indicated success rates between 36% and 47% (9). Since then, similar projects have found reproducibility rates between 30% and 85% in different samples of studies from the social and behavioral sciences (10–13).

These surveys, however, are uncommon and restricted to specific research areas and sets of articles. In most areas of science, therefore, we still have little data in this regard. Still, the available figures suggest that research findings, even when peer-reviewed and published in reputable journals, should not necessarily be assumed to be reproducible.

Causes of irreproducibility in research 

Conflicts between rigor and impact

 

The main cause attributed to the observations described above is a publication and incentive system that rewards the impact and novelty of scientific findings but does not systematically assess their reproducibility, which is rarely considered in the evaluation of researchers (14,15). This leads to a literature that is full of positive and impactful results, usually obtained at the expense of selective or biased analyses and inflated effects, which distort our perception of the scientific problems under study (16).

To add to this problem, the assessment of methodological rigor ends up conflated with that of impact and originality during peer review and editorial appraisal, even though these aspects clearly represent different dimensions of scientific quality. Thus, the acceptance of an article, particularly in journals that are very selective for high-impact findings, ends up depending not only on the research’s methods but also on its results. This creates a problematic conflict of interest for the authors, as career advancement comes to depend on obtaining particular results, biasing the conduct and the analysis of studies (17).

Selective analysis and publication

Another point often related to the lack of reproducibility in biomedical research is the biased use of statistical models. The theoretical framework on which null-hypothesis significance tests are based presupposes an a priori definition of the variables under study and of the hypotheses tested. These tests, however, are usually applied flexibly after data collection and examination, and end up being reported selectively according to the results found (18,19). As there is no detailed description of all the analysis procedures tested, a reader’s ability to interpret the results is severely impaired.
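
To make the consequences of this flexibility concrete, the short simulation below is a minimal sketch in Python under assumed conditions (five interchangeable outcome measures, 30 observations per group, and no true effect anywhere): each simulated study tests every outcome and is counted as "positive" if any test reaches p < 0.05, which pushes the false-positive rate from the nominal 5% to roughly 20-25%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 5_000    # simulated studies
n_per_group = 30     # observations per group
n_outcomes = 5       # interchangeable outcome measures (an assumption for illustration)

false_positives = 0
for _ in range(n_studies):
    p_values = []
    for _ in range(n_outcomes):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(0.0, 1.0, n_per_group)  # same distribution: the null is true
        p_values.append(stats.ttest_ind(control, treated).pvalue)
    # Selective reporting: the study counts as "positive" if ANY outcome reaches p < 0.05
    if min(p_values) < 0.05:
        false_positives += 1

print("Nominal false-positive rate: 0.05")
print(f"Rate with undisclosed flexibility: {false_positives / n_studies:.2f}")
# Expected value is close to 1 - 0.95**5, i.e. about 0.23
```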

Furthermore, the problem of omitting “negative” or non-statistically significant results has been recognized for decades (20). Empirical analyses demonstrate that the problem is prevalent in many areas of science (21–24), either because of journal restrictions or, more commonly, because researchers themselves fail to report their negative results (25). This also leads most failed replication attempts to go unpublished, making it difficult to correct false-positive results (26). Paradoxically, this can make the literature become less reliable as the number of researchers focusing on a research question increases (16).

Limitations of peer review

Despite the importance attributed by researchers to the peer review process (27), studies regarding its effectiveness as a quality control mechanism are rare and generally limited in scope (28,29). In its traditional pre-publication form, peer review inevitably has limited impact on improving a study, as it occurs after the study’s completion and is based on an often selective and biased report of the research process by the authors. Still, even errors that would be within the reach of reviewers often go undetected (30). In addition, objective contributions from reviewers to the text or to the reporting of methods and results seem to be scarce (31–34), suggesting that the process fails as a systematic quality control mechanism for the literature (35).

It is also worth noting that peer review and the editorial process themselves are subject to reproducibility issues. Agreement among reviewers has been evaluated in scientific publications for decades, and a meta-analysis published in 2010 indicates that it is quite low (36). Likewise, analyses of agreement among reviewers of grant proposals indicate a lack of reproducibility between assessments (37,38). This problem is probably related to the lack of consensus or explicit guidance on which aspects of an article should be analyzed by peer review, which makes different reviewers and editors approach the process in different ways (39).

Difficulties in correcting the literature

A final important problem raised by recent debates about reproducibility in biomedical research is that the publication system is usually not efficient in correcting the literature after errors are identified. Even when individual researchers make great efforts to detect problems with publications, the success rate in obtaining corrections or retractions is low and marked by a lack of cooperation from journals (40,41). Furthermore, the fact that later replications of an article are not easily traceable means that, even when an article is contradicted by the literature, this information is not necessarily available to its readers (42).

Proposals and solutions 

Systematic description of methods and results 

As described above, reproducibility initially involves the ability to repeat the same experimental or analytical procedures (1,3). Thus, promoting it starts with an adequate and complete description of the methods and results. There are numerous guidelines available for reporting different types of studies (43), and this guide includes a chapter specifically dedicated to this topic.

On the other hand, recent surveys indicate that the recommendation of guidelines by journals does not seem to be sufficient to improve the description of methods and results by authors (44). In addition, adherence to reporting guidelines is not necessarily among the highest priorities of reviewers (39). Thus, ensuring attention to them may require a more proactive attitude from editors, through the use of checklists (45,46) or specialized reviewers (47).

Availability of data and materials

Reproducibility can also be promoted through transparency in the availability of data and research materials. While raw data sharing is required by some journals and funding agencies, this does not guarantee that these data are in fact findable, accessible, interoperable, and reusable (48,49), as specified by management guidelines (50). Ensuring this involves not only the existence of policies in this regard, but also the control of data quality and the facilitation of data structuring through specialized repositories (51).
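
As a purely illustrative sketch of what structured data sharing can look like (the file names, fields, and values below are hypothetical, not requirements of any repository or of the FAIR principles), the Python snippet saves a small dataset together with a machine-readable metadata record describing its variables, creators, and license, the kind of information that makes deposited data easier to find, interpret, and reuse.

```python
import csv
import json

# Hypothetical dataset: two animals, one measured outcome.
rows = [
    {"subject_id": "S01", "group": "control", "latency_s": 12.4},
    {"subject_id": "S02", "group": "treated", "latency_s": 9.8},
]

with open("experiment1_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["subject_id", "group", "latency_s"])
    writer.writeheader()
    writer.writerows(rows)

# Machine-readable "sidecar" metadata describing the dataset.
metadata = {
    "title": "Escape latency in experiment 1 (illustrative example)",
    "creators": ["Researcher A", "Researcher B"],
    "license": "CC-BY-4.0",
    "variables": {
        "subject_id": "anonymized animal identifier",
        "group": "experimental group (control or treated)",
        "latency_s": "escape latency, in seconds",
    },
}

with open("experiment1_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2, ensure_ascii=False)
```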

In 2015, the TOP Guidelines (52) were created to encourage journals to implement measures that increase the transparency of published research. The proposal encompasses eight distinct dimensions of transparency (citations, data, code, materials, design/analysis, pre-registration of studies, pre-registration of analysis plans, and replication), proposing increasing levels of implementation of each practice to stimulate adoption. Complementarily, the guidelines also gave rise to a journal evaluation system based on the level of transparency achieved for each dimension (53).

Emphasis on methods rather than impact 

Another proposal at the level of journals is that publication criteria be based only on the methodological rigor of studies, without considering the originality and potential impact of the results. In a similar vein, the creation of venues for publishing replications of previous findings can have a significant impact on confirming results in the literature (54). An interesting possibility is that journals may request replication of important findings published in an area (55) or require independent confirmatory experiments for publication of some types of studies (56). Ideally, replications should follow confirmatory research practices, with pre-registered protocols and high statistical power.
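
As a rough illustration of what high statistical power implies for sample size (a back-of-the-envelope normal approximation in Python; the effect size of d = 0.4 is an assumed value, not taken from any specific study), the sketch below estimates the per-group sample size needed for a two-group replication to reach 90% power at alpha = 0.05.

```python
import math
from scipy.stats import norm

def replication_sample_size(effect_size_d, alpha=0.05, power=0.90):
    """Per-group sample size for a two-group comparison (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for a two-sided test
    z_power = norm.ppf(power)          # quantile corresponding to the desired power
    return math.ceil(2 * ((z_alpha + z_power) / effect_size_d) ** 2)

# Assumed standardized effect size (d = 0.4): about 132 participants per group.
print(replication_sample_size(0.4))
```

Smaller assumed effects, as are common when original estimates are inflated, raise this number quickly: halving the effect size roughly quadruples the required sample.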

With this shift in evaluation, one would expect the robustness of results to become the main focus of researchers. Some journals have made this an explicit mission (57), with standardized forms to guide reviewers towards this purpose. That said, the ubiquitous use of metrics related to the number of citations acts as a strong stimulus for the “potential impact” of an article to end up as an editorial selection criterion in most journals. This has led to the recommendation that journal-based metrics such as impact factors should not be emphasized by journals or used in the evaluation of scientists (58).

Protocol registration and Registered Reports

As an additional step towards decoupling the evaluation of research from its results, some journals have implemented the Registered Reports model, in which the study protocol, containing a detailed description of the methods, undergoes peer review before data collection and analysis (59–61). In this format, authors can receive corrections and suggestions before carrying out the study, increasing the likelihood of concrete benefits. Furthermore, the practice helps to prevent both publication bias and analytical flexibility, by preventing analysis methods from being defined by the data (18,62,63). Upon completion, the work is again evaluated by the reviewers, who must assess the study’s adherence to the previously approved methods rather than its results to decide on acceptance.

There are other ways to mitigate analytical flexibility that do not involve changes in the peer review system. Pre-registration of study protocols by authors also works as a way to increase reproducibility, by allowing for a distinction between confirmatory and exploratory analyses (62,63). This procedure is regularly adopted in specific areas of research, such as clinical trials, with encouraging results (64), and can be carried out either on specialized platforms, such as the Registro Brasileiro de Ensaios Clínicos (Brazilian Clinical Trials Registry) (65), or on generic ones, such as the Open Science Framework (66,67). Recommending or requiring protocol pre-registration can thus be an alternative for journals to increase the reproducibility of published studies (68).

Transparency in literature correction

Despite the difficulties mentioned above regarding corrections in the literature, the scenario can be different when the initiative comes from the journal itself. In 2018, Molecular and Cellular Biology conducted a study to identify problematic images in its publications, resulting in the correction of 78% of the errors found (69). When implemented after publication, however, the process generated approximately 12 times more workload for the team involved than when performed during submission (69). Other experiences by specific journals in this regard include systematic image checking (70) and verification of the reproducibility of analyses (71) during the submission and peer review process.

Even if these measures are taken, it is inevitable that some articles with erroneous or false data will still pass through the filter of peer review, which is known to be vulnerable to this type of situation (30,72). Thus, it is also important that journals and authors work quickly to correct the literature when necessary. When corrections and retractions occur, it is essential that their reasons be explained transparently, as has been happening more frequently in recent years (73). It is also important that they do not generate stigma for the researchers involved, in order to encourage correction initiatives to come from the authors themselves when there are doubts about the reproducibility of published results (74,75).

Support for new publication formats

In the context of the search for greater transparency, agility, and accessibility in published science, the use of preprints has become an increasingly common practice in biomedical research (76–79). Preprints accelerate the advancement of knowledge by eliminating obstacles to a finding’s entry into the scientific literature, while also representing a form of open access at low cost when compared to traditional journals. Currently, several preprint platforms exist, with different modes of operation and scope (76,80). Some of them, such as bioRxiv, have been successful in integrating the flow of preprint publishing and journal submission for peer review (81), in a model that has been followed by other platforms such as medRxiv (82) and SciELO Preprints (83).

During the first months of the COVID-19 pandemic, the importance of rapid dissemination of knowledge became even more evident, reinforcing the role of preprints in this scenario (84–86). Valid concerns remain about possible risks of preprints, especially regarding access by non-specialized audiences (87,88). That said, the available evidence suggests that the differences between preprints and published articles are usually small, supporting the idea that both should be considered valid scientific contributions (31,32). Furthermore, considering the current social and technological contexts, opposing free, immediate, and unlimited access to scientific knowledge does not seem to be an acceptable alternative (89,90).

Conclusions

Although we have addressed several proposals for changes in the scientific publishing system to improve reproducibility, many of them are still based on anecdotal evidence, and their effectiveness has not been empirically demonstrated. Thus, the search for more effective and efficient quality control systems in research must involve the experimental testing of different approaches. There are numerous open questions and, in order to answer them, the participation of journals is essential, either by opening up data on the review process or by carrying out studies to assess the effectiveness of specific interventions (28,91).

Finally, it should be noted that, although scientific journals have a role to play in the search for more reproducible science, this task is larger than the publication system. Ultimately, developing an effective quality control system in academic science involves creating instances of review and correction throughout the research process, and not just at the end. At the same time, it is essential that the incentives provided to researchers by institutions and funding agencies place transparency and rigor as central objectives. Thus, the issue of reproducibility will only be solved by reform at several levels, along with empirical research on the effectiveness of proposed solutions.

References

  1. Goodman SN, Fanelli D, Ioannidis JPA. What does research reproducibility mean? Sci Transl Med. 2016;8(341):1–6. 
  2. Nosek BA, Errington TM. What is replication? PLOS Biol. 2020;18(3):e3000691. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/32218571/> 
  3. National Academies of Sciences Engineering and Medicine. Reproducibility and Replicability in Science. National Academies Press; 2019. 
  4. Ioannidis JPA. Contradicted and Initially Stronger Effects in Highly Cited Clinical Research. JAMA. 2005;294(2):218. [Acesso em 09 set. 2022]. Disponível em: <https://jamanetwork.com/journals/jama/fullarticle/10.1001/jama.294.2.218>
  5. Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011;10(9):712. [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1038/nrd3439-c1>
  6. Begley CG, Ellis LM. Drug development: Raise standards for preclinical cancer research. Nature. 2012;483(7391):531–3. [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1038/483531a>
  7. Pashler H, Wagenmakers E-J. Editors’ Introduction to the Special Section on Replicability in Psychological Science: A Crisis of Confidence? Perspect Psychol Sci. 2012;7(6):528–30. [Acesso em 09 set. 2022]. Disponível em: <https://journals.sagepub.com/doi/10.1177/1745691612465253?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%20%200pubmed>
  8. Nosek BA, Hardwicke TE, Moshontz H, Allard A, Corker KS, Almenberg AD, et al. Replicability, Robustness, and Reproducibility in Psychological Science. PsyArXiv. 2021. 
  9. Open Science Collaboration. PSYCHOLOGY. Estimating the reproducibility of psychological science. Science. 2015;349(6251):aac4716. 
  10. Klein RA, Ratliff KA, Vianello M, Adams RB, Bahník Š, Bernstein MJ, et al. Investigating variation in replicability: A “many labs” replication project. Soc Psychol (Gott). 2014;45(3):142–52. 
  11. Camerer CF, Dreber A, Forsell E, Ho T-H, Huber J, Johannesson M, et al. Evaluating replicability of laboratory experiments in economics. Science. 2016;351(6280):1433–6. 
  12. Cova F, Strickland B, Abatista A, Allard A, Andow J, Attie M, et al. Estimating the Reproducibility of Experimental Philosophy. Rev Philos Psychol. 2018;1–36. 
  13. Klein RA, Vianello M, Hasselman F, Adams BG, Adams RB, Alper S, et al. Many Labs 2: Investigating Variation in Replicability Across Samples and Settings. Adv Methods Pract Psychol Sci. 2018;1(4):443–90. 
  14. Smaldino PE, McElreath R. The natural selection of bad science. R Soc Open Sci. 2016;3(9):160384. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/27703703/>
  15. Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, Percie du Sert N, et al. A manifesto for reproducible science. Nat Hum Behav. 2017;1(1):0021. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/33954258/>
  16. Ioannidis JPA. Why most published research findings are false. PLOS Med. 2005;2(8):0696–701. 
  17. Marder E, Kettenmann H, Grillner S. Impacting our young. Proc Natl Acad Sci U S A. 2010;107:21233. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/21098264>
  18. Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011;22(11):1359–66. [Acesso em 09 set. 2022]. Disponível em: <https://journals.sagepub.com/doi/10.1177/0956797611417632?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%20%200pubmed>
  19. John LK, Loewenstein G, Prelec D. Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychol Sci. 2012;23(5):524–32. 
  20. Rosenthal R. The file drawer problem and tolerance for null results. Psychol Bull. 1979;86(3):638–41. 
  21. Fanelli D, Costas R, Ioannidis JPA. Meta-assessment of bias in science. Proc Natl Acad Sci U S A. 2017;114(14):3714–9. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/28320937/>
  22. Dickersin K, Chalmers I. Recognizing, investigating and dealing with incomplete and biased reporting of clinical research: From Francis Bacon to the WHO. J R Soc Med. 2011;104(12):532–8. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/22179297/>
  23. Fanelli D. Negative results are disappearing from most disciplines and countries. Scientometrics. 2012;90(3):891–904. 
  24. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy. N Engl J Med. 2008;358(3):252–60. [Acesso em 09 set. 2022]. Disponível em: <https://www.nejm.org/doi/10.1056/NEJMsa065779?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%20%200www.ncbi.nlm.nih.gov>
  25. Dickersin K, Min YI, Meinert CL. Factors Influencing Publication of Research Results: Follow-up of Applications Submitted to Two Institutional Review Boards. JAMA. 1992;267(3):374–8. 
  26. Baker M. 1,500 scientists lift the lid on reproducibility. Nature. 2016;533:452–4. [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1038/533452a>
  27. Sense About Science. Quality, trust & peer review: researchers’ perspectives 10 years on. 2019. 
  28. Tennant JP, Ross-Hellauer T. The limitations to our understanding of peer review. Res Integr Peer Rev. 2020;5(1):6. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/32368354/>
  29. Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database Syst Rev. 2007;(2):MR000016. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/17443635/>
  30. Schroter S, Black N, Evans S, Godlee F, Osorio L, Smith R. What errors do peer reviewers detect, and does training improve their ability to detect them? J R Soc Med. 2008;101(10):507–14. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/18840867/>
  31. Carneiro CFD, Queiroz VGS, Moulin TC, Carvalho CAM, Haas CB, Rayêe D, et al. Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature. Res Integr Peer Rev. 2020;5(1):16. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/33292815/>
  32. Klein M, Broadwell P, Farb SE, Grappone T. Comparing published scientific journal articles to their pre-print versions. Int J Digit Libr. 2018;1–16. 
  33. Goodman SN, Berlin J, Fletcher SW, Fletcher RH. Manuscript quality before and after peer review and editing at Annals of Internal Medicine. Ann Intern Med. 1994;121(1):11–21. 
  34. Pierie J-PE, Walvoort HC, Overbeke AJP. Readers’ evaluation of effect of peer review and editing on quality of articles in the Nederlands Tijdschrift voor Geneeskunde. Lancet. 1996;348(9040):1480–3. 
  35. Ioannidis JPA, Tatsioni A, Karassa FB. Who is afraid of reviewers’ comments? Or, why anything can be published and anything can be cited. Eur J Clin Invest. 2010;40(4):285–7. [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1111/j.1365-2362.2010.02272.x>
  36. Bornmann L, Mutz R, Daniel H-D. A Reliability-Generalization Study of Journal Peer Reviews: A Multilevel Meta-Analysis of Inter-Rater Reliability and Its Determinants. Rogers S, editor. PLOS One. 2010;5(12):e14331. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/21179459/>
  37. Cole S, Cole JR, Simon GA. Chance and Consensus in Peer Review. Science. 1981;214:881–6. 
  38. Pier EL, Brauer M, Filut A, Kaatz A, Raclaw J, Nathan MJ, et al. Low agreement among reviewers evaluating the same NIH grant applications. Proc Natl Acad Sci U S A. 2018;115(12):2952–7. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/29507248/>
  39. Glonti K, Cauchi D, Cobo E, Boutron I, Moher D, Hren D. A scoping review on the roles and tasks of peer reviewers in the manuscript review process in biomedical journals. BMC Med. 2019;17(1):118. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/31217033/>
  40. Allison DB, Brown AW, George BJ, Kaiser KA. Reproducibility: A tragedy of errors. Nature. 2016;530:27–9. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/26842041/>
  41. Casadevall A, Steen RG, Fang FC. Sources of error in the retracted scientific literature. FASEB J. 2014;28(9):3847–55. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/24928194/>
  42. Hartshorne JK, Schachner A. Tracking Replicability as a Method of Post-Publication Open Evaluation. Front Comput Neurosci. 2012;6:8. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/22403538/>
  43. Equator Network. EQUATOR NETWORK website [Internet]. [cited 2019 Jan 3]. Available from: https://www.equator-network.org/
  44. Stevens A, Shamseer L, Weinstein E, Yazdi F, Turner L, Thielman J, et al. Relation of completeness of reporting of health research to journals’ endorsement of reporting guidelines: Systematic review. BMJ. 2014;348. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/24965222/>
  45. Hair K, Macleod MR, Sena ES. A randomised controlled trial of an Intervention to Improve Compliance with the ARRIVE guidelines (IICARus). Res Integr Peer Rev. 2019;4(1):12. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/31205756/>
  46. The NPQIP Collaborative group. Did a change in Nature journals’ editorial policy for life sciences research improve reporting? BMJ Open Sci. 2019;3(1):e000035. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/35047682/>
  47. Cobo E, Selva-O’Callagham A, Ribera J-M, Cardellach F, Dominguez R, Vilardell M. Statistical Reviewers Improve Reporting in Biomedical Articles: A Randomized Trial. Scherer R, editor. PLOS One. 2007;2(3):e332. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/17389922/>
  48. Hardwicke TE, Mathur MB, MacDonald K, Nilsonne G, Banks GC, Kidwell MC, et al. Data availability, reusability, and analytic reproducibility: evaluating the impact of a mandatory open data policy at the journal Cognition. R Soc Open Sci. 2018;5(8):180448. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/30225032/>
  49. Stodden V, Seiler J, Ma Z. An empirical analysis of journal policy effectiveness for computational reproducibility. Proc Natl Acad Sci U S A. 2018;115(11):2584–9. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/29531050/>
  50. Wilkinson MD, Dumontier M, Aalbersberg IjJ, Appleton G, Axton M, Baak A, et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016;3(1):1–9. 
  51. re3data.org – Registry of Research Data Repositories [Internet]. [cited 2021 Feb 24]. Available from: https://doi.org/10.17616/R3D
  52. Nosek BA, Alter G, Banks GC, Borsboom D, Bowman SD, Breckler SJ, et al. Promoting an open research culture. Science. 2015;348(6242):1422–5. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/26113702/>
  53. Center for Open Science. New Measure Rates Quality of Research Journals’ Policies to Promote Transparency and Reproducibility [Internet]. 2020 [cited 2021 Feb 5]. Available from: https://www.cos.io/about/news/new-measure-rates-quality-research-journals-policies-promote-transparency-and-reproducibility
  54. Moonesinghe R, Khoury MJ, Janssens ACJW. Most Published Research Findings Are False—But a Little Replication Goes a Long Way. PLOS Med. 2007;4(2):e28. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/17326704/>
  55. Wagenmakers EJ, Forstmann BU. Rewarding high-power replication research. Cortex. 2014;51:105–6. 
  56. Mogil JS, Macleod MR. No publication without confirmation. Nature. 2017;542:409–11. [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1038/542409a>
  57. MacCallum CJ. ONE for All: The Next Step for PLoS. PLOS Biol. 2006;4(11):e401. [Acesso em 09 set. 2022]. Disponível em: <https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/17523266/>
  58. San Francisco Declaration on Research Assessment [Internet]. 2012. [cited 2021 Feb 25]. Available from: https://sfdora.org/read/
  59. Hardwicke TE, Ioannidis JPA. Mapping the universe of registered reports. Nature Human Behaviour. 2018;2:793–6. 
  60. Chambers C. What’s next for Registered Reports? Nature. 2019;573:187–9. [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1038/d41586-019-02674-6>
  61. Center for Open Science. Registered Reports [Internet]. [cited 2021 Feb 25]. Available from: https://www.cos.io/initiatives/registered-reports
  62. Nosek BA, Ebersole CR, DeHaven AC, Mellor DT. The preregistration revolution. Proc Natl Acad Sci U S A. 2018;115(11):2600–6. [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1073/pnas.1708274114>
  63. Gelman A, Loken E. The statistical Crisis in science. Am Sci. 2014;102(6):460–5. 
  64. Kaplan RM, Irvin VL. Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time. Garattini S, editor. PLOS One. 2015;10(8):e0132382. [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1371/journal.pone.0132382>
  65. ReBEC – Registro Brasileiro de Ensaios Clínicos [Internet]. [cited 2021 Feb 25]. Available from: https://ensaiosclinicos.gov.br/
  66. Mellor DT. Templates of OSF Registration Forms [Internet]. Open Science Framework. 2021 [cited 2021 Feb 25]. Available from: https://osf.io/zab38/wiki/home/
  67. World Health Organization. Primary registries in the WHO registry network [Internet]. [cited 2021 Feb 25]. Available from: https://www.who.int/clinical-trials-registry-platform/network/primary-registries
  68. De Angelis C, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, et al. Clinical trial registration: A statement from the International Committee of Medical Journal Editors. CMAJ. 2004;171:606–7. 
  69. Bik EM, Fang FC, Kullas AL, Davis RJ, Casadevall A. Analysis and Correction of Inappropriate Image Duplication: the Molecular and Cellular Biology Experience. Mol Cell Biol. 2018;38(20). [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1128/mcb.00309-18>
  70.  Pearson H. CSI: Cell biology. Nature. 2005;434:952–3. [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1038/434952a>
  71. AJPS Verification Policy [Internet]. American Journal of Political Science. [cited 2021 Feb 25]. Available from: https://ajps.org/ajps-verification-policy/
  72. Heathers J. The Lancet has made one of the biggest retractions in modern history. How could this happen? The Guardian [Internet]. 2020 [cited 2021 Feb 24]; Available from: https://www.theguardian.com/commentisfree/2020/jun/05/lancet-had-to-do-one-of-the-biggest-retractions-in-modern-history-how-could-this-happen
  73. Decullier E, Maisonneuve H. Correcting the literature: Improvement trends seen in contents of retraction notices. BMC Res Notes. 2018;11(1):490. [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1186/s13104-018-3576-2>
  74. Rohrer JM, Tierney W, Uhlmann EL, DeBruine LM, Heyman T, Jones B, et al. Putting the Self in Self-Correction: Findings from the Loss-of-Confidence Project. PsyArXiv. 2020. 
  75. Bishop DVM. Fallibility in Science: Responding to Errors in the Work of Oneself and Others. Adv Methods Pract Psychol Sci. 2018;1(3):432–8. 
  76. Kirkham JJ, Penfold NC, Murphy F, Boutron I, Ioannidis JP, Polka J, et al. Systematic examination of preprint platforms for use in the medical and biomedical sciences setting. BMJ Open. 2020;10(12):e041849. [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1136/bmjopen-2020-041849>
  77.  Kaiser J. The preprint dilemma. Science. 2017;357:1344–9. [Acesso em 09 set. 2022]. Disponível em: <https://www.science.org/doi/10.1126/science.357.6358.1344?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%20%200pubmed>
  78. Biology preprints over time [Internet]. ASAPbio blog. 2020 [cited 2021 Feb 25]. Available from: https://asapbio.org/preprint-info/biology-preprints-over-time
  79. Abdill RJ, Blekhman R. Meta-research: Tracking the popularity and outcomes of all bioRxiv preprints. eLife. 2019;8:e45133. [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.7554/elife.45133> 
  80. Malički M, Jerončić A, Ter Riet G, Bouter LM, Ioannidis JPA, Goodman SN, et al. Preprint Servers’ Policies, Submission Requirements, and Transparency in Reporting and Research Integrity Recommendations. JAMA. 2020; 324:1901–3. [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1001/jama.2020.17195>
  81. Sever R, Roeder T, Hindle S, Sussman L, Black K-J, Argentine J, et al. bioRxiv: the preprint server for biology. bioRxiv. 2019. 
  82. Kaiser J. Medical preprint server debuts. Science. 2019. 
  83. SCIENTIFIC ELECTRONIC LIBRARY ONLINE. SciELO Preprints em operação [Internet]. SciELO em Perspectiva. 2020 [cited 2021 Feb 5]. Available from: https://blog.scielo.org/blog/2020/04/07/scielo-preprints-em-operacao/#.YB2aInlv_IU
  84. Else H. How a torrent of COVID science changed research publishing – in seven charts. Nature. 2020;588:553. [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1038/d41586-020-03564-y>
  85. Oransky I, Marcus A. Quick retraction of coronavirus paper was good moment for science [Internet]. 2020 [cited 2021 Jan 20]. Available from: https://www.statnews.com/2020/02/03/retraction-faulty-coronavirus-paper-good-moment-for-science/
  86. Fraser N, Brierley L, Dey G, Polka J, Pálfy M, Nanni F, et al. Preprinting the COVID-19 pandemic. bioRxiv. 2020. 
  87. Anderson R. Journalism, Preprint Servers, and the Truth: Allocating Accountability [Internet]. The Scholarly Kitchen. 2020 [cited 2021 Feb 5]. Available from: https://scholarlykitchen.sspnet.org/2020/12/14/journalism-preprint-servers-and-the-truth-allocating-accountability/
  88. Saitz R, Schwitzer G. Communicating Science in the Time of a Pandemic. JAMA. 2020; 324:443–4.[Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1001/jama.2020.12535>
  89. Cobb M. The prehistory of biology preprints: A forgotten experiment from the 1960s. PLOS Biol. 2017;15(11):e2003995. [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1371/journal.pbio.2003995>
  90. Penfold NC, Polka JK. Technical and social issues influencing the adoption of preprints in the life sciences. Shafee T, editor. PLOS Genet. 2020;16(4):e1008565. [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1371/journal.pgen.1008565>
  91. Schroter S, Loder E, Godlee F. Research on peer review and biomedical publication. The BMJ. 2020;368. [Acesso em 09 set. 2022]. Disponível em: <https://doi.org/10.1136/bmj.m661>

 

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
