Understanding Peer Review in Science

Peer Review Process

Peer review is an essential element of the scientific publishing process that helps ensure that research articles are evaluated, critiqued, and improved before release into the academic community. Take a look at the significance of peer review in scientific publications, the typical steps of the process, and how to approach peer review if you are asked to assess a manuscript.

What Is Peer Review?

Peer review is the evaluation of work by peers, who are people with comparable experience and competency. Peers assess each other's work in educational settings, in professional settings, and in the publishing world. The goal of peer review is improving quality, defining and maintaining standards, and helping people learn from one another.

In the context of scientific publication, peer review helps editors determine which submissions merit publication and improves the quality of manuscripts prior to their final release.

Types of Peer Review for Manuscripts

There are three main types of peer review:

  • Single-blind review: The reviewers know the identities of the authors, but the authors do not know the identities of the reviewers.
  • Double-blind review: Both the authors and reviewers remain anonymous to each other.
  • Open peer review: The identities of both the authors and reviewers are disclosed, promoting transparency and collaboration.

There are advantages and disadvantages to each method. Anonymous reviews reduce bias but limit collaboration, while open reviews increase transparency at the cost of potentially increasing bias.

Key Elements of Peer Review

Proper selection of a peer group improves the outcome of the process:

  • Expertise: Reviewers should possess adequate knowledge and experience in the relevant field to provide constructive feedback.
  • Objectivity: Reviewers assess the manuscript impartially and without personal bias.
  • Confidentiality: The peer review process maintains confidentiality to protect intellectual property and encourage honest feedback.
  • Timeliness: Reviewers provide feedback within a reasonable timeframe to ensure timely publication.

Steps of the Peer Review Process

The typical peer review process for scientific publications involves the following steps:

  • Submission: Authors submit their manuscript to a journal that aligns with their research topic.
  • Editorial assessment: The journal editor examines the manuscript and determines whether or not it is suitable for publication. If it is not, the manuscript is rejected.
  • Peer review: If it is suitable, the editor sends the article to peer reviewers who are experts in the relevant field.
  • Reviewer feedback: Reviewers provide feedback, critique, and suggestions for improvement.
  • Revision and resubmission: Authors address the feedback and make necessary revisions before resubmitting the manuscript.
  • Final decision: The editor makes a final decision on whether to accept or reject the manuscript based on the revised version and reviewer comments.
  • Publication: If accepted, the manuscript undergoes copyediting and formatting before being published in the journal.

Pros and Cons

While the goal of peer review is improving the quality of published research, the process isn’t without its drawbacks.

  • Quality assurance: Peer review helps ensure the quality and reliability of published research.
  • Error detection: The process identifies errors and flaws that the authors may have overlooked.
  • Credibility: The scientific community generally considers peer-reviewed articles to be more credible.
  • Professional development: Reviewers can learn from the work of others and enhance their own knowledge and understanding.
  • Time-consuming: The peer review process can be lengthy, delaying the publication of potentially valuable research.
  • Bias: The personal biases of reviewers can impact their evaluation of a manuscript.
  • Inconsistency: Different reviewers may provide conflicting feedback, making it challenging for authors to address all concerns.
  • Limited effectiveness: Peer review does not always detect significant errors or misconduct.
  • Poaching: Some reviewers take an idea from a submission and gain publication before the authors of the original research.

Steps for Conducting Peer Review of an Article

Generally, an editor provides guidance when you are asked to provide peer review of a manuscript. Here are typical steps of the process.

  • Accept the right assignment: Accept invitations to review articles that align with your area of expertise to ensure you can provide well-informed feedback.
  • Manage your time: Allocate sufficient time to thoroughly read and evaluate the manuscript, while adhering to the journal’s deadline for providing feedback.
  • Read the manuscript multiple times: First, read the manuscript for an overall understanding of the research. Then, read it more closely to assess the details, methodology, results, and conclusions.
  • Evaluate the structure and organization: Check if the manuscript follows the journal’s guidelines and is structured logically, with clear headings, subheadings, and a coherent flow of information.
  • Assess the quality of the research: Evaluate the research question, study design, methodology, data collection, analysis, and interpretation. Consider whether the methods are appropriate, the results are valid, and the conclusions are supported by the data.
  • Examine the originality and relevance: Determine if the research offers new insights, builds on existing knowledge, and is relevant to the field.
  • Check for clarity and consistency: Review the manuscript for clarity of writing, consistent terminology, and proper formatting of figures, tables, and references.
  • Identify ethical issues: Look for potential ethical concerns, such as plagiarism, data fabrication, or conflicts of interest.
  • Provide constructive feedback: Offer specific, actionable, and objective suggestions for improvement, highlighting both the strengths and weaknesses of the manuscript. Don’t be mean.
  • Organize your review: Structure your review with an overview of your evaluation, followed by detailed comments and suggestions organized by section (e.g., introduction, methods, results, discussion, and conclusion).
  • Be professional and respectful: Maintain a respectful tone in your feedback, avoiding personal criticism or derogatory language.
  • Proofread your review: Before submitting your review, proofread it for typos, grammar, and clarity.


How Long Is Too Long in Contemporary Peer Review? Perspectives from Authors Publishing in Conservation Biology Journals


Affiliation Fish Ecology and Conservation Physiology Laboratory, Department of Biology, Carleton University, Ottawa, Ontario, Canada

Affiliation MISTRA EviEM, Royal Swedish Academy of Sciences, Stockholm, Sweden

Affiliations Rosenstiel School of Marine and Atmospheric Science, University of Miami, Miami, FL, United States of America, Beneath the Waves, Inc., Syracuse, NY, United States of America

Affiliation Rosenstiel School of Marine and Atmospheric Science, University of Miami, Miami, FL, United States of America

Affiliations Fish Ecology and Conservation Physiology Laboratory, Department of Biology, Carleton University, Ottawa, Ontario, Canada, Institute of Environmental Science, Carleton University, Ottawa, Ontario, Canada

  • Vivian M. Nguyen, 
  • Neal R. Haddaway, 
  • Lee F. G. Gutowsky, 
  • Alexander D. M. Wilson, 
  • Austin J. Gallagher, 
  • Michael R. Donaldson, 
  • Neil Hammerschlag, 
  • Steven J. Cooke

PLOS

  • Published: August 12, 2015
  • https://doi.org/10.1371/journal.pone.0132557

29 Sep 2015: The PLOS ONE Staff (2015) Correction: How Long Is Too Long in Contemporary Peer Review? Perspectives from Authors Publishing in Conservation Biology Journals. PLOS ONE 10(9): e0139783. https://doi.org/10.1371/journal.pone.0139783 View correction


Delays in peer reviewed publication may have consequences for both assessment of scientific prowess in academia as well as communication of important information to the knowledge receptor community. We present an analysis on the perspectives of authors publishing in conservation biology journals regarding their opinions on the importance of speed in peer-review as well as how to improve review times. Authors were invited to take part in an online questionnaire, of which the data was subjected to both qualitative (open coding, categorizing) and quantitative analyses (generalized linear models). We received 637 responses to 6,547 e-mail invitations sent. Peer-review speed was generally perceived as slow, with authors experiencing a typical turnaround time of 14 weeks while their perceived optimal review time was six weeks. Male and younger respondents seem to have higher expectations of review speed than females and older respondents. The majority of participants attributed lengthy review times to reviewer and editor fatigue, while editor persistence and journal prestige were believed to speed up the review process. Negative consequences of lengthy review times were perceived to be greater for early career researchers and to have impact on author morale (e.g. motivation or frustration). Competition among colleagues was also of concern to respondents. Incentivizing peer-review was among the top suggested alterations to the system along with training graduate students in peer-review, increased editorial persistence, and changes to the norms of peer-review such as opening the peer-review process to the public. It is clear that authors surveyed in this study viewed the peer-review system as under stress and we encourage scientists and publishers to push the envelope for new peer-review models.

Citation: Nguyen VM, Haddaway NR, Gutowsky LFG, Wilson ADM, Gallagher AJ, Donaldson MR, et al. (2015) How Long Is Too Long in Contemporary Peer Review? Perspectives from Authors Publishing in Conservation Biology Journals. PLoS ONE 10(8): e0132557. https://doi.org/10.1371/journal.pone.0132557

Editor: Miguel A. Andrade-Navarro, Johannes-Gutenberg University of Mainz, GERMANY

Received: March 1, 2015; Accepted: June 16, 2015; Published: August 12, 2015

Copyright: © 2015 Nguyen et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Data are available in the paper and supporting information files.

Funding: This work was supported by the Natural Sciences and Engineering Research Council, 315918-166, http://www.nserc-crsng.gc.ca/index_eng.asp and the Canada Research Chair, 320517-166, http://www.chairs-chaires.gc.ca/home-accueil-eng.aspx . The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Peer reviewed publications remain the cornerstone of the scientific world [ 1 , 2 ] despite the fact that the review process is not infallible [ 3 , 4 ]. Such publications are an essential means of disseminating scientific information through credible and accessible channels. Moreover, academic institutions evaluate scientists based on the quantity and quality of their research via publication output. Given the importance of peer-review to the dissemination of information and to the researchers themselves, it is of little surprise that the process of scientific publishing has been a subject of discussion itself. For example, researchers have explored the many and various biases associated with contemporary peer-review (e.g., gender [ 5 ], nationality/language [ 6 ], and presence of a “known” name and academic age [ 7 ]), with a goal of improving the objectivity, fairness, and rigor of the review process [ 8 ]. What has received less attention is the duration of peer review. Given the significance of peer-reviewed publications for science and evidence-based conservation [ 9 ], efforts to improve the peer-review system are warranted to ensure that delays in publication do not have significant impacts on the transition of scientific evidence into policy.

Despite the switch from surface mail to online communication channels and article submission [ 10 , 11 ], review processes may still stretch into months or even years. Such extreme delays have consequences both for the assessment of scientific prowess (e.g., tenure, employment, promotion) in academia and for the communication of important information about threatened habitats or species. Rapid turnaround times are presumably desirable for authors [ 12 ], particularly early career researchers [ 13 ], but also put “stress” on the peer-review system. Although review time certainly is discussed informally, very little is known about what authors themselves think about the speed of peer-review, and how it could be improved. For example, what is an acceptable timeline for a review? How long should authors wait before contacting editors about the progress of a review? What do authors perceive as trade-offs in quality versus speed of a review? What strategies can an author use to try to elicit a more rapid review process? What are the underlying factors that influence variation in review time? Do author demographics play a role in perspectives on review time? Finally, what does a “long” review mean for career development, scientific progress, and the future behavior of authors with respect to selecting potential publishing outlets? These questions might seem obvious or inherent given our publishing roles and requirements as active researchers, but they have yet to be addressed formally in the scientific literature.

Here, we present an analysis on perspectives about the speed and importance of review times among a subset of authors of papers within the realm of “conservation biology.” Conservation biology is a field with particular urgency for evidence to inform decisions [ 14 ], but has not received as much attention on its peer-review system as other urgent fields such as health and medical sciences [ 15 , 16 ]. We discuss the findings as they relate to peer-review duration and present author perspective on how to improve the speed of peer-review.

Data Collection and Sampling

We extracted the e-mail addresses of authors that published in the field of “conservation biology” from citation records within the Web of Science online database. A search was undertaken on 9 April, 2014 using Web of Science [consisting of Web of Science Core Collections, Biosis Previews (subscription up to 2008), MEDLINE, SciELo and Zoological Record]. We used the following search string, and limited the search to 2013 (to ensure all authors were still active): “conservation AND *diversity”. Search results were refined to include entries for the following Web of Science subject categories alone: environmental sciences ecology, biodiversity conservation, zoology, plant sciences, marine freshwater biology, agriculture, forestry, entomology, fisheries. A total of 6,142 results were obtained, from which 4,606 individual e-mail addresses were extracted. E-mails were sent to this mailing list inviting authors to participate in an anonymous online questionnaire hosted on Fluid Surveys; however, of these e-mails, 312 addresses were inactive. Individuals with e-mails that bounced back indicating a change of e-mail were sent an invitation to the new e-mail address indicated. We sent an additional invitation on 22 May, 2014 using a mailing list produced from an additional extraction of 2,679 e-mail addresses obtained from another search using the above string and subject categories but restricted to 2012, with 426 addresses that were non-functional or no longer active. Reminders were sent to all e-mail addresses between 18–20 June, 2014, and access to the online questionnaire was closed on 3 July, 2014.

Survey Instrument

The entire questionnaire was composed of 38 open- and closed-ended questions, of which a subset of the questions relevant to review times was used for this study. We asked respondents to focus on their experiences in the last five years, given the major phase shift in review protocols in earlier years associated with the move to electronic-based communication [ 17 , 18 ]. However, we did anticipate observing different responses between those that were active in publishing in the pre-electronic era and those that have only published since electronic submission and review became standard practice. While it is not possible to decouple author age/career stage as a potential response driver in the questionnaire [ 13 ], we nonetheless explored the association between time since first peer-reviewed publication and author responses. The questionnaire began with questions that assessed the participants’ opinions on various “review metrics” (e.g., opinions of slow vs. rapid review durations, optimal review duration; see supporting information for full survey questions [ S1 File ]). This section was followed by questions associated with the respondent’s experience and expectations as an author, and their potential behaviour with respect to lengthy review times. Additionally, we assessed participants’ perspective on factors that ultimately influence review speed using open-ended questions and Likert type questions. We then asked whether the peer-review system should be altered and how it should be altered. Lastly, we recorded respondent characteristics such as socio-demographic information, publishing experience and frequency, as well as other experiences with the peer-review system (e.g. referee experience). It is important to note that there could be potential inaccuracies in perceptions of time and events due to self-reporting and recall bias, when someone may perceive a length of time to be quicker or slower than it is in reality.
All but author characteristic questions in the survey were optional, and the number of responses (the sample size, n) therefore varies accordingly at or below the total number of respondents. The questionnaire was pre-tested with five authors, and protocols were approved by Carleton University Research Ethics Board (100958).

Data Analysis

For open-ended responses, we categorized the data by common themes that emerged among responses (i.e. open coding; [ 19 ]) using QSR NVivo 10. We use narrative-style quotes from the responses throughout the paper to illustrate the details and properties of each category or theme. We quantified certain responses using frequency counts of the coded themes to provide proportions of respondents that agree with an idea/theme or to provide a number of responses that corresponded with a theme. For the purpose of article clarity and conciseness, we report the majority of responses in percentage and chose to omit reporting the remainder of the responses when they are responses of no opinions or neutrality (e.g., when a respondent responds to a choice as “neither”).
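The frequency-count step described above can be sketched as follows. This is a toy illustration with invented themes and responses (the actual coding was performed in QSR NVivo 10, and the theme names here are hypothetical):

```python
from collections import Counter

# Each open-ended answer is tagged with one or more coded themes
# (theme names are illustrative, not from the actual study data).
coded_responses = [
    {"reviewer fatigue"},
    {"reviewer fatigue", "editor persistence"},
    {"journal prestige"},
    {"reviewer fatigue"},
    {"no opinion"},
]

# Count how many responses mention each theme, then convert to percentages
counts = Counter(theme for resp in coded_responses for theme in resp)
n = len(coded_responses)
percentages = {theme: 100 * c / n for theme, c in counts.items()}
print(percentages)  # "reviewer fatigue" appears in 3 of 5 responses, i.e. 60%
```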

Generalized linear models were used to identify how demographic information (e.g., gender), career status (e.g., number of publications), and experience regarding review times (# of weeks for either a “typical” (TYPICAL), “short” (SHORT), or “long” (LONG) review period) explained respondents’ expectations (i.e., opinion) for the length of time that constitutes an optimal (Model 1), short (Model 2) and long review time (Model 3). Response variables (modeled as # of weeks) were assumed to follow a Poisson or negative binomial distribution (i.e., when residuals were overdispersed) with normally distributed errors. The best model to explain respondent opinion was selected using backwards model selection [ 20 , 21 ]. Details on the statistical methods and the results are found in supporting information [ S2 File ].

Results and Discussion

Response rate and overall respondent characteristics.

We received 673 responses out of all the invited participants (N = 6,547), of which 461 completed the questionnaire to the end, with the possibility of skipping some questions (see S3 File for raw data). The remainder of participants partially completed the questionnaire, thus the number of responses varied by question. While we recognize that the response rate is low and the potential for sampling bias exists, we do not attempt to generalize the perspectives reported to the entire population of authors in the field of conservation biology, but rather provide insights on the issue. It is also important to recognize that respondents who are more likely to participate in our questionnaire are also perhaps more likely to be those who are proactive in voicing their opinions. Of all the respondents, 28% were female and 63% were male (9% left it blank or preferred not to say). This may lead to a male-dominant perspective in our results. Most respondents ranged between 31–40 years old (38.2%), followed by 41–50 years (24%), 51–64 years (18%), and 21–30 years (11%); less than 5% of respondents were 65 years or older, and <1% were under 21 years old (2 respondents).

Overall, responses came from 119 countries. We categorized countries based on economic income set out by the World Bank (2014). The majority of respondents (N = 640) worked in countries of high-income economies (78%), followed by upper-middle-income economies (17%), lower-middle-income (4%), and less than 2% for low-income economies. The top countries participating in this study included the United States (17%), the United Kingdom (10%), Australia (8%) and Brazil (7%). The majority of respondents (N = 611) were from academia (77% of which 15% were graduate students), governmental or state agencies (11%), non-government or non-profit organizations (10%), and the private sector (2%) which can include consulting and non-academic research institutes among others. The participant characterization suggests that the author perspectives in this article are largely biased towards industrialized nations and academia, which reflects the characteristics we would expect from the research community.

Author publishing and referee experiences

A large proportion of participants published their first paper within the last decade (44% of 451 respondents published in 2000–2009, and 19% published in 2010 and after), which indicates a bias toward authors that are potentially in their mid-careers. About half of the respondents have published > 20 publications (with 21% of 623 respondents publishing >50), and only 10% have published <10. Half of the participants publish < 3 papers per year, 35% publish 4–6, 10% publish 7–10, and only 3% of participants publish >10 papers per year. Furthermore, nearly half of the respondents act as journal referees 1–5 times per year (48% of N = 450). Twenty percent of respondents are highly active referees (reviewing manuscripts >10 times per year), 25% review 6–10 times, and < 10% review manuscripts only once a year. Overall, the majority of respondents have been publishing for at least 10 years and at least half of them are highly experienced with the peer-review process as both authors and referees. As such, the perspectives gathered in our questionnaire come from highly experienced authors that are actively publishing and therefore familiar with the peer-review system.

Peer review duration: experiences and expectations

We asked participating authors about their experience with peer-review durations (i.e. period between initial submission and first editorial decision following formal peer review), and 368 respondents gave useable/complete answers. The average (mean ± SD) shortest or quickest review time was reported to be 5.1 ± 6.0 weeks ( Table 1 ), while the opinion of a “fast” review period was on average 4.4 ± 2.9 weeks. While the opinion of a “slow” review period was on average 14.4 ± 8.2 weeks, the longest or slowest review time was reported on average to be 31.5 ± 23.8 weeks ( Table 1 )—nearly double what the respondents perceive as slow. Furthermore, respondents reported that a “typical” turnaround time for a manuscript submission was on average 14.4 ± 6.0 weeks (ranging between 2–52 weeks), and that the optimal review period on average (median) is 6.4 ± 4 weeks. The optimal range for peer review durations was 1–20 weeks, with the majority falling within eight weeks or under (86% of 366 responses).

[Table 1: https://doi.org/10.1371/journal.pone.0132557.t001]

The fact that respondent opinions and actual experiences of short or long review durations are not aligned, and that their experiences of review durations are lengthier (nearly double the “optimal” time), indicate that the overall perception of the peer-review system is slow. Results reported here may provide indicators for conservation biology related journals to gauge their performance on review time and improve author experiences and satisfaction. In a broad review (over 4000 respondents from across disciplines), Mulligan et al. [ 22 ] noted that 43% of respondents felt that the time it took to the first decision for their last article was slow or very slow. Mulligan et al. [ 22 ] asked authors about whether their last manuscript review (to first decision) took longer than 6 months and reported a mean of 31% but noted some differences among disciplines. For example, reviews in the physical sciences and chemistry rarely (15%) take longer than 6 months while those in the humanities, social science and economics were more likely to take longer than 6 months (i.e., 59%). Mulligan et al. [ 22 ] included a category called “agricultural and biological sciences” and reported 29% of respondents indicated reviews took longer than 6 months with 45% reporting 3 to 6 months. In general, these findings are consistent with the responses we obtained from a focused survey of scientists working in conservation biology.

Respondents did not perceive “fast” or “slow” reviews to influence review quality (75% of 547 useable responses), with the exception of 8% of respondents who believed that fast reviews have higher review quality and another 8% believed fast reviews have lower review quality (10% had no opinion). Therefore, faster review times should presumably be beneficial to the authors, the journals and the relevant field given the belief that review speed does not affect quality, although this has not been tested empirically. We discuss mechanisms to improve review times based on this information later in this article.

Who expects what in peer review duration?

A respondent’s opinion for an optimal review time depended on a weak two-way interaction between respondent experience and gender (TYPICAL*Gender, L-Ratio Test = 5.9, df = 1, P = 0.015). According to both male and female respondents, the optimal length of time for a review should always be shorter than what they have experienced as “typical” ( Fig 1 ). Opinion on what constitutes a short review period (Model 2) was dependent on several weak two-way interactions including: Age*Gender (L-Ratio Test = 10.6, df = 3, P = 0.01), SHORT*Gender (L-Ratio Test = 5.1, df = 1, P = 0.02), SHORT*Age (L-Ratio Test = 11.5, df = 3, P = 0.01). For respondents over 41 years old, experience and opinion are more closely related than for younger respondents, who suggest a short review is ≤ 10 weeks regardless of experience ( Fig 1 ). Female experience and opinion were more closely matched than males’, though this was most evident for respondents 41–50 years old ( Fig 2 ). Finally, opinion on a long review period (Model 3) was dependent on LONG (L-Ratio Test = 61.7, df = 1, P < 0.001) and Gender (L-Ratio Test = 6.0, df = 1, P = 0.01). Here, respondents always expected “long” review periods to be many weeks less than what was experienced as a “long” review ( Fig 3 ). For example, although a female respondent experienced a long review of 60 weeks, she expects a long review to take just over 20 weeks [18.2, 22.3, 95% CI]. Based on researcher experience and generalizing for all ages, those who identify as male appear to be the least satisfied with the speed of the peer-review process.

[Fig 1: https://doi.org/10.1371/journal.pone.0132557.g001]

[Fig 2: https://doi.org/10.1371/journal.pone.0132557.g002]

[Fig 3: https://doi.org/10.1371/journal.pone.0132557.g003]

Interactions with editors and journals

If a decision has not been made on a manuscript, participants (N = 479) waited on average 12.9 ± 7.5 weeks before making first contact with the editor or journal regarding the status of the manuscript “in review”. Of those who make first contact with an editor or journal, most will make additional attempts (77% of 479 responses) if time progresses without a response or decision. Of 479 completed responses, only 9% will never attempt to contact the editor or journal, suggesting that the author population in this study is quite proactive in voicing their concerns, keeping in mind that proactive authors are also more likely to agree to partake in the questionnaire. Nevertheless, this finding is useful for editors who may wonder how long authors typically wait before contacting them. Approximately 12% of participants (N = 469) believed that contacting the editor or journal would jeopardize the decision for acceptance, 6% thought it would benefit the decision, while the majority did not believe there was any influence.

Only 14% of respondents (N = 480) have threatened to withdraw their manuscript from a journal, and 15% (of the 480 respondents) have actually withdrawn a submitted manuscript when the review process was unsatisfactorily long, which was indicated to be on average 30 ± 31 weeks (ranging from 2–100 weeks, N = 72) when such actions were deemed necessary. This review duration for a potential withdrawal of a manuscript is over double the average time that respondents perceive as slow, indicating that most authors had been quite patient with the peer review process. Despite their apparent patience, respondents generally believe that long reviews should be shorter than what they have experienced ( Fig 2 ), indicating an overall perception that peer-review durations are too slow within the realm of conservation biology.

The majority of participants (72% of 480 responses) did not believe that a long or a short review period indicated that the manuscript was likely to be accepted or rejected. In contrast, 14% of respondents believed that a “short” review period would likely lead to rejection of the manuscript and only 6% believed it would likely lead to acceptance, leaving 8% without an opinion. In general, authors did not seem to believe there was any bias toward acceptance or rejection of their manuscript based on whether they contacted the editor or whether the review period was quick or long.

Factors influencing review time and accountability

Of the completed responses (N = 471), over half of the respondents (56%) held the reviewers accountable for the review duration, while 33% held the editors accountable, and 6% attributed delays to the journal staff. The remaining respondents (5%) believed it was a combination of all parties. Likert-type questions revealed that, in general, reviewer fatigue (e.g., lack of time) was ranked as the most influential factor in slowing review speed, followed by editor fatigue, and to a lesser extent the length of the manuscript and the number of reviewers ( Table 2 ). One respondent expressed this reviewer fatigue as follows:

While editors try to find suitable reviewers, in practice there is a relatively small pool of reviewers who can be relied on to do useful reviews. I am an associate editor on 5 journals and am convinced that there is substantial reviewer fatigue out there as the number of publications has grown annually as have the number of journals.

Table 2. https://doi.org/10.1371/journal.pone.0132557.t002

This may correspond with the increased number of publications and publication outlets that contemporary scientists have to contend with. Similarly, in 2007 it was reported that over 1000 new papers appear daily in the scientific and medical literature alone, and this number is likely increasing rapidly [ 12 ]. Kumar [ 23 ] listed five reasons for publication delay, two of which were reviewer availability and reviewers having other commitments that push manuscript reviews to the bottom of their lists. The other three reasons were: editors sending the manuscript for multiple rounds of review (when reviews are conflicting or inadequate); the journal outsourcing manuscript management (e.g., to a Business Process Outsourcing agency); and the reviewer intentionally delaying the publication of a manuscript for various reasons (e.g., rivalry or intentions to plagiarize).

On the other hand, respondents perceived the persistence of the editorial team, the maximum review times allocated by each journal, and journal prestige or impact factor as factors that somewhat speed up the review process ( Table 2 ):

I will always take the full amount of time they [editors] give me. Moreover, only once have I been asked to review a paper by an open access journal, which required my review submission in 2 weeks. But all the others were non-open access journals that gave me a month or more, which increased the average time to decision.

Consequences of long or short review durations

We questioned participating authors about the consequences of long or short review durations from their perspective. Our findings indicate a number of consequences, which we have grouped into themes below.

Consequences for the journals.

After a long review period, most respondents (74% of 472 responses) said they are less likely to submit to that journal again relative to other journals; however, some (19%) said it would depend on the journal impact factor or prestige. At the other end, as expected, if the review period was short, respondents (69% of N = 471) said they are more likely to submit to that journal again, with some respondents (17%) considering journal impact factor or prestige, and 12% of participants neither more nor less likely to submit to that journal again. We also found that review duration is an important factor when respondents (N = 470) consider which journal to submit their research to (43% said yes and 46% said sometimes), while < 10% of participants said they never consider review duration when submitting a manuscript. Review time is therefore an important consideration for journals seeking to maintain their reputation, as the majority of respondents have given thought to review times when deciding where to submit. There are, however, some indications of trade-offs between review duration and impact factor, as approximately 1 in 5 respondents consider journal prestige and impact factor an influential part of deciding to which journal they should submit.

In general, respondents (N = 465) discuss the speed of review with their colleagues: 54% (of 465) discuss it monthly, 30% once a year, 12% weekly, 1% daily, and 4% never discuss review speed. Interestingly, respondents (N = 466) were evenly split between those (49%) who have “blacklisted” a journal for its lengthy review times (i.e., chosen not to resubmit manuscripts to that journal in the future) and those who have not (48%). These findings send multiple messages to journal editors: 1) review time is an important factor for authors when considering publication outlets, and 2) review time is actively being discussed by half of the respondents, which can hinder or endorse a particular journal’s reputation. Publication of research can ultimately affect society at large if the manuscript has significant scientific and policy implications. Therefore, editors, journals, and publishers have a responsibility to disseminate credible scientific information in a timely manner and must play an active role by setting standards and facilitating the peer-review process [ 23 ].

Consequences on careers.

Just over half of the respondents (55% of 466 respondents) feel that a lengthy peer-review process affects their career, while 30% did not believe it did. Open-ended responses suggested that lengthier peer-review durations generally have negative impacts on “early career researchers” and “young scientists” (mentioned by 65 of 212 responses) because of the “publish or perish” system, which affects opportunities for jobs and career advancement. One respondent wrote:

As an early career researcher trying to build a list of publications, it is important to have papers reviewed quickly. The longer the time lag between a research project and accepted publication, the more difficult it is to apply for new grants or job opportunities.

Furthermore, some respondents mentioned the delay in graduation or acceptance in graduate school for students due to lengthy peer-review processes:

I received the first response about my first article only after 54 weeks. At that time I was not able to start my PhD because the institution only accepted candidates with at least one accepted article.
Even after successful completion of my Ph.D. research topic, I was unable to submit my thesis because it’s a rule that on the day of Ph.D. thesis submission, one must have a minimum of one peer-reviewed publication.

The comments of these early-career respondents are perhaps reflected in the predictions from Model 2, where despite the length of time they have experienced as a “short” review, respondents consistently expect review periods to be much shorter ( Fig 2 ). It seems that regardless of their experience, the review period cannot be short enough for early-career professionals who publish in conservation biology. In addition, it seems that irrespective of age, respondents believe a lengthy review period should be considerably shorter than what they have experienced ( Fig 3 ).

For respondents with tenure or later in their career, a slow review process can impact applications for grants/funding (approximately 28% of responses) and promotions (approximately 19% of responses):

Publications are important for ranking of scientists and institution achievements, so long reviews and a long editorial process could violate this process.

Furthermore, concerns were voiced about competition among research groups (5% of responses), subjective treatment, malpractice by certain reviewers and editors, conflicts of interest, and the potential for being “scooped” (i.e., another group publishing the same idea/findings first). Intentional delay of review was also listed as one of the five reasons for peer-review delay by Kumar [ 23 ], lending this concern some weight. Although not the focus of this study, we found that the association between review time and the potential for being “scooped” is worrisome to a number of authors and should be acknowledged, as this topic was brought up relatively frequently when respondents were given the opportunity to comment freely (open responses). For example:

If people play the game well and get their “friends” to review their papers. I am sure in many cases that speeds up the process, more so when people cite their friends (the reviewers) in these papers.
If a person has an “in” with the journal. In other words, subjectivity and preferential treatment increase speed.

Several respondents (<8%) urged that if a manuscript is to be rejected, journals should do so in a timely manner so the researcher can resubmit to another journal sooner. Others voiced concerns that a delay of a manuscript could hinder subsequent work that is built on the manuscript in review, and some mentioned challenges in remembering specifics of the study or content of the manuscript when review times are particularly long.

Consequences to authors’ morale.

It was also revealed that lengthy peer reviews can affect motivation, causing conflict as well as frustration (8% of responses):

The frustration associated with a lengthy process discourages the writer. Incentives for conducting research are diminished when rewards are not forthcoming. Less incentive means less motivation, which both translate into less productivity. Less productivity means less likelihood for promotions. This in turn sets up a vicious cycle very similar to the one related to applying unsuccessfully for grants.
A long peer review process reduces drastically your efficiency of publishing papers, because you need to go back to your previous work and you cannot focus on your current work. Sometimes you need to spend quite a bit of time figuring out how to answer reviewers’ concerns because it was too long ago that you submitted your manuscript.
It is very frustrating, and sometimes embarrassing, to have papers endlessly “in review”. I had a paper where the subject editor sat on the paper for 5 months without sending it for review; after 3 contacts they finally sent it for review and it has been another month and we have not heard back. This was a key paper needed to build a grant proposal, and my collaborators consistently asked if it was published yet—the grant was ultimately submitted before the paper was accepted.

These consequences are not often discussed, but they are interlinked with consequences for a researcher’s career and aspirations. Most of the time, long review durations may not have dramatic consequences; however, a lengthy review that occurs in the wrong place at the wrong time may lead to a cascade of consequences.

Alternative responses to consequences of review times.

A number of respondents (<10%) provided interesting alternative responses worth mentioning, such as (but not limited to): consequences for research quality because of the race to publish; competition among colleagues; greater opportunity cost when taking the time to submit a “quality” manuscript; and limiting the peer-review process to academic research only, because researchers in other sectors are not rewarded based on number of publications and productivity:

Research quality suffers—as opportunities to publish high quality research can be lost when other groups publish (often lower quality) research first. The focus then becomes speed and simplicity of research rather than quality.
Because of career pressure, especially for younger scientists, or the need to complete a degree program, choices are often made (I witness them here) to submit smaller, simpler studies to journals with a quick turnaround, or with a presumed higher acceptance rate for a particular work, rather than invest more time in extending analysis and/or facing rejection or extensive revisions.

Should the review process be altered?

When asked if respondents thought the review process should be altered to change the review time, 61% (of 463) responded yes, 12% responded no, and the remainder had neutral opinions. Of 462 respondents, 43% believed that the review process should be improved while only 8% said no. When asked how the review process should be improved, 211 participants provided open-ended responses (summarized in Table 3 ).

Table 3. https://doi.org/10.1371/journal.pone.0132557.t003

Referee reward system.

About one quarter of the suggestions for improvement involved paying reviewers/editors or providing reviewer incentives, consequences, or reward systems, such as: a free year’s subscription to the journal; rewarding reviewers by adding value to their CV (e.g., “20 best reviews” or “20 best reviewers” awards); “have a 1 in 2 out policy… each paper you submit as a lead author means you have to review 2 for that journal before you can publish again in that journal”; providing discounts on the reviewer’s own submissions or on items from the scientific publishing house (e.g., books, open access discounts); and home institutions establishing reward systems for researchers who regularly review papers.

Editors should remove slow reviewers from their lists. There should be a central bank where good reviewers receive benefits such as fast track review of their material if submitted to the same company (e.g. Wiley, Elsevier, etc.). A reduction in publication costs for good reviewers (not just time but quality of revision)
Engagement for reviewing should be better acknowledged as a performance indicator; some exemplary review processes should be made public so that authors and reviewers can learn from them. Reviewers should be able to see the other reviewer’s comments after the editor’s decision.
For instance, the journal Molecular Ecology is publishing a list of the best reviewers every year based on the quality and speed of the review. This is one example of a reward that reviewers can put in their CV to show their importance in the field.

Our findings suggest a notable call for reviewer incentives and reward systems. It is challenging to get accurate data on the cost of peer review and, in economic terms, the ‘opportunity cost’ to reviewers. The editor of BMJ, Richard Smith [ 24 ], estimated the average total cost of peer review per paper at approximately £100 for BMJ (keeping in mind that 60% of papers are rejected without external review), whereas the cost for papers that made it to review was closer to £1000, not including opportunity costs (i.e., time spent on editing and reviewing manuscripts that could be spent on other activities). A recent survey reported that two-thirds of academics agreed that $100–200 would motivate them to review, while one-third refused to accept monetary compensation [ 25 ]. Kumar [ 23 ] reports differing results from two recent studies: in one, 1500 peer reviewers in the field of economics responded to both monetary and non-monetary incentives by expediting the return of reports [ 26 ], while in 2013 Squazzoni et al. [ 27 ] reported that financial incentives decreased the quality and efficiency of peer reviewers.

Reward systems and incentives for reviewers have been proposed in the literature [ 28 ], including penalties for those who decline reviews and non-monetary rewards for completed reviews, such as published lists of reviewers as a means of acknowledgment (e.g., Journal of Ecosystems and Management). However, some journals already use this system and there is still no indication of change in referee behavior [ 29 ]. One common incentive for peer review is a temporary subscription to the journal in question. It is perhaps not surprising that such an incentive might fail to change reviewer behavior, since many reviewers belong to institutions that already possess subscriptions to a host of journals.

It may just be a matter of time for “top reviewers”, or time spent on reviews, to become “prestigious” and valued in more tangible ways (whereas the current system values number of publications). Peerage of Science is a novel approach to externalized peer review, through which manuscripts are submitted for peer review by members of a community of researchers in a transparent and non-blinded way, after which journals can be contacted with an amended draft [ 30 ]. This system incentivizes peer reviewers by providing metrics and ratings relating to their reviewing activities that members can use to demonstrate their contributions.

Deadlines and defined policies.

Approximately one third of responses (N = 211) suggested that stricter deadlines and policies, shorter allocated times to review a manuscript, and procedures to ensure adherence to those deadlines should be established to improve review duration:

The current review process should follow the model of the PLOS (online) journals. Reviewers are constrained to address specific scientific elements: the question, the method, the results and the discussion, and whether these are scientifically acceptable. This should encourage young researchers to publish without the need to include big names/popular personalities in research to get the paper through journal review.

Again, improvements in peer-review turnaround and quality are something that journal editors are able to control by setting standards and policies that facilitate the peer-review process. A recent review of time management for manuscript peer review acknowledged several suggestions to improve the review process and its timing, noting that it is the responsibility of editors, publishers, and academic sponsors of the journals to implement these improvements [ 23 ].

Editorial persistence and journal management.

Related to these more stringent deadlines and policies is the suggestion that editors should put more pressure on reviewers, and follow up with deadlines (30 responses), while others suggested better journal management (13 responses):

Some journals restart the time counting during a revision process, for example, asking to re-submit as a new manuscript in order to reduce the revision time, instead of keeping track of the time during the whole revision process and being more realistic about the time that a revision takes. I believe that is a way of cheating or deceiving the system.

As illustrated by the quote above, many journals ask authors to re-submit as a “new submission” rather than a “resubmission”, and send the revisions to new referees instead of the previous ones, which increases the length of peer-review time. Fox and Petchey [ 29 ] suggested that if a manuscript is rejected from one journal, the reviews should be carried forward to the next journal the manuscript is submitted to. They argued that this helps with quality control and facilitates the review process by ensuring that authors revise their manuscripts appropriately, and reduces duplication of effort by referees. At present, at least one ecology journal allows authors of previously rejected manuscripts to provide the previous reviews, and the publisher Wiley is trialing peer-review transfer across nine of its neuroscience journals [ 31 ]. A more formal system for sharing reviews has been suggested to increase the speed and quality of the peer-review system, and is now feasible with the pervasive use of electronic submission and review systems [ 29 ].

Peer review training.

Including graduate students or early career researchers as reviewers may increase the “supply” for the increasing demand. Some may argue that graduate students lack experience and knowledge to appropriately assess a manuscript. Formal training has been suggested to improve quality of reviews and increase the network of reviewers. Furthermore, recommendations by senior researchers of names of reliable and qualified graduate students or early career researchers as potential reviewers may help with the deficit [ 32 ]. Indeed, the British Ecological Society recommends that academic supervisors should assign their own peer-review invitations to graduate students [ 33 ], although it is certainly sensible to verify that individual journal editors are happy with this practice.

Changes to the norms of peer-review system.

A number of respondents (12%) wanted to see more drastic changes in the norms of publishing: for example, a permanent, paid group of reviewers; standardizing all journals; permitting submission of manuscripts to more than one journal; including more early career researchers as reviewers; following model journals that do it well (e.g., Geoscience, PLOS ONE); having a database of reviewers; or having sub-reviewers (e.g., with expertise in statistics, methods, taxa, tools, etc.).

A “PubCreds” currency has been proposed as a system in which reviewers “pay” for their submissions using PubCreds they have earned by performing reviews [ 29 ]. Although a radical idea, Fox and Petchey [ 29 ] state that “doing nothing will lead to a system in which external review becomes a thing of the past, decision-making by journals is correspondingly stochastic, and the most selfish among us are the most rewarded”. Furthermore, Smith [ 24 ] suggested adopting a “quick and light” form of peer review, with the aim of opening the peer-review system to the broader world to critique the paper or even rank it in the way that Amazon and other retailers ask users to rank their products. Alternatively, some journals (e.g., Biogeosciences) employ two-stage peer review, whereby articles are published in a discussions format that is open to public review prior to final publication of an amended version. Other journals (e.g., PLOS ONE) and platforms ( www.PubPeer.com ) offer the opportunity for continued review following publication. The argument for a radical change in the norms is not uncommon and may be required in today’s peer-review system, which may soon be in crisis [ 29 ], although suggestions that increase the labour required of editors and referees, such as submitting to more than one journal concurrently, may exacerbate the already stressed peer-review system.

Role of open access and journal prestige on review duration

The majority of respondents do not review a manuscript more quickly for higher-tier journals (71% of 445 respondents). When respondents were asked whether journal prestige justifies differences in turnaround time, 50% of 369 responses did not believe publishing in a top-tier journal justifies a rapid or delayed review time, while 37% believed it does (the remainder had no opinion). Of those who believed publishing in a top-tier journal justifies a longer or shorter review time, 64% believed it justifies rapid reviews, 14% believed it justifies a delayed review, and 20% believed it justifies both (<5% believed it justifies neither). On the other hand, it was interesting to note that a higher proportion of respondents (75% of 367) believed that publishing in a low-tier journal does not justify a rapid or delayed review time. Overall, journal prestige and impact factor seem to be important indicators for many authors, although journals’ ability to turn around peer review in a timelier manner may reflect their perceived prestige and the higher quality manuscripts that make it through primary editorial screening. One respondent noted:

There is likely a link between review duration and impact factors, as impact factors are based on citations during the first two years after publication. If those citing papers take longer to go through the review, they won’t count towards the journal’s impact factor.

We were interested in participants’ perspectives on the review process at open access (OA) journals, particularly because authors pay a fee to publish in such journals. About a third (32% of 461) agreed that OA journals should offer a higher quality of “customer service”, such as faster review and publication times, and an additional 13% of respondents strongly agreed. Another third (31%) of respondents were neutral about this statement, whereas 16% disagreed and 7% strongly disagreed. This finding is interesting because it provides insight into authors’ expectations of OA journals: authors have higher expectations of OA journals even though peer-review standards should be disconnected from cost and from who pays. This is most likely the result of a shift in the customer base. In subscription-based publishing the customer is the librarian, and product quality is assessed primarily through metrics such as impact factor. In OA publishing, the customer becomes the submitting researcher, and quality is assessed through publishing service and, perhaps incorrectly, standards of editorial review. It has yet to be proven that publishers will see substantial increases in profits following a switch to OA, and if profit margins are not significantly increased, then expectations of improved service are unwarranted.

Although open access journals were not the primary focus of our study, we believe this is an increasingly relevant topic: there are debates about the quality of OA journals, but on the other hand, open access may be viewed as mandatory, particularly where research is funded with public money. Future research on perspectives and the perceived value of OA journals within the conservation science community should be considered.

Our findings show that the peer-review process within conservation biology is perceived by authors to be slow (14 weeks), with turnaround times over double the length of what they perceive as “optimal” (6 weeks). In particular, males seem to expect shorter review times than females, whereas female expectations were more closely related to the typical review times they have actually experienced. Similarly, older participants (> 40 years) have expectations of review times that are more closely aligned with their experience, while younger authors consider a short review time to be < 10 weeks regardless of their experiences. Overall, the primary cause that participants attribute to the lengthy peer-review process is “stress” on the peer-review system, mainly reviewer and editor fatigue. Meanwhile, editor persistence and journal prestige/impact factor were believed to speed up the review process. The institutional incentive for productivity has its flaws: the demand for increased publications strains the peer-review system, and the “publish or perish” environment can also create strong demand for publication outlets and heightened expectations for quick turnaround times.
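The “over double” comparison above is simple arithmetic; as a minimal sketch using only the 14-week and 6-week figures reported in this summary:

```python
# Check the claim that the ~14-week duration authors perceive as slow
# is over double the ~6-week duration they consider optimal.
perceived_slow_weeks = 14
optimal_weeks = 6

ratio = perceived_slow_weeks / optimal_weeks
print(f"slow/optimal ratio: {ratio:.2f}")  # prints 2.33, i.e. over double
assert ratio > 2
```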

It appears that early career researchers are more vulnerable to slow peer-review durations in a “publish or perish” system, as these relate to graduation, employment opportunities, and other career advancement. Closely related to impacts on careers are consequences of lengthy peer-review durations for an author’s “morale” (i.e., motivation, frustration, conflicts, embarrassment). Some respondents commented that lengthy review durations may result in a lack of motivation and forgotten details about the manuscript, leading to reduced efficiency and potentially a lower quality manuscript. A few respondents thought that competition among colleagues encourages publication of shorter and simpler studies, in order to gain a quicker turnaround, rather than investing more time in complex and extensive analyses or revisions. These concerns have merit and may have implications for the quality of research and publications.

Although the objective of our research was not to assess the quality of the peer-review system, we believe all aspects of the process are interlinked; peer-review quality and speed are not independent and must be discussed together. The majority (61%) of respondents believe that the review process should be altered, offering a number of suggestions such as a referee reward system, defined deadlines and policies, editorial persistence, better journal management, and changing the norms of the peer-review process. Currently, researchers are rewarded based on productivity, which may result in a system breakdown by increasing demand on a short supply of reviewers and subsequently degrading the quality of publications in the race to publish [ 32 ]. We suggest a partial shift in institutional rewards and incentives from researcher productivity toward greater outreach efforts and public interactions/activities, as there is evidence that conservation goals may be more effectively achieved by engaging the public. Implementing a system that rewards these actions in conjunction with productivity may alleviate pressure on the peer-review system overall and increase conservation successes. Peer-review training could improve the quality of reviews and increase the pool of reviewers by including early career scientists and graduate students. Generally, there is a call from a number of authors to revise and review our own peer-review system to ensure its persistence and quality control.

Open access and opening up the peer-review process are at the forefront of publishing innovation. For example, PeerJ ( www.peerj.com ) offers a novel approach that combines open access with a pre-print system that enables articles to be made available online more rapidly than traditional scholarly publishing. ScienceOpen ( www.scienceopen.com ) immediately publishes manuscripts in open access and accepts continuous open review in a transparent post-publication peer-review process. Such approaches will require time to determine their value to the scientific community, but as scholarly publishing continues to evolve rapidly, experimental approaches to enhancing the communication of peer-reviewed research are warranted. We encourage other scientists and publishers to build on these approaches and continue to push the envelope for new publishing models.

Peer-reviewed journals will continue to be the primary means by which we vet scientific research and communicate novel discoveries to fellow scientists and the community at large, but as shown here, there is much room for improvement. We have provided one of the first evaluations of an important component of the publishing machine, and our results indicate a desire among researchers to streamline the peer-review process. While our sample may not be generalizable to the entire global community of researchers in the field of conservation biology, we believe the opinions, perceptions, and information provided here present an important collective voice that should be discussed more broadly. While the technology is in place to accelerate peer review, the process itself still lags behind the needs of researchers, managers, policy-makers, and the public, particularly for time-sensitive research areas such as conservation biology. Moving forward, we should encourage experimental and innovative approaches to enhance and expedite the peer-review process.

Supporting Information

S1 File. Complete list of survey questions.

https://doi.org/10.1371/journal.pone.0132557.s001

S2 File. GLM data analysis supplement for Models 1–3.

https://doi.org/10.1371/journal.pone.0132557.s002

S3 File. Raw questionnaire data.

https://doi.org/10.1371/journal.pone.0132557.s003

Acknowledgments

We thank all of the study participants who took the time to share their perspectives. Funding was provided by the Canada Research Chairs Program and the Natural Sciences and Engineering Research Council of Canada.

Author Contributions

Conceived and designed the experiments: SJC NH AJG MRD NRH ADMW VMN. Performed the experiments: VMN LFGG NRH. Analyzed the data: VMN LFGG. Contributed reagents/materials/analysis tools: VMN LFGG SJC. Wrote the paper: VMN NRH LFGG ADMW AJG MRD NH SJC.


Mastering the scientific peer review process: tips for young authors from a young senior editor

  • Published: 16 September 2021
  • Volume 33, pages 1–20 (2022)


  • Evgenios Agathokleous 1  


Are you a student at a higher-education institution, or an early-career researcher, striving to understand and master the peer review process so as to increase the odds of getting a paper published in the Journal of Forestry Research or another reputable, peer-reviewed scientific journal? In this paper, a young senior editor provides a handbook of the peer review process based on his decade of experience in scientific publishing. He covers the essential information you need throughout the entire process, from selecting a journal to completing the proofs of your accepted paper. He introduces key points for consideration, such as avoiding predatory journals, dubious research practices, and ethical lapses; interacting with peers, reviewers, and editors; and pursuing aretê. Finally, he points out some common statistical errors and misconceptions, such as P-hacking and incorrect effect-size inference. He hopes that this paper will enhance your understanding and knowledge of the peer-review process.


Introduction

We live in a world where scientific publishing and thus peer review have become a major determinant of career development and success or failure (Neill 2008 ; Fanelli 2010 ; Van Wesel 2016 ; Fanelli and Larivière 2016 ; Vuong 2019 ). Improvement of humans’ daily lives and advancement of societies also depend upon the production of scientific knowledge and dissemination of research results (Böhme and Stehr 1986 ; Stehr 2009 ; de Camargo Jr 2011 ; Thorlindsson and Vilhjalmsson 2016 ), which often are the only means by which humanity can manage and overcome global crises, such as the current COVID-19 pandemic due to the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) (Hu et al. 2020 ; COVID- 19 Host Genetics Initiative 2021 ; Perreau et al. 2021 ; Solís et al. 2021 ; Telenti et al. 2021 ).

The Journal of Forestry Research (JFR) was founded in 1990 and published under the title ‘Journal of Northeast Forestry University’ (English Edition) during its first seven years. During that period JFR was an institutional journal, with almost all of its published papers coming from Northeast Forestry University, China. In 1997 the journal adopted its current title, and in 2002 it became jointly sponsored by Northeast Forestry University and the Ecological Society of China. Between 1997 and 2002, JFR expanded to cover the entire nation, with most submissions coming from institutions across China. JFR took a major turn in 2007 when it partnered with Springer Verlag to deliver its content to an international readership; as a result, about 85% of the papers published in the journal now come from international researchers. Beginning with the first issue of Volume 24, in 2013, JFR has been included in the Thomson Reuters Science Citation Index Expanded (SCIE) and Journal Citation Reports (JCR)/Science Edition. With authors from over sixty countries, JFR is operated by a team of 55 affiliated academic editors from eighteen countries, five in-house publication editors, and four language editors.

Based on the latest Clarivate Analytics JCR, published on 30 June 2021, the 2020 Impact Factor (IF) of JFR increased to 2.149, ranking 24th among 67 Forestry journals (Q2) and marking a new point in the journal's history by exceeding an IF of 2.0. This represents a substantial increase over the 2019, 2018, and 2017 IFs of 1.689, 1.155, and 0.748 (Q3), respectively. The IF should not be the only criterion of a journal's quality (Covaci et al. 2019) and does not serve as an index of the quality of individual articles. In particular, the IF is highly sensitive to skewness in the citation distribution, which is introduced by the small fraction of highly-cited papers in a journal (Lozano et al. 2012; Larivière et al. 2016). Some journals have discontinued its use, and some countries have recently restricted the use of the SCI system in academic evaluations (Verma 2015; Fernandez-Llimos 2016; Qian et al. 2020; Zhu 2020). The trend of JFR's metrics over recent years, however, suggests an upward trajectory in the journal's impact and visibility.
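
The sensitivity of mean-based metrics to a few highly-cited papers can be seen in a small sketch. The citation counts below are hypothetical, not JFR data: the mean is pulled well above what a typical paper receives, while the median is not.

```python
# Mean vs. median of a hypothetical, right-skewed citation distribution.
# Two heavily-cited papers pull the mean far above the typical paper.
from statistics import mean, median

citations = [0, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6, 7, 8, 9, 10, 60, 95]

print(f"mean   = {mean(citations):.1f}")    # 11.5, inflated by the two outliers
print(f"median = {median(citations):.1f}")  # 4.0, the typical paper
```

An IF-style average computed over such a distribution says little about the citation rate of any individual article, which is exactly the point made above.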

As JFR transforms into a more competitive international journal, one less affected by profit agendas, Footnote 1 a more active engagement of young audiences, i.e. early-career researchers and students, can be expected. Global health emergencies, however, can disrupt research laboratories, underscoring the need to develop diverse skills that enhance lab health and resilience (Maestre 2019; Rillig et al. 2020). The recent pandemic, by forcing the shutdown of scientific research labs around the globe, may affect the training of many young authors, whose physical interaction with supervisors and mentors may be impeded. These circumstances prompted the synthesis of a paper that can serve as a handbook for publishing scientific papers and engaging in the peer review process, especially during the current pandemic, which may be a good time for young researchers to start writing. This is a personal reflection based on my roughly ten years of experience in scientific publishing, as a young Footnote 2 Associate Editor-in-Chief of JFR and an editor at various ranks in several other journals, Footnote 3 a reviewer of approximately 550 articles for about 80 international peer-reviewed journals, and an author of some 135 SCI articles, 65 of them as first author and the vast majority published in Q1 and Q2 journals. Footnote 4 I hope that this piece will transfer to you knowledge that would otherwise require physical interaction with your teachers, supervisors, or mentors, thus facilitating the development of your skills. Abraham Lincoln, the 16th U.S. president and the man who led the country through the American Civil War, once said, “give me six hours to chop down a tree and I will spend the first four sharpening the axe”, meaning that we should devote resources to mastering the tools needed to achieve our goals.

Before starting the drafting process

Select your target journal.

Have you already completed your manuscript and are now searching for candidate journals (Hites 2021)? Yellow card. While this is a common practice, I believe it is less appropriate. In my experience, it is more efficient, in terms of both time and effort, to target a journal well before you start writing your paper (Fig. 1). Why? The Introduction Footnote 5 depends heavily on the journal; so do the Discussion (e.g. of the results, if it is an original research article) and even the abstract and conclusions. For example, the opening paragraph might relate to the aims and scope of the journal, e.g. framing the environmental issue for an environmental journal or the forestry relevance for a forestry journal. Likewise, the last paragraph of the Introduction might explain the significance of the study for the journal's broader readership, and the emphasis given to different aspects of the study in the paragraphs in between should also reflect the journal's aims and scope. Hence, zeroing in on a specific journal a priori and fitting the Introduction to its aims and scope can save you the time and effort of re-editing your Introduction later to match the journal. A friendly, but I believe important, suggestion: avoid sending pre-submission inquiries to journal editors, unless the author guidelines state otherwise for specific article types, because such inquiries can be problematic, counterproductive, and unnecessary (Levesque 2019). Selecting your target journal in advance can also facilitate collaborations, especially for article types other than original research articles (e.g. literature reviews (Sayer 2018)), where different people may be assigned to write specific sections or topics.
By selecting the target journal a priori, you can comply with the journal's guidelines from the beginning by setting size limits for each section. This saves your colleagues time by avoiding unnecessary cuts to the text later on; time is precious, and nobody wants to spend it unnecessarily. I have seen cases where the lead authors of collaborative review articles did not consider this from the beginning, letting independently working contributors write their sections without any guidance on length. That approach leads to an excessively long manuscript that most journals (at least leading ones) would barely consider. It can also be an embarrassing outcome for some collaborators, as more and more edits are needed (more time), with the paper finally published in a low-profile journal. I don't think that is what anyone wants. Considering all of this from the early stages shows professionalism, your colleagues will appreciate it, and it may help you sustain long-term collaborations.

Figure 1. Tips to consider prior to drafting the paper

When selecting a journal, if you have already prepared your degree thesis, revisit its reference list: the most-cited journal there is perhaps where you should submit your article. If you haven't yet worked on your thesis, your supervisor can direct you, because senior researchers often select journals based on broad experience, including previous interactions with journals as authors, reviewers, and editors.

Have a look at the journal's publication-speed indices, which you can usually find on its website. Remember, however, that the numbers shown are usually arithmetic means over the articles published by the journal within a specific time window. Arithmetic means are sensitive to skewness in the data distribution, i.e. asymmetry around the mean producing a left or right tail, and the metrics you see reflect only average performance, which does not mean much in the absence of some measure of dispersion such as the standard deviation. If we treat the articles published within a given time window as a sample of observations, skewness may be more pronounced for journals publishing relatively few papers per year. The central limit theorem (CLT) states that as the sample size increases, the sampling distribution of the arithmetic mean approaches a Gaussian (bell-shaped) distribution, which suggests that the speed metrics of journals publishing many papers in a given window will be less prone to skew. Hence, some papers published even in journals offering rapid peer review can experience a considerably delayed review; these cases simply lie a few standard deviations from the mean.
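
The CLT argument can be sketched numerically. The simulation below is illustrative only: review times are drawn from an assumed lognormal (right-skewed) distribution, and the journal sizes and distribution parameters are invented.

```python
# Year-to-year variability of mean review time for a small vs. a large journal.
# Review times come from a hypothetical right-skewed (lognormal) distribution.
import random
from statistics import mean, stdev

random.seed(42)

def review_time_days():
    # hypothetical skewed review time: median ~90 days, long right tail
    return random.lognormvariate(4.5, 0.5)

def yearly_means(papers_per_year, n_years=500):
    """Mean review time observed in each of n_years simulated years."""
    return [mean(review_time_days() for _ in range(papers_per_year))
            for _ in range(n_years)]

small = yearly_means(20)    # journal publishing ~20 papers a year
large = yearly_means(500)   # journal publishing ~500 papers a year

# Per the CLT, a mean over many papers fluctuates far less:
print(f"spread of yearly means, small journal: {stdev(small):.1f} days")
print(f"spread of yearly means, large journal: {stdev(large):.1f} days")
```

The small journal's advertised "average review time" can swing by weeks from year to year, while the large journal's average is comparatively stable.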

If you cannot find any publication-speed metrics, you can sample several recent papers from the journal's archives and note when each was submitted and when it was finally accepted. This gives you a picture of how fast the peer review might be, if review time concerns you. If you are a more experienced author who publishes many papers per year, you should expect the review to be considerably prolonged for some of your papers. Suppose an author submits approximately 30 papers in a year (a moderate sample size under an assumption of normality): the review durations are likely to follow a roughly Gaussian distribution, with some papers reviewed within a very short time (left tail) and some taking far longer (right tail). The entire process is highly variable and driven by probabilities, so the sooner you accept this, the sooner you may free yourself from unnecessary anxiety.
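
The date-sampling tip above is easy to automate. A minimal sketch follows; all the dates are invented for illustration, not taken from any journal.

```python
# Estimate review speed from the "Received"/"Accepted" dates of a few
# recent papers. All dates below are hypothetical.
from datetime import date
from statistics import mean, median

samples = [  # (received, accepted)
    (date(2021, 1, 10), date(2021, 4, 2)),
    (date(2021, 2, 3),  date(2021, 5, 20)),
    (date(2021, 2, 25), date(2021, 11, 8)),  # right-tail outlier
    (date(2021, 3, 14), date(2021, 6, 1)),
    (date(2021, 4, 6),  date(2021, 7, 19)),
]

days = [(accepted - received).days for received, accepted in samples]

print(f"median turnaround: {median(days)} days")    # 104: robust to the outlier
print(f"mean turnaround:   {mean(days):.0f} days")  # 125: pulled up by it
```

Reporting the median alongside the mean guards against exactly the right-tail cases discussed above.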

As a final tip, while you are searching for the right journal and have the information fresh, write down at least 2–3 additional journals, perhaps ranked by preference, because you may need them. Selecting a thematic group of journals can also come in handy. For example, a paper reporting novel findings about real-world interactive effects of microplastics and antibiotics on biota could be published in various journals, such as Journal of Hazardous Materials, Science of the Total Environment, Chemosphere, Environmental Pollution, and Environmental Sciences Europe. Nowadays there is considerable overlap between journals, and this can be seen as an opportunity to select a thematic group of journals for potential successive submissions.

Avoid predatory journals

While selecting a journal, be careful to avoid predatory journals (Clark and Thompson 2017; Pourret et al. 2020; Qehaja 2020; Sonne et al. 2020; Macháček and Srholec 2021). Predatory means “inclined or intended to injure or exploit others for personal gain or profit” (Merriam-Webster Inc.), and authorship of a paper published in a predatory journal may harm your reputation and career (Clark and Thompson 2017). For an experienced author it is easy to recognize immediately that an invitation to submit articles comes from a predatory journal or is a scam, because reputable journals and publishers send standard formal invitations through their online submission systems (e.g. Editorial Manager for JFR), although there can be an initial informal contact by an academic editor or journal staff. Invitations to submit articles that are sent from non-institutional email addresses, e.g. gmail, hotmail, yahoo, or outlook, should not be trusted; most publishers also discourage the use of non-institutional email addresses in the submission/peer-review system. In any case, if you are in doubt, seek advice from your supervisor or experienced colleagues.

Familiarize yourself with the selected journal

Let’s assume you’ve selected a journal. Now what? If you are unfamiliar with the journal as an author, reader, or reviewer, I suggest exploring its archives, with emphasis on articles published within the last two years. Do you still believe your article would fit well into the journal's collection? Will your article be of similar or higher quality than papers published on a similar subject? If there are other papers on the exact subject, does yours provide a sufficient scientific advance over them? If your answer to these questions is negative, then perhaps you should find another journal. Hence, as should now be clear, this stage should come before selecting a target journal if you are unfamiliar with it or do not regularly follow its publications as a reader. While exploring the journal's articles, pay attention to the quality of presentation, which is an important indicator of an article's quality (Sedlak 2015). Matching or exceeding that quality of presentation, including the way display elements are designed, will likely increase the odds of getting your paper published in your target journal. Above all, it shows professionalism and proficiency; it's the small details that matter, right?

Once you have selected the journal, go through the author guidelines carefully, right away. It is an important step that helps you use your time more effectively. For example, each journal has its own policy on article length, although some have none; this matters because you may find you prefer a journal that allows articles more space. Ecotoxicology and Environmental Safety states that “regular research articles must not exceed 8,000 words. Word limit here is for text only. In principle the number of tables and figures should not collectively exceed seven”, and requires the editors' agreement should authors want to exceed these limits (author guidelines Footnote 6 ; accessed 13 July 2021). By contrast, Environmental Science & Pollution Research states “please ensure that the length of your paper is in harmony with your research area and with the science presented”, imposing no such limits (author guidelines Footnote 7 ; accessed 13 July 2021). Similarly, the editors of Environment International believe that “no single format can accommodate all useful contributions” and set no size limits on original research articles (author guidelines Footnote 8 ; accessed 13 July 2021). There are more such journals. Most journals nowadays also allow online-only supplementary materials of any size, but it is important to know all of this in advance so you can prepare your paper accordingly.

“Rules are useful, but the understanding of the reason on which a rule is based is better.” Thomas Arthur Rickard, author of a 1908 book (Rickard and Gayley 1908).

With this quote in mind, I suggest that you not go through the author guidelines mechanically. Focus, try to understand them well, and ask yourself why each rule exists. Rules are important for maintaining a journal's standards, and failing to comply with the author guidelines can lead to desk rejection of a paper (i.e. rejection by the editors at the initial screening), although for less important requirements, such as line spacing, margin widths, and bibliographic style, the paper may simply be returned to you for further editing and re-uploading (Lang 2020; Lowry et al. 2020). I should also emphasize that authors are not the only ones responsible for compliance: journals are responsible for providing clear, explicitly explained guidelines that also give the reasons behind the ‘rules’, yet guidelines are often unclear (Lang 2020). For example, a beginning author may be confused by the guidelines of Environmental Research, which state “tables should be separate from the manuscript text, and can be uploaded individually or consolidated into a single file” and “tables can be placed either next to the relevant text in the article, or on separate page(s)” (author guidelines, Footnote 9 accessed 13 July 2021). Someone experienced with the journal, however, would know that the editors are fine with either choice. If you believe some rules are unclear, contact the editorial office for clarification; avoid contacting academic editors about technical issues.

Be aware of dubious research practices

Emerging from the need to reduce dubious research practices, new initiatives have appeared in several countries aiming to educate researchers about responsible conduct of research, research misconduct, data handling, rules of collaborative research, conflicts of interest, and communicating information, among other topics. A good example is the founding of Japan's Association for the Promotion of Research Integrity (APRIN) on 1 April 2016, which provides important educational materials that are also used to train researchers at governmental institutions across the country ( https://www.aprin.or.jp/en ). Responsible conduct of scientific research encompasses (1) correct conduct of the research itself, (2) appropriate treatment of research subjects (e.g. humans and other animals), and (3) “accountability to society that supports research” (APRIN educational materials, updated 3 July 2017). Research progress and scientific knowledge can significantly impact societies and drive the development of humanity (Iaccarino 2001). Society's trust in science also depends on the ethical conduct of research, and research misconduct can harm the mutualistic relationship between science and society. Hence, it is of utmost importance to understand what can harm science and society when preparing your paper. An ethical compass can also be critical in times of crisis, as in the COVID-19 pandemic (Xafis et al. 2020; Maccaro et al. 2021).

Various practices are considered inappropriate in terms of research ethics, Footnote 10 such as plagiarism, data fabrication or falsification, salami slicing, duplicate publication, and ghost authorship, all of which threaten scientific integrity (Rawat and Meena 2014). I recommend having these on your radar before you start drafting your paper, to save otherwise lost time and to protect your reputation and career, because fraudulent research can lead to social, legal, and financial consequences (Resnik 2014; Eungoo and Hwang 2020). While I summarize the major issues in this section, you can read more about these and other matters of integrity in scholarly research and publication on the websites of the Committee on Publication Ethics (COPE; https://publicationethics.org/ ) and the U.S. Office of Research Integrity (ORI; https://ori.hhs.gov/ ).

Plagiarism occurs when authors copy, or only slightly edit, text from manuscripts written by others or by themselves (so-called self-plagiarism), often without even citing the source. The phenomenon has a long history (Duggan 2007), appears more frequently in papers by authors whose native language is not English (Higgins et al. 2016), and is considered theft or misappropriation of intellectual property (Kumar et al. 2014). Plagiarism can result in rejection of your paper, or in retraction if revealed after publication, as well as your suspension or ban from the journal (Das and Panjabi 2011), a situation you don't want to flirt with. Knowing, and thus avoiding, plagiarism during your writing is important (Gerding 2012; Kumar et al. 2014). Legitimate scientific journals screen your paper for plagiarism, similarity, and potential duplicate publication upon submission, and your paper will be desk rejected if found to be problematic. Don't assume your paper might be lucky enough to escape the check: at many legitimate journals no operator is needed, the submission system runs the check automatically, and editors simply see the result attached to your submission. On the other hand, being well informed can also protect you from potentially incorrect rejections. For example, a system widely used for similarity screening by numerous journals from various publishers is iThenticate (Turnitin, LLC; https://www.ithenticate.com/ ). This software has important limitations, however, such as no proper handling of the Materials and Methods section, no sub-analysis of the various sections, and no exclusion of title pages, author affiliations, funding and conflict-of-interest statements, and acknowledgements (Higgins et al. 2016). These can inflate the similarity value, the single score that the software provides.
If an editor then fails to screen the report carefully, the paper may be incorrectly rejected for high similarity. Furthermore, the software provides a score of similarity, not of plagiarism (e.g. the similarity index of this submission scored 3%). While plagiarism implies similarity, similarity does not necessarily mean plagiarism. If your paper is rejected by an editor who accuses you of plagiarism while you are sure it contains none, it is a serious issue, because the accusation implies unethical practice on your part. In such a case, you may write a detailed letter to the journal office, for example requesting that the journal examine the issue in detail and, if you are right, rescind the editor's decision, because such an incorrect accusation in the journal's records may harm your reputation and career in ways you cannot imagine or expect.
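
To see why a raw similarity score is not a plagiarism verdict, consider a toy overlap measure. This is a simplified illustration, not iThenticate's actual algorithm: two honestly written Methods sentences can score very high simply because standard phrasing repeats.

```python
# Toy similarity: overlap of word 3-grams between two texts.
# Illustrative only -- NOT how iThenticate computes its index.

def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b):
    """Fraction of a's word 3-grams that also occur in b."""
    return len(ngrams(a) & ngrams(b)) / max(len(ngrams(a)), 1)

methods_a = "samples were dried at 70 degrees for 48 hours before weighing"
methods_b = "leaves were dried at 70 degrees for 48 hours before analysis"

# Routine methods phrasing overlaps heavily with no plagiarism involved:
print(f"similarity: {similarity(methods_a, methods_b):.0%}")  # 78%
```

A high score on such a measure flags text for human review; only a human can judge whether the overlap is legitimate reuse of standard phrasing or actual plagiarism.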

Don't fabricate or falsify data or results. The ORI defines fabrication as “making up data or results and recording or reporting them” and falsification as “manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record” ( https://ori.hhs.gov/definition-research-misconduct ). In short, based on the ORI's definitions, do not (1) make up data or results and record or report them, (2) manipulate research materials, equipment, or processes, or (3) edit or omit data or results in ways that lead to inaccurate representation. You may want to read further about these issues (Resnik 2014; Kingori and Gerrets 2016; Eungoo and Hwang 2020), but you can always turn to your supervisor or other senior colleagues for advice if you have doubts about whether a practice may constitute data fabrication or falsification.

Don't engage in salami or duplicate publishing. Salami slicing, or the ‘least publishable unit’, is the practice of trying to publish multiple papers, each containing the least possible data or information from a single piece of research. Duplicate publication is the practice of publishing the same data or results in more than one paper, often by submitting them to different journals at around the same time or at different times. Both practices are inappropriate and can seriously harm one's career. Duplicate publishing is easier to define and identify; salami slicing is more difficult to define and hard to identify (Broad 1981; Editorial 2005; Smolčić 2013; Ding et al. 2020). Most journals lack specific definitions of salami slicing, and there are cases where it is allowed to a small extent (but how small is small?). For example, in highly integrated studies involving independent evaluations over multiple years and/or different laboratories studying distinct data sets, it may be reasonable to publish two or more papers with different data sets, especially if one does not depend on the other. Making such judgments involves a fine line and requires experience, so it is always good to discuss them with your supervisors and senior coauthors in advance.

Consider and discuss authorship in advance. Any person who has made important contributions to the scholarly content of the study or paper should be included in the author list, while any person who has not should be added to the acknowledgements (if they agree) but not to the author list (Schofferman et al. 2015). Before inviting colleagues to coauthor, read the journal's guidelines on authorship. Bear in mind that securing the study's funding, with no further intellectual contribution to the study or paper, does not justify authorship, even for your supervisor. Furthermore, if you are a student, inviting an expert in the field to coauthor your paper may not be a good idea unless your supervisor has agreed or instructed you to do so. If you invite others to be coauthors early in the process, clearly explain what contribution you expect from them, the likely timelines (when you expect their feedback), and whether you have your supervisor's agreement. Lead scientists are extremely busy, and the last thing they want is to be drawn into guiding you through how to write a paper (that is your supervisor's responsibility). To facilitate their work and maximize the benefit of their involvement, send them a tidy, clean draft after receiving your supervisor's approval. Likewise, it is good to discuss the process with your supervisor from the very beginning; however, do the best work you can before sending him or her a draft. Avoid sending a draft in the expectation that your supervisor will write your paper; doing so may signal unprofessionalism, lack of motivation, incompetence, irresponsibility, and a lack of desire to develop professionally.

When assembling the author list, I suggest visiting the author guidelines of your target journal, which will likely state specific requirements regarding authorship. For example, Global Change Biology instructs (author guidelines, Footnote 11 accessed 10 August 2021): “The list of authors should accurately illustrate who contributed to the work and how. All those listed as authors should qualify for authorship according to the following criteria: (1) Have made substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data; (2) Been involved in drafting the manuscript or revising it critically for important intellectual content; (3) Given final approval of the version to be published. Each author should have participated sufficiently in the work to take public responsibility for appropriate portions of the content; (4) Agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved; and (5) Contributions from anyone who does not meet the criteria for authorship should be listed, with permission from the contributor, in an Acknowledgments section (for example, to recognize contributions from people who provided technical help, collation of data, writing assistance, acquisition of funding, or a department chairperson who provided general support). Prior to submitting the article all authors should agree on the order in which their names will be listed in the manuscript.” You may also read the author guidelines of other journals (especially from different publishers) to get a fuller picture. For example, a plethora of journals mandate the inclusion of a statement explaining each author's contributions, while many recommend or require the use of the author roles listed by their publishers (e.g. Elsevier's CRediT roles Footnote 12 ).
You may also want to look at the guidelines developed by the International Committee of Medical Journal Editors (ICMJE), well known for its authorship criteria in the sciences ( http://www.icmje.org/ ), e.g. its Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals (December 2019; http://www.icmje.org/icmje-recommendations.pdf ). I suggest doing this so you are well informed before discussing authorship with others.

Start drafting the paper

Now that you’ve considered the preceding points, it’s time to start drafting your paper (Fig.  2 ).

figure 2

Tips to consider when drafting the paper

Where should I start drafting my paper?

Have a plan. You may consider the following protocol:

Leave the conclusions section and the abstract for later, after drafting all the other sections of the main manuscript.

If you have a title in mind, write it down first. I’m confident that you’ll have more ideas about candidate titles and can thus improve or change it several times during your writing. The title is the first thing people will read, including editors and yourself each time you open your document to work (at least in Windows’ file manager). Hence, it’s important to have an attractive title, but without deviating considerably from your data. Why should people read your paper among an extremely abundant and continuously growing literature? You’ll often find yourself listing a couple of candidate titles. Keep them there till the very end. If you’ve completed drafting the paper and are still unsure which to use, you may consider asking for the opinion of a colleague.

Then, you may draft a first ‘loose’ version of the Introduction. While working on the other sections, go back to the Introduction each time you visit your manuscript for potential improvements. You’ll almost always find something to edit. This will help you maximize the quality of the Introduction, as you’ll also gain a better comprehension of your own data and results, especially in highly integrated original research articles with a broad data set. While discussing your results and searching the literature, you may generate new ideas that can help increase the significance and expand the scope of your Introduction.

Continue with Materials and Methods (the section’s name may differ in some journals). Explain the entire methodology in sufficient detail that anyone can repeat the study with your exact protocol/methodology. However, leave the sample and data collection and statistics sub-sections for later.

Then, continue with the data analyses. For each data set you analyze statistically, go back and fill in the remaining sub-sections of Materials and Methods, describing sample and data collection and statistics. Then, create the display elements and write up the corresponding results in the Results section. Continue with the next data set.

When you finish, write the discussion.

It is now time to write the conclusions. Do not repeat the results in the conclusions. Instead, provide a clear take-home message, one you would like readers to remember. This is a good place to clarify how your study advances the current scientific understanding of the subject. Try to put your findings in a broader context. Why are they so important? However, stick to your data and write only what your data can support.

Complete the writing by drafting the abstract. The next component someone reads after seeing the title and becoming curious about your study is the abstract. Try to write an efficient abstract so as to attract the reader to explore the full paper. Don’t repeat the conclusions you wrote in the main paper. Be different. Write one or two brief sentences of background to set the scene; this may be unnecessary in some cases, e.g. in very specialized journals (know your readership). Make it clear why you conducted this research and why now. Briefly explain the methodology (what you did) in one or two sentences. Follow up with the most important results in terms of novelty, occupying 3–4 sentences. Then, continue with some concise discussion of the results to help the readers understand why your results are important and how they add to the existing scientific base (2–3 short sentences). This is a general structure based on my own perception, personal preference, and experience, and is only indicative. However, keep the parts in harmony. As a reviewer and editor, I often come across abstracts whose background and/or methodological information occupies 50–75% of the total length, leaving relatively little space for discussing the results; this is less effective. All journals have specific requirements for the permitted length of the abstract: for original research articles it is generally restricted to below 300–350 words, and for some journals even below 150 words. Furthermore, several journals require a specific abstract structure. Therefore, read the journal’s guidelines regarding the abstract before starting to draft it.

If you’ve not done it yet, select keywords. Most journals permit listing up to five keywords. Try to include the maximum allowed number, because efficient use of keywords can help your paper be found by more people. I suggest avoiding keywords that already appear in the title and abstract, because the main search engines for scientific literature index the title and abstract; synonyms or alternative names of the same terms can be used instead. Again, publication practices vary so much that there is no standard protocol. Hence, there are journals enforcing their own requirements for keywords. For example, some journals require the use of 5–10 keywords selected from the browser list of the US National Library of Medicine's Medical Subject Headings (MeSH; www.nlm.nih.gov/mesh ) (e.g. Plant, Cell & Environment). Therefore, check the journal’s author guidelines regarding keywords.

These suggestions mostly concern original research articles, the most common article type, and are based on my own personal experiences and professional development (my writing practice has changed over the years with increasing experience, and so will yours). These tips may serve as a good starting point, and you can expect to develop your own writing “protocol” over time. Consider also that there are journals nowadays that publish research data (e.g. MDPI’s Data and Elsevier’s Data in Brief) as well as protocols and methodologies (Elsevier’s MethodsX and MDPI’s Methods and Protocols). It may be good to consider these supplementary options from the beginning to facilitate reproducibility and enhance the visibility of your data, protocols, and methodologies. Many journals ask during submission whether you are submitting such pieces alongside your manuscript.

Control self-citations

Avoid heavily citing your own and your coauthors’ papers. Excessive self-citation is a phenomenon that occurs widely in the scientific literature, and early-career scientists tend to have greater self-citation rates due to the so-called ‘youth effect’, i.e. their published papers have had little time to receive many citations from other scientists (Van Noorden and Singh Chawla 2019 ). Excessive self-citing isn’t an issue for scientists at the very beginning of their career, e.g. university students of various academic degrees who are writing their first articles. However, you should be aware of the issue in order to avoid potential practices of unwarranted referencing by your coauthors. Articles that include your coauthor(s) in the authors’ list but do not add to the scholarly content of the article should not be cited. I opine that science shouldn’t be shaped within ‘wooden’ frames, but should be ‘plastic’ so as to provide higher degrees of freedom (flexibility) within certain limits. So should scientific publishing be. That is, there are cases where relatively more self-citations may be acceptable, such as when describing methodologies that you’ve developed or importantly modified, and/or if you’re working in an emerging and rapidly advancing research area where the self-cited papers supporting the scholarly content of your new article cannot be replaced by non-self-references.

Cite others’ work

Authors cite references not only when they refer to data and findings, but also when it comes to concepts or ideas reported in, or developed based on, other papers. If you’ve developed an idea based on a paper you read, give credit to those authors by citing their work to show that your idea builds on theirs. If their work has helped you in any way in your own paper, give credit to it. Don’t seek to cite only references from your own colleagues, teachers, or other scientists in your own country. Do cite international references based on proper literature survey, scientific merit, relevance, scholarly contribution to your paper, and currency. Massive citation of papers from your own country would add a nuance of rather local interest and could lead to desk rejection of your paper due to lack of international interest. Cite references as soon as you draw on them so you don’t forget later.

Disclose interests

If you or your coauthor(s) have any conflicts of interest, these should be disclosed. Add a relevant statement in your manuscript, usually placed after the main text and before your reference list. To this end, ask all your coauthors to disclose potential conflicts of interest. By definition, a conflict of interest is “a conflict between the private interests and the official responsibilities of a person in a position of trust” (Merriam-Webster Inc.). What may constitute a conflict of interest is usually explained in the author guidelines, but you may find further information on the websites of other bodies, such as COPE, the World Association of Medical Editors, and the International Committee of Medical Journal Editors; you may also want to read some relevant publications to develop a more comprehensive understanding of the issue (Klein and Glick 2008 ; Ruff 2015 ; Dhillon 2021 ). Acknowledge any person, body, or institution that has helped you by any means in the acknowledgements section, which is usually placed on the title page or at the end of the main text (before the reference list), as indicated in the journal’s author guidelines. Don’t add thanks to reviewers and editors, however, unless they’ve helped you generate new ideas or perform new data analyses and thus led to considerably different results and conclusions. It’s the job of editors to handle your paper in the best possible way and the responsibility of reviewers to conduct a high-quality review (if they have accepted to review). Both editors and reviewers know that you’re thankful if you’ve improved your paper based on their comments and suggestions. I’ve come across new submissions whose acknowledgements thank reviewers and editors for their “helpful comments and suggestions” in a kind and polite manner. It’s understood that you appreciate their efforts; however, you don’t yet know whether they’ll provide helpful comments and suggestions.
Moreover, personality traits and behavior, and thus the perception of others’ behavior, vary greatly among people. Hence, your practice may be perceived by someone as an effort to manipulate reviewers’ and editors’ judgment. Others may consider it flattery, which may not be perceived well; e.g. you may recall the famous quote “it is better to fall among crows than flatterers; for those devour only the dead—these the living” by Antisthenes (Ἀντισθένης; ≈ 446–366 BC), a Greek philosopher and Socrates’ pupil. For all these reasons, unless there are specific needs following review and revision (a rare case), refrain from adding thanks to reviewers or editors in your manuscript. Some journals also clearly indicate that “thanks to anonymous reviewers are not appropriate” (Global Change Biology, author guidelines, Footnote 13 accessed on 10 August, 2021).

Be ready to provide primary/raw data

Be prepared to provide any original data that might be requested at some point during peer review. This may be the case for articles not reporting primary data (e.g. reporting transformed data). If you’re asked to provide them in the framework of a revision of your paper and instead you withdraw your paper, it’s a bad sign and will raise concerns about ethical conduct. This is a situation that nobody wants to face in her/his career. Primary data facilitate cumulative science, and many journals mandate providing them along with the article. Even if a journal doesn’t mandate making primary data available, consider providing them (at least arithmetic means with a measure of dispersion and sample size) in supplementary materials if possible. Primary data have multiple roles and can help save resources; e.g. other authors may optimize their experimental design based on your data.

Do not succumb to P hacking

The value of probability ( P ) is obtained from statistical testing of a null hypothesis (H 0 ) and, in simple terms, indicates the probability of obtaining results as extreme as, or more extreme than, the actual observations, given that H 0 is true. A survey of the biological literature would suggest that the results and conclusions of a tremendous portion of original research papers are based on P values. The P value, however, is also one of the most discussed issues in the modern statistical and biological literature due to various misconceptions regarding its use, meaning, and inference. You may want to read some key papers on the topic so as to become knowledgeable and avoid such misconceptions (Berger and Delampady 1987 ; Senn 2001 ; Connor 2004 ; Cumming et al. 2007 ; Goodman 2008 ; Lew 2012 ; Nuzzo 2014 ; Amrhein et al. 2019 ; Agathokleous and Saitanis 2020 ). The commonly used level of significance is alpha (α) = 0.05, although recent work suggests that testing at significance levels of α = 0.005 or 0.001 would decrease the level of non-reproducibility of scientific research by a factor of ≥ 5 (Johnson 2013 ).

P hacking is the phenomenon of mismatch between reported and actual P values (Nuzzo 2014 ; Veresoglou 2015 ), and one that I have encountered numerous times as editor and reviewer. Studies with multi-factorial designs (e.g. including three- or four-way analyses of variance) may be more prone to P hacking, especially if using tests for multiple comparisons among means or multiple t tests without adjusting the P value. If one uses the common α level of 0.05 (nominal) for each comparison, the rate of false rejection of a true H 0 (type I error) is inflated, a phenomenon termed ‘experiment-wise error rate’ (Iglewicz 2014 ). While there are other techniques to control the experiment-wise error rate, the Bonferroni correction has been widely used due to its ease, although it’s conservative, assuming the maximum error rate (Freund et al. 2010 ; Armstrong 2014 ; Veresoglou 2015 ). Let’s say you have 15 comparisons. The probability that one or more H 0 is falsely rejected equals 1 − (1 − α)^n, where α = 0.05 and n = 15 in this case. Solving the equation, the probability is 53.7%. If n were 50, the probability would become as high as 92.3%. Hence, it’s quite clear that the probability of incorrectly rejecting H 0 can be alarmingly high in studies with large numbers of comparisons. In such cases, failure to control the experiment-wise error can lead to incorrect results and conclusions. Even where the number of comparisons/tests isn’t so high (e.g. 22.6% for 5 tests/comparisons), significant differences at α = 0.05 may be incorrectly presented if no control of the experiment-wise error is applied (Veresoglou 2015 ). Using the Bonferroni correction, one can easily control the experiment-wise error by dividing the α level by the number of tests/comparisons (α/n); the closely related Šidák correction instead uses α = 1 − (1 − 0.05)^(1/n), where n is the number of tests/comparisons. In simple words, if you have 15 tests/comparisons, you use a Bonferroni-corrected level of significance of α = 0.0033 (0.05/15).
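The experiment-wise error arithmetic above can be sketched in a few lines of Python (the function names are my own, for illustration):

```python
def familywise_error_rate(alpha: float, n: int) -> float:
    """Probability of falsely rejecting at least one true H0
    across n independent tests at nominal level alpha."""
    return 1 - (1 - alpha) ** n

def bonferroni_alpha(alpha: float, n: int) -> float:
    """Bonferroni-corrected per-test significance level: alpha / n."""
    return alpha / n

def sidak_alpha(alpha: float, n: int) -> float:
    """Sidak-corrected per-test level: 1 - (1 - alpha)**(1/n)."""
    return 1 - (1 - alpha) ** (1 / n)

print(familywise_error_rate(0.05, 15))  # ~0.537, i.e. 53.7%
print(familywise_error_rate(0.05, 50))  # ~0.923, i.e. 92.3%
print(bonferroni_alpha(0.05, 15))       # ~0.0033
```

With 15 comparisons the family-wise error rate reaches about 53.7%, which is why the per-test level must be tightened to roughly 0.0033; the Šidák value is only marginally less strict than the Bonferroni one.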
It’s good to be aware of these issues because the current peer-review system may not permit their identification during peer review (Wehrden et al. 2015 ), and to avoid adding to the existing problem that most published research findings are likely false (Ioannidis 2005 ). As an additional suggestion, don’t keep running tests until you obtain the P values you want or the differences you expect (Masicampo and Lalande 2012 ).

Avoid effect size inference if you do not estimate effect size

An important and often-encountered issue is claiming a size of difference in the absence of relevant mathematical/statistical support. For example, a common misconception I frequently encounter in the biological literature (and as a reviewer) is the claim of relative differences based on differences in P values. For instance, an author may claim that chlorophyll content is a more sensitive biomarker of stress in some tested plants than superoxide dismutase because the difference in chlorophyll content between treatment and control groups was significant at P < 0.001 while the difference in superoxide dismutase was significant at P < 0.05. Another example is ranking the susceptibility or tolerance of organisms based on P values. Red flag. This is a serious misconception, as P values don’t support inference about the size/magnitude of differences, which would require mathematical estimation of effect sizes for comparing magnitudes (Connor 2004 ; Agathokleous and Saitanis 2020 ). There are various effect sizes and complementary reporting indices that can be used (Breaugh 2003 ; Kirk 2007 ; Nakagawa and Cuthill 2007 ; Durlak 2009 ; McGough and Faraone 2009 ; Berben et al. 2012 ; McCabe et al. 2012 ; Sullivan and Feinn 2012 ; Lakens 2013 ; Tomczak and Tomczak 2014 ); however, their analysis is beyond the scope of this article. Now that you’re aware of this issue, you can do a comprehensive reading of some key references cited herein and others if you’re interested in estimating or mastering effect sizes. There are also software tools that can help you make the estimations easily (see the freely available electronic supplementary materials in Agathokleous and Saitanis ( 2020 ) and also https://www.cem.org/effect-size-calculator ).
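As a minimal sketch of what effect-size estimation looks like, here is Cohen’s d, one of the simplest standardized effect sizes (the measurements below are hypothetical, for illustration only):

```python
import math

def cohens_d(group1: list, group2: list) -> float:
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    # Sample variances (denominator n - 1)
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical chlorophyll measurements for a treatment and a control group
treatment = [12.1, 13.4, 11.8, 12.9, 13.0]
control = [10.2, 10.9, 11.1, 10.5, 10.8]
print(round(cohens_d(treatment, control), 2))
```

It is the magnitude of d (not the P value) that supports statements about the size of a difference; conventional benchmarks treat d ≈ 0.2 as small, 0.5 as medium, and 0.8 as large (Sullivan and Feinn 2012 ).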

When you have a first draft ready (before submission)

Keep in mind that there is no perfect paper.

Now you have your first draft ready and are excited to submit as soon as possible, right? I don’t want to disappoint you, but it definitely isn’t ready yet. What you now have is the main base of the article upon which you’ll build (Fig.  3 ). Writing is art. It takes time and requires attention to the finest details. A painting is never perfect. Hence, keep in mind that your paper will never be perfect. If you critically examine any paper in any published journal, you’ll commonly find at least a couple of minor non-scholarly-content ‘errors’ in each. There is always space for improvement, and editors know this very well. Does this mean that you’ll be improving your paper indefinitely? Apparently not; a cost–benefit assessment is needed. You may want to stop working on small details when you realize that the time (and effort) you spend considerably outweighs the benefit your edits bring to the paper. For beginners this may be hard to define, but later on you’ll find that this point becomes clear. If you’ve no coauthor, don’t worry: if you’re ‘lucky’ enough to receive good review reports, you’ll still have the opportunity to apply further improvements based on reviewers’ suggestions. If you’ve coauthors, the most likely scenario is that you’ll thoroughly revise the paper based on your coauthors’ comments and suggestions.

figure 3

Tips to consider when you have a first draft ready

Have it checked by all coauthors – the importance of the corresponding author

Before sending the paper to coauthors, it may be beneficial to ask some senior members of your lab/team to provide feedback. Thank them and use their comments to potentially further improve the paper. Then, send the paper to all your coauthors; however, communicate with your supervisor first because he/she may instruct you otherwise. For example, your supervisor may want to work on your draft first in order to improve it considerably before sending it to the other coauthors. This is his/her responsibility if (s)he is a coauthor, and especially if (s)he is also the corresponding author, of your paper. If (s)he is the corresponding author, (s)he is also responsible for all matters of communication regarding the paper, including communicating with all authors, submitting the paper, interacting with the journal editors, responding to reviewers (but note that you’re encouraged to try preparing the response letter first for your training), and responding to readers of your paper following its publication. During the process of interacting with colleagues about your paper, remember to express your gratitude to all those who have contributed to it by any means.

Who will be the corresponding author is also an important matter, especially because it can have an important impact on someone’s career. Remember that journals find reviewers from the published literature, typically the corresponding authors. Furthermore, the corresponding author is the one who will interact with readers and the journal. Hence, being corresponding author of papers can enhance one’s reputation and international visibility, and you should examine the issue of the corresponding author early in the process of determining authorship. For example, if someone has never been designated corresponding author on papers, journals won’t invite him/her to review papers Footnote 14 and, thus, he/she would rarely become an academic editor of scientific journals. However, if you’re a student enrolled in a BSc or MSc degree, or a PhD student who doesn’t wish to continue a research or academic career, it may be more appropriate to indicate a senior coauthor as corresponding author who guarantees readiness to respond to any requests regarding the article for a relatively long time. If you’re an early-career researcher or academic (e.g. lecturer or assistant professor), I don’t see any reason justifying not being the corresponding author of your paper. As the lead author, nobody should know the paper’s content better than you. Hence, I opine that marking someone else as corresponding author might look as if you were unable to responsibly manage any arising matter. The issue of the corresponding author, however, should be examined with reference to the specific journal’s guidelines, considering that there are journals that don’t allow the designation of more than one corresponding author, such as the Journal of Hazardous Materials (author guidelines, Footnote 15 accessed on 10 August, 2021).

The cover letter

While waiting for feedback from colleagues and coauthors, use the time to prepare an excellent cover letter, which the vast majority of journals require upon submission. This is your chance to convince the editors that they should consider your paper for potential publication in the journal. Consider the following points:

When you have the article ready for submission and are preparing the cover letter, it’s time to visit the author guidelines once more. Check the requirements for the cover letter. Many journals have specific requirements about what to include or not include in the cover letter. Confirm that you comply with the guidelines.

With the current editorial practices, and the often multiple Editors-in-Chief and many Associate Editors, it’s okay to address the letter to the ‘editor’ in general. The editor who has handled a paper, however, is commonly indicated in the decision letter, especially if the paper is given a revision or is accepted. If you get a signed revision letter or the editor’s name is specified, it might be preferable to address the letter to that specific editor.

Mention the article’s title and authors, and confirm that this work, in part or in full, hasn’t been published and isn’t under consideration for publication elsewhere, in English or any other language. State that all authors have read the manuscript under submission and agreed to the submission (make sure you did this).

If you or some coauthors have a conflict of interest to declare, disclose it clearly in the cover letter. If you have none, say so in a concise statement.

State whether you’ve already uploaded your paper to a preprint server or a specific website. This is important to avoid incorrect rejection based on a misinterpreted or improperly checked similarity report.

If this is a resubmission of a previously rejected paper, clarify this (state the previous manuscript number and title) and explain why you are resubmitting (what you have changed) and whether the previous editor encouraged resubmission.

Briefly stress the scope, novelty, and impact of the paper (Lowry et al. 2020 ). Don’t copy the abstract or conclusions of your paper. Use this opportunity to communicate some additional information to the editors and convince them that they should consider publishing your paper. Be brief and direct, within one or two short paragraphs. Don’t write a pages-long cover letter. Keep it formal and signed, including all the information of the corresponding/submitting author.

Include suggested and, if needed, opposed reviewers (Grimm 2005 ) in the cover letter, along with their contact information, their institutional details, and a justification of why you propose or oppose them. Most journals require inputting this information in the submission system, but it’s good to also include it in your cover letter for the editor’s attention. Who to suggest? Definitely not your classmates, teachers, or relatives. You had better discuss this matter with all coauthors, if there are any, and have the consensus of all authors based on mutual understanding. It’s important to ask your coauthors because they may have reasons to suggest opposed reviewers, and recommending or opposing reviewers can facilitate publication of your paper (Grimm 2005 ). In general, candidate reviewers are people who have substantial experience in the paper's subject, as documented by peer-reviewed papers, and who aren’t close collaborators. In no way should you contact people asking them to review your paper if they’re approached by the journal; this may be perceived badly, especially if you haven’t even met them. The same applies to requesting review comments on your paper prior to submission, since peer review takes much time and effort.

Prepare the submission

When there is a consensus among all authors that the manuscript is ready for submission, there you go. Some points for consideration:

Register on the journal’s online submission system if you aren’t registered already. Use an email address that you routinely check, and add the journal to your safe senders list to avoid important messages ending up in your spam folder, which might hamper the peer review process.

Some journals request that you select a handling editor and/or editorial board member. If this is the case, check the journal’s website listing the editors. If you’re unfamiliar with the editors, do a survey to find the editors most closely related to the subject of your paper.

Prepare high-resolution files of the figures and tables for uploading, although many journals don’t require this upon first submission nowadays.

Unify the reference and citation style across the manuscript, and correct any errors. Many journals don’t mandate the use of a specific reference and citation style upon first submission, but they highlight the need for a consistent style. I strongly recommend the use of reference management software, among the many available options (e.g. Mendeley, EndNote, Zotero, MyBib, Qiqqa, etc.). This is particularly useful when you’re submitting to a journal requiring numbered references. It can protect you from errors introduced when managing references manually, especially when there is a heavy list of references and a lengthy main text, as is the case with critical literature reviews. Using reference management software also reduces the odds of introducing errors when revising the paper.

Confirm that you’ve added page numbers and continuous line numbers. While the former may not be required in the author guidelines of the target journal, it can facilitate reviewers and editors. Note also that many journals add line numbers to your submission when you upload your manuscript. As reviewer and editor, I have found myself disliking this automated insertion in some journals because the line numbers restart on each page and each number doesn’t necessarily indicate a specific line (not aligned with the text). This makes the job of reviewers, editors, and even authors (when addressing reviewers’ comments) more difficult. Therefore, I suggest paying some attention to the line numbers when you check the submission pdf for approval. What I sometimes do is add continuous line numbers, even if the journal adds the aforementioned type of line numbers, and confirm that the line numbers I’ve added are clearly visible (let the automated numbers be there too).

Confirm the theme fonts, font size, page margins, and line and paragraph spacing, although these are less important. The most important among them may be the line spacing; the most commonly used is 2.0 (double spacing).

Before approving the submission pdf, check it carefully. You may want to send it to your supervisor and other coauthors for an additional check, especially if you’re a beginner. Your senior colleagues may easily notice an error based on their experience.

Approve the submission.

After submission

Pursue aretê during the entire process.

You’ve now proudly approved the submission and, thus, your precious paper has been submitted. Good luck, although you’ll barely need it if your work is scientifically excellent and your paper of high quality. This will likely give you much satisfaction, since it may reflect the outcome of efforts that lasted for some years, spanning from designing your research to executing it and finally writing the paper. But this doesn’t mark the end (Fig.  4 ). Instead, you’ve just entered the publication arena. Congratulations. You should now be prepared for the ‘battle’, albeit a gentle battle without fighting. Remember that scientists should pursue aretê (αρετή in Greek; a general translation in English is virtue), i.e. ethical/moral excellence or supremacy (Yiaslas 2019 ).

figure 4

Tips to consider after the original submission

Keep all coauthors in the loop

Keep your coauthors posted about submission-related matters, and do this without delays; they have the right to know too. Don’t they? As soon as you’ve approved the pdf and completed the submission, write to your coauthors, even if many journals send automated submission confirmations to all authors. Let them know that you’ve submitted the manuscript and thank them once more for all their contributions. As a beginner you may not realize it, but even small additions/edits to the manuscript can make a huge difference in the outcome of the peer review. Attach also the final submitted files and the submission pdf for their records. Be aware that academic scientists have evaluations and may need such proof. Independently of this, all coauthors have the right to have all the materials at any stage of the process. When you receive the decision letter, forward it to them, and don’t forget to attach any documents uploaded by reviewers and editors.

How long should I wait for the first decision?

You may receive a decision on your manuscript within anything from a few hours to several months. There are numerous factors that can affect the peer-review time. For example, it depends on the journal to which you submit. If you submit to broad multidisciplinary publications (e.g. Science, Nature) with low acceptance rates, the chances of a fast desk rejection within hours are high. This may also be the case for top specialty journals (e.g. specialized in Forestry, Environmental Science, Ecology, etc.) with relatively low acceptance rates, as the academic editors often reject papers right away based on their own publishing agendas.

Let’s say your paper is plagiarism-free, has quite a low similarity score, isn’t a duplicate submission, doesn’t contain unethical practices or striking issues with statistics, amongst others, and is excellent overall. However, it’s still desk rejected. Don’t be discouraged. The best-laid plans of mice and men oft go astray Footnote 16 (no matter how well you’re prepared, the outcome may not be what you expected, at least temporarily). This is how the publishing business works. Remember that there are thousands of scientific journals nowadays, and one journal’s loss is another journal’s gain. Journals have their own agendas, and your paper may simply not align with their current publication policies. In journals where the publishing space is much smaller than the number of submissions in a year, editors may prefer to publish content that is considerably different from what they’ve recently published or plan to publish, or they may simply think that your paper isn’t among the most competitive of those they have in hand. When these factors are at play, you may hear back within hours. If your paper is rejected based on issues with its scholarly content identified by editors, you’ll still hear back within a few days. Many journals give editors 3 to 7 days to act on your manuscript. However, there are often 2−3 editors sequentially assigned to your paper, and this timeframe may apply to each of them. Therefore, if your paper is rejected by editors without external peer review, you’ll commonly be informed within a few hours to about 3 weeks. If you check the submission system after several weeks and see that the paper is still with the editor, you may want to contact the editorial office of the journal (not the academic editors) to politely ask what the status of your paper is. Remember that most journals are businesses and you’re the customer. They make a profit thanks to you. Therefore, they should responsibly address all your concerns.

If your paper makes it to external peer review, congrats. You’ve convinced the editor that your paper merits consideration for publication in the journal, and made it to the next step. Note that passing the ‘guard’ of editors is often a difficult point in leading journals with relatively low acceptance rates. Reviewers are usually given 10 to 21 days to submit their review reports, although there are still journals that allow as much as about two months. So, it’s good to explore a journal’s publication-speed indices when you’re in the process of selecting a journal.

If your paper was sent to reviewers (under review) in time and you’re waiting for the decision, be patient. I advise against contacting the journal if 3 months haven’t passed since the submission. This time window is reasonable based on my experience as author, editor, and reviewer of hundreds of papers for many dozens of journals. Patience doesn’t harm. If you’re checking the submission system all the time, you simply waste important time. It’s fine to check from time to time (e.g. weekly), but don’t be obsessed and repeat it every few minutes. Likewise, I suggest avoiding sending emails to editors and the journal office all the time. For more experienced authors: if you’re a reluctant reviewer who doesn’t respond in time and may not submit review reports on time, you can’t expect that others will do otherwise for you. In general, editors and journals may want a quick decision even more than you do, as the journal’s indices can be affected, which in turn may affect authors selecting the journal, and, thus, their business. Keep in mind, however, that how fast the decision is made depends upon reviewers. Reviewers are busy, and reviewing is a voluntary commitment requiring much time. Therefore, the editors may need to contact numerous reviewers until they rope in the required number of referees. Some of them may never submit their review report or may submit it late. In some other cases, you may think that the editor is delaying your process, whereas in fact he/she is trying to help you. For example, the required number of review reports might have been obtained, and some of the reviewers recommend rejection, while the editor has a different opinion. This can lead the editor to seek recommendations from additional reviewers, extending the time of peer review.

As a general tip, there is no need to be anxious and contact the journal as long as you see that the date of the manuscript status changes from time to time. A lack of noticeable change in the status information, however, doesn’t mean nothing is actually happening. For example, an editor may be exchanging correspondence with other editors or even with reviewers outside the online submission system. Remember Heraclitus’ quote “no man ever steps in the same river twice, for it’s not the same river and he’s not the same man” (things that may seem constant may actually be undergoing change).

What do the various manuscript statuses mean?

Different submission systems and journals use various statuses for the manuscript peer-review stage. The most common are Footnote 17:

Manuscript submitted: Your manuscript has been received by the editorial office. It will now be subjected to technical checking and then assigned to an editor. If the journal office wants you to fix some technical issues, the manuscript will be returned to you. Commonly, you’ll be able to edit your existing submission in the system, meaning that you shouldn’t submit it again as a new submission. Read carefully the information in the email you’ve received; there will be specific instructions.

Editor invited: An academic editor has been invited to manage your paper. The editor hasn’t yet agreed to take on the assignment. This stage is used less frequently.

With editor: Your manuscript has been assigned to an editor. The editor was previously invited and has agreed to manage your paper, or he/she was assigned directly without being asked to accept the assignment. Your paper has commonly been assigned to a senior editor first. The senior editor can assign the paper to an equal- or lower-rank editor, who in turn may assign the paper to a further equal- or lower-rank editor. Depending on the parameters set in the journal’s submission system, you may see changes in the status date while the status name remains the same. If the journal indicates the time of the status too, the status time can change one or more times within the same day, whenever a new editor is added to the loop. Authors commonly don’t notice this change since they have better things to do than checking the system every few minutes. If a journal displays only the date of the status (the time isn’t indicated), no change in the status date will be made if all editors are added to the loop on the same date. If, however, this activity takes place on different dates, the status date will change while the status name remains the same (i.e. With Editor).

Reviewer invited: Reviewers (commonly ≥ 3) have been invited to review your paper. None of them has agreed to take on the assignment yet. This stage is used less frequently.

Under review: The paper is now under review. If the ‘Reviewer Invited’ status preceded, it means that at least one reviewer has agreed to review the paper. If the ‘Reviewer Invited’ status wasn’t used, it simply means that invitations have been sent to reviewers, but you can’t know if any of them has agreed to review your paper. If new reviewers are invited on different days, the status date will keep changing. If these changes are noted within some days or the first few weeks of the submission, it commonly means that the editor couldn’t yet secure the required number of reviewers but is still working on it. In some journals, the status date for ‘Under Review’ can change when a reviewer submits his/her report or when the editor evaluates the review reports. A small “secret” is also that the required number of reviews might have been obtained without you noticing it, because the editor sent out new review invitations or raised the required number of reviews in the system.

Required reviews completed: The required number of reviews has been received, and the editor will go through the review reports and perhaps your paper. This stage may last from some hours to more than one week. Be patient, and avoid sending emails to the editor asking about the status of your paper. As an editor, I have faced the situation where an author sent me such correspondence within three days of this status appearing (3−4 weeks from the initial submission date). There are several reasons why you shouldn’t disturb the editor or editorial office so soon after this status appears. You cannot find these reasons written anywhere, so I’ll share them with you based on my own experience as an editor. The status is quite relative and doesn’t necessarily mean that the review of your paper has been completed. The editor might have received the required number of reviews, but remember that this is the minimum number of required reviews, i.e. typically 2 or 3. However, (1) there may be disagreement among the recommendations of the reviewers or (2) the editor may have a different opinion from some or all of the reviewers, and (s)he may need more review reports. In this case, the editor may need some time to trace and invite other reviewers (remember, editors have dozens of papers to handle, not only yours). When new reviewers are invited, the status will commonly change back to ‘Under Review’. Alternatively, the editor (1) might have received the minimum number of required reviews but is waiting for additional reports from reviewers who have agreed to review the paper but haven’t submitted yet, or (2) (s)he changed the required number of reviews to a higher number, e.g. (s)he may have initially expected 2 reports but later changed it to 4. Another possible scenario is that a lower number of reports than actually expected was set as required, e.g. 1, while more review reports are expected to be delivered.
In all these situations, the status can remain ‘Required Reviews Completed’ without, as you now understand, meaning that the review of your paper has been completed.

Decision in process: The peer review of your article has been completed, and the decision letter will be emailed to you shortly. This stage can last from a couple of minutes to several days. At this point, there are two possibilities: (1) at least one editor has submitted his/her recommendation/decision to the system or (2) the senior editor supervising the process (i.e. the highest-rank editor managing your paper) has started submitting his/her decision. Once the senior editor has submitted the interim or final decision, the corresponding author who submitted the paper will receive an email. It should be noted here that the senior editor may see things differently from a lower-rank editor, e.g. he/she may find the review reports insufficient, and can always invite more reviewers, which would result in the status changing back to ‘Under Review’. This is rare, however, but not impossible. For example, the lower-rank editor may be a new editor without extensive experience, being trained and supervised by a senior editor.
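As a quick mental model, the typical status flow described above can be sketched as a small state map. This is an illustrative simplification only; status names and transitions vary between journals and submission systems, and the mapping below is my own sketch, not any journal’s actual workflow:

```python
# Illustrative sketch (not any journal's actual system) of the common
# manuscript status flow described in this section.
TRANSITIONS = {
    "Manuscript Submitted": {"Editor Invited", "With Editor"},
    "Editor Invited": {"With Editor"},
    # Desk rejection skips external review entirely:
    "With Editor": {"Reviewer Invited", "Under Review", "Decision in Process"},
    "Reviewer Invited": {"Under Review"},
    "Under Review": {"Required Reviews Completed"},
    # The editor may decide more reviews are needed and go back a step:
    "Required Reviews Completed": {"Decision in Process", "Under Review"},
    "Decision in Process": set(),  # the decision letter follows
}

def is_plausible(current, nxt):
    """Check whether a status change fits this simplified sketch."""
    return nxt in TRANSITIONS.get(current, set())

# 'Required Reviews Completed' can indeed fall back to 'Under Review':
print(is_plausible("Required Reviews Completed", "Under Review"))  # True
```

The fallback edges are the point of the sketch: a status moving “backwards” is normal and usually means the editor is still working on your paper, not that something went wrong.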

I got the decision letter, now what? The different types of decision

Following the aforementioned statuses, the final status indicates the editor’s decision and can go by all sorts of names. Once you receive the decision letter, forward it to your coauthors, if any. There are various types of decision:

Reject: Your paper has been rejected. Take it easy, this is how this strategy “game” works. You probably have received extensive comments from at least two reviewers. See this as an opportunity to improve your paper (Kotsis and Chung 2014). Read the comments once. Don’t be embarrassed, angry, or ashamed of your work. Remember that even highly talented authors and top scientists experience similar situations. Sleep on the comments for a few days; this will allow you to return to a homeostatic psychological state if you’ve been affected by them. Then, return to the comments, read them all in detail, and improve whatever can possibly be improved among the issues indicated by the reviewer(s). After improvement, turn to your supervisor and coauthors again (if any), and restart the loop of the paper submission process. This process is like a “for-do loop” repetition control structure in Pascal programming. You may need to repeat the loop a few times, and this is why I recommended keeping in mind 2–3 additional journals when selecting a target journal. There are a few further points to consider here. Read the decision letter carefully. Did the editor write any comments? Does he/she state clearly or imply that you’re encouraged to resubmit your manuscript as a new submission? Not all journals have a decision status for rejection with encouraged resubmission, and while the decision you got is ‘reject’, you may still be given the chance to resubmit. If this is the case, make sure that you improve your paper considering all the reviewers’ comments. If you resubmit, add a response letter in the cover letter. Copy all the editor’s (if any) and reviewers’ comments, and respond to each comment separately, explaining how you addressed the indicated issue or why you didn’t. You may facilitate the editor’s work if you add a brief paragraph giving the big picture of the main improvements that make your paper merit reconsideration.
Another possibility is that you aren’t encouraged to resubmit your work, but you want to write a rebuttal. Rebuttals are rarely successful (Hites 2021) and are considered only in very specific cases, such as when it is clear that your paper was rejected based on incorrect criticisms. For example, your paper was rejected based only on one reviewer incorrectly claiming that you had an insufficient number of replicated experimental units while you had clearly stated the replicates in the paper. If you decide to proceed with a rebuttal, write everything in the cover letter, as explained above. Don’t enter into a personal debate with reviewers and start criticizing them. Remember that if a reviewer didn’t understand something, it is probably because of your writing. Be thankful and always try to find ways to potentially fix each single issue indicated. If a reviewer hasn’t understood something, other readers of your paper might not understand it either. It should also be made clear that you aren’t supposed to always agree with each single comment of a reviewer. Authors may think that they must do everything a reviewer says, but reviewers also may think that their job is to rewrite reviewed papers, or they may see themselves as teachers and the authors as their students. It’s okay to disagree with some comments, but this should be based on convincing reasoning; strong scientific support would help if it’s about the scholarly content of your paper.
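The “for-do loop” analogy above can be made concrete with a minimal sketch (in Python rather than Pascal; the journal names and the toy reviewer below are hypothetical placeholders, not a real system):

```python
# Toy sketch of the resubmission loop: work down a ranked list of target
# journals, improving the paper after each rejection. Everything here is
# an illustrative placeholder, not a real API.
def submission_loop(paper, target_journals, review):
    """Submit to each journal in turn; revise after every rejection."""
    for journal in target_journals:            # the "for-do loop" of resubmission
        decision = review(paper, journal)
        if decision == "accept":
            return journal, paper              # loop ends on acceptance
        paper = paper + " (revised)"           # improve using reviewers' comments
    return None, paper                         # all targets exhausted; rethink

# Toy reviewer that accepts only after two rounds of revision.
toy_review = lambda paper, journal: "accept" if paper.count("(revised)") >= 2 else "reject"
journal, final = submission_loop("draft", ["Journal A", "Journal B", "Journal C"], toy_review)
print(journal)  # → Journal C
```

Note what the loop body does: the paper is improved before each new submission, which is exactly why keeping 2–3 backup journals in mind pays off.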

Revise: You’re getting closer to getting your paper published, well done! The editor believes that your paper has relatively high odds of being published following revision. The revision can be minor, moderate, or major, and this is usually indicated in the decision letter and/or the manuscript status in the submission system. Footnote 18 As a rule of thumb, minor revision leads to a quick acceptance, usually without sending the paper back to reviewers; however, this depends on how well the editor does his/her job and sometimes on whether his/her expertise is relevant to your paper. Even if the revision is minor, I strongly recommend that you do excellent and thorough work. Don’t restrict yourself to simply correcting what the reviewers mention. Go through the entire manuscript again, and do this very carefully. Some time has passed since you last read the manuscript, and you can now see things in a different way. Footnote 19 Don’t be afraid or hesitate to apply changes. This is your last chance to apply important changes to the scholarly content of your paper. If it’s to increase the quality further, it’s worth taking more time to get published. A misconception I have heard a couple of times is that, when the revision is minor, you should do only what the reviewers say, agree or not, without applying other changes, so that the editor doesn’t send the paper back to reviewers. Don’t fall into the trap of boxing yourself into wooden frames, because no single policy/practice applies to every situation. If the editor is professional and does his/her job as he/she is supposed to, there would be no difference. But even if the paper is sent back to reviewers, why not, if it is to enhance its quality and avoid errors? This can help your paper and protect your reputation. Extending your revisions beyond the minor changes indicated by reviewers shows professionalism and responsibility in my view.
All the additional changes, however, should be clearly communicated to the editor (in the response letter). Moderate revision is rarely used and has no important difference from major revision in my view; it is quite arbitrary and based on the editor’s perception of what separates moderate from major. A major revision indicates that considerable changes would be needed to bring the manuscript to the level required by the editor to accept the paper, including modifications to the scholarly content, such as methods application and explanation, statistics, results interpretation, and/or conclusions. Thus, take the time needed to address all the issues to the best of your ability (see the next section before revising your paper).

Accept: You’ll rarely receive this decision upon the first submission of your paper. The chances of receiving it, however, increase if the submission follows improvements based on reviewers’ comments after a rejection from the same or a different journal. If your paper has been accepted for publication, congratulations, you made it! (Dollars to doughnuts that you’re thrilled to receive this news).

Prepare the revision and submit

Make a plan for the revision as soon as you receive the revise decision. Be organized. Do your best to submit the revised manuscript by the deadline indicated by the editor. If you think you need more time to complete the revision, contact the editorial office of the journal to extend the deadline. Things change and scientists are often extremely busy. Furthermore, you’re the customer. Hence, it’s fine to ask for an extension of the deadline, but when you do, indicate in your letter how much more time you think you need. If you have coauthors, keep in mind that they may be extremely busy too. Therefore, I recommend that you send them the revision materials well ahead of the deadline so that they have several days to work with them. All of them will want to work on the revision carefully, so give them the time to do this without putting them in a difficult situation. Remember that all authors should approve the revision for submission, and never ignore them; you may find yourself taking a turn in the barrel at some point. They should always be in the loop during the entire process.

Copy all the comments from editors and reviewers into a new document, the so-called response letter. Don’t apply any changes to the comments, including correcting language; keep them authentic. Below each comment, explain how you’ve addressed the issue pointed out by the reviewer, or why you didn’t do so. Your answer should be convincing, explained in sufficient detail, and supported by scientific references wherever needed. As I mentioned multiple times, there is no single practice applying to every situation. Some answers can be just one word, whereas other answers can be one or more paragraphs, depending on the context and extent of changes applied, if any. As a general tip, try to be concise, direct, and always on topic. Write not too much, not too little, just right. Be polite. Remember that reviewers voluntarily spend considerable time reviewing your paper for your own benefit; even if your paper is rejected, reviewers’ comments can almost always help to improve it. While authors may come across personal attacks by reviewers, no professional editor doing his/her job correctly would allow this, and no reviewer has such a right (his/her call is to comment on the research itself and the quality of the paper). Not only is such a practice by a reviewer considered inappropriate, but it may also raise legal concerns. Editors should protect authors from such attacks, and can always request reviewers to edit their comments or even consider excluding the entire review report. If you face this situation, I suggest that you try to remain neutral and don’t engage in debates at a personal level; avoid emotional contagion, i.e. matching your emotional state with the reviewer’s emotional state (Pérez-Manrique and Gomila 2021).
Inappropriate and unethical behavior of a reviewer can be pointed out in the response and/or cover letters, mentioning that it’s outside science’s spirit and ethics to enter into debates at a personal level and that, thus, you remain neutral in your responses. However, improve your scholarly content as needed and indicate this. You may also consider various options, including (1) contacting the editorial office of the journal to bring the matter to their attention, (2) requesting withdrawal of your paper, and (3) communicating your problem to COPE, although I suggest turning to the editorial office of the journal before turning to external bodies such as COPE. Finally, make the response letter user-friendly. Use available editing tools to make the response letter more pleasant and easier for reviewers and editors to follow. For example, you can use bold, italics, or underlining to highlight the most important points in each response. It’s also helpful to reviewers and editors if you use different font colors for reviewers’ comments and your responses.
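The response-letter structure described above is mechanical enough to sketch in code. The function name and input layout below are my own hypothetical choices, but the sketch captures the two rules that matter: keep each comment verbatim, and leave a dedicated response slot under it:

```python
# Hypothetical helper that builds a response-letter skeleton from reviewers'
# comments, keeping each comment verbatim (uncorrected) as advised above.
def response_skeleton(comments_by_reviewer):
    lines = []
    for reviewer, comments in comments_by_reviewer.items():
        lines.append(reviewer)
        for i, comment in enumerate(comments, 1):
            lines.append(f"Comment {i}: {comment}")  # copied verbatim, no edits
            lines.append("Response: <explain how this was addressed, or why not>")
    return "\n".join(lines)

skeleton = response_skeleton({
    "Reviewer 1": ["Clarify the statistics.", "Shorten the Introduction."],
})
print(skeleton)
```

In a real response letter you would then fill in each “Response:” slot and, as suggested above, use formatting (bold, colors) to visually separate comments from responses.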

Cross-check all your revised submission components before approving the revised submission pdf file. After you complete the submission, brief your coauthors and wait for the decision. If the revision was minor, you may hear back within hours to some days (commonly up to one week). If it was major, you may expect to hear back within a few days to some weeks (commonly 2–4 weeks); however, it may take longer.

Shortly after your paper is accepted for publication and is transferred to the publisher for production, you’ll receive the proofs. Download the pdf of the proofs, send it to all the coauthors, and keep it for your records. You’re usually asked to send any corrections back to the journal within 48 h. Nevertheless, this is your last chance to apply any corrections, and, if you need more time, ask for an extension of the deadline. At this stage, however, you shouldn’t make any changes to the scholarly content of the article. If you need to do so, the permission of the academic editor who managed the peer review of your paper would be needed, and the publication of your paper may be delayed. Check the proofs carefully, and apply all the minor corrections needed. Your paper will be online soon after you submit the corrected proofs, commonly within one week. Many journals also publish manuscripts in their accepted form, often within 1–3 days of acceptance.

Consider serving as peer reviewer

Now that your paper is published online, it’s the right time to prepare to act as a peer reviewer too, especially if you’re designated as corresponding author. The referees of your paper have spent considerable time reviewing it, and you should consider doing the same for others. Authors’ and reviewers’ roles are tightly linked, and acting as a reviewer will improve your writing skills. How you can become a better reviewer is the subject of a subsequent paper (Agathokleous 2022).

In the pursuit of scientific knowledge, researchers engage in peer-reviewed publishing in a wide variety of scientific journals. In this paper I have provided a guide for peer-reviewed publishing, which I hope will help thousands of early-career research scientists and students at higher-education institutions become more knowledgeable about and familiar with the current peer-review system of most scientific journals. While your goal may be to publish your papers, have fun during the journey. In the end, it’s the journey that may matter more than reaching the destination (recall ‘Ithaca’, Constantine P. Cavafy’s 1911 poem). Regardless of the outcome of your submissions, the entire process will make you wiser and more mature, knowledgeable, and experienced. I do hope that you enjoyed reading the paper and found it useful for improving your skills, and I wish you all the best for your research, writing, and career ahead.

Note: Because of the commentary-type nature of this paper, the author has often used informal language and idioms (except in the Introduction) to make the paper more entertaining and user-friendly. A formal tone is expected in scientific writing, and you should refrain from using informal language and idioms when writing your research paper.

JFR is directed and published by the Northeast Forestry University, China, in collaboration with a publisher (currently Springer).

Born in January 1988.

Forestry Research, Science of The Total Environment, Current Opinion in Environmental Science & Health, Plant Stress, Climate, Sci, Frontiers in Forests and Global Change, and Water Emerging Contaminants & Nanoplastics.

Note: This does not imply that the longer someone is engaging in a process the more skilled he/she is. The skills do not necessarily improve with increasing time span of engagement, but they do improve with increasing time of actual effort put into producing, reviewing, or editing papers. Also, the number of papers alone does not necessarily say much about one’s authoring skills, whereas indices of papers’ and journals’ quality may be more appropriate.

Commonly the first section of any paper, which introduces the background and research questions/hypotheses, and sets the scene, although for some journals it can be named differently depending on the article type.

https://www.elsevier.com/journals/ecotoxicology-and-environmental-safety/0147-6513/guide-for-authors

https://www.springer.com/journal/11356/submission-guidelines

https://www.elsevier.com/journals/environment-international/0160-4120/guide-for-authors

https://www.elsevier.com/journals/environmental-research/0013-9351/guide-for-authors

From the Greek word ήθος (ethos). As a concept, ethics was introduced by the Greek philosopher Aristotle (Iaccarino 2001 ).

https://onlinelibrary.wiley.com/page/journal/13652486/homepage/forauthors.html

https://www.elsevier.com/authors/journal-authors/policies-and-ethics/credit-author-statement

Unless proposed by supervisors or other colleagues, and, even in this case, competent editors might hardly consider them for reviewers.

https://www.elsevier.com/journals/journal-of-hazardous-materials/0304-3894/guide-for-authors

Idiom likely adapted from Robert Burns, 18th-century poet.

These are status names used by JFR too. The name of each status can differ between journals. For example, (i) Manuscript Submitted may be the same as Undergoing Initial Checking, (ii) Reviewers Invited may be the same as Reviewers Assigned, (iii) Under Review may be the same as Awaiting Reviewer Scores, Awaiting Referee Scores, and Manuscript Assigned to Peer-Reviewer/s, (iv) Required Reviews Completed is the same as Reviews Completed, and (v) Decision in Process may be the same as Under Editor Evaluation, Pending Recommendation, Awaiting AE Recommendation, Awaiting EIC Decision, and Ready for Decision.

Minor revision can be termed differently, such as “Accept conditionally, minor revisions needed”.

Note: I personally go back to the manuscript from time to time while it is under review and note potential changes to apply during revisions.

Agathokleous E (2022) Engaging in scientific peer review: tips for young reviewers. J For Res. https://doi.org/10.1007/s11676-021-01389-7 (In Press)

Agathokleous E, Saitanis CJ (2020) Plant susceptibility to ozone: A Tower of Babel? Sci Total Environ 703:134962. https://doi.org/10.1016/j.scitotenv.2019.134962


Amrhein V, Greenland S, McShane B (2019) Retire statistical significance. Nature 567:305–307. https://doi.org/10.1038/d41586-019-00857-9

Armstrong RA (2014) When to use the Bonferroni correction. Ophthalmic Physiol Opt 34:502–508. https://doi.org/10.1111/opo.12131


Berben L, Sereika SM, Engberg S (2012) Effect size estimation: methods and examples. Int J Nurs Stud 49:1039–1047. https://doi.org/10.1016/j.ijnurstu.2012.01.015

Berger JO, Delampady M (1987) Testing precise hypotheses. Stat Sci 2:317–335. https://doi.org/10.2307/2245772

Böhme G, Stehr N (1986) The Knowledge Society. Sociology of the Sciences book series, vol 10. Springer, Dordrecht. https://doi.org/10.1007/978-94-009-4724-5_2

Breaugh JA (2003) Effect size estimation: Factors to consider and mistakes to avoid. J Manage 29:79–97. https://doi.org/10.1016/S0149-2063(02)00221-0


Broad W (1981) The publishing game: getting more for less. Science 211:1137–1139. https://doi.org/10.1126/science.7008199

Clark AM, Thompson DR (2017) Five (bad) reasons to publish your research in predatory journals. J Adv Nurs 73:2499–2501. https://doi.org/10.1111/jan.13090

Connor JT (2004) The value of a P -valueless paper. Am J Gastroenterol 99:1638–1640. https://doi.org/10.1111/j.1572-0241.2004.40592.x

Covaci A, Nieuwenhuijsen M, He Z, Zhu YG (2019) A new era in the history of Environmental International. Environ Int 122:1–2. https://doi.org/10.1016/j.envint.2018.12.046

COVID-19 Host Genetics Initiative (2021) Mapping the human genetic architecture of COVID-19. Nature (in press). https://doi.org/10.1038/s41586-021-03767-x

Cumming G, Fidler F, Vaux DL (2007) Error bars in experimental biology. J Cell Biol 177:7–11. https://doi.org/10.1083/jcb.200611141


Das N, Panjabi M (2011) Plagiarism: Why is it such a big issue for medical writers? Perspect Clin Res 2:67. https://doi.org/10.4103/2229-3485.80370


de Camargo Jr KR (2011) Science, knowledge, and society. Am J Public Health 101:1352. https://doi.org/10.2105/ajph.2011.300311

Dhillon P (2021) How to be a good peer reviewer of scientific manuscripts. FEBS J 288:2750–2756. https://doi.org/10.1111/febs.15705

Ding D, Nguyen B, Gebel K, Bauman A, Bero A (2020) Duplicate and salami publication: a prevalence study of journal policies. Int J Epidemiol 49:281–288. https://doi.org/10.1093/ije/dyz187

Duggan F (2007) Plagiarism: prevention, practice and policy. Assess Eval High Educ 31:151–154. https://doi.org/10.1080/02602930500262452

Durlak JA (2009) How to select, calculate, and interpret effect sizes. J Pediatr Psychol 34:917–928. https://doi.org/10.1093/jpepsy/jsp004

Editorial (2005) The cost of salami slicing. Nat Mater 4(1):1. https://doi.org/10.1038/nmat1305


Eungoo K, Hwang HJ (2020) The consequences of data fabrication and falsification among researchers. J Res Publ Ethics 1:7–10. https://doi.org/10.15722/jrpe.1.2.202009.7

Fanelli D (2010) Do pressures to publish increase scientists’ bias? An empirical support from US States data. PLoS ONE 5:e10271. https://doi.org/10.1371/journal.pone.0010271

Fanelli D, Larivière V (2016) Researchers’ individual publication rate has not increased in a century. PLoS ONE 11:e0149504. https://doi.org/10.1371/journal.pone.0149504

Fernandez-Llimos F (2016) Bradford’s law, the long tail principle, and transparency in Journal Impact Factor calculations. Pharm Pract (granada) 14:842. https://doi.org/10.18549/pharmpract.2014.03.842

Freund RJ, Mohr DL, Wilson WJ (2010) Statistical Methods, 3rd edn. Academic Press, Canada, p 795. https://doi.org/10.1016/b978-0-12-374970-3.00006-8

Gerding AB (2012) Ethical dilemmas in publishing. A rising tide of plagiarism? J Prosthodont 21:431–432. https://doi.org/10.1111/j.1532-849x.2012.00904.x

Goodman S (2008) A dirty dozen: Twelve P -value misconceptions. Semin Hematol 45:135–140. https://doi.org/10.1053/j.seminhematol.2008.04.003

Grimm D (2005) Suggesting or excluding reviewers can help get your paper published. Science 309:1974. https://doi.org/10.1126/science.309.5743.1974

Higgins JR, Lin FC, Evans JP (2016) Plagiarism in submitted manuscripts: incidence, characteristics and optimization of screening—case study in a major specialty medical journal. Res Integr Peer Rev 11(1):1–8. https://doi.org/10.1186/s41073-016-0021-8

Hites RA (2021) How to convince an editor to accept your paper quickly. Sci Total Environ 798:149243. https://doi.org/10.1016/j.scitotenv.2021.149243

Hu B, Guo H, Zhou P, Shi ZL (2020) Characteristics of SARS-CoV-2 and COVID-19. Nat Rev Microbiol 193(19):141–154. https://doi.org/10.1038/s41579-020-00459-7

Iaccarino M (2001) Science and ethics As research and technology are changing society and the way we live, scientists can no longer claim that science is neutral but must consider the ethical and social aspects of their work. EMBO Rep 2:747–750. https://doi.org/10.1093/embo-reports/kve191

Iglewicz B (2014) Experimentwise error rate in practice. Wiley StatsRef Stat Ref Online. https://doi.org/10.1002/9781118445112.stat05852

Ioannidis JPA (2005) Why most published research findings are false. PLOS Med 2:e124. https://doi.org/10.1371/journal.pmed.0020124

Johnson VE (2013) Revised standards for statistical evidence. Proc Natl Acad Sci U S A 110:19313–19317. https://doi.org/10.1073/pnas.1313476110

Kingori P, Gerrets R (2016) Morals, morale and motivations in data fabrication: Medical research fieldworkers views and practices in two Sub-Saharan African contexts. Soc Sci Med 166:150. https://doi.org/10.1016/j.socscimed.2016.08.019

Kirk RE (2007) Effect magnitude: A different focus. J Stat Plan Inference 137:1634–1646. https://doi.org/10.1016/j.jspi.2006.09.011

Klein DF, Glick ID (2008) Conflict of interest, journal review, and publication policy. Neuropsychopharmacol 3313(33):3023–3026. https://doi.org/10.1038/npp.2008.109

Kotsis SV, Chung KC (2014) Manuscript rejection: how to submit a revision and tips on being a good peer reviewer. Plast Reconstr Surg 133:958–964. https://doi.org/10.1097/prs.0000000000000002

Kumar PM, Priya NS, Musalaiah S, Nagasree M (2014) Knowing and avoiding plagiarism during scientific writing. Ann Med Health Sci Res 4:S193. https://doi.org/10.4103/2141-9248.141957

Lakens D (2013) Calculating and reporting effect sizes to facilitate cumulative science: a practical primer for t-tests and ANOVAs. Front Psychol 4:863. https://doi.org/10.3389/fpsyg.2013.00863

Lang TA (2020) An author’s editor reads the “Instructions for Authors.” Eur Sci Ed 46:e55817. https://doi.org/10.3897/ese.2020.e55817

Larivière V, Kiermer V, MacCallum CJ, McNutt M, Patterson M, Pulverer B, Swaminathan S, Taylor S, Curry S (2016) A simple proposal for the publication of journal citation distributions. Biorxiv. https://doi.org/10.1101/062109

Levesque RJR (2019) Presubmission inquiries: Problematic, counterproductive, and unnecessary. J Youth Adolesc 484(48):651–654. https://doi.org/10.1007/s10964-019-01008-z

Lew MJ (2012) Bad statistical practice in pharmacology (and other basic biomedical disciplines): you probably don’t know P. Br J Pharmacol 166:1559–1567. https://doi.org/10.1111/j.1476-5381.2012.01931.x

Lowry G, Field J, Westerhoff P, Zimmerman J, Alvarez P, Boehm A, Crittenden J, Dachs J, Diamond M, Eckelman M, Gardea-Torresdey J, Giammar D, Hofstetter T, Hornbuckle K, Jiang G, Li XD, Leusch F, Mihelcic J, Miller S, Pruden A, Raskin L, Richardson S, Scheringer M, Schlenk D, Strathmann T, Tao S, Waite TD, Wang P, Wang S (2020) Why was my paper rejected without review? Environ Sci Technol 54:11641–11644. https://doi.org/10.1021/acs.est.0c05784

Lozano GA, Larivière V, Gingras Y (2012) The weakening relationship between the impact factor and papers’ citations in the digital age. J Am Soc Inf Sci Technol 63:2140–2145. https://doi.org/10.1002/asi.22731

Maccaro A, Piaggio D, Pagliara S, Pecchia L (2021) The role of ethics in science: A systematic literature review from the first wave of COVID-19. Health Technol in Press. https://doi.org/10.1007/s12553-021-00570-6

Macháček V, Srholec M (2021) Predatory publishing in Scopus: evidence on cross-country differences. Sci 1263(126):1897–1921. https://doi.org/10.1007/s11192-020-03852-4

Maestre FT (2019) Ten simple rules towards healthier research labs. PLOS Comput Biol 15:e1006914. https://doi.org/10.1371/journal.pcbi.1006914

Masicampo EJ, Lalande DR (2012) A peculiar prevalence of P values just below .05. Q J Exp Psychol 65:2271–2279. https://doi.org/10.1080/17470218.2012.711335

McCabe DJ, Hayes-Pontius EM, Canepa A, Berry KS, Levine BC (2012) Measuring standardized effect size improves interpretation of biomonitoring studies and facilitates meta-analysis. Freshw Sci 31:800–812. https://doi.org/10.1899/11-080.1

McGough JJ, Faraone SV (2009) Estimating the size of treatment effects: moving beyond P values. Psychiatry 6:21–29

Nakagawa S, Cuthill IC (2007) Effect size, confidence interval and statistical significance: a practical guide for biologists. Biol Rev Camb Philos Soc 82:591–605. https://doi.org/10.1111/j.1469-185x.2007.00027.x

Neill US (2008) Publish or perish, but at what cost? J Clin Invest 118:2368. https://doi.org/10.1172/jci36371

Nuzzo R (2014) Scientific method: Statistical errors. Nature 506:150–152. https://doi.org/10.1038/506150a

Pérez-Manrique A, Gomila A (2021) Emotional contagion in nonhuman animals: A review. Wiley Interdiscip Rev Cogn Sci e1560. https://doi.org/10.1002/wcs.1560

Perreau M, Suffiotti M, Marques-Vidal P, Wiedemann A, Levy Y, Laouénan C, Ghosn J, Fenwick C, Comte D, Roger T, Regina J, Vollenweider P, Waeber G, Oddo M, Calandra T, Pantaleo G (2021) The cytokines HGF and CXCL13 predict the severity and the mortality in COVID-19 patients. Nat Commun 12:4888. https://doi.org/10.1038/s41467-021-25191-5

Pourret O, Irawan DE, Tennant JP, Wien C, Dorch B (2020) Comments on “Factors affecting global flow of scientific knowledge in environmental sciences” by Sonne et al. (2020). Sci Total Environ 721:136454. https://doi.org/10.1016/j.scitotenv.2019.136454

Qehaja AB (2020) Avoiding publishing in predatory journals: An evaluation algorithm. J Effic Responsib Educ Sci 13:154–163. https://doi.org/10.7160/eriesj.2020.130305

Qian J, Yuan Z, Li J, Zhu H (2020) Science Citation Index (SCI) and scientific evaluation system in China. Humanit Soc Sci Commun 71(7):1–4. https://doi.org/10.1057/s41599-020-00604-w

Rawat S, Meena S (2014) Publish or perish: Where are we heading? J Res Med Sci 19:87

PubMed   PubMed Central   Google Scholar  

Resnik DB (2014) Data fabrication and falsification and empiricist philosophy of science. Sci Eng Ethics 20:423. https://doi.org/10.1007/s11948-013-9466-Z

Rickard T, Gayley CA (eds) (1908) A guide to technical writing. Mining and Scientific Press, San Francisco

Google Scholar  

Rillig MC, Bielcik M, Chaudhary VB, Grünfeld L, Maaß S, Mansour I, Ryo M, Veresoglou SD (2020) Ten simple rules for increased lab resilience. PLOS Comput Biol 16:e1008313. https://doi.org/10.1371/journal.pcbi.1008313

Ruff K (2015) Scientific journals and conflict of interest disclosure: what progress has been made? Environ Heal 141(14):1–8. https://doi.org/10.1186/s12940-015-0035-6

Sayer EJ (2018) The anatomy of an excellent review paper. Funct Ecol 32:2278–2281. https://doi.org/10.1111/1365-2435.13207

Schofferman J, Wetzel F, Bono C (2015) Ghost and guest authors: you can’t always trust who you read. Pain Med 16:416–420. https://doi.org/10.1111/pme.12579

Sedlak DL (2015) Just said no. Environ Sci Technol 49:6365–6366. https://doi.org/10.1021/acs.est.5b02405

Senn S (2001) Two cheers for P -values? J Epidemiol Biostat 6:193–204. https://doi.org/10.1080/135952201753172953

Smolčić VŠ (2013) Salami publication: definitions and examples. Biochem Medica 23:237. https://doi.org/10.11613/bm.2013.030

Solís Arce JS, Warren SS, Meriggi NF, Scacco A, McMurry N, Voors M, Syunyaev G, Malik AA, Aboutajdine S, Adeojo O, Anigo D, Armand A, Asad S, Atyera M, Augsburg B, Awasthi M, Ayesiga GE, Bancalari A, Björkman Nyqvist M, Borisova E, Bosancianu CM, Cabra García MR, Cheema A, Collins E, Cuccaro F, Farooqi AZ, Fatima T, Fracchia M, Galindo Soria ML, Guariso A, Hasanain A, Jaramillo S, Kallon S, Kamwesigye A, Kharel A, Kreps S, Levine M, Littman R, Malik M, Manirabaruta G, Mfura JLH, Momoh F, Mucauque A, Mussa I, Nsabimana JA, Obara I, Otálora MJ, Ouédraogo BW, Pare TB, Platas MR, Polanco L, Qureshi JA, Raheem M, Ramakrishna V, Rendrá I, Shah T, Shaked SE, Shapiro JN, Svensson J, Tariq A, Tchibozo AM, Tiwana HA, Trivedi B, Vernot C, Vicente PC, Weissinger LB, Zafar B, Zhang B, Karlan D, Callen M, Teachout M, Humphreys M, Mobarak AM, Omer SB (2021) COVID-19 vaccine acceptance and hesitancy in low- and middle-income countries. Nat Med 27:1385–1394. https://doi.org/10.1038/s41591-021-01454-y

Sonne C, Dietz R, Alstrup AKO (2020) Factors affecting global flow of scientific knowledge in environmental sciences. Sci Total Environ 701:135012. https://doi.org/10.1016/j.scitotenv.2019.135012

Stehr N (2009) Useful Scientific Knowledge: What Is Relevant Science for Society? on JSTOR. J Appl Soc Sci 3:18–29

Sullivan GM, Feinn R (2012) Using effect size—or why the P value is not enough. J Grad Med Educ 4:279–282. https://doi.org/10.4300/jgme-d-12-00156.1

Telenti A, Arvin A, Corey L, Corti D, Diamond MS, García-Sastre A, Garry RF, Holmes EC, Pang P, Virgin HW (2021) After the pandemic: perspectives on the future trajectory of COVID-19. Nature 596:495–504. https://doi.org/10.1038/s41586-021-03792-w

Thorlindsson T, Vilhjalmsson R (2016) Introduction to the special issue: Science, knowledge and society. Acta Sociol 46:99–105

Tomczak M, Tomczak E (2014) The need to report effect size estimates revisited. An overview of some recommended measures of effect size. TRENDS Sport Sci 1:19–25

Van Wesel M (2016) Evaluation by citation: Trends in publication behavior, evaluation criteria, and the strive for high impact publications. Sci Eng Ethics 22:225. https://doi.org/10.1007/s11948-015-9638-0

Van Noorden R, Singh Chawla D (2019) Hundreds of extreme self-citing scientists revealed in new database. Nature 572:578–579. https://doi.org/10.1038/d41586-019-02479-7

Veresoglou SD (2015) P hacking in biology: An open secret. Proc Natl Acad Sci U S A 112:E5112. https://doi.org/10.1073/pnas.1512689112

Verma IM (2015) Impact, not impact factor. Proc Natl Acad Sci U S A 112:7875. https://doi.org/10.1073/pnas.1509912112

von Wehrden H, Schultner J, Abson DJ (2015) A call for statistical editors in ecology. Trends Ecol Evol 30:293–294. https://doi.org/10.1016/j.tree.2015.03.013

Vuong QH (2019) Breaking barriers in publishing demands a proactive attitude. Nat Hum Behav 310(3):1034–1034. https://doi.org/10.1038/s41562-019-0667-6

Xafis V, Schaefer GO, Labude MK, Zhu Y, Hsu LY (2020) The perfect moral storm: Diverse ethical considerations in the COVID-19 pandemic. Asian Bioeth Rev 12:65–83. https://doi.org/10.1007/s41649-020-00125-3

Article   PubMed Central   Google Scholar  

Yiaslas T (2019) The pursuit of arete in medicine and health care. Int J Dis Reversal Prev 1:4–4. https://doi.org/10.22230/ijdrp.2019v1n2a105

Zhu JW (2020) Evaluation of scientific and technological research in China’s colleges: A review of policy reforms, 2000–2020. ECNU Rev Educat 3:556–561. https://doi.org/10.1177/2096531120938383

Download references

Acknowledgements

The author is grateful to Dr. Damià Barceló, Professor at the Institute of Environmental Assessment and Water Research, IDAEA-CSIC, and the Catalan Institute for Water Research, ICRA-CERCA, Spain, for comments and suggestions on a preliminary draft. The author is also thankful to Dr. Lei Yu and the in-house editorial team of JFR for providing JFR’s information as well as to Mr. Noboru Masui, PhD Candidate, for sharing some information about journal guidelines.

Author information

Authors and Affiliations

Department of Ecology, School of Applied Meteorology, Nanjing University of Information Science and Technology (NUIST), Nanjing, 210044, People’s Republic of China

Evgenios Agathokleous


Corresponding author

Correspondence to Evgenios Agathokleous.

Ethics declarations

Conflict of interest

Any commercial name cited in this manuscript (e.g. of a journal or software product) is mentioned not for advertisement, and the author does not intend to recommend or encourage the use of its services. The views presented herein are those of the author/editor and do not represent the views of the journal's editorial board as a unit, the journal's editorial office, the journal itself, the publisher, or the author's institution. E.A. is Associate Editor-in-Chief of this journal; however, he was not involved in the peer review of this manuscript. The author declares that there are no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Project Funding: The author acknowledges support by the Startup Foundation for Introducing Talent of Nanjing University of Information Science & Technology (NUIST), Nanjing, China (Grant No. 003080).


Corresponding editor: Yu Lei.


About this article

Agathokleous, E. Mastering the scientific peer review process: tips for young authors from a young senior editor. J. For. Res. 33, 1–20 (2022). https://doi.org/10.1007/s11676-021-01388-8


Received: 14 July 2021

Accepted: 20 August 2021

Published: 16 September 2021

Issue Date: February 2022

DOI: https://doi.org/10.1007/s11676-021-01388-8


Keywords: Academic editor, Article publishing, Manuscript status, Science communication, Scientific writing

Review the scientific review process and find an efficient journal to publish your work in

Journal pages

Each journal has its own page with information about the review process. Researchers provide data on the duration and quality of the review process they experienced. Journal popularity scores are calculated from the number of visits to each journal page, and editorial information is provided by the editor.


Compare journals

Compare journals within and between research fields on several aspects such as duration of first review round and decision time for desk rejections. Other interesting statistics include total handling time of accepted manuscripts, journal popularity score, and overall quality of the review process. Many reviews come with a motivation for the overall rating.
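
The kind of comparison described above can be sketched in a few lines of code. The journal names and review durations below are made up purely for illustration; SciRev's actual data and scoring are more elaborate.

```python
from statistics import median

# Made-up review experiences: (journal, weeks until first review round completed)
reviews = [
    ("Journal A", 8), ("Journal A", 12), ("Journal A", 10),
    ("Journal B", 20), ("Journal B", 16), ("Journal B", 24),
]

# Group durations by journal
by_journal: dict[str, list[int]] = {}
for journal, weeks in reviews:
    by_journal.setdefault(journal, []).append(weeks)

# Median duration of the first review round, per journal
medians = {j: median(ws) for j, ws in by_journal.items()}
print(medians)  # {'Journal A': 10, 'Journal B': 20}
```

With real data one would also compare decision time for desk rejections and total handling time of accepted manuscripts, as described above.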


Share your experience

After receiving the final decision of a review process, visit the journal's page, click on 'Review this journal' and share your experience by filling out the SciRev questionnaire. All review experiences are provided by registered members of the academic community, and checked for systematic errors by the SciRev team.


Support our work

Our website is meant to be a service by researchers for researchers. As a non-profit organization, SciRev is one of the few players in the scientific field that is completely independent of any other party. That means that we depend on donations to cover our costs. Please help us remain independent by supporting us with a donation.


Scientific Reports Impact Factor: "Riding High"


Journals are usually judged by their impact factor (IF), which denotes the significance of the work published in a specific journal. A scientist's research is often evaluated by the impact factor of the journals it appears in, since the IF represents a journal's prestige. A journal's impact factor is determined by how many times its recent articles were cited in a particular year. Clarivate Analytics publishes Journal Impact Factors each year in the Web of Science Journal Citation Reports (WoS JCR).
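
Concretely, a journal's 2-year impact factor is the number of citations received in the report year to items published in the previous two years, divided by the number of citable items published in those two years. A minimal sketch with made-up numbers:

```python
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """2-year Journal Impact Factor: citations in year Y to items from
    years Y-1 and Y-2, divided by the citable items from those years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Illustrative (made-up) numbers for a hypothetical journal in report year 2021:
citations_2021 = 110_000   # citations in 2021 to articles published 2019-2020
citable_items = 22_000     # articles + reviews published 2019-2020

print(impact_factor(citations_2021, citable_items))  # 5.0
```

The 5-year variant mentioned below uses the same formula over a five-year publication window instead of two.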


Scientific Reports, an open-access journal from Nature Publishing Group, has been gaining popularity and importance among researchers for its wide coverage of research areas, including the natural sciences, medicine, psychology, and engineering. The main reasons researchers choose this journal are its publishing house and an easier publication process than other core Nature journals.

Scientific Reports Journal metrics

  • Scientific Reports is the fifth most-cited journal in the world per Clarivate Analytics' 2021 Journal Citation Reports (2020 data).
  • Scientific Reports is indexed in Web of Science, PubMed, Scopus, Google Scholar, and DOAJ, which enhances its credibility.
  • Scientific Reports' impact factor for 2022–23 is 4.997 as per the latest update.
  • The 2-year and 5-year impact factors are 4.996 and 5.516, respectively; although the 2-year figure marks a slight decrease, the difference is marginal.
  • Although Scientific Reports' impact factor is not high, it remains a prominent publication option thanks to its smooth publication process and its acceptance of quality content across a wide range of fields.

Scientific Reports Peer-review Policy

Scientific Reports' review times include a first decision in 56 days and acceptance in 133 days. Given that it publishes scientifically robust, original content through a rigorous peer-review process, Scientific Reports' acceptance rate of 49% is good for any journal.

Despite its modest IF, Scientific Reports maintains high impact and quality through an experienced and extensive editorial team that adheres to a constructive peer-review process and follows all editorial and ethical policies.

To maintain the journal's high impact and quality, it charges an article-processing charge (APC) once a paper is accepted; the fee varies, starting from €1,570.

As a global journal, Scientific Reports ranks higher than 77% of journals that have an impact factor.

For more content like this, visit our website https://www.manuscriptedit.com/scholar-hangout/. You can also mail us at [email protected] with your queries. Happy reading!

Related Posts

How to fix 5 desk rejection.

Rejection from a journal is no one's cup of tea, but it is a reality that a large number of articles get rejected across different journals. Journals also mention the acceptance rate, or the chances of an article getting rejected, on their web pages. High-impact or top journals routinely reject the majority of […]

How to Identify Peer-Reviewed Journals

Professors, as well as students, require peer-reviewed journals for their research because they provide authentic information. These journals are scholarly and refereed, which is why academicians seek them out for publication. Authors write their papers, and expert reviewers assess them to maintain quality. Usually, the reviewers do not know the article's authors. Therefore, the […]

Importance of Journal Impact Factor

Research metrics are used to capture complex concepts like impact and quality. A journal's impact factor is a recognised indicator for measuring the influence and prestige of a journal. It matters worldwide because academicians and researchers face peer pressure to publish in higher-impact journals. The impact factor is essential based […]


  • Open access
  • Published: 09 May 2024

A systematic review to determine the effect of strategies to sustain chronic disease prevention interventions in clinical and community settings: study protocol

  • Edward Riley-Gibson   ORCID: orcid.org/0000-0003-0829-7913 1 , 2 , 3 , 4 ,
  • Alix Hall 1 , 2 , 3 , 4 ,
  • Adam Shoesmith 1 , 2 , 3 , 4 ,
  • Luke Wolfenden 1 , 2 , 3 , 4 ,
  • Rachel C. Shelton 5 ,
  • Emma Doherty 1 , 2 , 3 , 4 ,
  • Emma Pollock 1 , 2 , 3 , 4 ,
  • Debbie Booth 1 , 2 , 3 , 4 ,
  • Ramzi G. Salloum 7 ,
  • Celia Laur 8 , 9 ,
  • Byron J. Powell 10 , 11 , 12 ,
  • Melanie Kingsland 1 , 2 , 3 , 4 ,
  • Cassandra Lane 1 , 2 , 3 , 4 ,
  • Maji Hailemariam 6 ,
  • Rachel Sutherland 1 , 2 , 3 , 4 &
  • Nicole Nathan 1 , 2 , 3 , 4  

Systematic Reviews volume 13, Article number: 129 (2024)


The primary purpose of this review is to synthesise the effect of strategies aiming to sustain the implementation of evidence-based interventions (EBIs) targeting key health behaviours associated with chronic disease (i.e. physical inactivity, poor diet, harmful alcohol use, and tobacco smoking) in clinical and community settings. The field of implementation science is bereft of an evidence base of effective sustainment strategies, and as such, this review will provide important evidence to advance the field of sustainability research.

This systematic review protocol is reported in accordance with the Preferred Reporting Items for Systematic review and Meta-Analysis (PRISMA) checklist. Methods will follow Cochrane gold-standard review methodology. The search will be undertaken across multiple databases, adapting filters previously developed by the research team; data screening and extraction will be performed in duplicate; strategies will be coded using an adapted sustainability-explicit taxonomy; and evidence will be synthesised using appropriate methods (i.e. meta-analytic following Cochrane guidance or non-meta-analytic following SWiM guidelines). We will include any randomised controlled study that targets staff or volunteers delivering interventions in clinical or community settings. Studies which report on any objective or subjective measure of the sustainment of a health prevention policy, practice, or programme within any of the eligible settings will be included. Article screening, data extraction, risk-of-bias assessment, and quality assessment will be performed independently by two review authors. Risk of bias will be assessed using Version 2 of the Cochrane risk-of-bias tool for randomised trials (RoB 2). A random-effects meta-analysis will be conducted to estimate the pooled effect of sustainment strategies separately by setting (i.e. clinical and community). Sub-group analyses will be undertaken to explore possible causes of statistical heterogeneity and may include the following: time period, single or multi-strategy, type of setting, and type of intervention. Differences between sub-groups will be statistically compared.
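
As a rough illustration of the random-effects pooling described above, the sketch below uses the common DerSimonian-Laird estimator of between-study variance; the protocol does not commit to a particular estimator, and the effect sizes and variances here are invented for illustration.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study effect sizes with a DerSimonian-Laird random-effects model.

    effects   : per-study effect estimates (e.g. log odds ratios)
    variances : their sampling variances
    Returns (pooled_effect, standard_error, tau_squared).
    """
    w = [1.0 / v for v in variances]                              # fixed-effect weights
    fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)      # fixed-effect mean
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, effects))    # Cochran's Q
    k = len(effects)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                            # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]                  # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Illustrative (made-up) data: three trials' log odds ratios and variances.
pooled, se, tau2 = dersimonian_laird([0.30, 0.10, 0.45], [0.04, 0.09, 0.05])
ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
```

A sub-group analysis of the kind planned (e.g. clinical vs. community settings) would simply run this pooling separately on each sub-group and compare the estimates.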

Discussion/conclusion

This will be the first systematic review to determine the effect of strategies designed to support sustainment on sustaining the implementation of EBIs in clinical and community settings. The findings of this review will directly inform the design of future sustainability-focused implementation trials. Further, these findings will inform the development of a sustainability practice guide for public health practitioners.

Systematic review registration

PROSPERO CRD42022352333.


Global burden of chronic disease

Preventable chronic diseases such as heart disease, diabetes, and respiratory disease account for a significant proportion of morbidity and mortality, contributing to 70% of all deaths internationally [ 1 , 2 ]. There are several key behavioural risk factors associated with the development of chronic diseases across the life course, including physical inactivity, poor diet, harmful alcohol use, and tobacco smoking [ 3 , 4 ]. Each of these behavioural risk factors is responsible for a considerable proportion (2.78–9.24%) of the total disease burden globally [ 5 ].

The World Health Organization (WHO) recommends the implementation of evidence-based interventions (EBIs) in clinical (e.g. hospitals, general practitioner (GP) surgeries, dental practices, community health centres, and charity-based health programmes or initiatives) and community settings (e.g. schools, early childcare services, sporting clubs/organisations, and community centres) to target and reduce the prevalence and severity of these behavioural risk factors [ 4 ]. The routine and widespread implementation of EBIs (e.g. targeting physical activity [ 6 ] and alcohol reduction [ 7 ]) to address the prevention of chronic disease in these settings is important, as they provide centralised points of access to reach a large proportion of the population, and they have existing infrastructure to support intervention delivery [ 8 ]. Consequently, there have been substantial investments made by governments internationally in the development and implementation of EBIs to address behavioural risk factors for chronic disease in these settings [ 9 , 10 , 11 ].

There are two distinct outcomes within the field of sustainability: 'sustainability' and 'sustainment'. There are several definitions in the literature for both [ 12 , 13 , 14 ]. For this review, we make a clear distinction between the two. We view sustainment as an outcome, defined by Damschroder et al. (2022) as 'the extent the innovation is in place or being delivered long-term' [ 15 ]. Sustainability is defined by Moore et al. (2017) as 'after a defined period of time, the program, clinical intervention, and/or implementation strategies continue to be delivered and/or individual behaviour change (i.e., clinician, patient) is maintained; the programme and individual behaviour change may evolve or adapt while continuing to produce benefits for individuals/systems' [ 13 ]. Recent research argues that sustainability should be viewed as a dynamic process, with interventions updated according to new evidence and adapted to meet the changing needs of the context and population in which they are delivered [ 12 , 16 ].

Although many EBIs provide significant benefits when initially implemented, the effects of these interventions often diminish once initial implementation support or resources are withdrawn, and consequently, the quality of intervention delivery decreases or is discontinued entirely [ 17 ]. Therefore, long-term positive health impacts are often not realised [ 17 , 18 , 19 , 20 ] or are not achieved equitably across a range of settings and populations [ 21 ]. Further, discontinuation of programmes may also have important implications for wasted investments in time and resources, as well as community member and practitioner mistrust and wariness to engage in future implementation efforts [ 21 ]. Even with multi-level implementation support and significant financial investment, ‘initiative decay’ is common [ 10 ]. For example, a systematic review by Wiltsey Stirman and Kimberly [ 17 ], focusing on the sustainment of public health and clinical interventions, found that out of 125 studies included in the review, the majority of interventions were only partially sustained (i.e. continuation of some, but not all elements of the intervention), following full initial implementation. Overall, less than half of the interventions included in this review were sustained to high levels of fidelity. Another recent systematic review by Herlitz and MacIntyre [ 20 ], which aimed to determine the sustainment of school-based public health interventions, found that of the 18 included interventions, none continued to be delivered in their entirety (i.e. all components) once initial implementation support (start-up funding and/or other resources) had been withdrawn.

Accordingly, policymakers are increasingly concerned with the sustainability of EBIs and highlight the importance of ensuring their sustained delivery long term. To ensure that the positive effects of EBIs continue and their health impact is realised, that the public health investment in initial implementation is not wasted, and that community support, trust, and engagement with such interventions are not lost, it is vital that the implementation of these EBIs be sustained [ 21 ].

What impacts on sustaining EBIs

Understanding the determinants of sustainment is essential to successfully design effective sustainment support strategies and reduce implementation decline [ 22 ]. Theoretical frameworks, such as the Dynamic Sustainability Framework [ 12 ], the Program Sustainability Assessment Tool [ 23 ], and the Integrated Sustainability Framework [ 16 ], identify and categorise a range of factors that may impact the sustainment of EBIs. In general, most frameworks identify sustainability determinants at multiple levels: salient outer contextual factors (e.g. the external funding environment), inner contextual factors (e.g. programme champions in the organisation), processes (e.g. strategic planning), intervention characteristics (e.g. fit with context and population), and implementer characteristics (e.g. staff attitude, motivation, and skills). Further, systematic reviews of determinants of sustaining EBIs in specific clinical and community settings have identified a number of factors perceived by stakeholders, the most frequently identified being the availability of equipment, resources, and facilities; continued executive or leadership support; and staff turnover [ 17 , 19 , 20 , 22 ]. Moreover, there are commonalities in the factors that impact sustainability across both clinical and community settings, such as funding and external partnerships, organisational factors (e.g. alignment with the values, needs, resources, and priorities of the organisation) and support (e.g. the presence of programme champions, leadership support), and practitioner/workforce characteristics (e.g. staff motivation and attitudes) [ 16 ]. The information gathered from these reviews can be used to determine which factors to prioritise when developing strategies to sustain EBI delivery.

The need for effective strategies to support sustainment

If policymakers and practitioners are to address determinants of sustaining EBIs, it is important to determine which strategies are most effective in supporting sustainment. It is also important to note that strategies designed to support sustainment may overlap with strategies designed to support initial implementation. While there is a growing body of evidence regarding the effectiveness of strategies to support the initial implementation of EBIs [ 24 , 25 , 26 ], to our knowledge, only one review has aimed to collate strategies designed specifically to support sustainment [ 27 ]. This review of strategies used within community-based settings found only six studies, which reported the use of nine types of strategies designed to support sustainment. The most commonly reported strategies were funding and/or contracting for EBIs' continued use, and maintenance of workforce skills through continued training, booster training sessions, supervision, and feedback. However, the review was descriptive and, given the low number of studies conducted to date, did not synthesise any data relating to the effectiveness or impact of the strategies designed to support sustainment. Additionally, as this review only focused on community settings, there remains a need to synthesise strategies designed to support sustainment in a broader range of settings. Consequently, the field is bereft of an evidence base of effective strategies for sustainment. Research within sustainability science is rapidly increasing, so there are likely to be numerous new studies that may provide evidence of effective strategies designed to support sustainment. Therefore, the primary aim of this review is to determine the effect of strategies aiming to sustain chronic disease prevention initiatives targeting key health behaviours (i.e. physical inactivity, poor diet, harmful alcohol use, and tobacco smoking) in clinical and community settings.

The secondary aims of this review are as follows:

Examine the effectiveness of strategies designed to support sustainment on relevant health outcomes (including physical activity, healthy eating, obesity prevention, smoking cessation, or harmful alcohol use).

Describe the cost implications of strategies designed to support sustainment.

Identify if there are any unintended/adverse effects of strategies designed to support sustainment on end users.

This systematic review protocol was registered with PROSPERO on 20 August 2022 (Registration ID: CRD42022352333) and is reported in accordance with the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols checklist (PRISMA-P) [ 28 ].

Eligibility criteria

Types of studies.

We will include any randomised study with a control group that aims to assess the effect of a strategy or group of strategies to sustain the implementation of a chronic disease prevention EBI in a clinical or community setting. We will include the following types of studies:

Randomised controlled trial (RCT) (with a parallel control group)

Cluster randomised controlled trial (C-RCT) (with a parallel control group and at least two clusters randomised to each group)

Stepped-wedge trial

Crossover trial (only data prior to crossover will be used in the analysis)

We will restrict the review to this set of designs for pragmatic reasons, given the size of this review. Further, these designs are considered the gold standard for assessing causal effects and so are most appropriate for addressing the research questions. We will only include studies that compare a strategy or group of strategies to improve sustainment of a physical activity, healthy eating, obesity prevention, smoking cessation, or harmful alcohol use EBI (also termed a policy, practice, or programme) with no sustainment intervention or ‘usual practice’. There will be no restriction on the length of the study follow-up period due to the varied definitions of sustainment within the literature. There will also be no restriction on country of origin or language. However, we will exclude studies that are not focused on assessing the effect of a sustainment strategy on the sustained implementation of a policy, practice, or programme as a specific aim.

Types of participants

We will include managers, policy makers, staff, clinicians, or volunteers delivering, or supporting the delivery of, EBIs to patients in clinical settings including hospitals, GP surgeries, community health centres, and charity-based health programmes or initiatives (e.g. charity-run smoking cessation and healthy eating interventions in low socioeconomic countries/areas).

We will also include managers, policy makers, staff, or volunteers delivering, or supporting the delivery of, EBIs to end users in community settings including educational settings (i.e. primary and secondary schools, colleges, and universities), childcare services (long day care, family day care, preschools, and nurseries), elite or nonelite sports organisations and clubs (professional and amateur sports clubs, sporting governing bodies), and community centres (youth centres, community outreach centres).

Types of interventions

We will include any study that employs a strategy or group of strategies with the explicit aim of sustaining the implementation of a smoking cessation, healthy eating, physical activity, alcohol, or obesity prevention policy, practice, or programme by usual staff, clinicians, or volunteers within the setting, for example managers, policy makers, nurses, doctors, teachers, and carers. Studies embedding principles of sustainability into strategies that have a primary aim of increasing adoption or implementation of EBIs will be excluded. Strategies designed to support sustainment will be classified based on the sustainability-explicit Expert Recommendations for Implementing Change (ERIC) glossary [ 29 ]. To be eligible, strategies designed to support sustainment must be distinct from continuous quality improvement (CQI). Distinctions will be made between sustainment and CQI by recognising CQI studies as those focused on making immediate improvements within an individual organisation [ 30 ]. In contrast, sustainability trials are typically designed based on theoretical frameworks or models and focus on making generalisable improvements rather than being restricted to one individual organisation.

Types of outcome measures

Primary outcome measures.

Studies that report on any objective or subjective measure of the sustainment of a health prevention policy, practice, or programme within any of the eligible settings will be included. This may include the ongoing delivery of physical activity, dietary, alcohol, or smoking cessation interventions in line with public health or clinical guidelines.

Sustained implementation must be a measure of usual staff or volunteer delivery of the policy, practice, or programme and not be externally supported by research personnel, except for the purposes of data collection. Individual outcomes such as sustained effects of patient’s participation in a programme (e.g. their participation in a healthy eating programme) are not considered sustainment outcomes.

Secondary outcome measures

Data on secondary outcomes will only be extracted for those studies that first meet the eligibility criteria for the primary review outcomes. For example, if a study aims to sustain the implementation of a physical activity policy or practice but reports on both dietary outcomes and physical activity practices, only data regarding physical activity practices will be extracted.

Secondary outcomes include the following:

Health outcomes where an EBI or initiative is used to target modifiable health behaviour risks related to chronic disease, i.e. any objective or subjective measure of diet (e.g. fruit/vegetable intake), physical activity (e.g. minutes of physical activity during the school day), sedentary behaviour (e.g. daily minutes of sedentary time), weight status (e.g. body mass index (BMI)), alcohol consumption (e.g. number of standard drinks consumed on a typical drinking day), and smoking cessation (e.g. weekly number of cigarettes smoked). A hierarchy will be used to prioritise between multiple measures of the same health outcome.

Cost outcomes relating to estimates of absolute costs, the assessment of the cost-effectiveness, or budget impact of strategies designed to support sustainment.

Any reported adverse effects of strategies designed to support sustainment. This may include negative impact on health outcomes (e.g. an increase in injury rates following physical activity initiatives), disruption to service operation or staff attitudes (e.g. negative impact on staff motivation or cohesion), or negative consequences to other key programmes or practices (e.g. lack of funding for other vital programmes due to reallocation of funding).

Search methods for identification of studies

We will conduct searches for peer-reviewed articles in relevant electronic databases.

Electronic searches

We will conduct searches in the following electronic databases: the Cochrane Central Register of Controlled trials (CENTRAL) (2022) via Cochrane Library; MEDLINE (1946 to November, 2022), PsycINFO (1950 to November, 2022), and Embase (1947 to November, 2022) via OVID; CINAHL (November, 2022) via EBSCO; and SCOPUS (November, 2022) and Education Research Complete (November, 2022) via EBSCO.

Search strategy/search terms

Search terms will be developed based on reviews conducted by Shelton et al. [ 16 ] (maintenance/sustainability) and Wolfenden et al. [ 9 , 10 , 11 ] (physical activity, nutrition, and obesity, implementation, and setting) and will cover the following four concepts:

Sustainability (other terms include maintenance, durability, continuation, institutionalisation, routinization, normalisation, integration, adherence)

Heath behaviours (e.g. physical activity, healthy eating, smoking cessation)

Clinical settings (e.g. hospitals, general practice)

Community settings (e.g. schools, workplaces, community centres)

Data collection and analysis

Selection criteria.

The search results from the electronic databases will be managed and duplicates identified using EndNote. The de-duplicated library will be imported into Covidence software, where article screening will occur. Both title and abstract and full-text screening will be conducted independently by two members of the research team, who will assess study eligibility according to the inclusion criteria. Any conflicts will be resolved by consensus. In instances where the study eligibility cannot be resolved via consensus, a third review author will make the final decision.

Data extraction and management

Two review authors unblinded to author and journal information will independently extract information from the included studies. We will record the information extracted from the included studies in a data extraction form, developed based on the recommendations of the Cochrane Public Health Group Guide for Developing a Cochrane Protocol [ 31 ]. The data extraction form will be piloted before the initiation of the review. Data extraction discrepancies between review authors will be resolved by consensus or by a third review author if required.

We will extract the following information:

Study eligibility as well as the study design, date of publication, EBI, country, the demographic/socioeconomic characteristics of the programme and participants, the number of experimental conditions, setting, overall study duration, and time points measured.

Characteristics of the strategy designed to support sustainment, including strategy description and duration of initial implementation support and length of time since withdrawn (if noted), duration of strategies (i.e. duration for which the sustainment strategy was in place), description of strategies, the theoretical underpinning of the strategy (if noted in the study), process evaluation measures (e.g. acceptability and appropriateness), and information to allow classification against the sustainability-explicit ERIC glossary [ 29 ]. Strategies will be described in line with the sustainability-explicit ERIC glossary [ 29 ].

Primary and secondary outcomes within each study, including the data collection method, validity of measures used, effect size, and measures of outcome variability

Source(s) of research funding and potential conflicts of interest

Assessment of risk of bias in included studies

Overall risk of bias.

Two review authors will assess risk of bias independently for each review outcome using Version 2 of the Cochrane risk-of-bias tool for randomised trials (RoB 2) described by Sterne et al. [ 32 ]. Signalling questions will be used for the following domains: bias arising from the randomisation process, bias due to deviations from intended interventions, bias due to missing outcome data, bias in measurement of the outcome, bias in the selection of the reported result, and overall bias. The response options to the signalling questions will be as follows: ‘yes’, ‘probably yes’, ‘probably no’, ‘no’, and ‘no information’. Once the signalling questions are answered, a risk-of-bias judgement at one of three levels (low risk of bias, some concerns, or high risk of bias) will be assigned to each domain. Stepped-wedge trials will be assessed for risk of bias using RoB 2, with consideration given to time confounding. Crossover trials will be assessed using the RoB 2 extension for crossover designs, and only the initial segment prior to crossover will be used in the analysis. We will use the RoB 2 extension for cluster trials for the assessment of the risk of bias for cluster RCTs, which includes consideration of the following additional domains: recruitment bias, baseline imbalances, loss of clusters, incorrect analysis, contamination, and compatibility with individually randomised trials. An overall risk of bias will be assigned to each study outcome, giving consideration to all of the above domains.
Overall risk of bias for study outcomes will be assessed against set criteria and judged as follows: ‘low risk of bias’ (the trial is judged to be at low risk of bias for all domains), ‘some concerns’ (the trial is judged to raise some concerns in at least one domain, but not to be at high risk of bias for any domain), and ‘high risk of bias’ (the trial is judged to be at high risk of bias in at least one domain, or to have some concerns for multiple domains in a way that substantially lowers confidence in the result) [ 31 ]. The risk of bias of the included studies will be documented in a ‘risk-of-bias’ table.
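The overall-judgement rule described above can be sketched as a small helper. This is an illustrative sketch only (the function name and the `substantial_concerns` flag are our own inventions), since elevating multiple ‘some concerns’ domains to ‘high risk of bias’ remains a reviewer judgement rather than a mechanical rule.

```python
def overall_rob(domain_judgements: list[str], substantial_concerns: bool = False) -> str:
    """Derive an overall RoB 2 judgement from per-domain judgements.

    Each entry is 'low', 'some concerns', or 'high'. The substantial_concerns
    flag encodes the reviewer's judgement that multiple 'some concerns'
    domains substantially lower confidence in the result.
    """
    if "high" in domain_judgements:
        return "high"  # any high-risk domain makes the outcome high risk
    n_concerns = domain_judgements.count("some concerns")
    if n_concerns > 1 and substantial_concerns:
        return "high"  # multiple concerning domains, by reviewer judgement
    if n_concerns >= 1:
        return "some concerns"
    return "low"  # low risk in every domain
```

For example, a trial judged ‘low’ in all five domains is low risk overall, while a single ‘high’ domain dominates every other judgement.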

Synthesis methods

Study characteristics will be grouped by type of study, participants (i.e. clinical or community), and strategies designed to support sustainment. Strategies designed to support sustainment will be classified using the sustainability-explicit ERIC glossary [ 29 ], a taxonomy which categorises and defines strategies designed to support sustainment. It is an adapted version of the original ERIC [ 33 ], extended with a specific focus on sustainability, and will allow us to code the strategies in this review against the standardised definitions included in the glossary. Deductive and inductive coding approaches will be used, and any strategies that do not fit within the sustainability-explicit ERIC glossary will be added. The effect of interest will be intention to treat, and we will prioritise differences between groups at follow-up rather than differences between groups in the change from baseline. Primary outcomes will be reported using odds ratios, and any primary outcomes measured as means and standard deviations will be transformed into odds ratios [ 31 ]. For secondary outcomes, the most appropriate effect type will be used: odds ratios for dichotomous outcomes and means for continuous outcomes. Random-effects meta-analyses will be undertaken to estimate a pooled treatment effect overall for the primary outcome and by health behaviour for secondary outcomes (i.e. physical activity, alcohol consumption, dietary outcomes, and tobacco use). If we are unable to conduct a meta-analysis due to insufficient or incomplete data (e.g. missing standard deviations) that cannot be estimated from the data reported by authors, we will synthesise results using vote counting based on the direction of effect [ 31 ], with such methods reported in compliance with the Synthesis Without Meta-analysis (SWiM) guidelines [ 34 ].
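For transforming outcomes reported as means and standard deviations into odds ratios, a standard approximation (described in the Cochrane Handbook) converts a standardised mean difference to a log odds ratio via ln(OR) = SMD × π/√3. A minimal sketch, with hypothetical function names:

```python
import math

def smd_to_log_or(smd: float) -> float:
    """Approximate conversion from a standardised mean difference (SMD)
    to a log odds ratio: ln(OR) = SMD * pi / sqrt(3)."""
    return smd * math.pi / math.sqrt(3)

def smd_to_or(smd: float) -> float:
    """Odds ratio implied by an SMD under the logistic-distribution assumption."""
    return math.exp(smd_to_log_or(smd))
```

An SMD of 0 maps to an odds ratio of 1; the approximation rests on the assumption that the underlying continuous outcome is approximately logistic in both groups.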
For trials with multiple follow-up periods, we will use data from the final follow-up period reported. For studies that report multiple results for primary and secondary outcomes, we will prioritise the most objectively measured. Results from cluster- and individual-level RCTs will be combined. The standard error from cluster trials that do not adjust for clustering will be adjusted for unit of analysis errors following recommended procedures outlined by the Cochrane Handbook [ 31 ]. Trials reporting multiple, relevant intervention arms will be combined into a single group following methods outlined in the Cochrane Handbook [ 31 ].
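The unit-of-analysis correction for cluster trials that did not adjust for clustering follows the Cochrane Handbook design effect, 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the intra-class correlation coefficient; the naive standard error is inflated by the square root of the design effect. A minimal sketch (function names are ours):

```python
import math

def design_effect(avg_cluster_size: float, icc: float) -> float:
    """Design effect for a cluster trial: 1 + (m - 1) * ICC."""
    return 1.0 + (avg_cluster_size - 1.0) * icc

def cluster_adjusted_se(naive_se: float, avg_cluster_size: float, icc: float) -> float:
    """Inflate a standard error that ignored clustering by sqrt(design effect)."""
    return naive_se * math.sqrt(design_effect(avg_cluster_size, icc))
```

For example, an average cluster size of 21 with an ICC of 0.05 gives a design effect of 2, so the standard error is inflated by √2.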

Sensitivity analyses

Where there are sufficient studies, a sensitivity analysis removing studies with high risk of bias will be undertaken. If imputation of intra-class correlation coefficient (ICC) values is required to adjust for clustered trials, a sensitivity analysis assessing different ICC values will also be conducted.

Assessing heterogeneity and subgroup analyses

Statistical heterogeneity will be assessed by reviewing the distribution of studies on the forest plots and assessing the I² statistic. Pre-specified subgroup analyses will be undertaken to explore possible causes of statistical heterogeneity and will include the time period classified as sustainment and the type of setting (i.e. clinical or community). Differences between subgroups will be statistically compared following procedures recommended by the Cochrane Handbook; within-subgroup differences will not be interpreted.
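The I² statistic quantifies the percentage of variation across studies attributable to heterogeneity rather than chance: I² = max(0, (Q − df)/Q) × 100%, where Q is Cochran's Q and df is the number of studies minus one. A minimal sketch of the computation:

```python
def i_squared(q: float, df: int) -> float:
    """I^2 (Higgins & Thompson): percentage of total variation across
    studies due to heterogeneity. q is Cochran's Q; df is the number
    of studies minus one. Negative values are truncated to zero."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0
```

For example, Q = 20 with 10 degrees of freedom gives I² = 50%, while Q below its degrees of freedom gives I² = 0%.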

This systematic review will synthesise current evidence on the effect of strategies designed to support the sustainment of chronic disease prevention policies, practices, and programmes. It will be the first systematic review to determine the effect of strategies designed to support the sustainment of EBIs in both clinical and community settings. The findings of this review will directly inform the design of future sustainability and implementation trials and will help inform the development of a sustainability practice guide for public health practitioners. The main limitation of this review protocol is the restriction to randomised designs only. In focusing exclusively on these designs, we may overlook valuable insights from alternative study designs, such as quasi-experimental and qualitative methods, which offer a more nuanced understanding of real-world constraints and pressures. Future reviews may wish to broaden the included study types, which could capture important information on the effect of sustainment strategies. Further, while our review will use data from included studies’ final follow-up period, the inclusion of longitudinal data could offer valuable insights into the temporal dynamics of sustainment strategy effectiveness and provide a more nuanced understanding of how interventions unfold over time. Therefore, we recommend that future reviews consider incorporating multiple follow-up times.

Availability of data and materials

Data and materials relating to this review are available from the corresponding author on reasonable request.

Abbreviations

EBI: Evidence-based intervention

PRISMA: Preferred Reporting Items for Systematic reviews and Meta-Analyses

CQI: Continuous quality improvement

BMI: Body mass index

RCT: Randomised controlled trial

C-RCT: Cluster randomised controlled trial

ICC: Intra-class correlation coefficient

Schmidt H. Chronic disease prevention and health promotion. Cases spanning the globe. Public health ethics. Philadelphia: University of Pennsylvania; 2016. p. 137–76.

Alwan A. Global Status Report on Noncommunicable Diseases 2010. World Health Organization; 2011.

Jacob CM, Baird J, Barker M, Cooper C, Hanson M. The importance of a life-course approach to health: chronic disease risk from preconception through adolescence and adulthood. White paper. 2017.

World Health Organization. Global Status Report on Noncommunicable Diseases 2014. World Health Organization; 2014.

Roth GA, Abate D, Abate KH, Abay SM, Abbafati C, Abbasi N, et al. Global, regional, and national age-sex-specific mortality for 282 causes of death in 195 countries and territories, 1980–2017: a systematic analysis for the Global Burden of Disease Study 2017. The Lancet. 2018;392(10159):1736–88.


Kahn EB, Ramsey LT, Brownson RC, Heath GW, Howze EH, Powell KE, et al. The effectiveness of interventions to increase physical activity: a systematic review. Am J Prev Med. 2002;22(4):73–107.


Kaner EF, Beyer FR, Muirhead C, Campbell F, Pienaar ED, Bertholet N, et al. Effectiveness of brief alcohol interventions in primary care populations. Cochrane Database Syst Rev. 2018;2(2):CD004148.


McFadyen T, Chai LK, Wyse R, Kingsland M, Yoong SL, Clinton-McHarg T, et al. Strategies to improve the implementation of policies, practices or programmes in sporting organisations targeting poor diet, physical inactivity, obesity, risky alcohol use or tobacco use: a systematic review. BMJ open. 2018;8(9):e019151.


Wolfenden L, Nathan NK, Sutherland R, Yoong SL, Hodder RK, Wyse RJ, et al. Strategies for enhancing the implementation of school‐based policies or practices targeting risk factors for chronic disease. Cochrane Database Syst Rev. 2017;11(11):CD011677.

Wolfenden L, Barnes C, Jones J, Finch M, Wyse RJ, Kingsland M, et al. Strategies to improve the implementation of healthy eating, physical activity and obesity prevention policies, practices or programmes within childcare services. Cochrane Database Syst Rev. 2020;2(2):CD011779.

Wolfenden L, Goldman S, Stacey FG, Grady A, Kingsland M, Williams CM, et al. Strategies to improve the implementation of workplace‐based policies or practices targeting tobacco, alcohol, diet, physical activity and obesity. Cochrane Database Syst Rev. 2018;11(11):CD012439.

Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8(1):1–11.

Moore JE, Mascarenhas A, Bain J, Straus SE. Developing a comprehensive definition of sustainability. Implement Sci. 2017;12(1):1–8.

Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8(1):1–11.

Damschroder LJ, Reardon CM, Widerquist MAO, Lowery J. The updated Consolidated Framework for Implementation Research based on user feedback. Implement Sci. 2022;17(1):1–16.

Shelton RC, Cooper BR, Stirman SW. The sustainability of evidence-based interventions and practices in public health and health care. Annual review of public health. New York: Columbia University; 2018.

Wiltsey Stirman S, Kimberly J, Cook N, Calloway A, Castro F, Charns M. The sustainability of new programs and innovations: a review of the empirical literature and recommendations for future research. Implement Sci. 2012;7(1):1–19.

Scheirer MA, Dearing JW. An agenda for research on the sustainability of public health programs. Am J Public Health. 2011;101(11):2059–67.

Cassar S, Salmon J, Timperio A, Naylor P-J, Van Nassau F, Contardo Ayala AM, et al. Adoption, implementation and sustainability of school-based physical activity and sedentary behaviour interventions in real-world settings: a systematic review. Int J Behav Nutr Phys Act. 2019;16(1):1–13.

Herlitz L, MacIntyre H, Osborn T, Bonell C. The sustainability of public health interventions in schools: a systematic review. Implement Sci. 2020;15(1):1–28.

Weiner BJ, Lewis CC, Sherr K. Practical implementation science: moving evidence into action. Springer Publishing Company; 2022.

Shoesmith A, Hall A, Wolfenden L, Shelton RC, Powell BJ, Brown H, et al. Barriers and facilitators influencing the sustainment of health behaviour interventions in schools and childcare services: a systematic review. Implement Sci. 2021;16(1):1–20.

Luke DA, Calhoun A, Robichaux CB, Elliott MB, Moreland-Russell S. The Program Sustainability Assessment Tool: a new instrument for public health programs. Prev Chronic Dis. 2014;11:130184.

Nathan N, Hall A, McCarthy N, Sutherland R, Wiggers J, Bauman AE, et al. Multi-strategy intervention increases school implementation and maintenance of a mandatory physical activity policy: outcomes of a cluster randomised controlled trial. Brit J Sports Med. 2022;56(7):385–93.

Jones S, Sloan D, Evans HE, Williams S. Improving the implementation of NICE public health workplace guidance: an evaluation of the effectiveness of action-planning workshops in NHS trusts in England. J Eval Clin Pract. 2015;21(4):567–71.

Seward K, Wolfenden L, Finch M, Wiggers J, Wyse R, Jones J, et al. Improving the implementation of nutrition guidelines in childcare centres improves child dietary intake: findings of a randomised trial of an implementation intervention. Public Health Nutr. 2018;21(3):607–17.

Hailemariam M, Bustos T, Montgomery B, Barajas R, Evans LB, Drahota A. Evidence-based intervention sustainability strategies: a systematic review. Implement Sci. 2019;14(1):1–12.

Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1):1–9.

Nathan N, Powell BJ, Shelton RC, Laur CV, Wolfenden L, Hailemariam M, et al. Do the Expert Recommendations for Implementing Change (ERIC) strategies adequately address sustainment? Front Health Serv. 2022;2:905909.

Joyce BL, Harmon MJ, Johnson RH, Hicks V, Brown-Schott N, Pilling LB. Using a quality improvement model to enhance community/public health nursing education. Public Health Nurs. 2019;36(6):847–55.

Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al. Cochrane Handbook for Systematic Reviews of Interventions: John Wiley & Sons; 2019.

Sterne JA, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366:l4898.

Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10(1):1–14.

Campbell M, McKenzie JE, Sowden A, Katikireddi SV, Brennan SE, et al. Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline. BMJ. 2020;368:l6890.


Acknowledgements

Not applicable.

Funding

This project is funded through the National Health and Medical Research Council (NHMRC) as part of Dr. Nicole Nathan’s Medical Research Future Fund (MRFF) Investigator Grant (APP1194785) and was supported by work undertaken as part of an NHMRC Centre for Research Excellence National Centre of Implementation Science (NCOIS) grant (APP1153479). Dr. Nicole Nathan is supported by an MRFF Investigator Grant (APP1194785); LW is supported by an NHMRC Investigator Grant (G1901360). Edward Riley-Gibson is supported by a University of Newcastle PhD scholarship. Adam Shoesmith is supported by a University of Newcastle PhD scholarship (ref. 3145402). Byron Powell is supported in part through grants from the Agency for Healthcare Research and Quality (R13HS025632) and the US National Institutes of Health (R01CA262325, P50CA19006, and R25MH080916). Ramzi G. Salloum is supported by the University of Florida Clinical and Translational Science Institute, which is supported in part by the NIH National Center for Advancing Translational Sciences (UL1TR001427). The funders had no role in the study design, conduct of the study, analysis, or dissemination of findings.

Author information

Authors and affiliations.

School of Medicine and Public Health, The University of Newcastle, Newcastle, NSW, Australia

Edward Riley-Gibson, Alix Hall, Adam Shoesmith, Luke Wolfenden, Emma Doherty, Emma Pollock, Debbie Booth, Melanie Kingsland, Cassandra Lane, Rachel Sutherland & Nicole Nathan

Priority Research Centre for Health Behaviour, The University of Newcastle, Newcastle, NSW, Australia

Hunter Medical Research Institute, New Lambton Heights, NSW, Australia

Hunter New England Local Health District, Hunter New England Population Health, Newcastle, NSW, 2287, Australia

Department of Sociomedical Sciences, Mailman School of Public Health, Columbia University, New York, NY, USA

Rachel C. Shelton

Division of Public Health, College of Human Medicine, Michigan State University, Flint, MI, USA

Maji Hailemariam

Department of Health Outcomes & Biomedical Informatics, University of Florida College of Medicine, Gainesville, FL, USA

Ramzi G. Salloum

Women’s College Hospital Institute for Health System Solutions and Virtual Care, 76 Grenville Street, Toronto, ON, M5S 1B2, Canada

Institute of Health Policy, Management and Evaluation, University of Toronto, Health Sciences Building, 155 College Street, Suite 425, Toronto, ON, M5T 3M6, Canada

Center for Mental Health Services Research, Brown School, Washington University in St. Louis, St. Louis, MO, USA

Byron J. Powell

Center for Dissemination and Implementation, Institute for Public Health, Washington University in St. Louis, St. Louis, MO, USA

Division of Infectious Diseases, John T. Milliken Department of Medicine, School of Medicine, Washington University in St. Louis, St. Louis, MO, USA


Contributions

ERG led the development of this protocol. NN, AH, and LW led the initial conceptual planning of the review and made significant contributions to the methodology included in this protocol. AS, RCS, ED, EP, RGS, CL, BP, MK, CL, MH, and RS provided extensive guidance and input into the background, structure, and methodology of this protocol. DB conducted the database searches, and ERG, LW, NN, and AH made significant contributions to the search strategy.

Corresponding author

Correspondence to Edward Riley-Gibson .

Ethics declarations

Ethics approval and consent to participate, consent for publication, competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Riley-Gibson, E., Hall, A., Shoesmith, A. et al. A systematic review to determine the effect of strategies to sustain chronic disease prevention interventions in clinical and community settings: study protocol. Syst Rev 13 , 129 (2024). https://doi.org/10.1186/s13643-024-02541-0


Received : 01 December 2022

Accepted : 23 April 2024

Published : 09 May 2024

DOI : https://doi.org/10.1186/s13643-024-02541-0


  • Sustainment
  • Sustainability
  • Chronic disease prevention

Systematic Reviews

ISSN: 2046-4053



Article: ESSD, 16, 2297–2316, 2024 (Volume 16, issue 5)
A 30 m annual cropland dataset of China from 1986 to 2021

Shengbiao Wu

  • Final revised paper (published on 06 May 2024)
  • Supplement to the final revised paper
  • Preprint (discussion started on 01 Jun 2023)
  • Supplement to the preprint

Interactive discussion

Status: closed

The paper “A 30 m annual cropland dataset of China from 1986 to 2021” provides a remarkable attempt at creating national knowledge on the spatial and temporal patterns of cropland in China. In general, it is a well-written and useful study and I enjoyed reading it. The following comments are my suggestions for ensuring its messages are clear and grounded in the results.

1. Definition of cropland. Can I say that you excluded all cash crops, such as tea gardens, citrus, etc. (in addition to sugarcane), all of which are widely distributed in Southern China? If so, it may be necessary to clearly mention this point in your manuscript.

2. Intercomparison. I am happy that CACD was well validated against some published land cover products. However, it seems all selected reference datasets are single- or multi-epoch maps. How about the agreement with some cropland dynamics products, e.g. https://glad.umd.edu/dataset/croplands? In this way we can directly assess the accuracy of changed cropland, including both cropland expansion and loss.

We appreciate your precious time and constructive comments, which are greatly helpful in improving our manuscript. We have carefully addressed all raised concerns and revised the manuscript accordingly. Please see our point-by-point responses to your specific comments in the attachment.

General comments

Long-term and accurate cropland monitoring is quite important for ensuring food security and environmental sustainability. This study developed an annual cropland dataset of China (CACD) from 1986 to 2021 by using a novel, cost-effective annual cropland mapping framework that integrated time-series Landsat imagery. The authors have done a good job in training and validation dataset selection and annual cropland mapping. The accuracy assessment indicates that CACD has relatively high reliability. Comparisons between CACD and other cropland datasets show its spatial improvements. Overall, I think CACD is a good annual cropland extent dataset with fine resolution. However, I still have some concerns about the methods and results analysis, which are provided in the specific comments.

Specific comments

1. Lines 61-64. You listed two crop type datasets (i.e., NASS-CDL and the European Union 10 m crop type map) and introduced the research gap, but your dataset also does not include crop type information, which leaves readers a little disappointed. Meanwhile, I cannot agree with "To date, no fine resolution annual cropland dataset of China exists yet". In your literature review, Yang and Huang (2021) developed the 30 m annual land cover dataset of China (CLUD) from 1990 to 2019. There is no essential difference between cropland in this study and cropland from CLUD, because your dataset also doesn't include crop type information.

2. Line 103. "The aim of this study is to propose a novel paradigm for large-scale fine-resolution cropland dynamics monitoring." I think the paradigm is not very innovative. A study titled "Forest management in southern China generates short term extensive carbon sequestration" applied a similar framework to analyze forest dynamics. The two studies use the same methods: RF-based probability prediction of cropland or forest, and LandTrendr-based segmentation.

3. Lines 116-117. "Cropland in this study is defined as a piece of land of 0.09 ha in minimum (minimum width of 30 m) that is sowed/planted and harvestable at least once within the 12 months after the sowing or planting date." The definition of cropland in this study differs from that in previous studies. The vegetation indices (e.g., NDVI, EVI) of cropland samples in the training and validation dataset could reflect the planting or harvest signals. Thus, statistics of vegetation index variations during the growth period of the samples could improve reliability, rather than depending on visual interpretation alone. Additionally, how do you exclude sugarcane plantations and cassava crops from the training and validation samples? What is the difference in spectral signal between sugarcane plantations/cassava crops and other crops?

4. Lines 146-147. As you said, “The threshold value was set following recommendations by Ghorbanian et al. (2020)”. But I didn’t find a threshold table to show the difference among the nine agricultural zones. In each subregion, ~800 training samples were used. So, how many cropland and non-cropland samples are there in each subregion?

5. Lines 176-207. I think these two steps are important for the final cropland layer. The authors give two examples (Figure 2 and Figure S2) to illustrate how the LandTrendr algorithm works. I think more examples should be given to prove the robustness of the cropland mapping method. For example, how cropland probabilities and vegetation indices changed when cropland was converted to urban/grassland/forest, and grassland/forest was reclaimed to cropland.

6. Lines 217-218. A spatial-temporal consistency check approach proposed by Li et al. (2015) was applied to refine the annual cropland maps. I don't think this consistency check algorithm can be applied to cropland without any modification. In Li et al. (2015), there is a very important assumption that "…the transition from urban to other land cover types is not likely and should be avoided… (Section 2.3.2 in Li et al. (2015))". However, the conversion rule of cropland differs from that of urban land. More description should be given if there are any improvements to this algorithm.

7. Lines 264-265. Why do western and southeastern coastal areas have relatively low accuracy (F1 score)? Some explanation should be given. Is it because the cropland in southeastern coastal areas is more fragmented?
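The regional accuracy question above reduces to computing F1 per zone from validation confusion counts. A minimal sketch of that calculation; the zone names and counts below are invented for illustration and are not taken from the paper:

```python
# Sketch: per-region F1 for a binary cropland map, from confusion counts.
# Zone names and counts are hypothetical, for illustration only.

def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical validation counts per agricultural zone:
# (true positives, false positives, false negatives)
regions = {
    "northeast_plain": (950, 30, 20),
    "southeast_coast": (700, 180, 160),  # fragmented fields -> more errors
}

for name, (tp, fp, fn) in regions.items():
    print(f"{name}: F1 = {f1_score(tp, fp, fn):.3f}")
```

Fragmented landscapes inflate both false positives and false negatives at 30 m resolution, which is why a lower F1 in the southeastern coastal zone would be plausible.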

8. Line 316. "Additionally, cropland areas in some inland provinces (such as Guizhou) remained rather stable." The area of Guizhou province should be rechecked. As far as I know, Guizhou is the core area of ecological restoration projects in the karst region, where cropland was converted into forest (Yue et al., 2020, Landscape Ecology).

9. Lines 328-330. “In the Ar Horqin Banner of Chifeng city, Inner Mongolia, large-scale croplands were developed for pasture reclamation and cultivation during the past decades”. It should be noted that pasture is a type of grassland rather than crops.

"Similarly, vast agricultural land parcels sprang up in Aksu, Xinjiang for cotton cultivation." The newly developed dataset doesn't include crop type information; how did you reach this conclusion? Some studies about cotton expansion in Xinjiang Province should be cited to support your conclusion.

10. Lines 336-354. In this part, the authors give much information about cropland abandonment in China. The newly developed dataset shows cropland loss in the Loess Plateau and the Beijing–Tianjin Sand Source Control Project zone (Figure 11). However, there is only limited analysis of the causes of cropland abandonment or cropland loss. For example, cropland loss is mainly driven by the "Grain for Green" ecological project in Shanxi and Inner Mongolia. Cropland abandonment is also affected by factors such as lack of labor and low income (Zhang et al., 2019, Acta Geographica Sinica).

11. Figure 3. The cropland and non-cropland samples could be symbolized with different colors.

12. Figure 9. The title of the legend is a little weird. “Loss area” should be “Area change” or “Cropland area change”. Additionally, this figure only shows the net change of cropland area. When comparing the total area of cropland gain (increase) and loss during the period, the spatial shift of cropland will be more significant.



NATURE INDEX

01 May 2024

Plagiarism in peer-review reports could be the ‘tip of the iceberg’

Jackson Ryan

Jackson Ryan is a freelance science journalist in Sydney, Australia.


Time pressures and a lack of confidence could be prompting reviewers to plagiarize text in their reports. Credit: Thomas Reimer/Zoonar via Alamy

Mikołaj Piniewski is a researcher to whom PhD students and collaborators turn when they need to revise or refine a manuscript. The hydrologist, at the Warsaw University of Life Sciences, has a keen eye for problems in text — a skill that came in handy last year when he encountered some suspicious writing in peer-review reports of his own paper.

Last May, when Piniewski was reading the peer-review feedback that he and his co-authors had received for a manuscript they’d submitted to an environmental-science journal, alarm bells started ringing in his head. Comments by two of the three reviewers were vague and lacked substance, so Piniewski decided to run a Google search, looking at specific phrases and quotes the reviewers had used.

To his surprise, he found the comments were identical to those that were already available on the Internet, in multiple open-access review reports from publishers such as MDPI and PLOS. “I was speechless,” says Piniewski. The revelation caused him to go back to another manuscript that he had submitted a few months earlier, and dig out the peer-review reports he received for that. He found more plagiarized text. After e-mailing several collaborators, he assembled a team to dig deeper.


The team published the results of its investigation in Scientometrics in February [1], examining dozens of cases of apparent plagiarism in peer-review reports and identifying the use of identical phrases across reports prepared for 19 journals. The team discovered exact quotes duplicated across 50 publications, saying that the findings are just "the tip of the iceberg" when it comes to misconduct in the peer-review system.

Dorothy Bishop, a former neuroscientist at the University of Oxford, UK, who has turned her attention to investigating research misconduct, was “favourably impressed” by the team’s analysis. “I felt the way they approached it was quite useful and might be a guide for other people trying to pin this stuff down,” she says.

Peer review under review

Piniewski and his colleagues conducted three analyses. First, they uploaded five peer-review reports from the two manuscripts that his laboratory had submitted to a rudimentary online plagiarism-detection tool. The reports had 44–100% similarity to previously published online content. Links were provided to the sources in which duplications were found.

The researchers drilled down further. They broke one of the suspicious peer-review reports down to fragments of one to three sentences each and searched for them on Google. In seconds, the search engine returned a number of hits: the exact phrases appeared in 22 open peer-review reports, published between 2021 and 2023.

The final analysis provided the most worrying results. They took a single quote — 43 words long and featuring multiple language errors, including incorrect capitalization — and pasted it into Google. The search revealed that the quote, or variants of it, had been used in 50 peer-review reports.
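The search procedure described in these analyses, splitting a report into short fragments and checking each for an exact match against published open reviews, can be sketched in a few lines. This is a toy illustration with invented sample texts, not the team's actual tooling:

```python
import re

def fragments(text: str, size: int = 2) -> list[str]:
    """Split text into overlapping runs of `size` consecutive sentences."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    return [" ".join(sentences[i:i + size]) for i in range(len(sentences) - size + 1)]

def normalize(s: str) -> str:
    """Lowercase and collapse whitespace so matching ignores trivial edits."""
    return re.sub(r"\s+", " ", s.lower()).strip()

def find_duplicates(report: str, corpus: list[str]) -> list[str]:
    """Return fragments of `report` that appear verbatim in any corpus document."""
    corpus_norm = [normalize(doc) for doc in corpus]
    return [f for f in fragments(report)
            if any(normalize(f) in doc for doc in corpus_norm)]

# Invented example texts:
published = ["The manuscript is well organized. However, the methods need more detail."]
suspect = ("The manuscript is well organized. However, the methods need "
           "more detail. Figures are clear.")
print(find_duplicates(suspect, published))
```

A real check would search a large corpus (or a search engine's index) rather than a Python list, but the exact-phrase matching is the same idea.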

Predominantly, these reports were from journals published by MDPI, PLOS and Elsevier, and the team found that the amount of duplication increased year-on-year between 2021 and 2023. Whether this is because of an increase in the number of open-access peer-review reports during this time or an indication of a growing problem is unclear — but Piniewski thinks that it could be a little bit of both.

Why would a peer reviewer use plagiarized text in their report? The team says that some might be attempting to save time, whereas others could be motivated by a lack of confidence in their writing ability, for example, if they aren't fluent in English.

The team notes that there are instances that might not represent misconduct. “A tolerable rephrasing of your own words from a different review? I think that’s fine,” says Piniewski. “But I imagine that most of these cases we found are actually something else.”

The source of the problem

Duplication and manipulation of peer-review reports is not a new phenomenon. “I think it’s now increasingly recognized that the manipulation of the peer-review process, which was recognized around 2010, was probably an indication of paper mills operating at that point,” says Jennifer Byrne, director of biobanking at New South Wales Health in Sydney, Australia, who also studies research integrity in scientific literature.

Paper mills — organizations that churn out fake research papers and sell authorships to turn a profit — have been known to tamper with reviews to push manuscripts through to publication, says Byrne.


However, when Bishop looked at Piniewski’s case, she could not find any overt evidence of paper-mill activity. Rather, she suspects that journal editors might be involved in cases of peer-review-report duplication and suggests studying the track records of those who’ve allowed inadequate or plagiarized reports to proliferate.

Piniewski’s team is also concerned about the rise of duplications as generative artificial intelligence (AI) becomes easier to access. Although his team didn’t look for signs of AI use, its ability to quickly ingest and rephrase large swathes of text is seen as an emerging issue.

A preprint posted in March [2] showed evidence of researchers using AI chatbots to assist with peer review, identifying specific adjectives that could be hallmarks of AI-written text in peer-review reports.

Bishop isn’t as concerned as Piniewski about AI-generated reports, saying that it’s easy to distinguish between AI-generated text and legitimate reviewer commentary. “The beautiful thing about peer review,” she says, is that it is “one thing you couldn’t do a credible job with AI”.

Preventing plagiarism

Publishers seem to be taking action. Bethany Baker, a media-relations manager at PLOS, who is based in Cambridge, UK, told Nature Index that the PLOS Publication Ethics team “is investigating the concerns raised in the Scientometrics article about potential plagiarism in peer reviews”.


An Elsevier representative told Nature Index that the publisher “can confirm that this matter has been brought to our attention and we are conducting an investigation”.

In a statement, the MDPI Research Integrity and Publication Ethics Team said that it has been made aware of potential misconduct by reviewers in its journals and is “actively addressing and investigating this issue”. It did not confirm whether this was related to the Scientometrics article.

One proposed solution to the problem is ensuring that all submitted reviews are checked using plagiarism-detection software. In 2022, exploratory work by Adam Day, a data scientist at Sage Publications, based in Thousand Oaks, California, identified duplicated text in peer-review reports that might be suggestive of paper-mill activity. Day offered a similar solution of using anti-plagiarism software, such as Turnitin.
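A common way such software scores similarity is by comparing sets of short word n-grams (shingles) between documents. A minimal sketch of that idea, not Turnitin's actual algorithm:

```python
# Sketch: shingle-based similarity between two texts (the idea behind
# many duplicate-detection tools; not any vendor's actual algorithm).

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Set of overlapping n-word shingles, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str, n: int = 5) -> float:
    """Jaccard similarity of the two texts' shingle sets (0 disjoint, 1 identical)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# A review duplicated verbatim scores 1.0; unrelated text scores near 0.
print(jaccard("this paper is sound and well written overall",
              "this paper is sound and well written overall"))
```

In practice a checker would flag any report whose score against an indexed corpus exceeds some threshold, then surface the matching sources for an editor to inspect.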

Piniewski expects the problem to get worse in the coming years, but he hasn’t received any unusual peer-review reports since those that originally sparked his research. Still, he says that he’s now even more vigilant. “If something unusual occurs, I will spot it.”

doi: https://doi.org/10.1038/d41586-024-01312-0

1. Piniewski, M., Jarić, I., Koutsoyiannis, D. & Kundzewicz, Z. W. Scientometrics https://doi.org/10.1007/s11192-024-04960-1 (2024).

2. Liang, W. et al. Preprint at arXiv https://doi.org/10.48550/arXiv.2403.07183 (2024).
