oalib
Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
The Relationship of Previous Training and Experience of Journal Peer Reviewers to Subsequent Review Quality
Michael L. Callaham, John Tercier
PLOS Medicine, 2007, DOI: 10.1371/journal.pmed.0040040
Abstract: Background: Peer review is considered crucial to the selection and publication of quality science, but very little is known about the previous experience and training that might identify high-quality peer reviewers. The reviewer selection processes of most journals, and thus the qualifications of their reviewers, are ill defined. More objective selection of peer reviewers might improve the journal peer review process and thus the quality of published science.
Methods and Findings: 306 experienced reviewers (71% of all those associated with a specialty journal) completed a survey of past training and experiences postulated to improve peer review skills. Reviewers performed 2,856 reviews of 1,484 separate manuscripts during a four-year study period, all prospectively rated on a standardized quality scale by editors. Multivariable analysis revealed that most variables, including academic rank, formal training in critical appraisal or statistics, and status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital (versus another teaching environment) and relative youth (under ten years of experience after finishing training). Being on an editorial board and doing formal grant (study section) review were each predictors for only one of our two comparisons. However, the predictive power of all variables was weak.
Conclusions: Our study confirms that there are no easily identifiable types of formal training or experience that predict reviewer performance. Skill in scientific peer review may be as ill defined and hard to impart as "common sense." Without a better understanding of those skills, it seems unlikely that journals and editors will be successful in systematically improving their selection of reviewers. This inability to predict performance makes it imperative that all but the smallest journals implement review rating systems to routinely monitor the quality of their reviews (and thus the quality of the science they publish).
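For readers unfamiliar with the method, the sketch below shows the general shape of the multivariable analysis the abstract describes: regressing editor-assigned review quality on reviewer characteristics. The predictor names and the synthetic data are assumptions for illustration only; this is not the study's dataset or fitted model.

```python
# A minimal sketch of a multivariable analysis of review quality.
# Predictors and data are synthetic; only the overall approach mirrors
# the study described above.
import numpy as np

rng = np.random.default_rng(0)
n = 306  # number of reviewers surveyed in the study

# Hypothetical binary predictors: university-operated hospital,
# <10 years since training, formal statistics training, editorial board.
X = rng.integers(0, 2, size=(n, 4)).astype(float)

# Synthetic quality scores in which only the first two predictors carry
# a small signal, mirroring the abstract's finding of weak effects.
quality = 3.0 + 0.3 * X[:, 0] + 0.25 * X[:, 1] + rng.normal(0.0, 0.8, n)

design = np.column_stack([np.ones(n), X])  # prepend an intercept column
coef, *_ = np.linalg.lstsq(design, quality, rcond=None)

labels = ["intercept", "university hospital", "<10 yrs post-training",
          "statistics training", "editorial board"]
for name, b in zip(labels, coef):
    print(f"{name:>22}: {b:+.3f}")
```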
Peer reviewers 2009-2011
Anna Pasolini
Altre Modernità, 2011
Abstract: Peer reviewers 2009-2011
Thanking our peer reviewers
Alan Storey
Molecular Cancer, 2013, DOI: 10.1186/1476-4598-12-10
Abstract: Contributing reviewers. As 2013 commences, I would like to take a moment to recognize the peer reviewers who made the previous year possible. Listed below are the people who reviewed for Molecular Cancer last year, all generous individuals who donated their time to assessing and improving our authors' submissions. Their combined efforts have been invaluable to the editorial staff in maintaining the continued success of the journal in the Open Access forum. The editors of Molecular Cancer would like to thank all the reviewers who contributed to the journal in Volume 11 (2012), taking time out of their busy schedules to participate in the review process, often as volunteers. Without your critical insights, hard work, and support, we would not be able to do what we do.
An Algorithm to Determine Peer-Reviewers
Marko A. Rodriguez, Johan Bollen
Computer Science, 2006, DOI: 10.1145/1458082.1458127
Abstract: The peer-review process is the most widely accepted certification mechanism for officially accepting the written results of researchers within the scientific community. An essential component of peer review is the identification of competent referees to review a submitted manuscript. This article presents an algorithm to automatically determine the most appropriate reviewers for a manuscript by way of a co-authorship network data structure and a relative-rank particle-swarm algorithm. This approach is novel in that it is not limited to a pre-selected set of referees, is computationally efficient, requires no human intervention, and, in some instances, can automatically identify conflict-of-interest situations. A useful application of this algorithm would be in open-commentary peer-review systems, because it provides a weighting for each referee with respect to their expertise in the domain of a manuscript. The algorithm is validated using referee bid data from the 2005 Joint Conference on Digital Libraries.
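To make the idea concrete, here is a minimal sketch of energy diffusion over a co-authorship network, in the spirit of the particle-swarm approach the abstract describes: authors of the manuscript's cited works are seeded with energy, which spreads along weighted co-authorship edges; high-energy nodes become referee candidates, and direct co-authors of the seeds are excluded as potential conflicts of interest. The function, decay rate, and step count are illustrative assumptions, not the authors' published algorithm or parameters.

```python
# Hedged sketch: spreading activation on a co-authorship network.
from collections import defaultdict

def rank_referees(coauthor_edges, seed_authors, decay=0.85, steps=4):
    """Rank referee candidates by the energy diffused from seed authors.

    coauthor_edges: iterable of (author_a, author_b, weight) tuples,
        where weight counts papers the pair co-authored.
    seed_authors: authors of the works the manuscript cites; each
        receives one initial unit of energy.
    """
    graph = defaultdict(dict)
    for a, b, w in coauthor_edges:
        graph[a][b] = graph[a].get(b, 0) + w
        graph[b][a] = graph[b].get(a, 0) + w

    energy = {a: 1.0 for a in seed_authors}
    accumulated = defaultdict(float)
    for _ in range(steps):
        spread = defaultdict(float)
        for node, e in energy.items():
            total = sum(graph[node].values()) or 1.0
            for neighbor, w in graph[node].items():
                # Each node passes a decayed share of its energy to its
                # co-authors, proportional to edge weight.
                spread[neighbor] += decay * e * (w / total)
        for node, e in spread.items():
            accumulated[node] += e
        energy = spread

    # Seeds and their direct co-authors are treated as potential
    # conflicts of interest and removed from the ranking.
    conflicts = set(seed_authors)
    for s in seed_authors:
        conflicts.update(graph[s])
    candidates = [(a, round(e, 4)) for a, e in accumulated.items()
                  if a not in conflicts]
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)

# Tiny usage example with hypothetical authors.
edges = [("ana", "bo", 3), ("bo", "cy", 2), ("cy", "dee", 4),
         ("ana", "eve", 1), ("eve", "fay", 2)]
print(rank_referees(edges, seed_authors=["ana"]))
```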
Editorial Peer Reviewers' Recommendations at a General Medical Journal: Are They Reliable and Do Editors Care?
Richard L. Kravitz, Peter Franks, Mitchell D. Feldman, Martha Gerrity, Cindy Byrne, William M. Tierney
PLOS ONE, 2010, DOI: 10.1371/journal.pone.0010072
Abstract: Editorial peer review is universally used but little studied. We examined the relationship between external reviewers' recommendations and the editorial outcomes of manuscripts undergoing external peer review at the Journal of General Internal Medicine (JGIM).
Accelerating the pace of discovery by changing the peer review algorithm
Stefano Allesina
Computer Science, 2009
Abstract: The number of scientific publications is constantly rising, increasing the strain on the review process. The number of submissions is higher still, as each manuscript is often reviewed several times before publication. To face the deluge of submissions, top journals reject a considerable fraction of manuscripts without review, potentially turning away manuscripts with merit. The situation is frustrating for authors, reviewers, and editors alike. Recently, several editors wrote about the "tragedy of the reviewer commons", advocating urgent corrections to the system. Almost every scientist has ideas on how to improve the system, but it is very difficult, if not impossible, to perform experiments to test which measures would be most effective. Surprisingly, relatively few attempts have been made to model peer review. Here I implement a simulation framework in which ideas on peer review can be quantitatively tested. I incorporate authors, reviewers, manuscripts, and journals into an agent-based model, and a peer review system emerges from their interactions. As a proof of concept, I contrast an implementation of the current system, in which authors decide the journal for their submissions, with a system in which journals bid on manuscripts for publication. I show that, all other things being equal, this latter system solves most of the problems currently associated with the peer review process. Manuscript evaluation is faster, authors publish more and in better journals, and reviewers' effort is optimally utilized. However, more work is required from editors. This modeling framework can be used to test other solutions for peer review, leading the way to an improvement in how science is disseminated.
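As a flavor of what such an agent-based comparison might look like, here is a toy sketch contrasting sequential, author-driven submission with journal bidding, measured in rounds of review per manuscript. The quality distribution, journal standards, and acceptance rule are invented for illustration and are far simpler than the model the abstract describes.

```python
# Toy comparison: author-driven submission vs. journals bidding.
# All numbers are illustrative assumptions, not Allesina's published model.
import random

random.seed(1)
JOURNAL_STANDARDS = [0.9, 0.7, 0.5, 0.3, 0.0]  # most selective first
manuscripts = [random.random() for _ in range(10_000)]  # latent quality

def author_driven_rounds(quality):
    """Author tries journals from most to least selective; every attempt
    costs one full round of review before rejection or acceptance."""
    for rounds, standard in enumerate(JOURNAL_STANDARDS, start=1):
        if quality >= standard:
            return rounds
    return len(JOURNAL_STANDARDS)

def bidding_rounds(quality):
    """All journals whose standard the manuscript meets bid at once and
    the most selective bidder wins, so a single round suffices."""
    return 1

mean_author = sum(map(author_driven_rounds, manuscripts)) / len(manuscripts)
mean_bidding = sum(map(bidding_rounds, manuscripts)) / len(manuscripts)
print(f"mean review rounds, author-driven:   {mean_author:.2f}")
print(f"mean review rounds, journal bidding: {mean_bidding:.2f}")
```

Under these assumptions the bidding system always resolves in one round, while sequential submission averages several, which is the qualitative speed-up the abstract reports.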
Thank you to Virology Journal’s peer reviewers in 2012
Linfa Wang
Virology Journal, 2013, DOI: 10.1186/1743-422X-10-44
Abstract: Contributing reviewers. The editors of Virology Journal would like to thank all our reviewers who contributed to the journal in Volume 9 (2012). The success of any scientific journal depends on an effective and rigorous peer review process, and Virology Journal could not operate without your contribution. We look forward to your continued support of the journal, whether as an invited reviewer or a contributing author, in the years to come.
Reliability of the Peer-Review Process for Adverse Event Rating
Alan J. Forster, Monica Taljaard, Carol Bennett, Carl van Walraven
PLOS ONE, 2012, DOI: 10.1371/journal.pone.0041239
Abstract: Background: Adverse events are poor patient outcomes caused by medical care. Their identification requires peer review of poor outcomes, which may be unreliable. Combining physician ratings might improve the accuracy of adverse event classification.
Objective: To evaluate the variation in peer-reviewer ratings of adverse outcomes; determine the impact of this variation on estimates of reviewer accuracy; and determine how many reviewers must judge that an adverse event occurred to ensure that the true probability of an adverse event exceeds 50%, 75%, or 95%.
Methods: Thirty physicians rated 319 case reports giving details of poor patient outcomes following hospital discharge. They rated whether medical management caused the outcome using a six-point ordinal scale. We conducted latent class analyses to estimate the prevalence of adverse events as well as the sensitivity and specificity of each reviewer. We used this model and Bayesian calculations to determine the probability that an adverse event truly occurred for each patient as a function of the number of positive ratings.
Results: The overall median score on the six-point ordinal scale was 3 (IQR 2-4), but individual rater median scores ranged from a minimum of 1 (in four reviewers) to a maximum of 5. The overall percentage of cases rated as an adverse event was 39.7% (3,798/9,570). The median kappa for all pairwise combinations of the 30 reviewers was 0.26 (IQR 0.16-0.42; min = -0.07, max = 0.62). Reviewer sensitivity and specificity for adverse event classification ranged from 0.06 to 0.93 and 0.50 to 0.98, respectively. The estimated prevalence of adverse events using a latent class model with a common sensitivity and specificity for all reviewers (0.64 and 0.83, respectively) was 47.6%. For a patient to have a 95% chance of truly having had an adverse event, at least 3 of 3 reviewers must deem the outcome an adverse event.
Conclusion: Adverse event classification is unreliable. To be certain that a case truly represents an adverse event, there needs to be agreement among multiple reviewers.
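The abstract's final calculation can be reproduced directly from the numbers it reports. The sketch below applies Bayes' rule with the common sensitivity (0.64), specificity (0.83), and estimated prevalence (47.6%), under the simplifying assumption that reviewers rate independently; for 3 of 3 positive ratings it yields a posterior of roughly 0.98, consistent with the stated 95% threshold.

```python
# Worked version of the abstract's Bayesian calculation: the probability
# that an adverse event (AE) truly occurred given k positive ratings out
# of n. Parameters are the values reported in the abstract; reviewer
# independence is an assumption.
from math import comb

def posterior_ae(k, n, sens=0.64, spec=0.83, prev=0.476):
    """P(true adverse event | k of n reviewers rate the case positive)."""
    like_ae = comb(n, k) * sens**k * (1 - sens)**(n - k)   # P(k pos | AE)
    like_no = comb(n, k) * (1 - spec)**k * spec**(n - k)   # P(k pos | no AE)
    return prev * like_ae / (prev * like_ae + (1 - prev) * like_no)

for k in range(4):
    print(f"{k} of 3 positive -> P(adverse event) = {posterior_ae(k, 3):.3f}")
# 3 of 3 positive gives ~0.98, matching the abstract's conclusion that
# unanimous agreement among three reviewers is needed to exceed 95%.
```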
A Position on Effective Peer Reviews–Rationale, Qualification, Process, and Policy
Rayford Vaughn
Journal of Systemics, Cybernetics and Informatics, 2007
Abstract: This paper argues for the value of the conference peer review process, given certain constraints: a proper process, qualified reviewers, a sound review policy, and well-motivated reviewers. The paper also addresses how the lack of proper criteria can be harmful in a peer review process. The peer review process for journals is not addressed, as it is a universally accepted practice in academia. An analogy to software engineering code review processes is briefly presented.
Peer Review of Grant Applications: Criteria Used and Qualitative Study of Reviewer Practices
Hendy Abdoul, Christophe Perrey, Philippe Amiel, Florence Tubach, Serge Gottot, Isabelle Durand-Zaleski, Corinne Alberti
PLOS ONE, 2012, DOI: 10.1371/journal.pone.0046054
Abstract: Background: Peer review of grant applications has been criticized as lacking reliability. Studies showing poor agreement among reviewers supported this possibility but usually focused on reviewers' scores and failed to investigate the reasons for disagreement. Here, our goal was to determine how reviewers rate applications, by investigating reviewer practices and grant assessment criteria.
Methods and Findings: We first collected and analyzed a convenience sample of French and international calls for proposals and assessment guidelines, from which we created an overall typology of assessment criteria comprising nine domains: relevance to the call for proposals, usefulness, originality, innovativeness, methodology, feasibility, funding, ethical aspects, and writing of the grant application. We then performed a qualitative study of reviewer practices, particularly regarding the use of assessment criteria, among reviewers of the French Academic Hospital Research Grant Agencies (Programmes Hospitaliers de Recherche Clinique, PHRCs). Semi-structured interviews and observation sessions were conducted. Both the time spent assessing each grant application and the assessment methods varied across reviewers. The assessment criteria recommended by the PHRCs were listed by all reviewers as frequently evaluated and useful. However, use of the PHRC criteria was subjective and varied across reviewers. Some reviewers gave the same weight to each assessment criterion, whereas others considered originality the most important criterion (12/34), followed by methodology (10/34) and feasibility (4/34). Conceivably, this variability might adversely affect the reliability of the review process, and studies evaluating this hypothesis would be of interest.
Conclusions: Variability across reviewers may result in mistrust among grant applicants about the review process. Consequently, ensuring transparency is of the utmost importance. Consistency in the review process could also be improved by providing common definitions for each assessment criterion and uniform requirements for grant application submissions. Further research is needed to assess the feasibility and acceptability of these measures.