
Search Results: 1 - 10 of 101 matches for "Ludo Waltman"
All listed articles are free for downloading (OA Articles)
An empirical analysis of the use of alphabetical authorship in scientific publishing
Ludo Waltman
Computer Science, 2012
Abstract: There are different ways in which the authors of a scientific publication can determine the order in which their names are listed. Sometimes author names are simply listed alphabetically. In other cases, authorship order is determined based on the contribution authors have made to a publication. Contribution-based authorship can facilitate proper credit assignment, for instance by giving the most credit to the first author. In the case of alphabetical authorship, nothing can be inferred about the relative contributions made by the different authors of a publication. In this paper, we present an empirical analysis of the use of alphabetical authorship in scientific publishing. Our analysis covers all fields of science. We find that the use of alphabetical authorship is declining over time. In 2011, the authors of fewer than 4% of all publications intentionally chose to list their names alphabetically. The use of alphabetical authorship is most common in mathematics, economics (including finance), and high energy physics. Alphabetical authorship is also relatively more common for publications with either a small or a large number of authors.
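The key methodological step here is separating intentional from incidental alphabetical ordering: a publication whose k authors are listed in a random order happens to be alphabetical with probability 1/k!. The Python sketch below estimates the intentional share by subtracting this chance baseline from the observed share (the data are toy values and the estimator is illustrative; the paper's exact procedure may differ):

```python
from math import factorial

def is_alphabetical(surnames):
    # True if surnames appear in non-descending alphabetical order.
    return all(a.lower() <= b.lower() for a, b in zip(surnames, surnames[1:]))

def intentional_alphabetical_share(publications):
    # Observed share of alphabetical publications minus the share that
    # would be alphabetical by chance (1/k! for a publication with k authors).
    observed = sum(is_alphabetical(p) for p in publications)
    expected = sum(1 / factorial(len(p)) for p in publications)
    return (observed - expected) / len(publications)

# Toy data: each publication is a list of author surnames.
pubs = [["Adams", "Baker", "Clark"], ["Bornmann", "Waltman"], ["Lee", "Kim"]]
print(intentional_alphabetical_share(pubs))  # (2 - 7/6) / 3, about 0.28
```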
A review of the literature on citation impact indicators
Ludo Waltman
Computer Science, 2015
Abstract: Citation impact indicators nowadays play an important role in research evaluation, and consequently these indicators have received a lot of attention in the bibliometric and scientometric literature. This paper provides an in-depth review of the literature on citation impact indicators. First, an overview is given of the literature on bibliographic databases that can be used to calculate citation impact indicators (Web of Science, Scopus, and Google Scholar). Next, selected topics in the literature on citation impact indicators are reviewed in detail. The first topic is the selection of publications and citations to be included in the calculation of citation impact indicators. The second topic is the normalization of citation impact indicators, in particular normalization for field differences. Counting methods for dealing with co-authored publications are the third topic, and citation impact indicators for journals are the last topic. The paper concludes by offering some recommendations for future research.
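Of the topics reviewed, normalization for field differences is the most formula-driven. A common classification-based approach divides each publication's citation count by the mean citation count of publications in the same field and year, then averages these ratios. A minimal sketch of that idea (the reference values and data below are hypothetical):

```python
def normalized_scores(unit_pubs, reference_means):
    # unit_pubs: (field, year, citations) tuples for the unit under evaluation;
    # reference_means: {(field, year): mean citations of all publications
    # worldwide in that field and year}.
    return [c / reference_means[(f, y)] for f, y, c in unit_pubs]

ref = {("math", 2010): 2.0, ("biology", 2010): 10.0}
unit = [("math", 2010, 4), ("biology", 2010, 10)]
scores = normalized_scores(unit, ref)
print(sum(scores) / len(scores))  # (2.0 + 1.0) / 2 = 1.5
```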
The detection of "hot regions" in the geography of science: A visualization approach by using density maps
Lutz Bornmann, Ludo Waltman
Computer Science, 2011
Abstract: Spatial scientometrics has attracted considerable attention in the recent past. The visualization methods (density maps) presented in this paper allow regions of scientific excellence around the world to be identified using freely available computer programs. Based on Scopus and Web of Science data, field-specific and field-overlapping scientific excellence can be identified in broader regions (worldwide or for a specific continent) where high-quality papers (highly cited papers or papers published in Nature or Science) were published. We use a geographic information system to produce our density maps and also briefly discuss the use of Google Earth.
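The core computation behind such a density map is a kernel density estimate over the geographic coordinates of publications. A minimal sketch using scipy (the coordinates are hypothetical; the paper's GIS-based workflow would also project coordinates rather than treating longitude and latitude as planar):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical (longitude, latitude) coordinates of highly cited papers.
lon = np.array([4.3, 4.5, -0.1, -71.1, -71.0, 139.7])
lat = np.array([52.0, 52.2, 51.5, 42.4, 42.3, 35.7])

kde = gaussian_kde(np.vstack([lon, lat]))

# Evaluate the density on a regular grid; rendering this grid as a heat
# map over a base map reveals the "hot regions".
gx, gy = np.mgrid[-180:180:200j, -90:90:100j]
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
```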
On the calculation of percentile-based bibliometric indicators
Ludo Waltman, Michael Schreiber
Computer Science, 2012, DOI: 10.1002/asi.22775
Abstract: A percentile-based bibliometric indicator is an indicator that values publications based on their position within the citation distribution of their field. The most straightforward percentile-based indicator is the proportion of frequently cited publications, for instance the proportion of publications that belong to the top 10% most frequently cited of their field. Recently, more complex percentile-based indicators have been proposed. A difficulty in the calculation of percentile-based indicators is caused by the discrete nature of citation distributions combined with the presence of many publications with the same number of citations. We introduce an approach to calculating percentile-based indicators that deals with this difficulty in a more satisfactory way than earlier approaches suggested in the literature. We show in a formal mathematical framework that our approach leads to indicators that do not suffer from biases in favor of or against particular fields of science.
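The tie problem can be made concrete: if all publications tied at the top-10% citation boundary were counted as top-10%, more than 10% of publications would qualify; if none were, fewer would. One resolution, in the spirit of the paper's approach, is to score boundary publications fractionally so that exactly 10% of the total publication mass is awarded (a sketch; the paper develops this in a formal framework):

```python
from collections import Counter

def top_share_scores(citations, share=0.10):
    # Score each publication's membership in the top `share`: 1 if clearly
    # above the citation threshold, 0 if below, and a fraction for
    # publications tied exactly at the boundary.
    target = share * len(citations)      # publication mass to award
    counts = Counter(citations)
    score, mass_above = {}, 0.0
    for c in sorted(counts, reverse=True):
        tied = counts[c]
        if mass_above + tied <= target:
            score[c] = 1.0                              # clearly in the top
        elif mass_above < target:
            score[c] = (target - mass_above) / tied     # tied at the boundary
        else:
            score[c] = 0.0                              # below the threshold
        mass_above += tied
    return [score[c] for c in citations]

cites = [10, 10, 5, 5, 5, 5, 2, 2, 2, 0]
print(sum(top_share_scores(cites)))  # 1.0: exactly 10% of 10 publications
```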
F1000 recommendations as a new data source for research evaluation: A comparison with citations
Ludo Waltman, Rodrigo Costas
Computer Science, 2013
Abstract: F1000 is a post-publication peer review service for biological and medical research. F1000 aims to recommend important publications in the biomedical literature, and from this perspective F1000 could be an interesting tool for research evaluation. By linking the complete database of F1000 recommendations to the Web of Science bibliographic database, we are able to make a comprehensive comparison between F1000 recommendations and citations. We find that about 2% of the publications in the biomedical literature receive at least one F1000 recommendation. Recommended publications on average receive 1.30 recommendations, and over 90% of the recommendations are given within half a year after a publication has appeared. There turns out to be a clear correlation between F1000 recommendations and citations. However, the correlation is relatively weak, at least weaker than the correlation between journal impact and citations. More research is needed to identify the main reasons for differences between recommendations and citations in assessing the impact of publications.
Large-Scale Analysis of the Accuracy of the Journal Classification Systems of Web of Science and Scopus
Qi Wang, Ludo Waltman
Computer Science, 2015
Abstract: Journal classification systems play an important role in bibliometric analyses. The two most important bibliographic databases, Web of Science and Scopus, each provide a journal classification system. However, no study has systematically investigated the accuracy of these classification systems. To examine and compare the accuracy of journal classification systems, we define two criteria on the basis of direct citation relations between journals and categories. We use Criterion I to select journals that have weak connections with their assigned categories, and we use Criterion II to identify journals that are not assigned to categories with which they have strong connections. If a journal satisfies either of the two criteria, we conclude that its assignment to categories may be questionable. Accordingly, we identify all journals with questionable classifications in Web of Science and Scopus. Furthermore, we perform a more in-depth analysis for the field of Library and Information Science to assess whether our proposed criteria are appropriate and whether they yield meaningful results. It turns out that, according to our citation-based criteria, Web of Science performs significantly better than Scopus in terms of the accuracy of its journal classification system.
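The two criteria can be expressed directly in terms of journal-to-category citation shares. The sketch below flags a journal if it sends only a small share of its citations to an assigned category (Criterion I) or a large share to a category it is not assigned to (Criterion II); the thresholds and data structures are illustrative, not the paper's actual values:

```python
def questionable_assignments(cite_share, assigned, weak=0.05, strong=0.30):
    # cite_share[journal][category]: fraction of the journal's citations
    # going to that category; assigned[journal]: set of assigned categories.
    flagged = {}
    for journal, shares in cite_share.items():
        crit1 = {c for c in assigned[journal] if shares.get(c, 0.0) < weak}
        crit2 = {c for c, s in shares.items()
                 if s >= strong and c not in assigned[journal]}
        if crit1 or crit2:
            flagged[journal] = {"weak_assigned": crit1, "strong_missing": crit2}
    return flagged

shares = {"J1": {"LIS": 0.02, "CS": 0.60}}
print(questionable_assignments(shares, {"J1": {"LIS"}}))
# J1 is weakly connected to its assigned category LIS (Criterion I) and
# strongly connected to the unassigned category CS (Criterion II).
```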
Text mining and visualization using VOSviewer
Nees Jan van Eck, Ludo Waltman
Computer Science, 2011
Abstract: VOSviewer is a computer program for creating, visualizing, and exploring bibliometric maps of science. In this report, the new text mining functionality of VOSviewer is presented. A number of examples are given of applications in which VOSviewer is used for analyzing large amounts of text data.
The inconsistency of the h-index
Ludo Waltman, Nees Jan van Eck
Computer Science, 2011
Abstract: The h-index is a popular bibliometric indicator for assessing individual scientists. We criticize the h-index from a theoretical point of view. We argue that for the purpose of measuring the overall scientific impact of a scientist (or some other unit of analysis) the h-index behaves in a counterintuitive way. In certain cases, the mechanism used by the h-index to aggregate publication and citation statistics into a single number leads to inconsistencies in the way in which scientists are ranked. Our conclusion is that the h-index cannot be considered an appropriate indicator of a scientist's overall scientific impact. Based on recent theoretical insights, we discuss what kind of indicators can be used as an alternative to the h-index. We pay special attention to the highly cited publications indicator. This indicator has a lot in common with the h-index, but unlike the h-index it does not produce inconsistent rankings.
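The kind of ranking inconsistency the abstract refers to can be illustrated with a small numerical example (the numbers here are hypothetical, not taken from the paper): two scientists receive identical additions to their publication records, yet their h-index ranking reverses.

```python
def h_index(citations):
    # Largest h such that h publications have at least h citations each.
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

x = [10] * 10    # scientist X: 10 papers with 10 citations each
y = [20] * 6     # scientist Y: 6 papers with 20 citations each
print(h_index(x), h_index(y))                # 10 6  -> X ranks above Y

gain = [20] * 6  # both add the same 6 papers with 20 citations each
print(h_index(x + gain), h_index(y + gain))  # 10 12 -> Y now ranks above X
```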
A new methodology for constructing a publication-level classification system of science
Ludo Waltman, Nees Jan van Eck
Computer Science, 2012
Abstract: Classifying journals or publications into research areas is an essential element of many bibliometric analyses. Classification usually takes place at the level of journals, where the Web of Science subject categories are the most popular classification system. However, journal-level classification systems have two important limitations: They offer only a limited amount of detail, and they have difficulties with multidisciplinary journals. To avoid these limitations, we introduce a new methodology for constructing classification systems at the level of individual publications. In the proposed methodology, publications are clustered into research areas based on citation relations. The methodology is able to deal with very large numbers of publications. We present an application in which a classification system is produced that includes almost ten million publications. Based on an extensive analysis of this classification system, we discuss the strengths and the limitations of the proposed methodology. Important strengths are the transparency and relative simplicity of the methodology and its fairly modest computing and memory requirements. The main limitation of the methodology is its exclusive reliance on direct citation relations between publications. The accuracy of the methodology can probably be increased by also taking into account other types of relations, for instance based on bibliographic coupling.
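As a rough illustration of clustering on direct citation relations, the sketch below runs simple label propagation on an undirected citation graph. This is a stand-in for the paper's methodology, which optimizes a quality function at a much larger scale; only the input (direct citation links between publications) is the same:

```python
import random

def label_propagation(citation_pairs, iterations=10, seed=0):
    # citation_pairs: (citing, cited) publication pairs, treated as an
    # undirected graph. Each publication repeatedly adopts the most common
    # cluster label among its neighbors.
    rng = random.Random(seed)
    neighbors = {}
    for a, b in citation_pairs:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    label = {node: node for node in neighbors}
    nodes = list(neighbors)
    for _ in range(iterations):
        rng.shuffle(nodes)
        for node in nodes:
            counts = {}
            for nb in neighbors[node]:
                counts[label[nb]] = counts.get(label[nb], 0) + 1
            label[node] = max(counts, key=counts.get)
    return label

pairs = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6)]
print(label_propagation(pairs))  # two clusters: {1, 2, 3} and {4, 5, 6}
```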
Source normalized indicators of citation impact: An overview of different approaches and an empirical comparison
Ludo Waltman, Nees Jan van Eck
Computer Science, 2012
Abstract: Different scientific fields have different citation practices. Citation-based bibliometric indicators need to normalize for such differences between fields in order to allow for meaningful between-field comparisons of citation impact. Traditionally, normalization for field differences has been done based on a field classification system. In this approach, each publication belongs to one or more fields and the citation impact of a publication is calculated relative to the other publications in the same field. Recently, the idea of source normalization was introduced, which offers an alternative approach to normalizing for field differences. In this approach, normalization is done by looking at the referencing behavior of citing publications or citing journals. In this paper, we provide an overview of a number of source normalization approaches and empirically compare these approaches with a traditional normalization approach based on a field classification system. We also pay attention to the issue of selecting the journals to be included in a normalization for field differences. Our analysis indicates a number of problems with the traditional classification-system-based normalization approach, suggesting that source normalization approaches may yield more accurate results.
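The simplest source normalization variant weights each citation by the length of the reference list of the citing publication, so that citations originating from reference-dense fields count for less. A minimal sketch of this citing-side idea (one of several variants compared in the paper; refinements such as restricting to active references within a citation window are omitted):

```python
def source_normalized_counts(citation_pairs, reference_counts):
    # citation_pairs: (citing, cited) pairs; reference_counts: number of
    # references in each citing publication. Each citation is weighted by
    # 1 / (references of the citing publication).
    impact = {}
    for citing, cited in citation_pairs:
        impact[cited] = impact.get(cited, 0.0) + 1.0 / reference_counts[citing]
    return impact

refs = {"A": 50, "B": 10}         # A cites many sources, B cites few
cites = [("A", "P"), ("B", "P")]
print(source_normalized_counts(cites, refs))  # {'P': 0.02 + 0.1 = 0.12}
```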