Search Results: 1 - 10 of 100 matches for " "
All listed articles are free for downloading (OA Articles)
Page 1 /100
Semantic web service discovery approaches: overview and limitations  [PDF]
Ibrahim El Bitar, Fatima-Zahra Belouadha, Ounsa Roudies
Computer Science , 2014,
Abstract: Semantic Web service discovery has received massive attention in recent years. With the increasing number of Web services available on the web, finding a particular service has become very difficult, especially as clients' needs evolve. In this context, various approaches for discovering semantic Web services have been proposed. In this paper, we compare these approaches in order to assess their maturity and their fit with current domain requirements. The outcome of this comparison helps us identify the mechanisms that constitute the strengths of the existing approaches, and thereafter serves as a guideline for determining the basis of a discovery approach better adapted to the current context of Web services.
The observational roots of reference of the semantic web  [PDF]
Simon Scheider, Krzysztof Janowicz, Benjamin Adams
Computer Science , 2012,
Abstract: Shared reference is an essential aspect of meaning. It is also indispensable for the semantic web, since it enables weaving the global graph, i.e., it allows different users to contribute to an identical referent. For example, an essential kind of referent is a geographic place, to which users may contribute observations. We argue for a human-centric, operational approach to reference, based on the corresponding human competences. These competences encompass perceptual, cognitive, and technical ones, and together they allow humans to refer inter-subjectively to a phenomenon in their environment. The technology stack of the semantic web should be extended with such operations. This would allow establishing new kinds of observation-based reference systems that help constrain and integrate the semantic web bottom-up.
Which percentile-based approach should be preferred for calculating normalized citation impact values? An empirical comparison of five approaches including a newly developed citation-rank approach (P100)  [PDF]
Lutz Bornmann, Loet Leydesdorff, Jian Wang
Computer Science , 2013,
Abstract: Percentile-based approaches have been proposed as a non-parametric alternative to parametric central-tendency statistics for normalizing observed citation counts. Percentiles are based on an ordered set of citation counts in a reference set, whereby the fraction of papers at or below the citation count of a focal paper is used as an indicator of its relative citation impact in the set. In this study, we pursue two related objectives: (1) although different percentile-based approaches have been developed, none so far satisfies criteria such as scaling the percentile ranks from zero (all other papers perform better) to 100 (all other papers perform worse) and resolving tied citation ranks unambiguously. We introduce a new citation-rank approach with these properties, named P100. (2) We compare the reliability of P100 empirically with other percentile-based approaches, such as those developed by the SCImago group, the Centre for Science and Technology Studies (CWTS), and Thomson Reuters (InCites), using all papers published in 1980 in the Thomson Reuters Web of Science (WoS). How accurately can the different approaches predict long-term citation impact in 2010 (year 31) using citation impact measured in earlier time windows (years 1 to 30)? The comparison shows that the method used by InCites overestimates citation impact (because it uses the highest percentile rank when papers are assigned to more than one subject category), whereas the SCImago indicator shows higher power in predicting long-term citation impact on the basis of citation rates in early years. Since the results show a disadvantage for P100 in this predictive ability relative to the other approaches, there is still room for further improvement.
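The scaling property the abstract describes (0 when all other papers perform better, 100 when all perform worse, with tied citation counts sharing one rank) can be sketched in a few lines. This is an illustrative reconstruction from the abstract's stated criteria, not the authors' exact P100 formula:

```python
# Illustrative sketch of a citation-rank scheme with the properties the
# abstract lists for P100: ranks on a 0..100 scale over the distinct
# citation counts of a reference set, with ties sharing one rank.

def percentile_ranks(citations):
    """Map each citation count to a rank from 0 (all other papers
    perform better) to 100 (all other papers perform worse)."""
    distinct = sorted(set(citations))          # tied counts collapse to one value
    top = len(distinct) - 1                    # index of the highest-cited value
    if top == 0:
        return {c: 100.0 for c in distinct}    # degenerate set: a single value
    return {c: 100.0 * i / top for i, c in enumerate(distinct)}

ranks = percentile_ranks([0, 2, 2, 5, 10])
# The two papers with 2 citations share one rank; 0 citations maps to
# rank 0.0 and 10 citations to rank 100.0.
```

Because ranks are assigned over distinct citation values rather than papers, ties are resolved unambiguously, which is the problem the abstract says earlier percentile approaches left open.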
Essay on Semantics Definition in MDE - An Instrumented Approach for Model Verification  [cached]
Benoît Combemale, Xavier Crégut, Pierre-Loïc Garoche, Xavier Thirioux
Journal of Software , 2009, DOI: 10.4304/jsw.4.9.943-958
Abstract: In the context of MDE (Model-Driven Engineering), our objective is to define the semantics of a given DSL (Domain-Specific Language), either to simulate its models or to check properties on them using model-checking techniques. In both cases, the purpose is to formalize the DSL semantics as it is known by the DSL designer, though often only informally. After several experiments defining operational semantics on the one hand and translational semantics on the other, we discuss both approaches and identify the cases in which each kind of semantics is appropriate. As a second step, we introduce a pragmatic, instrumented approach to defining a translational semantics and validating it against a reference operational semantics expressed by the DSL designer. We apply this approach to the XSPEM process description language in order to verify process models.
Semantic Web Services and Its Approaches
Tauqeer Ahmad Usmani, Prof. Durgesh Pant, Prof. Kunwar Singh Vaisla
International Journal on Computer Science and Engineering , 2011,
Abstract: OWL-S, IRS, and WSMF are prominent frameworks underpinning Semantic Web Services. IRS-III is the first WSMO-compliant implemented framework to support Semantic Web Services. IRS-III extends the previous version, IRS-II, and supports the WSMO ontology within the IRS-III server, browser, and API. IRS-III provides support for OWL-S service descriptions by importing the descriptions into IRS-III. This paper describes different approaches to Semantic Web Services.
Performance and Comparative Analysis of the Two Contrary Approaches for Detecting Near Duplicate Web Documents in Web Crawling  [cached]
VA Narayana, P Premchand, A Govardhan
International Journal of Electrical and Computer Engineering , 2012, DOI: 10.11591/ijece.v2i6.1746
Abstract: Recent years have witnessed the drastic development of the World Wide Web (WWW). Information is accessible at one's fingertips anytime, anywhere through the massive web repository. The performance and reliability of web engines thus face huge problems due to the enormous amount of web data. The voluminous number of web documents has created problems for search engines, making search results less relevant to the user. In addition, the presence of duplicate and near-duplicate web documents creates an additional overhead for search engines, critically affecting their performance. The demand for integrating data from heterogeneous sources also leads to the problem of near-duplicate web pages. The detection of near-duplicate documents within a collection has recently become an area of great interest. In this research, we present an efficient approach for detecting near-duplicate web pages in web crawling, which uses keywords and a distance measure. In addition, G.S. Manku et al.'s fingerprint-based approach, proposed in 2007, is considered one of the "state-of-the-art" algorithms for finding near-duplicate web pages. We implemented both approaches and conducted an extensive comparative study between our similarity-score-based approach and G.S. Manku et al.'s fingerprint-based approach. We analyzed the results in terms of time complexity, space complexity, memory usage, and the confusion-matrix parameters. Taking these performance factors into account, the comparative study clearly shows our approach to be the better (less complex) of the two.
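The fingerprint-based baseline the abstract mentions compares documents by the Hamming distance between compact fingerprints. A minimal sketch of that idea follows; it is a simplified illustration in the spirit of such fingerprinting, not the paper's own algorithm (production implementations use weighted shingles and carefully chosen hash functions):

```python
# Minimal simhash-style fingerprinting sketch: per-token hash bits vote on
# each fingerprint bit, so documents sharing most tokens end up with
# fingerprints at a small Hamming distance.

import hashlib

def simhash(tokens, bits=64):
    """Combine per-token hashes into a single bits-wide fingerprint."""
    votes = [0] * bits
    for tok in tokens:
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if votes[i] > 0)

def hamming(a, b):
    """Number of bit positions where two fingerprints differ."""
    return bin(a ^ b).count("1")

d1 = "the quick brown fox jumps over the lazy dog".split()
d2 = "the quick brown fox jumped over the lazy dog".split()
dist = hamming(simhash(d1), simhash(d2))  # typically small: one token differs
```

Note the fingerprint is a sum of per-token votes, so it is independent of token order; near-duplicate detection then reduces to finding fingerprint pairs within a small Hamming-distance threshold.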
A Survey on Web Service Discovery Approaches  [PDF]
Debajyoti Mukhopadhyay, Archana Chougule
Computer Science , 2012,
Abstract: Web services play an important role in e-business and e-commerce applications. As web service applications are interoperable and can work on any platform, large-scale distributed systems can be developed easily using web services. Finding the most suitable web service from a vast collection is crucial for the successful execution of applications. The traditional web service discovery approach is a keyword-based search using UDDI. Various other approaches for discovering web services are also available; some are syntax-based while others are semantic-based. Building a discovery system that works automatically is a further concern of these approaches. As the approaches differ, one solution may be better than another depending on requirements, and selecting a specific service discovery system is a hard task. In this paper, we give an overview of the different approaches to web service discovery described in the literature and present a survey of how they differ from one another.
Reference Management meets Web 2.0
Martin Fenner
Cellular Therapy and Transplantation , 2010,
Abstract: Reference management software has been used by researchers for more than 20 years to find, store, and organize references, and to write scholarly papers. Recently developed collaborative web-based tools have resulted in a number of interesting new features, and in a number of new reference managers. These developments are changing which reference managers we use, and how we use them.
Web Log Clustering Approaches – A Survey
G. Sudhamathy, Dr. C. Jothi Venkateswaran
International Journal on Computer Science and Engineering , 2011,
Abstract: As more organizations rely on the Internet and the World Wide Web to conduct business, the proposed strategies and techniques for market analysis need to be revisited in this context. We therefore present a survey of the most recent work in the field of Web usage mining, focusing on three different approaches to web log clustering. Clustering analysis is a widely used data mining technique that partitions a set of data objects into clusters, where each object shares high similarity with the other objects in its cluster but is quite dissimilar to objects in other clusters. In this work we discuss three different approaches to web log clustering and analyze their benefits and drawbacks. We conclude by identifying the most efficient algorithm based on the results of experiments conducted with various web log files.
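The partitioning the abstract describes (high intra-cluster similarity, low inter-cluster similarity) can be illustrated with plain k-means over per-session page-visit vectors. This is a generic illustration of the clustering pattern, not one of the three surveyed approaches, which differ in their features and similarity measures:

```python
# Generic k-means illustration for web-log sessions: each session is a
# vector of visit counts per page section, and sessions are grouped so
# that similar browsing behaviour lands in the same cluster.

import random

def kmeans(points, k, iters=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)         # initial centers from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each session vector to its nearest center (squared Euclidean)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # recompute each center as the mean of its cluster (keep old if empty)
        centers = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Hypothetical sessions: visit counts for (home, products, checkout) pages.
sessions = [(9, 1, 0), (8, 2, 0), (1, 8, 7), (0, 9, 8)]
centers, clusters = kmeans(sessions, k=2)
# Browsing-heavy and purchase-heavy sessions separate into two clusters.
```

The surveyed web-log approaches would replace the raw count vectors and Euclidean distance with their own session representations and similarity measures, but the assign-then-recompute loop is the common core.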
Furkh Zeshan, Radziah Mohamad
International Journal of New Computer Architectures and their Applications , 2011,
Abstract: Service composition is gaining popularity because a composite service offers features that no individual service can provide on its own. Multiple web services are available over the web for different tasks. The semantic web is an advanced form of the current web in which all contents have well-defined meanings; this enables the automated processing of web contents by machines. At run time, composing these services based on the requester's functional and non-functional requirements is a difficult task due to the heterogeneous nature of the services' results. This paper introduces a set of requirements whose fulfillment enables a successful composition process. In order to find the best approach, various composition approaches were evaluated against these requirements, and suggestions are provided on which approach to use in which scenario to obtain the best results.

Copyright © 2008-2017 Open Access Library. All rights reserved.