Search Results: 1 - 10 of 100 matches for " "
All listed articles are free for downloading (OA Articles)
Improvement in Word Sense Disambiguation by introducing enhancements in English WordNet Structure
Deepesh Kumar Kimtani,Jyotirmayee Choudhury,Alok Chakrabarty
International Journal on Computer Science and Engineering , 2012,
Abstract: Word sense disambiguation (WSD) is an open problem in natural language processing: identifying the appropriate sense of a word (i.e., its intended meaning) in a sentence when the word has multiple meanings. In this paper we introduce a new WordNet database relation structure that improves the WSD efficiency of knowledge-based, contextual-overlap-dependent WSD algorithms such as the popular Lesk algorithm. The efficiency of WSD using the proposed WordNet, versus the existing WordNet, as a knowledge base has been experimentally verified by running the Lesk algorithm on a rich collection of heterogeneous sentences. Using the proposed WordNet with the Lesk algorithm greatly increases the chance of contextual overlap, resulting in higher accuracy in identifying the proper sense or context of words. The WSD results and accuracies obtained with the proposed WordNet have been compared with those obtained with the existing WordNet; experimental results show that our proposed WordNet yields better WSD accuracy than the existing WordNet. Its use will therefore help users with machine translation, one of the most difficult problems of natural language processing.
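The contextual-overlap idea behind the Lesk algorithm mentioned above can be sketched in a few lines: pick the sense whose dictionary gloss shares the most words with the sentence context. The toy sense inventory and stopword list below are hypothetical stand-ins for WordNet glosses, a minimal sketch rather than the paper's system.

```python
# Simplified Lesk: choose the sense whose gloss has the largest word
# overlap with the context sentence. TOY_SENSES is an invented inventory
# standing in for WordNet glosses.

TOY_SENSES = {
    "bank": {
        "bank.n.01": "a financial institution that accepts deposits and lends money",
        "bank.n.02": "sloping land beside a body of water such as a river",
    }
}

STOPWORDS = {"a", "the", "of", "and", "that", "such", "as", "to", "at", "on"}

def lesk(word, sentence, senses=TOY_SENSES):
    context = {w for w in sentence.lower().split() if w not in STOPWORDS}
    best_sense, best_overlap = None, -1
    for sense, gloss in senses[word].items():
        gloss_words = {w for w in gloss.split() if w not in STOPWORDS}
        overlap = len(context & gloss_words)
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(lesk("bank", "he sat on the bank of the river and watched the water"))
# → bank.n.02 (the gloss shares "river" and "water" with the context)
```

The paper's contribution is to restructure the WordNet relations so that such overlaps occur more often; the scoring loop itself stays the same.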
A chain dictionary method for Word Sense Disambiguation and applications  [PDF]
Doina Tatar,Gabriela Serban,Andreea Mihis,Mihaiela Lupea,Dana Lupsa,Militon Frentiu
Computer Science , 2008,
Abstract: A large class of unsupervised algorithms for Word Sense Disambiguation (WSD) is that of dictionary-based methods. Many such algorithms are rooted in Lesk's algorithm, which exploits the sense definitions in the dictionary directly. Our approach uses the lexical database WordNet for a new algorithm originating in Lesk's, namely the "chain algorithm for disambiguation of all words" (CHAD). We show how translation from one language into another, as well as text entailment verification, can be accomplished through this disambiguation.
Word Sense Disambiguation using Clue Words  [PDF]
Udaya Raj Dhungana,Subarna Shakya
Journal of the Institute of Engineering , 2014, DOI: 10.3126/jie.v10i1.10900
Abstract: This paper presents a new model to disambiguate the correct sense of a polysemous word based on the context words related to each of its senses. The related context words for each sense are referred to as clue words for that sense. WordNet organizes nouns, verbs, adjectives and adverbs into sets of synonyms called synsets, each expressing a different concept. In contrast to the structure of WordNet, we developed a model that organizes the different senses of polysemous words based on clue words. These clue words for each sense of a polysemous word are used to disambiguate the correct meaning of the word in a given context with any WSD algorithm. The clue word for a sense of a polysemous word may be a noun, verb, adjective or adverb. Journal of the Institute of Engineering, Vol. 10, No. 1, 2014, pp. 192–198
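The clue-word scheme described in this abstract can be illustrated as follows: each sense of a polysemous word carries a curated set of clue words, and the sense whose clues best match the context wins. The clue sets below are invented for illustration, not taken from the paper.

```python
# Clue-word disambiguation sketch: score each sense of a polysemous word
# by how many of its (hypothetical) clue words appear in the context.

CLUES = {
    "bat": {
        "bat.animal": {"cave", "wings", "nocturnal", "flew"},
        "bat.sports": {"cricket", "baseball", "hit", "ball"},
    }
}

def disambiguate(word, context_words, clues=CLUES):
    context = {w.lower() for w in context_words}
    return max(clues[word], key=lambda s: len(clues[word][s] & context))

print(disambiguate("bat", ["the", "bat", "flew", "out", "of", "the", "cave"]))
# → bat.animal
```

Compared with gloss overlap, the clue sets act as a hand-tuned, sense-indexed context model, which is the contrast with WordNet's synset organization that the paper draws.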
System Combination Based on WSD Using WordNet
LIU Yu-Peng, LI Sheng, ZHAO Tie-Jun
自动化学报 (Acta Automatica Sinica) , 2010,
Abstract: Recently, confusion network decoding has shown better performance in combining outputs from multiple machine translation (MT) systems. However, handling the different word orders produced by multiple MT systems during hypothesis alignment remains the biggest challenge for confusion-network-based MT system combination. Previous alignment methods do not consider semantic information. To improve system performance, we introduce word sense disambiguation (WSD) into confusion network alignment. Meanwhile, the skeleton is selected through a sentence similarity score, computed with a maximum bipartite graph matching algorithm. Experiments show that combining WordNet-based WSD with our system, using a revised translation error rate (TER) algorithm, outperforms classic TER-based system combination.
Words Polysemy Analysis: Implementation of the Word Sense Disambiguation Algorithm Based on Magnini Domains
International Journal of Information Science , 2012, DOI: 10.5923/j.ijis.20120203.01
Abstract: This paper presents an analysis of the lexical resources used in the Word Sense Disambiguation (WSD) process by methods based on Magnini domains. The characteristics of two algorithms that use Magnini domains are described, and we define an implementation of the Word Domain Disambiguation (WDD) algorithm as specified in [1]. We then design experiments to test the algorithm and draw conclusions from the results.
Unsupervised Graph-based Word Sense Disambiguation Using Lexical Relation of WordNet
Ehsan Hessami,Faribourz Mahmoudi
International Journal of Computer Science Issues , 2011,
Abstract: Word Sense Disambiguation (WSD) is a task in Natural Language Processing used to identify the sense of words in context. Many approaches can be used to select the correct sense. This paper uses a tree and graph-connectivity structure to find the correct senses. The algorithm has few parameters and does not require sense-annotated data for training. Performance evaluation on standard datasets shows better accuracy than many previous graph-based algorithms, with reduced running time.
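The graph-connectivity approach in this abstract can be sketched minimally: candidate senses of the sentence's words become nodes, edges link semantically related senses of different words, and each word takes its best-connected sense. Real systems derive edges from WordNet lexical relations; the relatedness table below is hypothetical.

```python
# Graph-based WSD sketch: pick, for each word, the candidate sense with
# the highest degree in a sense graph. CANDIDATES and RELATED are toy data.
from itertools import combinations

CANDIDATES = {
    "bass": ["bass.fish", "bass.music"],
    "guitar": ["guitar.music"],
    "play": ["play.music", "play.sport"],
}
RELATED = {  # hypothetical semantic relations between senses
    ("bass.music", "guitar.music"),
    ("guitar.music", "play.music"),
    ("bass.music", "play.music"),
}

def degree(sense, edges):
    return sum(1 for e in edges if sense in e)

def disambiguate(words):
    # collect edges only between senses of *different* words
    edges = set()
    for w1, w2 in combinations(words, 2):
        for s1 in CANDIDATES[w1]:
            for s2 in CANDIDATES[w2]:
                if (s1, s2) in RELATED or (s2, s1) in RELATED:
                    edges.add((s1, s2))
    return {w: max(CANDIDATES[w], key=lambda s: degree(s, edges)) for w in words}

print(disambiguate(["bass", "guitar", "play"]))
# → {'bass': 'bass.music', 'guitar': 'guitar.music', 'play': 'play.music'}
```

Degree centrality is the simplest connectivity measure; the paper's tree-and-graph structure refines this idea but, like this sketch, needs no sense-annotated training data.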
Fine-Grained Word Sense Disambiguation Based on Parallel Corpora, Word Alignment, Word Clustering and Aligned Wordnets  [PDF]
Dan Tufis,Radu Ion,Nancy Ide
Computer Science , 2005,
Abstract: The paper presents a method for word sense disambiguation based on parallel corpora. The method exploits recent advances in word alignment and word clustering based on automatic extraction of translation equivalents, supported by available aligned wordnets for the languages in the corpus. The wordnets are aligned to the Princeton WordNet according to the principles established by EuroWordNet. The evaluation of the WSD system implementing the method described herein showed very encouraging results. Used in validation mode, the same system can check for and spot alignment errors in multilingually aligned wordnets such as BalkaNet and EuroWordNet.
The Treatment of Word Sense Inventories in the ‘LACELL WSD Project’
Moisés Almela
International Journal of English Studies (IJES) , 2009, DOI: 10.6018/ijes.1.1.99491
Abstract: The WSD community has long debated whether the criteria for representing polysemy in general-purpose dictionaries meet the specific demands of sense disambiguation tasks. Concern is growing that pre-defined sense inventories might not adjust well to the needs of WSD, because word occurrences can rarely be paired with rigid sense classes in a one-to-one fashion. A second cause for concern is the level of sense granularity adopted in conventional dictionary entries. Fine-grained distinctions can be useful for a dictionary user but complicate the design and evaluation of WSD systems in a way that is often unnecessary. As a result of these objections, many experts have voiced the opinion that dictionaries are not adequate sources of sense inventories for WSD. However, the problem of word sense overlaps can also be resolved by modifying the way in which dictionary entries are processed by WSD programs. This is the solution applied in the LACELL WSD system. The algorithm simultaneously selects two or more dictionary senses if the context does not allow sufficient discrimination between them. This article explains the underpinnings of this proposal and discusses some of its advantages and disadvantages.
Word Sense Disambiguation: An Empirical Survey
J. Sreedhar,S. Viswanadha Raju,A. Vinaya Babu,Amjan Shaik
International Journal of Soft Computing & Engineering , 2012,
Abstract: Word Sense Disambiguation (WSD) is a vital area with many practical uses today. Many WSD algorithms are available in the literature; we focus on optimal and portable ones. We discuss the supervised, unsupervised, and knowledge-based approaches to WSD. This paper also presents several WSD algorithms and their performance, comparing them and assessing the need for word sense disambiguation.
Integrating Multiple Knowledge Sources to Disambiguate Word Sense: An Exemplar-Based Approach  [PDF]
Hwee Tou Ng,Hian Beng Lee
Computer Science , 1996,
Abstract: In this paper, we present a new approach for word sense disambiguation (WSD) using an exemplar-based learning algorithm. This approach integrates a diverse set of knowledge sources to disambiguate word sense, including the part of speech of neighboring words, morphological form, the unordered set of surrounding words, local collocations, and verb-object syntactic relations. We tested our WSD program, named Lexas, on both a common data set used in previous work and a large sense-tagged corpus that we separately constructed. Lexas achieves higher accuracy on the common data set, and performs better than the most-frequent-sense heuristic on the highly ambiguous words in the large corpus tagged with the refined senses of WordNet.
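The exemplar-based learning described in this abstract is essentially nearest-neighbour classification over context features. A minimal sketch, with invented exemplars and senses: each training exemplar is a bag of context features tagged with a sense, and a test instance takes the sense of its most similar exemplar. Lexas itself used much richer features (POS of neighbours, collocations, verb-object syntax) and a proper distance metric.

```python
# 1-NN exemplar-based WSD sketch: similarity is set overlap between the
# test context and each stored exemplar. EXEMPLARS is toy training data.

EXEMPLARS = [
    ({"interest", "rate", "loan"}, "bank.finance"),
    ({"money", "account", "deposit"}, "bank.finance"),
    ({"river", "water", "fishing"}, "bank.river"),
]

def knn_sense(features, exemplars=EXEMPLARS):
    # return the sense of the single most similar exemplar (1-NN)
    return max(exemplars, key=lambda ex: len(ex[0] & features))[1]

print(knn_sense({"fishing", "river", "boat"}))
# → bank.river
```

The appeal of the exemplar approach, as the abstract notes, is that heterogeneous knowledge sources reduce to features in one space, so adding a new source only enlarges the feature sets.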
Copyright © 2008-2017 Open Access Library. All rights reserved.