Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
Page 1/100
Word sense disambiguation: a survey  [PDF]
Alok Ranjan Pal,Diganta Saha
Computer Science , 2015, DOI: 10.5121/ijctcm.2015.5301
Abstract: In this paper, we present a survey of Word Sense Disambiguation (WSD). Research in WSD has been conducted to varying extents in nearly all major languages around the world. We survey the different approaches adopted across research works, the state of the art in performance in this domain, recent work in different Indian languages and, finally, work in the Bengali language. We also survey the major competitions in this field and the benchmark results obtained from them.
Using Machine Learning Algorithms for Word Sense Disambiguation: A Brief Survey  [PDF]
Neetu Sharma,Samit Kumar,Dr. S. Niranjan
International Journal of Computer Technology and Electronics Engineering , 2012,
Abstract: In the vocabulary of human language, numerous words have more than one distinct meaning and thus present a contextual ambiguity, one of the many language-based problems that requires a procedural resolution. Approaches to WSD are often classified according to the main source of knowledge used in sense differentiation. Methods that rely primarily on dictionaries, thesauri, and lexical knowledge bases, without using any corpus evidence, are termed dictionary-based or knowledge-based. Natural language tends to be ambiguous. Comparing and evaluating different WSD systems is extremely difficult because of the different test sets, sense inventories, and knowledge resources adopted. In this research we address the problem of Word Sense Disambiguation with a combination of learning algorithms. The study aims to compare the performance of machine learning algorithms applied to Word Sense Disambiguation (WSD).
Word Sense Disambiguation Using Association Rules: A Survey  [PDF]
Samit Kumar,Neetu Sharma,Dr. S. Niranjan
International Journal of Computer Technology and Electronics Engineering , 2012,
Abstract: Word sense disambiguation (WSD) is defined as the task of assigning the appropriate meaning (sense) to a given word in a text or discourse. The sense in which the word is used can be determined, most of the time, by the context in which the word occurs. Word sense ambiguity is a central problem for many established Human Language Technology applications (e.g., machine translation, information extraction, question answering, information retrieval, text classification, and text summarization). The context of an ambiguous word is regarded as a transaction record; the words in the context and the senses of the ambiguous word are regarded as items. If some items frequently occur together in some transactions (the contexts of the ambiguous word), then there must be some correlation between the items. The basic idea of the WSD algorithm based on mining association rules is: to discover, by scanning the context database, the frequent item sets composed of the sense of the ambiguous word and its context whose support is no less than the support threshold; to produce, from the maximal frequent item sets, the association rules X=>Y whose confidence is no less than the confidence threshold; and, at last, to determine the sense of the ambiguous word from the matching rules.
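The support/confidence loop described in this abstract can be sketched as follows. This is a minimal illustration with invented contexts, sense labels, and thresholds; the paper's actual corpus, thresholds, and rule-matching procedure are not specified here.

```python
from itertools import combinations

# Hypothetical sense-tagged contexts for the ambiguous word "bank":
# each transaction = the context words plus the observed sense item.
transactions = [
    {"money", "loan", "sense=FINANCE"},
    {"money", "account", "sense=FINANCE"},
    {"river", "water", "sense=SHORE"},
    {"river", "fishing", "sense=SHORE"},
    {"money", "loan", "sense=FINANCE"},
]

MIN_SUPPORT = 0.4      # itemset must appear in >= 40% of transactions
MIN_CONFIDENCE = 0.9   # rule X => sense must hold >= 90% of the time

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# 1. Find frequent itemsets that pair context words with a sense item.
items = sorted(set().union(*transactions))
frequent = [frozenset(c) for n in (2, 3)
            for c in combinations(items, n)
            if support(frozenset(c)) >= MIN_SUPPORT]

# 2. Produce rules X => sense whose confidence clears the threshold.
rules = []
for itemset in frequent:
    senses = {i for i in itemset if i.startswith("sense=")}
    if len(senses) != 1:
        continue
    x = itemset - senses
    if x and support(itemset) / support(frozenset(x)) >= MIN_CONFIDENCE:
        rules.append((set(x), senses.pop()))

for x, sense in sorted(rules, key=lambda r: sorted(r[0])):
    print(sorted(x), "=>", sense)
```

On this toy data the mined rules associate "money"/"loan" with the FINANCE sense and "river" with the SHORE sense; disambiguation then amounts to firing whichever rules match a new context.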
Word Sense Disambiguation in Information Retrieval  [PDF]
Francis de la C. Fernández REYES, Exiquio C. Pérez LEYVA, Rogelio Lau FERNÁNDEZ
Intelligent Information Management (IIM) , 2009, DOI: 10.4236/iim.2009.12018
Abstract: Natural language processing has a set of phases that evolve from lexical text analysis to the pragmatic analysis in which the author's intentions are revealed. The ambiguity problem appears in all of these tasks. Previous work has addressed word sense disambiguation, the process of assigning a sense to a word within a specific context, by creating algorithms under a supervised or unsupervised approach, which means that those algorithms either do or do not use an external lexical resource. This paper presents an approach that combines unsupervised algorithms through a set of classifiers; the result is a learning algorithm based on unsupervised methods for the word sense disambiguation process. It begins with an introduction to word sense disambiguation concepts, then analyzes some unsupervised algorithms in order to extract the best of them, and combines them under a supervised approach making use of some classifiers.
What is word sense disambiguation good for?  [PDF]
Adam Kilgarriff
Computer Science , 1997,
Abstract: Word sense disambiguation has developed as a sub-area of natural language processing, as if, like parsing, it was a well-defined task which was a pre-requisite to a wide range of language-understanding applications. First, I review earlier work which shows that a set of senses for a word is only ever defined relative to a particular human purpose, and that a view of word senses as part of the linguistic furniture lacks theoretical underpinnings. Then, I investigate whether and how word sense ambiguity is in fact a problem for different varieties of NLP application.
Boosting Applied to Word Sense Disambiguation  [PDF]
Gerard Escudero,Lluis Marquez,German Rigau
Computer Science , 2000,
Abstract: In this paper Schapire and Singer's AdaBoost.MH boosting algorithm is applied to the Word Sense Disambiguation (WSD) problem. Initial experiments on a set of 15 selected polysemous words show that the boosting approach surpasses Naive Bayes and Exemplar-based approaches, which represent state-of-the-art accuracy on supervised WSD. In order to make boosting practical for a real learning domain of thousands of words, several ways of accelerating the algorithm by reducing the feature space are studied. The best variant, which we call LazyBoosting, is tested on the largest sense-tagged corpus available containing 192,800 examples of the 191 most frequent and ambiguous English words. Again, boosting compares favourably to the other benchmark algorithms.
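The boosting idea the abstract builds on can be illustrated with a plain binary AdaBoost over one-feature decision stumps. This is a sketch on invented bag-of-words data, not the paper's AdaBoost.MH variant (which handles multiple senses via multi-label boosting) and not its feature-reduction scheme.

```python
import math

# Toy binary WSD task (illustrative data only): decide between two
# senses of "bank" from binary bag-of-words features.
# Each example: (feature dict, label in {+1, -1}).
data = [
    ({"money": 1, "river": 0}, +1),
    ({"money": 1, "river": 0}, +1),
    ({"money": 0, "river": 1}, -1),
    ({"money": 0, "river": 1}, -1),
    ({"money": 1, "river": 1}, +1),
]
features = ["money", "river"]

def stump(f):
    """Weak hypothesis: predict +1 if feature f is present, else -1."""
    return lambda x: +1 if x[f] else -1

def adaboost(rounds=3):
    n = len(data)
    w = [1.0 / n] * n                  # uniform example weights
    ensemble = []                      # (alpha, hypothesis) pairs
    for _ in range(rounds):
        # Pick the stump with the lowest weighted error.
        best_f, best_err = None, 1.0
        for f in features:
            h = stump(f)
            err = sum(wi for wi, (x, y) in zip(w, data) if h(x) != y)
            if err < best_err:
                best_f, best_err = f, err
        err = max(best_err, 1e-9)      # avoid log(0) on perfect stumps
        alpha = 0.5 * math.log((1 - err) / err)
        h = stump(best_f)
        ensemble.append((alpha, h))
        # Re-weight: boost the examples this stump got wrong.
        w = [wi * math.exp(-alpha * y * h(x)) for wi, (x, y) in zip(w, data)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def classify(ensemble, x):
    return +1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

ens = adaboost()
print(classify(ens, {"money": 1, "river": 0}))   # financial context
```

The re-weighting step is what distinguishes boosting from the Naive Bayes and exemplar-based baselines the paper compares against: each round concentrates on the examples the previous weak learners misclassified.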
The Role of Conceptual Relations in Word Sense Disambiguation  [PDF]
David Fernandez-Amoros,Julio Gonzalo,Felisa Verdejo
Computer Science , 2001,
Abstract: We explore many ways of using conceptual distance measures in Word Sense Disambiguation, starting with the Agirre-Rigau conceptual density measure. We use a generalized form of this measure, introducing many (parameterized) refinements and performing an exhaustive evaluation of all meaningful combinations. We finally obtain a 42% improvement over the original algorithm, and show that measures of conceptual distance are not worse indicators for sense disambiguation than measures based on word co-occurrence (exemplified by the Lesk algorithm). Our results, however, reinforce the idea that only a combination of different sources of knowledge might eventually lead to accurate word sense disambiguation.
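The word-co-occurrence baseline the abstract contrasts against, the Lesk algorithm, can be sketched in a few lines: pick the sense whose dictionary gloss shares the most words with the target's context. The sense inventory and glosses below are invented for illustration; real systems typically draw them from WordNet.

```python
# Hypothetical mini sense inventory for "bank" (gloss words invented).
glosses = {
    "bank#finance": {"institution", "money", "deposits", "loans"},
    "bank#river":   {"sloping", "land", "beside", "river", "water"},
}

def simplified_lesk(context_words, glosses):
    """Pick the sense whose gloss overlaps the context most (Lesk-style)."""
    overlap = {s: len(g & set(context_words)) for s, g in glosses.items()}
    return max(overlap, key=overlap.get)

print(simplified_lesk(["she", "sat", "on", "the", "river", "bank"], glosses))
```

Conceptual-distance measures like the Agirre-Rigau density replace this surface word overlap with distances computed over a concept taxonomy.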
Sequential Model Selection for Word Sense Disambiguation  [PDF]
Ted Pedersen,Rebecca Bruce,Janyce Wiebe
Computer Science , 1997,
Abstract: Statistical models of word-sense disambiguation are often based on a small number of contextual features or on a model that is assumed to characterize the interactions among a set of features. Model selection is presented as an alternative to these approaches, where a sequential search of possible models is conducted in order to find the model that best characterizes the interactions among features. This paper expands existing model selection methodology and presents the first comparative study of model selection search strategies and evaluation criteria when applied to the problem of building probabilistic classifiers for word-sense disambiguation.
Word-Sense Disambiguation Using Decomposable Models  [PDF]
Rebecca Bruce,Janyce Wiebe
Computer Science , 1994,
Abstract: Most probabilistic classifiers used for word-sense disambiguation have either been based on only one contextual feature or have used a model that is simply assumed to characterize the interdependencies among multiple contextual features. In this paper, a different approach to formulating a probabilistic model is presented along with a case study of the performance of models produced in this manner for the disambiguation of the noun "interest". We describe a method for formulating probabilistic models that use multiple contextual features for word-sense disambiguation, without requiring untested assumptions regarding the form of the model. Using this approach, the joint distribution of all variables is described by only the most systematic variable interactions, thereby limiting the number of parameters to be estimated, supporting computational efficiency, and providing an understanding of the data.
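For contrast with the paper's decomposable models, here is the simplest probabilistic classifier of the kind the abstract critiques: Naive Bayes, which assumes every contextual feature is independent given the sense. The training sample below is invented for illustration; the paper estimates its models from a real sense-tagged corpus for the noun "interest".

```python
import math
from collections import Counter, defaultdict

# Tiny labeled sample for the noun "interest" (invented counts).
train = [
    (["rate", "bank"], "MONEY"),
    (["rate", "percent"], "MONEY"),
    (["hobby", "music"], "ATTENTION"),
    (["great", "hobby"], "ATTENTION"),
]

sense_counts = Counter(label for _, label in train)
word_counts = defaultdict(Counter)
vocab = set()
for words, label in train:
    word_counts[label].update(words)
    vocab.update(words)

def log_posterior(words, sense):
    """log P(sense) + sum of log P(word|sense), features independent."""
    lp = math.log(sense_counts[sense] / len(train))
    total = sum(word_counts[sense].values())
    for w in words:
        # Add-one (Laplace) smoothing for unseen words.
        lp += math.log((word_counts[sense][w] + 1) / (total + len(vocab)))
    return lp

def disambiguate(words):
    return max(sense_counts, key=lambda s: log_posterior(words, s))

print(disambiguate(["bank", "rate"]))
```

The independence assumption hard-coded in log_posterior is exactly the kind of untested model form the paper's approach avoids: decomposable model selection keeps only the feature interactions the data actually supports.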
Word sense disambiguation criteria: a systematic study  [PDF]
Laurent Audibert
Computer Science , 2005,
Abstract: This article describes the results of a systematic in-depth study of the criteria used for word sense disambiguation. Our study is based on 60 target words: 20 nouns, 20 adjectives and 20 verbs. Our results are not always in line with some practices in the field. For example, we show that omitting non-content words decreases performance and that bigrams yield better results than unigrams.
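The unigram/bigram distinction in this abstract's finding is just a choice of feature extractor over the context tokens, as in this small sketch (token data invented):

```python
def unigrams(tokens):
    """Single-word features."""
    return list(tokens)

def bigrams(tokens):
    """Adjacent word pairs; the study reports these outperform unigrams."""
    return list(zip(tokens, tokens[1:]))

toks = ["the", "interest", "rate", "rose"]
print(bigrams(toks))
```

Likewise, "omitting non-content words" means filtering tokens against a stopword list before extraction, which the study found to hurt performance.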

Copyright © 2008-2017 Open Access Library. All rights reserved.