Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
Attribution and its annotation in the Penn Discourse TreeBank  [PDF]
Rashmi Prasad, Nikhil Dinesh, Alan Lee, Aravind Joshi
Traitement Automatique des Langues, 2007
Abstract: In this paper, we describe an annotation scheme for the attribution of abstract objects (propositions, facts, and eventualities) associated with discourse relations and their arguments annotated in the Penn Discourse TreeBank. The scheme aims to capture both the source and degrees of factuality of the abstract objects through the annotation of text spans signalling the attribution, and of features recording the source, type, scopal polarity, and determinacy of attribution.
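The abstract names four attribution features (source, type, scopal polarity, determinacy). A minimal sketch of how one such annotation could be represented; the field names and values are illustrative, not the PDTB's actual file format:

```python
from dataclasses import dataclass

@dataclass
class Attribution:
    span: str             # text span signalling the attribution, e.g. "analysts said"
    source: str           # e.g. "Wr" (writer) vs. "Ot" (other agent)
    type: str             # e.g. "Comm" for verbs of communication
    scopal_polarity: str  # e.g. "Neg" when negation scopes over the relation
    determinacy: str      # e.g. "Indet" for indeterminate attribution

# One hypothetical annotation record:
ann = Attribution("analysts said", "Ot", "Comm", "Null", "Null")
print(ann.source)  # → Ot
```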
Error-Driven Pruning of Treebank Grammars for Base Noun Phrase Identification  [PDF]
Claire Cardie, David Pierce
Computer Science, 1998
Abstract: Finding simple, non-recursive, base noun phrases is an important subtask for many natural language processing applications. While previous empirical methods for base NP identification have been rather complex, this paper instead proposes a very simple algorithm that is tailored to the relative simplicity of the task. In particular, we present a corpus-based approach for finding base NPs by matching part-of-speech tag sequences. The training phase of the algorithm is based on two successful techniques: first the base NP grammar is read from a ``treebank'' corpus; then the grammar is improved by selecting rules with high ``benefit'' scores. Using this simple algorithm with a naive heuristic for matching rules, we achieve surprising accuracy in an evaluation on the Penn Treebank Wall Street Journal.
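The core idea — base NPs found by matching part-of-speech tag sequences read off a treebank — can be sketched as a longest-match-first scan. The rules below are toy examples, not the pruned grammar the paper derives:

```python
# Hypothetical base-NP rules: POS-tag sequences as they might be read
# off a treebank, applied longest-match-first, left to right.
RULES = {("DT", "JJ", "NN"), ("DT", "NN"), ("NN",), ("NNP", "NNP"), ("PRP",)}
MAX_LEN = max(len(r) for r in RULES)

def find_base_nps(tags):
    """Return (start, end) spans of base NPs in a POS-tag sequence."""
    spans, i = [], 0
    while i < len(tags):
        for n in range(min(MAX_LEN, len(tags) - i), 0, -1):
            if tuple(tags[i:i + n]) in RULES:
                spans.append((i, i + n))
                i += n
                break
        else:
            i += 1  # no rule matched here; move on
    return spans

# "The big dog chased a cat" as POS tags:
print(find_base_nps(["DT", "JJ", "NN", "VBD", "DT", "NN"]))  # → [(0, 3), (4, 6)]
```

The paper's error-driven pruning step would then drop rules whose "benefit" score on held-out data is low; that scoring loop is omitted here.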
Bagging and Boosting a Treebank Parser  [PDF]
John C. Henderson, Eric Brill
Computer Science, 2000
Abstract: Bagging and boosting, two effective machine learning techniques, are applied to natural language parsing. Experiments using these techniques with a trainable statistical parser are described. The best resulting system provides roughly as large of a gain in F-measure as doubling the corpus size. Error analysis of the result of the boosting technique reveals some inconsistent annotations in the Penn Treebank, suggesting a semi-automatic method for finding inconsistent treebank annotations.
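The bagging recipe applied to parsing can be sketched in two parts: bootstrap resampling of the training corpus, and majority voting over the constituents proposed by the resulting parsers. The `train`-step is assumed as a black box; constituent tuples `(label, start, end)` are an illustrative representation:

```python
import random
from collections import Counter

def bootstrap(corpus, rng):
    """Sample |corpus| trees with replacement (one bagging replicate)."""
    return [rng.choice(corpus) for _ in range(len(corpus))]

def bagged_constituents(parses, k):
    """Keep constituents proposed by a strict majority of the k parsers."""
    votes = Counter(c for parse in parses for c in parse)
    return {c for c, n in votes.items() if n > k / 2}

# Toy demo: three "parsers" voting on (label, start, end) constituents.
parses = [
    {("NP", 0, 2), ("VP", 2, 5)},
    {("NP", 0, 2), ("VP", 3, 5)},
    {("NP", 0, 2), ("VP", 2, 5)},
]
print(sorted(bagged_constituents(parses, k=3)))  # → [('NP', 0, 2), ('VP', 2, 5)]
print(len(bootstrap(["t1", "t2", "t3"], random.Random(0))))  # → 3
```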
An Empirical Comparison of Probability Models for Dependency Grammar  [PDF]
Jason Eisner
Computer Science, 1997
Abstract: This technical report is an appendix to Eisner (1996): it gives superior experimental results that were reported only in the talk version of that paper. Eisner (1996) trained three probability models on a small set of about 4,000 conjunction-free, dependency-grammar parses derived from the Wall Street Journal section of the Penn Treebank, and then evaluated the models on a held-out test set, using a novel O(n^3) parsing algorithm. The present paper describes some details of the experiments and repeats them with a larger training set of 25,000 sentences. As reported at the talk, the more extensive training yields greatly improved performance. Nearly half the sentences are parsed with no misattachments; two-thirds are parsed with at most one misattachment. Of the models described in the original written paper, the best score is still obtained with the generative (top-down) "model C." However, slightly better models are also explored, in particular, two variants on the comprehension (bottom-up) "model B." The better of these has an attachment accuracy of 90%, and (unlike model C) tags words more accurately than the comparable trigram tagger. Differences are statistically significant. If tags are roughly known in advance, search error is all but eliminated and the new model attains an attachment accuracy of 93%. We find that the parser of Collins (1996), when combined with a highly-trained tagger, also achieves 93% when trained and tested on the same sentences. Similarities and differences are discussed.
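The headline figures (90% and 93%) are attachment accuracies: the fraction of words whose predicted head matches the gold head. A minimal scorer, assuming heads are given as parent indices with 0 for the root:

```python
def attachment_accuracy(gold_heads, pred_heads):
    """Fraction of tokens whose predicted head index equals the gold head."""
    assert len(gold_heads) == len(pred_heads)
    correct = sum(g == p for g, p in zip(gold_heads, pred_heads))
    return correct / len(gold_heads)

# One five-word sentence; the parser misattaches exactly one word.
gold = [2, 0, 2, 5, 3]
pred = [2, 0, 2, 3, 3]
print(attachment_accuracy(gold, pred))  # → 0.8
```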
Coping with Variation in the Icelandic Diachronic Treebank
Eiríkur Rögnvaldsson, Anton Karl Ingason, Einar Freyr Sigurðsson
Oslo Studies in Language, 2011
Abstract: We present an overview of an ongoing project which has the aim of developing methods for building a treebank of Icelandic. The treebank will contain both written and spoken language, and in addition have a diachronic dimension. Since Icelandic is an example of what has been called a less-resourced language when it comes to computational linguistics and language technology, it is essential to utilize the limited resources available as economically and efficiently as possible. We emphasize the importance of open source software and the interplay between linguistic knowledge and technological skills. We describe the workflow in the construction of the treebank and show how the different software tools work together towards the final representation. Finally, we show how the treebank can be used in studying some well known phenomena in Icelandic syntax.
Probabilistic Parsing Using Left Corner Language Models  [PDF]
Christopher D. Manning, Bob Carpenter
Computer Science, 1997
Abstract: We introduce a novel parser based on a probabilistic version of a left-corner parser. The left-corner strategy is attractive because rule probabilities can be conditioned on both top-down goals and bottom-up derivations. We develop the underlying theory and explain how a grammar can be induced from analyzed data. We show that the left-corner approach provides an advantage over simple top-down probabilistic context-free grammars in parsing the Wall Street Journal using a grammar induced from the Penn Treebank. We also conclude that the Penn Treebank provides a fairly weak testbed due to the flatness of its bracketings and to the obvious overgeneration and undergeneration of its induced grammar.
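The left-corner strategy hinges on the left-corner relation: which symbols can begin a derivation of which categories. This relation is the transitive closure over the first symbols of rules, computable by a small fixed-point loop (grammar encoding here is illustrative):

```python
def left_corners(grammar):
    """grammar: dict nonterminal -> list of right-hand sides (tuples).
    Returns dict A -> set of symbols derivable as A's leftmost corner."""
    lc = {a: {rhs[0] for rhs in rules if rhs} for a, rules in grammar.items()}
    changed = True
    while changed:  # iterate to a fixed point
        changed = False
        for a in lc:
            for x in list(lc[a]):
                for y in lc.get(x, ()):  # follow corners of nonterminal corners
                    if y not in lc[a]:
                        lc[a].add(y)
                        changed = True
    return lc

g = {"S": [("NP", "VP")],
     "NP": [("DT", "NN"), ("PRP",)],
     "VP": [("VBD", "NP")]}
print(sorted(left_corners(g)["S"]))  # → ['DT', 'NP', 'PRP']
```

A probabilistic left-corner parser would condition rule probabilities on both the top-down goal category and the bottom-up left corner; only the relation itself is shown here.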
The Index Thomisticus Treebank Project: Annotation, Parsing and Valency Lexicon  [PDF]
Barbara McGillivray, Marco Passarotti, Paolo Ruffolo
Traitement Automatique des Langues, 2010
Abstract: We present an overview of the Index Thomisticus Treebank project (IT-TB). The IT-TB consists of around 60,000 tokens from the Index Thomisticus by Roberto Busa SJ, an 11-million-token Latin corpus of the texts by Thomas Aquinas. We briefly describe the annotation guidelines, shared with the Latin Dependency Treebank (LDT). The application of data-driven dependency parsers on IT-TB and LDT data is reported on. We present training and parsing results on several datasets and provide evaluation of learning algorithms and techniques. Furthermore, we introduce the IT-TB valency lexicon extracted from the treebank. We report on quantitative data of the lexicon and provide some statistical measures on subcategorisation structures.
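Extracting a valency lexicon from a dependency treebank amounts to counting, per verb lemma, the sets of argument relations it governs. A sketch under assumed conventions — tokens as `(lemma, head_index, relation)` triples with PDT-style labels, which is not the IT-TB's actual data format:

```python
from collections import Counter

def subcat_frames(sentences, arg_labels=frozenset({"Sb", "Obj", "Pnom"})):
    """Count (verb lemma, sorted argument-relation tuple) frames."""
    frames = Counter()
    for sent in sentences:
        for i, (lemma, head, rel) in enumerate(sent, start=1):
            if rel == "Pred":  # main verbs in PDT-style annotation
                deps = sorted(r for (_, h, r) in sent
                              if h == i and r in arg_labels)
                frames[(lemma, tuple(deps))] += 1
    return frames

# Toy Latin sentence "homo amat deum": amo governs a subject and an object.
sent = [("homo", 2, "Sb"), ("amo", 0, "Pred"), ("deus", 2, "Obj")]
print(subcat_frames([sent]))  # → Counter({('amo', ('Obj', 'Sb')): 1})
```

Aggregating these counts over the whole treebank yields the kind of quantitative subcategorisation data the abstract mentions.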
Permeability Behavior of Self Compacting Concrete  [PDF]
Er. Sandeep Dhiman, Arvind Dewangan, Er. Lakhan Nagpal, Er. Sumit Kumar
International Journal of Innovative Technology and Exploring Engineering, 2013
Abstract: Self-compacting concrete (SCC) is a new category of high-performance concrete characterized by its ability to spread and self-consolidate in the formwork without exhibiting any significant separation of constituents. Eliminating vibration during placement through the use of SCC brings substantial advantages: better homogeneity, an improved working environment, and higher productivity through faster construction. Understanding the flow behaviour of this concrete is therefore of interest to many researchers. Flow properties in the fresh (green) state are governed chiefly by paste content, aggregate volume and admixture dosage, and are characterized by the test methods used for self-compacting concrete, such as the slump-flow, V-funnel and L-box tests. A number of trial mixtures were tested (slump flow, V-funnel, L-box, etc.) against the permissible limits, after which the final proportions of ingredients and admixtures were fixed for M30, M40, M50 and M60 grade concretes. The present experimental investigation focuses mainly on the permeability properties of these self-compacting concrete mixes.
Dictionary of Grammar
E. Ridge
Lexikos, 2012, DOI: 10.5788/9-1-933
Abstract: Review of Dictionary of Grammar
Annotating Predicate-Argument Structure for a Parallel Treebank  [PDF]
Lea Cyrus, Hendrik Feddes, Frank Schumacher
Computer Science, 2004
Abstract: We report on a recently initiated project which aims at building a multi-layered parallel treebank of English and German. Particular attention is devoted to a dedicated predicate-argument layer which is used for aligning translationally equivalent sentences of the two languages. We describe both our conceptual decisions and aspects of their technical realisation. We discuss some selected problems and conclude with a few remarks on how this project relates to similar projects in the field.

Copyright © 2008-2017 Open Access Library. All rights reserved.