Understanding the effects of erroneous annotations produced by NLP pipelines: a case study on pronominal anaphora resolution

Keywords: Anaphora Resolution, Bayesian Network, Erroneous Annotation, Annotation Pipeline

Abstract: The outputs of NLP tools can be regarded as sets of annotations over predefined character spans of a processed document. Generated annotations are said to be erroneous when they disagree with the reference annotations provided by experts. Since errors from automatic NLP tools are inevitable, they must be taken into account when designing new annotation pipelines. Integrated architectures have recently been proposed to produce and revise annotations by handling errors at several levels of linguistic annotation simultaneously (e.g. Named Entities and their roles in a document). However, the computational complexity of such architectures limits the number of annotations that can be handled effectively. In this article, we study an alternative that keeps the standard pipeline architecture, in which NLP tools are connected in cascade. We show that explicitly modeling the reliability of the input annotations helps (1) to attenuate the impact of noisy annotations, (2) to integrate into the pipeline complex annotations that express the linguistic knowledge needed to carry out the annotation tasks, and (3) to postpone error correction to a later stage, handled in a post-processing step optimized for revision. These results were obtained from a series of experiments on resolving the anaphoric pronoun 'it' in English genomic abstracts using Bayesian Networks.
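As a concrete illustration of the general idea of modeling annotation reliability (not the authors' actual model), the sketch below builds a toy Bayesian network with the pgmpy library in which the output of an upstream tool is treated as a noisy observation of a hidden variable. The network structure, variable names, and probabilities are illustrative assumptions, and import names may differ across pgmpy versions.

```python
# Minimal sketch: make the reliability of an upstream annotation explicit
# instead of trusting it blindly. All numbers and names are assumptions.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hidden "gold" status of the pronoun 'it': state 0 = anaphoric, 1 = pleonastic.
# ObsFeature is the noisy output of an upstream tool (e.g. a parser flag);
# its conditional distribution encodes how reliable that tool is.
model = BayesianNetwork([("Pronoun", "ObsFeature")])

cpd_pronoun = TabularCPD("Pronoun", 2, [[0.7], [0.3]])  # assumed prior
cpd_obs = TabularCPD(
    "ObsFeature", 2,
    # columns: Pronoun = anaphoric, pleonastic; rows: observed value 0, 1
    [[0.85, 0.25],   # P(ObsFeature = 0 | Pronoun)
     [0.15, 0.75]],  # P(ObsFeature = 1 | Pronoun)
    evidence=["Pronoun"], evidence_card=[2],
)
model.add_cpds(cpd_pronoun, cpd_obs)

# Downstream resolution queries the hidden variable given the noisy annotation,
# which attenuates the impact of upstream tool errors.
posterior = VariableElimination(model).query(
    variables=["Pronoun"], evidence={"ObsFeature": 1}
)
print(posterior)
```

In a pipeline setting, the same pattern would be repeated for each noisy input annotation, so that error correction can be deferred to a dedicated revision step rather than propagated blindly through the cascade.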