oalib
Search Results: 1 - 10 of 100 matches for " "
All listed articles are free for downloading (OA Articles)
Bayesian Network Enhanced with Structural Reliability Methods: Methodology  [PDF]
Daniel Straub,Armen Der Kiureghian
Statistics , 2012, DOI: 10.1061/(ASCE)EM.1943-7889.0000173
Abstract: We combine Bayesian networks (BNs) and structural reliability methods (SRMs) to create a new computational framework, termed enhanced Bayesian network (eBN), for reliability and risk analysis of engineering structures and infrastructure. BNs are efficient in representing and evaluating complex probabilistic dependence structures, as present in infrastructure and structural systems, and they facilitate Bayesian updating of the model when new information becomes available. On the other hand, SRMs enable accurate assessment of probabilities of rare events represented by computationally demanding, physically-based models. By combining the two methods, the eBN framework provides a unified and powerful tool for efficiently computing probabilities of rare events in complex structural and infrastructure systems in which information evolves in time. Strategies for modeling and efficiently analyzing the eBN are described by way of several conceptual examples. The companion paper applies the eBN methodology to example structural and infrastructure systems.
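To make the eBN idea concrete, the following minimal sketch (not from the paper; all numbers are invented) shows the two ingredients side by side: a FORM-style structural reliability step that converts an assumed reliability index into a conditional failure probability, and a two-state Bayesian network update that revises the condition state, and hence the failure probability, when inspection evidence arrives.

```python
# Minimal sketch of the eBN idea (illustrative numbers, not from the paper):
# an SRM step turns a physics-based limit state into a conditional failure
# probability, and a small Bayesian network updates it with new evidence.
import numpy as np
from scipy.stats import norm

# SRM step: FORM-style approximation Pr(failure | state) = Phi(-beta(state))
betas = {"intact": 3.5, "deteriorated": 2.0}          # assumed reliability indices
p_fail = {s: norm.cdf(-b) for s, b in betas.items()}

# BN step: prior on the (hidden) condition state and an inspection likelihood
prior = np.array([0.8, 0.2])                          # P(intact), P(deteriorated)
p_detect = np.array([0.1, 0.7])                       # P(inspection flags damage | state)

# Bayesian updating after an inspection that flags damage
posterior = prior * p_detect
posterior /= posterior.sum()

# Failure probability before and after the new information
pf_vector = np.array([p_fail["intact"], p_fail["deteriorated"]])
pf_prior = prior @ pf_vector
pf_post = posterior @ pf_vector
print(f"P(failure) prior: {pf_prior:.2e}, after inspection: {pf_post:.2e}")
```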
Comparing and Combining Sentiment Analysis Methods  [PDF]
Pollyanna Gonçalves,Matheus Araújo,Fabrício Benevenuto,Meeyoung Cha
Computer Science , 2014, DOI: 10.1145/2512938.2512951
Abstract: Several messages express opinions about events, products, and services, political views or even their author's emotional state and mood. Sentiment analysis has been used in several applications including analysis of the repercussions of events in social networks, analysis of opinions about products and services, and simply to better understand aspects of social communication in Online Social Networks (OSNs). There are multiple methods for measuring sentiments, including lexical-based approaches and supervised machine learning methods. Despite the wide use and popularity of some methods, it is unclear which method is better for identifying the polarity (i.e., positive or negative) of a message as the current literature does not provide a method of comparison among existing methods. Such a comparison is crucial for understanding the potential limitations, advantages, and disadvantages of popular methods in analyzing the content of OSNs messages. Our study aims at filling this gap by presenting comparisons of eight popular sentiment analysis methods in terms of coverage (i.e., the fraction of messages whose sentiment is identified) and agreement (i.e., the fraction of identified sentiments that are in tune with ground truth). We develop a new method that combines existing approaches, providing the best coverage results and competitive agreement. We also present a free Web service called iFeel, which provides an open API for accessing and comparing results across different sentiment methods for a given text.
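The two comparison measures used in the study, coverage and agreement, can be stated compactly in code. The sketch below is a generic illustration with invented labels, not the iFeel implementation.

```python
# Toy illustration of the two comparison measures described in the abstract:
# coverage = fraction of messages for which a method outputs a polarity at all,
# agreement = fraction of those outputs that match the ground-truth polarity.
def coverage_and_agreement(predictions, ground_truth):
    """predictions: list of 'positive'/'negative'/None; ground_truth: list of labels."""
    decided = [(p, g) for p, g in zip(predictions, ground_truth) if p is not None]
    coverage = len(decided) / len(predictions) if predictions else 0.0
    agreement = (sum(p == g for p, g in decided) / len(decided)) if decided else 0.0
    return coverage, agreement

# Hypothetical output of one sentiment method on four messages
preds = ["positive", None, "negative", "positive"]
truth = ["positive", "negative", "negative", "negative"]
print(coverage_and_agreement(preds, truth))  # (0.75, 0.666...)
```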
Predicting binding sites of hydrolase-inhibitor complexes by combining several methods
Taner Z Sen, Andrzej Kloczkowski, Robert L Jernigan, Changhui Yan, Vasant Honavar, Kai-Ming Ho, Cai-Zhuang Wang, Yungok Ihm, Haibo Cao, Xun Gu, Drena Dobbs
BMC Bioinformatics , 2004, DOI: 10.1186/1471-2105-5-205
Abstract: In order to increase the power of predictive methods for protein-protein interaction sites, we have developed a consensus methodology for combining four different methods. These approaches include: data mining using Support Vector Machines, threading through protein structures, prediction of conserved residues on the protein surface by analysis of phylogenetic trees, and the Conservatism of Conservatism method of Mirny and Shakhnovich. Results obtained on a dataset of hydrolase-inhibitor complexes demonstrate that the combination of all four methods yields improved predictions over the individual methods. We developed a consensus method for predicting protein-protein interface residues by combining sequence and structure-based methods. The success of our consensus approach suggests that similar methodologies can be developed to improve prediction accuracies for other bioinformatic problems. Protein-protein interactions play a critical role in protein function. Completion of many genomes is being followed rapidly by major efforts to identify experimentally interacting protein pairs in order to decipher the networks of interacting, coordinated-in-action proteins. Identification of protein-protein interaction sites and detection of specific residues that contribute to the specificity and strength of protein interactions is an important problem [1-3] with broad applications ranging from rational drug design to the analysis of metabolic and signal transduction networks. Experimental detection of residues on protein-protein interaction surfaces can come either from determination of the structure of protein-protein complexes or from various functional assays. The ability to predict interface residues at protein binding sites using computational methods can be used to guide the design of such functional experiments and to enhance gene annotations by identifying specific protein interaction domains within genes at a finer level of detail than is currently possible. Computational
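The abstract does not spell out the combination rule, so the sketch below shows one generic way to form such a consensus: a simple vote over the four methods' per-residue calls, with illustrative method names and an assumed agreement threshold.

```python
# A generic consensus sketch: each of the four predictors labels every surface
# residue as interface (1) or not (0); a residue is called interface when at
# least k predictors agree. Method names and the threshold are illustrative only.
def consensus_interface(per_method_calls, k=3):
    """per_method_calls: dict method -> list of 0/1 calls, same residue order."""
    n_res = len(next(iter(per_method_calls.values())))
    votes = [sum(calls[i] for calls in per_method_calls.values()) for i in range(n_res)]
    return [int(v >= k) for v in votes]

calls = {
    "svm":          [1, 0, 1, 1, 0],   # data mining with Support Vector Machines
    "threading":    [1, 0, 0, 1, 0],   # threading through protein structures
    "phylo_trees":  [0, 0, 1, 1, 1],   # conserved surface residues from phylogenetic trees
    "cons_of_cons": [1, 1, 1, 1, 0],   # Conservatism of Conservatism
}
print(consensus_interface(calls, k=3))  # [1, 0, 1, 1, 0]
```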
Combining qualitative and quantitative methods in studying education  [PDF]
Ševkušić Slavica
Zbornik Instituta za Pedagoška Istraživanja , 2009, DOI: 10.2298/zipi0901045s
Abstract: In the humanities, over the last two decades, there has been an evident increase in research combining quantitative and qualitative methods, techniques, approaches, concepts, or language. This paper discusses the arguments for and against these research designs, which most often appear in the literature under the term mixed methods research. While some authors consider this type of research the announcement of a third paradigm in studying social phenomena and an approach that shifts the war between the two paradigms into the past, other authors claim that the paradigms underlying the two basic research orientations are incompatible because they study essentially different phenomena, and therefore the methods from the two research traditions cannot be combined in any way. The third viewpoint, which we advocate as well, argues that qualitative and quantitative methods cannot be applied together in one design for the purposes of triangulation or cross-validation, but that they can be combined for complementary objectives. This paper describes an example of a mixed methods design with complementary objectives in pedagogy, concerning the evaluation of a mathematics curriculum. The example shows that combining qualitative and quantitative methods is not only possible, but that it creates the conditions for arriving at data which it would not be possible to obtain using only one or the other approach.
Robust Eye Localization by Combining Classification and Regression Methods  [PDF]
Pak Il Nam,Ri Song Jin,Peter Peer
ISRN Applied Mathematics , 2014, DOI: 10.1155/2014/804291
Abstract: Eye localization is an important part of a face recognition system, because its precision closely affects the performance of the system. In this paper we analyze the limitations of classification and regression methods and propose a robust and accurate eye localization method combining these two methods. The classification method in eye localization is robust, but its precision is not very high, while the regression method is sensitive to the initial position but can converge to the eye position accurately when the initial position is near the eye. Experiments on the BioID and LFW databases show that the proposed method gives very good results on both low and high quality images. 1. Introduction Because face images should be normalized based on the coordinates of the eyes in most face recognition systems, eye localization is an important part of face recognition systems. Its precision closely affects the performance of face recognition [1, 2]. Eye localization methods based on geometric properties of eyes such as edges, shape, and probabilistic characteristics achieve high precision under normal conditions, but they are sensitive to illumination, pose, expression, and glasses [3–6]. State-of-the-art methods in eye localization are based on boosting classification, regression, boosting and cascade, boosting and SVM, and other variants [1, 2, 7–11]. In particular, the method in [1] is very effective, guaranteeing high precision even in unconstrained environments. It integrates the following three characteristics: (i) a probabilistic cascade, (ii) a two-level localization framework, and (iii) the extended local binary pattern (ELBP). In eye localization, the boundary between positive and negative samples is ambiguous, especially in low quality images. Thus, positive samples of low quality are easily rejected by the thresholds in the cascade and fail to contribute to the final result. In [1] the authors introduced a quality adaptive cascade that works in a probabilistic framework (P cascade). In the P cascade framework all image patches have a chance to contribute to the final result, and their contributions are determined by their corresponding probability. In this way the P cascade can adapt to face images of arbitrary quality. Furthermore, they constructed a two-level localization framework with coarse-to-fine localization so that the system is robust and accurate. Figure 1 shows the size and geometry of the eye training samples for the two-level stacked classifiers. Figure 1: The size and geometry of training samples for two-level stacked classifiers. In order to
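As a rough illustration of the coarse-to-fine idea described above, the toy sketch below lets a robust but coarse classifier propose a position and a precise but initialization-sensitive regressor refine it only when the proposal lies within a trust radius; both models and all numbers are placeholders rather than the paper's components.

```python
import numpy as np

def classify_coarse(patch_scores):
    """Pick the patch centre with the highest classifier score (robust, coarse)."""
    return max(patch_scores, key=lambda xy: patch_scores[xy])

def refine_by_regression(start_xy, regressor_offset, max_refine_dist=8.0):
    """Apply the regressor's predicted offset only if it stays within a trust radius."""
    offset = np.asarray(regressor_offset, dtype=float)
    if np.linalg.norm(offset) <= max_refine_dist:
        refined = np.asarray(start_xy, dtype=float) + offset
        return tuple(float(v) for v in refined)
    return start_xy  # fall back to the robust classifier output

scores = {(30, 40): 0.2, (32, 41): 0.9, (60, 80): 0.4}  # hypothetical classifier scores
coarse = classify_coarse(scores)
print(refine_by_regression(coarse, regressor_offset=(1.5, -0.8)))  # (33.5, 40.2)
```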
Optimization of spectrophotometric methods using response surfaces methodology
Prieto,Avismelsi; Bolelawsky,Lucyna; Jiménez,Edgaly; Guanipa,Yaritza; Camargo,Nuris; Araujo,Lilia;
Revista Técnica de la Facultad de Ingeniería Universidad del Zulia , 2007,
Abstract: Response surface methods are very useful for interpreting the relationships between the response and factor effects. These relationships can be represented by second-order polynomials or explored graphically to determine the optimum levels. This paper describes the development of two spectrophotometric methods for the determination of ciprofloxacin and norfloxacin in pharmaceutical formulations. The optimization of variables was studied both in the traditional mode and using response surface methodology. The methods are based on the reaction of ciprofloxacin and norfloxacin with bromocresol green in acid medium to give ion pairs extractable with chloroform. The ion pairs formed exhibited an absorption maximum at 420 nm. Beer's law is obeyed in the concentration ranges 2.0-40.0 μg/ml and 2.0-80.0 μg/ml for ciprofloxacin and norfloxacin, respectively. The analysis yielded good reproducibility (RSD between 0.40 and 5.88%). The proposed methods were applied to the determination of ciprofloxacin and norfloxacin in tablets and intravenous solution, showing recoveries in the range of 98.4-104.1%. The proposed methods are simple, sensitive, and economical.
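The sketch below illustrates the response-surface step in isolation: fitting the second-order polynomial mentioned in the abstract to two factors by least squares and locating its stationary point as the candidate optimum. The factor levels and response values are invented for illustration.

```python
# Minimal sketch of a second-order response surface fit: a measured response as a
# function of two factors, modelled with the quadratic polynomial mentioned in the
# abstract. All data points below are made up.
import numpy as np

X = np.array([[1.0, 3.0], [1.0, 4.0], [1.0, 5.0],
              [1.5, 3.0], [1.5, 4.0], [1.5, 5.0],
              [2.0, 3.0], [2.0, 4.0], [2.0, 5.0]])          # factor levels
y = np.array([0.40, 0.50, 0.46, 0.52, 0.62, 0.55, 0.48, 0.57, 0.50])  # responses

# Design matrix for y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
x1, x2 = X[:, 0], X[:, 1]
D = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)

# Stationary point of the fitted quadratic (candidate optimum of the surface)
A = np.array([[2 * coef[3], coef[5]], [coef[5], 2 * coef[4]]])
b = -np.array([coef[1], coef[2]])
print("stationary point:", np.linalg.solve(A, b))
```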
New methods to analyse microarray data that partially lack a reference signal
Neeltje Carpaij, Ad C Fluit, Jodi A Lindsay, Marc JM Bonten, Rob JL Willems
BMC Genomics , 2009, DOI: 10.1186/1471-2164-10-522
Abstract: When only a single strain is used as a reference for a multistrain array, the accessory gene pool will be only partially represented by reference DNA, although these genes represent the genomic repertoire that can explain differences in virulence, pathogenicity, or transmissibility between strains. The lack of a reference makes interpretation of the data for these genes difficult and, if the test signal is low, they are often deleted from the analysis. We aimed to develop novel methods to determine the presence or divergence of genes in a Staphylococcus aureus multistrain PCR product microarray-based CGH approach for which reference DNA was not available for some probes. In this study we have developed 6 new methods to predict divergence and presence of all genes spotted on a multistrain Staphylococcus aureus DNA microarray, published previously, including those gene spots that lack reference signals. When considering specificity and PPV (i.e. the false-positive rate) as the most important criteria for evaluating these methods, the method that defined gene presence based on a signal at least twice as high as the background and higher than the reference signal (method 4) had the best test characteristics. For this method, specificity was 100% and 82% for MRSA252 (compared to the GACK method) and all spots (compared to sequence data), respectively, and PPV was 100% and 76% for MRSA252 (compared to the GACK method) and all spots (compared to sequence data), respectively. A definition of gene presence based on a signal at least twice as high as the background and higher than the reference signal (method 4) had the best test characteristics, allowing the analysis of 6-17% more of the genes not present in the reference strain. This method is recommended for analysing microarray data that partially lack a reference signal. Comparative Genomic Hybridisation (CGH) microarray studies are applied to identify genetic diversity in both eukaryotes and prokaryotes [1-8]. In bacteria microarray-
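Method 4 as worded in the abstract reduces to a one-line presence call, sketched below with hypothetical spot values.

```python
# A direct reading of "method 4" as described above: call a gene present when its
# test signal is at least twice the background and also exceeds the reference
# signal; otherwise call it divergent/absent. Thresholds follow the abstract's
# wording; gene names and signal values are hypothetical.
def call_gene_presence(test_signal, background, reference_signal):
    return test_signal >= 2 * background and test_signal > reference_signal

spots = [
    {"gene": "geneA", "test": 1800.0, "bg": 300.0, "ref": 900.0},
    {"gene": "geneB", "test": 450.0,  "bg": 300.0, "ref": 900.0},
]
for s in spots:
    status = "present" if call_gene_presence(s["test"], s["bg"], s["ref"]) else "divergent/absent"
    print(s["gene"], status)
```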
Learning Methods for Combining Linguistic Indicators to Classify Verbs  [PDF]
Eric V. Siegel
Computer Science , 1997,
Abstract: Fourteen linguistically-motivated numerical indicators are evaluated for their ability to categorize verbs as either states or events. The values for each indicator are computed automatically across a corpus of text. To improve classification performance, machine learning techniques are employed to combine multiple indicators. Three machine learning methods are compared for this task: decision tree induction, a genetic algorithm, and log-linear regression.
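One of the three learners compared, log-linear regression, can be sketched directly as a logistic-regression combiner over the numerical indicators; the indicator values and labels below are invented, not the paper's corpus statistics.

```python
# Sketch of combining several numerical linguistic indicators into a single
# state/event classifier with log-linear (logistic) regression, one of the three
# learners compared in the paper. All numbers are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: per-verb indicator values (e.g. frequency of the progressive, of "not",
# of temporal adverbs); label 1 = event, 0 = state.
X = np.array([[0.30, 0.05, 0.20],
              [0.02, 0.40, 0.01],
              [0.25, 0.10, 0.18],
              [0.01, 0.35, 0.03]])
y = np.array([1, 0, 1, 0])

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.28, 0.07, 0.15]]))  # -> array([1]), i.e. "event"
```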
Comparing and Combining Methods for Automatic Query Expansion  [PDF]
José R. Pérez-Agüera,Lourdes Araujo
Computer Science , 2008,
Abstract: Query expansion is a well known method to improve the performance of information retrieval systems. In this work we have tested different approaches to extract the candidate query terms from the top ranked documents returned by the first-pass retrieval. One of them is the cooccurrence approach, based on measures of cooccurrence of the candidate and the query terms in the retrieved documents. The other one, the probabilistic approach, is based on the probability distribution of terms in the collection and in the top ranked set. We compare the retrieval improvement achieved by expanding the query with terms obtained with different methods belonging to both approaches. Besides, we have developed a naïve combination of both kinds of method, with which we have obtained results that improve those obtained with any of them separately. This result confirms that the information provided by each approach is of a different nature and, therefore, can be used in a combined manner.
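The naive combination mentioned in the abstract is not specified in detail here, so the sketch below shows one straightforward reading: normalize each approach's candidate-term scores and average them before picking the top expansion terms. All scores are placeholders.

```python
# Naive combination of the two expansion-term scoring approaches described above:
# normalize each method's scores to [0, 1], average them, and expand the query
# with the top-ranked candidates. Scores below are invented for illustration.
def combine_scores(cooccurrence_scores, probabilistic_scores, top_k=2):
    def normalize(scores):
        lo, hi = min(scores.values()), max(scores.values())
        return {t: (v - lo) / (hi - lo) if hi > lo else 0.0 for t, v in scores.items()}
    co, prob = normalize(cooccurrence_scores), normalize(probabilistic_scores)
    combined = {t: 0.5 * (co.get(t, 0.0) + prob.get(t, 0.0)) for t in set(co) | set(prob)}
    return sorted(combined, key=combined.get, reverse=True)[:top_k]

cooc = {"retrieval": 0.9, "index": 0.4, "ranking": 0.7}   # co-occurrence scores
prob = {"retrieval": 2.1, "index": 1.8, "ranking": 0.3}   # probabilistic scores
print(combine_scores(cooc, prob))  # ['retrieval', 'index']
```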
Combining Neural Methods and Knowledge-Based Methods in Accident Management  [PDF]
Miki Sirola,Jaakko Talonen
Advances in Artificial Neural Systems , 2012, DOI: 10.1155/2012/534683
Abstract: Accident management became a popular research issue in the early 1990s. Computerized decision support was studied from many points of view. Early fault detection and information visualization remain key issues in accident management today. In this paper we give a brief review of this research history, mostly from the last two decades, including severe accident management. The authors' studies are set against the state of the art. The self-organizing map method is combined with other, more or less traditional methods. Neural methods used together with knowledge-based methods constitute the methodological base for the presented decision support prototypes. Two application examples with modern decision support visualizations are introduced in more detail. A case example of detecting a pressure drift in a boiling water reactor by multivariate methods, including innovative visualizations, is studied in detail. Promising results in early fault detection are achieved. The operators are provided with added information value so that they can detect anomalies at an early stage. We provide the plant staff with a methodological tool set, which can be combined in various ways depending on the special needs of each case. 1. Introduction Accident management grew into a distinct and popular research branch in the early 1990s. This trend was a kind of delayed reflection of the two serious industrial accidents of the 1980s, in Bhopal (1984) and in Chernobyl (1986). It was noticed that most of the earlier studies of abnormal events did not cover severe accident cases very well. Already the Three Mile Island accident (1979) had made the nuclear power plant control room a major focus for studies of human factors, human reliability, and man-machine interface technology [1]. The Fukushima accident in 2011 has raised the accident management issue again, although the nature and origin of this accident were completely different. The problem area in the 1990s was identified as beginning with information needs and reliability and being completed with accident mitigation. Presentation methods and information structuring were identified as central issues, as the human being was often considered to be the weakest link in the safety systems. The fault diagnosis of abnormal events and the support of operator decision making naturally completed this entity [2]. Computerized accident management was studied in the 1990s, for instance, in the OECD Halden Project, and prototyping systems including also strategic planning features in operator support or technical support
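As a simplified stand-in for the SOM-based early fault detection described above, the sketch below trains a self-organizing map on invented "normal operation" data and flags new samples whose quantization error exceeds a threshold; it assumes the third-party minisom package and is not the authors' prototype.

```python
import numpy as np
from minisom import MiniSom  # assumed third-party dependency (pip install minisom)

rng = np.random.default_rng(0)
# Invented "normal operation" process data: two measurements on a similar scale.
normal = rng.normal(loc=[70.0, 50.0], scale=[1.0, 1.0], size=(500, 2))

som = MiniSom(6, 6, input_len=2, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(normal, 1000)

def quantization_error(sample):
    """Distance from a sample to its best-matching unit on the trained map."""
    bmu_weights = som.get_weights()[som.winner(sample)]
    return float(np.linalg.norm(sample - bmu_weights))

# Alarm threshold: 99th percentile of quantization error on normal data.
threshold = np.quantile([quantization_error(s) for s in normal], 0.99)

drifted = np.array([66.0, 50.0])   # a measurement drifting away from normal operation
print(quantization_error(drifted) > threshold)   # expected: True -> early warning
```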