Search Results: 1 - 10 of 408 matches for "Hannu Toivonen"
All listed articles are free for downloading (OA Articles)
Multivariable adaptive control
Hannu T. Toivonen
Modeling, Identification and Control, 1984, DOI: 10.4173/mic.1984.1.2
Abstract: In recent years there has been extensive interest in adaptive and self-tuning controllers, and there is a vast literature on various adaptive algorithms. The purpose of the present paper is to review some common approaches to multivariable adaptive control. The presentation concentrates on procedures which are based on stochastic controller design methods, but some close connections with other design techniques are also indicated.
Multivariable controller for discrete stochastic amplitude-constrained systems
Hannu T. Toivonen
Modeling, Identification and Control, 1983, DOI: 10.4173/mic.1983.2.2
Abstract: A sub-optimal multivariable controller for discrete stochastic amplitude-constrained systems is presented. In this approach the regulator structure is restricted to the class of linear saturated feedback laws. The stationary covariances of the controlled system are evaluated by approximating the stationary probability distribution of the state by a Gaussian distribution. An algorithm for minimizing a quadratic loss function is given, and examples are presented to illustrate the performance of the sub-optimal controller.
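As an illustration of the controller structure described in this abstract, the following sketch (Python with NumPy) simulates a saturated linear feedback law u_k = sat(-L x_k) on a small discrete stochastic system and estimates the stationary quadratic loss by Monte Carlo simulation. The plant matrices, weights and gain are invented for illustration, and the Monte Carlo estimate stands in for the Gaussian approximation used in the paper.

import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # hypothetical plant, not from the paper
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                            # state weight in the quadratic loss
R = np.array([[0.1]])                    # input weight
u_max = 1.0                              # amplitude constraint |u| <= u_max
L = np.array([[2.0, 1.0]])               # candidate linear feedback gain

def stationary_loss(L, steps=50_000, seed=0):
    """Monte Carlo estimate of E[x'Qx + u'Ru] under u = sat(-L x)."""
    rng = np.random.default_rng(seed)
    x = np.zeros((2, 1))
    total = 0.0
    for _ in range(steps):
        u = np.clip(-L @ x, -u_max, u_max)          # saturated linear feedback
        total += (x.T @ Q @ x + u.T @ R @ u).item()
        x = A @ x + B @ u + rng.normal(scale=0.1, size=(2, 1))
    return total / steps

print("estimated stationary loss:", stationary_loss(L))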
A survey of data mining methods for linkage disequilibrium mapping
Päivi Onkamo, Hannu Toivonen
Human Genomics, 2006, DOI: 10.1186/1479-7364-2-5-336
Abstract:
HaploRec: efficient and accurate large-scale reconstruction of haplotypes
Lauri Eronen, Floris Geerts, Hannu Toivonen
BMC Bioinformatics, 2006, DOI: 10.1186/1471-2105-7-542
Abstract: We define three novel statistical models and give an efficient algorithm for haplotype reconstruction, jointly called HaploRec. HaploRec is based on exploiting local regularities conserved in haplotypes: it reconstructs haplotypes so that they have maximal local coherence. This approach – not assuming statistical dependence for remotely located markers – has two useful properties: it is well-suited for sparse marker maps, such as those used in gene mapping, and it can actually take advantage of long maps.
Our experimental results with simulated and real data show that HaploRec is a powerful method for the large-scale haplotyping needed in association studies. With sample sizes large enough for gene mapping it appeared to be the best compared to all other tested methods (Phase, fastPhase, PL-EM, Snphap, Gerbil; simulated data); with small samples it was competitive with the best available methods (real data). HaploRec is several orders of magnitude faster than Phase and comparable to the other methods; the running times are roughly linear in the number of subjects and the number of markers. HaploRec is publicly available at http://www.cs.helsinki.fi/group/genetics/haplotyping.html.
The problem we consider is haplotype reconstruction: given the genotypes of a sample of individuals, the task is to predict the most likely haplotype pair for each individual. Computational haplotype reconstruction methods are based on statistical dependency between closely located markers, known as linkage disequilibrium. Many computational methods have been developed for the reconstruction of haplotypes. Some of these methods do not rely on the statistical modeling of the haplotypes [1-3], but most of them, like our proposed algorithm HaploRec, do [4-10]. For a review of these and other haplotyping methods we refer to [11-13]. Laboratory techniques are being developed for direct molecular haplotyping (see, e.g., [14,15]), but these techniques are not mature yet and are currently …
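To make the haplotype reconstruction task concrete, the sketch below (plain Python, toy data) enumerates the haplotype pairs compatible with one genotype and scores them by the frequency of short haplotype windows in a reference pool. The window-frequency score is only a crude stand-in for HaploRec's local-coherence models, and all marker data are invented.

from itertools import product

def compatible_haplotype_pairs(genotype):
    """Enumerate haplotype pairs (h1, h2) consistent with a genotype string.
    '0'/'1' mark homozygous sites, '2' marks a heterozygous site."""
    het_positions = [i for i, g in enumerate(genotype) if g == "2"]
    pairs = []
    for phase in product("01", repeat=len(het_positions)):
        h1, h2 = list(genotype), list(genotype)
        for pos, allele in zip(het_positions, phase):
            h1[pos] = allele
            h2[pos] = "1" if allele == "0" else "0"
        pairs.append(("".join(h1), "".join(h2)))
    return pairs

def fragment_score(haplotype, fragment_counts, k=3):
    """Score a haplotype by the frequency of its length-k windows,
    a crude stand-in for the local coherence idea described above."""
    return sum(fragment_counts.get(haplotype[i:i + k], 0)
               for i in range(len(haplotype) - k + 1))

# Toy usage: pick the pair whose windows are most common in a reference pool.
reference = ["0011", "0011", "0101", "1100"]
counts = {}
for h in reference:
    for i in range(len(h) - 2):
        counts[h[i:i + 3]] = counts.get(h[i:i + 3], 0) + 1

pairs = compatible_haplotype_pairs("0212")   # two heterozygous sites
best = max(pairs, key=lambda p: fragment_score(p[0], counts) + fragment_score(p[1], counts))
print(best)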
Biomine: predicting links between biological entities using network models of heterogeneous databases
Lauri MA Eronen, Hannu TT Toivonen
BMC Bioinformatics, 2012, DOI: 10.1186/1471-2105-13-119
Abstract: Biomine is a system that integrates cross-references from several biological databases into a graph model with multiple types of edges, such as protein interactions, gene-disease associations and gene ontology annotations. Edges are weighted based on their type, reliability, and informativeness. We present Biomine and evaluate its performance in link prediction, where the goal is to predict pairs of nodes that will be connected in the future, based on current data. In particular, we formulate protein interaction prediction and disease gene prioritization tasks as instances of link prediction. The predictions are based on a proximity measure computed on the integrated graph. We consider and experiment with several such measures, and perform a parameter optimization procedure where different edge types are weighted to optimize link prediction accuracy. We also propose a novel method for disease-gene prioritization, defined as finding a subset of candidate genes that cluster together in the graph. We experimentally evaluate Biomine by predicting future annotations in the source databases and prioritizing lists of putative disease genes.
The experimental results show that Biomine has strong potential for predicting links when a set of selected candidate links is available. The predictions obtained using the entire Biomine dataset are shown to clearly outperform ones obtained using any single source of data alone, when different types of links are suitably weighted. In the gene prioritization task, an established reference set of disease-associated genes is useful, but the results show that under favorable conditions, Biomine can also perform well when no such information is available.
The Biomine system is a proof of concept. Its current version contains 1.1 million entities and 8.1 million relations between them, with focus on human genetics. Some of its functionalities are available in a public query interface at http://biomine.cs.helsinki.fi, allowing searching …
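One simple proximity measure of the kind referred to above is the probability of the most reliable path between two nodes. The sketch below (plain Python) computes it on a tiny invented heterogeneous graph with a Dijkstra-style search over -log edge weights; it is not the Biomine implementation, and the node names and weights are made up.

import heapq, math

# Toy heterogeneous graph: node -> {neighbor: edge weight in (0, 1]},
# where the weight is read as the reliability of the cross-reference.
graph = {
    "gene:BRCA1":            {"protein:P38398": 0.9, "disease:breast_cancer": 0.7},
    "protein:P38398":        {"gene:BRCA1": 0.9, "go:DNA_repair": 0.8},
    "disease:breast_cancer": {"gene:BRCA1": 0.7, "gene:BRCA2": 0.6},
    "gene:BRCA2":            {"disease:breast_cancer": 0.6, "go:DNA_repair": 0.5},
    "go:DNA_repair":         {"protein:P38398": 0.8, "gene:BRCA2": 0.5},
}

def best_path_probability(graph, source, target):
    """Proximity of two nodes as the product of edge weights on the most
    reliable path, found by Dijkstra's algorithm on -log(weight) costs."""
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == target:
            return math.exp(-d)
        if d > dist.get(node, math.inf):
            continue
        for neighbor, weight in graph[node].items():
            nd = d - math.log(weight)
            if nd < dist.get(neighbor, math.inf):
                dist[neighbor] = nd
                heapq.heappush(queue, (nd, neighbor))
    return 0.0

print(best_path_probability(graph, "protein:P38398", "gene:BRCA2"))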
The Use of Weighted Graphs for Large-Scale Genome Analysis
Fang Zhou, Hannu Toivonen, Ross D. King
PLOS ONE, 2014, DOI: 10.1371/journal.pone.0089618
Abstract: There is an acute need for better tools to extract knowledge from the growing flood of sequence data. For example, thousands of complete genomes have been sequenced, and their metabolic networks inferred. Such data should enable a better understanding of evolution. However, most existing network analysis methods are based on pair-wise comparisons, and these do not scale to thousands of genomes. Here we propose the use of weighted graphs as a data structure to enable large-scale phylogenetic analysis of networks. We have developed three types of weighted graph for enzymes: taxonomic (these summarize phylogenetic importance), isoenzymatic (these summarize enzymatic variety/redundancy), and sequence-similarity (these summarize sequence conservation); and we applied these types of weighted graph to survey prokaryotic metabolism. To demonstrate the utility of this approach we have compared and contrasted the large-scale evolution of metabolism in Archaea and Eubacteria. Our results provide evidence for limits to the contingency of evolution.
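The following sketch shows the general idea of summarizing many genomes in a single weighted enzyme graph. The aggregation rule used here (weighting an edge by the fraction of genomes in which two enzymes co-occur) is only a stand-in for the taxonomic weighting described above, and the EC numbers and genome sets are invented.

from itertools import combinations
from collections import Counter

# Hypothetical per-genome enzyme sets (EC numbers chosen only for illustration).
genomes = {
    "genome_A": {"1.1.1.1", "2.7.1.1", "4.1.2.13"},
    "genome_B": {"1.1.1.1", "2.7.1.1"},
    "genome_C": {"2.7.1.1", "4.1.2.13"},
}

def cooccurrence_graph(genomes):
    """Summarize many genomes as one weighted graph: an edge between two
    enzymes is weighted by the fraction of genomes containing both.
    This rule is a stand-in, not the paper's exact taxonomic weighting."""
    pair_counts = Counter()
    for enzymes in genomes.values():
        for a, b in combinations(sorted(enzymes), 2):
            pair_counts[(a, b)] += 1
    n = len(genomes)
    return {pair: count / n for pair, count in pair_counts.items()}

for (a, b), w in sorted(cooccurrence_graph(genomes).items()):
    print(f"{a} -- {b}: weight {w:.2f}")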
Stochastic Modelling and Self Tuning Control of a Continuous Cement Raw Material Mixing System
T. Westerlund, Hannu T. Toivonen, K-E Nyman
Modeling, Identification and Control, 1980, DOI: 10.4173/mic.1980.1.2
Abstract: The control of a continuously operating system for cement raw material mixing is studied. The purpose of the mixing system is to maintain a constant composition of the cement raw meal for the kiln despite variations of the raw material compositions. Experimental knowledge of the process dynamics and the characteristics of the various disturbances is used for deriving a stochastic model of the system. The optimal control strategy is then obtained as a minimum variance strategy. The control problem is finally solved using a self-tuning minimum variance regulator, and results from a successful implementation of the regulator are given.
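A minimal self-tuning minimum variance loop of the kind described above can be sketched as follows (Python with NumPy): recursive least squares estimates a first-order model online, and a certainty-equivalence controller drives the predicted output to zero. The first-order plant, noise level and dither are invented for illustration and are not the cement raw material mixing model of the paper.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical first-order plant y[t+1] = a*y[t] + b*u[t] + e[t+1].
a_true, b_true, noise_std = 0.8, 0.5, 0.1

theta = np.array([0.0, 1.0])   # recursive least-squares estimate of (a, b)
P = np.eye(2) * 100.0          # RLS covariance
y, u = 0.0, 0.0
outputs = []

for t in range(2000):
    y_next = a_true * y + b_true * u + rng.normal(scale=noise_std)

    # Recursive least-squares update from the regressor phi = (y[t], u[t]).
    phi = np.array([y, u])
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (y_next - phi @ theta)
    P = P - np.outer(K, phi @ P)

    # Certainty-equivalence minimum variance control: cancel the predicted
    # output, with a small dither for excitation.
    a_hat, b_hat = theta
    y = y_next
    u = (-a_hat * y / b_hat + 0.01 * rng.normal()) if abs(b_hat) > 1e-3 else 0.0
    outputs.append(y)

print("output variance under the self-tuning regulator:", np.var(outputs[500:]))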
Constrained hidden Markov models for population-based haplotyping
Niels Landwehr, Taneli Mielikäinen, Lauri Eronen, Hannu Toivonen
BMC Bioinformatics, 2007, DOI: 10.1186/1471-2105-8-s2-s9
Abstract: Background: Haplotype reconstruction is the problem of resolving the hidden phase information in genotype data obtained from laboratory measurements. Solving this problem is an important intermediate step in gene association studies, which seek to uncover the genetic basis of complex diseases. We propose a novel approach for haplotype reconstruction based on constrained hidden Markov models. Models are constructed by incrementally refining and regularizing the structure of a simple generative model for genotype data under Hardy-Weinberg equilibrium. Results: The proposed method is evaluated on real-world and simulated population data. Results show that it is competitive with other recently proposed methods in terms of reconstruction accuracy, while offering a particularly good trade-off between computational costs and quality of results for large datasets. Conclusion: Relatively simple probabilistic approaches for haplotype reconstruction based on structured hidden Markov models are competitive with more complex, well-established techniques in this field.
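To illustrate how a hidden Markov model can resolve phase, the sketch below (Python with NumPy) runs the Viterbi algorithm over a toy HMM whose hidden states are ordered allele pairs and whose transitions simply penalize allele switches on either haplotype. The transition and emission probabilities are invented and do not reproduce the constrained model structure proposed in the paper.

import numpy as np

# Hidden state at each marker: the ordered allele pair (h1, h2).
# Observations are unphased genotypes coded 0, 1 (heterozygous) or 2.
states = [(0, 0), (0, 1), (1, 0), (1, 1)]

def emission(genotype, state):
    return 1.0 if sum(state) == genotype else 0.0

def transition(prev, curr, switch_penalty=0.2):
    """Invented transitions: penalize each haplotype that changes allele."""
    p = 1.0
    for hap in (0, 1):
        p *= switch_penalty if prev[hap] != curr[hap] else 1.0 - switch_penalty
    return p

def viterbi_phase(genotypes):
    """Most likely ordered haplotype pair for one genotype vector."""
    n = len(states)
    delta = np.array([emission(genotypes[0], s) / n for s in states])
    back = []
    for g in genotypes[1:]:
        scores = np.array([[delta[i] * transition(states[i], states[j]) * emission(g, states[j])
                            for i in range(n)] for j in range(n)])
        back.append(scores.argmax(axis=1))
        delta = scores.max(axis=1)
    path = [int(delta.argmax())]
    for pointers in reversed(back):
        path.append(int(pointers[path[-1]]))
    path.reverse()
    h1 = "".join(str(states[s][0]) for s in path)
    h2 = "".join(str(states[s][1]) for s in path)
    return h1, h2

print(viterbi_phase([1, 2, 1, 0]))   # genotypes: het, hom-alt, het, hom-ref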
DopeLearning: A Computational Approach to Rap Lyrics Generation
Eric Malmi, Pyry Takala, Hannu Toivonen, Tapani Raiko, Aristides Gionis
Computer Science, 2015
Abstract: Writing rap lyrics requires both creativity, to construct a meaningful and interesting story, and lyrical skills, to produce complex rhyme patterns, which are the cornerstone of a good flow. We present a method for capturing both of these aspects. Our approach is based on two machine-learning techniques: the RankSVM algorithm, and a deep neural network model with a novel structure. For the problem of distinguishing the real next line from a randomly selected one, we achieve 82% accuracy. We employ the resulting prediction method for creating new rap lyrics by combining lines from existing songs. In terms of quantitative rhyme density, the produced lyrics outperform the best human rappers by 21%. The results highlight the benefit of our rhyme density metric and our innovative predictor of next lines.
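A rough, letter-based approximation of an end-rhyme density measure of this kind is easy to sketch (plain Python): compare the vowel suffixes of consecutive line-ending words and average the match lengths. The paper works with phonetic transcriptions and multisyllabic assonance, so the vowels-as-letters shortcut and the sample lines below are purely illustrative.

def vowel_skeleton(word, vowels="aeiouy"):
    """Sequence of vowels in a word, a crude stand-in for its phonetic form."""
    return [c for c in word.lower() if c in vowels]

def rhyme_length(word_a, word_b):
    """Length of the longest common vowel suffix of two words."""
    a, b = vowel_skeleton(word_a), vowel_skeleton(word_b)
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def rhyme_density(lines):
    """Average end-rhyme length over consecutive line pairs; a rough
    approximation of the kind of rhyme density metric described above."""
    pairs = list(zip(lines, lines[1:]))
    return sum(rhyme_length(x.split()[-1], y.split()[-1]) for x, y in pairs) / len(pairs)

lyrics = ["I keep the flow steady like a metronome tonight",
          "every single syllable is locked inside the light",
          "switching up the pattern when the beat begins to fight"]
print(rhyme_density(lyrics))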
SegMine workflows for semantic microarray data analysis in Orange4WS
Vid Podpečan, Nada Lavrač, Igor Mozetič, Petra Novak, Igor Trajkovski, Laura Langohr, Kimmo Kulovesi, Hannu Toivonen, Marko Petek, Helena Motaln, Kristina Gruden
BMC Bioinformatics, 2011, DOI: 10.1186/1471-2105-12-416
Abstract: We present a new methodology, SegMine, for semantic analysis of microarray data by exploiting general biological knowledge, and a new workflow environment, Orange4WS, with integrated support for web services in which the SegMine methodology is implemented. The SegMine methodology consists of two main steps. First, the semantic subgroup discovery algorithm is used to construct elaborate rules that identify enriched gene sets. Then, a link discovery service is used for the creation and visualization of new biological hypotheses. The utility of SegMine, implemented as a set of workflows in Orange4WS, is demonstrated in two microarray data analysis applications. In the analysis of senescence in human stem cells, the use of SegMine resulted in three novel research hypotheses that could improve understanding of the underlying mechanisms of senescence and identification of candidate marker genes.
Compared to the available data analysis systems, SegMine offers improved hypothesis generation and data interpretation for bioinformatics in an easy-to-use integrated workflow environment.
Systems biology aims at system-level understanding of biological systems, that is, understanding of system structures, dynamics, control methods, and design methods [1]. Biologists collect large quantities of data from in vitro and in vivo experiments, with gene expression microarrays being the most widely used high-throughput platform [2]. Since the amount of available data exceeds human analytical capabilities, technologies that help analyzing and extracting useful information from such large amounts of data need to be developed and used.
The field of microarray data analysis has shifted emphasis from methods for identifying individual differentially expressed genes to methods for identifying differentially expressed gene categories (enriched gene sets). A gene set is enriched if the member genes are statistically significantly differentially expressed compared to the rest of the genes. One of the …
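The enrichment notion used above reduces to a standard over-representation test. The sketch below (pure Python) scores a gene set by the hypergeometric probability of containing at least the observed number of differentially expressed genes; it is a generic enrichment test with invented gene identifiers, not SegMine's semantic subgroup discovery step.

from math import comb

def hypergeometric_enrichment(all_genes, de_genes, gene_set):
    """P-value for over-representation of differentially expressed (DE) genes
    in a gene set: P(X >= k) under the hypergeometric distribution."""
    M = len(all_genes)                     # population size
    n = len(de_genes & all_genes)          # DE genes in the population
    N = len(gene_set & all_genes)          # size of the gene set
    k = len(gene_set & de_genes)           # DE genes inside the gene set
    return sum(comb(n, i) * comb(M - n, N - i)
               for i in range(k, min(n, N) + 1)) / comb(M, N)

# Invented identifiers for illustration only.
all_genes = {f"g{i}" for i in range(100)}
de_genes = {"g1", "g2", "g3", "g4", "g5", "g6", "g7", "g8", "g9", "g10"}
candidate_set = {"g1", "g2", "g3", "g4", "g50", "g60"}
print(hypergeometric_enrichment(all_genes, de_genes, candidate_set))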