oalib
Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
Agalma: an automated phylogenomics workflow  [PDF]
Casey W. Dunn, Mark Howison, Felipe Zapata
Quantitative Biology, 2013, DOI: 10.1186/1471-2105-14-330
Abstract: In the past decade, transcriptome data have become an important component of many phylogenetic studies. Phylogenetic studies now regularly include genes from newly sequenced transcriptomes, as well as publicly available transcriptomes and genomes. Implementing such a phylogenomic study, however, is computationally intensive, requires the coordinated use of many complex software tools, and includes multiple steps for which no published tools exist. Phylogenomic studies have therefore been manual or semiautomated. In addition to taking considerable user time, this makes phylogenomic analyses difficult to reproduce, compare, and extend. In addition, methodological improvements made in the context of one study often cannot be easily applied and evaluated in the context of other studies. We present Agalma, an automated tool that conducts phylogenomic analyses. The user provides raw Illumina transcriptome data, and Agalma produces annotated assemblies, aligned gene sequence matrices, a preliminary phylogeny, and detailed diagnostics that allow the investigator to make extensive assessments of intermediate analysis steps and the final results. Sequences from other sources, such as externally assembled genomes and transcriptomes, can also be incorporated in the analyses. Agalma tracks provenance, profiles processor and memory use, records diagnostics, manages metadata, and generates rich HTML reports for all stages of the analysis. Agalma includes a test data set and a built-in test analysis of these data. In addition to describing Agalma, here we present a sample analysis of a larger seven-taxon data set. Agalma is available for download at https://bitbucket.org/caseywdunn/agalma. Agalma allows complex phylogenomic analyses to be implemented and described unambiguously as a series of high-level commands. This will enable phylogenomic studies to be readily reproduced, modified, and extended.
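On that last point, describing an analysis as a series of high-level commands is what makes reproduction straightforward. As a rough illustration of the style, with hypothetical stage names rather than Agalma's actual CLI, a provenance-tracked pipeline driver might look like this:

```python
# Hypothetical sketch of a phylogenomic pipeline expressed as a series of
# high-level stages, in the spirit of Agalma's design. Stage names are
# illustrative only; this is not Agalma's actual command set.
import json
import subprocess
import time

STAGES = ["assemble", "annotate", "homologize", "align", "genetree"]

def run_stage(stage: str, sample: str, log: list) -> None:
    """Run one pipeline stage and record provenance metadata."""
    start = time.time()
    # Placeholder invocation; a real pipeline would call the tool here.
    subprocess.run(["echo", f"{stage}:{sample}"], check=True)
    log.append({"stage": stage, "sample": sample,
                "elapsed_s": round(time.time() - start, 3)})

def main() -> None:
    provenance: list = []
    for stage in STAGES:
        run_stage(stage, "sample_01", provenance)
    # The provenance log is what makes the run auditable and repeatable.
    print(json.dumps(provenance, indent=2))

if __name__ == "__main__":
    main()
```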
Unsupervised decoding of long-term, naturalistic human neural recordings with automated video and audio annotations  [PDF]
Nancy X. R. Wang, Jared D. Olson, Jeffrey G. Ojemann, Rajesh P. N. Rao, Bingni W. Brunton
Quantitative Biology, 2015
Abstract: Fully automated decoding of human activities and intentions from direct neural recordings is a tantalizing challenge in brain-computer interfacing. Most ongoing efforts have focused on training decoders on specific, stereotyped tasks in laboratory settings. Implementing brain-computer interfaces (BCIs) in natural settings, however, requires adaptive strategies and scalable algorithms that need minimal supervision. Here we propose an unsupervised approach to decoding neural states from human brain recordings acquired in a naturalistic context. We demonstrate our approach on continuous long-term electrocorticographic (ECoG) data recorded over many days from the brain surface of subjects in a hospital room, with simultaneous audio and video recordings. We first discovered clusters in the high-dimensional ECoG recordings and then annotated coherent clusters using speech and movement labels extracted automatically from the audio and video recordings. To our knowledge, this represents the first time techniques from computer vision and speech processing have been used for natural ECoG decoding. Our results show that our unsupervised approach can discover distinct behaviors from ECoG data, including moving, speaking, and resting, and we verify its accuracy by comparison against manual annotations. By projecting the discovered cluster centers back onto the brain, this technique opens the door to automated functional brain mapping in natural settings.
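The cluster-then-annotate idea is easy to illustrate on toy data. Below is a minimal sketch, assuming scikit-learn; the feature construction and behavior labels are entirely invented and this is not the authors' pipeline:

```python
# Cluster high-dimensional "neural" features without supervision, then
# name each discovered cluster by its dominant audio/video-derived label.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy features: 3 latent behaviors, 512 dimensions (e.g. channels x bands).
n_per, n_feat = 200, 512
centers = rng.normal(size=(3, n_feat))
X = np.vstack([c + 0.5 * rng.normal(size=(n_per, n_feat)) for c in centers])
behavior = np.repeat(["rest", "speak", "move"], n_per)  # from audio/video

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Annotate each discovered cluster with its majority behavior label.
for k in range(3):
    labels, counts = np.unique(behavior[clusters == k], return_counts=True)
    print(f"cluster {k}: mostly '{labels[np.argmax(counts)]}'")
```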
An automated workflow for enhancing microbial bioprocess optimization on a novel microbioreactor platform  [cached]
Peter Rohe, Deepak Venkanna, Britta Kleine, Roland Freudl
Microbial Cell Factories, 2012, DOI: 10.1186/1475-2859-11-144
Abstract: Background: High-throughput methods are widely used for strain screening, but they effectively yield only binary information about high or low productivity. Achieving quantitative and scalable parameters for fast bioprocess development is much more challenging, especially for heterologous protein production, where the nature of the foreign protein makes it impossible to predict, for example, the best expression construct, secretion signal peptide, inducer concentration, induction time, temperature, or substrate feed rate in fed-batch operation, to name only a few. A large number of systematic experiments is therefore necessary to find the best conditions for heterologous expression of each new protein of interest. Results: To increase throughput in bioprocess development, we used a microtiter-plate-based cultivation system (Biolector) fully integrated into a liquid-handling platform enclosed in a laminar-airflow housing. This automated cultivation platform was used to optimize the secretory production of a cutinase from Fusarium solani pisi with Corynebacterium glutamicum. Online monitoring of biomass, dissolved oxygen, and pH in each microtiter-plate well makes it possible to trigger sampling or dosing events with the pipetting robot, allowing reliable selection of the best-performing cutinase producers. Further automated methods, such as media optimization and induction profiling, were also developed and validated. All biological and bioprocess parameters were optimized exclusively at microtiter-plate scale, and the results scaled well to 1 L and 20 L stirred-tank bioreactors. Conclusions: Optimizing heterologous protein expression in microbial systems currently requires extensive testing of biological and bioprocess-engineering parameters. This can be boosted efficiently with a microtiter-plate cultivation setup embedded in a liquid-handling system, which provides more throughput through parallelization and automation. Owing to improved statistics from replicate cultivations, automated downstream analysis, and scalable process information, this setup outperforms standard microtiter-plate cultivation.
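To make the trigger mechanism concrete, here is a minimal sketch with invented thresholds and a two-well toy plate; it illustrates the event-dispatch logic only and is not the authors' software:

```python
# Online signals from each well (biomass, dissolved oxygen) drive
# sampling or induction actions dispatched to a pipetting robot.
from dataclasses import dataclass

@dataclass
class Well:
    well_id: str
    biomass: float           # scattered-light units (arbitrary)
    dissolved_oxygen: float  # % saturation
    induced: bool = False
    sampled: bool = False

INDUCTION_BIOMASS = 5.0  # hypothetical trigger threshold
SAMPLING_DO_LEVEL = 30.0  # hypothetical DO threshold

def dispatch(well: Well) -> str | None:
    """Return the robot action triggered by this well's online signals."""
    if not well.induced and well.biomass >= INDUCTION_BIOMASS:
        well.induced = True
        return f"induce {well.well_id}"  # e.g. dose inducer into the well
    if not well.sampled and well.dissolved_oxygen <= SAMPLING_DO_LEVEL:
        well.sampled = True
        return f"sample {well.well_id}"
    return None

plate = [Well("A1", 5.2, 55.0), Well("A2", 3.1, 28.0)]
for w in plate:
    action = dispatch(w)
    if action:
        print(action)  # -> induce A1, sample A2
```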
Automated Workflow for Preparation of cDNA for Cap Analysis of Gene Expression on a Single Molecule Sequencer  [PDF]
Masayoshi Itoh, Miki Kojima, Sayaka Nagao-Sato, Eri Saijo, Timo Lassmann, Mutsumi Kanamori-Katayama, Ai Kaiho, Marina Lizio, Hideya Kawaji, Piero Carninci, Alistair R. R. Forrest, Yoshihide Hayashizaki
PLOS ONE, 2012, DOI: 10.1371/journal.pone.0030809
Abstract: Background: Cap analysis of gene expression (CAGE) is a 5′ sequence-tag technology for globally determining transcription start sites in the genome and their expression levels; it has most recently been adapted to the HeliScope single-molecule sequencer. Despite significant simplifications, CAGE has until now remained a labour-intensive protocol. Methodology: In this study we set out to adapt the protocol to a robotic workflow, which would increase throughput and reduce handling. The automated CAGE cDNA preparation system we present here can prepare 96 ‘HeliScope ready’ CAGE cDNA libraries in 8 days, as opposed to 6 weeks for a manual operator. We compare the results obtained from the same RNA in manual libraries and across multiple automation batches to assess reproducibility. Conclusions: We show that the sequencing was highly reproducible and comparable to manual libraries, with an eightfold increase in productivity. The automated CAGE cDNA preparation system can prepare 96 CAGE sequencing samples simultaneously. Finally, we discuss how the system could be used for CAGE on the Illumina/SOLiD platforms, RNA-seq, and full-length cDNA generation.
SPIM Architecture for MVC based Web Applications  [PDF]
R. Sridaran, G. Padmavathi, K. Iyakutti, M. N. S. Mani
Computer Science, 2010
Abstract: The Model/View/Controller design pattern divides an application environment into three components that handle user interactions, computations, and output, respectively. This separation greatly favors architectural reusability. The pattern works well in a single address space but has not proven efficient for web applications, which involve multiple address spaces. Web applications force designers to decide, before the design phase commences, which components of the pattern will be partitioned between the server and the client(s). For any rapidly growing web application, it is very difficult to incorporate later changes in partitioning policy. One solution is to duplicate the Model and Controller components on both the server and the client(s); however, this may introduce further problems, such as delayed data fetches and security and scalability issues. To overcome this, we propose a new architecture, SPIM, that addresses the partitioning problem in an alternative way. SPIM shows substantial performance improvements when compared with a similar architecture.
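For readers unfamiliar with the pattern itself, here is a minimal sketch of the three components (purely illustrative; SPIM's contribution concerns where these pieces live across server and clients, not how they are coded):

```python
# Minimal Model/View/Controller separation in a single address space.
class Model:
    """Holds state and computation; knows nothing about presentation."""
    def __init__(self) -> None:
        self.count = 0
    def increment(self) -> None:
        self.count += 1

class View:
    """Renders model state; contains no business logic."""
    @staticmethod
    def render(model: Model) -> str:
        return f"count = {model.count}"

class Controller:
    """Translates user input into model updates, then asks View to render."""
    def __init__(self, model: Model, view: View) -> None:
        self.model, self.view = model, view
    def handle(self, event: str) -> str:
        if event == "click":
            self.model.increment()
        return self.view.render(self.model)

app = Controller(Model(), View())
print(app.handle("click"))  # count = 1
```

The web-partitioning question is then which of these three objects (or copies of them) sit on the server and which on each client.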
An automated and reproducible workflow for running and analyzing neural simulations using Lancet and IPython Notebook  [PDF]
Jean-Luc R. Stevens, James A. Bednar
Frontiers in Neuroinformatics, 2013, DOI: 10.3389/fninf.2013.00044
Abstract: Lancet is a new, simulator-independent Python utility for succinctly specifying, launching, and collating results from large batches of interrelated computationally demanding program runs. This paper demonstrates how to combine Lancet with IPython Notebook to provide a flexible, lightweight, and agile workflow for fully reproducible scientific research. This informal and pragmatic approach uses IPython Notebook to capture the steps in a scientific computation as it is gradually automated and made ready for publication, without mandating the use of any separate application that can constrain scientific exploration and innovation. The resulting notebook concisely records each step involved in even very complex computational processes that led to a particular figure or numerical result, allowing the complete chain of events to be replicated automatically. Lancet was originally designed to help solve problems in computational neuroscience, such as analyzing the sensitivity of a complex simulation to various parameters, or collecting the results from multiple runs with different random starting points. However, because it is never possible to know in advance what tools might be required in future tasks, Lancet has been designed to be completely general, supporting any type of program as long as it can be launched as a process and can return output in the form of files. For instance, Lancet is also heavily used by one of the authors in a separate research group for launching batches of microprocessor simulations. This general design will allow Lancet to continue supporting a given research project even as the underlying approaches and tools change.
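The core workflow pattern, declaring a parameter space, launching one process per combination, and collating the file output, can be sketched generically as below. This does not use Lancet's actual API; it only illustrates the batch-launch idea with standard-library tools:

```python
# Declare a parameter grid, run one process per combination, collate files.
import itertools
import subprocess
from pathlib import Path

params = {"rate": [0.1, 0.5], "seed": [1, 2, 3]}  # hypothetical sweep
outdir = Path("runs")
outdir.mkdir(exist_ok=True)

for values in itertools.product(*params.values()):
    combo = dict(zip(params.keys(), values))
    tag = "_".join(f"{k}{v}" for k, v in combo.items())
    out = outdir / f"{tag}.txt"
    # Stand-in for a simulator invocation; 'echo' keeps the sketch runnable.
    with out.open("w") as fh:
        subprocess.run(["echo", str(combo)], stdout=fh, check=True)

# Collation step: one line per run, ready for analysis in a notebook.
for f in sorted(outdir.glob("*.txt")):
    print(f.name, "->", f.read_text().strip())
```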
EquiNMF: Graph Regularized Multiview Nonnegative Matrix Factorization  [PDF]
Daniel Hidru, Anna Goldenberg
Computer Science, 2014
Abstract: Nonnegative matrix factorization (NMF) methods have proved powerful across a wide range of real-world clustering applications. Integrating multiple types of measurements for the same objects/subjects allows us to gain a deeper understanding of the data and to refine the clustering. We have developed EquiNMF, a novel graph-regularized multiview NMF-based method for data integration. The parameters of our method are set in a completely automated, data-specific, unsupervised fashion: a highly desirable property in real-world applications. We performed extensive experiments on multiview imaging data and show that EquiNMF consistently outperforms single-view NMF methods applied to concatenated data as well as multiview NMF methods with different types of regularization.
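As background for what multiview NMF optimizes, here is a minimal sketch of the plain baseline with a coefficient matrix H shared across views, using standard multiplicative updates. EquiNMF's graph regularization and automatic weighting are omitted, so this is not the paper's method:

```python
# Factor each view X_v ≈ W_v @ H with a shared nonnegative H, minimizing
# sum_v ||X_v - W_v H||_F^2 via multiplicative updates.
import numpy as np

def multiview_nmf(views, rank, n_iter=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n = views[0].shape[1]                      # all views share n samples
    Ws = [rng.random((X.shape[0], rank)) for X in views]
    H = rng.random((rank, n))
    for _ in range(n_iter):
        for v, X in enumerate(views):
            W = Ws[v]
            W *= (X @ H.T) / (W @ H @ H.T + eps)   # update view-specific W_v
        num = sum(W.T @ X for W, X in zip(Ws, views))
        den = sum(W.T @ W @ H for W in Ws) + eps
        H *= num / den                             # update shared H
    return Ws, H

rng = np.random.default_rng(1)
X1, X2 = rng.random((40, 30)), rng.random((25, 30))  # two views, 30 samples
Ws, H = multiview_nmf([X1, X2], rank=4)
print("cluster per sample:", H.argmax(axis=0)[:10])
```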
Rigid Multiview Varieties  [PDF]
Michael Joswig, Joe Kileel, Bernd Sturmfels, André Wagner
Mathematics, 2015
Abstract: The multiview variety from computer vision is generalized to images, taken by $n$ cameras, of points linked by a distance constraint. The resulting five-dimensional variety lives in a product of $2n$ projective planes. We determine its defining polynomial equations, and we explore generalizations of this variety to scenarios of interest in applications.
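Under our reading of the abstract, the setup can be written as follows; the notation ($A_i$ for the cameras, $d$ for the fixed distance) is ours, not necessarily the paper's:

```latex
% Cameras A_1,...,A_n are 3x4 matrices; a pair of world points X, Y is
% imaged jointly by all cameras:
\[
  \phi : (X, Y) \;\longmapsto\;
  \bigl(A_1 X,\; A_1 Y,\;\dots,\; A_n X,\; A_n Y\bigr)
  \;\in\; (\mathbb{P}^2 \times \mathbb{P}^2)^n ,
\]
% subject to the rigidity constraint (X, Y taken in an affine chart):
\[
  \lVert X - Y \rVert^2 \;=\; d^2 .
\]
% Pairs (X, Y) form a 6-dimensional space and the distance equation cuts
% out one condition, which matches the five-dimensional variety above.
```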
The Set-Up and Implementation of Fully Virtualized Lessons with an Automated Workflow Utilizing VMC/Moodle at the Medical University of Graz  [cached]
Herwig Erich Rehatschek, Gernot Hölzl, Michael Fladischer
International Journal of Emerging Technologies in Learning (iJET), 2011, DOI: 10.3991/ijet.v6i4.1784
Abstract: With the start of the winter semester 2010/11, the Medical University of Graz (MUG) successfully introduced Moodle as its new primary learning management system (LMS). Moodle currently serves more than 4,300 students from three degree programmes and holds more than 7,500 unique learning objects. At the beginning of the summer semester 2010 we started a pilot with Moodle and 430 students, for which we migrated the learning content of one module and two optional subjects to Moodle. The evaluation results were extremely promising: more than 92% of the students immediately wanted Moodle, and Moodle met our high expectations in terms of performance and scalability. In this paper we describe how we defined and set up a scalable and highly available platform for hosting Moodle and extended it with functionality for fully automated virtual lessons. We report our experiences and give valuable pointers for universities and institutions that want to introduce Moodle in the near future.
Kernelized Multiview Projection  [PDF]
Mengyang Yu, Li Liu, Ling Shao
Computer Science, 2015
Abstract: Conventional vision algorithms adopt a single type of feature or a simple concatenation of multiple features, which is typically represented in a high-dimensional space. In this paper, we propose a novel unsupervised spectral embedding algorithm called Kernelized Multiview Projection (KMP) to better fuse and embed different feature representations. By computing kernel matrices from the different features/views, KMP can encode them with corresponding weights to achieve a low-dimensional, semantically meaningful subspace in which the distribution of each view is sufficiently smooth and discriminative. More crucially, KMP is linear in the reproducing kernel Hilbert space (RKHS) and solves the out-of-sample problem, making it suitable for various practical applications. Extensive experiments on three popular image datasets demonstrate the effectiveness of our multiview embedding algorithm.
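The fusion step, a weighted combination of per-view kernels followed by a spectral embedding, can be sketched as below. The weights here are fixed by hand, whereas KMP learns them, so treat this as the surrounding idea rather than the paper's algorithm:

```python
# Combine per-view kernel matrices with weights, center, and embed with
# the top eigenvectors of the fused kernel.
import numpy as np

def rbf_kernel(X, gamma=0.5):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fused_embedding(kernels, weights, dim=2):
    """Spectral embedding from a weighted sum of kernel matrices."""
    K = sum(w * K_v for w, K_v in zip(weights, kernels))
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)        # eigenvalues in ascending order
    return vecs[:, -dim:] * np.sqrt(np.maximum(vals[-dim:], 0))

rng = np.random.default_rng(0)
X_color, X_texture = rng.random((30, 8)), rng.random((30, 12))  # two views
Z = fused_embedding([rbf_kernel(X_color), rbf_kernel(X_texture)],
                    weights=[0.6, 0.4])
print(Z.shape)  # (30, 2): one low-dimensional point per image
```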