Search Results: 1 - 10 of 4648 matches for "Oliver Fiehn"
All listed articles are free for downloading (OA Articles)
Combining Genomics, Metabolome Analysis, and Biochemical Modelling to Understand Metabolic Networks
Oliver Fiehn
Comparative and Functional Genomics , 2001, DOI: 10.1002/cfg.82
Abstract: Now that complete genome sequences are available for a variety of organisms, the elucidation of gene functions involved in metabolism necessarily includes a better understanding of cellular responses upon mutations on all levels of gene products, mRNA, proteins, and metabolites. Such progress is essential since the observable properties of organisms – the phenotypes – are produced by the genotype in juxtaposition with the environment. Whereas much has been done to make mRNA and protein profiling possible, considerably less effort has been put into profiling the end products of gene expression, metabolites. To date, analytical approaches have been aimed primarily at the accurate quantification of a number of pre-defined target metabolites, or at producing fingerprints of metabolic changes without individually determining metabolite identities. Neither of these approaches allows the formation of an in-depth understanding of the biochemical behaviour within metabolic networks. Yet, by carefully choosing protocols for sample preparation and analytical techniques, a number of chemically different classes of compounds can be quantified simultaneously to enable such understanding. In this review, the terms describing various metabolite-oriented approaches are given, and the differences among these approaches are outlined. Metabolite target analysis, metabolite profiling, metabolomics, and metabolic fingerprinting are considered. For each approach, a number of examples are given, and potential applications are discussed.
Metabolomic database annotations via query of elemental compositions: Mass accuracy is insufficient even at less than 1 ppm
Tobias Kind, Oliver Fiehn
BMC Bioinformatics , 2006, DOI: 10.1186/1471-2105-7-234
Abstract: High mass accuracy (<1 ppm) alone is not enough to exclude enough candidates with complex elemental compositions (C, H, N, S, O, P, and potentially F, Cl, Br and Si). Use of isotopic abundance patterns as a single further constraint removes >95% of false candidates. This orthogonal filter can condense several thousand candidates down to only a small number of molecular formulas. Example calculations for 10, 5, 3, 1 and 0.1 ppm mass accuracy are given. Corresponding software scripts can be downloaded from http://fiehnlab.ucdavis.edu. A comparison of eight chemical databases revealed that PubChem and the Dictionary of Natural Products can be recommended for automatic queries using molecular formulae. More than 1.6 million molecular formulae in the range 0–500 Da were generated in an exhaustive manner under strict observation of mathematical and chemical rules. Assuming that ion species are fully resolved (either by chromatography or by high resolution mass spectrometry), we conclude that a mass spectrometer capable of 3 ppm mass accuracy and 2% error for isotopic abundance patterns outperforms mass spectrometers with less than 1 ppm mass accuracy, or even hypothetical mass spectrometers with 0.1 ppm mass accuracy, that do not include isotope information in the calculation of molecular formulae. Metabolomics seeks to identify and quantify all metabolites in a given biological context [1]. In this respect its aim is different from metabolic fingerprinting or metabonomic approaches, which utilize high-dimensional unannotated variables and multivariate statistics to find biomarkers that may or may not be structurally identified in subsequent steps. Therefore, an important task in metabolomics is to identify or structurally annotate compounds in a high-throughput manner. Mass spectrometry is one of the most powerful tools for unbiased analysis of small molecules in the life sciences. 
Hundreds to thousands of metabolites can be detected when suitable sample preparation metho
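The mass-tolerance filtering the abstract describes can be sketched in a few lines. This is an illustrative enumeration for C/H/N/O only, not the authors' software; the monoisotopic masses are standard reference values, and the element limits are arbitrary choices for the sketch.

```python
# Illustrative sketch (not the authors' software): enumerate C/H/N/O formulas
# whose monoisotopic mass lies within a ppm tolerance of a measured mass.
MONOISOTOPIC = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}

def formula_candidates(measured_mass, ppm, max_c=40, max_n=10, max_o=10, max_h=80):
    tol = measured_mass * ppm / 1e6
    hits = []
    for c in range(max_c + 1):
        for n in range(max_n + 1):
            for o in range(max_o + 1):
                base = (c * MONOISOTOPIC["C"] + n * MONOISOTOPIC["N"]
                        + o * MONOISOTOPIC["O"])
                if base > measured_mass + tol:
                    break  # adding more oxygen only increases the mass
                # Solve for the hydrogen count directly; at ppm-level
                # tolerances at most one integer value can fit.
                h = round((measured_mass - base) / MONOISOTOPIC["H"])
                if 0 <= h <= max_h:
                    mass = base + h * MONOISOTOPIC["H"]
                    if abs(mass - measured_mass) <= tol:
                        hits.append(f"C{c}H{h}N{n}O{o}")
    return hits
```

Running this for the monoisotopic mass of glucose (about 180.06339 Da) at 3 ppm versus 10 ppm shows how quickly the candidate list grows with looser mass accuracy, which is why the paper adds isotopic abundance as an orthogonal filter.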
Seven Golden Rules for heuristic filtering of molecular formulas obtained by accurate mass spectrometry
Tobias Kind, Oliver Fiehn
BMC Bioinformatics , 2007, DOI: 10.1186/1471-2105-8-105
Abstract: An algorithm for filtering molecular formulas is derived from seven heuristic rules: (1) restrictions on the number of elements, (2) LEWIS and SENIOR chemical rules, (3) isotopic patterns, (4) hydrogen/carbon ratios, (5) element ratios of nitrogen, oxygen, phosphorus, and sulphur versus carbon, (6) element ratio probabilities and (7) presence of trimethylsilylated compounds. Formulas are ranked according to their isotopic patterns and subsequently constrained by presence in public chemical databases. The seven rules were developed on 68,237 existing molecular formulas and were validated in four experiments. First, 432,968 formulas covering five million PubChem database entries were checked for consistency. Only 0.6% of these compounds did not pass all rules. Next, the rules were shown to effectively reduce the complement of all eight billion theoretically possible C, H, N, S, O, P-formulas up to 2000 Da to only 623 million most probable elemental compositions. Third, 6,000 pharmaceutical, toxic and natural compounds were selected from the DrugBank, TSCA and DNP databases. The correct formulas were retrieved as the top hit at 80–99% probability when assuming data acquisition with complete resolution of unique compounds, 5% absolute isotope ratio deviation and 3 ppm mass accuracy. Last, some exemplary compounds were analyzed by Fourier transform ion cyclotron resonance mass spectrometry and by gas chromatography-time of flight mass spectrometry. In each case, the correct formula was ranked as the top hit when combining the seven rules with database queries. The seven rules enable automatic exclusion of molecular formulas which are either wrong or which contain an unlikely high or low number of elements. The correct molecular formula is assigned with a probability of 98% if the formula exists in a compound database. For truly novel compounds that are not present in databases, the correct formula is found in the first three hits with a probability of 65–81%. 
Corresponding software
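Two of the seven rules lend themselves to a compact sketch. The numeric bounds below are illustrative values in the spirit of the paper's restrictions, not the exact published thresholds.

```python
# Simplified sketch of two of the seven heuristic rules; the numeric bounds
# are illustrative, not the exact published thresholds.

def passes_element_counts(c, h, n, o, p, s):
    # Rule 1 (element-count restrictions): cap each element at a plausible
    # maximum for small molecules.
    return c <= 78 and h <= 126 and n <= 20 and o <= 27 and p <= 9 and s <= 14

def passes_hc_ratio(c, h):
    # Rule 4 (hydrogen/carbon ratio): most known compounds fall roughly
    # within 0.2 <= H/C <= 3.1.
    return c > 0 and 0.2 <= h / c <= 3.1
```

A candidate formula only advances to isotope-pattern ranking if it passes every rule, which is what keeps an otherwise exhaustive formula search tractable.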
High quality metabolomic data for Chlamydomonas reinhardtii
Do Lee, Oliver Fiehn
Plant Methods , 2008, DOI: 10.1186/1746-4811-4-7
Abstract: Chlamydomonas reinhardtii is a model system for photosynthetic organisms [1], including studies on metabolism [2-4]. It has long been studied as a particularly sturdy organism that can be genetically modified in multiple ways and for which community resources are available, including mutant stock centers and a fully sequenced genome. Chlamydomonas may also be used for studying the response to the availability of macronutrients, e.g. phosphate, sulfur, carbon, and nitrogen [5], which has been extended to broad profiling of responses in gene expression or metabolite levels [6,7]. The focus of such studies is to understand the complexity of regulatory circuits and the reorganization of cellular modules in response to suboptimal conditions, which may then lead to insights that could potentially be extended to vascular plants. Metabolites can be regarded as the ultimate output of the cellular machinery. Therefore, comprehensive metabolic phenotyping may help to unravel subtle stages of cellular reorganization if highly accurate quantifications can be achieved. Analytical methods have to be constantly improved in order to achieve this aim. One of the main concerns when developing analytical methods for quantifying microbial metabolites is to prevent undesirable changes of internal metabolites during the period of harvesting. The aim is to stop any metabolic activity as fast as possible without altering the internal metabolic signature. Yeast may be regarded as a good proxy for Chlamydomonas with respect to sample preparation, as both are eukaryotic organisms with comparatively sturdy cell walls, unlike bacterial models, which are known to be more easily disrupted by physicochemical methods. Yeast metabolism has preferably been quenched by cold methanol treatments [8]. Nevertheless, even mild quenching methods may unavoidably lead to some degree of metabolite leakage by weakening cell walls. 
Consequently minimal concentrations of methanol and/or centrifugation times were tested, as well
How Large Is the Metabolome? A Critical Analysis of Data Exchange Practices in Chemistry
Tobias Kind, Martin Scholz, Oliver Fiehn
PLOS ONE , 2009, DOI: 10.1371/journal.pone.0005440
Abstract: Background Calculating the metabolome size of species by genome-guided reconstruction of metabolic pathways misses all products from orphan genes and from enzymes lacking annotated genes. Hence, metabolomes need to be determined experimentally. Annotations by mass spectrometry would greatly benefit if peer-reviewed public databases could be queried to compile target lists of structures that have already been reported for a given species. We detail current obstacles to compiling such a knowledge base of metabolites. Results As an example, results are presented for rice. Two subspecies of rice (Oryza sativa) have been fully sequenced, japonica and indica. Several major small-molecule databases were compared for listing known rice metabolites, comprising PubChem, Chemical Abstracts, Beilstein, patent databases, the Dictionary of Natural Products, SetupX/BinBase, KNApSAcK DB, and finally databases obtained by computational approaches, i.e. RiceCyc, KEGG, and Reactome. More than 5,000 small molecules were retrieved when searching these databases. Unfortunately, genuine rice metabolites were most often retrieved together with non-metabolite database entries such as pesticides. Overlaps between database compound lists were very difficult to compare because structures were either not encoded in machine-readable format or because compound identifiers were not cross-referenced between databases. Conclusions We conclude that present databases are not capable of comprehensively retrieving all known metabolites. Metabolome lists are as yet mostly restricted to genome-reconstructed pathways. We suggest that providers of (bio)chemical databases enrich their database identifiers with PubChem IDs and InChIKeys to enable cross-database queries. In addition, peer-reviewed journal repositories need to mandate submission of structures and spectra in machine-readable format to allow automated semantic annotation of articles containing chemical structures. 
Such changes in publication standards and database architectures will enable researchers to compile current knowledge about the metabolome of species, which may extend to derived information such as spectral libraries, organ-specific metabolites, and cross-study comparisons.
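The cross-database linking the authors call for can be sketched with InChIKeys. The placeholder keys below are invented; the D-glucose key is its widely published standard InChIKey. The first 14-character block of an InChIKey hashes connectivity only, so comparing on it additionally groups stereoisomers.

```python
# Sketch of cross-database compound matching via InChIKeys.
# All "AAAA..."/"CCCC..." keys are made-up placeholders; D-glucose's is real.

db_a = {
    "WQZGKKKJIJFFOK-GASJEMHNSA-N": "D-glucose",
    "AAAAAAAAAAAAAA-BBBBBBBBSA-N": "placeholder metabolite 1",
}
db_b = {
    "WQZGKKKJIJFFOK-GASJEMHNSA-N": "dextrose",  # same structure, different name
    "CCCCCCCCCCCCCC-DDDDDDDDSA-N": "placeholder metabolite 2",
}

def shared_structures(a, b):
    # Exact-key intersection: identical structures including stereochemistry.
    return sorted(set(a) & set(b))

def shared_skeletons(a, b):
    # Match on the 14-character connectivity block to group stereoisomers.
    block = lambda key: key.split("-")[0]
    return sorted({block(k) for k in a} & {block(k) for k in b})
```

Matching on hashed keys sidesteps the name-ambiguity problem the abstract describes: "D-glucose" and "dextrose" collide as strings never would.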
The volatile compound BinBase mass spectral database
Kirsten Skogerson, Gert Wohlgemuth, Dinesh K Barupal, Oliver Fiehn
BMC Bioinformatics , 2011, DOI: 10.1186/1471-2105-12-321
Abstract: The volatile compound BinBase (vocBinBase) is an automated peak annotation and database system developed for the analysis of GC-TOF-MS data derived from complex volatile mixtures. The vocBinBase DB is an extension of the previously reported metabolite BinBase software developed to track and identify derivatized metabolites. The BinBase algorithm uses deconvoluted spectra and peak metadata (retention index, unique ion, spectral similarity, peak signal-to-noise ratio, and peak purity) from the Leco ChromaTOF software, and annotates peaks using a multi-tiered filtering system with stringent thresholds. The vocBinBase algorithm assigns the identity of compounds existing in the database. Volatile compound assignments are supported by the Adams mass spectral-retention index library, which contains over 2,000 plant-derived volatile compounds. Novel molecules that are not found within vocBinBase are automatically added using strict mass spectral and experimental criteria. Users obtain fully annotated data sheets with quantitative information for all volatile compounds for studies that may consist of thousands of chromatograms. The vocBinBase database may also be queried across different studies, currently comprising 1,537 unique mass spectra generated from 1.7 million deconvoluted mass spectra of 3,435 samples (18 species). Mass spectra with retention indices and volatile profiles are available as a free download under the CC-BY agreement (http://vocbinbase.fiehnlab.ucdavis.edu). The BinBase database algorithms have been successfully modified to allow for tracking and identification of volatile compounds in complex mixtures. The database is capable of annotating large datasets (hundreds to thousands of samples) and is well-suited for between-study comparisons such as chemotaxonomy investigations. This novel volatile compound database tool is applicable to research fields spanning chemical ecology to human health. 
The BinBase source code is freely available at http://bi
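The multi-tiered matching BinBase performs can be illustrated with its two central criteria, retention-index agreement and spectral similarity. This is a toy sketch with invented thresholds and library values, not the BinBase algorithm itself.

```python
import math

def cosine_similarity(spec_a, spec_b):
    # Spectra represented as {m/z: intensity} dictionaries.
    mzs = set(spec_a) | set(spec_b)
    dot = sum(spec_a.get(m, 0.0) * spec_b.get(m, 0.0) for m in mzs)
    norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def annotate(peak_ri, peak_spec, library, ri_window=5.0, min_similarity=0.8):
    # Return the best-matching library entry that passes both filters, or None.
    best = None
    for name, (lib_ri, lib_spec) in library.items():
        if abs(peak_ri - lib_ri) > ri_window:
            continue  # retention-index filter: wrong elution region
        sim = cosine_similarity(peak_spec, lib_spec)
        if sim >= min_similarity and (best is None or sim > best[1]):
            best = (name, sim)
    return best
```

Requiring both filters to pass is what makes the annotation stringent: a spectrally similar compound eluting far from the expected retention index is rejected.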
Robust detection and verification of linear relationships to generate metabolic networks using estimates of technical errors
Frank Kose, Jan Budczies, Matthias Holschneider, Oliver Fiehn
BMC Bioinformatics , 2007, DOI: 10.1186/1471-2105-8-162
Abstract: Bayes' law was applied for detecting linearities that are validated by explaining the residuals by the degree of technical measurement error. Test statistics were developed and the algorithm was tested on simulated data using 3–150 samples and 0–100% technical error. Under the null hypothesis of the existence of a linear relationship, type I errors remained below 5% for data sets consisting of more than four samples, whereas the type II error rate rose quickly with increasing technical error. Conversely, a filter was developed to balance the error rates in the opposite direction. A minimum of 20 biological replicates is recommended if technical errors remain below 20% relative standard deviation and if thresholds for false error rates are acceptable at less than 5%. The algorithm proved robust against outliers, unlike Pearson's correlation. The algorithm facilitates finding linear relationships in complex datasets, which is radically different from estimating linearity parameters for given linear relationships. Without the filter, it provides high sensitivity and fair specificity. If the filter is activated, high specificity but only fair sensitivity is yielded. Total error rates are more favorable with the filter deactivated, and hence metabolomic networks should be generated without the filter. In addition, Bayesian likelihoods facilitate the detection of multiple linear dependencies between two variables. This property of the algorithm enables its use as a discovery tool to generate novel hypotheses about the existence of otherwise hidden biological factors. In recent years, time course analyses of metabolic perturbations have become more important for understanding metabolic networks based on experimental data [1,2]. 
One way to analyze metabolic networks is by systematically investigating linear relationships between all analyzed metabolites (variables) followed by constructing networks from positively identified components, and eventually comparing ne
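The abstract's point about Pearson's correlation being fragile to outliers is easy to demonstrate; the robust Bayesian machinery itself is beyond a short sketch. The data below are invented.

```python
# Demonstration of Pearson's sensitivity to a single outlier, the weakness
# the paper's Bayesian approach is designed to avoid.

def pearson(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# A clean linear relationship between two "metabolite" variables ...
x = list(range(1, 11))
y = [2.0 * v for v in x]
# ... and the same data with one gross measurement error appended.
x_out, y_out = x + [11], y + [-100.0]
```

On the clean data the correlation is exactly 1; a single corrupted sample is enough to destroy it, which is why outlier-robust detection matters for metabolomic networks built from many noisy measurements.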
Software platform virtualization in chemistry research and university teaching
Tobias Kind, Tim Leamy, Julie A Leary, Oliver Fiehn
Journal of Cheminformatics , 2009, DOI: 10.1186/1758-2946-1-18
Abstract: Virtual machines are commonly used for cheminformatics software development and testing. By benchmarking multiple chemistry software packages, we have confirmed that the computational speed penalty for using virtual machines is low, at around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and for developing software for different operating systems. In order to obtain maximum performance, the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs, especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics, as well as the lack of cheminformatics education at universities worldwide. "Virtual machines have finally arrived. Dismissed for a number of years as merely academic curiosities, they are now seen as cost-effective techniques for organizing computer systems resources to provide extraordinary system flexibility and support for certain unique applications." 
This statement from one of the pioneers of virtualization (Goldberg 1974 [1]) is equally true 35 years later, and the hype generated by the computer science community and software companies requires special attention, especially in chemistry and the life sciences, because virtual machines have undoubtedly and truly arrived. M
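A speed-penalty comparison like the one reported (around 5% to 10%) reduces to running the same benchmark on the host and inside the guest and comparing wall-clock times. A minimal timing harness with a stand-in workload; the paper benchmarked real chemistry software, not this loop.

```python
import timeit

def workload():
    # Stand-in CPU-bound task; replace with the software under test.
    return sum(i * i for i in range(100_000))

def best_time(func, repeats=3, number=20):
    # Take the minimum over repeats to suppress scheduling noise.
    return min(timeit.repeat(func, number=number, repeat=repeats))

def overhead_percent(host_seconds, guest_seconds):
    # Relative slowdown of the virtual machine versus bare hardware.
    return 100.0 * (guest_seconds - host_seconds) / host_seconds
```

Running `best_time(workload)` natively and again inside the virtual machine, then passing both results to `overhead_percent`, yields the kind of percentage figure the abstract quotes.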
Metabolic profiling of laser microdissected vascular bundles of Arabidopsis thaliana
Martina Schad, Rajsree Mungur, Oliver Fiehn, Julia Kehr
Plant Methods , 2005, DOI: 10.1186/1746-4811-1-2
Abstract: In this study, we used cryosectioning as an alternative method that preserves sufficient cellular structure while minimizing metabolite loss by excluding any solute exchange steps. Using this pre-treatment procedure, Arabidopsis thaliana stem sections were prepared for laser microdissection of vascular bundles. Collected samples were subsequently analyzed by gas chromatography-time of flight mass spectrometry (GC-TOF MS) to obtain metabolite profiles. From 100 collected vascular bundles (~5,000 cells), 68 metabolites could be identified. More than half of the identified metabolites could be shown to be enriched or depleted in vascular bundles as compared to the surrounding tissues. This study uses the example of vascular bundles to demonstrate for the first time that it is possible to analyze a comprehensive set of metabolites from laser microdissected samples at a tissue-specific level, given that a suitable sample preparation procedure is used. Unlike unicellular organisms, plants and animals have evolved as complex organisms that are defined by distributing special vital functions to spatially separated organs and tissues. The distinct functions of tissues and organs result from the integrated activity of individual cells. Current approaches mostly ignore this fact by analyzing samples that consist of a variety of different cell types and thus average and dilute the information obtained. Parameters that define the function and the physiological state of cells include gene and protein expression, but also the complement of low-molecular-weight compounds such as lipids, carbohydrates, vitamins or hormones that carry out much of the cell's business. 
Therefore, in addition to transcriptomic and proteomic studies, a comprehensive metabolite analysis with high spatial resolution is essential to fully characterize the state of a certain tissue.To achieve this, analysis of small solutes in individual plant cells has so far been performed after extracting picoliter-sized sa
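The enrichment-or-depletion comparison between dissected bundles and surrounding tissue reduces to a per-metabolite fold change. A sketch with invented intensity values, not the study's data:

```python
import math

def log2_fold_change(bundle_intensity, surrounding_intensity):
    # Positive: enriched in vascular bundles; negative: depleted.
    return math.log2(bundle_intensity / surrounding_intensity)

# Invented example intensities (arbitrary detector units).
profiles = {"metabolite_a": (8000.0, 2000.0), "metabolite_b": (500.0, 4000.0)}
changes = {name: log2_fold_change(b, s) for name, (b, s) in profiles.items()}
```

A log2 fold change of +2 means four-fold enrichment in the bundles, and -3 means eight-fold depletion, which is how a statement like "more than half of the metabolites were enriched or depleted" is quantified.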
Assessment of Metabolome Annotation Quality: A Method for Evaluating the False Discovery Rate of Elemental Composition Searches
Fumio Matsuda, Yoko Shinbo, Akira Oikawa, Masami Yokota Hirai, Oliver Fiehn, Shigehiko Kanaya, Kazuki Saito
PLOS ONE , 2009, DOI: 10.1371/journal.pone.0007490
Abstract: In metabolomics research using mass spectrometry (MS), systematically searching high-resolution mass data against compound databases is often the first step of metabolite annotation, used to determine elemental compositions with similar theoretical masses. However, incorrect hits derived from errors in the mass analyses will be included in the results of elemental composition searches. To assess the quality of peak annotation information, a novel methodology for false discovery rate (FDR) evaluation is presented in this study. Based on the FDR analyses, several aspects of an elemental composition search are discussed, including setting a threshold, estimating the FDR, and which types of elemental composition databases are most reliable for searching.
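The FDR idea can be sketched in a target-decoy style: search the same measured masses against a plausible ("target") and an implausible ("decoy") composition space, and use the decoy hit count to estimate the fraction of false target hits at each mass-error threshold. The numbers below are invented, and this is a generic illustration rather than the authors' exact procedure.

```python
def fdr_at_threshold(target_errors_ppm, decoy_errors_ppm, threshold_ppm):
    # Estimate the fraction of accepted target hits that are false from
    # how many decoy hits survive the same mass-error threshold.
    target_hits = sum(1 for e in target_errors_ppm if abs(e) <= threshold_ppm)
    decoy_hits = sum(1 for e in decoy_errors_ppm if abs(e) <= threshold_ppm)
    return decoy_hits / target_hits if target_hits else 0.0

# Invented mass errors (ppm) from a target and a decoy search.
target = [0.2, -0.5, 1.1, 2.8, 0.9, -1.7]
decoy = [4.1, -3.9, 2.5, 4.8]
```

Sweeping `threshold_ppm` trades sensitivity against the estimated FDR, which is exactly the threshold-setting question the abstract raises.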
Copyright © 2008-2017 Open Access Library. All rights reserved.