Abstract:
Background: Hypothermia is an important determinant of survival in newborns, especially low-birth-weight infants. Prolonged hypothermia leads to edema, generalized hemorrhage, jaundice, and ultimately death. This study was undertaken to examine the factors affecting transition from the hypothermic state in neonates.

Methods: The study comprised 439 neonates hospitalized in the NICU of Valiasr Hospital in Tehran, Iran, in 2005. Each neonate's rectal temperature was measured immediately after birth and every 30 minutes thereafter, until the neonate had passed through the hypothermia stages. To estimate the rate of transition from the neonatal hypothermic state, we used multi-state Markov models with two covariates, birth weight and environmental temperature, and fitted the model in R.

Results: The estimated transition rates from severe hypothermia and mild hypothermia were 0.1192 and 0.0549 per minute, respectively. Weight had a significant effect on the transition from hypothermia to the normal condition (95% CI: 0.1364-0.4165, P < 0.001). Environmental temperature also significantly affected the transition from hypothermia to the normal stage (95% CI: 0.0439-0.4963, P < 0.001).

Conclusion: Neonates with normal weight and neonates in an environmental temperature greater than 28 °C had higher transition rates through the hypothermia stages. Since birth weight is not under the control of the medical staff at delivery, keeping the environmental temperature at an optimum level could help neonates pass through the hypothermia stages faster.
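In a multi-state Markov model, the per-minute transition rates define a generator matrix whose matrix exponential gives the stage-occupancy probabilities over time. A minimal sketch in Python, assuming a simple progressive severe → mild → normal structure with the rates quoted above; the paper's exact state space and covariate effects are not reproduced here:

```python
import numpy as np
from scipy.linalg import expm

# Generator (intensity) matrix for an assumed progressive three-state model:
# state 0 = severe hypothermia, 1 = mild hypothermia, 2 = normal.
# Off-diagonal entries are the per-minute rates from the abstract;
# each row sums to zero, as required of a generator matrix.
Q = np.array([
    [-0.1192,  0.1192, 0.0],
    [ 0.0,    -0.0549, 0.0549],
    [ 0.0,     0.0,    0.0],
])

def transition_probabilities(Q, t):
    """P(t) = expm(Q t): probabilities of occupying each state after t minutes."""
    return expm(Q * t)

P30 = transition_probabilities(Q, 30.0)  # state probabilities after 30 minutes
```

Each row of `P30` is a probability distribution over the three stages for a neonate starting in that row's state.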

Abstract:
We investigated the role of mRNA secondary structures and their information content for five vertebrate and plant splice site datasets. We selected 900-nucleotide sequences centered at each (real or decoy) donor and acceptor site and predicted their corresponding RNA structures with the Vienna software. Then, based on whether each nucleotide lies in a stem or not, the conventional four-letter nucleotide alphabet was translated into an eight-letter alphabet. Zero-, first-, and second-order Markov models were selected as the signal detection methods. We show that applying the eight-letter alphabet, compared to the four-letter alphabet, considerably increases the accuracy of both donor and acceptor site predictions for higher-order Markov models. Our results imply that RNA structure carries important information, and future gene prediction programs can take advantage of it.

In recent years, complete genomic sequences of many eukaryotic organisms have become available, and identifying genes in genomic DNA sequences by computational methods has become an important task in bioinformatics. Computational gene prediction tools are now essential components of every genome sequencing project. These programs generally identify potential coding regions by homology searches against databases or by identification of gene structural elements (e.g., start and stop positions and donor and acceptor splice sites) in an unknown DNA sequence. The latter task is routinely done using algorithms trained on observed signals in sequences of known structure. Ab initio gene prediction methods are based on searching for splice site signals in genomic sequences. The 5' boundary or donor sites of introns in eukaryotes almost always contain the dinucleotide GU, while the 3' boundary or acceptor sites contain the dinucleotide AG. However, because these conserved dinucleotides occur so frequently, correct detection of splice sites is not possible if the gene-finding algorithm is merely based on the GU
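The four- to eight-letter translation described above can be sketched directly. Assuming the secondary structure is given as a dot-bracket string (the format RNAfold in the Vienna package produces), a hypothetical encoding writes stem (paired) nucleotides in uppercase and loop (unpaired) nucleotides in lowercase, yielding an eight-letter alphabet:

```python
# A minimal sketch of translating the four-letter nucleotide alphabet
# into an eight-letter alphabet based on secondary structure. In
# dot-bracket notation, '(' and ')' mark paired (stem) positions and
# '.' marks unpaired (loop) positions. The uppercase/lowercase
# convention here is an illustrative choice, not the paper's exact coding.
def to_eight_letter(seq: str, structure: str) -> str:
    if len(seq) != len(structure):
        raise ValueError("sequence and structure must have equal length")
    return "".join(
        base.upper() if pair in "()" else base.lower()
        for base, pair in zip(seq, structure)
    )

# Example: positions 1-3 pair with positions 8-10, forming a stem.
print(to_eight_letter("GCGAAAACGC", "(((....)))"))  # GCGaaaaCGC
```

A Markov model trained on the eight-letter strings can then distinguish, say, a G in a stem from a G in a loop.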

Abstract:
We developed a new approach to calculating a knowledge-based potential of mean force using pairwise residue contact areas. To test the performance of our approach, we applied it to several decoy sets to measure its ability to discriminate native structures from decoys. The potential was able to distinguish native structures from decoys in most cases. Further, the calculated Z-scores were quite high for all protein datasets. This knowledge-based potential of mean force can be used in protein structure prediction, fold recognition, comparative modelling, and molecular recognition. The program is available at http://www.bioinf.cs.ipm.ac.ir/softwares/surfield

An energy function that can detect a correct protein fold among incorrect ones is very important for protein structure prediction and protein folding. Two main types of potential energy function are currently in use, either for identifying native protein models among a large set of decoys or for protein fold recognition and threading studies [1-10]. The first class, termed physical-based potentials, is based on the fundamental analysis of the forces between particles and is referred to as a physical energy function. The second type is the knowledge-based energy function, based on information from known protein structures. In a physical energy function, a molecular mechanics force field is used. Molecular mechanics force fields are parameterized from ab initio calculations and small-molecule structural data. They are essentially the sum of pairwise electrostatic and van der Waals interaction energies, plus bond, angle, and dihedral angle terms [11-14]. In addition, terms that are not included, such as entropy and the solvent effect, are considered implicitly. Although physical energy functions are widely used in molecular dynamics simulations of proteins in their native and denatured states, which can help distinguish decoy from native structures, these functions have not been effi
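The core of any knowledge-based potential of mean force is the inverse-Boltzmann relation E = -ln(f_obs / f_ref), in units of kT. A minimal sketch with hypothetical contact-area counts; the actual statistics and reference state used by the paper are not reproduced here:

```python
import numpy as np

# A minimal sketch of the inverse-Boltzmann step behind a knowledge-based
# potential of mean force: the score for a residue pair is
# -ln(observed frequency / reference frequency). Negative scores mark
# favorable (over-represented) contacts. The counts below are
# hypothetical placeholders, not values from the paper.
def mean_force_potential(observed_counts, reference_counts):
    f_obs = observed_counts / observed_counts.sum()
    f_ref = reference_counts / reference_counts.sum()
    return -np.log(f_obs / f_ref)

observed = np.array([[30.0, 10.0], [10.0, 50.0]])   # e.g. summed contact areas
reference = np.array([[25.0, 25.0], [25.0, 25.0]])  # uniform reference state
E = mean_force_potential(observed, reference)
```

Summing such pairwise scores over all residue contacts in a model gives the total energy used to rank decoys against the native structure.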

Abstract:
In comparison with other methods, our algorithm reports blocks of larger average size. Nevertheless, the haplotype diversity within the blocks is captured by a small number of tagSNPs. Resampling HapMap haplotypes under a block-based model of recombination showed that our algorithm is robust in reproducing the same partitioning for recombinant samples. Our algorithm performed better than previously reported models in a case-control association study aimed at mapping a single-locus trait, based on simulation results evaluated by a block-based statistical test. Compared to other methods of haplotype block partitioning, ours performed best at detecting recombination hotspots. Our proposed method divides chromosomes into regions within which the allelic associations of SNP pairs are maximized. This approach presents a native design for dimension reduction in genome-wide association studies. Our results show that the pairwise allelic association of SNPs can describe various features of genomic variation, in particular recombination hotspots.

Analysis of Single Nucleotide Polymorphisms (SNPs) in the DNA of unrelated individuals revealed a block-like structure of haplotype variation along the human genome. Using the first available genome-wide data of SNPs on chromosome 21, Patil et al. [1] showed that in particular regions of the chromosome, the observed diversity of SNP haplotypes is less than expected. Almost at the same time, a similar structure in haplotypes within a region of 103 SNPs on chromosome region 5q31 was reported by Daly et al. [2]. In the latter study, a block structure of haplotypes was revealed using a Hidden Markov Model for estimating recombination rates. This approach, unlike models based on haplotype diversity, incorporated a quantity measuring Linkage Disequilibrium (LD) between pairs of SNPs. It is well known that effects such as population bottlenecks, geographic isolation, and natural selection can increase the extent of linkage disequilibr
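The pairwise allelic association that such partitioning methods build on can be illustrated with the standard r² measure of linkage disequilibrium; the haplotype data below are illustrative, not from the study:

```python
# A minimal sketch of the pairwise allelic association (r^2) between
# two biallelic SNPs. Input: a list of two-SNP haplotypes coded as
# 0/1 pairs. The example haplotypes are illustrative only.
def r_squared(haplotypes):
    n = len(haplotypes)
    p_a = sum(h[0] for h in haplotypes) / n           # freq of allele 1 at SNP 1
    p_b = sum(h[1] for h in haplotypes) / n           # freq of allele 1 at SNP 2
    p_ab = sum(1 for h in haplotypes if h == (1, 1)) / n
    d = p_ab - p_a * p_b                              # linkage disequilibrium D
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Perfectly associated SNPs give r^2 = 1.
print(r_squared([(0, 0), (0, 0), (1, 1), (1, 1)]))  # 1.0
```

Regions where this quantity stays high for most SNP pairs are candidates for a single haplotype block; sharp drops between adjacent regions suggest recombination hotspots.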

Abstract:
Construction of random perfect phylogeny matrix
Original Research. Published November 2010. Volume 2010:3, Pages 89-96. DOI: http://dx.doi.org/10.2147/AABC.S13397
Mehdi Sadeghi1,2, Hamid Pezeshk4, Changiz Eslahchi3,5, Sara Ahmadian6, Sepideh Mah Abadi6
1National Institute of Genetic Engineering and Biotechnology, Tehran, Iran; 2School of Computer Science, 3School of Mathematics, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran; 4School of Mathematics, Statistics and Computer Sciences, Center of Excellence in Biomathematics, College of Science, University of Tehran, Tehran, Iran; 5Department of Mathematics, Shahid Beheshti University, G.C., Tehran, Iran; 6Department of Computer Engineering, Sharif University of Technology, Tehran, Iran

Purpose: Interest in developing methods for mapping the increasing amounts of genome-wide molecular data is growing rapidly. There is also an increasing need for methods that can efficiently simulate such data.

Patients and methods: In this article, we provide a graph-theory approach to find the necessary and sufficient conditions for the existence of a phylogeny matrix with k nonidentical haplotypes, n single nucleotide polymorphisms (SNPs), and a population size of m for which the minimum allele frequency of each SNP is between two specific numbers a and b.

Results: We introduce an O(max(n², nm)) algorithm for the random construction of such a phylogeny matrix. The running time of any algorithm for solving this problem would be Ω(nm).

Conclusion: We have developed software, RAPPER, based on this algorithm, which is available at http://bioinf.cs.ipm.ir/softwares/RAPPER.

Keywords: perfect phylogeny, minimum allele frequency (MAF), tree, recursive algorithm
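The existence of a perfect phylogeny for binary SNP data is classically characterized by the four-gamete test: a 0/1 haplotype matrix admits a perfect phylogeny iff no pair of columns contains all four gametes 00, 01, 10, 11. A sketch of that standard test (not the paper's randomized construction algorithm):

```python
from itertools import combinations

def admits_perfect_phylogeny(matrix):
    """Four-gamete test: rows are haplotypes, columns are SNPs.
    Returns False as soon as some column pair exhibits all four
    gametes (0,0), (0,1), (1,0), (1,1); True otherwise."""
    n_cols = len(matrix[0])
    for i, j in combinations(range(n_cols), 2):
        gametes = {(row[i], row[j]) for row in matrix}
        if len(gametes) == 4:
            return False
    return True

print(admits_perfect_phylogeny([[0, 0], [0, 1], [1, 0]]))          # True
print(admits_perfect_phylogeny([[0, 0], [0, 1], [1, 0], [1, 1]]))  # False
```

A random construction procedure like the paper's must, in effect, generate matrices for which this test always passes while also satisfying the allele-frequency bounds.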

Abstract:
The profile hidden Markov model (PHMM) is widely used to assign protein sequences to their respective families. A major limitation of a PHMM is the assumption that, given the states, the observations (amino acids) are independent. To overcome this limitation, the dependency between amino acids in a multiple sequence alignment (MSA), which is the representative of a PHMM, can be appended to the PHMM. Because the sequences of amino acids in an MSA are biologically related, the one-by-one dependency between two amino acids can be considered. In other words, based on the MSA, the dependency between an amino acid and the corresponding amino acid located above it can be combined with the PHMM. For this purpose, a new emission probability matrix that accounts for the one-by-one dependencies between amino acids is constructed. The parameters of a PHMM are of two types, transition and emission probabilities, which are usually estimated using an EM algorithm called the Baum-Welch algorithm. We have generalized the Baum-Welch algorithm using a similarity emission matrix constructed by integrating the new emission probability matrix with the common emission probability matrix. The performance of the similarity emission is then assessed by applying it to the top twenty protein families in the Pfam database. We show that using the similarity emission in the Baum-Welch algorithm significantly outperforms the common Baum-Welch algorithm in the task of assigning protein sequences to protein families.
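One simple way to integrate a dependency-based emission matrix with the common column-wise emissions is a normalized convex combination. A sketch over a toy three-symbol alphabet; the mixing weight and the alphabet are assumptions for illustration, not the paper's exact construction over the 20-letter amino acid alphabet:

```python
import numpy as np

# A minimal sketch of blending a common (column-wise) emission matrix
# with a dependency-based emission matrix into a single "similarity"
# emission matrix, renormalizing each row so it remains a probability
# distribution. Rows index PHMM match states; columns index symbols.
def similarity_emission(e_common, e_dependency, w=0.5):
    blended = w * e_common + (1.0 - w) * e_dependency
    return blended / blended.sum(axis=1, keepdims=True)

e_common = np.array([[0.7, 0.2, 0.1],
                     [0.1, 0.8, 0.1]])
e_dep = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2]])
E = similarity_emission(e_common, e_dep)
```

Inside a generalized Baum-Welch iteration, `E` would simply replace the usual emission matrix in the forward-backward recursions, leaving the transition updates unchanged.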

Abstract:
We studied the effect of applying different RSA threshold types (namely, fixed thresholds vs. residue-dependent thresholds) on a variety of secondary structure prediction methods. Considering DSSP-assigned RSA values, we found that improvement in prediction accuracy strictly depends on the selected threshold(s). Furthermore, we showed that a single threshold for all amino acids is not the best possible choice. We therefore used residue-dependent thresholds, and most residues showed improved prediction. Next, we considered predicted RSA values, since in the real-world problem the protein sequence is the only available information. We first predicted the RSA classes with the RVP-net program and then used these data in our method. With this approach, improvement in prediction was also obtained. The success of applying RSA information to different secondary structure prediction methods suggests that prediction accuracy can be improved independently of the prediction approach. Thus, solvent accessibility can be considered a rich source of information for improving these methods.

The problem of accurately predicting protein three-dimensional structure continues to be one of the challenging problems in bioinformatics. Large-scale genome sequencing efforts have made this problem even more significant. Roughly 50% of the proteins in a genome have at least one homolog in protein structure databases, and their structure can be predicted efficiently by homology modeling [1,2]. However, for the other half of the sequences, no structural template is currently known. To date, the performance of ab initio three-dimensional prediction methods is still far from perfect [3-5]. Therefore, to obtain information about the structure of a novel protein, one may consider simpler tasks, like one-dimensional prediction of protein characteristics [6]. Acquiring such information is a key step in understanding the relation
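The contrast between fixed and residue-dependent RSA thresholding can be sketched as follows; the cutoff values below are hypothetical placeholders, not the thresholds selected in the study:

```python
# A minimal sketch of classifying residues as buried ('B') or
# exposed ('E') from relative solvent accessibility (RSA), under
# either a single fixed threshold or per-residue thresholds.
# All numeric cutoffs here are hypothetical placeholders.
FIXED_THRESHOLD = 0.25

RESIDUE_THRESHOLDS = {  # hypothetical per-residue cutoffs
    "A": 0.20, "G": 0.20, "L": 0.15, "K": 0.40, "R": 0.40,
}

def rsa_class(residue, rsa, residue_dependent=True):
    if residue_dependent:
        threshold = RESIDUE_THRESHOLDS.get(residue, FIXED_THRESHOLD)
    else:
        threshold = FIXED_THRESHOLD
    return "E" if rsa >= threshold else "B"

# The same residue and RSA can be classified differently under the
# two schemes, which is exactly why the threshold choice matters.
print(rsa_class("K", 0.30))                           # B (0.30 < 0.40)
print(rsa_class("K", 0.30, residue_dependent=False))  # E (0.30 >= 0.25)
```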

Abstract:
A comprehensive study was performed to investigate the performance of a non-uniform circular array interferometer as a real-time three-dimensional direction finder. The angular range of view is assumed to be 65 degrees vertically and 120 degrees horizontally, which is suitable for airborne applications. The interferometer is designed to work in the S, C, and X bands. Following an optimization process, the interferometer employs an eight-element non-uniform circular array along with a phase reference antenna at the center of the array. Several quantities and parameters are studied, e.g., frequency behavior, origins of phase measurement errors, the effect of Signal-to-Noise Ratio (SNR) on phase measurement, and the effect of phase measurement error on direction finding performance. The proposed interferometer is able to tolerate at least 35 degrees of phase measurement error. The radius of the array is set to 22 cm in order to obtain a good frequency response in the desired band. Both Generalized Regression Neural Network (GRNN) and Maximum Likelihood (ML) estimation are applied to map the phase relationships between antennas to the Direction of Arrival (DoA). The results of the two methods match well, which validates the design.
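The quantity an interferometer inverts is the inter-element phase difference: for a plane wave arriving at angle θ from boresight, a baseline of length d sees Δφ = 2πd·sin(θ)/λ. A sketch using the 22 cm radius and an S-band frequency from the abstract; this is a simplified single-baseline case, not the full eight-element geometry or the GRNN/ML mapping:

```python
import math

# A minimal sketch of the plane-wave phase relationship underlying
# interferometric direction finding. d_m is the baseline length in
# meters, freq_hz the carrier frequency, theta_rad the arrival angle
# from boresight. The single-baseline geometry is an assumption.
C = 3.0e8  # speed of light, m/s

def phase_difference(d_m, freq_hz, theta_rad):
    wavelength = C / freq_hz
    return 2.0 * math.pi * d_m * math.sin(theta_rad) / wavelength

dphi = phase_difference(0.22, 3.0e9, math.radians(30.0))  # radians
```

Note that for a 22 cm baseline at 3 GHz the phase exceeds 2π, so the raw phase is ambiguous; resolving such ambiguities is one reason multi-element non-uniform arrays and statistical estimators (GRNN, ML) are used.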

Abstract:
In the present research, the influence of a deposit control additive on NOx emissions from two types of gasoline engine vehicles, the Peykan (based on the Hillman) and the Pride (Kia Motors, South Korea), was studied. Exhaust NOx emissions were measured in two stages, before and after the decarbonization process, and statistical analysis was conducted on the measurement results. The results showed that, due to the elimination of engine deposits, NOx emissions from Peykans increased by 0.28% and NOx emissions from Pride automobiles decreased by 6.18% on average. The observed variations were neither statistically nor practically significant. The results indicate that using detergent additives is not an effective way to reduce exhaust NOx emissions from gasoline engine vehicles.