Search Results: 1 - 10 of 55063 matches for " Hua Zhou "
All listed articles are free for downloading (OA Articles)
Combining Gene-Phenotype Association Matrix with KEGG Pathways to Mine Gene Modules Using Data Set in GAW17  [PDF]
Hua Lin, Yang Zheng, Ping Zhou
Engineering (ENG) , 2013, DOI: 10.4236/eng.2013.510B067
Abstract: Currently, genome-wide association studies have proven to be a powerful approach to identifying risk loci. However, the molecular regulatory mechanisms of complex diseases are still not clearly understood. It is therefore important to consider the interplay between genetic factors and biological networks in elucidating the mechanisms of complex disease pathogenesis. In this paper, we first conducted a genome-wide association analysis using the SNP genotype data and phenotype data provided by Genetic Analysis Workshop 17, in order to filter significant SNPs associated with the diseases. Second, we conducted a bioinformatics analysis of the gene-phenotype association matrix to identify gene modules (biclusters). Third, we performed a KEGG enrichment test of the genes involved in biclusters to find evidence supporting their functional consensus. This method can contribute to a better understanding of complex diseases.
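The KEGG enrichment step the abstract describes is commonly carried out with a hypergeometric tail test. The counts below are hypothetical placeholders, not GAW17 figures; only the testing mechanic is illustrated.

```python
# Sketch of a pathway enrichment test for one bicluster (hypergeometric
# tail). All counts are illustrative, not taken from the paper.
from math import comb

def hypergeom_sf(k, M, K, n):
    """P(X >= k) when drawing n genes from M, of which K are in the pathway."""
    return sum(comb(K, i) * comb(M - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(M, n)

M, K, n, k = 20000, 150, 40, 12   # background, pathway, bicluster, overlap
p_value = hypergeom_sf(k, M, K, n)
print(f"enrichment p-value: {p_value:.3g}")
```

An overlap of 12 genes when fewer than one is expected by chance yields a vanishingly small p-value, which is the kind of evidence of functional consensus the test is after.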

Machine Learning in China
Zhi-Hua Zhou
Asian Journal of Information Technology , 2012,
Abstract: NA
Positive solutions of four-point boundary-value problems for higher-order with $p$-Laplacian operator
Yunming Zhou, Hua Su
Electronic Journal of Differential Equations , 2007,
Abstract: In this paper, we study the existence of positive solutions for nonlinear four-point singular boundary-value problems for a higher-order equation with the $p$-Laplacian operator. Using fixed-point index theory, we establish conditions for the existence of one solution, and of multiple solutions.
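For readers unfamiliar with the notation, the $p$-Laplacian is usually defined through the function $\varphi_p$ below. The displayed equation is a generic four-point form of such problems, not necessarily the paper's exact statement.

```latex
% Generic higher-order four-point p-Laplacian BVP (illustrative form only)
\varphi_p(s) = |s|^{p-2}s, \quad p > 1, \qquad
\bigl( \varphi_p\bigl(u^{(n-1)}(t)\bigr) \bigr)' + a(t)\,f\bigl(u(t)\bigr) = 0,
\quad t \in (0,1),
```

with boundary data imposed at $t=0$, $t=1$, and two interior points $\xi, \eta \in (0,1)$, which is what makes the problem "four-point".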
Predicting Protein-Protein Interaction by the Mirrortree Method: Possibilities and Limitations
Hua Zhou, Eric Jakobsson
PLOS ONE , 2013, DOI: 10.1371/journal.pone.0081100
Abstract: Molecular co-evolution analysis, a sequence-only method, has been used to predict protein-protein interactions. In co-evolution analysis, Pearson's correlation within the mirrortree method is a well-known way of quantifying the correlation between protein pairs. Here we studied the mirrortree method on both known interacting protein pairs and sets of presumed non-interacting protein pairs, to evaluate the utility of this correlation analysis method for predicting protein-protein interactions within eukaryotes. We varied metrics for computing evolutionary distance and the evolutionary span of the species analyzed. We found the differences between co-evolutionary correlation scores of the interacting and non-interacting proteins, normalized for evolutionary span, to be significantly predictive for proteins conserved over a wide range of eukaryotic clades (from mammals to fungi). On the other hand, for narrower ranges of evolutionary span, the predictive power was much weaker.
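The mirrortree correlation the abstract refers to is Pearson's r between the upper-triangular entries of two evolutionary distance matrices built over the same species. A minimal sketch, with made-up distance matrices rather than real alignments:

```python
# Mirrortree-style correlation: Pearson's r between the vectorized
# upper triangles of two pairwise evolutionary distance matrices.
from math import sqrt

def upper(m):
    """Flatten the strict upper triangle of a square matrix."""
    n = len(m)
    return [m[i][j] for i in range(n) for j in range(i + 1, n)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical pairwise distances for proteins A and B over four species.
dist_a = [[0, 1, 4, 6], [1, 0, 3, 5], [4, 3, 0, 2], [6, 5, 2, 0]]
dist_b = [[0, 2, 5, 7], [2, 0, 4, 6], [5, 4, 0, 3], [7, 6, 3, 0]]
r = pearson(upper(dist_a), upper(dist_b))
```

A high r between the two "trees" is taken as evidence of co-evolution, and hence of possible interaction; the paper's contribution concerns how predictive this score actually is across evolutionary spans.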
Rates of convergence of some multivariate Markov chains with polynomial eigenfunctions
Kshitij Khare, Hua Zhou
Mathematics , 2009, DOI: 10.1214/08-AAP562
Abstract: We provide a sharp nonasymptotic analysis of the rates of convergence for some standard multivariate Markov chains using spectral techniques. All chains under consideration have multivariate orthogonal polynomials as eigenfunctions. Our examples include the Moran model in population genetics and its variants in community ecology, the Dirichlet-multinomial Gibbs sampler, a class of generalized Bernoulli--Laplace processes, a generalized Ehrenfest urn model and the multivariate normal autoregressive process.
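The spectral structure being exploited can be checked by hand on the simplest relative of these chains, the classical two-urn Ehrenfest model (the paper treats a multivariate generalization). For n balls, the degree-1 Krawtchouk polynomial f(x) = x - n/2 is an eigenfunction of the transition matrix with eigenvalue 1 - 2/n:

```python
# Two-urn Ehrenfest chain with n balls: at each step a uniformly random
# ball switches urns. Verify that f(x) = x - n/2 satisfies P f = (1 - 2/n) f.
n = 10
P = [[0.0] * (n + 1) for _ in range(n + 1)]
for x in range(n + 1):
    if x > 0:
        P[x][x - 1] = x / n          # a ball leaves urn 1
    if x < n:
        P[x][x + 1] = (n - x) / n    # a ball enters urn 1

f = [x - n / 2 for x in range(n + 1)]
Pf = [sum(P[x][y] * f[y] for y in range(n + 1)) for x in range(n + 1)]
eig = 1 - 2 / n
assert all(abs(Pf[x] - eig * f[x]) < 1e-12 for x in range(n + 1))
```

Knowing the full eigenfunction basis in closed form is what makes the sharp nonasymptotic convergence bounds of the paper possible.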
MM Algorithms for Geometric and Signomial Programming
Kenneth Lange, Hua Zhou
Mathematics , 2010, DOI: 10.1007/s10107-012-0612-1
Abstract: This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with its parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the derived MM algorithm can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of the parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
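A toy instance of the AM-GM separation idea, on an objective of my own choosing rather than one from the paper: minimize the posynomial f(x, y) = x² + y² + 2/(xy) over x, y > 0, whose minimum value 4 is attained at (1, 1). Bounding the coupling term 1/(xy) by AM-GM at the current iterate (xₙ, yₙ) gives a surrogate that splits into two one-dimensional problems with closed-form minimizers:

```python
# MM iteration for f(x, y) = x^2 + y^2 + 2/(x*y), x, y > 0 (illustrative).
# AM-GM at the anchor (xn, yn):
#   1/(x*y) <= (1/(xn*yn)) * ((xn/x)**2 + (yn/y)**2) / 2,
# so the surrogate is x^2 + (xn/yn)/x^2 + y^2 + (yn/xn)/y^2, and
# minimizing x^2 + a/x^2 in closed form gives x = a**0.25.
def mm_step(x, y):
    return (x / y) ** 0.25, (y / x) ** 0.25

x, y = 4.0, 0.5
for _ in range(60):
    x, y = mm_step(x, y)
f = x * x + y * y + 2 / (x * y)
```

Each update only tracks the ratio x/y, which is driven to 1 at the linear rate the abstract mentions; the iterates converge to the interior minimizer (1, 1).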
Statistical inference for semiparametric varying-coefficient partially linear models with error-prone linear covariates
Yong Zhou, Hua Liang
Mathematics , 2009, DOI: 10.1214/07-AOS561
Abstract: We study semiparametric varying-coefficient partially linear models when some linear covariates are not observed, but ancillary variables are available. Semiparametric profile least-squares-based estimation procedures are developed for the parametric and nonparametric components after we calibrate the error-prone covariates. Asymptotic properties of the proposed estimators are established. We also propose a profile least-squares-based ratio test and a Wald test to identify significant parametric and nonparametric components. To improve the accuracy of the proposed tests for small or moderate sample sizes, a wild bootstrap version is also proposed to calculate the critical values. Intensive simulation experiments are conducted to illustrate the proposed approaches.
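The wild bootstrap mentioned here resamples by perturbing residuals with mean-zero, unit-variance multipliers. A generic one-step sketch with Rademacher weights and illustrative numbers — not the paper's estimator or its specific weighting scheme:

```python
# One wild-bootstrap resample: y* = fitted + residual * w, with w = +/-1
# Rademacher weights. Fitted values and residuals are illustrative.
import random
random.seed(0)

fitted = [1.0, 2.0, 3.0]
resid = [0.3, -0.5, 0.2]
y_star = [f + r * random.choice([-1.0, 1.0]) for f, r in zip(fitted, resid)]
```

Repeating this many times and recomputing the test statistic on each y* yields the bootstrap distribution from which critical values are read off.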
Path Following in the Exact Penalty Method of Convex Programming
Hua Zhou, Kenneth Lange
Mathematics , 2012,
Abstract: Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to $\infty$, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.
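The hit-and-slide behaviour is easiest to see on the smallest application listed, projection onto a convex set. For the nonnegative orthant, the coordinate-wise solution path of the exact penalty objective 0.5‖x − y‖² + ρ Σᵢ max(0, −xᵢ) is piecewise linear and available in closed form; this toy version is mine, not the paper's general ODE machinery:

```python
# Exact-penalty path for projecting y onto {x : x >= 0}:
#   minimize 0.5*||x - y||^2 + rho * sum(max(0, -x_i)).
def penalty_path(y, rho):
    # y_i >= 0: constraint inactive, x_i = y_i for every rho.
    # y_i <  0: x_i = y_i + rho until the path hits the constraint at
    #           rho = -y_i, after which it slides along x_i = 0.
    return [yi if yi >= 0 else min(yi + rho, 0.0) for yi in y]

y = [2.0, -1.0, -3.0]
assert penalty_path(y, 0.0) == y                   # unconstrained start
assert penalty_path(y, 2.0) == [2.0, 0.0, -1.0]    # one constraint absorbed
assert penalty_path(y, 3.0) == [2.0, 0.0, 0.0]     # projection recovered
```

Note the exactness: the constrained solution is reached at the finite penalty ρ = 3 = maxᵢ(−yᵢ), not only in the limit ρ → ∞.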
Regularized Matrix Regression
Hua Zhou, Lexin Li
Statistics , 2012, DOI: 10.1111/rssb.12031
Abstract: Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry, and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial due to the ultrahigh dimensionality and complex structure of the matrix data. The popular lasso and related regularization methods hinge upon the sparsity of the true signal in terms of the number of its nonzero coefficients. However, for the matrix data, the true signal is often of, or can be well approximated by, a low-rank structure. As such, the sparsity is frequently in the form of low rank of the matrix parameters, which may seriously violate the assumption of the classical lasso. In this article, we propose a class of regularized matrix regression methods based on spectral regularization. A highly efficient and scalable estimation algorithm is developed, and a degrees-of-freedom formula is derived to facilitate model selection along the regularization path. The superior performance of the proposed method is demonstrated on both synthetic and real examples.
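Spectral regularization penalizes the singular values of the coefficient matrix rather than its entries. Its best-known building block is singular value thresholding, the proximal map of the nuclear norm λ‖B‖₊, sketched below on an illustrative diagonal matrix; this is a generic ingredient, not the paper's full estimation algorithm.

```python
# Singular value thresholding: the proximal operator of lam * ||B||_*,
# which soft-thresholds the singular values and so encourages low rank.
import numpy as np

def svt(B, lam):
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

B = np.diag([3.0, 1.0, 0.5])       # singular values 3, 1, 0.5
B_reg = svt(B, 1.0)                # surviving singular values: 2, 0, 0
```

Thresholding at λ = 1 kills the two smallest singular values, reducing a rank-3 matrix to rank 1; sweeping λ traces out the regularization path along which the degrees-of-freedom formula guides model selection.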
A Path Algorithm for Constrained Estimation
Hua Zhou, Kenneth Lange
Statistics , 2011, DOI: 10.1080/10618600.2012.681248
Abstract: Many least squares problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current paper proposes a new path-following algorithm for quadratic programming based on exact penalization. Similar penalties arise in $l_1$ regularization in model selection. Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to $\infty$, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in lasso-penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the lasso and generalized lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following.
Copyright © 2008-2017 Open Access Library. All rights reserved.