Search Results: 1 - 10 of 108832 matches for " Tong Zhang "
All listed articles are free for downloading (OA Articles)
Chaos Synchronization of Uncertain Lorenz System via Single State Variable Feedback  [PDF]
Fengxiang Chen, Tong Zhang
Applied Mathematics (AM) , 2013, DOI: 10.4236/am.2013.411A2002
Abstract:

This paper treats the problem of chaos synchronization for the uncertain Lorenz system via single state variable information of the master system. Using Lyapunov stability theory and an adaptive technique, the derived controller has the following features: 1) only single state variable information of the master system is needed; 2) chaos synchronization can be achieved even if perturbations occur in some parameters of the master chaotic system. Finally, the effectiveness of the proposed controllers is illustrated by simulations as well as rigorous mathematical proofs.
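The single-variable idea can be illustrated with the classical Pecora-Carroll drive-response scheme. This is a minimal sketch, not the paper's adaptive controller: the response copy of the Lorenz system is driven only by the master's $x$ state, and its remaining states converge to the master's. The function name, initial conditions, and step sizes below are illustrative choices.

```python
import numpy as np

def lorenz_rhs(x, y, z, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the classical Lorenz system."""
    return sigma * (y - x), x * (rho - z) - y, x * y - beta * z

def pecora_carroll_sync(T=40.0, dt=0.001):
    """Drive a response (y', z') subsystem with only the master's x state.

    Returns the final synchronization error |ym - ys| + |zm - zs|.
    """
    xm, ym, zm = 1.0, 1.0, 1.0      # master state
    ys, zs = -5.0, 5.0              # response state, deliberately mismatched
    for _ in range(int(T / dt)):
        dxm, dym, dzm = lorenz_rhs(xm, ym, zm)
        # The response sees only xm -- a single state variable of the master.
        _, dys, dzs = lorenz_rhs(xm, ys, zs)
        xm += dt * dxm; ym += dt * dym; zm += dt * dzm
        ys += dt * dys; zs += dt * dzs
    return abs(ym - ys) + abs(zm - zs)
```

Because the conditional Lyapunov exponents of the $x$-driven Lorenz subsystem are negative, the error decays exponentially despite the large initial mismatch.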

Progress in research on pharmaceutics for intranasal medication
ZHANG Tong
Zhong Xi Yi Jie He Xue Bao , 2004,
Abstract:
Severi inequality for varieties of maximal Albanese dimension
Tong Zhang
Mathematics , 2013,
Abstract: Let $X$ be a projective, normal, minimal and Gorenstein $n$-dimensional complex variety of general type. Suppose $X$ is of maximal Albanese dimension. We prove that $K_X^n \ge 2n! \chi(K_X)$.
Geography of irregular Gorenstein 3-folds
Tong Zhang
Mathematics , 2013,
Abstract: In this paper, we study the explicit geography problem of irregular Gorenstein minimal 3-folds of general type. We generalize the classical Noether-Castelnuovo inequalities for irregular surfaces to irregular 3-folds according to the Albanese dimension.
Slope inequality for families of curves over surfaces
Tong Zhang
Mathematics , 2015,
Abstract: In this paper, we prove a slope inequality for families of curves over surfaces by a characteristic $p$ method.
From $\epsilon$-entropy to KL-entropy: Analysis of minimum information complexity density estimation
Tong Zhang
Mathematics , 2007, DOI: 10.1214/009053606000000704
Abstract: We consider an extension of $\epsilon$-entropy to a KL-divergence based complexity measure for randomized density estimation methods. Based on this extension, we develop a general information-theoretical inequality that measures the statistical complexity of some deterministic and randomized density estimators. Consequences of the new inequality will be presented. In particular, we show that this technique can lead to improvements of some classical results concerning the convergence of minimum description length and Bayesian posterior distributions. Moreover, we are able to derive clean finite-sample convergence bounds that are not obtainable using previous approaches.
Sparse Recovery with Orthogonal Matching Pursuit under RIP
Tong Zhang
Mathematics , 2010,
Abstract: This paper presents a new analysis for the orthogonal matching pursuit (OMP) algorithm. It is shown that if the restricted isometry property (RIP) is satisfied at sparsity level $O(\bar{k})$, then OMP can recover a $\bar{k}$-sparse signal in 2-norm. For compressed sensing applications, this result implies that in order to uniformly recover a $\bar{k}$-sparse signal in $\mathbb{R}^d$, only $O(\bar{k} \ln d)$ random projections are needed. This analysis improves earlier results on OMP that depend on stronger conditions such as mutual incoherence that can only be satisfied with $\Omega(\bar{k}^2 \ln d)$ random projections.
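For reference, OMP itself is a short greedy algorithm: at each step it selects the column most correlated with the current residual, then re-fits by least squares on the selected support. A minimal sketch (the function name and problem sizes are illustrative, and this implements the standard algorithm, not the paper's analysis):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x with y = A x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Greedy step: column most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit by least squares on the selected support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

The least-squares re-fit makes the residual orthogonal to all selected columns, which is what distinguishes OMP from plain matching pursuit.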
Some sharp performance bounds for least squares regression with $L_1$ regularization
Tong Zhang
Statistics , 2009, DOI: 10.1214/08-AOS659
Abstract: We derive sharp performance bounds for least squares regression with $L_1$ regularization from parameter estimation accuracy and feature selection quality perspectives. The main result proved for $L_1$ regularization extends a similar result in [Ann. Statist. 35 (2007) 2313--2351] for the Dantzig selector. It gives an affirmative answer to an open question in [Ann. Statist. 35 (2007) 2358--2364]. Moreover, the result leads to an extended view of feature selection that allows less restrictive conditions than some recent work. Based on the theoretical insights, a novel two-stage $L_1$-regularization procedure with selective penalization is analyzed. It is shown that if the target parameter vector can be decomposed as the sum of a sparse parameter vector with large coefficients and another less sparse vector with relatively small coefficients, then the two-stage procedure can lead to improved performance.
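The selective-penalization idea admits a compact sketch: stage one runs an ordinary Lasso; stage two removes the penalty from coefficients that came out large and re-solves, which reduces the Lasso shrinkage bias on the large coefficients. This is an illustrative toy (coordinate-descent Lasso from scratch; the threshold rule, function names, and parameter values are assumptions, not the paper's exact procedure):

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_lasso_cd(A, y, lam_w, n_iter=200):
    """Coordinate descent for 0.5*||y - A x||^2 + sum_j lam_w[j] * |x_j|."""
    d = A.shape[1]
    x = np.zeros(d)
    col_sq = np.sum(A ** 2, axis=0)
    r = y.copy()                      # running residual y - A x
    for _ in range(n_iter):
        for j in range(d):
            r += A[:, j] * x[j]       # residual with coordinate j removed
            x[j] = soft_threshold(A[:, j] @ r, lam_w[j]) / col_sq[j]
            r -= A[:, j] * x[j]
    return x

def two_stage_l1(A, y, lam, tau):
    d = A.shape[1]
    # Stage 1: ordinary Lasso with a uniform penalty.
    x1 = weighted_lasso_cd(A, y, np.full(d, lam))
    # Stage 2: selective penalization -- no penalty on coefficients that
    # look large after stage 1, full penalty on the rest.
    w = np.where(np.abs(x1) > tau, 0.0, lam)
    return weighted_lasso_cd(A, y, w)
```

On a noiseless sparse problem the stage-two estimate is essentially unbiased on the recovered support, while the stage-one Lasso estimate is shrunk by roughly the penalty level.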
Discussion of "Is Bayes Posterior just Quick and Dirty Confidence?" by D. A. S. Fraser
Tong Zhang
Statistics , 2012, DOI: 10.1214/11-STS352D
Abstract: Discussion of "Is Bayes Posterior just Quick and Dirty Confidence?" by D. A. S. Fraser [arXiv:1112.5582]
Multi-stage Convex Relaxation for Feature Selection
Tong Zhang
Statistics , 2011,
Abstract: A number of recent works have studied the effectiveness of feature selection using Lasso. It is known that under the restricted isometry property (RIP), Lasso does not generally lead to the exact recovery of the set of nonzero coefficients, due to the looseness of convex relaxation. This paper considers the feature selection property of nonconvex regularization, where the solution is given by a multi-stage convex relaxation scheme. Under appropriate conditions, we show that the local solution obtained by this procedure recovers the set of nonzero coefficients without suffering from the bias of Lasso relaxation, which complements parameter estimation results of this procedure.


Copyright © 2008-2017 Open Access Library. All rights reserved.