Physics, 2009, DOI: 10.1088/0004-637X/694/1/643 Abstract: (Abridged) We measure the fraction of galaxies undergoing disk-disk major mergers (f_m) at intermediate redshifts (0.35 <= z < 0.85) by studying the asymmetry index A of galaxy images. Results are provided for B- and Ks-band absolute-magnitude-selected samples from the Groth strip in the GOYA photometric survey. Three sources of systematic error are carefully addressed: (i) we avoid morphological K-corrections; (ii) we measure asymmetries in galaxies artificially redshifted to z_d = 0.75 to deal with the loss of morphological information with redshift; and (iii) we account, via maximum-likelihood techniques, for the observational errors in z and A, which tend to overestimate the merger fraction. We find: (i) our data allow a robust merger fraction to be provided for a single redshift bin centered at z = 0.6; (ii) merger fractions have low values: f_m = 0.045 for M_B <= -20 galaxies, and f_m = 0.031 for M_Ks <= -23.5 galaxies; and (iii) failure to address the effects of the observational errors leads to overestimating f_m by factors of 10%-60%. Combining our results with those in the literature, and parameterizing the merger fraction evolution as f_m(z) = f_m(0)(1+z)^m, we obtain m = 2.9 +- 0.8 and f_m(0) = 0.012 +- 0.004. Assuming a Ks-band mass-to-light ratio that does not vary with luminosity, we infer that the merger rate of galaxies with stellar mass M >= 3.5x10^10 M_Sun is R_m = 1.6x10^-4 Mpc^-3 Gyr^-1. Comparing with previous studies at similar redshifts, we find that the merger rate decreases as mass increases.

Yongjin Li, International Journal of Mathematics and Mathematical Sciences, 2004, DOI: 10.1155/s0161171204305272 Abstract: The Ishikawa iterative sequences with errors are studied for Lipschitzian strongly pseudocontractive operators in arbitrary real Banach spaces; some well-known results of Chidume (1998) and Zeng (2001) are generalized.
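The power-law parameterization of merger-fraction evolution quoted in the first abstract above can be sketched numerically; a minimal illustration using the fitted values m = 2.9 and f_m(0) = 0.012 reported there:

```python
# Merger-fraction evolution f_m(z) = f_m(0) * (1 + z)^m, with the fitted
# values quoted in the abstract above (m = 2.9, f_m(0) = 0.012).

def merger_fraction(z, f0=0.012, m=2.9):
    """Merger fraction at redshift z under the power-law parameterization."""
    return f0 * (1.0 + z) ** m

# At the bin center z = 0.6 this reproduces, within the quoted uncertainties,
# the directly measured value f_m = 0.045 for M_B <= -20 galaxies.
print(round(merger_fraction(0.6), 3))  # → 0.047
```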
Computer Science, 2013, DOI: 10.1007/s00041-014-9361-2 Abstract: The problem of retrieving phase information from amplitude measurements alone has appeared in many scientific disciplines over the last century. PhaseLift is a recently introduced algorithm for phase recovery that is computationally efficient, numerically stable, and comes with rigorous performance guarantees. PhaseLift is optimal in the sense that the number of amplitude measurements required for phase reconstruction scales linearly with the dimension of the signal. However, it specifically demands Gaussian random measurement vectors, a limitation that restricts practical utility and obscures the specific properties of measurement ensembles that enable phase retrieval. Here we present a partial derandomization of PhaseLift that only requires sampling from certain polynomial-size vector configurations, called t-designs. Such configurations have been studied in algebraic combinatorics, coding theory, and quantum information. We prove reconstruction guarantees for a number of measurements that depends on the degree t of the design. If the degree is allowed to grow logarithmically with the dimension, the bounds become tight up to polylog factors. Beyond the specific case of PhaseLift, this work highlights the utility of spherical designs for the derandomization of data recovery schemes.

BMC Bioinformatics, 2005, DOI: 10.1186/1471-2105-6-119 Abstract: This article addresses the impact of erroneous links on network topological inference and explores possible error mechanisms for scale-free networks, with an emphasis on Saccharomyces cerevisiae protein interaction networks. We study this issue through both theoretical derivations and simulations. We show that ignoring erroneous links in network analysis may lead to biased estimates of the scale parameter, and we recommend robust estimators in such scenarios.
Possible error mechanisms of yeast protein interaction networks are explored by comparing real and simulated data. Our studies show that, in the presence of erroneous links, the connectivity distribution of scale-free networks remains scale-free for mid-range connectivities, but can be greatly distorted at low and high connectivities. It is more appropriate to use robust estimators, such as the least trimmed mean squares estimator, to estimate the scale parameter γ under such circumstances. Moreover, we show by simulation studies that the scale-free property is robust to some error mechanisms but not to others. The simulation results also suggest that different error mechanisms may be operating in the yeast protein interaction networks produced from different data sources. In the MIPS gold-standard protein interaction data, there appears to be a high rate of false negative links, and the false negative and false positive rates are more or less constant across proteins with different connectivities. However, the error mechanism of yeast two-hybrid data may be very different: the overall false negative rate is low, and the false negative rates tend to be higher for links involving proteins with more interacting partners. Recent studies have found that many complex networks, ranging from the World-Wide Web [1] and the scientific collaboration network [2] to biological systems such as the yeast protein interaction network [3], are scale-free. The scale-free property states that the distribution of connectivities follows a power law.

Physics, 2015, DOI: 10.1016/j.physa.2015.04.025 Abstract: We consider the problem of linear fitting of noisy data in the case of broad (say $\alpha$-stable) distributions of random impacts ("noise"), which can lack even the first moment.
This situation, common in the statistical physics of small systems, in Earth sciences, in network science, and in econophysics, does not allow for the application of conventional Gaussian maximum-likelihood estimators, which result in the usual least-squares fits. Such fits lead to large deviations of the fitted parameters from their true values due to the presence of outliers. The approaches discussed here aim at minimizing the width of the distribution of residuals. This width can be defined either via the interquantile distance of the corresponding distribution or via the scale parameter in its characteristic function. The methods provide robust regression even in the case of short samples with large outliers, and are equivalent to the ordinary least-squares fit for Gaussian noise. Our discussion is illustrated by numerical examples.

Brazilian Journal of Operations & Production Management, 2010, Abstract: This paper discusses the problem of estimating the proportion p when the inspection system is imperfect (subject to diagnosis errors) and the sampled items are classified repeatedly m times. One assumes that no relevant information about the prior distributions of these errors is available; consequently, the non-informative prior distributions used for those errors generate a posterior distribution for the proportion p with high variability. The authors suggest randomly splitting the sample into two subsamples: the parameters of the prior distributions are estimated from the first subsample, and the Bayesian inferential procedure is carried out with the second. Numerical results indicate that this procedure yields better performance (lower variance of the posterior distribution) than a single sample of size n = n1 + n2 with non-informative prior distributions for the classification errors.
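The "minimal residual width" idea from the linear-fitting abstract above can be sketched as follows. This is an illustrative assumption-laden toy, not the paper's actual procedure: it uses the interquartile (25%/75%) range as the width measure, synthetic Cauchy (α = 1 stable) noise, and a plain grid search over the slope.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)
y = 2.0 * x + 1.0 + rng.standard_cauchy(200)  # alpha-stable noise: no finite moments

def residual_iqr(a):
    """Interquartile width of the residuals for the model y ~ a*x + b.
    A constant intercept b shifts all residuals equally, so it drops out
    of the IQR and only the slope needs to be scanned."""
    q25, q75 = np.percentile(y - a * x, [25, 75])
    return q75 - q25

# Minimize the residual width over a grid of candidate slopes.
slopes = np.linspace(0.0, 4.0, 801)
a_hat = slopes[np.argmin([residual_iqr(a) for a in slopes])]
# a_hat lands near the true slope 2.0 despite the heavy-tailed outliers,
# which would pull an ordinary least-squares fit far off.
```

Because Cauchy noise has no finite variance, an ordinary least-squares slope on the same data can be arbitrarily bad; minimizing the residual width discards the influence of the tails, as the abstract describes.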
Michael Kech, Mathematics, 2015, Abstract: We explicitly give a frame of cardinality $5n-6$ such that every signal in $\mathbb{C}^n$ can be recovered up to a phase from its associated intensity measurements via the PhaseLift algorithm. Furthermore, we give explicit linear measurements with $4r(n-r)+n-2r$ outcomes that enable the recovery of every positive $n\times n$ matrix of rank at most $r$.
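A quick arithmetic check (my own illustration, not part of the abstract): the measurement count $4r(n-r)+n-2r$ specializes at rank $r=1$ to $5n-6$, matching the frame cardinality quoted for phase retrieval, consistent with the fact that recovering a signal up to phase amounts to recovering the rank-one matrix $xx^*$.

```python
def num_measurements(n, r):
    """Measurement count 4*r*(n - r) + n - 2*r from the abstract above."""
    return 4 * r * (n - r) + n - 2 * r

# Rank-one case: 4(n - 1) + n - 2 = 5n - 6, the quoted frame cardinality.
assert all(num_measurements(n, 1) == 5 * n - 6 for n in range(2, 100))
print(num_measurements(10, 1))  # → 44 = 5*10 - 6
```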