Search Results: 1 - 10 of 100 matches
Page 1 of 100
Physics, 2005, DOI: 10.1103/PhysRevE.72.036118 Abstract: We discuss two sampling schemes for selecting random subnets from a network, random sampling and connectivity-dependent sampling, and investigate how the degree distribution of a node is affected by each. We derive a necessary and sufficient condition guaranteeing that the degree distributions of the subnet and the true network belong to the same family of probability distributions. For completely random sampling of nodes, this condition is fulfilled by classical random graphs; for the vast majority of networks, however, it will not be met. We further discuss the case where the probability of sampling a node depends on its degree, and find that even classical random graphs are no longer closed under this sampling regime. We conclude by relating the results to real E. coli protein interaction network data.
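The closure property for classical random graphs can be checked empirically: if each node of an Erdős–Rényi graph is kept independently with probability q, a kept node's subnet degree is Binomial(degree, q), so the degree distribution stays in the binomial family and the mean degree scales by q. The sketch below (an illustration of the claim, not the paper's derivation; graph size and probabilities are arbitrary choices) demonstrates the mean-degree scaling.

```python
import random

def er_graph(n, p, rng):
    """Erdős–Rényi G(n, p): each of the n*(n-1)/2 edges present independently with prob p."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def random_subnet(adj, q, rng):
    """Completely random node sampling: keep each node independently with
    probability q, together with the edges induced among kept nodes."""
    kept = {v for v in adj if rng.random() < q}
    return {v: adj[v] & kept for v in kept}

rng = random.Random(0)
g = er_graph(2000, 0.005, rng)
sub = random_subnet(g, 0.5, rng)
# A kept node's subnet degree is Binomial(deg, 0.5), so the subnet's
# mean degree should be about half the full network's mean degree.
full_mean = sum(len(nb) for nb in g.values()) / len(g)
sub_mean = sum(len(nb) for nb in sub.values()) / len(sub)
```

For a heavy-tailed (e.g. scale-free) degree distribution the sampled degrees are binomial mixtures that leave the original family, which is the paper's point about most real networks.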
Computer Science, 2015, Abstract: Random sampling is a classical tool in constrained optimization. Under favorable conditions, the optimal solution subject to a small subset of randomly chosen constraints violates only a small subset of the remaining constraints. Here we study the following variant, which we call random sampling with removal: after sampling the subset, we remove a fixed number of constraints from the sample according to an arbitrary rule. Is it still true that the optimal solution of the reduced sample violates only a small subset of the constraints? The question naturally comes up when the solution subject to the sampled constraints is used as an approximate solution to the original problem. In that case, it makes sense to reduce the cost and volatility of the sample solution by removing some of the constraints that appear most restrictive, while the approximation quality (measured in terms of violated constraints) should remain high. We study random sampling with removal in a generalized, completely abstract setting where we assign to each subset $R$ of the constraints an arbitrary set $V(R)$ of constraints disjoint from $R$; in applications, $V(R)$ corresponds to the constraints violated by the optimal solution subject to only the constraints in $R$. Our results are parametrized by the dimension $\delta$. In this setting, we prove matching upper and lower bounds for the expected number of constraints violated by a random sample after the removal of $k$ elements. For a large range of values of $k$, the new upper bounds improve the previously best bounds for LP-type problems, which moreover had only been known in special cases. We show that this bound, previously known only for special LP-type problems, can be derived in the much more general setting of violator spaces, and with very elementary proofs.
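A minimal concrete instance of this setup (a toy illustration, not the paper's violator-space framework) is the one-dimensional problem "minimize x subject to x >= p for every constraint p": the optimum of a subset is its maximum, and V(R) is the set of constraints above that maximum. Removing the k most restrictive sampled constraints lowers the sample solution but enlarges the violated set.

```python
import random

def violators(points, solution):
    """Constraints violated by 'solution': here, the points strictly above it."""
    return [p for p in points if p > solution]

rng = random.Random(1)
points = [rng.random() for _ in range(10000)]  # constraints: x >= p for all p
r = rng.sample(points, 100)                    # random sample R of constraints
k = 3
reduced = sorted(r)[:-k]                       # remove the k most restrictive
sol = max(reduced)                             # optimum subject to the reduced sample
v = len(violators(points, sol))
# Without removal, E[|V(R)|] is about n/(|R|+1); removing the k largest
# sampled constraints pushes this up by roughly a factor of k+1.
```

The paper's question is how this trade-off behaves in general dimension $\delta$ and under arbitrary removal rules, where no such closed-form intuition is available.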
Mathematics, 2006, Abstract: For the space of functions that can be approximated by linear chirps, we prove a reconstruction theorem by random sampling at arbitrary rates.
Physics, 2012, DOI: 10.1088/1367-2630/14/6/063027 Abstract: We consider a self-attracting random walk in dimension d=1, in the presence of a field of strength s that biases the walker toward a target site. We focus on the dynamic case (a true reinforced random walk), where memory effects are implemented at each time step, unlike the static case, where memory effects are accounted for globally. We analyze in detail the asymptotic long-time behavior of the walker through the main statistical quantities (e.g., distinct sites visited, end-to-end distance), and we discuss a possible mapping between this dynamic self-attracting model and the trapping problem for a simple random walk, in analogy with the static model. Moreover, we find that for any s>0 the walk becomes ballistic and that field effects always prevail over memory effects without any singularity, already in d=1; this contrasts with the behavior observed in the static model.
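A naive simulation sketch of such a walk is below. The specific update rule (step weights of the form exp(s + alpha * visits), with reinforcement exponent alpha and constant rightward bias s, both chosen arbitrarily here) is an assumption for illustration, not the paper's exact model; it only mimics the qualitative picture that a positive field s produces ballistic motion when the attraction is weak.

```python
import math, random

def reinforced_walk(steps, s, alpha, rng):
    """Toy 1-D self-attracting walk with a bias field of strength s:
    the weight of stepping onto site y grows like exp(alpha * visits[y]),
    and a constant factor exp(s) favors steps toward the target (right)."""
    visits = {0: 1}
    x = 0
    for _ in range(steps):
        w_right = math.exp(s + alpha * visits.get(x + 1, 0))
        w_left = math.exp(alpha * visits.get(x - 1, 0))
        x += 1 if rng.random() < w_right / (w_right + w_left) else -1
        visits[x] = visits.get(x, 0) + 1
    return x

# With s > 0 and weak attraction the drift dominates, so the end-to-end
# distance grows linearly in the number of steps (ballistic regime):
end = reinforced_walk(5000, s=1.0, alpha=0.01, rng=random.Random(2))
```

With strong reinforcement (large alpha) this naive rule can instead trap the walker on heavily visited sites, which is the competition between field and memory effects that the paper analyzes.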
Mathematics, 2007, Abstract: This article presents uniform random generators of plane partitions according to the size (the number of cubes in the 3D interpretation). Combining a bijection of Pak with the method of Boltzmann sampling, we obtain random samplers that are slightly superlinear: the complexity is $O(n (\ln n)^3)$ in approximate-size sampling and $O(n^{4/3})$ in exact-size sampling (under a real-arithmetic computation model). To our knowledge, these are the first polynomial-time samplers for plane partitions according to the size (there exist polynomial-time samplers of another type, which draw plane partitions that fit inside a fixed bounding box). The same principles yield efficient samplers for $(a\times b)$-boxed plane partitions (plane partitions with two dimensions bounded), and for skew plane partitions. The random samplers allow us to perform simulations and observe limit shapes and frozen boundaries, which have been analysed recently by Cerf and Kenyon for plane partitions, and by Okounkov and Reshetikhin for skew plane partitions.
Physics, 2009, DOI: 10.1364/OL.34.001876 Abstract: We propose a new approach to nondeterministic random number generation. In theory, the randomness originates from the uncorrelated nature of consecutive laser pulses with a Poissonian photon number distribution, together with that of consecutive single-photon detections, and is used to generate random bits. In the experiment, the von Neumann correction method is applied to extract the final random bits. This method is proved to be bias-free in randomness generation, provided that the single-photon detections are mutually independent; furthermore, it has an advantage in random bit generation efficiency since no further postprocessing is needed. A true random number generator based on this new method is realized, and its randomness is validated using three batteries of statistical tests.
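The von Neumann correction mentioned here is a simple, classical debiasing step: scan the raw bit stream in non-overlapping pairs, emit the first bit of each discordant pair (01 emits 0, 10 emits 1), and discard 00 and 11. If the input bits are independent with a common (possibly uneven) bias, both outputs occur with equal probability. A minimal sketch, with the 80% bias chosen arbitrarily for demonstration:

```python
import random

def von_neumann(bits):
    """Von Neumann correction: for each non-overlapping pair, keep the
    first bit when the pair is discordant (01 -> 0, 10 -> 1) and discard
    concordant pairs. Unbiased output for i.i.d. (possibly biased) input."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

rng = random.Random(0)
biased = [1 if rng.random() < 0.8 else 0 for _ in range(20000)]  # ~80% ones
unbiased = von_neumann(biased)
mean = sum(unbiased) / len(unbiased)  # should be close to 0.5
```

The price is throughput: for bias p only a fraction 2p(1-p) of pairs survives, which is why the abstract highlights generation efficiency.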
Computer Science, 2010, Abstract: A TPM (Trusted Platform Module) is a chip present on most newer motherboards, whose primary function is to create, store, and work with cryptographic keys. This dedicated chip can serve to authenticate other devices or to protect encryption keys used by various software applications. Among other features, it comes with a True Random Number Generator (TRNG) that can be used for cryptographic purposes. This random number generator consists of a state machine that mixes unpredictable data with the output of a one-way hash function. According to the specification, it can be a good source of unpredictable random numbers even without requiring a genuine source of hardware entropy. However, the specification recommends collecting entropy from any available internal sources, such as clock jitter or thermal noise in the chip itself, a feature implemented by most manufacturers. This paper benchmarks the random number generators of several TPM chips from two perspectives: the quality of the random bit sequences generated, and the output bit rate.
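Quality benchmarks of this kind are typically built from standard statistical tests. As a flavor of what such a test looks like (the paper's actual test battery is not specified in the abstract), here is the simplest one, the NIST SP 800-22 frequency (monobit) test, which checks whether ones and zeros are balanced:

```python
import math, random

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test: under the null hypothesis
    of fair independent bits, s_obs = |sum of +/-1| / sqrt(n) is half-normal,
    and the p-value is erfc(s_obs / sqrt(2)). Small p-values reject randomness."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(n) / math.sqrt(2))

rng = random.Random(0)
p_fair = monobit_pvalue([rng.randrange(2) for _ in range(10000)])
p_biased = monobit_pvalue([1] * 10000)  # constant output fails decisively
```

A real benchmark would run the full battery (runs, block frequency, entropy tests, and so on) over long sequences from each chip, alongside timing the output to measure bit rate.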
Physics, 2013, DOI: 10.1051/0004-6361/201526236 Abstract: The next generation of galaxy surveys, aiming to observe millions of galaxies, is expensive in both time and cost. This raises questions about the optimal investment of this time and money in future surveys. In previous work, it was shown that a sparse sampling strategy could be a powerful substitute for contiguous observations. However, that work investigated a regular sparse sampling, where the sparsely observed patches were regularly distributed on the sky. The regularity of the mask introduces a periodic pattern in the window function, which induces periodic correlations at specific scales. In this paper, we use Bayesian experimental design to investigate a random sparse sampling, where the observed patches are randomly distributed over the total sparsely sampled area. We find that, as there is no preferred scale in the window function, the induced correlation is evenly distributed among all scales. This could be desirable if we are interested in specific scales in the galaxy power spectrum, such as the Baryon Acoustic Oscillation (BAO) scales. However, for constraining the overall galaxy power spectrum and the cosmological parameters, there is no preference between regular and random sampling. Hence, whichever approach is practically more suitable can be chosen, and we can relax the regular-grid condition for the distribution of the observed patches.
Mathematics, 2006, DOI: 10.1016/j.jmva.2006.01.007 Abstract: A random balanced sample (RBS) is a multivariate distribution with n components X_1,...,X_n, each uniformly distributed on [-1, 1], such that the sum of these components is precisely 0. The corresponding vectors X lie in an (n-1)-dimensional polytope M(n). We present new methods for the construction of such RBS via densities over M(n), and these apply for arbitrary n. While simple densities had been known previously for small values of n (namely 2, 3, and 4), for larger n the known distributions with large support were fractal distributions (with fractal dimension asymptotic to n as n approaches infinity). Applications of RBS distributions include sampling with antithetic coupling to reduce variance, and the isolation of nonlinearities. We also show that the previously known densities (for n<5) are in fact the only solutions in a natural and very large class of potential RBS densities. This finding clarifies the need for new methods, such as those presented here.
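The simplest balanced pair is the n=2 case: (U, -U) with U uniform on [-1, 1] is already an RBS, and it is exactly the antithetic coupling used for variance reduction. The sketch below (an illustration of that application, not the paper's construction; the integrand e^u on (0,1) is an arbitrary choice) compares the variance of plain Monte Carlo against antithetic pairs:

```python
import math, random

def plain_mc(f, m, rng):
    """m independent evaluations of f at uniform points."""
    return [f(rng.random()) for _ in range(m)]

def antithetic_mc(f, m, rng):
    """m antithetic pairs (U, 1-U): the pair averages have lower variance
    than averages of independent pairs whenever f is monotone."""
    return [(f(u) + f(1 - u)) / 2 for u in (rng.random() for _ in range(m))]

def var(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / (len(xs) - 1)

rng = random.Random(0)
f = math.exp                                  # estimate E[e^U] = e - 1
v_plain = var(plain_mc(f, 20000, rng)) / 2    # variance of a 2-draw average
anti = antithetic_mc(f, 10000, rng)
v_anti = var(anti)                            # variance of one antithetic pair
est = sum(anti) / len(anti)
```

The RBS constructions in the paper extend this negative coupling from pairs to n coordinates at once, with the sum constrained to be exactly zero.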
Dai Nguyen Bui, Computer Science, 2015, Abstract: We present a sampling method called CacheDiff that has both time and space complexity of O(k) for randomly selecting k items from a pool of N items, where N is known.
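The CacheDiff algorithm itself is not described in the abstract. For context, the same O(k) time and space bound for selecting k of N known items is achieved by Robert Floyd's classic sampling algorithm, sketched below as a point of comparison (not as the paper's method):

```python
import random

def floyd_sample(n, k, rng):
    """Robert Floyd's algorithm: uniformly select k distinct items from
    range(n) in O(k) expected time and O(k) space, assuming n is known.
    Invariant: after processing j, 'chosen' is a uniform (j - (n-k) + 1)-subset
    of range(j + 1)."""
    chosen = set()
    for j in range(n - k, n):
        t = rng.randrange(j + 1)
        # If t is already chosen, j itself is added instead; this keeps
        # every k-subset equally likely.
        chosen.add(t if t not in chosen else j)
    return chosen

picked = floyd_sample(10**6, 10, random.Random(0))
```

Unlike reservoir sampling, which needs a pass over all N items, this approach never touches the unsampled items, which is what makes O(k) total cost possible when N is known in advance.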