Abstract:
In this paper we address the problem of pruning (i.e., shortening) a given interleaver by truncating the transposition vector of the mother permutation, and we study the impact of this truncation on the structural properties of the permutation. This method of pruning allows continuous, uninterrupted data flow regardless of the permutation length, since the permutation engine is a buffer whose leading element is swapped with other elements in the queue. The principal goal of pruning is the construction of variable-length, and hence variable-delay, interleavers for iterative soft information processing and concatenated codes, using the same structure (possibly in hardware) for the interleaver and deinterleaver units. We examine how pruning affects the spread of the permutation, with particular attention to algebraically constructed permutations. We note that pruning via truncation of the transposition vector can have a catastrophic impact on the spread of algebraically constructed permutations. To remedy this problem, we propose a novel lifting method in which the subset of points in the permutation map responsible for the low spread of the pruned permutation is identified and eliminated. A practical realization of this lifting is then proposed via dummy-symbol insertion in the input queue of the Finite State Permuter (FSP), with subsequent removal of the dummy symbols at the FSP output.
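The transposition-vector mechanism described above can be sketched as follows. This is a minimal illustration of a buffer whose leading element is swapped with another element in the queue before being emitted, with pruning realized by truncating both the input and the transposition vector; the paper's exact FSP conventions may differ.

```python
def fsp_permute(data, t):
    """Apply a permutation defined by a transposition vector t.

    At step i, the leading (i-th) element of the buffer is swapped with
    the element t[i] positions behind it in the queue, then emitted.
    Truncating data and t to their first k entries prunes the
    permutation to length k.  (Illustrative sketch only.)
    """
    buf = list(data)
    out = []
    for i, ti in enumerate(t):
        j = i + ti  # queue position to swap with the current head
        buf[i], buf[j] = buf[j], buf[i]
        out.append(buf[i])
    return out

# Mother permutation of length 4, then a pruned (length-2) version.
full = fsp_permute([0, 1, 2, 3], [1, 0, 1, 0])
pruned = fsp_permute([0, 1], [1, 0])
```

Note that a prefix of the transposition vector is itself a valid transposition vector, which is what makes this pruning mechanically trivial; the paper's point is that spread properties are not preserved as easily.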

Abstract:
We derive the probability that a randomly chosen NL-node over $S$ is localized, as a function of a variety of parameters, and then the probability that the whole network of NL-nodes over $S$ is localized. We show the presence of asymptotic thresholds on the network localization probability in two different scenarios. The first concerns dense networks, which arise when the domain $S$ is bounded and the densities of the two kinds of nodes grow unboundedly. The second kind of threshold manifests itself when the considered domain increases while the number of nodes grows in such a way that the L-node density remains constant throughout the domain. In this scenario, what matters is the minimum value of the maximum transmission range averaged over the fading process, denoted $d_{max}$, above which the network of NL-nodes is asymptotically localized almost surely.

Abstract:
The quantum bit error rate is a key quantity in quantum communications. When the quantum channel is the atmosphere, the information is usually encoded in the polarization of a photon. A link budget is therefore required that accounts for the depolarization of the photon after its interaction with the atmosphere, as well as for absorption, scattering, and atmospheric emissions. An experimental setup reproducing a simple model of the atmosphere is used to evaluate the quantum bit error rate of a BB84 protocol, and the results are presented. They represent a first step toward an optical-bench experiment in which atmospheric effects are simulated and controlled in order to reproduce their impact on a quantum channel under different meteorological conditions.
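The link-budget idea above can be illustrated with a toy calculation. For a depolarizing channel that maps the signal state to $(1-q)\rho + q\,I/2$, a fraction $q/2$ of detected signal photons produce the wrong polarization outcome, while dark counts are random and err half the time. The function and parameter names below are hypothetical, and this omits many real link-budget terms (basis misalignment, afterpulsing, multi-photon pulses).

```python
def qber_estimate(depol, p_signal=1.0, p_dark=0.0):
    """Toy QBER for polarization-encoded BB84.

    depol:    depolarizing probability q of the atmospheric channel
    p_signal: probability of a detector click due to a signal photon
    p_dark:   probability of a dark-count click in the same gate

    A depolarized photon errs with probability 1/2, as does a dark
    count; the QBER is the error fraction among all clicks.
    (Illustrative sketch only, not the paper's link budget.)
    """
    p_click = p_signal + p_dark
    p_error = 0.5 * depol * p_signal + 0.5 * p_dark
    return p_error / p_click
```

With no dark counts the expression reduces to $q/2$, the textbook depolarizing-channel error rate; increasing `p_dark` pushes the QBER toward 0.5, as expected for noise-dominated detection.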

This essay points to government policies, the practices of the corporate
sector, the inherited wealth of the few, and the consumptive behavior of the
masses as the underlying causes of economic injustice, which is perceived as
leading the country toward oligarchy.

Abstract:
This article deals with the localization probability in a network of randomly distributed communication nodes contained in a bounded domain. A fraction of the nodes, denoted L-nodes, are assumed to have localization information, while the rest, denoted NL-nodes, do not. The basic model assumes that each node has a certain radio coverage within which it can make relative distance measurements. We model both the case in which the radio coverage is fixed and the case in which it is determined by signal-strength measurements in a log-normal shadowing environment. We apply the probabilistic method to determine the probability of NL-node localization as a function of the ratio of the coverage area to the domain area and of the density of L-nodes. We establish analytical expressions for this probability and for the transition thresholds with respect to key parameters at which a marked change in the probability behavior is observed. The theoretical results presented in the article are supported by simulations.
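The fixed-coverage model above is easy to probe numerically. The sketch below is a Monte Carlo estimate, under assumed conventions (unit-square domain, an NL-node counts as localized when at least `k` L-nodes lie within its radius, with `k = 3` as in standard 2-D trilateration); all parameter names are hypothetical and the paper's analytical model may differ in detail.

```python
import math
import random

def localization_prob(n, l_frac, r, k=3, seed=0):
    """Monte Carlo estimate of the NL-node localization probability.

    n:      total number of nodes, placed uniformly in the unit square
    l_frac: fraction of nodes that are L-nodes (have position info)
    r:      fixed radio-coverage radius for distance measurements
    k:      L-node neighbors required to localize an NL-node

    Returns the fraction of NL-nodes with at least k L-nodes in range.
    (Illustrative sketch of the fixed-coverage case only.)
    """
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    n_l = int(l_frac * n)
    l_nodes, nl_nodes = pts[:n_l], pts[n_l:]

    def localized(p):
        hits = sum(1 for q in l_nodes if math.dist(p, q) <= r)
        return hits >= k

    return sum(localized(p) for p in nl_nodes) / len(nl_nodes)
```

Sweeping `r` (i.e., the coverage-area to domain-area ratio) or `l_frac` in such a simulation is one way to visualize the sharp transition thresholds the article characterizes analytically.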

Abstract:
This article proposes a novel iterative algorithm based on Low-Density Parity-Check (LDPC) codes for the compression of correlated sources at rates approaching the Slepian-Wolf bound. The setup considered compresses one source at a rate determined by the mean source correlation known at the encoder, while the other correlated source serves as side information at the decoder, which decompresses the first source based on estimates of the actual correlation. We demonstrate that, depending on the extent of the actual source correlation estimated through an iterative paradigm, significant compression can be obtained relative to the case in which the decoder does not exploit the implicit knowledge of the correlation.
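The Slepian-Wolf bound referenced above is concrete in the common binary model, where the two sources are related by $X = Y \oplus E$ with $E \sim \mathrm{Bernoulli}(p)$: the minimum rate for compressing $X$ with $Y$ available only at the decoder is $H(X\mid Y) = h_2(p)$, the binary entropy of the correlation parameter. The snippet below computes this bound; it assumes the binary symmetric correlation model for illustration, which the article's setup need not match exactly.

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def slepian_wolf_rate(p):
    """Minimum rate (bits/symbol) to compress X when the decoder has
    side information Y, with X = Y xor E and E ~ Bernoulli(p):
    R >= H(X|Y) = h2(p).  (Binary symmetric correlation model.)"""
    return h2(p)
```

For example, strongly correlated sources (small `p`) admit rates far below 1 bit/symbol, while `p = 0.5` (independent sources) gives no compression gain; an LDPC-based scheme of the kind the article proposes aims to operate close to this curve.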

Following the basic ideas of general relativity and quantum field theory, and
combining the two standard models, the curvature mass inside hadrons is
discussed and developed. In this framework the standard model of particle
physics and the standard model of cosmology are naturally unified under the
mathematical framework of geometric field theory, where the phenomena of dark
matter and dark energy can receive a natural theoretical interpretation.