Abstract:
In this paper, we present a probabilistic analysis of iterative node-based verification-based (NB-VB) recovery algorithms over irregular graphs in the context of compressed sensing. Verification-based algorithms are particularly interesting due to their low complexity (linear in the signal dimension $n$). The analysis predicts the average fraction of unverified signal elements at each iteration $\ell$, where the average is taken over the ensembles of input signals and sensing matrices. The analysis is asymptotic ($n \rightarrow \infty$) and is similar in nature to the well-known density evolution technique commonly used to analyze iterative decoding algorithms. Compared to the existing technique for the analysis of NB-VB algorithms, which is based on numerically solving a large system of coupled differential equations, the proposed method is much simpler and more accurate. This allows us to design irregular sensing graphs for such recovery algorithms. The designed irregular graphs outperform the corresponding regular graphs substantially. For example, for the same recovery complexity per iteration, we design irregular graphs that can recover up to about 40% more non-zero signal elements than the regular graphs. Simulation results are also provided which demonstrate that the proposed asymptotic analysis matches the performance of recovery algorithms for large but finite values of $n$.
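As background on how verification-based recovery operates, the following toy sketch applies the two classic verification rules (the zero check and the degree-one check) over a dense sensing matrix. It is an illustration only, not the paper's NB-VB variant or its probabilistic analysis; the name `vb_recover` and the toy instance are ours.

```python
import numpy as np

def vb_recover(A, y, max_iter=50):
    """Toy verification-based recovery with two classic rules:
      * zero check: a zero-valued measurement verifies all of its
        unverified neighbors as zero (valid for continuous-valued
        signals, where exact cancellation has probability zero);
      * degree-one check: a measurement with a single unverified
        neighbor verifies that neighbor with the residual value."""
    m, n = A.shape
    x_hat = np.zeros(n)
    verified = np.zeros(n, dtype=bool)
    residual = y.astype(float).copy()
    for _ in range(max_iter):
        progress = False
        for i in range(m):
            nbrs = np.where((A[i] != 0) & ~verified)[0]
            if nbrs.size == 0:
                continue
            if np.isclose(residual[i], 0.0):        # zero check
                verified[nbrs] = True
                progress = True
            elif nbrs.size == 1:                    # degree-one check
                j = nbrs[0]
                x_hat[j] = residual[i] / A[i, j]
                verified[j] = True
                residual -= A[:, j] * x_hat[j]
                progress = True
        if not progress:
            break
    return x_hat, verified

A = np.array([[1.0, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
x = np.array([0.0, 5, 0])                # sparse input signal
x_hat, verified = vb_recover(A, A @ x)   # recovers x exactly on this instance
```

The complexity per iteration is linear in the number of edges of the sensing graph, which is the source of the linear-in-$n$ complexity mentioned above.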

Abstract:
We propose a verification-based Interval-Passing (IP) algorithm for the iterative reconstruction of nonnegative sparse signals, using parity-check matrices of low-density parity-check (LDPC) codes as measurement matrices. The proposed algorithm can be viewed as an improved IP algorithm that further incorporates the mechanism of the verification algorithm. It is proved that the proposed algorithm always performs better than either the IP algorithm or the verification algorithm alone. Simulation results are also given to demonstrate the superior performance of the proposed algorithm.
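A minimal sketch of the plain IP component may help fix ideas: with a 0/1 measurement matrix and a nonnegative signal, each signal element keeps an interval $[l_j, u_j]$ that every incident measurement tightens. This is our simplified illustration of standard interval passing, not the paper's verification-enhanced variant.

```python
import numpy as np

def interval_passing(A, y, iters=30):
    """Interval-Passing sketch for y = A x with x >= 0 and a 0/1
    measurement matrix A. Each signal element keeps lower/upper bounds;
    measurement i tightens element j's bounds using the bounds of the
    other elements that measurement i touches."""
    m, n = A.shape
    lo = np.zeros(n)                 # x >= 0 gives the initial lower bound
    hi = np.full(n, np.inf)
    for _ in range(iters):
        for i in range(m):
            nbrs = np.where(A[i] > 0)[0]
            for j in nbrs:
                others = nbrs[nbrs != j]
                hi[j] = min(hi[j], y[i] - lo[others].sum())
                lo[j] = max(lo[j], y[i] - hi[others].sum())
    return lo, hi

# On a cycle-free toy instance the intervals collapse to the true signal.
A = np.array([[1.0, 1],
              [0, 1]])
x = np.array([1.0, 2.0])
lo, hi = interval_passing(A, A @ x)
```

On graphs with cycles the intervals may stall with `lo < hi` for some elements, which is exactly where an added verification mechanism can make further progress.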

Abstract:
In this paper, we propose a general framework for the asymptotic analysis of node-based verification-based algorithms. In our analysis, we let the signal length $n$ tend to infinity and let the number of non-zero signal elements $k$ scale linearly with $n$. Using the proposed framework, we study the asymptotic behavior of the recovery algorithms over random sparse matrices (graphs) in the context of compressive sensing. Our analysis shows that there exists a success threshold on the density ratio $k/n$, before which the recovery algorithms are successful, and beyond which they fail. This threshold is a function of both the graph and the recovery algorithm. We also demonstrate that there is a good agreement between the asymptotic behavior of recovery algorithms and finite-length simulations for moderately large values of $n$.

Abstract:
In this paper, we present a new approach for the analysis of iterative node-based verification-based (NB-VB) recovery algorithms in the context of compressive sensing. These algorithms are particularly interesting due to their low complexity (linear in the signal dimension $n$). The asymptotic analysis predicts the fraction of unverified signal elements at each iteration $\ell$ in the asymptotic regime where $n \rightarrow \infty$. The analysis is similar in nature to the well-known density evolution technique commonly used to analyze iterative decoding algorithms. To perform the analysis, a message-passing interpretation of NB-VB algorithms is provided. This interpretation lacks the extrinsic nature of standard message-passing algorithms to which density evolution is usually applied. This requires a number of non-trivial modifications in the analysis. The analysis tracks the average performance of the recovery algorithms over the ensembles of input signals and sensing matrices as a function of $\ell$. Concentration results are devised to demonstrate that the performance of the recovery algorithms applied to any choice of the input signal over any realization of the sensing matrix follows the deterministic results of the analysis closely. Simulation results are also provided which demonstrate that the proposed asymptotic analysis matches the performance of recovery algorithms for large but finite values of $n$. Compared to the existing technique for the analysis of NB-VB algorithms, which is based on numerically solving a large system of coupled differential equations, the proposed method is much simpler and more accurate.

Abstract:
Energy efficiency is a primary challenge in wireless body sensor networks for long-term physical movement monitoring. To reduce energy consumption while maintaining sufficient accuracy in classifying human activity, we explore a compressed classification approach that combines classification with data compression based on sparse representation and compressed sensing. The proposed approach first compresses the sensed data by random projection on the sensor nodes; after transmission to the central node, activities are recognized directly on the compressed samples by sparse representation, which avoids transmitting the original data and thus saves energy. The performance of the method is evaluated on the public Wearable Action Recognition Database (WARD). Experimental results validate that the compressed classifier achieves comparable recognition accuracy on the compressed sensing data.
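The compress-then-classify pipeline can be sketched on synthetic data. The sketch below uses per-class least-squares residuals as a simplified stand-in for sparse-representation classification, and random subspace data rather than WARD; all names, dimensions, and the class model are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 100, 30                    # original / compressed dimensions
Phi = rng.standard_normal((m, d)) / np.sqrt(m)   # random projection (sensor side)

# Synthetic stand-in for activity data: each class lies in its own
# low-dimensional subspace (WARD itself is not used here).
bases = [np.linalg.qr(rng.standard_normal((d, 5)))[0] for _ in range(3)]

def make_class(basis, n_samples):
    return basis @ rng.standard_normal((basis.shape[1], n_samples))

train = [make_class(B, 10) for B in bases]       # columns are samples

def classify_compressed(y):
    """Assign a compressed sample y = Phi @ x to the class whose
    compressed training dictionary represents it with least residual
    (a least-squares simplification of sparse-representation
    classification)."""
    residuals = []
    for X in train:
        Z = Phi @ X                              # compressed dictionary
        c, *_ = np.linalg.lstsq(Z, y, rcond=None)
        residuals.append(np.linalg.norm(Z @ c - y))
    return int(np.argmin(residuals))

x_test = make_class(bases[1], 1)[:, 0]           # a fresh class-1 sample
label = classify_compressed(Phi @ x_test)        # decided from only m=30 numbers
```

The energy saving comes from the sensor node transmitting only the $m$-dimensional projection instead of the $d$-dimensional raw sample.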

Abstract:
Wireless tomography is a technique for inferring the physical environment within a monitored region by analyzing RF signals traversing the region. In this paper, we consider wireless tomography in monitored regions with two- and higher-dimensional structure, and propose a multi-dimensional wireless tomography scheme based on compressed sensing to estimate the spatial distribution of shadowing loss in the monitored region. To estimate the spatial distribution, we consider two compressed sensing frameworks: vector-based compressed sensing and tensor-based compressed sensing. When the shadowing loss is highly spatially correlated within the monitored region, its spatial distribution is sparse in the frequency domain. Existing wireless tomography schemes are based on vector-based compressed sensing and estimate the distribution by exploiting this sparsity. In contrast, the proposed scheme is based on tensor-based compressed sensing, which estimates the distribution by exploiting the low-rank property of the distribution. We reveal that tensor-based compressed sensing has the potential for more accurate estimation than vector-based compressed sensing.
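The low-rank property that the tensor-based framework exploits can be illustrated on a toy shadowing map: a spatially smooth field built from a few separable modes is (approximately) low-rank, so a truncated SVD captures it with very few components. This toy construction is ours and is not the paper's estimator.

```python
import numpy as np

n = 32
# A smooth, spatially correlated shadowing map built from three
# separable low-frequency modes -- by construction it has rank 3.
modes = [np.cos(np.pi * k * np.arange(n) / n) for k in range(3)]
S = sum((k + 1.0) * np.outer(modes[k], modes[k]) for k in range(3))

# Low-rank property: only 3 singular values are non-zero, so a rank-3
# truncated SVD reproduces the whole n-by-n map exactly.
U, s, Vt = np.linalg.svd(S)
S3 = (U[:, :3] * s[:3]) @ Vt[:3]
```

A low-rank recovery scheme thus needs to estimate only on the order of $3n$ degrees of freedom instead of $n^2$ pixel values.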

Abstract:
In this paper, we provide a new approach to estimating the error of reconstruction from $\Sigma\Delta$ quantized compressed sensing measurements. Our method is based on the restricted isometry property (RIP) of a certain projection of the measurement matrix. Our result yields simple proofs and a slight generalization of the best-known reconstruction error bounds for Gaussian and subgaussian measurement matrices.
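For concreteness, the first-order $\Sigma\Delta$ quantization recursion whose error structure such bounds address can be sketched as follows; the scaling of the input into $[-1/2, 1/2]$ is our illustrative choice.

```python
import numpy as np

def sigma_delta_1bit(y):
    """First-order Sigma-Delta quantizer: q_i = sign(u_{i-1} + y_i),
    u_i = u_{i-1} + y_i - q_i. The state u stays bounded, so the
    quantization error has the 'difference' structure exploited by
    RIP-based reconstruction error bounds."""
    u = 0.0
    q = np.empty_like(y)
    for i, yi in enumerate(y):
        v = u + yi
        q[i] = 1.0 if v >= 0 else -1.0
        u = v - q[i]
    return q

rng = np.random.default_rng(2)
y = rng.uniform(-0.5, 0.5, 200)      # measurements scaled into [-1/2, 1/2]
q = sigma_delta_1bit(y)
# Bounded state: the running sums of y and q never drift more than 1 apart.
assert np.max(np.abs(np.cumsum(y) - np.cumsum(q))) <= 1.0
```

The final assertion holds because the state update gives $\sum_i y_i - \sum_i q_i = u_\ell$, and a short induction shows $|u_\ell| \le 1$ for inputs bounded by $1/2$.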

Abstract:
Network tomography estimates internal link states from end-to-end path measurements. In conventional network tomography, making packets penetrate a network requires cooperation between transmitter and receiver nodes located at different places in the network. In this paper, we propose reflective network tomography, which entirely avoids such cooperation: a single transceiver node transmits packets and receives them after they traverse the network and return. Furthermore, since we are interested in identifying a limited number of bottleneck links, we naturally introduce compressed sensing. Allowing two kinds of paths, namely (fully) loopy paths and folded paths, we propose a computationally efficient algorithm for constructing reflective paths in a given network. Performance evaluation by computer simulation confirms the effectiveness of the proposed reflective network tomography scheme.
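To make the notion of a reflective path concrete, here is a small sketch that enumerates loopy paths, i.e., walks that leave the transceiver and return to it without reusing a link. It is a brute-force illustration on a toy graph, not the paper's efficient construction algorithm, and folded paths are not modeled.

```python
def loopy_paths(adj, s, max_hops):
    """Enumerate (fully) loopy paths: walks that start and end at the
    single transceiver node s without reusing any undirected link."""
    found = []

    def dfs(u, used, path):
        if u == s and path:
            found.append(list(path))
        if len(path) == max_hops:
            return
        for v in adj[u]:
            link = frozenset((u, v))
            if link not in used:          # each undirected link used once
                used.add(link)
                path.append(v)
                dfs(v, used, path)
                path.pop()
                used.remove(link)

    dfs(s, set(), [])
    return found

# Triangle network: node 0 is the transceiver.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
paths = loopy_paths(adj, 0, 3)           # both orientations of the triangle
```

Each such path yields one row of a compressed sensing measurement matrix: the path delay is the sum of the delays of the links it traverses.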

Abstract:
Finding a suitable measurement matrix is an important topic in compressed sensing. Although a random matrix whose entries are drawn independently from a certain probability distribution can serve as a measurement matrix and recover signals well, in most cases we would prefer a measurement matrix endowed with some special structure. In this paper, based on random graph models, we show that mixed symmetric random matrices, whose diagonal entries follow one distribution and whose off-diagonal entries follow another, can be used to recover signals successfully with high probability.
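A mixed symmetric random matrix of this kind is easy to construct. In the sketch below the off-diagonal entries are Gaussian and the diagonal entries are Rademacher ($\pm 1$); these particular distributions are our illustrative choice, not necessarily the ones analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
# Mixed symmetric random matrix: off-diagonal entries from one
# distribution (Gaussian), mirrored so that A[i, j] == A[j, i], and
# diagonal entries from another distribution (Rademacher +/-1).
G = rng.standard_normal((n, n))
A = np.triu(G, k=1)
A = A + A.T                                      # symmetric, zero diagonal
np.fill_diagonal(A, rng.choice([-1.0, 1.0], size=n))
```

The symmetry halves the number of independent random entries that must be generated and stored, which is one practical benefit of imposing structure on the measurement matrix.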

Abstract:
We consider a multi-hop wireless sensor network that measures sparse events and propose a simple forwarding protocol based on Compressed Sensing (CS) that requires neither sophisticated Media Access Control (MAC) scheduling nor a routing protocol, thereby yielding significant overhead and energy savings. By means of flooding, multiple packets with different superimposed measurements are received simultaneously at any node. Thanks to our protocol, each node is able to recover each measurement and forward it while avoiding cycles. Numerical results show that our protocol achieves close-to-zero reconstruction error at the sink while greatly reducing overhead. This initial research reveals a new and promising approach to protocol design through CS for wireless mesh and sensor networks.
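The sink-side step of such a scheme, recovering sparse events from a few superimposed random-weighted measurements, can be sketched with a standard greedy solver. Orthogonal Matching Pursuit is used here as a generic stand-in; the paper does not necessarily use this particular reconstruction algorithm, and the dimensions are our illustrative choice.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(4)
n_nodes, m = 50, 30
# Each of the m measurements reaching the sink is a random-weighted
# superposition of all node readings, as produced by in-network
# combining during flooding.
Phi = rng.standard_normal((m, n_nodes)) / np.sqrt(m)
x = np.zeros(n_nodes)
x[[3, 17, 41]] = [1.0, -2.0, 0.5]    # sparse events at three nodes
x_hat = omp(Phi, Phi @ x, k=3)       # sink-side sparse recovery
```

The point of the CS formulation is visible in the dimensions: the sink reconstructs all 50 node readings from only 30 superimposed measurements because the event vector is sparse.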