Abstract:
This article investigates the impact of manufacturing uncertainty in composite structures, here in the form of thickness variation in laminate plies, on the robustness of Artificial Neural Networks (ANN) commonly used in Structural Health Monitoring (SHM). Specifically, the robustness of an ANN-based SHM system is assessed through an airfoil case study by measuring the sensitivity of delamination location and size predictions when the ANN is subjected to noisy input. In light of the poor performance of the original network, even after its architecture was carefully optimized, we propose weighting the input layer of the ANN by a set of signal-to-noise (SN) ratios before training. With this approach, both damage location and size prediction accuracies increased to above 90%. Practical aspects of the proposed robust SN-ANN SHM system are also discussed.
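The SN-ratio weighting described above can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the SN-ratio estimator, the normalization by the maximum ratio, and the synthetic sensor data are all illustrative choices.

```python
import numpy as np

# Hypothetical sketch of SN-ratio input weighting (not the paper's code):
# scale each input feature by its estimated signal-to-noise ratio before
# training, so noisier features contribute less to the network input.

def sn_ratios(clean, noisy):
    """Per-feature SN ratio: signal power over noise power (assumed estimator)."""
    noise = noisy - clean
    return clean.var(axis=0) / noise.var(axis=0)

def weight_inputs(X, sn):
    """Scale each input feature by its SN ratio, normalized to [0, 1]."""
    return X * (sn / sn.max())

rng = np.random.default_rng(0)
clean = rng.normal(size=(200, 8))                      # 200 samples, 8 sensor features
noisy = clean + rng.normal(scale=0.3, size=clean.shape)  # corrupted measurements
sn = sn_ratios(clean, noisy)
X_weighted = weight_inputs(noisy, sn)                  # input fed to the ANN
```

The weighted matrix `X_weighted` would then replace the raw noisy features as the ANN's training input.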

Abstract:
The problem of designing efficient feedback-based scheduling policies for chunked codes (CC) over packet networks with delay and loss is considered. For networks with feedback, two scheduling policies, referred to as random push (RP) and local-rarest-first (LRF), already exist. We propose a new scheduling policy, referred to as minimum-distance-first (MDF), based on the expected number of innovative successful packet transmissions at each node of the network prior to the "next" transmission time, given the feedback information from the downstream node(s) about the received packets. Unlike the existing policies, the MDF policy incorporates loss and delay models of the link in the selection process of the chunk to be transmitted. Our simulations show that MDF significantly reduces the expected time required for all the chunks (or equivalently, all the message packets) to be decodable compared to the existing scheduling policies for line networks with feedback. The improvements are particularly pronounced (up to about 46% for the tested cases) for smaller chunks and larger networks, which are of more practical interest. The improvement in the performance of the proposed scheduling policy comes at the cost of more computations and a slight increase in the amount of feedback. We also propose a low-complexity version of MDF with only a small loss in performance, referred to as minimum-current-metric-first (MCMF). The MCMF policy is based on the expected number of innovative packet transmissions prior to the "current" transmission time, as opposed to the next transmission time used in MDF. Our simulations (over line networks) demonstrate that MCMF is always superior to the RP and LRF policies, and the superiority becomes more pronounced for smaller chunks and larger networks.
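A toy sketch of the chunk-selection idea behind MDF may help fix intuitions. The metric below is an assumption, not the paper's exact expression: it picks the chunk with the minimum expected number of further successful transmissions needed to make it decodable downstream, given per-chunk ranks reported via feedback and a Bernoulli link-loss probability.

```python
# Hypothetical MDF-style chunk selection (simplified assumptions):
# `sender_ranks[i]` is the rank of chunk i at the sending node, and
# `receiver_ranks[i]` is the rank reported by downstream feedback.

def expected_transmissions_needed(sender_rank, receiver_rank, loss_prob):
    """Expected transmission slots to close the rank gap over a
    Bernoulli-loss link; infinity if the sender has nothing innovative."""
    missing = max(sender_rank - receiver_rank, 0)
    if missing == 0:
        return float("inf")
    return missing / (1.0 - loss_prob)

def mdf_select(sender_ranks, receiver_ranks, loss_prob):
    """Return the index of the chunk closest to being decodable downstream."""
    costs = [expected_transmissions_needed(s, r, loss_prob)
             for s, r in zip(sender_ranks, receiver_ranks)]
    return min(range(len(costs)), key=costs.__getitem__)
```

For instance, with sender ranks `[4, 4, 4]`, fed-back receiver ranks `[1, 3, 0]`, and a 20% loss probability, the policy would schedule chunk 1, which needs only one more innovative packet.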

Abstract:
In this paper, we analyze the coding delay and the average coding delay of random linear network codes (a.k.a. dense codes) and chunked codes (CC), which are an attractive alternative to dense codes due to their lower complexity, over line networks with Bernoulli losses and deterministic regular or Poisson transmissions. Our results, which include upper bounds on the delay and the average delay, are (i) for dense codes, in some cases more general, and in some other cases tighter, than the existing bounds, and provide a clearer picture of the speed of convergence of dense codes to the (min-cut) capacity of line networks; and (ii) the first of their kind for CC over networks with such probabilistic traffics. In particular, these results demonstrate that a stand-alone CC or a precoded CC provides a better tradeoff between the computational complexity and the convergence speed to the network capacity over the probabilistic traffics compared to the arbitrary deterministic traffics previously studied in the literature.

Abstract:
Network coding is known to improve the throughput and the resilience to losses in most network scenarios. In a practical network scenario, however, accurate modeling of the traffic is often too complex and/or infeasible. The goal is thus to design codes that efficiently perform close to the capacity of any network (with arbitrary traffic). In this context, random linear network codes are known to be capacity-achieving while requiring a decoding complexity quadratic in the message length. Chunked Codes (CC) were proposed by Maymounkov et al. to improve the computational efficiency of random codes by partitioning the message into a number of non-overlapping chunks. CC can also be capacity-achieving and have a lower encoding/decoding complexity, at the expense of slower convergence to the capacity. In this paper, we propose and analyze a generalized version of CC, called Overlapped Chunked Codes (OCC), in which chunks are allowed to overlap. Our theoretical analysis and simulation results show that, compared to CC, OCC approach the capacity faster while maintaining almost the same advantage in computational efficiency.
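The overlapping-chunk construction can be sketched as follows. This is an illustrative toy under stated assumptions: a fixed overlap between consecutive chunks, and random linear coding over a small prime field GF(257) (a prime modulus keeps the toy arithmetic a true field; practical codes typically use GF(256)).

```python
import random

# Hypothetical sketch of overlapped chunking: consecutive chunks of
# `size` packet indices share `overlap` indices, so a decoded chunk
# reveals packets that help decode its neighbours.

def overlapped_chunks(num_packets, size, overlap):
    """Return the packet-index lists of the (possibly overlapping) chunks."""
    step = size - overlap
    chunks, start = [], 0
    while start + size <= num_packets:
        chunks.append(list(range(start, start + size)))
        start += step
    return chunks

def encode(packets, chunk, q=257):
    """Random linear combination of one chunk's packets over GF(q),
    as a dense code applied to that chunk would produce."""
    coeffs = [random.randrange(q) for _ in chunk]
    payload = [sum(c * packets[i][j] for c, i in zip(coeffs, chunk)) % q
               for j in range(len(packets[0]))]
    return coeffs, payload
```

Setting `overlap = 0` recovers ordinary non-overlapping CC, which is the sense in which OCC generalize CC.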

Abstract:
In this paper, we analyze the coding delay and the average coding delay of Chunked network Codes (CC) over line networks with Bernoulli losses and deterministic regular or Poisson transmissions. Chunked codes are an attractive alternative to random linear network codes due to their lower complexity. Our results, which include upper bounds on the delay and the average delay, are the first of their kind for CC over networks with such probabilistic traffics. These results demonstrate that a stand-alone CC or a precoded CC provides a better tradeoff between the computational complexity and the convergence speed to the network capacity over the probabilistic traffics compared to the arbitrary deterministic traffics previously studied in the literature.

Abstract:
In this paper, we study the coding delay and the average coding delay of random linear network codes (dense codes) over line networks with deterministic regular and Poisson transmission schedules. We consider both lossless networks and networks with Bernoulli losses. The upper bounds derived in this paper, which are in some cases more general, and in some other cases tighter, than the existing bounds, provide a clearer picture of the speed of convergence of dense codes to the min-cut capacity of line networks.

Abstract:
In this paper, the problem of designing network codes that are both communicationally and computationally efficient over packet line networks with worst-case schedules is considered. In this context, random linear network codes (dense codes) are asymptotically capacity-achieving, but require highly complex coding operations. To reduce the coding complexity, Maymounkov et al. proposed chunked codes (CC). Chunked codes operate by splitting the message into non-overlapping chunks and, at each transmission time, encoding a randomly chosen chunk with a dense code. The coding complexity, which is linear in the chunk size, is thus reduced compared to that of dense codes. In this paper, the existing analysis of CC is revised, and tighter bounds on the performance of CC are derived. As a result, we prove that (i) CC with sufficiently large chunks are asymptotically capacity-achieving, but with a slower speed of convergence compared to dense codes; and (ii) CC with relatively smaller chunks approach the capacity with an arbitrarily small but non-zero constant gap. To improve the speed of convergence of CC, while maintaining their advantage in reducing the computational complexity, we propose and analyze a new CC scheme with overlapping chunks, referred to as overlapped chunked codes (OCC). We prove that for smaller chunks, which are advantageous due to lower computational complexity, OCC with larger overlaps provide a better tradeoff between the speed of convergence and the message or packet error rate. This implies that for smaller chunks, and with the same computational complexity, OCC outperform CC in terms of the speed of approaching the capacity for sufficiently small target error rates. In fact, we design linear-time OCC with very small chunks (constant in the message size) that are both computationally and communicationally efficient, and that outperform linear-time CC.
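The CC transmission step described above (split into non-overlapping chunks, then dense-code a uniformly random chunk at each transmission time) can be sketched minimally. The field size GF(257) and the packet layout are illustrative assumptions.

```python
import random

# Minimal sketch of chunked-code transmission (illustrative, not the
# paper's implementation): the message is a list of packets, each a
# list of symbols over the prime field GF(q).

def make_chunks(message, chunk_size):
    """Split the message packets into non-overlapping chunks."""
    return [message[i:i + chunk_size]
            for i in range(0, len(message), chunk_size)]

def transmit(chunks, q=257):
    """Pick a chunk uniformly at random and send a random linear
    combination of its packets, as a per-chunk dense code would."""
    cid = random.randrange(len(chunks))
    chunk = chunks[cid]
    coeffs = [random.randrange(q) for _ in chunk]
    payload = [sum(c * pkt[j] for c, pkt in zip(coeffs, chunk)) % q
               for j in range(len(chunk[0]))]
    return cid, coeffs, payload
```

Because each coded packet mixes only one chunk's packets, decoding cost per chunk scales with the chunk size rather than the full message length, which is the complexity saving over dense codes.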

Abstract:
To lower the complexity of network codes over packet line networks with arbitrary schedules, chunked codes (CC) and overlapped chunked codes (OCC) were proposed in earlier works. These codes have previously been analyzed for relatively large chunks. In this paper, we prove that for smaller chunks, CC and OCC asymptotically approach the capacity with an arbitrarily small but non-zero constant gap. We also show that, unlike the case for large chunks, the larger the overlap size, the better the tradeoff between the speed of convergence and the message or packet error rate. This implies that OCC are superior to CC for shorter chunks. Simulations consistent with the theoretical results are also presented, suggesting great potential for the application of OCC to multimedia transmission over packet networks.

Abstract:
In this paper, we present a new approach for the analysis of iterative node-based verification-based (NB-VB) recovery algorithms in the context of compressive sensing. These algorithms are particularly interesting due to their low complexity (linear in the signal dimension $n$). The asymptotic analysis predicts the fraction of unverified signal elements at each iteration $\ell$ in the asymptotic regime where $n \rightarrow \infty$. The analysis is similar in nature to the well-known density evolution technique commonly used to analyze iterative decoding algorithms. To perform the analysis, a message-passing interpretation of NB-VB algorithms is provided. This interpretation lacks the extrinsic nature of standard message-passing algorithms to which density evolution is usually applied. This requires a number of non-trivial modifications in the analysis. The analysis tracks the average performance of the recovery algorithms over the ensembles of input signals and sensing matrices as a function of $\ell$. Concentration results are devised to demonstrate that the performance of the recovery algorithms applied to any choice of the input signal over any realization of the sensing matrix follows the deterministic results of the analysis closely. Simulation results are also provided which demonstrate that the proposed asymptotic analysis matches the performance of recovery algorithms for large but finite values of $n$. Compared to the existing technique for the analysis of NB-VB algorithms, which is based on numerically solving a large system of coupled differential equations, the proposed method is much simpler and more accurate.
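A toy version of verification-based recovery may clarify what the analysis tracks. The sketch below implements two classic verification rules on a 0/1 sensing matrix: the zero-check rule (a zero residual verifies all unresolved neighbours as zero, which holds almost surely for continuous-valued signals) and the degree-one-check rule. It is a simplified assumption-based illustration, not the exact NB-VB algorithms analyzed in the paper.

```python
# Toy verification-based (VB) recovery sketch (simplified assumptions):
# A is a sparse 0/1 sensing matrix given as a list of rows, each row a
# list of column indices; y holds the measurement values.

def vb_recover(A, y, max_iters=50):
    """Iteratively verify signal elements; returns {column: value}."""
    verified = {}
    for _ in range(max_iters):
        progress = False
        for row, meas in zip(A, y):
            residual = meas - sum(verified.get(c, 0) for c in row)
            unknown = [c for c in row if c not in verified]
            if not unknown:
                continue
            if residual == 0:            # zero check: neighbours are zero
                for c in unknown:
                    verified[c] = 0
                progress = True
            elif len(unknown) == 1:      # degree-one check: solve directly
                verified[unknown[0]] = residual
                progress = True
        if not progress:                 # no rule fired: a fixed point
            break
    return verified
```

The fraction of signal elements left unverified at each sweep of this loop is exactly the quantity the density-evolution-like analysis predicts as the signal dimension grows.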
