Abstract:
A computational model of a self-structuring neuronal net is presented in which repetitively applied pattern sets induce the formation of cortical columns and microcircuits that decode distinct patterns after a learning phase. In a case study, it is demonstrated how specific neurons in a feature classifier layer become orientation selective when they receive bar patterns of different slopes from an input layer. The input layer is mapped and intertwined by self-evolving neuronal microcircuits to the feature classifier layer. In this topical overview, several models are discussed which indicate that the net formation converges in its functionality to a mathematical transform that maps the input pattern space to a feature-representing output space. The self-learning of the mathematical transform is discussed and its implications are interpreted. Model assumptions are deduced which serve as a guide for applying model-derived repetitive stimulus pattern sets to in vitro cultures of neuron ensembles in order to condition them to learn and execute a mathematical transform.

1. Introduction

It can be said that neuronal networks, whether artificial, in vivo, or in vitro, are capable of information processing if they are able to learn and discriminate between pattern sets [1–3]. The central focus in modeling the information processing of such networks is on the specific neuronal architecture that is trained, because the architecture of the network determines which pattern discriminations can be performed between pattern sets. For example, a specific architecture may provide orientation selectivity and thus be capable of discriminating between bars of different slopes. Bioinspired concepts will be introduced in the first section of this review, with emphasis on the in vivo experiments on orientation selectivity by Hubel [4]. Furthermore, the hypothesis of Blasdel will be revisited in the section on the Hough transform in the neurobiological context.
The hypothesis states that the firing of these orientation-selective cells can be explained by mapping the input stimuli back to the firing cells using a mathematical Hough transform [5]. To strengthen the plausibility of Blasdel's hypothesis, the motion-detection experiments of Okamoto et al. are also revisited, which investigated the hypothesis under the assumption that the mathematical Hough transform is functionally used and represented as microcircuitry for bar detection in the medial temporal lobe (MTL) of the brain [6]. The base principle of the mathematical Hough transform will be
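The bar-detection role of the Hough transform discussed above can be illustrated with a minimal sketch: every active pixel votes for all lines (parameterized by the normal angle theta and distance rho) passing through it, and the accumulator peak identifies the bar's orientation. The function name and the toy image are illustrative, not taken from the cited works.

```python
import numpy as np

def hough_orientation(image, n_theta=180):
    """Accumulate votes in (theta, rho) space; the peak cell identifies
    the dominant line, and its theta gives the bar's orientation
    (theta is the angle of the line's normal in this parameterization)."""
    h, w = image.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag, n_theta), dtype=int)  # rho index offset by +diag
    ys, xs = np.nonzero(image)
    for j, theta in enumerate(thetas):
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        for r in rhos:
            acc[r, j] += 1
    _, peak_theta = np.unravel_index(np.argmax(acc), acc.shape)
    return float(np.degrees(thetas[peak_theta]))

# A 45-degree diagonal bar on a 32x32 grid peaks near theta = 135 degrees.
img = np.zeros((32, 32))
for i in range(32):
    img[i, i] = 1.0
```

In a neuronal reading, each accumulator cell plays the role of an orientation-selective unit that sums the "votes" of its input pixels.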

Abstract:
Global mobile robot localization is the problem of determining a robot's pose in an environment, using sensor data, when its starting position is unknown. A family of probabilistic algorithms known as Monte Carlo Localization (MCL) is currently among the most popular methods for solving this problem. MCL algorithms represent a robot's belief by a set of weighted samples, which approximate the posterior probability of the robot's pose via a Bayesian formulation of the localization problem. This article presents an extension to the MCL algorithm that addresses its shortcomings when localizing in highly symmetrical environments, a situation in which ordinary MCL is often unable to correctly track equally probable poses of the robot. The problem arises from the fact that sample sets in MCL often become impoverished when samples are generated according to their posterior likelihood. Our approach incorporates the idea of clusters of samples and modifies the proposal distribution to account for the probability mass of those clusters. Experimental results show that this extension to the MCL algorithm successfully localizes in symmetric environments where ordinary MCL often fails.
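The cluster idea described in the abstract can be sketched for a 1-D circular corridor: after the usual motion and measurement updates, samples are grouped into coarse spatial clusters and resampled in proportion to each cluster's probability mass, so a symmetric but momentarily less-likely hypothesis is not starved of samples. The function, the binning scheme, and the noise levels are illustrative, not the authors' exact algorithm.

```python
import random

def mcl_step(particles, motion, measurement, likelihood, n_clusters=2):
    """One MCL update with cluster-aware resampling on a circular
    corridor of length 10. particles: list of (position, weight)."""
    # Motion update: shift every sample by the commanded motion plus noise.
    moved = [(x + motion + random.gauss(0.0, 0.1)) % 10.0 for x, _ in particles]
    # Measurement update: reweight each sample by the observation likelihood.
    weighted = [(x, likelihood(x, measurement)) for x in moved]
    # Group samples into coarse spatial clusters (illustrative fixed bins).
    clusters = {}
    for x, w in weighted:
        clusters.setdefault(int(x * n_clusters / 10.0), []).append((x, w))
    # Allocate the sample budget by cluster probability mass, then
    # resample within each cluster; every cluster keeps at least one sample.
    total = sum(w for _, w in weighted) or 1.0
    n = len(particles)
    new_particles = []
    for members in clusters.values():
        mass = sum(w for _, w in members)
        k = max(1, round(n * mass / total))
        xs = [x for x, _ in members]
        ws = [w for _, w in members]
        new_particles += [(x, 1.0) for x in random.choices(xs, ws, k=k)]
    return new_particles[:n]
```

Plain MCL would resample from the pooled weights directly; the per-cluster budget is what keeps both symmetric hypotheses represented.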

Abstract:
Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of single-user memoryless channels. Recently, Polyanskiy et al. studied the benefit of variable-length feedback with termination (VLFT) codes in the non-asymptotic regime. In that work, achievability is based on an infinite-length random code and decoding is attempted at every symbol. The coding rate backoff from capacity due to channel dispersion is greatly reduced with feedback, allowing capacity to be approached with surprisingly small expected latency. This paper is mainly concerned with VLFT codes based on finite-length codes and decoding attempts only at certain specified decoding times. The penalties of using a finite block-length $N$ and a sequence of specified decoding times are studied. This paper shows that properly scaling $N$ with the expected latency can achieve the same performance up to constant terms as with $N = \infty$. The penalty introduced by periodic decoding times is a linear term in the interval between decoding times, and hence the performance approaches capacity as the expected latency grows if the interval between decoding times grows sub-linearly with the expected latency.

Abstract:
The read channel in flash memory systems degrades over time because the Fowler-Nordheim tunneling used to apply charge to the floating gate eventually compromises the integrity of the cell through tunnel oxide degradation. While degradation is commonly measured in the number of program/erase cycles experienced by a cell, it is proportional to the number of electrons forced into the floating gate and later released by the erasing process. By managing the amount of charge written to the floating gate so as to maintain a constant read-channel mutual information, flash lifetime can be extended. This paper proposes an overall system approach based on information theory to extend the lifetime of a flash memory device. Using the instantaneous storage capacity of a noisy flash memory channel, our approach dynamically allocates the read voltage of the flash cells as they gradually wear out over time. A practical estimation of the instantaneous capacity is also proposed, based on soft information obtained via multiple reads of the memory cells.
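The constant-mutual-information idea can be sketched with a toy two-level cell model: read noise grows with wear, so for a given noise level we estimate the mutual information of the read channel as a function of the charge-level separation and write with the smallest separation that still meets a target. The Gaussian channel model, the function names, and the candidate separations are illustrative stand-ins for the paper's channel model and estimator.

```python
import math

def two_level_mi(delta, sigma, grid=2001, span=8.0):
    """Mutual information (bits/cell) of a toy two-level read channel:
    equiprobable stored levels at +/- delta/2 with Gaussian read noise
    sigma, computed by numerical integration of
    I = sum_x 0.5 * int p(y|x) log2( p(y|x) / p(y) ) dy."""
    lo = -delta / 2.0 - span * sigma
    hi = delta / 2.0 + span * sigma
    step = (hi - lo) / (grid - 1)
    norm = 1.0 / (math.sqrt(2.0 * math.pi) * sigma)
    mi = 0.0
    for i in range(grid):
        y = lo + i * step
        p0 = norm * math.exp(-((y + delta / 2.0) ** 2) / (2.0 * sigma ** 2))
        p1 = norm * math.exp(-((y - delta / 2.0) ** 2) / (2.0 * sigma ** 2))
        s = p0 + p1
        for p in (p0, p1):
            if p > 0.0:
                mi += 0.5 * p * math.log2(2.0 * p / s) * step
    return mi

def min_separation_for_target(sigma, target_bits, separations):
    """Choose the smallest level separation whose estimated mutual
    information still meets the target, so a worn cell (larger sigma)
    is written with no more charge than necessary."""
    for d in sorted(separations):
        if two_level_mi(d, sigma) >= target_bits:
            return d
    return max(separations)
```

As sigma grows with wear, the selected separation grows only as far as needed to hold the mutual information constant, which is the lifetime-extending trade the abstract describes.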

Abstract:
Recent work by Polyanskiy et al. and Chen et al. has excited new interest in using feedback to approach capacity with low latency. Polyanskiy showed that feedback identifying the first symbol at which decoding is successful allows capacity to be approached with surprisingly low latency. This paper uses Chen's rate-compatible sphere-packing (RCSP) analysis to study what happens when symbols must be transmitted in packets, as with a traditional hybrid ARQ system, and limited to relatively few (six or fewer) incremental transmissions. Numerical optimizations find the series of progressively growing cumulative block lengths that enable RCSP to approach capacity with the minimum possible latency. RCSP analysis shows that five incremental transmissions are sufficient to achieve 92% of capacity with an average block length of fewer than 101 symbols on the AWGN channel with SNR of 2.0 dB. The RCSP analysis provides a decoding error trajectory that specifies the decoding error rate for each cumulative block length. Though RCSP is an idealization, an example tail-biting convolutional code matches the RCSP decoding error trajectory and achieves 91% of capacity with an average block length of 102 symbols on the AWGN channel with SNR of 2.0 dB. We also show how RCSP analysis can be used in cases where packets have deadlines associated with them (leading to an outage probability).

Abstract:
This paper presents a reliability-based decoding scheme for variable-length coding with feedback and demonstrates via simulation that it can achieve higher rates than Polyanskiy et al.'s random coding lower bound for variable-length feedback (VLF) coding on both the BSC and AWGN channel. The proposed scheme uses the reliability output Viterbi algorithm (ROVA) to compute the word error probability after each decoding attempt, which is compared against a target error threshold and used as a stopping criterion to terminate transmission. The only feedback required is a single bit for each decoding attempt, informing the transmitter whether the ROVA-computed word-error probability is sufficiently low. Furthermore, the ROVA determines whether transmission/decoding may be terminated without the need for a rate-reducing CRC.
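The stopping rule described above can be sketched as a protocol skeleton: after each received increment the receiver decodes, computes the posterior word-error probability (the ROVA's job in the paper; abstracted here behind a callable), and feeds back a single ACK/NACK bit. All names below are illustrative; the actual ROVA computation is not reproduced.

```python
def vlf_transmit(increments, decode_with_reliability, p_target):
    """Reliability-based stopping rule for variable-length feedback coding.
    increments: successive symbol blocks the transmitter would send.
    decode_with_reliability: callable mapping the received symbols so far
    to (decoded_word, word_error_probability) -- a stand-in for the ROVA.
    Returns the accepted word and the number of increments used."""
    received = []
    word = None
    for n_increments, symbols in enumerate(increments, start=1):
        received += list(symbols)             # accumulate channel output
        word, p_word_error = decode_with_reliability(received)
        if p_word_error <= p_target:          # the single feedback bit: ACK
            return word, n_increments         # terminate transmission
    return word, len(increments)              # increments exhausted
```

Because acceptance is decided by the computed word-error probability rather than by an error-detecting code, no rate-reducing CRC is needed, which is the point the abstract makes.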

Abstract:
This paper presents a variable-length decision-feedback scheme that uses tail-biting convolutional codes and the tail-biting Reliability-Output Viterbi Algorithm (ROVA). Compared with recent results in finite-blocklength information theory, simulation results for both the BSC and the AWGN channel show that the decision-feedback scheme using the ROVA can surpass the random-coding lower bound on throughput for feedback codes at average blocklengths of less than 100 symbols. This paper explores ROVA-based decision feedback both with decoding after every symbol and with decoding limited to a small number of increments. The performance of the reliability-based stopping rule with the ROVA is compared to retransmission decisions based on CRCs. For short blocklengths, where the latency overhead of the CRC bits is severe, the ROVA-based approach delivers superior rates.

Abstract:
We present extensions to Raghavan and Baum's reliability-output Viterbi algorithm (ROVA) to accommodate tail-biting convolutional codes. These tail-biting reliability-output algorithms compute the exact word-error probability of the decoded codeword after first calculating the posterior probability of the decoded tail-biting codeword's starting state. One approach employs a state-estimation algorithm that selects the maximum a posteriori state based on the posterior distribution of the starting states. Another approach is an approximation of the exact tail-biting ROVA that estimates the word-error probability. The computational complexity of each approach is compared in detail. The presented reliability-output algorithms apply to both feedforward and feedback tail-biting convolutional encoders. These algorithms are suitable for use in reliability-based retransmission schemes with short blocklengths, in which terminated convolutional codes would introduce rate loss.

Abstract:
Multiple sclerosis (MS) is a leading cause of disability in young adults. Susceptibility to MS is determined by environmental exposures acting on a background of genetic risk factors. A previous meta-analysis suggested that smoking is an important risk factor for MS, but many other studies have been published since then.

Abstract:
Multiple sclerosis (MS) appears to develop in genetically susceptible individuals as a result of environmental exposures. Epstein-Barr virus (EBV) infection is an almost universal finding among individuals with MS. Symptomatic EBV infection, as manifested by infectious mononucleosis (IM), was shown in a previous meta-analysis to be associated with the risk of MS; however, a number of much larger studies have since been published.