Abstract:
High school physics teachers often turn to various resources, including the Internet, as they search for engaging physics activities for their students. An important question, especially for new physics teachers, is whether these activities are safe: have safety issues been adequately addressed within them? The purpose of this article is to highlight potential safety issues in high school physics projects and to provide a checklist that physics teachers can use as they evaluate activities. If an activity is deemed to pose safety issues, physics teachers are encouraged to modify it to make it safe. If the activity cannot be modified for safety purposes, then it is recommended that the teacher search for a different activity. The intention of this article is to provide high school physics teachers with safety information that can be used in preparing safe, inquiry-based, hands-on, engaging, and topic-appropriate physics activities for their students.

Abstract:
In this work, we consider a discrete-time stationary Rayleigh flat-fading channel with channel state information unknown at the transmitter and the receiver. The law of the channel is presumed to be known to the receiver. In addition, we assume the power spectral density (PSD) of the fading process to be compactly supported. For i.i.d. zero-mean proper Gaussian input distributions, we investigate the achievable rate. One of the main contributions is the derivation of two new upper bounds on the achievable rate with zero-mean proper Gaussian input symbols. The first holds only for the special case of a rectangular PSD and depends on the SNR and the spread of the PSD. Together with a lower bound on the achievable rate, which is achievable with i.i.d. zero-mean proper Gaussian input symbols, this yields a pair of bounds that is tight in the sense that their difference is bounded. Furthermore, we show that the high-SNR slope is characterized by a pre-log of $1-2f_d$, where $f_d$ is the normalized maximum Doppler frequency. This pre-log is equal to the high-SNR pre-log of the peak-power-constrained capacity. Moreover, we derive an alternative upper bound on the achievable rate with i.i.d. input symbols, which is based on the one-step channel prediction error variance. The novelty lies in the fact that this bound is not restricted to peak-power-constrained input symbols like known bounds, e.g., in [1]. Therefore, the derived upper bound can also be used to evaluate the achievable rate with i.i.d. proper Gaussian input symbols. We compare the derived bounds on the achievable rate with i.i.d. zero-mean proper Gaussian input symbols with bounds on the peak-power-constrained capacity given in [1-3]. Finally, we compare the achievable rate with i.i.d. zero-mean proper Gaussian input symbols with the achievable rate using synchronized detection in combination with a solely pilot-based channel estimation.
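The high-SNR slope statement can be illustrated with a minimal sketch. The function name and the sample values of SNR and $f_d$ are our own choices; the expression $R \approx (1 - 2f_d)\log_2(\mathrm{SNR})$ only captures the high-SNR slope stated in the abstract, not the exact achievable rate.

```python
# Sketch: high-SNR rate slope for a Rayleigh flat-fading channel with a
# rectangular PSD of normalized maximum Doppler frequency f_d. The pre-log
# 1 - 2*f_d is the slope of the rate (in bits) versus log2(SNR).
import math

def high_snr_rate_approx(snr_db: float, f_d: float) -> float:
    """Approximate rate (bit/channel use) at high SNR:
    R ~ (1 - 2*f_d) * log2(SNR), valid for 0 <= f_d < 0.5."""
    assert 0.0 <= f_d < 0.5
    snr = 10.0 ** (snr_db / 10.0)
    return (1.0 - 2.0 * f_d) * math.log2(snr)

# A slowly fading channel (small f_d) retains almost the full pre-log of 1;
# fast fading (f_d near 0.25 here) halves the slope.
print(high_snr_rate_approx(30.0, 0.01))
print(high_snr_rate_approx(30.0, 0.25))
```

Note that the pre-log vanishes as $f_d \to 0.5$, i.e., when the fading is no longer underspread.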

Abstract:
Future wireless communication systems require efficient and flexible baseband receivers. Meaningful efficiency metrics are key for design space exploration to quantify the algorithmic and the implementation complexity of a receiver. Most established efficiency metrics are based on counting operations, thus neglecting important issues like data transfer and storage complexity. In this paper we introduce energy and area efficiency metrics which resolve the aforementioned disadvantages: decoded information bits per unit energy and throughput per unit area. These metrics are assessed on various implementations of turbo decoders, LDPC decoders, and convolutional decoders. New exploration methodologies are presented which permit an appropriate benchmarking of implementation efficiency, communications performance, and flexibility trade-offs. These methodologies are based on efficiency trajectories rather than the single snapshot metrics used in state-of-the-art approaches.
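The two proposed metrics reduce to simple ratios of measured quantities. A minimal sketch, with hypothetical decoder figures chosen only to illustrate the units involved (they are not taken from the paper):

```python
# Sketch of the two proposed efficiency metrics; the decoder numbers below
# are hypothetical and serve only to illustrate the units.

def energy_efficiency(throughput_bit_s: float, power_w: float) -> float:
    """Decoded information bits per unit energy [bit/J] = throughput / power."""
    return throughput_bit_s / power_w

def area_efficiency(throughput_bit_s: float, area_mm2: float) -> float:
    """Throughput per unit area [bit/s/mm^2]."""
    return throughput_bit_s / area_mm2

# Hypothetical decoder: 150 Mbit/s throughput, 500 mW power, 0.75 mm^2 area.
print(energy_efficiency(150e6, 0.5))   # 300000000.0 bit/J
print(area_efficiency(150e6, 0.75))    # 200000000.0 bit/s/mm^2
```

Plotting these two ratios for several design points of one decoder family yields the efficiency trajectories used for benchmarking.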

Abstract:
In many typical mobile communication receivers, the channel is estimated based on pilot symbols to allow for coherent detection and decoding in a separate processing step. Currently, much effort is devoted to receivers that break up this separation, e.g., by enhancing the channel estimation based on reliability information on the data symbols. In the present work, we evaluate the possible gain of a joint processing of data and pilot symbols over a separate processing in the context of stationary Rayleigh flat-fading channels. To this end, we discuss the nature of the possible gain of a joint processing of pilot and data symbols. We show that the additional information that can be gained by a joint processing is captured in the temporal correlation of the channel estimation error of the solely pilot-based channel estimation, which is not retrieved by the channel decoder in the case of separate processing. In addition, we derive a new lower bound on the achievable rate for joint processing of pilot and data symbols.

Abstract:
This article is an introduction to the FRIDGE design environment, which supports the design and DSP implementation of fixed-point digital signal processing systems. We present the tool-supported transformation of signal processing algorithms coded in floating-point ANSI C to a fixed-point representation in SystemC. We introduce a novel approach to control and data flow analysis, which is necessary for the transformation. The design environment enables fast bit-true simulation by mapping the fixed-point algorithm to integer data types of the host machine; a speedup by a factor of 20 to 400 can be achieved compared to C++-library-based bit-true simulation. FRIDGE also provides a direct link to DSP implementation through processor-specific C code generation and advanced code optimization.
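The core idea behind fast bit-true simulation, emulating fixed-point arithmetic with native integer types, can be sketched as follows. The function names, word lengths, and saturation behavior here are illustrative assumptions, not FRIDGE's actual API:

```python
# Sketch: bit-true fixed-point emulation on host integer types, the idea
# behind fast fixed-point simulation (names and word lengths are illustrative).

def to_fixed(x: float, frac_bits: int, word_bits: int) -> int:
    """Quantize x to a signed fixed-point integer with saturation."""
    scaled = round(x * (1 << frac_bits))
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return max(lo, min(hi, scaled))

def to_float(q: int, frac_bits: int) -> float:
    """Convert the fixed-point integer back to a real value."""
    return q / (1 << frac_bits)

# 16-bit word with 12 fractional bits: pi is represented with ~2^-12 error.
q = to_fixed(3.14159265, 12, 16)
print(q, to_float(q, 12))
```

Because every fixed-point operation becomes a plain integer operation plus shifts and clipping, the simulation runs at near-native speed instead of going through an operator-overloading class library.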

Abstract:
We show that the steady-state entropy production rate of a stochastic process is inversely proportional to the minimal time needed to decide on the direction of the arrow of time. Here we apply Wald's sequential probability ratio test to optimally decide on the direction of time's arrow in stationary Markov processes. Furthermore, the steady-state entropy production rate can be estimated using mean first-passage times of suitable physical variables. We derive a first-passage time fluctuation theorem which implies that the decision time distributions for correct and wrong decisions are equal. Our results are illustrated by numerical simulations of two simple examples of nonequilibrium processes.
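A minimal sketch of the decision procedure, using a biased random walk as the nonequilibrium Markov process; the parameters and thresholds here are illustrative choices, not the examples from the paper:

```python
# Sketch: Wald's sequential probability ratio test (SPRT) deciding the
# direction of time's arrow for a biased random walk. Forward dynamics step
# up with probability p; under time reversal the step probabilities swap.
import math
import random

def sprt_decision(p: float, alpha: float, rng: random.Random):
    """Accumulate the log-likelihood ratio of forward vs. time-reversed
    dynamics until it leaves (-A, A), with A = ln((1 - alpha) / alpha).
    Returns (decision, number_of_observed_steps)."""
    A = math.log((1.0 - alpha) / alpha)
    step_llr = math.log(p / (1.0 - p))  # contribution of one up-step
    llr, n = 0.0, 0
    while -A < llr < A:
        # Observe one step of the (forward) process.
        llr += step_llr if rng.random() < p else -step_llr
        n += 1
    return ("forward" if llr >= A else "backward"), n

rng = random.Random(0)
trials = [sprt_decision(0.7, 0.01, rng) for _ in range(200)]
accuracy = sum(1 for d, _ in trials if d == "forward") / len(trials)
mean_time = sum(n for _, n in trials) / len(trials)
print(accuracy, mean_time)
```

For this walk the entropy production per step is $(2p-1)\ln[p/(1-p)]$, which is exactly the mean drift of the accumulated log-likelihood ratio; a larger entropy production rate therefore shortens the mean decision time, in line with the inverse proportionality stated above.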

Abstract:
We provide an information-theoretic analysis of Wald's sequential probability ratio test. The optimality of the Wald test, in the sense that it yields the minimum average decision time for a binary decision problem, is reflected by the evolution of the information densities over time. Information densities are considered because they account for the fact that the termination time of the Wald test depends on the actual realization of the observation sequence. Based on information densities, we show that if the test terminates at time instant $k$, the probability of deciding for hypothesis $\mathcal{H}_1$ (or the counter-hypothesis $\mathcal{H}_0$) is independent of $k$. We use this characteristic to evaluate the evolution of the mutual information between the binary random variable and the decision variable of the Wald test. Our results establish a connection between minimum mean decision times and the corresponding information processing.

Abstract:
We analyze the capacity of a continuous-time, time-selective, Rayleigh block-fading channel in the high signal-to-noise ratio (SNR) regime. The fading process is assumed to be stationary within each block and to change independently from block to block; furthermore, its realizations are not known a priori to the transmitter and the receiver (noncoherent setting). A common approach to analyzing the capacity of this channel is to assume that the receiver performs matched filtering followed by sampling at symbol rate (symbol matched filtering). This yields a discrete-time channel in which each transmitted symbol corresponds to one output sample. Liang & Veeravalli (2004) showed that the capacity of this discrete-time channel grows logarithmically with the SNR, with a capacity pre-log equal to $1-{Q}/{N}$. Here, $N$ is the number of symbols transmitted within one fading block, and $Q$ is the rank of the covariance matrix of the discrete-time channel gains within each fading block. In this paper, we show that symbol matched filtering is not a capacity-achieving strategy for the underlying continuous-time channel. Specifically, we analyze the capacity pre-log of the discrete-time channel obtained by oversampling the continuous-time channel output, i.e., by sampling it faster than at symbol rate. We prove that oversampling by a factor of two yields a capacity pre-log that is at least as large as $1-1/N$. Since the capacity pre-log corresponding to symbol-rate sampling is $1-Q/N$, our result indeed implies that symbol matched filtering is not capacity achieving at high SNR.
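The gap between the two pre-logs can be made concrete with a small sketch; the values of $N$ and $Q$ below are illustrative, not taken from the paper:

```python
# Sketch: capacity pre-logs for one fading block of N symbols.
# Symbol-rate sampling gives 1 - Q/N, where Q is the rank of the covariance
# matrix of the channel gains; twofold oversampling achieves at least 1 - 1/N.

def prelog_symbol_rate(N: int, Q: int) -> float:
    """Pre-log under symbol matched filtering (Liang & Veeravalli, 2004)."""
    return 1.0 - Q / N

def prelog_oversampled_lower_bound(N: int) -> float:
    """Lower bound on the pre-log with twofold oversampling."""
    return 1.0 - 1.0 / N

N, Q = 10, 3  # illustrative block length and gain-covariance rank
print(prelog_symbol_rate(N, Q))
print(prelog_oversampled_lower_bound(N))
```

Whenever $Q > 1$, the oversampling bound $1-1/N$ strictly exceeds $1-Q/N$, which is the sense in which symbol matched filtering loses capacity at high SNR.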

Abstract:
Multiple-input multiple-output (MIMO) wireless transmission imposes huge challenges on the design of efficient hardware architectures for iterative receivers. A major challenge is soft-input soft-output (SISO) MIMO demapping, often approached by sphere decoding (SD). In this paper, we introduce the, to the best of our knowledge, first VLSI architecture for SISO SD applying a single tree-search approach. Compared with a soft-output-only base architecture similar to the one proposed by Studer et al. in IEEE J-SAC 2008, the architectural modifications for soft input still allow a one-node-per-cycle execution. For a 4x4 16-QAM system, the area increases by 57% while the operating frequency degrades by only 34%.

Abstract:
Future wireless communication systems should be flexible, to support different waveforms (WFs), and cognitive, to sense the environment and tune themselves. This has led to tremendous interest in software-defined radios (SDRs). Constraints like throughput, latency, and low energy demand high implementation efficiency. The tradeoff of a highly efficient implementation is the increased effort of porting to a new hardware (HW) platform. In this paper, we propose a novel concept for WF development, the Nucleus concept, which exploits the common structure in various wireless signal processing algorithms and provides a way for efficient and portable implementation. Tool-assisted WF mapping and exploration is done efficiently by propagating the implementation and interface properties of Nuclei. The Nucleus concept aims at providing software flexibility with high-level programmability while limiting HW flexibility to maximize area and energy efficiency.