Abstract:
This paper presents a 6-bit, 11-MS/s time-interleaved pipeline A/D converter design. The specification process, from block level down to elementary circuits, is covered step by step to draw out a design methodology. Both power consumption and mismatch between the parallel channel elements are reduced through techniques such as double and bottom-plate sampling, fully differential circuits, RSD digital correction, and geometric-programming (GP) optimization of the elementary analog circuits (OTAs and comparators). Prelayout simulations of the complete ADC are presented to characterize the designed converter, which consumes 12 mW while sampling a 500 kHz input signal. Moreover, the block inside the ADC with the most stringent requirements on power, speed, and precision was sent for fabrication in a 0.35 μm AMS CMOS technology, and some postlayout results are shown.

1. Introduction

The ADC design for a multistandard receiver system can be developed in different ways, since both the standards involved and the selected architecture face their own drawbacks and implementation issues. A multistandard receiver is not merely a combination of isolated systems, each operating under one of the standards, but a system capable of working efficiently under those dynamic conditions. To that end, desirable capabilities include reconfigurable computing and the possibility of sharing and reusing as many blocks as possible between the operation modes. The time-interleaved pipeline architecture is frequently used to satisfy such requirements in high-speed, moderate-resolution applications [1–3]. Its main advantage is its flexibility: different numbers of time-interleaved branches and pipeline stages can be enabled or disabled to configure variable resolution and sampling frequency, leading to a reconfigurable system.
Figure 1 shows a 2-channel, 4-stage version of the architecture, which could provide 12 bits @ 2.75 MS/s and 6 bits @ 11 MS/s for a GSM/Bluetooth receiver. There are, however, some drawbacks related to the parallelism of time-interleaved pipeline ADCs, such as channel offset, gain, and timing mismatches. A front-end sample-and-hold (S&H) circuit is the most straightforward way to avoid timing skew between channels, as shown in Figure 1 [3]. After this S&H block, which operates at the full sample rate of the converter, the input signals are no longer continuous. Thus, the exact sampling moments of the first pipeline stages over these new, ideally constant input signals are no longer critical. Additionally, if double sampling techniques are used,
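The interleaving principle described above can be sketched numerically. This is a toy behavioral model, not the paper's circuit: two sub-ADCs, each clocked at half the 11 MS/s rate, digitize alternate samples of a 500 kHz input, and a residual offset mismatch between the channels (the 20 mV value is an arbitrary illustration) is the kind of channel imperfection the design techniques above aim to suppress.

```python
import numpy as np

fs, n = 11e6, 1024                       # full sample rate and record length
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * 500e3 * t)  # 500 kHz input, as in the paper

def adc(v, bits=6, offset=0.0):
    """Ideal mid-tread quantizer (toy model) with an optional channel offset."""
    lsb = 2.0 / (1 << bits)              # assume a +/-1 V full-scale range
    return np.clip(np.round((v + offset) / lsb) * lsb, -1.0, 1.0 - lsb)

# Two channels, each running at fs/2, digitize alternate samples.
y = np.empty(n)
y[0::2] = adc(x[0::2])                   # channel A
y[1::2] = adc(x[1::2], offset=0.02)      # channel B: hypothetical 20 mV offset

# The interleaved stream tracks the input to within one quantization step plus
# the mismatch; such offset mismatch produces a spur at fs/2 in the spectrum.
err = np.max(np.abs(y - x))
```

With zero offsets the two half-rate channels reproduce exactly what a single full-rate quantizer would output, which is why the front-end S&H (removing timing skew) and offset matching are the critical points of the architecture.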

Abstract:
We propose a novel cross-layer scheme to reduce the power consumption of ADCs in OFDM systems. The ADCs in a receiver can consume up to 50% of the total baseband energy. Our scheme is based on resolution-adaptive ADCs and Fountain codes. In a wireless frequency-selective channel, some subcarriers have good channel conditions while others are attenuated. The key idea of the proposed system is that the dynamic range of the ADCs can be reduced by discarding subcarriers that are attenuated by the channel; correspondingly, the power consumption of the ADCs decreases. In our approach, each subcarrier carries a Fountain-encoded packet. To protect Fountain-encoded packets against bit errors, an LDPC code is used. The receiver only decodes the subcarriers (i.e., Fountain-encoded packets) with the highest SNR; the others are discarded. For that reason, an LDPC code with a relatively high code rate can be used. The new error-correction layer does not require perfect channel knowledge, so it can be used in a realistic system where the channel is estimated. With our approach, more than 70% of the energy consumption in the ADCs can be saved compared with a conventional IEEE 802.11a WLAN system under the same channel conditions and throughput. In addition, it requires 7.5 dB less SNR than the 802.11a system. To reduce the overhead of Fountain codes, we apply message passing and Gaussian elimination in the decoder. In this way, the overhead is 3% for a small block size (i.e., 500 packets). Using both methods results in an efficient system with low delay.
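The rateless-decoding step the abstract alludes to, i.e., collect enough packets from the kept subcarriers and then recover the message by Gaussian elimination over GF(2), can be sketched as follows. The random-linear encoder, packet sizes, and seed are illustrative assumptions, not the paper's exact Fountain construction or its message-passing stage.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 16                                   # source packets (the paper uses ~500)
src = rng.integers(0, 2, size=(k, 8), dtype=np.uint8)  # toy 8-bit packets

def fountain_packet():
    """Rateless encoder: XOR of a random nonzero subset of source packets."""
    while True:
        g = rng.integers(0, 2, size=k, dtype=np.uint8)
        if g.any():
            return g, (g @ src) % 2

def gf2_solve(A, Y):
    """Gauss-Jordan elimination over GF(2); returns None while rank < k."""
    A, Y = A.copy(), Y.copy()
    row = 0
    for col in range(k):
        piv = next((r for r in range(row, len(A)) if A[r, col]), None)
        if piv is None:
            return None                  # system not yet solvable
        A[[row, piv]], Y[[row, piv]] = A[[piv, row]], Y[[piv, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                Y[r] ^= Y[row]
        row += 1
    return Y[:k]

# Receiver: keep collecting packets (in the real system, only those carried on
# high-SNR subcarriers) until the GF(2) system becomes solvable.
rows, ys = [], []
decoded = None
while decoded is None:
    g, y = fountain_packet()
    rows.append(g)
    ys.append(y)
    if len(rows) >= k:
        decoded = gf2_solve(np.array(rows), np.array(ys))

overhead = len(rows) / k - 1             # fraction of extra packets needed
```

The rateless property is what makes subcarrier discarding safe: any sufficiently large subset of packets suffices, so the receiver is free to drop the attenuated ones. The relative overhead shrinks with block size, which is why the paper reports only 3% at 500 packets.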

Abstract:
Quantum error-correction routines are developed for continuous quantum variables such as position and momentum. The result of such analog quantum error correction is the construction of composite continuous quantum variables that are largely immune to the effects of noise and decoherence.

Abstract:
We describe new implementations of quantum error correction that are continuous in time, and thus described by continuous dynamical maps. We evaluate the performance of such schemes using numerical simulations, and comment on the effectiveness and applicability of continuous error correction for quantum computing.

Abstract:
This paper is an expanded and more detailed version of our recent work in which the Operator Quantum Error Correction formalism was introduced. This is a new scheme for the error correction of quantum operations that incorporates the known techniques (the standard error correction model, the method of decoherence-free subspaces, and the noiseless subsystem method) as special cases, and relies on a generalized mathematical framework for noiseless subsystems that applies to arbitrary quantum operations. We also discuss a number of examples and introduce the notion of "unitarily noiseless subsystems".

Abstract:
Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It has been well known since the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation has since grown into a much larger field, and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault tolerance, not as a detailed guide, but rather as a basic introduction. Development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed. Rather than introducing these concepts from a rigorous mathematical and computer-science framework, we instead examine error correction and fault tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.

Abstract:
The purpose of this little survey is to give a simple description of the main approaches to quantum error correction and quantum fault tolerance. Our goal is to convey the necessary intuitions for both the problems and their solutions in this area. After characterising quantum errors, we present several error-correction schemes and outline the elements of a full-fledged fault-tolerant computation, which works error-free even though all of its components can be faulty. We also mention alternative approaches to error correction, so-called error-avoiding or decoherence-free schemes. Technical details and generalisations are kept to a minimum.

Abstract:
The errors that arise in a quantum channel can be corrected perfectly if and only if the channel does not decrease the coherent information of the input state. We show that, if the loss of coherent information is small, then approximate error correction is possible.

Abstract:
Quantum error correction is required to compensate for the fragility of the state of a quantum computer. We report the first experimental implementations of quantum error correction and confirm the expected state stabilization. In NMR computing, however, a net improvement in the signal-to-noise ratio would require very high polarization. The experiment implemented the 3-bit code for phase errors in liquid-state NMR.
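The 3-bit code for phase errors mentioned above can be illustrated with a small state-vector simulation. This is a textbook sketch of the code itself, not of the NMR experiment: a logical qubit is encoded across |+++> and |--->, a Z error on any one qubit flips the X0X1/X1X2 stabilizer syndromes, and applying Z to the flagged qubit restores the state.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def on(gate, q, n=3):
    """Lift a single-qubit gate to act on qubit q of an n-qubit register."""
    return reduce(np.kron, [gate if i == q else I2 for i in range(n)])

# Encode a|0> + b|1> as a|+++> + b|---> (the 3-qubit phase-flip code).
a, b = 0.6, 0.8
plus = np.array([1., 1.]) / np.sqrt(2)
minus = np.array([1., -1.]) / np.sqrt(2)
logical = a * reduce(np.kron, [plus] * 3) + b * reduce(np.kron, [minus] * 3)

noisy = on(Z, 1) @ logical               # phase error on the middle qubit

# Stabilizer syndromes: expectation values of X0X1 and X1X2 (each +/-1).
s1 = int(round(noisy @ on(X, 0) @ on(X, 1) @ noisy))
s2 = int(round(noisy @ on(X, 1) @ on(X, 2) @ noisy))
flagged = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}[(s1, s2)]

# Re-applying Z to the flagged qubit undoes the error.
corrected = on(Z, flagged) @ noisy if flagged is not None else noisy
```

Because the syndrome measurement reveals only which qubit was hit, never the amplitudes a and b, the correction stabilizes the encoded state without disturbing the logical information.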