Abstract:
We investigate whether physical laws can impose a limit on the computational time and speed of a quantum computer built from elementary particles. We show that the product of the speed and the running time of a quantum computer is limited by the type of fundamental interactions present inside the system. This will help in deciding what type of interaction should be employed in building quantum computers to achieve a desired speed.

Abstract:
For any quantum algorithm operating on pure states we prove that the presence of multi-partite entanglement, with a number of parties that increases unboundedly with input size, is necessary if the quantum algorithm is to offer an exponential speed-up over classical computation. Furthermore we prove that the algorithm can be classically efficiently simulated to within a prescribed tolerance \eta even if a suitably small amount of global entanglement (depending on \eta) is present. We explicitly identify the occurrence of increasing multi-partite entanglement in Shor's algorithm. Our results do not apply to quantum algorithms operating on mixed states in general and we discuss the suggestion that an exponential computational speed-up might be possible with mixed states in the total absence of entanglement. Finally, despite the essential role of entanglement for pure state algorithms, we argue that it is nevertheless misleading to view entanglement as a key resource for quantum computational power.
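The role of unbounded entanglement claimed above can be illustrated with a minimal sketch (my own toy example, not from the paper): a pure-state circuit that never entangles its qubits remains a product state, so a classical simulation can track one 2-vector per qubit in O(n) memory instead of the O(2^n) needed for a general state.

```python
import numpy as np

# Illustrative sketch: with zero entanglement, per-qubit simulation suffices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard gate
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])   # T gate

n = 50                                 # a 2^50-entry state vector would be infeasible
qubits = [np.array([1.0 + 0j, 0.0]) for _ in range(n)]  # all qubits in |0>

# Apply single-qubit gates only; the global state stays a product state.
for q in range(n):
    qubits[q] = T @ (H @ qubits[q])

# Probability of measuring qubit 0 as 1 is just |<1|psi_0>|^2.
p1 = abs(qubits[0][1]) ** 2
print(round(p1, 6))  # 0.5
```

Any entangling gate would force the simulator to merge per-qubit vectors into a joint state, which is where the exponential cost (and the paper's necessity argument) enters.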

Abstract:
While it seems possible that quantum computers may allow for algorithms offering a computational speed-up over classical algorithms for some problems, the issue is poorly understood. We explore this computational speed-up by investigating the ability to de-quantise quantum algorithms into classical simulations of the algorithms which are as efficient in both time and space as the original quantum algorithms. The process of de-quantisation helps formulate conditions to determine if a quantum algorithm provides a real speed-up over classical algorithms. These conditions can be used to develop new quantum algorithms more effectively (by avoiding features that could allow the algorithm to be efficiently classically simulated), as well as providing the potential to create new classical algorithms (by using features which have proved valuable for quantum algorithms). Results on many different methods of de-quantisation are presented, as well as a general formal definition of de-quantisation. De-quantisations employing higher-dimensional classical bits, as well as those using matrix-simulations, put emphasis on entanglement in quantum algorithms; a key result is that any algorithm in which the entanglement is bounded is de-quantisable. These methods are contrasted with the stabiliser formalism de-quantisations due to the Gottesman-Knill Theorem, as well as those which take advantage of the topology of the circuit for a quantum algorithm. The benefits of the different methods are contrasted, and the importance of a range of techniques is emphasised. We further discuss some features of quantum algorithms which current de-quantisation methods do not cover.
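The Gottesman-Knill de-quantisation mentioned above can be sketched in a few lines (a toy illustration of my own, not code from the thesis): a circuit of Clifford gates is simulated classically by updating n stabilizer generators, each stored as (x, z) bit-vectors plus a sign, in polynomial time and space, even though the circuit can generate maximal entanglement.

```python
# Minimal stabilizer-tableau sketch of the Gottesman-Knill idea.
def zero_state(n):
    """Stabilizer generators of |0...0>: a Z on each qubit."""
    gens = []
    for i in range(n):
        g = {"sign": 1, "x": [0] * n, "z": [0] * n}
        g["z"][i] = 1
        gens.append(g)
    return gens

def h(gens, q):
    """Hadamard on qubit q: swaps X and Z (Y picks up a sign)."""
    for g in gens:
        if g["x"][q] and g["z"][q]:
            g["sign"] *= -1
        g["x"][q], g["z"][q] = g["z"][q], g["x"][q]

def cnot(gens, c, t):
    """CNOT with control c, target t (Aaronson-Gottesman update rule)."""
    for g in gens:
        if g["x"][c] and g["z"][t] and g["x"][t] == g["z"][c]:
            g["sign"] *= -1
        g["x"][t] ^= g["x"][c]
        g["z"][c] ^= g["z"][t]

def pauli_string(g):
    """Render a generator, e.g. '+XXX'; (x,z) = (0,0)I (1,0)X (0,1)Z (1,1)Y."""
    return ("+" if g["sign"] == 1 else "-") + "".join(
        "IXZY"[x + 2 * z] for x, z in zip(g["x"], g["z"]))

# GHZ circuit: H on qubit 0, then a CNOT chain -- maximally entangled,
# yet simulated here with 3 generators rather than an 8-entry state vector.
gens = zero_state(3)
h(gens, 0)
cnot(gens, 0, 1)
cnot(gens, 1, 2)
print([pauli_string(g) for g in gens])  # ['+XXX', '+ZZI', '+IZZ']
```

This is why bounded entanglement is only one route to de-quantisation: the stabiliser route is orthogonal, handling highly entangled states so long as the gate set stays Clifford.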

Abstract:
In modern transistor-based logic gates, the impact of noise on computation has become increasingly relevant, since the voltage-scaling strategy aimed at decreasing the dissipated power has increased the probability of error due to the reduced switching threshold voltages. In this paper we discuss the role of noise in a two-state model that mimics the dynamics of standard logic gates and show that the presence of noise sets a fundamental limit to the computing speed. An optimal idle time interval that minimizes the error probability is derived.

Abstract:
This article surveys quantum computational complexity, with a focus on three fundamental notions: polynomial-time quantum computations, the efficient verification of quantum proofs, and quantum interactive proof systems. Properties of quantum complexity classes based on these notions, such as BQP, QMA, and QIP, are presented. Other topics in quantum complexity, including quantum advice, space-bounded quantum computation, and bounded-depth quantum circuits, are also discussed.

Abstract:
Insofar as quantum computation is faster than classical, it appears to be irreversible. In all quantum algorithms found so far the speed-up depends on the extra-dynamical irreversible projection representing quantum measurement. Quantum measurement performs a computation that dynamical computation cannot accomplish as efficiently.

Abstract:
The computational problem of distinguishing two quantum channels is central to quantum computing. It is a generalization of the well-known satisfiability problem from classical to quantum computation. This problem is shown to be surprisingly hard: it is complete for the class QIP of problems that have quantum interactive proof systems, which implies that it is hard for the class PSPACE of problems solvable by a classical computation in polynomial space. Several restrictions of distinguishability are also shown to be hard. It is no easier when restricted to quantum computations of logarithmic depth, to mixed-unitary channels, to degradable channels, or to antidegradable channels. These hardness results are demonstrated by finding reductions between these classes of quantum channels. These techniques have applications outside the distinguishability problem, as the construction for mixed-unitary channels is used to prove that the additivity problem for the classical capacity of quantum channels can be equivalently restricted to the mixed unitary channels.
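To make the distinguishability problem concrete, here is a hedged numerical sketch (my own illustration, not a construction from the paper): the trace distance between the normalized Choi matrices of two channels is an efficiently computable lower bound on their diamond-norm distinguishability, whereas deciding distinguishability in general is QIP-complete as stated above. The example compares the identity channel with a bit-flip channel of flip probability p.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli X

def choi(channel, d=2):
    """Normalized Choi matrix J = (1/d) sum_{ij} |i><j| (x) channel(|i><j|)."""
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d), dtype=complex)
            E[i, j] = 1
            J += np.kron(E, channel(E))
    return J / d

def identity_channel(rho):
    return rho

def bit_flip(p):
    return lambda rho: (1 - p) * rho + p * (X @ rho @ X)

p = 0.3
diff = choi(identity_channel) - choi(bit_flip(p))
# Trace distance: half the sum of absolute eigenvalues of the Hermitian difference.
trace_distance = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(diff)))
print(round(trace_distance, 6))  # equals p for this pair: 0.3
```

The gap between this easy single-copy bound and the QIP-complete general problem is precisely what the hardness results above quantify.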

Abstract:
We introduce the notion of quantum computational webs: These are quantum states universal for measurement-based computation which can be built up from a collection of simple primitives. The primitive elements - reminiscent of building blocks in a construction kit - are (i) states on a one-dimensional chain of systems ("computational quantum wires") with the power to process one logical qubit and (ii) suitable couplings which connect the wires to a computationally universal "web". All elements are preparable by nearest-neighbor interactions in a single pass - a type of operation well-suited for a number of physical architectures. We provide a complete classification of qubit wires. This is the first instance in which a physically well-motivated class of universal resources can be fully understood. Finally, we sketch possible realizations in superlattices, and explore the power of coupling mechanisms based on Ising or exchange interactions.
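The elementary step of such a "computational quantum wire" can be sketched numerically (my own conventions and a toy two-qubit example, not the paper's classification): a logical qubit is coupled to a fresh |+> by CZ, and measuring the first qubit in a rotated basis pushes H·diag(1, e^{-i theta})·|psi> onto the second qubit, up to a Pauli X byproduct depending on the random outcome s.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Xg = np.array([[0, 1], [1, 0]], dtype=complex)
CZ = np.diag([1, 1, 1, -1]).astype(complex)

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)   # arbitrary logical state
psi /= np.linalg.norm(psi)
theta = 0.7

# Couple |psi> to a fresh |+> with CZ (qubit 1 first in the kron ordering).
state = CZ @ np.kron(psi, np.array([1, 1]) / np.sqrt(2))

for s in (0, 1):
    # Measure qubit 1 in the basis (|0> + (-1)^s e^{i theta} |1>)/sqrt(2);
    # the bra carries the conjugate phase.
    bra = np.array([1, (-1) ** s * np.exp(-1j * theta)]) / np.sqrt(2)
    out = np.kron(bra, np.eye(2)) @ state            # project qubit 1, keep qubit 2
    out /= np.linalg.norm(out)
    expected = (np.linalg.matrix_power(Xg, s)
                @ H @ np.diag([1, np.exp(-1j * theta)]) @ psi)
    fidelity = abs(np.vdot(expected, out))
    print(s, round(fidelity, 6))                     # 1.0 for both outcomes
```

Chaining such steps along a one-dimensional wire is what lets a single logical qubit be processed by measurements alone; the couplings between wires then supply the entangling operations needed for universality.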

Abstract:
We study the apparent nonlocality of quantum mechanics as a transport problem. If space is a physical entity through which quantum information (QI) must be transported, then one can define its speed. If not, QI exists apart from space, making space in some sense `nonphysical'. But we can still assign a `speed' of QI to such models based on their properties. In both cases, classical information must still travel at $c$, though in the latter case the origin of local spacetime itself is a puzzle. We consider the properties of different regimes for this speed of QI, and relevant quantum interpretations. For example, we show that the Many Worlds Interpretation (MWI) is nonlocal because it is what we call `spatially complete'.

Abstract:
Every measurement leaves the object in a family of states indexed by the possible outcomes. This family, called the posterior states, is usually a family of the eigenstates of the measured observable, but it can be an arbitrary family of states by controlling the object-apparatus interaction. A potentially realizable object-apparatus interaction measures position in such a way that the posterior states are the translations of an arbitrary wave function. In particular, position can be measured without perturbing the object in a momentum eigenstate.