Abstract:
We investigate the practical implementation of Taylor's (2002) 3-dimensional gravitational potential reconstruction method using weak gravitational lensing, together with the requisite reconstruction of the lensing potential. This methodology calculates the 3-D gravitational potential given knowledge of shear estimates and redshifts for a set of galaxies. We analytically estimate the noise expected in the reconstructed gravitational field, taking into account the effects of a finite survey, photometric-redshift uncertainty, redshift-space distortions, and multiple scattering events. In order to implement this approach for future data analysis, we simulate the lensing distortion fields due to various mass distributions. We create catalogues of galaxies sampling this distortion in three dimensions, with realistic spatial distribution and intrinsic ellipticity for both ground-based and space-based surveys. Using the resulting catalogues of galaxy position and shear, we demonstrate that it is possible to reconstruct the lensing and gravitational potentials with our method. For example, we demonstrate that a typical ground-based shear survey with redshift limit z=1 and photometric redshifts with error Delta z=0.05 is directly able to measure the 3-D gravitational potential for mass concentrations >10^14 M_\odot between 0.1

Abstract:
We present a theoretical analysis of the paradigm of encoded universality, using a Lie algebraic analysis to derive specific conditions under which physical interactions can provide universality. We discuss the significance of the tensor product structure in the quantum circuit model and use this to define the conjoining of encoded qudits. The construction of encoded gates between conjoined qudits is discussed in detail. We illustrate the general procedures with several examples from exchange-only quantum computation. In particular, we extend our earlier results showing universality with the isotropic exchange interaction to the derivation of encoded universality with the anisotropic exchange interaction, i.e., to the XY model. In this case the minimal encoding for universality is into qutrits rather than into qubits as was the case for isotropic (Heisenberg) exchange. We also address issues of fault-tolerance, leakage and correction of encoded qudits.

Abstract:
It is of great interest to measure the properties of substructures in dark matter halos at galactic and cluster scales. Here we suggest a method to constrain substructure properties using the variance of weak gravitational flexion in a galaxy-galaxy lensing context. We show the effectiveness of flexion variance in measuring substructures in N-body simulations of dark matter halos, and present the expected galaxy-galaxy lensing signals. We show the insensitivity of the method to the overall galaxy halo mass, and predict the method's signal-to-noise for a space-based all-sky survey, showing that the presence of substructure down to 10^9 M_\odot halos can be reliably detected.

Abstract:
Protecting quantum information from the detrimental effects of decoherence and lack of precise quantum control is a central challenge that must be overcome if a large robust quantum computer is to be constructed. The traditional approach to achieving this is via active quantum error correction using fault-tolerant techniques. An alternative to this approach is to engineer strongly interacting many-body quantum systems that enact the quantum error correction via the natural dynamics of these systems. Here we present a method for achieving this based on the concept of concatenated quantum error correcting codes. We define a class of Hamiltonians whose ground states are concatenated quantum codes and whose energy landscape naturally causes quantum error correction. We analyze these Hamiltonians for robustness and suggest methods for implementing these highly unnatural Hamiltonians.

Abstract:
The first quantum algorithm to offer an exponential speedup (in the query complexity setting) over classical algorithms was Simon's algorithm for identifying a hidden exclusive-or mask. Here we observe how part of Simon's algorithm can be interpreted as a Clebsch-Gordan transform. Inspired by this, we show how Clebsch-Gordan transforms can be used to efficiently find a hidden involution on the group G^n, where G is the dihedral group of order eight (the group of symmetries of a square). This problem previously admitted an efficient quantum algorithm, but a connection to Clebsch-Gordan transforms had not been made. Our results provide further evidence for the usefulness of Clebsch-Gordan transforms in quantum algorithm design.
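The promise structure that Simon's algorithm exploits can be illustrated classically: a function on n-bit strings hides an XOR mask s in the sense that f(x) = f(x XOR s) for all x. A minimal sketch (the oracle construction and mask value here are illustrative, not from the paper; only the quantum algorithm achieves the exponential query speedup):

```python
# Toy classical illustration of Simon's problem structure.
# A function on n-bit strings hides an XOR mask s: f(x) = f(x ^ s).

def make_oracle(n, s):
    """Return f with f(x) == f(x ^ s): each coset {x, x ^ s} shares one label."""
    table, label = {}, 0
    for x in range(2 ** n):
        if x not in table:
            table[x] = table[x ^ s] = label
            label += 1
    return lambda x: table[x]

def find_mask(n, f):
    """Classical search: the first collision f(x) == f(y) reveals s = x ^ y."""
    seen = {}
    for x in range(2 ** n):
        v = f(x)
        if v in seen:
            return seen[v] ^ x
        seen[v] = x
    return 0  # no collision: f is injective, so s = 0

n, s = 4, 0b1011
f = make_oracle(n, s)
print(f(0b0110) == f(0b0110 ^ s), bin(find_mask(n, f)))  # True 0b1011
```

The classical search above needs exponentially many queries in the worst case; Simon's quantum algorithm recovers s with O(n) queries, which is the speedup the abstract refers to.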

Abstract:
In this thesis we describe methods for avoiding the detrimental effects of decoherence while still allowing computation on the quantum information. The philosophy of the method discussed in the first part of this thesis is to use a symmetry of the decoherence mechanism to find robust encodings of the quantum information. Stability, control, and methods for using decoherence-free information in a quantum computer are presented, with specific emphasis on decoherence due to a collective coupling between the system and its environment. Universal quantum computation on such decoherence-free encodings for collective decoherence is demonstrated. Rigorous definitions of control and the use of encoded universality in quantum computers are addressed. Explicit gate constructions for encoded universality on ion-trap and exchange-based quantum computers are given. In the second part of the thesis we examine physical systems with error-correcting properties. We examine systems that can store quantum information in their ground state such that decoherence processes are prohibited energetically. We present the theory of supercoherent systems, whose ground states are quantum error detecting codes, and describe a spin ladder whose ground state has both error-detecting and error-correcting properties. We conclude by discussing naturally fault-tolerant quantum computation.

Abstract:
We revisit the question of universality in quantum computing and propose a new paradigm. Instead of forcing a physical system to enact a predetermined set of universal gates (e.g., single-qubit operations and CNOT), we focus on the intrinsic ability of a system to act as a universal quantum computer using only its naturally available interactions. A key element of this approach is the realization that the fungible nature of quantum information allows for universal manipulations using quantum information encoded in a subspace of the full system Hilbert space, as an alternative to using physical qubits directly. Starting with the interactions intrinsic to the physical system, we show how to determine the possible universality resulting from these interactions over an encoded subspace. We outline a general Lie-algebraic framework which can be used to find the encoding for universality and give several examples relevant to solid-state quantum computing.
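The core of such a Lie-algebraic test can be sketched numerically: close the set of available control Hamiltonians under commutation and count the dimension of the real Lie algebra they generate, comparing it to the dimension required for universal control on the (encoded) space. A toy single-qubit example (function names, tolerances, and the choice of generators are illustrative assumptions, not the paper's construction):

```python
import numpy as np

# Toy numerical sketch of a Lie-algebraic universality test:
# close a set of Hamiltonians under commutation and count the
# dimension of the real Lie algebra generated by {iH}.

def lie_closure_dim(gens, tol=1e-9):
    """Dimension of the real Lie algebra generated by {iH : H in gens}."""
    mats, vecs = [], []

    def try_add(m):
        # Flatten to a real vector and test linear independence against the span
        v = np.concatenate([m.real.ravel(), m.imag.ravel()])
        if vecs:
            A = np.array(vecs).T
            coef, *_ = np.linalg.lstsq(A, v, rcond=None)
            if np.linalg.norm(v - A @ coef) < tol * max(1.0, np.linalg.norm(v)):
                return False  # already in the span
        mats.append(m)
        vecs.append(v)
        return True

    for g in gens:
        try_add(1j * np.asarray(g, dtype=complex))  # anti-Hermitian generators

    grew = True
    while grew:  # take commutators until the span stops growing
        grew = False
        for a in list(mats):
            for b in list(mats):
                c = a @ b - b @ a
                if np.linalg.norm(c) > tol and try_add(c):
                    grew = True
    return len(mats)

sz = np.array([[1, 0], [0, -1]])
sx = np.array([[0, 1], [1, 0]])
print(lie_closure_dim([sz, sx]))  # 3 -> su(2): universal one-qubit control
```

Here two non-commuting generators close to the full three-dimensional algebra su(2); in the encoded-universality setting the same closure is computed for the physically available interactions restricted to the encoded subspace.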

Abstract:
Experimental implementations of quantum computer architectures are now being investigated in many different physical settings. The full set of requirements that must be met to make quantum computing a reality in the laboratory [1] is daunting, involving capabilities well beyond the present state of the art. In this report we develop a significant simplification of these requirements that can be applied in many recent solid-state approaches, using quantum dots [2], and using donor-atom nuclear spins [3] or electron spins [4]. In these approaches, the basic two-qubit quantum gate is generated by a tunable Heisenberg interaction (the Hamiltonian is $H_{ij}=J(t){\vec S}_i\cdot{\vec S}_j$ between spins $i$ and $j$), while the one-qubit gates require the control of a local Zeeman field. Compared to the Heisenberg operation, the one-qubit operations are significantly slower and require substantially greater materials and device complexity, which may also contribute to increasing the decoherence rate. Here we introduce an explicit scheme in which the Heisenberg interaction alone suffices to exactly implement any quantum computer circuit, at a price of a factor of three in additional qubits and about a factor of ten in additional two-qubit operations. Even at this cost, the ability to eliminate the complexity of one-qubit operations should accelerate progress towards these solid-state implementations of quantum computation.
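The two-spin building block of this scheme can be checked numerically: the exchange Hamiltonian $H = J\,\vec S_i\cdot\vec S_j$ given above generates the SWAP gate, up to a global phase, when the pulse area satisfies $Jt = \pi$. A minimal sketch (the constant-$J$ pulse and normalization are assumptions for illustration):

```python
import numpy as np

# Numerical check: the two-spin Heisenberg exchange H = J S_i . S_j
# generates SWAP (up to a global phase) when the pulse area is J*t = pi.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# S = sigma/2, so S_i . S_j = (1/4) * sum_k sigma_k (tensor) sigma_k
J = 1.0
H = (J / 4) * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))

# Time evolution U = exp(-i H t) via the eigendecomposition of H
t = np.pi / J
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

phase = U[0, 0] / abs(U[0, 0])   # strip the global phase
print(np.allclose(U / phase, SWAP))  # True
```

The triplet and singlet eigenspaces of $H$ pick up a relative phase of $\pi$ at $Jt = \pi$, which is exactly the SWAP action; halving the pulse area gives the entangling $\sqrt{\mathrm{SWAP}}$ used in exchange-based gate constructions.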

Abstract:
We carry out an exploratory weak gravitational lensing analysis on a combined VLA and MERLIN radio data set: a deep (3.3 micro-Jy beam^-1 rms noise) 1.4 GHz image of the Hubble Deep Field North. We measure the shear estimator distribution at this radio sensitivity for the first time, finding a similar distribution to that of optical shear estimators for HST ACS data in this field. We examine the residual systematics in shear estimation for the radio data, and give cosmological constraints from radio-optical shear cross-correlation functions. We emphasize the utility of cross-correlating shear estimators from radio and optical data in order to reduce the impact of systematics. Unexpectedly we find no evidence of correlation between optical and radio intrinsic ellipticities of matched objects; this result improves the properties of optical-radio lensing cross-correlations. We explore the ellipticity distribution of the radio counterparts to optical sources statistically, confirming the lack of correlation; as a result we suggest a connected statistical approach to radio shear measurements.

Abstract:
A double-helix electrode configuration is combined with a $^{10}$B powder-coating technique to build large-area (9 in $\times$ 36 in) neutron detectors. The neutron detection efficiency of each of the four prototypes is comparable to that of a single 2-bar $^3$He drift tube of the same length (36 in). One unit has been operating continuously for 18 months with a change in efficiency of less than 1%. An analytic model for pulse height spectra is described, and the predicted mean film thickness agrees with experiment to within 30%. Further detector optimization is possible through film texture, powder size, moderator box, and gas. The estimated production cost per unit is less than 3k US\$, making the technology suitable for deployment in large numbers.