Abstract:
Low-mass protostars are less luminous than expected. This luminosity problem is important because the observations appear to be inconsistent with some of the basic premises of star formation theory. Two possible solutions are that stars form slowly, which is supported by recent data, and/or that protostellar accretion is episodic; current data suggest that the latter accounts for less than half the missing luminosity. The solution to the luminosity problem bears directly on the fundamental problem of the time required to form a low-mass star. The protostellar mass and luminosity functions provide powerful tools both for addressing the luminosity problem and for testing theories of star formation. Results are presented for the collapse of singular isothermal spheres, for the collapse of turbulent cores, and for competitive accretion.

Abstract:
(Abridged) We report on the structure of the nuclear star cluster in the innermost 0.16 pc of the Galaxy as measured by the number density profile of late-type giants. Using laser guide star adaptive optics in conjunction with the integral field spectrograph, OSIRIS, at the Keck II telescope, we are able to differentiate between the older, late-type (~ 1 Gyr) stars, which are presumed to be dynamically relaxed, and the unrelaxed young (~ 6 Myr) population. This distinction is crucial for testing models of stellar cusp formation in the vicinity of a black hole, as the models assume that the cusp stars are in dynamical equilibrium in the black hole potential. We find that contamination from young stars is significant, with more than twice as many young stars as old stars in our sensitivity range (K < 15.5) within the central arcsecond. Based on the late-type stars alone, the surface stellar number density profile is flat, with a projected power law slope of -0.26+-0.24. These results are consistent with the nuclear star cluster having no cusp, with a core profile that is significantly flatter than predicted by most cusp formation theories. Here, we also review the methods for further constraining the true three-dimensional radial profile using kinematic measurements. Precise acceleration measurements in the plane of the sky as well as along the line of sight have the potential to directly measure the density profile and establish whether there is a "hole" in the distribution of late-type stars in the inner 0.1 pc.
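As a toy illustration (not the authors' pipeline), a projected power-law slope like the -0.26 quoted above can be estimated by binning star counts in projected radius and fitting a straight line in log-log space. The bin radii, areas, and counts below are synthetic placeholders:

```python
import math

def projected_slope(radii, counts, areas):
    """Least-squares slope of log10(surface number density) vs
    log10(projected radius) -- the projected power-law index."""
    xs = [math.log10(r) for r in radii]
    ys = [math.log10(c / a) for c, a in zip(counts, areas)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic bins drawn from an exact R**-0.26 profile (unit bin areas).
radii = [0.02 * 1.5 ** i for i in range(6)]   # projected radius in pc
areas = [1.0] * 6
counts = [r ** -0.26 for r in radii]
slope = projected_slope(radii, counts, areas)
```

In practice the counts carry Poisson errors, so the quoted uncertainty of +-0.24 would come from weighted fitting or bootstrapping, which this sketch omits.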

Abstract:
The close interaction between mother and offspring in mammals is thought to contribute to the evolution of genomic imprinting or parent-of-origin dependent gene expression. Empirical tests of theories about the evolution of imprinting have been scant for several reasons. Models make different assumptions about the traits affected by imprinted genes and the scenarios in which imprinting is predicted to have been selected for. Thus, competing hypotheses cannot readily be tested against each other. Further, it is far from clear how predictions about expression patterns of genes with specific phenotypic effects can be tested given current methodology of assaying gene expression levels, be it in the brain or in other tissues. We first set out a scenario for testing competing hypotheses and delineate the different assumptions and predictions of models. We then outline how predictions may be tested using mouse models such as intercrosses or recombinant inbred (RI) systems that can be phenotyped for traits relevant to imprinting theories. Further, we briefly discuss different molecular approaches that may be used in conjunction with experiments to ascertain expression patterns of imprinted genes and thus the testing of predictions.

Abstract:
Integrating inspection processes with testing processes promises to deliver several benefits, such as reduced quality assurance effort and higher defect detection rates. Systematic integration of these processes requires knowledge about the relationships between them, especially the relationship between inspection defects and test defects. Such knowledge is typically context-dependent and needs to be gained analytically or empirically. If such knowledge is not available, assumptions need to be made for a specific context. This article describes the relevance of assumptions and context factors for integrating inspection and testing processes and provides mechanisms for deriving assumptions in a systematic manner.

Abstract:
Statistical tests of earthquake predictions require a null hypothesis to model occasional chance successes. To define and quantify 'chance success' is knotty. Some null hypotheses ascribe chance to the Earth: Seismicity is modeled as random. The null distribution of the number of successful predictions -- or any other test statistic -- is taken to be its distribution when the fixed set of predictions is applied to random seismicity. Such tests tacitly assume that the predictions do not depend on the observed seismicity. Conditioning on the predictions in this way sets a low hurdle for statistical significance. Consider this scheme: When an earthquake of magnitude 5.5 or greater occurs anywhere in the world, predict that an earthquake at least as large will occur within 21 days and within an epicentral distance of 50 km. We apply this rule to the Harvard centroid-moment-tensor (CMT) catalog for 2000--2004 to generate a set of predictions. The null hypothesis is that earthquake times are exchangeable conditional on their magnitudes and locations and on the predictions -- a common "nonparametric" assumption in the literature. We generate random seismicity by permuting the times of events in the CMT catalog. We consider an event successfully predicted only if (i) it is predicted and (ii) there is no larger event within 50 km in the previous 21 days. The $P$-value for the observed success rate is $<0.001$: The method successfully predicts about 5% of earthquakes, far better than 'chance,' because the predictor exploits the clustering of earthquakes -- occasional foreshocks -- which the null hypothesis lacks. Rather than condition on the predictions and use a stochastic model for seismicity, it is preferable to treat the observed seismicity as fixed, and to compare the success rate of the predictions to the success rate of simple-minded predictions like those just described. If the proffered predictions do no better than a simple scheme, they have little value.
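The permutation test described above can be sketched in a few lines. This is a simplified toy, not the authors' code: it uses planar distances rather than epicentral distances, counts each event as predicted at most once, and omits the "no larger event in the previous 21 days" refinement.

```python
import random
from math import hypot

def successes(events, mag_thresh=5.5, window=21.0, radius=50.0):
    """Count events 'successfully predicted': some earlier event within
    `window` days and `radius` km had magnitude >= mag_thresh, and this
    event is at least as large as that trigger. Events are (t, mag, x, y)."""
    events = sorted(events)                      # order by time
    n = 0
    for j, (tj, mj, xj, yj) in enumerate(events):
        for ti, mi, xi, yi in events[:j]:
            if (mi >= mag_thresh and mj >= mi
                    and 0 < tj - ti <= window
                    and hypot(xj - xi, yj - yi) <= radius):
                n += 1
                break                            # count each event once
    return n

def permutation_pvalue(events, n_perm=999, seed=0):
    """Null hypothesis: event times are exchangeable given magnitudes and
    locations -- permute the times, holding everything else fixed."""
    rng = random.Random(seed)
    observed = successes(events)
    times = [e[0] for e in events]
    rest = [e[1:] for e in events]
    hits = 0
    for _ in range(n_perm):
        perm = times[:]
        rng.shuffle(perm)
        if successes([(t,) + r for t, r in zip(perm, rest)]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Two tightly clustered pairs, far apart: both second events are "predicted".
events = [(0.0, 5.6, 0.0, 0.0), (1.0, 5.7, 0.0, 0.0),
          (100.0, 5.6, 1000.0, 0.0), (101.0, 5.8, 1000.0, 0.0)]
p = permutation_pvalue(events, n_perm=99)
```

Because permuting times destroys the foreshock clustering while preserving magnitudes and locations, a clustered catalog scores far better than its permuted versions, which is exactly the effect the abstract warns about.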

Abstract:
This paper develops an improved surrogate data test that provides experimental evidence, for all the simple vowels of US English and for both male and female speakers, that Gaussian linear prediction analysis, a ubiquitous technique in current speech technologies, cannot extract all the dynamical structure of real speech time series. The test provides robust evidence undermining the validity of these linear techniques, supporting the assumptions of dynamical nonlinearity and/or non-Gaussianity common to more recent, more complex efforts at dynamical modelling of speech time series. However, an additional finding is that the classical assumptions cannot be ruled out entirely, and plausible evidence is given to explain the success of linear Gaussian theory as a weak approximation to the true nonlinear/non-Gaussian dynamics. This supports the use of appropriate hybrid linear/nonlinear/non-Gaussian modelling. With a calibrated calculation of the test statistic and a particular choice of experimental protocol, some of the known systematic problems of the method of surrogate data testing are circumvented, yielding results that support the conclusions to a high level of significance.
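A minimal sketch of the classical surrogate data test may help fix ideas; the paper's improved test differs in how the statistic is calibrated and in the protocol. Surrogates that preserve the linear Gaussian structure (the amplitude spectrum) are generated by randomizing Fourier phases, and a nonlinearity statistic computed on the data is ranked against the surrogate ensemble. The statistic chosen here (time-reversal asymmetry) is a standard illustrative choice, not necessarily the paper's:

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate preserving the amplitude spectrum (hence the linear
    autocorrelation) while randomizing the Fourier phases."""
    n = len(x)
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    if n % 2 == 0:
        phases[-1] = 0.0                 # Nyquist bin must stay real
    S = np.abs(X) * np.exp(1j * phases)
    S[0] = X[0]                          # keep the mean exactly
    return np.fft.irfft(S, n)

def time_reversal_asymmetry(x, lag=1):
    """Simple nonlinearity statistic: zero in expectation for any
    stationary linear Gaussian process."""
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 3)

def surrogate_pvalue(x, n_sur=99, seed=0):
    """Rank the observed statistic against the surrogate ensemble."""
    rng = np.random.default_rng(seed)
    t_obs = abs(time_reversal_asymmetry(x))
    hits = sum(
        abs(time_reversal_asymmetry(phase_randomized_surrogate(x, rng))) >= t_obs
        for _ in range(n_sur))
    return (hits + 1) / (n_sur + 1)

# A sawtooth is strongly time-asymmetric, so the null should be rejected.
x = (np.arange(512) % 32).astype(float)
p = surrogate_pvalue(x)
```

A small p-value says only that some linear Gaussian description is inadequate; as the abstract notes, this does not preclude linear Gaussian theory being a useful weak approximation.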

Abstract:
Constraints on the expansion history of the universe from measurements of cosmological distances make predictions for large-scale structure growth. Since these predictions depend on assumptions about dark energy evolution and spatial curvature, they can be used to test general classes of dark energy models by comparing predictions for those models with direct measurements of the growth history. I present predictions from current distance measurements for the growth history of dark energy models including a cosmological constant and quintessence. Although a time-dependent dark energy equation of state significantly weakens predictions for growth from measured distances, for quintessence there is a generic limit on the growth evolution that could be used to falsify the whole class of quintessence models. Understanding the allowed range of growth for dark energy models in the context of general relativity is a crucial step for efforts to distinguish dark energy from modified gravity.
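The logic of turning an assumed expansion history into a growth prediction can be sketched numerically. This toy (not the paper's calculation) uses a flat universe, a constant equation of state w, and the common growth-rate approximation f(a) ~ Omega_m(a)**0.55, which is accurate for quintessence-like models in general relativity:

```python
import math

def growth_history(omega_m0=0.3, w=-1.0, a_max=1.0, n=10000):
    """Linear growth factor D(a) for a flat universe with matter plus dark
    energy of constant equation of state w, integrating the approximation
    dlnD/dlna = f(a) ~ Omega_m(a)**0.55; normalized so D = a deep in
    matter domination."""
    lna0, lna1 = math.log(1e-4), math.log(a_max)
    dlna = (lna1 - lna0) / n
    lnD = lna0                       # D = a at the starting epoch
    history = []
    for i in range(n + 1):
        a = math.exp(lna0 + i * dlna)
        # E^2(a) = Omega_m a^-3 + Omega_de a^(-3(1+w)) for a flat universe
        e2 = omega_m0 * a ** -3 + (1.0 - omega_m0) * a ** (-3.0 * (1.0 + w))
        f = (omega_m0 * a ** -3 / e2) ** 0.55
        history.append((a, math.exp(lnD)))
        lnD += f * dlna              # Euler step of dlnD/dlna = f(a)
    return history

# In a pure matter (Einstein-de Sitter) universe growth is unsuppressed,
# D(a) = a; dark energy suppresses late-time growth.
_, d_eds = growth_history(omega_m0=1.0)[-1]
_, d_lcdm = growth_history(omega_m0=0.3, w=-1.0)[-1]
```

Distance data constrain E(a), and through it the predicted D(a); comparing that prediction with measured growth is the consistency test the abstract describes.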

Abstract:
Evidence for fine-tuning of physical parameters suitable for life can perhaps be explained by almost any combination of providence, coincidence or multiverse. A multiverse usually includes parts unobservable to us, but if the theory for it includes suitable measures for observations, what is observable can be explained in terms of the theory even if it contains such unobservable elements. Thus good multiverse theories can be tested against observations. For these tests and Bayesian comparisons of different theories that predict more than one observation, it is useful to define the concept of ``typicality'' as the likelihood given by a theory that a random result of an observation would be at least as extreme as the result of one's actual observation. Some multiverse theories can be regarded as pertaining to a single universe (e.g. a single quantum state obeying certain equations), raising the question of why those equations apply. Other multiverse theories can be regarded as pertaining to no single universe at all. These no longer raise the question of what the equations are for a single universe but rather the question of why the measure for the set of different universes is such as to make our observations not too atypical.
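For a theory with a discrete set of possible observations, the "typicality" defined above is just a tail probability. A minimal sketch, where the distribution and the extremeness measure are illustrative placeholders rather than anything from the paper:

```python
def typicality(pmf, observed, extremeness):
    """Likelihood, under a theory's distribution `pmf` (a dict mapping
    outcome -> probability), that a random observation would be at least
    as extreme as the actual one."""
    e_obs = extremeness(observed)
    return sum(p for x, p in pmf.items() if extremeness(x) >= e_obs)

# Toy theory: four fair coin flips; extremeness = distance from the mean.
pmf = {0: 1/16, 1: 4/16, 2: 6/16, 3: 4/16, 4: 1/16}
t = typicality(pmf, observed=4, extremeness=lambda k: abs(k - 2))
```

Observing four heads gives typicality 2/16: only the outcomes 0 and 4 are at least as extreme. A theory under which our actual observations have very low typicality is disfavored in the Bayesian comparison the abstract describes.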

Abstract:
We review the predictions of the replica approach both for the statics and for the off-equilibrium dynamics. We stress the importance of the Cugliandolo-Kurchan off-equilibrium fluctuation-dissipation relation in providing a bridge between the statics and the dynamics. We present numerical evidence for the correctness of these relations. This approach allows an experimental determination of the basic parameters of the replica theory.

Abstract:
The origin of quark and lepton masses is one of the outstanding problems of physics. As the experimental data become more and more accurate, testing theories of fermion masses requires greater care. In this talk we discuss a theoretical framework for testing those theories with a high-energy desert. It is only with precision tests that we can hope to narrow the set of viable beyond-the-standard-model theories.