Abstract:
There is currently no consistent approach to modelling galaxy bias evolution in cosmological inference. This lack of a common standard makes the rigorous comparison or combination of probes difficult. We show that the choice of biasing model has a significant impact on cosmological parameter constraints for a survey such as the Dark Energy Survey (DES), considering the 2-point correlations of galaxies in five tomographic redshift bins. We find that modelling galaxy bias with a free biasing parameter per redshift bin gives a Figure of Merit (FoM) for Dark Energy equation of state parameters w_0, w_a smaller by a factor of 10 than if a constant bias is assumed. An incorrect bias model will also cause a shift in measured values of cosmological parameters. Motivated by these points and focusing on the redshift evolution of linear bias, we propose the use of a generalised galaxy bias which encompasses a range of bias models from theory, observations and simulations, b(z) = c + (b_0 - c)/D(z)^alpha, where parameters c, b_0 and alpha depend on galaxy properties such as halo mass. For a DES-like galaxy survey we find that this model gives an unbiased estimate of w_0, w_a with the same number or fewer nuisance parameters and a higher FoM than a simple b(z) model allowed to vary in z-bins. We show how the parameters of this model are correlated with cosmological parameters. We fit a range of bias models to two recent datasets, and conclude that this generalised parameterisation is a sensible benchmark expression of galaxy bias on large scales.
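The generalised bias model above is straightforward to evaluate once the linear growth factor D(z) is known. A minimal sketch (the growth-factor value and parameter choices are illustrative, not the paper's fits):

```python
def galaxy_bias(D, c, b0, alpha):
    """Generalised linear bias b(z) = c + (b0 - c) / D(z)**alpha.

    D            -- linear growth factor D(z), normalised so that D(0) = 1
    c, b0, alpha -- model parameters, which depend on galaxy properties
                    such as halo mass
    """
    return c + (b0 - c) / D ** alpha

# Limiting cases recovered by the generalised form (D = 0.5 is illustrative):
b_const   = galaxy_bias(0.5, c=2.0, b0=2.0, alpha=1.0)  # c = b0: constant bias
b_passive = galaxy_bias(0.5, c=1.0, b0=2.0, alpha=1.0)  # c = 1: b - 1 scales as 1/D
```

Setting c = b0 (or alpha = 0) recovers a redshift-independent bias, while c = 1, alpha = 1 gives b(z) = 1 + (b0 - 1)/D(z), the familiar passively evolving bias.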

Abstract:
Many inverse problems include nuisance parameters which, while not of direct interest, are required to recover primary parameters. Structure present in these problems allows efficient optimization strategies; a well-known example is variable projection, where nonlinear least squares problems that are linear in some parameters can be optimized very efficiently. In this paper, we extend the idea of projecting out a subset of the variables to a broad class of maximum likelihood (ML) and maximum a posteriori (MAP) problems with nuisance parameters, such as variance or degrees of freedom. As a result, we are able to incorporate nuisance parameter estimation into large-scale constrained and unconstrained inverse problem formulations. We apply the approach to a variety of problems, including estimation of unknown variance parameters in the Gaussian model, degrees of freedom (d.o.f.) parameter estimation in the context of robust inverse problems, automatic calibration, and optimal experimental design. Using numerical examples, we demonstrate improved recovery of primary parameters for several large-scale inverse problems. The proposed approach is compatible with a wide variety of algorithms and formulations, and its implementation requires only minor modifications to existing algorithms.
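Variable projection, the motivating example above, can be sketched on a toy separable model y(t) = c·exp(-θt): for each fixed θ the linear coefficient c has a closed-form least-squares solution, so the outer search runs over θ alone. The model, data, and grid search below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

# Toy separable model: y(t) = c * exp(-theta * t), linear in c and
# nonlinear in theta (an illustrative assumption, not the paper's problem).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 50)
y = 3.0 * np.exp(-1.5 * t) + 0.01 * rng.standard_normal(t.size)

def reduced_objective(theta):
    phi = np.exp(-theta * t)        # basis vector for this value of theta
    c = (phi @ y) / (phi @ phi)     # closed-form linear least-squares solution
    r = y - c * phi                 # residual after projecting out c
    return r @ r

# Outer search over the nonlinear variable only; a simple grid stands in
# for a proper optimizer here.
thetas = np.linspace(0.1, 5.0, 491)
theta_hat = thetas[np.argmin([reduced_objective(th) for th in thetas])]
```

Because c is eliminated in closed form, the outer problem has one variable instead of two; this elimination idea is what the paper extends from least squares to general ML and MAP objectives.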

Abstract:
We point out that the ideas underlying some test procedures recently proposed in the econometrics literature for post-model-selection testing (and for some other testing problems) have been around for quite some time in the statistics literature. We also sharpen some of these results in the statistics literature. Furthermore, we show that some intuitively appealing testing procedures that have found their way into the econometrics literature lead to tests that do not have desirable size properties, not even asymptotically.

Abstract:
We perform a model-independent fit of the short-distance couplings $C_{7,9,10}$ within the Standard Model set of $b\to s\gamma$ and $b\to s\bar\ell\ell$ operators. Our analysis of $B \to K^* \gamma$, $B \to K^{(*)} \bar\ell\ell$ and $B_s \to \bar\mu\mu$ decays is the first to harness the full power of the Bayesian approach: all major sources of theory uncertainty explicitly enter as nuisance parameters. Exploiting the latest measurements, the fit reveals a flipped-sign solution in addition to a Standard-Model-like solution for the couplings $C_i$. Each solution contains about half of the posterior probability, and both have nearly equal goodness of fit. The Standard Model prediction is close to the best-fit point. No New Physics contributions are necessary to describe the current data. Benefitting from the improved posterior knowledge of the nuisance parameters, we predict ranges for currently unmeasured, optimized observables in the angular distributions of $B\to K^*(\to K\pi)\,\bar\ell\ell$.

Abstract:
We describe some recent approaches to likelihood-based inference in the presence of nuisance parameters. Our approach is based on plotting the likelihood function and the $p$-value function, using recently developed third-order approximations. Orthogonal parameters and adjustments to profile likelihood are also discussed. Connections to classical approaches of conditional and marginal inference are outlined.

Abstract:
The possibility of determining MDM model parameters from observational data on the Abell-ACO power spectrum and mass function is analysed. It is shown that the region of the spectrum corresponding to these data is sufficiently sensitive to MDM model parameters such as the neutrino mass $m_{\nu}$, the number of massive neutrino species $N_{\nu}$, the baryon content $\Omega_b$ and the Hubble constant $h\equiv H_0/100\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$. The $\chi^2$ minimization method was used for their determination. When all these parameters are varied, the observational data on the Abell-ACO power spectrum and mass function prefer models with high $\Omega_{\nu}$ ($\sim 0.4-0.5$), low $\Omega_b$ ($\le 0.01$) and low $h$ ($\sim 0.4-0.6$). The best-fit parameters are $N_{\nu}=3$, $m_{\nu}=4.4\,\mathrm{eV}$, $h=0.56$ and $\Omega_b\le 0.01$. High-$\Omega_b$ ($\sim 0.4-0.5$) solutions are obtained when the neutrino mass is fixed at $\le 3\,\mathrm{eV}$. To explain the observed excess power at $k\approx 0.05\,h/\mathrm{Mpc}$, a peak of Gaussian form was introduced into the primordial power spectrum, and its parameters (amplitude, position and width) were determined along with the MDM model parameters. The peak decreases $\chi^2$ and increases the bulk motions, but does not change the best-fit MDM parameters substantially. It is also shown that models with median $\Omega_{\nu}\sim 0.2-0.3$ ($m_{\nu}\sim 2.5\,\mathrm{eV}$, $N_{\nu}\sim 2-3$) and $\Omega_b=0.024/h^2$, which match constraints from cosmological nucleosynthesis and high-redshift objects, are not ruled out by these data ($\Delta \chi^2<1$).
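The $\chi^2$ minimization step can be illustrated on synthetic data; the power-law model, wavenumbers and errors below are stand-ins, not the Abell-ACO data or the MDM spectrum:

```python
import numpy as np

# Synthetic stand-in for a power-spectrum fit: mock data with errors and a
# one-parameter toy model, scanned over a grid in the parameter.
k = np.array([0.03, 0.05, 0.08, 0.12, 0.20])   # wavenumbers [h/Mpc], made up

def model(k, a):
    return a * k ** -1.8                        # toy power-law spectrum

truth = 2.0e3
p_obs = model(k, truth)                         # noiseless mock "data"
sigma = 0.1 * p_obs                             # assumed 10% errors

def chi2(a):
    return np.sum(((p_obs - model(k, a)) / sigma) ** 2)

amps = np.linspace(1.0e3, 3.0e3, 201)
best = amps[np.argmin([chi2(a) for a in amps])]
# Grid points with chi2(a) - chi2(best) < 1 delimit a Delta(chi2) < 1 region,
# the acceptance criterion quoted in the abstract.
allowed = amps[np.array([chi2(a) for a in amps]) - chi2(best) < 1.0]
```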

Abstract:
We describe here the general mathematical approach to constructing likelihoods for fitting observed spectra in one or more dimensions with multiple sources, including the effects of systematic uncertainties represented as nuisance parameters, when the likelihood is to be maximized with respect to these parameters. We consider three types of nuisance parameters: simple multiplicative factors, source spectra "morphing" parameters, and parameters representing statistical uncertainties in the predicted source spectra.
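The multiplicative-factor case can be sketched with a binned Poisson likelihood in which an uncertain signal normalisation f enters as a Gaussian-constrained nuisance parameter and is removed by maximising the likelihood over it. All numbers below are illustrative assumptions:

```python
import math

# Binned Poisson likelihood with one multiplicative nuisance factor f on the
# signal prediction, constrained by a Gaussian term (all numbers illustrative).
signal     = [5.0, 8.0, 3.0]    # predicted source spectrum per bin
background = [2.0, 2.0, 2.0]
observed   = [8, 10, 4]
sigma_f    = 0.1                # assumed 10% prior uncertainty on f

def neg_log_like(f):
    nll = 0.5 * ((f - 1.0) / sigma_f) ** 2      # Gaussian constraint on f
    for s, b, n in zip(signal, background, observed):
        mu = f * s + b                          # f multiplies the signal only
        nll += mu - n * math.log(mu)            # Poisson term, up to a constant
    return nll

# Maximise the likelihood with respect to the nuisance parameter
# (a simple scan stands in for a proper optimizer).
fs = [0.5 + 0.001 * i for i in range(1001)]
f_hat = min(fs, key=neg_log_like)
```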

Abstract:
We discuss some issues arising in the evaluation of confidence intervals in the presence of nuisance parameters (systematic uncertainties) by means of direct Neyman construction in multi-dimensional space. While this kind of procedure provides rigorous coverage, it may be affected by large overcoverage, and/or produce results with counterintuitive behavior with respect to the uncertainty on the nuisance parameters, or other undesirable properties. We describe a choice of ordering algorithm that provides results with good general properties, the correct behavior for small uncertainties, and limited overcoverage.

Abstract:
In most cases, only a subvector of a model's parameters is tested; the remaining parameters enter the tests as nuisance parameters. The presence of nuisance parameters biases key estimates used in the tests, so inferences drawn in their presence may lead to less accurate conclusions, and in extreme cases nuisance parameters can invalidate the test altogether. Eliminating the influence of nuisance parameters from a test can therefore improve its performance. This influence can be eliminated via the marginal likelihood, conditional likelihood, canonical likelihood, profile likelihood, or Bayesian tests. This paper is concerned with a marginal-likelihood-based test for eliminating the influence of nuisance parameters. In general, existing one-sided and two-sided tests for autocorrelation test only the autocorrelation coefficients, not the regression coefficients of the model. We therefore propose a distance-based marginal-likelihood one-sided Likelihood Ratio (DMLR) test, which eliminates the influence of nuisance parameters when testing higher-order autocorrelation against one-sided alternatives in the linear regression model, using marginal likelihood and a distance-based approach. Monte Carlo simulations are conducted to compare the power properties of the proposed DMLR test with its conventional counterparts. The DMLR test shows substantially improved power in most of the cases considered.

Abstract:
We study the frequentist properties of confidence intervals computed by the method known to statisticians as the Profile Likelihood. It is seen that the coverage of these intervals is surprisingly good over a wide range of possible parameter values for important classes of problems, in particular whenever there are additional nuisance parameters with statistical or systematic errors. Programs are available for calculating these intervals.
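A profile-likelihood interval can be sketched for a Gaussian mean with the standard deviation as a nuisance parameter, profiled out analytically; the data and the asymptotic ΔNLL = 0.5 threshold for a 68% interval below are illustrative, not the programs referred to above:

```python
import math

# Gaussian sample with unknown standard deviation; the sigma nuisance
# parameter is profiled out analytically (data values are illustrative).
data = [4.9, 5.3, 4.7, 5.1, 5.6, 4.8, 5.2, 5.0]
n = len(data)
mean = sum(data) / n

def profile_nll(mu):
    # For fixed mu, the ML estimate of sigma^2 is the mean squared deviation
    # about mu; substituting it back gives the profile negative
    # log-likelihood up to additive constants.
    s2 = sum((x - mu) ** 2 for x in data) / n
    return 0.5 * n * math.log(s2)

nll_min = profile_nll(mean)
# Approximate 68% interval: mu values where the profile NLL rises by 0.5,
# the asymptotic chi^2 (1 d.o.f.) threshold.
grid = [4.0 + 0.001 * i for i in range(2001)]
inside = [mu for mu in grid if profile_nll(mu) - nll_min <= 0.5]
lo, hi = inside[0], inside[-1]
```

The coverage studies in the abstract ask how often such intervals contain the true mean under repeated sampling.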