Abstract:
In this article, we study the asymptotic predictive optimality of a model selection criterion based on the cross-validatory predictive density, already available in the literature. For a dependent variable and associated explanatory variables, we consider a class of linear models as approximations to the true regression function. One selects a model among these using the criterion under study and predicts a future replicate of the dependent variable by an optimal predictor under the chosen model. We show that under squared error prediction loss, this prediction scheme performs asymptotically as well as an oracle, where the oracle refers to the model selection rule that would minimize this loss if the true regression function were known.
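The selection-and-prediction scheme above can be illustrated in code. The paper's criterion is based on the cross-validatory predictive density; as a simplified, hypothetical stand-in, the sketch below selects among candidate linear models by leave-one-out squared prediction error (PRESS), computed with the standard hat-matrix shortcut. All function names are illustrative assumptions:

```python
import numpy as np

def loo_press(X, y):
    """Leave-one-out squared prediction error (PRESS) for a linear model,
    via the hat-matrix shortcut: loo residual = e_i / (1 - h_ii)."""
    H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
    resid = y - H @ y                       # ordinary least squares residuals
    loo = resid / (1.0 - np.diag(H))        # leave-one-out residuals
    return float(np.sum(loo ** 2))

def select_model(candidates, y):
    """Pick, among candidate design matrices, the model with the
    smallest leave-one-out squared prediction error."""
    return int(np.argmin([loo_press(X, y) for X in candidates]))
```

A future replicate of the response is then predicted by the fitted values of the selected model, mirroring the "optimal predictor under the chosen model" step of the abstract.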

Abstract:
In this article, we consider the problem of simultaneous testing of hypotheses when the individual test statistics are not necessarily independent. Specifically, we consider the problem of simultaneous testing of point null hypotheses against two-sided alternatives about the mean parameters of normally distributed random variables. We assume that, conditionally given the vector of means, these random variables jointly follow a multivariate normal distribution with a known but arbitrary covariance matrix. We consider a Bayesian framework where each unknown mean is modeled via a two component point mass mixture prior, whereby unconditionally the test statistics jointly have a mixture of multivariate normal distributions. A new testing procedure is developed that uses the dependence among the test statistics and works in a step-down-like manner. The procedure is general enough to be applied even to non-normal data. A decision theoretic justification in favor of the proposed testing procedure is provided by showing that, unlike the traditional $p$-value based stepwise procedures, this new method possesses a certain convexity property which is essential for the admissibility of a multiple testing procedure with respect to the vector risk function. Consistent estimation of the unknown proportion of alternative hypotheses and of the variance of the distribution of the non-zero means is theoretically investigated. An alternative representation of the proposed test statistics has also been established, resulting in a substantial reduction in computational complexity. It is demonstrated through extensive simulations that for various forms of dependence and a wide range of sparsity levels, the proposed testing procedure compares quite favourably with several existing multiple testing procedures available in the literature in terms of overall misclassification probability.
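A minimal simulation sketch of the model described above: test statistics that are conditionally multivariate normal with known covariance, and means drawn from a two-component point-mass mixture. The equicorrelated covariance and all function names are illustrative assumptions, not from the paper:

```python
import numpy as np

def simulate_dependent_mixture(m, p, tau2, Sigma, rng):
    """Draw test statistics from the two-groups model of the abstract:
    mu_i = 0 with prob. 1-p, mu_i ~ N(0, tau2) with prob. p, and
    X | mu ~ N_m(mu, Sigma) with a known covariance matrix Sigma."""
    nonnull = rng.random(m) < p                        # indicators of true alternatives
    mu = np.where(nonnull, rng.normal(0.0, np.sqrt(tau2), m), 0.0)
    L = np.linalg.cholesky(Sigma)                      # factor the known covariance
    x = mu + L @ rng.standard_normal(m)                # correlated noise added to means
    return x, nonnull

# equicorrelation as one simple form of dependence
m, rho = 100, 0.3
Sigma = (1 - rho) * np.eye(m) + rho * np.ones((m, m))
x, nonnull = simulate_dependent_mixture(m, p=0.1, tau2=4.0,
                                        Sigma=Sigma, rng=np.random.default_rng(0))
```

Such simulated data are the natural testbed for comparing misclassification probabilities of competing multiple testing procedures under dependence.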

Abstract:
Suppose we have data generated according to a multivariate normal distribution with a fixed unknown mean vector that is sparse in the sense of being nearly black. In this work, we study the optimality of Bayes estimates and posterior concentration properties, in terms of the minimax risk in the $l_2$ norm, for a very general class of continuous shrinkage priors. The class of priors considered is rich enough to include a great variety of heavy tailed prior distributions, such as the three parameter beta normal mixtures (including the horseshoe), the generalized double Pareto, the inverse gamma and the normal-exponential-gamma priors. Assuming that the number of non-zero components of the mean vector is known, we show that the Bayes estimators corresponding to this general class of priors attain the minimax risk in the $l_2$ norm (possibly up to a multiplicative constant) and the corresponding posterior distributions contract around the true mean vector at the minimax optimal rate for appropriate choice of the global shrinkage parameter. Moreover, we provide conditions under which these posterior distributions contract around the corresponding Bayes estimates at least as fast as the minimax risk in the $l_2$ norm. We also provide a lower bound to the total posterior variance for an important subclass of this general class of shrinkage priors that includes the generalized double Pareto priors with shape parameter $\alpha=0.5$ and the three parameter beta normal mixtures with parameters $a=0.5$ and $b>0$ (including the horseshoe) in particular. The present work is inspired by the recent work of van der Pas et al. (2014) on the posterior contraction properties of the horseshoe prior under the present set-up. We extend their results to this general class of priors and come up with novel unifying proofs which work for a very broad class of one-group continuous shrinkage priors.
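For intuition, the Bayes estimate (posterior mean) under one member of this class, the horseshoe prior, can be approximated coordinatewise by simple Monte Carlo. This is a hedged sketch assuming unit error variance and a fixed global shrinkage parameter tau, using prior draws of the local scale as a self-normalized importance sample; it is not the paper's derivation:

```python
import numpy as np

def horseshoe_posterior_mean(y, tau=1.0, n_draws=200_000, seed=0):
    """Monte Carlo approximation of E[theta | y] under the horseshoe prior:
    y ~ N(theta, 1), theta | lambda ~ N(0, tau^2 lambda^2), lambda ~ C+(0, 1).
    Given lambda, E[theta | y, lambda] = y * s with s = tau^2 lambda^2 / (1 + tau^2 lambda^2);
    lambda is integrated out by weighting prior draws by the marginal likelihood."""
    rng = np.random.default_rng(seed)
    lam = np.abs(rng.standard_cauchy(n_draws))       # half-Cauchy local scales
    v = 1.0 + (tau * lam) ** 2                       # marginal variance of y given lambda
    w = np.exp(-0.5 * y ** 2 / v) / np.sqrt(v)       # likelihood weight, prop. to N(y; 0, v)
    shrink = (tau * lam) ** 2 / v                    # shrinkage factor 1 - kappa
    return float(y * np.sum(w * shrink) / np.sum(w))

# small observations are shrunk heavily toward zero; large ones barely at all
small = horseshoe_posterior_mean(0.5)
large = horseshoe_posterior_mean(6.0)
```

The heavy Cauchy tail of the local scale is what leaves large signals nearly unshrunk while the pole at zero crushes noise, the qualitative behaviour underlying the minimax results above.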

Abstract:
In this article, we investigate certain asymptotic optimality properties of a very broad class of one-group continuous shrinkage priors for simultaneous estimation and testing of a sparse normal mean vector. Asymptotic optimality of Bayes estimates and posterior concentration properties corresponding to the general class of one-group priors under consideration are studied where the data is assumed to be generated according to a multivariate normal distribution with a fixed unknown mean vector. Under the assumption that the number of non-zero means is known, we show that Bayes estimators arising out of this general class of shrinkage priors under study attain the minimax risk, up to some multiplicative constant, under the $l_2$ norm. In particular, it is shown that for the horseshoe-type priors such as the three parameter beta normal mixtures with parameters $a=0.5, b>0$ and the generalized double Pareto prior with shape parameter $\alpha=1$, the corresponding Bayes estimates become asymptotically minimax. Moreover, posterior distributions arising out of this general class of one-group priors are shown to contract around the true mean vector at the minimax $l_2$ rate for a wide range of values of the global shrinkage parameter depending on the proportion of non-zero components of the underlying mean vector. A remarkable consequence of one key result used in proving the aforesaid minimaxity is that, within the asymptotic framework of Bogdan et al. (2011), the natural thresholding rules due to Carvalho et al. (2010) based on the horseshoe-type priors asymptotically attain the optimal Bayes risk w.r.t. a $0-1$ loss, up to the correct multiplicative constant, and are thus asymptotically Bayes optimal under sparsity (ABOS).
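The thresholding rule of Carvalho et al. (2010) mentioned above flags a mean as a signal when the posterior shrinkage weight $E[1-\kappa \mid y]$ exceeds $1/2$. A sketch via numerical integration over the local scale on a uniform grid, assuming unit error variance (the grid size and names are illustrative):

```python
import numpy as np

def shrinkage_weight(y, tau=1.0):
    """Numerical approximation of E[1 - kappa | y] under the horseshoe prior:
    y ~ N(theta, 1), theta | lambda ~ N(0, tau^2 lambda^2), lambda ~ C+(0, 1).
    On a uniform grid the spacing cancels in the ratio of Riemann sums."""
    lam = np.linspace(1e-6, 1000.0, 200_000)      # quadrature grid for lambda
    prior = 2.0 / (np.pi * (1.0 + lam ** 2))      # half-Cauchy density
    v = 1.0 + (tau * lam) ** 2                    # marginal variance of y given lambda
    lik = np.exp(-0.5 * y ** 2 / v) / np.sqrt(v)  # prop. to N(y; 0, v)
    s = (tau * lam) ** 2 / v                      # shrinkage factor 1 - kappa
    return float(np.sum(s * lik * prior) / np.sum(lik * prior))

def horseshoe_reject(y, tau=1.0):
    """Thresholding rule of Carvalho et al. (2010): declare a signal
    when the posterior shrinkage weight exceeds 1/2."""
    return shrinkage_weight(y, tau) > 0.5
```

It is this induced $0$-$1$ decision rule whose risk is compared to the Bayes Oracle in the ABOS results above.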

Abstract:
Recent results concerning asymptotic Bayes-optimality under sparsity (ABOS) of multiple testing procedures are extended to fairly generally distributed effect sizes under the alternative. An asymptotic framework is considered where both the number of tests m and the sample size n go to infinity, while the fraction p of true alternatives converges to zero. It is shown that under mild restrictions on the loss function, nontrivial asymptotic inference is possible only if n increases to infinity at least at the rate of log m. Under this assumption, precise conditions are given under which the Bonferroni correction with nominal Family Wise Error Rate (FWER) level alpha and the Benjamini-Hochberg procedure (BH) at FDR level alpha are asymptotically optimal. When n is proportional to log m, alpha can remain fixed, whereas when n increases to infinity at a faster rate, alpha has to converge to zero roughly like n^(-1/2). Under these conditions the Bonferroni correction is ABOS in case of extreme sparsity, while BH adapts well to the unknown level of sparsity. In the second part of this article these optimality results are carried over to model selection in the context of multiple regression with orthogonal regressors. Several modifications of the Bayesian Information Criterion are considered, controlling either FWER or FDR, and conditions are provided under which these selection criteria are ABOS. Finally, the performance of these criteria is examined in a brief simulation study.

Abstract:
Within a Bayesian decision theoretic framework we investigate some asymptotic optimality properties of a large class of multiple testing rules. A parametric setup is considered, in which observations come from a normal scale mixture model and the total loss is assumed to be the sum of losses for individual tests. Our model can be used for testing point null hypotheses, as well as to distinguish large signals from a multitude of very small effects. A rule is defined to be asymptotically Bayes optimal under sparsity (ABOS), if within our chosen asymptotic framework the ratio of its Bayes risk and that of the Bayes oracle (a rule which minimizes the Bayes risk) converges to one. Our main interest is in the asymptotic scheme where the proportion p of "true" alternatives converges to zero. We fully characterize the class of fixed threshold multiple testing rules which are ABOS, and hence derive conditions for the asymptotic optimality of rules controlling the Bayesian False Discovery Rate (BFDR). We finally provide conditions under which the popular Benjamini-Hochberg (BH) and Bonferroni procedures are ABOS and show that for a wide class of sparsity levels, the threshold of the former can be approximated by a nonrandom threshold.
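The two classical procedures analyzed in the abstracts above can be stated compactly. A standard sketch, not specific to these papers:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha):
    """Benjamini-Hochberg step-up procedure at FDR level alpha:
    reject the hypotheses with the k smallest p-values, where k is the
    largest index with p_(k) <= k * alpha / m (p_(k) the sorted p-values)."""
    m = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))   # largest index passing the bound
        reject[order[: k + 1]] = True
    return reject

def bonferroni(pvals, alpha):
    """Bonferroni correction at FWER level alpha."""
    return pvals < alpha / len(pvals)
```

The data-dependent BH threshold (the p-value at the largest passing index) is the random threshold that, per the abstract, can be approximated by a nonrandom one for a wide class of sparsity levels.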

Abstract:
Consider the problem of simultaneous testing for the means of independent normal observations. In this paper, we study some asymptotic optimality properties of certain multiple testing rules induced by a general class of one-group shrinkage priors in a Bayesian decision theoretic framework, where the overall loss is taken as the number of misclassified hypotheses. We assume a two-groups normal mixture model for the data and consider the asymptotic framework adopted in Bogdan et al. (2011), who introduced the notion of asymptotic Bayes optimality under sparsity in the context of multiple testing. The general class of one-group priors under study is rich enough to include, among others, the families of three parameter beta and generalized double Pareto priors, and in particular the horseshoe, the normal-exponential-gamma and the Strawderman-Berger priors. We establish that within our chosen asymptotic framework, the multiple testing rules under study asymptotically attain the risk of the Bayes Oracle up to a multiplicative factor, with the constant in the risk close to the constant in the Oracle risk. This is similar to a result obtained in Datta and Ghosh (2013) for the multiple testing rule based on the horseshoe estimator introduced in Carvalho et al. (2009, 2010). We further show that under a very mild assumption on the underlying sparsity parameter, the induced decision rules based on an empirical Bayes estimate of the corresponding global shrinkage parameter proposed by van der Pas et al. (2014) attain the optimal Bayes risk up to the same multiplicative factor asymptotically. We provide a unifying argument applicable to the general class of priors under study. In the process, we settle a conjecture regarding an optimality property of the generalized double Pareto priors made in Datta and Ghosh (2013). Our work also shows that the result in Datta and Ghosh (2013) can be improved further.
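The empirical Bayes estimate of the global shrinkage parameter referred to above can be sketched as follows. This follows the spirit of the van der Pas et al. (2014) proposal of counting observations exceeding the universal threshold, floored at 1/n; treat the exact constants as assumptions of this sketch:

```python
import numpy as np

def empirical_bayes_tau(y):
    """Simple plug-in estimate of the global shrinkage parameter:
    the proportion of observations with |y_i| above the universal
    threshold sqrt(2 log n), floored at 1/n so it never degenerates to 0."""
    n = len(y)
    count = int(np.sum(np.abs(y) > np.sqrt(2.0 * np.log(n))))
    return max(count / n, 1.0 / n)
```

The estimated value is then plugged into the one-group posterior, and the induced thresholding rule inherits the ABOS-type optimality discussed above.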

Abstract:
Millimeter-wave frequencies are gaining importance for applications in solid state transmitters for radar, radiometry, and short-range communication systems. The high-power pulsed IMPATT diode has been proven to be well suited for these applications. The most commonly used mm-wave IMPATT oscillator is a reduced-height waveguide circuit cross-coupled with a coaxial line. The mounting parasitics at millimeter-wave frequencies usually limit the output power and efficiency of these kinds of oscillators. In the present paper, modeling, simulation and optimization of a quarter-wave step transformer section for a W-band reduced-height IMPATT oscillator are carried out using the High Frequency Structure Simulator (HFSS). A simple method of designing and optimizing a tapered impedance transformer section is also investigated using HFSS.
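The quarter-wave step transformer mentioned above follows the standard design relations: characteristic impedance Z_t = sqrt(Z_s * Z_l) and electrical length of a quarter guide wavelength at the design frequency. A hedged sketch of the first-cut hand calculation, using the TEM-line approximation (it ignores waveguide dispersion in the reduced-height section, which the HFSS optimization in the paper accounts for); all numerical values are illustrative:

```python
import math

def quarter_wave_transformer(z_source, z_load, f_ghz, eps_r=1.0):
    """First-cut quarter-wave transformer design:
    Z_t = sqrt(Z_s * Z_l), physical length = (lambda in medium) / 4.
    TEM approximation; real waveguide sections need the guide wavelength."""
    z_t = math.sqrt(z_source * z_load)
    c = 299_792_458.0                                    # speed of light, m/s
    wavelength = c / (f_ghz * 1e9) / math.sqrt(eps_r)    # wavelength in the filling medium
    return z_t, wavelength / 4.0

# e.g. matching a hypothetical low diode impedance to a 50-ohm line at 94 GHz (W-band)
z_t, length_m = quarter_wave_transformer(5.0, 50.0, 94.0)
```

Such a closed-form starting point is then refined in a full-wave solver like HFSS, where the step discontinuity and mounting parasitics are captured.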