Abstract:
In this paper, we present a fundamental framework for the evaluation problem, under which an evaluation operator satisfying certain axioms is linear. Based on the dynamic linear evaluation mechanism for contingent claims, we study this evaluation rule in a market driven by fractional Brownian motions and derive a dynamic capital asset pricing model. The derivation relies mainly on the fractional Girsanov theorem and the Clark-Haussmann-Ocone theorem.

Abstract:
This paper considers a mean-field type stochastic control problem in which the dynamics are governed by a forward-backward stochastic differential equation (FBSDE) driven by Lévy processes, and the information available to the controller may be less than the full information. All system coefficients and the objective performance functional are allowed to be random, and possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system, in which the adjoint process is expressed explicitly.

Abstract:
An efficient algorithm is developed to construct disconnectivity graphs by a random walk over basins of attraction. This algorithm can detect a large number of local minima, find energy barriers between them, and estimate local thermal averages over each basin of attraction. It is applied to the Sherrington-Kirkpatrick (SK) spin glass Hamiltonian, where existing methods have difficulties even for a moderate number of spins. Finite-size results are used to make predictions in the thermodynamic limit that match theoretical approximations and recent findings on the free energy landscapes of SK spin glasses.
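As a rough illustration of the objects involved (not the paper's algorithm), the sketch below builds an SK Hamiltonian with Gaussian couplings and maps a random spin configuration to the local minimum of its basin of attraction by greedy single-spin flips; the system size and seed are arbitrary toy choices:

```python
import numpy as np

def sk_energy(s, J):
    """Energy of the SK Hamiltonian H(s) = -sum_{i<j} J_ij s_i s_j
    for a spin configuration s in {-1, +1}^N (J symmetric, zero diagonal)."""
    return -0.5 * s @ J @ s

def greedy_descent(s, J):
    """Map a configuration to the local minimum of its basin of
    attraction by repeatedly flipping the spin that lowers the energy most."""
    s = s.copy()
    while True:
        # flipping spin i changes the energy by dE_i = 2 * s_i * sum_j J_ij s_j
        dE = 2 * s * (J @ s)
        i = np.argmin(dE)
        if dE[i] >= 0:          # no flip lowers the energy: a local minimum
            return s
        s[i] = -s[i]

rng = np.random.default_rng(0)
N = 32
# symmetric Gaussian couplings with variance 1/N, zero diagonal
J = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
J = np.triu(J, 1)
J = J + J.T

s0 = rng.choice([-1, 1], size=N)
s_min = greedy_descent(s0, J)
```

Each flip strictly lowers the energy over a finite configuration space, so the descent terminates; the set of starting configurations reaching a given `s_min` is its basin of attraction.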

Abstract:
Let $\Sigma$ be a hyperbolic link with $m$ components in a 3-dimensional manifold $X$. In this paper, we show that the moduli space of marked hyperbolic cone structures on the pair $(X, \Sigma)$ with all cone angles less than $2\pi /3$ is an $m$-dimensional open cube, parameterized naturally by the $m$ cone angles. As a corollary, we give a proof of a special case of Thurston's geometrization theorem for orbifolds.

Abstract:
The problem of motif detection can be formulated as the construction of a discriminant function to separate sequences of a specific pattern from background. In computational biology, motif detection is used to predict DNA binding sites of a transcription factor (TF), mostly based on the weight matrix (WM) model or the Gibbs free energy (FE) model. However, despite the wide applications, theoretical analysis of these two models and their predictions is still lacking. We derive asymptotic error rates of prediction procedures based on these models under different data generation assumptions. This allows a theoretical comparison between the WM-based and the FE-based predictions in terms of asymptotic efficiency. Applications of the theoretical results are demonstrated with empirical studies on ChIP-seq data and protein binding microarray data. We find that, irrespective of the underlying data generation mechanism, the FE approach shows higher or comparable predictive power relative to the WM approach when the number of observed binding sites used to construct the discriminant function is not too small.
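To make the WM model concrete: it scores a candidate site by the log-likelihood ratio of the site under a position weight matrix against a background model, and classifies by thresholding the score. A minimal sketch with a hypothetical 3-position matrix whose values are made up purely for illustration:

```python
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def wm_score(seq, wm, background=0.25):
    """Log-odds score of a candidate site under the weight-matrix model:
    sum over positions j of log(wm[j][base] / background)."""
    return sum(np.log(wm[j][BASES[b]] / background) for j, b in enumerate(seq))

# toy 3-position weight matrix (rows: positions, columns: A, C, G, T);
# the preferred site is "AGT"
wm = np.array([[0.7, 0.1, 0.1, 0.1],
               [0.1, 0.1, 0.7, 0.1],
               [0.1, 0.1, 0.1, 0.7]])

# a motif-like site scores higher than a background-like site
motif_like = wm_score("AGT", wm)
background_like = wm_score("CCA", wm)
```

A real WM would be estimated from observed binding sites; the paper's analysis concerns the error rates of exactly this kind of thresholded discriminant.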

Abstract:
Regularized linear regression under the $\ell_1$ penalty, such as the Lasso, has been shown to be effective in variable selection and sparse modeling. The sampling distribution of an $\ell_1$-penalized estimator $\hat{\beta}$ is hard to determine as the estimator is defined by an optimization problem that in general can only be solved numerically and many of its components may be exactly zero. Let $S$ be the subgradient of the $\ell_1$ norm of the coefficient vector $\beta$ evaluated at $\hat{\beta}$. We find that the joint sampling distribution of $\hat{\beta}$ and $S$, together called an augmented estimator, is much more tractable and has a closed-form density under a normal error distribution in both low-dimensional ($p\leq n$) and high-dimensional ($p>n$) settings. Given $\beta$ and the error variance $\sigma^2$, one may employ standard Monte Carlo methods, such as Markov chain Monte Carlo and importance sampling, to draw samples from the distribution of the augmented estimator and calculate expectations with respect to the sampling distribution of $\hat{\beta}$. We develop a few concrete Monte Carlo algorithms and demonstrate with numerical examples that our approach may offer huge advantages and great flexibility in studying sampling distributions in $\ell_1$-penalized linear regression. We also establish nonasymptotic bounds on the difference between the true sampling distribution of $\hat{\beta}$ and its estimator obtained by plugging in estimated parameters, which justifies the validity of Monte Carlo simulation from an estimated sampling distribution even when $p\gg n\to \infty$.
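The simplest way to see what sampling the augmented estimator delivers is direct Monte Carlo under known $\beta$ and $\sigma^2$: repeatedly simulate the normal error, solve the Lasso, and read off $(\hat{\beta}, S)$ from the KKT conditions. The sketch below does this with a plain coordinate-descent solver and toy parameters chosen for illustration; it shows the objects involved, not the paper's MCMC or importance sampling algorithms:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for (1/(2n))||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y.copy()                       # residual y - X b
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]        # partial residual excluding coordinate j
            z = X[:, j] @ r / n
            b[j] = soft_threshold(z, lam) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

rng = np.random.default_rng(1)
n, p, lam, sigma = 100, 10, 0.1, 0.5
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:2] = 1.0

# direct Monte Carlo from the sampling distribution of (beta_hat, S)
draws = []
for _ in range(50):
    y = X @ beta + sigma * rng.normal(size=n)
    b = lasso_cd(X, y, lam)
    S = X.T @ (y - X @ b) / (n * lam)  # subgradient from the KKT conditions
    draws.append((b, S))
# on the active set S equals sign(beta_hat); elsewhere |S| <= 1
```

Averaging any function of `b` over `draws` estimates an expectation with respect to the sampling distribution of $\hat{\beta}$; the paper's closed-form density makes more efficient samplers possible.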

Abstract:
When a posterior distribution has multiple modes, unconditional expectations, such as the posterior mean, may not offer informative summaries of the distribution. Motivated by this problem, we propose to decompose the sample space of a multimodal distribution into domains of attraction of local modes. Domain-based representations are defined to summarize the probability masses of and conditional expectations on domains of attraction, which are much more informative than the mean and other unconditional expectations. A computational method, the multi-domain sampler, is developed to construct domain-based representations for an arbitrary multimodal distribution. The multi-domain sampler is applied to structural learning of protein-signaling networks from high-throughput single-cell data, where a signaling network is modeled as a causal Bayesian network. Not only does our method provide a detailed landscape of the posterior distribution, but it also improves the accuracy and predictive power of the estimated networks.
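A toy version of the domain decomposition (not the multi-domain sampler itself): for a bimodal one-dimensional mixture, each sample is assigned to a domain of attraction by gradient ascent on the log density, after which the probability mass of, and the conditional mean on, each domain can be estimated. All mixture parameters below are hypothetical:

```python
import numpy as np

def log_density(x):
    """Log of a bimodal mixture: 0.4*N(-2, 0.5^2) + 0.6*N(3, 1^2)."""
    c1 = 0.4 / (0.5 * np.sqrt(2 * np.pi)) * np.exp(-(x + 2) ** 2 / (2 * 0.25))
    c2 = 0.6 / (1.0 * np.sqrt(2 * np.pi)) * np.exp(-(x - 3) ** 2 / 2)
    return np.log(c1 + c2)

def attract(xs, step=0.01, n_steps=2000, h=1e-4):
    """Follow numerical gradient ascent on the log density; each point
    converges to the local mode whose domain of attraction contains it."""
    x = xs.copy()
    for _ in range(n_steps):
        grad = (log_density(x + h) - log_density(x - h)) / (2 * h)
        x = x + step * grad
    return x

rng = np.random.default_rng(2)
# draw from the mixture
comp = rng.random(2000) < 0.4
xs = np.where(comp, rng.normal(-2, 0.5, 2000), rng.normal(3, 1.0, 2000))

modes = attract(xs)
in_left = modes < 0            # samples attracted to the mode near -2

# domain-based representation: mass of and conditional mean on each domain
mass_left = in_left.mean()
mean_left = xs[in_left].mean()
mean_right = xs[~in_left].mean()
```

The unconditional mean (about 1 here) describes neither mode, while the pair of domain masses and conditional means summarizes the distribution faithfully.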

Abstract:
Quantifying the uncertainty in a penalized estimator under group sparsity, such as the group Lasso, is an important, yet still open, question. We establish, under a high-dimensional scaling, the consistency of an estimated sampling distribution for the group Lasso, assuming a normal error model and mild conditions on the design matrix and the true coefficients. Consequently, simulation from the estimated sampling distribution provides a valid and convenient means of constructing interval estimates for both individual coefficients and potentially large groups of coefficients. The results are further generalized to other group norm penalties and sub-Gaussian errors.
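The group-wise selection behavior at the heart of the group Lasso can be seen in its proximal operator, which shrinks each group's coefficient block toward zero and zeroes an entire group at once when the block's norm falls below the penalty level. A self-contained sketch with toy values (not the paper's estimator or simulation procedure):

```python
import numpy as np

def group_soft_threshold(z, groups, lam):
    """Proximal operator of lam * sum_g ||b_g||_2: shrink each group's
    block by factor (1 - lam/||z_g||)_+, zeroing whole groups."""
    b = np.zeros_like(z)
    for g in groups:
        norm = np.linalg.norm(z[g])
        if norm > lam:
            b[g] = (1.0 - lam / norm) * z[g]
    return b

z = np.array([3.0, 4.0, 0.3, 0.4])
groups = [np.array([0, 1]), np.array([2, 3])]
b = group_soft_threshold(z, groups, 1.0)
# the first group (norm 5) is shrunk; the second (norm 0.5) is zeroed
```

This group-wise zeroing is why interval estimates are naturally sought for whole groups of coefficients as well as for individual ones.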

Abstract:
Task-based learning (TBL) is a student-centered, teacher-guided, and task-driven teaching approach. This study investigated the effects of TBL in chemistry experiment teaching on promoting high school students’ critical thinking skills in Xi’an, China. A pre-test/post-test experimental design with an experimental group and a control group was employed. Students in the experimental group were taught with TBL, while students in the control group were taught with lecture-based methods. Five chemistry experiments were selected, and 119 students aged 17-19 voluntarily participated in the research, which lasted one semester. The California Critical Thinking Skills Test (CCTST) was used as the data collection tool. Results showed a significant difference (p<0.05) in the analyticity dimension for the experimental group after TBL, while there were no significant differences in the total score or in the evaluation and inference dimensions of the CCTST. The findings offer chemistry teachers an effective way to improve students’ analyticity skills in critical thinking.