Abstract:
It is known that quantum computers yield a speed-up for certain discrete problems. Here we want to know whether quantum computers are useful for continuous problems. We study the computation of the integral of functions from the classical Hölder classes with d variables. The optimal orders for the complexity of deterministic and (general) randomized methods are known. We obtain the respective optimal orders for quantum algorithms and also for restricted Monte Carlo (only coin tossing instead of general random numbers). To summarize the results, one can say that (1) there is an exponential speed-up of quantum algorithms over deterministic (classical) algorithms, if the smoothness is small; (2) there is a (roughly) quadratic speed-up of quantum algorithms over randomized classical methods, if the smoothness is small.

Abstract:
This is a survey (21 pages, 124 references) written for the MCQMC 2014 conference in Leuven, April 2014. We start with the seminal paper of Bakhvalov (1959) and end with new results on the curse of dimension and on the complexity of oscillatory integrals. Some small errors of earlier versions are corrected.

Abstract:
We present a lower error bound for approximating linear multivariate operators defined over Hilbert spaces in terms of the error bounds for appropriately constructed linear functionals, as long as algorithms use function values. Furthermore, some of these linear functionals have the same norm as the linear operators. We then apply this error bound to linear (unweighted) tensor products. In this way we use negative tractability results known for linear functionals to conclude the same negative results for linear operators. In particular, we prove that $L_2$-multivariate approximation defined for the standard Sobolev space suffers the curse of dimensionality if function values are used, although the curse is not present if linear functionals are allowed.

Abstract:
We consider the computation of the mean of sequences in the quantum model of computation. We determine the query complexity in the case of sequences which satisfy a $p$-summability condition for $1\le p<2$. This settles a problem left open in Heinrich (2001).

Abstract:
We study cubature formulas for d-dimensional integrals with an arbitrary symmetric weight function of tensor product form. We present a construction that yields a high polynomial exactness: for fixed degree l=5 or l=7 and large dimension, the number of knots is only slightly larger than the lower bound of Möller and much smaller than in the known constructions. We also show, for any odd degree l=2k+1, that the minimal number of points is almost independent of the weight function. This is also true for the integration over the (Euclidean) sphere.
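The notion of polynomial exactness can be illustrated with a plain tensor-product construction, which the abstract's construction improves upon in point count. A minimal Python sketch (not the paper's rule, and all names illustrative): the 3-point Gauss-Legendre rule, tensorized to d = 2 with 3^d knots, reproduces every monomial of degree at most 5 exactly.

```python
import itertools
import math

# 3-point Gauss-Legendre rule on [-1, 1]: exact for polynomials of degree <= 5.
nodes = [-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5)]
weights = [5 / 9, 8 / 9, 5 / 9]

def tensor_rule(f, d):
    """Tensor-product cubature over [-1, 1]^d using 3^d knots."""
    total = 0.0
    for idx in itertools.product(range(3), repeat=d):
        x = [nodes[i] for i in idx]
        w = math.prod(weights[i] for i in idx)
        total += w * f(x)
    return total

def exact_monomial(exps):
    """Exact integral of prod_i x_i^{e_i} over [-1, 1]^d (weight 1)."""
    val = 1.0
    for e in exps:
        val *= 0.0 if e % 2 else 2.0 / (e + 1)
    return val

# Verify exactness for all monomials of total degree <= 5 in d = 2.
d = 2
ok = all(
    abs(tensor_rule(lambda x, ex=exps: math.prod(xi ** e for xi, e in zip(x, ex)), d)
        - exact_monomial(exps)) < 1e-12
    for exps in itertools.product(range(6), repeat=d)
    if sum(exps) <= 5
)
```

The tensor product costs 3^d points; the point of the construction in the abstract is to achieve the same exactness degree with far fewer knots in high dimension.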

Abstract:
We study the integration of functions with respect to an unknown density. We compare the simple Monte Carlo method (which is almost optimal for a certain large class of inputs) with the Metropolis algorithm (based on a suitable ball walk). Using MCMC we prove (for certain classes of inputs) that adaptive methods are much better than nonadaptive ones. Indeed, the curse of dimension (for nonadaptive methods) can be broken by adaptation.
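As a rough illustration of the two estimators being compared (a sketch under simplifying assumptions, not the paper's precise setting or error analysis): a simple Monte Carlo ratio estimator for the mean of f with respect to an unnormalized density rho on the unit cube, next to a Metropolis chain with a symmetric local proposal (a cube-shaped move standing in for the ball walk). All function names are illustrative.

```python
import math
import random

def simple_mc(f, rho, d, n, rng):
    """Nonadaptive ratio estimator: E_rho[f] ~ (sum f*rho) / (sum rho)
    evaluated at n independent uniform points in [0, 1]^d."""
    num = den = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(d)]
        w = rho(x)
        num += w * f(x)
        den += w
    return num / den

def metropolis_ball_walk(f, rho, d, n, delta, rng):
    """Adaptive (Markov chain) estimator: Metropolis walk on [0, 1]^d
    with a symmetric local proposal of radius delta; moves leaving the
    cube are rejected, which preserves reversibility w.r.t. rho."""
    x = [0.5] * d
    rx = rho(x)
    total = 0.0
    for _ in range(n):
        y = [xi + rng.uniform(-delta, delta) for xi in x]
        if all(0.0 <= yi <= 1.0 for yi in y):
            ry = rho(y)
            if rng.random() < ry / rx:  # accept with prob min(1, ry/rx)
                x, rx = y, ry
        total += f(x)
    return total / n
```

For example, with d = 1, f(x) = x and rho(x) = exp(-x), both estimators approach the true mean (1 - 2/e)/(1 - 1/e) ≈ 0.418 as n grows.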

Abstract:
We present an algorithm for multivariate integration over cubes that is unbiased and has optimal order of convergence (in the randomized sense as well as in the worst case setting) for all Sobolev spaces $H^{r, mix}([0,1]^d)$ and $H^s([0,1]^d)$ for $s>d/2$.
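The algorithm in the abstract is more involved, but the basic mechanism behind unbiasedness can be sketched with a random shift: shifting any fixed point set by a common uniform vector modulo 1 makes each node uniformly distributed on the cube, so the equal-weight average is an unbiased estimate of the integral over $[0,1]^d$. A minimal Python sketch (the rank-1 lattice and its generator below are ad-hoc illustrative choices, not from the paper):

```python
import random

def shifted_rule(f, points, rng):
    """Equal-weight cubature with a common uniform shift (mod 1).
    Each shifted node is uniform on [0, 1]^d, so the estimator is
    unbiased for the integral of f over the unit cube."""
    d = len(points[0])
    shift = [rng.random() for _ in range(d)]
    vals = [f([(p[j] + shift[j]) % 1.0 for j in range(d)]) for p in points]
    return sum(vals) / len(vals)

# Illustrative fixed point set: a rank-1 lattice in d = 2 with an
# ad-hoc generator (1, 19) and n = 64 points.
n, d = 64, 2
lattice = [[(i * g / n) % 1.0 for g in (1, 19)] for i in range(n)]
```

Averaging independent shifted estimates of, say, f(x) = x_0 x_1 converges to the exact integral 1/4, reflecting the unbiasedness.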

Abstract:
We study the approximation of high-dimensional rank one tensors using point evaluations and consider deterministic as well as randomized algorithms. We prove that for certain parameters (smoothness and norm of the $r$th derivative) this problem is intractable, while for other parameters the problem is tractable and the complexity is only polynomial in the dimension for every fixed $\varepsilon>0$. For randomized algorithms we completely characterize the set of parameters that lead to easy or difficult problems, respectively. In the "difficult" case we modify the class to obtain a tractable problem: the problem becomes tractable, with complexity polynomial in the dimension, if the support of the function is not too small.

Abstract:
Markov chain Monte Carlo (MCMC) methods are a very versatile and widely used tool to compute integrals and expectations. In this short survey we focus on error bounds, rules for choosing the burn-in, high-dimensional problems, and tractability versus the curse of dimension.
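What "burn-in" means can be seen on a toy chain (not one of the survey's examples; all names are illustrative): discarding an initial segment removes the bias caused by a starting point far from stationarity.

```python
import random

def ar1_chain(n, start, a, rng):
    """Toy Markov chain x_{t+1} = a*x_t + N(0, 1).
    Its stationary distribution has mean 0 and variance 1/(1 - a^2)."""
    x, out = start, []
    for _ in range(n):
        x = a * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def mean_after_burn_in(chain, n0):
    """Discard the first n0 states (the burn-in), average the rest."""
    tail = chain[n0:]
    return sum(tail) / len(tail)
```

Started at 500 with a = 0.9, the chain forgets its starting point geometrically fast; a burn-in of a few hundred steps already yields an estimate of the stationary mean 0 that is far better than averaging the full trajectory.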