Abstract:
In this work we detail the application of a fast convolution algorithm for computing high-dimensional integrals to the context of multiplicative-noise stochastic processes. The algorithm provides a numerical solution to the problem of characterizing conditional probability density functions at arbitrary times, and we apply it successfully to quadratic and piecewise-linear diffusion processes. Their ability to reproduce statistical features of financial return time series, such as heavy tails and scaling properties, makes these processes appealing for option pricing. Since exact analytical results are unavailable, we exploit the fast convolution as a numerical alternative to Monte Carlo simulation in both the objective and the risk-neutral settings. In the numerical sections we document how the fast convolution outperforms Monte Carlo in both speed and efficiency.
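As an illustration of the underlying idea, here is a minimal sketch (not the paper's algorithm, which also handles state-dependent, multiplicative noise): for a constant-diffusion process the short-time transition kernel depends on $x-y$ alone, so one Chapman-Kolmogorov step is a plain convolution, computable in $O(M\log M)$ with the FFT. All grid and model parameters below are assumed for illustration.

```python
import numpy as np

# Chapman-Kolmogorov propagation of a transition density by FFT convolution.
# Minimal sketch for constant diffusion: the short-time Gaussian kernel
# depends only on x - y, so the propagation integral is a convolution.

M, L = 2048, 20.0               # grid points, domain half-width (assumed)
dx = 2 * L / M
x = -L + dx * np.arange(M)
dt, sigma, n_steps = 0.01, 1.0, 100

# Short-time Gaussian transition kernel k(x - y)
kernel = np.exp(-x**2 / (2 * sigma**2 * dt)) / np.sqrt(2 * np.pi * sigma**2 * dt)
kernel_hat = np.fft.fft(np.fft.ifftshift(kernel)) * dx   # center kernel at index 0

# Initial condition: narrow Gaussian density around x = 0
p = np.exp(-x**2 / (2 * 0.1**2)) / np.sqrt(2 * np.pi * 0.1**2)

for _ in range(n_steps):
    p = np.real(np.fft.ifft(np.fft.fft(p) * kernel_hat))

# Variance grows by sigma^2 * dt per step: expect ~ 0.1^2 + 1.0 = 1.01
var = np.sum(x**2 * p) * dx
print(var)
```

Each FFT step costs $O(M\log M)$ versus $O(M^2)$ for direct quadrature, which is the source of the speed advantage over Monte Carlo averaging.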

Abstract:
We investigate the level density for several ensembles of positive random matrices of Wishart-like structure, $W=XX^{\dagger}$, where $X$ stands for a non-Hermitian random matrix. In particular, making use of the Cauchy transform, we study free multiplicative powers of the Marchenko-Pastur (MP) distribution, ${\rm MP}^{\boxtimes s}$, which for integer $s$ yield Fuss-Catalan distributions corresponding to a product of $s$ independent square random matrices, $X=X_1\cdots X_s$. New formulae for the level densities are derived for $s=3$ and $s=1/3$. Moreover, the level density corresponding to the generalized Bures distribution, given by the free convolution of the arcsine and MP distributions, is obtained, and we explain the reason for such a curious convolution. The technique proposed here allows for the derivation of the level densities in several other cases.
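The Fuss-Catalan case with integer $s$ is easy to probe numerically: sampling a product of $s$ independent square Ginibre matrices and diagonalizing $W=XX^{\dagger}$ gives an empirical density close to ${\rm MP}^{\boxtimes s}$, whose support is $[0,(s+1)^{s+1}/s^s]$. A minimal sketch for $s=3$ (matrix size and seed are assumed for illustration):

```python
import numpy as np

# Monte Carlo check of the Fuss-Catalan law MP^{boxtimes 3}: eigenvalues of
# W = X X^dagger with X = X1 X2 X3 a product of s = 3 independent square
# Ginibre matrices (entries of variance 1/N). The limiting support is
# [0, (s+1)^{s+1}/s^s] = [0, 256/27] and the first moment is 1.

rng = np.random.default_rng(0)
N, s = 300, 3

def ginibre(N):
    return (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

X = ginibre(N)
for _ in range(s - 1):
    X = X @ ginibre(N)

W = X @ X.conj().T
eigs = np.linalg.eigvalsh(W)

edge = (s + 1) ** (s + 1) / s ** s   # right edge of the support, 256/27
print(eigs.max(), edge)
```

A histogram of `eigs` can then be compared against the closed-form $s=3$ density derived in the paper.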

Abstract:
Kernel random matrices have attracted a lot of interest in recent years, from both practical and theoretical standpoints. Most of the theoretical work so far has focused on the case where the data is sampled from a low-dimensional structure. Very recently, the first results concerning kernel random matrices with high-dimensional input data were obtained, in a setting where the data was sampled from a genuinely high-dimensional structure---similar to standard assumptions in random matrix theory. In this paper, we consider the case where the data is of the type "information${}+{}$noise." In other words, each observation is the sum of two independent elements: one sampled from a "low-dimensional" structure, the signal part of the data, the other being high-dimensional noise, normalized so that it does not overwhelm, but still affects, the signal. We consider two types of noise, spherical and elliptical. In the spherical setting, we show that the spectral properties of kernel random matrices can be understood from a new kernel matrix, computed only from the signal part of the data, but using (in general) a slightly different kernel. The Gaussian kernel has some special properties in this setting. The elliptical setting, which is important from a robustness standpoint, is less prone to easy interpretation.
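The "information${}+{}$noise" model with spherical noise can be set up in a few lines; the sketch below (signal curve, dimensions, and bandwidth are all assumed for illustration, and it demonstrates only the model, not the paper's theorems) compares the spectrum of the Gaussian kernel matrix of the noisy data with that of the signal-only kernel matrix:

```python
import numpy as np

# "Information + noise" data for a kernel random matrix: each observation is
# a point on a low-dimensional structure plus spherical noise, scaled by
# 1/sqrt(p) so its norm is O(1) -- it affects but does not overwhelm the signal.

rng = np.random.default_rng(1)
n, p = 200, 500                  # sample size, ambient (noise) dimension

# Signal: points on a circle embedded in R^p (an assumed example of a
# "low-dimensional" structure)
t = rng.uniform(0, 1, size=n)
signal = np.zeros((n, p))
signal[:, 0] = np.cos(2 * np.pi * t)
signal[:, 1] = np.sin(2 * np.pi * t)

noise = rng.standard_normal((n, p)) / np.sqrt(p)   # spherical noise
data = signal + noise

def gaussian_kernel_matrix(Y, h=1.0):
    # K_ij = exp(-||y_i - y_j||^2 / (2 h^2))
    sq = np.sum(Y**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * Y @ Y.T
    return np.exp(-np.clip(d2, 0, None) / (2 * h**2))

K_noisy = gaussian_kernel_matrix(data)
K_signal = gaussian_kernel_matrix(signal)

# Leading eigenvalues of the noisy kernel matrix track those of a kernel
# matrix built from the signal alone (here, up to a bandwidth-type rescaling)
ev_noisy = np.linalg.eigvalsh(K_noisy)[::-1][:5]
ev_signal = np.linalg.eigvalsh(K_signal)[::-1][:5]
print(ev_noisy)
print(ev_signal)
```

For the Gaussian kernel the noise contributes roughly a constant factor to off-diagonal entries (since $\|x_i-x_j\|^2\approx\|s_i-s_j\|^2+2$ here), which hints at why this kernel is special in the spherical setting.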

Abstract:
We show that the monotonic independence introduced by Muraki can also be used to define a multiplicative convolution. We also find a method for the calculation of this convolution based on an appropriate form of the Cauchy transform. We discuss infinite divisibility in the multiplicative monotonic context as well.

Abstract:
Different types of convolution operations involving large Vandermonde matrices are considered, paralleling those of large Gaussian matrices and of additive and multiplicative free convolution. First, additive and multiplicative convolutions of Vandermonde matrices with deterministic diagonal matrices are considered; after this, several cases of additive and multiplicative convolution of two independent Vandermonde matrices. It is also shown that any combination of Vandermonde matrices converges almost surely. We divide the considered convolutions into two types: those which depend on the phase distribution of the Vandermonde matrices, and those which depend only on the spectra of the matrices, and we present a general criterion for deciding which type applies to any given convolution. A simulation verifying the results is presented. Implementations of all considered convolutions are provided and discussed, together with the challenges in making these implementations efficient. The implementation is based on the technique of Fourier-Motzkin elimination and is quite general, as it can be applied to virtually any combination of Vandermonde matrices. Generalizations to related random matrices, such as Toeplitz and Hankel matrices, are also discussed.
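A minimal numerical sketch of the objects involved (dimensions and the uniform phase law are assumed): for an $N\times L$ Vandermonde matrix $V_{k,l}=e^{-i(k-1)\omega_l}/\sqrt{N}$ with i.i.d. uniform phases, a direct computation gives the first two limiting moments of $V^HV$ as $1$ and $1+c$, $c=L/N$, while higher moments depend on the full phase distribution, which is the dichotomy described above.

```python
import numpy as np

# Monte Carlo estimate of the first two moments of V^H V for N x L random
# Vandermonde matrices V_{k,l} = exp(-i (k-1) omega_l) / sqrt(N) with
# i.i.d. uniform phases omega_l. Diagonal entries of V^H V are exactly 1,
# and E|(V^H V)_{lm}|^2 = 1/N for l != m, so m1 = 1 and m2 -> 1 + c, c = L/N.

rng = np.random.default_rng(2)
N, L, trials = 400, 200, 50      # c = L/N = 0.5 (assumed sizes)
c = L / N

m1 = m2 = 0.0
for _ in range(trials):
    omega = rng.uniform(0, 2 * np.pi, size=L)
    k = np.arange(N)[:, None]
    V = np.exp(-1j * k * omega[None, :]) / np.sqrt(N)
    G = V.conj().T @ V                       # L x L Gram matrix
    m1 += np.trace(G).real / L / trials
    m2 += np.trace(G @ G).real / L / trials

print(m1, m2)    # ~1 and ~1 + c
```

Repeating the experiment with non-uniform phases changes the higher moments but not $m_1$, a small-scale version of the phase-distribution dependence studied in the paper.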

Abstract:
Using the combinatorics of non-crossing partitions, we construct a conditionally free analogue of Voiculescu's S-transform. The result is applied to the analytical description of the conditionally free multiplicative convolution and to the characterization of infinite divisibility.

Abstract:
We consider large Information-Plus-Noise type matrices of the form $M_N=(\sigma \frac{X_N}{\sqrt{N}}+A_N)(\sigma \frac{X_N}{\sqrt{N}}+A_N)^*$, where $X_N$ is an $n \times N$ ($n\leq N$) matrix with independent standardized complex entries, $A_N$ is an $n \times N$ nonrandom matrix and $\sigma>0$. As $N$ tends to infinity, if $n/N \rightarrow c\in ]0,1]$ and the empirical spectral measure of $A_N A_N^*$ converges weakly to some compactly supported probability distribution $\nu \neq \delta_0$, Dozier and Silverstein established that, almost surely, the empirical spectral measure of $M_N$ converges weakly towards a nonrandom distribution $\mu_{\sigma,\nu,c}$. Bai and Silverstein proved, under certain assumptions on the model, that for any closed interval in $]0,+\infty[$ outside the support of $\mu_{\sigma,\nu,c}$ satisfying some conditions involving $A_N$, almost surely, no eigenvalues of $M_N$ appear in this interval for all $N$ large. In this paper, we carry on with the study of the support of the limiting spectral measure, previously investigated by Dozier and Silverstein and later by Vallet, Loubaton and Mestre and by Loubaton and Vallet, and we show that, under almost the same assumptions as Bai and Silverstein, there is an exact separation phenomenon between the spectrum of $M_N$ and the spectrum of $A_NA_N^*$: to each gap in the spectrum of $M_N$ pointed out by Bai and Silverstein there corresponds a gap in the spectrum of $A_NA_N^*$ which splits the spectrum of $A_NA_N^*$ exactly as that of $M_N$. We use these results to characterize the outliers of spiked Information-Plus-Noise type models.
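The exact-separation phenomenon is easy to observe numerically. A minimal sketch under assumed parameters (two well-separated point masses in the spectrum of $A_NA_N^*$ and a small $\sigma$): the eigenvalues of $M_N$ split into two groups whose sizes match the split of the spectrum of $A_NA_N^*$.

```python
import numpy as np

# Information-plus-noise model M_N = (sigma X/sqrt(N) + A)(sigma X/sqrt(N) + A)^*.
# Illustration of exact separation: A A^* has n/2 eigenvalues at 1 and n/2
# at 9; for small sigma the spectrum of M_N shows the same 50/50 split
# across the gap. (All parameters are chosen for illustration only.)

rng = np.random.default_rng(3)
n, N, sigma = 100, 300, 0.3

# A_N with half its singular values at 1 and half at 3 -> A A^* clusters at 1 and 9
svals = np.concatenate([np.ones(n // 2), 3 * np.ones(n // 2)])
A = np.zeros((n, N))
A[np.arange(n), np.arange(n)] = svals

X = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
B = sigma * X / np.sqrt(N) + A
eigs = np.sort(np.linalg.eigvalsh(B @ B.conj().T))

# Count eigenvalues on each side of the midpoint of the gap of spec(A A^*)
below = int(np.sum(eigs < 5.0))
print(below, n - below)
```

With these parameters the two clusters of $M_N$ stay well inside disjoint intervals around $1$ and $9$, so no eigenvalue crosses the gap.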

Abstract:
We establish a large deviation principle for the empirical spectral measure of a sample covariance matrix with sub-Gaussian entries, which extends Bordenave and Caputo's result for Wigner matrices having the same type of entries [7]. To this end, we establish an asymptotic freeness result for the rectangular free convolution; more precisely, we give a bound on the subordination formula for information-plus-noise matrices.

Abstract:
This paper investigates homomorphisms \`a la Bercovici-Pata between additive and multiplicative convolutions. We also consider their matricial versions, which are associated with measures on the space of Hermitian matrices and on the unitary group. The previous results, combined with a matricial model of Benaych-Georges and Cabanal-Duvillard, allow us to define and study the large-$N$ limit of a new matricial model on the unitary group for free multiplicative L\'evy processes.

Abstract:
We study memoryless, discrete-time matrix channels with additive white Gaussian noise and input power constraints, of the form $Y_i = \sum_j H_{ij} X_j + Z_i$, where $Y_i$, $X_j$ and $Z_i$ are complex, $i=1,\dots,m$, $j=1,\dots,n$, and $H$ is a complex $m\times n$ matrix with some degree of randomness in its entries. The additive Gaussian noise vector is assumed to have uncorrelated entries. Let $H$ be a full (non-sparse) matrix with pairwise correlations between matrix entries of the form $E[H_{ik} H^*_{jl}] = \frac{1}{n} C_{ij} D_{kl}$, where $C$, $D$ are positive definite Hermitian matrices. Simplifications arise in the limit of large matrix sizes (the so-called large-$N$ limit) which allow us to obtain several exact expressions relating to the channel capacity. We study the probability distribution of the quantity $f(H) = \log \det (I+P H^{\dagger}S H)$, where $S$ is nonnegative definite and Hermitian, with $\mathrm{Tr}\, S=n$. Note that the expectation $E[f(H)]$, maximised over $S$, gives the capacity of the above channel with an input power constraint in the case where $H$ is known at the receiver but not at the transmitter. For arbitrary $C$, $D$, exact expressions are obtained for the expectation and variance of $f(H)$ in the large matrix size limit. For $C=D=I$, where $I$ is the identity matrix, expressions are in addition obtained for the full moment generating function for arbitrary (finite) matrix size in the large signal-to-noise limit. Finally, we obtain the channel capacity in the case where the channel matrix is partly known and partly unknown, of the form $\alpha I+ \beta H$, with $\alpha,\beta$ known constants and the entries of $H$ i.i.d. Gaussian with variance $1/n$. Channels of the form described above are of interest for wireless transmission with multiple antennas and receivers.
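For the $C=D=I$ case, the distribution of $f(H)$ is easy to explore by Monte Carlo. The sketch below (values of $n$, $P$, the seed, and the choice $S=I$ are assumed for illustration) shows the concentration that makes the large-$N$ expectation and variance expressions useful: the mean of $f(H)$ grows linearly in $n$ while its variance stays $O(1)$.

```python
import numpy as np

# Monte Carlo sketch of f(H) = log det(I + P H^dagger S H) for C = D = I:
# H has i.i.d. complex Gaussian entries of variance 1/n, S = I, m = n.
# Illustrates that E[f] is O(n) while Var[f] is O(1) ("channel hardening").

rng = np.random.default_rng(4)
n = m = 64
P, trials = 10.0, 200

vals = np.empty(trials)
for t in range(trials):
    H = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * n)
    # slogdet is numerically safer than log(det(...)) for large matrices
    vals[t] = np.linalg.slogdet(np.eye(n) + P * H.conj().T @ H)[1]

print(vals.mean(), vals.var())   # mean ~ n * (per-antenna capacity), variance O(1)
```

Maximizing $E[f(H)]$ over admissible $S$ (here we simply fix $S=I$, which is optimal in the i.i.d. isotropic case) would give the channel capacity described above.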