Abstract:
We consider the problem of constructing a linear map from a Hilbert space $\mathcal{H}$ (possibly infinite dimensional) to $\mathbb{R}^m$ that satisfies a restricted isometry property (RIP) on an arbitrary signal model $\mathcal{S} \subset \mathcal{H}$. We present a generic framework that handles not only a large class of low-dimensional subsets but also both unstructured and structured linear maps. We provide a simple recipe to prove that a random linear map satisfies a general RIP on $\mathcal{S}$ with high probability. We also describe a generic technique to construct linear maps that satisfy the RIP. Finally, we detail how to use our results in several examples, which allow us to recover and extend many known compressive sampling results.
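The RIP described above can be probed empirically in the simplest finite-dimensional setting. The sketch below is illustrative only, not the paper's construction: it assumes a Gaussian random map and a model $\mathcal{S}$ of $s$-sparse vectors, and estimates the isometry constant on a finite sample of signals.

```python
import numpy as np

rng = np.random.default_rng(0)

def rip_distortion(A, signals):
    """Worst-case relative deviation of ||Ax|| from ||x|| over the given signals."""
    return max(abs(np.linalg.norm(A @ x) / np.linalg.norm(x) - 1.0) for x in signals)

n, m, s = 256, 80, 4                        # ambient dim, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)    # i.i.d. Gaussian map, variance 1/m

# Sample a few s-sparse test signals from the model S.
signals = []
for _ in range(200):
    x = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)
    x[support] = rng.normal(size=s)
    signals.append(x)

delta = rip_distortion(A, signals)  # empirical isometry constant on the sample
```

A small `delta` on a large sample is evidence, not proof, of a RIP; the paper's recipe is what turns concentration bounds like this into a guarantee over all of $\mathcal{S}$.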

Abstract:
Takens' Embedding Theorem asserts that when the states of a hidden dynamical system are confined to a low-dimensional attractor, complete information about the states can be preserved in the observed time-series output through the delay coordinate map. However, the conditions for the theorem to hold ignore the effects of noise, and time-series analysis in practice requires a careful empirical determination of the sampling time and the number of delays, resulting in a number of delay coordinates larger than the minimum prescribed by Takens' theorem. In this paper, we use tools and ideas from Compressed Sensing to provide a first theoretical justification for the choice of the number of delays in noisy conditions. In particular, we show that under certain conditions on the dynamical system, measurement function, number of delays, and sampling time, the delay-coordinate map can be a stable embedding of the dynamical system's attractor.
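The delay coordinate map itself is simple to write down. The following minimal sketch (the sinusoidal observable and the parameter values are illustrative choices, not the paper's) maps a scalar time series to delay vectors with a given number of delays and sampling lag:

```python
import numpy as np

def delay_coordinate_map(y, num_delays, tau):
    """Map a scalar time series y to delay vectors
    v_t = (y_t, y_{t-tau}, ..., y_{t-(num_delays-1)*tau})."""
    start = (num_delays - 1) * tau
    return np.stack([
        [y[t - k * tau] for k in range(num_delays)]
        for t in range(start, len(y))
    ])

# Toy example: observe one coordinate of a harmonic oscillator.
t = np.linspace(0, 20, 2000)
y = np.sin(t)
V = delay_coordinate_map(y, num_delays=3, tau=50)  # one delay vector per row
```

In practice `num_delays` and `tau` are exactly the quantities the abstract says must be chosen carefully; the paper's contribution is conditions under which such a map is a stable embedding despite noise.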

Abstract:
The fields of compressed sensing (CS) and matrix completion have shown that high-dimensional signals with sparse or low-rank structure can be effectively projected into a low-dimensional space (for efficient acquisition or processing) when the projection operator achieves a stable embedding of the data by satisfying the Restricted Isometry Property (RIP). It has also been shown that such stable embeddings can be achieved for general Riemannian submanifolds when random orthoprojectors are used for dimensionality reduction. Due to computational costs and system constraints, the CS community has recently explored the RIP for structured random matrices (e.g., random convolutions, localized measurements, deterministic constructions). The main contribution of this paper is to show that any matrix satisfying the RIP (i.e., providing a stable embedding for sparse signals) can be used to construct a stable embedding for manifold-modeled signals by randomizing the column signs and paying reasonable additional factors in the number of measurements. We demonstrate this result with several new constructions for stable manifold embeddings using structured matrices. This result allows advances in efficient projection schemes for sparse signals to be immediately applied to manifold signal models.
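The core construction mentioned above, randomizing the column signs of an RIP matrix, amounts to right-multiplication by a random diagonal sign matrix $D$. A minimal sketch (using a Gaussian matrix as a stand-in for an arbitrary RIP matrix; in the paper's setting it could equally be a structured matrix such as a random convolution):

```python
import numpy as np

rng = np.random.default_rng(1)

def randomize_column_signs(Phi, rng):
    """Flip each column of Phi by an independent random +/-1 sign,
    i.e. form Phi @ D with D a random diagonal sign matrix."""
    signs = rng.choice([-1.0, 1.0], size=Phi.shape[1])
    return Phi * signs  # broadcasting applies sign j to column j

m, n = 64, 512
Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # stand-in for any RIP matrix
Phi_D = randomize_column_signs(Phi, rng)
```

Note that sign flips leave every column norm (and the RIP constants) unchanged; the randomness they inject is what upgrades a sparse-signal embedding to a manifold embedding.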

Abstract:
In order to model entanglements of polymers in a confined region, we consider the linking numbers and writhes of cycles in random linear embeddings of complete graphs in a cube. Our main results are that for a random linear embedding of $K_n$ in a cube, the mean sum of squared linking numbers and the mean sum of squared writhes are of the order of $\Theta(n \cdot n!)$. We obtain a similar result for the mean sum of squared linking numbers in random linear embeddings of graphs on $n$ vertices in which any pair of vertices is connected by an edge with probability $p$. We also obtain experimental results about the distribution of linking numbers for random linear embeddings of these graphs. Finally, we estimate the probability of specific linking configurations occurring in random linear embeddings of the graphs $K_6$ and $K_{3,3,1}$.
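The linking number of two disjoint cycles in a linear embedding can be computed numerically via the Gauss linking integral. The sketch below is a generic illustration, not the paper's method: it discretizes the double integral over two closed polylines, here tested on a Hopf link, where the linking number is $\pm 1$.

```python
import numpy as np

def linking_number(curve1, curve2):
    """Approximate the Gauss linking integral
    (1/4pi) * sum_{i,j} (r_i - s_j) . (dr_i x ds_j) / |r_i - s_j|^3
    for two disjoint closed polylines given as (N, 3) arrays of points."""
    r, s = curve1, curve2
    dr = np.roll(r, -1, axis=0) - r          # segment vectors of curve 1
    ds = np.roll(s, -1, axis=0) - s          # segment vectors of curve 2
    total = 0.0
    for i in range(len(r)):
        diff = r[i] - s                      # vectors from every s_j to r_i
        dist3 = np.linalg.norm(diff, axis=1) ** 3
        cross = np.cross(dr[i], ds)          # dr_i x ds_j for all j
        total += np.sum(np.einsum('ij,ij->i', diff, cross) / dist3)
    return total / (4.0 * np.pi)

# Two circles forming a Hopf link: linking number +/-1.
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
c1 = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
c2 = np.stack([1 + np.cos(theta), np.zeros_like(theta), np.sin(theta)], axis=1)
lk = linking_number(c1, c2)
```

For a random linear embedding of $K_n$, the cycles would instead be polygons whose vertices are drawn uniformly from the cube, and the sum of squared linking numbers runs over all pairs of disjoint cycles.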

Abstract:
In a random graph with a spatial embedding, the probability of linking to a particular vertex $v$ decreases with distance, but the rate of decrease may depend on the particular vertex $v$, and on the direction in which the distance increases. In this article, we consider the question of when the embedding can be chosen to be uniform, so that the probability of a link between two vertices depends only on the distance between them. We give necessary and sufficient conditions for the existence of a uniform linear embedding (embedding into a one-dimensional space) for spatial random graphs where the link probability can attain only a finite number of values.

Abstract:
In this paper we study the behavior of motions in dynamical systems on a uniform space that are Poisson stable and distal. The main results are that the trajectories of such motions are closed, and that every almost periodic motion is P-stable and distal. In a complete uniform space, such a motion is periodic, Lagrange stable and recurrent.

Abstract:
Consider a random graph process where vertices are chosen from the interval $[0,1]$, and edges are chosen independently at random, but so that, for a given vertex $x$, the probability that there is an edge to a vertex $y$ decreases as the distance between $x$ and $y$ increases. We call this a random graph with a linear embedding. We define a new graph parameter $\Gamma^*$, which aims to measure the similarity of the graph to an instance of a random graph with a linear embedding. For a graph $G$, $\Gamma^*(G)=0$ if and only if $G$ is a unit interval graph, and thus a deterministic example of a graph with a linear embedding. We show that the behaviour of $\Gamma^*$ is consistent with the notion of convergence as defined in the theory of dense graph limits. In this theory, graph sequences converge to a symmetric, measurable function on $[0,1]^2$. We define an operator $\Gamma$ which applies to graph limits, and which assumes the value zero precisely for graph limits that have a linear embedding. We show that, if a graph sequence $\{ G_n\}$ converges to a function $w$, then $\{ \Gamma^*(G_n)\}$ converges as well. Moreover, there exists a function $w^*$ arbitrarily close to $w$ under the box distance, so that $\lim_{n\rightarrow \infty}\Gamma^*(G_n)$ is arbitrarily close to $\Gamma (w^*)$.
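The random graph process described at the start of this abstract can be sampled directly. A minimal sketch (the exponential link-probability function is an illustrative choice, not one prescribed by the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_graph_linear_embedding(n, link_prob, rng):
    """Sample a random graph whose vertices are uniform points in [0, 1];
    an edge {i, j} appears independently with probability
    link_prob(|x_i - x_j|), a decreasing function of distance."""
    x = np.sort(rng.uniform(size=n))  # vertex positions in [0, 1]
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.uniform() < link_prob(abs(x[i] - x[j])):
                adj[i, j] = adj[j, i] = True
    return x, adj

# Example: link probability decays exponentially with distance.
x, adj = random_graph_linear_embedding(100, lambda d: np.exp(-10 * d), rng)
```

The parameter $\Gamma^*$ then quantifies how close an arbitrary graph is to an instance produced by such a process, vanishing exactly on unit interval graphs.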

Abstract:
In this paper, we study the problem of approximately computing the product of two real matrices. In particular, we analyze a dimensionality-reduction-based approximation algorithm due to Sarlos [1], introducing the notion of nuclear rank as the ratio of the nuclear norm to the spectral norm. The presented bound has improved dependence on the approximation error (as compared to previous approaches), whereas the subspace onto which we project the input matrices has dimension proportional to the maximum of their nuclear ranks and is independent of the input dimensions. In addition, we provide an application of this result to linear low-dimensional embeddings. Namely, we show that any Euclidean point-set with bounded nuclear rank is amenable to projection onto a number of dimensions that is independent of the input dimensionality, while achieving additive error guarantees.
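The flavor of dimensionality-reduction-based matrix product approximation can be sketched as follows. This is an illustrative Gaussian-sketch version, with arbitrary toy dimensions, rather than the exact algorithm or bound analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def sketched_product(A, B, k, rng):
    """Approximate A @ B by projecting the shared dimension onto a random
    k-dimensional subspace: A @ B ~ (A @ S) @ (S.T @ B), with S Gaussian
    scaled so that E[S @ S.T] = I."""
    d = A.shape[1]
    S = rng.normal(size=(d, k)) / np.sqrt(k)
    return (A @ S) @ (S.T @ B)

# Inputs with low (nuclear) rank are approximated well even for modest k.
A = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 80))   # rank-3 A
B = rng.normal(size=(80, 3)) @ rng.normal(size=(3, 40))   # rank-3 B
approx = sketched_product(A, B, k=400, rng=rng)
rel_err = np.linalg.norm(approx - A @ B) / np.linalg.norm(A @ B)
```

The point of the nuclear-rank analysis is that the required sketch dimension scales with this spectral quantity of the inputs rather than with their ambient dimensions.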

Abstract:
Low dimensional representations of words allow accurate NLP models to be trained on limited annotated data. While most representations ignore words' local context, a natural way to induce context-dependent representations is to perform inference in a probabilistic latent-variable sequence model. Given the recent success of continuous vector space word representations, we provide such an inference procedure for continuous states, where words' representations are given by the posterior mean of a linear dynamical system. Here, efficient inference can be performed using Kalman filtering. Our learning algorithm is extremely scalable, operating on simple cooccurrence counts for both parameter initialization using the method of moments and subsequent iterations of EM. In our experiments, we employ our inferred word embeddings as features in standard tagging tasks, obtaining significant accuracy improvements. Finally, the Kalman filter updates can be seen as a linear recurrent neural network. We demonstrate that using the parameters of our model to initialize a non-linear recurrent neural network language model reduces its training time by a day and yields lower perplexity.
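The inference procedure named above is the standard Kalman filter for a linear dynamical system. The sketch below shows the generic predict/update recursion whose filtered means play the role of the context-dependent representations; the tiny system and its parameter values are illustrative, not the paper's learned model.

```python
import numpy as np

def kalman_filter(A, C, Q, R, mu0, P0, observations):
    """Filtered state means of the LDS  h_t = A h_{t-1} + w_t,  y_t = C h_t + v_t,
    with w ~ N(0, Q) and v ~ N(0, R). Returns E[h_t | y_1..t] for each t."""
    mu, P = mu0, P0
    means = []
    for y in observations:
        # Predict.
        mu = A @ mu
        P = A @ P @ A.T + Q
        # Update with the Kalman gain.
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        mu = mu + K @ (y - C @ mu)
        P = (np.eye(len(mu)) - K @ C) @ P
        means.append(mu.copy())
    return np.stack(means)

# Tiny 2-state, 1-observation example.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2); R = np.array([[0.1]])
ys = [np.array([1.0]), np.array([0.8]), np.array([0.6])]
means = kalman_filter(A, C, Q, R, np.zeros(2), np.eye(2), ys)
```

Unrolling this update is also what makes the linear-recurrent-network view of the filter, and hence the RNN initialization mentioned in the abstract, natural.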

Abstract:
In an earlier paper, we showed that a large class of fast recursive matrix multiplication algorithms is stable in a normwise sense, and that in fact if multiplication of $n$-by-$n$ matrices can be done by any algorithm in $O(n^{\omega + \eta})$ operations for any $\eta > 0$, then it can be done stably in $O(n^{\omega + \eta})$ operations for any $\eta > 0$. Here we extend this result to show that essentially all standard linear algebra operations, including LU decomposition, QR decomposition, linear equation solving, matrix inversion, solving least squares problems, (generalized) eigenvalue problems and the singular value decomposition can also be done stably (in a normwise sense) in $O(n^{\omega + \eta})$ operations.
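As a concrete instance of the fast recursive algorithms in this class, here is a minimal sketch of Strassen's method, which uses seven recursive multiplications per split instead of eight (restricted, for simplicity, to square matrices whose size is a power of two):

```python
import numpy as np

def strassen(A, B):
    """Strassen's recursive matrix multiplication for square matrices
    whose size is a power of two."""
    n = A.shape[0]
    if n <= 2:  # fall back to ordinary multiplication on small blocks
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 + M3 - M2 + M6]])

rng = np.random.default_rng(4)
A = rng.normal(size=(8, 8)); B = rng.normal(size=(8, 8))
C = strassen(A, B)  # agrees with A @ B up to roundoff
```

The paper's point is that normwise stability of such recursive schemes carries over to the full suite of linear algebra operations built on top of them.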