Abstract:
For a $3$-tensor of dimensions $I_1\times I_2\times I_3$, we show that the nuclear norm of each of its matrix flattenings is a lower bound for the tensor nuclear norm, which in turn is bounded above by $\sqrt{\min\{I_i : i\neq j\}}$ times the nuclear norm of the mode-$j$ matrix flattening for every $j=1,2,3$. These results generalize to $N$-tensors for any $N\geq 3$. Both the lower and the upper bounds for the tensor nuclear norm are sharp in the case $N=3$. A computable criterion for the lower bound to be tight is given as well.
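As a quick numerical illustration (our own sketch, not part of the paper), the lower bound can be checked on a rank-one tensor, where the tensor nuclear norm is known exactly: for $\mathcal T=\mathbf a\otimes\mathbf b\otimes\mathbf c$ with unit vectors $\mathbf a,\mathbf b,\mathbf c$, every flattening is a rank-one matrix with nuclear norm $1$, matching the tensor nuclear norm, so the bound is tight. The helper name `flattening_nuclear_norms` is ours.

```python
import numpy as np

def flattening_nuclear_norms(T):
    """Nuclear norms of the mode-1, mode-2, mode-3 matrix flattenings of a 3-tensor."""
    return [np.linalg.norm(np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1), 'nuc')
            for mode in range(3)]

rng = np.random.default_rng(0)
# Three random unit vectors of dimension 4.
a, b, c = (v / np.linalg.norm(v) for v in rng.standard_normal((3, 4)))
T = np.einsum('i,j,k->ijk', a, b, c)   # rank-one tensor: its nuclear norm is exactly 1
norms = flattening_nuclear_norms(T)    # each flattening also has nuclear norm 1
```

Here the lower bound from every mode coincides with the tensor nuclear norm; for higher-rank tensors the flattening norms generally differ across modes, and the best (largest) of them gives the lower bound.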

Abstract:
In this paper, we show that the eigenvectors associated with the zero Laplacian and signless Laplacian eigenvalues of a $k$-uniform hypergraph are closely related to some configured components of that hypergraph. We show that the components of an eigenvector of the zero Laplacian or signless Laplacian eigenvalue have the same modulus. Moreover, under a {\em canonical} regularization, the phases of the components of these eigenvectors can only take the uniformly distributed values in $\{\exp(\frac{2j\pi\sqrt{-1}}{k})\;|\;j\in [k]\}$. These eigenvectors are divided into H-eigenvectors and N-eigenvectors. Eigenvectors with minimal support are called {\em minimal}. The minimal canonical H-eigenvectors characterize the even (odd)-bipartite connected components of the hypergraph and vice versa, and the minimal canonical N-eigenvectors characterize some multi-partite connected components of the hypergraph and vice versa.

Abstract:
We first show that the eigenvector of a tensor is well-defined. The difference between the eigenvectors of a tensor and its E-eigenvectors consists of the eigenvectors lying on the nonsingular projective variety $\mathbb S=\{\mathbf x\in\mathbb P^n\;|\;\sum\limits_{i=0}^nx_i^2=0\}$. We show that a generic tensor has no eigenvectors on $\mathbb S$; in fact, a generic tensor has no eigenvectors on any given proper nonsingular projective variety in $\mathbb P^n$. From these facts, we show that the coefficients of the E-characteristic polynomial are algebraically dependent; in fact, a certain power of the determinant of the tensor can be expressed through the coefficients other than the constant term. Hence, a nonsingular tensor always has an E-eigenvector. When a tensor $\mathcal T$ is nonsingular and symmetric, its E-eigenvectors are exactly the singular points of a class of hypersurfaces defined by $\mathcal T$ and a parameter. We give an explicit factorization of the discriminant of this class of hypersurfaces, which completes Cartwright and Sturmfels' formula. We show that the factorization contains the determinant and the E-characteristic polynomial of the tensor $\mathcal T$ as irreducible factors.

Abstract:
In 1907, Oskar Perron showed that a positive square matrix has a unique largest positive eigenvalue with a positive eigenvector. This result was extended to irreducible nonnegative matrices by Georg Frobenius in 1912, and more recently to irreducible nonnegative tensors and weakly irreducible nonnegative tensors. It is a fundamental result in matrix theory and has found wide applications in probability theory, internet search engines, spectral graph and hypergraph theory, etc. In this paper, we give a necessary and sufficient condition for the existence of such a positive eigenvector, i.e., a positive Perron vector, for a nonnegative tensor. We show that every nonnegative tensor has a canonical nonnegative partition form, from which we introduce strongly nonnegative tensors. A tensor is called strongly nonnegative if the spectral radius of each genuine weakly irreducible block equals the spectral radius of the tensor, which is strictly larger than the spectral radius of any other block. We prove that a nonnegative tensor has a positive Perron vector if and only if it is strongly nonnegative. The proof is nontrivial. Numerical results for finding a positive Perron vector are reported.
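For intuition (our own sketch, not the algorithm reported in the paper), the Perron pair of a positive order-3 tensor can be approximated by a power-type iteration in the spirit of the Ng-Qi-Zhou method, which is known to converge for positive tensors; a positive tensor is trivially strongly nonnegative, so a positive Perron vector exists. The name `perron_pair` is ours.

```python
import numpy as np

def perron_pair(T, tol=1e-12, max_iter=2000):
    """Power-type iteration for the Perron pair of a positive order-3 tensor T.

    Approximates the H-eigenpair T x^2 = lambda x^{[2]} (componentwise square)
    by repeatedly applying x -> (T x^2)^{[1/2]} and renormalizing on the simplex.
    Convergence is guaranteed for positive tensors.
    """
    n = T.shape[0]
    x = np.ones(n) / n
    for _ in range(max_iter):
        y = np.einsum('ijk,j,k->i', T, x, x)   # (T x^2)_i
        x_new = np.sqrt(y)                      # componentwise (m-1)-th root, here m-1 = 2
        x_new /= x_new.sum()
        if np.linalg.norm(x_new - x, 1) < tol:
            x = x_new
            break
        x = x_new
    ratios = np.einsum('ijk,j,k->i', T, x, x) / x**2   # all equal at a fixed point
    return ratios.mean(), x

rng = np.random.default_rng(1)
T = rng.uniform(0.5, 1.5, size=(3, 3, 3))   # positive, hence weakly irreducible
lam, x = perron_pair(T)                      # positive Perron vector and spectral radius
```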

Abstract:
A tensor $\mathcal T\in \mathbb T(\mathbb C^n,m+1)$, the space of tensors of order $m+1$ and dimension $n$ with complex entries, has $nm^{n-1}$ eigenvalues (counted with algebraic multiplicities). The inverse eigenvalue problem for tensors is a generalization of that for matrices: given a multiset $S\in \mathbb C^{nm^{n-1}}/\mathfrak{S}(nm^{n-1})$ of total multiplicity $nm^{n-1}$, is there a tensor $\mathcal T\in \mathbb T(\mathbb C^n,m+1)$ such that the multiset of eigenvalues of $\mathcal{T}$ is exactly $S$? The solvability of this inverse eigenvalue problem is studied in this paper. With tools from algebraic geometry, it is proved that the inverse problem is generically solvable if and only if $m=1,\ \text{or }n=2,\ \text{or }(n,m)=(3,2),\ (4,2),\ (3,3)$.
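The eigenvalue count can be sanity-checked symbolically in the smallest nontrivial case (our own illustration, assuming `sympy` is available): for $n=2$, $m=2$, the characteristic polynomial of an order-3 tensor, obtained as a resultant of the two eigenvalue equations, generically has degree $nm^{n-1}=2\cdot 2^{1}=4$ in $\lambda$.

```python
import sympy as sp

lam, x = sp.symbols('lam x')
# A concrete generic 2-dimensional order-3 tensor T (n = 2, m = 2).
T = [[[1, 2], [3, 5]], [[7, 2], [4, 9]]]
# Eigenvalue equations: sum_{j,k} T[i][j][k] x_j x_k = lam * x_i^2, i = 1, 2.
# Dehomogenize by setting x_2 = 1 (valid here since no eigenvector is lost at infinity).
f1 = T[0][0][0]*x**2 + (T[0][0][1] + T[0][1][0])*x + T[0][1][1] - lam*x**2
f2 = T[1][0][0]*x**2 + (T[1][0][1] + T[1][1][0])*x + T[1][1][1] - lam
psi = sp.resultant(f1, f2, x)      # characteristic polynomial of T in lam
degree = sp.degree(psi, lam)       # generically n * m^(n-1) = 4
```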

Abstract:
In this paper, we consider convergence properties of a second order Markov chain. Just as a column stochastic matrix is associated with a Markov chain, a so-called {\em transition probability tensor} $P$ of order 3 and dimension $n$ is associated with a second order Markov chain with $n$ states. For this $P$, define the map $F_P(x):=Px^{2}$ on the $(n-1)$-dimensional standard simplex $\Delta_n$. If 1 is not an eigenvalue of $\nabla F_P$ on $\Delta_n$ and $P$ is irreducible, then $F_P$ has a unique fixed point on $\Delta_n$. In particular, if every entry of $P$ is greater than $\frac{1}{2n}$, then 1 is not an eigenvalue of $\nabla F_P$ on $\Delta_n$. Under the latter condition, we further show that the second order power method for finding the unique fixed point of $F_P$ on $\Delta_n$ is globally linearly convergent and the corresponding second order Markov process is globally $R$-linearly convergent.

Abstract:
We study in this article multiplicities of eigenvalues of tensors. There are two natural multiplicities associated to an eigenvalue $\lambda$ of a tensor: the algebraic multiplicity $\operatorname{am}(\lambda)$ and the geometric multiplicity $\operatorname{gm}(\lambda)$. The former is the multiplicity of the eigenvalue as a root of the characteristic polynomial, and the latter is the dimension of the eigenvariety (i.e., the set of eigenvectors) corresponding to the eigenvalue. We show that the algebraic multiplicity can change along the orbit of tensors under the orthogonal linear group action, while the geometric multiplicity of the zero eigenvalue is invariant under this action; this is the main difficulty in studying their relationship. However, we show that for a generic tensor, every eigenvalue has a unique (up to scaling) eigenvector, and both the algebraic and the geometric multiplicity are one. In general, we suggest for an $m$-th order $n$-dimensional tensor the relationship \[ \operatorname{am}(\lambda)\geq \operatorname{gm}(\lambda)(m-1)^{\operatorname{gm}(\lambda)-1}. \] We show that it holds in several cases, especially when the eigenvariety contains a linear subspace of dimension $\operatorname{gm}(\lambda)$ in coordinate form. As both multiplicities are invariant under the orthogonal linear group action in the matrix counterpart, this generalizes the classical result for a matrix: the algebraic multiplicity is not smaller than the geometric multiplicity.
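In the matrix case $m=2$, the suggested inequality reduces to the classical $\operatorname{am}(\lambda)\geq\operatorname{gm}(\lambda)\cdot 1^{\operatorname{gm}(\lambda)-1}=\operatorname{gm}(\lambda)$, which is easy to verify numerically (our own illustration) on a defective matrix such as a Jordan block:

```python
import numpy as np

# Jordan block with eigenvalue 2: am(2) = 2 but gm(2) = 1.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0
eigvals = np.linalg.eigvals(A)
am = int(np.sum(np.isclose(eigvals, lam)))                     # multiplicity as a root of det(A - t I)
gm = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(2))   # dim of the eigenspace ker(A - lam I)
```

For a diagonalizable matrix the two multiplicities agree; the Jordan block shows the inequality can be strict.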

Abstract:
The talent policy in contemporary China has gone through several development periods, such as restoration and reestablishment, market transformation, and strategic propelling. Its constant evolution features the conflicts and contradictions between dependence on the old path of the planned system and the transition to a path suited to the socialist market economic system. Meanwhile, the talent policy in contemporary China is an outcome of the combination of induced and mandatory system changes. The changes in talent policy are inherently driven by talent problems, guided by the values of the policy community, and ultimately determined by the interest differentiation of the actors. The main problems in these changes lie in fragmentation due to a lack of systematization and continuity, so the top priority is to formulate an overall basic talent law.

Abstract:
In this paper, we show that the largest Laplacian H-eigenvalue of a $k$-uniform nontrivial hypergraph is strictly larger than the maximum degree when $k$ is even, and a tight lower bound for this eigenvalue is given. For a connected even-uniform hypergraph, this lower bound is achieved if and only if it is a hyperstar. However, when $k$ is odd, it happens that the largest Laplacian H-eigenvalue equals the maximum degree, which is then a tight lower bound. On the other hand, tight upper and lower bounds for the largest signless Laplacian H-eigenvalue of a $k$-uniform connected hypergraph are given; the upper (respectively lower) bound is achieved if and only if the hypergraph is a complete hypergraph (respectively a hyperstar). The largest Laplacian H-eigenvalue is always less than or equal to the largest signless Laplacian H-eigenvalue. When the hypergraph is connected, equality holds if and only if $k$ is even and the hypergraph is odd-bipartite.

Abstract:
Yuan's theorem of the alternative is an important theoretical tool in optimization, which provides a checkable certificate for the infeasibility of a strict inequality system involving two homogeneous quadratic functions. In this paper, we provide a tractable extension of Yuan's theorem of the alternative to the symmetric tensor setting. As an application, we establish that the optimal value of a class of nonconvex polynomial optimization problems with suitable sign structure (more explicitly, with essentially non-positive coefficients) can be computed by a related convex conic programming problem, and the optimal solution of these nonconvex polynomial optimization problems can be recovered from the corresponding solution of the convex conic programming problem. Moreover, we show that this class of nonconvex polynomial optimization problems enjoys an exact sum-of-squares relaxation and so can be solved via a single semidefinite programming problem.