%0 Journal Article
%T Approximations of Antieigenvalue and Antieigenvalue-Type Quantities
%A Morteza Seddighin
%J International Journal of Mathematics and Mathematical Sciences
%D 2012
%I Hindawi Publishing Corporation
%R 10.1155/2012/318214
%X We will extend the definition of the antieigenvalue of an operator to antieigenvalue-type quantities, in the first section of this paper, in such a way that the relations between antieigenvalue-type quantities and their corresponding Kantorovich-type inequalities are analogous to the relation between the antieigenvalue and the Kantorovich inequality. In the second section, we approximate several antieigenvalue-type quantities for arbitrary accretive operators. Each antieigenvalue-type quantity is approximated in terms of the same quantity for normal matrices. In particular, we show that for an arbitrary accretive operator, each antieigenvalue-type quantity is the limit of the same quantity for a sequence of finite-dimensional normal matrices.

1. Introduction

Since 1948, the Kantorovich and Kantorovich-type inequalities for positive bounded operators have had many applications in operator theory and other areas of the mathematical sciences, such as statistics. Let $T$ be a positive operator on a Hilbert space $H$ with $0 < mI \le T \le MI$; then the Kantorovich inequality asserts that
\[
  \langle Tf, f \rangle \, \langle T^{-1}f, f \rangle \le \frac{(m+M)^2}{4mM}
\]
for every unit vector $f$ (see [1]). When $m$ and $M$ are the smallest and the largest eigenvalues of $T$, respectively, it can be easily verified that
\[
  (\alpha m + \beta M)\left(\frac{\alpha}{m} + \frac{\beta}{M}\right) \le \frac{(m+M)^2}{4mM}
\]
for every pair of nonnegative numbers $\alpha$, $\beta$ with $\alpha + \beta = 1$. The expression $\frac{(m+M)^2}{4mM}$ is called the Kantorovich constant and is denoted by $K(T)$. Given an operator $T$ on a Hilbert space $H$, the antieigenvalue of $T$, denoted by $\mu_1(T)$, is defined by Gustafson (see [2–5]) to be
\[
  \mu_1(T) = \inf_{Tf \ne 0} \frac{\operatorname{Re}\langle Tf, f \rangle}{\|Tf\|\,\|f\|}. \tag{1.4}
\]
Definition (1.4) is equivalent to
\[
  \mu_1(T) = \inf_{\|f\| = 1,\; Tf \ne 0} \frac{\operatorname{Re}\langle Tf, f \rangle}{\|Tf\|}. \tag{1.5}
\]
A unit vector $f$ for which the infimum in (1.5) is attained is called an antieigenvector of $T$. For a positive operator $T$, we have
\[
  \mu_1(T) = \frac{2\sqrt{mM}}{m+M}.
\]
Thus, for a positive operator, both the Kantorovich constant and $\mu_1(T)$ are expressed in terms of the smallest and the largest eigenvalues. It turns out that the former can be obtained from the latter.

Matrix optimization problems analogous to (1.4), where the quantity to be optimized involves inner products and norms, frequently occur in statistics. For example, in the analysis of statistical efficiency one has to compute quantities such as those in (1.7)–(1.11), where $A$ is an $n \times n$ positive definite matrix and $X = (x_1, x_2, \ldots, x_k)$ with $k \le n$. Each $x_i$ is a column vector of size $n$, and $X'X = I$. Here, $I$ denotes the identity matrix, and $\det(B)$ stands for the determinant of a matrix $B$. Please see [6–12]. Notice that in the references just cited, the sup's of the reciprocals of the expressions involved in (1.8), (1.9), (1.10), and (1.11) are sought. Nevertheless, since the quantities involved are always positive, those sup's are obtained by taking the reciprocals of the inf's found in (1.8), (1.9), (1.10), and (1.11), while the optimizing vectors remain the same. Note that in (1.7) through (1.11), one …
%U http://www.hindawi.com/journals/ijmms/2012/318214/
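
To make the remark that the Kantorovich constant can be obtained from the first antieigenvalue concrete, the display below is a short worked check in the notation used above ($m$, $M$ the extreme eigenvalues, $K(T)$ the Kantorovich constant). It is a sketch based on the standard closed form for $\mu_1(T)$ of a positive operator, not a reproduction of the paper's own numbered display:

\[
  \mu_1(T) = \frac{2\sqrt{mM}}{m+M}
  \quad\Longrightarrow\quad
  \mu_1(T)^2 = \frac{4mM}{(m+M)^2} = \frac{1}{K(T)},
  \qquad\text{so}\qquad
  K(T) = \frac{1}{\mu_1(T)^2}.
\]
% Numerical check (m = 1, M = 4): mu_1 = 2*sqrt(4)/5 = 4/5, so 1/mu_1^2 = 25/16,
% which equals K(T) = (1 + 4)^2 / (4*1*4) = 25/16.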
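
For a numerical illustration of definition (1.4), the following Python sketch (not part of the paper; the test matrix, random seed, and search budget are arbitrary choices) estimates $\mu_1(T)$ for a small symmetric positive definite matrix by crude random search over unit directions and compares it with the closed form $2\sqrt{mM}/(m+M)$:

import numpy as np

# Sketch (not from the paper): numerically approximate the first antieigenvalue
#   mu_1(T) = inf_{||f|| = 1} <Tf, f> / ||Tf||
# of a small symmetric positive definite matrix T and compare it with the
# closed-form value 2*sqrt(m*M)/(m + M), where m and M are the extreme eigenvalues.

rng = np.random.default_rng(0)         # arbitrary seed, for reproducibility
B = rng.standard_normal((4, 4))
T = B @ B.T + 4.0 * np.eye(4)          # an arbitrary positive definite test matrix

def antieigen_ratio(f, T):
    """The antieigenvalue functional <Tf, f> / (||Tf|| ||f||) for a real vector f."""
    Tf = T @ f
    return float(f @ Tf) / (np.linalg.norm(Tf) * np.linalg.norm(f))

# Crude random search over directions; good enough for a sanity check.
estimate = min(antieigen_ratio(rng.standard_normal(4), T) for _ in range(100_000))

m, M = np.linalg.eigvalsh(T)[[0, -1]]  # smallest and largest eigenvalues of T
closed_form = 2.0 * np.sqrt(m * M) / (m + M)

print("random-search estimate of mu_1(T):", round(estimate, 4))
print("closed form 2*sqrt(mM)/(m+M):     ", round(closed_form, 4))

Since the random search only samples unit directions, its value is an upper estimate of the infimum and should sit slightly above the closed-form number.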