%0 Journal Article
%T Nonnegative Matrix Factorizations Performing Object Detection and Localization
%A G. Casalino
%A N. Del Buono
%A M. Minervini
%J Applied Computational Intelligence and Soft Computing
%D 2012
%I Hindawi Publishing Corporation
%R 10.1155/2012/781987
%X We study the problem of detecting and localizing objects in still gray-scale images, making use of the part-based representation provided by nonnegative matrix factorizations. Nonnegative matrix factorization is an emerging subspace method that is able to extract interpretable parts from a set of template image objects and then use them additively to describe individual objects. In this paper, we present a prototype system based on several nonnegative factorization algorithms, which differ in the additional properties imposed on the nonnegative representation of the data, in order to investigate whether any additional constraint produces better results in general object detection via nonnegative matrix factorizations. 1. Introduction The notion of low-dimensional approximation has played a fundamental role in effectively and efficiently processing and conceptualizing the huge amounts of data stored in large sparse matrices. In particular, subspace techniques such as Singular Value Decomposition [1], Principal Component Analysis (PCA) [2], and Independent Component Analysis [3] represent a class of linear algebra methods widely adopted to analyze high-dimensional datasets and discover latent structures by projecting them onto a low-dimensional space. Generally, a subspace method learns a set of basis vectors from a set of suitable data templates. These vectors span a subspace that captures the essential structure of the input data. Once the subspace has been found (during the off-line learning phase), the detection of a new sample can be accomplished (in the so-called on-line detection phase) by projecting it onto the subspace and finding its nearest neighbor among the templates projected onto this subspace. These methods have found efficient applications in several areas of information retrieval, computer vision, and pattern recognition, especially in the fields of face identification [4, 5], recognition of digits and characters [6, 7], and molecular pattern discovery [8, 9]. However, the pertinent information stored in many data matrices is often nonnegative (examples are pixels in images, the probability of a particular topic appearing in a linguistic document, the amount of pollutant emitted by a factory, and so on [10–15]). During the analysis process, taking this nonnegativity constraint into account can bring benefits in terms of interpretability and visualization of large-scale data, while more closely preserving physical feasibility. Nevertheless, classical subspace methods describe data as a
%U http://www.hindawi.com/journals/acisc/2012/781987/
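
As a rough illustration of the off-line learning / on-line detection pipeline sketched in the %X field (learn a nonnegative part-based basis from template images, then project a new image onto that subspace and take the nearest projected template), the following Python snippet is a minimal sketch, not the authors' implementation: the use of scikit-learn's NMF, the number of components, the image size, and the random data standing in for templates are all illustrative assumptions.

# Minimal sketch of the pipeline described above, under assumed parameters.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Templates: each row is a flattened nonnegative gray-scale image
# (19x19 pixels here is an arbitrary, hypothetical choice).
templates = rng.random((100, 19 * 19))

# Off-line learning phase: templates ~= W @ H, where the rows of H act as
# the part-based basis and the rows of W encode each template in the subspace.
model = NMF(n_components=25, init="nndsvda", max_iter=500, random_state=0)
W_templates = model.fit_transform(templates)

# On-line detection phase: encode a new image in the same subspace and
# report the nearest template encoding (nearest-neighbor matching).
new_image = rng.random((1, 19 * 19))
w_new = model.transform(new_image)

distances = np.linalg.norm(W_templates - w_new, axis=1)
nearest_template = int(np.argmin(distances))
print("nearest template index:", nearest_template,
      "distance:", distances[nearest_template])

In a real system the templates would be actual object images and the detector would scan sub-windows of a test image, but the two-phase structure (factorize once off-line, project and match on-line) is the part the abstract describes.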