%0 Journal Article
%T Grounded Semantic Composition for Visual Scenes
%A P. Gorniak
%A D. Roy
%J Journal of Artificial Intelligence Research
%V 21
%P 429-470
%D 2004
%I AI Access Foundation
%R 10.1613/jair.1327
%X We present a visually-grounded language understanding model based on a study of how people verbally describe objects in scenes. The emphasis of the model is on the combination of individual word meanings to produce meanings for complex referring expressions. The model has been implemented, and it is able to understand a broad range of spatial referring expressions. We describe our implementation of word-level visually-grounded semantics and their embedding in a compositional parsing framework. The implemented system selects the correct referents in response to natural language expressions for a large percentage of test cases. In an analysis of the system's successes and failures we reveal how visual context influences the semantics of utterances and propose future extensions to the model that take such context into account.
%U http://arxiv.org/abs/1107.0031v1