Search Results: 1 - 10 of 273496 matches for "Heinrich H. Bülthoff"
All listed articles are free for downloading (OA Articles)
Verbal Shadowing and Visual Interference in Spatial Memory
Tobias Meilinger, Heinrich H. Bülthoff
PLOS ONE, 2013, DOI: 10.1371/journal.pone.0074177
Abstract: Spatial memory is thought to be organized along experienced views and allocentric reference axes. Memory access from different perspectives typically yields V-patterns for egocentric encoding (performance declines monotonically with increasing angular deviation from the experienced perspectives) and W-patterns for axes encoding (better performance along parallel and orthogonal perspectives than along oblique perspectives). We showed that learning an object array with a verbal secondary task reduced W-patterns compared with learning without verbal shadowing. This suggests that axes encoding happened in a verbal format; for example, by rows and columns. Alternatively, general cognitive load from the secondary task prevented memorizing relative to a spatial axis. Independent of encoding, pointing with a surrounding room visible yielded stronger W-patterns compared with pointing with no room visible. This suggests that the visible room geometry interfered with the memorized room geometry. With verbal shadowing and without visual interference, only V-patterns remained; otherwise, V- and W-patterns were combined. Verbal encoding and visual interference explain when W-patterns can be expected alongside V-patterns and thus can help in resolving different performance patterns in a wide range of experiments.
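The two pattern shapes named in this abstract can be made concrete with a small illustration. The sketch below generates idealized performance-cost curves as a function of angular deviation from the learned view; the functional forms and numbers are assumptions chosen purely for illustration, not taken from the study.

```python
# Illustrative only: idealized cost curves for the two patterns described in the
# abstract, as a function of angular deviation from the learned view.
# The exact functional forms are assumptions for visualization, not from the paper.
import numpy as np

angles = np.arange(0, 181, 45)                       # deviation from learned view, deg

# V-pattern (egocentric encoding): cost rises monotonically with deviation.
v_pattern = angles / 180.0

# W-pattern (axis encoding): parallel/orthogonal views (0, 90, 180 deg) are easier
# than oblique views (45, 135 deg), so cost peaks at oblique angles.
w_pattern = np.abs(np.sin(np.deg2rad(2 * angles)))

for a, v, w in zip(angles, v_pattern, w_pattern):
    print(f"{a:3d} deg  V-cost {v:.2f}  W-cost {w:.2f}")
```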
Learned Non-Rigid Object Motion is a View-Invariant Cue to Recognizing Novel Objects
Lewis L. Chuang, Heinrich H. Bülthoff
Frontiers in Computational Neuroscience, 2012, DOI: 10.3389/fncom.2012.00026
Abstract: There is evidence that observers use learned object motion to recognize objects. For instance, studies have shown that reversing the learned direction in which a rigid object rotated in depth impaired recognition accuracy. This motion reversal can be achieved by playing animation sequences of moving objects in reverse frame order. In the current study, we used this sequence-reversal manipulation to investigate whether observers encode the motion of dynamic objects in visual memory, and whether such dynamic representations are encoded in a way that is dependent on the viewing conditions. Participants first learned dynamic novel objects, presented as animation sequences. Following learning, they were then tested on their ability to recognize these learned objects when their animation sequence was shown in the same sequence order as during learning or in the reverse sequence order. In Experiment 1, we found that non-rigid motion contributed to recognition performance; that is, sequence-reversal decreased sensitivity across different tasks. In subsequent experiments, we tested the recognition of non-rigidly deforming (Experiment 2) and rigidly rotating (Experiment 3) objects across novel viewpoints. Recognition performance was affected by viewpoint changes for both experiments. Learned non-rigid motion continued to contribute to recognition performance and this benefit was the same across all viewpoint changes. By comparison, learned rigid motion did not contribute to recognition performance. These results suggest that non-rigid motion provides a source of information for recognizing dynamic objects, which is not affected by changes to viewpoint.
Talk to the Virtual Hands: Self-Animated Avatars Improve Communication in Head-Mounted Display Virtual Environments
Trevor J. Dodds, Betty J. Mohler, Heinrich H. Bülthoff
PLOS ONE, 2011, DOI: 10.1371/journal.pone.0025759
Abstract: Background: When we talk to one another face-to-face, body gestures accompany our speech. Motion tracking technology enables us to include body gestures in avatar-mediated communication, by mapping one's movements onto one's own 3D avatar in real time, so the avatar is self-animated. We conducted two experiments to investigate (a) whether head-mounted display virtual reality is useful for researching the influence of body gestures in communication; and (b) whether body gestures are used to help in communicating the meaning of a word. Participants worked in pairs and played a communication game, where one person had to describe the meanings of words to the other. Principal Findings: In experiment 1, participants used significantly more hand gestures and successfully described significantly more words when nonverbal communication was available to both participants (i.e. both describing and guessing avatars were self-animated, compared with both avatars in a static neutral pose). Participants ‘passed’ (gave up describing) significantly more words when they were talking to a static avatar (no nonverbal feedback available). In experiment 2, participants' performance was significantly worse when they were talking to an avatar with a prerecorded listening animation, compared with an avatar animated by their partners' real movements. In both experiments participants used significantly more hand gestures when they played the game in the real world. Conclusions: Taken together, the studies show how (a) virtual reality can be used to systematically study the influence of body gestures; (b) it is important that nonverbal communication is bidirectional (real nonverbal feedback in addition to nonverbal communication from the describing participant); and (c) there are differences in the amount of body gestures that participants use with and without the head-mounted display, and we discuss possible explanations for this and ideas for future investigation.
Imagined Self-Motion Differs from Perceived Self-Motion: Evidence from a Novel Continuous Pointing Method
Jennifer L. Campos, Joshua H. Siegle, Betty J. Mohler, Heinrich H. Bülthoff, Jack M. Loomis
PLOS ONE, 2009, DOI: 10.1371/journal.pone.0007793
Abstract: The extent to which actual movements and imagined movements maintain a shared internal representation has been a matter of much scientific debate. Of the studies examining such questions, few have directly compared actual full-body movements to imagined movements through space. Here we used a novel continuous pointing method to a) provide a more detailed characterization of self-motion perception during actual walking and b) compare the pattern of responding during actual walking to that which occurs during imagined walking.
Second-Order Relational Manipulations Affect Both Humans and Monkeys
Christoph D. Dahl, Nikos K. Logothetis, Heinrich H. Bülthoff, Christian Wallraven
PLOS ONE, 2011, DOI: 10.1371/journal.pone.0025793
Abstract: Recognition and individuation of conspecifics by their face is essential for primate social cognition. This ability is driven by a mechanism that integrates the appearance of facial features with subtle variations in their configuration (i.e., second-order relational properties) into a holistic representation. So far, there is little evidence of whether our evolutionary ancestors show sensitivity to featural spatial relations and hence holistic processing of faces as shown in humans. Here, we directly compared macaques with humans in their sensitivity to configurally altered faces in upright and inverted orientations using a habituation paradigm and eye tracking technologies. In addition, we tested for differences in processing of conspecific faces (human faces for humans, macaque faces for macaques) and non-conspecific faces, addressing aspects of perceptual expertise. In both species, we found sensitivity to second-order relational properties for conspecific (expert) faces, when presented in upright, not in inverted, orientation. This shows that macaques possess the requirements for holistic processing, and thus show similar face processing to that of humans.
Looking for Discriminating Is Different from Looking for Looking’s Sake
Hans-Joachim Bieg, Jean-Pierre Bresciani, Heinrich H. Bülthoff, Lewis L. Chuang
PLOS ONE, 2012, DOI: 10.1371/journal.pone.0045445
Abstract: Recent studies provide evidence for task-specific influences on saccadic eye movements. For instance, saccades exhibit higher peak velocity when the task requires coordinating eye and hand movements. The current study shows that the need to process task-relevant visual information at the saccade endpoint can be, in itself, sufficient to cause such effects. In this study, participants performed a visual discrimination task which required a saccade for successful completion. We compared the characteristics of these task-related saccades to those of classical target-elicited saccades, which required participants to fixate a visual target without performing a discrimination task. The results show that task-related saccades are faster and initiated earlier than target-elicited saccades. Differences between both saccade types are also noted in their saccade reaction time distributions and their main sequences, i.e., the relationship between saccade velocity, duration, and amplitude.
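For readers unfamiliar with the term, the "main sequence" mentioned in this abstract is the lawful relationship between saccade amplitude, duration, and peak velocity. The sketch below fits one commonly used descriptive form, a saturating exponential, to synthetic data; the model choice and all numbers are illustrative assumptions, not values or analyses from the study.

```python
# Illustrative only: the "main sequence" relates saccade amplitude to peak velocity
# (and duration). A common descriptive form is V_peak = V_max * (1 - exp(-A / C)).
# The data below are synthetic placeholders, not values from the study.
import numpy as np
from scipy.optimize import curve_fit

def main_sequence(amplitude_deg, v_max, c):
    """Saturating-exponential main-sequence model for peak velocity (deg/s)."""
    return v_max * (1.0 - np.exp(-amplitude_deg / c))

amplitudes = np.array([2, 5, 10, 15, 20, 25], dtype=float)          # deg
peak_velocities = np.array([90, 200, 330, 410, 450, 480], dtype=float)  # deg/s (synthetic)

(v_max, c), _ = curve_fit(main_sequence, amplitudes, peak_velocities, p0=(500.0, 8.0))
print(f"fitted V_max = {v_max:.0f} deg/s, C = {c:.1f} deg")
```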
Perceived Object Stability Depends on Multisensory Estimates of Gravity
Michael Barnett-Cowan, Roland W. Fleming, Manish Singh, Heinrich H. Bülthoff
PLOS ONE, 2011, DOI: 10.1371/journal.pone.0019289
Abstract: How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information.
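The stability criterion stated in this abstract (an object topples when the gravity-projected centre of mass falls outside its support area) can be expressed as a small geometric check. The sketch below is a minimal illustration under assumed geometry, not the stimuli or analysis used in the study: it projects a centre of mass along a gravity vector onto the support plane and tests whether the footprint lies inside the convex support polygon.

```python
# Illustrative sketch (not the paper's method): an object is statically stable
# when its centre of mass, projected along gravity onto the support plane,
# falls inside the convex support polygon.
import numpy as np

def projected_com(com, gravity, plane_z=0.0):
    """Project the centre of mass along the gravity direction onto the
    horizontal support plane z = plane_z; returns the (x, y) footprint point."""
    g = np.asarray(gravity, dtype=float)
    g = g / np.linalg.norm(g)
    t = (plane_z - com[2]) / g[2]          # distance along g to reach the plane
    p = np.asarray(com, dtype=float) + t * g
    return p[:2]

def inside_convex_polygon(point, vertices):
    """True if a 2D point lies inside a convex polygon given in counter-clockwise order."""
    x, y = point
    verts = np.asarray(vertices, dtype=float)
    for (x1, y1), (x2, y2) in zip(verts, np.roll(verts, -1, axis=0)):
        # The cross product must be non-negative for every edge (CCW winding).
        if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0:
            return False
    return True

# A tall box with its COM at (0.1, 0.0, 0.5) m standing on a 0.3 x 0.3 m base.
base = [(-0.15, -0.15), (0.15, -0.15), (0.15, 0.15), (-0.15, 0.15)]
com = (0.1, 0.0, 0.5)
# Veridical (upright) gravity: footprint stays inside the base -> judged stable.
print(inside_convex_polygon(projected_com(com, gravity=(0, 0, -1)), base))    # True
# A gravity estimate tilted toward +x shifts the footprint outside -> judged unstable.
print(inside_convex_polygon(projected_com(com, gravity=(0.4, 0, -1)), base))  # False
```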
The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions
Kathrin Kaulard, Douglas W. Cunningham, Heinrich H. Bülthoff, Christian Wallraven
PLOS ONE, 2012, DOI: 10.1371/journal.pone.0032321
Abstract: The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on everyday scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions.
Attentional Networks and Biological Motion
Chandramouli Chandrasekaran, Lucy Turner, Heinrich H. Bülthoff, Ian M. Thornton
Psihologija, 2010
Abstract: Our ability to see meaningful actions when presented with point-light traces of human movement is commonly referred to as the perception of biological motion. While traditional explanations have emphasized the spontaneous and automatic nature of this ability, more recent findings suggest that attention may play a larger role than is typically assumed. In two studies we show that the speed and accuracy of responding to point-light stimuli are highly correlated with the ability to control selective attention. In our first experiment we measured thresholds for determining the walking direction of a masked point-light figure, and performance on a range of attention-related tasks in the same set of observers. Mask-density thresholds for the direction discrimination task varied quite considerably from observer to observer, and this variation was highly correlated with performance on both Stroop and flanker interference tasks. Other components of attention, such as orienting, alerting and visual search efficiency, showed no such relationship. In a second experiment, we examined the relationship between the ability to determine the orientation of unmasked point-light actions and Stroop interference, again finding a strong correlation. Our results are consistent with previous research suggesting that biological motion processing may require attention, and specifically implicate networks of attention related to executive control and selection.
Putting Actions in Context: Visual Action Adaptation Aftereffects Are Modulated by Social Contexts
Stephan de la Rosa, Stephan Streuber, Martin Giese, Heinrich H. Bülthoff, Cristóbal Curio
PLOS ONE, 2014, DOI: 10.1371/journal.pone.0086502
Abstract: The social context in which an action is embedded provides important information for the interpretation of an action. Is this social context integrated during the visual recognition of an action? We used a behavioural visual adaptation paradigm to address this question and measured participants’ perceptual bias of a test action after they were adapted to one of two adaptors (adaptation after-effect). The action adaptation after-effect was measured for the same set of adaptors in two different social contexts. Our results indicate that the size of the adaptation effect varied with social context (social context modulation) although the physical appearance of the adaptors remained unchanged. Three additional experiments provided evidence that the observed social context modulation of the adaptation effect is due to the adaptation of visual action recognition processes. We found that adaptation is critical for the social context modulation (experiment 2). Moreover, the effect is not mediated by emotional content of the action alone (experiment 3), and visual information about the action seems to be critical for the emergence of action adaptation effects (experiment 4). Taken together, these results suggest that processes underlying visual action recognition are sensitive to the social context of an action.