Search Results: 1 - 10 of 100 matches for " "
All listed articles are free for downloading (OA Articles)
Experimental evidence for recursion in prosody
Hunyadi, László
Argumentum , 2008,
Abstract: It is widely assumed that mismatches between syntactic and prosodic structure arise mainly because they represent two principally different kinds of structure: whereas syntactic structure has indefinite depth generated by recursion, prosodic structure is flatter owing to its lack of this generative power. This article argues that in some essential respects prosody is also recursive; namely, it is based on recursive grouping in the form of recursive embedding, expressed by such prosodic principles as inherent grouping and tonal continuity. The article presents these principles, supported by a series of production and perception experiments.
ERP evidence for the recognition of emotional prosody through simulated cochlear implant strategies  [cached]
Agrawal Deepashri,Timm Lydia,Viola Filipa,Debener Stefan
BMC Neuroscience , 2012, DOI: 10.1186/1471-2202-13-113
Abstract: Background: Emotionally salient information in spoken language can be conveyed by variations in speech melody (prosody) or by emotional semantics. Emotional prosody is essential for conveying feelings through speech. In sensorineural hearing loss, impaired speech perception can be improved by cochlear implants (CIs). The aim of this study was to investigate the performance of normal-hearing (NH) participants on the perception of emotional prosody with vocoded stimuli. Semantically neutral sentences with emotional (happy, angry, and neutral) prosody were used. Sentences were manipulated to simulate two CI speech-coding strategies: the Advanced Combination Encoder (ACE) and the newly developed Psychoacoustic Advanced Combination Encoder (PACE). Twenty NH adults were asked to recognize emotional prosody from ACE and PACE simulations. Performance was assessed using behavioral tests and event-related potentials (ERPs). Results: Behavioral data revealed superior performance with original stimuli compared to the simulations. For simulations, better recognition was observed for happy and angry prosody than for neutral prosody. Irrespective of simulated or unsimulated stimulus type, a significantly larger P200 event-related potential was observed after sentence onset for happy prosody than for the other two emotions. Further, the P200 amplitude was significantly more positive for the PACE strategy than for the ACE strategy. Conclusions: The results suggest the P200 peak as an indicator of active differentiation and recognition of emotional prosody. The larger P200 peak amplitude for happy prosody indicates the importance of fundamental frequency (F0) cues in prosody processing. The advantage of PACE over ACE highlights a privileged role of the psychoacoustic masking model in improving prosody perception. Taken together, the study emphasizes the importance of vocoded simulation for better understanding the prosodic cues which CI users may be utilizing.
Emotional Speech Processing at the Intersection of Prosody and Semantics  [PDF]
Rachel Schwartz, Marc D. Pell
PLOS ONE , 2012, DOI: 10.1371/journal.pone.0047279
Abstract: The ability to accurately perceive emotions is crucial for effective social interaction. Many questions remain regarding how different sources of emotional cues in speech (e.g., prosody, semantic information) are processed during emotional communication. Using a cross-modal emotional priming paradigm (Facial affect decision task), we compared the relative contributions of processing utterances with single-channel (prosody-only) versus multi-channel (prosody and semantic) cues on the perception of happy, sad, and angry emotional expressions. Our data show that emotional speech cues produce robust congruency effects on decisions about an emotionally related face target, although no processing advantage occurred when prime stimuli contained multi-channel as opposed to single-channel speech cues. Our data suggest that utterances with prosodic cues alone and utterances with combined prosody and semantic cues both activate knowledge that leads to emotional congruency (priming) effects, but that the convergence of these two information sources does not always heighten access to this knowledge during emotional speech processing.
Sex Differences in Facial, Prosodic, and Social Context Emotional Recognition in Early-Onset Schizophrenia  [PDF]
Julieta Ramos-Loyo,Leonor Mora-Reynoso,Luis Miguel Sánchez-Loyo,Virginia Medina-Hernández
Schizophrenia Research and Treatment , 2012, DOI: 10.1155/2012/584725
Abstract: The purpose of the present study was to determine sex differences in facial, prosodic, and social-context emotional recognition in schizophrenia (SCH). Thirty-eight patients (SCH, 20 females) and 38 healthy controls (CON, 20 females) participated in the study. Clinical scales (BPRS and PANSS) and an Affective States Scale were applied, as well as tasks evaluating emotional recognition from faces, from prosody, and within a social context. SCH showed lower accuracy and longer response times than CON, but no significant sex differences were observed in either facial or prosodic recognition. For social-context emotions, however, females showed higher empathy than males with respect to happiness in both groups. SCH reported identifying more with sad films than CON, and females more with fear than males. The results of this study confirm the deficits in emotional recognition in male and female patients with schizophrenia compared to healthy subjects. Sex differences were detected in relation to social-context emotions and, depending on age, in facial and prosodic recognition.
Use of Prosody and Information Structure in High Functioning Adults with Autism in Relation to Language Ability  [PDF]
Anne-Marie R. DePape,Aoju Chen,Laurel J. Trainor
Frontiers in Psychology , 2012, DOI: 10.3389/fpsyg.2012.00072
Abstract: Abnormal prosody is a striking feature of the speech of those with Autism spectrum disorder (ASD), but previous reports suggest large variability among those with ASD. Here we show that part of this heterogeneity can be explained by level of language functioning. We recorded semi-spontaneous but controlled conversations in adults with and without ASD and measured features related to pitch and duration to determine (1) general use of prosodic features, (2) prosodic use in relation to marking information structure, specifically, the emphasis of new information in a sentence (focus) as opposed to information already given in the conversational context (topic), and (3) the relation between prosodic use and level of language functioning. We found that, compared to typical adults, those with ASD with high language functioning generally used a larger pitch range than controls but did not mark information structure, whereas those with moderate language functioning generally used a smaller pitch range than controls but marked information structure appropriately to a large extent. Both impaired general prosodic use and impaired marking of information structure would be expected to seriously impact social communication and thereby lead to increased difficulty in personal domains, such as making and keeping friendships, and in professional domains, such as competing for employment opportunities.
Cognitive grouping and recursion in prosody
Hunyadi, László
Argumentum , 2008,
Abstract: This article contributes to the debate on the structure of the narrow faculty of language (FLN) as suggested in Hauser, Chomsky, and Fitch (2002), especially to the issue of whether linguistic structures beyond syntax can be recursive. It argues that (a) speech prosody displays significant cues of recursion in the form of tonal and pausal grouping, and (b) recursion found in prosody is the manifestation of a more general computational mechanism. It introduces the principle of tonal continuity to account for the continuous tonal phrasing of discontinuous structures with nested embedding and suggests that what underlies this cognitive computational process is the bookmark effect. It shows that the computational difference between nested recursion and iteration correlates with their prosodic difference, whereas the computationally indistinguishable tail recursion and iteration are similar in their prosodic realization. Experiments are presented involving speech prosody, grouping in abstract prosodic patterns, and grouping in abstract visual patterns, demonstrating that recursive phrasing has similar properties across modalities. Their differences suggest that grouping in prosody has its cognitive basis in the grouping of less specific, more abstract, non-linguistic elements. It is concluded that recursion in prosody cannot be the effect of an interface relation between syntax and prosody; instead, it is the manifestation of a more general, more universal computational mechanism beyond linguistic structure.
Results of a pilot study on the involvement of bilateral inferior frontal gyri in emotional prosody perception: an rTMS study
Marjolijn Hoekert, Guy Vingerhoets, André Aleman
BMC Neuroscience , 2010, DOI: 10.1186/1471-2202-11-93
Abstract: Reaction times in the emotional prosody task condition were significantly longer after rTMS over both the right and the left inferior frontal gyrus as compared to sham stimulation, after controlling for learning effects associated with order of condition. When taking all emotions together, there was no difference in the effect on reaction times between right and left stimulation. For the emotion fear, reaction times were significantly longer after stimulating the left inferior frontal gyrus as compared to the right inferior frontal gyrus. Reaction times in the semantics task condition did not differ significantly between the three TMS conditions. The data indicate a critical involvement of both the right and the left inferior frontal gyrus in emotional prosody perception. The findings of this pilot study need replication. Future studies should include more subjects and examine whether the left and right inferior frontal gyrus play differential, complementary roles, e.g. in the integrated processing of linguistic and prosodic aspects of speech, respectively. In auditory language processing, distinct brain areas serve different aspects of language. Language has been attributed to the left hemisphere since Broca (1861) and Wernicke (1874), whose studies showed that articulate speech and verbal comprehension are disrupted by left but not right hemisphere lesions [1]. Emotional prosody, a paralinguistic feature of language, is characterized by intonation, loudness, and stress placement in speech. The emotional prosody of spoken language may convey crucial information about the emotional state of the speaker. Not only what is said but also how it is said gives significant information about the speaker's true communicative intent and is therefore crucial for proficient social interaction [2].
Studies examining the neural substrate of emotional prosody perception have revealed a network including bilateral regions in the superior and middle temporal gyri and orb…
Time Course of the Involvement of the Right Anterior Superior Temporal Gyrus and the Right Fronto-Parietal Operculum in Emotional Prosody Perception  [PDF]
Marjolijn Hoekert, Leonie Bais, René S. Kahn, André Aleman
PLOS ONE , 2008, DOI: 10.1371/journal.pone.0002244
Abstract: In verbal communication, not only the meaning of the words but also the tone of voice (prosody) conveys crucial information about the emotional state and intentions of others. In various studies, right frontal and right temporal regions have been found to play a role in emotional prosody perception. Here, we used triple-pulse repetitive transcranial magnetic stimulation (rTMS) to shed light on the precise time course of involvement of the right anterior superior temporal gyrus and the right fronto-parietal operculum. We hypothesized that information would be processed in the right anterior superior temporal gyrus before being processed in the right fronto-parietal operculum. Right-handed healthy subjects performed an emotional prosody task. While subjects listened to each sentence, a triplet of TMS pulses was applied to one of the regions at one of six time points (400–1900 ms). Results showed a significant main effect of Time for the right anterior superior temporal gyrus and the right fronto-parietal operculum. The largest interference was observed half-way through the sentence; this effect was stronger for withdrawal emotions than for the approach emotion. A further experiment, with the inclusion of an active control condition (TMS over the EEG site POz, the midline parietal-occipital junction), revealed stronger effects at the fronto-parietal operculum and anterior superior temporal gyrus relative to the active control condition. No evidence was found for sequential processing of emotional prosodic information from the right anterior superior temporal gyrus to the right fronto-parietal operculum; rather, the results point to parallel processing. Our results suggest that both the right fronto-parietal operculum and the right anterior superior temporal gyrus are critical for emotional prosody perception at a relatively late time period after sentence onset. This may reflect the fact that emotional cues can still be ambiguous at the beginning of sentences but become more apparent half-way through.
The Integration of Prosodic Speech in High Functioning Autism: A Preliminary fMRI Study  [PDF]
Isabelle Hesling,Bixente Dilharreguy,Sue Peppé,Marion Amirault,Manuel Bouvard,Michèle Allard
PLOS ONE , 2012, DOI: 10.1371/journal.pone.0011571
Abstract: Autism is a neurodevelopmental disorder characterized by a specific triad of symptoms: abnormalities in social interaction, abnormalities in communication, and restricted activities and interests. While verbal autistic subjects may display a correct mastery of the formal aspects of speech, they have difficulties with prosody (the music of speech), leading to communication disorders. A few behavioural studies have revealed a prosodic impairment in children with autism, and among the few fMRI studies aiming to assess the neural network involved in language, none has specifically studied prosodic speech. The aim of the present study was to characterize specific prosodic components, namely linguistic prosody (intonation, rhythm, and emphasis) and emotional prosody, and to correlate them with the underlying neural network.
Prosody Discrimination by Songbirds (Padda oryzivora)  [PDF]
Nozomi Naoi, Shigeru Watanabe, Kikuo Maekawa, Junko Hibiya
PLOS ONE , 2012, DOI: 10.1371/journal.pone.0047446
Abstract: In human verbal communication, not only lexical information but also paralinguistic information plays an important role in transmitting the speaker's mental state. Paralinguistic information is conveyed mainly through acoustic features such as pitch, rhythm, and tempo, generally known as prosody. Some species of birds are known to discriminate certain aspects of human speech. However, there have been no studies on whether birds can discriminate prosodic patterns of human language that convey different paralinguistic meanings. In the present study, we show that the Java sparrow (Padda oryzivora) can discriminate different prosodic patterns of Japanese sentences. The birds could generalize prosodic discrimination to novel sentences, but could not generalize sentence discrimination to sentences with novel prosody. Moreover, unlike Japanese speakers, Java sparrows used the first part of the utterance as the discrimination cue.
Copyright © 2008-2017 Open Access Library. All rights reserved.