Search Results: 1 - 10 of 6961 matches for "speech recognition"
All listed articles are free for downloading (OA Articles)
Automatic evaluation of speech impairment caused by wearing a dental appliance  [PDF]
Mariko Hattori, Yuka I. Sumita, Hisashi Taniguchi
Open Journal of Stomatology (OJST) , 2013, DOI: 10.4236/ojst.2013.37062
Abstract: In dentistry, speech evaluation is important for appropriate orofacial dysfunction rehabilitation. The speech intelligibility test is often used to assess patients’ speech, and it involves an evaluation by human listeners. However, the test has certain shortcomings, and an alternative method, without a listening procedure, is needed. The purpose of this study was to test the applicability of an automatic speech intelligibility test system using a computerized speech recognition technique. The speech of 10 normal subjects wearing a dental appliance was evaluated using an automatic speech intelligibility test system developed with computerized speech recognition software. The results of the automatic test were referred to as speech recognition scores. The Wilcoxon signed rank test was used to analyze differences in the test results between two conditions: with the palatal plate in place and with the palatal plate removed. Spearman correlation coefficients were used to evaluate whether the speech recognition score correlated with the result of the conventional intelligibility test. The speech recognition score was significantly decreased when wearing the plate (z = -2.807, P = 0.0050). The automatic evaluation results correlated positively with those of the conventional evaluation when wearing the appliance (r = 0.729, P = 0.017). The automatic speech testing system may be useful for evaluating speech intelligibility in denture wearers.
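As a concrete illustration of the statistical analysis named in this abstract, the sketch below applies the Wilcoxon signed-rank test and the Spearman correlation to per-subject speech recognition scores. The score values are hypothetical placeholders, not the study's data.

```python
# A minimal sketch (not the authors' actual analysis pipeline) of the two tests
# named in the abstract, run on illustrative placeholder scores.
from scipy.stats import wilcoxon, spearmanr

# Hypothetical speech recognition scores (%) for 10 subjects.
scores_plate_removed = [92, 88, 95, 90, 85, 93, 89, 91, 94, 87]
scores_plate_in_place = [80, 75, 86, 78, 70, 84, 77, 82, 85, 73]

# Hypothetical listener-based intelligibility scores with the plate in place.
conventional_scores = [78, 72, 85, 76, 69, 83, 74, 80, 84, 71]

# Paired comparison: does wearing the palatal plate change the recognition score?
stat, p_paired = wilcoxon(scores_plate_in_place, scores_plate_removed)
print(f"Wilcoxon signed-rank: statistic={stat}, p={p_paired:.4f}")

# Correlation between automatic and conventional evaluation while wearing the plate.
rho, p_corr = spearmanr(scores_plate_in_place, conventional_scores)
print(f"Spearman correlation: rho={rho:.3f}, p={p_corr:.4f}")
```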

Cochlear Implantation in Patients with Eosinophilic Otitis Media  [PDF]
Masahiro Takahashi, Yasuhiro Arai, Naoko Sakuma, Daisuke Sano, Goshi Nishimura, Takahide Taguchi, Nobuhiko Oridate, Satoshi Iwasaki, Shin-Ichi Usami
International Journal of Otolaryngology and Head & Neck Surgery (IJOHNS) , 2015, DOI: 10.4236/ijohns.2015.41006
Abstract: It is known that cochlear implantation for deaf patients with eosinophilic otitis media (EOM) is safe and can provide good speech perception. However, the best timing of implant surgery in patients with EOM is not yet known. The aim of this case report is to suggest the appropriate timing of surgery in deaf patients with EOM. Cochlear implantation was indicated in two patients with EOM. One underwent cochlear implantation in the absence of any ear discharge. In the other case, implant surgery was delayed for three years due to persistent ear discharge. No complications related to the implant device or skin flap were observed in either case. The speech recognition score after implantation was good in the first case and poor in the second. Perioperative complications were manageable even in the patient with persistent ear discharge. However, the delay in implant surgery due to the persistent ear discharge resulted in a poor speech recognition score. Early implantation should be considered even in EOM patients with ear discharge, although the presence of active middle ear inflammation is regarded as one of the contraindications for implantation according to the current Japanese guidelines.
Enhancing the Efficiency of Voice Controlled Wheelchairs Using NAM for Recognizing Partial Speech in Tamil  [PDF]
Angappan Kumaresan, Nagarajan Mohankumar, Mathavan Sureshanand, Jothi Suganya
Circuits and Systems (CS) , 2016, DOI: 10.4236/cs.2016.710247
Abstract: In this paper, we present an effective method for recognizing partial speech with the help of a Non-Audible Murmur (NAM) microphone, which is robust against noise. NAM is a kind of soft murmur so weak that even people near the speaker cannot hear it. It can be picked up from the mastoid and detected only with a special type of microphone, termed a NAM microphone. This approach can serve impaired people who can hear sound but can speak only partial or incomplete words (semi-mute); their partial speech can be recorded and recognized using the NAM microphone. It can also help paralysed people who use a voice-controlled wheelchair to move around without the help of others. Present voice-controlled wheelchair systems can recognize only fully spoken words and cannot recognize words spoken by semi-mute or partially speech-impaired people. Furthermore, they use a normal microphone, which suffers severe degradation and external noise influence when used to recognize partial speech inputs from impaired people. To overcome this problem, we use a NAM microphone together with a Tamil Speech Recognition Engine (TSRE) to improve the accuracy of the results. The proposed method was designed and implemented in a wheelchair-like model using an Arduino microcontroller kit. Experimental results show that 80% accuracy can be obtained with this method and that recognizing partially spoken words with the NAM microphone is much more efficient than with a normal microphone.
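The control path described here can be pictured as: recognized (possibly partial) word, canonical action, serial command to the Arduino. The sketch below is a hypothetical illustration of that path only; the command map, Tamil word forms, serial port, and single-byte protocol are assumptions, not the authors' implementation.

```python
# Hypothetical glue between a recognizer's text output and an Arduino motor driver.
import serial  # pyserial

# Single-byte codes an Arduino firmware might expect (assumed protocol).
ACTION_CODES = {"forward": b"F", "backward": b"B", "left": b"L", "right": b"R", "stop": b"S"}

# Clipped/partial utterances mapped to canonical actions (illustrative only).
PARTIAL_TO_ACTION = {
    "mun": "forward",   # e.g. a clipped form of "munnadi"
    "pin": "backward",  # clipped form of "pinnadi"
    "ida": "left",
    "vala": "right",
    "nil": "stop",
}

def dispatch(recognized_text: str, port: serial.Serial) -> None:
    """Send the motor command that best matches the recognized partial word."""
    word = recognized_text.strip().lower()
    for prefix, action in PARTIAL_TO_ACTION.items():
        if word.startswith(prefix):
            port.write(ACTION_CODES[action])
            return
    port.write(ACTION_CODES["stop"])  # fail safe: unknown input stops the chair

if __name__ == "__main__":
    with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as arduino:
        dispatch("munnadi", arduino)  # would send b"F"
```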
On Speech Recognition
A. Srinivasan
Research Journal of Applied Sciences, Engineering and Technology , 2012,
Abstract: Speech processing is the study of processing speech signals and is closely tied to natural language processing. The purpose of this survey is to bring together, from the author's point of view, the research that has taken place in speech processing and recognition. In particular, the author looks at some of the technical developments underpinning recent advances and looks ahead to current work that promises to enable the next wave of innovations in accuracy and scale for speech processing.
SPEAKER IDENTIFICATION
Arundhati S. Mehendale, M. R. Dixit
Signal & Image Processing , 2011,
Abstract: Speaker recognition is the computing task of validating a user's claimed identity using characteristics extracted from their voice. Voice recognition is a combination of the two, in that it uses learned aspects of a speaker's voice to determine what is being said. Such a system cannot recognize speech from random speakers very accurately, but it can reach high accuracy for the individual voices it has been trained on, which gives it various applications in day-to-day life.
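The abstract does not spell out a particular algorithm, so the sketch below shows one classic text-independent speaker identification approach (MFCC features plus one Gaussian mixture model per enrolled speaker), offered only to make the task concrete. File paths, model sizes, and the sampling rate are assumptions.

```python
# A minimal sketch of MFCC + per-speaker GMM identification (a classic baseline,
# not necessarily the authors' method).
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path: str) -> np.ndarray:
    """Load audio and return MFCC feature vectors, one row per frame."""
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

def enroll(train_files: dict[str, list[str]]) -> dict[str, GaussianMixture]:
    """Fit one diagonal-covariance GMM on each speaker's training recordings."""
    models = {}
    for speaker, paths in train_files.items():
        feats = np.vstack([mfcc_frames(p) for p in paths])
        models[speaker] = GaussianMixture(n_components=16, covariance_type="diag",
                                          max_iter=200).fit(feats)
    return models

def identify(test_file: str, models: dict[str, GaussianMixture]) -> str:
    """Return the speaker whose model gives the highest average log-likelihood."""
    feats = mfcc_frames(test_file)
    return max(models, key=lambda spk: models[spk].score(feats))
```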
Schnellere Transkription durch Spracherkennung? [Faster Transcription through Speech Recognition?] Speech Recognition Software—An Improvement to the Transcription Process?
Thorsten Dresing, Thorsten Pehl, Claudia Lombardo
Forum : Qualitative Social Research , 2008,
Abstract: The study outlined in this paper addresses the question: does the use of speech recognition software allow faster transcription of interview audio recordings than the established manual mode of transcription? Under identical conditions, 20 people produced transcripts of the same interview both manually and with the aid of speech recognition software. The experiences were evaluated quantitatively and qualitatively. The results reveal that both modes of transcription require about the same working time, while the speech recognition software shows clear weaknesses in precision and operability. URN: urn:nbn:de:0114-fqs0802174
Sudden Noise Reduction Based on GMM with Noise Power Estimation  [PDF]
Nobuyuki Miyake, Tetsuya Takiguchi, Yasuo Ariki
Journal of Software Engineering and Applications (JSEA) , 2010, DOI: 10.4236/jsea.2010.34039
Abstract: This paper describes a method for reducing sudden noise using noise detection and classification methods together with noise power estimation. Sudden-noise detection and classification were dealt with in our previous study; here, GMM-based noise reduction is performed using the detection and classification results. Classification tells us which kind of noise we are dealing with, but its power is unknown. This problem is solved by combining an estimate of the noise power with the noise reduction method. In our experiments, the proposed method achieved good performance for recognition of utterances overlapped by sudden noises.
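To make the three stages named here concrete, the sketch below chains a sudden-noise detector, a GMM-based classifier, and a power-scaled subtraction of a noise template. The detection rule, the templates, and the least-squares power estimate are simplifying assumptions for illustration, not the reduction method derived in the paper.

```python
# A simplified sketch of detect -> classify -> estimate power -> subtract.
import numpy as np
from sklearn.mixture import GaussianMixture

def detect_sudden_noise(frame_energies: np.ndarray, ratio: float = 3.0) -> np.ndarray:
    """Flag frames whose energy jumps well above the running median."""
    return frame_energies > ratio * np.median(frame_energies)

def classify_noise(log_spectrum: np.ndarray, gmms: dict[str, GaussianMixture]) -> str:
    """Pick the noise class whose pre-trained GMM gives the highest log-likelihood."""
    return max(gmms, key=lambda k: gmms[k].score(log_spectrum[None, :]))

def reduce_frame(noisy_power: np.ndarray, template_power: np.ndarray) -> np.ndarray:
    """Estimate the unknown noise power by least squares and subtract the scaled template."""
    alpha = float(noisy_power @ template_power) / float(template_power @ template_power)
    cleaned = noisy_power - alpha * template_power
    return np.maximum(cleaned, 1e-10)  # flooring keeps the power spectrum positive
```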
Development of Application Specific Continuous Speech Recognition System in Hindi  [PDF]
Gaurav Gaurav, Devanesamoni Shakina Deiv, Gopal Krishna Sharma, Mahua Bhattacharya
Journal of Signal and Information Processing (JSIP) , 2012, DOI: 10.4236/jsip.2012.33052
Abstract: Application-specific voice interfaces in local languages will go a long way toward bringing the benefits of technology to rural India. The goal of this work is a continuous speech recognition system in Hindi tailored to aid the teaching of geometry in primary schools. This paper presents the preliminary work done toward that end. We use Mel Frequency Cepstral Coefficients as speech feature parameters and Hidden Markov Modeling to model the acoustic features. The Hidden Markov Model Toolkit (HTK) 3.4 was used both for feature extraction and for model generation. The Julius recognizer, which is language independent, was used for decoding. A speaker-independent system is implemented and results are presented.
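HTK's front end is driven by configuration files rather than code; as a rough Python approximation of an MFCC-plus-deltas front end of the kind such systems use (13 cepstra with delta and acceleration coefficients), the sketch below extracts 39-dimensional feature frames. The window size, hop, and coefficient counts are assumptions, not the paper's exact HTK configuration.

```python
# Approximate MFCC_0_D_A-style features in Python (illustrative, not HTK itself).
import numpy as np
import librosa

def mfcc_d_a(path: str, sr: int = 16000) -> np.ndarray:
    """Return frames of 39-dim features: 13 MFCCs + deltas + delta-deltas."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=400, hop_length=160)  # 25 ms window, 10 ms shift
    delta = librosa.feature.delta(mfcc)
    delta2 = librosa.feature.delta(mfcc, order=2)
    return np.vstack([mfcc, delta, delta2]).T               # shape: (frames, 39)
```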
Virtual Learning System (Miqra’ah) for Quran Recitations for Sighted and Blind Students  [PDF]
Samir A. Elsagheer Mohamed, Allam Shehata Hassanin, Mohamed Taher Ben Othman
Journal of Software Engineering and Applications (JSEA) , 2014, DOI: 10.4236/jsea.2014.74021
Abstract: The Quran has ten famous recitations and twenty different narrations. It is well known that the best way to learn is from qualified and authentic scientists (Sheikhs) in one or more of these narrations. Because of 1) the widespread use of the Internet and the ease of use and availability of computers and smartphones that can access it; 2) people's busy schedules, which hinder them from attending physical learning environments; and 3) the very small number of elder licensed scientists, we have developed a virtual learning system (Electronic Miqra’ah). Scientists can remotely supervise the registered students. Students of different ages can register from anywhere in the world, provided they have an Internet connection, and can interact with the scientist in real time so that he can help them memorize (Tahfeez), guide them in error correction, and give them lectures or lessons through virtual learning rooms. The targeted user groups include sighted people, blind people, manually disabled people, and illiterate people. The system takes commands by voice in addition to normal inputs such as mouse and keyboard: users dictate commands orally, and the system recognizes the spoken phrases and executes them. We have developed an efficient speech recognition engine that is speaker and accent independent. The system administrators create several virtual learning rooms, register the licensed scientists, and prepare a daily schedule for each room. Students can register for any of these rooms by pronouncing the room's name. Each student is allocated a portion of time in which he or she can interact directly by voice with the scientist, while other students can listen to the current student's recitation and to the error corrections, guidance, or lessons from the scientists.
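The voice-command layer described above can be thought of as routing a recognized phrase to a room registration or microphone action. The sketch below is purely hypothetical: the room names, phrases, and dispatch rules are illustrative, and the paper's speaker- and accent-independent recognition engine is not reproduced here.

```python
# Hypothetical dispatcher from recognized phrases to virtual-room actions.
ROOMS = {"hafs room": 1, "warsh room": 2, "qalun room": 3}  # example room names

def handle_command(phrase: str, current_user: str) -> str:
    """Route a recognized spoken phrase to a registration or microphone action."""
    text = phrase.strip().lower()
    for room_name, room_id in ROOMS.items():
        if room_name in text:
            return f"{current_user} registered to room {room_id} ({room_name})"
    if "start recitation" in text:
        return f"{current_user} granted the microphone for recitation"
    if "stop" in text:
        return f"{current_user} released the microphone"
    return "command not recognized"

print(handle_command("please join the Hafs room", "student_42"))
```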

Phoneme Sequence Modeling in the Context of Speech Signal Recognition in Language “Baoule”  [PDF]
Hyacinthe Konan, Etienne Soro, Olivier Asseu, Bi Tra Goore, Raymond Gbegbe
Engineering (ENG) , 2016, DOI: 10.4236/eng.2016.89055
Abstract: This paper presents the recognition of spoken “Baoule” sentences, a language of Côte d’Ivoire. Several formalisms allow an automatic speech recognition system to be modelled; the one we used to build our system is based on discrete Hidden Markov Models (HMM). Our goal in this article is to present a system for the recognition of Baoule words. We present the three classical HMM problems, develop the algorithms that solve them, and then run these algorithms on concrete examples.
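The "three classical problems" of a discrete HMM are evaluation (forward algorithm), decoding (Viterbi), and learning (Baum-Welch). As one concrete example, the sketch below is a standard Viterbi decoder for a discrete-observation HMM; the toy parameters are illustrative, not the Baoule models trained in the paper.

```python
# Standard Viterbi decoding for a discrete HMM (toy parameters for illustration).
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence for a discrete observation sequence.

    pi: initial state probabilities, shape (N,)
    A:  state transition matrix, shape (N, N)
    B:  emission matrix, shape (N, M) over M discrete symbols
    """
    N, T = len(pi), len(obs)
    delta = np.zeros((T, N))           # best log-score of a path ending in state j at time t
    psi = np.zeros((T, N), dtype=int)  # argmax back-pointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)   # (prev state, next state)
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(N)] + np.log(B[:, obs[t]])
    path = [int(np.argmax(delta[-1]))]               # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy example: 2 states, 3 observation symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], pi, A, B))
```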