Search Results: 1 - 10 of 15351 matches for "Kang Ryoung Park"
All listed articles are free for downloading (OA Articles)
Noise Reduction in Brainwaves by Using Both EEG Signals and Frontal Viewing Camera Images
Jae Won Bang, Jong-Suk Choi, Kang Ryoung Park
Sensors, 2013, DOI: 10.3390/s130506272
Abstract: Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been used in various applications, including human–computer interfaces, diagnosis of brain diseases, and measurement of cognitive status. However, EEG signals can be contaminated with noise caused by a user's head movements. Therefore, we propose a new method that combines an EEG acquisition device with a frontal viewing camera to isolate and exclude the sections of EEG data containing this noise. This method is novel in the following three ways. First, we compare the accuracy of detecting head movements based on features of the EEG signals in the frequency and time domains with that based on motion features of images captured by the frontal viewing camera. Second, optimal features are selected from the EEG signals in the frequency domain and from the motion features captured by the frontal viewing camera; dimension reduction and feature selection are performed using linear discriminant analysis (LDA). Third, the combined features are used as inputs to a support vector machine (SVM), which improves the accuracy of detecting head movements. The experimental results show that the proposed method detects head movements with an average error rate of approximately 3.22%, which is smaller than that of other methods.
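As a rough illustration only (not code from the paper), the sketch below shows the kind of pipeline the abstract describes: EEG frequency-domain features are combined with camera motion features, LDA reduces the dimensionality, and an SVM detects head-movement segments. The feature arrays and labels are hypothetical placeholders.

```python
# Minimal sketch of the described pipeline: combine EEG frequency-domain
# features with frontal-camera motion features, reduce dimensionality with
# LDA, and detect head-movement windows with an SVM.
# eeg_feats, cam_feats, and labels are hypothetical placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_windows = 200
eeg_feats = rng.normal(size=(n_windows, 32))   # e.g., band powers per channel
cam_feats = rng.normal(size=(n_windows, 8))    # e.g., optical-flow magnitudes
labels = rng.integers(0, 2, size=n_windows)    # 1 = head movement, 0 = clean

X = np.hstack([eeg_feats, cam_feats])          # combined feature vector
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                    SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```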
A Novel Gaze Tracking Method Based on the Generation of Virtual Calibration Points
Ji Woo Lee, Hwan Heo, Kang Ryoung Park
Sensors, 2013, DOI: 10.3390/s130810802
Abstract: Most conventional gaze-tracking systems require users to look at many points during the initial calibration stage, which is inconvenient for them. To avoid this requirement, we propose a new gaze-tracking method with four important characteristics. First, our gaze-tracking system uses a large screen located at a distance from the user, who wears a lightweight device. Second, our system requires users to look at only four calibration points during the initial calibration stage, during which four pupil centers are recorded. Third, five additional points (virtual pupil centers) are generated by a multilayer perceptron that takes the four actual points (detected pupil centers) as inputs. Fourth, when a user gazes at a large screen, the shape defined by the positions of the four pupil centers is a distorted quadrangle because of the nonlinear movement of the human eyeball, so gaze-detection accuracy is reduced if the pupil movement area is mapped onto the screen area with a single transform function. We overcome this problem by calculating the gaze position with multi-geometric transforms based on the five virtual points and the four actual points. Experimental results show that the accuracy of the proposed method is better than that of other methods.
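As an illustration only (not the paper's implementation), the sketch below shows a multilayer perceptron mapping the four detected pupil centers to five virtual pupil centers, as the abstract describes; the training pairs are hypothetical placeholders standing in for real calibration data.

```python
# Minimal sketch: an MLP that maps the four detected pupil centers
# (8 coordinates) to five additional "virtual" pupil centers (10 coordinates).
# Training pairs here are hypothetical; in practice they would come from
# sessions in which users actually looked at all nine calibration points.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, size=(50, 8))      # 4 pupil centers per session
y_train = rng.uniform(0, 1, size=(50, 10))     # 5 corresponding virtual centers

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
mlp.fit(X_train, y_train)

four_centers = rng.uniform(0, 1, size=(1, 8))  # a new user's calibration data
virtual_centers = mlp.predict(four_centers).reshape(5, 2)
# The 4 real + 5 virtual centers define sub-quadrangles, each mapped to its
# own screen region by a separate geometric transform.
```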
A Study on Iris Localization and Recognition on Mobile Phones
Kang Ryoung Park, Hyun-Ae Park, Byung Jun Kang, Eui Chul Lee
EURASIP Journal on Advances in Signal Processing, 2007, DOI: 10.1155/2008/281943
Abstract: A new iris recognition method for mobile phones based on corneal specular reflections (SRs) is discussed. We present the following three novelties over previous research. First, for users with glasses, many non-corneal SRs may appear on the surface of the glasses, making it very difficult to detect the genuine SR on the cornea. To overcome this problem, we propose a successive on/off dual-illuminator scheme for detecting genuine SRs on the corneas of users with glasses. Second, to detect SRs robustly, we estimate the size, shape, and brightness of the SRs based on eye, camera, and illuminator models. Third, the detected eye (iris) region is verified again using the AdaBoost eye detector. Experimental results with 400 face images captured from 100 persons with a mobile phone camera showed a correct iris detection rate of 99.5% for images without glasses and 98.9% for images with glasses or contact lenses. The consequent accuracy of iris authentication was an EER (equal error rate) of 0.05% based on the detected iris images.
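The abstract does not give the decision rule, so the sketch below only illustrates the on/off illuminator idea in a simplified form: SR candidates are isolated by differencing lit and unlit frames, and candidate pairs from the two illuminators are kept only if they lie within an assumed corneal distance. The thresholds and helper names are assumptions for illustration.

```python
# Simplified sketch of specular-reflection (SR) candidate detection with a
# successive on/off illuminator: bright spots present only when the
# illuminator is lit are kept, then candidate pairs from the two illuminators
# are accepted only if close enough to plausibly lie on the cornea rather
# than on eyeglass surfaces. Thresholds are assumed, not from the paper.
import numpy as np

def sr_candidates(frame_on, frame_off, thresh=60):
    """Return (row, col) coordinates of spots present only in the lit frame."""
    diff = frame_on.astype(int) - frame_off.astype(int)
    ys, xs = np.nonzero(diff > thresh)
    return np.stack([ys, xs], axis=1) if len(ys) else np.empty((0, 2))

def genuine_pairs(cands_a, cands_b, max_dist=15):
    """Keep pairs of candidates from the two illuminators that are within an
    assumed corneal-SR separation."""
    pairs = []
    for p in cands_a:
        d = np.linalg.norm(cands_b - p, axis=1) if len(cands_b) else []
        if len(d) and d.min() < max_dist:
            pairs.append((tuple(p), tuple(cands_b[np.argmin(d)])))
    return pairs
```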
Enhanced Perception of User Intention by Combining EEG and Gaze-Tracking for Brain-Computer Interfaces (BCIs)
Jong-Suk Choi, Jae Won Bang, Kang Ryoung Park, Mincheol Whang
Sensors, 2013, DOI: 10.3390/s130303454
Abstract: Speller UI systems tend to be less accurate because of individual variation and the noise of EEG signals. Therefore, we propose a new method that combines EEG signals with gaze tracking. This research is novel in the following four aspects. First, two wearable devices are combined to simultaneously measure both the EEG signal and the gaze position. Second, a speller UI system usually has a 6 × 6 matrix of alphanumeric characters, which has the disadvantage that the number of characters is limited to 36; thus, a 12 × 12 matrix containing 144 characters is used. Third, in order to reduce the highlighting time over the 12 × 12 rows and columns, only three rows and three columns (determined on the basis of the 3 × 3 area centered on the user's gaze position) are highlighted. Fourth, by analyzing the P300 EEG signal obtained only when each of these three rows and three columns is highlighted, the accuracy of selecting the correct character is enhanced. The experimental results showed that the accuracy of the proposed method was higher than that of other methods.
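As a small illustration of the third point (not code from the paper), the sketch below picks the three rows and three columns covering the 3 × 3 area around the gazed cell in a 12 × 12 matrix; gaze-to-cell mapping and the P300 classifier itself are outside this sketch.

```python
# Minimal sketch: restrict P300 stimulation to the 3x3 neighbourhood of the
# user's gaze in a 12x12 speller matrix, so only three rows and three columns
# need to be highlighted per selection.
def rows_cols_to_flash(gaze_row, gaze_col, size=12):
    """Return the three row indices and three column indices centred on the
    gazed cell, clamped to the matrix borders."""
    def window(center):
        start = min(max(center - 1, 0), size - 3)
        return list(range(start, start + 3))
    return window(gaze_row), window(gaze_col)

# Example: gaze on cell (0, 11) -> flash rows [0, 1, 2] and cols [9, 10, 11]
print(rows_cols_to_flash(0, 11))
```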
Intelligent query by humming system based on score level fusion of multiple classifiers
Gi Pyo Nam, Thi Thu Trang Luong, Hyun Ha Nam, Kang Ryoung Park
EURASIP Journal on Advances in Signal Processing, 2011
Abstract: Recently, the need for content-based music retrieval that can return results even when a user does not know information such as the title or singer has increased. Query-by-humming (QBH) systems have been introduced to address this need, as they allow the user to simply hum snatches of a tune to find the right song. Even though there have been many studies on QBH, few have combined multiple classifiers based on various fusion methods. Here we propose a new QBH system based on the score-level fusion of multiple classifiers. This research is novel in the following three respects: three local classifiers [quantized binary (QB) code-based linear scaling (LS), pitch-based dynamic time warping (DTW), and LS] are employed; local maximum and minimum point-based LS and pitch distribution feature-based LS are used as global classifiers; and the combination of local and global classifiers by score-level fusion with the PRODUCT rule is used to achieve enhanced matching accuracy. Experimental results with the 2006 MIREX QBSH and 2009 MIR-QBSH corpus databases show that the performance of the proposed method is better than that of a single classifier and of other fusion methods.
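For illustration only, the sketch below shows score-level fusion by the PRODUCT rule in its generic form: each classifier contributes a normalized per-song score, the fused score is their product, and the highest fused score wins. The score vectors stand in for the classifiers named in the abstract and are hypothetical.

```python
# Minimal sketch of PRODUCT-rule score-level fusion: multiply the normalised
# matching scores from each classifier and rank candidate songs by the
# fused score. The score vectors below are hypothetical placeholders.
import numpy as np

def product_fusion(score_lists):
    """score_lists: list of 1-D arrays, one normalised score vector per
    classifier, all indexed by the same candidate-song order."""
    return np.vstack(score_lists).prod(axis=0)

scores = [
    np.array([0.9, 0.4, 0.7]),   # e.g., QB-code-based LS
    np.array([0.8, 0.5, 0.6]),   # e.g., pitch-based DTW
    np.array([0.7, 0.6, 0.9]),   # e.g., pitch-based LS
]
fused = product_fusion(scores)
print("best candidate index:", int(np.argmax(fused)))
```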
Remote Gaze Tracking System on a Large Display
Hyeon Chang Lee, Won Oh Lee, Chul Woo Cho, Su Yeong Gwon, Kang Ryoung Park, Heekyung Lee, Jihun Cha
Sensors, 2013, DOI: 10.3390/s131013439
Abstract: We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways. First, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be large, so the proposed system includes two cameras that can be moved simultaneously by panning and tilting mechanisms: a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and the NVC and to enhance the capture speed of the NVC, the two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system operates with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.
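The abstract does not name the focus measure, so the sketch below assumes a common sharpness score (variance of a Laplacian response) purely to illustrate the focus-score idea; the lens would be stepped toward whichever focus position maximizes this score on the cropped eye image.

```python
# Minimal sketch of a focus score for auto-focusing the narrow view camera.
# The specific measure is assumed (variance of a Laplacian response); the
# paper's focus score may differ.
import numpy as np

def focus_score(eye_image):
    """Variance of a simple 4-neighbour Laplacian; higher means sharper."""
    img = eye_image.astype(float)
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()
```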
Fano resonance in crossed carbon nanotubes
Jinhee Kim, Jae-Ryoung Kim, Jeong-O Lee, Jong Wan Park, Hye Mi So, Nam Kim, Kicheon Kang, Kyung-Hwa Yoo, Ju-Jin Kim
Physics, 2003, DOI: 10.1103/PhysRevLett.90.166403
Abstract: We report the observation of resonant transport in multiwall carbon nanotubes in a crossed geometry. The resonant transport is manifested as an asymmetric peak in the differential conductance curve. The observed asymmetric conductance peak is well explained by a Fano resonance originating from scattering at the contact region of the two nanotubes. The conductance peak depends sensitively on the external magnetic field and exhibits Aharonov-Bohm-type oscillations.
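For context, the asymmetric peak mentioned above is conventionally described by the textbook Fano lineshape; the expression below is the standard form, not a formula quoted from the paper.

```latex
% Standard Fano lineshape (textbook form, not reproduced from the paper):
% q is the asymmetry parameter, E_R the resonance energy, \Gamma its width.
\[
  \sigma(\epsilon) \propto \frac{(q+\epsilon)^2}{1+\epsilon^2},
  \qquad \epsilon = \frac{2\,(E-E_R)}{\Gamma}.
\]
```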
Experimental evidences of Luttinger liquid behavior in the crossed multi-wall carbon nanotubes
Jinhee Kim, Kicheon Kang, Jeong-O Lee, Kyung-Hwa Yoo, Jae-Ryoung Kim, Jong Wan Park, Hye Mi So, Ju-Jin Kim
Physics, 2000
Abstract: Luttinger liquid behavior was observed in a crossed junction formed by two metallic multi-wall carbon nanotubes, whose differential conductance vanished as a power law of bias voltage and temperature. When a constant voltage or current was applied to one of the two carbon nanotubes in the crossed geometry, the electrical transport properties of the other nanotube were affected significantly, implying that a strong correlation exists between the carbon nanotubes. These characteristic features are in good agreement with theoretical predictions for two crossed Luttinger liquids.
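For context, the power-law suppression referred to above is conventionally written as the textbook Luttinger-liquid tunneling behaviour; the form below is generic and does not reproduce the paper's fitted exponents.

```latex
% Textbook Luttinger-liquid tunneling power laws (not the paper's fits):
% G is the linear conductance, dI/dV the differential conductance,
% and \alpha an interaction-dependent exponent.
\[
  G(T) \propto T^{\alpha} \quad (eV \ll k_B T), \qquad
  \frac{dI}{dV} \propto V^{\alpha} \quad (eV \gg k_B T).
\]
```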
Asymptotic Value of the Probability That the First Order Statistic Is from Null Hypothesis
Iickho Song, Seungwon Lee, So Ryoung Park, Seokho Yoon
Applied Mathematics (AM), 2013, DOI: 10.4236/am.2013.412231
Abstract: When every element of a random vector X = (X1, X2, ..., Xn) follows the cumulative distribution function F0 with probability p and F1 with probability 1 - p, we show that the probability S0 that the first order statistic of X originates from F0 can be expressed in closed form, and we derive its asymptotic value; the limiting expression involves quantities defined on the support of the underlying distribution. Applications and implications of the results are discussed for the performance of wideband spectrum sensing schemes.
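As a sketch only, the closed form below is what follows from the abstract's stated setup for independent elements with continuous distributions; it is derived here for illustration and is not necessarily the exact expression given in the paper, whose formulas are not reproduced in this listing.

```latex
% Closed form implied by the stated setup (independent elements, continuous
% distributions); derived here for illustration, not quoted from the paper.
\[
  S_0 \;=\; n\,p \int \bigl[\,1 - p\,F_0(x) - (1-p)\,F_1(x)\,\bigr]^{\,n-1}\, dF_0(x).
\]
```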

Crystal Structure of Sus scrofa Quinolinate Phosphoribosyltransferase in Complex with Nicotinate Mononucleotide
Hyung-Seop Youn, Mun-Kyoung Kim, Gil Bu Kang, Tae Gyun Kim, Jung-Gyu Lee, Jun Yop An, Kyoung Ryoung Park, Youngjin Lee, Jung Youn Kang, Hye-Eun Song, Inju Park, Chunghee Cho, Shin-Ichi Fukuoka, Soo Hyun Eom
PLOS ONE, 2013, DOI: 10.1371/journal.pone.0062027
Abstract: We have determined the crystal structure of porcine quinolinate phosphoribosyltransferase (QAPRTase) in complex with nicotinate mononucleotide (NAMN), which is the first crystal structure of a mammalian QAPRTase with its reaction product. The structure was determined from protein obtained from the porcine kidney. Because the full protein sequence of porcine QAPRTase was not available in either protein or nucleotide databases, cDNA was synthesized using reverse transcriptase-polymerase chain reaction to determine the porcine QAPRTase amino acid sequence. The crystal structure revealed that porcine QAPRTases have a hexameric structure that is similar to other eukaryotic QAPRTases, such as the human and yeast enzymes. However, the interaction between NAMN and porcine QAPRTase was different from the interaction found in prokaryotic enzymes, such as those of Helicobacter pylori and Mycobacterium tuberculosis. The crystal structure of porcine QAPRTase in complex with NAMN provides a structural framework for understanding the unique properties of the mammalian QAPRTase active site and designing new antibiotics that are selective for the QAPRTases of pathogenic bacteria, such as H. pylori and M. tuberculosis.