oalib
Search Results: 1 - 10 of 100 matches for " "
All listed articles are free for downloading (OA Articles)
Brainstorm: A User-Friendly Application for MEG/EEG Analysis  [PDF]
François Tadel,Sylvain Baillet,John C. Mosher,Dimitrios Pantazis,Richard M. Leahy
Computational Intelligence and Neuroscience , 2011, DOI: 10.1155/2011/879716
Abstract: Brainstorm is a collaborative open-source application dedicated to magnetoencephalography (MEG) and electroencephalography (EEG) data visualization and processing, with an emphasis on cortical source estimation techniques and their integration with anatomical magnetic resonance imaging (MRI) data. The primary objective of the software is to connect MEG/EEG neuroscience investigators with both the best-established and cutting-edge methods through a simple and intuitive graphical user interface (GUI). 1. Introduction Although MEG and EEG instrumentation is becoming more common in neuroscience research centers and hospitals, research software availability and standardization remain limited compared to other functional brain imaging modalities. MEG/EEG source imaging poses a series of specific technical challenges that have, until recently, impeded academic software developments and their acceptance by users (e.g., the multidimensional nature of the data, the multitude of approaches to modeling head tissues and geometry, and the ambiguity of source modeling). Ideally, MEG/EEG imaging is multimodal: MEG and EEG recordings need to be registered to a source space that may be obtained from structural MRI data, which adds to the complexity of the analysis. Further, there is no widely accepted standard MEG/EEG data format, which has limited the distribution and sharing of data and created a major technical hurdle for academic software developers. MEG/EEG data analysis and source imaging feature a multitude of possible approaches, which draw on a wide range of signal processing techniques. Forward head modeling, for example, which maps elemental neuronal current sources to scalp potentials and external magnetic fields, is dependent on the shape and conductivity of head tissues and can be performed using a number of methods, ranging from simple spherical head models [1] to overlapping spheres [2] and boundary or finite element methods [3]. Inverse source modeling, which resolves the cortical sources that gave rise to MEG/EEG recordings, has been approached through a multitude of methods, ranging from dipole fitting [4] to distributed source imaging using Bayesian inference [5–7]. This diversity of models and methods reflects the ill-posed nature of electrophysiological imaging, which requires restrictive models or regularization procedures to ensure a stable inverse solution. Users' needs for the analysis and visualization of MEG and EEG data vary greatly depending on their application. In a clinical environment, raw recordings are often used to identify and
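To make the forward/inverse terminology above concrete, here is a hedged sketch of the generic linear formulation shared by the head-modeling and distributed-imaging methods cited in the abstract; the symbols G, j, C_n, and lambda are our notation for illustration, not Brainstorm's.

```latex
% Generic MEG/EEG forward model: sensor data as a linear mixture of source currents.
% b(t): sensor measurements, G: gain (lead-field) matrix from the head model,
% j(t): elemental source currents, n(t): additive noise.
\[ \mathbf{b}(t) = \mathbf{G}\,\mathbf{j}(t) + \mathbf{n}(t) \]
% One regularized, minimum-norm style distributed inverse, with noise covariance C_n
% and regularization parameter lambda stabilizing the ill-posed inversion:
\[ \hat{\mathbf{j}}(t) = \mathbf{G}^{\top}\bigl(\mathbf{G}\mathbf{G}^{\top} + \lambda\,\mathbf{C}_n\bigr)^{-1}\mathbf{b}(t) \]
```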
Classification methods for ongoing EEG and MEG signals
Michel Besserve,Karim Jerbi,Francois Laurent,Sylvain Baillet,Jacques Martinerie,Line Garnero
Biological Research , 2007, DOI: 10.4067/S0716-97602007000500005
Abstract: Classification algorithms help predict the qualitative properties of a subject's mental state by extracting useful information from the highly multivariate non-invasive recordings of his brain activity. In particular, applying them to Magneto-encephalography (MEG) and electro-encephalography (EEG) is a challenging and promising task with prominent practical applications to e.g. Brain Computer Interface (BCI). In this paper, we first review the principles of the major classification techniques and discuss their application to MEG and EEG data classification. Next, we investigate the behavior of classification methods using real data recorded during a MEG visuomotor experiment. In particular, we study the influence of the classification algorithm, of the quantitative functional variables used in this classifier, and of the validation method. In addition, our findings suggest that by investigating the distribution of classifier coefficients, it is possible to infer knowledge and construct functional interpretations of the underlying neural mechanisms of the performed tasks. Finally, the promising results reported here (up to 97% classification accuracy on 1-second time windows) reflect the considerable potential of MEG for the continuous classification of mental states.
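As a concrete illustration of the kind of single-trial classification pipeline this paper reviews, here is a minimal Python sketch; the feature matrix, the linear SVM, and the 5-fold cross-validation are our assumptions for illustration, not the authors' exact setup.

```python
# Minimal sketch of single-trial MEG/EEG classification with cross-validation.
# Assumes a precomputed feature matrix X (n_trials x n_features), e.g. band power
# per sensor on 1-second windows, and a label vector y (one mental state per trial).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 64          # hypothetical sizes
X = rng.standard_normal((n_trials, n_features))
y = rng.integers(0, 2, n_trials)        # two mental states

clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
scores = cross_val_score(clf, X, y, cv=5)   # the validation scheme matters, as the paper stresses
print(f"mean accuracy: {scores.mean():.2f}")

# Inspecting the fitted coefficients hints at which features drive the decision,
# which is the kind of functional interpretation the abstract mentions.
clf.fit(X, y)
weights = clf.named_steps["linearsvc"].coef_.ravel()
```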
A three domain covariance framework for EEG/MEG data  [PDF]
Beata Roś,Fetsje Bijma,Mathisca de Gunst,Jan de Munck
Statistics , 2014,
Abstract: In this paper we introduce a covariance framework for the analysis of EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. We perform a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. Apart from that, we illustrate our method on real EEG and MEG data sets. The proposed covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as source of noise and realistic noise covariance estimates are needed for accurate dipole localization, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, such as in combined EEG/fMRI experiments in which the correlation between EEG and fMRI signals is investigated.
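A minimal numerical sketch of the three-component Kronecker covariance structure described in this abstract; the factor sizes are invented, and the paper's iterative maximum likelihood estimator is not reproduced here, only the structural assumption.

```python
# Sketch of the three-domain (space x time x trials) Kronecker covariance structure.
# Illustrative only: small hypothetical factor matrices stand in for the estimated components.
import numpy as np

def random_spd(n, rng):
    """Random symmetric positive-definite matrix, used as a stand-in covariance factor."""
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

rng = np.random.default_rng(0)
C_space = random_spd(5, rng)    # spatial (sensor) covariance
C_time = random_spd(4, rng)     # temporal covariance within an epoch
C_trial = random_spd(3, rng)    # epoch/trial-to-trial covariance

# The covariance of the vectorized data is the Kronecker product of the three factors.
C_full = np.kron(C_trial, np.kron(C_time, C_space))
print(C_full.shape)             # (60, 60) = (3*4*5, 3*4*5)
```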
Difficulties applying recent blind source separation techniques to EEG and MEG  [PDF]
Kevin H. Knuth
Statistics , 2015,
Abstract: High temporal resolution measurements of human brain activity can be performed by recording the electric potentials on the scalp surface (electroencephalography, EEG), or by recording the magnetic fields near the surface of the head (magnetoencephalography, MEG). The analysis of the data is problematic due to the fact that multiple neural generators may be simultaneously active and the potentials and magnetic fields from these sources are superimposed on the detectors. It is highly desirable to un-mix the data into signals representing the behaviors of the original individual generators. This general problem is called blind source separation and several recent techniques utilizing maximum entropy, minimum mutual information, and maximum likelihood estimation have been applied. These techniques have had much success in separating signals such as natural sounds or speech, but appear to be ineffective when applied to EEG or MEG signals. Many of these techniques implicitly assume that the source distributions have a large kurtosis, whereas an analysis of EEG/MEG signals reveals that the distributions are multimodal. This suggests that more effective separation techniques could be designed for EEG and MEG signals.
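For illustration, a minimal Python sketch of the style of blind source separation discussed here, using FastICA on synthetic mixtures; the sources and mixing are invented, and, as the abstract argues, the underlying non-Gaussianity assumptions may not hold for real EEG/MEG signals.

```python
# Sketch of blind source separation on synthetic mixtures (not real EEG/MEG).
# FastICA assumes non-Gaussian (typically high-kurtosis) sources, which is the
# assumption the abstract argues often fails for EEG/MEG distributions.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
sources = np.c_[np.sin(3 * t), np.sign(np.sin(7 * t)), rng.laplace(size=t.size)]
mixing = rng.standard_normal((8, 3))            # 8 hypothetical sensors, 3 sources
observations = sources @ mixing.T               # superposition at the detectors

ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(observations)     # estimated source time courses
print(recovered.shape)                          # (2000, 3)
```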
EEG and MEG Data Analysis in SPM8  [PDF]
Vladimir Litvak,Jérémie Mattout,Stefan Kiebel,Christophe Phillips,Richard Henson,James Kilner,Gareth Barnes,Robert Oostenveld,Jean Daunizeau,Guillaume Flandin,Will Penny,Karl Friston
Computational Intelligence and Neuroscience , 2011, DOI: 10.1155/2011/852961
Abstract: SPM is free and open-source software written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools. 1. Introduction Statistical parametric mapping (SPM) is free and open-source academic software distributed under the GNU General Public License. The aim of SPM is to communicate and disseminate to the scientific community methods for neuroimaging data analysis that have been developed by the SPM coauthors associated with the Wellcome Trust Centre for Neuroimaging, UCL Institute of Neurology. The origins of the SPM software go back to 1990, when SPM was first formulated for the statistical analysis of positron emission tomography (PET) data [1, 2]. The software incorporated several important theoretical advances, such as the use of the general linear model (GLM) to describe, in a generic way, a variety of experimental designs [3] and random field theory (RFT) to solve the problem of multiple comparisons arising from the application of mass univariate tests to images with multiple voxels [4]. As functional magnetic resonance imaging (fMRI) gained popularity later in the decade, SPM was further developed to support this new imaging modality, introducing the notion of a hemodynamic response function and associated convolution models for serially correlated time series. This formulation became an established standard in the field, and most other free and commercial packages for fMRI analysis implement variants of it. In parallel, increasingly sophisticated tools for registration, spatial normalization, and segmentation of functional and structural images were developed [5]. In addition to finessing fMRI and PET analyses, these methods made it possible to
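As a rough illustration of the mass-univariate GLM idea on which SPM's statistical maps are based, here is a minimal numpy sketch; the design matrix and data are invented, and the random field theory correction for multiple comparisons is not shown.

```python
# Sketch of a mass-univariate GLM: the same linear model is fit independently at
# every element of an image (e.g. a scalp-map or time-frequency image), yielding
# one t-statistic per element. Invented data; SPM's RFT correction is omitted.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_vox = 40, 500                      # observations (e.g. subjects) x image elements
X = np.column_stack([np.ones(n_obs), rng.standard_normal(n_obs)])  # design matrix
Y = rng.standard_normal((n_obs, n_vox))     # one column of data per image element

beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)   # GLM fit at every element at once
residuals = Y - X @ beta
dof = n_obs - np.linalg.matrix_rank(X)
sigma2 = (residuals ** 2).sum(axis=0) / dof
c = np.array([0.0, 1.0])                    # contrast testing the second regressor
var_c = c @ np.linalg.inv(X.T @ X) @ c
t_map = (c @ beta) / np.sqrt(sigma2 * var_c)        # one t-value per element
print(t_map.shape)                          # (500,)
```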
EEG/MEG Source Imaging: Methods, Challenges, and Open Issues  [PDF]
Katrina Wendel,Outi Väisänen,Jaakko Malmivuo,Nevzat G. Gencer,Bart Vanrumste,Piotr Durka,Ratko Magjarević,Selma Supek,Mihail Lucian Pascu,Hugues Fontenelle,Rolando Grave de Peralta Menendez
Computational Intelligence and Neuroscience , 2009, DOI: 10.1155/2009/656092
Abstract: We present the four key areas of research—preprocessing, the volume conductor, the forward problem, and the inverse problem—that affect the performance of EEG and MEG source imaging. In each key area, we identify prominent approaches and methodologies that have open issues warranting further investigation within the community, challenges associated with certain techniques, and algorithms necessitating clarification of their implications. Rather than providing definitive answers, we aim to identify important open issues in the quest for source localization.
PyEEG: An Open Source Python Module for EEG/MEG Feature Extraction  [PDF]
Forrest Sheng Bao,Xin Liu,Christina Zhang
Computational Intelligence and Neuroscience , 2011, DOI: 10.1155/2011/406391
Abstract: Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in recent years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction. 1. Introduction Computer-aided diagnosis based on EEG has become possible in the last decade for several neurological diseases such as Alzheimer's disease [1, 2] and epilepsy [3, 4]. Implemented systems can be very useful in the early diagnosis of those diseases. For example, traditional epilepsy diagnosis may require trained physicians to visually screen lengthy EEG records, whereas computer-aided systems can shorten this time-consuming procedure by detecting and picking out EEG segments of interest to physicians [5, 6]. On top of that, computers can extend our ability to analyze signals. Recently, researchers have developed systems [3, 4, 7, 8] that aim to use arbitrary interictal (i.e., non-seizure) EEG records for epilepsy diagnosis in cases where it is difficult for physicians to make diagnostic decisions by visual inspection. In addition to analyzing existing signals, this computer-based approach can help us model the brain and predict future signals, for example, seizure prediction [9, 10]. All the above systems rely on characterizing the EEG signal in terms of certain features, a step known as feature extraction. EEG features can come from different fields that study time series: power spectral density from signal processing, fractal dimensions from computational geometry, entropies from information theory, and so forth. An open source tool that can extract EEG features would benefit the computational neuroscience community, since feature extraction is repeatedly invoked in the analysis of EEG signals. Because of Python's increasing popularity in scientific computing, and especially in computational neuroscience, a Python module for EEG feature extraction would be highly useful. In response, we have developed PyEEG, a Python module for EEG feature extraction, and have tested it in our previous epileptic EEG research [3, 8, 11]. Compared to other popular programming languages in
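To illustrate the kinds of features PyEEG targets, here is a minimal sketch that computes relative band power and spectral entropy with numpy/scipy on a synthetic signal; the band definitions and signal are assumptions, and this does not reproduce PyEEG's own API.

```python
# Illustration of two common EEG features of the kind PyEEG provides: relative band
# power (from the power spectral density) and spectral entropy. Computed with
# numpy/scipy for self-containment; this is not PyEEG's own API.
import numpy as np
from scipy.signal import welch

fs = 256.0                                   # hypothetical sampling rate (Hz)
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # 10 Hz rhythm + noise

freqs, psd = welch(eeg, fs=fs, nperseg=512)  # power spectral density estimate
df = freqs[1] - freqs[0]

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
total_power = psd.sum() * df
rel_power = {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df / total_power
             for name, (lo, hi) in bands.items()}

p = psd / psd.sum()                          # normalized spectrum treated as a distribution
spectral_entropy = -np.sum(p * np.log2(p + 1e-24))
print(rel_power, spectral_entropy)
```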
Source Separation and Higher-Order Causal Analysis of MEG and EEG  [PDF]
Kun Zhang,Aapo Hyvarinen
Computer Science , 2012,
Abstract: Separation of sources and analysis of their connectivity have been important topics in EEG/MEG analysis. To solve this problem in an automatic manner, we propose a two-layer model, in which the sources are conditionally uncorrelated with each other, but not independent; the dependence is caused by the causality in their time-varying variances (envelopes). The model is identified in two steps. We first propose a new source separation technique which takes into account the autocorrelations (which may be time-varying) and time-varying variances of the sources. The causality in the envelopes is then discovered by exploiting a special kind of multivariate GARCH (generalized autoregressive conditional heteroscedasticity) model. The resulting causal diagram gives the effective connectivity between the separated sources; in our experimental results on MEG data, sources with similar functions are grouped together, with negative influences between groups, and the groups are connected via some interesting sources.
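A minimal sketch of the second analysis layer described here, causal structure between source envelopes; the envelopes are taken from the analytic signal, and a simple lag-one linear regression on log-envelopes stands in for the paper's multivariate GARCH model, which is not reproduced.

```python
# Sketch of envelope-based influence between (already separated) sources.
# The envelopes come from the analytic signal; a lag-one regression between
# log-envelopes is a crude stand-in for the multivariate GARCH causal model.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
n_samples, n_sources = 5000, 3
sources = rng.standard_normal((n_samples, n_sources))   # stand-in separated sources

envelopes = np.abs(hilbert(sources, axis=0))            # time-varying amplitude per source
log_env = np.log(envelopes + 1e-12)

# Regress each source's log-envelope on the lagged log-envelopes of all sources;
# off-diagonal coefficients give a rough directed-influence estimate.
past, present = log_env[:-1], log_env[1:]
coef, _, _, _ = np.linalg.lstsq(past, present, rcond=None)
print(coef)   # coef[i, j]: influence of source i's past envelope on source j
```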
On the Use of EEG or MEG Brain Imaging Tools in Neuromarketing Research  [PDF]
Giovanni Vecchiato,Laura Astolfi,Fabrizio De Vico Fallani,Jlenia Toppi,Fabio Aloise,Francesco Bez,Daming Wei,Wanzeng Kong,Jounging Dai,Febo Cincotti,Donatella Mattia,Fabio Babiloni
Computational Intelligence and Neuroscience , 2011, DOI: 10.1155/2011/643489
Abstract: Here we present an overview of published papers of interest for marketing research that employ electroencephalography (EEG) and magnetoencephalography (MEG) methods. The interest in these methodologies lies in their high temporal resolution, in contrast to functional magnetic resonance imaging (fMRI), which is also widely used in marketing research. In addition, EEG and MEG technologies have greatly improved their spatial resolution in recent decades with the introduction of advanced signal processing methodologies. By presenting data gathered through MEG and high-resolution EEG, we show what kind of information can be gathered with these methodologies while participants watch marketing-relevant stimuli. This information relates to the memorization and pleasantness of such stimuli. We note that temporal and frequency patterns of brain signals can provide descriptors conveying information about the cognitive and emotional processes of subjects observing commercial advertisements. This information could be unobtainable with the tools commonly used in standard marketing research. We also show an example of how an EEG methodology can be used to analyze cultural differences in the reception of video commercials for carbonated beverages in Western and Eastern countries. 1. Introduction In the scientific literature, the most accepted definition of neuromarketing is that it is a field of study concerning the application of neuroscientific methods to analyze and understand human behaviour related to markets and marketing exchanges [1]. Neuroscientific methodology now includes powerful brain imaging tools based on the gathering of hemodynamic or electromagnetic signals related to human brain activity during the performance of a task relevant to marketing objectives. The reason marketing researchers are interested in brain imaging tools, instead of simply asking people about their preferences for marketing stimuli, arises from the assumption that people cannot (or do not want to) fully explain their preferences when explicitly asked. Researchers in the field hypothesize that neuroimaging tools can access information within the consumer's brain during the formation of a preference or the observation of a commercial advertisement. Whether this information could be useful to further promote the product is still a matter of debate in the marketing literature. From the marketing researchers' point of view, there is the hope that this body of