Search Results: 1 - 10 of 20955 matches for "Jean-Baptiste Poline"
All listed articles are free for downloading (OA Articles)
Data sharing and publishing in the field of neuroimaging
Janis L Breeze, Jean-Baptiste Poline, David N Kennedy
GigaScience, 2012, DOI: 10.1186/2047-217x-1-9
Abstract: One crucial issue is how producers of shared data can and should be acknowledged, and how this important component of science will benefit individuals in their academic careers. While we encourage the field to make use of these opportunities for data publishing, it is critical that standards for metadata, provenance, and other descriptors are used. This commentary outlines the efforts of the International Neuroinformatics Coordinating Facility Task Force on Neuroimaging Datasharing to coordinate and establish such standards, as well as potential ways forward to relieve the issues that researchers who produce these massive, reusable community resources face when making the data rapidly and freely available to the public. Both the technical and human aspects of data sharing must be addressed if we are to move forward.

With the worldwide push for more open science and data sharing [1], it is an ideal time to consider the current state of data sharing in neuroscience, and in particular neuroimaging research. A huge amount of neuroimaging data has been acquired around the world; a recent literature search on PubMed led to an estimate of 12,000 datasets, or 144,000 scans (around 55 petabytes of data), over the past 10 years, yet only a few percent of these data are available in public repositories. Over the past two years, the International Neuroinformatics Coordinating Facility (http://www.incf.org) has investigated barriers to data sharing through task force working groups and public workshops, and has identified a number of roadblocks, many of which are readily addressable, that impede researchers from both sharing and making use of existing shared data. These include a lack of simple tools for finding, uploading, and downloading shared data; uncertainty about how best to organize and prepare data for sharing; and concerns about data attribution. Many researchers are also wary of data sharing because of confusion about institutional human research subject protection and the
A simple tool for neuroimaging data sharing
Christian Haselgrove, Jean-Baptiste Poline, David N. Kennedy
Frontiers in Neuroinformatics, 2014, DOI: 10.3389/fninf.2014.00052
Abstract: Data sharing is becoming increasingly common, but despite encouragement and facilitation by funding agencies, journals, and some research efforts, most neuroimaging data acquired today is still not shared, owing to persistent political, financial, social, and technical barriers. In particular, technical solutions are scarce for researchers who are not part of larger efforts with dedicated sharing infrastructures, and social barriers such as the time commitment required to share can keep data from becoming publicly available. We present a system for sharing neuroimaging data, designed to be simple to use and to provide benefit to the data provider. The system consists of a server at the International Neuroinformatics Coordinating Facility (INCF) and user tools for uploading data to the server. The primary design principle for the user tools is ease of use: the user identifies a directory containing Digital Imaging and Communications in Medicine (DICOM) data, provides their INCF Portal authentication, and provides identifiers for the subject and imaging session. The user tool anonymizes the data and sends it to the server. The server then runs quality control routines on the data, and the data and the quality control reports are made public. The user retains control of the data and may change the sharing policy as needed. The result is that in a few minutes of the user's time, DICOM data can be anonymized and made publicly available, and an initial quality control assessment can be performed. The system is currently functional, and user tools and access to the public image database are available at http://xnat.incf.org/.
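The one-click workflow this abstract describes (point the tool at a DICOM directory, de-identify the files, then push them to the INCF server together with INCF Portal credentials and subject/session identifiers) can be sketched roughly as below. This is a minimal illustration built on the generic pydicom and requests libraries, not the project's actual upload client; the endpoint URL, form fields, and the small set of anonymized tags are assumptions made for the example.

    # Hypothetical sketch of an anonymize-and-upload client; not the INCF tool itself.
    import os
    import requests
    import pydicom
    from pydicom.errors import InvalidDicomError

    UPLOAD_URL = "https://xnat.incf.org/upload"      # placeholder endpoint, not a documented route

    def anonymize(ds, subject_id):
        """Blank a few directly identifying DICOM fields (illustrative subset only)."""
        ds.PatientName = subject_id
        ds.PatientID = subject_id
        for keyword in ("PatientBirthDate", "PatientAddress", "OtherPatientIDs"):
            if hasattr(ds, keyword):
                setattr(ds, keyword, "")
        return ds

    def upload_session(dicom_dir, subject_id, session_id, user, password):
        """De-identify every DICOM file in dicom_dir and POST it to the server."""
        for name in sorted(os.listdir(dicom_dir)):
            path = os.path.join(dicom_dir, name)
            try:
                ds = pydicom.dcmread(path)
            except InvalidDicomError:
                continue                             # skip non-DICOM files
            anonymize(ds, subject_id)
            tmp = path + ".anon"
            ds.save_as(tmp)                          # write the de-identified copy
            with open(tmp, "rb") as fh:
                resp = requests.post(
                    UPLOAD_URL,
                    auth=(user, password),           # INCF Portal credentials
                    data={"subject": subject_id, "session": session_id},
                    files={"file": (name, fh, "application/dicom")},
                )
            resp.raise_for_status()
            os.remove(tmp)

A real client would de-identify a much larger set of tags and handle retries; the point here is only the shape of the identify-anonymize-upload loop.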
Which fMRI clustering gives good brain parcellations?
Bertrand Thirion, Gaël Varoquaux, Jean-Baptiste Poline
Frontiers in Neuroscience, 2014, DOI: 10.3389/fnins.2014.00167
Abstract: Analysis and interpretation of neuroimaging data often require one to divide the brain into a number of regions, or parcels, with homogeneous characteristics, whether these regions are defined in the brain volume or on the cortical surface. While predefined brain atlases do not adapt to the signal in individual subjects' images, parcellation approaches use brain activity (e.g., found in some functional contrasts of interest) and clustering techniques to define regions with some degree of signal homogeneity. In this work, we address the question of which clustering technique is appropriate and how to optimize the corresponding model. We use two principled criteria: goodness of fit (accuracy), and reproducibility of the parcellation across bootstrap samples. We study these criteria on both simulated data and two task-based functional Magnetic Resonance Imaging datasets, for the Ward, spectral, and K-means clustering algorithms. We show that Ward's clustering generally performs better than the alternative methods with regard to reproducibility and accuracy, and that the two criteria diverge regarding the preferred models (reproducibility leading to more conservative solutions), thus deferring the practical decision to a higher-level alternative, namely the choice of a trade-off between accuracy and stability.
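As a toy illustration of the reproducibility criterion used in this paper, the snippet below parcellates a simulated 2D "slice" with spatially constrained Ward clustering and with K-means, then scores the agreement between parcellations computed on two disjoint halves of the subjects using the adjusted Rand index. The simulated data, grid size, number of parcels, and the choice of the adjusted Rand index are assumptions made for the example, not the paper's actual pipeline.

    # Toy reproducibility comparison of Ward vs. K-means parcellation (scikit-learn).
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering, KMeans
    from sklearn.feature_extraction.image import grid_to_graph
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(0)
    nx, ny, n_subjects, k = 20, 20, 40, 8

    # Simulated subject maps: a smooth spatial pattern plus per-subject noise.
    xx, yy = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny), indexing="ij")
    pattern = np.sin(4 * np.pi * xx) + np.cos(3 * np.pi * yy)
    data = pattern.ravel()[None, :] + 0.5 * rng.standard_normal((n_subjects, nx * ny))

    # Spatial connectivity constraint so Ward produces contiguous parcels.
    connectivity = grid_to_graph(nx, ny)

    def parcellate(subject_maps, method):
        """Cluster voxels (columns of a subjects-by-voxels array) into k parcels."""
        X = subject_maps.T                        # one sample per voxel
        if method == "ward":
            model = AgglomerativeClustering(n_clusters=k, linkage="ward",
                                            connectivity=connectivity)
        else:
            model = KMeans(n_clusters=k, n_init=10, random_state=0)
        return model.fit_predict(X)

    split = rng.permutation(n_subjects)
    half1, half2 = split[:n_subjects // 2], split[n_subjects // 2:]

    for method in ("ward", "kmeans"):
        labels1 = parcellate(data[half1], method)
        labels2 = parcellate(data[half2], method)
        ari = adjusted_rand_score(labels1, labels2)
        print(f"{method}: split-half reproducibility (adjusted Rand index) = {ari:.2f}")

Accuracy (goodness of fit) would be assessed separately, for instance by how well parcel-mean signals explain left-out subjects' data; only the reproducibility side of the trade-off is shown here.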
Comment on “A simple tool for neuroimaging data sharing”
Christian Haselgrove, Jean-Baptiste Poline, David N. Kennedy
Frontiers in Neuroinformatics, 2014, DOI: 10.3389/fninf.2014.00082
Abstract:
Does an Oblique/Slanted Perspective during Virtual Navigation Engage Both Egocentric and Allocentric Brain Strategies?
Julien Barra, Laetitia Laou, Jean-Baptiste Poline, Denis Le Bihan, Alain Berthoz
PLOS ONE, 2012, DOI: 10.1371/journal.pone.0049537
Abstract: Perspective (route or survey) during the encoding of spatial information can influence recall and navigation performance. In our experiment we investigated a third type of perspective, which is a slanted view. This slanted perspective is a compromise between route and survey perspectives, offering both information about landmarks as in route perspective and geometric information as in survey perspective. We hypothesized that the use of slanted perspective would allow the brain to use either egocentric or allocentric strategies during storage and recall. Twenty-six subjects were scanned (3-Tesla fMRI) during the encoding of a path (40-s navigation movie within a virtual city). They were given the task of encoding a segment of travel in the virtual city and of subsequent shortcut-finding for each perspective: route, slanted and survey. The analysis of the behavioral data revealed that perspective influenced response accuracy, with significantly more correct responses for slanted and survey perspectives than for route perspective. Comparisons of brain activation with route, slanted, and survey perspectives suggested that slanted and survey perspectives share common brain activity in the left lingual and fusiform gyri and lead to very similar behavioral performance. Slanted perspective was also associated with similar activation to route perspective during encoding in the right middle occipital gyrus. Furthermore, slanted perspective induced intermediate patterns of activation (in between route and survey) in some brain areas, such as the right lingual and fusiform gyri. Our results suggest that the slanted perspective may be considered as a hybrid perspective. This result offers the first empirical support for the choice to present the slanted perspective in many navigational aids.
Improving accuracy and power with transfer learning using a meta-analytic database
Yannick Schwartz, Gaël Varoquaux, Christophe Pallier, Philippe Pinel, Jean-Baptiste Poline, Bertrand Thirion
Statistics, 2012,
Abstract: Typical cohorts in brain imaging studies are not large enough for systematic testing of all the information contained in the images. To build testable working hypotheses, investigators thus rely on analysis of previous work, sometimes formalized in a so-called meta-analysis. In brain imaging, this approach underlies the specification of regions of interest (ROIs), which are usually selected on the basis of the coordinates of previously detected effects. In this paper, we propose to use a database of images, rather than coordinates, and frame the problem as transfer learning: learning a discriminant model on a reference task in order to apply it to a different but related new task. To facilitate statistical analysis of small cohorts, we use a sparse discriminant model that selects predictive voxels on the reference task and thus provides a principled procedure to define ROIs. The benefits of our approach are twofold. First, it uses the reference database for prediction, i.e., to provide potential biomarkers in a clinical setting. Second, it increases statistical power on the new task. We demonstrate on a set of 18 pairs of functional MRI experimental conditions that our approach gives good prediction. In addition, in a specific transfer situation involving different scanners at different locations, we show that voxel selection based on transfer learning leads to higher detection power on small cohorts.
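The core idea (fit a sparse discriminant model on a large reference dataset, keep the voxels it selects as an ROI, and then test only those voxels on a small new cohort) can be sketched on synthetic data as below. The L1-penalized logistic regression, effect sizes, and Bonferroni comparison are illustrative assumptions, not the paper's exact model or evaluation.

    # Sketch of transfer learning for ROI definition: sparse model on a reference
    # task, then higher-powered testing on a small new cohort. Synthetic data only.
    import numpy as np
    from scipy import stats
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n_voxels = 500
    informative = np.arange(20)                      # voxels that truly carry signal

    # Reference task: a large cohort with two experimental conditions.
    n_ref = 200
    y_ref = rng.integers(0, 2, n_ref)
    X_ref = rng.standard_normal((n_ref, n_voxels))
    X_ref[:, informative] += y_ref[:, None] * 1.0    # condition effect on informative voxels

    # Sparse (L1-penalized) discriminant model: nonzero weights define a data-driven ROI.
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_ref, y_ref)
    roi = np.flatnonzero(clf.coef_.ravel())
    print(f"{roi.size} voxels selected on the reference task")

    # New task: a small cohort with a weaker but related effect on the same voxels.
    n_new = 15
    effects = rng.standard_normal((n_new, n_voxels))
    effects[:, informative] += 0.8

    t_vals, p_vals = stats.ttest_1samp(effects, 0.0, axis=0)

    # Restricting the correction to the transferred ROI means far fewer tests,
    # hence higher detection power on the small cohort.
    hits_whole = int(np.sum(p_vals < 0.05 / n_voxels))
    hits_roi = int(np.sum(p_vals[roi] < 0.05 / max(roi.size, 1)))
    print(f"detections with whole-brain Bonferroni: {hits_whole}")
    print(f"detections with ROI-restricted Bonferroni: {hits_roi}")

In this synthetic setting the ROI-restricted correction typically recovers more of the truly informative voxels than the whole-brain correction, which is the power gain the abstract refers to.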
Fast reproducible identification and large-scale databasing of individual functional cognitive networks
Philippe Pinel, Bertrand Thirion, Sébastien Meriaux, Antoinette Jobert, Julien Serres, Denis Le Bihan, Jean-Baptiste Poline, Stanislas Dehaene
BMC Neuroscience, 2007, DOI: 10.1186/1471-2202-8-91
Abstract: 81 subjects were successfully scanned. Before describing inter-individual variability, we demonstrated in the present study the reliability of individual functional data obtained with this short protocol. Given the anatomical variability, we then needed to describe individual functional networks correctly in a voxel-free space. We therefore applied non-voxel-based methods that automatically extract the main features of individual patterns of activation: group analyses performed on these individual data not only converge with those reported with a more conventional voxel-based random-effect analysis, but also retain information concerning the variance in location and degree of activation across subjects.

This collection of individual fMRI data will help to describe the cerebral inter-subject variability of the correlates of some language, calculation and sensorimotor tasks. In association with demographic, anatomical, behavioral and genetic data, this protocol will serve as the cornerstone to establish a hybrid database of hundreds of subjects suitable to study the range and causes of variation in the cerebral bases of numerous mental processes.

Inter-subject variability is a missing facet of the current neuroimaging literature [1-3], and until recently has been viewed more as a nuisance for brain imaging studies than as a relevant dimension for investigating the mechanisms of human cognition. Indeed, most published studies describe the cerebral bases of various cognitive processes from voxel-based group analyses performed on data from 10–15 subjects. Group analysis of a small collection of brains is intended to ensure that the description of these functional invariants may be extended to other healthy subjects. However, we usually do not know whether a cerebral network involved in a task is homogeneous enough among the healthy population to be analyzed as a single group or whether several groups have to be considered, nor how many subjects are required to correctly describe different sub-group
PyXNAT: XNAT in Python
Yannick Schwartz, Alexis Barbot, Benjamin Thyreau, Vincent Frouin, Gaël Varoquaux, Daniel S. Marcus, Jean-Baptiste Poline
Frontiers in Neuroinformatics, 2012, DOI: 10.3389/fninf.2012.00012
Abstract: As neuroimaging databases grow in size and complexity, the time researchers spend investigating and managing the data increases at the expense of data analysis. As a result, investigators rely more and more heavily on scripting in high-level languages to automate data management and processing tasks. For this, structured and programmatic access to the data store is necessary. Web services are a first step toward this goal, but they lack functionality and ease of use because they provide only low-level interfaces to databases. We introduce here PyXNAT, a Python module that interacts with The Extensible Neuroimaging Archive Toolkit (XNAT) through native Python calls across multiple operating systems. The choice of Python enables PyXNAT to expose the XNAT Web Services and unify their features with a higher-level and more expressive language. PyXNAT gives XNAT users direct access to all the scientific packages in Python. Finally, PyXNAT aims to be efficient and easy to use, both as a back-end library for building XNAT clients and as an alternative front-end from the command line.
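A minimal sketch of typical PyXNAT usage, following its documented interface: connect to a server, then traverse the project/subject/experiment hierarchy with native Python calls instead of raw web-service requests. The server URL, credentials, and project identifier below are placeholders.

    from pyxnat import Interface

    # Open a session on an XNAT server (placeholder URL and credentials).
    central = Interface(server="https://xnat.incf.org",
                        user="my_login", password="my_password")

    # List the project identifiers visible to this account.
    print(central.select.projects().get())

    # Walk part of the project/subject/experiment hierarchy for one project.
    project = central.select.project("MY_PROJECT")
    for subject in project.subjects():
        for experiment in subject.experiments():
            print(subject.label(), experiment.label())

    central.disconnect()

Because these calls return ordinary Python objects, their results can be passed directly to NumPy, pandas, or other scientific packages, which is the integration benefit the abstract emphasizes.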
Une carte de l’élargissement de l’horizon géographique au début du XXe siècle
Jean-Baptiste Arrault
M@ppemonde, 2008,
Abstract: The map entitled "Planisphère montrant l'élargissement progressif de l'horizon géographique" ("Planisphere showing the progressive widening of the geographical horizon"), which is the first figure of Emmanuel de Martonne's Traité de géographie physique, serves several purposes: to illustrate the history of geography and to shed light on the conditions of possibility of a general geography, at a time when the Earth seemed entirely known. It also embodies the growing awareness of a globalization produced by colonization.
NAFTA's Developmental Impact on Mexico: Assessment and Prospects
Jean-Baptiste Velut
IdeAs : Idées d'Amériques, 2011,
Abstract: This article assesses the developmental record of the North American Free Trade Agreement (NAFTA) in Mexico fifteen years after its implementation. After analyzing the evolution of trade and investment flows and their impact on employment and wage levels in the manufacturing and agricultural sectors, the author highlights the successes and limits of the NAFTA integration model. He concludes that while NAFTA should not be seen as a solution to all of Mexico's socio-economic problems, it nonetheless suffers from a "deficient [social] institutionality" that can be addressed through both domestic and supranational reforms. At the domestic level, the Mexican government should rethink its export-led growth strategy and prioritize tax reforms and domestic investments in education and infrastructure. At the supranational level, the NAFTA model should be upgraded to address its social lacunae, especially in the policy spheres of investment, immigration, agriculture, and resource transfers.