Search Results: 1-10 of 3055 matches for "Francesca Biagioni"
All listed articles are free for downloading (OA Articles)
Hyperalgesic activity of kisspeptin in mice
Simona Spampinato, Angela Trabucco, Antonella Biasiotta, Francesca Biagioni, Giorgio Cruccu, Agata Copani, William H Colledge, Maria Sortino, Ferdinando Nicoletti, Santina Chiechio
Molecular Pain, 2011, DOI: 10.1186/1744-8069-7-90
Abstract: Immunofluorescent staining of mouse skin showed the presence of GPR54 receptors in PGP9.5-positive sensory fibers. Intraplantar injection of kisspeptin (1 or 3 nmol/5 μl) induced a small nocifensive response in naive mice and lowered the thermal pain threshold in the hot plate test. Both intraplantar and intrathecal (0.5 or 1 nmol/3 μl) injection of kisspeptin caused hyperalgesia in the first and second phases of the formalin test, whereas the GPR54 antagonist p234 (0.1 or 1 nmol) caused robust analgesia. Intraplantar injection of kisspeptin combined with formalin enhanced TRPV1 phosphorylation at Ser800 at the injection site and increased ERK1/2 phosphorylation in the ipsilateral dorsal horn compared with naive mice and mice treated with formalin alone. These data demonstrate for the first time that kisspeptin regulates pain sensitivity in rodents and suggest that peripheral GPR54 receptors could be targeted by novel drugs in the treatment of inflammatory pain. Kisspeptin is a 54-amino-acid peptide originally discovered for its activity as a metastasis suppressor [1]. It is encoded by the Kiss1 gene as a 145-amino-acid precursor protein and cleaved to a 54-amino-acid protein, as well as into shorter products (kisspeptin-10, -13, -14), known to play a critical role in the neuroendocrine regulation of reproduction [2-5]. In the brain, kisspeptin is localized not only in areas involved in gonadotropin secretion, but also in other regions such as the amygdala, hippocampus, and periaqueductal gray [6,7]. Its action is mediated by a 7-TM receptor named GPR54, also known as KISS1R, which is coupled to polyphosphoinositide hydrolysis via a Gq/11 GTP-binding protein [2,8]. Loss-of-function mutations of GPR54 cause a non-Kallmann variant of hypogonadotropic hypogonadism in humans (i.e., hypogonadotropic hypogonadism without anosmia) [2,9]. Interestingly, the expression of kisspeptin and GPR54 is not restricted to the hypothalamus.
Relatively high levels of kisspeptin and GPR5
APEnet+: high bandwidth 3D torus direct network for petaflops scale commodity clusters
Roberto Ammendola, Andrea Biagioni, Ottorino Frezza, Francesca Lo Cicero, Alessandro Lonardo, Pier Stanislao Paolucci, Davide Rossetti, Andrea Salamon, Gaetano Salina, Francesco Simula, Laura Tosoratto, Piero Vicini
Physics, 2011, DOI: 10.1088/1742-6596/331/5/052029
Abstract: We describe the APElink+ board, a PCIe interconnect adapter featuring the latest advances in wire speed and interface technology, hardware support for an RDMA programming model, and experimental acceleration of GPU networking. This design allows us to build a low-latency, high-bandwidth PC cluster around the APEnet+ network, the new generation of our cost-effective cluster network architecture, scalable to tens of thousands of nodes. Test results and a characterization of data transmission on a complete testbench, based on a commercial development card mounting an Altera FPGA, are provided.
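The APEnet+ fabric is a 3D torus direct network: every node has exactly six neighbours, one per direction, with periodic wrap-around at the grid edges. A minimal sketch of that addressing scheme, assuming illustrative grid sizes and an invented function name (not taken from the APEnet+ sources):

```python
# Hypothetical sketch of 3D-torus addressing with six bidirectional
# links (X+, X-, Y+, Y-, Z+, Z-); names and sizes are illustrative.

def torus_neighbors(x, y, z, dims):
    """Return the six nearest neighbours of node (x, y, z) on a
    3D torus with periodic wrap-around in every dimension."""
    lx, ly, lz = dims
    return [
        ((x + 1) % lx, y, z),  # X+
        ((x - 1) % lx, y, z),  # X-
        (x, (y + 1) % ly, z),  # Y+
        (x, (y - 1) % ly, z),  # Y-
        (x, y, (z + 1) % lz),  # Z+
        (x, y, (z - 1) % lz),  # Z-
    ]

# Each node needs exactly six links regardless of cluster size,
# which is what makes the direct-network design scalable.
print(torus_neighbors(0, 0, 0, (4, 4, 4)))
```

The modulo wrap is what distinguishes a torus from a plain mesh: edge nodes keep their full complement of six links.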
Architectural improvements and 28 nm FPGA implementation of the APEnet+ 3D Torus network for hybrid HPC systems
Roberto Ammendola, Andrea Biagioni, Ottorino Frezza, Francesca Lo Cicero, Pier Stanislao Paolucci, Alessandro Lonardo, Davide Rossetti, Francesco Simula, Laura Tosoratto, Piero Vicini
Physics, 2013, DOI: 10.1088/1742-6596/513/5/052002
Abstract: Modern Graphics Processing Units (GPUs) are now considered accelerators for general-purpose computation. A tight interaction between the GPU and the interconnection network is the strategy for expressing the full capability-computing potential of a multi-GPU system on large HPC clusters; that is why an efficient and scalable interconnect is a key technology for finally delivering GPUs to scientific HPC. In this paper we show the latest architectural and performance improvements of the APEnet+ network fabric, an FPGA-based PCIe board with 6 fully bidirectional off-board links offering 34 Gbps of raw bandwidth per direction, and x8 Gen2 bandwidth towards the host PC. The board implements a Remote Direct Memory Access (RDMA) protocol that leverages the peer-to-peer (P2P) capabilities of Fermi- and Kepler-class NVIDIA GPUs to obtain true zero-copy, low-latency GPU-to-GPU transfers. Finally, we report on the development activities for 2013, focusing on the adoption of the latest-generation 28 nm FPGAs and the preliminary tests performed on this new platform.
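The quoted per-link figure implies a simple aggregate: 6 fully bidirectional links at 34 Gbps raw per direction. A quick sanity check of the raw off-board bandwidth per board (arithmetic only, not a measured figure):

```python
# Back-of-the-envelope aggregate from the figures quoted in the abstract:
# 6 fully bidirectional links x 34 Gbps raw per direction.
links = 6
gbps_per_direction = 34

aggregate_one_way = links * gbps_per_direction   # all links, one direction
aggregate_bidi = aggregate_one_way * 2           # both directions at once
print(aggregate_one_way, aggregate_bidi)         # 204 408
```

Raw link rate is an upper bound; protocol overhead and the x8 Gen2 host interface bound the bandwidth actually deliverable to the PC.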
Impact of exponential long range and Gaussian short range lateral connectivity on the distributed simulation of neural networks including up to 30 billion synapses
Elena Pastorelli, Pier Stanislao Paolucci, Roberto Ammendola, Andrea Biagioni, Ottorino Frezza, Francesca Lo Cicero, Alessandro Lonardo, Michele Martinelli, Francesco Simula, Piero Vicini
Computer Science, 2015
Abstract: Recent experimental neuroscience studies are pointing out the role of long-range intra-areal connectivity, which can be modeled by a distance-dependent exponential decay of the synaptic probability distribution. This short report provides a preliminary measure of the impact of exponentially decaying lateral connectivity, compared to that of shorter-range Gaussian decays, on the scaling behaviour and memory occupation of a distributed spiking neural network simulator (DPSNN). Two-dimensional grids of cortical columns composed of point-like spiking neurons were connected by up to 30 billion synapses using exponential and Gaussian connectivity models. Up to 1024 hardware cores, hosted on a 64-node server platform, executed the MPI processes composing the distributed simulator. The hardware platform was a cluster of IBM NX360 M5 16-core compute nodes, each containing two Intel Xeon Haswell 8-core E5-2630 v3 processors clocked at 2.40 GHz, interconnected through an InfiniBand network. This study was conducted in the framework of the CORTICONIC FET project, also in view of the next-to-start activities foreseen as part of the Human Brain Project (HBP), SubProject 3 (Cognitive and Systems Neuroscience), WaveScalES work package.
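The two connectivity models being compared can be written down directly: an exponential kernel p(d) ∝ exp(-d/λ) against a Gaussian p(d) ∝ exp(-d²/2σ²). A minimal sketch with illustrative decay constants (not the values used in the DPSNN runs) shows why the exponential model is heavier at scale:

```python
import math

# Sketch of the two lateral-connectivity kernels compared in the report.
# Lambda and sigma below are illustrative, not the simulation parameters.

def p_exponential(d, lam=1.0):
    """Connection probability with exponential distance decay."""
    return math.exp(-d / lam)

def p_gaussian(d, sigma=1.0):
    """Connection probability with Gaussian distance decay."""
    return math.exp(-d * d / (2.0 * sigma * sigma))

# With lambda = sigma = 1, at d = 4 the exponential kernel is ~e^-4
# while the Gaussian has fallen to ~e^-8: the exponential tail creates
# far more long-range synapses, hence more inter-process traffic and
# memory occupation in a distributed simulation.
for d in (1, 2, 4):
    print(d, p_exponential(d), p_gaussian(d))
```

The heavier exponential tail is precisely what stresses the simulator's scaling behaviour: long-range synapses tend to cross process boundaries.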
The Distributed Network Processor: a novel off-chip and on-chip interconnection network architecture
Andrea Biagioni, Francesca Lo Cicero, Alessandro Lonardo, Pier Stanislao Paolucci, Mersia Perra, Davide Rossetti, Carlo Sidore, Francesco Simula, Laura Tosoratto, Piero Vicini
Computer Science, 2012
Abstract: One of the most demanding challenges for designers of parallel computing architectures is to deliver an efficient network infrastructure providing low-latency, high-bandwidth communications while preserving scalability. Besides off-chip communications between processors, recent multi-tile (i.e., multi-core) architectures face the challenge of an efficient on-chip interconnection network between processor tiles. In this paper, we present a configurable and scalable architecture, based on our Distributed Network Processor (DNP) IP Library, targeting systems ranging from single MPSoCs to massive HPC platforms. The DNP provides inter-tile services for both on-chip and off-chip communications with a uniform RDMA-style API, over a multi-dimensional direct network with a (possibly) hybrid topology.
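The key idea is a single one-sided put/get interface that looks the same whether the target tile is on the same chip or across the network. A toy sketch of such a uniform RDMA-style API; the class and method names are invented for illustration and are not the DNP IP Library interface:

```python
# Illustrative sketch of a uniform one-sided RDMA-style interface of the
# kind the DNP exposes for both on-chip and off-chip transfers.
# Remote memory is modelled here as a plain dict of address -> buffer.

class RdmaEndpoint:
    """Toy endpoint standing in for one tile's network interface."""

    def __init__(self, tile_id):
        self.tile_id = tile_id
        self.memory = {}

    def rdma_put(self, remote, addr, data):
        # One-sided write: data lands directly in the remote tile's
        # memory, with no matching receive call on the target.
        remote.memory[addr] = bytes(data)

    def rdma_get(self, remote, addr):
        # One-sided read from the remote tile's memory.
        return remote.memory.get(addr)

# Same calls whether 'remote' is an on-chip tile or an off-chip node.
a, b = RdmaEndpoint(0), RdmaEndpoint(1)
a.rdma_put(b, 0x1000, b"payload")
print(a.rdma_get(b, 0x1000))  # b'payload'
```

The one-sided semantics is what lets the same API cover both domains: the initiator supplies the target address, so no software on the target tile needs to participate in the transfer.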
'Mutual Watch-dog Networking': Distributed Awareness of Faults and Critical Events in Petascale/Exascale systems
Roberto Ammendola, Andrea Biagioni, Ottorino Frezza, Francesca Lo Cicero, Alessandro Lonardo, Pier Stanislao Paolucci, Davide Rossetti, Francesco Simula, Laura Tosoratto, Piero Vicini
Computer Science, 2013
Abstract: Many tile systems require techniques to increase component resilience and control the FIT (Failures In Time) rate. When scaling to peta- and exa-scale systems, the FIT rate may become unacceptable due to the sheer number of components, requiring more systemic countermeasures. Thus, the ability to be fault-aware, i.e. to detect and collect information about faults and critical events, is a necessary feature that large-scale distributed architectures must provide in order to apply systemic fault-tolerance techniques. In this context, the LO|FA|MO approach is a way to obtain systemic fault awareness by implementing a mutual watchdog mechanism and guaranteeing fault detection in a no-single-point-of-failure fashion. This document contains the specification and implementation details of this approach, in the shape of a technical report.
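The mutual watchdog idea can be sketched in a few lines: each node timestamps the heartbeats it receives from its peer and flags a fault when the peer falls silent beyond a timeout. Because both sides watch each other, detection has no single point of failure. Names and timings below are illustrative; the actual LO|FA|MO mechanism lives in hardware/firmware:

```python
# Minimal sketch of a mutual watch-dog: peers exchange heartbeats and
# each side declares the other faulty after a silence longer than TIMEOUT.

class WatchdogPeer:
    TIMEOUT = 3.0  # seconds of silence before flagging a fault (illustrative)

    def __init__(self, name):
        self.name = name
        self.last_seen = {}  # peer name -> timestamp of last heartbeat

    def heartbeat_from(self, peer_name, now):
        """Record a heartbeat received from a peer at time 'now'."""
        self.last_seen[peer_name] = now

    def check(self, peer_name, now):
        """True if the peer is considered alive at time 'now'."""
        last = self.last_seen.get(peer_name)
        return last is not None and (now - last) <= self.TIMEOUT

# Two peers watch each other symmetrically.
a, b = WatchdogPeer("node-a"), WatchdogPeer("node-b")
a.heartbeat_from("node-b", now=10.0)
b.heartbeat_from("node-a", now=10.0)
print(a.check("node-b", now=12.0))  # True: heartbeat within timeout
print(a.check("node-b", now=20.0))  # False: node-b considered faulty
```

A real deployment would propagate the fault report to a supervisor so systemic countermeasures (task migration, link rerouting) can be applied.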
A heterogeneous many-core platform for experiments on scalable custom interconnects and management of fault and critical events, applied to many-process applications: Vol. II, 2012 technical report
Roberto Ammendola, Andrea Biagioni, Ottorino Frezza, Werner Geurts, Gert Goossens, Francesca Lo Cicero, Alessandro Lonardo, Pier Stanislao Paolucci, Davide Rossetti, Francesco Simula, Laura Tosoratto, Piero Vicini
Computer Science, 2013
Abstract: This is the second of a planned collection of four yearly volumes describing the deployment of a heterogeneous many-core platform for experiments on scalable custom interconnects and the management of faults and critical events, applied to many-process applications. This volume covers several topics, among which: 1) a system for awareness of faults and critical events (named LO|FA|MO) on experimental heterogeneous many-core hardware platforms; 2) the integration and test of the experimental heterogeneous many-core hardware platform QUoNG, based on the APEnet+ custom interconnect; 3) the design of a software-programmable Distributed Network Processor (DNP) architecture using ASIP technology; 4) the initial stages of design of a new DNP generation on a 28 nm FPGA. These developments were performed in the framework of the EURETILE European project under Grant Agreement no. 247846.
Scaling to 1024 software processes and hardware cores of the distributed simulation of a spiking neural network including up to 20G synapses
Elena Pastorelli, Pier Stanislao Paolucci, Roberto Ammendola, Andrea Biagioni, Ottorino Frezza, Francesca Lo Cicero, Alessandro Lonardo, Michele Martinelli, Francesco Simula, Piero Vicini
Computer Science, 2015
Abstract: This short report describes the scaling, up to 1024 software processes and hardware cores, of a distributed simulator of plastic spiking neural networks. A previous report demonstrated good scalability of the simulator up to 128 processes. Here we extend the speed-up measurements and the strong and weak scaling analysis of the simulator to the range between 1 and 1024 software processes and hardware cores. We simulated two-dimensional grids of cortical columns including up to ~20G synapses connecting ~11M neurons. The neural network was distributed over a set of MPI processes, and the simulations were run on a server platform composed of up to 64 dual-socket nodes, each socket equipped with an Intel Haswell E5-2630 v3 processor (8 cores at 2.4 GHz). All nodes are interconnected through an InfiniBand network. The DPSNN simulator has been developed by INFN in the framework of the EURETILE and CORTICONIC European FET projects and will be used by the WaveScalES team in the framework of the Human Brain Project (HBP), SubProject 2 (Cognitive and Systems Neuroscience). This report lays the groundwork for a more thorough comparison with the neural simulation tool NEST.
Power, Energy and Speed of Embedded and Server Multi-Cores applied to Distributed Simulation of Spiking Neural Networks: ARM in NVIDIA Tegra vs Intel Xeon quad-cores
Pier Stanislao Paolucci, Roberto Ammendola, Andrea Biagioni, Ottorino Frezza, Francesca Lo Cicero, Alessandro Lonardo, Michele Martinelli, Elena Pastorelli, Francesco Simula, Piero Vicini
Computer Science, 2015
Abstract: This short note compares the instantaneous power, total energy consumption, execution time, and energetic cost per synaptic event of a spiking neural network simulator (DPSNN-STDP), distributed over MPI processes, when executed either on an embedded platform (based on dual-socket quad-core ARM nodes) or on a server platform (an INTEL-based dual-socket quad-core platform). We also compare these measures with those reported by leading custom and semi-custom designs: TrueNorth and SpiNNaker. In summary, we observed that: 1) we spent 2.2 microjoules per simulated synaptic event on the "embedded platform", approx. 4.4 times lower than on the "server platform"; 2) the instantaneous power consumption of the "embedded platform" was 14.4 times better than the server's; 3) the server platform is a factor of 3.3 faster. The "embedded platform" is made of NVIDIA Jetson TK1 boards, interconnected by Ethernet, each mounting a Tegra K1 chip including a quad-core ARM Cortex-A15 at 2.3 GHz. The "server platform" is based on dual-socket quad-core Intel Xeon CPUs (E5620 at 2.4 GHz). The measures were obtained with the DPSNN-STDP simulator (Distributed Simulator of Polychronous Spiking Neural Networks with synaptic Spike-Timing-Dependent Plasticity) developed by INFN, which has already proved its efficient scalability and execution speed-up on hundreds of similar "server" cores and MPI processes, applied to neural nets composed of several billion synapses.
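The headline metric is straightforward arithmetic: energy per synaptic event is average power times execution time, divided by the number of events. The figures below only illustrate the formula (they are chosen to land near the 2.2 μJ embedded result, not taken from the report's measurements):

```python
# Energetic cost per synaptic event = (average power x runtime) / events.
# Input numbers below are illustrative, not the measured Tegra/Xeon values.

def joules_per_event(avg_power_w, runtime_s, synaptic_events):
    """Total energy (J) divided by the number of simulated events."""
    return avg_power_w * runtime_s / synaptic_events

# e.g. a board drawing 10 W for 100 s while processing 4.5e8 events:
cost = joules_per_event(10.0, 100.0, 4.5e8)
print(f"{cost * 1e6:.1f} microjoules/event")  # prints "2.2 microjoules/event"
```

Note how the three reported ratios interact: the embedded platform draws 14.4x less power but runs 3.3x longer, and 14.4 / 3.3 ≈ 4.4, consistent with the quoted energy advantage.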
Distributed simulation of polychronous and plastic spiking neural networks: strong and weak scaling of a representative mini-application benchmark executed on a small-scale commodity cluster
Pier Stanislao Paolucci, Roberto Ammendola, Andrea Biagioni, Ottorino Frezza, Francesca Lo Cicero, Alessandro Lonardo, Elena Pastorelli, Francesco Simula, Laura Tosoratto, Piero Vicini
Computer Science, 2013
Abstract: We introduce a natively distributed mini-application benchmark representative of plastic spiking neural network simulators. It can be used to measure the performance of existing computing platforms and to drive the development of future parallel/distributed computing systems dedicated to the simulation of plastic spiking networks. The mini-application is designed to generate spiking behaviour and synaptic connectivity that do not change when the number of hardware processing nodes is varied, simplifying the quantitative study of scalability on commodity and custom architectures. Here we present the strong and weak scaling and the profiling of the computational and communication components of the DPSNN-STDP benchmark (Distributed Simulation of Polychronous Spiking Neural Networks with synaptic Spike-Timing-Dependent Plasticity). In this first test, we used the benchmark to exercise a small-scale cluster of commodity processors (varying the number of physical cores used from 1 to 128), interconnected through a commodity network. Bidimensional grids of columns composed of Izhikevich neurons projected synapses locally and toward the first, second, and third neighbouring columns. The size of the simulated network varied from 6.6 Giga synapses down to 200 K synapses. The code proved to be fast and scalable: 10 wall-clock seconds were required to simulate one second of activity and plasticity (per Hertz of average firing rate) of a network composed of 3.2 G synapses running on 128 hardware cores clocked at 2.4 GHz. The mini-application has been designed to be easily interfaced with standard and custom software and hardware communication interfaces. It was designed from its foundation to be natively distributed and parallel, and should not pose major obstacles to distribution and parallelization on several platforms.
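The strong- and weak-scaling analyses mentioned above reduce to two standard efficiency formulas. A minimal sketch, with illustrative timings rather than measured DPSNN-STDP values:

```python
# Standard scaling metrics used by benchmark reports like this one.
# t1 = runtime on 1 core, tn = runtime on n cores; timings are illustrative.

def strong_scaling_efficiency(t1, tn, n):
    """Fixed total problem size: ideal runtime on n cores is t1 / n,
    so efficiency is the achieved speed-up divided by n."""
    return (t1 / tn) / n

def weak_scaling_efficiency(t1, tn):
    """Problem size grows proportionally with n: ideal runtime stays
    constant at t1, so efficiency is simply t1 / tn."""
    return t1 / tn

# e.g. 1000 s on 1 core vs 10 s on 128 cores:
print(strong_scaling_efficiency(1000.0, 10.0, 128))  # 0.78125, i.e. ~78%
```

Strong scaling stresses communication (each core's share of a fixed network shrinks), while weak scaling stresses memory and aggregate bandwidth, which is why reports typically quote both.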

Copyright © 2008-2017 Open Access Library. All rights reserved.