Search Results: 1 - 10 of 100 matches for " "
All listed articles are free for downloading (OA Articles)
Page 1 /100
Lila Ghemri, Raji Kannah
International Journal of Cyber-Security and Digital Forensics , 2012,
Abstract: Privacy in data publishing concerns itself with the problem of releasing data to enable its study and analysis while protecting the privacy of the people or the subjects whose data is being released. The main motivation behind this work is the need to comply with HIPAA (Health Insurance Portability and Accountability Act) requirements on preserving patients' privacy before making their data public. In this work, we present a policy-aware system that detects HIPAA privacy rule violations in medical records in textual format and takes remedial steps to mask the attributes that cause the violation, making the records HIPAA-compliant.
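The masking step the abstract describes can be illustrated with a minimal sketch. This is a hypothetical rule-based example covering just two HIPAA "Safe Harbor" identifier types (dates and US-style phone numbers); the paper's actual system is policy-aware and far more complete, and the pattern names and function below are illustrative assumptions, not its API.

```python
import re

# Hypothetical patterns for two identifier categories; real systems
# cover all eighteen Safe Harbor categories and use richer detection.
PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def mask_phi(text: str) -> str:
    """Replace each detected identifier with a category placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Patient seen on 03/14/2011; callback 713-555-0199."
print(mask_phi(record))
# -> Patient seen on [DATE]; callback [PHONE].
```

A placeholder like `[DATE]` (rather than deletion) keeps the record readable for downstream study while removing the protected value.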
Global spatial data infrastructure  [PDF]
Aleksandar Ilić
Journal of the Geographical Institute Jovan Cvijić, SASA , 2009, DOI: 10.2298/ijgi0959179i
Abstract: In this paper, the term and concept of spatial data infrastructure are explained. The role of global spatial data infrastructure in supporting worldwide joint efforts toward sustainable development, environmental protection and efficient decision-making is emphasized. Geo-information technology and spatial data infrastructure play an important role in the modern world by enabling governments, local communities, organizations, the business sector, the academic community and ordinary people to make progress in solving many problems. Most of these problems are regional or global in character. International organizations and institutions around the globe provide and share global spatial data about the state of the globe and its changes, stressing the importance of public access to information and of international cooperation. Building a global spatial data infrastructure requires a culture of joint work and of sharing spatial data as a common good. The Global Map cartographic initiative and the development of geo-information products and services, notably Earth viewers, provide significant support to the global spatial data infrastructure. The development of global spatial data infrastructure is undoubtedly a civilizational step forward, but it further increases concerns about privacy and about public and national security.
Publishing proteomic data
Martin Latterich
Proteome Science , 2006, DOI: 10.1186/1477-5956-4-8
Abstract: Historically, most scientific publications included a detailed methodology section that provided details on the source of reagents, information such as batch or lot numbers, and a description of methodology that would enable another research group to follow the same procedures. Given the same starting material, this practice would allow arriving at identical or very similar data. At the very least, methodology sections should refer to prior publications that provide sufficient experimental detail to allow the reproduction of scientific experiments. Most publications would then display "typical" results, such as photographs or micrographs of the experimental subject, images of detected molecules, or minimally processed data, such as statistically evaluated graphs or tables. These results were displayed together with negative and often positive controls that validate the experiment and reagents. Print media were mostly adequate for publishing these studies, because most studies investigated individual phenomena or molecules. The advent of high-throughput methods in biological experimentation has imposed some unique challenges, both for data presentation in the classical print format and for describing the methodology and data-analysis workflow in sufficient detail to conform to good publication practice. This is especially an issue with proteomic analyses conducted by mass spectrometry [1,2]. Electronic media and public repositories are addressing the need for publishing uninterpreted datasets [3-5], such as raw or minimally processed mass spectrometer data, as well as lists of identified peptides. The remaining challenge is the generation of ontologies and common experimental descriptions that capture the wealth of information that has gone into both the design and the analysis of proteomic experiments. This is ultimately needed when directly comparing multi-centre studies. Much progress has been made by the community to propose data format standards that are compa…
Data hosting infrastructure for primary biodiversity data  [cached]
Anthony Goddard, Nathan Wilson, Phil Cryer, Grant Yamashita
BMC Bioinformatics , 2011, DOI: 10.1186/1471-2105-12-s15-s5
Abstract: Background Today, an unprecedented volume of primary biodiversity data are being generated worldwide, yet significant amounts of these data have been and will continue to be lost after the conclusion of the projects tasked with collecting them. To get the most value out of these data it is imperative to seek a solution whereby these data are rescued, archived and made available to the biodiversity community. To this end, the biodiversity informatics community requires investment in processes and infrastructure to mitigate data loss and provide solutions for long-term hosting and sharing of biodiversity data. Discussion We review the current state of biodiversity data hosting and investigate the technological and sociological barriers to proper data management. We further explore the rescuing and re-hosting of legacy data, the state of existing toolsets and propose a future direction for the development of new discovery tools. We also explore the role of data standards and licensing in the context of data hosting and preservation. We provide five recommendations for the biodiversity community that will foster better data preservation and access: (1) encourage the community's use of data standards, (2) promote the public domain licensing of data, (3) establish a community of those involved in data hosting and archival, (4) establish hosting centers for biodiversity data, and (5) develop tools for data discovery. Conclusion The community's adoption of standards and development of tools to enable data discovery is essential to sustainable data preservation. Furthermore, the increased adoption of open content licensing, the establishment of data hosting infrastructure and the creation of a data hosting and archiving community are all necessary steps towards the community ensuring that data archival policies become standardized.
The data paper: a mechanism to incentivize data publishing in biodiversity science  [cached]
Vishwas Chavan, Lyubomir Penev
BMC Bioinformatics , 2011, DOI: 10.1186/1471-2105-12-s15-s2
Abstract: Background Free and open access to primary biodiversity data is essential for informed decision-making to achieve conservation of biodiversity and sustainable development. However, primary biodiversity data are neither easily accessible nor discoverable. Among several impediments, one is a lack of incentives for data publishers to publish their data resources. One such mechanism currently lacking is recognition through conventional scholarly publication of enriched metadata, which should ensure rapid discovery of 'fit-for-use' biodiversity data resources. Discussion We review the state of the art of data discovery options and the mechanisms in place for incentivizing data publishers' efforts towards easy, efficient and enhanced publishing, dissemination, sharing and re-use of biodiversity data. We propose the establishment of the 'biodiversity data paper' as one possible mechanism to offer scholarly recognition for efforts and investment by data publishers in authoring rich metadata and publishing them as citable academic papers. While detailing the benefits to data publishers, we describe the objectives, workflow and outcomes of the pilot project commissioned by the Global Biodiversity Information Facility in collaboration with scholarly publishers and pioneered by Pensoft Publishers through its journals ZooKeys, PhytoKeys, MycoKeys, BioRisk, NeoBiota, Nature Conservation and the forthcoming Biodiversity Data Journal. We then debate further enhancements of the data paper beyond the pilot project and attempt to forecast the future uptake of data papers as an incentivization mechanism by the stakeholder communities. Conclusions We believe that in addition to recognition for those involved in the data publishing enterprise, data papers will also expedite publishing of fit-for-use biodiversity data resources.
However, uptake and establishment of the data paper as a potential mechanism of scholarly recognition requires a high degree of commitment and investment by the cross-sectional stakeholder communities.
Electronic Publishing Infrastructure (EPI): innovation in academic publishing
Susanna Mornati
Bollettino del CILEA , 2007, DOI: 10.1472/bc.v106i0.1331
Abstract: Italian universities have limited publishing activities for several reasons. The nature of scholarly research and communication is changing dramatically, and innovative systems for publishing data and results are needed. Technological advances in e-publishing help universities take on a more important role. CILEA provides actions and tools to support academic publishing and has signed an agreement with Firenze University Press to develop a platform of services for e-publishing. The project commits the partners to collaborate on innovation in processes and technologies, but cultural change is the real challenge that academic publishing will have to tackle to survive in the near future.
Publishing and linking transport data on the Web  [PDF]
Julien Plu, François Scharffe
Computer Science , 2012,
Abstract: Without Linked Data, transport data is limited to applications exclusively about transport. In this paper, we present a workflow for publishing and linking transport data on the Web, which makes it possible to develop transport applications and to add features built from other datasets, because the transport data will be linked to those datasets. We apply this workflow to two datasets: NEPTUNE, a French standard describing a transport line, and Passim, a directory containing relevant information on transport services in every French city.
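The "publish and link" idea in this abstract can be sketched as emitting RDF triples that tie a transport resource to an external dataset, so that applications can join transport data with other Linked Data. This is a toy illustration only: the URIs, the `example.org` namespace, and the link to a DBpedia city resource are all illustrative assumptions, not taken from the paper's NEPTUNE or Passim datasets.

```python
# Build N-Triples lines by hand; real pipelines would use an RDF library.
def triple(s: str, p: str, o: str) -> str:
    """Format one N-Triples statement with URI subject, predicate, object."""
    return f"<{s}> <{p}> <{o}> ."

stop = "http://example.org/transport/stop/42"  # hypothetical stop URI
triples = [
    # Type the resource in a hypothetical transport vocabulary.
    triple(stop, "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
           "http://example.org/ns#Stop"),
    # Link it to an external dataset so other features can be joined in.
    triple(stop, "http://www.w3.org/2002/07/owl#sameAs",
           "http://dbpedia.org/resource/Montpellier"),
]
print("\n".join(triples))
```

The `owl:sameAs` link is what opens the transport data to features from other datasets, which is the benefit the abstract claims for Linked Data.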
Towards mainstreaming of biodiversity data publishing: recommendations of the GBIF Data Publishing Framework Task Group  [cached]
Tom Moritz, S Krishnan, Dave Roberts, Peter Ingwersen
BMC Bioinformatics , 2011, DOI: 10.1186/1471-2105-12-s15-s1
Abstract: Background Data are the evidentiary basis for scientific hypotheses, analyses and publication, for policy formation and for decision-making. They are essential to the evaluation and testing of results by peer scientists both present and future. There is broad consensus in the scientific and conservation communities that data should be freely, openly available in a sustained, persistent and secure way, and thus standards for 'free' and 'open' access to data have become well developed in recent years. The question of effective access to data remains highly problematic. Discussion Specifically with respect to scientific publishing, the ability to critically evaluate a published scientific hypothesis or scientific report is contingent on the examination, analysis, evaluation and, if feasible, the re-generation of the data on which conclusions are based. It is not coincidental that in the recent 'climategate' controversies, the quality and integrity of data and their analytical treatment were central to the debate. There is recent evidence that even when scientific data are requested for evaluation they may not be available. The history of dissemination of scientific results has been marked by paradigm shifts driven by the emergence of new technologies. In recent decades, the advance of computer-based technology linked to global communications networks has created the potential for broader and more consistent dissemination of scientific information and data. Yet, in this digital era, scientists and conservationists, organizations and institutions have often been slow to make data available. Community studies suggest that the withholding of data can be attributed to a lack of awareness, to a lack of technical capacity, to concerns that data should be withheld for reasons of perceived personal or organizational self-interest, or to a lack of adequate mechanisms for attribution.
Conclusions There is a clear need for institutionalization of a 'data publishing framework' that can address sociocultural, technical-infrastructural, policy, political and legal constraints, as well as addressing issues of sustainability and financial support. To address these aspects of a data publishing framework, a systematic, standard approach to the formal definition and public disclosure of data, in the context of biodiversity data, the Global Biodiversity Information Facility (GBIF, the single inter-governmental body most clearly mandated to undertake such an effort) convened a Data Publishing Framework Task Group. We conceive this data publishing framework as an envir…
Data sharing and publishing in the field of neuroimaging
Janis L Breeze, Jean-Baptiste Poline, David N Kennedy
GigaScience , 2012, DOI: 10.1186/2047-217x-1-9
Abstract: One crucial issue is how producers of shared data can and should be acknowledged, and how this important component of science will benefit individuals in their academic careers. While we encourage the field to make use of these opportunities for data publishing, it is critical that standards for metadata, provenance, and other descriptors are used. This commentary outlines the efforts of the International Neuroinformatics Coordinating Facility Task Force on Neuroimaging Datasharing to coordinate and establish such standards, as well as potential ways forward to relieve the issues that researchers who produce these massive, reusable community resources face when making the data rapidly and freely available to the public. Both the technical and human aspects of data sharing must be addressed if we are to go forward. With the worldwide push for more open science and data sharing [1], it is an ideal time to consider the current state of data sharing in neuroscience, and in particular neuroimaging research. A huge amount of neuroimaging data has been acquired around the world; a recent literature search on PubMed led to an estimate of 12 000 datasets or 144 000 scans (around 55 petabytes of data) over the past 10 years, but only a few percent of such data are available in public repositories. Over the past two years, the International Neuroinformatics Coordinating Facility (http://www.incf.org) has investigated barriers to data sharing through task force working groups and public workshops, and has identified a number of roadblocks, many of which are readily addressable, that impede researchers from both sharing and making use of existing shared data. These include a lack of simple tools for finding, uploading, and downloading shared data; uncertainty about how best to organize and prepare data for sharing; and concerns about data attribution.
Many researchers are also wary of data sharing because of confusion about institutional human research subject protections and the…
Managing Computing Infrastructure for IoT Data  [PDF]
Sapna Tyagi, Ashraf Darwish, Mohammad Yahiya Khan
Advances in Internet of Things (AIT) , 2014, DOI: 10.4236/ait.2014.43005
Abstract: Digital data have become a torrent engulfing every area of business, science and engineering, gushing into every economy, every organization and every user of digital technology. In the age of big data, deriving value and insight from big data through rich analytics has become important for achieving competitiveness, success and leadership in every field. The Internet of Things (IoT) is causing the number and types of products that emit data to grow at an unprecedented rate. Heterogeneity, scale, timeliness, complexity and privacy problems with large data impede progress at all phases of the pipeline that creates value from data. With the push of such massive data, we are entering a new era of computing driven by novel and groundbreaking research on elastic parallelism, partitioning and scalability. Designing a scalable system for analysing, processing and mining huge real-world datasets has become one of the challenging problems facing both systems researchers and data management researchers. In this paper, we give an overview of computing infrastructure for IoT data processing, focusing on architecture and the major challenges of massive data. We briefly discuss emerging computing infrastructure and technologies that are promising for improving massive data management.
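The partitioning idea this abstract mentions can be shown with a small sketch: readings are hash-partitioned by device identifier so that each worker handles a stable subset of devices. The record format, worker count, and function names below are illustrative assumptions, not details from the paper.

```python
import hashlib
from collections import defaultdict

def partition(readings, n_workers=3):
    """Hash-partition (device_id, value) readings across n_workers shards.

    Hashing the device id (rather than round-robin) keeps all readings
    from one device on the same worker, preserving per-device order.
    """
    shards = defaultdict(list)
    for device_id, value in readings:
        digest = hashlib.md5(device_id.encode()).hexdigest()
        shard = int(digest, 16) % n_workers
        shards[shard].append((device_id, value))
    return shards

readings = [("sensor-a", 21.5), ("sensor-b", 19.0), ("sensor-a", 22.1)]
shards = partition(readings)
# Every reading is assigned to exactly one shard, and readings from the
# same device always land in the same shard.
```

Production IoT systems add rebalancing (e.g. consistent hashing) on top of this idea so that adding a worker does not reshuffle every device.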
