oalib
All listed articles are free for downloading (OA Articles)
Composition Attacks and Auxiliary Information in Data Privacy  [PDF]
Srivatsava Ranjit Ganta,Shiva Prasad Kasiviswanathan,Adam Smith
Computer Science , 2008,
Abstract: Privacy is an increasingly important aspect of data publishing. Reasoning about privacy, however, is fraught with pitfalls. One of the most significant is the auxiliary information (also called external knowledge, background knowledge, or side information) that an adversary gleans from other channels such as the web, public records, or domain knowledge. This paper explores how one can reason about privacy in the face of rich, realistic sources of auxiliary information. Specifically, we investigate the effectiveness of current anonymization schemes in preserving privacy when multiple organizations independently release anonymized data about overlapping populations. 1. We investigate composition attacks, in which an adversary uses independent anonymized releases to breach privacy. We explain why recently proposed models of limited auxiliary information fail to capture composition attacks. Our experiments demonstrate that even a simple instance of a composition attack can breach privacy in practice, for a large class of currently proposed techniques. The class includes k-anonymity and several recent variants. 2. On a more positive note, certain randomization-based notions of privacy (such as differential privacy) provably resist composition attacks and, in fact, the use of arbitrary side information. This resistance enables stand-alone design of anonymization schemes, without the need for explicitly keeping track of other releases. We provide a precise formulation of this property, and prove that an important class of relaxations of differential privacy also satisfy the property. This significantly enlarges the class of protocols known to enable modular design.
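The composition attack described above can be made concrete with a minimal sketch. The two releases, their groupings, and the sensitive values below are entirely hypothetical; the point is only the intersection step the abstract describes:

```python
# Minimal illustration of a composition (intersection) attack on two
# independent k-anonymous releases covering an overlapping population.
# Each set holds the sensitive values of the generalized group that the
# target individual falls into in that release; both are hypothetical.

# Hospital A's 3-anonymous release: the target's group
# (age 30-39, ZIP 021**) carries these sensitive values:
release_a = {"flu", "hepatitis", "ulcer"}

# Hospital B's 3-anonymous release: the target's group
# (age 25-39, ZIP 0213*) carries these:
release_b = {"hepatitis", "diabetes", "asthma"}

# An adversary who knows the target appears in both releases intersects
# the candidate sets: each release alone leaves 3 possibilities, but
# their composition narrows the sensitive value down to one.
candidates = release_a & release_b
print(candidates)  # {'hepatitis'}
```

Each release satisfies 3-anonymity in isolation, yet the pair together breaches privacy — which is why, as the abstract notes, models that bound auxiliary information per-release fail to capture this attack.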
Transparent Anonymization: Thwarting Adversaries Who Know the Algorithm  [PDF]
Xiaokui Xiao,Yufei Tao,Nick Koudas
Computer Science , 2010,
Abstract: Numerous generalization techniques have been proposed for privacy preserving data publishing. Most existing techniques, however, implicitly assume that the adversary knows little about the anonymization algorithm adopted by the data publisher. Consequently, they cannot guard against privacy attacks that exploit various characteristics of the anonymization mechanism. This paper provides a practical solution to the above problem. First, we propose an analytical model for evaluating disclosure risks, when an adversary knows everything in the anonymization process, except the sensitive values. Based on this model, we develop a privacy principle, transparent l-diversity, which ensures privacy protection against such powerful adversaries. We identify three algorithms that achieve transparent l-diversity, and verify their effectiveness and efficiency through extensive experiments with real data.
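Transparent l-diversity itself is defined in the paper; as background, the baseline it strengthens is plain distinct l-diversity, which requires every anonymized group to contain at least l distinct sensitive values. A minimal checker, with an illustrative table:

```python
# Background sketch: distinct l-diversity, the baseline notion that
# "transparent l-diversity" hardens against adversaries who also know
# the anonymization algorithm. Groups and values are illustrative.

def is_l_diverse(groups, l):
    """groups: one list of sensitive values per quasi-identifier group."""
    return all(len(set(g)) >= l for g in groups)

groups = [
    ["flu", "flu", "cancer", "ulcer"],  # 3 distinct sensitive values
    ["flu", "cancer", "hiv"],           # 3 distinct sensitive values
]
print(is_l_diverse(groups, 3))  # True
print(is_l_diverse(groups, 4))  # False
```

The paper's contribution is that this check alone is insufficient once the adversary knows everything about the anonymization process except the sensitive values; transparent l-diversity closes that gap.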
Preserving privacy in social networks based on d-neighborhood subgraph anonymity
JIN Hua, ZHANG Zhi-xiang, LIU Shan-cheng, JU Shi-guang
Application Research of Computers (计算机应用研究) , 2011,
Abstract: Privacy preservation is necessary when publishing social network data, because analysis of social networks can violate individual privacy. This paper proposes a k-anonymity model for d-neighborhood subgraphs described by a super-edge matrix. It transforms subgraph anonymization into matching the matrices that represent the d-neighborhood subgraphs of vertices, and ensures that, for every vertex, the number of isomorphic d-neighborhood subgraphs is no less than k. Experimental results show that the proposed model can effectively resist neighborhood attacks and preserve private information.
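The k-anonymity condition on d-neighborhoods can be sketched as follows. In place of the paper's super-edge-matrix matching, this sketch uses the sorted degree sequence of the induced d-neighborhood subgraph as a crude isomorphism signature (necessary but not sufficient for isomorphism); the graph is illustrative:

```python
# Sketch of the k-anonymity condition on d-neighborhoods: every
# vertex's d-neighborhood subgraph must be isomorphic to those of at
# least k-1 other vertices. The degree-sequence signature below is a
# stand-in for the paper's matrix matching, not its actual method.
from collections import Counter, deque

def d_neighborhood(adj, v, d):
    """Return all vertices within distance d of v (BFS), including v."""
    seen, frontier = {v}, deque([(v, 0)])
    while frontier:
        u, dist = frontier.popleft()
        if dist == d:
            continue
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, dist + 1))
    return seen

def neighborhood_signature(adj, v, d):
    nodes = d_neighborhood(adj, v, d)
    # Sorted degree sequence of the induced subgraph on `nodes`.
    return tuple(sorted(sum(1 for w in adj[u] if w in nodes) for u in nodes))

def is_k_anonymous(adj, k, d=1):
    sigs = Counter(neighborhood_signature(adj, v, d) for v in adj)
    return all(count >= k for count in sigs.values())

# A 4-cycle: every vertex's 1-neighborhood looks alike, so under this
# signature the graph satisfies the condition for k = 4 but not k = 5.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(is_k_anonymous(cycle, 4, d=1))  # True
print(is_k_anonymous(cycle, 5, d=1))  # False
```

A real implementation would also anonymize the graph (adding super-edges) until the condition holds, rather than merely checking it.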
Preserving Individual Privacy in Serial Data Publishing  [PDF]
Raymond Chi-Wing Wong,Ada Wai-Chee Fu,Jia Liu,Ke Wang,Yabo Xu
Computer Science , 2009,
Abstract: While previous works on privacy-preserving serial data publishing consider the scenario where sensitive values may persist over multiple data releases, we find that no previous work provides sufficient protection for sensitive values that can change over time, which should be the more common case. In this work we propose to study the privacy guarantee for such transient sensitive values, which we call the global guarantee. We formally define the problem of achieving this guarantee and derive some theoretical properties for it. We show that the anonymized group sizes used in data anonymization are a key factor in protecting individual privacy in serial publication. We propose two anonymization strategies, targeting minimization of the average group size and of the maximum group size, respectively. Finally, we conduct experiments on a medical dataset to show that our method is highly efficient and also produces published data of very high utility.
Privacy Preserving Data Publishing: Current Status and New Directions  [PDF]
Junqiang Liu
Information Technology Journal , 2012,
Abstract: Universal information sharing on the internet has greatly improved the productivity of our society but has also increased the risk of privacy violations. Privacy-preserving data publishing provides approaches and methods for sharing useful information in the form of publication while preserving data privacy. Recently, abundant literature has been dedicated to this research and tremendous progress has been made, ranging from privacy risk evaluation and privacy protection principles, through counter-threat measures and anonymization techniques, to information loss and data utility metrics and algorithms. This study provides a comparative analysis of the state-of-the-art works along multiple dimensions. Privacy-preserving data publishing research is motivated by real-world problems which, however, are far from being solved, as challenging issues remain to be addressed. This study helps to identify those challenges, focus research efforts, and highlight future directions.
A Random Matrix Approach to Differential Privacy and Structure Preserved Social Network Graph Publishing  [PDF]
Faraz Ahmed,Rong Jin,Alex X. Liu
Computer Science , 2013,
Abstract: Online social networks are being increasingly used for analyzing various societal phenomena such as epidemiology, information dissemination, marketing, and sentiment flow. Popular analysis techniques, such as clustering and influential-node analysis, require the computation of eigenvectors of the real graph's adjacency matrix. Recent de-anonymization attacks on the Netflix and AOL datasets show that open access to such graphs poses privacy threats. Among the various privacy-preserving models, differential privacy provides the strongest privacy guarantees. In this paper we propose a privacy-preserving mechanism for publishing social network graph data that satisfies differential privacy guarantees by combining random matrix theory with the theory of differential privacy. The key idea is to project each row of an adjacency matrix to a low-dimensional space using the random projection approach and then perturb the projected matrix with random noise. We show that, compared to existing approaches for differentially private approximation of eigenvectors, our approach is computationally efficient, preserves utility, and satisfies differential privacy. We evaluate our approach on social network graphs of Facebook, LiveJournal, and Pokec. The results show that even for a high noise variance (sigma = 1), the clustering quality given by normalized mutual information remains as high as 0.74. For influential node discovery, the proposed approach is able to correctly recover 80% of the most influential nodes. We also compare our results with an approach presented in [43], which directly perturbs the eigenvector of the original data with Laplacian noise. The results show that this approach requires a large random perturbation in order to preserve differential privacy, which leads to a poor estimation of eigenvectors for large social networks.
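The publish step the abstract outlines — project each adjacency row to a low-dimensional space, then perturb — can be sketched in a few lines. The dimensions, the noise scale, and the plain Gaussian perturbation below are illustrative stand-ins; the paper's calibrated noise is what actually yields the differential privacy guarantee:

```python
# Sketch of "project then perturb" for publishing a graph: each row of
# the adjacency matrix is projected with a random Gaussian matrix, and
# the projection is perturbed with Gaussian noise of variance sigma^2.
# Sizes and noise calibration here are illustrative, not the paper's.
import random

random.seed(0)

def publish(adjacency, k, sigma):
    """adjacency: n x n list of 0/1 lists; returns a noisy n x k matrix."""
    n = len(adjacency)
    # Random projection matrix P (n x k) with N(0, 1/k) entries, so inner
    # products are preserved in expectation (Johnson-Lindenstrauss style).
    proj = [[random.gauss(0.0, (1.0 / k) ** 0.5) for _ in range(k)]
            for _ in range(n)]
    published = []
    for row in adjacency:
        projected = [sum(row[i] * proj[i][j] for i in range(n))
                     for j in range(k)]
        # Perturb the projected row; in the paper the noise is calibrated
        # to the mechanism's sensitivity, here it is plain N(0, sigma^2).
        published.append([x + random.gauss(0.0, sigma) for x in projected])
    return published

graph = [[0, 1, 1, 0],
         [1, 0, 1, 0],
         [1, 1, 0, 1],
         [0, 0, 1, 0]]
out = publish(graph, k=2, sigma=1.0)
print(len(out), len(out[0]))  # 4 2
```

The design point is that noise is added in the k-dimensional projected space rather than to the n-dimensional rows or eigenvectors directly, which is what keeps the perturbation, and hence the utility loss, small.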
PRIVACY IN MEDICAL DATA PUBLISHING  [PDF]
Lila Ghemri,Raji Kannah
International Journal of Cyber-Security and Digital Forensics , 2012,
Abstract: Privacy in data publishing concerns the problem of releasing data to enable its study and analysis while protecting the privacy of the people or subjects whose data is being released. The main motivation behind this work is the need to comply with HIPAA (Health Insurance Portability and Accountability Act) requirements on preserving patients' privacy before making their data public. In this work, we present a policy-aware system that detects HIPAA privacy rule violations in medical records in textual format and takes remedial steps to mask the attributes that cause the violation, making the records HIPAA-compliant.
The Boundary Between Privacy and Utility in Data Anonymization  [PDF]
Vibhor Rastogi,Dan Suciu,Sungho Hong
Computer Science , 2006,
Abstract: We consider the privacy problem in data publishing: given a relation I containing sensitive information, 'anonymize' it to obtain a view V such that, on one hand, attackers cannot learn any sensitive information from V, and on the other hand, legitimate users can use V to compute useful statistics on I. These are conflicting goals. We use a definition of privacy derived from existing ones in the literature, which relates the a priori probability of a given tuple t, Pr(t), with the a posteriori probability, Pr(t | V), and propose a novel and quite practical definition of utility. Our main result is the following. Denoting by n the size of I and by m the size of the domain from which I was drawn (i.e., n < m), then: when the a priori probability is Pr(t) = Omega(n/sqrt(m)) for some t, there exists no useful anonymization algorithm, while when Pr(t) = O(n/m) for all tuples t, we give a concrete anonymization algorithm that is both private and useful. Our algorithm is quite different from the k-anonymization algorithms studied intensively in the literature, and is based on random deletions and insertions to I.
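The gap between the two regimes in the boundary result is easy to see numerically. The sizes below are illustrative, not from the paper:

```python
# Numeric reading of the boundary: with n = |I| tuples drawn from a
# domain of size m, anonymization that is both private and useful is
# possible when every prior Pr(t) is O(n/m), and impossible once some
# prior reaches Omega(n/sqrt(m)). Illustrative sizes below.
import math

n, m = 1_000, 1_000_000
feasible_prior = n / m                # O(n/m) regime: algorithm exists
infeasible_prior = n / math.sqrt(m)   # Omega(n/sqrt(m)) regime: none does
print(feasible_prior, infeasible_prior)  # 0.001 1.0
```

At these sizes the thresholds differ by three orders of magnitude, so "how confident is the adversary a priori" decides feasibility long before Pr(t) approaches certainty.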
Research on Anonymization Techniques for Personalized Privacy-Preserving Data Publishing
WANG Bo, YANG Jing
Computer Science (计算机科学) , 2012,
Abstract: Personalized privacy-preserving data publishing has been a hot research topic in recent years among techniques for controlling privacy disclosure in data publishing. This survey gives an overview of the area. First, on the basis of an analysis of the different types of personalized service, it builds the corresponding anonymization models for personalized privacy. Second, it summarizes the state of the art in personalized privacy-preserving techniques according to the approaches they adopt, and outlines the fundamental principles and characteristics of each. In addition, it reviews existing privacy measures and standards, organized by the information measures the algorithms use. Finally, based on a comparison and analysis of existing research, it forecasts future research directions for personalized privacy-preserving data publishing.
Slicing: A New Approach to Privacy Preserving Data Publishing  [PDF]
Tiancheng Li,Ninghui Li,Jian Zhang,Ian Molloy
Computer Science , 2009,
Abstract: Several anonymization techniques, such as generalization and bucketization, have been designed for privacy preserving microdata publishing. Recent work has shown that generalization loses considerable amount of information, especially for high-dimensional data. Bucketization, on the other hand, does not prevent membership disclosure and does not apply for data that do not have a clear separation between quasi-identifying attributes and sensitive attributes. In this paper, we present a novel technique called slicing, which partitions the data both horizontally and vertically. We show that slicing preserves better data utility than generalization and can be used for membership disclosure protection. Another important advantage of slicing is that it can handle high-dimensional data. We show how slicing can be used for attribute disclosure protection and develop an efficient algorithm for computing the sliced data that obey the l-diversity requirement. Our workload experiments confirm that slicing preserves better utility than generalization and is more effective than bucketization in workloads involving the sensitive attribute. Our experiments also demonstrate that slicing can be used to prevent membership disclosure.
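The slicing transform described above can be sketched directly: attributes are partitioned into column groups, tuples into buckets, and within each bucket every column group's values are independently permuted, breaking the linkage between quasi-identifiers and sensitive values. The table, groupings, and bucket size below are illustrative:

```python
# Sketch of slicing: partition the table vertically (column groups) and
# horizontally (buckets), then independently permute each column group
# within each bucket. Data and parameters are illustrative.
import random

random.seed(1)

def slice_table(rows, column_groups, bucket_size):
    sliced = []
    for start in range(0, len(rows), bucket_size):
        bucket = rows[start:start + bucket_size]
        pieces = []
        for group in column_groups:
            chunk = [tuple(row[c] for c in group) for row in bucket]
            random.shuffle(chunk)  # permute this column group on its own
            pieces.append(chunk)
        # Re-join the permuted pieces row by row.
        sliced.extend([sum(parts, ()) for parts in zip(*pieces)])
    return sliced

# Columns: (age, zip, disease). The quasi-identifiers {age, zip} form
# one column group, the sensitive attribute {disease} another.
rows = [(27, "02139", "flu"), (29, "02141", "ulcer"),
        (41, "02142", "hiv"), (43, "02139", "flu")]
out = slice_table(rows, column_groups=[(0, 1), (2,)], bucket_size=2)
print(len(out))  # 4
```

Within a bucket the multiset of values in each column group is preserved, so aggregate statistics survive, but a published row no longer certifies which quasi-identifier tuple went with which sensitive value — the property the abstract contrasts with generalization and bucketization.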
Copyright © 2008-2017 Open Access Library. All rights reserved.