Search Results: 1 - 10 of 2015 matches for "Nina Taft"
All listed articles are free for downloading (OA Articles)
Private Decayed Sum Estimation under Continual Observation
Jean Bolot, Nadia Fawaz, S. Muthukrishnan, Aleksandar Nikolov, Nina Taft
Computer Science, 2011, DOI: 10.1145/2448496.2448530
Abstract: In monitoring applications, recent data is more important than distant data. How does this affect the privacy of data analysis? We study a general class of data analyses - computing predicate sums - with privacy. Formally, we study the problem of estimating predicate sums privately, for sliding windows (and other well-known decay models of data, i.e., exponential and polynomial decay). We extend the recently proposed continual privacy model of Dwork et al. We present algorithms for decayed sums which are epsilon-differentially private and accurate. For window and exponential decay sums, our algorithms are accurate up to additive 1/epsilon and polylog terms in the range of the computed function; for polynomial decay sums, which are technically more challenging because partial solutions do not compose easily, our algorithms incur additional relative error. Further, we show lower bounds, tight within polylog factors and tight with respect to the dependence on the probability of error.
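
The paper builds on the continual-observation model, in which partial sums are released with noise as the stream arrives. As a minimal sketch of that style of mechanism (not the paper's construction), the following Python computes epsilon-differentially private dyadic block sums of a bounded 0/1 stream and answers a sliding-window sum as the difference of two noisy prefix sums; the stream, epsilon, and window are illustrative assumptions.

    import numpy as np

    def dyadic_noisy_sums(stream, eps, seed=0):
        """Binary-tree (dyadic) mechanism: add Laplace noise to the sum of every
        dyadic block of the stream. Each item falls in exactly one block per
        level, so the privacy budget is split evenly across levels."""
        rng = np.random.default_rng(seed)
        T = len(stream)
        depth = int(np.ceil(np.log2(max(T, 2)))) + 1   # levels 0 .. depth-1
        eps_level = eps / depth
        noisy = {}
        for lvl in range(depth):
            size = 2 ** lvl
            for start in range(0, T, size):
                block_sum = float(np.sum(stream[start:start + size]))
                noisy[(lvl, start // size)] = block_sum + rng.laplace(0.0, 1.0 / eps_level)
        return noisy, depth

    def noisy_prefix_sum(noisy, depth, t):
        """Prefix sum over stream[0:t] assembled from O(log T) disjoint noisy blocks."""
        total, pos = 0.0, 0
        for lvl in range(depth - 1, -1, -1):
            size = 2 ** lvl
            if pos + size <= t:
                total += noisy[(lvl, pos // size)]
                pos += size
        return total

    def noisy_window_sum(noisy, depth, t, window):
        """Sliding-window sum of the last `window` items ending at time t."""
        return (noisy_prefix_sum(noisy, depth, t)
                - noisy_prefix_sum(noisy, depth, max(0, t - window)))

The noisy structure can be released once and queried arbitrarily often, since post-processing preserves differential privacy, and the error per query grows only polylogarithmically in the stream length.
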
Recommending with an Agenda: Active Learning of Private Attributes using Matrix Factorization
Smriti Bhagat, Udi Weinsberg, Stratis Ioannidis, Nina Taft
Computer Science, 2013
Abstract: Recommender systems leverage user demographic information, such as age, gender, etc., to personalize recommendations and better place their targeted ads. Oftentimes, users do not volunteer this information due to privacy concerns, or due to a lack of initiative in filling out their online profiles. We illustrate a new threat in which a recommender learns private attributes of users who do not voluntarily disclose them. We design both passive and active attacks that solicit ratings for strategically selected items, and could thus be used by a recommender system to pursue this hidden agenda. Our methods are based on a novel usage of Bayesian matrix factorization in an active learning setting. Evaluations on multiple datasets illustrate that such attacks are indeed feasible and use significantly fewer rated items than static inference methods. Importantly, they succeed without sacrificing the quality of recommendations to users.
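
The active attack solicits the rating expected to reveal the most about the hidden attribute. The paper scores items with Bayesian matrix factorization; the sketch below substitutes a simpler, hypothetical criterion, expected information gain about a binary attribute, given per-item rating distributions conditioned on that attribute (the distributions and item set are placeholders).

    import numpy as np

    def entropy(p):
        """Shannon entropy of a discrete distribution, in bits."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def expected_info_gain(prior, cond):
        """Expected information gain about a binary attribute from one rating.

        prior : shape (2,)   P(attribute)
        cond  : shape (2, R) P(rating | attribute) for this item
        """
        joint = prior[:, None] * cond          # P(attribute, rating)
        p_rating = joint.sum(axis=0)           # P(rating)
        cond_entropy = sum(p_rating[r] * entropy(joint[:, r] / p_rating[r])
                           for r in range(cond.shape[1]) if p_rating[r] > 0)
        return entropy(prior) - cond_entropy

    def pick_next_item(prior, cond_by_item, already_asked):
        """Greedy active step: solicit the not-yet-rated item with the largest
        expected information gain about the private attribute."""
        best, best_gain = None, -1.0
        for item, cond in cond_by_item.items():
            if item in already_asked:
                continue
            gain = expected_info_gain(prior, np.asarray(cond))
            if gain > best_gain:
                best, best_gain = item, gain
        return best, best_gain

Repeatedly calling pick_next_item and replacing the prior with the posterior implied by the observed rating gives a greedy active-learning loop of the kind the abstract describes.
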
Learning in a Large Function Space: Privacy-Preserving Mechanisms for SVM Learning
Benjamin I. P. Rubinstein, Peter L. Bartlett, Ling Huang, Nina Taft
Computer Science, 2009
Abstract: Several recent studies in privacy-preserving learning have considered the trade-off between utility or risk and the level of differential privacy guaranteed by mechanisms for statistical query processing. In this paper we study this trade-off in private Support Vector Machine (SVM) learning. We present two efficient mechanisms, one for the case of finite-dimensional feature mappings and one for potentially infinite-dimensional feature mappings with translation-invariant kernels. For the case of translation-invariant kernels, the proposed mechanism minimizes regularized empirical risk in a random Reproducing Kernel Hilbert Space whose kernel uniformly approximates the desired kernel with high probability. This technique, borrowed from large-scale learning, allows the mechanism to respond with a finite encoding of the classifier, even when the function class is of infinite VC dimension. Differential privacy is established using a proof technique from algorithmic stability. Utility--the mechanism's response function is pointwise epsilon-close to non-private SVM with probability 1-delta--is proven by appealing to the smoothness of regularized empirical risk minimization with respect to small perturbations to the feature mapping. We conclude with a lower bound on the optimal differential privacy of the SVM. This negative result states that for any delta, no mechanism can be simultaneously (epsilon,delta)-useful and beta-differentially private for small epsilon and small beta.
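
For the translation-invariant case, the mechanism trains in a random feature space that approximates the kernel and releases a perturbed finite weight vector. The sketch below shows that shape of construction, using random Fourier features for an RBF kernel and an illustrative Laplace perturbation; the noise scale in the paper comes from a sensitivity analysis that is not reproduced here, so eps_scale is purely a placeholder.

    import numpy as np
    from sklearn.svm import LinearSVC

    def random_fourier_features(X, n_features, gamma, rng):
        """Map X into a random feature space whose inner products approximate
        the RBF kernel exp(-gamma * ||x - x'||^2) (Rahimi-Recht features)."""
        d = X.shape[1]
        W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, n_features))
        b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    def noisy_rff_svm(X, y, eps_scale, n_features=200, gamma=0.5, C=1.0, seed=0):
        """Train a linear SVM on random Fourier features and release a perturbed
        finite weight vector. eps_scale is illustrative, not privacy-calibrated."""
        rng = np.random.default_rng(seed)
        Z = random_fourier_features(X, n_features, gamma, rng)
        clf = LinearSVC(C=C).fit(Z, y)
        w_noisy = clf.coef_.ravel() + rng.laplace(0.0, eps_scale, size=n_features)
        b_noisy = clf.intercept_[0] + rng.laplace(0.0, eps_scale)

        def predict(X_new):
            # Re-seed so prediction uses exactly the same random features.
            Z_new = random_fourier_features(X_new, n_features, gamma,
                                            np.random.default_rng(seed))
            return np.sign(Z_new @ w_noisy + b_noisy)

        return predict

Because the classifier is encoded by a finite vector even though the RKHS is infinite-dimensional, perturbing that vector is enough to release the whole response function.
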
CARE: Content Aware Redundancy Elimination for Disaster Communications on Damaged Networks
Udi Weinsberg, Athula Balachandran, Nina Taft, Gianluca Iannaccone, Vyas Sekar, Srinivasan Seshan
Computer Science, 2012
Abstract: During a disaster scenario, situational awareness information, such as location, physical status and images of the surrounding area, is essential for minimizing loss of life, injury, and property damage. Today's handhelds make it easy for people to gather data from within the disaster area in many formats, including text, images and video. Studies show that the extreme anxiety induced by disasters causes humans to create a substantial amount of repetitive and redundant content. Transporting this content outside the disaster zone can be problematic when the network infrastructure is disrupted by the disaster. This paper presents the design of a novel architecture called CARE (Content-Aware Redundancy Elimination) for better utilizing network resources in disaster-affected regions. Motivated by measurement-driven insights on redundancy patterns found in real-world disaster area photos, we demonstrate that CARE can detect the semantic similarity between photos in the networking layer, thus reducing redundant transfers and improving buffer utilization. Using DTN simulations, we explore the boundaries of the usefulness of deploying CARE on a damaged network, and show that CARE can reduce packet delivery times and drops, and enable 20-40% more unique information to reach the rescue teams outside the disaster area than when CARE is not deployed.
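
The step CARE adds at the networking layer is a cheap check of whether a new photo is redundant with one already queued for transmission. As a self-contained illustration (not the paper's similarity features), the sketch below hashes a grayscale image with a difference hash and flags near-duplicates by Hamming distance.

    import numpy as np

    def dhash(gray, hash_size=8):
        """Difference hash of a grayscale image given as a 2-D float array:
        downscale by block-averaging, then record whether each cell is brighter
        than its right neighbour. Returns a flat boolean array of hash_size**2 bits."""
        h, w = gray.shape
        rows = np.array_split(np.arange(h), hash_size)
        cols = np.array_split(np.arange(w), hash_size + 1)
        small = np.array([[gray[np.ix_(r, c)].mean() for c in cols] for r in rows])
        return (small[:, 1:] > small[:, :-1]).ravel()

    def is_redundant(candidate_hash, queued_hashes, max_hamming=10):
        """Treat the candidate photo as redundant if it is within max_hamming
        bits of any photo already scheduled for transmission."""
        return any(int(np.count_nonzero(candidate_hash ^ h)) <= max_hamming
                   for h in queued_hashes)

A DTN node would compute one hash per photo and consult is_redundant before enqueueing, trading a small risk of suppressing genuinely new content for buffer space and delivery of more unique information.
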
Mixture Models of Endhost Network Traffic
John Mark Agosta, Jaideep Chandrashekar, Mark Crovella, Nina Taft, Daniel Ting
Computer Science, 2012
Abstract: In this work we focus on modeling a little studied type of traffic, namely the network traffic generated from endhosts. We introduce a parsimonious parametric model of the marginal distribution for connection arrivals. We employ mixture models based on a convex combination of component distributions with both heavy and light-tails. These models can be fitted with high accuracy using maximum likelihood techniques. Our methodology assumes that the underlying user data can be fitted to one of many modeling options, and we apply Bayesian model selection criteria as a rigorous way to choose the preferred combination of components. Our experiments show that a simple Pareto-exponential mixture model is preferred for a wide range of users, over both simpler and more complex alternatives. This model has the desirable property of modeling the entire distribution, effectively segmenting the traffic into the heavy-tailed as well as the non-heavy-tailed components. We illustrate that this technique has the flexibility to capture the wide diversity of user behaviors.
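
A minimal version of the fitting-and-selection pipeline, assuming a vector of per-connection interarrival times and using a Lomax (Pareto type II) heavy-tailed component so both components share the support x >= 0; the synthetic data, starting values, and single-exponential alternative are illustrative.

    import numpy as np
    from scipy import stats, optimize

    def neg_loglik(params, x):
        """Negative log-likelihood of a two-component mixture:
        weight * Exponential(scale) + (1 - weight) * Lomax(shape, scale)."""
        w, exp_scale, lomax_c, lomax_scale = params
        pdf = (w * stats.expon.pdf(x, scale=exp_scale)
               + (1.0 - w) * stats.lomax.pdf(x, lomax_c, scale=lomax_scale))
        return -np.sum(np.log(pdf + 1e-300))

    def fit_mixture(x):
        """Maximum-likelihood fit of the exponential + heavy-tailed mixture."""
        x0 = [0.5, np.mean(x), 1.5, np.median(x)]
        bounds = [(1e-3, 1 - 1e-3), (1e-6, None), (0.1, None), (1e-6, None)]
        return optimize.minimize(neg_loglik, x0, args=(x,), bounds=bounds,
                                 method="L-BFGS-B")

    def bic(nll, k, n):
        """Bayesian information criterion: lower is better."""
        return 2.0 * nll + k * np.log(n)

    # Illustrative comparison on synthetic interarrival times.
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.exponential(1.0, 800),   # light-tailed bulk
                        stats.lomax.rvs(1.5, scale=5.0, size=200, random_state=0)])
    mix = fit_mixture(x)
    loc_e, scale_e = stats.expon.fit(x, floc=0)
    nll_expon = -np.sum(stats.expon.logpdf(x, scale=scale_e))
    print("BIC, mixture    :", bic(mix.fun, k=4, n=len(x)))
    print("BIC, exponential:", bic(nll_expon, k=1, n=len(x)))

The fitted mixture weight splits each user's traffic into its light-tailed and heavy-tailed parts, which is the segmentation property the abstract highlights.
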
Privacy Tradeoffs in Predictive Analytics
Stratis Ioannidis, Andrea Montanari, Udi Weinsberg, Smriti Bhagat, Nadia Fawaz, Nina Taft
Computer Science, 2014
Abstract: Online services routinely mine user data to predict user preferences, make recommendations, and place targeted ads. Recent research has demonstrated that several private user attributes (such as political affiliation, sexual orientation, and gender) can be inferred from such data. Can a privacy-conscious user benefit from personalization while simultaneously protecting her private attributes? We study this question in the context of a rating prediction service based on matrix factorization. We construct a protocol of interactions between the service and users that has remarkable optimality properties: it is privacy-preserving, in that no inference algorithm can succeed in inferring a user's private attribute with a probability better than random guessing; it has maximal accuracy, in that no other privacy-preserving protocol improves rating prediction; and, finally, it involves a minimal disclosure, as the prediction accuracy strictly decreases when the service reveals less information. We extensively evaluate our protocol using several rating datasets, demonstrating that it successfully blocks the inference of gender, age and political affiliation, while incurring less than 5% decrease in the accuracy of rating prediction.
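
The privacy claim in the evaluation is that no inference algorithm beats random guessing on the released data. A simple way to test that claim empirically, assuming a user-by-item rating matrix and binary attribute labels, is to measure a standard attacker's cross-validated AUC before and after obfuscation; the obfuscate step below is a placeholder for the paper's protocol, which is not reproduced here.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def inference_auc(ratings, attribute):
        """Cross-validated AUC of a logistic-regression attacker that predicts
        the private attribute from the rating matrix. An AUC near 0.5 means the
        attacker does no better than random guessing."""
        clf = LogisticRegression(max_iter=1000)
        return cross_val_score(clf, ratings, attribute, cv=5,
                               scoring="roc_auc").mean()

    # Usage (obfuscate stands in for the paper's privacy-preserving protocol):
    # auc_raw = inference_auc(ratings, gender)
    # auc_obf = inference_auc(obfuscate(ratings), gender)

A large gap between auc_raw and an auc_obf near 0.5, with little loss in rating-prediction accuracy, is the pattern the abstract reports.
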
Managing your Private and Public Data: Bringing down Inference Attacks against your Privacy
Salman Salamatian, Amy Zhang, Flavio du Pin Calmon, Sandilya Bhamidipati, Nadia Fawaz, Branislav Kveton, Pedro Oliveira, Nina Taft
Computer Science, 2014, DOI: 10.1109/JSTSP.2015.2442227
Abstract: We propose a practical methodology to protect a user's private data, when he wishes to publicly release data that is correlated with his private data, in the hope of getting some utility. Our approach relies on a general statistical inference framework that captures the privacy threat under inference attacks, given utility constraints. Under this framework, data is distorted before it is released, according to a privacy-preserving probabilistic mapping. This mapping is obtained by solving a convex optimization problem, which minimizes information leakage under a distortion constraint. We address practical challenges encountered when applying this theoretical framework to real world data. On one hand, the design of optimal privacy-preserving mechanisms requires knowledge of the prior distribution linking private data and data to be released, which is often unavailable in practice. On the other hand, the optimization may become intractable and face scalability issues when the data takes values in a large alphabet or is high-dimensional. Our work makes three major contributions. First, we provide bounds on the impact on the privacy-utility tradeoff of a mismatched prior. Second, we show how to reduce the optimization size by introducing a quantization step, and how to generate privacy mappings under quantization. Third, we evaluate our method on three datasets, including a new dataset that we collected, showing correlations between political convictions and TV viewing habits. We demonstrate that good privacy properties can be achieved with limited distortion so as not to undermine the original purpose of the publicly released data, e.g. recommendations.
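
A toy instantiation of the convex program described above, assuming small discrete alphabets: the private attribute S and public data X have a known joint distribution, the released value Y is drawn from X through a row-stochastic mapping Q, and the program minimizes the information leakage I(S; Y) subject to an expected-distortion budget. The joint distribution, alphabet sizes, and Hamming distortion are placeholders, and the quantization and mismatched-prior aspects of the paper are not handled here.

    import numpy as np
    import cvxpy as cp

    # Joint distribution of private attribute S (rows) and public data X (columns);
    # the values are arbitrary placeholders.
    P_SX = np.array([[0.30, 0.15, 0.05],
                     [0.05, 0.15, 0.30]])
    p_S = P_SX.sum(axis=1)            # marginal of S
    p_X = P_SX.sum(axis=0)            # marginal of X
    n_x = P_SX.shape[1]
    n_y = n_x                         # release alphabet = data alphabet
    D = 0.2                           # expected-distortion budget
    dist = 1.0 - np.eye(n_x)          # Hamming distortion d(x, y)

    Q = cp.Variable((n_x, n_y), nonneg=True)   # privacy mapping Q[x, y] = P(Y=y | X=x)

    # Mutual information I(S; Y) as a sum of relative entropies; each term
    # rel_entr(affine, affine) is convex in Q, so the whole program is convex.
    leakage = 0
    for s in range(P_SX.shape[0]):
        for y in range(n_y):
            p_sy = P_SX[s, :] @ Q[:, y]                              # P(S=s, Y=y)
            leakage += cp.rel_entr(p_sy, p_S[s] * (p_X @ Q[:, y]))

    constraints = [cp.sum(Q, axis=1) == 1,                            # rows are distributions
                   cp.sum(cp.multiply(p_X[:, None] * dist, Q)) <= D]  # E[d(X, Y)] <= D

    prob = cp.Problem(cp.Minimize(leakage), constraints)
    prob.solve(solver=cp.SCS)
    print("minimal leakage I(S;Y) in nats:", prob.value)
    print("privacy mapping Q:\n", np.round(Q.value, 3))

Tightening D toward 0 forces Q toward the identity and drives the leakage up, which is the privacy-utility tradeoff the framework quantifies.
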
Orthographic Influences When Processing Spoken Pseudowords: Theoretical Implications
Marcus Taft
Frontiers in Psychology, 2011, DOI: 10.3389/fpsyg.2011.00140
Abstract: When we hear an utterance, is the orthographic representation of that utterance activated when it is being processed? Orthographic influences have been previously examined in relation to spoken pseudoword processing in three different paradigms. Unlike real word processing, no orthographic effects with pseudowords have been observed in a phoneme goodness ratings task, and there is a mixed outcome in studies looking for spelling–sound consistency effects. In contrast, the orthography of spoken pseudohomographs has been shown to be activated, given that they prime their homographic base word. Explanations are sought for the findings in these three paradigms, leading to an exploration of theoretical models of spoken word recognition.
The Total Gauss Curvature of a Three-Manifold Immersed in R^4
Jefferson Taft
Mathematics, 2008
Abstract: This paper has been removed. It is already a well-known result.
Gentamicin Renal Excretion in Rats: Probing Strategies to Mitigate Drug-Induced Nephrotoxicity
Aruna Dontabhaktuni, David R. Taft, Mayankbhai Patel
Pharmacology & Pharmacy (PP), 2016, DOI: 10.4236/pp.2016.71007
Abstract: The renal excretion of gentamicin, an aminoglycoside antibiotic, was studied in the isolated perfused rat kidney (IPRK) model. Dose-linearity experiments were carried out at four doses (400, 800, 1600, 3200 μg), targeting initial perfusate levels of 5, 10, 20 and 40 μg/ml. Additionally, gentamicin was co-perfused with sodium bicarbonate (0.25 mM) and/or cimetidine (2 mM) to evaluate the effect of urinary alkalization and secretory inhibition on gentamicin excretion and kidney accumulation. Gentamicin displayed net reabsorption in the IPRK, consistent with extensive luminal uptake. Kinetic analysis indicated that luminal transport of gentamicin (kidney → urine) is the rate-determining step for gentamicin urinary excretion. Clearance and cumulative excretion decreased with increased gentamicin dose. Gentamicin kidney accumulation, estimated by mass balance, ranged from ~20% to 30%. Urinary alkalization significantly increased gentamicin excretion, with no effect on kidney accumulation. Conversely, cimetidine co-administration did not affect gentamicin clearance in the IPRK, but kidney accumulation was significantly reduced. When both sodium bicarbonate and cimetidine were administered together, gentamicin kidney accumulation decreased ~80% with corresponding increases in clearance and excretion ratio (XR) compared to gentamicin alone. A main strategy to reduce the incidence of nephrotoxicity with gentamicin therapy (up to ~25%) involves reducing kidney accumulation of the compound. The results of this research suggest that the combination of urinary alkalization and inhibition of basolateral secretion (blood → kidney) may be a viable approach to mitigate aminoglycoside toxicity, and warrants further investigation.