Search Results: 1 - 10 of 3543 matches for " Ran Gilad-Bachrach "
All listed articles are free for downloading (OA Articles)
DART: Dropouts meet Multiple Additive Regression Trees
K. V. Rashmi, Ran Gilad-Bachrach
Computer Science , 2015,
Abstract: Multiple Additive Regression Trees (MART), an ensemble model of boosted regression trees, is known to deliver high prediction accuracy for diverse tasks, and it is widely used in practice. However, it suffers from an issue which we call over-specialization, wherein trees added at later iterations tend to impact the prediction of only a few instances and make a negligible contribution towards the remaining instances. This negatively affects the performance of the model on unseen data, and also makes the model over-sensitive to the contributions of the few initially added trees. We show that the commonly used tool for addressing this issue, shrinkage, alleviates the problem only to a certain extent, and the fundamental issue of over-specialization remains. In this work, we explore a different approach to the problem: employing dropouts, a tool recently proposed in the context of learning deep neural networks. We propose a novel way of employing dropouts in MART, resulting in the DART algorithm. We evaluate DART on ranking, regression and classification tasks, using large-scale, publicly available datasets, and show that DART outperforms MART in each of the tasks, with a significant margin. We also show that DART overcomes the issue of over-specialization to a considerable extent.
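The dropout-plus-normalization idea the abstract describes can be sketched in a few lines. The toy below (function names `fit_stump` and `dart_fit` are made up for illustration, not the paper's code) boosts depth-1 regression stumps: before each round a random subset of existing trees is dropped, the new tree fits the residuals of the surviving ensemble, and then the new tree is scaled by 1/(k+1) and the k dropped trees by k/(k+1), which is the normalization step DART uses to keep the ensemble's output on scale.

```python
import random

def fit_stump(xs, residuals):
    """Fit a depth-1 regression tree (stump) minimizing squared error."""
    best = None
    for t in sorted(set(xs))[:-1]:          # candidate split thresholds
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - ml) ** 2 for r in left) + sum((r - mr) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

def dart_fit(xs, ys, n_rounds=30, drop_rate=0.3, seed=0):
    """DART-style boosting: drop a random subset of trees before each round."""
    rng = random.Random(seed)
    trees = []                               # list of (tree, weight) pairs
    for _ in range(n_rounds):
        drop = {i for i in range(len(trees)) if rng.random() < drop_rate}
        kept = lambda x: sum(w * f(x)
                             for i, (f, w) in enumerate(trees) if i not in drop)
        residuals = [y - kept(x) for x, y in zip(xs, ys)]  # fit what survivors miss
        new_tree = fit_stump(xs, residuals)
        k = len(drop)
        for i in drop:                       # normalization: scale dropped trees
            f, w = trees[i]
            trees[i] = (f, w * k / (k + 1))
        trees.append((new_tree, 1.0 / (k + 1)))  # new tree scaled by 1/(k+1)
    return lambda x: sum(w * f(x) for f, w in trees)
```

Because every tree must share responsibility with possible future replacements, no single early tree can dominate the prediction, which is how dropout counters the over-specialization the abstract describes.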
Robust Distributed Online Prediction
Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
Mathematics , 2010,
Abstract: The standard model of online prediction deals with serial processing of inputs by a single processor. However, in large-scale online prediction problems, where inputs arrive at a high rate, an increasingly common necessity is to distribute the computation across several processors. A non-trivial challenge is to design distributed algorithms for online prediction, which maintain good regret guarantees. In \cite{DMB}, we presented the DMB algorithm, which is a generic framework to convert any serial gradient-based online prediction algorithm into a distributed algorithm. Moreover, its regret guarantee is asymptotically optimal for smooth convex loss functions and stochastic inputs. On the flip side, it is fragile to many types of failures that are common in distributed environments. In this companion paper, we present variants of the DMB algorithm, which are resilient to many types of network failures, and tolerant to varying performance of the computing nodes.
Optimal Distributed Online Prediction using Mini-Batches
Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
Mathematics , 2010,
Abstract: Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work, we present the \emph{distributed mini-batch} algorithm, a method of converting many serial gradient-based online prediction algorithms into distributed algorithms. We prove a regret bound for this method that is asymptotically optimal for smooth convex loss functions and stochastic inputs. Moreover, our analysis explicitly takes into account communication latencies between nodes in the distributed environment. We show how our method can be used to solve the closely-related distributed stochastic optimization problem, achieving an asymptotically linear speed-up over multiple processors. Finally, we demonstrate the merits of our approach on a web-scale online prediction problem.
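The core mechanism — workers compute gradients on their own slice of the input stream, the gradients are aggregated, and one shared update is applied — can be simulated serially. The sketch below is a minimal illustration under assumed names (`dmb_round`, the toy loss, and the learning rate are all made up here), not the paper's algorithm with its latency analysis.

```python
def dmb_round(w, worker_batches, grad, lr):
    """One round of (simulated) distributed mini-batch gradient descent.

    Each worker averages gradients over its local batch at the shared
    parameter w; the worker averages are then combined (as an all-reduce
    would do) and a single update is applied everywhere.
    """
    local = [sum(grad(w, z) for z in batch) / len(batch)
             for batch in worker_batches]
    g = sum(local) / len(local)              # aggregate across workers
    return w - lr * g

# Toy stream: pairs (x, y) with y = 3x, squared loss (w*x - y)**2.
grad = lambda w, z: 2 * (w * z[0] - z[1]) * z[0]
batches = [[(1, 3), (2, 6)], [(3, 9), (4, 12)]]   # two simulated workers
w = 0.0
for _ in range(50):
    w = dmb_round(w, batches, grad, lr=0.05)
# w converges to the true slope 3
```

The point of the construction is that each round processes one mini-batch's worth of inputs per communication step, which is what lets the regret bound stay asymptotically optimal while the work is spread over nodes.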
Using Multiple Samples to Learn Mixture Models
Jason D. Lee, Ran Gilad-Bachrach, Rich Caruana
Computer Science , 2013,
Abstract: In the mixture models problem it is assumed that there are $K$ distributions $\theta_{1},\ldots,\theta_{K}$ and one gets to observe a sample from a mixture of these distributions with unknown coefficients. The goal is to associate instances with their generating distributions, or to identify the parameters of the hidden distributions. In this work we make the assumption that we have access to several samples drawn from the same $K$ underlying distributions, but with different mixing weights. As with topic modeling, having multiple samples is often a reasonable assumption. Instead of pooling the data into one sample, we prove that it is possible to use the differences between the samples to better recover the underlying structure. We present algorithms that recover the underlying structure under milder assumptions than the current state of the art when either the dimensionality or the separation is high. The methods, when applied to topic modeling, allow generalization to words not present in the training data.
Efficient Human Computation
Ran Gilad-Bachrach, Aharon Bar-Hillel, Liat Ein-Dor
Computer Science , 2009,
Abstract: Collecting large labeled data sets is a laborious and expensive task, whose scaling up requires division of the labeling workload between many teachers. When the number of classes is large, miscorrespondences between the labels given by the different teachers are likely to occur, which, in the extreme case, may reach total inconsistency. In this paper we describe how globally consistent labels can be obtained, despite the absence of teacher coordination, and discuss the possible efficiency of this process in terms of human labor. We define a notion of label efficiency, measuring the ratio between the number of globally consistent labels obtained and the number of labels provided by distributed teachers. We show that the efficiency depends critically on the ratio alpha between the number of data instances seen by a single teacher, and the number of classes. We suggest several algorithms for the distributed labeling problem, and analyze their efficiency as a function of alpha. In addition, we provide an upper bound on label efficiency for the case of completely uncoordinated teachers, and show that efficiency approaches 0 as the ratio between the number of labels each teacher provides and the number of classes drops (i.e. alpha goes to 0).
Crypto-Nets: Neural Networks over Encrypted Data
Pengtao Xie, Misha Bilenko, Tom Finley, Ran Gilad-Bachrach, Kristin Lauter, Michael Naehrig
Computer Science , 2014,
Abstract: The problem we address is the following: how can a user employ a predictive model that is held by a third party, without compromising private information. For example, a hospital may wish to use a cloud service to predict the readmission risk of a patient. However, due to regulations, the patient's medical files cannot be revealed. The goal is to make an inference using the model, without jeopardizing the accuracy of the prediction or the privacy of the data. To achieve high accuracy, we use neural networks, which have been shown to outperform other learning models for many tasks. To achieve the privacy requirements, we use homomorphic encryption in the following protocol: the data owner encrypts the data and sends the ciphertexts to the third party to obtain a prediction from a trained model. The model operates on these ciphertexts and sends back the encrypted prediction. In this protocol, not only does the data remain private; even the predicted values are available only to the data owner. Using homomorphic encryption and modifications to the activation functions and training algorithms of neural networks, we show that this protocol is possible and may be feasible. This method paves the way to building secure cloud-based neural network prediction services without invading users' privacy.
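The "modifications to the activation functions" the abstract mentions are motivated by a constraint of homomorphic encryption: the scheme can evaluate only additions and multiplications, so non-polynomial activations like sigmoid must be replaced by polynomials. The plaintext sketch below illustrates that idea with a squaring activation; the two-unit network, its weights, and the function names are invented for illustration only — in the actual protocol the same arithmetic would be carried out on ciphertexts.

```python
def square(x):
    """HE-friendly activation: a single (ciphertext) multiplication."""
    return x * x

def dense(inputs, weights, bias):
    """A linear layer uses only additions and multiplications."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def tiny_forward(x1, x2):
    """Forward pass of a made-up 2-input, 2-hidden-unit network."""
    h1 = square(dense([x1, x2], [1.0, -1.0], 0.0))
    h2 = square(dense([x1, x2], [0.5, 0.5], 0.0))
    return dense([h1, h2], [1.0, 1.0], 0.0)
```

Every operation in `tiny_forward` is a sum or a product, so the whole forward pass is a low-degree polynomial in the inputs — exactly the class of functions a somewhat homomorphic encryption scheme can evaluate on encrypted data.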
Percutaneous Heart Valves: The Emergence of a Disruptive Technology
Ran Gilad, Ron Somogyi
University of Toronto Medical Journal , 2005, DOI: 10.5015/utmj.v82i3.446
Dr. Tirone E. David: A Life of Passion and Compassion in Cardiac Surgery
Bobby Yanagawa, Ran Gilad
University of Toronto Medical Journal , 2006, DOI: 10.5015/utmj.v83i2.315
LL-37 Induces Polymerization and Bundling of Actin and Affects Actin Structure
Asaf Sol, Edna Blotnick, Gilad Bachrach, Andras Muhlrad
PLOS ONE , 2012, DOI: 10.1371/journal.pone.0050078
Abstract: Actin exists as a monomer (G-actin) which can be polymerized to filaments (F-actin) that, under the influence of actin-binding proteins and polycations, bundle and contribute to the formation of the cytoskeleton. Bundled actin from lysed cells increases the viscosity of sputum in lungs of cystic fibrosis patients. The human host defense peptide LL-37 was previously shown to induce actin bundling and was thus hypothesized to contribute to the pathogenicity of this disease. In this work, interactions between actin and the cationic LL-37 were studied by optical, proteolytic and surface plasmon resonance methods and compared to those obtained with scrambled LL-37 and with the cationic protein lysozyme. We show that LL-37 binds strongly to CaATP-G-actin while scrambled LL-37 does not. While LL-37, at superstoichiometric LL-37/actin concentrations, polymerizes MgATP-G-actin, at lower non-polymerizing concentrations LL-37 inhibits actin polymerization by MgCl2 or NaCl. LL-37 bundles Mg-F-actin filaments both at low and physiological ionic strength when in equimolar or higher concentrations than those of actin. The LL-37 induced bundles are significantly less sensitive to increase in ionic strength than those induced by scrambled LL-37 and lysozyme. LL-37, in concentrations lower than those needed for actin polymerization or bundling, accelerates cleavage of both monomer and polymer actin by subtilisin. Our results indicate that the LL-37-actin interaction is partially electrostatic and partially hydrophobic and that a specific actin binding sequence in the peptide is responsible for the hydrophobic interaction. LL-37-induced bundles, which may contribute to the accumulation of sputum in cystic fibrosis, are dissociated very efficiently by DNase-1 and also by cofilin.
The Interestingness Tool for Search in the Web
Iaakov Exman, Gilad Amar, Ran Shaltiel
Computer Science , 2014, DOI: 10.5220/0004178900540063
Abstract: Interestingness, as the composition of Relevance and Unexpectedness, has been tested by means of Web search case studies and led to promising results. But for thorough investigation and routine practical application one needs a flexible and robust tool. This work describes such an Interestingness-based search tool, its software architecture and actual implementation. One of its flexibility traits is the choice of Interestingness functions: it may work with Match-Mismatch and Tf-Idf, among other functions. The tool has been experimentally verified by application to various domains of interest. It has been validated by comparison of results with those of commercial search engines and results from differing Interestingness functions.
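Tf-Idf, one of the pluggable Interestingness functions the abstract names, can be stated in a few lines. The sketch below is the standard tf-idf weighting, not the tool's actual implementation; the function name and the token-list representation are assumptions for illustration.

```python
import math

def tf_idf(term, doc, corpus):
    """Score `term` in token-list `doc` against a corpus of token lists."""
    tf = doc.count(term) / len(doc)            # term frequency within the doc
    df = sum(1 for d in corpus if term in d)   # number of docs containing term
    # rarer terms get a larger inverse-document-frequency boost
    return tf * math.log(len(corpus) / df) if df else 0.0
```

A term that appears in few documents scores higher than an equally frequent term that appears everywhere, which is the kind of "unexpectedness" signal such a function can contribute.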

Copyright © 2008-2017 Open Access Library. All rights reserved.