 Computer Science , 2015, Abstract: Algorithm designers typically assume that the input data is correct, and then proceed to find "optimal" or "sub-optimal" solutions using it. However, this assumption of correct data does not always hold in practice, especially in online learning systems whose objective is to learn appropriate feature weights from training samples. Such scenarios necessitate the study of inverse optimization problems, where one is given an input instance as well as a desired output, and the task is to adjust the input data so that the given output is indeed optimal. Motivated by learning structured prediction models, in this paper we consider inverse optimization with a margin, i.e., we require the given output to be better than all other feasible outputs by a desired margin. We consider such inverse optimization problems for maximum-weight matroid basis, matroid intersection, perfect matchings, minimum-cost maximum flows, and shortest paths, and derive the first known results for such problems with a non-zero margin. The effectiveness of these algorithmic approaches to online learning for structured prediction is also discussed.
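The margin requirement can be made concrete in the simplest special case, a uniform matroid of rank k (i.e., selecting the k heaviest elements). The sketch below is a simplification for illustration, not the paper's algorithm: it computes the minimal uniform weight shift (in the L-infinity sense) that makes a desired k-element set optimal by a given margin.

```python
def inverse_topk_margin(weights, desired, margin):
    """Minimally adjust weights (L-infinity sense) so that the index set
    `desired` is the unique maximum-weight basis of a uniform matroid,
    beating every other basis by at least `margin`."""
    in_min = min(weights[i] for i in desired)
    out_max = max(w for i, w in enumerate(weights) if i not in desired)
    gap = (out_max + margin) - in_min      # amount by which the margin is violated
    if gap <= 0:
        return list(weights)               # already optimal with the margin
    # Split the correction evenly: raise desired elements, lower the rest.
    return [w + gap / 2 if i in desired else w - gap / 2
            for i, w in enumerate(weights)]
```

Splitting the correction evenly is what makes the adjustment minimal in the L-infinity norm: the gap must close by `gap` in total, and each side moves by half.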
 Computer Science , 2015, Abstract: Online learning with multiple kernels has gained increasing interest in recent years and has found many applications. For classification tasks, Online Multiple Kernel Classification (OMKC), which learns a kernel-based classifier by seeking the optimal linear combination of a pool of single-kernel classifiers in an online fashion, achieves superior accuracy and enjoys great flexibility compared with traditional single-kernel classifiers. Despite being studied extensively, existing OMKC algorithms suffer from high computational cost due to their unbounded numbers of support vectors. To overcome this drawback, we present a novel framework of Budget Online Multiple Kernel Learning (BOMKL) and propose a new Sparse Passive-Aggressive learning method to perform effective budget online learning. Specifically, we adopt a simple yet effective Bernoulli sampling scheme to decide whether an incoming instance should be added to the current set of support vectors. By limiting the number of support vectors, our method can significantly accelerate OMKC while maintaining accuracy comparable to that of existing OMKC algorithms. We theoretically prove that our new method achieves an optimal regret bound in expectation, and empirically find that it outperforms various OMKC algorithms and easily scales up to large-scale datasets.
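The Bernoulli-sampling admission rule can be sketched as follows. The admission probability below, proportional to the hinge loss, is an illustrative assumption and not BOMKL's exact sampling distribution:

```python
import random

def maybe_admit(support_vectors, x, hinge_loss, budget, rho=0.5, rng=random):
    """Decide by Bernoulli sampling whether an incoming instance joins the
    support-vector set, never exceeding the budget."""
    if hinge_loss <= 0 or len(support_vectors) >= budget:
        return False                      # no loss or no room: skip
    p = min(1.0, rho * hinge_loss)        # assumed form of the sampling probability
    if rng.random() < p:
        support_vectors.append(x)
        return True
    return False
```

Because admission is probabilistic, the expected update matches a scaled full update, which is what makes regret analysis in expectation possible.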
 物理学报 (Acta Physica Sinica) , 2010, Abstract: A multiple-kernel least squares support vector machine (MK-LSSVM) modeling method is proposed for the chaos of the permanent magnet synchronous motor (PMSM). An equivalent kernel is built as a linear-weighted combination of multiple kernels to reduce the dependence of modeling accuracy on the kernel function and its parameters. The solutions of the regression parameters and the MK-LSSVM output are derived in theory. The C-C method is employed for the phase-space reconstruction of PMSM chaos; then one-step and multi-step real-time online prediction of the reconstructed chaotic series is investigated based on a moving-window learning method. The effect of different measurement noises on the proposed method is discussed. Simulations show that the proposed method can enhance the modeling accuracy and has strong anti-noise capability.
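The equivalent-kernel construction and the LS-SVM linear system can be sketched generically as below, with two fixed Gaussian kernels and hand-picked combination weights; the paper's weight selection and C-C phase-space reconstruction are not reproduced here.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mk_lssvm_fit(X, y, sigmas, mus, gamma=100.0):
    """Fit an LS-SVM whose equivalent kernel is a linear-weighted
    combination of Gaussian kernels: K = sum_m mu_m * K_m."""
    K = sum(mu * gaussian_kernel(X, X, s) for mu, s in zip(mus, sigmas))
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0                         # bias constraint row
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma      # regularized kernel block
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]                 # (alpha, bias)

def mk_lssvm_predict(Xtrain, alpha, b, Xtest, sigmas, mus):
    K = sum(mu * gaussian_kernel(Xtest, Xtrain, s) for mu, s in zip(mus, sigmas))
    return K @ alpha + b
```

Mixing a narrow and a wide kernel is the point of the equivalent kernel: accuracy no longer hinges on choosing a single bandwidth correctly.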
 PLOS ONE , 2012, DOI: 10.1371/journal.pone.0019035 Abstract: Calpain, an intracellular calcium-dependent cysteine protease, is known to play a role in a wide range of metabolic pathways through limited proteolysis of its substrates. However, only a limited number of these substrates are currently known, and the exact mechanism of substrate recognition and cleavage by calpain is still largely unknown. While previous research has successfully applied standard machine-learning algorithms to accurately predict substrate cleavage by other, similar types of proteases, that approach does not extend well to calpain, possibly due to its particular mode of proteolytic action and the limited amount of experimental data. Through the use of Multiple Kernel Learning, a recent extension of the classic Support Vector Machine framework, we were able to train complex models based on rich, heterogeneous feature sets, leading to significantly improved prediction quality (a 6% gain over the highest AUC score produced by state-of-the-art methods). In addition to producing a stronger machine-learning model for the prediction of calpain cleavage, we were able to highlight the importance and role of each feature of substrate sequences in defining specificity: primary sequence, secondary structure, and solvent accessibility. Most notably, we showed that significant specificity differences exist across calpain sub-types, despite previous assumptions to the contrary. Prediction accuracy was further validated using, as an unbiased test set, mutated sequences of calpastatin (the endogenous inhibitor of calpain) modified so as to no longer block calpain's proteolytic action. An online implementation of our prediction tool is available at http://calpain.org.
 Computer Science , 2012, Abstract: We propose Coactive Learning as a model of interaction between a learning system and a human user, where both have the common goal of providing results of maximum utility to the user. At each step, the system (e.g., a search engine) receives a context (e.g., a query) and predicts an object (e.g., a ranking). The user responds by correcting the system if necessary, providing a slightly improved -- but not necessarily optimal -- object as feedback. We argue that such feedback can often be inferred from observable user behavior, for example from clicks in web search. Evaluating predictions by their cardinal utility to the user, we propose efficient learning algorithms that achieve ${\cal O}(\frac{1}{\sqrt{T}})$ average regret, even though, unlike in conventional online learning, the algorithm never observes cardinal utility values. We demonstrate the applicability of our model and learning algorithms on a movie recommendation task, as well as ranking for web search.
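The interaction loop described above can be sketched as a preference-perceptron update: predict the best object under the current weights, then move the weights toward the user's improved object. This is a minimal sketch of the core update; the paper analyzes several variants.

```python
import numpy as np

def predict(w, phi, x, candidates):
    """Return the candidate object with the highest modeled utility w . phi(x, y)."""
    return max(candidates, key=lambda y: float(w @ phi(x, y)))

def coactive_update(w, phi, x, y_hat, y_bar, eta=1.0):
    """Move the weights toward the user's improved object y_bar and away
    from the system's own prediction y_hat."""
    return w + eta * (phi(x, y_bar) - phi(x, y_hat))
```

Here `phi` is a joint feature map over context and object; note that the feedback `y_bar` only needs to be slightly better than `y_hat`, never optimal, and no cardinal utility value is ever observed.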
 Computer Science , 2010, Abstract: Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches provide stronger guarantees in this setting, but remain somewhat unsatisfactory, as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm that trains a stationary deterministic policy and can be seen as a no-regret algorithm in an online learning setting. We show that any such no-regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem.
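The iterative scheme (dataset aggregation) can be sketched as the loop below. This is a schematic of the core idea only; it omits the expert/learner mixing schedule used in the paper's analysis.

```python
def train_by_aggregation(expert, fit, rollout, n_iters):
    """Iteratively roll out the current policy, have the expert relabel the
    visited states, aggregate all data so far, and retrain one stationary
    deterministic policy on it."""
    dataset = []
    policy = expert                  # the first rollout follows the expert
    for _ in range(n_iters):
        states = rollout(policy)
        dataset.extend((s, expert(s)) for s in states)
        policy = fit(dataset)        # supervised learning on aggregated data
    return policy
```

Training on states the learner itself visits is what repairs the i.i.d. violation: the policy is fit under the distribution of observations it induces.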
 Computer Science , 2011, Abstract: Structured classification tasks such as sequence labeling and dependency parsing have seen much interest from the Natural Language Processing and machine learning communities. Several online learning algorithms have been adapted for structured tasks, such as the Perceptron, Passive-Aggressive, and the recently introduced Confidence-Weighted learning. These online algorithms are easy to implement, fast to train, and yield state-of-the-art performance. However, unlike probabilistic models such as Hidden Markov Models and Conditional Random Fields, these methods generate models that output merely a prediction, with no additional information regarding confidence in the correctness of the output. In this work we fill that gap by proposing several alternatives for computing the confidence in the output of non-probabilistic algorithms. We show how to compute confidence estimates such that the confidence reflects the probability that a word is labeled correctly. We then show how to use our methods to detect mislabeled words, trade recall for precision, and perform active learning. We evaluate our methods on four noun-phrase chunking and named-entity recognition sequence labeling tasks, and on dependency parsing for 14 languages.
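One simple way to turn the raw scores of a non-probabilistic model into confidence values is a softmax over per-label scores. This is offered purely as an illustration of the problem setup, not as the paper's estimators (which include, e.g., sampling-based schemes):

```python
import math

def label_confidences(scores):
    """Map raw per-label scores from a non-probabilistic classifier to
    values in (0, 1) that sum to one, usable as confidence estimates."""
    m = max(scores.values())                       # stabilize the exponentials
    exps = {y: math.exp(s - m) for y, s in scores.items()}
    z = sum(exps.values())
    return {y: e / z for y, e in exps.items()}
```

A larger score margin between the top two labels then translates directly into higher confidence for the predicted label, which is the property needed for detecting mislabeled words and trading recall for precision.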
 控制理论与应用 (Control Theory & Applications) , 2010, Abstract: Mooney viscosity has a significant impact on the properties of the polymer but is very difficult to measure online. A new modeling method using two-stage recursive kernel learning is proposed for online modeling and prediction of Mooney viscosity in rubber mixing processes. The model can be established online for each recipe and recursively updated to adapt to fast changes in the process. In the present method, a novel error-evaluation index is formulated based on the mixing properties. The model parameters are selected online and adaptively, using a fast leave-one-out cross-validation criterion, to overcome the difficulty of parameter selection. An industrial system named the Smart Mixing Information Integrated & Control System has been developed and successfully applied to several large-scale rubber and tire manufacturers in China. The results of online Mooney viscosity prediction show that the developed method is very efficient and thus of real economic importance for rubber mixing processes.
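The fast leave-one-out criterion has a well-known closed form for kernel ridge regression, shown below as a generic sketch (the paper's LS-SVM formulation differs slightly): the exact LOO residual of every point can be read off from a single full fit, with no refitting.

```python
import numpy as np

def fast_loo_residuals(K, y, lam):
    """Exact leave-one-out residuals for kernel ridge regression:
    e_i = (y_i - yhat_i) / (1 - H_ii), with H = K (K + lam*I)^-1."""
    n = len(y)
    A = K + lam * np.eye(n)
    alpha = np.linalg.solve(A, y)
    h = np.diag(K @ np.linalg.inv(A))    # leverage values H_ii
    return (y - K @ alpha) / (1.0 - h)
```

Model parameters (kernel width, `lam`) can then be selected online by minimizing, e.g., the mean squared LOO residual, which is what makes adaptive parameter selection cheap enough for a recursive scheme.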
 张伟,许爱强,高明哲 , 2017, DOI: 10.13700/j.bh.1001-5965.2016.0802 Abstract: In order to achieve online condition prediction for avionic devices, a sparse kernel incremental extreme learning machine (ELM) algorithm is presented. To address the problem of Gram-matrix expansion in kernel online learning algorithms, a novel sparsification rule is presented that measures the instantaneous learnable information contained in a data sample for dictionary selection. The proposed sparsification method combines a constructive strategy and a pruning strategy in two stages. By minimizing the redundancy of the dictionary in the constructive phase and maximizing the instantaneous conditional self-information of dictionary atoms in the pruning phase, a compact dictionary of predefined size can be selected adaptively. For the kernel-weight updating of the kernel-based incremental ELM, an improved decremental learning algorithm is proposed that uses elementary matrix transformations and the block-matrix inversion formula, effectively moderating the computational complexity at each iteration. In the proposed algorithm, the inverse of the Gram matrix of the remaining samples can be updated directly after one sample is deleted from the dictionary. Experimental results on aero-engine condition prediction show that the proposed method reduces the overall average error rate to 2.18% when the prediction step equals 20. Compared with three well-known kernel-ELM online learning algorithms, the prediction accuracy is improved by 0.72%, 0.14% and 0.13%, respectively.
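The two-stage construct-and-prune idea can be sketched generically as below. Both criteria here are stand-ins: the admission test uses kernel coherence and the pruning score uses redundancy, in place of the paper's instantaneous-self-information measures.

```python
import math

def gauss(a, b):
    return math.exp(-(a - b) ** 2)

def update_dictionary(D, x, kernel=gauss, mu_max=0.9, max_size=2):
    """Constructive stage: admit x only if it is not too coherent with the
    current dictionary.  Pruning stage: if the budget is exceeded, drop the
    atom most redundant with the rest, keeping a fixed memory size."""
    if D and max(kernel(x, d) for d in D) >= mu_max:
        return D                                   # x is redundant: discard
    D = D + [x]
    if len(D) > max_size:
        redundancy = [max(kernel(D[i], D[j])
                          for j in range(len(D)) if j != i)
                      for i in range(len(D))]
        del D[redundancy.index(max(redundancy))]   # prune most redundant atom
    return D
```

The fixed dictionary size is what bounds the Gram matrix, so the decremental inverse update described in the abstract only ever operates on a matrix of constant dimension.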