Search Results: 1 - 10 of 6796 matches for "Alex Graves"
All listed articles are free for downloading (OA Articles)
Generating Sequences With Recurrent Neural Networks
Alex Graves
Computer Science, 2013
Abstract: This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time. The approach is demonstrated for text (where the data are discrete) and online handwriting (where the data are real-valued). It is then extended to handwriting synthesis by allowing the network to condition its predictions on a text sequence. The resulting system is able to generate highly realistic cursive handwriting in a wide variety of styles.
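To make the generation mechanism concrete, here is a minimal, illustrative Python/numpy sketch (not the paper's code) of the core sampling loop: an LSTM predicts a distribution over the next symbol, a symbol is sampled, and that sample is fed back in as the next input. All sizes and weights below are made-up placeholders, so the output is meaningless; only the one-step-at-a-time mechanism is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 27, 64                          # vocabulary size, hidden size (illustrative)

Wx = rng.normal(0, 0.1, (4 * H, V))    # input weights for the i, f, o, g gates
Wh = rng.normal(0, 0.1, (4 * H, H))    # recurrent weights
b  = np.zeros(4 * H)
Wy = rng.normal(0, 0.1, (V, H))        # hidden-to-output projection

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    z = Wx @ x + Wh @ h + b
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def sample_sequence(length=50):
    h, c = np.zeros(H), np.zeros(H)
    x = np.zeros(V); x[0] = 1.0        # start symbol
    out = []
    for _ in range(length):
        h, c = lstm_step(x, h, c)
        logits = Wy @ h
        p = np.exp(logits - logits.max()); p /= p.sum()
        k = rng.choice(V, p=p)         # sample the next symbol
        out.append(int(k))
        x = np.zeros(V); x[k] = 1.0    # feed the sample back in as the next input
    return out

print(sample_sequence(20))
```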
Sequence Transduction with Recurrent Neural Networks
Alex Graves
Computer Science, 2012
Abstract: Many machine learning tasks can be expressed as the transformation, or transduction, of input sequences into output sequences: speech recognition, machine translation, protein secondary structure prediction and text-to-speech, to name but a few. One of the key challenges in sequence transduction is learning to represent both the input and output sequences in a way that is invariant to sequential distortions such as shrinking, stretching and translating. Recurrent neural networks (RNNs) are a powerful sequence learning architecture that has proven capable of learning such representations. However, RNNs traditionally require a pre-defined alignment between the input and output sequences to perform transduction. This is a severe limitation, since finding the alignment is the most difficult aspect of many sequence transduction problems. Indeed, even determining the length of the output sequence is often challenging. This paper introduces an end-to-end, probabilistic sequence transduction system, based entirely on RNNs, that is in principle able to transform any input sequence into any finite, discrete output sequence. Experimental results for phoneme recognition are provided on the TIMIT speech corpus.
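The alignment-free aspect can be sketched with the forward recursion that marginalises over all alignments between input frames and output labels. The snippet below is a toy illustration, not the paper's implementation: the joint network's per-lattice-point distributions are replaced by random placeholders, and only the dynamic programme that sums over alignments is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
T, U, K = 4, 3, 5        # input frames, output labels, vocab size (label 0 = blank)
target = [1, 3, 2]       # hypothetical output sequence of length U

# Stand-in for the joint network: log Pr(k | t, u) at every lattice point.
logits = rng.normal(size=(T, U + 1, K))
logp = logits - np.log(np.exp(logits).sum(-1, keepdims=True))

def logsumexp2(a, b):
    m = max(a, b)
    if m == -np.inf:
        return -np.inf
    return m + np.log(np.exp(a - m) + np.exp(b - m))

# alpha[t, u]: log-probability of having emitted the first u labels
# while the transcription network is at frame t.
alpha = np.full((T, U + 1), -np.inf)
alpha[0, 0] = 0.0
for t in range(T):
    for u in range(U + 1):
        if t == 0 and u == 0:
            continue
        stay = alpha[t - 1, u] + logp[t - 1, u, 0] if t > 0 else -np.inf          # emit blank, advance a frame
        emit = alpha[t, u - 1] + logp[t, u - 1, target[u - 1]] if u > 0 else -np.inf  # emit the next label
        alpha[t, u] = logsumexp2(stay, emit)

log_prob = alpha[T - 1, U] + logp[T - 1, U, 0]   # a final blank consumes the last frame
print("log Pr(target | input) =", log_prob)
```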
Grid Long Short-Term Memory
Nal Kalchbrenner, Ivo Danihelka, Alex Graves
Computer Science, 2015
Abstract: This paper introduces Grid Long Short-Term Memory, a network of LSTM cells arranged in a multidimensional grid that can be applied to vectors, sequences or higher dimensional data such as images. The network differs from existing deep LSTM architectures in that the cells are connected between network layers as well as along the spatiotemporal dimensions of the data. The network provides a unified way of using LSTM for both deep and sequential computation. We apply the model to algorithmic tasks such as 15-digit integer addition and sequence memorization, where it is able to significantly outperform the standard LSTM. We then give results for two empirical tasks. We find that 2D Grid LSTM achieves 1.47 bits per character on the Wikipedia character prediction benchmark, which is state-of-the-art among neural approaches. In addition, we use the Grid LSTM to define a novel two-dimensional translation model, the Reencoder, and show that it outperforms a phrase-based reference system on a Chinese-to-English translation task.
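A rough sketch of the central idea, under the assumption of a 2D grid with one (hidden, memory) pair per dimension: every dimension applies its own LSTM transform, but each transform is gated by the concatenation of all incoming hidden vectors, which couples the dimensions. Sizes and weights are illustrative placeholders, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 32  # hidden size per dimension (illustrative)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_lstm_params(in_dim, hid):
    return rng.normal(0, 0.1, (4 * hid, in_dim)), np.zeros(4 * hid)

def lstm_transform(W, b, x, c):
    """Standard LSTM gating applied to a concatenated input x and memory c."""
    i, f, o, g = np.split(W @ x + b, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    return sigmoid(o) * np.tanh(c_new), c_new

# One LSTM transform per grid dimension; both see the concatenated hidden state.
params = [make_lstm_params(2 * H, H) for _ in range(2)]

def grid_cell(h1, c1, h2, c2):
    """2D Grid LSTM cell: each dimension keeps its own (h, c) pair, but the
    gates along every dimension are computed from the concatenation of all
    incoming hidden vectors."""
    h_cat = np.concatenate([h1, h2])
    (W1, b1), (W2, b2) = params
    h1n, c1n = lstm_transform(W1, b1, h_cat, c1)   # e.g. the "depth" dimension
    h2n, c2n = lstm_transform(W2, b2, h_cat, c2)   # e.g. the "time" dimension
    return h1n, c1n, h2n, c2n

h1 = c1 = h2 = c2 = np.zeros(H)
h1, c1, h2, c2 = grid_cell(h1, c1, h2, c2)
print(h1.shape, h2.shape)
```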
Neural Turing Machines
Alex Graves, Greg Wayne, Ivo Danihelka
Computer Science, 2014
Abstract: We extend the capabilities of neural networks by coupling them to external memory resources, with which they can interact via attentional processes. The combined system is analogous to a Turing machine or von Neumann architecture, but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.
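The attentional memory access can be sketched as soft, differentiable addressing over a memory matrix. The following is a simplified illustration of content-based addressing with soft read and erase/add write operations only (the paper also uses location-based addressing); names and sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, W = 8, 4                      # memory slots, slot width (illustrative)
M = rng.normal(size=(N, W))      # external memory matrix

def content_address(M, key, beta):
    """Soft, differentiable addressing by cosine similarity to a key."""
    sim = M @ key / (np.linalg.norm(M, axis=1) * np.linalg.norm(key) + 1e-8)
    e = np.exp(beta * sim)       # beta sharpens the focus of the weighting
    return e / e.sum()

def read(M, w):
    return w @ M                 # weighted sum over memory rows

def write(M, w, erase, add):
    M = M * (1 - np.outer(w, erase))   # soft erase
    return M + np.outer(w, add)        # soft add

key, beta = rng.normal(size=W), 5.0
w = content_address(M, key, beta)
r = read(M, w)
M = write(M, w, erase=np.full(W, 0.5), add=rng.normal(size=W))
print(w.round(2), r.round(2))
```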
Multi-Dimensional Recurrent Neural Networks
Alex Graves, Santiago Fernandez, Juergen Schmidhuber
Computer Science, 2007
Abstract: Recurrent neural networks (RNNs) have proved effective at one-dimensional sequence learning tasks, such as speech and online handwriting recognition. Some of the properties that make RNNs suitable for such tasks, for example robustness to input warping and the ability to access contextual information, are also desirable in multidimensional domains. However, there has so far been no direct way of applying RNNs to data with more than one spatio-temporal dimension. This paper introduces multi-dimensional recurrent neural networks (MDRNNs), thereby extending the potential applicability of RNNs to vision, video processing, medical imaging and many other areas, while avoiding the scaling problems that have plagued other multi-dimensional models. Experimental results are provided for two image segmentation tasks.
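The 2D case of the recurrence can be sketched as a scan over an image in which the hidden state at each pixel depends on the input there plus the hidden states of its already-visited neighbours, one per dimension. The snippet below is illustrative only and uses a plain tanh cell for brevity; the paper's MDRNNs typically use LSTM cells, and all sizes here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
H, Hd, Wd = 16, 5, 6                 # hidden size, image height, width (toy)
img = rng.normal(size=(Hd, Wd, 1))   # single-channel "image"

Wx = rng.normal(0, 0.3, (H, 1))
Wu = rng.normal(0, 0.3, (H, H))      # recurrence from the pixel above
Wl = rng.normal(0, 0.3, (H, H))      # recurrence from the pixel to the left
b  = np.zeros(H)

# 2D recurrence: the state at (i, j) sees the input there plus the hidden
# states of both previously visited neighbours, one per dimension.
h = np.zeros((Hd, Wd, H))
for i in range(Hd):
    for j in range(Wd):
        up   = h[i - 1, j] if i > 0 else np.zeros(H)
        left = h[i, j - 1] if j > 0 else np.zeros(H)
        h[i, j] = np.tanh(Wx @ img[i, j] + Wu @ up + Wl @ left + b)

print(h[-1, -1].round(3))
```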
Phoneme recognition in TIMIT with BLSTM-CTC
Santiago Fernández, Alex Graves, Juergen Schmidhuber
Computer Science, 2008
Abstract: We compare the performance of a recurrent neural network with the best results published so far on phoneme recognition in the TIMIT database. These published results have been obtained with a combination of classifiers. However, in this paper we apply a single recurrent neural network to the same task. Our recurrent neural network attains an error rate of 24.6%. This result is not significantly different from that obtained by the other best methods, but those methods rely on a combination of classifiers to achieve comparable performance.
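For orientation, the CTC output layer used here maps per-frame label scores to a label sequence by collapsing repeated labels and removing blanks. The sketch below shows only that greedy (best-path) decoding step, with random scores standing in for the BLSTM's outputs; it is an illustration, not the paper's decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 12, 6                 # frames, labels (index 0 = CTC blank); toy sizes
logits = rng.normal(size=(T, K))

# Greedy (best-path) CTC decoding: take the most likely label per frame,
# then collapse repeats and drop blanks.
best = logits.argmax(axis=1)
decoded, prev = [], None
for k in best:
    if k != prev and k != 0:
        decoded.append(int(k))
    prev = k
print(best.tolist(), "->", decoded)
```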
Speech Recognition with Deep Recurrent Neural Networks
Alex Graves, Abdel-rahman Mohamed, Geoffrey Hinton
Computer Science, 2013
Abstract: Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However, RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long-range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.
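The "deep" part is simply a stack of recurrent layers in which each layer's hidden sequence becomes the next layer's input. A minimal sketch follows, using a plain tanh recurrence and made-up sizes for brevity where the paper uses LSTM layers; it shows the stacking, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, H, L = 10, 13, 32, 3      # frames, input dim, hidden size, layers (toy)
x = rng.normal(size=(T, D))     # stand-in for a sequence of acoustic feature frames

# One weight set per layer; layer l takes layer l-1's hidden sequence as input.
layers = []
for l in range(L):
    in_dim = D if l == 0 else H
    layers.append((rng.normal(0, 0.2, (H, in_dim)),
                   rng.normal(0, 0.2, (H, H)),
                   np.zeros(H)))

seq = x
for Wx, Wh, b in layers:
    h = np.zeros(H)
    out = []
    for t in range(seq.shape[0]):
        h = np.tanh(Wx @ seq[t] + Wh @ h + b)   # plain recurrence for brevity
        out.append(h)
    seq = np.stack(out)         # hidden sequence becomes the next layer's input

print(seq.shape)                # (T, H): features that would feed a CTC output layer
```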
Recurrent Models of Visual Attention
Volodymyr Mnih, Nicolas Heess, Alex Graves, Koray Kavukcuoglu
Computer Science, 2014
Abstract: Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.
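The glimpse loop can be sketched as: crop a small patch around the current location, fold it into a recurrent state, and emit the next location to attend to. The snippet below shows only that forward loop with invented sizes; the reinforcement-learning training that makes the non-differentiable cropping learnable is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
IMG, PATCH, H, STEPS = 28, 8, 32, 4          # toy image size, patch size, state size, glimpses
image = rng.normal(size=(IMG, IMG))

Wg = rng.normal(0, 0.1, (H, PATCH * PATCH))  # glimpse encoder
Wh = rng.normal(0, 0.1, (H, H))              # recurrent core
Wl = rng.normal(0, 0.1, (2, H))              # location head

def glimpse(img, loc):
    """Crop a fixed-size patch whose corner is set by loc (values in [0, 1])."""
    cy, cx = (loc * (IMG - PATCH)).astype(int)
    return img[cy:cy + PATCH, cx:cx + PATCH].ravel()

h, loc = np.zeros(H), np.array([0.5, 0.5])
for _ in range(STEPS):
    g = glimpse(image, loc)
    h = np.tanh(Wg @ g + Wh @ h)             # integrate the new glimpse into the state
    loc = 1.0 / (1.0 + np.exp(-(Wl @ h)))    # propose where to look next
print(loc.round(3))
```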
Economic Growth and Business Cycles: The Labor Supply Decision with Two Types of Technological Progress
Philip E. Graves
Modern Economy (ME), 2011, DOI: 10.4236/me.2011.23033
Abstract: An informal model is described that leads to multiple macroeconomic equilibria as a consequence of random variation in the relative amounts of technological change for new and existing goods. The novel observation is that the rate of introduction and market penetration of new goods (sometimes called product innovation) vis-à-vis technological advance for existing goods (sometimes called process innovation) importantly affects the labor supply decision. A relatively rapid influx of new goods will generally increase labor supply, while relatively more technological advance for existing goods will reduce labor supply to the market. These impacts are seen to provide insights into Rostow’s stages of growth. Short run variations in the relative importance of the two types of technological change are seen to imply unpredictable business cycle behavior of the type we observe. The welfare implications of national income accounting that fails to consider changes in leisure are discussed.
Playing Atari with Deep Reinforcement Learning
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller
Computer Science, 2013
Abstract: We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
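The Q-learning variant at the heart of the method bootstraps a target from the best next-state action value and regresses the current estimate towards it, sampling transitions from a replay memory. The sketch below uses a toy linear Q-function in place of the convolutional network and random transitions in place of Atari frames; it illustrates the update rule, not the paper's system.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A = 8, 4                     # toy state-feature dimension and number of actions
gamma, lr = 0.99, 0.01
W = np.zeros((A, S))            # linear stand-in for the convolutional Q-network

def q_values(s):
    return W @ s

# A tiny replay memory of (state, action, reward, next_state) transitions.
replay = [(rng.normal(size=S), int(rng.integers(A)), rng.normal(), rng.normal(size=S))
          for _ in range(100)]

for _ in range(500):
    s, a, r, s2 = replay[rng.integers(len(replay))]   # sample a stored transition
    target = r + gamma * q_values(s2).max()           # bootstrap from max_a' Q(s', a')
    td_error = target - q_values(s)[a]
    W[a] += lr * td_error * s                         # gradient step on the squared TD error

print(q_values(replay[0][0]).round(3))
```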