%0 Journal Article
%T Architecture Analysis of an FPGA-Based Hopfield Neural Network
%A Miguel Angelo de Abreu de Sousa
%A Edson Lemos Horta
%A Sergio Takeo Kofuji
%A Emilio Del-Moral-Hernandez
%J Advances in Artificial Neural Systems
%D 2014
%I Hindawi Publishing Corporation
%R 10.1155/2014/602325
%X Interconnections between electronic circuits and neural computation have been a strongly researched topic in the machine learning field, driven by several practical requirements, including decreasing training and operation times in high-performance applications and reducing cost, size, and energy consumption in autonomous or embedded developments. Field-programmable gate array (FPGA) hardware shows some inherent features typically associated with neural networks, such as parallel processing, modular execution, and dynamic adaptation, and works on different types of FPGA-based neural networks have been presented in recent years. This paper addresses the analysis of architectural characteristics of a Hopfield neural network implemented on an FPGA, such as maximum operating frequency and chip-area occupancy as a function of network capacity. The FPGA implementation methodology, which does not employ multipliers in the architecture developed for the Hopfield neural model, is also presented in detail. 1. Introduction For nearly 50 years, artificial neural networks (ANNs) have been applied to a wide variety of problems in engineering and scientific fields, such as function approximation, systems control, pattern recognition, and pattern retrieval [1, 2]. Most of those applications were designed using software simulations of the networks, but some recent studies have extended those computational simulations by directly implementing ANNs in hardware [3]. Although some works have reported network implementations in analog circuits [4] and in optical devices [5], most research on ANN hardware implementations has relied on digital technologies. General-purpose processors and application-specific integrated circuits (ASICs) are the two technologies usually employed in such developments. While general-purpose processors are often chosen for economic reasons, ASIC implementations provide an adequate solution for executing parallel architectures of neural networks [6]. In the last decade, however, FPGA-based neurocomputers have become a topic of strong interest due to the larger capabilities and lower costs of reprogrammable logic devices [7]. Other relevant reasons to choose FPGAs, reported in the literature, include the high performance obtained with parallel processing in hardware compared to sequential processing in software implementations [8], the reduction of power consumption in robotics or general embedded applications [9], and the maintenance of the
%U http://www.hindawi.com/journals/aans/2014/602325/
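
The abstract highlights a multiplier-free architecture for the Hopfield model, but the record itself gives no implementation details. The usual reason a Hopfield datapath can avoid multipliers is that neuron states are bipolar (+1/-1), so every product w_ij * x_j reduces to adding or subtracting the weight w_ij. The Python sketch below illustrates that property in software only; the function names, the synchronous update scheme, and the toy pattern are illustrative assumptions, not the paper's actual FPGA design.

```python
import numpy as np

def train_hebbian(patterns):
    """Hebbian weight matrix for bipolar (+1/-1) patterns, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, max_iters=100):
    """Synchronous Hopfield recall. Because states are +1 or -1, each
    weighted sum reduces to adding or subtracting weights, which is the
    property a multiplier-free hardware datapath can exploit."""
    for _ in range(max_iters):
        # Equivalent to W @ state, written as a signed accumulation of
        # weights (add w_ij when state_j = +1, subtract when state_j = -1).
        field = np.where(state > 0, W, -W).sum(axis=1)
        new_state = np.where(field >= 0, 1, -1)
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

# Usage: store one 4-neuron pattern and recover it from a corrupted probe.
patterns = np.array([[1, -1, 1, -1]])
W = train_hebbian(patterns)
print(recall(W, np.array([1, 1, 1, -1])))  # -> [ 1 -1  1 -1]
```

In a hardware implementation the same observation lets each synaptic contribution be handled by an adder/subtractor controlled by the neuron's state bit, avoiding dedicated multiplier blocks; how the paper organizes those accumulations on the FPGA is described in its methodology section, not in this excerpt.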