Interconnections between electronic circuits and neural computation have been a strongly researched topic in the machine learning field, addressing several practical requirements that include decreasing training and operation times in high-performance applications and reducing cost, size, and energy consumption in autonomous or embedded developments. Field-programmable gate array (FPGA) hardware exhibits some inherent features typically associated with neural networks, such as parallel processing, modular execution, and dynamic adaptation, and several works on FPGA-based neural networks have been presented in recent years. This paper analyzes architectural characteristics of a Hopfield neural network implemented on an FPGA, such as the maximum operating frequency and the chip-area occupancy as functions of the network capacity. The FPGA implementation methodology, whose architecture for the Hopfield neural model does not employ multipliers, is also presented in detail.

1. Introduction

For nearly 50 years, artificial neural networks (ANNs) have been applied to a wide variety of problems in engineering and scientific fields, such as function approximation, systems control, pattern recognition, and pattern retrieval [1, 2]. Most of those applications were designed using software simulations of the networks, but recent studies have extended these computational simulations by implementing ANNs directly in hardware [3]. Although some works have reported network implementations in analog circuits [4] and in optical devices [5], most research on ANN hardware implementation has been developed using digital technologies. General-purpose processors and application-specific integrated circuits (ASICs) are the two technologies usually employed in such developments. While general-purpose processors are often chosen for economic reasons, ASIC implementations provide an adequate solution for executing parallel architectures of neural networks [6]. In the last decade, however, FPGA-based neurocomputers have become a topic of strong interest due to the larger capabilities and lower costs of reprogrammable logic devices [7]. Other relevant reasons to choose FPGAs, reported in the literature, include the high performance obtained with parallel processing in hardware when compared to sequential processing in software implementations [8], the reduction of power consumption in robotics or general embedded applications [9], and the maintenance of the
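As a brief illustration of the multiplier-free principle mentioned in the abstract, the Python sketch below simulates a classical discrete Hopfield model with bipolar states (+1/−1) stored by the Hebbian rule [15]. Because every state is ±1, each product w_ij · s_j in the local field reduces to a conditional addition or subtraction of w_ij, which is the property that lets a hardware design dispense with multipliers. This is a minimal software sketch under those assumptions, not the architecture developed in this paper; the function names hebbian_weights and recall_multiplier_free are illustrative.

```python
import numpy as np

def hebbian_weights(patterns):
    # Outer-product (Hebbian) storage rule with zeroed diagonal [15].
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)
    return w

def recall_multiplier_free(w, state, sweeps=10):
    # Asynchronous updates. Because states are bipolar (+1/-1),
    # w[i, j] * s[j] is just +w[i, j] or -w[i, j]: a conditional
    # add/subtract, so no multiplier is needed for the local field.
    s = state.copy()
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            h = 0.0
            for j in range(n):
                h += w[i, j] if s[j] > 0 else -w[i, j]
            s[i] = 1 if h >= 0 else -1
    return s

# Example: store one 8-bit pattern and recall it from a corrupted probe.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
w = hebbian_weights(pattern[None, :])
probe = pattern.copy()
probe[0] = -probe[0]           # flip one bit
print(recall_multiplier_free(w, probe))  # converges back to `pattern`
```

In hardware, the same observation replaces each multiply-accumulate in the local-field computation with a sign-controlled adder/subtractor, which is the kind of saving the architecture analyzed here exploits.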
References
[1] G. Cybenko, “Approximation by superpositions of a sigmoidal function,” Mathematics of Control, Signals, and Systems, vol. 2, no. 4, pp. 303–314, 1989.
[2] S. Haykin, Neural Networks and Learning Machines, Prentice Hall, 3rd edition, 2009.
[3] A. R. Omondi and J. C. Rajapakse, FPGA Implementations of Neural Networks, Springer, Dordrecht, The Netherlands, 2006.
[4] P. D. Moerland and E. Fiesler, “Neural network adaptations to hardware implementations,” in Handbook of Neural Computation, IOP Publishing and Oxford University Press, New York, NY, USA, 1997.
[5] I. F. Saxena and E. Fiesler, “Adaptive multilayer optical neural network with optical thresholding,” Optical Engineering, vol. 34, no. 8, pp. 2435–2440, 1995.
[6] H.-Y. Hsieh and K.-T. Tang, “Hardware friendly probabilistic spiking neural network with long-term and short-term plasticity,” IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 12, pp. 2063–2074, 2013.
[7] M. Papadonikolakis and C.-S. Bouganis, “Novel cascade FPGA accelerator for support vector machines classification,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 7, pp. 1040–1052, 2012.
[8] G. Borgese, C. Pace, P. Pantano, and E. Bilotta, “FPGA-based distributed computing microarchitecture for complex physical dynamics investigation,” IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 9, pp. 1390–1399, 2013.
[9] R. N. A. Prado, J. D. Melo, J. A. N. Oliveira, and A. D. Dória Neto, “FPGA based implementation of a fuzzy neural network modular architecture for embedded systems,” in Proceedings of the IEEE World Congress on Computational Intelligence, Brisbane, Australia, June 2012.
[10] B. J. Leiner, V. Q. Lorena, T. M. Cesar, and M. V. Lorenzo, “Hardware architecture for FPGA implementation of a neural network and its application in images processing,” in Proceedings of the 5th Meeting of the Electronics, Robotics and Automotive Mechanics Conference (CERMA '08), pp. 405–410, Morelos, Mexico, October 2008.
[11] M. Stepanova, F. Lin, and V. C.-L. Lin, “A Hopfield neural classifier and its FPGA implementation for identification of symmetrically structured DNA motifs,” Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology, vol. 48, no. 3, pp. 239–254, 2007.
[12] S. Saif, H. M. Abbas, S. M. Nassar, and A. A. Wahdan, “An FPGA implementation of a neural optimization of block truncation coding for image/video compression,” Microprocessors and Microsystems, vol. 31, no. 8, pp. 477–486, 2007.
[13] W. Mansour, R. Ayoubi, H. Ziade, R. Velazco, and W. El Falou, “An optimal implementation on FPGA of a Hopfield neural network,” Advances in Artificial Neural Systems, vol. 2011, Article ID 189368, 9 pages, 2011.
[14] J. J. Hopfield, “Neurons with graded response have collective computational properties like those of two-state neurons,” Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 3088–3092, 1984.
[15] J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” Proceedings of the National Academy of Sciences of the United States of America, vol. 79, no. 8, pp. 2554–2558, 1982.
[16] D. J. Amit, H. Gutfreund, and H. Sompolinsky, “Storing infinite numbers of patterns in a spin-glass model of neural networks,” Physical Review Letters, vol. 55, no. 14, pp. 1530–1533, 1985.
[17] J. Hertz, A. Krogh, and R. G. Palmer, Introduction to the Theory of Neural Computation, Santa Fe Institute Studies in the Sciences of Complexity, Lecture Notes Vol. I, Addison-Wesley, Redwood City, Calif, USA, 1991.