%0 Journal Article
%T Neural Networks on an FPGA and Hardware-Friendly Activation Functions
%A Jiong Si
%A Sarah L. Harris
%A Evangelos Yfantis
%J Journal of Computer and Communications
%P 251-277
%@ 2327-5227
%D 2020
%I Scientific Research Publishing
%R 10.4236/jcc.2020.812021
%X This paper describes our implementation of several neural networks built on a field programmable gate array (FPGA) and used to recognize a handwritten digit dataset, the Modified National Institute of Standards and Technology (MNIST) database. We also propose a novel hardware-friendly activation function called the dynamic Rectified Linear Unit (D-ReLU), which achieves higher performance than traditional activation functions at no cost to accuracy. We built a 2-layer online-training multilayer perceptron (MLP) neural network on an FPGA with varying data widths. Reducing the data width from 8 to 4 bits reduces prediction accuracy by only 11%, while FPGA area decreases by 41%. Compared to networks that use the sigmoid function, our proposed D-ReLU function uses 24% - 41% less area with no loss in prediction accuracy. When the data width of the 3-layer networks is further reduced from 8 to 4 bits, prediction accuracy decreases by only 3% - 5%, while area is reduced by 9% - 28%. Moreover, the FPGA solutions execute 29 times faster despite running at a 60× lower clock rate. Thus, FPGA implementations of neural networks offer a high-performance, low-power alternative to traditional software methods, and our novel D-ReLU activation function offers additional performance and power-saving improvements.
%K Deep Learning
%K D-ReLU
%K Dynamic ReLU
%K FPGA
%K Hardware Acceleration
%K Activation Function
%U http://www.scirp.org/journal/PaperInformation.aspx?PaperID=106317