This study describes a new pulse-mode artificial neural network (PNN) implementation based on the floating-point number format. For on-chip learning, the back-propagation algorithm is modified to operate in pulse mode for efficient hardware implementation. By using a floating-point representation for the synapse weights, the network can approximate any function, and both the convergence rate of learning and the generalization capability are improved. The proposed network is applied to digit recognition. The recognition approach is based on a set of features that are largely independent of orientation and position, the most important of which are based on Zernike moments. However, an exclusive use of Zernike moments in digit recognition dramatically increases the neural network size, since higher orders are needed to ensure the best recognition rates. Moreover, because of their geometrical invariance, Zernike moments give the same description to certain distinct digits, such as 6 and 9. We therefore make use of additional features based on structural descriptors: the number of terminating points and the terminating-location number, which is orientation dependent. This feature-based representation of the digits reduces the required Zernike order and the number of hidden layers, and adds great simplicity to the design, making on-chip learning feasible for online operation. The proposed PNN is implemented on a Virtex-II FPGA platform, and various experiments are carried out to evaluate the design.
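As a rough illustration of the rotation-invariance property discussed above (the reason digits such as 6 and 9 receive the same Zernike description), the following Python sketch computes a single Zernike moment of a binary image mapped onto the unit disk. This is not the paper's hardware implementation; the function name, the choice of order (n = 4, m = 2), and the toy image are all illustrative assumptions. It checks that the moment magnitude |Z_nm| is unchanged by a 90-degree rotation of the image.

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """Zernike moment Z_nm of a square grayscale image mapped to the unit disk.
    The magnitude |Z_nm| is rotation invariant (illustrative sketch, not the
    paper's FPGA implementation)."""
    N = img.shape[0]
    # Pixel-center coordinates scaled into [-1, 1].
    c = (2 * np.arange(N) + 1) / N - 1
    x, y = np.meshgrid(c, c)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    inside = rho <= 1.0
    # Radial polynomial R_nm(rho); n - |m| must be even and non-negative.
    R = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):
        coef = ((-1) ** s * factorial(n - s)
                / (factorial(s)
                   * factorial((n + abs(m)) // 2 - s)
                   * factorial((n - abs(m)) // 2 - s)))
        R += coef * rho ** (n - 2 * s)
    V = R * np.exp(-1j * m * theta)            # Zernike basis function
    # Discrete approximation of the continuous moment integral.
    return (n + 1) / np.pi * np.sum(img[inside] * V[inside]) * (2.0 / N) ** 2

# A crude off-center vertical bar as a stand-in for a digit stroke.
img = np.zeros((64, 64))
img[10:40, 20:30] = 1.0
z0 = abs(zernike_moment(img, 4, 2))
z1 = abs(zernike_moment(np.rot90(img), 4, 2))   # 90-degree rotation
```

Since a 90-degree rotation maps the square pixel grid exactly onto itself, `z0` and `z1` agree to floating-point precision, whereas an orientation-dependent descriptor such as the terminating-location number would distinguish the two images.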