
On the Brittleness of Handwritten Digit Recognition Models

DOI: 10.5402/2012/834127


Abstract:

Handwritten digit recognition is an important benchmark task in computer vision. Learning algorithms and feature representations which offer excellent performance for this task have been known for some time. Here, we focus on two major practical considerations: the relationship between the amount of training data and the error rate (corresponding to the effort of collecting training data to build a model with a given maximum error rate), and the transferability of a model's expertise between different datasets (corresponding to its usefulness for general handwritten digit recognition). While the relationship between the amount of training data and the error rate is very stable and to some extent independent of the specific dataset used (only the classifier and feature representation have a significant effect), it has proven impossible to transfer low error rates on one or two pooled datasets to similarly low error rates on another dataset. We call this weakness brittleness, borrowing an old Artificial Intelligence term with the same meaning. It may be a general weakness of trained image classification systems.

1. Introduction

Intelligent image analysis is an interesting research area in Artificial Intelligence and is also important to a variety of current open research problems. Handwritten digit recognition is a well-researched subarea within this field, concerned with learning models to distinguish presegmented handwritten digits. The application of machine learning techniques over the last decade has proven successful in building systems which are competitive with human performance and which perform far better than the manually written classical AI systems used in the early days of optical character recognition technology. However, not all aspects of such models have been previously investigated. Here, we systematically investigate two new aspects of such systems (see the sketch after this list).

(i) Essential training set size, that is, the relation between training set size and accuracy/error rate, so as to determine the number of labeled training samples essential for a given performance level. Creating labeled training samples is costly, and we are generally interested in algorithms which yield acceptable performance with the fewest labeled training samples.

(ii) Dataset independence, that is, how well models trained on one sample dataset for handwritten digit recognition perform on other sample datasets for handwritten digit recognition after comprehensive normalization between the datasets. Models should be robust to small changes in preprocessing and
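
To make the two questions above concrete, the following minimal sketch shows how a learning curve over training-set size and a crude cross-dataset style check could be set up. Python with scikit-learn, the bundled digits data, the SVM classifier, and the training-set sizes are all illustrative assumptions, not the setup used in the paper.

# Minimal, illustrative sketch; not the paper's experimental setup.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# A single bundled corpus stands in for one handwritten digit dataset.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

# (i) Essential training set size: error rate as a function of the number of
# labeled training samples, with classifier and feature representation fixed.
for n in (100, 200, 400, 800, len(X_train)):
    clf = SVC(gamma=0.001)  # classifier choice is an illustrative assumption
    clf.fit(X_train[:n], y_train[:n])
    print(f"n={n:4d}  error rate={1.0 - clf.score(X_test, y_test):.3f}")

# (ii) Dataset independence: train on one normalized dataset and evaluate on
# another. Lacking a second corpus here, the held-out split acts as a proxy;
# in the paper this would be a different digit dataset after comprehensive
# normalization between the datasets.
scaler = StandardScaler().fit(X_train)
clf = SVC().fit(scaler.transform(X_train), y_train)
err = 1.0 - clf.score(scaler.transform(X_test), y_test)
print(f"cross-dataset error rate (proxy): {err:.3f}")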

