An Efficient Constrained Learning Algorithm for Stable 2D IIR Filter Factorization

DOI: 10.1155/2013/292567

Abstract:

A constrained neural network optimization algorithm is presented for simultaneously factorizing the numerator and denominator polynomials of the transfer functions of 2-D IIR filters. The method minimizes a cost function based on the frequency response of the filters while simultaneously satisfying appropriate constraints, so that factorization is facilitated and the stability of the resulting filter is preserved.

1. Introduction

Factorization of 2-D polynomials is an important problem in the design of IIR filters for two-dimensional signal processing, because a factorized filter transfer function can be implemented efficiently in cascade form. However, the fundamental theorem of algebra, which guarantees polynomial factorization in one variable, does not extend to the two-dimensional case; that is, a bivariate polynomial cannot, in general, be factored into a product of lower-order polynomials. The 2-D IIR cascade structure is an attractive circuit design alternative because of its relative insensitivity to coefficient quantization, the smaller number of arithmetic operations it requires for a given filter size, and the fact that stability is easier to handle for filters with fewer denominator coefficients [1]. Hence, an efficient method for the factorization of 2-D IIR filters would be most beneficial.

Many methods have been proposed for the two-dimensional polynomial factorization problem [2–6]. Most of them, however, adopt a conventional numerical root-finding method (e.g., Laguerre's, Newton-Raphson's, or Jenkins-Traub's method), in which successive approximations to the roots are obtained. In addition, almost all numerical methods find the roots one by one, by deflation: the next root is obtained from the deflated polynomial after the previous root has been found. This means that numerical root-finding methods are inherently sequential. In earlier work we proposed a neural-network-based Constrained Learning Algorithm (CLA) for factorizing 2-D polynomials using constrained optimization techniques [7]. With that approach we were able to obtain exact solutions for factorable polynomials and excellent approximate solutions for nonfactorable polynomials. The technique offers the advantage of incorporating a priori information about the relations between the coefficients of the original polynomial and the coefficients of the desired factor polynomials. By incorporating additional stability constraints into the formalism, the
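As a rough illustration of the kind of frequency-response optimization involved, and not the constrained learning algorithm of [7] or of this paper, the Python sketch below evaluates the response of a cascade of 2-D IIR sections and forms a squared-error cost with a simple soft penalty standing in for the stability constraints. All names (poly2d_freq, cascade_response, cost), the penalty form, and the margin threshold are illustrative assumptions.

import numpy as np

def poly2d_freq(coeffs, w1, w2):
    """Evaluate a 2-D polynomial sum_{m,n} c[m, n] * z1**(-m) * z2**(-n)
    on the frequency grid w1 x w2."""
    M, N = coeffs.shape
    z1 = np.exp(-1j * np.outer(w1, np.arange(M)))   # shape (len(w1), M)
    z2 = np.exp(-1j * np.outer(w2, np.arange(N)))   # shape (len(w2), N)
    return z1 @ coeffs @ z2.T                        # shape (len(w1), len(w2))

def cascade_response(sections, w1, w2):
    """Frequency response of a cascade of 2-D IIR sections (num, den)."""
    H = np.ones((len(w1), len(w2)), dtype=complex)
    for num, den in sections:
        H *= poly2d_freq(num, w1, w2) / poly2d_freq(den, w1, w2)
    return H

def cost(sections, H_target, w1, w2, rho=10.0, margin=0.05):
    """Squared frequency-response error plus a soft stability penalty.

    Each denominator factor whose magnitude on the unit bicircle drops
    below `margin` is penalized; rho trades the penalty off against the
    fitting error. This is only a crude surrogate for proper 2-D
    stability conditions (cf. [8])."""
    H = cascade_response(sections, w1, w2)
    fit = float(np.sum(np.abs(H - H_target) ** 2))
    penalty = 0.0
    for _, den in sections:
        den_min = np.min(np.abs(poly2d_freq(den, w1, w2)))
        penalty += max(0.0, margin - den_min) ** 2
    return fit + rho * penalty

if __name__ == "__main__":
    # Toy check: the cost is zero when the candidate sections reproduce
    # the target response exactly, and positive otherwise.
    w1 = np.linspace(-np.pi, np.pi, 32)
    w2 = np.linspace(-np.pi, np.pi, 32)
    target_sections = [(np.array([[1.0, 0.5], [0.3, 0.1]]),
                        np.array([[1.0, -0.2], [-0.3, 0.05]]))]
    H_target = cascade_response(target_sections, w1, w2)
    guess = [(np.eye(2), np.eye(2))]
    print("cost at target:", cost(target_sections, H_target, w1, w2))
    print("cost at guess: ", cost(guess, H_target, w1, w2))

In a penalty formulation such as this, the section coefficients would be adjusted by a generic optimizer over the cost; the CLA of [7] instead enforces the coefficient relations and stability requirements as explicit constraints during learning.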

References

[1]  J. S. Lim, Two-Dimensional Signal and Image Processing, Prentice-Hall International, 1990.
[2]  Z. Mou-Yan and R. Unbehauen, “On the approximate factorization of 2-D polynomials,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 35, no. 4, pp. 577–579, 1987.
[3]  N. E. Mastorakis, N. J. Theodorou, and S. G. Tzafestas, “A general factorization method for multivariable polynomials,” Multidimensional Systems and Signal Processing, vol. 5, no. 2, pp. 151–178, 1994.
[4]  P. Misra and R. V. Patel, “Simple factorizability of 2-D polynomials,” in Proceedings of the International Symposium on Circuits and Systems, pp. 1207–1210, New Orleans, La, USA, 1990.
[5]  M. Lang and B.-C. Frenzel, “Polynomial root finding,” IEEE Signal Processing Letters, vol. 1, no. 10, pp. 141–143, 1994.
[6]  L. Hoteit, “FFT-based fast polynomial rooting,” in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP'00), vol. 6, pp. 3315–3318, Istanbul, Turkey, June 2000.
[7]  S. Perantonis, N. Ampazis, S. Varoufakis, and G. Antoniou, “Constrained learning in neural networks: application to stable factorization of 2-D polynomials,” Neural Processing Letters, vol. 7, no. 1, pp. 5–14, 1998.
[8]  T. S. Huang, “Stability of two-dimensional recursive filters,” IEEE Transactions on Audio and Electroacoustics, vol. 20, no. 2, pp. 158–163, 1972.
[9]  D. S. Huang, H. H. S. Ip, and Z. Chi, “A neural root finder of polynomials based on root moments,” Neural Computation, vol. 16, no. 8, pp. 1721–1762, 2004.
[10]  D. S. Huang, H. H. S. Ip, K. C. K. Law, Z. Chi, and H. S. Wong, “A new partitioning neural network model for recursively finding arbitrary roots of higher order arbitrary polynomials,” Applied Mathematics and Computation, vol. 162, no. 3, pp. 1183–1200, 2005.
[11]  D. S. Huang, “A constructive approach for finding arbitrary roots of polynomials by neural networks,” IEEE Transactions on Neural Networks, vol. 15, no. 2, pp. 477–491, 2004.
[12]  R. Hormis, G. Antoniou, and S. Mentzelopoulou, “Separation of two-dimensional polynomials via a sigma-pi neural net,” in Proceedings of the International Conference on Modelling and Simulation, pp. 304–306, Pittsburgh, Pa, USA, 1995.
