Global atmospheric heat exchanges depend strongly on variations in cloud types and amounts. A better understanding of these exchanges therefore requires an appropriate cloud-type classification method. The present study proposes an alternative to the commonly used classifications based on cloud optical and thermodynamic properties. The proposed approach applies edge-detection techniques to cloud top temperature (CTT) fields derived from global satellite maps. The gradient map obtained through these techniques is then used to distinguish cloud types. The edge-detection techniques rely on the idea that a pixel's neighborhood carries information about its intensity. The variation of this intensity (the gradient) makes it possible to decompose the image into distinct cloud morphological features: high-gradient areas correspond to cumulus-like clouds, while low-gradient areas are associated with stratus-like clouds. The resulting cloud classification is then evaluated against a common classification method based on variations in cloud optical properties. The two approaches agree relatively well; the best matches are observed for high-gradient clouds and the worst for low-gradient clouds.

1. Introduction

The present study is motivated by the upcoming launch of a new polar-orbiting satellite, the Global Change Observation Mission-Climate (GCOM-C), carrying a visible and thermal infrared sensor, the Second-Generation Global Imager (SGLI). The objectives of this mission include reducing the uncertainty in the Earth's radiation budget. One of the major factors affecting this uncertainty is the change in cloud-type amounts [1]. Quantifying such changes requires a cloud-type classification. The availability of multiple satellite sensor channels provides good opportunities for such classifications.
In cloud remote sensing, the channels most frequently used for cloud classification lie in the visible and infrared bands. In the visible bands, one of the most common classifications distinguishes cloud types by the primary cloud property, the cloud optical depth ([2]; Rossow et al., 2003). In the thermal infrared channels, classifications often use cloud thermodynamic properties derived from split-window channels [3]. The present study proposes an approach different from these common classifications, based on the contrast of the cloud top structure. For its
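The gradient-based principle described above can be illustrated with a short sketch. This is not the authors' actual processing chain, only a minimal example assuming the CTT field is available as a 2-D NumPy array (the function names, the Sobel operator choice, and the threshold value are all assumptions for illustration): it computes a gradient-magnitude map and thresholds it into high-gradient (cumulus-like) and low-gradient (stratus-like) pixels.

```python
import numpy as np

def sobel_gradient(ctt):
    """Gradient magnitude of a 2-D cloud-top-temperature field.

    Each pixel's 3x3 neighborhood is convolved with the Sobel kernels,
    so the result reflects the local intensity variation (the gradient).
    Border pixels are left at zero for simplicity.
    """
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal gradient kernel
    ky = kx.T                                 # vertical gradient kernel
    h, w = ctt.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = ctt[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * win)
            gy[i, j] = np.sum(ky * win)
    return np.hypot(gx, gy)  # gradient magnitude

def classify_by_gradient(ctt, threshold):
    """Label pixels 1 (high gradient, cumulus-like) or 0 (low gradient, stratus-like)."""
    return (sobel_gradient(ctt) > threshold).astype(int)

# Toy example: a flat (stratus-like) region next to a sharp CTT step.
ctt = np.zeros((5, 5))
ctt[:, 3:] = 10.0                       # abrupt 10 K jump at column 3
labels = classify_by_gradient(ctt, threshold=20.0)
```

In this toy field, pixels along the temperature step receive a large gradient magnitude and are labeled 1, while pixels inside the flat region stay at 0, mirroring the cumulus-like versus stratus-like distinction.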
References
[1] J. R. Dim, H. Murakami, T. Y. Nakajima, B. Nordell, A. K. Heidinger, and T. Takamura, "The recent state of the climate: driving components of cloud-type variability," Journal of Geophysical Research D, vol. 116, no. 11, Article ID D11117, 2011.
[2] W. B. Rossow and R. A. Schiffer, "Advances in understanding clouds from ISCCP," Bulletin of the American Meteorological Society, vol. 80, no. 11, pp. 2261–2287, 1999.
[3] T. Inoue, "A cloud type classification with NOAA 7 split-window measurements," Journal of Geophysical Research, vol. 92, no. 4, pp. 3991–4000, 1987.
[4] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), vol. 1, pp. 886–893, June 2005.
[5] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), vol. 1, pp. 886–893, June 2005.
[6] D. Sen and S. K. Pal, "Histogram thresholding using fuzzy and rough measures of association error," IEEE Transactions on Image Processing, vol. 18, no. 4, pp. 879–888, 2009.
[7] S. M. Smith and J. M. Brady, "SUSAN—a new approach to low level image processing," International Journal of Computer Vision, vol. 23, no. 1, pp. 45–78, 1997.
[8] A. Shashua, Y. Gdalyahu, and G. Hayun, "Pedestrian detection for driving assistance systems: single-frame classification and system level performance," in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 1–6, June 2004.
[9] F. Suard, A. Rakotomamonjy, A. Bensrhair, and A. Broggi, "Pedestrian detection using infrared images and histograms of oriented gradients," in Proceedings of the IEEE Intelligent Vehicles Symposium (IV '06), pp. 206–212, June 2006.
[10] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986.
[11] I. Sobel, "An isotropic 3×3 gradient operator," in Machine Vision for Three-Dimensional Scenes, H. Freeman, Ed., pp. 376–379, Academic Press, New York, NY, USA, 1990.
[12] J. M. S. Prewitt, "Object enhancement and extraction," in Picture Processing and Psychopictorics, Academic Press, 1970.
[13] L. Roberts, Machine Perception of 3-D Solids, Optical and Electro-Optical Information Processing, MIT Press, 1965.
[14] C. Harris and M. Stephens, "A combined corner and edge detector," in Proceedings of the 4th Alvey Vision Conference, pp. 147–151, Manchester, UK, 1988.
[15] A. Noble, Descriptions of image surfaces [Ph.D. thesis], Department of Engineering Science, Oxford University, 1989.
[16] W. B. Rossow, A. W. Walker, and L. C. Garder, "Comparison of ISCCP and other cloud amounts," Journal of Climate, vol. 6, no. 12, pp. 2394–2418, 1993.