Many recent computational photography techniques help overcome the inability of standard digital cameras to capture the wide dynamic range of real-world scenes, which contain both brightly and poorly illuminated areas. In many of these techniques it is desirable to fuse details from images captured at different exposure settings while avoiding visual artifacts. In this paper we propose a novel technique for exposure fusion in which a Weighted Least Squares (WLS) optimization framework is used for weight map refinement. Computationally simple texture features (i.e., a detail layer extracted with an edge-preserving filter) and a color saturation measure are used to quickly generate weight maps that control the contribution of each image in an input set of multiexposure images. Instead of employing intermediate High Dynamic Range (HDR) reconstruction and tone mapping steps, a well-exposed fused image is generated directly for display on conventional display devices. A further advantage of the present technique is that it is well suited to multifocus image fusion. Simulation results are compared with a number of existing single-resolution and multiresolution techniques to show the benefits of the proposed scheme for a variety of cases.

1. Introduction

In recent years, several new techniques have been developed that can precisely represent the complete information in the shadows and highlights of real-world natural scenes. The direct 8-bit grayscale and 24-bit RGB representation of visual data produced by a standard digital camera at a single exposure setting often loses scene information, because the dynamic range of most scenes exceeds what such cameras can capture. This representation is referred to as a low dynamic range (LDR) image. Digital cameras provide the aperture setting, exposure time, and ISO value to regulate the amount of light reaching the sensor; the exposure setting must therefore be chosen to make proper use of the charge capacity of the Charge Coupled Device (CCD). In modern digital cameras, Auto Exposure Bracketing (AEB) allows all the images to be taken without touching the camera between exposures, provided the camera is on a tripod and a cable release is used. Handling the camera between exposures increases the chance of misalignment, resulting in an image that is not sharp or that exhibits ghosting. However, most scenes can be captured perfectly with nine exposures [1], whereas many more are within reach of a camera that allows 5–7 exposures to be taken.
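To make the weight-map construction described above concrete, the sketch below fuses a set of aligned multiexposure images using a detail layer and a color saturation measure, in the spirit of the proposed approach. It is a minimal illustration under stated assumptions: the function fuse_exposures and its parameters are hypothetical, and plain Gaussian smoothing stands in both for the edge-preserving filter that extracts the base layer and for the WLS-based weight map refinement of the actual method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_exposures(images, detail_sigma=2.0, refine_sigma=10.0, eps=1e-12):
    """Fuse aligned multiexposure RGB images (float arrays in [0, 1])."""
    weights = []
    for img in images:
        gray = img.mean(axis=2)
        # Detail layer: residual between the image and a smoothed base layer
        # (a stand-in for the edge-preserving filter used in the paper).
        detail = np.abs(gray - gaussian_filter(gray, detail_sigma))
        # Color saturation: per-pixel standard deviation across RGB channels.
        saturation = img.std(axis=2)
        # Raw weight map from texture and saturation cues; eps avoids an
        # all-zero map in flat, gray regions.
        w = detail * saturation + eps
        # Weight-map refinement: the paper solves a WLS optimization here;
        # plain Gaussian smoothing is only a rough placeholder for it.
        weights.append(gaussian_filter(w, refine_sigma))
    weights = np.stack(weights)              # shape (K, H, W)
    weights = weights / weights.sum(axis=0)  # per-pixel normalization
    return np.einsum('khw,khwc->hwc', weights, np.stack(images))
```

In the full method, the placeholder Gaussian steps would be replaced by an edge-preserving decomposition and WLS optimization [20], which refine the weight maps without smoothing them across strong edges and thereby avoid halo artifacts near object boundaries.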
References
[1] E. Reinhard, G. Ward, S. Pattanaik, and P. Debevec, High Dynamic Range Imaging: Acquisition, Manipulation, and Display, Morgan Kaufmann, 2005.
[2] P. E. Debevec and J. Malik, “Recovering high dynamic range radiance maps from photographs,” in Proceedings of the Conference on Computer Graphics (SIGGRAPH '97), pp. 369–378, August 1997.
[3] S. Mann and R. W. Picard, “Being ‘undigital’ with digital cameras: extending dynamic range by combining differently exposed pictures,” in Proceedings of IS&T's 48th Annual Conference, pp. 442–448, May 1995.
[4] K. Jacobs, C. Loscos, and G. Ward, “Automatic high-dynamic range image generation for dynamic scenes,” IEEE Computer Graphics and Applications, vol. 28, no. 2, pp. 84–93, 2008.
[5] G. Ward, “Fast, robust image registration for compositing high dynamic range photographs from hand-held exposures,” Journal of Graphics Tools, vol. 8, no. 2, pp. 17–30, 2003.
[6] A. Tomaszewska and R. Mantiuk, “Image registration for multi-exposure high dynamic range image acquisition,” in Proceedings of the International Conference on Computer Graphics, Visualization and Computer Vision, Plzen, Czech Republic, 2007.
[7] E. Reinhard, M. Stark, P. Shirley, and J. Ferwerda, “Photographic tone reproduction for digital images,” ACM Transactions on Graphics, vol. 21, no. 3, pp. 267–276, 2002.
[8] Y. Li, L. Sharan, and E. H. Adelson, “Compressing and companding high dynamic range images with subband architectures,” ACM Transactions on Graphics, vol. 24, no. 3, pp. 836–844, 2005.
[9] R. Fattal, D. Lischinski, and M. Werman, “Gradient domain high dynamic range compression,” ACM Transactions on Graphics, vol. 21, no. 3, pp. 249–256, 2002.
[10] F. Durand and J. Dorsey, “Fast bilateral filtering for the display of high dynamic range images,” ACM Transactions on Graphics, vol. 21, no. 3, pp. 257–266, 2002.
[11] G. W. Larson, H. Rushmeier, and C. Piatko, “A visibility matching tone reproduction operator for high dynamic range scenes,” IEEE Transactions on Visualization and Computer Graphics, vol. 3, no. 4, pp. 291–306, 1997.
[12] F. Drago, K. Myszkowski, T. Annen, and N. Chiba, “Adaptive logarithmic mapping for displaying high contrast scenes,” Computer Graphics Forum, vol. 22, no. 3, pp. 419–426, 2003.
[13] E. Reinhard and K. Devlin, “Dynamic range reduction inspired by photoreceptor physiology,” IEEE Transactions on Visualization and Computer Graphics, vol. 11, no. 1, pp. 13–24, 2005.
[14] H. Seetzen, W. Heidrich, W. Stuerzlinger, et al., “High dynamic range display systems,” ACM Transactions on Graphics, vol. 23, no. 3, pp. 760–768, 2004.
[15] H. Seetzen, L. A. Whitehead, and G. Ward, “A high dynamic range display using low and high resolution modulators,” in Society for Information Display International Symposium Digest of Technical Papers, vol. 34, no. 1, pp. 1450–1453, 2003.
[16] K. Kotwal and S. Chaudhuri, “An optimization-based approach to fusion of multi-exposure, low dynamic range images,” in Proceedings of the 14th International Conference on Information Fusion, Chicago, Ill, USA, July 2011.
[17] S. Raman and S. Chaudhuri, “Bilateral filter based compositing for variable exposure photography,” in Proceedings of Eurographics, Munich, Germany, 2009.
[18] T. Mertens, J. Kautz, and F. Van Reeth, “Exposure fusion: a simple and practical alternative to high dynamic range photography,” Computer Graphics Forum, vol. 28, no. 1, pp. 161–171, 2009.
[19] J. Kuang, G. M. Johnson, and M. D. Fairchild, “iCAM06: a refined image appearance model for HDR image rendering,” Journal of Visual Communication and Image Representation, vol. 18, no. 5, pp. 406–414, 2007.
[20] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, “Edge-preserving decompositions for multi-scale tone and detail manipulation,” ACM Transactions on Graphics, vol. 27, no. 3, article 67, 2008.
[21] R. Shen, I. Cheng, J. Shi, and A. Basu, “Generalized random walks for fusion of multi-exposure images,” IEEE Transactions on Image Processing, vol. 20, no. 12, pp. 3634–3646, 2011.
[22] T. O. Aydin, R. Mantiuk, K. Myszkowski, and H.-P. Seidel, “Dynamic range independent image quality assessment,” ACM Transactions on Graphics, vol. 27, no. 3, article 69, 2008.
[23] J. H. Adu and M. Wang, “Multi-focus image fusion based on WNMF and focal point analysis,” Journal of Convergence Information Technology, vol. 6, no. 7, pp. 109–117, 2011.
[24] P. M. de Zeeuw, “Wavelet and image fusion,” CWI, Amsterdam, The Netherlands, March 1998, http://www.cwi.nl/~pauldz/.
[25] J. Tian, L. Chen, L. Ma, and W. Yu, “Multi-focus image fusion using a bilateral gradient-based sharpness criterion,” Optics Communications, vol. 284, no. 1, pp. 80–87, 2011.
[26] J. Shen, Y. Zhao, and Y. He, “Detail-preserving exposure fusion using subband architecture,” The Visual Computer, vol. 28, no. 5, pp. 463–473, 2012.
[27] P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.
[28] H. Singh, V. Kumar, and S. Bhooshan, “Anisotropic diffusion for details enhancement in multiexposure image fusion,” ISRN Signal Processing, vol. 2013, Article ID 928971, 18 pages, 2013.
[29] L. G. Shapiro and G. C. Stockman, Computer Vision, Prentice-Hall, Upper Saddle River, NJ, USA, 2001.
[30] K. Laws, Textured image segmentation [Ph.D. dissertation], University of Southern California, 1980.
[31] G. Qiu and J. Duan, “An optimal tone reproduction curve operator for the display of high dynamic range images,” in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '05), pp. 6276–6279, May 2005.
[32] M. H. Kim, T. Weyrich, and J. Kautz, “Modeling human color perception under extended luminance levels,” ACM Transactions on Graphics, vol. 28, no. 3, article 27, 2009.
[33] J. Tumblin, J. K. Hodgins, and B. K. Guenter, “Two methods for display of high contrast images,” ACM Transactions on Graphics, vol. 18, no. 1, pp. 56–94, 1999.
[34] A. A. Goshtasby, 2-D and 3-D Image Registration for Medical, Remote Sensing, and Industrial Applications, Wiley-Interscience, 2005.
[35] G. Ward, “Fast, robust image registration for compositing high dynamic range photographs from hand-held exposures,” Journal of Graphics Tools, vol. 8, no. 2, pp. 17–30, 2003.
[36] J. M. Ogden, E. H. Adelson, J. R. Bergen, and P. J. Burt, “Pyramid based computer graphics,” RCA Engineer, vol. 30, no. 5, pp. 4–15, 1985.
[37] P. J. Burt and E. H. Adelson, “The Laplacian pyramid as a compact image code,” IEEE Transactions on Communications, vol. 31, no. 4, pp. 532–540, 1983.
[38] A. Agrawal, R. Raskar, S. K. Nayar, and Y. Li, “Removing photography artifacts using gradient projection and flash exposure sampling,” ACM Transactions on Graphics, vol. 24, no. 3, pp. 828–835, 2005.
[39] G. Petschnigg, R. Szeliski, M. Agrawala, M. F. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Transactions on Graphics, vol. 23, no. 3, pp. 664–672, 2004.
[40] S. T. Li and B. Yang, “Multifocus image fusion using region segmentation and spatial frequency,” Image and Vision Computing, vol. 26, no. 7, pp. 971–979, 2008.
[41] A. A. Goshtasby, “Fusion of multi-exposure images,” Image and Vision Computing, vol. 23, no. 6, pp. 611–618, 2005.
[42] R. Szeliski, “System and process for improving the uniformity of the exposure and tone of a digital image,” U.S. Patent 6,687,400, 2004.
[43] S. Li, X. Kang, and J. Hu, “Image fusion with guided filtering,” IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2864–2875, 2013.
[44] S. Li and X. Kang, “Fast multi-exposure image fusion with median filter and recursive filter,” IEEE Transactions on Consumer Electronics, vol. 58, no. 2, 2012.
[45] C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of the IEEE 6th International Conference on Computer Vision, pp. 839–846, January 1998.
[46] D. Barash, “A fundamental relationship between bilateral filtering, adaptive smoothing, and the nonlinear diffusion equation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 6, pp. 844–847, 2002.
[47] D. Lischinski, Z. Farbman, M. Uyttendaele, and R. Szeliski, “Interactive local adjustment of tonal values,” ACM Transactions on Graphics, vol. 25, no. 3, pp. 646–653, 2006.
[48] C. S. Xydeas and V. Petrović, “Objective image fusion performance measure,” Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.
[49] Y. Han, Y. Cai, Y. Cao, and X. Xu, “A new image fusion performance metric based on visual information fidelity,” Information Fusion, vol. 14, no. 2, pp. 127–135, 2013.
[50] G. Qu, D. Zhang, and P. Yan, “Information measure for performance of image fusion,” Electronics Letters, vol. 38, no. 7, pp. 313–315, 2002.
[51] R. Sakuldee and S. Udomhunsakul, “Objective performance of compressed image quality assessments,” in Proceedings of World Academy of Science, Engineering and Technology, vol. 26, pp. 434–443, 2007.