• Tomislav Kartalov
  • Zoran Ivanovski


Abstract: This paper presents a fully automated, computationally inexpensive, high-quality exposure fusion algorithm intended for use on mobile and handheld devices. Utilization of the device's viewfinder video feed is proposed in order to increase the overall performance of the exposure fusion, both in static scenes and in scenes with moving objects. Several novel ideas are implemented to make the whole procedure fully automated, working without any intervention or parameter adjustment by the end-user. The experimental tests performed show efficient operation and high-quality results, in both visual and objective terms.

Key words: exposure fusion; computational efficiency; mobile platform; image decomposition; motion estimation
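As a concrete illustration of the exposure-fusion principle the abstract refers to (blending differently exposed shots of the same scene using per-pixel quality weights), the following minimal Python sketch applies a Gaussian "well-exposedness" weight that favours mid-range pixel values. The weight function, the choice of sigma = 0.2, and the flat per-pixel blend are illustrative assumptions only; the paper's actual algorithm is pyramid-based and fully automated, which a real implementation would need to reproduce.

```python
import math

def well_exposedness(p, sigma=0.2):
    """Gaussian weight favouring mid-range pixel values (p in [0, 1])."""
    return math.exp(-((p - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(exposures):
    """Naive per-pixel weighted average of differently exposed images.

    `exposures` is a list of equally sized grayscale images given as
    nested lists of floats in [0, 1]. A practical implementation would
    blend in a Laplacian-pyramid domain to avoid visible seams; this
    flat version only illustrates the weighting principle.
    """
    rows, cols = len(exposures[0]), len(exposures[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            weights = [well_exposedness(img[r][c]) for img in exposures]
            total = sum(weights) or 1.0
            fused[r][c] = sum(w * img[r][c]
                              for w, img in zip(weights, exposures)) / total
    return fused

# Two toy 1x3 "exposures": one under-exposed, one over-exposed.
under = [[0.05, 0.40, 0.10]]
over  = [[0.60, 0.95, 0.90]]
result = fuse([under, over])
```

For the middle pixel, the under-exposed value 0.40 sits near the centre of the range and therefore dominates the blend over the clipped 0.95 value, which is the intended behaviour of the weighting.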


Jul 5, 2017
How to Cite
KARTALOV, Tomislav; IVANOVSKI, Zoran. AUTOMATED AND COMPUTATIONALLY INEXPENSIVE EXPOSURE FUSION FOR MOBILE DEVICES. Journal of Electrical Engineering and Information Technologies - JEEIT, [S.l.], v. 2, n. 1, pp. 33–48, July 2017. ISSN 2545-4269. Available at: <>. Date accessed: 24 Apr. 2018.