Multi-Focus Fused Image using Inception-Resnet V2
M. Sobhitha1, C. Shoba Bindu2, E. Sudheer Kumar3

1M. Sobhitha, Department of CSE, JNTUA, Ananthapuramu, Andhra Pradesh, India.
2C. Shoba Bindu, Department of CSE, JNTUA, Ananthapuramu, Andhra Pradesh, India.
3E. Sudheer Kumar, Department of CSE, JNTUA, Ananthapuramu, Andhra Pradesh, India.

Manuscript received on 05 July 2019 | Revised Manuscript received on 09 July 2019 | Manuscript published on 30 August 2019 | PP: 1095-1099 | Volume-8 Issue-10, August 2019 | Retrieval Number: I8755078919/2019©BEIESP | DOI: 10.35940/ijitee.I8755.0881019
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Multi-focus image fusion is the process of integrating pictures of the same scene, each focused on different targets, into a single image. Because directly capturing an all-in-focus image of a 3D scene is challenging, many multi-focus image fusion techniques generate it from several images focused at diverse depths. Two factors are central to image fusion: the activity-level measurement and the fusion rule. Conventional methods implement the activity-level measurement with hand-crafted local filters that extract high-frequency detail, and then apply elaborately designed fusion rules to the clarity information of the different source images to obtain a clarity/focus map. However, the performance of these prevailing techniques is hardly adequate. Convolutional neural networks have recently been used to solve the multi-focus image fusion problem. In this paper, a two-stage boundary-aware approach based on a deep neural network is proposed to address the issue: (1) a deep network is suggested for extracting the entire defocus information of the two source images, and (2) Inception-ResNet v2 is used to handle patches both far from and close to the focused/defocused boundary. The results illustrate that the approach specified in this paper produces an agreeable fused image, which is superior to some of the advanced fusion algorithms in both visual and objective evaluations.
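The activity-level measurement and "choose-max" fusion rule mentioned in the abstract can be sketched with a classical hand-crafted focus measure. The snippet below is a minimal illustration, not the paper's learned method: it uses Laplacian energy aggregated over a local window as the activity level (a common stand-in for the CNN-based focus measure), then selects, per pixel, the source image with the higher activity. Function names and the window size are illustrative assumptions.

```python
import numpy as np

def activity_level(img, ksize=7):
    """Classical activity-level measure: local sum of squared Laplacian
    responses (high-frequency energy) in a ksize x ksize window.
    This stands in for the learned focus measure; it is NOT the paper's CNN."""
    # 4-neighbour Laplacian via circular shifts (edges wrap, fine for a demo)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    energy = lap ** 2
    # aggregate energy over a local window (box filter by summing shifts)
    pad = ksize // 2
    padded = np.pad(energy, pad, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(ksize):
        for dx in range(ksize):
            out += padded[dy:dy + h, dx:dx + w]
    return out

def fuse(img_a, img_b, ksize=7):
    """'Choose-max' fusion rule: at each pixel, keep the source image
    whose local activity level (focus measure) is higher."""
    focus_map = activity_level(img_a, ksize) >= activity_level(img_b, ksize)
    return np.where(focus_map, img_a, img_b)

# Demo: two synthetic images of the same scene, each sharp in one half.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))                       # textured "all-in-focus" scene
blur = sum(np.roll(np.roll(sharp, dy, 0), dx, 1)   # crude 3x3 box blur
           for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
img_a = sharp.copy(); img_a[:, 32:] = blur[:, 32:]  # left half in focus
img_b = blur.copy();  img_b[:, 32:] = sharp[:, 32:] # right half in focus
fused = fuse(img_a, img_b)
```

Away from the focused/defocused boundary this rule recovers the sharp content of each half; the pixels near the boundary are exactly where such hand-crafted rules fail, which motivates the boundary-aware network in the paper.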
Keywords: Multi-Focus, Image Fusion, Fused Image, Convolutional Neural Network.
Scope of the Article: Signal and Image Processing