Modified Back Propagation Algorithm of Feed Forward Networks
Shital Solanki1, H.B.Jethva2
1Shital Solanki, Department of Information Technology, L.D. College of Engineering, Gujarat Technological University, Ahmedabad (Gujarat), India.
2H.B Jethva, Associate Professor, L.D. College of Engineering, Gujarat Technological University, Ahmedabad (Gujarat), India.
Manuscript received on 10 May 2013 | Revised Manuscript received on 18 May 2013 | Manuscript Published on 30 May 2013 | PP: 131-134 | Volume-2 Issue-6, May 2013 | Retrieval Number: F0816052613/13©BEIESP
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: The Back-propagation Neural Network (BPNN) algorithm is widely used for solving many real-time problems. It is highly suitable for problems that involve large amounts of data and for which no explicit relationship between the inputs and outputs is known. However, BPNN suffers from slow convergence and a tendency to converge to local optima. Over the years, many improvements and modifications of the BP learning algorithm have been reported to overcome these shortcomings. In this paper, a modified backpropagation algorithm (MBP) based on minimization of the sum of the squares of errors is proposed and implemented on the benchmark XOR problem. Implementation results show that MBP outperforms the standard backpropagation algorithm with respect to the number of iterations and the speed of convergence.
Keywords: Back Propagation, Convergence, Feed Forward Neural Networks, Training, Local Minima, Learning Rate, Momentum.
Scope of the Article: Algorithm Engineering
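The abstract describes training a feed-forward network by backpropagation to minimize the sum of squared errors, benchmarked on the XOR problem. The details of the proposed MBP algorithm are not given in the abstract, so the sketch below shows only the standard backpropagation baseline with momentum on XOR; the network size (2-2-1), learning rate, and momentum values are assumptions chosen for illustration, not the paper's settings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(epochs=5000, lr=0.5, momentum=0.9, seed=0):
    """Standard backpropagation with momentum on the XOR benchmark.

    Returns the per-epoch sum-of-squared-errors history.
    """
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    t = np.array([[0], [1], [1], [0]], dtype=float)

    # Weights, with the bias folded in as an extra input fixed at 1.
    W1 = rng.uniform(-1, 1, (3, 2))   # input  -> hidden
    W2 = rng.uniform(-1, 1, (3, 1))   # hidden -> output
    dW1_prev = np.zeros_like(W1)
    dW2_prev = np.zeros_like(W2)

    errors = []
    for _ in range(epochs):
        # Forward pass.
        Xb = np.hstack([X, np.ones((4, 1))])
        h = sigmoid(Xb @ W1)
        hb = np.hstack([h, np.ones((4, 1))])
        y = sigmoid(hb @ W2)

        # Sum-of-squared-errors loss: E = 0.5 * sum (t - y)^2.
        e = t - y
        errors.append(0.5 * np.sum(e ** 2))

        # Backward pass: deltas from the chain rule on sigmoid units.
        delta_out = e * y * (1 - y)                       # output layer
        delta_hid = (delta_out @ W2[:2].T) * h * (1 - h)  # hidden layer

        # Gradient-descent step with a momentum term.
        dW2 = lr * hb.T @ delta_out + momentum * dW2_prev
        dW1 = lr * Xb.T @ delta_hid + momentum * dW1_prev
        W2 += dW2
        W1 += dW1
        dW2_prev, dW1_prev = dW2, dW1

    return errors

errors = train_xor()
print(f"initial SSE: {errors[0]:.4f}, final SSE: {errors[-1]:.4f}")
```

A modified scheme such as the paper's MBP would alter the weight-update step above (e.g. how the error term drives the correction); with a fixed learning rate and momentum, the baseline typically needs thousands of epochs on XOR, which is the slow convergence the abstract addresses.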