Guided Convergence for Training Feed-forward Neural Network using Novel Gravitational Search Optimization

Cited: 0
|
Authors
Saha, Sankhadip [1 ]
Chakraborty, Dwaipayan [2 ]
Dutta, Oindrilla [1 ]
Institutions
[1] Netaji Subhash Engn Coll, Dept Elect Engn, Kolkata, India
[2] Netaji Subhash Engn Coll, Dept Elect & Instru Engn, Kolkata, India
Source
2014 INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE COMPUTING AND APPLICATIONS (ICHPCA) | 2014
Keywords
Meta-heuristic; optimization; GSA; feed-forward neural network; local minima; ALGORITHM; BACKPROPAGATION;
DOI
Not available
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
Training of feed-forward neural networks using stochastic optimization techniques has recently gained importance in various pattern recognition and data mining applications because of its capability of escaping the local minima trap. However, such techniques may suffer from slow and poor convergence. This fact inspires us to work on a meta-heuristic optimization technique for training the neural network. In this respect, we focus on implementing the gravitational search algorithm (GSA), which is based on Newton's laws of motion and the interaction of masses, to train the neural network. GSA has a good ability to search for the global optimum, but it may suffer from slow searching speed in the final iterations. Our work is directed towards smart convergence by modifying the original GSA and also guiding the algorithm to make it immune to the local minima trap. Results on various benchmark datasets prove the robustness of the modified algorithm.
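The GSA training scheme summarized in the abstract (agent masses derived from fitness, a decaying gravitational constant, Newton-style attraction between candidate weight vectors) can be sketched as below. This is a minimal illustration only, not the authors' modified algorithm: the tiny XOR network, the parameter values `G0=100` and `alpha=20`, and the omission of the original GSA's Kbest elitism are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny feed-forward net: 2 inputs -> 3 hidden (tanh) -> 1 output (sigmoid).
# Each GSA agent is one flattened weight/bias vector of this network.
N_IN, N_HID, N_OUT = 2, 3, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # 13 parameters

X_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y_data = np.array([[0], [1], [1], [0]], float)  # XOR truth table

def unpack(w):
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    return W1, b1, W2, b2

def mse(w):
    """Fitness of an agent = mean squared error of the encoded network."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X_data @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return np.mean((out - y_data) ** 2)

def gsa_train(n_agents=30, iters=200, G0=100.0, alpha=20.0, eps=1e-9):
    pos = rng.uniform(-1, 1, (n_agents, DIM))
    vel = np.zeros_like(pos)
    best_w, best_f = None, np.inf
    for t in range(iters):
        fit = np.array([mse(p) for p in pos])
        if fit.min() < best_f:                      # keep best-so-far weights
            best_f = fit.min()
            best_w = pos[fit.argmin()].copy()
        worst, best = fit.max(), fit.min()
        m = (worst - fit) / (worst - best + eps)     # lower error -> heavier mass
        M = m / (m.sum() + eps)
        G = G0 * np.exp(-alpha * t / iters)          # gravitational "constant" decays
        acc = np.zeros_like(pos)
        for i in range(n_agents):
            for j in range(n_agents):
                if i == j:
                    continue
                diff = pos[j] - pos[i]
                R = np.linalg.norm(diff)
                # a_i = sum_j rand * G * M_j * (x_j - x_i) / (R + eps)
                acc[i] += rng.random() * G * M[j] * diff / (R + eps)
        vel = rng.random(pos.shape) * vel + acc      # stochastic velocity update
        pos = pos + vel
    return best_w, best_f

w, err = gsa_train()
print("best MSE:", err)
```

Because the acceleration of agent `i` divides out its own mass `M_i`, only the attracting agents' masses appear in the update; heavier (fitter) agents pull the swarm towards promising regions of weight space, while the decaying `G` gradually shifts the search from exploration to exploitation.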
Pages: 6
Related Papers
50 records in total
  • [1] Vortex search optimization algorithm for training of feed-forward neural network
    Sağ, Tahir
    Jalil, Zainab Abdullah Jalil
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2021, 12 (05) : 1517 - 1544
  • [2] Feed-forward neural network training using sparse representation
    Yang, Jie
    Ma, Jun
    EXPERT SYSTEMS WITH APPLICATIONS, 2019, 116 : 255 - 264
  • [3] A new scheme for training feed-forward neural networks
    AbdelWahhab, O
    SidAhmed, MA
    PATTERN RECOGNITION, 1997, 30 (03) : 519 - 524
  • [4] Feed Forward Neural Network optimization by Particle Swarm Intelligence
    Hajare, Pratik R.
    Bawane, Narendra G.
    2015 7TH INTERNATIONAL CONFERENCE ON EMERGING TRENDS IN ENGINEERING & TECHNOLOGY (ICETET), 2015, : 40 - 45
  • [5] Optimization Methodology Applied to Feed-Forward Artificial Neural Network Parameters
    Furtuna, Renata
    Curteanu, Silvia
    Cazacu, Maria
    INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, 2011, 111 (03) : 539 - 553
  • [6] Power system network observability determination using feed-forward neural networks
    Jain, A
    Choi, J
    Min, J
    POWERCON 2002: INTERNATIONAL CONFERENCE ON POWER SYSTEM TECHNOLOGY, VOLS 1-4, PROCEEDINGS, 2002, : 2086 - 2090
  • [7] Genetic based feed-forward neural network training for chaff cluster detection
    Lee, Hansoo
    Yu, Jungwon
    Jeong, Yeongsang
    Kim, Sungshin
    2012 INTERNATIONAL CONFERENCE ON FUZZY THEORY AND ITS APPLICATIONS (IFUZZY2012), 2012, : 215 - 219
  • [8] A Modified Invasive Weed Optimization Algorithm for Training of Feed-Forward Neural Networks
    Giri, Ritwik
    Chowdhury, Aritra
    Ghosh, Arnob
    Das, Swagatam
    Abraham, Ajith
    Snasel, Vaclav
    IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC 2010), 2010, : 3166 - 3173
  • [9] Stochastic optimization methods for fitting polyclass and feed-forward neural network models
    Kooperberg, C
    Stone, CJ
    JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 1999, 8 (02) : 169 - 189