Revisiting Genetic Network Programming (GNP): Towards the Simplified Genetic Operators
Cited by: 4
Authors:
Li, Xianneng [1]
Yang, Huiyan [1]
Yang, Meihua [1]
Affiliations:
[1] Dalian Univ Technol, Fac Management & Econ, Dalian 116024, Peoples R China
Abstract:
Genetic network programming (GNP) is a relatively new graph-based evolutionary algorithm that uses a directed graph structure to represent individuals. A number of studies have demonstrated its expressive power for modeling complicated problems and systems, and have explored it from both methodological and application perspectives. However, the unique features of its directed graph remain relatively unexplored, which creates unnecessary obstacles to its further use and promotion. This paper addresses this issue systematically and theoretically. It is proved that traditional GNP with uniform genetic operators ignores the "transition by necessity" feature of the directed graph, making evolution unnecessarily difficult and causing invalid/negative evolution problems. Consequently, simplified genetic operators are developed to address these problems. Experiments on two benchmark agent control testbeds demonstrate the proposed operators' superiority over traditional GNP and state-of-the-art algorithms in terms of fitness results, search speed, and computation time.
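To make the abstract's contrast concrete, the sketch below models a GNP-style individual as a directed graph of nodes (each with a function gene and connection genes) and compares uniform mutation, which perturbs every node, with a simplified variant that mutates only nodes actually reached during transitions, i.e. the "transition by necessity" idea. All names, the toy transition rule, and the operator details are illustrative assumptions, not the paper's actual implementation.

```python
import random

def make_individual(n_nodes, n_functions, rng):
    """One individual: a list of nodes, each a (function_id, connection_genes)
    pair; connection genes index other nodes, forming a directed graph."""
    return [(rng.randrange(n_functions),
             [rng.randrange(n_nodes) for _ in range(2)])
            for _ in range(n_nodes)]

def visited_nodes(ind, start=0, steps=20):
    """Follow transitions from the start node for a few steps; only nodes
    reached this way influence behaviour ('transition by necessity')."""
    visited, node = set(), start
    for _ in range(steps):
        visited.add(node)
        func, conns = ind[node]
        node = conns[func % len(conns)]  # toy deterministic transition rule
    return visited

def mutate(ind, rate, rng, only_visited=False):
    """Uniform mutation perturbs connection genes of every node; the
    simplified variant restricts mutation to nodes used in transitions,
    so genes that cannot affect behaviour are left untouched."""
    targets = visited_nodes(ind) if only_visited else set(range(len(ind)))
    child = []
    for i, (func, conns) in enumerate(ind):
        conns = list(conns)
        if i in targets:
            conns = [rng.randrange(len(ind)) if rng.random() < rate else c
                     for c in conns]
        child.append((func, conns))
    return child

rng = random.Random(0)
parent = make_individual(8, 3, rng)
child = mutate(parent, 0.3, rng, only_visited=True)
```

With `only_visited=True`, connection genes of nodes never reached from the start node are guaranteed to survive unchanged, which is the intuition behind avoiding "invalid" mutations in the abstract.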
Pages: 43274-43289 (16 pages)