Proximal Policy Optimization-Based Reinforcement Learning and Hybrid Approaches to Explore the Cross Array Task Optimal Solution

Cited by: 0
Authors
Corecco, Samuel [1 ]
Adorni, Giorgia [2 ]
Gambardella, Luca Maria [2 ]
Reali Costa, Anna Helena
Affiliations
[1] Univ Svizzera Italiana USI, Fac Informat, CH-6900 Lugano, Switzerland
[2] USI, Dalle Molle Inst Artificial Intelligence IDSIA, SUPSI, CH-6900 Lugano, Switzerland
Source
MACHINE LEARNING AND KNOWLEDGE EXTRACTION | 2023, Vol. 5, Issue 4
Funding
Swiss National Science Foundation
Keywords
computational thinking; problem-solving techniques; clustering; random search; reinforcement learning; proximal policy optimization
DOI
10.3390/make5040082
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In an era characterised by rapid technological advancement, the application of algorithmic approaches to address complex problems has become crucial across various disciplines. Within the realm of education, there is growing recognition of the pivotal role played by computational thinking (CT). This skill set has become indispensable in our ever-evolving digital landscape, accompanied by an equal need for effective methods to assess and measure it. This research focuses on the Cross Array Task (CAT), an educational activity designed within the Swiss educational system to assess students' algorithmic skills. Its primary objective is to evaluate pupils' ability to deconstruct complex problems into manageable steps and systematically formulate sequential strategies. The CAT has proven its effectiveness as an educational tool for tracking and monitoring the development of CT skills throughout compulsory education. Additionally, this task presents a compelling avenue for algorithmic research, owing to its inherent complexity and the need to scrutinise the interplay between different strategies and the structural aspects of the activity. Deeply rooted in logical reasoning and intricate problem solving, the task often poses a substantial challenge for human solvers striving for optimal solutions. Consequently, applying computational power to unearth optimal solutions or uncover less intuitive strategies is a promising endeavour. This paper explores two distinct algorithmic approaches to the CAT problem. The first approach combines clustering, random search, and move selection to find optimal solutions. The second approach employs reinforcement learning techniques centred on the Proximal Policy Optimization (PPO) model.
The findings of this research hold the potential not only to deepen our understanding of how machines can effectively tackle complex challenges like the CAT problem, but also to have broad implications, particularly in educational contexts, where these approaches can be seamlessly integrated into existing tools as a tutoring mechanism, offering assistance to students encountering difficulties. This can ultimately enhance students' CT and problem-solving abilities, leading to an enriched educational experience.
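The abstract's second approach is built around PPO. The record itself contains no code, but PPO's core update rule is the clipped surrogate objective from the original PPO formulation. The sketch below is a generic NumPy illustration of that objective, not the authors' implementation; the function name and the choice of `eps = 0.2` (PPO's standard clipping parameter) are assumptions for illustration.

```python
import numpy as np

def ppo_clip_objective(ratios, advantages, eps=0.2):
    """Clipped surrogate objective L^CLIP = E[min(r*A, clip(r, 1-eps, 1+eps)*A)].

    ratios:     new-policy / old-policy action probabilities, shape (batch,)
    advantages: advantage estimates for the same actions, shape (batch,)
    """
    unclipped = ratios * advantages
    # Clipping keeps the probability ratio near 1, so a single update
    # cannot move the policy too far from the one that collected the data.
    clipped = np.clip(ratios, 1.0 - eps, 1.0 + eps) * advantages
    # Elementwise minimum is a pessimistic bound; average over the batch.
    return float(np.minimum(unclipped, clipped).mean())
```

In practice the policy is trained by gradient ascent on this objective; the elementwise minimum makes the bound pessimistic, which is what gives PPO its conservative, stable updates.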
Pages: 1660-1679 (20 pages)