A Prediction of the Shooting Trajectory for a Tuna Purse Seine Using the Double Deep Q-Network (DDQN) Algorithm

Cited by: 0
Authors
Cho, Daeyeon [1 ]
Lee, Jihoon [2 ]
Affiliations
[1] Chonnam Natl Univ, Dept Fisheries Sci, Yeosu 59626, South Korea
[2] Chonnam Natl Univ, Dept Marine Prod Management, Yeosu 59626, South Korea
Keywords
purse seine; fishing technology; machine learning; numerical method; reinforcement learning; simulation; yellowfin
DOI
10.3390/jmse13030530
Chinese Library Classification (CLC)
U6 (Water transportation); P75 (Ocean engineering)
Discipline Classification Codes
0814; 081505; 0824; 082401
Abstract
The purse seine is a fishing method in which a net is used to encircle a fish school, capturing the isolated fish by tightening a purse line at the bottom of the net. Tuna purse seine operations are technically complex, requiring the evaluation of fish movements, vessel dynamics, and their interactions, with success largely dependent on the expertise of the crew. In particular, the efficiency of highly complex tasks, such as calculating the shooting trajectory during a fishing operation, varies significantly with the fisher's skill level. To address this challenge, techniques to support less experienced fishers are needed, particularly for operations targeting free-swimming fish schools, which are more difficult to capture than those relying on Fish Aggregating Devices (FADs). This study proposes a method for predicting shooting trajectories using the Double Deep Q-Network (DDQN) algorithm. Observation states, actions, and reward functions were designed to identify optimal shooting scenarios, and the catchability of the predicted trajectories was evaluated through gear behavior analysis. The findings are expected to aid the development of a trajectory prediction system for inexperienced fishers and to serve as foundational data for automating purse seine fishing systems.
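The abstract names the core ingredients of the approach: a DDQN agent with designed observation states, discrete actions, and a reward function. As a rough illustration only, the sketch below shows the defining Double DQN update step in PyTorch; the state/action dimensions, network sizes, hyperparameters, and random transitions are placeholders assumed for the example and are not taken from the paper, which does not publish its exact encoding in this record.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions (assumptions, not the paper's design): e.g., relative
# fish-school position and vessel heading/speed as the state, discrete heading
# changes during shooting as the actions.
STATE_DIM = 6
N_ACTIONS = 8
GAMMA = 0.99

class QNet(nn.Module):
    """Small fully connected Q-network mapping a state to one Q-value per action."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

online_net = QNet(STATE_DIM, N_ACTIONS)
target_net = QNet(STATE_DIM, N_ACTIONS)
target_net.load_state_dict(online_net.state_dict())
optimizer = torch.optim.Adam(online_net.parameters(), lr=1e-3)

def ddqn_update(state, action, reward, next_state, done):
    """One Double DQN gradient step: the online network selects the next action,
    the target network evaluates it, which reduces Q-value overestimation."""
    q_sa = online_net(state).gather(1, action)  # Q(s, a) for the taken actions
    with torch.no_grad():
        next_action = online_net(next_state).argmax(1, keepdim=True)  # selection
        next_q = target_net(next_state).gather(1, next_action)        # evaluation
        target = reward + GAMMA * next_q * (1.0 - done)
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with a random mini-batch of 32 transitions.
batch = 32
loss = ddqn_update(
    torch.randn(batch, STATE_DIM),
    torch.randint(0, N_ACTIONS, (batch, 1)),
    torch.randn(batch, 1),
    torch.randn(batch, STATE_DIM),
    torch.zeros(batch, 1),
)
```

In an actual purse seine application, the reward would need to encode whether a candidate shooting trajectory encloses the predicted fish-school position, and the catchability of the resulting trajectory would still have to be verified through gear behavior analysis, as the abstract describes.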
Pages: 19