Detect to Focus: Latent-Space Autofocusing System With Decentralized Hierarchical Multi-Agent Reinforcement Learning

Cited by: 2
Authors
Anikina, Anna [1 ,2 ]
Rogov, Oleg Y. [1 ]
Dylov, Dmitry V. [1 ]
Affiliations
[1] Skolkovo Inst Sci & Technol Skoltech, Moscow 121205, Russia
[2] Univ Copenhagen, Dept Comp Sci, DK-1172 Copenhagen, Denmark
Keywords
Artificial intelligence; multi-agent systems; image processing; neural networks; reinforcement learning; computer vision; photography; imaging; lenses; algorithm
DOI
10.1109/ACCESS.2023.3303844
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline Code
0812
Abstract
State-of-the-art object detection models are frequently trained offline on available datasets, such as ImageNet: large and overly diverse data that are unbalanced and hard to cluster semantically. Such training degrades object detection performance whenever the illumination, the environmental conditions (e.g., rain or dust), or the lens positioning (out-of-focus blur) change. We propose a simple way to intelligently control the camera and the lens focusing settings in such scenarios using DASHA, a Decentralized Autofocusing System with Hierarchical Agents. Our agents learn to focus on scenes in challenging environments, significantly enhancing the pattern recognition capacity beyond that of popular detection models (YOLO, Faster R-CNN, and RetinaNet are considered). At the same time, the decentralized training keeps the equipment from overheating. The algorithm relies on the latent representation of the camera's stream and is thus the first method to allow completely no-reference imaging, in which the system trains itself to autofocus. In summary, the paper introduces a method for auto-tuning imaging equipment via hierarchical reinforcement learning: two interacting agents independently manage the camera and the lens settings, enabling optimal focus across different lighting conditions. The distinguishing aspect of this approach is its reliance on the latent feature vector of the real-time image scene for autofocusing, making it the first method of its kind to auto-tune a camera without requiring reference or calibration data.
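To make the control loop described in the abstract concrete, below is a minimal illustrative sketch of a decentralized two-agent autofocus loop driven by a latent image representation. It is not the authors' DASHA implementation: the encoder, the reward, the action sets, and all names (encode_frame, detection_reward, LinearAgent) are hypothetical stand-ins. In the paper, the reward would reflect the detector's performance on the refocused frame and the actions would drive real lens and camera hardware.

```python
# Minimal illustrative sketch of a decentralized two-agent autofocus loop
# (hypothetical names and toy reward; not the authors' DASHA implementation).
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 16      # size of the latent feature vector (assumed)
LENS_ACTIONS = 5     # e.g., discrete focus-ring steps, far..near (assumed)
CAM_ACTIONS = 3      # e.g., exposure/gain presets (assumed)


def encode_frame(frame):
    """Stand-in for the latent encoder of the camera stream."""
    return np.tanh(frame.reshape(-1)[:LATENT_DIM])


def detection_reward(latent):
    """Toy surrogate reward; in the paper this would come from the
    detector's confidence on the refocused frame."""
    return float(-np.linalg.norm(latent))


class LinearAgent:
    """Tiny independent policy: linear scores over the latent vector."""

    def __init__(self, n_actions, lr=0.05):
        self.w = np.zeros((n_actions, LATENT_DIM))
        self.lr = lr

    def act(self, latent, eps=0.1):
        # epsilon-greedy choice over the agent's own action set
        if rng.random() < eps:
            return int(rng.integers(len(self.w)))
        return int(np.argmax(self.w @ latent))

    def update(self, latent, action, reward):
        # simple bandit-style update toward the observed reward
        self.w[action] += self.lr * (reward - self.w[action] @ latent) * latent


lens_agent = LinearAgent(LENS_ACTIONS)
cam_agent = LinearAgent(CAM_ACTIONS)

for step in range(100):
    frame = rng.normal(size=(8, 8))          # placeholder for a camera frame
    z = encode_frame(frame)                   # latent representation of the scene
    a_lens, a_cam = lens_agent.act(z), cam_agent.act(z)
    # apply_settings(a_lens, a_cam) would drive the real hardware here
    r = detection_reward(z)
    lens_agent.update(z, a_lens, r)           # each agent updates its own policy
    cam_agent.update(z, a_cam, r)
```

The point of the sketch is the decentralization: both agents observe the same latent vector but maintain and update their own policies, so no joint action table or central critic is required.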
Pages: 85214-85223
Page count: 10