Active Environmental Monitoring and Anomaly Search System for Space Habitat With Markov Decision Process and Active Sensing

Cited by: 1
Authors
Guo, Yanjie [1 ]
Xu, Zhaoyi [1 ]
Saleh, Joseph Homer [1 ]
Affiliations
[1] Georgia Inst Technol, Sch Aerosp Engn, Atlanta, GA 30332 USA
Funding
US National Aeronautics and Space Administration (NASA);
Keywords
Anomaly detection; environmental monitoring; Markov decision process; space habitat; DECENTRALIZED CONTROL; REINFORCEMENT;
DOI
10.1109/ACCESS.2021.3068950
CLC classification
TP [Automation technology; computer technology];
Subject classification
0812;
Abstract
For future crewed missions that could last years with limited ground support, the environmental control and life support system (ECLSS) will likely evolve to meet new, more stringent reliability and autonomy requirements. In this work, we focus on improving the performance of environmental monitoring and anomaly detection systems using a Markov decision process and active sensing. We exploit actively moving sensors to develop a novel sensing architecture and supporting analytics, termed the Active environmental Monitoring and Anomaly Search System (AMASS). We design a Dynamic Value Iteration policy to solve the path-planning problem for the moving sensors in a dynamic environment. To test and validate AMASS, we developed a series of computational experiments for fire search, and we assessed performance against three metrics: (1) anomaly detection time lag, (2) source location uncertainty, and (3) state estimation error. The results demonstrate that AMASS provides 10 to 15 times better performance than the traditional fixed-sensor monitoring and detection strategy; that ventilation in the monitored environment affects performance by a factor of 6 to 40 for any monitoring architecture with fixed or moving sensors; and that monitoring performance cannot be fully reflected in a monolithic, single metric but should include distinct metrics for the timeliness and spatial resolution of the detection function.
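As a rough illustration of the value-iteration idea behind the sensor path-planning policy, the Python sketch below solves a toy Markov decision process in which a single mobile sensor chooses moves on a small grid so as to maximize discounted expected reward (here standing in for expected information gain about a possible anomaly source). The grid size, reward field, discount factor, and deterministic motion model are illustrative assumptions, not the paper's Dynamic Value Iteration implementation; the paper's policy is additionally recomputed as the estimate of the environment evolves, which is not shown here.

import numpy as np

# Illustrative sketch only: toy grid-world MDP for a single moving sensor.
# GRID, GAMMA, ACTIONS, and the random reward field are assumptions for the
# example, not parameters taken from the AMASS paper.
GRID = (6, 6)              # habitat discretized into a 6x6 grid of cells
GAMMA = 0.95               # discount factor
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]  # up, down, left, right, stay

rng = np.random.default_rng(0)
reward = rng.random(GRID)  # hypothetical expected information gain per cell

def step(state, action):
    """Deterministic sensor motion, clipped to the grid boundary."""
    r = min(max(state[0] + action[0], 0), GRID[0] - 1)
    c = min(max(state[1] + action[1], 0), GRID[1] - 1)
    return (r, c)

def value_iteration(reward, tol=1e-6):
    """Standard value iteration: returns the value function and a greedy policy."""
    V = np.zeros(GRID)
    while True:
        V_new = np.empty_like(V)
        for r in range(GRID[0]):
            for c in range(GRID[1]):
                q = [reward[step((r, c), a)] + GAMMA * V[step((r, c), a)]
                     for a in ACTIONS]
                V_new[r, c] = max(q)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    policy = np.empty(GRID, dtype=int)
    for r in range(GRID[0]):
        for c in range(GRID[1]):
            q = [reward[step((r, c), a)] + GAMMA * V[step((r, c), a)]
                 for a in ACTIONS]
            policy[r, c] = int(np.argmax(q))
    return V, policy

V, policy = value_iteration(reward)
print(policy)  # index into ACTIONS giving the sensor's next move from each cell

In the setting described by the abstract, the reward field would presumably be updated from incoming sensor measurements (e.g. as a fire source estimate sharpens) and the policy re-solved, which is what makes the planning problem dynamic rather than a one-shot computation as in this sketch.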
Pages: 49683-49696
Page count: 14