Value of structural health information in partially observable stochastic environments

Cited: 11
Authors
Andriotis, Charalampos P. [1 ]
Papakonstantinou, Konstantinos G. [1 ]
Chatzi, Eleni N. [2 ]
Affiliations
[1] Penn State Univ, Dept Civil & Environm Engn, University Park, PA, USA
[2] Swiss Fed Inst Technol, Dept Civil Environm & Geomat Engn, Zurich, Switzerland
Funding
US National Science Foundation;
Keywords
Value of Information; Value of Structural Health Monitoring; Partially Observable Markov Decision Processes; Sequential Decision-Making; Point-based Value Iteration; Inspection and Maintenance Planning; MARKOV DECISION-PROCESSES; MAINTENANCE POLICIES; OPTIMAL INSPECTION; OPTIMIZATION; FRAMEWORK; INFRASTRUCTURE; SUSTAINABILITY; NETWORKS; POMDP;
DOI
10.1016/j.strusafe.2020.102072
Chinese Library Classification (CLC)
TU [Building Science];
Discipline Classification Code
0813;
Abstract
Efficient integration of uncertain observations with decision-making optimization is key for prescribing informed intervention actions, able to preserve the structural safety of deteriorating engineering systems. To this end, it is necessary that the scheduling of inspection and monitoring strategies be objectively performed on the basis of their expected value-based gains that, among others, reflect quantitative metrics such as the Value of Information (VoI) and the Value of Structural Health Monitoring (VoSHM). In this work, we introduce and study the theoretical and computational foundations of the above metrics within the context of Partially Observable Markov Decision Processes (POMDPs), thus alluding to the broad class of decision-making problems in partially observable stochastic deteriorating environments that can be modeled as POMDPs. Step-wise and life-cycle VoI and VoSHM definitions are devised, and their bounds are analyzed through properties stemming from the Bellman equation and the resulting optimal value function. It is shown that a POMDP policy inherently leverages the notion of VoI to guide observational actions optimally at every decision step, and that the permanent or intermittent information provided by SHM or inspection visits, respectively, can only improve the long-term cost of this policy, something that is not necessarily true under the locally optimal policies typically adopted in decision-making for structures and infrastructure. POMDP solutions are derived based on point-based value iteration methods, and the various definitions are quantified in stationary and non-stationary deteriorating environments, with both infinite and finite planning horizons, featuring single- or multi-component engineering systems.
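To make the step-wise VoI notion used in the abstract concrete, the sketch below computes a myopic (one-step) VoI for a toy deterioration problem: it compares the expected cost of acting directly on the current belief against the expected cost of acting after a Bayesian belief update from an inspection outcome. This example is not taken from the paper; the state space, cost matrix, and observation model are assumed purely for illustration, and the paper's step-wise and life-cycle definitions generalize this idea through the POMDP Bellman equation and the optimal value function.

```python
# Hypothetical illustration (not from the paper): myopic Value of Information
# for a discrete-state deterioration problem. All numbers and names are assumed.
import numpy as np

# Deterioration states: 0 = intact, 1 = damaged, 2 = failed (assumed model)
belief = np.array([0.7, 0.2, 0.1])            # current belief over states

# Expected cost of each action given the true state (assumed values)
# actions: 0 = do nothing, 1 = repair
cost = np.array([[0.0, 10.0],                 # state 0
                 [5.0, 10.0],                 # state 1
                 [50.0, 12.0]])               # state 2

# Observation model p(o | s) for an inspection with two outcomes (assumed)
obs_model = np.array([[0.9, 0.1],             # state 0: mostly "no damage" observed
                      [0.4, 0.6],             # state 1
                      [0.1, 0.9]])            # state 2: mostly "damage" observed

def expected_cost_best_action(b):
    """Optimal myopic expected cost under belief b (minimum over actions)."""
    return min(b @ cost[:, a] for a in range(cost.shape[1]))

# Expected cost of acting now, without observing
cost_no_obs = expected_cost_best_action(belief)

# Expected cost of acting after observing: average over outcomes of the
# best response to the Bayes-updated belief
cost_with_obs = 0.0
for o in range(obs_model.shape[1]):
    p_o = belief @ obs_model[:, o]            # marginal probability of outcome o
    if p_o > 0:
        posterior = belief * obs_model[:, o] / p_o   # Bayesian belief update
        cost_with_obs += p_o * expected_cost_best_action(posterior)

# Step-wise VoI: expected cost saved by collecting the observation first
voi = cost_no_obs - cost_with_obs
print(f"VoI of the inspection: {voi:.3f}")
```

Because the post-observation optimal cost is averaged over inspection outcomes, this myopic VoI is non-negative by construction, mirroring the abstract's statement that inspection or SHM information can only improve the expected cost of an optimal policy.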
Pages: 13