Risk-averse Ambulance Redeployment via Multi-armed Bandits

Times Cited: 0
Authors
Sahin, Umitcan [1 ,2 ]
Yucesoy, Veysel [1 ]
Koc, Aykut [1 ]
Tekin, Cem [2 ]
Affiliations
[1] Aselsan Arastirma Merkezi, Akilli Veri Analit Arastirma Program Mudurlugu, TR-06370 Ankara, Turkey
[2] Bilkent Univ, Elekt & Elekt Muhendisligi Bolumu, TR-06800 Ankara, Turkey
Source
2018 26TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2018
Keywords
Multi-armed bandit problems; risk minimization; ambulance redeployment; RELOCATION;
DOI
Not available
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communications Technology];
Discipline Codes
0808; 0809;
Abstract
Ambulance redeployment is the problem of deploying ambulances to certain locations so as to minimize arrival times to possible calls; it plays a significant role in improving a country's emergency medical services and increasing the number of lives saved in emergencies. In this study, unlike the existing optimization methods in the literature, the problem is cast as a multi-armed bandit problem. Multi-armed bandits are a class of sequential online learning methods used to maximize a gain function (i.e., reward) when the reward distributions are unknown. In addition to maximizing rewards, this study also considers the objective of minimizing the variance of rewards, and it investigates how the risk taken by the system affects average arrival times and the number of calls responded to on time. Ambulance redeployment is performed by a risk-averse multi-armed bandit algorithm on a data-driven simulator. The results show that the algorithm that takes less risk (i.e., minimizes the variance of response times) responds to more calls on time.
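The record gives only the abstract, so the sketch below is a rough, assumption-laden illustration of the mean-variance idea it describes: a generic risk-averse index policy (in the spirit of MV-LCB, Sani, Lazaric and Munos, 2012) that trades off the empirical variance of rewards against their empirical mean, with arms standing in for candidate ambulance waiting locations. The function name, the risk parameter rho, and the exploration bonus are illustrative assumptions, not the authors' algorithm.

import numpy as np

# Illustrative sketch only: the record above does not specify the paper's algorithm.
# Generic mean-variance index policy; rho and the exploration bonus are assumptions.

def mean_variance_index_policy(history, t, rho=1.0, c=1.0):
    """Select the arm with the lowest optimistic estimate of (variance - rho * mean).

    history : list of lists, observed rewards per arm (arm = candidate ambulance location)
    t       : current round, 1-indexed
    rho     : risk-aversion trade-off between variance and mean reward
    c       : exploration scale
    """
    # Play every arm once before comparing indices.
    for i, obs in enumerate(history):
        if len(obs) == 0:
            return i
    indices = []
    for obs in history:
        n = len(obs)
        mv = np.var(obs) - rho * np.mean(obs)   # lower is better: low risk, high reward
        bonus = c * np.sqrt(np.log(t) / n)      # optimism for under-explored arms
        indices.append(mv - bonus)
    return int(np.argmin(indices))

# Minimal usage on synthetic rewards (a stand-in for the paper's data-driven simulator):
# arm 0 has a slightly lower mean but much lower variance than arm 1.
rng = np.random.default_rng(0)
arms = [lambda: rng.normal(0.55, 0.05), lambda: rng.normal(0.60, 0.50)]
history = [[] for _ in arms]
for t in range(1, 501):
    a = mean_variance_index_policy(history, t)
    history[a].append(float(arms[a]()))
print([len(h) for h in history])  # the risk-averse policy ends up favoring the low-variance arm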
Pages: 4
Related Papers
50 records in total
  • [1] Dynamic Ambulance Redeployment via Multi-armed Bandits
    Sahin, Umitcan
    Yucesoy, Veysel
    2019 27TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2019,
  • [2] Robust Risk-Averse Stochastic Multi-armed Bandits
    Maillard, Odalric-Ambrym
    ALGORITHMIC LEARNING THEORY (ALT 2013), 2013, 8139 : 218 - 233
  • [3] A revised approach for risk-averse multi-armed bandits under CVaR criterion
    Khajonchotpanya, Najakorn
    Xue, Yilin
    Rujeerapaiboon, Napat
    OPERATIONS RESEARCH LETTERS, 2021, 49 (04) : 465 - 472
  • [4] Statistically Robust, Risk-Averse Best Arm Identification in Multi-Armed Bandits
    Kagrecha, Anmol
    Nair, Jayakrishnan
    Jagannathan, Krishna
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2022, 68 (08) : 5248 - 5267
  • [5] A Risk-Averse Framework for Non-Stationary Stochastic Multi-Armed Bandits
    Alami, Reda
    Mahfoud, Mohammed
    Achab, Mastane
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023, : 272 - 280
  • [6] Risk-Averse Multi-Armed Bandits with Unobserved Confounders: A Case Study in Emotion Regulation in Mobile Health
    Shen, Yi
    Dunn, Jessilyn
    Zavlanos, Michael M.
    2022 IEEE 61ST CONFERENCE ON DECISION AND CONTROL (CDC), 2022, : 144 - 149
  • [7] Risk-averse Contextual Multi-armed Bandit Problem with Linear Payoffs
    Lin, Yifan
    Wang, Yuhao
    Zhou, Enlu
    JOURNAL OF SYSTEMS SCIENCE AND SYSTEMS ENGINEERING, 2022,
  • [8] Risk-averse Contextual Multi-armed Bandit Problem with Linear Payoffs
    Lin, Yifan
    Wang, Yuhao
    Zhou, Enlu
    JOURNAL OF SYSTEMS SCIENCE AND SYSTEMS ENGINEERING, 2023, 32 (03) : 267 - 288
  • [9] Risk-Averse Biased Human Policies with a Robot Assistant in Multi-Armed Bandit Settings
    Koller, Michael
    Patten, Timothy
    Vincze, Markus
    THE 14TH ACM INTERNATIONAL CONFERENCE ON PERVASIVE TECHNOLOGIES RELATED TO ASSISTIVE ENVIRONMENTS, PETRA 2021, 2021, : 483 - 488