Risk-averse Ambulance Redeployment via Multi-armed Bandits

Cited by: 0
Authors
Sahin, Umitcan [1 ,2 ]
Yucesoy, Veysel [1 ]
Koc, Aykut [1 ]
Tekin, Cem [2 ]
Affiliations
[1] Aselsan Arastirma Merkezi, Akilli Veri Analit Arastirma Program Mudurlugu, TR-06370 Ankara, Turkey
[2] Bilkent Univ, Elekt & Elekt Muhendisligi Bolumu, TR-06800 Ankara, Turkey
Source
2018 26TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU) | 2018
Keywords
Multi-armed bandit problems; risk minimization; ambulance redeployment; RELOCATION;
DOI
Not available
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Discipline Code
0808 ; 0809 ;
Abstract
Ambulance redeployment is the problem of deploying ambulances to selected locations so as to minimize arrival times to possible calls; it plays a significant role in improving a country's emergency medical services and in increasing the number of lives saved during emergencies. In this study, unlike the existing optimization-based methods in the literature, the problem is cast as a multi-armed bandit problem. Multi-armed bandit problems belong to the family of sequential online learning methods and are used to maximize a gain function (i.e., reward) when the reward distributions are unknown. In this study, in addition to the objective of maximizing rewards, the objective of minimizing the variance of rewards is also considered. The effect of the risk taken by the system on average arrival times and on the number of calls responded to on time is investigated. Ambulance redeployment is performed by a risk-averse multi-armed bandit algorithm on a data-driven simulator. The results show that the algorithm that takes less risk (i.e., that minimizes the variance of response times) responds to more calls on time.
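The mean-variance idea in the abstract can be illustrated with a small sketch: each arm is a candidate ambulance station, the reward is the negated response time, and the selection index penalizes empirical variance by a risk-aversion weight. This is a hedged illustration in the general mean-variance bandit style, not the authors' exact algorithm; the `MeanVarianceBandit` class, the `rho` parameter, and the simulated response-time distributions are all assumptions made here for demonstration.

```python
import math
import random
import statistics

class MeanVarianceBandit:
    """Sketch of a risk-averse multi-armed bandit.

    Arms are candidate ambulance stations; rewards are negated
    response times, so maximizing reward minimizes arrival time.
    The index mean - rho * variance + exploration bonus trades off
    expected performance against risk (rho = 0 recovers a
    risk-neutral UCB-style rule)."""

    def __init__(self, n_arms, rho=1.0):
        self.n_arms = n_arms
        self.rho = rho  # risk-aversion weight
        self.rewards = [[] for _ in range(n_arms)]

    def select(self, t):
        # Play every arm twice first so mean and variance are defined.
        for a in range(self.n_arms):
            if len(self.rewards[a]) < 2:
                return a
        best_index, best_arm = None, None
        for a in range(self.n_arms):
            n = len(self.rewards[a])
            mean = statistics.fmean(self.rewards[a])
            var = statistics.pvariance(self.rewards[a])
            bonus = math.sqrt(math.log(t + 1) / n)  # exploration term
            index = mean - self.rho * var + (1 + self.rho) * bonus
            if best_index is None or index > best_index:
                best_index, best_arm = index, a
        return best_arm

    def update(self, arm, reward):
        self.rewards[arm].append(reward)

# Toy simulation: station 1 is slightly faster on average but far
# more variable; a risk-averse agent should prefer station 0.
random.seed(0)
bandit = MeanVarianceBandit(n_arms=2, rho=2.0)
counts = [0, 0]
for t in range(200):
    arm = bandit.select(t)
    counts[arm] += 1
    # Simulated response times in minutes (hypothetical distributions).
    time_min = random.gauss(10.0, 0.5) if arm == 0 else random.gauss(9.5, 5.0)
    bandit.update(arm, -time_min)
```

With a large `rho`, the variance penalty on the volatile station dominates its small mean advantage, so the low-variance station is pulled far more often, mirroring the paper's finding that the less risky policy responds to more calls on time.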
Pages: 4