Using Markov Decision Process for Recommendations Based on Aggregated Decision Data Models

Cited: 0
Authors
Petrusel, Razvan [1]
Affiliation
[1] Univ Babes Bolyai, Fac Econ Sci & Business Adm, Cluj Napoca 400591, Romania
Source
BUSINESS INFORMATION SYSTEMS, BIS 2013 | 2013 / Vol. 157
Keywords
Decision Process Recommendation; Decision Data Model; Markov Decision Process
DOI
Not available
CLC Number
TP [automation technology, computer technology]
Discipline Code
0812
Abstract
Our research is placed in the context of business decision-making processes. We view decision making as a workflow of (mostly mental) activities directed at choosing one decision alternative. Our goal is to direct the flow of decision activities so that the relevant alternatives are properly evaluated; recommending which alternative should be chosen is outside our scope. Since business decision making is data-centric, we use a Decision Data Model (DDM), which is automatically mined from a log of the decision maker's actions while interacting with business software. The recommendation is based on an aggregated DDM that shows what many decision makers have done in the same decision situation. In our previous work we created algorithms that seek a local optimum. In this paper we show how the DDM-based recommendation problem can be mapped to a Markov Decision Process (MDP), with the aim of using the MDP to find a globally optimal decision-making strategy.
Pages: 125-137
Number of pages: 13
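
To make the abstract's MDP mapping more concrete, here is a minimal illustrative sketch, not the paper's implementation: a toy MDP solved by value iteration, where states stand in for stages of a decision-making workflow and actions for decision activities. All state names, activities, rewards, transition probabilities, and the discount factor are hypothetical; in the paper's setting they would be derived from an aggregated DDM mined from user interaction logs.

```python
# Minimal sketch (not the paper's implementation): a toy MDP solved by value
# iteration, illustrating how a globally optimal policy can recommend the next
# decision activity instead of a greedy, locally optimal choice.
# States, actions, transitions, and rewards are hypothetical stand-ins for
# elements of an aggregated Decision Data Model (DDM).

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "start": {
        "derive_cost":    [(1.0, "cost_known", 1.0)],
        "derive_revenue": [(1.0, "revenue_known", 2.0)],
    },
    "cost_known": {
        "derive_revenue": [(1.0, "alternatives_compared", 4.0)],
    },
    "revenue_known": {
        "derive_cost": [(1.0, "alternatives_compared", 1.0)],
    },
    "alternatives_compared": {},  # terminal: all relevant data elements derived
}

GAMMA = 0.9   # discount factor (hypothetical)
THETA = 1e-6  # convergence threshold


def value_iteration(transitions, gamma=GAMMA, theta=THETA):
    """Return state values and a greedy policy for the toy MDP."""
    values = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            if not actions:  # terminal state keeps value 0
                continue
            best = max(
                sum(p * (r + gamma * values[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - values[s]))
            values[s] = best
        if delta < theta:
            break
    policy = {
        s: max(actions, key=lambda a: sum(p * (r + gamma * values[s2])
                                          for p, s2, r in actions[a]))
        for s, actions in transitions.items() if actions
    }
    return values, policy


if __name__ == "__main__":
    values, policy = value_iteration(transitions)
    print(policy)
```

Running the sketch recommends derive_cost from the start state (value 1.0 + 0.9 * 4.0 = 4.6) over the greedy choice derive_revenue (value 2.0 + 0.9 * 1.0 = 2.9), illustrating the distinction the abstract draws between a local optimum and a globally optimal decision-making strategy.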