Using deep Q-learning to understand the tax evasion behavior of risk-averse firms

Cited by: 14
Authors
Goumagias, Nikolaos D. [1 ]
Hristu-Varsakelis, Dimitrios [2 ]
Assael, Yannis M. [3 ]
Affiliations
[1] Northumbria Univ, Newcastle Business Sch, Cent Campus East 1, Newcastle Upon Tyne NE1 8ST, Tyne & Wear, England
[2] Univ Macedonia, Dept Appl Informat, Egnatia 156, Thessaloniki 54006, Greece
[3] Univ Oxford, Dept Comp Sci, Wolfson Bldg,Parks Rd, Oxford OX1 3QD, England
Keywords
Markov decision processes; Tax evasion; Q-learning; Deep learning; Neural networks; Amnesties
DOI
10.1016/j.eswa.2018.01.039
CLC Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Designing tax policies that are effective in curbing tax evasion and maximizing state revenues requires a rigorous understanding of taxpayer behavior. This work explores the problem of determining the strategy a self-interested, risk-averse tax entity is expected to follow, as it "navigates" - in the context of a Markov Decision Process - a government-controlled tax environment that includes random audits, penalties and occasional tax amnesties. Although simplified versions of this problem have been previously explored, the mere assumption of risk-aversion (as opposed to risk-neutrality) raises the complexity of finding the optimal policy well beyond the reach of analytical techniques. Here, we obtain approximate solutions via a combination of Q-learning and recent advances in Deep Reinforcement Learning. By doing so, we (i) determine the tax evasion behavior expected of the taxpayer entity, (ii) calculate the degree of risk aversion of the "average" entity given empirical estimates of tax evasion, and (iii) evaluate sample tax policies in terms of expected revenues. Our model can be useful as a testbed for "in-vitro" testing of tax policies, while our results lead to various policy recommendations. (C) 2018 Elsevier Ltd. All rights reserved.
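The approach summarized in the abstract can be illustrated with a small, self-contained sketch. The code below is purely hypothetical and is not the authors' model: it swaps the paper's deep Q-network for plain tabular Q-learning, uses an invented two-state tax-evasion MDP (random audits, a proportional penalty, occasional amnesties), and models risk aversion by applying a CARA (exponential) utility to each year's net income. All parameter values, the state encoding, and the assumption that "exposed" firms face a higher audit probability are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical toy model, NOT the model of Goumagias et al. (2018):
# tabular Q-learning (standing in for the paper's deep Q-network) on a
# two-state tax-evasion MDP with random audits, penalties and occasional
# amnesties, using a CARA utility to capture risk aversion.
# Every parameter value below is an illustrative placeholder.

rng = np.random.default_rng(0)

N_CONCEAL = 11                   # actions: conceal 0%, 10%, ..., 100% of income
TAX_RATE = 0.35                  # flat tax rate on declared income
AUDIT_PROB = {0: 0.05, 1: 0.15}  # assumed audit probability per state (see states below)
PENALTY = 2.0                    # fine = PENALTY * evaded tax when caught
AMNESTY_PROB = 0.02              # assumed chance an amnesty clears undetected liability
RISK_AVERSION = 1.5              # CARA coefficient; larger = more risk-averse
CARRIED = 0.5                    # assumed undetected past concealment in the "exposed" state
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.1
INCOME = 1.0

# States: 0 = "clean" (no undetected past evasion),
#         1 = "exposed" (carrying undetected evasion an audit could still uncover).
Q = np.zeros((2, N_CONCEAL))

def utility(x):
    """CARA (exponential) utility of one year's net income; encodes risk aversion."""
    return (1.0 - np.exp(-RISK_AVERSION * x)) / RISK_AVERSION

def step(state, action):
    """Simulate one tax year: declare income, then face the audit/amnesty lottery."""
    concealed = action / (N_CONCEAL - 1) * INCOME
    net = INCOME - TAX_RATE * (INCOME - concealed)
    exposure = concealed + (CARRIED if state == 1 else 0.0)
    if rng.random() < AUDIT_PROB[state]:
        net -= (1.0 + PENALTY) * TAX_RATE * exposure   # back taxes plus fine on all uncovered evasion
        next_state = 0
    elif rng.random() < AMNESTY_PROB:
        next_state = 0                                  # amnesty wipes the undetected liability
    else:
        next_state = 1 if exposure > 0 else 0
    return utility(net), next_state

state = 0
for t in range(200_000):                                # simulated tax years
    # Epsilon-greedy choice of concealment level, then a standard Q-learning
    # update on the utility-transformed (risk-averse) reward.
    action = rng.integers(N_CONCEAL) if rng.random() < EPS else int(np.argmax(Q[state]))
    reward, next_state = step(state, action)
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, action])
    state = next_state

print("Greedy concealment fraction per state:", np.argmax(Q, axis=1) / (N_CONCEAL - 1))
```

In this sketch, raising RISK_AVERSION should push the learned policy toward lower concealment levels; the paper obtains the analogous effect with a deep Q-network over a far richer state and policy space.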
Pages: 258-270
Page count: 13
Related papers (18 total)
  • [1] Automating Vehicles by Risk-Averse Preview-based Q-Learning Algorithm
    Mazouchi, Majid
    Nageshrao, Subramanya
    Modares, Hamidreza
    IFAC PAPERSONLINE, 2022, 55 (15) : 105 - 110
  • [2] Dynamic history-dependent tax and environmental compliance monitoring of risk-averse firms
    Goldberg, Noam
    Meilijson, Isaac
    Perlman, Yael
    ANNALS OF OPERATIONS RESEARCH, 2024, 334 (1-3) : 469 - 495
  • [3] Towards Risk-Averse Edge Computing With Deep Reinforcement Learning
    Xu, Dianlei
    Su, Xiang
    Wang, Huandong
    Tarkoma, Sasu
    Hui, Pan
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (06) : 7030 - 7047
  • [4] Collaborative Traffic Signal Automation Using Deep Q-Learning
    Hassan, Muhammad Ahmed
    Elhadef, Mourad
    Khan, Muhammad Usman Ghani
    IEEE ACCESS, 2023, 11 : 136015 - 136032
  • [5] Deep Convolution Q-Learning for emulation of human behavior patterns in gaming bots
    Kalra, Ishmeet Singh
    Pandita, Sarthak
    Soni, Sagar
    Alam, Md Kazmi
    2019 IEEE PUNE SECTION INTERNATIONAL CONFERENCE (PUNECON), 2019
  • [6] Risk-averse optimization of crop inputs using a deep ensemble of convolutional neural networks
    Barbosa, Alexandre
    Hovakimyan, Naira
    Martin, Nicolas F.
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2020, 178
  • [7] Using Double Deep Q-Learning to learn Attitude Control of Fixed-Wing Aircraft
    Richter, David J.
    Calix, Ricardo A.
    2022 16TH INTERNATIONAL CONFERENCE ON SIGNAL-IMAGE TECHNOLOGY & INTERNET-BASED SYSTEMS, SITIS, 2022, : 646 - 651
  • [8] Real-time emergency load shedding for power system transient stability control: A risk-averse deep learning method
    Liu, Jizhe
    Zhang, Yuchen
    Meng, Ke
    Dong, Zhao Yang
    Xu, Yan
    Han, Siming
    APPLIED ENERGY, 2022, 307
  • [9] Secure Transmission in Cellular V2X Communications Using Deep Q-Learning
    Jameel, Furqan
    Javed, Muhammad Awais
    Zeadally, Sherali
    Jantti, Riku
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (10) : 17167 - 17176