TExSS: Transparency and Explanations in Smart Systems

Times Cited: 1
Authors
Smith-Renner, Alison [1 ]
Dodge, Jonathan [2 ]
Dugan, Casey [3 ]
Kleanthous, Styliani [4 ]
Kuflik, Tsvi [5 ]
Lee, Mm Kyung [6 ]
Lim, Brian Y. [7 ]
Sarkar, Advait [8 ]
Shulner-Tal, Avital [5 ]
Stumpf, Simone [9 ]
Affiliations
[1] Decis Analyt Corp, Arlington, VA 22202 USA
[2] Oregon State Univ, Corvallis, OR 97331 USA
[3] IBM Res, Yorktown Hts, NY USA
[4] Open Univ Cyprus, Cyprus Ctr Algorithm Transparency, Latsia, Cyprus
[5] Univ Haifa, Informat Syst, Haifa, Israel
[6] Univ Texas Austin, Austin, TX USA
[7] Natl Univ Singapore, Dept Comp Sci, Singapore, Singapore
[8] Microsoft Res, Redmond, WA USA
[9] City Univ London, Ctr HCI Design, London, England
Source
26TH INTERNATIONAL CONFERENCE ON INTELLIGENT USER INTERFACES (IUI '21 COMPANION) | 2021
Keywords
explanations; visualizations; machine learning; intelligent systems; intelligibility; transparency; fairness; accountability
DOI
10.1145/3397482.3450705
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms allow the exploitation of rich and varied data sources to support human decision-making and/or to take direct action; however, there are increasing concerns about their transparency and accountability, as these processes are typically opaque to the user. Transparency and accountability have attracted growing interest as routes to more effective system training, better reliability, and improved usability. This workshop provides a venue for exploring issues that arise in designing, developing, and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. In addition, we focus on approaches to mitigating algorithmic biases, such as awareness, data provenance, and validation, that researchers can apply even without access to a given system's inner workings.
Pages: 24 - 25
Page Count: 2