Putting the humanity into inhuman systems: How human factors and ergonomics can be used to manage the risks associated with artificial general intelligence

Cited by: 29
Authors
Salmon, Paul M. [1 ]
Carden, Tony [1 ]
Hancock, Peter A. [2 ,3 ]
Affiliations
[1] Univ Sunshine Coast, Ctr Human Factors & Sociotech Syst, Maroochydore, Qld 4558, Australia
[2] Univ Cent Florida, Dept Psychol, Orlando, FL 32816 USA
[3] Univ Cent Florida, Inst Simulat & Training, Orlando, FL 32816 USA
Funding
Australian Research Council;
关键词
artificial general intelligence; design; human factors; risk; safety; PARADIGM;
DOI
10.1002/hfm.20883
Chinese Library Classification
T [Industrial Technology];
Discipline Classification Code
08;
Abstract
The next generation of artificial intelligence, known as artificial general intelligence (AGI), could either revolutionize or destroy humanity. As the discipline that focuses on enhancing human health and wellbeing, human factors and ergonomics (HFE) has a crucial role to play in the conception, design, and operation of AGI systems. Despite this, there has been little examination of how HFE can influence and direct this evolution. This study uses a hypothetical AGI system, Tegmark's "Prometheus," to frame the role of HFE in managing the risks associated with AGI. Fifteen categories of HFE method are identified, and their potential role in AGI system design is considered. The findings suggest that all categories of HFE method can contribute to AGI system design; however, areas where certain methods require extension are identified. It is concluded that HFE can and should contribute to AGI system design, and that immediate effort is required to facilitate this goal. In closing, we explicate some of the work required to embed HFE in wider multidisciplinary efforts aiming to create safe and efficient AGI systems.
Pages: 223-236 (14 pages)