Establishing safety criteria for artificial neural networks

Cited by: 0
Authors
Kurd, Z [1 ]
Kelly, T [1 ]
Affiliations
[1] Univ York, Dept Comp Sci, York YO10 5DD, N Yorkshire, England
Source
KNOWLEDGE-BASED INTELLIGENT INFORMATION AND ENGINEERING SYSTEMS, PT 1, PROCEEDINGS | 2003, Vol. 2773
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Artificial neural networks are employed in many areas of industry, such as medicine and defence. Many techniques aim to improve the performance of neural networks for safety-critical systems. However, there is a complete absence of analytical certification methods for neural network paradigms. Consequently, their role in safety-critical applications, if any, is typically restricted to advisory systems. It is therefore desirable to enable the use of neural networks in highly dependable roles. This paper defines safety criteria which, if enforced, would contribute to justifying the safety of neural networks. The criteria are a set of safety requirements on the behaviour of neural networks. The paper also highlights the challenge of maintaining performance, in terms of adaptability and generalisation, whilst providing acceptable safety arguments.
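The record stops at the abstract, and the paper itself frames its criteria as safety requirements rather than code. As a purely illustrative sketch, not taken from the paper, the following hypothetical Python runtime monitor shows one way a behavioural safety requirement of the kind the abstract describes might be enforced at deployment: outputs falling outside a pre-certified safe envelope are replaced by a known-safe fallback. All names, bounds, and the fallback value are assumptions.

import numpy as np

# Hypothetical certified output envelope and known-safe default action.
# These values are illustrative assumptions, not from the paper.
SAFE_LOW, SAFE_HIGH = 0.0, 1.0
FALLBACK = 0.5

def monitored_output(network, x):
    """Return the network's output only if it satisfies the (assumed)
    behavioural safety requirement; otherwise return a known-safe value."""
    y = network(x)
    if np.all((y >= SAFE_LOW) & (y <= SAFE_HIGH)):
        return y
    return np.full_like(y, FALLBACK)

def toy_net(x):
    # Stand-in for a trained model: any callable mapping inputs to outputs.
    return np.tanh(x)

if __name__ == "__main__":
    # The third input drives the output below SAFE_LOW, so the monitor
    # substitutes the fallback for the whole output vector.
    print(monitored_output(toy_net, np.array([0.2, 3.0, -2.0])))

A wrapper of this shape keeps the safety argument about the monitor and the envelope rather than about the network's learned weights, which is one way the tension between adaptability and acceptable safety arguments noted in the abstract tends to be managed.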
Pages: 163-169
Page count: 7