EVALUATING AND VALIDATING VERY LARGE KNOWLEDGE-BASED SYSTEMS

Cited by: 9
Authors
O'NEIL, M
GLOWINSKI, A
Affiliation
[1] Imperial Cancer Research Fund, London
Source
MEDICAL INFORMATICS | 1990, Vol. 15, No. 3
Keywords
Evaluation; Knowledge-based systems; Validation
DOI
10.3109/14639239009025271
CLC Number
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Most knowledge-based systems for use in medicine have been developed in response to specific problems such as the diagnosis of abdominal or chest pain in an accident and emergency department, or the diagnosis and treatment of meningitis. There is a role for a general decision support system capable of answering queries about any aspect of medicine, particularly in primary care. However, evaluating such a knowledge base requires more elaborate methodology than a simple iterative test-and-refine cycle. At the design stage, an adequate knowledge base structure is required to allow focused modification of the knowledge base when errors are discovered. During the prolonged evaluation cycle, the partially formed knowledge base must be tested with such techniques as validation checks for consistency and completeness and examination of characteristics of problem-solving procedures. Finally, a variety of criteria that represent the performance, robustness, flexibility, predictability, validity, coverage, relevance and congruity of the knowledge base are needed for a full description of the system's worth. Two case studies from the Oxford System of Medicine project are provided as examples of this philosophy: validating specific medical facts and comparing two methods for aggregating reasoning for and against a decision option. © 1990 Informa UK Ltd. All rights reserved; reproduction in whole or part not permitted.
Pages: 237-251
Page count: 15