Coverage analysis of programs under test provides a concrete measure of testing quality and reveals important aspects of program structure; as such, it can be a significant component in assessing overall software quality. This article describes the use of a dataflow coverage-testing tool for C programs, called ATAC (Automatic Test Analysis for C), in measuring, controlling, and understanding the testing process. It presents case studies of two real-world software projects that used ATAC. The first study, a joint project of the University of Iowa and Rockwell, involved fault-tolerant software for a critical automatic flight-control system. The 12 program versions developed by study participants achieved very high coverage scores, indicating high-quality software; field testing confirmed this result, uncovering only one fault. The second study, a retrospective evaluation of 33 Bellcore program modules, compared test coverage with the number of system faults. The Iowa/Rockwell study suggests guidelines of 90 percent block coverage, 80 percent decision coverage, 90 percent computation-use (c-use) coverage, and 70 percent predicate-use (p-use) coverage. The Bellcore data, however, show that practical testing often falls far short of these standards; 70 percent coverage is usually considered a safe, practical minimum for large projects. The basic rule of thumb is that all code should be covered or otherwise validated.
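To make the coverage measures concrete, the hand-annotated C fragment below is a minimal sketch, following standard dataflow-testing definitions of blocks, decisions, c-uses, and p-uses; it is an illustrative example only, not ATAC output and not drawn from either case study.

```c
#include <stdio.h>

/* Illustrative example only: the block, decision, c-use, and p-use
 * annotations follow standard dataflow-coverage definitions and are
 * not produced by ATAC. */
int clamp_sum(int a, int b, int limit)
{
    int sum = a + b;      /* block 1; c-uses of a and b in the definition of sum */

    if (sum > limit) {    /* decision; p-uses of sum and limit                   */
        sum = limit;      /* block 2; c-use of limit, redefinition of sum        */
    }

    return sum;           /* block 3; c-use of sum                               */
}

int main(void)
{
    /* A single test such as clamp_sum(2, 3, 10) executes blocks 1 and 3
     * and only the false branch of the decision; covering block 2 and the
     * true branch requires a second test, e.g. clamp_sum(8, 9, 10). */
    printf("%d\n", clamp_sum(2, 3, 10));
    printf("%d\n", clamp_sum(8, 9, 10));
    return 0;
}
```

With only the first test, block coverage is 2 of 3 blocks and decision coverage is 1 of 2 outcomes; adding the second test brings both to 100 percent, which is the kind of gap a coverage tool makes visible.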