Digital videotaping of operations at construction sites has proven to be a useful resource in the industry. These valuable visual resources, however, are not used to their full potential because they are archived manually and their contents are not efficiently annotated. A number of research efforts have investigated computer vision algorithms to detect and track construction resources in videos, mostly captured by stationary cameras; however, only limited research has addressed semantic annotation of the videos. This paper presents an automated system that analyzes the content of heavy-construction videos, including videos with moving viewpoints, and indexes them by midlevel (i.e., objects) and high-level (i.e., operations) semantic content. The system first segments videos at shot transitions and then analyzes the objects that appear in each scene. Afterward, it uses a probabilistic method to annotate the videos. The experimental results showed promising performance of the developed framework and highlighted directions for future research. (C) 2017 American Society of Civil Engineers.
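
The pipeline described above could be sketched as follows. This is an illustrative toy, not the authors' implementation: frame "histograms" and object detections are simulated placeholders, the shot-boundary rule is a simple histogram-difference threshold, and the annotation step is a naive-Bayes-style score over detected objects; all labels, priors, and likelihood values are invented for the example.

```python
import math

def shot_boundaries(histograms, threshold=0.5):
    """Return frame indices where the L1 distance between consecutive
    frame histograms exceeds the threshold (i.e., a shot transition)."""
    cuts = []
    for i in range(1, len(histograms)):
        dist = sum(abs(a - b) for a, b in zip(histograms[i - 1], histograms[i]))
        if dist > threshold:
            cuts.append(i)
    return cuts

def annotate_shot(objects, priors, likelihoods):
    """Pick the operation label maximizing
    log P(label) + sum over objects of log P(object | label)."""
    best_label, best_score = None, -math.inf
    for label, prior in priors.items():
        score = math.log(prior)
        for obj in objects:
            # Small floor probability for objects unseen under this label.
            score += math.log(likelihoods[label].get(obj, 1e-6))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Simulated per-frame color histograms: a clear change between frames 1 and 2.
frames = [(1.0, 0.0), (0.9, 0.1), (0.1, 0.9), (0.0, 1.0)]
print(shot_boundaries(frames))  # [2]

# Hypothetical operation labels and object likelihoods.
priors = {"excavation": 0.5, "hauling": 0.5}
likelihoods = {
    "excavation": {"excavator": 0.8, "truck": 0.3},
    "hauling": {"excavator": 0.2, "truck": 0.9},
}
print(annotate_shot(["excavator", "truck"], priors, likelihoods))  # excavation
```

A real system would replace the placeholder histograms with descriptors computed from video frames and the invented likelihood tables with probabilities learned from annotated footage, but the two-stage structure (segment, then classify each shot) matches the abstract's description.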