In extended video sequences, individual frames are grouped into shots, each defined as a sequence taken by a single camera, and related shots are grouped into scenes, each defined as a single dramatic event taken by a small number of related cameras. This hierarchical structure is deliberately constructed, dictated by the limitations and preferences of the human visual and memory systems. We present three novel high-level segmentation results derived from these considerations, some of which are analogous to those involved in the perception of the structure of music. First and primarily, we derive and demonstrate a method for measuring probable scene boundaries by calculating a short-term memory-based model of shot-to-shot "coherence". The detection of local minima in this continuous measure permits robust and flexible segmentation of the video into scenes, without first aggregating shots into clusters. Second, and independently of the first, we derive and demonstrate a one-pass, on-the-fly shot clustering algorithm. Third, we demonstrate partially successful results from applying these two new methods to the next higher, "theme", level of video structure.
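
To make the coherence-minimum idea concrete, the following is a minimal sketch, not the paper's exact formulation: it assumes one feature vector per shot, cosine similarity between shots, and an exponential decay standing in for short-term memory; the buffer size, decay rate, and function names are illustrative assumptions.

```python
import numpy as np

def scene_boundaries(shot_features, buffer_size=8, decay=0.8):
    """Sketch of memory-based coherence segmentation (assumed parameters).

    shot_features: (num_shots, dim) array, one feature vector per shot.
    Coherence at a candidate boundary t sums similarities between recent
    past shots and near-future shots, weighted by an exponential decay
    that mimics short-term memory. Scene boundaries are reported at
    local minima of this continuous measure, so shots never need to be
    aggregated into clusters first.
    """
    n = len(shot_features)
    coherence = np.zeros(n - 1)
    for t in range(1, n):
        total = 0.0
        # Compare each shot in the memory buffer with each upcoming shot.
        for i in range(max(0, t - buffer_size), t):
            for j in range(t, min(n, t + buffer_size)):
                a, b = shot_features[i], shot_features[j]
                sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
                # Older and farther shots contribute less (memory decay).
                total += (decay ** ((t - i) + (j - t))) * sim
        coherence[t - 1] = total
    # Declare a boundary wherever coherence dips to a local minimum;
    # the returned index is the first shot of the new scene.
    return [idx + 1 for idx in range(1, len(coherence) - 1)
            if coherence[idx] < coherence[idx - 1]
            and coherence[idx] < coherence[idx + 1]]
```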