With the advent of hyperspectral imaging spectrometers comes the need for procedures that detect and interrogate spectral quantities of interest. Such procedures, or algorithms, play a key role in the dissemination and interpretation of hyperspectral data. Validating these algorithms involves well-characterized field collection campaigns that can be time- and cost-prohibitive. Synthetic imagery that is both radiometrically and geometrically correct offers algorithm developers a surrogate for potentially unattainable field campaigns. Ideally, this image simulation surrogate must match real-world scenes in both spatial and spectral complexity if algorithm performance results are to be trusted. To this end, there is a need to develop synthetic scenes, based on real-world data, that encompass full three-dimensional geometric complexity as well as wide-area, spectrally complex backgrounds. Prior work has addressed the inclusion of backgrounds in a synthetic environment; however, that work did not generate wide-area imagery with all the complexities realized in real-world data. This paper investigates the generation of a wide-area synthetic scene rendered by the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. The large-area scene, or "MegaScene," described in this paper covers 0.6 square miles and contains an order-of-magnitude increase in objects, materials, and spectra compared to previously rendered scenes. Hyperspectral analysis using off-the-shelf classification and target detection algorithms was performed on the data to illustrate its quantitative and qualitative fidelity.