Modern computer vision systems typically ingest full-resolution, uncompressed imagery from high-definition cameras and process it with deep convolutional neural networks (CNNs). These CNNs typically run on high Size, Weight, and Power (SWAP) GPU systems, and reducing the SWAP requirements of such systems is an active area of research: CNNs can be customized and compiled to lower-precision versions, "pruned" to lower complexity, or compiled to run on FPGAs to reduce power consumption. Advances in camera design have produced next-generation "event-based" imaging sensors. These sensors provide very high temporal resolution at individual pixels but only register changes in the scene. This enables new capabilities such as low-power bullet tracking and hostile-fire detection, and their low power consumption makes them well suited to edge systems. However, computer vision algorithms require massive amounts of data for training and development, and collecting such data is expensive and time-consuming; it is unlikely that future event-based computer vision development efforts will re-collect the volumes of data already captured and curated. It is therefore of interest to explore whether and how existing data can be modified to simulate event-based imagery for training and evaluation. In this work, we present results from training and testing CNN architectures on data from both simulated and real event-based imaging sensors. We report relative performance as a function of various simulated event-based sensing parameters and compare the approaches.
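
To make the frame-to-event simulation idea concrete, the sketch below shows one common way conventional frame data might be converted into simulated event imagery: thresholding log-intensity differences between consecutive frames. This is an illustrative assumption only; the `simulate_events` function, the threshold value, and the frame sizes are hypothetical and are not the specific parameters or method evaluated in this work.

```python
# Minimal sketch of frame-to-event simulation (illustrative assumption,
# not the method or parameters used in this work).
import numpy as np

def simulate_events(prev_frame, curr_frame, threshold=0.15):
    """Generate simulated ON/OFF events from two consecutive grayscale frames.

    An event fires at a pixel when the log-intensity change since the
    previous frame exceeds the contrast threshold, mimicking the
    change-detection behavior of an event-based sensor.
    """
    eps = 1e-6  # avoid log(0) on dark pixels
    log_prev = np.log(prev_frame.astype(np.float32) + eps)
    log_curr = np.log(curr_frame.astype(np.float32) + eps)
    diff = log_curr - log_prev

    events = np.zeros_like(diff, dtype=np.int8)
    events[diff > threshold] = 1    # ON events (brightness increase)
    events[diff < -threshold] = -1  # OFF events (brightness decrease)
    return events

# Example usage: convert a pair of frames into a sparse event map.
prev = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
curr = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
event_map = simulate_events(prev, curr, threshold=0.15)
```

In practice, the contrast threshold is one of the sensing parameters that could be swept when comparing CNNs trained on simulated versus real event-based data.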