In this demonstration, we will present a video showing depth predictions for street-level 360° panoramic footage generated by our unsupervised learning model. Panoramic depth estimation is important for a range of applications including virtual reality, 3D modeling, and autonomous robotic navigation. We have developed a convolutional neural network (CNN) model for unsupervised learning of depth and ego-motion from cylindrical panoramic video. In contrast to previous work, we focus on cylindrical panoramic projection. Unlike spherical or cube map projections, cylindrical projection is fully compatible with traditional CNN layers while still supporting a continuous 360° horizontal field of view. We find that this increased field of view improves ego-motion prediction accuracy for street-level video input. This abstract motivates our work in unsupervised structure-from-motion estimation, describes the video demonstration, outlines our implementation, and summarizes our study conclusions.
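To illustrate why cylindrical projection composes cleanly with standard CNN layers, the sketch below shows horizontal wrap-around padding applied before an ordinary 2D convolution, so the left and right image edges are treated as adjacent and the network sees a seamless 360° horizontal field of view. The helper name `cylinder_conv2d` and the PyTorch framing are illustrative assumptions on our part, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def cylinder_conv2d(x, weight, bias=None, pad=1):
    """Conv2d on a cylindrical panorama: circular in width, zero-padded in height.

    x:      input tensor of shape (N, C, H, W)
    weight: conv kernel of shape (C_out, C, k, k), with pad = (k - 1) // 2
    """
    # Wrap the last `pad` columns around so the 360° seam is continuous.
    x = torch.cat([x[..., -pad:], x, x[..., :pad]], dim=-1)
    # Ordinary zero padding along the vertical (height) dimension only.
    x = F.pad(x, (0, 0, pad, pad))
    # No further padding needed inside the convolution itself.
    return F.conv2d(x, weight, bias)

# Usage: output has the same spatial size as the input panorama.
x = torch.randn(1, 3, 64, 256)   # one cylindrical panorama, H=64, W=256
w = torch.randn(16, 3, 3, 3)     # 3x3 kernel, 16 output channels
y = cylinder_conv2d(x, w)        # shape: (1, 16, 64, 256)
```

Note that PyTorch's built-in `padding_mode='circular'` wraps both spatial dimensions, whereas a cylinder is periodic only horizontally; hence the manual mixed padding above.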