Despite the growing popularity of virtual reality (VR) and 360-degree video, 360-degree content remains expensive and difficult to stream. Live 360-degree streaming is especially challenging, as it requires high bandwidth and low latency to avoid quality degradation and motion sickness. This paper explores how adaptive bitrate allocation, in which only the user's predicted viewport is streamed in high quality and the rest of the view is streamed in low quality, can improve viewport quality. Transformer-based saliency models pre-trained on 2D images are used for viewport prediction. Key contributions include 1) an assessment of whether transformer-based models trained on 2D images are effective for saliency detection on 360-degree content, 2) an analysis of the viewport prediction accuracy of saliency-only models, and 3) a novel bitrate allocation algorithm. Empirical results demonstrate that, even without access to head-movement data or fine-tuning, these models yield higher quality in the user's perceived viewport than traditional non-adaptive streaming.
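
To make the streaming idea concrete, the sketch below illustrates one possible form of saliency-driven bitrate allocation over a tiled equirectangular frame: tiles covering the predicted (most salient) region receive a larger share of a fixed bitrate budget. This is a minimal illustration under assumed parameters, not the paper's actual allocation algorithm; the function name, tile grid, budget, and weighting ratio are all hypothetical.

```python
import numpy as np

def allocate_tile_bitrates(saliency, grid=(4, 8),
                           total_kbps=20000, high_low_ratio=4.0):
    """Sketch: split a saliency map into tiles and give the most salient
    tiles (a proxy for the predicted viewport) a higher bitrate share.

    saliency   : 2D array produced by a pre-trained 2D saliency model
                 applied to the equirectangular frame (assumed interface).
    grid       : (rows, cols) tiling of the frame.
    total_kbps : overall bitrate budget distributed across all tiles.
    """
    rows, cols = grid
    h, w = saliency.shape
    # Average saliency per tile (crop any remainder so the grid divides evenly).
    tile_scores = saliency[: h - h % rows, : w - w % cols] \
        .reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))

    # Treat the top quartile of tiles as the predicted viewport region.
    k = max(1, tile_scores.size // 4)
    viewport = tile_scores >= np.sort(tile_scores, axis=None)[-k]

    # Weight viewport tiles more heavily, then scale so the budget is fully used.
    weights = np.where(viewport, high_low_ratio, 1.0)
    return total_kbps * weights / weights.sum()

# Example with a random saliency map standing in for real model output.
rng = np.random.default_rng(0)
print(allocate_tile_bitrates(rng.random((512, 1024))).round(1))
```

In this toy setup the per-tile bitrates always sum to the budget, so raising the quality of viewport tiles directly trades off against the quality of the peripheral tiles, which is the core trade-off the adaptive scheme exploits.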