Learning analytics research has highlighted that contexts matter for predictive models, but little research has explicated how contexts matter for models' utility. Such insights are critical for real-world applications, where predictive models are frequently deployed across instructional and institutional contexts. Building on administrative records and behavioral traces from 37,089 students across 1,493 courses, we provide a comprehensive evaluation of the performance and fairness shifts that predictive models undergo when transferred across different course contexts. We specifically quantify how differences in various contextual factors moderate model portability. Our findings indicate an average decline in model performance and inconsistent directions of fairness shifts, without a direct trade-off between the two, when models are transferred across courses within the same institution. Among the course-to-course contextual differences we examined, differences in administrative features account for the largest portion of both performance and fairness loss. Differences in student composition can simultaneously amplify drops in performance and fairness, while differences in learning design have a greater impact on performance degradation. Given these complexities, our results highlight the importance of considering multiple dimensions of course context and of evaluating fairness shifts, in addition to performance loss, when conducting transfer learning of predictive models in education.
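
To make the evaluation setup concrete, the following is a minimal sketch (not the paper's actual pipeline) of how one might quantify performance and fairness shifts when a predictor trained on a source course is applied to a target course. It assumes scikit-learn, uses synthetic data, and adopts AUC plus a simple demographic-parity gap as illustrative metrics; the function names (`transfer_shift`, `demographic_parity_gap`) and the choice of metrics are hypothetical and may differ from those used in the study.

```python
"""Illustrative sketch of source-to-target model transfer evaluation.
All data, features, and group labels are synthetic and hypothetical."""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score


def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)


def transfer_shift(X_src, y_src, g_src, X_tgt, y_tgt, g_tgt):
    """Fit on the source course; compare source vs. target performance and fairness."""
    model = LogisticRegression(max_iter=1000).fit(X_src, y_src)

    def evaluate(X, y, g):
        prob = model.predict_proba(X)[:, 1]
        return roc_auc_score(y, prob), demographic_parity_gap(prob >= 0.5, g)

    auc_src, gap_src = evaluate(X_src, y_src, g_src)
    auc_tgt, gap_tgt = evaluate(X_tgt, y_tgt, g_tgt)
    return {
        "performance_shift": auc_tgt - auc_src,  # negative = degradation on target
        "fairness_shift": gap_tgt - gap_src,     # positive = larger disparity on target
    }


# Hypothetical usage with two synthetic "courses"
rng = np.random.default_rng(0)
X_a, X_b = rng.normal(size=(500, 10)), rng.normal(0.3, 1.0, size=(400, 10))
y_a, y_b = rng.integers(0, 2, 500), rng.integers(0, 2, 400)
g_a, g_b = rng.integers(0, 2, 500), rng.integers(0, 2, 400)
print(transfer_shift(X_a, y_a, g_a, X_b, y_b, g_b))
```

In a study like the one summarized above, this source-target comparison would be repeated over many course pairs, with the resulting shifts then related to contextual differences (e.g., administrative features, student composition, learning design) between the paired courses.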