The panoptic driving perception system is a pivotal component of autonomous driving, encompassing traffic object detection, drivable area segmentation, and lane line identification. In this work, we demonstrate the feasibility of performing these perception tasks concurrently across heterogeneous dataset domains. We observe that these tasks, originating from diverse dataset domains, inherently possess both characteristics shared across datasets and characteristics specific to each. Inspired by this insight, we design UF-Net, a unified network for multiple perception tasks with a novel two-stage feature refinement strategy engineered to exploit both task-universal and task-specific attributes. Specifically, in the first stage, taking images from various dataset domains as input, UF-Net learns task-universal features and outputs coarse predictions, which serve as a foundational understanding of the commonalities that exist across tasks. In addition, we propose a gradient homogenization surgery (GHS) to facilitate the optimization of task-shared parameters, thereby mitigating the conflicting gradients stemming from the different dataset domains. In the second stage, UF-Net applies an adaptive sharing scheme (ASS) that selectively expands task-specific parameters within the deep model, identifying and learning the optimal locations for this tailored expansion and thus fine-tuning the performance of each task. Benefiting from the proposed techniques, we obtain a unified yet efficient model architecture for multiple perception tasks in autonomous driving. Extensive experiments reveal that UF-Net surpasses current state-of-the-art methods on a variety of perception tasks with significantly reduced total storage requirements. We further demonstrate that GHS and ASS are generic modules that can be integrated into modern multi-task learning frameworks to enhance performance.
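The abstract does not detail how gradient homogenization surgery resolves conflicting gradients on task-shared parameters. As a minimal illustration of the general idea, the sketch below uses a PCGrad-style projection, where a task gradient that conflicts with another (negative dot product) is projected onto the other's normal plane before the gradients are combined. This is an assumption for illustration, not the paper's actual GHS procedure; `homogenize_gradients` is a hypothetical helper.

```python
import numpy as np

def homogenize_gradients(grads):
    """Combine per-task gradients on shared parameters, de-conflicting first.

    For each pair of gradients with a negative dot product (i.e. pointing
    in conflicting directions), project one onto the normal plane of the
    other. The surviving components are then summed, so the combined
    update does not increase any single task's loss along another task's
    gradient direction. (PCGrad-style projection, shown here only as a
    stand-in for the paper's GHS.)
    """
    adjusted = [np.asarray(g, dtype=float).copy() for g in grads]
    for i, gi in enumerate(adjusted):
        for j, gj in enumerate(grads):
            if i == j:
                continue
            dot = float(np.dot(gi, gj))
            if dot < 0:  # conflicting directions: remove the component along gj
                gi -= dot / (float(np.dot(gj, gj)) + 1e-12) * np.asarray(gj, dtype=float)
    return np.sum(adjusted, axis=0)

# Two conflicting task gradients on the same shared parameters
g_det = np.array([1.0, 0.0])    # e.g. detection-task gradient
g_seg = np.array([-1.0, 1.0])   # e.g. segmentation-task gradient
g = homogenize_gradients([g_det, g_seg])
```

After projection, the combined gradient has a non-negative dot product with each original task gradient, so a step along it does not directly oppose either task.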