As the threat landscape for Operational Technology (OT) and Supervisory Control and Data Acquisition (SCADA) systems grows more complex, there is a pressing need for intrusion detection systems that can dynamically adapt to evolving attack patterns. Traditional Machine Learning (ML) approaches often require frequent manual retraining and struggle to respond efficiently to dynamic threats. Deep Reinforcement Learning (DRL) models offer a promising alternative: autonomous learning, adaptability to diverse scenarios with minimal human intervention, and improved intrusion detection for Industrial Control Systems (ICS). This paper investigates the application of several DRL models, including Deep Q-Network (DQN), Double Deep Q-Network (DDQN), Dueling Double Deep Q-Network (D3QN), REINFORCE, Advantage Actor-Critic (A2C), and Proximal Policy Optimization (PPO), to network intrusion detection in ICS, and compares their performance against traditional ML methods on standard detection metrics. Because no live environment is available, the models are trained and evaluated on labeled, pre-recorded intrusion datasets, using tailored adaptations for DRL training: generating data samples in mini-batches, applying small discount factors, and employing straightforward reward functions. Comprehensive results show that the DRL models strengthen detection of advanced cyberattacks in OT environments and outperform conventional ML approaches.
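The adaptations mentioned above can be illustrated with a minimal sketch of how a labeled, pre-recorded dataset might stand in for a live environment: records are served in mini-batches, the agent's action is a predicted label, and a simple correct/incorrect reward is assigned. The class, feature dimensions, reward values, and the synthetic data below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class DatasetEnv:
    """Hypothetical sketch: serves labeled flow records as observations;
    the agent's action is a predicted label (0 = benign, 1 = attack)."""

    def __init__(self, features, labels, batch_size=32, seed=0):
        self.features = features
        self.labels = labels
        self.batch_size = batch_size
        self.rng = np.random.default_rng(seed)

    def sample_batch(self):
        # Mini-batch sampling replaces live environment interaction.
        idx = self.rng.integers(0, len(self.labels), size=self.batch_size)
        return self.features[idx], self.labels[idx]

    @staticmethod
    def reward(actions, labels):
        # Straightforward reward: +1 for a correct prediction, -1 otherwise.
        return np.where(actions == labels, 1.0, -1.0)


# Toy usage on synthetic data (two Gaussian blobs standing in for
# benign vs. attack traffic); values here are illustrative only.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(3, 1, (100, 4))])
y = np.concatenate([np.zeros(100, dtype=int), np.ones(100, dtype=int)])

env = DatasetEnv(X, y, batch_size=16)
obs, lab = env.sample_batch()
acts = (obs.mean(axis=1) > 1.5).astype(int)  # trivial threshold "policy"
r = DatasetEnv.reward(acts, lab)

# Since samples drawn this way are effectively independent, a small
# discount factor keeps the learning target close to the immediate reward:
gamma = 0.1  # illustrative value
```

Because consecutive dataset samples carry little temporal dependence, a small discount factor weights the immediate classification reward far more heavily than any bootstrapped future value, which matches the abstract's description of the training adaptations.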