Many applications of artificial intelligence (AI) and assistive automation are in the Army's pipeline of developmental technologies and systems. Ensuring reliability in these systems will require rethinking current approaches, with a greater focus on systems' human and environmental interactions. These interactions range from traditional human factors issues, such as ease of use, to more complex situations involving AI decision making, feedback loops, and additional human cognitive loads. Other factors, such as unintended uses of AI, adversarial attacks, and the capacity of AI systems to evolve and adapt, must also be considered when examining reliability.

A review of failure modes for AI systems reveals common threads that deserve additional emphasis in reliability engineering activities:

- Data pipelines
- Human-system interactions
- New adversarial attack modes

Traditional reliability engineering tools, such as Failure Modes and Effects Analysis, Reliability Block Diagrams, Highly Accelerated Life Testing, and Physics of Failure Analysis, remain highly relevant and deserve to be reemphasized [1]. They also need to be expanded and supplemented with additional tools to best meet the reliability challenges posed by AI systems. One such tool is System-Theoretic Process Analysis (STPA), which can be introduced early in concept development and approaches hazard/risk analysis holistically, including by examining human-system interactions, training, and documentation [2].

One of the main benefits of AI systems is their ability to be flexible and adaptable to new conditions. However, changes in the environment, the data pipeline, and user behavior can lead to unreliability if the way the system adapts is not skillfully managed. Regular, proactive assessments of system health after deployment, similar to preventive maintenance checks and services (PMCS), that consider these failure sources can help ensure reliability throughout the system's useful life.
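To make the PMCS analogy concrete, the following Python sketch shows one form such a recurring health check could take: comparing recent production inputs against a reference sample logged at training time and flagging statistically significant distribution drift. The feature names, data, and the p-value threshold are illustrative assumptions, and the two-sample Kolmogorov-Smirnov test stands in for whatever drift metric a program actually adopts; this is a sketch of the concept, not a prescribed method.

```python
"""
Minimal sketch of a recurring post-deployment health check for an AI system.
Compares recent production inputs against a reference sample captured at
training time and flags distribution drift. All names, data, and thresholds
here are illustrative assumptions.
"""
import numpy as np
from scipy.stats import ks_2samp

# Assumed threshold: flag a feature when the two-sample Kolmogorov-Smirnov
# test rejects "same distribution" at p < 0.01.
P_VALUE_THRESHOLD = 0.01


def check_feature_drift(reference: dict[str, np.ndarray],
                        recent: dict[str, np.ndarray]) -> list[str]:
    """Return names of features whose recent values have drifted away
    from their reference (training-time) distribution."""
    drifted = []
    for name, ref_values in reference.items():
        stat, p_value = ks_2samp(ref_values, recent[name])
        if p_value < P_VALUE_THRESHOLD:
            drifted.append(name)
    return drifted


if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    # Reference sample logged at training time (synthetic for illustration).
    reference = {"sensor_temp": rng.normal(20.0, 2.0, 5_000),
                 "signal_strength": rng.normal(0.0, 1.0, 5_000)}
    # Recent production inputs: sensor_temp has shifted upward, as might
    # happen after a change in the operating environment or data pipeline.
    recent = {"sensor_temp": rng.normal(24.0, 2.0, 1_000),
              "signal_strength": rng.normal(0.0, 1.0, 1_000)}

    for feature in check_feature_drift(reference, recent):
        print(f"DRIFT WARNING: '{feature}' no longer matches its "
              f"training-time distribution; investigate the data pipeline.")
```

In practice, a check like this would run on a fixed schedule, and a full health assessment would extend beyond input drift to the other failure threads noted above, such as metrics on human-system interaction and probes for adversarial manipulation.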