Explainability (often referred to as interpretability) is the concept of providing context for an AI/ML model and its output, thereby helping a human user understand the system's decision-making process. Explainability is especially valuable given the high cognitive load and intensive data management required by current human-in-the-loop operations. The work presented here develops an explainability framework for autonomous systems that provides system transparency and enhances operator awareness. Specifically, this work produced a novel method of sorting and evaluating data streams taken from an operational system, in order to filter and transmit data packages based on mission conditions. Post-mission analysis revealed clear trends in the messaging hierarchy, indicating that certain health and status data streams were consistently prioritized regardless of the pre-defined metrics. Additional analysis evaluated sensor outputs with respect to health and status messaging; this included data correlation and data characterization to evaluate relationships between data streams, identify data associated with nominal behavior, and perform anomaly detection. Key functional categories were developed in which the system's behavior is mapped to a corresponding component (and its respective data stream). Monitoring subsystem performance assists with cross-referencing sensor outputs to confirm data projections and/or identify faulty readings. Finally, anomaly detection algorithms are coupled with data correlation and/or pattern recognition to extract the most salient information.
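
The coupling of correlation-based anomaly detection with mission-conditioned filtering of data streams can be sketched as follows. This is only an illustrative example, not the system described above: the stream names, mission-phase weights, thresholds, and synthetic telemetry are assumptions introduced for the sketch.

```python
# Minimal sketch (illustrative assumptions, not the authors' implementation):
# characterize nominal behavior via stream-to-stream correlation, flag an
# anomaly when that correlation breaks down, and select which streams to
# transmit based on mission-phase priority plus anomaly involvement.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "health and status" streams: motor current nominally tracks motor
# temperature; battery voltage is largely independent of both.
n = 500
motor_temp = 40 + np.cumsum(rng.normal(0, 0.5, n))
motor_current = 0.5 * motor_temp + rng.normal(0, 0.5, n)
battery_voltage = 12 + rng.normal(0, 0.05, n)
motor_current[400:] += rng.normal(5, 3, 100)  # injected fault: correlation breaks

streams = {"motor_temp": motor_temp,
           "motor_current": motor_current,
           "battery_voltage": battery_voltage}

def corr(a, b):
    """Pearson correlation of two equal-length windows."""
    return float(np.corrcoef(a, b)[0, 1])

# 1) Data characterization: baseline correlation over an early, nominal window.
baseline = corr(motor_temp[:200], motor_current[:200])

# 2) Anomaly detection: flag when the recent correlation departs from baseline.
recent = corr(motor_temp[-100:], motor_current[-100:])
anomaly = abs(recent - baseline) > 0.3  # threshold chosen for the example

# 3) Mission-condition filter: weight each stream by a hypothetical priority
#    for the current mission phase, boosted if it is implicated in an anomaly.
phase_priority = {"motor_temp": 0.6, "motor_current": 0.6, "battery_voltage": 0.3}
scores = {name: phase_priority[name] + (0.5 if anomaly and "motor" in name else 0.0)
          for name in streams}

# Transmit only the highest-scoring streams in the downlinked data package.
to_transmit = [name for name, s in sorted(scores.items(), key=lambda kv: -kv[1])
               if s >= 0.6]
print(f"baseline r={baseline:.2f}, recent r={recent:.2f}, anomaly={anomaly}")
print("streams selected for transmission:", to_transmit)
```

In this toy setup the injected fault weakens the temperature/current correlation, so both motor streams are promoted in the transmission queue while the quiescent battery stream is withheld, mirroring the idea of filtering data packages by mission conditions and anomaly relevance.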