As we increasingly employ AI in critical systems, reliability becomes paramount. The decisions and actions of these systems often carry significant consequences, so they must be dependable in carrying out their tasks. This is why the conversation around explainability is crucial.
Explainability in AI is commonly presented as a necessity. It is crucial to understand, however, that explainability by itself is insufficient; it must be considered in a broader context. The term 'Explainable AI' is broad and covers a multitude of aspects. The European Union Aviation Safety Agency (EASA), for instance, distinguishes between engineering explainability and operator explainability.
Engineering explainability mainly serves developers and systems engineers working behind the scenes of AI. It focuses on understanding the technical workings and decision-making processes of AI systems, and this area has received considerable attention.
Operator explainability, by contrast, remains less explored. It addresses the needs of front-end users: the operators who interact with AI systems in their work environment. These users require a different kind of explanation, geared towards improving their understanding and supporting their decision-making.
In this article, we dive deeper into operator explainability, which we propose is best approached through the lens of 'Human Factors in AI'. This perspective acknowledges the human in the loop, emphasizing the importance of the interaction between humans and AI, and facilitating the development of AI systems that can effectively communicate their reasoning to the operators.
At DMI, we suggest that insights from systems engineering can be applied productively to AI systems. Systems engineering has long acknowledged the importance of human factors, including operators' needs and their interactions with the system. We argue that these aspects are equally relevant when discussing operator explainability in AI.
Our objective is to structure the discourse surrounding operator explainability and apply the human factors approach to it. In this way, we aim to contribute to the development of AI systems that are not only explainable but also user-friendly and tailored to the operator's needs and contexts.
When considering 'Human Factors' in explainable AI, several key topics emerge as crucial based on our current knowledge. The specific elements discussed in this article are:

1. Human-AI Teaming (HAT)
2. Human-Readable Explanations
3. Human-AI Collaboration vs. Cooperation
4. Operators' Expectations vs. the AI System's Actual Behavior
5. Information Content
6. Explanation Modality
7. Trustworthiness
8. Decision-Making in Critical Situations
In the following sections, we delve into each of these 'Human Factors' elements and explore their implications for fostering operator explainability in AI systems, illustrated throughout by the example of an AI-based routing system that helps pilots avoid weather hazards and make efficient use of favorable wind conditions.
Human-AI Teaming (HAT)
Human-AI Teaming (HAT) involves the successful interplay between AI capabilities and human skills. Consider our example AI system, which aids pilots by proposing optimized routes based on weather patterns and wind conditions. The pilots' expertise and the AI's predictive analytics form a cohesive team, working together towards safer and more efficient flights.
Human-Readable Explanations
When the AI system recommends a specific flight route, it needs to convey its reasoning in a manner that the pilot can understand. This may involve explaining how it factored in current weather data, wind forecasts, and safety regulations to arrive at its recommendation. The provision of these human-readable explanations is crucial for pilots to comprehend the logic behind the AI's decisions and trust its recommendations.
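As a minimal sketch of what such an explanation could look like in practice, the Python snippet below assembles a plain-language rationale from the factors behind a recommendation. The names (`RouteFactor`, `explain_route`) and the factor details are hypothetical illustrations, not taken from any particular system:

```python
from dataclasses import dataclass

@dataclass
class RouteFactor:
    """One input that influenced the route recommendation (hypothetical)."""
    name: str    # e.g. "convective weather", "tailwind"
    effect: str  # e.g. "avoided", "exploited"
    detail: str  # human-readable detail for the pilot

def explain_route(route_id: str, factors: list[RouteFactor]) -> str:
    """Render the factors behind a recommendation as plain language."""
    lines = [f"Route {route_id} was recommended because:"]
    for f in factors:
        lines.append(f"- {f.name} was {f.effect}: {f.detail}")
    return "\n".join(lines)

print(explain_route("R2", [
    RouteFactor("convective weather", "avoided",
                "a thunderstorm cell is forecast over waypoint ODIVA at 14:30Z"),
    RouteFactor("tailwind", "exploited",
                "a 45 kt tailwind at FL360 saves an estimated 11 minutes"),
]))
```

The key design point is that the explanation is built from the same factors the system actually used, rather than composed after the fact.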
Human-AI Collaboration vs. Cooperation
In a situation where the AI system is suggesting flight routes to pilots, both collaboration and cooperation are possible. Collaboration might involve the AI system actively working alongside the pilot, sharing in the decision-making process, and adjusting its recommendations based on the pilot's input. In contrast, cooperation might involve the AI system providing the pilot with data and recommendations, and the pilot making the final decision independently.
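The two interaction patterns can be sketched as differently shaped control loops. In the hypothetical Python sketch below, `planner` and `pilot` stand for any objects exposing the named methods; the point is the shape of each loop, not a specific API:

```python
def cooperate(planner, pilot):
    """Cooperation: the AI hands over a recommendation; the pilot decides alone."""
    proposal = planner.propose_route()
    return pilot.decide(proposal)  # one-shot handoff, no joint iteration

def collaborate(planner, pilot, max_rounds=3):
    """Collaboration: proposal and pilot feedback iterate toward a joint decision."""
    proposal = planner.propose_route()
    for _ in range(max_rounds):
        feedback = pilot.review(proposal)  # e.g. "avoid FL340 turbulence"
        if feedback.accepted:
            return proposal
        proposal = planner.revise(proposal, feedback.constraints)
    return pilot.decide(proposal)          # pilot always retains final authority
```

In both loops the pilot holds final authority; what differs is whether the AI's proposal is shaped by the pilot's input along the way.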
Operators' Expectations vs. AI System's Actual Behavior
Pilots may have expectations of the AI system, such as accurate weather predictions, efficient routing, and instant updates on changing conditions. If the system does not meet these expectations, this gap can lead to mistrust and ineffective use of the system. Bridging this gap involves refining the AI system to align more closely with operator expectations.
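One way to make this gap measurable, sketched below under the simplifying assumption that expectations can be proxied by forecast-versus-observed differences, is to log where the system's predictions diverged from what the operator actually encountered. All names, data, and thresholds are illustrative:

```python
def expectation_gap(forecast_winds: dict[str, float],
                    observed_winds: dict[str, float],
                    tolerance_kt: float = 10.0) -> list[str]:
    """Flag waypoints where the system's forecast missed what the pilot observed."""
    gaps = []
    for waypoint, forecast in forecast_winds.items():
        observed = observed_winds.get(waypoint)
        if observed is not None and abs(observed - forecast) > tolerance_kt:
            gaps.append(f"{waypoint}: forecast {forecast:.0f} kt, "
                        f"observed {observed:.0f} kt")
    return gaps

# Systematic gaps like these are candidates both for model refinement and
# for recalibrating what operators are told to expect from the system.
print(expectation_gap({"ODIVA": 45.0, "TUSKA": 20.0},
                      {"ODIVA": 28.0, "TUSKA": 22.0}))
```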
Information Content
The AI system provides pilots with critical information, such as weather updates, wind conditions, and optimal routes. The quality and relevance of this information are essential. For example, pilots need precise and up-to-date weather data to make informed decisions. Therefore, it's crucial for the AI to provide reliable, relevant, and easily interpretable information.
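Below is a minimal sketch of how such information might be packaged so that its provenance and freshness are explicit rather than implicit; `WeatherBriefing` and its fields are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class WeatherBriefing:
    """One item of information delivered to the pilot (hypothetical structure)."""
    source: str            # e.g. "SIGMET", "wind model"
    issued_at: datetime
    valid_for: timedelta
    summary: str           # concise, pilot-readable content

    def is_current(self, now: datetime) -> bool:
        """Stale information is worse than no information in the cockpit."""
        return now <= self.issued_at + self.valid_for

briefing = WeatherBriefing(
    source="SIGMET",
    issued_at=datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc),
    valid_for=timedelta(hours=4),
    summary="Embedded thunderstorms south of airway UL602, tops FL380",
)
print(briefing.is_current(datetime.now(timezone.utc)))
```

Making validity explicit in the data structure means the interface can show the pilot not just what the system knows, but how fresh and trustworthy that knowledge is.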
Explanation Modality
The way in which the AI system presents its explanations – visually on the cockpit display, audibly through the comms system, or through a text readout – greatly impacts how well pilots can comprehend and utilize the information. Determining the most effective explanation modalities for different scenarios and individuals is key to ensuring pilots can make the best use of the AI's insights.
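As an illustration only, the sketch below encodes a few plausible modality rules; in a real system these rules would have to come from human-factors evaluation with pilots, not from a developer's intuition:

```python
def choose_modality(flight_phase: str, alert_urgency: str) -> str:
    """Pick how an explanation is delivered, given phase of flight and urgency.

    The rules here are illustrative assumptions, not validated guidance.
    """
    if alert_urgency == "time-critical":
        return "audio"   # grabs attention regardless of where the pilot is looking
    if flight_phase in ("takeoff", "landing"):
        return "audio"   # eyes are outside or on primary instruments
    if alert_urgency == "advisory":
        return "text"    # can be read when workload permits
    return "visual"      # map overlay on the navigation display

print(choose_modality("cruise", "time-critical"))  # -> audio
```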
Trustworthiness
Pilots need to trust that the AI system will provide reliable and accurate data, particularly given the high-stakes nature of flight. If the AI system has a history of accurately predicting weather hazards and suggesting efficient routes, pilots will trust it more, leading to better Human-AI teaming.
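A simple, transparent way to ground that trust is a visible track record. The sketch below computes a hypothetical hit rate over past hazard calls; the metric and the data are illustrative only:

```python
def hazard_prediction_accuracy(predictions: list[bool],
                               outcomes: list[bool]) -> float:
    """Fraction of past hazard calls the system got right.

    predictions[i]: system predicted a hazard on segment i
    outcomes[i]:    a hazard actually materialized on segment i
    """
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(predictions)

# A running, visible track record gives pilots an evidence base for
# calibrating their trust, rather than relying on a gut feeling.
print(hazard_prediction_accuracy([True, True, False, True],
                                 [True, False, False, True]))  # -> 0.75
```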
Decision-Making in Critical Situations
AI systems are often used in high-pressure environments, and the AI-based routing system for pilots is no exception. Understanding how pilots utilize AI information in their decision-making, particularly in critical situations like navigating around severe weather, is crucial to improving the design and implementation of these AI systems.
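One design implication, sketched below with hypothetical priorities and messages, is that the system should triage what it surfaces when time is short, so that safety-critical information is never competing with routine advisories:

```python
def triage_messages(messages: list[dict], time_pressure: bool) -> list[dict]:
    """Under time pressure, surface only what is safety-critical.

    Each message is assumed to carry a 'priority' field:
    1 = safety-critical, 2 = operationally relevant, 3 = nice to know.
    """
    if time_pressure:
        messages = [m for m in messages if m["priority"] == 1]
    return sorted(messages, key=lambda m: m["priority"])

msgs = [
    {"priority": 2, "text": "Tailwind at FL360 improves ETA by 6 min"},
    {"priority": 1, "text": "Severe turbulence reported 40 NM ahead"},
]
print(triage_messages(msgs, time_pressure=True))
```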
The Eight Pillars of Operator Explainability: A Guiding Framework for XAI Discussions
Breaking down the concept of operator explainability into these eight topics helps us pinpoint the areas that need attention. Each topic presents a distinct facet of operator explainability that, when addressed effectively, contributes to the development of reliable, user-friendly, and effective AI systems.
Through this article, we hope to clarify our understanding of operator explainability and facilitate productive conversations and collaborations in this area. In the next piece, we will delve into the human factors approach we utilize in our work at DMI.
I invite you to share your perspectives, insights, and feedback. Together, we can pave the way for reliable, effective, and operator-friendly AI systems.