In our previous discussion, we dissected the term 'Operator Explainability' to expose its limitations and the need for a more holistic perspective on 'Human Factors in AI'. Today, we delve one step further, examining the approach that forms the cornerstone of our work at DMI: the MIT approach to human factors in AI.
The Massachusetts Institute of Technology (MIT) framework for Explainable AI (XAI), called the Situation Awareness Framework for Explainable AI (SAFE-AI), presents a comprehensive methodology for understanding and implementing human factors in AI. It aims to improve task performance and user experience by addressing human information needs and contextual elements.
The framework recommends using established metrics to evaluate workload, trust, and situational awareness in AI. It emphasizes defining human information needs so that XAI systems can be designed to provide relevant and understandable explanations, and it leverages existing engineering techniques to guide designers in improving interpretability. Its ultimate goal is to strengthen user comprehension, keep workload in check, and foster situational awareness and trust in AI systems. Additionally, the framework underscores the significance of context modeling, considering domain-specific knowledge and real-time situational factors to offer meaningful explanations.
The MIT framework revolves around three critical human factors: situational awareness, workload, and trust. Let's unpack each of these.
Situational Awareness in AI Behavior
The framework translates an existing definition of situation awareness into the design of an AI model. Situational awareness (SA) in AI behavior refers to a user's understanding of an AI system's decisions and actions. The MIT framework proposes three levels of explanation to foster SA:
1. XAI for Perception: What is the AI system's decision, and does the user have sufficient information about this decision? This level deals with the direct output of the model, facilitating the user's basic understanding of the AI's actions.
2. XAI for Comprehension: Why and/or how did the AI system make the decision? This level reveals the system's reasoning, enabling users to grasp the motives behind the AI's actions.
3. XAI for Projection: What are the possibilities for varied inputs and desired outputs, and can users predict the system's behavior in different scenarios? This level assists users in forecasting the AI system's responses under varying conditions.
SA requirements are determined by defining the goals and subgoals of every user in the team. To satisfy the framework, the AI system must provide explanations at all three levels for each defined subtask, ensuring that each explanation is tailored to the specific context of the task, as the sketch below illustrates.
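To make this requirement concrete, here is a minimal sketch in Python of how an explanation bundle could be structured so that every subtask carries all three SA levels. The class, its field names, and the route-replanning example are our own illustrative assumptions, not something prescribed by the SAFE-AI framework.

```python
from dataclasses import dataclass

@dataclass
class SAExplanation:
    """Illustrative explanation bundle for one subtask, covering all three SA levels."""
    subtask: str        # the user subgoal this explanation supports
    perception: str     # Level 1: what the AI decided (direct output)
    comprehension: str  # Level 2: why/how the decision was made
    projection: str     # Level 3: how behavior would change under different inputs

def is_complete(explanation: SAExplanation) -> bool:
    """The framework is satisfied only if every level is populated for the subtask."""
    return all([explanation.perception,
                explanation.comprehension,
                explanation.projection])

# Hypothetical example for a single subtask
example = SAExplanation(
    subtask="route replanning",
    perception="Rerouted via waypoint B (new ETA 14:32).",
    comprehension="Weather cell along the original route exceeded the turbulence threshold.",
    projection="If the cell dissipates within 20 minutes, the original route is restored.",
)
assert is_complete(example)
```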
Workload Considerations
Workload, an integral aspect of the MIT framework, pertains to the amount of mental processing required by the user. This factor influences the frequency, modality, and amount of explanation needed from an AI system.
Based on the four-dimensional 'Multiple Resource Model' (MRM), we can evaluate information representation (using, for example, Ecological Interface Design), information encoding, the stage of information processing, and response modality in the XAI design. These considerations allow us to optimize the AI system's explanations so that they reduce unnecessary strain on the user, deliver the right amount of information along each dimension, and enhance situational awareness.
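To illustrate how these dimensions might be used in practice, the sketch below (in Python) scores how much an explanation competes for the same mental resources as the user's primary task. The dimension names and the simple overlap heuristic are our own simplification of the MRM, not part of the framework itself.

```python
from dataclasses import dataclass

@dataclass
class ResourceProfile:
    """Simplified Multiple Resource Model profile for a task or an explanation."""
    stage: str      # "perception", "cognition", or "response"
    modality: str   # "visual" or "auditory"
    code: str       # "spatial" or "verbal"
    response: str   # "manual" or "vocal"

def overlap_score(primary: ResourceProfile, explanation: ResourceProfile) -> int:
    """Count dimensions on which the explanation competes with the primary task.
    Higher scores suggest a higher risk of workload interference (naive heuristic)."""
    return sum(
        getattr(primary, dim) == getattr(explanation, dim)
        for dim in ("stage", "modality", "code", "response")
    )

# e.g. an operator on a visual/spatial/manual task receives a spoken explanation
primary_task = ResourceProfile("perception", "visual", "spatial", "manual")
spoken_explanation = ResourceProfile("cognition", "auditory", "verbal", "vocal")
print(overlap_score(primary_task, spoken_explanation))  # 0 -> low interference
```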
Calibrating Trust
Trust is an indispensable component of any human-AI interaction. The MIT approach advocates for clear communication about the AI system's capabilities, limitations, and confidence. Calibrated trust involves aligning the level of trust assigned to the system's outputs with their actual reliability or accuracy and is built on three foundations:
1. Purpose: Provides explanations of the AI's capabilities and limitations within a given context.
2. Process: Shares information about how the model makes decisions in general.
3. Performance: Communicates uncertainty and confidence levels for specific tasks and AI behavior.
Trust calibration also influences how interpretable the system is to the user and how much workload the user carries, making it a critical aspect of XAI design.
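As a small illustration of the 'performance' foundation, the sketch below (in Python) compares the confidence an AI system reports with the accuracy it actually achieves; a large gap between the two is a warning sign that trust based on those confidence statements would be poorly calibrated. The function and the numbers are illustrative assumptions, not a prescribed SAFE-AI metric.

```python
def calibration_gap(reported_confidences, outcomes):
    """Difference between the system's average stated confidence and its observed
    accuracy. A large positive gap means the system is overconfident, which risks
    over-trust; a negative gap risks under-trust. (Illustrative sketch only.)"""
    if len(reported_confidences) != len(outcomes) or not outcomes:
        raise ValueError("need one outcome per prediction")
    avg_confidence = sum(reported_confidences) / len(reported_confidences)
    accuracy = sum(outcomes) / len(outcomes)
    return avg_confidence - accuracy

# e.g. the system claimed ~90% confidence but was right only 75% of the time
confidences = [0.9, 0.95, 0.85, 0.9]
correct = [1, 1, 0, 1]  # 1 = prediction was correct
print(f"calibration gap: {calibration_gap(confidences, correct):+.2f}")  # +0.15
```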
Summary: it's a helpful tool, but let's fill in the gaps
While the MIT approach serves as a comprehensive guide to XAI, it's essential to acknowledge the existing gaps in this area. Some of the challenges include:
1. The absence of a method or system that adequately addresses all three levels of XAI.
2. The understudied area of context modeling, including which other human factors matter for each context and explanation.
3. Limited knowledge about domains where SAFE-AI is most effective and most limited.
4. Few comprehensive XAI approaches specifically tailored to human needs, such as determining which information is most relevant to users.
5. A scarcity of studies on how to provide user-tailored explanations and on how users process the system's explanations in different contexts.
6. The absence of optimal techniques for level 2 and 3 situational awareness.
7. Uncertainty about how guidance for workload and trust should vary with context.
At DMI, we believe these gaps signify not an insurmountable hurdle, but an invitation for further research and innovation in the field of AI engineering. They underline the importance of a user-centered approach, emphasizing the value of tailoring explanations to individual users and contexts. We recognize that a one-size-fits-all model cannot succeed in the multifaceted realm of AI.
The MIT approach to human factors in AI offers a nuanced, comprehensive framework for enhancing user experience and overall task performance. It appreciates the complexity of the AI-user interaction, recognizing the need for clarity in AI decisions (perception), comprehension of AI reasoning (comprehension), and prediction of AI behavior (projection).
Furthermore, it respects the mental processing capabilities of users, understanding that effective explanations must align with users' individual mental workload. The model also acknowledges the pivotal role of trust in any human-AI interaction, promoting transparency regarding the system's capabilities, limitations, and confidence.
Despite the existing gaps in this approach, it still offers valuable guidance to AI engineers in the aerospace industry, allowing for more effective, efficient, and satisfying human-AI teaming. It provides the foundation upon which we, at DMI, build our AI systems, and it is the lens through which we view our future innovations.
As we continue to evolve with the fast-paced world of AI, we remain committed to continuously exploring ways to refine systems’ performance and safety, while also improving the AI-user experience.
We believe in collaboration: if you have any thoughts or would like to exchange ideas, please reach out.