Towards Metaphors for Cascading AI

In the future, more and more systems will be powered by AI. This may exacerbate existing blind spots in explainability research, such as the focus on the outputs of an individual AI pipeline rather than a holistic, integrative view of the system dynamics of data, algorithms, stakeholders, context, and their interactions. AI systems will increasingly rely on the patterns and models of other AI systems, which will likely bring a major shift in the desiderata of interpretability, explainability, and transparency. In this world of Cascading AI (CAI), AI systems use the outputs of other AI systems as their inputs. Typical formulations of desiderata for explaining AI decision-making, such as post-hoc interpretability or model-agnostic explanations, may simply not hold in such a world. In this paper, we propose two metaphors that may help designers frame their efforts when designing Cascading AI systems.

Authors:
Jonas Oppenlaender, Jesse Josua Benjamin

Publication type:

Place of publication:
Metaphors for Human-Robot Interactions: International Workshop held in conjunction with the 12th International Conference on Social Robotics (ICSR 2020), 16 November 2020, online

Keywords:
AI, artificial intelligence, CAI, cascading AI, explainability, explainable AI, human-AI interaction, interpretability, XAI


Full citation:
Oppenlaender, J., & Benjamin, J. J. (2020, November 11). Towards Metaphors for Cascading AI.
