Abstract
Intelligibility and interpretability of artificial intelligence (AI) are crucial for enabling explicability, which in turn is vital for establishing constructive communication and agreement among various stakeholders, including users and designers of AI. Overcoming the challenge of sharing an understanding of the diverse internal structures of AI systems is essential for effective communication and collaboration. In this paper, we propose four fundamental terms: “I/O,” “Constraints,” “Objectives,” and “Architecture.” These terms help mitigate the challenges of intelligibility and interpretability in AI by providing appropriate levels of abstraction for describing the structure of AI systems in general, thereby facilitating a shared understanding among stakeholders. Finally, the relationship between the Objective of AI designers and the Purpose of AI users is linked to the problem of AI alignment.