Current interactive systems that rely on machine learning techniques offer little, if any, support for intelligibility. That is, users are unable to ask why a machine learning system made a particular prediction or classification, or why it suggested that a particular action be taken. In our work, we examine the need for intelligibility in machine learning (and other complex) systems, and mechanisms ...