Concept bottleneck models (CBMs) are interpretable neural networks that first predict labels for human-interpretable concepts relevant to the prediction task, and then predict the final label based on those concept predictions. We extend CBMs to interactive settings in which the model can query a human collaborator for the labels of some concepts. We develop an interaction policy that, at prediction time, chooses which concepts to request labels for so as to maximally improve the final prediction.
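To make the two-stage structure concrete, here is a minimal sketch of a CBM with an interactive query step. The linear weights, the sigmoid parameterization, and the uncertainty-based acquisition rule are all illustrative assumptions for this sketch, not the paper's exact policy, which scores candidate queries by their expected improvement to the final prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class InteractiveCBM:
    """Toy concept bottleneck model: input -> concepts -> label."""

    def __init__(self, w_xc, w_cy):
        self.w_xc = w_xc  # maps input features to concept logits
        self.w_cy = w_cy  # maps concept values to the label logit

    def predict_concepts(self, x):
        # Stage 1: predict probabilities for each human-interpretable concept.
        return sigmoid(x @ self.w_xc)

    def predict_label(self, c):
        # Stage 2: predict the final label from the concept vector alone.
        return sigmoid(c @ self.w_cy)

    def query_policy(self, x):
        # Illustrative greedy heuristic: request the concept whose predicted
        # probability is most uncertain (closest to 0.5). The paper's policy
        # instead chooses the query that maximally improves the prediction.
        c = self.predict_concepts(x)
        return int(np.argmin(np.abs(c - 0.5)))

rng = np.random.default_rng(0)
model = InteractiveCBM(rng.normal(size=(4, 3)), rng.normal(size=3))
x = rng.normal(size=4)

c_hat = model.predict_concepts(x)
k = model.query_policy(x)   # index of the concept to ask the human about
c_hat[k] = 1.0              # replace the prediction with the human's label
y_hat = model.predict_label(c_hat)
print(k, float(y_hat))
```

The key property shown here is that a human-supplied concept label can be swapped directly into the bottleneck, so a single well-chosen query can change the downstream prediction.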