The internal reasoning of intelligent cyber-physical systems becomes more complex and opaque as their autonomy increases. This creates challenges for ensuring safety and correctness during development, diagnosis, and interaction. Collaboration between humans and machines is particularly prone to misunderstandings and false expectations, which lead to poor results and even dangerous accidents. Extra effort is therefore needed to regain transparency through explanations of internal behaviour. However, not all explanations are appropriate or useful: they must be tailored to their purpose, their recipient, and their situational context. Moreover, explanations must address the overall system behaviour, not only the internal AI algorithms. This talk gives an overview of the requirements for useful explanations, shows how tailored explanations can be generated autonomously, and presents challenging open research questions.