What Is Explainable AI – Importance of Explainable AI and The Principles
Explainable AI (XAI) is a concept in artificial intelligence in which the results or outputs of a system can be understood by humans. It is based on the "white box" approach, where a human can understand why the machine has reached a specific conclusion. This is the opposite of the "black box" approach, where analysts can see the result but cannot understand the reasoning behind it. In other words, white-box models are ML models that produce results understandable to experts in the domain, while black-box models are extremely hard to explain and can hardly be understood even by domain experts. Explaining the reasoning builds trust between humans and intelligent machines. Trust in algorithms is especially important in high-stakes settings such as medicine; for example, patients and clinicians need to trust algorithmic prescriptions.
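As a minimal sketch of the white-box idea, consider a hand-coded linear risk score: because each feature has a named weight, the model can report not only its conclusion but also how much each input contributed to it. The feature names, weights, and patient values below are hypothetical and chosen purely for illustration.

```python
def white_box_predict(features, weights, bias=0.0):
    """Linear model: returns the score AND a per-feature breakdown,
    so a domain expert can see why the conclusion was reached."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical medical-style example: weights are illustrative only.
weights = {"age": 0.04, "blood_pressure": 0.02, "smoker": 1.5}
patient = {"age": 55, "blood_pressure": 140, "smoker": 1}

score, why = white_box_predict(patient, weights)
print(f"risk score = {score:.2f}")
# List the features that drove the score, largest contribution first.
for name, contrib in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {contrib:+.2f}")
```

A black-box model, by contrast, would return only the final score; the per-feature breakdown is exactly the kind of explanation XAI aims to provide for more complex models as well.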
In this article, you will find answers, in both video and text form, to key questions such as:
- What is explainable AI and why is it important?
- What is an explainable AI example?
- Is explainable AI possible?
- Why do we need explainable AI?
- Who would benefit from explainable AI principles?
- What is AI transparency?