What is Explainable AI?

By 2022, 75 percent of new end-user solutions leveraging AI and ML techniques are expected to be built with commercial rather than open-source platforms, and AI and ML will be applied in businesses of every kind, irrespective of what they sell. It is therefore important to know what an ML algorithm is doing behind the scenes, and it is the ML engineer's duty to explain the model to the people who use it. For example, if a model suggests the best route to a user, the user should be aware of the features and constraints the model considers when recommending that route, so that they have an overview of the whole process. This, in essence, is explainable AI.

Explainable AI (XAI) is artificial intelligence that is built to describe its rationale, purpose, and decision-making process in a way that a non-expert can easily understand. XAI is often discussed in relation to deep learning and plays an important role in the FAT ML framework (fairness, accountability, and transparency in machine learning).

An important goal of explainable AI is to provide algorithmic accountability. Until recently, artificial intelligence systems have essentially been black boxes: even when the inputs and outputs are known, the algorithms used to arrive at a decision are often proprietary or not easily understood, and even when the code is open source and freely available, the decision process itself can remain opaque. As artificial intelligence becomes increasingly prevalent, it is more important than ever to disclose how bias and the question of trust are being addressed.

So, with all this information, let us relate these concepts to the typical machine learning workflow. Every machine learning project begins with cleaning the data and segregating it into training and test sets, after which algorithms such as clustering or regression are applied to the training data. The resulting model predicts the output for a given input and is handed to clients to use in their business. This is where problems can arise: clients do not know when the model will succeed or fail, why it takes a particular feature into consideration, or how that feature relates to the output.
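As a rough sketch of that workflow (not code from the original article), the snippet below uses scikit-learn; the file name data.csv and the column name "target" are hypothetical placeholders.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# Data cleaning: load the raw records and drop incomplete rows
df = pd.read_csv("data.csv")          # hypothetical input file
df = df.dropna()

# Segregate the data into features, target, and train/test sets
X = df.drop(columns=["target"])       # "target" is a placeholder column name
y = df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Apply a regression algorithm to the training data and predict on unseen inputs
model = LinearRegression().fit(X_train, y_train)
predictions = model.predict(X_test)
print("Test R^2:", model.score(X_test, y_test))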

Here is where XAI comes into the picture: the client interacts with the engineers who developed the model and gets to know its details. The client then has a clear understanding of why a particular error occurs and why a given constraint is taken into consideration, so if anything goes wrong, they can trace the error and correct it. Combining domain knowledge with an understanding of the machine learning model lets users make far more informed business decisions.
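As a hedged illustration of what such an explanation can look like in practice, one common post-hoc technique (not mentioned by name in the article) is permutation importance, which ranks features by how much the model's test score drops when each feature is shuffled. The model, X_test, and y_test below are assumed to come from a workflow like the sketch above.

# Post-hoc explanation sketch: which features does the model actually rely on?
# `model`, `X_test`, and `y_test` are assumed from the earlier workflow sketch.
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=42)

# Rank features by the average drop in score when they are shuffled
for name, score in sorted(zip(X_test.columns, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")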

As advances in AI make these models so sophisticated that their logic becomes increasingly subtle and incomprehensible to humans, the need for auditable, accountable, and understandable AI becomes inevitable and might be hurried along by regulators’ (and consumers’) justified concerns. If your organization is using or looking to use AI—and by now, this should be a universal driver—you’re going to have to make sure you understand how your algorithms are working. If you don’t, you’ll leave your organization open to legal action, regulatory fines, loss of customer trust, security risks through lack of effective oversight, reputational damage, and, most fundamentally, losing full control of how your business operates.

The success of modern AI models rests on the machine's own internal representations, which are even harder to interpret than manually engineered features, and this is what makes such models so inexplicable. There is a lot of ongoing research into ante-hoc and post-hoc methods to make AI more answerable, and growing awareness of the need to build these methods into existing systems. Initiatives by leading organizations such as DARPA, Google, and DeepMind are driving the necessary change. Despite this, there will always be a tradeoff between explainability and accuracy, and where the balance lies depends on factors such as the end user, legal liability, the technical sophistication of the parties concerned, and the field of application. Artificial intelligence should not become a powerful deity that we follow blindly without understanding its reasoning, but neither should we forget the beneficial insights it can offer. Ideally, we should build flexible and interpretable models that work in collaboration with experts and their domain knowledge.
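As one hedged example of the ante-hoc side, an inherently interpretable model such as a shallow decision tree can be trained and its decision rules printed for a domain expert to review; the Iris dataset is used here purely as a stand-in for real business data.

# Ante-hoc sketch: a shallow decision tree whose rules are readable by humans.
# The Iris dataset is used only for illustration.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)  # small enough to follow by eye
tree.fit(iris.data, iris.target)

# Print the learned rules as human-readable if/else statements
print(export_text(tree, feature_names=list(iris.feature_names)))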

Related terms that are important to understand

In order to grasp these concepts better, we must understand the terms given here:

Machine Learning: The branch of artificial intelligence in which a system learns and improves from experience without being explicitly programmed to do so.

Data cleaning: The process of detecting incorrect, incomplete, inaccurate, or irrelevant records in a table, data set, or database, and then substituting, modifying, or deleting that dirty or coarse data.
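A minimal pandas sketch of such cleaning (the file and column names are hypothetical):

# Data-cleaning sketch; "customers.csv", "customer_id", and "age" are placeholder names.
import pandas as pd

df = pd.read_csv("customers.csv")                  # load the raw records
df = df.drop_duplicates()                          # delete duplicate records
df = df.dropna(subset=["customer_id"])             # drop rows missing a required key
df["age"] = df["age"].fillna(df["age"].median())   # substitute missing values
df = df[df["age"].between(0, 120)]                 # discard implausible entries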

Written by:

Himanshu Bahmani

Founder - NeenOpal Analytics
