Artificial Intelligence and Machine Learning are trending terms nowadays, and they are closely connected: Artificial Intelligence is intelligence demonstrated by machines, and Machine Learning is one way to implement it. We can think of an AI task as an input-and-output problem. For example: tell me what the object in an image is. This is a typical image recognition/classification problem, where the goal is to classify the input image into one of several classes. Supervised Machine Learning methods can be used to solve such a problem: we humans guide machines to learn from training data. There are many different Machine Learning methods, but we are not going to go through them in detail. Today, we will focus on what AI is and the connection between AI and Explainable Artificial Intelligence.

Why is such a problem defined as an AI task, and why do we need Machine Learning to solve it? It is easy for a human to tell what the object is, but it is difficult for a machine. We call such a task a knowledge task: one whose answer is not well-defined, as opposed to a concrete task. A typical example of a concrete task is 1 + 1, which has the well-defined answer of 2, while a typical example of a knowledge task is translating a document written in Japanese into English, which can have multiple valid versions.

Now, let's take an example in depth. We are trying to classify an image of a panda. It is easy for us to recognise that the object in the image is a panda. What if we take the image as input and classify it using a Machine Learning model such as Inception-v3, a model for image recognition? The model is trained through a learning process, in which the machine learns features from a large number of labelled images. When a user assigns a task to a trained model, it generates a sequence of labels, each associated with a number, e.g. [Panda: 0.92, Cat: 0.05, Dog: 0.01, …], where 0.92 is the probability that the image belongs to that specific class.
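To make the label-probability output above concrete, here is a minimal sketch of how a classifier typically turns its raw scores (logits) into probabilities, using the standard softmax function. The logit values below are made up for illustration; a real model such as Inception-v3 would produce them from the image itself.

```python
import math

def softmax(logits):
    """Convert raw model scores (logits) into probabilities that sum to 1."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {label: math.exp(score - m) for label, score in logits.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}

# Hypothetical raw scores from the final layer of an image classifier
logits = {"Panda": 6.0, "Cat": 3.1, "Dog": 1.5}
probs = softmax(logits)
print({label: round(p, 2) for label, p in probs.items()})
# → {'Panda': 0.94, 'Cat': 0.05, 'Dog': 0.01}
```

Whatever the raw scores are, the softmax guarantees that the outputs are between 0 and 1 and sum to 1, which is why they can be read as class probabilities.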

Why on earth does our machine predict that a certain object is a panda with a probability of 0.92? There is a very complicated mechanism behind a Machine Learning model such as Inception-v3. What we need to know is that the input is broken down into parts, and each part goes through a sequence of complicated mathematical functions, by which the probability is calculated. But so far, the machine is only able to classify an object into a specific class with probabilities, without telling us the reason behind its decision.

Explainable Artificial Intelligence (XAI) has the ability to tell you the reason. Ordinary AI gives you a number, e.g. “Panda: 0.92”. XAI instead provides reasons, e.g. “Panda: [‘It has fur.’, ‘It is black and white.’, …]”, to support its decision, so the user understands why from the supporting explanations. This is an amazing concept, useful in many different kinds of knowledge tasks, and it improves machine explainability, AI usability and human understandability. We will discuss XAI further in later posts.
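The contrast between an ordinary prediction and an explainable one can be sketched as a tiny data structure. This is purely illustrative: the class name, fields, and reason strings below are made up, not the output of any real XAI library.

```python
from dataclasses import dataclass, field

@dataclass
class Prediction:
    """A toy container contrasting an ordinary prediction with an XAI one."""
    label: str
    probability: float
    reasons: list = field(default_factory=list)  # empty for ordinary AI

    def summary(self) -> str:
        if not self.reasons:
            return f"{self.label}: {self.probability:.2f}"
        return f"{self.label}: {self.probability:.2f} because " + "; ".join(self.reasons)

# Ordinary AI: just a class and a number
plain = Prediction("Panda", 0.92)
# XAI: the same number plus human-readable evidence (hypothetical reasons)
explained = Prediction("Panda", 0.92, ["it has fur", "it is black and white"])

print(plain.summary())      # → Panda: 0.92
print(explained.summary())  # → Panda: 0.92 because it has fur; it is black and white
```

The point is not the code itself but the shape of the output: XAI attaches supporting evidence to the same probability that ordinary AI returns on its own.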

The Featured Image is cited from: