What is Transparent / Explainable AI?
Transparent / Explainable AI means being able to assess your AI at every level, even the deep 'hidden' layers of a deep learning neural network. At each step you can see how your algorithm makes the decisions that it does. In other words, you can see what types of input each layer expects or uses.
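The idea of inspecting every layer can be sketched in a few lines. This is a minimal illustration with a made-up two-layer network (the weights and layer sizes are arbitrary placeholders, not from any real model): instead of returning only the final output, the forward pass also returns each intermediate layer state so it can be examined.

```python
import numpy as np

# Toy two-layer network; weights are illustrative, not from any real model.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # input (4 features) -> hidden (3 units)
W2 = rng.standard_normal((3, 2))   # hidden (3 units)   -> output (2 scores)

def relu(x):
    return np.maximum(0.0, x)

def forward_with_activations(x):
    """Return the output AND every intermediate layer state for inspection."""
    h = relu(x @ W1)               # hidden-layer activation, made visible
    out = h @ W2                   # final output scores
    return out, {"hidden": h, "output": out}

x = np.ones(4)
out, states = forward_with_activations(x)
print(states["hidden"])            # inspect what the hidden layer responded to
```

Exposing the intermediate states is the simplest form of "watching your AI think": each layer's activation for a given input is available for visualization, not buried inside the forward pass.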
You can watch your AI think.
Why do we need Transparent / Explainable AI?
While the performance of AI can be impressive, it can also be easily misled and make errors that humans simply would not make. One infamous case fooled a deep learning neural network into classifying an image of a bus as an ostrich by making small changes to the original image. These changes would never fool a human, and such cases (called adversarial examples) are not difficult to create.
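To make the idea concrete, here is a minimal sketch of how small an adversarial change can be. It uses a toy linear classifier with made-up weights and a made-up three-number "image" (nothing here comes from the bus/ostrich case itself); each input value is nudged by at most 0.3 in the direction that most hurts the classifier's score.

```python
import numpy as np

# Toy linear classifier: score = w . x; positive score = class A.
# Weights and input are invented for illustration only.
w = np.array([1.0, -2.0, 0.5])     # classifier weights
x = np.array([0.4, 0.1, 0.8])      # original input, classified as class A
eps = 0.3                          # maximum change allowed per component

score = w @ x                      # original (positive) score

# Nudge every component slightly AGAINST the score's gradient (sign of w):
x_adv = x - eps * np.sign(w)
adv_score = w @ x_adv              # the tiny change flips the score's sign

print(score, adv_score)
```

No component moved by more than 0.3, yet the classification flips. Real attacks on deep networks use the same principle with the gradient of the network's loss, which is why the perturbed images still look unchanged to a human.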
The basic problem is that these algorithms are highly dependent upon the data that they are given and may not give solutions that generalize well (to situations outside the original dataset). As we allow AI to make more and more critical decisions (e.g., self-driving cars, medical diagnosis, and military decisions), there is a growing need to understand how and why AI makes the decisions that they do, so that we can trust their decisions.
This issue has become increasingly recognized; for example, the Defense Advanced Research Projects Agency (DARPA) has started an initiative for explainable AI (XAI).
What can Optimizing Mind do for you?
Optimizing Mind provides you with the capability to create Transparent / Explainable AI for any AI algorithm that you currently have. We can take your existing code and transform it to generate equivalent code that allows you to visualize each step of your AI algorithm. Thus, your networks become transparent/explainable, and you can achieve better quality assurance and trustworthiness before you release your AI to clients/users/customers.
Why are AI algorithms ‘black boxes’?
Most current AI algorithms use feed-forward weights. Recognition takes the form recognition_state = input * weights. In this form it is not easy to understand the goals or compromises of the network by looking at the weights.
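The formula recognition_state = input * weights is just a matrix product. This small sketch (with invented numbers) shows why the weights alone are hard to read: the recognition state is a sum of many weighted contributions, and no single weight tells you what the layer is "looking for".

```python
import numpy as np

# recognition_state = input * weights, for one layer of made-up weights.
inputs = np.array([0.5, 1.0, -0.5])        # one input vector (3 features)
weights = np.array([[ 0.2,  0.8],          # 3 inputs -> 2 recognition units
                    [-0.4,  0.1],
                    [ 0.7, -0.3]])

recognition_state = inputs @ weights       # matrix product: mixes every
print(recognition_state)                   # input into every output unit
```

Each output unit sums contributions from every input, so inspecting an individual weight in isolation says little about the network's overall behavior; that entanglement is the 'black box'.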
All models that use feed-forward weights are a 'black box' to some degree. However, the more complex they are, the more difficult they are to understand. They vary from simple to complex, including: single nodes (regression models), single-layer networks, and multilayer networks (deep, convolutional, recurrent, reinforcement).
What are feed-forward weights?
Virtually all machine learning algorithms use feed-forward weights to perform recognition. Recognition takes the form output = input * weights. When information flows only in this one direction, from input to output, the computation is called feed-forward.
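The one-way flow can be shown by stacking two such layers; the weights below are arbitrary placeholders, and the point is only the direction of the computation: each stage depends solely on the stage before it, and nothing ever flows backwards.

```python
import numpy as np

# Feed-forward flow: information moves strictly input -> hidden -> output.
rng = np.random.default_rng(1)
W1 = rng.standard_normal((3, 4))   # input layer weights
W2 = rng.standard_normal((4, 2))   # output layer weights

x = np.array([1.0, 0.0, -1.0])     # input
h = np.tanh(x @ W1)                # hidden state: depends only on the input
y = h @ W2                         # output: depends only on the hidden state
# No connection runs from y back toward x; that is what "feed-forward" means.
print(y)
```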