Transparent machine learning: How to create ‘clear-box’ AI

AI and robots can be trained to perform many tasks, but systems often operate in a black box, so we don’t know how decisions are made. Here’s how one company created a transparent alternative.

The next big thing in AI may not be getting a machine to perform a task—it might be requiring the machine to communicate why it took that action. For instance, if a robot decides to take a certain route across a warehouse, or a driverless car turns left instead of right, how do we know why it made that decision?


Explainable AI Demonstration

From this explanation it is clear, even to someone unfamiliar with machine learning, that recognition is not happening as intended. Although the model performs well, it is focusing on pixels outside the regions where digit information is typically found.

We take an existing machine learning classifier (a support vector machine, or SVM), train it on the MNIST handwritten-digit image data set, and use OM technology to demonstrate (explain) what it is doing.
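OM technology itself is proprietary and not described in the article, but the general idea, training a classifier on digit images and then checking which pixels its decisions actually depend on, can be sketched with a generic stand-in. The snippet below (a minimal sketch, not the company's method) trains an SVM on scikit-learn's small 8x8 digits set as a stand-in for MNIST, then uses a simple occlusion test, zeroing one pixel at a time and measuring the accuracy drop, to build a per-pixel importance map:

```python
# Minimal stand-in for the demonstration described above.
# "OM technology" is not public; here we substitute a generic
# occlusion test to visualize which pixels an SVM relies on.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 8x8 digit images, a small stand-in for MNIST
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", gamma=0.001).fit(X_train, y_train)
baseline = clf.score(X_test, y_test)
print(f"test accuracy: {baseline:.3f}")

# Occlusion-style importance: zero out one pixel at a time across
# the test set and record how much accuracy drops. Pixels whose
# removal hurts most are the ones the model depends on.
importance = np.zeros(X_test.shape[1])
for px in range(X_test.shape[1]):
    X_occluded = X_test.copy()
    X_occluded[:, px] = 0.0
    importance[px] = baseline - clf.score(X_occluded, y_test)

# Print the importance map as an 8x8 grid. If high values cluster
# near the border, where digit strokes rarely appear, that would
# mirror the anomaly the article describes: good accuracy driven
# by the "wrong" pixels.
for row in importance.reshape(8, 8):
    print(" ".join(f"{v:5.3f}" for v in row))
```

Reading the printed grid next to an average digit image makes the article's point concrete: a classifier can score well overall while the map shows its evidence concentrated outside the strokes, which is exactly the kind of failure an explanation layer is meant to surface.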

Two Duck-Rabbit Paradigm-Shift Anomalies in Physics and One (maybe) in Machine Learning

You never know what a meeting for a quick coffee in Palo Alto can turn into.

What was supposed to be an 'informal' chat (if there is such a thing when talking with PhDs) about feedforward-feedback machine learning models turned into a philosophical discussion on duck-rabbit paradigm shifts (disclaimer 1: I'm just a nerd without credentials in either topic, though with a genuine interest).

Original: medium.com