Transparent machine learning: How to create ‘clear-box’ AI

AI and robots can be trained to perform many tasks, but systems often operate in a black box, so we don’t know how decisions are made. Here’s how one company created a transparent alternative.

The next big thing in AI may not be getting a machine to perform a task—it might be requiring the machine to communicate why it took that action. For instance, if a robot decides to take a certain route across a warehouse, or a driverless car turns left instead of right, how do we know why it made that decision?

Explainable artificial intelligence: cracking open black-box AI

“AI is almost like a toddler. It is very clever and can do some amazing things, but it still needs a lot of hand-holding. Toddlers can do some pretty cool things; sometimes they can cause a fair bit of trouble. You don’t really know why a system made a decision. AI today cannot tell you that reason. It cannot tell you why. You need to hold the system accountable.”

Original: Computerworld

Understanding the limits of deep learning

Artificial intelligence has reached peak hype.
News outlets report that companies have replaced workers with IBM Watson and that algorithms are beating doctors at diagnosis. New AI startups pop up every day, claiming to solve all your personal and business problems with machine learning.

Original: VentureBeat

There’s a big problem with AI: even its creators can’t explain how it works

No one really knows how the most advanced algorithms do what they do. That could be a problem.

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Magic AI: these are the optical illusions that trick, fool, and flummox computers

There’s a scene in William Gibson’s 2010 novel Zero History, in which a character embarking on a high-stakes raid dons what the narrator refers to as the “ugliest T-shirt” in existence — a garment which renders him invisible to CCTV. In Neal Stephenson’s Snow Crash, a bitmap image is used to transmit a virus that scrambles the brains of hackers, leaping through computer-augmented optic nerves to rot the target’s mind. These stories, and many others, tap into a recurring sci-fi trope: that a simple image has the power to crash computers.
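The “ugliest T-shirt” has a real-world counterpart in adversarial examples: a perturbation far too small for a human to notice that nonetheless flips a classifier’s decision. A minimal sketch of the idea, assuming nothing but a toy random linear “model” (the attacks described in the article target deep networks, but the underlying arithmetic is the same fast-gradient-sign trick):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a trained image classifier: a linear score over 28x28 pixels.
# Both the weights and the input here are hypothetical, for illustration only.
w = rng.normal(size=784)       # "learned" weights
x = rng.normal(size=784)       # an arbitrary input "image"

score = float(w @ x)
label = score > 0

# Fast-gradient-sign-style attack: nudge every pixel a tiny amount in the
# direction that most reduces the score, just enough to cross the boundary.
eps = 1.1 * abs(score) / np.abs(w).sum()
x_adv = x - np.sign(score) * eps * np.sign(w)

adv_label = float(w @ x_adv) > 0
print(f"per-pixel change {eps:.4f}, label flipped: {label != adv_label}")
```

Because every pixel moves in the single most damaging direction at once, the per-pixel change stays tiny even though the decision flips, which is exactly why such perturbations are invisible to people but devastating to the model.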

Explainable AI Demonstration

We take an existing machine learning model, a support vector machine (SVM), train it on the MNIST data set of handwritten postal digits, and use OM technology to demonstrate (explain) what it is doing.

From this explanation it is clear, even to someone unfamiliar with machine learning, that recognition is not happening as intended: although the model performs well, it is focusing on pixels outside the region where digit information is typically found.
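The kind of explanation the demo describes can be sketched with a toy stand-in (the OM technology itself is not public, so this is a hedged substitute): train a simple linear classifier on synthetic 8x8 “images” whose label secretly leaks through a corner pixel, then read the per-pixel weight magnitudes as the explanation of where the model is actually looking.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the MNIST setup: 8x8 "images" where the class label is
# secretly carried by a corner pixel, not by the center of the image.
n = 500
X = rng.normal(size=(n, 64))
y = (X[:, 0] > 0).astype(float)     # label leaks through pixel (0, 0)
X[:, 27] += 0.1 * (2 * y - 1)       # the center pixel carries only a weak signal

# Plain logistic regression by gradient descent (a hedged substitute for
# the SVM in the demo; the saliency idea is the same either way).
w = np.zeros(64)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / n

# "Explain" the model: per-pixel weight magnitude shows where it looks.
saliency = np.abs(w).reshape(8, 8)
hot = tuple(int(i) for i in np.unravel_index(saliency.argmax(), saliency.shape))
print("model attends most to pixel", hot)
```

As in the demo, the explanation exposes a model that performs well for the wrong reason: the hottest pixel sits in the corner, outside where the “digit” information is supposed to be.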

Two Duck-Rabbit Paradigm-Shift Anomalies in Physics and One (maybe) in Machine Learning

You never know what a meeting for a quick coffee in Palo Alto can turn into.

What was supposed to be an ‘informal’ chat (if there is such a thing when talking with PhDs) about feedforward-feedback machine learning models turned into a philosophical discussion of duck-rabbit paradigm shifts. (Disclaimer 1: I’m just a nerd without credentials in either topic, though with a genuine interest.)