Transparent machine learning: How to create ‘clear-box’ AI

AI and robots can be trained to perform many tasks, but these systems often operate as black boxes, so we don’t know how their decisions are made. Here’s how one company created a transparent alternative.

The next big thing in AI may not be getting a machine to perform a task; it may be getting the machine to explain why it performed the task the way it did. For instance, if a robot takes a certain route across a warehouse, or a driverless car turns left instead of right, how do we know why it made that decision?