OM News

There’s a scene in William Gibson’s 2010 novel Zero History in which a character embarking on a high-stakes raid dons what the narrator calls the “ugliest T-shirt” in existence — a garment that renders him invisible to CCTV. In Neal Stephenson’s Snow Crash, a bitmap image transmits a virus that scrambles the brains of hackers, leaping through computer-augmented optic nerves to rot the target’s mind. These stories, and many others, tap into a recurring sci-fi trope: that a simple image has the power to crash computers.

Magic AI: these are the optical illusions that trick, fool, ...

Algorithms are black boxes that do not know what they do not know; as such, we need Explainable AI to reduce bias and discrimination in AI.


From this explanation it is clear, even to someone unfamiliar with Machine Learning, that recognition is not happening as intended. Although this network performs well, it is focusing on pixels outside the regions where digit information is typically found.

We take an existing Machine Learning model (a support-vector machine, SVM), train it on a data set of Post Office handwritten digit images (MNIST), and use OM technology to demonstrate (explain) what it is doing.
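The OM explanation method itself is not described here, but the setup it applies to can be sketched generically. The snippet below, a minimal stand-in rather than the OM technique, trains a linear SVM with scikit-learn on its bundled 8×8 digits data (a small proxy for full MNIST) and inspects the magnitude of the learned per-pixel weights — one crude way to see which pixels a model leans on, as the paragraph above discusses.

```python
# Sketch only: a generic linear SVM on handwritten digits, with a
# naive pixel-importance readout. This is NOT the OM explanation
# technology; scikit-learn's small 8x8 digits set stands in for MNIST.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LinearSVC(dual=False, max_iter=5000)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# Per-class pixel "importance": absolute value of the learned weights,
# reshaped back onto the 8x8 image grid. Large weights far from the
# image centre would hint that the model keys on pixels outside where
# digit strokes typically fall.
importance = np.abs(clf.coef_).reshape(10, 8, 8)
print("importance map for digit 0:\n", importance[0].round(1))
```

For a deep network rather than a linear model, one would need a technique such as saliency maps instead of raw weights, since there is no single weight per input pixel.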

Explainable AI Demonstration

Dmitry Malioutov can’t say much about what he built. As a research scientist at IBM, Malioutov spends part of his time building…

Original: Nautilus

You never know what a meeting for a quick coffee in Palo Alto can turn into.

What was supposed to be an ‘informal’ chat (if there is such a thing when talking with PhDs) about feedforward–feedback machine learning models turned into a philosophical discussion on duck–rabbit paradigm shifts. (Disclaimer 1: I’m just a nerd without credentials in either topic, though with a genuine interest.)