BPM.com

Understanding Explainable AI

As artificial intelligence becomes an increasing part of our daily lives, from the image and facial recognition systems popping up in all manner of applications to machine learning-powered predictive analytics, conversational applications, autonomous machines and hyperpersonalized systems, the need to trust these AI-based systems with all manner of decision making and prediction is becoming paramount. AI is finding its way into a broad range of industries such as education, construction, healthcare, manufacturing, law enforcement and finance. The sorts of decisions and predictions being made by AI-enabled systems are becoming much more profound, and in many cases, critical to life, death and personal wellness. This is especially true for AI systems used in healthcare, driverless cars or even drones deployed during war.

However, most of us have little visibility into or knowledge of how AI systems make the decisions they do, and as a result, how those results are being applied in the various fields where AI and machine learning are used. Many of the algorithms used for machine learning cannot be examined after the fact to understand specifically how and why a decision was made. This is especially true of the most popular algorithms currently in use, specifically deep learning neural network approaches. As humans, we must be able to understand how decisions are being made in order to trust them, and this lack of explainability hampers our ability to fully trust AI systems. We want computer systems to work as expected and to produce transparent explanations and reasons for the decisions they make. This is known as Explainable AI (XAI).

Making the Black Box of AI Transparent with Explainable AI (XAI)

Explainable AI (XAI) is an emerging field in machine learning that aims to address how the black box decisions of AI systems are made. It inspects and tries to understand the steps and models involved in making those decisions. Owners, operators and users expect XAI to answer pressing questions such as: Why did the AI system make a specific prediction or decision? Why didn't the AI system do something else? When did the AI system succeed and when did it fail? When does the AI system give enough confidence in a decision that you can trust it, and how can the AI system correct errors that arise?
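To make the idea of inspecting a black box concrete, one widely used model-agnostic technique is permutation feature importance: query the model as a black box, shuffle one input feature at a time, and measure how much its accuracy drops. A large drop means the model leans heavily on that feature. The sketch below is a minimal illustration under assumed names; the `black_box_model`, its loan-scoring features and the synthetic dataset are hypothetical stand-ins, not anything described in this article.

```python
import random

# Toy "black box" model: we can query predictions but not inspect internals.
# It scores applicants from (income, debt, noise); noise is secretly ignored.
def black_box_model(features):
    income, debt, noise = features
    return 1 if income - 2 * debt > 0 else 0

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_features):
    """For each feature, shuffle its column and record the accuracy drop.
    A near-zero drop means the model does not rely on that feature."""
    base = accuracy(model, X, y)
    rng = random.Random(0)
    importances = []
    for f in range(n_features):
        col = [x[f] for x in X]
        rng.shuffle(col)
        X_perm = [list(x) for x in X]
        for row, v in zip(X_perm, col):
            row[f] = v
        importances.append(base - accuracy(model, X_perm, y))
    return importances

# Tiny synthetic dataset labeled by the same hidden rule the model uses
rng = random.Random(1)
X = [(rng.uniform(0, 100), rng.uniform(0, 50), rng.uniform(0, 1))
     for _ in range(200)]
y = [1 if inc - 2 * debt > 0 else 0 for inc, debt, _ in X]

print(permutation_importance(black_box_model, X, y, 3))
# income and debt show large accuracy drops; the noise feature shows none
```

Even without opening the model, this kind of probe begins to answer "why did the system decide this way?" by revealing which inputs actually drive its outputs.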

Read more at Cognitive World.

Author: Ronald Schmelzer
Website: http://www.cognilytica.com

Ronald Schmelzer, columnist, is senior analyst and founder of the artificial intelligence-focused analyst and advisory firm Cognilytica. He is also the host of the AI Today podcast, an SXSW Innovation Awards judge, founder and operator of the TechBreakfast demo-format events, and an expert in AI, machine learning, enterprise architecture, venture capital, and startup and entrepreneurial ecosystems. Prior to founding Cognilytica, Ron founded and ran ZapThink, an industry analyst firm focused on Service-Oriented Architecture (SOA), cloud computing, web services, XML and enterprise architecture, which was acquired by Dovel Technologies in August 2011.

Ron is the lead author of XML and Web Services Unleashed (SAMS 2002) and co-author, with Jason Bloomberg, of Service Orient or Be Doomed (Wiley 2006). Ron received a B.S. in Computer Science and Engineering from the Massachusetts Institute of Technology (MIT) and an MBA from Johns Hopkins University.