IBM – Prying Open the “Black Box” of AI with Trust and Transparency

By Charles King, Pund-IT, Inc.  September 26, 2018

There are numerous reasons for IT industry vendors’ interest in and focus on artificial intelligence (AI) solutions and services. Though it has long been a goal for scientists and engineers of every stripe, advancements in foundational technologies have finally made AI commercially viable. Equally important is how AI can complement and support efforts to solve increasingly complex, thorny problems, meaning it can be applied in numerous technical, industry, workplace and consumer scenarios.

In other words, effective AI-based solutions can be developed, and there’s plenty of work to be done. While that’s great news, critical AI-related trust and transparency issues have never been more important for vendors to address or their customers to understand. That’s especially true as vendors bring new AI solutions and services online.

Recently in New York City, IBM executives outlined the state of the company’s AI efforts, the critical roles trust and transparency play in that process, and the next steps needed to bring those projects in line with the company’s multi-cloud strategy and vision.

Following those events, IBM announced new Trust and Transparency capabilities on IBM Cloud that automatically detect bias and explain how AI reaches its decisions as those decisions are being made. The capabilities can be applied to models built with machine learning frameworks and AI build environments, including IBM Watson, TensorFlow, SparkML, AWS SageMaker and AzureML.
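To make that framework-agnostic, decision-time idea concrete, the sketch below shows one way a monitor might wrap any model’s prediction function so each decision is logged the moment it is made. It is a minimal, hypothetical illustration, not IBM’s implementation or API; the DecisionMonitor class and the toy loan-approval rule are invented for this example.

```python
# Hypothetical sketch (not IBM's API): wrap any model's predict function so
# each decision is recorded at the moment it is made, regardless of the
# framework used to train the model.
from datetime import datetime, timezone

class DecisionMonitor:
    def __init__(self, predict_fn, model_name):
        self.predict_fn = predict_fn   # any callable: features -> prediction
        self.model_name = model_name
        self.log = []                  # in-memory audit trail for this sketch

    def predict(self, features):
        prediction = self.predict_fn(features)
        # Record inputs and output so the decision can later be audited,
        # explained, or checked for bias.
        self.log.append({
            "model": self.model_name,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "features": dict(features),
            "prediction": prediction,
        })
        return prediction

# Usage: wrap a toy scoring rule; a TensorFlow or SparkML model's predict
# method would be wrapped the same way.
toy_model = lambda f: 1 if f["income"] > 50000 else 0
monitor = DecisionMonitor(toy_model, "loan-approval-demo")
print(monitor.predict({"income": 62000, "age": 41}))   # -> 1
print(len(monitor.log))                                # -> 1
```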

Let’s consider the current and future state of IBM AI, along with why the company is focusing so much attention on related trust and transparency issues and solutions.

The role of trust and transparency in AI

It’s no wonder that players from tiny start-ups to Silicon Valley’s biggest global vendors are in the AI hunt. But how, and how well, they are progressing is, to be kind, more than a little erratic.

Why is that the case? In part, it’s due to the industry’s increasing reliance on specialization, on companies that focus on solving incremental pieces of far bigger puzzles. That approach makes it hard to maintain a broad view or understanding of larger challenges. It can also be difficult for individuals to fully perceive critical points when they arise, or for groups to reach consensus on them, especially when the answers could affect how they conduct their work and business.

Since AI has reached just such a juncture, crossing over into common use, specialty vendors with a myopic market view are likely to hit some speed bumps along the way. In contrast, those whose AI efforts span a broad range of technologies and use cases are better positioned to maintain their balance while delivering measurable value to customers.

That’s a critical point, since it’s difficult to think of a time when perceptions of and trust in technology have been at a lower ebb. Consider elections influenced by groups successfully gaming massive social media platforms, continual thefts of data from banks, credit agencies and retailers, and many of those same “trusted” companies sitting on evidence of data breaches for months before finally informing their customers.

If vendors are to have any chance of success with AI, they must respect, implement and maintain trust and transparency throughout their services. In addition, they need to provide customers the necessary tools to effectively monitor the accuracy of and correct bias in AI-related processes and solutions.

AI can’t be a black box

Why are trust and transparency so important to AI? You might call it the inevitable intersection of public values and business value. Recently on CNN’s Amanpour and Co., biographer Walter Isaacson asked IBM’s CEO Ginni Rometty about controversies concerning data ownership and whether, through their inaction, companies in Silicon Valley were risking government intervention and the imposition of stringent regulations.

Rometty replied, “Look, we’re builders of this stuff. We believe in the purpose of it … to augment mankind. Ownership of data (is critical) to whether people have trust. ‘Do I have to give you my data? Do you own my data?’ Even more important is that AI is about training (computational) engines. ‘Okay, that engine got trained. Who does that engine belong to now? Did you take it to my competitor?’”

Rometty continued, “We say we can guarantee (that) the way we built it, the … data used to train this will not go to the next guy. We also said that for trust, AI can’t be a black box; it has to be explainable.”

Rometty noted that the issue crystalized in IBM’s early work on Watson AI-based healthcare solutions. When the company would make recommendations, the first questions doctors asked were, “’How did you figure that out? What data went into it? Why?’ So, we had to build (Watson) so it can answer those kinds of questions. Explainability is a really big deal. You’ve got to believe in and live those kinds of principles for people to trust you.”

Trust and transparency: The next steps to AI on IBM Cloud

How enterprises feel about the issues Rometty detailed was highlighted in the recently released IBM Institute for Business Value AI 2018 Report. While 82% of enterprises and 93% of high-performing enterprises are considering or moving ahead with AI adoption, 60% fear liability issues and 63% lack the skills to harness AI’s potential.

What IBM is doing to prevent AI solutions from turning into the “black boxes” that Rometty described was a focal point of the company’s New York events, with presentations focusing on how trust and transparency qualify as a key step on “the ladder to AI.” To that end, IBM is introducing Trust and Transparency services for AI on IBM Cloud, a new offering that supports key capabilities including the following (a conceptual sketch of the first two appears after the list):

  1. Automatically detects fairness issues at runtime and explains outcomes and recommendations in business terms
  2. Can be programmed to monitor unique decision factors for any workflow, enabling it to be customized for specific organizations
  3. Reduces bias by automatically recommending data to add to the model to mitigate detected bias
  4. Supports auditability and traceability of predictions made in production applications, so records of a model’s accuracy, performance and fairness, and the lineage of the AI systems, can be easily discovered and recalled for customer service, regulatory or compliance reasons
  5. Is accessed through visual dashboards, enabling users of all sorts to understand, explain and manage AI-led decisions
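As a rough illustration of the first two capabilities, the following sketch applies a common fairness screen, the “four-fifths” disparate impact ratio, to a log of decisions and reports the finding in plain terms. It is a conceptual example only; the function, the threshold choice and the toy data are assumptions for this sketch, not IBM’s method.

```python
# Hypothetical sketch (not IBM's implementation): check logged decisions for
# a fairness issue at runtime and explain the finding in business terms.

def disparate_impact(decisions, group_key, favorable=1):
    """Ratio of favorable-outcome rates between the best- and worst-treated groups."""
    rates = {}
    for value in set(d[group_key] for d in decisions):
        group = [d for d in decisions if d[group_key] == value]
        rates[value] = sum(1 for d in group if d["outcome"] == favorable) / len(group)
    lo, hi = min(rates.values()), max(rates.values())
    return (lo / hi if hi else 1.0), rates

# Toy decision log; in production this would come from the monitored model.
decisions = [
    {"gender": "F", "outcome": 1}, {"gender": "F", "outcome": 0},
    {"gender": "F", "outcome": 0}, {"gender": "F", "outcome": 0},
    {"gender": "M", "outcome": 1}, {"gender": "M", "outcome": 1},
    {"gender": "M", "outcome": 1}, {"gender": "M", "outcome": 0},
]

ratio, rates = disparate_impact(decisions, "gender")
if ratio < 0.8:  # the common "four-fifths" screening threshold
    print(f"Potential bias detected: favorable-outcome rates {rates} "
          f"(ratio {ratio:.2f} is below the 0.8 threshold).")
```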

Beth Smith, GM of Watson AI at IBM, noted, “It’s time to translate principles (of trust and transparency) into practice. We are giving new transparency and control to the businesses that use AI and face the most potential risk from any flawed decision making.” That’s a strategically sensible approach for IBM, but it’s also easy to see how the new services will benefit its customers and end users.

Why is IBM Cloud so important to this process? The company has designed its cloud platform and services to provide highly accessible and dependable support for enterprise-critical workloads. IBM has further evolved its portfolio with solutions that seamlessly access private, hybrid and public cloud (multi-cloud) assets.

The company’s new IBM Cloud Private for Data is a prime example of this. It provides a single console for managing enterprise data wherever it resides, from private clouds that enterprises manage on-premises to any public cloud. As Rob Thomas, GM of the IBM Analytics organization, noted in New York, Cloud Private for Data qualifies as “the gateway for all your data” and thus “is a prerequisite for AI.”

In other words, there is no better place for the company’s new Trust and Transparency capabilities to reside than IBM Cloud. As AI continues to evolve and come into greater commercial use, IBM Cloud will be key to ensuring that trust and transparency remain central to the company’s solutions and services, and that Watson AI never becomes a black box.

Final analysis

Though critically important, trust and transparency were just two of the AI issues IBM focused on during events in New York. Also discussed were the practical steps required to make data ready for AI, efficiently scaling AI workloads for enterprise customers, and how the company is leveraging IBM Cloud and solutions like SPSS to deliver business value to customers.

Several of those customers took the stage during IBM’s public event – “Changing the Game and Winning with AI” – to detail how and how well the company is delivering on its promises. Taken together, these discussions described the process and steps required for the company’s AI solutions to deliver dependable, replicable results.

How important are these issues for enterprise customers? The tremendous pressure that businesses face today speaks to the need for organizations to reinvent themselves. As IBM’s Thomas said, “AI can and will make all the difference. So, not having an AI strategy in 2018 is akin to not having an Internet strategy in 2000 or a mobile strategy in 2010.” But Thomas also noted that successfully executing an AI strategy “means doing the right things that may also be the hard things.”

Ensuring trust and transparency throughout AI processes is anything but simple. That’s where an innovative vendor like IBM, whose AI efforts span a broad range of technologies and use cases, can deliver measurable, dependable value. In fact, trust and transparency qualify as a sort of canopy across IBM’s AI continuum, ensuring that its solutions and services never become an impenetrable “black box” and assuring customers that their precious data assets are properly respected and secured.

Without trust and transparency, artificial intelligence is worth little or nothing. With them, individuals and organizations, including IBM, will find the ladder to successful artificial intelligence simpler and easier to climb.

© 2018 Pund-IT, Inc. All rights reserved.