Explainable AI, a game changer for AI in production - AI Night 2019 workshop

By Clodéric Mars
May 15, 2019

The explainability of AIs has become a major concern for AI builders and users, especially in the enterprise world. As AIs have more and more impact on the daily operations of businesses, trust, acceptance, accountability and certifiability become requirements for any large-scale deployment.

A workshop was dedicated to this topic during the 2019 European AI Night on April 18th in Paris. Four major players of the French AI ecosystem were invited by France Digitale and Hub France IA to discuss why they bet on Explainable AI (XAI): Bleckwen, D-Edge, craft ai and Thales.

Three concrete explainable AIs, deployed in production today, were presented, showing how explainability techniques can be leveraged to create better, more usable enterprise tools.

“Explanations are mandatory when AI empowers humans to perform complex tasks” - Antoine Buhl, CTO @ D-Edge

D-Edge provides SaaS solutions to hotels and hotel chains; 11,000 hotels in Europe and Asia use D-Edge solutions to optimize their distribution. D-Edge uses AI alongside statistical algorithms to optimize room selling prices and to predict booking cancellations.

Deciding on a room price is highly complex and involves analyzing many factors (rooms already sold, competitors’ prices, local events, etc.), including external events that cannot be anticipated. Antoine gave an example: the recent “Gilets Jaunes” crisis in France at the end of 2018 generated an unexpectedly high cancellation rate for hotel bookings. What looks like an “AI bug” can be easily understood if the AI, by explaining itself, lets the user know it is unaware of such an event. Moreover, D-Edge faces another challenge: determining, even after the fact, whether a price was optimal is virtually impossible, because the environment is continuously changing.

D-Edge provides the tool, but in the end, making the optimal price decision is the role of Revenue Managers. To make the right decisions in this complex and moving environment, Revenue Managers need explanations of the recommendations. Adoption is key in this collaboration between human experts and algorithms. D-Edge tracks how often Revenue Managers use the suggested pricing, continuously measuring this adoption as well as the quality of both the recommendations and the explanations. More and more, they see Revenue Managers letting the AI change pricing autonomously, based on the explanations and other contextual parameters.

When AI empowers humans to perform complex tasks, explainability is mandatory.

“Without explainability, predictions have no value” - Caroline Chopinaud, CCO @ craft ai

craft ai provides Explainable AI as a service to enable product and operational teams to deploy and run XAI efficiently. craft ai deals with data streams to automate business processes, enable predictive maintenance or boost user engagement. Caroline specifically presented how one client, Dalkia, leverages craft ai to improve the productivity of their energy managers by providing diagnosis recommendations. In this context, explainability is a requirement: without it, the human experts would need to re-investigate to understand each diagnosis, hence nullifying the productivity benefit. That’s just one example of why explainability is key for AI deployment, and it’s why craft ai develops its own whitebox Machine Learning algorithms!

“Explainability is about communication, it’s important to know the end users and adapt presentation to their expectations” - Yannick Martel, CPO @ Bleckwen

Bleckwen is a young fintech specialized in applying explainable Artificial Intelligence to fight financial crime. So far, the adoption of Artificial Intelligence in the financial sector has been rather slow, and they believe, Yannick stated, that explainability is a key factor in their success, because analysts, clients and regulators need to understand, and be able to act on, the decisions provided by an algorithmic solution.

An important area of development is making sure Bleckwen provides the best explanations to users, selecting, among all the mathematically valid ones, those matching their requirements and expectations. Another challenge, Yannick explained, is to present the generated explanations in a way that makes them easy to understand, in order to build trust in Bleckwen’s algorithms.

“When it comes to creating AI for critical systems, trustability and certifiability are mandatory” - David Sadek, VP Research, Innovation & Technology @ Thales

David Sadek closed the first round of talks by introducing the challenges faced by Thales as they develop AI for critical systems: space, telecommunications, avionics, defence… One key aspect he talked about is building trust between machines and the humans that interact with them. In this context, it is important to think about how explanations are conveyed, for example through a conversational interface able to answer natural-language requests, using explanatory variables that matter to the operator. Another important use of explanations is the certification of autonomous vehicles. While current perception algorithms are black boxes, being able to understand perception decisions will be crucial to certify such systems: why an obstacle was detected, or why a detected shape was not considered an obstacle. To this end, hybrid systems combining performant but unexplainable deep learning techniques with symbolic AI reasoning are being explored at Thales.

Roundtable

The workshop concluded with a discussion between the attendees and the panelists on the challenges of explainable AIs. The exchanges focused mostly on the quality of the explanations: how accurately they reflect the AI’s decision process, and how understandable they are for the humans receiving them. The panelists insisted that those two aspects are tightly linked.

Yannick Martel explained that, due to the complexity of the frauds being detected, especially in terms of the number of meaningful features, Bleckwen chose a dual approach: predictions relying on non-explainable machine learning techniques, and locally generated explanations. This enabled them to build well-understood business features for the user-facing explanations. While building the AI, Bleckwen assessed that the predictions did not miss actual frauds and that the explanations were understood by, and matched the expectations of, the business experts; both the accuracy and the understandability of the explanations were thus validated qualitatively.
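To make this dual approach concrete, here is a minimal sketch of a locally generated explanation for a black-box fraud model, in the spirit of surrogate methods such as LIME. The feature names, data and models are invented for illustration; the post does not say which explanation technique Bleckwen actually uses.

```python
# A minimal LIME-style sketch: a black-box model predicts, and a local
# linear surrogate fitted around one transaction generates the explanation.
# Feature names, data and models are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
feature_names = ["amount", "tx_per_day", "account_age"]  # hypothetical features

# Toy data: "fraud" is more likely for large amounts on young accounts.
X = rng.normal(size=(1000, 3))
y = ((X[:, 0] > 0.5) & (X[:, 2] < 0.0)).astype(int)
black_box = GradientBoostingClassifier().fit(X, y)  # accurate but opaque

def explain_locally(x, n_samples=500, scale=0.3):
    """Fit a linear surrogate on perturbations around x; its coefficients
    rank the features driving the black box's score near x."""
    neighbors = x + rng.normal(scale=scale, size=(n_samples, x.size))
    scores = black_box.predict_proba(neighbors)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(neighbors, scores)
    return sorted(zip(feature_names, surrogate.coef_), key=lambda kv: -abs(kv[1]))

x = np.array([2.0, 0.1, -1.5])  # one suspicious transaction
print("fraud score:", black_box.predict_proba([x])[0, 1])
for name, weight in explain_locally(x):
    print(f"  {name}: {weight:+.3f}")
```

The key property is that the surrogate is only valid locally: each transaction gets its own explanation, which is what lets the user-facing features stay simple even when the underlying model is not.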

craft ai, Caroline Chopinaud described, uses an explainable-by-design approach where a single model both predicts and explains. This means there is no explanation-accuracy problem; however, to ensure understandability, the predictions themselves must rely on business-understood features and combinations of features, which limits the type of “reasoning” the AI can do. For example, comparing the sum of the temperature and the energy consumption to the month might be the best way to predict the yield of a boiler, but it is “wrong” from any heating expert’s point of view. That’s why craft ai’s investment in R&D on better explainable Machine Learning algorithms makes a difference. When it comes to assessing the understandability of the explanations, craft ai relies on end-user feedback; a more quantitative approach to this measurement is a current R&D challenge.
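For contrast with the dual approach, here is a minimal sketch of an explainable-by-design model, assuming a plain decision tree over business-understandable features, where the decision path that produced a prediction doubles as its explanation. The features and data are hypothetical; this is a generic illustration, not craft ai’s actual algorithm.

```python
# A minimal explainable-by-design sketch: a single decision tree over
# business-understandable features, whose decision path doubles as the
# explanation. Features and data are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
feature_names = ["outside_temp_C", "boiler_load_pct"]  # hypothetical features

# Toy data: "maintenance needed" when it is cold and the boiler runs hard.
X = rng.uniform([-10, 0], [30, 100], size=(500, 2))
y = ((X[:, 0] < 5) & (X[:, 1] > 60)).astype(int)
model = DecisionTreeClassifier(max_depth=3).fit(X, y)

def explain(x):
    """Return the decision path for x as human-readable conditions."""
    tree = model.tree_
    steps = []
    for node in model.decision_path([x]).indices:
        if tree.children_left[node] == tree.children_right[node]:
            continue  # leaf: no condition to report
        name, threshold = feature_names[tree.feature[node]], tree.threshold[node]
        op = "<=" if x[tree.feature[node]] <= threshold else ">"
        steps.append(f"{name} {op} {threshold:.1f}")
    return " AND ".join(steps)

x = np.array([2.0, 80.0])
print("prediction:", model.predict([x])[0], "because", explain(x))
```

Because the prediction and the explanation come from the same model, they cannot disagree; the trade-off is that every feature the model uses has to make sense to the domain expert.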

A similar explainable-by-design approach is used by D-Edge, Antoine Buhl explained, relying on a collection of AI techniques. Because validating a pricing recommendation is very difficult, D-Edge focuses its performance metrics on the trust end-users put in the recommendations and the explanations, tracked through how often they validate the recommendations as-is.
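As a rough sketch, that adoption metric could be computed from a log of recommendations; the log schema below is invented for illustration, not D-Edge’s actual format.

```python
# A hypothetical recommendation log: each entry records the suggested price
# and the price the Revenue Manager finally applied (invented schema).
log = [
    {"suggested": 120.0, "applied": 120.0},
    {"suggested": 95.0, "applied": 99.0},
    {"suggested": 150.0, "applied": 150.0},
]

# Adoption rate: share of recommendations validated as-is.
adopted = sum(1 for e in log if e["applied"] == e["suggested"])
print(f"adoption rate: {adopted / len(log):.0%}")  # -> 67%
```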

David Sadek concluded the discussion by introducing the question of ethics in AI. For him, AI should be assessed along three dimensions: accuracy, explainability and ethics. The AI community has long focused mostly on the first, but the other two are critical to putting AIs in production, especially in critical systems. Explainability is critical to controlling and auditing the ethics of an AI, helping to identify biases for example, but it is not enough to ensure ethical behavior is enforced.

Takeaways

Explainable AI might be a relatively new concern in the spotlight, but for some actors in the field it has been key for some time now. It’s not by chance that those actors were able to deploy, in production, AIs impacting key aspects of businesses. These AIs don’t just happen to be both in production and explainable; they are in production because they are explainable.

If you want to learn more about craft ai’s Explainable AI, contact us!