Periodic split method: learning more readable decision trees for human activities

12/07/2017

R&D


This paper presents specific features of the Craft AI Machine Learning engine that enable it to better take into account the typical rhythms of human activities. In particular, these features improve the quality and explainability of the resulting predictive models.

This work was presented at APIA 2017 in Caen (France) and published in its proceedings.

Abstract


Placing trust in algorithms is a major issue in today's society. This article introduces a novel split method for decision tree generation algorithms, aimed at improving the quality/readability ratio of the generated trees. We focus on learning human activities, which allows the definition of new temporal features. Building on these features, we present the periodic split method, which produces trees of similar or better quality with reduced depth.
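To make the idea concrete, here is a minimal, illustrative sketch of a periodic split search on a cyclic time-of-day feature. It is not the Craft AI engine's actual implementation: the function names (`gini`, `best_periodic_split`), the Gini criterion, the 24-hour period, and the brute-force candidate search are assumptions chosen for clarity. The point it illustrates is that a single cyclic interval split such as "8h ≤ t < 22h" can replace the two stacked threshold splits (t ≥ 8h, then t < 22h) that a classic decision tree would need, which is where the depth reduction comes from.

```python
import numpy as np

def gini(labels):
    """Gini impurity of a 1-D array of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_periodic_split(time_of_day, labels, period=24.0, step=1.0):
    """Find the cyclic interval [start, end) modulo `period` that best
    separates `labels`, measured by weighted Gini impurity of the two sides."""
    candidates = np.arange(0.0, period, step)
    n = len(labels)
    best_score, best_interval = np.inf, None
    for start in candidates:
        for end in candidates:
            if start == end:
                continue
            if start < end:
                inside = (time_of_day >= start) & (time_of_day < end)
            else:
                # The interval wraps around midnight, e.g. 22h -> 6h.
                inside = (time_of_day >= start) | (time_of_day < end)
            n_in = inside.sum()
            if n_in == 0 or n_in == n:
                continue  # degenerate split, one side is empty
            score = (n_in * gini(labels[inside])
                     + (n - n_in) * gini(labels[~inside])) / n
            if score < best_score:
                best_score, best_interval = score, (start, end)
    return best_interval, best_score

# Toy example: activity labels driven by a daily rhythm (awake 8h-22h).
rng = np.random.default_rng(0)
hours = rng.uniform(0, 24, size=500)
activity = np.where((hours >= 8) & (hours < 22), "awake", "asleep")
print(best_periodic_split(hours, activity))  # -> interval (8.0, 22.0), impurity 0.0
```

In a tree-growing algorithm, this interval split would be evaluated alongside ordinary threshold splits at each node, and chosen when it yields the better impurity reduction on a periodic feature.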
