Periodic split method: learning more readable decision trees for human activities

12/07/2017

R&D


This paper presents specific features of the Craft AI Machine Learning engine that enable it to better take into account the typical rhythms of human activities. In particular, it improves the quality and explainability of predictive models related to such activities.

This work was presented at APIA 2017 in Caen (France) and published in its proceedings.

Abstract

[Figure: periodic split]

Placing your trust in algorithms is a major issue in society today. This article introduces a novel split method for decision tree generation algorithms, aimed at improving the quality/readability ratio of the generated trees. We focus on learning human activities, which allows the definition of new temporal features. Building on these features, we present the periodic split method, which produces trees of similar or superior quality with reduced depth.
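The paper's exact split criterion is not reproduced on this page, but the intuition behind a periodic split can be sketched as follows. The snippet below is an illustrative assumption, not the engine's actual implementation: the feature name hour_of_day, the period value, and the interval parameters are made up for the example. It contrasts a standard threshold split with a test defined modulo a period, showing why a single periodic test can replace a disjunction of two standard tests and thus shorten the tree.

```python
# Illustrative sketch only: this is NOT the paper's algorithm, just the
# intuition behind a "periodic split" on a cyclic temporal feature
# (here, the hour of day). Feature names and values are assumptions.

import numpy as np

PERIOD = 24.0  # assumed cyclic period: hours in a day


def standard_split(x, threshold):
    """Classic axis-parallel split: x <= threshold vs x > threshold."""
    return x <= threshold


def periodic_split(x, start, length, period=PERIOD):
    """Periodic split: does x fall inside an interval of `length`
    starting at `start`, taken modulo the period?

    One periodic split can capture e.g. "night-time" (22h-6h), which a
    standard threshold split can only express as a disjunction of two
    tests, hence deeper and less readable trees."""
    return (x - start) % period < length


# Example: hours of day at which some activity was observed
hour_of_day = np.array([23.5, 2.0, 6.5, 13.0, 22.5])

# "Night" as one periodic split: an 8-hour interval starting at 22h
print(periodic_split(hour_of_day, start=22.0, length=8.0))
# -> [ True  True False False  True]

# The same concept with standard splits needs two tests combined
print((hour_of_day <= 6.0) | (hour_of_day > 22.0))
# -> [ True  True False False  True]
```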

