Forgetting Methods for White Box Learning





This paper presents some of the foundations of Craft AI, and in particular how we introduced machine learning of user habits in an explainable context. It also introduces the initial version of our forgetting method, which is able to unlearn habits the user has abandoned.

This work was presented at PAAMS 2016 in Seville (Spain) and published in its proceedings. It was later presented at RFIA 2016 in Clermont-Ferrand (France).


Forgetting methods

In the Internet of Things (IoT) domain, being able to offer a contextualized and personalized user experience is a major challenge. The explosion of connected objects makes it possible to gather more and more information about users, and therefore to create new, more innovative services that are truly adapted to them. To attain these goals and meet user expectations, applications must learn from user behavior and continuously adapt this learning accordingly. To achieve this, we propose a solution that provides a simple way to inject this kind of behavior into IoT applications by pairing a learning algorithm (C4.5) with Behavior Trees. In this context, this paper presents new forgetting methods for the C4.5 algorithm in order to continuously adapt the learning.
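The paper's actual forgetting methods operate on C4.5 decision trees and are not detailed on this page. As a rough, hypothetical illustration of the underlying idea only (habits that stop being reinforced should gradually fade from the learned model), here is a minimal sketch using geometric decay of observation weights; the class name, decay rate, and threshold are all assumptions for the example, not the paper's algorithm.

```python
from collections import defaultdict


class DecayingHabitLearner:
    """Illustrative forgetting learner (NOT the paper's C4.5 method).

    Each (context, action) pair accumulates a weight. All weights
    decay geometrically on every new observation, so habits that
    are no longer reinforced fade and are eventually forgotten.
    """

    def __init__(self, decay=0.9, threshold=0.01):
        self.decay = decay          # per-observation decay factor
        self.threshold = threshold  # below this, a habit is dropped
        self.weights = defaultdict(float)  # (context, action) -> weight

    def observe(self, context, action):
        # Decay every stored habit, pruning those that fell below threshold.
        for key in list(self.weights):
            self.weights[key] *= self.decay
            if self.weights[key] < self.threshold:
                del self.weights[key]  # fully forgotten
        self.weights[(context, action)] += 1.0

    def predict(self, context):
        # Return the highest-weighted action for this context, if any.
        candidates = {a: w for (c, a), w in self.weights.items() if c == context}
        return max(candidates, key=candidates.get) if candidates else None


# Usage: a user habitually turns the light on in the evening, then the
# habit changes; the old habit decays away and the prediction adapts.
learner = DecayingHabitLearner()
for _ in range(5):
    learner.observe("evening", "light_on")
print(learner.predict("evening"))   # light_on
for _ in range(10):
    learner.observe("evening", "light_off")
print(learner.predict("evening"))   # light_off
```

The same decay-and-prune principle can be applied to the statistics maintained at each node of a decision tree, which is closer in spirit to adapting a C4.5 model continuously.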
