A fully automated periodicity detection in time series

24/09/2019

R&D


We recently presented a scientific paper at the Advanced Analytics and Learning on Temporal Data workshop, part of the ECML-PKDD 2019 conference in Würzburg, Germany.

Abstract

"A fully automated periodicity detection in time series" by Tom Puech, Matthieu Boussard, Anthony D’Amato, and Gaëtan Millerand.

To generate good models, you need good features. For instance, knowing precisely the various periodicities of your temporal data is crucial. Does the time of day have a recurrent impact? Does the data exhibit another, less obvious periodicity? This is precisely what this paper addresses, by providing a new algorithm that automatically extracts those features from your time series.
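The full paper details the algorithm itself; as a rough illustration of the general idea behind automated periodicity detection (not the paper's exact method), here is a minimal Python sketch that extracts candidate periods from the periodogram and then validates them against the autocorrelation function. The function names, thresholds, and toy signal below are our own choices, not taken from the paper.

```python
import numpy as np
from scipy.signal import find_peaks, periodogram

def candidate_periods(x, fs=1.0, top_k=3):
    """Return candidate periods (in samples) from the strongest periodogram peaks."""
    freqs, power = periodogram(x, fs=fs)
    peaks, _ = find_peaks(power[1:])  # skip the zero-frequency (DC) bin
    peaks += 1
    strongest = peaks[np.argsort(power[peaks])[::-1][:top_k]]
    return np.rint(1.0 / freqs[strongest]).astype(int)

def validate_with_acf(x, period, min_corr=0.3):
    """Keep a candidate only if the autocorrelation has a local peak at that lag."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]  # normalize so acf[0] == 1
    p = int(period)
    if p < 2 or p + 1 >= len(acf):
        return False
    # A genuine periodicity shows up as a sufficiently high local ACF maximum.
    return acf[p] > min_corr and acf[p] >= acf[p - 1] and acf[p] >= acf[p + 1]

# Toy example: an hourly signal with a daily cycle (period = 24) plus noise.
rng = np.random.default_rng(0)
t = np.arange(24 * 60)
x = np.sin(2 * np.pi * t / 24) + 0.3 * rng.standard_normal(t.size)
periods = [p for p in candidate_periods(x) if validate_with_acf(x, p)]
print(periods)  # expected to contain 24
```

This two-stage pattern, spectral analysis to propose candidate periods followed by time-domain validation, is a common starting point in the periodicity-detection literature; the paper goes further by making the whole pipeline fully automatic.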

Download the file to read the full paper.
