Craft AI Unveils New MLOps Platform

Discover the latest in MLOps with Craft AI. Elevate your data science workflows with enhanced observability, machine learning monitoring, and simplified usage.


Hugo Philipp
Product Owner at Craft AI



In the continually evolving landscape of data science, the role of MLOps (Machine Learning Operations) has become indispensable. MLOps serves as the bridge between the realm of machine learning theory and the practicality of real-world applications. Its primary objective is to ensure that the remarkable models crafted by data scientists are not confined to the laboratory but rather deployed to make a tangible impact. Today, we are excited to introduce the latest iteration of our Craft AI MLOps platform. This new version not only strengthens the connection between data science and production but also introduces an array of new features that will revolutionize the way you manage your machine learning workflows.

In this article, we will delve into the importance of MLOps in data science workflows and then explore the exciting world of our new product version. Specifically, we will examine:

  • Enhanced observability of artifacts
  • Monitoring capabilities for your machine learning models
  • The expansion of possibilities with GPU-enabled environments and the new trigger method

By the end, you will have a comprehensive understanding of how this new version can elevate the productivity, collaboration, and overall effectiveness of your data science team. Let's begin!

Enhanced Observability of Artifacts

Environment and Pipelines Interface

The Craft AI MLOps platform has always been at the forefront of empowering data scientists and machine learning engineers to effortlessly create environments for running Python executions. Python code is neatly organized within a pipeline structure, facilitating the seamless management of inputs, outputs, and code steps. This approach not only enhances project organization but also ensures repeatability and consistency in your workflows.

With the release of the new version, we have introduced two powerful visualization pages: the Environment Interface and the Pipelines page. Let's delve into the exciting capabilities of the Environment Interface:

Simplified Element Management and Enhanced Observability

The Environment Interface serves as your command center for all things related to your environment, offering a one-stop solution for gaining quick insights into your setup. At a single glance, you can access crucial information about your environment, including the environment's URL, public IP address, the number of artifacts created, and essential details about the state of your infrastructure. This empowers you to make informed decisions about your environment configurations and resource utilization.

Seamless Pipeline Management for Data Scientists

The Environment Interface is just one component of the equation. The Craft AI MLOps platform continues to streamline the process of creating, locating, and managing pipelines for data scientists. This means that you can focus on your data and models while our solution takes care of the pipeline management.

However, one of the most exciting features of this new version is its collaborative advantage. Colleagues can now seamlessly view and collaborate on the pipelines you've created. This level of teamwork ensures that your projects progress smoothly and efficiently, with all stakeholders having visibility into the process.

Input / Output Tracking

In the domain of data science, where complex machine learning projects often span weeks or months, the significance of robust Input/Output tracking cannot be overstated. Without it, locating and reproducing specific experiments or results can feel like an insurmountable challenge.

Every element associated with an experiment, including input data, output results, and pipeline executions, is meticulously and automatically logged and stored in a readily accessible location. This means no more endless folder searches or guesswork regarding dataset choices or hyperparameter configurations, significantly saving time and effort.

Furthermore, our Input/Output tracking feature enables the association of hyperparameters with specific executions. This results in a transparent record of which hyperparameters were utilized for each experiment, eliminating guesswork and manual documentation.
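The idea can be sketched with a minimal in-memory tracker. To be clear, the class and method names below are illustrative stand-ins, not the platform's actual API; on the Craft AI platform this logging happens automatically.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ExecutionRecord:
    """Illustrative record of one pipeline execution."""
    pipeline: str
    inputs: dict
    hyperparameters: dict
    outputs: dict = field(default_factory=dict)
    started_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class ExecutionTracker:
    """Minimal sketch of automatic input/output tracking."""
    def __init__(self):
        self._records = []

    def log(self, pipeline, inputs, hyperparameters, outputs):
        # Store everything associated with one execution in one place.
        record = ExecutionRecord(pipeline, inputs, hyperparameters, outputs)
        self._records.append(record)
        return record

    def find(self, pipeline):
        # Retrieve every execution of a given pipeline, oldest first.
        return [r for r in self._records if r.pipeline == pipeline]

tracker = ExecutionTracker()
tracker.log(
    pipeline="churn-training",
    inputs={"dataset": "customers_2023.csv"},
    hyperparameters={"learning_rate": 0.01, "n_estimators": 200},
    outputs={"accuracy": 0.91},
)
print(tracker.find("churn-training")[0].hyperparameters)
```

With every execution recorded this way, "which hyperparameters produced that result?" becomes a lookup rather than an archaeology exercise.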

This meticulous tracking supports reproducibility and accountability, creating a clear, auditable history of your experiments. It facilitates the recreation of precise conditions for previous results, fostering transparency. In collaborative data science environments, associating hyperparameters with executions promotes teamwork and knowledge sharing, allowing team members to build upon each other's successes effectively.

Monitoring Capabilities for Your Machine Learning Models

Machine Learning Metrics

In the latest iteration of our MLOps platform, one of the standout features is the inclusion of ML metrics directly into your machine learning pipelines. This enhancement plays a crucial role in monitoring model performance throughout the training process. For instance, in deep learning scenarios, metrics like epoch progress, loss curves, and validation accuracy can be easily tracked. This real-time feedback empowers data scientists to make informed decisions about their models, ultimately leading to better outcomes. With our SDK, recording a metric from a step's code takes just two lines.
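As an illustration of that pattern, here is a minimal Python sketch using an in-memory store in place of the platform. The `record_metric_value` helper is a hypothetical stand-in; refer to the SDK documentation for the actual call.

```python
# Stand-in metrics store; on the platform, a single SDK call
# inside the step plays this role.
METRICS = {}

def record_metric_value(name, value):
    """Append one value to a named metric series."""
    METRICS.setdefault(name, []).append(value)

# Inside a training step, recording a metric per epoch:
for epoch in range(3):
    loss = 1.0 / (epoch + 1)           # placeholder for the real loss
    record_metric_value("loss", loss)  # one call records the value
record_metric_value("val_accuracy", 0.87)

print(METRICS["loss"])  # [1.0, 0.5, 0.3333333333333333]
```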

What sets our product apart is its ability to automatically generate charts and graphs from these metrics, offering an intuitive visual representation of model performance. This real-time visualization provides a dynamic, at-a-glance view of how your model is behaving during training. It allows for quick identification of potential issues, such as overfitting or slow convergence, enabling timely adjustments to improve model quality. With this enhanced monitoring capability, data scientists can fine-tune their models more effectively and with greater confidence.

See the evolution of a Metric in Production

These metrics are also automatically aggregated for pipelines in production, so they can be tracked over the lifetime of a deployed model.

Perhaps one of the most significant advantages of this new feature is its ability to simplify issue identification and resolution. When unusual values or trends emerge in the production metrics chart on our product, it serves as an early warning system for potential problems. This proactive monitoring ensures that data teams can swiftly detect and address issues before they impact business operations, maintaining model stability and reliability.
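The kind of check the production chart makes easy to eyeball can also be sketched in code: compare the recent average of a metric against its long-run baseline and raise a flag when they diverge. The function below is an illustrative sketch, not a platform feature, and the threshold is an arbitrary assumption.

```python
from statistics import mean

def drift_alert(history, recent_window=3, tolerance=0.10):
    """Flag when the recent average of a production metric deviates
    from its long-run baseline by more than `tolerance` (relative)."""
    if len(history) <= recent_window:
        return False  # not enough data to compare
    baseline = mean(history[:-recent_window])
    recent = mean(history[-recent_window:])
    return abs(recent - baseline) / abs(baseline) > tolerance

# Production accuracy of a deployed model, oldest to newest:
accuracy_history = [0.91, 0.90, 0.92, 0.91, 0.80, 0.78, 0.76]
print(drift_alert(accuracy_history))  # True: recent accuracy dropped
```

A simple rule like this turns a wall of numbers into an early-warning signal; the production metrics chart provides the same insight visually.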

Maintaining model effectiveness post-deployment is a challenge many organizations face. This feature facilitates ongoing optimization and adaptation, ensuring that ML models remain in peak condition, consistently providing accurate results and contributing to business success.

New Executions Comparison Page

The introduction of the Executions comparison page in our latest MLOps platform update is a game-changer for data teams and machine learning practitioners. This feature simplifies the often complex and time-consuming task of model selection by providing a centralized hub for comparing various model executions. What sets this page apart is its comprehensive tracking of metadata, including the name of the pipeline, user, timestamp, and more.

One of the standout utilities of this page is its ability to streamline the comparison of different model iterations. Users can effortlessly examine hyperparameters and input configurations alongside the corresponding output and result metrics. Whether it's comparing the performance of various algorithms, evaluating the impact of different data preprocessing steps, or fine-tuning hyperparameters, this feature simplifies the decision-making process, saving valuable time and resources in the pursuit of optimal model selection.
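The comparison the page performs can be pictured as ranking execution records by a result metric. The records below are made-up examples shaped like the metadata the page tracks (pipeline, user, timestamp, hyperparameters, metrics); the `rank_executions` helper is illustrative.

```python
# Illustrative execution records, similar to what the comparison page tracks.
executions = [
    {"pipeline": "train", "user": "alice", "timestamp": "2023-10-01T09:00:00Z",
     "hyperparameters": {"lr": 0.1},  "metrics": {"f1": 0.78}},
    {"pipeline": "train", "user": "bob",   "timestamp": "2023-10-02T14:30:00Z",
     "hyperparameters": {"lr": 0.01}, "metrics": {"f1": 0.84}},
    {"pipeline": "train", "user": "alice", "timestamp": "2023-10-03T11:15:00Z",
     "hyperparameters": {"lr": 0.05}, "metrics": {"f1": 0.81}},
]

def rank_executions(records, metric, descending=True):
    """Order executions by one result metric, best first."""
    return sorted(records, key=lambda e: e["metrics"][metric],
                  reverse=descending)

best = rank_executions(executions, "f1")[0]
print(best["hyperparameters"])  # {'lr': 0.01} — the winning configuration
```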

Boundless Possibilities with GPU-Enabled Environments and the New Trigger Method

Enhanced Trigger Method

Streamlined Usage

As explained in our previous article, it is becoming essential to publish your models, not just build them. Our newly released product places a strong emphasis on simplification, making it more accessible and user-friendly for data scientists of all backgrounds. One remarkable enhancement is the "Run a Pipeline" feature, a significant leap in simplifying the execution of pipelines on the cloud. With just a single line of Python, data scientists can initiate complex workflows, harnessing the power of the cloud without the burden of intricate configurations. By streamlining pipeline execution, we have removed unnecessary hurdles, allowing data scientists to focus on what they do best: developing and fine-tuning machine learning models.
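The shape of that one-liner can be sketched with a local stand-in. The real call goes through the Craft AI SDK, so the `run_pipeline` function and its parameters here are illustrative assumptions, not the SDK's exact signature.

```python
# Local stand-in for the platform's one-line pipeline trigger.
def run_pipeline(pipeline_name, inputs=None):
    """Simulate launching a named pipeline and returning its result."""
    registered = {
        "train-model": lambda params: {"status": "succeeded",
                                       "inputs": params or {}},
    }
    if pipeline_name not in registered:
        raise ValueError(f"Unknown pipeline: {pipeline_name}")
    return registered[pipeline_name](inputs)

# The single line a data scientist would write:
result = run_pipeline("train-model", inputs={"dataset": "sales.csv"})
print(result["status"])  # succeeded
```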

Periodic Trigger Pipeline

The ability to set execution rules to "Periodic" represents a significant advancement in our solution's capabilities. This feature streamlines the process of retraining models at regular intervals, ensuring that machine learning models remain up-to-date and accurate.

Consider a scenario where a fraud detection model, initially trained on historical data, begins to experience performance degradation as new fraud patterns emerge. By setting up a periodic trigger pipeline, data teams can automatically retrain the model at predefined intervals, adapting it to evolving data trends.

A periodic deployment can also be created in a single line.
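What a periodic rule does under the hood can be sketched as computing the next execution times from a start point and an interval. This is an illustrative sketch of the scheduling concept, not the SDK's deployment call.

```python
from datetime import datetime, timedelta, timezone

def periodic_schedule(start, every, count):
    """Yield the next `count` execution times for a periodic trigger."""
    for i in range(1, count + 1):
        yield start + i * every

# Retrain the fraud-detection model once a day for the next three days.
start = datetime(2023, 10, 1, 2, 0, tzinfo=timezone.utc)
for run_at in periodic_schedule(start, timedelta(days=1), 3):
    print(run_at.isoformat())
```

On the platform, the data scientist only declares the rule; the scheduling itself is handled for them.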

GPU-Enabled Environments

The integration of GPU-enabled environments within our latest MLOps platform marks a significant leap forward in empowering data scientists and machine learning practitioners. The importance of GPU acceleration cannot be overstated, particularly when dealing with resource-intensive tasks like deep learning and large language models (LLMs).

The parallel processing capabilities of GPUs dramatically reduce training times, allowing data scientists to experiment and iterate at a much faster pace. Moreover, LLMs, which have revolutionized natural language processing and understanding, rely heavily on GPUs to handle the vast amount of computation required for tasks like text generation and translation. In essence, GPU-enabled environments are not just a convenience but a necessity for pushing the boundaries of AI and machine learning, enabling researchers and practitioners to tackle more ambitious and data-intensive projects than ever before.

The NVIDIA GPUs on our platform are compatible with major deep learning libraries such as TensorFlow and PyTorch.
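Before pointing a workload at a GPU-enabled environment, it can be useful to verify that an NVIDIA GPU is actually visible from the code. One standard-library way is to query `nvidia-smi`, the standard NVIDIA driver utility (this check is a general technique, not a platform-specific API):

```python
import shutil
import subprocess

def nvidia_gpu_available():
    """Return True if `nvidia-smi` is present and lists at least one GPU."""
    if shutil.which("nvidia-smi") is None:
        return False  # NVIDIA driver tooling is not installed
    try:
        out = subprocess.run(
            ["nvidia-smi", "-L"],  # -L lists the detected GPUs
            capture_output=True, text=True, timeout=10,
        )
    except (subprocess.SubprocessError, OSError):
        return False
    return out.returncode == 0 and "GPU" in out.stdout

print(nvidia_gpu_available())
```

Frameworks such as PyTorch (`torch.cuda.is_available()`) expose equivalent checks once they are installed in the environment.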


In Data Science, efficiency and collaboration are paramount. Our latest Craft AI MLOps platform version is here to meet these demands and more.

We've discussed the significance of MLOps in data science workflows and showcased the exceptional features of our updated product, from improved environment and pipeline management to precise input/output tracking, real-time machine learning monitoring, and simplified usability.

Dive into MLOps with a hands-on, personalized demo. See how our solution can take your machine learning use cases into real-world production. Schedule your demo now!

A platform compatible with the entire ecosystem

Google Cloud
OVH Cloud
TensorFlow
MongoDB