Explainable AI for Small Data: How it Works

Explainable AI for Small Data is the next step for AI adoption in companies. The big question is: how do we make it a reality? At MyDataModels, we found a way, thanks to our patented Machine Learning algorithm. But first, let’s go back to school for a quick reminder on Darwinian evolution.

Survival of the fittest

Charles Darwin discovered that organisms evolve through the natural selection of small traits inherited from one generation to the next. Through this process, species adapt to survive in their environment. Those that don’t adapt disappear, which is why we talk about the survival of the fittest.

At first, species evolution sounds far removed from Explainable AI for Small Data, doesn’t it? Well, it makes more sense when you learn that we developed our Decision Intelligence platforms using evolutionary algorithms. As their name suggests, these algorithms replicate the Darwinian process, adapting and improving from one generation to the next. But how does it work?

When models replace species

We designed ZGP (that’s the name of our algorithm) to do Supervised Machine Learning for Small Data. As a reminder, Supervised Machine Learning needs organized, labeled historical data to generate models, so the algorithm knows what it is learning from. A human designates which labeled variable is the expected outcome, i.e. the target. The algorithm then looks for the best correlations and patterns among the other variables, learning from the past to predict that target outcome.
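
To make that concrete, here is a minimal sketch of supervised learning on labeled tabular data. ZGP itself is proprietary, so scikit-learn’s LogisticRegression stands in as the learner here; the dataset and column meanings are invented for illustration.

```python
# A minimal sketch of Supervised Machine Learning on labeled tabular data.
# scikit-learn stands in for the proprietary ZGP algorithm.
from sklearn.linear_model import LogisticRegression

# Organized, labeled historical data: each row is a past case,
# each column a variable (values here are invented).
X = [
    [5.0, 120, 1],
    [3.2,  95, 0],
    [6.1, 130, 1],
    [2.8,  90, 0],
]
y = [1, 0, 1, 0]  # the labeled outcome a human designated as the target

model = LogisticRegression().fit(X, y)   # learn patterns from the past
print(model.predict([[4.5, 110, 1]]))    # predict the outcome of a new case
```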

In our case, ZGP mimics the Darwinian evolution process by dividing the dataset in two: it analyzes one part to generate a large number of models, then uses the remaining data to rate their quality. ZGP keeps the models with the best results, meaning those that are simultaneously simple and accurate, and creates a new generation of models based on their characteristics. It repeats this process until one model is considered the best of the breed, i.e. when marginal performance gains become null.
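
Here is a schematic, self-contained sketch of that loop. ZGP’s real operators (model generation, crossover, mutation) are proprietary; in this toy version a “model” is just a pair of coefficients (a, b) for the formula a*x + b, and breeding is a small random nudge of the survivors.

```python
import random

def split_in_two(rows):
    """Divide the dataset in two: one half to evolve models, one to rate them."""
    shuffled = random.sample(rows, len(rows))
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

def error(model, rows):
    """Mean squared error of the formula a*x + b on the given rows."""
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in rows) / len(rows)

def evolve(rows, population=200, generations=100, tolerance=1e-6):
    train, holdout = split_in_two(rows)
    models = [(random.uniform(-5, 5), random.uniform(-5, 5))
              for _ in range(population)]
    best = float("inf")
    for _ in range(generations):
        # Selection pressure comes from one half of the data...
        models.sort(key=lambda m: error(m, train))
        survivors = models[: population // 10]
        # ...while the held-out half rates quality and decides when to stop.
        top = error(survivors[0], holdout)
        if best - top < tolerance:  # marginal gains are null: best of the breed
            break
        best = top
        # Breed a new generation: small random mutations of the survivors.
        models = [(a + random.gauss(0, 0.2), b + random.gauss(0, 0.2))
                  for a, b in (random.choice(survivors)
                               for _ in range(population))]
    return min(models, key=lambda m: error(m, holdout))

# Toy dataset following y = 2x + 1 with a little noise.
data = [(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in range(30)]
print(evolve(data))  # coefficients should approach (2.0, 1.0)
```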

Simple, understandable mathematical formulas

However, mimicking Darwinism alone doesn’t guarantee Explainable AI for Small Data. Three additional techniques make ZGP models explainable: Symbolic Regression, Global Optimization under Constraints, and Strong Regularization. Sounds complex? Let’s briefly sum up these concepts and explain why they matter.

Symbolic Regression means that the models that ZGP generates are mathematical formulas. They combine the values of the input variables to predict the target outcome.
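
In practice, that means a model you can read. The formula below is invented for illustration (not an actual ZGP output), but it shows the shape of a symbolic-regression model: every variable and coefficient is visible and can be discussed with a domain expert.

```python
import math

def churn_risk(tenure_months, support_tickets, monthly_spend):
    """A symbolic model is just a readable formula over the input variables."""
    return (0.8 * support_tickets
            - 0.05 * tenure_months
            + math.log(1 + monthly_spend) / 10)

print(churn_risk(tenure_months=24, support_tickets=3, monthly_spend=50.0))
```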

Global Optimization under Constraints completes Symbolic Regression by ensuring that the mathematical formula stays simple enough to be understandable. It forces the algorithm to find the best tradeoff between accuracy and explainability.
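
One common way to express such a tradeoff, sketched below, is to fold accuracy and formula size into a single score. The penalty weight and the term counts here are arbitrary assumptions for illustration, not ZGP’s actual constraints.

```python
def penalized_score(accuracy, n_terms, alpha=0.02):
    """Higher is better: accuracy minus a price paid per term in the formula.
    The weight alpha is an arbitrary illustrative value."""
    return accuracy - alpha * n_terms

# A short 4-term formula at 93% accuracy beats a 20-term one at 95%.
print(penalized_score(accuracy=0.95, n_terms=20))  # 0.55
print(penalized_score(accuracy=0.93, n_terms=4))   # 0.85
```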

Finally, Strong Regularization is a mechanism that limits the overfitting of models and reduces their complexity. Left unchecked, models tend to fit the data they were built on too closely, which limits their adaptability to new datasets. Strong Regularization makes sure that the models can be used on other datasets of the same shape. Indeed, a model that is very accurate on one specific set of data but can’t be used on any other is useless. Hence, Strong Regularization reduces the specificity of the model while ensuring that its accuracy remains good enough.
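
The sketch below illustrates the failure mode regularization guards against, using a deliberately extreme “model” that memorizes its training rows: it looks perfect on the data it was built from and falls apart on data of the same shape it has never seen. The data and models are invented for illustration.

```python
import random

random.seed(0)
data = [(x, 2 * x + random.gauss(0, 1)) for x in range(20)]
train, holdout = data[:10], data[10:]
lookup = dict(train)

def mse(model, rows):
    """Mean squared error of a model (a callable) on the given rows."""
    return sum((model(x) - y) ** 2 for x, y in rows) / len(rows)

def memorizer(x):
    """Overfit to the extreme: perfect recall of training rows, nothing else."""
    return lookup.get(x, 0.0)

def simple_formula(x):
    """A short, regular formula close to the true relationship."""
    return 2 * x

print(mse(memorizer, train), mse(memorizer, holdout))            # 0.0, then huge
print(mse(simple_formula, train), mse(simple_formula, holdout))  # small on both
```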

ZGP combines these techniques to provide short, understandable predictive models that can be used for simulation and support decision-making, based on Small Data – tabular datasets starting at 300 rows and 15 columns.

How can you use Explainable AI for Small Data in your daily work?

Of course, even though ZGP models are explainable and work well for Small Data, business experts won’t run the algorithm on their data themselves. This is why we develop end-to-end Decision Intelligence platforms that deliver actionable insights directly to business experts. These platforms connect to the data sources, process the data, and use ZGP to generate predictive models that can be understood and used through user-friendly interfaces.
These platforms can be used to predict future failures, reduce employee leave, improve customer satisfaction and much more! So if you’re looking to improve your operational performance, our Explainable AI for Small Data could help – contact us today to discuss your use cases!
