
Telco, Customer Services

Churn Prediction

churn.jpg

Improving Customer Retention with Churn Prediction

Churn prediction serves three goals: understanding the key drivers of customer attrition, identifying the customers most at risk of leaving, and providing targeted insights on which retention actions to implement. Churn rate is an important business metric because it reflects how customers respond to service quality, pricing, and competition. Measuring churn, understanding its underlying causes, and being able to anticipate and manage the associated risks are therefore key levers for continuously increasing business value.

Problems to solve

How to predict customer churn?
How to detect churn intent early enough to create targeted retention programs?
Overall, how to improve customer loyalty by reducing the attrition rate?
Can machine learning help with these matters, and how accurate can predictive models be at predicting churn?

Benefits of TADA

All Sales & Marketing professionals working on customer loyalty could use predictive models. However, they are not data scientists, and they may lack the machine learning skills and coding expertise needed to build models. Most of the data these professionals handle is Small Data: their historical data often contains only a few hundred campaigns or a few thousand customers, rarely millions (as in Big Data). Traditional machine learning tools work well with Big Data but perform poorly when making predictions from Small Data.

MyDataModels allows domain experts to build predictive models from Small Data automatically and without training. They can use their collected data directly, without normalization, outlier management, or feature engineering. Thanks to this limited data preparation, the results on this dataset were obtained in a few clicks and in less than a minute on a regular laptop. MyDataModels brings a self-service solution to those who have Small Data and no data scientists.

Conclusion

Most Marketing professionals know that it is easier (and cheaper) to retain existing customers than to acquire new ones. Taking retention actions after a customer has already left is generally too late: success rates for winning back users are around 2–3%.
Customer service teams in charge of retention tend to have limited resources and cannot devote the same level of attention to every customer. Sales & Marketing experts therefore need to determine which customers are most likely to churn so they can prioritize their retention efforts.

In this churn detection use case, MyDataModels' predictive models achieve satisfactory results, with an average accuracy of 76%.

By using an automated machine learning solution like TADA, companies can proactively identify the factors driving churn and predict which current customers are most likely to leave for the competition. This enables retention teams to focus their resources on the customers most at risk and offer them personalized incentives to remain loyal. By targeting the right audience, this technology offers companies a great opportunity to lower their retention costs while increasing overall customer loyalty.

Case study

Solution

Automated Machine Learning solutions predict future outcomes from historical data.
To predict a future result, you must provide your descriptive data together with the past results obtained.

TADA allows you to simply create a relevant predictive model from your data and apply it to future data.

In this case, the descriptive data are the clients' information about their current situation with the telecom company.
The goal of the dataset is to predict whether a client will churn or not: a binary task (yes/no).

To generate a model, the steps are the following:

  • Create your project and load your data as a CSV table (with data in rows and variables in columns).
  • Select the variable you want to predict, called Goal.
In this case, the Goal is the variable "Churn" (a visualization of the variable is provided).
  • Select your data for the model generation. This step, called "Creating the Variable set", allows you to manually select the descriptive variables you want to use. By default, they are all selected.
    TADA identifies the relevant descriptive variables by itself; the number of variables selected affects the calculation time required to create the model.
    The fewer variables selected, the faster the model creation.
  • Create your model.
    At creation, default values are proposed: model name, Population, Iteration. You only need to validate the default values to start model generation. ‘Best practices’ are at your disposal to guide you in the choice of these parameters.

    Depending on the size of the descriptive data file, this step can take between a few seconds and ten minutes.
    Once the model is created, you can see the results of the model using metrics and charts so you can judge its relevance.
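As an illustration only, the steps above can be sketched in code. The sketch below uses scikit-learn as a stand-in for TADA's internals and a tiny invented dataset; it does not reflect TADA's actual implementation.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Tiny invented dataset standing in for the churn CSV
# (rows = customers, columns = variables).
df = pd.DataFrame({
    "Tenure":         [1, 2, 3, 40, 50, 60, 5, 4, 45, 55, 2, 48],
    "MonthlyCharges": [80, 85, 90, 30, 25, 20, 95, 88, 28, 22, 92, 26],
    "Churn":          [1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0],
})

# Select the Goal (the variable to predict); the rest are descriptive variables.
y = df["Churn"]
X = df.drop(columns=["Churn"])

# Split the data, then create and score a model.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = model.score(X_test, y_test)
```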

Note:
To apply a model that you think is relevant, you can:

  • Retrieve the associated mathematical formula and apply it (for instance in Excel).
  • Retrieve the source code of the formula and use it yourself (valid only with TADA
    paying offers). The source code is available in R, Java, C++ and soon Python.
  • Use the "Predict" feature in the product: upload a file containing the data to be predicted, and you will receive a downloadable file containing that data along with the calculated predictions.
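For illustration, a retrieved formula is simply a function of the descriptive variables. The coefficients below are invented for the example, not produced by TADA:

```python
import math

# Hypothetical exported formula; real coefficients would come from TADA.
def predict_churn(tenure: float, monthly_charges: float) -> int:
    """Return 1 (churn) or 0 (no churn) from two descriptive variables."""
    score = 1.2 - 0.05 * tenure + 0.01 * monthly_charges
    probability = 1.0 / (1.0 + math.exp(-score))
    return 1 if probability >= 0.5 else 0

# Applying the formula row by row to data to be predicted:
new_customers = [(2, 90.0), (60, 20.0)]   # (tenure, monthly charges)
predictions = [predict_churn(t, m) for t, m in new_customers]
```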


Dataset information

The screenshot below shows an extract of the public dataset.
Each row is a customer and each column is a variable that can be used as a descriptive variable in the model.

dataset.png

The dataset includes information about:

  • Customers who left within the last month – the column is called Churn
  • Services that each customer has signed up for – phone, multiple lines, internet, online security, online backup, device protection, tech support, and streaming TV and movies
  • Customer account information – how long they’ve been a customer, contract, payment method, paperless billing, monthly charges, and total charges
  • Demographic info about customers – gender, age range, and if they have partners and dependents

Task type: Binary Classification
Number of variables: 29
Number of rows: 4923
Goal: Churn (did the customer actually churn? yes = 1 / no = 0)
Weight: positive class (churn = 1): 27%; negative class (no churn = 0): 73%

  • gender : whether the customer is a male or a female
  • SeniorCitizen : whether the customer is a senior citizen or not (1, 0)
  • Partner : whether the customer has a partner or not (1, 0)
  • Dependents : whether the customer has dependents or not (1, 0)
  • Tenure : Number of months the customer has stayed with the company
  • PhoneService : whether the customer has a phone service or not (1, 0)
  • MultipleLines_Yes : whether the customer has multiple lines (1, 0)
  • MultipleLines_No : whether the customer has phone service but no multiple lines (1, 0)
  • MultipleLines_No phone service : whether the customer has no phone service (1, 0)
  • InternetService_DSL : whether the customer's internet service is DSL (1, 0)
  • InternetService_Fiber optic : whether the customer's internet service is fiber optic (1, 0)
  • InternetService_No : whether the customer has no internet service (1, 0)
  • OnlineSecurity: whether the customer has online security or not (1,0)
  • OnlineBackup : whether the customer has online backup or not (1,0)
  • DeviceProtection : whether the customer has device protection or not (1,0)
  • TechSupport : whether the customer has tech support or not (1,0)
  • StreamingTV : whether the customer has streaming TV or not (1,0)
  • StreamingMovies : whether the customer has streaming movies or not (1,0)
  • Contract_Month_to_month : whether the customer has a month-to-month contract (1, 0)
  • Contract_One year : whether the customer has a one-year contract (1, 0)
  • Contract_Two years : whether the customer has a two-year contract (1, 0)
  • PaperlessBilling : whether the customer has paperless billing or not (1, 0)
  • PaymentMethod_Credit card (automatic) : whether the customer pays by credit card (1, 0)
  • PaymentMethod_Electronic check : whether the customer pays by electronic check (1, 0)
  • PaymentMethod_Mailed check : whether the customer pays by mailed check (1, 0)
  • PaymentMethod_Transfer : whether the customer pays by bank transfer (1, 0)
  • MonthlyCharges : the amount charged to the customer monthly
  • TotalCharges : the total amount charged to the customer
  • Churn is our Goal : whether the customer churned or not (1,0)
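Several of the columns above (MultipleLines_*, InternetService_*, Contract_*, PaymentMethod_*) are one-hot (dummy) encodings of a single categorical column. As a sketch of how such a layout is produced, using pandas:

```python
import pandas as pd

# One categorical column expanded into one 1/0 column per distinct value.
df = pd.DataFrame({"Contract": ["Month-to-month", "One year", "Two year"]})
dummies = pd.get_dummies(df, columns=["Contract"], dtype=int)
# dummies now holds Contract_Month-to-month, Contract_One year and
# Contract_Two year, each set to 1 for the matching contract type, else 0.
```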

Results

The results are available once the model has been generated.

They present the performance of the predictive model.

The type of predictive model and its associated measurement indicators depend on the Goal (the variable to be predicted) and on the values of this variable.

The type of model you make is shown on the model results display.

Depending on the type of the Goal (in our case, the Goal is "Churn"), TADA can make three types of predictions:
- Binary classification: discrete value taking only two values (yes/no for instance)
- Multiclass classification: discrete value taking more than two values (for instance an equipment status with values like: On, Risk of breakdown, Down, etc.)
- Regression: continuous value that can take an infinite number of values (a temperature, a pressure, a turnover, the price of a house for instance)
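As a rough illustration (a crude heuristic, not TADA's actual rule), the task type can be inferred mechanically from the Goal column's distinct values:

```python
def task_type(goal_values):
    """Infer the prediction type from the Goal column's distinct values."""
    distinct = set(goal_values)
    numeric = all(isinstance(v, (int, float)) for v in distinct)
    if numeric and len(distinct) > 10:
        return "regression"              # many numeric values: continuous Goal
    if len(distinct) == 2:
        return "binary classification"   # exactly two values, e.g. yes/no
    return "multiclass classification"   # more than two discrete values
```

For example, `task_type([0, 1, 1, 0])` returns "binary classification".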

When the model is generated, and in line with machine learning best practices and the state of the art, TADA divides your dataset into three parts:

  • A training part, representing 40% of your dataset, used to train a set of
    candidate formulas;
  • A validation part, representing 30% of your dataset, used to validate and
    select the best formulas found in the previous step;
  • A test part, the remaining 30% of your dataset, used to test the formulas approved in the preceding stage. Performance measurement and evaluation of your model should mainly be done on this partition (a machine learning standard), because this data was not used during the learning and validation phases and serves only to measure the model's performance.
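The 40/30/30 split can be sketched as two stratified cuts. scikit-learn is used here purely for illustration; this is not necessarily how TADA implements it:

```python
from sklearn.model_selection import train_test_split

rows = list(range(100))           # stand-in for the dataset's row indices
labels = [i % 2 for i in rows]    # stand-in for the binary Goal

# First cut: 40% training vs. 60% remainder (stratified on the Goal).
train, rest, y_train, y_rest = train_test_split(
    rows, labels, train_size=0.4, stratify=labels, random_state=0
)
# Second cut: the remainder split in half gives 30% validation + 30% test.
valid, test, y_valid, y_test = train_test_split(
    rest, y_rest, test_size=0.5, stratify=y_rest, random_state=0
)
```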

metrics.png

ACC (Accuracy) represents the overall accuracy rate of the model: the percentage of samples that are correctly classified (here, 76.39% of the predictions are correct).

TPR (True Positive Rate) represents the accuracy rate of the prediction of the positive class, i.e. the "yes/1" class.

TNR (True Negative Rate) represents the accuracy rate of the prediction of the negative class, i.e. the "no/0" class.

MCC (Matthews Correlation Coefficient) measures the quality of the predictions as a whole, i.e. how well the predictions separate the two classes.

Confusion matrix 

confusion_matrix.png

Here, the confusion matrix offers a visual way of interpreting the metrics.
In this case, TADA predicted 917 times that a client would not churn and was wrong only 102 times (102 churners were missed).
In parallel, TADA predicted 561 times that a client would churn and was wrong 247 times (247 clients were predicted to churn but did not).
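Reading the counts above as TN = 815 and FN = 102 (from the 917 "no churn" predictions) and TP = 314 and FP = 247 (from the 561 "churn" predictions), the metrics can be recomputed by hand, matching the 76.39% accuracy reported:

```python
import math

# Counts read off the confusion matrix above.
TP, FP = 314, 247   # 561 "churn" predictions, 247 of them wrong
TN, FN = 815, 102   # 917 "no churn" predictions, 102 of them wrong

acc = (TP + TN) / (TP + TN + FP + FN)   # overall accuracy, ~0.7639
tpr = TP / (TP + FN)                    # true positive rate
tnr = TN / (TN + FP)                    # true negative rate
mcc = (TP * TN - FP * FN) / math.sqrt(
    (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN)
)
```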

Ready to use TADA?

You don't have data at hand?

No problem, sample data is available to make your trial as relevant as possible!

Try it now!

Detailed information

General

Artificial intelligence: Theories and techniques aiming to simulate intelligence (human, animal or other).

Binary Classification: The problem type where you are trying to predict one of two states, e.g. yes/no, true/false, A/B, 0/1, red/green, etc. This kind of analysis requires the Goal variable to be of type CLASS and the Goal column to contain only 2 distinct values. Otherwise, it is not a binary problem (two choices and no more).

Convolutional Neural Network: This type of network is dedicated to object recognition. CNNs are generally composed of several convolution + pooling layers followed by one or more FC layers. A convolutional layer can be seen as a filter: the first layers of a CNN filter corners, curves and segments, while the following layers detect increasingly complex shapes.

Data Mining: Field of data science aimed at extracting knowledge and/or information from a body of data.

Deep Learning: Deep Learning is a category of so-called "layered" machine learning algorithms. A deep learning algorithm is a neural network with a large number of layers. The main interest of these networks is their ability to learn models from raw data, thus reducing pre-processing (often important in the case of classical algorithms).

Fully Convolutional Networks: An FCN is a CNN with the last FC layers removed. This type of network is currently not used much but can be very useful if it is succeeded by an RNN network allowing integration of the time dimension in a visual recognition analysis.

GRU (Gated Recurrent Unit): A GRU network is a simplified LSTM introduced in 2014 that allows better predictions and easier parameterization.

LSTM (Long Short-Term Memory): An LSTM is an RNN to which a system has been added to control access to memory cells. We speak of "Gated Activation Function". LSTMs perform better than conventional RNNs.

Machine learning: A subfield of Artificial Intelligence (AI), machine learning is the scientific study of algorithms and statistical models that give systems the ability to learn and improve at specific tasks without explicit programming.

Multi Classification: Classification where there are more than two classes in the goal variable, e.g. A/B/C/D, red/orange/green, etc.

Multilayer perceptron: This is a classic neural network. Generally, all the neurons of a layer are connected to all the neurons of the next layer. We are talking about Fully Connected (FC) layers.

RCNN (Regional CNN): This type of network compensates for the shortcomings of a classic CNN and answers the question: what to do when an image contains several objects to recognize? An RCNN makes it possible to extract several labels (each associated with a bounding box) of an image.

Regression: Set of statistical processes to predict a specific number or value. Regression analysis requires the type of Goal variable to be numeric (INTEGER or DOUBLE).

Reinforcement learning: Sub-domain of machine learning in which an agent learns by interacting with its environment: the agent's actions produce rewards or penalties, which are used to improve the learned model over time.

RNN (Recurrent Neural Networks): Recurrent networks are a set of networks integrating the temporal dimension. Thus, from one prediction to another, information is shared. These networks are mainly used for the recognition of activities or actions via video or other sensors.

Semi supervised learning: Semi-supervised learning is a special case of supervised learning. Semi-supervised learning is when training data is incomplete. The interest is to learn a model with little labeled data.

Stratified sampling: It is a method of sampling such that the distribution of goal observations in each stratum of the sample is the same as the distribution of goal observations in the population. TADA uses this method to shuffle the data set from binary and multi classification projects.

Simple random sampling: It is a method of sampling in which each observation is equally likely to be chosen randomly. TADA uses this method to shuffle the data set from regression projects.
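The difference between these two sampling schemes can be illustrated with scikit-learn (used here purely for illustration, not as TADA's implementation), reusing this dataset's 27%/73% class balance:

```python
from collections import Counter
from sklearn.model_selection import train_test_split

labels = [1] * 27 + [0] * 73   # the 27% / 73% class balance of this dataset

# Stratified sampling (classification projects): each part keeps the class ratio.
_, strat = train_test_split(labels, test_size=0.5, stratify=labels, random_state=0)

# Simple random sampling (regression projects): no such guarantee.
_, simple = train_test_split(labels, test_size=0.5, random_state=0)

strat_counts = Counter(strat)   # close to 13-14 positives out of 50
```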

Supervised learning: Sub-domain of machine learning, supervised learning aims to generalize and extract rules from labeled data in order to make predictions (predict the label associated with unlabeled data).

Transfer learning: Brought up to date by deep learning, transfer learning consists of reusing pre-trained models so as not to reinvent the wheel for each new learning task.

Unsupervised learning: Sub-domain of machine learning, unsupervised learning aims to group data that are similar and divide/separate different data. We talk about minimizing intra-class variance and maximizing inter-class variance.


Metrics

Binary

ACC (Accuracy): Percentage of samples in the test set correctly classified by the model.

Actual Negative: Number of samples of negative case in the raw source data subset.

Actual Positive: Number of samples of positive case in the raw source data subset.

AUC: Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve. It is in the interval [0;1]. A perfect predictive model gives an AUC score of 1; a model that makes random guesses has an AUC score of 0.5.

F1 score: Single-value metric that indicates a Binary Classification model's efficiency at predicting both the positive and negative classes. It is computed as the harmonic mean of PPV and TPR.

False Negative: Number of positive class samples in the source data subset that were incorrectly predicted as negative.

False Positive: Number of negative class samples in the source data subset that were incorrectly predicted as positive.

MCC (Matthews Correlation Coefficient): Single-value metric that gives an indication of a Binary Classification model's efficacy at predicting both classes. This value ranges from -1 to +1, with +1 being a perfect classifier.

PPV (Positive Predictive Value/Precision): Number of a model's True Positive predictions divided by the number of (True Positives + False Positives) in the test set.

Predicted Positive: Number of samples in the source data subset predicted as the positive case by the model.

Predicted Negative: Number of samples in the source data subset predicted as the negative case by the model.

True Positive: Number of positive class samples in the source data subset accurately predicted by the model.

True Negative: Number of negative class samples in the source data subset accurately predicted by the model.

TPR (True Positive Rate/Sensitivity/Recall): Ratio of True Positive predictions to actual positives with respect to the test set. It is calculated by dividing the true positive count by the actual positive count.

TNR (True Negative Rate/Specificity): Ratio of True Negative predictions to actual negatives with respect to the test set. It is calculated by dividing the True Negative count by the actual negative count.
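These are all standard metrics, so scikit-learn can compute them directly. The toy labels below are invented for the example (TP = 3, FN = 1, FP = 2, TN = 4):

```python
from sklearn.metrics import (
    accuracy_score, f1_score, matthews_corrcoef, precision_score, recall_score
)

# Toy ground truth and predictions, for illustration only.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]

acc = accuracy_score(y_true, y_pred)    # (TP + TN) / total = 0.7
ppv = precision_score(y_true, y_pred)   # TP / (TP + FP)    = 0.6
tpr = recall_score(y_true, y_pred)      # TP / (TP + FN)    = 0.75
f1 = f1_score(y_true, y_pred)           # harmonic mean of PPV and TPR
mcc = matthews_corrcoef(y_true, y_pred)
```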

 

Multi classification

ACC (Accuracy): Ratio of the correctly classified samples over all the samples.

Actual Total: Total number of samples in the source data subset that were of the given class.

Cohen’s Kappa (K): Coefficient that measures inter-rater agreement for categorical items. It tells how much better a classifier performs than one that simply guesses at random according to the frequency of each class. It is in the interval [-1;1]: +1 represents a perfect prediction, 0 no better than random prediction, and -1 total disagreement.

False Negative: Number of positive class samples in the source data subset that were incorrectly predicted as negative.

False Positive: Number of negative class samples in the source data subset that were incorrectly predicted as positive.

Macro-PPV (Positive Predictive Value/Precision): The mean of the computed PPV within each class (independently of the other classes). Each PPV is the number of True Positive (TP) predictions divided by the total number of positive predictions (TP+FP, with FP for False Positive) within each class. PPV is in the interval [0;1]. The higher this value, the better the confidence that positive results are true.

Macro-TPR (True Positive Rate/Recall): The mean of the computed TPR within each class (independently of the other classes). Each TPR is the proportion of samples predicted Truly Positive (TP) out of all the samples that actually are positive (TP+FN, with FN for False Negative). TPR is in the interval [0;1]. The higher this value, the fewer actual samples of positive class are labeled as negative.

Macro F1 score: Harmonic mean of macro-average PPV and TPR. F1 Score is in the interval [0;1]. The F1 Score can be interpreted as a weighted average of the PPV and TPR values. It reaches its best value at 1 and worst value at 0.

MCC (Matthews Correlation Coefficient): Represents the multi class confusion matrix with a single value. Precision and recall for all the classes are computed and averaged into a single real number within the interval [-1;1]. However, in the multiclass case, its minimum value lies between -1 (total disagreement between prediction and truth) and 0 (no better than random) depending on the data distribution.

Predicted Total: Total number of samples in the source data subset that were predicted of the given class.

True Positive: Number of positive class samples in the source data subset accurately predicted by the model.

True Negative: Number of negative class samples in the source data subset accurately predicted by the model.
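As an illustration of macro-averaging (the per-class counts below are invented for the example), per-class PPV and TPR are computed independently and then averaged; Macro F1, as defined above, is the harmonic mean of those two averages:

```python
# Per-class confusion counts for a toy 3-class problem: (TP, FP, FN).
counts = {"A": (1, 1, 1), "B": (2, 1, 0), "C": (1, 0, 1)}

ppv_per_class = [tp / (tp + fp) for tp, fp, _fn in counts.values()]
tpr_per_class = [tp / (tp + fn) for tp, _fp, fn in counts.values()]

macro_ppv = sum(ppv_per_class) / len(ppv_per_class)
macro_tpr = sum(tpr_per_class) / len(tpr_per_class)
# Macro F1: harmonic mean of macro-PPV and macro-TPR.
macro_f1 = 2 * macro_ppv * macro_tpr / (macro_ppv + macro_tpr)
```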

 

Regression

MAE (Mean Absolute Error): Represents the average magnitude of the errors in a set of predictions, without considering their direction. It is the average, over the test sample, of the absolute differences between prediction and actual observation, where all individual differences have equal weight. MAE is in the interval [0;+∞). A value of 0 represents a perfect prediction; the higher the value, the larger the model's error.

MAPE (Mean Absolute Percentage Error): MAPE is computed as the average of the absolute values of the deviations of the predicted versus actual values.

Max-Error: Maximum Error. The application considers the magnitude (absolute error) when identifying the maximum error; thus -1.5 would be considered a larger error than +1.3. The sign of the error is still reported in this column in case it has domain significance for the user.

R2 (R Squared): also known as the Coefficient of Determination. The application computes the R2 statistic as 1 - (SSres / SStot) where SSres is the residual sum of squares and SStot is the total sum of squares.

RMSE: Root Mean Square Error against the Dataset partition selected. RMSE is computed as the square root of the mean of the squared deviations of the predicted from actual values.

SD-ERROR (Standard Deviation Error): Standard statistical measure used to quantify the amount of variation of a set of data values.
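The regression metrics above can be computed by hand on a toy example (the actual and predicted values below are invented, for illustration only):

```python
import math

actual    = [10.0, 20.0, 30.0, 40.0]
predicted = [12.0, 18.0, 33.0, 39.0]

errors = [p - a for p, a in zip(predicted, actual)]   # [2.0, -2.0, 3.0, -1.0]

mae = sum(abs(e) for e in errors) / len(errors)       # mean absolute error
mape = 100 * sum(abs(e) / abs(a) for e, a in zip(errors, actual)) / len(errors)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
max_error = max(errors, key=abs)   # magnitude decides, sign is kept

mean_actual = sum(actual) / len(actual)
ss_res = sum(e * e for e in errors)                    # residual sum of squares
ss_tot = sum((a - mean_actual) ** 2 for a in actual)   # total sum of squares
r2 = 1 - ss_res / ss_tot                               # coefficient of determination
```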