Financial fraud accounts for considerable losses.
Hackers and crooks around the world are constantly looking for new ways to commit financial fraud.
Relying exclusively on rule-based, conventionally programmed systems to detect financial fraud does not provide an appropriate response time.
This is where Machine Learning brings a unique solution to this type of problem.
Problems to solve
- How to detect fraud before it's too late?
- In the case of fraudulent transactions, how to investigate without upsetting innocent customers?
- How to detect weak signals within a large amount of data?
- Can machine learning help with these issues, and how accurate can predictive models be at detecting fraud?
Benefits of TADA
Most professionals in the Banking & Insurance industries could benefit from predictive models. However, they are not data scientists and may not have the machine learning or coding skills required to build them. Even though the data handled by these professionals can be considered Big Data (transactions, for instance), the risks to predict (frauds, in this specific use case) are Small Data.
Here, the historical data contains a limited number of frauds compared to the number of transactions. Traditional machine learning tools work well with Big Data but perform poorly when predicting Small Data within Big Data (an imbalanced dataset).
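To make the imbalance concrete, it can be quantified in a few lines of Python. The labels below are synthetic, built only so that the fraud rate mirrors the 8%/92% class weights reported for this dataset:

```python
from collections import Counter

# Synthetic transaction labels: 1 = fraud, 0 = legitimate.
# The 8% fraud rate mirrors the class weights reported for this dataset.
labels = [1] * 8 + [0] * 92

counts = Counter(labels)
fraud_rate = counts[1] / len(labels)
imbalance_ratio = counts[0] / counts[1]  # legitimate transactions per fraud

print(f"fraud rate: {fraud_rate:.0%}")                # 8%
print(f"imbalance ratio: {imbalance_ratio:.1f} : 1")  # 11.5 : 1
```

With roughly 11 legitimate transactions for every fraud, a naive model that always predicts "no fraud" would already be 92% accurate, which is why accuracy alone is not enough for this kind of problem.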
MyDataModels allows domain experts to automatically build predictive models from Small Data. No training is required, and domain experts can use their raw data directly: no normalization, outlier handling, or feature engineering is required. Thanks to this limited data preparation, the results for this specific dataset were obtained in a few clicks and in less than 10 minutes on a standard laptop.
MyDataModels brings a self-service solution for the domain experts who have Small Data and no data scientists.
According to a report published by Nilson, worldwide losses in card-fraud-related cases reached $22.8 billion in 2017 and are expected to reach $32.96 billion by 2021.
Fraud is costly, but investigating every transaction is too expensive and inefficient. Investigating innocent customers also creates a bad experience, leading to greater customer churn. By accurately predicting which transactions are likely fraudulent, banks can significantly reduce illegal transactions while providing cardholders with an excellent customer experience.
Regardless of the industry, fraudulent activity is a high-cost threat that can compromise the integrity of companies and hurt the bottom line. Fraud detection analytics based on an automated machine learning solution such as TADA enables companies to discover fraudulent activities before it's too late.
Automated Machine Learning solutions predict future outcomes from historical data. To predict a future result, you must provide your descriptive data together with the past results obtained.
TADA allows you to simply create a relevant predictive model from your data and apply it to future data.
In this case, the descriptive data are the characteristics of each credit card transaction.
The goal of the dataset is to predict whether a transaction is fraudulent: it is a binary task (yes/no).
To generate a model, the steps are the following:
- Create your project and load your data as a CSV table (with data in rows and variables in columns).
- Select the variable you want to predict, called the Goal. In this case, the Goal is the variable "Target" (a visualization of the variable is provided).
- Select your data for the model generation. This step, called "Creating the Variable set", lets you manually select the descriptive variables you want to use. By default, they are all selected. TADA identifies the relevant descriptive variables by itself, but the selection affects the calculation time required to create the model: the fewer variables selected, the faster the model creation.
- Create your model.
At creation, default values are proposed for the parameters: Name of models, Population, Iteration. You only need to validate these default values to start the model generation.
'Best practices' are at your disposal to guide you in choosing these parameters.
Depending on the size of the descriptive data file, this step can take between a few seconds and ten minutes.
Once the model is created, you can review its results using metrics and charts to judge its relevance.
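Outside TADA, the load-and-select steps above can be sketched in plain Python. The inline CSV, file layout, and column names below are illustrative; TADA performs the equivalent work internally:

```python
import csv
import io

# Illustrative CSV table: data in rows, variables in columns,
# with "Target" as the Goal (the variable to predict).
raw = io.StringIO(
    "Time,V1,V2,Amount,Target\n"
    "0,-1.36,0.07,149.62,no\n"
    "406,-2.31,1.95,0.00,yes\n"
)

rows = list(csv.DictReader(raw))

goal = "Target"                                   # the Goal variable
variable_set = [c for c in rows[0] if c != goal]  # the Variable set: all descriptive
                                                  # variables selected by default

X = [[float(r[c]) for c in variable_set] for r in rows]  # descriptive values
y = [r[goal] for r in rows]                              # past results (yes/no)

print(variable_set)  # ['Time', 'V1', 'V2', 'Amount']
print(y)             # ['no', 'yes']
```

Dropping columns from `variable_set` at this point is the manual selection step: fewer descriptive variables means a faster model creation.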
To apply a model that you think is relevant, you can:
- Retrieve the associated mathematical formula and apply it (for instance, in Excel).
- Retrieve the source code of the formula and use it yourself (valid only with TADA paying offers). The source code is available in R, Java, C++, and soon Python.
- Use the "Predict" feature in the product: upload a file containing the data to be predicted, and you will receive a downloadable file containing the given data with the calculated predictions.
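As an illustration of applying an exported formula in your own environment, here is a sketch. The coefficients, variables, and threshold below are entirely hypothetical, chosen only to show the mechanics; a real TADA export supplies its own formula:

```python
import math

# Hypothetical exported formula: a logistic-style score over two
# descriptive variables. Real coefficients would come from the TADA export.
def score(v1: float, amount: float) -> float:
    z = -2.0 + 1.3 * v1 + 0.004 * amount
    return 1.0 / (1.0 + math.exp(-z))  # maps the score into (0, 1)

def predict(v1: float, amount: float, threshold: float = 0.5) -> str:
    return "fraud" if score(v1, amount) >= threshold else "legitimate"

print(predict(3.1, 120.0))   # score above threshold -> "fraud"
print(predict(-0.5, 40.0))   # score below threshold -> "legitimate"
```

Once the formula is embedded like this, new transactions can be scored in batch or in real time without the original modeling tool.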
The screenshot below shows an extract of the public dataset.
Each row is a transaction and each column is a variable.
- Task Type: Binary Classification
- Number of columns: 32
- Number of rows: 6026
- Goal (Class): Is it a fraudulent transaction? (yes or no)
- Weight: positive class (fraud) 8%, negative class 92%
The dataset contains transactions made by credit cards in September 2013 by European cardholders. It presents transactions that occurred over two days, with 492 frauds out of 284,807 transactions, so the dataset is highly unbalanced.
It contains only numerical variables, which are the result of a PCA transformation. Unfortunately, due to confidentiality issues, we cannot provide the original variables or more background information about the data. Variables V1, V2, ..., V28 are the principal components obtained with PCA; the only variables that have not been transformed with PCA are 'Time' and 'Amount'. The variable 'Time' contains the time in seconds between each transaction and the first transaction in the dataset. The variable 'Amount' is the transaction amount.
The results of the model are available once the model has been generated.
They present the performance of the predictive model.
The type of predictive model and its associated measurement indicators depend on the Goal (the variable to be predicted) and the values of this variable.
The type of model you make is shown on the model results display.
According to the type of the Goal (in our case, the Goal is "Target"), we can make three types of predictions:
- Binary classification: Discrete value taking only two values (yes / no for instance)
- Multiclass classification: Discrete value taking more than two values (for instance, a machine status with values like: On, Risk of breakdown, Down, etc.)
- Regression: Continuous value that can take an infinite number of values (a temperature, a pressure, a turnover, the price of a house, etc.)
When the model is generated, and in line with machine learning best practices, TADA divides your dataset into three parts:
- A training part (40% of your dataset), used to train a number of candidate formulas,
- A validation part (30% of your dataset), used to validate and select the best formulas found in the previous step,
- A test part (the remaining 30% of your dataset), used to test the formulas approved in the preceding stage. The performance measurement and evaluation of your model should mainly be done on this partition (standard machine learning practice), because this data was not used in the training and validation phases of the model and serves only to measure its performance.
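The 40/30/30 split can be sketched as a simple random partition (TADA's internal procedure may differ, for instance in how it shuffles or stratifies):

```python
import random

def split_dataset(rows, seed=42):
    """Shuffle and partition rows into 40% train, 30% validation, 30% test."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # seeded for reproducibility
    n = len(rows)
    n_train = int(n * 0.40)
    n_valid = int(n * 0.30)
    train = rows[:n_train]
    valid = rows[n_train:n_train + n_valid]
    test = rows[n_train + n_valid:]
    return train, valid, test

# Applied to this dataset's 6,026 rows:
train, valid, test = split_dataset(range(6026))
print(len(train), len(valid), len(test))  # 2410 1807 1809
```

Note that the resulting test partition has 1,809 rows, which matches the total number of predictions in the confusion matrix discussed below (149 + 1,660).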
ACC (Accuracy) represents the overall accuracy rate of the model; it is the percentage of predictions that are correct (here, 99.78% of the predictions are correct).
TPR (True Positive Rate) represents the accuracy rate of the prediction of the positive class, i.e., the "yes" class.
TNR (True Negative Rate) represents the accuracy rate of the prediction of the negative class, i.e., the "no" class.
MCC (Matthews Correlation Coefficient) measures the quality of the predictions as a whole, that is, how well the predictions are divided between the two classes.
The confusion matrix is a visual way of interpreting the metrics.
In this case, TADA predicted 149 times that a transaction was a fraud and was wrong 3 times (TADA flagged 3 transactions that were not fraudulent).
In parallel, TADA predicted 1,660 times that a transaction was not a fraud and was wrong once (TADA missed 1 fraud).
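These counts can be checked against the reported metrics. From the figures above, TP = 146, FP = 3, TN = 1,659, FN = 1, for 1,809 test rows in total:

```python
import math

# Confusion-matrix counts from the text: 149 fraud predictions (3 wrong),
# 1,660 non-fraud predictions (1 wrong).
tp, fp = 149 - 3, 3      # frauds correctly caught, false alarms
tn, fn = 1660 - 1, 1     # legitimate correctly cleared, missed frauds

acc = (tp + tn) / (tp + tn + fp + fn)
tpr = tp / (tp + fn)
tnr = tn / (tn + fp)
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
)

print(f"ACC = {acc:.2%}")  # 99.78%, matching the reported accuracy
print(f"TPR = {tpr:.2%}")
print(f"TNR = {tnr:.2%}")
print(f"MCC = {mcc:.3f}")
```

The computed accuracy of 99.78% agrees with the figure reported above, and the high MCC confirms that the model separates the two classes well despite the heavy imbalance.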
Ready to use TADA?
You don't have immediate data?
No problem, sample data are available to make your trial as relevant as possible! Try it now!