General Practitioners, COVID-19, and Machine Learning

Among the many lingering issues surrounding the spread of the coronavirus and its long-term development is the question: “How can the disease be diagnosed correctly?”.

Are 80% of COVID-19 patients asymptomatic? What is the prevalence of SARS-CoV-2 antibodies in the U.S.? In Europe? The prevalence is the contamination rate among the population. It varies from less than 5% to 25% in the U.S. and most European countries. In Southern Europe, the prevalence stands between 10 and 15%: 13% in Italy, 11% in Spain, and 15% in France. There are significant regional differences, however: in France, the Grand Est (north-east) is estimated to have a prevalence of over 20%, while the South West is under 5%. In the state of New York (USA), the prevalence rate is 13.9%, reaching 21.2% in New York City, whereas in both L.A. County and Santa Clara County it is between 4 and 9%. We are still far from herd immunity, which requires around 65% of the population to be immune.
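As a rough check on where that 65% figure comes from, here is a minimal sketch of the classical herd-immunity threshold, 1 - 1/R0, assuming a basic reproduction number of about 2.9 for SARS-CoV-2 (an assumed value, not a figure taken from this article):

```python
# Minimal sketch: classical herd-immunity threshold, 1 - 1/R0.
# R0 = 2.9 is an assumed value for SARS-CoV-2, not a figure from this article.
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune for the epidemic to recede."""
    return 1.0 - 1.0 / r0

print(f"{herd_immunity_threshold(2.9):.0%}")  # -> 66%, in line with the ~65% cited above
```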

Testing and COVID-19: a Complex Relationship

In this context, as Europe and the U.S. start to reopen, precise testing is an essential tool in states’ arsenals to track and end the spread of the coronavirus. 

There are three ways to detect COVID-19 infections, covering both people who are currently infected and those who have been sick and recovered. 

  • The most common method is a clinical diagnosis carried out by a medical doctor, with the clinical exam conducted either in person or by video conference. 
  • Another procedure relies on a technique called polymerase chain reaction (PCR), which detects genetic elements of the coronavirus present when the virus is active. Clinicians typically collect a specimen for testing from the back of a person’s throat. Sometimes, however, the virus has migrated to the lungs or the stomach: even though the person under examination is infected, the sample obtained does not permit a diagnosis of COVID-19. Furthermore, the collection technique is prone to inaccuracies. Hence, it generates a high risk of false negatives, i.e., people who are infected and test negative. The error rate varies between 20 and 40%, depending on the source. 
  • The most sophisticated approach consists of antibody tests, often called serologic tests. These tests look for evidence of an immune response to a COVID-19 infection: they check whether the person has been infected and has developed some form of immunity. In medical analysis, test sensitivity is the ability of a test to correctly identify people with the disease (true positive rate), while test specificity is its ability to correctly identify those without the disease (true negative rate). Even assuming a prevalence of 5%, a sensitivity of 95%, and a specificity of 95%, serologic testing is not reliable on its own. Indeed, with 95% specificity, 5% of non-contaminated people test positive (false positives). Let’s assume 100 people are living in Santa Clara, five of whom are sick (5% prevalence). If the other 95 non-sick people take the test, 4.75 of them will test positive, as many as the number of true positives among the five sick people (5 × 95% = 4.75). Therefore, for every positive test, there is roughly a one-in-two chance that the tested subject is actually sick; the short sketch below makes this arithmetic explicit.
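To make the calculation concrete, here is a minimal sketch of the positive predictive value of such a test, using the illustrative numbers above (5% prevalence, 95% sensitivity, 95% specificity), which are assumptions for the example rather than measured values:

```python
# Minimal sketch: probability that a positive serologic test is a true positive,
# using the illustrative numbers from the text (assumed, not measured values).
def positive_predictive_value(prevalence: float, sensitivity: float, specificity: float) -> float:
    true_positives = prevalence * sensitivity                # infected people who test positive
    false_positives = (1 - prevalence) * (1 - specificity)   # healthy people who test positive
    return true_positives / (true_positives + false_positives)

print(f"{positive_predictive_value(0.05, 0.95, 0.95):.0%}")  # -> 50%: a coin flip
```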

In a nutshell, COVID-19 testing is a jumble for now. 

Machine Learning: a Way out of the Jumble?

In these circumstances, a real clinical exam performed by the general practitioner remains a perfectly reasonable strategy for diagnosing symptomatic patients who are (or have been) infected. 

Within this scope, can Machine Learning help? We had long discussions to figure out whether Machine Learning could make a difference in helping general practitioners diagnose COVID-19 from a set of symptoms. We did not know whether Machine Learning could learn accurately from the small amounts of data produced by the medical environment (a few dozen, a few hundred, a few thousand patients at best). So we tried. We received 117 records provided by fewer than twenty general practitioners based in France. Each record included the name of the medical doctor who examined the patient, whether the consultation took place by video or in person, and its date, along with the patient’s gender, age, co-morbidities, and reported symptoms. For each instance, the doctor delivered a diagnosis concerning COVID-19 contamination. Few subjects had been lab-tested, as there was a shortage of tests in France and only the seriously ill received one. Furthermore, the database is biased, as it contains mostly positive COVID-19 diagnoses (85%). We used our proprietary Small Data Machine Learning software, TADA, to analyze the data. 
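TADA itself is proprietary, so we cannot show its internals here. Purely as an illustration, here is a minimal sketch of a comparable baseline experiment with scikit-learn on a similarly structured table; the file name and column names are hypothetical stand-ins, not the actual dataset:

```python
# Illustrative sketch only: a baseline classifier on a small, TADA-like tabular
# dataset. File and column names are hypothetical stand-ins for the 117 records.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("gp_consultations.csv")             # one row per consultation
X = pd.get_dummies(df.drop(columns=["diagnosis"]))   # one-hot encode categorical fields
y = df["diagnosis"]                                  # COVID-19 positive / negative

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)          # cross-validate: vital on small data
print(f"Mean accuracy: {scores.mean():.0%}")
```

On a dataset this small, any accuracy figure should be cross-validated and read with caution.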

Our first experiment consisted of supplying all the records to TADA. We then asked the tool: “Who is infected, and what are the essential factors you used to estimate contamination?”. The software made its predictions: 93 times out of 100, it correctly predicted whether a subject had been declared infected or not. Next, we looked at the foremost determinants TADA used to predict the outcome. They were, in order of importance: 

  • the general practitioner’s name, with a weight of 63% in the overall score,
  • dyspnea during physical activity, with an influence of 33%,
  • anosmia, with an importance of 3%,
  • pulmonary infection, with an impact of 1%. 

Together, the doctor’s name and dyspnea alone account for 96% of the COVID-19 positivity score. In short, this initial finding means that, for TADA, the general practitioner who performed the examination was the most prominent influencing factor (with a weight of 63%) in the COVID-19 diagnosis. This result does not appear to make sense. 
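This is a classic symptom of an identifier-like column leaking information into the model. Continuing the hypothetical scikit-learn sketch above (this is not how TADA computes its scores), one way to spot it is to rank feature importances:

```python
# Continuing the hypothetical sketch above (reuses X and y): rank which columns
# drive the predictions via impurity-based feature importances.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranked = sorted(zip(X.columns, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, weight in ranked[:4]:
    print(f"{name}: {weight:.0%}")
# If doctor-name dummy columns dominate, the model is exploiting who saw the
# patient (a sample-size artifact) rather than the symptoms themselves.
```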

Stepping Back from our Results

Where does this result come from? The database we used comprises 117 records distributed among 15 medical doctors. Some doctors are associated with extremely few records, which makes the ‘doctor name’ parameter biased by sample size. Consequently, we ran another experiment, this time challenging our tool: “Without considering the doctor’s name in the patient record, who is infected, and according to which criteria do you figure this out?” The predictions we obtained were 84% accurate, i.e., in line with the actual diagnoses. The predominant criteria were:

  • headaches, with a weight of 60% in the overall score,
  • dyspnea during physical activity, with an influence of 22%,
  • chest pain, with a weight of 14%,
  • patients asking for follow-up phone calls (a measure of the patient’s anxiety), with an importance of 4%.

Headaches and dyspnea alone account for 82% of the final score. 
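In the hypothetical scikit-learn sketch above, this second experiment amounts to dropping the doctor’s name before training (again, only an illustration of the idea, not TADA’s procedure):

```python
# Continuing the hypothetical sketch (reuses df and y): re-run the baseline
# without the doctor's name, so the model can only rely on clinical features.
X_clinical = pd.get_dummies(df.drop(columns=["diagnosis", "doctor_name"]))
scores = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                         X_clinical, y, cv=5)
print(f"Mean accuracy without doctor_name: {scores.mean():.0%}")
```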

More importantly, in both experiments, the rate of false negatives, i.e., telling patients that they do not have COVID-19 when they do, was very low. It was 3% in the first model (with the doctors’ names) and less than 2% in the second model (without them), an encouraging sign for the experiments to come. 
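For readers who want to reproduce this kind of measurement, the false-negative rate can be read off a confusion matrix. Here is a minimal sketch on the hypothetical setup above, using a held-out split rather than TADA’s own validation; the “negative”/“positive” label strings are assumptions:

```python
# Continuing the hypothetical sketch: estimate the false-negative rate, i.e.
# infected patients the model labels as not infected. Label strings are assumed.
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X_tr, X_te, y_tr, y_te = train_test_split(X_clinical, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te),
                                  labels=["negative", "positive"]).ravel()
print(f"False-negative rate: {fn / (fn + tp):.0%}")
```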

In Short

These were our very first experiments, and we wanted to share them with you. We are aware that the small number of records introduces biases into this first investigation. Nevertheless, the accuracy of the predictions, the low false-negative rate, and the identification of the key influencing factors are extremely encouraging results. We are continuing our experiments with more extensive databases and will share the results with you along the way. Our aim is to provide general practitioners with crisp tools to make their diagnoses faster and more specific.
