Deep Learning to Revolutionize the Cure for Cancer
Cancer is the second leading cause of death worldwide, after cardiovascular diseases. In 2018, there were an estimated 18.1 million new cancer cases, with an annual economic burden exceeding 1.16 trillion dollars worldwide. Even these figures cover only definitively diagnosed cases, as a large share of the affected population goes undiagnosed due to inaccessible medical facilities or a lack of infrastructure capable of detecting the enormous diversity cancer presents. Cancer is becoming one of the most widespread diseases, thanks in part to our modern lifestyle. The following visualizations show the rate of new cancers in the U.S. (state-wise map visualization) and the types of cancers by number of cases and death tolls.
With recent advancements in medicine, many statistics show that diagnostic accuracy has increased. However, errors due to lead-time bias, length bias, overdiagnosis, and similar effects can inflate and distort these numbers. Present detection techniques are simply not powerful enough to characterize each cancer accurately in terms of diagnosis and prognosis, nor can they predict the possibility of malignancy at a later stage. This is exactly what machine learning (ML), and deep learning in particular, is set to change.
The disparity between diagnostic and prognostic accuracy is a telling sign of our limited understanding of cancer. At present, doctors can correctly determine whether a patient has cancer from various test results roughly 70% of the time or better. In contrast, prognostic accuracy is mostly below 50%.
One study found prognoses to be accurate to within one week of the actual outcome only 25% of the time, within two weeks 43% of the time, and within four weeks 61% of the time. That accuracy improves only as the tolerance widens suggests most long-term prognosis is largely speculation. As cancer progresses, so does its unpredictability, and doctors consequently make more mistakes both in diagnosis and in prescribing definitive, prognosis-based treatment. This is why early detection is so essential, even for relatively harmless cancers.
With machines, those numbers move into the 80s and 90s. Convolutional neural networks (CNNs), for instance, learn from structured or unstructured datasets to perform pixel-level analyses of images, detecting edges and curves from the relationships between neighboring pixels, and thereby aid diagnosis, prognosis, and highly targeted treatment recommendations.
Such granularity cannot be provided by a human. Most medical diagnoses rely on some form of imaging, either in vivo or in vitro. AI systems can read thousands of these images in minutes, learning to recognize variables, dimensions, normal structures, and anomalies. As more training data is run through a model, its analysis becomes more refined, rapid, and accurate, often matching or exceeding the performance of multiple human readers combined.
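The pixel-level edge detection described above comes down to convolution: sliding a small weighted window over an image and measuring how strongly each neighborhood matches a pattern. The sketch below is purely illustrative (the toy "image" and Sobel-like kernel are invented for the example, not taken from any clinical system):

```python
# Illustrative sketch: how a single convolutional filter detects edges
# in a grayscale image, the core operation behind CNN-based image analysis.

def convolve2d(image, kernel):
    """Apply a 2D convolution (no padding, stride 1) to a grayscale image."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    output = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            total = 0
            for di in range(kh):
                for dj in range(kw):
                    total += image[i + di][j + dj] * kernel[di][dj]
            row.append(total)
        output.append(row)
    return output

# A vertical-edge kernel (Sobel-like): responds where pixel intensity
# changes sharply from left to right, e.g. a tissue boundary in a scan.
vertical_edge = [[-1, 0, 1],
                 [-2, 0, 2],
                 [-1, 0, 1]]

# Toy "image": dark region on the left, bright region on the right.
image = [[0, 0, 0, 9, 9, 9]] * 6

response = convolve2d(image, vertical_edge)
# The response is strongest at the columns where the boundary lies.
print(response[0])  # → [0, 36, 36, 0]
```

A CNN stacks many such filters in layers and learns their weights from labeled scans, rather than using hand-written kernels like this one.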
Artificial neural networks (ANNs) produce more accurate results when they have access to all the variables that affect cancer. A simple ANN working model has three layers: the input layer, the hidden layer, and the output layer. The output layer produces the diagnosis, prognosis, and recommended treatment. The hidden layer holds the intermediate, multi-level processed data and information.
The input layer consists of all the data the doctor considers relevant to the cancer. This includes the patient’s medical history and test results from whichever technique was used, such as mammography, ultrasound (USG), CT, MRI, PET, or radio-imaging. Beyond that, secondary data, such as the patient’s diet and exercise habits, pollutant levels in their usual environment, possible contact with common carcinogens, history of substance abuse, and daily health data from wearables, could give useful insight into possible causes and expected progression.
ML can elevate cancer prediction from mere conjecture and hypothesis to precise statements with a low error margin.
There are two main avenues where ML can benefit the cancer cure: cancer prognosis and cancer outcome prediction. Prognosis is far more developed than outcome prediction, so we will discuss the use cases for the former here.
Cancer susceptibility refers to the likelihood of a person developing a certain type of cancer. Predicting susceptibility is as much an individual study in pathology as it is a demographic study of various genetic, epigenetic, biochemical, environmental, and anthropological factors. It is one of the most complex procedures, requiring a massive sample set spanning several types of medical and biological data.
Pre-processing is a must to reduce the complexity of the data as much as possible. Once a model is created, it can be used to pinpoint high-risk individuals and subject them to screening for early detection.
The dataset for this includes as many instances of cancer as researchers can obtain. The algorithm should be kept dynamic, allowing instantaneous addition of new cases, deletion of false positives, and correction of wrong diagnoses. The model can then be refined to filter and present even more specific data relating to the type, aggressiveness, stage, spread, and metastasis of a cancer.
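One way to realize the "dynamic algorithm" idea above is online learning, where the model absorbs each newly confirmed case as it arrives instead of being retrained in bulk. The sketch below uses a simple online logistic classifier; the features and case stream are fabricated for illustration:

```python
import math

# Sketch of a dynamically updated risk model: an online logistic
# classifier refined one confirmed case at a time. All data invented.

class OnlineRiskModel:
    def __init__(self, n_features, lr=0.1):
        self.weights = [0.0] * n_features
        self.bias = 0.0
        self.lr = lr

    def predict(self, features):
        """Return a risk score in (0, 1) for one case."""
        z = self.bias + sum(w * x for w, x in zip(self.weights, features))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, features, label):
        """One SGD step on a single (case, diagnosis) pair.
        label = 1 for a confirmed cancer case, 0 otherwise. A false
        positive is corrected by re-running update with label = 0."""
        error = self.predict(features) - label
        self.weights = [w - self.lr * error * x
                        for w, x in zip(self.weights, features)]
        self.bias -= self.lr * error

model = OnlineRiskModel(n_features=2)
# Stream of (features, confirmed diagnosis) pairs arriving over time.
for features, label in [([0.9, 0.8], 1), ([0.1, 0.2], 0),
                        ([0.8, 0.9], 1), ([0.2, 0.1], 0)] * 50:
    model.update(features, label)

# After training, a high-risk profile scores above a low-risk one.
print(model.predict([0.85, 0.9]) > model.predict([0.1, 0.1]))  # → True
```

The same pattern (incremental `update` calls) is what allows new cases, corrections, and deletions of false positives to reshape the model without a full retrain.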
As explained before, AI-driven algorithms can extract this level of detail from basic images and high-volume data with great accuracy in record time.
Prediction of recurrence requires both genomic and clinical data, including imaging. This is because recurrence has as much to do with susceptibility as it does with the nature of cancer.
The best way for the machine to learn the distinction between a case with high recurrence probability and one with a low value is to study and compare the records of patients who showed relapse at various levels with those who did not, keeping as many external factors and circumstances constant as possible. The data thus obtained can be utilized to improve follow-up and recall accuracy.
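One simple way to operationalize this record comparison is nearest-neighbor matching: score a new patient's recurrence risk by finding the most similar past cases and checking whether those cases relapsed. The records and features below are fabricated for illustration:

```python
import math

# Sketch of recurrence prediction by record comparison: classify a new
# patient by majority vote among the k most similar past cases.
# All records here are invented for illustration.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_recurrence(records, patient, k=3):
    """records: list of (features, relapsed) pairs; returns majority vote."""
    nearest = sorted(records, key=lambda rec: distance(rec[0], patient))[:k]
    votes = sum(relapsed for _, relapsed in nearest)
    return votes > k // 2

# Hypothetical normalized features: [tumor size, lymph node involvement]
records = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.7, 0.7], 1),
           ([0.2, 0.1], 0), ([0.1, 0.2], 0), ([0.3, 0.3], 0)]

print(knn_recurrence(records, [0.85, 0.75]))  # → True: resembles relapse cases
print(knn_recurrence(records, [0.15, 0.2]))   # → False: resembles non-relapse
```

Holding external factors constant, as the text suggests, corresponds here to choosing features on which cases are genuinely comparable before measuring distance.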
A survivability prediction model, although possible, has the flaw of being overly narrow: it excludes other causes of mortality that might plausibly be traced back to the cancer or its treatment. This is because survivability looks the farthest into the future for an individual and therefore has the least room for discrepancies.
It must take into account both the cancer and the treatment, the individual’s response to each, and the subsequent possibility of exposure to factors that could influence the patient’s likelihood of survival.
Using deep learning viably in cancer treatment is still at a nebulous stage, and there is much room for advancement. Areas that require more work include outcome prediction covering response to treatment, progression, survivability, and life expectancy; the possibility of genetic transmission; risk stratification for a targeted response; and accounting for all possible causes of cancer.
Current limitations in both bioinformatics and medical science, especially genetics, must be overcome for this. Also important is stronger security around medical data privacy, since AI requires sensitive medical information to work, and that information, in the wrong hands, could put a person’s life at risk.
Deep learning has the potential to make the cure, eradication, and prevention of cancer as attainable as for any other disease, but it is not yet developed enough to do so. Even as various outfits race to hasten the process, given the pandemic scale of cancer, it will be a long time before machines can be trusted to do the work of doctors. Once we reach that point, however, there is no doubt that people will no longer have to fear the disease as they do today.
Omkar Shindekar, Strategic Intern, NeenOpal Analytics