The artificial intelligence revolution is in full swing. AI is both powering and impacting our everyday lives.
Whether you are using Google Maps to find the shortest commute to work, scheduling a hair appointment for your mother by voice command, or viewing personalized recommendations while shopping online, you are interacting with AI. The technology has disrupted nearly every industry, and it’s now taking over health care.
Many of the major issues within our health care system are now being addressed using a family of AI techniques known as deep learning (DL) models. These methods allow us to feed raw data into a computer that automatically discovers the representations needed for classification, detection and diagnosis.
Data-driven predictions underpin personalized medicine, and such insights are key to providing better, more cost-effective care. Leading health care experts affirm that “the scenario in which medical information, gathered at the point of care, is analyzed using algorithms to provide real-time, actionable insights is now within touching distance.” Putting sophisticated DL tools at the fingertips of physicians, nurses and caregivers empowers them to deliver “The Triple Aim” — improve the patient experience of care, improve the health of populations, and reduce the cost of care.
DL models, also referred to as deep neural nets, use a hierarchical structure in which data are broken down, distributed and organized into consecutive layers. Together, these layers constitute an artificial neural network, a structure inspired by — and designed to mimic — the structure and function of neurons in the human brain.
Arguably, one of the most attractive signatures of DL is that “the layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure.” What this means is that the machine can “learn” from training or “labeled” data, and can then process new data and identify new patterns without being explicitly programmed to do so.
Previously, constructing a machine learning (ML) system required engineering experts to design sophisticated feature extractors to transform data into suitable representations. Now with DL, rather than doing the feature engineering by hand, engineers can feed a learning algorithm terabytes of relevant data and let the computer discover the patterns that produce the desired result.
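The idea can be sketched with a toy example. The small two-layer network below, written in plain NumPy, is given only labeled examples of the XOR function — a classic pattern no single-layer model can capture — and learns the mapping by gradient descent, with no hand-designed features. The network size, learning rate and iteration count are illustrative choices, not details from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Labeled training data: inputs and their XOR labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for a 2 -> 8 -> 1 network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)      # hidden-layer features (learned, not designed)
    return h, sigmoid(h @ W2 + b2)

_, out = forward(X)
initial_loss = float(np.mean((out - y) ** 2))

lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    # Backward pass: gradient descent on the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

Nothing in the code tells the network what XOR is; the hidden-layer features that make the problem solvable emerge from the labeled examples alone, which is the point the paragraph above makes at industrial scale.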
The successful implementation of DL nets requires large amounts of data, ultrafast processing and large, sophisticated models whose performance scales with increases in both data and model size. Significant advances have been made on all three fronts in recent years, and as a result we have observed a notable, sudden success of DL in a variety of industries.
Massive datasets are required to properly train sophisticated DL nets. For instance, thousands to millions of images are typically used in training datasets for automatic image recognition. Fortunately, in 2017, for the first time in recorded history, the number of connected devices exceeded the world’s population. The recent growth of internet-connected devices (smartphones, wearables, etc.) has opened the floodgates for digital data. In fact, it is estimated that 90 percent of the data in the world today was created in the last two years alone.
Deep learning requires powerful computer processors to perform massive numbers of calculations quickly and efficiently. A commonly used convolutional neural net (NN) of 16 layers, for example, contains over 100 million parameters. Until the advent of the graphical processing unit (GPU), central processing units (CPUs) were used to train NNs. CPUs perform these calculations largely one at a time, whereas GPUs can perform many of them in parallel.
When GPUs were first applied to neural nets in the late 2000s, they proved able to train them considerably faster than the fastest CPUs. For instance, a CPU may take 150 hours to train a convolutional NN that a GPU can train in two.
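The parameter count mentioned above can be checked with back-of-envelope arithmetic. The sketch below tallies the weights and biases of the 16-layer VGG-16 architecture, a standard published configuration assumed here to be the kind of “commonly used” convolutional net the text refers to:

```python
# Back-of-envelope parameter count for a VGG-16-style convolutional net
# (13 convolutional layers + 3 fully connected layers).

def conv_params(in_ch, out_ch, k=3):
    # Each filter has k*k*in_ch weights plus one bias; out_ch filters total.
    return out_ch * (in_ch * k * k + 1)

def fc_params(n_in, n_out):
    # Dense weight matrix plus one bias per output unit.
    return n_in * n_out + n_out

# (in_channels, out_channels) for the 13 conv layers of VGG-16.
conv_layers = [(3, 64), (64, 64),
               (64, 128), (128, 128),
               (128, 256), (256, 256), (256, 256),
               (256, 512), (512, 512), (512, 512),
               (512, 512), (512, 512), (512, 512)]

total = sum(conv_params(i, o) for i, o in conv_layers)

# After five 2x2 poolings, the 224x224 input becomes a 7x7x512 volume.
total += fc_params(512 * 7 * 7, 4096)
total += fc_params(4096, 4096)
total += fc_params(4096, 1000)   # 1,000 ImageNet classes

print(f"{total:,} parameters")   # 138,357,544
```

Notably, the first fully connected layer alone contributes over 100 million of those parameters, which helps explain why the parallel arithmetic of GPUs makes such a dramatic difference in training time.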
DL nets are fundamentally different from other ML techniques in that their performance continues to improve as both data and model size increase.
Deep learning networks are being tested in a multitude of health care applications — imaging diagnostics on the frontline, clinical decision-making and “machine-augmented” preliminary diagnoses. The vast majority of DL applications in health care have used deep convolutional nets for processing images, video and audio; however, emerging work with recurrent nets is showing progress in making insightful predictions on behavioral and biological data.
Over the past year, deep convolutional nets have successfully been used to improve medical image analysis. Recent studies illustrate that such models can identify cancer up to 50 percent faster, with performance on par with leading radiologists and dermatologists.
These algorithms could potentially save our health care system billions of dollars annually by providing a preliminary diagnosis before a patient sees a specialist or visits an emergency room. Another recent study showed that a deep learning system analyzing OCT images could detect over 50 eye diseases as accurately as a doctor. This software was issued the first Food and Drug Administration permit for an AI diagnostic system.
In addition to diagnosing diseases, other deep learning platforms are being commercialized to provide preventative, person-centered senior care. For example, CarePredict has commercialized a system that helps predict when seniors are at a higher fall risk or showing changes in activity and behavior foreshadowing a urinary tract infection or depression.
In summary, the use of deep learning tools in health care has increased in recent years, and early results demonstrate their game-changing capability. The field is rapidly maturing, and DL “augmentation” tools will not only empower our physicians, nurses and caregivers, but give them the tools to continue to deliver on “The Triple Aim” for years to come.
Gerald J. Wilmink, Ph.D., MBA, is the chief business officer of CarePredict.
Morning Consult welcomes op-ed submissions on policy, politics and business strategy in our coverage areas. Updated submission guidelines can be found here.