This article was originally published in HIStalk.
First, a confession. Our company leverages machine learning in our operating room utilization software solution. As such, we stand to benefit from the AI hype machine that is running at full speed. But I promise you, the intent of this paper is not self-promotion; it’s to help you distinguish true AI value from mere marketing hype.
Understanding the categories of AI, and a little about how each tool set works, is essential, even though it requires wading into the weeds a bit.
Before delving into the nuances of true versus misleading AI claims, let’s first understand some fundamental AI categories relevant to healthcare. The items below aren’t a comprehensive list, but they do capture some of the most common and important categories you’ll see:
Machine learning is a branch of AI that uses specific algorithms to analyze and learn from large amounts of data. This “training” process, and subsequent testing, results in a model that can make predictions or decisions without being explicitly programmed to perform the task.
Training methods for different ML algorithms include supervised learning (learning from labeled examples), unsupervised learning (finding structure in unlabeled data), and reinforcement learning (learning through trial and error guided by feedback).
ML excels at tasks where patterns and structures can be discerned from data, such as prediction (predicting hospital readmission rates based on patient data), classification (classifying skin lesions as benign or malignant based on image data), and clustering (segmenting patients into different risk groups based on their health data).
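To make the prediction example concrete, here is a minimal sketch of a readmission classifier using scikit-learn. The features, data, and model choice are invented purely for illustration; a real model would be trained and validated on actual clinical data.

```python
# Sketch only: predicting 30-day readmission from synthetic, invented features.
# Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Invented features: age, length of stay (days), number of prior admissions
X = np.column_stack([
    rng.integers(20, 90, size=500),   # age
    rng.integers(1, 15, size=500),    # length of stay
    rng.integers(0, 6, size=500),     # prior admissions
])
# Synthetic label: readmitted within 30 days (1) or not (0)
y = (X[:, 1] + 2 * X[:, 2] + rng.normal(0, 3, size=500) > 10).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

# The trained model scores new patients without any hand-written rules
new_patient = [[72, 9, 3]]  # age 72, 9-day stay, 3 prior admissions
print("Predicted readmission risk:", model.predict_proba(new_patient)[0, 1])
```

The point of the sketch is the workflow itself: the model's behavior comes from the training data rather than from rules someone typed in.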
Think of deep learning as a team of detectives working on a case. Each detective looks at a part of the evidence and makes their own observations. They pass their findings to a senior detective, who then makes more complex observations based on the initial detectives’ findings. This goes on until the chief detective (the final layer of the network) makes a decision based on all these observations.
In the world of AI, each ‘detective’ is a layer in an artificial neural network. Each layer looks at some aspect of the data and passes on its findings to the next layer. This allows the network to learn from simple features at lower layers to more complex features at higher layers.
Let’s translate this into a healthcare example. Consider a deep learning model analyzing an MRI scan to detect a tumor. The initial layers might look for basic features like edges or colors. The next layers might recognize more complex patterns like shapes or textures. And the higher layers might identify the specific features of a tumor. Just like our detective team, each layer contributes to the final decision, allowing the model to accurately identify whether a tumor is present.
Deep learning excels at tasks involving unstructured data such as images, audio, and text. For instance, deep learning algorithms can analyze MRI images to detect tumors, listen to a patient’s speech to help assess mental health conditions, or analyze electronic health records to predict patient outcomes.
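As a rough illustration of that layered structure, here is a minimal sketch of a small convolutional network in Keras (assuming TensorFlow is installed). The input size and layer choices are arbitrary placeholders, not a validated medical-imaging architecture.

```python
# Sketch only: a tiny convolutional network for a binary "tumor / no tumor"
# image task. Assumes TensorFlow/Keras; layer sizes are arbitrary placeholders.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(128, 128, 1)),        # grayscale scan slices
    layers.Conv2D(16, 3, activation="relu"),  # early layers: edges, simple textures
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),  # deeper layers: more complex patterns
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # the "chief detective": tumor or not
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# Training would then look like: model.fit(images, labels, epochs=..., validation_split=...)
```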
Natural language processing (NLP) involves the interaction between computers and human (natural) languages. This technology allows computers to understand, interpret, and generate human language in a valuable way. At its core, NLP involves machine learning to automatically learn rules by analyzing a set of examples and making a decision based on them. This decision could be understanding sentiment, translating languages, or converting speech to text.
In the healthcare sector, NLP can be used to interpret clinical documentation, analyze patient feedback, or enable natural language user interfaces (e.g., chatbots for patient engagement).
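For a flavor of how this looks in code, here is a minimal sketch of a classic NLP pipeline that classifies the sentiment of patient feedback, assuming scikit-learn; the feedback snippets and labels are invented.

```python
# Sketch only: sentiment classification of patient feedback with a classic
# TF-IDF + logistic regression pipeline. The text and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

feedback = [
    "The nurses were attentive and the discharge process was smooth.",
    "I waited four hours and nobody explained the delay.",
    "Scheduling was easy and the staff answered all my questions.",
    "The billing statement was confusing and no one would help.",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

nlp_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
nlp_model.fit(feedback, labels)

print(nlp_model.predict(["The staff were friendly and explained everything clearly."]))
```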
Generative AI involves creating new, previously unseen content. Think of it like an AI artist that creates new works based on styles it has learned from. Generative AI is not limited to any particular type of content and can generate images, text, music, and other types of data.
In healthcare, generative AI could be used to create synthetic patient data for research or training purposes without compromising patient privacy. For instance, a generative model could be trained on real patient data and then generate new records that preserve the statistical properties of the original data (like the distribution of different diseases or the average patient age) but do not correspond to any real individual patient. This synthetic data can then be shared and analyzed far more freely, with much lower risk of privacy violations.
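As a toy illustration of that idea, the sketch below fits a very simple statistical model to stand-in “real” data and then samples new records from it. Real synthetic-data systems rely on far richer generative models (GANs, variational autoencoders, and the like); every number here is invented.

```python
# Sketch only: "learn" the distribution of stand-in patient data, then sample
# synthetic records that preserve its statistics. All numbers are invented.
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for real data: columns are age and systolic blood pressure
real = np.column_stack([
    rng.normal(62, 12, size=1000),   # age
    rng.normal(130, 15, size=1000),  # systolic BP
])

# "Train" the generator by estimating the mean and covariance of the real data
mean, cov = real.mean(axis=0), np.cov(real, rowvar=False)

# Generate synthetic patients from the learned distribution
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print("Real mean age / BP:     ", real.mean(axis=0).round(1))
print("Synthetic mean age / BP:", synthetic.mean(axis=0).round(1))
```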
Computer vision is like teaching a computer to ‘see’ and interpret visual data in the way humans do. This technology is extremely versatile, being used in everything from self-driving cars to facial recognition software.
In healthcare, computer vision is often used in medical imaging to detect diseases and conditions. For example, computer vision algorithms can be used to analyze X-rays, MRIs, or CT scans to detect tumors, fractures, or other abnormalities. It’s also used in telemedicine solutions, where computer vision algorithms can help monitor patients and detect abnormalities or changes in their condition. Computer vision is also used in robotics.
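Here is a deliberately simple sketch of one step a computer vision pipeline might perform: isolating a bright region in an image by thresholding and contour detection. It uses OpenCV on a synthetic image so it does not depend on any real scan; a production imaging model would instead rely on trained deep learning networks.

```python
# Sketch only: isolate a bright region in a synthetic grayscale image.
# Assumes OpenCV (cv2) and numpy are installed.
import numpy as np
import cv2

# Synthetic 256x256 "scan": dark background with one bright circular region
image = np.full((256, 256), 40, dtype=np.uint8)
cv2.circle(image, (90, 150), 20, 220, -1)

# Threshold to isolate bright pixels, then outline the regions that remain
_, mask = cv2.threshold(image, 128, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    print(f"Bright region found at x={x}, y={y}, size={w}x{h}")
```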
Imagine AI as a detective solving a complex case. To do this, it needs a vast amount of knowledge about the world, along with the ability to reason with this knowledge to draw conclusions. That’s what knowledge representation and reasoning AI do.
In the healthcare domain, such AI can be used in clinical decision support systems to aid physicians in diagnosing diseases. The AI system has access to a vast amount of medical knowledge and can reason with this knowledge to provide suggestions to physicians.
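The toy sketch below illustrates the idea: a small knowledge base mapping conditions to typical findings, plus a reasoning step that ranks conditions by how many of a patient’s findings they explain. The medical content is invented and grossly simplified; real clinical decision support draws on curated knowledge bases and far more sophisticated inference.

```python
# Sketch only: a tiny knowledge base plus a simple reasoning step.
# The condition/finding pairs are invented and grossly simplified.
knowledge_base = {
    "pneumonia": {"fever", "cough", "shortness of breath"},
    "influenza": {"fever", "cough", "body aches"},
    "asthma": {"wheezing", "shortness of breath"},
}

def suggest_conditions(findings):
    """Rank conditions by how many observed findings they explain."""
    scores = {
        condition: len(findings & typical)
        for condition, typical in knowledge_base.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

patient_findings = {"fever", "cough", "shortness of breath"}
for condition, score in suggest_conditions(patient_findings):
    print(f"{condition}: explains {score} finding(s)")
```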
It’s hard to decide where to draw the lines when categorizing types of AI. For instance, the problem of tumor detection often involves computer vision as part of a series of machine learning models that feed into a deep learning network. Additionally, a field like robotics is sometimes considered its own AI category and other times considered an application that uses specific categories of AI. I’m confident that people smarter than me will sometimes disagree with my categorization choices.
While understanding AI categories is a good starting point, the key to discerning genuine AI applications in healthcare software lies in recognizing when these technologies add real value to a process or outcome. And remember: AI, at its core, should aid decision-making, not replace it.
It’s easy for marketing campaigns to dress up their solutions with the AI label, but there are several ways in which the reality may fall short of the hype.
AI, including ML, DL, and NLP, learns and improves from data over time, enabling complex decision-making that is generalizable and extends beyond predefined rules. In contrast, rules-based automation, though beneficial in certain contexts, lacks this level of complexity and adaptability.
Consider, for example, a software solution that sends alerts when patient vitals reach certain thresholds. This represents a rules-based automation system, not AI. A genuine AI solution might continuously analyze patient data, learn from it, predict potential health risks before they become critical, and even suggest personalized treatment plans. This isn’t to say that a solution that leverages AI is necessarily better than a rules-based automation solution. However, beware of vendors dressing up automation as AI, using the hype to justify a higher price.
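The contrast can be shown in a few lines of code. The thresholds, features, and model below are invented placeholders, but they capture the difference between a hand-written rule and behavior learned from data.

```python
# Sketch only: a hand-written rule versus a model trained on (synthetic) data.
# Thresholds, features, and labels are invented placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Rules-based automation: a fixed threshold check someone wrote by hand
def rules_based_alert(heart_rate, systolic_bp):
    return heart_rate > 120 or systolic_bp < 90

# An ML alternative: behavior comes from training data, not hand-written rules
rng = np.random.default_rng(0)
vitals = rng.normal([80, 120], [20, 15], size=(300, 2))                  # heart rate, systolic BP
deteriorated = ((vitals[:, 0] > 95) | (vitals[:, 1] < 105)).astype(int)  # synthetic labels

risk_model = RandomForestClassifier(random_state=0).fit(vitals, deteriorated)

print("Rule fires?  ", rules_based_alert(118, 95))
print("Learned risk:", risk_model.predict_proba([[118, 95]])[0, 1])
```

No amount of tuning makes the first function adapt to new data; that adaptability is the distinction that matters when a vendor reaches for the AI label.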
Some solutions may incorporate AI where it’s unnecessary, serving more as a marketing tool than a feature that adds value to the end user. An example could be an element of a software solution that matches tasks to individuals based on skills. It might use an unnecessarily complex ML categorization algorithm where a simple lookup table would have sufficed.
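As a toy illustration of that point, the snippet below shows how far a plain lookup table can go when the mapping is small and fixed; the skills and tasks are invented placeholders.

```python
# Sketch only: a plain lookup table for skill-to-task matching.
# The skills and tasks are invented placeholders.
TASKS_BY_SKILL = {
    "phlebotomy": "morning blood draws",
    "wound care": "dressing changes",
    "IV certification": "IV starts",
}

def assign_task(skill):
    return TASKS_BY_SKILL.get(skill, "general rounding")

print(assign_task("wound care"))  # dressing changes
print(assign_task("scheduling"))  # falls back to general rounding
```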
Some vendors may oversell what their AI can achieve. While AI can indeed help predict patient outcomes, claiming perfect accuracy is unrealistic and potentially misleading. Also, prediction accuracy in machine learning is a moving target as training data sets change over time, sometimes in response to the ML-based intervention itself.
Downplaying the need for quality data can lead to overstating anticipated results. AI systems are only as good as the data they’re trained on. If a solution downplays the importance of data quality, quantity, or diversity, be skeptical.
True AI applications in healthcare are designed to support and enhance human decision-making, not replace it. If a solution suggests that its AI can replace human judgment entirely, it’s likely overhyped. It could also be dangerous.
While Copient Health indeed benefits from the AI boom, we urge discernment when it comes to AI in healthcare. Understanding the basics and recognizing when AI genuinely adds value is critical. The future of healthcare is undeniably intertwined with AI. With a robust grasp of the subject, you’ll be primed to guide your organization into a more efficient, patient-focused era.