Artificial intelligence (AI) is on course to usher in the real era of The Art of Medicine. By automating administrative tasks and aiding decision-making, these smart algorithms promise to free up physicians’ valuable time; time that can be dedicated to those areas of healthcare where the human touch is essential.
The technology is also attracting investors to fuel growth in the field. The global market size for AI in healthcare is forecast to skyrocket to $28 billion by 2025. However, this increased interest is also breeding misleading information and hype around the technology. Companies falsely label their solutions as AI-based or inflate their tools’ performance to attract investors and clients. News outlets overuse the term for the sake of sensationalism and audience engagement. It has thus become tricky for physicians to separate the hype from the facts around AI in healthcare.
As such, a concise guide that helps medical professionals better understand the basics of AI and its potential in medicine is sorely needed. However, most of the guides available for this audience are either too superficial, too detailed, or focused on a single medical specialty. A middle ground with easily digestible information on the basics of AI is hard to come by.
The latest study from The Medical Futurist Institute (TMFI) addresses this gap. Led by Bertalan Meskó, M.D., ‘A Short Guide For Medical Professionals In The Era of Artificial Intelligence’ was published in npj Digital Medicine; the second TMFI paper to appear in this high-impact journal. You can read it in open access here, and in this article we guide medical professionals into the world of AI as well.
Definitions and levels of AI, made simple
Simply put, artificial intelligence refers to intelligence demonstrated by machines. Through algorithms, or a set of rules, which the machine follows, it mimics human cognitive functions such as learning and problem-solving.
Philosopher Nick Bostrom of the University of Oxford further expanded this definition to describe three major levels in the development of AI:
Artificial Narrow Intelligence (ANI):
ANI features pattern-recognition abilities in huge datasets, enabling it to solve text-, voice- or image-based classification and clustering problems. It is an algorithm that can excel at a single, precisely defined task. While it can play chess better than any grandmaster, its IQ is zero.
Artificial General Intelligence (AGI):
AGI has yet to be achieved; it would possess cognitive abilities on a par with human ones. It could reason, argue, memorise and solve problems just as you do.
Artificial SuperIntelligence (ASI):
Also a theoretical concept, ASI’s cognitive capacity would compare to that of the whole of humanity, if not exceed it. Humans would be unable to grasp its knowledge or follow its reasoning, and the technology itself might pose a threat. As such, many organisations work hard to avoid ever reaching this stage.
With these definitions and levels of AI development in mind, we can envision an “ideal AI”. In the graphic above, this theoretical threshold is represented by the green dot: much more advanced than current ANI, but not at the level of AGI.
From these definitions, we see that AI is itself a stratified technology; what is currently widespread is ANI, with all its limitations. Let’s now take a look at how the technology works and how it is trained.
The human teacher and the AI child: The methods of AI
Even with the seemingly mind-blowing prowess that AI is touted to boast every so often, the technology is nevertheless comparable to a child. Under appropriate human guidance, children learn to perform tasks well, even if not explicitly told what to do. In this scenario, AI developers are the teachers and AI is the child.
Through machine learning (ML), developers enable algorithms to learn a task without being explicitly programmed for that particular task. Fed with ample, good-quality data, smart algorithms use ML techniques to identify patterns in the dataset, and they excel at doing so. There are several ML subtypes and combined methods, but three major subtypes, as well as an advanced method, deep learning (DL), are the most relevant to healthcare. We elaborate on these below.
1. Supervised learning
This ML subtype is comparable to teaching a child exactly what to learn. It is used when the algorithm’s exact task can be precisely defined with the data at hand. In medical practice, it can look as follows. We have two patient groups, Group A and Group B, each with their own set of medical records. Group A’s set contains the family history, lab markers and other details of the diagnosis. Group B’s set consists of the same types of information, but the diagnosis is missing. We can train an algorithm with supervised learning to assign the right diagnosis to Group B, based on the associations and labels the algorithm learns about in Group A. This method is the most frequently used training mode.
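To make the idea concrete, here is a minimal, purely illustrative sketch of supervised learning in plain Python: a nearest-neighbour classifier assigns each unlabelled Group B record the diagnosis of the most similar labelled Group A record. All patient records, feature names and values below are hypothetical.

```python
# Toy illustration of supervised learning: a 1-nearest-neighbour classifier
# assigns a Group B patient the diagnosis of the most similar Group A patient.
import math

# Group A: records with a known diagnosis (all values are made up)
group_a = [
    ({"age": 52, "hba1c": 8.1, "family_history": 1}, "Type 2 Diabetes"),
    ({"age": 14, "hba1c": 9.4, "family_history": 0}, "Type 1 Diabetes"),
    ({"age": 48, "hba1c": 5.2, "family_history": 0}, "Healthy"),
]

def distance(a, b):
    """Euclidean distance between two feature dictionaries."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def predict(record):
    """Return the diagnosis of the closest labelled record in Group A."""
    _, diagnosis = min(group_a, key=lambda pair: distance(pair[0], record))
    return diagnosis

# Group B: the same types of information, but the diagnosis is missing
patient_b = {"age": 50, "hba1c": 7.9, "family_history": 1}
print(predict(patient_b))  # → Type 2 Diabetes
```

Real medical classifiers are of course trained on thousands of records with far richer features, but the principle is the same: labelled examples teach the algorithm which patterns map to which diagnosis.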
2. Unsupervised learning
As the name suggests, this method is akin to learning without a teacher. The starting tools are there, but the child decides on the end result. We provide different datasets to the algorithm and it finds associations on its own, even those we might not have thought about. Additionally, we do not modify the algorithm based on the outcome. Such a model can discover new drug-drug interactions or cluster patients according to the attributes they display.
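As an illustration, the sketch below implements a bare-bones k-means clustering in plain Python: given only patient attributes (hypothetical age and daily medication count here) and no labels at all, it groups the patients into two clusters on its own.

```python
# Toy illustration of unsupervised learning: k-means clustering groups
# patients by their attributes without any diagnosis labels being provided.

patients = [(25, 1), (30, 2), (28, 1), (70, 8), (75, 9), (68, 7)]  # (age, meds/day)

def kmeans(points, k=2, iterations=10):
    """Minimal k-means: assign each point to its nearest centroid,
    then recompute the centroids, for a fixed number of rounds."""
    centroids = points[:k]  # deterministic initialisation for the demo
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: sum(
                (p[d] - centroids[i][d]) ** 2 for d in range(len(p))))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

young, older = kmeans(patients)
print(young)  # the three younger, low-medication patients end up together
```

No one told the algorithm what "young" or "older" means; the grouping emerges from the data itself, which is exactly how unsupervised models can surface patient subgroups we had not thought to look for.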
3. Reinforcement learning
Reinforcement learning shares similar features with unsupervised learning in that the starting tools are given to the “child” and it is left to make decisions on its own in order to achieve a task. However, unlike unsupervised learning, reinforcement learning involves input from the teacher. After a series of actions (not after each action, as with supervised learning), AI developers provide feedback to nudge the algorithm towards the best course of action. The issue with using this subtype in healthcare is that we cannot test the algorithm on a large number of scenarios, since patient lives are at stake.
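The idea can be sketched with tabular Q-learning, a classic reinforcement learning method, on a toy five-state “corridor” standing in for any sequence of decisions: the agent only receives feedback (a reward) when it reaches the goal, yet gradually learns the best course of action. This is a textbook illustration, not anything resembling a clinical application.

```python
# Toy illustration of reinforcement learning: tabular Q-learning in a
# five-state corridor. The agent is only rewarded on reaching the goal.
import random

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left, move right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):  # training episodes
    state = 0
    while state != GOAL:
        # epsilon-greedy: mostly exploit what was learned, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned greedy policy: the best action (+1, i.e. "right") in each state
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
print(policy)
```

Note how sparse the feedback is: the reward arrives only at the end of a whole sequence of actions, and the algorithm must work backwards from it. This trial-and-error requirement is precisely why the method is hard to apply safely when patients are involved.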
4. Deep learning
DL is an advanced subtype of ML that holds altogether different potential. It is based on artificial neural networks (ANNs), which are themselves inspired by the neural networks of the human brain. DL uses a layered ANN structure: the more layers it has, the more complex the tasks it can perform. Let’s say we are building a model to group patients based on their diagnosis. If the information reads “Type 1 Diabetes”, an ML algorithm will cluster medical records containing “Type 1 Diabetes”. A DL algorithm, on the other hand, will in time be able to assign patients with only the “T1D” abbreviation in their records to the same group, without human input. Other ML subtypes would require manual input from the developers to recognise this abbreviation.
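For the curious, the sketch below shows the core mechanics behind deep learning: a small neural network with one hidden layer, trained by gradient descent on the XOR problem, which no single-layer model can solve. Real DL systems use frameworks such as TensorFlow or PyTorch and many more layers; this pure-Python version only illustrates the layered structure and the training loop.

```python
# Minimal neural network (one hidden layer) trained by gradient descent.
import math, random

random.seed(1)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# XOR: a task that requires at least one hidden layer
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

H = 4  # hidden-layer size
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    """Pass the input through the hidden layer, then the output layer."""
    hidden = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    out = sigmoid(sum(w2[j] * hidden[j] for j in range(H)) + b2)
    return hidden, out

def total_loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

initial_loss = total_loss()
lr = 1.0
for _ in range(3000):
    for x, y in data:
        hidden, out = forward(x)
        # backpropagate the squared error through both layers
        d_out = 2 * (out - y) * out * (1 - out)
        for j in range(H):
            d_hid = d_out * w2[j] * hidden[j] * (1 - hidden[j])
            w2[j] -= lr * d_out * hidden[j]
            for i in range(2):
                w1[j][i] -= lr * d_hid * x[i]
            b1[j] -= lr * d_hid
        b2 -= lr * d_out

print(initial_loss, "->", total_loss())  # training shrinks the error
```

The “depth” of modern networks comes from stacking many such layers, each building more abstract representations of the previous one, which is what lets a trained model learn on its own that “T1D” behaves like “Type 1 Diabetes”.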
How to separate hype from fact
Now that we’ve been through the basics of AI, we can take a better-informed look at the news and even the studies around the technology. Indeed, as the infographic below depicts, the number of ML/DL studies on PubMed saw an exponential rise in the last decade:
As for online outlets reporting on the technology and inflating its potential, the number is even higher. To help you navigate the flood of news and filter out the hype from the facts, you can take the general steps below to better understand the impact of the technology in question:
1. It’s about the data
If a study mentions the use of an AI-based technology, check the ‘Methods’ section to find out what types of data the authors used to train the algorithm. Without a large amount of quality data, an algorithm won’t be reliable. Such quality and quantity of data are obtainable through collaborations with clinicians and healthcare institutions. Be aware that some researchers amplify their dataset with tricks that don’t improve its quality: by simply rotating images, for example, they can double their dataset’s size.
2. Clinical solutions for clinical problems
While state-of-the-art AI can scan medical data in a matter of seconds, its implementation in a clinical setting is a different story. If such tools aren’t easily integrated into clinical protocols or don’t provide straightforward results to the medical staff, they might very well perform worse than human professionals in a real-world clinical setting.
Still, on the matter of clinical settings, it’s important to know whether the algorithm in question was tested on real clinical data. If only pre-selected data was used in a study, the results might not translate similarly when tested on real clinical data.
3. Know your AI
When claiming to have created an AI-based tool, companies and researchers should mention the subtype of ML or DL used. They should also be able to detail the method used in its development. If they don’t mention these, it’s reason enough to be sceptical about whether their technology really involves AI.
Similarly, when reading news that mentions the term “artificial intelligence”, check whether the article describes the exact method behind the term. If it does not, it’s a sign that you should be cautious about the article’s claims.
While we aimed to provide an easily digestible guide with this article, we invite you to read more about what medical professionals should know about AI in the latest study from Dr. Bertalan Meskó and Marton Gorog.
We would also recommend keeping up with the latest developments in the medical AI field. From managing the COVID-19 pandemic through improving prosthetics to uncovering unusual associations in medicine, we are sure to hear more about AI’s contributions to healthcare. Rest assured that at The Medical Futurist, we will keep sharing the most exciting (and credible) applications and trends of the technology.
The post A Physician’s Visual Guide To Artificial Intelligence appeared first on The Medical Futurist.