Everywhere we look, there seems to be an air of pessimism surrounding AI, even though we have not actually made any dramatic leap in AI development recently. What has changed is that over the past few months, thanks to ChatGPT, the topic has entered the mainstream spotlight.
Figures like Tristan Harris spread fear, Yuval Noah Harari claims the threat is even worse than nuclear war, major tech companies and moguls call for a six-month moratorium on the development of large language models (though it’s clear that nations such as China won’t halt their progress), some countries have banned it, and Silicon Valley has split into factions.
With the rapid rise of ChatGPT, amassing over 100 million users in no time, there’s widespread apprehension about what the future holds. This has become the number one question I receive after every keynote speech and video, from people all around the globe.
Despite these concerns, I maintain a positive outlook on the future of AI in healthcare. While I do have opinions about AI in general, I’ll refrain from commenting on that aspect here. Let’s just say I’m using healthcare as a yardstick while explaining why I’m optimistic about the future of AI.
Below, I list my seven arguments for this stance. So it’s not just about having a romantic notion of a beautiful future with AI (although I do have that) – I can also justify why I’m optimistic.
1. Regulation catches up
The U.S. Food and Drug Administration (FDA) is a great example, but let’s start a bit earlier. In one of our studies, we found that as early as 1996 there was already a patent for an approved medical technology that used AI. Sure, development got much faster in recent years, and the number of AI technologies and patents has exploded since 2015. Yet while the FDA was already regulating AI-based medical technologies in 2015, the regulatory body’s first comprehensive study on the subject only came out in 2018.
In 2020, we found ourselves inadvertently stepping into the shoes of the FDA, compiling data on exactly how many AI-based technologies have been approved and their respective specialties. Just one year later, in 2021, the FDA released their own database, citing us as a source.
In just a few short years, we transitioned from the rudimentary regulation of AI-based medical technologies to establishing a dedicated system for them, along with a database to check which AI technologies are approved for medical use.
That is why I say that regulation eventually catches up. However, we have to note that not every country’s regulation does so, and we had best keep our eyes on the FDA, which serves as a benchmark in this respect. Based on these maturing regulatory processes, I remain optimistic that AI will find its place in everyday life.
2. We need to deal with doctor shortages
There is a severe shortage of doctors, and the situation is predicted to worsen: the world is expected to be short an estimated 10 million healthcare professionals by 2030.
At the same time, more and more people need a doctor. Not because we are sicker, but because we can provide better care and can look after people with chronic diseases over the long term. As a result, the gap between the doctors we train and the patients waiting for care is only getting wider.
This is a good reason to be optimistic about AI. We must fill this gap; we cannot leave people without care. If there aren’t enough human resources to provide care, we need to find alternative methods.
Technologies like new sensors or 3D printing will not solve this problem, but automation can provide an interim form of care until patients can consult a healthcare professional in person. In mental health, there are already initiatives where a chatbot keeps patients company while they wait to see a doctor and tries to provide some empathy.
And there is an upside to it: by the time the patient gets to the doctor, the AI has already compiled a report about them, making the in-person visit more efficient.
3. Virtual care is rising
This is good news for all researchers who support AI, because efficient virtual care requires something that analyses the data: something that can write down what the patient is saying, understand it, and draw conclusions from it. Not to replace the doctor in remote care, but to support the doctor in providing it. Thus the advancement of virtual care promotes the implementation of AI.
4. Scientific understanding is achievable
AI is an incredibly complex technology. Even at the current level of AI development – artificial narrow intelligence (ANI) – it remains challenging for even its developers to fully understand. However, a sufficient level of scientific understanding can indeed be achieved, and that’s all that most healthcare stakeholders need in order to use the technology.
My favourite example is the MRI machine. We all learn about it in the first year of medical school, but nine out of ten doctors have no clue how such a machine works. Sure, they might say that the particle spins are aligned this way or that – but they actually have no idea.
But of course, we have enough scientific understanding of the technology to send a patient for an MRI scan without fearing that something terrible will happen to them. We understand in broad terms how it works, that it’s not dangerous, when and how it’s useful, even though we can’t describe its operation in detail.
The same level of scientific understanding is needed for AI, and it can be achieved. In one of our studies, we even released a minimal knowledge package, and we also launched a course, because we believe a baseline level of knowledge can be conveyed: a basic understanding that is accessible to everyone and is enough, at the very least, to stop fearing the technology.
5. Evidence is growing
The evidence behind medical AI technologies is steadily growing. More and more AI technologies are being validated by peer-reviewed studies and clinical trials, just as all drugs must be.
We are seeing a similar trend with AI technologies. A good example is Sepsis Watch: it analyses medical records and helps staff identify the patients at highest risk of sepsis. It’s a very useful tool, but it requires analysing an incredible amount of data in a very complex way. If you are not familiar with the Sepsis Watch system, The Verge’s article presents the method with excellent graphics.
6. The Moravec Paradox
Moravec’s paradox refers to the observation made by artificial intelligence and robotics researchers that, contrary to popular belief, reasoning requires very little computation while sensorimotor and perception skills necessitate massive computational resources.
In simpler terms: developing algorithms for high-level cognitive tasks is much easier than replicating a baby’s motor movements. It’s more achievable to develop a stock market algorithm that controls trading than to develop a robot that moves like a baby.
And since it’s easier to algorithmize cognitive functions than our motor, movement-related functions, the question is not whether robots will take our jobs by coming into the hospital and dressing patients. It simply makes no sense to entrust this task to a robot when a human is a thousand times more skilled, and a thousand times cheaper, than a robot developed for the purpose.
However, for cognitive tasks such as analysing a radiology report or extracting diagnoses from medical records, AI is here to lend us a hand – and dig through mountains of data without ever feeling tired.
7. ChatGPT finally allowed physicians and patients to feel what it is like to use AI
The ChatGPT revolution has finally made it possible for everyone to feel what it’s like to work with AI. Until now, AI was the toy of a select few: some researchers, developers, innovators, and policymakers – but out of reach for everyday folks. The average person had no access to it; only a handful of lucky doctors working in extraordinary places, and the few patients treated there, had the opportunity to get a glimpse of AI technology.
This has changed once and for all. Now hundreds of millions of people can access it and use it for free every day – to learn, to feel, to get to know what it’s like to work with AI technology.
There is no future healthcare without AI
So, while I see the dangers and challenges – from regulation to costs to privacy – for the reasons above I remain optimistic that AI not only has a role in the future of medicine, but that without it we will not be able to provide care. I’ve been saying this not only since the arrival of ChatGPT, but for over a decade.
We cannot sustain healthcare without this paradigm shift, where the medical team – comprising the patient and doctor – welcomes a third member: a technological entity, artificial intelligence. This is an exciting process, and to make the most of it we certainly need to change. However, if we can develop our skills and adapt our thinking, a very exciting medical future awaits us.
The post Seven Reasons To Be Optimistic About The Future Of AI In Medicine appeared first on The Medical Futurist.