NextMed Health Day 1: An Update on AI, AI Agents, and Agentic AI in Healthcare
From a European perspective, Day 1 of the NextMed Health Conference—a forum dedicated to exploring the future of healthcare—highlighted the strong drive for innovation within the US healthcare system.
This is a newsletter of Faces of Digital Health - a podcast that explores the diversity of healthcare systems and healthcare innovation worldwide. Interviews with policymakers, entrepreneurs, and clinicians give listeners insights into market specifics, go-to-market strategies, barriers to success, the characteristics and challenges of different healthcare systems, and access to healthcare. Find out more on the website, tune in on Spotify or iTunes.
You can still join the conference remotely through the live stream: https://www.nextmed.health/live-2025. See the agenda of the upcoming days: https://www.nextmed.health/program
The most gripping presentation on Day 1 of the NextMed Health conference 2025 did not come from an ambitious health-tech innovator, business analyst, or doctor, but from coder, filmmaker, and AI developer Steven Brown. A month after undergoing every imaginable medical test, from blood draws to a colonoscopy, with doctors guessing whether his unexplained symptoms were just stress or gas, a sudden stab of abdominal pain during dinner sent Steven to the ER, where a new round of tests led to a diagnosis of a rare form of multiple myeloma. That prompted Steven, who comes from a family of physicians, to explore AI's potential in diagnosing and managing diseases. He created MedicalAdvocate.ai - a cutting-edge platform for AI-led medical consultation.
After a patient uploads medical results to the platform, multiple AI agents act like a tumor board spanning different specialties (oncologists, radiologists, etc.). Each AI agent provides insights from its specialty, contributing to a comprehensive review. A higher-level AI agent ("Hippocrates") then consolidates the expert opinions and offers a holistic recommendation.
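For readers who want a concrete picture of that pattern, here is a minimal sketch in Python. All names, prompts, and consolidation logic are hypothetical illustrations; the actual MedicalAdvocate.ai implementation is not public.

```python
# Minimal sketch of the "tumor board" pattern described above.
# Everything here is a hypothetical illustration, not the actual
# MedicalAdvocate.ai implementation.
from dataclasses import dataclass


@dataclass
class SpecialistAgent:
    specialty: str

    def review(self, medical_results: str) -> str:
        # A real agent would call an LLM with a specialty-specific prompt;
        # here the call is stubbed out.
        return f"[{self.specialty}] assessment of: {medical_results}"


def hippocrates(opinions: list[str]) -> str:
    # The higher-level agent consolidates the specialist opinions
    # into one holistic recommendation.
    joined = "\n".join(opinions)
    return f"Consolidated recommendation based on:\n{joined}"


board = [SpecialistAgent(s) for s in ("oncology", "radiology", "hematology")]
results = "CBC, serum protein electrophoresis, bone marrow biopsy report"
print(hippocrates([agent.review(results) for agent in board]))
```

The design point is the two-level structure: specialists comment independently, and a single consolidating agent owns the final recommendation.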
This is not the first story of a patient solving his medical case with the help of AI. While AI is not yet the standard of care, it has proven instrumental in some edge cases and hasn't ceased to amaze with its life-saving potential.
What does this mean?
AI isn’t replacing doctors but can enhance patient understanding and decision-making.
AI agents can help bridge the gap between patients and specialists who have limited time.
AI enables personalized medicine by analyzing patient data more thoroughly than human doctors can in routine consultations.
For now, MedicalAdvocate.ai remains a personal project and a closed platform.
The move towards agentic AI
In 2023, generative AI was all the rage. Two years later, we are moving well beyond that with AI agents and agentic AI. What are they? The definitions below, followed by a short illustrative sketch, draw the distinction.
Generative AI (e.g., ChatGPT) functions like a coach on the sideline—it provides advice, generates content, and suggests actions but does not take direct action itself.
AI agents perform specific functions within an AI system but do not necessarily operate independently. For example, AI agents can act like a virtual research team: data analysts, statisticians, literature reviewers, scientific writers. In Steven's case, the AI agents represent doctors from different specialties, each commenting on medical results from their own perspective.
Agentic AI functions like a skilled project manager - not just making strategic decisions, but delegating tasks, tracking progress, and adapting in real time to meet its goals.
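To make the three roles concrete, here is a minimal sketch in the spirit of the analogies above. The class and function names are hypothetical illustrations, not any particular product's API.

```python
# Minimal sketch of the three roles described above. All names are
# hypothetical illustrations.

def generative_model(prompt: str) -> str:
    # Generative AI: the "coach" - it suggests, but takes no action itself.
    return f"Suggested approach for: {prompt}"


class AIAgent:
    """One specific function in the system, e.g. a literature reviewer."""

    def __init__(self, role: str):
        self.role = role

    def run(self, task: str) -> str:
        return f"{self.role} completed: {task!r}"


class AgenticAI:
    """The 'project manager': delegates tasks, tracks results, owns the goal."""

    def __init__(self, agents: list[AIAgent]):
        self.agents = agents

    def pursue_goal(self, goal: str) -> list[str]:
        print(generative_model(goal))  # consult the "coach" for advice only
        steps = goal.split(", ")       # derive concrete steps to delegate
        # Delegate each step to an agent and collect the results.
        return [a.run(s) for a, s in zip(self.agents, steps)]


team = AgenticAI([AIAgent("Data analyst"), AIAgent("Statistician")])
for result in team.pursue_goal("review myeloma markers, run survival stats"):
    print(result)
```

The division of labor is the point: generative AI only advises, each AI agent executes one function, and the agentic layer owns the goal end to end.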
Data remains a key challenge for AI development
While increasingly tangible in its promises, AI still faces many challenges, because the quality of AI depends on the data it is fed. At the moment, an estimated 30-40% of the data in electronic health records is inaccurate, and most published studies can't be replicated or scrutinized because the datasets behind them are not available. Bayo Curry-Winchell, Founder of Beyond Clinical Walls and Urgent Care Medical Director at Saint Mary's Health Network, reminded us that simply increasing access to healthcare will not eliminate bias; instead, we must fundamentally transform how bias is recognized and addressed within the system. Despite her leadership role, Curry-Winchell experienced pain dismissal while in labor at the very hospital where she works, a failure in care that nearly cost her life. Her call to action to developers and users of AI is simple:
1. Check your bias.
2. Consider diverse patient groups – involve them in the process; do not focus on a narrow population.
3. Champion equity.
Why AI is the only hope for healthcare
AI adoption and development are advancing at a rapid pace, and it doesn't seem far-fetched to predict that AI will, in many cases, become mainstream well before the usual 17 years it takes a new innovation to become standard practice in healthcare. To date, the FDA has approved 1,017 AI-enabled devices, most of them in radiology.
With current trends in healthcare sustainability, aging, and workforce shortages, we don't really have a choice but to get as much as possible out of AI to optimize costs and care delivery. The WHO predicts a shortage of 10 million health workers by 2030. As shared by Anthony Chang, Founder of AIMed and Chief Intelligence & Innovation Officer at Children's Hospital of Orange County, U.S. health expenditure is 4.2 trillion USD per year. Since roughly 25% of that cost, around one trillion dollars, goes to administration, automating even a fraction of it with AI could yield savings of 150-250 billion USD per year in the United States. And with clinicians spending 90% of their time on technology and only 10% of clinical practice on interactions with the patient, AI has the chance to rehumanize medicine.
At some point it’s not going to be ethical to NOT use AI in healthcare, said Chang in a 3-hour workshop. At the same time, Chang reminded us that “change doesn’t happen at the pace of technology — it happens at the speed of people.”
“My bigger concern is that mature AI tools aren't being used efficiently or effectively”
Here is a short interview with Anthony Chang, Founder of AIMed and Chief Intelligence & Innovation Officer at Children's Hospital of Orange County, conducted after the first NextMed Health presentation and his 3-hour workshop on AI in healthcare.
Dr. Chang, you just had a really engaging session on AI in healthcare. One of the strongest points you made was that it may soon be unethical not to use AI in healthcare. Can you expand on that?
Dr. Chang: Of course. That statement needs to be interpreted carefully. I mean it in specific situations where AI has already been proven effective. We're reaching a point where the standard of care will require the timely use of AI to make accurate diagnoses.
Radiology is probably the prime example.
Particularly brain and cardiac scans. We're already seeing some accountability emerging when hospitals don't use available AI tools in these areas.
You also mentioned that when you consult hospitals on their AI approach, it's not uncommon to find that an actual strategy doesn't exist. What's your advice to hospital leaders in that situation? If they already have a data strategy and governance framework, what's typically missing, and where should they begin?
Often, when we’re asked to advise on an AI strategy, we find the hospital doesn’t even have a solid data strategy or data governance in place. That’s the foundation. Once that’s established, you can build an AI governance framework and create a dedicated committee.
As I mentioned during the session, the AI agenda in healthcare must be driven by human-to-human interactions and relationships. Change isn't technology-driven—it’s human-driven. And that’s one of the beautiful things about AI in healthcare. The technology is robust and mature, but it takes people to learn it, adopt it, and use it effectively.
Where do you currently see the biggest dangers with AI? One participant mentioned concerns about non-diverse datasets, which may not improve in the near future. Her call to action was for people to collect more data. How do you view that issue, and what are the broader challenges AI will face before it becomes more widely adopted?
AI is going to hold us accountable for ensuring balanced datasets. There are technological solutions, like the use of synthetic data, that can help address this. But my bigger concern is that mature AI tools aren't being used efficiently or effectively—tools that could reduce caregiver burden and improve outcomes.
Why do you think these tools aren't being used effectively? You mentioned earlier that of the $4.2 trillion spent annually on U.S. healthcare, around 25% goes to administration, a share AI-driven automation could significantly reduce.
One major reason is a lack of education in this area—which we’re trying to address through sessions like this. Another reason is misaligned financial incentives among payers, providers, hospitals, and patients. Even the most elegant AI solution can’t overcome those human-driven issues unless incentives are aligned.
Finally, is there anything you think people should understand right now about agentic AI, which came up at the end of the workshop? You mentioned it’s often confused with AI agents.
Yes, it’s an easy distinction to miss. I see agentic AI as a capable team captain on the field. It has autonomy, but still operates under the guidance of a coach—which, in this analogy, is generative AI. The individual players could be seen as AI agents, each acting autonomously, but within the broader autonomy of the entire system.