3 Comments
Laurentiu Lupu MD

What feels especially important here is that the risk is not only misinformation. It is pre-orientation.

By the time a patient reaches a clinician, AI may already have reorganized what feels urgent, what seems likely, and which symptoms deserve the most weight. That can be helpful when it improves preparation and clarity. But it can also become dangerous when it creates a kind of premature coherence, especially in moments of fear, uncertainty, or desperation.

That is why I found the distinction between different kinds of AI-using patients so valuable. The issue is not simply access to answers. It is what kind of relationship the patient develops to those answers before medicine even begins to respond.

Tjaša Zajc

Thank you! It’s definitely not a straightforward journey. I will keep gathering recommendations, with the reminder that one should remain skeptical when assessing outputs.

NFT News

Agentic patients are individuals who use agentic AI to actively manage their healthcare, from tracking symptoms to making informed decisions. This shift enables more personalised, proactive care, where AI supports real-time insights, treatment planning, and patient-led health management.

Refer Agentic: https://promptengineer-1.weebly.com/agentic.html