DECEMBER 2025
By Mag. Cintia Cejas, Dr. Santiago Esteban and Dr. Martín Sabán*
The value of AI lies not in chasing technological trends, but in solving problems that matter.
Artificial intelligence has earned its place on the health agenda. But amidst promises of “revolution” and warnings of “collapse,” we end up ignoring the essential point: AI is positioning itself as part of the infrastructure that supports clinical decisions, epidemiological surveillance, and planning. The problem is that we continue to discuss it as if it were magic or a threat. And it is neither.
True innovation begins with recognizing what's missing. And what's missing in most healthcare systems is reliable, complete, and accessible data. Without robust clinical records, interoperability, and clear processes, any algorithm is merely a costly illusion. Before discussing sophisticated predictions, we must build the foundation that makes them possible.
Responsibility, likewise, cannot be confined to grandiloquent rhetoric. Talking about “responsible” AI requires concrete rules: transparent validation, explicit limits on unacceptable uses, regular audits, and clarity about who answers when a model fails. Without institutional governance, responsibility is reduced to a convenient slogan.
The value of AI lies not in chasing technological trends, but in solving problems that matter: reducing wait times, improving surveillance, anticipating risks, detecting omissions in records, or allocating resources more accurately. Pilot projects launched without a theory of change always end up in the same place: the folder of “interesting projects that never scaled.”
In this sense, the automation opportunities that AI offers should be, above all, a catalyst for re-engineering: digitizing inefficiency is pointless. Integrating these technologies is a unique opportunity to rethink processes within the healthcare system and to challenge outdated bureaucratic practices. For deployment to succeed, priority must go to the workflows that users, both patients and healthcare teams, identify as critical pain points where they expect such tools to help. Only when a tool addresses a perceived need and alleviates real friction does artificial intelligence cease to be a technological imposition and become an asset that adds genuine value.
Equity doesn't happen on its own. Without deliberate intervention, algorithms amplify inequalities. Auditing for bias, incorporating representative data from vulnerable populations, and analyzing distributional impact are essential steps, not optional ones.
Ultimately, the key question isn't which model to use, but what institutional capacity we have to sustain it. AI in healthcare requires trained teams, stable processes, genuine interoperability, and ongoing funding. Without these, any deployment is fragile, no matter how brilliant it may seem in its initial presentation.
It's a common mistake to think that the initial deployment is the goal, when in reality it's merely the starting point. The true test is sustainability: AI models are not static; they can degrade or lose calibration in response to changes in the healthcare landscape. Keeping them operational and secure requires active monitoring and rigorous planning that goes beyond the initial launch euphoria. This means ensuring long-term financial sustainability: without guaranteed resources for maintenance, human oversight, and continuous updates, the initial investment risks quickly turning into technical debt and obsolescence.
Therefore, innovation also means being willing to say “not yet” when appropriate. It means not implementing what hasn't been validated, not accepting black boxes, and not promising benefits without evidence. Maturity lies in knowing when to move forward... and when to stop.
*Coordinator and researchers at the Center for Implementation and Innovation in Health Policies (CIIPS), IECS.

