Over the last three years, I went on a mission to reverse a series of symptoms stemming from metabolic syndrome - something I had struggled with since I was young. That journey pulled me deep into the world of wearable health devices. I experimented with continuous glucose monitors (CGMs), sleep and recovery trackers, and an endless array of dashboards promising clarity.
The promise was appealing: more data equals better insight, which equals better health. But the reality was more complicated. Every device generated impressive volumes of metrics on my steps, strain, heart rate variability, and supplementation protocols. Yet for all that information, I often found myself more anxious than before. What was missing was not more data - it was interpretation, context, and emotional calibration.
Even as newer trackers have evolved to offer more holistic overviews, they still stop at reporting. They may surface correlations - low HRV after poor sleep, a glucose spike following a meal - but for patients, especially those navigating chronic or complex conditions, pattern recognition is not enough. We need systems that help us think, adapt, and contextualize.
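To make the distinction concrete, here is a minimal sketch of the kind of analysis a dashboard already does well. The column names and values are hypothetical and pandas is assumed; the point is that the output is a bare correlation, with all the interpretive work left to the user:

```python
import pandas as pd

# Hypothetical daily export from a wearable: one row per day.
days = pd.DataFrame({
    "sleep_hours": [7.5, 6.0, 8.1, 5.2, 7.9, 6.4],
    "hrv_ms":      [62, 48, 70, 41, 66, 50],
})

# This is roughly where most dashboards stop: a single number
# confirming that less sleep tends to coincide with lower HRV.
print(days["sleep_hours"].corr(days["hrv_ms"]))
```

The correlation tells me *that* the pattern exists. It says nothing about what I should do next, or whether it even matters for my condition.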
I had a similar realization during my time as a consultant on a research project aimed at diagnosing adenomyosis - an underdiagnosed and often misunderstood reproductive health condition. Adenomyosis can cause debilitating pain and significantly impact fertility, yet women are frequently sent home with an incomplete or incorrect care plan. We trained a deep learning model (on Google Vertex AI) to identify the condition from ultrasound images, and the results were promising. The model reached high accuracy, in some cases flagging images that human experts had labeled incorrectly.
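For readers curious what that kind of setup looks like, here is a minimal sketch using the Vertex AI Python SDK's AutoML image classification workflow. The project ID, bucket path, and display names are hypothetical, and this illustrates the general approach rather than the project's actual pipeline:

```python
from google.cloud import aiplatform

# Hypothetical project and region.
aiplatform.init(project="my-research-project", location="us-central1")

# Import labeled ultrasound images from a (hypothetical) Cloud Storage CSV
# that maps each image URI to a label such as "adenomyosis" or "normal".
dataset = aiplatform.ImageDataset.create(
    display_name="adenomyosis-ultrasound",
    gcs_source="gs://my-bucket/labels.csv",
    import_schema_uri=aiplatform.schema.dataset.ioformat.image.single_label_classification,
)

# Train an AutoML image classifier; Vertex AI handles architecture
# search and hyperparameter tuning within the given compute budget.
job = aiplatform.AutoMLImageTrainingJob(
    display_name="adenomyosis-classifier",
    prediction_type="classification",
)
model = job.run(
    dataset=dataset,
    model_display_name="adenomyosis-model",
    budget_milli_node_hours=8000,
)
```

Much of the real work sits outside this sketch: curating and labeling the images, validating against expert reads, and deciding what the model's output should trigger in a clinical workflow.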
The value of that work was undeniable. For many women, a faster, more accurate diagnosis can be life-changing. But as I watched the model improve, I also became acutely aware of its limits. Identifying the pattern is not the endpoint - it is the beginning. A diagnosis without support for the next steps still leaves patients alone in a maze of uncertainty. And given how much skill varies among experts, a model's holistic input could also reduce the room for clinician error.
This is why I view the intersection of artificial general intelligence (AGI) and healthcare with cautious optimism.
For patients, large language models in health apps have real promise. Imagine a tracker that combines data across time and context, helping people spot patterns, challenge assumptions, and advocate for themselves. This is especially relevant in women’s health, where care protocols often trail behind lived experience. In reproductive medicine, many conditions still lack holistic, evidence-based plans. An advanced LLM could surface insights a time-pressed clinician might overlook and give patients clearer ways to participate in their care.
My experience building emerging tech products in India has shown how quickly teams can overpromise in the rush to scale. In healthcare, that pressure can carry heavy consequences. Realizing AGI's potential requires a steady commitment to ethics, context, and collaboration. Risks often appear when teams ignore critical questions or leave out essential perspectives. If we deploy AGI without strong foundations, we risk building systems that look credible but behave carelessly - tools that spread certainty faster than people can adapt.
A health device that integrates data and adapts its recommendations could transform care. That promise depends on designing technology as a partner to clinicians and patients - one that understands its own limits and stays grounded in the reality that healthcare is always human. AGI will have the most impact when it strengthens good judgment, not just accelerates it. That is why I believe the most important question for any product leader in this space is not how fast we can ship. It is whether we are willing to build with enough humility to recognize that augmentation - not automation - is what will truly change lives.