On May 12, Willow Innovations, Inc., a maternal health organization, and Ema EQ, an AI platform for women’s health, announced the creation of the Women’s Health AI (WHAI) Consortium. The consortium is the first industry body dedicated to setting shared benchmarks, ethical standards, and transparent evaluation methods for artificial intelligence in women’s health.
According to the press release, the consortium is expected to ensure that AI tools meet accuracy benchmarks, provide data that helps close gaps in women's health, and remain transparent about their data sources.
Healthcare Innovation discussed the initiative with Sarah O’Leary, CEO of Willow, and Amanda Ducach, CEO of Ema.
Could you provide a little background?
Sarah O’Leary: Willow Innovations is a maternal health pioneer best known for revolutionizing the breast-pumping experience and designing clinically backed innovations that meet women where they are rather than asking them to adapt to inadequate tools.
Amanda Ducach: Ema EQ is the first AI built specifically for women's health. Both organizations have spent years operating in a space that has been chronically underserved, underfunded, and misunderstood, and that shared experience is a large part of what brought us to this moment.
Could you tell me about the Women's Health AI Consortium and what led to this initiative?
O’Leary: AI in women's health is advancing faster than the standards meant to govern it, and women are already using these tools at significant rates. Women who were pregnant in the past year are more likely to turn to AI chatbots for health information than non-pregnant women of reproductive age (60 percent versus 42 percent). Adoption among younger generations is tracking similarly, with 45 percent of Gen Z adults and 48 percent of Millennials reporting using AI chatbots for health information.
Ducach: These tools are already embedded in how women navigate their health. The problem is that models are being trained on data that underrepresents women, deployed without sufficient oversight, and validated against benchmarks that were never designed with women in mind. The Women's Health AI Consortium was formed to close that gap before it becomes the default. It's the first industry body dedicated to establishing shared benchmarks, ethical standards, and transparent evaluation methods for AI in women's health, with founding members including Clue, Ema EQ, Thrive Global, Oura, and Willow.
Could you talk about how this initiative is geared towards women's health and how this makes it different from others?
Ducach: Most AI governance efforts are general-purpose. They aren't built around the specific clinical, cultural, and emotional realities of women's health. WHAI is. The consortium's six core commitments, covering ethical and safety standards, bias reduction and cultural integrity, emotional and clinical quality at scale, contextual and longitudinal intelligence, mentorship for ethical AI builders, and transparent oversight, were designed with women's health as the primary frame, not an afterthought. The goal is AI that reflects the full reality of women's lives, not just the parts that were convenient to study.
How do you foresee this will help the topic of women's health?
Ducach: On a practical level, the consortium gives women and the companies serving them greater confidence in the tools they encounter by establishing shared expectations for quality and trustworthiness. That matters urgently, given where adoption already is. When nearly two-thirds of one of the most medically vulnerable populations is turning to AI for guidance, the quality of that AI is a patient safety issue. For organizations building or deploying in this space, WHAI offers clearer reference points for design, validation, and procurement, and the broader aim is cohesion across the field so that progress raises the floor everywhere.
What specific challenges would you like to address?
O’Leary: Women's health has been historically under-researched, which means the data powering AI tools is often incomplete or unrepresentative, and that directly limits their effectiveness and safety. There's also the issue of bias: AI that hasn't been validated across diverse populations can cause disparate harm, particularly for women of color. And there's a transparency problem; tools that aren't honest about their limitations or data sources put users at risk. The consortium's standards are designed to hold AI accountable before those failure modes become entrenched.
The founding board is made up entirely of women, which is great to see in light of this initiative.
O’Leary: The board brings together clinicians, technologists, ethicists, legal experts, and health innovators. These are women who have spent their careers building in, and often fighting for, a space the broader industry has consistently deprioritized. Having that lived expertise at the governance level isn't just symbolically right, it’s structurally necessary if the standards are going to reflect what women actually need.
What are some challenges you faced in the consortium?
Ducach: Helping people understand that women’s health is not a niche. Women are more than 51 percent of the population, and female bodies have lived experiences that differ from their male counterparts, and the AI tools they use need to reflect this. This should matter to everyone, from the health systems that deliver care to the governments that regulate what that care looks like. This is not a “female” problem; this is a societal problem.
What are some future developments that you foresee with this initiative?
Ducach: The expectation is that AI tools operating in women's health will increasingly be held to the consortium's standards, meeting accuracy benchmarks, contributing data that helps close existing gaps, and remaining transparent about sources and limitations. Given that adoption among younger women is already well above 40 percent, the next few years will be defining ones for how trustworthy this category becomes. The goal is for WHAI to serve as the authoritative reference point for what good looks like, and to mentor the next generation of ethical AI builders so that the field grows in the right direction.
