In a Tel Aviv fertility clinic, a hopeful couple sat across from their embryologist while an AI program called CHLOE silently produced a ranked list of viable embryos. As the conversation moved quickly between implantation probabilities and morphology scores, the couple, new to the procedure, asked an unexpected question: “Did the algorithm pick our future child?”
They weren’t the only ones asking.
Digital embryo selection is no longer experimental. Fertility clinics increasingly rely on deep learning models and time-lapse imaging, which means machines now influence some of the most private medical decisions there are: decisions about which potential lives will continue. That alone ought to provoke serious thought, yet the law has been oddly slow to react.
| Aspect | Description |
|---|---|
| Digital Embryo Selection | The use of AI and machine learning algorithms to assess, rank, or select human embryos during IVF procedures. |
| Purpose | Aims to improve implantation success rates and reduce human subjectivity in embryo grading. |
| Key Technologies | Time-lapse imaging, deep learning models, systems like CHLOE and iDAScore. |
| Legal Status | Fragmented regulation worldwide; most systems operate under clinical decision support, not diagnostic classification. |
| Ethical Concerns | Algorithmic bias, transparency, consent, and accountability in clinical outcomes. |
| Source Reference | Monash Bioethics Centre, 2025 |
It started with good intentions. Embryo grading was once a laborious, highly subjective process prone to inconsistency. Human embryologists might disagree about which characteristics mattered most, or inadvertently let bias or fatigue influence their choices. Digital tools promised consistency, objectivity, and scale: AI could monitor the development of thousands of embryos in real time, compare them against enormous datasets, and flag the most promising candidates.
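In outline, the ranking step is simpler than the models behind it. Here is a minimal sketch of that pipeline in Python; the feature names, weights, and logistic scoring below are invented for illustration, and commercial systems such as CHLOE or iDAScore apply deep networks to raw time-lapse video rather than a handful of hand-picked features:

```python
# A deliberately simplified, hypothetical sketch of an embryo-ranking pipeline.
# All feature names, weights, and thresholds are invented for illustration.
from dataclasses import dataclass
from math import exp

@dataclass
class Embryo:
    embryo_id: str
    t_pnf_hours: float       # time of pronuclear fading (illustrative morphokinetic feature)
    t5_hours: float          # time to the 5-cell stage (illustrative)
    fragmentation_pct: float # degree of fragmentation (illustrative)

# Invented weights standing in for a trained model's parameters.
WEIGHTS = {"t_pnf_hours": -0.04, "t5_hours": -0.03, "fragmentation_pct": -0.05}
BIAS = 3.2

def implantation_score(e: Embryo) -> float:
    """Map features to a 0-1 pseudo-probability via a logistic function."""
    z = BIAS + sum(w * getattr(e, name) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + exp(-z))

def rank_cohort(cohort: list[Embryo]) -> list[tuple[str, float]]:
    """Sort embryos by model score, highest first -- the ranked list
    a clinician ultimately sees on the dashboard."""
    scored = [(e.embryo_id, implantation_score(e)) for e in cohort]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

cohort = [
    Embryo("E1", t_pnf_hours=24.1, t5_hours=49.8, fragmentation_pct=5.0),
    Embryo("E2", t_pnf_hours=26.4, t5_hours=52.3, fragmentation_pct=12.0),
    Embryo("E3", t_pnf_hours=23.5, t5_hours=48.1, fragmentation_pct=3.0),
]
for embryo_id, score in rank_cohort(cohort):
    print(f"{embryo_id}: {score:.3f}")
```

The point of the sketch is the shape of the output: a tidy, sorted list of scores. Everything contestable, which features matter, how they are weighted, what data the weights came from, is hidden inside the model.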
Clinics welcomed the help. Tools such as Fairtility’s CHLOE or iDAScore in Europe offered polished dashboards and rankings that looked reassuringly simple. Patients were drawn by the prospect of better odds and lower costs. Embryologists appreciated the streamlined workflow. Everyone was aiming at the same goal: a healthy birth.
That promise arrived alongside a legal void, however. In the US, the FDA has jurisdiction over software as a medical device, yet many AI tools for embryo selection operate in a gray area: marketed as “decision support” rather than diagnostics, they frequently escape thorough regulatory scrutiny. Europe offers a patchwork of CE certifications. Elsewhere, there is hardly any oversight at all.
The regulatory gap affects more than product safety; it reaches informed consent. How many IVF patients know that an algorithm may influence their embryo selection? Are they told what data trained the model, how well it performs across age or ethnic groups, or how heavily the clinic weighs the tool’s rankings against the judgment of a human embryologist?
A recent study from Australia’s Monash Bioethics Centre raised particularly urgent concerns. It cautioned that even well-designed AI tools are not impartial judges: their recommendations are shaped by proprietary code, opaque datasets, and the assumptions of developers, many of whom have no background in reproductive health. “Computer algorithms are starting to make decisions about who is brought into the world,” the researchers wrote.
I paused the first time I read that sentence. Not out of fear, exactly; it was more like quiet unease.
Clinics contend that human experts still make the final decision. Technically, they are correct. In practice, though, many embryologists lean heavily on AI rankings, especially under time pressure or pressure from patients. It is hard to argue against the top-scoring embryo on the screen without a strong counter-case. The algorithm carries real authority, however informal.
And who is responsible when things go wrong: when implantation fails, or a child is later found to have a condition the system claimed to screen against? The clinic? The software vendor? The oversight body that never intervened?
Legal scholars have begun debating these questions, though answers remain elusive. Many jurisdictions do not require disclosure of AI involvement. Liability for algorithm-influenced reproductive decisions is poorly established in law. And very few consent frameworks in use today ask patients whether they are comfortable letting machine learning shape their future family.
Add cross-border fertility travel and the ethical landscape grows murkier still. What is permitted in a California lab may violate Germany’s Embryo Protection Act. An Israeli clinic might use tools that would never clear clinical trials in the UK. Couples routinely cross jurisdictions for IVF; embryos cross them far less easily. When AI enters that already delicate terrain, the lines blur fast.
At a conference in Geneva last year, a reproductive endocrinologist put a hypothetical to a panel of ethicists: “If two embryos are equally healthy, should we always select the one the AI scores with a 2% higher chance of a live birth?”
The room stayed silent a beat longer than anticipated.
From a clinical perspective, it seems obvious: of course you choose the higher chance. But the decision carries moral weight. What if the model’s training data was biased? What if the ranking quietly favors characteristics subtly correlated with gender or race? What if the algorithm cannot account for the patient’s other values?
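The bias worry, at least, can be made concrete. Here is a minimal sketch of the kind of audit a clinic or regulator could demand, assuming per-embryo model scores labeled by patient group; the scores, group labels, and disparity threshold are all invented for illustration:

```python
# Hypothetical audit: does the model systematically score embryos from one
# patient group lower than another? Data and the 0.05 threshold are invented;
# a real audit would also control for confounders and check sample sizes.
from statistics import mean

scores_by_group = {
    "group_a": [0.61, 0.58, 0.72, 0.66, 0.59],
    "group_b": [0.49, 0.52, 0.47, 0.55, 0.50],
}

group_means = {group: mean(scores) for group, scores in scores_by_group.items()}
gap = max(group_means.values()) - min(group_means.values())

print(group_means)
if gap > 0.05:  # arbitrary illustrative threshold
    print(f"Mean score gap of {gap:.2f} -- investigate training data and features.")
```

Nothing this simple settles the question, but it shows how little transparency would be needed to even begin asking it, and how little of it current rules require.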
For all its efficiency, digital embryo selection quietly shifts the question from “What child do we want?” to “Which child is most likely to succeed?” That is more than a semantic distinction. It changes what IVF is for and whom it serves best.
There is still no universal legal framework guaranteeing algorithmic transparency, no obligation to disclose ranking logic, and no mechanism for redress when outcomes fall short of expectations. Meanwhile, the technology marches on. The datasets grow. The tools improve. The marketing gets sharper.
The public discourse is conspicuously absent.
Patients deserve genuine choices: not only which embryo to choose, but whether they want AI involved at all. Clinics must commit to full disclosure, in the consultation room as well as in the brochure. And regulators must catch up before an opaque black box becomes the norm for deciding who gets born.
That future is not far off. However, we still have time to influence how we get there.
