While there is great potential for artificial intelligence to improve cancer care in low- and middle-income countries, there is also great hype surrounding the technology.

Terms such as “game-changer”, “magic bullet” and “revolutionary” are touted by companies and echoed by the media. Such commercial rhetoric should be tempered by a greater focus on oversight and rigorous evaluation.

It feels as though we have been here before.

AI is portrayed as an oven-ready solution that will help lower-income countries overcome workforce shortages and improve access to specialist diagnostic and treatment services. At international meetings, vendors and academics talk about the “value added” from its deployment. This sounds like a new form of technological colonialism: “We have the solution and just need to be allowed to make a difference.”

There have been some promising developments, such as the use of mobile phones to support the earlier diagnosis of cervical cancer. Yet longer-term evaluation is required before we can say definitively that the approach will reduce late diagnoses of this debilitating disease.

The need for robust assessment sits awkwardly with both impatience from activists to tackle growing public health concerns, and pressure from company shareholders to recoup the estimated $1.78bn already invested in AI applications for health.

But without rigorous evaluation, we risk deploying costly technology that offers little value and may even prove detrimental. It may fail to work; not improve the effectiveness of care; be unaffordable or impossible to scale; and ultimately waste scarce resources better used elsewhere to improve patient outcomes.

The use of cloud software to support radiotherapy in Ghana, Tanzania and Botswana offers a telling example. The technology was used effectively in the US, but in these countries it stumbled at the first hurdle: low internet speeds and incompatibility with existing systems rendered it unusable.

The key to success will be to ensure that technology is developed in conjunction with local partners, and tailored to their needs and the constraints of their health systems. It should not be treated as a one-size-fits-all global fix, or falsely marketed as an “Africa” solution.

Yet incentives to create the regulatory and ethical frameworks required are too weak. The preconditions for such frameworks should include a detailed assessment of needs, a clear understanding of barriers to adoption, and provision of infrastructure including high-quality internet access. Projects should build local capacity to use and test technology, and have a clear plan set out in advance to evaluate outcomes in patients through trials.

To date, there has been scant evidence of such approaches. Prospective or randomised controlled trials in patients in low- and middle-income countries remain the exception.

Without testing, how can we assume that AI-based interventions will make complex care pathways more affordable and efficient, and improve the lives of individuals?

The predictive promise of AI to help diagnose and manage patients is also reductionist. It usually focuses on one aspect of care. Yet healthcare professionals must see patients, take a history, request and administer investigations, deliver treatments, manage side effects and evaluate the results. As the late economist William Baumol cautioned, more healthcare workers will still be needed as populations grow, so the cost of care will continue to rise.

With colleagues in South Africa, India, Malaysia and Jordan, I developed a clinical trial protocol to prospectively evaluate a new AI-based radiotherapy treatment in cancer patients over three years. It would examine how the technology can be integrated into the normal workflow of busy cancer centres in different health systems, the quality of treatment compared with current standards, and the time and cost savings from automation.

If proven effective by these measures, the software would be offered for free across low- and middle-income countries. But the reality is that we do not have the research ecosystems, in either the commercial or public sector, to fund this type of research, which despite its potential impact is far cheaper to deliver than any pharmaceutical trial.

Meanwhile, commercial vendors continue to develop automated systems without such prospective trial evaluation or the need to publish findings that are subject to independent scrutiny. Instead, precedent suggests their products will be bought and integrated at a price set by the market.

In the future, global health needs robust research and commercial partnerships with stronger oversight: partnerships that adopt international standards for reporting the effectiveness of interventions, and research protocols that assess acceptability, efficacy, affordability and scalability before technologies are implemented.

Otherwise, AI risks imposing additional costs and consequences that outweigh the benefits to patients and health systems alike.

Ajay Aggarwal is a consultant clinical oncologist at Guy’s & St Thomas’ NHS Trust, and associate professor at the London School of Hygiene and Tropical Medicine and King’s College London