Re-energising the human rights agenda

Without a digital transformation of health, there can be no health for all. Yet leveraging the benefits of artificial intelligence and digital technology while minimising their harms requires us to take human rights seriously

In recent decades, the world has seen encouraging improvements in the health and well-being of millions of people. Global life expectancy and healthy life expectancy have increased; maternal and child mortality have dropped; deaths from diseases such as HIV, malaria and tuberculosis have declined; polio is close to eradication; and new vaccines and drugs are mighty tools against once-feared diseases.

Yet progress is uneven, and stark disparities remain: approximately half the world’s people still lack access to essential health services; catastrophic health expenditure still drives millions into poverty each year; and on current trends, only 50% of the global population will benefit from universal health coverage by 2030. A projected shortfall of almost 18 million health workers over the next 10 years will only exacerbate the situation.

There is thus little doubt, as the World Health Organization articulates in its draft digital health strategy, that “digital technologies are an essential component and an enabler of sustainable health systems and universal health coverage”. Data-driven digitalisation and the application of artificial intelligence and machine learning are integral to transforming healthcare systems, health services and medical practices and to ensuring progress towards universal access and coverage. These new technologies offer unprecedented potential, ranging from clinical decision-making support and remote health worker training, to better case management and coordination of care, efficient resource management, and improved access to services, especially for patients living in hard-to-reach areas. Moreover, as COVID-19 has taught us, digital technologies are hugely important for disease surveillance, outbreak control and contact tracing.

If we are to achieve health for all by 2030, as envisioned in the Sustainable Development Goals, we cannot afford to miss the opportunities these tools present. And yet, how well we realise the digital transformation of healthcare – whether we manage to reap its benefits while doing no harm – depends on the political choices we make. We must not fall prey to the belief that technology will on its own improve people’s health and well-being. In reality, who has access to digital innovation and who benefits from it hinges largely on the political and regulatory system; on social justice and the distribution of resources, power and capital; and on digital literacy and rights.

For the global good

For digital technology to have the positive impact on health we envisage, we must address underlying conditions of inequality and injustice and better protect the rights and entitlements of individuals and societies. We must ensure that digital technology is not used to extract data for unethical commercial or surveillance purposes, to discriminate against minorities or at-risk individuals in insurance schemes, or to stigmatise vulnerable groups. At the same time, we need to close existing data gaps, which disproportionately affect marginalised people, individuals with low economic status who lack access to healthcare, and communities where health data is not routinely collected. Aligning the public health and research needs for comprehensive data collection with robust data and privacy protection and non-discrimination is another lesson to be learnt from the global COVID-19 pandemic.

Today, however, developments in digital technology and AI often outpace our collective capacity to understand new technologies, assess their impact, and find ways to accommodate and regulate them. As a result, digital health unfolds in a largely unregulated landscape that lacks comprehensive political, legal and governance frameworks. New technological tools are thus fielded with little oversight or transparency. People may therefore not know how their data is being collected, processed or repurposed, or that it could potentially be used to discriminate against them as individuals or communities. Nor may they be aware that technologies they rely on, even outside the medical realm, can be used to predict, detect or influence health-relevant behaviour. Likewise, patients may feel that they have little choice but to consent to data-sharing agreements in order to access life-saving treatment. The most vulnerable in society, especially, are increasingly subject to demands and forms of intrusion without accountability, with citizens’ information becoming ever more accessible to private companies and governments.

In recent years, mounting pressure on tech companies to act more responsibly and to protect the rights of citizens and societies has led to a burgeoning of ethical ‘codes’ and non-binding voluntary self-commitments developed by the companies themselves. These codes are laudable because they reflect increased awareness within the private sector of the potential harms of digitalisation. But if we want to reap the full benefits of digitalisation in health while minimising harm, we must go further. We at Fondation Botnar believe that if we want to leverage digital health for the greater public good, we must anchor our work in the legally binding framework of universal human rights. By putting the rights and entitlements of individuals centre stage and by imposing duties and responsibilities on states and businesses, this framework gives us a clear orientation and normative guidelines for a responsible transformation of health systems in the digital age. Human rights provide us with tangible, actionable means to fight for fairness, non-discrimination and equity, and to call for inclusion, empowerment and participation in shaping the digital health ecosystems of the future.