AI Can Improve Health; Ethics, Human Rights a Grave Concern

WHO Warns Against Untested AI-Based Tools; Could Harm Patients

Artificial Intelligence (AI) can improve healthcare systems across the world, but only if ethics and human rights are put at the heart of its design, deployment and use, says the first WHO report on AI in the health sector.

The report, Ethics and governance of artificial intelligence for health, published on Monday, is the result of two years of consultations held by a panel of international experts appointed by WHO.

ENORMOUS POTENTIAL 

Noting that Artificial Intelligence (AI), like all new technologies, holds enormous potential for improving the health of millions of people across the globe, WHO Director-General Dr Tedros Adhanom Ghebreyesus said that, like other technologies, it can also be misused and cause harm. He stated that the first report was a valuable guide for countries on how to maximise the benefits of AI while minimising its risks and avoiding its pitfalls.

WHO Chief Scientist Dr Soumya Swaminathan, in the foreword to the report, said that AI could benefit low- and middle-income countries, especially those with significant gaps in health care delivery and services. She said that AI could help extend health care services to underserved populations, improve public health surveillance, and enable healthcare providers to better attend to patients and engage in complex care.

In the report, the WHO says that some of the wealthy countries in the world are already using AI to improve the speed and accuracy of diagnosis and screening for diseases, strengthen health research and drug development, assist with clinical care, and support diverse public health interventions.

It said that Artificial Intelligence (AI) could empower patients to take greater control of their own health care and better understand their needs. The technology could also help resource-poor countries and rural communities bridge gaps in access to health services.

BENEFITS 

The report mentions that AI can help improve patient care, speed up diagnoses and optimise treatment plans, and support pandemic preparedness and response. It can also inform the decisions of health policy-makers and the allocation of resources within health systems.

While acknowledging these benefits, the WHO also warns against overestimating what AI can deliver. The report points out that AI involves challenges and risks, including the unethical collection and use of health data, biases encoded in algorithms, and risks to patient safety, cybersecurity and the environment.

In the report, the WHO says that AI systems trained mainly on data collected from individuals in high-income countries may not perform well for individuals in low- and middle-income countries. The WHO stresses that AI systems should be carefully designed to reflect the diversity of socio-economic and health-care settings, and should be accompanied by training in digital skills, awareness-raising and community engagement, especially for healthcare workers, who will need digital literacy.

The report calls on governments, providers, and designers to work together to address ethics and human rights concerns at every stage of an Artificial Intelligence technology’s design, development and deployment.

SIX PRINCIPLES TO ENSURE AI WORKS FOR THE PUBLIC INTEREST

Protecting human autonomy: Humans should remain in control of health-care systems and medical decisions. Privacy and confidentiality should be protected, and patients must give valid informed consent through appropriate legal frameworks for data protection.

Promoting human well-being and safety and the public interest: The designers of AI technologies should satisfy regulatory requirements for safety, accuracy and efficacy for well-defined use cases or indications. Measures of quality control in practice and quality improvement in the use of AI must be available.

Ensuring transparency, explainability and intelligibility: Sufficient information should be published or documented before the design or deployment of an AI technology. Such information must be easily accessible and facilitate meaningful public consultation and debate on how the technology is designed and how it should or should not be used.

Fostering responsibility and accountability: Stakeholders should ensure that AI technologies are used under appropriate conditions and by appropriately trained people. Effective mechanisms should be available for questioning and for redress for individuals and groups that are adversely affected by decisions based on algorithms.

Ensuring inclusiveness and equity: Inclusiveness requires that AI for health be designed to encourage the widest possible equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics protected under human rights codes.

Promoting AI that is responsive and sustainable: Designers, developers and users should continuously and transparently assess AI applications during actual use to determine whether AI responds adequately and appropriately to expectations and requirements. AI systems should also be designed to minimize their environmental consequences and increase energy efficiency. Governments and companies should address anticipated disruptions in the workplace, including training for health-care workers to adapt to the use of AI systems, and potential job losses due to use of automated systems.
