WHO Guidelines on AI in Health: Impacts on privacy and regulatory considerations

Charlotte Beck, 12 June 2024
The WHO has released several guidelines and recommendations on the use of artificial intelligence for health. Among the topics covered, data protection is regularly addressed. These aspects are developed in this contribution.

Regulatory considerations on artificial intelligence for health

Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models

Benefits and risks of using artificial intelligence for pharmaceutical development and delivery

The integration of artificial intelligence (AI) into healthcare offers great potential for advancing medical research, improving patient outcomes, and optimizing healthcare delivery. However, this integration also presents significant ethical, legal, and privacy challenges. The World Health Organization (WHO) has recently issued three guiding documents addressing the challenges of AI use for health. This contribution seeks to provide an overview of these guidelines, focusing on aspects of data protection and privacy in general.

Regulatory Considerations on AI for Health

The WHO’s 2023 document “Regulatory Considerations on AI for Health” highlights critical aspects of privacy and data protection in the regulatory landscape. Because of the vast amounts of data involved in the development and use of this technology, anonymization techniques are often less effective, and the multiplication of processing activities increases security risks. This is particularly true for genetic data. The current regulatory environment is fragmented, with international, national, and regional laws overlapping, and specialized regulations such as the GDPR imposing specific requirements, which introduces further challenges related to consent and cross-border data transfers.
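To make the point about weakened anonymization concrete, the sketch below is a toy example (the records and fields are invented, not drawn from the WHO documents): it counts how many records remain unique on a handful of quasi-identifiers even after direct identifiers are removed. The richer the data, the more such combinations become unique.

```python
# Illustrative sketch: why "anonymized" rich data can remain
# re-identifiable. We count how many records are unique on a few
# quasi-identifiers; the dataset and fields are invented.
from collections import Counter

records = [
    {"zip": "8001", "birth_year": 1980, "sex": "F", "diagnosis": "..."},
    {"zip": "8001", "birth_year": 1980, "sex": "M", "diagnosis": "..."},
    {"zip": "8002", "birth_year": 1975, "sex": "F", "diagnosis": "..."},
    {"zip": "8002", "birth_year": 1975, "sex": "F", "diagnosis": "..."},
]

# Group records by quasi-identifier combination (no names involved).
combos = Counter((r["zip"], r["birth_year"], r["sex"]) for r in records)

# A record whose combination appears only once can be singled out by
# anyone who knows these attributes, despite the missing identifiers.
unique = sum(1 for count in combos.values() if count == 1)
print(f"{unique}/{len(records)} records are unique on (zip, birth_year, sex)")
```

Adding more attributes per record only increases the share of unique combinations, which is why large, rich health datasets resist effective anonymization.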

The main aspect developed in this guidance regarding privacy and data protection is the need for documentation and transparency. Institutions can strive to establish trust by providing detailed documentation of their data protection practices. These policies should clearly outline the types of data collected, the roles of the parties involved (data controllers or data processors), the applicable legal bases, and the collection methods. Additionally, where the processing relies on consent, the policies may describe how that consent is collected, in order to demonstrate the compliance of the activity.
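As a purely illustrative sketch, the documentation items listed above could be kept in machine-readable form along the following lines; the ProcessingRecord structure and its field names are assumptions chosen for this example, not something prescribed by the WHO guidance or the GDPR.

```python
# Minimal, illustrative record of a processing activity, loosely
# following the documentation items the guidance lists (data types,
# roles, legal basis, collection method). All field names are invented.
from dataclasses import dataclass, asdict
from typing import Optional
import json


@dataclass
class ProcessingRecord:
    activity: str                  # purpose of the processing
    data_categories: list[str]     # types of data collected
    controller: str                # who determines purposes and means
    processors: list[str]          # third parties acting on behalf
    legal_basis: str               # e.g. consent, legitimate interest
    collection_method: str         # how the data is obtained
    consent_method: Optional[str] = None  # documented when the basis is consent

    def to_json(self) -> str:
        """Serialize the record so it can be disclosed or audited."""
        return json.dumps(asdict(self), indent=2)


record = ProcessingRecord(
    activity="Training a diagnostic support model",
    data_categories=["imaging data", "diagnosis codes"],
    controller="Example Hospital",
    processors=["Example AI Vendor"],
    legal_basis="consent",
    collection_method="collected during routine care, with patient notice",
    consent_method="written consent form at admission",
)
print(record.to_json())
```

Keeping such records in a structured form also makes the disclosure to regulators discussed below straightforward.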

The disclosure of these policies allows regulators to determine a “standard of practice” of the companies making them available, against which compliance can be examined. Companies must disclose significant uses of personal information for algorithmic decisions, detailing data types, sources, and the technical and organisational measures implemented to mitigate risks.

Still on the topic of documentation and transparency, the guidance puts forward the principle of data accuracy. Developers of AI must ensure a “quality continuum”, a concept reflected in Art. 5 GDPR (in the principles of “accuracy” and “accountability”), but also in the field of quality control, for instance in the ICH E6 (R2) Guideline for Good Clinical Practice (see Section 5.5, “Trial Management, Data Handling, and Record Keeping”). The idea is to ensure the accuracy of the information throughout its lifecycle. Applied to AI, this means that privacy needs to be taken into account from the design to the deployment of a system.
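As an illustration of accuracy checks applied early in the lifecycle, the sketch below validates records before they enter a training pipeline; the validation rules and record fields are invented for the example and are not taken from the guidance.

```python
# Illustrative only: enforcing accuracy checks where data enters an AI
# pipeline, so errors are caught early in the lifecycle rather than
# after deployment. Rules and fields are invented for this example.
from datetime import date


def validate_record(record: dict) -> list[str]:
    """Return a list of accuracy problems found in a single record."""
    problems = []
    if not record.get("patient_id"):
        problems.append("missing patient identifier")
    if not 0 < record.get("age", -1) < 120:
        problems.append(f"implausible age: {record.get('age')}")
    if record.get("sample_date", date.max) > date.today():
        problems.append("sample date lies in the future")
    return problems


records = [
    {"patient_id": "P-001", "age": 54, "sample_date": date(2023, 4, 2)},
    {"patient_id": "", "age": 230, "sample_date": date(2023, 4, 2)},
]

# Only records passing every check proceed to training; the rest are
# kept with their error list, so the problem stays documented.
clean = [r for r in records if not validate_record(r)]
rejected = [(r, validate_record(r)) for r in records if validate_record(r)]
print(f"{len(clean)} clean, {len(rejected)} rejected: {rejected}")
```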

Apart from the performance of privacy impact assessments, required by Art. 35 GDPR and recommended by the NIST Privacy Framework, documentation is a central method to ensure transparency, and developers should take a central role in it. Similarly, creating audit trails and annotating AI models makes it possible to describe the decision-making process and contributes to the “explainability” of a model’s outputs, further enhancing its transparency.
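By way of illustration only, an audit trail for model decisions could look like the sketch below; the logged fields, the toy model, and the audited_predict helper are assumptions made for this example rather than anything specified in the WHO document.

```python
# Illustrative sketch: an audit trail recording, for every model
# decision, enough context to reconstruct and explain it later.
import hashlib
import json
from datetime import datetime, timezone


def audited_predict(model, features: dict, trail: list) -> float:
    """Run a prediction and append an audit-trail entry describing it."""
    score = model(features)  # any callable taking a feature dict
    trail.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": getattr(model, "version", "unknown"),
        # Hash the inputs rather than storing raw personal data.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "features_used": sorted(features),
        "output": score,
    })
    return score


def toy_model(features: dict) -> float:
    """Stand-in for a real model: a fixed, explainable scoring rule."""
    return 0.3 * features["age"] / 100 + 0.7 * features["biomarker"]


toy_model.version = "demo-1.0"

trail: list = []
audited_predict(toy_model, {"age": 62, "biomarker": 0.8}, trail)
print(json.dumps(trail, indent=2))
```

Hashing the inputs keeps the trail auditable without turning the log itself into a second store of personal data.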

Ethics and Governance of AI for Health

The WHO’s 2024 document “Ethics and Governance of AI for Health: Guidance on Large Multi-Modal Models” outlines ethical principles to guide AI use in healthcare. These principles include protecting autonomy, promoting human well-being and safety, ensuring transparency and explainability, fostering responsibility and accountability, ensuring inclusiveness and equity, and promoting responsive and sustainable AI. This guidance emphasizes that privacy must be prioritized throughout the lifecycle of large multi-modal models (LMMs), from development through provision to deployment.

The main privacy risks in patient-centred applications include the potential for sensitive information to be shared, difficulties in erasing data, and the risk of third-party disclosures. Overdependence on LMMs can undermine trust in healthcare systems due to privacy and confidentiality concerns. Compliance with data protection laws and human rights obligations is therefore essential, as LMMs often use personal data without proper consent. Additionally, the fulfillment of access, rectification, and erasure requests may not be possible, depriving individuals of control over their information.

To mitigate these risks, it is essential to ensure high-quality, diversified data while avoiding data colonialism (collecting data from low- and middle-income countries). Other recommended measures include involving patient representatives in AI development, which allows human oversight and the ability to address privacy concerns effectively.

Regarding measures that can be taken by governments, the guidance mentions: the implementation of target product profiles; the design of standards and requirements; the development of pre-certification programs; the performance of audits; the consideration of environmental impacts; and the requirement to flag machine-generated content, in order to manage AI systems effectively.

Benefits and Risks of Using AI for Pharmaceutical Development and Delivery

The WHO’s 2024 document “Benefits and Risks of Using AI for Pharmaceutical Development and Delivery” underscores the importance of privacy in data collection and usage, as well as informed consent. AI in this context involves sourcing data not only from existing datasets but also from new sources, such as “hospital systems, including cellular data and genetic information, and from social media”.

The need for large amounts of data leads to several risks, including discrimination, manipulation, and exploitation based on health status. Additionally, sharing health data can undermine an individual’s dignity, autonomy, and safety.

Another privacy risk in the use of AI in clinical trials is the necessity to collaborate with specialized third parties. This subcontracting of activities requires setting up a contractual framework obliging the subcontractor to comply with certain standards applicable to pharmaceutical companies, which may lie outside its usual sector of activity. This can raise concerns about whether the third parties involved properly comply with healthcare standards.

The risks for individuals often arise during data transfers, where cyber-theft, accidental disclosure, government exploitation, discriminatory health insurance terms, and exploitative marketing practices pose significant threats.

In clinical trials, the WHO emphasizes the necessity of obtaining informed consent, especially when AI analyses medical records or public data, such as data collected from social media. The involvement of technology companies in drug development raises significant concerns about their data-sharing practices and privacy protections. As mentioned above, these companies often operate in a less regulated environment than the biomedical domain, thus necessitating stricter rules to ensure the ethical use of data.

Conclusion

While the regulation of AI is developing slowly, these guidance documents provide a basis for companies and institutions to start building their compliance programs. They clearly outline the importance of certain principles, such as transparency and accuracy, the necessity of informed consent and of identifying risks for individuals, and the need for caution when collaborating with big tech companies.

Although the medical and research fields are probably the sectors that stand to gain the most from leveraging AI, they are also those where individuals’ right to privacy is most vulnerable. Some of the above-mentioned practices are already provided for in laws currently in force, such as the GDPR, which offer certain protections and remedies. For the others, efforts from both legislators and practitioners will be needed to make them effective.



Suggested citation: Charlotte Beck, WHO Guidelines on AI in Health: Impacts on privacy and regulatory considerations, 12 June 2024, in www.swissprivacy.law/305


Articles on swissprivacy.law are published under the Creative Commons licence CC BY 4.0.