Who does the fairness in health AI community represent?

Isabelle Rose I. Alberto, Nicole Rose I. Alberto, Yuksel Altinel, Sarah Blacker, William Warr Binotti, Leo Anthony Celi, Tiffany Chua, Amelia Fiske, Molly Griffin, Gulce Karaca, Nkiruka Mokolo, David Kojo N. Naawu, Jonathan Patscheider, Anton Petushkov, Justin Quion, Charles Senteio, Simon Taisbak, İsmail Tırnova, Harumi Tokashiki, Adrian Velasquez, Antonio Yaghy, Keagan Yap

medRxiv (2023)

Abstract
OBJECTIVE Artificial intelligence (AI) and machine learning are central components of today’s medical environment. The fairness of AI, i.e., its freedom from bias, has repeatedly come into question. This study investigates the diversity of the members of academia whose scholarship poses questions about the fairness of AI.

METHODS Articles combining the topics of fairness, artificial intelligence, and medicine were selected from PubMed, Google Scholar, and Embase using keyword searches. Eligibility screening and data extraction were done manually and cross-checked by another author for accuracy. 375 articles were selected for further analysis, cleaned, and organized in Microsoft Excel; spatial diagrams were generated using Tableau Public, and additional graphs were generated using Matplotlib and Seaborn. Linear and logistic regressions were analyzed using Python.

RESULTS We identified 375 eligible publications, including research and review articles concerning AI and fairness in healthcare. Of the 1,984 authors overall, 794 were female and 1,190 were male. Of the 375 first authors, 155 (41.33%) were female and 220 (58.67%) were male; of the last authors, 110 (31.16%) were female and 243 (68.84%) were male. Regarding ethnicity, 234 (62.40%) of the first authors were white, 103 (27.47%) were Asian, 24 (6.40%) were black, and 14 (3.73%) were Hispanic. Of the last authors, 234 (66.29%) were white, 96 (27.20%) were Asian, 12 (3.40%) were black, and 11 (3.11%) were Hispanic. Most authors were from the USA, Canada, and the United Kingdom, and the same trend held for the first and last authors of the articles. Across the overall distribution, 1,631 (82.2%) authors were based in high-income countries, 209 (10.5%) in upper-middle-income countries, 135 (6.8%) in lower-middle-income countries, and 9 (0.5%) in low-income countries.
CONCLUSIONS Analysis of the bibliographic data revealed an overrepresentation of white authors and male authors, especially in the first- and last-author roles. The more male authors a paper had, the more likely it was to be cited. Additionally, papers whose authors were based in higher-income countries were cited more often and published in higher-impact journals. These findings highlight the lack of diversity among the authors in the AI fairness community whose work gains the largest readership, potentially compromising the very impartiality that the AI fairness community is working towards.

### Competing Interest Statement

The authors have declared no competing interest.

### Funding Statement

None

### Author Declarations

I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained. Not Applicable

The details of the IRB/oversight body that provided approval or exemption for the research described are given below: IRB and/or ethics committee approval was not necessary for this study.

I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients, or participants themselves) outside the research group, so they cannot be used to identify individuals. Not Applicable

I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).
Not Applicable

I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable. Not Applicable

### Data Availability

All relevant data are within the manuscript and its Supporting Information files. Additional data supporting the findings of this study are publicly available at: https://public.tableau.com/app/profile/jonathan6077/viz/IstheFairnessCommuntyFair/IstheFairnessComminutyFair?publish=yes&fbclid=IwAR0_l5_b-lWLr_baGhCdwBKi-vVjyzVJg7CHG971EOEMiea2MSp33NXExVM
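The abstract states that linear and logistic regressions were analyzed using Python but does not give the model specification or data. As a purely illustrative sketch, not the authors' actual analysis, the kind of linear fit behind a claim such as "the more male authors a paper had, the more likely it was to be cited" could look like the following; the paper-level counts here are invented placeholders:

```python
import numpy as np

# Hypothetical per-paper data (NOT from the study): number of male
# authors on each paper and that paper's citation count.
male_authors = np.array([1, 2, 3, 4, 5, 6], dtype=float)
citations = np.array([2, 5, 4, 9, 11, 12], dtype=float)

# Ordinary least-squares fit of citations on male-author count.
# np.polyfit with deg=1 returns (slope, intercept).
slope, intercept = np.polyfit(male_authors, citations, deg=1)

# A positive slope would be consistent with the association the
# abstract reports between male authorship and citation counts.
print(f"slope={slope:.2f}, intercept={intercept:.2f}")
```

On real bibliometric data one would also control for confounders (field, venue, year), which a univariate fit like this does not do.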