Explainability of Machine Learning in Work Disability Risk Prediction

Proceedings of the 2023 International Conference on Advances in Computing Research (ACR’23), 2023

Abstract
The risk of work disability can be predicted with machine learning (ML). All stakeholders should understand how the estimation is done in order to trust the results, so these methods should be not only sufficiently accurate but also transparent and explainable. Explainability is a key topic in artificial intelligence (AI) ethics, and explainable AI (XAI) is especially important in health-related applications. We compared the accuracy of two ML methods, one using occupational health care data (MHealth) and one using pension decision register data (MPension). Method MHealth uses deep neural networks and natural language processing (NLP) algorithms, while method MPension uses different decision tree algorithms. Both methods can be regarded as black-box predictors because the reasoning behind their predictions is not understandable to humans. We observed in our previous study that both methods are sufficiently accurate to support experts in decision making. Our aim in this study was to determine whether these methods are also sufficiently explainable for clinical use. The two main approaches to assessing the explainability of ML methods are transparency design and post-hoc explanations. Because we could not access the data used by these methods, we limited our research to the post-hoc explanation approach. We constructed visualizations for the methods MHealth and MPension and discussed how understandable they are. We found that explainability is better in MPension, although the deep learning algorithm in MHealth can also be visualized.
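
To make the post-hoc explanation approach concrete, below is a minimal sketch in Python that applies permutation importance (a standard post-hoc technique; the paper's own visualizations are not specified here) to a decision-tree classifier, roughly in the spirit of MPension. All feature names and data are synthetic placeholders invented for illustration, not from the study.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic register-style features (hypothetical names, not from the paper).
feature_names = ["age", "sick_leave_days", "diagnosis_count", "income_level"]
X = rng.normal(size=(1000, 4))
# In this toy setup, disability risk is driven mainly by sick_leave_days.
y = (X[:, 1] + 0.3 * X[:, 0] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the "black box"; the explanation step below only queries its predictions,
# so the same procedure would also apply to a deep network such as MHealth.
model = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)

# Post-hoc explanation: permutation importance measures how much test accuracy
# drops when each feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")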
Keywords
disability, machine learning, explainability, prediction, risk