PM2.5 concentration forecasting using Long Short-Term Memory Neural Network and Multi-Level Additive Model

semanticscholar(2019)

Abstract
Background: PM2.5 concentration prediction can provide an effective early-warning tool for protecting public health. Although many methods are available, comparisons between the multi-level additive model (AM) and the long short-term memory (LSTM) neural network for predicting PM2.5 concentration are limited. This study aimed to compare the performance of the multi-level AM and the LSTM in predicting hourly and daily PM2.5 concentrations.

Methods: Air pollution data from Jul 1, 2016 to Dec 31, 2017 were obtained from the Beijing Municipal Environmental Monitoring Center, and meteorological data were derived from the National Meteorological Science Data Sharing Service. A multi-level AM and an LSTM were developed to estimate regional hourly and daily PM2.5 concentrations.

Results: In predicting hourly PM2.5 concentrations, the LSTM outperformed the multi-level AM (R²: 0.76-0.92 for LSTM vs. 0.59-0.78 for multi-level AM; root mean square error (RMSE): 6.20-17.58 μg/m³ for LSTM vs. 19.19-30.81 μg/m³ for multi-level AM; mean absolute error (MAE): 4.50-13.42 μg/m³ for LSTM vs. 13.55-22.35 μg/m³ for multi-level AM; mean absolute percentage error (MAPE): 0.18%-0.55% for LSTM vs. 0.50%-0.87% for multi-level AM). In predicting daily PM2.5 concentrations, by contrast, the multi-level AM showed higher predictive accuracy than the LSTM (R²: 0.43-0.93 for LSTM vs. 0.74-0.98 for multi-level AM; RMSE: 32.46-46.82 μg/m³ for LSTM vs. 4.83-20.98 μg/m³ for multi-level AM; MAE: 24.32-34.89 μg/m³ for LSTM vs. 3.67-16.33 μg/m³ for multi-level AM; MAPE: 0.92%-1.74% for LSTM vs. 0.11%-0.45% for multi-level AM).

Conclusion: The LSTM outperformed the multi-level AM when a large amount of data was available, while the multi-level AM outperformed the LSTM when the amount of data was relatively small.
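The four scores reported above (R², RMSE, MAE, MAPE) follow their standard definitions. A minimal NumPy sketch of how they can be computed from paired observations and predictions (the function name `eval_metrics` is illustrative, not from the paper, and MAPE is returned as a fraction rather than a percentage):

```python
import numpy as np

def eval_metrics(y_true, y_pred):
    """Standard regression scores: R^2, RMSE, MAE, MAPE (as a fraction)."""
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))          # root mean square error
    mae = np.mean(np.abs(err))                 # mean absolute error
    mape = np.mean(np.abs(err / y_true))       # mean absolute percentage error
    ss_res = np.sum(err ** 2)                  # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                 # coefficient of determination
    return r2, rmse, mae, mape

# Illustrative use on three hourly PM2.5 readings (μg/m³):
y_true = np.array([10.0, 20.0, 30.0])
y_pred = np.array([12.0, 18.0, 33.0])
r2, rmse, mae, mape = eval_metrics(y_true, y_pred)
```

A perfect forecast gives R² = 1 and RMSE = MAE = MAPE = 0; larger errors pull R² down and push the three error scores up, which is how the abstract ranks the two models.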