A Novel Proactive Caching Policy for Privacy Preservation Using Federated Learning and the Lottery Ticket Hypothesis in Edge Computing

CSCWD(2023)

Abstract
Proactive caching is proving to be an increasingly efficient way to handle massive amounts of data as mobile edge computing becomes more widespread. Because edge nodes sit closer to end users, caching content at the edge in advance enables quick responses to user queries and lowers transmission latency. Since edge servers have limited resources, accurately predicting which content to cache is particularly crucial. However, most recently proposed studies on content-popularity prediction account for neither collaboration between edge nodes nor the protection of user data privacy. This work proposes a proactive caching strategy (LT-FLPC) that combines federated learning with the lottery ticket hypothesis to address both issues. The lottery ticket hypothesis is applied to protect user privacy and data when edge nodes communicate with the central server, and, given the collaborative domains that exist between different edge nodes, federated learning is employed to realize the proactive caching strategy. Experimental results show that the proposed method significantly outperforms other caching algorithms for estimating content popularity, such as Thompson Sampling, in terms of caching efficiency.
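The abstract names two ingredients but gives no implementation details: lottery-ticket-style pruning so that only a sparse subnetwork is exchanged with the central server, and FedAvg-style aggregation across collaborating edge nodes. The sketch below is a minimal NumPy illustration of those two ideas; the function names, the magnitude-pruning criterion, and the plain weighted average are assumptions for illustration, not the paper's actual LT-FLPC algorithm.

```python
import numpy as np

def lottery_mask(weights, keep_ratio=0.2):
    """Magnitude-based pruning mask in the spirit of the lottery ticket
    hypothesis: keep only the largest-|w| fraction of parameters."""
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]  # k-th largest magnitude
    return (np.abs(weights) >= threshold).astype(weights.dtype)

def client_update(weights, grad, mask, lr=0.1):
    """One local SGD step on an edge node; only the surviving (masked)
    parameters are kept and shared, so the dense model never leaves the node."""
    return (weights - lr * grad) * mask

def fedavg(client_weights, client_sizes):
    """Standard FedAvg: average client models weighted by local data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```

For example, a central server could call `fedavg` each round on the sparse, masked updates returned by `client_update` from every edge node, then broadcast the aggregated popularity-prediction model back to the nodes.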
Keywords
Edge Computing, Federated Learning, Lottery Ticket Hypothesis, proactive cache, MLP, privacy preserving