QoS Prediction and Adversarial Attack Protection for Distributed Services under DLaaS

IEEE Transactions on Computers (2024)

Hunan Univ | Shantou Univ | Providence Univ

Cited 59 | Views 19
Abstract
Deep-Learning-as-a-Service (DLaaS) has received increasing attention as a novel paradigm for deploying deep learning techniques. However, DLaaS faces performance and security issues that urgently need to be addressed. Given limited computation resources and concerns about benefits, Quality-of-Service (QoS) metrics should be considered to optimize the performance and reliability of distributed DLaaS systems. New users and services dynamically and continuously join and leave such a system, which causes cold-start issues, and the increasing demand for robust network connections requires the model to evaluate uncertainty. To address these performance problems, we propose in this article a deep-learning-based model called the embedding-enhanced probability neural network, which extracts information from the graph structure and then estimates the mean and variance of the prediction distribution. Adversarial attacks are a severe threat to model security under DLaaS. We therefore address the vulnerability of the service recommender system and propose adversarial training with an uncertainty-aware loss to protect the model in noisy and adversarial environments. Extensive experiments on a large-scale real-world QoS dataset and comprehensive analysis verify the robustness and effectiveness of the proposed model.
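The abstract describes a prediction head that outputs both a mean and a variance for each QoS value, trained with an uncertainty-aware objective. The following is a minimal, hypothetical PyTorch sketch of that idea, assuming user/service embeddings (for example, produced by a graph neural network over the user–service invocation graph) are already available; the class name ProbabilisticQoSHead, the layer sizes, and the Gaussian negative log-likelihood loss are illustrative assumptions, not the authors' exact architecture.

```python
# A minimal sketch (assumed PyTorch) of a probabilistic QoS prediction head:
# user/service embeddings feed a small network that predicts both the mean and
# the variance of the QoS value, trained with a Gaussian negative log-likelihood
# as an uncertainty-aware loss. Names and sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProbabilisticQoSHead(nn.Module):
    def __init__(self, emb_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, 1)  # predicted QoS mean
        self.var_head = nn.Linear(hidden, 1)   # predicted variance (via softplus)

    def forward(self, user_emb: torch.Tensor, service_emb: torch.Tensor):
        h = self.backbone(torch.cat([user_emb, service_emb], dim=-1))
        mean = self.mean_head(h).squeeze(-1)
        var = F.softplus(self.var_head(h)).squeeze(-1) + 1e-6  # keep variance positive
        return mean, var

def gaussian_nll(mean, var, target):
    # Uncertainty-aware loss: squared error scaled by the predicted variance,
    # plus a log-variance term that discourages inflating uncertainty everywhere.
    return 0.5 * (torch.log(var) + (target - mean) ** 2 / var).mean()

# Example usage with random embeddings standing in for GNN outputs.
model = ProbabilisticQoSHead()
u, s = torch.randn(8, 64), torch.randn(8, 64)
mean, var = model(u, s)
loss = gaussian_nll(mean, var, torch.rand(8))
```

Predicting a variance alongside the mean is one common way to let a model express higher uncertainty for cold-start users or unreliable network conditions, which is the behavior the abstract motivates.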
Key words
Quality of service, Internet of Things, Computational modeling, Deep learning, Security, Performance evaluation, Predictive models, Adversarial attacks, DLaaS, graph neural network, probability forecast, QoS prediction
Chat Paper

[Key points]: Proposes a deep-learning-based model, the embedding-enhanced probability neural network, to optimize the performance and reliability of distributed DLaaS systems, and introduces adversarial training with an uncertainty-aware loss to protect the model against adversarial attacks.

[Methods]: The embedding-enhanced probability neural network model is proposed, and adversarial training with an uncertainty-aware loss is investigated (a rough training-step sketch follows below).

[Experiments]: Extensive experiments on a large-scale real-world QoS dataset verify the robustness and effectiveness of the proposed model.
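As referenced in the Methods point above, here is a rough sketch of how adversarial training could be combined with the uncertainty-aware loss. It reuses the hypothetical ProbabilisticQoSHead and gaussian_nll from the earlier sketch; the FGSM-style perturbation of the input embeddings and the epsilon value are common stand-ins and may differ from the authors' exact attack and training procedure.

```python
# Hypothetical adversarial-training step (FGSM-style perturbation of the input
# embeddings), reusing ProbabilisticQoSHead and gaussian_nll from the sketch above.
# Epsilon, the clean/adversarial weighting, and the optimizer are assumptions.
import torch

def adversarial_training_step(model, optimizer, user_emb, service_emb, target, eps=0.05):
    user_emb = user_emb.clone().detach().requires_grad_(True)
    service_emb = service_emb.clone().detach().requires_grad_(True)

    # Clean forward/backward pass to obtain gradients w.r.t. the embeddings.
    mean, var = model(user_emb, service_emb)
    clean_loss = gaussian_nll(mean, var, target)
    clean_loss.backward()

    # FGSM-style perturbation in the direction of the gradient sign.
    user_adv = (user_emb + eps * user_emb.grad.sign()).detach()
    service_adv = (service_emb + eps * service_emb.grad.sign()).detach()

    # Train on both clean and perturbed inputs with the uncertainty-aware loss.
    optimizer.zero_grad()
    mean_c, var_c = model(user_emb.detach(), service_emb.detach())
    mean_a, var_a = model(user_adv, service_adv)
    loss = 0.5 * (gaussian_nll(mean_c, var_c, target) +
                  gaussian_nll(mean_a, var_a, target))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on a mix of clean and adversarially perturbed inputs, while letting the predicted variance absorb irreducible noise, is one plausible reading of "adversarial training with an uncertainty-aware loss" as summarized above.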