EI-MTD: Moving Target Defense for Edge Intelligence against Adversarial Attacks
Authors
Abstract
Edge intelligence has played an important role in constructing smart cities, but the vulnerability of edge nodes to adversarial attacks has become an urgent problem. A so-called adversarial example can fool a deep learning model deployed on an edge node into misclassification. Owing to the transferability property of adversarial examples, an adversary can easily launch black-box attacks with a local substitute model. In general, edge nodes have limited resources and cannot afford the complicated defense mechanisms used in a cloud data center. To address this challenge, we propose a dynamic defense mechanism, namely EI-MTD. It first obtains robust member models of small size through differential knowledge distillation from a complex teacher model in the cloud data center. Then, a dynamic scheduling policy, built on a Bayesian Stackelberg game, is applied to the choice of the target model for service. This dynamic scheduling prohibits the adversary from selecting an optimal substitute model for black-box attacks. We also conduct extensive experiments to evaluate the proposed mechanism, and the results show that EI-MTD can protect edge intelligence effectively against adversarial attacks in black-box settings.
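The abstract names two ingredients: distilling small yet robust member models from a larger teacher, and randomly scheduling which member model serves each request according to a game-theoretic mixed strategy. The following is a minimal Python sketch, assuming PyTorch; the loss shown is standard temperature-scaled knowledge distillation rather than the paper's exact differential formulation, and the mixed strategy is assumed to be computed separately from the Bayesian Stackelberg game.

```python
# Minimal sketch (assumes PyTorch). This is not the paper's exact differential
# knowledge distillation objective or its Stackelberg solver; it only
# illustrates the two ingredients described in the abstract.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Temperature-scaled distillation: blend soft teacher targets with
    hard-label cross-entropy to train a small member model."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    # KL term is scaled by T^2, as in standard distillation formulations.
    kd = F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

def schedule_member(member_models, mixed_strategy):
    """Pick the member model that serves the next request by sampling from the
    defender's mixed strategy (assumed to come from the Bayesian Stackelberg
    equilibrium); randomized selection is what makes the target 'moving'."""
    idx = torch.multinomial(mixed_strategy, num_samples=1).item()
    return member_models[idx]
```

For instance, with three member models a strategy such as `torch.tensor([0.5, 0.3, 0.2])` would serve half of the requests with the first model, preventing the attacker from reliably crafting transferable examples against any single fixed target.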
Similar resources
Securing Deep Neural Nets against Adversarial Attacks with Moving Target Defense
Deep Neural Networks (DNNs) are presently the state-of-the-art for image classification tasks. However, recent works have shown that these systems can be easily fooled to misidentify images by modifying the image in particular ways, often rendering them practically useless. Moreover, defense mechanisms proposed in the literature so far are mostly attack-specific and prove to be ineffective agai...
MTD CBITS: Moving Target Defense for Cloud-Based IT Systems
The static nature of current IT systems gives attackers the extremely valuable advantage of time, as adversaries can take their time and plan attacks at their leisure. Although cloud infrastructures have increased the automation options for managing IT systems, the introduction of Moving Target Defense (MTD) techniques at the entire IT system level is still very challenging. The core idea of MT...
Sparsity-based Defense against Adversarial Attacks on Linear Classifiers
Deep neural networks represent the state of the art in machine learning in a growing number of fields, including vision, speech and natural language processing. However, recent work raises important questions about the robustness of such architectures, by showing that it is possible to induce classification errors through tiny, almost imperceptible, perturbations. Vulnerability to such “adversa...
Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser
Neural networks are vulnerable to adversarial examples. This phenomenon poses a threat to their applications in security-sensitive systems. It is thus important to develop effective defending methods to strengthen the robustness of neural networks to adversarial attacks. Many techniques have been proposed, but only a few of them are validated on large datasets like the ImageNet dataset. We prop...
Defense-GAN: Protecting Classifiers against Adversarial Attacks Using Generative Models
In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend de...
Journal
Journal title: ACM Transactions on Privacy and Security
Year: 2022
ISSN: 2471-2574, 2471-2566
DOI: https://doi.org/10.1145/3517806