Backdoor Attacks Against Transfer Learning With Pre-Trained Deep Learning Models

Authors

Abstract

Transfer learning provides an effective solution for feasibly and quickly customizing accurate \textit{Student} models, by transferring the learned knowledge of \textit{Teacher} models pre-trained over large datasets via fine-tuning. Many Teacher models used in transfer learning are publicly available and maintained on public platforms, increasing their vulnerability to backdoor attacks. In this paper, we demonstrate a backdoor threat to transfer learning tasks on both image and time-series data, leveraging publicly accessible Teacher models and aimed at defeating three commonly adopted defenses: \textit{pruning-based}, \textit{retraining-based} and \textit{input pre-processing-based} defenses. Specifically, (A) a ranking-based neuron selection mechanism speeds up the trigger generation and perturbation process while defeating \textit{pruning-based} and/or \textit{retraining-based} defenses; (B) an autoencoder-powered trigger generation method is proposed to produce a robust trigger that can defeat the \textit{input pre-processing-based defense}, while guaranteeing that the selected neuron(s) are significantly activated; and (C) a defense-aware retraining strategy generates the manipulated model using reverse-engineered model inputs. We launch effective misclassification attacks on Student models over real-world image, brain Magnetic Resonance Imaging (MRI) and Electrocardiography (ECG) learning systems. The experiments reveal that our enhanced attack can maintain the $98.4\%$ and $97.2\%$ classification accuracy of the genuine models on clean image and time-series inputs, respectively, while improving the attack success rate on trojaned inputs by $27.9\%-100\%$ and $27.1\%-56.1\%$, respectively, in the presence of pruning-based and/or retraining-based defenses.
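The ranking-based selection in step (A) can be illustrated in a toy setting: rank neurons by their average activation on clean inputs, then optimize an additive trigger that drives the top-ranked neurons. The single ReLU layer, the mean-activation scoring rule, and the gradient-ascent loop below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))            # toy layer: 64 inputs -> 16 neurons
clean_inputs = rng.normal(size=(100, 64))

def activations(x):
    # ReLU activations for a batch, shape (batch, 16)
    return np.maximum(x @ W.T, 0.0)

# (A) rank neurons by mean activation on clean data; pick top-k candidates
# whose activation the trigger will amplify.
mean_act = activations(clean_inputs).mean(axis=0)
top_k = np.argsort(mean_act)[::-1][:3]

# Gradient ascent on an additive trigger: for ReLU(Wx), the gradient of
# neuron i's activation w.r.t. x is W[i] wherever the neuron is active.
trigger = np.zeros(64)
base = clean_inputs[0]
for _ in range(200):
    pre = W @ (base + trigger)
    grad = np.zeros(64)
    for i in top_k:
        if pre[i] > 0:
            grad += W[i]
    trigger += 0.05 * grad
    trigger = np.clip(trigger, -1.0, 1.0)  # bound the perturbation

before = activations(base[None])[0, top_k].sum()
after = activations((base + trigger)[None])[0, top_k].sum()
```

After the loop, the bounded trigger should raise the combined activation of the selected neurons (`after > before`), which is the property step (B) then hardens against input pre-processing.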



Similar Articles

Unregistered Multiview Mammogram Analysis with Pre-trained Deep Learning Models

We show two important findings on the use of deep convolutional neural networks (CNN) in medical image analysis. First, we show that CNN models that are pre-trained using computer vision databases (e.g., Imagenet) are useful in medical image applications, despite the significant differences in image appearance. Second, we show that multiview classification is possible without the pre-registrati...


Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

Deep learning models have achieved high performance on many tasks, and thus have been applied to many security-critical scenarios. For example, deep learning-based face recognition systems have been used to authenticate users to access many security-sensitive applications like payment apps. Such usages of deep learning systems provide the adversaries with sufficient incentives to perform attack...
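The poisoning idea this teaser describes can be sketched minimally: stamp a trigger pattern onto a small fraction of training points, relabel them to the attacker's target class, and train normally. The synthetic data, the logistic-regression model, and the trigger pattern below are my own toy assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 400, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(int)        # clean task: sign of feature 0

trigger = np.zeros(d)
trigger[-3:] = 5.0                   # stamp on the last 3 features
target = 1                           # attacker's target class

# Poison 10% of the training set: apply the trigger, force the target label.
idx = rng.choice(n, n // 10, replace=False)
Xp, yp = X.copy(), y.copy()
Xp[idx] += trigger
yp[idx] = target

# Train logistic regression by full-batch gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Xp @ w + b)))
    w -= 0.5 * (Xp.T @ (p - yp)) / n
    b -= 0.5 * (p - yp).mean()

def predict(A):
    return (A @ w + b > 0).astype(int)

X_test = rng.normal(size=(200, d))
y_test = (X_test[:, 0] > 0).astype(int)
clean_acc = (predict(X_test) == y_test).mean()
backdoor_rate = (predict(X_test + trigger) == target).mean()
```

The poisoned model keeps reasonable accuracy on clean test points while stamping the trigger on any input steers predictions toward the target class, which is exactly the stealth property that makes such backdoors hard to notice.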


Auror: defending against poisoning attacks in collaborative deep learning systems

Deep learning in a collaborative setting is emerging as a cornerstone of many upcoming applications, wherein untrusted users collaborate to generate more accurate models. From the security perspective, this opens collaborative deep learning to poisoning attacks, wherein adversarial users deliberately alter their inputs to mis-train the model. These attacks are known for machine learning systems...


Robust Deep Reinforcement Learning with Adversarial Attacks

This paper proposes adversarial attacks for Reinforcement Learning (RL) and then improves the robustness of Deep Reinforcement Learning algorithms (DRL) to parameter uncertainties with the help of these attacks. We show that even a naively engineered attack successfully degrades the performance of DRL algorithm. We further improve the attack using gradient information of an engineered loss func...


A Hybrid Optimization Algorithm for Learning Deep Models

Deep learning is one of the subsets of machine learning that is widely used in Artificial Intelligence (AI) field such as natural language processing and machine vision. The learning algorithms require optimization in multiple aspects. Generally, model-based inferences need to solve an optimized problem. In deep learning, the most important problem that can be solved by optimization is neural n...



Journal

Journal: IEEE Transactions on Services Computing

Year: 2022

ISSN: 1939-1374, 2372-0204

DOI: https://doi.org/10.1109/tsc.2020.3000900