Scattering Model Guided Adversarial Examples for SAR Target Recognition: Attack and Defense
Authors
Abstract
Deep Neural Network (DNN)-based Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) systems have been shown to be highly vulnerable to adversarial perturbations that are deliberately designed yet almost imperceptible, but can bias DNN inference when added to targeted objects. This leads to serious safety concerns when applying DNNs to high-stake SAR ATR applications. Therefore, enhancing the adversarial robustness of DNNs is essential for implementing modern real-world SAR ATR systems. Toward building more robust DNN-based SAR ATR models, this article explores the domain knowledge of the SAR imaging process and proposes a novel Scattering Model Guided Adversarial Attack (SMGAA) algorithm which can generate adversarial perturbations in the form of electromagnetic scattering response (called adversarial scatterers). The proposed SMGAA consists of two parts: 1) a parametric scattering model with a corresponding imaging method and 2) a customized gradient-based optimization algorithm. First, we introduce the effective Attributed Scattering Center Model (ASCM) and a general imaging method to describe the scattering behavior of typical geometric structures in the SAR imaging process. By further devising several strategies to take SAR target images into account and to relax the greedy search procedure, the proposed method does not need to be prudentially finetuned, yet can efficiently find the ASCM parameters that fool SAR classifiers and facilitate robust model training. Comprehensive evaluations on the MSTAR dataset show that the adversarial scatterers generated by SMGAA are more robust to perturbations and transformations in the SAR processing chain than currently studied attacks, and are effective for constructing a defensive model against malicious scatterers.
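The abstract describes SMGAA as a parametric scattering model plus a customized gradient-based optimizer. The sketch below is only a minimal illustration of that idea, not the paper's method: `model` (a PyTorch SAR classifier taking 1x1xHxW chips), `chip`, `label`, and all hyperparameters are hypothetical, and the differentiable Gaussian blob stands in for the full frequency-domain ASCM response and imaging step.

```python
# Minimal sketch of a scattering-model-guided attack, assuming a PyTorch
# classifier `model` that takes 1 x 1 x H x W SAR chips with values in [0, 1].
# The Gaussian blob is a simplified, differentiable stand-in for the ASCM
# scatterer response; the paper's frequency-domain model and imaging chain
# are not reproduced here. All parameter values are illustrative.
import torch
import torch.nn.functional as F

def render_scatterer(params, height, width):
    """Render one synthetic scatterer: params = (amplitude, row, col, width)."""
    A, r, c, w = params
    rows = torch.arange(height, dtype=torch.float32).view(-1, 1)
    cols = torch.arange(width, dtype=torch.float32).view(1, -1)
    return A * torch.exp(-(((rows - r) ** 2 + (cols - c) ** 2) / (2 * w ** 2)))

def scatterer_attack(model, chip, label, steps=100, lr=0.5):
    """Gradient search for scatterer parameters that fool the classifier."""
    h, w = chip.shape[-2], chip.shape[-1]
    params = torch.tensor([0.5, h / 2.0, w / 2.0, 2.0], requires_grad=True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        adv = torch.clamp(chip + render_scatterer(params, h, w), 0.0, 1.0)
        logits = model(adv.unsqueeze(0).unsqueeze(0))
        loss = -F.cross_entropy(logits, label.unsqueeze(0))  # maximize CE loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.clamp(chip + render_scatterer(params.detach(), h, w), 0.0, 1.0)
```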
Similar Articles
Adversarial Examples Generation and Defense Based on Generative Adversarial Network
We propose a novel generative adversarial network to generate and defend adversarial examples for deep neural networks (DNN). The adversarial stability of a network D is improved by training alternately with an additional network G. Our experiment is carried out on MNIST, and the adversarial examples are generated in an efficient way compared with widely used gradient-based methods. After tra...
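Very roughly, the alternating scheme described here can be sketched as below; this is not the paper's implementation, and `D`, `G`, the optimizers, and the bound `eps` are hypothetical placeholders (G is assumed to output a perturbation with the same shape as its input).

```python
# Rough sketch of alternating adversarial-example generation (G) and
# defense (D); not the paper's implementation. D and G are hypothetical
# torch.nn.Module instances, and eps is an illustrative perturbation bound.
import torch
import torch.nn.functional as F

def alternating_step(D, G, opt_D, opt_G, x, y, eps=0.1):
    # G step: craft a bounded perturbation that raises D's loss.
    delta = eps * torch.tanh(G(x))
    loss_G = -F.cross_entropy(D(x + delta), y)
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

    # D step: keep D correct on both clean and G-perturbed inputs.
    delta = eps * torch.tanh(G(x)).detach()
    loss_D = F.cross_entropy(D(x), y) + F.cross_entropy(D(x + delta), y)
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()
```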
Breaking the Madry Defense Model with $L_1$-based Adversarial Examples
The Madry Lab recently hosted a competition designed to test the robustness of their adversarially trained MNIST model. Attacks were constrained to perturb each pixel of the input image by a scaled maximal L∞ distortion ε = 0.3. This decision discourages the use of attacks which are not optimized on the L∞ distortion metric. Our experimental results demonstrate that by relaxing the L∞ constraint ...
Ensembling as a Defense Against Adversarial Examples
Adversarial attacks on machine learning systems take two main flavors. First, there are training-time attacks, which involve compromising the training data that the system is trained on. Unsurprisingly, machines can misclassify examples if they are trained on malicious data. Second, there are test-time attacks, which involve crafting an adversarial example, which a human would easily classify a...
A Parametric Attributed Scattering Center Model for SAR Automatic Target Recognition
We present a parametric attributed scattering model for Synthetic Aperture Radar imagery. The model characterizes both frequency and aspect dependence of scattering centers. We present algorithms for estimating the model parameters from SAR image chips and propose model order estimation algorithms that exploit nested model structures. We develop a Bayes classifier for the extracted model parameter...
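For context, a commonly cited closed form of the attributed scattering center model from the ASC literature (not quoted from this abstract) writes the frequency/aspect response of the i-th scattering center as

$$
E_i(f,\phi;\Theta_i) = A_i \left(j\,\tfrac{f}{f_c}\right)^{\alpha_i}
\exp\!\Big(-j\tfrac{4\pi f}{c}\,(x_i\cos\phi + y_i\sin\phi)\Big)\,
\mathrm{sinc}\!\Big(\tfrac{2\pi f}{c}\,L_i\sin(\phi-\bar{\phi}_i)\Big)\,
\exp(-2\pi f\,\gamma_i\sin\phi),
$$

where $A_i$ is the amplitude, $\alpha_i$ the frequency dependence, $(x_i, y_i)$ the position, $L_i$ and $\bar{\phi}_i$ the length and orientation of distributed scatterers, $\gamma_i$ the aspect dependence of localized scatterers, $f_c$ the center frequency, and $c$ the speed of light.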
Model Uncertainty for Adversarial Examples using Dropouts
An image can undergo a visually imperceptible change and yet get confidently misclassified by a trained Neural Network. Puzzled by this counter-intuitive behaviour, a lot of research has been undertaken in search of answers for this inexplicable phenomenon and more importantly, a possibility to impart robustness against adversarial misclassification. This thesis is a first step in the direction...
Journal
Journal title: IEEE Transactions on Geoscience and Remote Sensing
Year: 2022
ISSN: 0196-2892, 1558-0644
DOI: https://doi.org/10.1109/tgrs.2022.3213305