A CMA‐ES‐Based Adversarial Attack Against Black‐Box Object Detectors
Authors
Abstract
Object detection is one of the essential tasks in computer vision. Detectors based on deep neural networks have been used more and more widely in safety-sensitive applications, such as face recognition, video surveillance, autonomous driving, and other tasks. It has been proved that object detectors are vulnerable to adversarial attacks. We propose a novel black-box attack method, which can successfully attack both regression-based and region-based detectors. We introduce methods to reduce the search dimensions and the dimension of the optimization problems, and to reduce the number of queries, using the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) as the primary method to generate adversarial examples. Our method only adds perturbations to the bounding box to achieve a precise attack. The proposed method can hide a specified object with a success rate of 86% and an average of 5,124 queries, and can hide all objects with a success rate of 74% and an average of 6,154 queries. Our work illustrates the effectiveness of CMA-ES in generating adversarial examples and proves the vulnerability of object detectors against black-box attacks.
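The abstract describes CMA-ES as a query-only optimizer for crafting adversarial perturbations against a black-box detector. The paper's actual detector objective is not reproduced here; as a minimal sketch of the underlying optimizer (following Hansen's standard CMA-ES formulation), the loop below minimizes an arbitrary black-box function through fitness queries alone, with no gradient access. The function `cma_es` and the toy sphere objective in the usage comment are illustrative assumptions, not code from the paper.

```python
import numpy as np

def cma_es(f, x0, sigma=0.5, iterations=200, seed=0):
    """Minimal CMA-ES minimizing a black-box function f via queries only."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    lam = 4 + int(3 * np.log(n))             # offspring per generation
    mu = lam // 2                            # parents kept for recombination
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                             # positive recombination weights
    mueff = 1.0 / np.sum(w ** 2)             # variance-effective selection mass
    # standard strategy parameters (Hansen's defaults)
    cc = (4 + mueff / n) / (n + 4 + 2 * mueff / n)
    cs = (mueff + 2) / (n + mueff + 5)
    c1 = 2 / ((n + 1.3) ** 2 + mueff)
    cmu = min(1 - c1, 2 * (mueff - 2 + 1 / mueff) / ((n + 2) ** 2 + mueff))
    damps = 1 + 2 * max(0.0, np.sqrt((mueff - 1) / (n + 1)) - 1) + cs
    chin = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n * n))

    mean = np.asarray(x0, dtype=float).copy()
    pc, ps = np.zeros(n), np.zeros(n)        # evolution paths
    C = np.eye(n)                            # covariance matrix
    best_x, best_f = mean.copy(), f(mean)

    for g in range(1, iterations + 1):
        # sample lam candidates from N(mean, sigma^2 * C)
        eigval, B = np.linalg.eigh(C)
        D = np.sqrt(np.maximum(eigval, 1e-20))
        z = rng.standard_normal((lam, n))
        y = (z * D) @ B.T                    # rows y_k ~ N(0, C)
        x = mean + sigma * y
        fit = np.array([f(xi) for xi in x])  # black-box queries
        order = np.argsort(fit)
        if fit[order[0]] < best_f:
            best_f, best_x = fit[order[0]], x[order[0]].copy()
        ysel = y[order[:mu]]
        yw = w @ ysel                        # weighted recombination step
        mean = mean + sigma * yw
        # step-size path uses C^{-1/2} @ yw = B diag(1/D) B^T yw
        ps = (1 - cs) * ps + np.sqrt(cs * (2 - cs) * mueff) * (B @ ((B.T @ yw) / D))
        hsig = (np.linalg.norm(ps) / np.sqrt(1 - (1 - cs) ** (2 * g)) / chin
                < 1.4 + 2 / (n + 1))
        # rank-1 and rank-mu covariance update
        pc = (1 - cc) * pc + hsig * np.sqrt(cc * (2 - cc) * mueff) * yw
        C = ((1 - c1 - cmu) * C
             + c1 * (np.outer(pc, pc) + (1 - hsig) * cc * (2 - cc) * C)
             + cmu * (ysel * w[:, None]).T @ ysel)
        sigma *= np.exp((cs / damps) * (np.linalg.norm(ps) / chin - 1))
    return best_x, best_f
```

In an attack like the one described, `f` would score a perturbed image by querying the detector (e.g., the confidence of the target box), and the restriction of the perturbation to the bounding box is what shrinks the search dimension the optimizer has to handle.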
Related Articles
Note on Attacking Object Detectors with Adversarial Stickers
Deep learning has proven to be a powerful tool for computer vision and has seen widespread adoption for numerous tasks. However, deep learning algorithms are known to be vulnerable to adversarial examples. These adversarial inputs are created such that, when provided to a deep learning algorithm, they are very likely to be mislabeled. This can be problematic when deep learning is used to assist...
Adversarial Examples that Fool Detectors
An adversarial example is an example that has been adjusted to produce a wrong label when presented to a system at test time. To date, adversarial example constructions have been demonstrated for classifiers, but not for detectors. If adversarial examples that could fool a detector exist, they could be used to (for example) maliciously create security hazards on roads populated with smart vehic...
Learning to Attack: Adversarial Transformation Networks
With the rapidly increasing popularity of deep neural networks for image recognition tasks, a parallel interest in generating adversarial examples to attack the trained models has arisen. To date, these approaches have involved either directly computing gradients with respect to the image pixels or directly solving an optimization on the image pixels. We generalize this pursuit in a novel direc...
Integrating object detectors
Keywords: face detector, object detector, integration, expected computational cost, cascade of classifiers, decision tree. This paper describes a method for integrating object detectors that reduces the expected computational cost of evaluating all the detectors whilst obtaining the same logical behaviour as running the detectors independently. The method combines the decision trees of the different object...
ASP: A Fast Adversarial Attack Example Generation Framework based on Adversarial Saliency Prediction
With their excellent accuracy and feasibility, Neural Networks (NNs) have been widely applied in novel intelligent applications and systems. However, with the appearance of adversarial attacks, NN-based system performance becomes extremely vulnerable: image classification results can be arbitrarily misled by adversarial examples, which are crafted images with human-unperc...
Journal
Journal: Chinese Journal of Electronics
Year: 2021
ISSN: 1022-4653, 2075-5597
DOI: https://doi.org/10.1049/cje.2021.03.003