Fuzzing-based hard-label black-box attacks against machine learning models

Authors

Abstract

Machine learning models are vulnerable to adversarial examples. In this paper we study the most realistic setting: hard-label black-box attacks. The main limitation of existing attacks is that they need a large number of model queries, making them inefficient and even infeasible in practice. Inspired by the very successful fuzz testing approach in the traditional software engineering and computer security domains, we propose a fuzzing-based hard-label black-box attack against machine learning models. We design an AdvFuzzer to explore multiple paths between a source image and a guidance image, and a LocalFuzzer to explore the nearby space around a given input for identifying potential adversarial examples. We demonstrate that our fuzzing attack is feasible and effective in generating adversarial examples with a significantly reduced number of queries and L0 distance. More interestingly, given an adversarial example generated either by our attack or by other attacks, the LocalFuzzer can immediately generate more adversarial examples with a smaller L2 distance from the source example.
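The abstract describes the two fuzzers only at a high level. The Python fragment below is a minimal sketch of that general idea under stated assumptions, not the authors' implementation: the `query_label` oracle, the Gaussian mutation parameters, and the interpolate-toward-the-source loop are hypothetical choices introduced here for illustration.

```python
import numpy as np

# Hypothetical hard-label oracle: returns only the predicted class index for x.
# In a real attack this would wrap the target model or a remote prediction API.
def query_label(x):
    raise NotImplementedError("plug in the target model here")

def local_fuzz(x, source, source_label, n_mutants=20, sigma=0.05, rng=None):
    """LocalFuzzer-style step (sketch): randomly mutate x and keep the mutant
    that is still adversarial (label differs from the source label) and lies
    closest to the source image in L2 distance."""
    rng = rng or np.random.default_rng()
    best, best_dist = x, np.linalg.norm(x - source)
    for _ in range(n_mutants):
        mutant = np.clip(x + rng.normal(0.0, sigma, size=x.shape), 0.0, 1.0)
        if query_label(mutant) != source_label:          # one model query per mutant
            dist = np.linalg.norm(mutant - source)
            if dist < best_dist:
                best, best_dist = mutant, dist
    return best

def adv_fuzz(source, guidance, n_rounds=50, step=0.1, rng=None):
    """AdvFuzzer-style loop (sketch): start from a guidance image that the model
    labels differently from the source, then repeatedly step toward the source
    image and locally fuzz, keeping the current point adversarial."""
    rng = rng or np.random.default_rng()
    source_label = query_label(source)
    assert query_label(guidance) != source_label, "guidance must have a different label"
    x = guidance.copy()
    for _ in range(n_rounds):
        candidate = np.clip(x + step * (source - x), 0.0, 1.0)  # move toward source
        if query_label(candidate) != source_label:
            x = candidate                      # still adversarial: accept the move
        x = local_fuzz(x, source, source_label, rng=rng)
    return x  # adversarial example close to the source image
```

Every call to `query_label` counts against the attacker's query budget, which is precisely the cost the paper aims to reduce; a practical attack would also keep track of the best candidate seen across all rounds rather than only the final iterate.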

Related resources

Efficient Label Contamination Attacks Against Black-Box Learning Models

Label contamination attack (LCA) is an important type of data poisoning attack where an attacker manipulates the labels of training data to make the learned model beneficial to him. Existing work on LCA assumes that the attacker has full knowledge of the victim learning model, whereas the victim model is usually a black-box to the attacker. In this paper, we develop a Projected Gradient Ascent ...

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

Many machine learning algorithms are vulnerable to almost imperceptible perturbations of their inputs. So far it was unclear how much risk adversarial perturbations carry for the safety of real-world machine learning applications because most methods used to generate such perturbations rely either on detailed model information (gradient-based attacks) or on confidence scores such as class proba...

Black-Box Attacks against RNN based Malware Detection Algorithms

Recent research has shown that machine learning based malware detection algorithms are very vulnerable to adversarial examples. These works mainly focused on detection algorithms that use features with fixed dimension, while some researchers have begun to use recurrent neural networks (RNN) to detect malware based on sequential API features. This paper proposes a novel...

Impossibility of Black-Box Simulation Against Leakage Attacks

In this work, we show how to use the positive results on succinct argument systems to prove impossibility results on leakage-resilient black-box zero knowledge. This recently proposed notion of zero knowledge deals with an adversary that can make leakage queries on the state of the prover. Our result holds for black-box simulation only and we also give some insights on the non-black-box case. A...

Journal

Journal title: Computers & Security

Year: 2022

ISSN: 0167-4048, 1872-6208

DOI: https://doi.org/10.1016/j.cose.2022.102694