Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs
Abstract
Adversarial attacks on graphs have attracted considerable research interest. Existing works assume the attacker is either (partly) aware of the victim model, or able to send queries to it. These assumptions are, however, unrealistic. To bridge the gap between theoretical graph attacks and real-world scenarios, in this work, we propose a novel and more realistic setting: strict black-box attack, in which the attacker has no knowledge about the victim model at all and is not allowed to send any queries. To design such an attack strategy, we first propose a generic graph filter to unify different families of graph-based models. The strength of an attack can then be quantified by the change in the graph filter before and after the attack. By maximizing this change, we are able to find an effective attack regardless of the underlying model. To solve this optimization problem, we also propose a relaxation technique and approximation theories to reduce the difficulty as well as the computational expense. Experiments demonstrate that, even with no exposure to the model, the Macro-F1 drops 6.4% in node classification and 29.5% in graph classification, which is a significant result compared to existing works.
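The core idea of the abstract — score a structural perturbation by how much it changes a graph filter, then greedily maximize that change without ever touching the victim model — can be illustrated with a minimal sketch. This is not the paper's actual method: the normalized-adjacency filter, the Frobenius-norm change measure, and the brute-force greedy search below are all assumptions standing in for the paper's generic filter and its relaxation/approximation machinery.

```python
import numpy as np

def graph_filter(adj):
    """Symmetrically normalized adjacency D^{-1/2} (A + I) D^{-1/2},
    one common spectral graph filter (an assumption here; the paper's
    generic filter unifies several families of graph-based models)."""
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return d_inv_sqrt @ a @ d_inv_sqrt

def filter_change(adj, u, v):
    """Frobenius-norm change of the filter when edge (u, v) is flipped."""
    flipped = adj.copy()
    flipped[u, v] = flipped[v, u] = 1 - flipped[u, v]
    return np.linalg.norm(graph_filter(flipped) - graph_filter(adj))

def strict_blackbox_attack(adj, budget):
    """Greedy model-free attack: repeatedly flip the single edge whose
    flip maximizes the filter change. Brute force over all node pairs,
    so only suitable for small graphs."""
    adj = adj.copy()
    n = adj.shape[0]
    flips = []
    for _ in range(budget):
        best, best_gain = None, -1.0
        for u in range(n):
            for v in range(u + 1, n):
                gain = filter_change(adj, u, v)
                if gain > best_gain:
                    best, best_gain = (u, v), gain
        u, v = best
        adj[u, v] = adj[v, u] = 1 - adj[u, v]  # apply the chosen flip
        flips.append(best)
    return adj, flips
```

The point of the sketch is that the objective depends only on the graph itself, so no model access or queries are needed; the paper's relaxation and approximation results exist precisely to avoid the O(n²) search per flip done here.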
Similar resources
Delving into Transferable Adversarial Examples and Black-box Attacks
An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small scale datasets. In this work, we are the first to conduct an extensive study of the transferabilit...
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
Many machine learning algorithms are vulnerable to almost imperceptible perturbations of their inputs. So far it was unclear how much risk adversarial perturbations carry for the safety of real-world machine learning applications because most methods used to generate such perturbations rely either on detailed model information (gradient-based attacks) or on confidence scores such as class proba...
Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN
Machine learning has been used to detect new malware in recent years, while malware authors have strong motivation to attack such algorithms. Malware authors usually have no access to the detailed structures and parameters of the machine learning models used by malware detection systems, and therefore they can only perform black-box attacks. This paper proposes a generative adversarial network (...
Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains
While modern day web applications aim to create impact at the civilization level, they have become vulnerable to adversarial activity, where the next cyber-attack can take any shape and can originate from anywhere. The increasing scale and sophistication of attacks, has prompted the need for a data driven solution, with machine learning forming the core of many cybersecurity systems. Machine le...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2022
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v36i4.20350