Quantization-Aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks

Authors

Abstract

We study the problem of training and certifying adversarially robust quantized neural networks (QNNs). Quantization is a technique for making neural networks more efficient by running them using low-bit integer arithmetic and is therefore commonly adopted in industry. Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization, and that certification of the quantized representation is necessary to guarantee robustness. In this work, we present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs. Inspired by advances in robust learning of non-quantized networks, our training algorithm computes the gradient of an abstract representation of the actual network. Unlike existing approaches, our method can handle the discrete semantics of QNNs. Based on QA-IBP, we also develop a complete verification procedure for verifying the adversarial robustness of QNNs, which is guaranteed to terminate and produce a correct answer. Compared to existing approaches, the key advantage of our verification procedure is that it runs entirely on GPU or other accelerator devices. We demonstrate experimentally that our approach significantly outperforms existing methods and establish a new state-of-the-art for training and certifying robust QNNs.
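To illustrate the interval arithmetic that IBP-style methods build on, the following minimal NumPy sketch propagates a box [l, u] through one quantized affine layer. The quantize helper, its parameters (scale, zero_point), and the function names are illustrative assumptions for exposition, not the paper's QA-IBP implementation.

    import numpy as np

    def quantize(x, scale, zero_point, qmin=-128, qmax=127):
        """Uniform affine quantization: snap to the integer grid, then clamp."""
        return np.clip(np.round(x / scale) + zero_point, qmin, qmax)

    def ibp_quantized_affine(l, u, W, b, scale, zero_point):
        """Propagate the box [l, u] through y = quantize(W @ x + b).

        Standard IBP splits W by sign to get sound pre-activation bounds;
        rounding and clamping are monotone, so quantizing each bound
        separately keeps the output interval sound.
        """
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        y_l = W_pos @ l + W_neg @ u + b
        y_u = W_pos @ u + W_neg @ l + b
        return quantize(y_l, scale, zero_point), quantize(y_u, scale, zero_point)

    # Toy usage: bound the layer's output over an eps-ball around an input.
    x = np.array([0.0, 0.3])
    eps = 0.1
    W = np.array([[1.0, -2.0], [0.5, 3.0]])
    b = np.zeros(2)
    print(ibp_quantized_affine(x - eps, x + eps, W, b, scale=0.05, zero_point=0))

Because every step is monotone in each bound, the returned pair is a sound enclosure of all outputs the quantized layer can produce on that input box, which is the property a certification procedure needs.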


Similar articles

Towards a Deeper Understanding of Training Quantized Neural Networks

Training neural networks with coarsely quantized weights is a key step towards learning on embedded platforms that have limited computing resources, memory capacity, and power consumption. Numerous recent publications have studied methods for training quantized networks, but these studies have been purely experimental. In this work, we investigate the theory of training quantized neural network...


Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations

We introduce a method to train Quantized Neural Networks (QNNs), neural networks with extremely low-precision (e.g., 1-bit) weights and activations at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operati...
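As a rough sketch of how such training typically works, the toy NumPy example below binarizes latent real-valued weights in the forward pass and applies a straight-through-estimator gradient to update them. The function names and the clipping rule are illustrative assumptions, not this paper's exact algorithm.

    import numpy as np

    def binarize(w):
        # Forward pass uses 1-bit weights: sign(w) in {-1, +1}.
        return np.where(w >= 0.0, 1.0, -1.0)

    def ste_grad(w, grad_out, clip=1.0):
        # Straight-through estimator: treat d binarize / d w as 1
        # inside the clip range and 0 outside it.
        return grad_out * (np.abs(w) <= clip)

    rng = np.random.default_rng(0)
    w = 0.1 * rng.normal(size=4)         # latent real-valued weights
    x = rng.normal(size=4)
    y = binarize(w) @ x                  # forward pass with binary weights
    grad_y = 1.0                         # stand-in for dLoss/dy
    w -= 0.01 * ste_grad(w, grad_y * x)  # SGD step on the latent weights

At deployment only the binarized weights are kept, which is where the memory and bit-wise-arithmetic savings come from.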


Training Neural Networks for Robust Control of Nonlinear MIMO Systems

A training strategy for computational neural networks is introduced that paves the way for incorporation of neural networks in robust control design for nonlinear multiple input, multiple output systems. The proposed training strategy enables utilization of statistical properties of the least-squares estimate. A control strategy that has a structural similarity to an adaptive control structure ...


Unifying Bilateral Filtering and Adversarial Training for Robust Neural Networks

Recent analysis of deep neural networks has revealed their vulnerability to carefully structured adversarial examples. Many effective algorithms exist to craft these adversarial examples, but performant defenses seem to be far away. In this work, we attempt to combine denoising and robust optimization methods into a unified defense which we found to not only work extremely well, but also makes ...


Streamlined Deployment for Quantized Neural Networks

Running Deep Neural Network (DNN) models on devices with limited computational capability is a challenge due to large compute and memory requirements. Quantized Neural Networks (QNNs) have emerged as a potential solution to this problem, promising to offer most of the DNN accuracy benefits with much lower computational cost. However, harvesting these benefits on existing mobile CPUs is a challe...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i12.26747