Search results for: robustness

Number of results: 70303

Journal: Proceedings of the ... AAAI Conference on Artificial Intelligence 2022

Adversarial training (AT) is currently one of the most successful methods to obtain the adversarial robustness of deep neural networks. However, the phenomenon of robust overfitting, i.e., the robust accuracy starts to decrease significantly during AT, has been problematic, not only making practitioners consider a bag of tricks for successful training, e.g., early stopping, but also incurring a significant generalization gap in robustness. In thi...
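The snippet below is a minimal sketch, not the method of the paper above: PGD-based adversarial training with early stopping on robust validation accuracy, the kind of trick the abstract mentions for mitigating robust overfitting. The toy data, model, hyperparameters, and helper names (pgd_attack, robust_accuracy) are illustrative assumptions.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=0.3, alpha=0.05, steps=10):
    """Projected gradient descent inside an L-infinity ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back into the ball
    return x_adv

def robust_accuracy(model, x, y):
    preds = model(pgd_attack(model, x, y)).argmax(dim=1)
    return (preds == y).float().mean().item()

# Toy two-class data and model, purely for illustration.
torch.manual_seed(0)
x_train, y_train = torch.randn(256, 20), torch.randint(0, 2, (256,))
x_val, y_val = torch.randn(128, 20), torch.randint(0, 2, (128,))
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

best_robust, best_state, patience, bad_epochs = -1.0, None, 5, 0
for epoch in range(100):
    # Inner maximization: craft adversarial examples for the current model.
    x_adv = pgd_attack(model, x_train, y_train)
    # Outer minimization: train on the adversarial examples.
    loss = nn.functional.cross_entropy(model(x_adv), y_train)
    opt.zero_grad()
    loss.backward()
    opt.step()

    # Early stopping on robust validation accuracy to counter robust overfitting.
    val_robust = robust_accuracy(model, x_val, y_val)
    if val_robust > best_robust:
        best_robust, bad_epochs = val_robust, 0
        best_state = {k: v.clone() for k, v in model.state_dict().items()}
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break

model.load_state_dict(best_state)
print(f"best robust validation accuracy: {best_robust:.2f}")
```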

Journal: Applied Network Science 2023

The widely used characterization of scale-free networks as "robust-yet-fragile" originates primarily from experiments on instances generated by preferential attachment. According to this characterization, scale-free networks are more robust against random failures but more fragile against targeted attacks when compared to random networks of the same size. Here, we consider a more appropriate baseline, requiring that the compared networks match not only the size but also inhe...
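As a rough illustration of the "robust-yet-fragile" experiment referred to above, the sketch below generates a preferential-attachment (Barabási–Albert) graph with networkx and compares how its largest connected component shrinks under random node failures versus highest-degree-first attacks. The graph size and removal fractions are arbitrary choices, not the paper's setup.

```python
import random
import networkx as nx

def giant_fraction(G):
    """Fraction of the remaining nodes that lie in the largest connected component."""
    if G.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

def attack(G, fraction, targeted):
    """Remove a fraction of nodes, either uniformly at random or by highest degree."""
    H = G.copy()
    k = int(fraction * H.number_of_nodes())
    if targeted:
        victims = [n for n, _ in sorted(H.degree, key=lambda kv: kv[1], reverse=True)[:k]]
    else:
        victims = random.sample(list(H.nodes), k)
    H.remove_nodes_from(victims)
    return giant_fraction(H)

random.seed(0)
G = nx.barabasi_albert_graph(n=2000, m=2, seed=0)  # preferential-attachment instance
for f in (0.05, 0.1, 0.2, 0.4):
    print(f"remove {f:.0%}: random -> {attack(G, f, False):.2f}, "
          f"targeted -> {attack(G, f, True):.2f}")
```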

Journal: Proceedings of the ... AAAI Conference on Artificial Intelligence 2021

Despite the remarkable empirical performance of deep learning models, their vulnerability to adversarial examples has been revealed in many studies. They are prone to make a susceptible prediction for an input with an imperceptible perturbation. Although recent works have remarkably improved the model's robustness under an adversarial training strategy, an evident gap between the natural accuracy and the robust accuracy inevitably exists. In order m...
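A minimal sketch, assuming nothing about the paper above, of the natural-versus-robust accuracy gap it refers to: a small model is trained normally on toy data, then evaluated both on clean inputs and on one-step FGSM perturbations of them.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x, y = torch.randn(512, 20), torch.randint(0, 2, (512,))
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

def fgsm(model, x, y, eps):
    """One-step attack: move x by eps in the sign of the loss gradient."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

# Plain (non-adversarial) training on the toy data.
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    nn.functional.cross_entropy(model(x), y).backward()
    opt.step()

natural = accuracy(model, x, y)
robust = accuracy(model, fgsm(model, x, y, eps=0.3), y)
print(f"natural accuracy: {natural:.2f}  robust accuracy under FGSM: {robust:.2f}")
```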

Journal: Lecture Notes in Computer Science 2021

Several important models of machine learning algorithms have been successfully generalized to the quantum world, with potential speedup for training classical classifiers and applications to data analytics in physics that can be implemented on near-future quantum computers. However, noise is a major obstacle to the practical implementation of quantum machine learning. In this work, we define a formal framework for robustness verificatio...
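To make the noise-robustness question concrete, here is a toy numpy simulation, not the paper's formal framework: a single-qubit "classifier" that labels a state by its computational-basis measurement probability, checked under a bit-flip channel of increasing strength. The state, channel, and decision rule are illustrative assumptions.

```python
import numpy as np

P0 = np.array([[1, 0], [0, 0]], dtype=complex)   # projector onto |0>
X = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli-X (bit flip)

def density(theta):
    """Pure state cos(theta/2)|0> + sin(theta/2)|1> as a density matrix."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
    return np.outer(psi, psi.conj())

def bit_flip(rho, p):
    """Bit-flip channel: with probability p the qubit is flipped by X."""
    return (1 - p) * rho + p * (X @ rho @ X)

def label(rho):
    """Predict class 0 if measuring |0> is at least as likely as |1>, else 1."""
    return 0 if np.real(np.trace(P0 @ rho)) >= 0.5 else 1

theta = 0.4 * np.pi                  # a state not far from the decision boundary
clean = label(density(theta))
for p in (0.1, 0.3, 0.6, 0.9):
    noisy = label(bit_flip(density(theta), p))
    print(f"noise p={p:.1f}: clean label {clean}, noisy label {noisy}, "
          f"prediction preserved: {clean == noisy}")
```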

Journal: Proceedings of the ... AAAI Conference on Artificial Intelligence 2021

Recent work has exposed the vulnerability of computer vision models to vector field attacks. Due to the widespread usage of such models in safety-critical applications, it is crucial to quantify their robustness against spatial transformations. However, existing work only provides an empirical quantification of robustness to deformations via adversarial attacks, which lacks provable guarantees. In this work, we propose a novel convex relaxat...
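The sketch below illustrates the kind of vector-field (spatial) perturbation discussed above, without any provable guarantee: an image is warped by a small, smooth displacement field and the change in a fixed linear classifier's logits is measured. The random classifier and the displacement construction are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import map_coordinates, gaussian_filter

rng = np.random.default_rng(0)
image = rng.random((28, 28))                  # stand-in for an input image

def warp(image, flow, order=1):
    """Resample `image` along identity + flow, where flow has shape (2, H, W)."""
    h, w = image.shape
    grid = np.mgrid[0:h, 0:w].astype(float)   # identity sampling grid
    return map_coordinates(image, grid + flow, order=order, mode="nearest")

# Smooth, small-magnitude vector field: random noise blurred and rescaled.
raw = rng.standard_normal((2, 28, 28))
flow = gaussian_filter(raw, sigma=(0, 3, 3))
flow = 1.5 * flow / np.abs(flow).max()        # cap displacement at 1.5 pixels

w_lin = rng.standard_normal((10, 28 * 28))    # a fixed linear "classifier"
logits_clean = w_lin @ image.ravel()
logits_warped = w_lin @ warp(image, flow).ravel()
print("predicted class (clean): ", logits_clean.argmax())
print("predicted class (warped):", logits_warped.argmax())
print("max logit change:", np.abs(logits_clean - logits_warped).max())
```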

Journal: Stochastic Processes and their Applications 1999

Chart: number of search results per year
