Cover Reproducible Steganography via Deep Generative Models

Authors

Abstract

Whereas cryptography easily arouses suspicion by encrypting a secret message into a conspicuous form, steganography is advantageous in that it conceals the secret in an innocent-looking cover signal. Minimal-distortion steganography, one of the mainstream frameworks, embeds messages while minimizing the distortion caused by modifying cover elements. Because the original cover signal is unavailable to the receiver, embedding is realized by finding the coset leader of a syndrome function of steganographic codes migrated from channel coding, which is complex and has limited performance. Fortunately, deep generative models and the robust semantics of generated data make it possible for the receiver to perfectly reproduce the cover signal from the stego signal. With this advantage, we propose cover-reproducible steganography, where source coding, e.g., arithmetic coding, serves as the steganographic code. Specifically, the decoding process of source coding is used for message embedding, and its encoding process is regarded as message extraction. Taking text-to-speech and text-to-image synthesis tasks as two examples, we illustrate the feasibility of cover-reproducible steganography. Steganalysis experiments and theoretical analysis are conducted to demonstrate that the proposed methods outperform existing methods in most cases.
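To make the role reversal concrete, the following is a minimal toy sketch (not the paper's implementation) of arithmetic-coding-based steganography over a small fixed token distribution. In the paper's setting the distribution would come from a deep generative model (e.g., a text-to-speech synthesizer); here a hypothetical three-token vocabulary with hand-picked probabilities stands in for it, and exact `Fraction` arithmetic sidesteps precision issues. The *decoder* turns message bits into tokens (embedding); the *encoder* replays the interval narrowing to recover the bits (extraction).

```python
from fractions import Fraction

# Toy stand-in for a generative model's next-token distribution.
# (Assumed fixed here; a real system would query the model at each step.)
TOKENS = ["a", "b", "c"]
PROBS = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]
CUM = [Fraction(0)]
for p in PROBS:
    CUM.append(CUM[-1] + p)  # cumulative interval boundaries

def embed(bits):
    """Arithmetic *decoding* as embedding: map message bits to tokens."""
    n = len(bits)
    m = int(bits, 2)
    # Target the midpoint of the message's dyadic cell [m/2^n, (m+1)/2^n)
    # so the narrowing interval eventually fits entirely inside the cell.
    x = Fraction(2 * m + 1, 2 ** (n + 1))
    low, high = Fraction(0), Fraction(1)
    out = []
    # Emit tokens until the interval lies within the message's cell.
    while not (Fraction(m, 2 ** n) <= low and high <= Fraction(m + 1, 2 ** n)):
        width = high - low
        for t in range(len(TOKENS)):
            if low + CUM[t] * width <= x < low + CUM[t + 1] * width:
                out.append(TOKENS[t])
                low, high = low + CUM[t] * width, low + CUM[t + 1] * width
                break
    return out

def extract(tokens, n):
    """Arithmetic *encoding* as extraction: replay the interval narrowing."""
    low, high = Fraction(0), Fraction(1)
    for tok in tokens:
        t = TOKENS.index(tok)
        width = high - low
        low, high = low + CUM[t] * width, low + CUM[t + 1] * width
    m = int(low * 2 ** n)  # floor; the interval sits inside one dyadic cell
    return format(m, f"0{n}b")

print(embed("101"))                     # e.g. ['b', 'c']
print(extract(embed("101"), 3))         # '101'
```

Tokens drawn this way follow the model's distribution, which is the intuition behind using source coding as the steganographic code: no distortion-minimizing syndrome coding is needed because the receiver can reproduce the same intervals.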


Similar resources

Learning Deep Generative Models

Building intelligent systems that are capable of extracting high-level representations from high-dimensional sensory data lies at the core of solving many artificial intelligence–related tasks, including object recognition, speech perception, and language understanding. Theoretical and biological arguments strongly suggest that building such systems requires models with deep architectures that ...


Auxiliary Deep Generative Models

Deep generative models parameterized by neural networks have recently achieved state-of-the-art performance in unsupervised and semi-supervised learning. We extend deep generative models with auxiliary variables which improves the variational approximation. The auxiliary variables leave the generative model unchanged but make the variational distribution more expressive. Inspired by the structure...


Learning Deep Generative Models

Building intelligent systems that are capable of extracting high-level representations from high-dimensional sensory data lies at the core of solving many AI related tasks, including object recognition, speech perception, and language understanding. Theoretical and biological arguments strongly suggest that building such systems requires models with deep architec...


Zero-Shot Learning via Class-Conditioned Deep Generative Models

We present a deep generative model for Zero-Shot Learning (ZSL). Unlike most existing methods for this problem, that represent each class as a point (via a semantic embedding), we represent each seen/unseen class using a class-specific latent-space distribution, conditioned on class attributes. We use these latent-space distributions as a prior for a supervised variational autoencoder (VAE), whi...


On Unifying Deep Generative Models

Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as powerful frameworks for deep generative model learning, have largely been considered as two distinct paradigms and received extensive independent studies respectively. This paper aims to establish formal connections between GANs and VAEs through...



Journal

Journal: IEEE Transactions on Dependable and Secure Computing

Year: 2022

ISSN: 1941-0018, 1545-5971, 2160-9209

DOI: https://doi.org/10.1109/tdsc.2022.3217569