VidaGAN: Adaptive GAN for image steganography.

Abstract:
A recent approach to image steganography is to use deep learning: convolutional neural networks can extract complex features and use them as patterns for combining hidden messages with cover images, and generative adversarial networks make it possible to generate realistic, high-quality stego images without noticeable artifacts. Previous methods suffered from challenges such as overly simple architectures, low network accuracy, an imbalance between capacity and transparency, vanishing gradients, and low capacity. This study introduces a deep-learning steganography framework named VarIable aDAptive GAN (VidaGAN). The proposed network consists of three components, an encoder, a decoder, and a critic, and contributes a novel architecture and several innovations that address the unresolved challenges above. The method embeds any type of binary data into images using generative adversarial networks while enhancing the visual quality of the generated stego images. VidaGAN achieves state-of-the-art performance, reaching a hiding capacity of 3.9 bits per pixel on the DIV2K dataset. Furthermore, evaluation with the StegExpose steganalysis tool yields an AUC of 0.6, indicating suitable transparency.
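The abstract names the three cooperating networks but gives no implementation detail, so the following is a minimal PyTorch sketch of such an encoder/decoder/critic layout under stated assumptions: the channel counts, the residual-style embedding, the payload depth D, and the loss weighting are all illustrative choices, not VidaGAN's actual architecture.

```python
# Minimal sketch of an encoder/decoder/critic steganography GAN.
# All layer choices, channel counts, and the per-pixel payload depth D
# are illustrative assumptions, not the paper's exact VidaGAN design.
import torch
import torch.nn as nn

D = 4  # assumed bits hidden per pixel; the paper reports up to 3.9 bpp

class Encoder(nn.Module):
    """Fuses a cover image (3 channels) with a D-channel binary payload."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + D, 64, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 64, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, cover, payload):
        # payload: tensor of shape (B, D, H, W) with values in {0, 1};
        # the stego image is the cover plus a small learned perturbation
        # (residual embedding is an assumption of this sketch).
        return cover + 0.1 * self.net(torch.cat([cover, payload], dim=1))

class Decoder(nn.Module):
    """Recovers the D-channel payload from a stego image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, D, 3, padding=1),  # one logit per hidden bit
        )

    def forward(self, stego):
        return self.net(stego)

class Critic(nn.Module):
    """Scores how natural an image looks (higher = more cover-like)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, image):
        return self.net(image)

# One illustrative training step: the decoder's bit-recovery loss pulls
# capacity up, while the critic's adversarial loss pulls the stego image
# toward the cover distribution -- the capacity/transparency trade-off
# the abstract describes.
encoder, decoder, critic = Encoder(), Decoder(), Critic()
cover = torch.rand(1, 3, 256, 256)
payload = torch.randint(0, 2, (1, D, 256, 256)).float()

stego = encoder(cover, payload)
bit_loss = nn.functional.binary_cross_entropy_with_logits(decoder(stego), payload)
adv_loss = -critic(stego).mean()
loss = bit_loss + 0.1 * adv_loss  # the weighting factor is an assumption
```

For scale, the reported 3.9 bits per pixel means a 256 × 256 cover image can carry roughly 256 × 256 × 3.9 ≈ 255,000 bits, or about 31 KB of hidden data.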