
PixelSteganalysis: Pixel-wise Hidden Information Removal with Low Visual Degradation

Dahuin Jung1, Ho Bae2, Hyun-Soo Choi1, Sungroh Yoon1,2,∗


1 Department of Electrical and Computer Engineering, Seoul National University, Seoul, Korea.
2 Interdisciplinary Program in Bioinformatics, Seoul National University, Seoul, Korea.

sryoon@snu.ac.kr

∗ Corresponding author

Abstract

It is difficult to detect and remove secret images that are hidden in natural images using deep-learning algorithms. Our technique is the first work to effectively disable covert communications and transactions that use deep-learning steganography. We address the problem by exploiting sophisticated pixel distributions and edge areas of images using a deep neural network. Based on the given information, we adaptively remove secret information at the pixel level. We also introduce a new quantitative metric called the destruction rate, since the decoding method of deep-learning steganography is approximate (lossy), which is different from conventional steganography. We evaluate our technique using three public benchmarks in comparison with conventional steganalysis methods and show that the decoded rate of the hidden image is reduced by 10 ∼ 20%.

Figure 1: How steganography works and how steganalysis can disturb it. The destroyed secret image shows that our method, PixelSteganalysis, disrupts a covert transmission between the sender and the receiver with an imperceptible difference on the stego image. (Diagram: the sender, Alice, encodes a cover image and a secret image into a stego image; the detector, Eve, applies PixelSteganalysis to produce a purified stego image; the receiver, Bob, decodes the original stego image into the decoded secret image but decodes the purified stego image into a destroyed secret image.)

1 Introduction

Steganography is the science of unnoticeably concealing a secret message within a plain cover image, to covertly send messages to an intended recipient [Johnson and Jajodia, 1998a]. When a secret message is hidden in a cover image, the output is called a stego image. Steganalysis is the detection or removal of a secret message in a stego image [Johnson and Jajodia, 1998b].

With the upsurge of big data on the internet, the threat of unauthorized and unlimited information transaction and display has arisen. Steganography has been used by international terrorist organizations, companies, and the military for covert communication [Sharma and Gupta, 2012]. It is also being used to steal confidential company information [King, 2018].

In the process of covertly embedding a secret message into a cover image, the original cover image should be only marginally altered to produce the stego image [Johnson and Jajodia, 1998a]. To avoid statistical and visual detection, the payload of the secret message is kept small in conventional steganography. Many conventional steganographic methods embed secret messages within the least significant bits (LSBs) of the cover image [Pevnỳ et al., 2010; Holub and Fridrich, 2012]. Hence, secret messages hidden by conventional steganography can be removed by relatively simple steganalysis methods such as JPEG compression [Fridrich et al., 2002] and noise reduction [Gou et al., 2007]. For conventional steganography, both text and images are applicable as the secret message, because the decoding method in conventional steganography is lossless and can therefore retain the exact meaning of the text.

Deep learning has shown great performance in various fields, and recently the field of deep-learning steganography has experienced rapid development [Yang et al., 2018; Wu et al., 2018; Husien and Badi, 2015]. Currently proposed deep-learning steganography disperses representations of secret images across all available bits [Baluja, 2017], not restricted to LSBs.
The payload of a secret message encoded by a deep-learning method is comparatively large. Since the decoding methods are approximate (lossy) in deep-learning steganography, secret messages are limited to the image form.

Depending on privilege levels, steganalysis can be categorized as passive or active [Amritha et al., 2016]. Passive steganalysis algorithms aim to determine whether or not an image contains a secret message. Most passive algorithms look for features associated with a particular steganography technique (non-blind). On the other hand, active steganalysis algorithms, also called stego firewalls [Voloshynovskiy et al., 2002], have the privilege to modify images so that the secret message can no longer be recovered [Amritha et al., 2016]. However, the images coming out of active steganalysis need to appear as nearly unchanged as possible, because not all images are stego images. That is, a good active steganalysis technique should aim to remove a secret message as completely as possible with a low degree of visual degradation of the stego image. The full scheme of steganography and active steganalysis is depicted in Fig. 1.

The deep-learning steganographic methods are not easily detectable by conventional passive steganalysis, since the deep-learning methods attempt to maintain the pixel distribution of an image to the greatest extent [Hayes and Danezis, 2017; Dong et al., 2018]. Furthermore, the robustness of stego images generated by the deep-learning methods against conventional active steganalysis is demonstrated in Yu [2018].

We propose a new way of removing the secret image as much as possible with minimal change to the appearance of the image. The contributions of our work are as follows:

• To the best of our knowledge, this is the first method that is effective in removing a secret image encoded by deep-learning based steganography. Experimentally, we show that our approach outperforms conventional active steganalysis on stego images produced by deep-learning as well as by conventional steganography.

• We propose a novel structure consisting of an analyzer and an eraser. The analyzer is trained in an end-to-end manner; from it, we extract the distribution of each pixel and the edge areas of the image via a single neural network. The eraser adaptively removes suspicious traces pixel by pixel using the results of the analyzer.

• The decoding method of conventional steganography is mostly exact (lossless), whereas the decoding method of recent deep-learning steganography is approximate (lossy). We propose a new evaluation metric, called the destruction rate, which is required to accurately measure the performance of steganalysis against deep-learning steganography.

• We conducted experiments on various combinations of steganography and steganalysis. Specifically, we also evaluated the performance of our method on stego images produced by conventional steganography with a high payload, not just on deep-learning steganography. Our method showed better results in this case than widely used active steganalysis.

We supply more resultant samples and graphs for the various datasets at an anonymous link: https://anonymous-steganalysis.github.io/. Please check the link for additional figures, examples, and information about the settings.

Figure 2: Three examples from Deep Steganography, ISGAN, and LSB insertion, respectively, showing the cover, stego, residual, secret, and decoded secret images. To clearly show the residual pattern of each steganographic method, the residual images are amplified by a factor of 20 (x20).

2 Background

2.1 Conventional Steganography

The least significant bit (LSB) [Johnson and Jajodia, 1998a; Mielikainen, 2006] is the most conventional steganographic algorithm. Recent studies have also proposed other approaches that maintain image statistics or design special distortion functions [Holub et al., 2014; Lerch-Hostalot and Megías, 2016]. Pevnỳ et al. [2010] proposed a distortion function domain that takes into account the cost of changing every pixel. Holub and Fridrich [2012] encoded a secret message in accordance with the statistically analyzed textural relevance of image areas. We hide a grayscale secret image in a color cover image using the most basic LSB insertion method [Johnson and Jajodia, 1998a] in order to test whether our method is also effective against conventional steganography. We experimentally show that our method outperforms the conventional steganalysis methods, considering both the visual degradation and the removal of the secret image.
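For concreteness, the following is a minimal NumPy sketch of the kind of plain LSB insertion used for this test, hiding an 8-bit grayscale secret in the low bits of an RGB cover. The 3+3+2 bit allocation across the R, G, and B channels and the function names are illustrative assumptions, not necessarily the exact allocation used in our experiments.

import numpy as np

def lsb_embed(cover_rgb, secret_gray):
    # cover_rgb: (H, W, 3) uint8 cover image; secret_gray: (H, W) uint8 secret image
    s = secret_gray
    stego = cover_rgb.copy()
    stego[..., 0] = (stego[..., 0] & 0xF8) | (s >> 5)           # top 3 bits of the secret
    stego[..., 1] = (stego[..., 1] & 0xF8) | ((s >> 2) & 0x07)  # next 3 bits
    stego[..., 2] = (stego[..., 2] & 0xFC) | (s & 0x03)         # last 2 bits
    return stego

def lsb_decode(stego_rgb):
    # Exact (lossless) recovery of the grayscale secret from the stego image
    r = stego_rgb[..., 0].astype(np.uint16) & 0x07
    g = stego_rgb[..., 1].astype(np.uint16) & 0x07
    b = stego_rgb[..., 2].astype(np.uint16) & 0x03
    return ((r << 5) | (g << 2) | b).astype(np.uint8)

Because this decoding is exact, the decoded rate (Sec. 4) is a sufficient measure of removal for this family of methods.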
2.2 Deep-Learning Steganography

Unlike conventional steganography methods, deep-learning based methods can hide more messages. Recently, many deep-learning based steganography methods have been suggested. These are based on either convolutional neural networks (CNNs) [Yang et al., 2018; Wu et al., 2018; Husien and Badi, 2015; Pibre et al., 2016] or generative adversarial networks (GANs) [Volkhonskiy et al., 2017; Hayes and Danezis, 2017; Shi et al., 2017]. Among these, we consider two state-of-the-art methods, one for each architecture: Deep Steganography [Baluja, 2017] and invisible steganography via GAN (ISGAN) [Dong et al., 2018].

Deep Steganography is one of the state-of-the-art deep-learning based steganography algorithms. It hides a color image in a color cover image. The architecture consists of a prep-network, a hiding-network (encoder), and a reveal-network (decoder). As shown in the first row of Fig. 2, the stego image and the decoded secret image look very similar to the cover image and the secret image, respectively. However, the background color of the stego image is slightly reddish, and we can observe that a diagonal grid pattern is distributed all over the background of the residual images.

ISGAN consists of an encoder, a decoder, and a discriminator. ISGAN considers only gray images as possible secret images. It differs from other methods in that it uses only the Y channel among the YCrCb channels of the cover image to hide the gray secret image. ISGAN uses structural similarity index (SSIM) values instead of simple pixel differences for the reconstruction error in order to produce more natural-looking stego images, shown in the second row of Fig. 2. However, we can notice that the overall residual values are large, particularly in the edge areas, and that the illumination of the decoded secret image deviates noticeably from the original secret image.

2.3 Conventional Active Steganalysis

There are various approaches to active steganalysis against the conventional steganography methods. One of the most basic ways is to take N LSB planes of the stego image and flip the bits [Ettinger, 1998]. Another commonly used strategy for removing the secret message is to overwrite random bits, suggested by Gaussian noise or other sources, wherever the message may be residing [Fridrich et al., 2002]. There is also a methodology that uses denoising, assuming that the secret image is noise added to the cover image. For example, a median filter [Gou et al., 2007] is a handy tool for removing noise in an image, especially sporadic noise of large variance. Applying deconvolution for restoration after the denoising filter is also a main approach; the added secret message is expected to be erased during the processes of denoising and restoring. Wiener restoration is a representative method of conventional active steganalysis [Amritha et al., 2016].

According to Lafferty [2008], randomization maintains good image quality and comparably high removal ability for spatial-domain steganography algorithms, while the median filter and Wiener restoration operate quite well for frequency-domain steganography algorithms. Thus, we chose these three active steganalysis methods for comparison with our work.
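As a rough reference for these baselines, the sketch below implements the randomization and median-filtering attacks with NumPy and SciPy (Wiener restoration is omitted). The number of randomized bit planes, the noise strength, and the filter size are illustrative placeholders rather than the settings used in our experiments.

import numpy as np
from scipy.ndimage import median_filter

def randomize_lsb_planes(stego, n_planes=1, rng=None):
    # Overwrite the n least significant bit planes with random bits
    rng = np.random.default_rng() if rng is None else rng
    keep_mask = np.uint8((0xFF << n_planes) & 0xFF)
    random_bits = rng.integers(0, 2 ** n_planes, size=stego.shape, dtype=np.uint8)
    return (stego & keep_mask) | random_bits

def gaussian_noise_attack(stego, sigma=2.0, rng=None):
    # Perturb every pixel with small Gaussian noise where the message may reside
    rng = np.random.default_rng() if rng is None else rng
    noisy = stego.astype(np.float32) + rng.normal(0.0, sigma, size=stego.shape)
    return np.clip(np.round(noisy), 0, 255).astype(np.uint8)

def median_denoise(stego, size=3):
    # Median filtering, treating the hidden signal as sporadic noise
    return median_filter(stego, size=size)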
3 Proposed Methods

Considering the intrinsic characteristics of the deep-learning steganography methods, we propose an active steganalysis method, PixelSteganalysis. The proposed method requires neither knowledge of the utilized steganography method nor the distribution of the original cover image (blind) in order to remove a hidden image with minimum perceptual degradation, or even a perceptual improvement, of the stego image. The candidate input images are not limited to grayscale; however, for easier visualization and explanation, we assume grayscale input images in the following sections.

As described in Fig. 3, our method consists of an analyzer and an eraser. The analyzer takes the stego image as input and outputs an edge distribution and a pixel distribution of the given image. The generated distributions are then used by the eraser to remove a secret image hidden in a cover image.

A potential stego image is called x, and the image size, I × J, represents the height and width of the image. We obtain the distributions of all pixels, p_p(x), and of the edge areas, p_e(x), using the deep-learning architecture (Sec. 3.1). On the basis of the pixel distributions p_p(x) and the data-driven condition p_e(x), we visit the pixels one by one, check for the presence of stego information, and adjust the pixel values under some constraints to remove it (Sec. 3.2).

3.1 Analyzer

The best scenario from the perspective of steganalysis is that both the cover and stego images are accessible; however, this is commonly impractical. Thus, we suggest an approach for removing the secret image by adjusting the pixel values of the suspicious regions where the secret image may be hidden, using pixel-level information.

The goal of the analyzer is to obtain the distributions of all pixels and the edge areas of the image from a neural network trained on a dataset having a distribution similar to that of the original cover images. The basic structure of the analyzer comprises convolutional neural networks (CNN) with residual connections and fully connected (FC) layers. The CNN architecture of the analyzer is inspired by PixelCNN++, from which pixel-level conditional distributions can be obtained [Salimans et al., 2017].

To train the analyzer, we minimize the sum of an image loss and an edge loss:

L = λ_I L_I + λ_E L_E,   (1)

where λ_I and λ_E > 0 are hyperparameters that balance the strength of the two loss terms.

The image loss, L_I, is the negative log-likelihood of the image, obtained from the product of the conditional distributions of each pixel:

p(x) = ∏_{i=2}^{I×J} p(x_i | x_{1:(i−1)}),   L_I = −E_{x∼X}[log p(x)],   (2)

where x_i represents each pixel of a training image x.

The edge loss, L_E, is the mean-squared error between the result obtained using a conventional edge detector and that obtained using our neural network:

L_E = (1/(I·J)) Σ_{i=1}^{I} Σ_{j=1}^{J} (p_detector(x_{i,j}) − p_e(x_{i,j}))²,   (3)

where x_{i,j} represents each pixel of a training image x ∈ I × J, p_detector represents the edge distribution from the conventional edge detector, and p_e represents the predicted edge distribution. There are many approaches to detecting edge areas; in our method, we use the Prewitt operator [Prewitt, 1970], which is less sensitive to noise than other zero-based methods such as Canny [1986].
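To make the training objective concrete, the PyTorch-style sketch below combines the two loss terms, assuming images scaled to [0, 1] and a hypothetical analyzer module that returns a 256-way categorical distribution per pixel together with a predicted edge map. The categorical output is a simplification of the discretized logistic mixture we actually use (see the transformer in Fig. 3), and the λ values are placeholders.

import torch
import torch.nn.functional as F

def prewitt_edges(x):
    # Prewitt edge magnitude of x with shape (B, 1, H, W) in [0, 1], normalized per image
    kx = torch.tensor([[-1., 0., 1.], [-1., 0., 1.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)                                   # vertical Prewitt kernel
    gx = F.conv2d(x, kx.to(x), padding=1)
    gy = F.conv2d(x, ky.to(x), padding=1)
    mag = torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)
    return mag / (mag.amax(dim=(2, 3), keepdim=True) + 1e-8)  # plays the role of p_detector in Eq. 3

def analyzer_loss(analyzer, x, lambda_i=1.0, lambda_e=1.0):
    # Eq. 1: L = lambda_I * L_I + lambda_E * L_E
    logits, edge_pred = analyzer(x)               # logits: (B, 256, H, W); edge_pred: (B, 1, H, W)
    target = (x * 255).round().long().squeeze(1)  # discrete pixel values, (B, H, W)
    loss_image = F.cross_entropy(logits, target)          # per-pixel negative log-likelihood (Eq. 2)
    loss_edge = F.mse_loss(edge_pred, prewitt_edges(x))   # edge mean-squared error (Eq. 3)
    return lambda_i * loss_image + lambda_e * loss_edge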
Figure 3: Model overview. The analyzer receives the stego image x as input and produces an edge distribution p_e and a pixel distribution p_p. The analyzer consists of convolutional neural networks (CNN) with residual connections and fully connected (FC) layers. h is the activation of the last FC layer; the e part of h is reshaped into the edge distribution p_e, and the π, µ, and s parts of h are transformed into the pixel distribution p_p. The analyzer is trained to minimize the sum of an image loss and an edge loss. The eraser receives the generated p_e and p_p and removes suspicious traces pixel by pixel using the given information. As a result, our proposed method, PixelSteganalysis, produces a purified stego image x̂. The decoded secret images, D^o and D^d, obtained from the original stego image x and from the purified stego image x̂, respectively, show that our method is effective in removing the secret image while maintaining the original quality of the image x.

As described in Fig. 3, the activation of the last FC layer of the analyzer is named h and consists of e, π, µ, and s. From the parameters π, µ, and s, trained on a dataset with properties similar to the original cover images, we obtain a discretized logistic mixture likelihood for all pixels, p_p(x) ∈ I × J × K, where K is the pixel depth dimension. This procedure is represented as the transformer in Fig. 3. Since the operation of the transformer is based on Salimans et al. [2017], we refer the reader to that paper for more details.
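A sketch of this transformer step is given below: it converts the mixture parameters π, µ, and s emitted by the analyzer into a discretized logistic mixture over the 256 pixel values, following Salimans et al. [2017]. The sketch assumes a single-channel image, M mixture components per pixel, and pixel values rescaled to [-1, 1]; the exact tensor layout of our implementation may differ.

import torch

def transform_to_pixel_distribution(pi_logits, mu, log_s, num_values=256):
    # pi_logits, mu, log_s: (B, M, H, W) mixture parameters taken from the analyzer activation h
    k = torch.arange(num_values, dtype=mu.dtype, device=mu.device).view(1, 1, num_values, 1, 1)
    centers = k / (num_values - 1) * 2.0 - 1.0          # bin centers rescaled to [-1, 1]
    half_bin = 1.0 / (num_values - 1)
    mu = mu.unsqueeze(2)                                # (B, M, 1, H, W)
    inv_s = torch.exp(-log_s).unsqueeze(2)
    cdf_plus = torch.sigmoid((centers + half_bin - mu) * inv_s)
    cdf_minus = torch.sigmoid((centers - half_bin - mu) * inv_s)
    probs = cdf_plus - cdf_minus                        # probability mass of each of the 256 bins
    probs[:, :, 0] = cdf_plus[:, :, 0]                  # open the lowest bin
    probs[:, :, -1] = 1.0 - cdf_minus[:, :, -1]         # open the highest bin
    weights = torch.softmax(pi_logits, dim=1).unsqueeze(2)
    return (weights * probs).sum(dim=1)                 # p_p(x): (B, 256, H, W), summing to 1 per pixel

The eraser then only needs, for each pixel (i, j), the corresponding 256-dimensional slice of this distribution.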
The vector e of h is reshaped into I × J, and the reshaped result is named the edge distribution p_e(x). The reason for measuring the edge distribution p_e(x) is that a large amount of secret information is hidden in the edge areas of the cover image in some cases of conventional steganography and in all cases of deep-learning steganography. The edge distribution p_e(x) is used to determine the considered range of suspicious information in the eraser. In an ablation study (Sec. 5.2), we experimentally show the positive effect of the edge detection.

3.2 Eraser

With the eraser, the attempt is to move the distribution of the potential stego image, x, towards the distribution of its original cover image. We substitute the pixel values of the suspicious regions with the neighboring pixel value of the highest probability. Consequently, we aim to find a purified stego image, x̂, that maximizes p(x̂) under some conditions.

We control the considered range of pixel modification, and the range is decided by two factors: the hyperparameter ε and the predicted edge distribution p_e(x). Using the values of the edge distribution, we calculate the adaptive range of modification as shown below:

e_norm = ⌈(p_e(x_{i,j}) / p_e(x)_max) × (ε_max − ε)⌉ + ε,   (4)

where p_e(x)_max is the maximum edge value and ε is a hyperparameter that represents the least allowed degree of modification. ε_max is the largest allowed degree of modification, which is set to 2 · ε. We recommend keeping ε greater than or equal to 1, since a difference also exists in the non-edge areas.

The pixel values of the stego image do not deviate much from the corresponding pixel values of the cover image. Based on the calculated e_norm, we set up the adaptive range of the considered pixel values:

r_min = max(x_{i,j} − e_norm, 0),   r_max = min(x_{i,j} + e_norm, 255).   (5)

For every pixel, the pixel distribution p_p(x) should be re-extracted, because the pixel values of the image are sequentially modified. We use the re-extracted pixel distribution to replace each pixel with the value taking the highest probability within the allowed neighboring values:

x̂_{i,j} = argmax_{r_min ≤ k ≤ r_max} p_p(x̂_{i,j,k}).   (6)

However, re-extracting the pixel distribution for all pixels requires a long time to modify a single image. To decrease the runtime, we propose an approximation of Eq. 6. With this approximation, the pixel distribution p_p(x) is extracted only once, before the iteration, and we keep utilizing this distribution for the subsequent measurement of the highest probability value within the adaptive range. The approximation takes the form

x̂_{i,j} = argmax_{r_min ≤ k ≤ r_max} p_p(x_{i,j,k})   (7)

for every pixel. This leads to a much faster runtime but slightly decreases the quality of the results. The comparison between the original case and the approximated case is presented in Sec. 5.2.
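Putting Eqs. 4, 5, and 7 together, the following NumPy sketch implements the approximated eraser, assuming the analyzer outputs are given as arrays: p_pixel is the pixel distribution extracted once for the whole image and p_edge is the edge distribution. In the exact variant (Eq. 6), p_pixel would be re-extracted from the partially purified image before each pixel is visited.

import numpy as np

def erase(stego, p_pixel, p_edge, eps=1):
    # stego: (H, W) uint8; p_pixel: (H, W, 256) pixel distribution; p_edge: (H, W) edge distribution
    eps_max = 2 * eps                                   # largest allowed degree of modification
    e_norm = np.ceil(p_edge / (p_edge.max() + 1e-12) * (eps_max - eps)).astype(np.int64) + eps  # Eq. 4
    purified = stego.astype(np.int64).copy()
    height, width = stego.shape
    for i in range(height):
        for j in range(width):
            r_min = max(purified[i, j] - e_norm[i, j], 0)        # Eq. 5
            r_max = min(purified[i, j] + e_norm[i, j], 255)
            window = p_pixel[i, j, r_min:r_max + 1]
            purified[i, j] = r_min + int(np.argmax(window))      # Eq. 7 (approximation of Eq. 6)
    return purified.astype(np.uint8)

Because the distribution is fixed, the inner loop reduces to a slice and an argmax per pixel, which is what brings the runtime down from minutes to milliseconds per image (Sec. 5.2).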
Figure 4: Three examples of how our method and the commonly used conventional method, Gaussian noise, differ in efficiency at the same PSNR (columns: Deep Steganography, ISGAN, and LSB). In each example, the two images on the top right are the stego image modified with Gaussian noise and the secret image decoded from that stego image; the two images on the bottom right are the stego image modified with our method and its decoded secret image. The decoded rate of our method is 10 ∼ 20% lower than that of Gaussian noise. (Reported values: Deep Steganography, PSNR 30.76 / DC 0.8745 for Gaussian noise vs. PSNR 30.77 / DC 0.7490 for the proposed method; ISGAN, 24.07 / 0.9176 vs. 24.03 / 0.7412; LSB, 32.72 / 0.7216 vs. 33.20 / 0.6414.)

4 Destruction Rate: Proposed Evaluation Metric

To measure the degree of destruction of the secret image, we propose a new evaluation metric to assess the performance of active steganalysis against both conventional and deep-learning steganography. We adopt the SSIM and the peak signal-to-noise ratio (PSNR) to measure the quality of the purified image. The PSNR is a basic metric for measuring the quality of a modified image compared with the original image (the larger the PSNR, the higher the quality).

For measuring the degree of destruction, conventional steganography guarantees that the recovery of the secret message is lossless; therefore, we could use the decoded rate [Amritha et al., 2016; Wu et al., 2018]. The decoded rate is calculated by

Decoded Rate = 1 − (Σ_{i=1}^{I} Σ_{j=1}^{J} |H_{i,j} − D_{i,j}|) / (I · J),   (8)

where H and D represent the original secret image and the decoded secret image, respectively, and I and J represent the height and width, respectively.

The deep-learning methods, once proposed, relaxed this guarantee. Thus, if we only use the decoded rate to measure the performance of active steganalysis against the deep-learning steganography methods, the result depends on the performance of the trained decoding algorithm. In other words, the decoded rate between the original secret image and the unmodified decoded secret image is already only approximately 90% [Baluja, 2017; Dong et al., 2018] in all the recent deep-learning based studies, and its value differs according to how well the decoding algorithm is trained. Thus, we suggest a new evaluation metric called the destruction rate, defined by

Destruction Rate = (Σ_{i=1}^{I} Σ_{j=1}^{J} |D^o_{i,j} − D^d_{i,j}|) / (I · J),   (9)

where D^o and D^d represent the decoded secret image and the decoded secret image after a steganography removal algorithm is applied, respectively. In order to be independent of the performance of the decoding algorithm, the base image is changed from the original secret image to the unmodified decoded secret image. We believe that this metric more reliably represents the pure degree of destruction achieved by each active steganalysis method.
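For clarity, both rates can be computed as in the sketch below, assuming the secret and decoded images are normalized to [0, 1]; the PSNR helper is the standard definition used to report the quality of the purified stego images.

import numpy as np

def decoded_rate(secret, decoded):
    # Eq. 8: secret is the original secret image H, decoded is the decoded secret image D
    return 1.0 - np.mean(np.abs(secret - decoded))

def destruction_rate(decoded_original, decoded_purified):
    # Eq. 9: D^o decoded from the original stego image, D^d decoded from the purified stego image
    return np.mean(np.abs(decoded_original - decoded_purified))

def psnr(reference, modified, peak=1.0):
    # Peak signal-to-noise ratio of the modified (purified) image against the reference image
    mse = np.mean((reference - modified) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)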
5 Experiments

5.1 Performance of Removing Hidden Images

We make use of three datasets, CIFAR-10, BOSS1.0.1, and ImageNet, for the evaluation. We compared our method with three commonly used conventional steganalysis methods: Gaussian noise, denoising, and restoration. Our proposed method showed up to a 20% improvement in the decoded rate. We experimented with ε values up to 8.

We use the decoded rate (DC) for benchmarking against previous works. To demonstrate precise performance, we proposed the new evaluation metric, the destruction rate (DT), and show an example of why this metric should be considered. Moreover, the DT values are provided in detail in the supplementary S3.

The decoding method of deep-learning steganography is lossy. Thus, it is difficult to accurately evaluate the applied active steganalysis when using only the DC. For example, as shown in Fig. 5, the DC of the decoded secret image at the top is 0.91, yet we can still see the original information; the secret image is not removed at all. On the contrary, there is a case where the decoded rate is 0.86 but the secret image is hard to interpret. Therefore, the DT is more suitable as the evaluation metric, considering the recently proposed deep-learning steganography methods.
Figure 5: Top: a failure of removal (bad case, via restoration; DC: 0.912, DT: 0.048) shows an almost zero value of DT. Bottom: a success of removal (good case, via the proposed method; DC: 0.861, DT: 0.149) shows a relatively high value of DT. A lower DC and a higher DT indicate better removal.

Figure 6: Experimental results of our work and of the compared steganalysis methods on the CIFAR-10 dataset. The higher the PSNR, the better the preservation of the original cover image; the lower the decoded rate (DC), the better the scrubbing of the hidden image.

As shown in Fig. 4 and Fig. 6, our method exhibits the best performance when PSNR and DC are considered together. In the case of Gaussian noise, if ε exceeds 4, the stego image becomes perceptually blurred; therefore, it is difficult to use ε larger than 4. The denoising and restoration methods do not depend on ε. In the case of denoising, the DC is much lower than that of our method, but the PSNR is very low compared with the other steganalysis methods. In the case of restoration, the PSNR is similar to that of our method (even better at ε = 8 on LSB), but the DC does not drop very much; for example, in the case of ISGAN, considering that the DC of ISGAN is originally ∼0.9, the DC did not drop at all. Visual samples are presented in the supplementary S9, S11, and S12. In addition, we experimented with an LSB insertion approach; as shown in Fig. 6, our method outperforms the others not only on the deep-learning steganography methods but also on LSB. Figures of PSNR and DC for the other datasets are provided in the supplementary S5 and S6 and show a similar trend to Fig. 6.

In the case of active steganalysis, we assume the privilege of applying modifications to all scanned images. We therefore also measured the image degradation when applying our method to innocuous cover images. There was some degree of degradation, but the change was hardly visible up to ε = 4, and the PSNR of our method is higher than that of the compared conventional steganalysis methods for ε values up to 4. Details can be found in the supplementary S4.

5.2 Ablation Studies

No Edge Detection. The residual between the cover image and the stego image is much larger in the edge areas, while the residual values are comparably small in the non-edge (low-frequency) areas, as shown in Fig. 2. This is because an alteration of the edge areas is statistically and perceptually less suspicious than one of the non-edge areas. For the ablation study, we test the effectiveness of the edge detection, proceeding with the experiment on CIFAR-10. We can observe that, especially at ε = 1 and 2, the effect of the edge detection as a guide is large. A detailed analysis of the edge areas is provided in the supplementary S7 and S8, and the results are presented in detail in the supplementary S9.

No Approximation. The results in Fig. 6 are from the approximated version. The original version takes an average of 3 min to modify a single CIFAR-10 image, whereas the approximated version takes less than 10 ms per image. We compare the decoded rates of the two versions using stego images generated by Deep Steganography. The average decoded rate of the original version is 75.8% and that of the approximated version is 76.3%, both at ε = 1; the difference is as small as 0.5%.

6 Conclusion

From the perspective of passive steganalysis, we need to create a detector targeted at a certain deep-learning steganography method whenever a new method is proposed. In this way, however, it is not easy to prevent newly proposed steganography. The more deep-learning steganography methods develop, the more active steganalysis methods should improve to appropriately inhibit them beforehand. We hope that our work can be a good starting point for new developments.

The goal of active steganalysis is to completely remove the secret image and change the stego image back into the original cover image; we call this stego scrubbing [Moskowitz et al., 2007]. In practice, this is very difficult. The development of active steganalysis should aim to produce purified stego images that are more similar to the cover image than the original stego image is. As shown in Fig. 6, our method produces purified stego images closer to the cover images than the original stego images when ε is 2.
References

[Amritha et al., 2016] PP Amritha, M Sethumadhavan, and R Krishnan. On the removal of steganographic content from images. Defence Science Journal, 66(6):574–581, 2016.

[Baluja, 2017] Shumeet Baluja. Hiding images in plain sight: Deep steganography. In Advances in Neural Information Processing Systems, pages 2069–2079, 2017.

[Canny, 1986] John Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, (6):679–698, 1986.

[Dong et al., 2018] Shiqi Dong, Ru Zhang, and Jianyi Liu. Invisible steganography via generative adversarial network. arXiv preprint arXiv:1807.08571, 2018.

[Ettinger, 1998] J Mark Ettinger. Steganalysis and game equilibria. In International Workshop on Information Hiding, pages 319–328. Springer, 1998.

[Fridrich et al., 2002] Jessica Fridrich, Miroslav Goljan, and Dorin Hogea. Steganalysis of JPEG images: Breaking the F5 algorithm. In International Workshop on Information Hiding, pages 310–323. Springer, 2002.

[Gou et al., 2007] Hongmei Gou, Ashwin Swaminathan, and Min Wu. Noise features for image tampering detection and steganalysis. In 2007 IEEE International Conference on Image Processing, volume 6, pages VI–97. IEEE, 2007.

[Hayes and Danezis, 2017] Jamie Hayes and George Danezis. Generating steganographic images via adversarial training. In Advances in Neural Information Processing Systems, pages 1954–1963, 2017.

[Holub and Fridrich, 2012] Vojtech Holub and Jessica J Fridrich. Designing steganographic distortion using directional filters. In WIFS, pages 234–239, 2012.

[Holub et al., 2014] Vojtěch Holub, Jessica Fridrich, and Tomáš Denemark. Universal distortion function for steganography in an arbitrary domain. EURASIP Journal on Information Security, 2014(1):1, 2014.

[Husien and Badi, 2015] Sabah Husien and Haitham Badi. Artificial neural network for steganography. Neural Computing and Applications, 26(1):111–116, 2015.

[Johnson and Jajodia, 1998a] Neil F Johnson and Sushil Jajodia. Exploring steganography: Seeing the unseen. Computer, 31(2), 1998.

[Johnson and Jajodia, 1998b] Neil F Johnson and Sushil Jajodia. Steganalysis of images created using current steganography software. In International Workshop on Information Hiding, pages 273–289. Springer, 1998.

[King, 2018] Ariana King. GE engineer tied to China charged with theft of company secrets, Aug 2018.

[Lafferty, 2008] Patricia A Lafferty. Obfuscation and the steganographic active warden model. The Catholic University of America, 2008.

[Lerch-Hostalot and Megías, 2016] Daniel Lerch-Hostalot and David Megías. Unsupervised steganalysis based on artificial training sets. Engineering Applications of Artificial Intelligence, 50:45–59, 2016.

[Mielikainen, 2006] Jarno Mielikainen. LSB matching revisited. IEEE Signal Processing Letters, 13(5):285–287, 2006.

[Moskowitz et al., 2007] Ira S Moskowitz, Patricia A Lafferty, and Farid Ahmed. Stego scrubbing a new direction for image steganography. In Information Assurance and Security Workshop, 2007. IAW'07. IEEE SMC, pages 119–126. IEEE, 2007.

[Pevnỳ et al., 2010] Tomáš Pevnỳ, Tomáš Filler, and Patrick Bas. Using high-dimensional image models to perform highly undetectable steganography. In International Workshop on Information Hiding, pages 161–177. Springer, 2010.

[Pibre et al., 2016] Lionel Pibre, P Jerome, Dino Ienco, and Marc Chaumont. Deep learning for steganalysis is better than a rich model with an ensemble classifier, and is natively robust to the cover source-mismatch. SPIE Media Watermarking, Security, and Forensics, 2016.

[Prewitt, 1970] Judith MS Prewitt. Object enhancement and extraction. Picture Processing and Psychopictorics, 10(1):15–19, 1970.

[Salimans et al., 2017] Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017.

[Sharma and Gupta, 2012] Manoj Kumar Sharma and PC Gupta. A comparative study of steganography and watermarking. International Journal of Research in IT & Management (IJRIM), 2(2):2231–4334, 2012.

[Shi et al., 2017] Haichao Shi, Jing Dong, Wei Wang, Yinlong Qian, and Xiaoyu Zhang. SSGAN: Secure steganography based on generative adversarial networks. In Pacific Rim Conference on Multimedia, pages 534–544. Springer, 2017.

[Volkhonskiy et al., 2017] Denis Volkhonskiy, Ivan Nazarov, Boris Borisenko, and Evgeny Burnaev. Steganographic generative adversarial networks. arXiv preprint arXiv:1703.05502, 2017.

[Voloshynovskiy et al., 2002] Sviatoslav V Voloshynovskiy, Alexander Herrigel, Yuri B Rytsar, and Thierry Pun. StegoWall: Blind statistical detection of hidden data. In Security and Watermarking of Multimedia Contents IV, volume 4675, pages 57–69. International Society for Optics and Photonics, 2002.

[Wu et al., 2018] Pin Wu, Yang Yang, and Xiaoqiang Li. StegNet: Mega image steganography capacity with deep convolutional network. arXiv preprint arXiv:1806.06357, 2018.

[Yang et al., 2018] Jianhua Yang, Kai Liu, Xiangui Kang, Edward K Wong, and Yun-Qing Shi. Spatial image steganography based on generative adversarial network. arXiv preprint arXiv:1804.07939, 2018.

[Yu, 2018] Chong Yu. Integrated steganography and steganalysis with generative adversarial networks. 2018.
