
AMLAN CHAKRABORTY

0440954

amlan@cs.washington.edu

Steganography and Digital Watermarking -- Applications, Attacks and Countermeasures

Introduction

Steganography is the science of hiding information in data. It is normally done intelligently, so that it is difficult for an adversary to detect the existence of a hidden message in the otherwise innocuous data. The piece of data that carries the embedded message is visible to the world in the clear and appears harmless and normal. This is in stark contrast with cryptography, where the message is scrambled to make it extremely difficult or impossible for an adversary to reconstruct. A message in ciphertext arouses suspicion, whereas an invisible message embedded in innocuous data does not. This is the advantage of steganography.

Generally, a steganographic message will appear to be something else: a picture, an audio file, a video file or a message in clear text - the covertext. Historically, messages were written in invisible ink between the visible lines of innocuous documents, or even onto clothing. Other techniques included writing messages in Morse code in knitting yarn, or marking particular words or letters with invisible ink or pin pricks so that they spell out the secret message. During WWII the Germans used microdot technology, in which an image the size of a period had the clarity of typewritten pages; here the period was the covertext and the image was the message. Although clever and innocuous hiding techniques are used to conceal the stegotext, the hiding algorithm itself is typically kept secret, known only to the communicating parties and not to the world. This is in slight contrast to classical cryptography, where the algorithm is well known and only the key(s) are secret. Though data is not encrypted in steganography, the authenticity of a message is normally established by using a MAC or a signature.

Steganography can be used to code messages in almost any transport layer – an image (GIF/BMP/JPEG), an MP3 file, a communications protocol such as UDP, and so on. Steganographic information can also be added to richer multimedia content such as DVDs. There are normally two motivations – to send a secret message, or to establish the authenticity of a piece of information, usually a multimedia file. The latter is a major application of modern steganography and is known as digital watermarking and fingerprinting. Watermarks establish ownership of an artifact, while fingerprints or labels help to identify intellectual property violators. They are different protocol implementations of the same basic idea.

Information theory and human sensory perception

Steganography is possible for the same reasons that compression is – a combination of information theory and the limits of human visual and auditory perception. Digital signals contain redundancy, which manifests itself as noise. Humans cannot detect all levels of noise; in other words, humans often cannot tell an image or an audio clip apart from another with a slightly different level of noise. The larger the cover message is (in terms of data content, i.e., number of bits) relative to the hidden message, the easier it is to hide the latter. For example, a 24-bit bitmap image uses 8 bits for each of the three basic colors – red, green and blue – at each pixel, giving 256 shades of each basic color. Changing the least significant bit of any of these basic colors makes a negligible change to that pixel, and even less to the image as a whole, so the least significant bit can easily be used to store the steganographic message. If we change the LSB of each basic color of three adjacent pixels, we get 9 bits – enough space to store an ASCII character. This is called LSB manipulation and is a very conventional and simple steganographic technique. The message itself can also be compressed using a compression coding method such as run-length coding. Historically, many invisible-ink steganographic messages were encoded using Polybius squares or similar text-to-integer mapping schemes.
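
A minimal sketch of this LSB substitution in Python (the image is assumed to be a flat list of 8-bit channel values; the function names are illustrative, not from any library):

    def embed_lsb(channels, message_bits):
        # Overwrite the least significant bit of successive channel values.
        stego = list(channels)
        for i, bit in enumerate(message_bits):
            stego[i] = (stego[i] & ~1) | bit   # clear the LSB, then set it to the message bit
        return stego

    def extract_lsb(channels, n_bits):
        # Read the hidden bits back out of the least significant bits.
        return [c & 1 for c in channels[:n_bits]]

    # Hide the ASCII character 'A' in the LSBs of nine channel values
    # (three adjacent 24-bit pixels); the ninth bit is left unused here.
    cover = [200, 13, 97, 54, 180, 222, 31, 8, 150]
    bits = [(ord('A') >> k) & 1 for k in range(8)]
    stego = embed_lsb(cover, bits)
    assert extract_lsb(stego, 8) == bits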

Stated somewhat more formally, the objective in making steganographic encoding difficult to detect is to ensure that the changes to the carrier/container (the original signal) caused by injecting the payload (the signal to be covertly embedded) are visually and, ideally, statistically negligible; that is, the changes are indistinguishable from the carrier's own Gaussian noise. From an information-theoretic point of view, this means that the channel must have more capacity than the 'surface' signal requires (its entropy), i.e., there is redundancy. For a digital image, this may be noise from the imaging element; for digital audio, it may be noise from the recording technique – amplitude or frequency modulation. Any system with an analog (signal) amplification stage will also introduce thermal noise, which can be exploited as a noise cover. In information-theoretic terms a steganographic channel is a covert channel, since it transfers information using a method not originally intended to carry that kind of information. Steganography supports both storage and timing covert channels. This report primarily discusses storage covert channels, where a covert message is communicated by manipulating a stored object such as an image. Ron Rivest's "Chaffing and Winnowing" protocol, discussed later, can be argued to be an example of a timing covert channel.

It is fairly obvious that the greater the data content of the cover message, the easier it is to hide the message. In the case of images, bitmaps are better fits than GIFs and JPEGs, because GIF uses only 8 bits per pixel and JPEG is a lossy compression technique. On the flip side, bigger images attract more attention than smaller ones as suspect stego-images. Subtlety is very important, and stego-images should show only subtle changes. An image with large areas of solid color would be a bad fit, since the variance created by the embedded message would cause differences easily spotted by the human eye. The spatial frequency distribution of the image (spatio-temporal in the case of audio or video content) is also a determining factor in the efficiency of the hiding process. As we will see later, there are techniques for both Gaussian and Laplacian distributions using maximum likelihood estimators for the stego-messages.

Often the embedded message is itself encrypted using a key that may or may not be known to the adversary. Since steganography requires that the communicating parties share some prior information, a symmetric key is a natural fit. However, public-key steganography with steganographic key exchange is also possible.

Prisoner’s problem and subliminal channel

The study of steganography in modern cryptography was first stated as the prisoner's problem by Simmons. Two inmates, Alice and Bob, are accomplices in a crime and are sent to prison. They need to communicate with each other, but they must use a public channel that is monitored by the warden of the jail. The warden will only forward messages that are intelligible to him. The prisoners accept this condition and find a way to communicate secretly within these exchanges – establishing a subliminal channel even though the messages themselves are not encrypted. The warden may also try to deceive them, so they will authenticate each other's messages before accepting them – authentication without secrecy.

The situation is paradoxical because the warden demands access to the messages while the prisoners need to authenticate each other. Authentication-without-secrecy channels achieve this by placing a pre-arranged condition on all messages, and it is this capability that creates a subliminal channel for the prisoners. If m redundant bits are allowed for establishing authenticity, then these redundant bits create a bit-by-bit subliminal channel which can be used to transmit extra information.
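
As an illustration only (this is not Simmons' original construction), the sketch below shows how redundant authenticator bits can carry covert data: the sender varies an innocuous counter field until the leading bit of the MAC equals the covert bit, so the warden sees only a message with a valid authenticator. The key and field names are hypothetical.

    import hashlib, hmac

    KEY = b"shared-prisoner-key"   # hypothetical secret shared by Alice and Bob

    def mac(msg):
        return hmac.new(KEY, msg, hashlib.sha256).digest()

    def send(message, covert_bit):
        counter = 0
        while True:                              # vary an innocuous field ...
            tagged = message + b"|" + str(counter).encode()
            tag = mac(tagged)
            if (tag[0] >> 7) == covert_bit:      # ... until the MAC's leading bit equals the covert bit
                return tagged, tag
            counter += 1

    def receive(tagged, tag):
        assert hmac.compare_digest(mac(tagged), tag)   # authentication still holds
        return tag[0] >> 7                             # recover the covert bit

    msg, tag = send(b"see you in the yard", 1)
    assert receive(msg, tag) == 1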

Null ciphers and “Chaffing and Winnowing”

A null cipher is a form of encryption in which the plaintext is mixed with a large amount of non-cipher material. Null ciphers hide the actual ciphertext by introducing nulls that confuse the cryptanalyst. Classical steganography can be thought of as an extension of this concept, where the carrier/container data play the role of the nulls – data that create confusion and diffuse the actual payload.

Ron Rivest extended this concept into the idea of "Chaffing and Winnowing" to create steganographic communication channels. The idea is analogous to separating (winnowing) wheat from chaff, where the wheat is the actual payload and the chaff plays the role of the null cipher. In a two-step process, the transmitter adds chaff to the wheat, i.e., intersperses the actual payload with meaningless data, and the receiver "winnows" the actual payload from the uninteresting data. As with most steganographic transfers, the transmitter adds a MAC to every message that is sent, to establish the authenticity of the communication. The MAC is calculated over the entire message and a serial number, using a secret symmetric authentication key. For the chaff packets, the transmitter attaches bogus MACs instead of computing real ones; this is what distinguishes the "chaff" from the "wheat". The receiver does not have to do anything special, since the normal behavior of a receiver is to discard packets that do not have correct MACs. Though the adversary can see the entire communication, it cannot tell chaff from wheat, because the MAC looks like a random function; however, weak MAC functions can potentially leak information in this protocol. It is also important to note that digital signatures cannot be used here, since anyone would then be able to verify the signatures and tell "chaff" from "wheat". "Designated verifier signature" schemes, where only the designated verifier can check a signature, would work fine. The other key idea is that since creating "chaff" involves only generating a bad MAC and not knowledge of a secret key, any entity can play the role of a "chaffer".
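
A minimal sketch of the protocol, under the simplifying assumption that each packet is a (serial, data, MAC) triple and HMAC-SHA256 is the authentication function:

    import hashlib, hmac, os, random

    AUTH_KEY = b"shared-authentication-key"   # symmetric key known to sender and receiver

    def make_mac(serial, data):
        return hmac.new(AUTH_KEY, serial.to_bytes(4, "big") + data, hashlib.sha256).digest()

    def chaff_and_send(wheat):
        packets = []
        for serial, data in enumerate(wheat):
            packets.append((serial, data, make_mac(serial, data)))           # wheat: real MAC
            packets.append((serial, os.urandom(len(data)), os.urandom(32)))  # chaff: bogus MAC
        random.shuffle(packets)
        return packets

    def winnow(packets):
        # A normal receiver already discards packets whose MAC does not verify.
        good = [(s, d) for s, d, m in packets if hmac.compare_digest(make_mac(s, d), m)]
        return [d for _, d in sorted(good)]

    assert b"".join(winnow(chaff_and_send([b"attack", b" at", b" dawn"]))) == b"attack at dawn"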

Digital Watermarking and Fingerprinting

Digital watermarking is the technique of adding identifying information to digital artifacts using steganographic principles, i.e., hiding the information cleverly so that extraction by an adversary is difficult. In the context of images, watermarks can be visible or invisible. There are various techniques for placing digital watermarks on images, but conceptually they can be divided into two categories –

1. Spatial techniques. These methods hide the message in geometric characteristics of the image. They are highly susceptible to signal alteration; even simple manipulations such as zooming, cropping or smoothing would obliterate the watermark.

2. Frequency domain techniques. These methods hide the message in the frequency distribution of hues, intensities, luminance, etc. of the image. They are comparatively robust to simple image manipulations but can fall prey to statistical steganalysis.

Strictly speaking, visible digital watermarks are not really steganographic objects – they add visible information rather than hiding it.

Fingerprinting is a slightly different application of digital watermarks. When an artifact is sold to an entity, information about that entity is hidden in the artifact. If illegitimate copies of the artifact are sold, the embedded information reveals the violator. A slight modification of this is the canary trap protocol, where unique alterations are made to each copy of the artifact sold; the illegitimate copy then carries a tell-tale that traces back to the violator.

Some digital watermarking algorithms

It is not too difficult to formulate algorithms that cleverly hide information in images. The key idea for avoiding detection is to hide the message in such a way that, statistically, it resembles the image's natural noise distribution, making pattern detection very difficult.

Masking and filtering

These are basic techniques for creating visible watermarks by altering the luminance or colors of certain regions of the image. They can be detected very easily by simple statistical analysis, but they are fairly resistant to lossy compression and image cropping. They do not hide the data in noise but embed it in significant areas of the image – just the reverse of LSB manipulation.

LSB Manipulation

This is the manipulation described in the Introduction, and it is susceptible to even slight image modification. It is very efficient for hiding one GIF or BMP image in another, but a simple linear analysis is enough to detect it. It is fairly easy to hide an image in the 3 or even 4 least significant bits of another image without causing a major noticeable change. The motivation for the steganography matters here: if the intention is to covertly pass messages, this can still work unless all artifacts are sniffed for steganographic information; but if it is meant for digital watermarking, it is very easy to extract and/or remove the information.
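
A minimal numpy sketch of hiding one image in the 4 least significant bits of another (both arrays are uint8 and the same shape; only the secret image's top 4 bits survive):

    import numpy as np

    def hide(cover, secret):
        return (cover & 0xF0) | (secret >> 4)      # cover keeps its top 4 bits

    def reveal(stego):
        return (stego & 0x0F) << 4                 # approximate secret; its lower 4 bits are lost

    cover  = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)
    secret = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)
    stego  = hide(cover, secret)
    assert np.array_equal(reveal(stego) >> 4, secret >> 4)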

Spread Spectrum methods

In spread spectrum methods, the message is scattered across the image, making it harder for cropping, rotation and other basic image manipulations to obliterate the watermark. This is also somewhat resistant to statistical steganalysis because it gives the impression of noise in the image. Patchwork, a tool from IBM, uses this technique to scatter hidden information based on the statistical distribution of luminance in the image. It iteratively selects pairs of patches in the image, brightens one and darkens the other, and calculates the standard deviation S of the difference between light and dark patches over the sample. To encode, it picks two patches at random, brightens one by S and darkens the other by S. This process is iterated, and the whole image is laid out as a mosaic of bright and dark patches, some of which are used to hide data. The patch information is vital for decoding the hidden message later. This is clearly a frequency distribution method, and Patchwork assumes that the image has a Gaussian distribution.
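
A simplified sketch of the patch-pair idea (closer to Bender et al.'s published description than to any particular product; the key seeds the pseudo-random choice of pixel pairs, and the parameter values are illustrative):

    import numpy as np

    def pairs(shape, key, n):
        rng = np.random.default_rng(key)
        ys = rng.integers(0, shape[0], size=(n, 2))
        xs = rng.integers(0, shape[1], size=(n, 2))
        return ys, xs

    def embed(image, key, n=10000, delta=3):
        marked = image.astype(np.int32)
        ys, xs = pairs(image.shape, key, n)
        marked[ys[:, 0], xs[:, 0]] += delta        # brighten one pixel of each keyed pair
        marked[ys[:, 1], xs[:, 1]] -= delta        # darken the other
        return np.clip(marked, 0, 255).astype(np.uint8)

    def detect(image, key, n=10000):
        ys, xs = pairs(image.shape, key, n)
        diff = image[ys[:, 0], xs[:, 0]].astype(np.int32) - image[ys[:, 1], xs[:, 1]]
        return diff.mean()                         # roughly 2*delta if marked, near 0 otherwise

    cover = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
    print(detect(embed(cover, key=42), key=42), detect(cover, key=42))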

Texture Block coding

In this method, pairs of areas with similar texture are found and one area is copied over the other, producing identical blocks of texture in the image. Iterating a few times, we can build two large blocks of identical texture. These two blocks are altered identically by all non-geometric alterations of the image, and can therefore carry information about the image.

M-Sequences using linear shift registers

M-sequences are maximal-length pseudo-noise sequences generated from starting vectors of a linear (Fibonacci-style) recursion over a finite Galois field. Mathematically and statistically these sequences are known to have desirable autocorrelation properties, and the changes they introduce resemble Gaussian noise in an image. Images encoded using m-sequences are therefore statistically very hard to distinguish from the original, since the embedded data looks like normally distributed noise. If the stego-message is spread with an m-sequence, it can easily be embedded in the image by LSB substitution. A more secure implementation uses LSB addition instead to embed the watermark, since recovery then requires examining the complete bit pattern together with the current linear shift register state. This is more secure because, to crack it, the adversary would have to perform the same computations without any a priori knowledge.
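
A small sketch of generating such a pseudo-noise sequence with a 16-bit Fibonacci linear feedback shift register (taps for the maximal-length polynomial x^16 + x^14 + x^13 + x^11 + 1) and using it to spread message bits before LSB addition; the seed plays the role of the stego-key, and the function names are illustrative:

    def lfsr_bits(seed, n):
        # 16-bit maximal-length LFSR; period 2**16 - 1.
        state = seed & 0xFFFF
        for _ in range(n):
            bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            state = (state >> 1) | (bit << 15)
            yield state & 1

    def embed_add(channels, message_bits, seed=0xACE1):
        # Spread each message bit with a pseudo-noise chip, then *add* it to the
        # channel value (LSB addition); recovery needs the original image and the
        # same LFSR state, which is what makes this variant harder to attack.
        chips = lfsr_bits(seed, len(message_bits))
        return [min(255, c + (m ^ p)) for c, m, p in zip(channels, message_bits, chips)]

    print(embed_add([200, 13, 97, 54], [1, 0, 1, 1]))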

Frequency hopping

In this method, the scattering of the message follows rules that change cumulatively. The idea is similar to DES block encryption: bits are swapped according to rules dictated by the stego-key and random data from the previous round. White Noise Storm, an implementation of this methodology, creates a message space of 8 channels, where each channel has a window of W bytes and W is a random number. Each channel carries only one bit of the message along with many unused bits. The bits inside a window are permuted and rotated according to an algorithm regulated by the previous window's operations and the stego-key. Finally, the encoded message is embedded in the image using LSB substitution. The aim, again, is to produce a distribution that resembles Gaussian noise.
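
A loose sketch of the window-hopping idea (my own simplification, not White Noise Storm itself): each message bit is buried in a window of random filler bytes, and the bit's slot within the window is derived from the stego-key and a digest of the previous window, so the hiding rule changes from window to window.

    import hashlib, os

    def hop_encode(message_bits, key, window=16):
        prev, windows = b"", []
        for bit in message_bits:
            pos = hashlib.sha256(key + prev).digest()[0] % window  # key- and history-dependent slot
            w = bytearray(os.urandom(window))                      # mostly unused random bits
            w[pos] = (w[pos] & 0xFE) | bit                         # one payload bit per window
            windows.append(bytes(w))
            prev = bytes(w)
        return windows

    def hop_decode(windows, key, window=16):
        prev, bits = b"", []
        for w in windows:
            pos = hashlib.sha256(key + prev).digest()[0] % window
            bits.append(w[pos] & 1)
            prev = w
        return bits

    bits = [1, 0, 1, 1, 0, 0, 1, 0]
    assert hop_decode(hop_encode(bits, b"stego-key"), b"stego-key") == bits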

Steganalysis and Digital Watermarking Attacks

Steganalysis is the analogue of cryptanalysis in the context of steganography. It is composed of three steps:

1. Detection of the hidden message (passive steganalysis)

2. Extraction of the hidden message (active steganalysis)

3. Disabling or destruction of the hidden message

It is important to note that it is not necessary to extract a message in order to disable or destroy it. It is often very difficult to extract a hidden message, and at times even to detect one, because the bits are scattered and show up as noise; the case of visible watermarks is obviously different. The deeper problem is that detection is not even necessary if the attacker adopts a "suspicious attitude": one can simply run algorithms known to destroy digital watermarks on every artifact. On top of that, there are algorithms that, instead of disabling watermarks, either overwrite them or create exact replicas – rendering the watermark useless either way. Luis von Ahn et al. formulate and propose a notion of "universal robustness" for steganographic information, and prove that "robust steganography" is as secure as the underlying cryptography used to encrypt the message hidden in the cover. But this only ensures that extraction is hard, likening it to cryptography; their construction does not show that obliteration of the steganographic secret is impossible.

Some common types of attacks on Digital Watermarking are

1. Removal attacks – Denoising, Remodulation, Lossy Compression

These attacks attempt to remove the watermark from the data completely. Since many steganographic algorithms hide data as noise, removing the noise should obliterate the watermark. These algorithms try to estimate the cover data from a given statistic of the noise in it, treating the noise as the watermark. Langelaar et al. propose a sequence of filtering operations (median filtering, highpass filtering) to denoise the image, which will likely remove the digital watermark. There are several other watermark estimators that use either Maximum A Posteriori (MAP) estimation, if the image statistics are known, or Maximum Likelihood (ML) classification, if nothing is known about the images, to estimate the digital watermark. Voloshynovskiy proposed an algorithm that uses the MAP estimator and then remodulates the image to find the least favorable noise distribution, which is taken as the guess for the watermark.
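
A minimal sketch of the denoising idea using a small median filter (assuming the payload was embedded by LSB substitution as sketched earlier; scipy's median_filter stands in for the filtering stage):

    import numpy as np
    from scipy.ndimage import median_filter

    rng = np.random.default_rng(1)
    cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    payload = rng.integers(0, 2, cover.shape, dtype=np.uint8)

    stego = (cover & 0xFE) | payload            # LSB substitution
    attacked = median_filter(stego, size=3)     # mild, visually negligible smoothing

    # After filtering, the LSB plane no longer matches the payload; agreement
    # drops to roughly 50 percent, i.e. no better than random guessing.
    print(((attacked & 1) == payload).mean())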

Lossy compression of uncompressed image data (e.g., converting a bitmap to JPEG) can wipe out a watermark completely, since the raw pixel data are replaced by discrete cosine transform coefficients. However, this is mitigated by algorithms that hide information directly in the compressed domain.

2. Geometric Attacks – Warping, transforming, jittering etc.

These attacks are the easiest to implement and are often very effective. Instead of removing the watermark, they distort the embedded data by spatial (or, for audio and video, temporal) alterations. The effect is to scatter and shift the way the watermark is laid out in the image. As a simple example, if an image is rotated by a slight angle, say 1 degree, and the edges are filled with the average texture of adjacent pixels, there is a high likelihood that the watermark will fall out of sync with the watermark detector. The key idea is that although the watermark data still exist in the artifact, they have moved in such a way that the detector can no longer find them.
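
A small sketch of this desynchronization (scipy's rotate stands in for the attacker's image editor, and the LSB plane stands in for any position-keyed watermark):

    import numpy as np
    from scipy.ndimage import rotate

    rng = np.random.default_rng(2)
    stego = rng.integers(0, 256, (128, 128), dtype=np.uint8)

    # Rotate by 1 degree without changing the frame size; visually negligible.
    attacked = rotate(stego.astype(float), angle=1.0, reshape=False, mode="nearest")
    attacked = np.clip(np.round(attacked), 0, 255).astype(np.uint8)

    # The LSB plane, where many schemes hide their payload, is almost completely
    # scrambled even though the picture itself barely changes.
    print(((attacked & 1) == (stego & 1)).mean())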

Jittering is another effective attack that works extremely well on audio data. The audio signal is split into n chunks, and then either one chunk is deleted or one is duplicated before the signal is reassembled, giving (n - 1) or (n + 1) chunks. This introduces a jitter in the signal that is not detectable by humans, but digital watermarks are typically destroyed by it. Unzign implements a pixel jittering algorithm that works well against spatial-domain watermarks.
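
A minimal sketch of the chunk-deletion variant on a synthetic audio signal:

    import numpy as np

    def jitter(signal, n_chunks, drop_index):
        chunks = np.array_split(signal, n_chunks)
        del chunks[drop_index]                  # or chunks.insert(...) to duplicate a chunk instead
        return np.concatenate(chunks)           # (n - 1) chunks reassembled

    audio = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)   # one second of a 440 Hz tone
    attacked = jitter(audio, n_chunks=1000, drop_index=500)
    print(len(audio), len(attacked))   # every sample after the cut is shifted, desynchronizing the detector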

Another important observation is that although some algorithms survive basic geometric attacks such as rotation, shearing and resizing, they succumb to combinations of different attacks. StirMark is an implementation based on this principle: it simulates an iterative resampling process in which the image is slightly resized, sheared and rotated by small random amounts. However, repeated iterations of StirMark degrade the image to the point where humans can tell the difference between the original and the processed version.

3. Crypto attacks – Exhaustive key search, Collusion, Averaging, Oracle attack

These are similar to ordinary cryptographic attacks, in which the steganographic key is searched exhaustively. Statistical averaging attacks take several instances of the same data set carrying different watermarks and average them to obtain an attacked data set. A modification of this is the collusion attack, where smaller portions of the data set are attacked by averaging and the resulting pieces are then combined into a new attacked data set.
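
A minimal sketch of the averaging step, assuming a simple additive watermark model where each copy carries an independent random watermark:

    import numpy as np

    rng = np.random.default_rng(3)
    cover = rng.integers(0, 256, (64, 64)).astype(float)

    # Twenty copies of the same cover, each with a different additive watermark.
    copies = [cover + rng.normal(0, 2, cover.shape) for _ in range(20)]
    colluded = np.mean(copies, axis=0)

    # The cover survives, but the independent watermarks average toward zero:
    # the residual per pixel is on the order of 2 / sqrt(20).
    print(np.abs(colluded - cover).mean())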

4. Protocol attacks – Watermark inversion, Copy Attack.

These attacks aim not to detect, destroy or disable the watermark, but to undermine the basic tenets of watermarking, e.g., that a watermark cannot be extracted from non-watermarked data.

The watermark inversion attack exploits overmarking, that is, the ability to mark an image more than once. Bob obtains an image from Alice that carries her watermark. He then generates his own watermark and subtracts it from the image he received. Because of overmarking, Alice's watermark is still readable in this new image, which is almost identical to the one Alice circulated. Bob can now argue that Alice removed his watermark and added hers to produce her image, which would suggest that Bob was the original owner.
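
A sketch of the inversion attack under a purely additive watermark model (all names and parameters are illustrative):

    import numpy as np

    rng = np.random.default_rng(4)
    alices_original = rng.integers(0, 256, (64, 64)).astype(float)
    alices_mark     = rng.normal(0, 2, alices_original.shape)
    published       = alices_original + alices_mark      # what Alice circulates

    bobs_mark      = rng.normal(0, 2, published.shape)
    bobs_fake_orig = published - bobs_mark                # Bob's claimed "original"

    # Bob can now show that Alice's published image equals his "original" plus his
    # mark, which at the protocol level looks just like genuine ownership.
    assert np.allclose(published, bobs_fake_orig + bobs_mark)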

The copy attack first obtains an estimate of the watermark using a MAP or ML estimator, then processes this estimate with the least favorable noise function (mentioned under removal attacks) to smooth it, and finally adds the watermark to a new document. The copy attack allows anyone to make his own document appear to be watermarked by a well-known entity, simply by copying a watermark from a document published by that entity. This is a very serious attack, which Kutter et al. demonstrated experimentally.

Defenses against Steganalysis

We have seen that most steganographic algorithms hide data quite cleverly to avoid detection – blending it in as Gaussian noise, embedding it in significant areas, scattering it across the frequency spectrum, and so on. It has also been seen that it is often not easy to extract a digital watermark. Luis von Ahn et al. propose robust steganographic algorithms as well as new advances in public-key steganography, but this does not in itself safeguard against attacks that destroy or replace watermarks in images, audio files, etc. We have also noticed that watermarks are particularly susceptible to combinations of more than one attack. Mitigations have been suggested for particular types of attack, e.g., error-correcting codes based on Hamming distance (or some other distance measure, such as Euclidean distance) against statistical steganalysis. But the attacks keep running ahead of the mitigations. Barr et al. from Digimarc suggest the concept of an image signature to mitigate the copy attack, where perceptually similar images produce the same signature and perceptually different images produce very different signatures. While this would successfully mitigate the copy attack, one can still launch a geometric attack to obliterate the watermark; since the attacked image remains perceptually similar, its signature stays the same, and the image signature does not mitigate the attack. Additionally, this places an extra burden on the watermark detector to verify the signature of the image – a double verification that itself needs additional security.

The key idea is to construct the digital watermark so that destroying the watermark destroys the image itself. One idea, proposed by Neil Johnson, is to use a gradual mask instead of a sharp mask for a visible watermark, so that the watermark is not visible until the luminance of the image is significantly increased; this makes it more robust against changes to the lower bits. Extensive image processing and spatially selective alteration of luminance, guided by the luminance distribution, may still defeat such a watermark, but the image itself would also be distorted by that much processing.

Conclusion

The challenges in digital watermarking stem from the fact that the attacks exploit the same phenomenon as the watermarking technology itself – small amounts of inserted noise do not create humanly noticeable changes in an artifact. Clearly, current watermarking technology is not robust enough to withstand combinations of attacks. New authentication schemes, such as those proposed for public-key steganography, would add another layer of security but do not in themselves guarantee universal robustness of watermarks. I think the solution may well lie in better statistical models grounded in information theory: some attacks can be mitigated with authentication and authorization, but pattern detection and obliteration must be mitigated by better scattering algorithms.

References

1. Neil Johnson and Sushil Jajodia, Exploring Steganography: Seeing the Unseen.

2. Gustavus J. Simmons, The Prisoners' Problem and the Subliminal Channel.

3. Ronald L. Rivest, Chaffing and Winnowing: Confidentiality without Encryption.

4. Fabien A. Petitcolas, Ross J. Anderson and Markus Kuhn, Information Hiding – A Survey.

5. Pierre Moulin and Joseph O'Sullivan, Information-Theoretic Analysis of Information Hiding.

6. Nicholas J. Hopper, John Langford and Luis von Ahn, Provably Secure Steganography.

7. Neil Johnson and Sushil Jajodia, Steganalysis: The Investigation of Hidden Information.

8. Bender, Gruhl, Morimoto and Lu, Techniques for Data Hiding.

9. Neil F. Johnson, An Introduction to Watermark Recovery from Images.

10. Fabien A. Petitcolas, Ross J. Anderson and Markus Kuhn, Attacks on Copyright Marking Systems.

11. Martin Kutter and Sviatoslav Voloshynovskiy, The Watermark Copy Attack.

12. Niels Provos, Defending Against Statistical Steganalysis.

13. Barr, Bradley and Hannigan, Using Digital Watermarks to Mitigate the Threat of Copy Attacks.

14. Karen Su, Deepa Kundur and Dimitrios Hatzinakos, A Novel Approach to Collusion-Resistant Video Watermarking.

15. Stefan Katzenbeisser and Helmut Veith, Securing Symmetric Watermarking Schemes Against Protocol Attacks.
