Watermarking schemes evaluation
Fabien A. P. Petitcolas, Microsoft Research
Abstract—Digital watermarking has been presented as a solution to copy protection of multimedia objects and dozens of schemes and algorithms have been proposed. Two main problems seriously darken the future of this technology though. Firstly, the large number of attacks and weaknesses which appear as fast as new algorithms are proposed emphasizes the limits of this technology and in particular the fact that it may not match users' expectations. Secondly, the requirements, tools and methodologies to assess the current technologies are almost non-existent. The lack of benchmarking of current algorithms is blatant. This confuses rights holders as well as software and hardware manufacturers and prevents them from using the solution appropriate to their needs. Indeed, basing long-lived protection schemes on badly tested watermarking technology does not make sense. In this paper we discuss how one could solve the second problem by having a public benchmarking service, and we examine the challenges behind such a service.

Index Terms—watermarking, robustness, evaluation.

I. INTRODUCTION

Digital watermarking remains a largely untested field and only very few large industrial consortiums have published requirements against which watermarking algorithms should be tested [1, 2]. For instance, the International Federation of the Phonographic Industry led one of the first large-scale comparative tests of watermarking algorithms for audio. In general, a number of broad claims have been made about the 'robustness' of various digital watermarking or fingerprinting methods but very few researchers or companies have published extensive tests on them.

The growing number of attacks against watermarking systems (e.g., [3, 4, 5]) has shown that far more research is required to improve the quality of existing watermarking methods so that, for instance, the coming JPEG 2000 (and new multimedia standards) can be more widely used within electronic commerce applications.

We already pointed out in [6] that most papers have used their own limited series of tests, their own pictures and their own methodology and that consequently comparison was impossible without re-implementing the method and trying to test them separately. But then, the implementation might be very different and probably weaker than the one of the original authors. This led us to suggest that methodologies for evaluating existing watermarking algorithms were urgently required, and we proposed a simple benchmark for still image marking algorithms.

With a common benchmark, authors and watermarking software providers would just need to provide a more or less detailed table of results, which would give a good and reliable summary of the performances of the proposed scheme. So end users can check whether their basic requirements are satisfied, researchers can compare different algorithms and see how a method can be improved or whether a newly added feature actually improves the reliability of the whole method, and the industry can properly evaluate the risks associated with the use of a particular solution by knowing which level of reliability can be achieved by each contender. Watermarking system designers can also use such evaluation to identify possible weak points during the early development phase of the system.

Evaluation per se is not a new problem and significant work has been done to evaluate, for instance, image compression algorithms or the security of information systems [7], and we believe that some of it may be re-used for watermarking.

In section II we will explain the scope of the evaluation we envisage. Section III will review the type of watermarking schemes that an automated evaluation service1 could deal with. In section IV we will review the basic functionalities

1 Such service is the logical continuation of the existing StirMark benchmark.
that need to be evaluated. Section V will examine how each functionality can be tested. Finally, section VI will argue the need for a third party evaluation service and briefly sketch its architecture.

II. SCOPE OF THE EVALUATION

Watermarking algorithms are often used in larger systems designed to achieve certain goals (e.g., prevention of illegal copying). For instance Herrigel et al. [8] presented a system for trading images; this system uses watermarking technologies but relies heavily on cryptographic protocols. Such systems may be flawed for other reasons than watermarking itself; for instance the protocol which uses the watermark2 may be wrong, or the random number generator used by the watermark embedder may not be good. In this paper we are only concerned with the evaluation of watermarking (so the signal processing aspects) within the larger system, not the effectiveness of the full system in achieving its goals.

III. TARGET OF EVALUATION

The first step in the evaluation process is to clearly identify the target of evaluation, that is the watermarking scheme (the set of algorithms required for embedding and extraction) subject to evaluation and its purpose. The purpose of a scheme is defined by one or more objectives and an operational environment. For instance, we may wish to evaluate a watermarking scheme that allows automatic monitoring of audio tracks broadcast over radio.

Typical objectives found across the watermarking and copy protection literature include:

- Persistent identification of audio-visual signals: the mark carries a unique identification number (similar to an I.S.B.N.), which can be used as a pointer in a database. This gives the ability to manage the association of digital content with its related descriptive data, current rights holders, license conditions and enforcement mechanisms. This objective is quite general as it may wrap many other objectives described below. However one may wish to have the data related to the work stored in the work itself rather than in a central database, in order to avoid connection to a remote server.

- Proof of creatorship, proof of ownership: the embedded mark is used to prove to a court who is the creator or the right holder of the work;

- Auditing: the mark carries information used to identify parties present in a transaction involving the work (the distributors and the end users). This audit trail shows the transfer of the work between parties. Marks for identifying users are usually referred to as fingerprints;

- Copy-control marking: the mark carries information regarding the number of copies allowed. Such marks are used in the digital versatile disk copy protection mechanisms. In this system a work can be copied, copied once only or never copied.

- Monitoring of multimedia object usage: monitoring copyright liability can be achieved by embedding a license number into the work and having, for instance, an automated service constantly crawling the web or listening to the radio, checking the licensing and reporting infringement.

- Tamper evidence: special marks can be used in a way that allows detection of modifications introduced after the mark has been added.

- Labelling for user awareness: this type of mark is typically used by right holders to warn end users that the work they 'have in hands' is copyrighted. For instance, whenever an end user tries to save a copyrighted image opened in a web browser or an image editor, he may get a warning encouraging him to purchase a license for the work.

- Data augmentation: this is not really in the scope of 'digital watermarking' but a similar evaluation methodology can be applied to it.

- Labelling to speed up search in databases.

2 The watermark or mark is what is actually imperceptibly added to the cover-signal in order to convey the hidden data. The cover signal is the audio-visual signal (still image, audio track, video) in which one wishes to hide information (the work in many cases).
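The copy-control states listed above (copy freely, copy once only, never copy) amount to a couple of bits of payload plus a transition rule applied by compliant recorders. A minimal sketch, assuming a hypothetical 2-bit encoding (not the actual DVD signalling):

```python
from enum import IntEnum

class CopyControl(IntEnum):
    """Hypothetical 2-bit copy-control payload, in the spirit of the
    DVD copy protection marks described above (not the real encoding)."""
    COPY_FREELY = 0b00     # the work can be copied at will
    COPY_ONCE = 0b01       # one generation of copies is allowed
    NO_MORE_COPIES = 0b10  # a copy made from a 'copy once' work
    COPY_NEVER = 0b11      # the work may never be copied

def on_copy(state: CopyControl) -> CopyControl:
    """State a compliant recorder would embed in the copy it produces."""
    if state == CopyControl.COPY_FREELY:
        return CopyControl.COPY_FREELY
    if state == CopyControl.COPY_ONCE:
        # demote: the new copy cannot itself be copied again
        return CopyControl.NO_MORE_COPIES
    raise PermissionError("copying not allowed for this state")
```

A compliant recorder refuses to copy `COPY_NEVER` and `NO_MORE_COPIES` content, and demotes `COPY_ONCE` in the copy it writes.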
IV. BASIC FUNCTIONALITIES

The objectives of the scheme and its operational environment dictate several immediate constraints (a set of minimal requirements) on the algorithm. In the case of automated radio monitoring, for instance, the watermark should clearly withstand distortions introduced by the radio channel. Similarly, in the case of MPEG video broadcast the watermark detector must be fast enough to allow real time detection and simple in terms of the number of gates required for hardware implementation. One or more of the following general functionalities can be used:

A. Perceptibility

One does not wish that the hidden mark deteriorates too much the perceived quality of the signal.

B. Level of reliability

There are two main aspects to reliability.

Robustness and false negatives occur when the content was previously marked but the mark could not be detected. The threats centred on signal modification are robustness issues. Robustness can range from no modification at all to destruction3 of the signal. This requirement separates watermarking from other forms of data hiding (typically steganography). Without robustness, the information could just be stored as a separate attribute. Robustness remains a very general functionality as it may have different meanings depending on the purpose of the scheme. If the purpose is image integrity (tamper evidence), the watermark extractor should have a different output after small changes have been made to the image, while the same changes should not affect a copyright mark. In fact, one may distinguish at least the following main categories of robustness:

- The threats centred on modifying the signal in order to disable the watermark (typically a copyright mark), wilfully or not, remain the focus of many research papers which propose new attacks. By 'disabling a watermark' we mean making it useless or removing it.

- The threats centred on tampering with the signal by unauthorized parties in order to change the semantics of the signal are an integrity issue. Modification can range from the modification of court evidence to the modification of photos used in newspapers or clinical images.

- The threats centred on distributing anonymously illegal copies of a marked work are a traitor tracing issue and are mainly addressed by cryptographic solutions.

- Watermark cascading, that is the ability to embed a watermark into an audio-visual signal that has already been marked, requires a special kind of robustness. The order in which the marks are embedded is important because different types of marks may be embedded in the same signal. For instance one may embed a public and a private watermark (to simulate asymmetric watermarking), or a strong public watermark together with a tamper evidence watermark. As a consequence, the evaluation procedure must take into account the second watermarking scheme when testing the first one.

Finally, false positives occur whenever the detected watermark differs from the mark that was actually embedded. The detector could find a mark A in a signal where no mark was previously hidden, in a signal where a mark B was actually hidden with the same scheme, or where a mark B was hidden with another scheme.

3 Complete destruction may be a too stringent requirement. Actually it is not clear what it means. Instead one could agree on a particular quality measure and a maximum quality loss value.

4 Watermark-access-unit: smallest part of a cover-signal in which a watermark can be reliably detected and the payload extracted.

C. Capacity

Knowing how much information can reliably be hidden in the signal is very important to users, especially when the scheme gives them the ability to change this amount. Knowing the watermark-access-unit4 (or granularity) is also very important; indeed spreading the mark over a
full sound track prevents audio streaming, for instance.

D. Speed

As we mentioned earlier, some applications require real time embedding and/or detection.

E. Statistical undetectability

For some private watermarking systems, that is schemes requiring the original signal, one may wish to have a perfectly hidden watermark. In this case it should not be possible for an attacker to find any significant statistical differences between an unmarked signal and a marked signal. As a consequence an attacker could never know whether an attack succeeded or not; otherwise he could still try something similar to the 'oracle' attack [12]. Note that this option is mandatory for steganographic systems.

F. Asymmetry

Private-key watermarking algorithms require the same secret key both for embedding and extraction. This may not be good enough if the secret key has to be embedded in every watermark detector (which may be found in any consumer electronics device or multimedia player software): malicious attackers may extract it and post it to the Internet, allowing anyone to remove the mark. In these cases the party which embeds a mark may wish to allow another party to check its presence without revealing its embedding-key. This can be achieved using asymmetric techniques. Unfortunately, robust asymmetric systems are currently unknown and the current solution (which does not fully solve the problem) is to embed two marks: a private one and a public one.

Other functionality classes may be defined but the ones listed above seem to include most requirements used in the recent literature. The first three functionalities are strongly linked together and the choice of any two of them imposes the third one. In fact, when considering the three-parameter (perceptibility, capacity5 and reliability) watermarking model, the most important parameter to keep is the imperceptibility. Then two approaches can be considered: emphasise capacity over robustness or favour robustness at the expense of low capacity. This clearly depends on the purpose of the marking scheme and this should be reflected in the way the system is evaluated.

5 Capacity: bit size of a payload that a watermark access unit can carry.

V. EVALUATION

A full scheme is defined as a collection of functionality services to which a level of assurance is globally applied and for each of which a specific level of strength is selected. So a proper evaluation has to ensure that all the selected requirements are met to a certain level of assurance.

The number of levels of assurance cannot be justified precisely. On the one hand, it should be clear though that a large number of them makes the evaluation very complicated and unusable for particular purposes. On the other hand too few levels prevent scheme providers from finding an evaluation close enough to their needs. Also we are limited by the accuracy of the methods available for rating. Information technology security evaluation has been using, for the reasons we just mentioned above but also for historical reasons, six or seven levels. This seems to be a reasonable number for robustness evaluation.

For perceptibility we preferred to use fewer levels and hence follow more or less the market segmentation for electronic equipment. Moreover, given the roughness of existing quality metrics it is hard to see how one could reasonably increase the number of assurance levels.

The following sub-sections discuss possible methods to evaluate the functionalities listed earlier.

A. Perceptibility

Perceptibility can be assessed to different levels of assurance. The problem here is very similar to the evaluation of compression algorithms. The watermark could just be slightly perceptible but not annoying, or not perceptible under domestic/consumer viewing/listening conditions. Another level is non-perceptibility in comparison with the original under studio conditions. Finally, the best assurance is obtained when the watermarked media are assessed by a panel of individuals who are asked to look or listen
carefully at the media under the above conditions. However, as it is stated, this cannot be automated and one may wish to use less stringent levels. In fact, various levels of assurance can also be achieved by using various quality measures based on human perceptual models. Since there are various models and metrics available, an average of them could be used. Current metrics do not really take into account geometric distortions, which remain a challenging attack against many watermarking schemes.

Table 1—Summary of the possible perceptibility assurance levels. These levels may seem vague but this is the best we can achieve as long as we do not have good and satisfactory quality metrics.

Level of assurance | Criteria
Low                | PSNR (when applicable6); slightly perceptible but not annoying
Moderate           | Metric based on perceptual model; not perceptible under domestic conditions, that is using mass market consumer equipment
Moderate high      | Not perceptible in comparison with original under studio conditions
High               | Evaluation by a large panel of persons under strict conditions

6 PSNR is a very restrictive quality metric: it does not take into account any properties of the human visual model. This includes the usual masking properties but also the large tolerance to geometric distortions. By using PSNR one excludes immediately watermarking schemes based on geometric distortions. Unfortunately we are not aware of any metric taking those distortions into account.

Although robustness and capacity are linked in the sense that schemes with high capacity are usually easy to defeat, we believe that it is enough to evaluate them separately. Watermarking schemes are defined for a particular application and each application only requires a certain fixed payload, so we are only concerned by the robustness of the scheme for this given payload.

B. Level of reliability

1) Robustness

The robustness can be assessed by measuring the detection probability of the mark and the bit error rate for a set of criteria that are relevant for the application which is considered. The levels of robustness range from no robustness to provable robustness (e.g., [7, 13]).

For level zero no special robustness features have been added to the scheme apart from the ones needed to fulfil the basic constraints imposed by the purpose and operational environment of the scheme. So if we go back to the radio-monitoring example, the minimal robustness feature should make sure that the mark survives the distortions of the radio link in normal conditions.

The low level corresponds to some extra robustness features added, but which can be circumvented using simple and cheap tools publicly available. These features are provided to prevent 'honest' people from disabling the mark during normal use of the work. In the case of watermarks used to identify owners of photographs, the end users should be able to save and compress the photo, resize it and crop it without removing the mark.

Moderate robustness is achieved when more expensive tools are required as well as some basic knowledge on watermarking. So if we keep the previous example, the end user would need tools such as Adobe Photoshop and apply more processing to the image to disable the mark.

Moderately high: tools are available but special skills and knowledge are required and attacks may be unsuccessful. Several attempts and operations may be required and the attacker has to work on the attack.

High robustness: all known attacks have been unsuccessful. Some research by a team of specialists is necessary. The cost of the attack may be much higher than what it is worth and the success of it is uncertain.

Provable robustness: it should be computationally (or even more stringent: theoretically) infeasible for a wilful opponent to disable the mark. This is similar to what we have for cryptography, where some algorithms are based on some difficult problems.

The first levels of robustness can be assessed automatically by applying a simple benchmark algorithm similar to [6]:
For each medium in a determined set:

1. Embed a random payload with the greatest strength which does not introduce annoying effects. In other words, embed the mark such that the quality of the output – for a given quality metric – is greater than a given minima.

2. Apply a set of given transformations to the marked medium.

3. For each distorted medium try to extract the watermark and measure the certainty of extraction. Simple methods may just use a success/failure approach, that is to consider the extraction successful if and only if the payload is fully recovered without error. The measure for the robustness is the certainty of detection or the bit error rate after extraction.

This procedure must be repeated several times since the hidden information is random and a test may be successful by chance.

Levels of robustness differ by the number and strength of attacks applied and the number of media they are measured on. The set of tests and media will also depend on the purpose of the watermarking scheme and are defined in evaluation profiles. An evaluation profile sample is given in Table 2. For instance, schemes used in medical systems need only to be tested on medical images, while watermarking algorithms for owner identification have to be tested on a large panel of images.

Table 2—Evaluation profile sample.

                          | Level zero | Low level  | Moderate
Standard JPEG compression | 100 – 90   | 100 – 75   | 100 – 50
Colour reduction (GIF)    | 256        | 256        | 16
Cropping                  | 100 – 90 % | 100 – 75 % | 100 – 50 %
Gamma correction          |            | 0.7 – 1.2  | 0.5 – 1.5
Scaling                   |            | 1/2 – 3/2  | 1/3 – 2
Rotation                  |            | 0 – 2 deg  | 0 – 5, 90 deg
Uniform noise             |            | 1 – 5 %    | 1 – 15 %
Contrast                  |            | 0 – 10 %   | 0 – 25 %
Brightness                |            | 0 – 10 %   | 0 – 25 %
Median filter             |            |            | 3 × 3

The first levels of robustness can be defined using a finite and precise set of robustness criteria (e.g., S.D.M.I., IFPI or E.B.U. requirements) and one just needs to check them.

2) False positives

False positives are difficult to measure and current solutions use a model to estimate their rate. This has two major problems: first, 'real world' watermarking schemes are difficult to model accurately; secondly, modelling the scheme requires access to details of the algorithm. Despite the fact that not publishing algorithms breaches Kerckhoffs' principles7, details of algorithms are still considered as trade secrets and getting access to them is not always possible.

7 In 1883, Auguste Kerckhoffs enunciated the first principles of cryptographic engineering, in which he advises that we assume the method used to encipher data is known to the opponent, so security must lie only in the choice of key. The history of cryptology since then has repeatedly shown the folly of 'security-by-obscurity' – the assumption that the enemy will remain ignorant of the system in use.
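The three-step benchmark procedure above can be sketched as a simple loop. In the sketch below, `embed` and `extract` stand for the scheme under test and are hypothetical interfaces (not any particular product's API); PSNR serves as the – admittedly restrictive – quality metric, and the attacks are arbitrary callables taken from an evaluation profile:

```python
import math
import random

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two equal-size signals (flat lists)."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float("inf") if mse == 0 else 10 * math.log10(peak * peak / mse)

def bit_error_rate(sent, received):
    return sum(s != r for s, r in zip(sent, received)) / len(sent)

def benchmark(media, attacks, embed, extract, payload_bits=64,
              min_quality=38.0, trials=10):
    """Step 1: embed a random payload at the greatest strength that keeps the
    quality above `min_quality`; step 2: apply each attack; step 3: extract and
    record the bit error rate. Repeated `trials` times, since a single random
    payload may survive by chance."""
    results = {}
    for name, attack in attacks.items():
        errors = []
        for medium in media:
            for _ in range(trials):
                payload = [random.randint(0, 1) for _ in range(payload_bits)]
                # back off the embedding strength until quality is acceptable
                for strength in (1.0, 0.5, 0.25, 0.1):
                    marked = embed(medium, payload, strength)
                    if psnr(medium, marked) >= min_quality:
                        break
                errors.append(bit_error_rate(payload, extract(attack(marked))))
        results[name] = sum(errors) / len(errors)
    return results  # average bit error rate per attack
```

Averaging the bit error rate over several random payloads reduces the chance that a test succeeds by luck, as noted above.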
So one (naïve) way to estimate the false alarm rate is to count the number of false alarms using a large sample of data. This may turn out to be another very difficult problem, as some applications require 1 error in 10^8 or even 10^12.

C. Capacity

In most applications the capacity will be a fixed constraint of the system, so robustness tests will be done with a random payload of a given size. While developing a watermarking scheme however, knowing the trade-off between the basic requirements is very useful, and graphs with two varying requirements, the others being fixed, are a simple way to achieve this. In the basic three-parameter watermarking model for instance, one can study the relation between robustness and strength of the attack when the quality of the watermarked medium is fixed, between the strength of the attack and the visual quality, or between the robustness and the visual quality. The first one is probably the most important graph. For a given attack, and a given visual quality, it shows the bit error rate as a function of the strength of the attack. The second one shows the maximum attack that the watermarking algorithm can tolerate. This is useful from a user point of view: the performance is fixed (we want only 5% of the bits to be corrupted so we can use error correction codes to recover all the information we wanted to hide) and so it helps to define what kind of attacks the scheme will survive if the user accepts such or such quality degradation.

D. Speed

Speed is very dependent on the type of implementation: software or hardware. In the automated evaluation service we propose in the next section, we are not concerned with hardware implementations. For these, the complexity is an important criterion and some applications impose a limitation on the maximum number of gates that can be used, the amount of required memory, etc. For a software implementation, speed also depends very much on the hardware used to run it, but comparing performance results obtained on the same platform (usually the typical platform of end users) provides a reliable measure.

E. Statistical undetectability

All methods of steganography and watermarking substitute part of the cover signal, which has some particular statistical properties, with another signal with different statistical properties; in fact embedding processes usually do not pay attention to the difference in statistical properties between the original cover-signal and the stego-signal. This leads to possible detection attacks. As for false positives, evaluating such functionality is not trivial but fortunately very few watermarking schemes require it, so we will not consider it in the next section.

VI. METHODOLOGY – NEED FOR A THIRD PARTY

To gain trust in the reliability of a watermarking scheme, its qualities must be rated. This can be done by:

- trusting the provider of the scheme and his quality assurance (or claims);
- testing the scheme sufficiently oneself; and
- having the scheme evaluated by a trusted third party.

Only the third option provides an objective solution to the problem, but the general acceptance of the evaluation methodology implies that the evaluation itself is as transparent as possible. This was the aim of StirMark and this remains the aim of the project to build a next generation of StirMark Benchmark. This is why the source code and methodology must be public so one can reproduce the results easily.

A question one may ask is: does the watermarking system manufacturer need to submit any program at all, or can everything be done remotely using some interactive proof? Indeed, watermarking system developers are not always willing to give out software or code for evaluation, or company policy for intellectual property prevents them from doing this quickly. Unfortunately there is no protocol by which an outsider can evaluate such systems using a modified version of the above robustness testing procedure. One could imagine that the verifier sends an image I to be watermarked to the prover. After receiving the marked image Ĩ, the verifier would apply a transformation f to the image and send either J := f(I) or J := f(Ĩ) to the prover, who would just say 'I can detect the mark' or 'I cannot detect the mark'. The verifier would always accept a 'no' answer but a 'yes' answer only with a certain probability.
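This exchange is easy to simulate. The sketch below uses toy one-dimensional signals and a hypothetical constant-shift transformation f; the 'prover' answers every challenge correctly without running any genuine detector, merely by estimating f from the original it already holds and correlating the residual with the mark pattern:

```python
import random

def transform(signal, shift=3.0):
    """Toy attack f: a constant offset. Like a geometric distortion, it is
    (almost) invertible once the original signal is available for comparison."""
    return [x + shift for x in signal]

def cheating_prover(j, original, pattern):
    """No real detector: estimate f (here, its shift) by comparing J with the
    original, undo it, and correlate the residual with the known mark pattern."""
    shift_est = sum(a - b for a, b in zip(j, original)) / len(j)
    residual = [a - shift_est - b for a, b in zip(j, original)]
    corr = sum(r * w for r, w in zip(residual, pattern))
    return corr > 0.5 * sum(w * w for w in pattern)  # 'I can detect the mark'

def run_protocol(original, pattern, rounds=20, seed=1):
    """Verifier sends f(I) or f(Ĩ) at random; returns the prover's accuracy."""
    marked = [x + w for x, w in zip(original, pattern)]  # Ĩ = I + pattern
    rng = random.Random(seed)
    correct = 0
    for _ in range(rounds):
        use_marked = rng.random() < 0.5  # the verifier's secret coin
        j = transform(marked if use_marked else original)
        correct += (cheating_prover(j, original, pattern) == use_marked)
    return correct / rounds
```

With an invertible f, such a cheating prover is indistinguishable from an honest one, which is exactly the weakness discussed next in the text.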
After several iterations of the protocol the verifier would be convinced. Unfortunately in this case most f are invertible or almost invertible – even if f is a random geometric distortion, such as the one implemented in StirMark, it can be inverted using the original image. So the prover can always approximate f⁻¹ by comparing J to I and Ĩ, try to detect the mark in f⁻¹(J) and so always cheat. The conclusion of this is that the verifier must have at least a copy of the detection or extraction software.

So we propose, as a first step towards a widely accepted way to evaluate watermarking schemes, to implement an automated benchmark server for digital watermarking schemes. The idea is to allow users to send a binary library of their scheme to the server, which in turn runs a series of tests on this library and keeps the results in a database accessible to the scheme owner and/or to all 'watermarkers'. One may consider this service as the next generation of the StirMark benchmark: fully automated evaluation with real time access to the results.

In order to be widely accepted this service must have a simple interface with existing watermarking libraries; in the implementation we propose we have exported only three functions (scheme information, embedding and detection). The service must also, as we described earlier, take into account the application of the watermarking scheme by proposing different evaluation profiles (tests and sets of media samples) and strengths; this will be achieved by the use of different evaluation profile configuration files. The service must be easy to use:

- The client sends a library (which follows our general interface) to be evaluated and specifies the evaluation profile and level of assurance to be used;

- The StirMark Benchmark service automatically starts hundreds of tests on the library using its library of media;

- As soon as the tests are finished the results are sent to the client and may later be published on the project website;

- Finally, all evaluation procedures, profiles and code must be publicly available.

Although our current implementation only supports image-watermarking schemes, the general architecture we have chosen will allow us to support other media in the near future.

VII. CONCLUSIONS AND FUTURE WORK

In this paper we have used a duality approach to the watermarking evaluation problem by splitting the evaluation criteria into two (independent) groups: functionality and assurance. The first group represents a set of requirements that can be verified using agreed series of tests; the second is a set of levels to which each functionality is evaluated. These levels go from zero or low to very high.

We are investigating how evaluation profiles can be defined for different applications and how importance sampling techniques could be used to evaluate the false alarm rate in an automated way.

Hopefully this new generation of watermarking testing tool (in the continuation of the StirMark benchmark) will be very useful to the watermarking community.

REFERENCES

[1] International Federation of the Phonographic Industry. Request for Proposals – Embedded Signalling Systems, Issue 1.0. 54 Regent Street, London W1R 5PJ, June 1997.

[2] European Broadcasting Union and Union européenne de radio télévision. Watermarking – Call for systems, May 2000.

[3] Jonathan K. Su and Bernd Girod. Fundamental performance limits of power-spectrum condition-compliant watermarks. In Ping Wah Wong and Edward J. Delp, editors, Proceedings of Electronic Imaging '99, Security and Watermarking of Multimedia Contents II, vol. 3971, pp. 314–325, San Jose, California, U.S.A., 24–26 January 2000. The Society for Imaging Science and Technology (IS&T) and the International Society for Optical Engineering (SPIE).

[4] Martin Kutter. Watermark copy attack. In Ping Wah Wong and Edward J. Delp, editors, Proceedings of Electronic Imaging '99, Security and Watermarking of Multimedia Contents II, vol. 3971, pp. 371–380, San Jose, California, U.S.A., 24–26 January 2000. IS&T and SPIE.

[5] Fabien A. P. Petitcolas, Ross J. Anderson and Markus G. Kuhn. Attacks on copyright marking systems. In David Aucsmith, editor, Second Workshop on Information Hiding, vol. 1525 of Lecture Notes in Computer Science, Portland, Oregon, U.S.A., 14–17 April 1998, pp. 218–238. ISBN 3-540-65386-4.

[6] Martin Kutter and Fabien A. P. Petitcolas. A fair benchmark for image watermarking systems. In Ping Wah Wong and Edward J. Delp, editors, Proceedings of Electronic Imaging '99, Security and Watermarking of Multimedia Contents, vol. 3657, pp. 226–239, San Jose, California, U.S.A., 25–27 January 1999. IS&T and SPIE. ISSN 0277-786X. ISBN 0-8194-3128-1.

[7] IT-Security Criteria: Criteria for the Evaluation of Trustworthiness of Information Technology Systems, 1st version 1989. Published by the German Information Security Agency (Zentralstelle für Sicherheit in der Informationstechnik) on behalf of the German Government. Köln: Bundesanzeiger, 1989. ISBN 3-88784-200-6.

[8] Alexander Herrigel et al. Secure copyright protection techniques for digital images. In David Aucsmith, editor, Second Workshop on Information Hiding, vol. 1525 of Lecture Notes in Computer Science, Portland, Oregon, U.S.A., 14–17 April 1998, pp. 169–189.

[9] Jeffrey A. Bloom et al. Copy protection for D.V.D. video. Proceedings of the IEEE, vol. 87, no. 7, pp. 1267–1276, July 1999.

[10] Jong-Hyeon Lee. Fingerprinting. In Stefan C. Katzenbeisser et al., editors, Information Hiding Techniques for Steganography and Digital Watermarking, pp. 175–190. Artech House Books, December 1999. ISBN 1-58053-035-

[11] Fred Mintzer and Gordon W. Braudaway. If one watermark is good, are more better? In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP'99), vol. 4, pp. 2067–2070, Phoenix, Arizona, U.S.A., 15–19 March 1999. IEEE Signal Processing Society. ISBN 1-876346-

[12] Jean-Paul M. G. Linnartz and Marten van Dijk. Analysis of the sensitivity attack against electronic watermarks in images. In David Aucsmith, editor, Second Workshop on Information Hiding, vol. 1525 of Lecture Notes in Computer Science, Portland, Oregon, U.S.A., 14–17 April 1998, pp. 258–272.

[13] Dennis G. Abraham et al. Transaction Security System. IBM Systems Journal, vol. 30, no. 2, pp. 206–229, 1991.

[14] Auguste Kerckhoffs. La cryptographie militaire. Journal des sciences militaires, vol. 9, pp. 5–38, January 1883.

[15] Matt L. Miller, Ingemar J. Cox and Jeffrey A. Bloom. Watermarking in the real world: an application to DVD. In Jana Dittmann et al., editors, Multimedia and Security – Workshop at ACM Multimedia '98, Bristol, England, pp. 71–76, 12–13 September 1998. GMD GmbH, Darmstadt, Germany.

[16] Stefan C. Katzenbeisser. Principles of steganography. In Stefan C. Katzenbeisser et al., editors, Information Hiding Techniques for Steganography and Digital Watermarking, pp. 30–31. Artech House Books, December 1999.

[17] <http://www.cl.cam.ac.uk/~fapp2/watermarking/stirmark/>