Ross Anderson
[Research] [Blog] [Politics] [My Book] [Music] [Contact Details]
What's New
Here is a survey
paper on security economics which I gave at Softint 2007 on
January 20th; my slides; and a shorter survey that
was recently published in Science. See also my Economics and
Security Resource Page.
2006 blog
highlights included technical papers on topics from protecting
power-line communications to the Man-in-the-Middle Defence, as well as a major report on the safety and privacy of
children's databases for the UK Information Commissioner, which got a lot of
publicity. I ended the year by debating health privacy on the Today programme with health
minister Lord Warner, who resigned shortly afterwards. 2005 blog
highlights included research papers on The
topology of covert conflict, on combining
cryptography with biometrics, on Sybil-resistant DHT
routing, and on Robbing
the bank with a theorem prover; and a survey paper on cryptographic
processors, a shortened version of which appeared in the February 2006
Proceedings of the IEEE. 2004
highlights included papers on cipher
composition, key
establishment in ad-hoc networks and the economics
of censorship resistance. I also lobbied for amendments to the EU IP
Enforcement Directive and organised a workshop on copyright which
led to a common
position adopted by many European NGOs.
Research
I am Professor of Security Engineering at the Computer
Laboratory. I supervise a number of research students - Jolyon Clulow, Hao Feng, Stephen Lewis, Tyler Moore, Shishir Nagaraja, Andy Ozment and Robert Watson. Richard Clayton, Steven Murdoch and Sergei Skorobogatov are
postdocs. Mike Bond, Vashek Matyas and Andrei Serjantov are
former postdocs. Jong-Hyeon
Lee, Frank Stajano, Fabien Petitcolas,
Harry Manifavas, Markus
Kuhn, Ulrich
Lang, Jeff
Yan, Susan Pancho, Mike Bond, George Danezis, Sergei Skorobogatov, Hyun-Jin Choi and Richard Clayton have
earned PhDs.
My other research interests are described in the sections below.
Many of my papers are available in html and/or pdf, but some of the
older technical ones are in postscript, for which you can download a
viewer here. By
default, when I post a paper here I license it under the relevant Creative Commons
license, so you may redistribute it but not modify it. I may
subsequently assign the residual copyright to an academic publisher.
Economics of information security
Over the last few years, it's become clear that many systems fail not
for technical reasons so much as from misplaced incentives - often the
people who could protect them are not the people who suffer the costs
of failure. There are also many questions with an economic dimension
as well as a technical one. For example, will digital signatures make
electronic commerce more secure? Is so-called `trusted computing' a
good idea, or just another way for Microsoft to make money? And what
about all the press stories about `Internet hacking' - is this threat
serious, or is it mostly just scaremongering by equipment vendors?
It's not enough for security engineers to understand ciphers; we have
to understand incentives as well. This has led to a rapidly growing
interest in `security economics', a discipline which I helped to found.
I maintain the Economics and
Security Resource Page, and my research contributions include the
following.
- The
Economics of Information Security - A Survey and Open Questions is a paper
I'm giving at the Conference on the
Economics of the Software and Internet Industries in January 2007.
A much shorter version, called simply The Economics of
Information Security, was published in Science in October 2006. These
survey papers are works in progress: we plan to produce more detailed and
extensive versions over time.
- The
topology of covert conflict is rather topical - how can the police
best target an underground organisation given some knowledge of its
patterns of communication? And how might they in turn react to various
law-enforcement strategies? We present a framework combining ideas
from network analysis and evolutionary game theory to explore the
interaction of attack and defence strategies in networks. Although we
started out thinking about computer viruses, our work suggests
explanations of a number of aspects of modern conflict generally.
- Why
Information Security is Hard - An Economic Perspective was the
paper that got information security people thinking about the subject.
It applies economic analysis to explain many phenomena that security
people had found to be pervasive but perplexing. Why do mass-market
software products such as Windows contain so many security bugs? Why
are their security mechanisms so difficult to manage? Why are
government evaluation schemes, such as the Orange Book and the Common
Criteria, so bad? This paper was presented at the Applications Security
conference in December 2001, and also as an invited talk at SOSP 2001.
- The hot political issue is `Trusted Computing'. My `Trusted
Computing' FAQ analysed this Intel/Microsoft initiative to install
digital rights management hardware in every computer, PDA and mobile
phone. `TC' will please Hollywood by making it hard to pirate music
and videos; and it will please Microsoft by making it harder to pirate
software. But TC could have disturbing consequences for privacy,
censorship, and innovation. Cryptography
and Competition Policy - Issues with `Trusted Computing' is an
economic analysis I gave at WEIS2003 and
also as an invited talk at PODC 2003. TC will help
Microsoft lock
in its customers more tightly, so it can charge more. There is a shortened
version of the paper that appeared in a special issue of Upgrade,
and a French
translation. I spoke about TC at the "Trusted
Computing Group" Symposium, at PODC, and at the Helsinki IPR
workshop. There's also a neat video
clip. TC is not just an isolated engineering and policy issue; it
is related to the IP
Enforcement Directive on the policy front, and new content
standards such as DTCP, which will
be built into consumer electronics and also into PC motherboards. The
row about `Trusted Computing' was ignited by a paper on the
security of free and open source software I gave at a conference on Open Source Software
Economics in Toulouse. This paper got press coverage in the New York Times, slashdot, news.com and The
Register. Its second part is about TC.
- In the first part of my Toulouse
paper, I show that the usual argument about open source security -
whether source access makes it easier for the
defenders to find and fix bugs, or makes it easier for the
attackers to find and exploit them - is misdirected. Under
standard assumptions used by the reliability growth modelling
community, the two will exactly cancel each other out. That means that
whether open or closed systems are more secure in a given situation
will depend on whether, and how, the application deviates from the
standard assumptions. The ways in which this can happen, making either
open or closed systems better in some specific application, are explored in
a later paper, Open
and Closed Systems are Equivalent (that is, in an ideal world)
which appeared as a chapter in a book entitled Perspectives
on Free and Open Source Software (you can download the whole book
for free by clicking on that link). This also got some press
coverage.
- On
Dealing with Adversaries Fairly applies election theory (also
known as social choice theory) to the problem of shared control in
distributed systems. It shows how a number of reputation systems
proposed for use in peer-to-peer applications might be improved. It
appeared at WEIS
2004.
- The
Economics of Censorship Resistance examines when it is better for
defenders to aggregate or disperse. Should file-sharers build one huge
system like gnutella and hope for safety in numbers, or would a loose
federation of fan clubs for different bands work better? More
generally, what are the tradeoffs between diversity and solidarity
when conflict threatens? (This is a live topic in social policy at the
moment - see David
Goodhart's essay, and a response in the Economist.)
This paper also appeared at WEIS 2004.
- Here are papers on The
Initial Costs and Maintenance Costs of Protocols, which I gave at Security
Protocols 05, and How Much is
Location Privacy Worth? which I gave at WEIS 05.
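The cancellation argument from the Toulouse paper above can be illustrated with a toy probability model (my own construction for illustration, not the paper's derivation): if opening the source multiplies both the attackers' and the defenders' bug-finding rates by the same factor, the chance that the attacker finds any given bug first is unchanged.

```python
import random

def p_attacker_first(attack_rate, defence_rate, alpha, trials=200_000, seed=42):
    """Monte Carlo estimate of the probability that the attacker finds a
    given bug before the defender, when source access scales both parties'
    bug-finding rates by the same factor alpha."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        # Memoryless search: exponential time-to-discovery for each party,
        # the standard reliability-growth assumption.
        t_attack = rng.expovariate(alpha * attack_rate)
        t_defend = rng.expovariate(alpha * defence_rate)
        wins += t_attack < t_defend
    return wins / trials

closed = p_attacker_first(1.0, 3.0, alpha=1.0)   # closed source
open_  = p_attacker_first(1.0, 3.0, alpha=10.0)  # open source: everyone 10x faster
# Analytically both equal 1/(1+3) = 0.25: the speed-ups cancel exactly.
```

Whether openness helps in practice then turns, as the paper argues, on how a given deployment deviates from these symmetric assumptions.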
Our annual bash is the Workshop on Economics and
Information Security. My Economics and Security
Resource Page provides a guide to the literature and to what else is on.
There is also a web page on the economics of
privacy, maintained by Alessandro Acquisti.
Peer-to-Peer systems
Since about the middle of 2000, there has been an explosion of
interest in peer-to-peer networking - the business of building useful
systems out of large numbers of intermittently connected machines,
with virtual infrastructures that are tailored to the application. One
of the seminal papers in the field was The
Eternity Service, which I presented at Pragocrypt 96. I had been
alarmed by the Scientologists' success at closing down the penet
remailer in Finland, and had been personally threatened by bank
lawyers who wanted to suppress knowledge of the vulnerabilities of ATM
systems (see here
for a later incident). This taught me that electronic publications can
be easy for the rich and the ruthless to suppress. They are usually
kept on just a few servers, whose owners can be sued or coerced. To
me, this seemed uncomfortably like books in the Dark Ages: the modern
era only started once the printing press enabled seditious thoughts to
be spread too widely to ban. The Eternity Service was conceived as a
means of putting electronic documents as far outwith the censor's grasp as possible.
(The concern that motivated me has since materialised; a UK
court judgment has found that a newspaper's online archives can be
altered by order of a court to remove a libel.)
But history never repeats itself exactly, and the real fulcrum of censorship in
cyberspace turned out to be not sedition, or vulnerability disclosure, or even
pornography, but copyright. Hollywood's action against Napster led to my Eternity
Service ideas being adopted by many systems including Publius and Freenet. Many of these developments
were described in an important book,
and the first
academic conference on peer-to-peer systems was held in March 2002 at MIT.
The field has since become very active. See also Richard Stallman's classic, The Right to Read.
My contributions since the Eternity paper include:
- I've been helping upgrade
the security of Homeplug, an industry standard for broadband communication over
the power mains. A paper
on what we did and why appeared at SOUPS. This is a good worked example
of how to do key establishment in a real peer-to-peer system. The core problem
is this: how can you be sure you're recruiting the right device to your
network, rather than a similar one nearby?
- Sybil-resistant DHT
routing appeared at ESORICS 2005 and showed how we can
make peer-to-peer systems more robust against disruptive attacks if we know
which nodes introduced which other nodes. The convergence of computer science
and social network theory is an interesting recent phenomenon, and not limited
to search and recommender systems.
- Key
Infection - Smart trust for Smart Dust appeared at ICNP 2004 and presents a radically
new approach to key management in sensor and peer-to-peer networks. Peers
establish keys opportunistically and use resilience mechanisms to fortify
the system against later node compromise. This work challenges the old
assumption that authentication is largely a bootstrapping problem.
- The Economics
of Censorship Resistance examines when it is better for defenders
to aggregate or disperse. Should file-sharers build one huge system
like gnutella and hope for safety in numbers, or would a loose
federation of fan clubs for different bands work better?
- A New
Family of Authentication Protocols presented our `Guy Fawkes Protocol',
which enables users to sign messages using only two computations of
a hash function and one reference to a timestamping service. This led to
many protocols for signing digital streams. Our paper also raises foundational
questions about the definition of a digital signature.
- Peer-to-peer techniques are not just about creating virtual machines out
of many distributed PCs on the Internet, but apply also to other environments
where communication is intermittent. Mobile communications, personal area
networks and piconets are another rapidly developing field. The Resurrecting Duckling:
Security Issues for Ad-hoc Wireless Networks describes how to do key
management between low-cost devices that can talk to each other using radio or
infrared, and without either the costs or privacy problems of centralised
trusted third parties. (There's also a journal version of the paper here.)
- The study of distributed systems which are hidden, deniable or difficult
to censor might be described as `subversive group computing'. Our seminal
publication in this thread was The Cocaine Auction
Protocol which explored how commercial transactions can be conducted
between mutually mistrustful principals with no trusted arbitrator, while
giving a high degree of privacy against traffic analysis.
- I have done some work with our university library on how
to secure a digital repository. This grew out of a thread on web security:
- The Eternal
Resource Locator: An Alternative Means of Establishing Trust on the World Wide
Web investigated how to protect naming and indexing information and showed
how to embed trust mechanisms in html documents. It was motivated by a project
by colleagues at our medical school to protect the electronic version of the British National Formulary. It followed work
reported in Secure Books:
Protecting the Distribution of Knowledge, which describes a project to
protect the authenticity and integrity of electronically distributed treatment
protocols. Later work included Jikzi, an authentication framework for
electronic publishing, which works by integrating ERL-type ideas into XML and
which led to a startup. There are both
general and technical
papers on Jikzi.
- The XenoService -
A Distributed Defeat for Distributed Denial of Service described defeating
distributed denial of service attacks using a network of web hosts that can
respond to an attack on a site by replicating it rapidly and widely. It used Xenoservers, developed at Cambridge for
distributed hosting. This technique is now widely used.
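The Guy Fawkes idea above is simple enough to sketch in code. This is a simplified reconstruction, not the paper's exact message format: the signer anchors a hash chain of one-time secrets (the anchor is assumed timestamped), broadcasts a hash binding each message to an unrevealed chain secret, and only then reveals that secret. A verifier who saw the commitment before the reveal needs just two hash computations.

```python
import hashlib, os

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

# Signer: build a hash chain; publish its head as the anchor (in the real
# protocol this anchor would be lodged with a timestamping service).
n = 4
secrets = [os.urandom(32)]
for _ in range(n):
    secrets.append(H(secrets[-1]))
secrets.reverse()                     # now H(secrets[i+1]) == secrets[i]
anchor = secrets[0]

def sign(i, msg):
    """Phase 1: broadcast the commitment; phase 2: reveal message and secret."""
    commitment = H(msg, secrets[i])
    return commitment, (msg, secrets[i])

def verify(prev_link, commitment, msg, secret):
    """Two hashes: the secret must extend the chain, and must match the
    commitment that was seen *before* the secret was revealed."""
    return H(secret) == prev_link and H(msg, secret) == commitment

c1, (m1, s1) = sign(1, b"attack at dawn")
ok = verify(anchor, c1, m1, s1)                       # genuine message
forged = verify(anchor, c1, b"attack at dusk", s1)    # forgery is rejected
```

Security here rests on ordering (commitment before reveal) rather than on any number-theoretic assumption, which is what raises the definitional questions the paper discusses.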
I ran a CMI project with Frans Kaashoek and Robert Morris on building a
next-generation peer-to-peer system. I gave a keynote talk about this at the Wizards of OS
conference in Berlin; the slides are here.
Robustness of cryptographic protocols
Many security system failures are due to poorly designed protocols, and this
has been a Cambridge interest for many years. Some relevant papers follow.
- API Level
Attacks on Embedded Systems was our first detailed paper on a powerful new
way to attack crypto processors. Even if a device is physically tamper-proof,
it can often be defeated by sending it some sequence of transactions which
causes it to leak the key. We've broken pretty well every security processor
we've looked at, at least once. This line of research originated at Protocols
2000 with my paper The Correctness of
Crypto Transaction Sets, and more followed in my book. A recent paper, Robbing the bank
with a theorem prover, shows how to apply some advanced tools to the
problem, and some ideas for future research can be found in Protocol
Analysis, Composability and Computation. There is a survey of the state of
the art in late 2005 in a survey of
cryptographic processors, a shortened version of which appeared in the
February 2006 Proceedings of the IEEE.
- Programming
Satan's Computer is a phrase Roger Needham and I coined to express
the difficulty of designing cryptographic protocols; it has recently been
popularised by Bruce Schneier (see, for example, his foreword to my book). The problem of
designing programs which run robustly on a network containing a malicious
adversary is rather like trying to program a computer which gives subtly wrong
answers at the worst possible moments.
- Robustness principles for public key protocols gives a number of attacks on protocols
based on public key primitives. It also puts forward some principles which can
help us to design robust protocols, and to find attacks on other people's
designs. It appeared at Crypto 95.
- The Cocaine
Auction Protocol explores how transactions can be conducted between
mutually mistrustful principals with no trusted arbitrator, even in
environments where anonymous communications make most of the principals
untraceable.
- NetCard - A
Practical Electronic Cash Scheme presents research on micropayment
protocols for use in electronic commerce. We invented tick payments
simultaneously with Torben Pedersen and with Ron Rivest and Adi Shamir; we all
presented our work at Protocols 96. Our paper discusses how tick payments can
be made robust against attacks on either the legacy credit card infrastructure
or next generation PKIs.
- The GCHQ
Protocol and its Problems pointed out a number of flaws in a key management
protocol widely used in the UK government. It was promoted by GCHQ as a
European alternative to Clipper, until we shot it down with this paper at
Eurocrypt 97. Many of the criticisms we developed here also apply to the more
recent, pairing-based cryptosystems.
- The Formal
Verification of a Payment System describes the first use of formal methods
to verify an actual payment protocol, which was (and still is) used in an
electronic purse product (VISA's COPAC card). This is a teaching example I use
to get the ideas of the BAN logic across to undergraduates. There is further
detailed information in a technical
report, which combines papers given at ESORICS 92 and Cardis 94.
- An
Attack on Server Assisted Authentication Protocols appeared in Electronics
Letters in 1992. It breaks a digital signature protocol.
- On Fortifying
Key Negotiation Schemes with Poorly Chosen Passwords presents a simple way
of achieving the same result as protocols such as EKE, namely preventing
middleperson attacks on Diffie-Hellman key exchange between two people whose
shared secret could be guessed by the enemy.
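The API-level attacks described above can be illustrated with a toy, entirely hypothetical security module. It never reveals keys directly, yet because nothing distinguishes wrapped key-encrypting keys from wrapped data keys, a short sequence of legitimate calls leaks a key in the clear. (The cipher is a throwaway hash-based XOR stream, purely for illustration, not any real HSM's algorithm.)

```python
import hashlib

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256-derived keystream.
    Encrypting twice with the same key returns the original data."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, stream))

MASTER = b"hsm-master-key"            # never leaves the module

def export_key(key: bytes) -> bytes:
    """Wrap a key under the master key for external storage."""
    return xor_cipher(MASTER, key)

def decrypt_data(wrapped_key: bytes, ciphertext: bytes) -> bytes:
    """Decrypt user data under a wrapped key.  The design flaw: the API
    does not check whether the wrapped key is a data key or a KEK."""
    key = xor_cipher(MASTER, wrapped_key)
    return xor_cipher(key, ciphertext)

# The attack: two legitimate calls, then type confusion leaks the key.
target = b"super-secret-pin-key-0000000000!"
wrapped_target = export_key(target)       # call 1: routine key export
wrapped_master = export_key(MASTER)       # call 2: the KEK gets wrapped too
# Feed the wrapped KEK back in as if it were a data key: the module
# unwraps it to MASTER, then "decrypts" the wrapped target key with it.
leaked = decrypt_data(wrapped_master, wrapped_target)
```

Real attacks of this class exploit subtler interactions (key-part XOR imports, type confusion between PIN keys and data keys), but the shape is the same: each call is individually sanctioned, and only the sequence is lethal.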
Protocols have been the stuff of high drama. Citibank asked the High Court to
gag the
disclosure of certain crypto API
vulnerabilities that affect a number of systems used in banking. I wrote to
the judge opposing
this; a gagging
order was still imposed, although in slightly less severe terms than
Citibank had requested. The trial was in camera, the banks' witnesses didn't
have to answer questions about vulnerabilities, and new information revealed
about these vulnerabilities in the course of the trial may not be disclosed in
England or Wales. Information already in the public
domain was unaffected. The vulnerabilities were discovered by Mike Bond and me while acting as the
defence experts in a phantom withdrawal court case, and independently
discovered by the other side's expert, Jolyon Clulow, who later joined
us as a research student. They are of significant scientific interest, as well as being
relevant to the rights of the growing number of people who suffer phantom withdrawals from their
bank accounts worldwide. Undermining the fairness of trials and forbidding
discussion of vulnerabilities isn't the way forward. See press coverage by the
New
Scientist, the Register,
Slashdot,
news.com, and
Zdnet.
Reliability of security systems
I have been interested for many years in how security systems fail in real
life. This is a prerequisite for building robust secure systems; many security
designs are poor because they are based on unrealistic threat models. This work
began with a study of automatic teller machine fraud, and then expanded to
other applications as well. It now provides the central theme of my book.
- Why Cryptosystems
Fail may have been cited more than anything else I've written. This version
appeared at ACMCCS 93 and explains how ATM fraud was done in the early 1990s.
Liability and
Computer Security - Nine Principles took this work further, and examines
the problems with relying on cryptographic evidence. The recent introduction of
EMV ('chip and PIN') was supposed to fix the problem, but hasn't: Phish and
Chips documents protocol weaknesses in EMV, while The
Man-in-the-Middle Defence shows how to turn protocol weaknesses to
advantage. For example, a bank customer can take an electronic attorney along
to a chip-and-PIN transaction to help ensure that neither the bank nor the
merchant rips him off. This story will run and run.
- On a New Way to
Read Data from Memory describes techniques we developed that use lasers to
read out memory contents directly from a chip, without using the read-out
circuits provided by the vendor. This can defeat access controls and even
recover data from damaged devices. Collaborators at Louvain have developed ways
to do this using electromagnetic induction, which are also described. The work
builds on methods described in an earlier paper, on Optical Fault
Induction Attacks. This showed how laser pulses could be used to induce
faults in smartcards that would leak secret information. We can write
arbitrary values into registers or memory, reset protection bits, break out of
loops, and cause all sorts of mayhem. That paper made the front page of the New York
Times; it also got covered by the New
Scientist and slashdot. It was presented at CHES
2002.
- After we discovered the above attacks, we developed a new, more secure,
CPU technology for use in smartcards and similar products. It uses
redundant failure-evident logic to thwart attacks based on fault induction or
power analysis. Our first paper on this
technology won the best presentation award at Async 2002 in April. Our latest journal
paper has recent test results.
- Our classic paper on hardware security, Tamper Resistance
- A Cautionary Note, describes how to penetrate the smartcards and secure
microcontrollers of the mid-1990s. It won the Best Paper award at the 1996
Usenix Electronic Commerce Workshop and caused a lot of controversy. Our second
paper on the subject was Low Cost Attacks on
Tamper Resistant Devices, which describes a number of tricks low budget
attackers can use. See also the home page of our hardware security
laboratory which brings together our smartcard and Tempest work, and Markus
Kuhn's page of links to hardware
attack resources.
- On the
Reliability of Electronic Payment Systems is another of the papers that
followed naturally from working on ATMs. It looks at the reliability of
prepayment electricity meters, and appeared in the May 1996 issue of the IEEE
Transactions on Software Engineering. An earlier version, entitled Cryptographic
Credit Control in Pre-Payment Metering Systems, appeared at the 1995 IEEE
Symposium on Security and Privacy. A later paper on this subject is The design of future
pre-payment systems, which discussed how we could apply what we'd learned
to support utility meter interworking in the UK after deregulation.
- On the Security
of Digital Tachographs looks at the techniques used to manipulate the
tachographs that are used in Europe to police truck and bus drivers' hours. It
tried (with some success) to predict how the introduction of smartcard-based
digital tachographs throughout Europe from the year 2000 would affect fraud and
tampering. This work was done for the Department of Transport.
- How to
Cheat at the Lottery is a paper reporting a novel and, I hope, entertaining
experiment in software requirements engineering. The lessons it teaches have
the potential to cut the cost of developing safety critical and security
critical software, and also to reduce the likelihood that specification errors
will lead to disastrous failures.
- The Grenade
Timer describes a novel way to protect low-cost processors against
denial-of-service attacks, by limiting the number of processing cycles an
application program can consume.
- The Millennium
Bug - Reasons Not to Panic describes our experience in coping with the bug
at Cambridge University and elsewhere. This paper correctly predicted that the
bug wouldn't bite very hard. (Journalists were not interested, despite a major
press
release by the University.)
- The Memorability
and Security of Passwords -- Some Empirical Results tackles an old problem
- how do you train users to choose passwords that are easy to remember but hard
to guess? There's a lot of `folk wisdom' on this subject but little that would
pass muster by the standards of applied psychology. So we did a randomized
controlled trial with a few hundred first year science students. While we
confirmed some common beliefs, we debunked some others. This has become one of
the classic papers on security usability.
- Murphy's law,
the fitness of evolving species, and the limits of software reliability
shows how we can apply the techniques of statistical thermodynamics to the
failure modes of any complex logical system that evolves under testing. It
provides a common mathematical model for the reliability growth of complex
computer systems and for biological evolution. Its findings are in close
agreement with empirical data. This paper inspired later
work in security economics.
- Security
Policies play a central role in secure systems engineering. They provide a
concise statement of the kind of protection a system is supposed to achieve. A
security policy should be driven by a realistic threat model, and should in
turn be used as the foundation for the design and testing of protection
mechanisms. This article is a security policy tutorial.
- Here is a paper on combining
cryptography with biometrics, which shows that in those applications where
you can benefit from biometrics, you don't need a large central database (as
proposed in the ID card Bill). There are
smarter, more resilient, and less privacy-invasive ways to arrange things.
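The idea behind combining cryptography with biometrics can be sketched with a toy fuzzy commitment in the style of Juels and Wattenberg (far simpler than the iris-code scheme in our paper; the repetition code and bit sizes here are illustrative): a secret is encoded with an error-correcting code and XORed with the biometric template, so a fresh reading that differs in a few bits still recovers the secret, and no raw template need be stored centrally.

```python
import hashlib

def encode(bits):            # repetition-3 error-correcting code
    return [b for b in bits for _ in range(3)]

def decode(bits):            # majority vote within each 3-bit group
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def commit(secret_bits, template_bits):
    """Store only helper data plus a hash of the secret; the biometric
    template itself is never kept anywhere."""
    helper = [c ^ t for c, t in zip(encode(secret_bits), template_bits)]
    check = hashlib.sha256(bytes(secret_bits)).hexdigest()
    return helper, check

def reveal(helper, check, reading_bits):
    """Recover the secret from a fresh, slightly noisy reading."""
    candidate = decode([h ^ r for h, r in zip(helper, reading_bits)])
    ok = hashlib.sha256(bytes(candidate)).hexdigest() == check
    return candidate if ok else None

secret   = [1, 0, 1, 1, 0, 0, 1, 0]
template = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0,
            1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1]   # 24 cover bits for 8 secret bits

helper, check = commit(secret, template)

noisy = template[:]
noisy[2] ^= 1; noisy[17] ^= 1        # two reading errors in different groups
recovered = reveal(helper, check, noisy)   # error correction succeeds

bad = template[:]
bad[0] ^= 1; bad[1] ^= 1             # too many errors in one group
rejected = reveal(helper, check, bad)      # recovery fails safely
```

A real scheme would use a serious code (e.g. concatenated Hadamard/Reed-Solomon for iris data) and a much longer template, but the structure is the same.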
The papers on physical security by Roger Johnston's team
are also definitely worth a look, and there's an old leaked copy of the NSA Security
Manual that you can download (also as LaTeX).
Analysis and design of cryptographic algorithms
Reports
of an attack on the hash function SHA have made Tiger, which Eli Biham and I designed in
1995, a popular choice of cryptographic hash function. I also worked with Eli,
and with Lars Knudsen, to develop Serpent - a candidate
block cipher for the Advanced Encryption
Standard. Serpent won through to the final of the competition and got the
second largest number of votes. Another of my contributions was founding the
series of workshops on Fast Software
Encryption.
Other papers on cryptography and cryptanalysis include the following.
- The Dancing Bear - A
New Way of Composing Ciphers presents a new way to combine crypto
primitives. Previously, to decrypt using (say) any three out of five keys, the
keys all had to be of the same type (such as RSA keys). With my new
construction, you can mix and match - RSA, AES, even one-time pad. The paper
appeared at the 2004 Protocols Workshop; an earlier version came out at the FSE 2004 rump session.
- Two Remarks on
Public Key Cryptology is a note on two ideas I floated at talks I gave in
1997-98, concerning forward-secure signatures and compatible weak keys. The
first of these has inspired later research by others; the second gives a new
attack on public key encryption.
- Two
Practical and Provably Secure Block Ciphers: BEAR and LION shows how to
construct a block cipher from a stream cipher and a hash function. We had
already known how to construct stream ciphers and hash functions from block
ciphers, and hash functions from stream ciphers; so this paper completed the
set of elementary reductions. It also led to the `Dancing Bear' above.
- Tiger - A
Fast New Hash Function defines a new hash function, which we designed
following Hans Dobbertin's attack on MD4. This was designed to run extremely
fast on the new 64-bit processors such as DEC Alpha and IA64, while still
running reasonably quickly on existing hardware such as Intel 80486 and
Pentium (the above link is to the Tiger home page, maintained in Haifa by Eli
Biham; if the network is slow, see my UK mirrors of the Tiger paper, new and old reference
implementations (the change fixes a padding bug) and S-box generation
documents. There are also third-party crypto toolkits supporting Tiger,
such as that from Bouncy Castle).
- Minding your
p's and q's points out a number of things that can go wrong with the choice
of modulus and generator in public key systems based on discrete log. It
elucidated some of the previously classified reasoning behind the design of the
US Digital Signature Algorithm, and appeared at Asiacrypt 96.
- Chameleon -
A New Kind of Stream Cipher shows how to do traitor tracing using symmetric
rather than public-key cryptology. The idea is to turn a stream cipher into one
with reduced key diffusion, but without compromising security. A single
broadcast ciphertext is decrypted to slightly different plaintexts by users
with slightly different keys. This paper appeared at Fast Software Encryption
in Haifa in January 1997.
- Searching
for the Optimum Correlation Attack shows that nonlinear combining functions
used in nonlinear filter generators can react with shifted copies of themselves
in a way that opens up a new and powerful attack on many cipher systems. It
appeared at the second workshop on fast software encryption.
- The Classification of
Hash Functions showed that correlation freedom is strictly stronger than
collision freedom, and that there are many pseudorandomness properties
other than collision freedom which hash functions may need. It appeared at
Cryptography and Coding 93.
- A Faster Attack
on Certain Stream Ciphers shows how to break the multiplex shift register
generator, which is used in satellite TV systems. I found a simple
divide-and-conquer attack on this system in the mid-1980s, a discovery that
got me `hooked' on cryptology. This paper is a refinement of that work.
- On Fibonacci
Keystream Generators appeared at FSE3, and shows how to break `FISH', a
stream cipher proposed by Siemens. It also proposes an improved cipher, `PIKE',
based on the same general mechanisms.
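The BEAR/LION construction above is simple enough to sketch. This follows the three-round LION shape from the paper, with SHA-256 as the hash and a throwaway hash-based keystream standing in for a real stream cipher; the primitives and key sizes are illustrative only.

```python
import hashlib, os

HASHLEN = 32                                  # SHA-256 output length

def H(b):
    return hashlib.sha256(b).digest()

def S(key, data):
    """Stand-in stream cipher: SHA-256 in counter mode (toy keystream)."""
    stream = bytearray()
    i = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return bytes(x ^ y for x, y in zip(data, stream))

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def lion_encrypt(k1, k2, block):
    L, R = block[:HASHLEN], block[HASHLEN:]   # L is one hash output long
    R = S(xor(L, k1), R)                      # round 1: stream cipher keyed by L ^ K1
    L = xor(L, H(R))                          # round 2: hash of R masks L
    R = S(xor(L, k2), R)                      # round 3: stream cipher keyed by L ^ K2
    return L + R

def lion_decrypt(k1, k2, block):
    L, R = block[:HASHLEN], block[HASHLEN:]
    R = S(xor(L, k2), R)                      # undo round 3
    L = xor(L, H(R))                          # undo round 2
    R = S(xor(L, k1), R)                      # undo round 1
    return L + R

k1, k2 = os.urandom(HASHLEN), os.urandom(HASHLEN)
msg = b"A" * HASHLEN + b"the rest of a large block of plaintext..."
ct = lion_encrypt(k1, k2, msg)
pt = lion_decrypt(k1, k2, ct)
```

Decryption simply runs the three rounds in reverse, since each round is its own inverse once the key it depends on is known; this is the unbalanced Feistel structure that lets the block be arbitrarily large.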
Information hiding (including Soft Tempest)
From the mid- to late-1990s, I did a lot of work on information hiding.
- Soft Tempest: Hidden
Data Transmission Using Electromagnetic Emanations must be one of the more
unexpected and newsworthy papers I've published. It is well known that
eavesdroppers can reconstruct video screen content from radio frequency
emanations; previously, such `Tempest attacks' were prevented by shielding,
jammers and so on. Our innovation was a set of techniques that enable the
software on a computer to control the electromagnetic radiation it emanates.
This can be used for both attack and defence. To attack a system, malicious
code can hide stolen information in the machine's Tempest emanations and
optimise them for some combination of reception range, receiver cost and
covertness. To defend a system, a screen driver can display sensitive
information using fonts which minimise the energy of RF emanations. This
technology is now fielded in PGP and elsewhere. You can download Tempest fonts
from here.
- There is a followup
paper on the costs and benefits of Soft Tempest in military environments,
which appeared at NATO's 1999 RTO meeting on infosec, while an earlier version
of our main paper, which received considerable publicity, is
available here.
Finally, there's some software you can use to play your MP3s over the radio here, a press article
here
and information on more recent optical tempest attacks here.
- Hollywood once hoped that copyright-marking systems would help control the
copying of videos, music and computer games. This became high drama when a paper that showed how to break
the DVD/SDMI copyright marking scheme was pulled by its authors from the Information
Hiding 2001 workshop, following legal threats from Hollywood. In fact, the
basic scheme - echo hiding - was among a number that we broke in 1997. The
attack was reported in our paper Attacks on
Copyright Marking Systems, which we published at Info Hiding 1998. We
also wrote Information
Hiding - A Survey, which appeared in Proc IEEE and is a good place to start
if you're new to the field. For the policy aspects, you might read Pam Samuelson. There is much more
about the technology on the web page of my former student Fabien Petitcolas.
- Another novel application of information hiding is the Steganographic File
System. It will give you any file whose name and password you know, but if
you do not know the correct password, you cannot even tell that a file of that
name exists in the system! This is much stronger than conventional multilevel
security, and its main function is to protect users against coercion. Two of
our students implemented SFS for Linux: a paper describing the details is here, while the code
is available here. This
functionality has since appeared in a number of crypto products.
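The core trick admits a compact sketch. This is emphatically not the Linux implementation linked above, just a toy built on assumptions stated in the comments: a disk full of pseudorandom cover blocks, with a file's location and whitening pad both derived from the name and password, so that without the password a stored file is indistinguishable from empty space.

```python
# Toy sketch of the steganographic file system idea: the disk is filled
# with pseudorandom cover data, and a file is XORed into a block whose
# location and whitening pad are both derived from the name and password.
# Without the password, a used block looks just like an unused one.
# (A real SFS handles collisions, replication and multi-block files;
# this sketch ignores all of that.)
import hashlib

BLOCK = 64

class ToyStegoFS:
    def __init__(self, nblocks=256):
        # initialise every block with pseudorandom cover data
        self.disk = [hashlib.sha256(b"cover" + i.to_bytes(4, "big")).digest() * 2
                     for i in range(nblocks)]

    def _key(self, name, password):
        return hashlib.sha256(name.encode() + b"\x00" + password.encode()).digest()

    def _slot(self, key):
        return int.from_bytes(key[:4], "big") % len(self.disk)

    def _pad(self, key):
        return (hashlib.sha256(key + b"pad").digest() * 2)[:BLOCK]

    def store(self, name, password, data):
        assert len(data) <= BLOCK
        key = self._key(name, password)
        plain = data.ljust(BLOCK, b"\x00")
        self.disk[self._slot(key)] = bytes(
            p ^ q for p, q in zip(plain, self._pad(key)))

    def load(self, name, password):
        """Always returns BLOCK bytes; with the wrong password they are
        just as random-looking as an empty slot."""
        key = self._key(name, password)
        return bytes(p ^ q
                     for p, q in zip(self.disk[self._slot(key)], self._pad(key)))

fs = ToyStegoFS()
fs.store("diary", "squeamish", b"see you at dawn")
right = fs.load("diary", "squeamish").rstrip(b"\x00")
wrong = fs.load("diary", "ossifrage")   # indistinguishable from cover data
```

Note how `load` never fails: a wrong password simply yields plausible garbage, which is what gives the coercion resistance.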
- The threat by some governments to ban cryptography has led to a surge of
interest in steganography - the art of hiding messages in other messages. Our
paper On The Limits
of Steganography explores what can and can't be done; it appeared in a
special issue of IEEE JSAC. It developed from an earlier paper, Stretching the
Limits of Steganography, which appeared at the first international workshop
on Information Hiding in 1996. I also started a bibliography
of the subject which is now maintained by Fabien Petitcolas.
- The Newton
Channel settles a conjecture of Simmons by exhibiting a high bandwidth
subliminal channel in the ElGamal signature scheme. It appeared at Info Hiding
96.
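The mechanism at the heart of the Newton channel can be shown with toy numbers. If p - 1 has a small factor m, then anyone can recover k mod m from r = g^k alone by raising r into the order-m subgroup, so a signer who chooses k to encode a hidden message leaks it to everyone. The sketch below shows only the nonce embedding and extraction; the surrounding ElGamal signature computation is omitted, and the parameters are toy-sized.

```python
# Toy sketch of the mechanism behind the Newton channel. With p - 1 = m*q
# and m small, anyone can recover k mod m from r = g^k mod p alone:
# r^q lies in the subgroup of order m, where the discrete log is easy to
# brute-force. A signer who picks k with k mod m equal to a hidden message
# therefore broadcasts that message alongside an otherwise normal ElGamal
# signature. (The signature computation itself is omitted here.)

p, g = 31, 3          # g generates Z_31^*, whose order is 30 = m * q
m, q = 15, 2          # m is the (small) channel modulus

def embed(hidden):
    """Pick a nonce k whose residue mod m is the hidden message."""
    assert 0 <= hidden < m
    k = hidden if hidden != 0 else m      # keep the nonce nonzero
    return k, pow(g, k, p)                # (nonce, the public r = g^k)

def extract(r):
    """Recover k mod m from r alone - no secret key needed."""
    t = pow(r, q, p)                      # lands in the order-m subgroup
    for i in range(m):                    # brute-force the small subgroup
        if pow(g, q * i, p) == t:
            return i

for msg in range(m):
    k, r = embed(msg)
    assert extract(r) == msg              # every resident of the subgroup leaks
```

With realistic parameters m would still be small relative to p, but the arithmetic is identical; the channel's bandwidth is log2(m) bits per signature.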
Security of Medical Information Systems
Medical information security is a subject in which I've worked on and off for
over a decade. It's highly
topical right now: the UK government is building a national database of
medical records, a project which many doctors oppose;
half of all GPs say they won't upload their patients' data. Ministers have given a
guarantee
of patient privacy, but GPs, NGOs and commentators are sceptical. There are radio pieces on the problems
here and here, and comments on broken government promises here.
There is an article with some examples of privacy abuses, and a report that
the Real IRA penetrated the Royal Victoria Hospital in Northern Ireland and
used its electronic medical records to gather information on policemen to
target them and their families for murder. A particularly shocking case was
that of Helen Wilkinson, who needed to organise a debate in Parliament to get ministers to agree
to remove defamatory and untrue information about her from NHS computers. The
minister assured the House that the libels had been removed; months later, they
still had not been.
Civil servants started pushing for online access to everyone's records in 1992
and I got involved in 1995, when I started consulting for the British Medical
Association on the safety and privacy of clinical information systems. Back
then, the police were given access to all drug prescriptions in the UK, after
the government argued that they needed it to catch the occasional doctor who
misprescribed heroin. The police got their data, they didn't catch Harold Shipman, and
no-one was held accountable.
The NHS slogan in 1995 was `a unified electronic patient record, accessible
to all in the NHS'. The slogan has changed several times, but the goal remains
the same. The Health and Social Care
(Community Health and Standards) Act allowed the Government access to all
medical records in the UK, for the purposes of `Health Improvement'. It
removed many of the patient privacy safeguards in previous legislation. In
addition, the new contract
offered to GPs since 2003 moves ownership of family doctor computers to Primary
Care Trusts (that's health authorities, in oldspeak). There was a token
consultation on confidentiality; the Foundation
for Information Policy Research, which I chair, published a response to it (which was of course ignored).
The last time people pointed out that NHS administrators were helping
themselves illegally to confidential personal health information, Parliament
passed some regulations
on patient privacy to legalise those questionable practices that had been
brought to public attention. For example, the regulations compel doctors to
give the government copies of all records relating to infectious disease and
cancer. The regulations were made under an Act
that was rushed through in the shadow of the 2001 election and that gives
ministers broad powers to nationalise personal health information. For the
background to that Act, see an editorial from the
British Medical Journal, a discussion
paper on the problems that the bill could cause for researchers, and an impact
analysis commissioned by the Nuffield Trust. Ministers claimed the records
were needed for cancer registries: yet cancer researchers in many other
countries work with anonymised data (see papers on German cancer registries here and here, and the
website of the Canadian Privacy
Commissioner.) There was contemporary press coverage in the Observer, the New Statesman, and The Register; and
Hansard reports the Parliamentary debate on the original bill in the Commons and the Lords.
In the end, perhaps only a European law challenge can halt the slide toward
surveillance. The regulations appear to breach the Declaration of Helsinki on
ethical principles for medical research, and contravene the Council of Europe
recommendation no R(97)5 on the protection of medical data, to which
Britain is a signatory. There is a list of some more of the problems here, and
a letter we wrote to the BMJ here.
Some relevant papers of my own follow. They are mostly from the 1995-6 period,
when the government last tried to centralise all medical records - and we saw
them off.
-
Security in Clinical Information Systems was published by the British
Medical Association in January 1996. It sets out rules that can be used to
uphold the principle of patient consent independently of the details of
specific systems. It was the medical profession's initial response to the
safety and privacy problems posed by centralised NHS computer systems.
- An
Update on the BMA Security Policy appeared in June 1996 and tells the story
of the struggle between the BMA and the government, including the origins and
development of the BMA security policy and guidelines.
- There are comments made
at NISSC 98 on the healthcare protection profiles being developed by NIST for
the DHHS to use in regulating health information systems privacy. The
protection profiles make a number of mistaken assumptions about the threats to
medical systems and about the kinds of protection mechanisms that are
appropriate.
- Remarks
on the Caldicott Report raises a number of issues about policy as it was
settled in the late 1990s. The Caldicott Committee was set up by the Major
government to kick the medical privacy issue into touch until after the 1997
election. Its members failed to understand that medical records from which the
names have been removed, but where NHS numbers remain, are not really
anonymous - as large numbers of people in the NHS can map names to numbers (and
need to do this in order to do their jobs).
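The re-identification step the Committee missed is a one-line join. The records below are of course invented, but the shape of the attack is exactly this: an extract stripped of names but carrying NHS numbers links instantly against any of the routine lookup tables that NHS staff legitimately hold.

```python
# Why records with names removed but NHS numbers retained are not
# anonymous: a single join against a routine patient index re-identifies
# them. All data here is invented for illustration.

deidentified_extract = [
    {"nhs_no": "485 777 3456", "diagnosis": "depression"},
    {"nhs_no": "943 476 5919", "diagnosis": "hypertension"},
]

# the kind of number-to-name mapping thousands of NHS staff need
# for their day-to-day work
patient_index = {
    "485 777 3456": "A. Patient",
    "943 476 5919": "B. Patient",
}

reidentified = [dict(rec, name=patient_index[rec["nhs_no"]])
                for rec in deidentified_extract]
```

Any stable identifier shared between the datasets does the same job; removing names alone buys essentially nothing.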
- Information
technology in medical practice: safety and privacy lessons from the United
Kingdom provided an overview of the safety and privacy problems we
encountered in UK healthcare computing in the mid-90s for readers of the
Australian Medical Journal.
- The
DeCODE Proposal for an Icelandic Health Database analyses a proposal to
collect all Icelanders' medical records into a single database. I evaluated
this for the Icelandic Medical Association and concluded that the proposed
security wouldn't work. The company running it has since hit financial
problems but the
ethical issues remain, and Iceland's
Supreme Court recently allowed a woman to block access to her father's
records because of the information they may reveal about her. (These issues may
recur in the UK with the proposed Biobank database.) I also wrote an analysis
of security targets prepared under the Common Criteria for the evaluation of
this database. For more, see BMJ
correspondence,
the Icelandic doctors
opposing the database, and an article by Einar
Arnason.
- Clinical
System Security - Interim Guidelines appeared in the British Medical
Journal on 13th January 1996. It advises healthcare professionals on prudent
security measures for clinical data. The most common threat is that private
investigators use false-pretext telephone calls to elicit personal health
information from practice staff.
- A
Security Policy Model for Clinical Information Systems appeared at the 1996
IEEE Symposium on Security and Privacy. It presents the BMA policy model to the
computer security community in a format comparable to policies such as
Bell-LaPadula and Clark-Wilson. It had some influence on later US health
privacy legislation (the Kennedy-Kassebaum Bill, now HIPAA).
- NHS
Wide Networking and Patient Confidentiality appeared in the British Medical
Journal in July 1995 and set out some early objections to the government's
health network proposals.
- Patient
Confidentiality - At Risk from NHS Wide Networking went into somewhat
more detail, particularly on the security policy aspects. It was presented at
Health Care 96.
- Problems
with the NHS Cryptography Strategy points out a number of errors in, and
ethically unacceptable consequences of, a report on
cryptography produced for the Department of Health. These comments formed the
BMA's response to that report.
- I recently wrote a report
for the National Audit Office on the health IT expenditure, strategies and
goals of the UK and a number of other developed countries. This showed that the
NHS National Programme for IT is in many ways an outlier, and high-risk.
- I am one of the authors
of a recent report on
the safety and privacy of children's databases. This report was done for
the UK Information Commissioner; it concluded that government plans to link up
most of the public-sector databases that hold information on children are
misguided. The proposed systems will be both unsafe and illegal. This report
got a
lot of publicity.
Two health IT papers by colleagues deserve special mention. Privacy in clinical
information systems in secondary care describes a hospital system
implementing something close to the BMA security policy (it is described in
more detail in a special issue of the Health
Informatics Journal, v 4 nos 3-4, Dec 1998, which I edited). Second, Protecting Doctors'
Identity in Drug Prescription Analysis describes a system designed to
de-identify prescription data for commercial use; although de-identification
usually does not protect patient privacy very well, there are exceptions, such
as here. This system led to a court case, in which the government tried to stop
its owner promoting it - as it would have competed with their (less
privacy-friendly) offerings. The government lost: the Court of Appeal decided
that personal health information can be used for research without patient
consent, so long as the de-identification is done competently.
A first-class collection of links to papers on the protection of
de-identified data is maintained by the American Statistical
Association. Bill Lowrance wrote a good survey for the US
Department of Health and Human Services of the potential for using
de-identified data to protect patient privacy in medical research, while a report by the
US General Accounting Office shows how de-identified records are handled much
better by Medicare than by the NHS. For information on what's happening in the
German speaking world, see Andreas
von Heydwolff's web site and Gerrit
Bleumer's European project links. Resources on what's happening in the USA
- where medical privacy is a very live issue - include many NGOs: Patient Privacy Rights, EPIC, the Privacy Rights
Clearinghouse, the Medical Privacy Coalition,
the Citizens' Council on Health Care, CPT, the Institute for Health Freedom, and
Georgetown University's health privacy
project (which has a comprehensive survey of US
health privacy laws). Other resources include a report from the US National
Academy of Sciences entitled For the Record: Protecting
Electronic Health Information and a report by
the US Office of Technology Assessment.
Public policy issues
I chair the Foundation for Information Policy
Research, which I helped set up in 1998. This body is concerned with
promoting research and educating the public in such topics as the interaction
between computing and the law, and the social effects of IT. We are not a lobby
group; our enemy is ignorance rather than the government of the day, and one of
our main activities is providing accurate and neutral briefing for politicians
and members of the press. Here's an overview of
the issues as we saw them in 1999; some highlights of our work follow.
- Key Escrow: My first foray into policy, in 1995, was Crypto in Europe -
Markets, Law and Policy which surveyed the uses of cryptography in Europe
and discussed the shortcomings of public policy. In it, I pointed out that law
enforcement communications intelligence was mostly about traffic analysis and
criminal communications security was mostly traffic security. This was
considered heretical at the time but is now well known. The Risks of Key Recovery, Key
Escrow, and Trusted Third-Party Encryption became perhaps the most widely
cited publication on key escrow. It examines the technical risks, costs, and
implications of deploying systems that would satisfy government wishes. It
was originally presented as testimony to the US Senate, and then also to the Trade
and Industry Committee of the UK House of Commons, together with a further
piece I wrote, The Risks and Costs
of UK Escrow Policy.
- The GCHQ
Protocol and its Problems pointed out a number of serious defects in
the protocol
that the British government used to secure its electronic mail, and which it
wanted everyone else to use too. This paper appeared at Eurocrypt 97 and it
replies to GCHQ's response
to an earlier version of
our paper. Our analysis prevented the protocol from being widely adopted. The
Global Trust Register is a book of the fingerprints of the world's most
important public keys. It thus implements a top-level certification authority,
but using paper and ink rather than electronics. If the DTI had pushed through
mandatory licensing of cryptographic services, this book would have been banned
in the UK. At a critical point in the lobbying, it enabled me to visit Culture
Secretary Chris Smith and ask why his government wanted to ban my book. This
got crypto policy referred to Cabinet when otherwise it would have been snuck
through by the civil servants.
-
This work on key escrow and related topics led up to a campaign that FIPR ran
to limit the scope of the Regulation of
Investigatory Powers Act. Originally this would have allowed the police to
obtain, without warrant, a complete history of everyone's web browsing activity
(under the rubric of `communications data'); an amendment FIPR got through the
Lords limited this to the identity of the machines involved in a communication,
rather than the actual web pages.
-
E-Commerce: FIPR also brought together legal and computing
experts to deconstruct the fashionable late-1990s notion that `digital
certificates' would solve all the problems of e-commerce and e-government. The
mandatory reading for anyone inclined to believe PKI salesmen's claims is Electronic
Commerce - Who Carries the Risk of Fraud?. Other work in this thread
includes FIPR's responses to consultations on smartcards, the electronic signature
directive and the ecommerce
bill. Much of what we wrote has direct relevance today - to ID cards, to
Trusted Computing and to debates on liability for ATM fraud and phishing.
- Terrorism: A page with Comments on Terrorism
explains why many of the measures that various people have been trying to sell
since the 11th September attacks are unlikely to work as promised. Much
subsequent policy work has been made harder by assorted salesmen, centralisers,
rent-seekers and chancers talking about terror; I recently testified against police
attempts to increase pre-charge detention to ninety days with the implausible
claim that they needed more time to decrypt seized data. We must constantly push back on the scaremongers.
- Export Control: In 2001-02, FIPR persuaded the Lords to amend
the Export Control
Bill. This bill was designed to give ministers the power to license
intangible exports. It was the result of US lobbying of Tony Blair in 1997;
back then, UK crypto researchers could put source code on our web pages while
our US colleagues weren't allowed to. In its original
form, its provisions were so broad that it would have given ministers the
power of pre-publication review of scientific papers. We defeated
the Government in the House of Lords by 150-108, following a hard campaign
- see press coverage in the BBC,
the New
Scientist, the Guardian
and the Economist, and an article on free
speech I wrote for IEEE Computing. But the best quote I
have is also the earliest. The first book written on cryptology in English, by
Bishop John Wilkins in 1641, remarked that `If all those useful
Inventions that are liable to abuse, should therefore be concealed, there is
not any Art or Science which might be lawfully profest'
-
This issue revived in 2003, when the government tried to use regulations to
win back much of what it had conceded in Parliament. FIPR fought back
and extracted assurances
from Lord Sainsbury about the interpretation of regulations made under the
Act. This may seem technical, but is important for British science and academic
freedom. Without our campaign, much scientific collaboration would have become
technically illegal, leaving scientists open to arbitrary harassment. Much
credit goes to the Conservative frontbencher Doreen
Miller, Liberal Democrat frontbencher Margaret
Sharp, and the then President of the Royal Society Bob May, who
marshalled the crossbenchers in the Lords. We are very grateful to them for
their efforts.
- Trusted Computing was a focus in 2002-03. I wrote a Trusted Computing FAQ
that was very widely read, followed by a study of the
competition policy aspects of this technology. This led inter alia to a symposium organised by the German government which
in turn pushed the Trusted Computing Group into incorporating, admitting small
companies, and issuing implementation guidelines. The next act in this drama
awaits the launch of Windows Vista.
- IP Enforcement: Our top priority in 2003-04 was the EU IPR
enforcement directive, which has been succinctly described as DMCA
on steroids and also criticised by a
number of distinguished lawyers. Our lobbying helped secure some positive
amendments - notably, removing criminal sanctions and legal protection for
devices such as RFID tags. Here are further criticisms of the directive by AEL. This law
was supported by Microsoft (since convicted of anticompetitive behaviour), the
music industry
and the owners of luxury brands such as Yves Saint Laurent, while it was
opposed by phone companies, supermarkets, smaller software firms and the free
software community. The press was sceptical - in Britain,
France and even America. The issue is even linked to a boycott of
Gillette. There is more on my blog.
- In 2004 FIPR brought together NGOs from all over Europe to establish a common position on intellectual
property. We've followed this up from time to time; recently I gave
evidence to the Gowers
Review of IP and
a parliamentary committee on DRM.
- Identity Cards were a clever political ploy; they divided
the Conservatives in 2004-5, setting the authoritarian leader Michael Howard
against the libertarian majority in his shadow cabinet. But they are not good
security engineering. I testified to the Home Affairs committee in 2004 that they
would not work as advertised, and contributed to the LSE
Report that spelled this out in detail. I'd produced numerous previous
pieces in response to government identity consultations, on aspects such as smartcards and PKI. There's more
in my book (ch. 6). Why
are ministers and officials incapable of listening to scientific advice?
- Internet Censorship is a growing problem, and not just in
developing countries. In 1995, I tried to forestall it by inventing the Eternity
Service (a precursor of later file-sharing systems). But despite the
technical difficulties and collateral costs of content filtering, governments
aren't giving up. I recently became a principal investigator for the OpenNet Initiative, which monitors
Internet filtering worldwide.
My pro-bono work also includes sitting on Council, our University's governing
body. I stood for election because I was concerned about the erosion of
academic freedom under the previous administration. See, for example, a truly shocking
speech by Mike Clark at a recent discussion on IPR. Mike tells how our
administration promised a research sponsor that he would submit all his
relevant papers to them for prior review - without even asking him! It was to
prevent abuses like this that we founded the Campaign for Cambridge
Freedoms, whose goal was to defeat a proposal by the former Vice Chancellor
that most of the intellectual property generated by faculty members - from
patents on bright ideas to books written up from lecture notes - would belong
to the university rather than to the person who created them. If this had
passed, Cambridge would have swapped one of the most liberal rules on
intellectual property of any British university, for one of the most oppressive
anywhere. Over almost four years of campaigning we managed to draw many of the
teeth of this proposal.
A recent vote approved a
policy in which academics keep copyright but the University gets 15% of patent
royalties. The policy is however defective in many ways: for example, it allows
the University to do IPR deals without the consent of affected staff and
students. The authorities have undertaken to introduce amendments.
Finally, here is my PGP
key. If I revoke this key, I will always be willing to explain why I have
done so provided that the giving of such an explanation is lawful. (For
more, see FIPR.)
Wiley has finally
agreed to let me put my book online! You can download it here.
Security engineering is about building systems to remain dependable in
the face of malice, error or mischance. As a discipline, it focuses on
the tools, processes and methods needed to design, implement and test
complete systems, and to adapt existing systems as their environment
evolves. My book has become the standard textbook and reference since
it was published in 2001.
Security engineering is not just concerned with `infrastructure'
matters such as firewalls and PKI. It's also about specific
applications, such as banking and medical record-keeping, and about
embedded systems such as automatic teller machines and burglar alarms.
It's usually done badly: it often takes several attempts to get a
design right. It is also hard to learn: although there were good books
on a number of the component technologies, such as cryptography and
operating systems security, there was little about how to use them
effectively, and even less about how to make them work together. It's
hardly surprising that most systems fail not because the mechanisms
are weak, but because they are used wrongly.
My book was an attempt to help the working engineer to do better. As
well as the basic science, it contains details of many typical
applications - and a lot of case histories of how their protection
mechanisms failed. It contains a fair amount of new material, as well
as accounts of a number of technologies which aren't well described in
the accessible literature. Writing it was also pivotal in founding the
now-flourishing field of information
security economics: I realised that the critical narrative had to
do with incentives and organisation at least as often as with ciphers
and access control. This led me to spend a fair proportion of my
research time since on the interface between security and the social
sciences.
More ...
Contact details
I don't execute programs sent by strangers without good reason. So I don't
read attachments in formats such as Word, unless by prior arrangement. I also
discard html-only emails, as most of them are spam; and emails asking for
`summer research positions' or `internships', which we don't do.
If you're contacting me about coming to Cambridge to do a PhD,
please read the relevant web
pages first.