

1 Thesis outline
After this introduction, Chapter 2 deals with a traditional view of traceability, reflecting the understanding of the issues that were being considered up to about 1999. It closely follows the classic document on the subject [32], created by the UK ISP industry in 1997-1999 under the auspices of the London Internet Exchange (LINX), although it goes well beyond that document in detailing exactly how the process hangs together.

Chapter 3 presents a far more contemporary examination of the failures of traceability (the presence of anonymity) which result from the processes described in Chapter 2 either being absent, or not working in the way that everyone always assumed. This is an important contribution to understanding how much traceability currently exists in cyberspace. Although many parts of this particular jigsaw have been lying around for some time, this is the first time that they have been collected together and an examination made of the common themes they exhibit.

In Chapter 4, an entirely new method is described that can be used to achieve anonymity on an Ethernet by running a very precise denial-of-service attack against another node, deliberately colliding with its transmissions. I show that this means a user can borrow the machine-level identity of a co-worker in a hard-to-detect manner. I also identify a previously undescribed problem with personal firewalls, whereby a system may entirely fail to object when this type of identity theft takes place. It may now be easier to become anonymous by sitting at your own desk than by travelling to the far side of the world.

Traceability, in the policy arena inhabited by governments and regulators, has become synonymous with the making and retaining of logs of user activity.
In the European Union, with its Data Protection regime, this has created a tension with the Data Protection Principles, which insist that, since personal data is involved, logging can only be performed for a business purpose and that the logs must be destroyed or anonymised as soon as they are no longer needed. In Chapter 5, I describe a new way of processing email server logs to automatically detect the sending of spam. Apart from the significant advantages to ISPs in being able to detect this behaviour and deal with it promptly, this will be good news for the policy makers, because it provides a compelling business reason for creating logs of email activity; although they may be less happy to learn that the processing is so effective that there is little reason to retain the logs for more than a few days.

Staying with the theme of spam, and the difficulty of using traceability to locate the spammers, in Chapter 6 I present a detailed analysis of a well-known proposal for dealing with the email spam problem by economic means. It is often suggested that spam has become so prevalent because it is free to send, and that the solution is to introduce an artificial cost by requiring all email to carry a proof-of-work demonstrating that a small computational puzzle has been solved. Only genuine senders, it is argued, would bother to solve the puzzles, and hence spam would decrease. The scheme can be operated anonymously, since the decision to accept an email depends on the presence of the puzzle solution and not upon who sent it. The payment is universal and self-evident, so there can be no defaulting and hence no need to validate the sender within a complex identification infrastructure, or to trace the source of email. In fact, the scheme is so appealing that everyone simply assumed it would work, without ever calculating just how complex the puzzle should be. It turns out that making the puzzle complex enough to dissuade the spammers would also make it infeasible for a significant proportion of legitimate senders to maintain their current levels of email activity. Proof-of-work is therefore not an elegant fix for spam. At best it can provide one facet of a technical fix, and the other facets will involve all manner of accountability and traceability; I therefore expect such fixes to be too complex, too costly and too inconvenient to roll out in the near future, if ever.

Finally, in Chapter 7, I provide a detailed analysis of another seemingly elegant technical scheme. In this case it is the BT CleanFeed system, which aims to prevent access to indecent images of children that have been located by the Internet Watch Foundation (IWF) but are hosted abroad, where the local law enforcement authorities may not promptly remove such content. I give a detailed account of the many ways in which the system might be avoided by users and by content providers, and then outline possible countermeasures for BT and the IWF. I also show how users can exploit the system as an oracle, to create lists of blocked sites which they would not otherwise have known about. I view the underlying problem here as being the failure of traceability to deliver the results that would be required for blocking to be effective.

At the end of the thesis there is an annotated bibliography and, since the overwhelming majority of work in this field is available online, URLs are provided that link to the material that has been cited. I have also provided a glossary for those not familiar with the various acronyms and other obscure terms of art which are, necessarily, scattered throughout the text.
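The proof-of-work mechanism discussed in Chapter 6 can be illustrated with a short hashcash-style sketch. This is a minimal illustration only: the message format, the function names and the 12-bit difficulty are assumptions chosen so the example runs quickly, not the parameters of any deployed proposal. The idea is simply that the sender must find a counter whose hash, combined with the message, has a prescribed number of leading zero bits; the recipient verifies with a single hash.

```python
import hashlib
import itertools

def solve_puzzle(message: str, bits: int) -> int:
    """Find a counter such that SHA-1(message:counter) has `bits` leading zero bits.

    Expected cost for the sender: about 2**bits hash evaluations.
    """
    target = 1 << (160 - bits)  # SHA-1 digests are 160 bits long
    for counter in itertools.count():
        digest = hashlib.sha1(f"{message}:{counter}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return counter

def verify(message: str, counter: int, bits: int) -> bool:
    """Check a claimed solution: one hash evaluation for the recipient."""
    digest = hashlib.sha1(f"{message}:{counter}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (160 - bits))

# Sender pays ~2**12 hashes to produce the stamp; recipient verifies cheaply.
stamp = solve_puzzle("to=alice@example.com", 12)
assert verify("to=alice@example.com", stamp, 12)
```

The asymmetry in Chapter 6's economic argument is visible here: the sender's cost grows as 2**bits while verification stays constant, so the only tuning knob is the difficulty, and a difficulty high enough to hurt bulk spammers also burdens legitimate high-volume senders.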
