NSA did a deal with Britain and Sweden to introduce the Clipper chip. I heard this from a US source late last year. The other European countries apparently turned them down flat. Confirmation came last month when a journalist who'd heard of the deal from UK sources asked me to comment. I said that classified designs are unusable in evidence in British courts, and so it was a crock and I would advise clients not to touch it. There still hasn't been a public announcement though.

I take it you've followed the GSM/A5 story - I posted an implementation to sci.crypt and uk.telecom. There has been little feedback so far. One of the things that does emerge, however, is that GSM phones have odd surveillance characteristics. We think that if you buy a phone in the UK, then GCHQ could follow you around in Germany even if the German government didn't want you to. The only bit of GSM security which seems fairly well designed is the part which prevents billing fraud.

So if Clipper is introduced in the UK, it might give more security than GSM. But this doesn't mean it will be a commercial success. Most people in security greatly overestimate the amount of interest which the real world has in the subject, and the market for security products is a lot smaller than many businessmen have thought. The whole industry lives 70% off government subsidy and 20% off the banks' paranoia; the other 10% of genuine demand is scattered over a whole lot of applications, such as eftpos systems, prepayment electricity tokens, burglar alarms, pay-TV, authenticating document and video images, and software licensing; here the requirements tend to have more to do with integrity than confidentiality, and so open design is a must.

I enclose a paper which has been accepted for ESORICS this year, and which goes into the subject of evidence a bit more.

Regards,

Ross Anderson, at Cambridge University

\documentstyle[a4,11pt]{article}
\parskip 7pt plus 2pt minus 2pt
\newtheorem{principle}{Principle}

\begin{document}

\begin{center}
{\Large \bf Liability and Computer Security: Nine Principles}

\vspace{5ex}

Ross J Anderson\\
Cambridge University Computer Laboratory\\
Email: {\tt rja14@cl.cam.ac.uk}
\end{center}

\vspace{3ex}

\begin{abstract}
Many authors have proposed that security priorities should be set by risk analysis. However, reality is subtly different: many (if not most) commercial computer security systems are at least as much about shedding liability as about minimising risk. Banks use computer security mechanisms to transfer liability to their customers; companies use them to transfer liability to their insurers, or (via the public prosecutor) to the taxpayer; and they are also used within governments and companies to shift the blame to other departments (``we did everything that GCHQ/the internal auditors told us to''). We derive nine principles which might help designers avoid the most common pitfalls.
\end{abstract}

\section{Introduction}

In the conventional model of technological progress, there is a smooth progression from research through development and engineering to a product.
After this is fielded, the experience gained from its use provides feedback to the research team, and helps drive the next generation of products:

\begin{center}
{\sc Research $\rightarrow$ Development $\rightarrow$ Engineering $\rightarrow$ Product\\}
\begin{picture}(260,10)(0,0)
\thinlines
\put(260,10){\line(0,-1){10}}
\put(260,0){\line(-1,0){260}}
\put(0,0){\vector(0,1){10}}
\end{picture}
\end{center}

This cycle is well known, and typically takes about ten years. However the product's failure modes may not be immediately apparent, and may even be deliberately concealed; in this case it may be several years before litigation comes into the cycle. This was what happened with the asbestos industry, and many other examples could be given.

\begin{center}
{\sc Research $\rightarrow$ Development $\rightarrow$ Engineering $\rightarrow$ Product $\rightarrow$ Litigation\\}
\begin{picture}(320,10)(0,0)
\thinlines
\put(320,10){\line(0,-1){10}}
\put(320,0){\line(-1,0){320}}
\put(0,0){\vector(0,1){10}}
\end{picture}
\end{center}

Now many computer security systems and products are designed to achieve some particular legal result. Digital signatures, for example, are often recommended on the grounds that they are the only way in which an electronic document can in the long term be made acceptable to the courts. It may therefore be of interest that some of the first court cases involving cryptographic evidence have recently been decided, and in this paper we try to distil some of the practical wisdom which can be gleaned from them.

\section{Using Cryptography in Evidence}

Over the last two years, we have advised in a number of cases involving disputed withdrawals from ATMs. These now include five criminal and three civil cases in Britain, two civil cases in Norway, and one civil and one criminal case in the USA. All these cases had a common theme of reliance on claims concerning cryptography and computer security; in many cases the bank involved said that since its PINs were generated and verified in secure cryptographic hardware, they could not be known to any member of its staff, and thus any disputed withdrawals must therefore be the customer's fault.

However, these cases have shown that such sweeping claims do not work, and in the process have undermined some of the assumptions made by commercial computer security designers for the past fifteen years. At the engineering level, they provided us with the first detailed threat model for commercial computer security systems; they showed that almost all frauds are due to blunders in application design, implementation and operation [A1]. The main threat is not the cleverness of the attacker, but the stupidity of the system builder. At the technical level, we should be much more concerned with robustness [A2], and we have shown how robustness properties can be successfully incorporated into fielded systems in [A3].

However, there is another lesson to be learned from the ``phantom withdrawal'' cases, which will be our concern here. This is that many security systems are really about liability rather than risk; and failure to understand this has led to many computer security systems being essentially useless.

We will first look at evidence; here it is well established that a defendant has the right to examine every link in the chain.

\begin{itemize}

\item One of the first cases was R v Hendy at Plymouth Crown Court.
One of Norma Hendy's colleagues had a phantom withdrawal from her bank account, and as the staff at this company used to take turns going to the cash machine for each other, the victim's PIN was well known. Of the many suspects, Norma was arrested and charged for no good reason other than that the victim's purse had been in her car all day (even though this fact was widely known and the car was unlocked). She denied the charge vigorously; and the bank said in its evidence that the alleged withdrawal could not possibly have been made except with the card and PIN issued to the victim. This was untrue, as both theft by bank staff using extra cards, and card forgery by outsiders, had been known to affect this bank's customers [A1]. We therefore demanded disclosure of the bank's security manuals, audit reports and so on; the bank refused, and so Norma was acquitted.

\item Almost exactly the same happened in the case R v De Mott at Great Yarmouth. Philip De Mott was a taxi driver, who was accused of stealing \pounds 50 from a colleague after she had a phantom withdrawal. His employers did not believe that he could be guilty, and applied for his bail terms to allow him to keep working for them. Again, the bank claimed that its systems were infallible; again, when the evidence was demanded, they backed down and the case collapsed.

\end{itemize}

Given that, even on the banks' own admission, ATM systems have an error rate of 1 in 34,000 [A2], a country like Britain with $10^9$ ATM transactions a year will have 30,000 phantom withdrawals and other miscellaneous malfunctions. If 10,000 of these are noticed by the victims, and 1,000 referred to the police, then even given the police tendency to `file and forget' small matters, it is not surprising that there are maybe a dozen wrongful prosecutions each year.

Thankfully, there now exists a solid defence. This is to demand that the Crown Prosecution Service provide a full set of the bank's security and quality documentation, including security policies and standards, crypto key management procedures and logs, audit and insurance inspectors' reports, test and bug reports, ATM balancing records and logs, and details of all customer complaints in the last seven years. The UK courts have so far upheld the rights of both criminal defendants [RS] and civil plaintiffs [MB] to this material, despite outraged protest from the banks.

Of course, this defence works whether or not the defendant is actually guilty, and the organised crime squad at Scotland Yard has expressed concern that the inability of banks to support computer records could seriously hinder police operations. In a recent trial in Bristol, two men who were accused of conspiring to defraud a bank by card forgery obtained a plea bargain by threatening to call a banking industry expert to say that the crimes they had planned could not possibly have succeeded [RLN].

The first (and probably most important) lesson from the litigation is therefore this:

\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 1:} Security systems which are to provide evidence must be designed and certified on the assumption that they will be examined in detail by a hostile expert.}}
\end{center}

This should have been obvious to anybody who stopped to think about the matter, yet for many years nobody in the industry (including the author) did so. In fact, many banking sector crypto suppliers also sell equipment to government bodies.
Have their military clients stopped to assess the damage which could be done if a mafioso's lawyers, embroiled in some dispute over an electronic banking transaction, raid the design lab at six in the morning and, armed with a court order, take away all the schematics and source code they can find? Pleading a classification mismatch is no defence - in a recent case, lawyers staged just such a dawn raid against Britain's biggest defence electronics firm, in order to find out how many PCs were running unlicensed software.

\section{Using the Right Threat Model}

Another problem is that many designers fail to realise that most security failures occur at the level of application detail [A2], and instead put most of their effort into cryptographic algorithms and protocols, or into delivery mechanisms such as smartcards.

This is illustrated by current ATM litigation in Norway. Norwegian banks spent millions on issuing all their customers with smartcards, and are now as certain as British banks (at least in public) that no debit can appear on a customer's account without the actual card and PIN issued to the customer being used. Yet a number of phantom withdrawals around the University of Trondheim have cast serious doubt on their position. In these cases, cards were stolen from offices on campus and used in ATMs and shops in the town; among the victims are highly credible witnesses who are quite certain that their PINs could not have been compromised. The banks refused to pay up, and have been backed up by the central bank and the local banking ombudsman; yet the disputed transactions (about which the bank was so certain) violated the card cycle limits. Although only NOK 5000 should have been available from ATMs and NOK 6000 from eftpos, the thief managed somehow to withdraw NOK 18000 (the extra NOK 7000 was refunded without any explanation) [BN].

Although intelligence agencies may have the resources to carry out technical attacks on algorithms or operating systems, most crime is basically opportunist, and most criminals are both unskilled and undercapitalised; most of their opportunities therefore come from the victim's mistakes.

\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 2:} Expect the real problems to come from blunders in the application design and in the way the system is operated.
}}
\end{center}

\section{The Limitations of Legal Process}

Even if we have a robust system with a well designed and thoroughly tested application, we are still not home and dry; and conversely, if we suffer as a result of an insecure application built by someone else, we cannot rely on prevailing against them in court.

This is illustrated by the one case `won' recently by the banking industry, in which one of our local police constables was prosecuted for attempting to obtain money by deception after he complained about six phantom withdrawals on his bank account. Here, it came out during the trial that the bank's system had been implemented and managed in a rather ramshackle way, which is probably not untypical of the small data processing departments which service most medium sized commercial firms.

\begin{itemize}

\item The bank had no security management or quality assurance function.
The software development methodology was `code-and-fix', and the production code was changed as often as twice a week.

\item No external assessment, whether by auditors or insurance inspectors, was produced; the manager who gave technical evidence was the same man who had originally designed and written the system twenty years before, and still managed it. He claimed that bugs could not cause disputed transactions, as his system was written in assembler, and thus all bugs caused abends and were thus detected. He was not aware of the existence of TCSEC or ITSEC; but nonetheless claimed that as ACF2 was used to control access, it was not possible for any systems programmer to get hold of the encryption keys which were embedded in application code.

\item The disputed transactions were never properly investigated; the technical staff had just looked at the mainframe logs and not found anything which seemed wrong (and even this was only done once the trial was underway, under pressure from defence lawyers). In fact, there were another 150-200 transactions under dispute with other clients, none of which had been investigated.

\end{itemize}

It was widely felt to be shocking that, even after all this came to light, Munden was still convicted [E]; one may hope that the conviction is overturned on appeal. The Munden case does however highlight not just our second principle, that many problems are likely to be found in the application, but a fact that (although well known to lawyers) is often ignored by the security community:

\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 3:} Judgments handed down in computer cases are often surprising.
}}
\end{center}

\section{Legislation}

Strange computer judgments have on occasion alarmed lawmakers, and they have tried to rectify matters by legislation. For example, in the famous case of R v Gold \& Schifreen, two hackers, who had played havoc with British Telecom's electronic mail service by sending electronic mail `from' Prince Philip `to' people they didn't like, announcing the award of honours, were charged with having stolen the master password by copying it from another system. They were acquitted, on the grounds that information (unlike material goods) cannot be stolen.

The ensuing panic in parliament led to the Computer Misuse Act. This act makes `hacking' a specific criminal offence, and thus tries to transfer some of the costs of distributed system access control from the system administrator to the Crown Prosecution Service. Whether it actually does anything useful is open to dispute: on the one hand, firms have to take considerable precautions if they want to use it against errant employees [A5] [C1]; and on the other hand, it has led to surprising convictions, such as that of a software writer who used the old established technique of putting a timelock in his code to enforce payment [C2].

Similar laws have been passed in a number of jurisdictions, and similar problems have arisen; in a field where the technology changes quickly, and both judges and lawmakers lag behind the curve, our next principle is inevitable:

\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 4:} Computer security legislation is highly likely to suffer from the law of unintended consequences.
}}
\end{center}

\section{Standards}

Another tack taken by some governments is to try and establish a system of security standards, and indeed there are a number of initiatives in play from various governmental bodies.
Often, these are supposed to give a legal advantage to systems which follow some particular standard. For example, to facilitate CREST (the Bank of England's new share dealing system), the Treasury proposes to amend English law so that the existence of a digital signature on a stock transfer order will create `an equitable interest by way of tenancy in common in the ... securities pending registration' [HMT].

On a more general note, some people are beginning to see a TCSEC C2 evaluation as the touchstone for commercial computer security, and this might lead in time to a situation where someone who had not used a C2 product might be considered negligent, and someone who had used one might hope that the burden of proof had thereby passed to someone else. However, in the Munden case cited above, the bank did indeed use an evaluated product - ACF2 was one of the first products to gain the C2 rating - yet this evaluation was not only irrelevant to the case, but not even known to the bank.

\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 5:} Don't try to solve legal problems with system engineering standards.
}}
\end{center}

A related point is that although the courts often rely on industry practice when determining which of two parties has been negligent, existing computer security standards do not help much here. After all, as noted above, they mostly have to do with operating system level features, while the industry practices themselves tend to be expressed in application detail - precisely where the security problems arise. The legal authority flows from the industrial practice to the application, not the other way around. Understanding this could have saved the banks in the UK and Norway a lot of legal fees, security expenditure and public embarrassment; in traditional banking, the onus is on the bank to show that it made each debit in accordance with the customer's mandate.

\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 6:} Security goals and assumptions should be based on the existing industry practice in the application area, not on general `computer' concepts.
}}
\end{center}

\section{Abuses}

Things become even more problematic when one of the parties to a dispute has used market power, legal intimidation or political influence to shed liability. There are many examples of this:

\begin{enumerate}

\item We recently helped to evaluate the security of an alarm system, which is used to protect bank vaults in over a dozen countries. The vendor had claimed for years that the alarm signalling was encrypted; this is a requirement under the draft CENELEC standards for class 4 risks [B]. On examination, it was found that the few manipulations performed to disguise the data could not be expected to withstand even an amateur attack.

\item Many companies can be involved in providing components of a secure system; as a result, it is often unclear who is to blame. With software products, licence agreements usually include very strong disclaimers, and it may not be practical to sue. Within organisations, it is common that managers implement just enough computer security that they will not carry the blame for any disaster. They will often ask for guidance from the internal audit department, or some other staff function, in order to diffuse the liability for an inadequate security specification.

\item If liability cannot be transferred to the state, to suppliers, to insurers, or to another department, then managers may attempt to transfer it to customers - especially if the business is a monopoly or cartel.
Utilities are notorious for refusing to entertain disputes about billing system errors; and many banking disputes also fall into this category.

\end{enumerate}

\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 7:} Understand how liability is transferred by any system you build or rely on.
}}
\end{center}

\section{Security Goals}

In case the reader is still not convinced that liability is central, we shall compare the background to ATM cases in Britain and the United States.

The British approach is for the banks to claim that their systems are infallible, in that it is not possible for an ATM debit to appear on someone's account unless the card and PIN issued to him had been used in that ATM. People who complain are therefore routinely told that they must be lying, or mistaken, or the victim of fraud by a friend or relative (in which case they must be negligent).

The US is totally different; there, in the landmark court case Judd v Citibank [JC], Dorothy Judd claimed that she had not made a number of ATM withdrawals which Citibank had debited to her account; Citibank claimed that she must have done. The judge ruled that Citibank was wrong in law to claim that its systems were infallible, as this placed `an unmeetable burden of proof' on the plaintiff. Since then, if a US bank customer disputes an electronic debit, the bank must refund the money within 3 days, unless it can prove that the claim is an attempted fraud.

When tackled in private, British bankers claim they have no alternative; if they paid up whenever a customer complained, there would be `an avalanche of fraudulent claims'. US bankers are much more relaxed; their practical experience is that the annual loss due to customer misrepresentation is only about \$15,000 per bank [W]. This will not justify any serious computer security programme; so in areas such as New York where risks are higher, banks just use ATM cameras to resolve disputes.

One might expect that as US banks are liable for fraudulent transactions, they would invest more in security than British banks do. One of the more interesting facts thrown up by the recent ATM cases is that precisely the reverse is the case: almost all UK banks and building societies now use hardware security modules to manage PINs [VSM], while most US banks do not; they just encrypt PINs in software [A1].

Thus we can conclude that the real function of these hardware security modules is due diligence rather than security. British bankers want to be able to point to their security modules when fighting customer claims, while US bankers, who can only get the advertised security benefit from these devices, generally do not see any point in buying them. Given that no-one has yet been able to construct systems which bear hostile examination, it is in fact unclear that these devices added any real value at all.

One of the principles of good protocol engineering is that one should never use encryption without understanding what it is for (keeping a key secret, binding two values together, ...) [AN]. This generalises naturally to the following:

\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 8:} Before setting out to build a computer security system, make sure you understand what its real purpose is (especially if this differs from its advertised purpose).
}}
\end{center}

\section{National Security Interference}

In addition to assuming liability for prosecuting some computer disputes which are deemed to be criminal offences, governments have often tried to rewrite the rules to make life easier for their signals intelligence organisations. For example, the South African government decreed in 1986 that all users of civilian cryptology had to provide copies of their algorithms and keys to the military. Bankers approached the authorities and said that this was a welcome development; managing keys for automatic teller machines was a nuisance, and the military were welcome to the job. Of course, whenever a machine was out of balance, they would be sent the bill. At this the military backed down quickly.

More recently, the NIST public key initiative [C3] proposes that the US government will assume responsibility for certifying all the public keys in use by civilian organisations in that country. They seem to have learned from the South African experience; they propose a statutory legal exemption for key management agencies. It remains to be seen how much trust users will place in a key management system which they will not be able to sue when things go wrong.

\section{Liability and Insurance}

The above sections may have given the reader the impression that managing the liability aspects of computer security systems is just beyond most companies. This does not mean that the problem should be accepted as intractable, but rather that it should be passed to a specialist - the insurer.

As insurers become more aware of the computer related element in their risks, it is likely that they will acquire much more clout in setting security standards. This is already happening at the top end of the market: banks who wish to insure against computer fraud usually need to have their systems inspected by a firm approved by the insurer. The present system could be improved [A4] - in particular the inspections, which focus on operational controls, will have to be broadened to include application reviews. However, this is a detail; certification is bound to spread down to smaller risks, and, under current business conditions, it could economically be introduced for risks of the order of \$250,000.

It is surely only a matter of time before insurance driven computer security standards affect not just businesses and wealthy individuals, but most of us [N1]. Just as my insurance policy may now specify `a five-lever mortice deadlock', so the policy I buy in ten years' time is likely to insist that I use accounting software from an approved product list, and certify that I manage encryption keys and take backups in accordance with the manual, if my practice is to be covered against loss of data and various kinds of crime.

Insurance-based certification will not of course mean hardening systems to military levels, but rather finding one or more levels of assurance at which insurance business can be conducted profitably. The protection must be cheap enough that insurance can still be sold, yet good enough to keep the level of claims under control.

Insurance-based security will bring many other benefits, such as arbitration; any dispute I have with you will be resolved between my insurer and your insurer, as with most motor insurance claims, thus saving the bills (and the follies) of lawyers.
Insurance companies are also better placed to deal with government meddling; they can lobby for offensive legislation to be repealed, or just decline to cover any system whose keys are kept on a government server, unless the government provides a full indemnity.

A liability based approach can also settle a number of intellectual disputes, such as the old question of trust. What is `trust'? At present, we have the US DoD `functional' definition that a trusted component is one which, if it breaks, can compromise system security, and Needham's alternative `organisational' definition [N2] that a trusted component is one such that, if it breaks and my company's system security is compromised as a result, I do not get fired.

From the liability point of view, of course, a component which can be trusted is one such that, if it breaks and compromises my system security, I do not lose an unpredictable amount of money. In other words:

\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 9:} A trusted component or system is one which you can insure.
}}
\end{center}

\vspace{4ex}
\small

\begin{thebibliography}{TCSEC}

\bibitem[A1]{A1} RJ Anderson, ``Why Cryptosystems Fail'', in {\em Proceedings of the 1st ACM Conference on Computer and Communications Security} (Fairfax 1993) pp 215 - 227

\bibitem[A2]{A2} RJ Anderson, ``Why Cryptosystems Fail'', to appear in {\em Communications of the ACM}

\bibitem[A3]{A3} RJ Anderson, ``Making Smartcard Systems Robust'', submitted to {\em Cardis 94}

\bibitem[A4]{A4} RJ Anderson, ``Liability, Trust and Security Standards'', in {\em Proceedings of the 1994 Cambridge Workshop on Security Protocols} (Springer, to appear)

\bibitem[A5]{A5} J Austen, ``Computer Crime: ignorance or apathy?'', in {\em The Computer Bulletin v 5 no 5} (Oct 93) pp 23 - 24

\bibitem[AN]{AN} M Abadi, RM Needham, ``Prudent Engineering Practice for Cryptographic Protocols'', in {\em Proceedings of the 1994 IEEE Symposium on Security and Privacy} (to appear)

\bibitem[B]{B} KM Banks, Kluwer Security Bulletin, 4 Oct 93

\bibitem[BN]{BN} Behne v Den Norske Bank, Bankklagenemnda, Sak nr: 92457/93111

\bibitem[C1]{C1} T Corbitt, ``The Computer Misuse Act'', in {\em Computer Fraud and Security Bulletin} (Feb 94) pp 13 - 17

\bibitem[C2]{C2} A Collins, ``Court decides software time-locks are illegal'', in {\em Computer Weekly} (19 August 93) p 1

\bibitem[C3]{C3} S Chokhani, ``Public Key Infrastructure Study (PKI)'', in {\em Proceedings of the first ISOC Symposium on Network and Distributed System Security} (1994) p 45

\bibitem[DP]{DP} DW Davies and WL Price, {\em `Security for Computer Networks'}, John Wiley and Sons 1984
\bibitem[E]{E} B Ellis, ``Prosecuted for complaint over cash machine'', in {\em The Sunday Times}, 27th March 1994, section 5 page 1

\bibitem[ECMA]{ECMA} European Computer Manufacturers' Association, {\em `Secure Information Processing versus the Concept of Product Evaluation'}, Technical Report 6 (December 1993)

\bibitem[HMT]{HMT} HM Treasury, {\em `CREST - The Legal Issues'}, March 1994

\bibitem[ITSEC]{ITSEC} {\em `Information Technology Security Evaluation Criteria'}, June 1991, EC document COM(90) 314

\bibitem[J]{J} RB Jack (chairman), {\em `Banking services: law and practice report by the Review Committee'}, HMSO, London, 1989

\bibitem[JC]{JC} Dorothy Judd v Citibank, {\em 435 NYS, 2d series}, pp 210 - 212, 107 Misc.2d 526

\bibitem[L]{L} HO Lubbes, ``COMPUSEC: A Personal View'', in {\em Proceedings of Security Applications 93} (IEEE) pp x - xviii

\bibitem[MB]{MB} McConville \& others v Barclays Bank \& others, High Court of Justice Queen's Bench Division 1992 ORB no.812

\bibitem[MM]{MM} CH Meyer and SM Matyas, {\em `Cryptography: A New Dimension in Computer Data Security'}, John Wiley and Sons 1982

\bibitem[N1]{N1} RM Needham, ``Insurance and protection of data'', {\em preprint}

\bibitem[N2]{N2} RM Needham, comment at 1993 Cambridge formal methods workshop

\bibitem[P]{P} WR Price, ``Issues to Consider When Using Evaluated Products to Implement Secure Mission Systems'', in {\em Proceedings of the 15th National Computer Security Conference}, National Institute of Standards and Technology (1992) pp 292 - 299

\bibitem[R]{R} J Rushby, {\em `Formal methods and digital systems validation for airborne systems'}, NASA Contractor Report 4551, NA81-18969 (December 1993)

\bibitem[RLN]{RLN} R v Lock and North, Bristol Crown Court, 1993

\bibitem[RS]{RS} R v Small, Norwich Crown Court, 1994

\bibitem[T]{T} ``Business Code'', in {\em The Banker} (Dec 93) p 69

\bibitem[TCSEC]{TCSEC} {\em `Trusted Computer System Evaluation Criteria'}, US Department of Defense, 5200.28-STD, December 1985

\bibitem[TW]{TW} VP Thompson, FS Wentz, ``A Concept for Certification of an Army MLS Management Information System'', in {\em Proceedings of the 16th National Computer Security Conference, 1993} pp 253 - 25

\bibitem[VSM]{VSM} {\em `VISA Security Module Operations Manual'}, VISA, 1986

\bibitem[W]{W} MA Wright, ``Security Controls in ATM Systems'', in {\em Computer Fraud and Security Bulletin}, November 1991, pp 11 - 14

\end{thebibliography}

\end{document}