
Chapter 23: System Evaluation and Assurance

If it’s provably secure, it probably isn’t.

—LARS KNUDSEN

I think any time you expose vulnerabilities it’s a good thing.

—U.S. ATTORNEY GENERAL JANET RENO [642]

23.1 Introduction

I’ve covered a lot of material in this book, some of it quite difficult. But I’ve left the hardest topics to the last. These are the questions of assurance, whether the system will work, and evaluation, how you convince other people of this.

Fundamentally, assurance comes down to the question of whether capable, motivated people have beaten up on the system enough. But how do you define enough? And how do you define the system? How do you deal with people who protect the wrong thing, because their model of the requirements is out of date or plain wrong? And how do you allow for human failures? Many systems can be operated just fine by alert, experienced professionals, but are unfit for purpose because they’re too tricky for ordinary folk to use or are intolerant of error.

But if assurance is hard, evaluation is even harder. It’s about how you convince your boss, your clients—and, in extremis, a jury—that the system is indeed fit for purpose; that it does indeed work (or that it did work at some particular time in the past). The reason that evaluation is both necessary and hard is that, often, one principal carries the cost of protection while another carries the risk of failure. This creates an obvious tension, and third-party evaluation schemes such as the Common Criteria are marketed as a means of making it more transparent.


23.2 Assurance

A working definition of assurance could be “our estimate of the likelihood that a system will not fail in some particular way.” This estimate can be based on a number of factors, such as the process used to develop the system; the identity of the person or team who developed it; particular technical assessments, such as the use of formal methods or the deliberate introduction of a number of bugs to see how many of them are caught by the testing team; and experience—which ultimately depends on having a model of how reliability grows (or decays) over time as a system is subjected to testing, use, and maintenance.

23.2.1 Perverse Economic Incentives

A good starting point for the discussion of assurance is to look at the various principals’ motives. As a preliminary, let’s consider the things for which we may need assurance:

• Functionality is important and often neglected. It’s all too common to end up protecting the wrong things or protecting the right things in the wrong way. Recall from Chapter 8, for example, how the use of the Bell-LaPadula model in the healthcare environment caused more problems than it solved.

• Strength of mechanisms has been much in the news, thanks to U.S. export controls on crypto. Many products, such as DVD, were shipped with 40-bit keys and were thus intrinsically vulnerable (a back-of-envelope sketch follows this list). Strength of mechanisms is independent of functionality, but can interact with it. For example, in Chapter 14, I remarked how the difficulty of preventing probing attacks on smartcards led the industry to protect other, relatively unimportant things such as the secrecy of chip masks.

• Implementation is the traditional focus of assurance. This involves whether, given the agreed functionality and strength of mechanisms, the product has been implemented correctly. As we’ve seen, most real-life technical security failures are due to programming bugs—stack overflow vulnerabilities, race conditions, and the like. Finding and fixing them absorbs most of the effort of the assurance community.

• Usability is the missing factor—one might even say the spectre at the feast. Perhaps the majority of system-level (as opposed to purely technical) failures have a large human interface component. It is very common for secure system designers to tie up the technical aspects of protection tightly, without stopping to consider human frailty. There are some notable exceptions. The bookkeeping systems described in Chapter 9 are designed to cope with user error; and the security printing technologies discussed in Chapter 12 are often optimized to make it easier for untrained and careless people to spot forgeries. But usability concerns developers as well as users. A developer usability issue, mentioned in Chapter 4, is that the access controls provided with commodity operating systems often aren’t used, as it’s so much simpler to make code run with administrator privilege.
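The 40-bit point is just arithmetic, and worth making concrete. Here is a back-of-envelope sketch in Python; the search rate is an assumption picked for illustration, not a measured figure.

    # Rough cost of brute-forcing a 40-bit key, assuming (hypothetically)
    # a search rate of one billion key trials per second.
    keys = 2 ** 40                  # about 1.1e12 candidate keys
    rate = 1e9                      # assumed trials per second
    minutes = keys / rate / 60
    print(f"exhaustive search: about {minutes:.0f} minutes")  # ~18 minutes

At the same assumed rate, a 128-bit keyspace would take on the order of 10^22 years, which is one reason strong mechanisms cost little more once they’re permitted.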


These four factors are largely independent, and the system builder has to choose an appropriate combination of them to aim at. A personal computer user, for example, might want high usability, medium assurance (because high would be expensive, and we can live with the odd virus), high strength of mechanisms (they don’t cost much more), and simple functionality (as usability is more important). But the market doesn’t deliver this, and a moment’s thought will indicate why.

Commercial platform vendors go for rich functionality (rapid product versioning prevents the market being commoditized, and complementary vendors that grab too much market share can be undermined), low strength of mechanisms (except for cryptography where the escrow debate has led vendors to regard strong crypto as an essential marketing feature), low implementation assurance (so the military-grade crypto is easily defeated by Trojan horses), and low usability (application programmers matter much more than customers, as they enhance network externalities).

In Chapter 22, I described why this won’t change any time soon. The strategy of “ship it Tuesday and get it right by version 3” isn’t a personal moral defect of Bill Gates, as some of his critics allege, but is dictated by the huge first-mover advantages inherent in the economics of networks. And mechanisms that compelled application developers to use operating system access controls would alienate them, raising the risk that they might write their code for competitors’ platforms. Thus, the current insecurity of commercial systems is perfectly rational from the economists’ viewpoint, however undesirable from the users’.

Government agencies’ ideals are also frustrated by economics. Their dream is to be able to buy commercial off-the-shelf products, replace a small number of components (such as by removing commercial crypto and plugging in Fortezza cards in its place), and end up with something they can use with existing defense networks. That is, they want Bell-LaPadula functionality (never mind that it fails to support mechanisms some of the vendors’ other customers need) and high implementation assurance. There is little concern with usability, as a trainable and disciplined workforce is assumed (however wrongly), and low strength of crypto is preferred so as to limit the benefits that potential enemies can gain from otherwise high-assurance systems being on the market. This wish list is unrealistic given not just the cost of high assurance (which I’ll discuss shortly), but also the primacy of time-to-market, the requirement to appease the developer community, and the need for frequent product versioning to prevent the commoditization of markets. Also, larger networks usually swamp smaller ones; so a million government computer users can’t expect to impose their will on 100 million users of Microsoft Office.

The dialogue between user advocates, platform vendors, and government is probably condemned to remain a dialogue of the deaf. But that doesn’t mean there’s nothing more of interest to say on assurance.

23.2.2 Project Assurance

Assurance is a process very much like the development of code or documents. Just as you will have bugs in your code and in your specification, you will also have bugs in your test procedures. So assurance can be done as a one-off project or be the subject of continuous evolution. An example of the latter is given by the huge databases of known computer viruses that anti-virus software vendors accumulate over the years to do regression-testing of their products. Assurance can also involve a combination, as when a step in an evolutionary development is managed using project techniques and is tested as a feature before being integrated and subjected to system-level regression tests. Here, you also have to find ways of building feature tests into your regression test suite.
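By way of illustration, here is a minimal sketch in Python of a feature test kept on as a regression test. The function and its bug are hypothetical, invented for the example: suppose normalize_path() once let “..” sequences escape the document root, and the fix shipped with a test that now runs in every regression pass so the bug cannot silently return.

    import posixpath
    import unittest

    def normalize_path(path: str) -> str:
        """Collapse '.' and '..' so access checks see the real target."""
        normalized = posixpath.normpath(path)
        if normalized.startswith(".."):
            raise ValueError("path escapes the document root")
        return normalized

    class RegressionTests(unittest.TestCase):
        def test_dotdot_escape_rejected(self):
            # Regression test for the (hypothetical) traversal bug.
            with self.assertRaises(ValueError):
                normalize_path("../etc/passwd")

        def test_plain_path_unchanged(self):
            self.assertEqual(normalize_path("docs/index.html"),
                             "docs/index.html")

    if __name__ == "__main__":
        unittest.main()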

Nonetheless, it’s helpful to look first at the project issues, then at the evolutionary issues.

23.2.2.1 Security Testing

In practice, security testing usually comes down to reading the product documentation, reviewing the code, then performing a number of tests. (This is known as white-box testing, as opposed to black-box testing, for which the tester has the product but not the design documents or source code.) The process is:

1. First look for any obvious flaws, the definition of which will depend on the tester’s experience.

2. Then look for common flaws, such as stack-overwriting vulnerabilities.

3. Then work down a list of less common flaws, such as those described in the various chapters of this book.
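Step 2 is the one most easily mechanized. The sketch below shows the black-box flavour of it in Python: a crude mutation fuzzer that hammers a parser with corrupted inputs and reports any crash that isn’t a clean, expected error. Everything here is hypothetical; parse_record() stands in for the code under test, and carries a planted flaw (it trusts its length byte) so the fuzzer has something to find.

    import random

    def parse_record(data: bytes) -> dict:
        """Hypothetical record parser: [length][payload][checksum]."""
        if len(data) < 2:
            raise ValueError("record too short")  # clean, expected error
        length = data[0]
        payload = data[1:1 + length]
        checksum = data[1 + length]  # IndexError if length byte overstates
        return {"payload": payload, "checksum": checksum}

    def mutate(seed: bytes) -> bytes:
        """Corrupt a valid sample: flip a few bytes, sometimes truncate."""
        data = bytearray(seed)
        for _ in range(random.randint(1, 8)):
            if data:
                data[random.randrange(len(data))] = random.randrange(256)
        if random.random() < 0.2:
            data = data[:random.randrange(len(data) + 1)]
        return bytes(data)

    def fuzz(seed: bytes, iterations: int = 10000) -> None:
        for i in range(iterations):
            sample = mutate(seed)
            try:
                parse_record(sample)
            except ValueError:
                pass                  # handled gracefully; not interesting
            except Exception as exc:  # anything else is a potential flaw
                print(f"iteration {i}: {type(exc).__name__} on {sample!r}")
                return

    if __name__ == "__main__":
        fuzz(b"\x04ABCDx")            # a well-formed seed record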

The process is usually structured by the requirements of a particular evaluation environment. For example, it might be necessary to show that each of a list of control objectives was assured by at least one protection mechanism; in some industries, such as bank inspection, there are more or less established checklists (see, for example, [72]).
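The control-objective requirement lends itself to the same mechanical treatment. A minimal sketch, with objectives and mechanisms invented for illustration rather than drawn from any real checklist: map each objective to the mechanisms claimed to assure it, and flag any objective left uncovered.

    # Every control objective must be assured by at least one mechanism.
    coverage = {
        "dual control over payments": ["two-person authorization"],
        "audit trail integrity": ["append-only log", "daily reconciliation"],
        "session confidentiality": [],  # a gap the evaluation should flag
    }

    uncovered = [obj for obj, mechs in coverage.items() if not mechs]
    if uncovered:
        print("Objectives with no protection mechanism:", uncovered)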

23.2.2.2 Formal Methods

In Chapter 2, I gave an example of a formal method: the BAN logic that can be used to verify certain properties of cryptographic protocols. The working engineer’s take on formal methods may be that they’re widely taught in universities, but not used anywhere in the real world. This isn’t quite true in the security business. There are problems—such as in designing crypto protocols—where intuition is often inadequate and where formal verification can be helpful. Military purchasers go further, and require the use of formal methods as a condition of higher levels of evaluation under the Orange Book and the Common Criteria. I’ll discuss this further below. For now, it’s enough to say that this restricts high evaluation levels to relatively small and simple products, such as line encryption devices and operating systems for primitive computers such as smartcards.

Even so, formal methods aren’t infallible. Proofs can have errors, too; and often the wrong thing gets proved [673]. The quote by Knudsen at the head of this chapter refers to the large number of breaks of cryptographic algorithms or protocols that had previously been proven secure. These breaks generally occur because one of the proof’s assumptions is unrealistic, or has become so over time.
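For readers who haven’t met the notation, the flavour of the BAN logic can be conveyed by one representative inference, the message-meaning rule for shared keys: if P believes that K is a good key for communicating with Q, and P sees X encrypted under K, then P is entitled to believe that Q once said X. In the standard notation (a sketch, rendered in LaTeX):

    \[
      \frac{P \mid\equiv P \stackrel{K}{\leftrightarrow} Q
            \qquad P \triangleleft \{X\}_K}
           {P \mid\equiv Q \mid\sim X}
    \]

A verification then chains such rules from the protocol’s stated assumptions to its goals; as just noted, it is usually those assumptions, not the chaining, that turn out to be wrong.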

23.2.2.3 Quis Custodiet?

Just as mistakes can be made by theorem provers and by testers, so they can also be made by people who draw up checklists of things for the testers to test (and by the security textbook writers from whose works the checklist writers draw). This is the old problem of quis custodiet ipsos custodes, as the Romans more succinctly put it: who shall watch the watchmen?

There are a number of things one can do, few of which are likely to appeal to the organization whose goal is a declaration that a product is free of faults. The obvious one is fault injection, whereby a number of errors are deliberately introduced into the code at random. If there are 100 such errors, and the tester finds 70 of them, plus a further 70 that weren’t deliberately introduced, then once the 30 remaining deliberate errors are removed, you can expect that there are 30 bugs left that you don’t know about. (This assumes that the unknown errors are distributed the same as the known ones; reality will almost always be worse than this [133].)

Even in the absence of deliberate bug insertion, a rough estimate can be obtained by looking at which bugs are found by which testers. For example, I had Chapter 7 of this book reviewed by a fairly large number of people, as I took a draft of it to a conference on the topic. Given the bugs they found, and the number of people who reviewed the other chapters, I’d estimate that there are maybe three dozen errors of substance left in the book. The sample sizes aren’t large enough in this case to justify more than a guess, but where they are large enough, we can use statistical techniques, which I’ll describe shortly.
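The arithmetic behind the fault-injection estimate is a capture-recapture (“Lincoln index”) calculation, and is easy to check. A minimal sketch in Python, under the stated assumption that seeded and natural bugs are equally likely to be found:

    def remaining_bugs(seeded: int, seeded_found: int,
                       natural_found: int) -> float:
        """Estimate natural bugs still unfound once seeded ones are removed."""
        detection_rate = seeded_found / seeded            # 70 / 100 = 0.7
        estimated_total = natural_found / detection_rate  # 70 / 0.7 = 100
        return estimated_total - natural_found            # 100 - 70 = 30

    print(remaining_bugs(seeded=100, seeded_found=70, natural_found=70))  # 30.0

With the chapter’s numbers it returns 30.0; the warning in [133] is that the equal-detectability assumption usually flatters you.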

Another factor is the rate at which new attacks are discovered. In the university system, we train graduate students by letting them attack stuff; new vulnerabilities and exploits end up in research papers, which bring fame and, ultimately, promotion. The mechanics in government agencies and corporate labs are slightly different, but the overall effect is the same: a large group of capable, motivated people look for new exploits. Academics usually publish, government scientists usually don’t, and corporate researchers sometimes do. So you need some means of adding new procedures to your test suite as fresh ideas come along, and you need to bear in mind that the suite will never be complete.

Finally, we get feedback from the rate at which instances of known bugs are discovered in products once they’re fielded. This also provides valuable input for reliability growth models.

23.2.3 Process Assurance

In recent years, less emphasis has come to be placed on assurance measures focused on the product, such as testing, and more on process measures, such as who developed the system. As anyone with experience of system development knows, some programmers produce code with an order of magnitude fewer bugs than others. Also, some organizations produce much better quality code than others. This is the subject of much attention in the industry.

Some of the differences between high-quality and low-quality development teams are amenable to direct management intervention. Perhaps the most notable is whether people are responsible for correcting their own bugs. In the 1980s, many organizations


