Mary Ann Davidson


Mandated Third Party Static Analysis: Bad Public Policy, Bad Security

Tue, 2014-03-11 16:08

Many commercial off-the-shelf (COTS) vendors have recently seen an uptick of interest by their customers in third party static analysis or static analysis of binaries (compiled code). Customers who are insisting upon this in some cases have referenced the (then-)SANS Top Twenty Critical Controls (http://www.sans.org/critical-security-controls/) to support their position, specifically, Critical Control 6, Application Software Security:

"Configuration/Hygiene: Test in-house developed and third-party-procured web and other application software for coding errors and malware insertion, including backdoors, prior to deployment using automated static code analysis software. If source code is not available, these organizations should test compiled code using static binary analysis tools (emphases added). In particular, input validation and output encoding routines of application software should be carefully reviewed and tested."


Recently, the "ownership" of the 20 Critical Controls has passed to the Council on CyberSecurity and the particular provision on third party static analysis has been excised. Oracle provided feedback on the provision (more on which below) and appreciates the responsiveness of the editors of the 20 Critical Controls to our comments.


The argument for third party code analysis is that customers would like to know that they are getting “reasonably defect-free” code in a product. All things being equal, knowing you are getting better quality code than not is a good thing, while noting that there is no defect-free or even security defect-free software – no tool finds all problems and they generally don’t find design defects. Also, a product that is “testably free” of obvious defects may still have significant security flaws – like not having any authentication, access control, or auditing. Nobody is arguing that static analysis isn’t a good thing – that’s why a lot of vendors already do it and don’t need a third party to “attest” to their code (assuming there is a basis for trusting the third party other than their saying “trust us”).


Oracle believes third party static analysis is at best infeasible for organizations with mature security assurance practices and – not to put too fine a point on it – a bad idea. The reasons why it is a bad idea are expanded upon in detail below, and include: 1) worse, not better, security; 2) increased security risk to customers; 3) an increased risk of intellectual property theft; and 4) increased costs for commercial software providers without a commensurate increase in security. Note that this discussion does not address the use of other tools – such as so-called web vulnerability analysis tools – that operate against “as installed” object code. These tools also have challenges (e.g., a high rate of false positives) but do not in general pose the same security threats, risks and high costs that static analysis as conducted by third parties does.


Discussion: Static analysis tools are one of many means by which vendors of software, including commercial off-the-shelf (COTS) software, can find coding defects that may lead to exploitable security vulnerabilities. Many vendors – especially large COTS providers - do static analysis of their own code as part of a robust, secure software development process. In fact, there are many different types of testing that can be done to improve security and reliability of code, to include regression testing (ensuring that changes to code do not break something else, and that code operates correctly after it has been modified), “fuzzing” tools, web application vulnerability tools and more. No one tool finds all issues or is necessarily even suitable for all technologies or all programming languages. Most companies use a multiplicity of tools that they select based on factors such as cost, ease-of-use, what the tools find, how well and how accurately, programming languages the tool understands, and other factors. Note of course that these tools must be used in a greater security assurance context (security training, ethical hacking, threat modeling, etc.), echoing the popular nostrum that security has to be “baked in, not bolted on.” Static analysis and other tools can’t “bake in” security – just find coding errors that may lead to security weaknesses. More to the point, static analysis tools should correctly be categorized as “code analysis tools” rather than “code testing tools,” because they do not automatically produce accurate or actionable results when run and cannot be used, typically, by a junior developer or quality assurance (QA) person.
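
A concrete (and hypothetical – not from any Oracle product) illustration of what these tools actually produce: a C checker will flag the first copy below as a potential buffer overflow, and it may flag the second even though a code-level contract makes it safe. Deciding which report is real is exactly the skilled-human triage work described above.

    #include <string.h>
    #include <stdio.h>

    /* Flagged (correctly): the input length is never checked against
       the 32-byte destination -- the classic overflow pattern. */
    void save_username(const char *input) {
        char name[32];
        strcpy(name, input);
    }

    /* Often flagged too (a false positive here): the copy is safe only
       because every caller validates len upstream -- an invariant the
       tool cannot see, but an experienced developer knows. */
    void copy_field(char *dst, const char *src, size_t len) {
        memcpy(dst, src, len);
    }

    int main(void) {
        char dst[8];
        save_username("alice");     /* fine here; other callers are the worry */
        copy_field(dst, "ok", 3);   /* honors the upstream-validation contract */
        return 0;
    }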


These tools must in general be “tuned” or “trained” or in some cases “programmed” to work against a particular code base, and thus the people using them need to be skilled developers, security analysts or QA experts. Oracle has spent many person-years evaluating the tools we use, and has made a significant commitment to a particular static analysis tool that works the best against much – but not all – of our code base. We have found that results are not typically repeatable from code base to code base even within a company. That is, just because the tool works well on one code base does not mean it will work equally well on another product – another reason to work with a strong vendor who will consider improving the tool to address weaknesses. In short, static analysis tools are not a magic bullet for all security ills, and the odds of a third party being able to do meaningful, accurate and cost-effective static code analysis are slim to none.


1. Third party static analysis is not industry-standard practice.
Despite the marketing claims of the third parties that do this, “third party code review” is not “industry best practice.” As it happens, it is certainly not industry-standard practice for multiple reasons, not the least of which is the lack of validation of the entities and tools used to do such “validation” and the lack of standards to measure efficacy, such as what does the tool find, how well, and how cost effectively? As Juvenal so famously remarked, “Quis custodiet ipsos custodes?” (Who watches the watchmen?) Any third party can claim, and many do, that “we have zero false positives” but there is no way to validate such puffery – and it is puffery. (Sarcasm on: I think any company that does static analysis as a service should agree to have their code analyzed by a competitor. After all, we only have Company X’s say-so that they can find all defects known to mankind with zero false positives, whiten your teeth and get rid of ring-around-the-collar, all with a morning-fresh scent!)


The current International Organization for Standardization (ISO) standard for assurance (which encompasses the validation of secure code development), the international Common Criteria (ISO-15408), is, in fact, retreating from the need for source code access currently required at higher assurance levels (e.g., Evaluation Assurance Level (EAL) 4). While limited vulnerability analysis has been part of the higher assurance evaluations now being deprecated by the U.S. National Information Assurance Partnership (NIAP), static analysis has not been a requirement at commercial assurance levels. Hence, “the current ISO assurance standard” does not include third party static code analysis and thus “third party static analysis” is not standard industry practice. Lastly, “third party code analysis” is clearly not “industry best practice” if for no other reason than that all the major COTS vendors are opposed to it and will not agree to it. We are already analyzing our own code, thanks very much.


(It should be noted that third party systematic manual code review is equally impractical for the code bases of most commercial software. The Oracle database, for example, has multiple millions of lines of code. Manual code review for the scale of code most COTS vendors produce would accomplish little except pad the bank accounts of the consultants doing it without commensurate value provided (or risk reduction) for either the vendor or the customers of the product. Lastly, the nature of commercial development is that the code is continuously in development: the code base literally changes daily. Third party manual code review in these circumstances would accomplish absolutely nothing. It would be like painting a house while it is under construction.)


2. Many vendors already use third party tools to find coding errors that may lead to exploitable security vulnerabilities.
As noted, many large COTS vendors have well-established assurance programs that include the use of a multiplicity of tools to attempt to find not merely defects in their code, but defects that lead to exploitable security vulnerabilities. Since only a vendor can actually fix a product defect in their proprietary code, and generally most vulnerabilities need a “code fix” to eliminate the vulnerability, it makes sense for vendors to run these tools themselves. Many do.


Oracle, for example, has a site license for a COTS static analysis tool and Oracle also produces a static analysis tool in-house (Parfait, which was originally developed by Sun Labs). With Parfait, Oracle has the luxury of enhancing the tool quickly to meet Oracle-specific needs. Oracle has also licensed a web application vulnerability testing tool, and has produced a number of in-house tools that focus on Oracle’s own (proprietary) technologies. It is unlikely that any third party tool can fuzz Oracle PL/SQL as well as Oracle’s own tools, or analyze Oracle’s proprietary SQL networking protocol as well as Oracle’s in-house tools do. The Oracle Ethical Hacking Team (EHT) also develops tools that they use to “hack” Oracle products, some of which are “productized” for use by other development and QA teams. As Oracle runs Oracle Corporation on Oracle products, Oracle has a built-in incentive to write and deliver secure code. (In fact, this is not unusual: many COTS vendors run their own businesses on their own products and are thus highly motivated to build secure products. Third party code testers typically do not build anything that they run their own enterprises on.)
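
For readers unfamiliar with the term, “fuzzing” in its simplest form is just mutating well-formed input and watching the target for crashes or hangs. A generic sketch follows (illustrative only, not Oracle’s tooling; a real PL/SQL or network-protocol fuzzer would be grammar-aware rather than flipping random bytes):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for the code under test: a real fuzzer drives an actual
       parser and watches for crashes, hangs, or sanitizer reports. */
    static int parse_packet(const unsigned char *buf, size_t len) {
        if (len > 4 && buf[0] == 0x01 && buf[3] == len - 4)
            return 1;   /* "valid" packet */
        return 0;
    }

    int main(void) {
        const unsigned char seed[12] =
            {0x01, 0x00, 0x00, 0x08, 'p','a','y','l','o','a','d', 0};
        unsigned char buf[sizeof seed];
        srand(42);
        for (int i = 0; i < 100000; i++) {
            memcpy(buf, seed, sizeof seed);             /* start from valid input */
            buf[rand() % sizeof buf] ^= (unsigned char)(rand() & 0xff);  /* mutate one byte */
            parse_packet(buf, sizeof buf);              /* a crash here = a finding */
        }
        puts("done");
        return 0;
    }

The leverage comes from knowing the input format: a fuzzer seeded with valid PL/SQL or valid protocol packets exercises far deeper code paths than random bytes ever will, which is why in-house tools built around proprietary formats outperform generic third party ones.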


The above tool usage within Oracle is in addition to extensive regression testing of functionality to high levels of code coverage, including regression testing of security functionality. Oracle also uses other third party security tools (many of which are open source) that are vetted and recommended by the Oracle Software Security Assurance (OSSA) team. Additionally, Oracle measures compliance with “use of automated tools” as part of the OSSA program. Compliance against OSSA is reported quarterly to development line-of-business owners as well as executive management (the company president and the CEO). Many vendors have similarly robust assurance programs that include static analysis as one of many means to improve product security.


Several large software vendors have acquired static analysis (or other) code analysis tools. HP, for example, acquired both Fortify and WebInspect, and IBM acquired Ounce Labs. This is indicative both of these vendors’ commitment to “the secure code marketplace” and, one assumes, of secure development within their own organizations. Note that while both vendors have service offerings for the tools, neither is pushing “third party code testing,” which says a lot. Everything, actually.


Note that most vendors will not provide static analysis results to customers for valid business reasons, including ensuring the security of all customers. For example, a vendor who finds a vulnerability may often fix the issue in the version of product that is under development (i.e., the “next product train leaving the station”). Newer versions are more secure (and less costly to maintain since the issue is already fixed and no patch is required). However, most vendors do not - or cannot - fix an issue in all shipping versions of product and certainly not in versions that have been deprecated. Telling customers the specifics of a vulnerability (i.e., by showing them scan results) would put all customers on older, unfixed or deprecated versions at risk.


3. Testing COTS for coding errors and malware insertion, including backdoors, prior to deployment using automated static code analysis software increases costs without a commensurate return on investment (ROI).
The use of static code analysis software is a highly technical endeavor requiring skilled development personnel. There are skill requirements and a necessity for detailed operational knowledge of how the software is built to help eliminate false positives, factors that raise the cost of this form of “testing.” Additionally, static code analysis tools are not the tool of choice for detecting malware or backdoors. (It is, in fact, trivial to come up with a “backdoor” that, if inserted into code, would not be detected by even the best static analysis tools. There was an experiment at Sandia Labs in which a backdoor was inserted into code and code reviewers were told where in the code to look for it. They could not find it – even knowing where to look.)
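
The point about backdoors deserves a concrete illustration. The sketch below is modeled on the 2003 attempt to slip a backdoor into the Linux kernel’s wait4() code (reconstructed for illustration, not verbatim): a single “=” where “==” belongs silently grants root, reads like a routine typo, and the extra parentheses even suppress the compiler’s assignment-in-condition warning.

    #include <stdio.h>

    /* Flag values standing in for the kernel's __WCLONE/__WALL. */
    #define WCLONE 0x80000000u
    #define WALL   0x40000000u

    struct task { unsigned uid; };
    struct task current_task = { 1000 };   /* ordinary, unprivileged user */

    int check_options(unsigned options) {
        int retval = 0;
        /* "=" (assignment), not "==" (comparison): a caller passing this
           odd flag combination silently becomes uid 0 (root).  The
           assignment evaluates to 0, so the branch is never taken and
           the line reads like harmless error handling. */
        if ((options == (WCLONE | WALL)) && (current_task.uid = 0))
            retval = -1;
        return retval;
    }

    int main(void) {
        check_options(WCLONE | WALL);
        printf("uid is now %u\n", current_task.uid);   /* prints 0 */
        return 0;
    }

Two characters, no new functions, no suspicious strings to match on: there is nothing here for a “backdoor scanner” to latch onto, which is why “scan for backdoors” is not a deliverable any tool can honestly promise.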


If the real concern of a customer insisting on a third party code scan is malware and backdoor detection: it won’t work and thus represents an extremely expensive – and useless – distraction.


4. Third party code analysis will diminish overall product security.
It is precisely leading vendors’ experience with static analysis tools that contributes to their unwillingness to have third parties attempt to analyze code – emphasis on “attempt.” None of these tools are “plug and play”: in some cases, it has taken years, not months, to be able to achieve actionable results, even from the best available static analysis tools. These are in fact code analysis tools and must be “tuned” – and in some cases actually “programmed” – to understand code, and must typically be run by an experienced developer (that is, a developer who understands the particular code base being analyzed) for results to be useful and actionable. There are many reasons why static analysis tools either raise many false positives or skip entire bodies of code. For example, because of the way Oracle implements particular functionality (memory management) in the database, static analysis tools that look for buffer overflows either do not work or raise false positives (Oracle writes its own checks to look for those issues).
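
To make the memory-management point concrete, here is a hedged sketch (illustrative only, not Oracle’s actual implementation) of why product-specific allocators defeat generic buffer-overflow checkers: the analyzer sees only pointers into one big block, while the real bounds live in application-level bookkeeping it knows nothing about.

    #include <stdlib.h>
    #include <string.h>

    /* Illustrative arena allocator -- NOT Oracle's implementation.  The
       analyzer cannot see that arena_alloc() hands out exactly n usable
       bytes; it just sees arithmetic on a char pointer. */
    typedef struct { char *base; size_t used, cap; } arena_t;

    char *arena_alloc(arena_t *a, size_t n) {
        if (n > a->cap - a->used) return NULL;   /* app-level bounds check */
        char *p = a->base + a->used;
        a->used += n;
        return p;
    }

    void store(arena_t *a, const char *src, size_t len) {
        char *buf = arena_alloc(a, len);
        if (buf == NULL) return;
        /* Safe by construction (buf has exactly len bytes), but a generic
           checker cannot prove it: it either misses real overflows routed
           through the arena or flags this copy as suspect. */
        memcpy(buf, src, len);
    }

    int main(void) {
        static char block[1024];
        arena_t a = { block, 0, sizeof block };
        store(&a, "hello", 5);
        return 0;
    }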


The rate of false positives from use of a “random” tool run by inexperienced operators – especially on a code base as large as that of most commercial products – would put a vendor in the position of responding to unsubstantiated fear, uncertainty, and doubt (FUD). In a code base of 10,000,000 lines of code, even a false positive rate of one per 1000 lines of code would yield 10,000 “false positives” to chase down. The cost of doing this is prohibitive. (One such tool run against a large Oracle code base generated a false positive for every 3.4 lines of code, or about 160,000 false positives in toto due to the size of the code base.)
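
To make the arithmetic explicit (a back-of-the-envelope sketch; the 30-minutes-per-report triage cost is my assumption, not a figure from any vendor):

    #include <stdio.h>

    int main(void) {
        double loc     = 10000000.0;      /* 10M lines of code            */
        double fp_rate = 1.0 / 1000.0;    /* one false positive per 1,000 */
        double reports = loc * fp_rate;   /* 10,000 reports to chase down */
        double hours   = reports * 0.5;   /* assume 30 minutes to triage  */
        printf("%.0f false positives, ~%.0f hours (~%.1f person-years)\n",
               reports, hours, hours / 2000.0);
        return 0;
    }

Even under these optimistic assumptions, that is roughly two and a half person-years of skilled-developer time spent confirming that nothing was wrong.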


This is why most people using these tools must “tune” them to drown out “noise.” Many vendors have already had this false positive issue with customers running web application vulnerability tools and delivering, in some cases, hundreds of pages of “alarms” in which there were perhaps a half page of actionable issues. The rate of false positives is the single biggest determinant of whether these tools are worth using or an expensive distraction (aka a “rathole”).


No third party firm has to prove that their tool is accurate – especially not if the vendor is forced to use a third party to validate their code – and thus there is little to no incentive to improve their tool. Consultants get paid more the longer they are on site and working. A legislative or “standards” requirement for “third party code analysis” is therefore a license for the third party doing it to print money. Putting it differently, if the use of third party static analysis was accurate and cost effective, why wouldn’t vendors already be doing it? Instead, many vendors use static analysis tools in-house, because they own the code, and are willing to assume the cost of going up the learning curve for a long term benefit to them of reduced defects (and reduced cost of fixing these defects as more vulnerabilities are found earlier in the development cycle).


In short, the use of a third party is the most expensive, least useful attempt at “better code” most vendors could possibly make, and would result in worse security, not better security, as in-house “security boots on the ground” are diverted to working with the third party. It is unreasonable to expect any vendor to in effect tune a third party tool and train the third party on their code – and then have to pay the third party for the privilege of doing it. Third party static analysis represents an unacceptably high opportunity cost caused by the “crowding out effect” of taking scarce security resources and using them on activity of low value to the vendor and to their customers. The only “winner” here is the third party. Ka-chink. Ka-chink.


5. Third party code analysis puts customers at increased risk.
As noted, there is no standard for what third party static analysis tools find, let alone how well and how economically they find it. More problematically, there are no standards for protection of any actual vulnerabilities these tools find. In effect, third party code analysis allows the third party to amass a database of unfixed vulnerabilities in products without any requirements for data protection or any recourse should that information be sold, incorporated into a hacking tool or breached. The mere fact of a third party amassing such sensitive information makes the third party a hacker target. Why attempt to hack products one by one if you can break into a third party’s network and get a listing of defects across multiple products – in the handy “economy size?” The problem is magnified if the “decompiled” source code is stored at the third party: such source code would be an even larger hacker target than the list of vulnerabilities the third party found.


Most vendors have very strict controls not merely on their source code, but on the information about product vulnerabilities that they know about and are triaging and fixing. Oracle Corporation, for example, has stringent security vulnerability handling policies that are promulgated and “scored” as part of Oracle’s software and hardware assurance program. Oracle uses its own secure database technology (row level access control) to enforce “need to know” on security vulnerabilities, information that is considered among the most sensitive information the company has. Security bugs are not published (meaning, they are not generally searchable and readable across the company or accessible by customers). Also, security bug access is stringently limited to those working on a bug fix (and selected others, such as security analysts and the security point of contact (SPOC) for the development area).


One of the reasons Oracle is stringent about limiting access to security vulnerability information is that this information often does leak when “managed” by third parties, even third parties with presumed expertise in secret-keeping. In the past, MI5 circulated information about a non-public Oracle database vulnerability among UK defense and intelligence entities (it should be noted that nobody reported this to Oracle, despite the fact that only Oracle could issue a patch for the issue). Oracle was only notified about the bug by a US commercial company to whom the information had leaked. As the saying goes, two people can’t keep a secret.


There is another risk that has not generally been considered in third party static analysis, and that is the increased interest in cyber-offense. There is evidence that the market for so-called zero-day vulnerabilities is being fueled in part by governments seeking to develop cyber-offense tools. (Stuxnet, for example, allegedly made use of at least four “zero-day” vulnerabilities: that is, vulnerabilities not previously reported to a vendor.) Coupled with the increased interest of military suppliers/system integrators in getting into the “cyber security business,” it is not a stretch to think that at least some third parties getting into the “code analysis” business can and would use that as an opportunity to “sell to both sides” – use legislative fiat or customer pressure to force vendors to consent to static analysis, and then surreptitiously sell the vulnerabilities they found to the highest bidder as zero-days. Who would know?


Governments in particular cannot reasonably simultaneously fuel the market in zero days, complain at how irresponsible their COTS vendors are for not building better code and/or insist on third party static analysis. This is like stoking the fire and then complaining that the room is too hot.


6. Equality of access to vulnerability information protects all customers.
Most vendors do not provide advance information on security vulnerabilities to some customers but not others, or more information about security vulnerabilities to some customers but not others. As noted above, one reason for this is the heightened risk that such information will leak and put the customers “not in the know” at increased risk. Not to mention, all customers believe their secrets are as worthy of protection as any other customer’s: nobody wants to be on the “Last Notified” list.


Thus, third party static analysis is problematic because it may result in violating basic fairness and equality in terms of vulnerability disclosure in the rare instances where these scans actually find exploitable vulnerabilities. The business model for some vendors offering static analysis as a service is to convince the customers of the vendor that the vendor is an evil slug and cannot be trusted, and thus the customer should insist on the third party analyzing the vendors’ code base.


There is an implicit assumption that the vendor will fix vulnerabilities that the third party static analysis finds immediately, or at least, before the customer buys/installs the product. However, the reality is more subtle than that. For one thing, it is almost never the case that a vulnerability exists in one and only one version of product: it may also exist on older versions. Complicating the matter: some issues cannot be “fixed” via a patch to the software but require the vendor to rearchitect functionality. This typically can only be done in so-called major product releases, which may only occur every two to three years. Furthermore, such issues often cannot be fixed on older versions because the scope of change is so drastic it will break dependent applications. Thus, a customer (as well as the third party) has information about a “not-easily-fixed” vulnerability which puts other customers at a disadvantage and at risk to the extent that information may leak.


Therefore, allowing some customers access to the results of a third party code scan in advance of a product release would violate most vendors’ disclosure policies and would actually increase risk to many, many customers – and that increased risk could persist for a long period of time.


7. Third party code analysis sets an unacceptable precedent that risks vendors’ core intellectual property (IP).
COTS vendors maintain very tight control over their proprietary source code because it is core, high-value IP. As such, most COTS vendors will not allow third parties to conduct static analysis against source code (and for purposes of this discussion, this includes static analysis against binaries, which typically violates standard license agreements).


Virtually all companies are aware of the tremendous cost of intellectual property theft: billions of dollars per year, according to published reports. Many nation states, including those that condone if not encourage wholesale intellectual property theft, are now asking for source code access as a condition of selling COTS products into their markets. Most COTS vendors have refused these requests. One can easily imagine that for some nation states, the primary reason to request source code access (or, alternatively, “third party analysis of code”) is for intellectual property theft or economic espionage. Once a government-sanctioned third party has access to source code, so may the government. (Why steal source code if you can get a vendor to gift wrap it and hand it to you under the rubric of “third party code analysis?”)


Another likely reason some governments may insist on source code access (or third party code analysis) is to analyze the code for weaknesses they then exploit for their own national security purposes (e.g., more intellectual property theft). All things being equal, it is easier to find defects in source code than in object code. Refusing to accede to these requests – in addition to, of course, a vendor doing its own code analysis and defect remediation – thus protects all customers. In short, agreeing to any third party code analysis involving source code – either static analysis or static analysis of binaries - would make it very difficult if not impossible for a vendor to refuse any other similar requests for source code access, which would put their core intellectual property at risk. Third party code analysis is a very bad idea because there is no way to “undo” a precedent once it is set.


Summary
Software should have a wide variety of tests performed before it is shipped, and additional security tests (such as penetration tests) should be used against “as-deployed” software. However, the level of testing should be commensurate with the risk, which is both basic risk management and appropriate (scarce) resource management. A typical firm has many software elements, most probably COTS, and to suggest that they all be tested with static analysis tools begs a sanity check. The scope of COTS alone argues against this requirement: COTS products run the gamut from operating systems to databases to middleware, business intelligence and other analytic tools, business applications (accounting, supply chain management, manufacturing) as well as specialized vertical market applications (e.g., clinical trial software), representing a number of programming languages and billions – no, hundreds of billions – of lines of code.


The use of static analysis tools in development to help find and remediate security vulnerabilities is a good assurance practice, albeit a difficult one because of the complexity of software and the difficulty of using these tools. These tools have utility only when they are used by the producer of the software in a cost-effective way geared towards sustained vulnerability reduction over time. The mandated use of third party static analysis to “validate” or “test” code is unsupportable, for reasons of cost (especially opportunity cost), precedent, increased risk to vendors’ IP and increased security risk to customers. The third party static code analysis market is little more than a subterfuge for enabling the zero-day vulnerability market: bad security, at a high cost, and very bad public policy.


Book of the Month
It’s been so long since I blogged, it’s hard to pick out just a few books to recommend. Here are three, and a "freebie":


Hawaiki Rising: Hōkūle’a, Nainoa Thompson and the Hawaiian Renaissance by Sam Low
Among the most amazing tidbits of history are the vast voyages that the Polynesians made to settle (and travel among) Tahiti, Hawai’i and Aotearoa (New Zealand) using navigational methods largely lost to history. (Magellan – meh – he had a compass and sextant.) This book describes the re-creation of Polynesian wayfinding in Hawai’i in the 1970s via the building of a double-hulled Polynesian voyaging canoe, the Hōkūle’a, and how one amazing Hawaiian (Nainoa Thompson) – under the tutelage of one of the last practitioners of wayfinding (Mau Piailug) – made an amazing voyage from Hawai’i to Tahiti using only his knowledge of the stars, the winds, and the currents. (Aside: one of my favorite songs is “Hōkūle’a Hula,” which describes this voyage, and is so nicely performed by Erik Lee.) Note: the Hōkūle’a is currently on a voyage around the world.


The Korean War by Max Hastings
Max Hastings is one of the few historians who I think is truly balanced: he looks at the moral issues of history, weighs them, and presents a fair analysis – not “shove-it-down-your-throat revisionism.” He also makes use of a lot of first-person accounts, which makes history come alive. The Korean War is in so many ways a forgotten war, especially the fact that it literally is a war that never ended. It’s a good lesson of history, as the book makes clear that the US drew down its military so rapidly and drastically after World War II that we were largely (I am trying not to say “completely”) unprepared for Korea. (Moral: there is always another war.)


Code Talker by Chester Nez
Many people now know of the crucial role that members of the Navajo Nation played in the Pacific War: the code they created provided a crucial advantage (and was never broken). This book is a first-person account of one Navajo code talker, from his childhood on the reservation to his training as a Marine and his service in the Pacific Theater. Fascinating.


Securing Oracle Database 12c: A Technical Primer
If you are a DBA or security professional looking for more information on Oracle database security, then you will be interested in this book. Written by members of Oracle's engineering team and the President of the International Oracle User Group (IOUG), Michelle Malcher, the book provides a primer on capabilities such as data redaction, privilege analysis and conditional auditing. If you have Oracle databases in your environment, you will want to add this book to your collection of professional information. Register now for the complimentary eBook and learn from the experts.




I Love Standards…There Are So Many Of Them

Mon, 2013-05-13 16:32

The title is not an original bon mot by me – it’s been said often, by others, and by many with more experience than I have in developing standards.  It is with mixed emotions that I feel compelled to talk about a (generally good and certainly well-intentioned) standards organization: the US National Institute of Standards and Technology (NIST). I should state at the outset that I have a lot of respect for NIST. In the past, I have even urged a Congressional committee (House Science and Technology, if memory serves) to try to allocate more money to NIST for cybersecurity standards work.  I’ve also met a number of people who work at NIST – some of whom have since left NIST and brought their considerable talents to other government agencies, one of whom I ran into recently and mentioned how I still wore a black armband years after he had left NIST because he had done such great work there and I missed working with him. All that said, I’ve seen a few trends at NIST recently that are – of concern.

When in Doubt, Hire a Consultant

I’ve talked in other blog entries about the concern I have that so much of NIST’s outwardly-visible work seems to be done not by NIST but by consultants. I’m not down on consultants for all purposes, mind you – what is having your tires rotated and your oil changed except “using a car consultant?” However, in the area of developing standards or policy guidance it is of concern, especially when, as has been the case recently, the number of consultants working on a NIST publication or draft document is greater than the number of NIST employees contributing to it. There are business reasons, often, to use consultants. But you cannot, should not, and must not “outsource” a core mission, or why are you doing it? This is true in spades for government agencies. Otherwise, there is an entire beltway’s worth of people just aching to tell you about a problem you didn’t know you had, propose a standard (or regulation) for it, write the standard/regulation, interpret it and “certify” that Other Entities meet it. To use a song title, “Nice Work If You Can Get It.”* Some recent consultant-heavy efforts are all over the map, perhaps because there isn’t a NIST employee to say, "you say po-TAY-to, I say po-TAH-to, let's call the whole thing off." ** (Or at least make sure the potato standard is Idaho russet – always a good choice.)

Another explanation – not intentionally sinister but definitely a possibility – is that consultants’ business models are often tied to repeat engagements.  A short, concise, narrowly-tailored and readily understandable standard isn’t going to generate as much business for them as a long, complex and “subject to interpretation – and five people will interpret this six different ways” – document. 

In short: I really don’t like reading a document like NISTIR 7622 (more on which below) where most of the people who developed it are consultants. NIST’s core mission is standards development: NIST needs to own their core mission and not farm it out.

Son of FISMA

I have no personal experience with the Federal Information Security Management Act of 2002 (FISMA) except the amount of complaining I hear about it second hand, which is considerable.  The gist of the complaints is that FISMA asks people to do a lot of stuff that looks earnestly security oriented, not all of which is equally important.

Why should we care? To quote myself (in an obnoxiously self-referential way): “time, money and (qualified security) people are always limited.” That is, the more security degenerates into a list of the 3000 things you Must Do To Appease the Audit Gods, the less real security we will have (really, who keeps track of 3000 Must Dos, much less does them? It sounds like a demented Girl Scout merit badge). And, in fact, the one thing you read about FISMA is that many government agencies aren’t actually compliant because they missed a bunch of FISMA checkboxes. Especially since knowledgeable resources (that is, good security people) are limited, it’s much better to do the important things well than maintain the farce that you can check 3000 boxes, which certainly cannot all be equally important. (It’s not even clear how many of these requirements contribute to actual security as opposed to supporting the No Auditor Left Behind Act.)

If the scuttlebutt I hear is accurate, the only thing that could make FISMA worse is – you guessed it – adding more checkboxes. It is thus with considerable regret that I heard recently that NIST updated NIST Special Publication 800-53 (which NIST has produced as part of its statutory responsibilities under FISMA). The Revision 4 update included more requirements in the area of supply chain risk management and software assurance and trustworthiness. Now why would I, a maven of assurance, object to this? Because a) we already have actual standards around assurance, b) having FISMA-specific requirements means that pretty much every piece of Commercial Off-the-Shelf (COTS) software will have to be designed and built to be FISMA compliant or COTS software/hardware vendors can’t sell into the Federal government, and c) we don’t want a race by other governments to come up with competing standards, to the point where we’re checking not 3000 but 9000 or 12000 boxes and probably can’t come up with a single piece of globally sellable COTS at all, let alone one that meets all 12000 requirements. (Another example is the set of supply chain/assurance requirements in the telecom sector in India that include a) asking for details about country of origin and b) specific contractual terms that buyers anywhere in the supply chain are expected to use. An unintended result is that a vendor will need to a) disclose sensitive supply chain data (which itself may be a trade secret) and b) modify processes around global COTS to sell into one country.)

Some of the new NIST guidance is problematic for any COTS supplier. To provide one example, consider:

“The artifacts generated by these development activities (e.g., functional specifications, high-level/low-level designs, implementation representations [source code and hardware schematics], the results from static/dynamic testing and code analysis (emphasis mine)) can provide important evidence that the information systems (including the components that compose those systems) will be more reliable and trustworthy. Security evidence can also be generated from security testing conducted by independent, accredited, third-party assessment organizations (e.g., Common Criteria Testing Laboratories (emphasis mine), Cryptographic/Security Testing Laboratories, and other assessment activities by government and private sector organizations.)”

For a start, to the extent that components are COTS, such “static testing” is certainly not going to happen by a third party, nor will the results be provided to a customer. Once you allow random customers – especially governments – access to your source code or to static analysis results, you might as well gift wrap your code and send it to a country that engages in industrial espionage, because no vendor, having agreed to this for one government, will ever be able to say no to Nation States That Steal Stuff. (And static analysis results, to the extent some vulnerabilities are not fixed yet, just provide hackers a road map for how and where to break in.) Should vendors do static analysis themselves? Sure, and many do. It’s fair for customers to ask whether this is done, and how a supplier ensures that the worst stuff is fixed before the supplier ships product. But it is worth noting – again – that if these tools were easy to use and relatively error free, everyone would have been at a high level of tool-usage maturity years ago. Using static analysis tools is like learning Classical Greek – very hard, indeed. (OK, Koine Greek isn’t too bad, but Homeric Greek or Linear B, fuhgeddabout it.)

With reference to the Common Criteria (CC), the difficulty now is that vendors have a much harder time doing CC evaluations than in the past because of other forces narrowing CC evaluations into a small set of products that have Protection Profiles (PPs). The result has been and will be for the foreseeable future – fewer evaluated products. The National Information Assurance Partnership (NIAP) – the US evaluation scheme – has ostensibly good reasons for their “narrowed/focused” CC-directions. But it is more than a little ironic that the NIST 800-53 revision should mention CC evaluations as an assurance measure at a time when the pipeline of evaluated products is shrinking, in large part due to the directions taken by another government entity (NIAP). What is industry to make of this apparent contradiction? Besides corporate head scratching, that is.

There are other sections – many other sections – I could comment upon, but one sticks out as worthy of notice:

“Supply chain risk is part of the advanced persistent threat (APT).”

It’s bad enough that “supply chain risk” is such a vague term that it encompasses basically any and all risk of buying from a third party. (Including “buying a crummy product” which is not actually a supply chain-specific risk but a risk of buying any and all products.) Can bad guys try to corrupt the supply chain? Sure. Does that make any and all supply chain risks “part of APT?” Heck, no. We have enough hysteria about supply chain risk and APT without linking them together for Super-Hysteria. 

To sum up, I don’t disagree that customers in some cases – and for some, not all, applications – may wish for higher levels of assurance or have a heightened awareness of cyber-specific supply chain threats (e.g., counterfeiting and deliberate insertion of malware in code). However, incorporation of supply chain provisions and assurance requirements into NIST 800-53 has the unintended effect of making any and all COTS products sold to government agencies – which is pretty much all of them, as far as I know – subject to FISMA.

What if the state of Idaho decided that every piece of software had to attest to the fact that No Actual Moose were harmed during the production of this software and that any moose used in code production all had background checks? What if every other state enumerated specific assurance requirements and specific supply chain risk management practices? What if they conflict with each other, or with the NIST 800-53 requirements? I mean really, why are these specific requirements called out in NIST 800-53 at all? There really aren’t that many ways to build good software.  FISMA as interpreted by NIST 800-53 really, really shouldn’t roll its own.

IT Came from Outer Space – NISTIR 7622

I’ve already opined at length about how bad the NIST Interagency Report (NISTIR) 7622 is. I had 30 pages of comments on the first 80-page draft. The second draft only allowed comments in Excel-spreadsheet form: “Section A.b, change ‘must’ to ‘should,’ for the reason ‘because ‘must’ is impossible’” and so on. This format didn’t allow for wholesale comments such as “it’s unclear what problem this section is trying to solve and represents overreach, fuzzy definition and fuzzier thinking.” NISTIR 7622 was and is so dreadful that an industry association signed a letter that said, in effect, NISTIR 7622 was not salvageable, couldn’t be edited into something that could work, and needed to be scrapped in toto.

I have used NISTIR 7622 multiple times as a negative example: most recently, to an audience of security practitioners as to why they need to be aware of what regulations are coming down the pike and speak up early and often.  I also used it in the context of a (humorous) paper I did at the recent RSA Conference with a colleague, the subject of which was described as “doubtless-well-intentioned legislation/regulation-that-has-seriously-unfortunate-yet-doubtless-unintended-consequences.”  That’s about as tactful as you can get.

Alas, Dracula does rise from the grave,*** because I thought I heard noises at a recent Department of Homeland Security event that NISTIR 7622 was going to move beyond “good advice” and morph into a special publication. (“Run for your lives, store up garlic and don’t go out after dark without a cross!”)  The current version of NISTIR 7622 – after two rounds of edits and heaven knows how many thousands of hours of scrutiny – is still unworkable, overscoped and completely unclear: you have a better chance of reading Linear B**** than understanding this document (and for those who don’t already know, Linear B is not merely “all Greek to me” – it’s actually all Greek to anybody).  Ergo, NISTIR 7622 needs to die the true death: the last thing anyone should do with it is make a special publication out of it. It’s doubling down on dreck. Make it stop. Now. Please.

NIST RFI

The last section is, to be fair, not really about NIST per se. NIST has been tasked, by virtue of a recent White House Executive Order, with developing a framework for improving cybersecurity. As part of that tasking, NIST has published a Request For Information (RFI) seeking industry input on said framework. NIST has also scheduled several meetings to actively draw in thoughts and comments from those outside NIST. As a general rule, and NISTIR 7622 notwithstanding, NIST is very good at eliciting and incorporating feedback from a broad swath of stakeholders. It’s one of their strengths and one of the things I like about them. More importantly, I give major kudos to NIST and its Director Pat Gallagher for forcefully making the point that NIST would not interfere with IT design, development and manufacture, in the speech he gave when he kicked off NIST’s work on the Framework: “the Framework must be technology neutral and it must enable critical infrastructure sectors to benefit from a competitive [technology] market. (…) In other words, we will not be seeking to tell industry how to build your products or how to run your business.”

The RFI responses are posted publicly and are, well, all over the map.  What is concerning to me is the apparent desire of some respondents to have the government tell industry how to run their businesses. More specifically, how to build software, how to manage supply chain risk, and so forth. No, no, and no. (Maybe some of the respondents are consultants lobbying the government to require businesses to hire these consultants to comply with this or that mandate.)

For one thing, “security by design” concepts have already been working their way into development for a number of years: many companies are now staking their reputations on the security of their products and services. Market forces are working. Also, it’s a good time to remind people that more transparency is reasonable – for example, to enable purchasers to make better risk-based acquisition decisions – but when you buy COTS you don’t get to tell the provider how to build it. That’s called “custom code” or “custom development.” Just as I don’t get to walk into <insert name of low-end clothing retailer here> and tell them that I expect my “standard off-the-shelf blue jeans” to be tailored to me specifically ex post facto, made of “organic, local and sustainable cotton” (leaving aside the fact that nobody grows cotton in Idaho), oh, and embroidered with not merely rhinestones but diamonds. The retailer’s response should be “pound sand/good luck with that.” It’s one thing to ask your vendor “tell me what you did to build security into this product” and “tell me how you help mitigate counterfeiting” but something else for a non-manufacturing entity – the government – to dictate exactly how industry should build products and manage risk. Do we really want the government telling industry how to build products? Further, do we really want a US-specific set of requirements for how to build products for a global marketplace? What’s good for the (US) goose is good for the (European/Brazilian/Chinese/Russian/Indian/Korean/name your foreign country) gander.

An illustrative set of published responses to the NIST RFI – and my response to the response – follows:

1. “NIST should likewise recognize that Information Technology (IT) products and services play a critical role in addressing cybersecurity vulnerabilities, and their exclusion from the Framework will leave many critical issues unaddressed.”

Comment: COTS is general purpose software and not built for all threat environments. If I take my regular old longboard and attempt to surf Maverick’s on a 30 foot day and “eat it,” as I surely will, not merely because of my lack of preparation for 30-foot waves but because you need, as every surfer knows, a “rhino chaser” or “elephant gun” board for those conditions, is it the longboard shaper’s fault? Heck, no. No surfboard is designed for all surf conditions; neither is COTS designed for all threat environments. Are we going to insist on products designed for one-size-fits-all threat conditions? If so, we will all, collectively, “wipe out.” (Can’t surf small waves well on a rhino chaser. Can’t walk the board on one, either.)

Nobody agrees on what, precisely, constitutes critical infrastructure. Believe it or not, some governments appear to believe that social media should be part of critical national infrastructure. (Clearly, the World As We Know It will come to an end if I can’t post a picture of my dog Koa on Facebook.) And even if certain critical infrastructure functions – say, power generation – depend on COTS hardware and software, the surest way to weaken their security is to apply an inflexible and country-specific regulatory framework to that COTS hardware and software. We have an existing standard for the evaluation of COTS IT, it’s called the Common Criteria (see below): let’s use it rather than reinvent the digital wheel.  

2. “Software that is purchased or built by critical infrastructure operators should have a reasonable protective measures applied during the software development process.”

Comment: Thus introducing an entirely new and undefined term into the assurance lexicon: “protective measures.” I’ve worked in security – actually, the security of product development – for 20 years and I have no idea what this means. Does it mean that every product should self-defend? I confess, I rather like the idea of applying the Marine Corps ethos – “every Marine a rifleman” – to commercial software. Every product should understand when it is under attack and every product should self-defend. It is a great concept, but we do not, as an industry, know how to do that – yet. Does “protective measures” mean “quality measures”? Does it mean “standard assurance measures”? Nobody knows. And any term that is this nebulous will be interpreted by every reader as Something Different.

3. “Ultimately, <Company X> believes that the public-private establishment of baseline security assurance standards for the ICT industry should cover all key components of the end-to-end lifecycle of ICT products, including R&D, product development, procurement, supply chain, pre-installation product evaluation, and trusted delivery/installation, and post-installation updates and servicing.”

Comment:  I can see the religious wars over tip-of-tree vs. waterfall vs. agile development methodologies. There is no single development methodology, there is no single set of assurance practices that will work for every organization (for goodness’ sake, you can’t even find a single vulnerability analysis tool that works well against all code bases).

Too many in government and industry cannot express concerns or problem statements in simple, declarative sentences, if at all. They don’t, therefore, have any business attempting to standardize how all commercial products are built (what problem will this solve, exactly?). Also, if there is an argument for baseline assurance requirements, it certainly can’t be for everything – or are we arguing that “FindEasyRecipes.com” is critical infrastructure and needs to be built to withstand hostile nation state attacks that attempt to steal your brioche recipe, if not tips on how to get sugar to caramelize at altitude?

 4. “Application of this technique to the Common Criteria for Information Technology Security Evaluation revealed a number of defects in that standard.  The journal Information and Software Technology will soon publish an article describing our technique and some of the defects we found in the Common Criteria.”

Comment: Nobody ever claimed the Common Criteria was perfect. What it does have going for it is a) it’s an ISO standard and b) by virtue of the Common Criteria Recognition Arrangement (CCRA), evaluating once against the Common Criteria gains you recognition in 20-some other countries. Putting it differently, the quickest way to make security much, much worse is to have a Balkanization of assurance requirements. (Taking a horse and jumping through mauve, pink, and yellow hoops doesn’t make the horse any better, but it does enrich the hoop manufacturers, quite nicely.)  In the security realm, doing the same thing four times doesn’t give you four times the security, it reduces security by four times, as limited (skilled) resource goes to doing the same thing four different ways. If we want better security, improve the Common Criteria and, by the way, major IT vendors and the Common Criteria national schemes – which come from each CCRA member country’s information assurance agency, like the NSA in the US – have been hard at work for the last few years applying their considerable security expertise and resources to do just that. Having state-by-state or country-by-country assurance requirements will make security worse – much, much worse.

 5. “…vendor adoption of industry standard security models.  In addition, we also believe that initiatives to motivate vendors to more uniformly adopt vulnerability and log data categorization, reporting and detection automation ecosystems will be a significant step in ensuring security tools can better detect, report and repair security vulnerabilities.”

Comment: There are so many flaws in this, one hardly knows where to start. There are existing vulnerability “scoring” standards – namely, the Common Vulnerability Scoring System (CVSS)***** – though there are some challenges with it; for example, the value of the data compromised should make a difference in the score: a “breach” of Aunt Gertrude’s Whiskey Sauce Recipe is not, ceteris paribus, as dire as a breach of Personally Identifiable Information (PII), if for no other reason than that a company can incur large fines for the latter, far exceeding Aunt Gertrude’s displeasure at the former. Even if she cuts you out of her will.
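
For concreteness, here is the published CVSS v2 base equation, worked for the worst-case vector (AV:N/AC:L/Au:N/C:C/I:C/A:C). Note that nothing in the base equation weighs what the compromised data is actually worth – that is deferred to the separately scored environmental metrics, which far fewer people bother with.

    #include <stdio.h>
    #include <math.h>

    /* CVSS v2 base score, per the published equation, for the classic
       worst-case vector AV:N/AC:L/Au:N/C:C/I:C/A:C. */
    int main(void) {
        double C = 0.660, I = 0.660, A = 0.660;   /* complete C/I/A impact  */
        double AV = 1.0, AC = 0.71, Au = 0.704;   /* network, low, no auth  */
        double impact  = 10.41 * (1 - (1 - C) * (1 - I) * (1 - A));
        double exploit = 20 * AV * AC * Au;
        double f       = (impact == 0) ? 0 : 1.176;
        double base    = ((0.6 * impact) + (0.4 * exploit) - 1.5) * f;
        /* Whether the target holds Aunt Gertrude's recipe or a PII
           database, the inputs -- and the score -- are identical. */
        printf("base score: %.1f\n", round(base * 10) / 10);   /* 10.0 */
        return 0;
    }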

Also, there is work going on to standardize descriptions of product vulnerabilities (that is, the format and type). However, not all vendors release the exact same amount of information when they announce security vulnerabilities, and they should not be required to. Oracle believes that it is not necessary to release either exploit code or the exact type of vulnerability (e.g., buffer overflow, cross-site request forgery (CSRF) or the like) because this information does not help customers decide whether to apply a patch or not: it merely enables hackers to break into things faster. Standardize how you refer to particular advisory bulletin elements and make them machine readable? Sure. Insist on dictating business practices (e.g., how much information to release)? Heck, no. That’s between a vendor and its customer base. Lastly, security tools cannot, in general, “repair” security vulnerabilities – typically, only patch application can do that.

6. “All owners and operators of critical infrastructure face risk from the supply chain. Purchasing hardware and software potentially introduce security risk into the organization. Creation of a voluntary vendor certification program may help drive innovation and better security in the components that are essential to delivery of critical infrastructure services.”

Comment:  The insanity of the following comment astounds: “Purchasing hardware and software potentially introduce security risk into the organization.” News flash: all business involves “risk.” Not doing something is a risk. So, what else is new? Actually, attempting to build everything yourself also involves risk – not being able to find qualified people, the cost (and ability) to maintain a home-grown solution, and so forth. To quote myself again: “Only God created something from nothing: everyone else has a supply chain.”****** In short, everyone purchases something from outside their own organization. Making all purchases into A Supply Chain Risk as opposed to, say, a normal business risk is silly and counterproductive.  It also makes it far less likely that specific, targeted supply chain threats can be addressed at all if “buying something – anything – is a risk” is the threat definition.

At this point, I think I’ve said enough. Maybe too much. Again, I appreciate NIST as an organization and, as I said above, the direction they have set for the Framework (not to $%*& with IT innovation) is really to their credit. I believe NIST needs to in-source more of their standards/policy development, because it is their core mission and because consultants have every incentive to create perpetual work for themselves (and none whatsoever to be precise and focused). NIST should adopt a less-is-more mantra vis-à-vis security. It is better to ask organizations to do a few critical things well than to ask them to do absolutely everything – with not enough resource (which is a collective industry problem and not one likely to be solved any time soon). Lastly, we need to remember that we are a proud nation of innovators. Governments generally don’t do well when they tell industry how to do their core mission – innovate – and, absent a truly compelling public policy argument for so doing, they shouldn’t try.

*”Nice Work If You Can Get It,” lyrics by Ira Gershwin, music  by George Gershwin. Don’t you just love Gershwin?

** “Let’s Call The Whole Thing Off.” Another gem by George and Ira Gershwin.

*** Which reminds me – I really hate the expression “there are no silver bullets.” Of course there are silver bullets. How many vampires and werewolves do you see wandering around?

****Speaking of which, I just finished a fascinating if short read: The Man Who Deciphered Linear B: The Story of Michael Ventris.

*****CVSS is undergoing revision.

****** If you believe the account in Genesis, that is.

I Love Standards…There Are So Many Of Them

Mon, 2013-05-13 16:32

The title is not an original bon mot by me – it’s been said often, by others, and by many with more experience than I have in developing standards.  It is with mixed emotions that I feel compelled to talk about a (generally good and certainly well-intentioned) standards organization: the US National Institute of Standards and Technology (NIST). I should state at the outset that I have a lot of respect for NIST. In the past, I have even urged a Congressional committee (House Science and Technology, if memory serves) to try to allocate more money to NIST for cybersecurity standards work.  I’ve also met a number of people who work at NIST – some of whom have since left NIST and brought their considerable talents to other government agencies, one of whom I ran into recently and mentioned how I still wore a black armband years after he had left NIST because he had done such great work there and I missed working with him. All that said, I’ve seen a few trends at NIST recently that are – of concern.


When in Doubt, Hire a Consultant


I’ve talked in other blog entries about the concern I have that so much of NIST’s outwardly-visible work seems to be done not by NIST but by consultants. I’m not down on consultants for all purposes, mind you – what is having your tires rotated and your oil changed except “using a car consultant?” However, in the area of developing standards or policy guidance it is of concern, especially when, as has been the case recently, the number of consultants working on a NIST publication or draft document is greater than the number of NIST employees contributing to it. There are business reasons, often, to use consultants. But you cannot, should not, and must not “outsource” a core mission, or why are you doing it? This is true in spades for government agencies. Otherwise, there is an entire beltway’s worth of people just aching to tell you about a problem you didn’t know you had, propose a standard (or regulation) for it, write the standard/regulation, interpret it and “certify” that Other Entities meet it. To use a song title, “Nice Work If You Can Get It.”* Some recent consultant-heavy efforts are all over the map, perhaps because there isn’t a NIST employee to say, "you say po-TAY-to, I say po-TAH-to, let's call the whole thing off." ** (Or at least make sure the potato standard is Idaho russet – always a good choice.)


Another explanation – not intentionally sinister but definitely a possibility – is that consultants’ business models are often tied to repeat engagements.  A short, concise, narrowly-tailored and readily understandable standard isn’t going to generate as much business for them as a long, complex and “subject to interpretation – and five people will interpret this six different ways” – document. 


In short: I really don’t like reading a document like NISTIR 7622 (more on which below) where most of the people who developed it are consultants. NIST’s core mission is standards development: NIST needs to own their core mission and not farm it out.


Son of FISMA


I have no personal experience with the Federal Information Security Management Act of 2002 (FISMA) except the amount of complaining I hear about it second hand, which is considerable.  The gist of the complaints is that FISMA asks people to do a lot of stuff that looks earnestly security oriented, not all of which is equally important.


Why should we care? To quote myself (in an obnoxiously self-referential way): “time, money and (qualified security) people are always limited.” That is, the more security degenerates into a list of the 3000 things you Must Do To Appease the Audit Gods, the less real security we will have (really, who keeps track of 3000 Must Dos, much less does them? It sounds like a demented Girl Scout merit badge). And, in fact, the one thing you read about FISMA is that many government agencies aren’t actually compliant because they missed a bunch of FISMA checkboxes. Especially since knowledgeable resources (that is, good security people) are limited, it’s much better to do the important things well than maintain the farce that you can check 3000 boxes, which certainly cannot all be equally important. (It’s not even clear how many of these requirements contribute to actual security as opposed to supporting the No Auditor Left Behind Act.)


If the scuttlebutt I hear is accurate, the only thing that could make FISMA worse is – you guessed it – adding more checkboxes. It is thus with considerable regret that I heard recently that NIST updated NIST Special Publication 800-53 (which NIST has produced as part of its statutory responsibilities under FISMA). The Revision 4 update included more requirements in the areas of supply chain risk management and software assurance and trustworthiness. Now why would I, a maven of assurance, object to this? Because (a) we already have actual standards around assurance, (b) having FISMA-specific requirements means that pretty much every piece of Commercial Off-the-Shelf (COTS) software will have to be designed and built to be FISMA compliant or COTS software/hardware vendors can’t sell into the Federal government, and (c) we don’t want a race by other governments to come up with competing standards, to the point where we’re checking not 3000 but 9000 or 12000 boxes and probably can’t come up with a single piece of COTS globally, let alone one that meets all 12000 requirements. (Another example is the set of supply chain/assurance requirements in the telecom sector in India, which include (a) asking for details about country of origin and (b) specifying contractual terms that buyers anywhere in the supply chain are expected to use. An unintended result is that a vendor will need to disclose sensitive supply chain data (which itself may be a trade secret) and modify processes around global COTS to sell into one country.)


Some of the new NIST guidance is problematic for any COTS supplier. To provide one example, consider:


“The artifacts generated by these development activities (e.g., functional specifications, high-level/low-level designs, implementation representations [source code and hardware schematics], the results from static/dynamic testing and code analysis (emphasis mine)) can provide important evidence that the information systems (including the components that compose those systems) will be more reliable and trustworthy. Security evidence can also be generated from security testing conducted by independent, accredited, third-party assessment organizations (e.g., Common Criteria Testing Laboratories (emphasis mine), Cryptographic/Security Testing Laboratories, and other assessment activities by government and private sector organizations.)”


For a start, to the extent that components are COTS, such “static testing” is certainly not going to happen by a third party nor will the results be provided to a customer. Once you allow random customers – especially governments – access to your source code or to static analysis results, you might as well gift wrap your code and send it to a country that engages in industrial espionage, because no vendor, having agreed to this for one government, will ever be able to say no to Nation States That Steal Stuff.  (And static analysis results, to the extent some vulnerabilities are not fixed yet, just provide hackers a road map for how and where to break in.) Should vendors do static analysis themselves? Sure, and many do. It’s fair for customers to ask whether this is done, and how a supplier ensures that the worst stuff is fixed before the supplier ships product. But it is worth noting – again – that if these tools were easy to use and relatively error free, everyone would have reached a high level of tool usage maturity years ago. Using static analysis tools is like learning Classical Greek – very hard, indeed. (OK, Koine Greek isn’t too bad but Homeric Greek or Linear B, fuhgeddabout it.)
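To make the “code analysis, not code testing” point concrete, here is a deliberately tiny sketch (my own toy illustration in Python, not any commercial tool) of how static analysis works: it pattern-matches the structure of source code without ever executing it. This toy checker flags SQL queries assembled with string formatting; note that it cannot tell whether the flagged value is actually attacker-controlled, which is exactly why the output needs expert triage rather than a junior QA person.

```python
import ast

# Toy source to analyze; a real tool would walk an entire code base.
SOURCE = '''
def find_user(cursor, name):
    cursor.execute("SELECT * FROM users WHERE name = '%s'" % name)
'''

class SqlFormatCheck(ast.NodeVisitor):
    """Flag .execute() calls whose query is built by % string formatting."""
    def visit_Call(self, node):
        if (isinstance(node.func, ast.Attribute) and node.func.attr == "execute"
                and node.args and isinstance(node.args[0], ast.BinOp)
                and isinstance(node.args[0].op, ast.Mod)):
            print(f"line {node.lineno}: query built by string formatting - "
                  "possible SQL injection (a human must confirm)")
        self.generic_visit(node)

SqlFormatCheck().visit(ast.parse(SOURCE))
```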


With reference to the Common Criteria (CC), the difficulty now is that vendors have a much harder time doing CC evaluations than in the past because of other forces narrowing CC evaluations into a small set of products that have Protection Profiles (PPs). The result has been – and will be for the foreseeable future – fewer evaluated products. The National Information Assurance Partnership (NIAP) – the US evaluation scheme – has ostensibly good reasons for its “narrowed/focused” CC directions. But it is more than a little ironic that the NIST 800-53 revision should mention CC evaluations as an assurance measure at a time when the pipeline of evaluated products is shrinking, in large part due to the directions taken by another government entity (NIAP). What is industry to make of this apparent contradiction? Besides corporate head scratching, that is.


There are other – many other – sections I could comment upon, but one sticks out as worthy of notice:


“Supply chain risk is part of the advanced persistent threat (APT).”


It’s bad enough that “supply chain risk” is such a vague term that it encompasses basically any and all risk of buying from a third party. (Including “buying a crummy product” which is not actually a supply chain-specific risk but a risk of buying any and all products.) Can bad guys try to corrupt the supply chain? Sure. Does that make any and all supply chain risks “part of APT?” Heck, no. We have enough hysteria about supply chain risk and APT without linking them together for Super-Hysteria. 


To sum up, I don’t disagree that customers in some cases – and for some, not all, applications – may want higher levels of assurance or have a heightened awareness of cyber-specific supply chain threats (e.g., counterfeiting and deliberate insertion of malware in code). However, incorporating supply chain provisions and assurance requirements into NIST 800-53 has the unintended effect of subjecting any and all COTS products sold to government agencies – which is all of them, as far as I know – to FISMA.


What if the state of Idaho decided that every piece of software had to attest to the fact that No Actual Moose were harmed during the production of this software and that any moose used in code production all had background checks? What if every other state enumerated specific assurance requirements and specific supply chain risk management practices? What if they conflict with each other, or with the NIST 800-53 requirements? I mean really, why are these specific requirements called out in NIST 800-53 at all? There really aren’t that many ways to build good software.  FISMA as interpreted by NIST 800-53 really, really shouldn’t roll its own.


IT Came from Outer Space – NISTIR 7622


I’ve already opined at length about how bad the NIST Interagency Report (NISTIR) 7622 is. I had 30 pages of comments on the first 80-page draft. The second draft only allowed comments of the Excel Spreadsheet form:  “Section A.b, change ‘must’ to ‘should,’ for the reason ‘because ‘must’ is impossible’” and so on. This format didn’t allow for wholesale comments such as “it’s unclear what problem this section is trying to solve and represents overreach, fuzzy definition and fuzzier thinking.”  NISTIR 7622 was and is so dreadful that an industry association signed a letter that said, in effect, NISTIR 7622 was not salvageable, couldn’t be edited to something that could work, and needed to be scrapped in toto.


I have used NISTIR 7622 multiple times as a negative example: most recently, to an audience of security practitioners as to why they need to be aware of what regulations are coming down the pike and speak up early and often.  I also used it in the context of a (humorous) paper I did at the recent RSA Conference with a colleague, the subject of which was described as “doubtless-well-intentioned legislation/regulation-that-has-seriously-unfortunate-yet-doubtless-unintended-consequences.”  That’s about as tactful as you can get.


Alas, Dracula does rise from the grave,*** because I thought I heard noises at a recent Department of Homeland Security event that NISTIR 7622 was going to move beyond “good advice” and morph into a special publication. (“Run for your lives, store up garlic and don’t go out after dark without a cross!”)  The current version of NISTIR 7622 – after two rounds of edits and heaven knows how many thousands of hours of scrutiny – is still unworkable, overscoped and completely unclear: you have a better chance of reading Linear B**** than understanding this document (and for those who don’t already know, Linear B is not merely “all Greek to me” – it’s actually all Greek to anybody).  Ergo, NISTIR 7622 needs to die the true death: the last thing anyone should do with it is make a special publication out of it. It’s doubling down on dreck. Make it stop. Now. Please.


NIST RFI


The last section is, to be fair, not really about NIST per se. NIST has been tasked, by virtue of a recent White House Executive Order, with developing a framework for improving cybersecurity. As part of that tasking, NIST has published a Request For Information (RFI) seeking industry input on said framework. NIST has also scheduled several meetings to actively draw in thoughts and comments from those outside NIST. As a general rule, and NISTIR 7622 notwithstanding, NIST is very good at eliciting and incorporating feedback from a broad swath of stakeholders. It’s one of their strengths and one of the things I like about them. More importantly, I give major kudos to NIST and its Director Pat Gallagher for forcefully making the point that NIST would not interfere with IT design, development and manufacture, in the speech he gave when he kicked off NIST’s work on the Framework: “the Framework must be technology neutral and it must enable critical infrastructure sectors to benefit from a competitive [technology] market. (…) In other words, we will not be seeking to tell industry how to build your products or how to run your business.”


The RFI responses are posted publicly and are, well, all over the map.  What is concerning to me is the apparent desire of some respondents to have the government tell industry how to run their businesses. More specifically, how to build software, how to manage supply chain risk, and so forth. No, no, and no. (Maybe some of the respondents are consultants lobbying the government to require businesses to hire these consultants to comply with this or that mandate.)


For one thing, “security by design” concepts have already been working their way into development for a number of years: many companies are now staking their reputations on the security of their products and services. Market forces are working. Also, it’s a good time to remind people that more transparency is reasonable – for example, to enable purchasers to make better risk-based acquisition decisions – but when you buy COTS you don’t get to tell the provider how to build it. That’s called “custom code” or “custom development.” Just like, I don’t get to walk into <insert name of low-end clothing retailer here> and tell them that I expect my “standard off-the-shelf blue jeans” to ex post facto be tailored to me specifically, made of “organic, local and sustainable cotton” (leaving aside the fact that nobody grows cotton in Idaho), oh, and embroidered with not merely rhinestones but diamonds.  The retailer’s response should be “pound sand/good luck with that.” It’s one thing to ask your vendor “tell me what you did to build security into this product” and “tell me how you help mitigate counterfeiting” but something else for a non-manufacturing entity – the government – to dictate exactly how industry should build products and manage risk. Do we really want the government telling industry how to build products? Further, do we really want a US-specific set of requirements for how to build products for a global marketplace? What’s good for the (US) goose is good for the (European/Brazilian/Chinese/Russian/Indian/Korean/name your foreign country) gander.


An illustrative set of published responses to the NIST RFI – and my response to the response – follows:


1. “NIST should likewise recognize that Information Technology (IT) products and services play a critical role in addressing cybersecurity vulnerabilities, and their exclusion from the Framework will leave many critical issues unaddressed.”


Comment: COTS is general purpose software and not built for all threat environments. If I take my regular old longboard and attempt to surf Maverick’s on a 30-foot day and “eat it,” as I surely will, not merely because of my lack of preparation for 30-foot waves but because you need, as every surfer knows, a “rhino chaser” or “elephant gun” board for those conditions, is it the longboard shaper’s fault? Heck, no. No surfboard is designed for all surf conditions; neither is COTS designed for all threat environments. Are we going to insist on products designed for one-size-fits-all threat conditions? If so, we will all, collectively, “wipe out.” (Can’t surf small waves well on a rhino chaser. Can’t walk the board on one, either.)


Nobody agrees on what, precisely, constitutes critical infrastructure. Believe it or not, some governments appear to believe that social media should be part of critical national infrastructure. (Clearly, the World As We Know It will come to an end if I can’t post a picture of my dog Koa on Facebook.) And even if certain critical infrastructure functions – say, power generation – depend on COTS hardware and software, the surest way to weaken their security is to apply an inflexible and country-specific regulatory framework to that COTS hardware and software. We have an existing standard for the evaluation of COTS IT – it’s called the Common Criteria (see below) – so let’s use it rather than reinvent the digital wheel.


2. “Software that is purchased or built by critical infrastructure operators should have a reasonable protective measures [sic] applied during the software development process.”


Comment: Thus introducing an entirely new and undefined term into the assurance lexicon: “protective measures.” I’ve worked in security – actually, the security of product development – for 20 years and I have no idea what this means. Does it mean that every product should self-defend? I confess, I rather like the idea of applying the Marine Corps ethos – “every Marine a rifleman” – to commercial software. Every product should understand when it is under attack and every product should self-defend. It is a great concept, but we do not, as an industry, know how to do that – yet. Does “protective measures” mean “quality measures”? Does it mean “standard assurance measures”? Nobody knows. And any term that is this nebulous will be interpreted by every reader as Something Different.


3. “Ultimately, <Company X> believes that the public-private establishment of baseline security assurance standards for the ICT industry should cover all key components of the end-to-end lifecycle of ICT products, including R&D, product development, procurement, supply chain, pre-installation product evaluation, and trusted delivery/installation, and post-installation updates and servicing.”


Comment:  I can see the religious wars over tip-of-tree vs. waterfall vs. agile development methodologies. There is no single development methodology, there is no single set of assurance practices that will work for every organization (for goodness’ sake, you can’t even find a single vulnerability analysis tool that works well against all code bases).


Too many in government and industry cannot express concerns or problem statements in simple, declarative sentences, if at all. They don’t, therefore, have any business attempting to standardize how all commercial products are built (what problem will this solve, exactly?). Also, if there is an argument for baseline assurance requirements, it certainly can’t be for everything – or are we arguing that “FindEasyRecipes.com” is critical infrastructure and needs to be built to withstand hostile nation state attacks that attempt to steal your brioche recipe, if not your tips on how to get sugar to caramelize at altitude?


 4. “Application of this technique to the Common Criteria for Information Technology Security Evaluation revealed a number of defects in that standard.  The journal Information and Software Technology will soon publish an article describing our technique and some of the defects we found in the Common Criteria.”


Comment: Nobody ever claimed the Common Criteria was perfect. What it does have going for it is a) it’s an ISO standard and b) by virtue of the Common Criteria Recognition Arrangement (CCRA), evaluating once against the Common Criteria gains you recognition in 20-some other countries. Putting it differently, the quickest way to make security much, much worse is to have a Balkanization of assurance requirements. (Taking a horse and jumping through mauve, pink, and yellow hoops doesn’t make the horse any better, but it does enrich the hoop manufacturers, quite nicely.)  In the security realm, doing the same thing four times doesn’t give you four times the security, it reduces security by four times, as limited (skilled) resource goes to doing the same thing four different ways. If we want better security, improve the Common Criteria and, by the way, major IT vendors and the Common Criteria national schemes – which come from each CCRA member country’s information assurance agency, like the NSA in the US – have been hard at work for the last few years applying their considerable security expertise and resources to do just that. Having state-by-state or country-by-country assurance requirements will make security worse – much, much worse.


 5. “…vendor adoption of industry standard security models.  In addition, we also believe that initiatives to motivate vendors to more uniformly adopt vulnerability and log data categorization, reporting and detection automation ecosystems will be a significant step in ensuring security tools can better detect, report and repair security vulnerabilities.”


Comment: There are so many flaws in this, one hardly knows where to start. There are existing vulnerability “scoring” standards – namely, the Common Vulnerability Scoring System (CVSS)***** – though there are some challenges with it, such as the fact that the value of the data compromised should make a difference in the score: a “breach” of Aunt Gertrude’s Whiskey Sauce Recipe is not, ceteris paribus, as dire as a breach of Personally Identifiable Information (PII), if for no other reason than a company can incur large fines for the latter, far exceeding Aunt Gertrude’s displeasure at the former. Even if she cuts you out of her will.
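For the curious, the CVSS version 2 base equations look roughly like the sketch below (values and formula per the published v2 specification as I understand it; consult the standard for the authoritative text). The thing to notice is what is not an input: there is no parameter anywhere for the business value of the data on the compromised system.

```python
def cvss2_base(av, ac, au, conf, integ, avail):
    """CVSS v2 base score (a sketch of the published v2 equations)."""
    impact = 10.41 * (1 - (1 - conf) * (1 - integ) * (1 - avail))
    exploitability = 20 * av * ac * au
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# Remotely exploitable, low complexity, no authentication, complete
# confidentiality/integrity/availability loss:
# AV:N = 1.0, AC:L = 0.71, Au:N = 0.704, C/I/A Complete = 0.66 each
print(cvss2_base(1.0, 0.71, 0.704, 0.66, 0.66, 0.66))  # prints 10.0
```

Aunt Gertrude’s whiskey sauce and a PII database score identically if the technical characteristics of the vulnerability are the same – which is precisely the objection.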


Also, there is work going on to standardize descriptions of product vulnerabilities (that is, the format and type). However, not all vendors release the exact same amount of information when they announce security vulnerabilities, and they should not be required to. Oracle believes that it is not necessary to release either exploit code or the exact type of vulnerability (e.g., buffer overflow, cross-site request forgery (CSRF) or the like), because this information does not help customers decide whether to apply a patch or not: it merely enables hackers to break into things faster. Standardize how you refer to particular advisory bulletin elements and make them machine readable? Sure. Insist on dictating business practices (e.g., how much information to release)? Heck, no. That’s between a vendor and its customer base. Lastly, security tools cannot, in general, “repair” security vulnerabilities – typically, only patch application can do that.
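On the “machine readable” point, a bulletin entry needs only enough for a patch decision. Something like the following hypothetical record would do (the field names are invented for illustration and are not any particular vendor’s or standard’s format); note what it deliberately omits.

```python
# Hypothetical machine-readable advisory entry (illustrative fields only).
advisory = {
    "advisory_id": "EXAMPLE-2013-01",         # vendor bulletin identifier
    "cve_ids": ["CVE-2013-0000"],             # placeholder CVE reference
    "cvss_base_score": 9.3,                   # severity, for triage
    "affected": ["Example Server 11.2.0.3"],  # what to check for
    "fixed_in": ["Example Server 11.2.0.4"],  # where the fix is
    "workaround": "Disable the example listener until patched.",
    # Deliberately absent: vulnerability class, proof of concept, exploit code.
}
```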


6. “All owners and operators of critical infrastructure face risk from the supply chain. Purchasing hardware and software potentially introduce security risk into the organization. Creation of a voluntary vendor certification program may help drive innovation and better security in the components that are essential to delivery of critical infrastructure services.”


Comment:  The insanity of the following comment astounds: “Purchasing hardware and software potentially introduce security risk into the organization.” News flash: all business involves “risk.” Not doing something is a risk. So, what else is new? Actually, attempting to build everything yourself also involves risk – not being able to find qualified people, the cost (and ability) to maintain a home-grown solution, and so forth. To quote myself again: “Only God created something from nothing: everyone else has a supply chain.”****** In short, everyone purchases something from outside their own organization. Making all purchases into A Supply Chain Risk as opposed to, say, a normal business risk is silly and counterproductive.  It also makes it far less likely that specific, targeted supply chain threats can be addressed at all if “buying something – anything – is a risk” is the threat definition.


At this point, I think I’ve said enough. Maybe too much. Again, I appreciate NIST as an organization, and as I said above, the direction they have set for the Framework (not to $%*& with IT innovation) is really to their credit. I believe NIST needs to in-source more of their standards/policy development, because it is their core mission and because consultants have every incentive to create perpetual work for themselves (and none whatsoever to be precise and focused). NIST should adopt a less-is-more mantra vis-a-vis security. It is better to ask organizations to do a few critical things well than to ask them to do absolutely everything – with not enough resource (which is a collective industry problem and not one likely to be solved any time soon). Lastly, we need to remember that we are a proud nation of innovators. Governments generally don’t do well when they tell industry how to do their core mission – innovate – and, absent a truly compelling public policy argument for so doing, they shouldn’t try.


*“Nice Work If You Can Get It,” lyrics by Ira Gershwin, music by George Gershwin. Don’t you just love Gershwin?


** “Let’s Call The Whole Thing Off.” Another gem by George and Ira Gershwin.


*** Which reminds me – I really hate the expression “there are no silver bullets.” Of course there are silver bullets. How many vampires and werewolves do you see wandering around?


****Speaking of which, I just finished a fascinating if short read: The Man Who Deciphered Linear B: The Story of Michael Ventris.


*****CVSS is undergoing revision.


****** If you believe the account in Genesis, that is.


Put Up or Shut Up

Fri, 2012-08-17 15:10

One of the (usually) unfortunate concomitants of being a veteran in the cybersecurity space (“veteran” as in, I can remember when everyone called it “information security”) is that you get to hear the same themes over and over again (and solve the same security problems over and over again, only with different protocols).* Not to mention, you experience many technical revival meetings, which is industry’s way of promoting the same old same old under new exhortations (“Praise the Lord! I found eternal life with <insert sexy technology cult du jour>!”)

One of the topics that I am tired of talking about and would like us collectively to do something about is (drum roll) information sharing. Now, information sharing is not a cure-all for every ill in cybersecurity. It is a means to an end, not an end in itself. Specifically, information sharing is a means to enhance situational awareness, which in turn helps networked entities defend themselves better. (“Excuse me, I see a mugger is about to swipe your purse. You might want to hit him with it or switch it to your other shoulder.”)

As a basic enabler of better defense, information sharing is certainly a no-brainer, and yet it largely doesn’t happen, or doesn’t happen enough, at least among the good guys. The bad guys, of course, are really good at information sharing. Techniques, tools, top ten lists of badly secured web sites – bring it on, woo hoo. The hacker toolkits are so good now that even someone as technically challenged as I am could probably become a competent Internet evildoer (not that I have any plans to do so). And yet industry and government have spent more time writing tomes, doing PPTs and drafting policy papers that use the magic words “public-private partnership” than making actual – make that “almost any” – progress. Sharing policy papers, I hasten to add, is not the kind of information sharing that solves actual problems. So here it is, all y’all: time to put up or shut up on information sharing.

I say this in my experience as a member of the IT industry Information Sharing and Analysis Center (IT-ISAC) (OK, I am the current president, but I am not speaking for the IT-ISAC) and as a security weenie at Oracle. I can state pretty categorically that I have been astonished – and depressed – at what currently passes for information sharing, despite years of gum flapping about it. The government agencies that are tasked with it generally don’t do it, for example. I find it ironic that the same entities that can’t or won’t tell you that you are being broken into – or are about to be – think in some cases that the better solution is for them to just take over protection of your company’s networks after you’ve been broken into. Huh?

More to the point – surprisingly and delightfully – other agencies that are not tasked with information sharing (e.g., an entity I cannot name by name but that is not part of the Department of Homeland Security (DHS)) recently went to great lengths to contact the IT-ISAC and bring “interesting information” to the attention of the IT-ISAC because they’d seen suspicious activity related to some member companies. Bravo Zulu to you, Unnamed Government Entity. It was not your mission to share that information, but you made an effort and did it anyway. I wish you'd make a hostile takeover attempt on the entity that is supposed to share information and doesn’t, probably because their lawyers are still mulling it over. If I sound harsh, consider that I have spent 10 years having the exact same conversations over and over and over, and nothing seems to change except the people you are having the conversations with. To quote Yoda, “Do or do not. There is no try.”

Other government agencies may call you but you get mysterious intimations and in some cases nothing actionable. I certainly understand that a recipient doesn’t – and probably shouldn’t – receive information about how the reporter got the information (e.g., sources and methods). I know I don’t have a “need to know.” But the information has to be actionable or it’s useless. For example (and I know they meant well), I once got a phone call from Agency X who said, “we have a credible threat that an entity in Country Y (and We All Know Who That Is) is interested in stealing (only they used a more bureaucratic term) the source code for Oracle Product Foo.” Gosh, really? The only news there would be if that country were not out to rip off…er…steal…er…conduct industrial espionage…er…enhance their native manufacturing capacity by ‘active acquisition’… of someone else’s core intellectual property. The next statement was even less helpful: “The details about the threat are classified.” On the one hand, glad Agency X called. Points for trying. On the other hand, the warning was so vague it was not actionable and it certainly didn’t tell me anything I didn’t already know. I wish they’d saved the 35 cents that the call cost and used it to reduce our national debt.

So, the agencies that should share information don’t share much if anything and ones that do in some cases don’t give you information in enough detail such that you can do anything with it. And other good agencies do the right thing although they aren’t tasked with it. It’s not a great report card for the government (more on industry below, lest anyone think I am being one-sided in my criticism). Note that there are people across the political spectrum (and better security really should be an ecumenical issue) who, to their credit, have tried to pass legislation that would help provide “better information sharing” as one of several things we could do to help improve cybersecurity. “Better information sharing” seems a mom-and-secure-apple-pie proposition if ever there was one. Except that a bill that proposed that – and various other iterations of bills – did not pass and for now Congress has gone on vacation like so many of us do in August. There are many reasons why there hasn’t been a consensus cyber bill passed – and I’m not going to go into all that **– but for Pete’s sake, improving government information sharing with industry and vice versa really should be something everyone agrees on.

Another reason that even “kumbaya information sharing 101” couldn’t get a consensus was because of Privacy Concerns. You do wonder about people who are really happy telling intimate details of their lives on Facebook but don’t think the government should be able to receive information about anybody’s attempts to hack critical infrastructure. (Because that’s what we are talking about, not “sending information about the amount of time you spent visiting cutepuppiesandbunniesandduckies.com to the National Security Agency,” which, I am pretty sure, is truly not interested in that information – they have bigger evil fish to fry – and doesn’t view your bunny obsession as a national security threat.)

This is a good time to say that the type of information sharing I am talking about is the voluntary kind (though “highly encouraged” information sharing pursuant to a court order is also good – I’m nothing if not law-abiding). I have zero interest in handing over everything, including the digital kitchen sink, because someone decides they should get everything you have and only then figure out what they actually need. “Need to know” goes for the government, too.

Ergo, at a macro level, I’m glad there are people who are concerned and involved as regards digital privacy. But at the same time, I am frustrated because any time there is even a common sense proposal (legislative or otherwise) about information sharing, privacy hawks seem to come out of the woodwork and Express Grave Concern that either national security or homeland security agencies might actually get useful information from industry to enable them to do their national or homeland security jobs better. Or, God forbid, that industries under non-stop attack from bad guys (including hostile nation states intent on ripping us all off) might actually receive useful and actionable intelligence to help them close open digital doors and windows and keep vermin out. Wouldn’t that be awful?

Because I like analogies, I’d like to offer some perspectives from the real (non-cyber) world that will, at least, illustrate why I am so frustrated and want us to stop talking and start doing. I’d observe that in the physical world, we really don’t seem to have these Concerned Discussions,*** mostly because people understand that we live in communities and that we have a collective interest in making sure we have a secure commons. (Duh, it’s exactly the same issue in the digital world.) Here we go:

Scenario 1: I see a couple walking their dog on the street. They walk by my house and my neighbor’s house. The dog is a Labradope that barks incessantly and the owners don’t clean up after him. ****

Result: I might not like the fact the dog doo-dooed on the arctic willows I painstakingly planted, but this is not a national emergency and it’s not suspicious activity. I’ll clean up after the dog and be done with it. I’m not calling the Wood River Animal Shelter Dog Doo Hotline or the Ketchum Police Department Canine Crap Cop.

Scenario 2: I see someone attempting to enter a window in my neighbor’s house, at 7PM, when my neighbor has gone to the Sun Valley Symphony (they are playing Mahler, whom I don’t care for, which is why I am home instead of at the symphony).

Result: I’m calling the police. I’m also going to give the police as much information as I can about the person doing the B and E (breaking and entering) – what he looks like, how old, how he is dressed, etc. What I am not going to do is think, “Wait, I can’t provide a description of the breaker-inner because gosh, that might violate the perp’s right to privacy and bad taste in clothes. The police showing up when the criminal is doing a breaking and entering job is creating a hostile work environment for him, too.” If you are breaking into someone’s home, you do not have a right to privacy while doing it. Even realizing that there might be false positives (it’s the neighbor's kid, he locked himself out and is breaking into his own house), most of us would rather err on the side of caution and call the cops. We aren’t telling everyone on the planet about “attempted break-in on Alpine Lane,” but we are providing targeted information about a malefactor to the group (Ketchum Police Department) that can do something about it.

In short, if I am a decent neighbor, I should do what I can to protect my neighbor’s house. And as long as I am on the subject, if every house in the neighborhood has been broken into, I would like to know that before someone tries to break into my house. It would be nice if the police told me if there is a rash of B and Es in my neighborhood. (Given it’s a small town in Idaho and we have a really good police department, I’m pretty sure they will tell me.)*****

This is what information sharing is, folks. It’s not telling everybody everything whether or not it is interesting or useful. The above examples all have “cyber equivalents” in terms of the difference between sharing “all information” and “sharing interesting information” – which is exactly what we are talking about when we speak of information sharing. There isn’t a neighbor in the world that is busy taping everyone walking dogs by their house (and don’t forget those close-ups of the Labrador committing indiscretions on your plants). Nobody cares about your incontinent Labrador. You share information that is targeted, of value, of interest and where possible, actionable. That’s true in the physical world and in the cyber world.

I’ve been doing a bit of government bashing regarding “failure of government agencies to share information.” It is only fair that I also do some industry bashing, because information sharing is something some sectors do a lot better than others, yet it is something everyone could and should benefit from. Not to mention, I am mindful of the Biblical wisdom of “Physician, heal thyself” (Luke 4:23).

While the government can add value in information sharing, it is not their job to defend private networks, especially when the private sector – merely by virtue of the fact that they have more digital real estate – gets to see more and thus potentially has more information to share with their neighbors. Not to mention, industry cannot have it both ways. There is a lot of legitimate concern about regulation of cyberspace, mostly because so much regulation has unintended, expensive and often unfortunate consequences. This is all the more reason to Be A Good Cyber Citizen instead of waiting for the government to be the source of all truth or to tell you How To Be A Good Cyber Citizen. Industry participation in information sharing forums is a demonstration of voluntary sector cybersecurity risk management without the force of regulation. As I said earlier, “put up or shut up,” which goes just as much if not more for industry as for government.

While ISACs are not the only information sharing vehicles that exist, they were set up specifically for that purpose (in response to Presidential Decision Directive 63, way back in 1998). It’s a fair cop that some of the ISACs have done better at performing their mission than others. Not all ISACs are equal or even have the same mission. Still, each ISAC has its own examples of success and it is often difficult for those not participating in specific ISACs to see the value they deliver to members (to protect member information that is shared, most ISACs have non-disclosure agreements that prevent information from being shared outside the ISAC membership).

I’d specifically note that the multi-state ISAC and the financial services ISAC both seem to operate very well. There are, I think, many reasons for their success. First of all, the multi-state ISAC and the financial services ISAC have more homogeneity, for lack of a better word. A state is a state is a state – it’s not also a planet. (Except California and Texas, which often seem like Mars to the rest of the country. Bless their lil’ ol’ hearts.) This makes it easier to recognize the obvious benefit of cooperation. To quote Ben Franklin: "We must, indeed, all hang together, or most assuredly we shall all hang separately.” The financial services sector gets this really well: any perceived threat to an individual financial services company is likely to affect all of them, either because of the perception problem that a successful hack creates (“online banking is insecure!”) or because criminals like to repeat successes (to quote Willy Sutton when asked why he robbed banks, “that’s where the money is”). You can’t imagine a bad guy saying, “I’m only going to hack Bank of Foobaria because I don’t like that bank, but Bank of Whateversville is a really nice bank – they hand out dog biscuits – so I am not going to hack them.”

I think leadership is also a factor. I don’t know the originators and past presidents of the Financial Services ISAC, but Bill Nelson has done a tremendous job as the current President of the Financial Services ISAC. I also know Will Pelgrin at the multi-state ISAC and he is a very good, very skilled leader, indeed, and a generous colleague, to boot. Will has been gracious with his time and expertise to me personally in my role as the IT-ISAC president, and I am grateful for it.

While the IT-ISAC has a long list of accomplishments that it is justifiably proud of, the IT-ISAC also faces unique challenges. One of them is the nature of the ISAC and its constituency. The IT industry is less homogeneous than other sectors, including both “soup to nuts” stack vendors and security solution vendors that make a business out of sharing threat information. Being a die-hard capitalist, I don’t expect these companies to give away their secret sauce, plus French fries and a hot apple pie to avoid Ben Franklin’s collective hanging. While I think the diversity of the IT sector, the variance in business practices and the “not giving away the store” issues are real challenges to the IT-ISAC, they also provide real benefits. The IT-ISAC provides a forum for bringing together subject matter experts from diverse companies to engage on and discuss common security threats. The IT-ISAC is also moving from an organization focused on vendor vulnerabilities to one that assists members in understanding the rapidly-changing threat environment. For example, we have established a group within the IT-ISAC membership that has agreed to share threat indicator information with each other.
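To give a flavor of what “threat indicator information” means in practice – the record below is purely illustrative and invented by me, not the IT-ISAC’s actual format – an indicator is small, targeted and actionable, not a raw traffic dump:

```python
# Illustrative threat indicator record (invented fields, not any ISAC format).
indicator = {
    "type": "ipv4",
    "value": "203.0.113.7",              # RFC 5737 documentation address
    "first_seen": "2012-08-01T14:00:00Z",
    "observed": "repeated failed logins against member VPN concentrators",
    "suggested_action": "block at perimeter; search auth logs for hits",
    "handling": "IT-ISAC members only",  # per the membership NDA
}
```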

As President of the IT-ISAC, I am committed to doing what I can to try to expand membership, to find common ground (e.g., threat information that even security vendors feel comfortable sharing that benefits everyone, without expecting them to share secret sauce recipes), and finding ways to work with our counterparts in the public sector. I am not the first, and won’t be the last, IT-ISAC president, and I am blessed with an extremely capable executive director and with the generosity of colleagues on the Board. As I learned in my Navy days, I must do my best to steer a steady course to favorable shores.

Lastly, I think the biggest hurdle we in industry collectively need to get over is the trust issue. We seem to be more fearful of other companies than we are of being hacked by bad guys. (“If I share this information, will a competitor use it against me?”) Trust has to be earned, but it can be garnered by outreach and by making an effort to start somewhere. I think of a fine gentleman and public servant who has recently retired from NSA, Tony Sager. Tony was a public face of NSA in terms of working with industry in the Information Assurance Directorate (IAD). He and his team did a lot of outreach: here’s who we are, here’s what we do, let’s talk. Tony did a lot of listening, too. I have said often that if I had a problem in a particular area, I’d not hesitate to call Tony and his team. They had the creds, they had the smarts, and they had earned – yes, earned – my trust. We in industry, who see most of the threats, who are so often the direct victims of them, should take a cue from Tony. Use our “creds” and our intelligence (of all types) to improve the commons. We can start by sharing useful, actionable, valuable information that will help all of us be more secure. It is often said the bad guys are a step ahead of the defenders. This is true with information sharing as well: the bad guys play nicely with other bad guys – so why can’t we good guys get along?

If you are sitting on the sidelines, it is time to get involved and engaged. Instead of sitting on the outside complaining that there is no effective way to share information, join an information sharing organization (I’m partial to ISACs), get involved in it, and help shape and move the organization so that it meets your needs. Just get on with it, already!

* The fact that technology changes but stupidity repeats endlessly is job security for security weenies. Rule number 1 of information security is “never trust any unverified data from a client.” Rule 2 is “see rule 1.” Most security defects stem from failure to heed rule 1 – and we keep doing it every time we introduce new clients or new protocols. The excuse for lazy-ass servers or middle tiers is always, “Gosh, it’s just so much easier to accept any old thing the client hands you because it is computationally intensive to verify it. And nobody would send evil data to a middle tier, would they?” Right. Just like, think of all the cycles we’d save if we didn’t verify passwords. I’m sure if a client says he is John Doe, he IS John Doe! (Good luck with that.)
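(A minimal sketch of rule 1, with names invented for illustration: the server re-verifies identity against its own session state and re-validates every field, even ones the client-side code already “checked.”)

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # deliberately simple

def update_email(session_user, requested_user, new_email):
    """Server-side handler sketch: trust nothing the client asserts."""
    # Rule 1: a client-supplied username is just bytes until checked
    # against the server's own authenticated session state.
    if session_user != requested_user:
        raise PermissionError("client-asserted user does not match session")
    # Re-validate the field server-side, even if a browser form already did.
    if len(new_email) > 254 or not EMAIL_RE.match(new_email):
        raise ValueError("malformed email address")
    return {"user": session_user, "email": new_email}  # stand-in for DB write
```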

** Ok, I lied. One of the reasons various bills failed is because the bill drafters wanted “better security to protect critical infrastructure” but could not actually define “critical infrastructure.” If “it” is important enough to legislate, “it” should be clearly defined in the language of the bill, instead of subject to interpretation (and vast scope increase ex post facto). Just my opinion.

*** With the prospect of increased drone use in our domestic environs, we are going to have a lot more privacy discussions. What I barbecue in my backyard is none of anyone else’s goldurn business.

**** Ok, I know a lot of people love Labs. Apologies to anybody I offended.

***** Since I live a couple of blocks from the police, it’s pretty darn stupid of anybody to try to break into any house in the neighborhood.

Put Up or Shut Up

Fri, 2012-08-17 15:10



Intellectual Property
EOP
Joint Strategic Plan, Intellectual Property
12.00



Normal
0





false
false
false

EN-US
X-NONE
X-NONE













MicrosoftInternetExplorer4














DefSemiHidden="true" DefQFormat="false" DefPriority="99"
LatentStyleCount="267">
UnhideWhenUsed="false" QFormat="true" Name="Normal"/>
UnhideWhenUsed="false" QFormat="true" Name="heading 1"/>






















UnhideWhenUsed="false" QFormat="true" Name="Title"/>



UnhideWhenUsed="false" QFormat="true" Name="Subtitle"/>




UnhideWhenUsed="false" QFormat="true" Name="Strong"/>
UnhideWhenUsed="false" QFormat="true" Name="Emphasis"/>




UnhideWhenUsed="false" Name="Table Grid"/>

UnhideWhenUsed="false" QFormat="true" Name="No Spacing"/>
UnhideWhenUsed="false" Name="Light Shading"/>
UnhideWhenUsed="false" Name="Light List"/>
UnhideWhenUsed="false" Name="Light Grid"/>
UnhideWhenUsed="false" Name="Medium Shading 1"/>
UnhideWhenUsed="false" Name="Medium Shading 2"/>
UnhideWhenUsed="false" Name="Medium List 1"/>
UnhideWhenUsed="false" Name="Medium List 2"/>
UnhideWhenUsed="false" Name="Medium Grid 1"/>
UnhideWhenUsed="false" Name="Medium Grid 2"/>
UnhideWhenUsed="false" Name="Medium Grid 3"/>
UnhideWhenUsed="false" Name="Dark List"/>
UnhideWhenUsed="false" Name="Colorful Shading"/>
UnhideWhenUsed="false" Name="Colorful List"/>
UnhideWhenUsed="false" Name="Colorful Grid"/>
UnhideWhenUsed="false" Name="Light Shading Accent 1"/>
UnhideWhenUsed="false" Name="Light List Accent 1"/>
UnhideWhenUsed="false" Name="Light Grid Accent 1"/>
UnhideWhenUsed="false" Name="Medium Shading 1 Accent 1"/>
UnhideWhenUsed="false" Name="Medium Shading 2 Accent 1"/>
UnhideWhenUsed="false" Name="Medium List 1 Accent 1"/>
Name="Revision"/>
UnhideWhenUsed="false" QFormat="true" Name="List Paragraph"/>
UnhideWhenUsed="false" QFormat="true" Name="Quote"/>
UnhideWhenUsed="false" QFormat="true" Name="Intense Quote"/>
UnhideWhenUsed="false" Name="Medium List 2 Accent 1"/>
UnhideWhenUsed="false" Name="Medium Grid 1 Accent 1"/>
UnhideWhenUsed="false" Name="Medium Grid 2 Accent 1"/>
UnhideWhenUsed="false" Name="Medium Grid 3 Accent 1"/>
UnhideWhenUsed="false" Name="Dark List Accent 1"/>
UnhideWhenUsed="false" Name="Colorful Shading Accent 1"/>
UnhideWhenUsed="false" Name="Colorful List Accent 1"/>
UnhideWhenUsed="false" Name="Colorful Grid Accent 1"/>
UnhideWhenUsed="false" Name="Light Shading Accent 2"/>
UnhideWhenUsed="false" Name="Light List Accent 2"/>
UnhideWhenUsed="false" Name="Light Grid Accent 2"/>
UnhideWhenUsed="false" Name="Medium Shading 1 Accent 2"/>
UnhideWhenUsed="false" Name="Medium Shading 2 Accent 2"/>
UnhideWhenUsed="false" Name="Medium List 1 Accent 2"/>
UnhideWhenUsed="false" Name="Medium List 2 Accent 2"/>
UnhideWhenUsed="false" Name="Medium Grid 1 Accent 2"/>
UnhideWhenUsed="false" Name="Medium Grid 2 Accent 2"/>
UnhideWhenUsed="false" Name="Medium Grid 3 Accent 2"/>
UnhideWhenUsed="false" Name="Dark List Accent 2"/>
UnhideWhenUsed="false" Name="Colorful Shading Accent 2"/>
UnhideWhenUsed="false" Name="Colorful List Accent 2"/>
UnhideWhenUsed="false" Name="Colorful Grid Accent 2"/>
UnhideWhenUsed="false" Name="Light Shading Accent 3"/>
UnhideWhenUsed="false" Name="Light List Accent 3"/>
UnhideWhenUsed="false" Name="Light Grid Accent 3"/>
UnhideWhenUsed="false" Name="Medium Shading 1 Accent 3"/>
UnhideWhenUsed="false" Name="Medium Shading 2 Accent 3"/>
UnhideWhenUsed="false" Name="Medium List 1 Accent 3"/>
UnhideWhenUsed="false" Name="Medium List 2 Accent 3"/>
UnhideWhenUsed="false" Name="Medium Grid 1 Accent 3"/>
UnhideWhenUsed="false" Name="Medium Grid 2 Accent 3"/>
UnhideWhenUsed="false" Name="Medium Grid 3 Accent 3"/>
UnhideWhenUsed="false" Name="Dark List Accent 3"/>
UnhideWhenUsed="false" Name="Colorful Shading Accent 3"/>
UnhideWhenUsed="false" Name="Colorful List Accent 3"/>
UnhideWhenUsed="false" Name="Colorful Grid Accent 3"/>
UnhideWhenUsed="false" Name="Light Shading Accent 4"/>
UnhideWhenUsed="false" Name="Light List Accent 4"/>
UnhideWhenUsed="false" Name="Light Grid Accent 4"/>
UnhideWhenUsed="false" Name="Medium Shading 1 Accent 4"/>
UnhideWhenUsed="false" Name="Medium Shading 2 Accent 4"/>
UnhideWhenUsed="false" Name="Medium List 1 Accent 4"/>
UnhideWhenUsed="false" Name="Medium List 2 Accent 4"/>
UnhideWhenUsed="false" Name="Medium Grid 1 Accent 4"/>
UnhideWhenUsed="false" Name="Medium Grid 2 Accent 4"/>
UnhideWhenUsed="false" Name="Medium Grid 3 Accent 4"/>
UnhideWhenUsed="false" Name="Dark List Accent 4"/>
UnhideWhenUsed="false" Name="Colorful Shading Accent 4"/>
UnhideWhenUsed="false" Name="Colorful List Accent 4"/>
UnhideWhenUsed="false" Name="Colorful Grid Accent 4"/>
UnhideWhenUsed="false" Name="Light Shading Accent 5"/>
UnhideWhenUsed="false" Name="Light List Accent 5"/>
UnhideWhenUsed="false" Name="Light Grid Accent 5"/>
UnhideWhenUsed="false" Name="Medium Shading 1 Accent 5"/>
UnhideWhenUsed="false" Name="Medium Shading 2 Accent 5"/>
UnhideWhenUsed="false" Name="Medium List 1 Accent 5"/>
UnhideWhenUsed="false" Name="Medium List 2 Accent 5"/>
UnhideWhenUsed="false" Name="Medium Grid 1 Accent 5"/>
UnhideWhenUsed="false" Name="Medium Grid 2 Accent 5"/>
UnhideWhenUsed="false" Name="Medium Grid 3 Accent 5"/>
UnhideWhenUsed="false" Name="Dark List Accent 5"/>
UnhideWhenUsed="false" Name="Colorful Shading Accent 5"/>
UnhideWhenUsed="false" Name="Colorful List Accent 5"/>
UnhideWhenUsed="false" Name="Colorful Grid Accent 5"/>
UnhideWhenUsed="false" Name="Light Shading Accent 6"/>
UnhideWhenUsed="false" Name="Light List Accent 6"/>
UnhideWhenUsed="false" Name="Light Grid Accent 6"/>
UnhideWhenUsed="false" Name="Medium Shading 1 Accent 6"/>
UnhideWhenUsed="false" Name="Medium Shading 2 Accent 6"/>
UnhideWhenUsed="false" Name="Medium List 1 Accent 6"/>
UnhideWhenUsed="false" Name="Medium List 2 Accent 6"/>
UnhideWhenUsed="false" Name="Medium Grid 1 Accent 6"/>
UnhideWhenUsed="false" Name="Medium Grid 2 Accent 6"/>
UnhideWhenUsed="false" Name="Medium Grid 3 Accent 6"/>
UnhideWhenUsed="false" Name="Dark List Accent 6"/>
UnhideWhenUsed="false" Name="Colorful Shading Accent 6"/>
UnhideWhenUsed="false" Name="Colorful List Accent 6"/>
UnhideWhenUsed="false" Name="Colorful Grid Accent 6"/>
UnhideWhenUsed="false" QFormat="true" Name="Subtle Emphasis"/>
UnhideWhenUsed="false" QFormat="true" Name="Intense Emphasis"/>
UnhideWhenUsed="false" QFormat="true" Name="Subtle Reference"/>
UnhideWhenUsed="false" QFormat="true" Name="Intense Reference"/>
UnhideWhenUsed="false" QFormat="true" Name="Book Title"/>





/* Style Definitions */
table.MsoNormalTable
{mso-style-name:"Table Normal";
mso-tstyle-rowband-size:0;
mso-tstyle-colband-size:0;
mso-style-noshow:yes;
mso-style-priority:99;
mso-style-qformat:yes;
mso-style-parent:"";
mso-padding-alt:0in 5.4pt 0in 5.4pt;
mso-para-margin:0in;
mso-para-margin-bottom:.0001pt;
mso-pagination:widow-orphan;
font-size:10.0pt;
font-family:"Calibri","sans-serif";}


One of the (usually) unfortunate concomitants of being a veteran in the cybersecurity space (“veteran” as in, I can remember when everyone called it “information security”) is that you get to hear the same themes over and over again (and solve the same security problems over and over again, only with different protocols).* Not to mention, you experience many technical revival meetings, which is industry’s way of promoting the same old same old under new exhortations (“Praise the Lord! I found eternal life with <insert sexy technology cult du jour>!”)


One of the topics that I am tired of talking about and would like us collectively to do something about is (drum roll) information sharing. Now, information sharing is not a cure-all for every ill in cybersecurity. It is a means to an
end, not an end in itself. Specifically, information sharing is a means to enhance situational awareness, which in turn helps networked entities defend themselves better (“Excuse me, I see a mugger is about to swipe your purse. You might want to hit him with it or switch it to your other shoulder.”)


As a basic enabler of better defense, information sharing is certainly a no-brainer, and yet it largely doesn’t happen, or doesn’t happen enough, at least among the good guys. The bad guys, of course, are really good at information sharing. Techniques, tools, top ten lists of badly secured web sites – bring it on, woo hoo. The hacker toolkits are so good now that even someone as technically challenged as I am could probably become a competent Internet evildoer (not that I have any plans to do so). And yet industry and government have spent more time writing tomes, doing PPTs and drafting policy papers that use the magic words “public-private partnership” than making actual – make that “almost any” – progress. Sharing policy papers, I hasten to add, is not the kind of information sharing that solves actual problems. So here it is, all y’all: time to put up or shut up on information sharing.


I say this in my experience as a member of the IT industry Information Sharing and Analysis Center (IT-ISAC) (OK, I am the current president, but I am not speaking for the IT-ISAC) and as a security weenie at Oracle. I can state pretty categorically that I have been astonished – and depressed – at what currently passes for information sharing, despite years of gum flapping about it. The government agencies that are tasked with it generally don’t do it, for example. I find it ironic that the same entities that can’t or won’t tell you you are being broken into – or are about to be – think in some cases that the better solution is for them to just take over protection of your company’s networks after you’ve being broken into. Huh?


More to the point, surprisingly and delightfully, other agencies that are not tasked with information sharing (e.g., an entity I cannot name by name but that is not part of the Department of Homeland Security (DHS)) recently went to great lengths to contact the IT-ISAC and bring “interesting information” to its attention because they’d seen suspicious activity related to some member companies. Bravo Zulu to you, Unnamed Government Entity. It was not your mission to share that information, but you made an effort and did it anyway. I wish you'd make a hostile takeover attempt on the entity that is supposed to share information and doesn’t, probably because their lawyers are still mulling it over. If I sound harsh, consider that I have spent 10 years having the exact same conversations over and over and over, and nothing seems to change except the people you are having the conversations with. To quote Yoda, “Do or do not. There is no try.”


Other government agencies may call you but you get mysterious intimations and in some cases nothing actionable. I certainly understand that a recipient doesn’t – and probably shouldn’t – receive information about how the reporter got the information (e.g., sources and methods). I know I don’t have a “need to know.” But the information has to be actionable or it’s useless. For example (and I know they meant well), I once got a phone call from Agency X who said, “we have a credible threat that an entity in Country Y (and We All Know Who That Is) is interested in stealing (only they used a more bureaucratic term) the source code for Oracle Product Foo.” Gosh, really? The only news there would be if that country were not out to rip off…er…steal…er…conduct industrial espionage…er…enhance their native manufacturing capacity by ‘active acquisition’… of someone else’s core intellectual property. The next statement was even less helpful: “The details about the threat are classified.” On the one hand, glad Agency X called. Points for trying. On the other hand, the warning was so vague it was not actionable and it certainly didn’t tell me anything I didn’t already know. I wish they’d saved the 35 cents that the call cost and used it to reduce our national debt.


So, the agencies that should share information don’t share much, if anything, and the ones that do sometimes don’t give you enough detail to act on it. And other good agencies do the right thing even though they aren’t tasked with it. It’s not a great report card for the government (more on industry below, lest anyone think I am being one-sided in my criticism). Note that there are people across the political spectrum (and better security really should be an ecumenical issue) who, to their credit, have tried to pass legislation that would help provide “better information sharing” as one of several things we could do to help improve cybersecurity. “Better information sharing” seems a mom-and-secure-apple-pie proposition if ever there was one. Except that a bill that proposed that – and various other iterations of bills – did not pass, and for now Congress has gone on vacation like so many of us do in August. There are many reasons why there hasn’t been a consensus cyber bill passed – and I’m not going to go into all that** – but for Pete’s sake, improving government information sharing with industry and vice versa really should be something everyone agrees on.


Another reason that even “kumbaya information sharing 101” couldn’t get a consensus was because of Privacy Concerns. You do wonder about people who are really happy telling intimate details of their lives on Facebook but don’t think the government should be able to receive information about anybody’s attempts to hack critical infrastructure. (Because that’s what we are talking about, not “sending information about the amount of time you spent visiting cutepuppiesandbunniesandduckies.com to the National Security Agency,” which, I am pretty sure, is truly not interested in that information – they have bigger evil fish to fry – and doesn’t view your bunny obsession as a national security threat.)


This is a good time to say that the type of information sharing I am talking about is the voluntary kind (though “highly encouraged” information sharing pursuant to a court order is also good – I’m nothing if not law-abiding). I have zero interest in handing over everything, including the digital kitchen sink, because someone decides they should get everything you have and only then figure out what they actually need. “Need to know” goes for the government, too.


Ergo, at a macro level, I’m glad there are people who are concerned and involved as regards digital privacy. But at the same time, I am frustrated because any time there is even a common sense proposal (legislative or otherwise) about information sharing, privacy hawks seem to come out of the woodwork and Express Grave Concern that either national security or homeland security agencies might actually get useful information from industry to enable them to do their national or homeland security jobs better. Or, God forbid, that industries under non-stop attack from bad guys (including hostile nation states intent on ripping us all off) might actually receive useful and actionable intelligence to help them close open digital doors and windows and keep vermin out. Wouldn’t that be awful?


Because I like analogies, I’d like to offer some perspectives from the real (non-cyber) world that will, at least, illustrate why I am so frustrated and want us to stop talking and start doing. I’d observe that in the physical world, we really don’t seem to have these Concerned Discussions,*** mostly because people understand that we live in communities and that we have a collective interest in making sure we have a secure commons. (Duh, it’s exactly the same issue in the digital world.) Here we go:


Scenario 1: I see a couple walking their dog on the street. They walk by my house and my neighbor’s house. The dog is a Labradope that barks incessantly and the owners don’t clean up after him. ****


Result: I might not like the fact the dog doo-dooed on the arctic willows I painstakingly planted, but this is not a national emergency and it’s not suspicious activity. I’ll clean up after the dog and be done with it. I’m not calling the Wood River Animal Shelter Dog Doo Hotline or the Ketchum Police Department Canine Crap Cop.


Scenario 2: I see someone attempting to enter a window in my neighbor’s house, at 7PM, when my neighbor has gone to the Sun Valley Symphony (they are playing Mahler, whom I don’t care for, which is why I am home instead of at the symphony).


Result: I’m calling the police. I’m also going to give the police as much information as I can about the person doing the B and E (breaking and entering) – what he looks like, how old, how he is dressed, etc. What I am not going to do is think, “Wait, I can’t provide a description of the breaker-inner because gosh, that might violate the perp’s right to privacy and bad taste in clothes. The police showing up when the criminal is doing a breaking and entering job is creating a hostile work environment for him, too.” If you are breaking into someone’s home, you do not have a right to privacy while doing it. Even realizing that there might be false positives (it’s the neighbor's kid, he locked himself out and is breaking into his own house), most of us would rather err on the side of caution and call the cops. We aren’t telling everyone on the planet about “attempted break-in on Alpine Lane,” but we are providing targeted information about a malefactor to the group (Ketchum Police Department) that can do something about it.


In short, if I am a decent neighbor, I should do what I can to protect my neighbor’s house. And as long as I am on the subject, if every house in the neighborhood has been broken into, I would like to know that before someone tries to break into my house. It would be nice if the police told me if there is a rash of B and Es in my neighborhood. (Given it’s a small town in Idaho and we have a really good police department, I’m pretty sure they will tell me.)*****


This is what information sharing is, folks. It’s not telling everybody everything whether or not it is interesting or useful. The above examples all have “cyber equivalents” in terms of the difference between sharing “all information” and “sharing interesting information” – which is exactly what we are talking about when we speak of information sharing. There isn’t a neighbor in the world that is busy taping everyone walking dogs by their house (and don’t forget those close-ups of the Labrador committing indiscretions on your plants). Nobody cares about your incontinent Labrador. You share information that is targeted, of value, of interest and where possible, actionable. That’s true in the physical world and in the cyber world.


I’ve been doing a bit of government bashing regarding “failure of government agencies to share information.” It is only fair that I also do some industry bashing, because information sharing is something some sectors do a lot better than others, yet it is something everyone could and should benefit from. Not to mention, I am mindful of the Biblical wisdom of “Physician, heal thyself” (Luke 4:23).


While the government can add value in information sharing, it is not their job to defend private networks, especially when the private sector – merely by virtue of the fact that they have more digital real estate – gets to see more and thus potentially has more information to share with their neighbors. Not to mention, industry cannot have it both ways. There is a lot of legitimate concern about regulation of cyberspace, mostly because so much regulation has unintended, expensive and often unfortunate consequences. This is all the more reason to Be A Good Cyber Citizen instead of waiting for the government to be the source of all truth or to tell you How To Be A Good Cyber Citizen. Industry participation in information sharing forums is a demonstration of voluntary sector cybersecurity risk management without the force of regulation. As I said earlier, “put up or shut up,” which goes just as much if not more for industry as for government.


While ISACs are not the only information sharing vehicles that exist, they were set up specifically for that purpose (in response to Presidential Decision Directive 63, way back in 1998). It’s a fair cop that some of the ISACs have done better at performing their mission than others. Not all ISACs are equal or even have the same mission. Still, each ISAC has its own examples of success and it is often difficult for those not participating in specific ISACs to see the value they deliver to members (to protect member information that is shared, most ISACs have non-disclosure agreements that prevent information from being shared outside the ISAC membership).


I’d specifically note that the multi-state ISAC and the financial services ISAC both seem to operate very well. There are, I think, many reasons for their success. First of all, the multi-state ISAC and the financial services ISAC have more homogeneity, for lack of a better word. A state is a state is a state – it’s not also a planet. (Except California and Texas, which often seem like Mars to the rest of the country. Bless their lil’ ol’ hearts.) This makes it easier to recognize the obvious benefit of cooperation. To quote Ben Franklin: "We must, indeed, all hang together, or most assuredly we shall all hang separately.” The financial services sector gets this really well: any perceived threat to an individual financial services company is likely to affect all of them, either because of the perception problem that a successful hack creates (“online banking is insecure!”) or because criminals like to repeat successes (to quote Willy Sutton when asked why he robbed banks, “that’s where the money is”). You can’t imagine a bad guy saying, “I’m only going to hack Bank of Foobaria because I don’t like that bank, but Bank of Whateversville is a really nice bank – they hand out dog biscuits – so I am not going to hack them.”


I think leadership is also a factor. I don’t know the originators and past presidents of the Financial Services ISAC, but Bill Nelson has done a tremendous job as the current President of the Financial Services ISAC. I also know Will Pelgrin at the multi-state ISAC and he is a very good, very skilled leader, indeed, and a generous colleague, to boot. Will has been gracious with his time and expertise to me personally in my role as the IT-ISAC president, and I am grateful for it.


While the IT-ISAC has a long list of accomplishments that it is justifiably proud of, it also faces unique challenges. One of them is the nature of the ISAC and its constituency. The IT industry is less homogeneous than other sectors, including both “soup to nuts” stack vendors and security solution vendors that make a business out of sharing threat information. Being a die-hard capitalist, I don’t expect these companies to give away their secret sauce, plus French fries and a hot apple pie, to avoid Ben Franklin’s collective hanging. While I think the diversity of the IT sector, the variance in business practices and the “not giving away the store” issues are real challenges to the IT-ISAC, they also provide real benefits. The IT-ISAC provides a forum for bringing together subject matter experts from diverse companies to engage on and discuss common security threats. The IT-ISAC is also moving from an organization focused on vendor vulnerabilities to one that assists members in understanding the rapidly changing threat environment. For example, we have established a group within the IT-ISAC membership that has agreed to share threat indicator information with each other.
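For the curious, here is a purely illustrative sketch of the kind of record such a group might exchange. The field names are hypothetical, not the IT-ISAC’s actual format (real-world exchanges often use standards such as STIX), but the essentials of any such exchange are the same: what was observed, in what context, and how widely it may be redistributed.

# Hypothetical threat-indicator record -- illustrative only, not the
# IT-ISAC's actual schema.
import json
from datetime import datetime, timezone

indicator = {
    "type": "ipv4-addr",                  # kind of observable
    "value": "203.0.113.42",              # documentation-range address
    "first_seen": datetime.now(timezone.utc).isoformat(),
    "context": "repeated failed logins against a VPN concentrator",
    "sharing": "members-only",            # honor the membership NDA
}
print(json.dumps(indicator, indent=2))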


As President of the IT-ISAC, I am committed to doing what I can to try to expand membership, to find common ground (e.g., threat information that even security vendors feel comfortable sharing that benefits everyone, without expecting them to share secret sauce recipes), and finding ways to work with our counterparts in the public sector. I am not the first, and won’t be the last, IT-ISAC president, and I am blessed with an extremely capable executive director and with the generosity of colleagues on the Board. As I learned in my Navy days, I must do my best to steer a steady course to favorable shores.


Lastly, I think the biggest hurdle we in industry collectively need to get over is the trust issue. We seem to be more fearful of other companies than we are of being hacked by bad guys. (“If I share this information, will a competitor use it against me?”) Trust has to be earned, but it can be garnered by outreach and by making an effort to start somewhere. I think of a fine gentleman and public servant who has recently retired from NSA, Tony Sager. Tony was a public face of NSA in terms of working with industry in the information assurance directorate (IAD). He and his team did a lot of outreach: here’s who we are, here’s what we do, let’s talk. Tony did a lot of listening, too. I have said often that if I had a problem in a particular area, I’d not hesitate to call Tony and his team. They had the creds, they had the smarts, and they had earned – yes, earned – my trust. We in industry, who see most of the threats, who are so often the direct victims of them, should take a cue from Tony. Use our “creds” and our intelligence (of all types) to improve the commons. We can start by sharing useful, actionable, valuable information that will help all of us be more secure. It is often said the bad guys are a step ahead of the defenders. This is true with information sharing as well: the bad guys play nicely with other bad guys – so why can’t we good guys get along?


If you are sitting on the sidelines, it is time to get involved and engaged. Instead of sitting on the outside complaining that there is no effective way to share information, join an information sharing organization (I’m partial to ISACs), get involved in it, and help shape and move the organization so that it meets your needs. Just get on with it, already!



* The fact that technology changes but stupidity repeats endlessly is job security for security weenies. Rule number 1 of information security is “never trust any unverified data from a client.” Rule 2 is “see rule 1.” Most security defects stem from failure to heed rule 1 – and we keep doing it every time we introduce new clients or new protocols. The excuse for lazy-ass servers or middle tiers is always, “Gosh, it’s just so much easier to accept any old thing the client hands you because it is computationally intensive to verify it. And nobody would send evil data to a middle tier, would they?” Right. Just like, think of all the cycles we’d save if we didn’t verify passwords. I’m sure if a client says he is John Doe, he IS John Doe! (Good luck with that.)
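To make rule 1 concrete, here is a minimal sketch – the endpoint and helper names are hypothetical, not any real product’s code – of a lazy middle tier versus one that verifies:

# Minimal sketch of rule 1 ("never trust any unverified data from a client").
# Everything here is hypothetical; the principle is the point.

ACCOUNTS = {"alice": {"acct_1": 100.0}}   # toy server-side state

def move_funds(src, dst, amount):
    print(f"moving {amount} from {src} to {dst}")

def transfer_lazy(request):
    # Rule 1 violation: accepts whatever the client claims, unverified.
    move_funds(request["from"], request["to"], request["amount"])

def transfer_careful(request, user):
    # Rule 1 honored: every client claim is re-checked against server state.
    amount = float(request["amount"])
    if amount <= 0:
        raise ValueError("non-positive amount")
    if request["from"] not in ACCOUNTS.get(user, {}):
        raise PermissionError("client does not own the source account")
    if ACCOUNTS[user][request["from"]] < amount:
        raise ValueError("insufficient funds")
    move_funds(request["from"], request["to"], amount)

transfer_careful({"from": "acct_1", "to": "acct_9", "amount": "25"}, "alice")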


** Ok, I lied. One of the reasons various bills failed is because the bill drafters wanted “better security to protect critical infrastructure” but could not actually define “critical infrastructure.” If “it” is important enough to legislate, “it” should be clearly defined in the language of the bill, instead of subject to interpretation (and vast scope increase ex post facto). Just my opinion.


*** With the prospect of increased drone use in our domestic environs, we are going to have a lot more privacy discussions. What I barbecue in my backyard is none of anyone else’s goldurn business.


**** Ok, I know a lot of people love Labs. Apologies to anybody I offended.


***** Since I live a couple of blocks from the police, it’s pretty darn stupid of anybody to try to break into any house in the neighborhood.


Summer Potpourri

Tue, 2012-07-17 14:41

It’s summer in Idaho, and how! (It was over 90 yesterday and people can say “it’s a dry heat” until the cows come home with heatstroke. My oven is dry, but I don’t want to sit in that, either.) The air is scented with sagebrush (the quintessential Idaho smell), the pine needles I am clearing from my side yard, the lavender and basil in my garden, and the occasional ozone from imminent thunderstorms. It’s those snippets of scent that perfume long, languid summer days here. With that, I offer my own “literary potpourri” – thought snippets in security.
Digital Pearl Harbor
In the interests of greater service to my country, I would like to propose a moratorium on the phrase “digital (or “cyber”) Pearl Harbor.”  This particular phrase is beyond tiresome cliché, to the point where it simply does not resonate, if it ever did.  The people who use it for the most part demonstrate an utter lack of understanding of the lessons of Pearl Harbor.  It’s as bad as the abuse of the word “decimate,” the original meaning of which was to kill every  10th person (a form of military discipline used by officers in the Roman Army). Unless you were one of the ten, it hardly constituted “utter annihilation,” the now generally accepted usage of “decimate.”   The users and abusers of the phrase “digital Pearl Harbor” typically mean to say that we are facing the cyber equivalent of:

-- A sneak attack
-- That “destroys the fleet” – leaving something critical to our national defense or national well-being in a state of ruin/devastation
-- With attendant loss of life
Not to go on and on about the Pacific War – not that it isn’t one of my favorite topics – but Pearl Harbor turned out to be a blessing in disguise for the United States. Note that I am not saying it was a blessing for the 2,403 men and women who died in the attack (not to mention those who were wounded). In support of my point:
1) The Japanese attack on Pearl Harbor brought together the United States as almost nothing else could have. On December 6, 1941, there were a lot of isolationists. On December 8, 1941, nary a one. (For extra credit: Hitler was not obligated to declare war on the United States under the terms of the Tripartite Pact, yet he did so, which means – as if we needed one – we also had a valid excuse to join the European war.)
2) The Japanese did not help their cause any by – due to a timing snafu – only severing diplomatic relations after the attack had begun, instead of before. Thus, Pearl Harbor wasn’t just a sneak attack, but a sneak attack contrary to diplomatic norms.  
3) Japan left critical facilities at Pearl Harbor intact, due to their failure to launch another attack wave. Specifically, Japan did not destroy the POL (petroleum, oil and lubricant) facilities at Pearl Harbor. They also did not destroy the shipyards. Had the POL facilities been destroyed, the Pacific Fleet would have had to operate out of San Francisco (or another West Coast port) instead of Hawai’i. As for the dry-dock facilities, most of the ships sunk at Pearl Harbor were ultimately raised, refurbished, and returned to the fleet. (Special kudos to the divers who did that work, who do not generally get the credit they deserve for it.) The majority of ships on Battleship Row were down on December 7 – but far from out.
4) The attack forced the US to rely on its aircraft carriers. Famously, the US carriers were out to sea during the attack – unfortunately for the Japanese, who wanted to sink carriers even more than battleships. Consequently, the US was forced (in the early months of the war) to rely on its carriers, and carriers were the future of war-at-sea. (The Japanese Yamato and her sister Musashi, the largest battleships ever built, were notable non-forces during the war.)
5) Japan was never going to prevail against the industrial might of the United States.  Famously, ADM Isoroku Yamamoto -- who had initially opposed the attack on Pearl Harbor -- said, "I can run wild for six months … after that, I have no expectation of success.” It was almost exactly six months between December 7, 1941 and the battle of Midway (June 4-6, 1942), which was arguably the turning point of the Pacific War.
December 7, 1941 was and always will be “a date which will live in infamy,” but Pearl Harbor was not the end of the United States. Far from it.
Thus, the people who refer to “Cyber Pearl Harbor” or “Digital Pearl Harbor” (DPH) are using it in virtually complete ignorance of actual history. Worse, unless they can substantiate some actual “for instances” that would encompass the basic “here’s what we mean by that,” they run the risk of becoming the boy who cried cyber wolf. Specifically, and to return to my earlier points:
-- A sneak attack
With the amount of cyber intrusions, cyber thefts, and so on, would any cyber attack (e.g., on critical infrastructure) really be “a sneak attack” at this point? If we are unprepared, shame on us. But nobody should be surprised. Even my notably technophobic Mom reads the latest and greatest articles on cyber attacks, and let’s face it, there are a lot of them permeating even mainstream media. It’s as if Japan had attacked Seattle, Los Angeles and San Diego and pundits were continuing to muse about whether there would be an attack on Pearl Harbor. Duh, yes!
-- That “destroys the fleet” – leaving something critical to our national defense or national well-being in a state of ruin/devastation
If we know of critical systems that are so fragile or interdependent that we could be ruined if they were “brought down,” for pity’s sake let’s get on with fixing them. For the amount of time pundits spend opining on DPH, they could be working to fix the problem. Hint: unless Congress has been taken over by techno-savvy aliens and each of the members is now a supergeek, they cannot solve this problem via legislation. If a critical system is laid low, what are we going to say? “The answer is – more laws!” Yessirree, I had no interest in protecting the power grid before we were attacked, but golly jeepers, now there’s a law, I suddenly realize my responsibilities. (Didn’t we defeat imperial Japan with – legislation? Yep, we threw laws at the Japanese defenders of Tarawa, Guadalcanal, and Peleliu. Let’s give Marines copies of the Congressional Record instead of M4s.)
-- With attendant loss of life
It’s not inconceivable that there are systems whose failures could cost lives. So let’s start talking about that (we do not, BTW, have to talk about that in gory detail wherein Joe-Bob Agent-of-a- Hostile-Nation-State now has a blueprint for evil). If it -- loss of life -- cannot be substantiated, then it’s another nail in the coffin of using DPH as an industry scare tactic.
To summarize, I have plenty of concerns about the degree to which people rely on systems that were not designed for their threat environments, and/or that embed a systemic risk (the nature of which is that it is not mitigable – that’s why it is “systemic” risk). I also like a catchy sound bite as much as the next person (I believe I was the first person to talk about a Cyber Monroe Doctrine). But I am sick to death of “DPH” and all its catchy variants. To those who use it: stop waving your hands, and Put Up or Shut Up – read about Pearl Harbor and either map the digital version to actual events or find another CCT (Cyber Catchy Term). Cyberpocalypse, Cybergeddon, CyberRapture – there are so many potential terms of gigabit gloom and digi-doom – I am sure we as an industry can do better.
Hand waving only moves hot air around – it doesn’t cool anything off.
Security Theater
I recently encountered a “customer expectations management” issue -- which we dealt with pretty quickly -- that reminds me of a Monty Python sketch.  It illustrates the difference between “real security” and “security theater” -- those feel-good, compliance-oriented, “everybody else does this so it must be OK” exercises that typically end in “but we can’t have a security problem -- we checked all the required boxes!”  
Here goes. I was told by a particular unnamed development group that customers requesting a penetration test virtually demanded a penetration test report that showed there were vulnerabilities, otherwise they didn’t believe it was a “real” report.  (Sound of head banging on wall.) I’d laugh if the stakes weren’t so high and the discussion weren’t so absurd.
If the requirement for “security” is “you have to produce a penetration test that shows there are vulnerabilities,” that is an easy problem to solve. I am sure someone could create a software program that randomly selects from, say, the oodles of potential problems outlined in the Common Weakness Enumeration (CWE), and produces a report showing one or more Vulnerabilities Du Jour. Of course, it probably needs to be parameterized (e.g., you have to show at least one vulnerability per X thousand lines of code, and you specify the primary coding language so the fake vulnerabilities can be tailored to it, etc.). Rather than waste money using a really good tool (or hiring a really good third party), we can all just run the report generator. Let’s call it BogusBreakIt. “With BogusBreakIt, you can quickly and easily show you have state-of-the-art, non-existent security problems – without the expense of an actual penetration test! Why fix actual problems when customers only want to know you still have them? Now, with new and improved fonts!”
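In the spirit of the joke, a minimal sketch of what BogusBreakIt would amount to – entirely hypothetical, and offered in the hope that nobody actually builds it:

# BogusBreakIt, sketched: sample "findings" at the mandated rate from a
# canned list of CWE-style weakness names and dress them up as a report.
# The point is how little a checkbox report proves, not how to build one.
import random

CANNED_WEAKNESSES = [
    "CWE-79: Cross-site Scripting",
    "CWE-89: SQL Injection",
    "CWE-120: Classic Buffer Overflow",
    "CWE-798: Use of Hard-coded Credentials",
]

def bogus_report(kloc, language="Java", findings_per_kloc=1):
    findings = random.choices(CANNED_WEAKNESSES, k=kloc * findings_per_kloc)
    lines = [f"Penetration Test Report ({language}, {kloc} KLOC)"]
    for i, weakness in enumerate(findings, 1):
        severity = random.choice(["High", "Medium"])
        lines.append(f"  [{i}] {weakness} -- severity: {severity}")
    lines.append("Conclusion: vulnerabilities found, so the report must be real.")
    return "\n".join(lines)

print(bogus_report(kloc=3))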
With apologies to the Knights Who Say, “Ni,” all we are missing is a shrubbery (not too expensive). The way this exercise should work is that instead of hiring Cheap and Bad Pen Testers R Us to make your customers feel good about imperfect security (why hire a good one to find everything if the bar is so low?), you do the best you can do, yourselves, then, where apropos, hire a really good pen tester, triage the results, and put an action plan together to address the crappiest problems first. If you provide anything to customers, it should not be gory details of problems you have not fixed yet, it should be a high level synopsis with an accompanying remediation plan.  Any customer who really cares doesn’t want “a report that shows security problems,” but a report that shows the vendor is focused on improving security in priority order – and, of course, actually does so.  
Moral: It’s hard enough working in security without wasting scarce time, money, and people on delivering shrubberies instead of security.
CSSLEIEIO
I don’t think anybody can doubt my commitment to assurance – it’s been my focus at Oracle for most of the time I’ve worked in security, going on 20 years (I should note that I joined the company when I was 8). It is with mixed feelings that I say that while I was an early (grandfathered) recipient of the Certified Secure Software Lifecycle Professional (CSSLP) certification, I have just let it lapse. I’m not trying to bash the Information Systems Security Certification Consortium (ISC(2)), the developer and “blessing certifier” of the CSSLP, and certainly not trying to denigrate the value of assurance, but the entire exercise of developing this certification never sat well with me. Part of it is that I think it solves the wrong problem – more on which later. Also, my cynical and probably unfair comment when I heard that ISC(2) was developing the CSSLP certification was that, having saturated the market for Certified Information Systems Security Professionals (CISSPs), they needed a new source of revenue. (You do wonder when you look at the number of business cards with a multiplicity of alphabet soup on them: CISM, CISSP, CSSLP, EIEIO (Ok, I made that last one up).)
I let my CSSLP lapse because of a) laziness and b) the fact that I got nothing from it. Having a CSSLP made no difference to my work or my “professional standing.” It added no value to me personally or, more importantly, to assurance at Oracle. I started working in assurance before the CSSLP dreamer-uppers ever thought of Yet Another Certification, and my team (and many others in Oracle) have continued to improve our practices since. ISC(2) had no impact on that. None. Why was I paying X dollars to an organization to “bless” work that we were doing – that I was doing – anyway, when the certification did not add one iota to our knowledge or practices? (To be fair, some of the people who work for me have CSSLPs and have kept them current. I can’t speak for what they think they get out of having a CSSLP.)
We have an EIEIO syndrome in security – so many certifications, and really, is security any better? I don’t know. I don’t know how much difference many of these certifications make, except that job search engines apparently look for them as keywords. Many certifications across various industries are used as barriers to market entry, to the point that some forward-thinking states are repealing these requirements as being anti-competitive. (And really, do we need a certification for interior decorators? Is it that critical that someone know the difference between Rococo and Neoclassical styles? C’mon!) In some areas, certifications probably do make sense. Some of them might be security areas. But it doesn’t do any good if the market is so fragmented that we keep adding more certifications just to add them. And it really doesn’t do any good if we have so many of them that the next one is JASC – just another security certification. CSSLP felt like that to me.
I certainly think assurance is important. I just do not know – and remain really, really unconvinced – that having an assurance “certification” for individuals has done anything to improve the problem space. As I’ve opined before, I think it would be far, far better to “bake” security into computer science and related degree programs than to “bolt on” assurance through an ex post facto certification. It’s like an example I have used before: the little Dutch boy, who, in the story, put his fingers in leaks in a dike to prevent a flood. We keep thinking that if only we had more little Dutch boys, we could prevent a flood. If we don’t fix the “builders” problem, we – and all the Dutch boys we are using to stem the flood – will surely drown.

I am not perfect. As good as my team is -- and they and many others in Oracle are the real builders of our assurance program -- they are not perfect. But I stand on our record, and none of that record was, in my opinion, affected one iota by the presence or absence of CSSLPs among our practitioners.
If I may be indulged:
Old MacDonald had some code
EIEIO
And in that code he had some flaws
EIEIO
With a SQL injection here and an XSS there
Run a scan, fuzz your code
Everywhere a threat model
Old MacDonald fixed his code
EIEIO*
Last Bits
I mentioned in an earlier blog, in a truly breathtaking example of self-promotion, that my sister Diane and I (writing as Maddi Davidson) had written and published the first in an IT industry-based murder mystery series, Outsourcing Murder. There are two exciting bits of news about that: first, book 2, Denial of Service, is almost (OK, 90%) done. Stay tuned. Second, Outsourcing Murder made a summer reading list for geeks. It’s probably the only time in my life I (or rather, my sister and I) will appear in a list with Kevin Mitnick and Neal Stephenson.
*Ira Gershwin would turn over in his grave – I know I’ll never make it as a lyricist.


Pain Comes Instantly

Wed, 2012-03-28 11:18

When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror.

In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.”

The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy so nobody can accuse me of being reflexively anti-regulation. That said, you have so many cycles in the day, and most of us would rather spend it slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”)

Unfortunately for so many of us who would rather just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragon slayers have little choice but to participate in.

The general trend in cybersecurity is that the powers-that-be – which encompasses groups other than just legislators – are increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked.

More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective. Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or worse, don’t care about. After all, the logic goes, we Did Something.

Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if:

• It is cost effective
• It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*
• It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)
• It solves the right problem

With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give the PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle, among others, to make them aware of “unfortunate side effects of your requirements” – which is as tactful as I can be, for reasons that I believe will become obvious below – have gone, to date, unanswered and, more importantly, unchanged.

A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?)

Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them:

• Specific details of security vulnerabilities

• Including exploit information or technical details of the vulnerability

• Whether or not there is any mitigation available (as in a patch)

PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed - to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret.
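A back-of-the-envelope calculation makes the point. The per-party reliability figure below is an assumption, and a generous one:

# If each of N parties independently keeps a secret with probability p,
# the chance the secret survives is p**N. Assume a generous p = 0.999.
p = 0.999
for n in (100, 10_000, 200_000):
    print(f"{n:>7} parties: {p ** n:.2e} chance the secret survives")
# ~0.90 for 100 parties (plausible), ~4.5e-05 for 10,000 (all but certain
# to leak), and ~1e-87 for 200,000 (forget it).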

Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you.

Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge that. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium?

Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember “insider information always leaks” creates problems in the general case, and has created problems for us specifically.

A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!)

Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that VRA’s current requirement for the widespread distribution of security vulnerability exploit details -- at any time, but particularly before a vendor can issue a patch or a workaround -- is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits - actually, any benefits - to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers. The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the “Hacking Trifecta!“ It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want.

Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit.

Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerability Enumeration (CVE) and Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates.

So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist then in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have know they were being asked to tell-all to PCI and put their customers at risk, to do one of the following:

• Contact PCI with your concerns
• Contact Oracle (we are looking for vendors to sign our statement of concern)
• And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application

I like to be charitable and say “PCI meant well” but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions.

By Way of Explanation…

Non-related to PCI whatsoever, and the explanation for why I have not been blogging a lot recently, I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.”

Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively. From our book jacket:

Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect.With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer.

Book 2, Denial of Service, is coming out this summer.

* Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”

Pain Comes Instantly

Wed, 2012-03-28 11:18

When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror.


In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.”


The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy, so nobody can accuse me of being reflexively anti-regulation. That said, you have only so many cycles in the day, and most of us would rather spend them slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too – and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”)


Unfortunately for the many of us who would rather just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragon slayers have little choice but to participate in.


The general trend in cybersecurity is that the powers-that-be – which encompass groups other than just legislators – are increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (This leaves aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked.


More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective. Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or worse, don’t care about. After all, the logic goes, we Did Something.


Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if:


• It is cost effective
• It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*
• It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)
• It solves the right problem


With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give the PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful as I can be, for reasons that I believe will become obvious below – have gone, to date, unanswered and, more importantly, unchanged.


A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?)


Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them:


• Specific details of security vulnerabilities

• Including exploit information or technical details of the vulnerability

• Whether or not there is any mitigation available (as in a patch)


PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed - to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret.


Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of the vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you.


Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge it. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium?


Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row-level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others, or with vulnerability information and/or patches earlier than others. Failure to remember that “insider information always leaks” creates problems in the general case, and has created problems for us specifically.


A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!)


Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that the VRA’s current requirement for the widespread distribution of security vulnerability exploit details -- at any time, but particularly before a vendor can issue a patch or a workaround -- is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits - actually, any benefits - to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers. The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, depending upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to the time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code – the “Hacking Trifecta!” It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want.


Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit.
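
To make that norm concrete, here is a minimal sketch in Python of the shape of such a post-patch notice; every product name and identifier below is invented for illustration, and this describes the industry norm just discussed rather than any particular vendor’s format:

    # A sketch of what the industry-norm, post-patch notice described above
    # contains -- and, as important, what it omits. All names/IDs invented.
    from dataclasses import dataclass

    @dataclass
    class SecurityNotice:
        product: str        # the product at issue
        advisory_id: str    # e.g., a CVE identifier (placeholder below)
        risk_summary: str   # relative severity, not exploit details
        remediation: str    # how to obtain and apply the patch
        # Deliberately absent: exploit code, forensic "maps," customer names.

    notice = SecurityNotice(
        product="ExamplePay 4.2",           # hypothetical product
        advisory_id="CVE-0000-0000",        # placeholder identifier
        risk_summary="Remotely exploitable; no authentication required",
        remediation="Apply patch 4.2.1 via the vendor support portal",
    )
    print(notice)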


Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerabilities and Exposures (CVE) and the Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses, and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates.
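
Since CVSS comes up here, its arithmetic is worth a quick illustration. The sketch below computes a CVSS version 2 base score from a vector string, using the weight constants from the public CVSS v2 specification; the example vector is generic rather than tied to any real bug:

    # Rough sketch of the CVSS v2 base-score arithmetic; weight constants are
    # from the public CVSS v2 specification. Vector format: "AV:N/AC:L/Au:N/..."
    WEIGHTS = {
        "AV": {"L": 0.395, "A": 0.646, "N": 1.0},   # access vector
        "AC": {"H": 0.35, "M": 0.61, "L": 0.71},    # access complexity
        "Au": {"M": 0.45, "S": 0.56, "N": 0.704},   # authentication
        "C":  {"N": 0.0, "P": 0.275, "C": 0.660},   # confidentiality impact
        "I":  {"N": 0.0, "P": 0.275, "C": 0.660},   # integrity impact
        "A":  {"N": 0.0, "P": 0.275, "C": 0.660},   # availability impact
    }

    def cvss2_base_score(vector: str) -> float:
        m = dict(part.split(":") for part in vector.split("/"))
        w = {metric: WEIGHTS[metric][m[metric]] for metric in WEIGHTS}
        impact = 10.41 * (1 - (1 - w["C"]) * (1 - w["I"]) * (1 - w["A"]))
        exploitability = 20 * w["AV"] * w["AC"] * w["Au"]
        f_impact = 0.0 if impact == 0 else 1.176
        return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

    # A generic example: remote, low-complexity, unauthenticated, partial C/I/A.
    print(cvss2_base_score("AV:N/AC:L/Au:N/C:P/I:P/A:P"))  # 7.5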


So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist them in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have known they were being asked to tell all to PCI and put their customers at risk, to do one or more of the following:


• Contact PCI with your concerns
• Contact Oracle (we are looking for vendors to sign our statement of concern)
• And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application


I like to be charitable and say “PCI meant well,” but on a public policy issue as important as what you disclose about vulnerabilities, to whom, and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions.


By Way of Explanation…


Unrelated to PCI, and by way of explaining why I have not been blogging a lot recently: I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.”


Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively. From our book jacket:


Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect. With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer.


Book 2, Denial of Service, is coming out this summer.


* Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”

Those Who Can’t Do, Audit

Wed, 2011-08-24 11:53

I am often asked what the biggest change is that I’ve seen in my years working in information security. (Most of those who ask are too polite to ask how many years I’ve worked in information security. I used to say I joined Oracle when I was 8. Now, I tell everyone I started at Oracle when I was 3. Pretty soon, I’ll be a prenatal employee.)

There are too many changes to pick just one, but on a personal level, I seem to spend a lot more time on government affairs (relative to the rest of my job) than I used to: working with others in industry to discuss cybersecurity-related public policy, meeting with staffers on the Hill, commenting on draft legislation, commenting on others’ comments on draft legislation, and so on. On the one hand, it’s a natural result of the amount of dependence that everyone has on cybersecurity and the number of highly publicized breaches, hacks, and so on we continue to see. You can hardly open the paper – or open an email with a link to the daily electronic paper – without reading about another data breach, “outing” of sensitive information, or new thing you didn’t even know was IP-accessible being broken into (e.g., starting cars remotely), and so on.

Legislators often want to Do Something when they see a problem – that’s why we elected them, presumably. I’ve opined in previous blogs on the importance of defining what problem you want to solve, specifying what “it” is that you want to legislate, understanding costs – especially those pertaining to unintended consequences - and so on. I spend more time on govie affairs now because more legislators are proposing legislation that they hope will improve cybersecurity.

Most people who help frame public policy discussions about cybersecurity are well intended and they want to minimize or resolve the problem. However, there are also a whole lotta entities that want to use those discussions to get handed a nice, big, fat “public policy printing press”: ideally, legislative mandates where their businesses receive a direct and lucrative benefit, ka-ching. Their idea of public policy begins and ends at the cash register.

Some businesses would love to be in a position where they could write/draft/influence legislation about how no commercial company can be trusted with security and therefore needs someone (presumably people like themselves) to bless their systems and all their business practices – the more, the merrier. This includes legislative mandates on suppliers who, as we all know, are busy throwing crappy code over the wall with malice aforethought. Those evil suppliers simply cannot be trusted, and thus the entirety of supplier business practices in security must be validated by Outside Experts* if not the entirety of their code scanned. (Nice work if you can get it.) (Aside: boy, with all the energy we suppliers expend on deliberately producing rotten code and putting our customers’ systems at risk – not to mention our own businesses, since we run on our own stuff – it’s really hard to find time for other malicious acts like starving baby rabbits and kitties.**)

At the same time, these businesses are less than forthcoming when asked how they will be vetted: what do they find, how well do they find it, and at what cost – since everybody who has worked with code scanning tools knows they need to be tuned. (Having to plow through 1000 alleged vulnerabilities to find the three that are legitimate is way too expensive for any company to contemplate.) Now is a perfect time for a disclosure: I am on an advisory board to a company in this sector.
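
To put hypothetical numbers on that triage burden (every figure below is invented purely for illustration):

    # Back-of-the-envelope triage cost for an untuned scan; all numbers invented.
    findings, real_bugs = 1000, 3
    minutes_per_finding = 15  # assumed analyst time to confirm or dismiss one finding
    hours = findings * minutes_per_finding / 60
    print(f"{hours:.0f} analyst-hours to surface {real_bugs} real bugs "
          f"({real_bugs / findings:.1%} signal)")
    # -> 250 analyst-hours to surface 3 real bugs (0.3% signal)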

One company in particular is beginning to get under my skin on a lot of levels pertaining to “creating a market for themselves.” Let’s call them SASO (Static Analysis Service Offering), since – ta da! – they do static analysis as a service offering. More specifically, they analyze binaries to do static analysis – an important point. When SASO got started, I thought they had a useful value proposition, in that a lot of small software companies don’t have security expertise in-house – one reason why Oracle has strong “business alignment” processes when we acquire a company that include alignment with our assurance practices, which the acquired entities are by-and-large really happy to do. The small company wants to increase its ability to sell into core markets, and to do so it may need to convince prospective customers that its code is appropriately secure. They hire SASO to analyze their code, and SASO produces a summary report that tells the customer whether there are significant potentially unfixed vulnerabilities, and also gives the small company the details of those vulnerabilities so they can validate actual vulnerabilities and then fix them. Fair enough, and a good service offering – to a point, more on which later. The real benefit to the small company is getting actionable intelligence on problems in their code earlier, because it’s almost always easier and cheaper to fix these before products ship, and/or to remediate them in some severity order. Also, it’s not always easy to use static analysis tools yourself: the tools require knowledgeable people to run them and analyze the results.

It’s not, in other words, the “gold star” from SASO they get that has value; it’s the SASO expertise they get to fix issues earlier until they get that expertise in-house and up-to-speed. And they really do need that expertise in-house because security is a core element of software development and you cannot outsource it -- or you will have no security. (Among other things Oracle does in terms of secure development, we have our ethical hacking team conduct product assessments. They are as good as anybody in industry – or better – and they find problems that code analysis tools do not find and never will find. Their expertise is often codified in our own tools that we use in addition to commercially available static and dynamic analysis tools.)

Of course, the market for “helping small inexperienced companies improve the security-worthiness of their code” isn’t big enough, so SASO has been trying to expand the size of their market by means of two understandably – yet obnoxiously – self-promoting activities. One of them is convincing entire market sectors that they cannot trust their vendors, and that their suppliers’ code needs to be “tested” by SASO. You do want to ask: why would anybody trust them? I mean, more and more of SASO’s potential market size is based on FUD. Whereas the suppliers’ market is based on their customers’ ability to run their entire businesses on the software or hardware. That is, customers bet the digital farm on their suppliers, and the suppliers understand this. If you lose your customers’ confidence for any of a number of reasons – security among them – your customers will switch suppliers faster than you can say “don’t bother with the code scan.” And thus, suppliers are out of business if they screw it up, because their competitors will be ruthless. Competitors are ruthless.

Who do you think is more trustworthy? Who has a greater incentive to do the job right – someone who builds something, or someone who builds FUD around what others build? Did I mention that most large hardware and software companies run their own businesses on their own products, so if there’s a problem, they – or rather, we – are the first ones to suffer? Can SASO say that? I thought not.

Being approached by more and more companies who want our code SASOed has led me to the “enough, already” phase of dealing with them. For the record, we do not and will not allow our code to be SASOed; in fact, we do not allow any third parties access to source code for purposes of security analysis (with one exception noted below). I’ve also produced a canned response that explains to customers why SASO will never darken our code base. It covers the following points, though admittedly when I express them to customers they are less glib and more professional:

1) We have source code and we do our own static analysis. Ergo, I don’t need SASO or anybody else to analyze our binaries (to get at the source code). Moreover, we have not one but two static analysis tools we use in-house, one of which we built ourselves (Parfait, developed by Sun Labs, a fine outfit we got via the Sun acquisition). Also, as I’ve opined at length in past blog entries, these tools are not “plug and play” – if they were, everybody would run down to Radio Shack, or wait for late night television to see advertisements for Code Scannerama (“finds all security vulnerabilities and malware in 28 seconds -- with No False Positives! But wait, there’s more: order now, and get a free turnip twaddler!”). For the amount of time we’d expend to work with SASO and get useful, accurate and actionable output – we could just get on with it and use the tools we’ve already licensed or already have. SASOing our code hurts us and doesn’t help us.

2) Security != testing. I am all for the use of automated tools to find vulnerabilities in software – it’s a code hygiene thing – but it is not the totality of assurance. Oracle has an extensive assurance program for our products that includes the use of static analysis tools. (In fact, Oracle requires the use of multiple tools, some of which we have developed in-house because no single tool “does it all.”) We also track compliance with our assurance program and report it to our senior management, all the way up to the CEO. In short, we do a far more comprehensive job in security than SASO can do for us. (I also note that – hype to the contrary – these tools will not find much if any malware in code. I am a blond and a bad programmer and it took me about ten minutes to think of ways to put bad stuff in code in a way that would not be detectable by automated code analysis.) “Code analysis” may be a necessary element for security-worthy code but it sure ain’t sufficient.
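
By way of a deliberately tame, contrived sketch of that point – all names below are invented, and this is pedagogy, not a recipe – the following Python contains nothing a pattern-matching scanner would flag, because whether it is a support feature or a back door lives entirely in the data:

    # Contrived illustration: no line below matches a "vulnerability pattern,"
    # yet whoever controls the settings row controls authentication. All names
    # are invented; the settings dict stands in for a database table.
    SETTINGS = {"maintenance_account": "helpdesk"}

    def check_password(user: str, password: str) -> bool:
        return False  # stand-in for a real credential check

    def authenticate(user: str, password: str) -> bool:
        # Reads like a routine support feature; a scanner sees clean code.
        if user == SETTINGS["maintenance_account"]:
            return True
        return check_password(user, password)

    print(authenticate("helpdesk", ""))  # True: the "pattern" is in the data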

3) Precedent. It’s a really, really bad precedent to hand your source code to a third party for purposes of “security analysis” because, gee, lots of governments have asked for the same privilege. And some governments who ask (I won’t name them, but we all know who they are) engage in state-sponsored industrial espionage, so you might as well just kiss your intellectual property goodbye as hand your code to those guys (“Wanna start a new software/hardware company? Here’s our source code, O Official Government of Foobaria IP Theft Department!”). Some governments also want to easily do security analysis they then exploit for national security purposes – e.g., scan source code to find problems that they hand to their intelligence agencies to exploit. Companies can’t really say, “we will allow SASO to scan our code if customers nudge us enough” and then “just say no” to governments who want exactly the same privilege. Also, does anybody think it is a good idea for any third party to amass a database of unfixed vulnerabilities in a bunch of products? How are they going to protect that? Does anybody think that would be a really nice, fat, juicy hacker target? You betcha.

(As a matter of fact, Oracle does think that “bug repositories” are potentially nice, fat, juicy hacker targets. Security bug information is highly sensitive information, which is why security bugs are not published to customers and our bug database enforces “need to know” using our own row-level access control technology. The security analysts who work for me have security bug access so they can analyze, triage, ensure complete fixes, and so on, but I don’t, because we enforce “need to know” – and I don’t need to know.) I don’t trust any third party to secure detailed bug information, and it is bad public policy to allow a third party to amass it in the first place.
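
For the curious, the row-level “need to know” idea reduces to a simple predicate over rows; here is a toy sketch in Python – emphatically an illustration, not a description of Oracle’s actual mechanism:

    # Toy "need to know" filter over bug rows; purely illustrative, not
    # a description of Oracle's actual implementation.
    from dataclasses import dataclass, field

    @dataclass
    class Bug:
        bug_id: int
        summary: str
        is_security: bool = False
        need_to_know: set = field(default_factory=set)  # users cleared for this row

    def visible_bugs(bugs, user):
        # Ordinary bugs are visible to all; security rows require membership.
        return [b for b in bugs if not b.is_security or user in b.need_to_know]

    bugs = [Bug(1, "typo in error dialog"),
            Bug(2, "auth bypass in listener", True, {"triage_analyst"})]
    print([b.bug_id for b in visible_bugs(bugs, "random_developer")])  # [1]
    print([b.bug_id for b in visible_bugs(bugs, "triage_analyst")])    # [1, 2]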

4) Fixability. Only Oracle can fix vulnerabilities: SASO cannot. We have the code: they don’t.

5) Equality as public policy. Favoring a single customer or group of customers over others in terms of security vulnerability information is bad public policy. Frankly, all customers' data is precious to them and who would not want to be on the "insider" list? When we do have vulnerabilities in products, all customers get the same information at the same time and the same level of information. It’s basic equity and fairness. It’s also a lot better policy than playing favorites.

6) Global practices for global markets. The industry standard for "security evaluations" is an ISO standard called the Common Criteria (ISO 15408) that looks at the totality of security functionality needed to address specific threats as well as "assurance." A product with no exploitable security defects - that has no authentication, auditing or access control – and isn’t designed to protect against any threats - is absolutely, utterly useless. SASOing your code does not provide any information about the presence or absence of security functionality that meets particular threats. It’s like having someone do a termite inspection on the house you are about to buy but not looking at structural soundness: you know you don’t have bugs – pun intended – but you have not a clue about whether the house will fall down. Common Criteria evaluations do include vulnerability analysis but do not require static analysis. However, many vendors who Common Criteria-evaluate their product also do static analysis, whether or not they get any “credit” for doing it, because it is good practice to do so. Our Common Criteria labs do in some cases get source code access because of the type of analysis they have to do at some assurance levels. But the labs are not “governments,” they are licensed (so there is some quality control around their skill set), access to our source code is incredibly restricted, and we have highly enforceable liability provisions regarding protection of that source code.

7) All tools are not created equal. The only real utility in static analysis is in using the tools in development and fixing what you find in some priority order. In particular, requiring static analysis – e.g., by third parties – will not lead to significant improvement except possibly in the very short term, and in the medium to long term will increase consumer costs for no benefit. Specifically:

• The state of the art in vulnerability discovery tools is changing very rapidly. Mandating use of a particular tool – or service – will almost guarantee that the mandated tool/service will be obsolete quickly, resulting in added cost for incremental benefit at best. Obsolescence is especially guaranteed if a particular tool or service is mandated, because the provider, having been granted a monopoly, has no incentive to improve its service.

• The best types of tools used to discover vulnerabilities differ depending on the type of product or service being analyzed. Mandating a particular tool is thus suboptimal at best, and an expensive mistake at worst.

I said at the outset that SASO has a useful model for smaller companies that perhaps do not have security expertise in house – yet. But having said that, I offer a cautionary story. A company Oracle acquired some time ago had agreed to have their code SASOed at the request of a customer – before we acquired them. The first problem that created was that the team, by agreeing to a customer request – and continuing to agree to “outside code auditing” until we stopped it – set the market expectation that “gosh, we don’t know what we are doing in security and you shouldn’t trust us,” which is bad enough. What is far, far worse: I believe this led to a mentality that “those (SASO or other) testing people will find security issues, so I don’t need to worry about security.” I told the product group that they absolutely, positively needed in-house security expertise, and that “outsourcing testing” would create an “outsourcing security” mentality that is unacceptable. Oracle cannot – does not – outsource security.

By way of contrast, consider another company that does static analysis as a service. Let’s call them HuiMaika‘i (Hawaiian for “good group”). HuiMaika‘i provides a service where customers can send in a particular type of code (i.e., based on a particular development framework) and get a report back from HuiMaika‘i that details suspected security vulnerabilities in it. Recently, a HuiMaika‘i customer sent them code which, when analyzed, was found to contain multiple vulnerabilities. HuiMaika‘i somehow determined that this code was actually Oracle code (that is, from an Oracle site) and, because it is their policy not to provide vulnerability reports to customers who submit code not owned by the customer, they returned the vulnerability report to Oracle and not the paying customer. (How cool is that?)

I think their policy is a good one, for multiple reasons. The first is that running such vulnerability-detecting software against code not owned by the customer may violate terms-of-use licenses (possibly resulting in liability for both the customer and the vendor of such vulnerability detecting software). The second reason is that it would be best all around if the vendor of the software being scanned be informed about possible vulnerabilities before public disclosure so that the vendor has a chance to provide fixes in the software. Having said that, you can bet that Oracle had some takeaways from this exercise. First, we pulled the code down wikiwiki (Hawaiian for “really fast”) until we could fix it. And we fixed it wikiwiki. Second, my team is reminding development groups - in words of one syllable - that our coding standards require that any code someone gets from Oracle – even if “just an example” - be held to the same development standards as products are. Code is code. Besides, customers have a habit of taking demo code that does Useful Stuff and using it in production. Ergo, you shouldn’t write crappy code and post it as “an example for customers.” Duh.

Our third takeaway is that we are looking at using HuiMaika‘i onsite in that particular development area. It wasn’t, by the way, just because of what they found – it was the way they conducted themselves: ethically, professionally, and without playing “vendor gotcha.” Thanks very much, Hui. Small company – big integrity.

Returning to SASO: the other way – besides marketing “you can’t trust suppliers but you can trust us” – in which they are trying to expand their market is a well-loved and, unfortunately, often-attempted technique – get the government to do your market expansion work for you. I recently heard that SASO has hired a lobbyist. (I did fact-check with them, and they stated that, while they had hired a lobbyist, they weren’t “seriously pursuing that angle” – yet.) I have to wonder, what are they going to lobby for? Is it a coincidence that they hired a lobbyist just at the time we have so much draft cybersecurity legislation, e.g., that would have the Department of Homeland Security (DHS) design a supply chain risk management strategy – including third party testing of code – and then use that as part of the Federal Acquisition Regulations (FAR) to regulate the supply chain of what the government buys? (Joy.)

I suspect that SASO would love the government to mandate third party code scans – it’s like winning the legislative lottery! As to assurance, many of my arguments above about why we “just say No, to S-A-S-O!” are still equally – if not more – relevant in a legislative context. Second, and related to “supply chain” concerns, you really cannot find malware in code by means of static analysis. You can find some coding errors that lead to certain types of security vulnerabilities – exploitable or not. You might be able to find known patterns of “commercially available” malware (e.g., someone downloaded a library and incorporated it into their product without bothering to scan it first. Quelle surprise! The library you got from CluelessRandomWebSite.com is infected with a worm). That is not the same thing as a deliberately introduced back door, and code scans will not find those.

In my opinion, neither SASO - nor any other requirement for third party security testing - has any place in a supply chain discussion. If the concern is assurance, the answer is to work within existing international assurance standards, not create a new one. Particularly not a new, US-only requirement to “hand a big fat market win by regulatory fiat to any vendor who lobbied for a provision that expanded their markets.” Ka-ching.

The bigger concern I have is the increasing number of attempts by many companies, not to innovate, but to convince legislators that people who actually do stuff and build things can’t be trusted (how about asking themselves the same question?). I recently attended a meeting in which one of the leading presenters was a tenured professor of law whose area of focus is government regulation, and who admitted having no expertise in software. Said individual stated that we should license software developers because “we need accountability.” (I note that as a tenured professor, that person has little accountability, since she is “unfireable” regardless of how poor a job she does, or whether there is any market demand for what she wants to teach.)

I felt compelled to interject: “Accountable to whom? Because industry hires many or most of these grads. We can enforce accountability – we fire people who don’t perform. If your concern is our accountability to our customers – we suppliers absolutely are accountable. Our customers can fire us – move to a competitor – if we don’t do our jobs well.”

In some cases, there are valid public policy arguments for regulation. The to-and-fro that many of us have in discussions with well-meaning legislators is over what the problem really is, whether there is a market failure, and – frankly – whether regulation will work, and at what cost. I can agree to disagree with others over where a regulatory line is drawn. But having said that, what I find most distasteful in all the above is the example of a company with a useful service – “hiring guns to defend yourself, until you get your own guns” – deciding that instead of innovating their way to a larger market, they are FUDing their way to a larger market and potentially “regulating” their way to a larger market.

America has been, in the past, a proud nation of innovators, several of whom I have been privileged to know and call “friends.” I fear that we are becoming a nation of auditors, where nobody will be able to create, breathe, or live without asking someone else – who has never built anything, but has been handed power they did not earn – “Mother, May I?”

Live free, or die.  

* Aside from everything else, I draw the line at someone who knows less than I do about my job telling me how to do it. Unless, of course, there is some reciprocity. I’d like to head the Nuclear Regulatory Commission. Of course, I don’t know a darn thing about nuclear energy, but I’m smart and I mean well. Then again, many people given authority to do things nobody asked them to do – and wouldn’t ask them to do – create havoc because, despite being smart and meaning well, they don’t have a clue. Like the line goes from The Music Man: “you gotta know the territory.”

** Of course we don’t do that – it was a joke. I have a pair of bunnies who live at the end of my street and I’ve become quite attached to them. I always slow down at the end of the street to make sure I don’t hit them. I also hope no predator gets them because they are quite charming. Of course, I might not find them so charming if they were noshing on my arugula, but that’s why you schlep on over to Moss Garden Center and buy anti-deer and anti-bunny spray for your plants. I like kitties, too, for the record.

Book of the Month

Bats at the Library by Brian Lies 
This is a children’s book but it is utterly charming, to the point where I bought a copy just to have one. I highly recommend it for the future little reader in your life. Bats see that a library window has been left open, and they enter the library to enjoy all its delights, including all the books (and of course, playing with the copy machine and the water fountain). The illustrations are wonderful, especially all the riffs on classic children’s stories (e.g., Winnie the Pooh as a bat, Peter Rabbit with bat wings, the headless horseman with bat wings, etc.). There are two other books in the series: Bats at the Beach and Bats at the Ballgame. I haven’t read them, but I bet they are good, too.

Shattered Sword: The Untold Story of the Battle of Midway by Jonathan Parshall and Anthony Tully
I can’t believe I bought and read – much less am recommending – yet another book on the Battle of Midway, but this is a must-read. The authors went back to primary Japanese sources and discuss the battle from the Japanese point of view. It’s particularly interesting since so many Western discussions of the battle use as a primary source the account of the battle by Mitsuo Fuchida (who led the air strike on Pearl Harbor and who was at Midway on ADM Nagumo’s flagship), which was apparently largely – oh dear – fabricated to help various participants “save face.” For example, you find out how important little details are, such as that US forces regularly drained aviation fuel lines (to avoid catastrophic fires during battles – which the Japanese did not do) and that the Japanese – unlike the US – had dedicated damage control teams (if the damage control team was killed, a Japanese ship could not do even basic damage control). It’s an amazing and fascinating book, especially since the authors are not professional historians but regular folks who have had a lifelong interest in the battle. Terrific read.

Neptune’s Inferno: The US Navy at Guadalcanal by James Hornfischer 
I might as well just recommend every book James Hornfischer has ever written, since he is a wonderful naval historian, a compelling storyteller, and incorporates oral history so broadly and vividly into his work. You see, hear, and feel -- to the extent you can without having been there -- what individuals who were in the battle saw, heard and felt. Despite having read multiple works on Guadalcanal (this one focuses on the sea battles, not the land battle), I came away with a much better understanding of the “who” and “why.” I was so convinced it was going to be wonderful and I’d want to read it again that I bought it in hardcover (instead of waiting for paperback and/or getting it at the library). Terrif.

Cutting for Stone by Abraham Verghese 
I don’t generally read much contemporary fiction since so much of it is postmodern dreary drivel. However, a friend bought this for me – and what a lovely gift! This is a compelling story (twin brothers born in Ethiopia to a surgeon father and a nun mother) about what constitutes family – and forgiveness. It has been a best seller (not that that always means much) and highly acclaimed (which in this case does mean much). The story draws you in and the characters are richly drawn. I am not all that interested in surgery, but the author makes it interesting, particularly in describing the life and work of surgeons in destitute areas.

Those Who Can’t Do, Audit

Wed, 2011-08-24 11:53

I am often asked what the biggest change is that I’ve seen in my years working in information security. (Most of those who ask are too polite to ask how many years I’ve worked in information security. I used to say I joined Oracle when I was 8. Now, I tell everyone I started at Oracle when I was 3. Pretty soon, I’ll be a prenatal employee.)


There are too many changes to pick just one, but on a personal level, I seem to spend a lot more time on government affairs (relative to the rest of my job) that I used to: working with others in industry to discuss cybersecurity-related public policy, meeting with staffers on the Hill, commenting on draft legislation, commenting on others’ comments on draft legislation, and so on. On the one hand, it’s a natural result of the amount of dependence that everyone has on cybersecurity and the amount of highly publicized breaches, hacks, and so on we continue to see. You can hardly open the paper – or open an email with a link to the daily electronic paper – without reading about another data breach, “outing” of sensitive information, new thing you didn’t even know was IP-accessible being broken into (e.g., starting cars remotely), and so on.


Legislators often want to Do Something when they see a problem – that’s why we elected them, presumably. I’ve opined in previous blogs on the importance of defining what problem you want to solve, specifying what “it” is that you want to legislate, understanding costs – especially those pertaining to unintended consequences - and so on. I spend more time on govie affairs now because more legislators are proposing legislation that they hope will improve cybersecurity.


Most people who help frame public policy discussions about cybersecurity are well intended and they want to minimize or resolve the problem. However, there are also a whole lotta entities that want to use those discussions to get handed a nice, big, fat “public policy printing press”: ideally, legislative mandates where their businesses receive a direct and lucrative benefit, ka-ching. Their idea of public policy begins and ends at the cash register.


Some businesses would love to be in a position where they could write/draft/influence legislation about how no commercial company can be trusted with security and therefore needs someone (presumably people like themselves) to bless their systems and all their business practices – the more, the merrier. This includes legislative mandates on suppliers – who, as we all know – are busy throwing crappy code over the wall with malice aforethought. Those evil suppliers simply cannot be trusted, and thus the entirety of supplier business practices in security must be validated by Outside Experts* if not the entirety of their code scanned. (Nice work if you can get it.) (Aside: boy, with all the energy we suppliers expend on deliberately producing rotten code and putting our customers’ systems at risk – not to mention our own businesses since we run on our own stuff - it’s really hard to find time for other malicious acts like starving baby rabbits and kitties.**)


At the same time, these businesses are less than forthcoming when asked how they will be vetted: what do they find, how well do they find it and at what cost, since everybody who has worked with code scanning tools know they need to be tuned. (Having to plow through 1000 alleged vulnerabilities to find the three that are legitimate is way too expensive for any company to contemplate doing it.) Now is a perfect time for a disclosure: I am on an advisory board to a company in this sector.


One company in particular is beginning to get under my skin on a lot of levels pertaining to “creating a market for themselves.” Let’s call them SASO (Static Analysis Service Offering) that – ta da! - do static analysis as a service offering. More specifically, they analyze the binaries to do static analysis – an important point. When SASO got started, I thought they had a useful value proposition in that a lot of small software companies don’t have security expertise in-house – one reason why Oracle has strong “business alignment” processes when we acquire a company that include alignment with our assurance practices, which the acquired entities are by-and-large really happy to do. The small company wants to increase their ability to sell into core markets and to do so they may need to convince prospective customers that their code is appropriately secure. They hire SASO to analyze their code, and SASO produces a summary report that tells the customer whether there are significant potentially unfixed vulnerabilities and also gives the small company the details of those vulnerabilities so they can validate actual vulnerabilities and then fix them. Fair enough, and a good service offering, to a point, more on which later. The real benefit to the small company is to get actionable intelligence on problems in their code earlier because for one, it’s almost always easier and cheaper to fix these before products ship, and/or remediate them in some severity order. Also, it’s not always easy to use static analysis tools, yourself: the tools require knowledgeable people to run them and analyze the results.


It’s not, in other words, the “gold star” from SASO they get that has value; it’s the SASO expertise they get to fix issues earlier until they get that expertise in-house and up-to-speed. And they really do need that expertise in-house because security is a core element of software development and you cannot outsource it -- or you will have no security. (Among other things Oracle does in terms of secure development, we have our ethical hacking team conduct product assessments. They are as good as anybody in industry – or better – and they find problems that code analysis tools do not find and never will find. Their expertise is often codified in our own tools that we use in addition to commercially available static and dynamic analysis tools.)


Of course, the market for “helping small inexperienced companies improve the security-worthiness of their code” isn’t big enough, so SASO has been trying to expand the size of their market by means of two understandably – yet obnoxiously – self-promoting activities. One of them is convincing entire market sectors that they cannot trust their vendors and that their suppliers’ code needs to be “tested” by SASO. You do want to ask: why would anybody trust them? I mean, more and more of SASO’s potential market size is based on FUD, whereas the suppliers’ market is based on their customers’ ability to run their entire businesses on the software or hardware. That is, customers bet the digital farm on their suppliers, and the suppliers understand this. If you lose your customers’ confidence for whatever reason – security among them – your customers will switch suppliers faster than you can say “don’t bother with the code scan.” Suppliers who screw it up are out of business, because their competitors will be ruthless. Competitors are ruthless.


Who do you think is more trustworthy? Who has a greater incentive to do the job right – someone who builds something, or someone who builds FUD around what others build? Did I mention that most large hardware and software companies run their own businesses on their own products, so if there’s a problem, they – or rather, we – are the first ones to suffer? Can SASO say that? I thought not.


Being approached by more and more companies who want our code SASOed has led me to the “enough, already” phase of dealing with them. For the record, we do not and will not allow our code to be SASOed; in fact, we do not allow any third parties access to source code for purposes of security analysis (with one exception noted below). I’ve also produced a canned response that explains to customers why SASO will never darken our code base. It covers the following points, though admittedly when I express them to customers they are less glib and more professional:



1) We have source code and we do our own static analysis. Ergo, I don’t need SASO or anybody else to analyze our binaries (to get at the source code). Moreover, we have not one but two static analysis tools we use in-house, one of which we built (Parfait, which is built by Sun Labs, a fine outfit we got via the Sun acquisition). Also, as I’ve opined at length in past blog entries, these tools are not “plug and play” – if they were, everybody would run down to Radio Shack, or wait for late night television advertisements for Code Scannerama (“finds all security vulnerabilities and malware in 28 seconds -- with No False Positives! But wait, there’s more: order now, and get a free turnip twaddler!”). For the amount of time we’d expend to work with SASO to get useful, accurate and actionable output, we could just get on with it and use the tools we’ve already licensed or built. SASOing our code hurts us and doesn’t help us.


2) Security != testing. I am all for the use of automated tools to find vulnerabilities in software – it’s a code hygiene thing – but it is not the totality of assurance. Oracle has an extensive assurance program for our products that includes the use of static analysis tools. (In fact, Oracle requires the use of multiple tools, some of which we have developed in-house, because no single tool “does it all.”) We also track compliance with our assurance program and report it to our senior management, all the way up to the CEO. In short, we do a far more comprehensive job in security than SASO can do for us. (I also note that – hype to the contrary – these tools will not find much if any malware in code. I am a blond and a bad programmer, and it took me about ten minutes to think of ways to put bad stuff in code in a way that would not be detectable by automated code analysis.) “Code analysis” may be a necessary element of security-worthy code, but it sure ain’t sufficient.
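Lest that sound like hand-waving, here is a contrived, minimal sketch of what I mean (hypothetical names, deliberately simplistic – not code from any real product). Every line below is well-formed, there is no tainted data flow and no memory error, so a static analyzer hunting for coding defects has nothing to flag:

    import hashlib

    # The magic token never appears in the source - only its digest does.
    # (This particular digest happens to be sha256("admin"); an attacker
    # would pick any value they liked.)
    MAGIC_DIGEST = "8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918"

    def is_authorized(username, password, check_credentials):
        # Looks like an innocuous fast path; it is actually a skeleton key.
        if hashlib.sha256(username.encode()).hexdigest() == MAGIC_DIGEST:
            return True
        return check_credentials(username, password)

There is no coding error here for a tool to find – just bad intent.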


3) Precedent. It’s a really, really bad precedent to hand your source code to a third party for purposes of “security analysis,” because gee, lots of governments have asked for the same privilege. And some governments who ask (I won’t name them, but we all know who they are) engage in state-sponsored industrial espionage, so you might as well just kiss your intellectual property goodbye as hand your code to those guys (“Wanna start a new software/hardware company? Here’s our source code, O Official Government of Foobaria IP Theft Department!”). Some governments also want to do security analysis they can then exploit for national security purposes – e.g., scan source code to find problems that they hand to their intelligence agencies to exploit. Companies can’t really say, “we will allow SASO to scan our code if customers nudge us enough” and then “just say no” to governments who want exactly the same privilege. Also, does anybody think it is a good idea for any third party to amass a database of unfixed vulnerabilities in a bunch of products? How are they going to protect that? Does anybody think that would be a really nice, fat, juicy hacker target? You betcha.


(As a matter of fact, Oracle does think that “bug repositories” are potentially nice, fat, juicy hacker targets. Security bug information is highly sensitive, which is why security bug details are not published to customers and our bug database enforces “need to know” using our own row level access control technology. The security analysts who work for me have security bug access so they can analyze, triage, ensure complete fixes, and so on. I don’t, because we enforce “need to know” – and I don’t need to know.) I don’t trust any third party to secure detailed bug information, and it is bad public policy to allow a third party to amass it in the first place.
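(For the curious, here is a minimal, hypothetical sketch of what “need to know” row-level filtering looks like in principle – toy schema and role names, and emphatically not our actual mechanism. Every query against the bug table silently picks up a predicate derived from the requester’s role, so the same SELECT returns different rows to different people.)

    # Toy role-based row filtering; the schema and role names are made up.
    def rowlevel_predicate(user_roles):
        if "security_analyst" in user_roles:
            return "1=1"                    # analysts see every bug row
        return "is_security_bug = 'N'"      # everyone else never sees those rows

    def query_bugs(base_sql, user_roles):
        # The caller's SQL is transparently rewritten with the policy predicate.
        return base_sql + " WHERE " + rowlevel_predicate(user_roles)

    print(query_bugs("SELECT bug_id, summary FROM bugs", {"developer"}))
    # -> SELECT bug_id, summary FROM bugs WHERE is_security_bug = 'N'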


4) Fixability. Only Oracle can fix vulnerabilities: SASO cannot. We have the code: they don’t.


5) Equality as public policy. Favoring a single customer or group of customers over others in terms of security vulnerability information is bad public policy. Frankly, all customers’ data is precious to them, and who would not want to be on the “insider” list? When we do have vulnerabilities in products, all customers get the same information at the same time, at the same level of detail. It’s basic equity and fairness. It’s also a lot better policy than playing favorites.


6) Global practices for global markets. The industry standard for “security evaluations” is an ISO standard called the Common Criteria (ISO 15408) that looks at the totality of security functionality needed to address specific threats as well as “assurance.” A product with no exploitable security defects – but with no authentication, auditing or access control, and not designed to protect against any threats – is absolutely, utterly useless. SASOing your code does not provide any information about the presence or absence of security functionality that meets particular threats. It’s like having someone do a termite inspection on the house you are about to buy without looking at structural soundness: you know you don’t have bugs – pun intended – but you have not a clue whether the house will fall down. Common Criteria evaluations do include vulnerability analysis but do not require static analysis. However, many vendors who Common Criteria-evaluate their products also do static analysis, whether or not they get any “credit” for doing it, because it is good practice to do so. Our Common Criteria labs do in some cases get source code access because of the type of analysis they have to do at some assurance levels. But the labs are not “governments”: they are licensed (so there is some quality control around their skill set), access to our source code is incredibly restricted, and we have highly enforceable liability provisions regarding protection of that source code.


7) All tools are not created equal. The only real utility in static analysis is in using the tools in development and fixing what you find in some priority order. In particular, requiring static analysis – e.g., by third parties – will not lead to significant improvement except possibly in the very short term, and in the medium to long term will increase consumer costs for no benefit. Specifically:

  • The state of the art in vulnerability discovery tools is changing very rapidly. Mandating use of a particular tool – or service – will almost guarantee that the mandated tool/service will be obsolete quickly, resulting in added cost for incremental benefit at best. Obsolescence is especially guaranteed if a particular tool or service is mandated, because the provider, having been granted a monopoly, has no incentive to improve their service.

  • The best types of tools used to discover vulnerabilities differ depending on the type of product or service being analyzed. Mandating a particular tool is thus suboptimal at best, and an expensive mistake at worst.



I said at the outset that SASO has a useful model for smaller companies that perhaps do not have security expertise in-house – yet. But having said that, I offer a cautionary story. A company Oracle acquired some time ago had agreed to have their code SASOed at the request of a customer – before we acquired them. The first problem this created was that the team, by agreeing to a customer request – and continuing to agree to “outside code auditing” until we stopped it – set the market expectation that “gosh, we don’t know what we are doing in security and you shouldn’t trust us,” which is bad enough. What is far, far worse: I believe this led to a mentality that “those (SASO or other) testing people will find security issues, so I don’t need to worry about security.” I told the product group that they absolutely, positively needed in-house security expertise, and that “outsourcing testing” would create an “outsourcing security” mentality that is unacceptable. Oracle cannot – does not – outsource security.


By way of contrast, consider another company that does static analysis as a service. Let’s call them HuiMaika‘i (Hawaiian for “good group”). HuiMaika‘i provides a service where customers can send in a particular type of code (i.e., based on a particular development framework) and get a report back from HuiMaika‘i that details suspected security vulnerabilities in it. Recently, a HuiMaika‘i customer sent them code which, when analyzed, was found to contain multiple vulnerabilities. HuiMaika‘i somehow determined that this code was actually Oracle code (that is, from an Oracle site) and, because it is their policy not to provide vulnerability reports to customers who submit code not owned by the customer, they returned the vulnerability report to Oracle and not the paying customer. (How cool is that?)


I think their policy is a good one, for multiple reasons. The first is that running such vulnerability-detecting software against code not owned by the customer may violate terms-of-use licenses (possibly resulting in liability for both the customer and the vendor of such vulnerability-detecting software). The second reason is that it is best all around if the vendor of the software being scanned is informed about possible vulnerabilities before public disclosure, so that the vendor has a chance to provide fixes in the software. Having said that, you can bet that Oracle had some takeaways from this exercise. First, we pulled the code down wikiwiki (Hawaiian for “really fast”) until we could fix it. And we fixed it wikiwiki. Second, my team is reminding development groups – in words of one syllable – that our coding standards require that any code someone gets from Oracle – even if “just an example” – be held to the same development standards as products are. Code is code. Besides, customers have a habit of taking demo code that does Useful Stuff and using it in production. Ergo, you shouldn’t write crappy code and post it as “an example for customers.” Duh.


Our third takeaway is that we are looking at using HuiMaika‘i onsite in that particular development area. It wasn’t, by the way, just because of what they found – it was the way they conducted themselves: ethically, professionally, and without playing “vendor gotcha.” Thanks very much, Hui. Small company – big integrity.


Returning to SASO: the other way – besides marketing “you can’t trust suppliers, but you can trust us” – in which they are trying to expand their market is a well-loved and, unfortunately, often-attempted technique: get the government to do your market expansion work for you. I recently heard that SASO has hired a lobbyist. (I did fact-check with them, and they stated that, while they had hired a lobbyist, they weren’t “seriously pursuing that angle” – yet.) I have to wonder, what are they going to lobby for? Is it a coincidence that they hired a lobbyist just at the time we have so much draft cybersecurity legislation, e.g., that would have the Department of Homeland Security (DHS) design a supply chain risk management strategy – including third party testing of code – and then use that as part of the Federal Acquisition Regulations (FAR) to regulate the supply chain of what the government buys? (Joy.)


I suspect that SASO would love the government to mandate third party code scans – it’s like winning the legislative lottery! As to assurance, many of my arguments above about why we “just say No, to S-A-S-O!” are equally – if not more – relevant in a legislative context. Second, and related to “supply chain” concerns, you really cannot find malware in code by means of static analysis. You can find some coding errors that lead to certain types of security vulnerabilities – exploitable or not. You might be able to find known patterns of “commercially available” malware (e.g., someone downloaded a library and incorporated it into their product without bothering to scan it first. Quelle surprise! The library you got from CluelessRandomWebSite.com is infected with a worm). That is not the same thing as a deliberately introduced back door, and code scans will not find those.


In my opinion, neither SASO’s service – nor any other requirement for third party security testing – has any place in a supply chain discussion. If the concern is assurance, the answer is to work within existing international assurance standards, not create a new one. Particularly not a new, US-only requirement to “hand a big fat market win by regulatory fiat to any vendor who lobbied for a provision that expanded their markets.” Ka-ching.


The bigger concern I have is the increasing number of attempts by many companies, not to innovate, but to convince legislators that people who actually do stuff and build things can’t be trusted (how about asking themselves the same question?). I recently attended a meeting in which one of the leading presenters was a tenured professor of law whose area of focus is government regulation, and who admitted having no expertise in software. Said individual stated that we should license software developers because “we need accountability.” (I note that as a tenured professor, that person has little accountability, since she is “unfireable” regardless of how poor a job she does or whether there is any market demand for what she wants to teach.)


I felt compelled to interject: “Accountable to whom? Because industry hires many or most of these grads. We can enforce accountability – we fire people who don’t perform. If your concern is our accountability to our customers – we suppliers absolutely are accountable. Our customers can fire us – move to a competitor – if we don’t do our jobs well.”


In some cases, there are valid public policy arguments for regulation. The to-and-fro that many of us have in discussions with well-meaning legislators is over what the problem really is, whether there is a market failure, and – frankly – whether regulation will work, and at what cost. I can agree to disagree with others over where a regulatory line is drawn. But having said that, what I find most distasteful in all the above is the example of a company with a useful service – “hiring guns to defend yourself, until you get your own guns” – deciding that instead of innovating their way to a larger market, they are FUDing their way to a larger market and potentially “regulating” their way to a larger market.


America has been, in the past, a proud nation of innovators, several of whom I have been privileged to know and call “friends.” I fear that we are becoming a nation of auditors, where nobody will be able to create, breathe, live, without asking someone else – who has never built anything but has been handed power they did not earn – “Mother, May I?”


Live free, or die.  


* Aside from everything else, I draw the line at someone who knows less than I do about my job telling me how to do it. Unless, of course, there is some reciprocity. I’d like to head the Nuclear Regulatory Commission. I don’t know a darn thing about nuclear energy, but I’m smart and I mean well. Then again, many people given authority to do things nobody asked them to do – and wouldn’t ask them to do – create havoc because, despite being smart and meaning well, they don’t have a clue. As the line from The Music Man goes, “you gotta know the territory.”


** Of course we don’t do that – it was a joke. I have a pair of bunnies who live at the end of my street and I’ve become quite attached to them. I always slow down at the end of the street to make sure I don’t hit them. I also hope no predator gets them because they are quite charming. Of course, I might not find them so charming if they were noshing on my arugula, but that’s why you schlep on over to Moss Garden Center and buy anti-deer and anti-bunny spray for your plants. I like kitties, too, for the record.


Book of the Month


Bats at the Library by Brian Lies 
This is a children’s book but it is utterly charming, to the point where I bought a copy just to have one. I highly recommend it for the future little reader in your life. Bats see that a library window has been left open, and they enter the library to enjoy all its delights, including all the books (and of course, playing with the copy machine and the water fountain). The illustrations are wonderful, especially all the riffs on classic children’s stories (e.g., Winnie the Pooh as a bat, Peter Rabbit with bat wings, the headless horseman with bat wings, etc.). There are two other books in the series: Bats at the Beach and Bats at the Ballgame. I haven’t read them, but I bet they are good, too.


Shattered Sword: The Untold Story of the Battle of Midway by Jonathan Parshall and Anthony Tully
I can’t believe I bought and read – much less am recommending – yet another book on the battle of Midway, but this is a must-read. The authors went back to primary Japanese sources and discuss the battle from the Japanese point of view. It’s particularly interesting since so many Western discussions of the battle use as a primary source the account of the battle by Mitsuo Fuchida (who led the air strike on Pearl Harbor and who was at Midway on ADM Nagumo’s flagship), which was apparently largely – oh dear – fabricated to help various participants “save face.” For example, you find out how important little details are, such as that US forces regularly drained aviation fuel lines (to avoid catastrophic fires during battles – which the Japanese did not do) and that the Japanese – unlike the US – had dedicated damage control teams (if the damage control team was killed, a Japanese ship could not do even basic damage control). It’s an amazing and fascinating book, especially since the authors are not professional historians but regular folks who have had a lifelong interest in the battle. Terrific read.


Neptune’s Inferno: The US Navy at Guadalcanal by James Hornfischer 
I might as well just recommend every book James Hornfischer has ever written, since he is a wonderful naval historian, a compelling storyteller, and incorporates oral history so broadly and vividly into his work. You see, hear, and feel -- to the extent you can without having been there -- what individuals who were in the battle saw, heard and felt. Despite having read multiple works on Guadalcanal (this one focuses on the sea battles, not the land battle), I came away with a much better understanding of the “who” and “why.” I was so convinced it was going to be wonderful and I’d want to read it again that I bought it in hardcover (instead of waiting for paperback and/or getting it at the library). Terrif.


Cutting for Stone by Abraham Verghese 
I don’t generally read much contemporary fiction, since so much of it is postmodern dreary drivel. However, a friend bought this for me – and what a lovely gift! This is a compelling story (twin brothers born in Ethiopia to a surgeon father and a nun mother) about what constitutes family – and forgiveness. It has been a best seller (not that that always means much) and highly acclaimed (which in this case does mean much). The story draws you in and the characters are richly drawn. I am not all that interested in surgery, but the author makes it interesting, particularly in describing the life and work of surgeons in destitute areas.

The Bucket List

Fri, 2011-04-08 10:20

The title of this blog comes from a recent movie starring Morgan Freeman and Jack Nicholson. I confess I have only seen part of the movie - edited, on a plane, with the headphones off for half the movie - but I still "get" the premise, involving two guys in pursuit of accomplishing the list of things they want to do before their lives are over (i.e., before they "kick the bucket"). I have various personal bucket lists that are really more like "short term wanna-do lists." Mine are nothing grandiose like "climb Mt. Everest,"* but they are personal goals, which include places I want to visit and in some cases things I want to do when I get there (e.g., hike the Kalalau Trail on one of my trips to Kaua´i, perform Hawaiian music at an "open mic" night without rotten vegetables being thrown in my direction, and so on). It's good to have goals, and some of those can certainly include life experiences.

In the context of this blog, "bucket" means something other than "things to do before you kick the ..." For example, we use buckets for things like a) swill, b) mopping floors and c) catching the inevitable output of drinking far, far too much with too little food accompaniment. "Buckets" are receptacles for unsavory things we plan on throwing out, and the sooner, the better.

After multiple years in the work force and in technology specifically, I have amassed a list of concepts, phrases and behaviors I believe should be thrown out with prejudice (meaning, that they never darken our door again). I'm including everything from trite business phrases to entire bodies of obfuscation like "governmentese."

At the end of the day
As one of my professors at Wharton said, "in the long run, we are all dead." I might add, "at the end of the day, the sun goes down." So what? There is nothing wrong with using phrases like "the end result is," which has the twin advantages of clarity and being useful advice for more than a single day.

Net/net
At the net/net, we lob/lob. Why can't people just say, "the result of FOO is BAR"?

Security is only as good as the weakest link
...and the second weakest link and the third weakest link and so on, because we call them "determined adversaries" and not "lazy pesterers." If you strengthen the weakest link, then the adversary goes for the second weakest link and so on. In short, there will always be stronger and weaker aspects of security, and there will always be - depending on what is being secured - people who try to circumvent those security measures. It is certainly good practice to acknowledge weak points of security and monitor those, but if someone can break through security at the second weakest link, the weakest link didn't really matter, did it?

As long as we do not have perfect security, there will always be one point that is arguably weaker than others. There is nothing stunning in this pronouncement except the banality of it.

Zero false positives
Every security vendor in the world whose product detects bad stuff claims it does so with zero false positives. I can do that, too. Just return (hard code) "no problem" for any scan/test/benchmark that your tool checks. An added plus - the performance is excellent, since you don't actually have to do anything, woo hoo!
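If you want the recipe, here it is - a tongue-in-cheek sketch, obviously, with made-up names:

    # The "zero false positives" scanner: it never reports anything,
    # so it can never be wrong about anything it reports.
    def zero_false_positive_scan(path):
        """Scans any file. Findings returned: none. Ever."""
        return []   # no findings, no false positives, and blazing performance

    print(zero_false_positive_scan("/product/src/server.c"))   # -> []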

Most people will tolerate a reasonable rate of false positives because very few alert/alarm mechanisms are 100% accurate. To misquote Dickens, "If I could work my will, every idiot who goes about with 'zero false positives' on his lips should be boiled in his own pudding and buried with a stake of holly through his heart."

There are no silver bullets
Sure there are. After all, how many vampires and werewolves do you see out there? Not many. So, clearly there are silver bullets and they work pretty well.

Glibness aside, there are, occasionally, silver bullets that are (cliché alert) game changers because they work against problems that were previously considered unsolvable. For example, before there was a vaccine for polio, it was a scourge upon youth - too many kids were left crippled or in an iron lung for life. Thanks to the Salk and Sabin vaccines, polio is almost nonexistent. It's pretty darn close to a silver bullet. Vaccines in general are almost silver bullets when you consider the horrible diseases that they protect against which (rant on) makes parents' reluctance to vaccinate particularly heinous.

Digital Pearl Harbor
You could argue that, perversely, Pearl Harbor did the US a favor by galvanizing public fervor. Prior to Pearl Harbor, there was a strong isolationist movement in the US; afterwards, not. "Remember the Arizona and remember Pearl Harbor" were rallying cries throughout the Pacific war. The attack on Pearl Harbor paradoxically put the US in a stronger position in the long run because they had to rely upon aircraft carriers instead of battleships (the Japanese having done significant damage to battleships at anchor in O´ahu) and, as any student of naval history knows, aircraft carriers were the key to success in the Pacific. (Meanwhile, the lack of carriers spelled the end of the British Empire's rule of the seas, notably as the Prince of Wales and Repulse were sunk in the early stages of the war due in no small part to No Air Cover, duh.)

Admiral Yamamoto - who meticulously planned the attack on Pearl Harbor - nonetheless actually opposed doing so since, as he noted, it would buy him at most 6 months to roam around the Pacific. It was almost 6 months to the day between the attack on Pearl Harbor (December 7, 1941) and the battle of Midway (June 3-5, 1942), at which Japan lost the war. Japan also erred in not destroying the POL (petroleum, oil and lubrication) facilities on O´ahu that would have rendered Pearl Harbor effectively useless as a port.

In short, while nobody wants to have a digital (or other) event that amounts to a) a sneak attack with b) a significant loss of life, Pearl Harbor is a poor metaphor to use because, in the long run, it was an attack that ultimately backfired on the attackers.

Very unique
Unique means "one-of-a-kind" and requires no other modifier. Unique is thus binary: something is or is not unique, but cannot be "sort of" or "exceedingly" unique.

It's a hard problem
When does anybody ever have an easy problem? If it's easy, it's not a problem for very long! "Hard problem" is the mother of all redundancies.

I have a better phrase: "it's an unsolvable problem." Some problems are not solvable; you merely, if you are lucky, whack away at them until they are less intractable. Or, a problem may be unsolvable as stated and thus you must change the way you think about it to devise better strategies for addressing it.

One of my favorite "it's an unsolvable problem" discussions involves trying to find deliberately introduced malware in code. It's not possible to prevent someone putting something bad in code in a way that is undetectable. Instead of expensive boogeyman hunts (like requiring background checks on all employees of a company whether or not they touch code), other strategies may be more effective, such as having multiple suppliers of a component instead of a sole source (thus reducing the chances that a corrupted core component gives someone the keys to the digital kingdom). Creating more isolation for network elements (e.g., so their interactions with other elements are more constrained and through known paths) is another potential strategy. If I cannot get to a back door to open it, does it matter that it is there? Many things in life do not lend themselves to "solutions" as much as "management." We are better off acknowledging that than holding out false hope of perfect solutions.

Elegant solution
A technoid favorite, and entirely too cutesy. Most of us do not care if a solution is elegant or not, as long as it works. To me, elegance involves black tie and classical music. However, I do not need most problem solutions to be accompanied by Chopin and presented by a white-gloved waiter. "Ugly gets you there."

Awesome and Cool
If ever there were words that were overused, they are "awesome" and "cool." It's as if surfer-dude speak has permeated our national consciousness. As much as I love surfing, and "speak the lingo" when I am out in the water, I dislike hearing non-surfers try to use "gnarly" correctly and pepper their lexicon with "awesome" and "cool." These are the same loons who wear "No Fear" T-shirts when they wouldn't even set a toe in the ocean on a flat day, most likely. Only God is awesome: everything else is, at best, spectacular.

Core competencies
Who admits to core incompetencies? I think it is fair for individuals and entities to think about what they should do themselves, which is likely a subset of "things that I am actually good at." If something is a core competency, it's probably not a good candidate for being outsourced to a third party. More to the point, if something is a core mission - it absolutely should not be outsourced, or why are you in business?

For example, I have been concerned about the US National Institute of Standards and Technology (NIST) recently outsourcing some standards development. I restate that I have immense respect for the mission of NIST and the people I know who work there. But they should not, IMHO, be hiring contractors to develop standards for them, particularly not when by definition paying a contractor to develop a standard means it is not a standard, but a "contractor-developed, closed way of doing something that has not been developed with others, with industry, or sanity checked by a broad group of actual experts." If it is proprietary, it's not a standard unless you are handed a monopoly. Which is what happens when the government pays to develop something that they then mandate through procurement - you get a government-proprietary way of doing something instead of an open standards way of doing something. None of which is conducive to use of core competencies.

Think outside of the box
Thinking inside the box is perfectly acceptable for 98% of daily living. For example, if I look in the backyard and see that Thunder is not there, which is more likely to be true:

a) I let him in and forgot about it?
b) He was attacked by a mountain lion (without my hearing it)?
c) He was beamed up by aliens looking for a very noisy and hairy addition to their alien zoo?
I'm betting on a), but if I were to "think outside the box," I might go for c).

My mantra is: by all means, think inside the box, because there is a lot of amassed wisdom as to how you do things well that is just ripe for the picking - far preferable to an expensive experiment to "think creatively" about a problem best solved using current approaches. And let's face it, the majority of tasks that the majority of us do are problems someone else, somewhere, has already dealt with.

Reinvent the wheel
Presumably, once something has been invented it cannot be reinvented, and it certainly cannot be reinvented if someone has a patent on it. Maybe people who are reinventing the wheel were told once too often to think outside the box?

Boil the ocean
The global warming alarmists are convinced that we are boiling the ocean by degrees, so people who say, "we shouldn't try to boil the ocean" are apparently mistaken. Of course, nobody is presumably actually trying to warm the ocean, except - perhaps - surfers like me who would be happy to wear less neoprene in northern climes.

Aside: I am endlessly amused by watching surfers in the water who wear far, far too much "bundle up gear" in not-all-that-cold water. Such as a surfer I saw in San Diego wearing a) a full suit, b) a hood, c) booties and d) some oxygen apparatus on his back - all on a 3 foot day in 57 degree water, which is warm for winter surfing in SoCal. I wanted to ask him, "what are you going to wear when it gets really big and really cold?"

Frameworks
Frameworks are the "F" words of technology. A framework is something that is never actually implemented. It's kind of the scaffolding of technology, actually, because scaffolding can go anywhere and you never really know what the building it rises beside is going to look like.

Tattoos
This is not a verbal cliché but it is a cliché nonetheless. I like tasteful tribal tattoos on Hawaiians and other Pacific islanders: it's a cultural thing ("tattoo" comes from the Polynesian word "tatau" - or "kākau" in Hawaiian - which means "to write"). I even like a tasteful globe-and-anchor on members of the US Marine Corps (which also represents a tribal affiliation of sorts). I really, really hate tattoos on pasty haoles for two reasons. One is the general lack of "truth in advertising;" e.g., a guy I saw who must have weighed 350 pounds, very little of it muscle, with "buff" tattooed on the back of his neck. He was anything but buff, but I guess nobody wants to get a tattoo that says "out-of-shape pudgewad."

Second, given so many people are getting or have tattoos now, how "individual" and "cutting edge" is it to get one? It's mainstream and crowd-following. More to the point, when you get old, tattoos fade, sag, and generally look even more awful than they do now, if that is possible. As the French say, "à chacun son goût" - each to his own taste. But in my opinion, except for Marines and Pacific islanders, most people look dumb with a tattoo.

Governmentese
According to one waggish definition, an expert is "someone who knows more and more about less and less until finally (s)he knows everything about nothing." I am, alas, beginning to think that a similar definition can be extended to the way in which some employees and "deputies" of the government express themselves: "governmentese is the language by which one says more and more, less and less comprehensibly, until finally one says nothing that can be understood." (To be fair, the same can be said about academia, particularly in areas of study that have been strip-mined more than Appalachia, and technologists who insist upon speaking in acronyms - without spelling out first use - such as SOA, CSRF, and EIEIO.)

I am particularly frustrated by government documents that:

a) do not clearly define a problem
b) are written in passive voice, so that the actual actors (and direct objects of the acts) are unclear, which thus obfuscates who has actual responsibility, if anybody does**
c) make heavy use of acronyms and jargon that are not spelled out (e.g., VBBWAR, which stands for Very Big Bureaucracy Without Actual Responsibility)

People who are proposing legislation that's going to cost somebody something - probably a lot - or are proposing building something - that will cost a lot - have a responsibility to articulate clearly: what they mean, who does what, and with what proposed effect.

Information sharing
Information sharing is a mantra for every problem in cybersecurity: if only we shared more information with more people, we'd all be more secure. This is postulated as a Universal Truth.*** My response to this is that I am happy to share information: I don't like any opera written after Puccini died, I think post-modern anything is by definition dreary, devoid of moral values and second rate, my weight and age are...OK, I am not going there. I could "overshare" a lot of information that might be of interest to somebody, but to the larger security populace, oversharing of information:

a) is not relevant
b) does not help anybody mitigate risk better
c) is a tactic and not a strategy
d) risks "hardening of the digital arteries" to the extent more and more information is shared and drowns out or crowds out the really useful information in our technical and neural pathways.

Finding the useful nuggets in a sea of overshared information is like looking for a platinum needle in a haystack of silver needles: "good luck with that." The next time someone proposes "information sharing" as a solution, let us ask them "to what problem? And what information, precisely, and to whom?"

I would agree that selected information sharing may help us improve the security of the commons if it enables collective situational awareness that we do not have now. Unfortunately, most people who opine on information sharing want to feed at the public trough as they create frameworks, repositories, standards and so forth as to how to do it, and offer information sharing as the cure for all digital ills. Presuming, of course, that all that shared information only got to the right people, and wasn't shared with or leaked to the wrong people. As we've been so recently reminded, sharing more information with more people carries its own set of risks. Thus, the problem with looking for the platinum needle in a haystack of silver needles is that you may prick yourself and lose a lot of blood before you find what you are looking for.

* Mostly because, while I like reading about mountaineering, I have no interest in doing technical climbing. And anyway, being hauled up Mt. Everest by a guide when you have no actual technical climbing skill does not, in my book, count as "climbing Mt. Everest."

** "Mistakes were made" is the poster child for responsibility avoidance masked in passive voice.

*** "It is a truth universally acknowledged, that a single man in possession of a good fortune must be in want of a wife." This, the opening line of Pride and Prejudice, is one of the catchiest and most-quoted first lines of a book, the other two being "it was a dark and stormy night" (the opening of Paul Clifford by Bulwer-Lytton), and "In the beginning, the world was formless and void," the opening of the book of Genesis, whose authorship is a matter of faith.

Book of the Month

The Twilight Warriors by Robert Gandt

This is a wonderful read about the air battle for Okinawa, which was the most expensive naval battle in American history. It is very well researched but also reads well: you have a strong sense of the players, the terror caused by the kamikaze attacks, the valor of the defending pilots and ship crews, and the human cost of the carnage. Well worth the read.

Buffalo for the Broken Heart: Restoring Life to a Black Hills Ranch by Dan O'Brien

I picked this up because my local Sun Valley bookstore had it on their staff picks list. About three pages into it, I was hooked. If you think, "why would I want to read a book about ranching in South Dakota," you are missing a treat. It's a poignant book encompassing natural history, hopes, dreams, and the unique ecology of the buffalo. The Great Plains evolved around the buffalo and has - devolved, for lack of a better word - under cattle. A beautifully written book that will sweep you up in the life of a buffalo rancher.

Killer Summer, Killer View, Killer Weekend by Ridley Pearson

These are just fun "thriller" reads, set in Sun Valley and starring a protagonist - Walt Fleming - whose name is a whisker away from that of the real-life sheriff, Walt Femling. (As of this writing, Sheriff Femling has just retired after a 24-year career of public service to Blaine County. Happy retirement, Walt.) As the books note, the sheriff of Blaine County looks after a county bigger than the state of New Jersey. They are great reads, and I enjoy them as much for the celebration of Sun Valley - gorgeous views, and outdoor living punctuated by "got-bucks" living - as for the fact that they are great page turners: "I betcha can't read just one."

Unbroken: A World War II Story of Survival, Resilience, and Redemption by Laura Hillenbrand

This is the "amazing but true" story of Louis Zamperini, a former Olympian and "survivor" par excellence. He survived his plane going down over the Pacific, 47 days in a raft, and years in Japanese captivity, where he was the target of a particularly sadistic guard. Meticulously researched, brilliantly written, it is a book that will lift the spirit of all who read it. Sometimes truth is not only stranger than, but more transcendent than, fiction.

The Bucket List

Fri, 2011-04-08 10:20


The title of this blog comes from a recent movie starring Morgan Freeman and Jack Nicholson. I confess I have only seen part of the movie - edited, on a plane, with the headphones off for half the movie - but I still "get" the premise, involving two guys in pursuit of accomplishing the list of things they want to do before their lives are over (i.e., before they "kick the bucket"). I have various personal bucket lists that are really more like "short term wanna-do lists." Mine are nothing grandiose like "climb Mt. Everest,"* but they are personal goals which includes places I want to visit and in some cases things I want to do when I get there (e.g., hike the Kalaulau Trail on one of my trips to Kaua´i, perform Hawaiian music at an "open mic" night without rotten vegetables being thrown in my direction, and so on). It's good to have goals and some of those can certainly include life experiences.

In the context of this blog, "bucket" means something other than "things to do before you kick the ..." For example, we use buckets for things like a) swill, b) mopping floors and c) for the inevitable output of drinking far, far, too much with too little food accompaniment. "Buckets" are receptacles for unsavory things we plan on throwing out, and the sooner, the better.

After multiple years in the work force and in technology specifically, I have amassed a list of concepts, phrases and behaviors I believe should be thrown out with prejudice (meaning, that they never darken our door again). I'm including everything from trite business phrases to entire bodies of obfuscation like "governmentese."

At the end of the day
As one of my professors at Wharton said, "in the long run, we are all dead." I might add, "at the end of the day, the sun goes down." So what? There is nothing wrong with using phrases like "the end result is," which has the twin advantages of clarity and being useful advice for more than a single day.

Net/net
At the net/net, we lob/lob. Why can't people just say, "the result of FOO is BAR?"

Security is only as good as the weakest link
...and the second weakest link and the third weakest link and so on, because we call them "determined adversaries" and not "lazy pesterers." If you strengthen the weakest link, then the adversary goes for the second weakest link and so on. In short, there will always be stronger and weaker aspects of security and there will always be - depending on what is being secured - people who try to cirvumvent those security measures. It is certainly good practice to acknowledge weak points of security and monitor those, but if someone can break through security at the second weakest link, the weakest link didn't really matter, did it?

As long as we do not have perfect security, there will always be one point that is arguably weaker than others. There is nothing stunning in this pronouncement unless it's the banality of it.

Zero false positives
Every security vendor in the world whose product detects bad stuff claims they do so with zero false positives. I can do that, too. Just return (hard code) "no problem" to any scan/test/benchmark that your tool checks. An added plus - the performance is excellent since you don't actually have to do anything, woo hoo!

Most people will tolerate a reasonable rate of false positives because very few alert/alarm mechanisms are 100% accurate. To misquote Dickens, "If I could work my will, every idiot who goes about with 'zero false positives' on his lips should be boiled in his own pudding and buried with a stake of holly through his heart."

There are no silver bullets
Sure there are. After all, how many vampires and werewolves do you see out there? Not many. So, clearly there are silver bullets and they work pretty well.

Glibness aside, there are, occasionally, silver bullets that are (cliché alert) game changers because they work against problems that were previously considered unsolvable. For example, before there was a vaccine for polio, it was a scourge upon youth - too many kids were left crippled or in an iron lung for life. Thanks to the Salk and Sabin vaccines, polio is almost nonexistent. It's pretty darn close to a silver bullet. Vaccines in general are almost silver bullets when you consider the horrible diseases that they protect against which (rant on) makes parents' reluctance to vaccinate particularly heinous.

Digital Pearl Harbor
You could argue that, perversely, Pearl Harbor did the US a favor by galvanizing public fervor. Prior to Pearl Harbor, there was a strong isolationist movement in the US; afterwards, not. "Remember the Arizona and remember Pearl Harbor" were rallying cries throughout the Pacific war. The attack on Pearl Harbor paradoxically put the US in a stronger position in the long run because they had to rely upon aircraft carriers instead of battleships (the Japanese having done significant damage to battleships at anchor in O´ahu) and, as any student of naval history knows, aircraft carriers were the key to success in the Pacific. (While the lack of carriers spelled the end of the British Empire's rule of the seas, notably as the Prince of Wales and Repulse were sunk in the early stages of the war due in no small part to No Air Cover, duh.)

Admiral Yamamoto - who meticulously planned the attack on Pearl Harbor - nonetheless actually opposed doing so since, as he noted, it would buy him at most 6 months to roam around the Pacific. It was almost 6 months to the day between the attack on Pearl Harbor (December 7, 1941) and the battle of Midway (June 3-5, 1942), at which Japan lost the war. Japan also erred in not destroying the POL (petroleum, oil and lubrication) facilities on O´ahu that would have rendered Pearl Harbor effectively useless as a port.

In short, while nobody wants to have a digital (or other) event that amounts to a) a sneak attack with b) a significant loss of life, Pearl Harbor is a poor metaphor to use because, in the long run, it was an attack that ultimately backfired on the attackers.

Very unique
Unique means "one-of-a-kind" and requires no other modifier. Unique is thus binary: something is or is not unique, but cannot be "sort of" or "exceedingly" unique.

It's a hard problem
When does anybody ever have an easy problem? If it's easy, it's not a problem for very long! "Hard problem" is the mother of all redundancies.

I have a better phrase: "it's an unsolvable problem." Some problems are not solvable; you merely, if you are lucky, whack away at them until they are less intractable. Or, a problem may be unsolvable as stated and thus you must change the way you think about it to devise better strategies for addressing it.

One of my favorite "it's an unsolvable problem" discussions involves trying to find deliberately introduced malware in code. It's not possible to prevent someone putting something bad in code in a way that is undetectable. Instead of expensive boogeyman hunts (like requiring background checks on all employees of a company whether or not they touch code), other strategies may be more effective, such as having multiple suppliers of a component instead of a sole source (thus reducing the chances that a corrupted core component gives someone the keys to the digital kingdom). Creating more isolation for network elements (e.g., so their interactions with other elements are more constrained and through known paths) is another potential strategy. If I cannot get to a back door to open it, does it matter that it is there? Many things in life do not lend themselves to "solutions" as much as "management." We are better off acknowledging that than holding out false hope of perfect solutions.

Elegant solution
A technoid favorite, and entirely too cutesy. Most of us do not care if a solution is elegant or not, as long as it works. To me, elegance involves black tie and classical music. However, I do not need most problem solutions to be accompanied by Chopin and presented by a white-gloved waiter. "Ugly gets you there."

Awesome and Cool
If ever there were words that were overused, they are "awesome" and "cool." It's as if surfer-dude speak has permeated our national consciousness. As much as I love surfing, and "speak the lingo" when I am out in the water, I dislike hearing non-surfers try to use "gnarly" correctly and pepper their lexicon with "awesome" and "cool." These are the same loons who wear "No Fear" T-shirts when they wouldn't even set a toe in the ocean on a flat day, most likely. Only God is awesome: everything else is, at best, spectacular.

Core competencies
Who admits to core incompetencies? I think it is fair for individuals and entities to think about what they should do themselves, which is likely a subset of "things that I am actually good at." If something is a core competency, it's probably not a good candidate for being outsourced to a third party. More to the point, if something is a core mission - it absolutely should not be outsourced, or why are you in business?

For example, I have been concerned about the US National Institute for Standards and Technology (NIST) recently outsourcing some standards development. I restate that I have immense respect for the mission of NIST and the people I know who work there. But they should not, IMHO, be hiring contractors to develop standards for them, particularly not when by definition paying a contractor to develop a standard means it is not a standard, but a "contractor-developed, closed way of doing something that has not been developed with others, with industry, or sanity checked by a broad group of actual experts." If it is proprietary, it's not a standard unless you are handed a monopoly. Which is what happens when the government pays to develop something that they then mandate through procurement - you get a government-proprietary way of doing something instead of an open standards way of doing something. None of which is conducive to use of core competencies.

Think outside of the box

Thinking inside the box is perfectly acceptable for 98% of daily living. For example, if I look in the backyard and see that Thunder is not there, which is more likely to be true:


  • I let him in and forgot about it?

  • He was attacked by a mountain lion (without my hearing it)?

  • He was beamed up by aliens looking for a very noisy and hairy addition to their alien zoo?

I'm betting on a), but if I were to "think outside the box," I might go for c).

My mantra is to by all means, think inside the box, because there is a lot of amassed wisdom as to how you do things well that is just ripe for the picking - far preferable to an expensive experiment to "think creatively" for a problem best solved using current approaches. And let's face it, the majority of tasks that the majority of us do is a problem someone else, somewhere, has already dealt with.

Reinvent the wheel
Presumably, once something has been invented it cannot be reinvented, and it certainly cannot be reinvented if someone has a patent on it. Maybe people who are reinventing the wheel were told once too often to think outside the box?

Boil the ocean
The global warming alarmists are convinced that we are boiling the ocean by degrees, so people who say, "we shouldn't try to boil the ocean" are apparently mistaken. Of course, nobody is presumably actually trying to warm the ocean, except - perhaps - surfers like me who would be happy to wear less neoprene in northern climes.

Aside: I am endlessly amused by watching surfers in the water who wear far, far, too much "bundle up gear" in not-all-that-cold water. Such as a surfer I saw in San Diego wearing a) a full suit b) a hood c) booties d) and had some oxygen apparatus on his back - all on a 3 foot day in 57 degree water, which is warm for winter surfing in SoCal. I wanted to ask him, "what are you going to wear when it gets really big and really cold?"

Frameworks
Frameworks are the "F" words of technology. A framework is something that is never actually implemented. It's kind of the scaffolding of technology, actually, because scaffolding can go anywhere and you never really know what the building it rises beside is going to look like.

Tattoos
This is not a verbal cliché but it is a cliché nonetheless. I like tasteful tribal tattoos on Hawaiians and other Pacific islanders: it's a cultural thing ("tattoo" comes from the Polynesian word "tatau" - or kākau" in Hawaiian, which means "to write"). I even like a tasteful globe-and-anchor on members of the US Marine Corps (which also represents a tribal affiliation of sorts). I really, really hate tattoos on pasty haoles for two reasons. One is the general lack of "truth in advertising;" e.g., a guy I saw who must have weighed 350 pounds, very little of it muscle, with "buff" tattooed on the back of his neck. He was anything but buff, but I guess nobody wants to get a tattoo that says, "out-of-shape pudgewad."

Second, given so many people are getting or have tattoos now, how "individual," and "cutting edge" is it to get one? It's mainstream and crowd following. More to the point,
when you get old, tattoos fade, sag, and generally look even more awful than they do now, if that is possible. As the French say, "a chacun son goût" - each to his own taste. But in my opinion, except for Marines and Pacific islanders, I think most people look dumb with a tattoo.

Governmentese
According to one waggish definition, an expert is "someone who knows more and more about less and less until finally (s)he knows everything about nothing." I am, alas, beginning to think that a similar definition can be extended to the way in which some employees and "deputies" of the government express themselves: "governmentese is the language by which one says more and more less and less comprehensibly until finally ones says nothing that can be understood." (To be fair, the same can be said about academia, particularly in areas of study that have been strip-mined more than Appalachia, and technologists who insist upon speaking in acronyms - without spelling out first use - such as SOA, CRSF, and EIEIO.)

I am particularly frustrated by government documents that

a) do not clearly define a problem
b) are written in passive voice, so that the actual actors (and direct objects of the acts) are unclear, and that thus obfuscate who has actual responsibility, if anybody does**
c) that make heavy use of acronyms and jargon that is not spelled out (e.g., VBBWAR, which stands for Very Big Bureaucracy Without Actual Responsibility)

People who are proposing legislation that's going to cost somebody something - probably a lot - or are proposing building something - that will cost a lot - have a responsibility to articulate clearly. What they mean, who does what, and with what proposed effect.

Information sharing
Information sharing is a mantra for every problem in cybersecurity: if only we shared more information with more people, we'd all be more secure. This is postulated as a Universal Truth.*** My response to this is that I am happy to share information: I don't like any opera written after Puccini died, I think post-modern anything is by definition dreary, devoid of moral values and second rate, my weight and age are...OK, I am not going there. I could "overshare" a lot of information that might be of interest to somebody but to the larger security populace, oversharing of information is:

a) not relevant
b) does not help anybody mitigate risk better
c) is a tactic and not a strategy
d) risks "hardening of the digital arteries" to the extent more and more information is shared and drowns out or crowds out the really useful information in our technical and neural pathways.

Finding the useful nuggets in a sea of overshared information is like looking for a platinum needle in a haystack of silver needles: "good luck with that." The next time someone proposes "information sharing" as a solution, let us ask them "to what problem? And what information, precisely, and to whom?"

I would agree that selected information sharing may help us improve the security of the commons if it enables collective situational awareness that we do not have now. Unfortunately, most people who opine on information sharing want to feed at the public trough as they create frameworks, repositories, standards and so forth as to how to do it, and offer information sharing as the cure for all digital ills. Presuming, of course, that all that shared information only got to the right people, and wasn't shared with or leaked to the wrong people. As we've been so recently reminded, sharing more information with more people carries its own set of risks. Thus, the problem with looking for the platinum needle in a haystack of silver needles is that you may prick yourself and lose a lot of blood before you find what you are looking for.

* Mostly because, while I like reading about mountaineering, I have no interest in doing technical climbing. And anyway, being hauled up Mt. Everest by a guide when you have no actual technical climbing skill does not, in my book, count as "climbing Mt. Everest."

** "Mistakes were made" is the poster child for responsibility avoidance masked in passive voice.

*** "It is a truth universally acknowledged, that a single man in possession of a good fortune must be in want of a wife." This, the opening line of Pride and Prejudice, is one of the catchiest and most-quoted first lines of a book, the other two being "it was a dark and stormy night" (the opening of Paul Clifford by Bullwer Lytton), and "In the beginning, the world was formless and void," the opening of the book of Genesis, whose authorship is a matter of faith.

Book of the Month

The Twilight Warriors
by Robert Gandt

This is a wonderful read about the air battle for Okinawa, the costliest naval battle in American history. It is very well researched but also reads well: you have a strong sense of the players, the terror caused by the kamikaze attacks, the valor of the defending pilots and ship crews, and the human cost of the carnage. Well worth the read.

Buffalo for the Broken Heart: Restoring Life to a Black Hills Ranch by Dan O'Brien

I picked this up because my local Sun Valley bookstore had it on their staff picks list. About three pages into it, I was hooked. If you think, "why would I want to read a book about ranching in South Dakota," you are missing a treat. It's a poignant book encompassing natural history, hopes, dreams, and the unique ecology of the buffalo. The Great Plains evolved around the buffalo and have - devolved, for lack of a better word - under cattle. A beautifully written book that will sweep you up in the life of a buffalo rancher.

Killer Summer, Killer View, Killer Weekend by Ridley Pearson

These are just fun "thriller" reads, set in Sun Valley and starring a protagonist - Walt Fleming - whose name is a whisker away from that of the real-life sheriff, Walt Femling. (As of this writing, Sheriff Femling has just retired after a 24-year career of public service to Blaine County. Happy retirement, Walt.) As the books note, the sheriff of Blaine County looks after a county bigger than the state of New Jersey. I enjoy them as much for the celebration of Sun Valley - gorgeous views, and outdoor living punctuated by "got-bucks" living - as for the fact that they are great page turners: "I betcha can't read just one."

Unbroken: A World War II Story of Survival, Resilience, and Redemption by Laura Hillenbrand

This is the "amazing but true" story of Louis Zamperini, a former Olympian and "survivor" par excellance. He survived his plane being shot down over the Pacific, 47 days in a raft, and years in Japanese captivity where he was the target of a particularly sadistic guard. Meticulously researched, brilliantly written, it is a book that will lift the spirit of all who read it. Sometimes truth is not only stranger than, but more transcendent than fiction.

The Root of The Problem

Thu, 2010-09-02 02:07

Summer in Idaho is treasured all the more since it is all too brief. We had a long, cold spring - my lilacs were two months behind those of friends and family on the east coast - and some flowers that normally do well here never did poke their colorful heads out of the ground.

My personal gardening forays have been mixed: some things I planted from seeds never came up, and others only just bloomed in August, much to my delight. I am trying to create order from chaos - more specifically, I want a lovely oasis of flowers in a rock garden I have admittedly neglected for several years. Nature abhors a vacuum and thus, she made a successful flanking maneuver to colonize flowerbeds with sagebrush and grasses. I am way beyond "yanking and weed killer" and have traded in my trowel for heavier equipment. You need a shovel and a strong back to pull up a sagebrush and as for the grass, I've had to remove the top three inches of soil in many places and move a number of rocks to get at the root system snaking under them.

I never appreciated the expression, "getting at the root of the problem" until I dealt with invasive sagebrush and "grass-zilla." I have no choice but to do it because if I do not eradicate the root system, I will continue to battle these opportunistic biological interlopers one new sprout at a time. Just as, if you do not figure out the - pun intended - root cause of a security vulnerability, but just fix the symptoms, you will later have to clean up the rest of the buggy snippets that are choking your code.

I have had professional experiences that mirror my rock garden. That is, that there are "interloping and invasive" ideas that take hold with unbelievable tenacity to the point it is hard to eradicate them. The sagebrush and grass of the cybersecurity area are what I can only call the (myth of the) evil vendor cabal (var. multae crappycodae) and supply chain risk management (malwarum hysteriensis). Both have taken hold of otherwise rational human beings just like the pods took over people's minds in Invasion of the Body Snatchers.

In the course of my work, I attend a lot of meetings, seminars and the like on software assurance. The good news is that in the last couple of years, most of the vendors who attend these events (think of the big names in software and hardware) are doing pretty much the same sort of mom and secure apple pie things in software development. The bar, I can say pretty confidently, has been raised. This does not mean industry is perfect, nor does it mean that industry is "done" improving security. I would add that all of us know that building better code is good business: good for customers and good for us. It's also important for critical infrastructure. We get it.

However, to go to some of these meetings, you wouldn't think anything had changed. I have recently been astonished at the statements of opinion - without any facts to back them up - about the state of software development and the motives of those of us who do it, and even more disturbed at what I can only describe as outright hostility to industry in particular and capitalism in general. I suspect at least part of the reason for the hostility is the self-selecting nature of some of these meetings. That is, for some assurance-focused groups, vendors only attend meetings sporadically (because it's more productive to spend time improving your product than in talking about it). That leaves the audience dominated by consultants, academics and policy makers. Each group, in its own way, wants to make the problem better and yet each, in its own way, has a vested interest in convincing other stakeholders that they - and only they - can fix the problem. Many of them have never actually built software or hardware or worked in industry - and it shows. Theory often crumbles upon the altar of actual practice.

What I have heard some of these professional theorists say is not only breathtakingly ironic but often more than a little hypocritical: for example, a tenured academic complaining that industry is "not responsive to the market." (See my earlier blog "The Supply Chain Problem" on fixing the often-execrable cybersecurity education in most university programs, and the deafening silence I got in response from the universities I sent letters to.) If you are tenured, you do not have to respond to market forces: you can teach the same thing for thirty years whether or not it is what the market needs or wants and whether or not you are any good at it. (What was that again about being nonresponsive to market forces?)

I am also both amused and annoyed at the hordes of third party consultants all providing a Greek chorus of "you can't trust your suppliers - let us vet them for you." Their purpose in the drama of assurance seems to be the following:

  • Create fear, uncertainty and doubt (FUD) in the market - "evil, money-grubbing vendors can't be trusted; good, noble consultants are needed to validate security"
  • Draft standards - under contract to the government - that create new, expensive third party software and hardware validation schemes
  • Become the "validator" of software after your recommendations to the government - the ones you wrote for them - have been accepted

Could there possibly be a clearer definition of "conflict of interest" than the above? Now, I do not blame anyone for trying to create a market - isn't that what capitalism is? - but trying to create a market for your services by demonizing capitalism is hilariously ironic. One wants to know, "quis custodiet ipsos custodes?" (Who watches the watchers, otherwise known as, "why should I trust consultants who, after all, exist to sell more consulting services?")

The antibusiness rhetoric got so bad once that I took advantage of a keynote I was delivering to remark - because I am nothing if not direct - that, contrary to popular belief, there is no actual Evil Vendor Cabal wherein major software and hardware suppliers collude to determine how we can collectively:

  • build worse products
  • charge more for them and
  • put our customers at increased risk of cyberattack.

It doesn't happen. And furthermore, I added, the government likes and has benefited from buying commercial software for many applications since it is feature rich, maintained regularly, generally very configurable, and runs on a lot of operating systems. "How well," I asked, "did it work when government tried to build all these systems from scratch?" The answer is, the government does not have the people or the money to do that: they never did. But the same consultants who are creating FUD about commercial software would be happy to build custom software for everything at 20 times the price, whether or not there is a reason to build custom software.

"You are all in business to make a profit!" one person stated accusingly, as if that were a bad thing. "Yes," I said, "and because we are in business to make a profit, it is very much in our interest to build robust, secure software, because it is enormously expensive for us to fix defects - especially security defects - after we ship software, and we'd much rather spend the resources on building new features we can charge for, instead of on old problems we have to fix in many, many places. Furthermore, we run our own businesses on our own software so if there is horrible security, we are the first 'customer' to suffer. And lastly, if you build buggy, crappy software that performs poorly and is expensive to maintain, you will lose customers to competitors, who love to point at your deficiencies if customers have not already found them."

The second and more disturbingly tenacious idea - and I put this in the category of grass since it seemingly will take a lot of grubbing in the dirt to eradicate it - is what is being called "supply chain risk," this year's hot boy band, judging from the amount of screaming, fainting and hysteria that surrounds it. And yet, if "it" is such a big deal, why oh why can't the people writing papers, draft standards and proposed legislation around "it" describe exactly what they are worried about? I have read multiple pieces of legislation and now, a draft NIST standard on "supply chain risk management" and still there is no clear articulation of "what are you worried about?"

I generally have a high degree of regard for the National Institute of Standards and Technology (NIST). In the past, I've even advocated to get them more money for specific projects that I thought would be a very good use of taxpayer money. I am therefore highly disturbed that a draft standard on supply chain risk management, a problem supposedly critical to our national interests, appears to be authored by contractors and not by NIST. Specifically, two out of three people who worked on the draft are consultants, not NIST employees. (Disclaimer: I know both of them professionally and I am not impugning them personally.) There is no way to know whether the NIST employee who is listed on the standard substantially contributed to the draft or merely managed a contract that "outsourced" development of it.

As I noted earlier, there is an inherent problem in having third parties who would directly stand to benefit if a "standard" is implemented participate in drafting it. Human nature being what it is, the temptation to create future business for oneself is insidiously hard to resist. Moreover, it is exceedingly difficult to resist one's own myopias about how to solve a problem and, let's face it, if you are a consultant, every problem looks like the solution is "hire a consultant." It would be exactly the same thing if, say, the federal government asked Oracle to draft a request for proposal that required a ...database. Does anybody think we could possibly be objective? Even if we tried to be open minded, the set of requirements we would come up with would look suspiciously like Oracle, because that's what we are most familiar with.

Some will argue that this is a draft standard, and will go through revisions, so the provenance of the ideas shouldn't matter. However, NIST's core mission is developing standards. If they are not capable of drafting standards themselves then they should either get the resources to do so or not do it at all. Putting it differently, if you can't perform a core mission, why are you in business? If I may be a bit cheeky, there is a lesson from Good Old Capitalism here: you cannot be in all market segments (otherwise known as "You can't be all things to all people"). It's better to do a few things well than to try to do everything and end up doing many things badly. I might add, any business that tried to be in market segments in which it had no actual expertise would fail - quickly - because the market imposes that discipline.

Back to the heart of the hysteria: what, precisely, is meant by "supply chain risk?" At the root of all the agitation there appear to be two concerns, both of which are reasonable and legitimate to some degree. They are:

  • Counterfeiting
  • Malware

Taking the easier one first, "counterfeiting" in this context means "purchasing a piece of hardware FOO or software BAR where the product is not a bona fide FOO or BAR but a knockoff." (Note: this is not the case of buying a "solid gold Rolex" on the street corner for $10 when you know very well this is not a real Rolex - not at that price.) From the acquirer's viewpoint, the concern is that a counterfeit component will not perform as advertised (i.e., might fail at a critical juncture), or won't be supported/repaired/warranted by the manufacturer (since it is a fake product). It could also include a suspicion that instead of GoodFoo you are getting EvilKnockoffFOO, which does something very different - and malicious - from what it's supposed to do. More on that later.

From the manufacturer's standpoint, counterfeiting cuts into your revenue stream since someone is "free riding" on your brand, your advertising, maybe even your technology, and you are not getting paid for your work. Counterfeits may also damage your brand (when FakeFOO croaks under pressure instead of performing like the real product). Counterfeiting is the non-controversial part of supply chain concerns in that pretty much everybody agrees you should get what you pay for, and if you buy BigVendor's product FOO, version 5, you want to know you are actually getting FOO, version 5 (and not fake FOO). Note: I say, "non-controversial," but when you have government customers buying products off eBay (deeply discounted) who are shocked - shocked I tell you! - to discover that they have bought fakes, you do want to say, "do you buy F-22s off eBay? No? Then what makes you think you can buy mission critical hardware off eBay? Buy From An Authorized Distributor, fool!"

The second area of supply chain risk hysteria is malware. Specifically, the concern that someone, somewhere will Put Something Bad in code (such as a kill switch which would render the software or hardware inoperable at a critical juncture). Without ever articulating it, the hysteria is typically that An Evil Foreigner - not a Good American Programmer - will Put Something Bad in Code. (Of course, other countries have precisely the same concern, only in their articulation, it is evil Americans who will Put Something Bad In Code.) The "foreign boogeymen" problem is at the heart of the supply chain risk hysteria and has led to the overreach of proposed solutions for it. (For example, the NIST draft wanted acquirers to be notified of changes to personnel involving "maintenance." Does this mean that every time a company hires a new developer to work on old code - and let's face it, almost everybody who works in development for an established company touches old code at some point - they have to send a letter to Uncle Sam with the name of the employee? Can you say "intrusive?")

So here is my take on the reality of the "malware" part of supply chain. It's a long explanation, and I stole it from a paper I did on supply chain issues for a group of legislators. I offer these ideas as points of clarification that I fervently hope will frame this discussion, before someone, in a burst of public service, creates an entirely new expensive, vague, "construct" of policy remedies for an unbounded problem. Back to my gardening analogy, if eradicating the roots of a plant is important and necessary to kill off a biological interloper, it is also true that some plants will not grow in all climates and in all soil no matter what you do: I cannot grow plumeria (outdoors) in Idaho no matter how hard I try and no matter how much I love it. Similarly, some of the proposed "solutions" to supply chain risk are not going to thrive because of a failure to understand what is reasonable and feasible and will "grow" and what absolutely will not. I'll go farther than that - some of the proposed remedies - and much of what is proposed in the draft NIST standard - should be dosed with weed killer.

Constraint 1: In the general case - and certainly for multi-purpose infrastructure and applications software and hardware - there are no COTS products without global development and manufacturing.

Discussion: The explosion in COTS software and hardware of the past 20 years has occurred precisely because companies are able to gain access to global talent by developing products around the world. For example, a development effort may include personnel on a single "virtual team" who work across the United States and in the United Kingdom and India. COTS suppliers also need access to global resources to support their global customers. For example, COTS suppliers often offer 7x24 support in which responsibility for addressing a critical customer service request migrates around the globe, from support center to support center (often referred to as a "follow the sun" model). Furthermore, the more effective and available (that is, 7x24 and global) support is, the more likely problems will be reported and resolved more quickly for the benefit of all customers. Even smaller firms that produce specialized COTS products (e.g., cryptographic or security software) may use global talent to produce it.

Hardware suppliers are typically no longer "soup to nuts" manufacturers. That is, a hardware supplier may use a global supply network in which components - sourced from multiple entities worldwide - are assembled by another entity. Software is loaded onto the finished hardware in yet another manufacturing step. Global manufacturing and assembly helps hardware suppliers focus on production of the elements for which they can best add value and keeps overall manufacturing and distribution costs low. We take it for granted that we can buy serviceable and powerful personal computers for under $1000, but it was not that long ago that the computing power in the average PC was out of reach for all but highly capitalized entities and special purpose applications. Global manufacturing and distribution makes this possible.

In summary, many organizations that would have deployed custom software and hardware in the past have now "bet the farm" on the use of COTS products because they are cheaper, more feature rich, and more supportable than custom software and hardware. As a result, COTS products are being embedded in many systems - or used in many deployment scenarios - that they were not necessarily designed for. Supply chain risk is by no means the only risk of deploying commercial products in non-commercial threat environments.

Constraint 2: It is not possible to prevent someone from putting something in code that is undetectable and potentially malicious, no matter how much you tighten geographic parameters.

Discussion: One of the main expressions of concern over supply chain risk is the "malware boogeyman," most often associated with the fear that a malicious employee with authorized access to code will put a backdoor or malware in code that is eventually sold to a critical infrastructure provider (e.g., financial services, utilities) or a defense or intelligence agency. Such code, it is feared, could enable an adversary to alter (i.e., change) data or exfiltrate data (e.g., remove copies of data surreptitiously) or make use of a planted "kill switch" to prevent the software or hardware from functioning. Typically, the fear is expressed as "a foreigner" could do this. However, it is unclear precisely what "foreigner" means in this context:

  • There are many H1B visa holders (and green card holders) who work for companies located in the United States. Are these "foreigners?"
  • There are US citizens who live in countries other than the US and work on code there. Are these "foreigners?" That is, is the fear of code corruption based on geography or national origin of the developer?
  • There are developers who are naturalized US citizens (or dual passport holders). Are these "foreigners?"

(Ironically, naturalized citizens and H1B visa holders are arguably more "vetted" than native-born Americans.) It is unclear whether the concern is geographic locale, national origin of a developer, or overall development practice and the consistency with which it is applied worldwide.

COTS software, particularly infrastructure software (operating systems, databases, middleware) or packaged applications (customer relationship management (CRM), enterprise resource planning (ERP)), typically has multiple millions of lines of code (e.g., the Oracle database has about 70 million lines of code). Also typically, commercial software is in a near-constant state of development: there is always a new version under development or old versions undergoing maintenance. While there are automated tools on the market that can scan source code for exploitable security defects (so-called static analysis tools), such tools find only a portion of exploitable defects, and these are typically of the "coding error" variety. They do not find most design defects and they would be unlikely to find deliberately introduced backdoors or malware.
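
To make the distinction concrete, here is a minimal, entirely hypothetical C sketch (not drawn from any product) of the kind of "coding error" defect a static analysis tool typically does flag - an unbounded copy of attacker-controlled input into a fixed-size buffer - next to the bounded form such tools usually suggest:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical example: the classic exploitable coding error that
     * static analysis tools are good at flagging - an unbounded copy
     * into a fixed-size stack buffer. */
    void greet(const char *name) {
        char buf[32];
        strcpy(buf, name);    /* flagged: 'name' may exceed 32 bytes */
        printf("Hello, %s\n", buf);
    }

    /* The repair such tools typically suggest: bound the copy. */
    void greet_safely(const char *name) {
        char buf[32];
        snprintf(buf, sizeof(buf), "%s", name);
        printf("Hello, %s\n", buf);
    }

    int main(void) {
        greet("world");         /* safe here only because the input is short */
        greet_safely("world");
        return 0;
    }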

Given the size of COTS code bases, the fact that they are in a near constant state of flux, and the limits of automated tools, there is no way to absolutely prevent the insertion of bad code that would have unintended consequences and would not be detectable. (As a proof point, a security expert in command and control systems once put "bad code" in a specific 100 lines of code and challenged code reviewers to find it within those 100 lines. They couldn't. In other words, even if you know where to look, malware can be and often is undetectable.)
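
The flavor of that challenge is easy to convey. The sketch below is contrived - the structs, field names and "magic bits" are invented for illustration - but it mirrors the style of a widely reported 2003 attempt to slip a privilege-escalation change into a copy of the Linux kernel source: a single "=" where "==" belongs, wrapped in enough parentheses that compilers stay quiet, reads exactly like an everyday coding error.

    #include <stdio.h>

    #define MAGIC_BITS 0x11        /* invented trigger value */

    struct user    { int uid; };   /* uid 0 means administrator */
    struct request { int options; };

    /* Reads like a sanity check that rejects a malformed request.
     * In fact, when the magic option bits are set, 'u->uid = 0'
     * ASSIGNS uid 0 (admin) instead of comparing - and because the
     * assignment evaluates to 0, the "reject" branch never runs. */
    int check_request(const struct request *r, struct user *u) {
        if ((r->options == MAGIC_BITS) && (u->uid = 0))
            return -1;   /* never reached */
        return 0;        /* request "passes"; the caller is now uid 0 */
    }

    int main(void) {
        struct user u = { 1000 };
        struct request r = { MAGIC_BITS };
        check_request(&r, &u);
        printf("uid after check: %d\n", u.uid);   /* prints 0 */
        return 0;
    }

One character out of tens of millions of lines; a reviewer skimming past it sees an ordinary validity check.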

Lastly, we are sticking our collective heads in the sand if we think that no American would ever put something deliberately bad in code. Most of the biggest intelligence leaks of the past were perpetrated by cleared American citizens (e.g., Aldrich Ames, Robert Hanssen and the Walker spy ring). But there are other reasons people could Do Bad Things To Code, such as being underpaid and disgruntled about it (why not stick a back door in code and threaten to shut down systems unless someone gives you a pay raise?).

Constraint 3: Commercial assurance is not "high assurance" and the commercial marketplace will not support high assurance software.

Discussion: Note that there are existing, internationally recognized assurance measures such as the Common Criteria (ISO-15408) that validate that software meets specific (stated) threats it was designed to meet. The Common Criteria supports a sliding scale of assurance (i.e., levels 1 through 7) with different levels of software development rigor required at each level: the higher the assurance level, the more development rigor required to substantiate the higher assurance level. Most commercial software can be evaluated up to Evaluation Assurance Level (EAL) 4 (which, under the Common Criteria Recognition Arrangement (CCRA), is also accepted by other countries that subscribe to the Common Criteria). Few commercial entities ask for or require "high assurance" software and few if any government customers ask for it, either.

What is achievable and commercially feasible is for a supplier to have reasonable controls on access to source code during its development cycle and reasonable use of commercial tools and processes that will find routine "bad code" (such as exploitable coding errors that lead to security vulnerabilities). Such a "raise the bar" exercise may have - and likely will have - a deterrent effect to the extent that it removes the plausible deniability of a malefactor inserting a common coding error that leads to a security exploit. Using automated vulnerability finding tools, in addition to improving code hygiene, makes it harder for someone to deliberately insert a backdoor masquerading as a common coding error because the tools find many such coding errors. Thus, a malefactor may, at least, have to work harder.

That said, and to Constraint 1, the COTS marketplace will not support significantly higher software assurance levels such as manual code review of 70 million lines of code, or extensive third party "validation" of large bodies of code beyond existing mechanisms (i.e., the Common Criteria) nor will it support a "custom code" development model where all developers are US citizens, any more than the marketplace will support US-only components and US-only assembly in hardware manufacturing. This was, in fact, a conclusion reached by the Defense Science Board in their report on foreign influence on the supply chain of software. And in fact, supply chain risk is not about the citizenship of developers or their geographic locale but about the lifecycle of software, how it can be corrupted, and taking reasonable and commercially feasible precautions to prevent code corruption.

Constraint 4: Any supply chain assurance exercise - whether improved assurance or improved disclosure - must be done under the auspices of a single global standard, such as the Common Criteria.

Discussion: Assurance-focused supply chain concerns should use international assurance standards (specifically the Common Criteria) to address them. Were someone to institute a separate, expensive, non-international "supply chain assurance certification," not only would software assurance not improve, it would likely get worse, because the same resources that companies today spend on improving their product would be spent on secondary or tertiary "certifications" that are expensive, inconsistent and non-leverageable. In the worst case, a firm might have to produce different products for different geographic locales, which would further divert resources (and weaken security). A new "regulatory regime" - particularly one that largely overlaps with an existing scheme - would be expensive and "crowd out" better uses of time, people, and money. To the extent some supply chain issues are not already addressed in Common Criteria evaluations, the Common Criteria could be modified to address them, using an existing structure that already speaks to assurance in the international realm.

Even in cases of "supply chain disclosure," any such disclosure requirement needs to ensure that the value of information - to purchasers - is greater than the cost to suppliers of providing such information. To that end, disclosure should be standardized, not customized. Even a large vendor would not be able to complete per-customer or per-industry questionnaires on supply chain risk for each release of each product they produce. The cost of completing such "per-customer, per-industry" questionnaires would be considerable, and far more so for small, niche vendors or innovative start-ups.

For example, a draft questionnaire developed by the Department of Homeland Security asked, for each development project, for each phase of development (requirement, design, code, and test) how many "foreigners" worked on each project? A large product may have hundreds of projects, and collating how many "foreigners" worked on each of them provides little value (and says nothing about the assurance of the software development process) while being extremely expensive to collect. (The question was dropped from the final document.)

Constraint 5: There is no defect-free or even security defect-free software.

Discussion: While better commercial software is achievable, perfect software is not. This is the case because of a combination of generally poor "security education" in universities (most developers are not taught even basic secure development practices and have to be retrained by the companies that hire them), imperfect development practices, imperfect testing practices, and the fact that new classes of vulnerabilities are being discovered (and exploited) as enemies become more sophisticated. Better security education, better development practices and better testing will improve COTS (and non-COTS) software but will not eliminate all vulnerabilities or even all security vulnerabilities - people make mistakes, and it's not possible to catch all of those mistakes.

As noted elsewhere, manual code inspection is infeasible over large code bases and is error prone. Automated vulnerability-finding tools are the only scalable solution for large code bases (to automate "error finding") but even the best commercially available automated vulnerability-finding tools find perhaps 50% of security defects in code resulting from coding errors but very few security design errors (e.g., an automated tool can't "detect" that a developer neglected to include key security functionality, like encrypting passwords or requiring a password at all).
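
Here is a hypothetical sketch of that second category: the code below is memory-safe and "clean" by any tool's standard - there is no coding error to flag - yet it stores passwords in plaintext. No analyzer can flag security functionality that simply isn't there.

    #include <stdio.h>

    /* Hypothetical illustration: nothing here for a vulnerability
     * tool to find - the flaw is what the design never included
     * (hashing the password, let alone salting it or restricting
     * access to the file). */
    struct account {
        char user[32];
        char password[32];   /* plaintext at rest: a design flaw */
    };

    static void save_account(FILE *db, const struct account *a) {
        fprintf(db, "%s:%s\n", a->user, a->password);   /* bounded, "correct" I/O */
    }

    int main(void) {
        struct account a = { "alice", "hunter2" };
        FILE *db = fopen("accounts.txt", "w");
        if (db) {
            save_account(db, &a);
            fclose(db);
        }
        return 0;
    }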

Lastly, no commercial software ships with "zero defects." Most organizations ship production software only after a phase-in period (so-called alpha and beta testing) in which a small, select group of production customers use the software and provide feedback, and the vendor fixes the most critical defects. In other words, there is typically a "cut-off" in that less serious vulnerabilities are not fixed prior to the product being generally available to all customers.

It is reasonable and achievable for a company to have enough rigor in its development practice to actively look for security defects (using commercial automated tools), triage them (e.g., by assigning a Common Vulnerability Scoring System (CVSS) score) and, for example, fix all issues above a particular severity. That said, it is a certainty that some vulnerabilities will still be discovered after the product has shipped, and some of these will be security vulnerabilities.
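
As a minimal sketch of that triage step - the findings, scores, and the 7.0 cutoff below are all invented for illustration - the idea is simply to score every finding and gate the release on everything at or above the chosen severity:

    #include <stdio.h>

    /* Hypothetical triage: CVSS scores run from 0.0 to 10.0; one
     * plausible policy is "fix everything scoring 7.0 or above
     * before the product ships." */
    struct finding {
        const char *summary;
        double cvss;
    };

    int main(void) {
        const double cutoff = 7.0;   /* assumed policy threshold */
        struct finding backlog[] = {
            { "buffer overflow in file parser",  9.8 },
            { "overly verbose error message",    3.1 },
            { "authentication bypass",           8.6 },
            { "missing security-related header", 4.3 },
        };
        size_t n = sizeof(backlog) / sizeof(backlog[0]);

        puts("Must fix before release:");
        for (size_t i = 0; i < n; i++)
            if (backlog[i].cvss >= cutoff)
                printf("  %s (CVSS %.1f)\n",
                       backlog[i].summary, backlog[i].cvss);
        return 0;
    }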

There is a reasonableness test here we all understand. Commercial software is designed for commercial purposes and with commercial assurance levels. "Commercial software" is not necessarily military grade any more than a commercial vehicle - a Chevy Suburban, for example - is expected to perform like an M1 Abrams tank. Wanting commercial software to have been built (retroactively) using theoretically perfect but highly impractical development models (and by cleared US citizens in a secured facility, no less) might sound like Nirvana to a confluence of assurance agitators - but it is neither reasonable nor feasible and it is most emphatically not commercial software.

Book(s) of the Month

Strong Men Armed: The United States Marines vs. Japan by Robert Leckie

Robert Leckie was a Marine who served in WWII in the Pacific theater and later a prolific writer, much of his work military history (another of his books, Helmet for My Pillow, was a basis for HBO's The Pacific). As much as I have read about the Pacific War - and I've read a lot - I continue to be inspired and humbled by the accounts of those who fought it and what they were up against: a fanatical, ideologically-inspired and persistent foe who would happily commit suicide if he were able to take out many of "the American enemy." The Marines were on the front lines of much of that war and indeed, so many battles were the Marines' to fight and win. What I liked about this book is that it does not merely recap which battles were fought when, where and by which Marine division led by what officer, but delves into the individuals in each battle. You know why Joe Foss received the Congressional Medal of Honor, and for what (shooting down 23 Japanese planes over Guadalcanal), for example. History is made by warriors, and everyone - not just the US Marines - should know who our heroes are. (On a personal note, I was also thrilled to read, on page 271 of my edition, several paragraphs about the exploits of Lt. Col. Henry Buse, USMC, on New Britain. I later knew him as General Henry Buse, a family friend. Rest in peace, faithful warrior.)

I'm Staying with My Boys: The Heroic Life of Sgt. John Basilone, USMC by Jim Proser

One of many things to love about the US Marine Corps is that they know their heroes: any Marine knows who John Basilone is and why his name is held in honor. This book - told in the first person, unusually - is nonetheless not an autobiography but a biography of Sgt. "Manila" John Basilone, who was a recipient of the Congressional Medal of Honor for his actions at Lunga Ridge on Guadalcanal. He could have sat out the rest of the war selling war bonds but elected to return to the front, where he was killed the first day of the battle for Iwo Jima. In a world where mediocrity and the manufactured 15 minutes of fame are celebrated, this is what a real hero - and someone who is worthy of remembrance - looks like. He is reported to have said upon receiving the CMH: "Only part of this medal belongs to me. Pieces of it belong to the boys who are still on Guadalcanal. It was rough as hell down there."

The citation for John Basilone's Congressional Medal of Honor:

" For extraordinary heroism and conspicuous gallantry in action against enemy Japanese forces, above and beyond the call of duty, while serving with the 1st Battalion, 7th Marines, 1st Marine Division in the Lunga Area. Guadalcanal, Solomon Islands, on 24 and 25 October 1942. While the enemy was hammering at the Marines' defensive positions, Sgt. Basilone, in charge of 2 sections of heavy machineguns, fought valiantly to check the savage and determined assault. In a fierce frontal attack with the Japanese blasting his guns with grenades and mortar fire, one of Sgt. Basilone's sections, with its gun crews, was put out of action, leaving only 2 men able to carry on. Moving an extra gun into position, he placed it in action, then, under continual fire, repaired another and personally manned it, gallantly holding his line until replacements arrived. A little later, with ammunition critically low and the supply lines cut off, Sgt. Basilone, at great risk of his life and in the face of continued enemy attack, battled his way through hostile lines with urgently needed shells for his gunners, thereby contributing in large measure to the virtual annihilation of a Japanese regiment. His great personal valor and courageous initiative were in keeping with the highest traditions of the U.S. Naval Service."

Other Links

More than you ever wanted to know about sagebrush:

The Root of The Problem

Thu, 2010-09-02 02:07

Summer in Idaho is treasured all the more since it is all too brief. We had a long, cold spring - my lilacs were two months behind those of friends and family on the east coast - and some flowers that normally do well here never did poke their colorful heads out of the ground.

My personal gardening forays have been mixed: some things I planted from seeds never came up, and others only just bloomed in August, much to my delight. I am trying to create order from chaos - more specifically, I want a lovely oasis of flowers in a rock garden I have admittedly neglected for several years. Nature abhors a vacuum and thus, she made a successful flanking maneuver to colonize flowerbeds with sagebrush and grasses. I am way beyond "yanking and weed killer" and have traded in my trowel for heavier equipment. You need a shovel and a strong back to pull up a sagebrush and as for the grass, I've had to remove the top three inches of soil in many places and move a number of rocks to get at the root system snaking under them.

I never appreciated the expression, "getting at the root of the problem" until I dealt with invasive sagebrush and "grass-zilla." I have no choice but to do it because if I do not eradicate the root system, I will continue to battle these opportunistic biological interlopers one new sprout at a time. Just as, if you do not figure out the - pun intended - root cause of a security vulnerability, but just fix the symptoms, you will later have to clean up the rest of the buggy snippets that are choking your code.

I have had professional experiences that mirror my rock garden. That is, that there are "interloping and invasive" ideas that take hold with unbelievable tenacity to the point it is hard to eradicate them. The sagebrush and grass of the cybersecurity area are what I can only call the (myth of the) evil vendor cabal (var. multae crappycodae) and supply chain risk management (malwarum hysteriensis). Both have taken hold of otherwise rational human beings just like the pods took over people's minds in Invasion of the Body Snatchers.

In the course of my work, I attend a lot of meetings, seminars and the like on software assurance. The good news is that in the last couple of years, most of the vendors who attend these events (think of the big names in software and hardware) are doing pretty much the same sort of mom and secure apple pie things in software development. The bar, I can say pretty confidently, has been raised. This does not mean industry is perfect, nor does it mean that industry is "done" improving security. I would add that all of us know that building better code is good business: good for customers and good for us. It's also important for critical infrastructure. We get it.

However, to go to some of these meetings, you wouldn't think anything had changed. I have recently been astonished at the statements of opinion - without any facts to back them up - about the state of software development and the motives of those of us who do it, and even more disturbed at what I can only describe as outright hostility to industry in particular and capitalism in general. I suspect at least part of the reason for the hostility is the self-selecting nature of some of these meetings. That is, for some assurance-focused groups, vendors only attend meetings sporadically (because it's more productive to spend time improving your product than in talking about it). That leaves the audience dominated by consultants, academics and policy makers. Each group, in its own way, wants to make the problem better and yet each, in its own way, has a vested interest in convincing other stakeholders that they - and only they - can fix the problem. Many of them have never actually built software or hardware or worked in industry - and it shows. Theory often crumbles upon the altar of actual practice.

What I have heard some of these professional theorists say is not only breathtakingly ironic but often more than a little hypocritical: for example, a tenured academic complaining that industry is "not responsive to the market." (See my earlier blog "The Supply Chain Problem") on fixing the often-execrable cybersecurity education in most university programs and the deafening silence I got in response from the universities I sent letters to.) If you are tenured, you do not have to respond to market forces: you can teach the same thing for thirty years whether or not it is what the market needs or wants and whether or not you are any good at it. (What was that again about being nonresponsive to market forces?)

I am also both amused and annoyed at the hordes of third party consultants all providing a Greek chorus of "you can't trust your suppliers - let us vet them for you." Their purpose in the drama of assurance seems to be the following:


  • Create fear, uncertainty and doubt (FUD) in the market - "evil, money-grubbing vendors can't be trusted; good, noble consultants are needed to validate security"

  • Draft standards - under contract to the government - that create new, expensive third party software and hardware validation schemes

  • Become the "validator" of software after your recommendations to the government - the ones you wrote for them - have been accepted

Could there possibly be a clearer definition of "conflict of interest" than the above? Now, I do not blame anyone for trying to create a market - isn't that what capitalism is? - but trying to create a market for your services by demonizing capitalism is hilariously ironic. One wants to know, "quis custodiet ipsos custodes?" (Who watches the watchers, otherwise known as, "why should I trust consultants who, after all, exist to sell more consulting services?")

The antibusiness rhetoric got so bad once that I took advantage of a keynote I was delivering to remark - because I am nothing if not direct - that, contrary to popular belief, there is no actual Evil Vendor Cabal wherein major software and hardware suppliers collude to determine how we can collectively:


  • build worse products

  • charge more for them and

  • put our customers at increased risk of cyberattack.


It doesn't happen. And furthermore, I added, the government likes and has benefited from buying commercial software for many applications since it is feature rich, maintained regularly, generally very configurable, and runs on a lot of operating systems. "How well," I added, "did it work when government tried to build all these systems from scratch?" The answer is, the government does not have the people or the money to do that: they never did. But the same consultants who are creating FUD about commercial software would be happy to build custom software for everything at 20 times the price, whether or not there is a reason to build custom software.

"You are all in business to make a profit!" one person stated accusingly, as if that were a bad thing. "Yes," I said, "and because we are in business to make a profit, it is very much in our interest to build robust, secure software, because it is enormously expensive for us to fix defects - especially security defects - after we ship software, and we'd much rather spend the resources on building new features we can charge for, instead of on old problems we have to fix in many, many places. Furthermore, we run our own businesses on our own software so if there is horrible security, we are the first 'customer' to suffer. And lastly, if you build buggy, crappy software that performs poorly and is expensive to maintain, you will lose customers to competitors, who love to point at your deficiencies if customers have not already found them."

The second and more disturbingly tenacious idea - and I put this in the category of grass since it seemingly will take a lot of grubbing in the dirt to eradicate it - is what is being called "supply chain risk," this year's hot boy band, judging from the amount of screaming, fainting and hysteria that surrounds it. And yet, if "it" is such a big deal, why oh why can't the people writing papers, draft standards and proposed legislation around "it" describe exactly what they are worried about? I have read multiple pieces of legislation and now, a draft NIST standard on "supply chain risk management" and still there is no clear articulation of "what are you worried about?"

I generally have a high degree of regard for the National Institute of Standards and Technology (NIST). In the past, I've even advocated to get them more money for specific projects that I thought would be a very good use of taxpayer money. I am therefore highly disturbed that a draft standard on supply chain risk management, a problem supposedly critical to our national interests, appears to be authored by contractors and not by NIST. Specifically, two out of three people who worked on the draft are consultants, not NIST employees. (Disclaimer: I know both of them professionally and I am not impugning them personally.) There is no way to know whether the NIST employee who is listed on the standard substantially contributed to the draft or merely managed a contract that "outsourced" development of it.

As I noted earlier, there is an inherent problem in having third parties who would directly stand to benefit if a "standard" is implemented participate in drafting it. Human nature being what it is, the temptation to create future business for oneself is insidiously hard to resist. Moreover, it is exceedingly difficult to resist one's own myopias about how to solve a problem and, let's face it, if you are a consultant, every problem looks like the solution is "hire a consultant." It would be exactly the same thing if, say, the federal government asked Oracle to draft a request for proposal that required a ...database. Does anybody think we could possibly be objective? Even if we tried to be open minded, the set of requirements we would come up with would look suspiciously like Oracle, because that's what we are most familiar with.

Some will argue that this is a draft standard, and will go through revisions, so the provenance of the ideas shouldn't matter. However, NIST's core mission is developing standards. If they are not capable of drafting standards themselves then they should either get the resources to do so or not do it at all. Putting it differently, if you can't perform a core mission, why are you in business? If I may be a bit cheeky here, there is a lesson from Good Old Capitalism here: you cannot be in all market segments (otherwise known as "You can't be all things to all people"). It's better to do a few things well than to try to do everything, and end up doing many things badly. I might add, any business that tried to be in too many market segments that they had no actual expertise in would fail - quickly - because the market imposes that discipline on them.

Back to the heart of the hysteria: what, precisely is meant by "supply chain risk?" At the root of all the agitation there appears to be two concerns, both of which are reasonable and legitimate to some degree. They are:


  • Counterfeiting

  • Malware


Taking the easier one first, "counterfeiting" in this context means "purchasing a piece of hardware FOO or software BAR where the product is not a bona fide FOO or BAR but a knockoff." (Note: this is not the case of buying a "solid gold Rolex" on the street corner for $10 when you know very well this is not a real Rolex - not at that price.) From the acquirer's viewpoint, the concern is that a counterfeit component will not perform as advertised (i.e., might fail at a critical juncture), or won't be supported/repaired/warranted by the manufacturer (since it is a fake product). It could also include a suspicion that instead of GoodFoo you are getting EvilKnockoffFOO, which does something very different - and malicious - from what it's supposed to do. More on that later.

From the manufacturer's standpoint, counterfeiting cuts into your revenue stream since someone is "free riding" on your brand, your advertising, maybe even your technology, and you are not getting paid for your work. Counterfeits may also damage your brand (when FakeFOO croaks under pressure instead of performing like the real product). Counterfeiting is the non-controversial part of supply chain concerns in that pretty much everybody agrees you should get what you pay for, and if you buy BigVendor's product FOO, version 5, you want to know you are actually getting FOO, version 5 (and not fake FOO). Note: I say, "non controversial," but when you have government customers buying products off eBay (deeply discounted) who are shocked - shocked I tell you! - to discover that they have bought fakes, you do want to say, "do you buy F-22s off eBay? No? Then what makes you think you can buy mission critical hardware off eBay? Buy From An Authorized Distributor, fool!"

The second area of supply chain risk hysteria is malware. Specifically, the concern that someone, somewhere will Put Something Bad in code (such as a kill switch which would render the software or hardware inoperable at a critical juncture). Without ever articulating it, the hysteria is typically that An Evil Foreigner - not a Good American Programmer - will Put Something Bad in Code. (Of course, other countries have precisely the same concern, only in their articulation, it is evil Americans who will Put Something Bad In Code.) The "foreign boogeymen" problem is at the heart of the supply chain risk hysteria and has led to the overreach of proposed solutions for it. (For example, the NIST draft wanted acquirers to be notified of changes to personnel involving "maintenance." Does this mean that every time a company hires a new developer to work on old code - and let's face it, almost everybody who works in development for an established company touches old code at some point - they have to send a letter to Uncle Sam with the name of the employee? Can you say "intrusive?")

So here is my take on the reality of the "malware" part of supply chain. It's a long explanation, and I stole it from a paper I did on supply chain issues for a group of legislators. I offer these ideas as points of clarification that I fervently hope will frame this discussion, before someone, in a burst of public service, creates an entirely new expensive, vague, "construct" of policy remedies for an unbounded problem. Back to my gardening analogy, if eradicating the roots of a plant is important and necessary to kill off a biological interloper, it is also true that some plants will not grow in all climates and in all soil no matter what you do: I cannot grow plumeria (outdoors) in Idaho no matter how hard I try and no matter how much I love it. Similarly, some of the proposed "solutions" to supply chain risk are not going to thrive because of a failure to understand what is reasonable and feasible and will "grow" and what absolutely will not. I'll go farther than that - some of the proposed remedies - and much of what is proposed in the draft NIST standard - should be dosed with weed killer.

Constraint 1: In the general case - and certainly for multi-purpose infrastructure and applications software and hardware - there are no COTS products without global development and manufacturing.

Discussion: The explosion in COTS software and hardware of the past 20 years has occurred precisely because companies are able to gain access to global talent by developing products around the world. For example, a development effort may include personnel on a single "virtual team" who work across the United States and in the United Kingdom and India. COTS suppliers also need access to global resources to support their global customers. For example, COTS suppliers often offer 7x24 support in which responsibility for addressing a critical customer service request migrates around the globe, from support center to support center (often referred to as a "follow the sun" model). Furthermore, the more effective and available (that is, 7x24 and global) support is, the more likely problems will be reported and resolved more quickly for the benefit of all customers. Even smaller firms that produce specialized COTS products (e.g., cryptographic or security software) may use global talent to produce it.

Hardware suppliers are typically no longer "soup to nuts" manufacturers. That is, a hardware supplier may use a global supply network in which components - sourced from multiple entities worldwide - are assembled by another entity. Software is loaded onto the finished hardware in yet another manufacturing step. Global manufacturing and assembly helps hardware suppliers focus on production of the elements for which they can best add value and keeps overall manufacturing and distribution costs low. We take it for granted that we can buy serviceable and powerful personal computers for under $1000, but it was not that long ago that the computing power in the average PC was out of reach for all but highly capitalized entities and special purpose applications. Global manufacturing and distribution makes this possible.

In summary, many organizations that would have deployed custom software and hardware in the past have now "bet the farm" on the use of COTS products because they are cheaper, more feature rich, and more supportable than custom software and hardware. As a result, COTS products are being embedded in many systems - or used in many deployment scenarios - that they were not necessarily designed for. Supply chain risk is by no means the only risk of deploying commercial products in non-commercial threat environments.

Constraint 2: It is not possible to prevent someone from putting something in code that is undetectable and potentially malicious, no matter how much you tighten geographic parameters.

Discussion: One of the main expressions of concern over supply chain risk is the "malware boogeyman," most often associated with the fear that a malicious employee with authorized access to code will put a backdoor or malware in code that is eventually sold to a critical infrastructure provider (e.g., financial services, utilities) or a defense or intelligence agency. Such code, it is feared, could enable an adversary to alter (i.e., change) data or exfiltrate data (e.g., remove copies of data surreptitiously) or make use of a planted "kill switch" to prevent the software or hardware from functioning. Typically, the fear is expressed as "a foreigner" could do this. However, it is unclear precisely what "foreigner" is in this context:


  • There are many H1B visa holders (and green card holders) who work for companies located in the United States. Are these "foreigners?"

  • There are US citizens who live in countries other than the US and work on code there. Are these "foreigners?" That is, is the fear of code corruption based on geography or national origin of the developer?

  • There are developers who are naturalized US citizens (or dual passport holders). Are these "foreigners?"

(Ironically, naturalized citizens and H1B visa holders are arguably more "vetted" that native-born Americans.) It is unclear whether the concern is geographic locale, national origin of a developer or overall development practice and the consistency by which it is applied worldwide.

COTS software, particularly infrastructure software (operating systems, databases, middleware) or packaged applications (customer relationship management (CRM), enterprise resource planning (ERP)), typically has multiple millions of lines of code (e.g., the Oracle database has about 70 million lines of code). Also typically, commercial software is in a near-constant state of development: there is always a new version under development or old versions undergoing maintenance. While there are automated tools on the market that can scan source code for exploitable security defects (so-called static analysis tools), such tools find only a portion of exploitable defects, and these are typically of the "coding error" variety. They do not find most design defects, and they would be unlikely to find deliberately introduced backdoors or malware.
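
To make that distinction concrete, consider the following minimal C sketch (the function names and details are invented for illustration). The first function contains the kind of coding error static analysis tools reliably flag; the second "works" exactly as written, so a tool that only inspects the code that exists has nothing to match - the missing authentication check is a design defect, not a code pattern.

    #include <stdio.h>
    #include <string.h>

    /* A coding error most static analysis tools flag: an unchecked copy
       into a fixed-size buffer (overflow if 'name' exceeds 63 bytes). */
    void save_name(const char *name) {
        char buf[64];
        strcpy(buf, name);          /* typically flagged; snprintf() is the fix */
        printf("saved: %s\n", buf);
    }

    /* A design defect no such tool flags: syntactically clean code that
       simply never checks who is calling before doing something destructive. */
    void delete_all_records(void) {
        remove("records.db");       /* no authentication, no audit trail */
    }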

Given the size of COTS code bases, the fact that they are in a near-constant state of flux, and the limits of automated tools, there is no way to absolutely prevent the insertion of bad code that would have unintended consequences and would not be detectable. (As a proof point, a security expert in command and control systems once put "bad code" in a specific 100 lines of code and challenged code reviewers to find it. They couldn't. In other words, even if you know where to look, malware can be and often is undetectable.)
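
To give a flavor of how small and innocuous such an insertion can be, the sketch below is modeled on a widely reported 2003 attempt to slip a backdoor into the Linux kernel source tree (the surrounding code here is simplified and hypothetical). The entire backdoor is a single '=' where '==' belongs: ordinary callers see nothing, but an attacker passing the magic flag combination silently acquires uid 0 - root.

    #define WCLONE 0x00000001
    #define WALL   0x00000002

    struct task { int uid; };

    int check_wait_options(struct task *current, int options) {
        /* Looks like routine error handling, but note '=' rather than '==':
           if both flags are passed, uid is assigned 0. The assignment
           expression evaluates to 0 (false), so the error return never
           fires and nothing visible happens - except that the caller is
           now root. */
        if ((options == (WCLONE | WALL)) && (current->uid = 0))
            return -1;
        return 0;
    }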

Lastly, we are sticking our collective heads in the sand if we think that no American would ever put something deliberately bad in code. Most of the biggest intelligence leaks of the past were perpetrated by cleared American citizens (e.g., Aldrich Ames, Robert Hanssen and the Walker spy ring). But there are other reasons people could Do Bad Things To Code, such as being underpaid and disgruntled about it (why not stick a back door in code and threaten to shut down systems unless someone gives you a pay raise?).

Constraint 3: Commercial assurance is not "high assurance" and the commercial marketplace will not support high assurance software.

Discussion: Note that there are existing, internationally recognized assurance measures, such as the Common Criteria (ISO-15408), that validate that software counters the specific (stated) threats it was designed to address. The Common Criteria supports a sliding scale of assurance (i.e., levels 1 through 7) with different levels of software development rigor required at each level: the higher the assurance level, the more development rigor is required to substantiate it. Most commercial software can be evaluated up to Evaluation Assurance Level (EAL) 4 (which, under the Common Criteria Recognition Arrangement (CCRA), is also accepted by other countries that subscribe to the Common Criteria). Few commercial entities ask for or require "high assurance" software, and few if any government customers ask for it, either.

What is achievable and commercially feasible is for a supplier to have reasonable controls on access to source code during its development cycle and reasonable use of commercial tools and processes that will find routine "bad code" (such as exploitable coding errors that lead to security vulnerabilities). Such a "raise the bar" exercise likely will have a deterrent effect to the extent that it removes the plausible deniability of a malefactor inserting a common coding error that leads to a security exploit. Using automated vulnerability-finding tools, in addition to improving code hygiene, makes it harder for someone to deliberately insert a backdoor masquerading as a common coding error, because the tools find many such coding errors. Thus, a malefactor may, at least, have to work harder.
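
As a small, hedged illustration of the deterrence point (exact diagnostics vary by compiler and tool): with routine hygiene flags enabled, the commonest camouflage - an assignment where a comparison belongs - no longer passes a clean build, and the workaround that silences the warning is itself a construct reviewers tend to question.

    /* With warnings enabled (e.g., gcc -Wall, which includes -Wparentheses),
       the line below draws "suggest parentheses around assignment used as
       truth value." A malefactor can silence the warning by writing
       'if ((uid = 0))' - but that deliberate extra set of parentheses is
       exactly the kind of oddity reviewers are primed to question, so the
       "innocent typo" cover story is gone either way. */
    int is_root(int uid) {
        if (uid = 0)        /* warned; honest code says 'uid == 0' */
            return 1;
        return 0;
    }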

That said, and to Constraint 1, the COTS marketplace will not support significantly higher software assurance levels, such as manual code review of 70 million lines of code or extensive third party "validation" of large bodies of code beyond existing mechanisms (i.e., the Common Criteria). Nor will it support a "custom code" development model in which all developers are US citizens, any more than the marketplace will support US-only components and US-only assembly in hardware manufacturing. This was, in fact, a conclusion reached by the Defense Science Board in its report on foreign influence on the supply chain of software. And in fact, supply chain risk is not about the citizenship of developers or their geographic locale but about the lifecycle of software, how it can be corrupted, and taking reasonable and commercially feasible precautions to prevent code corruption.

Constraint 4: Any supply chain assurance exercise - whether improved assurance or improved disclosure - must be done under the auspices of a single global standard, such as the Common Criteria.

Discussion: Assurance-focused supply chain concerns should use international assurance standards (specifically the Common Criteria) to address them. Were someone to institute a separate, expensive, non-international "supply chain assurance certification," not only would software assurance not improve, it would likely get worse, because the same resources that companies today spend on improving their product would be spent on secondary or tertiary "certifications" that are expensive, inconsistent and non-leverageable. In the worst case, a firm might have to produce different products for different geographic locales, which would further divert resources (and weaken security). A new "regulatory regime" - particularly one that largely overlaps with an existing scheme - would be expensive and "crowd out" better uses of time, people, and money. To the extent some supply chain issues are not already addressed in Common Criteria evaluations, the Common Criteria could be modified to address them, using an existing structure that already speaks to assurance in the international realm.

Even in cases of "supply chain disclosure," any such disclosure requirement needs to ensure that the value of information - to purchasers - is greater than the cost to suppliers of providing such information. To that end, disclosure should be standardized, not customized. Even a large vendor would not be able to complete per-customer or per-industry questionnaires on supply chain risk for each release of each product they produce. The cost of completing such "per-customer, per-industry" questionnaires would be considerable, and far more so for small, niche vendors or innovative start-ups.

For example, a draft questionnaire developed by the Department of Homeland Security asked how many "foreigners" worked on each development project, for each phase of development (requirements, design, code, and test). A large product may have hundreds of projects, and collating how many "foreigners" worked on each of them provides little value (and says nothing about the assurance of the software development process) while being extremely expensive to collect. (The question was dropped from the final document.)

Constraint 5: There is no defect-free or even security defect-free software.

Discussion: While better commercial software is achievable, perfect software is not. This is the case because of a combination of generally poor "security education" in universities (most developers are not taught even basic secure development practices and have to be retrained by the companies that hire them), imperfect development practices, imperfect testing practices, and the fact that new classes of vulnerabilities are being discovered (and exploited) as enemies become more sophisticated. Better security education, better development practices and better testing will improve COTS (and non-COTS) software but will not eliminate all defects or even all security defects -- people make mistakes, and it's not possible to catch all of those mistakes.

As noted elsewhere, manual code inspection is infeasible over large code bases and is error prone. Automated vulnerability-finding tools are the only scalable way to find errors in large code bases, but even the best commercially available tools find perhaps 50% of the security defects resulting from coding errors, and very few security design errors (e.g., an automated tool can't "detect" that a developer neglected to include key security functionality, like encrypting passwords or requiring a password at all).
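
A hypothetical sketch of that second category: every call below is used correctly and nothing overflows, so a static analysis tool has nothing to flag, yet the design is broken - the credential should never be written in plaintext. "Should have hashed this" is a requirement, not a pattern in the code that exists.

    #include <stdio.h>

    /* Syntactically clean, no coding errors to flag - but the password
       is persisted in plaintext, a design flaw invisible to
       pattern-matching tools. */
    int store_credentials(const char *user, const char *password) {
        FILE *f = fopen("users.txt", "a");
        if (f == NULL)
            return -1;
        fprintf(f, "%s:%s\n", user, password);   /* plaintext on disk */
        fclose(f);
        return 0;
    }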

Lastly, no commercial software ships with "zero defects." Most organizations ship production software only after a phase-in period (so-called alpha and beta testing) in which a small, select group of production customers use the software and provide feedback, and the vendor fixes the most critical defects. In other words, there is typically a "cut-off" in that less serious vulnerabilities are not fixed prior to the product being generally available to all customers.

It is reasonable and achievable for a company to have enough rigor in its development practice to include actively looking for security defects (using commercial automated tools), triaging them (e.g., by assigning a Common Vulnerability Scoring System (CVSS) score), and, for example, fixing all issues above a particular severity. That said, it is a certainty that some defects will still be discovered after the product has shipped, and some of these will be security vulnerabilities.
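
A sketch of what such triage might look like mechanically (the findings and the threshold are invented for illustration; 7.0 is roughly the floor of the CVSS "high" severity band): score every finding, then block the release on anything at or above the bar.

    #include <stdio.h>

    struct finding { const char *summary; double cvss; };

    int main(void) {
        /* Hypothetical output of an automated scan, already scored. */
        struct finding findings[] = {
            { "buffer overflow in request parser", 9.3 },
            { "overly verbose error message",      2.6 },
            { "SQL injection in search endpoint",  7.5 },
        };
        const double threshold = 7.0;   /* fix-before-ship severity bar */
        int blockers = 0;

        for (int i = 0; i < (int)(sizeof findings / sizeof findings[0]); i++) {
            if (findings[i].cvss >= threshold) {
                printf("must fix before release: %s (CVSS %.1f)\n",
                       findings[i].summary, findings[i].cvss);
                blockers++;
            }
        }
        printf("%d blocking issue(s)\n", blockers);
        return 0;
    }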

There is a reasonableness test here we all understand. Commercial software is designed for commercial purposes and with commercial assurance levels. "Commercial software" is not necessarily military grade any more than a commercial vehicle - a Chevy Suburban, for example - is expected to perform like an M1 Abrams tank. Wanting commercial software to have been built (retroactively) using theoretically perfect but highly impractical development models (and by cleared US citizens in a secured facility, no less) might sound like Nirvana to a confluence of assurance agitators - but it is neither reasonable nor feasible and it is most emphatically not commercial software.

Book(s) of the Month

Strong Men Armed: The United States Marines vs. Japan by Robert Leckie

Robert Leckie was a Marine who served in WWII in the Pacific theater and also a prolific writer, much of his work military history (another book, Helmet for My Pillow, was a basis for HBO's The Pacific). As much as I have read about the Pacific War - and I've read a lot - I continue to be inspired and humbled by the accounts of those who fought it and what they were up against: a fanatical, ideologically-inspired and persistent foe who would happily commit suicide if he were able to take out many of "the American enemy." The Marines were on the front lines of much of that war and indeed, so many battles were the Marines' to fight and win. What I liked about this book was that it does not merely recap which battles were fought when, where and by which Marine division led by what officer; it delves into the individuals in each battle. You know why Joe Foss received the Congressional Medal of Honor, and for what (shooting down 23 Japanese planes over Guadalcanal), for example. History is made by warriors, and everyone - not just the US Marines - should know who our heroes are. (On a personal note, I was also thrilled to read, on page 271 of my edition, several paragraphs about the exploits of Lt. Col. Henry Buse, USMC, on New Britain. I later knew him as General Henry Buse, a family friend. Rest in peace, faithful warrior.)

I'm Staying with My Boys: The Heroic Life of Sgt. John Basilone, USMC by Jim Proser

One of many things to love about the US Marine Corps is that they know their heroes: any Marine knows who John Basilone is and why his name is held in honor. This book - told in the first person, unusually - is nonetheless not an autobiography but a biography of Sgt. "Manila" John Basilone, who was a recipient of the Congressional Medal of Honor for his actions at Lunga Ridge on Guadalcanal. He could have sat out the rest of the war selling war bonds but elected to return to the front, where he was killed the first day of the battle for Iwo Jima. In a world where mediocrity and the manufactured 15 minutes of fame are celebrated, this is what a real hero - and someone who is worthy of remembrance - looks like. He is reported to have said upon receiving the CMH: "Only part of this medal belongs to me. Pieces of it belong to the boys who are still on Guadalcanal. It was rough as hell down there."

The citation for John Basilone's Congressional Medal of Honor:

" For extraordinary heroism and conspicuous gallantry in action against enemy Japanese forces, above and beyond the call of duty, while serving with the 1st Battalion, 7th Marines, 1st Marine Division in the Lunga Area. Guadalcanal, Solomon Islands, on 24 and 25 October 1942. While the enemy was hammering at the Marines' defensive positions, Sgt. Basilone, in charge of 2 sections of heavy machineguns, fought valiantly to check the savage and determined assault. In a fierce frontal attack with the Japanese blasting his guns with grenades and mortar fire, one of Sgt. Basilone's sections, with its gun crews, was put out of action, leaving only 2 men able to carry on. Moving an extra gun into position, he placed it in action, then, under continual fire, repaired another and personally manned it, gallantly holding his line until replacements arrived. A little later, with ammunition critically low and the supply lines cut off, Sgt. Basilone, at great risk of his life and in the face of continued enemy attack, battled his way through hostile lines with urgently needed shells for his gunners, thereby contributing in large measure to the virtual annihilation of a Japanese regiment. His great personal valor and courageous initiative were in keeping with the highest traditions of the U.S. Naval Service."

Other Links

More than you ever wanted to know about sagebrush:

Physician, Heal Thyself

Thu, 2010-07-15 04:04

"The fault, dear Brutus, is not in our stars, But in ourselves, that we are underlings." Julius Caesar (1,ii140-141)

There is in some cases a terrible - and in some cases terrifying - disconnect between technology and our larger societal ability to understand it, in particular, to understand the risks it poses and the unintended consequences of those risks. The limitations of technology are not necessarily what we think they are, either. That is, we wouldn't solve all our technology problems if only we had - more technology. No, many of the limitations are ones we create ourselves, because of our inability to understand systemic risks, and by the way we think about and talk about technology as if it were something "new" and "different," instead of recognizing patterns that have repeated themselves in other disciplines.

One of the perspective slaps to the side of the head you get when you leave the nerdified air of Silicon Valley is that large swathes of the world are not technophiles, let alone technoacolytes. By that I mean that, regardless of the benefits of technology, once you drive past Los Gatos on highway 17, most of the people you meet don't think we'd achieve world peace if only we had a standards-compliant API for it. Nor for that matter does most of the world think that the Eleventh Commandment is "thou shalt honor the Lord thy God, by making thy code open source." As long as I have worked in technology, I continue to find the number of technological cults and cult members to be truly astonishing. If I were a social scientist, I might observe that, having extirpated God from so much of public life, we rush to find other ways to fill the void. The last time I checked, Deuteronomy said, "I am the Lord thy God, who brought you out of Egypt, out of slavery. You will have no other gods before me." Technology, it should be said, is not god. More like a golden calf.

If that sounds silly, think about the number of discussions we have all had with technocult members who speak in raptured, hushed tones about (insert all that apply): cloud computing, open source, object-oriented programming, agile development, and so on. (And my personal favorite, referring to any technology as "awesome." God and the North Shore of O´ahu in winter are awesome,* everything else is merely amazing, at best.) Note: I am not denigrating any of these technological constructs, merely observing that none of them have created world peace, cured cancer, raised the dead or helped anybody lose that last pesky 10 pounds. It's just technology. Even in my happiest moments curled up with my iPhone -- which is really nice technology and has made me more efficient -- I don't expect the iPhone GPS system to help me find real direction in life.

The first limitation of technology is one we impose ourselves: we make it a god, when it isn't, and IT people the high priests, when they aren't. The reason anybody cares is because technology has substantially altered our world and, if we admit it, not always for the better. Unfortunately, when technologists make an idol out of technology, we get all the overhead that comes with creating a new religion.


  • Non-believers may be ostracized or pressured to convert.

  • Statements of opinion - or ecstatic utterances - are treated as religious tenets and therefore, not open for discussion.

  • Instead of honest disagreements, we have (literal) religious arguments.

  • We may (figuratively) burn heretics at the stake.


The result is that in many cases technology is pushed merely because it is the next path to salvation, without any rational discussion of whether we need it and, most importantly, what risks it subjects us to - and whether those risks can even be understood, let alone mitigated. Technologists become like the snake, telling Adam and Eve they will be as God is if they take a bite, and, like Adam and Eve, we only later realize the technological apple has rendered us naked.

I should have realized this when I first moved to Silicon Valley many moons ago. I went to a party given by a friend who happened to work for a chip company that was a competitor to the company I worked for. She introduced me to a colleague who, instead of the usual "Hi, how are you, nice to meet you," glared at me and said, "we're going to kill you in MOS technology." ("Not tonight, unless you are serving hemlock," was my response.) The "technology as god" cult has been reinforced a number of times over the interim years, most recently by an entrepreneur I recently talked to who had a hard time understanding that inventing a cool new technology was not, per se, enough to get him in my door or anybody else's. While I admire his entrepreneurial gifts, unless he is solving a problem people care about and can explain, in less than 25 words, he is not going to get past the "I only want twenty minutes of your time" barrier. It's just technology. It's not (a) god. It's not even a worthy golden calf wanna be.

Another limitation of technology is linguistic. I don't mean the difference between French and German: more like the difference between English and Martian. Many technologists might as well be speaking Martian, they are so far removed from the people who need to understand what the technology can do, what it can't do, and why anybody would want to use it. End users. Customers. Legislators. Those who are legitimate stakeholders in determining whether the risks of technology usage outweigh the benefits, but who cannot do that unless technologists can make themselves understood. By way of example, I might just possibly be understood in La Jolla if I were to say to someone in the surf lineup at Windansea, **"´Auwe, aia he mano nui loa! E ku´u hoa, e hele mai!" However, I'd be far more likely to get a response if I said "Alas! There is a really big shark. Get over here, buddy!" At any rate, if I cannot deliver a warning in English, I should not then excoriate a friend (on his way to the emergency room thanks to the man in the gray suit ***) that he really needs to learn Hawaiian to avoid future shark bites.

One of the main reasons technologists cannot or will not make themselves understood is the overuse of jargon. I've never met a group of people who were more jargon-happy than technologists, unless it is teenagers -- who at least grow out of it, if for no other reason than that they are forced to. (I don't care a hoot about "intergenerational communication styles"; if a candidate for an open headcount insists on using "BFF" and "OMG" in an interview, he/she will experience "GFG"****.) If we are honest, we admit that we use jargon not only for the sake of efficiency but to make us feel smarter than, and superior to, those who are not in the techno-know. Jargon becomes a means to exclude people from the club, and a means to hornswoggle others if we are fortunate enough to have an audience of either true believers or one that is too embarrassed to admit what they do not know. For our part, we reason, if they are too stupid to know what OCSP and HAML***** stand for, then they are not worth the time for us to offer our explanations. Jargon is also a means to cut yourself out of the herd. Everyone wants to be the first to invent a new buzzword or the first to use it. I'd only just learned what the acronym APT stands for -- Advanced Persistent Threat -- and already, I have had slimy sales reps emailing me asking about what Oracle does to combat APT and do I know that they have a product that can protect against it, not to mention slice, dice and make julienne fries? (What I'd really like is a product that protects against APSRs - Annoyingly Persistent Sales Reps. Nobody has offered to sell me one yet.)

I'm not immune to the temptation. I was once in a meeting with a bunch of developers talking about security issues. After listening to the discussion, I said with great solemnity, "as I see it, we need an ITP story. In fact, we need an S-ITP story." Everyone nodded. Finally, someone had the courage to say, "what is S-ITP?" to which I replied, "the Secure Internet Toaster Protocol."****** We get wrapped up in our own linguistic cleverness to the point we do not always know what we are saying. How, then, can we expect our constituents to understand what technology is capable of, and what it is not capable of? True, there are many other organizations or cohorts that are almost as jargon-happy as we are, but few of them are tasked with the level of responsibility we have. It's not just the Marine Corps and the Three Letter Agencies that do national security: technoids do. At least when I was in the Navy, you could always look up FAADCPAC******* or other cryptic acronyms in the DICNAVAB (Dictionary of Naval Abbreviations). Good luck with that in our industry (this is where the cloud cult members insist that all I have to do is look up my acronym in the acronym cloud). The point is: I shouldn't have to.

Even God - the real one - communicated in a language his followers could understand (and wrote the rules down on tablets). Technologists won't even do that. The other thing God (or His scribes) did was tell stories. It's easier for most people to get their minds around a creation story that begins "in the beginning, the world was formless and void..." than around the Big Bang and/or quantum physics. Phrases like "Can the Ethiopian change his skin or the leopard its spots?" speak to the fact that some things are immutable. We get it so well that the number of people who use the phrase vastly exceeds the number who know where it comes from (Jeremiah 13:23). Analogies, stories, and using examples people understand all help make the complex more simple: God is more understandable than the average technologist. No wonder most people know that they shouldn't lie, cheat, or steal, but we have lots of the Clueless Faithful who think it's a good idea to allow houses to talk to power plants. (What next, wireless access to life support systems? Because it is just so expensive to send a nurse into someone's room.)

On the occasions when I have had the privilege of testifying in front of Congress, I've had 5 minutes to try to help well-intended legislators do something that will make a positive difference. How far do you think I'd get if I said, "we need to deploy IPv6 broadly to fix all our cybersecurity problems" or "insecure RNGs are the bane of network encryption"? I might actually get farther if I said, "'Auwe, eia he mano nui loa! E ku'u hoa, e hele mai!" (At least, Senators Inouye and Akaka - and the rest of the state of Hawai´i delegation - might understand.) I readily admit that I am not a technologist - I don't have the in-depth knowledge that most of my team has. What I do have is the ability to ask questions until I understand the gist of - and details of - a problem, and the ability to translate the problem into terms others can understand. Without that communication, you get people suffering from avoidable shark bites because they don't speak Hawaiian. Why is it so hard for technologists to understand that if they cannot communicate, their technological acumen is worthless, even dangerous?

Maybe the reason technologists resist the use of analogies is that it would reveal that the emperor has no clothes or, more accurately, that the emperor's clothes are the same ones everyone else is wearing. By that I mean, that if we can use analogies to explain technology and technological limits, it makes it all too obvious that we already have examples we can use to, for example, craft public policy. We might be forced to admit that technology isn't the ooh, aah, gee whiz stuff we say it is, it's just the same old problems wrapped up in shiny new bits.

For example, there are a number - no, a lot - of cybersecurity bills in draft currently. A core element of many of these bills is the degree to which the Federal government should exert "control" over private networks and what form that control should take. In my opinion, there are many reasons for thinking that the Federal government is not well suited to such a role. One of the main reasons goes to basic accountability 101. The best example I could come up with was physical security. The CEO of a company that has no door locks, no physical security of any kind, and whose company experiences massive thefts from people wandering around its buildings would not have a job for very long. The police might help investigate the break-in but they would also be the first to recommend locks and a security system. They certainly would not take over building defense.

"Oh, but cyber is different." Why, precisely? Assets are increasingly stored electronically, corporate "boundaries" include electronic ones (or should). If we think there is nothing valuable to protect on corporate networks then let's skip authentication and dispense with firewalls. Clearly, we know that data is valuable and corporations do have a responsibility to protect their own resources - they owe that to shareholders. If cyber is so "complicated" that we can't possibly secure it, one has to ask why these entities knowingly continue to double down on risk they cannot mitigate. The buck stops somewhere and it is not (primarily) at the Department of Homeland Security. Business cannot realistically have it both ways - embrace the increasing use of technology ("do more with less!") and then declaim responsibility for having done so. And just as the local police department does not have keys to local businesses -- nor do they install and monitor close circuit TV in each business to detect and prevent crime -- we ought to have sensible boundaries about who secures what.

Jargon makes people feel smart and superior, but end users and key stakeholders - including, increasingly, legislators - do not speak that jargon. If we cannot learn to de-jargon ourselves and speak in languages that our audience can understand and process, technology will continue to ensnare us instead of setting us free. I'll close with an illustration that brings together several of the themes I have been talking about - responsibility and limits - all wrapped up in a nice, de-jargoned turn of phrase:

"The LORD God took the man and put him in the Garden of Eden to work it and take care of it. And the LORD God commanded the man, "You are free to eat from any tree in the garden; but you must not eat from the tree of the knowledge of good and evil, for when you eat of it you will surely die." (Genesis 2:15-17)
I wish technologists were as forthright.


* Ok, I admit, I just violated the first commandment by making surfing a god. But the North Shore when it breaks is pretty awe-inspiring, at the least.

** Not that I've ever seen a shark there. But I did see a dorsal fin pop up next to me one Saturday. It was the longest three seconds of my life until I heard the exhale of - a dolphin.

*** "Man in the gray suit" is a surfing euphemism for "shark"

**** Gone for good

***** Caught you! OCSP is the Online Certificate Status Protocol, but HAML is something I made up (Hack Attack Markup Language, by which we all craft standards-compliant security vulnerabilities so it is easier for hackers to exploit them on multiple web applications, in a standards-compliant way).

****** As far as I can tell, there is no reason to put household appliances on a network and many reasons not to. Anyway, I don't really think you need special authorization to toast bagels vs. white bread.

******* Fleet Accounting and Disbursing Center, US Pacific Fleet, if you care.


Books

Matterhorn: A Novel of the Vietnam War by Karl Marlantes

http://www.amazon.com/gp/product/080211928X/ref=pd_lpo_k2_dp_sr_1?pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=0979528534&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=1VJP13PEV1Q4RBD3F060

It's not often I get a chance to read a book that I think will become a classic, but this is one of them. The author is a decorated Vietnam-war veteran and the "authenticity" shows. The premise is that a green 2LT is sent into the bush with his team to take and hold a territory known as "the Matterhorn." It's impossible for me to describe how gripping this book is, how real the characters are, and how invested you become in them. I would never - I hope - have the hubris to say that I understand what it was like to have been in Vietnam, but after reading this book, I feel I have been a spectator. A great, magnificent book.

A Dog for All Seasons: A Memoir by Patti Sherlock

http://www.amazon.com/Dog-All-Seasons-Memoir/dp/0312577923/ref=sr_1_1?ie=UTF8&s=books&qid=1273709745&sr=1-1

Personally, I am not so hot on border collies since one of them tried herding Thunder on the Nordic trail (Thunder did not want to be herded) and I had an expensive vet bill from the border collie biting him. One reason I think leash laws are critical. But I digress.

We forget that so many dogs (mine included) are working dogs. Working dogs need something to do besides sit in a dog basket and snarf dog treats. Working dogs are also indispensable to many people (you can't really herd sheep without them). The author wrote this book about a remarkable border collie named Duncan who lived with her on a sheep ranch in eastern Idaho. A dog that was more than merely a dog. Well worth the read.

Sailing in the Wake of the Ancestors: Reviving Polynesian Voyaging by Ben Finney

http://www.amazon.com/Sailing-Wake-Ancestors-Polynesian-Excellence/dp/1581780249

The resurrection of Hebrew as a popular (i.e., not merely scholarly) language is one of the great comeback stories in history. The other is the resurrection of Polynesian voyaging. This book tells the how and the why of how a dead or dying art was recreated, and the shot in the arm it gave to Polynesian peoples, who must now - largely as a result of the work of the Polynesian Voyaging Society - be acknowledged as the greatest navigators of all time. The author does not spare the infighting that almost led to the destruction of the Polynesian Voyaging Society and the means by which it was resurrected. The happy ending is the number of Polynesian peoples who are participating in wayfinding, just as their ancestors did. (It's still amazing to me that anybody can travel thousands of miles by the stars, observing ocean currents and birds.)

Summer R & R

Tue, 2009-09-08 07:35

Many of us take summer vacations to indulge in some R&R. Usually, we mean "rest and relaxation" by the abbreviation. R&R can also mean "reading and reruns" for those of us of the couch potato persuasion. I've done a lot of reading this summer (more on that below) and on those evenings when I can't concentrate on a demanding book, I sack out on the couch and watch reruns (e.g., NCIS and Law and Order; I find I am much better at figuring out whodunnit if I already know who did it. Less mental effort, too).

There are other summer reruns materializing in Washington, in particular a revamped version of S. 773, the Cybersecurity Act of 2009 (aka the Snowe-Rockefeller Bill, after Senators Olympia Snowe (R-Maine) and Jay Rockefeller (D-WV)). First, the disclaimers: I've written a column for Oracle Magazine on this topic, so I am stealing material from myself (otherwise known as "repurposing content"). Second, I always assume that members of Congress and their staff have the best of intentions when they draft a piece of legislation. So, no evil motives are assigned to them by me, nor should any be imputed. This disclaimer will be especially important when I explain why the Snowe-Rockefeller rerun is, despite good intentions, not an improvement on its original version.

I've reviewed a number of bills in my years working in cybersecurity and I have seen plenty that have become laws that best fit into the "what were they thinking?" category. I therefore offer a modest proposal: members of Congress should observe just four ironclad rules when drafting cybersecurity legislation, rules that would result in better, clearer and less ambiguous legislation, which is less subject to random interpretation and/or legal challenges (e.g., on Constitutional grounds). Here they are:

1) Set limits; don't overreach. Before writing a law, determine the problem(s) the bill is trying to solve, whether legislation will actually solve the problem(s), at what cost and with what "unintended consequences." Also, determine whether there is another remedy equally or more effective at less cost and/or reach.

2) Do no harm. The legislative remedy shouldn't kill the problem by maiming the patient.

3) Use precise language. Vague language will be misinterpreted or - worse - lead to people spending a lot of money without knowing if they are "there." In the case of cybersecurity, vague language means lawyers are more likely to be making the security decisions for companies. Worst of all are the "no auditor left behind" security bills for the amount of work they create and the expenditure they require without materially improving security.

4) Uphold our current laws and values (e.g., the Constitution).

With that in mind, here are my thoughts on the Snowe-Rockefeller rerun.

First, the draft bill calls for certification of cybersecurity professionals; however, the term "cybersecurity professionals" is not defined. What, precisely does that term cover?

  • Someone who is a CISO? A CSO?

  • Someone who is a security architect?

  • Someone who applies patches, some of which are security patches?

  • Someone who configures any product (after all, some settings are security settings)?

  • Someone who installs AV software on mom and pop's home computer (gee, that could include their 9-year-old son Chad, the computer whiz)?

  • Someone who administers firewalls?

  • Someone who does forensic analysis?

  • What about software developers - after all, if their code is flawed, it may lead to security vulnerabilities that bypass security settings?

  • Does it mean security researchers?

  • What about actual hackers? (It would be an interesting consequence of this bill if, in the future, someone isn't convicted for hacking (computer trespass) but is fined because (s)he does not have a CISHP (Certified Information Security Hacking Professional) certification.)

If you cannot tell, based on the information in a bill, to whom it applies and what "compliance" means, the likely beneficiaries are auditors, who were already given an industry boost courtesy of the Sarbanes-Oxley Act, the gold standard of the "No Auditor Left Behind" bills I mentioned and the slayer of the US IPO market. More to the point, for all the money organizations could spend getting cybersecurity professional certifications for the people who don't do anything more in security than send out the "don't forget to change your password!" notices every 90 days, they could do more that actually improves security with the same funds. Getting certifications for people who don't need them crowds out more useful activity and thus could do actual harm. The lack of a clear definition in the draft bill alone runs afoul of my ironclad rules 1, 2 and 3 (and 4, as I will show later).

There is another problem with this provision: the potential for windfall profits by some (on top of not necessarily making the problem space better and possibly making it worse). Aside from product certifications (e.g., "so-and-so is a certified professional in administering product FOO"), which vendors administer, I believe that many "cyber-certification" bodies that exist now are for-profit (meaning such a bill is a mandate to spend money). The problem is made worse if the entities are effectively granted monopoly power over certifications.

To wit, a small aside here to bash ISC(2), or more correctly, a single individual within ISC(2). I and most of my team have received the new Certified Secure Software Lifecycle Professional (CSSLP) certification. I have to say, I didn't think it was that hard to get, nor do you really have to demonstrate much actual expertise in development practice. The hard part of "secure software lifecycle" is doing it, not writing about it, taking exams about it, or the like. The next thing I know, I am getting a cold call from someone whom I can only construe to be a sales rep for ISC(2) telling me why everybody in Oracle should take their CSSLP training classes and get the certification.

My response was what I outlined above: I did not see the value for the money. The hard part is doing secure development, not getting a CSSLP certification and anyway, for the amount of money we'd spend to do massive CSSLP training (and by the way, we actually do secure development so I don't see the need for ISC(2) training on top of what we already do in practice or the training we provide to developers), we could do more valuable things towards, oh, actually improving Oracle product security. I'd rather improve product security than line ISC(2)'s pockets. Customers would prefer I do that, too.

In response, I received what I can only construe as a "policy threat," which was Slimy Sales Guy saying that the Defense Department was going to start requiring CSSLPs as a condition of procurement so I needed to talk to him. (Gee, I bet ISC(2)'s lobbyists were busy.) My response was "hey, good to know, because that sounds like you've been handed a monopoly by DoD, which is inherently anticompetitive - who in the IT industry made you the arbiters of what constitutes 'secure development skill?'" I also said that I would work to oppose that provision - if it exists - on public policy grounds. ISC(2)'s certification wasn't broadly enough arrived at (full disclosure: I was asked about the utility of such a certification before ISC(2) developed it and I said I did not see the need for it). More to the point, you could get a CSSLP and still work for an organization that does not (technical, secure development terminology follows) give a rat's behind about actually building secure software so who the bleep cares?

I shouldn't single ISC(2) out in the sense that a lot of entities want to get legislation passed that allows them to get government-mandated money by, say, requiring someone to get their certification, or buy their product, or use their services.* If Slimy Sales Guy does not speak for ISC(2), my apologies to them, but I did not appreciate Oracle being "shaken down" as thanks for my team being an early adopter of CSSLP.

Back to the Snowe-Rockefeller rerun: it's bad enough that one out of every five people in the US has a licensing or certification requirement for his job** but if we are going to add one more requirement and license cybersecurity professionals, then at least figure out who "cybersecurity professionals" are, why we need to do that, how we will do it and constrain the problem.

The bill compounds the vague definition of "cybersecurity professional" by requiring that "3 years after the date of enactment of this Act, it shall be unlawful for an individual who is not certified under the program to represent himself or herself as a cybersecurity professional." Why does the federal government want to directly regulate cybersecurity professionals to a degree that arguably exceeds medical licensing, professional engineers' licensing, architects' licensing and so forth? Even in professions that have licensing requirements, there are state-by-state requirements that differ (e.g., California has more stringent licensing for structural engineers because there is a requirement for seismic design in CA that other, less earthquake-prone states do not have). Also, such a hands-on role for the federal government raises real constitutional concerns. Where in the Constitution is the Federal government authority as the licensing and regulatory body for all cybersecurity? (See ironclad rule number 4.)

The draft bill also would allow the president to exert control over "critical infrastructure information systems and networks" in the event of a "national emergency" - including private networks - without defining what either of those things are, which would leave the discretion to the executive branch. I read this to mean the President would be able (in an "emergency") to exert authority over private networks based on whatever criteria he/she wants to use to declare them "critical." *** If "critical infrastructure information systems and networks" are so critical, why can't we define what they are before legislating them? Are those networks pertaining to:

Utilities? Financial services? Manufacturing? (What kind of manufacturing - someone's toy making control systems or are we talking about heavy industry?) Health care?Agriculture? Other?

I have concerns - because I am a student of history - about giving anyone too much power in what we think is a good cause and watching that power turned against us. Vague terms combined with explicit presidential authority over these ill-defined terms can be a dangerous legislative formula.

There is also a provision that requires "...real time cybersecurity status and vulnerability information of all Federal Government information systems and networks managed by the Department of Commerce, including an inventory of such, vulnerabilities of such systems and networks, and corrective action plans for those vulnerabilities..." Of course, it makes sense for any owner of a network to know what's on their network and its state of "mission readiness," which in this context could include the state of its security configuration and whether security patches have been applied. However - and I made the same comment on the first draft bill - "vulnerabilities" is not defined and there is almost no such thing as "real time vulnerability information" if "vulnerability" includes defects in software that are not publicly known and for which no workaround or patch exists. Most vendors do not provide real time vulnerability information because there is nothing that increases the risk to customers like telling them of a vulnerability with no fix (or other threat mitigation) available.

"Everybody knows what we mean" is not good enough if cybersecurity is truly a national security problem, which it clearly is. At a minimum, for purposes of this bill, "vulnerability" should be explicitly defined as either a configuration weakness or a defect in software that has been publicly disclosed and for which a patch or other remediation exists. Otherwise, someone will construe this draft bill to require vendors to notify customers about security problems with no solutions as soon as they find the problems - real time, no less. Uh, no, not going to happen.

We do not need legislation or regulation for the sake of regulation, especially when it is not clear what and who is being "regulated" and what "compliance" means and at what cost. And, most importantly, I need to be convinced that the cost of regulation - the all in cost - is worth a clear benefit and that benefit could not be derived in a better or more economical or less draconian way. Most importantly, I want this bill - or any bill - to uphold our values and specifically the values enumerated in the Constitution. Good motives are not enough to create good public policy. I truly hope the next remake of Snowe-Rockefeller is worthy of its intentions, and advances our nation's cybersecurity posture.

* Here's mine: I would like a bill passed called the Hawaiian Language Preservation Act. As part of that act, I'd like to require musicians to (in addition to paying authors of works their royalties if the work is performed in public) obtain a certification that they pronounce the lyrics of the song correctly. You won't be able to perform in public (or at least, sing Hawaiian music) unless you have a Correct Hawaiian Lyrics Pronunciation (CHLP) certification. This is a bigger problem than you would think, according to my 'ukulele teacher, Saichi (who insists we pronounce the language correctly as we sing and "good on him"). Because I am a straight up gal, I won't even be greedy - I'll just require CHLP certification for anyone publicly performing any of the Rev. Dennis Kamakahi's songs (he's written about 400 or so songs, as far as I can tell he has never written a bad song, they are very popular and often played). Now, everybody will have to come to me to get a piece of paper that asserts they can pronounce "hāwanawana" correctly (it shows up in the second verse of Koke'e). See how easy that was? I figure I can use the proceeds of my CHLP certification program to buy a house in Honolulu (and improve everyone's Hawaiian pronunciation, too).

** Source: The Dirty Dozen, more about which below.

*** A colleague who reviewed this blog entry for me raised some even scarier concerns I thought were spot-on. Consider that some elements of our country have been at "heightened alert status" since 9/11/01 (e.g., air transportation). Some networks (e.g., DoD) are being probed daily so it's conceivable that a similar "heightened alert status" for cyber could be put in place in some sectors and left "on." Would the government be able to search any records, at any time, in a sector once a (semi-permanent) cyberalert exists? It's sometimes happened that a company that works with a law enforcement entity after a cyberincident is asked for "everything": logs, machines, access to people. Perhaps an experienced person knows how to ask for the minimum information needed to investigate an incident, but the law can't require that an "experienced, reasonable person with judgment" would be the enforcement mechanism. No company wants to face having to hand over all their data, their servers and their people because of an "alert." What would the government really accomplish if every company in that sector flooded them with records? Also, would companies receive some immunity or could data obtained under an "alert" be used for another purpose by the government?

Books of the Month

I have not blogged in awhile so I am overloading the following section. I have been doing a lot of summer reading and it is hard to recommend just one book:Huckleberry Finn by Mark Twain

Ernest Hemingway declared that "All modern American literature comes from one book by Mark Twain called Huckleberry Finn." It is a classic, and that is all the more reason to read it if you haven't already and reread it if you haven't read it in awhile. It's ineffably sad and short-sighted that a lot of schools either don't have a copy or don't teach this book anymore due to the prevalence of the "n word" in the text. That is political correctness run amok, especially since Twain was an expert satirist and the most heroic character in the book is the runaway slave, Jim. If you think Twain condones slavery, you didn't read the book closely enough: no, not at all.On Wings of Trust: The Quest of Pilot Carole Leigh by Maynard D. Poland

http://www.amazon.com/Wings-Trust-Quest-Pilot-Carole/dp/1419637800

I am particularly partial to this book because it is about a friend of mine. No, she's more than that, she is a great friend of long standing (we were Navy buddies) and she was a pioneer - a P3 pilot in the Navy and then a commercial airline pilot. Carole is one of the highest integrity people I know and that shines throughout the book, never more so than in her dealing with scary emergencies in-flight - and in her not turning a blind eye when something Is Not Right. The highest compliment I could pay someone is that I would trust her with my life, and I would trust Carole with mine. It's a great (true) story about a great person.

A Moveable Feast: The Restored Edition by Ernest Hemingway

http://www.amazon.com/Moveable-Feast-Restored-Ernest-Hemingway/dp/1416591311/ref=sr_1_1?ie=UTF8&s=books&qid=1252113968&sr=1-1

A Moveable Feast has been in print for some time (and is one of my favorite books by Hemingway), but this is a new version: since the book was published posthumously and there was no "definitive manuscript," it is hard in some sections to know what Hemingway intended to write. The expanded version gives in some cases an entirely differently flavor: Hemingway comes across as much less - literary criticism term - "snotty" towards F. Scott Fitzgerald in this version. The book gives a real flavor both of Paris and the Lost Generation's place in it in the 1920s.

Baking Cakes in Kigali by Gaile Parkin

http://www.amazon.com/dp/0385343434/?tag=googhydr-20&hvadid=4024611209&ref=pd_sl_45rnacbtln_e

People who like the gentle humor of the No. 1 Ladies' Detective Agency will like this. People in Kigali come to Angel, an expert cake baker, to order cakes and as they do, they tell their stories. The book does not spare the real challenges faced in Rwanda - the devastation wrought by AIDS, for example, and yet it's a lovely, redemptive story.

The Blue Notebook by James Levine

http://www.amazon.com/Blue-Notebook-James-Levine-M-D/dp/038552871X/ref=sr_1_1?ie=UTF8&s=books&qid=1251937532&sr=1-1This is the story of a young Indian girl sold into child prostitution despite which, her spirit prevails. It is a disturbing and tragic book - and yet, extremely moving, all the more so when you realize that the author is donating the US proceeds of the book to the Center for Missing and Exploited children. A wonderful read.

The Dirty Dozen: How Twelve Supreme Court Cases Radically Expanded Government and Eroded Freedom by Robert A. Levy and William Mellor

http://www.amazon.com/Blue-Notebook-James-Levine-M-D/dp/038552871X/ref=sr_1_1?ie=UTF8&s=books&qid=1251937532&sr=1-1

This book analyzes the twelve worst decisions by the US Supreme Court and how they have affected our freedoms. You will need Maalox or a stiff gin and tonic after reading it. The concept of limited government envisioned by our founding fathers is not what we have now, and this book explains why. The erosion of freedom/expansion of government began for the most part under Franklin Roosevelt but there are some recent cases highlighted such as Kelo vs. New London, that upheld government abuse of eminent domain. At the time the book went to print DC vs. Heller (an important 2nd amendment case) had not been decided but it is mentioned in the book. I finished the book four days ago and I am still aghast at what I learned.

The Art of Racing in the Rain - by Garth Stein

http://www.amazon.com/Art-Racing-Rain/dp/B0017SWPXY

I picked this up because someone recommended it to me and I was going to spend the day on planes and in the airport. After I opened it, I could not put it down, and when I finished it, I felt I had read something wondrous. The book is about the travails in a family, told from the dog's point of view. It sounds too strange to work, but it does work, and the character Enzo (the dog) is unforgettable. He puke kapu (a sacred book).

Summer R & R

Tue, 2009-09-08 07:35


Many of us take summer vacations to indulge in some R&R. Usually, we mean "rest and relaxation" by the abbreviation, but R&R can also mean "reading and reruns" for those of us of the couch potato persuasion. I've done a lot of reading this summer (more on that below), and on those evenings when I can't concentrate on a demanding book, I sack out on the couch and watch reruns (e.g., NCIS and Law and Order). I find I am much better at figuring out whodunnit if I already know who did it. Less mental effort, too.

There are other summer reruns materializing in Washington, in particular a revamped version of S. 773, the Cybersecurity Act of 2009 (aka the Snowe-Rockefeller Bill, after Senators Olympia Snowe (R-Maine) and Jay Rockefeller (D-WV)). First, the disclaimers: I've written a column for Oracle Magazine on this topic, so I am stealing material from myself (otherwise known as "repurposing content"). Second, I always assume that members of Congress and their staff have the best of intentions when they draft a piece of legislation, so no evil motives are assigned by me, nor should any be imputed. This disclaimer will be especially important when I explain why the Snowe-Rockefeller rerun is, despite good intentions, not an improvement on its original version.

I've reviewed a number of bills in my years working in cybersecurity and I have seen plenty that have become laws that best fit into the "what were they thinking?" category. I therefore offer a modest proposal: members of Congress should observe just four ironclad rules when drafting cybersecurity legislation, rules that would result in better, clearer and less ambiguous legislation, which is less subject to random interpretation and/or legal challenges (e.g., on Constitutional grounds). Here they are:

1) Set limits; don't overreach. Before writing a law, determine the problem(s) the bill is trying to solve, whether legislation will actually solve the problem(s), at what cost and with what "unintended consequences." Also, determine whether there is another remedy equally or more effective at less cost and/or reach.
2) Do no harm. The legislative remedy shouldn't kill the problem by maiming the patient.
3) Use precise language. Vague language will be misinterpreted or - worse - lead to people spending a lot of money without knowing if they are "there." In the case of cybersecurity, vague language means lawyers are more likely to be making the security decisions for companies. Worst of all are the "no auditor left behind" security bills for the amount of work they create and expenditure they require without materially improving security.
4) Uphold our current laws and values (e.g., the Constitution).

With that in mind, here are my thoughts on the Snowe-Rockefeller rerun.

First, the draft bill calls for certification of cybersecurity professionals; however, the term "cybersecurity professionals" is not defined. What, precisely does that term cover?

Someone who is a CISO? A CSO?
Someone who is a security architect?
Someone who applies patches, some of which are security patches?
Someone who configures any product (after all, some settings are security settings)?
Someone who installs AV software on mom and pop's home computer (gee, that could include their 9-year-old son Chad, the computer whiz)?
Someone who administers firewalls?
Someone who does forensic analysis?
What about software developers - after all, if their code is flawed, it may lead to security vulnerabilities that bypass security settings?
Does it mean security researchers? What about actual hackers? (It would be an interesting consequence of this bill if, in the future, someone isn't convicted for hacking (computer trespass) but is fined because (s)he does not have a CISHP (Certified Information Security Hacking Professional) certification.)

If you cannot tell, based on the information in a bill, to whom it applies and what "compliance" means, the likely beneficiaries are auditors, who were already given an industry boost courtesy of the Sarbanes-Oxley Act, the gold standard of the "No Auditor Left Behind" bills I mentioned and the slayer of the US IPO market. More to the point, for all the money organizations could spend getting cybersecurity certifications for people who do nothing more in security than send out the "don't forget to change your password!" notices every 90 days, they could spend the same funds on things that actually improve security. Getting certifications for people who don't need them crowds out more useful activity and thus could do actual harm. The lack of a clear definition in the draft bill alone runs afoul of my ironclad rules 1, 2 and 3 (and 4, as I will show later).

There is another problem with this provision: the potential for windfall profits by some (on top of not necessarily making the problem space better and possibly making it worse). Aside from product certifications (e.g., "so-and-so is a certified professional in administering product FOO"), which vendors administer, I believe that many "cyber-certification" bodies that exist now are for-profit (meaning such a bill is a mandate to spend money). The problem is made worse if the entities are effectively granted monopoly power over certifications.

To wit, a small aside here to bash ISC(2), or more correctly, a single individual within ISC(2). I and most of my team have received the new Certified Secure Software Lifecycle Professional (CSSLP) certification. I have to say, I didn't think it was that hard to get, nor do you really have to demonstrate much actual expertise in development practice. The hard part of "secure software lifecycle" is doing it, not writing about it, taking exams about it, or the like. The next thing I know, I am getting a cold call from someone who I can only construe to be a sales rep for ISC(2) telling me why everybody in Oracle should take their CSSLP training classes and get the certification.

My response was what I outlined above: I did not see the value for the money. The hard part is doing secure development, not getting a CSSLP certification. And anyway, for the amount of money we'd spend on massive CSSLP training (we actually do secure development, so I don't see the need for ISC(2) training on top of what we already do in practice or the training we provide to developers), we could do more valuable things towards, oh, actually improving Oracle product security. I'd rather improve product security than line ISC(2)'s pockets. Customers would prefer I do that, too.

In response, I received what I can only construe as a "policy threat": Slimy Sales Guy said that the Defense Department was going to start requiring CSSLPs as a condition of procurement, so I needed to talk to him. (Gee, I bet ISC(2)'s lobbyists were busy.) My response was "hey, good to know, because that sounds like you've been handed a monopoly by DoD, which is inherently anticompetitive - who in the IT industry made you the arbiters of what constitutes 'secure development skill?'" I also said that I would work to oppose that provision - if it exists - on public policy grounds. ISC(2)'s certification wasn't arrived at broadly enough (full disclosure: I was asked about the utility of such a certification before ISC(2) developed it and I said I did not see the need for it). More to the point, you could get a CSSLP and still work for an organization that does not (technical, secure development terminology follows) give a rat's behind about actually building secure software, so who the bleep cares?

I shouldn't single ISC(2) out in the sense that a lot of entities want to get legislation passed that allows them to get government-mandated money by, say, requiring someone to get their certification, or buy their product, or use their services.* If Slimy Sales Guy does not speak for ISC(2), my apologies to them, but I did not appreciate Oracle being "shaken down" as thanks for my team being an early adopter of CSSLP.

Back to the Snowe-Rockefeller rerun: it's bad enough that one out of every five people in the US has a licensing or certification requirement for his job** but if we are going to add one more requirement and license cybersecurity professionals, then at least figure out who "cybersecurity professionals" are, why we need to do that, how we will do it and constrain the problem.

The bill compounds the vague definition of "cybersecurity professional" by requiring that "3 years after the date of enactment of this Act, it shall be unlawful for an individual who is not certified under the program to represent himself or herself as a cybersecurity professional." Why does the federal government want to directly regulate cybersecurity professionals to a degree that arguably exceeds medical licensing, professional engineers' licensing, architects' licensing and so forth? Even in professions that have licensing requirements, the requirements differ state by state (e.g., California has more stringent licensing for structural engineers because California requires seismic design and other, less earthquake-prone states do not). Also, such a hands-on role for the federal government raises real constitutional concerns. Where in the Constitution is the federal government given the authority to act as the licensing and regulatory body for all of cybersecurity? (See ironclad rule number 4.)

The draft bill also would allow the president to exert control over "critical infrastructure information systems and networks" in the event of a "national emergency" - including private networks - without defining what either of those terms means, which would leave the discretion to the executive branch. I read this to mean the President would be able (in an "emergency") to exert authority over private networks based on whatever criteria he/she wants to use to declare them "critical."*** If "critical infrastructure information systems and networks" are so critical, why can't we define what they are before legislating them? Are those networks pertaining to:

Utilities?
Financial services?
Manufacturing? (What kind of manufacturing - someone's toy making control systems or are we talking about heavy industry?)
Health care?
Agriculture?
Other?

I have concerns - because I am a student of history - about giving anyone too much power in what we think is a good cause and watching that power turned against us. Vague terms combined with explicit presidential authority over these ill-defined terms can be a dangerous legislative formula.

There is also a provision that requires "...real time cybersecurity status and vulnerability information of all Federal Government information systems and networks managed by the Department of Commerce, including an inventory of such, vulnerabilities of such systems and networks, and corrective action plans for those vulnerabilities..." Of course, it makes sense for any owner of a network to know what's on their network and its state of "mission readiness," which in this context could include the state of its security configuration and whether security patches have been applied. However - and I made the same comment on the first draft bill - "vulnerabilities" is not defined and there is almost no such thing as "real time vulnerability information" if "vulnerability" includes defects in software that are not publicly known and for which no workaround or patch exists. Most vendors do not provide real time vulnerability information because there is nothing that increases the risk to customers like telling them of a vulnerability with no fix (or other threat mitigation) available.

"Everybody knows what we mean" is not good enough if cybersecurity is truly a national security problem, which it clearly is. At a minimum, for purposes of this bill, "vulnerability" should be explicitly defined as either a configuration weakness or a defect in software that has been publicly disclosed and for which a patch or other remediation exists. Otherwise, someone will construe this draft bill to require vendors to notify customers about security problems with no solutions as soon as they find the problems - real time, no less. Uh, no, not going to happen.

We do not need legislation or regulation for the sake of regulation, especially when it is not clear what and who is being "regulated," what "compliance" means and at what cost. Moreover, I need to be convinced that the cost of regulation - the all-in cost - is worth a clear benefit, and that the benefit could not be achieved in a better, more economical or less draconian way. Most importantly, I want this bill - or any bill - to uphold our values, specifically the values enumerated in the Constitution. Good motives are not enough to create good public policy. I truly hope the next remake of Snowe-Rockefeller is worthy of its intentions and advances our nation's cybersecurity posture.

* Here's mine: I would like a bill passed called the Hawaiian Language Preservation Act. As part of that act, I'd like to require musicians to (in addition to paying authors of works their royalties if the work is performed in public) obtain a certification that they pronounce the lyrics of the song correctly. You won't be able to perform in public (or at least, sing Hawaiian music) unless you have a Correct Hawaiian Lyrics Pronunciation (CHLP) certification. This is a bigger problem than you would think, according to my 'ukulele teacher, Saichi (who insists we pronounce the language correctly as we sing and "good on him"). Because I am a straight up gal, I won't even be greedy - I'll just require CHLP certification for anyone publicly performing any of the Rev. Dennis Kamakahi's songs (he's written about 400 or so songs, as far as I can tell he has never written a bad song, they are very popular and often played). Now, everybody will have to come to me to get a piece of paper that asserts they can pronounce "hāwanawana" correctly (it shows up in the second verse of Koke'e). See how easy that was? I figure I can use the proceeds of my CHLP certification program to buy a house in Honolulu (and improve everyone's Hawaiian pronunciation, too).

** Source: The Dirty Dozen, more about which below.

*** A colleague who reviewed this blog entry for me raised some even scarier concerns I thought were spot-on. Consider that some elements of our country have been at "heightened alert status" since 9/11/01 (e.g., air transportation). Some networks (e.g., DoD) are being probed daily so it's conceivable that a similar "heightened alert status" for cyber could be put in place in some sectors and left "on." Would the government be able to search any records, at any time, in a sector once a (semi-permanent) cyberalert exists? It's sometimes happened that a company that works with a law enforcement entity after a cyberincident is asked for "everything": logs, machines, access to people. Perhaps an experienced person knows how to ask for the minimum information needed to investigate an incident, but the law can't require that an "experienced, reasonable person with judgment" would be the enforcement mechanism. No company wants to face having to hand over all their data, their servers and their people because of an "alert." What would the government really accomplish if every company in that sector flooded them with records? Also, would companies receive some immunity or could data obtained under an "alert" be used for another purpose by the government?

Books of the Month

I have not blogged in awhile so I am overloading the following section. I have been doing a lot of summer reading and it is hard to recommend just one book:

Huckleberry Finn by Mark Twain

Ernest Hemingway declared that "All modern American literature comes from one book by Mark Twain called Huckleberry Finn." It is a classic, and that is all the more reason to read it if you haven't already and reread it if you haven't read it in awhile. It's ineffably sad and short-sighted that a lot of schools either don't have a copy or don't teach this book anymore due to the prevalence of the "n word" in the text. That is political correctness run amok, especially since Twain was an expert satirist and the most heroic character in the book is the runaway slave, Jim. If you think Twain condones slavery, you didn't read the book closely enough: no, not at all.

On Wings of Trust: The Quest of Pilot Carole Leigh by Maynard D. Poland

http://www.amazon.com/Wings-Trust-Quest-Pilot-Carole/dp/1419637800

I am particularly partial to this book because it is about a friend of mine. No, she's more than that, she is a great friend of long standing (we were Navy buddies) and she was a pioneer - a P3 pilot in the Navy and then a commercial airline pilot. Carole is one of the highest integrity people I know and that shines throughout the book, never more so than in her dealing with scary emergencies in-flight - and in her not turning a blind eye when something Is Not Right. The highest compliment I could pay someone is that I would trust her with my life, and I would trust Carole with mine. It's a great (true) story about a great person.

A Moveable Feast: The Restored Edition by Ernest Hemingway

http://www.amazon.com/Moveable-Feast-Restored-Ernest-Hemingway/dp/1416591311/ref=sr_1_1?ie=UTF8&s=books&qid=1252113968&sr=1-1

A Moveable Feast has been in print for some time (and is one of my favorite books by Hemingway), but this is a new version: since the book was published posthumously and there was no "definitive manuscript," it is hard in some sections to know what Hemingway intended to write. The expanded version in some cases gives an entirely different flavor: Hemingway comes across as much less - literary criticism term - "snotty" towards F. Scott Fitzgerald in this version. The book gives a real flavor of both Paris and the Lost Generation's place in it in the 1920s.

Baking Cakes in Kigali by Gaile Parkin

http://www.amazon.com/dp/0385343434/?tag=googhydr-20&hvadid=4024611209&ref=pd_sl_45rnacbtln_e

People who like the gentle humor of the No. 1 Ladies' Detective Agency will like this. People in Kigali come to Angel, an expert cake baker, to order cakes and as they do, they tell their stories. The book does not spare the real challenges faced in Rwanda - the devastation wrought by AIDS, for example, and yet it's a lovely, redemptive story.

The Blue Notebook by James Levine

http://www.amazon.com/Blue-Notebook-James-Levine-M-D/dp/038552871X/ref=sr_1_1?ie=UTF8&s=books&qid=1251937532&sr=1-1

This is the story of a young Indian girl sold into child prostitution, despite which her spirit prevails. It is a disturbing and tragic book - and yet extremely moving, all the more so when you realize that the author is donating the US proceeds of the book to the Center for Missing and Exploited Children. A wonderful read.

The Dirty Dozen: How Twelve Supreme Court Cases Radically Expanded Government and Eroded Freedom by Robert A. Levy and William Mellor

This book analyzes the twelve worst decisions by the US Supreme Court and how they have affected our freedoms. You will need Maalox or a stiff gin and tonic after reading it. The concept of limited government envisioned by our founding fathers is not what we have now, and this book explains why. The erosion of freedom and expansion of government began for the most part under Franklin Roosevelt, but the book also highlights some recent cases, such as Kelo v. City of New London, which upheld government abuse of eminent domain. At the time the book went to print, DC v. Heller (an important Second Amendment case) had not been decided, but it is mentioned in the book. I finished the book four days ago and I am still aghast at what I learned.

The Art of Racing in the Rain by Garth Stein

http://www.amazon.com/Art-Racing-Rain/dp/B0017SWPXY

I picked this up because someone recommended it to me and I was going to spend the day on planes and in the airport. After I opened it, I could not put it down, and when I finished it, I felt I had read something wondrous. The book is about the travails in a family, told from the dog's point of view. It sounds too strange to work, but it does work, and the character Enzo (the dog) is unforgettable. He puke kapu (a sacred book).

What’s Mine Is Mine

Mon, 2009-05-11 05:15

The 2009 RSA Conference is over and it was, as always, a good chance to catch up with old friends and new trends. I was on four panels (including the Executive Security Action Forum on the Monday before RSA) and it was a pleasure to be able to discuss interesting issues with esteemed colleagues. One such panel was on the topic of cloud computing security (ours was not the only panel on that topic, needless to say). One of the biggest issues in getting the panel together was manifest at the outset when, like the famous story of 6 blind men and the elephant, everyone had a different “feel” for what cloud computing actually is.

The “what the heck is cloud computing, anyway?” definitional problem is what makes discussions of cloud computing so thorny. Some proponents of cloud computing are almost pantheists in their pronouncements. “The cloud is everything; everything is the cloud. I’m a cloud, you’re a cloud, we’re a cloud, it’s all the cloud; are you in touch with your inner cloud?” It’s hard to even discuss cloud computing with them because you need to know which faction of the radical cult you are with to understand how they even approach the topic.

One of the reasons it is hard to debunk cloud computing theology is that the term itself is so nebulous. If by cloud computing, one means software as a service, this is nothing new (so what’s all the fuss about?). Almost as long as there have been computers, there have been people paying other people to manage the equipment and software for them using a variety of different business models. When I was in college, students got “cloud services,” in a way. You got so many computer hours at the university computer center. You’d punch out your program on a card deck, drop it off at the university computer center, someone would load the deck, and later you’d stop by, pick up your card deck and your output. Someone else managed running the program for you (loading the deck) and debited your account for the amount of computing time it took to run the program. (I know, I know, given all the power on a mere desktop these days, this reminiscence is the computing equivalent of “I walked 20 miles to school through snow drifts.” But people who remember those days also remember dropping a card deck, which was the equivalent of “the dog ate my homework” if you couldn’t literally get your cards lined up in time to turn your homework in. Ah, the good old days.)

Today, many companies run hosted applications for their customers through a variety of business models. In some cases, the servers and software are managed at a data center the service provider owns and manages (the @myplace model); in other cases, the service provider manages the software remotely, and the servers and software remain at the customer site (the @yourplace model). What both of these models have in common is knowing “what’s mine is mine.” That is, where the servers are located is not as important as the principle that a customer knows where the data is, what is being done to secure it and that “what’s mine is mine.” If you are not actually managing your own data center, you still will do due diligence – and have a well-written contract, with oversight provisions – to ensure that someone else is securing your data to your satisfaction. If it is not done to your satisfaction, you either need to write a better contract or terminate the service contract you have for cause.

I therefore find some of the pronouncements about cloud computing to be completely ludicrous if you are talking about anything important, because you want to know a) where something is that is of value to you and b) that it is being secured appropriately. “Being secured” is not just a matter of using secure smoke and mirrors – oops, I mean, a secure cloud protocol – but a bunch of things (kind of like the famous newspaper reporting example – who, what, when, how, why and where). Maybe “whatever” also begins with a W, but nobody would accept that as an answer to the question, “It’s 11PM, do you know where your data is and who is accessing it?”

I’ve used the following example before, most recently at the 2009 RSA Conference, but it’s worth repeating here. Suppose you have a daughter named Janie, who is the light of your life. Can you imagine the following conversation when you call her day care provider at 2 p.m.?

You: “Where is Janie?”
DCP: “Well, we aren’t really sure right now. Janie is off in the day care cloud. Somewhere. But we are sure she’ll be at the door by 5 when you come to pick her up.”

The answer is, you wouldn’t tolerate such “wherever, whatever” answers and you’d yank Janie out of there ASAP. Similarly, if your data is important, you aren’t going to be happy with a “secure like, wherever” cloud protocol.

There is another reason why “the cloud is everything, everywhere” is nonsense as a mantra. The reality is that if the cloud is everything and everywhere, then you have to protect everything, which is simply not possible (basic military strategy 101, courtesy of Frederick II: “He who defends everything defends nothing”). It’s not even worth trying. If everything is in the cloud, then one of two things will happen. Either security will have to rise to the digital equivalent of Ft. Knox everywhere – if not all data is gold, some of it is, and you have to protect the gold above all else – or security devolves to the lowest common denominator, and we are back to little Janie: nobody is going to drop off their precious jewels in some cloud where nobody is sure where they are or how they are being protected. (You might drop off the neighbor’s kid at an insecure day care cloud because he keeps teasing your cat, but not little Janie.)

One of the reasons the grandiose claims about cloud computing don’t sit well is that most people have an intuitive defensiveness about “what’s mine.” You want to know “what’s mine is mine, what’s yours is yours” and most of us don’t have megalomaniacal tendencies to claim what’s yours as mine. Nor frankly, do we generally care about “what’s yours” unless you happen to be a friend or there are commons that affect both of us (e.g., if three houses in the neighborhood get burgled, I’m more likely to join neighborhood watch since what affects my neighbor is likely to affect me, too).

I buy the idea of having someone else manage your applications because I learned at an early age you could pay people to do unpleasant things you don’t want to do for yourself. My mother reminded me of this only last weekend. When I was a 21-year-old ensign stationed in San Diego, my command had a uniform inspection in khakis. I did not like khakis and had not ever had to wear them (the particular shade of khaki the uniforms were made of at that time made everyone look as if he/she had malaria, and the material was a particularly yucky double knit). I was moaning and groaning about having to hem my khaki uniform skirt when my mother reminded me that the Navy Exchange had a tailor shop and they’d probably hem my skirt for a nominal fee (the best five dollars I ever spent, as it happens). If you don’t want to manage your applications (in business parlance, because it is not your “core competence”), you can pay someone else to do it for you. You’re not completely off the hook in that you have to substitute due diligence and contract management skills for hands-on IT skills, but this model works for a lot of people.

What I don’t buy is the idea that – for anything of value – grabbing storage or computing on the fly is a model anybody is going to want to use. A pharmaceutical company running clinical trials isn’t going to store its latest test results “somewhere, out there.” It isn’t necessarily going to rent computing power on the fly, either, if the raw data itself is sensitive (how valuable would it be to a competitor to learn that new Killer Drug isn’t doing so well in clinical trials?). You want to know “what’s mine is mine, and is being protected to my verifiable satisfaction.” If it’s not terribly valuable – or, more precisely, if it is not something you mind sharing broadly – then the cloud is fine. A lot of people store their photographs on some web site somewhere, which means a) if their hard drive is corrupted, they have a copy somewhere and b) it’s easier to share with lots of people – easier than emailing .JPG files around. I heard one presenter at RSA describe how his company crunched big numbers using “the power of the cloud,” but he admitted that the data being crunched was already public data. So, the model that worked was “this is mine; I am happy to share,” or “this is already being shared, and is not really mine.”

Speaking of “what’s mine is mine,” I mentioned in my previous blog entry that I’d had the privilege of testifying in front of Congress in mid-March (the Homeland Security Subcommittee on Emerging Threats, Cybersecurity, Science and Technology). As I only had five minutes for my remarks, I wanted to make a few strong recommendations that I hoped would have an impact. The third of the three recommendations was that the US should invoke the Monroe Doctrine in cyberspace. (One of my co-panelists then started referring to this idea as the Davidson Doctrine, a name I cringe at using myself. James Monroe was the president who first invoked the doctrine that bears his name – he gets to have a major doohickey in foreign policy named after him since he was – well, The President. I am clearly not the president, or even a president, unless it is president of the Ketchum, Idaho Maunalua Fan Club.)

For those who have forgotten their history, the Monroe Doctrine – created in 1823 – declared that the United States had a sphere of influence in the Western Hemisphere and that further efforts by European governments to colonize or interfere with states in the Western Hemisphere would be viewed by the US as aggressive acts requiring US intervention. The Monroe Doctrine is one of the United States’ oldest foreign policy constructs; it has been invoked multiple times by multiple presidents (well into the 20th century) and has morphed to include other areas of intervention (the so-called Roosevelt Corollary).* In short, the Monroe Doctrine was a declared line in the sand: the United States’ way of saying “what’s mine is mine.”

My principal reason for recommending invocation of the Monroe Doctrine is that we already have foreign powers stealing intellectual property, invading our networks, and probing our critical defense systems (and other critical infrastructure systems). Nobody wants to say it, but there is a war going on in cyberspace. Putting it differently: if a hostile foreign power bombed our power plants, would that be considered an act of war? If a group of (non-US) actors systematically denied us the use of critical systems by physically taking control of them, would that be considered an act of war? I am certainly not suggesting that the Monroe Doctrine, if invoked in cyberspace, should govern the entire doctrine of cyberwar. But it is the case that before great powers can develop doctrines of cyberwar, they need to declare what is important. “What’s mine is mine: stay out or face the consequences.”

Another incident from the RSA Conference brought this home to me. In the Q&A session after a panel I was on, a woman mentioned she had grown up during the Cold War, when it was obvious who the enemy was. Who, she asked, is the enemy now? My response was, “We aren’t actually allowed to have enemies now. Wanting to annihilate western civilization is a different, equally valid value system that needs to be respected in the interests of diversity.” This sarcastic remark went right over her head, for no reason that I can fathom. It is, however, true that a lot of people don’t want to use the term “enemy” anymore, in part because they don’t even want to acknowledge that we are at war. From what is already public knowledge, we can state honestly that we have numerous enemies attacking our interests in cyberspace – from individual actors to criminal organizations to nation states. Part of our problem is that, because we have not developed a common understanding of what “cyber war” is, we are unable to map these enemies to appropriate responders in the same way we pair street crime with local cops and attacks on military supply lines with the armed forces.

We need to at least begin to elucidate a larger cyberwar doctrine by declaring a sphere of influence and making clear that messing with it will lead to retribution. Like the Monroe Doctrine, we do not need to publicly elucidate exact responses, but our planning must include specific exercises such as “if A did B, what would our likely response be?” – where “response” could include signaling and other activities in the non-cyber world. Nations and others do “signal” each other of their intentions, which often allows conflict escalation to be avoided gracefully when the other party reads the signals correctly and backs off.

Slight aside: there are parents more worried about their children’s self-esteem than about stopping their obnoxious behavior Right This Second. My mother had a great escalation protocol using signaling that I wish all the Gen-Xers, Gen-Yers and Millennials would adopt instead of “we want Johnny to feel good about being a rude brat.” Mom has not had to invoke it on the Davidson kids in several decades because she invoked it so well before we were 10:

Defcon 5 - (Child behaves himself or herself)
Defcon 4 - The “look” (narrowed eyes, direct eye contact, tense body language)
Defcon 3 - The hiss through clenched teeth
Defcon 2 - “Stop That Right This Minute Or We Are Leaving. I Mean It.”
Defcon 1 - The arm pinch and premises-vacating

This was, as my siblings and I can attest, a well-established escalation protocol with predictable “payoffs” at each level. As a result, we only rarely made it to Defcon 1 (and, in defense of my mother, I richly deserved it when we did).

So, below are some thoughts I wrote up as a later expansion on my remarks to the subcommittee. Invoking the Monroe Doctrine in cyberspace is, I believe, a useful construct for approaching how we think about cybersecurity as the critical national security interest I believe it is.

Applicability of the Monroe Doctrine to Cyberspace

1. The essential truth of invoking a Cyber Monroe Doctrine is that what we are seeing in cyberspace is no different from the kinds of real-world activities and threats our nation – like all nations – has been dealing with for years; we must stop thinking cyberspace falls outside the existing system of how we currently deal with threats, aggressive acts and appropriate responses.

Referencing the Monroe Doctrine is meant to simplify the debate while highlighting its importance. The Monroe Doctrine became an organizing principle of US foreign policy. Through the concept of a sphere of influence in the Americas, it publicly identified an area of national interest for the US and clearly asserted a right to defend those interests without limiting the response. Today cyberspace requires such an organizing principle to assist in the prioritization of US interests. While “cyberspace” by its name connotes virtual worlds, we should recall that cyberspace maps to places and physical assets we care about that are clearly within the US government’s remit and interest.

Conceptually, how we manage the cyber threat should be no different from how we manage various real-world threats (from domestic crime to global terrorism and acts of aggression by hostile nation-states). Just as the Monroe Doctrine compelled the US government to prioritize intercontinental threats, a Cyber Monroe Doctrine also forces the US government to prioritize: simply put, some cyber-assets are more important than others and we should prioritize their protection accordingly. We do not treat the robbery of a corner liquor store with the same response (or the same responders) as an attempt to release a dirty bomb in a population center, for example. With this approach, policy makers also benefit from existing legal systems and frameworks that ensure actions are appropriate and that protect our civil liberties.

Similarly, not all European incursions into the Western hemisphere have warranted a response under the Monroe Doctrine. For example in 1831, Argentina, which claimed sovereignty over the Falkland Islands, seized three American schooners in a dispute over fishing rights. The US reacted by sending the USS Lexington, whose captain, Silas Duncan, “seized property taken from the American ships, released the American seamen, spiked the fort’s cannon, captured a number of Argentine colonists, and posted a decree that anyone interfering with American fishing rights would be considered a pirate”(The Savage Wars of Peace, Max Boot, page 46).

The territorial dispute ended in 1833 when Great Britain sent a landing party of Royal Marines to seize the Falklands. In this instance the US specifically did not respond by invoking the Monroe Doctrine; the Falklands were deemed of insufficient importance to risk a crisis with London.

2. The initial and longstanding value of the Monroe Doctrine was that it sent a signal to foreign powers that the US had a territorial sphere of influence and that incursions would be met with a response. Precisely because we did not specify all possible responses in advance, the Monroe Doctrine proved very flexible (e.g., it was later modified to support other objectives).

It is understandable that the United States would have concerns about ensuring the safety of the 85% of US critical (cyber) infrastructure that is in private hands given that much of this critical infrastructure (if attacked or brought down) has a direct link to the economic well-being of the United States in addition to other damage that might result. That said, declaring a national security interest in such critical infrastructure should not mean militarizing all of it or placing it under military or other governmental control any more than the Monroe Doctrine led to colonization (“planting the flag”) or militarization (military occupation and/or permanent bases) of all of the Western hemisphere. Similarly, the US should not make a cyberspace “land grab” for the Western hemisphere, or even our domestic cyber-infrastructure.

A 21st century Cyber Monroe Doctrine would have the same primary value as the original Monroe Doctrine – a signal to others of our national interests and a readiness to act in defense of those interests. Importantly, any consideration of our cyber interests must be evaluated within the larger view of our national security concerns and our freedoms. For example, it is clear where the defacement of a government website ranks in comparison to a weapons of mass destruction (WMD) attack on a major city. All cyber-risks are not created equal, nor should they receive a precisely “equal” response.

Another reason to embrace a Cyber Monroe Doctrine (and the innate flexibility it engendered) is the fact that cyberspace represents a potentially “liquid battlefield.” Traditionally, wars have been fought for fixed territory whose battlefields did not dramatically expand overnight (e.g., the attack by Imperial Japan on Pearl Harbor did not overnight morph into an attack on San Francisco, Kansas City and New York City). By contrast, in cyberspace there is no “fixed” territory and thus the boundaries of what is attacked are fluid. For a hostile entity, almost any potential cybertarget is 20 microseconds away.

A Cyber Monroe Doctrine must also accommodate the fundamental architecture of the Internet. Since the value of the Internet is driven by network effects, policies that decrease the value of the Internet through (real or perceived) balkanization will harm all participants. While a Cyber Monroe Doctrine can identify specific critical cyber infrastructure of interest to the U.S., parts of the cyber infrastructure are critical to all global stakeholders. In short, even as the United States may have a cybersphere of influence, there are nonetheless cybercommons. This is all the more true as attacks or attackers move through or use the infrastructure of those cybercommons. Therefore, the US must find mechanisms to be inclusive rather than exclusive when it comes to stewardship and defense of our cybercommons.

3. Placing the critical assets we care about within a framework that maps to existing legal, policy and social structures/institutions is the shortest path to success.

For example, military bases are protected by the military, and a nation-state attack (physical or cyber) against a military base or military cyberassets should fit within a framework that can offer appropriate and proportionate responses (ranging from State Department harassment of the local embassy official to application of kinetic force). Critical national assets (power plants, financial systems) require similar flexibility, but through engagement of the respective front-line institutions in a manner that permits escalation appropriate to the nature of the attack.

Challenges

There are a number of challenges in applying a Cyber Monroe Doctrine. Below is a representative but by no means exhaustive list of them.

1. Credibility

A deterrence strategy needs teeth to be credible. Merely telling attackers “we are drawing a line in the sand, step over it at your peril,” without being able to back it up with an actual and proportionate response, is the equivalent of moving the line in the sand repeatedly in an attempt to appear fierce while actually doing nothing. (The Chinese would rightly call such posturers “paper tigers.”) Mere words, without at least the possibility of a full range of supporting actions, are no deterrent at all. A credible deterrent can be established through non-military options as well – for some, a sharply worded public rebuke may change behavior as much as sending in the Marines would.

Because the Monroe Doctrine did not detail all potential responses to provocation in advance, the United States was able to respond as it saw fit to perceived infractions of the Monroe Doctrine on multiple occasions and over much of our history. The response was measured and flexible, but there was a response.

2. Invocation Scenarios

To bolster credibility, the “teeth” part of a cyber doctrine should include a potential escalation framework and some “for instances” in which a Cyber Monroe Doctrine would be invoked. This planning activity can take place in the think tank realm, the cyber exercise realm, or a combination thereof.

We know how to do this. Specifically, military strategists routinely game out possible future war scenarios. In fact, it is not possible to do adequate military planning by waiting for an incident and only then deciding whether you have the right tools, war plans, and defense capabilities to meet it, if for no other reason than that military training and procurement take years, not days, to implement.

Similarly, “changing the battlefield” could be one supporting activity for a Cyber Monroe Doctrine. For example, it has been argued (by Michael Oren in Power, Faith, and Fantasy: America in the Middle East, 1776 to the Present) that the United States only developed a strong Navy (and the centralized government that enabled it) as a result of the wars against the Barbary pirates. The fabric of our military may change - and likely will change - in support of a Cyber Monroe Doctrine, and that could include not only fielding new “troops” (the Marines first made a name for themselves on the shores of Tripoli) but also new technologies to support a changed mission. One would similarly expect that a Cyber Monroe Doctrine as a policy construct would be supported by specific planning exercises instead of “shoot from the hip” responses.

3. Attribution

A complicating factor in cybersecurity is that an attack - especially one that involves infiltration/exfiltration rather than a “frontal assault” (e.g., a denial of service attack) - may not be obvious, nor may its perpetrator. Thus two of the many challenges of cybersecurity are detecting attacks or breaches in the first place, and attributing them correctly in the second. No one would want to initiate a response to a cyber attack without being able to correctly target the adversary. In particular, highly reliable attribution is critical in cyberoffense, since the goal is to take out attackers or stop their attacks, not to create collateral damage by taking down systems the attackers have merely hijacked. Notwithstanding this challenge, “just enough attribution” may be sufficient for purposes of “shot across the bow” warnings, even if it would be insufficient for escalated forms of retaliation.

For example, in cybersecurity circles last year there were a number of discussions about the types of activities that occur when one takes electronic devices overseas (e.g., hard drives being imaged, cell phones being remotely turned on and used as listening devices) and the precautions that one should take to minimize risk. While specific countries were not singled out in one such draft document (outlining the risks and the potential mitigation of those risks), the discussion included whether such warnings should be released in advance of the Beijing Olympics. Some expressed reluctance to issue such warnings out of concern that doing so would cause China to “lose face.”

Ultimately, the concern was rendered moot when Joel Brenner, the national counterintelligence executive in the Bush Administration, made the topic public anyway (http://blogs.computerworld.com/slurping_and_other_cyberspying_expected_at_olympics). It seems ludicrous in hindsight that concern over making a government “feel bad” about activities it was widely acknowledged to be conducting should outweigh protecting people who did not know about those risks. (Do we warn people against walking through high-crime areas at night, or are we worried that criminals might be offended if we did?) Even when we choose to exercise diplomacy instead of countermeasures, diplomacy inevitably includes some element of “you are doing X, and we’d prefer that you not,” if not an actual “cease and desist” signal.

The difficulty of properly attributing attacks to non-state actors deserves specific attention because of the need for multi-stakeholder cooperation to identify and eliminate the threat. When an attacker resides in one location, uses resources distributed around the world, and targets a victim in yet another country, the authorities and individuals responsible for finding out who (or what) is behind the attack may have only portions of the information or resources needed to do their job. A unilateral approach will at times be simply impossible, and even when possible may not offer the quickest path to success. Working collaboratively with other governments and stakeholders, by contrast, not only builds our collective capacity to defend critical infrastructures around the world, but also ensures that our weakest links do not become havens for cyber criminals or terrorists.

While it can at times be harder in cyberspace to distinguish what kind of foe we face, a Cyber Monroe Doctrine will work best when we can clearly identify who is conducting an attack so that we can deliver the appropriate response. This is not an easy task, and it will require new skill sets across the entire government to ensure that cyber threats are properly categorized.

* The government of the Dominican Republic stopped payment on debts of more than $32 million owed to various nations, which caused President Theodore Roosevelt to invoke (and expand upon) the Monroe Doctrine to avoid having European powers come to the Western Hemisphere to collect those debts. This expansion of the Monroe Doctrine became known as the Roosevelt Corollary.

For More Information

Book of the Week

The Forgotten Man by Amity Shlaes

http://www.amazon.com/Forgotten-Man-History-Great-Depression/dp/0066211700

This is a fascinating economic history of the Depression and of why Hoover’s and Roosevelt’s economic policies made the Depression worse – much worse. It’s worth reading for such gems as this (quoting philosopher William Graham Sumner): "The type and formula of most schemes of philanthropy or humanitarianism is this: A and B put their heads together to decide what C shall be made to do for D. The radical vice of all these schemes, from a sociological point of view, is that C is not allowed a voice in the matter, and his position, character, and interests, as well as the ultimate effects on society through C's interests, are entirely overlooked. I call C the Forgotten Man." Roosevelt, of course, twisted this to make D the Forgotten Man. Very well written, and a reminder of what disastrous government intervention in the economy looks like.

More Useful Hawaiian: Naʻu kēia mea. Nou kēlā mea. (This is mine. That is yours.)

More on the Monroe Doctrine:

http://en.wikipedia.org/wiki/Monroe_Doctrine

About DEFCON:

http://en.wikipedia.org/wiki/DEF_CON

About William Graham Sumner:

http://en.wikipedia.org/wiki/William_Graham_Sumner
