Many commercial off-the-shelf (COTS) vendors have recently seen an uptick of interest from their customers in third party static analysis or static analysis of binaries (compiled code). Customers who insist upon this are in some cases referencing the SANS Top Twenty Critical Controls (http://www.sans.org/critical-security-controls/) to support their position, specifically Critical Control 6, Application Software Security:
"Configuration/Hygiene: Test in-house developed and third-party-procured web and other application software for coding errors and malware insertion, including backdoors, prior to deployment using automated static code analysis software. If source code is not available, these organizations should test compiled code using static binary analysis tools (emphases added). In particular, input validation and output encoding routines of application software should be carefully reviewed and tested."
Presumably, SANS and others think this is a good idea in that customers would like to know that they are getting “reasonably defect-free” code in a product. All things being equal, knowing you are getting better quality code is a good thing, while noting that there is no defect-free or even security-defect-free software – no tool finds all problems, and tools generally don’t find design defects. Also, a product that is “testably free” of obvious defects may still have significant security flaws – like not having any authentication, access control, or auditing. Nobody is arguing that static analysis isn’t a good thing – that’s why a lot of vendors already do it and don’t need a third party to “attest” to their code (assuming there is a basis for trusting the third party other than their saying “trust us”). Oracle has provided feedback to SANS as to why we believe third party static analysis is at best infeasible for organizations with mature security assurance practices and – well, a bad idea, not to put too fine a point on it. The reasons why it is a bad idea are expanded upon in detail below, and include: 1) worse, not better, security, 2) increased security risk to customers, 3) an increased risk of intellectual property theft, and 4) increased costs for commercial software providers without a commensurate increase in security. Note that this discussion does not address the use of other tools – such as so-called web vulnerability analysis tools – that operate against “as installed” object code. These tools also have challenges (e.g., a high rate of false positives) but do not in general pose the same security threats, risks, and high costs that static analysis as conducted by third parties does.
Discussion: Static analysis tools are one of many means by which vendors of software, including commercial off-the-shelf (COTS) software, can find coding defects that may lead to exploitable security vulnerabilities. Many vendors – especially large COTS providers – do static analysis of their own code as part of a robust, secure software development process. In fact, there are many different types of testing that can be done to improve security and reliability of code, including regression testing (ensuring that changes to code do not break something else, and that code operates correctly after it has been modified), “fuzzing” tools, web application vulnerability tools, and more. No one tool finds all issues or is necessarily even suitable for all technologies or all programming languages. Most companies use a multiplicity of tools that they select based on factors such as cost, ease of use, what the tools find (and how well and how accurately they find it), the programming languages the tool understands, and so on. Note of course that these tools must be used in a greater security assurance context (security training, ethical hacking, threat modeling, etc.), echoing the popular nostrum that security has to be “baked in, not bolted on.” Static analysis and other tools can’t “bake in” security – they can only find coding errors that may lead to security weaknesses. More to the point, static analysis tools should correctly be categorized as “code analysis tools” rather than “code testing tools,” because they do not automatically produce accurate or actionable results when run and cannot typically be used by a junior developer or quality assurance (QA) person.
These tools must in general be “tuned,” “trained,” or in some cases “programmed” to work against a particular code base, and thus the people using them need to be skilled developers, security analysts, or QA experts. Oracle has spent many person-years evaluating the tools we use, and has made a significant commitment to a particular static analysis tool that worked the best against much – but not all – of our code base. We have found that results are not typically repeatable from code base to code base, even within a company. That is, just because the tool works well on one code base does not mean it will work equally well on another product – another reason to work with a strong vendor who will consider improving the tool to address weaknesses. In short, static analysis tools are not a magic bullet for all security ills, and the odds of a third party being able to do meaningful, accurate, and cost-effective static code analysis are slim to none.
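To give a flavor of what this “tuning” looks like in practice, here is a minimal sketch, assuming the Clang static analyzer and a hypothetical project-specific fatal_error() handler (the annotation and feature-test macro are real Clang mechanisms; everything else is illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

/* Portability guard: expand the annotation only where it is understood. */
#ifndef __has_feature
#  define __has_feature(x) 0
#endif
#if __has_feature(attribute_analyzer_noreturn)
#  define ANALYZER_NORETURN __attribute__((analyzer_noreturn))
#else
#  define ANALYZER_NORETURN
#endif

/* Hypothetical project-specific fatal handler.  The annotation tells the
 * analyzer (and only the analyzer) that execution never continues past
 * a call to this function. */
void fatal_error(const char *msg) ANALYZER_NORETURN;

void fatal_error(const char *msg) {
    fprintf(stderr, "fatal: %s\n", msg);
    abort();
}

int scaled(int a, int b) {
    if (b == 0)
        fatal_error("division by zero");
    /* Without the annotation, the analyzer assumes fatal_error() might
     * return and reports a divide-by-zero false positive here. */
    return a / b;
}
```

Multiply this by every project-specific assertion macro, allocator, and error-handling convention in a multi-million-line code base, and it becomes clearer why results are not repeatable from one code base to another, and why the operator has to know the code.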
1. Third party static analysis is not industry-standard practice.
Despite the marketing claims of the third parties that do this, “third party code review” is not “industry best practice.” As it happens, it is certainly not industry-standard practice either, for multiple reasons, not the least of which are the lack of validation of the entities and tools used to do such “validation” and the lack of standards to measure efficacy, such as: what does the tool find, how well, and how cost-effectively? As Juvenal so famously remarked, “Quis custodiet ipsos custodes?” (Who watches the watchmen?) Any third party can claim, and many do, that “we have zero false positives,” but there is no way to validate such puffery – and it is puffery. (Sarcasm on: I think any company that does static analysis as a service should agree to have its code analyzed by a competitor. After all, we only have Company X’s say-so that they can find all defects known to mankind with zero false positives, whiten your teeth and get rid of ring-around-the-collar, all with a morning-fresh scent!)
The current International Organization for Standardization (ISO) standard for assurance (which encompasses the validation of secure code development), the international Common Criteria (ISO/IEC 15408), is, in fact, retreating from the source code access currently required at higher assurance levels (e.g., Evaluation Assurance Level (EAL) 4). While limited vulnerability analysis has been part of the higher assurance evaluations now being deprecated by the U.S. National Information Assurance Partnership (NIAP), static analysis has not been a requirement at commercial assurance levels. Hence, the current ISO assurance standard does not include third party static code analysis, and thus “third party static analysis” is not standard industry practice. Lastly, “third party code analysis” is clearly not “industry best practice,” if for no other reason than that all the major COTS vendors are opposed to it and will not agree to it. We are already analyzing our own code, thanks very much.
(It should be noted that third party systematic manual code review is equally impractical for the code bases of most commercial software. The Oracle database, for example, has multiple millions of lines of code. Manual code review at the scale of code most COTS vendors produce would accomplish little except pad the bank accounts of the consultants doing it, without commensurate value (or risk reduction) for either the vendor or the customers of the product. Lastly, the nature of commercial development is that the code is continuously in development: the code base literally changes daily. Third party manual code review in these circumstances would accomplish absolutely nothing. It would be like painting a house while it is under construction.)
2. Many vendors already use third party tools to find coding errors that may lead to exploitable security vulnerabilities.
As noted, many large COTS vendors have well-established assurance programs that include the use of a multiplicity of tools to attempt to find not merely defects in their code, but defects that lead to exploitable security vulnerabilities. Since only a vendor can actually fix a product defect in their proprietary code, and generally most vulnerabilities need a “code fix” to eliminate the vulnerability, it makes sense for vendors to run these tools themselves. Many do.
Oracle, for example, has a site license for a COTS static analysis tool and Oracle also produces a static analysis tool in-house (Parfait, which was originally developed by Sun Labs). With Parfait, Oracle has the luxury of enhancing the tool quickly to meet Oracle-specific needs. Oracle has also licensed a web application vulnerability testing tool, and has produced a number of in-house tools that focus on Oracle’s own (proprietary) technologies. It is unlikely that any third party tool can fuzz Oracle PL/SQL as well as Oracle’s own tools, or analyze Oracle’s proprietary SQL networking protocol as well as Oracle’s in-house tools do. The Oracle Ethical Hacking Team (EHT) also develops tools that they use to “hack” Oracle products, some of which are “productized” for use by other development and QA teams. As Oracle runs Oracle Corporation on Oracle products, Oracle has a built-in incentive to write and deliver secure code. (In fact, this is not unusual: many COTS vendors run their own businesses on their own products and are thus highly motivated to build secure products. Third party code testers typically do not build anything that they run their own enterprises on.)
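Since fuzzing comes up repeatedly here, a minimal sketch may help readers who have not used such tools. This uses the entry point of LLVM’s libFuzzer, one widely used coverage-guided fuzzing engine; parse_record is a hypothetical stand-in for a product’s input-handling routine, not any actual Oracle code:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical input-handling routine standing in for real product code. */
static int parse_record(const uint8_t *buf, size_t len) {
    if (len < 4 || memcmp(buf, "HDR1", 4) != 0)
        return -1;                    /* reject malformed input */
    /* ... real parsing logic would go here ... */
    return 0;
}

/* libFuzzer calls this hook with millions of mutated inputs, watching
 * for crashes, hangs, and (with ASan enabled) memory corruption.
 * Build: clang -g -fsanitize=fuzzer,address harness.c */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);
    return 0;                         /* non-zero return is reserved */
}
```

The harness itself is trivial; the hard part is knowing which internal entry points are worth wiring up and what a “valid enough” input looks like – knowledge the product’s own developers have and a third party does not.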
The above tool usage within Oracle is in addition to extensive regression testing of functionality to high levels of code coverage, including regression testing of security functionality. Oracle also uses other third party security tools (many of which are open source) that are vetted and recommended by the Oracle Software Security Assurance (OSSA) team. Additionally, Oracle measures compliance with “use of automated tools” as part of the OSSA program. Compliance against OSSA is reported quarterly to development line-of-business owners as well as executive management (the company president and the CEO). Many vendors have similarly robust assurance programs that include static analysis as one of many means to improve product security.
Several large software vendors have acquired static analysis (or other) code analysis tools. HP, for example, acquired both Fortify and WebInspect, and IBM acquired Ounce Labs. This is indicative of these vendors’ commitment both to “the secure code marketplace” and, one assumes, to secure development within their own organizations. Note that while both vendors have service offerings for the tools, neither is pushing “third party code testing,” which says a lot. Everything, actually.
Note that most vendors will not provide static analysis results to customers, for valid business reasons, including ensuring the security of all customers. For example, a vendor who finds a vulnerability may often fix the issue in the version of the product that is under development (i.e., the “next product train leaving the station”). Newer versions are more secure (and less costly to maintain, since the issue is already fixed and no patch is required). However, most vendors do not – or cannot – fix an issue in all shipping versions of a product, and certainly not in versions that have been deprecated. Telling customers the specifics of a vulnerability (i.e., by showing them scan results) would put all customers on older, unfixed, or deprecated versions at risk.
3. Testing COTS for coding errors and malware insertion, including backdoors, prior to deployment using automated static code analysis software increases costs without a commensurate return on investment (ROI).
The use of static code analysis software is a highly technical endeavor requiring skilled development personnel. There are skill requirements and a necessity for detailed operational knowledge of how the software is built to help eliminate false positives, factors that raise the cost of this form of “testing.” Additionally, static code analysis tools are not the tool of choice for detecting malware or backdoors. (It is, in fact, trivial to come up with a “backdoor” that, if inserted into code, would not be detected by even the best static analysis tools. There was an experiment at Sandia Labs in which a backdoor was inserted into code and code reviewers were told where in the code to look for it. They could not find it – even knowing where to look.)
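For concreteness, here is a minimal C sketch of such a backdoor, modeled on the change someone attempted to slip into a Linux kernel source mirror in 2003 (the struct, flags, and handle() function are hypothetical):

```c
#include <errno.h>

#define FLAG_A 0x1          /* hypothetical request flags */
#define FLAG_B 0x2

struct request { unsigned flags; int uid; };  /* uid 0 == superuser */

int handle(struct request *r);

int check_request(struct request *r) {
    /* Reads as "reject requests that set both flags" -- but
     * "r->uid = 0" is an ASSIGNMENT, not a comparison.  A caller who
     * passes both flags is silently made superuser, and because the
     * assignment evaluates to 0 (false), the error return never fires.
     * The doubled parentheses are the standard idiom for silencing the
     * compiler's assignment-in-condition warning. */
    if ((r->flags == (FLAG_A | FLAG_B)) && (r->uid = 0))
        return -EINVAL;
    return handle(r);
}
```

A tool can flag assignment-in-condition as a style issue, but the doubled parentheses are the conventional way of marking it intentional, and similar constructs occur legitimately throughout real C code bases; telling a backdoor from an idiom requires knowing what the code is supposed to do.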
If the real concern of a customer insisting on a third party code scan is malware and backdoor detection, it won’t work and thus represents an extremely expensive – and useless – distraction.
4. Third party code analysis will diminish overall product security.
It is precisely leading vendors’ experience with static analysis tools that contributes to their unwillingness to have third parties attempt to analyze code – emphasis on “attempt.” None of these tools are “plug and play”: in some cases, it has taken years, not months, to achieve actionable results, even from the best available static analysis tools. These are in fact code analysis tools and must be “tuned” – and in some cases actually “programmed” – to understand code, and must typically be run by an experienced developer (that is, a developer who understands the particular code base being analyzed) for results to be useful and actionable. There are many reasons why static analysis tools either raise many false positives or skip entire bodies of code. For example, because of the way Oracle implements particular functionality (memory management) in the database, static analysis tools that look for buffer overflows either do not work or raise false positives (Oracle writes its own checks to look for those issues).
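To illustrate why custom memory management defeats generic buffer-overflow checkers, here is a hypothetical sketch (not Oracle’s actual code) of arena-style allocation: the tool sees writes through a raw pointer into one large block, while the real bounds live in application metadata it cannot see.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical arena allocator: one big block carved into chunks.
 * Each chunk's real bound lives in application metadata, not in any
 * type or allocation call a generic analyzer understands. */
typedef struct {
    char   *base;   /* e.g., one 1 MB malloc at startup */
    size_t  used;
    size_t  cap;
} arena_t;

static char *arena_alloc(arena_t *a, size_t n) {
    if (a->cap - a->used < n)
        return NULL;
    char *p = a->base + a->used;   /* raw pointer into the big block */
    a->used += n;
    return p;
}

int store_name(arena_t *a, const char *name) {
    size_t len = strlen(name) + 1;
    char *slot = arena_alloc(a, 64);   /* a 64-byte chunk */
    if (slot == NULL || len > 64)      /* the REAL bounds check: pure */
        return -1;                     /* application logic           */
    memcpy(slot, name, len);           /* the tool sees only "write   */
    return 0;                          /* through char* into a block" */
}
```

A generic checker must either stay silent about writes through such pointers (missing real overflows) or flag them all (drowning real issues in noise); an in-house checker that understands the arena’s metadata can verify the actual bound, which is why Oracle writes its own.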
The rate of false positives from use of a “random” tool run by inexperienced operators – especially on a code base as large as that of most commercial products – would put a vendor in the position of responding to unsubstantiated fear, uncertainty, and doubt (FUD). In a code base of 10,000,000 lines of code, even a false positive rate of one per 1,000 lines of code would yield 10,000 “false positives” to chase down. The cost of doing this is prohibitive. (One such tool run against a large Oracle code base generated a false positive for every 3.4 lines of code, or about 160,000 false positives in toto, due to the size of the code base.)
This is why most people using these tools must “tune” them to drown out the “noise.” Many vendors have already had this false positive issue with customers running web application vulnerability tools and delivering, in some cases, hundreds of pages of “alarms” in which there were perhaps a half page of actionable issues. The rate of false positives is the single biggest determinant of whether these tools are worth using or are an expensive distraction (aka a “rathole”).
No third party firm has to prove that its tool is accurate – especially not if the vendor is forced to use a third party to validate its code – and thus there is little to no incentive to improve the tool. Consultants get paid more the longer they are on site and working. A legislative or “standards” requirement for “third party code analysis” is therefore a license for the third party doing it to print money. Putting it differently, if third party static analysis were accurate and cost-effective, why wouldn’t vendors already be doing it? Instead, many vendors use static analysis tools in-house, because they own the code and are willing to assume the cost of going up the learning curve for the long-term benefit of reduced defects (and the reduced cost of fixing those defects, since more vulnerabilities are found earlier in the development cycle).
In short, the use of a third party is the most expensive and least useful route to “better code” most vendors could possibly take, and would result in worse, not better, security as in-house “security boots on the ground” are diverted to working with the third party. It is unreasonable to expect any vendor to in effect tune a third party’s tool and train the third party on its code – and then have to pay the third party for the privilege. Third party static analysis represents an unacceptably high opportunity cost caused by the “crowding out” effect of taking scarce security resources and using them on activity of low value to the vendor and its customers. The only “winner” here is the third party. Ka-chink. Ka-chink.
5. Third party code analysis puts customers at increased risk.
As noted, there is no standard for what third party static analysis tools find, let alone how well and how economically they find it. More problematically, there are no standards for protection of any actual vulnerabilities these tools find. In effect, third party code analysis allows the third party to amass a database of unfixed vulnerabilities in products without any requirements for data protection or any recourse should that information be sold, incorporated into a hacking tool or breached. The mere fact of a third party amassing such sensitive information makes the third party a hacker target. Why attempt to hack products one by one if you can break into a third party’s network and get a listing of defects across multiple products – in the handy “economy size?” The problem is magnified if the “decompiled” source code is stored at the third party: such source code would be an even larger hacker target than the list of vulnerabilities the third party found.
Most vendors have very strict controls not merely on their source code, but on the information about product vulnerabilities that they know about and are triaging and fixing. Oracle Corporation, for example, has stringent security vulnerability handling policies that are promulgated and “scored” as part of Oracle’s software and hardware assurance program. Oracle uses its own secure database technology (row level access control) to enforce “need to know” on security vulnerabilities, information that is considered among the most sensitive information the company has. Security bugs are not published (meaning, they are not generally searchable and readable across the company or accessible by customers). Also, security bug access is stringently limited to those working on a bug fix (and selected others, such as security analysts and the security point of contact (SPOC) for the development area).
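Conceptually (and only conceptually – Oracle enforces this with row-level access control inside the database, not with application code like the following), the mechanism applies a need-to-know predicate on every read path, so restricted rows are simply invisible rather than producing an “access denied”:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of a need-to-know filter over a bug store. */
typedef struct { int id; bool is_security; const char *assignees[8]; } bug_t;

static bool can_see(const bug_t *b, const char *user) {
    if (!b->is_security) return true;         /* ordinary bugs: visible */
    for (int i = 0; i < 8 && b->assignees[i]; i++)
        if (strcmp(b->assignees[i], user) == 0)
            return true;                      /* on the fix/triage team */
    return false;                             /* everyone else: no row  */
}

/* All fetch paths funnel through here; callers cannot bypass the filter. */
const bug_t *fetch_bug(const bug_t *bugs, int n, int id, const char *user) {
    for (int i = 0; i < n; i++)
        if (bugs[i].id == id && can_see(&bugs[i], user))
            return &bugs[i];
    return NULL;  /* filtered rows look nonexistent, not "access denied" */
}
```

The design point is that enforcement lives in the data layer, so no application screen, report, or ad hoc query has to remember to check.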
One of the reasons Oracle is stringent about limiting access to security vulnerability information is that this information often does leak when “managed” by third parties, even third parties with presumed expertise in secret-keeping. In the past, MI5 circulated information about a non-public Oracle database vulnerability among UK defense and intelligence entities (it should be noted that nobody reported this to Oracle, despite the fact that only Oracle could issue a patch for the issue). Oracle was only notified about the bug by a US commercial company to whom the information had leaked. As the saying goes, two people can’t keep a secret.
There is another risk that has not generally been considered in third party static analysis, and that is the increased interest in cyber-offense. There is evidence that the market for so-called zero-day vulnerabilities is being fueled in part by governments seeking to develop cyber-offense tools. (Stuxnet, for example, allegedly made use of at least four “zero-day” vulnerabilities: that is, vulnerabilities not previously reported to a vendor.) Coupled with the increased interest of military suppliers/system integrators in getting into the “cyber security business,” it is not a stretch to think that at least some third parties getting into the “code analysis” business can and would use it as an opportunity to “sell to both sides” – use legislative fiat or customer pressure to force vendors to consent to static analysis, and then surreptitiously sell the vulnerabilities they find to the highest bidder as zero-days. Who would know?
Governments in particular cannot reasonably simultaneously fuel the market in zero days, complain at how irresponsible their COTS vendors are for not building better code and/or insist on third party static analysis. This is like stoking the fire and then complaining that the room is too hot.
6. Equality of access to vulnerability information protects all customers.
Most vendors do not provide advance information on security vulnerabilities to some customers but not others, or more information about security vulnerabilities to some customers but not others. As noted above, one reason for this is the heightened risk that such information will leak, and put the customers “not in the know” at increased risk. Not to mention, all customers believe their secrets are as worthy of protection as any other customer: nobody wants to be on the “Last Notified” list.
Thus, third party static analysis is problematic because it may result in violating basic fairness and equality in terms of vulnerability disclosure in the rare instances where these scans actually find exploitable vulnerabilities. The business model for some vendors offering static analysis as a service is to convince the customers of the vendor that the vendor is an evil slug and cannot be trusted, and thus the customer should insist on the third party analyzing the vendors’ code base.
There is an implicit assumption that the vendor will fix vulnerabilities that the third party static analysis finds immediately, or at least, before the customer buys/installs the product. However, the reality is more subtle than that. For one thing, it is almost never the case that a vulnerability exists in one and only one version of product: it may also exist on older versions. Complicating the matter: some issues cannot be “fixed” via a patch to the software but require the vendor to rearchitect functionality. This typically can only be done in so-called major product releases, which may only occur every two to three years. Furthermore, such issues often cannot be fixed on older versions because the scope of change is so drastic it will break dependent applications. Thus, a customer (as well as the third party) has information about a “not-easily-fixed” vulnerability which puts other customers at a disadvantage and at risk to the extent that information may leak.
Therefore, allowing some customers access to the results of a third party code scan in advance of a product release would violate most vendors’ disclosure policies and would actually increase risk to many, many customers – potentially for a long period of time.
7. Third party code analysis sets an unacceptable precedent that risks vendors’ core intellectual property (IP).
COTS vendors maintain very tight control over their proprietary source code because it is core, high-value IP. As such, most COTS vendors will not allow third parties to conduct static analysis against source code (and for purposes of this discussion, this includes static analysis against binaries, which typically violates standard license agreements).
Virtually all companies are aware of the tremendous cost of intellectual property theft: billions of dollars per year, according to published reports. Many nation states, including those that condone if not encourage wholesale intellectual property theft, are now asking for source code access as a condition of selling COTS products into their markets. Most COTS vendors have refused these requests. One can easily imagine that for some nation states, the primary reason to request source code access (or, alternatively, “third party analysis of code”) is for intellectual property theft or economic espionage. Once a government-sanctioned third party has access to source code, so may the government. (Why steal source code if you can get a vendor to gift wrap it and hand it to you under the rubric of “third party code analysis?”)
Another likely reason some governments may insist on source code access (or third party code analysis) is to analyze the code for weaknesses they then exploit for their own national security purposes (e.g., more intellectual property theft). All things being equal, it is easier to find defects in source code than in object code. Refusing to accede to these requests – in addition to, of course, a vendor doing its own code analysis and defect remediation – thus protects all customers. In short, agreeing to any third party code analysis involving source code – either static analysis or static analysis of binaries - would make it very difficult if not impossible for a vendor to refuse any other similar requests for source code access, which would put their core intellectual property at risk. Third party code analysis is a very bad idea because there is no way to “undo” a precedent once it is set.
Summary
Software should have a wide variety of tests performed before it is shipped, and additional security tests (such as penetration tests) should be used against “as-deployed” software. However, the level of testing should be commensurate with the risk, which is both basic risk management and appropriate (scarce) resource management. A typical firm has many software elements, most probably COTS, and to suggest that they all be tested with static analysis tools begs a sanity check. The scope of COTS alone argues against this requirement: COTS products run the gamut from operating systems to databases to middleware, business intelligence and other analytic tools, business applications (accounting, supply chain management, manufacturing), as well as specialized vertical market applications (e.g., clinical trial software), representing a number of programming languages and billions – no, hundreds of billions – of lines of code.
The use of static analysis tools in development to help find and remediate security vulnerabilities is a good assurance practice, albeit a difficult one because of the complexity of software and the difficulty of using these tools. These tools are only useful when the producer of the software uses them, in a cost-effective way geared toward sustained vulnerability reduction over time. The mandated use of third party static analysis to “validate” or “test” code is unsupportable, for reasons of cost (especially opportunity cost), precedent, increased risk to vendors’ IP, and increased security risk to customers. The third party static code analysis market is little more than a subterfuge for enabling the zero-day vulnerability market: bad security, at a high cost, and very bad public policy.
Book of the Month
It’s been so long since I blogged, it’s hard to pick out just a few books to recommend. Here are three, and a "freebie":
Hawaiki Rising: Hōkūle’a, Nainoa Thompson and the Hawaiian Renaissance by Sam Low
Among the most amazing tidbits of history are the vast voyages that the Polynesians made to settle (and travel among) Tahiti, Hawai’i and Aotearoa (New Zealand) using navigational methods largely lost to history. (Magellan – meh – he had a compass and sextant.) This book describes the re-creation of Polynesian wayfinding in Hawai’i in the 1970s via the building of a double-hulled Polynesian voyaging canoe, the Hōkūle’a, and how one amazing Hawaiian (Nainoa Thompson) – under the tutelage of one of the last practitioners of wayfinding (Mau Piailug) – made an amazing voyage from Hawai’i to Tahiti using only his knowledge of the stars, the winds, and the currents. (Aside: one of my favorite songs is “Hōkūle’a Hula,” which describes this voyage, and is so nicely performed by Erik Lee.) Note: the Hōkūle’a is currently on a voyage around the world.
The Korean War by Max Hastings
Max Hastings is one of the few historians who I think is truly balanced: he looks at the moral issues of history, weighs them, and presents a fair analysis – not “shove-it-down-your-throat revisionism.” He also makes use of a lot of first-person accounts, which makes history come alive. The Korean War is in so many ways a forgotten war, especially the fact that it literally is a war that never ended. It’s a good lesson of history, as it makes clear that the US drew down its military so rapidly and drastically after World War II that we were largely (I am trying not to say “completely”) unprepared for Korea. (Moral: there is always another war.)
Code Talker by Chester Nez
Many people now know of the crucial role that members of the Navajo Nation played in the Pacific War: the code they created provided a crucial advantage (and was never broken). This book is a first-person account of the experiences of one Navajo code talker, from growing up on the reservation to his training as a Marine and his experiences in the Pacific Theater. Fascinating.
Securing Oracle Database 12c: A Technical Primer
If you are a DBA or security professional looking for more information on Oracle database security, you will be interested in this book. Written by members of Oracle's engineering team and the president of the Independent Oracle Users Group (IOUG), Michelle Malcher, the book provides a primer on capabilities such as data redaction, privilege analysis, and conditional auditing. If you have Oracle databases in your environment, you will want to add this book to your collection of professional information. Register now for the complimentary eBook and learn from the experts.