Oracle Security Team


July 2017 Critical Patch Update Released

Tue, 2017-07-18 11:00

Oracle today released the July 2017 Critical Patch Update.

This Critical Patch Update provides fixes for a wide range of product families including: Oracle Database Server, Oracle Enterprise Manager, Oracle Fusion Middleware, Oracle Hyperion, Oracle E-Business Suite, Oracle Industry Applications (Communications, Retail, and Hospitality), Oracle Primavera, Oracle Sun Products, Oracle Java SE, and Oracle MySQL.

Oracle recommends this Critical Patch Update be applied as soon as possible. A summary and analysis of this Critical Patch Update has been published on My Oracle Support (Doc ID 2282980.1).

For More Information:

The Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/security-advisory/cpujul2017-3236622.html

My Oracle Support Note 2282980.1 is located at https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=2282980.1 (MOS account required).

Securing the Oracle Cloud

Wed, 2017-06-21 15:25

Technology safeguards, fewer risks, and unparalleled security motivate CIOs to embrace cloud computing.

If one thing is constant in the IT world, it's change. Consider the age-old dilemma of security versus innovation. Just a few years ago, concerns about data security and privacy prevented some organizations from adopting cloud-based business models. Today, many of these concerns have been alleviated. IT leaders are migrating their applications and data to the cloud in order to benefit from security features offered by some cloud providers. The key is to choose the right technology—one that is designed to protect users, enhance safeguarding of data, and better address requirements under privacy laws. Find out why millions of users rely on advanced and complete cloud services to transform fundamental business processes more quickly and confidently than ever before.

The Evolving Security Landscape

Mitigating the Risk of Data Loss with Cloud Technology

The IT security practices of many organizations that manage their own systems may not be strong enough to resist complex threats from malware, phishing schemes, and advanced persistent threats unleashed by malicious users, cybercriminal organizations, and state actors. The perimeter-based security controls typically implemented by organizations that manage their own security (firewalls, intrusion detection systems, and antivirus software packages) are arguably no longer sufficient to prevent these threats.

It's time to look further. It's time to look to the cloud. Thousands of organizations and millions of users obtain a better security position using a tier 1 public cloud provider than they can obtain in their own data centers. A full 78 percent of businesses surveyed say the cloud can improve both their security and their agility.5 Consider the facts: Most of today's security budgets are used to protect the network, with less than a third used to protect data and intellectual property that resides inside the organization.6 Network security is important, but it's not enough.

Building Oracle's Defense-in-Depth Strategy

Oracle Cloud is built around multiple layers of security and multiple levels of defense throughout the technology stack. Redundant controls provide exceptional resiliency, so if a vulnerability is discovered and exploited in one layer, the unauthorized user will be confronted with another security control in the next layer.

But having some of the world's best security technology is only part of the story. Oracle aligns people, processes, and technology to offer an integrated defense-in-depth platform:

  • Preventive controls designed to mitigate unauthorized access to sensitive systems and data
  • Detective controls designed to reveal unauthorized system and data changes through auditing, monitoring, and reporting
  • Administrative measures to address security policies, practices, and procedures

Gaining an Edge with Cloud Security

In the Digital Age, companies depend on their information systems to connect with customers, sell products, operate equipment, maintain inventory, and carry out a wide range of other business processes. If your data is compromised, IT assets quickly become liabilities. A 2016 Ponemon Institute study found that the average cost of a data breach continues to rise each year, with each lost or stolen record that contains confidential information representing US$158 in costs or penalties.8 In response, more and more organizations are transitioning their information systems to the cloud to achieve better security for sensitive data and critical business processes.

Security used to be an inhibitor to moving to the cloud. Now it's an enabler to get you where you need to go. Oracle helps you embrace the cloud quickly, and with confidence.

Learn more about Oracle security cloud services, read the paper "Oracle Infrastructure and Platform Cloud Services Security", and try Oracle Cloud today.

---
1 QuinStreet Enterprise, "2015 Security Outlook: Meeting Today's Evolving Cyber-Threats."
2 Ponemon Institute, "The Cost of Malware Containment," 2015.
3 Leviathan Security Group, "Quantifying the Cost of Cloud Security."
4 Crowd Research Partners, "Cloud Security: 2016 Spotlight Report."
5 Coleman Parkes Research, "A Secure Path to Digital Transformation."
6 CSO Market Pulse, "An Inside-Out Approach to Enterprise Security."
7 Jeff Kauflin, "The Fast-Growing Job with a Huge Skills Gap: Cyber Security," Forbes, March 16, 2017.
8 IBM Security, "2016 Ponemon Cost of Data Breach Study."

Security Alert CVE-2017-3629 Released

Mon, 2017-06-19 15:21

Oracle just released Security Alert CVE-2017-3629 to address three vulnerabilities affecting Oracle Solaris:

- CVE-2017-3629 affects Oracle Solaris versions 10 and 11.3 and has a CVSS Base Score of 7.8.
- CVE-2017-3630 affects Oracle Solaris versions 10 and 11.3 and has a CVSS Base Score of 5.3.
- CVE-2017-3631 affects only Oracle Solaris 11.3 and has a CVSS Base Score of 5.3.
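The CVSS v3.0 standard used in Oracle advisories maps base scores to qualitative severity bands (0.1–3.9 Low, 4.0–6.9 Medium, 7.0–8.9 High, 9.0–10.0 Critical). As an illustration (the helper function below is mine, not part of any Oracle tooling), the scores above can be classified like so:

```python
def cvss_v3_severity(score: float) -> str:
    """Map a CVSS v3.0 base score to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

Under this scale, the 7.8 score for CVE-2017-3629 is rated High, while the 5.3 scores for the other two issues are rated Medium.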

Oracle recommends affected Oracle Solaris customers apply the fixes released with this Security Alert as soon as possible.

For More Information:
The Advisory for Security Alert CVE-2017-3629 is located at http://www.oracle.com/technetwork/security-advisory/alert-cve-2017-3629-3757403.html

Oracle's Security Fixing Practices

Mon, 2017-06-19 13:53

In a previous blog entry, we discussed how Oracle customers should take advantage of Oracle's ongoing security assurance effort in order to help preserve their security posture over time. In today's blog entry, we're going to discuss the highlights of Oracle's security fixing practices and their implications for Oracle customers.

As stated in the previous blog entry, the Critical Patch Update program is Oracle's primary mechanism for the delivery of security fixes in all supported Oracle product releases and the Security Alert program provides for the release of fixes for severe vulnerabilities outside of the normal Critical Patch Update schedule. Oracle always recommends that customers remain on actively-supported versions and apply the security fixes provided by Critical Patch Updates and Security Alerts as soon as possible.

So, how does Oracle decide to provide security fixes? Where does the company start (i.e., for what product versions do security fixes get first generated)? What goes into security releases? What are Oracle's objectives?

The primary objective of Oracle's security fixing policies is to help preserve the security posture of ALL Oracle customers. This means that Oracle tries to fix vulnerabilities in severity order for each Oracle product family. In certain instances, security fixes cannot be backported; in other instances, lower severity fixes are required because of dependencies among security fixes. Additionally, Oracle treats customers equally by providing all customers with the same vulnerability information and access to fixes across actively-used platform and version combinations at the same time. Oracle does not provide additional information about the specifics of vulnerabilities beyond what is provided in the Critical Patch Update (or Security Alert) advisory and pre-release note, the pre-installation notes, the readme files, and FAQs. The one narrow exception to this practice is for customers who report a security vulnerability. When a customer reports a security vulnerability, Oracle treats the customer in much the same way it treats security researchers: the customer gets detailed information about the vulnerability, information about the expected fixing date, and in some instances access to a temporary patch to test the effectiveness of a given fix. However, the scope of the information shared between Oracle and the customer is limited to the original vulnerability reported by the customer.

Another objective of Oracle's security fixing policies is not so much to produce fixes as quickly as possible as it is to make sure that these fixes get applied by customers as quickly as possible. Prior to 2005 and the introduction of the Critical Patch Update program, security fixes were published by Oracle as they were produced by development, without any fixed schedule (much as Oracle releases a Security Alert today). The feedback we received was that this lack of predictability was challenging for customers, and as a result, many customers reported that they no longer applied fixes. Customers said that a predictable schedule would help them ensure that security fixes were picked up more quickly and consistently. As a result, Oracle created the Critical Patch Update program to bring predictability to Oracle customers. Since 2005, and in spite of a growing number of product families, Oracle has never missed a Critical Patch Update release.

It is also worth noting that Critical Patch Update releases for most Oracle products are cumulative. This means that by applying a Critical Patch Update, a customer gets all the security fixes included in a specific Critical Patch Update release as well as all the previously-released fixes for a given product-version combination. This allows customers who may have missed Critical Patch Update releases to quickly "catch up" to current security releases.
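The cumulative model described above can be sketched in a few lines of Python. The release labels and fix identifiers below are invented for illustration; the point is that applying one release yields the union of its fixes and every earlier release's fixes:

```python
# Hypothetical fix IDs per quarterly release for one product-version
# combination; "cumulative" means each release also carries everything
# shipped in the releases before it.
releases = {
    "Jan2017": {"fix-101", "fix-102"},
    "Apr2017": {"fix-103"},
    "Jul2017": {"fix-104", "fix-105"},
}

def effective_fixes(release: str) -> set:
    """All fixes a customer gets by applying `release` alone."""
    fixes, order = set(), list(releases)
    for r in order[: order.index(release) + 1]:
        fixes |= releases[r]
    return fixes
```

A customer who skipped the January and April releases and applies only "Jul2017" still ends up with all five fixes.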

Let's now have a look at the order with which Oracle produces fixes for security vulnerabilities. Security fixes are produced by Oracle in the following order:

  • Main code line. The main code line is the code line for the next major release version of the product.
  • Patch set for non-terminal release version. Patch sets are rollup patches for major release versions. A terminal release version is a version for which no additional patch sets are planned.
  • Critical Patch Update. These are fixes against initial release versions or their subsequent patch sets.

This means that, in certain instances, security fixes can appear in patch sets or new product releases before their inclusion in a subsequent Critical Patch Update release. It also means that systems updated with a patch set or upgraded to a new product release receive the security fixes previously included in that patch set or release.

One consequence of Oracle's practices is that newer Oracle product versions tend to provide an improved security posture over previous versions, because they benefit from the inclusion of security fixes that have not been or cannot be backported by Oracle.

In conclusion, the best way for Oracle customers to fully leverage Oracle's ongoing security assurance effort is to:

  1. Remain on actively supported release versions and their most recent patch set—so that they can have continued access to security fixes;
  2. Move to the most recent release version of a product—so that they benefit from fixes that cannot be backported and other security enhancements introduced in the code line over time;
  3. Promptly apply Critical Patch Updates and Security Alert fixes—so that they prevent the exploitation of vulnerabilities patched by Oracle, which are known by malicious attackers and can be quickly weaponized after the release of Oracle fixes.

For more information:
- Oracle Software Security Assurance website
- Security Alerts and Critical Patch Updates

Take Advantage of Oracle Software Security Assurance

Thu, 2017-06-15 07:00

In a previous blog entry (What is Assurance and Why Does It Matter?), Mary Ann Davidson explains the importance of Security Assurance and introduces Oracle Software Security Assurance, Oracle’s methodology for building security into the design, build, testing, and maintenance of its products.

The primary objective of software security assurance is to help ensure that security controls provided by software are effective, work in a predictable fashion, and are appropriate for that software. The purpose of ongoing security assurance is to make sure that this objective continues to be met over time (throughout the useful life of software).

The development of enterprise software is a complex matter. Even in mature development organizations, bugs still occur, and the use of automated tools does not completely prevent software defects. One important aspect of ongoing security assurance is therefore to remediate security bugs in released code. Another aspect is to ensure that the security controls provided by software continue to be appropriate when the use cases for the software change. For example, years ago backups were performed mostly on tapes or other devices physically connected to the server being backed up, while today many backups are performed over private or public networks and sometimes stored in a cloud. Finally, ongoing security assurance also addresses changing threats (e.g., new attack methods) and obsolete technologies (e.g., deprecated encryption algorithms).

Oracle customers need to take advantage of Oracle's ongoing security assurance efforts in order to preserve, over time, the security posture associated with their use of Oracle products. To that end, Oracle recommends that customers remain on actively-supported versions and apply security fixes as quickly as possible after they have been published by Oracle.

Introduced in 2005, the Critical Patch Update program is the primary mechanism for the backport of security fixes for all Oracle on-premises products. The Critical Patch Update is Oracle’s program for the distribution of security fixes in previously-released versions of Oracle software. Critical Patch Updates are regularly scheduled: they are issued quarterly on the Tuesday closest to the 17th of the month in January, April, July, and October. This fixed schedule is intended to provide enough predictability to enable customers to apply security fixes in normal maintenance windows. Furthermore, the dates of the Critical Patch Update releases are intended to fall outside of traditional "blackout" periods when no changes to production systems are typically allowed (e.g., end of fiscal years or quarters or significant holidays).
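Because the schedule is deterministic (the Tuesday closest to the 17th of January, April, July, and October), upcoming release dates can be computed in advance for maintenance planning. A minimal sketch in Python, where the function name is mine and ties in the forward direction are an assumption on my part:

```python
from datetime import date, timedelta

def cpu_release_date(year: int, month: int) -> date:
    """Tuesday closest to the 17th of Jan/Apr/Jul/Oct (the CPU schedule)."""
    assert month in (1, 4, 7, 10), "CPUs ship in January, April, July, October"
    anchor = date(year, month, 17)
    # date.weekday(): Monday=0 ... Tuesday=1; shift to the nearest Tuesday.
    offset = (1 - anchor.weekday()) % 7   # days forward to the next Tuesday
    if offset > 3:                        # the previous Tuesday is closer
        offset -= 7
    return anchor + timedelta(days=offset)
```

For 2017 this reproduces the dates in this feed: January 17, April 18, and July 18.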

Note that in addition to this regularly-scheduled program for security releases, Oracle retains the ability to issue out of schedule patches or workaround instructions in case of particularly critical vulnerabilities and/or when active exploits are reported "in the wild." This program is known as the Security Alert Program.

Critical Patch Update and Security Alert fixes are only provided for product versions that are "covered under the Premier Support or Extended Support phases of the Lifetime Support Policy." This means that Oracle does not backport fixes to product versions that are out of support. Furthermore, unsupported product releases are not tested for the presence of vulnerabilities. It is, however, common for vulnerabilities to be found in legacy code, and vulnerabilities fixed in a given Critical Patch Update release can also affect older product versions that are no longer supported.

As a result, organizations choosing to continue to use unsupported systems face increasing risks over time. Malicious attackers are known to reverse-engineer the content of published security fixes, and it is common for exploit code to be published in hacking frameworks soon after Oracle discloses vulnerabilities with the release of a Critical Patch Update or Security Alert. Continuing to use unsupported systems can therefore have two serious implications: (a) unsupported releases are likely to be affected by vulnerabilities that are unknown to the affected software user, because these releases are no longer subject to ongoing security assurance activities; and
(b) unsupported releases are likely to be vulnerable to flaws that are known to malicious perpetrators, because these bugs have been fixed (and publicly disclosed) in subsequent releases.

Unfortunately, security studies continue to report that, in addition to human errors and system misconfigurations, the lack of timely security patching constitutes one of the greatest reasons for the compromise of IT systems by malicious attackers. See, for example, the Federal Trade Commission's paper "Start with Security: A Guide for Business," which recommends that organizations have effective means to keep up with security releases of their software (whether commercial or open source). Delays in security patching and overall lapses in good security hygiene have plagued IT organizations for years. In many instances, organizations will report the "fear of breaking something in a business-critical system" as the reason for not keeping up with security patches.

Here lies a fundamental paradox: a given system may be considered too important to fail (or to be temporarily brought offline), and this is the very reason why it is not kept up to date with security patches! These organizations are betting that the cost of a known availability interruption outweighs the potential impact of a security incident that could result from skipping a security release. This amounts to driving a car with very little gas left in the tank and thinking, "I don't have time to stop at the gas station, because I really need my car and I am too busy to gas up." Obviously, the scarcity of technical personnel and the costs associated with testing complex applications and deploying patches further exacerbate the problem. The larger, more complex, and more operation-critical the IT environment, the greater the "to patch or not to patch" conundrum.

In recent years, Oracle has issued stronger caution against postponing the application of security fixes or knowingly continuing to use unsupported versions. For example, the April 2017 Critical Patch Update Advisory includes the following warning: "Oracle continues to periodically receive reports of attempts to maliciously exploit vulnerabilities for which Oracle has already released fixes. In some instances, it has been reported that attackers have been successful because targeted customers had failed to apply available Oracle patches. Oracle therefore strongly recommends that customers remain on actively-supported versions and apply Critical Patch Update fixes without delay." Keeping up with security releases is simply a critical requirement for preserving the security posture of an IT environment, regardless of the technologies (or vendors) in use.

April 2017 Critical Patch Update Released

Tue, 2017-04-18 15:00

Oracle today released the April 2017 Critical Patch Update.

This Critical Patch Update provides fixes for a wide range of product families including: Oracle Database Server, Oracle Enterprise Manager Grid Control, Oracle E-Business Suite, Oracle Industry Applications, Oracle Fusion Middleware, Oracle Sun Products, Oracle Java SE, and Oracle MySQL.

Oracle recommends this Critical Patch Update be applied as soon as possible. A summary and analysis of this Critical Patch Update has been published on My Oracle Support (Doc ID 2252203.1).

For More Information:

The Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/security-advisory/cpuapr2017-3236618.html

My Oracle Support Note 2252203.1 is located at https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=2252203.1 (MOS account required).

January 2017 Critical Patch Update Released

Tue, 2017-01-17 15:15

Oracle today released the January 2017 Critical Patch Update.

This Critical Patch Update provides fixes for a wide range of product families including: Oracle Database Server, Oracle Enterprise Manager Grid Control, Oracle E-Business Suite, Oracle Industry Applications, Oracle Fusion Middleware, Oracle Sun Products, Oracle Java SE, and Oracle MySQL.

Oracle recommends this Critical Patch Update be applied as soon as possible. A summary and analysis of this Critical Patch Update has been published on My Oracle Support (Doc ID 2220314.1)

For More Information:

The Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/security-advisory/cpujan2017-2881727.html

My Oracle Support Note 2220314.1 is located at https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=2220314.1 (MOS account required).

What Is Assurance and Why Does It Matter?

Fri, 2017-01-13 08:00

If you are an old security hand, you can skip reading this. If you think "assurance" is something you pay for so your repair bills are covered if someone hits your car, please keep reading.

Way back in the pre-Internet days, I used to say that computer security was kind of a lonely job, because hardly any customers seemed to be really interested in talking about it. There were, of course, some keenly interested customers, including defense and intelligence agencies and a few banks, most of which were concerned with our security functionality and—to a lesser degree—how we were building security into everything, a difference I will explain below, and which is known as assurance.

Times change. Now, when I meet someone who complains of a virus, it's better-than-even odds that he is talking about the latest digital plague and not a case of the flu. Information technology (IT) has moved way beyond mission-critical applications to things that are literally in the palm of our hands, and is in places we never even thought would (or in some cases should) be computerized ("Turn your crock pot on remotely? There's an app for that!"). More and more of our world is not only IT-based but Internet accessible. Alas, the growth in Internet-accessible whatchamacallits has also led to a growth in Evil Dudes in Upper Wherever wreaking havoc in systems in Anywheresville. This is one big reason that cybersecurity is something (almost) everybody cares about.

Historically, computer security has often been described as "CIA" (Confidentiality, Integrity and Availability):

Confidentiality means that the data is protected such that people who don't need to access it can't access it, via restrictions on who can view, delete, or change data. For example, at Oracle, I can review my salary online (so can my Human Resources representative), but I cannot look at the salaries of employees who do not report to me.

Integrity means that the data hasn't been corrupted (technical term: "futzed with"). In other words, you know that "A" means "A" and isn't really "B" that has been garbled to look like "A." Corrupted data is often worse than no data, precisely because you can't trust it. (Wire transfers wouldn't work if extra 0s were mysteriously and randomly appended to amounts.)

Availability means that you are able to access data (and systems) you have legitimate access to—when you need to. In other words, someone hasn't prevented access by, say, flooding a machine with so many requests that the system just gives up (the digital equivalent of a persistent three-year-old asking for more candy "now mommy now mommy now mommy" to the point where mommy can't think).

C, I and A are all important attributes of security that may vary in terms of importance from system to system.

Assurance is not CIA, but it is the confidence that a system does what it was designed to do, including protecting against specific threats, and also that there aren't sneaky ways around the security controls it provides. It's important because if you don't have confidence that the CIA controls cannot be bypassed by Evil Dude (or Evil Dudette), then the CIA isn't useful. If you have a digital door lock, but the lock designer goofed by allowing anybody to unlock the door by typing '98765,' then you don't have any security once Evil Dude figures out that 98765 always gets him into your house (and shares that with everybody else on the Internet).

Here's the definition of assurance that the US Department of Defense uses:

"Software assurance relates to "the level of confidence that software functions as intended and is free of vulnerabilities, either intentionally or unintentionally designed or inserted as part of the software (https://acc.dau.mil/CommunityBrowser.aspx?id=25749)."

When I started working in security, most security people knew a lot about the CIA of security, but fewer of us—fewer of anybody—thought about the "functions as intended" and "free of vulnerabilities" parts. "Functions as intended" is a design aspect of security. That means that a designer not only considered what the software (or hardware) was intended to do, but thought about how someone could try to make the software (or hardware) do what it was not intended to do. Both are important because unless you never deploy a product, it's most likely going to be attacked, somehow, somewhere, by someone. Thinking about how Evil Dude can try to break stuff (and making that hard/unlikely to succeed) is a very important part of "functions as intended."

The "free of vulnerabilities" part is also important; having said that, nobody who knows anything about code would say, "all of our code is absolutely perfect." ("Pride goeth before destruction and a haughty spirit before a fall.") That said, one of the most important aspects of assurance is secure coding. Secure coding practices include training your designers, developers, testers (and yes, even documentation writers) about how code can be broken, so people think about that before starting to code. Having a development process that incorporates security into design, development, testing and maintenance is also important. Security isn't a sort of magic pixie dust you can sprinkle over software or hardware after it's all done to magically make it secure—it is a quality just as structural integrity is part of a building, not something you slap your head over and think, "dang, I forgot the rebar, I need to add some to this building." It's too late after the concrete has set. Secure coding practices include actively looking for coding errors that could be exploited by a Evil Dude, triaging those coding errors to determine "how bad is bad," fixing the worst stuff the fastest and making sure that a problem is fixed completely. If Evil Dude can break in by typing '^X,' it's tempting to just redo your code so typing ^X doesn't get Evil Dude anything. But that likely isn't the root of the problem (what about ^Y - what does that do? Or ^Z, ^A...?) Automated tools designed to help find avoidable, preventable defects in software are a huge help (they didn't really exist when I started in security).

Nobody who buys a house expects the house to be 100% perfect, but you'd like to think that the architect hired structural engineers to ensure the walls wouldn't fall over, the contractor had people checking the work all along ("don't skimp on the rebar"), there was a building inspection, etc. Note that even with a really well-designed and well-built house, there is probably a ding or two in the paint somewhere even before you move in; it's probably not letter perfect. Code is like that, too, although a "ding in your code" is probably more significant than a ding in your paint, so there should be far fewer of them.

Assurance matters not only because people who use IT want to know things work as intended—and cannot be easily broken—but because time, money and people are always limited resources. Most companies would rather hire 50 more salespeople to reach more potential customers than hire 50 more people to patch their systems. (For that matter, I'd rather have such strong, repeatable, ingrained secure development practices that instead of hiring 50 more people to fix bad/ insecure code, we can use those 50 people to build new, cool (and secure) stuff.)

Assurance is always going to be good and necessary, even as the baseline technology we are trying to "assure" continues to change. One of the most enjoyable aspects of my job is continuing to "bake security in" as we grow in breadth and as we adapt to changes in the market. Many companies are moving from "buy-build-maintain" their own systems to "rent," by using cloud services. (It makes a lot of sense: companies don't typically build apartment buildings in every city their employees visit: they use "cloud housing," a.k.a. hotels.) The increasing move to cloud services comes with security challenges, but also has a lot of security benefits. If it's hard to find enough IT people to maintain your systems, it's even harder to find enough security people to defend them. A service provider can secure the same thing, 5000 times, much better than 5000 individual customers can. (Or, alternatively, a service provider can secure one big multi-tenant service offering better than the 5000 customers using it can do themselves.) The assurance practices we have adapted from "home grown" software and hardware have already morphed, and will continue to morph, to how we build and deliver cloud services.

Click here for more information on Oracle assurance.

The State of Open Source Security

Thu, 2017-01-05 07:00

Open source components play a growing role in software development, both commercial and in-house. The traditional role of a developer has evolved from writing most of the code from scratch to reusing known, trustworthy components as much as possible. As a result, a growing aspect of software design and development decisions has become the integration of open source and third-party components into increasingly large and complex software.

The question as to whether open source software is inherently more secure than commercial (i.e., "closed") software has been ardently debated for a number of years. The purpose of this blog entry is not to definitively settle this argument, though I would argue that software (whether open source or closed source) that is developed by security-aware developers tends to be inherently more secure. Regardless of this controversy, there are important security implications regarding the use of open source components.

The wide use of certain components (open source or not) has captured the attention of both security researchers and malicious actors. Indeed, the discovery of a 0-day in a widely-used component offers malicious actors the prospect of "hacking once and exploiting anywhere," or of a large financial gain if the bug is sold on the black market. As a result, a growing number of security vulnerabilities have been found and reported in open source components. The positive impact of increased security research in widely-used components will (hopefully) be the improved security-worthiness of these components (and the relative fulfillment of the "million eyes" theory), and increased awareness, within development organizations, of the security implications of the use of open source components.

In many instances, the vulnerabilities found in public components have been given cute names: POODLE, Heartbleed, Dirty Cow, Shellshock, Venom, etc. (in no particular order). These names contributed to a sense of urgency (sometimes panic) within many organizations, often to the detriment of a rational analysis of the actual severity of these issues and their relative exploitability in the affected environments.

Less security-sophisticated organizations have been particularly affected by this sense of urgency, and many have attempted to scan their environments to find software containing the vulnerability "du jour." However, it has been Oracle's experience that while many free tools can relatively accurately identify the presence of open source components in IT systems, the majority of these tools have an abysmal track record at accurately identifying the version(s) of those components, much less at determining the exploitability of the issues associated with them. As a result, less security-sophisticated organizations are faced with reports containing a large number of false positives, and are unable to make sense of the findings (Oracle support has seen an increase in the submission of such inaccurate reports). From a security assurance perspective, I believe that there are three significant and often under-discussed topics related to the use of open source components in complex software development:
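The version-identification problem is easy to illustrate: whether a finding is a true positive usually hinges on comparing the detected version against the range an advisory actually covers, not merely on matching the component's name. A minimal sketch, using a hypothetical component ("examplelib") and made-up version data:

```python
from typing import Tuple

def parse_version(version: str) -> Tuple[int, ...]:
    """Turn a dotted version string like '2.4.1' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def is_affected(detected: str, fixed_in: str) -> bool:
    """A component is only vulnerable if its version predates the fix."""
    return parse_version(detected) < parse_version(fixed_in)

# Hypothetical advisory: versions of "examplelib" before 2.4.1 are vulnerable.
FIXED_IN = "2.4.1"

# A name-only scanner would flag every host running "examplelib"...
inventory = {"host-a": "2.3.0", "host-b": "2.4.1", "host-c": "2.5.0"}

# ...but only host-a is actually running an affected version.
affected = {host for host, ver in inventory.items() if is_affected(ver, FIXED_IN)}
print(affected)  # {'host-a'}
```

A tool that reports all three hosts produces two false positives; a tool that cannot determine the version at all cannot do better than that.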

  1. How can we assess and enhance security assurance activities in open source projects?
  2. How can we ensure that these components are obtained securely?
  3. How does the use of open source components affect ongoing assurance activities throughout the useful life of associated products?

How can we assess and enhance security assurance activities in open source projects?

Assessing the security maturity of an open source project is not necessarily an easy thing. There are certain tools, methodologies, and principles (e.g., the Building Security In Maturity Model (BSIMM), SAFECode) that can be used to assess the relative security maturity of commercial software developers, but their application to open source projects is difficult. For example, how can one determine the amount of security skill available in an open source project, and whether code changes are systematically reviewed by skilled security experts?

Furthermore, should the software industry come together to devise means of coordinating the role of commercial vendors in helping enhance the security posture of the most common open source projects, for the benefit of all vendors and the community? Is it enough to commit that security fixes be shared with the community when an issue is discovered while a component is being used in a commercial offering?

How can we ensure that these components are obtained securely?

A number of organizations (whether for the purpose of developing commercial software or their own systems) are concerned solely about "toxic" licenses when procuring open source components, while they should be equally concerned about bringing in toxic code.

One problem is the potential downloading and use of obsolete software (which contains known security flaws that have been fixed in the most recent releases). This problem can relatively easily be solved by forcing developers to only download the most recent releases from the official project repository.
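Enforcing that policy can be as simple as periodically comparing the component versions an organization has pinned against the latest releases published on the official project repositories. A minimal sketch with hypothetical component names and version data (a real implementation would query the repositories themselves):

```python
def parse_version(v):
    """Turn a dotted version string into a comparable tuple."""
    return tuple(int(p) for p in v.split("."))

# Pinned dependencies as an organization might record them (hypothetical data).
pinned = {"examplelib": "1.9.0", "otherlib": "3.2.0"}

# Latest releases published on the official project repositories (hypothetical).
latest = {"examplelib": "2.1.0", "otherlib": "3.2.0"}

# Flag any pin that lags behind the current official release.
obsolete = {name: (ver, latest[name])
            for name, ver in pinned.items()
            if parse_version(ver) < parse_version(latest[name])}

for name, (have, want) in sorted(obsolete.items()):
    print(f"{name}: pinned {have}, latest is {want}")
```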

Many developers prefer pulling compiled binaries instead of compiling the source code themselves (and verifying its authenticity). Developers should be aware of the risk of pulling malicious code: just because a binary is labelled "foo" does not mean it actually is "foo"; it may be "foo + a nasty backdoor." There have been several publicly-reported security incidents resulting from the downloading of maliciously altered programs.
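One basic mitigation is to verify each downloaded artifact against the digest the project publishes before using it. A minimal sketch using Python's standard hashlib (the artifact bytes here are simulated):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex-encoded SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_download(data: bytes, published_digest: str) -> bool:
    """Accept a downloaded artifact only if its digest matches the one
    published (out of band) by the project."""
    return sha256_of(data) == published_digest

# Simulated release artifact and the digest the project published alongside it.
artifact = b"pretend this is the foo-1.2.3 release tarball"
published = sha256_of(artifact)          # what the project's site would list
tampered = artifact + b" + a nasty backdoor"

print(verify_download(artifact, published))  # True
print(verify_download(tampered, published))  # False
```

Note that a digest only helps if it is obtained through a channel separate from the artifact itself; checking a cryptographic signature over the release (where the project provides one) gives stronger assurance of origin.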

How can we provide the necessary security care of a solution that includes open source components throughout its useful life?

Once an organization has decided to use an external component in its solution, it should also consider how it will maintain that solution. The maintenance and patching implications of third-party components are often overlooked. For example, organizations may be faced with hardware limitations in their products. They may have to deprecate hardware products more quickly because a required open source component is no longer supported on a specific platform, or because the technical requirements of subsequent releases of the component exceed the specifications of the hardware. In hardware environments, there is also the obvious question of whether patching mechanisms are available for updating open source components on the platform.

There are also problematic implications when open source components are used in purely-software solutions. Security fixes for open source components are often unpredictable. How does this unpredictability affect the availability of production systems, or customers' requirements for a fixed maintenance schedule?

In conclusion, the questions listed in this blog entry are just a few of those one should consider when developing technology-based products (which seems to describe almost everything these days). These questions are particularly important as open source components represent a large and increasing chunk of the technology supply chain, not only of commercial technology vendors but also of cloud providers. Security assurance policies and practices should take these questions into consideration, and highlight the fact that open source, while incredibly useful, is not necessarily "free" but requires specific sets of commitments and due diligence obligations.

Common Criteria and the Future of Security Evaluations

Thu, 2016-10-20 12:08

For years, I (and many others) have recommended that customers demand more of their information technology suppliers in terms of security assurance – that is, proof that security is “built in” and not “bolted on,” that security is “part of” the product or service developed and can be assessed in a meaningful way. While many customers are focused on one kind of assurance – the degree to which a product is free from security vulnerabilities – it is extremely important to know the degree to which a product was designed to meet specific security threats (and how well it does that). These are two distinct, quite complementary approaches to security, a point that should increasingly be of value for all customers. The good news is that many IT customers – whether of on-premises products or cloud services – are asking for more “proof of assurance,” and many vendors are paying more attention. Great! At the same time, sadly, a core international standard for assurance, the Common Criteria (CC) (ISO 15408), is at risk.

The Common Criteria allows you to evaluate your IT products via an independent lab (certified by the national “scheme” in which the lab is domiciled). Seven levels of assurance are defined – generally, the higher the evaluation assurance level (EAL), the more “proof” you have to provide that your product 1) addresses specific (named) security threats 2) via specific (named) technical remedies to those threats. Over the past few years, CC experts have packaged technology-specific security threats, objectives, functions and assurance requirements into “Protection Profiles” that have a pre-defined assurance level. The best part of the CC is the CC Recognition Arrangement (CCRA), the benefit of which is that a CC security evaluation done in one country (subject to some limits) is recognized in multiple other countries (27, at present). The benefit to customers is that they can have a baseline level of confidence in a product they buy because an independent entity has looked at/validated a set of security claims about that product.

Unfortunately, the CC is in danger of losing this key benefit of mutual recognition. The main tension is between countries that want fast, cookie cutter, “one assurance size fits all” evaluations, and those that want (for at least some classes of products) higher levels of assurance. These tensions threaten to shatter the CCRA, with the risk of an “every country for itself,” “every market sector for itself” or worse, “every customer for itself” attempt to impose inconsistent assurance requirements on vendors that sell products and services in the global marketplace. Customers will not be well-served if there is no standardized and widely-recognized starting point for a conversation about product assurance.

The uncertainty about the future of the CC creates opportunity for new, potentially expensive and unproven assurance validation approaches. Every Tom, Dick, and Harriet is jumping on the assurance bandwagon, whether it is developing a new assurance methodology (that the promoters hope will be adopted as a standard, although it’s hardly a standard if one company “owns” the methodology), or lobbying for the use of one proprietary scanning tool or another (noting that none of the tools that analyze code are themselves certified for accuracy and cost-efficiency, nor are the operators of these tools). Nature abhors a vacuum: if the CCRA fractures, there are multiple entities ready to promote their assurance solutions – which may or may not work. (Note: I freely admit that a current weakness of the CC is that, while vulnerability analysis is part of a CC evaluation, it’s not all that one would want. A needed improvement would be a mechanism that ensures that vendors use a combination of tools to more comprehensively attempt to find security vulnerabilities that can weaken security mechanisms and have a risk-based program for triaging and fixing them. Validating that vendors are doing their own tire-kicking – and fixing holes in the tires before the cars leave the factory – would be a positive change.)

Why does this threat of CC balkanization matter? First of all, testing the exact same product or service 27 times won’t in all likelihood lead to a 27-fold security improvement, especially when the cost of the testing is borne by the same entity over and over (the vendor). Worse, since the resources (time, money, and people) that would be used to improve actual security are assigned to jumping through the same hoop 27 times, we may paradoxically end up with worse security. We may also end up with worse security to the extent that there will be less incentive for the labs that do CC evaluations to pursue excellence and cost efficiency in testing if they have less competition (for example, from labs in other countries, as is the case under the CCRA) and they are handed a captive marketplace via country-specific evaluation schemes.

Second, whatever the shortcomings of the CC, it is a strong, broadly-adopted foundation for security that to-date has the support of multiple stakeholders. While it may be improved upon, it is nonetheless better to do one thing in one market that benefits and is accepted in 26 other markets than to do 27 or more expensive testing iterations that will not lead to a 27-fold improvement in security. This is especially true in categories of products that some national schemes have deemed “too complex to evaluate meaningfully.” The alternative clearly isn't per-country testing or per-customer testing, because it is in nobody's interests and not feasible for vendors to do repeated one-off assurance fire-drills for multiple system integrators. Even if the CC is “not sufficient” for all types of testing for all products, it is still a reputable and strong baseline to build upon.

Demand for Higher Assurance

In part, the continuing demand for higher assurance CC evaluations is due to the nature of some of the products: smart cards, for example, are often used for payment systems, where there is a well understood need for “higher proof of security-worthiness.” Also, smart cards generally have a smaller code footprint and fewer, well-defined interfaces, and thus lend themselves fairly well to more in-depth, higher assurance validation. Indeed, the smart card industry – in a foreshadowing and/or inspiration of CC community Protection Profiles (cPPs) – was an early adopter of devising common security requirements and “proof of security claims,” doubtless understanding that all smart card manufacturers – and the financial institutions who are heavy issuers of them – have a vested interest in “shared trustworthiness.” This is a great example of understanding that, to quote Ben Franklin, “We must all hang together or assuredly we shall all hang separately.”

The demand for higher assurance evaluations continues in part because the CC has been so successful. Customers worldwide became accustomed to “EAL4” as the gold standard for most commercial software. “EAL-none”—the direction of new-style community Protection Profiles (cPPs)—hasn’t captured the imagination of the global marketplace for evaluated software, in part because the promoters of “no-EAL is the new EAL4” have not made the necessary business case for why “new is better than old.” An honest, realistic assessment of “new-style” cPPs would explain what the benefits of the new approach are and what the downsides are, as part of making the case that “new is better than old.” Consumers do not necessarily upgrade their TV just because they are told “new is better than old;” they upgrade because they can see a larger screen, a clearer picture, and better value for money.

Product Complexity and Evaluations

To the extent security evaluation methodology can be made more precise and repeatable, it facilitates more consistent evaluations across the board at a lower evaluation cost. However, there is a big difference between products that were designed to do a small set of core functions, using standard protocols, and products that have a broader swathe of functionality and far more flexibility as to how that functionality is implemented. This means that it will be impossible to standardize testing across products in some product evaluation categories.

For example, routers use standard Internet protocols (or well-known proprietary protocols) and are relatively well defined in terms of what they do. Therefore, it is far easier to test their security using standardized tests as part of a CC evaluation to, for example, determine attack resistance, correctness of protocol implementation, and so forth. The Network Device Protection Profile (NDPP) is the perfect template for this type of evaluation.

Relational databases, on the other hand, use structured query language (SQL), but that does not mean all SQL syntax in all commercial databases is identical, or that the protocols used to connect to the database are all identical, or that common functionality is completely comparable among databases. For example, Oracle was the first relational database to implement commercial row level access control: specifically, by attaching a security policy to a table that causes a rewrite of SQL to enforce additional security constraints. Since Oracle developed (and patented) row level access control, other vendors have implemented similar (but not identical) functionality.
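The rewrite idea can be shown in miniature. The following is an illustrative Python sketch of the general technique, not Oracle's implementation (which operates on parsed query structures, not strings): a policy attaches a predicate to a table, and the engine transparently appends that predicate to any query touching the table.

```python
def attach_policy(policies: dict, table: str, predicate: str) -> None:
    """Register a row-level security predicate against a table."""
    policies.setdefault(table, []).append(predicate)

def rewrite(sql: str, table: str, policies: dict) -> str:
    """Append the table's policy predicates to the query, mimicking the
    transparent SQL rewrite a row-level security engine performs."""
    preds = policies.get(table, [])
    if not preds:
        return sql
    guard = " AND ".join(f"({p})" for p in preds)
    joiner = " AND " if " WHERE " in sql.upper() else " WHERE "
    return sql + joiner + guard

policies = {}
# Hypothetical policy: sales reps may only see their own orders.
attach_policy(policies, "orders", "sales_rep = CURRENT_USER")

rewritten = rewrite("SELECT * FROM orders", "orders", policies)
print(rewritten)  # SELECT * FROM orders WHERE (sales_rep = CURRENT_USER)
```

Because each vendor implements the policy attachment and rewrite differently, a single prescriptive test harness cannot exercise all of them; that is the point the surrounding paragraphs make.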

As a result, no set of standard tests can adequately test each vendor’s row level security implementation, any more than you can use the same key on locks made by different manufacturers. Prescriptive (monolithic) testing can work for verifying protocol implementations; it will not work in cases where features are implemented differently. Even worse, prescriptive testing may have the effect of “design by test harness.”

Some national CC schemes have expressed concerns that an evaluation of some classes of products (like databases) will not be “meaningful” because of the size and complexity of these products,[1] or that these products do not lend themselves to repeatable, cross-product (prescriptive) testing. This is true, to a point: it is much easier to do a building inspection of a 1000-square foot or 100-square meter bungalow than of Buckingham Palace. However, given that some of these large, complex products are the core underpinning of many critical systems, does it make sense to ignore them because it’s not “rapid, repeatable and objective” to evaluate even a core part of their functionality? These classes of products are heavily used in the core market sectors the national schemes serve: all the more reason the schemes should not preclude evaluation of them.

Worse, given that customers subject to these CC schemes still want evaluated products, a lack of mutual recognition of these evaluations (thus breaking the CCRA) or negation of the ability to evaluate merely drives costs up. Demand for inefficient and ineffective ad hoc security assurances continues to increase, and will explode if vendors are precluded from evaluating entire classes of products that are widely-used and highly security relevant. No national scheme, despite good intentions, can successfully control its national marketplace, or the global marketplace for information technology.

Innovation

One of the downsides of rapid, basic, vanilla evaluations is that it stifles the uptake of innovative security features in a customer base that has a lot to protect. Most security-aware customers (like defense and intelligence customers) want new and innovative approaches to security to support their mission. They also want the new innovations vetted properly (via a CC evaluation).

Typically, a community Protection Profile (cPP) defines the set of minimum security functions that a product in category X does. Add-ons can in theory be done via an extended package (EP) – if the community agrees to it and the schemes allow it. The vendor and customer community should encourage the ability to evaluate innovative solutions through an EP, as long as the EP does not specify a particular approach to a threat to the exclusion of other ways to address the threat. This would continue to advance the state of the security art in particular product categories without waiting until absolutely everyone has Security Feature Y. It’s almost always a good thing to build a better mousetrap: there are always more mice to fend off. Rapid adoption of EPs would enable security-aware customers, many of whom are required to use evaluated products, to adopt new features readily, without waiting for:

a) every vendor to have a solution addressing that problem (especially since some vendors may never develop similar functionality)

b) the cPP to have been modified, and

c) all vendors to have evaluated against the new cPP (that includes the new security feature)

Given the increasing focus of governments on improvements to security (in some cases by legislation), national schemes should be the first in line to support “faster innovation/faster evaluation,” to support the customer base they are purportedly serving.

Last but really first, in the absence of the ability to rapidly evaluate new, innovative security features, customers who would most benefit from using those features may be unable or unwilling to use them, or may only use them at the expense of “one-off” assurance validation. Is it really in anyone’s interest to ask vendors to do repeated one-off assurance fire-drills for multiple system integrators?

Conclusion

The Common Criteria – and in particular, the Common Criteria recognition arrangement – form a valuable, proven foundation for assurance in a digital world that is increasingly in need of it. That strong foundation can nonetheless be strengthened by:

1) recognizing and supporting the legitimate need for higher assurance evaluations in some classes of product

2) enabling faster innovation in security and the ability to evaluate it via EPs

3) continuing to evaluate core products that have historically had and continue to have broad usage and market demand (e.g., databases and operating systems)

4) embracing, where apropos, repeatable testing and validation, while recognizing the limitations thereof that apply in some cases to entire classes of products, and ensuring that such testing is not unnecessarily prescriptive.


b) the cPP to have been modified, and

c) all vendors to have evaluated against the new cPP (that includes the new security feature)

Given the increasing focus of governments on improvements to security (in some cases by legislation), national schemes should be the first in line to support “faster innovation/faster evaluation,” to support the customer base they are purportedly serving.

Last but really first, in the absence of the ability to rapidly evaluate new, innovative security features, customers who would most benefit from using those features may be unable or unwilling to use them, or may only use them at the expense of “one-off” assurance validation. Is it really in anyone’s interest to ask vendors to do repeated one-off assurance fire-drills for multiple system integrators?

Conclusion

The Common Criteria – and in particular, the Common Criteria recognition – form a valuable, proven foundation for assurance in a digital world that is increasingly in need of it. That strong foundation can nonetheless be strengthened by:

1) recognizing and supporting the legitimate need for higher assurance evaluations in some classes of product

2) enabling faster innovation in security and the ability to evaluate it via EPs

3) continuing to evaluate core products that have historically had and continue to have broad usage and market demand (e.g., databases and operating systems)

4) embracing, where apropos, repeatable testing and validation, while recognizing the limitations thereof that apply in some cases to entire classes of products and ensuring that such testing is not unnecessarily prescriptive.

Common Criteria and the Future of Security Evaluations

Thu, 2016-10-20 08:00

For years, I (and many others) have recommended that customers demand more of their information technology suppliers in terms of security assurance – that is, proof that security is “built in” and not “bolted on,” that security is “part of” the product or service developed and can be assessed in a meaningful way. While many customers are focused on one kind of assurance – the degree to which a product is free from security vulnerabilities – it is extremely important to know the degree to which a product was designed to meet specific security threats (and how well it does that). These are two distinct, complementary approaches to assurance, and both should increasingly be of value to all customers. The good news is that many IT customers – whether of on-premises products or cloud services – are asking for more “proof of assurance,” and many vendors are paying more attention. Great! At the same time, sadly, a core international standard for assurance, the Common Criteria (CC, ISO/IEC 15408), is at risk.

The Common Criteria allows you to evaluate your IT products via an independent lab (certified by the national “scheme” in which the lab is domiciled). Seven levels of assurance are defined – generally, the higher the evaluation assurance level (EAL), the more “proof” you have to provide that your product 1) addresses specific (named) security threats 2) via specific (named) technical remedies to those threats. Over the past few years, CC experts have packaged technology-specific security threats, objectives, functions and assurance requirements into “Protection Profiles” that have a pre-defined assurance level. The best part of the CC is the CC Recognition Arrangement (CCRA), the benefit of which is that a CC security evaluation done in one country (subject to some limits) is recognized in multiple other countries (27, at present). The benefit to customers is that they can have a baseline level of confidence in a product they buy because an independent entity has looked at/validated a set of security claims about that product.

Unfortunately, the CC is in danger of losing this key benefit of mutual recognition. The main tension is between countries that want fast, cookie cutter, “one assurance size fits all” evaluations, and those that want (for at least some classes of products) higher levels of assurance. These tensions threaten to shatter the CCRA, with the risk of an “every country for itself,” “every market sector for itself” or worse, “every customer for itself” attempt to impose inconsistent assurance requirements on vendors that sell products and services in the global marketplace. Customers will not be well-served if there is no standardized and widely-recognized starting point for a conversation about product assurance.

The uncertainty about the future of the CC creates opportunity for new, potentially expensive and unproven assurance validation approaches. Every Tom, Dick, and Harriet is jumping on the assurance bandwagon, whether it is developing a new assurance methodology (that the promoters hope will be adopted as a standard, although it’s hardly a standard if one company “owns” the methodology), or lobbying for the use of one proprietary scanning tool or another (noting that none of the tools that analyze code are themselves certified for accuracy and cost-efficiency, nor are the operators of these tools). Nature abhors a vacuum: if the CCRA fractures, there are multiple entities ready to promote their assurance solutions – which may or may not work. (Note: I freely admit that a current weakness of the CC is that, while vulnerability analysis is part of a CC evaluation, it’s not all that one would want. A needed improvement would be a mechanism that ensures that vendors use a combination of tools to more comprehensively attempt to find security vulnerabilities that can weaken security mechanisms and have a risk-based program for triaging and fixing them. Validating that vendors are doing their own tire-kicking – and fixing holes in the tires before the cars leave the factory – would be a positive change.)

Why does this threat of CC balkanization matter? First of all, testing the exact same product or service 27 times won’t in all likelihood lead to a 27-fold security improvement, especially when the cost of the testing is borne by the same entity over and over (the vendor). Worse, since the resources (time, money, and people) that would be used to improve actual security are assigned to jumping through the same hoop 27 times, we may paradoxically end up with worse security. We may also end up with worse security to the extent that there will be less incentive for the labs that do CC evaluations to pursue excellence and cost efficiency in testing if they have less competition (for example, from labs in other countries, as is the case under the CCRA) and they are handed a captive marketplace via country-specific evaluation schemes.

Second, whatever the shortcomings of the CC, it is a strong, broadly-adopted foundation for security that to-date has the support of multiple stakeholders. While it may be improved upon, it is nonetheless better to do one thing in one market that benefits and is accepted in 26 other markets than to do 27 or more expensive testing iterations that will not lead to a 27-fold improvement in security. This is especially true in categories of products that some national schemes have deemed “too complex to evaluate meaningfully.” The alternative clearly isn't per-country testing or per-customer testing, because it is in nobody's interests and not feasible for vendors to do repeated one-off assurance fire-drills for multiple system integrators. Even if the CC is “not sufficient” for all types of testing for all products, it is still a reputable and strong baseline to build upon.

Demand for Higher Assurance

In part, the continuing demand for higher assurance CC evaluations is due to the nature of some of the products: smart cards, for example, are often used for payment systems, where there is a well understood need for “higher proof of security-worthiness.” Also, smart cards generally have a smaller code footprint and fewer, well-defined interfaces, so they lend themselves fairly well to more in-depth, higher assurance validation. Indeed, the smart card industry – in a foreshadowing and/or inspiration of CC community Protection Profiles (cPPs) – was an early adopter of devising common security requirements and “proof of security claims,” doubtless understanding that all smart card manufacturers – and the financial institutions who are heavy issuers of them – have a vested interest in “shared trustworthiness.” This is a great example of understanding that, to quote Ben Franklin, “We must all hang together or assuredly we shall all hang separately.”

The demand for higher assurance evaluations continues in part because the CC has been so successful. Customers worldwide became accustomed to “EAL4” as the gold standard for most commercial software. “EAL-none”—the direction of new style community Protection Profiles (cPP)—hasn’t captured the imagination of the global marketplace for evaluated software in part because the promoters of “no-EAL is the new EAL4” have not made the necessary business case for why “new is better than old.” An honorable, realistic assessment of “new-style” cPPs would explain what the benefits are of the new approach and what the downsides are as part of making a case that “new is better than old.” Consumers do not necessarily upgrade their TV just because they are told “new is better than old;” they upgrade because they can see a larger screen, clearer picture, and better value for money.

Product Complexity and Evaluations

To the extent security evaluation methodology can be more precise and repeatable, that facilitates more consistent evaluations across the board at a lower evaluation cost. However, there is a big difference between products that were designed to do a small set of core functions, using standard protocols, and products that have a broader swathe of functionality and have far more flexibility as to how that functionality is implemented. This means that it will be impossible to standardize testing across products in some product evaluation categories.

For example, routers use standard Internet protocols (or well-known proprietary protocols) and are relatively well defined in terms of what they do. Therefore, it is far easier to test their security using standardized tests as part of a CC evaluation to, for example, determine attack resistance, correctness of protocol implementation, and so forth. The Network Device Protection Profile (NDPP) is the perfect template for this type of evaluation.

Relational databases, on the other hand, use structured query language (SQL) but that does not mean all SQL syntax in all commercial databases is identical, or that protocols used to connect to the database are all identical, or that common functionality is completely comparable among databases. For example, Oracle was the first relational database to implement commercial row level access control: specifically, by attaching a security policy to a table that causes a rewrite of SQL to enforce additional security constraints. Since Oracle developed (and patented) row level access control, other vendors have implemented similar (but not identical) functionality.
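The general shape of that technique – a policy attached to a table contributing a predicate that is AND-ed into every query – can be sketched as follows. This is an illustrative toy in Python, not Oracle’s actual implementation or API; the function names and policy format are assumptions for the sake of the example:

```python
# Toy sketch of row-level access control via query rewrite.
# A "policy" attached to a table contributes a predicate that is
# appended to every query touching that table, on behalf of the user.

policies = {}  # table name -> list of functions: user -> SQL predicate

def attach_policy(table, predicate_fn):
    """Register a security policy function for a table."""
    policies.setdefault(table, []).append(predicate_fn)

def rewrite(query, table, user):
    """Append the table's policy predicates to a query for this user."""
    preds = [fn(user) for fn in policies.get(table, [])]
    if not preds:
        return query
    # AND onto an existing WHERE clause, or start one
    glue = " AND " if " WHERE " in query.upper() else " WHERE "
    return query + glue + " AND ".join(f"({p})" for p in preds)

# Example policy: sales reps may only see rows from their own region.
attach_policy("orders", lambda user: f"region = '{user['region']}'")

q = rewrite("SELECT * FROM orders", "orders", {"region": "EMEA"})
# q is now: SELECT * FROM orders WHERE (region = 'EMEA')
```

Two vendors could each implement this idea correctly while differing in policy syntax, predicate semantics, and enforcement points – which is exactly why a single prescriptive test suite cannot cover them all.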

As a result, no set of standard tests can adequately test each vendor’s row level security implementation, any more than you can use the same key on locks made by different manufacturers. Prescriptive (monolithic) testing can work for verifying protocol implementations; it will not work in cases where features are implemented differently. Even worse, prescriptive testing may have the effect of “design by test harness.”

Some national CC schemes have expressed concerns that an evaluation of some classes of products (like databases) will not be “meaningful” because of the size and complexity of these products [1], or that these products do not lend themselves to repeatable, cross-product (prescriptive) testing. This is true, to a point: it is much easier to do a building inspection of a 1000-square foot or 100-square meter bungalow than of Buckingham Palace. However, given that some of these large, complex products are the core underpinning of many critical systems, does it make sense to ignore them because it’s not “rapid, repeatable and objective” to evaluate even a core part of their functionality? These classes of products are heavily used in the core market sectors the national schemes serve: all the more reason the schemes should not preclude evaluation of them.

Worse, given that customers subject to these CC schemes still want evaluated products, a lack of mutual recognition of these evaluations (thus breaking the CCRA) or negation of the ability to evaluate merely drives costs up. Demand for inefficient and ineffective ad hoc security assurances continues to increase and will explode if vendors are precluded from evaluating entire classes of products that are widely-used and highly security relevant. No national scheme, despite good intentions, can successfully control its national marketplace, or the global marketplace for information technology.

Innovation

One of the downsides of rapid, basic, vanilla evaluations is that it stifles the uptake of innovative security features in a customer base that has a lot to protect. Most security-aware customers (like defense and intelligence customers) want new and innovative approaches to security to support their mission. They also want the new innovations vetted properly (via a CC evaluation).

Typically, a community Protection Profile (cPP) defines the set of minimum security functions that a product in category X does. Add-ons can in theory be done via an extended package (EP) – if the community agrees to it and the schemes allow it. The vendor and customer community should encourage the ability to evaluate innovative solutions through an EP, as long as the EP does not specify a particular approach to a threat to the exclusion of other ways to address the threat. This would continue to advance the state of the security art in particular product categories without waiting until absolutely everyone has Security Feature Y. It’s almost always a good thing to build a better mousetrap: there are always more mice to fend off. Rapid adoption of EPs would enable security-aware customers, many of whom are required to use evaluated products, to adopt new features readily, without waiting for:

a) every vendor to have a solution addressing that problem (especially since some vendors may never develop similar functionality)

b) the cPP to have been modified, and

c) all vendors to have evaluated against the new cPP (that includes the new security feature)

Given the increasing focus of governments on improvements to security (in some cases by legislation), national schemes should be the first in line to support “faster innovation/faster evaluation,” to support the customer base they are purportedly serving.

Last but really first, in the absence of the ability to rapidly evaluate new, innovative security features, customers who would most benefit from using those features may be unable or unwilling to use them, or may only use them at the expense of “one-off” assurance validation. Is it really in anyone’s interest to ask vendors to do repeated one-off assurance fire-drills for multiple system integrators?

Conclusion

The Common Criteria – and in particular, the Common Criteria recognition – form a valuable, proven foundation for assurance in a digital world that is increasingly in need of it. That strong foundation can nonetheless be strengthened by:

1) recognizing and supporting the legitimate need for higher assurance evaluations in some classes of product

2) enabling faster innovation in security and the ability to evaluate it via extended packages (EPs)

3) continuing to evaluate core products that have historically had and continue to have broad usage and market demand (e.g., databases and operating systems)

4) embracing, where apropos, repeatable testing and validation, while recognizing the limitations thereof that apply in some cases to entire classes of products and ensuring that such testing is not unnecessarily prescriptive.

[1] https://www.niap-ccevs.org/Documents_and_Guidance/ccevs/DBMS%20Position%20Statement.pdf

October 2016 Critical Patch Update Released

Tue, 2016-10-18 14:59

Oracle today released the October 2016 Critical Patch Update.

This Critical Patch Update provides fixes for a wide range of product families including: Oracle Database Server, Oracle E-Business Suite, Oracle Industry Applications, Oracle Fusion Middleware, Oracle Sun Products, Oracle Java SE, and Oracle MySQL.

Oracle recommends this Critical Patch Update be applied as soon as possible. A summary and analysis of this Critical Patch Update has been published on My Oracle Support (Doc ID 2193091.1)

For More Information:

The Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/security-advisory/cpuoct2016-2881722.html

My Oracle Support Note 2193091.1 is located at https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=2193091.1 (MOS account required).

Unmasking Hackers with User Behavior Analytics

Tue, 2016-09-06 08:00

Many people keep sensitive documents in cloud storage services and the latest breach shows that hackers are focusing on online storage cloud services more frequently. This opens the door to huge vulnerabilities if employees are storing sensitive enterprise information in the cloud. From a preventative perspective, security personnel should review their security measures for the following:

  1. Require multi-factor authentication to access the application
  2. Enforce password strength and complexity requirements
  3. Require and enforce frequent password resets for employees
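Item 2 above – password strength and complexity requirements – is the easiest of these to enforce automatically. A minimal sketch in Python follows; the specific rules and thresholds are purely illustrative assumptions, not any vendor’s actual policy:

```python
import re

def meets_policy(password, min_length=12):
    """Check a password against an illustrative complexity policy."""
    checks = [
        len(password) >= min_length,           # minimum length
        re.search(r"[A-Z]", password),         # at least one uppercase letter
        re.search(r"[a-z]", password),         # at least one lowercase letter
        re.search(r"\d", password),            # at least one digit
        re.search(r"[^A-Za-z0-9]", password),  # at least one symbol
    ]
    return all(bool(c) for c in checks)

print(meets_policy("correct horse"))       # False: no uppercase, no digit
print(meets_policy("Tr0ub4dor&3xample!"))  # True
```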

But manual processes and policies are not enough. At minimum, enterprises should look at automating the enforcement of these policies. For example, you may require multi-factor authentication, but how do you ensure that it's required at all times? A cloud access security broker (CASB) continuously monitors configurations to alert security personnel when changes are made, and automatically creates incident tickets to revert security configurations back to the approved setting.

How can enterprises prevent further damage if their employees' credentials were compromised in this hack? We recommend utilizing user behavior analytics (UBA) to look for anomalous activity in an account. UBA uses machine learning techniques to create a baseline of normal behavior for each user. If a hacker accesses an employee's account using stolen credentials, UBA will flag a number of indicators that this access deviates from the normal behavior of the legitimate user.

Palerra LORIC is a CASB that supports cloud storage services. Here are a few indicators LORIC can use to unmask a potential hacker with stolen credentials:

  1. Flag a login from an unusual IP address or geographic location
  2. Detect a spike in number of file downloads compared to normal user activity
  3. Detect logins outside of normal access hours for the user
  4. Detect anomalous file sharing or file previewing activities
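The core of indicators 1–3 is a per-user baseline plus a deviation check. The following toy sketch shows the idea for login hours and download volume; the features, thresholds, and z-score test are illustrative assumptions, not LORIC’s actual model:

```python
from statistics import mean, stdev

def baseline(history):
    """Build a per-user baseline from (login_hour, files_downloaded) tuples."""
    hours = [h for h, _ in history]
    downloads = [d for _, d in history]
    return {
        "hours": set(hours),                                     # usual login hours
        "dl_mean": mean(downloads),                              # typical download count
        "dl_std": stdev(downloads) if len(downloads) > 1 else 0.0,
    }

def anomalies(event, base, z_threshold=3.0):
    """Return the list of indicators an event triggers against a baseline."""
    hour, downloads = event
    flags = []
    if hour not in base["hours"]:
        flags.append("login outside normal hours")
    if base["dl_std"] and (downloads - base["dl_mean"]) / base["dl_std"] > z_threshold:
        flags.append("download spike")
    return flags

# A user who normally logs in mid-morning and downloads a handful of files...
history = [(9, 4), (10, 6), (9, 5), (11, 4), (10, 5)]
base = baseline(history)

# ...suddenly logs in at 3 a.m. and pulls 250 files: both indicators fire.
print(anomalies((3, 250), base))
```

A real UBA system would learn a richer model over many more features (IP geolocation, sharing activity, access patterns), but the detect-deviation-from-baseline structure is the same.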

The ability to gauge legitimate access and activities becomes even more important when you consider that many people use the same password for multiple applications. Instead of just protecting a single online storage cloud service, UBA helps the enterprise protect any cloud environment that could be accessed using the stolen passwords.

If you're concerned that hackers may access your cloud storage environment using stolen employee credentials, you should take preventative and remedial action. Adding a cloud security automation tool helps prevent a breach by enforcing password best practices, and helps limit the damage after a breach by flagging anomalous activity to unmask hackers posing as legitimate users.

July 2016 Critical Patch Update Released

Tue, 2016-07-19 14:51

Oracle today released the July 2016 Critical Patch Update.

This Critical Patch Update provides fixes for a wide range of product families including: Oracle Database Server, Oracle E-Business Suite, Oracle Industry Applications, Oracle Fusion Middleware, Oracle Sun Products, Oracle Java SE, and Oracle MySQL.

Oracle recommends this Critical Patch Update be applied as soon as possible. A summary and analysis of this Critical Patch Update has been published on My Oracle Support (MOS Note 2161607.1)

For More Information:

The Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/security-advisory/cpujul2016-2881720.html

My Oracle Support Note 2161607.1 is located at https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=2161607.1 (MOS account required).
