Feed aggregator

New blog to handle the PJC/Bean articles

Francois Degrelle - Mon, 2015-03-30 13:06
Here is the link to another place that stores the PJC/Bean articles without ads: http://forms.pjc.bean.blog.free.fr/ Francois

A command-line alternative to PeopleSoft SendMaster

Javier Delgado - Mon, 2015-03-30 11:05
If you are familiar with PeopleSoft Integration Broker, I'm sure you have dealt with SendMaster to some degree. This is a very simple yet useful tool to perform unit tests of Integration Broker incoming service operations using plain XML (if I'm dealing with SOAP Web Services, I normally use SoapUI, for which there is a very good article on PeopleSoft Wiki).

Most of the time SendMaster is enough, but today I came across a problem that required an alternative. While testing an XML message with this tool against an HTTPS PeopleSoft installation, I got the following error message:

Error communicating with server: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

After checking My Oracle Support, I found the following resolution (Doc ID 1634045.1):

The following steps will resolve the error:

1) Import the appropriate SSL certificate into the Java keystore PS_HOME\jre\lib\security\cacerts or Integration Broker's keystore location, i.e. the pskey file
2) Set SendMaster's preferences (via File > Preferences > HTTP tab) to point to the keystore with the appropriate SSL certificate
3) Test

Unfortunately, I didn't have access to the appropriate SSL certificate, so I decided to use curl, a pretty old (dating back to 1997, according to the all-knowing Wikipedia) but still useful command-line tool.



curl is a command-line tool that can be used to test HTTP and HTTPS operations, including GET, PUT, POST and so on. One of the features of this tool is that it can run in "insecure" mode, which skips validation of the server's SSL certificate and so removes the need to import that certificate into a local keystore when testing HTTPS URLs. On both Linux and Mac OS, the option to run in insecure mode is -k. The command line to test my service operation then looked like:

curl -X POST -d @test.xml -k https://<server>/PSIGW/HttpListeningConnector 

Please note that the @ prefix tells curl to read the POST data from the file that follows it. You can also specify the data inline on the command line, but that is a bit more cumbersome.

Also, keep in mind that curl is not delivered with Windows out of the box, but you can download similar tools from several sources (for instance, this one).
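
If curl (or a similar download) is not an option, the same insecure POST can be reproduced with a few lines of Python. This is only a sketch, not part of the original post: it assumes the third-party requests library is installed and reuses the placeholder gateway URL and test.xml file from the curl example above.

import requests

# Read the test message from the same file used in the curl example.
with open("test.xml", "rb") as f:
    payload = f.read()

# verify=False mirrors curl's -k option: the server certificate is not validated.
# Replace <server> with the actual Integration Broker gateway host.
response = requests.post(
    "https://<server>/PSIGW/HttpListeningConnector",
    data=payload,
    verify=False,
)

print(response.status_code)
print(response.text)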







PTS Sample code now available on GitHub

Angelo Santagata - Mon, 2015-03-30 09:43

Not sure many people know about this, but some time ago my team created a whole collection of sample code. This code is available on OTN at this location, but it is now also available on GitHub here!

We'll be updating this repository with some new code soon; when we do, I'll make sure to update this blog entry.

Retrieving OAM keystore password

Frank van Bortel - Mon, 2015-03-30 04:12
How to retrieve the password of the OAM keystore: if you ever need it, the password of the default OAM keystore (which is generated) can be retrieved using:

cd /oracle/middleware/oracle_common/common/bin
./wlst.sh
connect()
domainRuntime()
listCred(map="OAM_STORE", key="jks")

If you would like to change it, use resetKeystorePassword().

Frank
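
For completeness, the same steps can be wrapped in a small WLST script. This is just a sketch: the script name, admin URL, user name, and password below are placeholders (assumptions, not part of the original post), while listCred() and resetKeystorePassword() are the calls quoted above.

# Run from /oracle/middleware/oracle_common/common/bin as:
#   ./wlst.sh get_oam_keystore_password.py   (hypothetical file name)
# The connection details are placeholders -- replace them with your own admin server and credentials.
connect('weblogic', '<admin_password>', 't3://adminhost:7001')

# Credential store commands are available in the domain runtime tree.
domainRuntime()

# Print the generated password of the default OAM keystore entry.
listCred(map="OAM_STORE", key="jks")

# To change the keystore password instead, uncomment the following line:
# resetKeystorePassword()

disconnect()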

Slides from my presentation at APEX World 2015 in Rotterdam

Dietmar Aust - Mon, 2015-03-30 04:05
Hi guys,

I had a great time at APEX World in Rotterdam, it was a wonderful event. I could meet up with my friends and learn a few new tricks, too :).

Here are the slides from my presentation about the many smaller new features of Oracle APEX 5.0. And I could only cram like half of the good stuff that I found into this 45 min. session.

About 70 people attended the session and I sure hope they will use some of the features I presented.

Once I clean up my demo application, I will make it available, too.

Cheers,
~Dietmar.

Want to SPEED Up Your Database Tasks? DBMS_PARALLEL_EXECUTE to the rescue!

VitalSoftTech - Thu, 2015-03-26 05:56
Use DBMS_PARALLEL_EXECUTE to divide one huge task into multiple small tasks that can be executed at the same time. See how many ways there are to divide.
Categories: DBA Blogs

Is Your Shellshocked Poodle Freaked Over Heartbleed?

Mary Ann Davidson - Wed, 2015-03-25 17:43

Security weenies will understand that the above title is not as nonsensical as it appears. Would that it were mere nonsense. Instead, I suspect more than a few will read the title and their heads will throb, either because the readers hit themselves in the head, accompanied by the multicultural equivalents of “oy vey” (I’d go with “aloha ‘ino”), or because the above expression makes them reach for the most potent over-the-counter painkiller available.

For those who missed it, there was a sea change in security vulnerabilities reporting last year involving a number of mass panics around “named” vulnerabilities in commonly-used – and widely-used – embedded libraries. For example, the POODLE vulnerability (an acronym for Padding Oracle On Downgraded Legacy Encryption) affects SSL version 3.0, and many products and services using SSL version 3.0 use third party library implementations. The Shellshock vulnerabilities affect GNU bash, a program that multiple Unix-based systems use to execute command lines and command scripts. These vulnerabilities (and others) were widely publicized (the cutesie names helped) and resulted in a lot of scrambling to find, fix, and patch the vulnerabilities. The cumulative result of a number of named vulnerabilities last year in widely-used and deployed libraries I refer to as the Great Shellshocked Poodle With Heartbleed Security Awakening (GSPWHSA). It was a collective IT community eye opener as to:

The degree to which common third party components are embedded in many products and services
The degree to which vendors (and customers) did not know where-all these components actually were used, or what versions of them were used
And, to some degree (more on which below) the actual severity of these issues

A slight digression on how we got to a Shellshocked Poodle with Heartbleed. Way back in the olden days (when I started working at Oracle), the Internet hadn’t taken off yet, and there weren’t as many standard ways of doing things. The growth of the Internet led to the growth of standards (e.g., SSL, now superseded by TLS) so Stuff Would Work Together. The requirement for standards-based interoperability fostered the growth of common libraries (many of them open source), because everyone realized it was dumb to, say, build your own pipes when you could license someone else’s ready-made pipe libraries. Open source/third party libraries helped people build things faster that worked together, because everyone wasn’t building everything from scratch. None of these – standards, common libraries, open source – are bad things. They are (mostly) very good things that have fostered the innovation we now take for granted.

Historically, development organizations didn’t always keep careful track of where all the third party libraries were used, and didn’t necessarily upgrade them regularly. To some degree, the “not upgrade” was understandable – unless there is a compelling reason to move from Old Reliable to New and Improved (as in, they actually are improved and there is a benefit to using the new stuff), you might as well stick with Old and Reliable. Or so it seemed.

When security researchers began focusing on finding vulnerabilities in widely-used libraries, everyone got a rude awakening that their library of libraries (that is, listing of what components were used where) needed to be a whole lot better, because customers wanted to know very quickly the answer to “is the product or cloud service I am using vulnerable?” Moreover, many vendors and service providers realized that, like it or not, they needed to aggressively move to incorporate reasonably current (patched) versions of libraries because, if the third party component you embed is not supported for the life of the product or service you are embedding it in, you can’t get a security patch when you need one: in short, “you are screwed,” as we security experts say. I’ve remarked a lot recently, with some grumbling, that people don’t do themselves any favors by continuing to incorporate libraries older than the tablets of Moses (at least God is still supporting those).

Like all religious revivals, the GSPWHSA has thus resulted in a lot of people repenting of their sins: “Forgive me, release manager, for I have sinned, I have incorporated an out-of-support library in my code.” “Three Hail Marys and four version upgrades, my son…” Our code is collectively more holy now, we all hope, instead of continuing to be hole-y. (Yes, that was a vile pun.) This is a good thing.

The second aspect of the GSPWHSA is more disturbing, and that is, for lack of a better phrase, the “marketing of security vulnerabilities.” Anybody who knows anything about business knows how marketing can – and often intends to – amplify reality. Really, I am sure I can lose 20 pounds and find true love and happiness if I only use the right perfume: that’s why I bought the perfume! Just to get the disclaimer out of the way, no, this is not another instance of the Big Bad Vendor complaining about someone outing security vulnerabilities. What’s disturbing to me is the outright intent to create branding around security vulnerabilities and willful attempt to create a mass panic – dare we say “trending?” – around them regardless of the objective threat posed by the issue. A good example is the FREAK vulnerability (CVE-2015-0204). The fix for FREAK was distributed by OpenSSL on January 8th. It was largely ignored until early March when it was given the name FREAK.  Now, there are a lot of people FREAKing out about this relatively low risk vulnerability while largely ignoring unauthenticated, network, remote code execution vulnerabilities.

Here’s how it works. A researcher first finds a vulnerability in a widely-used library: the more widely-used, the better, since nobody cares about a vulnerability in Digital Buggy Whip version 1.0 that is, like, so two decades ago and hardly anybody uses. OpenSSL has been a popular target, because it is very widely used so you get researcher bragging rights and lots of free PR for finding another problem in it. Next, the researcher comes up with a catchy name. You get extra points for it being an acronym for the nature of the vulnerability, such as SUCKS – Security Undermining of Critical Key Systems. Then, you put up a website (more points for a cute animated creature dancing around and singing the SUCKS song). Add links so visitors can Order the T-shirt, Download the App, and Get a Free Bumper Sticker! Get a hash tag. Develop a Facebook page and ask your friends to Like your vulnerability. (I might be exaggerating, but not by much.) Now, sit back and wait for the uninformed public to regurgitate the headlines about “New Vulnerability SUCKS!” If you are a security researcher who dreamed up all the above, start planning your speaking engagements on how the world as we know it will end, because (wait for it), “Everything SUCKS.”

Now is where the astute reader is thinking, “but wait a minute, isn’t it really a good thing to publicize the need to fix a widely-embedded library that is vulnerable?” Generally speaking, yes. Unfortunately, most of the publicity around some of these security vulnerabilities is out of proportion to the actual criticality and exploitability of the issues, which leads to customer panic. Customer panic is a good thing – sorta – if the vulnerability is the equivalent of the RMS Titanic’s “vulnerability” as exploited by a malicious iceberg. It’s not a good thing if we are talking about a rowboat with a bad case of chipped paint. The panic leads to suboptimal resource allocation as code providers (vendors and open source communities) are – to a point – forced to respond to these issues based on the amount of press they are generating instead of how serious they really are. It also means there is other more valuable work that goes undone. (Wouldn’t most customers actually prefer that vendors fix security issues in severity order instead of based on “what’s trending?”). Lastly, it creates a shellshock effect with customers, who cannot effectively deal with a continuous string of exaggerated vulnerabilities that cause their management to apply patches as soon as possible or document that their environment is free of the bug.

The relevant metric around how fast you fix things should be objective threat. If something has a Common Vulnerability Scoring System (CVSS) Base Score of 10, then I am all for widely publicizing the issue (with, of course, the Common Vulnerabilities and Exposures (CVE) number, so people can read an actual description, rather than “run for your lives, Godzilla is stomping your code!”) If something is CVSS 2, I really don’t care that it has a cuter critter than Bambi as a mascot and generally customers shouldn’t, either. To summarize my concerns, the willful marketing of security vulnerabilities is worrisome for security professionals because:

It creates excessive focus on issues that are not necessarily truly critical
It creates grounds for confusion (as opposed to using CVEs)

It creates a significant support burden for organizations,* where resources would be better spent elsewhere

I would therefore, in the interests of constructive suggestions, recommend that customers assess the following criteria before calling all hands on deck over the next “branded” security vulnerability being marketed as the End of Life On Earth As We Know It:

1. Consider the source of the vulnerability information. There are some very good sites (arstechnica comes to mind) that have well-explained, readily understandable analyses of security issues. Obviously, the National Vulnerability Database (NVD) is also a great source of information.

2. Consider the actual severity of the bug (CVSS Base Score) and the exploitation scenario to determine “how bad is bad.”

3. Consider where the vulnerability exists, its implications, and whether mitigation controls exist in the environment: e.g., Heartbleed was CVSS 5.0, but the affected component (SSL), the nature of the information leakage (possible compromise of keys), and the lack of mitigation controls made it critical.

* e.g., businesses patching based on the level of hysteria rather than the level of threat

Organizations should look beyond cutesie vulnerability names so as to focus their attention where it matters most. Inquiring about the most recent medium-severity bugs will do less in terms of helping an organization secure its environment than, say, applying existing patches for higher-severity issues. Furthermore, it fosters a culture of “security by documentation” where organizations seek to collect information about a given bug from their cloud and software providers, while failing to apply existing patches in their environment. Nobody is perfect, but if you are going to worry, worry about vulnerabilities based on How Bad Is Bad, and not based on which ones have catchy acronyms, mascots or have generated a lot of press coverage.



Push Notifications and the Internet of Things

Matthias Wessendorf - Wed, 2015-03-25 15:26

Today at Facebook’s F8 conference they announced Parse for IoT. This is a cool, but not unexpected, move, especially since there is demand to have connected objects be part of (enterprise) cloud systems. We will see more of that happening soon, and our lessons learned on traditional mobile will be applied to IoT devices or “connected objects” in general.

AeroGear and IoT

In the AeroGear project we have done similar experiments, bringing functionality of our UnifiedPush Server to the IoT space. My colleague Sébastien Blanc did two short screencasts on his work in this area:

The above examples basically leverage our support for SimplePush, a WebSocket-based protocol used on Firefox OS for Push Notifications. Because Firefox OS uses such an open protocol, we are able to extend this mechanism of Push Notification delivery to other platforms, not just Firefox OS devices.

Bringing Push Notifications to the IoT sector is a logical move, to integrate connected objects and mobile cloud services!


R12.2 Documentation link in html format

Vikram Das - Mon, 2015-03-23 20:35
This link has the R12.2 documentation in HTML format:

https://docs.oracle.com/cd/E26401_01/index.htm 
Categories: APPS Blogs

The Four Ps of Standards/Procurement Requirements/”Whatevahs”

Mary Ann Davidson - Mon, 2015-03-23 19:43

I am a veteran – not merely a military veteran, but an information security veteran. I don’t get medals for the latter, but I do have battle scars. Many of the scars are relatively recent: a result of tearing my hair out from many, many, many mind-numbing reviews of publications, draft standards and other kinds of documents which are ostensibly meant to make security better, cybersecurity being “hot” and all. Alas, many of these documents have linguistic and operational difficulties that often make it highly unlikely that they will achieve their stated “better security” objectives.

After reviewing so many documents and running into common patterns, I decided to take a cue from my MBA days and categorize my concerns in a catchy way. Though not a marketing major, I vaguely recall the “four Ps” of marketing (product, price, place and promotion) and decided to adapt them to the world of standards/procurement requirements/whatevahs (which I will now refer to as SPW). They are:

Problem Statement
Precise Language and Scope
Pragmatic Solutions
Prescriptive Minimizations

I offer the "four Ps of SPW" for those who are attempting to improve cybersecurity by fiat, or in other ways intended to compel the market, in hopes that we may collectively get to better security without sinking into the swamp of despair, dallying in the desert of dashed hopes, trekking through the tundra of too-obscure requirements (nice use of alliteration, no?) … you get the point. While I think my advice is generally applicable in the SPW (say “spew”) realm, the context for my discussion is assurance slash supply chain risk mitigation since that’s what I seem to review most often.

Problem Statement

I cannot tell you how many SPW documents I have read in which Someone Was Attempting to Make Someone Else Do Something More Securely, only it wasn’t clear what, exactly, or more importantly, why (or even that the requirements would result in “better security”). Anything that seeks to impose Something Security-Oriented On Someone needs a clear problem statement. Without this, a proposed SPW becomes an expensive wish list with no associated benefits. Ultimately, the seller has no idea what the buyer really wants or needs. If a government agency cannot explain what they are really worried about, in language the “comply-ee” can understand, they shouldn’t be surprised if they get a chocolate-covered cockroach (eew) when they ask for something sweet, crunchy and locally sourced. (I’d add “sustainable,” as there seems to be no shortage of cockroaches.)

With regard to security, “supply chain” has become the mantra for attempting to regulate almost 100% of what businesses do. Poor quality, “backdoor boogiemen,” assurance, “supply chain shutdown” are all very (very!) different problems. Worse, the ambiguity around proposing a standard for “supply chain security” may encompass 100% of business operations. Example: my employer does not make their own paper clips or wood stirrers for coffee cups. Do we really need to worry about a shortage of either? No? Then don’t describe “supply chain requirements” that ask technology suppliers to track the wood sourced for our coffee stirrers. Buying a poor quality product, for example, is a business risk. It’s not, per se, a supply chain risk. Furthermore, while poor quality may lead to poor security, not all security problems are a result of quality issues. Some are a result of buyers not understanding that commercial off-the-shelf (COTS) software, while general purpose and often very good, is not “all purpose” and not designed for all threat environments.

The second aspect of a problem statement is the provision of use cases. A use case is a fancy way of saying, “for example.” Use cases are very important to help turn a problem statement into an “aha” moment for the reader. Moreover, use cases are important to limit scope and ensure that the SPW requirements are appropriate to serve the SPW’s stated objectives. Absent a use case, you never really know what’s being asked for (and where it applies and where it does not apply). Use cases absolutely need to be contained within a requirements document.

For example, consider the US National Institute of Standards and Technology (NIST) Special Publication 800-152 A Profile for U.S. Federal Cryptographic Key Management Systems Draft 3 (December 2014). This special pub describes a combination of technical standards and policies around cryptographic key management systems. The problem is, nowhere in reading the document is it evident what, exactly, this applies to. Is this just “special, super secret key management systems for classified US government systems?” Or, does it apply to key management for things like Transport Layer Security (TLS) (or other cryptographic protocols that are well-established standards)? Why it matters: because if there are not use cases that define applicability, someone will assume it applies to everything. And, applying these requirements may conflict with (if not break) other standards.

90% of life isn’t showing up, it’s solving the right problem. You can’t solve the right problem if you don’t know (or cannot articulate) what it is, with some “for instances.”

Precise Language and Scope

It is astonishing to me how many SPW documents do not define core terminology used therein. Without a precise set of definitions, nobody really knows what is meant, and if something is vague, it’s going to be misinterpreted. (Worse, an undefined term may end up meaning whatever a “certifier” or other compliance overlord thinks it means: nobody ever really knows if they are compliant if compliant depends on what the certifier thinks it means.) Core terminology must be precisely and narrowly defined within the document. As the famous line goes from Let’s Call The Whole Thing Off,

“You like potato and I like potahto
You like tomato and I like tomahto
Potato, potahto, tomato, tomahto
Let’s call the whole thing off.” (Lyrics by Ira Gershwin, melody by George Gershwin)

The problem is, if a SPW is enshrined and applied, you can’t call it off. At least until the next revision. Figure out what to call a spud and make it clear, please!

For example, in the context of software, what is a vulnerability? A configuration error (leading to a security weakness)? A defect in software (that leads to a security weakness)? Any defect in software (regardless of the impact)? What if the design was intentional? Is a policy violation a vulnerability? A vulnerability cannot, surely, be all the above! And in fact, it isn’t, but just saying “vulnerability” and conflating all the above means that nobody will be able to come up with a remedy that works for all cases. (Note: for configurable software, if you configure it so my grandmother can hack into it, it’s not a “vulnerability,” it’s “user error.” There is only so much you can do to prevent a user shooting self in the foot when we are talking about firearms that allow you to point them at your feet.) Another example, what is a “module?” The answer may be very different depending on whether you are a hardware person or a software person.

If ‘it’ is not clear, ‘it’ is going to be misinterpreted.

Pragmatic Solutions

One of my biggest concerns with a lot of SPW documents is that they almost never take into account the value of pragmatism over perfection. Perfection is not achievable (much less at an acceptable cost) while “better” usually is achievable. (Surely “better” that everyone can do is better than “perfect” that is unachievable?) To those who insist, “evil slug vendors are profit driven and always want to do the minimum,” my response is that economics rules the world and doesn’t necessarily argue for the minimum. Generally speaking, it’s more profitable to find security vulnerabilities and fix them earlier in a product release cycle than waiting until you ship six affected versions of product and now have to produce 120 patches for a single issue (or patch 120 cloud instances). Most vendors know this (or find out the hard way). Customers certainly know this and complain if they have to apply too many patches (or if their cloud service uptime is negatively impacted by a lot of patch-related downtime).

More to the point, unless you can print money, invent a time machine or perfect cloning, time, money and people are always constrained resources so using them well is a must. Doing more X means – often – doing less of Y, because you can’t add more resource you don’t have or can’t find. Worse, doing more of X required for compliance may mean doing less of the Y that actually improves security, since they are mutually exclusive as long as resources are constrained and regulations are written by (or interpreted by) the Knights Who Say Ni.

In particular, I see little evidence that people proposing SPW have done much or any economic analysis of the cost of compliance. I know the government knows how to do this kind of analysis because – for example – the US Department of Defense does resource planning that among other things looks at “how many conflicts are we prepared to fight simultaneously?” rather than, “in a perfect world with unlimited resources and cyborg soldiers, we could take on the Frabistatians, the Foobarians, and open a third front combating the Little Green Men from Mars.” How I wish that other entities – any other entity – would analyze (e.g., do a reality check on) what the impact of X is before it becomes part of a SPW.

Any SPW should include an economic analysis of impact – and look at options. Included in that analysis should be the bane of (quasi-)regulatory ambition, “unintended consequences.” There are almost always unintended consequences of SPW, even those created with good motives. One of the big ones is, if you make it too expensive for suppliers to deal with you, there will be fewer suppliers. And that means choice will decrease and cost will increase. Any SPW should explicitly ask the question, “What would matter the most, be broadly implementable and cost the least (or be the most cost effective for all parties)?”

To provide an example, the NIST Interagency Report 7622 Notional Supply Chain Risk Management Practices for Federal Information Systems (the draft requirement has, I believe, since been excised) at one time wanted the “supplier” (e.g., a vendor) to notify the acquirer (e.g., a government agency) of “all personnel changes involving maintenance.” I suspect that the intent was something to the effect that, if the acquirer (let’s say, DoD) outsources a service, and that service involves a fundamental change of venue – e.g., the maintenance for the US Department of Defense manpower system is outsourced to Hostile Foreign Country, DoD wants to be notified. However, that is not what the requirement stated. One interpretation would be that any time someone touched code who didn’t write the original code (“a personnel change involving maintenance”) that a vendor would have to notify the government. Ok, Oracle has almost 5000 products (and lots and lots of clouds), billions of lines of code, and every day there are a lot of code checkouts where someone is changing something he or she did not write. Are we supposed to tweet all that stuff? What is that going to do for the acquirer? “Kaitlyn checked out and changed code that, like, Ashley wrote, LOL, OMG!”

Figure out what you really want, and what it is worth to you to get it.

Prescriptive Minimization

With rare exceptions, non-technical* process or management standards should not tell industry how exactly to do something, if for no other reason than there is no such thing as “best practice.” There are certainly better or worse practices, but arguably no single practice that everyone does, exactly the same way, that will work equally well for everyone subject to the requirements, for any length of time. Worse, SPW diktats often stifle innovation, drive up costs (without commensurate benefit) and fall prey to the buggy whip effect (where you are specifying how to use buggy whips long after people have moved from horse-and-buggy to Model Ts - or better). Add to all these reasons the economic impact referenced above.

To provide one example, consider (draft) NIST Special Publication 800-160 Systems Security Engineering, containing a requirement that, in the event of a discovered security bug, the engineering team should conduct root cause analysis. This sounds like a Mom and Apple Pie requirement on the face of it, so what could possibly be wrong with that? A clear Best Practice, right? Well, no, not really, on grounds of pragmatism and context.

Consider a security bug that is not only high impact but for which there is an exploit circulating in the wild. For commercial software vendors, job 1 will be getting a patch into customers’ hands (or at least the hands of their customers’ system administrators) and/or patching their cloud instances, as the case may be. Protection of customers under these circumstances is initially way more important than determining causation.

Second, it doesn’t necessarily make sense to do a root cause analysis on every single security bug of every severity. What does make sense is to deep dive on the more severe bugs (e.g., high Common Vulnerability Scoring System (CVSS) Base Score bugs), because those are the ones you really want to ensure you fixed completely (and avoid in the future). You might want to ask the following as part of your analysis:

“How/when did this get into the code base?”
“What is the resulting vulnerability (how can it be exploited)?”
“Have we looked elsewhere for similar problems?”
“Have we added test cases to regression tests and other test suites (like static analysis tools) to ensure that we can automate finding other instances?”
“Have we fixed it everywhere (or everywhere that is relevant)?” and
“Have we attempted to enshrine/transfer knowledge of the severity and impact of this bug across the development organization (so everyone knows why it’s a big deal and how to avoid it in future)?”

Given scarce resources, I’d argue that root cause analysis on a CVSS 0 bug is not as important as thoroughly addressing – and in future avoiding – a CVSS 9.0 or 10.0 bug, along the lines of the above analysis. If a standard enshrines the former, it leads to suboptimal resource allocation (like spreading peanut butter over too many slices of bread). Worse, any company doing the “better” thing will get dinged as being non-standards compliant if there is a Best Practice enshrined in SPW that calls for root cause analysis of everything, regardless of severity. Perfection works against actual security improvement.

Another “best practice” I see shilled relentlessly is third party static analysis. I’ve opined on why that is not a best practice in previous blogs, but I have new reasons to avoid it like the plague it is, courtesy of a real-world example of its high cost and low utility. Recently, we were made aware that a customer of Oracle (without asking our permission, which we would not have given if asked) submitted our software to a third party that does static analysis on binaries. Where to start with how extremely bad this is? Numero uno: the customer violated their license agreement with Oracle, which alone made their actions completely unacceptable. Add to that, the report we were furnished included alleged vulnerabilities not merely in Oracle but in another product Not Made By Oracle. (Needless to say, we could neither analyze those issues nor fix them in the event they turned out to be actual vulnerabilities, and really, we did not want to see alleged vulnerabilities in Someone Else’s Code. That information is extremely sensitive and should not have been given to us.) Last but far from least was the fact that – drum roll – not one of the alleged security issues the third party reported was, in fact, an actual security vulnerability. 0% accuracy: zilch, zip, nada, bubkes, a’ohe mea. Further, one of our best security leads (I’d bill him out at at least $2,000 an hour) wasted his very valuable time determining that there was “no there, there.”

Running a tool (if and only if you have permission to do it) is nothing; the ability to analyze the results is everything. Third parties cannot do that since they have no actual code knowledge of what they are running the tool on, especially not on a code base as big as Oracle’s is. Third party static analysis is thus only a best practice if you want to waste time and money. But it’s the vendor’s time that is being wasted (maybe that third party should reimburse us the $2K an hour our kahuna spent analyzing their errata?), and the customer’s money. And last, but really first, violating licensing terms is unacceptable business conduct.

Summary

Nobody is perfect, but with all the attention being focused on cybersecurity, it would be really helpful if attempted problem solvers writing SPW could sharpen their – I was going to say, knives, but I am not sure I mean that! – focus. Yes, a sharpened focus is what is needed. Cybersecurity is an important area. Better security is achievable, but only if we know what we are worried about, we speak the same language, we can look at relative costs and benefits, and we allow for latitude in how we get to better. We can’t do everything, but everybody can do something. Let’s do some of the things that matter – and that won’t make us spend resources checking boxes instead of making sure nobody can break into the boxes.

* I note that one reason for technical standards is, of course, interoperability. In which case, people do need to implement, say, the Secure Whateverworks Protocol (SWP) a particular way, or it won’t work with another vendor’s implementation of SWP.

For More Information

Ruthlessly self-serving announcement follows: my sister and I, writing as Maddi Davidson, are pleased to announce that we have completed our third book in the Miss-Information Technology Mystery Series, With Murder You Get Sushi. (Also, our short story “Heartfelt” will appear in Mystery Times Ten this month, published by Buddhapuss Ink.)

Apropos of nothing having to do with security, I have discovered and become totally addicted to The Palliser Novels by Anthony Trollope. Like high class soap opera, only you get classics points for reading them. (Best of all, nobody in the book is named “Kardashian.”)

The Four Ps of Standards/Procurement Requirements/”Whatevahs”

Mary Ann Davidson - Mon, 2015-03-23 19:43



Normal
0





false
false
false

EN-US
X-NONE
X-NONE




























I am a veteran – not merely a military veteran, but an information security veteran. I don’t get medals for the latter, but I do have battle scars. Many of the scars are relatively recent: a result of tearing my hair out from many, many, many mind-numbing reviews of publications, draft standards and other kinds of documents which are ostensibly meant to make security better, cybersecurity being “hot” and all. Alas, many of these documents have linguistic and operational difficulties that often make it highly unlikely that they will achieve their stated “better security” objectives.


After reviewing so many documents and running into common patterns, I decided to take a cue from my MBA days and categorize my concerns in a catchy way. Though not a marketing major, I vaguely recall the “four Ps” of marketing (product, price, place and promotion) and decided to adapt them to the world of standards/procurement requirements/whatevahs (which I will now refer to as SPW). They are:


Pr    Problem Statement
Precise Language and Scope
Pragmatic Solutions
Prescriptive Minimizations


I t     I offer the "four Ps of SPW" for those who are attempting to improve cybersecurity by fiat, or in other ways intended to compel the market, in hopes that we may collectively get to better security without sinking into the swamp of despair, dallying in the desert of dashed hopes, trekking through the tundra of too-obscure requirements (nice use of alliteration, no?) … you get the point. While I think my advice is generally applicable in the SPW (say “spew”) realm, the context for my discussion is assurance slash supply chain risk mitigation since that’s what I seem to review most often.


Problem Statement


I cannot tell you how many SPW documents I have read in which Someone Was Attempting to Make Someone Else Do Something More Securely, only it wasn’t clear what, exactly, or more importantly, why (or even that the requirements would result in “better security”). Anything that seeks to impose Something Security-Oriented On Someone needs a clear problem statement. Without this, a proposed SPW becomes an expensive wish list with no associated benefits to it. Ultimately, the seller has no idea what the buyer really wants or needs. If a government agency cannot explain what they are really worried about, in language the “comply-ee” can understand, they shouldn’t be surprised if they get a chocolate-covered cockroach (eew) when they ask for something sweet, crunchy and locally sourced. (I’d add “sustainable,” as there seems to be no shortages of cockroaches.)


With regard to security, “supply chain” has become the mantra for attempting to regulate almost 100% of what businesses do. Poor quality, “backdoor boogiemen,” assurance, “supply chain shutdown” are all very (very!) different problems. Worse, the ambiguity around proposing a standard for “supply chain security” may encompass 100% of business operations. Example: my employer does not make their own paper clips or wood stirrers for coffee cups. Do we really need to worry about a shortage of either? No? Then don’t describe “supply chain requirements” that ask technology suppliers to track the wood sourced for our coffee stirrers. Buying a poor quality product, for example, is a business risk. It’s not, per se, a supply chain risk. Furthermore, while poor quality may lead to poor security, not all security problems are a result of quality issues. Some are a result of buyers not understanding that commercial off-the-shelf (COTS) software, while general purpose and often very good, is not “all purpose” and not designed for all threat environments.


The second aspect of a problem statement is the provision of use cases. A use case is a fancy way of saying, “for example.” Use cases are very important to help turn a problem statement into an “aha” moment for the reader. Moreover, use cases are important to limit scope and ensure that the SPW requirements are appropriate to their stated objectives. Absent a use case, you never really know what’s being asked for (and where it applies and where it does not apply). Use cases absolutely need to be contained within a requirements document.


For example, consider the US National Institute of Standards and Technology (NIST) Special Publication 800-152 A Profile for U.S. Federal Cryptographic Key Management Systems Draft 3 (December 2014). This special pub describes a combination of technical standards and policies around cryptographic key management systems. The problem is, nowhere in the document is it evident what, exactly, it applies to. Is this just “special, super secret key management systems for classified US government systems”? Or does it apply to key management for things like Transport Layer Security (TLS) (or other cryptographic protocols that are well-established standards)? Why it matters: because if there are no use cases that define applicability, someone will assume it applies to everything. And applying these requirements may conflict with (if not break) other standards.


90% of life isn’t showing up; it’s solving the right problem. You can’t solve the right problem if you don’t know (or cannot articulate) what it is, with some “for instances.”


Precise Language and Scope


It is astonishing to me how many SPW documents do not define core terminology used therein. Without a precise set of definitions, nobody really knows what is meant, and if something is vague, it’s going to be misinterpreted. (Worse, an undefined term may end up meaning whatever a “certifier” or other compliance overlord thinks it means: nobody ever really knows if they are compliant if compliance depends on what the certifier thinks it means.) Core terminology must be precisely and narrowly defined within the document. As the famous line goes from Let’s Call The Whole Thing Off,


“You like potato and I like potahto
You like tomato and I like tomahto
Potato, potahto, tomato, tomahto
Let’s call the whole thing off.” (Lyrics by Ira Gershwin, melody by George Gershwin)


The problem is, if a SPW is enshrined and applied, you can’t call it off. At least until the next revision. Figure out what to call a spud and make it clear, please!


For example, in the context of software, what is a vulnerability? A configuration error (leading to a security weakness)? A defect in software (that leads to a security weakness)? Any defect in software (regardless of the impact)? What if the design was intentional? Is a policy violation a vulnerability? A vulnerability cannot, surely, be all the above! And in fact, it isn’t, but just saying “vulnerability” and conflating all the above means that nobody will be able to come up with a remedy that works for all cases. (Note: for configurable software, if you configure it so my grandmother can hack into it, it’s not a “vulnerability,” it’s “user error.” There is only so much you can do to prevent a user from shooting himself in the foot when we are talking about firearms that allow you to point them at your feet.) Another example: what is a “module”? The answer may be very different depending on whether you are a hardware person or a software person.


If ‘it’ is not clear, ‘it’ is going to be misinterpreted.


Pragmatic Solutions


One of my biggest concerns with a lot of SPW documents is that they almost never take into account the value of pragmatism over perfection. Perfection is not achievable (much less at an acceptable cost) while “better” usually is achievable. (Surely “better” that everyone can do is better than “perfect” that is unachievable?) To those who insist, “evil slug vendors are profit driven and always want to do the minimum,” my response is that economics rules the world and doesn’t necessarily argue for the minimum. Generally speaking, it’s more profitable to find security vulnerabilities and fix them earlier in a product release cycle than waiting until you ship six affected versions of product and now have to produce 120 patches for a single issue (or patch 120 cloud instances). Most vendors know this (or find out the hard way). Customers certainly know this and complain if they have to apply too many patches (or if their cloud service uptime is negatively impacted by a lot of patch-related downtime).


More to the point, unless you can print money, invent a time machine or perfect cloning, time, money and people are always constrained resources so using them well is a must. Doing more X means – often – doing less of Y, because you can’t add resources you don’t have or can’t find. Worse, doing more of X required for compliance may mean doing less of the Y that actually improves security, since they are mutually exclusive as long as resources are constrained and regulations are written by (or interpreted by) the Knights Who Say Ni.


In particular, I see little evidence that people proposing SPW have done much or any economic analysis of the cost of compliance. I know the government knows how to do this kind of analysis because – for example – the US Department of Defense does resource planning that among other things looks at “how many conflicts are we prepared to fight simultaneously?” rather than, “in a perfect world with unlimited resources and cyborg soldiers, we could take on the Frabistatians and the Foobarians, and open a third front combating the Little Green Men from Mars.” How I wish that other entities – any other entity – would analyze (that is, do a reality check on) what the impact of X is before it becomes part of a SPW.


Any SPW should include an economic analysis of impact – and look at options. Included in that analysis should be the bane of (quasi-)regulatory ambition, “unintended consequences.” There are almost always unintended consequences of SPW, even those created with good motives. One of the big ones is, if you make it too expensive for suppliers to deal with you, there will be fewer suppliers. And that means choice will decrease and cost will increase. Any SPW should explicitly ask the question, “What would matter the most, be broadly implementable and cost the least (or be the most cost effective for all parties)?”


To provide an example, the NIST Interagency Report 7622 Notional Supply Chain Risk Management Practices for Federal Information Systems (the draft requirement has, I believe, since been excised) at one time wanted the “supplier” (e.g., a vendor) to notify the acquirer (e.g., a government agency) of “all personnel changes involving maintenance.” I suspect that the intent was something to the effect that, if the acquirer (let’s say, DoD) outsources a service, and that service involves a fundamental change of venue – e.g., the maintenance for the US Department of Defense manpower system is outsourced to Hostile Foreign Country, DoD wants to be notified. However, that is not what the requirement stated. One interpretation would be that any time someone touched code who didn’t write the original code (“a personnel change involving maintenance”) that a vendor would have to notify the government. Ok, Oracle has almost 5000 products (and lots and lots of clouds), billions of lines of code, and every day there are a lot of code checkouts where someone is changing something he or she did not write. Are we supposed to tweet all that stuff? What is that going to do for the acquirer? “Kaitlyn checked out and changed code that, like, Ashley wrote, LOL, OMG!”


Figure out what you really want, and what it is worth to you to get it.


Prescriptive Minimization


With rare exceptions, non-technical* process or management standards should not tell industry how exactly to do something, if for no other reason than there is no such thing as “best practice.” There are certainly better or worse practices, but arguably no single practice that everyone does, exactly the same way, that will work equally well for everyone subject to the requirements, for any length of time. Worse, SPW diktats often stifle innovation, drive up costs (without commensurate benefit) and fall prey to the buggy whip effect (where you are specifying how to use buggy whips long after people have moved from horse-and-buggy to Model Ts - or better). Add to all these reasons the economic impact referenced above.


To provide one example, consider (draft) NIST Special Publication 800-160 Systems Security Engineering, containing a requirement that, in the event of a discovered security bug, the engineering team should conduct root cause analysis. This sounds like a Mom and Apple Pie requirement on the face of it, so what could possibly be wrong with that? A clear Best Practice, right? Well, no, not really, on grounds of pragmatism and context.


Consider a security bug that is not only high impact but for which there is an exploit circulating in the wild. For commercial software vendors, job 1 will be getting a patch into customers’ hands (or at least the hands of their customers’ system administrators) and/or patching their cloud instances, as the case may be. Protection of customers under these circumstances is initially way more important than determining causation.


Second, it doesn’t necessarily make sense to do a root cause analysis on every single security bug of every severity. What does make sense is to deep dive on the more severe bugs (e.g., high Common Vulnerability Scoring System (CVSS) Base Score bugs), because those are the ones you really want to ensure you fixed completely (and avoid in the future). You might want to ask the following as part of your analysis:


“How/when did this get into the code base?”
“What is the resulting vulnerability (how can it be exploited)?”
“Have we looked elsewhere for similar problems?”
“Have we added test cases to regression tests and other test suites (like static analysis tools) to ensure that we can automate finding other instances?”
“Have we fixed it everywhere (or everywhere that is relevant)?” and
“Have we attempted to enshrine/transfer knowledge of the severity and impact of this bug across the development organization (so everyone knows why it’s a big deal and how to avoid it in future)?”


Given scarce resources, I’d argue that root cause analysis on a CVSS 0 bug is not as important as thoroughly addressing – and in future avoiding – a CVSS 9.0 or 10.0 bug, along the lines of the above analysis. If a standard enshrines the former, it leads to suboptimal resource allocation (like spreading peanut butter over too many slices of bread). Worse, any company doing the “better” thing will get dinged as being non-standards compliant if there is a Best Practice enshrined in SPW that calls for root cause analysis of everything, regardless of severity. Perfection works against actual security improvement.


Another “best practice” I see shilled relentlessly is third party static analysis. I’ve opined on why that is not a best practice in previous blogs, but I now have a fresh, real-world example of its high cost and low utility – and a new reason to avoid it like the plague it is. Recently, we were made aware that a customer of Oracle (without asking our permission, which we would not have given if asked) submitted our software to a third party that does static analysis on binaries. Where to start with how extremely bad this is? Numero uno: the customer violated their license agreement with Oracle, which alone made their actions completely unacceptable. Add to that, the report we were furnished included alleged vulnerabilities not merely in Oracle software but in another product Not Made By Oracle. (Needless to say, we could neither analyze those issues nor fix them in the event they turned out to be actual vulnerabilities, and really, we did not want to see alleged vulnerabilities in Someone Else’s Code. That information is extremely sensitive and should not have been given to us.) Last but far from least was the fact that – drum roll – not one of the alleged security issues the third party reported was, in fact, an actual security vulnerability. 0% accuracy: zilch, zip, nada, bubkes, a’ohe mea. Further, one of our best security leads (I’d bill him out at $2,000 an hour, at least) wasted his very valuable time determining that there was “no there, there.”


Running a tool (if and only if you have permission to do it) is nothing; the ability to analyze the results is everything. Third parties cannot do that since they have no actual code knowledge of what they are running the tool on, especially not on a code base as big as Oracle’s is. Third party static analysis is thus only a best practice if you want to waste time and money. But it’s the vendor’s time that is being wasted (maybe that third party should reimburse us the $2K an hour our kahuna spent analyzing their errata?), and the customer’s money. And last, but really first, violating licensing terms is unacceptable business conduct.


Summary


Nobody is perfect, but with all the attention being focused on cybersecurity, it would be really helpful if attempted problem solvers writing SPW could sharpen their – I was going to say, knives, but I am not sure I mean that! – focus. Yes, a sharpened focus is what is needed. Cybersecurity is an important area. Better security is achievable, but only if we know what we are worried about, we speak the same language, we can look at relative costs and benefits, and we allow for latitude in how we get to better. We can’t do everything, but everybody can do something. Let’s do some of the things that matter – and that won’t make us spend resources checking boxes instead of making sure nobody can break into the boxes.


* I note that one reason for technical standards is, of course, interoperability. In which case, people do need to implement, say, the Secure Whateverworks Protocol (SWP) a particular way, or it won’t work with another vendor’s implementation of SWP.


For More Information


Ruthlessly self-serving announcement follows: my sister and I, writing as Maddi Davidson, are pleased to announce that we have completed our third book in the Miss-Information Technology Mystery Series, With Murder You Get Sushi. (Also, our short story “Heartfelt” will appear in Mystery Times Ten this month, published by Buddhapuss Ink.)


Apropos of nothing having to do with security, I have discovered and become totally addicted to The Palliser Novels by Anthony Trollope. Like high class soap opera, only you get classics points for reading them. (Best of all, nobody in the book is named “Kardashian.”)



Manually applying Global Payroll Rules Packages downloaded from an Update Image

Javier Delgado - Mon, 2015-03-23 09:47
Last week we faced an issue while applying a Tax Update at one of our PeopleSoft HCM 9.2 customers. The Tax Update was delivered as a PeopleSoft Release Patchset, which first needs to be applied to the Update Image before creating the Change Package using PeopleSoft Update Manager.

Unfortunately, during the process, one rules package delivered in the Tax Update was not included within the Change Assistant steps, and therefore it was missed. Some days later, we reported the error to Oracle and they pointed us to the original zip file containing the PeopleSoft Release Patchset, which indeed contained the missing package.

We did not want to repeat the entire Change Package definition process, as it would have required restoring a couple of backups. Instead, we decided to manually apply the rules package.

Not so fast... Unfortunately, within Update Manager the Rules Packages are not delivered in the usual format used to import, compare and copy them. Instead, specific steps are followed when Update Manager is used.

In the end, we managed to find a way to manually import the package, which is documented below.

Importing the Rules Package
The rules packages in Update Manager are delivered using the Data Migration Workbench. The process to import them starts by defining the directories from which the Data Migration projects should be picked:

PeopleTools > Lifecycle Tools > Migrate Data > Manage File Locations

The path should point to the PTADSAEPRCS directory within the patch (whose zip file needs to be extracted first). Once the path is defined, the Data Migration project can be copied using the Data Migration Workbench:

PeopleTools > Lifecycle Tools > Migrate Data > Data Migration Workbench

The project should now be uploaded using the Load Project From File link. A list of the projects found in the previously defined path will be shown:



Once the project is selected and the Load button pressed, the Project Definition page within the Workbench will be shown:


Applying the Data Migration project is quite simple. First, the project needs to be compared using the Compare button; once the comparison has finished, the project has to be submitted for copy (using the Submit for Copy button).

Note: Data Migration project submissions may need approval. In that case, make sure the request is approved, so the project is actually copied.

The best way to validate whether the project has been correctly copied or not is to check the contents of the PS_GP_PKG_ADS_DFN table, which should now contain the imported rules package.
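A quick way to do this from SQL Developer is a simple query (a minimal sketch; connect with a user that can see the PeopleSoft schema):

select * from PS_GP_PKG_ADS_DFN;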

Rules Package Merge
Once the rules package has been imported, it needs to be merged. The merge process takes all the imported rules packages and builds a single rules package to simplify its application. Unfortunately, the process is not available from the user interface, but it can still be run using the command line:

<PS_HOME>\psae.exe -CT <database type> -CD <database name> -CO <PeopleSoft user> -CP <PS user password> -R ESP -AI GP_PKG_ADSMR -I 0 -OT 2 -OF 13 -OP <output directory> -CI <connect id> -CW <connect password>
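For illustration, with made-up values (the database name, user, passwords, connect ID and output directory below are placeholders to replace with your own, and <PS_HOME> is wherever your PeopleTools client is installed), the command could look like this:

<PS_HOME>\psae.exe -CT ORACLE -CD HR92U011 -CO PS -CP PSPWD -R ESP -AI GP_PKG_ADSMR -I 0 -OT 2 -OF 13 -OP c:\temp\gp_merge -CI people -CW peop1e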

Once the rules package is merged, a regular Rules Package will be accessible within the Global Payroll Packages functionality. From there on, the package can be applied using the usual steps.

HP Mini T210 3F0 boot failure and System Rollback Data disk space issues with Roxio BackOnTrack / aswrvt.sys

Gareth Roberts - Sun, 2015-03-22 05:10

If you have an HP Mini 210 or 100 and have had issues with the disk running out of space or failures to boot, I feel for you. After a heap of hassle, I sorted out my problems without having to reinstall Windows, and so I'm sharing my experiences as I found many, many people with the same issues, but no central resolution. The issues I've had are as follows:

  • Failure to boot: 3F0 Harddisk does not exist
  • Running out of space (see Roxio BackOnTrack)
  • Black screen after attempting to uninstall Roxio BackOnTrack
3F0 Harddisk does not exist

If you get a failure to start up, with a screen saying 3F0 Harddisk does not exist:

  1. To access the BIOS, turn on the computer and immediately press the esc key to display the Startup Menu, and then press the F10 key.
  2. Press F9 to reset the BIOS defaults, and press Enter to confirm the action.
  3. Press F10 to save the change and exit the BIOS, and then press Enter to confirm.

Hopefully now the computer will restart.

Low disk space - "System Rollback Data" hidden secret directory

This netbook came preinstalled with Roxio BackOnTrack, which is the primary source of my woes. Recently I found I was running out of disk space; after deleting a heap of data and then finding I was running out again, I installed WinDirStat to check what was going on. On inspection I found a super secret hidden folder, System Rollback Data, which is not even visible with "show system files" switched on. Note this is not to be confused with Microsoft Windows System Restore. This folder contained 160GB out of my 250GB SSD!!

So after some reading I found many pointers to Roxio BackOnTrack being the culprit behind the System Rollback Data directory size. Note that it seems Roxio BackOnTrack and this super secret directory may have a hook into the boot loader. So beware!

Anyway, without further ado here's what I hit and how to fix it.

  • Uninstall Roxio BackOnTrack from the Programs & Features or similar. Note this may take a long time if the backup store is large. I didn't wait, and end task'ed the uninstall process. That may have been a bad idea.
  • After killing the uninstall and rebooting I got the dreaded 3F0 hard disk does not exist, fix as per above.
  • After fixing the 3F0 issue, Windows would not boot. I tried safe mode, command prompt and last known good configuration; all that happened was a black screen after the initial Windows splash. With boot logging I found the Windows load stopped at aswrvt.sys, with no further progress. After a lot of searching I found a post that suggested deleting the syscow32.sys file in the WINDOWS\system32\drivers folder. The next issue was getting a utility running so I could get access to the C:\ drive.
  • As the HP Mini 210 doesn't have a CD/DVD drive I needed a bootable USB utility. After trying a Windows restore disk, which didn't boot, I decided to use the EaseUS Todo Backup Emergency Disk, so I did the following:
    • Installed EaseUS Todo Backup Free to another computer and ran it.
    • Tools > Create Emergency disk, then created USB bootable drive
    • Put that in the HP Mini and changed boot options to move USB to top of the list
    • When EaseUS started up I went to Tools > Windows Command Prompt and ran commands along the lines of the following; I found I only had syscow32x.sys, so I backed it up and deleted it:
      c:
      cd windows\system32\drivers
      copy syscow32x.sys c:\temp
      del syscow32x.sys
      exit
    • Closed EaseUS Todo Backup
  • Rebooted - filesystem check kicked in, and Windows booted successfully !

  • Get rid of Roxio BackOnTrack - run uninstallapp.exe from C:\Program Files\Roxio ... takes a long time (tens of minutes depending on data size), watch the disk space free up as it runs :-)

  • Remove Roxio BackOnTrack via Control Panel - Programs and Features

After all that, my machine is running sweetly again. Hopefully this post saves someone from throwing a perfectly fine HP Mini 100 / 210 out the window !!

Catch ya!
Gareth

This is a post from Gareth's blog at http://garethroberts.blogspot.com

What’s this ‘WHERE 1=1’?

Bar Solutions - Sun, 2015-03-22 02:30

For some time now I have been adding WHERE 1=1 to all my queries.
I get queries like this:

SELECT *
  FROM emp e
 WHERE 1=1
   AND e.ename LIKE 'A%'
   AND e.deptno = 20

Lots of people ask me what’s the use of this WHERE 1=1.

You know I like to type as little as possible but here I am typing a lot of extra characters. And yet, it makes my development life a lot easier.

If my query has a lot of predicates and I want to see what happens then I usually comment those predicates out by using -- (two dashes). I use my own CommentLine plug-in for this. This is easy for the second and higher predicates. But if I want to comment out the first predicate, then it gets a bit harder. Well, not harder, but more work.

If I didn’t use the WHERE 1=1 and I wanted to comment out the ename predicate then I would have to do something like this:

SELECT *
  FROM emp e
 WHERE /*e.ename LIKE 'A%'
   AND */e.deptno = 20

I agree, it’s not hard to do, but I think it’s a lot more work than just adding -- (two dashes) in front of a line:

SELECT *
  FROM emp e
 WHERE 1 = 1
--   AND e.ename LIKE 'A%'
   AND e.deptno = 20

And, as I don’t like typing, or at least want to make it as easy for myself as possible, I am using another one of my plug-ins, Template, where I defined a template w1 which results in

WHERE 1 = 1
   AND 


making it easy for me to write the queries.

I think adding this extra predicate has no (or hardly any) influence on the execution time of the query. I think the optimizer ignores this predicate completely.
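If you want to check this for yourself, you can look at the execution plan (just a quick sketch; the exact output depends on your Oracle version and statistics, but in the plans I have seen the 1=1 predicate simply doesn't show up in the Predicate Information section):

EXPLAIN PLAN FOR
SELECT *
  FROM emp e
 WHERE 1 = 1
   AND e.deptno = 20;

SELECT * FROM TABLE(dbms_xplan.display);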

I hope this explains a bit why I write my queries like this.

Installing Languages on PeopleSoft Update Images

Javier Delgado - Sat, 2015-03-21 03:52
One of the great things about PeopleSoft Update Manager images is that they can be used as a demo environment to try the latest and greatest features of the PeopleSoft application. All you need to do is download the image and install it, and you can already play with the application.

However, the initial install of the Update Image will only have the English language enabled. If you are using PeopleSoft Update Manager, once you upload the target environment and define the change package, the application will automatically install the languages you have in place in your own environment. However, if you just want to install the Update Image and you do not have a target environment to upload, this approach is not feasible.

Below I describe the steps to follow in order to install additional languages into an Update Image without having a target environment.


1.- Launch the Update Image.

2.- Install the client database connectivity tools by running the installer shipped in the oracle-12c-client-64bit shared folder.

3.- Connect to the database using SQL Developer and run the following command:

insert into PS_PTIASPTARGETLNG values ('ESPDEMO', '<language>');
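For instance, to install Spanish the statement would look like this (ESP is the standard PeopleSoft language code for Spanish; use the code of whatever language you need, and remember to commit):

insert into PS_PTIASPTARGETLNG values ('ESPDEMO', 'ESP');
commit;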

4.- Install the PeopleTools client by running the installer located in the client-854 shared folder.

5.- Connect to the Update Image environment using Application Designer and open the PTIASPLANG_VW.PTIASPLANGCD.FieldFormula PeopleCode. Once in there, add the lines marked with the BNB comments below.

(...)
Function AssembleDMoverCommand() Returns array of string
   Local array of string &arrRet = CreateArrayRept("", 0);
   
   Local string &srvName;
   Local string &userID;
   Local string &userPwd;
   Local string &dmsLogPath;
   Local string &dmsPath;
   Local string &psdmtxPath;
   Local string &mlDatPath;
   
   Local string &paramName;
   Local string &paramValue;
   REM prepare data mover parameters from database;
   Local SQL &sqlParam = GetSQL(SQL.PTIASPDMPARAM);
   While &sqlParam.Fetch(&paramName, &paramValue)
      Evaluate &paramName
      When "USERID"
         &userID = &paramValue;
         Break;
      When "USERPWD"
         &userPwd = &paramValue;
         Break;
      When "DMSPATH"
         &dmsPath = &paramValue;
         Break;
      When "PSDMTXPATH"
         &psdmtxPath = &paramValue;
         Break;
      When "SVRNAME"
         &srvName = &paramValue;
         Break;
      When "DMLOGPATH"
         &dmsLogPath = &paramValue;
         Break;
      When "MLDATPATH"
         &mlDatPath = &paramValue;
         Break;
      When-Other
         Break;
      End-Evaluate;
   End-While;
   &sqlParam.Close();
   
   Local string &cmdString;
   &cmdString = &psdmtxPath | " -CT " | %DbType;
   If All(&srvName) Then
      &cmdString = &cmdString | " -CS " | &srvName;
   End-If;
   /* BNB - J.Delgado - 08 Mar 2015 - BEGIN */
   Local string &tmp;
   &tmp = &userPwd;
   /* BNB - J.Delgado - 08 Mar 2015 - END */
   If All(&userPwd) Then
      &userPwd = Decrypt("mldmpswd", &userPwd);
   End-If;
   /* BNB - J.Delgado - 08 Mar 2015 - BEGIN */
   If None(&userPwd) Then
      &userPwd = &tmp;
   End-If;
   /* BNB - J.Delgado - 08 Mar 2015 - END */
   &cmdString = &cmdString | " -CD " | %DbName | " -CO " | &userID | " -CP " | &userPwd | " -FP " | &dmsPath;
   
   REM DMover executable file location, whole DMover command, DMS Log File, Dat file Path, DMS file Path are pushed into the array for following process;
   &arrRet.Push(&psdmtxPath, &cmdString, &dmsLogPath, &mlDatPath, &dmsPath);
   Return &arrRet;
End-Function;
(...)

Note: This change removes the requirement of a previous target environment upload. 

6.- Connect to the PeopleSoft application using PIA. Associate the PTIASPMLLOAD Application Engine process to the AE_REQUEST component.



7.- Run the PTIASPMLLOAD process using the Request AE page. The language to be installed should be associated with the PTIASPMLLDAET.PTIASPPROPVAL field.


Once the process is run, make sure you reboot the web server and application server in order to use the Update Image in the newly installed language.


Might need to use Tunneling with Discoverer 11g

Michael Armstrong-Smith - Thu, 2015-03-19 14:40
I have noticed a few instances recently of Discoverer 11g Plus failing to open or taking an awfully long time to open. In both of the cases reported to me by my clients, changing the Plus communication protocol from Default to Tunneling did the trick.

To enable tunneling for use with Discoverer Plus, use this workflow:

  1. Launch Enterprise Manager using something like: http://server.domain.com:7002/em
  2. Enter your Username and Password. Username is typically Weblogic
  3. Under Farm, on left-hand side, expand Discoverer and Discoverer(11.1.1.x.0)
  4. In the Components window, highlight Discoverer Plus then click the Configure button
  5. In the Communication Protocols window, click the Tunneling radio button (see below)
  6. Click the Apply button
  7. Shut Down the Discoverer service from the top link by clicking on Discoverer | Control | Shut Down - confirm the action
  8. Restart the Discoverer service from the top link by clicking on Discoverer | Control | Start Up - confirm the action (sometimes you have to do this twice)


 

The Secret Feature of Bucketsets in Oracle Business Rules

Jan Kettenis - Thu, 2015-03-19 13:42
In Oracle Business Rules one can use so-called "Bucketsets". I never liked the term as it is not in the dictionary (did you mean bucketseat?), and never understood what is wrong with "list of values" (LoV) as that is what it is.

See for example the following bucketset that defines a list of values to be used for some status field:


Anyway, bucketsets are typically used in decision tables to define the set of values to be used in conditions. Unfortunately you cannot use them to define the set of values to be used in actions. At least, that is what the UI seems to suggest, as bucketsets do not appear in the values you can choose from.


But don't be fooled: you are being misled, as it works after all. Just type it in as [bucketset_name].[value_name] and be surprised that it works!
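For example, with a bucketset named Status containing a value Rejected (hypothetical names, just to show the pattern), you would type Status.Rejected as the value in the action column of the decision table.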


Funnily enough, this is not the case for IF-THEN rules.


 Let's call it room for improvement.

BEA-090898 during PlugIn activation in clusters

Frank van Bortel - Wed, 2015-03-18 09:42
Be Secure
I did not mention it in my not so "OAM-in-a-day" entry, but when you run a clustered environment, make sure to set the "Secure" flag on the AdminServer and Managed Server configuration screens. It does have more impact than setting the "Use JSSE" flag on the SSL/Advanced section of the Weblogic console, but if you failed to do so, that's one place to correct it. Why? No particular
Frank


Subscribe to Oracle FAQ aggregator