Feed aggregator

Using Oracle SQL Developer with MS SQL

Robert Vollman - Tue, 2010-09-07 11:17
Having chosen Oracle SQL Developer as your preferred Oracle database tool, do you have to install and learn a new technology for supporting your MS SQL databases? Nope! It's easy to connect SQL Developer to MS SQL databases, and I'll show you how.

Background: For years I worked in technical support for software vendors, and I never knew what client tool would be available when I accessed a ...
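For a quick sanity check outside any GUI, here is a minimal Java sketch of what such a connection boils down to, assuming the open-source jTDS JDBC driver (the driver SQL Developer's third-party connection support is commonly paired with); the host, port, database, and credentials below are placeholders, not values from the post:

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;

  public class JtdsSmokeTest {
      public static void main(String[] args) throws Exception {
          // Register the jTDS driver; URL format is
          // jdbc:jtds:sqlserver://<host>:<port>/<database>
          Class.forName("net.sourceforge.jtds.jdbc.Driver");
          // Placeholder host, port, database, and credentials.
          Connection conn = DriverManager.getConnection(
              "jdbc:jtds:sqlserver://mssql-host:1433/Northwind", "sa", "secret");
          Statement stmt = conn.createStatement();
          ResultSet rs = stmt.executeQuery("SELECT @@VERSION");
          while (rs.next()) {
              System.out.println(rs.getString(1));
          }
          rs.close();
          stmt.close();
          conn.close();
      }
  }

Inside SQL Developer itself, the same driver jar is registered once under Tools > Preferences > Database > Third Party JDBC Drivers, after which SQL Server appears as a connection type in the new-connection dialog.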

Sangam’10: All-India Oracle Users Group (AIOUG) Annual Conference

Virag Sharma - Sun, 2010-09-05 11:23



The All India Oracle Users Group (AIOUG) presented Sangam 2010, a two-day (3-4 September 2010) Oracle users conference held in Hyderabad. Sangam 2010 hosted multiple sessions by international and Indian experts such as Jonathan Lewis, Rittman, and Iggy Fernandez.




[Photo: Jonathan Lewis and Virag Sharma]


The event was very good and a great opportunity to learn from experts. Sangam 2010 started with Murali's welcome note, followed by a presentation by Roland Slee, Vice-President, Database Product Management, Oracle Asia Pacific & Japan.


The first technical session was from Jonathan Lewis on "Writing Optimal SQL". This one-day presentation was divided into two half-day sessions. The conference room was packed for his sessions; Jonathan Lewis's presentation was the main attraction of Sangam 2010. After lunch, there were two conference rooms running different sessions, and the audience could choose sessions as per their interest.

Sponsors "OSI Consulting" presentation on "Cross Platform migration challenges and time reduction techniques" was pretty good then expected. After "OSI Consulting" presentation, RAC SIG meeting held. In this meeting Satyendra Kumar , explained things about RAC SIG. In the end of "RAC SIG" meeting Satyendra looking leader for various city , for Delhi/NCR, I proposed Aman Sharma’s name and latter learned that Aman recently become Oracle ACE.

The session "Tips and Best Practices for DBAs" from Francisco was in so much demand that Conference Room 2 overflowed, and the session was finally held in Conference Room 1.

Rittman and Vivek each took a session at Sangam 2010, but unfortunately I was not able to attend either of them. At the end of the Sangam 2010 Oracle Users Conference, Jonathan Lewis took a one-hour question-and-answer session, which was pretty good.


Categories: DBA Blogs

Oracle Labs

Habib Gohar - Sat, 2010-09-04 01:42
Excellent Oracle blog: Oracle Labs – worth reading for Oracle experiences. http://www.oraclelabs.com Regards

What I expect from Oracle Open World 2010

Brent Martin - Thu, 2010-09-02 07:16

I’m going to Oracle OpenWorld again this year, and I just finished building my schedule.  Wow.  This year there are so many sessions I want to attend in the same time slots that I'll only be able to see a fraction of what I want to.  Guess I’ll have to skip the session on how to author Blu-ray discs using Java in favor of a product roadmap session I need to attend.  You see, I’ve started a new project and I have a whole laundry list of stuff I need to come up to speed on.  I’m sure it’ll be an informative but exhausting week – it always is.


Gazing into my crystal ball, I’m expecting to hear more about the Fusion applications that were introduced at the end of the 2009 OOW.  I think Oracle isn’t re-inventing all of the functionality in their mature ERP/CRM product lines like PeopleSoft, JDE, EBS, and Siebel.  But all the same I’m expecting to see some products that are ready for launch and looking snazzy with the deep integration with BI and other apps that Oracle has invested so heavily in.


Speaking of BI, I’m looking forward to seeing the new release of OBIEE.  BI apps just look cool, and their functionality makes things like PeopleSoft Matching seem boring in comparison.  I'm hoping to see support for a ton of data sources and the ability to publish interactive reports to latest-generation mobile devices.  Unfortunately I think I missed the BI boat at some point in my career, so bring on the 3-way match!


 


The Root of The Problem

Mary Ann Davidson - Thu, 2010-09-02 02:07

Summer in Idaho is treasured all the more since it is all too brief. We had a long, cold spring - my lilacs were two months behind those of friends and family on the east coast - and some flowers that normally do well here never did poke their colorful heads out of the ground.

My personal gardening forays have been mixed: some things I planted from seeds never came up, and others only just bloomed in August, much to my delight. I am trying to create order from chaos - more specifically, I want a lovely oasis of flowers in a rock garden I have admittedly neglected for several years. Nature abhors a vacuum and thus, she made a successful flanking maneuver to colonize flowerbeds with sagebrush and grasses. I am way beyond "yanking and weed killer" and have traded in my trowel for heavier equipment. You need a shovel and a strong back to pull up a sagebrush and as for the grass, I've had to remove the top three inches of soil in many places and move a number of rocks to get at the root system snaking under them.

I never appreciated the expression, "getting at the root of the problem" until I dealt with invasive sagebrush and "grass-zilla." I have no choice but to do it because if I do not eradicate the root system, I will continue to battle these opportunistic biological interlopers one new sprout at a time. Just as, if you do not figure out the - pun intended - root cause of a security vulnerability, but just fix the symptoms, you will later have to clean up the rest of the buggy snippets that are choking your code.

I have had professional experiences that mirror my rock garden. That is, that there are "interloping and invasive" ideas that take hold with unbelievable tenacity to the point it is hard to eradicate them. The sagebrush and grass of the cybersecurity area are what I can only call the (myth of the) evil vendor cabal (var. multae crappycodae) and supply chain risk management (malwarum hysteriensis). Both have taken hold of otherwise rational human beings just like the pods took over people's minds in Invasion of the Body Snatchers.

In the course of my work, I attend a lot of meetings, seminars and the like on software assurance. The good news is that in the last couple of years, most of the vendors who attend these events (think of the big names in software and hardware) are doing pretty much the same sort of mom and secure apple pie things in software development. The bar, I can say pretty confidently, has been raised. This does not mean industry is perfect, nor does it mean that industry is "done" improving security. I would add that all of us know that building better code is good business: good for customers and good for us. It's also important for critical infrastructure. We get it.

However, to go to some of these meetings, you wouldn't think anything had changed. I have recently been astonished at the statements of opinion - without any facts to back them up - about the state of software development and the motives of those of us who do it, and even more disturbed at what I can only describe as outright hostility to industry in particular and capitalism in general. I suspect at least part of the reason for the hostility is the self-selecting nature of some of these meetings. That is, for some assurance-focused groups, vendors only attend meetings sporadically (because it's more productive to spend time improving your product than in talking about it). That leaves the audience dominated by consultants, academics and policy makers. Each group, in its own way, wants to make the problem better and yet each, in its own way, has a vested interest in convincing other stakeholders that they - and only they - can fix the problem. Many of them have never actually built software or hardware or worked in industry - and it shows. Theory often crumbles upon the altar of actual practice.

What I have heard some of these professional theorists say is not only breathtakingly ironic but often more than a little hypocritical: for example, a tenured academic complaining that industry is "not responsive to the market." (See my earlier blog "The Supply Chain Problem" on fixing the often-execrable cybersecurity education in most university programs, and the deafening silence I got in response from the universities I sent letters to.) If you are tenured, you do not have to respond to market forces: you can teach the same thing for thirty years whether or not it is what the market needs or wants and whether or not you are any good at it. (What was that again about being nonresponsive to market forces?)

I am also both amused and annoyed at the hordes of third party consultants all providing a Greek chorus of "you can't trust your suppliers - let us vet them for you." Their purpose in the drama of assurance seems to be the following:

  • Create fear, uncertainty and doubt (FUD) in the market - "evil, money-grubbing vendors can't be trusted; good, noble consultants are needed to validate security"
  • Draft standards - under contract to the government - that create new, expensive third party software and hardware validation schemes
  • Become the "validator" of software after your recommendations to the government - the ones you wrote for them - have been accepted

Could there possibly be a clearer definition of "conflict of interest" than the above? Now, I do not blame anyone for trying to create a market - isn't that what capitalism is? - but trying to create a market for your services by demonizing capitalism is hilariously ironic. One wants to know, "quis custodiet ipsos custodes?" (Who watches the watchers, otherwise known as, "why should I trust consultants who, after all, exist to sell more consulting services?")

The antibusiness rhetoric got so bad once that I took advantage of a keynote I was delivering to remark - because I am nothing if not direct - that, contrary to popular belief, there is no actual Evil Vendor Cabal wherein major software and hardware suppliers collude to determine how we can collectively:

  • build worse products
  • charge more for them and
  • put our customers at increased risk of cyberattack.

It doesn't happen. And furthermore, I added, the government likes and has benefited from buying commercial software for many applications since it is feature rich, maintained regularly, generally very configurable, and runs on a lot of operating systems. "How well," I added, "did it work when government tried to build all these systems from scratch?" The answer is, the government does not have the people or the money to do that: they never did. But the same consultants who are creating FUD about commercial software would be happy to build custom software for everything at 20 times the price, whether or not there is a reason to build custom software.

"You are all in business to make a profit!" one person stated accusingly, as if that were a bad thing. "Yes," I said, "and because we are in business to make a profit, it is very much in our interest to build robust, secure software, because it is enormously expensive for us to fix defects - especially security defects - after we ship software, and we'd much rather spend the resources on building new features we can charge for, instead of on old problems we have to fix in many, many places. Furthermore, we run our own businesses on our own software so if there is horrible security, we are the first 'customer' to suffer. And lastly, if you build buggy, crappy software that performs poorly and is expensive to maintain, you will lose customers to competitors, who love to point at your deficiencies if customers have not already found them."

The second and more disturbingly tenacious idea - and I put this in the category of grass since it seemingly will take a lot of grubbing in the dirt to eradicate it - is what is being called "supply chain risk," this year's hot boy band, judging from the amount of screaming, fainting and hysteria that surrounds it. And yet, if "it" is such a big deal, why oh why can't the people writing papers, draft standards and proposed legislation around "it" describe exactly what they are worried about? I have read multiple pieces of legislation and now, a draft NIST standard on "supply chain risk management" and still there is no clear articulation of "what are you worried about?"

I generally have a high degree of regard for the National Institute of Standards and Technology (NIST). In the past, I've even advocated to get them more money for specific projects that I thought would be a very good use of taxpayer money. I am therefore highly disturbed that a draft standard on supply chain risk management, a problem supposedly critical to our national interests, appears to be authored by contractors and not by NIST. Specifically, two out of three people who worked on the draft are consultants, not NIST employees. (Disclaimer: I know both of them professionally and I am not impugning them personally.) There is no way to know whether the NIST employee who is listed on the standard substantially contributed to the draft or merely managed a contract that "outsourced" development of it.

As I noted earlier, there is an inherent problem in having third parties who would directly stand to benefit if a "standard" is implemented participate in drafting it. Human nature being what it is, the temptation to create future business for oneself is insidiously hard to resist. Moreover, it is exceedingly difficult to resist one's own myopias about how to solve a problem and, let's face it, if you are a consultant, every problem looks like the solution is "hire a consultant." It would be exactly the same thing if, say, the federal government asked Oracle to draft a request for proposal that required a ...database. Does anybody think we could possibly be objective? Even if we tried to be open minded, the set of requirements we would come up with would look suspiciously like Oracle, because that's what we are most familiar with.

Some will argue that this is a draft standard, and will go through revisions, so the provenance of the ideas shouldn't matter. However, NIST's core mission is developing standards. If they are not capable of drafting standards themselves then they should either get the resources to do so or not do it at all. Putting it differently, if you can't perform a core mission, why are you in business? If I may be a bit cheeky here, there is a lesson from Good Old Capitalism here: you cannot be in all market segments (otherwise known as "You can't be all things to all people"). It's better to do a few things well than to try to do everything, and end up doing many things badly. I might add, any business that tried to be in too many market segments that they had no actual expertise in would fail - quickly - because the market imposes that discipline on them.

Back to the heart of the hysteria: what, precisely, is meant by "supply chain risk?" At the root of all the agitation there appear to be two concerns, both of which are reasonable and legitimate to some degree. They are:

  • Counterfeiting
  • Malware

Taking the easier one first, "counterfeiting" in this context means "purchasing a piece of hardware FOO or software BAR where the product is not a bona fide FOO or BAR but a knockoff." (Note: this is not the case of buying a "solid gold Rolex" on the street corner for $10 when you know very well this is not a real Rolex - not at that price.) From the acquirer's viewpoint, the concern is that a counterfeit component will not perform as advertised (i.e., might fail at a critical juncture), or won't be supported/repaired/warranted by the manufacturer (since it is a fake product). It could also include a suspicion that instead of GoodFoo you are getting EvilKnockoffFOO, which does something very different - and malicious - from what it's supposed to do. More on that later.

From the manufacturer's standpoint, counterfeiting cuts into your revenue stream since someone is "free riding" on your brand, your advertising, maybe even your technology, and you are not getting paid for your work. Counterfeits may also damage your brand (when FakeFOO croaks under pressure instead of performing like the real product). Counterfeiting is the non-controversial part of supply chain concerns in that pretty much everybody agrees you should get what you pay for, and if you buy BigVendor's product FOO, version 5, you want to know you are actually getting FOO, version 5 (and not fake FOO). Note: I say, "non controversial," but when you have government customers buying products off eBay (deeply discounted) who are shocked - shocked I tell you! - to discover that they have bought fakes, you do want to say, "do you buy F-22s off eBay? No? Then what makes you think you can buy mission critical hardware off eBay? Buy From An Authorized Distributor, fool!"

The second area of supply chain risk hysteria is malware. Specifically, the concern that someone, somewhere will Put Something Bad in code (such as a kill switch which would render the software or hardware inoperable at a critical juncture). Without ever articulating it, the hysteria is typically that An Evil Foreigner - not a Good American Programmer - will Put Something Bad in Code. (Of course, other countries have precisely the same concern, only in their articulation, it is evil Americans who will Put Something Bad In Code.) The "foreign boogeymen" problem is at the heart of the supply chain risk hysteria and has led to the overreach of proposed solutions for it. (For example, the NIST draft wanted acquirers to be notified of changes to personnel involving "maintenance." Does this mean that every time a company hires a new developer to work on old code - and let's face it, almost everybody who works in development for an established company touches old code at some point - they have to send a letter to Uncle Sam with the name of the employee? Can you say "intrusive?")

So here is my take on the reality of the "malware" part of supply chain. It's a long explanation, and I stole it from a paper I did on supply chain issues for a group of legislators. I offer these ideas as points of clarification that I fervently hope will frame this discussion, before someone, in a burst of public service, creates an entirely new expensive, vague, "construct" of policy remedies for an unbounded problem. Back to my gardening analogy, if eradicating the roots of a plant is important and necessary to kill off a biological interloper, it is also true that some plants will not grow in all climates and in all soil no matter what you do: I cannot grow plumeria (outdoors) in Idaho no matter how hard I try and no matter how much I love it. Similarly, some of the proposed "solutions" to supply chain risk are not going to thrive because of a failure to understand what is reasonable and feasible and will "grow" and what absolutely will not. I'll go farther than that - some of the proposed remedies - and much of what is proposed in the draft NIST standard - should be dosed with weed killer.

Constraint 1: In the general case - and certainly for multi-purpose infrastructure and applications software and hardware - there are no COTS products without global development and manufacturing.

Discussion: The explosion in COTS software and hardware of the past 20 years has occurred precisely because companies are able to gain access to global talent by developing products around the world. For example, a development effort may include personnel on a single "virtual team" who work across the United States and in the United Kingdom and India. COTS suppliers also need access to global resources to support their global customers. For example, COTS suppliers often offer 7x24 support in which responsibility for addressing a critical customer service request migrates around the globe, from support center to support center (often referred to as a "follow the sun" model). Furthermore, the more effective and available (that is, 7x24 and global) support is, the more likely problems will be reported and resolved more quickly for the benefit of all customers. Even smaller firms that produce specialized COTS products (e.g., cryptographic or security software) may use global talent to produce it.

Hardware suppliers are typically no longer "soup to nuts" manufacturers. That is, a hardware supplier may use a global supply network in which components - sourced from multiple entities worldwide - are assembled by another entity. Software is loaded onto the finished hardware in yet another manufacturing step. Global manufacturing and assembly helps hardware suppliers focus on production of the elements for which they can best add value and keeps overall manufacturing and distribution costs low. We take it for granted that we can buy serviceable and powerful personal computers for under $1000, but it was not that long ago that the computing power in the average PC was out of reach for all but highly capitalized entities and special purpose applications. Global manufacturing and distribution makes this possible.

In summary, many organizations that would have deployed custom software and hardware in the past have now "bet the farm" on the use of COTS products because they are cheaper, more feature rich, and more supportable than custom software and hardware. As a result, COTS products are being embedded in many systems - or used in many deployment scenarios - that they were not necessarily designed for. Supply chain risk is by no means the only risk of deploying commercial products in non-commercial threat environments.

Constraint 2: It is not possible to prevent someone from putting something in code that is undetectable and potentially malicious, no matter how much you tighten geographic parameters.

Discussion: One of the main expressions of concern over supply chain risk is the "malware boogeyman," most often associated with the fear that a malicious employee with authorized access to code will put a backdoor or malware in code that is eventually sold to a critical infrastructure provider (e.g., financial services, utilities) or a defense or intelligence agency. Such code, it is feared, could enable an adversary to alter (i.e., change) data or exfiltrate data (e.g., remove copies of data surreptitiously) or make use of a planted "kill switch" to prevent the software or hardware from functioning. Typically, the fear is expressed as "a foreigner" could do this. However, it is unclear precisely what "foreigner" is in this context:

  • There are many H1B visa holders (and green card holders) who work for companies located in the United States. Are these "foreigners?"
  • There are US citizens who live in countries other than the US and work on code there. Are these "foreigners?" That is, is the fear of code corruption based on geography or national origin of the developer?
  • There are developers who are naturalized US citizens (or dual passport holders). Are these "foreigners?"

(Ironically, naturalized citizens and H1B visa holders are arguably more "vetted" than native-born Americans.) It is unclear whether the concern is geographic locale, national origin of a developer or overall development practice and the consistency by which it is applied worldwide.

COTS software, particularly infrastructure software (operating systems, databases, middleware) or packaged applications (customer relationship management (CRM), enterprise resource planning (ERP)) typically has multiple millions of lines of code (e.g., the Oracle database has about 70 million lines of code). Also typically, commercial software is in near-constant state of development: there is always a new version under development or old versions undergoing maintenance. While there are automated tools on the market that can scan source code for exploitable security defects (so-called static analysis tools), such tools find only a portion of exploitable defects and these are typically of the "coding error" variety. They do not find most design defects and they would be unlikely to find deliberately introduced backdoors or malware.
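To make the "coding error" variety concrete, here is a hypothetical Java fragment of the kind static analysis tools routinely flag, next to the fix they recommend; the method and table names are invented for illustration:

  import java.sql.Connection;
  import java.sql.PreparedStatement;
  import java.sql.ResultSet;
  import java.sql.Statement;

  public class OrderLookup {
      // Classic "coding error" defect: user input concatenated into SQL.
      // This is exactly the pattern a static analysis tool flags as a
      // SQL injection risk.
      ResultSet findOrdersUnsafe(Connection conn, String customerId) throws Exception {
          Statement stmt = conn.createStatement();
          return stmt.executeQuery(
              "SELECT * FROM orders WHERE customer_id = '" + customerId + "'");
      }

      // The recommended fix: a parameterized query.
      ResultSet findOrdersSafe(Connection conn, String customerId) throws Exception {
          PreparedStatement ps = conn.prepareStatement(
              "SELECT * FROM orders WHERE customer_id = ?");
          ps.setString(1, customerId);
          return ps.executeQuery();
      }
  }

A design defect, by contrast - say, the absence of any check on which users may look up which customers' orders - is invisible to the same tools, because every individual statement is well-formed.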

Given the size of COTS code bases, the fact they are in a near constant state of flux, and the limits of automated tools, there is no way to absolutely prevent the insertion of bad code that would have unintended consequences and would not be detectable. (As a proof point, a security expert in command and control systems once put "bad code" in a specific 100 lines of code and challenged code reviewers to find it within the specific 100 lines of code. They couldn't. In other words, even if you know where to look, malware can be and often is undetectable.)
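As a purely hypothetical illustration of how little it takes (the actual challenge code is not public), consider how a one-character operator change reads like a routine typo but behaves like a backdoor:

  public class AccessCheck {
      private static final String ADMIN_ROLE = "admin";

      // Intended rule: allow only a non-null user who holds the admin role.
      static boolean isAllowed(String user, String role) {
          return user != null && ADMIN_ROLE.equals(role);
      }

      // Planted version: && became ||, so ANY non-null user now passes the
      // admin check regardless of role. Amid dozens of similar-looking
      // null-guard idioms, a reviewer can read straight past it.
      static boolean isAllowedPlanted(String user, String role) {
          return user != null || ADMIN_ROLE.equals(role);
      }

      public static void main(String[] args) {
          System.out.println(isAllowed("guest-user", "guest"));        // false
          System.out.println(isAllowedPlanted("guest-user", "guest")); // true
      }
  }

Multiply that by a hundred lines of legitimate-looking code and the reviewers' failure in the challenge is not surprising.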

Lastly, we are sticking our collective heads in the sand if we think that no American would ever put something deliberately bad in code. Most of the biggest intelligence leaks of the past were perpetrated by cleared American citizens (e.g., Aldrich Ames, Robert Hanssen and the Walker spy ring). But there are other reasons people could Do Bad Things To Code, such as being underpaid and disgruntled about it (why not stick a back door in code and threaten to shut down systems unless someone gives you a pay raise?).

Constraint 3: Commercial assurance is not "high assurance" and the commercial marketplace will not support high assurance software.

Discussion: Note that there are existing, internationally recognized assurance measures such as the Common Criteria (ISO-15408) that validate that software meets specific (stated) threats it was designed to meet. The Common Criteria supports a sliding scale of assurance (i.e., levels 1 through 7) with different levels of software development rigor required at each level: the higher the assurance level, the more development rigor required to substantiate the higher assurance level. Most commercial software can be evaluated up to Evaluation Assurance Level (EAL) 4 (which, under the Common Criteria Recognition Arrangement (CCRA), is also accepted by other countries that subscribe to the Common Criteria). Few commercial entities ask for or require "high assurance" software and few if any government customers ask for it, either.

What is achievable and commercially feasible is for a supplier to have reasonable controls on access to source code during its development cycle and reasonable use of commercial tools and processes that will find routine "bad code" (such as exploitable coding errors that lead to security vulnerabilities). Such a "raise the bar" exercise may have, and likely will have, a deterrent effect to the extent that it removes the plausible deniability of a malefactor inserting a common coding error that leads to a security exploit. Using automated vulnerability finding tools, in addition to improving code hygiene, makes it harder for someone to deliberately insert a backdoor masquerading as a common coding error because the tools find many such coding errors. Thus, a malefactor may, at least, have to work harder.

That said, and to Constraint 1, the COTS marketplace will not support significantly higher software assurance levels such as manual code review of 70 million lines of code, or extensive third party "validation" of large bodies of code beyond existing mechanisms (i.e., the Common Criteria) nor will it support a "custom code" development model where all developers are US citizens, any more than the marketplace will support US-only components and US-only assembly in hardware manufacturing. This was, in fact, a conclusion reached by the Defense Science Board in their report on foreign influence on the supply chain of software. And in fact, supply chain risk is not about the citizenship of developers or their geographic locale but about the lifecycle of software, how it can be corrupted, and taking reasonable and commercially feasible precautions to prevent code corruption.

Constraint 4: Any supply chain assurance exercise - whether improved assurance or improved disclosure - must be done under the auspices of a single global standard, such as the Common Criteria.

Discussion: Assurance-focused supply chain concerns should use international assurance standards (specifically the Common Criteria) to address them. Were someone to institute a separate, expensive, non-international "supply chain assurance certification," not only would software assurance not improve, it would likely get worse, because the same resources that companies today spend on improving their product would be spent on secondary or tertiary "certifications" that are expensive, inconsistent and non-leverageable. In the worst case, a firm might have to produce different products for different geographic locales, which would further divert resources (and weaken security). A new "regulatory regime" - particularly one that largely overlaps with an existing scheme - would be expensive and "crowd out" better uses of time, people, and money. To the extent some supply chain issues are not already addressed in Common Criteria evaluations, the Common Criteria could be modified to address them, using an existing structure that already speaks to assurance in the international realm.

Even in cases of "supply chain disclosure," any such disclosure requirement needs to ensure that the value of information - to purchasers - is greater than the cost to suppliers of providing such information. To that end, disclosure should be standardized, not customized. Even a large vendor would not be able to complete per-customer or per-industry questionnaires on supply chain risk for each release of each product they produce. The cost of completing such "per-customer, per-industry" questionnaires would be considerable, and far more so for small, niche vendors or innovative start-ups.

For example, a draft questionnaire developed by the Department of Homeland Security asked, for each development project and for each phase of development (requirements, design, code, and test), how many "foreigners" worked on the project. A large product may have hundreds of projects, and collating how many "foreigners" worked on each of them provides little value (and says nothing about the assurance of the software development process) while being extremely expensive to collect. (The question was dropped from the final document.)

Constraint 5: There is no defect-free or even security defect-free software.

Discussion: While better commercial software is achievable, perfect software is not. This is the case because of a combination of generally poor "security education" in universities (most developers are not taught even basic secure development practices and have to be retrained by the companies that hire them), imperfect development practices, imperfect testing practices, and the fact that new classes of vulnerabilities are being discovered (and exploited) as enemies become more sophisticated. Better security education, better development practices and better testing will improve COTS (and non-COTS) software but will not eliminate all vulnerabilities or even all security vulnerabilities -- people make mistakes, and it's not possible to catch all of those mistakes.

As noted elsewhere, manual code inspection is infeasible over large code bases and is error prone. Automated vulnerability-finding tools are the only scalable solution for large code bases (to automate "error finding") but even the best commercially available automated vulnerability-finding tools find perhaps 50% of security defects in code resulting from coding errors but very few security design errors (e.g., an automated tool can't "detect" that a developer neglected to include key security functionality, like encrypting passwords or requiring a password at all).
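As a hypothetical example of such a design error, both methods below are well-formed line by line; nothing in the first one is a flaggable coding mistake, so a scanner has no way to know it should never have been written:

  import java.nio.charset.StandardCharsets;
  import java.security.MessageDigest;
  import java.util.HashMap;
  import java.util.Map;

  public class PasswordStore {
      private final Map<String, String> store = new HashMap<String, String>();

      // Design defect: password saved in plaintext. Every statement here is
      // individually correct, so an automated scanner sees nothing wrong.
      void savePlaintext(String user, String password) {
          store.put(user, password);
      }

      // What the design should have required: store only a hash.
      // (Real systems would also salt and use a slow KDF such as bcrypt.)
      void saveHashed(String user, String password) throws Exception {
          MessageDigest md = MessageDigest.getInstance("SHA-256");
          byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
          StringBuilder hex = new StringBuilder();
          for (byte b : digest) {
              hex.append(String.format("%02x", b));
          }
          store.put(user, hex.toString());
      }
  }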

Lastly, no commercial software ships with "zero defects." Most organizations ship production software only after a phase-in period (so-called alpha and beta testing) in which a small, select group of production customers use the software and provide feedback, and the vendor fixes the most critical defects. In other words, there is typically a "cut-off" in that less serious vulnerabilities are not fixed prior to the product being generally available to all customers.

It is reasonable and achievable for a company to have enough rigor in its development practice to actively look for security defects (using commercial automated tools), triage them (e.g., by assigning a Common Vulnerability Scoring System (CVSS) score) and, for example, fix all issues above a particular severity, as sketched below. That said, it is a certainty that some vulnerabilities will still be discovered after the product has shipped, and some of these will be security vulnerabilities.
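A minimal sketch of what such a triage gate could look like; the issue class, scores, and cutoff are invented for illustration (7.0 is merely the conventional floor of "high" severity under CVSS v2, and the cutoff a vendor actually picks is a policy decision):

  import java.util.ArrayList;
  import java.util.List;

  public class TriageGate {
      static class Issue {
          final String id;
          final double cvssScore;  // 0.0 - 10.0, from the CVSS base metrics
          Issue(String id, double cvssScore) {
              this.id = id;
              this.cvssScore = cvssScore;
          }
      }

      // Policy sketch: everything at or above the cutoff blocks the release.
      static final double SEVERITY_CUTOFF = 7.0;

      static List<Issue> mustFixBeforeShip(List<Issue> found) {
          List<Issue> blocking = new ArrayList<Issue>();
          for (Issue issue : found) {
              if (issue.cvssScore >= SEVERITY_CUTOFF) {
                  blocking.add(issue);
              }
          }
          return blocking;
      }
  }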

There is a reasonableness test here we all understand. Commercial software is designed for commercial purposes and with commercial assurance levels. "Commercial software" is not necessarily military grade any more than a commercial vehicle - a Chevy Suburban, for example - is expected to perform like an M1 Abrams tank. Wanting commercial software to have been built (retroactively) using theoretically perfect but highly impractical development models (and by cleared US citizens in a secured facility, no less) might sound like Nirvana to a confluence of assurance agitators - but it is neither reasonable nor feasible and it is most emphatically not commercial software.

Book(s) of the Month

Strong Men Armed: The United States Marines vs. Japan by Robert Leckie

Robert Leckie was a Marine who served in WWII in the Pacific theater and also a prolific writer, much of whose work was military history (another book, Helmet for My Pillow, was a basis for HBO's The Pacific). As much as I have read about the Pacific War - and I've read a lot - I continue to be inspired and humbled by the accounts of those who fought it and what they were up against: a fanatical, ideologically-inspired and persistent foe who would happily commit suicide if he were able to take out many of "the American enemy." The Marines were on the front lines of much of that war and indeed, so many battles were the Marines' to fight and win. What I liked about this book was that it did not merely recap which battles were fought when, where and by which Marine division led by what officer, but delved into the individuals in each battle. You know why Joe Foss received the Congressional Medal of Honor, and for what (shooting down 23 Japanese planes over Guadalcanal), for example. History is made by warriors, and everyone - not just the US Marines - should know who our heroes are. (On a personal note, I was also thrilled to read, on page 271 of my edition, several paragraphs about the exploits of Lt. Col. Henry Buse, USMC, on New Britain. I later knew him as General Henry Buse, a family friend. Rest in peace, faithful warrior.)

I'm Staying with My Boys: The Heroic Life of Sgt. John Basilone, USMC by Jim Proser

One of many things to love about the US Marine Corps is that they know their heroes: any Marine knows who John Basilone is and why his name is held in honor. This book - told in the first person, unusually - is nonetheless not an autobiography but a biography of Sgt. "Manila" John Basilone, who was a recipient of the Congressional Medal of Honor for his actions at Lunga Ridge on Guadalcanal. He could have sat out the rest of the war selling war bonds but elected to return to the front, where he was killed the first day of the battle for Iwo Jima. In a world where mediocrity and the manufactured 15 minutes of fame are celebrated, this is what a real hero - and someone who is worthy of remembrance - looks like. He is reported to have said upon receiving the CMH: "Only part of this medal belongs to me. Pieces of it belong to the boys who are still on Guadalcanal. It was rough as hell down there."

The citation for John Basilone's Congressional Medal of Honor:

" For extraordinary heroism and conspicuous gallantry in action against enemy Japanese forces, above and beyond the call of duty, while serving with the 1st Battalion, 7th Marines, 1st Marine Division in the Lunga Area. Guadalcanal, Solomon Islands, on 24 and 25 October 1942. While the enemy was hammering at the Marines' defensive positions, Sgt. Basilone, in charge of 2 sections of heavy machineguns, fought valiantly to check the savage and determined assault. In a fierce frontal attack with the Japanese blasting his guns with grenades and mortar fire, one of Sgt. Basilone's sections, with its gun crews, was put out of action, leaving only 2 men able to carry on. Moving an extra gun into position, he placed it in action, then, under continual fire, repaired another and personally manned it, gallantly holding his line until replacements arrived. A little later, with ammunition critically low and the supply lines cut off, Sgt. Basilone, at great risk of his life and in the face of continued enemy attack, battled his way through hostile lines with urgently needed shells for his gunners, thereby contributing in large measure to the virtual annihilation of a Japanese regiment. His great personal valor and courageous initiative were in keeping with the highest traditions of the U.S. Naval Service."

Other Links

More than you ever wanted to know about sagebrush:

The Root of The Problem

Mary Ann Davidson - Thu, 2010-09-02 02:07

Summer in Idaho is treasured all the more since it is all too brief. We had a long, cold spring - my lilacs were two months behind those of friends and family on the east coast - and some flowers that normally do well here never did poke their colorful heads out of the ground.

My personal gardening forays have been mixed: some things I planted from seeds never came up, and others only just bloomed in August, much to my delight. I am trying to create order from chaos - more specifically, I want a lovely oasis of flowers in a rock garden I have admittedly neglected for several years. Nature abhors a vacuum and thus, she made a successful flanking maneuver to colonize flowerbeds with sagebrush and grasses. I am way beyond "yanking and weed killer" and have traded in my trowel for heavier equipment. You need a shovel and a strong back to pull up a sagebrush and as for the grass, I've had to remove the top three inches of soil in many places and move a number of rocks to get at the root system snaking under them.

I never appreciated the expression, "getting at the root of the problem" until I dealt with invasive sagebrush and "grass-zilla." I have no choice but to do it because if I do not eradicate the root system, I will continue to battle these opportunistic biological interlopers one new sprout at a time. Just as, if you do not figure out the - pun intended - root cause of a security vulnerability, but just fix the symptoms, you will later have to clean up the rest of the buggy snippets that are choking your code.

I have had professional experiences that mirror my rock garden. That is, that there are "interloping and invasive" ideas that take hold with unbelievable tenacity to the point it is hard to eradicate them. The sagebrush and grass of the cybersecurity area are what I can only call the (myth of the) evil vendor cabal (var. multae crappycodae) and supply chain risk management (malwarum hysteriensis). Both have taken hold of otherwise rational human beings just like the pods took over people's minds in Invasion of the Body Snatchers.

In the course of my work, I attend a lot of meetings, seminars and the like on software assurance. The good news is that in the last couple of years, most of the vendors who attend these events (think of the big names in software and hardware) are doing pretty much the same sort of mom and secure apple pie things in software development. The bar, I can say pretty confidently, has been raised. This does not mean industry is perfect, nor does it mean that industry is "done" improving security. I would add that all of us know that building better code is good business: good for customers and good for us. It's also important for critical infrastructure. We get it.

However, to go to some of these meetings, you wouldn't think anything had changed. I have recently been astonished at the statements of opinion - without any facts to back them up - about the state of software development and the motives of those of us who do it, and even more disturbed at what I can only describe as outright hostility to industry in particular and capitalism in general. I suspect at least part of the reason for the hostility is the self-selecting nature of some of these meetings. That is, for some assurance-focused groups, vendors only attend meetings sporadically (because it's more productive to spend time improving your product than in talking about it). That leaves the audience dominated by consultants, academics and policy makers. Each group, in its own way, wants to make the problem better and yet each, in its own way, has a vested interest in convincing other stakeholders that they - and only they - can fix the problem. Many of them have never actually built software or hardware or worked in industry - and it shows. Theory often crumbles upon the altar of actual practice.

What I have heard some of these professional theorists say is not only breathtakingly ironic but often more than a little hypocritical: for example, a tenured academic complaining that industry is "not responsive to the market." (See my earlier blog "The Supply Chain Problem") on fixing the often-execrable cybersecurity education in most university programs and the deafening silence I got in response from the universities I sent letters to.) If you are tenured, you do not have to respond to market forces: you can teach the same thing for thirty years whether or not it is what the market needs or wants and whether or not you are any good at it. (What was that again about being nonresponsive to market forces?)

I am also both amused and annoyed at the hordes of third party consultants all providing a Greek chorus of "you can't trust your suppliers - let us vet them for you." Their purpose in the drama of assurance seems to be the following:


  • Create fear, uncertainty and doubt (FUD) in the market - "evil, money-grubbing vendors can't be trusted; good, noble consultants are needed to validate security"

  • Draft standards - under contract to the government - that create new, expensive third party software and hardware validation schemes

  • Become the "validator" of software after your recommendations to the government - the ones you wrote for them - have been accepted

Could there possibly be a clearer definition of "conflict of interest" than the above? Now, I do not blame anyone for trying to create a market - isn't that what capitalism is? - but trying to create a market for your services by demonizing capitalism is hilariously ironic. One wants to know, "quis custodiet ipsos custodes?" (Who watches the watchers, otherwise known as, "why should I trust consultants who, after all, exist to sell more consulting services?")

The antibusiness rhetoric got so bad once that I took advantage of a keynote I was delivering to remark - because I am nothing if not direct - that, contrary to popular belief, there is no actual Evil Vendor Cabal wherein major software and hardware suppliers collude to determine how we can collectively:


  • build worse products

  • charge more for them and

  • put our customers at increased risk of cyberattack.


It doesn't happen. And furthermore, I added, the government likes and has benefited from buying commercial software for many applications since it is feature rich, maintained regularly, generally very configurable, and runs on a lot of operating systems. "How well," I added, "did it work when government tried to build all these systems from scratch?" The answer is, the government does not have the people or the money to do that: they never did. But the same consultants who are creating FUD about commercial software would be happy to build custom software for everything at 20 times the price, whether or not there is a reason to build custom software.

"You are all in business to make a profit!" one person stated accusingly, as if that were a bad thing. "Yes," I said, "and because we are in business to make a profit, it is very much in our interest to build robust, secure software, because it is enormously expensive for us to fix defects - especially security defects - after we ship software, and we'd much rather spend the resources on building new features we can charge for, instead of on old problems we have to fix in many, many places. Furthermore, we run our own businesses on our own software so if there is horrible security, we are the first 'customer' to suffer. And lastly, if you build buggy, crappy software that performs poorly and is expensive to maintain, you will lose customers to competitors, who love to point at your deficiencies if customers have not already found them."

The second and more disturbingly tenacious idea - and I put this in the category of grass since it seemingly will take a lot of grubbing in the dirt to eradicate it - is what is being called "supply chain risk," this year's hot boy band, judging from the amount of screaming, fainting and hysteria that surrounds it. And yet, if "it" is such a big deal, why oh why can't the people writing papers, draft standards and proposed legislation around "it" describe exactly what they are worried about? I have read multiple pieces of legislation and now, a draft NIST standard on "supply chain risk management" and still there is no clear articulation of "what are you worried about?"

I generally have a high degree of regard for the National Institute of Standards and Technology (NIST). In the past, I've even advocated to get them more money for specific projects that I thought would be a very good use of taxpayer money. I am therefore highly disturbed that a draft standard on supply chain risk management, a problem supposedly critical to our national interests, appears to be authored by contractors and not by NIST. Specifically, two out of three people who worked on the draft are consultants, not NIST employees. (Disclaimer: I know both of them professionally and I am not impugning them personally.) There is no way to know whether the NIST employee who is listed on the standard substantially contributed to the draft or merely managed a contract that "outsourced" development of it.

As I noted earlier, there is an inherent problem in having third parties who would directly stand to benefit if a "standard" is implemented participate in drafting it. Human nature being what it is, the temptation to create future business for oneself is insidiously hard to resist. Moreover, it is exceedingly difficult to resist one's own myopias about how to solve a problem and, let's face it, if you are a consultant, every problem looks like the solution is "hire a consultant." It would be exactly the same thing if, say, the federal government asked Oracle to draft a request for proposal that required a ...database. Does anybody think we could possibly be objective? Even if we tried to be open minded, the set of requirements we would come up with would look suspiciously like Oracle, because that's what we are most familiar with.

Some will argue that this is a draft standard, and will go through revisions, so the provenance of the ideas shouldn't matter. However, NIST's core mission is developing standards. If they are not capable of drafting standards themselves then they should either get the resources to do so or not do it at all. Putting it differently, if you can't perform a core mission, why are you in business? If I may be a bit cheeky here, there is a lesson from Good Old Capitalism here: you cannot be in all market segments (otherwise known as "You can't be all things to all people"). It's better to do a few things well than to try to do everything, and end up doing many things badly. I might add, any business that tried to be in too many market segments that they had no actual expertise in would fail - quickly - because the market imposes that discipline on them.

Back to the heart of the hysteria: what, precisely is meant by "supply chain risk?" At the root of all the agitation there appears to be two concerns, both of which are reasonable and legitimate to some degree. They are:


  • Counterfeiting

  • Malware


Taking the easier one first, "counterfeiting" in this context means "purchasing a piece of hardware FOO or software BAR where the product is not a bona fide FOO or BAR but a knockoff." (Note: this is not the case of buying a "solid gold Rolex" on the street corner for $10 when you know very well this is not a real Rolex - not at that price.) From the acquirer's viewpoint, the concern is that a counterfeit component will not perform as advertised (i.e., might fail at a critical juncture), or won't be supported/repaired/warranted by the manufacturer (since it is a fake product). It could also include a suspicion that instead of GoodFoo you are getting EvilKnockoffFOO, which does something very different - and malicious - from what it's supposed to do. More on that later.

From the manufacturer's standpoint, counterfeiting cuts into your revenue stream since someone is "free riding" on your brand, your advertising, maybe even your technology, and you are not getting paid for your work. Counterfeits may also damage your brand (when FakeFOO croaks under pressure instead of performing like the real product). Counterfeiting is the non-controversial part of supply chain concerns in that pretty much everybody agrees you should get what you pay for, and if you buy BigVendor's product FOO, version 5, you want to know you are actually getting FOO, version 5 (and not fake FOO). Note: I say, "non controversial," but when you have government customers buying products off eBay (deeply discounted) who are shocked - shocked I tell you! - to discover that they have bought fakes, you do want to say, "do you buy F-22s off eBay? No? Then what makes you think you can buy mission critical hardware off eBay? Buy From An Authorized Distributor, fool!"

The second area of supply chain risk hysteria is malware. Specifically, the concern that someone, somewhere will Put Something Bad in code (such as a kill switch which would render the software or hardware inoperable at a critical juncture). Without ever articulating it, the hysteria is typically that An Evil Foreigner - not a Good American Programmer - will Put Something Bad in Code. (Of course, other countries have precisely the same concern, only in their articulation, it is evil Americans who will Put Something Bad In Code.) The "foreign boogeymen" problem is at the heart of the supply chain risk hysteria and has led to the overreach of proposed solutions for it. (For example, the NIST draft wanted acquirers to be notified of changes to personnel involving "maintenance." Does this mean that every time a company hires a new developer to work on old code - and let's face it, almost everybody who works in development for an established company touches old code at some point - they have to send a letter to Uncle Sam with the name of the employee? Can you say "intrusive?")

So here is my take on the reality of the "malware" part of supply chain. It's a long explanation, and I stole it from a paper I did on supply chain issues for a group of legislators. I offer these ideas as points of clarification that I fervently hope will frame this discussion, before someone, in a burst of public service, creates an entirely new expensive, vague, "construct" of policy remedies for an unbounded problem. Back to my gardening analogy, if eradicating the roots of a plant is important and necessary to kill off a biological interloper, it is also true that some plants will not grow in all climates and in all soil no matter what you do: I cannot grow plumeria (outdoors) in Idaho no matter how hard I try and no matter how much I love it. Similarly, some of the proposed "solutions" to supply chain risk are not going to thrive because of a failure to understand what is reasonable and feasible and will "grow" and what absolutely will not. I'll go farther than that - some of the proposed remedies - and much of what is proposed in the draft NIST standard - should be dosed with weed killer.

Constraint 1: In the general case - and certainly for multi-purpose infrastructure and applications software and hardware - there are no COTS products without global development and manufacturing.

Discussion: The explosion in COTS software and hardware of the past 20 years has occurred precisely because companies are able to gain access to global talent by developing products around the world. For example, a development effort may include personnel on a single "virtual team" who work across the United States and in the United Kingdom and India. COTS suppliers also need access to global resources to support their global customers. For example, COTS suppliers often offer 7x24 support in which responsibility for addressing a critical customer service request migrates around the globe, from support center to support center (often referred to as a "follow the sun" model). Furthermore, the more effective and available (that is, 7x24 and global) support is, the more likely problems will be reported and resolved more quickly for the benefit of all customers. Even smaller firms that produce specialized COTS products (e.g., cryptographic or security software) may use global talent to produce it.

Hardware suppliers are typically no longer "soup to nuts" manufacturers. That is, a hardware supplier may use a global supply network in which components - sourced from multiple entities worldwide - are assembled by another entity. Software is loaded onto the finished hardware in yet another manufacturing step. Global manufacturing and assembly helps hardware suppliers focus on production of the elements for which they can best add value and keeps overall manufacturing and distribution costs low. We take it for granted that we can buy serviceable and powerful personal computers for under $1000, but it was not that long ago that the computing power in the average PC was out of reach for all but highly capitalized entities and special purpose applications. Global manufacturing and distribution makes this possible.

In summary, many organizations that would have deployed custom software and hardware in the past have now "bet the farm" on the use of COTS products because they are cheaper, more feature rich, and more supportable than custom software and hardware. As a result, COTS products are being embedded in many systems - or used in many deployment scenarios - that they were not necessarily designed for. Supply chain risk is by no means the only risk of deploying commercial products in non-commercial threat environments.

Constraint 2: It is not possible to prevent someone from putting something in code that is undetectable and potentially malicious, no matter how much you tighten geographic parameters.

Discussion: One of the main expressions of concern over supply chain risk is the "malware boogeyman," most often associated with the fear that a malicious employee with authorized access to code will put a backdoor or malware in code that is eventually sold to a critical infrastructure provider (e.g., financial services, utilities) or a defense or intelligence agency. Such code, it is feared, could enable an adversary to alter (i.e., change) data or exfiltrate data (e.g., remove copies of data surreptitiously) or make use of a planted "kill switch" to prevent the software or hardware from functioning. Typically, the fear is expressed as "a foreigner" could do this. However, it is unclear precisely what "foreigner" is in this context:


  • There are many H1B visa holders (and green card holders) who work for companies located in the United States. Are these "foreigners?"

  • There are US citizens who live in countries other than the US and work on code there. Are these "foreigners?" That is, is the fear of code corruption based on geography or national origin of the developer?

  • There are developers who are naturalized US citizens (or dual passport holders). Are these "foreigners?"

(Ironically, naturalized citizens and H1B visa holders are arguably more "vetted" that native-born Americans.) It is unclear whether the concern is geographic locale, national origin of a developer or overall development practice and the consistency by which it is applied worldwide.

COTS software, particularly infrastructure software (operating systems, databases, middleware) or packaged applications (customer relationship management (CRM), enterprise resource planning (ERP)), typically has multiple millions of lines of code (e.g., the Oracle database has about 70 million lines of code). Also, commercial software is typically in a near-constant state of development: there is always a new version under development or old versions undergoing maintenance. While there are automated tools on the market that can scan source code for exploitable security defects (so-called static analysis tools), such tools find only a portion of exploitable defects, and these are typically of the "coding error" variety. They do not find most design defects and they would be unlikely to find deliberately introduced backdoors or malware.

Given the size of COTS code bases, the fact they are in a near constant state of flux, and the limits of automated tools, there is no way to absolutely prevent the insertion of bad code that would have unintended consequences and would not be detectable. (As a proof point, a security expert in command and control systems once put "bad code" in a specific 100 lines of code and challenged code reviewers to find it. They couldn't, even though they knew exactly which 100 lines to examine. In other words, even if you know where to look, malware can be - and often is - undetectable.)

Lastly, we are sticking our collective heads in the sand if we think that no American would ever put something deliberately bad in code. Most of the biggest intelligence leaks of the past were perpetrated by cleared American citizens (e.g., Aldrich Ames, Robert Hanssen and the Walker spy ring). But there are other reasons people could Do Bad Things To Code, such as being underpaid and disgruntled about it (why not stick a back door in code and threaten to shut down systems unless someone gives you a pay raise?).

Constraint 3: Commercial assurance is not "high assurance" and the commercial marketplace will not support high assurance software.

Discussion: Note that there are existing, internationally recognized assurance measures such as the Common Criteria (ISO-15408) that validate that software meets specific (stated) threats it was designed to meet. The Common Criteria supports a sliding scale of assurance (i.e., levels 1 through 7) with different levels of software development rigor required at each level: the higher the assurance level, the more development rigor required to substantiate the higher assurance level. Most commercial software can be evaluated up to Evaluation Assurance Level (EAL) 4 (which, under the Common Criteria Recognition Arrangement (CCRA), is also accepted by other countries that subscribe to the Common Criteria). Few commercial entities ask for or require "high assurance" software and few if any government customers ask for it, either.

What is achievable and commercially feasible is for a supplier to have reasonable controls on access to source code during its development cycle and reasonable use of commercial tools and processes that will find routine "bad code" (such as exploitable coding errors that lead to security vulnerabilities). Such a "raise the bar" exercise will likely have a deterrent effect to the extent that it removes the plausible deniability of a malefactor inserting a common coding error that leads to a security exploit. Using automated vulnerability finding tools, in addition to improving code hygiene, makes it harder for someone to deliberately insert a backdoor masquerading as a common coding error, because the tools find many such coding errors. Thus, a malefactor may, at least, have to work harder.

That said, and per Constraint 1, the COTS marketplace will not support significantly higher software assurance levels such as manual code review of 70 million lines of code, or extensive third party "validation" of large bodies of code beyond existing mechanisms (i.e., the Common Criteria), nor will it support a "custom code" development model where all developers are US citizens, any more than the marketplace will support US-only components and US-only assembly in hardware manufacturing. This was, in fact, a conclusion reached by the Defense Science Board in their report on foreign influence on the supply chain of software. In fact, supply chain risk is not about the citizenship of developers or their geographic locale but about the lifecycle of software, how it can be corrupted, and taking reasonable and commercially feasible precautions to prevent code corruption.

Constraint 4: Any supply chain assurance exercise - whether improved assurance or improved disclosure - must be done under the auspices of a single global standard, such as the Common Criteria.

Discussion: Assurance-focused supply chain concerns should use international assurance standards (specifically the Common Criteria) to address them. Were someone to institute a separate, expensive, non-international "supply chain assurance certification," not only would software assurance not improve, it would likely get worse, because the same resources that companies today spend on improving their product would be spent on secondary or tertiary "certifications" that are expensive, inconsistent and non-leverageable. In the worst case, a firm might have to produce different products for different geographic locales, which would further divert resources (and weaken security). A new "regulatory regime" - particularly one that largely overlaps with an existing scheme - would be expensive and "crowd out" better uses of time, people, and money. To the extent some supply chain issues are not already addressed in Common Criteria evaluations, the Common Criteria could be modified to address them, using an existing structure that already speaks to assurance in the international realm.

Even in cases of "supply chain disclosure," any such disclosure requirement needs to ensure that the value of information - to purchasers - is greater than the cost to suppliers of providing such information. To that end, disclosure should be standardized, not customized. Even a large vendor would not be able to complete per-customer or per-industry questionnaires on supply chain risk for each release of each product they produce. The cost of completing such "per-customer, per-industry" questionnaires would be considerable, and far more so for small, niche vendors or innovative start-ups.

For example, a draft questionnaire developed by the Department of Homeland Security asked how many "foreigners" worked on each development project, for each phase of development (requirements, design, code, and test). A large product may have hundreds of projects, and collating how many "foreigners" worked on each of them provides little value (and says nothing about the assurance of the software development process) while being extremely expensive to collect. (The question was dropped from the final document.)

Constraint 5: There is no defect-free or even security defect-free software.

Discussion: While better commercial software is achievable, perfect software is not. This is the case because of a combination of generally poor "security education" in universities (most developers are not taught even basic secure development practices and have to be retrained by the companies that hire them), imperfect development practices, imperfect testing practices, and the fact that new classes of vulnerabilities are being discovered (and exploited) as enemies become more sophisticated. Better security education, better development practices and better testing will improve COTS (and non-COTS) software but will not eliminate all defects or even all security vulnerabilities -- people make mistakes, and it's not possible to catch all of those mistakes.

As noted elsewhere, manual code inspection is infeasible over large code bases and is error prone. Automated vulnerability-finding tools are the only scalable solution for large code bases (to automate "error finding") but even the best commercially available automated vulnerability-finding tools find perhaps 50% of security defects in code resulting from coding errors but very few security design errors (e.g., an automated tool can't "detect" that a developer neglected to include key security functionality, like encrypting passwords or requiring a password at all).

Lastly, no commercial software ships with "zero defects." Most organizations ship production software only after a phase-in period (so-called alpha and beta testing) in which a small, select group of production customers use the software and provide feedback, and the vendor fixes the most critical defects. In other words, there is typically a "cut-off" in that less serious vulnerabilities are not fixed prior to the product being generally available to all customers.

It is reasonable and achievable that a company has enough rigor in its development practice to include, as part of a robust development practice, actively looking for security defects (using commercial automated tools), triaging them (e.g., by assigning a Common Vulnerability Scoring System (CVSS) score) and, for example, fixing all issues above a particular severity. That said, it is a certainty that some defects will still be discovered after the product has shipped, and some of these will be security vulnerabilities.

There is a reasonableness test here we all understand. Commercial software is designed for commercial purposes and with commercial assurance levels. "Commercial software" is not necessarily military grade any more than a commercial vehicle - a Chevy Suburban, for example - is expected to perform like an M1 Abrams tank. Wanting commercial software to have been built (retroactively) using theoretically perfect but highly impractical development models (and by cleared US citizens in a secured facility, no less) might sound like Nirvana to a confluence of assurance agitators - but it is neither reasonable nor feasible and it is most emphatically not commercial software.

Book(s) of the Month

Strong Men Armed: The United States Marines vs. Japan by Robert Leckie

Robert Leckie was a Marine who served in WWII in the Pacific theater and also a prolific writer, much of his work military history (another book, Helmet for My Pillow, was a basis for HBO's The Pacific). As much as I have read about the Pacific War - and I've read a lot - I continue to be inspired and humbled by the accounts of those who fought it and what they were up against: a fanatical, ideologically-inspired and persistent foe who would happily commit suicide if he were able to take out many of "the American enemy." The Marines were on the front lines of much of that war and indeed, so many battles were the Marines' to fight and win. What I liked about this book was that it did not merely recap which battles were fought when, where and by which Marine division led by what officer, but it delves into the individuals in each battle. You know why Joe Foss received the Congressional Medal of Honor, and for what (shooting down 23 Japanese planes over Guadalcanal), for example. History is made by warriors, and everyone - not just the US Marines - should know who our heroes are. (On a personal note, I was also thrilled to read, on page 271 of my edition, several paragraphs about the exploits of Lt. Col Henry Buse, USMC, on New Britain. I later knew him as General Henry Buse, a family friend. Rest in peace, faithful warrior.)

I'm Staying with My Boys: The Heroic Life of Sgt. John Basilone, USMC by Jim Proser

One of many things to love about the US Marine Corps is that they know their heroes: any Marine knows who John Basilone is and why his name is held in honor. This book - told in the first person, unusually - is nonetheless not an autobiography but a biography of Sgt. "Manila" John Basilone, who was a recipient of the Congressional Medal of Honor for his actions at Lunga Ridge on Guadalcanal. He could have sat out the rest of the war selling war bonds but elected to return to the front, where he was killed the first day of the battle for Iwo Jima. In a world where mediocrity and the manufactured 15 minutes of fame are celebrated, this is what a real hero - and someone who is worthy of remembrance - looks like. He is reported to have said upon receiving the CMH: "Only part of this medal belongs to me. Pieces of it belong to the boys who are still on Guadalcanal. It was rough as hell down there."

The citation for John Basilone's Congressional Medal of Honor:

" For extraordinary heroism and conspicuous gallantry in action against enemy Japanese forces, above and beyond the call of duty, while serving with the 1st Battalion, 7th Marines, 1st Marine Division in the Lunga Area. Guadalcanal, Solomon Islands, on 24 and 25 October 1942. While the enemy was hammering at the Marines' defensive positions, Sgt. Basilone, in charge of 2 sections of heavy machineguns, fought valiantly to check the savage and determined assault. In a fierce frontal attack with the Japanese blasting his guns with grenades and mortar fire, one of Sgt. Basilone's sections, with its gun crews, was put out of action, leaving only 2 men able to carry on. Moving an extra gun into position, he placed it in action, then, under continual fire, repaired another and personally manned it, gallantly holding his line until replacements arrived. A little later, with ammunition critically low and the supply lines cut off, Sgt. Basilone, at great risk of his life and in the face of continued enemy attack, battled his way through hostile lines with urgently needed shells for his gunners, thereby contributing in large measure to the virtual annihilation of a Japanese regiment. His great personal valor and courageous initiative were in keeping with the highest traditions of the U.S. Naval Service."

Other Links

More than you ever wanted to know about sagebrush:

Installing and Configuring Oracle Portal, Forms, Reports and Discoverer

Habib Gohar - Tue, 2010-08-31 02:08
Excellent steps… http://oraclelabs.com/index.php/2010/08/30/installing-and-configuring-oracle-portal-forms-reports-and-discoverer/ Regards

OOW 2010 Plans and Anti-plans

Dan Norris - Tue, 2010-08-31 00:24

I have plenty of things that are keeping me busy for OOW 2010 and you’ll all get to see the results at the event (if you’re there), but I only have one traditional technical session where I’ll be on stage. I’m presenting the following session jointly with an Oracle Database Machine customer:

Session ID: S316824
Title: Top 10 Lessons Learned in Deploying the Oracle Exadata
Tuesday, September 21, 12:30PM
Location: Moscone South, Rm 307

Check the OOW 2010 content catalog for updated room assignments and times.

Even better than a technical session is the interview and Q&A session I'm doing on Oracle Technology Network Live, which is 30 minutes of pure technical talk about Exadata. The session is aptly titled "Exadata for Geeks" and I'll be joining Justin Kestelyn, editor of Oracle Technology Network, at the OTN Lounge, which is located in the Mason Street tent this year (*not* the previous location in Moscone West).

Significantly, this year I elected not to organize what would have been the 3rd annual pre-OOW scuba dive in Monterey Bay. Time and my work requirements are the primary reasons, but it's also because not a single person asked me about it, so apparently it was just for me after all :). Instead, I'm hoping that I might get to visit Alcatraz this year. I've been to SF so very many times in the past 12 years, but have yet to take that tour, so I think it's time (I've heard it is a really interesting tour).

See you in SF!

This blog is inactive (at least for the time being)

Christian Bilien - Thu, 2010-08-26 14:53
It’s been more than two and a half years since I last blogged on Oracle performance. I had the feeling I could not carry on this spare-time activity when I started managing a 20-person DBA team and 4 technologies (including 2 Oracle competitors - Microsoft, and Sybase, which has ASE and IQ). It would […]
Categories: DBA Blogs

I’ll be a presenter at Oracle Open World 2010

Christian Bilien - Thu, 2010-08-26 14:39
I submitted an abstract to Oracle OpenWorld 2010 that was found worthwhile enough to be accepted. Here are the session details: Speaker(s) Christian BILIEN, BNP PARIBAS Corporate Investment Banking, Head of the DBAs Monday, September 20, 2:00PM | Moscone South, Rm 236 60 min. Session ID: S314649 Title: Large-Scale Oracle RAC (and […]
Categories: DBA Blogs

The True Cost of a Core

Brent Martin - Thu, 2010-08-26 07:21

Servers are becoming more powerful as manufacturers find new ways to get more cores into a CPU.  Today it's not uncommon to see hexa- and octa-core processors shipping at the same price points the dual- and hexa-cores shipped at yesterday.  Where manufacturers once got their performance improvements through raw CPU speed, they are now getting the majority of the performance improvement through more cores in their processor chips.


Unfortunately, the economics of adding cores for performance aren't the same as the economics of improved clock speeds, because software manufacturers have largely tied their technology licensing to the number of cores on a system, and their pricing isn't decreasing as the number of cores on these new servers increases.


For example, say you buy a basic server with two hexa-core processors, so you're looking at 12 cores on the box.  Now let's suppose the list price for Oracle Database is $47,500 per processor license, and that Oracle's processor core factor table assigns a factor of 0.5 to your x86 cores.  Your 12 cores then count as 6 processor licenses, so your list price to run an Oracle database on your new server will be $285,000.  And that's not counting tuning packs, diagnostic packs, management packs, or even maintenance -- which is calculated as a percentage of the base price.  It turns out the cheapest part of this equation may be the hardware!
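
To make the arithmetic concrete, here is a quick Ruby sketch of the calculation. The 0.5 core factor and the 22% annual support rate are typical published values, not a quote from any price list -- check Oracle's current price list and core factor table before budgeting:

# cost_of_cores.rb - illustrative license arithmetic only
cores          = 12
core_factor    = 0.5      # common factor for x86 chips in Oracle's core factor table
price_per_proc = 47_500   # list price per processor license, USD
support_rate   = 0.22     # typical annual support, as a fraction of license price

licenses     = (cores * core_factor).ceil
license_cost = licenses * price_per_proc

puts "Processor licenses needed: #{licenses}"        # => 6
puts "License list price:        $#{license_cost}"   # => $285000
puts "Annual support (approx.):  $#{(license_cost * support_rate).round}"  # => $62700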


So if you’re planning on running software from the big vendors, conduct a solid sizing exercise and be sure to buy just the number of cores that you need.  Leave empty sockets for growth, but you might want to choose models that let you scale with fewer cores to avoid breaking the bank. Avoid sharing servers with more than one software package that is licensed per core (i.e. Informatica and Oracle DB), or you could end up paying double for server capacity that you’ll never be able to fully realize.  And when you DO add cores, be sure to also purchase the additional licenses to stay in compliance.  I’ve heard that software vendors’ compliance teams occasionally check up on you, and running with a few extra cores could break more than your annual budget.

Oracle Acquisitions Survey: The Results Are In

Lisa Dobson - Tue, 2010-08-24 03:41
I received another email from Stephen Jannise of ERP Software Advice this morning to let me know that he had published the results from the Oracle Mergers and Acquisitions Survey that I had previously blogged about. There were a total of 1,250 responses to the survey and the results are interesting. The two front runners fall into the category of "fairly straightforward ideas" with 38% of the […]

Berkeley DB Java Edition on ZFS Tuning Note

Charles Lamb - Tue, 2010-08-24 02:27

I have been spending some time tuning a continuous write load on a Solaris 10/ZFS based system. One of the issues I observed is a drop in throughput every several seconds. Removing the fsync() calls from JE (you would never want to do this normally) smoothed out the dips in throughput, which pointed to IO issues.

My colleague Sam pointed me at this discuss-zfs entry. And indeed, adding

set zfs:zfs_write_limit_override = 0x2000000

to /etc/system (with the fsync() calls put back into JE) does in fact seem to smooth things out (0x2000000 bytes = 32MB). Of course, you'll want to adjust that parameter based on what size of on-board disk cache you have.

One way to get a rough indication of whether this is a potential problem is to use iostat and see if there are IO spikes.
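
If you want to try that check yourself, something like the following works on Solaris 10 (-x extended statistics, -n descriptive device names, -z suppress idle devices). Service times or %b values that spike every few seconds would match the symptom described above:

iostat -xnz 5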

What to Convert in GL: Balance or Transaction Detail

Krishanu Bose - Sat, 2010-08-21 22:38
A key question all consultants face when handling GL conversion is what to convert: the prior period balances, or the detailed transactions.
Typically, organisations converting GL from a legacy system bring in only the balance data; very rarely do we bring in the transaction details. The reasons are as follows:
1. Usually there is a change in COA while moving from legacy GL to Oracle GL, hence the code combinations would never be the same in legacy and Oracle.
2. Drill down from Oracle GL to Oracle sub-ledger is not possible as there is no linkage between sub-ledger data and GL data post conversion.
3. The legacy system is usually archived for a defined period of time due to audit and legal reasons. This archived instance can be used for resolving historical audit and reconciliation issues that may arise at a later point in time.
However, if we are upgrading from an older version of Oracle to a newer one, then it makes sense to bring in the transaction data, because the drill-down feature remains available and the code combinations remain the same across both versions. But even here we need not bring in the transaction details for all historical data, only for a small period range: typically we convert transaction details from the start of the year to the cut-over date, and balance data for prior periods/years.

ADF logging level to see SQL statements

Peter O'Brien - Mon, 2010-08-16 07:34
Very handy for debugging runtime issues during development: the Oracle ADF Business Components layer can output the SQL that it is executing. To do this, though, the logging threshold must be set to a specific level...


'-Djbo.debugoutput=console -Djbo.logging.trace.threshold=5'


Curiously, the SQL statements do not get displayed for any other threshold level.
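
For what it's worth, these are ordinary JVM system properties, so you can paste them into the project's Run/Debug Java options in JDeveloper, or pass them on the command line when launching a standalone test client (the class name below is purely illustrative):

java -Djbo.debugoutput=console \
     -Djbo.logging.trace.threshold=5 \
     mypackage.MyBCTestClient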

Read/Write NTFS on my MacBook Pro

Bas Klaassen - Thu, 2010-08-12 03:41
Today I tried to start my virtual linux machines (created in vmware workstation on windows to install/upgrade eBS environments) on my MacBook Pro. I downloaded a trial version of Vmware Fusion. When trying to start a virtual machine, Vmware would show me the following error 'Read only file system' and the machine would not start. It seemed I could not write to the folders containing the Vmware […]
Categories: APPS Blogs

FIRST_ROWS vs ALL_ROWS

Robert Vollman - Wed, 2010-08-11 14:35
A colleague asked me some questions about FIRST_ROWS and ALL_ROWS, but I'm hesitant to blog about it because it's already been done so well by others -- the best example would probably be Sachin Arora. Nevertheless, it never hurts to lend another voice to the Oracle choir, so here's everything I know on the topic. FIRST_ROWS and ALL_ROWS are values for the optimizer setting OPTIMIZER_MODE. You can […]
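
For a quick illustration of those values in action (the table name and row count below are only illustrative), the mode can be set for a session, or requested per query via the corresponding hint:

ALTER SESSION SET optimizer_mode = ALL_ROWS;

SELECT /*+ FIRST_ROWS(10) */ *
FROM   employees
ORDER  BY hire_date;

ALL_ROWS optimizes for total throughput, while FIRST_ROWS(n) asks the optimizer to favor plans that return the first n rows quickly.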

Discoverer OLAP is certified with OLAP 11g

Keith Laker - Tue, 2010-08-10 06:02
A few people have asked me recently when an updated version of Discoverer OLAP will be released that supports the 11g OLAP Option. The answer is simple - it has already been released!! (but I guess that many people missed it because it was bundled as part of a broader patchset and not widely announced)



If you are interested, you can download it from OTN under Portal, Forms, Reports and Discoverer (11.1.1.3.0)

An updated version of the BI Spreadsheet add-in has been released too and can also be downloaded from OTN


Categories: BI & Warehousing

Fame at last for my biggest Apex project to date

Tony Andrews - Mon, 2010-08-09 11:32
I'm very pleased to see that the Apex project I started and worked on for several years is now the subject of an entry under Customer Quotes on the OTN Apex page. "At Northgate Revenues & Benefits, we have used APEX to replace our legacy Oracle Forms system comprising around 1500 Forms. Our user interface has 10,000 end users daily, across 172 clients, who this year sent out over 12 million […]

Moving blog from wordpress.com to Jekyll

Raimonds Simanovskis - Sun, 2010-08-08 16:00
Why move?

This blog was hosted for several years on wordpress.com, as that was the easiest way to host a blog when I started. But recently I have not been very satisfied with it, for the following reasons:

  • I include code snippets in my blog posts quite often, and several times I had issues with code formatting on wordpress.com. I used MarsEdit to upload blog posts, but when I later read previous posts back, quite often my < and > symbols had been replaced with &lt; and &gt;.
  • I would prefer to write my posts in Textile and not in plain HTML (this may also be possible with wordpress.com, but it was not obvious to me).
  • I didn’t quite like the CSS design of my site and wanted to improve it, but I prefer minimalistic CSS stylesheets and didn’t want to learn how to design CSS specifically for Wordpress sites.
  • A Wordpress site was too mainstream; I wanted something more geeky :)

When I do web app development I use TextMate for HTML / CSS and Ruby editing (sometimes I use CSSEdit when I need to do more CSS editing), I use Textile for wiki-style content editing in my apps, I use git for version control, and I use Ruby rake for build and deployment tasks. Wouldn’t it be great if I could use the same toolset for writing my blog?

What is Jekyll?

I had heard about the Jekyll blogging tool several times, and now I decided that it was time to start using it. Jekyll exactly matched my needs:

  • You can write blog posts in Textile (or in Markdown)
  • You can design HTML templates and CSS stylesheets as you want and use Liquid to embed dynamic content
  • You can store all blog content in git repository (or in any other version control system that you like)
  • And finally you use the jekyll Ruby gem to generate static HTML files that can be hosted anywhere

So it sounded quite easy and cool, and therefore I started the migration.

Migration

Initial setup

I started my new blog repository using the canonical example site from Jekyll’s creator. You just need to remove the posts from the _posts directory and start creating your own.
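
For orientation, each post is just a file in _posts whose name starts with the publication date (e.g. 2010-08-08-my-first-post.textile), with YAML front matter on top. A minimal post might look like this (the title and layout name are only examples):

---
layout: post
title: My first Jekyll post
---

Post body goes here, written in *Textile*.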

Export from wordpress.com

At first I needed to export all my existing posts from wordpress.com. I found a helpful script which processes the wordpress.com export and creates Textile source files for Jekyll, as well as a comments import file for Disqus (more about that later). It did quite a good job, but I still needed to go through all posts manually to make the following changes:

  • I needed to manually change the HTML source for lists into Textile-formatted lists (the conversion script converted just headings to Textile), as otherwise they did not look good when parsed by the Textile formatter.
  • I needed to wrap all code snippets in Jekyll code highlighting tags (which use the Pygments tool to generate HTML; see the example after this list) – as I had not previously used a consistent formatting style, I could not do this with a global search & replace.
  • I needed to download all uploaded images from wordpress.com and put them in an images directory.
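
For reference, this is what a snippet wrapped in Jekyll’s highlighting tags looks like (the language name can be any lexer Pygments understands):

{% highlight ruby %}
def greet(name)
  puts "Hello, #{name}!"
end
{% endhighlight %}
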
CSS design

As I wanted to create simpler and more maintainable CSS stylesheets, I didn’t just copy the previous CSS files but manually picked just the parts I needed. And as I now had full control over the CSS, I spent a lot of time improving my previous design (font sizes, margins, paddings etc.) – but at least now I am more satisfied with it :)

Tags

As all generated pages are static, there is no standard way to do typical dynamic pages such as a list of posts with a selected tag. But the good thing is that I can create rake tasks that re-generate all these "dynamic" pages as static pages whenever I change the original posts. I found some examples that I used to create my rake tasks for tag page and tag cloud generation; a sketch of the idea follows.
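
To give the flavor of it, here is a minimal sketch of such a rake task - not my actual task. It assumes tags are listed in each post’s YAML front matter and that a tag_page layout exists; both names are illustrative:

require 'fileutils'
require 'yaml'

desc 'Generate a static index page for each tag used in _posts'
task :tags do
  tags = Dir['_posts/*'].flat_map do |path|
    # read just the YAML front matter between the --- markers
    front_matter = File.read(path)[/\A---\s*\n(.*?)\n---/m, 1]
    data = YAML.load(front_matter.to_s) || {}
    Array(data['tags'])
  end.uniq

  tags.each do |tag|
    dir = File.join('tags', tag)
    FileUtils.mkdir_p(dir)
    File.write(File.join(dir, 'index.html'),
               "---\nlayout: tag_page\ntag: #{tag}\n---\n")
  end
end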

Related pages

Previously wordpress.com showed some automatically generated related posts for each post. Initially it was not obvious how to achieve this (as site.related_posts was always showing just the latest posts). Then I found that I needed to turn on the lsi option (shown below) and in addition install the GSL library (I installed it with Homebrew) and RubyGSL (as otherwise related posts generation was very slow).
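
For reference, turning this on is a single line in _config.yml (there is also a --lsi command line flag):

lsi: true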

Comments

The next issue is that with a static HTML site you cannot store comments, so you need to use a hosted commenting system. The most frequently used commenting system on Jekyll sites is Disqus, and therefore I decided to use it as well. It took some time to understand how it works, but it provides all the necessary HTML snippets to include in your layout templates and then it just works.

Previously mentioned script also included possibility to import my existing comments from wordpress.com into Disqus. But that was not quite as easy as I hoped:

  • The Disqus API that adds comments to an existing post (found by URL) does not create new discussion threads if they do not exist. Therefore I first needed to open all existing pages to create the corresponding Disqus discussion threads.
  • As in the static HTML case I do not have any post identifiers that could be used as discussion thread identifiers, I needed to ensure that the new blog post URLs are exactly the same as the old ones (in my case I needed to add / at the end of URLs, as a URL without an ending / is considered a different URL by Disqus).
  • Some comments in the export file had the wrong date in the URL (in cases when a draft was prepared earlier than the post was published) and I needed to fix that in the export file.

So be prepared that you will need to import and then delete imported comments several times :)

RSS / Atom feeds

If you have existing subscribers to your RSS or Atom feed then you either need to keep the same URL for the new feed or redirect it to the new feed URL. In my case I created a new Feedburner feed and redirected the old feed URL to the new one in the .htaccess file, as shown below.
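
Such a redirect is a one-liner in .htaccess; the old feed path and the Feedburner feed name here are assumptions, so adjust them to your own URLs:

# send old feed subscribers to the new Feedburner feed
Redirect 301 /feed/ http://feeds.feedburner.com/rayapps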

Other URL mappings

In my case I renamed categories to tags in my blog posts and URLs, but as these old category URLs were indexed by Google and showing up in Google search results, I redirected them as well in the .htaccess file.

Search

If you want to allow search in your blog then the easiest way is to add a Google search box with the sitesearch parameter; an example follows.
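
One classic pattern for this (the domain below is illustrative) is a plain HTML form that submits to Google with a hidden sitesearch field:

<form action="http://www.google.com/search" method="get">
  <input type="hidden" name="sitesearch" value="blog.rayapps.com" />
  <input type="text" name="q" />
  <input type="submit" value="Search" />
</form>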

Analytics

Previously I used the standard wordpress.com analytics pages to review statistics; now I have added Google Analytics for that purpose.

Deployment

Finally, after all the migration tasks, I was ready to deploy my blog to production. As I had an account at Dreamhost, I decided it was good enough for static HTML hosting.

I created rake tasks for deployment that use rsync for file transfer, and now I can just run rake deploy to generate the latest version of the site and transfer it to the hosting server; a sketch follows.
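
A minimal sketch of such a deployment task - the remote user, host and directory are placeholders, and jekyll here is the pre-1.0 command that generates the site into _site by default:

desc 'Regenerate the site and rsync it to the hosting server'
task :deploy do
  sh 'jekyll'
  sh 'rsync -avz --delete _site/ user@example-host:~/blog.example.com/'
end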

After that I needed to remap the DNS name blog.rayapps.com to the new location and wait several hours until this change propagated over the Internet.

Additional HTML generation speed improvements

When I was doing regular HTML re-generation using jekyll I noticed that it started to get quite slow. After investigation I found out that the majority of the time was spent on Pygments execution for code highlighting. To fix this I found jekyll patches that implemented Pygments results caching, and I added them as a "monkey patch" to my repository (it stores cached results in a _cache directory; the idea is sketched below). After this patch my HTML re-generation happens instantly.
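
The idea behind the caching patch, as a minimal sketch - not the actual jekyll patch. run_pygments is a hypothetical stand-in for whatever shells out to pygmentize; only the cache wrapper matters here:

require 'digest'
require 'fileutils'

CACHE_DIR = '_cache'

def highlight_with_cache(code, lexer)
  key  = Digest::MD5.hexdigest("#{lexer}\x00#{code}")
  path = File.join(CACHE_DIR, key)
  return File.read(path) if File.exist?(path)   # cache hit: skip Pygments

  html = run_pygments(code, lexer)              # the expensive call (hypothetical)
  FileUtils.mkdir_p(CACHE_DIR)
  File.write(path, html)
  html
end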

My blog repository

I published the ‘source code’ of my blog on GitHub, so you can use it as an example if I have convinced you to migrate to Jekyll as well :)

The whole process took several days but now I am happy with my new “geek blogging platform” and can recommend it to others as well.

Categories: Development
