Feed aggregator

SOA Suite 11g advanced training experiences

Jornica - Sun, 2012-07-22 09:20
Read my report from Oracle Fusion Middleware Summer Camps in Munich: SOA Suite 11g advanced training experiences.

Using History Keys in SQL*Plus

Barry McGillin - Thu, 2012-07-19 13:57
I was working through a bug the other day using SQL*Plus, which for the most part doesn't annoy me too much.  However, one of the things that does is having to retype lots of stuff. (We don't have that problem in SQL Developer.)

Having hunted around for a few minutes, I found rlwrap, which is a GNU readline wrapper.  All this means is that when we use it with SQL*Plus, it gives us keyboard history and user-defined completion.  I've found a few posts about it too, which are referred to below, but I wanted to set this up for our virtual machine.

We use our Oracle Developer Days VM a lot internally, as it's great for spinning up a database with a full environment ready to play with and test features.  I'm using it for this post.

You can download rlwrap from here.  There are RPMs available too.  I pulled down the tarball; expand it out and you have a bunch of files for a standard build.
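Unpacking is the usual routine. As a sketch, assuming the 0.37 tarball (the exact filename may differ):

[root@localhost ~]# tar xzvf rlwrap-0.37.tar.gz
[root@localhost ~]# cd rlwrap-0.37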

First, we need to run the ./configure script to find all the dependencies.  You can see a cut-down version of its output below.



checking for tgetent in -lcurses... no
checking for tgetent in -lncurses... no
checking for tgetent in -ltermcap... no
configure: WARNING: No termcap nor curses library found
checking for readline in -lreadline... no
configure: error:

You need the GNU readline library(ftp://ftp.gnu.org/gnu/readline/ ) to build
this program!


[root@localhost rlwrap-0.37]#

Running configure on my system flagged that I didn't have the readline package installed.   However, when I went to install it with

[root@localhost ~]# yum install readline
Loaded plugins: security
Setting up Install Process
Package readline-5.1-3.el5.i386 already installed and latest version
Nothing to do

I discovered it was already installed.  A quick look through the config.log from the configure process, though, showed that the -lreadline library dependency could not be satisfied: it needed the development package to build.


[root@localhost rlwrap-0.37]# yum install readline-devel


Total download size: 202 k
Is this ok [y/N]: y
Downloading Packages:
(1/2): libtermcap-devel-2.0.8-46.1.i386.rpm | 56 kB 00:00
(2/2): readline-devel-5.1-3.el5.i386.rpm | 146 kB 00:01
--------------------------------------------------------------------------------
Total 85 kB/s | 202 kB 00:02
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : libtermcap-devel 1/2
Installing : readline-devel 2/2

Installed:
readline-devel.i386 0:5.1-3.el5

Dependency Installed:
libtermcap-devel.i386 0:2.0.8-46.1

Complete!
[root@localhost rlwrap-0.37]#


OK, now to try configure again...

configure: creating ./config.status
config.status: creating Makefile
config.status: creating filters/Makefile
config.status: creating doc/Makefile
config.status: creating src/Makefile
config.status: creating doc/rlwrap.man
config.status: creating config.h
config.status: executing depfiles commands

Now do:
make (or gmake) to build rlwrap
make check for instructions how to test it
make install to install it

[root@localhost rlwrap-0.37]#


Running configure again succeeded in creating my Makefile. Great.  Now run the following to build it and install it in the right place, and we should be getting places.


[root@localhost rlwrap-0.37]# make

and

[root@localhost rlwrap-0.37]# make install


Great. Now, rlwrap is installed in /usr/local/bin and we can use it in our oracle terminal window.


[oracle@localhost rlwrap-0.37]$ rlwrap
Usage: rlwrap [options] command ...

Options:
-a[password:] --always-readline[=password:]
-A --ansi-colour-aware
-b <chars> --break-chars=<chars>


Now we can use rlwrap to run SQL*Plus, which gets me back to what I wanted to do at the start.  I've kicked this off with the '-c' option.


[oracle@localhost ~]$ rlwrap -c sqlplus barry/barry

SQL*Plus: Release 11.2.0.2.0 Production on Thu Jul 19 17:51:51 2012

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

BARRY@ORCL>

Now my up and down arrows work, AND with the '-c' option for rlwrap we get filename completion for free.

BARRY@ORCL> @re
remote.sql reset_imdbcache reset_xmldb
repos/ reset_sqldev reset_xmldb~
reset_OE.sql reset_svn
reset_apex reset_xdbPorts.sql
BARRY@ORCL> @re


So, now I'm a lot happier and can zip through loading files and getting my previous statements.

Now, I know there are issues with using this when we redirect files into SQL*Plus, noted on other blogs like this one from Lutz Hartmann, but for me, working with SQL*Plus in a terminal window, this will do nicely.
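One last convenience: to make the wrapper the default for the oracle user, an alias in ~/.bashrc does the trick. A minimal sketch (bash won't expand the alias recursively within itself, so reusing the name is safe):

alias sqlplus='rlwrap -c sqlplus'

After that, a plain 'sqlplus barry/barry' picks up the history and completion automatically.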

OWB 11gR2 – Parallelization using DBMS_PARALLEL_EXECUTE

Antonio Romero - Wed, 2012-07-18 10:08

As well as all of the parallel query and DML capabilities exposed in OWB, the 11gR2 release of OWB includes out-of-the-box support for the DBMS_PARALLEL_EXECUTE package in 11gR2 of the Oracle database. There are various articles on the web on how to do the grunt work by coding it manually (see here for example). The use cases for this range from Oracle techie stuff, like managing rollback segments for large table updates, to parallelizing your PL/SQL mappings, whatever they may be. Here we will see how, with a couple of configuration settings in OWB, you can self-parallelize mappings and divide and conquer the olde fashioned way.

The use case generally mentioned for this is large table updates: basically, it's dividing a BIG problem into smaller pieces. The package also exposes a resume capability for reprocessing failed chunks, which is likewise exposed from OWB (when you execute a chunked mapping you can resume or start the mapping). The approach can also be used to parallelize arbitrary PL/SQL (set-based and row-based processing are supported, and the table being chunked can be the source, the target, or driven by an arbitrary SQL statement). The 11.2.0.3 patch is best for this.

Chunk by SQL Statement

The example below updates employee salaries, driving the parallel worker threads from the departments table. As you see below, the start and end department IDs are the same and we are doing a DISTINCT, so all employees in a department will be updated in each chunk; this lets us increase the salary within the EMPLOYEES table for each department in a chunk. We are using the department ID to drive the chunking: each department is processed in its own thread. We will see further down how to control the thread pool of workers.
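Under the covers this maps onto DBMS_PARALLEL_EXECUTE calls. Here is a minimal hand-coded sketch of the department-driven chunking; the task name and HR schema are hypothetical:

BEGIN
  -- register a chunking task
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'UPDATE_SAL_BY_DEPT');
  -- each (start_id, end_id) pair is the same department id: one chunk per department
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(
    task_name => 'UPDATE_SAL_BY_DEPT',
    sql_stmt  => 'SELECT DISTINCT department_id, department_id FROM hr.departments',
    by_rowid  => FALSE);
END;
/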

Note that there are other options too, such as chunking by ROWID or by an arbitrary SQL statement.

The mapping, now using olde fashioned divide and conquer, has new runtime properties that relate back to the DBMS_PARALLEL_EXECUTE package; these include the resume functionality for processing failed chunks and the parallel level, so you can change at runtime whether 1, 2 or n threads are used to process the chunks.

Here we are saying that we want 2 child processes processing the chunks; this is a runtime parameter. It's wise to think about the number of chunks and the number of child processes to ensure an optimal execution plan. The image below depicts the code that gets generated with chunk parallel level 2 and parallel level 4; essentially the package's 'main' procedure uses DBMS_PARALLEL_EXECUTE to process the heart of the mapping in the child processes.

There is much more to the package than meets the eye. The resume capability, for example, provides a way of reprocessing failed chunks rather than reprocessing everything again. This is also exposed as a runtime parameter for the chunked OWB mapping, so you can resume and reprocess only the failed chunks.

Chunk by ROWID

This is the classic large table update example that divide and conquer is typically used for. This example updates employee salaries, driving the parallel worker threads from rows in the target EMPLOYEES table itself. When we configure the mapping for this example, we pick chunk method ROWID, chunk table EMPLOYEES, chunk type BY_ROWS and a size of 1000, for example. The EMPLOYEES table is set to perform an update, and I define the target filter for update for the ON clause in the merge/update; it is the 'how do I identify the rows to update within this chunk' clause. The other thing I had to do was define a chunking expression. In this case it is somewhat redundant, since the chunking is done in the ON clause of the update, so we can trick OWB by just saying 'start=start and end=end' using the variables. If you don't specify this, OWB will complain that the chunking expression is invalid.

So the MERGE statement within the chunking code will increase the SALARY for the chunk; you can see the expression to increase salary by, the dummy chunking expression used in selecting from the source (DUAL), and which rows to match: the rows in this chunk. A hand-coded equivalent is sketched below.
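For comparison, here is a minimal hand-written sketch of the same ROWID-driven update using the package directly; the task name, schema and 10% raise are hypothetical:

BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'UPDATE_SALARIES');
  -- carve EMPLOYEES into chunks of roughly 1000 rows by ROWID
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'UPDATE_SALARIES',
    table_owner => 'HR',
    table_name  => 'EMPLOYEES',
    by_row      => TRUE,
    chunk_size  => 1000);
  -- :start_id and :end_id are bound to each chunk's ROWID range at runtime;
  -- parallel_level is the worker pool size discussed above
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'UPDATE_SALARIES',
    sql_stmt       => 'UPDATE hr.employees SET salary = salary * 1.1
                       WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 2);
  -- after fixing the cause of any failures, RESUME_TASK reruns only the failed chunks
END;
/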

This lets us perform large table updates in chunks and drive the parallelized mapping using mapping input parameters.

The parallelization for PL/SQL (or row-based mappings) is an interesting case. Take match-merge, for example, which has an inherent divide and conquer strategy in its binning, yet out of the box the processing of the bins is serialized by default. If you pair the chunking criteria with the binning, you can substantially increase the performance of such mappings.

Summer Potpourri

Mary Ann Davidson - Tue, 2012-07-17 14:41

It’s summer in Idaho, and how! (It was over 90 yesterday, and people can say “it’s a dry heat” until the cows come home with heatstroke. My oven is dry, but I don’t want to sit in that, either.) The air is scented with sagebrush (the quintessential Idaho smell), the pine needles I am clearing from my side yard, the lavender and basil in my garden, and the occasional ozone from imminent thunderstorms. It’s those snippets of scent that perfume long, languid summer days here.  With that, I offer my own “literary potpourri” – thought snippets in security.

Digital Pearl Harbor

In the interests of greater service to my country, I would like to propose a moratorium on the phrase “digital (or “cyber”) Pearl Harbor.”  This particular phrase is beyond tiresome cliché, to the point where it simply does not resonate, if it ever did.  The people who use it for the most part demonstrate an utter lack of understanding of the lessons of Pearl Harbor.  It’s as bad as the abuse of the word “decimate,” the original meaning of which was to kill every 10th person (a form of military discipline used by officers in the Roman Army). Unless you were one of the ten, it hardly constituted “utter annihilation,” the now generally accepted usage of “decimate.”  The users and abusers of the phrase “digital Pearl Harbor” typically mean to say that we are facing the cyber equivalent of:


-- A sneak attack

-- That “destroys the fleet” – leaving something critical to our national defense or national well-being in a state of ruin/devastation

-- With attendant loss of life

Not to go on and on about the Pacific War – not that it isn’t one of my favorite topics - but Pearl Harbor turned out to be a blessing in disguise for the United States.  Note that I am not saying it was a blessing for the 2403 men and women who died in the attack (not to mention those who were wounded). In support of my point:

1) The Japanese attack on Pearl Harbor brought together the United States as almost nothing else could have. On December 6, 1941, there were a lot of isolationists. On December 8, 1941, nary a one.  (For extra credit, Hitler was not obligated to declare war on the United States under the terms of the Tripartite Agreement, yet he did so, which means – as if we needed one – we also had a valid excuse to join the European war.)

2) The Japanese did not help their cause any by – due to a timing snafu – only severing diplomatic relations after the attack had begun, instead of before. Thus, Pearl Harbor wasn’t just a sneak attack, but a sneak attack contrary to diplomatic norms.  

3) Japan left critical facilities at Pearl Harbor intact, due to their failure to launch another attack wave. Specifically, Japan did not destroy the POL (petroleum, oil and lubricant) facilities at Pearl Harbor. They also did not destroy the shipyards at Pearl Harbor. Had the POL facilities been destroyed, the Pacific Fleet would have had to function out of San Francisco (or another West Coast port) instead of Hawai’i.  As for the dry-dock facilities, most of the ships that were sunk at Pearl Harbor were ultimately raised, refurbished, and returned to the fleet.  (Special kudos to the divers who did that work, who do not generally get the credit they deserve for it.) The majority of ships on Battleship Row were down on December 7 -- but far from out. 

4) The attack forced the US to rely on their aircraft carriers. Famously, the US carriers were out to sea during the attack, unfortunately for the Japanese, who truly wanted to sink carriers more than battleships.  Consequently, the US was forced (in the early months of the war) to rely on their carriers, and carriers were the future of war-at-sea.  (The Japanese ship Yamato, which, with her sister Musashi, were the largest battleships ever built, was a notable non-force during the war.) 

5) Japan was never going to prevail against the industrial might of the United States.  Famously, ADM Isoroku Yamamoto -- who had initially opposed the attack on Pearl Harbor -- said, “I can run wild for six months … after that, I have no expectation of success.” It was almost exactly six months between December 7, 1941 and the battle of Midway (June 4-6, 1942), which was arguably the turning point of the Pacific War.

December 7, 1941 was and always will be “a date that shall live in infamy,” but Pearl Harbor was not the end of the United States.  Far from it.

Thus, the people who refer to “Cyber Pearl Harbor” or “Digital Pearl Harbor”(DPH) are using it in virtually complete ignorance of actual history.  Worse, unless they can substantiate some actual “for instances”  that would encompass the basic “here’s what we mean by that,” they run the risk of becoming the boy who cried cyber wolf.  Specifically, and to return to my earlier points:

-- A sneak attack

With the amount of cyber intrusions, cyber thefts, and so on, would any cyber attack (e.g., on critical infrastructure) really be “a sneak attack” at this point? If we are unprepared, shame on us. But nobody should be surprised. Even my notably technophobic Mom reads the latest and greatest articles on cyber attacks, and let’s face it, there are a lot of them permeating even mainstream media. It’s as if Japan had attacked Seattle, Los Angeles and San Diego and pundits were continuing to muse about whether there would be an attack on Pearl Harbor. Duh, yes!

-- That “destroys the fleet” – leaving something critical to our national defense or national well-being in a state of ruin /devastation

If we know of critical systems that are so fragile or interdependent that we could be ruined if they were “brought down,” for pity’s sake let’s get on with fixing them. For the amount of time pundits spend opining on DPH, they could be working to fix the problem.  Hint: unless Congress has been taken over by techno-savvy aliens and each of the members is now a supergeek, they cannot solve this problem via legislation. If a critical system is laid low, what are we going to say? “The answer is – more laws!” Yessirree, I had no interest in protecting the power grid before we were attacked, but golly jeepers, now there’s a law, I suddenly realize my responsibilities. (Didn’t we defeat imperial Japan with – legislation? Yep, we threw laws at the Japanese defenders of Tarawa, Guadalcanal, and Peleliu. Let’s give Marines copies of the Congressional Record instead of M4s.)

-- With attendant loss of life

It’s not inconceivable that there are systems whose failures could cost lives. So let’s start talking about that (we do not, BTW, have to talk about that in gory detail wherein Joe-Bob Agent-of-a- Hostile-Nation-State now has a blueprint for evil). If it -- loss of life -- cannot be substantiated, then it’s another nail in the coffin of using DPH as an industry scare tactic.

To summarize, I have plenty of concerns about the degree to which people rely on systems that were not designed for their threat environments, and/or that embed a systemic risk (the nature of which is that it is not mitigateable – that’s why it is “systemic” risk).  I also like a catchy sound bite as much as the next person (I believe I was the first person to talk about a Cyber Monroe Doctrine).  But I am sick to death of “DPH” and all its catchy variants. To those who use it: stop waving your hands, and Put Up or Shut Up – read about Pearl Harbor and either map the digital version to actual events or find another CCT (Cyber Catchy Term). Cyberpocalypse, Cybergeddon, CyberRapture – there are so many potential terms of gigabit gloom and digi-doom – I am sure we as an industry can do better.

Hand waving only moves hot air around – it doesn’t cool anything off.

Security Theater

I recently encountered a “customer expectations management” issue -- which we dealt with pretty quickly -- that reminds me of a Monty Python sketch.  It illustrates the difference between “real security” and “security theater” -- those feel-good, compliance-oriented, “everybody else does this so it must be OK” exercises that typically end in “but we can’t have a security problem -- we checked all the required boxes!”  

Here goes. I was told by a particular unnamed development group that customers requesting a penetration test virtually demanded a penetration test report that showed there were vulnerabilities, otherwise they didn’t believe it was a “real” report.  (Sound of head banging on wall.) I’d laugh if the stakes weren’t so high and the discussion weren’t so absurd.

If the requirement for “security” is “you have to produce a penetration test that shows there are vulnerabilities,” that is an easy problem to solve. I am sure someone can create a software program that randomly selects from, say, the oodles of potential problems outlined in the Common Weakness Enumeration (CWE), and produces a report showing one or more Vulnerabilities Du Jour. Of course, it probably needs to be parameterized (e.g., you have to show at least one vulnerability per X thousand lines of code, you specify primary coding language so you can tailor the fake vulnerabilities reported to the actual programming language, etc.).  Rather than waste money using a really good tool (or hiring a really good third party), we can all just run the report generator. Let’s call it BogusBreakIt. “With BogusBreakIt, you can quickly and easily show you have state-of-the-art, non-existent security problems – without the expense of an actual penetration test! Why fix actual problems when customers only want to know you still have them? Now, with new and improved fonts!”

With apologies to the Knights Who Say, “Ni,” all we are missing is a shrubbery (not too expensive). The way this exercise should work is that instead of hiring Cheap and Bad Pen Testers R Us to make your customers feel good about imperfect security (why hire a good one to find everything if the bar is so low?), you do the best you can do, yourselves, then, where apropos, hire a really good pen tester, triage the results, and put an action plan together to address the crappiest problems first. If you provide anything to customers, it should not be gory details of problems you have not fixed yet, it should be a high level synopsis with an accompanying remediation plan.  Any customer who really cares doesn’t want “a report that shows security problems,” but a report that shows the vendor is focused on improving security in priority order – and, of course, actually does so.  

Moral: It’s hard enough working in security without wasting scarce time, money, and people on delivering shrubberies instead of security.

CSSLEIEIO

I don’t think anybody can doubt my commitment to assurance – it’s been my focus at Oracle for most of the time I’ve worked in security, going on 20 years (I should note that I joined the company when I was 8). It is with mixed feelings that I say that while I was an early (grandfathered) recipient of the Certified Secure Software Lifecycle Professional (CSSLP) certification, I have just let it lapse. I’m not trying to bash the Information Systems Security Certification Consortium (ISC(2)), the developer and “blessing certifier” of the CSSLP, and certainly not trying to denigrate the value of assurance, but the entire exercise of developing this certification never sat well with me. Part of it is that I think it solves the wrong problem – more on which later. Also, my cynical and probably unfair comment when I heard that ISC(2) was developing the CSSLP certification was that, having saturated the market for Certified Information Systems Security Professionals (CISSPs), they needed a new source of revenue. (You do wonder when you look at the number of business cards with a multiplicity of alphabet soup on them: CISM, CISSP, CSSLP, EIEIO (Ok, I made that last one up).)

I let my CSSLP lapse because of a) laziness and b) the fact that I got nothing from it.  Having a CSSLP made no difference to my work or my “professional standing.” It added no value to me personally, or, more importantly, to assurance at Oracle. I started working in assurance before the CSSLP dreamer-uppers ever thought of Yet Another Certification, and my team (and many others in Oracle) have proceeded to continue to try to improve our practices. ISC(2) had no impact on that. None. Why was I paying X dollars to an organization to “bless” work that we were doing anyway – work that did not add one iota to our knowledge or practices?  (To be fair, some of the people who work for me have CSSLPs and have kept them current. I can’t speak for what they think they get out of having a CSSLP.)

We have an EIEIO syndrome in security – so many certifications, and really, is security any better? I don’t know. I don’t know how much difference many of these certifications make, except that job search engines apparently look for them as keywords. Many certifications across various industries are used as barriers to market entry, to the point that some forward-thinking states are repealing these requirements as being anti-competitive. (And really, do we need a certification for interior decorators? Is it that critical that someone know the difference between Rococo and Neoclassical styles? C’mon!) In some areas, certifications probably do make sense. Some of them might be security areas. But it doesn’t do any good if the market is so fragmented and we are adding more certifications just to add them. And it really doesn’t do any good if we have too many of them to the point where the next one is JASC – just another security certification. CSSLP felt like that to me.  

I certainly think assurance is important. I just do not know – and remain really, really unconvinced – that having an assurance “certification” for individuals has done anything to improve the problem space. As I’ve opined before, I think it would be far, far better to “bake” security into the computer science and related degree programs than try to “bolt on” assurance through an ex post facto certification. It’s like an example I have used before: the little Dutch boy, who, in the story, put his fingers in leaks in a dike to prevent a flood. We keep thinking if only we had more little Dutch boys, we can prevent a flood. If we don’t fix the “builders” problem, we – and all the Dutch boys we are using to stem the flood – will surely drown.


I am not perfect. As good as my team is -- and they and many others in Oracle are the real builders of our assurance program -- they are not perfect. But I stand on our record, and none of that record was, in my opinion, affected one iota by the presence or absence of CSSLPs among our practitioners.

If I may be indulged:

Old MacDonald had some code

EIEIO

And in that code he had some flaws

EIEIO

With a SQL injection here and an XSS there

Run a scan, fuzz your code

Everywhere a threat model

Old MacDonald fixed his code

EIEIO*

Last Bits

I mentioned in an earlier blog, in a truly breathtaking example of self-promotion, that my sister Diane and I (writing as Maddi Davidson) had written and published the first in an IT industry-based murder mystery series, Outsourcing Murder. There are two exciting bits of news about that: first, book 2, Denial of Service, is almost (OK, 90%) done.  Stay tuned. The second bit is that Outsourcing Murder made a summer reading list for geeks.  It’s probably the only time in my life I (or rather, my sister and I) will appear on a list with Kevin Mitnick and Neal Stephenson.

*Ira Gershwin would turn over in his grave – I know I’ll never make it as a lyricist.

Should you go to OpenWorld 2012

Andrews Consulting - Wed, 2012-07-11 12:25
Oracle OpenWorld is still over 11 weeks away (September 30 – October 4), but those considering going need to decide soon whether to participate this year. Registration costs will increase by $300 at the end of this week. More importantly, hotel and flight options become scarcer and more expensive with each passing day. OpenWorld has become […]
Categories: APPS Blogs

Resizing the filesystem using Logical Volume Manager within Oracle Linux

Ittichai Chammavanijakul - Fri, 2012-07-06 20:53

Oftentimes we run into the situation where a file system is full (or almost full) and more space is needed. This expansion task is a lot easier when using the Logical Volume Manager (LVM) in Oracle Linux.

  • Review the current size.
[root@ol6 ~]# df -H
Filesystem                 Size   Used  Avail Use% Mounted on
/dev/mapper/vg_ol6-lv_root 27G    22G   3.8G  86% /
tmpfs                      1.3G   209M   1.1G  17% /dev/shm
/dev/sda1                  508M    97M   385M  21% /boot
Downloads                  750G   172G   578G  23% /media/sf_Downloads

Plan to add 30G to the root file system.

  • Create a partition on the newly-added disk.
[root@ol6 ~]# fdisk /dev/sdf
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x0c04311f.
Changes will remain in memory only, until you decide to write them.

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-3916, default 1): 
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-3916, default 3916): 
Using default value 3916

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
  •  Create a physical volume on top of it.
[root@ol6 ~]# pvcreate /dev/sdf1
  Writing physical volume data to disk "/dev/sdf1"
  Physical volume "/dev/sdf1" successfully created
  • Review the current volume group. Note that currently there are no free extents (shown by the zero value of “Free PE / Size”).
[root@ol6 ~]# vgdisplay
  --- Volume group ---
  VG Name               vg_ol6
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               29.51 GiB
  PE Size               4.00 MiB
  Total PE              7554
  Alloc PE / Size       7554 / 29.51 GiB
  Free PE / Size        0 / 0
  VG UUID               2e2VHd-Mb3D-Uz0G-4Yec-tbfe-f3cI-7cvpby
  • Extend this volume with a new disk.
[root@ol6 ~]# vgextend vg_ol6 /dev/sdf1
  Volume group "vg_ol6" successfully extended
  • Check the volume again. The “Free PE / Size” is now 30G.
[root@ol6 ~]# vgdisplay
  --- Volume group ---
  VG Name               vg_ol6
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               59.50 GiB
  PE Size               4.00 MiB
  Total PE              15233
  Alloc PE / Size       7554 / 29.51 GiB
  Free PE / Size        7679 / 30.00 GiB
  VG UUID               2e2VHd-Mb3D-Uz0G-4Yec-tbfe-f3cI-7cvpby
  • Now let’s review the logical volume.
[root@ol6 ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg_ol6/lv_root
  LV Name                lv_root
  VG Name                vg_ol6
  LV UUID                rd2d4X-vqE8-xENi-clCz-Oa0T-0R6X-RFCBDq
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 1
  LV Size                25.10 GiB
  Current LE             6426
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/vg_ol6/lv_swap
  LV Name                lv_swap
  VG Name                vg_ol6
  LV UUID                xM3Blz-wvpG-IUfF-WhWc-EHoI-I0xG-oeV1IR
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 1
  LV Size                4.41 GiB
  Current LE             1128
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

We want to add an additional 30G to the existing /dev/vg_ol6/lv_root, so the total size will be 55.10 GiB.

  • We can extend the logical volume to the needed size.
[root@ol6 ~]# lvextend -L 55.10G /dev/vg_ol6/lv_root
  Rounding size to boundary between physical extents: 55.10 GiB
  Extending logical volume lv_root to 55.10 GiB
  Insufficient free space: 5120 extents needed, but only 5119 available

You may have to adjust the size downward if the initially specified size is slightly too large.

[root@ol6 ~]# lvextend -L 55G /dev/vg_ol6/lv_root
  Extending logical volume lv_root to 55.00 GiB
  Logical volume lv_root successfully resized
  • Now finally you can extend the file system.
[root@ol6 ~]# resize2fs /dev/vg_ol6/lv_root 55G
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/vg_ol6/lv_root is mounted on /; on-line resizing required
old desc_blocks = 3, new_desc_blocks = 4
Performing an on-line resize of /dev/vg_ol6/lv_root to 14417920 (4k) blocks.
The filesystem on /dev/vg_ol6/lv_root is now 14417920 blocks long.
  • The file system is resized while the system is still on-line.
[root@ol6 ~]# df -H
Filesystem                 Size   Used  Avail Use% Mounted on
/dev/mapper/vg_ol6-lv_root  55G    22G    33G  39% /
tmpfs                      1.3G   209M   1.1G  17% /dev/shm
/dev/sda1                  508M    97M   385M  21% /boot
Downloads                  750G   172G   578G  23% /media/sf_Downloads
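As an aside, newer lvm2 releases can collapse the extend-and-resize into a single step: the -r (--resizefs) flag makes lvextend grow the file system along with the logical volume. A sketch using the same names as above (check that your release supports the flag first):

[root@ol6 ~]# lvextend -r -L +30G /dev/vg_ol6/lv_root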


Categories: DBA Blogs

Java 7 and Forms 11

Gerd Volberg - Tue, 2012-07-03 02:17
Some days ago a certification document was updated with some interesting info:


Link to the certifications

In this Excel sheet, Forms 11.1.1.6 is certified with JRE 1.7.0_04+. Very good news! And now we can hope that Forms 11.1.2 gets the same certification soon, because it is the latest version available on OTN:



And last but not least: "Please Oracle, certify Oracle Forms 10g with JRE 7!"
Gerd

Kscope '12 and the Rodeo Special Event

Marc Sewtz - Sat, 2012-06-30 21:10

Just arrived back home in New York after a great week at the ODTUG Kscope ’12 conference in San Antonio, TX. It was great meeting many members of our ever-growing APEX community again, seeing several outstanding presentations on APEX and other topics, and being able to show our customers what’s new in Oracle Application Express 4.2, including the mobile features we’ve been working on over the past couple of months.

If you've been to our sessions and want to try out for yourself what's new, or if you're just curious what APEX is all about, please sign up for our Early Adopters hosted site at:

https://apexea.oracle.com 

And thank you, Kscope, for organizing yet another amazing special event on Wednesday night. I truly enjoyed the Rodeo, certainly something you don’t see every day in New York City. I’ve put together a few scenes from the event and uploaded them to YouTube; please check out the video here:



Do You Know What Day It Is?

Kenneth Downs - Sat, 2012-06-30 07:51
Have you ever sat in on a meeting like this?

Person 1:  The customer wants a green background on this page.

Person 2: Do we know they're going to stick with green?  Maybe we need a configuration option for background color.

Nobody wants to disagree and the manager never comes to these boring design meetings so there is quiet before somebody says...

Person 3: A configuration option just for background color is kind of weak, we'd be better off allowing choices for all colors.

Person 4: Is this an instance level configuration, or do we let the end-user pick her color?

Person 2 (the original troublemaker): Well, it really has to be instance level, but the end-user can override; that way we satisfy all possibilities.

From here the debate continues until they've decided to create a skinning system with lots of UI color pickers and other stuff, and they wrap up with:

Person 6: Well this sounds great, but what should we make the default background color?

...and nobody can remember that:

The customer specifically asked for green, only green, and nothing but green.

Is this an exaggeration?  Perhaps, but I've sat in meetings that come close.  Seems to me that I've sat in meetings that were worse, but I'd hate to slander anybody, and being as I'm not perfect myself, we'll leave that one alone for now.

So anyway, there were many things going wrong in that meeting, but we're going to stick with the simple fact that nobody there knew this one simple rule, the first rule of system design, which should be burned into your brain, which you should repeat before and during any design or architecture meeting:

Today's Constant is Tomorrow's Variable.

This is not so much a rule as an observation, but when you realize that it is true on almost any level, it can become a guiding rule, something that actually lets you make decisions with confidence, decisions that turn out to work well.

The problem is that most people in technology don't know what day it is.  They make one of two mistakes, they either:
  1. constantly plan, estimate, design, or program for tomorrow's variable when all they need to do is handle the simpler case of today's constant.
  2. or they don't realize that when "the customer changed the requirements!!!" the customer is doing what everybody does, taking something simple from yesterday and making it a bit more complex today.
So you can turn this around and say, "Everything I do today will get changed tomorrow.  It will become more complicated.  Everything I think is fixed or constant will float or become a variable."

What It Means
We've all learned (hopefully) that you don't embed constants in code because it makes the code difficult to change.  We define the constants in headers, or we accept all values from outside sources as parameters.  This makes the code more flexible, and it is a good thing.  The code is more robust, it can handle more cases without breaking or needing alteration.

The trick to using Ken's first and only rule of system design, "Today's Constant is Tomorrow's Variable" is to recognize the many forms of "constants" that we build into our systems in hard-to-reach places, making it hard to change them when tomorrow comes and they are no longer fixed and constant.

Another trick to using the rule is to always know what day it is.  Most of what I do today will involve features that may never change.  It is extremely easy to see how they might change, but impossible to know for sure.  So we leave them constant for today.

Back To That Meeting
Let's go back to that meeting we started with.  Here is how it goes when somebody knows that "Today's Constant is Tomorrow's Variable."


Person 1:  The customer wants a green background on this page.

Person 2: Do we know they're going to stick with green?  Maybe we need a configuration option for background color.

Our Hero:  We don't know they are going to stick with green because we never know if any customer is ever going to stick with anything, and we all know that today's constant is tomorrow's variable.  However, I hope we've put the styles into a style sheet and not embedded them directly into the HTML so we can change it later if we have to, right?


Somebody mumbles that yes in fact we do use style sheets, and the meeting moves on.

Is That a Lame Example?
That's a pretty lame example really, who doesn't use style sheets?  And who embeds constants in their code?

Well, kiddies, it turns out that we didn't always use style sheets.  When I got interested in web pages, CSS was still optional (yes! believe it!) and you put your style information directly into tags, which was basically embedding constants into code.  It didn't take long before you intuitively realized this was not right, and you discovered CSS.
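To make the parallel concrete, here is a tiny before-and-after (the class name is made up):

<p style="background-color: green">The customer's page</p>   <!-- the "constant" embedded in code -->

versus, with the constant pulled out into a style sheet:

p.customer-page { background-color: green; }   /* one place to change it tomorrow */
<p class="customer-page">The customer's page</p>

When green inevitably becomes a variable, the first version means hunting through markup; the second means editing one line.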

It is amazing how often that basic pattern repeats itself, trying to identify what you thought was a constant, realizing it is "buried in code", and turning it into a variable.

Today's Post Is Tomorrow's Promise

I'll be posting a lot more on this subject, using a very reliable and strict schedule based on a host of variables that mostly comes down to whenever-the-hell-I-feel-like-it(TM).  

Cheers.

Categories: Development

Google Compute Engine, the compete engine

William Vambenepe - Fri, 2012-06-29 01:16

Google is going to give Amazon AWS a run for its money. It’s the right move for Google and great news for everyone.

But that wasn’t plan A. Google was way ahead of everybody with a PaaS solution, Google App Engine, which was the embodiment of “forward compatibility” (rather than “backward compatibility”). I’m pretty sure that the plan, when they launched GAE in 2008, didn’t include “and in 2012 we’ll start offering raw VMs”. But GAE (and PaaS in general), while it made some inroads, failed to generate the level of adoption that many of us expected. Google smartly understood that they had to adjust.

“2012 will be the year of PaaS” returns 2,510 search results on Google, while “2012 will be the year of IaaS” returns only 2 results, both of which relate to a quote by Randy Bias which actually expresses quite a different feeling when read in full: “2012 will be the year of IaaS cloud failures”. We all got it wrong about the inexorable rise of PaaS in 2012.

But saying that, in 2012, IaaS still dominates PaaS, while not wrong, is an oversimplification.

At a more fine-grained level, Google Compute Engine is just another proof that the distinction between IaaS and PaaS was always artificial. The idea that you deploy your applications either at the IaaS or at the PaaS level was a fallacy. There is a continuum of application services, including VMs, various forms of storage, various levels of routing, various flavors of code hosting, various API-centric utility functions, etc. You can call one end of the spectrum “IaaS” and the other end “PaaS”, but most Cloud applications live in the continuum, not at either end. Amazon started from the left and moved to the right; Google is doing the opposite. Amazon’s initial approach was more successful at generating adoption. But it’s still early in the game.

As a side note, this is going to be a challenge for the Cloud Foundry ecosystem. To play in that league, Cloud Foundry has to either find a way to cover the full IaaS-to-PaaS continuum or it needs to efficiently integrate with more IaaS-centric Cloud frameworks. That will be a technical challenge, and also a political one. Or Cloud Foundry needs to define a separate space for itself. For example in Clouds which are centered around a strong SaaS offering and mainly work at higher levels of abstraction.

A few more thoughts:

  • If people still had lingering doubts about whether Google is serious about being a Cloud provider, the addition of Google Compute Engine (and, earlier, Google Cloud Storage) should put those to rest.
  • Here comes yet-another-IaaS API. And potentially a major one.
  • It’s quite a testament to what Linux has achieved that Google Compute Engine is Linux-only and nobody even bats an eye.
  • In the end, this may well turn into a battle of marketplaces more than a battle of Cloud environments. Just like in mobile.
Categories: Other

My Oracle Support Interface Change

Michael Armstrong-Smith - Wed, 2012-06-27 19:06
If you are a user of My Oracle Support, you should have received notification regarding a change to the user interface. As of July 13, 2012, Oracle will be retiring their Flash-based interface and replacing it with an improved HTML interface that has added functionality.

There is a CALL TO ACTION regarding this change. If you follow the link, you will be directed to a page that describes the changes in detail and advises what you have to do in order to take advantage of the new user interface.

Oracle ADF with SSO – The Definitive Guide

Java 2 Go! - Wed, 2012-06-27 13:22
by Fábio Souza & Eduardo Rodrigues. Introduction: We know. It’s been a looooooong time again. But once you read this post, we are quite sure you’ll be happy we took the time to write it. And it’s also...

This is a summary only. Please, visit the blog for full content and more.
Categories: Development

Java 7-Auto-Update Warning

Gerd Volberg - Tue, 2012-06-26 06:01
It looks like all Forms applications will run into a huge problem starting next week, on July 3rd, 2012.

Oracle's Java 1.6 updater will automatically update the JRE to 1.7 on all PCs where auto-update is activated. And Java 1.7 is not certified with Forms 10g, Forms 11g or the E-Business Suite (EBS Forms components).

I will try to find all relevant information on this topic and blog about it afterwards. So feel free to share this post with all customers who may have this problem next week.


Here are the first links:

The latest news directly from E-Business Suite Development



JRE Troubleshooting Guide



OTN Forms Forum: steps to install and deinstall
Michael Armstrong describes, amongst other things, the deinstallation of Java 1.7



And here is the description of the internal problem, i.e. why we run into problems with Java 1.7:



Read it!
Gerd
