Feed aggregator

In memoriam – 3

Jonathan Lewis - Wed, 2017-10-11 07:30

My father-in-law died a few weeks ago, aged 95. This is the story that he wrote for his children and grandchildren a few years ago describing his experiences as a Naval engineer on the aircraft carrier HMS Indefatigable during the second world war.

ROY’S NAVAL CAREER

When war broke out on 3rd September 1939 I wanted to join the Navy, and a few days later I saw a  new recruiting office near Southend Pier so I went in and asked how I would be able to join. A Petty Officer looked at me and said “Well, sonny, you will have to wait until you are 18”. I was then only 17 so I continued with my plan to become an engineer. In those days parents either had to pay the full cost of university education or rely on their children gaining scholarships. In my case scholarships were essential. So, concentrating on mathematics, I took Higher School Certificate (A-Levels) in July 1938 and July 1939, but did not gain any scholarships. At that time I was Head Boy at Lindisfarne College and in late September the school was evacuated to North Wales from the Southend area because of fears of bombing and invasion but here the buildings were not well equipped and there was no laboratory. However, the Southend High School remained at Southend and arrangements were made to transfer me there.

In December 1939 I was awarded a Scholarship at Queens’ College, Cambridge. Then in May 1940 when the German blitzkrieg started the High School was evacuated to Mansfield in the Midlands, but there I took the HSC again and as a result gained a State Scholarship and a Southend Borough Major Scholarship, which in total was enough to see me through Cambridge. There I made friends with Denis Campbell, Stuart Glass and Edward Higham. In addition to lectures we went regularly to tutorials with a great character called (Professor) Archie Browne. He had additional duties as Steward of the College, and was responsible for obtaining food supplies and coal for heating, which was very difficult in wartime.

The course was completed in two years and, with blackouts, air raid precautions and other restrictions, social life was limited. I joined the Naval Section of the Cadet Corps and the Home Guard which took up one or two afternoons each week. I remember one exercise where we had to make a mock attack by night on an airfield some ten miles north of Cambridge. The defenders somehow knew that we would attack the SE corner and mustered there, but we made a mistake and went for the NE corner which was undefended, so we theoretically captured that bit of airfield! We had to march there and back, and the blisters lasted for weeks. On another exercise Cambridge was attacked by the Welch Fusiliers, I remember being knocked on the head and falling into a ditch half full of water. I was considered a casualty and allowed to return to college for a hot bath.

July – September 1942         I applied to join the Royal Navy as an engineer officer and had interviews at the Admiralty including medical examinations. As a result I was accepted and appointed a Probationary Temporary Acting Sub.Lieutenant (E) RNVR, and the next step was to purchase my uniform at Gieves in London, including the purple stripe denoting engineering.

October 1942         I reported to Portsmouth Barracks for four weeks training. I wore my uniform for the first time at Warminster in Wiltshire where we were living, and traveled to Portsmouth without any knowledge of how to make or receive naval salutes in public! This was soon rectified at Portsmouth where I joined twenty other trainees for the course which included instruction in naval customs and traditions, rules and regulations, security, and the all-embracing Kings Regulations and Admiralty Instructions. We also had training in small arms firing and endless square bashing under the eagle eye of Chief Petty Officer Sims, who was as tough as old boots.

November 1942 – November 1943         I was posted to John Brown’s Engineering Works at Clydebank with Donald Townend and Ian Richardson for practical marine engineering training. John Brown’s was a huge organisation which built engines as well as ships, and just after we arrived Indefatigable was launched. This was an amazing sight, seeing 30,000 tons of ship slide down the slipway into the river Clyde. Before the war the Queen Mary and the Queen Elizabeth were built on the same slipway.

The three of us were billeted with two or three other naval officers in lodgings at Glasgow where we three shared a room and were looked after by a homely landlady and her staff. Every morning we put on civilian clothes and caught a rickety old tram for a 30 minute journey to Clydebank. There we worked successively in the Pattern-shop (making wood moulds), Foundry, Boiler-shop (being deafened by riveting), Machine-shop, Fitting-shop, Pipe-shop, Drawing Office and Dockyard. We did actually work, scraping bearings, operating lathes, casting metal, always under the supervision of an experienced workman. During lunch hours we used to climb over Indefat, deafened again by riveting, but we got to know the ship. At that time the yard was completing R class destroyers at the rate of about one every fortnight, and we used to take part in their initial sea trials so gaining experience of firing up boilers and operating turbine plant.

During the summer of 1943 we got to know the permanent RN engineer officers appointed to supervise the fitting out of the ship, including Peter Sandison who looked after the flight deck gear. We were seconded to help the checking of the installation and testing of all kinds of machinery, and in November I was chosen to be officially appointed to Indefat, while the other two went off to other ships.

December 1943 – February 1944         The ship was commissioned on 8th December and taken over by the RN from the yard. After dock trials we steamed down the Clyde and carried out various trials including full power of the 148,000 HP engines, the measured mile speed test (32 knots) off the Isle of Arran, and steering and going astern trials from full ahead. I remember on one occasion the steering gear locked solid at hard-a-starboard while doing full speed. We went round in circles flying two black balls showing we were out of control! Several weeks were spent commissioning and training the crew, taking on stores and ammunition, gunnery practice, testing of radar and flight deck equipment, while some time was spent at sea.

March – June 1944         The first aircraft flew in on 23rd March, and thereafter the squadrons began to arrive. We spent days at sea practising aircraft landings and by the end of June we had a complement of some 75 aircraft including Seafires, Fireflies and Hellcats. When at sea engineer officers kept watch for four hours at a time, the middle watch (midnight to 4 am) and the morning watch (4 am to 8 am) being the worst. During a watch we had to visit each engine room and boiler room, and altogether a total of seventeen machinery spaces where each involved climbing up and down three sets of ladders, as the only passage was via the main deck. The best visit was always to a boiler room, where the Chief Stoker would provide a mug of ‘kai’, a chocolate slab heated in hot water and steam.

In addition each officer had responsibility for a department which included the operation and maintenance of all the equipment in it, and the men carrying out this work. Over the years mine included seven steam generators supplying electricity to the ship, three emergency diesel generators, motor boats, steering gear and auxiliary machinery including the big evaporators for making the ship’s fresh water from seawater. Also every six months each officer took it in turn to run the ship’s laundry for 2000 crew!

At Action Stations if not on watch each engineer officer had a Damage Control section of the ship to look after. Mine was the midships section above one of the engine rooms, and my team consisted of about ten stokers and technicians. We might be stationed there all day with only sandwiches and ‘Spotted Dick’ for lunch!

July – October 1944         Indefat joined the Fleet at Scapa Flow surrounded by battleships, cruisers and destroyers, and spent much time at sea on Russian convoy escort duty going beyond the Arctic Circle. In July we made an attack on the largest German battleship Tirpitz, which was moored in a Norwegian fiord and was always a potential menace to Russian convoys. This operation was called MASCOT and with two other carriers the aircraft carrying out the attack included 44 Barracudas, 18 Hellcats and 12 Fireflies, supported by many Seafires as fighter escorts. The weather was not good with cloud and fog around and although the Tirpitz was damaged it was decided to make another attack in mid-August. Prior to that strikes were made against some installations on the Norwegian coast and then on 18th August we sailed for the second Tirpitz attack called operation GOODWOOD. At this time a valuable convoy was en route to Russia and our job was to protect it from the Tirpitz and Uboats. The convoy did arrive safely.

Indefat aircraft included 12 Barracudas, 12 Fireflies, 12 Hellcats and 32 Seafires, and the ship was accompanied by Formidable, Furious and two small escort carriers, together with destroyers. On the first day one escort carrier was torpedoed and badly damaged, and had to return to Scapa escorted by the second small carrier. Some time later a destroyer was torpedoed and sank, with few survivors. The operation lasted for seven days with the ship at Action Stations most of the time. At one point Indefat seemed to be under serious attack by Uboats, with the ship taking evasive action and shaken by exploding depth charges from nearby destroyers, while it was reported that one torpedo passed under Indefat. GOODWOOD was successful as Tirpitz was hit several times and had to be moved to the port of Tromso for repairs, where she was later sunk by the RAF with their 10 ton Tallboy bombs. Had she remained in the narrow fiord in the lee of the mountains protected by smokescreens they might never have hit her.

Above the Arctic Circle the sun at this time only went below the horizon for a short time, which meant that our ships could be continually kept under observation by German aircraft and Uboats. There were however some fascinating panoramas of sea and sky, and I remember that one evening the ship had to steam into the wind straight for the coast and the spectacular black rugged mountains of Norway loomed up ahead. I vowed that one day I would revisit the area, and so I did with Joan during our Norwegian cruise of 1987.

Our base was Scapa Flow where we returned every few days. Occasionally we went ashore and the main treat was a visit to the NAAFI canteen which provided a large dish of bacon and baked beans. Otherwise we spent time in the wardroom eating, drinking and playing shove ha’penny or bar skittles. One day we played hockey against a team of large and ruthless Wrens, who beat us using their sticks with wild abandon.

In July more engineer officers joined the ship and I knew that one of them would occupy the vacant berth in my double cabin. I anxiously watched them come aboard and liked the look of Brian, and was very glad when he was allocated to my cabin. Then began a friendship which has lasted all our lives.

October – November 1944         We returned to Clydebank in October and made preparations for going to the Far East. Then we steamed down to Portsmouth and went into dry dock for maintenance and cleaning the ship’s bottom. After this we were ready for sailing but before doing so on 16th November the King and the Royal Family came aboard to wish us Good Luck. We were all mustered in our divisions on the flight deck, the King inspected us and then asked for a cup of tea. This caused a flap as all the cooks and stewards were mustered, and it took the duty officer nearly half an hour to find some tea and make it!

December 1944         After leaving Portsmouth we sailed to Ceylon, passing through Gibraltar, the Mediterranean, the Suez Canal, and then across the Indian Ocean arriving at Colombo on 10th December. We stopped off Algiers where our Mess Secretary went ashore and triumphantly came back with a large load of Algerian wine, which turned out to be the most awful plonk! We had Admiral Vian, the fighting Admiral, on board and at Colombo he demanded to be ferried ashore immediately in his Admiral’s Barge. This motor boat arrived on board at Portsmouth just before we sailed and was stowed in one of the hangars, where the engine could not be tested. I was in charge of boats and I insisted that the boat should have a trial run before an official trip. The Admiral was furious and came storming down the flight deck demanding an explanation, so I stood to attention quaking in my shoes and gave one. He looked me up and down and said “Right, I will give you ten minutes”. Luckily all went well. Strange how one remembers these things!

During the remainder of the month we spent time at Colombo or Trincomalee storing ship, or at sea exercising with other ships of the Fleet. Trincomalee was a beautiful harbour, and I remember Brian and I were thrilled to bring back a pineapple (which we hadn’t seen for years) to our cabin, but when with due ceremony we slit it open it was full of insects!

January 1945         On New Year’s Day we sailed in company with three other carriers, the battleship King George V, and several cruisers and destroyers for air strikes against the Japanese oil refineries at Palembang in Sumatra. The first strike took place on 4 January and about 100 aircraft took part plus 40 Seafires which provided fighter cover. The refineries were damaged but after returning to Trincomalee it was decided that further strikes would be carried out and they took place on 24 and 29 January. These were major strikes carried out by 144 aircraft for the first and 128 for the second, plus the usual fighter cover. This time the Japanese were well prepared and on several occasions the Fleet came under attack by enemy aircraft. These were fought off by our guns and aircraft, two being shot down close to Indefat. There were many air battles and we lost 41 aircraft together with many of the aircrews. This included several aircraft that were damaged by enemy action and then crashed on deck landing. The worst event was the fate of nine aircrew survivors who had to force land in Sumatra, were made prisoners, taken to Singapore and then later beheaded. The strikes were successful as the refineries produced some 50% of Japanese oil requirements and they were reduced to a standstill, only increasing back to one third capacity by the end of March. After this we steamed south for Australia and crossed the line (the Equator) with King Neptune and his cohorts “coming aboard” on 1st February. I was duly ducked and scrubbed in a makeshift swimming pool.

February 1945         We called in at Fremantle and six days later arrived in Sydney and moored at Wooloomooloo near the Harbour Bridge. The Australians were very hospitable and Brian, Colin and I were “adopted” by the Murray-Jones family with two daughters, Judy and Annabel. They would invite us home for a meal or arrange some tennis or swimming, not that there was much time as we were busy with maintenance and storing for the Pacific. Towards the end of the month we steamed north with the British Pacific Fleet under Admiral Rawlings.

March 1945         After 11 days at sea we arrived at the island of Manus and then went on to Ulithi, another island. This had an enormous harbour and was full of American ships, a total of about 1,400 preparing for the invasion of Japan. Our Fleet then became Task Force 57 operating with the American 3rd Fleet under Admiral Spruance, and consisted of three other Fleet carriers, eleven destroyers and a number of support ships including sloops, frigates, minesweepers, oil tankers and hospital ships. Sailing from Ulithi our first strike took place on 26th March against some of the Japanese islands south of Okinawa where it was estimated that the Japanese had 10,000 aircraft, of which about 4,000 were suicide bombers called Kamikazes.

Then began a series of strike days, each being a long day’s activity for the Fleet, particularly for the ships’ companies of the aircraft carriers. We would go to Action Stations at 0600 and return to Defence Stations at 2000, and periodically a “Flash Red” warning would be broadcast when enemy aircraft approached. Several air battles took place and, throughout the day, the Fleet wheeled and turned in and out of the wind for the carriers to land on and fly off strikes and fighter escorts. When the last aircraft landed on at dusk the air engineering department worked all night to repair, re-arm, and refuel aircraft ready for the next day.

April 1945         On the morning of 1 April we were hit by a Kamikaze which exploded into the flight deck and bridge structure. Because the flight deck had 3″ armour plate the damage was not catastrophic but fourteen of the crew, including the ship’s doctor, were killed and there was a lot of damage to the flight deck barrier gear and bridge communications. I was Damage Control Officer for the area and my team had to remove the casualties and start repairing the damage. I remember the whole area was flooded with hot steam, as the steam-to-ships siren pipework was broken, until I managed to telephone Y boiler room to shut off the master valve.

Peter Sandison’s team did a good job to repair arrester wires and barriers, and the ship was flying off aircraft an hour later, much to the amazement of the American ships and Admirals. The American carriers with light steel decks were very vulnerable and many of their carriers were sunk or badly damaged due to Kamikazes. On 6 and 7 April the Japanese made massive attacks on allied ships with most of them concentrated on American ships to the north of our Fleet. These attacks were made by 600 aircraft, including 355 Kamikazes, and some 380 were shot down but six American ships were sunk and twenty-one damaged. At this time the giant Japanese battleship Yamato came out on a suicide mission and was sunk by American torpedo bombers with a loss of 2,100 men.

Operations continued until the last week of April when our Fleet returned to Leyte island for refitting and oiling, having been at sea continuously for 32 days. By this time sixty support ships had arrived to provide repair and maintenance facilities. During the month I was promoted to Temporary Lieutenant (E) RN and wore my second stripe.

May 1945         On 1st May the Fleet including the carriers Indefatigable, Implacable, Indomitable, Formidable and Victorious left Leyte to resume operations against the Japanese shipping and shore installations, with Action Stations every day except for the odd day when we retired for refuelling by waiting tankers. British ships were essentially designed for Atlantic operations, and consequently there was very little air conditioning to deal with the hot climate of the Pacific. Some of the machinery spaces reached temperatures of 140°F and almost every day one’s boiler suit could be twice soaked with perspiration. After a few weeks one would suffer from prickly heat and would be painted with purple potassium permanganate, so looking like an Ancient Briton! Sleeping at night on the quarter-deck was the most comfortable time. Food was almost all dehydrated or tinned and a staple of the diet was dehydrated potato served in a variety of ways – mashed, cubed, boiled, roast or fried. There were also plenty of tins of egg powder and powdered milk!

On 4 May Formidable was hit by a Kamikaze which caused considerable damage and fires on the flight deck but the ship remained operational. Indomitable was nearly hit by another Kamikaze which was shot down and crashed some thirty feet off the starboard bow. A few days later Formidable was again hit and fires were started in the hangar, and nine aircraft were destroyed. All through this period the enemy pressed home their attacks with great skill and determination, making good use of cloud cover, decoys and variations of height. All five carriers were hit at least once by Kamikazes, but nevertheless our aircraft flew some 2500 sorties, dropped over 500 tons of bombs and destroyed about 60 aircraft, at a loss of 98 aircraft.

June 1945         At the beginning of June we returned to Sydney for vital boiler maintenance, aircraft repairs and other general refitting. This was a welcome relief after 100 days on the ship at sea and again the Murray Jones were very hospitable, so we enjoyed some tennis and swimming off Bondi Beach. Towards the end of June the Fleet sailed north again and resumed operations in co-operation with the American Third Fleet.

July – August 1945         We carried out strikes against the Japanese mainland for the first time, including airfields and installations in the Tokyo area. The routine developed of 4 or 5 days at Action Stations, then a day’s withdrawal for refuelling, and then back again for more strikes. It was a time of Action Stations, watch-keeping, eating and sleeping in a noisy, hot and tiring atmosphere, with some excitement when enemy aircraft appeared. The Flight Deck was again busy from dawn to dusk, sending off bombers and also fighters to protect the Fleet. Unfortunately many did not return, and several had accidents when landing back on. At this stage the whole of the Japanese mainland from north to south was under attack by allied ships, with the Americans concentrating on destroying the remnants of the Japanese navy. The British aircraft bombed industrial targets including shipping, oil storage tanks, railways and factories, and on two occasions the battleship King George V carried out extensive bombardment with her heavy guns.

On 4th August all ships were ordered to withdraw some 300 miles from Japan, and on the 6th the first atomic bomb was dropped on Hiroshima, and then the second on Nagasaki. Further strikes continued until the Japanese finally surrendered on 15th August. The Fleet remained at sea but on the 25th we were hit by a typhoon. The waves were awesome, I remember standing on the flight deck which was 70 ft above normal sea level, and watching waves much higher than this coming towards me. The ship was rolling 35° from one side to the other, but we survived. Three American destroyers capsized, and we saw one American carrier with a large part of its flight deck hanging over its bows, as though it had received a punch on the nose!

September 1945         The Japanese Surrender was signed on the USS Missouri in Tokyo Bay on 2nd September, much to our relief. We remained at sea, and with the American Fleet took part in an enormous “parade” of ships outside Tokyo Bay. Then we spent three days in the Bay, while some of our crew went ashore to find and collect prisoners of war and transport them to hospital ships. The famous Mount Fuji is usually covered in cloud but early one morning the tannoy broadcast that it was visible, and I remember a marvellous view of its snow-capped peak.

After this we steamed back to Sydney arriving towards the end of the month, ready for a respite after 73 days of sea time. It was time to reflect on past events, the worst being during July and August when the Fleet lost over 140 aircraft from all causes, by enemy action or deck landings. Since then there has been a lot of discussion about dropping the atomic bombs and their consequences, but to my mind the following reasons justified the decision.

  1. The Americans estimated that there would be around a half-million Allied casualties if the invasion of Japan had taken place later in the year. This did not happen.
  2. About 40,000 British and Allied prisoners of war were kept by the Japanese in horrendous conditions and most would probably not have survived another winter. They were rescued.
  3. The Japanese had some five thousand aircraft and pilots trained as Kamikazes to be used against an invasion fleet, and we would have been in the forefront of this.

October – December 1945         Indefat remained in Sydney and the crew were allowed a lot of shore leave. The Murray-Jones thoughtfully provided a flat where many of us could stay, including Brian, Colin, Peter Fanghanel and others. One highlight was when all the latter including me made up a party to go ski-ing for a week at Mount Kosciusco. We arrived at the snowline and were then told that the chalet was 12 miles away and could only be reached on skis. Some of us, including me, had not skied before but we were told “Oh, that’s OK, today is Tuesday, and there is a tractor going up on Thursday which could pick you up if you are stuck”!

This was a time of hard work and play as supplies were exhausted, the engines needed refitting, the ship needed cleaning and the typhoon had damaged part of the hull so the ship had to go into dry dock. Some of us were seconded to the dockyard to help out with various jobs and I enjoyed the use of a 500cc motor-bike.

We managed another five days on a sheep farm, again with Brian and Colin. The farm was enormous and the family relied on horses to get around. On the first day we were each provided with a horse, but I viewed this with trepidation. So evidently did the horse, as after 15 minutes he turned round and trotted home, and there was nothing I or anyone else could do to stop him! I decided that I would stick to something with a brake and throttle.

January – March 1946         On 20th January we left Sydney for the journey home. Three days later we arrived at Melbourne where we had a tremendous welcome, with a parade led by the Royal Marines band marching through the streets to the City Hall where the Governor took the salute accompanied by Admiral Vian. We stayed a week and were well entertained, then steamed across the Australian Bight which was unpleasantly rough to call in at Fremantle for a few hours before setting off for Capetown.

We arrived at Capetown after 17 days at sea. Again we were well looked after with a reception at the Governor’s Residence and an expedition to Table Mountain. This was the highlight of the visit, we took the cable car to the top with marvellous views all round and then came all the way down on foot. On 24th February we left Capetown, arriving at Gibraltar on 11th March. On the way we passed close to St. Helena and Ascension Island. The Duty Officer went ashore and paid his respects to the Governor, who presented him with a live turtle to make soup! The ship’s butcher did not think much of this so when we left the turtle was returned to the sea, and was last seen swimming happily to the shore.

This part of the trip was pleasant and not too hot, every day there were games of deck hockey on the flight deck using a rope grommet instead of a ball. At Gibraltar we stayed for one day and Peter Fanghanel was the only one of our group who managed to go ashore, he came back with a large case of Tio Pepe sherry.

Finally we arrived at Portsmouth on 16th March and berthed inside the harbour, with crowds lining the Southsea promenade and cheering as we went in. We engineers saw little of this, but we looked forward to a pint at the St. Enoch’s Hotel and then some leave. I think I had about ten days at Westcliff with Mother and Brenda, it was good to see them again after nearly 2½ years.

April – October 1946         The ship sailed again on 25th April with 130 “Bush Brides”, who were brides of Australian servicemen and were going to join their husbands to live in Australia. The voyage again was through the Mediterranean and then a brief stay at Aden. Brian, Colin and I went ashore and we asked some joker the way to the local Club for a drink. “Oh” he said “It’s that white building up on the hill”. So we trudged up the hill, knocked on the front door which was opened by a smart servant who asked what we wanted. We said we would like a drink, to which he replied that this was the Consul’s Residence. Anyway, the Consul was very decent, gave us more than one drink and we went happily back to the ship.

We arrived back in Sydney on 25th May and left again on the 9th June with over 1,000 service personnel due to be demobilised, including some RAF. We also carried 65 tons of food for Britain and about 18,000 gift parcels of food. From Fremantle the engines worked at full power and Indefat made a record-breaking non-stop trip of 21 days to Portsmouth. Then on 29th July we sailed again to Colombo and repatriated another large number of service personnel. The highlight I remember was a visit to Kandy and the Temple of the Sacred Tooth, where we were guided by Buddhist priests in their saffron yellow robes.

The last major event was a parade by the ship’s company on 19th September through Holborn in London, the borough that had “adopted” us during the war. As one of the officers with the longest-serving time in Indefat I was placed in the front rank, and there is a photo in our album. After the march we were inspected by the Mayor and then had a luncheon in the Town Hall, where our Battle Ensign flown by Indefat during Action Stations was presented to be hung in the Council Chamber. The demobilisation process was slow, but I finally left the ship and the Navy on 1st October 1946, after a wardroom party the night before! I well remember going down the gangway, walking through Portsmouth Dockyard and then out through the Main Gate, ready to face a different kind of life and world.


Use Bit to represent groups

Dylan's BI Notes - Wed, 2017-10-11 03:17
Here I am providing an alternate approach to supporting group membership in MySQL. It is a commonly seen requirement that a group may have multiple members and a person may be added to multiple groups. This many-to-many relationship is typically modeled in an intersection table. When the group membership is being used as […]
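As a rough, hypothetical sketch of the two data models being contrasted (all table and column names below are illustrative and not taken from the original post), the conventional intersection table and a bit-based alternative might look like this in MySQL:

-- Conventional many-to-many design: one row per (person, group) pair.
CREATE TABLE person_group (
  person_id INT NOT NULL,
  group_id  INT NOT NULL,
  PRIMARY KEY (person_id, group_id)
);

-- Bit-based alternative: each group is assigned a bit value (1, 2, 4, 8, ...)
-- and a person's memberships are stored as a single integer bitmap.
CREATE TABLE person (
  person_id  INT NOT NULL PRIMARY KEY,
  group_bits BIGINT UNSIGNED NOT NULL DEFAULT 0
);

-- Is person 42 a member of the group whose bit value is 4?
SELECT person_id FROM person WHERE person_id = 42 AND group_bits & 4 <> 0;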
Categories: BI & Warehousing

Oracle Database Multilingual Engine (MLE)

Yann Neuhaus - Wed, 2017-10-11 01:35

My ODC appreciation blog post was about Javascript in the database running in the beta of the Oracle Database Multilingual Engine (MLE). Here I’ll detail my first test, which is a performance comparison between a package written in Javascript, running in the MLE, and one written and running in PL/SQL.

I’ve downloaded the 12GB .ova from OTN, installed the latest SQLcl, and I’m ready to load my first Javascript procedure. I want something simple that I can run a lot of times because I want to test my main concern when running code in a different engine: the context switch between the SQL engine and the procedural one.

My kid’s maths exercises were about GCD (greatest common divisor) this weekend, so I grabbed Euclid’s algorithm in Javascript. This algorithm was the first program I ever wrote, a long time ago, on a ZX-81, in BASIC. Now in Javascript it can use recursion. So here is my gcd.js file:

module.exports.gcd = function (a, b) {
  function gcd(a, b) {
    if (b == 0)
      {return a}
    else
      {return gcd(b, a % b)}
  }
  return gcd(a, b)
}

We need strong typing to be able to load it as a stored procedure, so here is the TypeScript definition in gcd.d.ts

export function gcd(a:number, b:number ) : number;

I load it with the dbjs utility, which I run in verbose mode:

[oracle@dbml MLE]$ dbjs deploy -vv gcd.js -u demo -p demo -c //localhost:1521/DBML
deploy: command called /media/sf_share/MLE/gcd.js oracle
Oracle backend: starting transpiler
gcd: processed function
Oracle backend: opening connection to database
gcd.js: retrieving functions
dropModule: called with gcd.js
loadModule: called with gcd.js
BEGIN
EXECUTE IMMEDIATE 'CREATE PACKAGE GCD AS
FUNCTION GCD("p0" IN NUMBER, "p1" IN NUMBER) RETURN NUMBER AS LANGUAGE JS LIBRARY "gcd.js" NAME "gcd" PARAMETERS("p0" DOUBLE, "p1" DOUBLE);
END GCD;';
END;
: generated PLSQL
+ gcd.js
└─┬ gcd
└── SCALAR FUNCTION GCD.GCD("p0" IN NUMBER, "p1" IN NUMBER) RETURN NUMBER

As mentioned in the verbose log, the Javascript code is transpiled. My guess is that the Javascript is parsed by the Oracle Truffle framework and compiled by Oracle GraalVM. More info in the One VM to Rule Them All paper.

This has loaded the package, the library and an ‘undefined’ object of type 144 (this MLE is in beta so not all dictionary views have been updated):


SQL> select * from dba_objects where owner='DEMO';
 
OWNER OBJECT_NAME SUBOBJECT_NAME OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE CREATED LAST_DDL_TIME TIMESTAMP STATUS TEMPORARY GENERATED SECONDARY NAMESPACE EDITION_NAME SHARING EDITIONABLE ORACLE_MAINTAINED
----- ----------- -------------- --------- -------------- ----------- ------- ------------- --------- ------ --------- --------- --------- --------- ------------ ------- ----------- -----------------
DEMO GCD 93427 PACKAGE 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 2017-10-09:15:29:33 VALID N N N 1 NONE Y N
DEMO gcd.js 93426 LIBRARY 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 2017-10-09:15:29:33 VALID N N N 1 NONE Y N
DEMO gcd.js 93425 UNDEFINED 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 2017-10-09:15:29:33 VALID N N N 129 NONE N
 
 
SQL> select * from sys.obj$ where obj# in (select object_id from dba_objects where owner='DEMO');
 
OBJ# DATAOBJ# OWNER# NAME NAMESPACE SUBNAME TYPE# CTIME MTIME STIME STATUS REMOTEOWNER LINKNAME FLAGS OID$ SPARE1 SPARE2 SPARE3 SPARE4 SPARE5 SPARE6 SIGNATURE SPARE7 SPARE8 SPARE9
---- -------- ------ ---- --------- ------- ----- ----- ----- ----- ------ ----------- -------- ----- ---- ------ ------ ------ ------ ------ ------ --------- ------ ------ ------
93427 284 GCD 1 9 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 1 0 6 65535 284 51713CBD7509C7BDA23B4805C3E662DF 0 0 0
93426 284 gcd.js 1 22 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 1 0 6 65535 284 8ABC0DDB16E96DC9586A7738071548F0 0 0 0
93425 284 gcd.js 129 144 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 1 0 6 65535 284 0 0 0

MLE Javascript

So, I’ve executed the function multiple times for each one of 10 million rows:

SQL> select distinct gcd(rownum,rownum+1),gcd(rownum,rownum+2),gcd(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:00:17.64

The execution of 30 million function calls took 17 seconds.

PL/SQL function

In order to compare, I’ve created the same in PL/SQL:

SQL> create or replace function gcd_pl(a number, b number) return number as
2 function gcd(a number, b number) return number is
3 begin
4 if b = 0 then
5 return a;
6 else
7 return gcd_pl.gcd(b,mod(a,b));
8 end if;
9 end;
10 begin
11 return gcd_pl.gcd(a,b);
12 end;
13 /

Here is the execution:

SQL> select distinct gcd_pl(rownum,rownum+1),gcd_pl(rownum,rownum+2),gcd_pl(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:01:21.05

PL/SQL UDF function

In 12c we can declare a function with the pragma UDF so that it is optimized for calling from SQL:

SQL> create or replace function gcd_pl_udf(a number, b number) return number as
2 pragma UDF;
3 function gcd(a number, b number) return number is
4 begin
5 if b = 0 then
6 return a;
7 else
8 return gcd_pl_udf.gcd(b,mod(a,b));
9 end if;
10 end;
11 begin
12 return gcd_pl_udf.gcd(a,b);
13 end;
14 /

Here is the execution:

SQL> select distinct gcd_pl_udf(rownum,rownum+1),gcd_pl_udf(rownum,rownum+2),gcd_pl_udf(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:00:51.85

Native compilation

We can also improve PL/SQL runtime by compiling it to native code, rather than having it interpreted as p-code:

SQL> alter session set plsql_code_type=native;
Session altered.
 
SQL> alter function gcd_pl_udf compile;
Function altered.
 
SQL> alter function gcd_pl compile;
Function altered.

and here is the result:

SQL> select distinct gcd_pl_udf(rownum,rownum+1),gcd_pl_udf(rownum,rownum+2),gcd_pl_udf(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:01:10.31
 
SQL> select distinct gcd_pl_udf(rownum,rownum+1),gcd_pl_udf(rownum,rownum+2),gcd_pl_udf(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:00:45.54

Inline PL/SQL

Finally, similar to a UDF function, we can declare the function in the query, inlined in a WITH clause:


SQL> with function gcd_pl_in(a number, b number) return number as
2 function gcd(a number, b number) return number is
3 begin
4 if b = 0 then
5 return a;
6 else
7 return gcd(b,mod(a,b));
8 end if;
9 end;
10 begin
11 return gcd(a,b);
12 end;
13 select distinct gcd_pl_in(rownum,rownum+1),gcd_pl_in(rownum,rownum+2),gcd_pl_in(rownum,rownum+3) from xmltable('1 to 10000000')
14 /

And here is the result:

Elapsed: 00:00:48.92

Elapsed time summary

Here is a recap of the elapsed time:

Elapsed: 00:00:17.64 for MLE Javascript
Elapsed: 00:00:45.54 for PL/SQL UDF function (native)
Elapsed: 00:00:48.92 for Inline PL/SQL
Elapsed: 00:00:51.85 for PL/SQL UDF function (interpreted)
Elapsed: 00:01:10.31 for PL/SQL function (native)
Elapsed: 00:01:21.05 for PL/SQL function (interpreted)

The top winner is Javascript!

Perfstat Flame Graph

My tests were deliberately doing something we should avoid for performance and scalability: calling a function for each row, because this involves a lot of time spent switching context between the SQL and procedural engines. However, this is good for code maintainability. This overhead is not easy to measure from the database. We can look at the call stack to see what happens when the process evaluates the operand (evaopn2) and switches to PL/SQL (evapls), and what happens besides running the PL/SQL itself (pfrrun). I have recorded perf-stat for the cases above to display the Flame Graph of the call stack. When looking for more information I remembered that Frits Hoogland had already done that, so I will let you read Frits’ part 1 and part 2.

You can download my Flame Graphs and here is a summary of .svg name and call stack from operand evaluation to PL/SQL run:

  • PL/SQL UDF function (native): perf-gcd_pl_UDF_native.svg – evaopn2>evapls>peidxrex>penrun
  • Inline PL/SQL: perf-gcd_pl_inline.svg – evaopn2>evapls>kkxmss_speedy_stub>peidxrex>pfrrun>pfrrun_no_tool
  • PL/SQL UDF function (interpreted): perf-gcd_pl_UDF_interpreted.svg – evaopn2>evapls>peidxexe>pfrrun>pfrrun_no_tool
  • PL/SQL function (native): perf-gcd_pl_native.svg – evaopn2>evapls>kgmexec>kkxmpexe>kkxdexe>peidxexe>peidxr_run>plsql_run>penrun
  • PL/SQL function (interpreted): perf-gcd_pl_interpreted.svg – evaopn2>evapls>kgmexec>kkxmpexe>kkxdexe>peidxexe>peidxr_run>plsql_run>pfrrun>pfrrun_no_tool

But more interesting is the Flame Graph for the JavaScript execution:
[Flame Graph: MLE Javascript execution]

My interpretation of this is limited, but I don’t see a stack of context-switching functions before the call to the MLE engine, which is probably the reason why it is fast. Besides the ‘unknown’ part, which is probably the run of the Javascript itself (the libwalnut.so library has no symbols), we can see that most of the time is spent converting SQL data types into Javascript types on the call, and the opposite on the return:

  • com.oracle.walnut.core.types.OraNumberUtil.doubleToNumber
  • com.oracle.walnut.core.types.OraNumberUtil.numberToDouble

This is the price to pay when running a different language, with different data types.

So what?

This MultiLingual Engine looks promising, both for functionality (choose the language to run in the database) and performance (same address space as the SQL engine, and context switching is minimal). Of course, this is only a beta. There may be more things to implement, with more overhead. For example, we can imagine that if it goes to production there will be some instrumentation to measure time and record it in the Time Model. It may also be optimized further. You can test it (download it from the MLE home page) and give feedback about it (on the MLE forum).

This post was about measuring the performance of switching from SQL to PL/SQL. In the next post, I’ll look at callbacks when running SQL from MLE.

 

Cet article Oracle Database Multilingual Engine (MLE) est apparu en premier sur Blog dbi services.

Talking about APEX Reporting and AOP @ Montreal Oracle Dev Day 2017

Dimitri Gielis - Wed, 2017-10-11 01:00
For those in Montreal and the surrounding area I encourage you to come out to the Montreal Oracle Dev Day on October 25th (8:30-4:30 at Centre for Sustainable Development).

Here’s a summary agenda of the presentations with the full agenda here:
Aside from the presentations you will have plenty of opportunity to network and share your Oracle development experiences. All speakers will be available all day so feel free to bring your APEX questions!

You can register now online.

As I'm not often in this part of the world it would be great to meet in person. I would love to hear your thoughts on APEX Office Print (AOP) too. If you have any questions, feedback or just want to talk about how to use AOP in your environment, don't hesitate to come up to me. I'm more than happy to talk to you :)

Categories: Development

Converting your XAI Services to IWS using scripting

Anthony Shorten - Tue, 2017-10-10 17:14

With the deprecation announcement surrounding XML Application Integration (XAI), it is possible to convert to using Inbound Web Services (IWS) manually or using a simple script. This article will outline the process of building a script to bulk transfer the definitions over from XAI to IWS.

Ideally, it is recommended that you migrate each XAI Inbound Service to Inbound Web Services manually so that you can take the opportunity to rationalize your services and reduce your maintenance costs. If, however, you simply want to transfer over to the new facility in bulk, this can be done via a service script that migrates the information.

This can be done using a number of techniques:

  • You can drive the migration via a query portal that can be called via a Business Service from a BPA or batch process.
  • You can use the Plug-In Batch to pump the services through a script to save time.

In this article I will outline the latter example to illustrate the migration as well as highlight how to build a Plug In Batch process using configuration alone.

Note: Code and Design in this article are provided for illustrative purposes and only cover the basic functionality needed for the article. Variations on this design are possible through the flexibility and extensibility of the product. These are not examined in any detail except to illustrate the basic process.

Note: The names of the objects in this article are just examples. Alternative values can be used, if desired.

Design

The design for this is as follows:

  • Build a Service script that will take the XAI Inbound Service identifier to migrate and perform the following
    • Read the XAI Inbound Service definition to load the variables for the migration
    • Check that the XAI Inbound Service is valid to be migrated. This means it must be owned by Customer Modification and uses the Business Adaptor XAI Adapter.
    • Transfer the XAI Inbound Service definition to the relevant fields in the Inbound Web Service and add the service. Optionally activate the service ready for deployment. The deployment activity itself should not be part of the script as it is not a per service activity usually.
    • By default the following is transferred:
      • The Web Service name would be the Service Name on the XAI Inbound Service not the identifier as that is randomly generated.
      • Common attributes are transferred across from the existing definition
      • A single operation, with the same name as the Inbound Web Service, is created as a minimalist migration option.
  • Build a Plug In Batch definition to include the following:
    • The Select Record algorithm will identify the list of services to migrate. It should be noted that only services that are owned by the Customer Modification (CM) owner should be migrated as ownership should be respected.
    • The script for the above will be used in the Process Record algorithm.

The following diagram illustrates the overall process:

[Diagram: Plug In Development Process]

The design of the Plug In Batch will only work for Oracle Utilities Application Framework V4.3.0.4.0 and above but the Service Script used for the conversion can be used with any implementation of Oracle Utilities Application Framework V4.2.0.2.0 and above. On older versions you can hook the script into another script such as BPA or drive it from a query zone.

Note: This process should ONLY be used to migrate XAI Inbound Services that are Customer Modifications. Services owned by the product itself should not be migrated to respect record ownership rules.

XAI Inbound Service Conversion Service Script

The first part of the process is to build a service script that establishes an Inbound Web Service for an XML Application Integration Inbound Service. To build the script the following process should be used:

  • Create Business Objects - Create a Business Object, using Business Object maintenance, based upon XAI SERVICE (XAI Inbound Service) and F1-IWSSVC (Inbound Web Service) to be used as Data Areas in your script. You can leave the schemas as generated with all the elements defined or remove the elements you do not need (as this is only a transient piece of functionality). I will assume that the schema will be as the default generation using the Schema generator in the Dashboard. Remember to allocate the Application Service for security purposes (I used F1-DFLTS as that is provided in the base meta data). The settings for the Business Objects are summarized as follows:
Setting                XAI Inbound Service BO Values                    IWS Service BO Values
Business Object        CMXAIService                                     CMIWSService
Description            XAI Service Conversion BO                        IWS Service Conversion BO
Detailed Description   Conversion BO for XML Application Integration    Conversion BO for Inbound Web Services
Maintenance Object     XAI SERVICE                                      F1-IWSSVC
Application Service    F1-DFLTS                                         F1-DFLTS
Instance Control       Allow New Instances                              Allow New Instances
  • Build Script - Build a Service Script with the following attributes:
Setting                 Value
Script                  CMConvertXAI
Description             Convert an XAI Service to IWS Service
Detailed Description    Script that converts the passed in XAI Service Id into an Inbound Web Service.
                        - Reads the XAI Inbound Service definition
                        - Copies the relevant attributes to the Inbound Web Service
                        - Adds the Inbound Web Service
Script Type             Service Script
Application Service     F1-DFLTAPS
Script Engine Version   3.0
Data Area               CMIWSService - Data Area Name IWSService
Data Area               CMXAIService - Data Area Name XAIService
Schema                  (this is the input value and some temporary variables)

<schema>
  <xaiInboundService mdField="XAI_IN_SVC_ID"/>
  <operations type="group">
    <iwsName/>  
    <operationName/>  
    <requestSchema/>  
    <responseSchema/>  
    <requestXSL/>  
    <responseXSL/>  
    <schemaName/>  
    <schemaType/>  
    <transactionType/>  
    <searchType/>
   </operations>
</schema>

The Data Area section looks like this:

[Screenshot: Data Area list on the script]

  • Add the following code to your script (this is in individual edit-data steps):

Note: The code below is very basic and there are optimizations that can be done to make it smaller and more efficient. This is just some sample code to illustrate the process.

10: edit data
     // Jump out if the inbound service Id is blank
     if ("string(parm/xaiInboundService) = $BLANK")
       terminate;
     end-if;
end-edit;
20: edit data
     // populate the key value from the input parameter
     move "parm/xaiInboundService" to "XAIService/xaiServiceId";
     // invoke the XAI Service BO to read the service definition
     invokeBO 'CMXAIService' using "XAIService" for read;
     // Check that the Service Name is populated at a minimum
     if ("XAIService/xaiInServiceName = $BLANK")
       terminate;
     end-if;
     // Check that the Service type is correct
     if ("XAIService/xaiAdapter != BusinessAdaptor")
       terminate;
     end-if;
     // Check that the owner flag is CM
     if ("XAIService/customizationOwner != CM")
       terminate;
     end-if;
end-edit;
30: edit data
     // Copy the key attributes from XAI to IWS
     move "XAIService/xaiInServiceName" to "IWSService/iwsName";
     move "XAIService/description" to "IWSService/description";
     move "XAIService/longDescription" to "IWSService/longDescription";
     move "XAIService/isTracing" to "IWSService/isTracing";
     move "XAIService/postError" to "IWSService/postError";
     move "XAIService/shouldDebug" to "IWSService/shouldDebug";
     move "XAIService/xaiInServiceName" to "IWSService/defaultOperation";
     // Assume the service will be Active (this can be altered)
     // For example, set this to false to allow for manual checking of the
     // setting. That way you can confirm the service is set correctly and then
     // manually set Active to true in the user interface.
     move 'true' to "IWSService/isActive";
     // Process the list for the operation to the temporary variables in the schema
     move "XAIService/xaiInServiceName" to "parm/operations/iwsName";
     move "XAIService/xaiInServiceName" to "parm/operations/operationName";
     move "XAIService/requestSchema" to "parm/operations/requestSchema";
     move "XAIService/responseSchema" to "parm/operations/responseSchema";
     move "XAIService/inputXSL" to "parm/operations/requestXSL";
     move "XAIService/responseXSL" to "parm/operations/responseXSL";
     move "XAIService/schemaName" to "parm/operations/schemaName";
     move "XAIService/schemaType" to "parm/operations/schemaType";
     // move "XAIService/transactionType" to "parm/operations/transactionType";
     move "XAIService/searchType" to "parm/operations/searchType";
     // Add the parameters to the operation list object
     move "parm/operations" to "IWSService/+iwsServiceOperation";
end-edit;
40: edit data
     // Invoke BO for Add
     invokeBO 'CMIWSService' using "IWSService" for add;
end-edit;

Note: The code example above does not add annotations to the Inbound Web Service to attach policies for true backward compatibility. It is assumed that policies are set globally rather than on individual services. If you want to add annotation logic to the script, it is recommended to add an annotations group to the script's internal data area and add the annotation list logic in the script.

One thing to point out for XAI: to use the same payload for an XAI service in Inbound Web Services, a single operation must exist with the same name as the Service Name. This is the design pattern for a one-to-one conversion. It is possible to vary from that if you manually convert from XAI to IWS, as the number of services in IWS can be reduced by using multiple operations. Refer to Migrating from XAI to IWS (Doc Id: 1644914.1) and Web Services Best Practices (Doc Id: 2214375.1) from My Oracle Support for a discussion of the various techniques available. The attribute mapping looks like this:

[Diagram: Mapping of objects]

The Service Script has now been completed. All it needs is the XAI Inbound Service Identifier (not the name) passed into the parm/xaiInboundService structure; a quick way to find that identifier is shown below.
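As a minimal illustration (the service name used here is hypothetical), the identifier can be looked up directly from the XAI Inbound Service table:

SELECT xai_in_svc_id
FROM   ci_xai_in_svc
WHERE  xai_in_svc_name = 'CMMyService';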

Building The Plug In Batch Control

In past releases, the only way to build a Batch process that is controlled via a Batch Control was to use the Oracle Utilities SDK and Java. It is now possible to define what is termed a Plug-In based Batch Control, which allows you to use ConfigTools and some configuration to build your batch process. The fundamental principle is that batch is basically selecting a set of records to process and then passing those records to something that processes them. In our case, we will provide an SQL statement to select the XAI services to convert and pass each one to the service script we built in the previous step.

Select Records Algorithm

The first part of the Plug In Batch process is to define the Select Records algorithm that defines the parameters for the Batch process, the commit strategy and the SQL used to pump the records into the process. The first step is to create a script to be used for the Algorithm Type of Select Records to define the parameters and the commit strategy. For this example I created a script with the following parameters:

Setting                Value
Script                 CMXAISEL
Description            XAI Select Record Script - Parameters
Detailed Description   This script is the driver for the Select Records algorithm for the XAI to IWS conversion
Script Type            Plug In Script
Algorithm Entity       Batch Control - Select Records
Script Version         3.0
Script Step:

10: edit data
 // Set strategy and key field
 // Strategy values are dictated by BATCH_STRATEGY_FLG lookup
 //  Set JOBS strategy as this is a single threaded process
 //  I could use THDS strategy but then would have to put in logic for
 // restart in the SQL. The current SQL has that logic already implied.
 move 'JOBS' to "parm/hard/batchStrategy";
 move 'XAI_IN_SVC_ID' to "parm/hard/keyField";
end-edit;

Note: I have NO parameters for this job. If you wish to add processing for parameters, take a look at some examples of this algorithm type to see the processing necessary for bind variables.

The next step is to create an algorithm type. This will be used by the algorithm itself to define the process. Typically, an algorithm type is the definition of the physical aspects of the algorithm and its parameters. For the select algorithm the following algorithm type was created:

Setting                Value
Algorithm Type         CMXAISEL
Description            XAI Selection Algorithm
Detailed Description   This algorithm type is a generic wrapper to set the job parameters
Algorithm Entity       Batch Control - Select Records
Program Type           Plug In Script
Plug In Script         CMXAISEL
Parameter              SQL (Sequence 1 - Required) - This is the SQL to pass into the process

The last step is to create the Algorithm to be used in the Batch Control. This will use the Algorithm Type created earlier. Create the algorithm definition as follows:

Setting                Value
Algorithm Code         CMXAISEL
Description            XAI Conversion Selection
Algorithm Type         CMXAISEL
Effective Date         Any valid date in the past is acceptable
SQL Parameter:

SELECT xai_in_svc_id FROM ci_xai_in_svc
WHERE xai_adapter_id = 'BusinessAdaptor'
AND
xai_in_svc_name NOT IN ( SELECT in_svc_name FROM f1_iws_svc)
AND
owner_flg = 'CM'

You might notice the SQL used in the driver. It passes the XAI_IN_SVC_IDs for XAI Inbound Services that use the Business Adaptor, are not already converted (for restart) and are owned by Customer Modification.

Process Records Algorithm

The next step is to link the script created earlier to the Process Records algorithm. As with the Select Records algorithm, a script, an algorithm type and algorithm entries need to be created.

The first part of the process is to build a Plug-In Script to pass the data from the Select Records Algorithm to the Service Script that does the conversion. The parameters are as follows:

Setting                Recommended Value
Script                 CMXAIProcess
Description            Process XAI Records in Batch
Detailed Description   This script reads the parameters from the Select Records algorithm and passes them to the XAI conversion script
Script Type            Plug-In Script
Algorithm Entity       Batch Control - Process Record
Script Version         3.0
Data Area              Service Script - CMConvertXAI - Data Area Name ConvertXAI
Script Step:

if ("parm/hard/selectedFields/Field[name='XAI_IN_SVC_ID']/value != $BLANK")
    move "parm/hard/selectedFields/Field[name='XAI_IN_SVC_ID']/value" to "ConvertXAI/xaiInboundService";
    invokeSS 'CMConvertXAI' using "ConvertXAI" ;
end-if;

The script above basically takes the parameters passed to the algorithm and then passes them to the Service Script for processing.

The next step is to define this script as an Algorithm Type:

Setting                Value
Algorithm Type         CMXAIPROC
Description            XAI Conversion Algorithm
Detailed Description   This algorithm type links the algorithm to the service script to drive the process.
Algorithm Entity       Batch Control - Process Record
Program Type           Plug-In Script
Plug-In Script         CMXAIProcess

The last step in the algorithm process is to create the Algorithm entry itself:

Setting                Value
Algorithm Code         CMXAIPROCESS
Description            XAI Conversion Process Record
Algorithm Type         CMXAIPROC

Plug In Batch Control Configuration

The last part of the process is to bring all the configuration into a single place, the Batch Control. This will pull in the algorithms into a configuration ready for use.

Setting                Value
Batch Control          CMXAICNV
Description            Convert XAI Services to IWS
Detailed Description:

This batch control converts the XAI Inbound Services to Inbound Web Services to aid in the mass migration of the meta data to the new facility.
This batch job only converts the following:

- XAI Services that are owned by Customer Modification to respect record ownership.
- XAI Services that use the Business Adaptor XAI Adapter. Other types are auto converted in IWS
- XAI Services that are not already defined as Inbound Web Services

Application Service: F1-DFLTAPS
Batch Control Type: Not Timed
Batch Category: Adhoc
Algorithm - Select Records: CMXAISEL
Algorithm - Process Records: CMXAIPROCESS

The Plug-in batch process is now defined.

Summary

The conversion process can be summarized as follows:

  • A Service Script is required to transfer the data from the XAI Inbound Service to the Inbound Web Service definition. It converts only services that are owned by Customer Modification, that have not been migrated already and that use the Business Adaptor XAI Adapter. The script sets the same parameters as the XAI Service for backward compatibility and creates a SINGLE operation Web Service with the same payload as the original.
  • The Select Records algorithm identifies the subset of records to process. It consists of a plug-in script that sets the job properties, an algorithm type that registers the script with the framework, and an algorithm, carrying the SQL to use, that is linked to the Batch Control.
  • The Process Records algorithm handles the records returned by Select Records and links in the Service Script from the first step. As with any algorithm, the code is built (in this case a Plug-In Script that maps the data into the Service Script), an algorithm type entry registers the script, and an algorithm definition is created to link it to the Batch Control.
  • The last step is to create the Batch Control that links the Select Records and Process Records algorithms.
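
Once the batch control has been run, a quick sanity check is to re-run the selection criteria and look at what has been created. The following is a minimal sketch that reuses the tables and columns from the selection SQL above; note that the owner_flg column on f1_iws_svc is an assumption and may differ in your release.

-- Candidates still awaiting conversion; should return no rows after a clean run
SELECT xai_in_svc_id
  FROM ci_xai_in_svc
 WHERE xai_adapter_id = 'BusinessAdaptor'
   AND owner_flg = 'CM'
   AND xai_in_svc_name NOT IN (SELECT in_svc_name FROM f1_iws_svc);

-- Inbound Web Services owned by Customer Modification, i.e. the output of the conversion
SELECT in_svc_name
  FROM f1_iws_svc
 WHERE owner_flg = 'CM';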

Cloning Goldengate Integrated Capture and DB

Michael Dinh - Tue, 2017-10-10 17:10

Using DBMS_STREAMS_ADM To Cleanup GoldenGate

Let’s say you want to clone a database and its GoldenGate implementation from PROD to DEV; you then need to drop the capture that was registered with the PROD database.

This is what happens when dependencies are introduced / created.

select capture_name from dba_capture;
exec DBMS_CAPTURE_ADM.DROP_CAPTURE ('&capture');
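
The title mentions DBMS_STREAMS_ADM: when the goal is to wipe the whole replication setup from the clone rather than a single capture, that package offers a one-call cleanup. This is a hedged sketch, not part of the steps above, and it removes the entire Streams/GoldenGate configuration from the database, so use it only when that is really what you want.

-- Remove the complete Streams configuration (captures, apply processes, queues, rules) in one call
exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;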

nVision Performance Tuning: Introduction

David Kurtz - Tue, 2017-10-10 15:41
This blog post is the first in a series that discusses how to get good performance from nVision as used in General Ledger reporting.

PS/nVision is a PeopleTools technology that extracts data from the database and places it in an Excel spreadsheet (see PS/nVision Overview).  Although PS/nVision can be used with any PeopleSoft product, it is most commonly used in Financials General Ledger.

The SQL queries generated by nVision are, at least conceptually, similar to data warehouse queries. The ledger, ledger budget or summary ledger tables are the fact tables.

The ledger tables are analysed by their attribute columns. There are always literal conditions on the fiscal year and accounting period, there is usually a literal condition on currency code.  Then there are criteria on some of the other attributes.  I will take an example that analyses the ledger table in three dimensions: BUSINESS_UNIT, ACCOUNT and CHARTFIELD1, but there are many other attribute columns on the ledger tables.  These attributes are defined in lookup tables in the application, but their hierarchies are defined in trees.

nVision reports use the trees to determine which attribute values to report.  A report might report on a whole tree, or particular nodes, or branches of a tree.  nVision joins the tree definition to the attribute table and produces a list of attributes to be reported.  These are put into working storage tree selector tables (PSTREESELECT01 to 30).  The choice of selector table is controlled by the length of the attribute column.  BUSINESS_UNIT is a 5 character column so it goes into PSTREESELECT05. CHARTFIELD1 and ACCOUNT are 10 character columns so they use PSTREESELECT10.  These selector tables form the dimensions in the queries.

Here is an example of a SQL statement generated by nVision.  The tree selector 'dimension' tables are joined to the ledger 'fact' table.

SELECT L.TREE_NODE_NUM,L2.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT)
FROM PS_LEDGER A
, PSTREESELECT05 L1
, PSTREESELECT10 L
, PSTREESELECT10 L2
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2016
AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 11
AND L1.SELECTOR_NUM=30982 AND A.BUSINESS_UNIT=L1.RANGE_FROM_05
AND L.SELECTOR_NUM=30985 AND A.CHARTFIELD1=L.RANGE_FROM_10
AND L2.SELECTOR_NUM=30984 AND A.ACCOUNT=L2.RANGE_FROM_10
AND A.CURRENCY_CD='GBP'
GROUP BY L.TREE_NODE_NUM,L2.TREE_NODE_NUM
This SQL looks simple enough, but there are various complexities:
  • The tree selector tables are populated at runtime.  Many dimensions can be stored in each tree selector table, each keyed by a different SELECTOR_NUM.
  • Selectors can be static or dynamic.  In dynamic selectors, the data is only stored temporarily for the lifetime of the report and will be deleted when it completes.  So immediately, there is a challenge of keeping statistics up to date, and even then Oracle doesn't always manage to find an effective execution plan.
  • Different selectors will have different numbers of rows, so the statistics have to describe that skew.
  • Different nVision reports, and even different parts of the same report, generate different statements that can use different combinations of attribute columns.  The number of dimensions can vary; I have seen systems that use as many as five different trees in a single query.
  • Then the database needs to find the relevant rows on the ledger table for the dimensions specified as efficiently as possible.
This very quickly becomes a difficult and complex problem.  This series of articles works through the various challenges and describes methods to overcome them.  Not all of them are applicable to all systems; in some cases it will be necessary to choose between approaches depending on circumstances.
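
As a small illustration of the statistics point above, a common first step is simply to gather fresh statistics, including a histogram on SELECTOR_NUM, after a dynamic selector has been populated. This is only a hedged sketch and not necessarily the method described later in the series; the SYSADM schema name is the usual PeopleSoft owner and is an assumption here.

begin
  -- Refresh statistics so the optimizer sees the rows just inserted for this selector,
  -- with a histogram on SELECTOR_NUM to describe the skew between selectors
  dbms_stats.gather_table_stats(ownname    => 'SYSADM',
                                tabname    => 'PSTREESELECT10',
                                method_opt => 'FOR COLUMNS SELECTOR_NUM SIZE 254');
end;
/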

nVision Performance Tuning: Table of Contents

David Kurtz - Tue, 2017-10-10 15:39
This post is an index for a series of blog posts that discuss how to get good performance from nVision as used in General Ledger reporting.  As the posts become available links will be updated in this post.
  • Introduction
  • nVision Performance Options
  • Indexing of Ledger, Budget and Summary Ledger Tables
  • Partitioning of Ledger, Budget and Summary Ledger Tables
  • Additional Oracle Instrumentation for nVision
  • Logging Selector Usage
  • Analysis of Tree Usage  with the Selector Log
  • Interval Partitioning and Statistics Maintenance of Selector Tables
  • Compression without the Advanced Compression option
  • Maintaining Statistics on Non-Partitioned Selector Tables
  • Excel -v- OpenXML
The current versions of scripts mentioned in the series will be made available on GitHub.


ODC Appreciation Day : Timeline component in Oracle JET, Data Visualization Cloud, APEX and ADF DVT: #ThanksODC

Amis Blog - Tue, 2017-10-10 13:40

Here is my entry for the Oracle Developer Community ODC Appreciation Day (#ThanksODC).

It is quite hard to make a choice for a feature to write about. So many to talk about. And almost every day another favorite of the month. Sliding time windows. The Oracle Developer Community – well, that is us. All developers working with Oracle technology, sharing experiences and ideas, helping each other with inspiration and solutions to challenges, making each other and ourselves better. Sharing fun and frustration, creativity and best practices, desires and results. Powered by OTN, now known as ODC. Where we can download virtually any software Oracle has to offer. And find resources – from articles and forum answers to documentation and sample code. This article is part of the community effort to show appreciation – to the community and to the Oracle Developer Community (organization).

For fun, you could take a look at how the OTN site started – sometime in 2000 – using the WayBack machine: https://web.archive.org/web/20000511100612/http://otn.oracle.com:80/ 


And the WayBack machine is just one of many examples of timelines – presentation of data organized by date. We all know how pictures say more than many words, and how tables of data are frequently much less accessible to users than to-the-point visualizations. For some reason, data associated with moments in time has always held special interest for me. As do features that are about time – such as Flashback Query, 12c Temporal Database and SYSDATE (or better yet: SYSTIMESTAMP).

To present such time-based data in a way that reveals the timeline and the historical thread that resides in the data, we can make use of the Timeline component that is available in:

In JET:

In ADF:


In Data Visualization Cloud:

Note that in all cases it does not take much more than a dataset with a date (or date-time) attribute and one or more attributes to create a label and perhaps to categorize. A simple select ename, job, hiredate from emp suffices.
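
For example, against the classic EMP demo table the following query (a minimal sketch; the ordering is only there so the timeline reads naturally) already provides the date, the label and a category for each event:

-- Each employee becomes an event positioned by hire date, labelled by name and categorized by job
select ename, job, hiredate
from   emp
order  by hiredate;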

The post ODC Appreciation Day : Timeline component in Oracle JET, Data Visualization Cloud, APEX and ADF DVT: #ThanksODC appeared first on AMIS Oracle and Java Blog.

Thanks, ODC (Oracle Developer Community)!

Scott Spendolini - Tue, 2017-10-10 08:04
I owe a lot of thanks to the ODC - which stands for Oracle Developer Community.  What is ODC?  You may remember it as OTN, or the Oracle Technology Network.  Same people, different name.  Why they changed it I can't say.  People just liked it better that way... (love that song)

In any case, what am I thankful for?  A lot.  To start, the tools that I use day in and day out: SQL Developer, ORDS, Oracle Data Modeler, SQLcl and - of course - APEX.  Without these tools, I'm likely on a completely different career path, perhaps even one that aligns more closely with my degree in television management.

While the tools are great, it's really the people that make up the community that make ODC stand out. From the folks who run ODC and the Oracle ACE program to the developers and product managers who are behind the awesome tools, the ODC community is one of, if not the, greatest assets of being involved with Oracle's products.

If you have yet to get more involved with this community, and are wondering how you can, well, there's no better time than on ODC Appreciation Day!  Here are some basic and simple things that you can do to become more involved:

  • Read and reply to posts on the ODC forums.  You'd be surprised how far a simple reply can go to help others.
  • Attend local user group conferences.  Consider not only presenting at them, but also volunteering your time to help with the organization.
  • Attend and/or create a local MeetUp that focuses on the tools that you use.  It can be as general or as specific as you'd like it to be.
  • Get a Twitter account and follow the ODC community members.  Not sure where to start?  Try this list of "Oracle Peeps" from Jeff Smith: https://twitter.com/thatjeffsmith/lists/oraclepeeps
  • Encourage your co-workers to do the same!
There's no better way of showing your support for the ODC community than becoming more involved with it!

Oracle Names IBM as Strategic HR BPO Provider

Oracle Press Releases - Tue, 2017-10-10 07:00
Press Release
Oracle Names IBM as Strategic HR BPO Provider
Oracle recognizes IBM as a strategic HR BPO provider on the Oracle HCM Cloud platform

Redwood Shores, Calif.—Oct 10, 2017

Oracle today named IBM (NYSE: IBM) as a strategic partner to provide Business Process Outsourcing for Human Resources delivered on the Oracle HCM Cloud platform. Together, IBM and Oracle will enable organizations to seamlessly migrate to Oracle’s HCM Cloud platform, transform HR operations and processes, and capitalize on the efficiencies of a managed service giving access to a global network of HR professionals. 

“Oracle has witnessed Human Resource departments adopt Oracle HCM Cloud at an unprecedented rate,” said Tony Kender, Senior Vice President, North America HCM Cloud Business, Oracle. “The inherent benefits of consuming HR as a service have allowed CHROs around the world to focus on strategic talent and HR direction while providing surety of high-quality, next-generation HR operations. Increasingly, forward-thinking CHROs are recognizing that progressive Business Process Outsourcing (BPO) provides a significant quantitative business case for HR transformation. Oracle is excited to collaborate with a globally recognized BPO provider and world-class consulting partner such as IBM, who is superbly qualified to provide our clients a seamless transition on that journey.”

IBM is widely recognized as a leader in consulting and managed services for all aspects of Human Resources enabled by cloud and cognitive technology innovations. IBM’s BPO HR and talent solution footprint spans all areas of the HR domain from talent acquisition to talent development and HR operations. The transformation of the employee and manager experience is significantly fueled by the shift to cloud based talent and HR systems. IBM delivers HR operations services using cloud enabled solutions enhanced by automation, robotics, cognitive technology, voice of the client analytics, closed loop incident management automation, and analytics-based defect reduction.

IBM focuses on delivering a high-quality and consistent employee experience, using technology to liberate people to achieve more. IBM is truly differentiated by its range of cognitive solutions for HR, enabling businesses to make faster, smarter decisions with insight and foresight. Attracting talent, growing career paths, and extending and personalizing learning are some examples of the possibilities with AI solutions that are also highly complementary to the Oracle HCM Cloud platform.

 

“This announcement expands upon a 30 year relationship between Oracle and IBM, where IBM has achieved Cloud Elite Partner status in addition to being an Oracle Diamond Partner,” said Dan Eybergen, North America Oracle Service Line leader within IBM. “We look forward to the opportunity to successfully bring our clients live on the Oracle HCM Cloud platform to continue to transform the way HR services are delivered.”

Contact Info
Scott Thornburg
Oracle
+1.415.816.8844
scott.thornburg@oracle.com
Kristin Reeves
Blanc & Otus PR for Oracle
+1.415.787.6744
kreeves@blancandotus.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

About Oracle PartnerNetwork

Oracle PartnerNetwork (OPN) is Oracle’s partner program that provides partners with a differentiated advantage to develop, sell and implement Oracle solutions. OPN offers resources to train and support specialized knowledge of Oracle’s products and solutions and has evolved to recognize Oracle’s growing product portfolio, partner base and business opportunity. Key to the latest enhancements to OPN is the ability for partners to be recognized and rewarded for their investment in Oracle Cloud. Partners engaging with Oracle will be able to differentiate their Oracle Cloud expertise and success with customers through the OPN Cloud program – an innovative program that complements existing OPN program levels with tiers of recognition and progressive benefits for partners working with Oracle Cloud. To find out more visit: http://www.oracle.com/partners.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates.

Talk to a Press Contact

Scott Thornburg

  • +1.415.816.8844

Kristin Reeves

  • +1.415.787.6744

Using Adobe InDesign with Oracle Content Experience Cloud

5 Things I learned about Using Adobe Design Products with Oracle Content Experience Cloud

As a designer I am always a little leery when someone tells me they are going to ask me to change my process.  To my great relief moving from my desktop and WebCenter to Content Experience Cloud is not only easy but will make me faster.  Here are the first 5 things I have learned since making the switch.

From the Desktop to the Cloud
Working within the Cloud
Commenting in the Cloud

  1. Drag and Drop!  – Content Experience Cloud makes it easy to drag your exported package folder or image source files from your desktop into the cloud.   You can also save yourself a step and save directly to the cloud.
  2. Generating Content – If you work in an environment where one department generates the images, another does the writing and a third does the final review and publish, CEC will make it easy to collaborate.  Simply place the content from the different departments in the shared folder and BAM – instant collaboration.  No more broken links and big file drops using a third party.
  3. Open your Adobe file directly – You don’t need to download the file before opening it up each time and relinking your image files.  Open directly from the cloud and immediately start working.
  4. Security – As previously mentioned, you don't need to use one of those third parties to transfer your files.  You can also control who has access to the shared content at each step.  For example, you don't need to include all departments in the design phase.  Once it is ready for sharing, the exported document can be saved into a production folder for publication. This eliminates the risk that a partially finished product would be published by mistake.
  5. Shorten the Review Cycle – Shorten the review cycle by directing all stakeholders to the correct folder.  This reduces the need to email each version to everyone each time.  Comments can be made directly within the folder.

Having the ability to work collaboratively within a cloud application is a big advantage for graphic designers.  The files we tend to use are usually large, and sending them back and forth is a consistent challenge.  Without a cloud application, teams are forced to export and package the project at each step and send it to each other using a dropbox or similar application.  The next team member has to download the content to their computer, make any edits, and then send it on to the next step.  Watch as I demonstrate how your team can use Oracle's Content Experience Cloud with your Adobe software to cut out steps and make collaboration a breeze.

The post Using Adobe InDesign with Oracle Content Experience Cloud appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Partner Webcast – Achieve Database High Availability and Disaster Recovery with Oracle Cloud

The High Availability and Disaster Recovery needs of customers traditionally have required significant capital investment in the infrastructure that provides the redundant capabilities that are...

We share our skills to maximize your revenue!
Categories: DBA Blogs

ODC Appreciation Day: OBIEE's Time Hierarchies

Rittman Mead Consulting - Tue, 2017-10-10 01:58

After last year's successful OTN Appreciation Day, it's time again to show our love for a particular feature in one of the Oracle tools we use in our work. You may have noted a name change, with OTN now becoming ODC: Oracle Developer Community.

What

The feature I want to speak about is OBIEE's Time Hierarchies.
For anybody in the BI business the time dimension(s) are the essence of the intelligence bit: being able to analyze trends, compare the current period with the previous one, or plot year-to-date or rolling measures are just some of the requirements we get on a daily basis.
A time hierarchy definition allows the administrator to set which time levels are exposed, how the rollup/drill down works and how previous/following members of the level are calculated.
Once the hierarchy is defined, all the related calculations are as simple as calling a function (e.g. AGO), defining the level of detail necessary (e.g. Month) and the number of items to take into account (e.g. -1).
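
To make that concrete, once a Month level exists in the hierarchy a previous-month measure is a single repository expression. This is a hedged sketch; the subject area, table and column names are invented for illustration:

-- Revenue one month back, using the AGO time-series function against the Month level
AGO("Sales"."Revenue", "Time Dim"."Month", 1)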

A Time hierarchy definition is necessary in the following cases:

  • Time comparisons - e.g. current vs previous month
  • Time related rollups - e.g. Year to date
  • Drill path definition - e.g. Year-Month-Day
  • Fact Tables at various levels of detail - e.g. daily fact table and monthly pre-aggregated rollup
  • Time related level based measures - e.g. monthly sum of sales coming from a fact table at daily level
Why

Why do I like time hierarchies? Simple! It's a very clever concept in the RPD, which requires particular knowledge and dedicated attention.

If done right, once defined, it is available in every related table and makes the time-comparison formulas easy to understand and to create. If done wrong, errors or slowness in the related analyses can be difficult to spot and to improve or fix.

Still, time hierarchies are a central piece of every BI implementation, so once understood and implemented correctly they give a massive benefit to all developers.

How

We blogged about time dimensions and calculations back in 2007 when OBI was still on version 10! The original functionality is still there and the process to follow is pretty much the same.
More recently the concept of the Logical Sequence Number was introduced, a way of speeding up some time-series calculations by removing the ranking operations needed to move back (or forth) in history.


I wanted to keep this blog post short, since information about time hierarchies can be found in millions of blog posts. I just want to give a few hints to follow when creating a time hierarchy:

  • It can be created on any data with a predefined order; it doesn't need to be a date! You could, for example, compare a certain product with the one having the previous code in the inventory.
  • The Chronological Key defines the sorting of the level, for example how years, months or dates are ordered. Ordering months alphabetically with a format like YYYY-MM is correct, while using MM-YYYY gives wrong results.
  • Double check the hierarchies: something like YEAR -> MONTH -> WEEK -> DATE can be incorrect since a week can be split across different months!
  • Set the number of elements for each level appropriately. This helps OBIEE understand which table to query depending on the level of the analysis, especially when the hierarchy is complex or pre-aggregated fact tables are involved.
  • Set up the Logical Sequence Number. LSNs are useful if you are looking to reduce the impact of the time-series processing to a minimum.
  • If you are looking for optimal performance for a specific report, e.g. current year vs previous year, physicalizing the time-series result (the previous year) directly in the table alongside the current year will give you what you're looking for.

This was just a quick overview of OBIEE's Time Hierarchies, why they are so useful and what you should look out for when creating them! Hope you found this short post useful.

Follow the #ThanksODC hashtag on Twitter to check which posts have been published on the same theme!

Categories: BI & Warehousing

ODC Appreciation Day : Javascript in the database

Yann Neuhaus - Mon, 2017-10-09 23:00

Tim Hall has launched the idea of the whole Oracle community posting small blogs on this day about an Oracle feature. I chose a feature that is only released as a beta for the moment: the Multilingual Engine (MLE), which is able to run Javascript stored procedures in the database.

Why?

When I first heard about this idea, last year before OOW16, I didn't understand it at all. But the good thing at Oracle Open World is that we can discuss with Oracle product managers, and with other Oracle DBAs and developers, rather than relying on rumors or wrong ideas. My perception of Javascript was narrowed to the language used client-side in thin clients, in the browser, to manage the presentation layer. It is interpreted by the browser, has no type checking, and its errors are not easy to understand. Clearly, the opposite of something that I want to run in my database, on my data. PL/SQL is obviously the better choice: compiled and run in the database, strongly typed to avoid runtime errors, directly integrated with SQL for better performance, etc.

So that idea of JS in the database made me smile, but I was wrong. What I didn’t get is that Javascript is just a language, and running Javascript does not mean that it has to be interpreted like when it is running on a browser.

Multilingual Engine (MLE)

Actually, what Oracle is developing in its lab goes far beyond just running Javascript in the database. They are building an execution engine, like the PL/SQL or SQL execution engines, but one able to run programs written in different languages. They start with Javascript and TypeScript (and therefore strong typing), but this can be extended in the future (Python, and why not PL/SQL running there one day). The programs will be loaded into the database as stored procedures/functions/packages and compiled into an intermediate representation, like bytecode. This code is optimized to access data efficiently, like the PL/SQL engine.

Actually, I’ll show in a future post that this new engine can run faster than PL/SQL for some processing and that it looks like the context switching with the SQL engine is highly efficient.

Javascript

So, why would you write your stored procedure in Javascript? The first reason is that there are a lot of existing libraries available and you may not want to re-write one. For example, I remember when working on an airline company application that I had to write in PL/SQL the function to calculate the orthodromic (great-circle) distance. This is a very simple example, but if you can get the formula in Javascript, then why not compile from this rather than translate it into another language? Currently, you can find pretty much everything in Javascript or Python.
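
For reference, here is a minimal PL/SQL sketch of such a great-circle function using the haversine formula; the function name and signature are mine, not from the original post, and the point of MLE is precisely that an existing Javascript implementation could be loaded instead of rewriting it:

create or replace function great_circle_km(
  lat1 number, lon1 number, lat2 number, lon2 number
) return number is
  r    constant number := 6371;                     -- mean Earth radius in km
  dlat number := (lat2 - lat1) * acos(-1) / 180;    -- delta latitude in radians
  dlon number := (lon2 - lon1) * acos(-1) / 180;    -- delta longitude in radians
  a    number;
begin
  -- haversine formula
  a := sin(dlat / 2) * sin(dlat / 2)
     + cos(lat1 * acos(-1) / 180) * cos(lat2 * acos(-1) / 180)
     * sin(dlon / 2) * sin(dlon / 2);
  return 2 * r * asin(sqrt(a));
end;
/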

The second reason is that your application may have to use the same function at different layers. For example, you can check that a credit card number is correctly formed in the presentation layer, in order to quickly show the user whether it is correct or not. That may be Javascript in the browser. But the database should also verify it, in case the rows are inserted by a different application, or in case the number has been corrupted in between. That may be PL/SQL in the database. Then you have to maintain two libraries in two different languages, doing the same thing. Being able to run Javascript in the database lets us re-use exactly the same library in the client and in the database.

Finally, one reason why some enterprise architects do not want to write procedures in the database is that the language for that, PL/SQL, can only run on Oracle. If they can write their business logic in a language that can run everywhere, then there is no vendor lock-in anymore. They have the possibility to run on another RDBMS if needed, and still get the optimal performance of processing data in the database.

Public Beta

Currently, this is a lab project from Oracle in Zurich. They have released a public beta downloadable as a VM. Just go to the download page at http://www.oracle.com/technetwork/database/multilingual-engine/overview/index.html


And stay tuned to this blog to see some performance comparison with PL/SQL User-Defined Function.

 

The post ODC Appreciation Day : Javascript in the database appeared first on Blog dbi services.

Second Highest Sal

Tom Kyte - Mon, 2017-10-09 21:46
Hello Tom, how are you? After a long time I visited the site and was able to find the button. OK, here is my question: how can I get, in SQL, the second highest salary record from the table, but department-wise? Ram 10 1000 Jai 10 2000 San 20 3000...
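
A common way to answer this kind of question (a hedged sketch, not the AskTOM answer; the column names follow the sample data in the question and are assumptions) is an analytic ranking per department:

-- Second highest salary per department, keeping ties
select ename, deptno, sal
from  (select ename, deptno, sal,
              dense_rank() over (partition by deptno order by sal desc) as rnk
       from   emp)
where  rnk = 2;
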
Categories: DBA Blogs

No Guarantees with opatch -report or CheckConflict

Michael Dinh - Mon, 2017-10-09 15:13

I have performed the following checks.

# $GRID_HOME/OPatch/opatch auto /media/swrepo/JUL2017PSU/26030799 -report -ocmrf /tmp/ocm.rsp
$ $GRID_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /media/swrepo/JUL2017PSU/26030799

Actual patching failed.

# $GRID_HOME/OPatch/opatch auto /media/swrepo/JUL2017PSU/26030799 -ocmrf /tmp/ocm.rsp
Executing /u01/app/oracle/product/11.2.0/grid/perl/bin/perl 
/u01/app/oracle/product/11.2.0/grid/OPatch/crs/patch11203.pl 
-patchdir /media/swrepo/JUL2017PSU -patchn 26030799 
-ocmrf /tmp/ocm.rsp -paramfile /u01/app/oracle/product/11.2.0/grid/crs/install/crsconfig_params

This is the main log file: /u01/app/oracle/product/11.2.0/grid/cfgtoollogs/opatchauto2017-10-09_10-35-34.log

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u01/app/oracle/product/11.2.0/grid/cfgtoollogs/opatchauto2017-10-09_10-35-34.report.log

2017-10-09 10:35:34: Starting Oracle Restart Patch Setup
Using configuration parameter file: /u01/app/oracle/product/11.2.0/grid/crs/install/crsconfig_params

Stopping RAC /u01/app/oracle/product/11.2.0/dbhome_1 ...
Stopped RAC /u01/app/oracle/product/11.2.0/dbhome_1 successfully

patch /media/swrepo/JUL2017PSU/26030799/25869727  apply successful for home  /u01/app/oracle/product/11.2.0/dbhome_1
patch /media/swrepo/JUL2017PSU/26030799/25920335/custom/server/25920335  apply successful for home  /u01/app/oracle/product/11.2.0/dbhome_1

Stopping CRS...

Stopped CRS successfully

Error : The opatch Applicable check failed.  The patch /media/swrepo/JUL2017PSU/26030799/25920335 is not applicable to /u01/app/oracle/product/11.2.0/grid
Error:Patch Applicable check failed for /u01/app/oracle/product/11.2.0/grid

Starting CRS...

ERROR: Prereq checkApplicable failed. Refer log file for more details.


opatch auto failed.
#
Really useful info – ERROR: Prereq checkApplicable failed. Refer log file for more details.

I digress.

After some digging – search for ZOP-46 from /u01/app/oracle/product/11.2.0/grid/cfgtoollogs/opatch

$ grep -n "ZOP-46" opatch2017-10-09*.log
opatch2017-10-09_10-41-58AM_1.log:13:
[Oct 9, 2017 10:42:00 AM]    ZOP-46: 
The patch(es) are not applicable on the Oracle Home because some patch actions are not applicable. 
All required components, however, are installed.


$ head -25 opatch2017-10-09_10-41-58AM_1.log
[Oct 9, 2017 10:41:59 AM]    PREREQ session

[Oct 9, 2017 10:41:59 AM]    
OPatch invoked as follows: 'prereq CheckApplicable 
-ph /media/swrepo/JUL2017PSU/26030799/25920335 
-oh /u01/app/oracle/product/11.2.0/grid 
-invPtrLoc /u01/app/oracle/product/11.2.0/grid/oraInst.loc '

[Oct 9, 2017 10:41:59 AM]    OUI-67077:
                             Oracle Home       : /u01/app/oracle/product/11.2.0/grid
                             Central Inventory : /u01/app/oracle/oraInventory
                                from           : /u01/app/oracle/product/11.2.0/grid/oraInst.loc
                             OPatch version    : 11.2.0.3.6
                             OUI version       : 11.2.0.4.0
                             OUI location      : /u01/app/oracle/product/11.2.0/grid/oui
                             Log file location : /u01/app/oracle/product/11.2.0/grid/cfgtoollogs/opatch/opatch2017-10-09_10-41-58AM_1.log
[Oct 9, 2017 10:41:59 AM]    Patch history file: /u01/app/oracle/product/11.2.0/grid/cfgtoollogs/opatch/opatch_history.txt
[Oct 9, 2017 10:41:59 AM]    Invoking prereq "checkapplicable"

[Oct 9, 2017 10:42:00 AM]    
ZOP-46: The patch(es) are not applicable on the Oracle Home because some patch actions are not applicable. 
All required components, however, are installed.

[Oct 9, 2017 10:42:00 AM]    Patch 25920335:
                             Copy Action: Source File "/media/swrepo/JUL2017PSU/26030799/25920335/files/bin/appvipcfg.pl" does not exists or is not readable
                             'oracle.crs, 11.2.0.4.0': Cannot copy file from 'appvipcfg.pl' to '/u01/app/oracle/product/11.2.0/grid/bin/appvipcfg.pl'
                             Copy Action: Source File "/media/swrepo/JUL2017PSU/26030799/25920335/files/bin/oclumon.bin" does not exists or is not readable
                             'oracle.crs, 11.2.0.4.0': Cannot copy file from 'oclumon.bin' to '/u01/app/oracle/product/11.2.0/grid/bin/oclumon.bin'
                             Copy Action: Source File "/media/swrepo/JUL2017PSU/26030799/25920335/files/bin/ologgerd" does not exists or is not readable
                             'oracle.crs, 11.2.0.4.0': Cannot copy file from 'ologgerd' to '/u01/app/oracle/product/11.2.0/grid/bin/ologgerd'
                             Copy Action: Source File "/media/swrepo/JUL2017PSU/26030799/25920335/files/bin/osysmond.bin" does not exists or is not readable
                             'oracle.crs, 11.2.0.4.0': Cannot copy file from 'osysmond.bin' to '/u01/app/oracle/product/11.2.0/grid/bin/osysmond.bin'
                             Copy Action: Source File "/media/swrepo/JUL2017PSU/26030799/25920335/files/crs/demo/coldfailover/act_db.pl" does not exists or is not readable
                             'oracle.crs, 11.2.0.4.0': Cannot copy file from 'act_db.pl' to '/u01/app/oracle/product/11.2.0/grid/crs/demo/coldfailover/act_db.pl'
                             Copy Action: Source File "/media/swrepo/JUL2017PSU/26030799/25920335/files/crs/demo/coldfailover/act_listener.pl" does not exists or is not readable
$ ls -l /media/swrepo/JUL2017PSU/26030799/25920335/files/bin/appvipcfg.pl
-rwxr-x--- 1 root root 9051 Jun 27 07:40 /media/swrepo/JUL2017PSU/26030799/25920335/files/bin/appvipcfg.pl

Please don’t ask me why.

Solution.

# cd /media/
# chmod -R 777 swrepo/
# chown -R oracle:dba patches/

opatch report “ERROR: Prereq checkApplicable failed.” when Applying Grid Infrastructure patch (Doc ID 1417268.1)

  A. Expected behaviour if GRID_HOME has not been unlocked
  B. Bug 13575478
  C. The patch is stored in a shared NFS location and there is a permission issue accessing the patch
  D. The patch is not unzipped as grid user, often it is unzipped as root user
  E. The patch is unzipped inside GRID_HOME

In summary, trust but verify!


New OA Framework 12.2.4 Update 17 Now Available

Steven Chan - Mon, 2017-10-09 13:11

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure. Since the initial release of Oracle E-Business Suite Release 12.2 in 2013, we have released a number of cumulative updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.4 is now available:

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.4 users should apply this patch.  Future OAF patches for EBS Release 12.2.4 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes all fixes released in previous EBS Release 12.2.4 bundle patches.

This latest bundle patch includes a fix for the following issue:

  • The EBS version is displayed in the error message shown at the Error page footer.

Related Articles

Categories: APPS Blogs

Process Cloud Service - Using correlations to communicate between processes (part 2)

Continuing my previous article Process Cloud Service - Using correlations to communicate between processes (part 1), I would like to demonstrate correlations in action using the PCS Player. The PCS...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator