Feed aggregator

No More Content Chaos – Fishbowl Helps Manufacturer Integrate Oracle WebCenter with Agility for Enterprise Asset Management and Multichannel Delivery

This post is a recap of a recent Oracle WebCenter Content digital asset management project we completed for a manufacturing customer. They have thousands of products, and each has its own set of images, illustrations, videos, and manuals. More than 20,000 such assets support multiple processes and are distributed across a variety of media.

The Business Problem: Disparate Content Storage

The company was storing some of its assets within Oracle Universal Content Management (UCM). Others, however, were stored on employee laptops, file servers, and in various other locations. The assets were also stored under multiple naming conventions. These factors made it difficult to find relevant assets for products, repurpose them across channels and teams, and collaborate on the assets internally or with partners. Employees involved in asset creation and management described it as “content chaos”.

Illustration 1: Content Creation Lifecycle
The company’s view of how content will be created and consumed. This includes the various audiences for the content and illustrates the content types necessary (electronic and print) – technical specifications, price and availability, manuals, etc.

This disparate content storage architecture and inconsistent file naming also meant that there was limited visibility into when and where purchased assets like image libraries were being used. This made it difficult to share such investments throughout the company.

Selecting Oracle WebCenter Content for Asset Storage and Management

With these issues identified, the company set out to develop a strategy to implement a permissions-based, centralized document and digital asset management system. They had been using Oracle UCM for some time, but their initial implementation was not architected in a way that best suited the need to search, access, use, and share assets across the company.

The company had also recently deployed the Agility product information management (PIM) system. Agility is used to manage the full lifecycle of the company’s products, including where and how they can be marketed and sold, as well as what they should cost. The company wanted to integrate Agility with Oracle UCM so that they could relate products in Agility to associated assets in UCM. This would make it easier to support commerce channels, including the company’s website and print catalogs, as well as a future goal of enabling the sales team to access the company’s full product line “binder” from their mobile devices. The company needed the content management system to be able to output multiple rendition types, including videos and PDFs.

For these reasons, the company decided to upgrade Oracle UCM to WebCenter Content 12c. An added benefit of 12c was that it provided the ability to integrate with third-party audio and video transcoding systems. This enabled the company to leverage FFmpeg software, which records, converts, and streams audio and video data. This video rendition engine is an open source product and therefore free, which helped keep implementation costs down as no additional licenses needed to be purchased.

The company partnered with Fishbowl Solutions to perform the upgrade to WebCenter 12c and integrate the product with Agility. Fishbowl had engaged with this company previously to scope out the future use of Oracle UCM (WebCenter), as well as design how its digital asset management component could be architected to achieve the following:

  • Simplified Image Access and Control
    • Provide an easy-to-use system that minimizes non-value-add tasks. Users should have simple input mechanisms, and assets should be transformed transparently to the user and distributed to the right channel based on product name and metadata values.
  • Simple Scalable Solution for More Effective Collateral Control Across All Brands
    • Simplified solution architecture that is scalable to the needs of the company’s brands and shared functions and clarifies/enforces the defined “fit for purpose” solutions.
  • Image and Referential Content are Appropriately Managed throughout the Lifecycle
    • Knowledge workers want a solution that manages data according to organizational policy and standards within the systems they use on a day-to-day basis and does not require significant additional effort.
Oracle WebCenter Content 12c Implementation and Agility Integration

The company created a “Portfolio Initiative Roadmap” identifying these 9 key initiatives required to build out their Enterprise Digital Asset Management Solution and Program:

  1. Establish Foundation
  2. Marketing and Creative Services Enablement
  3. Sales Enablement
  4. Training and Certification Enablement
  5. Engineering Enablement
  6. Archive and Access AP Images To/From Oracle WebCenter Content
  7. Creative Services Enablement
  8. Ecommerce Enablement
  9. Evolve Foundation

Fishbowl worked with the company to deliver roadmap item number one: Establish Foundation. With this initiative Fishbowl needed to first upgrade Oracle UCM to Oracle WebCenter 12c. This included migrating the creative assets into the new instance. In parallel with the content migration and upgrade, the Fishbowl team designed an integration between Agility and Oracle WebCenter. This integration would enable Agility’s product taxonomy and data model to be associated with WebCenter’s metadata fields. This essentially would enable metadata between the systems to be shared and synced, making it easier to standardize on how assets were named.

The Fishbowl team did an outstanding job researching, planning, troubleshooting and creating the migration schedule for the Oracle Universal Content Management to WebCenter Content 12c upgrade. We did encounter one issue the night of the release, but the team developed a resolution and was ready to attempt the upgrade within 3 days.  I had the utmost confidence their plan was solid and we would attempt the upgrade mid-week.  The next attempt went very smoothly and users were in WebCenter Content the next morning.

Manager, Digital Strategy, Manufacturing Company

This integration would also provide renditions of the assets stored in Oracle WebCenter to display in Agility. For example, photographs and images of the company’s products are rendered via Oracle WebCenter’s digital asset management component to produce various formats – high and low resolution, 1200 x 1200, etc. Since Agility would be used by many departments at the company, including marketing, creative services, sales, and engineering, it was important that the various formats of the assets could be easily found. This would help accelerate the execution of campaigns through web and email channels, as well as when print-ready images were needed to create or update product installation manuals and catalogs that were stored as PDFs. Additionally, associating the assets with the products in Agility would enable them to be paired so that they could be part of the product’s lifecycle – when products were discontinued, associated assets could be removed from the system. The following graphic illustrates the Oracle WebCenter and Agility integration:

Illustration 2: WebCenter Content and Agility Integration

Fishbowl designed a filter to capture the IDs of all content items checked into the PIM Asset profile and add them to the Send To PIM table. A system event was then added in WebCenter that runs roughly every 5 minutes; it checks the assets in the Send To PIM table and adds them to the PIM table. After an asset is added to the PIM table, an API call is made to the PIM system to pull the assets from that table and add them to the appropriate product with any relations associated to it (i.e. the high-resolution image for a specific product). After the asset is added into the PIM system, an API call is made back to WebCenter with the updated path of the asset.
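
To make the data flow concrete, here is a purely conceptual sketch of the polling step in PL/SQL. The table names (SEND_TO_PIM, PIM_ASSETS) and the comments about the API calls are hypothetical stand-ins for the objects described above – the actual implementation is a WebCenter Content filter plus a scheduled system event, not this code.

begin
    -- conceptual sketch only: move queued content IDs from the "Send To PIM" staging
    -- table to the PIM table (hypothetical table names)
    for rec in (select content_id from send_to_pim) loop
        insert into pim_assets (content_id) values (rec.content_id);
        delete from send_to_pim where content_id = rec.content_id;
    end loop;
    commit;
    -- at this point the PIM system would be called to pull the queued assets and relate
    -- them to the right product; a second call writes the updated asset path back to WebCenter
end;
/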

Results

This company first invested in Oracle WebCenter about five years ago. Although the system was being used to store some technical and business documents, the feeling amongst senior leadership was that the return on that investment was minimal. It hadn’t reached widespread adoption, and it was viewed as a system that wasn’t streamlining business processes and therefore wasn’t saving the company money.

This company, like most WebCenter Content customers, wasn’t fully aware of all that the system had to offer. Oracle WebCenter Content’s capabilities had to be reimagined, and in this case it was the Manager of Digital Strategy at the company who found an ideal use case. He was aware of the “content chaos” that the marketing and creative services teams were experiencing, which was part of the reason campaigns for specific channels took a long time to launch and why the assets for such campaigns couldn’t be repurposed easily.

With the implementation of Oracle WebCenter Content 12c, and its integration with Agility, that has all changed. WebCenter has been identified as the system to store and manage assets, and as those assets are checked in they are automatically linked to products in Agility. This means that employees across divisions and geographic locations can easily locate the acceptable product assets for use in marketing, sales, engineering, training, or installation purposes. Outdated assets can easily be removed from the system, either manually or based on a content lifecycle schedule. Furthermore, having a standardized repository for the assets will better ensure they are reused across divisions and geographic locations, and the unknown location of assets no longer impedes internal or external collaboration. Additionally, the open-source FFmpeg video rendition engine didn’t require the purchase of additional licenses, reducing future license and maintenance costs. Overall, the WebCenter and Agility-based digital asset management system has provided the foundation to effectively and efficiently control and deliver assets to the company’s targeted commerce channels.

The Fishbowl consultants on this project have all proved to be a very valuable part of my digital content strategy. I am very happy with the level of support I have received and amazed by how quickly they are able to provide conceptual ideas, working POC’s and final deliverables. They have been instrumental in developing and implementing the integration between Oracle WebCenter’s digital asset management component and our new Agility PIM application. I view them as members of my team, and they were all key in the successful implementation of our digital content management system.

Manager, Digital Strategy, Manufacturing Company


The post No More Content Chaos – Fishbowl Helps Manufacturer Integrate Oracle WebCenter with Agility for Enterprise Asset Management and Multichannel Delivery appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Using an OpenLayers map to select countries in an Oracle JET application

Amis Blog - Tue, 2018-01-02 07:03

In a recent article I discussed how the open source geo, GIS and map library OpenLayers can be used in an Oracle JET application. That article shows how countries selected in a standard Oracle JET Checkbox Set component are represented on a world map. In this article, we take this one step further – or actually several steps. By adding interactivity to the map, we allow users to select a country on the world map and we notify JET components of this selection. The synchronization works both ways: if the user types the name of a country in a JET input text component, this country is highlighted on the world map. Note that the map has full zooming and panning capabilities.

(Animated GIF: selecting a country on the map and synchronizing it with the JET input-text component)

 

The Gif demonstrates how the map is first shown with France highlighted. Next, the user types Chile into the input-text component and when that change is complete, the map is synchronized: Chile is highlighted. The user then hovers over Morocco – and the informational DIV element shows that fact. No selection is made yet. Then the user mouse clicks on Morocco. The country is selected on the map and the JET input-text component is synchronized with the name of the country. Subsequently, the same is done with India: hover and then select. Note: the map can easily support multi-country selection; I had to turn off that option explicitly (default behavior is to allow it).

The challenges I faced when implementing this functionality:

  • add vector layer with countries (features)
  • highlight country when mouse is hovering over it
  • add select interaction to allow user to select a country
  • communicate country selection event in map to “regular” JET component
  • synchronize map with country name typed into JET component

The steps (assuming that the steps in this article are performed first):

  1. Create mapArea.html with input-text, informational DIV and map container (DIV)
  2. Create mapArea.js for the mapArea module
  3. Add a div with data bind for mapArea module to index.html
  4. Download a file in GEOJSON format with annotated geo-json geometries for the countries of the world
  5. Initialize the map with two layers – raster OSM world map and vector country shapes
  6. Add overlay to highlight countries that are hovered over
  7. Add Select Interaction – to allow selection of a country – applying a bold style to the selected country
  8. Update JET component from country selection
  9. Set country selection on map based on value change in JET component

And here is the code used to implement this: https://github.com/lucasjellema/jet-and-openlayers .

 

Create mapArea.html with input-text, informational DIV and map container (DIV)
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/openlayers/4.6.4/ol-debug.css" />
<h2>Select Country on Map</h2>
<div id="componentDemoContent" style="width: 1px; min-width: 100%;">
    <div id="div1">

        <oj-label for="text-input">Country</oj-label>
        <oj-input-text id="text-input" value="{{selectedCountry}}" on-value-changed="[[countryChangedListener]]"></oj-input-text>
    </div>
</div>
<div id="info"></div>
<div id="map2" class="map"></div>
Create ViewModel mapArea.js for the mapArea module

 

define(
    ['ojs/ojcore', 'knockout', 'jquery', 'ol', 'ojs/ojknockout', 'ojs/ojinputtext', 'ojs/ojlabel'],
    function (oj, ko, $, ol) {
        'use strict';
        function MapAreaViewModel() {
            var self = this;

            self.selectedCountry = ko.observable("France");
            self.countryChangedListener = function(event) {
            }
            ...
        }
        return new MapAreaViewModel();
    }
);
Add a DIV with data bind for mapArea module to index.html
    ...</header>
    <div role="main" class="oj-web-applayout-max-width oj-web-applayout-content">
    <div data-bind="ojModule:'mapArea'" /> 
    </div>
    <footer class="oj-web-applayout-footer" role="contentinfo">
    ...
Download a file in GEOJSON format with annotated geo-json geometries for the countries of the world

I downloaded a GEOJSON file with country data from GitHub: https://github.com/johan/world.geo.json and placed the file in the directory src\js\viewModels of my JET application.


 

Initialize the map with two layers – raster OSM world map and vector country shapes
        function MapAreaViewModel() {
            var self = this;

            self.selectedCountry = ko.observable("France");
            self.countryChangedListener = function(event) {
                // self.selectInteraction.getFeatures().clear();
                // self.setSelectedCountry(self.selectedCountry())                
            }


            $(document).ready
                (
                // when the document is fully loaded and the DOM has been initialized
                // then instantiate the map
                function () {
                    initMap();
                })


            function initMap() {
                var style = new ol.style.Style({
                    fill: new ol.style.Fill({
                        color: 'rgba(255, 255, 255, 0.6)'
                    }),
                    stroke: new ol.style.Stroke({
                        color: '#319FD3',
                        width: 1
                    }),
                    text: new ol.style.Text()
                });

                self.countriesVector = new ol.source.Vector({
                    url: 'js/viewModels/countries.geo.json',
                    format: new ol.format.GeoJSON()
                });
               self.map2 = new ol.Map({
                    layers: [
                        new ol.layer.Vector({
                            id: "countries",
                            renderMode: 'image',
                            source: self.countriesVector,
                            style: function (feature) {
                                style.getText().setText(feature.get('name'));
                                return style;
                            }
                        })
                        , new ol.layer.Tile({
                            id: "world",
                            source: new ol.source.OSM()
                        })
                    ],
                    target: 'map2',
                    view: new ol.View({
                        center: [0, 0],
                        zoom: 2
                    })
                });
         }//initMap

 

Add overlay to highlight countries that are hovered over

Note: this code is added to the initMap function:

                // layer to hold (and highlight) currently selected feature(s) 
                var featureOverlay = new ol.layer.Vector({
                    source: new ol.source.Vector(),
                    map: self.map2,
                    style: new ol.style.Style({
                        stroke: new ol.style.Stroke({
                            color: '#f00',
                            width: 1
                        }),
                        fill: new ol.style.Fill({
                            color: 'rgba(255,0,0,0.1)'
                        })
                    })
                });

                var highlight;
                var displayFeatureInfo = function (pixel) {

                    var feature = self.map2.forEachFeatureAtPixel(pixel, function (feature) {
                        return feature;
                    });

                    var info = document.getElementById('info');
                    if (feature) {
                        info.innerHTML = feature.getId() + ': ' + feature.get('name');
                    } else {
                        info.innerHTML = '&nbsp;';
                    }

                    if (feature !== highlight) {
                        if (highlight) {
                            featureOverlay.getSource().removeFeature(highlight);
                        }
                        if (feature) {
                            featureOverlay.getSource().addFeature(feature);
                        }
                        highlight = feature;
                    }

                };

                self.map2.on('pointermove', function (evt) {
                    if (evt.dragging) {
                        return;
                    }
                    var pixel = self.map2.getEventPixel(evt.originalEvent);
                    displayFeatureInfo(pixel);
                });

 

Add Select Interaction – to allow selection of a country – applying a bold style to the selected country

This code is based on this example: http://openlayers.org/en/latest/examples/select-features.html .

                // define the style to apply to selected countries
                var selectCountryStyle = new ol.style.Style({
                    stroke: new ol.style.Stroke({
                        color: '#ff0000',
                        width: 2
                    })
                    , fill: new ol.style.Fill({
                        color: 'red'
                    })
                });
                self.selectInteraction = new ol.interaction.Select({
                    condition: ol.events.condition.singleClick,
                    toggleCondition: ol.events.condition.shiftKeyOnly,
                    layers: function (layer) {
                        return layer.get('id') == 'countries';
                    },
                    style: selectCountryStyle

                });
                // add an event handler to the interaction
                self.selectInteraction.on('select', function (e) {
                    //to ensure only a single country can be selected at any given time
                    // find the most recently selected feature, clear the set of selected features and add the selected feature (as the only one)
                    var f = self.selectInteraction.getFeatures()
                    var selectedFeature = f.getArray()[f.getLength() - 1]
                    self.selectInteraction.getFeatures().clear();
                    self.selectInteraction.getFeatures().push(selectedFeature);
                });

and just after the declaration of self.map2:

...
    self.map2.getInteractions().extend([self.selectInteraction]);

Update JET component from country selection

Add to the end of the select event handler of the selectInteraction:

                    var selectedCountry = { "code": selectedFeature.id_, "name": selectedFeature.values_.name };
                    // set name of selected country on Knock Out Observable
                   self.selectedCountry(selectedCountry.name);

Create the setSelectedCountry function to programmatically select a feature for a given country name:

                self.setSelectedCountry = function (country) {
                    //programmatic selection of a feature
                    var countryFeatures = self.countriesVector.getFeatures();
                    var c = self.countriesVector.getFeatures().filter(function (feature) { return feature.values_.name == country });
                    self.selectInteraction.getFeatures().push(c[0]);
                }
Set country selection on map based on value change in JET component

Implement the self.countryChangedListener that is referred to in the mapArea.html file for the input-text component:

            self.countryChangedListener = function(event) {
                self.selectInteraction.getFeatures().clear();
                self.setSelectedCountry(self.selectedCountry())                
            }

Create the following listener (for the end of loading the GeoJSON data in the countriesVector); when loading is ready, the current country value in the selectedCountry observable backing the input-text component is used to select the initial country:

                var listenerKey = self.countriesVector.on('change', function (e) {
                    if (self.countriesVector.getState() == 'ready') {
                        console.log("loading dione");
                        // and unregister the "change" listener 
                        ol.Observable.unByKey(listenerKey);
                        self.setSelectedCountry(self.selectedCountry())
                    }
                });

 

References

GitHub Repo with the code (JET Application) : https://github.com/lucasjellema/jet-and-openlayers .

Countries GeoJSON file – https://github.com/johan/world.geo.json

Open Layers Example of Select Interaction – http://openlayers.org/en/latest/examples/select-features.html

Open Layers API – Vector: http://openlayers.org/en/latest/apidoc/ol.source.Vector.html

Event Listener for OpenLayers Vector with GEOJSON source – https://gis.stackexchange.com/questions/123149/layer-loadstart-loadend-events-in-openlayers-3/123302#123302

Animated Gif maker – http://gifmaker.me/

OpenLayers 3: Beginner’s Guide by Thomas Gratier, Erik Hazzard, and Paul Spencer, Packt Publishing, 2015

OpenLayers Book – Handling Selection Events –  http://openlayersbook.github.io/ch11-creating-web-map-apps/example-08.html

The post Using an OpenLayers map to select countries in an Oracle JET application appeared first on AMIS Oracle and Java Blog.

Wrong Cell Server name on X6-2 Elastic Rack - Bug 25317550

Syed Jaffar - Tue, 2018-01-02 05:55
Two X6-2 Elastic full-capacity Exadata systems were deployed recently. Due to the following bug, the cell names were not properly updated with the client-provided names after executing applyElasticConfig.sh.

Bug 25317550 : OEDA FAILS TO SET CELL NAME RESULTING IN GRID DISK NAMES NOT HAVING RIGHT SUFFIX

Though this doesn't impact operations, it certainly creates confusion when multiple Exadata systems are deployed in the same data center, because the exact names of the cells, cell disks, and grid disks don't reflect the intended host names.

Note: It is highly recommended to validate the cell names after executing applyElasticConfig.sh and before running OneCommand. If you encounter a similar problem, simply change the cell name with alter cell name=[correctname] and proceed with the OneCommand execution to avoid the bug.

The default names look like the following:

# dcli -g cell_group -l root 'cellcli -e list cell attributes name'
                        celadm1: ru02
                        celadm2: ru04
                        celadm3: ru06
                        celadm4: ru08
                        celadm5: ru10
                        celadm6: ru12



To change the cell name so that it is reflected in the cell disk and grid disk names, follow the procedure below.

The procedure below must be performed on all cells separately and sequentially (to avoid full downtime):

1) Change the cell name:
    cellcli> alter cell name=celadm5

 
2) Confirm griddisks can be taken offline.

    cellcli> list griddisk attributes name, ASMDeactivationOutcome, ASMModeStatus
            ASMDeactivationOutcome - Should be YES for all griddisks


3) Inactivate griddisk on that cell
    cellcli> alter griddisk all inactive

 
            Observation: if any voting disks reside on this storage server, they will relocate to a surviving storage server.


4) Change cell disk name
            alter celldisk CD_00_ru10 name=CD_00_celadm5;
            alter celldisk CD_01_ru10 name=CD_01_celadm5;
            alter celldisk CD_02_ru10 name=CD_02_celadm5;
            alter celldisk CD_03_ru10 name=CD_03_celadm5;
            alter celldisk CD_04_ru10 name=CD_04_celadm5;
            alter celldisk CD_05_ru10 name=CD_05_celadm5;
            alter celldisk CD_06_ru10 name=CD_06_celadm5;
            alter celldisk CD_07_ru10 name=CD_07_celadm5;
            alter celldisk CD_08_ru10 name=CD_08_celadm5;
            alter celldisk CD_09_ru10 name=CD_09_celadm5;
            alter celldisk CD_10_ru10 name=CD_10_celadm5;
            alter celldisk CD_11_ru10 name=CD_11_celadm5;

5) Change Griddisk name using the below examples (do it for all grid disks, DATAC1, DBFS & RECOC1)
            alter GRIDDISK DATAC1_CD_00_ru10  name=DATAC1_CD_00_celadm5;
            alter GRIDDISK DBFS_DG_CD_02_ru10 name=DBFS_DG_CD_02_celadm5;
            alter GRIDDISK RECOC1_CD_11_ru10  name=RECOC1_CD_11_celadm5;

6) Activate griddisk on that cell
            cellcli> ALTER GRIDDISK ALL ACTIVE;
       
        There are some important points to note after activating the grid disks:

      a) ASM disk path and name
       * The grid disk name change is automatically reflected in the ASM disk path.
       * The ASM (logical) disk name still refers to the old name.
      b) failgroup
       * The failgroup is unchanged and still uses the old name.

7) Changing ASM logical name and failgroup name.

    * This can be achieved by dropping the ASM disk and adding it back with the correct name. The observation is that the failgroup name is automatically corrected when the ASM disk is added back with the correct name.
    * ASMCA is a convenient tool to drop and add back ASM disks with a high rebalance power (250+).
   
    a) Drop asm disks and observations.
        * We need to make sure asmdisks can be dropped
             cellcli> list griddisk attributes name, ASMDeactivationOutcome, ASMModeStatus
                ASMDeactivationOutcome - Should be YES for all griddisks
        * Drop asmdisks using asmca or alter diskgroup 


        We can see the ASM disk state change to DROPPING and an ongoing rebalance operation.
   
        * ASM rebalance operation.
            We can see the ongoing ASM rebalance operation using the commands below, and we can increase the power to finish it faster.


                sqlplus / as sysasm


                sql> select * from v$asm_operation;
                sql> alter diskgroup DATAC1 rebalance power 256;


        * Once the rebalance operation has completed, we can see the ASM disk state has changed to NORMAL, the name has become empty, and the failgroup has also changed to the correct name.
       
b) Add back ASM disks and observations.

Adding the disks back can likewise be done using ASMCA or ALTER DISKGROUP commands.
We need to make sure we add them back with the correct name; in this case DATAC1_CD_00_RU10 should be added back as DATAC1_CD_00_celadm5.

We can see the ongoing ASM rebalance operation using the commands below, and we can increase the power to finish it faster.


        sqlplus / as sysasm
        sql> select * from v$asm_operation;
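
As a hedged sketch of that drop-and-add cycle using ALTER DISKGROUP (the diskgroup and disk names follow the example above; the o/... path and the IP address are placeholders that must be adjusted to your cells):

        -- illustrative only: adjust the diskgroup, disk names and the o/<cell-ip>/ path to your environment
        alter diskgroup DATAC1 drop disk DATAC1_CD_00_RU10 rebalance power 256;

        -- wait until v$asm_operation shows the rebalance has finished, then add the disk back
        -- under its corrected grid disk name (the failgroup is picked up automatically)
        alter diskgroup DATAC1 add disk 'o/192.168.10.5/DATAC1_CD_00_celadm5'
              name DATAC1_CD_00_celadm5 rebalance power 256;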
   
8) Remaining cells

We can continue the same operation for the remaining cells, and the entire operation can be completed without any downtime at the database level.
Once it is complete, we can see that the voting disks have also been relocated or renamed with the new names.


References:
Bug 25317550 : OEDA FAILS TO SET CELL NAME RESULTING IN GRID DISK NAMES NOT HAVING RIGHT SUFFIX

I appreciate and thank my team member Khalid Kizhakkethil for doing this wonderful job and preparing the documentation.


 



New OA Framework 12.2.5 Update 18 Now Available

Steven Chan - Tue, 2018-01-02 05:33

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure. Since the initial release of Oracle E-Business Suite Release 12.2 in 2013, we have released a number of cumulative updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.5 is now available:

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.5 users should apply this patch.  Future OAF patches for EBS Release 12.2.5 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes (TBS) fixes in total, including all fixes released in previous EBS Release 12.2.5 bundle patches.

This latest bundle patch includes fixes for following bugs/issues:

  • Tip type is not displayed in 'Screen Reader Optimized' accessibility mode.
  • Null Pointer Exception is thrown with a specific Configurator Developer flow that uses the query bean.

Related Articles

Categories: APPS Blogs

Quick SQL: from Packaged App to built-in feature in Oracle APEX 5.2

Dimitri Gielis - Tue, 2018-01-02 04:26
I have blogged about Quick SQL a few times already, as I noticed that not many developers knew about it.

In Oracle APEX 5.1 you can install Quick SQL by going to the Packaged Apps section and install it from there:



I really love Quick SQL as it allows me to build my data model very fast, but also shows me the structure of the tables in a very efficient way. That is why I created a script that can reverse engineer existing tables into the Quick SQL format.

From Oracle APEX 5.2 onwards you won't find Quick SQL in the packaged app section anymore... but no worries, it's not gone; it's now built into the APEX framework itself :)

Go to SQL Workshop - SQL Scripts:


Hit the Quick SQL button:


Here you have Quick SQL :)


You can run your script after you save, straight from this interface.

Note: the screenshots are taken from Oracle APEX 5.2 Early Adopter, so things might change in the final release of APEX 5.2.
Categories: Development

Defaults

Jonathan Lewis - Tue, 2018-01-02 02:43

Following on from a Twitter reference and an update to an old posting about a side effect of constraints on the work done inserting data, I decided to have a closer look at the more general picture of default values and inserts. Here’s a script that I’ve tested against 11.2.0.4, 12.1.0.2, and 12.2.0.1 (original install, no patches applied in all cases):


rem
rem     Script:         defaults_cost.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2017
rem

create table t1 (
        column1  varchar2(10),
        column2  varchar2(10),
        column3  varchar2(10),
        column4  varchar2(10),
        column32 varchar2(32)   default 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        column33 varchar2(33)   default 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        virtual1      generated always as (
                column4 || column3 || column2 || column1
        ) virtual
)
segment creation immediate
;

execute dbms_output.put_line(dbms_stats.create_extended_stats(user,'t1','(column1 || column2 || column3 || column4)'))
execute dbms_output.put_line(dbms_stats.create_extended_stats(user,'t1','(column1,column2,column3)'))

create or replace function plsqlfunction_with_30char_name(
        i_in varchar2
)
return varchar
deterministic
is
begin
        return initcap(i_in);
end;
/

create index t1_i1 on t1(substr(plsqlfunction_with_30char_name(column1),1,10));

When you create a function-based index you get a hidden, virtual column supporting the index expression; when you create extended stats (of either type) you get a hidden virtual column holding the extension definition; and when you create any type of virtual column, including a “real” virtual column, you get a data dictionary entry holding the column name and the expression definition. All these options use the “data_default” column from user_tab_cols to display the defining information – as we can see when we run the following query:


select  column_name, data_default
from    user_tab_cols
where   table_name = 'T1'
order by
         column_id
;

COLUMN_NAME                      DATA_DEFAULT
-------------------------------- --------------------------------------------------------------------------------
COLUMN1
COLUMN2
COLUMN3
COLUMN4
COLUMN32                         'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
COLUMN33                         'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
VIRTUAL1                         "COLUMN4"||"COLUMN3"||"COLUMN2"||"COLUMN1"
SYS_STUDAN97VB_XDKRTR_NPFAB80P   "COLUMN1"||"COLUMN2"||"COLUMN3"||"COLUMN4"
SYS_STUTCBJ8OFFSY1D9ZXRYZ0W3W#   SYS_OP_COMBINED_HASH("COLUMN1","COLUMN2","COLUMN3")
SYS_NC00010$                     SUBSTR("TEST_USER"."PLSQLFUNCTION_WITH_30CHAR_NAME"("COLUMN1"),1,10)

Apart from the special cases I’ve just listed, you’ll also see the “default values” I specified for column32 and column33 – you’ll notice that I’ve supplied a 30 character string as the default for column32, and a 31 character string as the default for column33 – this is a convenience that means the used space in the data_default (which is a long column) corresponds to the name of the column once you include the single quotes in their character count.

Having set my data up I’m going to emulate a bad application that uses lots of literal string SQL and leaves Oracle to fill in the default values (and, of course, derive the various virtual values it might need).


alter session set events '10046 trace name context forever, level 4';

begin
        for i in 1..10 loop
                execute immediate '
                        insert into t1 (column1, column2, column3, column4)
                        values( ' || i || ', ' || i || ', ' || i || ', ' || i || ')'
                ;
                commit;
        end loop;
end;
/

alter session set events '10046 trace name context off';

This code generates 10 strings that populate column1 through to column4 only. But you’ll notice the call to enable SQL tracing – and here’s the interesting bit of the output from applying tkprof to the trace file:


  SQL ID: 47r1y8yn34jmj Plan Hash: 2191121161

select default$
from
 col$ where rowid=:1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       50      0.00       0.00          0          0          0           0
Execute     50      0.00       0.00          0          0          0           0
Fetch       50      0.00       0.00          0        100          0          50
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      150      0.00       0.00          0        100          0          50

The summary is the same for all three versions of Oracle I tested*** – we’ve queried for a default value from col$ 5 times for each row we insert! (Technically that’s 5 times for each insert statement we’ve had to (hard-)parse; this anomaly wouldn’t appear if we have been using a bind-variable method and reusing the insert statement.) *** There is one difference in 12.2 – the number of parse calls reported for this statement was 1 rather than 50 but, judging by the various cursor cache hit stats, that may be due to a change in accounting rather than a change in workload.

Check the table definition: there are two “real defaults” and 4 expressions due to the various virtual columns – so why 5 calls per insert and not 6? The answer lies in the length of the actual value involved – if the text that appears in the (long) data_default column is 32 characters or shorter it will be stored in the dictionary cache (rowcache), but only one of our 6 values is that short, so Oracle looks up the other five on each insert (hard parse).

This is a little strange on two counts: first, given the enormous memories that have been available for years and the extreme usefulness of virtual columns and extended stats it seems something of an oversight to limit the memory available to the cache that holds such critical definitions; secondly, we don’t need to evaluate the virtual columns (unless they are indexed) or extended stats on inserts, so why is Oracle querying the definitions anyway? [Possible answer: perhaps Oracle is using generic code that allows for column check constraints – which might exist on users’ virtual columns – and hasn’t catered for bypassing system-generated virtual columns.]

A key point to remember before you start worrying too much about the impact of the execution count for this query against col$ is that it’s only relevant to “hard” parses – so you’re only likely to notice it if you have a large number of different “literal string” inserts that should be using bind variables; and that means you’ve probably got an important coding defect to address before you worry too much about the extra impact caused by this particular call. Nevertheless there are a couple of bug reports on MoS that have been raised against this query and, after writing this note, I did a quick Google search for the critical SQL_ID and found (among others) this production example from Franck Pachot.
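
For comparison, here is a minimal sketch of the same loop written with a bind-variable insert – the statement is hard parsed only once, so the recursive lookups against col$ happen on that single hard parse rather than on every execution:

begin
        for i in 1..10 loop
                execute immediate '
                        insert into t1 (column1, column2, column3, column4)
                        values( :1, :2, :3, :4 )'
                using i, i, i, i;
                commit;
        end loop;
end;
/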

 


Embedding OpenLayers in Oracle JET for Advanced Maps and GIS style User Interfaces

Amis Blog - Mon, 2018-01-01 10:49

Oracle JET is a toolkit for the creation of rich web applications. Many applications will have location-related aspects. Such applications can benefit from advanced map capabilities – for presenting data in maps, allowing users to interact with maps in order to formulate queries, navigate to relevant details or manipulate data. OpenLayers is one of the most prominent open source JavaScript libraries for working with maps in web applications. It provides an API for building rich web-based geographic applications similar to Google Maps and Bing Maps. One of the geographic data providers that OpenLayers works well with is Open Street Map (OSM) – also fully open source.

In this article, I will report on my first steps with OpenLayers and OSM integrated in Oracle JET. In a few simple steps, I will create the JET application illustrated below – a mix of a JET Checkbox Set where countries can be selected and an OpenLayers map that is manipulated from JavaScript to show (and hide) markers for the countries that are selected (and deselected).

(Animated GIF: selecting countries with the checkbox set and markers appearing on the map)

This article should provide you with a starting point for working with OpenLayers in JET yourself. Source code for this article can be downloaded from GitHub: https://github.com/lucasjellema/jet-and-openlayers .

Steps:

  • create new JET application (for example with JET CLI)
  • download OpenLayers distribution and add to JET application’s folders
  • configure the JET application for the OpenLayers library
  • add a DIV as map container to the HTML file
  • add the JavaScript code to initialize and manipulate the map to the ViewModel JS file

In more detail:

1. Create a new Oracle JET application

Follow for example the steps described on the Oracle JET Web Pages: http://www.oracle.com/webfolder/technetwork/jet/globalGetStarted.html 

Use

ojet create projectname

to create the new JET application

2. Download OpenLayers Distribution and Add to the JET Application

Download the OpenLayers distribution – a zip-file with the combined JavaScript and CSS files for OpenLayers (4.x) from https://github.com/openlayers/openlayers/releases/ 

In the JET application’s js/libs directory, create a new directory openlayers and add the new library and any accompanying files to it.


3. Configure the JET application for the OpenLayers library

In the js directory update the js/main-release-paths.json file to include the new library.

  "ol": "libs/openlayers/ol-debug",
  "olcss": "libs/openlayers/ol.css"
}

In your RequireJS bootstrap file, typically main.js, add a link to the new file in the path mapping section and include the new library in the require() definition.

  paths:
  //injector:mainReleasePaths
  {
    ...
    'ol': 'libs/openlayers/ol-debug'
  }
  //endinjector

In the same file add a Shim Configuration for OpenLayers

// Shim configurations for modules that do not expose AMD
  shim:
  {
    'jquery':
    {
      exports: ['jQuery', '$']
    }
    ,'ol':
    {
      exports: ['ol']
    }
  }
}

Finally, add module ‘ol’ to the call to require and as a parameter in the callback function (if you want to perform special initialization on this module):

require(['ojs/ojcore', 'knockout', 'appController','ol', 'ojs/ojknockout', 'ojs/ojbutton', 'ojs/ojtoolbar', 'ojs/ojmenu','ojs/ojmodule'],
  function (oj, ko, app, ol) { // this callback gets executed when all required modules are loaded
    ...

Now to actually include a map in a View in the JET application:

4. Add a DIV as map container to the HTML file

The View contains a DIV that will act as the container for the map. It also contains a Checkbox Set with checkboxes for five different countries. The checkbox set is data bound to the ViewModel; any change in selection status will trigger an event listener. Additionally, the currentCountries variable in the ViewModel is updated with any change by the user.

<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/openlayers/4.6.4/ol-debug.css" />
<h2>Workarea with Map - Select Countries</h2>
<div id="div1">
        <oj-checkboxset id="countriesCheckboxSetId" labelled-by="mainlabelid" class="oj-choice-direction-row" value="{{currentCountries}}"
                on-value-changed="[[selectionListener]]">
                <oj-option id="uruopt" value="uy">Uruguay</oj-option>
                <oj-option id="romopt" value="ro">Romania</oj-option>
                <oj-option id="moropt" value="ma">Morocco</oj-option>
                <oj-option id="spaopt" value="es">Spain</oj-option>
                <oj-option id="indopt" value="in">India</oj-option>
        </oj-checkboxset>
        <br/>
</div>
<div id="map2" class="map"></div>

5. Add JavaScript code to initialize and manipulate the map to the ViewModel JS file

Add OpenLayers dependency in workArea.js:

define(
    ['ojs/ojcore', 'knockout', 'jquery', 'ol', 'ojs/ojknockout', 'ojs/ojinputtext', 'ojs/ojbutton', 'ojs/ojlabel', 'ojs/ojcheckboxset'],
    function (oj, ko, $, ol) {
        'use strict';
        function WorkAreaViewModel() {
            var self = this;

The following code defines a countryMap – a collection of five elements (one for each of five countries) that hold the longitude and latitude for each country, as well as a display name and country code (also the key in the map). Subsequently, an OpenLayers feature is created for each country and referenced from the countryMap element for later use.

            self.currentCountries = ko.observableArray([]);

            self.countryMap = {};
            self.countryMap['in'] = { "place_id": "177729185", "licence": "Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright", "osm_type": "relation", "osm_id": "304716", "boundingbox": ["6.5546079", "35.6745457", "68.1113787", "97.395561"], "lat": "22.3511148", "lon": "78.6677428", "display_name": "India", "class": "boundary", "type": "administrative", "importance": 0.3133568788165, "icon": "http://nominatim.openstreetmap.org/images/mapicons/poi_boundary_administrative.p.20.png", "address": { "country": "India", "country_code": "in" } };
            self.countryMap['es'] = { "place_id": "179962651", "licence": "Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright", "osm_type": "relation", "osm_id": "1311341", "boundingbox": ["27.4335426", "43.9933088", "-18.3936845", "4.5918885"], "lat": "40.0028028", "lon": "-4.003104", "display_name": "Spain", "class": "boundary", "type": "administrative", "importance": 0.22447060272487, "icon": "http://nominatim.openstreetmap.org/images/mapicons/poi_boundary_administrative.p.20.png", "address": { "country": "Spain", "country_code": "es" } };
            self.countryMap['ma'] = { "place_id": "217466685", "licence": "Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright", "osm_type": "relation", "osm_id": "3630439", "boundingbox": ["21.3365321", "36.0505269", "-17.2551456", "-0.998429"], "lat": "31.1728192", "lon": "-7.3366043", "display_name": "Morocco", "class": "boundary", "type": "administrative", "importance": 0.19300832455819, "icon": "http://nominatim.openstreetmap.org/images/mapicons/poi_boundary_administrative.p.20.png", "address": { "country": "Morocco", "country_code": "ma" } }
            self.countryMap['ro'] = { "place_id": "177563889", "licence": "Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright", "osm_type": "relation", "osm_id": "90689", "boundingbox": ["43.618682", "48.2653964", "20.2619773", "30.0454257"], "lat": "45.9852129", "lon": "24.6859225", "display_name": "Romania", "class": "boundary", "type": "administrative", "importance": 0.30982735099944, "icon": "http://nominatim.openstreetmap.org/images/mapicons/poi_boundary_administrative.p.20.png", "address": { "country": "Romania", "country_code": "ro" } };
            self.countryMap['uy'] = { "place_id": "179428864", "licence": "Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright", "osm_type": "relation", "osm_id": "287072", "boundingbox": ["-35.7824481", "-30.0853962", "-58.4948438", "-53.0755833"], "lat": "-32.8755548", "lon": "-56.0201525", "display_name": "Uruguay", "class": "boundary", "type": "administrative", "importance": 0.18848351906936, "icon": "http://nominatim.openstreetmap.org/images/mapicons/poi_boundary_administrative.p.20.png", "address": { "country": "Uruguay", "country_code": "uy" } };

            for (const c in self.countryMap) {
                // create a feature for each country in the map, using the key c to look up its entry
                var coordinates = ol.proj.transform([1 * self.countryMap[c].lon, 1 * self.countryMap[c].lat], 'EPSG:4326', 'EPSG:3857');
                var featurething = new ol.Feature({
                    name: self.countryMap[c].display_name,
                    geometry: new ol.geom.Point(coordinates)
                });
                self.countryMap[c].feature = featurething;
            }

Then add the code to do the initialization of the map itself – to be performed when the DOM is ready and the DIV target container is available:

            $(document).ready
                (
                // when the document is fully loaded and the DOM has been initialized
                // then instantiate the map
                function () {
                    initMap();
                })

            function initMap() {
                self.elem = document.getElementById("text-input");
                self.map = new ol.Map({
                    target: 'map2',
                    layers: [
                        new ol.layer.Tile({
                            source: new ol.source.OSM()
                        })
                    ],
                    view: new ol.View({
                        center: ol.proj.fromLonLat([-2, -5]),
                        zoom: 3
                    })
                });
            }


Also add the code for the selectionListener to be executed whenever countries are selected or deselected.
This code adds OpenLayers features for each of the currently selected countries. Next, construct a layer which contains these features and has a specific style (red circle with big X) associated with it. Finally, add this layer to the map – to have the features displayed in the web page.

            // triggered whenever a checkbox is selected or deselected
            self.selectionListener = function (event) {
                console.log("Country Selection Changed");

                var vectorSource = new ol.source.Vector({}); // to hold features for currently selected countries
                for (var i = 0; i < self.currentCountries().length; i++) {
                    // add the feature to the map for each currently selected country
                    vectorSource.addFeature(self.countryMap[self.currentCountries()[i]].feature);
                }//for

                var layers = self.map.getLayers();
                // remove the feature layer from the map if it already was added
                if (layers.getLength() > 1) {
                    self.map.removeLayer(layers.item(1));
                }
                //Create and add the vector layer with features to the map
                // define the style to apply to these features: bright red, circle with radius 10 and a X as (text) content
                var vector_layer = new ol.layer.Vector({
                    source: vectorSource
                    ,style: function(feature) {
                        var style = new ol.style.Style({
                            image: new ol.style.Circle({
                              radius: 10,
                              stroke: new ol.style.Stroke({
                                color: '#fff'
                              }),
                              fill: new ol.style.Fill({
                                //color: '#3399CC' // light blue
                                color: 'red' // bright red
                            })
                            }),
                            text: new ol.style.Text({
                              text: "X",
                              fill: new ol.style.Fill({
                                color: '#fff'
                              })
                            })
                          });
                          return style;
                        }
                 } )
                self.map.addLayer(vector_layer);

            }//selectionListener
        }

References

Source code in GitHub Repo: https://github.com/lucasjellema/jet-and-openlayers 

Blog article by Enno Schulte (Virtual7) on adding Socket.io as a third-party library to a JET 3.x application: http://www.virtual7.de/blog/2017/07/oracle-jet-3-add-third-party-libraries-example-socket-io/

Documentation on adding 3rd party libraries to JET 4.0: https://docs.oracle.com/middleware/jet410/jet/developer/GUID-EC40DF3C-57FB-4919-A066-73E573D66B67.htm#JETDG-GUID-EC40DF3C-57FB-4919-A066-73E573D66B67 

OJET Docs Checkbox Set – http://www.oracle.com/webfolder/technetwork/jet/jsdocs/oj.ojCheckboxset.html

The post Embedding OpenLayers in Oracle JET for Advanced Maps and GIS style User Interfaces appeared first on AMIS Oracle and Java Blog.

Happy New Year 2018

Senthil Rajendran - Sun, 2017-12-31 21:46

Happy New Year 2018

Data Access Layer vs Table APIs

Andrew Clarke - Sun, 2017-12-31 11:59
One of the underlying benefits of PL/SQL APIs is the enabling of data governance. Table owners can shield their tables behind a layer of PL/SQL. Other users have no access to the tables directly but only through stored procedures. This confers many benefits:
  • Calling programs code against a programmatic interface. This frees the table owner to change the table's structure whenever it's necessary without affecting its consumers.
  • Likewise the calling programs get access to the data they need without having to know the details of the table structure, such as technical keys.
  • The table owner can use code to enforce complicated business rules when data is changed.
  • The table owner can enforce sophisticated data access policies (especially for applications using Standard Edition without DBMS_RLS).
So naturally the question arises, is this the same as Table APIs?

Table APIs used to be a popular approach to encapsulating tables. The typical Table API comprised two packages per table; one package provided methods for inserting, updating and deleting records, and the other package provided query methods. The big attraction of Table APIs was that they could be 100% generated from the data dictionary - both Oracle Designer and Steven Feuerstein's QNXO library provided TAPI generators. And they felt like good practice because, y'know, access to the tables was shielded by a PL/SQL layer.
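
To picture what that meant in practice, here is a minimal sketch of a generated TAPI spec for a hypothetical EMP table (illustrative only – not the output of any particular generator):

create or replace package emp_tapi as
    -- single-row DML methods, typically one set per table
    procedure ins (p_empno in emp.empno%type, p_ename in emp.ename%type, p_deptno in emp.deptno%type);
    procedure upd (p_empno in emp.empno%type, p_ename in emp.ename%type, p_deptno in emp.deptno%type);
    procedure del (p_empno in emp.empno%type);
    -- query methods: primary key lookup plus the "flexible" free-text WHERE clause variant
    function get (p_empno in emp.empno%type) return emp%rowtype;
    function qry (p_where in varchar2) return sys_refcursor;
end emp_tapi;
/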

But there are several problems with Table APIs.

The first is that they entrench row-by-agonising-row processing. Table APIs have their roots in early versions of Oracle, so the DML methods only worked with a single record. Even after Oracle 8 introduced PL/SQL collection types, TAPI code in the wild tended to be RBAR: there seems to be something in the brain of the average programmer which predisposes them to prefer loops executing procedural code rather than set operations.

The second is that they prevent SQL joins. Individual records have to be selected from one table to provide keys for looking up records in a second table. Quite often this leads to loops within loops. So-called PL/SQL joins prevent the optimizer from choosing good access paths when handling larger amounts of data.
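
A hedged illustration of the difference, assuming hypothetical EMP and DEPT tables with generated EMP_TAPI and DEPT_TAPI packages:

-- the TAPI style: a "PL/SQL join", one lookup per row
declare
    l_emp  emp%rowtype;
    l_dept dept%rowtype;
begin
    for r in (select empno from emp) loop
        l_emp  := emp_tapi.get(r.empno);        -- row-by-row lookup in the first table
        l_dept := dept_tapi.get(l_emp.deptno);  -- then another lookup in the second table
        null;                                   -- process one record at a time
    end loop;
end;
/

-- plain SQL: the optimizer is free to choose an efficient join
select e.empno, e.ename, d.dname
from   emp e
       join dept d on d.deptno = e.deptno;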

The third issue is that it is pretty hard to generate methods for all conceivable access paths. Consequently the generated packages had a few standard access paths (primary key, indexed columns) and provided a dynamic SQL method which accepted a free-text WHERE clause. Besides opening the package to SQL injection this also broke the Law of Demeter: in order to pass a dynamic WHERE clause the calling program needed to know the structure of the underlying table, which defeats the whole objective of encapsulation.

Which leads on to the fourth, more philosophical problem with Table APIs: there is minimal abstraction. Each package is generated so it fits very closely to the structure of the Table. If the table structure changes we have to regenerate the TAPI packages: the fact that this can be done automatically is scant recompense for the tight coupling between the Table and the API.

So although Table APIs could be mistaken for good practice in actuality they provide no real benefit. The interface is 1:1 with the table structure so it has no advantage over granting privileges on the table. Combined with the impact of RBAR processing and PL/SQL joins on performance and the net effect of Table APIs is disastrous.

We cannot generate good Data Access APIs: we need to write them. This is because the APIs should be built around business functions rather than tables. The API packages granted to other users should comprise procedures for executing transactions. A Unit Of Work is likely to touch more than one table. These have to be written by domain experts who understand the data model and the business rules.
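
For example, here is a hedged sketch of such a procedure (hypothetical ORDERS and ORDER_ITEMS tables and ORDERS_SEQ sequence), where one call is one unit of work spanning two tables:

create or replace procedure place_order (
    p_customer_id in  number,
    p_product_id  in  number,
    p_quantity    in  number,
    p_order_id    out number
) as
begin
    -- the caller never needs to know the key generation or the table structures
    insert into orders (order_id, customer_id, order_date)
    values (orders_seq.nextval, p_customer_id, sysdate)
    returning order_id into p_order_id;

    insert into order_items (order_id, product_id, quantity)
    values (p_order_id, p_product_id, p_quantity);
end place_order;
/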

Part of the Designing PL/SQL Programs series

Been There

Jonathan Lewis - Sun, 2017-12-31 04:51

It’s the end of the year and time for a retrospective of some sort so I was thinking of listing the top 10 most popular pages on my blog, but Connor McDonald beat me to it, so I decided to see if I could remember all the countries I’d visited since starting to work with the Oracle software, and here’s the list in alphabetical order:

Antigua
Australia
Austria
Belgium
Bosnia
Brunei
Bulgaria
Canada
China
Croatia
Czech Republic
Denmark
Dubai
Egypt
Estonia
Finland
France
Germany
Greece
Hungary
Iceland
India
Indonesia
Ireland
Israel
Italy
Japan
Latvia
Lithuania
Malaysia

A few of these were holidays rather than work, and I may have forgotten a couple, so if you’ve seen me in your country and it’s not on the list let me know.

The list can be a bit of a nuisance: I had to list “all the countries you’ve visited in the last 10 years” for both the US and Russian visas. The US form only allowed for 5 countries and the Russian one for 40; and the US expected me to list EVERY visit, with dates and city!

 


12c Multitenant internals: PDB_PLUG_IN_VIOLATIONS

Yann Neuhaus - Sat, 2017-12-30 13:54

In the previous post https://blog.dbi-services.com/12c-multitenant-internals-pdb-replay-ddl-for-common-users I’ve done some DDL on a common user to show how this is replayed later for PDBs that were not opened at that time. But what happens when one of the DDL fails on one PDB?

PDB$LASTREPLAY

In the last post, the C##USER1 common user was created and all pluggable databases (PDB1 with con_id=3 and PDB2 with con_id=4) were opened and synchronized:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where opcode=-1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

When REPLAY# in the PDB is equal to the CDB$ROOT one, this means that there are no additional statements to replicate on the PDB.
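If you just want that lag check, a stripped-down variant of the author's query (my own simplification, reading the same PDB$LASTREPLAY rows) would be:

SQL> select con_id, replay#
     from   containers(pdb_sync$)
     where  opcode = -1
     order  by con_id;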

I have PDB1 opened read write and PDB2 in read only:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ ONLY NO

For the demo my user’s default tablespace is SYSTEM:

SQL> select con_id,username,common,default_tablespace from cdb_users where username='C##USER1' order by 1;
 
CON_ID USERNAME COMMON DEFAULT_TABLESPACE
------ -------- ------ ------------------
1 C##USER1 YES SYSTEM
3 C##USER1 YES SYSTEM
4 C##USER1 YES SYSTEM

Failure in opened containers

I want to change the default tablespace for C##USER1 and I have a USERS tablespace in CDB$ROOT (but not in the PDBs):

SQL> alter user C##USER1 default tablespace USERS;
 
Error starting at line : 50 File @ common-users-pdb-sync.sql
In command -
alter user C##USER1 default tablespace USERS
Error report -
ORA-65048: error encountered when processing the current DDL statement in pluggable database PDB1
ORA-00959: tablespace 'USERS' does not exist

As we have seen in the last post, the DDL is executed on all containers that are opened read write. Here it is fine on CDB$ROOT but fails on PDB1.

Then I create the USERS tablespace in PDB1:

SQL> alter session set container=PDB1;
Session altered.
SQL> create tablespace USERS datafile '/u01/oradata/CDB2/PDB1/users.dbf' size 5M;
Tablespace USERS created.
SQL> alter session set container=CDB$ROOT;
Session altered.

And now, the statement is successful in CDB$ROOT, replicated on PDB1:

SQL> alter user C##USER1 default tablespace USERS;
User C##USER1 altered.

This is nice: the statement either succeeds in all containers or fails. When it is successful, the statements are recorded in PDB_SYNC$:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id=1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
1 0 852610 26-jan-17 02:57:26 CTXSYS SYS 5 1 alter user CTXSYS account unlock identified by *^@
1 0 853177 26-jan-17 02:57:34 CTXSYS SYS 5 2 alter user CTXSYS password expire account lock^@
1 0 1405359 26-jan-17 03:31:31 SYSTEM SYS 5 3 alter user system account lock password expire^@
1 0 1408693 23-dec-17 11:34:43 SYS SYS 5 4 alter user sys account unlock identified by *^@
1 0 1408703 23-dec-17 11:34:43 SYSTEM SYS 5 5 alter user system account unlock identified by *^@
1 0 1466615 29-dec-17 09:26:56 C##PROFILE1 SYS 7 6 create profile C##PROFILE1 limit inactive_account_time 15 container=all^@
1 0 1466641 29-dec-17 09:26:57 C##ROLE1 SYS 3 7 create role C##ROLE1 container=all^@
1 0 1466748 29-dec-17 09:26:58 C##USER1 SYS 1 8 create user C##USER1 identified by * container=all^@
1 0 1466812 29-dec-17 09:26:59 C##USER1 SYS 5 9 alter user C##USER1 profile C##PROFILE1^@
1 0 1466853 29-dec-17 09:26:59 C##USER1 C##ROLE1 SYS 10 10 grant C##ROLE1 to C##USER1 container=all^@
1 0 1467010 29-dec-17 09:27:01 C##USER1 SYS 5 11 alter user C##USER1 default tablespace USERS^@

Failure in replay at open for closed containers

But PDB2 is not synchronized because it was not opened read write:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

But I don’t have a USERS tablespace in PDB2, so the replay will fail:

SQL> alter pluggable database PDB2 close;
Pluggable database PDB2 altered.
 
SQL> alter pluggable database PDB2 open;
ORA-24344: success with compilation error
Pluggable database PDB2 altered.

This is a warning only. The SQLcl feedback is a bit misleading, mentioning a compilation error because that is where we are used to seeing warnings, but the SQL*Plus message is clearer:

SQL> alter pluggable database PDB2 open;
Warning: PDB altered with errors.

PDB2 cannot be left closed, because you need to create a tablespace here. But it cannot be opened to everyone, because it is not in sync with CDB$ROOT. So what happens is that the PDB is opened in restricted mode:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE YES

Note that if you look at PDB_SYNC$ in the PDB at that time, REPLAY# already shows 11, but you also see rows for the statement that still has to be run. You have to connect to the PDB because the containers() clause does not query containers opened in restricted session:

SQL> alter session set container=PDB2;
Session altered.
 
SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
4 0 1469022 29-dec-17 09:27:02 C##USER1 SYS 5 11 alter user C##USER1 default tablespace USERS^@
 
SQL> alter session set container=CDB$ROOT;
Session altered.

Actually, the attempt to sync has inserted the statements and pushed the last replay indicator. Now the PDB has all the information it needs to sync without going back to CDB$ROOT. The DDL was not replayed, but it has been stored locally. Once the sync succeeds, the statements are removed from the local PDB_SYNC$, leaving only the LASTREPLAY indicator.

PDB_PLUG_IN_VIOLATIONS

More info about the warning is stored in PDB_ALERT$, which you query through PDB_PLUG_IN_VIOLATIONS (the odd name is reminiscent of the TRANSPORT_SET_VIOLATIONS view used by DBMS_TTS):

SQL> select name, cause, type, status,action,message,time from pdb_plug_in_violations;
 
NAME CAUSE TYPE STATUS ACTION MESSAGE TIME
---- ----- ---- ------ ------ ------- -------------------------------
PDB2 Sync Failure ERROR PENDING Sync PDB failed with ORA-959 during 'alter user C##USER1 default tablespace USERS' 29-DEC-17 09.27.03.266780000 PM

Here you have the statement that failed and the error number, but no recommended ACTION. However, ORA-959 is “tablespace ‘%s’ does not exist” which gives a clue about the problem encountered.

As the PDB is opened a DBA (with RESTRICTED SESSION privilege) can add the tablespace:

SQL> alter session set container=PDB2;
Session altered.
 
SQL> create tablespace USERS datafile '/u01/oradata/CDB2/PDB2/users.dbf' size 5M;
Tablespace USERS created.

But you cannot simply disable restricted session:

SQL> alter system disable restricted session;
 
Error starting at line : 74 File @ common-users-pdb-sync.sql
In command -
alter system disable restricted session
Error report -
ORA-65144: ALTER SYSTEM DISABLE RESTRICTED SESSION is not permitted
65144. 00000 - "ALTER SYSTEM DISABLE RESTRICTED SESSION is not permitted"
*Cause: An attempt was made to disable a restricted session while an unresolved error existed in PDB_PLUG_IN_VIOLATIONS.
*Action: Resolve all of the errors before trying to disable a restricted session.

One solution is to close and open the PDB to get the DDL replay:
SQL> alter session set container=CDB$ROOT;
Session altered.
SQL> alter pluggable database PDB2 close;
Pluggable database PDB2 altered.
SQL> alter pluggable database PDB2 open;
Pluggable database PDB2 altered.

The other solution is to call DBMS_PDB.SYNC_PDB and disable restricted mode:

SQL> exec dbms_pdb.sync_pdb; commit;
PL/SQL procedure successfully completed.
 
SQL> alter system disable restricted session;
System DISABLE altered.

In both cases there is no warning and no restricted mode:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE NO

and the PDB_PLUG_IN_VIOLATIONS is updated to flag the issue as resolved:

SQL> select name, cause, type, status,action,message,time from pdb_plug_in_violations;
 
NAME CAUSE TYPE STATUS ACTION MESSAGE TIME
---- ----- ---- ------ ------ ------- -------------------------------
PDB2 Sync Failure ERROR RESOLVED Sync PDB failed with ORA-959 during 'alter user C##USER1 default tablespace USERS' 29-DEC-17 09.27.04.093659000 PM

At that time, the local PDB_SYNC$ table in PDB2 contains only the PDB$LASTREPLAY row, with the same value as in the CDB$ROOT table. The rows with the statements have been deleted once the DDL has been successfully replayed:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11

So what?

The mechanism is simple: record what is done in CDB$ROOT, replicate it in the PDBs where possible (those opened read write), try to replay it, and mark the last replay step. For containers that were not writeable, at open time the DDL is replicated to the PDBs that lag behind CDB$ROOT and the replay step is updated. Then the DDL is replayed. When successful, the statement is removed from the replicated DDL. When it fails, you get a warning, a message in PDB_PLUG_IN_VIOLATIONS, and the PDB is opened in restricted session mode to let you solve the problem.
If you can fix the issue so that the DDL to be replayed is successful, then you can just sync and disable restricted session, or simply close and re-open the PDB.
If you can’t fix it, I suppose you need to hack the statements in the local PDB_SYNC$, with Oracle Support agreement of course, and make sure that you arrive at a state which is consistent with the other containers, especially CDB$ROOT.

 

The article 12c Multitenant internals: PDB_PLUG_IN_VIOLATIONS first appeared on Blog dbi services.

Docker-Swarm: Running with more than one manager-node / How to add a secondary manager or multiple managers

Dietrich Schroff - Sat, 2017-12-30 06:41
Adding additional masters to a docker swarm is very easy. I just followed the documentation:

On the manager I ran the following command:
alpine:~# docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3b7f69d3wgty0u68oab8724z07fkyvgc0w8j37ng1l7jsmbghl-5nrni6ksqnkljvqpp59m5gfh1 192.168.178.46:2377

And on the node:

node03:~# docker swarm join --token SWMTKN-1-3b7f69d3wgty0u68oab8724z07fkyvgc0w8j37ng1l7jsmbghl-5nrni6ksqnkljvqpp59m5gfh1 192.168.178.46:2377

This node joined a swarm as a manager.

The new state of the cluster shows:
alpine:~# docker  info | grep -A 23 Swarm
Swarm: active
 NodeID: wy1z8jxmr1cyupdqgkm6lxhe2
 Is Manager: true
 ClusterID: wkbjyxbcuohgdqc3amhl9umlq
 Managers: 2
 Nodes: 4

 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: 192.168.178.46
 Manager Addresses:
  192.168.178.46:2377
  192.168.178.50:2377
If you want to use a new token just run:
alpine:~# docker  swarm join-token --rotate  manager
Successfully rotated manager join token.

To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3b7f69d3wgty0u68oab8724z07fkyvgc0w8j37ng1l7jsmbghl-cgy143mwghbfoozt0b8li2587 192.168.178.46:2377

nvarchar2

Jonathan Lewis - Sat, 2017-12-30 06:08

Here’s an odd little quirk that appeared when I was playing around with default values just recently. I think it’s one I’ve seen before, I may even have written about it many years ago but I can’t find any reference to it at present. Let’s start with a script that I’ll run on 12.2.0.1 (the effect does appear on earlier versions):


rem
rem     Script:         nvarchar2_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2017
rem

create table t1 (
        column1  varchar2(10),
        column2  nvarchar2(10)
);

create table t2 (
        column1  varchar2(10)
);

alter table t2 add column2 nvarchar2(10);

create table t3 (
        column1  varchar2(10),
        column2  nvarchar2(10) default 'xxxxxxxx'
);

create table t4 (
        column1  varchar2(10)
);

alter table t4 add column2 nvarchar2(10) default 'xxxxxxxx';

insert into t1(column1) values('a');
insert into t2(column1) values('a');
insert into t3(column1) values('a');
insert into t4(column1) values('a');

All I’ve done is create 4 tables which, when described, will all look the same:


 Name                    Null?    Type
 ----------------------- -------- ----------------
 COLUMN1                          VARCHAR2(10)
 COLUMN2                          NVARCHAR2(10)

There is a significant difference between the first two and the last two, of course, thanks to the specification of a default value, which means that the inserts will produce two possible results: the first two tables will have nulls in column2; the last two will have the nvarchar2 equivalent of ‘xxxxxxxx’ which, in my instance, will be a string of 16 bytes: “0,78,0,78,0,78,0,78,0,78,0,78,0,78,0,78”.

Surprisingly, though, there is a dramatic oddity between t3 and t4 which shows up when I query user_tab_cols:

select
        table_name, column_id, column_name,  segment_column_id, data_default
from    user_tab_cols
where   table_name like 'T_'
order by
        table_name, column_id
;

TABLE_NAME            COLUMN_ID COLUMN_NAME          SEGMENT_COLUMN_ID DATA_DEFAULT
-------------------- ---------- -------------------- ----------------- --------------------
T1                            1 COLUMN1                              1
                              2 COLUMN2                              2

T2                            1 COLUMN1                              1
                              2 COLUMN2                              2

T3                            1 COLUMN1                              1
                              2 COLUMN2                              2 'xxxxxxxx'

T4                            1 COLUMN1                              1
                              2 COLUMN2                              3 'xxxxxxxx'
                                SYS_NC00002$                         2

Table t4 has acquired two columns – a hidden column (which physically exists as the second column in the stored data and is declared as raw(126)) and the column which I had specified. You’ll note that the test shows two differences that may be significant: comparing t3/t4 we see that adding, rather than initially defining, the nvarchar2() column introduces the extra column; comparing t2/t4 we see that adding a varchar2() rather than an nvarchar2() doesn’t produce the same effect. Tentative assumption, therefore, is that there is something special about adding nvarchar2() columns.

Casting my mind back to various customers who have gone through multiple upgrades of 3rd party applications that invariably seem to add columns to tables, I wondered whether this extra column appeared every time you added an nvarchar2(). I’d not noticed anything in the past that suggested this might be the case, but it’s obviously worth checking: and in my simple tests it looked as if Oracle created just one extra column and used it to capture a value that seemed to be determined by the number and location of columns that had been added.
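A quick way to test that hypothesis yourself (my own sketch, continuing with table t4 from the script above) is to add another nvarchar2() column and look at the hidden columns again:

alter table t4 add column3 nvarchar2(10) default 'yyyyyyyy';

select  column_name, column_id, segment_column_id, hidden_column
from    user_tab_cols
where   table_name = 'T4'
order by segment_column_id;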

It’s a curiosity, and leaves room for further investigation – so if anyone has links to related articles please feel free to add them in the comments.

 


Getting started with Oracle Database in a Docker container!

Amis Blog - Sat, 2017-12-30 05:33

One of the benefits of using Docker is quick and easy provisioning. I wanted to find out first-hand if this could help me get an Oracle Enterprise Edition database quickly up and running for use in a development environment. Oracle provides Docker images for its Standard and Enterprise Edition database in the Oracle Container Registry. Lucas Jellema has already provided two blogs on this (here and here) which have been a useful starting point. In this blog I’ll describe some of the choices to make and challenges I encountered. To summarize, I’m quite happy with the Docker images in the registry as they provide a very easy way to automate the install of an EE database. You can find a Vagrant provisioning shell script with the installation of Docker and Docker commands to execute here and a description on how to use it here.

Docker
Installing Docker on Oracle Linux 7
Why Docker

Preparing for this blog was my first real Docker experience outside of workshops. The benefits of Docker I mainly appreciated during this exercise are that:

  • Docker uses OS-level virtualization (containers), which is more lightweight than full virtualization with, for example, VirtualBox or VMware.
  • The installation of a product inside the container is already fully scripted if you have a Docker image or Dockerfile. There are a lot of images and Dockerfiles available. Also provided and supported by software vendors such as Oracle.
  • The Docker CLI is very user friendly. For example, you can just throw away your container and create a new one or stop it and start it again at a later time. Starting a shell within a container is also easy. Compare this to for example VBoxManage.

In order to install Docker on Oracle Linux 7, you need to do some things which are described below.

Preparing a filesystem

Docker images/containers are created in /var/lib/docker by default. You do not need to create a dedicated filesystem for that; however, Docker runs well on a filesystem of type BTRFS, which is not supported on Oracle Linux. Docker has two editions: Docker CE and Docker EE. Docker CE is not certified for Oracle Linux while EE is. For Docker CE, BTRFS is only recommended on Ubuntu or Debian, and for Docker EE, BTRFS is only supported on SLES.

When you do want to use a BTRFS partition (at your own risk), and you want to automate the installation of your OS using Kickstart, you can do this like:

part btrfs.01 --size=1000 --grow
btrfs /var/lib/docker --label=docker btrfs.01

See a complete Kickstart example here for Oracle Linux 7 and the blog on how to use the Kickstart file with Packer here.

Enable repositories

Docker is not present in a repository which is enabled by default in Oracle Linux 7. You can automate enabling them by:

yum-config-manager --enable ol7_addons
yum-config-manager --enable ol7_optional_latest
yum-config-manager --enable ol7_UEKR4
Install Docker

Installing Docker can be done with a single command:

yum install docker-engine btrfs-progs btrfs-progs-devel -y

If you’re not using BTRFS, you can leave those packages out.

Start the Docker daemon

The Docker CLI talks to a daemon which needs to be running. Starting the daemon and making it start on boot can be done with:

systemctl start docker
systemctl enable docker
Allow a user to use Docker

You can add a user to the docker group in order to allow it to use docker. This is however a bad practice since the user can obtain root access to the system. The way to allow a non-root user to execute docker is described here. You allow the user to execute the docker command using sudo and create an alias for the docker command to instead perform sudo docker. You can also tune the docker commands to only allow access to specific containers.

Add to /etc/sudoers

oracle ALL=(ALL) NOPASSWD: /usr/bin/docker

Create the following alias

alias docker="sudo /usr/bin/docker"
Oracle database
Choosing an edition
Why not XE?

My purpose is to automate the complete install of SOA Suite from scratch. In a previous blog I described how to get started with Kickstart, Vagrant, Packer to get the OS ready. I ended in that blog post with the installation of the XE database. After the installation of the XE database, the Repository Creation Utility (RCU) needs to be run to create tablespaces, schemas, tables, etc for SOA Suite. Here I could not continue with my automated install since the RCU wants to create materialized views. The Advanced Replication option however is not part of the current version of the XE database. There was no automated way to let the installer skip over the error and continue as you would normally do with a manual install. I needed a non-XE edition of the database! The other editions of the database however were more complex to install and thus automate. For example, you need to install the database software, configure the listener, create a database, create scripts to start the database when the OS starts. Not being a DBA (or having any ambitions to become one), this was not something I wanted to invest much time in.

Enter Oracle Container Registry!

The Oracle Container Registry contains preconfigured images for the Enterprise Edition and Standard Edition database. The Container Registry also contains useful information on how to use these images. The Standard Edition database uses a minimum of 4 GB of RAM. The Enterprise Edition database has a slim variant with fewer features but which only uses 2 GB of RAM. The slim image is also a lot smaller: only about 2 GB to download instead of nearly 5 GB. The Standard Edition can be configured with a password from a configuration file, while the Enterprise Edition has the default password ‘Oradoc_db1’. The Docker images can use a mounted share for their datafiles.

Create an account and accept the license

In order to use the Container Registry, you have to create an account first. Next you have to login and accept the license for a specific image. This has been described here and is pretty easy.

After you have done that and you have installed Docker as described above, you can start using the image and create containers!

Start using the image and create a container

First you have to login to the container registry from your OS. This can be done using a command like:

docker login -u maarten.smeets@amis.nl -p XXX container-registry.oracle.com

XXX is not my real password and I also did not accidentally commit it to GitHub. You should use the account here which you have created for the Container Registry.

I created a small configuration file (db_env.dat) with some settings. These are all the configuration options which are currently possible from a separate configuration file. The file contains the below 4 lines:

DB_SID=ORCLCDB
DB_PDB=ORCLPDB1
DB_DOMAIN=localdomain
DB_MEMORY=2GB

Next you can pull the image and run a container with it:

docker run -d --env-file db_env.dat -p 1521:1521 -p 5500:5500 -it --name dockerDB container-registry.oracle.com/database/enterprise:12.2.0.1-slim

The -p options specify port mappings. I want port 1521 and port 5500 mapped to my host (VirtualBox, Oracle Linux 7) OS.

You can see if the container is up and running with:

docker ps

You can start a shell inside the container:

docker exec -it dockerDB /bin/bash

I can easily stop the database with:

docker stop dockerDB

And start it again with

docker start dockerDB

If you want to connect to the database inside the container, you can do so using the service name ORCLPDB1.localdomain, user SYS, password Oradoc_db1, hostname localhost (when running on the VirtualBox machine) and port 1521. For the RCU, I created an Oracle Wallet file from the RCU configuration wizard and used that to automate the RCU and install the SOA Suite required artifacts in the container database. See here.
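For example, assuming an Oracle client with SQL*Plus is available on the VirtualBox machine, a connection using EZConnect syntax should look roughly like this (a sketch based on the defaults mentioned above, not verified against this exact image):

sqlplus /nolog

SQL> connect sys/Oradoc_db1@//localhost:1521/ORCLPDB1.localdomain as sysdba
SQL> select name, open_mode from v$pdbs;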

Finally

I was surprised at how easy it was to use the Docker image from the registry. Getting Docker itself installed and ready was more work. After a container is created based on the image, managing it with the Docker CLI is also very easy. As a developer this makes me very happy and I recommend that other developers try it out! There are some challenges though if you want to use the images on a larger scale.

Limited configuration options

Many customers use different standards. The Docker image comes with a certain configuration and can be configured only in a (very) limited way by means of a configuration file (as shown above). You can mount an external directory to store data files.

Limitations in features

Also, the Docker container database can only run one instance, cannot be patched, and does not support Data Guard. I can imagine that in production, not being able to patch the database might be an issue. You can however replace the entire image with a new version and hope the new version can still use the old datafiles. You have to check this though.

Running multiple containers on the same machine is inefficient

If you have multiple Oracle Database containers running at the same time on the same machine, you will not benefit from the multitenancy features, since every Docker container runs its own container database and pluggable database. Every container also runs its own listener.

The post Getting started with Oracle Database in a Docker container! appeared first on AMIS Oracle and Java Blog.

12c Multitenant internals: PDB replay DDL for common users

Yann Neuhaus - Fri, 2017-12-29 16:05

In multitenant, you can create common users, roles, and profiles. You create them in CDB$ROOT with the CONTAINER=ALL clause (which is optional because it is the only possible value when connected to CDB$ROOT) and they are visible to all containers. As the goal of multitenant is to avoid duplicating common metadata across containers, you may think that they are made visible through those magic metadata links. But that’s actually wrong: they are simply replicated with a very simple mechanism: the DDL for common objects is replayed into each user PDB.

I’m connected to CDB2’s CDB$ROOT and I have two pluggable databases:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 MOUNTED

PDB1 is opened and PDB2 is closed.

PDB_SYNC$

In this example, I’ll query PDB_SYNC$ which is the table where Oracle stores all DDL for common users, roles, or profiles in order to be able to replay it later:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id=1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 852610 26-jan-17 02:57:26 CTXSYS SYS 5 1 alter user CTXSYS account unlock identified by *^@
1 0 853177 26-jan-17 02:57:34 CTXSYS SYS 5 2 alter user CTXSYS password expire account lock^@
1 0 1405359 26-jan-17 03:31:31 SYSTEM SYS 5 3 alter user system account lock password expire^@
1 0 1408693 23-dec-17 11:34:43 SYS SYS 5 4 alter user sys account unlock identified by *^@
1 0 1408703 23-dec-17 11:34:43 SYSTEM SYS 5 5 alter user system account unlock identified by *^@
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 5

I excluded rows with bitand(flags,8)=8 because they concern application containers. I query through the containers() clause to show the con_id, but this is for con_id=1, which is CDB$ROOT.

You can see some DDL for CTXSYS recorded on January 26th, which is the day when this release (12.2.0.1) was built. I used a template with datafiles to create the CDB with DBCA. And you see some DDL to unlock SYS and SYSTEM on December 23rd when I created the database. You can also see that they are ordered in sequence with REPLAY#.

More interesting is the OPCODE=-1 row, which is PDB$LASTREPLAY and looks like the last value of REPLAY#. This means that on this container, CDB$ROOT, all statements with REPLAY# <= 5 have been run.

With a similar query, I query the opened PDBs:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 5

There is only one row here, in CON_ID=3, which is PDB1: the PDB$LASTREPLAY row mentioning that all statements up to REPLAY#=5 have also been run in this container.
I don’t see PDB2 (CON_ID=4) here because the containers() clause cannot query closed containers.

CONTAINER=ALL DDL

I’ll run some common DDL to create a profile, a role and a user:

SQL> create profile C##PROFILE1 limit inactive_account_time 15 container=all;
Profile C##PROFILE1 created.
 
SQL> create role C##ROLE1 container=all;
Role C##ROLE1 created.
 
SQL> create user C##USER1 identified by oracle container=all;
User C##USER1 created.
 
SQL> alter user C##USER1 profile C##PROFILE1;
User C##USER1 altered.
 
SQL> grant C##ROLE1 to C##USER1 container=all;
Grant succeeded.

The C## prefix is mandatory to isolate the common user namespace. You can change it with the common_user_prefix parameter. You can even set it to the empty string, but then you risk namespace collisions when you move a PDB between CDBs having different common profiles or roles.
The CONTAINER=ALL clause is the default and the only possibility when connected to CDB$ROOT, so it is optional. I recommend mentioning it explicitly in order to avoid problems when running the same DDL in CDB$ROOT and in PDBs.
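As a side note, here is a sketch (not a verified recipe) of how the prefix can be inspected and, at your own risk, changed. The parameter is static, so a change only takes effect after a restart:

SQL> show parameter common_user_prefix
SQL> -- an empty value removes the prefix requirement (risky, see above)
SQL> alter system set common_user_prefix='' scope=spfile;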

All those DDL have been recorded into PDB_SYNC$ and the REPLAY# has been increased:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id=1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 852610 26-jan-17 02:57:26 CTXSYS SYS 5 1 alter user CTXSYS account unlock identified by *^@
1 0 853177 26-jan-17 02:57:34 CTXSYS SYS 5 2 alter user CTXSYS password expire account lock^@
1 0 1405359 26-jan-17 03:31:31 SYSTEM SYS 5 3 alter user system account lock password expire^@
1 0 1408693 23-dec-17 11:34:43 SYS SYS 5 4 alter user sys account unlock identified by *^@
1 0 1408703 23-dec-17 11:34:43 SYSTEM SYS 5 5 alter user system account unlock identified by *^@
1 0 1466615 29-dec-17 09:26:56 C##PROFILE1 SYS 7 6 create profile C##PROFILE1 limit inactive_account_time 15 container=all^@
1 0 1466641 29-dec-17 09:26:57 C##ROLE1 SYS 3 7 create role C##ROLE1 container=all^@
1 0 1466748 29-dec-17 09:26:58 C##USER1 SYS 1 8 create user C##USER1 identified by * container=all^@
1 0 1466812 29-dec-17 09:26:59 C##USER1 SYS 5 9 alter user C##USER1 profile C##PROFILE1^@
1 0 1466853 29-dec-17 09:26:59 C##USER1 C##ROLE1 SYS 10 10 grant C##ROLE1 to C##USER1 container=all^@
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

PDB1 (CON_ID=3) was open read write and has therefore been synchronized (the DDL has been run in the container to create the same profile, role and user), and PDB$LASTREPLAY has been updated in this container to show that everything has been done:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,opcode,replay#;
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

MOUNTED or READ ONLY

I open PDB2 read only because I want to see what is in PDB_SYNC$ there. But READ ONLY means that the DDL cannot be run, because no write is allowed in the local dictionary.

SQL> alter pluggable database PDB2 open read only;
Pluggable database PDB2 altered.

Running the same query as above, I can see that PDB2 (CON_ID=4) is synchronized only up to the statements with REPLAY#=5 because my DDL was not replicated there.
SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,opcode,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 5

Sync at OPEN

When I open PDB2 in read write mode, the DDL can be synchronized:

SQL> alter pluggable database PDB2 open read write force;
Pluggable database PDB2 altered.

At open, the DDL from REPLAY#>5 has been replayed and once opened the PDB is in sync with CDB$ROOT:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,opcode,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

So what?

The common users, roles and profiles are not stored only in CDB$ROOT to be shared, but rather replicated to all PDBs. The DDL is replicated synchronously to all opened pluggable databases in read write, and stored into the CDB$ROOT PDB_SYNC$ table to be replayed later when non-synced PDBs are opened. I’ll show in the next post what happens when the DDL is in error.

Note that even when all pluggable databases are opened read write, the DDL is stored and purged only later (when replayed on all PDBs), because it is needed when you create a new PDB and open it. The PDB$SEED has REPLAY#=0 for PDB$LASTREPLAY, which means that all statements will be replayed.

This is 12.2.0.1 where this mechanism is only for common users, roles and profiles having DDL in CDB$ROOT. With application containers, more than that is recorded: all DML and DDL run between the ‘begin install/upgrade/patch’ and ‘end install/upgrade/patch’ in the application root. Then, the statements can be replayed into the application PDB with a simple SYNC command. In the future release (18c) we expect to have that application root behavior ported to CDB$ROOT so that we don’t have to run catupgrd.sql in all containers. Then the PDB will probably be patched or upgraded when opened.

 

The article 12c Multitenant internals: PDB replay DDL for common users first appeared on Blog dbi services.

January PeopleTools Training Courses Posted

Jim Marion - Fri, 2017-12-29 13:56

We posted our January PeopleTools training agenda. This month includes our extremely popular Fluid course as well as our new PeopleTools Delta course. Register online at www.jsmpros.com/training-live-virtual/. Our first course is January 8th, which is coming up quickly.

Besides our regular mid-week agenda, we added weekend courses for those that are committed mid-week and can't take time away for training.

The time zone for these courses is designed to catch as many US attendees as possible. If you would prefer another time zone, let me know and we will consider scheduling a future course in a more favorable time zone.

Why Fluid and why PeopleTools Delta? Fluid first: Any PeopleSoft 2017 Year in review post must include Fluid Campus Solutions. Oracle's Campus Solutions team made significant progress in Fluid student self-service. Honestly, I am extremely impressed with Fluid student self-service. Because of this progress, many of our customers are currently implementing Fluid student self-service. Likewise, PeopleSoft HCM has published retirement dates for key manager and employee self-service components. Support for most Classic manager self-service components, for example, retires in just a couple of days. Classic employee self-service retires one year later, on December 31, 2018 (for more details on Classic retirement dates, see MyOracle Support document 1348959.1). If there was ever a time to think about Fluid, that time has come. Now is a great time to learn Fluid so that you are ready for those change requests and implementation details. While there are obvious similarities between Classic and Fluid development, they are very different.

As customers implement Fluid, they will undoubtedly revisit existing customizations. This is where a PeopleTools Delta course becomes important. You could continue down the same path, customizing delivered definitions or you could investigate new PeopleTools features that allow you to configure (instead of customize) business logic and tailor the user experience. I could recount story after story of customers saving 10's to 100's of thousands of dollars in implementation, customization, and lifecycle management costs because they learned new PeopleTools features.

Does your organization have 8 or more people interested in a training course? If not, do you know 8 or more people from various organizations you can get together for a class (with virtual training, all users can be remote)? If so, we have group rates available. Besides significant savings (quantity discounts), large groups have flexibility and control over the training agenda. Feel free to contact us for more information.

On hitting 100K on StackOverflow

Andrew Clarke - Fri, 2017-12-29 12:14
100,000 is just another number. It's one more than 99,999. And yet, and yet. We live in a decimal culture. We love to see those zeroes roll up. Order of magnitude baby! It's the excitement of being a child, going on a journey in the family car when the odometer reads 99994, knowing you'll see 100000. Of course everybody got distracted by the journey and next time you look at the dial it reads 100002.

Earlier this year my StackOverflow reputation passed 100,000. Like the car journey I missed the actual moment. My rep had been 99,986 when I last checked the previous evening and 100,011 the next day. Hey ho.

Reputation is a big deal on StackOverflow because it is the prime measure of contribution. As a Q&A site (not a forum - that confuses a lot of people) it needs content: it needs good questions and good answers. Reputation points are the reward for good posts. In this context "good" is determined democratically: people vote up good questions and good answers, and - crucially - vote down poor questions and answers. Votes are the main way of gaining reputation points: +5 for an upvoted question, +10 for an upvoted answer and +15 for an accepted answer. There are other ways of gaining - and losing - rep, but posting is the main one. "Reputation is a rough measurement of how much the community trusts you; it is earned by convincing your peers that you know what you’re talking about." Meta Stack Exchange FAQ

So is reputation just a way of keeping score? Nope: it is gamification but there is more to it than that. Reputation means points, and what do points make? Prizes? No: privileges. StackOverflow is largely a self-policing community. There are full-on (elected) moderators but most moderation is actually carried out by regular SO users with sufficient rep. Somebody has asked an unclear question? Once you have 50 rep you can post a comment asking for clarification. Got a user who doesn't know how to turn off the CAPSLOCK key? With 2000 rep you can just edit their post and apply sentence case. And so on.

Hmmm, so StackOverflow rewards its keenest contributors by allowing them to do chores around the site. Yes, and it works. One of the big problems with forums is other users. Not griefers as such, but there are a lot of low-level irritations: users who don't know how to search the site, or how to format their posts, or who just generally fail to understand etiquette. Granting increasing moderation privileges at reputation milestones allows committed users to smooth away some of those irritations.

But still, getting to 100,000 took eight years and almost 3000 answers. Was it worth it? Well, there are no prizes but when you get to 100,000 you do get swag. A big box of swag:



Here is the box with a standard reference pear so you can see just how big it is.


Inside there is - a pen ....


Some stickers ....


A StackOverflow T-shirt (I have negotiated with my better half to keep this one) ...


And an over-sized coffee mug...


One more thing. There are also badges. Badges are nudges to encourage desirable behaviour such as editing posts, voting in moderator elections, reviewing posts, offering bounties, being awesome. Because let's face it, badges are cool. More badges = more flair. And who doesn't want more flair? Got flair? Heck yeah!

profile for APC at Stack Overflow, Q&A for professional and enthusiast programmers

You gotta work harder!

Tim Hall - Fri, 2017-12-29 07:24

If I take a long hard look at the people *I consider* successful, the main thing I notice is they consistently work hard. They have focus and they put in a lot of hours to get what they want. It doesn’t matter what area it is, you can see the same thing time and time again. The people that work hard get the results. The people that don’t typically fail, or at least don’t live up to their potential.

Here are a few quotes from some varied sources.

  • “The harder I work, the luckier I get.” Samuel Goldwyn (possibly)
  • “If you trust in yourself and believe in your dreams and follow your star you’ll still get beaten by people who spent their time working hard and learning things and weren’t so lazy.” Miss Tick (The Wee Free Men, Terry Pratchett)
  • “You want a hot body? You want a Bugatti? You want a Maserati? You better work bitch!” Britney Spears (well, her song writers really)
  • “Work so hard forgot how to vacation.” Post Malone – Congratulations

The IT industry moves so fast you’ve got to stay on the grind or you will fall behind. If you are looking for work-life balance, I don’t think this is the industry for you. That might sound harsh, but I’m not a stupid person and I freely admit to having a no-life approach to this industry and I often feel like I’m sprinting to stand still. I hate to think how people who are not putting in this level of effort feel. Maybe ignorance is bliss…

Over this holiday period I will have had 12.5 working days off and it’s already been a hard grind. I had some goals in mind, which I think I’m going to miss, but it’s not through lack of effort. I don’t really do new years resolutions anymore, but I’m going to keep saying to myself, “You gotta work harder!”

I hope 2018 works out well for everyone, but I’m pretty sure the people who will be the happiest at the end of the year will be the people who have worked the hardest, no matter what they have been working towards…

Good luck everybody!

Cheers

Tim…

Update: It is interesting that when I say work hard, some people assume I mean throw hours mindlessly at stuff. Of course you need to try and work intelligently. When I say work hard, I mean hard. That’s not cruising for endless hours. Part of working hard in my opinion is evaluating your efforts and altering your approach or goals based on that…

Having said that, you do have to throw hours at some stuff to really get to understand it. Learning is like peeling back layers of an onion. After a short time it’s easy to think you know it all. As time progresses you peel back more layers of the onion and you realise there’s even more to know. Many people seem to stop after the first couple of layers because they don’t dedicate the time, then complain about not being good and looking for the quick fixes…

It’s interesting what people consider successful. Notice in the post I said, “*I consider* successful”. We don’t all want the same thing. For you it might be to earn $1 million. For me it might be to write 10 articles on a specific subject. For someone else it might be to perfect their handstand. Depending on what you’re trying to achieve, the approach and time may be different. Don’t judge everyone else by your standards of what success means…

You gotta work harder! was first posted on December 29, 2017 at 2:24 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Latest Updates + FREE Training This Week

Online Apps DBA - Thu, 2017-12-28 03:38

I hope you are enjoying the holidays and are busy making New Year resolutions for 2018. (A little secret: my resolution for this year is to get Database Cloud certified, and if this is yours too then make sure you follow these mails carefully over the next few weeks.) This week, you'll find: 1. Oracle Cloud Certification for DBAs […]

The post Latest Updates + FREE Training This Week appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Cloning around with VirtualBox

The Anti-Kyte - Wed, 2017-12-27 12:54

I’m in a slight trough of a week at the moment.
The excitement of seeing what Santa brought has begun to fade but I’ve yet to summon the requisite enthusiasm for seeing in the New Year.
So this post is really one of those little “Notes to self” so that I can save myself some time when next I need to spin up an Oracle database on a VirtualBox VM…

I’ve got a CentOS7 VM on which I’ve installed and configured Oracle XE.
Now, if I want to experiment on XE, I can simply create a temporary copy of this configuration by cloning it, thus saving myself the trouble of going through the basic configuration all over again.

Opening VirtualBox, we can see that I have a number of VMs already set up

To create an exact copy of (clone) an existing VM, simply select it and either right-click and select Clone, or hit CTRL+O.


…then provide the name of the new vm…

…and click Next.

NOTE – I usually don’t bother re-initialising the MAC address as I don’t run these VMs concurrently. If you are planning to do this then it’s something you may want to consider.

For the clone type, I select Full Clone as I want a completely standalone environment.

Finally, I hit the Clone button.

After a short interval, I am rewarded with an exact copy of the VM, with the new name I specified for it.

Now I’m all set to do something creative and daring in my new environment…or will be after another mince pie.
