Feed aggregator

Creating linguistic indexes for CANADIAN FRENCH

Tom Kyte - Sun, 2018-07-01 12:26
When creating a linguistic index, I am not able to specify CANADIAN FRENCH. Oracle reports that the NLS parameter string is invalid. I suspect that it's because there is a space in it, but the answer eludes me. Here is a short example of a script ...
Categories: DBA Blogs

ora_rowscn - is it always incremental?

Tom Kyte - Sun, 2018-07-01 12:26
Hello, I want to sqoop data out of my Oracle 11.2 database on a daily basis. However, I want to do only incremental extracts. Apparently, scn_to_timestamp doesn't always work due to ORA-08181: specified number is not a valid system change number...
Categories: DBA Blogs

How to get the operating system user OSUSER from Oracle

Tom Kyte - Sat, 2018-06-30 18:06
I believe there is a way to get the LAN user ID of a user from within an Oracle query. I thought the variable was called OSUSER or OS_USER. I've tried select os_user from dual, but that doesn't work. Yet I think I'm close. Can you lead me in the ...
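
For reference, the usual way to read the client's operating system user is through the USERENV application context rather than a column:

-- Returns the operating system user of the connected client session
select sys_context('USERENV', 'OS_USER') from dual;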
Categories: DBA Blogs

Automatically capture all errors and context in your APEX application

Dimitri Gielis - Sat, 2018-06-30 16:30
Let me start this post with a conversation between an end-user (Sarah) and a developer (Harry):

End-user: "Hey there, I'm receiving an error in the app."
Developer: "Oh, sorry to hear that. What is the message saying?"
End-user: "Unable to process row of table EBA_PROJ_STATUS_CATS.  ORA-02292: integrity constraint (XXX.SYS_C0090660) violated - child record found"
Developer: "Oh, what are you trying to do?"
End-user: "I'm trying to delete a category."
Developer: "Oh, most likely this category is in use, so you can't delete the category, you first need ..."
End-user: "Ehh?!"

You might ask yourself, what is wrong with this conversation?

The first thing is that the end-user gets an error which is hard to understand. She probably got the error before and tried a few more times before calling the developer (or support). Most likely Sarah has a tight deadline, and these errors don't really help her mood.
The other problem is that the developer was most likely busy working on some complex logic and now gets interrupted. It takes some minutes before Harry can understand what Sarah is talking about. He needs to ask a few questions to know what Sarah is doing, because he doesn't have much context. He might ask her to send a screenshot of the error, and a few minutes later he receives this (app in APEX 5.1):

Harry is a smart cookie, so he knows in which schema to look for that constraint name, and therefore which table it's linked to. If Harry read my previous blog post on how to remotely see what Sarah was doing, he has more context too.

If the application is running in APEX 18.1, it's a different story. The screenshot will look like this:

APEX 18.1 actually enhanced the default error message. The user gets fewer details and sees a debug id. With this debug id the developer can actually get more info in Your App > Utilities > Debug Messages:


You might also want to check this blog post by Joel Kallman on where to find more info when receiving an internal error with a debug id.

Although APEX 18.1 captures more info, there's a better, recommended way to deal with errors.

In APEX you can define an Error Handling Function which will kick in every time an error occurs. You can define this function in the Application Definition:


When you look at the Packaged Applications that are shipped with Oracle Application Express (APEX), you find some examples. The above screenshot comes from P-Track.

The error handling function has this definition:

function apex_error_handling (p_error in apex_error.t_error )
  return apex_error.t_error_result

The example used in P-Track gives a good overview (read the comments in the package) of the different errors you want to capture:

function apex_error_handling (
    p_error in apex_error.t_error )
    return apex_error.t_error_result
is
    l_result          apex_error.t_error_result;
    l_constraint_name varchar2(255);
begin
    l_result := apex_error.init_error_result (
                    p_error => p_error );
    -- If it is an internal error raised by APEX, like an invalid statement or
    -- code which can not be executed, the error text might contain security sensitive
    -- information. To avoid this security problem we can rewrite the error to
    -- a generic error message and log the original error message for further
    -- investigation by the help desk.
    if p_error.is_internal_error then
        -- mask all errors that are not common runtime errors (Access Denied
        -- errors raised by application / page authorization and all errors
        -- regarding session and session state)
        if not p_error.is_common_runtime_error then
            add_error_log( p_error );
            -- Change the message to the generic error message which doesn't expose
            -- any sensitive information.
            l_result.message := 'An unexpected internal application error has occurred.';
            l_result.additional_info := null;
        end if;
    else
        -- Always show the error as inline error
        -- Note: If you have created manual tabular forms (using the package
        --       apex_item/htmldb_item in the SQL statement) you should still
        --       use "On error page" on those pages to avoid losing entered data
        l_result.display_location := case
                                       when l_result.display_location = apex_error.c_on_error_page then apex_error.c_inline_in_notification
                                       else l_result.display_location
                                     end;
        -- If it's a constraint violation like
        --
        --   -) ORA-00001: unique constraint violated
        --   -) ORA-02091: transaction rolled back (can hide a deferred constraint)
        --   -) ORA-02290: check constraint violated
        --   -) ORA-02291: integrity constraint violated - parent key not found
        --   -) ORA-02292: integrity constraint violated - child record found
        --
        -- we try to get a friendly error message from our constraint lookup configuration.
        -- If we don't find the constraint in our lookup table we fallback to
        -- the original ORA error message.
        if p_error.ora_sqlcode in (-1, -2091, -2290, -2291, -2292) then
            l_constraint_name := apex_error.extract_constraint_name (
                                     p_error => p_error );
            begin
                select message
                  into l_result.message
                  from eba_proj_error_lookup
                 where constraint_name = l_constraint_name;
            exception when no_data_found then null; -- not every constraint has to be in our lookup table
            end;
        end if;
        -- If an ORA error has been raised, for example a raise_application_error(-20xxx)
        -- in a table trigger or in a PL/SQL package called by a process and we
        -- haven't found the error in our lookup table, then we just want to see
        -- the actual error text and not the full error stack
        if p_error.ora_sqlcode is not null and l_result.message = p_error.message then
            l_result.message := apex_error.get_first_ora_error_text (
                                    p_error => p_error );
        end if;
        -- If no associated page item/tabular form column has been set, we can use
        -- apex_error.auto_set_associated_item to automatically guess the affected
        -- error field by examining the ORA error for constraint names or column names.
        if l_result.page_item_name is null and l_result.column_alias is null then
            apex_error.auto_set_associated_item (
                p_error        => p_error,
                p_error_result => l_result );
        end if;
    end if;
    return l_result;
end apex_error_handling;

When you define this error handling function, the error the user gets is shown more like a notification message embedded in your app. You can also define a custom message: the package above looks up the constraint name in an error lookup table and, as it can't find the constraint name there, falls back to the normal message.
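
As an illustration of the custom-message mechanism, the lookup table referenced by the function above could be as simple as this sketch (P-Track's actual definition may differ):

-- Hypothetical shape of the constraint lookup table used by the
-- error handling function; P-Track's actual DDL may differ.
create table eba_proj_error_lookup (
    constraint_name varchar2(255) primary key,
    message         varchar2(4000) not null
);

-- Map a constraint to a user-friendly message
insert into eba_proj_error_lookup (constraint_name, message)
values ('SYS_C0090660', 'This category is still in use and cannot be deleted.');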


The real power comes when you combine the error handling function with a call that also logs session state information. Then you know exactly for which record the error was produced.

There are a couple of ways to include the session state:

Team Development

I typically include a feedback page in my apps. When the user logs feedback by clicking the feedback link, it is saved in Team Development. The really cool thing is that whenever feedback is logged, the session state of items and some other info, like the browser being used at the moment of logging, is automatically included. But you can also log feedback through an APEX API:

apex_util.submit_feedback (
    p_comment         => 'Unexpected Error',
    p_type            => 3,
    p_application_id  => v('APP_ID'),
    p_page_id         => v('APP_PAGE_ID'),
    p_email           => v('APP_USER'),
    p_label_01        => 'Session',
    p_attribute_01    => v('APP_SESSION'),
    p_label_02        => 'Language',
    p_attribute_02    => v('AI_LANGUAGE'),
    p_label_03        => 'Error ora_sqlcode',
    p_attribute_03    => p_error.ora_sqlcode,
    p_label_04        => 'Error message',
    p_attribute_04    => p_error.message,
    p_label_05        => 'UI Error message',
    p_attribute_05    => l_result.message
);


Logger 

Logger is a PL/SQL logging and debugging framework. If you don't know it yet, you should definitely check it out. In my opinion, Logger is the best way to instrument your PL/SQL code. Logger has many cool features, one of them is the ability to log your APEX items:

logger.log_apex_items('Debug Items from Error log');
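
As a minimal sketch, the add_error_log procedure called by the error handling function above could combine Logger with the error details like this (the procedure body and scope name are assumptions):

-- Hypothetical add_error_log procedure; assumes the Logger framework
-- is installed and accessible from the application's parsing schema.
procedure add_error_log (p_error in apex_error.t_error)
is
    l_scope logger_logs.scope%type := 'apex_error_handling';
begin
    -- Record the error itself, including the full error backtrace
    logger.log_error(
        p_text  => p_error.message,
        p_scope => l_scope,
        p_extra => p_error.error_backtrace );
    -- Record the session state of all APEX items for context
    logger.log_apex_items(
        p_text  => 'Session state at time of error',
        p_scope => l_scope );
end add_error_log;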
With the above methods, you know which record the end-user was looking at and what the context was. Note that you might find this information by looking at their session too, but it would take more time to figure things out.

Be pro-active

Now, to prevent the conversation from happening again, you can take it one step further and start logging and monitoring those errors. Whenever errors happen you can, for example, log them in your own error table or in your support ticket system, and send yourself an email or notification.
Then instead of the end-user calling you, you call them and say "Hey, I saw you had some issues...".
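
A minimal sketch of that idea, placed inside the error handling function (the error table and both e-mail addresses are made up; apex_mail is the standard APEX mail API):

-- Hypothetical: store the error in your own table and notify the team.
-- my_app_errors and the e-mail addresses are assumptions.
insert into my_app_errors (error_time, app_user, ora_sqlcode, message)
values (systimestamp, v('APP_USER'), p_error.ora_sqlcode, p_error.message);

apex_mail.send(
    p_to   => 'devteam@example.com',
    p_from => 'noreply@example.com',
    p_subj => 'Error in application ' || v('APP_ID'),
    p_body => p_error.message );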

By monitoring errors in your application, you can pro-actively take action :)

Note that APEX itself also stores application errors. You find them under Monitor Activity > Application Errors:


The report gives the error and the session, so you can look further into what happened:


So even if you didn't have an error handling function in place, you can still start monitoring errors that happen in your app. I know the readers of this blog are really smart, so you might not see any errors, but still, it might be worthwhile to check once in a while :)

You find another example of the error handling function in my GitHub account. I included an example of logging in your own error table and sending an email.

Categories: Development

Patching Grid Infrastructure (Oracle Restart) 12.2.0.1 on Linux

Pierre Forstmann Oracle Database blog - Sat, 2018-06-30 07:55

Oracle Corp. has released on OTN an interim patch for Grid Infrastructure 12.2.0.1 on Linux. This patch is a fix needed for Oracle RAC in Docker. In this blog article I am not going to install Oracle in Docker; I am only going to install this interim patch in the following Oracle Restart configuration:

  • Oracle Linux 7.3
  • Oracle Grid Infrastructure (GI) 12.2.0.1
  • Oracle Database 12.2.0.1
  • Grid Infrastructure owner is the same as Oracle Database owner (oracle account).
  • I have used patch README.html instructions when possible.

    Step 1: check opatch version

    In GI environment I have run:

    $ $ORACLE_HOME/OPatch/opatch version
    OPatch Version: 12.2.0.1.6
    
    OPatch succeeded.
    

    The patch README says that OPatch 12.2.0.1.5 is needed, so this is OK.

    However, the patch README also says that emocmrsp is needed to create an OCM response file, but there is no emocmrsp binary in the GI Home (recent OPatch releases no longer ship emocmrsp, since Oracle Configuration Manager has been decommissioned, so this instruction can be ignored):

    $ ls -al $ORACLE_HOME/OPatch/ocm
    total 4
    drwxr-xr-x.  2 oracle oinstall   24 Jan 26  2017 .
    drwxr-xr-x. 12 oracle oinstall 4096 Jan 26  2017 ..
    -rw-r--r--.  1 oracle oinstall    0 Jun 15  2016 generic.zip
    $ find $ORACLE_HOME -name emocmrsp
    $
    
    Step 2: check GI Home inventory
    $ $ORACLE_HOME/OPatch/opatch lsinventory -oh $ORACLE_HOME
    Oracle Interim Patch Installer version 12.2.0.1.6
    Copyright (c) 2018, Oracle Corporation.  All rights reserved.
    
    Oracle Home       : /u01/gi12201
    Central Inventory : /u01/orainv
       from           : /u01/gi12201/oraInst.loc
    OPatch version    : 12.2.0.1.6
    OUI version       : 12.2.0.1.4
    Log file location : /u01/gi12201/cfgtoollogs/opatch/opatch2018-06-30_13-00-50PM_1.log
    
    Lsinventory Output file location : /u01/gi12201/cfgtoollogs/opatch/lsinv/lsinventory2018-06-30_13-00-50PM.txt
    
    --------------------------------------------------------------------------------
    Local Machine Information::
    Hostname: ol7ttsa0
    ARU platform id: 226
    ARU platform description:: Linux x86-64
    
    Installed Top-level Products (1): 
    
    Oracle Grid Infrastructure 12c                                       12.2.0.1.0
    There are 1 products installed in this Oracle Home.
    
    
    There are no Interim patches installed in this Oracle Home.
    
    
    --------------------------------------------------------------------------------
    
    OPatch succeeded.
    $ 
    
    Step 3: unzip patch

    I have unzipped the patch zip file in the /stage directory:

    $ unzip p27383741_122010_Linux-x86-64.zip 
    
    Step 4: check patch conflict

    I have switched to user root (to avoid OPATCHAUTO-72046 error messages):

    # /u01/gi12201/OPatch/opatchauto apply /stage/27383741/27383741/ -analyze
    
    System initialization log file is /u01/gi12201/cfgtoollogs/opatchautodb/systemconfig2018-06-30_01-19-36PM.log.
    
    Session log file is /u01/gi12201/cfgtoollogs/opatchauto/opatchauto2018-06-30_01-19-41PM.log
    The id for this session is 76WA
    [init:init] Executing OPatchAutoBinaryAction action on home /u01/db12201
    
    Executing OPatch prereq operations to verify patch applicability on SIDB Home........
    
    [init:init] OPatchAutoBinaryAction action completed on home /u01/db12201 successfully
    [init:init] Executing SIDBPrereqAction action on home /u01/db12201
    
    Executing prereq operations before applying on SIDB Home........
    
    [init:init] SIDBPrereqAction action completed on home /u01/db12201 successfully
    [init:init] Executing OPatchAutoBinaryAction action on home /u01/gi12201
    
    Executing OPatch prereq operations to verify patch applicability on SIHA Home........
    
    [init:init] OPatchAutoBinaryAction action completed on home /u01/gi12201 successfully
    [init:init] Executing SIHAPrereqAction action on home /u01/gi12201
    
    Executing prereq operations before applying on SIHA Home........
    
    [init:init] SIHAPrereqAction action completed on home /u01/gi12201 successfully
    OPatchAuto successful.
    
    --------------------------------Summary--------------------------------
    
    Analysis for applying patches has completed successfully:
    
    Host:ol7ttsa0
    SIDB Home:/u01/db12201
    
    
    ==Following patches were SUCCESSFULLY analyzed to be applied:
    
    Patch: /stage/27383741/27383741/
    Log: /u01/db12201/cfgtoollogs/opatchauto/core/opatch/opatch2018-06-30_13-19-47PM_1.log
    
    
    Host:ol7ttsa0
    SIHA Home:/u01/gi12201
    
    
    ==Following patches were SUCCESSFULLY analyzed to be applied:
    
    Patch: /stage/27383741/27383741/
    Log: /u01/gi12201/cfgtoollogs/opatchauto/core/opatch/opatch2018-06-30_13-20-15PM_1.log
    
    
    #
    
    Step 5: apply the patch

    In the same root session I have simply run (without setting ORACLE_HOME or GRID_HOME):

    # /u01/gi12201/OPatch/opatchauto apply /stage/27383741 
    
    System initialization log file is /u01/gi12201/cfgtoollogs/opatchautodb/systemconfig2018-06-30_01-23-52PM.log.
    
    Session log file is /u01/gi12201/cfgtoollogs/opatchauto/opatchauto2018-06-30_01-23-56PM.log
    The id for this session is 2GYB
    [init:init] Executing OPatchAutoBinaryAction action on home /u01/db12201
    
    Executing OPatch prereq operations to verify patch applicability on SIDB Home........
    
    [init:init] OPatchAutoBinaryAction action completed on home /u01/db12201 successfully
    [init:init] Executing SIDBPrereqAction action on home /u01/db12201
    
    Executing prereq operations before applying on SIDB Home........
    
    [init:init] SIDBPrereqAction action completed on home /u01/db12201 successfully
    [init:init] Executing OPatchAutoBinaryAction action on home /u01/gi12201
    
    Executing OPatch prereq operations to verify patch applicability on SIHA Home........
    
    [init:init] OPatchAutoBinaryAction action completed on home /u01/gi12201 successfully
    [init:init] Executing SIHAPrereqAction action on home /u01/gi12201
    
    Executing prereq operations before applying on SIHA Home........
    
    [init:init] SIHAPrereqAction action completed on home /u01/gi12201 successfully
    [shutdown:prepare-shutdown] Executing SIDBPrepareShutDownAction action on home /u01/db12201
    
    Preparing SIDB Home to bring down database service........
    
    [shutdown:prepare-shutdown] SIDBPrepareShutDownAction action completed on home /u01/db12201 successfully
    [shutdown:shutdown] Executing SIDBShutDownAction action on home /u01/db12201
    
    Stopping the database service on SIDB Home for patching........
    
    Following database is been stopped and will be restarted later during the session: db0
    
    [shutdown:shutdown] SIDBShutDownAction action completed on home /u01/db12201 successfully
    [shutdown:shutdown] Executing SIHAShutDownAction action on home /u01/gi12201
    
    Performing prepatch operations on SIHA Home........
    
    Prepatch operation log file location: /u01/base/crsdata/ol7ttsa0/crsconfig/hapatch_2018-06-30_01-25-18PM.log 
    
    [shutdown:shutdown] SIHAShutDownAction action completed on home /u01/gi12201 successfully
    [offline:binary-patching] Executing OPatchAutoBinaryAction action on home /u01/db12201
    
    Start applying binary patches on SIDB Home........
    
    [offline:binary-patching] OPatchAutoBinaryAction action completed on home /u01/db12201 successfully
    [offline:binary-patching] Executing OPatchAutoBinaryAction action on home /u01/gi12201
    
    Start applying binary patches on SIHA Home........
    
    [offline:binary-patching] OPatchAutoBinaryAction action completed on home /u01/gi12201 successfully
    [startup:startup] Executing SIHAStartupAction action on home /u01/gi12201
    
    Performing postpatch operations on SIHA Home........
    
    Postpatch operation log file location: /u01/base/crsdata/ol7ttsa0/crsconfig/hapatch_2018-06-30_01-27-55PM.log 
    
    [startup:startup] SIHAStartupAction action completed on home /u01/gi12201 successfully
    [startup:startup] Executing SIDBStartupAction action on home /u01/db12201
    
    Starting the database service on SIDB Home........
    
    [startup:startup] SIDBStartupAction action completed on home /u01/db12201 successfully
    [startup:finalize] Executing SIDBFinalizeStartAction action on home /u01/db12201
    
    No step execution required.........
    
    [startup:finalize] SIDBFinalizeStartAction action completed on home /u01/db12201 successfully
    [online:product-patching] Executing SIDBOnlineAction action on home /u01/db12201
    
    Trying to apply SQL patches on SIDB Home.
    
    [online:product-patching] SIDBOnlineAction action completed on home /u01/db12201 successfully
    [finalize:finalize] Executing OracleHomeLSInventoryGrepAction action on home /u01/gi12201
    
    Verifying patches applied on SIHA Home.
    
    [finalize:finalize] OracleHomeLSInventoryGrepAction action completed on home /u01/gi12201 successfully
    [finalize:finalize] Executing OracleHomeLSInventoryGrepAction action on home /u01/db12201
    
    Verifying patches applied on SIDB Home.
    
    [finalize:finalize] OracleHomeLSInventoryGrepAction action completed on home /u01/db12201 successfully
    OPatchAuto successful.
    
    --------------------------------Summary--------------------------------
    
    Patching is completed successfully. Please find the summary as follows:
    
    Host:ol7ttsa0
    SIDB Home:/u01/db12201
    Summary:
    
    ==Following patches were SUCCESSFULLY applied:
    
    Patch: /stage/27383741/27383741
    Log: /u01/db12201/cfgtoollogs/opatchauto/core/opatch/opatch2018-06-30_13-25-36PM_1.log
    
    
    Host:ol7ttsa0
    SIHA Home:/u01/gi12201
    Summary:
    
    ==Following patches were SUCCESSFULLY applied:
    
    Patch: /stage/27383741/27383741
    Log: /u01/gi12201/cfgtoollogs/opatchauto/core/opatch/opatch2018-06-30_13-26-15PM_1.log
    
    # 
    
    Step 6: check that patch has been applied

    I have checked that GI is up and running:

    $ . oraenv
    ORACLE_SID = [oracle] ? +ASM 
    The Oracle base has been set to /u01/base
    $ crsctl stat res -t
    --------------------------------------------------------------------------------
    Name           Target  State        Server                   State details       
    --------------------------------------------------------------------------------
    Local Resources
    --------------------------------------------------------------------------------
    ora.DATA.dg
                   ONLINE  ONLINE       ol7ttsa0                 STABLE
    ora.LISTENER.lsnr
                   ONLINE  ONLINE       ol7ttsa0                 STABLE
    ora.RECO.dg
                   ONLINE  ONLINE       ol7ttsa0                 STABLE
    ora.asm
                   ONLINE  ONLINE       ol7ttsa0                 Started,STABLE
    ora.ons
                   OFFLINE OFFLINE      ol7ttsa0                 STABLE
    --------------------------------------------------------------------------------
    Cluster Resources
    --------------------------------------------------------------------------------
    ora.cssd
          1        ONLINE  ONLINE       ol7ttsa0                 STABLE
    ora.db0.db
          1        ONLINE  ONLINE       ol7ttsa0                 Open,HOME=/u01/db122
                                                                 01,STABLE
    ora.diskmon
          1        OFFLINE OFFLINE                               STABLE
    ora.evmd
          1        ONLINE  ONLINE       ol7ttsa0                 STABLE
    --------------------------------------------------------------------------------
    $ 
    

    I have checked that the patch has been applied to the GI Home:

    $ $ORACLE_HOME/OPatch/opatch lsinventory -oh $ORACLE_HOME
    Oracle Interim Patch Installer version 12.2.0.1.6
    Copyright (c) 2018, Oracle Corporation.  All rights reserved.
    
    
    Oracle Home       : /u01/gi12201
    Central Inventory : /u01/orainv
       from           : /u01/gi12201/oraInst.loc
    OPatch version    : 12.2.0.1.6
    OUI version       : 12.2.0.1.4
    Log file location : /u01/gi12201/cfgtoollogs/opatch/opatch2018-06-30_13-30-35PM_1.log
    
    Lsinventory Output file location : /u01/gi12201/cfgtoollogs/opatch/lsinv/lsinventory2018-06-30_13-30-35PM.txt
    
    --------------------------------------------------------------------------------
    Local Machine Information::
    Hostname: ol7ttsa0
    ARU platform id: 226
    ARU platform description:: Linux x86-64
    
    Installed Top-level Products (1): 
    
    Oracle Grid Infrastructure 12c                                       12.2.0.1.0
    There are 1 products installed in this Oracle Home.
    
    
    Interim patches (1) :
    
    Patch  27383741     : applied on Sat Jun 30 13:27:43 CEST 2018
    Unique Patch ID:  21873823
    Patch description:  "OCW Interim patch for 27383741"
       Created on 18 Jan 2018, 17:32:26 hrs PST8PDT
       Bugs fixed:
         25970667, 27187009
    
    
    
    --------------------------------------------------------------------------------
    
    OPatch succeeded.
    $
    

    I have checked that the patch has been applied to the Database Home:

    $ . oraenv
    ORACLE_SID = [+ASM] ? DB0
    The Oracle base has been changed from /u01/base to /u01/oracle
    $ $ORACLE_HOME/OPatch/opatch lsinventory -oh $ORACLE_HOME
    Oracle Interim Patch Installer version 12.2.0.1.6
    Copyright (c) 2018, Oracle Corporation.  All rights reserved.
    
    
    Oracle Home       : /u01/db12201
    Central Inventory : /u01/orainv
       from           : /u01/db12201/oraInst.loc
    OPatch version    : 12.2.0.1.6
    OUI version       : 12.2.0.1.4
    Log file location : /u01/db12201/cfgtoollogs/opatch/opatch2018-06-30_13-30-54PM_1.log
    
    Lsinventory Output file location : /u01/db12201/cfgtoollogs/opatch/lsinv/lsinventory2018-06-30_13-30-54PM.txt
    
    --------------------------------------------------------------------------------
    Local Machine Information::
    Hostname: ol7ttsa0
    ARU platform id: 226
    ARU platform description:: Linux x86-64
    
    Installed Top-level Products (1): 
    
    Oracle Database 12c                                                  12.2.0.1.0
    There are 1 products installed in this Oracle Home.
    
    
    Interim patches (1) :
    
    Patch  27383741     : applied on Sat Jun 30 13:26:07 CEST 2018
    Unique Patch ID:  21873823
    Patch description:  "OCW Interim patch for 27383741"
       Created on 18 Jan 2018, 17:32:26 hrs PST8PDT
       Bugs fixed:
         25970667, 27187009
    
    
    
    --------------------------------------------------------------------------------
    
    OPatch succeeded.
    $ 
    

    I have checked whether the patch has been applied to the database:

    OPS$ORACLE@DB0>set linesize 120
    OPS$ORACLE@DB0>column action_time format a15
    OPS$ORACLE@DB0>column action format a10
    OPS$ORACLE@DB0>column version format a12
    OPS$ORACLE@DB0>column description format a50
    OPS$ORACLE@DB0>column comp_name format a40
    OPS$ORACLE@DB0>select name, cdb from v$database;
    
    NAME	  CDB
    --------- ---
    DB0	  NO
    
    OPS$ORACLE@DB0>select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version,  description
      2  from dba_registry_sqlpatch
      3  order by action_time;
    
    no rows selected
    
    OPS$ORACLE@DB0>
    

    This interim patch has not been applied to the database, most likely because it has no SQL payload to record in DBA_REGISTRY_SQLPATCH.

    I have also checked that the patch is displayed in the database instance alert log:

    ==========================================================
    Dumping current patch information
    ===========================================================
    Patch Id: 27383741
    Patch Description: OCW Interim patch for 27383741
    Patch Apply Time: 2018-06-30T13:26:07+02:00
    Bugs Fixed: 25970667,27187009
    ===========================================================
    

    I have also checked that the patch is displayed in the ASM instance alert log:

    ============================================================
    NOTE: PatchLevel of this instance 1812918032
    ============================================================
    Dumping list of patches:
    ============================================================
    27383741
    ============================================================
    

    I have also checked in $ORACLE_BASE/diag/crs/ol7ttsa0/crs/trace/alert.log that GI was stopped and restarted during patching:

    2018-06-30 13:25:22.293 [OCSSD(3367)]CRS-1603: CSSD on node ol7ttsa0 has been shut down.
    2018-06-30 13:25:23.622 [OCSSD(3367)]CRS-1660: The CSS daemon shutdown has completed
    2018-06-30 13:25:23.622 [OCSSD(3367)]CRS-8504: Oracle Clusterware OCSSD process with operating system process ID 3367 is exiting
    2018-06-30 13:28:04.361 [CLSECHO(13356)]ACFS-9500: Location of Oracle Home is '/u01/gi12201' as determined from the internal configuration data
    2018-06-30 13:28:05.539 [CLSECHO(13702)]ACFS-9300: ADVM/ACFS distribution files found.
    2018-06-30 13:28:05.853 [CLSECHO(13726)]ACFS-9119: Driver oracleacfs.ko failed to unload.
    2018-06-30 13:28:05.930 [CLSECHO(13750)]ACFS-9427: Failed to unload ADVM/ACFS drivers. A system reboot is recommended.
    2018-06-30 13:28:06.089 [CLSCFG(13807)]CRS-1810: Node-specific configuration for node ol7ttsa0 in Oracle Local Registry was patched to patch level 1812918032.
    2018-06-30 13:28:10.327 [OHASD(13827)]CRS-8500: Oracle Clusterware OHASD process is starting with operating system process ID 13827
    2018-06-30 13:28:10.331 [OHASD(13827)]CRS-0714: Oracle Clusterware Release 12.2.0.1.0.
    2018-06-30 13:28:10.347 [OHASD(13827)]CRS-2112: The OLR service started on node ol7ttsa0.
    2018-06-30 13:28:10.364 [OHASD(13827)]CRS-1301: Oracle High Availability Service started on node ol7ttsa0.
    2018-06-30 13:28:10.634 [CSSDAGENT(13893)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 13893
    2018-06-30 13:28:10.665 [ORAAGENT(13890)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 13890
    2018-06-30 13:28:10.780 [ORAROOTAGENT(13898)]CRS-8500: Oracle Clusterware ORAROOTAGENT process is starting with operating system process ID 13898
    2018-06-30 13:28:11.200 [ORAAGENT(13947)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 13947
    2018-06-30 13:28:11.360 [EVMD(13971)]CRS-8500: Oracle Clusterware EVMD process is starting with operating system process ID 13971
    2018-06-30 13:28:15.868 [CSSDAGENT(14098)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 14098
    2018-06-30 13:28:15.911 [ORAROOTAGENT(14101)]CRS-8500: Oracle Clusterware ORAROOTAGENT process is starting with operating system process ID 14101
    2018-06-30 13:28:16.019 [OCSSD(14130)]CRS-8500: Oracle Clusterware OCSSD process is starting with operating system process ID 14130
    2018-06-30 13:28:17.031 [OCSSD(14130)]CRS-1713: CSSD daemon is started in hub mode
    2018-06-30 13:28:25.628 [OCSSD(14130)]CRS-1601: CSSD Reconfiguration complete. Active nodes are ol7ttsa0 .
    2018-06-30 13:28:29.146 [OCSSD(14130)]CRS-1720: Cluster Synchronization Services daemon (CSSD) is ready for operation.
    

    I have ignored the ACFS-9119 and ACFS-9427 messages because I don't use ACFS any more on this node.

    Patch 27383741 has been successfully installed in this Oracle Restart environment.

    Categories: DBA Blogs

    101 Ways to Process JSON with PeopleCode

    Jim Marion - Fri, 2018-06-29 17:53
    ... well... maybe not 101 ways, but there are several!

    There is a lot of justified buzz around JSON. Many of us want to (or must) generate and parse JSON with PeopleSoft. Does PeopleSoft support JSON? Yes, actually. The Documents module can generate and parse JSON. Unfortunately, many of us find the Documents module's structure too restrictive. The following is a list of several alternatives available to PeopleSoft developers:

    • Documents module
    • Undocumented JSON objects delivered by PeopleTools
    • JSON.org Java implementation (now included with PeopleTools)
    • JavaScript via Java's ScriptEngineManager

    We will skip the first two options as there are many examples and references available on the internet. In this post, we will focus on the last two options in the list: JSON.org and JavaScript. Our scenario involves generating a JSON object containing a role and a list of the role's permission lists.

    PeopleCode can do a lot of things, but it can't do everything. When I find a task unfit for PeopleCode, I reach out to the Java API. PeopleCode has outstanding support for Java. I regularly scan the class and classes directories of PS_HOME, looking for new libraries I can leverage from PeopleCode. One of the files in my App Server's class path is json.jar. As a person interested in JSON, how could I resist inspecting the file's contents? Upon investigation, I realized that json.jar contains the json.org Java JSON implementation. This is good news, as I used to have to add this library myself. So how might we use json.jar to generate a JSON file? Here is an example:

    JSON.org has this really cool fluent design class named JSONStringer. If the PeopleCode editor supported custom formatting, fluent design would be really, really cool. For now, it is just cool. Here is an example of creating the same JSON using the JSONStringer:

    What about reading JSON using json.org? The following example starts from the JSON string generated by JSONStringer. It is a little ugly because it requires Java Reflection to invoke the JSONObject constructor. On the positive side, though, this example demonstrates Java class casting in PeopleCode (hat tip to tslater2006 for helping me with Java class casting in PeopleCode).

    What is that you say? Your PeopleTools installation doesn't have the json.jar (or jsimple.jar) files? If you like this approach, then I suggest working with your system administrator to deploy the JSON.org jar file to your app and/or process scheduler's Java class path.

    But do we really need a special library to handle JSON? By definition, JSON describes a JavaScript object. Using Java's embedded JavaScript script engine, we have full access to JavaScript. Here is a sample JavaScript file that generates the exact same JSON as the prior two examples:

    ... and the PeopleCode to invoke this JavaScript:

    Did you see something in this post that interests you? Are you ready to take your PeopleTools skills to the next level? We offer a full line of PeopleTools training courses. Learn more at jsmpros.com.

    Statspack installation scripts

    Yann Neuhaus - Fri, 2018-06-29 14:24

    When Diagnostic Pack is disabled, either because you don't have Diagnostic Pack or because you are in Standard Edition, I highly recommend installing Statspack. When you need it, to investigate an issue that occurred in the past, you will be happy to have it already installed and gathering snapshots.

    In order to be sure to have it installed correctly, there's a bit more to do than just what is described in spcreate.doc, and I detail that in a UKOUG Oracle Scene article, Improving Statspack Experience.

    For easy download, I’ve put the scripts on GitHub: https://github.com/FranckPachot/scripts/tree/master/statspack

    You will find the following scripts for Statspack installation:

    • 01-install.sql to create the tablespace and call spcreate
    • 02-schedule.sql to schedule snap and purge jobs (see the sketch after these lists)
    • 03-idle-events.sql to fix issue described at https://blog.dbi-services.com/statspack-idle-events/
    • 04-modify-level.sql to collect execution plans and segment statistics

    And some additional scripts:

    • 08-create-delta-views.sql to create views easy to query
    • 11-comment-with-load.sql to add load information to snapshots without comments
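
    As a rough sketch of what the scheduling script (02-schedule.sql above) sets up, an hourly snapshot job could look like this (the job name and interval are assumptions, not necessarily what the script uses):

    -- Hypothetical DBMS_SCHEDULER job taking hourly Statspack snapshots;
    -- run as the PERFSTAT user. Job name and interval are assumed.
    begin
        dbms_scheduler.create_job(
            job_name        => 'STATSPACK_SNAP',
            job_type        => 'PLSQL_BLOCK',
            job_action      => 'begin statspack.snap; end;',
            repeat_interval => 'FREQ=HOURLY',
            enabled         => true );
    end;
    /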

    You can leave feedback and comment on Twitter:

    The scripts I use to install Statspack are on GitHub: https://t.co/qboDoX2pc9
    - I'll be happy to have your feedback here -

    — Franck Pachot (@FranckPachot) June 29, 2018

     

    This article, Statspack installation scripts, appeared first on Blog dbi services.

    New Oracle Utilities Testing Accelerator (6.0.0.0)

    Anthony Shorten - Fri, 2018-06-29 14:02

    I am pleased to announce the next chapter in automated testing solutions for Oracle Utilities products. In the past, some Oracle Utilities products used Oracle Application Testing Suite with content to provide an amazing functional and regression testing solution. Building upon that success, a new solution named the Oracle Utilities Testing Accelerator has been introduced: a new, optimized and focused solution for Oracle Utilities products.

    The new solution has the following benefits:

    • Component Based. As with Oracle's other testing solutions, this new solution is based upon testing components and flows, with flow generation and databank support. Those capabilities were popular with our existing testing solution customers and exist in expanded forms in the new solution.
    • Comprehensive Content for Oracle Utilities. As with Oracle's other testing solutions, supported products provide pre-built content to significantly reduce the cost of adopting automation. In this solution, the number of products within the Oracle Utilities portfolio providing content has greatly expanded. This now includes both on-premise products and our growing portfolio of cloud-based solutions.
    • Self Contained Solution.  The Oracle Utilities Testing Accelerator architecture has been simplified to allow customers to quickly deploy the product with the minimum of fuss and prerequisites.
    • Used by Product QA. The Oracle Utilities Product QA teams use this product on a daily basis to verify the Oracle Utilities products. This means that the content provided has been certified for use on supported Oracle Utilities products and reduces risk of adoption of automation.
    • Behavior-Driven Development Support. One of the most exciting capabilities introduced in this new solution is support for Behavior-Driven Development (BDD), which is popular with newer Agile-based implementation approaches. One of the major goals of the new testing capability is to reduce rework from the Agile process in building test assets. This new capability introduces Machine Learning into the testing arena, generating test flows from the Gherkin syntax documentation produced by Agile approaches. A developer can reuse their Gherkin specifications to generate a flow quickly without rework. As the capability uses Machine Learning, it can be corrected if the assumptions it makes are incorrect for the flow, and those corrections will be reused for any future flow generations. An example of this approach is shown below:

    • Selenium Based. The Oracle Utilities Testing Accelerator uses a Selenium based scripting language for greater flexibility across the different channels supported by the Oracle Utilities products. The script is generated automatically and does not need any alteration to be executed correctly.
    • Data Independence. As with Oracle's other testing products, data is supported independently of the flow and components. This translates into greater flexibility and greater levels of reuse in automated testing. It is possible to change data at any time during the process to explore greater possibilities in testing.
    • Support for Flexible Deployments. The focus of the Oracle Utilities Testing Accelerator is functional and/or regression testing, but it supports flexible deployment scenarios.
    • Beyond Functional Testing. The Oracle Utilities Testing Accelerator is designed to be used for testing beyond just functional testing. It can be used to perform testing in flexible scenarios including:
      • Patch Testing. The Oracle Utilities Testing Accelerator can be used to assess the impact of product patches on business processes using the flows as a regression test.
      • Extension Release Testing. The Oracle Utilities Testing Accelerator can be used to assess the impact of releases of extensions from the Oracle Utilities SDK (via the migration tools in the SDK) or after a Configuration Migration Assistant (CMA) migration.
      • Sanity Testing. In the Oracle Cloud, the Oracle Utilities Testing Accelerator is being used to assess the state of a new instance of the product, including its availability and that the necessary data is set up, ensuring the instance is ready for use.
      • Cross Oracle Utilities Product Testing. The Oracle Utilities Testing Accelerator supports flows that cross Oracle Utilities product boundaries to model end to end processes when multiple Oracle Utilities products are involved.
      • Blue/Green Testing. In the Oracle Cloud, zero outage upgrades are a key part of the solution offering. The Oracle Utilities Testing Accelerator supports the concept of blue/green deployment testing to allow multiple versions to be able to be tested to facilitate smooth upgrade transitions.
    • Lower Skills Required. The Oracle Utilities Testing Accelerator has been designed with testing users in mind. Traditional automation involves recording with a scripting language that embeds the data and logic into a script, which a programmer can then alter to make it more flexible. The Oracle Utilities Testing Accelerator uses an orchestration metaphor that allows a lower-skilled person, not a programmer, to build test flows and generate no-touch scripts to be executed.

    An example of the Oracle Utilities Testing Accelerator Workbench:

    New Architecture

    The Oracle Utilities Testing Accelerator has been re-architected and optimized for use with Oracle Utilities products:

    • Self Contained Solution. The new design is built around simplicity; as much as possible, the product is designed to run with minimal configuration.
    • Minimal Prerequisites. The Oracle Utilities Testing Accelerator only requires Java to execute and a database schema to store its data. Non-production allocations of existing Oracle Utilities product licenses are sufficient for this solution; no additional database licenses are required by default.
    • Runs on same platforms as Oracle Utilities applications. The solution is designed to run on the same operating system and database combinations supported with the Oracle Utilities products.

    The architecture is simple:

    UTA 6.0.0.0 Architecture

    • Product Components. A library of components from the Product QA teams ready to use with the Oracle Utilities Testing Accelerator. You decide which libraries you want to enable.
    • Oracle Utilities Testing Accelerator Workbench. A web based design toolset to manage and orchestrate your test assets. Includes the following components:
      • Embedded Web Application Server. A preset simple configuration and runtime to house the workbench.
      • Testing Dashboard. A new home page outlining the state of the components and flows installed as well as notifications for any approvals and assets ready for use.
      • Component Manager. A Component Manager that allows you to add custom components and manage the components available to use in flows.
      • Flow Manager. A Flow Manager allowing testers to orchestrate flows and manage their lifecycle, including generation of Selenium assets for execution.
      • Script Management. A script manager used to generate scripts and databanks for flows.
      • Security. A role based model to support administration, development of components/flows and approvals of components/flows.
    • Oracle Utilities Testing Accelerator Schema. A set of database objects that can be stored in any edition of Oracle (PDB or non-PDB is supported) for storing assets and configuration.
    • Oracle Utilities Testing Accelerator Eclipse-based Plug-in. An Oxygen-compatible Eclipse plugin that executes the tests, including recording of performance and payloads for detailed test analysis.
    New Content

    The Oracle Utilities Testing Accelerator has expanded the number of products supported and now includes Oracle Utilities Application Framework based products and Cloud Services products. New content will be released on a regular basis to provide additional coverage for components and a set of prebuilt flows that can be used across products.

    Note: Refer to the release notes for supported Oracle Utilities products and assets provided.

    Conclusion

    The Oracle Utilities Testing Accelerator provides a comprehensive testing solution, optimized for Oracle Utilities products, with content provided by Oracle, allowing implementations to realize lower-cost and lower-risk adoption of automated testing.

    For more information about this solution, refer to the Oracle Utilities Testing Accelerator Overview and Frequently Asked Questions (Doc Id: 2014163.1) available from My Oracle Support.

    Note: The Oracle Utilities Testing Accelerator is a replacement for the older Oracle Functional Testing Advanced Pack for Oracle Utilities. Customers on that product should migrate to this new platform. Utilities to convert any custom components from the Oracle Application Testing Suite platform are provided with this tool.

    #SwissPGDay is a place where you can meet developers of the PostgreSQL community

    Yann Neuhaus - Fri, 2018-06-29 12:10

    As the #SwissPGDay is located one hour south of Zurich, we traveled yesterday evening to Rapperswil, a nice little Swiss city. For the people coming one day early, Stephan Wagner organized a nice dinner in an excellent restaurant (before I forget, the red wine was also excellent!).

    The #SwissPGDay takes place in Rapperswil at the HSR (Hochschule für Technik), near the Zürisee; it's a nice location and everything was nicely organized. The view from the HSR is also very beautiful.

    After a short introduction from Stephan Keller (HSR), for which the room of the plenary session was completely full, the presentations were split into two streams. For your information, last year they had around 60 participants; this year we were around 100, so I would say that PostgreSQL is the trend.


    What I especially appreciated today is that many of the speakers at the event are also developers in the PostgreSQL community, so we got the latest information on ongoing development and could discuss new features directly with them.

    I will also give you feedback on two sessions from other partners which I really appreciated. For sure, the session from my colleague Daniel Westermann, who presented the new features of PostgreSQL 11 with many demos, was the best one :-).

    The first one was from Laurenz Albe (Cybertec), who presented a community tool, ORA_MIGRATOR, and tips to migrate from Oracle to PostgreSQL.
    The second one was from Harald Armin Massa (2ndQuadrant), who presented the new PostgreSQL logical replication, which was developed by his company and is now partially available in the community edition.

    As usual, to finish, an apéro was organized to drink a beer together and exchange with the other participants.

    See you next year at the #SwissPGDay 2019.

     

    This article, #SwissPGDay is a place where you can meet developers of the PostgreSQL community, appeared first on Blog dbi services.

    DataGuard and Transparent Data Encryption

    Yann Neuhaus - Fri, 2018-06-29 09:45

    Setting up a Data Guard environment for a database with Transparent Data Encryption (TDE) requires some extra tasks concerning the encryption keys. Otherwise the steps are the same as for an environment without TDE.
    In this blog we will present the key-related tasks we have to do on both the primary and standby servers. We will not describe the procedure to build the standby database; we will just talk about the tasks for the wallet, and we will verify that data in encrypted tables are being replicated.
    We are using Oracle 12.2 and a non-container database.
    Tasks on primary side
    First, on the primary server, we have to configure the keystore location. This is done by updating sqlnet.ora with the directory which will contain the keys.

    [oracle@primaserver ~]$mkdir /u01/app/wallet
    
    [oracle@primaserver admin]$ cat sqlnet.ora
    # sqlnet.ora Network Configuration File: /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora
    # Generated by Oracle configuration tools.
    
    NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)
    
    # For TDE
    ENCRYPTION_WALLET_LOCATION=
     (SOURCE=
      (METHOD=file)
       (METHOD_DATA=
        (DIRECTORY=/u01/app/wallet)))
    [oracle@primaserver admin]$
    

    Then, on the primary, we have to create the keystore.

    SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/wallet' identified by root ;
    
    keystore altered.
    

    Next we have to open the keystore before creating the master key:

    SQL> ADMINISTER KEY MANAGEMENT set KEYSTORE open   identified by root ;
    
    keystore altered.
    

    And then we can create the master key.

    SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY root WITH BACKUP;
    
    keystore altered.
    

    The wallet must be open before we can access encrypted objects, so every time the database starts up we would have to open the wallet manually. To avoid this we can create an auto-login wallet, which will be opened automatically at each database startup.

    SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/u01/app/wallet' identified by root;
    
    keystore altered.
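
    At this point we can verify the keystore with the V$ENCRYPTION_WALLET view; STATUS should be OPEN and, once the auto-login keystore is in use, WALLET_TYPE shows AUTOLOGIN:

    select wrl_type, wrl_parameter, status, wallet_type
      from v$encryption_wallet;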
    

    Tasks on standby side
    On the standby side we just have to copy the wallet files and update the sqlnet.ora file.

    [oracle@primaserver wallet]$ pwd
    /u01/app/wallet
    [oracle@primaserver wallet]$ ls
    cwallet.sso  ewallet_2018062707462646.p12  ewallet.p12
    [oracle@primaserver wallet]$ scp * standserver1:$PWD
    oracle@standserver1's password:
    cwallet.sso                                   100% 3891     3.8KB/s   00:00
    ewallet_2018062707462646.p12                  100% 2400     2.3KB/s   00:00
    ewallet.p12                                   
    

    And that's all. We can now configure our standby database. Below is our configuration:

    DGMGRL> show configuration;
    
    Configuration - DGTDE
    
      Protection Mode: MaxPerformance
      Members:
      DGTDE_SITE1 - Primary database
        DGTDE_SITE2 - Physical standby database
    
    Fast-Start Failover: DISABLED
    
    Configuration Status:
    SUCCESS   (status updated 1 second ago)
    
    DGMGRL>
    

    Now let's verify that encrypted data are being replicated. We have a table with an encrypted column:

    SQL> show user
    USER is "SCOTT"
    SQL> desc TEST_COL_ENC
     Name                                      Null?    Type
     ----------------------------------------- -------- ----------------------------
     ID                                                 NUMBER
     DESIGNATION                                        VARCHAR2(30) ENCRYPT
    
    SQL> select * from TEST_COL_ENC;
    
            ID DESIGNATION
    ---------- ------------------------------
             1 toto
             2 tito
             3 tata
    
    SQL>
    

    And let's insert some data from the primary:

    SQL> insert into TEST_COL_ENC values (4,'titi');
    
    1 row created.
    
    SQL> insert into TEST_COL_ENC values (5,'teti');
    
    1 row created.
    
    SQL> commit;
    
    Commit complete.
    
    SQL>
    

    From the standby, let's query the table:

    SQL> select db_unique_name,open_mode from v$database;
    
    DB_UNIQUE_NAME                 OPEN_MODE
    ------------------------------ --------------------
    DGTDE_SITE2                    READ ONLY WITH APPLY
    
    SQL> select * from scott.TEST_COL_ENC;
    
            ID DESIGNATION
    ---------- ------------------------------
             4 titi
             5 teti
             1 toto
             2 tito
             3 tata
    
    SQL>
    

    To finish, we will recall the following notes about Data Guard and TDE from the Oracle documentation.

    The database encryption wallet on a physical standby database must be replaced with a fresh copy of the database encryption wallet from the primary database whenever the TDE master encryption key is reset on the primary database.

    For online tablespaces and databases, as of Oracle Database 12c Release 2 (12.2.0.1), you can encrypt, decrypt, and re-key both new and existing tablespaces, and existing databases, within an Oracle Data Guard environment. These tasks are automatically performed on the standby once done on the primary. Note that these online tasks cannot be done directly on the standby side.
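
    For example, an online conversion started on the primary could look like this (the tablespace name and algorithm are chosen for illustration only):

    -- Hypothetical online encryption of an existing tablespace on the
    -- primary; the standby performs the conversion automatically.
    alter tablespace users encryption online using 'AES256' encrypt;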

    In an offline conversion, the encryption or decryption must be performed manually on both the primary and standby. An offline conversion affects the data files on the particular primary or standby database only. Both the primary and physical standby should be kept at the same state. You can minimize downtime by encrypting (or decrypting) the tablespaces on the standby first, switching over to the primary, and then encrypting (or decrypting) the tablespaces on the primary.

     

    Cet article DataGuard and Transparent Data Encryption est apparu en premier sur Blog dbi services.

    Managing Cost-Based Optimizer Statistics for PeopleSoft

    David Kurtz - Fri, 2018-06-29 06:05
    I gave this presentation at the UKOUG PeopleSoft Roadshow 2018.

    PeopleSoft presents some special challenges when it comes to collecting and maintaining the object statistics used by the cost-based optimizer.

    I have previously written and blogged on this subject.  This presentation focuses exclusively on the Oracle database and draws together the various concepts into a single consistent picture.  It makes clear recommendations for Oracle 12c that will help you work with the cost-based optimizer, rather than continually fight against it.

    It looks at collecting statistics for permanent and temporary working storage tables and considers some other factors that can affect optimizer statistics.

    This presentation also discusses PSCBO_STATS, which is going to be shipped with PeopleTools, and compares and contrasts it with GFCPSSTATS11.

    Some Interview Questions

    Tom Kyte - Fri, 2018-06-29 05:26
    Hi Tom, Recently I attended an interview; I have given the questions which I was not able to answer. 1. When using RMAN, Oracle will use its own processes rather than the OS ones. The question is: what are the advantages Oracle is getting by using it ow...
    Categories: DBA Blogs

    Truncate upgrade

    Jonathan Lewis - Fri, 2018-06-29 02:22

    Connor McDonald produced a tweet yesterday linking to a short video he’d created about an enhancement to the truncate command in 12c. If you have referential integrity declared between a parent and child table then in 12c you can truncate the parent table and Oracle will truncate the child table for you – rather than raising an error. The feature requires the foreign key constraint to be declared “on delete cascade” – which is an option that I don’t see used very often. Unfortunately if you try to change an existing foreign key constraint to meet this requirement you’ll find that you can’t (yet) use the “alter table modify constraint” to make the necessary change. As Connor pointed out, you’ll have to drop and recreate the constraint – which leaves you open to bad data getting into the system or an outage while you do the drop and recreate.

    If you don't want to stop the system, but don't want to leave even a tiny window for bad data to arrive, here's a way to do it. In summary:

    1. Add a virtual column to the child table “cloning” the original foreign key column
    2. Create an index on the virtual column (if you have concerns about "foreign key locking")
    3. Add a foreign key constraint based on the virtual column
    4. Drop the old foreign key constraint
    5. Recreate the old foreign key constraint “on delete cascade”
    6. Drop the virtual column

    Here’s some sample SQL:

    
    rem
    rem	Script:		122_truncate_workaround.sql
    rem	Author:		Jonathan Lewis
    rem	Dated:		Jun 2018
    rem	Purpose:	
    rem
    rem	Last tested 
    rem		18.1.0.0	via LiveSQL
    rem		12.2.0.1
    rem		12.1.0.2
    
    drop table child;
    drop table parent;
    
    create table parent (
    	p number,
    	constraint p_pk primary key(p)
    );
    
    create table child (
    	c	number,
    	p	number,
    	constraint c_pk primary key(c),
    	constraint c_fk_p foreign key (p) references parent
    );
    
    create index c_fk_p on child(p);
    
    insert into parent values(1);
    insert into child values(1,1);
    
    commit;
    
    prompt	==========================================================================
    prompt	Truncate  should fail with
    prompt	ORA-02266: unique/primary keys in table referenced by enabled foreign keys
    prompt	==========================================================================
    
    truncate table parent;
    
    alter table child add (
    	pv generated always as (p+0) virtual
    )
    ;
    
    create index c_ipv on child(pv) online;
    
    alter table child add constraint c_fk_pv
    	foreign key (pv)
    	references parent(p)
    	on delete cascade
    	enable novalidate
    ;
    alter table child modify constraint c_fk_pv validate;
    
    alter table child drop constraint c_fk_p;
    
    prompt	===================================================================================
    prompt	Illegal insert (first 1) should fail with
    prompt	ORA-02291: integrity constraint (TEST_USER.C_FK_PV) violated - parent key not found
    prompt	===================================================================================
    
    insert into child (c,p) values(2,2);
    insert into child (c,p) values(2,1);
    commit;
    
    alter table child add constraint c_fk_p
    	foreign key (p)
    	references parent(p)
    	on delete cascade
    	enable novalidate
    ;
    
    alter table child modify constraint c_fk_p validate;
    
    prompt	===================================================
    prompt	Dropping the virtual column results in Oracle
    prompt	dropping the index and constraint at the same time
    prompt	===================================================
    
    alter table child drop column pv;
    
    

    The overhead of this strategy is significant – I’ve created an index (which you may not need, or want, to do) in anticipation of a possible “foreign key locking” issue – and I’ve used the online option to avoid locking the table while the index is created which means Oracle has to use a tablescan to acquire the data. I’ve enabled a new constraint without validation (which takes a brief lock on the table) then validated it (which doesn’t lock the table but could do a lot of work). Then I’ve dropped the old constraint and recreated it using the same novalidate/validate method to minimise locking time. If I were prepared simply to drop and recreate the original foreign key I wouldn’t need to create that index and I’d only do one validation pass rather than two.
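For comparison, the simple drop-and-recreate route mentioned above needs only the following (using the same novalidate/validate trick to minimise locking), at the cost of a window during which bad data could arrive:

rem	The simpler alternative - accepts a window for bad data
rem	between the drop and the moment the new constraint is enabled
alter table child drop constraint c_fk_p;

alter table child add constraint c_fk_p
	foreign key (p)
	references parent(p)
	on delete cascade
	enable novalidate
;

alter table child modify constraint c_fk_p validate;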

     

    Using PeopleCode to Read (and process) Binary Excel Files

    Jim Marion - Thu, 2018-06-28 23:10

    At HIUG Interact last week, a member asked one of my favorite questions:

    "Does anyone know how to read binary Microsoft Excel files from PeopleSoft?"

    Nearly 15 years ago my AP manager asked me the same question, but phrased it a little differently:

    "We receive invoices as Excel spreadsheets. Can you convert them into AP vouchers in PeopleSoft?"

Of course my answer was "YES!" How? Well... that was the challenge. I started down the typical CSV/FileLayout path, but that seemed like a temporary band-aid, and challenging even for the best users. I wanted to read real binary Excel files directly through the Process Scheduler, or basically, with PeopleCode. But here is the reality: PeopleCode is really good with data and text manipulation, but stops short of binary operations. Using PeopleCode's Java interface, however, anything is possible. After a little research, I stumbled upon Apache POI, a Java library that can read and write binary Excel files. With a little extra Java code to interface between PeopleCode and POI's Java classes, I had a solution. Keep in mind this was nearly 15 years ago. PeopleSoft and Java were both a little different back then, and today's solution is slightly simpler. Here is a summary of PeopleSoft and Java changes that simplify this solution:

    • As of PeopleTools 8.54, PeopleSoft now includes POI in the App and Process Scheduler server Java class path. This means I no longer have to manage POI as a custom Java library.
    • The standard JRE added support for script engines and included the JavaScript script engine with every deployment. This means I no longer have to write custom Java to interface between POI and PeopleCode, but can leverage the dynamic nature of JavaScript.

    How does a solution like this work? The ultimate goal is to process spreadsheet rows through a Component Interface. First we need to get data rows into a format we can process. Each language and operating environment has its strengths:

    • PeopleCode can handle simple Java method invocations,
    • JavaScript can handle complex Java method invocation without compilation,
    • Java is really good at working with binary files, and
    • PeopleCode and Component Interfaces play nicely together.

    My preference is to capitalize on these strengths. With this in mind, I put together the following flow:

1. Use PeopleCode to create an instance of a JavaScript script interpreter,
    2. Use JavaScript to invoke POI and iterate over spreadsheet rows, inserting row data into a temporary table, and
    3. Use PeopleCode to process those rows through a component interface.

    The code for this solution is in two parts: JavaScript and PeopleCode. Here is the JavaScript:

    Next hurdle: where do we store JavaScript definitions so we can process them with PeopleCode? Normally we place JavaScript in HTML definitions. This works great for online JavaScript as we can use GetHTMLText to access our script content. App Engines, however, are not allowed to use that function. An alternative is to use Message Catalog entries for scripts. The following PeopleCode listing uses an HTML definition, but accesses the JavaScript content directly from the HTML definition Metadata table:

    To summarize this PeopleCode listing, it first creates a JavaScript script engine manager, it then evaluates the above JavaScript, and finishes by processing rows through a CI (the CI part identified as a TODO segment).

    This example is fully encapsulated in as few technologies as possible: PeopleCode and JavaScript, with a little SQL to fetch the JavaScript. The code will work online as well as from an App Engine. If this were in an App Engine, however, I would likely replace the JavaScript GUID section with the AE's PROCESS_INSTANCE. Likewise, I would probably use an App Engine Do-Select instead of a PeopleCode SQL cursor.
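As a rough illustration of that "little SQL", the fetch might look like the sketch below; note that the metadata table and column names used here (PSCONTENT, CONTNAME, SEQNUM, CONTDATA) are assumptions that can differ between PeopleTools releases, and that the content is stored in chunks which must be concatenated in SEQNUM order:

-- Hypothetical sketch: metadata table/column names are assumptions.
-- HTML definition content is chunked, so concatenate rows in order.
select contdata
from   pscontent
where  contname = 'MY_JS_HTML_DEFN'
order  by seqnum;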

    Did you see something on this blog that interests you? Are you ready to take your PeopleTools skills to the next level? We offer a full line of PeopleTools training courses. Learn more at jsmpros.com.

    How to Export to Excel and Print to PDF in Oracle APEX? The answer...

    Dimitri Gielis - Thu, 2018-06-28 16:46
Two questions that pop up a lot when I'm at a conference or when doing consulting are:
    • How can I export my data from APEX to Excel?
    • How can I print to PDF? Or how can I get a document/report with my data?
The reason those questions keep coming up is that, although those features exist to a certain extent in APEX, what you actually want is not shipped with Oracle Application Express (APEX), at least not as of Oracle APEX 18.1.

    Although the solution to both questions is the same, I'll go into more detail on the specific questions separately.

    How can I export my data from APEX to Excel?

    People typically want to export data from a Classic Report, Interactive Report, Interactive Grid or a combination of those to Excel.

    What APEX provides out-of-the-box is the export to CSV format, which can be opened in Excel.

The biggest issue with CSV is that it's not a native Excel format. Depending on the settings of Excel (or rather, your OS globalization settings), the CSV may open incorrectly: instead of separate columns, you see one big line. You also get an annoying message that some functions will be lost because it's not a native Excel format.


    You can customize the CSV separator, so the columns are recognized. But with a global application (users with different settings), it's still a pain. Maybe the biggest issue people have with CSV export is that it's just plain text, so the markup or customizations (sum, group by, ...) are lost.

    You can enable the CSV export in the attributes section of the respective components:


When you have BI Publisher (BIP) set up and specified in APEX as the Print Server, you have a few more options. In the Classic Report, you find it in the Printing section - there's an option for Excel. In the Interactive Report, there's an option for XLS; the Interactive Grid doesn't have such an option.

BI Publisher is expensive and comes with a big infrastructure and maintenance overhead, so it's not an option for many APEX shops. But even the companies that have it are looking at other solutions: although you get a native Excel file, it's cumbersome to use, and BIP doesn't export your Interactive Report exactly as you see it on the screen with the customizations you made.

So how do you get around those issues? There are some APEX plugins that export an Interactive Report or Grid as you see it on the screen; Pavel's plugin is probably the most popular one.
If you only need to export one IR/IG at a time to a pre-defined Excel file, that might be an option for you. If you want to use your own Excel template, export multiple IRs/IGs at the same time, or have more flexibility all around, read on...

    The solution

APEX Office Print (AOP). The AOP plugin extends APEX so you can specify the Excel file you want to start from (your template), combine it with the different APEX reports (Classic Report, Interactive Report, Interactive Grid), and get the output in Excel (or other formats). AOP is really easy to use, yet flexible and full of features no other solution provides. I'll touch on three different aspects customers love.

    Interactive Report/Grid to Excel with AOP - WYSIWYG (!)

This feature is what customers love about AOP and something you won't find anywhere else. You can print one or more Interactive Reports and Grids directly to Excel, exactly as you see them on the screen. So if the end-user added a break, some highlights or some computations, AOP knows about it all. Even Group by and Pivot are no problem. The implementation is super simple: in Excel you define your template (a title, a logo, etc.). Where you want to see an Interactive Report you specify {&interactive_1}, {&interactive_2}, and for an Interactive Grid you specify {&static_id&}. In the AOP APEX plugin, you specify the template and the static ids of the Interactive Report / Grid regions, and that is it! AOP does the merge: wherever it sees the special tags in the template, it generates the IR/IG. Not a screenshot - REAL table data! Here's an example with one Interactive Report:


    In your Excel you can add multiple tags, on the same sheet and on different sheets... and this doesn't only work in Excel, but also in Word and PDF!

    But there is even more... what if you look at the Interactive Report as a chart?
You got it... AOP even understands this. You can plot the table data with {&interactive}, and by using {$interactive} it will generate the chart... and that is a native Office chart, which you can still change in Excel!

    Here's an example of the output generated by AOP with three interactive reports, one as a chart:


You can do all of the above through the AOP PL/SQL API too. Some people use this to schedule their reports and email them out on a daily basis, so they don't even have to go into APEX.

For me, the Interactive Report and Grid support is one of the killer features of AOP.

    Advanced templates in Excel with AOP

AOP is really flexible in how you build your template. The templating engine supports hierarchical data, angular expressions, conditions, and blocks of data so you can show datasets next to each other, and it supports HTML expressions too.

    Here's an example of a template which loops over the orders and shows the product of that order. It contains a condition to show an "X" when the quantity is higher than 2 and it also has an expression to calculate the price of the line (unit price * quantity).


    The data source specified in the plugin is of type SQL. AOP supports the cursor technique in SQL to create hierarchical data:
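A minimal sketch of such a query (the table and column names are illustrative, not AOP requirements): each customer row carries a nested cursor holding that customer's orders, and the template loops over both levels:

-- Illustrative sketch: every "customers" row exposes a nested
-- cursor of its orders for the template to iterate over.
select c.customer_id  as "id",
       c.cust_name    as "name",
       cursor(select o.order_id    as "order_id",
                     o.order_total as "order_total"
              from   demo_orders o
              where  o.customer_id = c.customer_id) as "orders"
from   demo_customers c;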


    And (a part of) the output looks like this:


    I'm amazed by what people come up with in their templates to create really advanced Excel sheets. It's really up to your imagination... and a combination of the features of Excel.

    Multiple sheets in one Excel file with AOP

We have one customer who basically dumps their entire database into Excel. Every table gets its own sheet in Excel. You just need to put the right tags in the different sheets and you are done.

AOP also supports the dynamic generation of sheets in Excel, so you get, for example, one sheet per customer with that customer's orders on it. The template looks like this (the magic tag is {!customers}):


    The output is this:


We built this feature a while back based on customer feedback.

    Dynamic column generation in Excel with AOP

This is a new feature we have been working on for AOP 4.0. By using the {:tag} we can now generate columns dynamically too:


    This might be useful if you want to pivot the data or want to see it in a different format. This feature is also available for Word tables. Another way of pivoting is doing it in Oracle or in an Interactive Report. This feature took us a long time to develop, but we think it's worth it.

I hope the above demonstrates why I believe APEX Office Print (AOP) is "THE" solution if you want to export your data from APEX (or the Oracle Database) into Excel.


    Let's move on to the second question...

    How can I print to PDF? Or how can I get a document/report with my data?

Oracle Application Express (APEX) has two integrated ways to print to PDF: either you use XSL-FO or you use BI Publisher. The reason people still ask how to print to PDF is that the former (XSL-FO) is too hard to implement, and the latter (BI Publisher) is too expensive, too hard to maintain and not user-friendly enough.

Again, APEX Office Print (AOP) is the way to go. AOP is so easy to use and so well integrated with APEX that most developers love working with it. Based on a template you create in Word, Excel, PowerPoint, HTML or Text, you can output to PDF. In combination with the AOP plugin or PL/SQL API, it's easy to define where your data and template are, and AOP does the merge for you.

    Building the template

It begins the same as with any print engine... You don't want to learn a new tool to build your template in; you want a fast result. So the way to get there with AOP is to use the AOP plugin, define your data source, and let AOP generate the template for you. AOP will look at your data and create a starter template (in Word, Excel, HTML or Text) with the tags you can use based on your data and some explanation of how to use them.

    Here's an example where AOP generates a Word template based on the SQL Query specified in the Data Source:



So now you have a template to start from. Next, you customize the template to your needs... or you can even let the business user customize it. The only thing to know is how to use the specific {tags}. As a developer, I always thought my time was better spent than on changing the logo on a template or changing some sentences over and over again. With AOP my dream comes true: as a developer, I can concentrate on my query (the data), and the business users can create the template themselves and send the new version or upload it straight into the app whenever changes are required.

When customers show me what they did with AOP - from templates for invoices, bills of materials and certificates to full-blown books - I'm really impressed by their creativity. If you can imagine it, you can probably do it :)

    Here's the AOP plugin, where we specify where the customized Word template can be found (in Static Application Files) and set the output to PDF:


    Features in AOP that people love

    When you download APEX Office Print, it comes with a Sample app, which shows the features of AOP in action. Here's a screenshot of some of the Examples you find in the AOP Sample App:


As this blog post is getting long, I won't highlight all the features of AOP and why they rock so much, but I do want to pick out two features you probably won't find anywhere else.

    Native Office Charts and JET Charts in PDF

AOP supports the creation of native Office charts, so you can even customize the charts further in Word. But sometimes people want to see exactly the chart they have on the screen, whether it is a JET chart, a Fusion chart, a Highchart or any other library... With AOP you can get those charts straight into your PDF! The only thing you have to do is specify the static id of the region and put {%region} in your template... AOP will screenshot what the user sees and replace the tag with a sharp image. So even when the customer has removed a series from the legend, the PDF shows exactly that.



    HTML content in PDF

At the APEX World conference, a customer showed their use case of APEX together with AOP. Before, they had to manage many different Word documents and PDFs; it was hard because they had to update several documents every time, things got out of sync, and it was just a pain to deal with overall. So they replaced all of this with Oracle APEX and Rich Text Editors. They created a structured database, so the information is entered once, and by using APEX Office Print (AOP) they generate all the different documents (Word/PDF) they need.

AOP interprets HTML when it sees an underscore in the tag, e.g. {_tag}, and translates that HTML into native Word styling. If a PDF is requested, the Word document is converted to PDF, so the PDF contains real bold text, real colors, etc.

    Here's an example of how Rich Text is rendered to PDF.


AOP also understands when you use, for example, HTML expressions in your Classic or Interactive Report, or when you do some inline styling. It took us a very long time to develop this feature, but the feedback we get from our customer base made it worthwhile :)

So far I showed Word as the starting template for your PDF, but sometimes PowerPoint is a great start too, and not many people know that. In PowerPoint you can make pixel-perfect templates as well, and going to PDF is just as easy as from Word.

In our upcoming release of AOP 4.0, we have spent a lot of time improving our PDF feature set. We will introduce PDF split and merge and the ability to prepend and append files to any of your documents.


    Some last words

If you are interested in what APEX Office Print (AOP) is all about, I recommend sitting down and watching this 45-minute video I did at the APEX Connect conference. In that presentation, I go from downloading and installing to using AOP, and I show many of its features live.



We at APEX R&D are committed to bringing the best possible print engine to APEX, one that makes your life easier. We find it important to listen to you and support you however we can; we really want you to be successful. So if you have feedback on ways we can help you even more, let us know - we care about you. We won't rest until everybody knows about our mission, and we want to remain "the" printing solution for APEX.

Sometimes I get emails from developers who tell me they have to do a comparison of the print engines for Oracle APEX, but that they love AOP. If you include some of the above features (IR/IG to PDF or Excel, JET charts, and HTML to PDF) in your requirements, you are guaranteed to end up with APEX Office Print; nothing else comes even close to those features :)

    AOP's philosophy has been to be as integrated as possible in APEX, as easy as building APEX applications, yet flexible enough to build really advanced reports. We make printing and exporting of data in APEX easy.

If you read all the way to here, you are amazing. Now I rest my case :)
    Categories: Development

    Performance Metric Service – Classic Configuration

    DBASolved - Thu, 2018-06-28 16:37

Almost a year ago, Oracle released Oracle GoldenGate 12c (12.3.0.1.x). At that time, two architectures were released: Microservices and Classic. Both architectures provided the same enterprise-level replication. The only difference was that one offered a RESTful API interface with HTML5 pages while the other was still command-line driven.

The biggest change, though, was the addition of the Performance Metric Service/Server that comes bundled with the core product. This is a huge addition to the core product and allows end-users to monitor their Oracle GoldenGate environment in near-real time. On the Microservices architecture this service is enabled automatically and can be used on a per-deployment basis. With the Classic architecture, it is there but requires a small configuration to get it to work.

    In this post, I’ll show you how to get the Performance Metric Service (PMSRVR) in Classic Architecture configured and access the RESTful API endpoints. The context of this post actually builds upon a post I did almost 3 years ago (here), where I talked about how to pull XML information via a browser for Oracle GoldenGate.

    After installing Oracle GoldenGate 12c (12.3.0.1.4) Classic Architecture, open GGSCI and evaluate the environment. You should notice that you have a Manager, JAgent, and Performance Metric Service (PMSRVR) all as defaults (Figure 1).

    Figure 1:

Next, start the Manager (MGR) process. This is done the same way as in the past - START MGR. Once the MGR process is started, your GGSCI should look like Figure 2.

    Figure 2:

Now to get the PMSRVR to work. This requires editing the GLOBALS file, which can be done from the command line (vi GLOBALS). Within the GLOBALS file, turn on the ENABLEMONITORING parameter. At this point, you need to understand that there have been a few changes to the ENABLEMONITORING parameter in Oracle GoldenGate. Without getting into too much detail about the changes, you now have to specify the UDP option.

    A simple GLOBALS file would look like this:

    ENABLEMONITORING UDP

At this point, you can start the PMSRVR within GGSCI (start pmsrvr) (Figure 3). This gives you a default port of 9004 for accessing the PMSRVR pages via HTTP. If you want more control over the port numbers, you can modify the GLOBALS file to specify the HTTP port you want to use.

    An example would look like this:

    ENABLEMONITORING UDP HTTPPORT 12000

    Note: https://docs.oracle.com/goldengate/c1230/gg-winux/GWURF/Chunk1486599197.htm#GWURF474

    Then restart the PMSRVR. After the restart, you will be able to access the PMSRVR via the HTTP port specified.

    Figure 3:

    Now to access the PMSRVR page, just navigate to http://hostname:port/groups (Figure 4). This is the starting point for checking the status.

    Figure 4:

Notice that in Figure 4 you see a list of all services that are available within the product. Services like AdminSrvr, Recvsrvr, Distsrvr, and Adminclnt are never executed; this is normal since this is not the Microservices Architecture, and those services will not work.

    At this point, you can use the web pages to drill into the PMSRVR, MGR and any capture/apply processes that are being monitored by the PMSRVR.

    Enjoy!!

    Categories: DBA Blogs

    Ubuntu 16.04: Installation of chrome-browser fails with libnss3 (>= 2:3.22) [2]

    Dietrich Schroff - Thu, 2018-06-28 14:19

    The first solution for installing google-chrome after getting the error
dpkg: dependency problems prevent configuration of google-chrome-stable:
 google-chrome-stable depends on libnss3 (>= 2:3.22); however:
  Version of libnss3:amd64 on system is 2:3.21-1ubuntu4.

dpkg: error processing package google-chrome-stable (--install):
 dependency problems - leaving unconfigured
    was to install chromium (see this posting).

But now I know what the problem was:
Inside /etc/apt/sources.list the entry
deb http://security.ubuntu.com/ubuntu/ xenial-security restricted main multiverse universe
was missing. After adding this line (and running apt-get update) I was able to install a libnss3 version satisfying (>= 2:3.22):
    # apt-get install libnss3
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  libappindicator1 libcurl3 libdbusmenu-gtk4 libindicator7
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  libnspr4 libnss3-nssdb
The following packages will be upgraded:
  libnspr4 libnss3 libnss3-nssdb
3 upgraded, 0 newly installed, 0 to remove and 490 not upgraded.
Need to get 1,270 kB of archives.
After this operation, 31.7 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://security.ubuntu.com/ubuntu xenial-security/main amd64 libnspr4 amd64 2:4.13.1-0ubuntu0.16.04.1 [112 kB]
Get:2 http://security.ubuntu.com/ubuntu xenial-security/main amd64 libnss3-nssdb all 2:3.28.4-0ubuntu0.16.04.3 [10.6 kB]
Get:3 http://security.ubuntu.com/ubuntu xenial-security/main amd64 libnss3 amd64 2:3.28.4-0ubuntu0.16.04.3 [1,148 kB]
Fetched 1,270 kB in 1s (737 kB/s)
(Reading database ... 140220 files and directories currently installed.)
Preparing to unpack .../libnspr4_2%3a4.13.1-0ubuntu0.16.04.1_amd64.deb ...
Unpacking libnspr4:amd64 (2:4.13.1-0ubuntu0.16.04.1) over (2:4.11-1ubuntu1) ...
Preparing to unpack .../libnss3-nssdb_2%3a3.28.4-0ubuntu0.16.04.3_all.deb ...
Unpacking libnss3-nssdb (2:3.28.4-0ubuntu0.16.04.3) over (2:3.21-1ubuntu4) ...
Preparing to unpack .../libnss3_2%3a3.28.4-0ubuntu0.16.04.3_amd64.deb ...
Unpacking libnss3:amd64 (2:3.28.4-0ubuntu0.16.04.3) over (2:3.21-1ubuntu4) ...
Setting up libnspr4:amd64 (2:4.13.1-0ubuntu0.16.04.1) ...
Setting up libnss3-nssdb (2:3.28.4-0ubuntu0.16.04.3) ...
Setting up libnss3:amd64 (2:3.28.4-0ubuntu0.16.04.3) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...

    After this, google-chrome installed without any problem:
    root@estherpc:~/Downloads# dpkg -i google-chrome-stable_current_amd64\ \(2\).deb
Selecting previously unselected package google-chrome-stable.
(Reading database ... 140222 files and directories currently installed.)
Preparing to unpack google-chrome-stable_current_amd64 (2).deb ...
Unpacking google-chrome-stable (64.0.3282.186-1) ...
Setting up google-chrome-stable (64.0.3282.186-1) ...
update-alternatives: using /usr/bin/google-chrome-stable to provide /usr/bin/x-www-browser (x-www-browser) in auto mode
update-alternatives: using /usr/bin/google-chrome-stable to provide /usr/bin/gnome-www-browser (gnome-www-browser) in auto mode
update-alternatives: using /usr/bin/google-chrome-stable to provide /usr/bin/google-chrome (google-chrome) in auto mode
Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20160415-0ubuntu1) ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for gnome-menus (3.13.3-6ubuntu3) ...
Processing triggers for desktop-file-utils (0.22-1ubuntu5) ...
Processing triggers for mime-support (3.59ubuntu1) ...
Processing triggers for man-db (2.7.5-1) ...

What's the exact difference between force logging and supplemental logging?

    Tom Kyte - Thu, 2018-06-28 11:06
I am confused about the difference between force logging and supplemental logging; could you explain it for me? And are both force logging and supplemental logging necessary for GoldenGate? Thanks!
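Broadly, and as a hedged summary rather than a full answer: force logging makes the database write redo even for operations requested as NOLOGGING, while supplemental logging adds extra column data to the redo so that downstream readers such as GoldenGate can reconstruct complete row changes. GoldenGate requires supplemental logging, and force logging is generally recommended alongside it. Both are database-level settings:

alter database force logging;              -- redo even for nologging operations
alter database add supplemental log data;  -- minimal supplemental logging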
    Categories: DBA Blogs

Compare for matching records in two tables and return true or false against each row

    Tom Kyte - Thu, 2018-06-28 11:06
Dear Tom, I am sorry if this has been dealt with before; I am a complete beginner with Oracle SQL and will appreciate your help. I have two tables, Table A and Table B. Table A has about 60,000 rows of customer data with Customer Id as the main identifie...
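A minimal sketch of one common approach (the table and column names are assumptions, since the question is truncated here): flag each row of Table A according to whether a matching row exists in Table B:

-- Illustrative sketch: names are assumptions
select a.customer_id,
       case
         when exists (select null
                      from   table_b b
                      where  b.customer_id = a.customer_id)
         then 'TRUE'
         else 'FALSE'
       end as match_found
from   table_a a;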
    Categories: DBA Blogs

    New Study: 93% of People Would Trust Orders from a Robot at Work

    Oracle Press Releases - Thu, 2018-06-28 07:00
    Press Release
New Study: 93% of People Would Trust Orders from a Robot at Work
HR leaders and employees want to embrace AI, but organizations are failing to prepare the workforce

    Redwood Shores, Calif.—Jun 28, 2018

People are ready to take instructions from robots at work, according to a new study conducted by Oracle and Future Workplace, a research firm preparing leaders for disruptions in recruiting, development and employee engagement. The study of 1,320 U.S. HR leaders and employees found that while people are ready to embrace Artificial Intelligence (AI) at work and understand that the benefits go far beyond automating manual processes, organizations are not doing enough to help their employees embrace AI - and that this will result in reduced productivity, skillset obsolescence and job loss.

    The study–AI at Work–identified a large gap between the way people are using AI at home and at work. While 70 percent of people are using some form of AI in their personal life, only 6 percent of HR professionals are actively deploying AI and only 24 percent of employees are currently using some form of AI at work. To determine why there is such a gap in AI adoption when people are clearly ready to embrace AI at work (93 percent would trust orders from a robot), the study examined HR leader and employee perceptions of the benefits of AI, the obstacles preventing AI adoption and the business consequences of not embracing AI.    

    Employees and HR Leaders See the Potential of AI

    All respondents agreed that AI will have a positive impact on their organizations and when asked about the biggest benefit of AI, HR leaders and employees both said increased productivity. In the next three years, respondents expect the benefits to include:     

    • Employees believe that AI will improve operational efficiencies (59 percent), enable faster decision making (50 percent), significantly reduce cost (45 percent), enable better customer experiences (40 percent) and improve the employee experience (37 percent).

    • HR leaders believe AI will positively impact learning and development (27 percent), performance management (26 percent), compensation/payroll (18 percent) and recruiting and employee benefits (13 percent).

    Organizations are Not Doing Enough to Prepare the Workforce for AI

    Despite its clear potential to improve business performance, HR leaders and employees believe that organizations are not doing enough to prepare the workforce for AI. Respondents also identified a number of other barriers holding back AI in the enterprise.

• Almost all (90 percent) of HR leaders are concerned they will not be able to adjust to the rapid adoption of AI as part of their job; to make matters worse, they are not currently empowered to address an emerging AI skill gap in their organization.

    • While more than half of employees (51 percent) are concerned they will not be able to adjust to the rapid adoption of AI and 71 percent believe AI skills and knowledge will be important in the next three years, 72 percent of HR leaders noted that their organization does not provide any form of AI training program.

    • On top of the skill gap, HR leaders and employees identified cost (74 percent), failure of technology (69 percent) and security risks (56 percent) as the other major barriers to AI adoption in the enterprise.

    Not Embracing AI Now Will Result in Job Loss, Irrelevance and Loss of Competitive Advantage   

Despite all the talk about people being worried about AI entering the workplace, the study found the opposite to be true, with HR leaders and employees (79 percent of HR leaders; 60 percent of employees) believing a failure to adopt AI will have negative consequences for their own careers, their colleagues and the overall organization.

    • Respondents identified reduced productivity, skillset obsolescence and job loss as the top three consequences of failing to embrace AI in the workforce.

    • From an organizational standpoint, respondents believe embracing AI will have the most positive impact on directors and C-Suite executives. By failing to empower leadership teams with AI, organizations could lose a competitive advantage.

    Methodology

    For this survey, 1,320 HR Leaders and employees were asked about their views regarding AI implementation and usage in the workplace. The study targeted HR Leaders and employees who work across different sectors and in organizations of different sizes. All panelists have passed a double opt-in process and complete on average 300 profiling data points prior to taking part in surveys.

“As this study shows, people are not afraid of AI taking their jobs and instead want to be able to quickly and easily take advantage of the latest innovations,” said Emily He, SVP, Human Capital Management Cloud Business Group, Oracle. “To help employees embrace AI, organizations should partner with their HR leaders to address the skill gap and focus their IT strategy on embedding simple and powerful AI innovations into existing business processes.”

“AI will enable companies to stay competitive, HR leaders to be more strategic and employees to be more productive at work. If organizations want to take advantage of the AI revolution, while closing the skills gap, they will have to invest in AI training programs. If employees want to stay relevant to the current and future job market, they need to embrace AI as part of their job,” said Dan Schawbel, Research Director at Future Workplace and author of Back to Human.

    Contact Info
    Siobhan Lyons
    Oracle
    202-431-9411
    siobhan.lyons@oracle.com
    Dan Schawbel
    Research Director at Future Workplace
    617-840-0073
    dan@futureworkplace.com
    About Oracle

    The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

    About Future Workplace

Future Workplace is an executive development firm dedicated to rethinking and re-imagining the workplace. Future Workplace works with heads of talent management, human resources, corporate learning, and diversity to prepare for the changes impacting recruitment, employee development, and engagement. Future Workplace is host to the 2020 Workplace Network, an Executive Council that includes 50-plus heads of Corporate Learning, Talent, and Human Resources who come together to discuss, debate and share “next” practices impacting the workplace and workforce of the future. For more information, please visit: http://www.futureworkplace.com.

