Feed aggregator

Serving Oracle JET Application from WebLogic

Andrejus Baranovski - Sun, 2016-07-24 03:43
Oracle JET applications can be served from WebLogic as static content (HTML/JavaScript); the JET code is downloaded and executed on the client. Read more about packaging and deploying JET applications in the developer guide - Packaging and Deploying Web Applications.

Here is a demo JET application served from a WebLogic server on port 7101. The JET index.html page is referenced through the application context root, registered on WebLogic during deployment of the static content:


Required steps:

1. Generate JET application from basic template (use sudo on Mac OS):

yo oraclejet RSJETDemoApp --template=basic

2. Navigate to JET application folder and run grunt command to generate minified JET app structure under release folder:

grunt build:release


3. In order to serve JET as static content from WebLogic, we need to add a WEB-INF folder and a web.xml file. This allows it to be deployed as an application to WebLogic. Create a web.xml file with an empty structure:
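A minimal web.xml could look like the sketch below (the Servlet 2.5 schema is shown here as an assumption; adjust the version to match your WebLogic release). The welcome-file entry makes index.html the default page for the context root:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         version="2.5">
  <!-- Serve index.html when the context root is requested -->
  <welcome-file-list>
    <welcome-file>index.html</welcome-file>
  </welcome-file-list>
</web-app>
```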


4. The final structure should look like this, with web.xml and the WEB-INF folder inside release:


5. Go to the WebLogic Console and, in the deployment screen, navigate to the folder location where release is located:


6. As for any other Web application deployment, choose Install this deployment as an application:


7. Provide an application name and make the deployment accessible from the folder location:


8. Deployment should be completed without errors:


The server has no other job than to transfer HTML and JavaScript to the client; the code runs on the client.

Dot Qualify Everything?

Bar Solutions - Sun, 2016-07-24 00:35

There is talk about Dot Qualifying Everything in your PL/SQL code. But what are the pros and cons of this approach?

Let’s say we have a rather simple function which returns a string with alternatively upper and lower case characters. Normally I would write it as follows:

create or replace function wavey(string_in in varchar2) return varchar2 is
  l_returnvalue varchar2(30);
begin
  for indx in 1 .. length(string_in) loop
    l_returnvalue := l_returnvalue || case mod(indx, 2)
                       when 0 then
                        upper(substr(string_in, indx, 1))
                       else
                        lower(substr(string_in, indx, 1))
                     end;
  end loop;
  dbms_output.put_line(l_returnvalue);
  return l_returnvalue;
end;
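The function can then be invoked from plain SQL; for example (assuming the classic SCOTT.EMP demo table, and SERVEROUTPUT enabled in SQL*Plus to see the DBMS_OUTPUT lines):

```sql
set serveroutput on
select wavey(ename) from emp;
```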

The output for this function using the ENAME column of the EMP table is like this:

sMiTh
aLlEn
wArD
jOnEs
mArTiN
bLaKe
cLaRk
sCoTt
kInG
tUrNeR
aDaMs
jAmEs
fOrD
mIlLeR

But what if there was a malicious user who created a package named DBMS_OUTPUT, which included all the programs in the original DBMS_OUTPUT package but with some code added? Let's create a simple package like this:

create or replace package dbms_output is
  procedure put_line(a in varchar2);
end;
/
create or replace package body dbms_output is
  procedure put_line(a in varchar2)
  is
  begin
    sys.dbms_output.put('changed :');
    sys.dbms_output.put_line(a);
  end;
end;
/

Notice I just included the PUT_LINE procedure here, and I am not really doing anything malicious. The output of my function would now be:

changed :sMiTh
changed :aLlEn
changed :wArD
changed :jOnEs
changed :mArTiN
changed :bLaKe
changed :cLaRk
changed :sCoTt
changed :kInG
changed :tUrNeR
changed :aDaMs
changed :jAmEs
changed :fOrD
changed :mIlLeR

Not exactly what I would want. How can I prevent this from happening? The answer is actually really simple: qualify the call to DBMS_OUTPUT with the name of the schema where the package resides:

create or replace function wavey(string_in in varchar2) return varchar2 is
  l_returnvalue varchar2(30);
begin
  for indx in 1 .. length(string_in) loop
    l_returnvalue := l_returnvalue || case mod(indx, 2)
                       when 0 then
                        upper(substr(string_in, indx, 1))
                       else
                        lower(substr(string_in, indx, 1))
                     end;
  end loop;
  sys.dbms_output.put_line(l_returnvalue);
  return l_returnvalue;
end;

The output is back to what we expected:

sMiTh
aLlEn
wArD
jOnEs
mArTiN
bLaKe
cLaRk
sCoTt
kInG
tUrNeR
aDaMs
jAmEs
fOrD
mIlLeR

But should you Dot Qualify everything in your code? Like this?

create or replace function demo.wavey(string_in in sys.standard.varchar2) return sys.standard.varchar2 is
  l_returnvalue sys.standard.varchar2(30);
begin
  <<theloop>>
  for indx in 1 .. length(wavey.string_in) loop
    wavey.l_returnvalue := wavey.l_returnvalue || case mod(theloop.indx, 2)
                             when 0 then
                              sys.standard.upper(standard.substr(wavey.string_in, theloop.indx, 1))
                             else
                              sys.standard.lower(standard.substr(wavey.string_in, theloop.indx, 1))
                           end;
  end loop;
  sys.dbms_output.put_line(wavey.l_returnvalue);
  return wavey.l_returnvalue;
end;

I don't think this adds to the readability of the code, but it does make your code safer against malicious users who want to plant code that will be executed by your programs. Please add your thoughts on this subject.

Troubleshooting ORA-02049: timeout: distributed transaction waiting for lock

Michael Dinh - Sat, 2016-07-23 17:45

Controlling Connections Established by Database Links

Tracing can be done using: ALTER SYSTEM SET EVENTS '2049 trace name ERRORSTACK level 3';
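A sketch of the full cycle, turning the event on, reproducing the error, and turning it off again (the "off" syntax follows the standard event-tracing convention; verify against the MOS note for your version):

```sql
-- Dump an error stack whenever ORA-02049 is raised
alter system set events '2049 trace name errorstack level 3';

-- ... reproduce the ORA-02049 here ...

-- Disable the event afterwards
alter system set events '2049 trace name errorstack off';
```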

Nice note from MOS.

How to Obtain a System State Trace BEFORE the Error ORA-2049 Occurs, While Still Experiencing the Contention (Doc ID 789517.1)

Commit after select from dblink – blocking lock.
Session 1:
++++++++++
04:44:51 ARROW:(DEMO@leo):PRIMARY>
04:45:30 ARROW:(DEMO@leo):PRIMARY> select name from v$pdbs@mdinh_april;

NAME
------------------------------
APRIL

Elapsed: 00:00:00.01
04:45:36 ARROW:(DEMO@leo):PRIMARY> commit;

Commit complete.

Elapsed: 00:00:00.00
04:45:41 ARROW:(DEMO@leo):PRIMARY>
04:45:49 ARROW:(DEMO@leo):PRIMARY> update t set id=-9999;

++++++++++
Session 2:
04:45:18 ARROW:(DEMO@leo):PRIMARY> update t set id=100;

1 row updated.

Elapsed: 00:00:00.00
04:45:21 ARROW:(DEMO@leo):PRIMARY> @/media/sf_working/sql/b.sql

STATE      USERNAME        SID_SERIAL   SQL_ID          SEQ# EVENT                STATUS    MIN MACHINE              OSUSER     PROGRAM
---------- --------------- ------------ ------------- ------ -------------------- -------- ---- -------------------- ---------- ---------------
BLOCKING   DEMO            32,17        74nhxnyztg454    402 SQL*Net message to c ACTIVE      0 arrow.localdomain    oracle     sqlplus@arrow.l
WAITING    *DEMO           41,271       1qfpvr7brd2pq    113 enq: TX - row lock c ACTIVE      8 arrow.localdomain    oracle     sqlplus@arrow.l

Elapsed: 00:00:00.01

STATE      USERNAME        SQL_ID        SQL_TEXT
---------- --------------- ------------- --------------------------------------------------------------------------------
BLOCKING   DEMO            7741d4und71ph with s as ( SELECT decode(level,1,'BLOCKING','WAITING') state, LPAD('*',(level-1
WAITING    *DEMO           1qfpvr7brd2pq update t set id=-9999

Elapsed: 00:00:00.00
04:54:19 ARROW:(DEMO@leo):PRIMARY> select count(*) from t;

  COUNT(*)
----------
         1

Elapsed: 00:00:00.01
05:00:49 ARROW:(DEMO@leo):PRIMARY>
No commit after select from dblink – ORA-02049: timeout: distributed transaction waiting for lock.

The waiting statement automatically fails with ORA-02049 after the time set by the database parameter distributed_lock_timeout (default is 60 seconds).
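distributed_lock_timeout is a static parameter, so changing it requires SCOPE=SPFILE and an instance restart; a sketch (the value 300 is just an illustrative choice):

```sql
-- Check the current value
show parameter distributed_lock_timeout

-- Static parameter: only settable in the spfile, takes effect after restart
alter system set distributed_lock_timeout=300 scope=spfile;
```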

Session 1:
++++++++++
21:58:06 ARROW:(DEMO@leo):PRIMARY> update t set id=100;

1 row updated.

Elapsed: 00:00:00.01
21:58:14 ARROW:(DEMO@leo):PRIMARY>

Session 2:
+++++++++
21:58:45 ARROW:(DEMO@leo):PRIMARY> select name from v$pdbs@mdinh_april;

NAME
------------------------------
APRIL

Elapsed: 00:00:00.00
21:58:51 ARROW:(DEMO@leo):PRIMARY> update t set id=1;
update t set id=1
       *
ERROR at line 1:
ORA-02049: timeout: distributed transaction waiting for lock


Elapsed: 00:01:00.01
21:59:59 ARROW:(DEMO@leo):PRIMARY>

Don’t forget to commit or rollback after using dblinks.

UPDATED: I tried to determine whether there was a way to monitor the distributed transaction, and in this test case I was not able to.

Session 1: 
++++++++++
15:59:32 ARROW:(MDINH@leo):PRIMARY> update demo.t set id=100;

1 row updated.

Elapsed: 00:00:00.01
15:59:43 ARROW:(MDINH@leo):PRIMARY>

Session 2: Distributed Transaction
++++++++++
16:00:43 ARROW:(DEMO@leo):PRIMARY> select name from v$pdbs@mdinh_april;

NAME
------------------------------
APRIL

Elapsed: 00:00:00.00
16:00:47 ARROW:(DEMO@leo):PRIMARY> update t set id=-9999;
update t set id=-9999
       *
ERROR at line 1:
ORA-02049: timeout: distributed transaction waiting for lock


Elapsed: 00:10:00.00
16:10:57 ARROW:(DEMO@leo):PRIMARY> update t set id=-9999;
update t set id=-9999
*
ERROR at line 1:
ORA-02049: timeout: distributed transaction waiting for lock


Elapsed: 00:10:00.01
16:23:03 ARROW:(DEMO@leo):PRIMARY>

Session 3: Monitoring
++++++++++
16:21:23 ARROW:(SYS@leo):PRIMARY> show parameter distributed_lock_timeout

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
distributed_lock_timeout             integer     600
16:21:35 ARROW:(SYS@leo):PRIMARY> @b.sql

STATE      USERNAME        SID_SERIAL   SQL_ID          SEQ# EVENT                STATUS    MIN MACHINE              OSUSER     PROGRAM
---------- --------------- ------------ ------------- ------ -------------------- -------- ---- -------------------- ---------- ---------------
BLOCKING   MDINH           26,3                           32 SQL*Net message from INACTIVE   22 arrow.localdomain    oracle     sqlplus@arrow.l
WAITING    *DEMO           16,23        1qfpvr7brd2pq     70 enq: TX - row lock c ACTIVE      9 arrow.localdomain    oracle     sqlplus@arrow.l

Elapsed: 00:00:00.01

STATE      USERNAME        SQL_ID        SQL_TEXT
---------- --------------- ------------- --------------------------------------------------------------------------------
BLOCKING   MDINH           4cnt32uym27j2 update demo.t set id=100
WAITING    *DEMO           1qfpvr7brd2pq update t set id=-9999

Elapsed: 00:00:00.00
16:21:38 ARROW:(SYS@leo):PRIMARY> @b.sql

STATE      USERNAME        SID_SERIAL   SQL_ID          SEQ# EVENT                STATUS    MIN MACHINE              OSUSER     PROGRAM
---------- --------------- ------------ ------------- ------ -------------------- -------- ---- -------------------- ---------- ---------------
BLOCKING   MDINH           26,3                           32 SQL*Net message from INACTIVE   22 arrow.localdomain    oracle     sqlplus@arrow.l
WAITING    *DEMO           16,23        1qfpvr7brd2pq     70 enq: TX - row lock c ACTIVE      9 arrow.localdomain    oracle     sqlplus@arrow.l

Elapsed: 00:00:00.01

STATE      USERNAME        SQL_ID        SQL_TEXT
---------- --------------- ------------- --------------------------------------------------------------------------------
BLOCKING   MDINH           4cnt32uym27j2 update demo.t set id=100
WAITING    *DEMO           1qfpvr7brd2pq update t set id=-9999

Elapsed: 00:00:00.00
16:22:10 ARROW:(SYS@leo):PRIMARY> @b.sql

STATE      USERNAME        SID_SERIAL   SQL_ID          SEQ# EVENT                STATUS    MIN MACHINE              OSUSER     PROGRAM
---------- --------------- ------------ ------------- ------ -------------------- -------- ---- -------------------- ---------- ---------------
BLOCKING   MDINH           26,3                           32 SQL*Net message from INACTIVE   23 arrow.localdomain    oracle     sqlplus@arrow.l
WAITING    *DEMO           16,23        1qfpvr7brd2pq     70 enq: TX - row lock c ACTIVE     10 arrow.localdomain    oracle     sqlplus@arrow.l

Elapsed: 00:00:00.01

STATE      USERNAME        SQL_ID        SQL_TEXT
---------- --------------- ------------- --------------------------------------------------------------------------------
BLOCKING   MDINH           4cnt32uym27j2 update demo.t set id=100
WAITING    *DEMO           1qfpvr7brd2pq update t set id=-9999

Elapsed: 00:00:00.00
16:22:56 ARROW:(SYS@leo):PRIMARY> @b.sql

no rows selected

Elapsed: 00:00:00.01

no rows selected

Elapsed: 00:00:00.00
16:23:12 ARROW:(SYS@leo):PRIMARY>

CODE : Persistent Variables via PL/SQL Package and DBMS_APPLICATION_INFO

Hemant K Chitale - Sat, 2016-07-23 10:43
I am now introducing some code samples in my blog. I won't restrict myself to SQL but will also include PL/SQL (C? Bourne/Korn Shell? What else?).

This is the first of such samples.


Here I demonstrate using a PL/SQL Package to define persistent variables and then using them with DBMS_APPLICATION_INFO.  This demo consists of only 2 variables used by 1 session, but we could have a number of variables in this Package, invoked by multiple client sessions in the real world.

I first :

SQL> grant create procedure to hr;

Grant succeeded.

SQL>


Then, in the HR schema, I set up a Package to define variables that can persist throughout a session.  Public variables defined in a Package, once invoked, persist throughout the session that invoked them.

create or replace package
define_my_variables
authid definer
is
my_application varchar2(25) := 'Human Resources';
my_base_schema varchar2(25) := 'HR';
end;
/

grant execute on define_my_variables to hemant;
grant select on employees to hemant;


As HEMANT, I then execute :

SQL> connect hemant/hemant  
Connected.
SQL> execute dbms_application_info.set_module(-
> module_name=>HR.define_my_variables.my_application,-
> action_name=>NULL);

PL/SQL procedure successfully completed.

SQL>


As SYSTEM, the DBA can monitor HEMANT

SQL> show user
USER is "SYSTEM"
SQL> select sid, serial#, to_char(logon_time,'DD-MON HH24:MI:SS') Logon_At, module, action
2 from v$session
3 where username = 'HEMANT'
4 order by 1
5 /

SID SERIAL# LOGON_AT MODULE
---------- ---------- ------------------------ ----------------------------------------------------------------
ACTION
----------------------------------------------------------------
1 63450 23-JUL 23:24:03 Human Resources



SQL>


Then, HEMANT intends to run a query on the EMPLOYEES Table.

SQL> execute dbms_application_info.set_action(-
> action_name=>'Query EMP');

PL/SQL procedure successfully completed.

SQL> select count(*) from hr.employees where job_id like '%PROG%'
2 /

COUNT(*)
----------
5

SQL>


SYSTEM can see what he is doing with

SQL> l
1 select sid, serial#, to_char(logon_time,'DD-MON HH24:MI:SS') Logon_At, module, action
2 from v$session
3 where username = 'HEMANT'
4* order by 1
SQL> /

SID SERIAL# LOGON_AT MODULE
---------- ---------- ------------------------ ----------------------------------------------------------------
ACTION
----------------------------------------------------------------
1 63450 23-JUL 23:24:03 Human Resources
Query EMP


SQL>


Returning to the HEMANT login, I can see:

SQL> show user
USER is "HEMANT"
SQL> execute dbms_output.put_line(-
> 'I am running ' || hr.define_my_variables.my_application || ' against ' || hr.define_my_variables.my_base_schema);
I am running Human Resources against HR

PL/SQL procedure successfully completed.

SQL>


So, I have demonstrated :
1.  Using a PLSQL Package Specification (without the need for a Package Body) to define variables that are visible to another session.

2.  The possibility of using this across schemas.  HR could be my "master schema" that sets up all variables, and HEMANT is one of many "client" schemas (or users) that use these variables.

3. The variables defined will persist throughout the client session once they are invoked.

4.  Using DBMS_APPLICATION_INFO to call these variables and setup client information.
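When the unit of work finishes, the client session can clear (or update) the registered information the same way; for example:

```sql
begin
  -- Done with the 'Query EMP' step
  dbms_application_info.set_action(action_name => NULL);
  -- Clear everything before logging out
  dbms_application_info.set_module(module_name => NULL,
                                   action_name => NULL);
end;
/
```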


Note :  SYSTEM can also trace HEMANT's session using DBMS_MONITOR as demonstrated in Trace Files -- 2 : Generating SQL Traces (another session)


Categories: DBA Blogs

Release 12.2.5 AR Enhancement: Apply Receipts Automatically Based on Match Score and Knapsack Method

OracleApps Epicenter - Sat, 2016-07-23 09:30
This is one of the R12.2.5 Enhancements in EBS AR. Automatic Cash Application improves accuracy and on-time application of cash receipts with the introduction of two new methods for automatically applying cash receipts. The first method generates match scores using the Levenshtein distance algorithm and automatically applies the receipt based on a score threshold. The […]
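The enhancement's scoring itself is internal to AR, but the underlying idea can be illustrated with Oracle's built-in UTL_MATCH package, which implements the Levenshtein edit distance (the strings below are just sample payer names):

```sql
select utl_match.edit_distance('ACME Corp', 'ACME Corporation')            as distance,
       utl_match.edit_distance_similarity('ACME Corp', 'ACME Corporation') as similarity_pct
  from dual;
```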
Categories: APPS Blogs

Oracle Data Relationship Management Analytics

OracleApps Epicenter - Sat, 2016-07-23 06:07
The following Patch Set Updates have been released Recently for Oracle Data Relationship Management 11.1.2.4.341. Patch 23236297 - PATCH SET UPDATE: ORACLE DATA RELATIONSHIP MANAGEMENT - 11.1.2.4.341 Patch 23750023 - PATCH SET UPDATE: DATA RELATIONSHIP MANAGEMENT ANALYTICS - 11.1.2.4.341 Oracle Data Relationship Management Analytics is a capstone dashboard and reporting application that draws upon the […]
Categories: APPS Blogs

Redo log block size on ODA X6 all flash

Yann Neuhaus - Fri, 2016-07-22 16:43

On the Oracle Database Appliance, the redo logs are on Flash storage (and with X6 everything is on Flash storage) so you may wonder if we can benefit from 4k redo blocksize. Here are some tests about it on an ODA X6-2M.

I'll compare the same workload (heavy inserts) with 512 byte and 4k byte redo block size. However, by default we can't create a log group with a block size other than 512 bytes:

ORA-01378: The logical block size (4096) of file
/u03/app/oracle/redo/LABDB1/onlinelog/o1_mf_999_%u_.log is not compatible with
the disk sector size (media sector size is 512 and host sector size is 512)

This is because the flash storage is exposed with 512 bytes sector size:

ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED NORMAL N 512 4096 4194304 4894016 4500068 2441888 1023992 0 Y DATA/
MOUNTED NORMAL N 512 4096 4194304 1231176 221172 610468 -199762 0 N RECO/

Then, in order to be able to create new redo log groups with a larger block size, you need to set "_disk_sector_size_override" to TRUE.
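A sketch of the statements involved (group number and size match the listing that follows; note that underscore parameters are unsupported, so check with Oracle Support before using this in production):

```sql
-- Allow a redo block size larger than the reported sector size
alter system set "_disk_sector_size_override" = TRUE scope=both;

-- Create a 4k-blocksize redo log group
alter database add logfile group 13 size 51200M blocksize 4096;
```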

I have 3 log groups with 512 bytes block size, and 3 groups with 4k:


LOGFILE
GROUP 10 '+RECO/LABDB1/ONLINELOG/group_10.264.917867333' SIZE 51200M BLOCKSIZE 512,
GROUP 11 '+RECO/LABDB1/ONLINELOG/group_11.265.917867489' SIZE 51200M BLOCKSIZE 512,
GROUP 12 '+RECO/LABDB1/ONLINELOG/group_12.266.917867645' SIZE 51200M BLOCKSIZE 512,
GROUP 13 '+RECO/LABDB1/ONLINELOG/group_13.267.917867795' SIZE 51200M BLOCKSIZE 4096,
GROUP 14 '+RECO/LABDB1/ONLINELOG/group_14.268.917867913' SIZE 51200M BLOCKSIZE 4096,
GROUP 15 '+RECO/LABDB1/ONLINELOG/group_15.269.917868013' SIZE 51200M BLOCKSIZE 4096

In 12c the database files should be on ACFS and not directly on the diskgroup. We did this on purpose in order to check whether there is any overhead with ACFS, and we have seen exactly the same performance in both. There is something I dislike here, however: redo log files are not multiplexed with multiple log members, but rely on the diskgroup redundancy. I agree with that in ASM, because you are not supposed to manage the files and thus risk deleting one of them. But in ACFS you see only one file, and if you drop it by mistake, both mirrors are lost, along with the latest transactions.

On an insert intensive workload I take AWR snapshots between two log switches:

The switch from blocksize 512 to blocksize 4096 happened at 12:35.

Don't be nervous about those orange 'log file sync' waits: we had to run 10,000 transactions per second in order to get some contention here.

We have to go to the details in order to compare, from an AWR Diff report:

Workload Comparison
~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff
--------------- --------------- ------
DB time: 37.9 37.3 -1.4
CPU time: 19.0 24.4 28.4
Background CPU time: 0.8 1.0 23.2
Redo size (bytes): 61,829,138.5 76,420,493.9 23.6
Logical read (blocks): 1,181,178.7 1,458,915.9 23.5
Block changes: 360,883.0 445,770.8 23.5
Physical read (blocks): 0.4 1.1 164.3
Physical write (blocks): 14,451.2 16,092.4 11.4
Read IO requests: 0.4 1.1 164.3
Write IO requests: 9,829.4 10,352.3 5.3
Read IO (MB): 0.0 0.0 100.0
Write IO (MB): 112.9 125.7 11.4
IM scan rows: 0.0 0.0 0.0
Session Logical Read IM:
User calls: 8,376.0 10,341.2 23.5
Parses (SQL): 5,056.0 6,247.8 23.6
Hard parses (SQL): 0.0 0.0 0.0
SQL Work Area (MB): 3.1 3.2 3.5
Logons: 0.4 0.3 -37.2
Executes (SQL): 225,554.2 278,329.3 23.4
Transactions: 10,911.0 13,486.4 23.6

The second workload, when redo blocksize was 4k, was able to handle 23% more activity.

‘log file sync’ average time is 1.3 milliseconds instead of 2.4:

Top Timed Events First DB/Inst: LABDB1/labdb1 Snaps: 155-156 (Elapsed time: 301.556 sec DB time: 11417.12 sec), Second DB/Inst: LABDB1/labdb1 Snaps: 157-158 (Elapsed time: 301.927 sec DB time: 11269.1 sec)
-> Events with a "-" did not make the Top list in this set of snapshots, but are displayed for comparison purposes
 
1st 2nd
------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------
Event Wait Class Waits Time(s) Avg Time(ms) %DB time Event Wait Class Waits Time(s) Avg Time(ms) %DB time
------------------------------ ------------- ------------ ------------ ------------- ----------- ------------------------------ ------------- ------------ ------------ ------------- -----------
CPU time N/A 5,722.8 N/A 50.1 CPU time N/A 7,358.4 N/A 65.3
log file sync Commit 2,288,655 5,412.1 2.4 47.4 log file sync Commit 2,808,036 3,535.5 1.3 31.4
target log write size Other 363,206 283.7 0.8 2.5 target log write size Other 644,287 278.2 0.4 2.5
log file parallel write System I/O 368,063 225.1 0.6 2.0 enq: TX - row lock contention Application 171,485 170.2 1.0 1.5
db file parallel write System I/O 12,399 160.2 12.9 1.4 db file parallel write System I/O 12,131 150.4 12.4 1.3
enq: TX - row lock contention Application 144,822 133.2 0.9 1.2 log file parallel write System I/O 649,501 148.1 0.2 1.3
library cache: mutex X Concurrency 130,800 120.8 0.9 1.1 library cache: mutex X Concurrency 86,632 128.1 1.5 1.1
log file sequential read System I/O 7,433 27.5 3.7 0.2 LGWR wait for redo copy Other 478,350 45.1 0.1 0.4
LGWR wait for redo copy Other 228,643 20.8 0.1 0.2 log file sequential read System I/O 6,577 21.7 3.3 0.2
buffer busy waits Concurrency 261,348 15.8 0.1 0.1 buffer busy waits Concurrency 295,880 20.1 0.1 0.2
--------------------------------------------------------------------------------------------------------------------

We see that this difference comes from lower latency in ‘log file parallel write':

Wait Events First DB/Inst: LABDB1/labdb1 Snaps: 155-156 (Elapsed time: 301.556 sec DB time: 11417.12 sec), Second DB/Inst: LABDB1/labdb1 Snaps: 157-158 (Elapsed time: 301.927 sec DB time: 11269.1 sec)
-> Ordered by absolute value of 'Diff' column of '% of DB time' descending (idle events last)
 
# Waits/sec (Elapsed Time) Total Wait Time (sec) Avg Wait Time (ms)
---------------------------------------- ---------------------------------------- -------------------------------------------
Event Wait Class 1st 2nd %Diff 1st 2nd %Diff 1st 2nd %Diff
------------------------------ ------------- -------------- -------------- ---------- -------------- -------------- ---------- --------------- --------------- -----------
log file sync Commit 7,589.5 9,300.4 22.5 5,412.1 3,535.5 -34.7 2.36 1.26 -46.61
log file parallel write System I/O 1,220.5 2,151.2 76.2 225.1 148.1 -34.2 0.61 0.23 -62.30
enq: TX - row lock contention Application 480.2 568.0 18.3 133.2 170.2 27.8 0.92 0.99 7.61
LGWR wait for redo copy Other 758.2 1,584.3 109.0 20.8 45.1 117.1 0.09 0.09 0.00
library cache: mutex X Concurrency 433.8 286.9 -33.8 120.8 128.1 6.0 0.92 1.48 60.87
db file parallel write System I/O 41.1 40.2 -2.3 160.2 150.4 -6.2 12.92 12.40 -4.02
cursor: pin S Concurrency 29.7 46.0 55.0 9.9 16.6 67.0 1.11 1.19 7.21
cursor: mutex X Concurrency 7.0 10.8 54.2 13.6 19.7 45.0 6.39 6.01 -5.95
latch: In memory undo latch Concurrency 585.3 749.0 28.0 10.8 16.3 50.8 0.06 0.07 16.67

In order to go into details, here is the wait event histogram for 512 bytes redo blocksize:

% of Waits
-----------------------------------------------
Total
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms 1s
------------------------- ------ ----- ----- ----- ----- ----- ----- ----- -----
LGWR all worker groups 41 48.8 12.2 14.6 14.6 4.9 2.4 2.4
LGWR any worker group 259 6.2 5.4 8.9 13.9 18.1 18.1 29.3
LGWR wait for redo copy 228.9K 99.1 .9 .0
LGWR worker group orderin 442 44.6 9.7 4.5 5.0 9.3 10.6 16.3
log file parallel write 368.5K 85.3 7.5 4.7 1.4 .9 .2 .0
log file sequential read 7358 6.5 13.1 59.0 17.2 3.0 1.1 .2
log file sync 2.3M 48.9 23.1 17.0 5.7 2.7 2.3 .3

and for 4096 bytes blocksize:

% of Waits
-----------------------------------------------
Total
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms 1s
------------------------- ------ ----- ----- ----- ----- ----- ----- ----- -----
LGWR all worker groups 20 45.0 5.0 15.0 10.0 5.0 20.0
LGWR any worker group 235 7.2 3.0 5.5 7.7 14.5 25.1 37.0
LGWR wait for redo copy 478.7K 98.9 1.0 .1 .0
LGWR worker group orderin 517 51.3 9.7 2.3 2.9 7.2 11.6 15.1
log file parallel write 649.9K 97.7 1.3 .3 .3 .4 .0 .0
log file sequential read 6464 5.7 8.2 73.5 11.0 1.2 .3 .1
log file sync 2.8M 78.2 15.6 2.3 .8 1.6 1.2 .

A few milliseconds at commit are not perceived by the end user, except if the application's design is so bad that hundreds of commits are done for each user interaction. Even if both are really fast, the log writer was above 1 ms for only 1% of its writes with the 4k blocksize, vs. 15% with the default blocksize.

This faster latency is measured by I/O statistics as well:

Reads: Reqs Data Writes: Reqs Data Waits: Avg
Function Name Data per sec per sec Data per sec per sec Count Tm(ms)
--------------- ------- ------- ------- ------- ------- ------- ------- -------

BLOCKSIZE 512:
LGWR 0M 0.0 0M 18.1G 2420.4 61.528M 368.9K 0.6
BLOCKSIZE 4096:
LGWR 0M 0.0 0M 24.1G 4263.5 81.689M 649.5K 0.2

To be comprehensive, here are the statistics related with redo, thanks to those many statistics available in 12c:

Value per Second (Elapsed Time)
------------------------------------------- ---------------------------------------
Statistic 1st 2nd %Diff 1st 2nd %Diff
------------------------------ ---------------- ---------------- --------- -------------- -------------- ---------
 
redo KB read 16,319,609 15,783,576 -3.3 54,118.0 52,276.1 -3.4
redo blocks checksummed by FG 26,587,090 1,000,267 -96.2 88,166.3 3,312.9 -96.2
redo blocks written 37,974,499 6,318,372 -83.4 125,928.5 20,926.8 -83.4
redo blocks written (group 0) 37,256,502 6,257,861 -83.2 123,547.5 20,726.4 -83.2
redo blocks written (group 1) 717,997 60,511 -91.6 2,381.0 200.4 -91.6
redo entries 24,023,503 30,214,386 25.8 79,665.1 100,071.8 25.6
redo size 18,644,947,688 23,073,410,468 23.8 61,829,138.5 76,420,493.9 23.6
redo synch long waits 343 4,890 1,325.7 1.1 16.2 1,321.1
redo synch time 541,804 354,625 -34.5 1,796.7 1,174.5 -34.6
redo synch time (usec) 5,418,056,862 3,546,209,390 -34.5 17,967,000.7 11,745,254.3 -34.6
redo synch time overhead (usec) 145,664,759 197,925,281 35.9 483,043.8 655,540.2 35.7
redo synch time overhead count ( 2ms) 2,295,847 2,821,726 22.9 7,613.3 9,345.7 22.8
redo synch time overhead count ( 8ms) 443 3,704 736.1 1.5 12.3 734.7
redo synch time overhead count ( 32ms) 2 9 350.0 0.0 0.0 200.0
redo synch writes 2,305,502 2,849,645 23.6 7,645.4 9,438.2 23.5
redo wastage 179,073,264 2,703,864,280 1,409.9 593,830.9 8,955,357.7 1,408.1
redo write finish time 291,094,266 277,884,591 -4.5 965,307.5 920,370.1 -4.7
redo write gather time 63,237,013 125,066,420 97.8 209,702.4 414,227.3 97.5
redo write info find 2,296,292 2,825,439 23.0 7,614.8 9,358.0 22.9
redo write schedule time 63,679,682 125,819,850 97.6 211,170.3 416,722.8 97.3
redo write size count ( 4KB) 12,220 0 40.5 0
redo write size count ( 8KB) 26,420 2,246 -91.5 87.6 7.4 -91.5
redo write size count ( 16KB) 69,674 94,557 35.7 231.0 313.2 35.5
redo write size count ( 32KB) 108,676 268,794 147.3 360.4 890.3 147.0
redo write size count ( 128KB) 106,651 253,669 137.8 353.7 840.2 137.6
redo write size count ( 256KB) 37,332 28,076 -24.8 123.8 93.0 -24.9
redo write size count ( 512KB) 7,328 2,382 -67.5 24.3 7.9 -67.5
redo write size count (1024KB) 28 28 0.0 0.1 0.1 0.0
redo write time 29,126 27,817 -4.5 96.6 92.1 -4.6
redo write time (usec) 291,261,420 278,162,437 -4.5 965,861.8 921,290.4 -4.6
redo write total time 306,213,383 298,786,696 -2.4 1,015,444.5 989,599.1 -2.5
redo write worker delay (usec) 38,246,633 73,452,748 92.1 126,830.9 243,279.8 91.8
redo writes 368,330 649,751 76.4 1,221.4 2,152.0 76.2
redo writes (group 0) 366,492 648,430 76.9 1,215.3 2,147.6 76.7
redo writes (group 1) 1,838 1,321 -28.1 6.1 4.4 -28.2
redo writes adaptive all 368,330 649,752 76.4 1,221.4 2,152.0 76.2
redo writes adaptive worker 368,330 649,752 76.4 1,221.4 2,152.0 76.2

I've added a few things that were masked by the AWR Diff Report. The count of writes smaller than 4KB is zero in the second snapshot because that's the blocksize.
It's interesting to see that redo size is higher; this is because you write a full 4k block even when you have less to write. This is measured by 'redo wastage'.

So, larger blocksize lowers the latency but increases the volume. Here, where NVMe optimizes the bandwidth to Flash storage, it may not be a problem.

So what?

You have to keep in mind that this workload, with lot of small transactions and no other waits, is a special workload for this test. If you are not in this extreme case, then the default block size is probably sufficient for latency and reduces the redo size. However, if log file sync latency is your bottleneck, you may consider increasing the blocksize.
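If you want to check what this trade-off looks like on your own system, the relevant cumulative statistics are in V$SYSSTAT; for example:

```sql
select name, value
  from v$sysstat
 where name in ('redo size', 'redo wastage',
                'redo writes', 'redo blocks written');
```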

Thanks to

Arrow Electronics, Oracle Authorized Solution Center, Switzerland.
Those tests were done on an ODA X6-2M at Arrow OASC. Arrow has a wide range of Engineered Systems available for Oracle partners, like dbi services, and for customers to do Proof of Concepts, demos, learning, benchmarks, etc.


 

Cet article Redo log block size on ODA X6 all flash est apparu en premier sur Blog dbi services.

User defined Year-Month data type

Tom Kyte - Fri, 2016-07-22 14:46
Hi Answers Team. I keep seeing my maintenance / design / development teams abusing dates everywhere. One example is the use of YYYYMM as a way to extract a month from some piece of data what starts out as a date. 1. database design will cre...
Categories: DBA Blogs

Restricting datafile addition to a filesystem (ASM)

Tom Kyte - Fri, 2016-07-22 14:46
Hi Tom The normal method of adding datafiles to a tablespace is to login to the database as "sqlplus / as sysdba" and then executing the "alter tablespace add datafile.." command. Is there a way to prevent / prompt the DBAs from accidentally adding ...
Categories: DBA Blogs

RMAN and Archivelog

Tom Kyte - Fri, 2016-07-22 14:46
Hi, Maybe this question was already answered somewhere but there so many question that I can not find the solution on this site. It is rather a simple one (I hope not a stupid one). Strictly speaking, when I make a "backup database" with Rman, do...
Categories: DBA Blogs

executing stored procedures after context change

Tom Kyte - Fri, 2016-07-22 14:46
Hi, I need to execute a stored procedure that exists in many schema, on demand, in any given schema. I am trying to change the current_schema then execute the procedure (GS_LOAD_FDB_PKG is the package in each of our schemas and gst_disease_dxid_u...
Categories: DBA Blogs

Inserting a decimal which has no decimal into the table with a decimal point

Tom Kyte - Fri, 2016-07-22 14:46
Hi Tom, I want to insert a decimal number which has no decimal point into a table with a decimal point. Is there a predefined function or a way to do this? Ex: (46926831654) should be inserted as (469268.31654) I don't want to use the string...
Categories: DBA Blogs

Determining current database name

Tom Kyte - Fri, 2016-07-22 14:46
Which (if either) is preferable for determining the current database name: ORA_DATABASE_NAME or sys_context('USERENV', 'DB_NAME') ? Thanx, D
Categories: DBA Blogs

Visibility on package objects

Tom Kyte - Fri, 2016-07-22 14:46
Hello, I have package that create objects and variables on the package header spec, such as: CREATE OR REPLACE PACKAGE Test_Package AS bVariable BOOLEAN := FALSE; END Test_Package; Am I correct in assuming that Test_Package.bVari...
Categories: DBA Blogs

It's all about DRG (Data Relationship Governance)

OracleApps Epicenter - Fri, 2016-07-22 10:47
You know, Oracle Data Relationship Management (DRM) is a web-based, user-friendly platform for users to manage enterprise dimensions. The application provides many features such as SOX-compliant auditing, powerful and highly customizable business rule enforcement, versioning capabilities for storing historical views of dimensions, and multiple integration formats. Within DRM, DRG adds configurable, collaborative workflow to […]
Categories: APPS Blogs

Data governance

OracleApps Epicenter - Fri, 2016-07-22 10:25
Data governance is not a technology or a system but a set of processes and the organizational structure that governs the usage of all data assets of an enterprise. Data governance represents the convergence of data quality, data management, data policies, business process management, and risk management surrounding the handling of information as an asset within […]
Categories: APPS Blogs

APEX IDE for Shakespeare Programming Language (SPL)

Tony Andrews - Fri, 2016-07-22 08:04
Recently I came across an esoteric programming language called The Shakespeare Programming Language (SPL) and became rather fascinated by it. It's big, and it's clever, but it's not terribly useful or practical. But this year is the 400th anniversary of Shakespeare's death, which adds some relevance I suppose. Here is an example of an SPL program taken from the SPL docs. All it does is

VirtualBox 5.1.2

Tim Hall - Fri, 2016-07-22 07:31

You know a major new release is going to be rapidly followed by a bunch of fixes, so about 9 days after the release of VirtualBox 5.1 we now have VirtualBox 5.1.2.

I have got it running on my Windows 7 desktop at work and it seems fine. I'll try OSX El Crapitan and Oracle Linux 6 hosts when I get home.

Cheers

Tim…

Update: Works fine on Mac (El Craptian) and Oracle Linux 6 hosts too.

VirtualBox 5.1.2 was first posted on July 22, 2016 at 1:31 pm.

Dynamic Java in PeopleCode

Jim Marion - Fri, 2016-07-22 04:06

The PeopleCode language has a lot of features and functions, but sometimes it seems there are tasks we need to accomplish that are just out of reach of the PeopleCode language. It is at these times that I reach for Java. I have written a lot about Java, so I'm sure many of you already know how to mix Java with PeopleCode. While certainly a rational solution, one of the pain points of a Java solution is managing custom Java on the app and Process Scheduler servers. Each time you update your compiled class (or jar) files, you have to restart the server. That might be OK in a stable production environment, where you don't intend to change code often, but in development it is a real pain! Likewise, maintaining custom Java class and jar files through upgrades can be a little sketchy. Specifically, if you redeploy PeopleTools or rewrite psconfig, then it is possible you may miss some of your custom Java code. PeopleBooks tells us how to set up psconfig for custom Java classes, but again, that is just one more thing to manage through upgrades.

Now, imagine being able to update your custom Java code with a Data Mover script. Further, imagine being able to run custom Java without making any changes to your application server. Imagine what it would be like to run Java without having to beg (or bribe) your admin for a "no customization" exception. It is possible today.

The answer: use JavaScript to interface between PeopleCode and the delivered Java Runtime Environment. Through Java's embedded Mozilla Rhino JavaScript engine, we have full, dynamic access to the JRE. When and how would you use this? Let's review some examples.

Custom HTTP Connections

For various reasons, some customers choose not to implement Integration Broker. These customers find themselves requiring integration, but without IB's networking features. An alternative to %IntBroker.ConnectorRequestURL is to use Java's HttpURLConnection. I strongly discourage this approach, but the question arises. The JRE is there, well integrated with PeopleCode, and ready for use.

From PeopleCode, it is possible to create a Java URLConnection using CreateJavaObject("java.net.URL", "http...").openConnection(). A problem arises when we try to invoke methods of an HttpURLConnection, the real return value of URL.openConnection. Unfortunately, PeopleCode doesn't see it that way, which leads down the reflection path (we don't want to go there). This is where JavaScript can help us: JavaScript doesn't mind that URL.openConnection returned an HttpURLConnection even though it said it would just return a URLConnection. Here is an example:


var result = (function() {
  // declare pointers to Java classes to make it easier to invoke their
  // methods by name later
  var URL = Packages.java.net.URL;
  var InputStreamReader = Packages.java.io.InputStreamReader;
  var BufferedReader = Packages.java.io.BufferedReader;
  var StringBuilder = Packages.java.lang.StringBuilder;

  var serverAddress = new URL(
    "http://hcm.oraclecloud.com/hcmCoreApi/atomservlet/employee/newhire"
  );

  // Creates an HttpURLConnection, but returns URLConnection. If I was using
  // PeopleCode, PeopleCode would see this as a URLConnection. To invoke
  // HttpURLConnection methods, I would need to resort to reflection. This is
  // the power of JavaScript in this scenario...
  var connection = serverAddress.openConnection();

  // ... for example, setRequestMethod is NOT a method of URLConnection. It is
  // a method of HttpURLConnection. PeopleCode would throw an error, but
  // JavaScript recognizes this is an HttpURLConnection and allows the method
  // invocation
  connection.setRequestMethod("GET");

  // Timeout in milliseconds
  connection.setReadTimeout(10*1000);

  // Finally, make the connection
  connection.connect();

  // Read the response
  var reader = new BufferedReader(
      new InputStreamReader(connection.getInputStream()));
  var sb = new StringBuilder();
  var line;

  while ((line = reader.readLine()) !== null) {
    sb.append(line + '\n');
  }

  // Return the response to PeopleCode. In this case, the response is an XML
  // string
  return sb;
}());
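One detail of the listing worth calling out: the whole script is an immediately invoked function expression (IIFE), and whatever that function returns is what the script engine hands back to PeopleCode as `result`. That part is plain JavaScript and behaves the same in any engine. Here is a minimal sketch of the pattern, with the Java-specific pieces (the BufferedReader loop and StringBuilder) swapped for plain JavaScript purely for illustration:

```javascript
// Minimal illustration of the IIFE pattern used throughout these listings:
// the value returned by the function is what the calling PeopleCode receives.
var result = (function () {
  var lines = ["line 1", "line 2"]; // stand-in for the BufferedReader loop
  var sb = "";                      // stand-in for java.lang.StringBuilder

  lines.forEach(function (line) {
    sb += line + "\n";
  });

  // this return value becomes `result` for the caller
  return sb;
}());
```

Keeping all working variables inside the IIFE also means nothing leaks into the engine's global scope between evaluations.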

Excel Spreadsheets

PeopleTools 8.55+ has a PeopleCode API for Excel, which means this solution is now irrelevant. I'm listing it because not everyone is up to PeopleTools 8.55 (yet). If you use this idea to build a solution for 8.54 and later upgrade, Oracle recommends that you switch to the PeopleCode Excel API. The solution will still work with 8.55+, but just isn't recommended post 8.54.

This solution uses the Apache POI library that is distributed with PeopleTools 8.54+ to read and write binary Microsoft Excel files. As with the networking solution above, it is possible to use POI directly from PeopleCode, but a little difficult because POI uses method overloading in a manner that PeopleCode can't resolve. Furthermore, POI uses methods that return superclasses and interfaces that PeopleCode can't cast to subclasses, leading to awful reflection code. Here is an example that reads a spreadsheet row by row, inserting each row into a staging table for later processing.


// endsWith polyfill
if (!String.prototype.endsWith) {
  String.prototype.endsWith = function(searchString, position) {
    var subjectString = this.toString();
    if (typeof position !== 'number' || !isFinite(position) ||
        Math.floor(position) !== position ||
        position > subjectString.length) {
      position = subjectString.length;
    }
    position -= searchString.length;
    var lastIndex = subjectString.indexOf(searchString, position);
    return lastIndex !== -1 && lastIndex === position;
  };
}

// open a workbook, iterate over rows/cells, and then insert them into a
// staging table
var result = (function() {
  // declare pointers to Java classes to make it easier to invoke their
  // methods by name
  var FileInputStream = Packages.java.io.FileInputStream;

  var HSSFWorkbook = Packages.org.apache.poi.hssf.usermodel.HSSFWorkbook;
  var Workbook = Packages.org.apache.poi.ss.usermodel.Workbook;
  var XSSFWorkbook = Packages.org.apache.poi.xssf.usermodel.XSSFWorkbook;

  // declare a PeopleCode function
  var SQLExec = Packages.PeopleSoft.PeopleCode.Func.SQLExec;

  // internal "helper" that generates a GUID used to identify the rows
  // inserted by this run
  var guid = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g,
    function(c) {
      var r = Math.random()*16|0, v = c == 'x' ? r : (r&0x3|0x8);
      return v.toString(16);
    }
  );

  // open a binary Microsoft Excel file; fileName is expected to be bound
  // into the script's context by the calling PeopleCode
  var fis = new FileInputStream(fileName);

  var workbook;

  if(fileName.toLowerCase().endsWith("xlsx")) {
    workbook = new XSSFWorkbook(fis);
  } else if(fileName.toLowerCase().endsWith("xls")) {
    workbook = new HSSFWorkbook(fis);
  }

  var sheet = workbook.getSheetAt(0);
  var rowIterator = sheet.iterator();
  var roleName,
      descr,
      row;

  // iterate over each row, inserting those rows into a staging table
  while (rowIterator.hasNext()) {
    row = rowIterator.next();
    roleName = row.getCell(0).getStringCellValue();
    descr = row.getCell(1).getStringCellValue();

    // TODO: turn this into a stored SQL definition, not hard coded SQL
    SQLExec("INSERT INTO PS_JM_XLROLE_STAGE VALUES(:1, :2, :3, SYSTIMESTAMP)",
      // notice that the SQLExec parameters are wrapped in an array
      [guid, roleName, descr]
    );
  }

  // return the unique identifier that can later be used to select the rows
  // inserted by this process
  return guid;

}());

Here is an example of writing/creating a Microsoft Excel spreadsheet:


var result = (function() {
  // import statements
  var XSSFWorkbook = Packages.org.apache.poi.xssf.usermodel.XSSFWorkbook;
  var FileOutputStream = Packages.java.io.FileOutputStream;

  // variable declarations
  var workbook = new XSSFWorkbook();
  var sheet = workbook.createSheet("Countries");
  var fileName = "c:/temp/countries.xlsx";

  var row = sheet.createRow(0);
  var cell = row.createCell(0);

  cell.setCellValue("United States of America");
  cell = row.createCell(1);
  cell.setCellValue("USA");

  row = sheet.createRow(1);
  cell = row.createCell(0);
  cell.setCellValue("India");
  cell = row.createCell(1);
  cell.setCellValue("IND");

  // note: each country needs its own row index; reusing createRow(1) here
  // would replace the India row
  row = sheet.createRow(2);
  cell = row.createCell(0);
  cell.setCellValue("Denmark");
  cell = row.createCell(1);
  cell.setCellValue("DNK");

  var fos = new FileOutputStream(fileName);
  workbook.write(fos);
  fos.close();

  return "Created workbook " + fileName;

}());
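The `guid` helper embedded in the staging-table listing above is ordinary JavaScript with no Java interop, so it can be pulled out and exercised on its own. Here it is as a standalone function, assuming only a standard `Math.random`:

```javascript
// Generates a random, version-4-style GUID, the same technique as the
// inline helper used in the staging-table listing above.
function makeGuid() {
  return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
    var r = Math.random() * 16 | 0;          // random hex digit 0-15
    var v = c === 'x' ? r : (r & 0x3 | 0x8); // 'y' slots become 8, 9, a, or b
    return v.toString(16);
  });
}
```

Each run gets its own identifier, so the rows a given process staged can be selected (or cleaned up) later without colliding with rows staged by other runs.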

JSON Parsing

If your goal is to convert a JSON string into SQL insert statements, then this is a very painless alternative:


/* Sample JSON data that will be selected from a record definition
[
  {"emplid": "KU0001", "oprid": "HCRUSA_KU0001"},
  {"emplid": "KU0002", "oprid": "HCRUSA_KU0002"},
  {"emplid": "KU0003", "oprid": "HCRUSA_KU0003"}
]; */

var result = (function() {
  var CreateRecord = Packages.PeopleSoft.PeopleCode.Func.CreateRecord;
  var Name = Packages.PeopleSoft.PeopleCode.Name;
  var SQLExec = Packages.PeopleSoft.PeopleCode.Func.SQLExec;

  // example of how to reference a PeopleCode record definition from
  // JavaScript. Later we will select JSON_DATA from this table
  var rec = CreateRecord(new Name('RECORD', 'NAA_SCRIPT_TBL'));

  var count = 0;
  var json_string;
  var json;

  var guid = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
    var r = Math.random()*16|0, v = c == 'x' ? r : (r&0x3|0x8);
    return v.toString(16);
  });

  // Select the JSON string from a table. Normally this would come from a
  // variable, a service, etc. Here it makes a great example of how to select
  // rows from a record definition
  rec.GetField(new Name('FIELD', 'PM_SCRIPT_NAME')).setValue('JSON_TEST_DATA');
  rec.SelectByKey();
  json_string = rec.GetField(new Name('FIELD', 'HTMLAREA')).getValue();

  // now convert the retrieved string into an object.
  json = JSON.parse(json_string);

  // Iterate over the json data and...
  json.forEach(function(item, idx) {
    // ... insert into a staging table
    SQLExec("INSERT INTO PS_NAA_TEST_TBL VALUES(:1, :2, :3, SYSTIMESTAMP)",
      // notice the array wrapper around SQLExec bind values
      [guid, item.emplid, item.oprid]
    );
    count += 1;
  });

  return "Inserted " + count + " rows";

}());
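Note that everything JSON-related in this listing is engine-neutral; only the record access and SQLExec calls touch PeopleSoft. Stripped of the Java interop, the parse-and-iterate core reduces to the sketch below, which uses the same sample data as the comment above and collects the would-be bind values in an array instead of calling SQLExec:

```javascript
// Parse a JSON string and walk the array, collecting the values that the
// listing above would pass to SQLExec as bind parameters.
var json_string = '[' +
  '{"emplid": "KU0001", "oprid": "HCRUSA_KU0001"},' +
  '{"emplid": "KU0002", "oprid": "HCRUSA_KU0002"},' +
  '{"emplid": "KU0003", "oprid": "HCRUSA_KU0003"}' +
']';

var binds = [];
JSON.parse(json_string).forEach(function (item) {
  binds.push([item.emplid, item.oprid]); // would be SQLExec bind values
});
```

Because `JSON.parse` is built into the Rhino engine, no parsing library needs to be deployed to the server.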

I could go on and on with examples of creating zip files, encrypting information, base64 encoding binary data, manipulating graphics with Java 2D, etc., but I think you get the idea.

Links for 2016-07-21 [del.icio.us]

Categories: DBA Blogs
