Feed aggregator

Export into pre-existing schema

Tom Kyte - Sun, 2016-07-24 21:46
Hi tom, I have a schema export with expdp with this - expdp user/password@test directory=dump_dir dumpfile=user.dmp logfile=user.log schemas=user. The user has 100 tables, 50 procedures and triggers; now I want to import this into a DB. How to import i...
Categories: DBA Blogs

does partition volume impact ad-hoc queries

Tom Kyte - Sun, 2016-07-24 21:46
Hi Tom, We are upgrading from Oracle 8i, which is running on a standalone Unix server, to 11g Release 2 on Exadata. We are migrating a Data Warehouse database with large partitioned tables (1000 million and more rows) partitioned monthly,quarterly a...
Categories: DBA Blogs

schema comparison

Tom Kyte - Sun, 2016-07-24 21:46
I learnt that some of the production objects were missing under a particular schema. At that point I decided to change the password of the schema in production so that the comparison could be done with the Stage environment and the extent of damage to th...
Categories: DBA Blogs

Why is there so much logging for "insert /*+ append */ into tab select"?

Tom Kyte - Sun, 2016-07-24 21:46
Hi Tom, Q1 ~~~~ I was executing the following sql statement with the table in nologging mode. But it seems the changes are logged (not just the dictionary changes). Please have a look at the following. Case1: ======== SQL> insert /*+...
Categories: DBA Blogs

Video Tutorial: XPLAN_ASH Active Session History - Part 9

Randolf Geist - Sun, 2016-07-24 09:21
The next part of the video tutorial explaining the XPLAN_ASH Active Session History functionality continuing the actual walk-through of the script output.

More parts to follow.

Serving Oracle JET Application from WebLogic

Andrejus Baranovski - Sun, 2016-07-24 03:43
An Oracle JET application can be served from WebLogic as static content (HTML/JavaScript); the JET code is downloaded and executed on the client. Read more about packaging and deploying JET applications in the developer guide - Packaging and Deploying Web Applications.

Here is a demo JET application served from a WebLogic server on port 7101. The JET index.html page is referenced through the application context root, registered on WebLogic during deployment of the static content:


Required steps:

1. Generate a JET application from the basic template (use sudo on Mac OS):

yo oraclejet RSJETDemoApp --template=basic

2. Navigate to the JET application folder and run the grunt command to generate a minified JET app structure under the release folder:

grunt build:release


3. In order to serve JET as static content from WebLogic, we need to add a WEB-INF folder and a web.xml file. This allows it to be deployed as an application to WebLogic. Create a web.xml file with an empty structure:
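A minimal, empty deployment descriptor is enough here; something along these lines (the namespace and version are an assumption, adjust them to the Servlet specification level your WebLogic release supports):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- empty web.xml: only needed so WebLogic deploys the folder as a web application -->
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         version="3.1">
</web-app>
```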


4. The final structure should look like this, with web.xml and the WEB-INF folder inside release:


5. Go to the WebLogic Console and, in the deployment screen, navigate to the folder location where release is located:


6. As with any other web application deployment, choose Install this deployment as an application:


7. Provide an application name and make the deployment accessible from the folder location:


8. The deployment should complete without errors:


The server's only job is to transfer the HTML and JavaScript to the client; the code runs on the client.

Dot Qualify Everything?

Bar Solutions - Sun, 2016-07-24 00:35

There is talk about dot-qualifying everything in your PL/SQL code. But what are the pros and cons of this approach?

Let’s say we have a rather simple function which returns a string with alternating upper- and lower-case characters. Normally I would write it as follows:

create or replace function wavey(string_in in varchar2) return varchar2 is
  l_returnvalue varchar2(30);
begin
  for indx in 1 .. length(string_in) loop
    l_returnvalue := l_returnvalue || case mod(indx, 2)
                       when 0 then
                        upper(substr(string_in, indx, 1))
                       else
                        lower(substr(string_in, indx, 1))
                     end;
  end loop;
  dbms_output.put_line(l_returnvalue);
  return l_returnvalue;
end;

The output for this function using the ENAME column of the EMP table is like this:

sMiTh
aLlEn
wArD
jOnEs
mArTiN
bLaKe
cLaRk
sCoTt
kInG
tUrNeR
aDaMs
jAmEs
fOrD
mIlLeR

But what if there were a malicious user who created a package named DBMS_OUTPUT which included all the programs of the original DBMS_OUTPUT package, but with some code added? Let’s create a simple package like this:

create or replace package dbms_output is
  procedure put_line(a in varchar2);
end;
create or replace package body dbms_output is
  procedure put_line(a in varchar2)
  is
  begin
    sys.dbms_output.put('changed :');
    sys.dbms_output.put_line(a);
  end;
end;

Notice I just included the PUT_LINE procedure here, and I am not really doing anything malicious. The output of my function would now be:

changed :sMiTh
changed :aLlEn
changed :wArD
changed :jOnEs
changed :mArTiN
changed :bLaKe
changed :cLaRk
changed :sCoTt
changed :kInG
changed :tUrNeR
changed :aDaMs
changed :jAmEs
changed :fOrD
changed :mIlLeR

Not exactly what I would want. How can I prevent this from happening? The answer is actually really simple: qualify the call to DBMS_OUTPUT with the name of the schema where the package resides:

create or replace function wavey(string_in in varchar2) return varchar2 is
  l_returnvalue varchar2(30);
begin
  for indx in 1 .. length(string_in) loop
    l_returnvalue := l_returnvalue || case mod(indx, 2)
                       when 0 then
                        upper(substr(string_in, indx, 1))
                       else
                        lower(substr(string_in, indx, 1))
                     end;
  end loop;
  sys.dbms_output.put_line(l_returnvalue);
  return l_returnvalue;
end;

The output is back to what we expected:

sMiTh
aLlEn
wArD
jOnEs
mArTiN
bLaKe
cLaRk
sCoTt
kInG
tUrNeR
aDaMs
jAmEs
fOrD
mIlLeR

But should you Dot Qualify everything in your code? Like this?

create or replace function demo.wavey(string_in in sys.standard.varchar2) return sys.standard.varchar2 is
  l_returnvalue sys.standard.varchar2(30);
begin
  <<theloop>>
  for indx in 1 .. length(wavey.string_in) loop
    wavey.l_returnvalue := wavey.l_returnvalue || case mod(theloop.indx, 2)
                             when 0 then
                              sys.standard.upper(standard.substr(wavey.string_in, theloop.indx, 1))
                             else
                              sys.standard.lower(standard.substr(wavey.string_in, theloop.indx, 1))
                           end;
  end loop;
  sys.dbms_output.put_line(wavey.l_returnvalue);
  return wavey.l_returnvalue;
end;

I don’t think this adds to the readability of the code, but it sure makes your code safer against malicious users who want to inject code that your programs execute. Please add your thoughts on this subject.

Troubleshooting ORA-02049: timeout: distributed transaction waiting for lock

Michael Dinh - Sat, 2016-07-23 17:45

Controlling Connections Established by Database Links

Tracing can be done using – ALTER SYSTEM SET EVENTS ‘2049 trace name ERRORSTACK level 3’;
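Once the trace is captured, the event can be switched off again; a sketch using the standard event syntax (verify against the MOS note for your version):

```sql
-- stop the ORA-2049 errorstack dumps
ALTER SYSTEM SET EVENTS '2049 trace name errorstack off';
```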

Nice note from MOS.

How to Obtain a System State Trace BEFORE the Error ORA-2049 Occurs, While Still Experiencing the Contention (Doc ID 789517.1)

Commit after select from dblink – blocking lock.
Session 1:
++++++++++
04:44:51 ARROW:(DEMO@leo):PRIMARY>
04:45:30 ARROW:(DEMO@leo):PRIMARY> select name from v$pdbs@mdinh_april;

NAME
------------------------------
APRIL

Elapsed: 00:00:00.01
04:45:36 ARROW:(DEMO@leo):PRIMARY> commit;

Commit complete.

Elapsed: 00:00:00.00
04:45:41 ARROW:(DEMO@leo):PRIMARY>
04:45:49 ARROW:(DEMO@leo):PRIMARY> update t set id=-9999;

++++++++++
Session 2:
04:45:18 ARROW:(DEMO@leo):PRIMARY> update t set id=100;

1 row updated.

Elapsed: 00:00:00.00
04:45:21 ARROW:(DEMO@leo):PRIMARY> @/media/sf_working/sql/b.sql

STATE      USERNAME        SID_SERIAL   SQL_ID          SEQ# EVENT                STATUS    MIN MACHINE              OSUSER     PROGRAM
---------- --------------- ------------ ------------- ------ -------------------- -------- ---- -------------------- ---------- ---------------
BLOCKING   DEMO            32,17        74nhxnyztg454    402 SQL*Net message to c ACTIVE      0 arrow.localdomain    oracle     sqlplus@arrow.l
WAITING    *DEMO           41,271       1qfpvr7brd2pq    113 enq: TX - row lock c ACTIVE      8 arrow.localdomain    oracle     sqlplus@arrow.l

Elapsed: 00:00:00.01

STATE      USERNAME        SQL_ID        SQL_TEXT
---------- --------------- ------------- --------------------------------------------------------------------------------
BLOCKING   DEMO            7741d4und71ph with s as ( SELECT decode(level,1,'BLOCKING','WAITING') state, LPAD('*',(level-1
WAITING    *DEMO           1qfpvr7brd2pq update t set id=-9999

Elapsed: 00:00:00.00
04:54:19 ARROW:(DEMO@leo):PRIMARY> select count(*) from t;

  COUNT(*)
----------
         1

Elapsed: 00:00:00.01
05:00:49 ARROW:(DEMO@leo):PRIMARY>
No commit after select from dblink – ORA-02049: timeout: distributed transaction waiting for lock.

The waiting statement times out based on the database parameter distributed_lock_timeout (the default is 60 seconds).
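The current value can be checked and changed; a minimal sketch (the parameter is static, so the change only takes effect after an instance restart):

```sql
-- show the current timeout (in seconds)
show parameter distributed_lock_timeout

-- raise it to 120 seconds; requires an instance restart
ALTER SYSTEM SET distributed_lock_timeout = 120 SCOPE = SPFILE;
```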

Session 1:
++++++++++
21:58:06 ARROW:(DEMO@leo):PRIMARY> update t set id=100;

1 row updated.

Elapsed: 00:00:00.01
21:58:14 ARROW:(DEMO@leo):PRIMARY>

Session 2:
+++++++++
21:58:45 ARROW:(DEMO@leo):PRIMARY> select name from v$pdbs@mdinh_april;

NAME
------------------------------
APRIL

Elapsed: 00:00:00.00
21:58:51 ARROW:(DEMO@leo):PRIMARY> update t set id=1;
update t set id=1
       *
ERROR at line 1:
ORA-02049: timeout: distributed transaction waiting for lock


Elapsed: 00:01:00.01
21:59:59 ARROW:(DEMO@leo):PRIMARY>

Don’t forget to commit or rollback after using dblinks.
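Even a plain SELECT over a database link opens a distributed transaction that keeps resources on both sides until it is ended; a minimal pattern (mdinh_april is the link used in the test above, any link name works):

```sql
-- a remote query starts a distributed transaction...
select name from v$pdbs@mdinh_april;

-- ...so end it explicitly, even though nothing was modified
commit;  -- or rollback;
```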

UPDATED: I tried to determine whether there is a way to monitor the distributed transaction, and in this test case I was not able to.

Session 1: 
++++++++++
15:59:32 ARROW:(MDINH@leo):PRIMARY> update demo.t set id=100;

1 row updated.

Elapsed: 00:00:00.01
15:59:43 ARROW:(MDINH@leo):PRIMARY>

Session 2: Distributed Transaction
++++++++++
16:00:43 ARROW:(DEMO@leo):PRIMARY> select name from v$pdbs@mdinh_april;

NAME
------------------------------
APRIL

Elapsed: 00:00:00.00
16:00:47 ARROW:(DEMO@leo):PRIMARY> update t set id=-9999;
update t set id=-9999
       *
ERROR at line 1:
ORA-02049: timeout: distributed transaction waiting for lock


Elapsed: 00:10:00.00
16:10:57 ARROW:(DEMO@leo):PRIMARY> update t set id=-9999;
update t set id=-9999
*
ERROR at line 1:
ORA-02049: timeout: distributed transaction waiting for lock


Elapsed: 00:10:00.01
16:23:03 ARROW:(DEMO@leo):PRIMARY>

Session 3: Monitoring
++++++++++
16:21:23 ARROW:(SYS@leo):PRIMARY> show parameter distributed_lock_timeout

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
distributed_lock_timeout             integer     600
16:21:35 ARROW:(SYS@leo):PRIMARY> @b.sql

STATE      USERNAME        SID_SERIAL   SQL_ID          SEQ# EVENT                STATUS    MIN MACHINE              OSUSER     PROGRAM
---------- --------------- ------------ ------------- ------ -------------------- -------- ---- -------------------- ---------- ---------------
BLOCKING   MDINH           26,3                           32 SQL*Net message from INACTIVE   22 arrow.localdomain    oracle     sqlplus@arrow.l
WAITING    *DEMO           16,23        1qfpvr7brd2pq     70 enq: TX - row lock c ACTIVE      9 arrow.localdomain    oracle     sqlplus@arrow.l

Elapsed: 00:00:00.01

STATE      USERNAME        SQL_ID        SQL_TEXT
---------- --------------- ------------- --------------------------------------------------------------------------------
BLOCKING   MDINH           4cnt32uym27j2 update demo.t set id=100
WAITING    *DEMO           1qfpvr7brd2pq update t set id=-9999

Elapsed: 00:00:00.00
16:21:38 ARROW:(SYS@leo):PRIMARY> @b.sql

STATE      USERNAME        SID_SERIAL   SQL_ID          SEQ# EVENT                STATUS    MIN MACHINE              OSUSER     PROGRAM
---------- --------------- ------------ ------------- ------ -------------------- -------- ---- -------------------- ---------- ---------------
BLOCKING   MDINH           26,3                           32 SQL*Net message from INACTIVE   22 arrow.localdomain    oracle     sqlplus@arrow.l
WAITING    *DEMO           16,23        1qfpvr7brd2pq     70 enq: TX - row lock c ACTIVE      9 arrow.localdomain    oracle     sqlplus@arrow.l

Elapsed: 00:00:00.01

STATE      USERNAME        SQL_ID        SQL_TEXT
---------- --------------- ------------- --------------------------------------------------------------------------------
BLOCKING   MDINH           4cnt32uym27j2 update demo.t set id=100
WAITING    *DEMO           1qfpvr7brd2pq update t set id=-9999

Elapsed: 00:00:00.00
16:22:10 ARROW:(SYS@leo):PRIMARY> @b.sql

STATE      USERNAME        SID_SERIAL   SQL_ID          SEQ# EVENT                STATUS    MIN MACHINE              OSUSER     PROGRAM
---------- --------------- ------------ ------------- ------ -------------------- -------- ---- -------------------- ---------- ---------------
BLOCKING   MDINH           26,3                           32 SQL*Net message from INACTIVE   23 arrow.localdomain    oracle     sqlplus@arrow.l
WAITING    *DEMO           16,23        1qfpvr7brd2pq     70 enq: TX - row lock c ACTIVE     10 arrow.localdomain    oracle     sqlplus@arrow.l

Elapsed: 00:00:00.01

STATE      USERNAME        SQL_ID        SQL_TEXT
---------- --------------- ------------- --------------------------------------------------------------------------------
BLOCKING   MDINH           4cnt32uym27j2 update demo.t set id=100
WAITING    *DEMO           1qfpvr7brd2pq update t set id=-9999

Elapsed: 00:00:00.00
16:22:56 ARROW:(SYS@leo):PRIMARY> @b.sql

no rows selected

Elapsed: 00:00:00.01

no rows selected

Elapsed: 00:00:00.00
16:23:12 ARROW:(SYS@leo):PRIMARY>

CODE : Persistent Variables via PL/SQL Package and DBMS_APPLICATION_INFO

Hemant K Chitale - Sat, 2016-07-23 10:43
I am now introducing some code samples in my blog.  I won't restrict myself to SQL but will also include PL/SQL  (C ? Bourne/Korn Shell ? what else ?)

This is the first of such samples.


Here I demonstrate using a PL/SQL Package to define persistent variables and then using them with DBMS_APPLICATION_INFO.  This demo consists of only two variables being used by one session, but we could have a number of variables in this Package, invoked by multiple client sessions in the real world.

I first grant the necessary privilege :

SQL> grant create procedure to hr;

Grant succeeded.

SQL>


Then, in the HR schema, I set up a Package to define variables that can persist throughout a session.  Public variables defined in a Package, once invoked, persist throughout the session that invoked them.

create or replace package
define_my_variables
authid definer
is
my_application varchar2(25) := 'Human Resources';
my_base_schema varchar2(25) := 'HR';
end;
/

grant execute on define_my_variables to hemant;
grant select on employees to hemant;


As HEMANT, I then execute :

SQL> connect hemant/hemant  
Connected.
SQL> execute dbms_application_info.set_module(-
> module_name=>HR.define_my_variables.my_application,-
> action_name=>NULL);

PL/SQL procedure successfully completed.

SQL>


As SYSTEM, the DBA can monitor HEMANT

SQL> show user
USER is "SYSTEM"
SQL> select sid, serial#, to_char(logon_time,'DD-MON HH24:MI:SS') Logon_At, module, action
2 from v$session
3 where username = 'HEMANT'
4 order by 1
5 /

SID SERIAL# LOGON_AT MODULE
---------- ---------- ------------------------ ----------------------------------------------------------------
ACTION
----------------------------------------------------------------
1 63450 23-JUL 23:24:03 Human Resources



SQL>


Then, HEMANT intends to run a query on the EMPLOYEES Table.

SQL> execute dbms_application_info.set_action(-
> action_name=>'Query EMP');

PL/SQL procedure successfully completed.

SQL> select count(*) from hr.employees where job_id like '%PROG%'
2 /

COUNT(*)
----------
5

SQL>


SYSTEM can see what he is doing with

SQL> l
1 select sid, serial#, to_char(logon_time,'DD-MON HH24:MI:SS') Logon_At, module, action
2 from v$session
3 where username = 'HEMANT'
4* order by 1
SQL> /

SID SERIAL# LOGON_AT MODULE
---------- ---------- ------------------------ ----------------------------------------------------------------
ACTION
----------------------------------------------------------------
1 63450 23-JUL 23:24:03 Human Resources
Query EMP


SQL>


Returning to the HEMANT login, I can see :

SQL> show user
USER is "HEMANT"
SQL> execute dbms_output.put_line(-
> 'I am running ' || hr.define_my_variables.my_application || ' against ' || hr.define_my_variables.my_base_schema);
I am running Human Resources against HR

PL/SQL procedure successfully completed.

SQL>


So, I have demonstrated :
1.  Using a PL/SQL Package Specification (without the need for a Package Body) to define variables that are visible to another session.

2.  The possibility of using this across schemas.  HR could be my "master schema" that sets up all the variables, and HEMANT is one of many "client" schemas (or users) that use these variables.

3.  The variables defined will persist throughout the client session once they are invoked.

4.  Using DBMS_APPLICATION_INFO to read these variables and set client information.


Note :  SYSTEM can also trace HEMANT's session using DBMS_MONITOR as demonstrated in Trace Files -- 2 : Generating SQL Traces (another session)


Categories: DBA Blogs

Release 12.2.5 AR Enhancement : Apply Receipts Automatically based on Match Score and Knapsack Method

OracleApps Epicenter - Sat, 2016-07-23 09:30
This is one of the R12.2.5 enhancements in EBS AR. Automatic Cash Application improves the accuracy and on-time application of cash receipts with the introduction of two new methods for automatically applying them. The first method generates match scores using the Levenshtein distance algorithm and automatically applies the receipt based on a score threshold. The […]
Categories: APPS Blogs

Oracle Data Relationship Management Analytics

OracleApps Epicenter - Sat, 2016-07-23 06:07
The following Patch Set Updates have been released Recently for Oracle Data Relationship Management 11.1.2.4.341. Patch 23236297 - PATCH SET UPDATE: ORACLE DATA RELATIONSHIP MANAGEMENT - 11.1.2.4.341 Patch 23750023 - PATCH SET UPDATE: DATA RELATIONSHIP MANAGEMENT ANALYTICS - 11.1.2.4.341 Oracle Data Relationship Management Analytics is a capstone dashboard and reporting application that draws upon the […]
Categories: APPS Blogs

Redo log block size on ODA X6 all flash

Yann Neuhaus - Fri, 2016-07-22 16:43

On the Oracle Database Appliance, the redo logs are on flash storage (and with the X6, everything is on flash storage), so you may wonder whether we can benefit from a 4k redo block size. Here are some tests about it on an ODA X6-2M.

I’ll compare the same workload (heavy inserts) with 512-byte and 4k redo block sizes. However, by default we can’t create a log group with a block size other than 512 bytes:

ORA-01378: The logical block size (4096) of file
/u03/app/oracle/redo/LABDB1/onlinelog/o1_mf_999_%u_.log is not compatible with
the disk sector size (media sector size is 512 and host sector size is 512)

This is because the flash storage is exposed with a 512-byte sector size:

ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED NORMAL N 512 4096 4194304 4894016 4500068 2441888 1023992 0 Y DATA/
MOUNTED NORMAL N 512 4096 4194304 1231176 221172 610468 -199762 0 N RECO/

Then, in order to be able to create new redo log groups with a larger block size, you need to set “_disk_sector_size_override” to TRUE.
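The sequence looks like this (a sketch; “_disk_sector_size_override” is an underscore parameter, so use it only for testing or under Oracle Support guidance, and the group number, path and size are just examples):

```sql
ALTER SYSTEM SET "_disk_sector_size_override" = TRUE SCOPE = BOTH;

-- now groups with a 4k block size can be created
ALTER DATABASE ADD LOGFILE GROUP 13 ('+RECO') SIZE 51200M BLOCKSIZE 4096;
```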

I have 3 log groups with a 512-byte block size, and 3 groups with 4k:


LOGFILE
GROUP 10 '+RECO/LABDB1/ONLINELOG/group_10.264.917867333' SIZE 51200M BLOCKSIZE 512,
GROUP 11 '+RECO/LABDB1/ONLINELOG/group_11.265.917867489' SIZE 51200M BLOCKSIZE 512,
GROUP 12 '+RECO/LABDB1/ONLINELOG/group_12.266.917867645' SIZE 51200M BLOCKSIZE 512,
GROUP 13 '+RECO/LABDB1/ONLINELOG/group_13.267.917867795' SIZE 51200M BLOCKSIZE 4096,
GROUP 14 '+RECO/LABDB1/ONLINELOG/group_14.268.917867913' SIZE 51200M BLOCKSIZE 4096,
GROUP 15 '+RECO/LABDB1/ONLINELOG/group_15.269.917868013' SIZE 51200M BLOCKSIZE 4096

In 12c the database files should be on ACFS and not directly on the diskgroup. We did this on purpose in order to check whether there is any overhead with ACFS, and we have seen exactly the same performance in both. There is something I dislike here, however: the redo log files are not multiplexed with multiple log members, but rely on the diskgroup redundancy. I agree with that in ASM, because you are not supposed to manage the files and thus risk deleting one of them. But in ACFS you see only one file, and if you drop it by mistake, both mirrors are lost, along with the latest transactions.

On an insert intensive workload I take AWR snapshots between two log switches:

The switch from block size 512 to block size 4096 happened at 12:35.

Don’t be nervous about those orange ‘log file sync’ waits; we had to run 10,000 transactions per second in order to get some contention here.

We have to go into the details in order to compare, from an AWR Diff report:

Workload Comparison
~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff
--------------- --------------- ------
DB time: 37.9 37.3 -1.4
CPU time: 19.0 24.4 28.4
Background CPU time: 0.8 1.0 23.2
Redo size (bytes): 61,829,138.5 76,420,493.9 23.6
Logical read (blocks): 1,181,178.7 1,458,915.9 23.5
Block changes: 360,883.0 445,770.8 23.5
Physical read (blocks): 0.4 1.1 164.3
Physical write (blocks): 14,451.2 16,092.4 11.4
Read IO requests: 0.4 1.1 164.3
Write IO requests: 9,829.4 10,352.3 5.3
Read IO (MB): 0.0 0.0 100.0
Write IO (MB): 112.9 125.7 11.4
IM scan rows: 0.0 0.0 0.0
Session Logical Read IM:
User calls: 8,376.0 10,341.2 23.5
Parses (SQL): 5,056.0 6,247.8 23.6
Hard parses (SQL): 0.0 0.0 0.0
SQL Work Area (MB): 3.1 3.2 3.5
Logons: 0.4 0.3 -37.2
Executes (SQL): 225,554.2 278,329.3 23.4
Transactions: 10,911.0 13,486.4 23.6

The second workload, when redo blocksize was 4k, was able to handle 23% more activity.

‘log file sync’ average time is 1.3 milliseconds instead of 2.4:

Top Timed Events First DB/Inst: LABDB1/labdb1 Snaps: 155-156 (Elapsed time: 301.556 sec DB time: 11417.12 sec), Second DB/Inst: LABDB1/labdb1 Snaps: 157-158 (Elapsed time: 301.927 sec DB time: 11269.1 sec)
-> Events with a "-" did not make the Top list in this set of snapshots, but are displayed for comparison purposes
 
1st 2nd
------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------
Event Wait Class Waits Time(s) Avg Time(ms) %DB time Event Wait Class Waits Time(s) Avg Time(ms) %DB time
------------------------------ ------------- ------------ ------------ ------------- ----------- ------------------------------ ------------- ------------ ------------ ------------- -----------
CPU time N/A 5,722.8 N/A 50.1 CPU time N/A 7,358.4 N/A 65.3
log file sync Commit 2,288,655 5,412.1 2.4 47.4 log file sync Commit 2,808,036 3,535.5 1.3 31.4
target log write size Other 363,206 283.7 0.8 2.5 target log write size Other 644,287 278.2 0.4 2.5
log file parallel write System I/O 368,063 225.1 0.6 2.0 enq: TX - row lock contention Application 171,485 170.2 1.0 1.5
db file parallel write System I/O 12,399 160.2 12.9 1.4 db file parallel write System I/O 12,131 150.4 12.4 1.3
enq: TX - row lock contention Application 144,822 133.2 0.9 1.2 log file parallel write System I/O 649,501 148.1 0.2 1.3
library cache: mutex X Concurrency 130,800 120.8 0.9 1.1 library cache: mutex X Concurrency 86,632 128.1 1.5 1.1
log file sequential read System I/O 7,433 27.5 3.7 0.2 LGWR wait for redo copy Other 478,350 45.1 0.1 0.4
LGWR wait for redo copy Other 228,643 20.8 0.1 0.2 log file sequential read System I/O 6,577 21.7 3.3 0.2
buffer busy waits Concurrency 261,348 15.8 0.1 0.1 buffer busy waits Concurrency 295,880 20.1 0.1 0.2
--------------------------------------------------------------------------------------------------------------------

We see that this difference comes from lower latency in ‘log file parallel write’:

Wait Events First DB/Inst: LABDB1/labdb1 Snaps: 155-156 (Elapsed time: 301.556 sec DB time: 11417.12 sec), Second DB/Inst: LABDB1/labdb1 Snaps: 157-158 (Elapsed time: 301.927 sec DB time: 11269.1 sec)
-> Ordered by absolute value of 'Diff' column of '% of DB time' descending (idle events last)
 
# Waits/sec (Elapsed Time) Total Wait Time (sec) Avg Wait Time (ms)
---------------------------------------- ---------------------------------------- -------------------------------------------
Event Wait Class 1st 2nd %Diff 1st 2nd %Diff 1st 2nd %Diff
------------------------------ ------------- -------------- -------------- ---------- -------------- -------------- ---------- --------------- --------------- -----------
log file sync Commit 7,589.5 9,300.4 22.5 5,412.1 3,535.5 -34.7 2.36 1.26 -46.61
log file parallel write System I/O 1,220.5 2,151.2 76.2 225.1 148.1 -34.2 0.61 0.23 -62.30
enq: TX - row lock contention Application 480.2 568.0 18.3 133.2 170.2 27.8 0.92 0.99 7.61
LGWR wait for redo copy Other 758.2 1,584.3 109.0 20.8 45.1 117.1 0.09 0.09 0.00
library cache: mutex X Concurrency 433.8 286.9 -33.8 120.8 128.1 6.0 0.92 1.48 60.87
db file parallel write System I/O 41.1 40.2 -2.3 160.2 150.4 -6.2 12.92 12.40 -4.02
cursor: pin S Concurrency 29.7 46.0 55.0 9.9 16.6 67.0 1.11 1.19 7.21
cursor: mutex X Concurrency 7.0 10.8 54.2 13.6 19.7 45.0 6.39 6.01 -5.95
latch: In memory undo latch Concurrency 585.3 749.0 28.0 10.8 16.3 50.8 0.06 0.07 16.67

In order to go into details, here is the wait event histogram for 512 bytes redo blocksize:

% of Waits
-----------------------------------------------
Total
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms 1s
------------------------- ------ ----- ----- ----- ----- ----- ----- ----- -----
LGWR all worker groups 41 48.8 12.2 14.6 14.6 4.9 2.4 2.4
LGWR any worker group 259 6.2 5.4 8.9 13.9 18.1 18.1 29.3
LGWR wait for redo copy 228.9K 99.1 .9 .0
LGWR worker group orderin 442 44.6 9.7 4.5 5.0 9.3 10.6 16.3
log file parallel write 368.5K 85.3 7.5 4.7 1.4 .9 .2 .0
log file sequential read 7358 6.5 13.1 59.0 17.2 3.0 1.1 .2
log file sync 2.3M 48.9 23.1 17.0 5.7 2.7 2.3 .3

and for 4096 bytes blocksize:

% of Waits
-----------------------------------------------
Total
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms 1s
------------------------- ------ ----- ----- ----- ----- ----- ----- ----- -----
LGWR all worker groups 20 45.0 5.0 15.0 10.0 5.0 20.0
LGWR any worker group 235 7.2 3.0 5.5 7.7 14.5 25.1 37.0
LGWR wait for redo copy 478.7K 98.9 1.0 .1 .0
LGWR worker group orderin 517 51.3 9.7 2.3 2.9 7.2 11.6 15.1
log file parallel write 649.9K 97.7 1.3 .3 .3 .4 .0 .0
log file sequential read 6464 5.7 8.2 73.5 11.0 1.2 .3 .1
log file sync 2.8M 78.2 15.6 2.3 .8 1.6 1.2 .

A few milliseconds are not perceived by the end user at commit, unless the application design is so bad that hundreds of commits are done for each user interaction. Even if both are really fast, the log writer was above 1 ms for only 1% of writes with the 4k block size vs. 15% with the default block size.

This faster latency shows up in the I/O statistics as well:

Reads: Reqs Data Writes: Reqs Data Waits: Avg
Function Name Data per sec per sec Data per sec per sec Count Tm(ms)
--------------- ------- ------- ------- ------- ------- ------- ------- -------

BLOCKSIZE 512:
LGWR 0M 0.0 0M 18.1G 2420.4 61.528M 368.9K 0.6
BLOCKSIZE 4096:
LGWR 0M 0.0 0M 24.1G 4263.5 81.689M 649.5K 0.2

To be comprehensive, here are the statistics related to redo, thanks to the many statistics available in 12c:

Value per Second (Elapsed Time)
------------------------------------------- ---------------------------------------
Statistic 1st 2nd %Diff 1st 2nd %Diff
------------------------------ ---------------- ---------------- --------- -------------- -------------- ---------
 
redo KB read 16,319,609 15,783,576 -3.3 54,118.0 52,276.1 -3.4
redo blocks checksummed by FG 26,587,090 1,000,267 -96.2 88,166.3 3,312.9 -96.2
redo blocks written 37,974,499 6,318,372 -83.4 125,928.5 20,926.8 -83.4
redo blocks written (group 0) 37,256,502 6,257,861 -83.2 123,547.5 20,726.4 -83.2
redo blocks written (group 1) 717,997 60,511 -91.6 2,381.0 200.4 -91.6
redo entries 24,023,503 30,214,386 25.8 79,665.1 100,071.8 25.6
redo size 18,644,947,688 23,073,410,468 23.8 61,829,138.5 76,420,493.9 23.6
redo synch long waits 343 4,890 1,325.7 1.1 16.2 1,321.1
redo synch time 541,804 354,625 -34.5 1,796.7 1,174.5 -34.6
redo synch time (usec) 5,418,056,862 3,546,209,390 -34.5 17,967,000.7 11,745,254.3 -34.6
redo synch time overhead (usec) 145,664,759 197,925,281 35.9 483,043.8 655,540.2 35.7
redo synch time overhead count ( 2ms) 2,295,847 2,821,726 22.9 7,613.3 9,345.7 22.8
redo synch time overhead count ( 8ms) 443 3,704 736.1 1.5 12.3 734.7
redo synch time overhead count ( 32ms) 2 9 350.0 0.0 0.0 200.0
redo synch writes 2,305,502 2,849,645 23.6 7,645.4 9,438.2 23.5
redo wastage 179,073,264 2,703,864,280 1,409.9 593,830.9 8,955,357.7 1,408.1
redo write finish time 291,094,266 277,884,591 -4.5 965,307.5 920,370.1 -4.7
redo write gather time 63,237,013 125,066,420 97.8 209,702.4 414,227.3 97.5
redo write info find 2,296,292 2,825,439 23.0 7,614.8 9,358.0 22.9
redo write schedule time 63,679,682 125,819,850 97.6 211,170.3 416,722.8 97.3
redo write size count ( 4KB) 12,220 0 40.5 0
redo write size count ( 8KB) 26,420 2,246 -91.5 87.6 7.4 -91.5
redo write size count ( 16KB) 69,674 94,557 35.7 231.0 313.2 35.5
redo write size count ( 32KB) 108,676 268,794 147.3 360.4 890.3 147.0
redo write size count ( 128KB) 106,651 253,669 137.8 353.7 840.2 137.6
redo write size count ( 256KB) 37,332 28,076 -24.8 123.8 93.0 -24.9
redo write size count ( 512KB) 7,328 2,382 -67.5 24.3 7.9 -67.5
redo write size count (1024KB) 28 28 0.0 0.1 0.1 0.0
redo write time 29,126 27,817 -4.5 96.6 92.1 -4.6
redo write time (usec) 291,261,420 278,162,437 -4.5 965,861.8 921,290.4 -4.6
redo write total time 306,213,383 298,786,696 -2.4 1,015,444.5 989,599.1 -2.5
redo write worker delay (usec) 38,246,633 73,452,748 92.1 126,830.9 243,279.8 91.8
redo writes 368,330 649,751 76.4 1,221.4 2,152.0 76.2
redo writes (group 0) 366,492 648,430 76.9 1,215.3 2,147.6 76.7
redo writes (group 1) 1,838 1,321 -28.1 6.1 4.4 -28.2
redo writes adaptive all 368,330 649,752 76.4 1,221.4 2,152.0 76.2
redo writes adaptive worker 368,330 649,752 76.4 1,221.4 2,152.0 76.2

I’ve added a few things that were masked by the AWR Diff Report. The count of writes smaller than 4k is zero in the second snapshot because that is the block size.
It’s interesting to see that the redo size is higher: this is because you write 4k even when you have less to write. This is measured by ‘redo wastage’.
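From the diff report, that is roughly 8,955,358 bytes/s of wastage over ~2,152 writes/s, i.e. about 4 KB wasted per write with the 4k block size, versus roughly 0.5 KB per write with 512-byte blocks. A quick way to sanity-check this ratio on a running instance (a sketch; v$sysstat values are cumulative since instance startup):

```sql
-- average redo wastage per redo write since instance startup
SELECT ROUND(w.value / NULLIF(n.value, 0)) AS wastage_bytes_per_write
FROM   v$sysstat w
JOIN   v$sysstat n ON n.name = 'redo writes'
WHERE  w.name = 'redo wastage';
```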

So, a larger block size lowers the latency but increases the volume. Here, where NVMe optimizes the bandwidth to the flash storage, it may not be a problem.

So what?

You have to keep in mind that this workload, with lots of small transactions and no other waits, is a special workload for this test. If you are not in this extreme case, then the default block size is probably sufficient for latency and reduces the redo size. However, if ‘log file sync’ latency is your bottleneck, you may consider increasing the block size.

Thanks to Arrow, the Oracle Authorized Solution Center in Switzerland.

Those tests were done on an ODA X6-2M at Arrow OASC. Arrow has a wide range of Engineered Systems available for Oracle partners, like dbi services, and for customers to do Proofs of Concept, demos, learning, benchmarks, etc.


Cet article Redo log block size on ODA X6 all flash est apparu en premier sur Blog dbi services.

Visibility on package objects

Tom Kyte - Fri, 2016-07-22 14:46
Hello, I have a package that creates objects and variables in the package header spec, such as: CREATE OR REPLACE PACKAGE Test_Package AS bVariable BOOLEAN := FALSE; END Test_Package; Am I correct in assuming that Test_Package.bVari...
Categories: DBA Blogs

Determining current database name

Tom Kyte - Fri, 2016-07-22 14:46
Which (if either) is preferable for determining the current database name: ORA_DATABASE_NAME or sys_context('USERENV', 'DB_NAME') ? Thanx, D
Categories: DBA Blogs

Inserting a decimal which has no decimal into the table with a decimal point

Tom Kyte - Fri, 2016-07-22 14:46
Hi Tom, I want to insert a decimal number which has no decimal point into a table with a decimal point. Is there a predefined function or a way to do this? Ex: (46926831654) should be inserted as (469268.31654). I don't want to use the string...
Categories: DBA Blogs

executing stored procedures after context change

Tom Kyte - Fri, 2016-07-22 14:46
Hi, I need to execute a stored procedure that exists in many schema, on demand, in any given schema. I am trying to change the current_schema then execute the procedure (GS_LOAD_FDB_PKG is the package in each of our schemas and gst_disease_dxid_u...
Categories: DBA Blogs

RMAN and Archivelog

Tom Kyte - Fri, 2016-07-22 14:46
Hi, Maybe this question was already answered somewhere but there so many question that I can not find the solution on this site. It is rather a simple one (I hope not a stupid one). Strictly speaking, when I make a "backup database" with Rman, do...
Categories: DBA Blogs

Restricting datafile addition to a filesystem (ASM)

Tom Kyte - Fri, 2016-07-22 14:46
Hi Tom The normal method of adding datafiles to a tablespace is to login to the database as "sqlplus / as sysdba" and then executing the "alter tablespace add datafile.." command. Is there a way to prevent / prompt the DBAs from accidentally adding ...
Categories: DBA Blogs

User defined Year-Month data type

Tom Kyte - Fri, 2016-07-22 14:46
Hi Answers Team. I keep seeing my maintenance / design / development teams abusing dates everywhere. One example is the use of YYYYMM as a way to extract a month from some piece of data what starts out as a date. 1. database design will cre...
Categories: DBA Blogs

It's all about DRG (Data Relationship Governance)

OracleApps Epicenter - Fri, 2016-07-22 10:47
You know, Oracle Data Relationship Management (DRM) is a web-based, user-friendly platform for users to manage enterprise dimensions. The application provides many features such as SOX-compliant auditing, powerful and highly customizable business rule enforcement, versioning capabilities for storing historical views of dimensions, and multiple integration formats. Within DRM, DRG adds configurable, collaborative workflow to […]
Categories: APPS Blogs
