
DBA Blogs

Bug 13930580 Example

Bobby Durrett's DBA Blog - Thu, 2014-01-09 13:37

It looks like we hit bug 13930580 on one of our systems, and I wanted to post some example output showing log file sync times going above 10 milliseconds (10,000 microseconds) after the log writer switches to polling.

Here is the output in the lgwr trace file showing the switch to polling:

*** 2013-12-27 01:50:47.374
kcrfw_update_adaptive_sync_mode: post->poll long#=1 sync#=1 sync=55336 poll=19134 rw=5506 rw+=5506 ack=0 min_sleep=19134

*** 2013-12-27 01:50:47.374
Log file sync switching to polling
Current scheduling delay is 1 usec
Current approximate redo synch write rate is 0 per sec

Here is a report showing the number of log file sync waits and their average time in microseconds:

END_INTERVAL_TIME          number of waits ave microseconds
-------------------------- --------------- ----------------
26-DEC-13 03.00.19.333 PM           976539       2279.06861
26-DEC-13 04.00.43.475 PM           681212       2029.32406
26-DEC-13 05.00.07.975 PM           343932       1575.26284
26-DEC-13 06.00.34.772 PM           163911       1850.74354
26-DEC-13 07.01.00.911 PM            73151       3815.28597
26-DEC-13 08.00.24.463 PM            39304       5038.05427
26-DEC-13 09.00.48.970 PM            32122       5677.00557
26-DEC-13 10.00.13.491 PM           472349       2353.95857
26-DEC-13 11.00.40.521 PM            18679       18655.5294
27-DEC-13 12.00.06.276 AM            19618       17046.2287
27-DEC-13 01.00.30.299 AM            18983       5721.99178
27-DEC-13 02.00.54.261 AM            17724       17106.3415
27-DEC-13 03.00.18.282 AM             9088       25342.7271
27-DEC-13 04.00.42.218 AM            14365          12128.7
27-DEC-13 05.00.06.391 AM            16323       12879.8831
27-DEC-13 06.00.31.379 AM            43758       15326.7298
27-DEC-13 07.00.56.027 AM            83819       14796.2851
27-DEC-13 08.00.20.637 AM           168718       13506.4363
27-DEC-13 09.00.47.262 AM           302827       19116.9491
27-DEC-13 10.00.14.014 AM           480347       19358.6655
27-DEC-13 11.00.41.178 AM           512777       15952.2358
27-DEC-13 12.00.08.220 PM           511091       13799.5512
27-DEC-13 01.00.38.131 PM           576341       10183.4347
27-DEC-13 02.00.06.308 PM           524568       10251.1259

Notice how the average wait time consistently goes above 10,000 microseconds once the log writer switches to polling between 1 and 2 am on 12/27/2013.  I haven't shown all the output, but the long log file sync wait times have continued since the switch.

Also, these long log file sync times don’t correspond to long log file parallel write times.  Here are the number and averages of log file parallel write waits for the same time frame:

END_INTERVAL_TIME          number of waits ave microseconds
-------------------------- --------------- ----------------
26-DEC-13 03.00.19.333 PM           902849       1426.66601
26-DEC-13 04.00.43.475 PM           659701       1394.87763
26-DEC-13 05.00.07.975 PM           344245       1294.92401
26-DEC-13 06.00.34.772 PM           166643       1586.64944
26-DEC-13 07.01.00.911 PM            80457       4019.29429
26-DEC-13 08.00.24.463 PM            46409       5580.67827
26-DEC-13 09.00.48.970 PM            69218       5115.20904
26-DEC-13 10.00.13.491 PM           475297       2219.80541
26-DEC-13 11.00.40.521 PM            40943        19405.052
27-DEC-13 12.00.06.276 AM            38835       18160.8073
27-DEC-13 01.00.30.299 AM            24734       6321.38425
27-DEC-13 02.00.54.261 AM            33617       11723.6698
27-DEC-13 03.00.18.282 AM            36469       17485.2614
27-DEC-13 04.00.42.218 AM            19344       6955.27042
27-DEC-13 05.00.06.391 AM            17857       4399.75718
27-DEC-13 06.00.31.379 AM            45098       4923.02763
27-DEC-13 07.00.56.027 AM            83700       3610.39713
27-DEC-13 08.00.20.637 AM           160919       2841.31507
27-DEC-13 09.00.47.262 AM           266405       3523.86855
27-DEC-13 10.00.14.014 AM           384795        3367.5075
27-DEC-13 11.00.41.178 AM           437806       2729.84248
27-DEC-13 12.00.08.220 PM           448261       2442.81012
27-DEC-13 01.00.38.131 PM           511648       1880.74418
27-DEC-13 02.00.06.308 PM           481106       1919.21158

The average I/O time (log file parallel write) is pretty low when the system is active (more than 100,000 waits per hour), usually less than 4,000 microseconds, yet log file sync is always more than 10,000 microseconds after the switch to polling. Also, the CPU on the system is consistently less than 30% used, so it isn't a system load issue.

Here are some Oracle support documents related to this issue:

Adaptive Switching Between Log Write Methods can Cause 
'log file sync' Waits (Doc ID 1462942.1)

Waits for "log file sync" with Adaptive Polling vs Post/Wait 
Choice Enabled (Doc ID 1541136.1)

Bug 13930580: LGWR IS BLOCKING SESSIONS

Here is the script I used to get the wait output:

set linesize 32000
set pagesize 1000
set long 2000000000
set longchunksize 1000
set head off;
set verify off;
set termout off;

column u new_value us noprint;
column n new_value ns noprint;

select name n from v$database;
select user u from dual;
set sqlprompt &ns:&us>

set head on
set echo on
set termout on
set trimspool on

UNDEFINE WAITNAME
UNDEFINE MINIMUMWAITS

spool "&ns.&&WAITNAME..log"

column END_INTERVAL_TIME format a26

select sn.END_INTERVAL_TIME,
(after.total_waits-before.total_waits) "number of waits",
(after.time_waited_micro-before.time_waited_micro)/
(after.total_waits-before.total_waits) "ave microseconds",
before.event_name "wait name"
from DBA_HIST_SYSTEM_EVENT before, 
DBA_HIST_SYSTEM_EVENT after,
DBA_HIST_SNAPSHOT sn
where before.event_name='&&WAITNAME' and
after.event_name=before.event_name and
after.snap_id=before.snap_id+1 and
after.instance_number=1 and
before.instance_number=after.instance_number and
after.snap_id=sn.snap_id and
after.instance_number=sn.instance_number and
(after.total_waits-before.total_waits) > &&MINIMUMWAITS
order by after.snap_id;

spool off

I gave log file sync as the WAITNAME and 1 as MINIMUMWAITS for the first output.  For the second output I changed WAITNAME to log file parallel write, keeping MINIMUMWAITS at 1.  Because the script UNDEFINEs both substitution variables, SQL*Plus prompts for them on each run.

It looks like adaptive log file sync is a new feature in 11.2 that was finally turned on by default in 11.2.0.3.  The workaround is to set the hidden parameter _use_adaptive_log_file_sync to false to turn the new feature off.  Check out the Oracle support documents listed above for details.
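For reference, disabling the feature looks something like this (a sketch only; hidden underscore parameters should be changed only under Oracle Support's direction):

-- Turn off adaptive log file sync, reverting to the old post/wait behavior
alter system set "_use_adaptive_log_file_sync" = false;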

- Bobby

p.s. I forgot to mention that when I tested on a test database with and without this new feature the log file sync times were lower with the new feature, as they should be.

With _use_adaptive_log_file_sync=TRUE (feature enabled):

Top 5 Timed Foreground Events

Event                     Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
log file sync             639,598    3,466    5              86.74      Commit
DB CPU                               397                     9.93
buffer exterminate        683        14       21             0.36       Other
virtual circuit wait      912        12       13             0.31       Network
SQL*Net message to client 1,293,196  7        0              0.17       Network

With _use_adaptive_log_file_sync=FALSE (disabled as in earlier versions of 11.2):

Top 5 Timed Foreground Events

Event                     Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
log file sync             639,644    3,553    6              87.31      Commit
DB CPU                               367                     9.02
buffer exterminate        1,364      28       21             0.69       Other
buffer busy waits         2,679      15       6              0.37       Concurrency
virtual circuit wait      903        13       15             0.32       Network

With the new feature enabled, log file sync averaged 5 milliseconds instead of the 6 milliseconds without it.  So the new feature does speed up log file sync waits when it is working normally.  But there must be some bug condition under which it degrades to log file sync times greater than 10 milliseconds.

Categories: DBA Blogs

Four tough choices for workshops at GLOC 2014

Grumpy old DBA - Thu, 2014-01-09 12:18
This year at the Great Lakes Oracle Conference (May 12-14, 2014 at CSU) we are doing half-day workshops Monday afternoon.  Then the conference kicks into full speed with keynotes Tuesday morning and session tracks after that.

We have an incredible set of people doing workshops for our attendees.  We are working on finalizing a fourth workshop in the Essbase area but are not quite ready to announce that one.

We are going to go live this weekend with the ability to register for our latest addition to the workshops.

Alex Gorbachev: From Relational to Hadoop - Migrating your data pipeline

I am very excited to announce this addition!

Will update this post with links when this workshop is online for registration. 
Categories: DBA Blogs

Quick Tip on Using Sqoop Action in Oozie

Chen Shapira - Wed, 2014-01-08 18:59

Another Oozie tip blog post.

If you try to use Sqoop action in Oozie, you know you can use the “command” format, with the entire Sqoop configuration in a single line:

<workflow-app name="sample-wf" xmlns="uri:oozie:workflow:0.1">
    ...
    <action name="myfirsthivejob">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>foo:8021</job-tracker>
            <name-node>bar:8020</name-node>
            <command>import  --connect jdbc:hsqldb:file:db.hsqldb --table TT --target-dir hdfs://localhost:8020/user/tucu/foo -m 1</command>
        </sqoop>
        <ok to="myotherjob"/>
        <error to="errorcleanup"/>
    </action>
    ...
</workflow-app>

This is convenient, but it can be difficult to read and maintain. I prefer the “arg” syntax, with each argument on its own line:

<workflow-app name="sample-wf" xmlns="uri:oozie:workflow:0.1">
    ...
    <action name="myfirsthivejob">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>foo:8021</job-tracker>
            <name-node>bar:8020</name-node>
            <arg>import</arg>
            <arg>--connect</arg>
            <arg>jdbc:hsqldb:file:db.hsqldb</arg>
            <arg>--table</arg>
            <arg>TT</arg>
            <arg>--target-dir</arg>
            <arg>hdfs://localhost:8020/user/tucu/foo</arg>
            <arg>-m</arg>
            <arg>1</arg>
        </sqoop>
        <ok to="myotherjob"/>
        <error to="errorcleanup"/>
    </action>
    ...
</workflow-app>

As you can see, each argument here is in its own “arg” tag. Even two arguments that belong together, like “--table” and “TT”, go in two separate tags.
If you try to put them together for readability, as I did, Sqoop will throw its entire user manual at you. It took me a while to figure out why this is an issue.

When you call Sqoop from the command line, all the arguments you pass are sent as a String[] array, and the spaces separate the arguments into array elements. So if you call Sqoop with “--table TT” it will be two elements, “--table” and “TT”.
When using “arg” tags in Oozie, you are basically generating the same array in XML. Oozie will turn the XML argument list into an array and pass it to Sqoop just the way the command line would. Then Sqoop parses it in exactly the same way.
So every item separated by a space on the command line must go in its own tag in Oozie.

It’s simple and logical once you figure out why :)
If you want to dig a bit more into how Sqoop parses its arguments, it uses Apache Commons CLI with GnuParser. You can read all about it.


Categories: DBA Blogs

Delphix upgrade in twenty minutes

Bobby Durrett's DBA Blog - Wed, 2014-01-08 17:29

Just got off of a Webex with Delphix support.  They upgraded our Delphix server from version 3.0.3.1 to 3.2.5.1.  It took about twenty minutes.  Pretty nice compared to an Oracle database upgrade I think!

The only thing that took some time was that I had to be sure I had enough space.  It looks like Delphix needs your disk space to be no more than 85% utilized to fully function.  We ended up extending our four 1.5 terabyte LUNs to 1.9 TB each to give us space.  Then I cleaned up some archive log space by running a snapsync on each source database to completion.  Our system just needed a little TLC to get some space free again.

But, the upgrade itself, running the scripts and rebooting the server, took 20 minutes and was all done by Delphix support for no extra charge.  Sweet.

- Bobby

Categories: DBA Blogs

Disabling Triggers in Oracle 11.2.0.4

Pythian Group - Wed, 2014-01-08 08:10

In March 2012, I put together a blog post entitled Disabling Oracle triggers on a per-session basis, outlining a way to suspend trigger execution for the current session through a PL/SQL call. Commenter Bryan posted a comment saying he couldn’t get it working in 11.2.0.4:

Unfortunately Oracle seems to have disabled this use in 11.2.0.4, and most likely 12.1 as well. Boo-Hiss! This is needed functionality for DBAs!

A new parameter: enable_goldengate_replication

I tried this on an Oracle 11.2.0.4 system, and I indeed got an error:

SQL> exec sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true);
BEGIN sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true); END;

*
ERROR at line 1:
ORA-26947: Oracle GoldenGate replication is not enabled.
ORA-06512: at "SYS.DBMS_XSTREAM_GG_INTERNAL", line 46
ORA-06512: at "SYS.DBMS_XSTREAM_GG", line 13
ORA-06512: at line 1

A quick look at oerr gives a path forward, assuming you do indeed have a GoldenGate license:

[oracle@ora11gr2b ~]$ oerr ora 26947
26947, 00000, "Oracle GoldenGate replication is not enabled."
// *Cause: The 'enable_goldengate_replication' parameter was not set to 'true'.
// *Action: Set the 'enable_goldengate_replication' parameter to 'true'
//           and retry the operation.
//          Oracle GoldenGate license is needed to use this parameter.

The Oracle reference gives a bit more info:

ENABLE_GOLDENGATE_REPLICATION controls services provided by the RDBMS for Oracle GoldenGate (both capture and apply services). Set this to true to enable RDBMS services used by Oracle GoldenGate.

The RDBMS services controlled by this parameter also include (but are not limited to):

Service to suppress triggers used by GoldenGate Replicat

As do the GoldenGate 12.1.2 docs:

The database services required to support Oracle GoldenGate capture and apply must be enabled explicitly for an Oracle 11.2.0.4 database. This is required for all modes of Extract and Replicat.

To enable Oracle GoldenGate, set the following database initialization parameter. All instances in Oracle RAC must have the same setting.

ENABLE_GOLDENGATE_REPLICATION=true

So here goes nothing:

SQL> alter system set enable_goldengate_replication=true;

System altered.

SQL> exec sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true);
BEGIN sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true); END;

*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_XSTREAM_GG_INTERNAL", line 46
ORA-06512: at "SYS.DBMS_XSTREAM_GG", line 13
ORA-06512: at line 1

Another error: missing privileges. I checked and double-checked that the required GoldenGate privileges were indeed assigned.

Tracing and permission checks

It’s time to run a 10046 trace (SQL trace) to see what’s really going on.

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

SQL> exec sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true);
BEGIN sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true); END;

*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_XSTREAM_GG_INTERNAL", line 46
ORA-06512: at "SYS.DBMS_XSTREAM_GG", line 13
ORA-06512: at line 1

And the tracefile does show some interesting information. A few of the more interesting snippets:

PARSING IN CURSOR #140324121137184 len=76 dep=0 uid=91 oct=47 lid=91 tim=1388531465245781 hv=1323338123 ad='6c1f63a0' sqlid='gvq73797f12cb'
BEGIN sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true); END;
END OF STMT
...
PARSING IN CURSOR #140324121064984 len=187 dep=1 uid=0 oct=3 lid=0 tim=1388531465246387 hv=2028900049 ad='6c128db8' sqlid='aa9h2ajwfx3qj'
SELECT COUNT(*) FROM ( SELECT GP.USERNAME FROM DBA_GOLDENGATE_PRIVILEGES GP WHERE GP.USERNAME = :B1 UNION ALL SELECT GRANTEE FROM DBA_ROLE_PRIVS WHERE GRANTEE=:B1 AND GRANTED_ROLE='DBA' )
END OF STMT
...
 Bind#0
...
  value="GGS"
...
 Bind#1
...
  value="GGS"
...

The SQL statement is actually checking two things. The first is looking for the current username in the dba_goldengate_privileges view. This view isn’t listed in the Oracle 11.2 documentation, but it does appear in the 12c docs:

ALL_GOLDENGATE_PRIVILEGES displays details about Oracle GoldenGate privileges for the user.

Oracle GoldenGate privileges are granted using the DBMS_GOLDENGATE_AUTH package.

Related Views

DBA_GOLDENGATE_PRIVILEGES displays details about Oracle GoldenGate privileges for all users who have been granted Oracle GoldenGate privileges.

USER_GOLDENGATE_PRIVILEGES displays details about Oracle GoldenGate privileges. This view does not display the USERNAME column.

I had previously run dbms_goldengate_auth to grant privileges here, so that should be OK.

The second check simply verifies that the DBA role has been granted to the current user, again as recommended by the documentation. (A side note: in previous versions, I had avoided granting the overly broad DBA role to the GoldenGate user in favor of specific grants for the objects it uses. There’s no reason for the GoldenGate user to read and modify data objects that aren’t related to its own replication activities, for example. And I would argue that restricted permissions help avoid errors such as putting the wrong schema in a map statement. But sadly that’s no longer possible in the world of 11.2.0.4.)

Running the query manually to verify that the grants are indeed in place:

SQL> SELECT COUNT(*) FROM ( SELECT GP.USERNAME FROM DBA_GOLDENGATE_PRIVILEGES GP WHERE GP.USERNAME = 'GGS'
UNION ALL SELECT GRANTEE FROM DBA_ROLE_PRIVS WHERE GRANTEE='GGS' AND GRANTED_ROLE='DBA' );

  COUNT(*)
----------
         2

Looks good, so that doesn’t seem to be the problem.

Tracing #2: system properties

Back to the 10046 tracefile:

PARSING IN CURSOR #140324119717656 len=45 dep=1 uid=0 oct=3 lid=0 tim=1388531465253124 hv=3393782897 ad='78ae2b40' sqlid='9p6bq1v54k13j'
select value$ from sys.props$ where name = :1
END OF STMT
...
 Bind#0
...
  value="GG_XSTREAM_FOR_STREAMS"
...
FETCH #140324119717656:c=0,e=44,p=0,cr=2,cu=0,mis=0,r=0,dep=1,og=1,plh=415205717,tim=1388531465254441

Because this SQL statement involves an ordinary select without an aggregate function, I can look at the FETCH line in the tracefile to get the number of rows returned. In this case it’s r=0, meaning no rows returned.

The query itself is looking for a system property I haven’t seen before: GG_XSTREAM_FOR_STREAMS. A Google search returns only a single result: the PDF version of the Oracle 11.2 XStream guide. Quoting:

ENABLE_GG_XSTREAM_FOR_STREAMS Procedure
This procedure enables XStream capabilities and performance optimizations for Oracle
Streams components.
This procedure is intended for users of Oracle Streams who want to enable XStream
capabilities and optimizations. For example, you can enable the optimizations for an
Oracle Streams replication configuration that uses capture processes and apply
processes to replicate changes between Oracle databases.
These capabilities and optimizations are enabled automatically for XStream
components, such as outbound servers, inbound servers, and capture processes that
send changes to outbound servers. It is not necessary to run this procedure for
XStream components.
When XStream capabilities are enabled, Oracle Streams components can stream ID key LCRs and sequence LCRs. The XStream performance optimizations improve efficiency in various areas, including:
- LCR processing
- Handling large transactions
- DML execution during apply
- Dependency computation and scheduling
- Capture process parallelism

On the surface, I don’t see what this would have to do with trigger execution, but I’m going to try enabling it as per the newly read document anyway:

SQL> exec dbms_xstream_adm.ENABLE_GG_XSTREAM_FOR_STREAMS(enable=>true);

PL/SQL procedure successfully completed.

SQL> exec sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true);
BEGIN sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true); END;

*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_XSTREAM_GG_INTERNAL", line 46
ORA-06512: at "SYS.DBMS_XSTREAM_GG", line 13
ORA-06512: at line 1

No dice.

Tracing #3: process names

Onto the next SQL in the tracefile:

PARSING IN CURSOR #140324120912848 len=114 dep=1 uid=0 oct=3 lid=0 tim=1388531465255628 hv=1670585998 ad='6c2d6098' sqlid='a9mwtndjt67nf'
SELECT COUNT(*) FROM V$SESSION S, V$PROCESS P WHERE P.ADDR = S.PADDR AND S.PROGRAM LIKE 'extract%' AND p.spid = :1
END OF STMT
...
 Bind#0
...
  value="2293"

Now we look in v$session to see whether the session associated with OS PID 2293 (which happens to be the SPID of our current shadow process) has a PROGRAM column starting with the word extract. extract is, naturally, the name of the GoldenGate executable that captures data from the source system. In a GoldenGate system, however, trigger suppression does not happen in the extract process at all, but rather in the replicat process that applies changes on the target system. So I’m going to skip this check and move on to the next one in the tracefile:

PARSING IN CURSOR #140324120905624 len=169 dep=1 uid=0 oct=3 lid=0 tim=1388531465257346 hv=3013382849 ad='6c122b38' sqlid='38pkvxattt4q1'
SELECT COUNT(*) FROM V$SESSION S, V$PROCESS P WHERE P.ADDR = S.PADDR AND (S.MODULE LIKE 'OGG%' OR S.MODULE = 'GoldenGate') AND S.PROGRAM LIKE 'replicat%' AND p.spid = :1
END OF STMT
...
 Bind#0
...
  value="2293"

This SQL is similar to the previous one, but instead of looking for a program called extract, it looks for one called replicat, and adds an extra check to see whether the module column either starts with OGG or is called GoldenGate. And since it’s the replicat process that does trigger disabling in GoldenGate, this check is likely to be the relevant one.
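As an aside, you can see what these checks would find for your own session with a quick query (a sketch):

-- What program and module does the database see for this session?
select s.sid, s.program, s.module
from   v$session s
where  s.sid = sys_context('userenv', 'sid');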

To make this check succeed, I’m going to have to change both the program and module columns in v$session for the current session. Of the two, module is much easier to modify: a single call to dbms_application_info.set_module. Modifying program is less straightforward. One approach is to use Java code with Oracle’s JDBC Thin driver, setting the aptly-named v$session.program property, as explained at De Roeptoeter. But I’m hoping to stay with something I can do in SQL*Plus. If you’ve looked through a packet trace of a SQL*Net connection being established, you will know that the program name is passed by the client when the connection is established, so it could be modified by altering the network packet in transit. But that is also complex to get working, as it involves fixing checksums and the like. There’s a post on Slavik’s blog with a sample OCI C program that modifies its program information. Again, more complexity than I’d like, but it gave me an idea: if program is populated with the name of the client-side executable, why don’t we simply copy sqlplus to a name that dbms_xstream_gg likes better?

[oracle@ora11gr2b ~]$ cp $ORACLE_HOME/bin/sqlplus ./replicat
[oracle@ora11gr2b ~]$ ./replicat ggs

SQL*Plus: Release 11.2.0.4.0 Production on Mon Dec 30 14:09:05 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> exec dbms_application_info.set_module('OGG','');

PL/SQL procedure successfully completed.

SQL> exec sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true);

PL/SQL procedure successfully completed.

Success!

Wrapping up

So it looks like you can disable triggers per-session in 11.2.0.4 just like in previous versions, but you need to jump through a few more hoops to do it. A few conclusions to draw:

  • Oracle patchsets, while normally intended to include bugfixes, can have major changes to underlying functionality too. See Jeremy Schneider’s post on adaptive log file sync for an even more egregious example. So before applying a patchset, test thoroughly!
  • The enforcement of full DBA privileges for the GoldenGate user in Oracle 11.2.0.4 requires very broad permissions to use GoldenGate, which can be a concern in security-conscious or consolidated environments.

TL;DR: Yes you can still disable triggers per-session in Oracle 11.2.0.4, but you have to have a GoldenGate license, set the enable_goldengate_replication parameter, use a program name that starts with replicat, and set your module to OGG.

Categories: DBA Blogs

#Oracle Database whoami for Multitenant

The Oracle Instructor - Wed, 2014-01-08 05:49

As an enhancement to the Oracle Database whoami for versions before 12c, this version also shows the name of the container to which the session is connected:

[oracle@linuxbox ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Wed Jan 8 12:34:04 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> @whoami
USER: SYS
SESSION ID: 253
CURRENT_SCHEMA: SYS
INSTANCE NAME: cdb1
CDB NAME: cdb1
CONTAINER NAME: CDB$ROOT
DATABASE ROLE: PRIMARY
OS USER: oracle
CLIENT IP ADDRESS:
SERVER HOSTNAME: linuxbox
CLIENT HOSTNAME: linuxbox

PL/SQL procedure successfully completed.

SQL> connect system/oracle_4U@pdb1
Connected.
SQL> @whoami
USER: SYSTEM
SESSION ID: 253
CURRENT_SCHEMA: SYSTEM
INSTANCE NAME: cdb1
CDB NAME: cdb1
CONTAINER NAME: PDB1
DATABASE ROLE: PRIMARY
OS USER: oracle
CLIENT IP ADDRESS: 555.555.5.555
SERVER HOSTNAME: linuxbox
CLIENT HOSTNAME: linuxbox

PL/SQL procedure successfully completed.

The content of whoami.sql:

set serveroutput on
begin
dbms_output.put_line('USER: '||sys_context('userenv','session_user'));
dbms_output.put_line('SESSION ID: '||sys_context('userenv','sid'));
dbms_output.put_line('CURRENT_SCHEMA: '||sys_context('userenv','current_schema'));
dbms_output.put_line('INSTANCE NAME: '||sys_context('userenv','instance_name'));
dbms_output.put_line('CDB NAME: '||sys_context('userenv','cdb_name'));
dbms_output.put_line('CONTAINER NAME: '||sys_context('userenv','con_name'));
dbms_output.put_line('DATABASE ROLE: '||sys_context('userenv','database_role'));
dbms_output.put_line('OS USER: '||sys_context('userenv','os_user'));
dbms_output.put_line('CLIENT IP ADDRESS: '||sys_context('userenv','ip_address'));
dbms_output.put_line('SERVER HOSTNAME: '||sys_context('userenv','server_host'));
dbms_output.put_line('CLIENT HOSTNAME: '||sys_context('userenv','host'));
end;
/

A shortcut to get the name of the current container is:

SQL> show con_name

CON_NAME
------------------------------
PDB1

You may find that useful in a multitenant environment with many Pluggable Databases within one Container Database :-)
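If you only need the container information, a sys_context one-liner works too (a sketch; con_id is another 12c userenv parameter):

select sys_context('userenv','con_name') as con_name,
       sys_context('userenv','con_id')   as con_id
from dual;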


Tagged: 12c New Features, whoami
Categories: DBA Blogs

Cool Picture of Instance/Database from 12c Concepts

Bobby Durrett's DBA Blog - Tue, 2014-01-07 18:00

I just think this is a cool picture of an Oracle 12c instance and database from Oracle’s 12c Concepts manual (not my own work):

Instance and Database from 12c Concepts

This is from the concepts manual found here: url

- Bobby

Categories: DBA Blogs

Finished reading Oracle Core

Bobby Durrett's DBA Blog - Tue, 2014-01-07 12:37

Just finished reading the book by Jonathan Lewis titled “Oracle Core: Essential Internals for DBAs and Developers“.  I think I picked it up at the Collaborate 13 conference in Denver last April but hadn’t had time (or taken the time) to read it until now.

Reading a book like Oracle Core can be a challenge because it is pretty dense with example scripts and outputs, including dumps in hex.  So, I decided to take the strategy of pushing myself to crash through the book without carefully following every example.  I may only have absorbed about 10% of the material, but if I hadn’t jammed through it I would have gotten 0%!

I picked up Oracle Core because I had read another book by the same author titled “Cost-Based Oracle Fundamentals” which has paid for itself 100 times over in terms of helping me tune queries.  I highly recommend Cost-Based Oracle Fundamentals without reservation.  But, like Oracle Core it can be a challenge to just sit down and read it and follow every SQL example and output.  Probably it would be worth making a first pass focusing on just the English language text and skimming the examples, maybe delving into the examples of most interest.

In the case of Oracle Core I haven’t yet put it to practical use but I’m glad to have at least skimmed through it.  Now I know what’s in it and can refer back to it when needed.

Next I hope to start reading up on Oracle 12c since I plan to get my certification this year.  But, I wanted to finish Oracle Core before I moved on, even if I only read it at a high level.

- Bobby


Categories: DBA Blogs

Oracle GoldenGate (Streams) process disabled – Why?

DBASolved - Tue, 2014-01-07 12:24

It is amazing what Oracle Enterprise Manager 12c will report once it is configured for a product.  One such product is Oracle GoldenGate.  I have stepped into a project where I’m running Oracle GoldenGate between many different environments for production purposes.  Just trying to get a handle on what is going on has been a task.  In talking with the customer, they were starting to implement Oracle Enterprise Manager 12c.  Once OEM was set up, we added the Oracle GoldenGate plug-in and started to monitor the replication environments.

While monitoring the Oracle GoldenGate environments, I noticed a warning in the Incident Manager: “Status for Streams process OGG$_CGGMONX9AC55691 is DISABLED”.  I got to thinking: what is this message about?  More to the point, how do I resolve this warning?  (I like OEM to be quiet.)  I started to look around MOS for answers; to my surprise, not much is written about this message.

[Screenshot: OEM Incident Manager warning for the Streams process]

Oracle GoldenGate classic capture doesn’t report these types of messages within Oracle Enterprise Manager 12c; classic capture mostly reports the up and down status of Oracle GoldenGate processes.  This message had to be coming from some integrated version of the extract (the first clue was the word Streams).  Keeping in mind that Streams may be involved in some way, the DBA_CAPTURE view should be able to shine a bit of light on this warning.

From a SQL*Plus prompt or a SQL IDE (I prefer SQL Developer), the DBA_CAPTURE view can be queried.

[Screenshot: DBA_CAPTURE query output showing the capture record]
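Since that output is a screenshot, here is roughly the query behind it (a sketch; the column list is trimmed to the ones that matter here):

-- Capture processes, their state, and what they were registered for
select capture_name, status, purpose, queue_owner, queue_name
from dba_capture;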

From looking at the STATUS column, I verified that I had found the correct record.  The PURPOSE column shows that this extract (capture) is being used for Streams.  What!?  Wait a minute, I’m using Oracle GoldenGate.

Yes, Oracle GoldenGate is being used.  If there is information in the DBA_CAPTURE view, it is because the extract has been registered (integrated) with the database somehow.  The status is DISABLED, an indicator that this extract was registered for logretention:

GGSCI> stop extract cggmonx
GGSCI> dblogin userid ggate password ggate
GGSCI> register extract cggmonx logretention
GGSCI> start extract cggmonx

Now that it is understood that the extract has been registered for log retention, what does this actually mean?

According to the Oracle GoldenGate 11g Reference Guide, an extract can be registered in one of two modes:

1. Database – Enables integrated capture for the Extract group. In this mode, Extract integrates with the database logmining server to receive change data in the form of logical change records (LCRs). Extract does not read the redo logs. Extract performs capture processing, filtering, transformation, and other requirements.

2. Logretention – Enables an Extract group, in classic capture mode, to work with Oracle Recovery Manager (RMAN) to retain the logs that Extract needs for recovery.

As indicated a few lines up, this extract has been registered with logretention.  This means that the extract creates an underlying Streams capture process and prevents RMAN from removing any archive logs that may be needed for replication of data.  As part of creating the underlying Streams structure, Oracle creates a queue under the Oracle GoldenGate owner (the queue name can also be found in the DBA_CAPTURE view).

[Screenshot: queue created under the Oracle GoldenGate owner]

Now that the root cause of the DISABLED message in Oracle Enterprise Manager 12c has been identified, how can the message be resolved?

The simplest way is to unregister the extract from logretention, since this Oracle GoldenGate configuration is using classic capture.  Keep in mind that once the extract is unregistered, retention of archive logs will not be enforced when RMAN backs them up and possibly removes them.  Make sure your RMAN retention policies are what you expect them to be.

To unregister an extract that is using logretention, use the steps below:

GGSCI> stop extract cggmonx
GGSCI> dblogin userid ggate password ggate
GGSCI> unregister extract cggmonx logretention
GGSCI> start extract cggmonx


Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Golden Gate, OEM
Categories: DBA Blogs

schema validation scripts and alter session set current_schema ... make me not so grumpy

Grumpy old DBA - Tue, 2014-01-07 07:14
Believe it or not, many DBAs/developers are unaware of (or have forgotten) how to "switch into" a different schema.

Oracle has had this option for a long time:

alter session set current_schema = SOME_SCHEMA_NAME;

This does not give you full schema-owner capabilities (that depends on what capabilities your login session has) but it can be very useful.  For instance, in a script that validates that all the expected objects exist and are at the correct version, you could use it like this:

set echo off
set feedback on
set heading off
set linesize 168
set serveroutput on size unlimited
set term on

alter session set current_schema = FIRST_SCHEMA_BEING_CHECKED;

BEGIN
  validate_objects.bv_show_valid_messages := TRUE;
  validate_objects.bv_stop_on_error := FALSE;

  dbms_output.put_line(chr(13));
  dbms_output.put_line('===============================================================================');
  dbms_output.put_line(chr(13));
 
  -- a bunch of calls against a validation package ... check that tables exist / views exist / foreign keys exist / indexes exist / packages, procedures, and functions exist / data exists

 
  -- at the end, check that all objects are valid in the schema (see the sketch after this block)
 
END;
/
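The validate_objects package above is the author's own; as one concrete example of that final validity check, a minimal sketch against the data dictionary could look like this:

-- List any invalid objects in the schema we switched into
select object_name, object_type, status
from all_objects
where owner = 'FIRST_SCHEMA_BEING_CHECKED'
and status != 'VALID'
order by object_type, object_name;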
Categories: DBA Blogs

12c Online Partitioned Table Reorganisation Part I (Prelude)

Richard Foote - Mon, 2014-01-06 22:07
First post for 2014 !! Although it’s generally not an overly common activity with Oracle databases, reorganising a table can be somewhat painful, primarily because of the associated locking implications and the impact it has on indexes. If we look at the following example: So we have a table with a couple of indexes. We […]
Categories: DBA Blogs

The Twelve Days of NoSQL: Day Twelve: Concluding Remarks

Iggy Fernandez - Mon, 2014-01-06 21:10
Day One: Disruptive Innovation Day Two: Requirements and Assumptions Day Three: Functional Segmentation Day Four: Sharding Day Five: Replication and Eventual Consistency Day Six: The False Premise of NoSQL Day Seven: Schemaless Design Day Eight: Oracle NoSQL Database Day Nine: NoSQL Taxonomy Day Ten: Big Data Day Eleven: Mistakes of the relational camp Day Twelve: […]
Categories: DBA Blogs

The Twelve Days of NoSQL: Day Eleven: Mistakes of the relational camp

Iggy Fernandez - Sun, 2014-01-05 21:35
On the eleventh day of Christmas, my true love gave to me Eleven pipers piping. (Yesterday: Big Data in a Nutshell)(Tomorrow: Concluding Remarks) Over a lifespan of four and a half decades, the relational camp made a series of strategic mistakes that made NoSQL possible. The mistakes started very early. The biggest mistake is enshrined […]
Categories: DBA Blogs

Partner Webcast – Oracle Engineered Systems & Partner Service Opportunities

The old way to buy servers, storage and networking equipment in different cycles has reached its end. Today we see a trend towards buying integrated systems that are tested, certified, sold and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

The Twelve Days of NoSQL: Day Ten: Big Data in a Nutshell

Iggy Fernandez - Sun, 2014-01-05 01:11
On the tenth day of Christmas, my true love gave to me Ten lords a-leaping. (Yesterday: NoSQL Taxonomy)(Tomorrow: Mistakes of the relational camp) The topic of Big Data is often brought up in NoSQL discussions so let’s give it a nod. In 1998, Sergey Brin and Larry Page invented the PageRank algorithm for ranking web […]
Categories: DBA Blogs

Programming Elastic MapReduce Using AWS Services to Build an End-to-End Application

Surachart Opun - Fri, 2014-01-03 22:59
Amazon Elastic MapReduce (Amazon EMR) is a web service that makes it easy to quickly and cost-effectively process vast amounts of data. Amazon EMR uses Hadoop, an open source framework, to distribute your data and processing across a resizable cluster of Amazon EC2 instances.
Anyway, if you are looking for a book about programming Elastic MapReduce, I can mention one: Programming Elastic MapReduce: Using AWS Services to Build an End-to-End Application by Kevin Schmidt and Christopher Phillips.
This book gives readers best practices for using Amazon EMR and various AWS and Apache technologies. Readers will learn much more about:
  • Get an overview of the AWS and Apache software tools used in large-scale data analysis
  • Go through the process of executing a Job Flow with a simple log analyzer
  • Discover useful MapReduce patterns for filtering and analyzing data sets
  • Use Apache Hive and Pig instead of Java to build a MapReduce Job Flow
  • Learn the basics for using Amazon EMR to run machine learning algorithms
  • Develop a project cost model for using Amazon EMR and other AWS tools
The book shows readers how to use the Amazon EC2 Services Management Console and provides good examples throughout. It helps if readers create an AWS account and use it to follow along with the examples. The illustrations and examples are very helpful and make the book easy to read and follow.



Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

The Twelve Days of NoSQL: Day Nine: NoSQL Taxonomy

Iggy Fernandez - Fri, 2014-01-03 04:23
On the ninth day of Christmas, my true love gave to me Nine ladies dancing. (Yesterday: Oracle NoSQL Database)(Tomorrow: Big Data in a Nutshell) NoSQL databases can be classified into the following categories: Key-value stores: The archetype is Amazon Dynamo of which DynamoDB is the commercial successor. Key-value stores basically allow applications to “put” and […]
Categories: DBA Blogs

Demo Links To All Older Posts Now Accessible (Chains)

Richard Foote - Thu, 2014-01-02 22:05
OK, for a quite some time (too long probably !!!) people have been sending me emails and leaving comments that they have been unable to access a number of the demos to my older posts and those listed in my Presentations and Demos page. I previously would write an article but include a demo that […]
Categories: DBA Blogs

Top Ten Posts So Far

Bobby Durrett's DBA Blog - Thu, 2014-01-02 15:04

Just for fun I’ve pasted in a table listing the top 10 most viewed posts on this blog, including the total number of views since the blog began in March 2012.  I based this on WordPress’s statistics, so I’m not sure exactly how the blog software collects the numbers, but it is fun to get some positive feedback.  Hopefully it means people are getting something out of the blog; I’m certainly enjoying putting it together.  Here are the posts ordered by views:

Post                                                       Views
cell single block physical read                            3,738
REGEXP_LIKE Example                                        2,822
Finding query with high temp space usage using ASH views   2,232
DBA_HIST_ACTIVE_SESS_HISTORY                               2,097
CPU queuing and library cache: mutex X waits               1,801
DBMS_SPACE.SPACE_USAGE                                     1,748
Resource Manager wait events                               1,566
Fast way to copy data into a table                         1,166
Delphix First Month                                        1,074
use_nl and use_hash hints for inner tables of joins        1,047

Anyway, if you want to read the posts with the most views, and possibly the most useful ones, the list above is the place to start.

- Bobby


Categories: DBA Blogs

Statistics gathering and SQL Tuning Advisor

Pythian Group - Thu, 2014-01-02 07:50

Our monitoring software found a long-running job on one of our client’s databases. The job was an Oracle auto task running statistics gathering for more than 3 hours. I was curious to know why it took so long, and used a query against ASH to find the most common SQL during the job run, based on the module name. The results surprised me: the top SQL came with a SQL Tuning Advisor comment.

Here is the SQL I used:

SQL> select s.sql_id, t.sql_text, s.cnt
  2  from
  3    (select *
  4     from
  5      (
  6        select sql_id, count(*) cnt
  7        from v$active_session_history
  8        where action like 'ORA$AT_OS_OPT_SY%'
  9        group by sql_id
 10        order by count(*) desc
 11      )
 12     where rownum <= 5
 13    ) s,
 14    dba_hist_sqltext t
 15  where s.sql_id = t.sql_id;

SQL_ID        SQL_TEXT                                                                                CNT
------------- -------------------------------------------------------------------------------- ----------
020t65s3ah2pq select substrb(dump(val,16,0,32),1,120) ep, cnt from (select /*+ no_expand_table        781
byug0cc5vn416 /* SQL Analyze(1) */ select /*+  full(t)    no_parallel(t) no_parallel_index(t)          43
bkvvr4azs1n6z /* SQL Analyze(1) */ select /*+  full(t)    no_parallel(t) no_parallel_index(t)          21
46sy4dfg3xbfn /* SQL Analyze(1) */ select /*+  full(t)    no_parallel(t) no_parallel_index(t)        1559

So most of the queries have a “SQL Analyze” comment right at the beginning of the SQL text, yet they are running from a DBMS_STATS call, which is confusing. After some bug searching I found MOS Doc ID 1480132.1, which includes a PL/SQL stack trace from a DBMS_STATS procedure call going up to DBMS_SQLTUNE_INTERNAL. That means DBMS_STATS calls into the SQL Tuning package; very odd:

SQL> select * from dba_dependencies where name = 'DBMS_STATS_INTERNAL' and referenced_name = 'DBMS_SQLTUNE_INTERNAL';

OWNER                          NAME                           TYPE               REFERENCED_OWNER       REFERENCED_NAME
------------------------------ ------------------------------ ------------------ ------------------------------ ----------------------------------
REFERENCED_TYPE    REFERENCED_LINK_NAME                                                                                                     DEPE
------------------ -------------------------------------------------------------------------------------------------------------------------------
SYS                            DBMS_STATS_INTERNAL            PACKAGE BODY       SYS                    DBMS_SQLTUNE_INTERNAL
PACKAGE                                                                                                                                     HARD

Turns out, this call has nothing to do with SQL Tuning. It is just a call to a procedure in this package which happens to look like SQL from the SQL Tuning Advisor. I traced a GATHER_TABLE_STATS call in a test database, first with SQL trace and then with DBMS_HPROF.
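For anyone who hasn’t used the hierarchical profiler, collecting the profile looks roughly like this (a sketch; it assumes a directory object PLSHPROF_DIR exists and that the profiler tables have been created with ?/rdbms/admin/dbmshptab.sql):

declare
  l_runid number;
begin
  dbms_hprof.start_profiling(location => 'PLSHPROF_DIR',
                             filename => 'gather_stats.trc');
  dbms_stats.gather_table_stats(ownname => 'TIM', tabname => 'T1');
  dbms_hprof.stop_profiling;
  -- load the raw profile into the DBMSHP_* tables; returns the run id
  l_runid := dbms_hprof.analyze(location => 'PLSHPROF_DIR',
                                filename => 'gather_stats.trc');
  dbms_output.put_line('runid = ' || l_runid);
end;
/

With the profile loaded into the DBMSHP_* tables, here is what the call tree looks like: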

SELECT RPAD(' ', (level-1)*2, ' ') || fi.owner || '.' || fi.module AS name,
       fi.function,
       pci.subtree_elapsed_time,
       pci.function_elapsed_time,
       pci.calls
FROM   dbmshp_parent_child_info pci
       JOIN dbmshp_function_info fi ON pci.runid = fi.runid AND pci.childsymid = fi.symbolid
WHERE  pci.runid = 1
CONNECT BY PRIOR childsymid = parentsymid
  START WITH pci.parentsymid = 27;
NAME                                     FUNCTION                       SUBTREE_ELAPSED_TIME FUNCTION_ELAPSED_TIME                CALLS
---------------------------------------- ------------------------------ -------------------- --------------------- --------------------
...
SYS.DBMS_STATS_INTERNAL                  GATHER_SQL_STATS                           21131962                 13023                    1
  SYS.DBMS_ADVISOR                       __pkg_init                                       88                    88                    1
  SYS.DBMS_SQLTUNE_INTERNAL              GATHER_SQL_STATS                           21118776                  9440                    1
    SYS.DBMS_SQLTUNE_INTERNAL            I_PROCESS_SQL                              21107094              21104225                    1
      SYS.DBMS_LOB                       GETLENGTH                                        37                    37                    1
      SYS.DBMS_LOB                       INSTR                                            42                    42                    1
      SYS.DBMS_LOB                       __pkg_init                                       15                    15                    1
      SYS.DBMS_SQLTUNE_INTERNAL          I_VALIDATE_PROCESS_ACTION                        74                    39                    1
        SYS.DBMS_UTILITY                 COMMA_TO_TABLE                                   35                    35                    1
      SYS.DBMS_SQLTUNE_UTIL0             SQLTEXT_TO_SIGNATURE                            532                   532                    1
      SYS.DBMS_SQLTUNE_UTIL0             SQLTEXT_TO_SQLID                                351                   351                    1
      SYS.XMLTYPE                        XMLTYPE                                        1818                  1818                    1
    SYS.DBMS_SQLTUNE_UTIL0               SQLTEXT_TO_SQLID                                528                   528                    1
    SYS.DBMS_UTILITY                     COMMA_TO_TABLE                                   88                    88                    1
    SYS.DBMS_UTILITY                     __pkg_init                                       10                    10                    1
    SYS.SQLSET_ROW                       SQLSET_ROW                                       33                    33                    1
    SYS.XMLTYPE                          XMLTYPE                                        1583                  1583                    1
  SYS.DBMS_STATS_INTERNAL                DUMP_PQ_SESSTAT                                  73                    73                    1
  SYS.DBMS_STATS_INTERNAL                DUMP_QUERY                                        2                     2                    1
...

So there is a procedure DBMS_SQLTUNE_INTERNAL.GATHER_SQL_STATS which is being called by DBMS_STATS_INTERNAL, and this procedure actually runs a SQL like this:

/* SQL Analyze(0) */ select /*+  full(t)    no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring no_substrb_pad  */to_char(count("ID")),to_char(substrb(dump(min("ID"),16,0,32),1,120)),to_char(substrb(dump(max("ID"),16,0,32),1,120)),to_char(count("X")),to_char(substrb(dump(min("X"),16,0,32),1,120)),to_char(substrb(dump(max("X"),16,0,32),1,120)),to_char(count("Y")),to_char(substrb(dump(min("Y"),16,0,32),1,120)),to_char(substrb(dump(max("Y"),16,0,32),1,120)),to_char(count("PAD")),to_char(substrb(dump(min("PAD"),16,0,32),1,120)),to_char(substrb(dump(max("PAD"),16,0,32),1,120)) from "TIM"."T1" t  /* NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL*/

This is basically the approximate NDV calculation. So, nothing to be afraid of; it’s just the way the code is organized: DBMS_STATS uses the API of the SQL Tuning framework when you use DBMS_STATS.AUTO_SAMPLE_SIZE as the ESTIMATE_PERCENT (which is the default and recommended value in 11g+).
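For completeness, the kind of DBMS_STATS call that goes down this code path is simply the default one (a sketch against the TIM.T1 table seen in the trace; AUTO_SAMPLE_SIZE is already the default, so spelling it out is purely illustrative):

begin
  dbms_stats.gather_table_stats(
    ownname          => 'TIM',
    tabname          => 'T1',
    estimate_percent => dbms_stats.auto_sample_size);
end;
/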

Categories: DBA Blogs