Feed aggregator

OpenFiler - lost iSCSI disks after reboot. How To Restore

Hans Forbrich - Sat, 2013-07-06 20:40
I've been using OpenFiler 2.99 as a NAS and iSCSI server for some tests.  

A few days ago a power outage caused me some problems, so I got another UPS and added the OpenFiler system to it.  Unfortunately, even though I cleanly shut the filer down, the same problems appeared after the reboot.

This post is mostly a record for myself, for when I need to redo this exercise.

Environment:
  OpenFiler 2.99  x86_64 NAS Appliance
    - obtained from http://openfiler.com/community/download
    - built on a home-brew system with one 500 GB base drive & four 1.5 TB Seagate SATA drives

Symptoms:
  After reboot, iSCSI disks were not presented.

Detailed symptoms & steps, as root:

1)  The 4 SATA drives were visible, and the partitions were listed in /dev
      (/dev/sdb1 -> /dev/sde1).  However, pvdisplay could not see the partitions.

     I ran fdisk on each disk, rewrote the partition table, and reran pvdisplay:

 # fdisk /dev/sdb
        ...
 Command (m for help):  w
 # pvdisplay
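To script step 1 across all four disks, a minimal sketch (assuming the disk names above) would be:

 # for d in /dev/sd[b-e]; do echo w | fdisk $d; done
 # pvdisplay

Here echo feeds the 'w' (write) command to fdisk, which rewrites each partition table unchanged.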

2) The logical volumes were there, but with status 'NOT available'.

  I ran vgchange to activate them:

# vgchange -a y
# lvdisplay

3) The iSCSI partitions were still not being presented, apparently because the LUNs were stuck.

Command Line, as root
  went to /opt/openfiler/iscsi/targets
  made a backup of iscsi_settings.xml
  removed all the LUN entries from iscsi_settings.xml

OpenFiler admin web site (https://myhost:446/)
  Services > Services Section : Manage Services > iSCSI Target ... stopped and started
  Volumes > Volumes Section : iSCSI Targets > LUN Mapping ... remapped all LUNs

After remapping, all were visible again.

Categories: DBA Blogs

EM 12.1.0.3 interesting feature - Deploying Oneoff patches to Agents

Fuad Arshad - Fri, 2013-07-05 09:19
Enterprise Manager 12c already allows deploying one-off patches through the provisioning and patching module, but what if you need to deploy a lot of agents and don't want to keep patching them after the fact? EM 12.1.0.3 has a new feature that lets you stage management agent and plug-in patches on the OMS for a particular agent version; those patches are then applied automatically when an agent or plug-in is deployed or upgraded. This is particularly useful when you need to deploy or upgrade in bulk and would otherwise have to apply one-off patches across the environment as well.

The documentation for this feature is available in the Technet EM docs. The feature allows for generic patches by putting them on each OMS (in a multi-OMS setup this needs to be done on every OMS). Create a directory like:

$<OMS_HOME>/install/oneoffs/<agent_version>/Generic/
e.g.
$<OMS_HOME>/install/oneoffs/12.1.0.3.0/Generic/
or, for platform-specific patches:
$<OMS_HOME>/install/oneoffs/<agent_version>/<platform>/
e.g.
$<OMS_HOME>/install/oneoffs/12.1.0.3.0/linux_x64/


On deployment or upgrade, the patches are applied automatically. They can be validated using the usual methods: either the Manage Cloud Control --> Agents screens, or opatch lsinventory on the agent.
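For example, a quick check from the agent host (<AGENT_HOME> is a placeholder for your agent installation home):

$ cd <AGENT_HOME>/OPatch
$ ./opatch lsinventory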

This is a very useful feature and will allow for rapid deployment and upgrades of agents without having to worry about applying one-off patches later.



Connect to pluggable database

Bas Klaassen - Fri, 2013-07-05 04:15
After creating the second pluggable database, the next step is to connect to the pluggable databases and create a tablespace in one of them. Before connecting, be sure the pluggable databases are listed in the listener.ora:

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT =
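A minimal sketch of that connection and tablespace creation, assuming a PDB service named pdb1 and an illustrative datafile path:

$ sqlplus sys/oracle@//localhost:1521/pdb1 as sysdba
SQL> CREATE TABLESPACE test_ts
  2  DATAFILE '/u01/app/oracle/oradata/pdb1/test_ts01.dbf' SIZE 100M;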
Categories: APPS Blogs

Major Price Cuts in Essbase, OBIEE, BIFS, and OSSM

Look Smarter Than You Are - Thu, 2013-07-04 19:20
Pricing Went Down 25-40%

While Oracle is pretty good at giving discounts off list price, it's rare when they actually cut their list prices.  Shockingly, they just lowered (for what I believe is the first time since these products made it onto the price lists) the per-processor prices on several of their Business Intelligence offerings: Essbase, OBIEE (Oracle Business Intelligence Enterprise Edition), BIFS (Business Intelligence Foundation Suite), and OSSM (Oracle Scorecard & Strategy Management).

Per the price list dated June 25, 2013, the per processor prices have dropped substantially:

  • Essbase went from $184,000 to $138,000.  That's a 25% decrease.
  • OBIEE went from $295,000 to $221,250.  That's also a 25% decrease.
  • BIFS went from $450,000 to $300,000.  That's a 33% decrease.
  • OSSM went from $149,250 to $89,550.  That's a 40% decrease.
Now think about this for a second.  BIFS (Business Intelligence Foundation Suite) comes with Essbase, OBIEE, OSSM, and a few other fun things like EAL4HFM (Essbase Analytics Link for HFM).  BIFS was already a great deal: buying Essbase, OBIEE, and OSSM separately set you back $628,250, while the bundle cost only $450,000, a 28% saving over the separate components.  Now those separate components list at $448,800, while the BIFS bundle at $300,000 is a 33% discount off the components bought separately.

In other words, you now get OBIEE, Essbase, OSSM, and some other products for just $5,000 more per processor than OBIEE alone cost 2 weeks ago (it was $295,000, remember).  The named-user costs for these products have not changed, which means these price cuts are aimed squarely at enterprise customers: companies looking to adopt Oracle Business Analytics across their organization.  Considering the prices above are list, enterprise customers should be getting a discount off those prices, which makes processor licensing start to look very attractive for large deployments.

Core Factors
Also remember that Oracle doesn't charge this full price for every core on the processor.  They have a "processor factor" which charges less per core.  Depending on the type of processor, the Oracle Processor Core Factor Table will charge between 25% and 100% of the list processor price.

Take Exalytics X2-4, for example.  It has 4 Intel Xeon E7-4800 chips in it.  Each of those chips has 10 cores, giving you 40 cores in total.  Based on the processor factor, these cores count as only half a processor each.  In other words, to license a full X2-4 you'd need to pay for 20 processor licenses, which at the new $300,000 price means a list of $6,000,000.  That's the maximum (not including tax, maintenance, TimesTen, etc.) that you'd pay, but it would presumably come in at less than that, which is really impressive for licensing an entire Exalytics box for unlimited users.  Unlimited, people.  Your whole organization could access OBIEE and Essbase for at most $6MM in software.
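Spelled out, the licensing arithmetic is:

  4 chips x 10 cores = 40 cores
  40 cores x 0.5 core factor = 20 processor licenses
  20 licenses x $300,000 = $6,000,000 list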

This may be the pricing discount your company needs to buy unlimited user licenses of Oracle Business Analytics.  And don't hold your breath for Oracle to drop any more list prices.  Take it as a gift and buy it before they change their minds.

Update as of 7-15-13: According to an article on Information Week, during the release of Exalytics X3-4, Paul Rodwick was asked about the recent price decreases mentioned above.  He gave the intriguing response that while the prices did go down, it's "old news" because Oracle stealthily did it 9 months ago.  While I don't have the technology price list he's referring to (if you do, post a link to it in the comments), here's Paul's quote:
The cost for BI Foundation Suite on a named-user basis has never been changed, but about nine months ago we adjusted per-CPU pricing in part because we were seeing more customers want to license the full complement of Exalytics.
Categories: BI & Warehousing

Going Production...

Dominic Giles - Thu, 2013-07-04 13:48

This blog is going production... Just like Oracle Database 12c.

 Comments and code snippets to follow

Oracle 12c - Session-specific statistics for global temporary tables

Mihajlo Tekic - Wed, 2013-07-03 23:11
It’s been more than a week since Oracle Database 12c officially became available. I spent the very first night reading the documentation, the New Features chapter in particular.

New version, new features, improvements, enhancements, some more exciting than others. One, however, intrigued me a bit: session-specific statistics for global temporary tables. From time to time I am involved in optimizing queries that use global temporary tables, so I am well aware of the challenges that come with them.

As the name suggests, temporary tables contain temporary data. The data is visible only to the current session and persists either until the next commit or until the session ends. Either way, two sessions may populate the same temporary table with different data and concurrently run identical queries against it.

As we all know, queries are executed using an execution plan generated by the optimizer. The efficiency of the execution plan depends upon the information the optimizer has about the data at the time the query is parsed. In my opinion, in order to get good execution plans (I am deliberately avoiding saying optimal plans), accurate statistics are the most important piece of information you can feed the optimizer.

However, when temporary tables are involved, it is often a challenge for the optimizer to generate efficient execution plan(s) that will be appropriate for all use cases.

In some cases, when temporary tables are populated with data having similar characteristics, it is common practice to collect appropriate statistics and fix them (dynamic sampling may also fit this case) so the optimizer produces efficient and stable execution plan(s): one size fits all.
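For example, a sketch of that fix-and-lock approach (the schema, table name and values are illustrative):

 exec dbms_stats.set_table_stats(ownname=>'TEST', tabname=>'MY_GTT', numrows=>1000, numblks=>10);
 exec dbms_stats.lock_table_stats(ownname=>'TEST', tabname=>'MY_GTT');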

However, often temporary tables do not have statistics simply because it is difficult to find ones that will cover all use cases.

Consider the following example:

There is table MY_OBJECTS created using the following CTAS statement:

 CREATE TABLE my_objects AS  
SELECT * FROM all_objects
, (SELECT * FROM dual CONNECT BY rownum<=10);

A table MY_TYPES that contains all distinct object types derived from ALL_OBJECTS:

 CREATE TABLE my_types AS  
SELECT DISTINCT object_type FROM all_objects;

A table T that has the same structure as MY_OBJECTS

 CREATE TABLE t AS   
SELECT * FROM my_objects WHERE 1=2;

And a global temporary table TEMP_OBJECTS that has only one column which will accept OBJECT_IDs

 CREATE GLOBAL TEMPORARY TABLE temp_objects   
(object_id NUMBER NOT NULL)
ON COMMIT PRESERVE ROWS;

Collect statistics for MY_OBJECTS and MY_TYPES tables

 EXEC dbms_stats.gather_table_stats(ownname=>'TEST',tabname=>'MY_OBJECTS',cascade=>true);  
EXEC dbms_stats.gather_table_stats(ownname=>'TEST',tabname=>'MY_TYPES',cascade=>true);

Let's suppose there are two sessions that load some data into TEMP_OBJECTS and then execute the following insert statement:

 INSERT INTO t  
SELECT /*+ gather_plan_statistics */ o.*
FROM my_objects o
, my_types t
, temp_objects tt
WHERE o.object_type=t.object_type
AND o.object_id=tt.object_id;

The first session (SID 22) inserts one record into TEMP_OBJECTS and then executes the insert statement from above by invoking the insert_1.sql script.


 test[22,39]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1);  

1 row created.

test[22,39]> commit;

Commit complete.

test[22,39]> @insert_1

10 rows created.



Let's check the execution plan being used. It is based on Merge Cartesian Join, which is somewhat acceptable in this case.
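(The explain_last script itself is not shown in the post; a likely minimal equivalent, assuming it simply wraps DBMS_XPLAN, is:

 SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));

which reports the plan and runtime statistics of the last statement executed in the session.)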

 test[22,39]> @explain_last  

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 0  
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id

Plan hash value: 3579371359

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:02.56 | 14067 | 14028 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:02.56 | 14067 | 14028 | | | |
|* 2 | HASH JOIN | | 1 | 10 | 10 |00:00:00.04 | 14036 | 14028 | 1519K| 1519K| 1521K (0)|
| 3 | MERGE JOIN CARTESIAN | | 1 | 38 | 38 |00:00:00.02 | 6 | 2 | | | |
| 4 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1 | 1 |00:00:00.01 | 3 | 0 | | | |
| 5 | BUFFER SORT | | 1 | 38 | 38 |00:00:00.02 | 3 | 2 | 73728 | 73728 | |
| 6 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.02 | 3 | 2 | | | |
| 7 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:02.09 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE" AND "O"."OBJECT_ID"="TT"."OBJECT_ID")

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


30 rows selected.



The session with SID 251 kicks in a moment later, loading 1000 rows into TEMP_OBJECTS and running the same insert statement. Since the same SQL statement has already been parsed, this session will reuse the existing cursor. However, in this case the plan being reused is one that should be avoided, since the cartesian join operation may severely impact performance:

 test[251,65]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1000);  

1000 rows created.

test[251,65]> @insert_1

10000 rows created.

test[251,65]> @explain_last

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 0
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id

Plan hash value: 3579371359

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:03.87 | 15495 | 14026 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:03.87 | 15495 | 14026 | | | |
|* 2 | HASH JOIN | | 1 | 10 | 10000 |00:00:00.22 | 14037 | 14026 | 3162K| 2024K| 3101K (0)|
| 3 | MERGE JOIN CARTESIAN | | 1 | 38 | 38000 |00:00:00.73 | 7 | 0 | | | |
| 4 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1 | 1000 |00:00:00.02 | 4 | 0 | | | |
| 5 | BUFFER SORT | | 1000 | 38 | 38000 |00:00:00.21 | 3 | 0 | 73728 | 73728 | |
| 6 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | | | |
| 7 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:01.97 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE" AND "O"."OBJECT_ID"="TT"."OBJECT_ID")

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


30 rows selected.


As expected, the same cursor was reused by both sessions.

 sys[263,185]> select plan_hash_value, child_number, CHILD_ADDRESS, executions, invalidations, is_shareable, is_obsolete from v$sql where sql_id='8a1c7phuna9vn';  

PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS EXECUTIONS INVALIDATIONS I I
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 2 0 Y N

sys[263,185]>


As you can sense, plan stability is one of the biggest challenges when dealing with queries that depend on temporary tables.

There are several strategies that I am aware of for attacking this problem. They include: stored outlines; fixed statistics; baselines; hints (the cardinality hint, for example); periodically invalidating plans; using separate workspaces (schemas) for different types of workloads; and a couple of others, more or less effective. Each of these comes with its own limitations.
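For illustration, the cardinality-hint variant against the tables used below would look something like this (a sketch of the technique, not a recommendation):

 SELECT /*+ cardinality(tt 1000) */ o.*
 FROM my_objects o, my_types t, temp_objects tt
 WHERE o.object_type = t.object_type
 AND o.object_id = tt.object_id;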

With Oracle 12c users have an ability to gather session specific statistics for global temporary tables. This feature seems to nicely address the challenges from above.

When session-specific statistics are used, each session collects “private” statistics and has the CBO generate an execution plan/cursor based on them. These cursors are not shared with other sessions. Also, a cursor generated using session-specific statistics is invalidated when statistics are re-collected within the same session.

Session-specific statistics are enabled when the GLOBAL_TEMP_TABLE_STATS preference is set to SESSION.

 exec dbms_stats.set_table_prefs(ownname=>'TEST', tabname=>'TEMP_OBJECTS',pname=>'GLOBAL_TEMP_TABLE_STATS',pvalue=>'SESSION');  

Once the preference is set, all one needs to do is gather statistics; the statistics collected will then be session-private.
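You can verify that the preference took effect with dbms_stats.get_prefs:

 SELECT dbms_stats.get_prefs('GLOBAL_TEMP_TABLE_STATS','TEST','TEMP_OBJECTS') FROM dual;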

And that is all. Now let’s see which plans are generated for each of the sessions if we repeat the example from above with session statistics in use.

Session with SID 251 loads 1000 rows into TEMP_OBJECTS table, collects stats and executes the same insert statement.

 test[251,69]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1000);  

1000 rows created.

test[251,69]> exec dbms_stats.gather_table_stats(ownname=>'TEST', tabname=>'TEMP_OBJECTS');

PL/SQL procedure successfully completed.

test[251,69]> @insert_1

10000 rows created.

Now the execution plan used is based on a HASH JOIN rather than a MERGE JOIN CARTESIAN operation:

 test[251,69]> @explain_last  

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 1
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id

Plan hash value: 4256520316

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:02.71 | 15514 | 14026 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:02.71 | 15514 | 14026 | | | |
|* 2 | HASH JOIN | | 1 | 9858 | 10000 |00:00:00.07 | 14037 | 14026 | 1696K| 1696K| 1583K (0)|
| 3 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | | | |
|* 4 | HASH JOIN | | 1 | 9858 | 10000 |00:00:00.04 | 14034 | 14026 | 2293K| 2293K| 1607K (0)|
| 5 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1000 | 1000 |00:00:00.01 | 4 | 0 | | | |
| 6 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:01.64 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE")
4 - access("O"."OBJECT_ID"="TT"."OBJECT_ID")

Note
-----
- Global temporary table session private statistics used


30 rows selected.

Note the “Note” - Global temporary table session private statistics used.

If you check v$sqlarea or v$sql you will see there are two child cursors for the same sql id.


 sys[263,185]> select plan_hash_value, child_number, CHILD_ADDRESS, executions, invalidations, is_shareable, is_obsolete from v$sql where sql_id='8a1c7phuna9vn';  

PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS EXECUTIONS INVALIDATIONS I I
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 2 0 Y N
4256520316 1 00000000625A3928 1 0 Y N

sys[263,185]>


It is worth mentioning that sessions that use the shared statistics will keep using them, without interfering with the session-specific stats and cursors.

For example, the session with SID 26 loads 1000 rows but doesn't collect stats on the TEMP_OBJECTS table. The insert statement in this case will use the globally available cursor (child cursor 0):

   
test[26,145]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1000);

1000 rows created.

test[26,145]> commit;

Commit complete.

test[26,145]> @insert_1

10000 rows created.

test[26,145]> @explain_last

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 0
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id

Plan hash value: 3579371359

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:03.71 | 15513 | 14026 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:03.71 | 15513 | 14026 | | | |
|* 2 | HASH JOIN | | 1 | 10 | 10000 |00:00:00.13 | 14039 | 14026 | 3162K| 2024K| 3114K (0)|
| 3 | MERGE JOIN CARTESIAN | | 1 | 38 | 38000 |00:00:00.64 | 9 | 0 | | | |
| 4 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1 | 1000 |00:00:00.01 | 6 | 0 | | | |
| 5 | BUFFER SORT | | 1000 | 38 | 38000 |00:00:00.19 | 3 | 0 | 73728 | 73728 | |
| 6 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | | | |
| 7 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:01.88 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE" AND "O"."OBJECT_ID"="TT"."OBJECT_ID")

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


30 rows selected.

sys[263,185]> select plan_hash_value, child_number, CHILD_ADDRESS, executions, invalidations, is_shareable, is_obsolete from v$sql where sql_id='8a1c7phuna9vn';

PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS EXECUTIONS INVALIDATIONS I I
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 3 0 Y N
4256520316 1 00000000625A3928 1 0 Y N


Let's suppose the session with SID 251 modifies the TEMP_OBJECTS table: it deletes its content and loads only 1 row, then refreshes the session-specific statistics. In this case we expect the plan with hash value 3579371359 to be used.

 test[251,69]> DELETE FROM temp_objects;  

1000 rows deleted.

test[251,69]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1);

1 row created.

test[251,69]> commit;

Commit complete.

test[251,69]> exec dbms_stats.gather_table_stats(ownname=>'TEST', tabname=>'TEMP_OBJECTS');

PL/SQL procedure successfully completed.

test[251,69]>



If you check the cursor state (valid/invalid) you can see the session private cursor (child cursor 1) was just invalidated.

 PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS  EXECUTIONS INVALIDATIONS I I  
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 3 0 Y N
4256520316 1 00000000625A3928 1 1 Y N

test[251,69]> @insert_1

10 rows created.

test[251,69]> @explain_last

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 1
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id

Plan hash value: 3579371359

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:02.43 | 14051 | 14026 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:02.43 | 14051 | 14026 | | | |
|* 2 | HASH JOIN | | 1 | 10 | 10 |00:00:00.01 | 14037 | 14026 | 1519K| 1519K| 1426K (0)|
| 3 | MERGE JOIN CARTESIAN | | 1 | 38 | 38 |00:00:00.01 | 7 | 0 | | | |
| 4 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1 | 1 |00:00:00.01 | 4 | 0 | | | |
| 5 | BUFFER SORT | | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | 73728 | 73728 | |
| 6 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | | | |
| 7 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:02.05 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE" AND "O"."OBJECT_ID"="TT"."OBJECT_ID")

Note
-----
- Global temporary table session private statistics used


30 rows selected.


As expected the plan with hash value 3579371359 was used.

 PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS  EXECUTIONS INVALIDATIONS I I  
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 3 0 Y N
3579371359 1 00000000625A3928 1 1 Y N


You could also use the v$sql_shared_cursor view to observe how the cursors are created and shared.
To track the statistics themselves, use the [DBA|USER|ALL]_TAB_STATISTICS views. These views have a SCOPE column that indicates whether the statistics are shared or session-specific:

 test[251,69]> SELECT owner, table_name, num_rows, last_analyzed, scope FROM dba_tab_statistics WHERE table_name='TEMP_OBJECTS';  

OWNER TABLE_NAME NUM_ROWS LAST_ANAL SCOPE
--------------- --------------- ---------- --------- -------
TEST TEMP_OBJECTS SHARED
TEST TEMP_OBJECTS 1 30-JUN-13 SESSION


Finally, something to think about is the possibility of increased hard parsing. As you may have observed above, each parse made after the stats are gathered is a hard parse. Therefore, depending upon your workload, you may see more hard parsing.


SQL to NoSQL : Copy your data from MySQL to Couchbase

Tugdual Grall - Wed, 2013-07-03 12:52
TL;DR: Look at the project on Github.

Introduction

During my last interactions with the Couchbase community, I kept getting the question "how can I easily import my data from my current database into Couchbase?" And my answer was always the same: take an ETL such as Talend to do it, or just write a small program to copy the data from your RDBMS to Couchbase... So I have written this small program that

Oracle 12c pluggable databases

Bas Klaassen - Wed, 2013-07-03 07:01
Last week I downloaded the new Oracle 12c database to check the new feature called 'Pluggable database'. Using the runInstaller I first created the 12c database with the 'Create as container database' option enabled. After installing the database, I decided to use the dbca to create another pluggable database. So, there should be 2 pluggable databases and one container database, right?
Categories: APPS Blogs

Upgrading Oracle Database 11.2.0.3 to 12.1.0 (Oracle 12c)

Asif Momen - Wed, 2013-07-03 06:47
There are different ways of upgrading to the latest release of the Oracle database, and Oracle provides multiple upgrade methods. A few are listed below:
  • Database Upgrade Assistant (DBUA)
  • Manual Upgrade
  • Transportable Tablespaces
  • Datapump export/import
  • Oracle Streams
  • Oracle GoldenGate
I chose DBUA to upgrade my test database as it was the simplest and quickest of all. Below are the database environment details prior to upgrade:

Oracle Database Version:             Oracle Database 11gR2 (11.2.0.3) (64-bit)
Operating System:                         Oracle Enterprise Linux 6.1 (64-bit)

To begin the upgrade process, I copied the Oracle 12c software to the database server and did the following:
  1. Backup Oracle database
  2. Stop the database
  3. Stop listener
  4. Change the ORACLE_HOME environment variable in the bash profile of the “oracle” user (see the sketch after this list)
  5. Launch Oracle 12c Installer (./runInstaller)
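For step 4, the change in ~/.bash_profile is typically a pair of lines like the following (a sketch; the path must match your new 12c home):

export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH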


You will see the following screen: 


Click "Next" and chose "Skip software updates".


Click "Next" and select "Upgrade an existing database". When you select this option Oracle will install Oracle 12cR1 software and then automatically launch DBUA to upgrade the databases.


Click "Next" and continue as shown in the screenshots below:

Make sure all the warnings are resolved before proceeding to the next step. In my case, I ignored the warning as this is a test database and the swap size is close to the expected value.


Installation of the Oracle 12c software begins, as shown below:

 After installing the software and completing Net configuration, Oracle Universal Installer will launch DBUA to upgrade existing databases.

Below are the screenshots from DBUA:

I ignored the warnings as this is our test database. You would want to resolve these warnings for your production, development, test, etc. databases.

Well, it took more than 90 minutes to upgrade my test database. DBUA generates log files under the "/u01/app/oracle/cfgtoollogs/dbua//upgrade1/" location. You can monitor the logs to see the progress and the scripts that DBUA executes during the upgrade process.

 Well, that's it. My database is now upgraded to Oracle Database 12c.

SQL> select * from v$version;

BANNER CON_ID
---------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production      0
PL/SQL Release 12.1.0.1.0 - Production      0
CORE 12.1.0.1.0 Production       0
TNS for Linux: Version 12.1.0.1.0 - Production      0
NLSRTL Version 12.1.0.1.0 - Production      0

SQL> 

I am sure you would like to look at alternative and near-zero-downtime upgrade approaches for your production databases. Here are a few helpful links in this regard:

Enjoy your Oracle Database 12c.
 



Why use OVM for Oracle Databases

Oracle NZ - Tue, 2013-07-02 23:32

Some time ago I ran a benchmark exercise to compare the performance of an Oracle database running on bare metal versus in a virtualized environment, to answer some of the main questions our team and clients at Revera had, such as:

  • Does an Oracle database perform well in a virtualized environment?
  • What virtualization technology is more stable and allows an Oracle database to perform faster?
  • What is the performance difference between using a bare metal and a virtualized guest?
  • Is it safe to run a production database in a virtualized environment?

Here you can find the results and the answers to the questions above: http://oraclenz.org/wp-content/uploads/2013/07/Why-use-OVM-Revera.pdf

 

Regards,

 

Francisco Munoz Alvarez



Categories: DBA Blogs

Playing with VirtualBox, Oracle 12c (12.1.0.1) and OEL 6.4 – Part 1

Oracle NZ - Mon, 2013-07-01 23:22

Now that 12c has been released, it’s time to install it on the OEL 6.4 64-bit OS. Let’s start playing with this.

Before we start with the creation of the VM we will need to download some software first, such as:

  • Oracle Linux 6.4 (64-bit) installation ISO
  • Oracle Database 12c (12.1.0.1) for Linux x86-64 (linuxamd64_12c_database_1of2.zip and linuxamd64_12c_database_2of2.zip)

Now that we have all the required software, let’s use the following steps to create our virtual machine using VirtualBox.

1 – Start VirtualBox and click on the option [New].

2 – Name the Virtual Machine and select the type of OS (Oracle Linux) and click [Next].


3 – Enter the amount of RAM for the VM. For this example I’m entering 4096 MB; if you do not have this amount of memory available, try 2048 MB. Then click [Next].


4 – Now it is time to create the virtual disk for the VM. Select [Create a virtual hard drive now] and click [Create].


5 – Select the hard drive file type to be created: select VMDK and click [Next].


6 – Select the Dynamically allocated option and click [Next].


7 – Now let’s select the file location and size for our disk. Enter 50 GB for the disk size and click [Create] to create the VM.


8 – Select the VM just created, then click [Settings], select [Storage], [Add CD/DVD], add the OEL 6.4 64-bit ISO image and click [OK].


9 – Since I have a good number of CPUs available on my laptop, I will edit the VM to have 2 vCPUs. (This is an optional step.)


10 – Start  the VM and click on the option [Install or upgrade an existing system].


11 – Select [Skip] the Disk Test.


12 – The installation Welcome Screen will appear. Please click [Next].


13 – Select the installation Language. For this example we will select [English] and click [Next].


14 – Select the VM keyboard to be used [U.S. English] and click [Next].


15 – Select [Basic Storage Devices] and click [Next].


16 – You will receive a warning message that the device below may contain data. Click [Yes, discard any data].


17 – Enter the host name of the VM. For this example I will use oracle12c; click [Next].


18 – Select your time zone and click [Next].


19 – Enter the root user password. For this example I used “oracle”. Click [Next].


20 – For the type of installation select [Replace Existing Linux System(s)] and click [Next].


21 – Once again you will receive a warning message, this time saying that  the partitions will be written to disk. Please click on [Write changes to disk].


22 – In the next screen select [Basic Server], [Customize now]  and click [Next]


23 - Now we will select the following package groups for our installation:

  • Base System > Base
  • Base System > Compatibility libraries
  • Base System > Hardware monitoring utilities
  • Base System > Large Systems Performance
  • Base System > Network file system client
  • Base System > Performance Tools
  • Base System > Perl Support
  • Servers > Server Platform
  • Servers > System administration tools
    • Select the package oracle-rdbms-server-11gR2-preinstall, since the 12c package is not yet available; this will work just fine for this installation.
  • Desktops > Desktop
  • Desktops > Desktop Platform
  • Desktops > Fonts
  • Desktops > General Purpose Desktop
  • Desktops > Graphical Administration Tools
  • Desktops > Input Methods
  • Desktops > X Window System
  • Development > Additional Development
  • Development > Development Tools
  • Applications > Internet Browser


24 – Click [Next] and the Installation Process will start.


25 – The Congratulations screen will appear and you will need to click [Reboot].


26 – On the OEL 6 Welcome page click [Forward].


27 – Review the License Information, select [Yes, I agree to the License Agreement] and click [Forward].


28 – In the Set Up Software Updates page just click [Forward].


29 – In the Create User page just click [Forward].


30 – Enter the Date and Time for your system and click [Forward].


31 – Do not select Enable Kdump and click [Finish] to reboot the machine once again.


32 – After the reboot connect to the machine using the root user.


33 – Disable the Firewall on [Administration] –> [Firewall].


34 – Select [Devices] –> [Install Guest Additions] to install the Virtual Box Guest Additions.


35 – Transfer the Database files to the virtual machine and unzip them.

  • $ unzip linuxamd64_12c_database_1of2.zip
  • $ unzip linuxamd64_12c_database_2of2.zip

36 – Configure the hosts file as per the screenshot below.
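Since the screenshot is not reproduced here, a typical /etc/hosts entry for this VM would be something like the following (the IP address is illustrative):

192.168.56.101   oracle12c.localdomain   oracle12c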


37 – Use yum (public yum) to automatically set up the Oracle prerequisites, if this was not done when installing the OS.


*** As you can see in the screenshot above, the oracle-rdbms-server-12cR1-preinstall package is still not available. For this scenario we will use the oracle-rdbms-server-11gR2-preinstall package, which will do the trick for us.
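The corresponding command, run as root, is simply:

# yum install oracle-rdbms-server-11gR2-preinstall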

38 – Now we need to set the oracle user's password.

$ passwd oracle (for this lab we will use the password oracle)

39 – Create the directories in which the Oracle software will be installed.

mkdir -p /u01/app/oracle/product/12.1.0/db_1
chown -R oracle:oinstall /u01
chmod -R 775 /u01

40 – Login as the oracle user and add the following lines at the end of the “.bash_profile” file.

# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=oracle12c; export ORACLE_HOSTNAME
ORACLE_UNQNAME=DB12G; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/12.1.0/db_1; export ORACLE_HOME
ORACLE_SID=DB12G; export ORACLE_SID

PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

41 – In the unzipped folder, go to database/rpm and install the cvuqdisk-1.0.9-1.rpm package.

$ rpm -Uvh cvuqdisk-1.0.9-1.rpm

42 – Run xhost + to disable X access control.

$ xhost +

access control disabled, clients can connect from any host

43 – Stop the virtual machine, remove the OS disk from the [Storage] section, unselect the CD/DVD-ROM from the Boot Order, and start the VM.


Next, in part 2 of this lab, we will install the Oracle Database software and take a look at Oracle Enterprise Manager Database Express.

 

Regards,

 

Francisco Munoz Alvarez




Categories: DBA Blogs

Oracle 12c New Features - Convert a database to be Pluggable

ContractOracle - Mon, 2013-07-01 22:08
In a previous blog post I demonstrated creating a Pluggable Database (PDB) from the PDB$SEED database, which is created at the same time as the Container Database (CDB). That process was quick and easy, but is only useful for creating new, empty databases.

If we want to migrate existing databases which contain data (e.g. upgraded from 11g) to the CDB/PDB multitenant architecture, we need to convert them to be PDBs so we can plug them in.

For this example I created a stand-alone database called NONCDB.

[oracle@rac1 T12]$ export ORACLE_SID=NONCDB
[oracle@rac1 T12]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Jul 2 10:31:54 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

We can see in v$database that it is not a CDB or PDB.

SQL> select CDB from v$database;

CDB
---
NO

To convert it to a PDB we first need to get the database into a consistent state and run DBMS_PDB.DESCRIBE to create an XML file describing the database.


SQL> shutdown immediate;

Database closed.

Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1235959808 bytes
Fixed Size                  2287816 bytes
Variable Size             452986680 bytes
Database Buffers          771751936 bytes
Redo Buffers                8933376 bytes
Database mounted.
SQL> alter database open read only;

Database altered.

SQL> BEGIN
 DBMS_PDB.DESCRIBE(
  pdb_descr_file => '/u01/app/oracle/oradata/NONCDB/noncdb.xml');
 END;
/

PL/SQL procedure successfully completed.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit

Now we can plug NONCDB into an existing CDB database, T12.

[oracle@rac1 T12]$ export ORACLE_SID=T12
[oracle@rac1 T12]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Jul 2 10:42:01 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

SQL> select CDB from v$database;

CDB
---
YES

I am plugging the database into a CDB on the same server as the original database, so I will create the PDB with NOCOPY TEMPFILE REUSE.  If you are changing directory structures then you would need to use FILE_NAME_CONVERT.
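For reference, a sketch of the copying variant (the target path is illustrative):

SQL> CREATE PLUGGABLE DATABASE NONCDB USING '/u01/app/oracle/oradata/NONCDB/noncdb.xml'
  2  COPY FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/NONCDB/', '/u01/app/oracle/oradata/T12/noncdb/');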

SQL> CREATE PLUGGABLE DATABASE NONCDB USING '/u01/app/oracle/oradata/NONCDB/noncdb.xml' NOCOPY tempfile reuse;

Pluggable database created.

Now we need to update the data dictionary in the new PDB by running noncdb_to_pdb.sql

SQL> alter session set container=NONCDB;

Session altered.

SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

This script has a lot of output, which I will not show, but unfortunately it ended with an error:

SQL> -- get rid of idl_ub1$ rows for MDL java objects
SQL> delete from sys.idl_ub1$ where obj# in (select obj# from sys.obj$ where bitand(flags, 65536)=65536 and type# in (28,29,30,56));
^Cdelete from sys.idl_ub1$ where obj# in (select obj# from sys.obj$ where bitand(flags, 65536)=65536 and type# in (28,29,30,56))
                *
ERROR at line 1:
ORA-01013: user requested cancel of current operation

The new PDB was left in READ WRITE state after the script exited and seems usable, but due to the error I can't be sure everything completed OK so it would be worth checking with Oracle Support.


SQL> select name, open_mode from v$pdbs;



NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
NONCDB                         READ WRITE

SQL> alter session set container=NONCDB;

Session altered.

SQL> create user test identified by test;

User created.

We can see that by converting the existing database to a PDB it kept only the SYSTEM, SYSAUX, and USERS tablespaces, and has dropped the UNDO datafiles along with the original REDO logs and control files.

SQL> select file_name from dba_data_files;

FILE_NAME
-----------------------------------------------------------------
/u01/app/oracle/oradata/NONCDB/system01.dbf
/u01/app/oracle/oradata/NONCDB/sysaux01.dbf
/u01/app/oracle/oradata/NONCDB/users01.dbf


[oracle@rac1 NONCDB]$ pwd
/u01/app/oracle/oradata/NONCDB
[oracle@rac1 NONCDB]$ ls -lrt
total 1712016
-rw-r----- 1 oracle oinstall  52429312 Jul  2 10:30 redo03.log
-rw-r----- 1 oracle oinstall  52429312 Jul  2 10:30 redo02.log
-rw-r----- 1 oracle oinstall  52429312 Jul  2 10:36 redo01.log
-rw-r----- 1 oracle oinstall  57679872 Jul  2 10:37 undotbs01.dbf
-rw-r--r-- 1 oracle oinstall      3986 Jul  2 10:38 noncdb.xml
-rw-r----- 1 oracle oinstall  10043392 Jul  2 10:39 control02.ctl
-rw-r----- 1 oracle oinstall  10043392 Jul  2 10:39 control01.ctl
-rw-r----- 1 oracle oinstall  62922752 Jul  2 10:45 temp01.dbf
-rw-r----- 1 oracle oinstall   5251072 Jul  2 11:03 users01.dbf
-rw-r----- 1 oracle oinstall 807411712 Jul  2 11:03 system01.dbf
-rw-r----- 1 oracle oinstall 702554112 Jul  2 11:03 sysaux01.dbf
Categories: DBA Blogs

Context Switching: An Example

Chet Justice - Mon, 2013-07-01 15:45
Last week at #kscope13 I saw an outstanding example of context switching. If you don't know what it is, Tom Kyte explains it here.

The two environments are just "different", separate and distinct. You can do plsql without SQL, you can do SQL (and many times do) without invoking plsql. There is a call overhead to go from SQL to PLSQL (the "hit" is most evident when SQL invokes PLSQL - not so much the other way, when SQL is embedded in PLSQL). Even if this hit is very very small (say 1/1000th of a second) - if you do it enough, it adds up. So, if it can be avoided - it should be.
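A contrived SQL illustration of that hit (the table t and function below are hypothetical, not Tom's example):

create or replace function add_one(p number) return number is
begin
  return p + 1;
end;
/

-- one SQL-to-PL/SQL context switch per row:
select add_one(n) from t;

-- pure SQL, no context switching:
select n + 1 from t;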

The session was Using Kanban and Scrum to Increase Your Development Throughput presented by Stew Stryker (not to be confused with Ted Striker) of Dartmouth College (Stew gave me a gallon of Vermont Maple Syrup which exploded in my bag on the flight home, a gift for sharing my hotel room. Thanks Stew! ;)). So here's the example he gave to demonstrate context switching.

Take a list of names and time yourself writing out the first letter of each name, then the second, until you are finished.



Now, same list of names and write them out the way you normally would, left to right.



If the first method was faster, you are a freak of nature.
Categories: BI & Warehousing

"An unexpected error has been detected by HotSpot Virtual Machine" Error While Launching Installer

Asif Momen - Mon, 2013-07-01 05:44
Today, while installing Oracle 11gR2 (11.2.0.3) 64-bit on RHEL 6.1 64-bit on a VMware box, I received the error mentioned in the title. Searching MOS wasn't fruitful. However, this seems to be an RHEL bug.


bash-4.1$ ./runInstaller 
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 27976 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4031 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-07-01_03-12-27AM. Please wait ...bash-4.1$ #
# An unexpected error has been detected by HotSpot Virtual Machine:
#
#  SIGSEGV (0xb) at pc=0x00007f33a28c8d70, pid=2400, tid=139859082229520
#
# Java VM: Java HotSpot(TM) 64-Bit Server VM (1.5.0_30-b03 mixed mode)
# Problematic frame:
# C  [ld-linux-x86-64.so.2+0x14d70]
#
# An error report file with more information is saved as hs_err_pid2400.log
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#

bash-4.1$ 


The workaround to the problem is to set the "LD_BIND_NOW" environment variable to the value "1" as shown below and re-launch the installer.

bash-4.1$ export LD_BIND_NOW=1
bash-4.1$ ./runInstaller 
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 27339 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4031 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-07-01_03-29-40AM. Please wait ...
bash-4.1$ 


According to a post on "The Gruff DBA" blog, this bug seems to have been reported on 11.2.0.1 & 11.2.0.3.



:)

Oracle 12c New Features - Query CDB_PDB_HISTORY for Pluggable Database History

ContractOracle - Mon, 2013-07-01 03:45
If you are interested in checking the history of PDBs, then the view CDB_PDB_HISTORY is a good place to start.

SQL> SELECT DB_NAME, CON_ID, PDB_NAME, OPERATION, OP_TIMESTAMP, CLONED_FROM_PDB_NAME
FROM CDB_PDB_HISTORY WHERE CON_ID > 2 ORDER BY 5;  

DB_NAME    CON_ID PDB_NAME        OPERATION        OP_TIMESTA CLONED_FROM_PDB
---------- ------ --------------- ---------------- ---------- ---------------
SEEDDATA        5 PDB$SEED        UNPLUG           24-MAY-13
SEEDDATA        3 PDB$SEED        UNPLUG           24-MAY-13
SEEDDATA        4 PDB$SEED        UNPLUG           24-MAY-13
T12             5 PDB$SEED        PLUG             01-JUL-13  PDB$SEED
T12             3 PDB$SEED        PLUG             01-JUL-13  PDB$SEED
T12             4 PDB$SEED        PLUG             01-JUL-13  PDB$SEED
T12             5 PDB1            CREATE           01-JUL-13  PDB$SEED
T12             3 PDB1            CREATE           01-JUL-13  PDB$SEED
T12             4 PDB1            CREATE           01-JUL-13  PDB$SEED
T12             5 PDB1            UNPLUG           01-JUL-13
T12             3 PDB1            UNPLUG           01-JUL-13
T12             4 PDB1            UNPLUG           01-JUL-13
T12             5 PDB1            PLUG             01-JUL-13  PDB1
T12             3 PDB1            PLUG             01-JUL-13  PDB1
T12             4 PDB1            PLUG             01-JUL-13  PDB1
T12             5 PDB1            UNPLUG           01-JUL-13
T12             3 PDB1            UNPLUG           01-JUL-13
T12             4 PDB1            UNPLUG           01-JUL-13
T12             5 PDB1            PLUG             01-JUL-13  PDB1
T12             3 PDB1            PLUG             01-JUL-13  PDB1
T12             4 PDB1            PLUG             01-JUL-13  PDB1
T12             5 PDB1            UNPLUG           01-JUL-13
T12             3 PDB1            UNPLUG           01-JUL-13
T12             4 PDB1            UNPLUG           01-JUL-13
T12             4 PDB2            PLUG             01-JUL-13  PDB1
T12             5 PDB1            PLUG             01-JUL-13  PDB1
T12             3 PDB1            PLUG             01-JUL-13  PDB1
T12             5 PDB3            CLONE            01-JUL-13  PDB1

Categories: DBA Blogs

Oracle 12c New Features - TEMP_UNDO_ENABLED

ContractOracle - Mon, 2013-07-01 01:12
Oracle 12c introduces the new parameter TEMP_UNDO_ENABLED, which can be set at the database and session level.  If this parameter is enabled, undo for temporary objects (e.g. global temporary tables) is written to the TEMP tablespace, instead of the default of writing to the UNDO tablespace.  This can help improve performance and reduce UNDO and REDO.
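For example, to enable it for the whole database rather than a single session:

SQL> alter system set temp_undo_enabled=true;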

SQL> connect test/test@pdb1
Connected.

SQL> CREATE GLOBAL TEMPORARY TABLE my_temp_table (id integer) ON COMMIT PRESERVE ROWS;

Table created.

SQL> show parameter temp_undo_enabled

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------
temp_undo_enabled                    boolean     FALSE

Now run an insert into the global temporary table with the parameter set to FALSE.

SQL> set autotrace on statistics
SQL> insert into my_temp_table values (1);

1 row created.


Statistics
----------------------------------------------------------
          1  recursive calls
          8  db block gets
          1  consistent gets
          0  physical reads
        312  redo size
        853  bytes sent via SQL*Net to client
        837  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

According to the Autotrace statistics, this generates 312 bytes of redo.

SQL> connect test/test@pdb1
Connected.
SQL> alter session set temp_undo_enabled=true;

Session altered.

Now run the insert again with the parameter set to TRUE.  

SQL> set autotrace on statistics
SQL> insert into my_temp_table values (1);

1 row created.


Statistics
----------------------------------------------------------
          3  recursive calls
         13  db block gets
          1  consistent gets
          0  physical reads
        280  redo size
        850  bytes sent via SQL*Net to client
        837  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

According to the Autotrace statistics, this generates 280 bytes of redo (compared to 312).  Reduced UNDO and REDO from temporary transactions can help database performance and reduce disk space used for UNDO tablespaces, archived logs, and backups. The Oracle documentation says: "If database applications make use of temporary objects (using global temporary tables or temporary table transformations), it is advisable to set this parameter's value to true."

Statistics on TEMP UNDO are available via V$TEMPUNDOSTAT

SQL> desc V$TEMPUNDOSTAT
 Name                                      Null?    Type
 ----------------------------------------- -------- -------------
 BEGIN_TIME                                         DATE
 END_TIME                                           DATE
 UNDOTSN                                            NUMBER
 TXNCOUNT                                           NUMBER
 MAXCONCURRENCY                                     NUMBER
 MAXQUERYLEN                                        NUMBER
 MAXQUERYID                                         VARCHAR2(13)
 UNDOBLKCNT                                         NUMBER
 EXTCNT                                             NUMBER
 USCOUNT                                            NUMBER
 SSOLDERRCNT                                        NUMBER
 NOSPACEERRCNT                                      NUMBER
 CON_ID                                             NUMBER

More details here: http://docs.oracle.com/cd/E16655_01/server.121/e17615/refrn10326.htm#REFRN10326



Categories: DBA Blogs

parameter ENABLE_DDL_LOGGING

ContractOracle - Sun, 2013-06-30 23:43
If the Oracle parameter ENABLE_DDL_LOGGING is enabled, DDL records are written to the ADR.

SQL> show parameter enable_ddl_logging

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------
enable_ddl_logging                   boolean     FALSE

SQL> alter system set enable_ddl_logging=true;

System altered.

SQL> connect c##test/test@pdb1
Connected.
SQL> create view x as select * from user_views;

View created.

SQL> drop view x;

View dropped.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

[oracle@rac1 log]$ pwd
/u01/app/oracle/diag/rdbms/t12/T12/log
[oracle@rac1 log]$ ls
ddl  ddl_T12.log  debug  test
[oracle@rac1 log]$ more *.log
Mon Jul 01 12:35:54 2013
diag_adl:create view x as select * from user_views
diag_adl:drop view x

[oracle@rac1 log]$ cd ddl
[oracle@rac1 ddl]$ more *.xml
<msg msg_id='kpdbLogDDL:15115:2946163730' type='UNKNOWN' group='diag_adl'
 level='16' host_id='rac1.test.com' host_addr='192.168.1.205'
 version='1'>
 <txt>create view x as select * from user_views</txt>
</msg>
<msg msg_id='kpdbLogDDL:15115:2946163730' type='UNKNOWN' group='diag_adl'
 level='16' host_id='rac1.test.com' host_addr='192.168.1.205'>
 <txt>drop view x</txt>
</msg>
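
ENABLE_DDL_LOGGING is dynamic, so once the DDL of interest has been captured the logging can be switched back off the same way:

SQL> alter system set enable_ddl_logging=false;

System altered.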

Categories: DBA Blogs

Oracle 12c New Features - Clone a Plugged In Database

ContractOracle - Sun, 2013-06-30 23:21
One of the benefits of the CDB/PDB model in Oracle 12c is that it allows rapid cloning of Pluggable Databases (PDB).  To create a clone database in previous versions of Oracle, the DBA would have needed to create a new database instance with a new set of parameters and then clone the source database files using RMAN.

In the following example I will clone database PDB1 to a new database PDB3 using only the "create pluggable database" command.  First, the source database needs to be open read-only.

SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 open read only;

Pluggable database altered.

Then clone PDB1 to PDB3.

SQL> create pluggable database PDB3 from PDB1
FILE_NAME_CONVERT=('/u01/app/oracle/oradata/T12/pdb1','/u01/app/oracle/oradata/T12/pdb3');  

Pluggable database created.

And open both databases read-write.

SQL> alter pluggable database PDB3 open;

Pluggable database altered.

SQL> alter pluggable database PDB1 close immediate;

Pluggable database altered.

SQL> alter pluggable database PDB1 open;

Pluggable database altered.

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           READ WRITE
PDB2                           READ WRITE
PDB3                           READ WRITE
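
To confirm the clone is a separate copy, we can switch into PDB3 and list its datafiles; a minimal sketch (the names returned should reflect the FILE_NAME_CONVERT target path used above, so the output is omitted):

SQL> alter session set container=PDB3;

Session altered.

SQL> select name from v$datafile;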


Categories: DBA Blogs

OBIEE 11.1.1.6.11 Bundle Patch Now Available

Abhinav Agarwal - Sun, 2013-06-30 22:37
Last week saw the release of OBIEE bundle patch 11.1.1.7.1, and on Friday bundle patch 11.1.1.6.11 for Oracle Business Intelligence Enterprise Edition became available on the My Oracle Support portal.

The tracking ids for the respective bundle patch components are:
  • Oracle Business Intelligence Installer (BIINST): ID: 16747681
  • Oracle Real Time Decisions (RTD): ID: 16747684
  • Oracle Business Intelligence Publisher (BIP): ID: 16747692
  • Oracle Business Intelligence ADF Components (BIADFCOMPS): ID: 16747699
  • Enterprise Performance Management Components Installed from BI Installer 11.1.1.6.x (BIFNDNEPM): ID: 16747703
  • Oracle Business Intelligence (OBIEE): ID: 16717325
  • Oracle Business Intelligence Platform Client Installers and MapViewer: ID: 16747708
Happy Monday to all!
Abhinav
Bangalore

Oracle 12c New Features - Plugging and Unplugging Databases

ContractOracle - Sun, 2013-06-30 22:35
In a previous blog post I demonstrated creating Pluggable Databases (PDB) in an Oracle 12c Container Database (CDB).  In this test I will demonstrate how easy it is to unplug a PDB from a CDB and then plug it in again.

We currently have one PDB, named PDB1.  We will shut it down, unplug it, and drop it.

SQL> select name from v$pdbs;

NAME
------------------------------
PDB$SEED
PDB1

SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 unplug into '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml';

Pluggable database altered.

SQL> drop pluggable database pdb1 keep datafiles;

Pluggable database dropped.

SQL> select name from v$pdbs;

NAME
------------------------------
PDB$SEED

Because KEEP DATAFILES leaves the datafiles on disk, we can now back up the database to tape for later restore, or copy the datafiles and the XML file for the pluggable database to another CDB on another server and plug it in there.  In this example I will just plug the database back into the original CDB.
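
As a sketch of the transport step (the target host rac2 and its directory layout here are hypothetical; any copy method that moves both the datafiles and the XML manifest will do):

[oracle@rac1 ~]$ scp /u01/app/oracle/oradata/T12/pdb1/*.dbf oracle@rac2:/u01/app/oracle/oradata/T12/pdb1/
[oracle@rac1 ~]$ scp /u01/app/oracle/oradata/T12/pdb1/pdb1.xml oracle@rac2:/u01/app/oracle/oradata/T12/pdb1/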

Before we plug in, we first need to run DBMS_PDB.CHECK_PLUG_COMPATIBILITY to check that the PDB is compatible with the new CDB.

SQL> set serveroutput on
SQL> DECLARE
  2     compatible BOOLEAN := FALSE;
  3  BEGIN
  4     compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
  5          pdb_descr_file => '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml');
  6     if compatible then
  7        DBMS_OUTPUT.PUT_LINE('Is pluggable PDB1 compatible? YES');
  8     else DBMS_OUTPUT.PUT_LINE('Is pluggable PDB1 compatible? NO');
  9     end if;
 10  END;
 11  /
Is pluggable PDB1 compatible? YES

PL/SQL procedure successfully completed.

As the PDB is compatible with the CDB, we can proceed to plug it in.  NOCOPY tells Oracle to use the datafiles in their current location, and TEMPFILE REUSE allows an existing temp file of the same name to be reused.

SQL> create pluggable database PDB1 using '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml' NOCOPY TEMPFILE REUSE;

Pluggable database created.

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           MOUNTED

SQL> alter session set container=PDB1;

Session altered.

SQL> alter database open;

Database altered.
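
Alternatively, the PDB can be opened directly from the root container without switching sessions, as in the earlier posts:

SQL> alter pluggable database PDB1 open;

Pluggable database altered.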

SQL> connect test/test@pdb1;
Connected.


SQL> show con_name

CON_NAME
------------------------------
PDB1

We are now able to log in to the plugged-in database.

The alert log entries for these operations are as follows :-

Mon Jul 01 11:14:31 2013
alter pluggable database pdb1 close immediate
Mon Jul 01 11:14:31 2013
ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
Pluggable database PDB1 closed
Completed: alter pluggable database pdb1 close immediate
alter pluggable database pdb1 unplug into '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml'
ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
Completed: alter pluggable database pdb1 unplug into '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml'
drop pluggable database pdb1 keep datafiles
Mon Jul 01 11:15:02 2013
Deleted file /u01/app/oracle/oradata/T12/pdb1/pdbseed_temp01.dbf
Completed: drop pluggable database pdb1 keep datafiles
create pluggable database PDB1 using '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml' NOCOPY TEMPFILE REUSE
Mon Jul 01 11:20:45 2013
****************************************************************
Pluggable Database PDB1 with pdb id - 3 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
Deleting old file#10 from file$
Deleting old file#11 from file$
Adding new file#12 to file$(old file#10)
Adding new file#13 to file$(old file#11)
Successfully created internal service pdb1 at open
ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
****************************************************************
Post plug operations are now complete.
Pluggable database PDB1 with pdb id - 3 is now marked as NEW.
****************************************************************
Completed: create pluggable database PDB1 using '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml' NOCOPY TEMPFILE REUSE
Mon Jul 01 11:29:00 2013
alter database open
Mon Jul 01 11:29:00 2013
Pluggable database PDB1 dictionary check beginning
Pluggable Database PDB1 Dictionary check complete
Opening pdb PDB1 (3) with no Resource Manager plan active

XDB installed.

XDB initialized.
Pluggable database PDB1 opened read write
Completed: alter database open

Categories: DBA Blogs
