Feed aggregator

Oracle 12c - Session-specific statistics for global temporary tables

Mihajlo Tekic - Wed, 2013-07-03 23:11
It’s been more than a week since Oracle Database 12c officially became available. I spent the very first night reading the documentation, the New Features chapter in particular.

New version, new features, improvements, enhancements, some more exciting than others. One, however, intrigued me a bit: session-specific statistics for global temporary tables. From time to time I am involved in optimizing queries that use global temporary tables, so I am well aware of the challenges that come with them.

As the name suggests, temporary tables contain temporary data. The data is visible only to the current session and persists either until the next commit or until the session is terminated. Either way, two sessions may populate the same temporary table with different data and concurrently run identical queries against it.

As we all know, queries are executed using an execution plan generated by the optimizer. The efficiency of the execution plan depends upon the information the optimizer has about the data at the time the query is parsed. In my opinion, in order to get good execution plans (I am deliberately avoiding saying optimal plans), accurate statistics are the most important piece of information you want to feed the optimizer.

However, when temporary tables are involved, it is often a challenge for the optimizer to generate efficient execution plan(s) that will be appropriate for all use cases.

In some cases, when temporary tables are populated with data having similar characteristics, it is common practice to have appropriate statistics collected and fixed (dynamic sampling may also fit in this case) so that the optimizer produces efficient and stable execution plans (one size fits all).

However, often temporary tables do not have statistics simply because it is difficult to find ones that will cover all use cases.

Consider the following example:

There is a table MY_OBJECTS, created using the following CTAS statement:

 CREATE TABLE my_objects AS  
SELECT * FROM all_objects
, (SELECT * FROM dual CONNECT BY rownum<=10);

A table MY_TYPES that contains all distinct object types derived from ALL_OBJECTS:

 CREATE TABLE my_types AS  
SELECT DISTINCT object_type FROM all_objects;

A table T that has the same structure as MY_OBJECTS

 CREATE TABLE t AS   
SELECT * FROM my_objects WHERE 1=2;

And a global temporary table TEMP_OBJECTS that has only one column which will accept OBJECT_IDs

 CREATE GLOBAL TEMPORARY TABLE temp_objects   
(object_id NUMBER NOT NULL)
ON COMMIT PRESERVE ROWS;

Collect statistics for MY_OBJECTS and MY_TYPES tables

 EXEC dbms_stats.gather_table_stats(ownname=>'TEST',tabname=>'MY_OBJECTS',cascade=>true);  
EXEC dbms_stats.gather_table_stats(ownname=>'TEST',tabname=>'MY_TYPES',cascade=>true);

Let's suppose there are two sessions that load some data into TEMP_OBJECTS and then execute the following insert statement:

 INSERT INTO t  
SELECT /*+ gather_plan_statistics */ o.*
FROM my_objects o
, my_types t
, temp_objects tt
WHERE o.object_type=t.object_type
AND o.object_id=tt.object_id;
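
The insert_1.sql script wraps the INSERT statement shown above. The explain_last script used throughout this post is not shown; a minimal sketch of what such a script might contain (my assumption, based on the runtime statistics visible in the output) is:

 -- explain_last.sql (hypothetical sketch): display the runtime plan of the last
 -- statement executed in this session; the actual row counts and buffer figures
 -- are available thanks to the gather_plan_statistics hint.
 SET LINESIZE 200 PAGESIZE 0
 SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));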

The first session (SID 22) inserts one record into TEMP_OBJECTS and then executes the insert statement from above by invoking the insert_1.sql script.


 test[22,39]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1);  

1 row created.

test[22,39]> commit;

Commit complete.

test[22,39]> @insert_1

10 rows created.



Let's check the execution plan being used. It is based on a MERGE JOIN CARTESIAN operation, which is somewhat acceptable in this case.

 test[22,39]> @explain_last  

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 0  
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id

Plan hash value: 3579371359

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:02.56 | 14067 | 14028 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:02.56 | 14067 | 14028 | | | |
|* 2 | HASH JOIN | | 1 | 10 | 10 |00:00:00.04 | 14036 | 14028 | 1519K| 1519K| 1521K (0)|
| 3 | MERGE JOIN CARTESIAN | | 1 | 38 | 38 |00:00:00.02 | 6 | 2 | | | |
| 4 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1 | 1 |00:00:00.01 | 3 | 0 | | | |
| 5 | BUFFER SORT | | 1 | 38 | 38 |00:00:00.02 | 3 | 2 | 73728 | 73728 | |
| 6 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.02 | 3 | 2 | | | |
| 7 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:02.09 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE" AND "O"."OBJECT_ID"="TT"."OBJECT_ID")

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


30 rows selected.



The session with SID 251 kicks in a moment later, loading 1000 rows into TEMP_OBJECTS and running the same insert statement. Since the same SQL statement has already been parsed, this session reuses the existing cursor. However, in this case the plan being used is one that should be avoided, since the Cartesian join operation may severely impact performance:

 test[251,65]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1000);  

1000 rows created.

test[251,65]> @insert_1

10000 rows created.

test[251,65]> @explain_last

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 0
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id

Plan hash value: 3579371359

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:03.87 | 15495 | 14026 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:03.87 | 15495 | 14026 | | | |
|* 2 | HASH JOIN | | 1 | 10 | 10000 |00:00:00.22 | 14037 | 14026 | 3162K| 2024K| 3101K (0)|
| 3 | MERGE JOIN CARTESIAN | | 1 | 38 | 38000 |00:00:00.73 | 7 | 0 | | | |
| 4 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1 | 1000 |00:00:00.02 | 4 | 0 | | | |
| 5 | BUFFER SORT | | 1000 | 38 | 38000 |00:00:00.21 | 3 | 0 | 73728 | 73728 | |
| 6 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | | | |
| 7 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:01.97 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE" AND "O"."OBJECT_ID"="TT"."OBJECT_ID")

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


30 rows selected.


As expected, the same cursor was reused by both sessions.

 sys[263,185]> select plan_hash_value, child_number, CHILD_ADDRESS, executions, invalidations, is_shareable, is_obsolete from v$sql where sql_id='8a1c7phuna9vn';  

PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS EXECUTIONS INVALIDATIONS I I
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 2 0 Y N

sys[263,185]>


As you can sense, plan stability is one of the biggest challenges when dealing with queries that depend on temporary tables.

There are several strategies that I am aware of for attacking this problem. They include: stored outlines; fixed statistics; SQL plan baselines; hints (the CARDINALITY hint, for example); periodically invalidating plans; using separate workspaces (schemas) for different types of workload; and a couple of others that are more or less effective. Each of these comes with its own limitations.
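
For illustration, a pre-12c workaround based on fixed (locked) statistics might look like the following sketch; the row count of 1000 is an arbitrary value assumed to be "representative":

 -- Hypothetical sketch: set representative statistics once and lock them,
 -- so every session optimizes against the same numbers.
 BEGIN
   dbms_stats.set_table_stats(ownname => 'TEST', tabname => 'TEMP_OBJECTS',
                              numrows => 1000, numblks => 10);
   dbms_stats.lock_table_stats(ownname => 'TEST', tabname => 'TEMP_OBJECTS');
 END;
 /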

With Oracle 12c, users have the ability to gather session-specific statistics for global temporary tables. This feature seems to address the challenges described above nicely.

When session-specific statistics are used, each session collects “private” statistics and has the CBO generate an execution plan/cursor based on them. These cursors are not shared with other sessions. Also, a cursor that has been generated using session-specific statistics is invalidated when statistics are re-collected within the same session.

Session-specific statistics are enabled when the GLOBAL_TEMP_TABLE_STATS preference is set to SESSION.

 exec dbms_stats.set_table_prefs(ownname=>'TEST', tabname=>'TEMP_OBJECTS',pname=>'GLOBAL_TEMP_TABLE_STATS',pvalue=>'SESSION');  

Once the preference is set, all one needs to do is gather statistics; they will be collected as session-specific statistics.
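
To double-check that the preference took effect, one can query it back (a quick verification of my own, not part of the original post):

 SELECT dbms_stats.get_prefs('GLOBAL_TEMP_TABLE_STATS', 'TEST', 'TEMP_OBJECTS') FROM dual;
 -- expected to return SESSION once the preference above has been set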

And that is all. Now let’s see which plans are generated for each of the sessions if we repeat the example from above with session statistics in use.

The session with SID 251 loads 1000 rows into the TEMP_OBJECTS table, collects statistics and executes the same insert statement.

 test[251,69]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1000);  

1000 rows created.

test[251,69]> exec dbms_stats.gather_table_stats(ownname=>'TEST', tabname=>'TEMP_OBJECTS');

PL/SQL procedure successfully completed.

test[251,69]> @insert_1

10000 rows created.

Now the execution plan used is based on a HASH JOIN rather than a MERGE JOIN CARTESIAN operation:

 test[251,69]> @explain_last  

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 1
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id

Plan hash value: 4256520316

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:02.71 | 15514 | 14026 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:02.71 | 15514 | 14026 | | | |
|* 2 | HASH JOIN | | 1 | 9858 | 10000 |00:00:00.07 | 14037 | 14026 | 1696K| 1696K| 1583K (0)|
| 3 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | | | |
|* 4 | HASH JOIN | | 1 | 9858 | 10000 |00:00:00.04 | 14034 | 14026 | 2293K| 2293K| 1607K (0)|
| 5 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1000 | 1000 |00:00:00.01 | 4 | 0 | | | |
| 6 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:01.64 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE")
4 - access("O"."OBJECT_ID"="TT"."OBJECT_ID")

Note
-----
- Global temporary table session private statistics used


30 rows selected.

Note the “Note” - Global temporary table session private statistics used.

If you check v$sqlarea or v$sql you will see there are two child cursors for the same SQL ID.


 sys[263,185]> select plan_hash_value, child_number, CHILD_ADDRESS, executions, invalidations, is_shareable, is_obsolete from v$sql where sql_id='8a1c7phuna9vn';  

PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS EXECUTIONS INVALIDATIONS I I
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 2 0 Y N
4256520316 1 00000000625A3928 1 0 Y N

sys[263,185]>


It is worth mentioning that sessions that use the global (shared) statistics will keep using them, without interfering with the session-specific statistics and cursors.

For example, the session with SID 26 loads 1000 rows but does not collect statistics on the TEMP_OBJECTS table. The insert statement in this case uses the globally available cursor (child cursor 0):

   
test[26,145]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1000);

1000 rows created.

test[26,145]> commit;

Commit complete.

test[26,145]> @insert_1

10000 rows created.

test[26,145]> @explain_last

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 0
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id

Plan hash value: 3579371359

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:03.71 | 15513 | 14026 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:03.71 | 15513 | 14026 | | | |
|* 2 | HASH JOIN | | 1 | 10 | 10000 |00:00:00.13 | 14039 | 14026 | 3162K| 2024K| 3114K (0)|
| 3 | MERGE JOIN CARTESIAN | | 1 | 38 | 38000 |00:00:00.64 | 9 | 0 | | | |
| 4 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1 | 1000 |00:00:00.01 | 6 | 0 | | | |
| 5 | BUFFER SORT | | 1000 | 38 | 38000 |00:00:00.19 | 3 | 0 | 73728 | 73728 | |
| 6 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | | | |
| 7 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:01.88 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE" AND "O"."OBJECT_ID"="TT"."OBJECT_ID")

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


30 rows selected.

sys[263,185]> select plan_hash_value, child_number, CHILD_ADDRESS, executions, invalidations, is_shareable, is_obsolete from v$sql where sql_id='8a1c7phuna9vn';

PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS EXECUTIONS INVALIDATIONS I I
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 3 0 Y N
4256520316 1 00000000625A3928 1 0 Y N


Let's suppose the session with SID 251 modifies the TEMP_OBJECTS table: it deletes its content, loads only one row and then refreshes the session-specific statistics. In this case we expect the plan with hash value 3579371359 to be used.

 test[251,69]> DELETE FROM temp_objects;  

1000 rows deleted.

test[251,69]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1);

1 row created.

test[251,69]> commit;

Commit complete.

test[251,69]> exec dbms_stats.gather_table_stats(ownname=>'TEST', tabname=>'TEMP_OBJECTS');

PL/SQL procedure successfully completed.

test[251,69]>



If you check the cursor state (valid/invalid) you can see the session private cursor (child cursor 1) was just invalidated.

 PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS  EXECUTIONS INVALIDATIONS I I  
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 3 0 Y N
4256520316 1 00000000625A3928 1 1 Y N

test[251,69]> @insert_1

10 rows created.

test[251,69]> @explain_last

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 1
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id

Plan hash value: 3579371359

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:02.43 | 14051 | 14026 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:02.43 | 14051 | 14026 | | | |
|* 2 | HASH JOIN | | 1 | 10 | 10 |00:00:00.01 | 14037 | 14026 | 1519K| 1519K| 1426K (0)|
| 3 | MERGE JOIN CARTESIAN | | 1 | 38 | 38 |00:00:00.01 | 7 | 0 | | | |
| 4 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1 | 1 |00:00:00.01 | 4 | 0 | | | |
| 5 | BUFFER SORT | | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | 73728 | 73728 | |
| 6 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | | | |
| 7 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:02.05 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE" AND "O"."OBJECT_ID"="TT"."OBJECT_ID")

Note
-----
- Global temporary table session private statistics used


30 rows selected.


As expected the plan with hash value 3579371359 was used.

 PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS  EXECUTIONS INVALIDATIONS I I  
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 3 0 Y N
3579371359 1 00000000625A3928 1 1 Y N


You could also use the V$SQL_SHARED_CURSOR view to observe how the cursors are created and shared (a sample query follows the output below).
To track statistics, one can use the [DBA|USER|ALL]_TAB_STATISTICS views. These views have a SCOPE column that indicates whether the statistics are shared or session-specific:

 test[251,69]> SELECT owner, table_name, num_rows, last_analyzed, scope FROM dba_tab_statistics WHERE table_name='TEMP_OBJECTS';  

OWNER TABLE_NAME NUM_ROWS LAST_ANAL SCOPE
--------------- --------------- ---------- --------- -------
TEST TEMP_OBJECTS SHARED
TEST TEMP_OBJECTS 1 30-JUN-13 SESSION
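
As mentioned above, V$SQL_SHARED_CURSOR can show why a new child cursor was created instead of an existing one being shared. A minimal sketch (the view exposes many mismatch columns; the REASON column summarizes them):

 SELECT child_number, reason
 FROM v$sql_shared_cursor
 WHERE sql_id = '8a1c7phuna9vn';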


Finally, something to think about is the possibility of increased hard parsing. As you may have observed above, each parse made after the statistics are gathered is a hard parse. Therefore, depending upon your workload, you may see more hard parsing.
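
One simple way to keep an eye on this (a generic check of my own, not part of the original post) is to watch the session's parse counters before and after the workload:

 SELECT sn.name, ms.value
 FROM v$mystat ms
 JOIN v$statname sn ON sn.statistic# = ms.statistic#
 WHERE sn.name IN ('parse count (total)', 'parse count (hard)');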

Resources

SQL to NoSQL : Copy your data from MySQL to Couchbase

Tugdual Grall - Wed, 2013-07-03 12:52
TL;DR: Look at the project on Github. Introduction: During my last interactions with the Couchbase community, I was asked how one can easily import data from a current database into Couchbase. And my answer was always the same: take an ETL such as Talend to do it, or just write a small program to copy the data from your RDBMS to Couchbase... So I have written this small program that…

Oracle 12c pluggable databases

Bas Klaassen - Wed, 2013-07-03 07:01
Last week I downloaded the new Oracle 12c database to check the new feature called 'Pluggable database'. Using the runInstaller I first created the 12c database with the 'Create as container database' option enabled. After installing the database, I decided to use the dbca to create another pluggable database. So, there should be 2 pluggable databases and one container database, right?
Categories: APPS Blogs

Upgrading Oracle Database 11.2.0.3 to 12.1.0 (Oracle 12c)

Asif Momen - Wed, 2013-07-03 06:47
There are different ways of upgrading to the latest release of the Oracle database, and Oracle provides multiple methods to upgrade. A few are listed below:
  • Database Upgrade Assistant (DBUA)
  • Manual Upgrade
  • Transportable Tablespaces
  • Datapump export/import
  • Oracle Streams
  • Oracle GoldenGate
I chose DBUA to upgrade my test database as it was the simplest and quickest of all. Below are the database environment details prior to upgrade:

Oracle Database Version:             Oracle Database 11gR2 (11.2.0.3) (64-bit)
Operating System:                         Oracle Enterprise Linux 6.1 (64-bit)

To begin the upgrade process, I copied the Oracle 12c software to the database server and did the following (a command-line sketch of steps 2 and 3 follows the list):
  1. Backup Oracle database
  2. Stop the database
  3. Stop listener
  4. Change ORACLE_HOME environment variable in the bash profile of “oracle” user
  5. Launch Oracle 12c Installer (./runInstaller)
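
In shell and SQL*Plus terms, steps 2 and 3 boil down to something like this (my own sketch, not part of the original post):

$ sqlplus / as sysdba
SQL> shutdown immediate;
SQL> exit
$ lsnrctl stop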


You will see the following screen: 


Click "Next" and chose "Skip software updates".


Click "Next" and select "Upgrade an existing database". When you select this option Oracle will install Oracle 12cR1 software and then automatically launch DBUA to upgrade the databases.


Click "Next" and continue as shown in the screenshots below:







Make sure all the warnings are resolved before proceeding to the next step. In my case, I ignored the warning, as this is a test database and the swap size is close to the expected value.


Installation of the Oracle 12c software begins as shown below:




 After installing the software and completing Net configuration, Oracle Universal Installer will launch DBUA to upgrade existing databases.

Below are the screenshots from DBUA:




I ignored the warnings as this is our test database. You would want to resolve these warnings for your production, development, test, etc. databases.













Well, it took more than 90 minutes to upgrade my test database. DBUA generates log files under the "/u01/app/oracle/cfgtoollogs/dbua//upgrade1/" location. You may monitor the logs to see the progress and the scripts that DBUA executes during the upgrade process.





 Well, that's it. My database is now upgraded to Oracle Database 12c.

SQL> select * from v$version;

BANNER CON_ID
---------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production      0
PL/SQL Release 12.1.0.1.0 - Production      0
CORE 12.1.0.1.0 Production       0
TNS for Linux: Version 12.1.0.1.0 - Production      0
NLSRTL Version 12.1.0.1.0 - Production      0

SQL> 

I am sure you would like to look for alternative and near-zero-downtime database upgrade approaches to upgrade your production databases. Here are a few helpful links in this regard:

Enjoy your Oracle Database 12c.
 



Why use OVM for Oracle Databases

Oracle NZ - Tue, 2013-07-02 23:32

Some time ago I carried out a benchmark exercise to compare the performance of an Oracle Database running in a bare-metal environment versus a virtualized environment, to answer some of the main questions our team and clients at Revera have, such as:

  • Does an Oracle Database perform well in a virtualized environment?
  • Which virtualization technology is more stable and allows an Oracle database to perform faster?
  • What is the performance difference between a bare-metal host and a virtualized guest?
  • Is it safe to run a production database in a virtualized environment?

Here you can find the results and the answers to the questions above: http://oraclenz.org/wp-content/uploads/2013/07/Why-use-OVM-Revera.pdf

 

Regards,

 

Francisco Munoz Alvarez





Copyright © OracleNZ by Francisco Munoz Alvarez [Why use OVM for Oracle Databases], All Right Reserved. 2016.
Categories: DBA Blogs

Playing with VirtualBox, Oracle 12c (12.1.0.1) and OEL 6.4 – Part 1

Oracle NZ - Mon, 2013-07-01 23:22

Now that 12c has been released, it’s time to install it on OEL 6.4 64-bit. Let’s start playing with this.

Before we start with the creation of the VM we will need to download some software first, such as:

Now that we have all the required software, let’s use the following steps to create our virtual machine using VirtualBox.

1 – Start VirtualBox and click on the option [New].

2 – Name the Virtual Machine and select the type of OS (Oracle Linux) and click [Next].


3 – Enter the amount of RAM for the VM. For this example I’m entering 4096 MB; if you do not have this amount of memory to spare, try 2048 MB. And click [Next].


4 – Now it is time to create the virtual disk for the VM. Select Create a Virtual Hard Drive now and click [Create].


5 – Select a hard drive type to be created, select VMDK and click [Next].


6 – Select the Dynamically allocated option and click [Next].


7 – Now let’s select the file allocation and size for our disk. Let’s enter 50 GB for the disk size and click [Create] to create the VM.


8 – Select the VM just created, then click [Settings], select [Storage], [Add CD/DVD], add the OEL 6.4 64-bit ISO image and click [OK].


9 – Since I have a good number of CPUs available on my laptop, I will edit the VM to have 2 vCPUs. (This is an optional step.)


10 – Start  the VM and click on the option [Install or upgrade an existing system].


11 – Select [Skip] the Disk Test.


12 – The installation Welcome Screen will appear. Please click [Next].


13 – Select the installation Language. For this example we will select [English] and click [Next].


14 – Select the VM keyboard to use [U.S. English] and click [Next].


15 – Select [Basic Storage Devices] and click [Next].


16 – You will receive a warning message that the device below may contain data. Click [Yes, discard any data].


17 – Enter the host name of the VM. For this example I will use oracle12c. Click [Next].


18 – Select your time zone and click [Next].


19 – Enter the root user password. For this example I used “oracle”. Click [Next].


20 – For the type of installation select [Replace Existing Linux System(s)] and click [Next].


21 – Once again you will receive a warning message, this time saying that  the partitions will be written to disk. Please click on [Write changes to disk].


22 – In the next screen select [Basic Server], [Customize now]  and click [Next]


23 -  Now we will select the following package groups for our installation:

  • Base System > Base
  • Base System > Compatibility libraries
  • Base System > Hardware monitoring utilities
  • Base System > Large Systems Performance
  • Base System > Network file system client
  • Base System > Performance Tools
  • Base System > Perl Support
  • Servers > Server Platform
  • Servers > System administration tools
    • Select the package oracle-rdbms-server-11gR2-preinstall, because the 12c package is not yet available; it will work just fine for this installation.
  • Desktops > Desktop
  • Desktops > Desktop Platform
  • Desktops > Fonts
  • Desktops > General Purpose Desktop
  • Desktops > Graphical Administration Tools
  • Desktops > Input Methods
  • Desktops > X Window System
  • Development > Additional Development
  • Development > Development Tools
  • Applications > Internet Browser


24 – Click [Next] and the Installation Process will start.


25 – The Congratulations screen will appear and you will need to click [Reboot].


26 – On the OEL 6 Welcome page click [Forward].


27 – Review the License Information, select [Yes, I agree to the License Agreement] and click [Forward].


28 – In the Set Up Software Updates page just click [Forward].


29 – In the Create User page just click [Forward].


30 – Enter the Date and Time for your system and click [Forward].


31 – Do not select Enable Kdump and click [Finish] to reboot the machine once again.


32 – After the reboot connect to the machine using the root user.


33 – Disable the Firewall on [Administration] –> [Firewall].


34 – Select [Devices] –> [Install Guest Additions] to install the Virtual Box Guest Additions.


35 – Transfer the Database files to the virtual machine and unzip them.

  • $ unzip linuxamd64_12c_database_1of2.zip
  • $ unzip linuxamd64_12c_database_2of2.zip

36 – Configure the hosts file with an entry for the host name (oracle12c).


37 – Use YUM (public yum) to automatically set up the Oracle prerequisites, if this was not done when installing the OS.


*** At the time of writing, the oracle-rdbms-server-12cR1-preinstall package is still not available. For this scenario we will use the oracle-rdbms-server-11gR2-preinstall package, which will do the trick for us.
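
The command itself is simply the following (run as root; the yum repository configuration is assumed to be in place and is not shown here):

# yum install oracle-rdbms-server-11gR2-preinstall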

38 – Now we need to set up the oracle user's password.

$ passwd oracle (for this lab we will use the password oracle)

39 – Create the directories in which the Oracle software will be installed.

mkdir -p /u01/app/oracle/product/12.1.0/db_1
chown -R oracle:oinstall /u01
chmod -R 775 /u01

40 – Login as the oracle user and add the following lines at the end of the “.bash_profile” file.

# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=oracle12c; export ORACLE_HOSTNAME
ORACLE_UNQNAME=DB12G; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/12.1.0/db_1; export ORACLE_HOME
ORACLE_SID=DB12G; export ORACLE_SID

PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

41 – In the unzipped folder, go to database/rpm and install the cvuqdisk-1.0.9-1.rpm package.

$ rpm -Uvh cvuqdisk-1.0.9-1.rpm

42 – run xhost +

$ xhost +

access control disabled, clients can connect from any host

43 – Stop the virtual machine, remove the OS ISO from the [Storage] section, unselect CD/DVD-ROM from the boot order and start the VM.


Next, in part 2 of this lab, we will install the Oracle Database software and take a look at Oracle Enterprise Manager Database Express.

 

Regards,

 

Francisco Munoz Alvarez






Copyright © OracleNZ by Francisco Munoz Alvarez [Playing with VirtualBox, Oracle 12c (12.1.0.1) and OEL 6.4 – Part 1], All Right Reserved. 2016.
Categories: DBA Blogs

Oracle 12c New Features - Convert a database to be Pluggable

ContractOracle - Mon, 2013-07-01 22:08
In a previous blog post I demonstrated creating a Pluggable Database (PDB) from the PDB$SEED database  which is created at the same time as the Container Database (CDB).  That process was quick and easy, but is only useful for creating new empty databases.  

If we want to migrate existing databases which contain data (e.g. upgraded from 11g) to the CDB/PDB multitenant architecture, we need to convert them to be PDBs so we can plug them in.

For this example I created a stand-alone database called NONCDB.

[oracle@rac1 T12]$ export ORACLE_SID=NONCDB
[oracle@rac1 T12]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Jul 2 10:31:54 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

We can see in v$database that it is not a CDB or PDB.

SQL> select CDB from v$database;

CDB
---
NO

To convert it to be a PDB we first need to get the database in a consistent state and run DBMS_PDB.DESCRIBE to create an XML file to describe the database.


SQL> shutdown immediate;

Database closed.

Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1235959808 bytes
Fixed Size                  2287816 bytes
Variable Size             452986680 bytes
Database Buffers          771751936 bytes
Redo Buffers                8933376 bytes
Database mounted.
SQL> alter database open read only;

Database altered.

SQL> BEGIN
 DBMS_PDB.DESCRIBE(
  pdb_descr_file => '/u01/app/oracle/oradata/NONCDB/noncdb.xml');
 END;
 /

PL/SQL procedure successfully completed.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit

Now we can plug NONCDB into an existing CDB database, T12.

[oracle@rac1 T12]$ export ORACLE_SID=T12
[oracle@rac1 T12]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Jul 2 10:42:01 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

SQL> select CDB from v$database;

CDB
---
YES

I am plugging the database into a CDB on the same server as the original database, so I will create the PDB with NOCOPY TEMPFILE REUSE. If you were changing directory structures, you would need to use FILE_NAME_CONVERT (a sketch follows the command below).

SQL> CREATE PLUGGABLE DATABASE NONCDB USING '/u01/app/oracle/oradata/NONCDB/noncdb.xml' NOCOPY tempfile reuse;

Pluggable database created.
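
Had the datafiles needed to be moved to a different directory structure, a FILE_NAME_CONVERT variant might have looked like this (hypothetical target path, not what was run here):

CREATE PLUGGABLE DATABASE NONCDB USING '/u01/app/oracle/oradata/NONCDB/noncdb.xml'
  COPY FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/NONCDB/', '/u01/app/oracle/oradata/T12/noncdb/');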

Now we need to update the data dictionary in the new PDB by running noncdb_to_pdb.sql

SQL> alter session set container=NONCDB;

Session altered.

SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

This script produces a lot of output, which I will not show, but unfortunately it ended with an error:

SQL> -- get rid of idl_ub1$ rows for MDL java objects
SQL> delete from sys.idl_ub1$ where obj# in (select obj# from sys.obj$ where bitand(flags, 65536)=65536 and type# in (28,29,30,56));
^Cdelete from sys.idl_ub1$ where obj# in (select obj# from sys.obj$ where bitand(flags, 65536)=65536 and type# in (28,29,30,56))
                *
ERROR at line 1:
ORA-01013: user requested cancel of current operation

The new PDB was left in READ WRITE state after the script exited and seems usable, but due to the error I can't be sure everything completed OK so it would be worth checking with Oracle Support.


SQL> select name, open_mode from v$pdbs;



NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
NONCDB                         READ WRITE

SQL> alter session set container=NONCDB;

Session altered.

SQL> create user test identified by test;

User created.

We can see that by converting the existing database to a PDB it kept only the SYSTEM, SYSAUX and USERS tablespaces, and has dropped the UNDO datafiles along with the original REDO logs and control files.

SQL> select file_name from dba_data_files;

FILE_NAME
-----------------------------------------------------------------
/u01/app/oracle/oradata/NONCDB/system01.dbf
/u01/app/oracle/oradata/NONCDB/sysaux01.dbf
/u01/app/oracle/oradata/NONCDB/users01.dbf


[oracle@rac1 NONCDB]$ pwd
/u01/app/oracle/oradata/NONCDB
[oracle@rac1 NONCDB]$ ls -lrt
total 1712016
-rw-r----- 1 oracle oinstall  52429312 Jul  2 10:30 redo03.log
-rw-r----- 1 oracle oinstall  52429312 Jul  2 10:30 redo02.log
-rw-r----- 1 oracle oinstall  52429312 Jul  2 10:36 redo01.log
-rw-r----- 1 oracle oinstall  57679872 Jul  2 10:37 undotbs01.dbf
-rw-r--r-- 1 oracle oinstall      3986 Jul  2 10:38 noncdb.xml
-rw-r----- 1 oracle oinstall  10043392 Jul  2 10:39 control02.ctl
-rw-r----- 1 oracle oinstall  10043392 Jul  2 10:39 control01.ctl
-rw-r----- 1 oracle oinstall  62922752 Jul  2 10:45 temp01.dbf
-rw-r----- 1 oracle oinstall   5251072 Jul  2 11:03 users01.dbf
-rw-r----- 1 oracle oinstall 807411712 Jul  2 11:03 system01.dbf
-rw-r----- 1 oracle oinstall 702554112 Jul  2 11:03 sysaux01.dbf
Categories: DBA Blogs

Context Switching: An Example

Chet Justice - Mon, 2013-07-01 15:45
Last week at #kscope13 I saw an outstanding example of context switching. If you don't know what it is, Tom Kyte explains it here.

The two environments are just "different", separate and distinct. You can do plsql without SQL, you can do SQL (and many times do) without invoking plsql. There is a call overhead to go from SQL to PLSQL (the "hit" is most evident when SQL invokes PLSQL - not so much the other way, when SQL is embedded in PLSQL). Even if this hit is very very small (say 1/1000th of a second) - if you do it enough, it adds up. So, if it can be avoided - it should be.
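
A minimal illustration of that call overhead (my own sketch, using a hypothetical ORDERS table, not an example from the session):

-- Calling a PL/SQL function from SQL forces a SQL-to-PL/SQL context switch per row.
CREATE OR REPLACE FUNCTION add_tax (p_amount NUMBER) RETURN NUMBER IS
BEGIN
  RETURN p_amount * 1.1;
END;
/

SELECT add_tax(amount) FROM orders;  -- one context switch per row
SELECT amount * 1.1 FROM orders;     -- pure SQL, no context switching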

The session was Using Kanban and Scrum to Increase Your Development Throughput presented by Stew Stryker (not to be confused with Ted Striker) of Dartmouth College (Stew gave me a gallon of Vermont Maple Syrup which exploded in my bag on the flight home, a gift for sharing my hotel room. Thanks Stew! ;)). So here's the example he gave to demonstrate context switching.

Take a list of names and time yourself writing out the first letter of each name, then the second, until you are finished.



Now, same list of names and write them out the way you normally would, left to right.



If the first method was faster, you are a freak of nature.
Categories: BI & Warehousing

"An unexpected error has been detected by HotSpot Virtual Machine" Error While Launching Installer

Asif Momen - Mon, 2013-07-01 05:44
Today, while installing Oracle 11gR2 (11.2.0.3) 64-bit on RHEL 6.1 64-bit on a VMware box, I received the error mentioned in the title. Looking it up on MOS wasn't fruitful. However, this seems to be an RHEL bug.


bash-4.1$ ./runInstaller 
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 27976 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4031 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-07-01_03-12-27AM. Please wait ...bash-4.1$ #
# An unexpected error has been detected by HotSpot Virtual Machine:
#
#  SIGSEGV (0xb) at pc=0x00007f33a28c8d70, pid=2400, tid=139859082229520
#
# Java VM: Java HotSpot(TM) 64-Bit Server VM (1.5.0_30-b03 mixed mode)
# Problematic frame:
# C  [ld-linux-x86-64.so.2+0x14d70]
#
# An error report file with more information is saved as hs_err_pid2400.log
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#

bash-4.1$ 


The workaround to the problem is to set the "LD_BIND_NOW" environment variable to the value "1" as shown below and re-launch the installer.

bash-4.1$ export LD_BIND_NOW=1
bash-4.1$ ./runInstaller 
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 27339 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4031 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-07-01_03-29-40AM. Please wait ...
bash-4.1$ 


According to a post on "The Gruff DBA" blog, this bug seems to have reported on  11.2.0.1 & 11.2.0.3 



:)

Oracle 12c New Features - Query CDB_PDB_HISTORY for Pluggable Database History

ContractOracle - Mon, 2013-07-01 03:45
If you are interested in checking the history of PDBs, the CDB_PDB_HISTORY view is a good place to start.

SQL> SELECT DB_NAME, CON_ID, PDB_NAME, OPERATION, OP_TIMESTAMP, CLONED_FROM_PDB_NAME
FROM CDB_PDB_HISTORY WHERE CON_ID > 2 ORDER BY 5;  

DB_NAME    CON_ID PDB_NAME        OPERATION        OP_TIMESTA CLONED_FROM_PDB
---------- ------ --------------- ---------------- ---------- ---------------
SEEDDATA        5 PDB$SEED        UNPLUG           24-MAY-13
SEEDDATA        3 PDB$SEED        UNPLUG           24-MAY-13
SEEDDATA        4 PDB$SEED        UNPLUG           24-MAY-13
T12             5 PDB$SEED        PLUG             01-JUL-13  PDB$SEED
T12             3 PDB$SEED        PLUG             01-JUL-13  PDB$SEED
T12             4 PDB$SEED        PLUG             01-JUL-13  PDB$SEED
T12             5 PDB1            CREATE           01-JUL-13  PDB$SEED
T12             3 PDB1            CREATE           01-JUL-13  PDB$SEED
T12             4 PDB1            CREATE           01-JUL-13  PDB$SEED
T12             5 PDB1            UNPLUG           01-JUL-13
T12             3 PDB1            UNPLUG           01-JUL-13
T12             4 PDB1            UNPLUG           01-JUL-13
T12             5 PDB1            PLUG             01-JUL-13  PDB1
T12             3 PDB1            PLUG             01-JUL-13  PDB1
T12             4 PDB1            PLUG             01-JUL-13  PDB1
T12             5 PDB1            UNPLUG           01-JUL-13
T12             3 PDB1            UNPLUG           01-JUL-13
T12             4 PDB1            UNPLUG           01-JUL-13
T12             5 PDB1            PLUG             01-JUL-13  PDB1
T12             3 PDB1            PLUG             01-JUL-13  PDB1
T12             4 PDB1            PLUG             01-JUL-13  PDB1
T12             5 PDB1            UNPLUG           01-JUL-13
T12             3 PDB1            UNPLUG           01-JUL-13
T12             4 PDB1            UNPLUG           01-JUL-13
T12             4 PDB2            PLUG             01-JUL-13  PDB1
T12             5 PDB1            PLUG             01-JUL-13  PDB1
T12             3 PDB1            PLUG             01-JUL-13  PDB1
T12             5 PDB3            CLONE            01-JUL-13  PDB1

Categories: DBA Blogs

Oracle 12c New Features - TEMP_UNDO_ENABLED

ContractOracle - Mon, 2013-07-01 01:12
Oracle 12c introduces the new parameter TEMP_UNDO_ENABLED, which can be set at the database and session level.  If this parameter is enabled, undo for temporary objects (e.g. global temporary tables) is written to the TEMP tablespace, instead of the default behaviour of writing to the UNDO tablespace.  This can help improve performance and reduce UNDO and REDO.
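
To enable it database-wide one would use something along these lines (not done in this test, which toggles it at the session level instead):

ALTER SYSTEM SET temp_undo_enabled = TRUE;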

SQL> connect test/test@pdb1
Connected.

SQL> CREATE GLOBAL TEMPORARY TABLE my_temp_table (id integer) ON COMMIT PRESERVE ROWS;

Table created.

SQL> show parameter temp_undo_enabled

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------
temp_undo_enabled                    boolean     FALSE

Now run an insert into the global temporary table with the parameter set to FALSE.

SQL> set autotrace on statistics
SQL> insert into my_temp_table values (1);

1 row created.


Statistics
----------------------------------------------------------
          1  recursive calls
          8  db block gets
          1  consistent gets
          0  physical reads
        312  redo size
        853  bytes sent via SQL*Net to client
        837  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

According to the autotrace statistics, this generates 312 bytes of redo.

SQL> connect test/test@pdb1
Connected.
SQL> alter session set temp_undo_enabled=true;

Session altered.

Now run the insert again with the parameter set to TRUE.  

SQL> set autotrace on statistics
SQL> insert into my_temp_table values (1);

1 row created.


Statistics
----------------------------------------------------------
          3  recursive calls
         13  db block gets
          1  consistent gets
          0  physical reads
        280  redo size
        850  bytes sent via SQL*Net to client
        837  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

According to the autotrace statistics, this generates 280 bytes of redo (compared to 312).  Reduced UNDO and REDO from temporary transactions can help database performance and reduce the disk space needed for UNDO tablespaces, archived logs and backups. The Oracle documentation says: "If database applications make use of temporary objects (using global temporary tables or temporary table transformations), it is advisable to set this parameter's value to true."

Statistics on TEMP UNDO are available via V$TEMPUNDOSTAT

SQL> desc V$TEMPUNDOSTAT
 Name                                      Null?    Type
 ----------------------------------------- -------- -------------
 BEGIN_TIME                                         DATE
 END_TIME                                           DATE
 UNDOTSN                                            NUMBER
 TXNCOUNT                                           NUMBER
 MAXCONCURRENCY                                     NUMBER
 MAXQUERYLEN                                        NUMBER
 MAXQUERYID                                         VARCHAR2(13)
 UNDOBLKCNT                                         NUMBER
 EXTCNT                                             NUMBER
 USCOUNT                                            NUMBER
 SSOLDERRCNT                                        NUMBER
 NOSPACEERRCNT                                      NUMBER
 CON_ID                                             NUMBER

More details here :- http://docs.oracle.com/cd/E16655_01/server.121/e17615/refrn10326.htm#REFRN10326
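
A quick way to look at the temporary undo activity recorded there (a generic query of my own, not from the original post):

SELECT begin_time, end_time, txncount, undoblkcnt
FROM v$tempundostat
ORDER BY begin_time;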



Categories: DBA Blogs

parameter ENABLE_DDL_LOGGING

ContractOracle - Sun, 2013-06-30 23:43
If the Oracle parameter ENABLE_DDL_LOGGING is enabled, DDL records are written to the ADR.

SQL> show parameter enable_ddl_logging

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------
enable_ddl_logging                   boolean     FALSE

SQL> alter system set enable_ddl_logging=true;

System altered.

SQL> connect c##test/test@pdb1
Connected.
SQL> create view x as select * from user_views;

View created.

SQL> drop view x;

View dropped.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

[oracle@rac1 log]$ pwd
/u01/app/oracle/diag/rdbms/t12/T12/log
[oracle@rac1 log]$ ls
ddl  ddl_T12.log  debug  test
[oracle@rac1 log]$ more *.log
Mon Jul 01 12:35:54 2013
diag_adl:create view x as select * from user_views
diag_adl:drop view x

[oracle@rac1 log]$ cd ddl
[oracle@rac1 ddl]$ more *.xml
 msg_id='kpdbLogDDL:15115:2946163730' type='UNKNOWN' group='diag_adl'
 level='16' host_id='rac1.test.com' host_addr='192.168.1.205'
 version='1'>
 create view x as select * from user_views
 msg_id='kpdbLogDDL:15115:2946163730' type='UNKNOWN' group='diag_adl'
 level='16' host_id='rac1.test.com' host_addr='192.168.1.205'>
 drop view x

Categories: DBA Blogs

Oracle 12c New Features - Clone a Plugged In Database

ContractOracle - Sun, 2013-06-30 23:21
One of the benefits of the CDB/PDB model in Oracle 12c is that it allows rapid cloning of Pluggable Databases (PDBs).  To create a clone database in previous versions of Oracle, the DBA would have needed to create a new database instance with a new set of parameters and then clone the source database files using RMAN.

In the following example I will clone database PDB1 to a new database PDB3 using only the "create pluggable database" command.  First the source database needs to be open read-only.

SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 open read only;

Pluggable database altered.

Then clone the PDB1 to PDB3.

SQL> create pluggable database PDB3 from PDB1
FILE_NAME_CONVERT=('/u01/app/oracle/oradata/T12/pdb1','/u01/app/oracle/oradata/T12/pdb3');  

Pluggable database created.

And open both databases read-write.

SQL> alter pluggable database PDB3 open;

Pluggable database altered.

SQL> alter pluggable database PDB1 close immediate;

Pluggable database altered.

SQL> alter pluggable database PDB1 open;

Pluggable database altered.

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           READ WRITE
PDB2                           READ WRITE
PDB3                           READ WRITE


Categories: DBA Blogs

OBIEE 11.1.1.6.11 Bundle Patch Now Available

Abhinav Agarwal - Sun, 2013-06-30 22:37
Last week saw the release of OBIEE Bundle Patch 11.1.1.7.1, and on Friday bundle patch 11.1.1.6.11 for Oracle Business Intelligence Enterprise Edition became available on the My Oracle Support portal.

The tracking ids for the respective bundle patch components are:
  • Oracle Business Intelligence Installer (BIINST): ID: 16747681
  • Oracle Real Time Decisions (RTD): ID: 16747684
  • Oracle Business Intelligence Publisher (BIP): ID: 16747692
  • Oracle Business Intelligence ADF Components (BIADFCOMPS): ID:  16747699
  • Enterprise Performance Management Components Installed from BI Installer 11.1.1.6.x (BIFNDNEPM): ID:  16747703
  • Oracle Business Intelligence: (OBIEE): ID: 16717325
  • Oracle Business Intelligence Platform Client Installers and MapViewer: ID: 16747708
Happy Monday to all!
Abhinav
Bangalore

    Oracle 12c New Features - Plugging and Unplugging Databases

    ContractOracle - Sun, 2013-06-30 22:35
    In a previous blog post I demonstrated creating Pluggable Databases (PDBs) in an Oracle 12c Container Database (CDB).  In this test I will demonstrate how easy it is to unplug a PDB from a CDB and then plug it in again.

    We currently have one PDB, named PDB1.  We will shut it down, unplug it, and drop it.

    SQL> select name from v$pdbs;

    NAME
    ------------------------------
    PDB$SEED
    PDB1

    SQL> alter pluggable database pdb1 close immediate;

    Pluggable database altered.

    SQL> alter pluggable database pdb1 unplug into '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml';

    Pluggable database altered.

    SQL> drop pluggable database pdb1 keep datafiles;

    Pluggable database dropped.

    SQL> select name from v$pdbs;

    NAME
    ------------------------------
    PDB$SEED

    We can now back up the database to tape for a later restore, or copy the datafiles and XML file for the pluggable database to another CDB on another server and plug it in.  In this example I will just plug the database back into the original CDB.

    Before we plug in, we first need to run DBMS_PDB.CHECK_PLUG_COMPATIBILITY to check that the PDB is compatible with the new CDB.

    SQL> set serveroutput on
    SQL> DECLARE
      2    compatible BOOLEAN := FALSE;
      3  BEGIN
      4    compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
      5         pdb_descr_file => '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml');
      6    if compatible then
      7       DBMS_OUTPUT.PUT_LINE('Is pluggable PDB2 compatible? YES');
      8    else DBMS_OUTPUT.PUT_LINE('Is pluggable PDB2 compatible? NO');
      9    end if;
     10  END;
     11  /
    Is pluggable PDB2 compatible? YES

    PL/SQL procedure successfully completed.

    As the PDB is compatible with the CDB we can proceed to plug it in. 

    SQL> create pluggable database PDB1 using '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml' NOCOPY TEMPFILE REUSE;

    Pluggable database created.

    SQL> select name, open_mode from v$pdbs;

    NAME                           OPEN_MODE
    ------------------------------ ----------
    PDB$SEED                       READ ONLY
    PDB1                           MOUNTED

    SQL> alter session set container=PDB1;

    Session altered.

    SQL> alter database open;

    Database altered.

    SQL> connect test/test@pdb1;
    Connected.


    SQL> show con_name



    CON_NAME

    ------------------------------
    PDB1

    We are now able to login to the plugged in database.

    The Alert log entries for these operations are as follows :-

    Mon Jul 01 11:14:31 2013
    alter pluggable database pdb1 close immediate
    Mon Jul 01 11:14:31 2013
    ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
    Pluggable database PDB1 closed
    Completed: alter pluggable database pdb1 close immediate
    alter pluggable database pdb1 unplug into '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml'
    ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
    ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
    Completed: alter pluggable database pdb1 unplug into '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml'
    drop pluggable database pdb1 keep datafiles
    Mon Jul 01 11:15:02 2013
    Deleted file /u01/app/oracle/oradata/T12/pdb1/pdbseed_temp01.dbf
    Completed: drop pluggable database pdb1 keep datafiles
    create pluggable database PDB1 using '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml' NOCOPY TEMPFILE REUSE
    Mon Jul 01 11:20:45 2013
    ****************************************************************
    Pluggable Database PDB1 with pdb id - 3 is created as UNUSABLE.
    If any errors are encountered before the pdb is marked as NEW,
    then the pdb must be dropped
    ****************************************************************
    Deleting old file#10 from file$
    Deleting old file#11 from file$
    Adding new file#12 to file$(old file#10)
    Adding new file#13 to file$(old file#11)
    Successfully created internal service pdb1 at open
    ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
    ****************************************************************

    Post plug operations are now complete.
    Pluggable database PDB1 with pdb id - 3 is now marked as NEW.
    ****************************************************************
    Completed: create pluggable database PDB1 using '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml' NOCOPY TEMPFILE REUSE
    Mon Jul 01 11:29:00 2013
    alter database open
    Mon Jul 01 11:29:00 2013
    Pluggable database PDB1 dictionary check beginning
    Pluggable Database PDB1 Dictionary check complete
    Opening pdb PDB1 (3) with no Resource Manager plan active

    XDB installed.

    XDB initialized.
    Pluggable database PDB1 opened read write
    Completed: alter database open






    Categories: DBA Blogs

    Oracle 12c New Features - Container and Pluggable Databases

    ContractOracle - Sun, 2013-06-30 20:30
    Oracle 12c introduces the "Multitenant Architecture", which allows consolidation of databases via Container Databases (CDBs) and Pluggable Databases (PDBs).

    The CDB database owns the SGA and running processes, and the PDB databases are serviced by those resources.  This new architecture will be a big change for DBAs experienced in managing earlier versions of the Oracle database, so it is worth taking the time to read the documentation and test extensively before using these new features.  The theory is that many databases sharing one SGA and set of processes should be more efficient than multiple individually managed memory segments, so this feature is specifically aimed at clouds and large companies.


    I used the DBCA utility to create a CDB called T12, which also created a small PDB$SEED database.  DBCA is easy to run, and similar to previous versions, so I won't show screen shots here.  It is also possible to create a CDB database using the CREATE DATABASE statement along with the new ENABLE PLUGGABLE DATABASE clause.  
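
    For reference, here is a heavily trimmed sketch of the manual statement. Treat it as illustrative only: the passwords are placeholders, the seed FILE_NAME_CONVERT paths simply reuse the T12 layout from this post, and a real manual creation would also need LOGFILE, character set and tablespace clauses plus the catalog scripts afterwards (or just use DBCA).

    CREATE DATABASE T12
       USER SYS IDENTIFIED BY oracle
       USER SYSTEM IDENTIFIED BY oracle
       ENABLE PLUGGABLE DATABASE
         SEED FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/T12/',
                                   '/u01/app/oracle/oradata/T12/pdbseed/');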


    When managing CDBs and PDBs it is important to ensure you know which container you are currently working in.  By default, when you log in you will end up in CDB$ROOT.


    [oracle@rac1 admin]$ ps -ef | grep pmon
    oracle    7830     1  0 09:08 ?        00:00:00 ora_pmon_T12

    [oracle@rac1 admin]$ echo $ORACLE_SID

    T12

    [oracle@rac1 admin]$ sqlplus / as sysdba

    SQL*Plus: Release 12.1.0.1.0 Production on Mon Jul 1 09:11:11 2013
    Copyright (c) 1982, 2013, Oracle.  All rights reserved.

    Connected to:

    Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
    and Unified Auditing options

    SQL> show con_name

    CON_NAME
    ------------------------------
    CDB$ROOT

    SQL> show con_id

    CON_ID
    ------------------------------
    1

    By selecting from v$database we can see that we are currently in a CDB.

    SQL> select DBID, name, CDB, CON_ID, CON_DBID from v$database;

          DBID NAME      CDB     CON_ID   CON_DBID
    ---------- --------- --- ---------- ----------
    1216820329 T12       YES          0 1216820329

    We can select the PDB column from dba_services to check which PDBs exist.  In this case we have not yet created any PDBs, so all of the services are listed against the root container, CDB$ROOT.

    SQL> select name, pdb from dba_services;

    NAME                PDB
    ----------------------------------------------------------------
    SYS$BACKGROUND      CDB$ROOT
    SYS$USERS           CDB$ROOT
    T12XDB              CDB$ROOT
    T12                 CDB$ROOT

    SQL> select name, con_id from v$active_services;

    NAME                                                  CON_ID
    ---------------------------------------------------------------- 
    T12XDB                                                1
    T12                                                   1
    SYS$BACKGROUND                                        1
    SYS$USERS                                             1

    Create a tnsnames.ora entry for connecting to CDB service T12.

    T12 =
     (DESCRIPTION =
     (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = rac1.test.com)(PORT = 1521)))
     (CONNECT_DATA =
     (SERVICE_NAME = T12)
     )
      )

    When we try to create a user we find that in the CDB root we are not allowed to create "local" users, but can only create "common" users with usernames starting with "C##".

    SQL> create user test identified by test;
    create user test identified by test
                *
    ERROR at line 1:
    ORA-65096: invalid common user or role name

    SQL> !oerr ora 65096
    65096, 00000, "invalid common user or role name"
    // *Cause:  An attempt was made to create a common user or role with a name
    //          that wass not valid for common users or roles.  In addition to
    //          the usual rules for user and role names, common user and role
    //          names must start with C## or c## and consist only of ASCII
    //          characters.
    // *Action: Specify a valid common user or role name.
    //

    SQL> create user test identified by test container=current;
    create user test identified by test container=current
                                   *
    ERROR at line 1:
    ORA-65049: creation of local user or role is not allowed in CDB$ROOT

    SQL> create user c##test identified by test;

    User created.

    There are now additional data dictionary views to help manage the PDB and CDB databases (their names include PDB or CDB, for example CDB_PDBS and V$PDBS).  You will also notice that many data dictionary views now contain a CON_ID column which allows DBAs to check details for a specific CDB or PDB.
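
    As a quick sketch of the kind of query this enables (run from CDB$ROOT; the views and columns used here are as documented for 12.1, but the output will obviously differ per system):

    SQL> select con_id, pdb_name, status from cdb_pdbs order by con_id;

    SQL> select con_id, count(*) from cdb_users group by con_id order by con_id;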

    If we select from v$datafile in the new container database we can see that in addition to the datafiles for the CDB T12, there are datafiles listed for database "pdbseed".  The PDB$SEED database is created at the same time as the CDB and can be used as a source to create PDB databases.


    SQL> select name, con_id, plugged_in from v$datafile order by 2;

    NAME                                           CON_ID PLUGGED_IN
    -------------------------------------------------- ---------- ---
    /u01/app/oracle/oradata/T12/system01.dbf            1          0
    /u01/app/oracle/oradata/T12/sysaux01.dbf            1          0
    /u01/app/oracle/oradata/T12/undotbs01.dbf           1          0
    /u01/app/oracle/oradata/T12/users01.dbf             1          0
    /u01/app/oracle/oradata/T12/pdbseed/system01.dbf    2          0
    /u01/app/oracle/oradata/T12/pdbseed/sysaux01.dbf    2          0

    6 rows selected.

    The PDB$SEED database is open read only, so it is possible to explore it, but there are limits to what you can do with this DB.

    SQL> select con_id, name, open_mode from v$pdbs;

        CON_ID NAME                                        OPEN_MODE
    ---------- ------------------------------------------- ---------
             2 PDB$SEED                                    READ ONLY

    SQL> alter session set container=PDB$SEED;

    Session altered.

    SQL> select name from v$database;

    NAME
    ---------
    T12

    SQL> show con_name

    CON_NAME
    ------------------------------
    PDB$SEED

    SQL> show con_id

    CON_ID
    ------------------------------
    2

    SQL> select open_mode from v$database;

    OPEN_MODE
    --------------------
    READ ONLY

    SQL> shutdown;
    ORA-65017: seed pluggable database may not be dropped or altered

    To create our own read-write PDB as a copy of the PDB$SEED database we just need to execute the "create pluggable database" command.

    SQL> create pluggable database PDB1 admin user pdb1_admin identified by password roles = (DBA) FILE_NAME_CONVERT=('/u01/app/oracle/oradata/T12/pdbseed','/u01/app/oracle/oradata/T12/pdb1');

    Pluggable database created.


    SQL> select pdb_name, status from cdb_pdbs;

    PDB_NAME      STATUS
    ----------------------------------------------------------------
    PDB$SEED      NORMAL
    PDB1          NEW

    SQL> select name, con_id from v$active_services order by 1;

    NAME                                           CON_ID
    ---------------------------------------------------------------- 
    SYS$BACKGROUND                                 1
    SYS$USERS                                      1
    T12                                            1
    T12XDB                                         1
    pdb1                                           3

    SQL> select name from v$datafile where con_id=3;

    NAME
    -----------------------------------------------------------------
    /u01/app/oracle/oradata/T12/pdb1/system01.dbf
    /u01/app/oracle/oradata/T12/pdb1/sysaux01.dbf

    SQL> select name, open_mode from v$pdbs;

    NAME                           OPEN_MODE
    ------------------------------ ----------
    PDB$SEED                       READ ONLY
    PDB1                           MOUNTED

    We can see from the above that the new PDB was created in MOUNTED state.  We will need to open it if we want to use it.  When we look in the CDB alert log we can see the following :-

    Mon Jul 01 10:02:30 2013
    create pluggable database PDB1 admin user pdb1_admin identified by * roles = (DBA) FILE_NAME_CONVERT=('/u01/app/oracle/oradata/T12/pdbseed','/u01/app/oracle/oradata/T12/pdb1')
    Mon Jul 01 10:02:53 2013
    ****************************************************************
    Pluggable Database PDB1 with pdb id - 3 is created as UNUSABLE.
    If any errors are encountered before the pdb is marked as NEW,
    then the pdb must be dropped
    ****************************************************************
    Deleting old file#5 from file$
    Deleting old file#7 from file$
    Adding new file#10 to file$(old file#5)
    Adding new file#11 to file$(old file#7)
    Successfully created internal service pdb1 at open
    ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
    ****************************************************************
    Post plug operations are now complete.
    Pluggable database PDB1 with pdb id - 3 is now marked as NEW.
    ****************************************************************

    Completed: create pluggable database PDB1 admin user pdb1_admin identified by * roles = (DBA) FILE_NAME_CONVERT=('/u01/app/oracle/oradata/T12/pdbseed','/u01/app/oracle/oradata/T12/pdb1')

    We can open databases individually as follows, or open all PDBs using "alter pluggable database all open;"

    SQL> alter session set container=PDB1;

    Session altered.

    SQL> show con_name

    CON_NAME
    ------------------------------
    PDB1

    SQL> alter database open;

    Database altered.

    SQL> select name, open_mode from v$pdbs;

    NAME                           OPEN_MODE
    ------------------------------ ----------
    PDB1                           READ WRITE

    We can see the following in the CDB alert log.

    alter database open
    Mon Jul 01 10:12:50 2013
    Pluggable database PDB1 dictionary check beginning
    Pluggable Database PDB1 Dictionary check complete
    Opening pdb PDB1 (3) with no Resource Manager plan active

    XDB installed.


    XDB initialized.

    Pluggable database PDB1 opened read write

    Completed: alter database open

    Create a tnsnames.ora entry for the new PDB using the default service PDB1.

    PDB1 =
     (DESCRIPTION =
     (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = rac1.test.com)(PORT = 1521)))
     (CONNECT_DATA =
     (SERVICE_NAME = PDB1)
     )
      )

    Now that we have created a PDB we can create "local" users.

    SQL> show con_name

    CON_NAME
    ------------------------------
    PDB1

    SQL> create user test identified by test;

    User created.

    SQL> grant create session to test;

    Grant succeeded.

    It is also possible to grant privileges in the PDB for the "common" users that exist in the CDB.

    SQL> grant create session to C##TEST container=ALL;


    Grant succeeded.

    We can now connect directly to the PDB1 pluggable database using both the "local" and "common" users.

    SQL> connect test/test@PDB1
    Connected.
    SQL> show con_name

    CON_NAME
    ------------------------------
    PDB1

    SQL> connect C##TEST/test@PDB1
    Connected.

    SQL> show con_name

    CON_NAME
    ------------------------------
    PDB1

    When logged into PDBs many data dictionary views will restrict our view of the world via CON_ID so that we can't see records relating to other PDBs.  We can see from the following selects that PDB1 uses the UNDO and REDO files from the container database, but has its own SYSTEM, SYSAUX, TEMP files.

    SQL> select name from v$datafile;

    NAME
    -----------------------------------------------------------------
    /u01/app/oracle/oradata/T12/undotbs01.dbf
    /u01/app/oracle/oradata/T12/pdb1/system01.dbf
    /u01/app/oracle/oradata/T12/pdb1/sysaux01.dbf

    SQL> select name from v$tempfile;

    NAME
    -----------------------------------------------------------------
    /u01/app/oracle/oradata/T12/pdb1/pdbseed_temp01.dbf

    SQL> select member from v$logfile;

    MEMBER
    -----------------------------------------------------------------
    /u01/app/oracle/oradata/T12/redo03.log
    /u01/app/oracle/oradata/T12/redo02.log
    /u01/app/oracle/oradata/T12/redo01.log

    PDB databases can be created from a seed database as demonstrated above, cloned from other PDB databases, or plugged in from previously unplugged PDBs or converted non-CDB databases.  In addition to creating new PDB databases we can also drop, rename, clone, unplug, plug in, back up, restore, and duplicate them.  Check the Oracle documentation for details.
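
    As a rough sketch of a couple of those operations (PDB2 and its paths below are purely illustrative, reusing the directory layout from this post; in 12.1 the source PDB must be open read only before it can be cloned):

    alter pluggable database PDB1 close immediate;
    alter pluggable database PDB1 open read only;
    create pluggable database PDB2 from PDB1
      FILE_NAME_CONVERT=('/u01/app/oracle/oradata/T12/pdb1','/u01/app/oracle/oradata/T12/pdb2');

    alter pluggable database PDB2 close immediate;
    alter pluggable database PDB2 unplug into '/u01/app/oracle/oradata/T12/pdb2/pdb2.xml';
    drop pluggable database PDB2 keep datafiles;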

    More details here :- Introduction to the Multitenant Architecture
    Categories: DBA Blogs

    Playing with VirtualBox, Oracle 12c (12.1.0.1) and OEL 6.4 – Part 2

    Oracle NZ - Sun, 2013-06-30 19:24

    We will complete our lab by installing the Oracle Database software, creating a Pluggable Database and taking a look at Oracle Enterprise Manager Database Express.

    The steps to install and create an Oracle Database 12c are:

    1 – Go to the database directory created when you unzipped the two Oracle Database files and run ./runInstaller.

    2 – Deselect the option [I wish to receive security updates via My Oracle Support] and click [Next]. Of course, if you want to receive updates, all you need to do is leave this option selected and enter your My Oracle Support email and password.

    3 – You will receive a warning message that you have not provided an email address. Just click on [Yes].

    4 – Select [Skip software updates] and click [Next].

    5 – Select [Create and Configure a Database] and click [Next].

    6 – Select [Server Class] and click [Next]. *** If you do not have enough resources on your laptop, please choose Desktop Class instead of Server Class.

    7 – Select the type of Database to be installed, choose [Single instance database installation] and click [Next].

    8 – Select the Install type. For this lab I will choose [Advanced Install] and click [Next].

    9 – Select the product language; here all you need to do is leave English as the default and click [Next].

    10 – Select the Database Edition you want to install. Select [Enterprise Edition] and click [Next].

    11 – Specify the Installation location and click [Next].

    12 – Create the Inventory and click [Next].

    13 – Select the type of database you want to create. Select [General Purpose] and click [Next].

    14 – Specify the database identifiers: enter “orcl” as the SID, and here you can decide whether to create a Container Database or, as in earlier releases, a non-Container Database (by not selecting [Create as Container database]). For this example we will select Container database, create a Pluggable Database called “pdborcl”, and click [Next].

    15 – Specify the configuration options; by default Automatic Memory Management is enabled. Click [Next].

    16 – Specify the storage to be used. Select [File System] and click [Next].

    17 – Management options. If you have EM Cloud Control running in your environment, this is where you specify the EM details to manage this database. Leave the defaults and click [Next].

    18 – Enable Recovery and click [Next].

    19 – Specify passwords. For this scenario we use the password “oracle” for all accounts. Click [Next].

    20 – A warning will appear because we are using an easy (weak) password. Click [Yes].

    21 – Click [Next] on the Privileged Operating System Groups screen.

    22 – Review the Summary page and click [Install].

    23 – Execute the configuration scripts as root and click [OK] to continue the installation.

    24 – On the Finish page click [Close].

    25 – Edit /etc/oratab as follows.

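    A typical entry follows the SID:ORACLE_HOME:startup-flag format. The ORACLE_HOME path below is an assumption based on a default 12.1.0.1 installation; set the last field to Y if you want the dbstart script to start this instance automatically.

    orcl:/u01/app/oracle/product/12.1.0/dbhome_1:Y
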
    26 – Check the listener status.

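    From the oracle OS account this can be done with the listener control utility; once the instance has registered, the listener should also be publishing services for the CDB (orcl) and the pluggable database (pdborcl).

    $ lsnrctl status
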
    27 – Check if the Container Database is running.

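    A quick way to verify this (a sketch; orcl is the SID chosen in step 14) is to look for the background processes and then query the database itself:

    $ ps -ef | grep pmon
    $ sqlplus / as sysdba

    SQL> select name, cdb, open_mode from v$database;
    SQL> select name, open_mode from v$pdbs;
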
    28 – Connect to the Enterprise Manager Database Express. It will first ask you to add a Security Exception in Firefox.

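    EM Express is served by the XDB HTTPS listener. The URL below is only a sketch: the hostname is whatever your VM uses, and 5500 is just the usual default port, which you can confirm from SQL*Plus.

    https://<your-vm-hostname>:5500/em

    SQL> select dbms_xdb_config.gethttpsport() from dual;
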
    29 – The next step is to download the Adobe Flash Player RPM and install it, as EM Express requires Flash in the browser.

    30 – Enter your database username and password (SYS and oracle) and click [Login].

    31 – This is the first screen of the new Enterprise Manager Database Express 12c. Here you cannot set up monitoring and alerting or run backup and recovery operations, but you can do plenty of other things, for example: manage storage (undo, redo log files and control files), configure initialization parameters, memory and database features, and manage performance, SQL tuning, and users and roles (security).

    Hope you enjoyed this tutorial and soon many more will come.

     

    Regards,

     

    Francisco Munoz Alvarez





    Categories: DBA Blogs

    Easy HTML output in IPython Notebook

    Catherine Devlin - Fri, 2013-06-28 12:54
    If any object has a _repr_html_ method, the IPython Notebook will use it to render HTML output. It's really easy to make a simple class that permits general dynamic HTML-rich output with Markdown. Markdown is a superset of HTML, so HTML in your output string will work, too.

    import markdown

    class MD(str):
        def _repr_html_(self):
            return markdown.markdown(self)
    Four little lines, and you can do this!

    SOA Suite 11g Developers Cookbook Published

    Antony Reynolds - Fri, 2013-06-28 10:33
    SOA Suite 11g Developers Cookbook Available

    Just realized that I failed to mention that Matt’s and my most recent book, the SOA Suite 11g Developers Cookbook, was published over Christmas last year!

    In some ways this was an easier book to write than the Developers Guide; the hard bit was deciding which recipes to include.  Once we had decided that, the writing of the book was pretty straightforward.

    The book focuses on areas that we felt we had neglected in the Developers Guide, and so there is more about Java integration and OSB, both of which we see a lot of questions about when working with customers.

    Amazon has a couple of reviews.

    Table of Contents

    Chapter 1: Building an SOA Suite Cluster
    Chapter 2: Using the Metadata Service to Share XML Artifacts
    Chapter 3: Working with Transactions
    Chapter 4: Mapping Data
    Chapter 5: Composite Messaging Patterns
    Chapter 6: OSB Messaging Patterns
    Chapter 7: Integrating OSB with JSON
    Chapter 8: Compressed File Adapter Patterns
    Chapter 9: Integrating Java with SOA Suite
    Chapter 10: Securing Composites and Calling Secure Web Services
    Chapter 11: Configuring the Identity Service
    Chapter 12: Configuring OSB to Use Foreign JMS Queues
    Chapter 13: Monitoring and Management

    More Reviews

    In addition to the Amazon Reviews I also found some reviews on GoodReads.

    Free WebLogic Administration Cookbook

    Antony Reynolds - Fri, 2013-06-28 10:16
    Free WebLogic Admin Cookbook

    Packt Publishing are offering free copies of Oracle WebLogic Server 12c Advanced Administration Cookbook : http://www.packtpub.com/oracle-weblogic-server-12c-advanced-administration-cookbook/book  in exchange for a review either on your blog or on the title’s Amazon page.

    Here’s the blurb:

    • Install, create and configure WebLogic Server
    • Configure an Administration Server with high availability
    • Create and configure JDBC data sources, multi data sources and gridlink data sources
    • Tune the multi data source to survive database failures
    • Setup JMS distributed queues
    • Use WLDF to send threshold notifications
    • Configure WebLogic Server for stability and resilience

    If you’re a datacenter operator, system administrator or even a Java developer, this book could be exactly what you are looking for to take you one step further with Oracle WebLogic Server, and this is a good way to bag yourself a free cookbook (current retail price $25.49).

    Free review copies are available until Tuesday 2nd July 2013, so if you are interested, email Harleen Kaur Bagga at: harleenb@packtpub.com.

    I will be posting my own review shortly!
