
Feed aggregator

SQL Server: DBCC CHECKDB does not detect corruption

Yann Neuhaus - Mon, 2014-08-04 02:12

During my audits at customer sites, I still very often find SQL Server databases with the page verification option set to "none". I always alert my customers to this configuration point because it can affect the overall integrity of their databases. One of my customers told me that the integrity task in his maintenance plan would detect the corruption anyway and alert him by email - but is that really the case?

Of course, my response to the customer is that it depends on the corruption type and that a dbcc checkdb is not an absolute guarantee in this case. I like to show the following example to my customers to convince them to change the page verify option to something other than "none".

Let me show this to you:

 

use [master];
go

-- create the corrupt database
if DB_ID('corrupt') is not null
       drop database corrupt;

create database corrupt;
go

-- force the page verify option to none
alter database corrupt set page_verify none;
go

use corrupt;
go

-- create two tables t1 and t2
create table dbo.t1 (id int identity primary key, col1 varchar(50));
create table dbo.t2 (id int identity primary key, col1 int, col2 int, col3 varchar(50));

-- insert some sample data
declare @random int;
declare @i int = 1;

-- the loop bound was lost in the original formatting; 1000 rows is an assumption
while @i <= 1000
begin
       set @random = 10000 / (rand() * 1000);
       if @random > 50 set @random = 50;

       insert t1(col1) values (REPLICATE('t', 50));
       insert t2(col1, col2, col3) values (@i, @i + 1, REPLICATE('t', @random));

       set @i = @i + 1;
end

-- add a foreign key on column id of table t2 that references table t1 on column id
alter table dbo.t2 add constraint FK_t2_id foreign key (id) references dbo.t1(id);
go

-- create a covering nonclustered index on table t2
create nonclustered index idx_t2_col1 on dbo.t2 (col1, col2) include (col3);
go

 

At this point we have two tables named t1 and t2. Table t2 has a foreign key constraint on the id column that references the table t1 on the column with the same name.

Now let’s corrupt a data page in the clustered index on the table t1. First, we will find the first data page in the clustered index of the table t1:

 

-- get the first data page of the t1 table's clustered index
dbcc ind('corrupt', 'dbo.t1', 1);
go

 

[screenshot: dbcc ind output listing the pages of the t1 clustered index]

 

Then we will find the first row, which is stored in slot 0 at offset 0x060.

 

-- display a dump of page id = 15
dbcc traceon(3604);
go
dbcc page ('corrupt', 1, 15, 3);
go

 

[screenshot: dbcc page dump of page 15, showing slot 0 at offset 0x60]

 

Now it's time to corrupt the id column (id = 1), located at offset 0x4 within the row. That means we have to go to offset 0x60 + 0x4 = 0x64, i.e. decimal 100, to corrupt this column.

We will use the DBCC WRITEPAGE undocumented command to corrupt our page (again, a big thanks to Paul Randal for showing us how to use this command for testing purposes).

 

-- corrupt the concerned page
alter database corrupt set single_user;
go

dbcc writepage('corrupt', 1, 15, 100, 1, 0x00, 1);

alter database corrupt set multi_user;
go

 

Now if we take a look at the page id=15, we notice that the id column value is now changed from 1 to 0.

 

[screenshot: dbcc page dump showing the id value changed from 1 to 0]

 

Ok, let’s run a DBCC CHECKDB command:

 

-- perform an integrity check with dbcc checkdb
dbcc checkdb('corrupt') with no_infomsgs, all_errormsgs;
go

 

[screenshot: dbcc checkdb output reporting no errors]

 

As you can see, the dbcc checkdb command does not detect any corruption! Now, let’s run the following statements:

 

-- first query
select t2.col2, t2.col3
from dbo.t2
       join dbo.t1 on t1.id = t2.id
where t2.col1 = 1;

 

[screenshot: result of the first query, one row returned]

 

Do you notice that reading the corrupted page does not trigger an error in this case?

 

-- second query
select t2.col2, t2.col3, t1.col1
from dbo.t2
       join dbo.t1 on t1.id = t2.id
where t2.col1 = 1;

 

[screenshot: result of the second query, no rows returned]

 

As you can notice, adding the t1.col1 column to the query changes the outcome: the two queries do not return the same rows. Strange behavior, isn't it? In fact, the two queries don't use the same execution plan, as shown below:

Query 1:

 

[screenshot: execution plan of query 1]

 

Query 2:

 

[screenshot: execution plan of query 2]

 

In query 1, thanks to the foreign key constraint, the query execution engine doesn't need to join t2 to t1, because all the data we need is already covered by the idx_t2_col1 index on table t2. The story is not the same with query 2: here we want to retrieve an additional value, provided by the col1 column of table t1. In this case SQL Server has to join t1 and t2 because the covering index idx_t2_col1 cannot provide all the data we need. But remember that we corrupted the id column of the primary key of table t1 by changing the value from 1 to 0. This is why query 2 doesn't return any rows.

The main question here is: why doesn't dbcc checkdb detect the corruption? In this case the corruption occurred directly in a data value, and dbcc checkdb has no verification mechanism that can catch it. Having a checksum stored in the page would help here: dbcc checkdb could compare a checksum computed while reading the page with the checksum stored on it.
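To illustrate the principle, here is a small sketch using the T-SQL CHECKSUM function. This is illustrative only: CHECKSUM is not the internal algorithm SQL Server uses for page checksums, but it shows how flipping a single byte changes a computed checksum.

```sql
-- Illustrative sketch: the same 8 bytes before and after flipping one byte
-- (as DBCC WRITEPAGE did above). A checksum stored at write time would no
-- longer match a checksum computed at read time over the corrupted bytes.
select CHECKSUM(0x0100000074747474) as checksum_before,
       CHECKSUM(0x0000000074747474) as checksum_after_corruption;
```

The two returned values differ, and that mismatch is exactly what allows a read of the page to flag the corruption when the checksum page verify option is enabled.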

Below is the output produced by the dbcc checkdb command if the checksum page verify option had been enabled for the database …

 

[screenshot: dbcc checkdb output reporting checksum errors]

 

… or when we ran the query used earlier:

 

[screenshot: query error raised when reading the corrupted page]

 

My conclusion:

Do not hesitate to change your page verify option value when it is configured to "none".
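This can be done with a simple, documented ALTER DATABASE statement (the database name [corrupt] is the one from this demo; replace it with your own):

```sql
-- Switch the page verify option to CHECKSUM
alter database [corrupt] set page_verify checksum;
go

-- Check the current setting for every database
select name, page_verify_option_desc
from sys.databases;
```

Note that a page only gets its checksum the next time it is written to disk, so enabling the option does not retroactively protect pages that are never rewritten.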

PostgreSQL for Oracle DBAs - an introduction

Yann Neuhaus - Sun, 2014-08-03 22:16

Having worked for several years as an Oracle DBA, I decided to have a look at the PostgreSQL database and see how it functions in comparison to the Oracle Database.

The "Enterprise DB" graphical installation of PostgreSQL 9.3 is quite easy and rather fast. Under Linux you run the graphical installer, dialog boxes lead you through the installation process. You enter the specific information of your system and at the end of the PostgreSQL installation, the Stack Builder package is invoked if you need to install applications, drivers, agents or utilities.

You can download the Enterprise DB utility using the following URL:

http://www.enterprisedb.com/downloads/postgres-postgresql-downloads

I have installed PostgreSQL 9.3 using Enterprise DB as described below:

 

[screenshot: installer welcome screen]

 

Choose Next.

 

[screenshot: installation directory dialog]

 

Specify the installation directory where PostgreSQL 9.3 will be installed.

 

[screenshot: data directory dialog]

 

Select the directory that will store the data.

 

[screenshot: password dialog for the postgres database user]

 

Provide a password to the PostgreSQL database user.

 

[screenshot: port number dialog]

 

Select a port number.

 

[screenshot: locale selection dialog]

 

Choose the locale for the new database cluster.

 

[screenshot: ready-to-install screen]

 

PostgreSQL is now ready to be installed.

 

[screenshot: Stack Builder option screen]

 

You can choose whether or not to launch the Stack Builder; if not, the installation process begins.

If you encounter any problem during the installation phase, the log files are generated in /tmp.

Under Linux, a shell script named uninstall-postgresql is created in the PostgreSQL home directory to de-install the software.

The installation phase is very quick, your PostgreSQL cluster database is ready to use. Furthermore, the Enterprise DB installation creates the automatic startup file in /etc/init.d/postgresql-9.3 to start PostgreSQL in case of a server reboot.

Once the Enterprise DB installation has completed, a database storage area (a database cluster) is initialized on disk. This database cluster contains a database named postgres, which is used by utilities and users:

 

postgres=# \list
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | postgres=CTc/postgres+
           |          |          |            |            | =c/postgres

 

By default, a new database is created by cloning the standard system database template1. The template0 database allows you to create a database containing only the pre-defined standard objects.
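For example (the database names below are illustrative):

```sql
-- A plain CREATE DATABASE clones template1:
CREATE DATABASE mydb;

-- Cloning template0 instead gives a database containing only the
-- pre-defined standard objects, which is useful, for instance, when you
-- need an encoding different from the one template1 was created with:
CREATE DATABASE mydb_clean TEMPLATE template0 ENCODING 'UTF8';
```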

The PostgreSQL equivalent of Oracle's sqlplus command is psql. As you will see in this document, the PostgreSQL meta-commands begin with the \ sign. The "\?" command lists all of them.

For example, the following command connects to the psi database:

 

-bash-3.2$ psql -d psi
Password:
psql.bin (9.3.4)
Type "help" for help.
No entry for terminal type "xterm";
using dumb terminal settings.
psi=# \q


If you do not want the system to ask for a password, you simply have to create a .pgpass file in the postgres home directory, with 0600 permissions and the following syntax:

 

-bash-3.2$ more .pgpass

localhost:5432:PSI:postgres:password

 

-bash-3.2$ su - postgres

Password:

-bash-3.2$ psql -d psi

psql.bin (9.3.4)

Type "help" for help.

No entry for terminal type "xterm";

using dumb terminal settings.

psi=#

psi-# \q

 

At first you probably need to create a database. As an Oracle DBA, I was wondering about some typical problems such as character set or default tablespace. With PostgreSQL, it is quite easy to create a database.

As the locale en_US.utf8 has been chosen during the installation phase to be used by the cluster database, every database you will create will use it.

When you create a database you can specify a default tablespace and an owner. At first we create a tablespace:

 

postgres=# create tablespace psi location '/u01/postgres/data/psi';

CREATE TABLESPACE

 

The tablespace data is located in /u01/postgres/data/psi:

 

-bash-3.2$ ls

PG_9.3_201306121

-bash-3.2$ ls PG_9.3_201306121/

16526

-bash-3.2$ ls PG_9.3_201306121/16526/

12547     12587_vm  12624     12663     12728     12773

12547_fsm 12589     12625     12664     12728_fsm 12774

12664_vm  12730   12774_vm     12627     12666     12731     12776

 

Then we create the database:

 

postgres=# create database psi owner postgres tablespace psi;

CREATE DATABASE

 

We can list all databases with the \list command:

 

postgres=# \list
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 psi       | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | postgres=CTc/postgres+
           |          |          |            |            | =c/postgres

 

Now we can connect to the psi database and create objects; the syntax is quite similar to Oracle's:

 

postgres=# \c psi

You are now connected to database "psi" as user "postgres".


We create a table and an index:

 

psi=# create table employe (name varchar);

CREATE TABLE

psi=# create index employe_ix on employe (name);

CREATE INDEX

 

We insert values in it:


psi=# insert into employe values ('bill');

INSERT 0 1

 

We reconnect to the psi database:


-bash-3.2$ psql -d psi

Password:

psql.bin (9.3.4)

Type "help" for help.

No entry for terminal type "xterm";

using dumb terminal settings.


The following command lists the tables:


psi=# \dt[+]                    

             List of relations

Schema | Name   | Type | Owner   | Size | Description

--------+---------+-------+----------+-------+-------------

public | employe | table | postgres | 16 kB |

(1 row)

psi=# select * from employe;

name

------

bill

(1 row)

 

The PostgreSQL \d+ command is the equivalent of the Oracle desc command:


psi=# \d+ employe                            

                Table "public.employe"

Column |       Type       | Modifiers | Storage | Stats target | Description

--------+-------------------+-----------+----------+--------------+-------------

name   | character varying |          | extended |             |

Indexes:

   "employe_ix" btree (name)

Has OIDs: no


Obviously we also have the possibility to create a schema and create objects in this schema.

Let's create a schema:


psi=# create schema psi;

CREATE SCHEMA


Let's create a table, insert objects in it and create a view:


psi=# create table psi.salary (val integer);

CREATE TABLE

psi=# insert into psi.salary values (10000);

INSERT 0 1

psi=# select * from psi.salary;

val

-------

10000

psi=# create view psi.v_employe as select * from psi.salary;

CREATE VIEW

 

If we list the tables we can only see the public objects:


psi=# \d        

        List of relations

Schema | Name   | Type | Owner  

--------+---------+-------+----------

public | employe | table | postgres

(1 row)


If we modify the search path, all schemas are visible:


psi=# set search_path to psi,public;

SET

psi=# \d 

        List of relations

Schema | Name   | Type | Owner  

--------+---------+-------+----------

psi   | salary | table | postgres

public | employe | table | postgres


Oracle DBAs are familiar with SQL queries against the data dictionary - e.g. getting the table list of the current schema by typing select table_name from user_tables, etc.

What is the equivalent query in postgreSQL?

PostgreSQL uses a schema named information_schema available in every database. The owner of this schema is the initial database user in the cluster. You can drop this schema, but the space saving is negligible.

You can easily query the tables of this schema to get valuable information about your database objects.

Here is a list of the tables in our schemas:


psi=# select table_name, table_schema from information_schema.tables where table_schema in ('public','psi');

table_name | table_schema

------------+--------------

employe   | public

salary     | psi


We can display the database character set:


psi=# select character_set_name from information_schema.character_sets;

character_set_name

--------------------

UTF8

 

We can display schema views:


psi=# select table_name from information_schema.views where table_schema='psi';

table_name

------------

v_employe


Using the information_schema schema helps us display information about many different database objects (tables, constraints, sequences, triggers, table privileges …).

Like in Oracle, you can run a query from the SQL prompt or from the UNIX prompt. For example, if you want to know the index name of the table employe, you could use a script such as index.sql:


select
  t.relname as table_name,
  i.relname as index_name,
  a.attname as column_name
from
  pg_class t,
  pg_class i,
  pg_index ix,
  pg_attribute a
where t.oid = ix.indrelid
  and i.oid = ix.indexrelid
  and a.attrelid = t.oid
  and a.attnum = ANY(ix.indkey)
  and t.relkind = 'r'
  and t.relname = 'employe'
order by t.relname, i.relname;


If you want to display the employee index from the SQL prompt, you run:


psi=# \i index.sql

table_name | index_name | column_name

------------+------------+-------------

employe   | employe_ix | name


If you want to run the same query from the UNIX prompt:


-bash-3.2$ psql -d psi -a -f index.sql

Password:

table_name | index_name | column_name

------------+------------+-------------

employe   | employe_ix | name


However, typing SQL queries is all very well, but - like many Oracle DBAs - I like using an administration console because I think it increases efficiency.

I have discovered pgAdmin, an administration tool designed for Unix or Windows systems. pgAdmin is easy to install on a PostgreSQL environment and enables many operations for the administration of a cluster database.

pgAdmin3 is installed in the home directory of the user postgres - in my case in /opt/PostgreSQL/9.3.

To launch pgAdmin3 successfully, it is necessary to correctly initialize the LD_LIBRARY_PATH variable:

 

export LD_LIBRARY_PATH=/opt/PostgreSQL/9.3/lib:/opt/PostgreSQL/9.3/pgAdmin3/lib

 

The pgadmin3 console:

 

[screenshot: the pgAdmin3 console]

 

As you can see, you can administer every database object (tables, functions, sequences, triggers, views…).

You can visualize the table creation scripts:

 

[screenshot: table creation script view]

 

You can modify the privileges of an object:

 

[screenshot: object privileges dialog]

 

You also have the possibility to create scripts for the database creation:

 

[screenshot: database creation script]

 

Or even to backup the database:

 

[screenshot: database backup dialog]

 

This tool seems to be very powerful, even if, for the moment, I have not found a performance tool like the one available in Cloud Control 12c.

 

Conclusion

Discovering PostgreSQL as an Oracle DBA, I realized how close the two products are. The PostgreSQL database has a lot of advantages, such as easy installation, straightforward general usage, and the price (it's free!).

For the processing of huge amounts of data, Oracle certainly has advantages; nevertheless, the choice of an RDBMS always depends on your application's business needs.

#GoldenGate Bound Recovery

DBASolved - Sun, 2014-08-03 19:40

Every once in a while when I restart an extract, I see entries in the report file that reference "Bounded Recovery".  What exactly is "Bounded Recovery"?

First, keep in mind that “Bounded Recovery” is only for Oracle databases!

Second, according to the documentation, "Bounded Recovery" is a component of the general extract checkpointing facility.  This component guarantees an efficient recovery after an extract stops for any reason, no matter how many uncommitted transactions are currently outstanding.  The Bounded Recovery parameter sets an upper boundary on the maximum amount of time that an extract needs to recover to the point where it stopped before continuing normal processing.

The default setting for "Bounded Recovery" is 4 hours, and the needed recovery information is cached in the OGG_HOME/BR/<extract name> directory.  This is verified when I look at the report file for my extract named EXT.


2014-07-21 17:26:30 INFO OGG-01815 Virtual Memory Facilities for: BR
 anon alloc: mmap(MAP_ANON) anon free: munmap
 file alloc: mmap(MAP_SHARED) file free: munmap
 target directories:
 /oracle/app/product/12.1.2/oggcore_1/BR/EXT.

Bounded Recovery Parameter:
BRINTERVAL = 4HOURS
BRDIR = /oracle/app/product/12.1.2/oggcore_1

According to the documentation, the default settings for "Bounded Recovery" should be sufficient for most environments.  It is also noted that the "Bounded Recovery" settings shouldn't be changed without the guidance of Oracle Support.

Now that the idea of a "Bounded Recovery" has been established, let's try to understand a bit more about how a transaction is recovered in Oracle GoldenGate with the "Bounded Recovery" feature.

At the start of a transaction, Oracle GoldenGate must cache the transaction (even if it contains no data), because future operations of that transaction need to be supported.  If the extract hits a commit, the cached transaction is written to the trail file and cleared from memory.  If the extract hits a rollback, the cached transaction is discarded from memory.  As long as an extract is processing a transaction that has not yet committed or rolled back, the transaction is considered open and continues to be collected.  If the extract is stopped before it encounters a commit or rollback, it needs all of the cached transaction information recovered before it can start.  This applies to all transactions that were open at the time the extract was stopped.

There are three ways that an extract performs recovery:

  1. If there were no open transactions when the extract stopped, recovery begins at the current extract read checkpoint (normal recovery)
  2. If the start points of the open transactions in the log were very close in time to when the extract was stopped, the extract begins its recovery by re-reading the logs from the beginning of the oldest open transaction (also considered a normal recovery)
  3. If one or more open transactions qualified as long-running open transactions, the extract begins a Bounded Recovery

What defines a long-running transaction for Oracle GoldenGate?

Transactions in Oracle GoldenGate are long-running if they have been open longer than one (1) "Bounded Recovery" interval.

A "Bounded Recovery interval" is the amount of time between "Bounded Recovery checkpoints", which persist the current state and data of the extract to disk.  "Bounded Recovery checkpoints" are used to identify a recovery position between two "Bounded Recovery intervals".  The extract picks up from the last "Bounded Recovery checkpoint" instead of processing from the log position where the open long-running transaction first appeared.

What is the maximum Bounded Recovery time?

The maximum bounded recovery time is no more than twice the current “Bounded Recovery checkpoint” interval.  However, the actual recovery time will be dictated by the following:

  1. The time from the last valid Bounded Recovery interval to when the extract was stopped
  2. Utilization of the extract in that period
  3. The percentage of utilization for transactions that were previously written to the trail

Now that the basic details of "Bounded Recovery" have been discussed, how can the settings for "Bounded Recovery" be changed?

“Bounded Recovery” can be changed by updating the extract parameter file with the following parameter:


BR
[, BRDIR directory]
[, BRINTERVAL number {M | H}]
[, BRKEEPSTALEFILES]
[, BROFF]
[, BROFFONFAILURE]
[, BRRESET]

As noted, there are a few options that can be set with the BR parameter.  If I wanted to shorten my "Bounded Recovery" interval and change the directory where the cached information is stored, I could do something similar to this:


--Bounded Recovery
BR BRDIR ./dirbr BRINTERVAL 20M

In the example above, I’m changing the directory to a new directory called DIRBR (created manually as part of subdirs).  I also changed the interval from 4 hours to 20 minutes.

Note: 20 minutes is the smallest accepted time for the BRINTERVAL parameter.

After adding the BR parameter with options to the extract, the extract needs to be restarted.  Once the extract is up and running, the report file for the extract can be checked to verify that the new parameters have been taken.


2014-08-02 22:20:54  INFO    OGG-01815  Virtual Memory Facilities for: BR
    anon alloc: mmap(MAP_ANON)  anon free: munmap
    file alloc: mmap(MAP_SHARED)  file free: munmap
    target directories:
    /oracle/app/product/12.1.2/oggcore_1/dirbr/BR/EXT.

Bounded Recovery Parameter:
BRINTERVAL = 20M
BRDIR      = ./dirbr

Hopefully, this post provided a better understanding of one of the least understood options within Oracle GoldenGate.

Enjoy!!

About me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Viewing Figures

Jonathan Lewis - Sun, 2014-08-03 17:05

The day has just started in Singapore - though it's just coming up to midnight back home - and the view counter has reached 4,000,009, despite the fact that I've not been able to contribute much to the community for the last couple of months. Despite the temporary dearth of writing, it's time to have a little review to see what's been popular and how things have changed in the 10 months it took to accumulate the last 500,000 views. So here are some of the latest WordPress stats.

All-time ratings:
  AWR / Statspack       80,997   Updated from time to time
  NOT IN                51,673   February 2007
  Cartesian Merge Join  39,265   December 2006
  dbms_xplan in 10g     37,725   November 2006
  Shrink Tablespace     31,184   November 2006

Ratings over the last year:
  AWR / Statspack       13,905   Updated from time to time
  AWR Reports            9,494   February 2011
  Shrink Tablespace      8,402   February 2010
  Lock Modes             8,221   June 2010
  NOT IN                 6,388   February 2007

The figures for the previous half million views (opens in a new window) are very similar for most of the top 5, although "Analysing Statspack (1)" has been pushed from 5th place to 6th place in the all-time greats; and "Lock Modes" has swapped places with "NOT IN" in the annual ratings. As the annual WordPress summary says: "… your posts have staying power, why not write more about …".

The number of followers has gone up from about 2,500  to just over 3,900 but, as I said last time, I suspect that there’s a lot of double counting related to twitter.

 


A Quick Trip To The Mother Ship

Floyd Teter - Sun, 2014-08-03 15:11
The title of this post notwithstanding, I was not abducted by aliens last week.  Take off your tin-foil hat, it's all cool.  I spent a few days last week with a few different teams at Oracle HQ, mostly digging into some cool new work in progress.  Thought I'd share what I learned.

One caveat before I start sharing.  My agreement with Oracle prevents me from talking about specific details and delivery dates.  Personally, I don't have much of a problem with that - product development news on Oracle's products is Oracle's news to share, if and when they decide to share it. Now that we're clear about that, let's get to the good stuff.

I was fortunate enough to have a good chunk of the brain trust from the Sierra-Cedar Oracle Higher Education Practice (that's the former Io Consulting group) with me:  Steve Kish, Elizabeth Malmborg, Anastasia Metros and Ted Simpson (yes, he of HEUG fame).  It was cool to watch them consider the new things coming for the Higher Education marketplace.  It gave me a measure of how the Higher Ed marketplace will respond.

Most of day one was spent with the leadership of the Oracle Higher Education development team, reviewing their progress in building the new Oracle Student Cloud product.  They're further along in the development lifecycle than I'd expected, which was a pleasant surprise.  And one thing became very clear to me as a result of the review:  planning to throw away PeopleSoft Campus Solutions should not be a part of anyone's short-term game plan.   Oracle Student Cloud is focused on offering a solution for managing continuing education.  Expectations are that early adopters of Oracle Student Cloud will be using the product as a value-added enhancement to the Campus Solutions product.

Don't get confused here.  Oracle has both Oracle Student Cloud and Oracle Higher Education Cloud in its development pipeline.  But we're talking about two different products here, with different sets of target customers, development life cycles, and release dates.  The latter product will have a much larger focus than the former.

So, what's the best strategy for a higher ed institution that preserves their investment and offers maximum flexibility going forward?  Get to the latest release of whatever you're currently using, whether it's an Oracle product or not.  Make sure you're up to date - it's the best platform for moving forward.  And yes, there are other elements to the strategy as well, but that's not my main purpose for writing this particular post.

Day two was spent with the Oracle User Experience team.  Great stuff as usual.  A special thanks to Oracle's Michael LaDuke for putting the day together.  And it was fun to see the understanding of UX take shape in the minds of the Sierra-Cedar leadership team, especially during a discussion around wire framing practices.  We also saw some soon-to-be-released incremental progress with Simplified UI.  And, finally, we saw some cool new products in the works.  On this final note, it's pretty obvious that the UX team is now focused on innovating by applying Fusion Middleware technology to mobile use cases (both tablet and phone).  Saw some pretty good stuff with the potential for adding some high value to day-to-day business processing (both in terms of automation and collecting business intelligence).

I only got two days this trip...wasn't nearly enough.  The upshot?  Lots of cool stuff on the horizon.

GI Commands : 2 -- Managing the Local and Cluster Registry

Hemant K Chitale - Sun, 2014-08-03 07:51
In 11gR2

There are essentially two registries, the Local Registry and the Cluster Registry.

Let's check the Local Registry :

[root@node1 ~]# cat /etc/oracle/olr.loc
olrconfig_loc=/u01/app/grid/11.2.0/cdata/node1.olr
crs_home=/u01/app/grid/11.2.0
[root@node1 ~]#
[root@node1 ~]# file /u01/app/grid/11.2.0/cdata/node1.olr
/u01/app/grid/11.2.0/cdata/node1.olr: data
[root@node1 ~]#

So, like the Cluster Registry, the Local Registry is a binary file.  It is on a local filesystem on the node, not on ASM/NFS/CFS.  Each node in the cluster has its own Local Registry.

The Local Registry can be checked for consistency (corruption) using ocrcheck with the "-local" flag.  Note : As demonstrated in my previous post, the root account must be used for the check.

[root@node1 ~]# su - grid
-sh-3.2$ su
Password:
[root@node1 grid]# ocrcheck -local
Status of Oracle Local Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2696
Available space (kbytes) : 259424
ID : 1388021147
Device/File Name : /u01/app/grid/11.2.0/cdata/node1.olr
Device/File integrity check succeeded

Local registry integrity check succeeded

Logical corruption check succeeded

[root@node1 grid]#

Now let's look at the Cluster Registry :

[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3668
Available space (kbytes) : 258452
ID : 605940771
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File Name : /fra/ocrfile
Device/File integrity check succeeded
Device/File Name : +FRA
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@node1 grid]#

The Cluster Registry is distributed across two ASM DiskGroups (+DATA and +FRA) and one filesystem (/fra/ocrfile). Yes, this is a special case that I've created to distribute the OCR in this manner.

I cannot add the OCR to a location that is an ASM diskgroup with a lower compatible.asm setting.

[root@node1 grid]# ocrconfig -add +DATA2
PROT-30: The Oracle Cluster Registry location to be added is not accessible
PROC-8: Cannot perform cluster registry operation because one of the parameters is invalid.
ORA-15056: additional error message
ORA-17502: ksfdcre:4 Failed to create file +DATA2.255.1
ORA-15221: ASM operation requires compatible.asm of 11.1.0.0.0 or higher
ORA-06512: at line 4

[root@node1 grid]#

I now remove the filesystem copy of the OCR.

[root@node1 grid]# ocrconfig -delete /fra/ocrfile
[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3668
Available space (kbytes) : 258452
ID : 605940771
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File Name : +FRA
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@node1 grid]#

Note, however, that ocrconfig -delete doesn't actually remove the filesystem file that I had created.

[root@node1 grid]# ls -l /fra/ocrfile
-rw-r--r-- 1 root root 272756736 Aug 3 21:27 /fra/ocrfile
[root@node1 grid]# rm /fra/ocrfile
rm: remove regular file `/fra/ocrfile'? yes
[root@node1 grid]#

I will now add a filesystem location for the OCR.

[root@node1 grid]# touch /fra/new_ocrfile
[root@node1 grid]# ocrconfig -add /fra/new_ocrfile
[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3668
Available space (kbytes) : 258452
ID : 605940771
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File Name : +FRA
Device/File integrity check succeeded
Device/File Name : /fra/new_ocrfile
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@node1 grid]# ls -l /fra/new_ocrfile
-rw-r--r-- 1 root root 272756736 Aug 3 21:30 /fra/new_ocrfile
[root@node1 grid]#

What about OCR backups? (Note: Oracle takes frequent automatic backups of the OCR, but *not* of the OLR.)
N.B.: This listing doesn't show all the OCR backups you'd expect because my cluster doesn't run continuously every day.

[root@node1 grid]# ocrconfig -showbackup

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/backup00.ocr

node1 2011/10/22 03:09:03 /u01/app/grid/11.2.0/cdata/rac/backup01.ocr

node1 2011/10/21 23:06:39 /u01/app/grid/11.2.0/cdata/rac/backup02.ocr

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/day.ocr

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/week.ocr

node1 2014/07/06 22:39:55 /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1 2014/07/05 17:30:25 /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1 2014/06/16 22:15:07 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr

node1 2014/06/16 22:14:05 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221405.ocr

node1 2011/11/09 23:20:25 /u01/app/grid/11.2.0/cdata/rac/backup_20111109_232025.ocr
[root@node1 grid]#

Let me run an additional backup from node2.

[root@node2 grid]# ocrconfig -manualbackup

node1 2014/08/03 21:37:17 /u01/app/grid/11.2.0/cdata/rac/backup_20140803_213717.ocr

node1 2014/07/06 22:39:55 /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1 2014/07/05 17:30:25 /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1 2014/06/16 22:15:07 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr

node1 2014/06/16 22:14:05 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221405.ocr
[root@node2 grid]#

We can see that the backup taken today (03-Aug) is listed at the top. Let's check a listing from node1.

[root@node1 grid]# ocrconfig -showbackup

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/backup00.ocr

node1 2011/10/22 03:09:03 /u01/app/grid/11.2.0/cdata/rac/backup01.ocr

node1 2011/10/21 23:06:39 /u01/app/grid/11.2.0/cdata/rac/backup02.ocr

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/day.ocr

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/week.ocr

node1 2014/08/03 21:37:17 /u01/app/grid/11.2.0/cdata/rac/backup_20140803_213717.ocr

node1 2014/07/06 22:39:55 /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1 2014/07/05 17:30:25 /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1 2014/06/16 22:15:07 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr

node1 2014/06/16 22:14:05 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221405.ocr
[root@node1 grid]#

Yes, the backup of 03-Aug is also listed. But wait! Why is it on node1? Let's go back to node2 and do a filesystem listing.

[root@node2 grid]# ls -l /u01/app/grid/11.2.0/cdata/rac/backup*
ls: /u01/app/grid/11.2.0/cdata/rac/backup*: No such file or directory
[root@node2 grid]#

As the listing shows, the backup doesn't actually exist on node2. Let's check the filesystem on node1.

[root@node1 grid]# ls -lt /u01/app/grid/11.2.0/cdata/rac/
total 114316
-rw------- 1 root root 8024064 Aug 3 21:37 backup_20140803_213717.ocr
-rw------- 1 root root 8003584 Jul 6 22:39 backup_20140706_223955.ocr
-rw------- 1 root root 8003584 Jul 6 21:53 day.ocr
-rw------- 1 root root 8003584 Jul 6 21:53 week.ocr
-rw------- 1 root root 8003584 Jul 6 21:53 backup00.ocr
-rw------- 1 root root 8003584 Jul 5 17:30 backup_20140705_173025.ocr
-rw------- 1 root root 7708672 Jun 16 22:15 backup_20140616_221507.ocr
-rw------- 1 root root 7708672 Jun 16 22:14 backup_20140616_221405.ocr
-rw------- 1 root root 7688192 Nov 9 2011 backup_20111109_232025.ocr
-rw------- 1 root root 7667712 Nov 9 2011 backup_20111109_230940.ocr
-rw------- 1 root root 7647232 Nov 9 2011 backup_20111109_230916.ocr
-rw------- 1 root root 7626752 Nov 9 2011 backup_20111109_224725.ocr
-rw------- 1 root root 7598080 Nov 9 2011 backup_20111109_222941.ocr
-rw------- 1 root root 7593984 Oct 22 2011 backup01.ocr
-rw------- 1 root root 7593984 Oct 21 2011 backup02.ocr
[root@node1 grid]#

Yes, *ALL* the OCR backups to date have been created on node1 -- even when executed from node2. node1 remains the "master" node for OCR backups as long as it is up and running. Let me shut down Grid Infrastructure on node1 to verify this.

[root@node1 grid]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.racdb.new_svc.svc' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.cvu' on 'node1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'node1'
CRS-2673: Attempting to stop 'ora.gns' on 'node1'
CRS-2677: Stop of 'ora.cvu' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'node2'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'node1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node1' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'node2'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'node1'
CRS-2677: Stop of 'ora.scan2.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'node2'
CRS-2676: Start of 'ora.cvu' on 'node2' succeeded
CRS-2677: Stop of 'ora.racdb.new_svc.svc' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.node1.vip' on 'node1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'node1'
CRS-2677: Stop of 'ora.node1.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.node1.vip' on 'node2'
CRS-2676: Start of 'ora.scan3.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'node2'
CRS-2676: Start of 'ora.scan2.vip' on 'node2' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'node2'
CRS-2677: Stop of 'ora.gns' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gns.vip' on 'node1'
CRS-2677: Stop of 'ora.gns.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.gns.vip' on 'node2'
CRS-2676: Start of 'ora.node1.vip' on 'node2' succeeded
CRS-2676: Start of 'ora.gns.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.gns' on 'node2'
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'node2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'node2' succeeded
CRS-2677: Stop of 'ora.racdb.db' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.DATA2.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'node1'
CRS-2676: Start of 'ora.gns' on 'node2' succeeded
CRS-2677: Stop of 'ora.DATA1.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.DATA2.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'node2'
CRS-2676: Start of 'ora.oc4j' on 'node2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node1'
CRS-2677: Stop of 'ora.ons' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node1'
CRS-2677: Stop of 'ora.net1.network' on 'node1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node1' has completed
CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node1'
CRS-2673: Attempting to stop 'ora.crf' on 'node1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node1'
CRS-2677: Stop of 'ora.crf' on 'node1' succeeded
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node1'
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'node1' succeeded
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'
CRS-2673: Attempting to stop 'ora.diskmon' on 'node1'
CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@node1 grid]#

So, all the Grid Infrastructure services are down on node1. I will now run an OCR backup from node2 and verify its location.

[root@node2 grid]# ocrconfig -manualbackup

node2 2014/08/03 21:49:02 /u01/app/grid/11.2.0/cdata/rac/backup_20140803_214902.ocr

node1 2014/08/03 21:37:17 /u01/app/grid/11.2.0/cdata/rac/backup_20140803_213717.ocr

node1 2014/07/06 22:39:55 /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1 2014/07/05 17:30:25 /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1 2014/06/16 22:15:07 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr
[root@node2 grid]# ls -l /u01/app/grid/11.2.0/cdata/rac/backup*
-rw------- 1 root root 8024064 Aug 3 21:49 /u01/app/grid/11.2.0/cdata/rac/backup_20140803_214902.ocr
[root@node2 grid]#

Yes, the backup got created on node2 now.

Question: Would there have been a way to create a backup on node2 without shutting down node1?

.
.
.

Categories: DBA Blogs

Analogy – 2

Jonathan Lewis - Sun, 2014-08-03 06:41

I suggested a little while ago that thinking about the new in-memory columnar store as a variation on the principle of bitmap indexes was quite a good idea. I’ve had a couple of emails since then asking me to expand on the idea because “it’s wrong” – I will follow that one up as soon as I can, but in the meantime here’s another angle for connecting old technology with new technology:

It is a feature of in-memory column storage that the default strategy is to store all columns in memory. But it’s quite likely that you’ve got some tables where a subset of the columns are frequently accessed and other columns are rarely accessed and it might seem a waste of resources to keep all the columns in memory just for the few occasional queries. So the feature allows you to de-select columns with the “no inmemory({list of columns})” option – it’s also possible to use different degrees of compression for different columns, of course, which adds another dimension to design and planning – but that’s a thought for another day.

So where else do you see an example of being selective about where you put columns? Index Organized Tables (IOTs) – where you can choose to put popular columns in the index (IOT_TOP) segment, and the rest in the overflow segment, knowing that this can give you good performance for critical queries, but less desirable performance for the less important or less frequent queries. IOTs allow you to specify the (typically short) list of columns you want “in” – it might be quite nice if the same were true for the in-memory option; I can imagine cases where I would want to include a small set of columns and exclude a very large number of them (for reasons that bring me back to the bitmap index analogy).
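For reference, the column-level exclusion mentioned above is a clause on the table's INMEMORY specification. A sketch against a hypothetical orders table (table and column names here are purely illustrative):

```sql
-- Sketch only: "orders" and its columns are hypothetical.
-- Keep the table in the column store, but leave two
-- rarely-queried columns out of it.
ALTER TABLE orders
  INMEMORY
  NO INMEMORY (order_notes, audit_trail);
```

Columns excluded this way simply fall back to conventional buffer-cache access when they are queried.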

 


Oracle Database 12c: In-Memory Option

Asif Momen - Sun, 2014-08-03 06:16
Starting with Oracle Database 12cR1 (12.1.0.2), a new static pool in the SGA is designed to store data in columnar format; it is called the In-Memory Column Store (IMCS). Each table column is stored as a separate structure in the IMCS. The In-Memory Column Store does not replace the buffer cache; rather, it supplements it by storing data in columnar format.
Following are the levels at which the IMCS can be enabled:
  • Column
  • Partition / sub-partition
  • Table
  • Materialized view
  • Tablespace
The IMCS is populated by a set of background processes. Objects are populated into the IMCS either according to a prioritized list soon after database start-up, or after they are queried for the first time.

As with other Oracle Database options, you make NO changes to your application to start benefiting from the In-Memory option. It is completely transparent to applications. Also, the Oracle optimizer is fully aware of the columnar format and automatically utilizes the IMCS when required.
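As a sketch of what enabling the IMCS at a few of these levels looks like (all object names here are hypothetical; verify the exact clauses against the 12.1.0.2 documentation):

```sql
-- Illustrative only: table, column, partition and tablespace
-- names are made up.
ALTER TABLE sales INMEMORY;                              -- whole table
ALTER TABLE sales INMEMORY PRIORITY HIGH;                -- populate soon after startup
ALTER TABLE sales INMEMORY NO INMEMORY (prod_notes);     -- exclude one column
ALTER TABLE sales MODIFY PARTITION sales_q1 NO INMEMORY; -- per partition
ALTER TABLESPACE users DEFAULT INMEMORY;                 -- default for new objects
```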

I plan to test and blog more on the In-Memory option. Following are a few of the topics I plan to post blog entries on:
  • Enable and disable In-Memory Option
  • In-Memory Option at various levels
  • In-Memory Space pressure
  • In-Memory background processes
  • In-Memory with compression levels
  • In-Memory statistics
  • In-Memory and Data Pump Export
  • In-Memory with Multi-tenant Option



Handling Rollback Operation for ADF Input Components with Immediate Property

Andrejus Baranovski - Sun, 2014-08-03 02:44
One of my colleagues is implementing advanced dynamic ADF BC/ADF UI functionality. To force validation to be invoked properly, he must use ADF UI input components set with the Immediate=true property. This keeps validation messages displayed, even if the user navigates to edit another dynamic ADF UI input attribute, until the form is submitted or cancelled. However, by JSF design, the use of input components with Immediate=true blocks the execution of command components while validation errors are present. This means the user can't use the Cancel button to reset a form with validation errors until he fixes these errors manually. There is a potential solution, however: we can use an ADF subform component to isolate the input components, in combination with an ADF view reload to refresh the rendered UI.

The sample application - ADFAttributeImmediateApp.zip - implements a basic ADF Panel Form Layout with a set of input components. Each of these input components is set with the Immediate=true property:


Let's say there is a validation error on the screen for an input text component with the Immediate=true property:


Try to press the Cancel button - no action will happen; it will keep complaining about the validation error:


This is how it is supposed to be - by default, the Cancel button will not be invoked, because the Immediate=true validation on the input component blocks it. We can adjust this for our specific scenario, to make sure the Cancel button is invoked even with validation errors on the screen. We surround the input components with an ADF subform component; this isolates them from the Cancel command:


The Cancel button must stay outside the ADF subform; this makes it possible to invoke the Cancel action listener method even with immediate input components on the same screen (as long as they are surrounded by the ADF subform):


The action listener for the Cancel button now gets invoked and the reset happens, no matter whether there is a validation error for an input component with Immediate=true:


There is one extra bit - even with the Rollback operation executed, the UI would stay unsynchronised and keep displaying the validation message until the next submit. This requires forcing an ADF view refresh - you should recreate the ADF view programmatically from the same method where you invoke the Rollback operation:

New in Oracle 12c: Querying an Associative Array in PL/SQL Programs

Galo Balda's Blog - Sat, 2014-08-02 16:23

I was aware that, up to Oracle 11g, a PL/SQL program wasn't allowed to use an associative array in a SQL statement. This is what happens when I try to do it.

SQL> drop table test_array purge;

Table dropped.

SQL> create table test_array as
  2  select level num_col from dual
  3  connect by level <= 10;

Table created.

SQL> select * from test_array;

   NUM_COL
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10                                                                      

10 rows selected.

SQL> drop package PKG_TEST_ARRAY;

Package dropped.

SQL> create or replace package PKG_TEST_ARRAY as
  2
  3    type tab_num is table of number index by pls_integer;
  4
  5  end PKG_TEST_ARRAY;
  6  /

Package created.

SQL> declare
  2    my_array pkg_test_array.tab_num;
  3  begin
  4    for i in 1 .. 5 loop
  5      my_array(i) := i*2;
  6    end loop;
  7
  8    for i in (
  9              select num_col from test_array
 10              where num_col in (select * from table(my_array))
 11             )
 12    loop
 13      dbms_output.put_line(i.num_col);
 14    end loop;
 15  end;
 16  /
            where num_col in (select * from table(my_array))
                                                  *
ERROR at line 10:
ORA-06550: line 10, column 51:
PLS-00382: expression is of wrong type
ORA-06550: line 10, column 45:
PL/SQL: ORA-22905: cannot access rows from a non-nested table item
ORA-06550: line 9, column 13:
PL/SQL: SQL Statement ignored
ORA-06550: line 13, column 26:
PLS-00364: loop index variable 'I' use is invalid
ORA-06550: line 13, column 5:
PL/SQL: Statement ignored

As you can see, the TABLE operator is expecting either a nested table or a varray.
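Before 12c, the usual workaround was to declare the collection type at schema level (as a nested table or varray), where the SQL engine can see it. A sketch reusing the test_array table above (the type name num_ntt is mine):

```sql
-- Pre-12c workaround sketch: a schema-level nested table type
-- is visible to the SQL engine, unlike the package-declared
-- associative array above.
create type num_ntt as table of number;
/

declare
  my_array num_ntt := num_ntt();
begin
  for i in 1 .. 5 loop
    my_array.extend;
    my_array(i) := i * 2;
  end loop;

  for i in (
            select num_col from test_array
            where num_col in (select column_value from table(my_array))
           )
  loop
    dbms_output.put_line(i.num_col);
  end loop;
end;
/
```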

The limitation has been removed in Oracle 12c. This is what happens now.

SQL> set serveroutput on
SQL>
SQL> drop table test_array purge;

Table dropped.

SQL> create table test_array as
  2  select level num_col from dual
  3  connect by level <= 10;

Table created.

SQL> select * from test_array;

   NUM_COL
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10                                                                      

10 rows selected.

SQL> drop package PKG_TEST_ARRAY;

Package dropped.

SQL> create or replace package PKG_TEST_ARRAY as
  2
  3    type tab_num is table of number index by pls_integer;
  4
  5  end PKG_TEST_ARRAY;
  6  /

Package created.

SQL> declare
  2    my_array pkg_test_array.tab_num;
  3  begin
  4    for i in 1 .. 5 loop
  5      my_array(i) := i*2;
  6    end loop;
  7
  8    for i in (
  9              select num_col from test_array
 10              where num_col in (select * from table(my_array))
 11             )
 12    loop
 13      dbms_output.put_line(i.num_col);
 14    end loop;
 15  end;
 16  /

2
4
6
8
10                                                                              

PL/SQL procedure successfully completed.

Here’s another example using a slightly different query.

SQL> declare
  2    my_array pkg_test_array.tab_num;
  3  begin
  4    for i in 1 .. 5 loop
  5      my_array(i) := i*2;
  6    end loop;
  7
  8    for i in (
  9              select a.num_col, b.column_value
 10              from
 11                test_array a,
 12                table (my_array) b
 13              where
 14                a.num_col = b.column_value
 15             )
 16    loop
 17      dbms_output.put_line(i.num_col);
 18    end loop;
 19  end;
 20  /

2
4
6
8
10                                                                              

PL/SQL procedure successfully completed.

Very nice stuff.


Filed under: 12C, PL/SQL Tagged: 12C, PL/SQL
Categories: DBA Blogs

Got Oracle? Better Manage Your Licenses!

Brent Martin - Sat, 2014-08-02 14:55

There's actually quite a bit of information on the internet about the complexity in managing the technology licenses you purchase (and don't purchase) from Oracle.  This has been a problem for many versions, and is still a problem in the current version.  Here are some examples:


http://kevinclosson.wordpress.com/2014/07/24/oracle-database-12c-release-12-1-0-2-my-first-observations-licensed-features-usage-concerns-part-i/
http://kevinclosson.wordpress.com/2014/07/25/oracle-database-12c-release-12-1-0-2-my-first-observations-licensed-features-usage-concerns-part-ii/
http://mikesmithers.wordpress.com/2013/01/26/oracle-database-diagnostic-and-tuning-packs-exactly-what-are-you-not-licensed-for/
http://mikesmithers.wordpress.com/2013/02/09/sqldeveloper-and-a-very-expensive-query/


In these articles the following patterns start emerging:


Installing Oracle software enables extra cost "licensable" features that you may or may not be entitled to.
Some of these features can be disabled. Others cannot be disabled by any method in the Oracle documentation.
Regardless, Oracle databases track "usage" of all of these components in audit tables in the database.
If Oracle audits your company for license compliance, data in these database audit tables will be used as evidence and may expose your company to compliance claims.


I don't really want to debate Oracle's intentions with these practices, or whether or not they'd actually compromise their relationship with their clients to pursue accidental use revenue.  Oracle is a sophisticated organization and I think it's safe to assume this behavior is deliberate and is designed for their own benefit.  At best, these practices represent a risk for Oracle's customers that need to be mitigated.

So if you are going to purchase and deploy Oracle software - and you almost certainly will given Oracle's acquisition record -  you need to take steps to protect your company from potential license compliance issues down the road.

To be fair the items I'm outlining here make sense when you license and deploy enterprise software from any vendor.  But I'm picking on Oracle because I believe managing deployment of Oracle licenses is possibly the most complex challenge of its kind in the industry and it deserves special attention. 

Before we go any further I need to put out the standard disclaimer. I am not a lawyer nor an Oracle licensing expert, and I am not an authority on these topics. Do not take anything in this article at face value. Validate everything with your own experts. Hire your own consultants and attorneys who specialize in Oracle products and Oracle negotiation to help you through the process.

Now that that's out of the way let's get started.  The areas I think you should focus on are:


Initial contract negotiations
Implementing an IT license tracking solution
Create a compliance process that involves your strategic sourcing team and the technology folks who work with the products.


Reducing Compliance Risk Starts With The Contract.
Once you select a set of products that you want to purchase from Oracle, the negotiations start.  Oracle typically extends relatively large discounts off the software list price (I've seen 50%-95% depending on the size of the deal).  While that may seem like a lot and I agree it is very important, here are some other things besides price that are equally important to negotiate for:



Oracle changes the terms of their licensing from time to time. This will not be to your benefit. Be sure to negotiate contract terms that lock in specific license metric definitions and license terms.  And be sure to include language that protects you from the "click-through" agreements.
Along the same lines, be sure to create a Master Services Agreement that will cover future purchases so that you don't have to re-negotiate these terms with each incremental purchase.
Don't allow Oracle to audit your company for license compliance.  In my opinion they shouldn't have the right to show up and audit you unless they have evidence that you are out of compliance.  Be sure to negotiate these terms carefully because it could save you from a ton of trouble later.
Do include language that states if licensable components are installed without a specific notification in the installation program or in the software itself to inform someone that they are being installed then your company is not liable for any usage.
Do not agree to use Oracle's scripts that will crawl your servers and detect usage.  Agree to provide this information from your own tracking system.

Deploy a License Tracking System
OK so hopefully you'll start out with a contract that protects you from things like accidental deployments and random audit demands.  The next layer of protection involves a system that can keep track of all of this for you so… Read More...

Full Disclosure

Michael Feldstein - Sat, 2014-08-02 12:41

As you probably know, we run a consulting business (MindWires Consulting) and sometimes work with the companies and schools that we write about here. Consequently, we periodically remind you and update you on our conflict of interest policies. We do our best to avoid or minimize conflicts of interest where we can, but since our system isn’t perfect, we want you to understand how we handle them when they arise so that you can consider our analysis with the full context in mind. We value your trust and don’t take it for granted.

We talk a lot with each other about how to deal with conflicts of interest because we run into them a lot. On the one hand, we find that working with the vendors and schools that we write about provides us with insight that is helpful to a wide range of clients and readers. There just aren’t too many people who have the benefit of being able to see how all sides of the ed tech relationships work. But along with that perspective comes an inevitable and perpetual tension with objectivity. When we started our business together 18 months ago, we didn’t have a clear idea where these tensions would show up or how big of an issue they might turn out to be. We originally thought that our blogging was going to remain an addiction that was subsidized but somewhat disconnected from our consulting. But it turns out that more than 90% of our business comes from readers of the blog, and a significant portion of it comes out of conversations stimulated by a specific post. Now that we understand that relationship better, we’re getting a better handle on the kinds of conflict of interest that can arise and how best to mitigate them. Our particular approach in any given situation depends a lot on whether the client wants analysis or advice.

Disclosure

In many cases, clients want us to provide deeper, more heavily researched, and more tailored versions of the analysis that we’ve provided publicly on this blog. In this situation, there isn’t a strong, direct conflict of interest between providing them with what they are asking for and writing public analysis about various aspects of their business. That said, no matter how hard we try to write objectively about an organization that is, was, or could be a client, human nature being what it is, we can’t guarantee that we will never be even subconsciously influenced in our thinking. That is why we have a policy to always disclose when we are blogging about a client. We have done this in various ways in the past. Going forward, we are standardizing on an approach in which we will insert a disclosure footnote at the end of the first sentence in the post in which the client is named. It will look like this.[1] (We are not fully satisfied that the footnote is prominent enough, so we will be investigating ways to make it a little more prominent.) We will insert these notices in all future posts on the blog, whether or not we are the authors of those posts. In cases where the company in question is not currently a client but was recently and could be again in the near future, we will note that the company “was recently a client of MindWires Consulting”.

Recusal

Sometimes the client wants not only analysis but also strategic advice. Those situations can be trickier. We want to avoid cases in which we blog in praise (or condemnation) of a company for taking an action that they paid us to tell them to take. Our policy is that we don’t blog about any decisions that a company might make based on our advice. There are some theoretical situations in which we might consider making an exception to that rule, but if they ever do come up in reality, then the disclosure principle will apply. We will let you know if, when, and why we would make the exception. Aside from that currently theoretical exception, we recuse ourselves from blogging about the results of our own consulting advice. Furthermore, when potential clients ask us for advice that we think will put us into a long-term conflict of interest regarding one of our core areas of analysis, we turn down that work. Analysis takes precedence over advice.

Getting Better at This

We’re going to continue thinking about this and refining our approach as we learn more. We also have some ideas about business models that could further minimize potential conflicts in the future. We’ll share the details with you if and when we get to the point where we’re ready to move forward on them. In the meantime, we will continue to remind you of our current policy periodically so that you are in a better position to judge our analysis. And as always, we welcome your feedback.

 

  1. Full disclosure: Acme Ed Tech Company is a client of MindWires Consulting, the sponsor of e-Literate.

The post Full Disclosure appeared first on e-Literate.

RMAN Pet Peeves

Michael Dinh - Sat, 2014-08-02 12:38

Do you validate your backup and what command do you use?

Lately, I have been using restore database validate preview summary to kill 2 birds with 1 stone.

The issue is that RMAN will skip validation of the archived log backupsets when the archived logs themselves still exist on disk.

Does this seem wrong to you?

Please take a look at a test case here

What do you think?


Are You Using BULK COLLECT and FORALL for Bulk Processing Yet?

Eddie Awad - Sat, 2014-08-02 12:01

Steven Feuerstein was dismayed when he found, in a PL/SQL procedure, a cursor FOR loop that contained INSERT and UPDATE statements.

That is a classic anti-pattern, a general pattern of coding that should be avoided. It should be avoided because the inserts and updates are changing the tables on a row-by-row basis, which maximizes the number of context switches (between SQL and PL/SQL) and consequently greatly slows the performance of the code. Fortunately, this classic antipattern has a classic, well-defined solution: use BULK COLLECT and FORALL to switch from row-by-row processing to bulk processing.
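A minimal sketch of the bulk rewrite described above (the original procedure isn't shown in the post, so the table and column names here are hypothetical):

```sql
declare
  cursor c is
    select id, amount from staging_orders;

  type t_id_tab  is table of staging_orders.id%type;
  type t_amt_tab is table of staging_orders.amount%type;

  l_ids  t_id_tab;
  l_amts t_amt_tab;
begin
  open c;
  loop
    -- One context switch fetches up to 1000 rows at a time.
    fetch c bulk collect into l_ids, l_amts limit 1000;
    exit when l_ids.count = 0;

    -- One context switch per batch for the inserts...
    forall i in 1 .. l_ids.count
      insert into processed_orders (id, amount)
      values (l_ids(i), l_amts(i));

    -- ...and one per batch for the updates.
    forall i in 1 .. l_ids.count
      update staging_orders
      set    processed_flag = 'Y'
      where  id = l_ids(i);
  end loop;
  close c;
  commit;
end;
/
```

The LIMIT clause keeps PGA memory bounded while still collapsing thousands of row-by-row context switches into a handful of bulk operations.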

© Eddie Awad's Blog, 2014. | Permalink | Add a comment | Topic: Oracle | Tags: ,


Linking of Bugs, Notes and SRs now available in SRs

Joshua Solomin - Fri, 2014-08-01 18:01

We have extended the linking capability within the body of an SR. Because of security concerns and issues with dealing with embedded HTML, we don't let SRs contain HTML directly.

But we now allow a variety of formats to LINK to Bugs, Documents and other SRs from within the body of an SR.

Screen shot of links that work in SR updates

So now you can either link directly to these items when a support engineer gives you a bug or doc to follow, or update the SR yourself using one of these formats. Hopefully they are not too tough to follow.

Knowledge Documents Formats
note 1351022.2
doc id 1351022.2
document id 1351022.2

Bug Formats
bug 1351022.2

Service Request Formats
SR 3-8777412995
SR Number 3-8777412995
Service Request 3-8777412995

Hope this helps!


REST enable your Database for CRUD with TopLink/EclipseLink and JDeveloper

Shay Shmeltzer - Fri, 2014-08-01 17:10

It seems that REST interfaces are all the rage now for accessing your backend data, this is especially true in the world of mobile development. In this blog I'm going to show you how easy it is to provide a complete REST interface for your database by leveraging TopLink/EclipseLink and JDeveloper.

This relies on a capability that is available in TopLink 12c where every JPA entity that you have created can be RESTified with a simple servlet that TopLink provides.

All you need to do is locate the file toplink-dataservices-web.jar on your machine (this is included in the JDeveloper install so you can just search that directory) and then package your project as a WAR.

At that point you'll be able to get a complete set of CRUD operations for this entity.

In the video below I'm retrieving departments by their id using a URL like this:

http://127.0.0.1:7101/TLServices-Project1-context-root/persistence/v1.0/out/entity/Departments/30

(out - name of my persistence unit. Departments - name of my entity) 

A complete list of the REST URL syntax is part of the TopLink documentation on this feature:

http://docs.oracle.com/middleware/1213/toplink/solutions/restful_jpa.htm#CHDEGJIG

Check out how easy the process is in this video (using MySQL database):

Here are some additional URL samples for getting other types of queries:

Get all the Employees -  http://127.0.0.1:7101/TLServices/persistence/v1.0/out/query/Employees.findAll

Get all the Employees in department 50 - http://127.0.0.1:7101/TLServices/persistence/v1.0/out/entity/Departments/50/employeesList

Executing a specific named query (@NamedQuery(name = "Employees.findByName", query = "select o from Employees o where o.first_name like :name order by o.last_name"))  -http://127.0.0.1:7101/TLServices/persistence/v1.0/out/query/Employees.findByName;name=John
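The URL patterns above can be composed like this (a sketch; the host, port, context root, and persistence-unit name are the ones from my examples and will differ in your deployment):

```shell
# Base URL: context root "TLServices", persistence unit "out" (from the examples above)
BASE="http://127.0.0.1:7101/TLServices/persistence/v1.0/out"
echo "${BASE}/entity/Departments/30"                  # single entity by key
echo "${BASE}/query/Employees.findAll"                # named query, no parameters
echo "${BASE}/query/Employees.findByName;name=John"   # named query with a parameter
# Once the WAR is deployed, fetch with e.g.:
# curl -s "${BASE}/entity/Departments/30"
```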

Categories: Development

Best of OTN - Week of July 27th

OTN TechBlog - Fri, 2014-08-01 13:13
Systems Community - Rick Ramsey, OTN Systems Community Manager -

Tech Article -  Playing with ZFS Snapshots, by ACE Alexandre Borges -
Alexandre creates a ZFS pool, loads it with files, takes a snapshot, verifies that the snapshot worked, removes files from the pool, and finally reverts back to the snapshot. Then he shows you how to work with snapshot streams. A great way to do backups.

From OTN Garage FB - Recently a DBA at an IOUG event complained to Tales from the Data Center that they were unable to install from the Solaris 11.2 ISO. They had seen OpenStack a few weeks ago and wanted to know how to install Solaris 11.2 in a VM. So guys… here is a step-by-step for you - Tales from the Datacenter.

Java Community - Tori Wieldt, OTN Java Community Manager

Tech Article: Learning Java Programming with BlueJ IDE https://blogs.oracle.com/java/entry/tech_article_learning_java_programming

The Java Source Blog - The Java Hub at JavaOne! Come meet the Oracle Technology Network team and see cool demos, interviews, etc.

Friday Funny : "An int and an int sometimes love each other very much and decide to make a long." @asz #jvmls Thanks @stuartmarks !

Database Community - Laura Ramsey, OTN Database Community Manager

OTN DBA/DEV Watercooler Blog: Oracle Database 12c Release 12.1.0.2 is Here! ...with the long awaited In-Memory option, plus 21 new features. Oracle Database 12c Release 12.1.0.2 supports Linux and Oracle Solaris (SPARC and x86 64 bit).  Read More!

Architect Community - Bob Rhubart, OTN Architect Community Manager
Top 3 Playlists on the OTN ArchBeat YouTube Channel

Common Roles get copied upon plug-in with #Oracle Multitenant

The Oracle Instructor - Fri, 2014-08-01 08:51

What happens when you unplug a pluggable database that has local users who have been granted common roles? They get copied upon plug-in of the PDB to the target container database!

Before Unplug of the PDB

The picture above shows the situation before the unplug command. It has been implemented with these commands:

 

SQL> connect / as sysdba
Connected.
SQL> create role c##role container=all;

Role created.

SQL> grant select any table to c##role container=all;

Grant succeeded.

SQL> connect sys/oracle_4U@pdb1 as sysdba
Connected.
SQL> grant c##role to app;

Grant succeeded.



SQL> grant create session to app;

Grant succeeded.

The local user app has now been granted the common role c##role. Let’s assume that the application depends on the privileges inside the common role. Now the pdb1 is unplugged and plugged in to cdb2:

SQL> shutdown immediate
Pluggable Database closed.
SQL> connect / as sysdba
Connected.
SQL> alter pluggable database pdb1 unplug into '/home/oracle/pdb1.xml';

Pluggable database altered.

SQL> drop pluggable database pdb1;

Pluggable database dropped.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@EDE5R2P0 ~]$ . oraenv
ORACLE_SID = [cdb1] ? cdb2
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 is /u01/app/oracle
[oracle@EDE5R2P0 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Jul 29 12:52:19 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> create pluggable database pdb1 using '/home/oracle/pdb1.xml' nocopy;

Pluggable database created.

SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL> connect app/app@pdb1
Connected.
SQL> select * from scott.dept;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

SQL> select * from session_privs;

PRIVILEGE
----------------------------------------
CREATE SESSION
SELECT ANY TABLE

SQL> connect / as sysdba
Connected.

SQL> select role,common from cdb_roles where role='C##ROLE';

ROLE
--------------------------------------------------------------------------------
COM
---
C##ROLE
YES

As seen above, the common role has been copied upon the plug-in like the picture illustrates:
After plug-in of the PDB

Not surprisingly, the local user app together with the local privilege CREATE SESSION was moved to the target container database. But it is not so obvious that the common role is then copied to the target CDB as well. This is something I found out during delivery of a recent Oracle University LVC about 12c New Features, thanks to a question from one attendee. My guess was that it would lead to an error upon unplug, but this test case proves it doesn't. I thought that behavior might be of interest to the Oracle Community. As always: Don't believe it, test it! :-)
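To spot this situation before an unplug, one could list local grantees of common roles inside the PDB. The query below is my own sketch, not from the original demo:

```sql
-- Run inside the PDB before the unplug (a sketch; C## is the default
-- common-user prefix and may differ on your system).
SELECT grantee, granted_role, common
  FROM dba_role_privs
 WHERE granted_role LIKE 'C##%';
```

Any rows returned indicate common roles that will travel with the PDB and be copied into the target CDB on plug-in.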


Tagged: 12c New Features, Multitenant
Categories: DBA Blogs

Log Buffer #382, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-08-01 07:41

Leading the way are the blogs which are acting as beacons of information, guiding the way towards new vistas of innovation. This Log Buffer edition appreciates that role and presents you with a few of those blogs.

Oracle:

Is there any recommended duration after which Exalytics Server should be rebooted for optimal performance of Server?

GlassFish On the Cloud Consulting Services by C2B2

This introduction to SOA Governance series contains two videos. The first one explains SOA Governance and why we need it by using a case study. The second video introduces Oracle Enterprise Repository (OER), and how it can help with SOA Governance.

Oracle BI APPs provide two data warehouse generated fiscal calendars OOTB.

If you’re a community manager who’s publishing, monitoring, engaging, and analyzing communities on multiple social networks manually and individually, you need a hug.

SQL Server:

Spackle: Making sure you can connect to the DAC

Test-Driven Development (TDD) has a misleading name, because the objective is to design and specify that the system you are developing behaves in the ways that the customer expects, and to prove that it does so for the lifetime of the system.

Set a security standard across environments that developers can see and run, but not change.

Resilient T-SQL code is code that is designed to last, and to be safely reused by others. The goal of defensive database programming, the goal of this book, is to help you to produce resilient T-SQL code that robustly and gracefully handles cases of unintended use, and is resilient to common changes to the database environment.

One option to get notified when TempDB grows is to create a SQL Alert to fire a SQL Agent Job that will automatically send an email alerting the DBA when the Tempdb reaches a specific file size.

MySQL:

By default when using MySQL’s standard replication, all events are logged in the binary log and those binary log events are replicated to all slaves (it’s possible to filter out some schema).

Testing MySQL repository packages: how we make sure they work for you

If your project does not have something that you can adapt that quote to, odds are your testing is inadequate.

Compare and Synchronize with Updated Comparison Tools!

Beyond the FRM: ideas for a native MySQL Data Dictionary.

Categories: DBA Blogs

How to Load Informix DB2 Using SSIS

Chris Foot - Fri, 2014-08-01 04:30

Can Microsoft SQL Server and Informix DB2 environments integrate together? The answer is YES! I have received an increasing number of questions about cross-platform ETL development work between the two. Driven by these questions, I want to dig deeper into manipulating data between Microsoft SQL Server and Informix DB2.

Recently, I was asked to load data into Informix DB2 using SSIS, which is the focus of this post. When I was tasked with this request, I did some research and started to develop a solution. However, I ran into some common issues around writing to Informix via ODBC with SSIS that had no answers out on the internet. To this day, based on my own searches, I have not seen an actual step-by-step blog about this topic. With that being said, I decided to blog about it myself.

Let’s start with the basic information first. What do you need to successfully use Informix with your SQL Server environment?

You should know, at minimum, the following:

  • What version of the ODBC driver you have
  • What version of SQL Server is installed on your server
  • What version of your operating system you are running

The version of the driver can cause unforeseen issues when trying to load into Informix via SSIS. Check how your ODBC driver is registered. You can do this by simply checking both the 32-bit and 64-bit ODBC Data Source Administrator. Here are the paths for 32-bit and 64-bit respectively:

32 Bit: C:\Windows\SysWOW64\odbcad32.exe

64 Bit: C:\Windows\system32\odbcad32.exe

As you can see, I do have both registered in my current environment:

32 Bit

[screenshot]

64 Bit

[screenshot]

This is a common issue I have seen between the two. No matter whether you have 32-bit or 64-bit SQL Server, BIDS is a 32-bit application, and the BIDS runtime needs to be set to reflect this. This is done in the solution properties.

[screenshot]

In the properties, you click the debugging option and set Run64BitRunTime from True to False.

[screenshot]

Now, you are ready to set up your connections and build your package. In your connection manager, select where your source data is coming from. For my example, it’s going to be SQL Server, so I need an OLE DB connection. The destination I will use is an ADO.NET connection manager.

[screenshot]

Here is the little piece that took a while to figure out: the connection string within your ADO.NET connection manager needs to have "delimident=y" as an argument.

[screenshot]

Now, my connection string reads as follows:

Dsn=INFORMIX ODBC;Driver={ifxoledbc};delimident=y

Notice that I do not have my UID or password passed in through the connection string because they are already stored on my server when I set them up in my Data Source ODBC Administrator.
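If you would rather pass the credentials in the string instead of storing them in the DSN, the standard ODBC keywords can be appended; the values below are placeholders, so verify the keyword names against your driver's documentation:

```
Dsn=INFORMIX ODBC;Driver={ifxoledbc};delimident=y;uid=<informix_user>;pwd=<password>
```

Storing the credentials in the Data Source ODBC Administrator, as I did, keeps them out of the package configuration.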

From here, I am going to simply set up my Dataflow with a source and destination using the connection managers that I have created and map all of my columns.

[screenshot]

That’s it! Now, all you have to do is run it and test it.

[screenshot]

I have just written 27 records to Informix DB2 via SSIS using the Informix ODBC driver provided by IBM! Extracting, transforming, and loading (ETL) data sometimes requires outside drivers and connection managers, which require us to learn new things, and we are learning new things every day in the development world. I hope you found this post informative and that it reduces the search time for others writing to Informix via SSIS. Stay tuned for my next blog post in the next few weeks.

The post How to Load Informix DB2 Using SSIS appeared first on Remote DBA Experts.