Re: ASM for single-instance 11g db server?

From: onedbguru <onedbguru_at_yahoo.com>
Date: Tue, 5 Apr 2011 17:21:25 -0700 (PDT)
Message-ID: <f6eabd9e-7727-48b9-8a1c-f5f0a89a6e60_at_x1g2000yqb.googlegroups.com>



On Apr 5, 8:49 am, Mladen Gogala <gogala.mla..._at_gmail.com> wrote:
> On Tue, 05 Apr 2011 22:14:34 +1000, Noons wrote:
> > Indeed.  The only serious advantage I can see with ASM.  Then again: how
> > many times in life does one need to transfer TB dbs from EMC to NetApp?
>
> Hmmm, I might need to do something like this soon. The company that I
> work for is using 3Par storage. HP bought 3Par and is now offering an EVA
> model as replacement. Depending on the outcome of the deal, I may have to
> move things around a bit. I was planning  to use physical standby
> databases but now you got me interested. What provisions are there in ASM
> to do that? Can you give me some clues? How would I do that? Failure
> groups?
>
> --http://mgogala.byethost5.com

M.
First, my apologies for the unintended unprofessional tone in my original response...

That's the beauty of this procedure: it is so simple even a .... can do it. :) While John points out one method (failure groups), the method below is what we used to move a 200TB db while adding 1TB/day. That part is important to note, because the move took 27 days with NO downtime. Failure groups were investigated but eventually rejected; I don't recall exactly why, as this was 3-4 years ago and I have slept at least once since then...

Add the new storage LUNs to the system, and get them configured and available to ASM across the cluster.

Pick a node in the cluster.

Log in to ASM using sqlplus / as sysasm.
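Before touching the diskgroups, it's worth confirming that ASM actually sees the new LUNs. A quick sanity check (a sketch; paths and sizes will vary by site):

```sql
-- New, unformatted LUNs should show HEADER_STATUS = 'CANDIDATE';
-- disks already belonging to a diskgroup show up as 'MEMBER'.
SELECT path, header_status, os_mb
FROM   v$asm_disk
ORDER  BY header_status, path;
```

Run this on every node in the cluster; if a LUN is missing on one node, fix the multipathing/permissions there before adding it to a diskgroup.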

Then, for each diskgroup (we did the diskgroups serially), first add all of the new disks:

  alter diskgroup {N} add disk 'new1' rebalance power 0;
  alter diskgroup {N} add disk 'new2' rebalance power 0;
  ....
  alter diskgroup {N} add disk 'new{N}' rebalance power 0;

Now drop all of the old disks:

  alter diskgroup {N} drop disk old1 rebalance power 0;
  alter diskgroup {N} drop disk old2 rebalance power 0;
  ...
  alter diskgroup {N} drop disk old{N} rebalance power 0;

(REBALANCE POWER 0 defers all data movement until the single rebalance below. Note that ADD DISK takes the OS path in quotes, while DROP DISK takes the ASM disk name.)

Then kick off the rebalance:

  alter diskgroup {N} rebalance power 11;

where 11 is the number of rebalance slave processes you can use to do the move; on a 48 dual-core system, 11 is acceptable :)
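You can watch the rebalance from V$ASM_OPERATION; a sketch of the kind of query to run while it grinds away:

```sql
-- One row per active rebalance; EST_MINUTES is ASM's own estimate
-- of the time remaining at the current POWER setting.
SELECT group_number, operation, state, power,
       sofar, est_work, est_rate, est_minutes
FROM   v$asm_operation;
```

When the query returns no rows, the rebalance is finished and the dropped disks have actually been released from the diskgroup.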

Then wait until it finishes. On the gear we had, while adding 1TB of data a day, we still averaged 300+GB/hr for the move.

Finally, upgrade to 11gR2 (11.2.0.2). Several of the new features have proven extremely useful, especially in a DW environment: interval partitioning (automatically creates the next partition for you without locking up your app), DBMS_PARALLEL_EXECUTE (helps with the parallel execution of DML such as UPDATE, DELETE, and even INSERT INTO ... AS SELECT), and EXCHANGE PARTITION (I believe this was also available back in 10g), just to name a few.
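For example, an interval-partitioned table looks something like this (table and column names are made up for illustration):

```sql
-- 11g interval partitioning: Oracle silently creates a new monthly
-- partition the first time a row arrives beyond the current high bound,
-- so no more "ORA-14400: inserted partition key does not map..." surprises.
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
( PARTITION p_start VALUES LESS THAN (DATE '2011-01-01') );
```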

The exchange partition makes that truncate-the-oldest-partition job a much shorter process, but you do need to be aware of the use of GLOBAL vs. LOCAL indexes.
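A sketch of that trick (names are illustrative, continuing the example table above): instead of deleting rows from the big table, you swap the oldest partition with an empty heap table, which is essentially a data-dictionary operation:

```sql
-- Empty table with the same shape as the partitioned table.
CREATE TABLE sales_old AS SELECT * FROM sales WHERE 1 = 0;

-- Swap segments: the old partition's rows now live in SALES_OLD and the
-- partition itself is empty. Near-instant, and generates almost no redo.
ALTER TABLE sales EXCHANGE PARTITION p_start WITH TABLE sales_old
  INCLUDING INDEXES WITHOUT VALIDATION;

DROP TABLE sales_old PURGE;
```

This is where GLOBAL vs. LOCAL matters: LOCAL indexes are exchanged along with the partition, but GLOBAL indexes are marked UNUSABLE by the exchange unless you add UPDATE GLOBAL INDEXES (which makes the operation correspondingly slower).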
