RE: ASM administrative guidelines
Date: Wed, 13 Feb 2008 09:48:52 -0500
Well, first and foremost, you should *not* mix disk drives of different speeds in the same disk group. If your SAN is stat-mux plaid under the covers, so that your LUNs are already not really isolable units of i/o, then there is unlikely to be a performance benefit in going beyond 2 groups. If you think you might provide media of different speeds at some point in the future, then isolating online redo and undo now might be useful, since you would then be moving a whole disk group.

If the folks who manage your storage have already mixed disks of different speeds in constructing LUNs, that is worth a fair amount of effort to untangle. If you have time-based partitions you may well have a good economic case for mixing different speeds of disk farms at an installation, for example to support the hot young stuff and the lukewarm older stuff, but you should not mix them together in one disk group.
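The isolation described above can be sketched in ASM DDL. This is a minimal sketch only; the disk group names, paths, and redundancy choice are hypothetical placeholders, not a recommendation for any particular array:

```sql
-- Hypothetical sketch: keep online redo (and undo) in their own disk
-- group so they can later be moved wholesale to faster media simply by
-- swapping out that one group. Disk paths are made up for illustration.
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/rdsk/data01', '/dev/rdsk/data02';

CREATE DISKGROUP REDO EXTERNAL REDUNDANCY
  DISK '/dev/rdsk/fast01', '/dev/rdsk/fast02';
```

The point of the split is administrative: a later migration to faster spindles touches only the REDO group, not the bulk of the data.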
If, on the other hand, your storage configuration has defied the preponderance of advice that stat-mux plaids are the "best practice" (a practice verified by randomized block access tests that guarantee no other alignment can do better, as opposed to a mix of random and batch stream i/o that simulates a known job mix), then there are myriad opportunities to gain some fault isolation and performance improvement for some extra effort in setup.

As for having multiple databases on a single host, that also implies you are already bucking the "best practice" of having a single global database and coalescing all databases into one. (Good for you, if you're doing it for valid reasons, such as wildly different service level, client timezone, and uptime requirements for different applications.)
So I would say *no*, there is no inherent reason for having only two disk groups, other than that being the only way to get the flattest possible distribution of data by size via ASM balancing across all your disks (short of mixing redo and data, which the builders of ASM themselves advise against). Flattest by bulk, by the way, does not translate to fastest, but it does tend to minimize hot spots. If you have not tangled the physical provisioning down to the spinning rust (or battery-backed and disk-backed solid state media), it seems likely to be a useful investment of your time to isolate your database backups.
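That flat-by-bulk distribution comes from ASM's rebalance, which redistributes extents whenever disks are added to or dropped from a group. A minimal sketch, with a hypothetical disk group and path (the POWER level is illustrative, not a recommendation):

```sql
-- Adding a disk triggers an automatic rebalance across the whole group;
-- the POWER clause only controls how aggressively extents are moved.
ALTER DISKGROUP DATA ADD DISK '/dev/rdsk/data03' REBALANCE POWER 4;

-- Rebalance progress is visible while it runs.
SELECT group_number, operation, est_minutes FROM V$ASM_OPERATION;
```

Note that the rebalance evens out bulk across the disks of one group; it does nothing across groups, which is why the number of groups matters for distribution.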
Just over a year ago some useful views on the underlying subject were posted here: http://jonathanlewis.wordpress.com/2007/02/05/go-faster-stripes
which are about the kindest words I've ever seen about my technical views on how disk farms should be set up.
On the other hand, going hog wild creating disk groups without a specific purpose in mind just creates complexity for no useful gain. It seems to me you are unlikely to go down that path.
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org]
On Behalf Of Jeffery Thomas
Sent: Wednesday, February 13, 2008 8:26 AM
To: oracle-l
Subject: Re: ASM administrative guidelines
I received a number of replies in private, and the consensus was just two disk groups per cluster. Are there some inherent reasons for having only two disk groups, other than for administrative simplicity? Are there issues with having a large number of disk groups? For example, we may be looking at EMC Snapview to perform our backups on a cluster that will have multiple databases. Accordingly, we would have to segregate each database so that each has its own set of disk groups, a minimum of 3 per database: data, flash, and online redo logs.
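The per-database segregation described above might look like the following, assuming one set of groups per database so a Snapview split can capture one database's storage without touching its neighbors. Names and paths are hypothetical:

```sql
-- Hypothetical layout: three disk groups per database (data, flash
-- recovery area, online redo) so each database can be snapped alone.
CREATE DISKGROUP DB1_DATA  EXTERNAL REDUNDANCY DISK '/dev/rdsk/db1d1';
CREATE DISKGROUP DB1_FLASH EXTERNAL REDUNDANCY DISK '/dev/rdsk/db1f1';
CREATE DISKGROUP DB1_REDO  EXTERNAL REDUNDANCY DISK '/dev/rdsk/db1r1';
-- ...and likewise DB2_DATA, DB2_FLASH, DB2_REDO for the next database.
```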
On Feb 7, 2008 10:45 AM, Jeffery Thomas <jeffthomas24_at_gmail.com> wrote:
We are looking at migrating to 10gR2 RAC/ASM from 9i RAC/VERITAS. I've read various papers and purchased the Oracle ASM book from Oracle Press and have researched this stuff, but from a practical level, I was wondering what kind of admin guidelines those of you in more mature ASM shops may have developed, or perhaps any paradigm shifts that you've made as a result of working with ASM.
For example, have you changed your standards for multiplexing control files and redo logs? Do you use OMF, or have you devised any sort of naming scheme with respect to ASM file aliases? If you were to redo your ASM install, would you change anything about how you configured it or are administering it?