Re: ASM for single-instance 11g db server?

From: onedbguru <>
Date: Mon, 4 Apr 2011 20:46:06 -0700 (PDT)
Message-ID: <>

On Apr 4, 9:02 pm, Mladen Gogala <> wrote:
> On Mon, 04 Apr 2011 17:08:03 -0700, John Hurley wrote:
> > I do not agree with Mladen's assertion that ASM is only for RAC.
> The primary purpose of ASM is to prevent the competition from using a
> general purpose FS for their databases. Cluster-consistent file system
> can get an open source DB like MySQL, PostgreSQL or Firebird  one giant
> step closer to something like RAC.
> The problem with ASM is that it cannot be accessed by the general purpose
> file tools like tar, cpio, cp, od or fuser. If your dump file is on ASM,
> you can't use bzip2 to compress it. The only compression utility
> available on ASM is "rm", from asmcmd. That achieves a 100% compression
> but decompression can be tricky. No mv, either, no moving archive logs to
> a less crowded place. ASM is a non-standard form of storage and I would
> stay away from it, wherever I can. Unfortunately, I can't stay away from
> it with RAC.
> That is the same thing that will eventually kill off Exadata: you have to
> buy a rather expensive SAN device that you can only utilize for Oracle. I
> can imagine the facial expression of my CIO when asking: "what do you
> mean, that I cannot store files there?" Rolls Royce is a better car than
> Toyota Camry, but there are other considerations, too.
> --


I would disagree with your limited and inaccurate assessment of ASM and its other features such as ACFS - especially in 11gR2. I am a STRONG proponent of using ASM for everything I can, and that includes OS files on ASM/ACFS - think of it as a volume manager that works not only on a single node but across an entire cluster. A year or so ago I saw it used for middle-tier WebLogic servers, where the OS files were accessible from any node in the cluster. With EVERY database I have migrated to ASM (both RAC and single-instance) I gained a minimum of 7%, and more often closer to 13%, in performance. That includes tiny databases (<100GB) as well as ELDBs (extremely large DBs in the hundreds-of-TB range), both clustered and non-clustered.
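For anyone who hasn't tried ACFS: here is a minimal sketch of carving a general-purpose file system out of an ASM disk group in 11gR2. The DATA disk group, volume name, size, and mount point are all illustrative, and the exact volume device name under /dev/asm/ is assigned by ADVM (volinfo shows the real one).

```
# Run from the Grid Infrastructure environment; names and sizes
# here are illustrative.
asmcmd volcreate -G DATA -s 10G appvol
asmcmd volinfo -G DATA appvol          # shows the ADVM volume device
mkfs -t acfs /dev/asm/appvol-123       # device name comes from volinfo
mkdir -p /u02/app/files
mount -t acfs /dev/asm/appvol-123 /u02/app/files
```

From there, any node in the cluster that mounts the volume sees the same files - which is exactly what made the WebLogic middle-tier case work.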

BTW - I NEVER propose wasting spindles on RAID 10 on a SAN array. With modern SAN storage, the many GB of cache at the array makes any perceived write penalty of RAID 5 a moot point. When you manage hundreds of TB of storage, mirroring is a massive waste and still does not perform any better. As an additional feature, if you really do need 1) long-distance clusters or 2) redundant storage, you can use ASM's FAILURE GROUPS to get mirrored disk groups. Works great!!
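For reference, a sketch of what that looks like - the device paths are hypothetical. With NORMAL redundancy, ASM mirrors each extent across failure groups, so losing an entire frame (or room) still leaves the disk group mounted.

```sql
-- Hypothetical device paths; run as SYSASM on the ASM instance.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP room1 DISK '/dev/mapper/asm01', '/dev/mapper/asm02'
  FAILGROUP room2 DISK '/dev/mapper/asm03', '/dev/mapper/asm04';
```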

With standard file systems, each file system gets only one read and one write FIFO queue for moving data to and from memory. When you present many smaller LUNs to ASM, the resulting parallelism gives much superior performance.
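You can see the individual LUNs ASM stripes across (each with its own I/O queue) from the ASM instance. The view and columns below are standard; what it returns is obviously site-specific.

```sql
-- Lists every disk ASM knows about, grouped by disk group.
SELECT group_number, path, total_mb, reads, writes
FROM   v$asm_disk
ORDER  BY group_number, path;
```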

Moving to "another storage vendor" is very easy with ASM - you can do it with NO downtime. Several years ago we used ASM to move a several-hundred-TB db from EMC to NetApp, while it was adding >1TB/day, with very little performance impact on the rest of the system. I have never seen any file-system-based database do that, and I have been around for a very long time... (EMC, IMO, is very limited in its ability to RAID anything together - on the last system I worked on, the max was 8 spindles in a RAID group. That may have changed. My former employer ran into a major problem where the array was swapping in a spare and it took 5 days; meanwhile, IIRC, nothing could be done on that frame until it completed. Even DEC/HP StorageWorks ran circles around EMC.)
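The whole vendor swap boils down to one online statement - the disk names and paths below are made up. ASM moves the extents in the background and the database stays open the whole time.

```sql
-- Add the new vendor's LUNs and drop the old disks in one statement;
-- a single rebalance migrates the data online.
ALTER DISKGROUP data
  ADD  DISK '/dev/mapper/netapp_lun1', '/dev/mapper/netapp_lun2'
  DROP DISK emc_disk_01, emc_disk_02
  REBALANCE POWER 4;
-- Progress can be watched in V$ASM_OPERATION.
```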

Oracle just announced a new product called CloudFS. Guess what it is built from?? Oracle Clusterware + ASM + ACFS. And the performance of ASM+ACFS is significantly faster - especially READs and DELETEs. On Linux (or any other UNIX, for that matter), try doing an "rm -rf somedir" where "somedir" holds more than 300GB in hundreds of files - and tell me how long it took. With ACFS the removal of the pointers is instantaneous.
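If you want to see the file-system delete cost at small scale (scaled way down from the 300GB case), something like this shows the delete time on any ordinary FS - crank FILES up and watch it grow:

```shell
#!/bin/sh
# Create a directory of many small files, then time the recursive
# delete. Scale FILES (and the file size) up to make the effect obvious.
FILES=1000
dir=$(mktemp -d)
i=1
while [ "$i" -le "$FILES" ]; do
    head -c 1024 /dev/zero > "$dir/f$i"
    i=$((i + 1))
done
start=$(date +%s)
rm -rf "$dir"
end=$(date +%s)
echo "removed $FILES files in $((end - start))s"
```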

As for compression - if the need is for performance, quit being so stingy with the disk space and buy what you need, not what you can "get away with". As they have said for years now, "disks are cheap". With TB-sized databases, compressing and uncompressing is a ludicrous configuration.

I would also add that using ASM for non-RAC databases is very much worth it. When your db gets to the point that it needs RAC, it will already be on ASM. And with RAC, if you need to move to new systems, you just add the new hardware to the cluster, shut down the old nodes, and keep going. Works great, lasts a long time...

Received on Mon Apr 04 2011 - 22:46:06 CDT
