Re: ASM parameters

From: Mladen Gogala <>
Date: 28 Jan 2008 20:42:49 GMT
Message-ID: <479e3e49$0$1344$>

On Mon, 28 Jan 2008 11:01:21 -0800, joel garry wrote:

> On Jan 27, 1:17 pm, Mladen Gogala <> wrote:

>> On Sat, 26 Jan 2008 19:14:14 -0800, Charles Hooper wrote:
>> > "Take the Guesswork Out of Database Layout and I/O Tuning with
>> > Automatic Storage Management"
>> > ...20guesswork%20out%20of%20db%20tuning%2001-06.pdf
>> > "The Kernel Sequential File I/O (ksfq) provides support for
>> > sequential disk/tape access and buffer management.  Ksfq allocates 4
>> > sequential buffers by default.  The size of the buffers is determined
>> > by dbfile_direct_io_count parameter set to 1MB by default.  Some of
>> > the ksfq clients are Datafile, Redolog file, RMAN, Archive log file,
>> > Datapump, Data Guard and the File Transfer Package."
>> Charles, this is a great document. Here is my brief understanding of
>> how ASM works. ASM runs in user space, not in kernel space, which
>> means it isn't a driver. It only provides the database processes
>> (s00x, dbwr, lgwr, ckpt) with the locations to read from or write to.
>> The I/O calls themselves are still performed by the corresponding
>> database processes and are targeted at the underlying raw devices. In
>> other words, ASM handles what is known as "file system metadata":
>> directories, files, extent maps and the like. If you take a look at
>> IBM JFS, the open source implementation, you will spot the terms
>> "transaction" and "file system metadata" in the include headers. JFS
>> is, of course, a full-grown file system with various options. ASM is
>> not, but Oracle still needs to know exactly where on disk block 20A3F
>> of file 133 is. ASM takes care of that. The DBA puts a bunch of raw
>> devices into an ASM disk group, and ASM creates the extent maps and
>> "directories" for him.
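
For illustration, here is a minimal C sketch of that division of labor.
This is not Oracle's code; every structure, device path and number in it
is hypothetical. A user-space process resolves a (file, block) pair
through an extent map, then opens the raw device and issues the read
itself -- which is all the "ASM only hands out locations" point amounts
to:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK_SIZE 8192                /* a typical Oracle block size */

/* Hypothetical extent-map entry: the "file system metadata" the ASM
 * instance maintains -- which raw device, and at what offset, a slice
 * of a database file lives. */
struct extent {
    const char *raw_device;            /* e.g. "/dev/raw/raw1" */
    off_t       start_offset;          /* byte offset of extent on device */
    uint32_t    first_block;           /* first file block in the extent */
    uint32_t    block_count;           /* blocks covered by the extent */
};

/* Resolve a file block to its extent (what "asking ASM" amounts to). */
static const struct extent *lookup_extent(const struct extent *map,
                                          size_t n, uint32_t block)
{
    for (size_t i = 0; i < n; i++)
        if (block >= map[i].first_block &&
            block <  map[i].first_block + map[i].block_count)
            return &map[i];
    return NULL;
}

int main(void)
{
    /* A toy extent map for "file 133": two extents on two raw devices. */
    const struct extent map[] = {
        { "/dev/raw/raw1", 1 * 1048576, 0,   128 },
        { "/dev/raw/raw2", 2 * 1048576, 128, 128 },
    };
    uint32_t block = 63;               /* stand-in for "block 20A3F" */

    const struct extent *e = lookup_extent(map, 2, block);
    if (e == NULL) { fprintf(stderr, "block not mapped\n"); return 1; }

    /* The "database process" opens the raw device and does the read
     * itself; the metadata layer only said *where* to read. */
    int fd = open(e->raw_device, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char *buf;
    if (posix_memalign((void **)&buf, 4096, BLOCK_SIZE) != 0) return 1;

    off_t off = e->start_offset +
                (off_t)(block - e->first_block) * BLOCK_SIZE;
    ssize_t n = pread(fd, buf, BLOCK_SIZE, off);
    printf("read %zd bytes of block %u from %s at offset %lld\n",
           n, block, e->raw_device, (long long)off);

    free(buf);
    close(fd);
    return 0;
}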
>> At the operating system level, one can control the caching and
>> prefetching of file blocks. I was trying to do the same with ASM, but
>> to no avail. One very nice thing about a general-purpose file system
>> like JFS is that one can open files either with or without the
>> O_DIRECT flag. If a file is opened without the O_DIRECT flag, it will
>> be buffered and the file system prefetch will be applied to it. That
>> is a great help for utilities like tar, cpio, cp, scp or gzip, and
>> their performance will be as expected. If, on the other hand, ASM or
>> OCFS is all you have, be prepared for a shock. Simple tar or cp
>> operations will literally take hours. Not even the Oracle versions of
>> those utilities help much. Buffering and prefetch are the only cure.
>> That is why I moved log_archive_dest outside of ASM wherever
>> possible. Still, the performance of RMAN backup is abysmal. I am
>> using RMAN with the MML library for NetBackup. With log_archive_dest
>> on ext3, cross-mounted over NFSv3, I get a 30 MB/sec transfer rate.
>> With ASM, only 5 MB/sec. I am alleviating the problem with "BACKUP AS
>> COMPRESSED BACKUPSET", but that, too, is slow. I was hoping for an
>> under-the-hood file system cache and prefetch implementation in ASM.
>> --
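
To illustrate the buffered-versus-direct distinction above, here is a
minimal C sketch -- not how ASM or any Oracle utility actually opens its
files, and the path is made up. The buffered descriptor gets the page
cache and the kernel readahead that tar, cp and friends rely on; the
O_DIRECT descriptor gets neither, so every read is a synchronous trip to
the disk:

#define _GNU_SOURCE                    /* for O_DIRECT on Linux */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/u01/arch/arch_1_1234.arc";   /* illustrative */

    /* Buffered: blocks land in the page cache, and this hint makes
     * the kernel read ahead of a sequential reader. */
    int fd_buf = open(path, O_RDONLY);
    if (fd_buf >= 0)
        posix_fadvise(fd_buf, 0, 0, POSIX_FADV_SEQUENTIAL);

    /* Direct: the page cache is bypassed; buffer, offset and length
     * must be suitably aligned, and no readahead hides the latency. */
    int fd_dir = open(path, O_RDONLY | O_DIRECT);
    char *buf = NULL;
    if (posix_memalign((void **)&buf, 4096, 1048576) != 0) return 1;

    if (fd_dir >= 0) {
        ssize_t n = pread(fd_dir, buf, 1048576, 0);   /* 1 MB read */
        printf("direct read returned %zd\n", n);
    }

    free(buf);
    if (fd_buf >= 0) close(fd_buf);
    if (fd_dir >= 0) close(fd_dir);
    return 0;
}
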
> Have you tried increasing the channel count? Do the PX waits increase
> if you do?
> I'm wondering where the real problem lies: whether it really is ASM,
> something that is a consequence of the MML, or the RAC issues implied
> by those gcs and ges waits. Maybe ASM just isn't always smart enough
> to know which node has the appropriate block, and has to ask around
> and wait to find out. Have you tried it without RAC?
> jg

Joel, a colleague of mine actually made the same suggestion. I will
definitely try it out. When two smart guys suggest the same thing, it
is likely to be the solution. I'll post the results.
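
For the record, what I will be trying is along these lines. The channel
names are arbitrary, and the PARMS/ENV settings for the NetBackup MML
are site-specific, so I leave them out:

RUN {
  # Hypothetical: four tape channels instead of the default one.
  # PARMS/ENV values for the NetBackup MML go on each ALLOCATE line.
  ALLOCATE CHANNEL t1 DEVICE TYPE SBT;
  ALLOCATE CHANNEL t2 DEVICE TYPE SBT;
  ALLOCATE CHANNEL t3 DEVICE TYPE SBT;
  ALLOCATE CHANNEL t4 DEVICE TYPE SBT;
  BACKUP AS COMPRESSED BACKUPSET DATABASE;
}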

Received on Mon Jan 28 2008 - 14:42:49 CST
