Re: ASM and RAID

From: Robert Klemme <shortcutter_at_googlemail.com>
Date: Tue, 20 Jan 2009 23:33:05 +0100
Message-ID: <6tn1p1Fbjgn9U2_at_mid.individual.net>



On 20.01.2009 04:09, Michael Austin wrote:
> Robert Klemme wrote:
>> On 18.12.2008 17:32, Michael Austin wrote:
>>> ASM also does not do the actual reading/writing - the RDBMS does. In 
>>> other words - ASM does not proxy the I/O for the RDBMS - RDBMS writes 
>>> directly to the data files.  ASM just tells the RDBMS what the extent 
>>> map is and only at file open time... which is why you need additional 
>>> shared_pool (1MB for every 100GB of file space).
>>
>> That's an interesting bit of information.  How is the ASM able to 
>> replace a clustered file system?  Does it provide only the meta data 
>> layer which controls concurrent access to files?

>
> Sorry for taking so long to get back to this thread...

Thank you for the update nevertheless!
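
As an aside, the "1MB for every 100GB of file space" rule of thumb quoted 
above is easy to put into numbers. A back-of-the-envelope sketch (my own 
arithmetic, the database size is made up for illustration):

  # 1 MB of extra shared pool per 100 GB of ASM-managed file space,
  # per the rule of thumb quoted above; 1.5 TB is just an example.
  file_space_gb = 1500
  extra_shared_pool_mb = (file_space_gb / 100.0).ceil
  puts "extra shared_pool: ~#{extra_shared_pool_mb} MB"   # => ~15 MB

So even a fairly large database only needs a modest bump.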

> In order to have a "clustered file system", one of the prerequisites is
> to have some method of I/O fencing - a mechanism that prevents two servers
> from writing the same data block at the same time. This is what Cache
> Fusion, which includes the Distributed Lock Manager (DLM), is for: it
> allows concurrent writes to raw devices from multiple servers in a RAC
> cluster.

Ok, that's what I meant: ASM manages the meta data (i.e. which process 
is writing which block), and the instances query this before writing or 
reading.
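
Just to make sure we mean the same thing by "lock manager", here is a 
purely conceptual toy sketch (mine, not Oracle's implementation - Cache 
Fusion obviously does far more, e.g. shipping blocks between caches): 
writes to the same block are serialized, while writes to different 
blocks do not contend.

  require 'thread'

  # Toy "DLM": hands out one mutex per block id, so only one writer
  # can touch a given block at a time.
  class ToyLockManager
    def initialize
      @guard       = Mutex.new
      @block_locks = {}
    end

    def lock_for(block_id)
      @guard.synchronize { @block_locks[block_id] ||= Mutex.new }
    end
  end

  DLM     = ToyLockManager.new
  STORAGE = {}   # stands in for the shared raw devices

  def write_block(instance, block_id, data)
    DLM.lock_for(block_id).synchronize do   # the fencing step
      STORAGE[block_id] = [instance, data]
    end
  end

  write_block("instance_1", 42, "some row piece")
  write_block("instance_2", 42, "another row piece")  # serialized on block 42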

> If you ever worked in an OpenVMS environment - this is how they made it
> work... The DLM is the most important piece of this whole thing. Without
> it, you would have chaos.

Which can be nice at times - but not in an Oracle tablespace. :-)

>>> There is a book called Automatic Storage Management - practical 
>>> under-the-hood ??? that is very good at explaining the mechanisms 
>>> within ASM and RDBMS...
>>
>> I am assuming you mean this one: http://www.amazon.com/dp/0071496076

>
> Yes.

Ok.

Cheers

        robert

-- 
remember.guy do |as, often| as.you_can - without end
Received on Tue Jan 20 2009 - 16:33:05 CST
