Re: OCFS2 on RHAS4

From: David Barbour <david.barbour1_at_gmail.com>
Date: Mon, 15 Sep 2008 14:50:38 -0400
Message-ID: <69eafc3f0809151150x661aba4ei281f8e20b4ed1200@mail.gmail.com>


I think Roman is on the right track. There also appears to be a mismatch in the block sizes and methodology each command uses: du totals up the blocks of the files it can walk, df reports the filesystem's own count of allocated space (metadata included), and depending on the flags an "M" or "G" can be binary (1024-based) or decimal (1000-based). I remember seeing this some time back and having to explain it to damagement. We see a similar mismatch in space utilization between reports generated by SAP on AIX, the same report run natively on the AIX server, and the same report from Oracle Grid Control; they're all different. Oracle, for example, considers 1000M to be 1GB.

Here's what I do see on my only Linux box (still trying to determine if this fad is going to last):
[root_at_swslnxorctst ~]# df -h /var

Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-varlv 1008M  134M  824M  14% /var

[root_at_swslnxorctst ~]# du -sh /var
113M /var

[root_at_swslnxorctst ~]# du -s --si /var
118M /var

[root_at_swslnxorctst ~]# du -s --apparent-size /var
96338 /var

[root_at_swslnxorctst ~]# du -sk /var
115184 /var
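
Doing the arithmetic on those numbers, most of the spread is just units: 115184 KiB is
about 112.5 MiB, which -h rounds up to 113M, and about 117.9 decimal megabytes, which
--si reports as 118M. A quick sanity check with plain shell arithmetic (nothing
system-specific, just the numbers above):

  echo $(( 115184 / 1024 ))             # KiB -> MiB (binary), integer: 112
  echo $(( 115184 * 1024 / 1000000 ))   # KiB -> MB (decimal/SI), integer: 117

The remaining gap against df's 134M Used is presumably metadata (journal and friends)
that the filesystem itself has allocated, which df counts but du never walks.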

On Mon, Sep 15, 2008 at 11:28 AM, Roman Podshivalov < roman.podshivalov_at_gmail.com> wrote:

> What about not fully populated tempfiles/sparse files? They are usually not
> reported by df, but in your case with OCFS who knows... BTW, most of the
> difference falls into the first mountpoint, where you have sys and, I presume,
> temp files; the rest could be just a rounding error 8-)
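
Good point. If anyone wants to see the sparse-file effect in isolation, a throwaway
file makes it obvious (the path and size below are just an example, not anything from
the systems in question): ls reports the apparent size, while du and df barely move.

  dd if=/dev/zero of=/tmp/sparse.tst bs=1M count=1 seek=1023   # writes the last 1 MiB of a 1 GiB file; the first 1023 MiB stay unallocated
  ls -lh /tmp/sparse.tst                   # apparent size: 1.0G
  du -sh /tmp/sparse.tst                   # allocated: roughly 1M
  du -sh --apparent-size /tmp/sparse.tst   # back to 1.0G
  df -h /tmp                               # Used on the underlying fs barely moves
  rm /tmp/sparse.tst

Whether OCFS2 allocates Oracle tempfiles the same way is the open question.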
>
> --romas
>
> On Mon, Sep 15, 2008 at 9:53 AM, sol beach <sol.beach_at_gmail.com> wrote:
>
>> All the systems were concurrently shutdown & rebooted on Friday.
>> fsck.ocfs was run against all volumes & found no problems.
>>
>>
>> On Mon, Sep 15, 2008 at 4:02 AM, Roberts, David (GSD - UK) <
>> david.h.roberts_at_logica.com> wrote:
>>
>>> Discrepancies certainly can appear on any file system when files that are
>>> still held open by a running program are deleted, as Unix will not free
>>> the space until the file handle has been closed.
>>>
>>> When were the systems last re-booted? And if it is a while since they
>>> were rebooted, do the values from df change after they are rebooted?
>>>
>>> Dave
>>>
>>> David Roberts
>>> Logica - Releasing your potential
>>>
>>> www.logica.com
>>>
>>>
>>>
>>> -----Original Message-----
>>> From: oracle-l-bounce_at_freelists.org
>>> [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of sol beach
>>> Sent: 12 September 2008 23:16
>>> To: oracle-l
>>> Subject: OCFS2 on RHAS4
>>>
>>> It appears to me that OCFS2 has a "space leak": the amount of disk in use
>>> reported by du -sh does not agree with df -h. I see the same on 3 different
>>> systems. Does anyone else see the same? On other file systems (ext3, etc.)
>>> I have not seen such discrepancies.
>>> ==========================================
>>> [root_at_rac01 u01]# du -sh /u01 /u03 /u04 /u05 /u06
>>> 8.5G /u01
>>> 27G /u03
>>> 184G /u04
>>> 28G /u05
>>> 16K /u06
>>> [root_at_rac01 u01]# df -h
>>> Filesystem         Size  Used Avail Use% Mounted on
>>> /dev/cciss/c0d0p5   22G  2.6G   18G  13% /
>>> /dev/cciss/c0d0p1   99M   18M   76M  19% /boot
>>> none                16G     0   16G   0% /dev/shm
>>> /dev/cciss/c0d0p3   22G  153M   20G   1% /var
>>> /dev/sda1          547G  187G  361G  35% /u04
>>> /dev/sdj1          136G   29G  108G  21% /u03
>>> /dev/sdc1           18G   16G  2.5G  87% /u01
>>> /dev/sdd1          119G   30G   90G  25% /u05
>>> /dev/sde1          137G  1.1G  136G   1% /u06
>>> ==========================================
>>> [root_at_amo01dn ~]# du -sh /u01 /u02 /u03 /u04 /u05 /u06
>>> df -h
>>> 4.8G /u01
>>> 870M /u02
>>> 68G /u03
>>> 55G /u04
>>> 69G /u05
>>> 37G /u06
>>> [root_at_amo01dn ~]# df -h
>>> Filesystem         Size  Used Avail Use% Mounted on
>>> /dev/cciss/c0d0p3  7.7G  3.6G  3.8G  49% /
>>> /dev/cciss/c0d0p1   99M   18M   77M  19% /boot
>>> none               7.9G     0  7.9G   0% /dev/shm
>>> /dev/cciss/c0d0p5   18G  4.4G   13G  26% /var
>>> /dev/sda1          9.8G  6.0G  3.9G  61% /u01/orasys/orcl
>>> /dev/sda2          9.8G  1.2G  8.7G  12% /u02/oraredo/orcl
>>> /dev/sda3          118G   69G   49G  59% /u03/oraflash/orcl
>>> /dev/sdb1          547G   56G  492G  11% /u04/oradbf/orcl
>>> /dev/sdc1          137G   70G   68G  51% /u05/orasupp/orcl
>>> /dev/sdd1          137G   38G   99G  28% /u06/oraback/orcl
>>> ==========================================
>>>
>>> [root_at_amo1sd ~]# du -sh /u01 /u02 /u03 /u04 /u05 /u06
>>> 3.4G /u01
>>> 1.1G /u02
>>> 98G /u03
>>> 105G /u04
>>> 123G /u05
>>> du: cannot access `/u06': No such file or directory
>>> [root_at_amo1sd ~]# df -h
>>> Filesystem         Size  Used Avail Use% Mounted on
>>> /dev/cciss/c0d0p3  7.8G  3.2G  4.3G  43% /
>>> /dev/cciss/c0d0p1   99M   15M   80M  16% /boot
>>> none               6.8G     0  6.8G   0% /dev/shm
>>> /dev/cciss/c0d0p5   10G  152M  9.3G   2% /var
>>> /dev/cciss/c1d0p1  9.8G  8.9G  889M  92% /u01/orasys/orcl
>>> /dev/cciss/c1d0p2  9.8G  1.4G  8.5G  14% /u02/oraredo/orcl
>>> /dev/cciss/c1d0p3  118G  100G   19G  85% /u03/oraflash/orcl
>>> /dev/cciss/c1d1p1  684G  106G  579G  16% /u04/oradbf/orcl
>>> /dev/cciss/c1d2p1  137G  124G   13G  91% /u05/orasupp/orcl
>>>
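
One way to take rounding and units out of the picture on these boxes is to force both
commands into the same units and aim them at the same filesystem: GNU du and df both
take --block-size (assuming the RHAS4 coreutils are new enough, which they should be),
and on the second and third boxes the du should be pointed at the actual mount point
(/u01/orasys/orcl and friends, not /u01) so both tools look at the same volume. For
example, on the first box, where /u01 is the OCFS2 mount with the biggest gap:

  du -sx --block-size=1 /u01   # -x stops du at filesystem boundaries
  df --block-size=1 /u01

If the byte counts still differ by tens of gigabytes after that, it is not rounding,
and open-but-deleted files or whatever OCFS2 itself is holding are the places to look.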
>>>
>>
>>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Mon Sep 15 2008 - 13:50:38 CDT
