Re: Database Usage of Unix FFS

From: Neal P. Murphy <murphyn_at_orca.cig.mot.com>
Date: 24 Feb 94 16:20:47 GMT
Message-ID: <murphyn.762106847_at_orca>


lparsons_at_world.std.com (Lee E Parsons) writes:

>Right now I'm just having a problem listening to people make this
>raw -vs- cooked argument when I don't think we have actually
>set up the cooked FS in a reasonable manner. I'm hoping to take
>the suggestions this post generates and bench a raw fs with
>a cooked fs that has been tuned for a dbase environment.

If you create an FFS with the largest possible logical block size and 0% reserved space, AND you immediately write the entire DB file to disk (e.g., all nulls), then you needn't worry about the blocks of the file being scattered. The DB file will be laid out in FFS's best pattern. Blocks never get deleted from the DB file, so the file cannot become scattered. I really doubt that FFS dynamically moves blocks around on disk. If you can perform other tuning of the file system, do so before laying down the file.
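A minimal sketch of that setup, assuming a BSD-style newfs(8) and a hypothetical slice /dev/rsd1c (flag spellings and device names vary by system, so check your own man pages):

```shell
# Build the FFS with a large logical block size and no reserved space.
# -b is the logical block size in bytes; -m is the reserved-space percentage.
# (Device name and flags are illustrative -- do NOT run these as-is.)
#newfs -b 65536 -m 0 /dev/rsd1c
#mount /dev/sd1c /db

# Preallocate the whole DB file up front by writing nulls, so FFS lays the
# blocks out in its preferred pattern once and they never move afterward.
# A 1 MB demo file here; size count= to your real DB.
dd if=/dev/zero of=dbfile bs=65536 count=16
```

Any other tuning (e.g., via tunefs) should likewise happen before this dd, since the layout is fixed once the file is written.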

Also, create a DB of the same size on a raw slice/partition (e.g., /dev/rsd1b on a Sun). Then create another one on a block slice/partition (e.g., /dev/sd2a on a Sun).

Now do your worst on each of the three DB partitions in turn. I would not be surprised if you found that the block device turns in the best performance because:

  • the OS buffers/caches disk blocks in RAM, and
  • you won't have the overhead of the file system slowing you down.

A raw device gets no buffering/caching, though on one system or another the raw device has been found to be faster than the block device.

In summary, optimize an FFS for a few large files and large blocks, then create DBs on that filesystem, on a raw disk slice, and on a block disk device. Run your benchmarks and real-life applications and see which medium turns in the best performance.
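A rough harness for that comparison, using a sequential bulk write as a stand-in for the real workload (the raw/block device paths are illustrative placeholders; substitute your own slices before uncommenting them):

```shell
# Candidate media: the tuned-FFS file, the raw slice, and the block slice.
# Only the regular file is enabled here; add your devices to the list,
# e.g. targets="dbfile /dev/rsd1b /dev/sd2a".
targets="dbfile"
for t in $targets; do
    # Same bulk write against each medium; compare the reported timings.
    time dd if=/dev/zero of="$t" bs=65536 count=16
done
```

Real benchmarks should also include the random-access pattern your DB actually generates, since the buffer cache helps sequential I/O far more than scattered reads.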

Fester

Received on Thu Feb 24 1994 - 17:20:47 CET
