Re: Raw I/O vs files (was Re: Raw partitions / cooked files)
Date: Tue, 27 Jul 93 22:17:43 GMT
Message-ID: <1993Jul27.221743.27702_at_exlog.com>
In article <2500_at_coyote.UUCP> gainer_at_slowmo.almaden.ibm.com (P. Gainer) writes:
>In a recent append, kevinc_at_sequent.com (kevinc) writes...
>
[...]
>
>In the meantime, many unix DBMSes implemented raw I/O. Now it seems to me
>that doing this allows one to write a filesystem customized for a database.
>In doing this, one should be able to obtain more performance than one could
>using the standard OS file system.
I'm not advocating that, of course, but what you're saying sounds a little like "I don't like the way vi works, so I'm going to change the kernel to fix it." :-}
All the things you would want built into the fs code are already in the database. There are a ton of good reasons for using cooked database files, but if your system is REALLY being impacted by the cost of running that filesystem code, then the answer is to get it out of the way, not to improve it. If you need the very small performance increase gained by running a non-standard fs, then you really need the increase gained by not running a fs at all.
>Has anyone done much benchmarking to determine whether recent raw I/O
>implementations are *significantly* faster than file system implementations?
I've asked the same question a couple of times and gotten no answer.
Before I get flamed, I should say there are cases where going raw is the right thing to do, but those cases are rare and usually arise in situations where doing the Right thing is not an option (i.e., get more/better disks or controllers, or increase memory).
-- 
Regards,

Lee E. Parsons                          Baker Hughes Inteq, Inc
Oracle Database Administrator           lparsons_at_exlog.com