Re: Nearest Common Ancestor Report (XDb1's $1000 Challenge)

From: thirdrock <billdonovan1947_at_yahoo.com.au>
Date: Tue, 01 Jun 2004 16:33:34 +1000
Message-ID: <opr8wkh8jbt90d53_at_news.optusnet.com.au>


On Sun, 30 May 2004 00:24:48 +0200, Hugo Kornelis <hugo_at_pe_NO_rFact.in_SPAM_fo> wrote:

> On 29 May 2004 11:18:22 -0700, Neo wrote:

> No, they are not. There is no such thing as a "RAM table" in MS SQL
> Server.

Yes, there is: it's called a temporary table. Really fast, too, and it does not get flushed to disk at any point.
Still, what was the point you were trying to make?
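
For what it's worth, a quick sketch of what I mean (the table and column names are made up for illustration; the # prefix is what makes it temporary):

  -- Session-scoped temporary table, created in tempdb
  CREATE TABLE #ancestors (
      node_id   INT NOT NULL,
      parent_id INT NULL
  )

  -- Populate and query it like any ordinary table
  INSERT INTO #ancestors (node_id, parent_id) VALUES (1, NULL)
  INSERT INTO #ancestors (node_id, parent_id) VALUES (2, 1)

  SELECT node_id, parent_id FROM #ancestors

  -- Dropped automatically when the connection closes
  DROP TABLE #ancestors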

>
> All modifications to data are written to disk. In fact, they are written
> *twice*. The data file contains the actual table contents. The layout is
> optimised for fast access. Data pages are cached to speed up retrieval if
> the same data is to be accessed again. Changes are made to data pages in
> cache; writing the changes to disk is an asynchronous process. But to
> prevent data loss due to power outage, all changes are also written to
> the log file. These writes are synchronous - SQL Server will never report
> a transaction completed until the HD controller confirms that the
> relevant log pages have been written to disk.
>
> Another thing to be aware of is the overhead introduced by the locking
> mechanism. Not only every write, but even every read requires a check if
> a lock can be made, the actual setting of that lock and the release of
> the lock when the transaction is complete.

I know, which makes updating a million-record table unbelievably slow on SQL Server. I did some Y2K work on a dual 400 MHz IBM server with ATA RAID, and a single update would take nearly 90 minutes. Shutting down SQL Server and restarting it in single-user mode (which causes other headaches, but that's enough of my problems) reduced that time to 50 minutes. This was version 6.5, so things may have improved since then. Every other database I tested the same update on beat that time by at least 50%; Oracle was 100% faster. Even Access was faster, but once the record count went above a million, Access started crashing and corrupting data.
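
These days I'd at least batch a job like that, so each statement holds fewer locks and writes less log at once. Something along these lines (table and column names invented; the two-digit-year fix is just an example):

  -- Limit each UPDATE to a chunk of rows (old-style, but it works on 6.5+)
  SET ROWCOUNT 10000

  WHILE 1 = 1
  BEGIN
      UPDATE customers
      SET    expiry_year = expiry_year + 1900
      WHERE  expiry_year < 100   -- rows still holding two-digit years

      IF @@ROWCOUNT = 0 BREAK    -- nothing left to update
  END

  -- Restore the default behaviour
  SET ROWCOUNT 0

Each pass commits on its own, so locks are held briefly and the log gets a chance to breathe between chunks.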

> When running the tests on large
> sets of data (to answer Nick's messages), I did run one test with SQL
> Server started in single user mode. This reduced the elapsed time by some
> 20 to 30% !! I decided to leave those tests out of the results I posted
> here, since using SQL Server in single user mode is very atypical (in
> fact, this mode is actually intended for maintenance work only).

Yes, quite. Since you can only have one connection, it takes some juggling.
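
For anyone playing along at home, from memory it went something like this at the command line (service name and paths vary by install; Ctrl+C stops the engine when you're done):

  rem Stop the service, then run the engine in single-user mode
  net stop mssqlserver
  sqlservr.exe -m

  rem ... do the maintenance work on the one allowed connection ...

  rem Afterwards, restart the service normally
  net start mssqlserver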

Have fun,
Ian

-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/