Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: RDBMS Server Performance Expectations

From: Curt Mayers, Consultant <curtm_at_wavenet.com>
Date: 1997/04/10
Message-ID: <334CA0E8.2D6C@wavenet.com>#1/1

Replies intermixed with your text:

John Hamer wrote:
>
> I'm working on a MIS project which has some scalability issues with
> its RDBMS, so I'm looking for comments from users of other RDBMS
> platforms. I don't have any experience on "big" RDBMS such as Informix
> or Oracle so as much as anything I'm looking for guidance on what
> behaviour we should reasonably expect. The project is using the Raima
> "Velocis SQL Server" on Netware 3.12 & 4.1, and Windows NT TCP/IP
> platforms, with record sizes up to around 1kB. Some tables contain
> tens of thousands of records and on large sites may contain a few
> million. Some trends have emerged that bother us:
>
> 1. The server RAM required during a SQL transaction appears to be one
> to two times the total size of modified records, i.e. updating
> 1,000 1kB records increases the SQL server's RAM allocation by
> about 2MB. This can lead to server crashing, e.g. "update TBL set
> NUM = 0;" can fail on a server with 96MB of RAM with a table size
> of several tens of thousands of records:
> a) do such broad updates work in general? If not, then how do people
> do such things as adding 10% to every stock item price?

	You can do the change outside of a transaction, or lock the
	table prior to doing the mass update (you must have exclusive
	access to the table during the update).  Sample code:

		begin work;
		lock table TABNAME in exclusive mode;
		update TABNAME set price = price * 1.10;
		commit work;

	If this makes the transaction too large for your logical
	logs, you can break it into smaller batches, or use an
	update cursor that commits periodically.
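	The batching idea can be sketched in Python against SQLite
	(the stock table, its columns, and the batch size are all
	made up for illustration; the point is only that committing
	per batch keeps any single transaction small):

```python
import sqlite3

# Demo table: a few thousand stock items with a price column (made-up data).
conn = sqlite3.connect(":memory:")
conn.execute("create table stock (id integer primary key, price real)")
conn.executemany("insert into stock (id, price) values (?, ?)",
                 [(i, 10.0) for i in range(5000)])
conn.commit()

# Apply the 10% increase in batches of 1000 rows, committing after
# each batch so no single transaction grows too large.
BATCH = 1000
(max_id,) = conn.execute("select max(id) from stock").fetchone()
low = 0
while low <= max_id:
    conn.execute("update stock set price = price * 1.10 "
                 "where id between ? and ?", (low, low + BATCH - 1))
    conn.commit()
    low += BATCH

(price,) = conn.execute("select price from stock where id = 0").fetchone()
print(round(price, 2))  # each row went from 10.0 to 11.0
```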

> b) do RDBMS generally eat RAM this fast in transactions?
No, normally not, although 96 MB is on the small side for a large-scale database server.

> 2. The size of records on the disk appears to be around the sum of
> field sizes PLUS the size of every instance of a field in each key.
> The data seems quite loosely packed; the database files typically
> PKZIPs to less than 10% of the original size. For our larger
> clients (keeping several million records) this will produce
> databases several gigabytes in size. Are other SQL databases more
> compact?
        

	No. Most databases are mostly air. String fields are
	generally the biggest source of air, because the entire
	declared string length is stored in each record, even if it
	is blank.

	Another potential source of waste is inappropriately declared
	blob fields (text or byte): I can't remember the precise
	record field length of a blob field (56 bytes springs to
	mind), plus at least 1 blob page (usually 1024 bytes) for
	any non-null field.
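	The "mostly air" effect is easy to demonstrate: fixed-width
	records blank-padded to their declared length compress
	extremely well, which matches your PKZIP observation. A
	rough sketch (the 1 kB record size, field contents, and row
	count are invented for illustration):

```python
import zlib

# Simulate 10,000 fixed-width 1 kB records: a short value blank-padded
# to its full declared length, as a CHAR column would be on disk.
record = b"WIDGET-42".ljust(1024, b" ")
table = record * 10000

packed = zlib.compress(table)
ratio = len(packed) / len(table)
print(f"{ratio:.1%}")  # well under 10% of the original size
```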
	

> 3. By PC standards the server hardware employed is moderate, e.g.
> 120MHz Pentium with 64MB RAM (Top-end "off the shelf" PC hardware
> (e.g. quad Pentium-Pro 200MHz) might (perhaps) run 10 times
> faster). This has performance around 1-10 insertions per second,
> 2-5 deletions per second, 10-50 find/reads per second.
> a) On a similar level of hardware, how does this performance rate?

	Not to dodge the question, but it depends upon the complexity
	of your selects, how much (and how appropriate) your indexing
	is, how much memory is allocated to the database engine, etc.
	If you can give me more information, I might be able to give
	you a better ballpark.


> b) What sort of hardware is used for other RDBMS with a similar size
> of databases (10,000 - 1,000,000 records?)
Again, depends. How big are the records? Are they being interactively queried? How complex are your selects? The Pentium can do pretty well, but for upwards of 1 million records in a table, some more memory might help a lot. A very rough ballpark is that engine memory should be sufficient to store about 5% of your total data.

> Many thanks for any advice you can give (and apologies for chucking
> into this and a couple of related newsgroups). I'd also be
> particularly interested to hear from any other large-scale Velocis
> users.
>
> --
> -- Dr. John Hamer Email: J.Hamer_at_cs.auckland.ac.nz
> -- Department of Computer Science http://www.cs.auckland.ac.nz
> -- University of Auckland Phone: +64 9 3737 599 x8758
> -- Private Bag 92019, Auckland, New Zealand. Fax: +64 9 3737 453
Received on Thu Apr 10 1997 - 00:00:00 CDT

