Re: RDBMS Server Performance Expectations
In article <j00encnqwjp.fsf_at_cs20.cs.auckland.ac.nz>, John Hamer
<jhamer@cs20.cs.auckland.ac.nz> writes
>
>I'm working on a MIS project which has some scalability issues with
>its RDBMS, so I'm looking for comments from users of other RDBMS
>platforms. I don't have any experience on "big" RDBMS such as Informix
>or Oracle so as much as anything I'm looking for guidance on what
>behaviour we should reasonably expect. The project is using the Raima
>"Velocis SQL Server" on Netware 3.12 & 4.1, and Windows NT TCP/IP
>platforms, with record sizes up to around 1kB. Some tables contain
>tens of thousands of records and on large sites may contain a few
>million. Some trends have emerged that bother us:
>
>1. The server RAM required during a SQL transaction appears to be one
> to two times the total size of modified records, i.e. updating
> 1,000 1kB records increases the SQL server's RAM allocation by
> about 2MB. This can lead to server crashing, e.g. "update TBL set
> NUM = 0;" can fail on a server with 96MB of RAM with a table size
> of several tens of thousands of records:
> a) do such broad updates work in general? If not, then how do people
> do such things as adding 10% to every stock item price?
Yes, they do. Conventional RDBMSs do not store the whole update in
memory; they store it on disk in logfiles. These should be on a separate
disk from the database so that they survive if a disk containing data
fails. They should also be continuously written to tape once the
transaction in that part of the logfile has committed.
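For example, the 10% price rise in (a) is a single statement. The table
and column names here are only for illustration:

  UPDATE stock_item
     SET price = price * 1.10;
  COMMIT;

The changes are written to the logfiles on disk as the statement runs, so
the update does not need RAM in proportion to the table size.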
> b) do RDBMS generally eat RAM this fast in transactions?
>
No - they write the changes to logfiles on disk rather than holding them
in RAM.
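To make that concrete with Oracle (this being an Oracle group): the online
redo logs go on a disk separate from the datafiles, and the database runs
in ARCHIVELOG mode so each filled logfile is copied off (to another disk or
to tape) before it can be reused. A rough sketch only - the file path below
is made up:

  -- add a redo log group on a disk separate from the datafiles
  ALTER DATABASE ADD LOGFILE GROUP 4 ('/u02/oradata/redo04.log') SIZE 5M;

  -- enable archiving of filled redo logs (issued while the database
  -- is mounted but not open)
  ALTER DATABASE ARCHIVELOG;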
>2. The size of records on the disk appears to be around the sum of
> field sizes PLUS the size of every instance of a field in each key.
> The data seems quite loosely packed; the database files typically
> PKZIPs to less than 10% of the original size. For our larger
> clients (keeping several million records) this will produce
> databases several gigabytes in size. Are other SQL databases more
> compact?
>
No.
>3. By PC standards the server hardware employed is moderate, e.g.
> 120MHz Pentium with 64MB RAM (Top-end "off the shelf" PC hardware
> (e.g. quad Pentium-Pro 200MHz) might (perhaps) run 10 times
> faster). This has performance around 1-10 insertions per second,
> 2-5 deletions per second, 10-50 find/reads per second.
> a) On a similar level of hardware, how does this performance rate?
> b) What sort of hardware is used for other RDBMS with a similar size
> of databases (10,000 - 1,000,000 records?)
>
>Many thanks for any advice you can give (and apologies for chucking this
>into this group and a couple of related ones). I'd also be
>particularly interested to hear from any other large-scale Velocis
>users.
>
>
--
David Williams

Received on Fri Apr 11 1997 - 00:00:00 CDT