Re: SGA sizing for high-performance OLTP database
The SGA is probably the least of your concerns. I would definitely look
into tuning I/O first. Any table over a million rows can gain a lot from
partitioning of the data, and you will most likely want to use local
indexes. The problem with large tables is that in order to query them you
have to have indexes, but once the indexes get large enough you can end up
reading 10-20 blocks just to get a single row. My guess is that if you are
limited to mainly two tables, you won't have a lot of variance in the SQL,
so you most likely won't need a large SGA. I would start with 100MB and
see if you even use that.
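
For illustration, a minimal sketch of what range partitioning with a
local index might look like on 8i. The table and column names are
hypothetical, and the monthly date-range scheme is just an assumption:

    -- Hypothetical trade table, range-partitioned by date so the most
    -- recent rows live in their own, smaller partition.
    CREATE TABLE trades (
        trade_id    NUMBER        NOT NULL,
        trade_date  DATE          NOT NULL,
        payload     VARCHAR2(200)
    )
    PARTITION BY RANGE (trade_date) (
        PARTITION p199908 VALUES LESS THAN
            (TO_DATE('1999-09-01', 'YYYY-MM-DD')),
        PARTITION p199909 VALUES LESS THAN
            (TO_DATE('1999-10-01', 'YYYY-MM-DD'))
    );

    -- LOCAL builds one small index tree per partition; a query with a
    -- predicate on trade_date prunes to a single partition and walks a
    -- much shallower index than one B-tree over the whole table.
    CREATE INDEX trades_date_ix ON trades (trade_date, trade_id) LOCAL;

Check the plan: a query restricted to one day's trade_date range should
touch a single partition rather than index blocks for the whole table.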
--
Bob Fazio
Remove no.spam from my email to reply
Jonathan Robinson <robinsonj_at_logica.com> wrote in message
news:7q5q8h$7f3_at_romeo.logica.co.uk...
> The OLTP system I'm working on requires extremely fast data retrieval:
> reads and writes, no updates. The platform is Solaris 2.6 with Oracle
> 8.1.5. In order to meet the client's performance requirements I have
> been looking at ways of optimising the database, e.g. keeping an
> important index on our most significant table in cache, keeping the
> last 15 minutes of data in cache, etc. I understand the maximum SGA
> size available on this architecture is 3.75Gb, so I am going to be
> limited in what I can keep in physical memory and what will be swapped
> out. The two most significant tables (through which 90% of transactions
> go) are 40Gb and 70Gb in size. Any pointers as to where I can gain a
> performance uplift?
>
> The system hasn't been implemented yet, so information from anybody
> with past experience of implementing systems with similar
> high-performance requirements would be much appreciated.
>
> Cheers.
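
On the caching question above: on 8i, one way to favour a hot index and
the most recent data in memory is a KEEP buffer pool, carved out of the
normal buffer cache. A minimal sketch, assuming an 8K block size; all
names and sizes here are illustrative, not recommendations:

    # init.ora -- roughly a 100MB SGA with an 8K block size
    db_block_size    = 8192
    db_block_buffers = 10000                          # ~80MB buffer cache
    buffer_pool_keep = (buffers:2500, lru_latches:1)  # ~20MB of it as KEEP
    shared_pool_size = 20000000                       # ~20MB shared pool

    -- assign the hot index (hypothetical name) to the KEEP pool
    ALTER INDEX trades_date_ix STORAGE (BUFFER_POOL KEEP);

Blocks read through that index then compete only with other KEEP-pool
objects, so they are far less likely to be aged out by scans against the
two big tables.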
Received on Mon Aug 30 1999 - 21:41:36 CDT