Home -> Community -> Usenet -> c.d.o.misc -> Re: Any Info Would Help...
hilljm1974_at_gmail.com wrote:
> Sorry, I guess the post was a bit confusing.
>
> I am not having any problems loading records into the tables. The
> issue is the response time when a user issues a query via the ad hoc
> engine. The table is denormalized (this is a data warehouse
> environment) and highly indexed. However, during testing, as I
> approach 13+ million records the query performance is diminished.
>
> Currently we are running dual Xeon 3GHz processors with 16 GB Ram with
> 1 mirrored pair of 160 GB 10K RPM HDD.
>
> Future hardware will be dual Dual-Core 3GHz processors with 24 GB Ram
> with 4 mirrored pairs of 300 GB 10K RPM HDDs.
>
> How would you define your tablespace, etc. to maximize the DB server's
> performance?
With modern hardware and Oracle, there isn't much you can do with tablespaces to maximize performance, although there are things you can do to ruin it. In particular, if you are writing archive logs, redo, undo and data tablespaces to a single device, you may be creating contention there. Many people have pointed out that it isn't how fast your drives are that matters; it's how many spindles the I/O can be spread across. And any I/O you _don't_ have to do is better still. That's why most performance problems are SQL problems.
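As a sketch of the "separate devices" point: something like the DDL below keeps data, undo, and redo on different mirrored pairs. All paths, names, and sizes here are made up for illustration; they are not from the original post.

```sql
-- Hypothetical layout: /u02, /u03, /u04 are assumed to be separate
-- mirrored pairs, so data, undo, and redo I/O don't share spindles.
CREATE TABLESPACE dw_data
  DATAFILE '/u02/oradata/dw/dw_data01.dbf' SIZE 8G;

CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u03/oradata/dw/undotbs01.dbf' SIZE 4G;

-- Redo log group on yet another device pair:
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u04/oradata/dw/redo04a.log') SIZE 512M;
```

The point isn't these particular numbers; it's that sequential redo writes stop competing with random datafile reads.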
You might be better off finding out what the real problem is. I see "highly denormalized" alongside "ad hoc" and I think, "oh boy, this person is ignoring basic performance design in his application and wondering why it doesn't scale." I see "mirrored pairs" and I wonder exactly how they are being mirrored and what the controller configuration is.
The first thing you should do is look at your wait events (exposed through the v$ views in Oracle) and see what the system is waiting on. Then run EXPLAIN PLAN on your problem query. Then run a trace on it. I predict you will eventually discover that any hardware you throw at this problem will scale arithmetically, while fixing the design and access paths will scale geometrically or better. A few Statspack runs may help, too. This is an iterative process; you probably have more than one thing wrong. What are your db_block_size, operating system, and SGA details?
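Concretely, that diagnostic loop looks something like this (the query against big_fact_table is a placeholder standing in for the slow ad hoc query; substitute your own):

```sql
-- What is the instance waiting on? (system-wide wait events)
SELECT event, total_waits, time_waited
  FROM v$system_event
 WHERE event NOT LIKE 'SQL*Net%'
 ORDER BY time_waited DESC;

-- Explain plan for the slow query, then display it:
EXPLAIN PLAN FOR
  SELECT COUNT(*) FROM big_fact_table;   -- placeholder query

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Trace the session, then format the trace file with tkprof:
ALTER SESSION SET SQL_TRACE = TRUE;
```

Run the problem query with tracing on, then look at the tkprof output for where the time actually goes.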
If you don't find anything obviously out-of-whack with how the optimizer is deciding to access your data, or various latching issues, you might look into predigesting the data for the performance-sensitive situations. It doesn't matter how many indices you have if you are always full-table-scanning, and sometimes it's better to full-table-scan than use an index. It just depends.
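One common way to "predigest" data in a warehouse is a materialized view with query rewrite, so the ad hoc tool's aggregate queries hit a small summary instead of the 13-million-row fact table. The table and column names below are invented for the example:

```sql
-- Hypothetical summary: precompute monthly totals once, not per query.
CREATE MATERIALIZED VIEW sales_by_month
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT region,
       TRUNC(sale_date, 'MM') AS sale_month,
       SUM(amount)            AS total_amount
  FROM sales_fact
 GROUP BY region, TRUNC(sale_date, 'MM');
```

With QUERY_REWRITE_ENABLED set, the optimizer can transparently answer matching GROUP BY queries from the summary.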
Even better than expecting a simple answer to your question here would be to learn how to fix the problems. Many people here like the books by Cary Millsap, Jonathan Lewis, and Thomas Kyte.
jg
-- @home.com is bogus. http://www.signonsandiego.com/news/science/20061130-9999-lz1c30memory.html
Received on Mon Jan 15 2007 - 19:25:09 CST