
Re: Performance difference between 2 machines

From: Fabrizio <fabrizio.magni_at_mycontinent.com>
Date: Wed, 22 Jun 2005 21:19:34 GMT
Message-ID: <GBkue.30934$b5.1388065@news3.tin.it>


Noons wrote:
> For the block sizes: I don't see them as a determining
> factor here at all. Provided they remain as large as
> (or larger) than the file system ones, you're in
> diminishing returns land.
>
>
> Fabrizio, Joel you following this? If so do you agree?

Hi Nuno,
yes, I read the post with the OP's results and waited *eagerly* for your comments.

I need to say in advance that I have very limited experience with HP-UX, so I hope any HP guru will correct my mistakes or imprecisions.

I compared the results and I believe you are right. CPU and memory can be the reason for the performance differences.

The first set of results (writes) showed comparable "overall time" for the two systems, while the "sys time" needs to be investigated. Why the difference?
CPU power? Different OS design?
Is async I/O in use on both machines?
As far as I know, aio needs to be explicitly enabled via SAM on HP-UX. And what is the maximum I/O size along the scheduler -> module -> device path on each system?
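Just as a sketch (I'm assuming a 9i/10g-era instance here, and these values are only examples, not taken from the OP's post), the Oracle side of the question can be checked straight from the instance:

   -- does Oracle think async I/O is enabled for datafile access?
   SELECT name, value
     FROM v$parameter
    WHERE name IN ('disk_asynch_io', 'filesystemio_options');

Of course, on HP-UX the OS side (the SAM / async driver setup) still has to be there for disk_asynch_io = TRUE to actually do anything.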

On the second set (reads) I fear we have another problem. I strongly believe that the I/O was buffered on the RH machine; I cannot affirm the same for HP-UX (my own ignorance). On RH it would explain the long time taken by the first dd and the similar times afterwards.
We also get a degradation when the block size is smaller than the page size (4k on an x86 machine).

A superficial conclusion: for writes the CPU (and probably the disabled aio) caused the performance hit, while for reads everything was decided by "memory" (caching).

Reading the original post, I agree with you. The block size is not the main problem here, especially on a DWH that showed poor results even for batch jobs (I assume this is a low-concurrency system).

For the "db file scattered read" I'd try to enable (if not already) the async i/o (this can make a big difference for batch jobs) and to adjust the DB_FILE_MULTIBLOCK_READ_COUNT.

Knowing the real "service time" would be useful...
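As a rough approximation of it from the Oracle side, something like this gives the average wait per read as the database sees it (time_waited is in centiseconds on these releases):

   -- average wait per I/O-related event
   SELECT event, total_waits, time_waited, average_wait
     FROM v$system_event
    WHERE event LIKE 'db file%read';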

However, I cannot consider a "select count(1) from TABLE_A" that takes 280 seconds for less than 200MB a good result...
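(Rough arithmetic: 200 MB in 280 s is about 0.7 MB/s, well below what even a single disk of that era should sustain on a sequential scan.)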

Not a great analysis, but I need to sleep! :) Maybe tomorrow I'll have better ideas.

Good night.

-- 
Fabrizio Magni

fabrizio.magni_at_mycontinent.com

replace mycontinent with europe
Received on Wed Jun 22 2005 - 16:19:34 CDT
