
Re: Recommended Wintel Hardware

From: Paul Drake <drak0nian_at_yahoo.com>
Date: 18 Jun 2003 15:04:25 -0700
Message-ID: <1ac7c7b3.0306181404.b539c54@posting.google.com>


eb_two_at_yahoo.com (Eric) wrote in message news:<3020b5bd.0306180529.3cd1d042_at_posting.google.com>...
> If you were building the best Oracle server you could, single box,
> must run Intel and Windows 2000, what hardware would you recommend?
> Hard drive space is not a consideration because we will be using a
> Network Appliance Filer. What brand of server, how many processors,
> how much ram, xeon, p4, hyperthreading, etc?
>
> 300 users, 100 gig database, horribly inefficient transaction logging.
> I need brute power to overcome programming shortcomings!
>
> Currently running a 4-way Dell 6450 with 4 gig of RAM, and it's getting
> its arse kicked.
>
> Sincere thanks for your considerate replies!
>
> Eric Brander

Eric,

Even though that model (the 6450) has more than one memory controller, it lacks the memory bandwidth of the 6650 (it's PC100 ECC SDRAM). So you're going to see more available CPU capacity (1.9 GHz vs. 900 MHz) and more memory bandwidth from a P IV Xeon MP 1.9 GHz (quad) with PC400 DDR RAM. You'll also see more bandwidth available on the multiple PCI buses, as they conform to the PCI-X spec. You can get this kind of info from Dell, though.

If you're bound on I/O for transaction logging, you're probably still going to be bound on I/O on the NetApp Filer. What was the prior storage configuration - how many drives, controllers and channels?

Did you have write-back caching enabled? How much memory was on your RAID controllers? Across how many drives and controllers were the redo log volumes allocated? What other files were on those drives?
(Don't mess with those drive heads - redo logging is a serial operation.) How many members per redo log group? Multiple members help to feed the archiver processes.
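
For reference, something like this against the standard v$log / v$logfile views (a rough sketch - check the column names on your release) will show the groups, member counts and where each member lives:

  SELECT l.group#, l.members, l.bytes/1024/1024 AS size_mb, l.status, f.member
  FROM   v$log l, v$logfile f
  WHERE  l.group# = f.group#
  ORDER  BY l.group#, f.member;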

I have a PE6450 (quad PIII 900 MHz, 2 MB cache) at a site with 2 external PV 220S cabinets, each with split backplanes, filled with 14 drives each. 4 drives are allocated to redo (only 2 members per redo log group; I would have liked more) as a pair of RAID 1 vols on separate controllers, and 4 drives are allocated to archived redo (RAID 10). It is not bound on transaction logging - this much I know from the wait events.
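
By "the wait events" I mean something along these lines (just a sketch against v$system_event; times on that release are in centiseconds) - if 'log file sync' and 'log file parallel write' aren't near the top, redo I/O isn't your problem:

  SELECT event, total_waits, time_waited, average_wait
  FROM   v$system_event
  WHERE  event IN ('log file sync', 'log file parallel write',
                   'log file switch completion')
  ORDER  BY time_waited DESC;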

Some gurus report that if your true bottleneck is I/O and you add faster CPUs (or faster internal bandwidth), you'll put even more demand on the rate-limiting component, which will then see even higher queue depths, resulting in even worse performance than before.

How many mount points will the NetApp Filer have over how many gigabit cards (or FCHBA)?

What is the average size of your I/Os (e.g. LGWR and ARCHn)? How many I/Os per second do you expect?
How many I/Os per second will the NetApp Filer support (with its cache flooded)?
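
If you don't have those numbers handy, a back-of-the-envelope figure for LGWR can be pulled from v$sysstat (sketch only - divide the write count by seconds of instance uptime for a per-second rate):

  SELECT (SELECT value FROM v$sysstat WHERE name = 'redo size')
         / (SELECT value FROM v$sysstat WHERE name = 'redo writes') AS avg_redo_write_bytes,
         (SELECT value FROM v$sysstat WHERE name = 'redo writes')   AS total_redo_writes
  FROM   dual;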

We are testing a PE6650 against a Dell|EMC CX200. It's awesome for large operations (RMAN backups), not so good for extremely random I/O. It's mounted over 2 x 2 Gbps FC HBAs. You might want to compare the NetApp Filer against that model.

I wish I had seen Noons' posting about hiking up the max I/O size from 256 KB to 1 MB before I built that box.

hth.

Pd
