Oracle FAQ Your Portal to the Oracle Knowledge Grid


Re: SuperDome and VPARS

From: Matthew Zito <>
Date: Wed, 12 Jan 2005 14:07:25 -0500
Message-Id: <>

Sorry I didn't respond sooner; I just haven't spent any real hands-on time with the vPar technology on Superdomes. In general, though, it's a trade-off:

- Partitioning gives you stronger separation between your Oracle instances, which helps protect you against "bad neighbor" syndrome. That matters more when your databases are managed by different teams, or have simultaneous utilization peaks.

- Partitioning, though, can add overhead.

- On the other hand, when you create partitions on Superdome servers where partitions line up with processor cells (i.e., no OS instance spans cells), you get a big performance jump, because the NUMA machinery never needs to kick in.

So, I'd say leave them in one partition, unless you want to enforce strict separation at the OS level and the like. Why add complexity?
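For reference, the two-vPAR split being debated would look roughly like this with the vPars commands. This is only a sketch: the partition names, memory sizes, and I/O hardware paths below are placeholders I made up for illustration, not the actual configuration under discussion.

```shell
# Hypothetical split: two vPARs, each with 2 CPUs now, able to grow to
# 6 (2 bound + up to 4 floating), static memory, half the SAN cards each.
# All names and hardware paths are illustrative placeholders.
vparcreate -p pfprd -a cpu::2 -a cpu:::2:6 -a mem::8192 -a io:0/0/1/0
vparcreate -p pfrpt -a cpu::2 -a cpu:::2:6 -a mem::8192 -a io:0/0/2/0

# The single-vPAR alternative being argued for: all 8 CPUs, all 16 GB,
# and all the SAN cards in one partition shared by both instances.
vparcreate -p pfall -a cpu::8 -a mem::16384 -a io:0/0/1/0 -a io:0/0/2/0

# Check the resulting layout before booting the instances.
vparstatus
```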

In general (if I may generalize again), fewer and fewer people are using Superdomes. With the move to the Itanium 2 processors, they got significantly more expensive, and HP-UX is rapidly falling behind the other UNIXes in both third-party support and OS features. I think HP had counted on a lot of the Tru64 crowd moving over to HP-UX - instead, they're all moving to Linux.

Like many divisions of HP, their UNIX division is a mess. Too bad, too.


Matthew Zito
GridApp Systems
Cell: 646-220-3551
Phone: 212-358-8211 x 359

On Jan 12, 2005, at 10:54 AM, Kline.Michael wrote:

> Got no hits on this. I take it there aren't too many using SuperDome, or
> never played much with splitting out the vPARs and comparing results.
> Michael Kline
>> -----Original Message-----
>> We've got two data warehouses that we are going to migrate. Here were
>> some of the thoughts. I was wondering if anyone had the luxury of
>> testing BOTH ways and what they found out. NORMALLY the PRD database
>> builds a bunch of data which is then transferred to RPT for
>> reporting,
>> so one is active and then the other, but both COULD be active at the
>> same time. These are almost 2TB each, so not that big. The application
>> NORMALLY shows signs of being I/O bound more than anything else due to
>> the size of the data.
>> Anyone "Been there, done that???"
>> I'm sort of inclined to keep the current way, one VPAR.
>> "I am sorry I have not been paying as much attention to the
>> configuration
>> as I should, but I want to make a suggestion. The Superdome resources
>> for XXXXX appear to be 8 CPUs, 16 GB memory and 8 SAN cards. As I
>> understand, the current plan is to create two VPARS that have two
>> permanently assigned CPUs and the ability to acquire up to 4
>> 'floating'
>> CPUs. Each would have a static amount of memory and each would have 4
>> SAN cards. I would suggest we put all of these resources into one
>> vPAR.
>> That way the two instances, pfrpt, pfprd would have full access to ALL
>> resources. This could improve I/O throughput, as well as provide more
>> combined memory and CPU power than two separate partitions. I don't
>> know about the other software systems (PeopleTools, Informatica), but
>> this seems like a better configuration than two vPARs that will have
>> unused compute resources at different times. What do you think?"
>> Michael Kline
>> Database Administration
>> SunTrust Technology Center
>> 1030 Wilmer Avenue
>> Richmond, Virginia 23227
>> Outside 804.261.9446
>> STNet 643.9446
>> Cell 804.744.1545
> --
Received on Wed Jan 12 2005 - 13:07:11 CST
