Re: RAC or Large SMP...?

From: Tim X <>
Date: Fri, 10 Oct 2008 17:38:52 +1100
Message-ID: <>

<> writes:

>> I'm also not convinced that the "fewer servers are easier to administer"
>> argument is as valid these days. This was certainly true in the past,
>> but modern package management has become quite sophisticated.
>> Managing larger numbers of servers dedicated to the same role isn't that
>> much of an overhead anymore. At least we haven't seen a substantial
>> increase in administration since moving to RAC. In fact, the added
>> fault tolerance has reduced impact and stress on staff when hardware
>> failures occur.
>> Tim
> It's exactly this area of RAC (i.e. administration) that concerns me.
> In your experience, does the following scenario sound familiar:
> "Ah yes, troubleshooting. I've seen many clusters that just froze for no
> apparent reason in my time. It's always possible to make the OS or cluster
> software dump a trace/log file when it happens.
> The resulting trace/log file from the cluster will normally be the size of
> Texas, and only one or two people in the entire vendor organisation can
> truly understand them, you will be told.
> Then the files (often with sizes measured in GB) are shipped to the vendor,
> and some months later they will report back that it wasn't possible to
> pinpoint the exact reason for the complete cluster freeze or crash, but
> that this parameter was probably a bit low and this parameter was probably
> a bit high.
> That's what always happens. I have never (really: never) seen a vendor who
> could correctly diagnose and explain a hanging cluster or a cluster that
> kept crashing.
> As to Oracle troubleshooting I'm not so worried. Oracle will either have a
> performance problem, which is easy to diagnose using the Wait Interface, or
> you'll get ORA-600 errors that are fairly easy to diagnose, although you'll
> need to spend the required 42 hours logging and maintaining an iTAR or SR
> or whatever the name is these days.
> In other words: finding out what's wrong (if anything) in Oracle is much
> easier than finding out what's wrong with a cluster."
> This quote was pulled from
> Has the Oracle clusterware and RAC become mature enough so that the
> above is no longer a common problem? The company I now work for
> deployed RAC 9i and went through 6 months of hell exactly like the
> scenario above, so they have been burned in the past.
> There is also the argument that RAC systems will require more
> scheduled downtime than single instance systems because there are more
> Oracle homes to patch (CRS, multiple database homes, ASM homes etc).
> Personally, I'd love to implement the RAC solution as I think that it
> is an excellent technology, but somehow I think that I may regret it in
> the long run...

We have not experienced the scenario outlined above with the RAC environment. We have experienced problems on another cluster, but that was essentially because we adopted a clustered configuration using Linux quite early and we tried to do it 'on the cheap'. Part of the problem was due to not having enough nodes in the cluster, the immaturity (at the time) of the filesystem used, and a lack of experience/training. It was a valuable learning experience though. If we were going to deploy another general-purpose cluster, we would probably adopt more advanced network switching and load balancing technology, especially for another Linux-based cluster. This is mainly because some of the support for general-purpose clustering under Linux is perhaps a bit immature compared to other commercial solutions. However, I'm not sure this is as critical with a RAC configuration, because a lot of the 'nasties' are handled to a large extent by RAC.

The only problems we have had running under a RAC configuration have been fairly minor and have mainly involved applications with poor design, or applications simply requiring different approaches to tuning. We probably would have run into similar problems if we had adopted systems with many cores.

To some extent, I suspect the level of maintenance also depends on your configuration and the extent to which you can take advantage of things like networked storage and networked file systems. For example, using a networked filesystem such as NFS or GFS can reduce the impact of configuration and patching. However, it's a two-edged sword: used incorrectly, it can cause lockups or reduced performance (especially true with some NFS implementations), or create a single point of failure that undermines all the fault tolerance benefits. A frequent cause of 'lock up' problems in clustered systems is inappropriate application of networked file systems.

There are a number of non-obvious issues with any cluster, and if there is nobody in-house with experience, I'd certainly recommend getting the proposed architecture checked by someone who has it, or even bringing in the right people to advise on what the best architecture would be. It is very important to have a clear idea of the outcomes you are after and their priorities, for example:

- Is performance a higher priority than fault tolerance?
- What are the storage requirements, and how are they expected to change over time?
- Does the environment have high numbers of clients, and how do they access the system? Or does it have few clients but high data-processing overheads?
- What is the split between development and production?
- Is the environment one with fairly constant processing levels, or does it spend most of its time doing little but at regular intervals hit peak processing demands that are time critical?

The configuration, quality and speed of your network is also relevant. Likewise, your predicted future demands: while these are difficult to judge accurately, having some idea of demand growth and how it will occur (gradual, sudden jumps, etc.) may matter a great deal.

You mention your company went through 6 months of hell with RAC and 9i. What happened after that 6 months? Did things settle down, or was the plug pulled? Do you know whether the problems were due to lack of skills/experience, lack of resources, or simply adopting the wrong platform? Not all clustering solutions are equal, and there are a lot of differences between vendors. For example, 4 years ago Linux clustering was very much frontier 'wild west' stuff, especially compared to solutions from the big vendors. However, things have improved a lot over the last 4 years, and there is also more knowledge and support available.

Of course, there are no hard rules here. Environments, requirements and available skill levels vary widely and must be taken into account. For example, if you have few system admins and few skilled DBAs, then maybe RAC isn't a good choice. Even the physical environment can be an important consideration: many new servers generate considerable heat, and cooling in server rooms can be a real issue. Likewise, stand-by generators and UPS equipment may be an important consideration.

On the other hand, the ability of various parts of your technology stack to take advantage of multiple cores can be quite limited, which can significantly decrease the efficiency or extent to which the cores are used. I've seen situations where performance on a multi-core system was degraded to not much better than a single-core system because an important component in the stack could only use a single core. It was effectively the bottleneck for the whole system (this was not an Oracle system). The performance was slow, but when you analysed the situation, most of the cores were essentially idle because everything was waiting on this single component. For this reason, I think you're quite right that the only way to determine how well things will work on an SMP configuration is to actually experiment and collect stats. Just don't assume that if you get a performance of x with 12 cores you will get 2x with 24 cores; in some situations you will get only a bit better than x with double the cores. If you do decide to do some experiments, try to make sure you get some stats on core utilisation. This will help in determining to what extent additional cores may improve performance.
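The core-scaling caveat above is essentially Amdahl's law: the serial fraction of the workload caps the speedup no matter how many cores you add. A quick back-of-the-envelope sketch (the 90% parallel fraction is a hypothetical number for illustration, not a measurement from any real system):

```python
# Amdahl's law: speedup on n cores when only a fraction p of the
# workload can run in parallel; the remaining (1 - p) stays serial.

def amdahl_speedup(p, n):
    """Predicted speedup on n cores with parallelisable fraction p."""
    return 1.0 / ((1.0 - p) + p / n)

# Hypothetical workload where 90% of the work parallelises:
for cores in (12, 24):
    print(cores, "cores ->", round(amdahl_speedup(0.9, cores), 2), "x")
# Doubling 12 -> 24 cores gives roughly 1.3x more, not 2x.
```

With a 10% serial component, 24 cores deliver about 7.3x rather than 24x, which matches the "only a bit better than x with double the cores" experience described above.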

Of course, the same holds with RAC: doubling the number of nodes doesn't double your performance, and to some extent how the application is implemented can cause bottlenecks (consider Daniel's point on the performance of AQ and the impact of having a single queue on one node compared to having a queue on each node).

For me, the difference is that in our environment we have pretty good control over how the applications are configured and, in many cases, developed. For example, if we found an application heavily based on AQ was performing poorly, we could re-configure and deploy with a queue on each node. On the other hand, if we were on an SMP system and we determined that the problem was a critical component written in such a way that it could only use a single core, there would be little we could do unless it was a component we developed, either because we don't have the sources or because we don't have the skills/resources to change it.
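The queue-on-each-node idea can be sketched in miniature. This is a toy illustration of the distribution pattern only (the node names are made up, and Python's `queue.Queue` stands in for Oracle AQ): work is spread round-robin across one queue per node, so no single node funnels every message.

```python
# Toy sketch: one queue per cluster node instead of a single shared
# queue on one node. Each node would then dequeue its own work locally.
from queue import Queue
from itertools import cycle

nodes = ["rac1", "rac2", "rac3"]            # hypothetical node names
queues = {node: Queue() for node in nodes}

# Round-robin enqueue of 9 work items across the per-node queues.
for item, node in zip(range(9), cycle(nodes)):
    queues[node].put(item)

for node in nodes:
    print(node, queues[node].qsize())       # 3 items on each node
```

The real re-configuration is of course done in AQ itself; the point of the sketch is only that spreading the queue removes the single node as the choke point for all enqueue/dequeue traffic.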

There is a growing argument that one of the main limitations of multi-core systems isn't at the hardware level but at the software level; in particular, the lack of good support in many languages for writing software that can exploit parallel processing efficiently. The few languages that provide such support often rely heavily on the programmer's ability to understand the complexities and to adopt algorithms suitable for such processing. Such programmers are rare, particularly in the current era of outsourcing and commodity development, where the emphasis tends to be towards API-driven code monkeys with little depth of knowledge regarding algorithms and data structures generally, and probably little, if any, knowledge of the issues associated with parallel processing. There are some very interesting approaches being developed, and I'm confident that in time many of these considerations will be handled by clever compilers and sophisticated language support and APIs that are multi-core aware, but that's probably a way off yet.
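Some of that "multi-core aware API" support has since appeared in mainstream standard libraries. As a small illustration of the idea (the workload function here is invented for the example), a process pool lets ordinary code fan work out across cores without the programmer managing processes or locks directly:

```python
# Minimal sketch of a higher-level parallel API: the pool handles
# process creation, work distribution and result collection.
from concurrent.futures import ProcessPoolExecutor

def cpu_bound(n):
    """A stand-in CPU-heavy task: sum of squares below n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # map() runs the tasks in parallel worker processes.
        results = list(pool.map(cpu_bound, [10_000] * 4))
    print(results)
```

This doesn't remove the need to choose parallelisable algorithms, but it does hide the plumbing that used to require specialist knowledge.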

I also think maturity and experience of approach is quite relevant. Oracle has probably got a lot more knowledge and experience with the RAC approach than with the SMP approach. I would not be surprised to find components within the Oracle stack that don't perform as efficiently on an SMP system as they do on a RAC system, partially because systems with high numbers of cores are still relatively new (at one level, they may not seem that different from systems with high numbers of CPUs, but I do think there are unique differences that need to be considered) and because I think the RAC-based model is more straightforward and less complex to develop for.

We only started migrating to RAC 12 months ago. So far, the result has been very positive. Of course, that dreaded cluster lock-up may be just around the corner, but so far we have had much better uptime stats, improved performance and an all-round better experience. Our annual support and licence costs are so much lower that we are probably going to get two additional staff, because there hasn't been a sys admin maintenance blow-out (there is some debate regarding whether the additional staff will be DBAs, sys admins or developers!). We are lucky in that we do have a couple of excellent DBAs and sys admins. To be honest, part of the high maintenance costs was due to the age of some of our hardware, which was old because replacement costs were extremely high. Moving away from expensive 'boom boxes' was the right decision for us.


tcross (at) rapttech dot com dot au
Received on Fri Oct 10 2008 - 01:38:52 CDT
