Re: Oracle Exadata for DB consolidation

From: goran bogdanovic <goran00_at_gmail.com>
Date: Wed, 21 Nov 2012 19:54:45 +0100
Message-ID: <CAGyPXK4KUvmvMApO7Yjs5N7PjPbmrrsFfsyfSBoWaxuCWMjQUQ_at_mail.gmail.com>



Andy,
many thanks for the very informative answer!

Database patches are of no concern to me, as <commercial start> my team masters this topic the way Paganini mastered the violin <commercial stop> ;-)

My bigger concern is the storage layer patches. We can't afford ~2 hours of downtime, so the rolling approach is the only viable one. As I understood you, you advise this strategy only in the case of high redundancy diskgroups.
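
Before committing to the rolling approach, I will first double-check what
redundancy our diskgroups actually use. Here is a rough sketch of the check
(assuming Python 3.7+ on a DB node, run as the grid owner with the +ASM
environment set; names and output formatting are just illustrative):

#!/usr/bin/env python
# Rough sketch: list ASM diskgroups and their redundancy so I know whether
# rolling cell patches are a sane option for us. Assumes it runs as the
# grid owner on a DB node with the +ASM environment (ORACLE_HOME/SID) set.
import subprocess

SQL = ("set pagesize 0 feedback off\n"
       "select name || ':' || type from v$asm_diskgroup;\n"
       "exit\n")

out = subprocess.run(["sqlplus", "-S", "/ as sysasm"],
                     input=SQL, capture_output=True, text=True,
                     check=True).stdout

for line in out.split():
    if ":" not in line:
        continue
    name, redundancy = line.split(":", 1)
    verdict = "ok for rolling" if redundancy == "HIGH" else "think twice"
    print("%-20s %-8s %s" % (name, redundancy, verdict))

If anything there comes back NORMAL or EXTERN, I guess the rolling
discussion is over before it starts.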

Did you encounter any failures during patching (either on test or production systems)?

What do you define as a rollback solution in such cases?

Is a total outage of the Exadata in such a case inevitable?

Based on your customer sites, how do they deal with cell outages? What are the possible scenarios for cell 'recovery'?

Many thanks in advance.

cheers,
goran

On Wed, Nov 21, 2012 at 4:31 PM, Andy Colvin <acolvin_at_enkitec.com> wrote:

> When it comes to patching Exadata, there are 3 basic levels of patches -
> Infiniband switch patches, Exadata Storage Server patches (OS/firmware
> updates), and Quarterly Database patches (standard Oracle quarterly PSUs).
> It's recommended to patch quarterly, but many of our customers only patch
> twice a year. All of the patches can be done rolling. Oracle does offer
> the platinum support service which includes patching, but personally, I
> wouldn't want Oracle to have that kind of access to my systems. Here's a
> quick breakdown of the patches on Exadata:
>
> *Infiniband switch patches* - These patches are always applied rolling,
> and are quick and easy to apply. They're released very infrequently...the
> last one was released more than a year ago.
>
> *Storage server patches* - These patches apply new OS images to the
> storage, and include new firmware updates for various components (BIOS,
> RAID controller, ILOM, etc.). We recommend taking an outage to apply these
> patches (generally ~1.5 hours for the entire rack, depending on what
> version you're patching from), but they can be applied rolling. If you
> prefer rolling patches, remember that it will take longer (greater than 2
> hours per cell, so > 6 hours for 1/4 rack, > 14 hours for 1/2 rack, > 28
> hours for full rack) and it will reduce your redundancy by 1 cell during
> the entire patch window. For this reason, I recommend only applying
> rolling cell patches on Exadata racks with high redundancy diskgroups.
> Rolling versus non-rolling is a matter of what type of downtime is
> acceptable to the business, and whether you'd rather accept the devil you
> know (~1.5 hours of known downtime) or the devil you don't know (rolling
> patch time depends on workload, possibility of an unplanned outage if
> running normal redundancy and you lose a disk during the patch). For me,
> I'd rather take the known quantity than the unknown. That said, I've
> patched *many* Exadata racks and haven't had to deal with an unplanned
> outage from patching.
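>
> To give you an idea of what the safety net looks like in practice, here is
> a rough sketch (not a full procedure) of the kind of pre-check I script
> before a rolling patch takes the next cell offline. It assumes the
> standard dcli setup with a cell_group file and root ssh equivalence, and
> the point is simply that every grid disk should report
> asmDeactivationOutcome=Yes before its cell goes down:
>
> #!/usr/bin/env python
> # Sketch: verify every grid disk can be taken offline safely before the
> # next cell in a rolling patch goes down. Assumes dcli with a cell_group
> # file and root ssh equivalence (the standard Exadata setup).
> import subprocess, sys
>
> CMD = ["dcli", "-g", "cell_group", "-l", "root",
>        "cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome"]
>
> out = subprocess.run(CMD, capture_output=True, text=True, check=True).stdout
> not_ready = [l for l in out.splitlines()
>              if l.strip() and not l.rstrip().endswith("Yes")]
>
> if not_ready:
>     print("Not safe to take a cell offline yet:")
>     print("\n".join(not_ready))
>     sys.exit(1)
> print("All grid disks report asmDeactivationOutcome=Yes")
>
> As far as I remember, patchmgr runs a similar check itself in rolling
> mode, but it doesn't hurt to be able to see the state of every grid disk
> at any point during the window.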
>
> *Quarterly database patches* - These are your standard Oracle quarterly
> patches, with a few extras thrown in. They're released on the same cycle
> as the standard PSU/CPU patches, and are applied with OPatch. Because of
> this, they're applied rolling, affecting one node at a time. For
> consolidated environments with multiple homes, you can clone your existing
> home, patch it, then move your database instances to that newly patched
> home to cut down on your downtime. These patches are really pretty easy to
> apply if you're familiar with Oracle's standard patching methodology.
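>
> To illustrate the out-of-place approach, here is a rough sketch of the
> flow. Paths and home names are made up, and the exact opatch invocation
> comes from the bundle patch README, so treat it as an outline rather than
> a recipe:
>
> #!/usr/bin/env python
> # Rough sketch of out-of-place patching: copy the home, register and
> # patch the copy, then move databases over one at a time. Names and
> # paths below are illustrative only.
> import subprocess
>
> OLD_HOME = "/u01/app/oracle/product/11.2.0.3/dbhome_1"
> NEW_HOME = "/u01/app/oracle/product/11.2.0.3/dbhome_2"
> PATCH_DIR = "/u01/stage/quarterly_patch"   # unzipped bundle patch
>
> def run(cmd, cwd=None):
>     print("+ " + " ".join(cmd))
>     subprocess.run(cmd, check=True, cwd=cwd)
>
> run(["cp", "-rp", OLD_HOME, NEW_HOME])          # copy the existing home
> run(["perl", NEW_HOME + "/clone/bin/clone.pl",  # register the clone
>      "ORACLE_HOME=" + NEW_HOME, "ORACLE_HOME_NAME=dbhome_2",
>      "ORACLE_BASE=/u01/app/oracle"])
> run([NEW_HOME + "/OPatch/opatch", "apply", "-oh", NEW_HOME],
>     cwd=PATCH_DIR)                              # patch the new home
>
> # Then, per database, a short restart window on each node:
> run(["srvctl", "stop", "database", "-d", "MYDB"])
> run(["srvctl", "modify", "database", "-d", "MYDB", "-o", NEW_HOME])
> run(["srvctl", "start", "database", "-d", "MYDB"])
>
> On RAC you can of course bounce one instance at a time instead of the
> whole database, which is what keeps the switch rolling.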
>
> Overall, we have many customers running Exadata as a consolidation
> platform, running various mixed workloads. One customer has multiple
> PeopleSoft databases, Oracle BI, and data warehouses. Another has
> eBusiness Suite, an internal transactional system, and Oracle BI. Just a
> few examples.
>
> Andy Colvin
>
> Principal Consultant
> Enkitec
> andy.colvin_at_enkitec.com
> http://blog.oracle-ninja.com
>
>
>
> On Nov 21, 2012, at 4:10 AM, goran bogdanovic wrote:
>
> Hi list,
> I have a couple of questions related to Exadata, but first a little bit of
> background story:
>
> I am considering different solutions for database consolidation as well as
> increasing high availability of single systems in one step.
> At present our production systems (OLTP & DWH) are running more or less as
> isolated 'islands'.
> As the data center gets bigger and bigger, so do our DC and
> operational bills.
> Many systems don't really need dedicated HW to run on, so two or more of
> them can be consolidated to run on e.g. one 2-node HA cluster with
> sufficient CPU and IO resources to satisfy the total needs of all databases
> running on such a highly available 'consolidated platform'.
> Oracle VM is not an option.
> Let's put the licensing topic aside for now.
>
> So, now back to the original topic ;-)
>
> One of the options I am considering is Oracle Exadata.
> I had a first (and short) workshop with people from Oracle, which was more
> of a, let's say, high-level presentation of Exadata.
> The figures presenting the 'power' of Exadata are impressive.
> That's one side of the picture.
>
> Topics which are still open for me are:
>
> 1. Patching/upgrade of Exadata Storage Servers, firmware, InfiniBand switches,
> as well as other HW components
> - how often does this need to be done?
> - is patching/upgrading in a rolling fashion possible or not?
>
>
> 2. Real-life operational experience with Exadata
> - I would be very grateful for any insights and experiences from list
> members with Exadata experience, i.e. bugs, problems & issues they have faced
>
> 3. Oracle sales presented Turkcell as a company that used Exadata as a
> consolidation platform ... if anyone on the list is working for them and
> would be kind enough to share their experience, I would be very grateful.
>
> Many thanks in advance to anyone kind enough to tackle the above topics.
>
> cheers,
> goran
>
>
> --
> http://www.freelists.org/webpage/oracle-l

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Nov 21 2012 - 19:54:45 CET
