Re: Effect of listener on existing connections?

From: <groups.broberg_at_gmail.com>
Date: Mon, 16 Jun 2008 14:48:55 -0700 (PDT)
Message-ID: <6c3d1506-4a7c-4a6c-861e-c435aea4edcc@l42g2000hsc.googlegroups.com>


On Jun 13, 10:32 pm, AGT <usenetpersonger..._at_gmail.com> wrote:
> <groups.brob..._at_gmail.com> wrote in message
>
> news:a8f8b4b1-adbb-4916-9b23-8d72f6d221ba_at_f36g2000hsa.googlegroups.com...
> Writes:
>
> >> Oracle support is not giving us satisfactory results. Perhaps you can
> >> give some answers?
>
> >> We've recently upgraded our system to Oracle 10.2.0.3.0, running on
> >> Solaris (sparc) 10 inside a ZFS zone (our previous system was Oracle
> >> 9.2.0.4.0 running on sparc Solaris 8, and was running on that for the
> >> last 5 years). Since the upgrade 6 weeks ago, we've had two instances
> >> where our applications (running in the same O/S environment on a
> >> different node on the cluster) have locked up - existing connections
> >> to Oracle become unresponsive when executing SQL (with no error
> >> message - they just block), and attempts to create new connections are
> >> met with the error:
> >> "ORA-03135: connection lost contact".
>
> How hard would it be to eliminate the zoning?
> I don't think this is related, nor ZFS, but if you could test w/o
> these changes then you'd know for sure.
>
> I don't know why you would do this in the first place. Zones are
> appropriate for some things and you generally get better
> throughput from ZFS over UFS, but zones just stir up the pot for me.
>
> Dedicate the box(es) to Oracle only - don't even make a special
> project for it - just use the defaults. Keep things as simple as possible.
>
> Maybe you have reasons for all this fancy overhead but so far I see none : >

The zones are here to stay. We sell a turnkey solution that runs on self-contained hardware, so our apps plus the database all live on one box (really a two-node cluster, with one node acting as a failover). The zones make it much easier to administer & monitor all the components of the system (db + apps) with a unified mechanism.

Additionally, we perform our hot backups using zones & snapshots, which is much faster and less intrusive than the rman-based backups we used to run - our backup window is now a second or two (while the snapshot is taken), and the snapshot of the db zone can then be backed up at any point in the subsequent 24 hours. This works much better for us, since different clients have different backup strategies. It's simpler to point them at a net-mountable volume that contains all the files they need to archive with whatever backup system they're using for their enterprise.
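In case it helps to picture it, the flow is roughly the sketch below (the dataset and snapshot names are made up, and our real scripts do more - log switches, control file backup, error checking). The database only spends the second or two of the zfs snapshot in hot backup mode:

    #!/bin/sh
    # Put the database into hot backup mode (10g allows this
    # database-wide rather than per tablespace).
    sqlplus -s "/ as sysdba" <<EOF
    alter database begin backup;
    exit
    EOF

    # Snapshot the ZFS dataset backing the db zone.
    # "pool/zones/dbzone" is just a placeholder name.
    zfs snapshot pool/zones/dbzone@nightly

    # Take the database back out of backup mode.
    sqlplus -s "/ as sysdba" <<EOF
    alter database end backup;
    exit
    EOF

    # The snapshot can later be cloned (zfs clone) or streamed
    # (zfs send) onto the net-mountable volume, and the clients
    # archive it whenever their own backup window comes around.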

In any case, we haven't figured out how to replicate the scenario yet - it only occurs in the production environment, and never showed up during our testing. We typically tested a month's worth of operation at an accelerated rate (anywhere from 4x to 40x normal speed, so tests finished in 2-7 days). We could start running some tests at 1x speed, but given the intermittent rate of failure, it would be hard to draw any conclusions from a non-zone-based system that didn't fail after running for a month or two.

We may as well kick off a several-month simulation, though, in case we start to see this problem occur with regularity.