Oracle FAQ Your Portal to the Oracle Knowledge Grid
 


Re: Multiple listeners improve application.

From: Jim Kennedy <kennedy-downwithspammersfamily_at_attbi.net>
Date: Tue, 19 Jul 2005 07:30:03 -0700
Message-ID: <pKydnezLM7j5k0DfRVn-iw@comcast.com>

"Ashutosh" <v_ashutosh_at_hotmail.com> wrote in message news:1121760535.235853.231710_at_g43g2000cwa.googlegroups.com...
> Hi,
>
> This post describes a recurring listener-related problem we have been
> encountering. Our application, a GUI, communicates with the Oracle
> database using Net8.
>
> The Problem :
>
> One day, application response time degrades sharply. Screens that
> normally take around 5-10 seconds start taking around 90 seconds.
>
> Errors observed:
>
> Users frequently observe ORA-12535: TNS:operation timed out. In the
> listener log we see this error: Solaris Error: 130: Software caused
> connection abort.
>
> Observation:
>
> tnsping on the server itself to the instance takes somewhere between
> 800 and 2,000 ms, and from a client to the database on the same LAN
> somewhere between 800 and 10,000 ms. The tnslsnr process starts
> appearing among the top processes (using the top command), with CPU
> usage varying between 0.3% and 1.4%. There are around 300 OS
> processes, of which about 200 are Oracle connections.
>
> Workaround:
>
> The listener was on port 1525. We started another listener for the
> same database on port 1521 and moved 10 of the 200 Oracle users onto
> it. The users on port 1521 enjoyed very good performance, with tnsping
> response times between 0 and 20 ms, while the users on port 1525 still
> suffered extremely high tnsping times. However, the moment we moved
> around 50 users from 1525 onto 1521, performance for users on that
> port deteriorated as well.
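A minimal sketch of the two-listener setup described above. The ports match the post; the host name (taken from the netstat output below), SID name, and ORACLE_HOME path are illustrative placeholders, so substitute your own values:

```
# listener.ora -- two listeners serving the same database (sketch)
LISTENER1525 =
  (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = hydom)(PORT = 1525)))

LISTENER1521 =
  (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = hydom)(PORT = 1521)))

SID_LIST_LISTENER1525 =
  (SID_LIST = (SID_DESC = (SID_NAME = PROD)
               (ORACLE_HOME = /u01/app/oracle/product/8.1.7)))

SID_LIST_LISTENER1521 =
  (SID_LIST = (SID_DESC = (SID_NAME = PROD)
               (ORACLE_HOME = /u01/app/oracle/product/8.1.7)))
```

Each listener is then started with `lsnrctl start LISTENER1521` (and likewise for 1525), and clients are split between the two by pointing their tnsnames.ora entries at one port or the other.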
>
> System details:
>
> Oracle: 8.1.7.4 (64-bit)
> optimizer_mode: CHOOSE
> OS: SunOS 5.8
> Cluster: yes (two nodes; in case of a failure, a database fails over
> to the other node)
> 6 CPUs, 8 GB RAM
>
> Tests Done:
>
> Around 4 GB of real memory is available (observed using the top
> command).
>
> SAR: output shows a good amount of %idle; %wio is between 0 and 2.
>
> VMSTAT: a typical output taken at 5-second intervals shows adequate
> free swap space; pi and po are zero, but mf (minor faults) is high.
>
> procs       memory          page                       disk         faults          cpu
>  r b w   swap     free   re  mf    pi po fr de sr  s0 s6 s1 s1   in    sy   cs   us sy id
>  0 2 0 13903528 4363784 705 10895   0  0  0  0  0   6  0  6  0  3971  7435 5069  26 32 42
>  0 2 0 13908872 4366672 651 10179   0  0  0  0  0   5  0  5  0  4083  7369 5084  30 31 39
>  0 1 0 13917176 4370776 717 11005   0  0  0  0  0   5  0  5  0  3884  7596 4875  26 32 41
>  1 0 0 13913680 4369872 649 10137   0  0  0  0  0   3  1  3  0  4319 26374 5215  31 58 10
>  1 0 0 13916880 4370376 461  7099   0  0  0  0  0   3  0  3  0  3703 28709 4749  32 54 15
>  0 1 0 13913056 4367488 647 10282   0  0  0  0  0   7  0  7  0  4879 26935 6119  27 56 17
>  0 1 0 13914896 4368256 576  8756   0  0  0  0  0   1  0  1  0  3976 25271 4700  28 52 20
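As a side note, the mf column in the sample above can be averaged with a small awk pipeline. This is just a sketch over the pasted rows; the field position follows the standard Solaris vmstat layout, with mf as the 7th field:

```shell
#!/bin/sh
# Average the mf (minor faults, 7th field) column of the vmstat rows above.
vmstat_rows='0 2 0 13903528 4363784 705 10895 0 0 0 0 0 6 0 6 0 3971 7435 5069 26 32 42
0 2 0 13908872 4366672 651 10179 0 0 0 0 0 5 0 5 0 4083 7369 5084 30 31 39
0 1 0 13917176 4370776 717 11005 0 0 0 0 0 5 0 5 0 3884 7596 4875 26 32 41
1 0 0 13913680 4369872 649 10137 0 0 0 0 0 3 1 3 0 4319 26374 5215 31 58 10
1 0 0 13916880 4370376 461 7099 0 0 0 0 0 3 0 3 0 3703 28709 4749 32 54 15
0 1 0 13913056 4367488 647 10282 0 0 0 0 0 7 0 7 0 4879 26935 6119 27 56 17
0 1 0 13914896 4368256 576 8756 0 0 0 0 0 1 0 1 0 3976 25271 4700 28 52 20'

echo "$vmstat_rows" | awk '{ sum += $7; n++ } END { printf "avg mf: %.0f/s\n", sum / n }'
```

The average works out to nearly 10,000 minor faults per second, which is the "high mf" the poster mentions; with pi/po at zero this is not paging pressure, but it does suggest heavy process/memory churn.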
>
> A typical netstat -i taken at 5-second intervals looks like the one
> below, with no errors:
>
> Name  Mtu   Net/Dest      Address       Ipkts     Ierrs  Opkts     Oerrs  Collis  Queue
> lo0   8232  loopback      localhost      1738063  0       1738063  0      0       0
> hme0  1500  hydom         hydom         14100825  0      20899422  0      0       0
> qfe2  1500  192.10.40.0   192.10.40.2     434324  0        987515  0      0       0
> qfe3  1500  192.20.20.0   192.20.20.3       6549  0          6118  0      0       0
> qfe1  1500  172.16.1.0    172.16.1.1    19065430  0      19362906  0      0       0
> hme1  1500  172.16.0.128  172.16.0.129  19032744  0      19337598  0      0       0
>
> Excerpt of a typical STATSPACK report taken over peak load, with a
> 100-minute snapshot interval:
>
> Instance Efficiency Percentages (Target 100%)
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>             Buffer Nowait %: 100.00       Redo NoWait %: 100.00
>                Buffer Hit %:  97.89    In-memory Sort %:  99.86
>               Library Hit %:  99.36        Soft Parse %:  99.08
>          Execute to Parse %:  13.74         Latch Hit %:  99.99
> Parse CPU to Parse Elapsd %:  91.43     % Non-Parse CPU:  94.65
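For context, statspack computes that low Execute to Parse figure as 100 * (1 - parses / executions), so 13.74% means the application issues nearly one parse call per execute — consistent with a client that re-parses (or reconnects) for almost every statement. A quick check of the arithmetic, with hypothetical counts chosen only to reproduce the reported ratio:

```shell
#!/bin/sh
# Execute to Parse % = 100 * (1 - parse_count / execute_count).
# The counts below are made up; they merely reproduce the 13.74% above.
parses=8626
executes=10000
awk -v p="$parses" -v e="$executes" \
    'BEGIN { printf "Execute to Parse %%: %.2f\n", 100 * (1 - p / e) }'
```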
>
> Top 5 Wait Events
> ~~~~~~~~~~~~~~~~~
>                                                          Wait  % Total
> Event                                    Waits      Time (cs)  Wt Time
> ---------------------------------------- ---------- ---------- -------
> db file sequential read                      30,512     23,887    40.96
> SQL*Net more data to client                  37,725     16,698    28.63
> file open                                     8,391      7,073    12.13
> db file scattered read                       30,097      4,459     7.65
> log file sync                                13,241      2,231     3.83
> -------------------------------------------------------------
>
>
> tkprof of the GUI process does not show any:
>
> a) full table scans of big tables;
> b) high CPU utilization;
> c) exceptionally high parse, execute, or fetch times for the total
> number of rows.
>
>
> Detailed logs can be provided if required.
>
>
> Looking forward to your comments.
>
>
> Thanks and Regards
>
> Ashutosh Verma
>

I don't think it is the listener. The listener is only involved in connecting the client to the server: it starts a dedicated server process (assuming you are using dedicated connections), hands that process the connection to Oracle, and that's it. You can see this by connecting to Oracle and then stopping the listener: already-connected sessions are fine, but new sessions won't be able to connect. The exception is if you are connecting and disconnecting with almost every SQL statement; in that case, you need to fix the application.
Jim Received on Tue Jul 19 2005 - 09:30:03 CDT

