tuning : file number translation table

From: <carol.legros_at_accenture.com>
Date: Fri, 18 Jul 2003 10:11:12 -0400
Message-Id: <25956.338449@fatcity.com>


I'm hoping someone out there has experienced this problem... I can't seem to find many posts
on MetaLink that discuss this.

My environment:
-------------------------
I am running a 500 GB OLTP database on Solaris. I have some RAID 0+1 disk
available, but the datafiles are mostly on RAID 5 (array).

This is not in production yet, but we're doing some load testing. So far I've had the
typical "undo header", "undo block", and "segment header" contention, and so on. I've
reduced these issues significantly, but now I think I may have a problem with "hot spots"
and I/O.
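
For reference, the standard v$waitstat view breaks buffer busy waits down by block class
("undo header", "undo block", "segment header", etc.); a rough sketch of that kind of check
(not necessarily how I measured it):

  -- buffer busy wait counts and time by block class, worst first
  SELECT class, count, time
    FROM v$waitstat
   WHERE count > 0
   ORDER BY count DESC;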

The one latch that comes up with a high miss percentage is "file number translation table".
It's at 15%. All other latch miss percentages are essentially zero. It seems like access to
the datafiles is being pounded.
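
For anyone wanting to check the same numbers, the miss percentage can be computed from the
standard v$latch view; a sketch (not necessarily how I pulled it) of where the 15% shows up:

  -- miss percentage per parent latch, worst first
  SELECT name, gets, misses,
         ROUND(100 * misses / gets, 2) AS miss_pct
    FROM v$latch
   WHERE gets > 0
   ORDER BY miss_pct DESC;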

Has anyone had contention with this latch before?

Another thing that makes me think this is possibly I/O related is that the tablespaces and
datafiles show an uneven amount of activity across the board, possibly because this app
naturally does tons of INSERTs and few UPDATEs.
Maybe I need to use partitioning to even out the activity.
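
For anyone who wants to see the same spread, the standard v$filestat and v$datafile views
give per-file counters (cumulative since instance startup); a sketch of that kind of check:

  -- physical reads and writes per datafile, busiest first
  SELECT d.name, f.phyrds, f.phywrts
    FROM v$filestat f, v$datafile d
   WHERE d.file# = f.file#
   ORDER BY f.phyrds + f.phywrts DESC;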

The top wait stats are related to dispatchers and MTS. I have a lot of dispatchers and shared
servers (all of them busy), but I suspect these wait stats are high because file access may now
be the issue. Should I consider fewer dispatchers and shared servers? That might relieve the
"file number translation table" situation, but then I'm back to where I started (a lower number
of concurrent sessions with a reasonable response time).
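
For the MTS side, a sketch of the kind of queries that show dispatcher/shared server busy
rates and queue waits (not my exact scripts; as far as I know the busy/idle/wait counters
are in hundredths of a second):

  -- how busy each dispatcher is
  SELECT name, status,
         DECODE(busy + idle, 0, 0,
                ROUND(100 * busy / (busy + idle), 2)) AS busy_pct
    FROM v$dispatcher;

  -- overall shared server busy rate
  SELECT DECODE(SUM(busy) + SUM(idle), 0, 0,
                ROUND(100 * SUM(busy) / (SUM(busy) + SUM(idle)), 2)) AS busy_pct
    FROM v$shared_server;

  -- average wait per queued item in the common and dispatcher queues
  SELECT type, totalq, wait,
         DECODE(totalq, 0, 0, ROUND(wait / totalq, 2)) AS avg_wait
    FROM v$queue;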

Any advice or comments would be appreciated. Thanks in advance,
Carol

Received on Fri Jul 18 2003 - 09:11:12 CDT
