RE: Practices for DB Scalability

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Fri, 1 Nov 2013 15:36:13 -0400
Message-ID: <002801ced739$a0c4c1c0$e24e4540$_at_rsiz.com>



Okay, just two more for now, because I'd be mad at myself if I left them out:

Minimize the number of connections you have to your database, especially by using queue-capable service daemons for routine transactions instead of dynamic connections. See the "Real World" video on the part about the results of minimizing connections: MOST often you will be way better off keeping the total connection count low. Transaction elapsed time drops so dramatically that any queuing to get a turn at bat, plus the lower elapsed time, will usually total much less than the former elapsed time of a transaction that started right away.
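
To make the queue-capable-daemon idea concrete, here is a minimal Advanced Queuing sketch; all the names (payload type, queue table, queue) are hypothetical, not a prescription. A small, fixed set of daemon sessions dequeues and executes these requests, so client counts can grow without growing the connection count:

    -- One-time setup: a payload type and a queue for routine transactions.
    CREATE TYPE txn_request_t AS OBJECT (
      customer_id NUMBER,
      order_id    NUMBER,
      action      VARCHAR2(30)
    );
    /
    BEGIN
      DBMS_AQADM.CREATE_QUEUE_TABLE(
        queue_table        => 'txn_request_qt',
        queue_payload_type => 'txn_request_t');
      DBMS_AQADM.CREATE_QUEUE(
        queue_name  => 'txn_request_q',
        queue_table => 'txn_request_qt');
      DBMS_AQADM.START_QUEUE(queue_name => 'txn_request_q');
    END;
    /

    -- A caller enqueues a request instead of holding its own transaction session:
    DECLARE
      l_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
      l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
      l_msgid RAW(16);
    BEGIN
      DBMS_AQ.ENQUEUE(
        queue_name         => 'txn_request_q',
        enqueue_options    => l_opts,
        message_properties => l_props,
        payload            => txn_request_t(42, 1001, 'SHIP'),
        msgid              => l_msgid);
      COMMIT;
    END;
    /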

And a drumroll for everyone who constructs transactions dynamically on a client and submits them, SQL text and all, across the network, one insert, update, delete, and report query at a time:

Don't strand uncommitted, partially complete, multi-step logical units of work across the network between client and DB server for repeated turn-around calls. Bundle them up into packaged PL/SQL routines and just send the data for the parameters, handling rollback, commit, and status reporting back as a result, either through a queuing service daemon or as a call to a stored package. This removes both the iterative line turn-around time and the data transmission from the window during which the transaction is active, AND it prevents a transaction hanging in flight, holding its locks for however long lost-connection detection takes, when you lose a connection. Yes, this does require that you plan an action or exception for the outcome of each step. (See TK and others on why this should always be the case anyway.) And you'll need to know where to look for your results.
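
A minimal sketch of that bundling; the tables and status convention here are hypothetical, not a prescription. The client makes one call, ships only parameter data, and the package owns commit, rollback, and status reporting:

    CREATE OR REPLACE PACKAGE order_txn AS
      PROCEDURE place_order(p_customer_id IN  NUMBER,
                            p_item_id     IN  NUMBER,
                            p_qty         IN  NUMBER,
                            o_status      OUT VARCHAR2);
    END order_txn;
    /
    CREATE OR REPLACE PACKAGE BODY order_txn AS
      PROCEDURE place_order(p_customer_id IN  NUMBER,
                            p_item_id     IN  NUMBER,
                            p_qty         IN  NUMBER,
                            o_status      OUT VARCHAR2) AS
      BEGIN
        -- The whole multi-step logical unit of work runs server side;
        -- nothing sits uncommitted across a network turn-around.
        INSERT INTO orders (customer_id, item_id, qty, status)
        VALUES (p_customer_id, p_item_id, p_qty, 'NEW');

        UPDATE inventory
        SET    on_hand = on_hand - p_qty
        WHERE  item_id = p_item_id;

        COMMIT;
        o_status := 'OK';
      EXCEPTION
        WHEN OTHERS THEN
          ROLLBACK;                          -- the planned action for failure
          o_status := 'FAILED: ' || SQLERRM; -- status reported back as a result
      END place_order;
    END order_txn;
    /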

mwf

-----Original Message-----
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Mark W. Farnham
Sent: Friday, November 01, 2013 2:35 PM
To: carlospena999_at_gmail.com; oracle-l_at_freelists.org
Subject: RE: Practices for DB Scalability

Six "your mileage may vary" topic areas:

  1. Move any PL/SQL routines that make sense into packages stored in the database.
  2. Load and pin, and arrange to keep in, the PL/SQL routines that are used semi-frequently, so they are already in the shared pool when you hit peak load and you don't pay the overhead of them periodically aging out and being reloaded on demand (possibly extra work at peak load). Of course there is the trade-off of how much memory to spend this way versus the load elapsed time you escape at peak. (A pinning sketch follows this list.)
  3. An amazing amount of overhead can often be shed by reading in, and re-reading to keep hot, small to modest sized lookup and reference tables, especially ones that change very infrequently. (See the keep-pool sketch after this list.)
  4. Establish an order of update for multi-table update routines. This can dramatically reduce the number of deadlock situations you hit that the database engine then has to sort out for you. (A sketch follows this list.)
  5. Look for tables where inserts land randomly with respect to the most frequently used query predicates. Consider carefully whether physically re-ordering such tables will make your routine access enough cheaper over time to offset the overhead of finding a maintenance window and doing the physical re-ordering. Compare cluster factors on all the indexes used for read queries to check whether any important access path has been degraded (theoretically possible and worth checking, though I've never seen it in the wild). (A dictionary query for this follows the list.)
  6. Use a final value of NULL for status indexes on transaction tables and similar opportunities for "sparse indexes." Possibly finesse this with a virtual column turning NULL into, for example, DONE, where you cannot repair the code. (A sketch follows this list.)
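
For items 2 and 3, minimal sketches; the object names are hypothetical, and on older releases DBMS_SHARED_POOL may first need installing from $ORACLE_HOME/rdbms/admin/dbmspool.sql:

    -- Item 2: load and pin a semi-frequently used package so it cannot age out.
    BEGIN
      DBMS_SHARED_POOL.KEEP('APP_OWNER.ORDER_TXN', 'P');  -- 'P' = PL/SQL object
    END;
    /

    -- Item 3: nudge a small, rarely changing lookup table toward staying cached
    -- by assigning it to the KEEP buffer pool (sized via DB_KEEP_CACHE_SIZE).
    ALTER TABLE app_owner.lookup_codes STORAGE (BUFFER_POOL KEEP);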
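
For item 4, a minimal sketch with hypothetical tables: if every multi-table routine touches customers, then orders, then order_lines, always in that order, sessions may queue behind one another but cannot deadlock against each other on these tables:

    CREATE OR REPLACE PROCEDURE apply_order_change(
      p_customer_id IN NUMBER,
      p_order_id    IN NUMBER,
      p_new_status  IN VARCHAR2) AS
    BEGIN
      -- Fixed global update order: customers -> orders -> order_lines.
      UPDATE customers   SET last_activity = SYSDATE      WHERE customer_id = p_customer_id;
      UPDATE orders      SET status        = p_new_status WHERE order_id    = p_order_id;
      UPDATE order_lines SET status        = p_new_status WHERE order_id    = p_order_id;
      COMMIT;
    END;
    /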
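
For item 5, the cluster factor comparison comes straight out of the dictionary; run it before and after the physical re-ordering (the table name here is hypothetical):

    SELECT i.index_name,
           i.clustering_factor,  -- near t.blocks is good, near t.num_rows is bad
           t.blocks,
           t.num_rows
    FROM   user_indexes i
           JOIN user_tables t ON t.table_name = i.table_name
    WHERE  i.table_name = 'ORDERS';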
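
For item 6, a minimal sketch assuming a hypothetical ORDERS table whose STATUS is set to NULL on completion. Because entirely-NULL keys are not stored in a B-tree index, finished rows simply drop out of the index and it stays small; a virtual column can hand 'DONE' back to reading code you cannot repair:

    CREATE INDEX orders_status_ix ON orders (status);  -- holds only in-flight rows

    -- Queries for live work stay cheap via the sparse index:
    SELECT order_id FROM orders WHERE status = 'QUEUED';

    -- Code that insists on seeing 'DONE' reads a virtual column instead:
    ALTER TABLE orders ADD (
      status_display VARCHAR2(30)
        GENERATED ALWAYS AS (NVL(status, 'DONE')) VIRTUAL
    );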

I can't really claim these are the most common or frequently used techniques. Should they be?

Now of course I left out instrumenting your code in the first place. I would say that Cary Millsap and his Method R company are probably information central on that topic.

Now, in the considerations you're already looking at, you get close to workshift planning. IF you have batch-only windows of time, put the pedal to the metal then and try to get all the big, non-interactive jobs done in them. Make sure you reserve free headroom for the DBA even then, but definitely leave a lot of free headroom when interactive users are using the system.

Finally, once in a while turn on the automatic memory stuff, if you have the relevant packs, and see what it recommends about giving more granules of memory to this or that. Once you have a good idea back from AWR and/or ADDM of the ranges you need, set things to a healthy place and turn the automatic stuff off. Moving a few granules back and forth at peak load can be painful.
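
A minimal sketch of that last step, with entirely hypothetical sizes: once AWR/ADDM has shown you stable ranges, fix the big pools explicitly and zero the automatic target so granules stop moving at peak:

    ALTER SYSTEM SET db_cache_size        = 8G SCOPE = SPFILE;
    ALTER SYSTEM SET shared_pool_size     = 2G SCOPE = SPFILE;
    ALTER SYSTEM SET pga_aggregate_target = 4G SCOPE = SPFILE;
    ALTER SYSTEM SET sga_target           = 0  SCOPE = SPFILE;  -- stops automatic SGA resizing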

I'll let someone else chime in about RAC, block mastering, and application affinity...
Start with:

  • James Morle's stuff on setting up your disk farm intelligently; you probably won't have to go anywhere else.
  • Tim Gorman's stuff on scaling to infinity and the virtuous cycle.
  • JL, TK, SQLMaria, Wolfgang, Karen, and Chris on getting good plans (and many others, but go ahead and criticize my starting list while I laugh at you).
  • Doug Burns on the trials and tribulations of big-time partitioning statistics.
  • Mark Bobak on the details of diagnosing the deadlocks you do experience.
  • Tanel Poder on general "what the hell happened."
  • Kerry's company, probably a good place to start on Exadata issues.

mwf

PS: I *hope* you catch my drift that this is a multi-volume book you're after, so you're really only likely to get a reference list from oracle-l on the general topic.
PPS: didn't mean to leave out Stephane F. either, or Kevin Closson on knowing where your bottleneck actually is in your hardware.

-----Original Message-----
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of C P
Sent: Friday, November 01, 2013 3:32 AM
To: oracle-l_at_freelists.org
Subject: Practices for DB Scalability

List,
I am looking for some of the most common and frequently used techniques to improve the scalability of Oracle database systems.

I see that minimizing work, like reducing latch contention, physical reads, and buffer gets, is important. I am trying to find ways to do that in both home-grown (HG) and third-party COTS applications where you cannot touch the SQL, more specifically in the third-party COTS.

Here are some ways I know of to improve scalability:

  • use bind variables (HG; see the sketch after this list),
  • use connection pooling (HG),
  • be careful about running big reports / long-running queries that could clog IO and/or flood the buffer cache; these can also cause problems if a big, frequently updated table is involved, resulting in a lot more IO due to read consistency (HG and COTS),
  • use caching mechanisms in the database (HG and COTS) to improve response times,
  • reduce concurrency where it can be done (HG), but how? And how do you do that in COTS?
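
For the bind variable point, a minimal SQL*Plus contrast on a hypothetical table:

    -- Literals: each distinct value hard-parses its own cursor.
    SELECT * FROM orders WHERE order_id = 1001;

    -- Bind variable: one shared cursor, parsed once, reused for every value.
    VARIABLE oid NUMBER
    EXEC :oid := 1001
    SELECT * FROM orders WHERE order_id = :oid;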

Any more additions/modifications?

CP.

--
http://www.freelists.org/webpage/oracle-l

