RE: Queueing Theory in Oracle

From: Patterson, Joel <>
Date: Fri, 14 Mar 2014 08:08:22 -0400
Message-ID: <C1117B1AA0340645894671E09A7891F715F3D6881E_at_EIHQEXVM2.ei.local>

One of the issues I have is that the machines I work on share CPU, storage, and memory amongst multiple databases. There are multiple ways to break that up, including virtualization.

I read a book by Schalenheimer on predicting performance (see disclaimer), and it was devoted to one database on one server. Sure, if you can get it.

(I know I have the spelling wrong; I even googled it, but it is so wrong I'm not getting any hits. People on this list will probably recognize who I'm trying to reference.)

From: [] On Behalf Of Ls Cheng
Sent: Friday, March 14, 2014 5:44 AM
To: Ethan Post
Cc: Oracle Mailinglist
Subject: Re: Queueing Theory in Oracle

To explain something we need a solid foundation in order to get some meaningful output.

To get that meaningful output we need proper data input, and that is where I have doubts: what data can we use as input? Which database metrics could be used, and if they can be used, are they exponentially distributed? If not, we aren't supposed to use queueing theory, because M/M/n models assume exponentially distributed data.

We could use user transactions per second, for example, but I am not sure it is always directly proportional to CPU usage: if the transaction arrival rate is up 10%, CPU usage might be up only 1%; if the transaction rate is up 20%, CPU might be up only 0.2%. Not sure if I have explained that well.

The metric which I think is directly related to CPU usage is logical reads per second: more logical reads, more CPU usage, a direct relationship. User transactions don't always have that direct relationship because we can have different types of transactions, say T1, T2, T3; if T1 has a 50% higher arrival rate it doesn't mean CPU usage would be higher, it may simply be 50 more commits. Logical reads are different. But can we use logical reads as the arrival rate?

Thanks
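To illustrate what the logical-reads-as-arrival-rate idea would look like in the simplest model: treat logical reads/sec as the arrival rate λ and the peak logical-read throughput one CPU can sustain as the service rate μ in an M/M/1 queue. A minimal sketch; all figures are made up, and M/M/1 is of course only valid if the exponential assumptions questioned above actually hold:

```python
# Hypothetical sketch: logical reads/sec as lambda, assumed single-CPU
# logical-read capacity as mu, in a plain M/M/1 model.
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: W = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

service_rate = 100_000  # logical reads/sec one CPU can sustain (invented number)
for arrival_rate in (50_000, 80_000, 90_000, 99_000):
    w_ms = mm1_response_time(arrival_rate, service_rate) * 1_000
    util = arrival_rate / service_rate
    print(f"utilization {util:.0%}: mean response {w_ms:.3f} ms per read")
```

The point of the sketch is how fast W blows up as λ approaches μ, not the specific numbers, which depend entirely on the assumed capacity figure.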

On Thu, Mar 13, 2014 at 9:38 PM, Ethan Post <<>> wrote:

IMHO the practical use of queueing theory is to explain to IT managers why system performance goes from wonderful to full suck when CPU is at 80-90% utilization. When they say "yeah, but we still have 10% left," you tell them to go Google queueing theory. Beyond that, I see better ways to answer your question, which leads me to ask: what is the question you're trying to answer? Taking a guess that you want to know when service times aren't on par?
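The "but we still have 10% left" argument can be put in numbers: in an M/M/1-style model, response time stretches over the bare service time by a factor of 1/(1 - U), so the last few percent of CPU cost far more than the first. A quick sketch (nothing Oracle-specific assumed):

```python
# Response-time stretch factor for a single M/M/1-style server:
# response time = service time / (1 - utilization).
def stretch_factor(utilization):
    """How many times the bare service time a request takes at a given utilization."""
    return 1.0 / (1.0 - utilization)

for u in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"{u:.0%} busy -> response time {stretch_factor(u):.0f}x the service time")
```

Going from 80% to 90% busy doubles response times; going from 90% to 95% doubles them again, which is why the remaining "10% headroom" is mostly unusable.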

Seems to me in 90% of cases the issue, if it is in the database, is going to be reflected in your SQLs, and it will affect most SQLs, i.e. the average elapsed times will increase for a larger group of SQLs. I keep a table which groups SQL performance into hourly buckets for SQLs which actually do something (either high execute counts or high elapsed times); it doesn't take up much space and I can go back more than a year. The breakout is also histogram-style, so I can see how often a SQL executed in under 1 sec, 1-5 secs, and so on. Very easy to query, very easy to quickly determine if average elapsed times are not the norm, very easy to show history.

I know the same sort of stuff is stored in Oracle tables someplace, and you could also run scheduled checks from ASH against history to look for >N SQLs with service times >X average, or what have you. Using ASH also makes it easy to filter by program, module, client_id and such, since in most cases when you are interested in this sort of detail you are interested in certain programs/apps/transactions.

Just thinking as I type here, but I am always interested in better diagnosing app server and web server performance from its interaction with the database. When the app server is sick, it would usually be reflected in time between executes or SQL*Net waits. Take for example an app which executes SQL foo 1000 times per hour, all within a ten-minute period. Say all of a sudden the SQLs still execute just as fast, but it takes 20 minutes to run the 1000 executes because of latency on or between the app servers: the average time between executes would increase. If this occurred across multiple app-server-related sessions, you likely have some sort of latency issue on the app server or web server. Fun thinking about this stuff.
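The time-between-executes idea can be sketched with invented timestamps (the SQL name and the schedule below are purely illustrative): if per-execution elapsed time is steady but the gaps between executions grow, the slowdown is upstream of the database.

```python
# Sketch of the time-between-executes signal with made-up data:
# 1000 executes of "SQL foo" spread over 10 minutes vs. over 20 minutes.
from datetime import datetime, timedelta

def mean_gap_seconds(timestamps):
    """Average gap in seconds between consecutive execution timestamps."""
    ts = sorted(timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    return sum(gaps) / len(gaps)

start = datetime(2014, 3, 13, 9, 0)
fast = [start + timedelta(seconds=i * 0.6) for i in range(1000)]  # 10-minute burst
slow = [start + timedelta(seconds=i * 1.2) for i in range(1000)]  # same work, 20 minutes

print(round(mean_gap_seconds(fast), 2))  # 0.6
print(round(mean_gap_seconds(slow), 2))  # 1.2
```

In practice the timestamps would come from ASH or application-side logging; the doubling of the mean gap with unchanged per-execute elapsed time is the app-server-latency fingerprint described above.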
- Ethan Post

On Tue, Mar 11, 2014 at 8:57 AM, Ls Cheng <<>> wrote: Hi all
The question is probably a bit vague, but has anyone successfully applied queueing theory to an Oracle database using database metrics such as logical reads, user calls, or executes (not application metrics such as business transactions per second, queries run per second, etc.)? I wonder if these metrics can be used in queueing theory, since they don't look like they follow an exponential distribution, which is a requirement in M/M/n.


Joel Patterson
Sr. Database Administrator | Enterprise Integration
Phone: 904-928-2790 | Fax: 904-733-4916


Received on Fri Mar 14 2014 - 13:08:22 CET
