Re: Multi Threaded server not working for many inserts?

From: Billy Verreynne <vslabs_at_onwe.co.za>
Date: 17 Mar 2004 22:43:55 -0800
Message-ID: <1a75df45.0403172243.494c52f2_at_posting.google.com>


"Royco" <bla_at_bla.nl> wrote i

> Exactly.....
> Running the same SQL in SQL*Plus with both instances configured as dedicated:
> no problems whatsoever.

And why did you neglect to mention this upfront? The "facts of the case" are only what you present. All you stated in the initial postings was that MTS is a problem, bad MTS. No other technical details or analysis as to why you arrived at that conclusion. How do you expect any kind of meaningful advice in the absence of tangible information?

> A lot of our Pro*C and Pro*COBOL programs are
> using database links in a dedicated server setup for as long as I can
> remember (more than 10 years now).

I would not trust a Cobol programmer to know a transaction from a database link. (And yeah, I have done my share of Cobol in the '80s, so I should know. ;-)

> Configure MTS on both instances and the same SQL from SQL*Plus creates a
> non-responding database.

So IOW, the INSERT fails irrespective of it being a push or a pull (local or remote transaction) on MTS... and it does work with dedicated server. Then yeah - it sure looks like MTS is to blame.

But some more config details are missing. What type of connection is used between the databases? MTS or dedicated? Is the database link session shared between sessions?
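A quick way to answer that first question (a sketch; V$SESSION and its SERVER column are standard, the filter is just an example) is to look at the sessions on each instance while they are connected:

  -- SERVER shows DEDICATED for a dedicated connection; a shared server
  -- session shows SHARED while active on a shared server and NONE while idle
  SELECT sid, serial#, username, server
    FROM v$session
   WHERE type = 'USER';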

> > And I find it hard to believe that "tracing does not work". Have you
> > enabled SQL tracing for that session? Have you enabled Net tracing?
>
> Our DBA 'sees' the SQL 'arriving' in the target database but that's it.

Here are my suggestions:
- eliminate triggers, indexes and everything else possible from the problem
- test using MTS and dedicated server
- trace the tests

The steps I would follow:

  1. Create a private (non-sharable) database link that connects using a shared server connection to the remote database (sketch after this list).
  2. Create a private (non-sharable) database link that connects using a dedicated session to the remote database (configure another TNS alias for that remote db, and use the SERVER=DEDICATED parameter).
  3. Test both links to ensure that they do connect via MTS and dedicated modes respectively on the remote database.
  4. Create test tables (2 columns each, 10 rows per table) on both instances. No triggers, indexes, etc.
  5. Run SQL*Plus and enable tracing for the session, i.e. use ALTER SESSION SET SQL_TRACE=TRUE (or even SET AUTOTRACE ON). Run the INSERT statement (pull the data and insert locally), testing both database links (see the sketch after this list).
  6. Repeat step 5 using a push INSERT.
  7. Repeat steps 5 and 6 using a dedicated and shared server SQL*Plus session.
  8. Monitor the V$ views for session events, wait states and stats (sample queries after this list).
  9. Evaluate the trace files.
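For steps 1 and 2, something along these lines (a sketch only - the alias, link and account names here are all made up):

  -- tnsnames.ora on the local host: two aliases for the same remote db,
  -- differing only in the SERVER= setting, e.g.
  --   remdb_shr = (DESCRIPTION= ... (CONNECT_DATA=(SERVICE_NAME=remdb)(SERVER=SHARED)))
  --   remdb_ded = (DESCRIPTION= ... (CONNECT_DATA=(SERVICE_NAME=remdb)(SERVER=DEDICATED)))

  CREATE DATABASE LINK remdb_shr
    CONNECT TO scott IDENTIFIED BY tiger USING 'remdb_shr';

  CREATE DATABASE LINK remdb_ded
    CONNECT TO scott IDENTIFIED BY tiger USING 'remdb_ded';

For step 3, run a SELECT over each link and check the SERVER column of V$SESSION on the remote instance, as shown earlier.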
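Steps 4 to 6 then boil down to something like this (again a sketch - the table names are invented, and the same statements get repeated against the dedicated link):

  CREATE TABLE testtab ( id NUMBER, val VARCHAR2(30) );
  -- (create the same table on the remote instance too, and put 10 rows in each)

  ALTER SESSION SET SQL_TRACE = TRUE;

  -- step 5: pull (read remote, insert local)
  INSERT INTO testtab SELECT id, val FROM testtab@remdb_shr;

  -- step 6: push (read local, insert remote)
  INSERT INTO testtab@remdb_shr SELECT id, val FROM testtab;

  ALTER SESSION SET SQL_TRACE = FALSE;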
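And for step 8, while a test hangs, queries along these lines show what the session is doing (:sid being the SID of the test session):

  SELECT sid, event, state, seconds_in_wait
    FROM v$session_wait
   WHERE sid = :sid;

  SELECT event, total_waits, time_waited
    FROM v$session_event
   WHERE sid = :sid
   ORDER BY time_waited DESC;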

I would also enable tracing on the remote database. At step 5 I would do a [SELECT sysdate FROM dual@dblink] in SQL*Plus, find the session (with its SID and serial#) created by that statement on the remote database, and then enable tracing on that database using:

SQL> begin
  2    sys.dbms_system.set_ev( :sid, :serial, 10046, 12, '' );
  3  end;
  4  /
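To find that session's SID and serial# on the remote instance, something like this works (the USERNAME filter is just an example - use whatever identifies the account the link connects as):

  SELECT sid, serial#, username, server, machine
    FROM v$session
   WHERE username = 'SCOTT';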

> But it seems you are very skilled in pissing people off as if you have all
> the knowledge in the world and think of others as being 'new-bees' because
> they have a problem.

I would not argue with the former (I was indeed trying to drive my point home rather bluntly). But I do not consider myself an Oracle expert - I consider myself an expert on the crap I deal with every day. I know what shit looks like and when someone is shoveling it. And IMO, blaming MTS without hard evidence is exactly that.

It's not that I'm an MTS fan either. To me it is about not following basic steps in troubleshooting a problem, grasping at the first straw and holding that up as the cause of the problem.

I've had developers tell me that Oracle somehow "misplaced" millions of rows after their process moved rows from one table to another (INSERT and DELETE) and committed. No exceptions were raised. Thus, according to them, their code worked and Oracle somehow failed and lost millions of rows despite their COMMIT. The problem was traced to a trigger on the destination table that failed the inserts and suppressed all exceptions to the client.
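Purely as an illustration (hypothetical names, and just one of several ways this kind of silent loss can happen): an INSTEAD OF trigger that swallows every exception makes the client's INSERT "succeed" while storing nothing:

  CREATE OR REPLACE TRIGGER dest_ioi
    INSTEAD OF INSERT ON dest_view
    FOR EACH ROW
  BEGIN
    INSERT INTO dest_tab ( id, val ) VALUES ( :new.id, :new.val );
  EXCEPTION
    WHEN OTHERS THEN
      NULL;  -- error swallowed: no row stored, no error to the client
  END;
  /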

I wish I could say that this "grasping at the first convenient straw" is an exception. It is not.

Blame a piece of software or architecture without having done your homework... and expect me to reach for my lead pipe and start swinging away, getting you all pissed off... ;-)

--
Billy
(a Crap Expert)
Received on Thu Mar 18 2004 - 07:43:55 CET
