
DBA Blogs

How To Improve SQL Statements Performance: Using SQL Plan Baselines

Pythian Group - Mon, 2014-02-03 09:25

The performance of any Oracle database relies heavily on query execution. An SQL statement’s execution plan can change for a variety of reasons: gathering optimizer statistics (table, index, schema, etc.), changing optimizer parameters, changing schema definitions, adding indexes, and so on. As experienced Oracle DBAs, we should be aware that the above mentioned actions, meant to improve SQL performance, will not always guarantee positive results.

So in this case, many of us would try to freeze execution plans (Stored Outlines) or lock the optimizer statistics. However, doing so prevents such environments/databases from taking advantage of new optimizer functionality or access paths that would improve SQL statement performance. That is where SQL Plan Management comes in very handy…

SQL Plan Management (SPM) provides a framework for completely transparent controlled execution plan evolution. With SPM the optimizer automatically manages execution plans and ensures only known or verified plans are used. When a new plan is found for a SQL statement it will not be used until it has been verified by the database to have comparable or better performance than the current plan.

———————-

Next, I will explain the steps for forcing a bad query to use a better execution plan by loading SQL Plans into SPM using AWR.

Identifying the Slow Query

We have the following scenario:

- Oracle 11.2.0.3 version single instance database

- Performance issue caused by the following bad query:

(Screenshot: the problem SQL statement.)

with the initial explain plan:

(Screenshot: the initial explain plan, showing a full table scan.)

As shown in the plan output, a full table scan was used, resulting in excessive IO for this query. It seemed this query needed an index to reduce the IO, so I added two indexes on the ‘status’ and ‘prevobjectid’ columns of the EMPLOYEES table, gathered table statistics, and then checked the explain plan again. Due to the index creation, the DISPLAY_AWR program now shows a newly generated explain plan with improved cost, using an index range scan instead of the full table scan used by the initial plan (plan hash value: 2172072736).

(Screenshot: the new explain plan, showing an index range scan.)

Now we have obtained a new, better execution plan in AWR for the SQL statement, but our next question would be: “How can we make sure it will be the only plan picked by the Cost Based Optimizer for future executions?”

The answer: “Create a SQL Tuning Set for the SQL, then create a new SQL Plan Baseline from the STS so the Optimizer will choose the preferred execution plan.”

Each time a SQL statement is compiled, the optimizer first uses a cost-based search method to build a best-cost plan, then tries to find a matching plan in the SQL plan baseline.  If a match is found, the optimizer will proceed using this plan. Otherwise, it evaluates the cost of each accepted plan in the SQL plan baseline and selects the plan with the lowest cost.

Here are the steps for loading SQL Plans into SPM using AWR by implementing SQL Baselines for the bad query.

Step 1: Set up a SQL Plan Baseline using a known-good plan, sourced from AWR snapshots.

To do so, SQL Plan Management must be active; the easiest check is to confirm that the optimizer_use_sql_plan_baselines parameter is set to TRUE.
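The check itself appears in the post only as a screenshot; a minimal sketch of it could be:

SHOW PARAMETER optimizer_use_sql_plan_baselines

-- or, from any client:
SELECT name, value
FROM   v$parameter
WHERE  name = 'optimizer_use_sql_plan_baselines';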

Step 2: Create a SQL Tuning Set (STS).

A SQL tuning set (STS) is a database object that includes one or more SQL statements along with their execution statistics and execution context, and could include a user priority ranking. You can load SQL statements into a SQL tuning set from different SQL sources, such as AWR, the shared SQL area, or customized SQL provided by the user. An STS includes:

- A set of SQL statements

- Associated execution context, such as user schema, application module name and action, list of bind values, and the cursor compilation environment

- Associated basic execution statistics, such as elapsed time, CPU time, buffer gets, disk reads, rows processed, cursor fetches, the number of executions, the number of complete executions, optimizer cost, and the command type

- Associated execution plans and row source statistics for each SQL statement (optional)

The concept of SQL tuning sets, along with the DBMS_SQLTUNE package to manipulate them, was introduced in Oracle 10g as part of the Automatic SQL Tuning functionality. Oracle 11g makes further use of SQL tuning sets with the SQL Performance Analyzer, which compares the performance of the statements in a tuning set before and after a database change. The database change can be as major or minor as you like, such as:

  • Database, operating system, or hardware upgrades.
  • Database, operating system, or hardware configuration changes.
  • Database initialization parameter changes.
  • Schema changes, such as adding indexes or materialized views.
  • Refreshing optimizer statistics.
  • Creating or changing SQL profiles.

Now I create a SQL Tuning Set based on the slow query with a SQL_ID of 9kt723m2u5vna.

(Screenshot: STS creation.)
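The actual command is shown only as a screenshot; a rough sketch of creating the STS might look like this (the set name STS_SLOW_QUERY is my own placeholder, not the name used in the post):

BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(
    sqlset_name => 'STS_SLOW_QUERY',                      -- hypothetical name
    description => 'STS for slow query 9kt723m2u5vna');
END;
/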

Step 3: Populate STS from AWR.

Now I will locate the AWR snapshots required to populate the STS, and load the STS based on those snapshot ID’s and the SQL_ID.

(Screenshot: locating the AWR snapshots and loading the STS.)
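Again, the commands appear only as a screenshot; a hedged sketch of populating the STS from AWR could be the following, where the snapshot range 1234-1250 and the STS name are placeholders, not the values used in the post:

DECLARE
  awr_cur DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  OPEN awr_cur FOR
    SELECT VALUE(p)
    FROM   TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
                   begin_snap     => 1234,                -- placeholder snapshot range
                   end_snap       => 1250,
                   basic_filter   => 'sql_id = ''9kt723m2u5vna''',
                   attribute_list => 'ALL')) p;

  DBMS_SQLTUNE.LOAD_SQLSET(
    sqlset_name     => 'STS_SLOW_QUERY',                  -- hypothetical name
    populate_cursor => awr_cur);
END;
/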

Step 4: List out SQL Tuning Set contents.

Now I can query the STS to verify it contains the expected data.

(Screenshot: STS contents.)
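A simple way to list the contents (a sketch, using my placeholder STS name) is to query DBA_SQLSET_STATEMENTS:

SELECT sql_id, plan_hash_value, elapsed_time, buffer_gets, disk_reads
FROM   dba_sqlset_statements
WHERE  sqlset_name = 'STS_SLOW_QUERY';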

Step 5: List out the existing SQL Plan Baselines.

Though I have created and verified the STS, the Baseline has not yet been created.

(Screenshot: baseline listing - no baseline exists yet.)
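Checking for existing baselines can be done with a query along these lines (at this step it should return no rows for our statement):

SELECT sql_handle, plan_name, origin, enabled, accepted, created
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%EMPLOYEES%';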

Step 6: Load desired plan from STS as SQL Plan Baseline

Now I will load the known good plan that uses the newly created index into the Baseline.

(Screenshot: loading the plan from the STS into a baseline.)
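The load itself appears as a screenshot; a sketch of it with DBMS_SPM could look like the following - the plan_hash_value filter must be replaced with the hash of the good index plan, and the STS name is still my placeholder:

DECLARE
  plans_loaded PLS_INTEGER;
BEGIN
  plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(
                    sqlset_name  => 'STS_SLOW_QUERY',                 -- hypothetical name
                    basic_filter => 'plan_hash_value = 1111111111');  -- substitute the good plan's hash
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || plans_loaded);
END;
/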

Step 7: List out the Baselines again.

Now verify the Baseline contains the desired plan.

(Screenshot: baseline listing showing the accepted plan.)

Step 8. Flush the current bad SQL Plan.

After loading the baseline, the current cursor must be flushed from the cache to make sure the new plan will be used on the next execution of sql_id 9kt723m2u5vna.
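One common way to do this (a sketch only - the address and hash_value must come from your own V$SQLAREA query; the values below are placeholders) is DBMS_SHARED_POOL.PURGE:

SELECT address, hash_value
FROM   v$sqlarea
WHERE  sql_id = '9kt723m2u5vna';

-- Plug the ADDRESS and HASH_VALUE returned above into the purge call ('C' = cursor):
EXEC DBMS_SHARED_POOL.PURGE('000000008ED67000,1234567890', 'C');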

Conclusion

As this blog post demonstrates, SQL Plan Management (SPM) allows database users to maintain stable yet optimal performance for a set of SQL statements, and baselines seem to be a definite step in the right direction. Baselines can be captured from multiple sources, and SPM allows new plans to be used if they perform better than the baseline, which could improve the overall application/system functionality.

Categories: DBA Blogs

RMOUG Training Days 2014

Galo Balda's Blog - Sun, 2014-02-02 11:29


I’ll be presenting on February 6th. Here are the details of my session:

SQL Pattern Matching in Oracle 12c

Room 501, 11:15 – 12:15

Last year I had a great time, so I can’t wait to be back in Denver.

I hope to see you there!

Updated on 02/17/2014: The presentation is available on Slideshare


Filed under: 12C, RMOUG, SQL Tagged: 12C, RMOUG, SQL
Categories: DBA Blogs

Find what databases you have quickly – #EM12c Inventory Search

DBASolved - Sat, 2014-02-01 22:51

Thought I would write a quick tip on how to find what versions of databases you have in your environment.  Oracle Enterprise Manager 12c provides a quick and easy way to find this information once it is configured within your environment.

Starting with the Enterprise menu, you will go to Configuration and then Inventory and Usage Details (Enterprise –> Configuration –> Inventory and Usage Details) (Figure 1).

Figure 1: Inventory and Usage Details Menu Item


After clicking Inventory and Usage Details, you will land on the associated page.  In the drop down box labeled “Show”, select Databases (Figure 2).  This will show you all the databases that are listed in the Oracle Enterprise Manager 12c that are being monitored.

Figure 2: Show Menu


Once all the databases are displayed (listed), you can see exactly what you have in your environment (Figure 3).  In this case, my test environment is composed of two 11gR2 databases (11.2.0.3 & 11.2.0.4) and a set of 12cR1 databases (a CDB & 2 PDBs).

Figure 3:  Database Inventory listed (not a great graphic)


Hope this quick tip helps you find additional information about targets in your environment.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: OEM
Categories: DBA Blogs

Log Buffer #357, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-01-31 09:16

This Log Buffer captures the excitement which is flowing unbridled after the new year frenzy, and the blogosphere is bubbling with new and novel technical features, tricks and tips. Enjoy!!!

Oracle:

Jeffrey Kemp asks: Need to change the label of an item on-the-fly?

As a system grows, it accumulates technical debt – improvements and cleanup that you really ought to get around to doing.

Sometimes statements take longer than you’d like to complete. This can happen for a number of reasons including data changes, table growth, stale statistics and the occasional optimizer bug you haven’t encountered before.

The JSON-P API as described in JSR-353 is a limited API for working with JSON; but at the basic level it will do the job. It is possible to add a few utility methods that can make your code, in my eyes, prettier.

Oracle Database 11gR2 (11.2.0.4) installation on Oracle Linux 6.4

SQL Server:

Finding “Active” Rows for the Previous Month (SQL Spackle)

How do you configure where your database backups are created? Do you have problems updating all of your backup jobs if the backup location needs to change?

Stairway to Advanced T-SQL Level 2: Writing Subqueries

Getting Starting with Change Data Capture (CDC) in SQL Server – Part 1

In this article, Gail Shaw looks at how you can identify common types of deadlock, the difference between a deadlock and severe blocking, and how to avoid and fix the most common deadlock types.

MySQL:

Percona XtraDB Cluster 5.6 GA release now available.

In MySQL 5.7.2 a new include file “–source include/force_myisam_default.inc“ made its appearance in most of the .test files in the MTR suite.

Daniel thought it would be nice to be able to query the FOSDEM schedule for the MySQL devroom with SQL.

Replicate from Oracle to Oracle, Oracle to MySQL, and Oracle to Analytics.

They say there are two hard things in software development: cache invalidation, naming things, and off-by-one errors.

Categories: DBA Blogs

Purge statspack snaps of source db on test db having different DBID?

Pythian Group - Fri, 2014-01-31 09:07

As most DBAs are aware, cleanly deleting old statspack snaps is very difficult up to RDBMS version 9i. Thanks to the statspack.purge procedure introduced by Oracle in 10gR1, it is now possible to purge un-referenced data too. This blog post explains how to use the statspack.purge procedure, but for test/clone databases that use a different DataBase IDentifier (DBID) than the source database. Please remember that the steps explained in this blog are not required when the source and test databases have the same DBID.

Normally the DBID gets changed on cloned databases in the following scenarios:

1. The commonly used RMAN ‘duplicate database’ feature is used to create the test database.

2. The database utility ‘nid’ is used to change the test database name and DBID.

3. The test database controlfile is created using syntax based on a text (trace) backup controlfile script.

Consider a production/source database configured to generate statspack snaps every 20 minutes with a retention of 90 days. When this source database is cloned using the above methods to create a test database, the test database inherits the same behavior. Now the test database contains statspack snaps belonging to the source database as well as to the current database. Even if you modify the existing purge script to retain fewer snaps, it is valid only for the snaps belonging to the current DBID. The snaps belonging to any other DBID would never get purged by this script, even though they are no longer valid for this test database.

1. Gather the DBID details from stats$snapshot table on the test database.

For example,

SQL> select distinct dbid from stats$snapshot;

DBID
———-
1215068670 ==> This is the source database DBID
393689388 ==> This is the test database DBID

2. Gather the snaps range handled by the source database using the following queries.

For eg:

SQL> select min(snap_id) from stats$snapshot where dbid=1215068670;

MIN(SNAP_ID)
————
90920

SQL> select max(snap_id) from stats$snapshot where dbid=1215068670;

MAX(SNAP_ID)
————
93775

3. Gather the row count on various tables to verify the successful purge activity completion.

For eg:

SQL> select count(1) from stats$snapshot where dbid=1215068670;

COUNT(1)
———-
2211

SQL> select count(1) from stats$sqltext where last_snap_id < 93776;

COUNT(1)
———-
380056

SQL> select count(1) from STATS$STATSPACK_PARAMETER where dbid=1215068670;

COUNT(1)
———-
1

SQL> select count(1) from STATS$UNDOSTAT where snap_id < 93776;

COUNT(1)
———-
4422

4. Copy the $ORACLE_HOME/rdbms/admin/sppurge.sql to your home directory and modify it accordingly.

i) Remove the dbid column from this select statement in the script.

select d.dbid dbid ==> Remove this column being selected.
, d.name db_name
, i.instance_number inst_num
, i.instance_name inst_name
from v$database d,
v$instance i;

ii) Substitute the source database DBID at this location in the script.

begin
:dbid := &dbid; ==> Change the ‘&dbid’ value as 1215068670
:inst_num := &inst_num;
:inst_name := ‘&inst_name’;
:db_name := ‘&db_name’;
end;
/

iii) Change the variable “i_extended_purge” value to ‘true’ in the script.

:snapshots_purged := statspack.purge( i_begin_snap => :lo_snap
, i_end_snap => :hi_snap
, i_snap_range => true
, i_extended_purge => false ==> Change the value as true
, i_dbid => :dbid
, i_instance_number => :inst_num);
end;

5. Execute this custom purge script on the test database and provide the snaps range when requested.

For eg:

Specify the Lo Snap Id and Hi Snap Id range to purge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for losnapid: 90920
Using 90920 for lower bound.

Enter value for hisnapid: 93775
Using 93775 for upper bound.

Deleting snapshots 90920 – 93775.

6. Now log into the test database and execute the queries to verify the deletion happened.

SQL> select count(1) from stats$snapshot where dbid=1215068670;

COUNT(1)
———-
0

SQL> select count(1) from stats$sqltext where last_snap_id < 93776;

COUNT(1)
———-
7840

SQL> select count(1) from STATS$STATSPACK_PARAMETER where dbid=1215068670;

COUNT(1)
———-
1

SQL> select count(1) from STATS$UNDOSTAT where snap_id < 93776;

COUNT(1)
———-
0

As you noticed, this is a very simple action plan, which may require some more modification of the custom purge script when it is used on a RAC database.

Categories: DBA Blogs

New member of The OakTable: me!

The Oracle Instructor - Thu, 2014-01-30 01:20

I am an Oakie now :-)

To be a part of this highly respected circle of Oracle Scientists is something that I consider one of the greatest achievements of my professional life.


Categories: DBA Blogs

Query tuning example

Bobby Durrett's DBA Blog - Wed, 2014-01-29 16:58

A coworker of mine cleaned up a bunch of old data to improve performance on an older system of ours and one of the queries started running slower.  It looks like the optimizer was choosing a full table scan when an index existed that was much faster.  So, I took a look at why it was choosing the full scan and what could be done to get the query to run with the index.

This is a 9.2.0.6 64 bit HP-UX PA-Risc system.  The query without any hints runs like this:

Elapsed: 00:04:54.65

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE 
          (Cost=23208 Card=1 Bytes=12)
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (FULL) OF 'MYTABLE' 
              (Cost=23208 Card=68262 Bytes=819144)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     242929  consistent gets
     238854  physical reads
          0  redo size
        203  bytes sent via SQL*Net to client
        242  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

I renamed the production table to MYTABLE to hide its name.  Here is how the query runs with the index hint:

Elapsed: 00:00:00.21

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE 
          (Cost=34865 Card=1 Bytes=12)
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'MYTABLE' 
              (Cost=34865 Card=68262 Bytes=819144)
   3    2       INDEX (RANGE SCAN) OF 'MYINDEX' (NON-UNIQUE) 
                (Cost=194 Card=68262)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1180  consistent gets
          0  physical reads
          0  redo size
        203  bytes sent via SQL*Net to client
        242  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

I got this output by putting these commands before the sql:

set autotrace on
set timing on

The query looks something like this:

SELECT  sum(COLUMN2)
  FROM MYTABLE
   WHERE MYTABLE.COLUMN1 = '01242014';

The index is on COLUMN1 only.

Here is what the query looks like with the index hint:

SELECT /*+ INDEX (MYTABLE MYINDEX) */ sum(COLUMN2)
  FROM MYTABLE
   WHERE MYTABLE.COLUMN1 = '01242014';

So, the question is why does the optimizer choose a full scan which runs for almost 5 minutes instead of the index scan which runs in a fifth of a second?  I think that the answer is that the optimizer assumes there is no caching.  Notice the number of consistent gets and physical reads in each case:

FULL SCAN

     242929  consistent gets
     238854  physical reads

INDEX SCAN

       1180  consistent gets           
          0  physical reads

If you look at the autotrace output and you see consistent gets but no physical reads it means that the blocks are being read from memory and not the disk.  So, just about every block read by the full scan is read from the disk and not from memory.  In the case of the index scan all the blocks are in memory.

Other than the caching, the optimizer’s cost estimates aren’t that far off.  I think the optimizer’s cost units are the equivalent of single block reads.  I think I read this in Jonathan Lewis’ book.  These are the sections of the plan that show the cost of the two ways of reading from the table:

   2    1     TABLE ACCESS (FULL) OF 'MYTABLE' 
              (Cost=23208 Card=68262 Bytes=819144)

   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'MYTABLE' 
              (Cost=34865 Card=68262 Bytes=819144)
   3    2       INDEX (RANGE SCAN) OF 'MYINDEX' (NON-UNIQUE) 
                (Cost=194 Card=68262)

So the full scan has a cost of 23208 which is the equivalent of that many 10 millisecond single block reads.  That’s about 232 seconds which is about 4 minutes.  So, that’s not far off for the cost of the full scan which ran in 5 minutes.  Also, full table scans do multi block reads so when autotrace says physical reads = 238854 what it really means is that many blocks were read in through physical reads.  Our system has db_file_multiblock_read_count=16 so probably 16 blocks are read per physical read.  The optimizer estimated a cost of 23208 which is about 10% of the physical blocks and this wasn’t a bad estimate because the blocks are read together.  So it assumed that the 238854 blocks would be read in the time it would take for 23208 single block reads and this was pretty close.

But, the index scan estimated 34865 sequential reads which is about 6 minutes.  Now, if there was no caching this wouldn’t be such a bad estimate.  Autotrace says there are 1180 blocks read from memory.  If these were read from disk, each block would be a separate disk I/O, so it would be about 12 seconds.  I checked and found that some values of the column had more rows than others, and the value 01242014 that I was testing with was one of the values with fewer rows.  So, for different values of COLUMN1, without caching, the index scan would be a lot closer to the full scan:

SELECT COLUMN1,count(*) cnt
FROM MYTABLE
group by COLUMN1
order by cnt desc;

COLUMN1         CNT
-------- ----------
11262013     873867
11252013     576299
09222013     237098
08052013     179476
12082013     177359
11102013     175178
09152013     174220
10132013     174204
11172013     174102
08252013     173758
... skipped some ...
01242014      53801
10232013      53785
10072013      53335

So the value 01242014 only has 53801 rows but 11262013 has 873867 rows so it probably has blocks closer to the estimated 34865.  Here is a query to show how many blocks each value of the column has:

SELECT COLUMN1,
count(distinct DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid)) cnt
FROM MYTABLE
group by COLUMN1
order by cnt desc;

COLUMN1        CNT
-------- ----------
11262013      16338
11252013      10847
09222013       4409
08052013       3437
12082013       3337
11102013       3305
09152013       3290
10132013       3286
11172013       3285
08252013       3281
10272013       3255
... skipped some ...
01242014        926
01232014        924
09262013        922
01132014        915

So, for the value of the column with the most rows there are 16338 blocks, somewhere in the vicinity of the estimated 34865 single block reads.  Anyway, if the optimizer is off by less than a factor of 100 I’d say it is pretty close.  But, the run time is off by a factor of about 600 (5 minutes versus .2 seconds), so that is a real problem.

Oracle’s method of dealing with caching of index blocks and indexed table blocks is one of the optimizer parameters.  Here are the settings as we have them on our system:

NAME                                 VALUE
------------------------------------ -----
optimizer_index_caching              0
optimizer_index_cost_adj             100

I messed with different values of optimizer_index_cost_adj and found that 66 was the highest value that would cause the plan to switch to an index scan.

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE 
          (Cost=23011 Card=1 Bytes=12)
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'MYTABLE' 
              (Cost=23011 Card=68262 Bytes=819144)
   3    2       INDEX (RANGE SCAN) OF 'MYINDEX' (NON-UNIQUE) 
                (Cost=194 Card=68262)

Note that the cost of the index access to the table has come down to 23011 which is less than the 23208 estimated for the full scan.  Also note that 23011/34865=.66=66/100.  So, the cost of the index access to the table is just multiplied by the value of optimizer_index_cost_adj divided by 100.
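If you want to reproduce this kind of experiment without touching the instance-wide setting, the parameter can also be changed at the session level; a sketch of such a test (not the exact commands used here) would be:

ALTER SESSION SET optimizer_index_cost_adj = 66;

SET AUTOTRACE ON
SET TIMING ON

SELECT sum(COLUMN2)
  FROM MYTABLE
 WHERE MYTABLE.COLUMN1 = '01242014';

-- put it back to the default when done
ALTER SESSION SET optimizer_index_cost_adj = 100;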

So, now we have come down to where the rubber meets the road.  What should we then do with all of this information?  The development team has told us that they can add a hint to their code, but they are concerned that there is some general problem with the database and they will have to keep adding hints in various other places.  Note that this is a very old system with vendor supplied code.  Presumably, before the data was cleaned up, the cost of the full scan exceeded that of the index scan, and it is only because our new table has so much less data that the full scan now appears cheaper than the index scan.

So, we could set optimizer_index_cost_adj to 66 or some other value, but then we would need to test the entire system.  How much effort are we going to put into a system that we need to retire anyway?  This is really where I come down with hints and other performance methods.  I’ve known about using optimizer_index_cost_adj to encourage index scans for a long time.  But, if I’m supporting a bunch of older systems I can’t just go in and start tweaking parameters without appropriate testing.  So, I’d love to adjust the parameter rather than using the hint to band-aid this one query, but we have a lot of databases to support and sometimes you just have to apply the quick fix rather than taking the time to apply a more global solution.

- Bobby
Categories: DBA Blogs

Database Machine (EXADATA) Schematic problem in OEM12c

DBASolved - Tue, 2014-01-28 11:45

Adding an Exadata to Oracle Enterprise Manager 12c (OEM) is pretty easy, yet at times you may run into a problem or two.  Many of these problems can be minimized by using the Oracle Enterprise Manager 12c: Exadata Discovery Cookbook (cookbook).  The cookbook can be found here, so you don’t have to search too far for it.

As with adding any target in OEM, you add the Exadata the same way (Setup –> Add Target –> Add Targets Manually).  I’m not going to go into detail here on adding the Exadata target because it is covered in the cookbook and it is also a basic function of OEM.  Just keep in mind that to add an Exadata you will need the databasemachine.xml or the catalog.xml file.  With the Exadatas I’ve been adding, I have been using the databasemachine.xml.

What I do want to go over is what happens when you add an Exadata and the schematic of the Exadata box does not appear in OEM.  In Figure 1, you see that I have two Exadata boxes listed on the Oracle Exadata Database Machines page.

Figure 1: Oracle Exadata Database Machines


One of these Exadatas was added to OEM without a problem; for the other, the schematic didn’t come up.  Just to make this a bit easier, the top database machine is the one that is missing the schematic.  Figure 2 shows what I’m referring to: what shows up is only the legend, but no graphic for the machine itself.

Figure 2: Missing schematic


So, the question has to be asked: why did this happen? Everything was configured correctly!  Well, I was so puzzled by this that I had to ask Oracle for a bit of help.  What we (Oracle and I) identified was that the databasemachine.xml file had a duplicate entry in it.

Fixing the databasemachine.xml file is explained in a bit.  First, you need to remove the database machine that is having the problem from OEM.  This can be done from the Database Machine menu (Figure 3). 

Figure 3: Database Machine menu


After clicking the Remove Target… option, you are taken to a page that provides options for removing the Exadata from OEM (Figure 4).   To remove the whole Exadata, the default radio button is fine.  Just make sure you select the option to Unsubscribe SNMP Subscriptions. Once you check the box, make sure all the passwords are correct.   Then click the Yes button in the right-hand corner.

Figure 4: Remove Database Machine


When the Exadata is removed, OEM will drop you back into the All Targets page.  At this point, the Exadata has been removed from OEM.

Now, you are back to a clean slate for adding the Exadata to OEM.  Before you rush to re-add the machine, we need to take a look at the databasemachine.xml file.  What you are looking for is any possible duplicates in the file.

Note: The EMOMS.log file on the OEM server helps in resolving this issue.

In reviewing the EMOMS.log, you will want to search for any interactions between OEM and the name of your Exadata.  In searching the EMOMS.log you may find an Oracle error; in my case the error was ORA-00001.  OEM couldn’t add a record to the EM_DBM_RACK_COMPONENTS table due to a PK error (Listing 1).

Listing 1: PK Error in EMOMS.log

ERROR model.DbmDiscovery logp.251 – ORA-00001: unique constraint (SYSMAN.EM_DBM_RACK_COMPONENTS_PK) violated

Knowing that there is a primary key error in the EM_DBM_RACK_COMPONENTS table should lead you to think that the problem is in the databasemachine.xml file.   The databasemachine.xml file can be edited with any text editor; it is an XML file.  A simple search for each of the values in the <ADMINNAME> tags reveals the duplicates.  Listing 2 shows what I found in the databasemachine.xml file I’m working with.

Listing 2: Duplicates in databasemachine.xml

   <ITEM ID="1">

        <TYPE>ib</TYPE>

        <ADMINNAME>sw-ib1</ADMINNAME>

         <ADMINIP>10.*.*.*</ADMINIP>

         <ULOCATION>1</ULOCATION>

         <UHEIGHT>1</UHEIGHT>

    </ITEM>

    <ITEM ID="13">

        <TYPE>ib</TYPE>

        <ADMINNAME>sw-ib1</ADMINNAME>

        <ADMINIP>10.*.*.*</ADMINIP>

         <ULOCATION>20</ULOCATION>

         <UHEIGHT>1</UHEIGHT>

    </ITEM>

One of these entries in the databasemachine.xml file has to go; however, do not remove the whole <ITEM> element.  Just remove the name from the <ADMINNAME></ADMINNAME> tags in the second entry.  In this example, <ITEM ID="13"> is the duplicate, and I removed the name from its <ADMINNAME> tags.

After editing the databasemachine.xml file,  copy it to all the nodes within the Exadata.  This way you make sure that all the machines have the same file.  To do this,  use SCP (Listing 3).

Listing 3: SCP to copy to other compute nodes

scp ./databasemachine.xml root@db02:/opt/oracle.SupportTools/onecommand

Now, with the databasemachine.xml file in place, you can try and re-add the Exadata to OEM.  Again, keep in mind the easiest way to add the Exadata is to follow the cookbook (here).  That way you make sure all the points are covered.

Once you rediscover the Exadata in OEM, you should be able to pull the Database Machine Schematic up correctly (Figure 5). 

Figure 5: The correct view of Database Machine Schematic


Now that the Exadata has been imported into OEM successfully, you can add all the associated targets that are running on the Exadata (i.e., database, ASM, listener, etc.).

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Exadata, OEM
Categories: DBA Blogs

My first Backup and Recovery Quiz

Hemant K Chitale - Tue, 2014-01-28 09:45
I have created a short 5 question quiz on Oracle Backup and Recovery at QuizBean.  Please test it.  You can submit your comments against this blog post.  (I want to see if QuizBean is reliable).
.
.
.
Categories: DBA Blogs

Partner Webcast – Oracle Virtual Assembly Builder with Oracle VM 3.2

Increased operating costs, inefficient hardware utilization and rapidly expanding data centers have made virtualization the most compelling IT technology in years. Virtualization for desktop and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Most Simplest Example of Big Data Ever

Pakistan's First Oracle Blog - Mon, 2014-01-27 22:03
When I have to travel by air, I go to a specific website to search for and book my flight. Every time I visit that website, I encounter the same experience: I have to go through the same steps, according to my same preferences, every time I need to book a flight.

Here is what I always do to book my flight at that website on each visit:

  1. Sign in with my userid
  2. Type in my origin and destination (90% of time same)
  3. Type in dates
  4. Sort the results by lowest price
  5. Search for flights with lowest price and least hops (Direct are ideal)
  6. I prefer red eye flights, so I also look for them.
  7. After searching for a long time, I come close to what I want and then book it.

Despite my log-in, and despite my usual pattern of purchases, the website doesn't know what I am looking for. After my log-in, all they present to me are my personal details, limited travel history, reward points, and offers based on those reward points.

In other words, they present me with Small Data.

How nice would it be, if after sign-up they ask me in simple words:

Hey Fahd, are you still your old miser-self and want to get the cheapest direct flight(s) from Darwin to Sydney at that godforsaken time at night?

I would then just laugh, click yes, give my travel dates, and they would present me with options tailored to my habits. I know they would have to go through lots of data, analyze it, slice it, dice it, and then come up with that statement above. I know I am not their only customer, so they would have to do the same for hundreds of thousands of customers with more complex patterns, along with sifting through data from their partner airlines, maybe hotels, car rentals, etc.

In other words, they have to go through Big Data.
Categories: DBA Blogs

Two great SQL Developer presentations by Jeff Smith at NEOOUG January 2014 meeting

Grumpy old DBA - Mon, 2014-01-27 18:55
Jeff Smith, aka @thatJeffSmith, presented two topics at our January 24th, 2014 meeting.  A well-attended event with a smooth delivery, even in the face of some not-so-nice weather.  Cleveland in January and February - well, it's always somewhat unpredictable.

Here is the meeting setup information:

Jeff Smith is a Senior Principal Product Manager in the Database Tools group at Oracle. He works on the team that brings you SQL Developer, SQL Developer Data Modeler, APEX Listener, the Public Cloud database offering, and 11gR2 XE. Jeff has presented online and in-person at many of the top user groups around the world for the past decade including ODTUG, IOUG, RMOUG, NEOUG, Suncoast Oracle User Group, and more.

Two presentations ( with plenty of time for questions/discussion ):
1) What’s New in Oracle SQL Developer and SQL Developer Data Modeler v4.0
2) Oracle SQL Developer Tips and Tricks

You can download the powerpoints from here:

Presentation 1: Sql Developer Tips and Tricks

Presentation 2: Whats New in Oracle Sql Developer 4.0 and Data Modeler
Categories: DBA Blogs

Upcoming Events...

Tom Kyte - Mon, 2014-01-27 15:08
I took some time off from the road at the beginning of 2014 - getting ready to get back on the road again, lots of trips scheduled from February till the end of June.  This is just looking at public events for the next three months, hope to see you at one of them!
Virtual Developer Day Feb 4thI'll be speaking, along with Jonathan Lewis, about the last 20 years or so of technology and where we think it is all going as part of the Virtual Developer Day - an entirely online event with many technical speakers.  Something to definitely look into!RMOUG Feb 6th-7thAt the beginning of February, I'll be at RMOUG Feb 6th and 7th.  Truth be told, this won't be a trip for me - I'm located in Denver, CO now (since the beginning of the year).  I will just be returning from Toronto at the beginning of the week (I know all of the best places to go in February!).

Ohio Feb 11th-13thOf course, if it is February - it must be Ohio.  That is the best place to fly into and then drive around in during that month :)  I'll be coming down from Ottawa (another 'must see' location in February) and spending Feb 11th in Dayton, Feb 12th in Cincinnati and the 13th in Columbus.  Hitting all of the user groups across the state.

Real World Performance Netherlands, Germany and Bulgaria Feb 18th-21stThen I'm off to continue the Real World Performance tour with Andrew Holdsworth and Graham Wood.  This month we are hitting the town of Breda in the Netherlands on the 17th, Munich in Germany on the 18th and finally Sofia in Bulgria on the 21st.
Hotsos Symposium Mar 3-6thIf it is March, then it is definitely time for the annual Hotsos Symposium.  I'll be returning this year with a few sessions.  It should be nice and warm in Dallas in March!
Ireland March 11th-12thI'll be in Ireland on March 11th for the Ireland OUG conference.  I'll be speaking on performance and the new In-Memory capabilities coming in Oracle Database 12.1.0.2 soon.  On the 12th - I'll be delivering a one day Optimizer Master Class.
Real World Performance London and LatviaThe tour continues with a date of March 26th in London and March 28th in Riga Latvia.  
Belgrade Serbia, April 1st-2ndI'll post more details when I have them, but I'll be doing a two day seminar on the optimizer and developer related topics at this time.  It will be in conjunction with Oracle University so you can monitor their event website for information soon too.
IOUG Collaborate April 7th-11thI'll be doing a one day optimizer seminar and four other sessions during the week.  April in Las Vegas - perfect time to be there!  And Admin Savage will be speaking too - love the mythbusters!
Oracle Conference on the James, April 24thI'll be doing a keynote and a technical session in the morning of the 24th for the VOUG and HROUG.  Lots of great speakers here - definitely check this out if you are in the Virginia area!
Manila, Philippines - week of April 28thI'll be doing some events in Manila this week - details to follow soon!
Categories: DBA Blogs

Do AWR Reports Show the Whole Picture?

Pythian Group - Mon, 2014-01-27 09:59

The AWR report is a great source of aggregated information on the top activities happening in our databases. I use data collected in AWR quite often, and obviously the easiest way of getting the data out of AWR is by running the AWR report. In most cases that’s not an issue, but there are certain scenarios where it hides the information one is looking for, just because of how it’s designed.

If I’m trying to collect information about top queries by physical reads, I would normally look at the “SQL ordered by Reads” section and this is what I’d see:

(Screenshot: AWR “SQL ordered by Reads” section.)

I have the top SQLs by physical reads – just what I’ve been looking for (except for the fact that the AWR report covers only one of my RAC nodes).

But wait a second, what if there are queries that don’t use bind variables? This might be a problem, as each query would have its own SQL_ID and they probably wouldn’t make it into the TOP10, just because each of them is treated separately. Nothing to worry about – AWR also collects FORCE_MATCHING_SIGNATURE values (read this blog post to understand why I know they would help) and we can use them to identify and group “similar” statements; we just need a custom script to do that.

Here I use my custom script to report the TOP 20 SQL_IDs by physical reads in the last 7 days (and I’m reporting data from both RAC nodes in the same list) – you can see the top few SQLs are the same as reported in the AWR report, but because I’m reporting database-wide statistics instead of instance-wide as AWR does, I have other SQLs on the list too. I’ve also included 2 additional columns (a rough sketch of such a query follows the list below):

  • DIFF_PLANS – number of different PLAN_HASH_VALUE values reported for this SQL_ID, and if only one is found – it shows the actual PLAN_HASH_VALUE
  • DIFF_FMS - number of different FORCE_MATCHING_SIGNATURE values reported for this SQL_ID, and if only one is found – it shows the actual FORCE_MATCHING_SIGNATURE
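The custom script itself isn’t shown in the post; as a rough illustration only (not the author’s actual script), a database-wide top-20-by-physical-reads query against the AWR history views might look something like this:

SELECT *
FROM  (SELECT s.sql_id,
              SUM(s.disk_reads_delta)                    physical_reads,
              COUNT(DISTINCT s.plan_hash_value)          diff_plans,
              COUNT(DISTINCT s.force_matching_signature) diff_fms
       FROM   dba_hist_sqlstat  s,
              dba_hist_snapshot sn
       WHERE  sn.snap_id         = s.snap_id
       AND    sn.dbid            = s.dbid
       AND    sn.instance_number = s.instance_number
       AND    sn.begin_interval_time > SYSDATE - 7
       GROUP  BY s.sql_id
       ORDER  BY physical_reads DESC)
WHERE  ROWNUM <= 20;

Grouping by force_matching_signature or plan_hash_value instead of sql_id gives the other two variants discussed below.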

(Screenshot: custom script output, aggregated by SQL_ID.)

Now, I can adjust the custom script to aggregate the data by FORCE_MATCHING_SIGNATURE instead of SQL_ID. I’ll still keep the DIFF_PLANS column and will add a new one – DIFF_SQLID.

(Screenshot: custom script output, aggregated by FORCE_MATCHING_SIGNATURE.)

The situation is a little bit different now. Notice how the second row reports FORCE_MATCHING_SIGNATURE = 0; this typically shows PL/SQL blocks that execute SQL statements and aggregate statistics from them, so we’re not interested in those. Otherwise, the original report by SQL_ID showed quite accurate data in this situation, and my suspicions regarding the misuse of literal values where binds should be used didn’t materialize. Could I be missing anything else? Yes – even the FORCE_MATCHING_SIGNATURE could be misleading in identifying TOP resource consumers: you can write two completely different SQLs (i.e. “select * from dual a” and “select * from dual b”) that do the same thing and use the same execution plan. Let’s query the top consumers by PLAN_HASH_VALUE to check this theory!

(Screenshot: custom script output, aggregated by PLAN_HASH_VALUE.)

I’ve highlighted the third row, as the same PLAN_HASH_VALUE is reported for 20 different SQL_IDs, which allowed it to take third place in the TOP list by physical reads (actually second place, as PLAN_HASH_VALUE=0 is ignorable). The next query expands the third row:

(Screenshot: SQL_IDs for the highlighted plan.)

And here are all the SQL statements:

(Screenshot: all SQL statements for the plan.)

What I have here is 20 different views generated by Oracle Discoverer that query the database using exactly the same execution plan. A closer look revealed the views included hardcoded query parameters (date intervals for reporting), but in the end, this was the same query! It’s the TOP 2 query by physical reads in the database, and if I tune it, all 20 Discoverer views will benefit.

I think one of the drawbacks of AWR reports is that they are not able to identify such situations; it would be great if the user could choose the column by which the aggregation is done. In the situation I described, I was able to identify one of the top queries by physical reads only when I aggregated data by PLAN_HASH_VALUE.

Categories: DBA Blogs

Learn a bit Oracle Scheduler with BROKEN state

Surachart Opun - Mon, 2014-01-27 03:15
On Oracle Database, DBAs can check for broken jobs created with DBMS_JOB via the *_JOBS.BROKEN column. However, DBAs have been moving from DBMS_JOB to DBMS_SCHEDULER, so I was curious: how do you check for a broken job in Oracle Scheduler (DBMS_SCHEDULER)? As it turns out, DBAs can check the *_SCHEDULER_JOBS.STATE column.

STATE - VARCHAR2(15) - Current state of the job:
  • DISABLED
  • RETRY SCHEDULED
  • SCHEDULED
  • RUNNING
  • COMPLETED
  • BROKEN
  • FAILED
  • REMOTE
  • SUCCEEDED
  • CHAIN_STALLED

When does Oracle Scheduler change STATE to BROKEN?
To answer that, DBAs should know two more columns as well.

FAILURE_COUNT - NUMBER - Number of times the job has failed to run
MAX_FAILURES - NUMBER - Number of times the job will be allowed to fail before being marked broken
The *_SCHEDULER_JOBS.STATE column will change to "BROKEN" when the *_SCHEDULER_JOBS.FAILURE_COUNT value reaches the *_SCHEDULER_JOBS.MAX_FAILURES value. Really? Yes... but I have an example to show it. I tested on 11.2.0.1.
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

SQL>
SQL>
SQL> show user;
USER is "DEMO"
SQL>
SQL> BEGIN
  2    DBMS_SCHEDULER.create_job (
  3      job_name        => 'test_my_job',
  4      job_type        => 'PLSQL_BLOCK',
  5      job_action      => 'BEGIN DBMS_STATS.gather_schema_stats(''SCOTT1''); END;',
  6      start_date      => SYSTIMESTAMP,
  7      repeat_interval => 'freq=daily',
  8      end_date        => NULL,
  9      enabled         => TRUE);
 10  END;
 11  /

PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              1

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              2

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;
JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              3

SQL>
SQL>

I created an Oracle Scheduler job; now let's set the max_failures attribute. First idea: set MAX_FAILURES = FAILURE_COUNT.
 SQL> exec DBMS_SCHEDULER.set_attribute (name => 'test_my_job', attribute => 'max_failures',value => 3);

PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_sched
uler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              3            3

SQL>
SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL>  select job_name, state,enabled, failure_count, max_failures from user_sched
uler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              3            3

Case 1: setting MAX_FAILURES = FAILURE_COUNT after the job has already run... nothing changed in the user_scheduler_jobs view. I also checked the *_scheduler_job_log and *_scheduler_job_run_details views - nothing changed there either.
I believe DBAs should disable the job, then set the MAX_FAILURES attribute and enable the job again, because after enabling, the FAILURE_COUNT value is reset to 0.
Anyway, I tried MAX_FAILURES = 4.
SQL> exec  DBMS_SCHEDULER.run_job (job_name           => 'test_my_job',use_current_session=> FALSE);

PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_sched
uler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              3            3

SQL>
SQL> exec DBMS_SCHEDULER.set_attribute (name => 'test_my_job', attribute => 'max_failures',value => 4);
PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_sched
uler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              3            4

SQL> exec  DBMS_SCHEDULER.run_job (job_name           => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    BROKEN          FALSE             4            4

Now *_SCHEDULER_JOBS.STATE = "BROKEN". How do we fix the "BROKEN" state? Just enable the job.
SQL> EXEC DBMS_SCHEDULER.enable(name=> 'test_my_job');

PL/SQL procedure successfully completed.

SQL>
SQL> select job_name, state,enabled, failure_count, max_failures from user_sched
uler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              0            4

SQL>
SQL> exec  DBMS_SCHEDULER.run_job (job_name           => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_sched
uler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              1            4

How do we unset the MAX_FAILURES value?
SQL> exec DBMS_SCHEDULER.set_attribute (name => 'test_my_job', attribute => 'max_failures',value=> NULL);
BEGIN DBMS_SCHEDULER.set_attribute (name=> 'test_my_job', attribute=> 'max_failures',value=> NULL); END;

      *
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00307: too many declarations of 'SET_ATTRIBUTE' match this call
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored


SQL>
SQL>
SQL>
SQL>
SQL>
SQL> exec DBMS_SCHEDULER.set_attribute (name => 'test_my_job', attribute => 'max_failures',value=> 0);
BEGIN DBMS_SCHEDULER.set_attribute (name => 'test_my_job', attribute => 'max_failures',value => 0); END;

*
ERROR at line 1:
ORA-27465: invalid value 0 for attribute MAX_FAILURES
ORA-06512: at "SYS.DBMS_ISCHED", line 4370
ORA-06512: at "SYS.DBMS_SCHEDULER", line 2850
ORA-06512: at line 1

SQL> exec DBMS_SCHEDULER.set_attribute (name => 'test_my_job', attribute=> 'max_failures',value => '');

PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_sched
uler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              1

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              5

Just run "DBMS_SCHEDULER.set_attribute (name => 'test_my_job', attribute => 'max_failures', value => '')".

Going back to Case 1: here is the right way to set the MAX_FAILURES attribute (disable the job, set the attribute, then enable it again).
SQL> EXEC DBMS_SCHEDULER.disable(name => 'test_my_job');

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> select job_name, state,enabled, failure_count, max_failures from user_sched
uler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    DISABLED        FALSE             5

SQL> exec DBMS_SCHEDULER.set_attribute (name =>  'test_my_job', attribute =>  'max_failures',value => 5);

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    DISABLED        FALSE             5            5

SQL>
SQL> EXEC DBMS_SCHEDULER.enable(name=> 'test_my_job');

PL/SQL procedure successfully completed.

SQL>
SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              0            5

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> exec  DBMS_SCHEDULER.run_job (job_name           => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    BROKEN          FALSE             5            5

Read more in the Oracle documentation.

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

LAST_CALL_ET in V$SESSION

Hemant K Chitale - Sun, 2014-01-26 09:21
The LAST_CALL_ET column in V$SESSION, for an ACTIVE session, "represents the elapsed time (in seconds) since the session has become active."

So, if you are running a DML operation, it represents how long the DML has been running.

What if you are running a SELECT that is sending a large number of rows to a client?  If the query runs for 10 minutes and you repeatedly query it, does LAST_CALL_ET show the time since the query began?

What if you are running a PL/SQL procedure or an anonymous PL/SQL block which calls one or more SQL statements?  At any instant in time the session may be running one of the SQLs.  Does LAST_CALL_ET queried at that instant show the time that SQL has been running?
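One way to explore these questions is simply to watch the column while such a statement or block is running; a minimal sketch (the SID value is a made-up example):

SELECT sid, status, sql_id, last_call_et
FROM   v$session
WHERE  sid = 123;   -- substitute the SID of the session you are watching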


Think about it.

UPDATE 01-Feb-14 :  Here are two discussions around LAST_CALL_ET :

https://community.oracle.com/thread/1115312?start=0&tstart=0
https://community.oracle.com/thread/2584291?start=0&tstart=0

Three earlier posts by me :

16-Aug-2010

17-Aug-2010

22-Aug-2010

.
.
.

Categories: DBA Blogs

My Article on SearchOracle about Exadata Administration

Pakistan's First Oracle Blog - Sat, 2014-01-25 20:48
The massive explosion in data, combined with an ever-increasing hunger to boost processing speeds while keeping costs low, has led to engineered systems in data centers.

Read More
Categories: DBA Blogs

Partner Webcast - Oracle Mobility Business Awareness - 29 Jan 2014

According to many analysts, the number of mobile device users will exceed the number of computer users by the end of 2013, and mobile will become the dominant platform for accessing applications and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

SS Enqueue and sort segment request waits on 8.1.7.4 after trying to drop temp tablespace

Bobby Durrett's DBA Blog - Fri, 2014-01-24 16:48

Participated in a long forum thread that I started with this title:

SS Enqueue and sort segment request waits on 8.1.7.4 after trying to drop temp tablespace

We have had quite the saga on an old 8.1.7 database.  Really, it has been going on since December 26th.  It was pretty cool to see the interaction with the Oracle community while at the same time getting some good help from Oracle support on such an old release.  I definitely appreciate everyone’s efforts!

The core issue relates to dropping objects in a dictionary managed tablespace that have many small extents.  I had similar issues a number of years ago that I mentioned on my old blog:

April 24, 2008 post

May 19, 2008 post

June 2, 2008 post

Bottom line: if you still have a system with dictionary managed tablespaces, don’t let your tables and indexes have small extent sizes, or you may end up with an object that is difficult to drop, and various issues will ensue if you do drop it.

- Bobby
Categories: DBA Blogs

Log Buffer #356, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-01-24 07:38

This Log Buffer edition drops right from the clouds to check what bloggers are doing with their Oracles, SQL Servers, and MySQLs. Enjoy!

Oracle:

The Oracle E-Business Suite provides a large number of diagnostic and monitoring solutions.

A free e-book: Consolidate Your Applications on Oracle WebLogic Server.

How to read the values of bind variables for currently executing statements real time monitoring kicks in.

Problems when doing a V2V or P2V of a windows server to Oracle VM.

Oracle Social Relationship Management: Access the Specialization Exams Today.

SQL Server:

Skipping Columns (& other tricks) Using BCP Format Files (SQL Spackle) .

Search multiple SQL Server Error Logs at the same time.

An Overview of Project Conversion – Level 14 of the Stairway to Integration Services.

Free e-book: Performance Tuning with SQL Server Dynamic Management Views.

Calculating the Median Value within a Partitioned Set Using T-SQL.

MySQL:

Percona Server: Improve Scalability with Percona Thread Pool.

MySQL and PostgreSQL JSON functions: Do they differ much?

Select into outfile and load data infile are not complementary by default.

MySQL Workbench 6.1.1 BETA has been released.

Noel Herrick has covered some strange default behavior around nulls in MySQL.

Categories: DBA Blogs