Feed aggregator

Masterclass – 1

Jonathan Lewis - Mon, 2018-08-20 04:42

A recent thread on the Oracle developer community database forum raised a fairly typical question with a little twist. The basic question is “why is this (very simple) query slow on one system when it’s much faster on another?” The little twist was that the original posting told us that “Streams Replication” was in place to replicate the data between the two systems.

To make life easy for remote trouble-shooters the poster had supplied (for each system) the output from SQL Monitor when running the query, the autotrace output (which shows the predicate section that SQL Monitor doesn’t report), and the session statistics for the query run, plus some statistics about the single table in the query, the index used in the plan, and the column on which that index was based.

Here, with a little cosmetic editing (and a query that has clearly been camouflaged by the OP), is the information supplied for the faster database, where the query took about 30 seconds to complete.


SELECT c1, c2, c3, c4, c5, c6, c7, c8..  
FROM TAB1  
WHERE STS IN ( 'A', 'B')  
AND cnt < '4'  
AND dt < sysdate  
and rownum <=1;  
  
Sql_monitor and stats from DB1  
******************************  
  
Global Information  
------------------------------  
 Status              :  DONE (ALL ROWS)             
 Instance ID         :  1                           
 Execution Started   :  08/17/2018 08:31:22         
 First Refresh Time  :  08/17/2018 08:31:22         
 Last Refresh Time   :  08/17/2018 08:31:53         
 Duration            :  31s                         
 Program             :  sqlplus.exe                 
 Fetch Calls         :  1                           
  
Global Stats  
===============================================================================  
| Elapsed |   Cpu   |    IO    | Concurrency | Fetch | Buffer | Read  | Read  |  
| Time(s) | Time(s) | Waits(s) |  Waits(s)   | Calls |  Gets  | Reqs  | Bytes |  
===============================================================================  
|      33 |    3.00 |       30 |        0.08 |     1 |   102K | 38571 | 301MB |  
===============================================================================  
  
SQL Plan Monitoring Details (Plan Hash Value=715774357)  
======================================================================================================================================================================================  
| Id |            Operation            |          Name           |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read  | Read  | Activity |       Activity Detail        |  
|    |                                 |                         | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs  | Bytes |   (%)    |         (# samples)          |  
======================================================================================================================================================================================  
|  0 | SELECT STATEMENT                |                         |         |       |         1 |    +31 |     1 |        1 |       |       |          |                              |  
|  1 |   COUNT STOPKEY                 |                         |         |       |         1 |    +31 |     1 |        1 |       |       |          |                              |  
|  2 |    INLIST ITERATOR              |                         |         |       |         1 |    +31 |     1 |        1 |       |       |          |                              |  
|  3 |     TABLE ACCESS BY INDEX ROWID | TAB1                    |    114K | 33399 |        32 |     +0 |     2 |        1 | 38377 | 300MB |    96.77 | Cpu (1)                      |  
|    |                                 |                         |         |       |           |        |       |          |       |       |          | db file sequential read (16) |  
|    |                                 |                         |         |       |           |        |       |          |       |       |          | read by other session (13)   |  
|  4 |      INDEX RANGE SCAN           | TAB1_STS_IDX            |    115K |   723 |        30 |     +2 |     2 |     118K |   194 |   2MB |     3.23 | read by other session (1)    |  
======================================================================================================================================================================================  
  
Execution Plan (autotrace)
---------------------------------------------------------------------------------------------------------  
| Id  | Operation                     | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |  
---------------------------------------------------------------------------------------------------------  
|   0 | SELECT STATEMENT              |                         |     1 |  1847 | 33399   (1)| 00:03:14 |  
|*  1 |  COUNT STOPKEY                |                         |       |       |            |          |  
|   2 |   INLIST ITERATOR             |                         |       |       |            |          |  
|*  3 |    TABLE ACCESS BY INDEX ROWID| TAB1                    |   114K|   201M| 33399   (1)| 00:03:14 |  
|*  4 |     INDEX RANGE SCAN          | TAB1_STS_IDX            |   114K|       |   723   (1)| 00:00:05 |  
---------------------------------------------------------------------------------------------------------  
  
Predicate Information (identified by operation id):  
---------------------------------------------------  
   1 - filter(ROWNUM<=1)  
   3 - filter("cnt"<'4' AND "dt"<SYSDATE@!)  
   4 - access("STS"='A' OR "STS"='B')  
  
  
Table stats 
-----------
table_name    num_rows        blocks  empty_blocks  chain_cnt  avg_row_len             
TAB1        79,654,925    22,416,917             0          0        1,847          
  
column_stats(STS)
------------------
table_name  column_name  num_distinct  num_nulls    density  avg_col_len   
TAB1        STS                     5          0  6.2049E-9            2         
  
Index_stats(on STS)
-------------------
index_name    leaf_blocks  distinct_keys  avg_leaf_blocks_per_key  avg_data_blocks_per_key  clustering_factor    num_rows
TAB1_STS_IDX      487,939              5                   97,587                4,458,874         22,294,372  78,308,939   
 
Session stats
-------------
process last non-idle time              1,534,508,966
session connect time                    1,534,508,966
logical read bytes from cache             839,663,616
cell physical IO interconnect bytes       316,055,552
physical read bytes                       316,055,552
physical read total bytes                 316,055,552
file io wait time                          17,044,083
session pga memory                          8,643,880
session pga memory max                      8,643,880
temp space allocated (bytes)                4,194,304
session uga memory                          1,755,696
session uga memory max                      1,755,696
buffer is pinned count                        135,743
table fetch by rowid                          117,519
non-idle wait count                           107,301
session logical reads                         102,500
consistent gets                               102,450
consistent gets from cache                    102,448
no work - consistent read gets                102,368
buffer is not pinned count                    101,741
free buffer inspected                          43,458
free buffer requested                          38,592
physical read total IO requests                38,581
physical read IO requests                      38,581
physical reads                                 38,581
physical reads cache                           38,579
hot buffers moved to head of LRU               37,258
bytes sent via SQL*Net to client                7,370
bytes received via SQL*Net from client          6,869
redo size                                       5,536
undo change vector size                         4,432
DB time                                         3,166
non-idle wait time                              2,962
user I/O wait time                              2,954
table fetch continued row                       2,423


And here’s the equivalent information from the slower database where the query took more than 9 times as long (4 minutes 42 seconds) to complete.


Global Information  
------------------------------  
 Status              :  DONE (ALL ROWS)           
 Instance ID         :  1                         
 Execution Started   :  08/17/2018 08:21:47       
 First Refresh Time  :  08/17/2018 08:21:47       
 Last Refresh Time   :  08/17/2018 08:26:29       
 Duration            :  282s                      
 Module/Action       :  SQL*Plus/-                
 Program             :  sqlplus.exe               
 Fetch Calls         :  1                         
  
Global Stats  
================================================================  
| Elapsed |   Cpu   |    IO    | Fetch | Buffer | Read | Read  |  
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  | Reqs | Bytes |  
================================================================  
|     287 |    8.76 |      278 |     1 |   110K | 110K | 858MB |  
================================================================  
  
SQL Plan Monitoring Details (Plan Hash Value=715774357)  
======================================================================================================================================================================================  
| Id |            Operation            |          Name           |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity |        Activity Detail        |  
|    |                                 |                         | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |          (# samples)          |  
======================================================================================================================================================================================  
|  0 | SELECT STATEMENT                |                         |         |       |         1 |   +282 |     1 |        1 |      |       |          |                               |  
|  1 |   COUNT STOPKEY                 |                         |         |       |         1 |   +282 |     1 |        1 |      |       |          |                               |  
|  2 |    INLIST ITERATOR              |                         |         |       |         1 |   +282 |     1 |        1 |      |       |          |                               |  
|  3 |     TABLE ACCESS BY INDEX ROWID | TAB1                    |    142K | 40211 |       282 |     +1 |     2 |        1 | 109K | 854MB |   100.00 | db file sequential read (277) |  
|  4 |      INDEX RANGE SCAN           | TAB1_STS_IDX            |    142K |   892 |       280 |     +3 |     2 |     118K |  491 |   4MB |          |                               |  
======================================================================================================================================================================================  
  
Execution Plan (autotrace) 
---------------------------------------------------------------------------------------------------------  
| Id  | Operation                     | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |  
---------------------------------------------------------------------------------------------------------  
|   0 | SELECT STATEMENT              |                         |     1 |  1847 | 40211   (1)| 00:08:03 |  
|*  1 |  COUNT STOPKEY                |                         |       |       |            |          |  
|   2 |   INLIST ITERATOR             |                         |       |       |            |          |  
|*  3 |    TABLE ACCESS BY INDEX ROWID| TAB1                    |   141K|   249M| 40211   (1)| 00:08:03 |  
|*  4 |     INDEX RANGE SCAN          | TAB1_STS_IDX            |   141K|       |   892   (1)| 00:00:11 |  
---------------------------------------------------------------------------------------------------------  
  
Predicate Information (identified by operation id):  
---------------------------------------------------  
   1 - filter(ROWNUM<=1)  
   3 - filter("cnt"<'4' AND "dt"<SYSDATE@!)  
   4 - access("STS"='A' OR "STS"='B')  
            
Table stats
-----------
table_name    num_rows        blocks  empty_blocks  chain_cnt  avg_row_len             
TAB1        79,447,350   22,318,667            710    537,597        1,847  
  
column_stats(STS)
-----------------
table_name  column_name  num_distinct  num_nulls    density  avg_col_len  
TAB1        STS                     5          0  6.1789E-9            2  
  
Index_stats(on STS)
-------------------
index_name    leaf_blocks  distinct_keys  avg_leaf_blocks_per_key  avg_data_blocks_per_key  clustering_factor    num_rows
TAB1_STS_IDX      493,152              5                   98,630                4,382,625         21,913,127  79,106,263   


Session Stats
-------------
process last non-idle time              1,534,508,200
session connect time                    1,534,508,200
logical read bytes from cache             903,790,592
physical read total bytes                 899,629,056
cell physical IO interconnect bytes       899,629,056
physical read bytes                       899,629,056
file io wait time                         277,881,742
session pga memory                          8,586,744
session pga memory max                      8,586,744
temp space allocated (bytes)                4,194,304
session uga memory max                      1,690,184
session uga memory                          1,690,184
buffer is pinned count                        129,148
table fetch by rowid                          117,521
session logical reads                         110,326
consistent gets                               110,276
consistent gets from cache                    110,276
no work - consistent read gets                110,071
non-idle wait count                           109,879
free buffer requested                         109,830
physical read IO requests                     109,818
physical reads cache                          109,818
physical reads                                109,818
physical read total IO requests               109,818
buffer is not pinned count                    109,577
free buffer inspected                          70,740
hot buffers moved to head of LRU               31,910
DB time                                        28,203
non-idle wait time                             27,788
user I/O wait time                             27,788
dirty buffers inspected                        19,067
bytes sent via SQL*Net to client               14,927
bytes received via SQL*Net from client         10,607
redo size                                       5,440
undo change vector size                         4,432
table fetch continued row                       3,660

There are all sorts of noteworthy details in these two sets of information – some of the “how to see what’s in front of you” type, some of the “be careful, Oracle can deceive you” type. So I’m going to walk through the output picking up a number of background thoughts before commenting on the answer to the basic question.

We’ll start with the object statistics, then we’ll look at the SQL Monitor plan to see if we can determine where the extra time was spent, then we’ll try to work out what else the plan might be telling us about the code and data, then we’ll summarise my observations to make a claim about the difference in behaviour.

Object statistics

The table has 79M rows with an average row length of 1,847 bytes, using 22M blocks. With an 8KB block size and that average row size we would expect to see about 3 rows per block, and that’s fairly consistent with the value of rows / blocks.  We don’t know what the sample size was for this stats collection, but it might have been a “small” sample size rather than the 100% you would get from using auto_sample_size, so that might also explain some discrepancy between the two different views on the figures.

We note that the secondary system reports a chain_cnt in excess of 500,000 rows. The only (unhacked) way that this figure could be set would be through a call to the (long-deprecated) analyze command to compute statistics, and once the figure is there it won’t go away unless you use the analyze command again to delete statistics.  We don’t know the history of how and when the figure got there, so it doesn’t guarantee that there are any chained or migrated rows, nor does the zero in the table stats on the primary system guarantee that it doesn’t have any chained or migrated rows – all it tells us is that at some time someone used the wrong command to gather stats and there were some (less than 1%) migrated or chained rows in the table at the time. (The optimizer will use this figure in its arithmetic if it is set, by the way, so it may affect some of the cost calculations – but not by a huge amount.)
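
As a side note – if you wanted to check whether there really are any chained or migrated rows right now, without corrupting the CBO statistics by re-running analyze, here’s a minimal sketch of the traditional check. It assumes the CHAINED_ROWS table has been created by running $ORACLE_HOME/rdbms/admin/utlchain.sql, and it would be a slow operation on a 79M row table:

rem     "list chained rows" does not touch the optimizer statistics

analyze table tab1 list chained rows into chained_rows;

select  count(*)
from    chained_rows
where   table_name = 'TAB1'
;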

The column sts reports 5 distinct values, no nulls, and a density of 6.2e-9 which is roughly half of 1/79M: so we have a frequency histogram on the column (in the absence of a histogram the density would be 1/5, and it’s reasonable to assume that the number of buckets was either the default or set to something larger than 5).  We were told that the system was running 11.2.0.4 – so we have to be a little suspicious about the accuracy of this histogram since it will have been sampled with a very small sample if the stats collection had used auto_sample_size. (12c will use a specially optimized 100% sample for frequency and top-N histograms when using auto_sample_size)
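
If you wanted to double-check what the histogram looks like and how it was gathered, a quick dictionary query would do it – this is just a sketch, and the owner name is obviously invented:

select
        column_name, num_distinct, density, histogram, sample_size
from    dba_tab_col_statistics
where   owner       = 'APP_OWNER'       -- hypothetical schema name
and     table_name  = 'TAB1'
and     column_name = 'STS'
;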

The index on sts has a clustering_factor of around 22M, which is similar to the number of blocks in the table – and that’s not too surprising if there are only a very small number of distinct values in the column – especially when the presence of the histogram suggests that there’s a skew in the data distribution. (There’s more to come on that point.) The number of leaf blocks is about 500,000 (being lazy about arithmetic) – just as a side note this suggests the index is running in a fairly inefficient state (and probably hasn’t been created with the compress keyword).

Doing a rough estimate of the index arithmetic :  the avg_col_len for sts is 2, so the space required for each index entry will be 13 bytes (2 for the column, 7 for the rowid content, 2 for the row header, 2 for the row directory entry).  Take off the block overhead, and assume the index is running at a “typical” 70% space usage per leaf block and you might expect 5,600 bytes used per leaf block for current index data and that works out to about 430 index entries per leaf block.  With 79M rows in the table that should lead to 79M/430 leaf blocks – i.e. roughly 184,000 leaf blocks, not 493,000 leaf blocks.  However it’s not unusual to see an index with extremely repetitive values operating at something like 50% utilisation, which would bring our estimate to about 310 rows per leaf block and 255,000 leaf blocks – which is still off by a factor of nearly 2 compared to what we’ve actually got. Again, of course, we have to be a little bit cautious about these statistics – we don’t know the sample size, and Oracle uses a surprisingly small number of blocks to sample the stats for an index.
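
If you want to reproduce that arithmetic, here it is as a single query – the 5,600 and 4,000 are my assumed usable bytes per leaf block (70% and 50% utilisation respectively, after block overhead), and 13 is the estimated bytes per index entry:

select
        trunc(5600/13)                          entries_per_leaf_70,    -- ~430
        round(79000000 / trunc(5600/13))        est_leaf_blocks_70,     -- ~184,000
        trunc(4000/13)                          entries_per_leaf_50,    -- ~307
        round(79000000 / trunc(4000/13))        est_leaf_blocks_50      -- ~257,000
from    dual
;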

Where’s the time?

The SQL Monitor gives us a very clear report of where most of the time went – almost all of it was spent in I/O waits, and almost all of the wait time was in the “table access by index rowid” operation in both cases; but the primary system did 38,377 read requests while the secondary did 109,000 read requests in that line of the plan. It is significant, though, that quite a lot (40%) of the ASH samples for that operation on the primary system were for “read by other session” rather than “db file sequential read”:  in other words some other session(s) were doing a lot of work to pull the data we wanted into the buffer cache at the same time. Apart from the fact that a wait for “read by other session” often means we spend less time waiting than if we’d had to do the read ourselves, the presence of this wait suggests that other sessions may be “pre-caching” data for us so that we end up having to read far fewer blocks than would otherwise be the case.
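
(If you’re licensed for the diagnostic pack you can produce this sort of breakdown for yourself by aggregating the ASH data for the statement – the sql_id below is a placeholder you’d take from the SQL Monitor report:)

select  event, count(*)
from    v$active_session_history
where   sql_id = '&m_sql_id'
group by
        event
order by
        count(*) desc
;

rem     a NULL event in this output means the session was on (or queued for) the CPU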

It’s important to note at the same time that the difference in Buffer Gets for the two systems was small – 102K vs. 110K – and the “Rows (actual)” was the same in both cases – 118K entries returned by the index range scan.  Both systems did similar amounts of “logical” work, to process similar amounts of data; the difference was the fraction of the work that required a buffer get to turn into a disc read or a “wait for other read”.

We might want to pick up a few more numbers to corroborate the view that the only significant difference was in the volume of data cached and not some more esoteric reason.  Some of the session statistics should help.


DB1:  table fetch by rowid                          117,519
DB2:  table fetch by rowid                          117,521

DB1:  undo change vector size                         4,432
DB2:  undo change vector size                         4,432

DB1:  redo size                                       5,536
DB2:  redo size                                       5,440

DB1:  session logical reads                         102,500
DB2:  session logical reads                         110,326

DB1:  no work - consistent read gets                102,368
DB2:  no work - consistent read gets                110,071

DB1:  table fetch continued row                       2,423
DB2:  table fetch continued row                       3,660

The number of rows fetched by rowid is virtually identical, and we have done (virtually) no work that generates undo or redo – such as delayed block cleanout. There are no statistics shown for “undo records applied”, so we probably haven’t done very much work to get a read-consistent view of the data – though we can’t be sure that the OP didn’t simply fail to copy that stat into the list supplied. Either way, the similarity of “session logical reads” to “no work – consistent read gets” confirms the hypothesis that we didn’t do any (significant) work visiting undo blocks.

We do see a few percent increase in the number of buffer gets (“session logical reads”) – but this may reflect the fact that the actual pattern of data in one table is slightly different from the pattern in the other – thanks to ASSM the process id of the process that inserts a row into a table can affect (within a small range, usually) the block into which the row is inserted; but at 102,000 / 110,000 buffer gets to visit 117,500 rows in the table we can see that there must be some table blocks that hold two (or more) rows that are identified as consecutive in the index – leading to some row visits being achieved through a buffer pin and without a fresh buffer get. You’ll note that this argument is consistent with the small variation in clustering_factor (not that we entirely trust those figures) for the two indexes – the system with the lower clustering_factor for the index has done fewer buffer gets to acquire the same number of rows from the table – by definition that means (assuming default setup) that there are more cases where “the next table row” is in the same block as the current row.

The final figure I’ve shown is “table fetch continued row”: according to the table stats (which we don’t necessarily trust completely) 500K out of 79M rows are chained/migrated, which is roughly 0.6%. We know that we’re visiting about 117K table rows, so we might expect (on average) roughly the same percentage to be migrated/chained, viz: 0.6% of 117K = 743 – so there’s a little anomaly there (or an error in our assumption about “average” behaviour). It’s worth noting, though, that a “continued fetch” would have to do an extra buffer visit (and maybe an extra physical read). You might wonder, of course, how there could be any chained or migrated rows when the average row length is 1,847 bytes, but in a follow-up post the OP did say there were 3 BLOB columns in the table, which can cause havoc with interpreting stats for all sorts of reasons. We don’t have any information about the table structure – particularly whether the columns in the query appear before or after the BLOB columns in the table definition – and we don’t know what processing takes place (for example, maybe the 3rd BLOB is only updated after the sts column has been changed to a value other than A or B, which would help to explain why we shouldn’t be using the 0.6% calculation above as a table-wide average), so we’re not in a position to say why any of the continued fetches appear; but there are several guesses we could make and they’re all easy to check.

Plan observations

If we examine the row estimates we see 114K for the faster plan and 141K for the slower plan (with a closely corresponding variation in cost). The difference in estimates simply tells us that the histogram gathering probably used a small sample size and is subject to a lot of variation. The scale of the estimates tells us that the A and B rows are probably rare – call it 125K out of 79M rows, about 0.16% of the total rows in the table – so it would not be surprising to see consecutive samples for the histogram producing significant variations in estimates.

The other interesting thing we can note in the SQL Monitor plan is that the Execs column for the index range scan / table access operations in both plans shows the value 2: this means that there are no “A” rows that match the other predicates: Oracle has run the “A” iteration to completion then started the “B” iteration and found a row on the second iteration. Is this a coincidence, or should it always happen, or is it only fairly likely to happen? Is it possible to find times when there are no suitable “B” rows but plenty of suitable “A” rows? The final predicate in the query is “rownum <= 1” – so the query is picking one row with no explicit strategy for choosing a specific row when there are multiple appropriate rows. Does this mean that we could optimise the query by rewriting it as a “union all” that searched for B rows first and A rows second? We just don’t know enough about the processing.
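
Purely as a sketch of the idea – not something to apply without knowing the processing requirements, and with the column list abbreviated because we don’t know the real one – the “union all” rewrite might look like this. The stopkey on the outer query block means the second branch need never be executed if the first branch finds a row:

select  *
from    (
        select  c1, c2, c3                      -- etc.
        from    tab1
        where   sts = 'B'
        and     cnt < '4'
        and     dt  < sysdate
        and     rownum <= 1
        union all
        select  c1, c2, c3                      -- etc.
        from    tab1
        where   sts = 'A'
        and     cnt < '4'
        and     dt  < sysdate
        and     rownum <= 1
        )
where   rownum <= 1
;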

In passing, we can’t get Oracle to search the B rows first by changing the order of the in-list.  If you have a predicate like “where sts in ({list of literal values})” the optimizer will sort the list to eliminate duplicates before rewriting the predicate as a list of disjuncts, and then (if the path uses an iterator) iterate through the list in the resulting order.
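
(That’s the detail I built a little model to check, and it’s easy to verify on any convenient indexed table – whatever order you write the list in, the predicate section comes back sorted:)

explain plan for
select * from t1 where sts in ('B','A');

select * from table(dbms_xplan.display);

rem     the predicate section reports: access("STS"='A' OR "STS"='B')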

In the absence of information about the way in which the data is processed we can only say that we need to avoid visiting the table so frequently. To do this we will need to add one or both of the columns from the other predicates to the index – this might double the size of the index, but eliminate 95% of the potential I/O.  For example, if we discover that A and B rows are initially created “into the future” and this query is looking for a row whose “time has come” so that it can be processed and changed to an X row (say), then there may only ever be a tiny number of rows where sts = ‘A’ and dt < sysdate, and an index on (sts, dt) would be a perfect solution (especially if it were compressed on at least the first column).
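
In that case the index creation would be a one-liner – the index name is invented, and compress 1 is there because sts is massively repetitive:

create index tab1_sts_dt_idx on tab1(sts, dt) compress 1;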

The OP has declared a reluctance to add an index to the table – but there are two points to go with this indexing strategy. Since we know there’s a frequency histogram and the A and B rows appear to be rare values – what benefit is there in having an index that covers the other values (unless 2 of the remaining 3 are also rare)?  How about creating a function-based index that represents only the rare values and modifying this code to use that index – e.g.

create index t1_id on t1 (
        case sts when 'A' then sts when 'B' then sts end,
        case sts when 'A' then dt  when 'B' then dt  end
) compress 1
;

select  *
from    t1
where   case sts when 'A' then sts when 'B' then sts end in ('A','B')
and     case sts when 'A' then dt  when 'B' then dt  end < sysdate
and     cnt < '4'
and     rownum <= 1
/


You might be able to replace a huge index (79M rows worth) with this small one (120K rows worth) unless there’s too much other code in the system that has to be adjusted or the sts column is actually the target of a referential integrity constraint; at worst you could add this index knowing that it’s generally not going to consume much in the way of extra space or processing resources and is going to save you a lot of work for this query.

Summary

The execution plan from SQL Monitor points very strongly to the fast system benefiting from having a lot of the relevant data cached and constantly being reloaded into the cache by other sessions, while the slow system has to acquire almost all of its data by real physical reads. Most of the reads address the table, so engineering an index that is low-cost and (fairly) high precision is the most significant strategy for reducing the workload and time on the slow system.

The fact that all the potential A rows fail to match the full predicate set suggests that there MAY be some aspect of the processing that means it would be more efficient to check for B rows before checking for A rows.

Given the massive skew in the data distribution a function-based index that hides all the non-popular values (or even all the values that are not of interest to this query) may be the most cost-effective way of adding a very effective index to the table with minimal resource requirements.

And finally

It’s taken me more than 4 hours to write this note after spending about 10 minutes reading through the information supplied by the OP and identifying and cross-checking details. A small fraction of the 4+ hours was spent creating a little model to check something I had to say about in-lists, the rest of it was trying to write up a coherent description covering all the details.

That’s it for today, but I may have missed a couple of points that I noticed as I read the OP’s posting; and I will want to do a little cosmetic work on this article and check grammar and spelling over the next couple of days.

Update (already)

Shortly after I posted this blog note the owner of the question reported the following as the distribution of values for the sts column:

 STS   COUNT(*)
---- ----------
   A          6
   E        126
   D        866
   C   80212368
   B     117631

Two things stand out about these figures – first, it’s an ideal example of a case where it would be nice to avoid having index entries for the 80 million ‘C’ rows. Depending on the coding and testing costs, the supportability of the application and the possible side effects, this could be done with a function-based index, or by introducing a virtual column that hides the ‘C’s behind a NULL, or by changing the code to use NULL instead of ‘C’.

Secondly – I made a comment about rewriting the code to query the B’s before the A’s. But we saw that Oracle worked through about 117,000 rows before returning a result: so the figures above tell us that it must have worked through almost all the B’s, and the handful of A’s was just a tiny little blip before it got to the B iteration – so there’s no point in making that change.

My suggestion for the function-based index above could be modified in two ways, of course – add two more “when”s to each “case” to capture the D and E rows, or take the opposite viewpoint and create an index on expressions like: “case sts when ‘C’ then to_char(null) else sts end”. The benefit of the latter approach is that you don’t have to modify the index definition (and rebuild the index) if a new legal value for sts appears.
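
For completeness, here’s a sketch of the virtual column variant – the names are invented, and virtual columns are available in the 11.2 the OP is running (though building the index will still have to scan the whole table once):

alter table tab1 add (
        sts_rare generated always as (
                case sts when 'C' then to_char(null) else sts end
        ) virtual
);

create index tab1_sts_rare_idx on tab1(sts_rare);

rem     only the (relatively few) non-'C' rows get index entries; queries must
rem     reference sts_rare - or the exact case expression - to use the index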

data guard drift

Tom Kyte - Mon, 2018-08-20 03:26
I have set up DR in maximum performance mode. At any point in time, how can I find the extent of the data lag between DC and DR?
Categories: DBA Blogs

DATABASE STORAGE USAGE IN ASM

Tom Kyte - Mon, 2018-08-20 03:26
Hey guys, I am working in a RAC environment and have been struggling to get a perfect query to view ASM storage usage for each specific database on the ASM. Simply, I want a result that shows a database and how much size that databas...
Categories: DBA Blogs

Materialized View Add Columns

Tom Kyte - Mon, 2018-08-20 03:26
Hi Tom, I have added a new column to my master table and I want this column to be reflected in my materialized view. Is this possible without re-creating it? I have to fetch all columns from my master table in my view. Please suggest.
Categories: DBA Blogs

Event SQL*Net break/reset to client in refresh of materialized view

Tom Kyte - Mon, 2018-08-20 03:26
Dear, I have had the event "SQL*Net break/reset to client" in a session (job) refreshing a materialized view. Even after killing the session and the running job, the new session again remains on this event. The solution always is to re-create th...
Categories: DBA Blogs

[BLOG] Bug 5001589 : Users propagation from OID to EBS 12.1.3 Custom Container

Online Apps DBA - Sun, 2018-08-19 12:28

Learn how to synchronize Active Directory users to OID for users who need the EBS and OBIEE applications, using MS Active Directory integration. [BLOG] Bug 5001589: Users propagation from OID to EBS 12.1.3 Custom Container. Visit: https://k21academy.com/ebsoam24 where we discuss all the issues and challenges faced in the process and how to troubleshoot and fix […]

The post [BLOG] Bug 5001589 : Users propagation from OID to EBS 12.1.3 Custom Container appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Automating service creation in Oracle APEX

Joel Kallman - Sun, 2018-08-19 11:47
Oracle Academy is a division in Oracle whose mission is to advance "computer science education globally to drive knowledge, innovation, skills development, and diversity in technology fields."  The programs that Oracle Academy offers are free to accredited secondary schools, technical/vocational schools, and two and four-year colleges and universities.

Oracle Application Express (APEX) has been hosted by Oracle Academy and used for many years to facilitate the delivery of curriculum for database design and programming, SQL, PL/SQL and even APEX.  To facilitate their business requirements, they had custom extensions written (in part, by our team) into the core APEX product.  But this type of solution becomes difficult to maintain, because every release of APEX requires migration and rewrite of these custom extensions.

A number of months ago, I met with the Oracle Academy team to try and encourage them to move the service creation of APEX workspaces into their own custom interfaces.  I told them that everything is provided to easily and programmatically create APEX workspaces, create database users (schemas), create administrators and developers within each workspace, and even pre-load APEX applications and custom database objects in each workspace, at the time of service creation.

Naturally, they asked for an example script to accomplish all of these tasks, which I authored and shared with them.  Because this type of logic is very relevant for many other purposes, e.g., testing, continuous integration, DevOps, I wish to share these same annotated scripts here.  If you are someone who has been using APEX for 10 years, you won't learn anything new here.  But for those just getting started with APEX, I hope you find it fruitful.

Starting with an empty Oracle database, the requirements are to develop a single script which will:

  1. Create a tablespace (storage) for each workspace
  2. Create and associate multiple database users (schemas) with each workspace
  3. Create an APEX workspace
  4. Create an administrator and developer account in each workspace, restricting the access of each developer to only one of the schemas mapped to the workspace.
  5. Create custom database objects in each schema
And here we go...
set define '^' verify on
set concat on
set concat .

Rem
Rem Title: Demo_of_Provisioning.sql
Rem
Rem Description: This script will demonstrate use of all of the APIs and SQL statements necessary
Rem to create tablespaces, data files, database users, and APEX workspaces. From this
Rem example, it is assumed that the Academy team will be able to develop their own custom
Rem provisioning process and no longer rely upon custom extensions to the
Rem Oracle Application Express software.
Rem
Rem Notes: It is assumed that this script is run as user SYSTEM.
Rem
Rem
Rem MODIFIED (MM/DD/YYYY)
Rem jkallman 08/19/2018 - Created


column foo1 new_val LOG1
select 'Demo_of_Provisioning_'||to_char(sysdate,'YYYY-MM-DD_HH24-MI-SS')||'.log' as foo1 from sys.dual;
spool ^LOG1

timing start "Create demonstration tablespaces, schemas and workspaces"


--
-- Predefine the path used for tablespace datafile creation. If you're using Oracle Managed Files
-- or grid infrastructure, then this isn't necessary and you'll remove the 'datafile' portion of
-- the CREATE TABLESPACE statements
--
define DATAFILE_PATH='/u01/app/oracle/oradata/'


--
-- Step 1: Create the Tablespaces
--
-- Since Oracle Academy will want to group multiple database users/workspaces into a single tablespace, we'll need to
-- first create the tablespaces, and then the database users in step 2
--
create tablespace OACADEMY_DATA1 datafile '^DATAFILE_PATH.iacademy_01.dbf'
size 50M autoextend on next 50M maxsize 2G extent management local
autoallocate segment space management auto;

create tablespace OACADEMY_DATA2 datafile '^DATAFILE_PATH.iacademy_02.dbf'
size 50M autoextend on next 50M maxsize 2G extent management local
autoallocate segment space management auto;



--
-- Step 2: Create the Database Users (Schemas)
--
-- It's up to you how you wish to group database users/schemas into different tablespaces.
-- You can do it by total number or free space or ratio of users to tablespaces. It's all up to you.
--
begin
    -- DB users in DATA1 tablespace
    execute immediate 'create user OACADEMY_DB10 identified by "' || sys.dbms_random.string('X',30) ||
                      '" default tablespace OACADEMY_DATA1 quota unlimited on OACADEMY_DATA1 temporary tablespace TEMP account lock password expire';

    execute immediate 'create user OACADEMY_DB11 identified by "' || sys.dbms_random.string('X',30) ||
                      '" default tablespace OACADEMY_DATA1 quota unlimited on OACADEMY_DATA1 temporary tablespace TEMP account lock password expire';

    -- DB users in DATA2 tablespace
    execute immediate 'create user OACADEMY_DB20 identified by "' || sys.dbms_random.string('X',30) ||
                      '" default tablespace OACADEMY_DATA2 quota unlimited on OACADEMY_DATA2 temporary tablespace TEMP account lock password expire';

    execute immediate 'create user OACADEMY_DB21 identified by "' || sys.dbms_random.string('X',30) ||
                      '" default tablespace OACADEMY_DATA2 quota unlimited on OACADEMY_DATA2 temporary tablespace TEMP account lock password expire';
end;
/



--
-- Step 3: Create the APEX Workspaces
--
-- Create the APEX workspaces and associate a default schema with each
--
begin
    apex_instance_admin.add_workspace(
        p_workspace_id   => null,
        p_workspace      => 'OACADEMY1',
        p_primary_schema => 'OACADEMY_DB10');
end;
/

begin
    apex_instance_admin.add_workspace(
        p_workspace_id   => null,
        p_workspace      => 'OACADEMY2',
        p_primary_schema => 'OACADEMY_DB20');
end;
/


--
-- Step 4: Add additional schemas to the existing workspaces
--
--
begin
    apex_instance_admin.add_schema(
        p_workspace => 'OACADEMY1',
        p_schema    => 'OACADEMY_DB11');
end;
/

begin
    apex_instance_admin.add_schema(
        p_workspace => 'OACADEMY2',
        p_schema    => 'OACADEMY_DB21');
end;
/


--
-- Show a quick summary of the workspaces and schemas
--
column workspace_name format a50
column schema format a40
select workspace_name, schema from apex_workspace_schemas;



--
-- Step 5: Create an administrator account and a developer account in each workspace
--
--
begin
    -- We must set the APEX workspace security group ID in our session before we can call create_user
    apex_util.set_security_group_id( apex_util.find_security_group_id( p_workspace => 'OACADEMY1'));

    apex_util.create_user(
        p_user_name               => 'BOB',
        p_email_address           => 'bob@bob.com',
        p_default_schema          => 'OACADEMY_DB10',
        p_allow_access_to_schemas => 'OACADEMY_DB10',
        p_web_password            => 'change_me',
        p_developer_privs         => 'ADMIN:CREATE:DATA_LOADER:EDIT:HELP:MONITOR:SQL' ); -- workspace administrator

    apex_util.create_user(
        p_user_name               => 'JUNE',
        p_email_address           => 'june@june.com',
        p_default_schema          => 'OACADEMY_DB11',
        p_allow_access_to_schemas => 'OACADEMY_DB11',
        p_web_password            => 'change_me',
        p_developer_privs         => 'CREATE:DATA_LOADER:EDIT:HELP:MONITOR:SQL' ); -- developer

    commit;
end;
/

begin
    -- We must set the APEX workspace security group ID in our session before we can call create_user
    apex_util.set_security_group_id( apex_util.find_security_group_id( p_workspace => 'OACADEMY2'));

    apex_util.create_user(
        p_user_name               => 'ALICE',
        p_email_address           => 'alice@alice.com',
        p_default_schema          => 'OACADEMY_DB20',
        p_allow_access_to_schemas => 'OACADEMY_DB20',
        p_web_password            => 'change_me',
        p_developer_privs         => 'ADMIN:CREATE:DATA_LOADER:EDIT:HELP:MONITOR:SQL' ); -- workspace administrator

    apex_util.create_user(
        p_user_name               => 'AUGUST',
        p_email_address           => 'august@august.com',
        p_default_schema          => 'OACADEMY_DB21',
        p_allow_access_to_schemas => 'OACADEMY_DB21',
        p_web_password            => 'change_me',
        p_developer_privs         => 'CREATE:DATA_LOADER:EDIT:HELP:MONITOR:SQL' ); -- developer

    commit;
end;
/

--
-- Show a quick summary of the APEX users
--
column workspace_name format a30
column user_name format a20
column email format a25
column is_admin format a10
column is_application_developer format a10
select workspace_name, user_name, email, is_admin, is_application_developer from apex_workspace_apex_users order by 1,2;


--
-- Install custom database objects in each schema by simply running one or more SQL scripts. Note that the
-- script below is something that you author and maintain. It will do all the DML and DDL you have in it,
-- and we simply iterate through the schemas and run the script each time.
--
alter session set current_schema = OACADEMY_DB10;
@custom.sql

alter session set current_schema = OACADEMY_DB11;
@custom.sql

alter session set current_schema = OACADEMY_DB20;
@custom.sql

alter session set current_schema = OACADEMY_DB21;
@custom.sql

timing stop
spool off


And to complete the lifecycle, I also authored a simple SQL script which will cleanup everything created above - removing the APEX workspaces, database users and tablespaces.

set define '^' verify on
set concat on
set concat .

Rem
Rem Title: Demo_of_Cleanup.sql
Rem
Rem Description: This script will cleanup all objects created by script Demo_of_Provisioning.sql
Rem
Rem Notes: It is assumed that this script is run as user SYSTEM.
Rem
Rem **** THIS SCRIPT IS DESTRUCTIVE **** - It will drop tablespaces, data files, workspaces, schemas, etc.
Rem
Rem
Rem MODIFIED (MM/DD/YYYY)
Rem jkallman 08/19/2018 - Created


column foo1 new_val LOG1
select 'Demo_of_Cleanup_'||to_char(sysdate,'YYYY-MM-DD_HH24-MI-SS')||'.log' as foo1 from sys.dual;
spool ^LOG1


timing start "Remove all workspaces, schemas and tablespaces"

--
-- Step 1: Remove the APEX Workspaces
--
--
begin
    apex_instance_admin.remove_workspace(
        p_workspace => 'OACADEMY1' );
end;
/

begin
    apex_instance_admin.remove_workspace(
        p_workspace => 'OACADEMY2' );
end;
/



--
-- Step 2: Drop the database users
--
--
drop user OACADEMY_DB10 cascade;
drop user OACADEMY_DB11 cascade;
drop user OACADEMY_DB20 cascade;
drop user OACADEMY_DB21 cascade;



--
-- Step 3: Drop the tablespaces
--
--
drop tablespace oacademy_data1 including contents and datafiles;
drop tablespace oacademy_data2 including contents and datafiles;

timing stop
spool off

[BLOG] E-Business Suite (EBS) on OCI | Block Volume Cloning | Automated Way

Online Apps DBA - Sun, 2018-08-19 10:20

New to Cloning? Want to learn more and get an overview of cloning Oracle E-Business Suite on Oracle Cloud Infrastructure (OCI)? [BLOG] E-Business Suite (EBS) on OCI | Block Volume Cloning | Automated Way. Visit: https://k21academy.com/ebscloud23 & let’s discuss more on Block Volume Cloning […]

The post [BLOG] E-Business Suite (EBS) on OCI | Block Volume Cloning | Automated Way appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

oracle utl_file encoding from utf8; missing character

Tom Kyte - Sat, 2018-08-18 14:46
I'd like to export a large amount of text data from the db to a file. The character set in the db is UTF8. The expected result in the file is ISO8859P2 or MSWIN1250. My db settings: SELECT * FROM v$nls_parameters; 1 NLS_LANGUAGE HUNGARIAN 0 ...
Categories: DBA Blogs

How do I determine how much storage will be required for NUMBER(p, s)?

Tom Kyte - Sat, 2018-08-18 14:46
The Oracle Database SQL Language Reference states, "Each NUMBER value requires from 1 to 22 bytes." I was curious how much space a NUMBER(10, 0) would take, so I did: SQL> create table t (a number(10, 0)) 2 / Table created. SQL>...
Categories: DBA Blogs

Exception when executing Bulk Collect

Tom Kyte - Sat, 2018-08-18 14:46
I am using bulk collect with save exceptions; it runs fine sometimes and sometimes it errors out. The error is as follows - BEGIN * ERROR at line 1: ORA-00028: your session has been killed ORA-00028: your session has been killed ORA-...
Categories: DBA Blogs

Performance issue after database upgrade from 11.2.0.4 to 12.1.0.2

Tom Kyte - Sat, 2018-08-18 14:46
Hi Team, We have recently upgraded an SAP database from 11.2.0.4 to 12.1.0.2, after which we encountered performance issues that did not exist on the previous version (11.2.0.4). Properly setting sga_target/sga_maxsize did not help. Properly setting db_file_m...
Categories: DBA Blogs

DBMS_SQLTUNE_UTIL0

Tom Kyte - Sat, 2018-08-18 14:46
Hi Tom, I have a question regarding the DBMS_SQLTUNE_UTIL0 package. This seems to be a helper package. Is it okay to use this package directly in code, or should the use of such internal packages be avoided? I did not find any documentation on this packa...
Categories: DBA Blogs

Space Fragmentation in a table which has only inserts

Tom Kyte - Sat, 2018-08-18 14:46
Hi Tom, This is Praveen. 1) I have a table x which has only inserts and rare updates. My doubt: until now I have heard and read that there will be space fragmentation from delete operations done on a table, and I couldn't find out how the ...
Categories: DBA Blogs

definition of v$lock table

Tom Kyte - Sat, 2018-08-18 14:46
Please find below the query output: select OWNER,OBJECT_NAME,OBJECT_TYPE from DBA_OBJECTS where OBJECT_NAME in ('V$LOCK'); --- PUBLIC V$LOCK SYNONYM. As per the above query output V$LOCK is a SYNONYM, so I checked in DBA_SYNONYMS for further informa...
Categories: DBA Blogs

Rownum = 1 no longer returning results after upgrade to 12.1

Tom Kyte - Sat, 2018-08-18 14:46
Hi, I have a view that I created in 11g; after we upgraded the database to 12.1 it is no longer returning the dates for the view. I pulled the SQL for the view and confirmed that it no longer works. It does return the comment. I need to use the r...
Categories: DBA Blogs

[VLOG] Oracle WebLogic Administration: Admin Server and Managed Server

Online Apps DBA - Sat, 2018-08-18 14:07

Learn in a more interactive way! [Video-Blog] Oracle WebLogic Administration: Admin Server and Managed Server. Visit: https://k21academy.com/weblogic17 & know about: ✔ What is an Admin Server ✔ What is a Managed Server ✔ How the Admin and Managed Servers relate to each other […]

The post [VLOG] Oracle WebLogic Administration: Admin Server and Managed Server appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle SOA Suite 12c : Audit Levels (Off, Production, Development)

Dietrich Schroff - Sat, 2018-08-18 08:02
Inside the Enterprise Manager (URL: http://yourhost:7001/em) you can configure SOA Suite Audit Levels:

Choose "Common Properties"

Click on "Change Profile"
The Oracle documentation describes the three audit levels as follows:

  • Off:
    No business flow instance tracking and payload tracking information is collected. Drilling down into a flow shows the components and their status. Using this setting limits database growth, and also helps reduce latency and improve throughput.
  • Production:
    Flow and audit event information is collected and stored in the database. Drilling down into a flow provides the flow trace. The flow trace shows the exact sequence of component interaction, and also shows component statuses. Drilling down into a BPEL instance shows the execution of BPEL activities, and their statuses.
    As the flow-trace audit trail may not be required for all composites, you may want to set the audit level to Off at the SOA Infrastructure level, and set the audit level to Production for the required composites.
  • Development:
    This option collects payload details in addition to flow and audit events. This setting is not recommended for a production environment, as it will impact performance and result in rapid database growth. This level is useful largely for testing and debugging purposes only.

Partner Webcast – Monitoring your cloud-based Infrastructure with Oracle Management Cloud

Modern IT supports today’s requirements by making IT infrastructure more accessible and available on demand.  IT infrastructure can be made available both in your...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Partitioning -- 3b : More Indexes on Partitioned Table

Hemant K Chitale - Sat, 2018-08-18 03:23
In the previous blog post, I had demonstrated a Global (Non-Partitioned) Index and a Local (Partitioned) Index.  A Global Index itself may be partitioned.  A Local Index is actually Equi-Partitioned with the Table.

Let me demonstrate a Global Partitioned Index on the same table now.

Given the SALES_DATA table which is partitioned by SALE_DATE, I might want to support queries by CUSTOMER_ID.  What if I have millions of customers?  My CUSTOMERS table might be partitioned by CUSTOMER_ID.  Can I have an Index on SALES_DATA that is also partitioned by CUSTOMER_ID ?

SQL> l
1 create index sales_data_glbl_part_ndx_2
2 on sales_data (customer_id)
3 global
4 partition by range (customer_id)
5 (
6 partition p_1 values less than (501000) tablespace new_indexes,
7 partition p_2 values less than (1000001) tablespace new_indexes,
8 partition p_3 values less than (1500001) tablespace new_indexes,
9 partition p_4 values less than (maxvalue)
10* )
SQL> /

Index created.

SQL>


This is a Global Partitioned Index which is partitioned on a left prefix (CUSTOMER_ID) of the index column(s) – i.e. a “prefixed” index. Oracle does not support Global Partitioned Indexes that are non-prefixed.  Let me demonstrate the difference between the two.

SQL> drop index sales_data_glbl_part_ndx_2;

Index dropped.

SQL>
SQL> l
1 create index sales_data_glbl_part_ndx_2
2 on sales_data (customer_id, sale_date)
3 global
4 partition by range (customer_id)
5 (
6 partition p_1 values less than (501000) tablespace new_indexes,
7 partition p_2 values less than (1000001) tablespace new_indexes,
8 partition p_3 values less than (1500001) tablespace new_indexes,
9 partition p_4 values less than (maxvalue)
10* )
SQL> /

Index created.

SQL>
SQL>
SQL> l
1 create index sales_data_glbl_part_ndx_3
2 on sales_data (sale_date)
3 global
4 partition by range (customer_id)
5 (
6 partition p_1 values less than (501000) tablespace new_indexes,
7 partition p_2 values less than (1000001) tablespace new_indexes,
8 partition p_3 values less than (1500001) tablespace new_indexes,
9 partition p_4 values less than (maxvalue)
10* )
SQL> /
partition by range (customer_id)
*
ERROR at line 4:
ORA-14038: GLOBAL partitioned index must be prefixed


SQL>
SQL> !oerr ora 14038
14038, 00000, "GLOBAL partitioned index must be prefixed"
// *Cause: User attempted to create a GLOBAL non-prefixed partitioned index
// which is illegal
// *Action: If the user, indeed, desired to create a non-prefixed
// index, it must be created as LOCAL; otherwise, correct the list
// of key and/or partitioning columns to ensure that the index is
// prefixed

SQL>


My first attempt above is valid because the Index is partitioned on CUSTOMER_ID although it is defined on (CUSTOMER_ID, SALE_DATE).  Thus, it is partitioned on a left prefix. The second index attempt fails because the Partition Key (CUSTOMER_ID) is not a left prefix (part of) the Index key (SALE_DATE alone).

Remember: The Index I've created is partitioned on CUSTOMER_ID while the Table itself is partitioned on SALE_DATE.
This Index is a Global Index so any Partition Maintenance operation (TRUNCATE, DROP, SPLIT, MERGE) may result in the *entire* Index becoming UNUSABLE unless the UPDATE INDEXES clause is used in the Partition Maintenance DDL statement.  This was demonstrated with the Primary Key index being a Global Index in my earlier blog post.
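
For example, a maintenance operation written as:

alter table sales_data drop partition P_2018 update indexes;

would leave the Global Indexes USABLE (at the cost of doing the index maintenance as part of the DDL), whereas the same statement without the UPDATE INDEXES clause would leave them UNUSABLE.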


What about Bitmap Indexes ?  Can a Bitmap Index on a partitioned Table be either Global or Local ?  Oracle does not support Global Bitmap Indexes.  A Bitmap Index on a Partitioned Table has to be Local.

SQL> create bitmap index sales_data_glbl_ndx_4
2 on sales_data(product_id)
3 /
on sales_data(product_id)
*
ERROR at line 2:
ORA-25122: Only LOCAL bitmap indexes are permitted on partitioned tables


SQL> !oerr ora 25122
25122, 00000, "Only LOCAL bitmap indexes are permitted on partitioned tables"
// *Cause: An attempt was made to create a global bitmap index on a partioned
// table.
// *Action: create a local bitmap index instead.

SQL>
SQL> create bitmap index sales_data_lcl_ndx_2
2 on sales_data(product_id) local
3 /

Index created.

SQL>
SQL> select partition_name, tablespace_name
2 from user_tab_partitions
3 where table_name = 'SALES_DATA'
4 order by partition_position
5 /

PARTITION_NAME TABLESPACE_NAME
------------------------------ ------------------------------
P_2018 TBS_YEAR_2018
P_2019 TBS_YEAR_2019
P_2020 TBS_YEAR_2020
P_MAXVALUE USERS

SQL> select partition_name, tablespace_name, status
2 from user_ind_partitions
3 where index_name = 'SALES_DATA_LCL_NDX_2'
4 order by partition_position
5 /

PARTITION_NAME TABLESPACE_NAME STATUS
------------------------------ ------------------------------ --------
P_2018 TBS_YEAR_2018 USABLE
P_2019 TBS_YEAR_2019 USABLE
P_2020 TBS_YEAR_2020 USABLE
P_MAXVALUE USERS USABLE

SQL>


As demonstrated earlier, a Local Index automatically inherits the Partition Name and Tablespace Name from the Table Partitions.

To summarise, these are the Indexes on my SALES_DATA table :

SQL> l        
1 select index_name, index_type, partitioned, tablespace_name, status
2 from user_indexes
3* where table_name = 'SALES_DATA'
SQL> /

INDEX_NAME                     INDEX_TYPE                  PAR TABLESPACE_NAME                STATUS
------------------------------ --------------------------- --- ------------------------------ --------
SYS_C0017514                   NORMAL                      NO  HEMANT                         VALID
SALES_DATA_LCL_NDX_1           NORMAL                      YES                                N/A
SALES_DATA_GLBL_PART_NDX_2     NORMAL                      YES                                N/A
SALES_DATA_LCL_NDX_2           BITMAP                      YES                                N/A


SQL>


Remember that for a Partitioned Index (either Global or Local), the Tablespace Name and Status are attributes at the Partition definition level, not at the Index definition level.


Categories: DBA Blogs
