Feed aggregator

Java Codes for inserting list of directories in database

Tom Kyte - Thu, 2016-09-01 01:26
Hi Tom, this is in the Java programming language and I'm going to insert the directory path and filename into the DB, but my problem is that a directory path without a filename doesn't insert into the DB; the column FILE_NAMES is supposed to be null but it does...
Categories: DBA Blogs

Hierarchical query

Tom Kyte - Thu, 2016-09-01 01:26
Hi: I have a hierarchical query where I would like to know all parents, grandparents, their parents, etc., i.e. I would like to know as the connect by value changes. I know in 9i we have sys_connect_by_path, but I need that functionality in 8.1.7. Is...
Categories: DBA Blogs

How to split the big file contains personal data

Tom Kyte - Thu, 2016-09-01 01:26
Hi Tom, good evening. Can you please guide me on the requirement below? The input is a huge data file containing personal data; it is not fixed-size or delimited, it can be in any format. This data has to be split 1. Based on t...
Categories: DBA Blogs

Analyzing 27 TB of database for upgrade from (10.2.0.5 to 11.2.0.4)

Tom Kyte - Thu, 2016-09-01 01:26
Hi, we are planning to upgrade a 27-terabyte database from 10.2.0.5 to 11.2.0.4. We are not sure what to look at before upgrading, as this is a very large database and has been running live for users for many years. Please suggest things and scripts t...
Categories: DBA Blogs

Data Loss

Pete Finnigan - Thu, 2016-09-01 01:26

Quite obviously (well, it's obvious to me!) one of the areas I am very interested in is data loss / data theft / data security and of course specifically Oracle security. We spend a lot of time looking at customers....[Read More]

Posted by Pete On 31/08/16 At 08:17 PM

Categories: Security Blogs

Oracle Open World 2016 – What GoldenGate sessions are there?

DBASolved - Wed, 2016-08-31 23:00

Well, it is that time of year again; the streets of San Francisco will be crowded with members of the Oracle community! As everyone in the Oracle community descends on the bay area, there will be excitement about the things that will be announced this year. Sure, a lot of it is going to be about “cloud” and what direction Oracle is taking with its “cloud” strategy. Besides “cloud,” there will be a lot of good things to take in as well. As I look through the online session catalog for 2016, I’m interested in the topics related to Oracle GoldenGate.

This year there appears to be a good number of Oracle GoldenGate sessions at Oracle Open World – 21 scheduled during the event, to be specific. I have listed a few of the ones I think will be interesting and will make an attempt to attend; always check the session catalog, because you may not agree with what I think is good.

  • CON6632 – Oracle GoldenGate for Big Data
  • CON7318 – Getting Started with Oracle GoldenGate
  • CON6551 – New Oracle GoldenGate 12.3 Services Architecture (beta stuff)
  • CON6633 – Accelerate Cloud Onboarding with Oracle GoldenGate Cloud Service
  • CON6634 – Faster Design, Development and Deployment with Oracle GoldenGate Studio (should be updated from last year’s session)
  • CON6558 – Best Practice for High Availability and Performance Tuning for Oracle GoldenGate
  • UGF616 – Oracle GoldenGate and Apache Kafka: A Deep Dive into Real-Time Data Streaming
  • THT7817 – Real-Time and Batch Data Ingestion into Data Lake with Oracle GoldenGate Cloud Service
  • UGF5120 – Oracle GoldenGate and Baseball: Five Fundamentals Before Jumping to the Cloud

As we are within two weeks of Oracle Open World, I hope everyone is ready to go and looking forward to seeing others from the community.

Enjoy!

@dbasolved
http://about.me/dbasolved


Filed under: General
Categories: DBA Blogs

How Many ASM Disks Per Disk Group And Adding vs. Resizing ASM Disks In An All-Flash Array Environment

Kevin Closson - Wed, 2016-08-31 16:43

I recently posted a 4-part blog series that aims to inform readers that, in an All-Flash Array environment (e.g., XtremIO), database and systems administrators should consider opting for simplicity when configuring and managing Oracle Automatic Storage Management (ASM).

The series starts with Part I which aims to convince readers that modern systems, attached to All-Flash Array technology, can perform large amounts of low-latency physical I/O without vast numbers of host LUNs. Traditional storage environments mandate large numbers of deep I/O queues because high latency I/O requests remain “in-flight” longer. The longer an I/O request takes to complete, the longer other requests remain in the queue. This is not the case with low-latency I/O. Please consider Part I a required primer.

To add more detail to what was offered in Part I,  I offer Part II.  Part II shares a very granular look at the effects of varying host LUN count (aggregate I/O queue depth) alongside varying Oracle Database sessions executing zero-think time transactions.

Part III begins the topic of resizing ASM disks when additional ASM disk group capacity is needed.  Parts I and II are prerequisite reading because one might imagine that a few really large ASM disks is not going to offer appropriate physical I/O performance. That is, if you don’t think small numbers of host LUNs can deliver the necessary I/O performance you might be less inclined to simply resize the ASM disks you have when extra space is needed.

Everything we know in IT has a shelf-life. With All-Flash Array storage, like XtremIO, it is much less invasive, much faster and much simpler to increase your ASM disk group capacity by resizing the existing ASM disks.

Part IV continues the ASM disk resizing topic by showing an example in a Real Application Clusters environment.

 


Filed under: oracle

[TekTalk Webinar]: Fast Track your Field Inspection Process with the Cloud

WebCenter Team - Wed, 2016-08-31 09:12
Webinar: Fast Track your Field Inspection Process with the Cloud

TekTalk Webinar: Fast Track Your Field Inspection Process with the Cloud

Learn How Government Agencies Can Streamline the Inspection Process Using Cloud Technologies
Thursday, Sept 8, 2016 at 1:00 PM EST

The inspection process is one of the most critical and time-consuming activities governments employ today. The cloud is helping leverage mobile devices to extend and streamline the inspection process. Governments can deploy inspectors more efficiently, saving travel time and expense, and process more inspections in less time using cloud technologies. These include Mobile, Process, and Document Cloud with user-friendly websites for status and updates. Please join us to learn how your agency can take advantage of cloud technologies to improve your inspection process.

During this quick 30-minute webinar, you will:
  • Gain an understanding of how Oracle PaaS (Platform as a Service) can provide real business solutions in a timely fashion
  • Understand how Cloud-based applications bring a rapid ROI through quick deployments and minimal training requirements
  • See working examples of field inspection processing 
Know someone who might be interested in this topic? Feel free to forward this email. 
For more information, please contact info@tekstream.com or call 844-TEK-STRM.


Parallel_index hint

Jonathan Lewis - Wed, 2016-08-31 07:28

Prompted by a recent OTN posting, I’ve dug out from my library the following demonstration of an anomaly with the parallel_index() hint. This note is a warning about how little we understand hints and what they’re supposed to mean, and how we can be caught out by an upgrade. We’ll start with a data set which, to match a comment made in the original posting rather than being a necessity for the demonstration, has an index that I’ve manipulated to be larger than the underlying table:


rem
rem     Script:         parallel_index_hint_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          December 2005
rem

create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                  id,
        mod(rownum,1e4)         modded,
        lpad(rownum,10,'0')     v1,
        lpad('x',30,'x')        padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

create index t1_i1 on t1(modded) pctfree 75;
alter table t1 modify modded not null;

Your figures may vary slightly if you try to recreate this model, but according to my stats blocks=7876 for the table and leaf_blocks=8054 for the index. Now I’m going to generate the execution plans for a simple query – count(modded) from the table – with a varying selection of hints.


select /*+ index_ffs(t1 t1_i1) */ count(modded) from t1;

select /*+ parallel_index(t1 t1_i1 20) */ count(modded) from t1;

select /*+ index_ffs(t1 t1_i1) parallel_index(t1 t1_i1 20) */ count(modded) from t1;

select /*+ parallel_index(t1 t1_i1 20) parallel(t1 10) */ count(modded) from t1;

Pause for thought

  • which of the four statements will have a plan that uses an index fast full scan ?
  • which of the four statements will have a plan that indicates parallel execution ?
  • which of the four statements will indicate a parallel index fast full scan ?
  • why are two of the plans going to be identical but with different costs ?

The most interesting bit of this note is in the last question because it’s also the answer to a more subtle “why didn’t Oracle do what I thought it should” question. Here are the four plans I got from an instance of 11.2.0.4 (with a little labelling to remind us about the hinting):


==================================================
index_ffs() on its own - get index fast full scan
==================================================

-----------------------------------------------------------------------
| Id  | Operation             | Name  | Rows  | Cost (%CPU)| Time     |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |     1 |  1043   (4)| 00:00:06 |
|   1 |  SORT AGGREGATE       |       |     1 |            |          |
|   2 |   INDEX FAST FULL SCAN| T1_I1 |  1000K|  1043   (4)| 00:00:06 |
-----------------------------------------------------------------------

=====================================================
parallel_index() on its own
Get serial tablescan which is cheaper than serial FFS
=====================================================

-------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |  1027   (4)| 00:00:06 |
|   1 |  SORT AGGREGATE    |      |     1 |            |          |
|   2 |   TABLE ACCESS FULL| T1   |  1000K|  1027   (4)| 00:00:06 |
-------------------------------------------------------------------

========================================================
parallel_index() with index_ffs()
Get parallel fast full scan - at same cost as SERIAL FFS
========================================================

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |          |     1 |  1043   (4)| 00:00:06 |        |      |            |
|   1 |  SORT AGGREGATE           |          |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR          |          |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)    | :TQ10000 |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE        |          |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR    |          |  1000K|  1043   (4)| 00:00:06 |  Q1,00 | PCWC |            |
|   6 |       INDEX FAST FULL SCAN| T1_I1    |  1000K|  1043   (4)| 00:00:06 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------

===============================================
parallel_index() with parallel()
Get parallel fast full scan - costed correctly.
(Not costed at the degree given for table).
===============================================

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |          |     1 |    58   (4)| 00:00:01 |        |      |            |
|   1 |  SORT AGGREGATE           |          |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR          |          |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)    | :TQ10000 |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE        |          |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR    |          |  1000K|    58   (4)| 00:00:01 |  Q1,00 | PCWC |            |
|   6 |       INDEX FAST FULL SCAN| T1_I1    |  1000K|    58   (4)| 00:00:01 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------

First plan – we supplied the index_ffs() hint, it’s valid: so Oracle did an index fast full scan. The serial cost was 1,043.

Second plan – we supplied the parallel_index() hint: but the optimizer apparently didn’t consider the cost of a parallel index fast full scan (which ought to have cost something like (1043/20)/0.9 = 58); instead it chose the serial tablescan at a cost of 1027 (cheaper than a serial index fast full scan because of the way I had defined a very large index).

Third plan – we supplied the parallel_index() hint with an explicit index_ffs() hint: both were legal so the optimizer obeyed the hints and produced a parallel index fast full scan (which is what we might have hoped would happen automatically for the second query). But the cost of the query is 1,043 – the cost of the serial index fast full scan.

Final plan – we didn’t hint an index_ffs(); we hinted parallel() and parallel_index(): “strangely” Oracle has selected the parallel index fast full scan – and this time the cost is 58 (which, as I pointed out above, is (1043/20)/0.9, the value it’s “supposed” to be). It looks as if Oracle uses the parallel scaling factor on the index only if the table is ALSO hinted (or defined) to be subject to parallel execution.

Note, by the way, that I’ve hinted the table parallel 10 so that the cost of the parallel tablescan (ca. (1027/10)/0.9 = 114) will be larger than the cost of the parallel index fast full scan. Normally you’re likely to have the same degree hinted for table and index, and the index is likely to be smaller than the table which means that if you see this anomaly in production it’s more likely to be a case of Oracle using a parallel tablescan when you’re expecting a parallel index fast full scan. Oracle will have compared the serial cost of an index fast full scan against the parallel cost of the table.
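The cost arithmetic above can be sanity-checked with a few lines of Python. This is only a sketch of the formula Jonathan uses (serial cost divided by degree, scaled by 0.9); the helper name is mine, not an Oracle API:

```python
def parallel_cost(serial_cost, degree, scaling=0.9):
    """Approximate parallel cost: serial cost divided by the degree,
    then divided by the ~0.9 parallel scaling factor."""
    return round(serial_cost / degree / scaling)

# Parallel index fast full scan: serial FFS cost 1043, degree 20
print(parallel_cost(1043, 20))  # 58 -- matches the "correctly costed" plan
# Parallel tablescan: serial scan cost 1027, degree 10
print(parallel_cost(1027, 10))  # 114 -- larger, so the index scan is chosen
```

This also shows why the degrees were deliberately mismatched in the demo: at degree 10 the tablescan costs more than the index fast full scan at degree 20.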

Footnote:

When I ran the script on 12c there was one important change. The cost of the “correctly costed” index fast full scan came out at 103 rather than 58. Oracle has used the degree from the parallel() hint for the tablescan to calculate the cost of the parallel index fast full scan. If you are unlucky enough to have some code that has conflicting degrees hinted (or defined) across tables and indexes then you may see some plans change because parallel index fast full scans suddenly change their cost.

Addendum (prompted by Randolf Geist’s comment): The plan in 12c also had a Note: “Degree of Parallelism is 4 because of table property”. This was a little odd because the degree of the table was 1 and the hint said 10, but the arithmetic of the tablescan had (as indicated) used a DOP of 4 – despite reporting Degree=0 in the 10053 trace file. The only way I could get the number 4 was to look at the CPU parameters – bearing in mind the critical parallel parameters:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cpu_count                            integer     2
parallel_threads_per_cpu             integer     2

parallel_adaptive_multi_user         boolean     TRUE
parallel_automatic_tuning            boolean     FALSE
parallel_degree_limit                string      CPU
parallel_degree_policy               string      MANUAL
parallel_io_cap_enabled              boolean     FALSE

The anomaly of “degree 10” appearing in the costing for the index fast full scan was echoed in the 10053 trace: the file reported “Degree: 20”, then showed the cost being divided by 10.

At run-time, though, the query DID run parallel 20 when hinted with both the parallel() and parallel_index() hints, and ran parallel 10 (while costing and repeating the note about parallel 4) when modified to have only the /*+ parallel(t1 10) */ hint.
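The number 4 in the Note is consistent with the standard default-DOP arithmetic. The sketch below shows that calculation using the parameter values listed above; whether this is exactly what the optimizer did here is the author's inference, not something proved by the trace:

```python
# With parallel_degree_limit = CPU, the documented per-instance limit is
# cpu_count * parallel_threads_per_cpu -- which matches the
# "Degree of Parallelism is 4" note in the 12c plan.
cpu_count = 2                  # from the parameter listing above
parallel_threads_per_cpu = 2   # from the parameter listing above
default_dop_limit = cpu_count * parallel_threads_per_cpu
print(default_dop_limit)  # 4
```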


Year as input in a date column

Tom Kyte - Wed, 2016-08-31 07:26
I have a user who needs the possibility to enter just a year-number (ex. 2016) in a view containing a column that is defined as a date. The user sees this value in the view as just a year (by using substr). I have been trying to make a trigger whi...
Categories: DBA Blogs

Procedures in one procedure

Tom Kyte - Wed, 2016-08-31 07:26
Hi, I've created a procedure containing 4 other procedures (INSERT INTO Table...) CREATE OR REPLACE PROCEDURE ALLPROC AS BEGIN PROC1; PROC2; PROC3; PROC4; END ALLPROC; Can you tell me if those 4 procedures are running in parallel or on...
Categories: DBA Blogs

When I try to insert data into the customer table it gives an "inconsistent datatypes: expected UDT got CHAR" error, but I have already given the datatype of "depNo" as char(9). Can anyone explain how to fix this error? Thanks.

Tom Kyte - Wed, 2016-08-31 07:26
--------------------creating types-------------- create type currency_vaty as varray(5) of char(3) / create type depcatogory_ty as object( depName char(5), intRate number(2,2), minDeposit number(8), currencies currency_vaty ) / ...
Categories: DBA Blogs

Datatype Number results in Numeric overflow although value is small enough

Tom Kyte - Wed, 2016-08-31 07:26
Try following Test Cases: DECLARE x NUMBER := 1; BEGIN x := 1024 * 1024 * 1024 * 5; END; / -> ORA-01426: numeric overflow DECLARE x NUMBER := 1; BEGIN x := x * 1024; x := x * 1024; x := x * 1024; x := x * 5; END; /...
Categories: DBA Blogs

Blocks allocation to tables and segments

Tom Kyte - Wed, 2016-08-31 07:26
1) select s.segment_name,sum(s.blocks),sum(s.BYTES),sum(s.extents) from user_segments s where s.segment_name='TBL_1' group by s.segment_name; SEGMENT_NAME SUM(S.BLOCKS) SUM(S.BYTES) SUM(S.EXTENTS) TBL_1 5504 45088768 8 2) sel...
Categories: DBA Blogs

How to display zero as count if there is no record in the database in combination with a Date column?

Tom Kyte - Wed, 2016-08-31 07:26
Hello, after executing the query, data is available and displaying properly only for the months APR to SEP. If there is no data I would like to display the Month with a Count of 0 in the same result. Ex: Here the January (01) month has no re...
Categories: DBA Blogs

High number of buffers to get one block

Tom Kyte - Wed, 2016-08-31 07:26
I have a query whose DBMS_XPLAN output looks like this: <code>------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | Id | O...
Categories: DBA Blogs

Cannot update simple Number(19,2) column

Tom Kyte - Wed, 2016-08-31 07:26
PS - I'm using Toad 4 to do this..... I have a database that has a table called parts_master with many columns, including.... pn varchar2(40) list_price Number(19,2) Mfg_auto_key Number list_price_date(date) I have a column: select pn, l...
Categories: DBA Blogs

dbms_xplan.display_cursor output

Tom Kyte - Wed, 2016-08-31 07:26
I am looking for a list of the output report columns provided by dbms_xplan.display_cursor. There are lots of possible columns but no place that I can find, including the documentation, that tells me very much about them. What I am looking to find is...
Categories: DBA Blogs

Rolling Invalidate Window Exceeded

Yann Neuhaus - Tue, 2016-08-30 16:05

Today I was doing a hard parse storm post-mortem analysis. One hypothesis was rolling invalidation causing invalidation, but figures didn’t match. I often reproduce the hypothesis to check the numbers to be sure I interpret them correctly. Especially the timestamps in V$SQL_SHARED_CURSOR.REASON. And as it may help others (including myself in the future) I share the test case.

I create a table with one row (12c online statistics gathering, so num_rows is 1) and then insert one more row.

21:31:26 SQL> create table DEMO as select * from dual;
Table created.
21:31:26 SQL> insert into DEMO select * from dual;
1 row created.
21:31:26 SQL> commit;
Commit complete.

I run a query on the table. I don’t care about the result, so let’s make it something that will be useful later: the UTC time as the number of seconds since Jan 1st, 1970 (aka the Epoch)

21:32:52 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585572
1472585572
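As a cross-check outside SQL, the returned value decodes back to the session's wall-clock time with Python's datetime module (a sketch; the SQL*Plus prompts above show 21:32 in the author's GMT+2 timezone):

```python
from datetime import datetime, timezone

# The SQL expression (cast(sys_extract_utc(current_timestamp) as date)
#                      - date'1970-01-01') * 24 * 3600
# is simply seconds since the Unix epoch. Decoding the value the query
# returned recovers the time of the run:
ts = datetime.fromtimestamp(1472585572, tz=timezone.utc)
print(ts)  # 2016-08-30 19:32:52+00:00, i.e. 21:32:52 in GMT+2
```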

The execution plan cardinality estimation is 1 row as this is what is in object statistics.

21:32:52 SQL> select * from table(dbms_xplan.display_cursor(null,null));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 61x2h0y9zv0r6, child number 0
-------------------------------------
select (cast(sys_extract_utc(current_timestamp) as
date)-date'1970-01-01')*24*3600 from DEMO
 
Plan hash value: 4000794843
 
------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2 (100)| |
| 1 | TABLE ACCESS FULL| DEMO | 1 | 2 (0)| 00:00:01 |
------------------------------------------------------------------

I gather statistics with all default attributes, so rolling invalidation occurs.

21:32:52 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
PL/SQL procedure successfully completed.

At this time, the cursor has been parsed only once:

21:32:52 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------
0 1 1 1 2016-08-30/21:32:51 2016-08-30/21:32:51 30-AUG-16 21:32:51

By default the invalidation window is 5 hours. I don’t want to wait that long, so I set it to something shorter: 15 seconds:

21:32:54 SQL> alter system set "_optimizer_invalidation_period"=15;
System altered.

There will not be any invalidation until the next execution. To prove it I wait 20 seconds, run the query again and check the execution plan:

21:33:12 SQL> select (sysdate-date'1970-01-01')*24*3600 from DEMO;
 
(SYSDATE-DATE'1970-01-01')*24*3600
----------------------------------
1472592792
1472592792
 
21:33:12 SQL> select * from table(dbms_xplan.display_cursor('61x2h0y9zv0r6',null));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 61x2h0y9zv0r6, child number 0
-------------------------------------
select (cast(sys_extract_utc(current_timestamp) as
date)-date'1970-01-01')*24*3600 from DEMO
 
Plan hash value: 4000794843
 
------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2 (100)| |
| 1 | TABLE ACCESS FULL| DEMO | 1 | 2 (0)| 00:00:01 |
------------------------------------------------------------------

This is still the old cursor (child number 0) with old stats (num_rows=1)

However, from this point rolling invalidation occurs: a random timestamp is generated within the rolling window (15 seconds here – 5 hours in default database).

I don’t know how to see this timestamp at that point (comments welcome) so I run the query several times within this 15 seconds window to see when it occurs:

21:33:16 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585596
1472585596
 
21:33:19 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585599
1472585599
 
21:33:22 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585602
1472585602
 
21:33:25 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585605
1472585605
 
21:33:28 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585608
1472585608
 
21:33:31 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585611
1472585611

After those runs, I check that I have a new execution plan with new estimation from new statistics (num_rows=2):

21:33:31 SQL> select * from table(dbms_xplan.display_cursor('61x2h0y9zv0r6',null));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 61x2h0y9zv0r6, child number 0
-------------------------------------
select (cast(sys_extract_utc(current_timestamp) as
date)-date'1970-01-01')*24*3600 from DEMO
 
Plan hash value: 4000794843
 
------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2 (100)| |
| 1 | TABLE ACCESS FULL| DEMO | 1 | 2 (0)| 00:00:01 |
------------------------------------------------------------------
 
SQL_ID 61x2h0y9zv0r6, child number 1
-------------------------------------
select (cast(sys_extract_utc(current_timestamp) as
date)-date'1970-01-01')*24*3600 from DEMO
 
Plan hash value: 4000794843
 
------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2 (100)| |
| 1 | TABLE ACCESS FULL| DEMO | 2 | 2 (0)| 00:00:01 |
------------------------------------------------------------------

Yes, I have a new child cursor, child number 1. A new cursor means that I have a reason in V$SQL_SHARED_CURSOR:

21:33:31 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='61x2h0y9zv0r6';
 
CHILD_NUMBER REASON
------------ --------------------------------------------------------------------------------
0 <ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invalidate Win
dow Exceeded(2)</reason><size>0x0</size><details>already_processed</details></Ch
ildNode><ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invali
date Window Exceeded(3)</reason><size>2x4</size><invalidation_window>1472585604<
/invalidation_window><ksugctm>1472585607</ksugctm></ChildNode>
 
1

Child cursor number 0 has not been shared because of rolling invalidation. The invalidation_window number, 1472585604, is the timestamp set by rolling invalidation, set at first parse call after stats gathering, and defined within the rolling window that follows. After this one (1472585604 is 21:33:24 in my GMT+2 timezone) the cursor will not be shared and a new hard parse occurs. I think that ksugctm is the timestamp when the new cursor is created. 1472585607 is 21:33:27 here in Switzerland. You see the corresponding timestamps in V$SQL:

21:33:31 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------
0 1 5 5 2016-08-30/21:32:51 2016-08-30/21:32:51 30-AUG-16 21:33:24
0 1 2 2 2016-08-30/21:32:51 2016-08-30/21:33:27 30-AUG-16 21:33:30

OK. The important thing is that ‘rolling invalidation’ is not an invalidation of the cursor (V$SQL.INVALIDATIONS stays at 0), but simply non-sharing of the child.
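The mechanism demonstrated above can be summarised with a toy model. This is my own approximation for illustration, not Oracle's actual code; the class and attribute names are invented:

```python
import random

class ToyCursor:
    """Toy model of rolling invalidation: gathering stats arms the
    cursor; the first parse after that picks a random deadline inside
    the invalidation window; any parse past the deadline hard-parses a
    new child instead of sharing the old one."""
    def __init__(self):
        self.armed = False
        self.deadline = None      # the "invalidation_window" timestamp
        self.children = 1

    def gather_stats(self):
        self.armed = True
        self.deadline = None

    def parse(self, now, window=5 * 3600):   # default window: 5 hours
        if self.armed and self.deadline is None:
            # first parse after stats gathering sets the random deadline
            self.deadline = now + random.uniform(0, window)
        if self.deadline is not None and now > self.deadline:
            self.children += 1    # new child; old child no longer shared
            self.armed, self.deadline = False, None
        return self.children

random.seed(42)
c = ToyCursor()
c.gather_stats()
c.parse(0, window=15)          # deadline set somewhere in [0, 15); shared
print(c.parse(20, window=15))  # past the deadline: new child -> 2
```

Note that the model never touches an "invalidations" counter: as with the real thing, the parent is not invalidated, a new child simply appears.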

If we gather statistics with immediate invalidation, it’s different:

21:33:31 SQL> exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>false);
PL/SQL procedure successfully completed.
 
21:33:34 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585614
1472585614
21:33:34 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='61x2h0y9zv0r6';
 
CHILD_NUMBER REASON
------------ --------------------------------------------------------------------------------
0 <ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invalidate Win
dow Exceeded(3)</reason><size>2x4</size><invalidation_window>1472585604</invalid
ation_window><ksugctm>1472585607</ksugctm></ChildNode><ChildNode><ChildNumber>0<
/ChildNumber><ID>33</ID><reason>Rolling Invalidate Window Exceeded(2)</reason><s
ize>0x0</size><details>already_processed</details></ChildNode>

I have only one child here, a new one, and I’m not sure the reason is meaningful.

21:33:34 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS      LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME     LAST_LOAD_TIME      LAST_ACTIVE_TIME
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------
            1          2           1          1 2016-08-30/21:32:51 2016-08-30/21:33:33 30-AUG-16 21:33:33

This is an invalidation of the cursor. Old child cursors are removed and the proud parent is marked as invalidated 1 time.
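The behaviour in both tests is driven by the no_invalidate parameter of DBMS_STATS. A minimal sketch of the two modes, assuming the DEMO table from this post:

```shell
# no_invalidate controls what happens to dependent cursors after stats gathering:
#   dbms_stats.auto_invalidate (the default) -> rolling invalidation
#   false                                    -> immediate invalidation
sqlplus -s / as sysdba <<'SQL'
exec dbms_stats.gather_table_stats(user, 'DEMO', no_invalidate => dbms_stats.auto_invalidate);
exec dbms_stats.gather_table_stats(user, 'DEMO', no_invalidate => false);
SQL
```

The length of the rolling window is commonly attributed to the hidden parameter _optimizer_invalidation_period (18000 seconds, i.e. 5 hours, by default); as with any underscore parameter, change it only with Oracle Support's blessing.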

 

The article Rolling Invalidate Window Exceeded appeared first on Blog dbi services.

Resizing ASM Disks On Modern Systems. Real Application Clusters Doesn’t Make It Any More Difficult. An XtremIO Example With RAC.

Kevin Closson - Tue, 2016-08-30 15:31

My recent post about adding space to ASM disk groups by resizing the disks larger, as opposed to adding more disks, did not show a Real Application Clusters example. Readers' comments suggested there is concern amongst DBAs that resizing disks (larger) in a RAC environment might somehow be more difficult than in non-RAC environments. This blog entry shows that, no, it is not more difficult. If anything, adding disks to ASM disk groups is difficult and invasive, while resizing disks, whether on clustered systems or not, is very simple. The entire point of this short blog series is to win DBAs over to the modern way of doing things.

For more background on the topics of LUN sizes and LUN counts in All-Flash Array environments based on proof and data from an XtremIO environment, I recommend the following links:

  1. Stop Constantly Adding Disks To Your ASM Disk Groups. Resize Your ASM Disks On All-Flash Array Storage. Adding Disks Is Really “The Y2K Way.” Here’s Why.
  2. Yes, Host Aggregate I/O Queue Depth is Important. But Why Overdo When Using All-Flash Array Technology? Complexity is Sometimes a Choice.
  3. Host I/O Queue Depth with XtremIO and SLOB Session Count. A Granular Look.

A Real Application Clusters Example

The example I give in this post is based on an XtremIO storage array; the principles discussed here, however, are applicable to most modern enterprise storage arrays. That said, it is my assertion that adding space to ASM disk groups by resizing the individual ASM disks (LUNs) is really only something one should do in an All-Flash Array environment like XtremIO. I've made that point in the above-cited linked articles.

Resizing ASM disks in an XtremIO environment is every bit as simple as it is in non-RAC environments. The following example shows just how simple.

Figure 1 shows a screen capture of ASMCA reporting that all disk groups are mounting on both nodes of the RAC cluster and that the SALESDATA disk group has 2TB capacity at the beginning of the testing.

[Image: asmca-before-resize]

Figure 1

Figure 2 shows the XtremIO GUI after all 4 of the ASM disks in the SALESDATA disk group have been resized to 1TB. Resizing XtremIO volumes is a completely non-disruptive operation.

[Image: XtremIO-GUI-Vols-resized]

Figure 2

Figure 3 shows the simple commands the administrator needs to execute to rescan for block device changes on all nodes of the RAC cluster. Figure 3 also shows the commands necessary to verify that the block device reflects the new capacity given to each of the LUNs that map to the XtremIO volumes.

[Image: shell-commands-1]

Figure 3
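Figure 3 is a screen capture; as a hedged sketch of what such a rescan typically looks like on Linux, with example device names rather than the ones from the figure:

```shell
# Ask the SCSI layer to re-read the capacity of each path device, then check
# the new size. This must be run on every node of the RAC cluster.
for dev in sdb sdc sdd sde; do        # example device names only
    echo 1 > /sys/block/${dev}/device/rescan
done
cat /sys/block/sdb/size               # size in 512-byte sectors
lsblk /dev/sdb                        # human-readable confirmation
```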

Figure 4 shows how a simple shell script (called resize_map.sh in this example) can be used to direct the multipathd(8) command to resize internal metadata for specific XtremIO volumes. The script can be executed on remote hosts via the bash(1) "-s" option.

[Image: maps_resized]

Figure 4
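The post does not list the script body, so here is a hedged sketch of what resize_map.sh might contain; the multipath map names are hypothetical (list yours with multipath -ll):

```shell
#!/bin/bash
# resize_map.sh -- refresh multipathd's notion of each volume's size after
# the underlying LUNs have grown. Map names below are examples only.
for map in xtremio_vol1 xtremio_vol2 xtremio_vol3 xtremio_vol4; do
    multipathd -k"resize map ${map}"
done
```

Run it locally, then push it to the other RAC nodes, e.g. ssh rac2 bash -s < resize_map.sh.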

Figure 5 shows how the ASM disks were 512GB each until the disk group was altered to resize all the disks. That is, in spite of the fact that the block devices were resized at the operating system level, ASM had not yet been updated.

[Image: sqlplus-sysasm-resize]

Figure 5
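Figure 5 is a screen capture; the ASM-side step it illustrates boils down to a single statement, issued once from any node as SYSASM (disk group name taken from this example):

```shell
sqlplus -s / as sysasm <<'SQL'
-- Make ASM pick up the new OS-level size of every disk in the group.
alter diskgroup SALESDATA resize all;
-- Verify: TOTAL_MB should now reflect the larger disks.
select name, total_mb, free_mb from v$asm_diskgroup where name = 'SALESDATA';
SQL
```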

Once the ASM disks are resized as shown in Figure 5, the ASMCA command will also show that the disk group (SALESDATA in the example) has 4TB capacity as seen in Figure 6.

[Image: asmca-after-1TB-resize]

Figure 6

This example has shown that resizing ASM disks in an XtremIO environment is the simplest, least impactful way to add space to an ASM disk group in a Real Application Clusters environment–just as it is in a non-RAC environment.

Filed under: oracle
