Feed aggregator

Links for 2016-11-13 [del.icio.us]

Categories: DBA Blogs

Validation procedure

Tom Kyte - Mon, 2016-11-14 01:46
How to write a validation procedure for three conditions which are dependent on each other? The first condition will be for the country zone; within that, there will be a condition for the branch for a specific date range.
Categories: DBA Blogs

Chapter 12 - user management

Tom Kyte - Mon, 2016-11-14 01:46
I am trying to create a role, but every time I type it in on SQL*Plus, I get an error saying insufficient privileges. I was wondering what I am doing wrong. I have tried typing in CREATE ROLE PURCH_ROLE_1; and CREATE ROLE PURCH_ROLE_1 IDENTIFIED BY c...
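For reference, CREATE ROLE requires the CREATE ROLE system privilege, which a DBA must grant first. A minimal sketch (PURCH_ADMIN is a hypothetical user):

-- as a privileged user (e.g. a DBA)
grant create role to purch_admin;
-- then, connected as PURCH_ADMIN
create role purch_role_1;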
Categories: DBA Blogs

Multi Table Insert with Differing Exists on Destination Tables

Tom Kyte - Mon, 2016-11-14 01:46
Is it possible to write an "Insert all into <table_1> ( col_1,col_2,col_3 ) values ( col_a,col_b,col_c) into <table_2> ( col_1,col_2,col_3 ) values ( col_a,col_b,col_c) into...
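One hedged approach (table and column names below are hypothetical): drive the per-destination EXISTS checks from the source query with outer joins, then make the multi-table insert conditional on the resulting flags:

insert all
  when t1_missing = 1 then
    into table_1 (col_1, col_2, col_3) values (col_a, col_b, col_c)
  when t2_missing = 1 then
    into table_2 (col_1, col_2, col_3) values (col_a, col_b, col_c)
select s.col_a, s.col_b, s.col_c,
       -- flag rows not yet present in each destination
       case when t1.col_1 is null then 1 else 0 end t1_missing,
       case when t2.col_1 is null then 1 else 0 end t2_missing
from src s
left join table_1 t1 on t1.col_1 = s.col_a
left join table_2 t2 on t2.col_1 = s.col_a;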
Categories: DBA Blogs

Data Loading with APEX - How to avoid upload of null values

Tom Kyte - Mon, 2016-11-14 01:46
Hi guys, I'm developing an application which has a data loading wizard section. Users can upload several CSV files where some of the column values are checked against a reference (I used Table Lookup for that). Problem is that I just realized tha...
Categories: DBA Blogs

Log File Sync Wait

Tom Kyte - Mon, 2016-11-14 01:46
Hi Tom, 1) As per my understanding, the Log-File-Sync wait is the duration after which I receive an acknowledgement that my commit is successful. In my database the only prominent issue I see is huge log-file-sync waits. 2) There are 32 CPUs and ea...
Categories: DBA Blogs

50 Awesome Career Tips For IT Professionals

Complete IT Professional - Sun, 2016-11-13 21:00
The IT industry is a large field, and as a result, it can be hard to stand out from the rest of the people in the industry. There are some things you can do to improve your career, improve the way you do work, and make yourself look better for your company. Have a read […]
Categories: Development

12cR2 new index usage tracking

Yann Neuhaus - Sun, 2016-11-13 14:15

A common question is: how do I know which indexes are not used, so that I can drop them? If you have tried index monitoring, you have probably seen its limits, which make it difficult to use. It has been improved in 12.2, so let's check whether it relieves the stress of a performance regression when we drop an index… or not.

I'll check two views here. Here is what the documentation says about them:

  • DBA_INDEX_USAGE displays cumulative statistics for each index.
  • V$INDEX_USAGE_INFO keeps track of index usage since the last flush. A flush occurs every 15 minutes.
    After each flush, ACTIVE_ELEM_COUNT is reset to 0 and LAST_FLUSH_TIME is updated to the current time.

The documentation about V$INDEX_USAGE_INFO shows a column INDEX_STATS_COLLECTION_TYPE, whose description explains that by default the statistics are collected based on sampling (only a few of the executions are considered when collecting the statistics). The collection type that gathers statistics for every execution may have a performance overhead.

SAMPLED

I've found an undocumented parameter to control this collection. It defaults to 'SAMPLED' and I'll set it to 'ALL' to get a deterministic test case:
17:53:51 SQL> alter session set "_iut_stat_collection_type"=ALL;
Session altered.

So this is the first problem with how reliable index usage tracking is. If your boss runs a report once a month which needs an index, you may miss this execution, think that the index is unused, and decide to drop it. And you will have a regression. Do you want to take that risk with sampled monitoring?

Execution using index

On the SCOTT schema, I run a query that uses the index PK_DEPT:

17:53:51 SQL> set autotrace on explain
Autotrace Enabled
Displays the execution plan only.
 
17:53:51 SQL> select * from emp join dept using(deptno) where ename like 'K%';
 
DEPTNO EMPNO ENAME JOB MGR HIREDATE SAL COMM DNAME LOC
10 7839 KING PRESIDENT 17-nov 00:00:00 5000 ACCOUNTING NEW YORK
 
Explain Plan
-----------------------------------------------------------
Plan hash value: 3625962092
 
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 117 | 3 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 1 | 117 | 3 (0)| 00:00:01 |
| 2 | NESTED LOOPS | | 1 | 117 | 3 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL | EMP | 1 | 87 | 2 (0)| 00:00:01 |
|* 4 | INDEX UNIQUE SCAN | PK_DEPT | 1 | | 0 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 30 | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
3 - filter("EMP"."ENAME" LIKE 'K%')
4 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- this is an adaptive plan
 
17:53:52 SQL> set autotrace off
Autotrace Disabled

When I look at the index usage tracking views, I don't see this usage, and the reason is that the last flush happened before the execution:

17:53:52 SQL> select * from v$index_usage_info;
INDEX_STATS_ENABLED INDEX_STATS_COLLECTION_TYPE ACTIVE_ELEM_COUNT ALLOC_ELEM_COUNT MAX_ELEM_COUNT FLUSH_COUNT TOTAL_FLUSH_DURATION LAST_FLUSH_TIME STATUS_MSG CON_ID
1 0 2 3 30000 8 30790 13-NOV-16 05.48.12.218000000 PM 3
 
17:53:52 SQL> select * from dba_index_usage where owner='SCOTT';
 
no rows selected

The statistics are gathered in memory and flushed to the dictionary every 15 minutes. For the moment, I've not found a way to flush them manually, so I just wait 900 seconds:


17:53:52 SQL> host sleep 900
 
18:10:32 SQL> select * from v$index_usage_info;
INDEX_STATS_ENABLED INDEX_STATS_COLLECTION_TYPE ACTIVE_ELEM_COUNT ALLOC_ELEM_COUNT MAX_ELEM_COUNT FLUSH_COUNT TOTAL_FLUSH_DURATION LAST_FLUSH_TIME STATUS_MSG CON_ID
1 0 2 3 30000 9 45898 13-NOV-16 06.03.13.344000000 PM 3
 
18:10:32 SQL> select * from dba_index_usage where owner='SCOTT';
OBJECT_ID NAME OWNER TOTAL_ACCESS_COUNT TOTAL_EXEC_COUNT TOTAL_ROWS_RETURNED BUCKET_0_ACCESS_COUNT BUCKET_1_ACCESS_COUNT BUCKET_2_10_ACCESS_COUNT BUCKET_2_10_ROWS_RETURNED BUCKET_11_100_ACCESS_COUNT BUCKET_11_100_ROWS_RETURNED BUCKET_101_1000_ACCESS_COUNT BUCKET_101_1000_ROWS_RETURNED BUCKET_1000_PLUS_ACCESS_COUNT BUCKET_1000_PLUS_ROWS_RETURNED LAST_USED
73723 PK_DEPT SCOTT 1 1 1 0 1 0 0 0 0 0 0 0 0 13-nov 18:03:13

Here my index usage is recorded: one execution, one row returned from the index.

DBMS_STATS

One drawback of the old index monitoring was that statistics gathering set the monitoring flag to 'YES'. Let's see if 12.2 does better:


18:10:32 SQL> exec dbms_stats.gather_index_stats('SCOTT','PK_DEPT');
PL/SQL procedure successfully completed.

Again, waiting 15 minutes to get it flushed (and check LAST_FLUSH_TIME):


18:10:32 SQL> host sleep 900
 
18:27:12 SQL> select * from v$index_usage_info;
INDEX_STATS_ENABLED INDEX_STATS_COLLECTION_TYPE ACTIVE_ELEM_COUNT ALLOC_ELEM_COUNT MAX_ELEM_COUNT FLUSH_COUNT TOTAL_FLUSH_DURATION LAST_FLUSH_TIME STATUS_MSG CON_ID
1 0 1 3 30000 10 48136 13-NOV-16 06.18.13.748000000 PM 3
 
18:27:12 SQL> select * from dba_index_usage where owner='SCOTT';
OBJECT_ID NAME OWNER TOTAL_ACCESS_COUNT TOTAL_EXEC_COUNT TOTAL_ROWS_RETURNED BUCKET_0_ACCESS_COUNT BUCKET_1_ACCESS_COUNT BUCKET_2_10_ACCESS_COUNT BUCKET_2_10_ROWS_RETURNED BUCKET_11_100_ACCESS_COUNT BUCKET_11_100_ROWS_RETURNED BUCKET_101_1000_ACCESS_COUNT BUCKET_101_1000_ROWS_RETURNED BUCKET_1000_PLUS_ACCESS_COUNT BUCKET_1000_PLUS_ROWS_RETURNED LAST_USED
73723 PK_DEPT SCOTT 2 2 5 0 1 1 4 0 0 0 0 0 0 13-nov 18:18:13

It seems that the index usage tracking has been incremented here: the total rows returned increased by 4, which is the number of rows in DEPT read by dbms_stats.
This makes it very difficult to detect unused indexes, because we can expect statistics gathering to run even on unused indexes.
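As a hedged starting point only (keeping the sampling and dbms_stats caveats above in mind), indexes with no recorded usage at all can be listed like this:

select i.owner, i.index_name
from dba_indexes i
left join dba_index_usage u
  on u.owner = i.owner and u.name = i.index_name
where i.owner = 'SCOTT'
and u.object_id is null;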

Index on Foreign Key to avoid table locks

There's another risk when we drop an index. It may not be used for access, but it may prevent a TM Share lock on a child table when rows are deleted from the referenced table. This, again, is something that was not monitored. When the parent table has few rows, like some lookup tables, the index on the foreign key will probably not be used to access the child rows, or to check that there are no child rows when you delete a parent one: a full scan will be faster. But the index is still required to avoid locking the whole table when rows are deleted from the parent.
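A sketch to spot candidate foreign keys whose leading column is not the leading column of any index (simplified to single-column checks; composite keys need more care):

select c.table_name, c.constraint_name, cc.column_name
from dba_constraints c
join dba_cons_columns cc
  on cc.owner = c.owner
 and cc.constraint_name = c.constraint_name
 and cc.position = 1
where c.owner = 'SCOTT'
and c.constraint_type = 'R'  -- referential (foreign key) constraints
and not exists (
  select 1 from dba_ind_columns ic
  where ic.table_owner = c.owner
    and ic.table_name = c.table_name
    and ic.column_name = cc.column_name
    and ic.column_position = 1
);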

Let’s create such an index.


18:27:12 SQL> create index FK_EMP on EMP(DEPTNO);
Index FK_EMP created.

I'll delete DEPTNO=50, and I can verify that the check for child rows is done without needing the index:


SQL_ID 1v3zkdftt0vv7, child number 0
-------------------------------------
select * from emp where deptno=50
 
Plan hash value: 3956160932
 
------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 0 |00:00:00.01 | 3 |
|* 1 | TABLE ACCESS FULL| EMP | 1 | 1 | 0 |00:00:00.01 | 3 |
------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - filter("DEPTNO"=50)

Let’s delete the parent row and see if the index is used or not.


19:19:47 SQL> delete from DEPT where deptno='50';
0 rows deleted.
19:19:47 SQL> commit;
Commit complete.

This does not lock the EMP table, thanks to the presence of the index FK_EMP. If the index were not there, a TM Share lock would have been acquired, which prevents concurrent DML on the EMP table (at least).
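You can observe the lock mode from another session while the delete transaction is still open; a quick sketch (LMODE 3 is Row-X, 4 is Share):

select o.object_name, l.lmode
from v$lock l
join dba_objects o on o.object_id = l.id1
where l.type = 'TM';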


19:19:48 SQL> host sleep 900
 
19:34:48 SQL> select * from v$index_usage_info;
INDEX_STATS_ENABLED INDEX_STATS_COLLECTION_TYPE ACTIVE_ELEM_COUNT ALLOC_ELEM_COUNT MAX_ELEM_COUNT FLUSH_COUNT TOTAL_FLUSH_DURATION LAST_FLUSH_TIME STATUS_MSG CON_ID
1 0 0 3 30000 12 48152 13-NOV-16 07.24.11.086000000 PM 3
 
19:34:48 SQL> select * from dba_index_usage where owner='SCOTT';
OBJECT_ID NAME OWNER TOTAL_ACCESS_COUNT TOTAL_EXEC_COUNT TOTAL_ROWS_RETURNED BUCKET_0_ACCESS_COUNT BUCKET_1_ACCESS_COUNT BUCKET_2_10_ACCESS_COUNT BUCKET_2_10_ROWS_RETURNED BUCKET_11_100_ACCESS_COUNT BUCKET_11_100_ROWS_RETURNED BUCKET_101_1000_ACCESS_COUNT BUCKET_101_1000_ROWS_RETURNED BUCKET_1000_PLUS_ACCESS_COUNT BUCKET_1000_PLUS_ROWS_RETURNED LAST_USED
73723 PK_DEPT SCOTT 2 2 5 0 1 1 4 0 0 0 0 0 0 13-nov 18:18:13

No additional index usage has been detected. Do you take the risk of dropping the index? Probably not. Even making the index invisible does not lower the risk. You may check DBA_TAB_MODIFICATIONS to know whether the parent table is subject to deletes, but what if some transactions update the referenced key? That also takes a TM Share lock, and it happens more often than we think (for example when Hibernate updates all columns, even those that did not change).
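As a sketch, the DBA_TAB_MODIFICATIONS check could look like this (after flushing the in-memory monitoring info):

exec dbms_stats.flush_database_monitoring_info
select table_name, inserts, updates, deletes, timestamp
from dba_tab_modifications
where table_owner='SCOTT' and table_name='DEPT';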

So what?

The new index usage tracking in 12.2 is very nice for getting statistics on index usage, better than the simple 'YES/NO' flag we had before. But detecting which indexes are not used and can be safely dropped is still complex: it requires application knowledge and comprehensive non-regression testing.
There is nothing yet that can tell you that everything would have been the same if an index were not there.

 


Populate varchar2 collection from object type

Tom Kyte - Sat, 2016-11-12 13:06
I get the following error when I execute the code listed below: ORA-21700: object does not exist or is marked for delete. The example is trivial for the sake of illustration, but in real life, I would be handling sets of several thousands...
Categories: DBA Blogs

Connection fails with different permissions on a network folder

Tom Kyte - Sat, 2016-11-12 13:06
Hi, Tom. We have a .NET application in a network path, and when the user only has read & execute permission on the folder, the connection to the database can't be made; but when the user also has the modify permission, the connection can be made. ...
Categories: DBA Blogs

Ubuntu: WLAN module for bcm43-chipset gone after reboot...

Dietrich Schroff - Sat, 2016-11-12 02:05
Today I ran into the following problem with a laptop:
After a reboot the system had lost its WLAN:
# uname -a
Linux Notebook 3.13.0-86-generic #131-Ubuntu SMP Thu May 12 23:33:13 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
and
# lspci | grep 802.11
02:00.0 Network controller: Broadcom Corporation BCM43142 802.11b/g/n (rev 01)
Trying a modprobe wl failed...
There were some posts suggesting to install kernel drivers, dkms and so on. That was not the problem, but it led me to the solution:
I tried to install these packages and got the following:
root@Notebook:~# apt-get install linux-headers-$(uname -r) build-essential dkms
E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.
And here we go:

root@Notebook:~# dpkg --configure -a
Setting up mysql-client-core-5.5 (5.5.53-0ubuntu0.14.04.1) ...
Setting up libgpgme++2 (4:4.13.3-0ubuntu0.3) ...
Setting up sudo (1.8.9p5-1ubuntu1.3) ...
Configuration file '/etc/sudoers'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it? Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** sudoers (Y/I/N/O/D/Z) [default=N] ?
Setting up xscreensaver-data (5.15-3+deb7u1ubuntu0.1) ...
...
...
Setting up initramfs-tools (0.103ubuntu4.4) ...
update-initramfs: deferring update (trigger activated)
Setting up bcmwl-kernel-source (6.30.223.248+bdcom-0ubuntu0.2) ...
Loading new bcmwl-6.30.223.248+bdcom DKMS files...
Building only for 3.13.0-86-generic
Building for architecture x86_64
Building initial module for 3.13.0-86-generic
Done.
wl:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/3.13.0-86-generic/updates/dkms/
depmod........
DKMS: install completed.
update-initramfs: deferring update (trigger activated)
Setting up unity (7.2.6+14.04.20160408-0ubuntu1) ...
...
...
Now the WLAN was up again...
Apparently the last system update had been waiting for a keypress, but I forgot that window and rebooted...

historic undo usage by day( last 90 days)

Tom Kyte - Fri, 2016-11-11 18:46
I am trying to size my undo tablespace properly (the requirement is 24 hours of retention for flashback query), so I need the historic undo usage per day for the last 90 days. I am using the query below, but I suspect that it is giving me a wrong answer. Autoexten...
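For reference, a hedged sketch that sums undo blocks per day from the AWR history (requires the Diagnostics Pack license; multiply by the block size for bytes):

select trunc(begin_time) begin_day, sum(undoblks) undo_blocks
from dba_hist_undostat
group by trunc(begin_time)
order by 1;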
Categories: DBA Blogs

LAST and COUNT values after VARRAY trimming?

Tom Kyte - Fri, 2016-11-11 18:46
I would like to ask about the values of LAST and COUNT of a varray after the TRIM operation. In the documentation it is written that "For varrays, COUNT always equals LAST." [https://docs.oracle.com/cd/B19306_01/appdev.102/b14261/collection_method.htm...
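A quick sketch demonstrating the behavior in question:

set serveroutput on
declare
  type num_va is varray(5) of number;
  v num_va := num_va(1, 2, 3);
begin
  v.trim;  -- removes one element from the end
  -- prints COUNT=2 LAST=2: for varrays COUNT and LAST stay equal after TRIM
  dbms_output.put_line('COUNT=' || v.count || ' LAST=' || v.last);
end;
/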
Categories: DBA Blogs

Data pump export to multiple mount points

Tom Kyte - Fri, 2016-11-11 18:46
How to export a schema of 500GB to different mount points using Data Pump? Example: schema size 500GB; mount point sizes: /disk1 200GB, /disk2 200GB, /disk3 200GB. I need to do the export using the above mount points. Thanks & Regards, Krish...
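For reference, a sketch: Data Pump can spread dump files over several directory objects by qualifying each file in DUMPFILE (the directory objects and MYSCHEMA below are hypothetical names):

SQL> create directory dp1 as '/disk1';
SQL> create directory dp2 as '/disk2';
SQL> create directory dp3 as '/disk3';
$ expdp system schemas=MYSCHEMA filesize=50g dumpfile=dp1:exp%U.dmp,dp2:exp%U.dmp,dp3:exp%U.dmp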
Categories: DBA Blogs

Modify Nested table without altering the type

Tom Kyte - Fri, 2016-11-11 18:46
I have a nested table like below which is already loaded. create or replace TYPE access_t AS OBJECT ( AccessID VARCHAR2(50), Eligibility char(1) ); / create or replace TYPE Access_tab IS TABLE OF access_t; / c...
Categories: DBA Blogs

Partition file size in Oracle 12c

Tom Kyte - Fri, 2016-11-11 18:46
Dear Tom, greetings and thanks for helping the community here. I am in the process of partitioning, using the logic below... 1 MB can store = No. of rows / current table size (MB), which will give me the number of rows in 1 MB. Then I can decide how many r...
Categories: DBA Blogs

12cR2 has new SQL*Plus features

Yann Neuhaus - Fri, 2016-11-11 15:30

12cR2 is there. What's new in SQL*Plus? For sure, you can't expect a lot from it. The new command line is SQLcl, the SQL*Developer command line, which aims to be 100% compatible with SQL*Plus while adding many more features. However, a few little things have arrived here: a default editor, command-line history, and easy row/LOB prefetch and statement caching.

_EDITOR

Yes, it seems that the default editor is finally 'vi' instead of 'ed'. This is a great improvement. Of course, you can set the VISUAL environment variable on your system. But when you come to another environment (which consultants do), this default will save a lot of "define _editor=vi" keystrokes.

The environment variables EDITOR and VISUAL are not set:

SQL> host set | grep -E "(^EDITOR|^VISUAL)"
 
SQL>

but the _EDITOR in sqlplus is set to 'vi':

SQL> define
DEFINE _DATE = "11-NOV-16" (CHAR)
DEFINE _CONNECT_IDENTIFIER = "CDB1" (CHAR)
DEFINE _USER = "SYS" (CHAR)
DEFINE _PRIVILEGE = "AS SYSDBA" (CHAR)
DEFINE _SQLPLUS_RELEASE = "1202000100" (CHAR)
DEFINE _EDITOR = "vi" (CHAR)
DEFINE _O_VERSION = "Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production" (CHAR)
DEFINE _O_RELEASE = "1202000100" (CHAR)
DEFINE 1 = "sqlplus" (CHAR)
DEFINE _RC = "1" (CHAR)

Here is the default. For sure, vi is better than 'ed', the line editor from the era of 2400-baud networks.

Command history

Yes: in 2016, SQL*Plus gets a command-line history. Do you need it? Probably not. If you are on Windows, you can navigate with arrow-up and arrow-down in any command-line program. If you are on Linux, you have probably installed rlwrap. And finally, if you want something friendly on the command line, you probably use sqlcl.

However, in 12cR2 a very basic history has been introduced to SQL*Plus.
You have to enable it:

SQL> show HISTORY
history is OFF
SQL> set HISTORY on
SQL> show history
history is ON and set to "100"

so the default is 100 lines, but you can increase it:

SQL> set HISTORY 999999
SP2-0267: history option 999999 out of range (0 through 100000)
SQL> set HISTORY 100000
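The setting is per session; if you want it always on, a line in your login.sql (or the site-wide glogin.sql) would do, for example:

REM e.g. in $ORACLE_HOME/sqlplus/admin/glogin.sql (value illustrative)
set history 1000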

What can you do with it?

SQL> help HISTORY
 
HISTORY
-------
 
Stores, lists, executes, edits of the commands
entered during the current SQL*Plus session.
 
HIST[ORY] [N {RUN | EDIT | DEL[ETE]}] | [CLEAR]  
N is the entry number listed in the history list.
Use this number to recall, edit or delete the command.
 
Example:
HIST 3 RUN - will run the 3rd entry from the list.
 
HIST[ORY] without any option will list all entries in the list.

Here are some examples:

SQL> show history
history is OFF
SQL> set history on
SQL> show history
history is ON and set to "100"
SQL> prompt 1
1
SQL> prompt 2
2
SQL> history
1 show history
2 prompt 1
3 prompt 2
 
SQL> history list
1 show history
2 prompt 1
3 prompt 2
 
SQL> history 2 run
1
SQL> history 2 edit
 
SQL> history 2 delete
SQL> history
1 show history
2 prompt 2
3 prompt 1
 
SQL> history clear
SQL> history
SP2-1651: History list is empty.

As you see, it's not the most user-friendly. But for the basic DBA tasks that you do on a server, you may find it safer than the up-arrow. Imagine that a 'shutdown immediate' is in the history. Do you want to take the risk of running it because some network latency made you recall the line above the one you wanted? Or do you prefer to be sure you have read the command before running it?

SET LOBPREF[ETCH], SET ROWPREF[ETCH], and SET STATEMENTC[ACHE].

Here are important performance improvements:

SQL> show lobprefetch
lobprefetch 0
SQL> show rowprefetch
rowprefetch 1
SQL> show statementcache
statementcache is 0

Those are things that you can do in OCI or JDBC, and that you can now easily do in SQL*Plus: prefetch rows and LOBs to avoid fetch roundtrips, and use statement caching to avoid parse calls.
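For example (illustrative values, to be tuned to your fetch patterns):

SQL> set rowprefetch 100
SQL> set lobprefetch 32768
SQL> set statementcache 20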

I'll probably write about prefetch in a future post, so for the moment here is a quick demo of statement caching.

By default, statement caching is off. I run the same query 3 times:

select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.370333 PM +00:00 session cursor cache hits 15
11-NOV-16 05.00.41.370333 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.370333 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.370333 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.370333 PM +00:00 parse count (total) 6
11-NOV-16 05.00.41.370333 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.370333 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.370333 PM +00:00 parse count (describe) 0
 
8 rows selected.
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.373429 PM +00:00 session cursor cache hits 22
11-NOV-16 05.00.41.373429 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.373429 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.373429 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.373429 PM +00:00 parse count (total) 7
11-NOV-16 05.00.41.373429 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.373429 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.373429 PM +00:00 parse count (describe) 0
 
8 rows selected.
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.375993 PM +00:00 session cursor cache hits 29
11-NOV-16 05.00.41.375993 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.375993 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.375993 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.375993 PM +00:00 parse count (total) 8
11-NOV-16 05.00.41.375993 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.375993 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.375993 PM +00:00 parse count (describe) 0
 
8 rows selected.

You can see that each one had its own parse call. Of course, it's not a hard parse because the cursor is shared. It's not even a soft parse, thanks to the session cursor cache. But it's still a parse call.

Let’s set statement caching to one and run the query again 3 times:

set statementcache 1
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.378937 PM +00:00 session cursor cache hits 36
11-NOV-16 05.00.41.378937 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.378937 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.378937 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.378937 PM +00:00 parse count (total) 9
11-NOV-16 05.00.41.378937 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.378937 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.378937 PM +00:00 parse count (describe) 0
 
8 rows selected.
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.381403 PM +00:00 session cursor cache hits 42
11-NOV-16 05.00.41.381403 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.381403 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.381403 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.381403 PM +00:00 parse count (total) 9
11-NOV-16 05.00.41.381403 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.381403 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.381403 PM +00:00 parse count (describe) 0
 
8 rows selected.
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.383844 PM +00:00 session cursor cache hits 48
11-NOV-16 05.00.41.383844 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.383844 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.383844 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.383844 PM +00:00 parse count (total) 9
11-NOV-16 05.00.41.383844 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.383844 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.383844 PM +00:00 parse count (describe) 0
 
8 rows selected.

Only one more parse call: the cursor was cached on the client side.

How many statements can you cache?

SQL> set statementcache 999999
SP2-0267: statementcache option 999999 out of range (0 through 32767)

From 1 to 32767; the value 0 disables statement caching.

set statementcache 32767

Not yet on 12.2?

If you have not yet upgraded to 12.2, you still have a way to use statement caching: you can set it in oraaccess.xml, which can enable those optimizations for all OCI clients.
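A minimal sketch of such an oraaccess.xml, placed in the directory pointed to by TNS_ADMIN (see the OCI documentation for the full schema; values illustrative):

<?xml version="1.0" encoding="ASCII" ?>
<oraaccess xmlns="http://xmlns.oracle.com/oci/oraaccess"
           xmlns:oci="http://xmlns.oracle.com/oci/oraaccess"
           schemaLocation="http://xmlns.oracle.com/oci/oraaccess
                           http://xmlns.oracle.com/oci/oraaccess.xsd">
  <default_parameters>
    <statement_cache>
      <size>20</size>
    </statement_cache>
    <prefetch>
      <rows>100</rows>
    </prefetch>
  </default_parameters>
</oraaccess>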

sqlplus -F

Those performance settings get faster default values with the '-F' argument.
Let's see which settings differ:

[oracle@OPC122 ~]$ sqlplus -s / as sysdba <<< "store set a.txt replace"
Wrote file a.txt
[oracle@OPC122 ~]$ sqlplus -s -F / as sysdba <<< "store set b.txt replace"
Wrote file b.txt
[oracle@OPC122 ~]$ diff a.txt b.txt
3c3
< set arraysize 15
---
> set arraysize 100
31c31
< set lobprefetch 0
---
> set lobprefetch 16384
46c46
< set rowprefetch 1
---
> set rowprefetch 2
59c59
< set statementcache 0
---
> set statementcache 20

Those settings avoid roundtrips and unnecessary work. The documentation says that PAGESIZE is also set to a higher value, but I don't see it here; anyway, that is about output formatting, not performance.

VARIABLE

You may use SQL*Plus to test queries with bind variables. Here is what you had to do before 12.2:

SQL> variable text char
SQL> exec :text:='X'
 
PL/SQL procedure successfully completed.
 
SQL> select * from DUAL where DUMMY=:text;
 
D
-
X

You can now simply:

SQL> variable text char='X'
SQL> select * from DUAL where DUMMY=:text;
 
D
-
X

SQLcl, the SQLDeveloper command line

Since 11g, SQLDeveloper is shipped in ORACLE_HOME, and in 12.2 it includes SQLcl, the SQLDeveloper command line that is fully compatible with SQL*Plus scripts.
The version we have on the DBCS lacks the executable flag and the right JAVA_HOME:

[oracle@SE222 ~]$ /u01/app/oracle/product/12.2.0/dbhome_1/sqldeveloper/sqlcl/bin/sql / as sysdba
-bash: /u01/app/oracle/product/12.2.0/dbhome_1/sqldeveloper/sqlcl/bin/sql: Permission denied
[oracle@SE222 ~]$
[oracle@SE222 ~]$ bash /u01/app/oracle/product/12.2.0/dbhome_1/sqldeveloper/sqlcl/bin/sql / as sysdba
 
SQLcl: Release 12.2.0.1.0 RC on Fri Nov 11 21:16:48 2016
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
USER =
URL = jdbc:oracle:oci8:@
Error Message = No suitable driver found for jdbc:oracle:oci8:@
USER =
URL = jdbc:oracle:thin:@127.0.0.1:1521:CDB2
Error Message = No suitable driver found for jdbc:oracle:thin:@127.0.0.1:1521:CDB2
USER =
URL = jdbc:oracle:thin:@localhost:1521/orcl
Error Message = No suitable driver found for jdbc:oracle:thin:@localhost:1521/orcl
Username? (RETRYING) ('/ as sysdba'?)

I’ve defined the following alias:

alias sql='JAVA_HOME=$ORACLE_HOME/jdk bash $ORACLE_HOME/sqldeveloper/sqlcl/bin/sql'

and I’m ready to run it:

[oracle@SE222 ~]$ sql / as sysdba
 
SQLcl: Release 12.2.0.1.0 RC on Fri Nov 11 21:20:15 2016
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
 
SQL>

I like SQLcl except for one thing: it's in Java and slow to start:

[oracle@SE222 ~]$ time sql /nolog
real 0m2.184s
user 0m3.054s
sys 0m0.149s

2 seconds is long when you run it frequently. Compare with sqlplus:

[oracle@SE222 ~]$ time sqlplus /nolog
real 0m0.015s
user 0m0.008s
sys 0m0.006s

 


12cR2 versus 12cR1 scan time using Java inside the RDBMS - a DBAAS Cloud test

Marcelo Ochoa - Fri, 2016-11-11 08:47
As usual, my first test with every Oracle RDBMS release is with Java inside the database; for this test I am using a heavy/intensive Java application inside the RDBMS named Scotas OLS.
Basically, it provides full-text search functionality using an embedded Solr version running inside the DB.
By now, Oracle 12cR2 is available on the Oracle Cloud DBaaS; the screenshot below shows the services started for the test.
As I mentioned above, Scotas OLS provides search functionality through a new domain index. During a creation or rebuild of the index, a full table scan is done in batches of 5000 rows; this full scan is executed using a query like:
SELECT /*+ DYNAMIC_SAMPLING(L$MT 0) */ L$MT.rowid,..others_columns..
     FROM TABLE_NAME L$MT where rowid in (select /*+ cardinality(L$PT, 10) */ * from table(?) L$PT) for update nowait
The table pipeline function receives as argument a batch of 5000 rowids wrapped in the ODCIRidList PL/SQL type.
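As an illustration (a hedged PL/SQL sketch, not the actual Scotas OLS code), building such a batch could look like:

set serveroutput on
declare
  rids sys.ODCIRidList;
begin
  -- collect one batch of rowids; the real code binds such a collection
  -- to the table(?) placeholder of the scan query above
  select rowidtochar(rowid) bulk collect into rids
  from emp
  where rownum <= 5000;
  dbms_output.put_line(rids.count || ' rowids in this batch');
end;
/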
The Java code running inside the DB looks like:
                while (rs.next()) {
                    addCmd.clear();
                    addCmd.solrDoc = new SolrInputDocument();
                    String rowid = rs.getString(1);
                    addCmd.solrDoc.addField("rowid", rowid);
                    if (includeMasterColumn) {
                        Object o = rs.getObject(2);
                        readColumn(addCmd.solrDoc,this.colName,o);
                    }
                    for (int i = 0; i < numExtraCols; i++) {
                        Object o = rs.getObject(i + offset);
                        readColumn(addCmd.solrDoc,extraCols[i],o);
                    }
                    if (log.isTraceEnabled())
                      log.trace("Adding: " + addCmd.solrDoc.toString());
                    try {
                        processor.processAdd(addCmd);
                    } catch (Exception e) {
                        ....
                    }
                }
The scan times observed on the two DBaaS instances are summarized in this graph.
Average scan time is 890 ms per batch in 12cR2 versus 1648 ms in 12cR1. Good job, Oracle OJVM dev team, for this improvement!



Documentum – Cannot delete document due to VD/assembly references

Yann Neuhaus - Fri, 2016-11-11 02:46

I ran into an error that you might have already hit once, so here is a quick guide on how to fix it.

When deleting a document from D2, I had a popup with an error like: "Cannot delete file because it is still referenced in a virtual document or an assembly". Indeed, the document I wanted to delete was previously in a VD, but for some reason I couldn't find the VD with the search engine, as it had been deleted; its old versions, however, had not.

To summarize, my document was still referenced by the old versions of a deleted VD. So here’s the way I fixed it:

First, I needed to know which VD referenced it; for this, simply query the dmr_containment table:

select * from dmr_containment where component_id='<DOCUMENT_ID>';

You should see rows with the following columns:

r_object_id: ID of the current row (we will need it)
parent_id: ID of the VD that references our document
component_id: ID of our document
order_no: Index order of all documents in this VD

The problem with old VD versions is that they are set immutable, so you can't remove the document from the VD through DQL or API. You first have to make the VD mutable.

So here is the condensed version of API to remove a component in an immutable VD:

fetch,c,<PARENT_ID>
set,c,l,r_immutable_flag
F
save,c,l
removepart,c,l,<R_OBJECT_ID>
save,c,l
fetch,c,l
set,c,l,r_immutable_flag
T
save,c,l

If you have a lot of entries to remove, you may want a scripted version in DQMan:

First select all entries:

select * from dmr_containment where component_id='<DOCUMENT_ID>';

Select all rows, right-click, and generate the script:

fetch,c,{parent_id}
set,c,l,r_immutable_flag
F
save,c,l
removepart,c,l,{r_object_id}
save,c,l
set,c,l,r_immutable_flag
T
save,c,l

Verify that all entries have been deleted; you should now be able to remove your document without errors.

 


Links for 2016-11-10 [del.icio.us]

Categories: DBA Blogs
