Feed aggregator

How to destroy your performance: PL/SQL vs SQL

Yann Neuhaus - Fri, 2016-10-14 00:11

Disclaimer: This is in no way a recommendation to avoid PL/SQL. This post just describes a case I faced at a customer, with a specific PL/SQL implementation that the customer (and I) believed was the most efficient way of doing it in PL/SQL. It was a very good reminder for me to check the documentation and to verify that what I believe a feature does is what the feature actually does. When I was doing PL/SQL full time in one of my previous jobs I used the feature heavily without really thinking about what happened in the background. Always keep learning …

Let's start by building the test case. The issue was on Linux but I think this will be reproducible on any release (although, never be sure :) ).

SQL> create table t1 as select * from dba_objects;
SQL> insert into t1 select * from t1;
SQL> /
SQL> /
SQL> /
SQL> /
SQL> /
SQL> commit;
SQL> select count(*) from t1;


SQL> create table t2 as select object_id from t1 where mod(object_id,33)=0;
SQL> select count(*) from t2;


These are my two tables used for the test: t1 contains around 5.5 million rows and t2 contains 168896 rows. Coming to the issue: there is a procedure which does this:

create or replace procedure test_update
as
  cursor c1 is select object_id from t2;
  type tab is table of t2.object_id%type index by pls_integer;
  ltab tab;
begin
  open c1;
  fetch c1 bulk collect into ltab;
  close c1;
  forall indx in 1..ltab.count
    update t1 set owner = 'AAA' where object_id = ltab(indx);
end test_update;

The procedure uses “bulk collect” and “forall” to fetch the keys from t2 in a first step and then uses these keys to update t1 in a second step. This seemed pretty well done: no loop over each single row that compares with the list and then updates when there is a match. I really couldn’t see an issue here. But when you execute this procedure you’ll wait for ages (at least if you are in a VM running on a notebook and not on super fast hardware).

The situation at the customer was that I was told that the update, when executed as plain SQL in sqlplus, takes less than a second. And really, when you execute this on the test case from above:

SQL> update t1 set owner = 'AAA' where object_id in ( select object_id from t2 );

168896 rows updated.

Elapsed: 00:00:05.30
SQL> rollback;

Rollback complete.

Elapsed: 00:00:02.44
SQL> update t1 set owner = 'AAA' where object_id in ( select object_id from t2 );

168896 rows updated.

Elapsed: 00:00:06.34
SQL> rollback;

Rollback complete.

Elapsed: 00:00:02.70

It is quite fast (between 5 and 6 seconds in my environment). So why is the PL/SQL version so much slower? Aren’t “bulk collect” and “forall” the right methods to boost performance? Let’s take a look at the execution plan for the plain SQL version:

| Id  | Operation             | Name     | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | A-Rows |    A-Time     | Buffers | Reads  |  OMem |  1Mem |  O/1/M|
|   0 | UPDATE STATEMENT      |          |      1 |       |       | 24303 (100)|          |       0 |00:00:04.52    |     259K|   9325 |       |       |       |
|   1 |  UPDATE               | T1       |      1 |       |       |            |          |       0 |00:00:04.52    |     259K|   9325 |       |       |       |
|*  2 |   HASH JOIN           |          |      1 |    48 |  4416 | 24303   (1)| 00:00:01 |     168K|00:00:01.76    |   86719 |   9325 |  2293K|  2293K|  1/0/0|
|   3 |    VIEW               | VW_NSO_1 |      1 |   161K|  2044K|    72   (2)| 00:00:01 |    2639 |00:00:00.05    |     261 |     78 |       |       |       |
|   4 |     SORT UNIQUE       |          |      1 |     1 |  2044K|            |          |    2639 |00:00:00.04    |     261 |     78 |   142K|   142K|  1/0/0|
|   5 |      TABLE ACCESS FULL| T2       |      1 |   161K|  2044K|    72   (2)| 00:00:01 |     168K|00:00:00.01    |     261 |     78 |       |       |       |
|   6 |    TABLE ACCESS FULL  | T1       |      1 |  5700K|   429M| 23453   (1)| 00:00:01 |    5566K|00:00:05.88    |   86458 |   9247 |       |       |       |

It is doing a hash join as expected. What about the PL/SQL version? It is doing this:

SQL_ID  4hh65t1u4basp, child number 0

Plan hash value: 2927627013

| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | UPDATE STATEMENT   |      |       |       | 23459 (100)|          |
|   1 |  UPDATE            | T1   |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |   951 | 75129 | 23459   (1)| 00:00:01 |

Uh! Why that? This is what I wasn’t aware of. I always thought that when you use “forall” to send PL/SQL’s SQL to the SQL engine, Oracle would rewrite the statement to expand the list in the where clause or do other optimizations. But this does not happen. The only optimization that takes place when you use “forall” is that the statements are sent in batches to the SQL engine rather than one after another. What happens here is that you execute 168896 full table scans, because the same statement (with another bind variable value) is executed 168896 times. That can’t really be fast compared to the SQL version.

Of course you could rewrite the procedure to do the same as the plain SQL (a sketch follows below), but this is not the point here. The point is: when you think what you have implemented in PL/SQL is the same as what you compare it to when you run it as SQL, better think twice, and even better read the f* manuals, even when you think you are sure what a feature really does :)
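For completeness, here is a minimal sketch of such a set-based rewrite (assuming the same t1/t2 test tables as above): keep the procedure, but hand the whole join to the SQL engine in a single statement instead of sending one update per key.

create or replace procedure test_update_setbased
as
begin
  -- one set-based statement: the SQL engine joins t1 and t2 once (hash join),
  -- instead of executing one full scan of t1 per key as the forall version does
  update t1 set owner = 'AAA'
   where object_id in ( select object_id from t2 );
end test_update_setbased;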


The post How to destroy your performance: PL/SQL vs SQL appeared first on Blog dbi services.

write data from a table to a csv file and save it in a directory

Tom Kyte - Thu, 2016-10-13 19:06
I need to write data from a table to a csv file and save in it a directory. I am using below code: create or replace PROCEDURE get_query_result_as_csv_file( in_query IN VARCHAR2, in_filename IN VARCHAR2) IS l_blob BLOB; ...
Categories: DBA Blogs

Double select from same parametrized view

Tom Kyte - Thu, 2016-10-13 19:06
Dear all, I'm trying to use a parameterized view, and I need to select from the same view twice with different values of the same parameter via UNION ALL. Could you say, having considered the above example, is it possible that the both queries will use o...
Categories: DBA Blogs

Is there a way to force DML error logging without adding LOG ERRORS INTO clause to INSERT, UPDATE etc?

Tom Kyte - Thu, 2016-10-13 19:06
For INSERT statements run against a table with an associated DML error log table, I would like to force DML error logging for INSERT statements that do not include a LOG ERRORS INTO clause. For Example: CREATE TABLE "WSB"."TEST" ("CCHAR" VA...
Categories: DBA Blogs

Audit trace for table

Tom Kyte - Thu, 2016-10-13 19:06
Hi Tom, we came across the request from customers. There were some activities on table (let's name it T1), audit was not turned on from the creation of db. Customer raised request to provide some trace or audit for T1- all activities for pas...
Categories: DBA Blogs


Tom Kyte - Thu, 2016-10-13 19:06
Dear Tom, Please help me with the below 1) I did full dump restoration from prod to test environment 2) Now I'm trying to import the prod stats to test environment 3) Created STAT_TABLE in the newly imported test environment like below beg...
Categories: DBA Blogs

How do i insert update and delete using db link

Tom Kyte - Thu, 2016-10-13 19:06
There are two instances on a Unix server, Instance A and Instance B. I want to update one table on Instance B from Instance A. What is the best way?
Categories: DBA Blogs


Tom Kyte - Thu, 2016-10-13 19:06
Hi, I am trying to extract the DDL code for an index. Please have a look at the below code. The same returns BEGIN NULL; END; as the output. Please can you confirm if there is something wrong with the code? drop table a; create ta...
Categories: DBA Blogs

SQL query to find employees earning more than department average

Tom Kyte - Thu, 2016-10-13 19:06
My question: how to write a query to display the employees who are getting more salary than the average salary of their department_number from the department table. Thanks in advance...
Categories: DBA Blogs

October 26th: Exelon Corporation—Oracle HCM Cloud Customer Forum

Linda Fishman Hoyle - Thu, 2016-10-13 17:24

Join us for an Oracle HCM Cloud Customer Forum call on Wednesday, October 26, 2016.

Tim Hickey, Director of HR Strategic Services, and Duane Akin, Director of IT at Exelon Corporation, will discuss their company’s journey from on-premises Oracle PeopleSoft HR applications to Oracle HCM Cloud. You will hear how they are tracking toward the goal of lowering Exelon’s TCO and improving the efficiency and effectiveness of its HR operations.

Register now to attend the live forum on Wednesday, October 26, 2016, at 9:00 a.m. PT.

Public Facing Citizen Website

WebCenter Team - Thu, 2016-10-13 11:37

Authored by: Mitchell Palski, Oracle Fusion Middleware Specialist

‘Interactivity’ is the name of the game when it comes to your government’s web presence. Guiding citizens in your town, village, or city, to the correct information or tools can be a difficult task. Web experiences have to cater to a VERY wide range of end-users, ranging from the iPhone illiterates to the estranged Androidians. My grandmother bookmarks websites on her home screen and thinks they are all apps. Meanwhile, my neighbor is writing his own mobile app that notifies gamers when new video games are about to be released. 

Then there’s me (a typical millennial) – the self-proclaimed “UI guy” who grew up using Hotmail and AIM. I remember dial-up, but only like a bad dream. I was always the lead Java developer in my Undergraduate group projects, but I’ve never built a mobile app from scratch. I have to Google “Unix commands” almost every time I need to actually use them, but I’ve also never paid a utility bill with an actual check. The point is, I need my city website to work for me, and my grandmother, and my nerdy neighbor.

On the other side of the screen, there’s another persona – a “technical, but not a developer” type who wants to understand their users, who needs a portfolio-bulking project, and who works for my local government IT department. He/she knows graphic design and the English language better than Enterprise Architecture and C#. Now that we know who we’re dealing with, let’s make a deal.

We’re going to build a website that is useful, usable, desirable, findable, accessible, and credible (see https://www.usability.gov). Thousands of people who all live in the same geographic region are all going to log into this website to look up information, fill out forms, pay taxes/fees, reserve public parks, and maybe even market their businesses. To sum it up, we need a solution that is going to:
  1. Improve awareness of City services
  2. Encourage users to interact with Citizen services
  3. Speed-up the process by which those Citizens are engaged
The task can seem daunting, but luckily for you (I assume you’re faced with this task, otherwise you wouldn’t be reading this) it’s 2016, and plenty of people in the world have already started doing the same thing and have come up with some pretty damn good best practices for getting it done, Oracle being one of them.
If you aren’t familiar with the term “Platform as a Service” (PaaS) or why it matters, check out this article by Carol Hildebrand. If you don’t have the time, here is a summary:
  • Oracle can host your development tools in the Cloud for you.
  • You won’t have to pay for the hardware, the installation, or the maintenance of those tools.
  • You’ll be able to quickly and easily scale your environment.
  • You can focus on the project, rather than the infrastructure.

In Oracle’s portfolio of Cloud products, we’ve lifted our Enterprise grade web solutions into the Cloud so that you can take advantage of tools that help with:

  1. Drag-and-drop, simple, mobile-friendly, 508-compliant web design
  2. Web Content Management and Publishing
  3. Secure customer support – Chat and Knowledge base
Oracle’s Digital Experience Platform allows you to:
  • Quickly and easily define, manage, and update your site navigation
  • Build and organize web pages using drag-and-drop, WYSIWYG user interfaces
  • Integrate seamlessly with eCommerce and customer relationship management (CRM) tools

With Oracle RightNow Chat Cloud Service, agents can always guide your customers to the best answers and outcomes. Sessions can be initiated by a customer or proactively, based on company-defined rules and triggers. Resolve issues with higher productivity, increase conversion rates, and boost satisfaction.

The end result of combining Oracle RightNow Chat Cloud Service with the DX Platform is a complete customer journey. Your solution will not only proactively engage citizens with a complete self-service website, but also allow your staff to quickly interact with citizens directly from the online service that they need assistance with. Intuitive design mixed with an online chat service will streamline the user experience, improve user satisfaction, and reduce operational costs of active call-centers.

Want to learn more about Oracle’s DX platform? Check out the Oracle product pages and other helpful links.

MariaDB: audit plugin

Yann Neuhaus - Thu, 2016-10-13 11:19
Why should you Audit your MySQL Instances?

First, to provide a way to track users accessing sensitive data
Second, to investigate suspicious queries in all your critical databases
Third, to comply with laws and industry standards

The MariaDB Audit Plugin can help you log all or part of the server activity, such as:
– who was connected and at what time
– which databases and tables were accessed
– which action/event occurred (CONNECT, TABLE, …)
– which queries were run
All of this can be stored in a dedicated audit log file

This plugin provides auditing not only for MariaDB, where it is included by default, but also for Percona Server and
even for Oracle MySQL when using the community version

Installation on MariaDB

The MariaDB Audit Plugin library is shipped with the MariaDB server
Once the server is installed, you still need to locate your plugin directory and install/load the plugin
mysqld7-[MariaDB]>SHOW GLOBAL VARIABLES LIKE 'plugin_dir';
| Variable_name | Value |
| plugin_dir | /u00/app/mysql/product/mariadb-10.1.16-linux-x86_64/lib/plugin/ |
mysqld7-[MariaDB]>INSTALL PLUGIN server_audit SONAME 'server_audit.so';

Check then if it has been loaded
mysqld7-[MariaDB]>SELECT * from information_schema.plugins where plugin_name='server_audit'\G
*************************** 1. row ***************************
PLUGIN_LIBRARY: server_audit.so
PLUGIN_AUTHOR: Alexey Botchkov (MariaDB Corporation)
PLUGIN_DESCRIPTION: Audit the server activity

Configuration of the important audit system variables

You can set them manually with SET GLOBAL, but I recommend defining them in the option file (my.cnf)
## Audit Plugin MariaDB
server_audit_events = CONNECT,QUERY,TABLE # specifies the types of events to log
server_audit_logging = ON # Enable logging
server_audit = FORCE_PLUS_PERMANENT # Load the plugin at startup & prevent it from being removed
server_audit_file_path = /u00/app/mysql/admin/mysqld8/log/mysqld8-audit.log # Path & log name
server_audit_output_type = FILE # separate log file
server_audit_file_rotate_size = 100000 # Limit of the log size in Bytes before rotation
server_audit_file_rotations = 9 # Number of audit files before the first will be overwritten
server_audit_excl_users = root # User(s) not audited
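
Most of these settings are dynamic system variables, so for a quick test you can also set them on the fly (a sketch; values set this way are lost at restart, so keep the my.cnf entries above as the persistent configuration):

SET GLOBAL server_audit_logging = ON;
SET GLOBAL server_audit_events = 'CONNECT,QUERY,TABLE';
SET GLOBAL server_audit_excl_users = 'root';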

Restart the server and check the audit system variables
mysqld7-[MariaDB]>show global variables like "server_audit%";
| Variable_name | Value |
| server_audit_events | CONNECT,QUERY,TABLE |
| server_audit_excl_users | root |
| server_audit_file_path | /u00/app/mysql/admin/mysqld7/log/mysqld7-audit.log |
| server_audit_file_rotate_now | OFF |
| server_audit_file_rotate_size | 10000 |
| server_audit_file_rotations | 9 |
| server_audit_incl_users | sme |
| server_audit_loc_info | |
| server_audit_logging | ON |
| server_audit_mode | 0 |
| server_audit_output_type | file |
| server_audit_query_log_limit | 1024 |
| server_audit_syslog_facility | LOG_USER |
| server_audit_syslog_ident | mysql-server_auditing |
| server_audit_syslog_info | |
| server_audit_syslog_priority | LOG_INFO |
16 rows in set (0.00 sec)

Audit log file

In the audit log file directory, you will find one current audit file and 9 archived audit log files, as defined by the parameter server_audit_file_rotations
mysql@MariaDB:/u00/app/mysql/admin/mysqld7/log/ [mysqld7] ll
total 344
-rw-rw----. 1 mysql mysql 242 Oct 13 10:35
-rw-rw----. 1 mysql mysql 556 Oct 13 10:35 mysqld7-audit.log
-rw-rw----. 1 mysql mysql 10009 Oct 13 10:35 mysqld7-audit.log.1
-rw-rw----. 1 mysql mysql 10001 Oct 12 14:22 mysqld7-audit.log.2
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:47 mysqld7-audit.log.3
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:41 mysqld7-audit.log.4
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:34 mysqld7-audit.log.5
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:28 mysqld7-audit.log.6
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:22 mysqld7-audit.log.7
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:15 mysqld7-audit.log.8
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:09 mysqld7-audit.log.9

You can check the latest records in the current one
mysql@MariaDB:/u00/app/mysql/admin/mysqld7/log/ [mysqld7] tail -f mysqld7-audit.log
20161013 14:22:57,MYSQL,sme,localhost,31,179,QUERY,employees,'create tables tst(name varchar(20))',1064
20161013 14:23:05,MYSQL,sme,localhost,31,180,CREATE,employees,tst,
20161013 14:23:05,MYSQL,sme,localhost,31,180,QUERY,employees,'create table tst(name varchar(20))',0
20161013 14:23:41,MYSQL,sme,localhost,31,181,WRITE,employees,tst,
20161013 14:23:41,MYSQL,sme,localhost,31,181,QUERY,employees,'insert into tst values(\'toto\')',0
20161013 14:24:26,MYSQL,sme,localhost,31,182,WRITE,employees,tst,
20161013 14:24:26,MYSQL,sme,localhost,31,182,QUERY,employees,'update tst set name=\'titi\'',0
20161013 14:24:53,MYSQL,sme,localhost,31,183,QUERY,employees,'delete from tst',1142
20161013 14:48:34,MYSQL,sme,localhost,33,207,READ,employees,tst,
20161013 14:48:34,MYSQL,sme,localhost,33,207,QUERY,employees,'select * from tst',0
20161013 15:35:16,MYSQL,sme,localhost,34,0,FAILED_CONNECT,,,1045
20161013 15:35:16,MYSQL,sme,localhost,34,0,DISCONNECT,,,0
20161013 15:35:56,MYSQL,sme,localhost,35,0,CONNECT,,,0
20161013 15:35:56,MYSQL,sme,localhost,35,210,QUERY,,'select @@version_comment limit 1',0
20161013 15:36:03,MYSQL,sme,localhost,35,0,DISCONNECT,,,0

The audit log file, which is in plain-text format, contains the following comma-separated fields
timestamp : 20161012 10:33:58
serverhost : MYSQL
user : sme
host : localhost
connection id: 9
query id : 77
operation : QUERY
database : employees
object : ‘show databases’ # Executed query if QUERY or table name if TABLE operation.
return code : 0
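
Since the format is a plain comma-separated text file, you can even load a rotated audit file into a table for ad-hoc analysis. This is a minimal sketch with a hypothetical table name (it assumes the FILE privilege and a permissive secure_file_priv; quotes embedded in logged queries may need extra handling):

CREATE TABLE audit_hist (
  ts VARCHAR(20), serverhost VARCHAR(64), user VARCHAR(64), host VARCHAR(64),
  conn_id INT, query_id INT, operation VARCHAR(20), db VARCHAR(64),
  object TEXT, retcode INT
);

LOAD DATA INFILE '/u00/app/mysql/admin/mysqld7/log/mysqld7-audit.log.1'
INTO TABLE audit_hist
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\''
LINES TERMINATED BY '\n';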

Installation on Percona or MySQL

It is quite the same as for MariaDB.
Once your server is installed, you have to locate your plugin directory
mysqld8-[Percona]>SHOW GLOBAL VARIABLES LIKE 'plugin_dir';
| Variable_name | Value |
| plugin_dir | /u00/app/mysql/product/Percona-Server-5.7.14-7-Linux.x86_64.ssl101/lib/mysql/plugin/ |

mysqld10-[MySQL]>SHOW GLOBAL VARIABLES LIKE 'plugin_dir';
| Variable_name | Value |
| plugin_dir | /u00/app/mysql/product/mysql-5.6.14-linux-glibc2.5-x86_64/lib/plugin/ |

then copy the MariaDB plugin server_audit.so into the Percona/MySQL plugin directory
mysql@Percona:[mysqld8] cp /u00/app/mysql/product/mariadb-10.1.16-linux-x86_64/lib/plugin/server_audit.so /u00/app/mysql/product/Percona-Server-5.7.14-7-Linux.x86_64.ssl101/lib/mysql/plugin/

mysql@MySQL:[mysqld10] cp /u00/app/mysql/product/mariadb-10.1.16-linux-x86_64/lib/plugin/server_audit.so /u00/app/mysql/product/mysql-5.6.14-linux-glibc2.5-x86_64/lib/plugin/

Install/load the plugin & check
mysqld8-[(Percona)]>INSTALL PLUGIN server_audit SONAME 'server_audit.so';
mysqld8-[Percona]>SELECT * from information_schema.plugins where plugin_name='server_audit'\G
*************************** 1. row ***************************
PLUGIN_LIBRARY: server_audit.so
PLUGIN_AUTHOR: Alexey Botchkov (MariaDB Corporation)
PLUGIN_DESCRIPTION: Audit the server activity

mysqld10-[MySQL]>INSTALL PLUGIN server_audit SONAME 'server_audit.so';
mysqld10-[MySQL]>SELECT * from information_schema.plugins where plugin_name='server_audit'\G
*************************** 1. row ***************************
PLUGIN_LIBRARY: server_audit.so
PLUGIN_AUTHOR: Alexey Botchkov (MariaDB Corporation)
PLUGIN_DESCRIPTION: Audit the server activity

Configuration of the important audit system variables

You can use exactly the same audit system variable settings as defined for MariaDB


The MariaDB Audit Plugin is really quick and easy to install.
It is a good and cheap auditing solution and can be installed on different distributions.
It lets you see exactly which SQL queries are being processed.
Auditing information can really help you track suspicious queries, detect mistakes and generally troubleshoot abnormal activity.


The post MariaDB: audit plugin appeared first on Blog dbi services.

List with Details on a Single Page in Oracle Application Builder Cloud Service

Shay Shmeltzer - Thu, 2016-10-13 11:11

This question came up a couple of times from users so I figured I'll document how to achieve a layout that shows a list of items and allows you to pick an item from this list to show the details of this item on the same page.

[Image: list with details on a single page]

The default layout that ABCS creates is a list on one page with the ability to select an item and go see the details or edit that record on another page.

To combine the two into a single page, start from the edit or the details page that ABCS created.

On this page you then add a table or list for the same object, and set the link on a field to do the edit or details. This basically means that you'll navigate to the same page.

If you now run the page, you'll be able to click items in the table and see their details on the same page.

[Video: a quick demo of how it is done]

Note that if you want this to be the default view that people see when navigating to your app - just update the navigation menu of your application to reflect this. 

Categories: Development

Partitioning – When data movement is not performed as expected

Yann Neuhaus - Thu, 2016-10-13 07:16

This blog is about an interesting partitioning story and curious data movements during a merge operation. One of my customers uses partitioning intensively for various reasons, including archiving and manageability. A couple of days ago, we decided to test a freshly developed script, which will carry out the automatic archiving against the concerned database in the quality environment.

Let’s describe a little bit the context.

We used a range-based data distribution model. Boundaries are number-based and are incremented monotonically with identity values.

We defined a partition function with range right values strategy as follows:

AS RANGE RIGHT FOR VALUES(@boundary_archive, @boundary_current)


The first implementation of the partition scheme consisted in storing all partitioned data inside the same filegroup.
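
As a minimal sketch (with hypothetical object names and boundary values, not the customer's actual ones), that initial setup looks like this:

-- RANGE RIGHT: each boundary value belongs to the partition on its right
CREATE PARTITION FUNCTION pf_archiving (INT)
AS RANGE RIGHT FOR VALUES (1000, 2000); -- @boundary_archive, @boundary_current

-- first implementation: all three partitions mapped to the same filegroup
CREATE PARTITION SCHEME ps_archiving
AS PARTITION pf_archiving ALL TO ([PRIMARY]);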


Well, the initial configuration was as follows. We applied page compression to the archive partition because it contained cold data that is not supposed to be updated very frequently.

[Figure: partitioned table starting point]

[Figure: partitioned table starting point configuration]

As you may expect, the next step consists in merging the ARCHIVE and CURRENT partitions by using the MERGE command as follows:

MERGE RANGE (@boundary_archive);


Basically, we achieve a merge operation according to Microsoft documentation:

The filegroup that originally held boundary_value is removed from the partition scheme unless it is used by a remaining partition, or is marked with the NEXT USED property.


[Figure: partitioned table after merge]

At this point we may expect that SQL Server has moved the data from the middle partition to the left partition, and according to this other Microsoft pointer:

When two partitions are merged, the resultant partition inherits the data compression attribute of the destination partition.

But the result came as a little surprise, because I expected to see a page compression value for my archive partition.

[Figure: partitioned table configuration after merge]

At this point, my assumption was that either the compression value doesn't get inherited correctly as mentioned in the Microsoft documentation, or the data movement is not performed as expected. The latter can be checked very quickly by looking at the corresponding records inside the transaction log file.

SELECT [AllocUnitName],
       Operation,
       COUNT(*) as nb_ops
FROM ::fn_dblog(NULL,NULL)
WHERE [Transaction ID] = (
     SELECT TOP 1 [Transaction ID]
     FROM ::fn_dblog(NULL,NULL)
     WHERE [Xact ID] = 164670)
GROUP BY [AllocUnitName], Operation

I put only the relevant sample here

[Figure: transaction log records after merge]

Referring to what I saw above, I noticed the data movement was not performed as expected, but rather as shown below:

[Figure: actual data movement after merge]

So, this strange behavior seems to explain why the compression state switched from PAGE to NONE in my case. Another strange thing: when we changed the partition scheme to include a dedicated archive filegroup, the behavior returned to normal.



I double-checked the Microsoft documentation to see if a section exists that explains the behavior I experienced in this case, but without success. After further investigation I found an interesting blog post from Sunil Agarwal (Microsoft) about partition merging and some performance improvements shipped with SQL Server 2008 R2. In short, I was in the specific context described in the blog post (same filegroup), and the merge optimization came into action transparently because the number of rows in the archive partition was lower than the number in the current partition at the moment of the MERGE operation.

Let me introduce another relevant piece of information – the number of rows in each partition – in the following picture:

[Figure: partitioned table with the number of rows in each partition]

Just to be clear, this is an interesting and smart mechanism provided by Microsoft, but it may be a little bit confusing when you're not aware of this optimization. In my case, we finally decided with my customer to dedicate an archive partition to store cold data that will be rarely accessed in this context, and to keep the possibility to store cold data on cheaper storage when the archive partition grows to a critical size. In other cases, if storing data in the same filegroup is still a relevant scenario, keep in mind this optimization so as not to experience unexpected behaviors. A sketch of the dedicated-filegroup setup follows below.
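
For reference, dedicating a filegroup to the archive partition might look like this (a sketch with hypothetical database, file and path names, reusing the pf_archiving function from the sketch above):

ALTER DATABASE my_db ADD FILEGROUP ARCHIVE;
ALTER DATABASE my_db ADD FILE
(
    NAME = archive_data,
    FILENAME = 'E:\sqldata\my_db_archive.ndf' -- hypothetical path, ideally cheaper storage
) TO FILEGROUP ARCHIVE;

-- map the leftmost (archive) partition to the dedicated filegroup
CREATE PARTITION SCHEME ps_archiving_dedicated
AS PARTITION pf_archiving TO (ARCHIVE, [PRIMARY], [PRIMARY]);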

Happy partitioning!



The post Partitioning – When data movement is not performed as expected appeared first on Blog dbi services.

How can we use Oracle to deduplicate our data

Bar Solutions - Thu, 2016-10-13 05:13

Dear Patrick,

We have gone through a merger at our company and we are now trying to merge the databases. The problem is that we have duplicate records in our tables. We can of course go through all the records by hand and check if they exist twice. Another option is to build an application to do this. But using the Oracle Database there must be a better way to do this. Any ideas?

Ramesh Cunar

Dear Ramesh,

Going through all these records by hand seems like quite a time consuming process. I don’t know the number of records in your tables, but I would definitely not do this by hand. Writing an application to do this makes it possible to run the application multiple times, so you can update the rules making it better every time. Writing an application can be time consuming and running it can be as well.

I think your best shot would be to try and find a solution in either PL/SQL or SQL. Depending on the complexity and size of your data you could try any of the following solutions.

You can write a procedure that loops through your data and saves the records in an associative array. If the record already exists in the array, or at least its key fields are equal, then it is not added.

CREATE OR REPLACE PROCEDURE ot_deduplicate_aa IS
  CURSOR c_ot_emp IS
    SELECT *
      FROM ot_emp e;
  TYPE t_ot_emp_aa IS TABLE OF ot_emp%ROWTYPE INDEX BY PLS_INTEGER;
  r_ot_emp         ot_emp%ROWTYPE;
  l_unique_ot_emps t_ot_emp_aa;
  -- returns TRUE when a record with the same key fields is already in the collection
  FUNCTION record_already_exists(record_in        IN ot_emp%ROWTYPE
                                ,collection_inout IN OUT t_ot_emp_aa) RETURN BOOLEAN IS
    l_indx        PLS_INTEGER := collection_inout.first;
    l_returnvalue BOOLEAN := FALSE;
  BEGIN
    IF l_indx IS NOT NULL THEN
      LOOP
        l_returnvalue := l_returnvalue OR ((record_in.ename = collection_inout(l_indx).ename) AND
                         (record_in.job = collection_inout(l_indx).job));
        l_indx := collection_inout.next(l_indx);
        EXIT WHEN l_returnvalue OR (l_indx IS NULL);
      END LOOP;
    END IF;
    RETURN l_returnvalue;
  END record_already_exists;
BEGIN
  OPEN c_ot_emp;
  LOOP
    FETCH c_ot_emp
      INTO r_ot_emp;
    EXIT WHEN c_ot_emp%NOTFOUND;
    -- check if this record already exists
    IF NOT (record_already_exists(record_in => r_ot_emp, collection_inout => l_unique_ot_emps)) THEN
      l_unique_ot_emps(l_unique_ot_emps.count + 1) := r_ot_emp;
    END IF;
  END LOOP;
  CLOSE c_ot_emp;
  FOR indx IN l_unique_ot_emps.first .. l_unique_ot_emps.last LOOP
    INSERT INTO ot_emp_deduplicated
    VALUES l_unique_ot_emps(indx);
  END LOOP;
END ot_deduplicate_aa;

You can speed up this process by using bulk processing for both the retrieval and the storing of the data. Watch out that you don't blow up your memory by retrieving too much data at once; you can use the LIMIT clause to prevent this, as in the sketch below.
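
A minimal sketch of that bulk approach (assuming the same ot_emp table as above; the batch size of 1000 is an arbitrary example):

DECLARE
  CURSOR c_ot_emp IS
    SELECT * FROM ot_emp;
  TYPE t_ot_emp_aa IS TABLE OF ot_emp%ROWTYPE INDEX BY PLS_INTEGER;
  l_emps t_ot_emp_aa;
BEGIN
  OPEN c_ot_emp;
  LOOP
    -- fetch at most 1000 rows per round trip to keep memory usage bounded
    FETCH c_ot_emp BULK COLLECT INTO l_emps LIMIT 1000;
    EXIT WHEN l_emps.count = 0;
    -- ... run the duplicate check on this batch ...
  END LOOP;
  CLOSE c_ot_emp;
END;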

Another way to go at this is to use the analytic functions in Oracle. The idea behind using analytics is to rank all the records within their own partition. This rank can be rather arbitrary, so we can use ROW_NUMBER as the ranking mechanism. In the PARTITION BY clause we add all the columns we need to check for duplicates on, and to define which record to keep we add the (obligatory) ORDER BY clause. Then, using this new column, we can state that we are just interested in the records where the ROW_NUMBER equals 1.

INSERT INTO ot_emp_deduplicated(empno, ename, job, mgr, hiredate, sal, comm, deptno)
WITH emp_all_rows AS
 (SELECT e.empno, e.ename, e.job, e.mgr, e.hiredate, e.sal, e.comm, e.deptno
        ,row_number() OVER (PARTITION BY e.ename, e.job ORDER BY e.empno ASC) rn
    FROM ot_emp e)
SELECT empno, ename, job, mgr, hiredate, sal, comm, deptno
  FROM emp_all_rows
 WHERE rn = 1

You can of course use this same technique to delete the duplicate rows directly from the source table, but then you cannot use query factoring (the WITH clause), because that isn't supported in a DELETE statement.

DELETE FROM ot_emp e
 WHERE e.rowid IN 
(SELECT ear.rowid
   FROM (SELECT e.rowid
               ,row_number() over(PARTITION BY e.ename ORDER BY e.empno ASC) rn
           FROM ot_emp e) ear --<-- EmpAllRows
  WHERE rn > 1)

It is probably faster to insert the de-duplicated rows into a new table, then drop the original table and rename the new table, as sketched below. Be aware of any triggers, constraints, indexes etc. that may exist and need to be recreated.
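
A minimal sketch of that approach, reusing the analytic query from above (remember to recreate dependent objects afterwards):

-- build the de-duplicated copy in one pass
CREATE TABLE ot_emp_dedup AS
SELECT empno, ename, job, mgr, hiredate, sal, comm, deptno
  FROM (SELECT e.*
              ,row_number() OVER (PARTITION BY e.ename, e.job ORDER BY e.empno ASC) rn
          FROM ot_emp e)
 WHERE rn = 1;

-- swap the tables (triggers, constraints, indexes and grants must be recreated)
DROP TABLE ot_emp;
ALTER TABLE ot_emp_dedup RENAME TO ot_emp;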

You can check out the full demo code at
Or you can send me an email and I will send you the demo script.

Hope this helps in your efforts to de-duplicate your data.

Happy Oracle’ing,
Patrick Barel

If you have any comments on this subject or you have a question you want answered, please send an email to patrick[at]bar-solutions[dot]com. If I know the answer, or can find it for you, maybe I can help.

This question has been published in OTech Magazine of Winter 2015.

MS16-104 Security Update for IE Breaks EBS JRE Download

Steven Chan - Thu, 2016-10-13 02:04

Microsoft IE LogoMS16-104: Security update for Internet Explorer: September 13, 2016 (KB3185319) breaks the JRE (oaj2se.exe) download function in Oracle E-Business Suite 12.1 and 12.2. This issue occurs with IE 11 on Windows 7, 8.1, and 10.

Expected behaviour: If an end-user does not have the required JRE plug-in release installed on their desktop client, a dialog box appears, asking whether they wish to download it from the EBS server. When the end-user confirms the download, the oaj2se.exe file downloads and runs.

Incorrect behaviour after the installation of KB3185319: When the end-user confirms the download via the dialog box, oaj2se.exe (the JRE) will not download.

How to fix this problem

Microsoft has published separate fixes for this issue for Windows 7, 8.1, and 10.  The appropriate patch must be installed on all affected end-user desktop clients:

Temporary workaround

If an end-user has installed KB3185319 but is unable to install the fix, they may work around the problem by entering the file's URL in Internet Explorer's address bar.  For example:




Categories: APPS Blogs

Documentum story – ADTS not working anymore?

Yann Neuhaus - Thu, 2016-10-13 02:00

A few weeks ago, on one of our Documentum environments, we found out thanks to our monitoring that renditions weren’t being generated anymore by our CTS/ADTS Server… This happened in a sandbox environment where a lot of dev/testing was done in parallel between EMC, the different application teams and the platform/architecture team (us). A lot of changes at the same time means that it might not be easy to find out what caused this issue…


After a few checks on our monitoring scripts, just to ensure that the issue wasn’t the monitoring itself, it appeared that this part was working properly and that the renditions indeed weren’t generated anymore. We checked the configuration of the docbase/rendition server but didn’t find anything suspicious on the configuration side, and therefore we checked the logs of the Rendition Server. The CTS/ADTS Server often prints a lot of different errors that are all linked but appear to have different root causes. Therefore, to know which error is really relevant, I cleaned up the log file (stopped the CTS/ADTS, backed up the log file and removed it) and then launched our monitoring script, which basically removes all existing renditions for a test document, if any, and then requests a new set of renditions to be generated by the CTS/ADTS Server.


After doing that, it was clear that the following error was the real one I needed to take a look at:

11:14:58,562  INFO [ Thread-61] CTSThreadPoolManagerImpl -       Added ICTSTask to the ICTSThreadPoolManager: dm_transcode_content
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Start. About to get Next ICTSTask from pool manager...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       ICTSThreadPoolManager: removing first item from the list for processing...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Removing a task to execute it. Number in waiting list: 1
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Next CTSTask received...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       CTSThreadPoolManager has threadlimit -1
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Processing next CTSTask...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       About to get notifier from CTSTask...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       About to get session from CTSTask...
11:14:59,203  INFO [Thread-153] CTSThreadPoolManagerImpl -       About to get RUN CTSTask...
11:14:59,859  WARN [Thread-153] CTSOperationsUtils -       [BOCS/ACS] exportContentFiles error - javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: java.security.cert.CertPathBuilderException: Could not build a validated path.


11:14:59,859 ERROR [Thread-153] CTSThreadPoolManagerImpl -       Exception in CTSThreadPoolManagerImpl, notification :
com.documentum.cts.exceptions.internal.CTSServerTaskException: Error while exporting file(s) from repo01, objectName: check_ADTS_rendition_creation errorMessage: Executing Export operation failed
                Could not get the content file for check_ADTS_rendition_creation (093f245a800a303f, msw12, null)
Cause Exception was:
com.documentum.cts.exceptions.CTSException: Error while exporting file(s) from repo01, objectName: check_ADTS_rendition_creation errorMessage: Executing Export operation failed
                Could not get the content file for check_ADTS_rendition_creation (093f245a800a303f, msw12, null)
Cause Exception was:
DfException:: THREAD: Thread-153; MSG: Error while exporting file(s) from repo01, objectName: check_ADTS_rendition_creation errorMessage: Executing Export operation failed
                Could not get the content file for check_ADTS_rendition_creation (090f446e800a303f, msw12, null); ERRORCODE: ff; NEXT: null
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx5(CTSOperationsUtils.java:626)
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx4(CTSOperationsUtils.java:332)
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx3(CTSOperationsUtils.java:276)
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx2(CTSOperationsUtils.java:256)
                at com.documentum.cts.impl.services.task.CTSTask.exportInputContent(CTSTask.java:4716)
                at com.documentum.cts.impl.services.task.CTSTask.retrieveInputURL(CTSTask.java:4594)
                at com.documentum.cts.impl.services.task.CTSTask.initializeFromCommand(CTSTask.java:2523)
                at com.documentum.cts.impl.services.task.CTSTask.execute(CTSTask.java:922)
                at com.documentum.cts.impl.services.task.CTSTaskBase.doExecute(CTSTaskBase.java:514)
                at com.documentum.cts.impl.services.task.CTSTaskBase.run(CTSTaskBase.java:460)
                at com.documentum.cts.impl.services.thread.CTSTaskRunnable.run(CTSTaskRunnable.java:207)
                at java.lang.Thread.run(Thread.java:745)
11:14:59,859  INFO [Thread-153] CTSQueueProcessor -       _failureNotificationAdmin : false
11:14:59,859  INFO [Thread-153] CTSQueueProcessor -       _failureNotification : true


Ok so now it is clear that the error is actually the following one: “java.security.cert.CertPathBuilderException: Could not build a validated path”. This always means that a specific SSL certificate chain isn’t trusted. As you can see above, “BOCS/ACS” is mentioned on the same line, and the line just below actually contains the URL of the ACS… Thinking about it: yes indeed, one of the changes planned for that day was the installation and activation of the D2-BOCS on this environment. So what is the link between the ACS URL and the D2-BOCS installation? When installing the D2-BOCS, if you want to keep your environment secured, you need the ACS to be switched to HTTPS, because the D2-BOCS forces D2 to use the ACS URLs to download documents to the client’s workstation, whereas D2 does not use the ACS at all when there is no D2-BOCS installed. The installation of the D2-BOCS therefore isn’t linked to the CTS/ADTS at all, but one of our prerequisites for installing it was to set up the ACS in HTTPS, and that is linked to the CTS/ADTS Server, because it actually uses the ACS to download the documents, as you can see in the error above.


Now that we knew what the error was, and just to confirm it, I switched the ACS URL back to HTTP (using DA: Administration > Distributed Content Configuration > ACS Servers > Right-click on the ACS object > Properties > ACS Server Connections) and re-initialized the Content Server (using DA: Administration > Basic Configuration > Content Servers > Right-click on the CS object > Properties > check “Re-Initialize Server” and click OK).


Right after doing that, the monitoring switched back to green, meaning that renditions were created again, so this was indeed the one and only issue. So what to do if we want to use the ACS in HTTPS in combination with the renditions? We just have to tell the CTS/ADTS Server that it can trust the ACS SSL certificate, and this is done by updating the cacerts file of the Java installation used by the Rendition Server. This is done pretty easily using the following commands, for which I will suppose that the Rendition Server has been installed on a D: drive under “D:\CTS”.


So the first thing to do is to upload your certificate chain to the Rendition Server and put it under “D:\certs” (I will suppose there are two SSL certificates in the chain: a Root and a Gold). Then simply start a command prompt as Administrator and execute the following commands to update the cacerts file of Java:

D:\> copy D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts.bck_YYYYMMDD
        1 file(s) copied.

D:\> D:\CTS\java64\1.7.0_72\bin\keytool.exe -import -trustcacerts -alias root_ca -keystore D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts -file D:\certs\Internal_Root_CA.cer
Enter keystore password:
Trust this certificate? [no]:  yes
Certificate was added to keystore
D:\> D:\CTS\java64\1.7.0_72\bin\keytool.exe -import -trustcacerts -alias gold_ca -keystore D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts -file D:\certs\Internal_Gold_CA1.cer
Enter keystore password:
Certificate was added to keystore


Now just switch the ACS to use HTTPS again, restart the Rendition services using the Windows Services console or the command line, and the next time you request a rendition it will work without errors, even in HTTPS. It is actually a very common mistake to set up SSL on the Content Server side, which is great, but you must not forget that other components might use what you just switched to HTTPS, and therefore these additional components need to trust your SSL certificates too!


Note 1: in our case, the WebLogic Server hosting D2 was already in HTTPS and therefore it was already trusting the Internal Root & Gold SSL Certificates, reason why we could use the ACS in HTTPS from D2 without issue.

Note 2: in case you didn’t know about it, I think it is now clear that the CTS/ADTS Server is using the ACS to download the files… Therefore if you want a secured environment even without D2-BOCS, you absolutely need to switch your ACS to HTTPS!


The post Documentum story – ADTS not working anymore? appeared first on Blog dbi services.

how to improve the SQL performance of an 8 node Exadata Data Warehouse RAC

Tom Kyte - Thu, 2016-10-13 00:46
Hi Tom! Last day I work with an 8 node Exadata Data Warehouse to tune performance for the slow SQL,and I found that the table is not gather statistics and the execution plan is not the best, it executes with the TABLE ACCESS STORAGE FULL,the table...
Categories: DBA Blogs

change running code's execution plan

Tom Kyte - Thu, 2016-10-13 00:46
Hi Tom, Suppose a package is executing in prod environment, currently proc A is running and after that Proc B will be executed in next 45 mins. I have just spotted some problems in proc B execution plan(which was generated earlier). I cant chang...
Categories: DBA Blogs


Tom Kyte - Thu, 2016-10-13 00:46
Hi Guys, I am using linux to run the query. I have one query like select * from Emp; I want to spool the results into one file(this file has only exported data without number of rows selected message) and what ever the log i.e number of lines ...
Categories: DBA Blogs

