Feed aggregator

MariaDB: audit plugin

Yann Neuhaus - Thu, 2016-10-13 11:19
Why should you Audit your MySQL Instances?

First, to provide a way to track users accessing sensitive data
Second, to investigate suspicious queries in all your critical databases
Third, to comply with laws and industry standards

The MariaDB Audit Plugin can help you log all or part of the server activity, such as:
– who was connected and at which time
– which databases and tables were accessed
– which action/event (CONNECT, TABLE,…)
– which queries were run
All of them can be stored in a dedicated audit log file

This plugin provides auditing not only for MariaDB, where it is included by default, but also for Percona Server
and even for Oracle MySQL when using the community version

Installation on MariaDB

The MariaDB Audit Plugin library is shipped with the MariaDB server
Once the server is installed, you still need to locate your plugin directory and then install/load the plugin
mysqld7-[MariaDB]>SHOW GLOBAL VARIABLES LIKE 'plugin_dir';
| Variable_name | Value |
| plugin_dir | /u00/app/mysql/product/mariadb-10.1.16-linux-x86_64/lib/plugin/ |
mysqld7-[MariaDB]>INSTALL PLUGIN server_audit SONAME 'server_audit.so';

Then check if it has been loaded
mysqld7-[MariaDB]>SELECT * from information_schema.plugins where plugin_name='server_audit'\G
*************************** 1. row ***************************
PLUGIN_LIBRARY: server_audit.so
PLUGIN_AUTHOR: Alexey Botchkov (MariaDB Corporation)
PLUGIN_DESCRIPTION: Audit the server activity

Configuration of the important audit system variables

You can either set them manually with SET GLOBAL (a sketch follows the listing below), but I recommend defining them in the option file (my.cnf)
## Audit Plugin MariaDB
server_audit_events = CONNECT,QUERY,TABLE # specifies the types of events to log
server_audit_logging = ON # Enable logging
server_audit = FORCE_PLUS_PERMANENT # Load the plugin at startup & prevent it from beeing removed
server_audit_file_path = /u00/app/mysql/admin/mysqld8/log/mysqld8-audit.log # Path & log name
server_audit_output_type = FILE # separate log file
server_audit_file_rotate_size = 100000 # Limit of the log size in Bytes before rotation
server_audit_file_rotations = 9 # Number of audit files before the first will be overwritten
server_audit_excl_users = root # User(s) not audited
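
For reference, here is what the dynamic variables above look like when set manually at runtime. This is a minimal sketch; note that server_audit itself is a load option that can only be given at startup, not via SET GLOBAL:
mysqld7-[MariaDB]>SET GLOBAL server_audit_events = 'CONNECT,QUERY,TABLE';
mysqld7-[MariaDB]>SET GLOBAL server_audit_logging = ON;
mysqld7-[MariaDB]>SET GLOBAL server_audit_excl_users = 'root';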

Restart the server and check the audit system variables
mysqld7-[MariaDB]>show global variables like "server_audit%";
| Variable_name | Value |
| server_audit_events | CONNECT,QUERY,TABLE |
| server_audit_excl_users | root |
| server_audit_file_path | /u00/app/mysql/admin/mysqld7/log/mysqld7-audit.log |
| server_audit_file_rotate_now | OFF |
| server_audit_file_rotate_size | 10000 |
| server_audit_file_rotations | 9 |
| server_audit_incl_users | sme |
| server_audit_loc_info | |
| server_audit_logging | ON |
| server_audit_mode | 0 |
| server_audit_output_type | file |
| server_audit_query_log_limit | 1024 |
| server_audit_syslog_facility | LOG_USER |
| server_audit_syslog_ident | mysql-server_auditing |
| server_audit_syslog_info | |
| server_audit_syslog_priority | LOG_INFO |
16 rows in set (0.00 sec)

Audit log file

In the audit log file directory, you will find one current audit file and 9 archived audit log files, as defined by the parameter server_audit_file_rotations
mysql@MariaDB:/u00/app/mysql/admin/mysqld7/log/ [mysqld7] ll
total 344
-rw-rw----. 1 mysql mysql 242 Oct 13 10:35
-rw-rw----. 1 mysql mysql 556 Oct 13 10:35 mysqld7-audit.log
-rw-rw----. 1 mysql mysql 10009 Oct 13 10:35 mysqld7-audit.log.1
-rw-rw----. 1 mysql mysql 10001 Oct 12 14:22 mysqld7-audit.log.2
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:47 mysqld7-audit.log.3
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:41 mysqld7-audit.log.4
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:34 mysqld7-audit.log.5
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:28 mysqld7-audit.log.6
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:22 mysqld7-audit.log.7
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:15 mysqld7-audit.log.8
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:09 mysqld7-audit.log.9

You can check the latest records in the current one
mysql@MariaDB:/u00/app/mysql/admin/mysqld7/log/ [mysqld7] tail -f mysqld7-audit.log
20161013 14:22:57,MYSQL,sme,localhost,31,179,QUERY,employees,'create tables tst(name varchar(20))',1064
20161013 14:23:05,MYSQL,sme,localhost,31,180,CREATE,employees,tst,
20161013 14:23:05,MYSQL,sme,localhost,31,180,QUERY,employees,'create table tst(name varchar(20))',0
20161013 14:23:41,MYSQL,sme,localhost,31,181,WRITE,employees,tst,
20161013 14:23:41,MYSQL,sme,localhost,31,181,QUERY,employees,'insert into tst values(\'toto\')',0
20161013 14:24:26,MYSQL,sme,localhost,31,182,WRITE,employees,tst,
20161013 14:24:26,MYSQL,sme,localhost,31,182,QUERY,employees,'update tst set name=\'titi\'',0
20161013 14:24:53,MYSQL,sme,localhost,31,183,QUERY,employees,'delete from tst',1142
20161013 14:48:34,MYSQL,sme,localhost,33,207,READ,employees,tst,
20161013 14:48:34,MYSQL,sme,localhost,33,207,QUERY,employees,'select * from tst',0
20161013 15:35:16,MYSQL,sme,localhost,34,0,FAILED_CONNECT,,,1045
20161013 15:35:16,MYSQL,sme,localhost,34,0,DISCONNECT,,,0
20161013 15:35:56,MYSQL,sme,localhost,35,0,CONNECT,,,0
20161013 15:35:56,MYSQL,sme,localhost,35,210,QUERY,,'select @@version_comment limit 1',0
20161013 15:36:03,MYSQL,sme,localhost,35,0,DISCONNECT,,,0

The audit log file, which is in plain-text format, contains the following comma-separated fields
timestamp : 20161012 10:33:58
serverhost : MYSQL
user : sme
host : localhost
connection id: 9
query id : 77
operation : QUERY
database : employees
object : ‘show databases’ # Executed query if QUERY operation, or table name if TABLE operation.
return code : 0
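
Since the format is plain comma-separated text, a quick shell one-liner already gives a usage summary. This is just a sketch – quoted queries containing commas would need a real CSV parser – counting entries per user and operation:

mysql@MariaDB:/u00/app/mysql/admin/mysqld7/log/ [mysqld7] awk -F',' '{print $3, $7}' mysqld7-audit.log | sort | uniq -c | sort -rn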

Installation on Percona or MySQL

It is quite the same as on MariaDB
Once your server is installed, you first have to locate your plugin directory
mysqld8-[Percona]>SHOW GLOBAL VARIABLES LIKE 'plugin_dir';
| Variable_name | Value |
| plugin_dir | /u00/app/mysql/product/Percona-Server-5.7.14-7-Linux.x86_64.ssl101/lib/mysql/plugin/ |

mysqld10-[MySQL]>SHOW GLOBAL VARIABLES LIKE 'plugin_dir';
| Variable_name | Value |
| plugin_dir | /u00/app/mysql/product/mysql-5.6.14-linux-glibc2.5-x86_64/lib/plugin/ |

then copy the MariaDB plugin server_audit.so into the Percona/MySQL plugin directory (the destinations below are the plugin directories listed above)
mysql@Percona:[mysqld8] cp /u00/app/mysql/product/mariadb-10.1.16-linux-x86_64/lib/plugin/server_audit.so /u00/app/mysql/product/Percona-Server-5.7.14-7-Linux.x86_64.ssl101/lib/mysql/plugin/

mysql@MySQL:[mysqld10] cp /u00/app/mysql/product/mariadb-10.1.16-linux-x86_64/lib/plugin/server_audit.so /u00/app/mysql/product/mysql-5.6.14-linux-glibc2.5-x86_64/lib/plugin/

Install/load the plugin & check
mysqld8-[(Percona)]>INSTALL PLUGIN server_audit SONAME 'server_audit.so';
mysqld8-[Percona]>SELECT * from information_schema.plugins where plugin_name='server_audit'\G
*************************** 1. row ***************************
PLUGIN_LIBRARY: server_audit.so
PLUGIN_AUTHOR: Alexey Botchkov (MariaDB Corporation)
PLUGIN_DESCRIPTION: Audit the server activity

mysqld10-[MySQL]>INSTALL PLUGIN server_audit SONAME 'server_audit.so';
mysqld10-[MySQL]>SELECT * from information_schema.plugins where plugin_name='server_audit'\G
*************************** 1. row ***************************
PLUGIN_LIBRARY: server_audit.so
PLUGIN_AUTHOR: Alexey Botchkov (MariaDB Corporation)
PLUGIN_DESCRIPTION: Audit the server activity

Configuration of the important audit system variables

You can take exactly the same audit system variables defined for MariaDB


The MariaDB Audit Plugin is really quick and easy to install
It is a good and cheap auditing solution and can be installed on different distributions
It lets you see exactly which SQL queries are being processed
Auditing information can really help you track suspicious queries, detect mistakes and troubleshoot abnormal activity


The post MariaDB: audit plugin appeared first on Blog dbi services.

List with Details on a Single Page in Oracle Application Builder Cloud Service

Shay Shmeltzer - Thu, 2016-10-13 11:11

This question came up a couple of times from users, so I figured I'd document how to achieve a layout that shows a list of items and allows you to pick an item from this list to show its details on the same page.

[Image: list with details]

The default layout that ABCS creates is a list on one page with the ability to select an item and go see the details or edit that record on another page.

To combine the two into a single page, start from the edit or the details page that ABCS created.

On this page you then add the table or list for the same object, and set the link on a field to do the edit or details - this basically means that you'll navigate to the same page.

If you now run the page, you'll be able to click items in the table and see their details on the same page.

Here is a quick demo of how it is done:

Note that if you want this to be the default view that people see when navigating to your app - just update the navigation menu of your application to reflect this. 

Categories: Development

Partitioning – When data movement is not performed as expected

Yann Neuhaus - Thu, 2016-10-13 07:16

This blog is about an interesting partitioning story and curious data movements during a merge operation. I was at one of my customers, who uses partitioning intensively for various reasons, including archiving and manageability. A couple of days ago, we decided to test a freshly developed script, which will carry out the automatic archiving against the concerned database in the quality environment.

Let’s describe a little bit the context.

We used a range-based data distribution model. Boundaries are number-based and increase monotonically with identity values.

We defined a partition function with a range right values strategy along these lines (the function name pf_archive is hypothetical):

CREATE PARTITION FUNCTION pf_archive (BIGINT)
AS RANGE RIGHT FOR VALUES(@boundary_archive, @boundary_current)


The first implementation of the partition scheme consisted in storing all partitioned data inside the same filegroup.
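
A minimal sketch of such a scheme, reusing the hypothetical function name above and mapping every partition to the PRIMARY filegroup:

CREATE PARTITION SCHEME ps_archive
AS PARTITION pf_archive
ALL TO ([PRIMARY]);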


Well, the initial configuration was as follows. We applied page compression to the archive partition because it contained cold data which is not supposed to be updated very frequently.

[Figure: partitioned table - starting point]

[Figure: partitioned table - starting configuration]

As you may expect, the next step consists in merging the ARCHIVE and CURRENT partitions by using the MERGE command as follows:

ALTER PARTITION FUNCTION pf_archive() MERGE RANGE (@boundary_archive);


Basically, we achieve a merge operation according to Microsoft documentation:

The filegroup that originally held boundary_value is removed from the partition scheme unless it is used by a remaining partition, or is marked with the NEXT USED property.


[Figure: partitioned table after merge]

At this point we may expect SQL Server to have moved data from the middle partition to the left partition, and according to this other Microsoft pointer:

When two partitions are merged, the resultant partition inherits the data compression attribute of the destination partition.

But the result came as a little surprise, because I expected the PAGE compression value from my archive partition to be retained.

[Figure: partitioned table after merge - configuration]
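
The compression state and row count per partition can be checked with a query along these lines (the table name is a placeholder for the real partitioned table):

SELECT p.partition_number, p.rows, p.data_compression_desc
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.my_partitioned_table')
  AND p.index_id IN (0, 1);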

At this point, my assumption was that either the compression value was not inherited correctly as mentioned in the Microsoft documentation, or the data movement was not performed as expected. The latter can be checked very quickly by looking at the corresponding records inside the transaction log file.

SELECT [AllocUnitName], Operation,
	COUNT(*) as nb_ops
FROM ::fn_dblog(NULL,NULL)
WHERE [Transaction ID] = (
     SELECT TOP 1 [Transaction ID]
     FROM ::fn_dblog(NULL,NULL)
     WHERE [Xact ID] = 164670)
GROUP BY [AllocUnitName], Operation

I show only the relevant sample here:

[Figure: transaction log records after merge]

Referring to what I saw above, I noticed the data movement was not performed as expected, but rather as shown below:

[Figure: partitioned table after merge - actual data movement]

So, this strange behavior seems to explain why the compression state switched from PAGE to NONE in my case. Another strange thing: when we changed the partition scheme to include a dedicated archive filegroup, we returned to normality.



I double-checked the Microsoft documentation to see if any section explains the behavior I experienced in this case, but without success. After further investigation I found an interesting blog post from Sunil Agarwal (Microsoft) about partition merging and some performance improvements shipped with SQL Server 2008 R2. In short, I was in the specific context described in the blog post (same filegroup), and the merge optimization came into action transparently because the number of rows in the archive partition was lower than the number in the current partition at the moment of the MERGE operation.

Let me add another relevant piece of information – the number of rows in each partition – to the following picture:

[Figure: partitioned table with row counts - merge optimization]

Just to be clear, this is an interesting and smart mechanism provided by Microsoft, but it may be a little confusing when you're not aware of this optimization. In my case, we finally decided with my customer to dedicate an archive partition to cold data that will rarely be accessed in this context, and to keep open the possibility of moving cold data to cheaper storage when the archive partition grows to a critical size. But in other cases, if storing data in the same filegroup is still a relevant scenario, keep this optimization in mind so you don't experience unexpected behavior.

Happy partitioning!



The post Partitioning – When data movement is not performed as expected appeared first on Blog dbi services.

How can we use Oracle to deduplicate our data

Bar Solutions - Thu, 2016-10-13 05:13

Dear Patrick,

We have gone through a merger at our company where we are trying to merge the databases. The problem now is that we have duplicate records in our tables. We can of course go through all the records by hand and check if they exist twice. Another option is to build an application to do this. But using the Oracle Database there must be a better way to do this. Any ideas?

Ramesh Cunar

Dear Ramesh,

Going through all these records by hand seems like quite a time-consuming process. I don't know the number of records in your tables, but I would definitely not do this by hand. Writing an application to do this makes it possible to run the application multiple times, so you can update the rules, making it better every time. But writing an application can be time consuming, and so can running it.

I think your best shot would be to try and find a solution in either PL/SQL or SQL. Depending on the complexity and size of your data you could try any of the following solutions.

You can write a procedure that loops through your data and saves the records in an associative array. If the record already exists in the array – or at least its key fields are equal – then it should not be added.

CREATE OR REPLACE PROCEDURE ot_deduplicate_aa AS
  CURSOR c_ot_emp IS
    SELECT e.*
      FROM ot_emp e;
  TYPE t_ot_emp_aa IS TABLE OF ot_emp%ROWTYPE INDEX BY PLS_INTEGER;
  r_ot_emp         ot_emp%ROWTYPE;
  l_unique_ot_emps t_ot_emp_aa;
  FUNCTION record_already_exists(record_in        IN ot_emp%ROWTYPE
                                ,collection_inout IN OUT t_ot_emp_aa) RETURN BOOLEAN IS
    l_indx        PLS_INTEGER := collection_inout.first;
    l_returnvalue BOOLEAN := FALSE;
  BEGIN
    IF l_indx IS NOT NULL THEN
      LOOP
        -- a duplicate is defined here as a record with the same ename and job
        l_returnvalue := l_returnvalue OR ((record_in.ename = collection_inout(l_indx).ename) AND
                         (record_in.job = collection_inout(l_indx).job));
        l_indx := collection_inout.next(l_indx);
        EXIT WHEN l_returnvalue OR (l_indx IS NULL);
      END LOOP;
    END IF;
    RETURN l_returnvalue;
  END record_already_exists;
BEGIN
  OPEN c_ot_emp;
  LOOP
    FETCH c_ot_emp
      INTO r_ot_emp;
    EXIT WHEN c_ot_emp%NOTFOUND;
    -- check if this record already exists
    IF NOT (record_already_exists(record_in => r_ot_emp, collection_inout => l_unique_ot_emps)) THEN
      l_unique_ot_emps(l_unique_ot_emps.count + 1) := r_ot_emp;
    END IF;
  END LOOP;
  CLOSE c_ot_emp;
  -- write the unique records to the target table
  FOR indx IN l_unique_ot_emps.first .. l_unique_ot_emps.last LOOP
    INSERT INTO ot_emp_deduplicated
    VALUES l_unique_ot_emps(indx);
  END LOOP;
END ot_deduplicate_aa;

You can speed up this process by using bulk processing for both the retrieval and the storing of the data. Watch out that you don’t blow up your memory by retrieving too much data at once. You can use the limit clause to prevent this.
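
A minimal sketch of that bulk approach, reusing the table names from above (the actual de-duplication check is elided here):

DECLARE
  CURSOR c_ot_emp IS
    SELECT * FROM ot_emp;
  TYPE t_ot_emp_tt IS TABLE OF ot_emp%ROWTYPE;
  l_batch t_ot_emp_tt;
BEGIN
  OPEN c_ot_emp;
  LOOP
    FETCH c_ot_emp BULK COLLECT INTO l_batch LIMIT 1000; -- cap memory usage per batch
    EXIT WHEN l_batch.count = 0;
    -- de-duplicate l_batch here, as in the example above ...
    FORALL indx IN 1 .. l_batch.count
      INSERT INTO ot_emp_deduplicated VALUES l_batch(indx);
  END LOOP;
  CLOSE c_ot_emp;
END;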

Another way to go at this is to use the analytic functions in Oracle. The idea behind using analytics is to rank all the records within their own partition. This rank can be rather arbitrary, so we can use ROW_NUMBER as a ranking mechanism. In the PARTITION BY clause we add all the columns we need to check for duplicates on, and to define which record to keep we add the (obligatory) ORDER BY clause. Then, using this new column, we can state that we are just interested in the records where the ROW_NUMBER equals 1.

INSERT INTO ot_emp_deduplicated(empno, ename, job, mgr, hiredate, sal, comm, deptno)
WITH emp_all_rows AS
 (SELECT e.empno, e.ename, e.job, e.mgr, e.hiredate, e.sal, e.comm, e.deptno
        ,row_number() OVER (PARTITION BY e.ename, e.job ORDER BY e.empno ASC) rn
    FROM ot_emp e)
SELECT empno, ename, job, mgr, hiredate, sal, comm, deptno
  FROM emp_all_rows
 WHERE rn = 1

You can of course use this same technique to delete the duplicate rows directly from the source table, but then the query factoring (WITH clause) cannot lead the statement, because that isn't supported in a DELETE.

DELETE FROM ot_emp e
 WHERE e.rowid IN
(SELECT ear.rid
   FROM (SELECT e.rowid rid
               ,row_number() over(PARTITION BY e.ename, e.job ORDER BY e.empno ASC) rn
           FROM ot_emp e) ear --<-- EmpAllRows
  WHERE rn > 1)

It is probably faster to insert the de-duplicated rows into a new table, then drop the original table and rename the new table. Be aware of any triggers, constraints, indexes etc. that may exist and need to be recreated.
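
A sketch of that swap, assuming ot_emp_deduplicated has already been filled as shown above; triggers, constraints, indexes and grants have to be recreated afterwards:

DROP TABLE ot_emp;
ALTER TABLE ot_emp_deduplicated RENAME TO ot_emp;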

You can check out the full demo code at
Or you can send me an email and I will send you the demo script.

Hope this helps in your efforts to de-duplicate your data.

Happy Oracle’ing,
Patrick Barel

If you have any comments on this subject or you have a question you want answered, please send an email to patrick[at]bar-solutions[dot]com. If I know the answer, or can find it for you, maybe I can help.

This question has been published in OTech Magazine of Winter 2015.

MS16-104 Security Update for IE Breaks EBS JRE Download

Steven Chan - Thu, 2016-10-13 02:04

MS16-104: Security update for Internet Explorer: September 13, 2016 (KB3185319) breaks the JRE (oaj2se.exe) download function in Oracle E-Business Suite 12.1 and 12.2. This issue occurs with IE 11 on Windows 7, 8.1, and 10.

Expected behaviour: If an end-user does not have the required JRE plug-in release installed on their desktop client, a dialog box appears, asking whether they wish to download it from the EBS server. When the end-user confirms the download, the oaj2se.exe file downloads and runs.

Incorrect behaviour after the installation of KB3185319: When the end-user confirms the download via the dialog box, oaj2se.exe (the JRE) will not download.

How to fix this problem

Microsoft has published separate fixes for this issue for Windows 7, 8.1, and 10.  The appropriate patch must be installed on all affected end-user desktop clients:

Temporary workaround

If an end-user has installed KB3185319 but is unable to install the fix, they may work around the problem by entering the file's URL directly in Internet Explorer's address bar. For example:




Categories: APPS Blogs

Documentum story – ADTS not working anymore?

Yann Neuhaus - Thu, 2016-10-13 02:00

A few weeks ago, on one of our Documentum environments, we found out thanks to our monitoring that renditions weren't being generated anymore by our CTS/ADTS Server… This happened in a sandbox environment where a lot of dev/testing was done in parallel between EMC, the different application teams and the Platform/Architecture Team (us). A lot of changes at the same time means that it might not be easy to find out what caused this issue…


After a few checks on our monitoring scripts, just to ensure that the issue wasn't the monitoring itself, it appeared that this part was working properly and that the renditions were indeed no longer generated. Therefore we checked the configuration of the docbase/rendition server but didn't find anything suspicious on the configuration side, so we checked the logs of the Rendition Server. The CTS/ADTS Server often prints a lot of different errors that are all linked but appear to have different root causes. To know which error is really relevant, I cleaned up the log file (stopped the CTS/ADTS, backed up the log file and removed it) and then launched our monitoring script, which basically removes all existing renditions for a test document, if any, and then requests a new set of renditions to be generated by the CTS/ADTS Server.


After doing that, it was clear that the following error was the real one I needed to take a look at:

11:14:58,562  INFO [ Thread-61] CTSThreadPoolManagerImpl -       Added ICTSTask to the ICTSThreadPoolManager: dm_transcode_content
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Start. About to get Next ICTSTask from pool manager...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       ICTSThreadPoolManager: removing first item from the list for processing...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Removing a task to execute it. Number in waiting list: 1
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Next CTSTask received...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       CTSThreadPoolManager has threadlimit -1
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Processing next CTSTask...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       About to get notifier from CTSTask...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       About to get session from CTSTask...
11:14:59,203  INFO [Thread-153] CTSThreadPoolManagerImpl -       About to get RUN CTSTask...
11:14:59,859  WARN [Thread-153] CTSOperationsUtils -       [BOCS/ACS] exportContentFiles error - javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: java.security.cert.CertPathBuilderException: Could not build a validated path.


11:14:59,859 ERROR [Thread-153] CTSThreadPoolManagerImpl -       Exception in CTSThreadPoolManagerImpl, notification :
com.documentum.cts.exceptions.internal.CTSServerTaskException: Error while exporting file(s) from repo01, objectName: check_ADTS_rendition_creation errorMessage: Executing Export operation failed
                Could not get the content file for check_ADTS_rendition_creation (093f245a800a303f, msw12, null)
Cause Exception was:
com.documentum.cts.exceptions.CTSException: Error while exporting file(s) from repo01, objectName: check_ADTS_rendition_creation errorMessage: Executing Export operation failed
                Could not get the content file for check_ADTS_rendition_creation (093f245a800a303f, msw12, null)
Cause Exception was:
DfException:: THREAD: Thread-153; MSG: Error while exporting file(s) from repo01, objectName: check_ADTS_rendition_creation errorMessage: Executing Export operation failed
                Could not get the content file for check_ADTS_rendition_creation (090f446e800a303f, msw12, null); ERRORCODE: ff; NEXT: null
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx5(CTSOperationsUtils.java:626)
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx4(CTSOperationsUtils.java:332)
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx3(CTSOperationsUtils.java:276)
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx2(CTSOperationsUtils.java:256)
                at com.documentum.cts.impl.services.task.CTSTask.exportInputContent(CTSTask.java:4716)
                at com.documentum.cts.impl.services.task.CTSTask.retrieveInputURL(CTSTask.java:4594)
                at com.documentum.cts.impl.services.task.CTSTask.initializeFromCommand(CTSTask.java:2523)
                at com.documentum.cts.impl.services.task.CTSTask.execute(CTSTask.java:922)
                at com.documentum.cts.impl.services.task.CTSTaskBase.doExecute(CTSTaskBase.java:514)
                at com.documentum.cts.impl.services.task.CTSTaskBase.run(CTSTaskBase.java:460)
                at com.documentum.cts.impl.services.thread.CTSTaskRunnable.run(CTSTaskRunnable.java:207)
                at java.lang.Thread.run(Thread.java:745)
11:14:59,859  INFO [Thread-153] CTSQueueProcessor -       _failureNotificationAdmin : false
11:14:59,859  INFO [Thread-153] CTSQueueProcessor -       _failureNotification : true


Ok, so now it is clear that the error is actually the following one: “java.security.cert.CertPathBuilderException: Could not build a validated path”. This always means that a specific SSL certificate chain isn't trusted. As you can see above, “BOCS/ACS” is mentioned on the same line, and the line just below contains the URL of the ACS… I thought about that, and indeed one of the changes planned for that day was that the D2-BOCS had been installed and enabled on this environment. So what is the link between the ACS URL and the D2-BOCS installation? Well, when installing the D2-BOCS, if you want to keep your environment secured, you need the ACS to be switched to HTTPS, because the D2-BOCS will force D2 to use the ACS URLs to download the documents to the client's workstation, whereas D2 does not use the ACS at all when there is no D2-BOCS installed. So the installation of the D2-BOCS isn't linked to the CTS/ADTS at all, but one of our prerequisites for installing it was to set up the ACS in HTTPS, and that is linked to the CTS/ADTS Server because it actually uses the ACS to download the documents, as you can see in the error above.


Now we know what was the error and just to confirm that, I switched back the ACS URL to HTTP (using DA: Administration > Distributed Content Configuration > ACS Servers > Right-click on ACS objects > Properties > ACS Server Connections) and re-init the Content Server (using DA: Administration > Basic Configuration > Content Servers > Right-click on CS objects > Properties > Check “Re-Initialize Server” and click OK).


Right after doing that, the monitoring switched back to green, meaning that renditions were created again; this was therefore indeed the one and only issue. So what to do if we want to use the ACS in HTTPS in combination with the renditions? We just have to tell the CTS/ADTS Server that it can trust the ACS SSL certificate, and this is done by updating the cacerts file of the Java installation used by the Rendition Server. This is done pretty easily using the following commands, for which I will suppose that the Rendition Server has been installed on a D: drive under “D:\CTS”.


So the first thing to do is to upload your certificate chain to the Rendition Server and put it under “D:\certs” (I will suppose there are two SSL certificates in the chain: a Root and a Gold one). Then simply start a command prompt as Administrator and execute the following commands to update the cacerts file of Java:

D:\> copy D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts.bck_YYYYMMDD
        1 file(s) copied.

D:\> D:\CTS\java64\1.7.0_72\bin\keytool.exe -import -trustcacerts -alias root_ca -keystore D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts -file D:\certs\Internal_Root_CA.cer
Enter keystore password:
Trust this certificate? [no]:  yes
Certificate was added to keystore
D:\> D:\CTS\java64\1.7.0_72\bin\keytool.exe -import -trustcacerts -alias gold_ca -keystore D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts -file D:\certs\Internal_Gold_CA1.cer
Enter keystore password:
Certificate was added to keystore
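
To double-check that both certificates really ended up in the keystore, you can list them per alias (same keystore path as above):

D:\> D:\CTS\java64\1.7.0_72\bin\keytool.exe -list -keystore D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts -alias root_ca
D:\> D:\CTS\java64\1.7.0_72\bin\keytool.exe -list -keystore D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts -alias gold_ca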


Now just switch the ACS to use HTTPS again and restart the Rendition services using the Windows Services console or the command line, and the next time you request a rendition, it will work without errors, even in HTTPS. It's actually a very common mistake: setting up SSL on the Content Server side is great, but you must not forget that some other components might use what you switched to HTTPS, and these additional components need to trust your SSL certificates too!


Note 1: in our case, the WebLogic Server hosting D2 was already in HTTPS and therefore already trusted the internal Root & Gold SSL certificates, which is why we could use the ACS in HTTPS from D2 without issue.

Note 2: in case you didn't know about it, I think it is now clear that the CTS/ADTS Server uses the ACS to download the files… Therefore if you want a secured environment, even without D2-BOCS, you absolutely need to switch your ACS to HTTPS!


The post Documentum story – ADTS not working anymore? appeared first on Blog dbi services.

how to improve the SQL performance of an 8 node Exadata Data Warehouse RAC

Tom Kyte - Thu, 2016-10-13 00:46
Hi Tom! Last day I work with an 8 node Exadata Data Warehouse to tune performance for the slow SQL,and I found that the table is not gather statistics and the execution plan is not the best, it executes with the TABLE ACCESS STORAGE FULL,the table...
Categories: DBA Blogs

change running code's execution plan

Tom Kyte - Thu, 2016-10-13 00:46
Hi Tom, Suppose a package is executing in prod environment, currently proc A is running and after that Proc B will be executed in next 45 mins. I have just spotted some problems in proc B execution plan(which was generated earlier). I cant chang...
Categories: DBA Blogs


Tom Kyte - Thu, 2016-10-13 00:46
Hi Guys, I am using linux to run the query. I have one query like select * from Emp; I want to spool the results into one file(this file has only exported data without number of rows selected message) and what ever the log i.e number of lines ...
Categories: DBA Blogs

How to replicate a nested table

Tom Kyte - Thu, 2016-10-13 00:46
I would like to know if there is any easier way to replicate a nested table. Say for a normal table, we can just do CREATE TABLE a_bkp as SELECT * FROM a; Is there a similar easy way to create replica of existing nested table ?
Categories: DBA Blogs


Tom Kyte - Thu, 2016-10-13 00:46
Can someone please help me with query for how can I extract the 'comment' section of all the tables ? Used this query to get all the table names: Select table_name, owner, tablespace_name from dba_tables; Used below query to gather the comm...
Categories: DBA Blogs


Tom Kyte - Thu, 2016-10-13 00:46
Hi All, It seems that only indexes that are listed in the explain plan are monitored by index monitoring feature. But indexes that are used for constraint checking might not be listed in explain plan and in this case the index monitoring featur...
Categories: DBA Blogs

Customers talk to each other—and everyone wins

Linda Fishman Hoyle - Wed, 2016-10-12 20:40

A Guest Post by Oracle Senior Vice President Chris Leone, Applications Development (pictured left)

Most customer panels are conversations between the moderator and the customer. Not in Cara Capretta’s HCM session at OpenWorld 2016. Two minutes into the conversation, the HR and IT leaders from Kaiser Permanente, SGS, Siemens Corporation, MoneyGram, and IPSOS were laughing and joking with each other.

“We’re in this Together”

It turns out that the panelists knew each other from reference calls. Caroline Villemin, HCM Systems Manager of IPSOS, said her reference call with SGS was “more than just getting feedback on the solution; we really discovered the customer community and everything around it.”

Etienne Delobel, HRIS Director of SGS, described the interaction with a professional colleague this way: “When Caroline and the CFO of IPSOS came for an on-site visit, they spent half a day with us looking into our HCM platform. I told them that if they go with HCM Cloud, we'll work together, and we'll be together." He added, “There are all these great features available in the cloud. The thing is, I do not have enough resources on my team to play with them, to have fun with them. So now I'm looking at how IPSOS is implementing and using things like self-service. They’re prototyping for us.”

Customers Connect Online

In the on-premises world, customers interacted mostly with vendors and partners. Now that everyone in the SaaS world is on the same release, customers are looking more to each other for advice. You see this in online communities such as Oracle’s Customer Connect, where customers join discussion forums on topics covering CX, HCM, ERP, EPM, and more.

The HCM Cloud community alone has 20,000 active community members. In this Profit Magazine interview with Oracle SVP Chris Leone (pictured left), he points out that activity in the cloud community has grown because people are sharing information across organizations. They’re saying, “Hey, take advantage of this new succession planning feature, or this new career development tool. They’re sharing reports.”

Oracle Listens

In addition, customers use the Idea Lab to submit ideas, collaborate on development, and vote for their favorite ideas. According to Leone, 80 percent of the enhancements in Release 12 were customer-driven. HCM customers have submitted 7,200 ideas so far.

Are you part of this online community? We encourage you to sign up today.

How to install Mirantis OpenStack 9.0 using VirtualBox – part 3

Yann Neuhaus - Wed, 2016-10-12 14:20

In the first two blogs, I installed the Fuel environment and deployed OpenStack on the Fuel slave nodes, all of it from the Fuel Master node.

In this blog, I will show you all the steps to follow in order to create an instance in OpenStack. Everything is going to be done via the command line interface and not via Horizon – the OpenStack dashboard (I will explain this in another blog).


Let’s begin!

First of all let’s connect to the Fuel Master node and list all the nodes :

[root@fuel ~]# fuel2 node list
| id | name | status | os_platform | roles | ip | mac | cluster | platform_name | online |
| 1 | Storage | ready | ubuntu | [u'cinder'] | | 08:00:27:80:04:e8 | 1 | None | True |
| 2 | Compute | ready | ubuntu | [u'compute'] | | 08:00:27:cc:85:69 | 1 | None | True |
| 3 | Controller | ready | ubuntu | [u'controller'] | | 08:00:27:35:b0:77 | 1 | None | True |


Now, I connect to the controller node

[root@fuel ~]# ssh
Warning: Permanently added '' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-98-generic x86_64)

* Documentation:  https://help.ubuntu.com/
Last login: Tue Oct 11 09:55:54 2016 from


Let’s add an alias for each of the Fuel slave nodes:

[root@fuel ~]# cat /etc/hosts
  localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
      fuel.domain.tld fuel
      fuel.domain.tld controller
      fuel.domain.tld compute
      fuel.domain.tld storage


Now I can connect with the aliases:

[root@fuel ~]# ssh controller
The authenticity of host 'controller (' can't be established.
ECDSA key fingerprint is 01:b5:15:22:03:d0:f9:bb:86:3a:06:a7:8c:19:bd:22.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'controller,' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-98-generic x86_64)

* Documentation: https://help.ubuntu.com/
Last login: Tue Oct 11 10:12:34 2016 from
root@node-3:~# exit


Repeat this step for the two remaining nodes (compute & storage)

I connect to the controller node :

[root@fuel ~]# ssh controller
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-98-generic x86_64)

* Documentation:  https://help.ubuntu.com/
Last login: Tue Oct 11 11:24:16 2016 from


I try to run an OpenStack client command (listing the instances, for example):

root@node-3:~# nova list
 ERROR (CommandError): You must provide a username or user ID via --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID]

This is normal, because using the OpenStack client command lines first requires getting a token from Keystone.

In order to do that, you have to specify where the controller node can reach Keystone and get the required information from the OpenStack APIs. Because it is an authentication process, you will also need to specify a username and a password.

But doing this each time you use the OpenStack command-line clients can quickly become inconvenient. Fortunately, in OpenStack there is a way to avoid this: create a file with all the environment variables that need to be exported for getting this token from Keystone.

Mirantis creates this file for us:

root@node-3:~# ls

Let’s see what this file contains :

root@node-3:~# cat openrc
export OS_NO_CACHE='true'
export OS_TENANT_NAME='admin'
export OS_PROJECT_NAME='admin'
export OS_USERNAME='admin'
export OS_PASSWORD='admin'
export OS_AUTH_URL=''
export OS_DEFAULT_DOMAIN='Default'
export OS_AUTH_STRATEGY='keystone'
export OS_REGION_NAME='RegionOne'
export NOVA_ENDPOINT_TYPE='internalURL'
export OS_ENDPOINT_TYPE='internalURL'
export MURANO_REPO_URL='http://storage.apps.openstack.org/'


I source the file to export the environment variables :

root@node-3:~# source openrc


I check if the variables were exported. It is via OS_AUTH_URL that the controller node can reach Keystone

root@node-3:~# echo $OS_USERNAME
root@node-3:~# echo $OS_PASSWORD
root@node-3:~# echo $OS_AUTH_URL
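
If the unified python-openstackclient is available on the node (an assumption here; the per-project clients used below ship in any case), you can also confirm that Keystone actually issues a token for these credentials:

root@node-3:~# openstack token issue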


Now I can check whether some instances are already running:

root@node-3:~# nova list
| ID | Name | Status | Task State | Power State | Networks |



Let’s create the first instance

In order to create this instance, I need :

  • a flavor
  • an OS
  • a network
  • a keypair
  • a name


I list the available flavors

root@node-3:~# nova flavor-list
 | ID                                   | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
 | 1                                    | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
 | 2                                    | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
 | 3                                    | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
 | 4                                    | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
 | 5                                    | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
 | d942587a-c48b-48ca-9c96-cad3c358eb6e | m1.micro  | 64        | 0    | 0         |      | 1     | 1.0         | True      |


Then, I list the operating systems, or images, available. There is only one image created by default by Fuel, which is a mini Linux called Cirros.

root@node-3:~# nova image-list
| ID                                   | Name   | Status | Server |
| a3708fe7-60f7-49c9-91ed-a6eee1ab8ba4 | TestVM | ACTIVE |        |


I list also the networks (they had been created by Fuel during the deployment of OpenStack)

root@node-3:~# neutron net-list
| id                                   | name               | subnets                                               |
| b22e82c9-df6b-4580-a77e-cde8e93f30d8 | admin_floating_net | 7acc6b15-1c00-4447-b4f7-0fcced7a594b    |
| 09b1e122-cb63-44d5-af0b-244d3aa06331 | admin_internal_net | aea6fc29-dfb6-4586-9b09-70ce5c992315 |


I create a keypair that will be injected into the future instance in order to avoid password authentication (unless password authentication was set up). This step is not really needed in this case because the Cirros image provides a password by default, but it can be helpful if you use other cloud images (Ubuntu, CentOS, etc.).

root@node-3:~# nova keypair-add --pub-key ~/.ssh/authorized_keys mykey
root@node-3:~# nova keypair-list
| Name  | Type | Fingerprint                                     |
| mykey | ssh  | ee:56:e6:c0:7b:e2:d5:2b:61:23:d7:76:49:b3:d8:d5 |


Now that I have all the information I need, let's create the instance, which I will name InstanceTest01

root@node-3:~# nova boot \
> --flavor m1.micro \
> --image TestVM \
> --key-name mykey \
> --nic net-id=09b1e122-cb63-44d5-af0b-244d3aa06331 \
> InstanceTest01
| Property | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hostname | instancetest01 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-dyo0086w |
| OS-EXT-SRV-ATTR:root_device_name | - |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | bUQJDtwM3vjr |
| config_drive | |
| created | 2016-10-11T13:08:34Z |
| description | - |
| flavor | m1.micro (d942587a-c48b-48ca-9c96-cad3c358eb6e) |
| hostId | |
| host_status | |
| id | b84b49f1-1b01-4aa9-bd9c-c8691fca9298 |
| image | TestVM (a3708fe7-60f7-49c9-91ed-a6eee1ab8ba4) |
| key_name | mykey |
| locked | False |
| metadata | {} |
| name | InstanceTest01 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | abfec6fc54c14da28f9971e04c344ec8 |
| updated | 2016-10-11T13:08:36Z |
| user_id | d0e1e11f84064f4c8aa02381f0d42ed2 |

Here is the instance running :

root@node-3:~# nova list
| ID | Name | Status | Task State | Power State | Networks |
| b84b49f1-1b01-4aa9-bd9c-c8691fca9298 | InstanceTest01 | ACTIVE | - | Running | admin_internal_net= |

So the instance is up and running


We connect to the compute node to check if the instance is really running on it:

Connection to controller closed.
[root@fuel ~]# ssh compute
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-98-generic x86_64)

* Documentation: https://help.ubuntu.com/
Last login: Tue Oct 11 18:07:23 2016 from
root@node-2:~# virsh list
Id Name State
2 instance-00000001 running


Yes, the instance is running.

I need to add the SSH rule to access the instance; I will also add the ICMP one. These rules are managed by security groups.

Let’s try to ping the instance just created; from now on, all the operations are done on the controller node:

 root@node-3:~# ip netns exec qrouter-0c67b78a-93d4-417c-9ad3-3b29e1480934 ping
 PING ( 56(84) bytes of data.
 --- ping statistics ---
 3 packets transmitted, 0 received, 100% packet loss, time 2004ms


I cannot ping my instance; let's see which rules are currently in the default security group created by OpenStack:

root@node-3:~# nova secgroup-list
 | Id                                   | Name    | Description            |
 | 954e4d85-ea38-4a4b-bbe6-e355946fdfb0 | default | Default security group |
root@node-3:~# nova secgroup-list-rules 954e4d85-ea38-4a4b-bbe6-e355946fdfb0
| IP Protocol | From Port | To Port | IP Range | Source Group |
|                                             | | | | default |
|                                             | | | | default |

There are no ICMP or SSH rules yet. So, I add the ICMP rule to this default security group:

root@node-3:~# nova secgroup-add-rule 954e4d85-ea38-4a4b-bbe6-e355946fdfb0 icmp -1 -1
 | IP Protocol | From Port | To Port | IP Range | Source Group |
 | icmp | -1 | -1 | | |


I check if the rule was added

root@node-3:~# nova secgroup-list-rules 954e4d85-ea38-4a4b-bbe6-e355946fdfb0
 | IP Protocol | From Port | To Port | IP Range  | Source Group |
 |             |           |         |           | default      |
 |             |           |         |           | default      |
 | icmp        | -1        | -1      | |              |


Let’s test if I can ping the instance. We need to use the qrouter network namespace, which is basically the router that the controller node uses to reach the instance on the compute node

root@node-3:~# ip netns list




root@node-3:~# ip netns exec qrouter-0c67b78a-93d4-417c-9ad3-3b29e1480934 ping
 PING ( 56(84) bytes of data.
 64 bytes from icmp_seq=1 ttl=64 time=2.29 ms
 64 bytes from icmp_seq=2 ttl=64 time=0.789 ms
 --- ping statistics ---
 2 packets transmitted, 2 received, 0% packet loss, time 1001ms


I do the same with the SSH rule

First, I test if I can access my instance

root@node-3:~# ip netns exec qrouter-0c67b78a-93d4-417c-9ad3-3b29e1480934 ssh cirros@ -v
 OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
 debug1: Reading configuration data /etc/ssh/ssh_config
 debug1: /etc/ssh/ssh_config line 19: Applying options for *
 debug1: Connecting to [] port 22.


No, I cannot, so I add the rule

root@node-3:~# nova secgroup-add-rule  954e4d85-ea38-4a4b-bbe6-e355946fdfb0 tcp 22 22
| IP Protocol | From Port | To Port | IP Range  | Source Group |
| tcp         | 22        | 22      | |              |


I check if the rule was added :

root@node-3:~# nova secgroup-list-rules 954e4d85-ea38-4a4b-bbe6-e355946fdfb0
| IP Protocol | From Port | To Port | IP Range | Source Group |
| | | | | default |
| tcp | 22 | 22 | | |
| | | | | default |
| icmp | -1 | -1 | | |

I connect to my instance; the default username is cirros

root@node-3:~# ip netns exec qrouter-0c67b78a-93d4-417c-9ad3-3b29e1480934 ssh cirros@ -v
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to [] port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/id_rsa type -1
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8
debug1: Remote protocol version 2.0, remote software version dropbear_2015.67
debug1: no match: dropbear_2015.67
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ECDSA 1b:f1:1c:34:13:83:cd:b1:37:a9:e4:32:37:65:91:c4
The authenticity of host ' (' can't be established.
ECDSA key fingerprint is 1b:f1:1c:34:13:83:cd:b1:37:a9:e4:32:37:65:91:c4.
Are you sure you want to continue connecting (yes/no)? yes



Type yes, and the password is cubswin:)

Warning: Permanently added '' (ECDSA) to the list of known hosts.
debug1: ssh_ecdsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Trying private key: /root/.ssh/id_rsa
debug1: Trying private key: /root/.ssh/id_dsa
debug1: Trying private key: /root/.ssh/id_ecdsa
debug1: Trying private key: /root/.ssh/id_ed25519
debug1: Next authentication method: password
cirros@'s password: 
debug1: Authentication succeeded (password).
Authenticated to ([]:22).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env LANG = en_US.UTF-8
$ # "I am connected to my instance"

So now, I am connected to my instance


Let’s check if I can ping Google

$ ping www.google.com
PING www.google.com ( 56 data bytes
64 bytes from seq=0 ttl=59 time=29.485 ms
64 bytes from seq=1 ttl=59 time=9.089 ms
64 bytes from seq=2 ttl=59 time=27.027 ms
64 bytes from seq=3 ttl=59 time=9.992 ms
64 bytes from seq=4 ttl=59 time=26.734 ms

This ends this series. Mirantis made OpenStack very simple to install, but it requires good skills in Puppet and Astute if you want to customize the installation (for example, to create a network node role or to customize the installation of the MySQL database on the controller node).

In other blogs, I will show you how to add nodes to the Fuel environment to make the cloud more powerful.


The post How to install Mirantis OpenStack 9.0 using VirtualBox – part 3 appeared first on Blog dbi services.

Court Orders Rimini Street to Stop Unlawful Conduct and Awards Oracle $27.7 Million in Prejudgment Interest

Oracle Press Releases - Wed, 2016-10-12 14:00
Press Release
Court Orders Rimini Street to Stop Unlawful Conduct and Awards Oracle $27.7 Million in Prejudgment Interest

Redwood Shores, Calif.—Oct 12, 2016

Yesterday, the United States District Court for the District of Nevada issued two significant rulings in Oracle’s litigation against Rimini Street and its CEO Seth Ravin.

The first ruling is a permanent injunction barring Rimini Street from continuing to infringe Oracle’s copyrights and other unlawful acts. The Court previously determined that a permanent injunction was warranted, noting Rimini Street’s “callous disregard for Oracle’s copyrights and computer systems when it engaged in the infringing conduct” and that “Rimini’s business model was built entirely on its infringement of Oracle’s copyrighted software and its improper access and downloading of data from Oracle’s website and computer systems.”

The Court’s four-page permanent injunction prohibits certain copying, distribution, and use of Oracle’s copyrighted software and documentation by Rimini Street and also imposes limitations on Rimini Street’s access to Oracle’s websites. The order states, for example, that Rimini Street may not use a customer’s software environment “to develop or test software updates or modifications for the benefit of any other licensee.” The order also prohibits Rimini Street’s preparation of certain derivative works from Oracle’s copyrighted software. The order applies not only to Rimini Street, but also “its subsidiaries, affiliates, employees, directors, officers, principals, and agents.”

“This permanent injunction imposes important restrictions on Rimini Street,” said Oracle’s General Counsel Dorian Daley, “and Oracle is grateful that the Court has taken steps to prevent continuing unlawful acts by Rimini Street and its executives.”

Although Rimini Street has stated that there was “no expected impact” from any injunction, Rimini Street also told the Court that it “could suffer significant harm to its current business practices if the proposed injunction were entered,” which it now has been. These contradictory positions raise the issue of whether Rimini is misleading the Court or making misrepresentations to customers.

In its second ruling, the Court awarded Oracle approximately $27.7 million in prejudgment interest, with an additional amount of prejudgment interest to be awarded based on the date of the final judgment. This is in addition to the $50 million jury verdict and the $46.2 million in attorneys’ fees and costs awarded to Oracle last month.

Contact Info
Deborah Hellinger
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

Presenting at KScope can change your life

Dietmar Aust - Wed, 2016-10-12 13:16
Back in 2008 I was invited by Scott Spendolini to speak about my experiences with Oracle APEX at the German telecom shops at the KScope conference in New Orleans.

It was a truly great experience. 

At that time the system I had built was one of the larger deployments of Oracle APEX known in the community, serving 6,000 users in the 800 telecom shops in Germany with 400,000 hits per day on the system.

Not only could I showcase what I had built with Oracle APEX 2.2 but I could even get more connected with the Oracle APEX community. It was awesome.

I had the chance to meet many people from the Oracle forums in person, like Scott Spendolini, Patrick Wolf, Dimitri Gielis, Roel Hartman, Karen Cannell, Anton Nielsen, John Scott but also many people from the APEX team itself like Mike Hichwa, Joel Kallman, Carl Backstrom, David Peake, Scott Spadafore and many others.

At this conference you will certainly meet most members of the APEX team itself. It is a great opportunity to get in touch with the people in person ... this just makes a HUGE difference. So whenever you need support later on ... your name will be known, and people will understand that you contribute just as well. It makes a difference!

From that time on in 2008 I got hooked and got more and more involved in the community. Each project became an opportunity to research yet another interesting topic for the next conference.

And I presented each year (I missed Long Beach ... what a bummer :( ) until now ... and I will submit again this week ... because the deadline is coming close ( 14.10.2016 ) !!!

This conference sets itself apart not only by the extremely high-class content but also by the social activities; there is so much going on. They always select really nice locations, everybody is relaxed and has so much fun. The APEX presentations are typically well attended ... even more so than those of the other tracks.

I can only encourage you to submit your presentation since the deadline is coming up soon: 14.10.2016 !!!

In order to submit a really compelling presentation, please read up these posts for guidance:
And remember ... we need presenters on all levels: beginners, intermediates and experts.

Hope to see you in San Antonio at KScope 2017 !!!


Oracle Documents Cloud Service Named in the Constellation Research ShortList for Enterprise File Sharing Suites

WebCenter Team - Wed, 2016-10-12 10:17

The Constellation ShortList presents vendors in different categories of the market relevant to early adopters. In addition, products included meet the threshold criteria for this category as determined by Constellation Research.

This Constellation ShortList of vendors for a market category is compiled through conversations with early adopter clients, independent analysts, and briefings with vendors and partners.

About the Enterprise File Sharing Suites Constellation ShortList

Enterprise file sharing and synchronization solutions allow individuals and teams to share and synchronize documents, photos, videos and other files across multiple devices in a secure manner. This category refers to people collaborating on and sharing business content in a distributed approach, allowing for real-time editing and file distribution as needed. The most advanced vendors enable developers to create business, content-centric, line-of-business applications that incorporate files as part of that process.

Source: Constellation Research, Inc., “Constellation Research ShortList for Enterprise File Sharing Suites,” Alan Lepofsky, October 12, 2016.

Oracle Documents Cloud Service was included in the Constellation Research ShortList for Enterprise File Sharing Suites.

In today’s world, your employees need instant access to the content that drives your business. Whether they are at their desk, on the go, or working remotely, the ability to easily and securely access and collaborate on business content is one of the keys to being a nimble enterprise. Oracle Documents Cloud Service, an enterprise content collaboration solution with rich social features, gives you the file sync and share capabilities your employees need while providing the control and security your IT organization requires. All of the content can be accessed and collaborated on in context of business workflows, business applications and office productivity tools.

Oracle Documents Cloud Service lets you easily store your business content in the cloud, securely access it from anywhere, and share it with your colleagues and partners in real time. With a fast, intuitive web interface and easy-to-use desktop and mobile applications, employees can view and collaborate on files even when offline, keeping your organization running efficiently and your employees staying productive from any location.

Oracle Documents Cloud Service was pleased to be included in the Constellation Research ShortList for Enterprise File Sharing Suites. You can read the full report here.

OTN Appreciation Day: - the day after - BPEL, SOASuite and SCA (in that order...)

Darwin IT - Wed, 2016-10-12 07:21
Unfortunately I noticed this nice initiative only yesterday: OTN Appreciation Day. I did not have a chance to cook something up, but I do like to add some mustard after the meal, as we say in Dutch.

In the titles of the 'OTN Appreciation Day' blogs I miss BPEL. In 2004, when Oracle acquired Collaxa, I worked at Oracle Consulting in the Netherlands. I worked with Oracle Workflow and InterConnect. Oracle wasn't really into SOA yet. But with BPEL PM they acquired the tool that stood at the base of SOA Suite, together with Web Services Manager and what we now know as the Mediator. And it changed my professional life, really. It is extremely powerful, especially with the added JCA Adapters, the XSLT mapper, and since 11g the SCA architecture, which enables you to assemble composite applications with Adapters, Business Rules, Human Workflow, Mediator, Spring components and of course ... BPEL 2.0.

Sorry, Tim, for being late.

Is it possible to find out the problem SQL in a procedure which was executed days ago

Tom Kyte - Wed, 2016-10-12 06:26
Hi Team, Our system was written by plenty a lot of procedures, and one of them did not performance well. This procedure includes tens of SQL statments, suppose that there's only 1 or 2 SQL statements in the procedure caused this problem. I mean, th...
Categories: DBA Blogs


Tom Kyte - Wed, 2016-10-12 06:26
CLSN00107: CRS-5017: The resource action "ora.dbfor.db start" encountered the following error: ORA-12546: TNS: permission denied. For details, refer to "(:CLSN00107:)" in "/exec/products/oracle/grid/log/servfor/agent/ohasd/oraagent_...
Categories: DBA Blogs

