Feed aggregator

Bordering Text

Tim Dexter - Tue, 2014-11-18 16:08

A tough little question appeared on one of our internal mailing lists today that piqued my interest. A customer wanted to place a border around all data fields in their BIP output. Something like this:


Naturally you think of using a table, embedding the field inside a cell and turning the cell border on. That will work but will need some finessing to get the cells to stretch or shrink depending on the width of the runtime text. Then things might get a bit squirly (technical term) if the text is wide enough to force a new line at the page edge. Anyway, it will get messy. So I took a look at the problem to see if the fields can be isolated in the page as far as the XSL-FO code is concerned. If the field can be isolated in its own XSL block then we can change attribute values to get the borders to show just around the field. Sadly not.

This is an embedded field YEARPARAM in a sentence.

translates to

 <fo:inline height="0.0pt" style-name="Normal" font-size="11.0pt" style-id="s0" white-space-collapse="false" 
  font-family-generic="swiss" font-family="Calibri" 
  xml:space="preserve">This is an embedded field <xsl:value-of select="(.//YEARPARAM)[1]" xdofo:field-name="YEARPARAM"/> in a sentence.</fo:inline>


If we change the border on this, it will apply to the complete sentence, not just the field.
So how could I isolate that field? Well, we can actually do anything to the field: embolden, italicize, etc. I settled on changing the background color (it's easy to change it back with a single attribute call). Using the highlighter tool on the Home tab in Word I changed the field to have a yellow background. I now have:

 This gives me the following code.

<fo:block linefeed-treatment="preserve" text-align="start" widows="2" end-indent="5.4pt" orphans="2"
 start-indent="5.4pt" height="0.0pt" padding-top="0.0pt" padding-bottom="10.0pt" xdofo:xliff-note="YEARPARAM" xdofo:line-spacing="multiple:13.8pt"> 
 <fo:inline height="0.0pt" style-name="Normal" font-size="11.0pt" style-id="s0" white-space-collapse="false" 
  font-family-generic="swiss" font-family="Calibri" xml:space="preserve">This is an embedded field </fo:inline>
  <fo:inline height="0.0pt" style-name="Normal" font-size="11.0pt" style-id="s0" white-space-collapse="false" 
   font-family-generic="swiss" font-family="Calibri" background-color="#ffff00">
    <xsl:attribute name="background-color">white</xsl:attribute> <xsl:value-of select="(.//YEARPARAM)[1]" xdofo:field-name="YEARPARAM"/> 
  </fo:inline> 
 <fo:inline height="0.0pt" style-name="Normal" font-size="11.0pt" style-id="s0" white-space-collapse="false" 
  font-family-generic="swiss" font-family="Calibri" xml:space="preserve"> in a sentence.</fo:inline> 
</fo:block> 

Now we have the field isolated we can easily set other attributes that will only be applied to the field and nothing else. I added the following to my YEARPARAM field:

<?attribute@inline:background-color;'white'?> >>> turn the background back to white
<?attribute@inline:border-color;'black'?> >>> turn on all borders and make black
<?attribute@inline:border-width;'0.5pt'?> >>> make the border 0.5 point wide
<?YEARPARAM?> >>> my original field

The @inline tells the BIP XSL engine to only apply the attribute values to the immediate 'inline' code block i.e. the field. Collapse all of this code into a single line in the field.
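For clarity, the collapsed field content ends up as a single line something like this:

<?attribute@inline:background-color;'white'?><?attribute@inline:border-color;'black'?><?attribute@inline:border-width;'0.5pt'?><?YEARPARAM?>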
When I run the template now, I see the following:

 


It's a little convoluted, but if you ignore the geeky code explanation and just highlight/copy'n'paste, it's pretty straightforward.

Categories: BI & Warehousing

Android Update: 5.0

Dietrich Schroff - Tue, 2014-11-18 14:27
Today my Nexus 7 got the upgrade to Android 5.0:
 After this upgrade, many things changed, like the system settings:



But everything is slower than before.... ;-(

For a complete history of all updates visit this posting.


My DOAG session re:Server-side JavaScript

Kuassi Mensah - Tue, 2014-11-18 04:31
#DOAG Wed 19/11 17:00 rm HongKong Server-side #JavaScript (#NodeJS) progrm#OracleDB using #nashorn & Avatar.js --#db12c @OracleDBDev #java

Will shortly post a blog re: JavaScript Stored Procedures.

Off May Not Be Totally Off: Is Oracle In-Memory Database 12c (12.1.0.2.0) Faster?

This page has been permanently moved. Please CLICK HERE to be redirected.

Thanks, Craig.
Most Oracle 12c installations will NOT be using the awesome Oracle Database in-memory features available starting in version 12.1.0.2.0. This experiment is about the performance impact of upgrading to 12c but disabling the in-memory features.

Every experiment I have performed comparing buffer processing rates clearly shows any version of 12c performs better than 11g. However, in my previous post, my experiment clearly showed a performance decrease after upgrading from 12.1.0.1.0 to 12.1.0.2.0.

This posting is about why this occurred and what to do about it. The bottom line is this: make sure "off" is "totally off."

Turn it totally off, not partially off
What I discovered is that, by default, the in-memory column store feature is not "totally disabled." My experiment clearly indicates that unless the DBA takes action, not only could there be a license agreement violation, but a partially disabled in-memory column store slightly slows logical IO processing compared to the non in-memory column store 12c option. Still, any 12c version processes buffers faster than 11g.

My experiment: specific and targeted
This is important: The results I published are based on a very specific and targeted test and not on a real production load. Do not use my results in making a "should I upgrade" decision. That would be stupid and an inappropriate use of my experimental results. But because I publish every aspect of my experiment and it is easily reproducible, it is a valid data point with which to have a discussion and also highlight various situations that DBAs need to know about.

You can download all my experimental results HERE. This includes the raw sqlplus output, the data values, the free R statistics package commands, spreadsheet with data nicely formatted and lots of histograms.

The instance parameter settings and results
Let me explain this by first showing the instance parameters and then the experimental results. There are some good lessons to learn!

Pay close attention to the inmemory_force and inmemory_size instance parameters.

SQL> show parameter inmemory

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
inmemory_clause_default string
inmemory_force string DEFAULT
inmemory_max_populate_servers integer 0
inmemory_query string ENABLE
inmemory_size big integer 0
inmemory_trickle_repopulate_servers_ integer 1
percent
optimizer_inmemory_aware boolean TRUE

SQL> show sga

Total System Global Area 7600078848 bytes
Fixed Size 3728544 bytes
Variable Size 1409289056 bytes
Database Buffers 6174015488 bytes
Redo Buffers 13045760 bytes

In my experiment using the above settings, the median buffer processing rate was 549.4 LIO/ms. Looking at inmemory_size and the SGA contents, I assumed the in-memory column store was disabled. If you look at the actual experimental result file "Full ds2-v12-1-0-2-ON.txt", which contains the explain plan of the SQL used in the experiment, there is no mention of the in-memory column store being used. My assumption, which I think is a fair one, was that the in-memory column store had been disabled.

As you'll see I was correct, but only partially correct.

The parameter settings below are from when the in-memory column store was totally disabled. The key is changing the inmemory_force parameter value from its default of DEFAULT to OFF.

SQL> show parameter inmemory

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
inmemory_clause_default string
inmemory_force string OFF
inmemory_max_populate_servers integer 0
inmemory_query string ENABLE
inmemory_size big integer 0
inmemory_trickle_repopulate_servers_ integer 1
percent
optimizer_inmemory_aware boolean TRUE
SQL> show sga

Total System Global Area 7600078848 bytes
Fixed Size 3728544 bytes
Variable Size 1291848544 bytes
Database Buffers 6291456000 bytes
Redo Buffers 13045760 bytes

Again, the SGA does not show any in-memory space. In my experiment with the above "totally off" settings, the median buffer processing rate was 573.5 LIO/ms compared to the "partially off" rate of 549.4 LIO/ms. Lesson: make sure off is truly off.
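For the record, one way to flip that switch (a sketch; adapt the scope and restart steps to your environment) is:

SQL> alter system set inmemory_force=OFF scope=spfile;
SQL> shutdown immediate
SQL> startup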

It is an unfair comparison!
It is not fair to compare the "partially off" with the "totally off" test results. Now that I know the default inmemory_force must be changed to OFF, the real comparison should be made with the non in-memory column store version 12.1.0.1.0 and the "totally disabled" in-memory column store version 12.1.0.2.0. This is what I will summarize below. And don't forget all 12c versions showed a significant buffer processing increase compared to 11g.

The key question: Should I upgrade?
You may be thinking, if I'm NOT going to license and use the in-memory column store, should I upgrade to version 12.1.0.2.0? Below is a summary of my experimental results followed by the key points.


1. The non column store version 12.1.0.1.0 was able to process 1.1% more buffers/ms (median: 581.7 vs 573.5) compared to the "totally disabled" in-memory column store version 12.1.0.2.0. While this is statistically significant, a 1.1% buffer processing difference is probably not going to make or break your upgrade.

2. Oracle Corporation, I'm told, knows about this situation and is working on a fix. But even if they don't fix it, in my opinion my experimental "data point" would not justify holding off on an upgrade to the in-memory column store version 12.1.0.2.0, even if you are NOT going to use the in-memory features.

3. Visually (see below) the non in-memory version 12.1.0.1.0 and the "totally off" in-memory version 12.1.0.2.0 sample sets look different. But they are pretty close. And as I mentioned above, statistically they are "different."

Note for the statistically curious: The red 12.1.0.1.0 non in-memory data set is highly variable. I don't like to see this in my experiments. Usually this occurs when a mixed workload intermittently impacts performance, when I don't take enough samples, or when my sample duration is too short. To counteract this, in this experiment I captured 31 samples. I also performed the experiment multiple times and the results were similar. What I could have done was use more application data to increase the sample duration; perhaps that would have made the data clearer. I could also have used another SQL statement and method to create the logical IO load.

What I learned from this experiment
To summarize this experiment, four things come to mind:

1. If you are not using an Oracle Database feature, completely disable it. My mistake was thinking the in-memory column store was disabled when I set its memory size to zero and "confirmed" it was off by looking at the SGA contents.

2. All versions of 12c I have tested are clearly faster at processing buffers than any version of 11g.

3. There is a very slight performance decrease when upgrading from Oracle Database version 12.1.0.1.0 to 12.1.0.2.0.

4. It is amazing to me that with all the new features poured into each new Oracle Database version, the developers have been able to keep the core buffer processing rate so close to that of the previous version. That is an incredible accomplishment. While some people may view this posting as a negative hit against the Oracle Database, it is actually a confirmation of how awesome the product is.

All the best in your Oracle performance tuning work!

Craig.




Categories: DBA Blogs

Upgrading system's library/classes on 12c CDB/PDB environments

Marcelo Ochoa - Mon, 2014-11-17 17:41
Some days ago I found that the ODCI.jar included in 12c doesn't reflect the latest updates to the Oracle ODCI API.
This API is used when writing new domain indexes such as Scotas OLS, pipelined tables and many other cool things.
ODCI.jar includes several Java classes which are wrappers of Oracle object types, such as ODCIArgDesc among others. The jar included in the RDBMS 11g/12c seems to be outdated, perhaps generated with a 10g database; for example, it doesn't include types such as ODCICompQueryInfo, which holds information about Composite Domain Indexes (filter by/order by push predicates).
The content of ODCI.jar is a set of classes generated by the tool JPublisher and looks like:
oracle@localhost:/u01/app/oracle/product/12.1.0.2.0/dbhome_1/rdbms/jlib$ jar tvf ODCI.jar
     0 Mon Jul 07 09:12:54 ART 2014 META-INF/
    71 Mon Jul 07 09:12:54 ART 2014 META-INF/MANIFEST.MF
  3501 Mon Jul 07 09:12:30 ART 2014 oracle/ODCI/ODCIArgDesc.class
  3339 Mon Jul 07 09:12:32 ART 2014 oracle/ODCI/ODCIArgDescList.class
  1725 Mon Jul 07 09:12:32 ART 2014 oracle/ODCI/ODCIArgDescRef.class
....
  2743 Mon Jul 07 09:12:52 ART 2014 oracle/ODCI/ODCIStatsOptions.class
  1770 Mon Jul 07 09:12:54 ART 2014 oracle/ODCI/ODCIStatsOptionsRef.class
The complete list of classes does not reflect the list of object types that the latest 12c RDBMS has; that list is about 38 types, expanded later to more than 60 classes:
SQL> select * from dba_types where type_name like 'ODCI%'
SYS     ODCIARGDESC
SYS     ODCIARGDESCLIST
....
SYS     ODCIVARCHAR2LIST
38 rows selected
so there is a clear difference between the classes included in ODCI.jar and the actual list of object types included in the RDBMS.
Obviously these classes could be re-generated using JPublisher, but I'll have to provide an input file with a template for the case-sensitive names typically used in Java.
To quickly create a JPublisher input file I'll execute this anonymous PL/SQL block in JDeveloper, logged in as SYS at the CDB:
set long 10000 lines 500 pages 50 timing on echo on
set serveroutput on size 1000000
begin
 for i in (select * from dba_types where type_name like 'ODCI%' order by type_name) loop
   if (i.typecode = 'COLLECTION') then
      dbms_output.put('SQL sys.'||i.type_name||' AS ');
      FOR j in (select * from dba_source where owner=i.owner AND NAME=i.type_name) loop
         if (substr(j.text,1,4) = 'TYPE') then
            dbms_output.put(substr(j.text,6,length(j.name))||' TRANSLATE ');
         else
            dbms_output.put(upper(substr(j.text,instr(upper(j.text),' OF ')+4,length(j.text)-instr(upper(j.text),' OF ')-4))||' AS '||substr(j.text,instr(upper(j.text),' OF ')+4,length(j.text)-instr(upper(j.text),' OF ')-4));
         end if;
      end loop;
      dbms_output.put_line('');
   else
      dbms_output.put('SQL sys.'||i.type_name||' AS ');
      FOR j in (select * from dba_source where owner=i.owner AND NAME=i.type_name) loop
         if (substr(j.text,1,4) = 'TYPE') then
            dbms_output.put(substr(j.text,6,length(j.name))||' TRANSLATE ');
         end if;
         if (substr(j.text,1,1) = ' ') then
            dbms_output.put(upper(substr(j.text,3,instr(j.text,' ',3)-3))||' AS '||substr(j.text,3,instr(j.text,' ',3)-3)||', ');
         end if;
      end loop;
      dbms_output.put_line('');
   end if;
 end loop;
end;
Finally, editing this file manually to remove the last comma, I'll get this ODCI.in mapping file for JPublisher.
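With the mapping file in hand, a direct JPublisher invocation would look roughly like this (a sketch only; the credentials are placeholders and the exact flags can vary between releases):

 jpub -user=system/password -input=ODCI.in -dir=tmp -package=oracle.ODCI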
In my case I drove JPublisher from an Ant task. The XML markup of that task did not survive here, but its surviving attributes show the essentials: a target described as "Generate a new ODCI.jar file with ODCI types wrappers using JPublisher", a JPublisher step with login="${dba.usr}/${dba.pwd}@${db.str}", dir="tmp", package="oracle.ODCI" and file="../db/ODCI.in", followed by a jar step with basedir="tmp" and includes="**/*.class".
By executing the above Ant task I'll have a new ODCI.jar with content like:
oracle@localhost:/u01/app/oracle/product/12.1.0.2.0/dbhome_1/rdbms/jlib$ jar tvf ODCI.jar
     0 Sun Nov 16 21:07:50 ART 2014 META-INF/
   106 Sun Nov 16 21:07:48 ART 2014 META-INF/MANIFEST.MF
     0 Sat Nov 15 15:17:40 ART 2014 oracle/
     0 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/
102696 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/AnyData.class
  1993 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/AnyDataRef.class
 17435 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/AnyType.class
  1993 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/AnyTypeRef.class
  3347 Sun Nov 16 21:07:46 ART 2014 oracle/ODCI/ODCIArgDesc.class
  2814 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/ODCIArgDescList.class
  2033 Sun Nov 16 21:07:46 ART 2014 oracle/ODCI/ODCIArgDescRef.class
.....
  2083 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/ODCITabFuncStatsRef.class
  2657 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/ODCIVarchar2List.class
Now the new ODCI.jar is ready for uploading into the CDB; to simplify this task I'll put it directly in the same directory as the original one:
oracle@localhost:/u01/app/oracle/product/12.1.0.2.0/dbhome_1/rdbms/jlib$ mv ODCI.jar ODCI.jar.orig
oracle@localhost:/u01/app/oracle/product/12.1.0.2.0/dbhome_1/rdbms/jlib$ mv /tmp/ODCI.jar ./ODCI.jar
NOTE: The next paragraphs are an example that shows what will fail; see further below for the correct way.
To upload this new file into the CDB logged as SYS I'll execute:
SQL> ALTER SESSION SET CONTAINER = CDB$ROOT;
SQL> exec sys.dbms_java.loadjava('-f -r -v -s -g public rdbms/jlib/ODCI.jar');
to check if it works OK, I'll execute:
SQL> select dbms_java.longname(object_name) from dba_objects where object_type='JAVA CLASS' and dbms_java.longname(object_name) like '%ODCI%';
DBMS_JAVA.LONGNAME(OBJECT_NAME)
--------------------------------------------------------------------------------
oracle/ODCI/ODCIArgDesc
oracle/ODCI/ODCIArgDescList
oracle/ODCI/ODCIArgDescRef
...
oracle/ODCI/ODCITabFuncStats
oracle/ODCI/ODCITabFuncStatsRef
oracle/ODCI/ODCIVarchar2List
63 rows selected.
I assumed at this point that a new jar uploaded into the CDB root means that all PDBs will inherit the new implementation, just as a new binary/library file patched in ORACLE_HOME does. But this is not how the class loading system works in a multitenant environment. To check that, I'll re-execute the above query using the PDB$SEED container (the template used for new databases):
SQL> ALTER SESSION SET CONTAINER = PDB$SEED;
SQL> select dbms_java.longname(object_name) from dba_objects where object_type='JAVA CLASS' and dbms_java.longname(object_name) like '%ODCI%';
...
28 rows selected.
A similar result will be displayed in any other PDB running/mounted on that CDB. More to the point, if I run Java code that uses the new API in one of these PDBs, this exception will be thrown:
Exception in thread "Root Thread" java.lang.IncompatibleClassChangeError
 at oracle.jpub.runtime.MutableArray.getOracleArray(MutableArray.java)
 at oracle.jpub.runtime.MutableArray.getObjectArray(MutableArray.java)
 at oracle.jpub.runtime.MutableArray.getObjectArray(MutableArray.java)
 at oracle.ODCI.ODCIColInfoList.getArray(ODCIColInfoList.java)
 at com.scotas.solr.odci.SolrDomainIndex.ODCIIndexCreate(SolrDomainIndex.java:366)
This is because the code was compiled against the latest API while the container has an older one.
So I'll re-load the new ODCI.jar into PDB$SEED and my PDBs, using a similar approach as in the CDB, for example:
SQL> ALTER SESSION SET CONTAINER = PDB$SEED;
SQL> exec sys.dbms_java.loadjava('-f -r -v -s -g public rdbms/jlib/ODCI.jar');
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database
This is because PDBs are blocked from altering classes inherited from the CDB.
As I mentioned earlier, the above approach is incorrect when dealing with multitenant environments.
To fix that there is a Perl script named catcon.pl; it automatically takes care of loading into ROOT first, then PDB$SEED, then any/all open PDBs specified on the command line.
In my case I'll execute:
# $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -u SYS -d $ORACLE_HOME/rdbms/admin -b initsoxx_output initsoxx.sql
Before doing that it is necessary to open all PDBs (read write, or in restricted mode) or to specify which PDBs will be patched. Note that I used the initsoxx.sql script; this script is used by default during RDBMS installation to upload ODCI.jar.
Now I'll check if all PDBs have consistent ODCI classes.
SQL> ALTER SESSION SET CONTAINER = PDB$SEED;  
Session altered.
SQL> select dbms_java.longname(object_name) from dba_objects where object_type='JAVA CLASS' and dbms_java.longname(object_name) like '%ODCI%';
DBMS_JAVA.LONGNAME(OBJECT_NAME)
--------------------------------------------------------------------------------
oracle/ODCI/ODCITabFuncStatsRef
oracle/ODCI/ODCICompQueryInfoRef
....
oracle/ODCI/ODCIVarchar2List
63 rows selected.
SQL> ALTER SESSION SET CONTAINER = WIKI_PDB;
Session altered.
SQL> select dbms_java.longname(object_name) from dba_objects where object_type='JAVA CLASS' and dbms_java.longname(object_name) like '%ODCI%';
DBMS_JAVA.LONGNAME(OBJECT_NAME)
--------------------------------------------------------------------------------
oracle/ODCI/ODCIArgDesc
....
63 rows selected.
Finally all PDBs were patched with the new library.
More information about developing Java within the RDBMS in multitenant environments is in this presentation, "The impact of MultiTenant Architecture in the develop of Java within the RDBMS"; for Spanish readers there is a video with audio on YouTube from my talk at OTN Tour 14 ArOUG:








Think Stats, 2nd Edition Exploratory Data Analysis By Allen B. Downey; O'Reilly Media

Surachart Opun - Mon, 2014-11-17 08:15
There are lots of Python data analysis books. This might be a good one, able to help readers perform statistical analysis with programs written in Python: Think Stats, 2nd Edition Exploratory Data Analysis by Allen B. Downey (@allendowney).
This second edition of Think Stats includes the chapters from the first edition, many of them substantially revised, and new chapters on regression, time series analysis, survival analysis, and analytic methods. Additionally, it uses pandas, SciPy, and StatsModels in Python. The author developed this book using Anaconda from Continuum Analytics; readers should use it too, as that makes setup easy. Anyway, I tested on Ubuntu and installed the pandas, NumPy, SciPy, StatsModels, and matplotlib packages. The book has 14 chapters related to the process the author works through with a dataset. It's for intermediate readers, so readers should know how to program (the book uses Python) and have some mathematical and statistical skills.
Each chapter includes exercises that readers can practice to deepen their understanding. Free Sampler
  • Develop an understanding of probability and statistics by writing and testing code.
  • Run experiments to test statistical behavior, such as generating samples from several distributions.
  • Use simulations to understand concepts that are hard to grasp mathematically.
  • Import data from most sources with Python, rather than rely on data that’s cleaned and formatted for statistics tools.
  • Use statistical inference to answer questions about real-world data.
surachart@surachart:~/ThinkStats2/code$ pwd
/home/surachart/ThinkStats2/code
surachart@surachart:~/ThinkStats2/code$ ipython notebook  --ip=0.0.0.0 --pylab=inline &
[1] 11324
surachart@surachart:~/ThinkStats2/code$ 2014-11-17 19:39:43.201 [NotebookApp] Using existing profile dir: u'/home/surachart/.config/ipython/profile_default'
2014-11-17 19:39:43.210 [NotebookApp] Using system MathJax
2014-11-17 19:39:43.234 [NotebookApp] Serving notebooks from local directory: /home/surachart/ThinkStats2/code
2014-11-17 19:39:43.235 [NotebookApp] The IPython Notebook is running at: http://0.0.0.0:8888/
2014-11-17 19:39:43.236 [NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
2014-11-17 19:39:43.236 [NotebookApp] WARNING | No web browser found: could not locate runnable browser.
2014-11-17 19:39:56.120 [NotebookApp] Connecting to: tcp://127.0.0.1:38872
2014-11-17 19:39:56.127 [NotebookApp] Kernel started: f24554a8-539f-426e-9010-cb3aa3386613
2014-11-17 19:39:56.506 [NotebookApp] Connecting to: tcp://127.0.0.1:43369
2014-11-17 19:39:56.512 [NotebookApp] Connecting to: tcp://127.0.0.1:33239
2014-11-17 19:39:56.516 [NotebookApp] Connecting to: tcp://127.0.0.1:54395

Book: Think Stats, 2nd Edition Exploratory Data Analysis
Author: Allen B. Downey(@allendowney)
Categories: DBA Blogs

PeopleSoft and Docker's value proposition

Javier Delgado - Sun, 2014-11-16 12:58
If you haven't heard about Docker and/or container technologies yet, you soon will. Docker has made one of the biggest impacts in the IT industry in 2014. Since the release of its 1.0 version this past June, it has captured the attention of many big IT vendors, including Google, Microsoft and Amazon. As far as I'm aware, Oracle has not announced any initiative with Docker, except for the Oracle Linux container. Still, Docker can be used with PeopleSoft, and it can actually simplify your PeopleSoft system administration. Let's see how.

What is Container Technology?
Docker is an open platform to build, ship, and run distributed applications. Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.

In a way, it is similar to virtualization technologies like VMWare or Virtualbox where you can get an image of a machine and run it anywhere you have the player installed. Docker is similar except that it just virtualizes the application and its dependencies, not the full machine.

Docker virtual machines are called containers. They run as an isolated process in userspace on the host operating system, sharing the kernel with other containers. Thus, it enjoys the resource isolation and allocation benefits of VMs but is much more portable and efficient.

Docker uses a layered file system for its containers, in a way that they can be updated by just including the changes since the last update. This greatly reduces the volume of information that needs to be shipped to deliver an update.

How can it be used with PeopleSoft?
As we have seen, Docker containers are much easier to deploy than an entire virtual machine. This means that activities such as installations can be greatly simplified. All you need is to have Docker installed and then download the PeopleSoft container. Of course, this requires that you first do an installation within a Docker container, but this is not more complex than doing a usual installation; it just requires some Docker knowledge in order to take advantage of all its features. From my point of view, if you are doing a new installation, you should seriously consider Docker. At BNB we have prepared containers with the latest PeopleSoft HCM and FSCM installations so we can quickly deploy them to our customers.
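As a sketch of what that looks like in practice (the image name below is hypothetical, standing in for a container previously built from your own PeopleSoft installation):

docker pull myregistry.example.com/psft-hcm:9.2     # hypothetical, pre-built PeopleSoft image
docker run -d --name psft-hcm -p 8000:8000 myregistry.example.com/psft-hcm:9.2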

Also, when you make a change to a Docker container, just the incremental changes are applied to existing running instances. This poses a great advantage when you apply a patch or run a PeopleTools upgrade. If you want to apply the patches to new environments, you just need to make sure that you apply the latest container changes in all the servers running the environment.

Isolation between running instances is also a major advantage when you have multiple environments in the same server. Suppose you want to apply the latest Tuxedo patch just in the Development environment, which coexists with other environments on the same server. Unless you had one Tuxedo installation for each environment (which is possible but normally unlikely), you would need to go ahead and hope the patch did not break anything (to be honest, this happens very rarely with Tuxedo, but some other product patches are not so reliable). If you have a separate container for the Development environment, you can apply the patch just to it and later deploy the changes to the rest of the environments.

Last but not least, the reduced size of Docker containers compared to an entire virtual machine greatly simplifies the distribution to and from the cloud. Docker is of great help if you want to move your on premise infrastructure to the cloud (or the other way around). This is even applicable when you want to keep a contingency system in the cloud, as delivering the incremental container changes made to your on premise system requires less time than using other methods.

Not only that, Docker can be hosted in most operating systems. This means that moving a container from one public cloud facility to another is significantly easier than it was with previous technologies. Exporting a virtual machine from Amazon EC2 to Google Cloud was quite complex (and under some circumstances even not possible).


Limitations
But as with any other technology, Docker is no panacea. It has some limitations that may restrict its adoption for your PeopleSoft installation. The main ones I can think of are:

  • Currently there is no support for containers using Windows as a guest operating system. This is not surprising, as Docker is intimately linked to Unix/Linux capabilities. Still, Microsoft has announced a partnership with Docker that will hopefully help to overcome this limitation. For the moment, you will not be able to use Docker for certain PeopleSoft components, such as the PSNT Process Scheduler, which is bad news if you are still using Crystal Reports or Winword reports. Also, if you are using Microsoft SQL Server as your database, this may be a major limitation.


  • Docker is most useful when used for applications, but not data. Logs, traces and databases should normally be kept out of the Docker container.


Conclusions
Although container technology is still in its initial steps, significant benefits are evident for maintaining and deploying applications, PeopleSoft included. Surely enough, the innovation coming on this area will have a big impact in the way PeopleSoft systems are administered.

PS: I would like to thank Nicolás Zocco for his invaluable research on this topic, particularly in installing the proof of concept using PeopleSoft and Docker.

Interstellar Madness

FeuerThoughts - Sun, 2014-11-16 09:19
Saw Interstellar last night. Only had to wait through TWENTY MINUTES of trailers. Had to put fingers in my ears for much of it. So loud, so invasive, so manipulative. Anyway....

I don't watch TV anymore, rarely watch a movie or read a novel. So when I do subject myself to high-resolution artificial input to my brain, it is a jarring experience.

And enjoyable. I haven't stopped watching TV because I don't like it. I have stopped watching TV because I can't help but "like" it, be drawn to it. I am a product of millions of years of evolution, and both Madison Ave (marketeers) and Hollywood know it, and take advantage of it.

Anyway....

I enjoyed watching Interstellar, with its time-traveling plot ridiculousnesses and plenty of engaging human drama. 

But one line really ticked me off. The movie is, to a large extent, a propaganda campaign to get Americans excited about being "explorers and pioneers" again. 

Cooper (McConaughey) complains that "Now we're a generation of caretakers." and asserts that:

"Mankind was born on earth. It was never meant to die here."

That is the worst sort of human species-ism. It is a statement of incredible arrogance. And it is an encouragement to humans to continue to despoil this planet, because don't worry! 

Science and technology can and will save us! Right? 'Cause it sure has done the trick so far. We are feeding more people, clothing more people, putting more people in cars and inside homes with air conditioners, getting iPhones in the hands of more and more humans. 

Go, science, go!

And if we can't figure out how to grow food for 10 billion and then 20 billion people, if we totally exhaust this planet trying to keep every human alive and healthy into old age, not to worry! There are lots of other planets out there and, statistically, lots and lots of them should be able to support human life. Just have to find them and, oh, right, get there.

But there's no way to get there without a drastic acceleration of consumption of resources of our own planet. Traveling to space is, shall we say, resource-intensive.

Where and how did we (the self-aware sliver of human organisms) go so wrong? 

I think it goes back to the development of recorded knowledge (writing, essentially or, more broadly, culture). As long as humans were constrained by the ability to transmit information only orally, the damage we could do was relatively limited, though still quite destructive.

Once, however, we could write down what we knew, then we could build upon that knowledge, generation after generation, never losing anything but a sense of responsibility about how best to use that knowledge.

That sense of responsibility might also be termed "wisdom", and unfortunately wisdom is something that humans acquire through experience in the world, not by reading a book or a webpage. 

Mankind was born on earth and there is no reason at all to think that we - the entire species - shouldn't live and die right here on earth. Especially if we recognize that the price to be paid for leaving earth is the destruction of large swaths of earth and our co-inhabitants and....

Being the moral creatures that we like to think we are, we decide that this price is unacceptable.


Categories: Development

Card Flip Effect with Oracle Alta UI

Shay Shmeltzer - Fri, 2014-11-14 17:00

The Oracle Alta UI focuses on reducing clutter in the user interface. So one of the first things you'll try to do when creating an Alta UI is decide which information is not that important and can be removed from the page.

But what happens if you still have semi-important information that the user would like to see, but you don't want it to overcrowd the initial page UI? You can put it on the other side of the page - or in the Alta UI approach - create a flip card.

Think of a flip card as an area that switches the shown content to reveal more information - and with ADF's support for animation you can make a flip effect.

In the demo below I show you how to create this flip card effect using the new af:deck and af:transition components in ADF Faces. 

A few other minor things you can see here:

  • Use conditional ELs and viewScope variables - specifically the code I use is 
#{viewScope.box eq 'box2' ? 'box2' : 'box1'} 
  • Add an additional field to a collection after you initially dragged and dropped it onto a page - using the binding tab
  • Setting up partialSubmit and partialTriggers for updates to the page without a full refresh

Categories: Development

rdbms-subsetter

Catherine Devlin - Fri, 2014-11-14 10:06

I've never had a tool I really liked that would extract a chunk of a large production database for testing purposes while respecting the database's foreign keys. This past week I finally got to write one: rdbms-subsetter.

rdbms-subsetter postgresql://user:passwd@host/source_db postgresql://user:passwd@host/excerpted_db 0.001

Getting it to respect referential integrity "upward" - guaranteeing every needed parent record would be included for each child row - took less than a day. Trying to get it to also guarantee referential integrity "downward" - including all child records for each parent record - was a Quixotic idea that had me tilting at windmills for days. It's important, because parent records without child records are often useless or illogical. Yet trying to pull them all in led to an endlessly propagating process - percolation, in chemical engineering terms - that threatened to make every test database a complete (but extremely slow) clone of production. After all, if every row in parent table P1 demands rows in child tables C1, C2, and C3, and those child rows demand new rows in parent tables P2 and P3, which demand more rows in C1, C2, and C3, which demand more rows in their parent tables... I felt like I was trying to cut a little sweater out of a big sweater without snipping any yarns.

So I can't guarantee child records - instead, the final process prioritizes creating records that will fill out the empty child slots in existing parent records. But there will almost inevitably be some child slots left open when the program is done.
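For what it's worth, the "upward" closure is conceptually a simple fixed-point walk over the foreign keys. A minimal sketch of the idea in Python (not the tool's actual code; the helper names are made up):

# Sketch: keep adding the parent rows that already-selected child rows point to,
# until no new parent rows are required ("upward" referential integrity).
def upward_closure(selected, foreign_keys, parent_key_of):
    """selected: dict mapping table name -> set of chosen primary keys.
    foreign_keys: iterable of (child_table, fk_column, parent_table) tuples.
    parent_key_of: function(child_table, child_pk, fk_column) -> parent pk or None.
    """
    changed = True
    while changed:
        changed = False
        for child_table, fk_column, parent_table in foreign_keys:
            for child_pk in list(selected.get(child_table, ())):
                parent_pk = parent_key_of(child_table, child_pk, fk_column)
                if parent_pk is None:
                    continue  # nullable FK, nothing to pull in
                parents = selected.setdefault(parent_table, set())
                if parent_pk not in parents:
                    parents.add(parent_pk)  # a new parent may itself need parents
                    changed = True
    return selected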

I've been using it against one multi-GB, highly interconnected production data warehouse, so it's had some testing, but your bug reports are welcome.

Like virtually everything else I do, this project depends utterly on SQLAlchemy.

I developed this for use at 18F, and my choice of a workplace where everything defaults to open was wonderfully validated when I asked about the procedure for releasing my 18F work to PyPI. The procedure is - and I quote -

Just go for it.

How to use the Oracle Sales Cloud New Simplified WebServices API

Angelo Santagata - Fri, 2014-11-14 04:05

Over the last two years my organisation has been working with multiple partners, helping them create partner integrations and showing them how to use the variety of SOAP APIs available to Sales Cloud integrators. Based on this work I've been working with development with the aim of simplifying some of the API calls which require "multiple" calls to achieve a single objective. For example, to create a Customer Account you often need to create the Location first, then the contacts and then the customer account. In Sales Cloud R9 you will have a new subset of APIs which will simplify this.

So that you all have a head-start in learning the API, I've worked with our documentation people and we've just released a new whitepaper/doc onto Oracle Support explaining the new API in lovely glorious detail. It also includes some sample code for each of the operations you might use and some hints and tips!

Enjoy and feel free to post feedback 

You can download the documentation from Oracle Support. The document is called "Using Simplified SOAP WebServices", its doc ID is 1938666.1 and this is a direct link to the document.

NoCOUG watchers protest despicable tactics being used by NoCOUG management

Iggy Fernandez - Thu, 2014-11-13 13:49
FOR IMMEDIATE RELEASE NoCOUG watchers protest despicable tactics being used by NoCOUG management SILICON VALLEY (NOVEMBER 13, 2014) – Veteran NoCOUG watchers all over Northern California have been protesting the despicable tactics being used by NoCOUG management to lure Oracle Database professionals to the NoCOUG conference at the beautiful eBay Town Hall next week. Instead […]
Categories: DBA Blogs

Oracle Support Advisor Webcast Series for November

Chris Warticki - Thu, 2014-11-13 12:38
We are pleased to invite you to our Advisor Webcast series for November 2014. Subject matter experts prepare these presentations and deliver them through WebEx. Topics include information about Oracle support services and products. To learn more about the program or to access archived recordings, please follow the links.

There are currently two types of Advisor Webcasts;
If you prefer to read about current events, Oracle Premier Support News provides you with information, technical content, and technical updates from the various Oracle Support teams. For a full list of Premier Support News, go to My Oracle Support and enter Document ID 222.1 in the Knowledge Base search.

Sincerely,
Oracle Support

November Featured Webcasts by Product Area:

Database - Oracle Database 12.1 Support Update for Linux on System z - November 20 - Enroll
Database - Oracle 12c: New Database Initialization Parameters - November 26 - Enroll
E-Business Suite - Topics in Inventory Convergence and Process Manufacturing - November 12 - Enroll
E-Business Suite - Oracle Receivables Posting & Reconciliation Process In R12 - November 13 - Enroll
E-Business Suite - Respecting Ship Set Constraints Rapid Planning - November 13 - Enroll
E-Business Suite - Overview of Intercompany Transactions - November 18 - Enroll
E-Business Suite - Empowering Users with Oracle EBS Endeca Extensions - November 20 - Enroll
Eng System - Oracle Exadata 混合列式压缩 (Oracle Exadata Hybrid Columnar Compression) - Mandarin only - November 20 - Enroll
JD Edwards EnterpriseOne - JD Edwards EnterpriseOne: Introduction and Demo on Multi Branch Plant MRP Planning (R3483) - November 11 - Enroll
JD Edwards EnterpriseOne - JD Edwards EnterpriseOne: 9.1 Enhancement - Inventory to G/L Reconciliation Process - November 12 - Enroll
JD Edwards EnterpriseOne - JD Edwards EnterpriseOne: Installation and setup of the Web Development client - November 13 - Enroll
JD Edwards EnterpriseOne - Using JD Edwards EnterpriseOne Equipment Billing - November 19 - Enroll
JD Edwards EnterpriseOne - JD Edwards EnterpriseOne 2014 1099 Year End Refresher Webcast - November 20 - Enroll
JD Edwards World - JD Edwards World - Localizacao Brazil - Ficha Conteudo de Importacao (FCI) - Portuguese only - November 18 - Enroll
Middleware WLS - どんな問い合わせもスムーズにすすむ最初の一歩 + 3 (Getting any support inquiry off to a smooth start + 3) - Japanese only - November 19 - Enroll
Middleware WLS - 管理服务器与被管服务器之间的通讯机制 (The communication mechanism between the Admin Server and Managed Servers) - Mandarin only - November 26 - Enroll
Oracle Business Intelligence - Using OBIEE with Big Data - November 25 - Enroll
PeopleSoft Enterprise - Financial Aid Regulatory 2015-2016 Release 1 (9.0 Bundle #35) - November 12 - Enroll
PeopleSoft Enterprise - PeopleSoft 1099 Update for 2014 - Get your Copy B's out on time! - November 13 - Enroll
PeopleSoft Enterprise - Payroll for North America - Preparing for Year-End Processing and Annual Tax Reporting - November 19 - Enroll

Did You Know that You Can Buy OUM?

Jan Kettenis - Thu, 2014-11-13 09:18
The Oracle Unified Method (OUM) Customer Program has been changed: next to the already existing option to get it by involving Oracle Consulting, you can now also buy it if (for some reason) you don't want to involve Consulting.

Next to that there is also the option to purchase a subscription (initially for 3 years, after which it can be renewed annually) that allows you to download updates for OUM.

OUM aims at supporting the entire Enterprise IT lifecycle, including the Cloud.

Got problems with Nulls with ServiceCloud's objects in REST?

Angelo Santagata - Wed, 2014-11-12 13:02
Using Oracle RightNow, Jersey 1.18, JAX-WS, JDeveloper 11.1.1.7.1

Whilst trying to create a REST interface to our RightNow instance for a mobile application, I was hitting an issue with null values being rendered by Jersey (Jackson).

The way I access RightNow is via JAX-WS generated proxies, which generate JAXB objects. In the past I've been able to simply return the JAXB object to Jersey and it gets rendered all lovely jubbly. However, with the RightNow WSDL I'm getting lots of extra (null) fields rendered. This doesn't occur with Sales Cloud, so I'm assuming it's something to do with the WSDL definition.

     
{ "incidents" : [
   {
       "Asset" : null, "AssignedTo" :  {
           "Account" : null, "StaffGroup" :  {
               "ID" :  {
                  "id" : 100885
              },"Name" : "B2CHousewares"
           },"ValidNullFields" : null
        },"BilledMinutes" : null, "Category" :  {
           "ID" :  {
               "id" : 124
           },"Parents" : [
           {
           .....
Look at all those "null" values, yuck...

Thankfully I found a workaround (yay!): I simply needed to create a custom ObjectMapper and tell it *not* to render nulls. This worked both for the JAXB objects which were generated for me and for other classes.

Simply create a class which overrides the normal ObjectMapper factory and, to make sure it's used, ensure the @Provider annotation is present.

package myPackage;

import javax.ws.rs.ext.ContextResolver;
import javax.ws.rs.ext.Provider;

import org.codehaus.jackson.map.DeserializationConfig;
import org.codehaus.jackson.map.ObjectMapper;
import org.codehaus.jackson.map.SerializationConfig;
import org.codehaus.jackson.map.annotate.JsonSerialize;

@Provider
public class CustomJSONObjectMapper implements ContextResolver<ObjectMapper> {

    private ObjectMapper objectMapper;

   
    public CustomJSONObjectMapper() throws Exception {
        System.out.println("My object mapper init");
         objectMapper= new ObjectMapper();
         // Force all conversions to be NON_NULL for JSON
         objectMapper.setSerializationInclusion(JsonSerialize.Inclusion.NON_NULL);
    }

    public ObjectMapper getContext(Class<?> objectType) {
        System.out.println("My Object Mapper called");
        return objectMapper;
    }
}

And the result is lovely: no null values, and ready for my mobile app to consume...
{
    "organization" : [
    {
        "id" :  {
            "id" : 68
        },"lookupName" : "AngieSoft", "createdTime" : 1412166303000, "updatedTime" : 1412166303000, "name" : "AngieSoft", "salesSettings" :  {
            "salesAccount" :  {
                "id" :  {
                    "id" : 2
                }
            },"totalRevenue" :  {
                "currency" :  {
                    "id" :  {
                        "id" : 1
                    }
                }
            }
        },"source" :  {
            "id" :  {
                "id" : 1002
            },"parents" : [
            {
                "id" :  {
                    "id" : 32002
                }
            }
]
        },"crmmodules" :  {
            "marketing" : true, "sales" : true, "service" : true
        }
    }
]
}

Oh, heads up: I'm using Jersey 1.18 because I want to deploy it to Oracle Java Cloud Service. If you're using Jersey 2.x, I believe the setSerializationInclusion method has changed.
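For what it's worth, the Jackson 2.x equivalent (a sketch; untested here) would be along these lines:

import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.databind.ObjectMapper;

// Jackson 2.x: the inclusion constants moved to JsonInclude.Include
ObjectMapper mapper = new ObjectMapper();
mapper.setSerializationInclusion(JsonInclude.Include.NON_NULL);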

Making Copies of Copies with Oracle RMAN

Don Seiler - Wed, 2014-11-12 10:13
I recently had need to make a copy of an image copy in Oracle RMAN. Since it wasn't immediately obvious to me, I thought it was worth sharing once I had it sorted out. I was familiar with making a backup of a backup, but had never thought about making a copy of a copy.

First you need to create an image copy of your database or tablespace. For the sake of example, I'll make a copy of the FOO tablespace. The key is to assign a tag to it that you can use for later reference. I'll use the tag "DTSCOPYTEST":

backup as copy 
    tablespace foo 
    tag 'DTSCOPYTEST'
    format '+DG1';

So I have my image copy in the DG1 diskgroup. Now say we want to make a copy of that for some testing purpose and put it in a different diskgroup. For that, we need the "BACKUP AS COPY COPY" command, and we'll want to specify the copy we just took by using the tag that was used:

backup as copy
    copy of tablespace foo
    from tag 'DTSCOPYTEST'
    tag 'DTSCOPYTEST2'
    format '+DG2';

As you'd guess, RMAN makes a copy of the first copy, writing it to the specified format location.
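To verify what you now have, listing the copies for the tablespace should show both image copies along with their tags (a quick sketch; check the LIST syntax for your release):

list copy of tablespace foo;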

As always, hope this helps!
Categories: DBA Blogs

Mystery of java.sql.SQLRecoverableException: IO Error: Socket read timed out during adop/adpatch

Vikram Das - Tue, 2014-11-11 21:19
While applying the R12.2 upgrade driver, we faced the issue of WFXLoad.class failing in the adworker log but showing up as running in adctrl:

        Control
Worker  Code      Context            Filename                    Status
------  --------  -----------------  --------------------------  --------------
     1  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     2  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     3  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     4  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     5  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     6  Run       AutoPatch R120 pl                              Wait        
     7  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     8  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     9  Run       AutoPatch R120 pl  WFXLoad.class               Running      
    10  Run       AutoPatch R120 pl                              Wait        

adworker log shows:

Exception in thread "main" java.sql.SQLRecoverableException: IO Error: Socket read timed out
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:482)
        at oracle.jdbc.driver.PhysicalConnection.(PhysicalConnection.java:678)
        at oracle.jdbc.driver.T4CConnection.(T4CConnection.java:238)
        at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:34)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:567)
        at java.sql.DriverManager.getConnection(DriverManager.java:571)
        at java.sql.DriverManager.getConnection(DriverManager.java:215)
        at oracle.apps.ad.worker.AdJavaWorker.getAppsConnection(AdJavaWorker.java:1041)
        at oracle.apps.ad.worker.AdJavaWorker.main(AdJavaWorker.java:276)
Caused by: oracle.net.ns.NetException: Socket read timed out
        at oracle.net.ns.Packet.receive(Packet.java:341)
        at oracle.net.ns.NSProtocol.connect(NSProtocol.java:308)
        at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1222)
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:330)
        ... 8 more

This was happening again and again. The DBAs were suspecting a network issue, a cluster issue, a server issue and all the usual suspects. In the database alert log we saw these errors coming every few seconds:

Fatal NI connect error 12537, connecting to:
 (LOCAL=NO)

  VERSION INFORMATION:
        TNS for Linux: Version 11.2.0.3.0 - Production
        Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.3.0 - Production
        TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.3.0 - Production
  Time: 11-NOV-2014 19:58:19
  Tracing not turned on.
  Tns error struct:
    ns main err code: 12537

TNS-12537: TNS:connection closed
    ns secondary err code: 12560
    nt main err code: 0
    nt secondary err code: 0
    nt OS err code: 0
opiodr aborting process unknown ospid (26388) as a result of ORA-609


We tried changing the parameters in sqlnet.ora and listener.ora as instructed in the article:
Troubleshooting Guide for ORA-12537 / TNS-12537 TNS:Connection Closed (Doc ID 555609.1)

Sqlnet.ora: SQLNET.INBOUND_CONNECT_TIMEOUT=180
Listener.ora: INBOUND_CONNECT_TIMEOUT_listener_name=120

However, the errors continued. To rule out any network issues, I also restarted the network service on Linux:

service network restart

One thing I noticed was that the connect was taking an extra amount of time, about 4 seconds:

21:17:38 SQL> conn apps/apps
Connected.
21:17:42 SQL> 

Checked from the remote app tier and it was the same 4 seconds.

Stopped the listener and checked on the DB server using the bequeath protocol:

21:18:47 SQL> conn / as sysdba
Connected.
21:18:51 SQL> conn / as sysdba
Connected.

Again it took 4 seconds.

A few days back, I had seen that connect time had increased after setting the DB initialization parameter pre_page_sga to true in a different instance. On a hunch, I checked this and indeed pre_page_sga was set to true. I set this back to false:

alter system set pre_page_sga=false scope=spfile;
shutdown immediate;
exit
sqlplus /nolog
conn / as sysdba
startup
SQL> set time on
22:09:46 SQL> conn / as sysdba
Connected.
22:09:49 SQL>

The connections were happening instantly.  So I went ahead and resumed the patch after setting:

update fnd_install_processes 
set control_code='W', status='W';

commit;

I restarted the patch and all the workers completed successfully.  And the patch was running significantly faster.  So I did a search on support.oracle.com to substantiate my solution with official Oracle documentation.  I found the following articles:

Slow Connection or ORA-12170 During Connect when PRE_PAGE_SGA init.ora Parameter is Set (Doc ID 289585.1)
Health Check Alert: Consider setting PRE_PAGE_SGA to FALSE (Doc ID 957525.1)

The first article (289585.1) says:
PRE_PAGE_SGA can increase the process startup duration, because every process that starts must access every page in the SGA. This approach can be useful with some applications, but not with all applications. Overhead can be significant if your system frequently creates and destroys processes by, for example, continually logging on and logging off. The advantage that PRE_PAGE_SGA can afford depends on page size.

The second article (957525.1) says:
Having the PRE_PAGE_SGA initialization parameter set to TRUE can significantly increase the time required to establish database connections.

The golden words here are "Overhead can be significant if your system frequently creates and destroys processes by, for example, continually logging on and logging off.".  That is exactly what happens when you do adpatch or adop.

Keep this in mind: whenever you do adpatch or adop, make sure that pre_page_sga is set to false. It is possible that you may get the error "java.sql.SQLRecoverableException: IO Error: Socket read timed out" if you don't. Also, the patch will run significantly slower if pre_page_sga is set to true. So set it to false and avoid these issues.
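A quick sanity check before patching (a sketch of what you want to see) is:

SQL> show parameter pre_page_sga

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pre_page_sga                         boolean     FALSE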



Categories: APPS Blogs

GoldenGate Not Keeping Up? Split the Process Using GoldenGate Range Function

VitalSoftTech - Tue, 2014-11-11 10:35
In most environments one set of GoldenGate processes (1 Extract & 1 Replicat process) is sufficient for change data synchronization. But if your source database generates a huge volume of data then a single process may not be sufficient to handle the data volume. In such a scenario there may be a need to split the […]
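For context, splitting a Replicat with the @RANGE function typically looks something like this (the table and key column names here are made up):

-- Replicat 1 parameter file: processes range 1 of 2
MAP src.orders, TARGET tgt.orders, FILTER (@RANGE (1, 2, order_id));

-- Replicat 2 parameter file: processes range 2 of 2
MAP src.orders, TARGET tgt.orders, FILTER (@RANGE (2, 2, order_id));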
Categories: DBA Blogs

Is Oracle Database 12c (12.1.0.2.0) Faster Than Previous Releases?

This page has been permanently moved. Please CLICK HERE to be redirected.

Thanks, Craig.
I was wondering if the new Oracle Database 12c version 12.1.0.2.0 in-memory column store feature will SLOW performance when it is NOT being used. I think this is a fair question because most Oracle Database systems will NOT be using this feature.

While the new in-memory column store feature is awesome and significant, with each new Oracle feature there is additional kernel code. And if Oracle is not extremely careful, these new lines of Oracle kernel code can slow down the core of Oracle processing, that is, buffer processing in Oracle's buffer cache.

Look at it this way, if a new Oracle release requires 100 more lines of kernel code to be executed to process a single buffer, that will be reflected in how many buffers Oracle can process per second.

To put it bluntly, this article is the result of my research comparing core buffer processing rates between Oracle Database versions 11.2.0.2.0, 12.1.0.1.0 and 12.1.0.2.0.

With postings like this, it is very important for everyone to understand that the results I publish are based on a very specific and targeted test and not on a real production load. Do not use my results in making a "should I upgrade" decision. That would be stupid and an inappropriate use of my experimental results. But because I publish every aspect of my experiment and it is easily reproducible, it is a valid data point with which to have a discussion and also highlight various situations that DBAs need to know about.

There are two interesting results from this research project. This article is about the first discovery and my next article will focus on the second. The second is by far the most interesting!

FYI: Back in August of 2013 I performed a similar experiment where I compared Oracle Database versions 11.2.0.2.0 and 12.1.0.1.0. I posted the article HERE.

Why "Faster" Means More Buffer Gets Processed Per Second
For this experiment when I say "faster" I am referring to raw buffered block processing. When a buffer is touched in the buffer cache it is sometimes called a buffer get or a logical IO. But regardless of the name, every buffer get increases the instance statistic, session logical reads.
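If you want to watch this statistic on your own system, a simple query (a sketch) against v$sysstat is:

select name, value
  from v$sysstat
 where name = 'session logical reads';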

I like raw logical IO processing experiments because they are central to all Oracle Database processing. Plus, with each new Oracle release, as additional functionality is added it is likely more lines of Oracle kernel code will exist. To maintain performance with added functionality is an incredible feat. It's more likely the core buffer processing will be slower because of the new features. Is this the case with Oracle's in-memory column store?

How I Setup The Experiment
I have included all the detailed output, scripts, R commands and output, data plots and more in the Analysis Pack that can be downloaded HERE.

There are a lot of ways I could have run this experiment. But two key items must exist for a fair comparison. First, all the processing must be in cache. There can be no physical read activity. Second, the same SQL must be run during the experiment and have the same execution plan. This implies the Oracle 12c column store will NOT be used. A different execution plan is considered "cheating", as a bad plan will clearly lose. Again, this is a very targeted and specific experiment.

The experiment compares the buffer get rates for a given SQL statement. For each Oracle version, I gathered 33 samples and excluded the first two, just to ensure caching was not an issue. The SQL statement runs for around 10 seconds, processes around 10.2M rows and touches around 5M buffers. I checked to ensure the execution plans are the same for each Oracle version. (Again, all the details are in the Analysis Pack for your reading pleasure.)

I ran the experiment on a Dell server. Here are the details:
$ uname -a
Linux sixcore 2.6.39-400.17.1.el6uek.x86_64 #1 SMP Fri Feb 22 18:16:18 PST 2013 x86_64 x86_64 x86_64 GNU/Linux
To make this easier for myself, I used my CPU Speed Test tool (version 1i) to perform the test. I blogged about this last month HERE. The latest version of this tool can be downloaded HERE.

The Results, Statistically
Shown below are the experimental results. Remember, the statistic I'm measuring is buffer gets per millisecond.


Details about the above table: The "Normal" column is about whether the statistical distribution of the 31 samples is normal. If the p-value (far right column) is greater than 0.05 then I'll say they are normal. In all three cases, the p-value is less than 0.05. In fact, if you look at the histograms contained in the Analysis Pack, every histogram is visually clearly not normal. As you would expect, the "Average" and the "Median" are the statistical mean and median. The "Max" is the largest value in the sample set. The "Std Dev" is the standard deviation, which doesn't mean much since our sample sets are not normally distributed.

As I blogged about before, Oracle Database 12c buffer processing is faster than Oracle Database 11g. However, the interesting part is that the Oracle version with the in-memory column store, 12.1.0.2.0, is slower than the previous version of 12c, 12.1.0.1.0. In fact, in my experiment the in-memory column store version is around 5.5% slower! This means version 12.1.0.1.0 "out of the box" can process logical buffers around 5.5% faster! Interesting.

In case you're wondering, I used the default out-of-the-box in-memory column store settings for version 12.1.0.2.0. I checked the in-memory size parameter, inmemory_size, and it was indeed set to zero. Also, when I start up the Oracle instance there is no mention of the in-memory column store.

Statistically Comparing Each Version
As an important sidebar, I did statistically compare the Oracle Database versions. Why? Because while a 5.5% decrease in buffer throughput may seem important, it may not be statistically significant, meaning the difference might not be supported by our sample sets.

So going around saying version 12.1.0.2.0 is "slower" by 5.5% would be misleading. But in my experiment, it would NOT be misleading because the differences in buffer processing are statistically significant. The relevant experimental details are shown below.

Version A     Version B     Statistically Different   p-value
----------    ----------    -----------------------   -------
11.2.0.1.0    12.1.0.1.0    YES                        0.0000
11.2.0.1.0    12.1.0.2.0    YES                        0.0000
12.1.0.1.0    12.1.0.2.0    YES                        0.0000

In all three cases the p-value was less than 0.05, signifying the two sample sets are statistically different. Again, all the details are in the Analysis Pack.

The chart above shows the histograms of both Oracle Database 12c version sample sets together. Visually they look very separated and different with no data crossover. So from both a numeric and visual perspective there is a real difference between 12.1.0.1.0 and 12.1.0.2.0.


What Does This Mean To Me
To me this is surprising. First, there is a clear buffer processing gain upgrading from Oracle 11g to 12c. That is awesome news! But I was not expecting a statistically significant 5.5% buffer processing decrease upgrading to the more recent 12.1.0.2.0 version. Second, this has caused me to do a little digging to perhaps understand the performance decrease. The results of my experimental journey are really interesting...I think more interesting than this posting! But I'll save the details for my next article.

Remember, if you have any questions or concerns about my experiment you can run the experiment yourself. Plus all the details of my experiment are included in the Analysis Pack.

All the best in your Oracle performance tuning work!

Craig.





Categories: DBA Blogs
