Feed aggregator

How to use the Oracle Sales Cloud New Simplified WebServices API

Angelo Santagata - Fri, 2014-11-14 04:05

Over the last two years my organisation has been working with multiple partners, helping them create partner integrations and showing them how to use the variety of SOAP APIs available to Sales Cloud integrators. Based on this work I've been working with development to simplify some of the API calls that currently require multiple calls to achieve a single objective. For example, to create a customer account you often need to create the location first, then the contacts, and then the customer account itself. Sales Cloud R9 introduces a new subset of APIs which simplifies this.

So that you all have a head start in learning the API, I've worked with our documentation people and we've just released a new whitepaper onto Oracle Support explaining the new API in lovely glorious detail. It also includes some sample code for each of the operations you might use, plus some hints and tips!

Enjoy and feel free to post feedback 

You can download the documentation from Oracle Support. The document is called "Using Simplified SOAP WebServices" and its Doc ID is 1938666.1.

NoCOUG watchers protest despicable tactics being used by NoCOUG management

Iggy Fernandez - Thu, 2014-11-13 13:49
FOR IMMEDIATE RELEASE NoCOUG watchers protest despicable tactics being used by NoCOUG management SILICON VALLEY (NOVEMBER 13, 2014) – Veteran NoCOUG watchers all over Northern California have been protesting the despicable tactics being used by NoCOUG management to lure Oracle Database professionals to the NoCOUG conference at the beautiful eBay Town Hall next week. Instead […]
Categories: DBA Blogs

Oracle Support Advisor Webcast Series for November

Chris Warticki - Thu, 2014-11-13 12:38
We are pleased to invite you to our Advisor Webcast series for November 2014. Subject matter experts prepare these presentations and deliver them through WebEx. Topics include information about Oracle support services and products. To learn more about the program or to access archived recordings, please follow the links.

There are currently two types of Advisor Webcasts.
If you prefer to read about current events, Oracle Premier Support News provides you information, technical content, and technical updates from the various Oracle Support teams. For a full list of Premier Support News, go to My Oracle Support and enter Document ID 222.1 in Knowledge Base search.

Sincerely,
Oracle Support

November Featured Webcasts by Product Area:

  • Database: Oracle Database 12.1 Support Update for Linux on System z (November 20)
  • Database: Oracle 12c: New Database Initialization Parameters (November 26)
  • E-Business Suite: Topics in Inventory Convergence and Process Manufacturing (November 12)
  • E-Business Suite: Oracle Receivables Posting & Reconciliation Process In R12 (November 13)
  • E-Business Suite: Respecting Ship Set Constraints Rapid Planning (November 13)
  • E-Business Suite: Overview of Intercompany Transactions (November 18)
  • E-Business Suite: Empowering Users with Oracle EBS Endeca Extensions (November 20)
  • Eng System: Oracle Exadata 混合列式压缩 (Oracle Exadata Hybrid Columnar Compression) - Mandarin only (November 20)
  • JD Edwards EnterpriseOne: Introduction and Demo on Multi Branch Plant MRP Planning (R3483) (November 11)
  • JD Edwards EnterpriseOne: 9.1 Enhancement - Inventory to G/L Reconciliation Process (November 12)
  • JD Edwards EnterpriseOne: Installation and setup of the Web Development client (November 13)
  • JD Edwards EnterpriseOne: Using JD Edwards EnterpriseOne Equipment Billing (November 19)
  • JD Edwards EnterpriseOne: 2014 1099 Year End Refresher Webcast (November 20)
  • JD Edwards World: Localizacao Brazil - Ficha Conteudo de Importacao (FCI) - Portuguese only (November 18)
  • Middleware: WLS どんな問い合わせもスムーズにすすむ最初の一歩 + 3 - Japanese only (November 19)
  • Middleware: WLS 管理服务器与被管服务器之间的通讯机制 - Mandarin only (November 26)
  • Oracle Business Intelligence: Using OBIEE with Big Data (November 25)
  • PeopleSoft Enterprise: Financial Aid Regulatory 2015-2016 Release 1 (9.0 Bundle #35) (November 12)
  • PeopleSoft Enterprise: PeopleSoft 1099 Update for 2014 - Get your Copy B's out on time! (November 13)
  • PeopleSoft Enterprise: Payroll for North America - Preparing for Year-End Processing and Annual Tax Reporting (November 19)

Did You Know that You Can Buy OUM?

Jan Kettenis - Thu, 2014-11-13 09:18
The Oracle Unified Method (OUM) Customer Program has changed: in addition to the existing option of obtaining it by engaging Oracle Consulting, you can now also buy it outright if (for whatever reason) you don't want to involve Consulting.

There is also the option to purchase a subscription (initially for 3 years, renewable annually thereafter) that allows you to download updates to OUM.

OUM aims at supporting the entire Enterprise IT lifecycle, including the Cloud.

Got problems with Nulls with ServiceCloud's objects in REST?

Angelo Santagata - Wed, 2014-11-12 13:02
Using Oracle RightNow, Jersey 1.18, JAX-WS, JDeveloper 11.1.1.7.1

Whilst trying to create a REST interface to our RightNow instance for a mobile application, I hit an issue with null values being rendered by Jersey (Jackson).

The way I access RightNow is via JAX-WS generated proxies, which generate JAXB objects. In the past I've been able to simply return the JAXB object to Jersey and it gets rendered all lovely jubbly. However, with the RightNow WSDL I'm getting lots of extra (null) fields rendered. This doesn't occur with Sales Cloud, so I'm assuming it's something to do with the WSDL definition.

     
{ "incidents" : [
   {
       "Asset" : null, "AssignedTo" :  {
           "Account" : null, "StaffGroup" :  {
               "ID" :  {
                  "id" : 100885
              },"Name" : "B2CHousewares"
           },"ValidNullFields" : null
        },"BilledMinutes" : null, "Category" :  {
           "ID" :  {
               "id" : 124
           },"Parents" : [
           {
           .....
Look at all those "null" values, yuck...

Thankfully I found a workaround (yay!): I simply needed to create a custom ObjectMapper and tell it *not* to render nulls. This worked both for the JAXB objects which were generated for me and for other classes.

Simply create a class which overrides the normal ObjectMapper factory and, to make sure it's used, ensure the @Provider annotation is present:

package myPackage;

import javax.ws.rs.ext.ContextResolver;
import javax.ws.rs.ext.Provider;

import org.codehaus.jackson.map.DeserializationConfig;
import org.codehaus.jackson.map.ObjectMapper;
import org.codehaus.jackson.map.SerializationConfig;
import org.codehaus.jackson.map.annotate.JsonSerialize;

@Provider
public class CustomJSONObjectMapper implements ContextResolver<ObjectMapper> {

    private ObjectMapper objectMapper;

   
    public CustomJSONObjectMapper() throws Exception {
        System.out.println("My object mapper init");
        objectMapper = new ObjectMapper();
        // Force all conversions to be NON_NULL for JSON
        objectMapper.setSerializationInclusion(JsonSerialize.Inclusion.NON_NULL);
    }

    public ObjectMapper getContext(Class<?> objectType) {
        System.out.println("My Object Mapper called");
        return objectMapper;
    }
}

And the result is lovely: no null values, and it's ready for my mobile app to consume.
{
    "organization" : [
    {
        "id" :  {
            "id" : 68
        },"lookupName" : "AngieSoft", "createdTime" : 1412166303000, "updatedTime" : 1412166303000, "name" : "AngieSoft", "salesSettings" :  {
            "salesAccount" :  {
                "id" :  {
                    "id" : 2
                }
            },"totalRevenue" :  {
                "currency" :  {
                    "id" :  {
                        "id" : 1
                    }
                }
            }
        },"source" :  {
            "id" :  {
                "id" : 1002
            },"parents" : [
            {
                "id" :  {
                    "id" : 32002
                }
            }
]
        },"crmmodules" :  {
            "marketing" : true, "sales" : true, "service" : true
        }
    }
]
}

Oh, heads up: I'm using Jersey 1.18 because I want to deploy it to Oracle Java Cloud Service. If you're using Jersey 2.x, I believe the setSerializationInclusion method has changed.

Making Copies of Copies with Oracle RMAN

Don Seiler - Wed, 2014-11-12 10:13
I recently had a need to make a copy of an image copy in Oracle RMAN. Since it wasn't immediately obvious to me, I thought it was worth sharing once I had it sorted out. I was familiar with making a backup of a backup, but had never thought about making a copy of a copy.

First you need to create an image copy of your database or tablespace. For the sake of example, I'll make a copy of the FOO tablespace. The key is to assign a tag to it that you can use for later reference. I'll use the tag "DTSCOPYTEST":

backup as copy 
    tablespace foo 
    tag 'DTSCOPYTEST'
    format '+DG1';

So I have my image copy in the DG1 disk group. Now say we want to make a copy of that for some testing purpose and put it in a different diskgroup. For that, we need the "BACKUP AS COPY COPY" command, and we'll want to specify the copy we just took by using the tag that was used:

backup as copy
    copy of tablespace foo
    from tag 'DTSCOPYTEST'
    tag 'DTSCOPYTEST2'
    format '+DG2';

As you'd guess, RMAN makes a copy of the first copy, writing it to the specified format location.
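
If you want to confirm that both copies are registered, you can also check from SQL*Plus. The following is just a minimal sketch against the v$datafile_copy view, using the tags from above:

-- list the image copies made above, identified by their tags
select name, tag, status, completion_time
from   v$datafile_copy
where  tag in ('DTSCOPYTEST', 'DTSCOPYTEST2')
order  by completion_time;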

As always, hope this helps!
Categories: DBA Blogs

Mystery of java.sql.SQLRecoverableException: IO Error: Socket read timed out during adop/adpatch

Vikram Das - Tue, 2014-11-11 21:19
While applying the R12.2 upgrade driver, we faced an issue where WFXLoad.class was failing in the adworker log but showing up as Running in adctrl:

        Control
Worker  Code      Context            Filename                    Status
------  --------  -----------------  --------------------------  --------------
     1  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     2  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     3  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     4  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     5  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     6  Run       AutoPatch R120 pl                              Wait        
     7  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     8  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     9  Run       AutoPatch R120 pl  WFXLoad.class               Running      
    10  Run       AutoPatch R120 pl                              Wait        

adworker log shows:

Exception in thread "main" java.sql.SQLRecoverableException: IO Error: Socket read timed out
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:482)
        at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:678)
        at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:238)
        at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:34)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:567)
        at java.sql.DriverManager.getConnection(DriverManager.java:571)
        at java.sql.DriverManager.getConnection(DriverManager.java:215)
        at oracle.apps.ad.worker.AdJavaWorker.getAppsConnection(AdJavaWorker.java:1041)
        at oracle.apps.ad.worker.AdJavaWorker.main(AdJavaWorker.java:276)
Caused by: oracle.net.ns.NetException: Socket read timed out
        at oracle.net.ns.Packet.receive(Packet.java:341)
        at oracle.net.ns.NSProtocol.connect(NSProtocol.java:308)
        at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1222)
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:330)
        ... 8 more

This was happening again and again. The DBAs suspected a network issue, a cluster issue, a server issue and all the usual suspects. In the database alert log we saw these errors coming every few seconds:

Fatal NI connect error 12537, connecting to:
 (LOCAL=NO)

  VERSION INFORMATION:
        TNS for Linux: Version 11.2.0.3.0 - Production
        Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.3.0 - Production
        TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.3.0 - Production
  Time: 11-NOV-2014 19:58:19
  Tracing not turned on.
  Tns error struct:
    ns main err code: 12537

TNS-12537: TNS:connection closed
    ns secondary err code: 12560
    nt main err code: 0
    nt secondary err code: 0
    nt OS err code: 0
opiodr aborting process unknown ospid (26388) as a result of ORA-609


We tried changing the parameters in sqlnet.ora and listener.ora as instructed in the article:
Troubleshooting Guide for ORA-12537 / TNS-12537 TNS:Connection Closed (Doc ID 555609.1)

Sqlnet.ora: SQLNET.INBOUND_CONNECT_TIMEOUT=180
Listener.ora: INBOUND_CONNECT_TIMEOUT_listener_name=120

However, the errors continued. To rule out any network issues, I also restarted the network service on Linux:

service network restart

One thing I noticed was the extra amount of time that each connection was taking: around 4 seconds.

21:17:38 SQL> conn apps/apps
Connected.
21:17:42 SQL> 

I checked from a remote apps tier and it was the same 4 seconds.

I stopped the listener and checked on the DB server using the bequeath protocol:

21:18:47 SQL> conn / as sysdba
Connected.
21:18:51 SQL> conn / as sysdba
Connected.

Again it took 4 seconds.

A few days back, I had seen that connect time increased after setting the DB initialization parameter pre_page_sga to true in a different instance. On a hunch, I checked this and indeed pre_page_sga was set to true. I set it back to false:

alter system set pre_page_sga=false scope=spfile;
shutdown immediate;
exit
sqlplus /nolog
conn / as sysdba
startup
SQL> set time on
22:09:46 SQL> conn / as sysdba
Connected.
22:09:49 SQL>

The connections were happening instantly.  So I went ahead and resumed the patch after setting:

update fnd_install_processes 
set control_code='W', status='W';

commit;

I restarted the patch and all the workers completed successfully.  And the patch was running significantly faster.  So I did a search on support.oracle.com to substantiate my solution with official Oracle documentation.  I found the following articles:

Slow Connection or ORA-12170 During Connect when PRE_PAGE_SGA init.ora Parameter is Set (Doc ID 289585.1)
Health Check Alert: Consider setting PRE_PAGE_SGA to FALSE (Doc ID 957525.1)

The first article (289585.1) says:
PRE_PAGE_SGA can increase the process startup duration, because every process that starts must access every page in the SGA. This approach can be useful with some applications, but not with all applications. Overhead can be significant if your system frequently creates and destroys processes by, for example, continually logging on and logging off. The advantage that PRE_PAGE_SGA can afford depends on page size.

The second article (957525.1) says:
Having the PRE_PAGE_SGA initialization parameter set to TRUE can significantly increase the time required to establish database connections.

The golden words here are "Overhead can be significant if your system frequently creates and destroys processes by, for example, continually logging on and logging off.".  That is exactly what happens when you do adpatch or adop.

Keep this in mind: whenever you do adpatch or adop, make sure that pre_page_sga is set to false. If you don't, you may get the error "java.sql.SQLRecoverableException: IO Error: Socket read timed out", and the patch will run significantly slower. So set it to false and avoid these issues.
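
As a quick sanity check before kicking off adpatch or adop, a query along these lines (run as SYSDBA) shows the current value; this is just a sketch, and any equivalent check works:

-- check pre_page_sga before starting adpatch/adop
select name, value, isdefault
from v$parameter
where name = 'pre_page_sga';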



Categories: APPS Blogs

GoldenGate Not Keeping Up? Split the Process Using GoldenGate Range Function

VitalSoftTech - Tue, 2014-11-11 10:35
In most environments one set of GoldenGate processes (one Extract and one Replicat) is sufficient for change data synchronization. But if your source database generates a huge volume of data then a single process may not be sufficient to handle the data volume. In such a scenario there may be a need to split the […]
Categories: DBA Blogs

Is Oracle Database 12c (12.1.0.2.0) Faster Than Previous Releases?

I was wondering if the new Oracle Database 12c version 12.1.0.2.0 in-memory column store feature will SLOW performance when it is NOT being used. I think this is a fair question because most Oracle Database systems will NOT be using this feature.

While the new in-memory column store feature is awesome and significant, with each new Oracle feature there is additional kernel code. And if Oracle is not extremely careful, these new lines of Oracle kernel code can slow down the core of Oracle processing, that is, buffer processing in Oracle's buffer cache.

Look at it this way, if a new Oracle release requires 100 more lines of kernel code to be executed to process a single buffer, that will be reflected in how many buffers Oracle can process per second.

To put it bluntly, this article is the result of my research comparing core buffer processing rates between Oracle Database versions 11.2.0.2.0, 12.1.0.1.0 and 12.1.0.2.0.

With postings like this, it is very important for everyone to understand that the results I publish are based on a very specific and targeted test and not on a real production load. Do not use my results in making a "should I upgrade" decision. That would be stupid and an inappropriate use of my experimental results. But because I publish every aspect of my experiment and it is easily reproducible, it is a valid data point with which to have a discussion and also highlight various situations that DBAs need to know about.

There are two interesting results from this research project. This article is about the first discovery and my next article will focus on the second. The second is by far the most interesting!

FYI: back in August of 2013 I performed a similar experiment where I compared Oracle Database versions 11.2.0.2.0 and 12.1.0.1.0. I posted the article HERE.

Why "Faster" Means More Buffer Gets Processed Per Second
For this experiment when I say "faster" I am referring to raw buffered block processing. When a buffer is touched in the buffer cache it is sometimes called a buffer get or a logical IO. But regardless of the name, every buffer get increases the instance statistic, session logical reads.
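
If you want to watch this statistic for your own session, a quick query like the one below will show it (just a sketch; it joins v$mystat to v$statname):

-- current session's logical read count
select sn.name, ms.value
  from v$mystat ms, v$statname sn
 where sn.statistic# = ms.statistic#
   and sn.name = 'session logical reads';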

I like raw logical IO processing experiments because they are central to all Oracle Database processing. Plus, with each new Oracle release, as additional functionality is added it is likely more lines of Oracle kernel code will exist. To maintain performance with added functionality is an incredible feat. It's more likely that core buffer processing will be slower because of the new features. Is this the case with Oracle's in-memory column store?

How I Setup The Experiment
I have included all the detailed output, scripts, R commands and output, data plots and more in the Analysis Pack that can be downloaded HERE.

There are a lot of ways I could have run this experiment. But two key items must exist for a fair comparison. First, all the processing must be in cache. There can be no physical read activity. Second, the same SQL must be run during the experiment and have the same execution plan. This implies the Oracle 12c column store will NOT be used. A different execution plan is considered "cheating", as a bad plan will clearly lose. Again, this is a very targeted and specific experiment.

The experiment compares the buffer get rates for a given SQL statement. For each Oracle version, I gathered 33 samples and excluded the first two, just to ensure caching was not an issue. The SQL statement runs for around 10 seconds, processes around 10.2M rows and touches around 5M buffers. I checked to ensure the execution plans are the same for each Oracle version. (Again, all the details are in the Analysis Pack for your reading pleasure.)

I ran the experiment on a Dell server. Here are the details:
$ uname -a
Linux sixcore 2.6.39-400.17.1.el6uek.x86_64 #1 SMP Fri Feb 22 18:16:18 PST 2013 x86_64 x86_64 x86_64 GNU/Linux
To make this easier for myself, I used my CPU Speed Test tool (version 1i) to perform the test. I blogged about this last month HERE. The latest version of this tool can be downloaded HERE.

The Results, Statistically
Shown below are the experimental results. Remember, the statistic I'm measuring is buffer gets per millisecond.


Details about the above table: the "Normal" column indicates whether the statistical distribution of the 31 samples is normal. If the p-value (far right column) is greater than 0.05 then I'll say they are normal. In all three cases, the p-value is less than 0.05. In fact, if you look at the histograms contained in the Analysis Pack, every histogram is visually clearly not normal. As you would expect, the "Average" and the "Median" are the statistical mean and median. The "Max" is the largest value in the sample set. The "Std Dev" is the standard deviation, which doesn't mean much since our sample sets are not normally distributed.

As I blogged about before, Oracle Database 12c buffer processing is faster than Oracle Database 11g. However, the interesting part is that the Oracle version with the in-memory column store, 12.1.0.2.0, is slower than the previous version of 12c, 12.1.0.1.0. In fact, in my experiment the in-memory column store version is around 5.5% slower! This means version 12.1.0.1.0 "out of the box" can process logical buffers around 5.5% faster. Interesting.

In case you're wondering, I used the default out-of-the-box in-memory column store settings for version 12.1.0.2.0. I checked the in-memory size parameter, inmemory_size, and it was indeed set to zero. Also, when I start up the Oracle instance there is no mention of the in-memory column store.

Statistically Comparing Each Version
As an important sidebar, I did statistically compare the Oracle Database versions. Why? Because while a 5.5% decrease in buffer throughput may seem important, it may not be statistically significant, meaning the difference might be explained by nothing more than sampling variation.

So going around saying version 12.1.0.2.0 is "slower" by 5.5% would be misleading. But in my experiment, it would NOT be misleading because the differences in buffer processing are statistically significant. The relevant experimental details are shown below.

Version A    Version B    Statistical  p-value
                          Difference
----------   ----------   -----------  -------
11.2.0.1.0   12.1.0.1.0   YES          0.0000
11.2.0.1.0   12.1.0.2.0   YES          0.0000
12.1.0.1.0   12.1.0.2.0   YES          0.0000

In all three cases the p-value was less than 0.05, signifying the two sample sets are statistically different. Again, all the details are in the Analysis Pack.

The chart above shows the histograms of both Oracle Database 12c version sample sets together. Visually they look very separated and different with no data crossover. So from both a numeric and visual perspective there is a real difference between 12.1.0.1.0 and 12.1.0.2.0.


What Does This Mean To Me
To me this is surprising. First, there is a clear buffer processing gain upgrading from Oracle 11g to 12c. That is awesome news! But I was not expecting a statistically significant 5.5% buffer processing decrease upgrading to the more recent 12.1.0.2.0 version. Second, this has caused me to do a little digging to perhaps understand the performance decrease. The results of my experimental journey are really interesting...I think more interesting than this posting! But I'll save the details for my next article.

Remember, if you have any questions or concerns about my experiment you can run the experiment yourself. Plus all the details of my experiment are included in the Analysis Pack.

All the best in your Oracle performance tuning work!

Craig.





Categories: DBA Blogs

Past Sacrifice

Pete Scott - Mon, 2014-11-10 07:39
This year is the 100th anniversary of the beginning of World War 1. This was not the war to end wars it was said to be, instead it was the transition from the direct man-on-man combat of earlier conflicts to a scientific, engineered, calculated way of killing people efficiently, and not just military personnel. Explosives […]

Elevated Task Manager Shortcut on Windows 7

Mike Rothouse - Sun, 2014-11-09 07:05
Received a replacement laptop so now I have to perform some software installs and configure it the way I had my old laptop.  To get Task Manager to display All Processes without having to select this option every time, it must run as Administrator.  Searched Google and found this link which describes how to setup […]

Singapore Oracle Sessions - Beginnings

Doug Burns - Fri, 2014-11-07 20:31

Last Monday evening we had the first Singapore Oracle Sessions - an informal meetup of Oracle professionals thrown together at the last minute by a few of us.

Morten Egan (or as I believe he is called in Denmark now - The Traitor ;-)) mentioned to me months ago that if there was no user group when we arrived in Singapore, then we should start one. At the time he was the current (now retired) chairman of the Danish Oracle User Group (DOUG, strangely enough) and, as I've presented at and supported various Oracle user events over the years and am an ACE Director, it seemed fitting that we should try to build something for the Singapore Oracle community.

The fact that the Oracle ACE Hemant Chitale works for the same company and that the ACE Director Bjoern Rost would be spending a few days at my place before continuing on to the OTN APAC Tour was too much of an opportunity. After a short chat on Twitter we decided to bite the bullet and I started researching venues and contacted some of the locals. We only had 6 days to arrange it so it was either brave or stupid!

As it came together and (through a few very good contacts) we had more and more attendees registering it started to seem like a reality and eventually Bjoern, Madeleine and I found ourselves walking along to the Bugis area on Monday, hoping for the best. Despite some initial problems finding the venue, we arrived to find the extremely helpful Sean Low of Seminar Room who took excellent care of us. 

Within the matter of 15 minutes or so, 33 of the 36 or so who had registered were safely settled in their seats (including my other half Madeleine who *never* attends Oracle stuff!) for my brief introduction during which Sean insisted I try out the hand-held microphone.

My big Sinatra moment (not).

First up was Bjoern Rost of Portrix with "Change the way you think about tuning with SQL Plan Management" which, as those who've seen me present on the subject at Openworld, BGOUG or UKOUG would know is a subject dear to my heart. However, Bjoern seems to have had much more success with it than my failed attempts that were damned by literal values and Dynamic SQL. (I've since had a little more success, but mainly as a narrow solution to very specific problems.)

Bjoern and attentive audience

As you can see, the room was pretty full and the audience very attentive (except for a few people who appear to be mucking around with their phones!). They weren't afraid to ask some interesting and challenging questions too, which I always find very encouraging. 

Early in Bjoern's presentation we suffered what I would say was the only significant disappointment of the night as both the drinks and the pizza turned up early! It was nice of the delivery companies not to be late, but my stupid expectation that 7pm meant 7pm ensured that I was standing at the back of the room surrounded by obviously gorgeous pizza that was slowly going cold, not knowing whether I should stop Bjoern in his tracks or not. Manners dictated not (particularly as there were so many people in a small room) but the pizza experience later suggests I was wrong. Lesson learned! (Note that I had to ask others about the pizza as it's on my extensive list of things I don't eat.)

What obviously didn't go wrong at all was the social interaction between all of the attendees and speakers. It probably helped that there were a few attendees from some organisations and that people from different organisations had worked with each other in the past but it's a *long* time since I've felt such a vibrant energy during a break.

Attendees enjoying pizza and conversation

I was on next, presenting on "Real Time SQL Monitoring" and apart from a few hiccups with the clicker I borrowed from Bjoern and a couple of slide corrections I need to make, I think it went reasonably well and people seemed as enthused by SQL Mon reports as I've come to expect! With that done, and a quick smoke (I *love* organising an agenda :-)), it was time for Morten with his "Big Data Primer" 

Morten doing his thing

I think this might have been lots of people's favourite presentation because it wasn't just about Oracle and Morten packed in plenty of the humour I've come to expect from him. Better still, it seemed to work for a quite cosmopolitan audience, so good work!

Afterwards he said a few words asking for people's feedback and whether there was a desire to set up a local user group or just continue with these informal sessions (sponsors permitting), and all of the feedback I heard later showed that people are very keen for a repeat run.

Overall, Monday night felt like a great success. 

The passion and enthusiasm of the attendees was very encouraging and reflected in the subsequent feedback which has been consistently positive but also thoughtful so far. There's no question that a decent minority of the local Oracle community are looking for regular opportunities to hear decent speakers on subjects that interest them, meet and discuss issues with each other and also offer to present themselves, which is a great start for any Oracle User Group.

Strangely, I discovered a day or so later that there are already plans for a User Group and the Singapore launch event is next Wednesday. Coincidentally this is only 9 days after SOS! You can look into the APOUG website here and a number of colleagues and I will attend the launch event. I suppose it's a small shame that it's an APAC-wide user group, rather than specific to Singapore, which the number of attendees at such short notice would suggest Singapore can justify, but I'll be interested to see what APOUG has planned.

Big thanks to Alvin from Oracle for endless supplies of fine pizza and to Bjoern Rost of Portrix Systems for the room hire (I bought the drinks, which some would say was appropriate but I couldn't possibly comment), and thanks again to all the attendees for making it a fun night!

I didn't notice until I was about to post this that Bjoern had already blogged about the evening and I think he's captured it perfectly.

Blog : Cloud Community France

Jean-Philippe Pinte - Fri, 2014-11-07 01:46
Interested in Oracle's Cloud solutions (IaaS & PaaS / public & private)?
Follow the blog: https://blogs.oracle.com/Cloud-Community-France/

Event: Oracle & Mirantis

Jean-Philippe Pinte - Thu, 2014-11-06 07:47
Would you like to:
  • learn about the Oracle & Mirantis partnership
  • catch up on the news from Oracle OpenWorld 2014 and the OpenStack Summit
  • understand the Oracle & Mirantis OpenStack value proposition
  • see a demonstration
  • and more
Register for the OpenStack event on 20 November.

Understanding PeopleSoft Global Payroll Identification

Javier Delgado - Wed, 2014-11-05 08:49
The first stage in PeopleSoft Global Payroll processing is the identification of the employees to be calculated. Several criteria are used to determine which employees should be selected. Understanding why an employee is selected is not always evident to users. In this post I'm sharing how I normally determine the identification reason.

Once you run the identification stage, the employees to be processed are stored in the GP_PYE_PRC_STAT table. This table not only shows which employees are going to be calculated, but also indicates which calendars will be considered. This is particularly important when running retroactive calculations, as it lets you understand the impact of this type of calculation.

In any case, going back to the identification, in this table you will find the SEL_RSN field, which contains a code that translates into the reason behind the employee identification. The valid values that this field may take are:

  • 01: The employee is active during the calendar period and included in the Payee List associated to the calendar.
  • 02: The employee is inactive (but was active before the start of the calendar period) and included in the Payee List associated to the calendar.
  • 03: The employee is active during the calendar period and has a positive input associated to him/her.
  • 04: The employee is active during the calendar period and has a retro trigger associated to him/her.
  • 05: The employee is active during the calendar period and associated to the calendar pay group.
  • 06: The employee is inactive during the calendar period and associated to a positive input in the current calendar.
  • 07: The employee is inactive (but still associated to the calendar pay group) and has a retro trigger associated to him/her.
  • 08: The employee is inactive but has a retroactive calculation delta from a previous calendar which has not been picked yet.
  • 09: The employee is inactive but has a retroactive calculation correction from a previous calendar which has not been picked yet.
  • 0A: The employee is active and linked to the calendar using an override.
  • 0B: The employee is inactive and linked to the calendar using an override.

From a technical standpoint, you can check the SQL used to select each reason by checking the stored statement under the name GPPIDNT2_I_PRCnn, where nn is the SEL_RSN value.
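
For a quick overview of why payees were picked up in a given run, a query along these lines can help. This is only a sketch: the physical table name PS_GP_PYE_PRC_STAT and the CAL_RUN_ID filter follow standard PeopleSoft naming conventions, so adjust them to your environment.

-- Count identified payees by selection reason for one calendar group run
-- (PS_GP_PYE_PRC_STAT and CAL_RUN_ID are assumed names; adjust as needed)
SELECT SEL_RSN, COUNT(*) AS PAYEE_COUNT
FROM PS_GP_PYE_PRC_STAT
WHERE CAL_RUN_ID = :cal_run_id
GROUP BY SEL_RSN
ORDER BY SEL_RSN;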

Do you use another way to understand why an employee was identified? If so, please feel free to share your method in the comments, as I'm afraid my approach is a little bit too technical. ;)

The Importance of Backups (A Cautionary Block Recovery Tale)

Don Seiler - Tue, 2014-11-04 22:28
Just wanted to share a quick story with everyone. As I was in the airport waiting to fly to Oracle OpenWorld this year, I noticed a flurry of emails indicating that part of our storage infrastructure for our standby production database had failed. Long story short, my co-workers did a brilliant job of stabilizing things and keeping recovery working. However, we ended up with more than a few block corruptions.

Using the RMAN command "validate database", we could then see the list of corrupt blocks in the v$database_block_corruption view. All that was needed was to run "recover corruption list" in RMAN, which will dig into datafile copies and backups to do what it can to repair or replace the corrupt blocks and then recover the datafiles. Of course, nothing is ever that easy for us!
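
If you want to see the damage for yourself, a simple query against that view (just a sketch, run from SQL*Plus) lists the corrupt blocks before and after the recovery:

-- corrupt blocks reported by VALIDATE DATABASE
select file#, block#, blocks, corruption_type
from v$database_block_corruption
order by file#, block#;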

The storage we were writing our weekly backups to had been having problems and the latest weekly backup had failed. We ended up having to go back two weeks into backups to get the datafile blocks and archivelogs needed to eventually complete the corruption recovery. I also immediately moved our backups to more reliable storage so that we're never again in the situation of wondering whether or not we have the backups we need.

So, triple-check your backup plan, validate your backups and TEST RECOVERY SCENARIOS! You can't say your backups are valid until you use them to perform a restore/recovery, and you don't want to find out the hard way that you forgot something.
Categories: DBA Blogs

PeopleSoft's paths to the Cloud - Part II

Javier Delgado - Tue, 2014-11-04 00:01
In my previous post, I covered some ways in which cloud computing features can be used with PeopleSoft, particularly around Infrastructure as a Service (IaaS) and non-Production environments. Now, I'm going to discuss how cloud technologies bring value to PeopleSoft Production environments.

Gain Flexibility



Some of the advantages of hosting PeopleSoft Production environments with an IaaS provider were also mentioned in my past article, as they are also valid for non-Production environments:

  • Ability to adjust processing power (CPU) and memory according to peak usage.
  • Storage may be enlarged at any time to cope with increasing requirements.
  • Possibility of replicating the existing servers for contingency purposes.

In terms of cost, hosting the Production environment in IaaS may not always be cheaper than the on-premise alternative (this needs to be analyzed on a case by case basis). However, the possibility of adding more CPU, memory and storage on the run gives IaaS solutions unprecedented flexibility. It is true that you can obtain similar flexibility with in-house virtualized environments, but not many in-house data centers have the available horsepower of Amazon, IBM or Oracle data centers, to name a few.

Be Elastic



Adding more power to the existing servers may not be the best way to scale up. An alternative is to add a new server to the PeopleSoft architecture. This type of architecture is called elastic (the EC2 in Amazon EC2 actually stands for Elastic Compute Cloud), as the architecture can elastically grow or shrink in order to adapt to the user load.

Many PeopleSoft customers use Production environments with multiple servers for high availability purposes. You may have two web servers, two application servers, two process schedulers, and so on. This architecture guarantees a better system availability in case one of the nodes fails. Using an elastic architecture means that we can add, for instance, a third application server not only to increase redundancy, but also the application performance.

In order to implement an elastic architecture, you need to fulfill two requirements:

  1. You should be able to quickly deploy an additional instance of any part of the architecture. 
  2. Once the instance is created, it should be plugged into the rest of the architecture without disrupting system availability.

The first point is easily covered by creating an Amazon AMI which can be instantiated at any moment. I've discussed the basics about AMIs in my previous post, but there is plenty of information from Amazon.

The second point is a bit trickier. Let's assume we are adding a new application server instance. If you do not declare this application server in the web server's configuration.properties file, it will not be used.

Of course you can do this manually, but my suggestion is that you try to automate these tasks, as it is this automation which will eventually bring elasticity to your architecture. You need to plan the automation not only for enlarging the architecture, but also for potential reduction (in case you covered a usage peak by increasing the instances and then you want to go back to the original situation).

At BNB we have built a generic elastic architecture, covering all layers of a normal PeopleSoft architecture. If you are planning to move to a cloud infrastructure and you need assistance, we would be happy to help.

Coming Next...

In my next post on this topic, I will cover how Database as a Service could be used to host PeopleSoft databases and what value it brings to PeopleSoft customers.

Filtering PeopleTools SQL from Performance Monitor Traces

David Kurtz - Mon, 2014-11-03 15:01

I have been doing some on-line performance tuning on a PeopleSoft Financials system using PeopleSoft Performance Monitor (PPM). End-users have collected verbose PPM traces. Usually, when I use PPM in a production system, all the components are fully cached by the normal activity of the users (except when the application server caches have recently been cleared). However, when working in a user test environment it is common to find that the components are not fully cached. This presents two problems.
  • The application servers spend quite a lot of time executing queries on the PeopleTools tables to load the components, pages and PeopleCode into their caches. We can see in the screenshot of the component trace that there is a warning message that component objects are not fully cached, and that these  cache misses skew timings.
  • In verbose mode, the PPM traces collect a lot of additional transactions capturing executions and fetches against PeopleTools tables. The PPM analytic components cannot always manage the resultant volume of transactions.
    Figure 1. Component trace as collected by PPM

    If I go further down the same page and look in the SQL Summary, I can see SQL operations against PeopleTools tables (they are easily identifiable in that they generally do not have an underscore in the third character). Not only are 5 of the top 8 SQL operations related to PeopleTools tables, we can also see that they account for over 13,000 executions, which means there are at least 13,000 rows of additional data to be read from PSPMTRANSHIST.
    Figure 2. SQL Summary of PPM trace with PeopleTools SQL

    When I open the longest running server round trip (this is also referred to as a Performance Monitoring Unit or PMU), I can only load 1001 rows before I get a message warning that the maximum row limit has been reached. The duration summary and the number of executions and fetches cannot be calculated and hence 0 is displayed.
    Figure 3: Details of longest PMU with PeopleTools SQL
    Another consequence of the PeopleTools data is that it can take a long time to open the PMU tree. There is no screenshot of the PMU tree here because in this case I had so much data that I couldn't open it before the transaction timed out!
    Solution

    My solution to this problem is to delete the transactions that relate to PeopleTools SQL and correct the durations, and the number of executions and fetches, held in summary transactions. The rationale is that these transactions would not normally occur in significant quantities in a real production system, and there is not much I can do about them when they do.
    The first step is to clone the trace. I could work on the trace directly, but I want to preserve the original data.
    PPM transactions are held in the table PSPMTRANSHIST. They have a unique identifier PM_INSTANCE_ID. A single server round trip, also called a Performance Monitoring Unit (PMU), will consist of many transactions. They can be shown as a tree and each transaction has another field PM_PARENT_INST_ID which holds the instance of the parent. This links the data together and we can use hierarchical queries in Oracle SQL to walk the tree. Another field PM_TOP_INST_ID identifies the root transaction in the tree.
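
    To get a feel for the shape of this data, a hierarchical query along these lines walks a single PMU tree from its root transaction (this is only a sketch; the bind variable for the top instance ID is an assumption):

    REM walk one PMU tree from its root transaction
    SELECT LEVEL, pm_instance_id, pm_parent_inst_id, pm_trans_defn_id, pm_trans_duration
    FROM pspmtranshist
    START WITH pm_instance_id = :top_inst_id
    CONNECT BY PRIOR pm_instance_id = pm_parent_inst_id
    ORDER SIBLINGS BY pm_instance_id;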
    Cloning a PPM trace is simply a matter of inserting data into PSPMTRANSHIST. However, when I clone a PPM trace I have to make sure that the instance numbers are distinct but still link correctly. In my system I can take a very simple approach. All the instance numbers actually collected by PPM are greater than 1E16. So, I will simply use the modulus function to consistently alter the instances to be different. This approach may break down in future, but it will do for now.
    On an Oracle database, PL/SQL is a simple and effective way to write simple procedural processes.  I have written two anonymous blocks of code.
    Note that the cloned trace will be purged from PPM like any other data by the delivered PPM archive process.

    REM xPT.sql
    BEGIN --duplicate PPM traces
    FOR i IN (
    SELECT h.*
    FROM pspmtranshist h
    WHERE pm_perf_trace != ' ' /*rows must have a trace name*/
    -- AND pm_perf_trace = '9b. XXXXXXXXXX' /*I could specify a specific trace by name*/
    AND pm_instance_id > 1E16 /*only look at instance > 1e16 so I do not clone cloned traces*/
    ) LOOP
    INSERT INTO pspmtranshist
    (PM_INSTANCE_ID, PM_TRANS_DEFN_SET, PM_TRANS_DEFN_ID, PM_AGENTID, PM_TRANS_STATUS,
    OPRID, PM_PERF_TRACE, PM_CONTEXT_VALUE1, PM_CONTEXT_VALUE2, PM_CONTEXT_VALUE3,
    PM_CONTEXTID_1, PM_CONTEXTID_2, PM_CONTEXTID_3, PM_PROCESS_ID, PM_AGENT_STRT_DTTM,
    PM_MON_STRT_DTTM, PM_TRANS_DURATION, PM_PARENT_INST_ID, PM_TOP_INST_ID, PM_METRIC_VALUE1,
    PM_METRIC_VALUE2, PM_METRIC_VALUE3, PM_METRIC_VALUE4, PM_METRIC_VALUE5, PM_METRIC_VALUE6,
    PM_METRIC_VALUE7, PM_ADDTNL_DESCR)
    VALUES
    (MOD(i.PM_INSTANCE_ID,1E16) /*apply modulus to instance number*/
    ,i.PM_TRANS_DEFN_SET, i.PM_TRANS_DEFN_ID, i.PM_AGENTID, i.PM_TRANS_STATUS,
    i.OPRID,
    SUBSTR('xPT'||i.PM_PERF_TRACE,1,30) /*adjust trace name*/,
    i.PM_CONTEXT_VALUE1, i.PM_CONTEXT_VALUE2, i.PM_CONTEXT_VALUE3,
    i.PM_CONTEXTID_1, i.PM_CONTEXTID_2, i.PM_CONTEXTID_3, i.PM_PROCESS_ID, i.PM_AGENT_STRT_DTTM,
    i.PM_MON_STRT_DTTM, i.PM_TRANS_DURATION,
    MOD(i.PM_PARENT_INST_ID,1E16), MOD(i.PM_TOP_INST_ID,1E16), /*apply modulus to parent and top instance number*/
    i.PM_METRIC_VALUE1, i.PM_METRIC_VALUE2, i.PM_METRIC_VALUE3, i.PM_METRIC_VALUE4, i.PM_METRIC_VALUE5,
    i.PM_METRIC_VALUE6, i.PM_METRIC_VALUE7, i.PM_ADDTNL_DESCR);
    END LOOP;
    COMMIT;
    END;
    /
    Now I will work on the cloned trace. I want to remove certain transactions:
    • PeopleTools SQL. Metric value 7 reports the SQL operation and SQL table name. So if the first word is SELECT and the second word is a PeopleTools table name then it is a PeopleTools SQL operation. A list of PeopleTools tables can be obtained from the object security table PSOBJGROUP.
    • Implicit Commit transactions. This is easy - it is just transaction type 425. 
    Having deleted the PeopleTools transactions, I must also
    • Correct the transaction duration for any parents of the transaction. I work up the hierarchy of transactions and deduct the duration of the transaction that I am deleting from all of its parents.
    • Transaction types 400, 427 and 428 all record PeopleTools SQL time (metric 66). When I come to one of these transactions I also deduct the duration of the deleted transaction from the PeopleTools SQL time metric in any parent transaction.
    • Delete any children of the transactions that I delete. 
    • I must also count each PeopleTools SQL Execution transaction (type 408) and each PeopleTools SQL Fetch transaction (type 414) that I delete. These counts are also deducted from the summaries on the parent transaction 400. 
    The summaries in transaction 400 are used on the 'Round Trip Details' components, and if they are not adjusted you can get misleading results. Without the adjustments, I have encountered PMUs where more than 100% of the total duration is spent in SQL - which is obviously impossible.
    Although this technique of first cloning the whole trace and then deleting the PeopleTools operations can be quite slow, it is not something that you are going to do very often. 
    REM xPT.sql
    REM (c)Go-Faster Consultancy Ltd. 2014
    set serveroutput on echo on
    DECLARE
    l_pm_instance_id_m4 INTEGER;
    l_fetch_count INTEGER;
    l_exec_count INTEGER;
    BEGIN /*now remove PeopleTools SQL transaction and any children and adjust trans durations*/
    FOR i IN (
    WITH x AS ( /*returns PeopleTools tables as defined in Object security*/
    SELECT o.entname recname
    FROM psobjgroup o
    WHERE o.objgroupid = 'PEOPLETOOLS'
    AND o.enttype = 'R'
    )
    SELECT h.pm_instance_id, h.pm_parent_inst_id, h.pm_trans_duration, h.pm_trans_defn_id
    FROM pspmtranshist h
    LEFT OUTER JOIN x
    ON h.pm_metric_value7 LIKE 'SELECT '||x.recname||'%'
    AND x.recname = upper(regexp_substr(pm_metric_value7,'[^ ,]+',8,1)) /*first word after select*/
    WHERE pm_perf_trace like 'xPT%' /*restrict to cloned traces*/
    -- AND pm_perf_trace = 'xPT9b. XXXXXXXXXX' /*work on a specific trace*/
    AND pm_instance_id < 1E16 /*restrict to cloned traces*/
    AND ( x.recname IS NOT NULL
    OR h.pm_trans_defn_id IN(425 /*Implicit Commit*/))
    ORDER BY pm_instance_id DESC
    ) LOOP
    l_pm_instance_id_m4 := TO_NUMBER(NULL);
     
        IF i.pm_parent_inst_id>0 AND i.pm_trans_duration>0 THEN
    FOR j IN(
    SELECT h.pm_instance_id, h.pm_parent_inst_id, h.pm_top_inst_id, h.pm_trans_defn_id
    , d.pm_metricid_3, d.pm_metricid_4
    FROM pspmtranshist h
    INNER JOIN pspmtransdefn d
    ON d.pm_trans_defn_set = h.pm_trans_defn_set
    AND d.pm_trans_defn_id = h.pm_trans_Defn_id
    START WITH h.pm_instance_id = i.pm_parent_inst_id
    CONNECT BY prior h.pm_parent_inst_id = h.pm_instance_id
    ) LOOP
    /*decrement parent transaction times*/
    IF j.pm_metricid_4 = 66 /*PeopleTools SQL Time (ms)*/ THEN --decrement metric 4 on transaction 400
    --dbms_output.put_line('ID:'||i.pm_instance_id||' Type:'||i.pm_trans_defn_id||' decrement metric_value4 by '||i.pm_trans_duration);
    UPDATE pspmtranshist
    SET pm_metric_value4 = pm_metric_value4 - i.pm_trans_duration
    WHERE pm_instance_id = j.pm_instance_id
    AND pm_trans_Defn_id = j.pm_trans_defn_id
    AND pm_metric_value4 >= i.pm_trans_duration
    RETURNING pm_instance_id INTO l_pm_instance_id_m4;
    ELSIF j.pm_metricid_3 = 66 /*PeopleTools SQL Time (ms)*/ THEN --SQL time on serialisation
    --dbms_output.put_line('ID:'||i.pm_instance_id||' Type:'||i.pm_trans_defn_id||' decrement metric_value3 by '||i.pm_trans_duration);
    UPDATE pspmtranshist
    SET pm_metric_value3 = pm_metric_value3 - i.pm_trans_duration
    WHERE pm_instance_id = j.pm_instance_id
    AND pm_trans_Defn_id = j.pm_trans_defn_id
    AND pm_metric_value3 >= i.pm_trans_duration;
    END IF;

    UPDATE pspmtranshist
    SET pm_trans_duration = pm_trans_duration - i.pm_trans_duration
    WHERE pm_instance_id = j.pm_instance_id
    AND pm_trans_duration >= i.pm_trans_duration;
    END LOOP;
    END IF;

    l_fetch_count := 0;
    l_exec_count := 0;
    FOR j IN( /*identify transaction to be deleted and any children*/
    SELECT pm_instance_id, pm_parent_inst_id, pm_top_inst_id, pm_trans_defn_id, pm_metric_value3
    FROM pspmtranshist
    START WITH pm_instance_id = i.pm_instance_id
    CONNECT BY PRIOR pm_instance_id = pm_parent_inst_id
    ) LOOP
    IF j.pm_trans_defn_id = 408 THEN /*if PeopleTools SQL*/
    l_exec_count := l_exec_count + 1;
    ELSIF j.pm_trans_defn_id = 414 THEN /*if PeopleTools SQL Fetch*/
    l_fetch_count := l_fetch_count + j.pm_metric_value3;
    END IF;
    DELETE FROM pspmtranshist h /*delete tools transaction*/
    WHERE h.pm_instance_id = j.pm_instance_id;
    END LOOP;

    IF l_pm_instance_id_m4 > 0 THEN
    --dbms_output.put_line('ID:'||l_pm_instance_id_m4||' Decrement '||l_exec_Count||' executions, '||l_fetch_count||' fetches');
    UPDATE pspmtranshist
    SET pm_metric_value5 = pm_metric_value5 - l_exec_count
    , pm_metric_value6 = pm_metric_value6 - l_fetch_count
    WHERE pm_instance_id = l_pm_instance_id_m4;
    l_fetch_count := 0;
    l_exec_count := 0;
    END IF;

    END LOOP;
    END;
    /
    Now, I have a second PPM trace that I can open in the analytic component.
    Figure 4: Original and Cloned PPM traces

    When I open the cloned trace, both timings in the duration summary have reduced as have the number of executions and fetches.  The durations of the individual server round trips have also reduced.
    Figure 5: Component Trace without PeopleTools transactions
    All of the PeopleTools SQL operations have disappeared from the SQL summary.
    Figure 6: SQL Summary of PPM trace after removing PeopleTools SQL transactions
    The SQL summary now only has 125 rows of data.
    Figure 7: SQL Summary of PMU without PeopleTools SQL
    Now, the PPM tree component opens quickly and without error.
    Figure 8: PMU Tree after removing PeopleTools SQL
    There may still be more transactions in a PMU than I can show in a screenshot, but I can now find the statement that took the most time quite quickly.

    Figure 9: Long SQL transaction further down same PMU tree
    Conclusions

    I think that it is reasonable and useful to remove PeopleTools SQL operations from a PPM trace.
    In normal production operation, components will mostly be cached, and this approach renders traces collected in non-production environments both usable in the PPM analytic components and more realistic for performance tuning. However, it is essential that, when deleting some transactions from a PMU, the summary data held in other transactions in the same PMU is also corrected so that the metrics remain consistent.

    Announcement: DBaaS Web Seminar

    Jean-Philippe Pinte - Mon, 2014-11-03 03:32

    Dramatically accelerate the delivery of your database services with a private cloud infrastructure for your databases
    (Database as a Service).

    Tuesday 18 November 2014, from 11:00 to 12:00

    Oracle's solutions for building a private cloud transform IT organisations by providing an agile, scalable, shared architecture that meets their IT needs and objectives.
    Register for the webinar on 18 November 2014 to understand the Database private cloud solution: its features, the challenges, the benefits, and more.
    During this webinar we will present:
    • The main features of the solution: automation, self-service portal, service catalogue, Snap Clone, and usage metering and chargeback.
    • The main benefits: reducing the time to deliver DB services from days/weeks to minutes/hours, drastically reducing non-Production data volumes, more agility, better control of resources, and consolidation.
    • A demonstration of our DBaaS private cloud solution.
    • Customer examples.
    This webinar is aimed at anyone involved in planning, deploying and administering a private cloud, as well as non-Production teams that need DB services: DBAs, system administrators, technical architects, QA engineers, developers, application/web project managers, testers, and so on.

    Register now by sending an e-mail to alexandre.lacreuse@oracle.com.

    Reading: Oracle Magazine November / December 2014

    Jean-Philippe Pinte - Sun, 2014-11-02 13:11
    The November / December issue of Oracle Magazine is available.
