
Feed aggregator

Best of OTN - Week of April 6th

OTN TechBlog - Fri, 2014-04-11 13:58
Architect Community

Video: Preview of Great Lakes Oracle Conference, May 13-14, 2014
This year's Great Lakes Oracle Conference (GLOC) includes more than 40 technical sessions, including keynote presentations by Tom Kyte and Oracle ACE Director Scott Spendolini, plus sessions by Oracle ACE Directors Steven Feuerstein, Alex Gorbachev, and Cameron Lackpour. In this video interview Northeast Ohio Oracle Users Group (NEOOUG) President John Hurley and Sr. NEOOUG Chairman Rumpi Gravenstein share details and a little background on the Conference.

Deploying Applications using #WLST | René van Wijk
Oracle ACE Director Rene van Wijk describes how to use the Oracle WebLogic Server extensions to the Java EE deployment API specification.

How to configure Oracle SOA/BPM task auto release | Jan van Zoggel
Oracle ACE Jan van Zoggel shows you how to use the "task auto release" feature in the Oracle SOA-INFRA to configure the automatic release times that determine when Oracle SOA/BPM tasks are made available for all other users.

Friday Funny from OTN Architect Community Manager Bob Rhubart:
Don Draper is Stuck in an Elevator, by the fine folks at Funny or Die.

Get involved in community conversations on the following OTN channels...

Moving away from wordpress

Lutz Hartmann - Fri, 2014-04-11 13:55

I am sick of this advertisement on my site.

Therefore I am about to move most of my posts to

http://sysdba.ch/index.php/postlist

 

Thanks for following my blog for so long.

Lutz Hartmann


Categories: DBA Blogs

PeopleSoft Roadshow / What’s next for PeopleSoft … a Correction

Duncan Davies - Fri, 2014-04-11 09:15

Thanks for all your feedback on the roadshow / what’s next for PeopleSoft write-up. It’s wonderful that there’s such a large and active PeopleSoft community out there that’s so positive about the new functionality that Oracle is adding to the product.

After I posted the article, Marc Weintraub got in touch and asked me to correct an inconsistency in one of the sections. I had included a paraphrased quote which gave the impression that net-new PeopleSoft opportunities are not important, and this was misleading. I’m happy to concede that about a week elapsed between his session and when I posted it here, and my memory might not have been as fresh as if I’d live-blogged it.

I’ve updated the article, and the paragraph in question now reads:

Also, there was another statement whose significance I didn’t realise until letting things percolate on the train journey home: Marc’s statement that PeopleSoft has a “95% retention rate, and the focus is on our existing customers” is quite important. It’s great that Oracle are focusing on keeping existing customers happy – that’s what the ongoing licence fee is for, after all – 95% is a good success rate, and the ongoing investment is designed to add value to existing customers.

Marc also added “The 95% retention rate of existing PeopleSoft customers is accurate. The point I wished to convey is that our future investments are more aligned to meeting the needs of our existing customers. Oracle still does secure net-new customers for PeopleSoft at a significant rate.”

This last point is something that we can testify to as Succeed have implemented at least one greenfield PeopleSoft implementation every year for the last 5 years.

Apologies if this has caused any confusion and I’m happy to set the record straight.


Oracle 12c Adaptive Plan & inflection point

Yann Neuhaus - Fri, 2014-04-11 08:25

The Oracle 12c Adaptive Plan feature was already presented by Nicolas Jardot at OOW 2013: Solving customer issues with the 12c Optimizer.

I recently had to answer several questions about its behavior at execution time. Maybe the term 'adaptive' is misleading. It's not that a join will stop mid-execution and restart with another join method: even with an adaptive plan, only one join method is ultimately applied. The feature simply defers a decision that in previous versions was made at parse time; it is now made at execution time, after reading a few rows.

In this post, I will reproduce the kind of exercise that we do in our Oracle 12c New Features workshop in order to show exactly what happens at execution time.
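A quick way to see this deferred decision for yourself is to ask DBMS_XPLAN for the adaptive plan of a statement you just ran. The sketch below is illustrative only: the tables are placeholders and you would substitute your own SQL_ID; the report marks the plan lines that were considered but not chosen as inactive.

SQL> select /*+ gather_plan_statistics */ count(*)
  2  from   orders o join customers c on c.cust_id = o.cust_id;

-- Replace MY_SQL_ID with the SQL_ID of the statement above (e.g. found in V$SQL).
SQL> select * from table(
  2    dbms_xplan.display_cursor(sql_id => 'MY_SQL_ID', format => '+ADAPTIVE'));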

Oracle RMAN Restore to the Same Machine as the Original Database

Pythian Group - Fri, 2014-04-11 07:52

Among the most critical but often most neglected database administration tasks is testing restore from backup. But sometimes, you don’t have a test system handy, and need to test the restore on the same host as the source database. In such situations, the biggest fear is overwriting the original database. Here is a simple procedure you can follow, which will not overwrite the source.

  1. Add an entry to the oratab for the new instance, and source the new environment:
    oracle$ cat >> /etc/oratab <<EOF
    > foo:/u02/app/oracle/product/11.2.0/dbhome_1:N
    > EOF
    
    oracle$ . oraenv
    ORACLE_SID[oracle]? foo
    The Oracle base remains unchanged with value /u02/app/oracle
  2. Create a pfile and spfile with a minimum set of parameters for the new instance. In this case the source database is named ‘orcl’ and the new database will have a DB unique name of ‘foo’. This example will write all files to the +data ASM diskgroup, under directories for ‘foo’. You could use a filesystem directory as the destination as well. Just make sure you have enough space wherever you plan to write:
    oracle$ cat > $ORACLE_HOME/dbs/initfoo.ora <<EOF
    > db_name=orcl
    > db_unique_name=foo
    > db_create_file_dest=+data
    > EOF
    
    oracle$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Wed Apr 9 15:35:00 2014
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to an idle instance.
    
    SQL> create spfile from pfile;
    File created.
    
    SQL> exit
    Disconnected
  3. Now, using the backup pieces from your most recent backup, try restoring the controlfile only. Start with the most recently written backup piece, since RMAN writes the controlfile at the end of the backup. It may fail once or twice, but keep trying backup pieces until you find the controlfile:
    oracle$ ls -lt /mnt/bkup
    total 13041104
    -rwxrwxrwx 1 root root      44544 Apr  4 09:32 0lp4sghk_1_1
    -rwxrwxrwx 1 root root   10059776 Apr  4 09:32 0kp4sghi_1_1
    -rwxrwxrwx 1 root root 2857394176 Apr  4 09:32 0jp4sgfr_1_1
    -rwxrwxrwx 1 root root 3785719808 Apr  4 09:31 0ip4sgch_1_1
    -rwxrwxrwx 1 root root 6697222144 Apr  4 09:29 0hp4sg98_1_1
    -rwxrwxrwx 1 root root    3647488 Apr  4 09:28 0gp4sg97_1_1
    
    $ rman target /
    Recovery Manager: Release 11.2.0.3.0 - Production on Wed Apr 9 15:37:10 2014
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    connected to target database (not started)
    
    RMAN> startup nomount;
    Oracle instance started
    Total System Global Area     238034944 bytes
    Fixed Size                     2227136 bytes
    Variable Size                180356160 bytes
    Database Buffers              50331648 bytes
    Redo Buffers                   5120000 bytes
    
    RMAN> restore controlfile from '/mnt/bkup/0lp4sghk_1_1';
    Starting restore at 09-APR-14
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=1 device type=DISK
    channel ORA_DISK_1: restoring control file
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 04/09/2014 15:42:10
    ORA-19870: error while restoring backup piece /mnt/bkup/0lp4sghk_1_1
    ORA-19626: backup set type is archived log - can not be processed by this conversation
    
    RMAN> restore controlfile from '/mnt/bkup/0kp4sghi_1_1';
    Starting restore at 09-APR-14
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=19 device type=DISK
    channel ORA_DISK_1: restoring control file
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
    output file name=+DATA/foo/controlfile/current.348.844443549
    Finished restore at 09-APR-14

    As you can see above, RMAN will report the path and name of the controlfile that it restores. Use that path and name below:

    RMAN> sql "alter system set
    2>  control_files=''+DATA/foo/controlfile/current.348.844443549''
    3>  scope=spfile";
    
    sql statement: alter system set 
    control_files=''+DATA/foo/controlfile/current.348.844443549'' 
    scope=spfile
  4. Mount the database with the newly restored controlfile, and perform a restore to the new location. The ‘set newname’ command changes the location RMAN restores the files to: the db_create_file_dest of the new instance. The ‘switch database’ command updates the controlfile to reflect the new file locations. When the restore is complete, use media recovery to apply the archived redo logs.
    RMAN> startup force mount
    Oracle instance started
    database mounted
    Total System Global Area     238034944 bytes
    Fixed Size                     2227136 bytes
    Variable Size                180356160 bytes
    Database Buffers              50331648 bytes
    Redo Buffers                   5120000 bytes
    
    RMAN> run {
    2> set newname for database to new;
    3> restore database;
    4> }
    
    executing command: SET NEWNAME
    Starting restore at 09-APR-14
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=23 device type=DISK
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00002 to +data
    channel ORA_DISK_1: reading from backup piece /mnt/bkup/0hp4sg98_1_1
    channel ORA_DISK_1: piece handle=/mnt/bkup/0hp4sg98_1_1 tag=TAG20140404T092808
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:01:35
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00001 to +data
    channel ORA_DISK_1: restoring datafile 00004 to +data
    channel ORA_DISK_1: restoring datafile 00005 to +data
    channel ORA_DISK_1: reading from backup piece /mnt/bkup/0ip4sgch_1_1
    channel ORA_DISK_1: piece handle=/mnt/bkup/0ip4sgch_1_1 tag=TAG20140404T092808
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:01:05
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00003 to +data
    channel ORA_DISK_1: reading from backup piece /mnt/bkup/0jp4sgfr_1_1
    channel ORA_DISK_1: piece handle=/mnt/bkup/0jp4sgfr_1_1 tag=TAG20140404T092808
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:55
    Finished restore at 09-APR-14
    
    RMAN> switch database to copy;
    
    datafile 1 switched to datafile copy "+DATA/foo/datafile/system.338.844531637"
    datafile 2 switched to datafile copy "+DATA/foo/datafile/sysaux.352.844531541"
    datafile 3 switched to datafile copy "+DATA/foo/datafile/undotbs1.347.844531691"
    datafile 4 switched to datafile copy "+DATA/foo/datafile/users.350.844531637"
    datafile 5 switched to datafile copy "+DATA/foo/datafile/soe.329.844531637"
    
    RMAN> recover database;
    
    Starting recover at 09-APR-14
    using channel ORA_DISK_1
    starting media recovery
    archived log file name=/u02/app/oracle/product/11.2.0/dbhome_1/dbs/arch1_25_841917031.dbf thread=1 sequence=25
    archived log file name=/u02/app/oracle/product/11.2.0/dbhome_1/dbs/arch1_26_841917031.dbf thread=1 sequence=26
    archived log file name=/u02/app/oracle/product/11.2.0/dbhome_1/dbs/arch1_27_841917031.dbf thread=1 sequence=27
    media recovery complete, elapsed time: 00:00:01
    Finished recover at 09-APR-14
    
    RMAN> exit
    
    Recovery Manager complete.
  5. Before opening the database, we need to re-create the controlfile so that we don’t step on any files belonging to the source database. The first step is to generate a “create controlfile” script, and to locate the trace file where it was written:
    $ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Wed Apr 16 10:56:28 2014
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    
    SQL> alter database backup controlfile to trace;
    Database altered.
    
    SQL> select tracefile
      2  from v$session s,
      3       v$process p
      4  where s.paddr = p.addr
      5  and s.audsid = sys_context('USERENV', 'SESSIONID');
    TRACEFILE
    ----------------------------------------------------------
    /u02/app/oracle/diag/rdbms/foo/foo/trace/foo_ora_19168.trc
    
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition
  6. Next, we need to edit the controlfile creation script so that all we have left is the “create controlfile … resetlogs” statement, and so that all file paths to the original database are removed or changed to reference the db_unique_name of the test database. Below is a pipeline of clumsy sed and grep commands I created that produces a script called create_foo_controlfile.sql. It should take care of most permutations of these trace controlfile scripts.
    $ sed -n '/CREATE.* RESETLOGS/,$p' /u02/app/oracle/diag/rdbms/foo/foo/trace/foo_ora_18387.trc | \
    > sed '/.*;/q' | \
    > sed 's/\(GROUP...\).*\( SIZE\)/\1\2/' | \
    > sed 's/orcl/foo/g' | \
    > sed 's/($//' | \
    > sed 's/[\)] SIZE/SIZE/' | \
    > grep -v "^    '" > create_foo_controlfile.sql

    If it doesn’t work for you, just edit the script from your trace file, so that you end up with something like this:

    CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS  ARCHIVELOG
        MAXLOGFILES 16
        MAXLOGMEMBERS 3
        MAXDATAFILES 100
        MAXINSTANCES 8
        MAXLOGHISTORY 292
    LOGFILE
      GROUP 1 
      SIZE 50M BLOCKSIZE 512,
      GROUP 2 
      SIZE 50M BLOCKSIZE 512,
      GROUP 3 
      SIZE 50M BLOCKSIZE 512
    -- STANDBY LOGFILE
    DATAFILE
      '+DATA/foo/datafile/system.338.845027673',
      '+DATA/foo/datafile/sysaux.347.845027547',
      '+DATA/foo/datafile/undotbs1.352.845027747',
      '+DATA/foo/datafile/users.329.845027673',
      '+DATA/foo/datafile/soe.350.845027673'
    CHARACTER SET WE8MSWIN1252
    ;
  7. The next step is to use the above script to open the database with the resetlogs option on a new OMF controlfile:
    $ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Wed Apr 16 10:56:28 2014
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    
    SQL> alter system reset control_files scope=spfile;
    System altered.
    
    SQL> startup force nomount
    ORACLE instance started.
    
    Total System Global Area  238034944 bytes
    Fixed Size                  2227136 bytes
    Variable Size             180356160 bytes
    Database Buffers           50331648 bytes
    Redo Buffers                5120000 bytes
    
    SQL> @create_foo_controlfile
    Control file created.
    
    SQL> select value from v$parameter where name = 'control_files';
    VALUE
    -------------------------------------------
    +DATA/foo/controlfile/current.265.845031651
    
    SQL> alter database open resetlogs;
    Database altered.
  8. Last but not least, don’t forget to provide a tempfile or two to the temporary tablespaces:
    SQL> alter tablespace temp
      2  add tempfile size 5G;
    Tablespace altered.
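As an extra sanity check, not part of the original steps, you can confirm that the freshly opened instance references only its own files and has not touched the source database. The queries below are a minimal sketch; every returned path should sit under the 'foo' directories, never under the source database's directories:

    SQL> select db_unique_name from v$database;
    SQL> select name from v$datafile;
    SQL> select member from v$logfile;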
Categories: DBA Blogs

The Growing Trend Toward Data Infrastructure Outsourcing

Pythian Group - Fri, 2014-04-11 07:45

Today’s blog post is the first of three in a series dedicated to data infrastructure outsourcing, with excerpts from our latest white paper.

Despite the strong push to outsource corporate functions that began more than two decades ago, many IT shops have been hesitant to outsource their data management requirements.

Generally, the more mission-critical the data, the more organizations have felt compelled to control it, assigning that responsibility to their brightest and most trusted in-house database experts. The reasoning has been that with greater control comes greater security.

That mindset is rapidly changing. Macroeconomic trends are putting mounting pressure on organizations to rethink the last bastion of IT in-housing—data infrastructure management—and instead look for flexible, cost-effective outsourcing solutions that can help them improve operational efficiency, optimize performance, and increase overall productivity.

This trend toward outsourcing to increase productivity, and not simply reduce costs, is supported by a recent Forrester Research report that highlights the key reasons companies  are looking for outside help: too many new technologies and data sources, and difficulty finding people with the skills and experience to optimize and manage them.

To learn how to develop a data infrastructure sourcing strategy, download the rest of our white paper, Data Infrastructure Outsourcing.

Categories: DBA Blogs

HeartBleed and Oracle

Fuad Arshad - Fri, 2014-04-11 07:23
There are a lot of people asking about Heartbleed and how it has impacted the web. Oracle has published MOS Note 1645479.1, which covers all the products impacted and if and when fixes will be available. The following blog post is also a good reference about the vulnerability: https://blogs.oracle.com/security/entry/heartbleed_cve_2014_0160_vulnerability
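For systems you manage yourself, a quick first check is to look at the OpenSSL build in use: only OpenSSL 1.0.1 through 1.0.1f are affected by CVE-2014-0160, and patched distribution packages normally mention the fix in their changelog. A minimal sketch (commands and package names vary per platform):

$ openssl version -a
$ # On RPM-based systems, check whether the installed package already contains the fix:
$ rpm -q --changelog openssl | grep -i CVE-2014-0160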


Service Provider initiated SSO on WLS11g using SAML2.0

Darwin IT - Fri, 2014-04-11 05:25
Introduction
At a recent customer I got the assignment to implement a SAML 2.0 configuration.

In this setup the customer is a Service Provider. They provide a student-administration application for the Dutch higher education sector, such as colleges and universities. Conventionally the application is implemented on premise, but they would like to move to a SaaS model. One institute is going to use the application from 'the cloud'. In the Dutch education sector, an organization called SurfConext serves as an authentication broker.

A good schematic explanation of the setup is in the Weblogic 11g docs:



When a user connects to the application, Weblogic finds that the user is not authenticated: it lacks a SAML2.0 token (2). So when configured correctly the browser is rerouted to SurfConext (3). On an authentication request SurfConext displays a so-called ‘Where Are You From’ (WAYF) page, on which a user can choose the institute to which he or she is connected. SurfConext then provides a means to enter the username and password (4). On submit SurfConext validates the credentials against the actual IdP, which is provided by the user’s institute (5). On a valid authentication, SurfConext provides a SAML2.0 token identifying the user with possible assertions (6). The page is refreshed and redirected to the landing page of the application (7).

For Weblogic, SurfConext is the Identity Provider, although in fact, based on the choice made on the WAYF page, it reroutes the authentication request to the IdP of the particular institute.

Unfortunately I did not find a how-to for this particular setup in the docs, although I found this. But I did find the following blog: https://blogs.oracle.com/blogbypuneeth/entry/steps_to_configure_saml_2, which helped me a lot. Basically the setup is only the service provider part of that description.

So let me walk you through it. This is a larger blog post; in fact, I copied larger parts from the configuration document I wrote for the customer.

Configure Service Provider
Pre-Requisites
To be able to test the setup against a test-IdP of SurfConext, the configured Weblogic server needs to be reachable from the internet. Appropriate firewall and proxy-server configuration needs to be done upfront to enable both SurfConext and a remote user to connect to the Weblogic Server.

All configuration regarding URLs needs to be done using the external URLs configured above.

A PC with a direct internet connection that can reach these same URLs is needed to test the configuration. Connecting a PC to the customer's intranet does provide internet access, but the internal network configuration prevented connecting to the Weblogic server using the remote URLs.

During the configuration a so-called SAML metadata file is created. SurfConext requests this file to get acquainted with the Service Provider. Because the configuration can change through reconfigurations, SurfConext requests it through an HTTPS URL. This URL needs to be configured and must be remotely reachable; an option is the htdocs folder of a webserver that is reachable over HTTPS. In other SAML2 setups you might need to upload the metadata file to the identity provider's server.

You also need the SAML metadata of SurfConext. It can be downloaded from: https://wiki.surfnet.nl/display/surfconextdev/Connection+metadata.
Update Application
The application needs to be updated and redeployed to use the weblogic authenticators instead of the native logon form. To do so, the web.xml needs to be updated. In the web.xml (in the WEB-INF of the application war file), look for the following part:
  <login-config>
    <auth-method>FORM</auth-method>
    <realm-name>jazn.com</realm-name>
    <form-login-config>
      <form-login-page>/faces/security/pages/Login.jspx</form-login-page>
      <form-error-page>/loginErrorServlet</form-error-page>
    </form-login-config>
  </login-config>
And replace it with:
  <login-config>
    <auth-method>BASIC</auth-method>
    <realm-name>myrealm</realm-name>
  </login-config>
Repackage and redeploy the application to weblogic.
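Repackaging and redeploying can be done from the console or scripted; the sketch below uses weblogic.Deployer, where the admin URL, credentials, application name and target are purely placeholder examples for this post:

$ # Update the web.xml inside the war, then redeploy the application.
$ jar uf YourApplication.war WEB-INF/web.xml
$ . $DOMAIN_HOME/bin/setDomainEnv.sh
$ java weblogic.Deployer -adminurl t3://adminhost:7001 -username weblogic -password welcome1 \
      -redeploy -name YourApplication -source YourApplication.war -targets ManagedServer1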
Add a new SAML2IdentityAsserter
Here we start with the first steps to configure Weblogic: create a SAML2IdentityAsserter on the Service Provider domain.
  1. Login to ServiceProvider domain - Weblogic console
  2. Navigate to “Security Realms”:
  3.  Click on ”myrealm” 
  4. Go to the tab  ”Providers–>Authentication” :
  5. Add a new “SAML2IdentityAsserter”
  6. Name it for example: “SurfConextIdentityAsserter”:
  7. Click Ok, Save and activate changes if you're in a production domain (I'm not going to repeat that every time again in the rest of this blog). 
  8. Bounce the domain (All WLServers including AdminServer)
Configure managed server to use SAML2 Service Provider
In this part the managed server(s) serving the application need to be configured for the so-called 'federated services'. They need to know how to behave as a SAML 2.0 Service Provider.
 So perform the following steps:
  1.  Navigate to the managed server, and select the “Federation Services–>SAML 2.0 Service Provider” sub tab:

  2. Edit the following settings:
  3. Field / Value:
     - Enabled: Check
     - Preferred Binding: POST
     - Default URL: http://hostname:portname/application-URI
       This URL should be accessible from outside the organization, that is from SurfConext.
  4. Click Save.
  5. Navigate to the managed server, and select the “Federation Services–>SAML 2.0 General” sub tab:
  6. Edit the following settings:
  7. Field / Value:
     - Replicated Cache Enabled: Uncheck or check if needed
     - Contact Person Given Name: e.g. Jean-Michel
     - Contact Person Surname: e.g. Jarre
     - Contact Person Type: choose one from the list, like 'technical'
     - Contact Person Company: e.g. Darwin-IT Professionals
     - Contact Person Telephone Number: e.g. 555-12345
     - Contact Person Email Address: info@hatseflats.com
     - Organization Name: e.g. Hatseflats B.V.
     - Organization URL: www.hatseflats.com
     - Published Site URL: http://www.hatseflats.com:7777/saml2
       This URL should be accessible from outside the organization, that is from SurfConext. The Identity Provider needs to be able to connect to it.
     - Entity ID: e.g. http://www.hatseflats.com
       SurfConext expects a URI with at least a colon (':'), usually the URL of the SP.
     - Recipient Check Enabled: Uncheck.
       When checked, Weblogic will check the responding URL against the URL in the original request. This could result in a '403 Forbidden' message.
     - Single Sign-on Signing Key Alias: demoidentity
       If signing is used, the alias of the proper private certificate in the keystore that is configured in WLS is to be provided.
     - Single Sign-on Signing Key Pass Phrase: DemoIdentityPassPhrase
     - Confirm Single Sign-on Signing Key Pass Phrase: DemoIdentityPassPhrase
  8. Save the changes and export the SAML metadata into an XML file:
    1. Restart the server.
    2. Click on 'Publish Meta Data'.
    3. Provide a valid path, like /home/oracle/Documents/... and click 'OK'.
    4. Copy this file to a location on an http-server that is remotely connectable through HTTPS and provide the URL to SurfConext.
Configure Identity Provider metadata on SAML Service Provider in Managed Server
Add a new “Web Single Sign-On Identity Provider Partner”, named for instance "SAML_SSO_SurfConext".
        1. In Admin Console navigate to the myrealm Security Realm and select the “Providers–>Authentication
        2. Select the SurfConextIdentityAsserter SAML2_IdentityAsserter and navigate to the “Management” tab:
        3. Add a new “Web Single Sign-On Identity Provider Partner
          1. Name it: SAML_SSO_SurfConext
          2. Select “SurfConext-metadata.xml”
          3. Click 'OK'.
        4. Edit the created SSO Identity Provider Partner “SAML_SSO_SurfConext” and Provide the following settings:
        5. Field / Value:
           - Name: SAML_SSO_SurfConext
           - Enabled: Check
           - Description: SAML Single Sign On partner SurfConext
           - Redirect URIs: /YourApplication-URI
             These are URIs relative to the root of the server.
Add SAMLAuthenticationProvider
In this section an Authentication provider is added.
        1. Navigate to the ‘Providers->Authentication’ sub tab of the ‘myrealm’ Security Realm:
        2. Add a new Authentication Provider. Name it: ‘SurfConextAuthenticator’ and select as type: 'SAMLAuthenticator'.
          Click on the new Authenticator and set the Control Flag to ‘SUFFICIENT’:
        3. Return to the authentication providers and click on 'Reorder'.
          Use the selection boxes and the arrow buttons to reorder the providers as follows: The SurfConext authenticator and Identity Asserter should be first in the sequence.
Set all other authentication providers to SUFFICIENT
The control flag of the Default Authenticator is by default set to ‘REQUIRED’. That means that for an authentication request this one must be executed. However, for the application we want the SAMLAuthenticator to be sufficient, so that the other authenticators need not be executed. So set these other ones (the DefaultAuthenticator and any others that exist) to ‘SUFFICIENT’ as well.
Enable debug on SAML
To enable debug messages on SAML, navigate to the 'Debug' tab of the Managed Server:
        Expand the nodes ‘weblogic -> security’. Check the node ‘Saml2’ and click 'Enable'. This will add SAML2 related logging during authentication processes to the server.log. To disable the logging, check the node or higher level nodes and click 'Disable'.
Deploy the Identity Name Mapper
SurfConext generates a userid for each connected user. SurfConext provides two options for this: a persistent userid throughout all sessions, or a userid per session. Either way, the userid is generated as a GUID that is not registered within the customer's application and on its own does not relate to known users in the application. The SAML token, however, also carries the username. To map this to the actual userid that Weblogic provides to the application, an IdentityMapper class is needed. The class implements a particular Weblogic interface and uses a custom principal class that implements a Weblogic interface as well. The implementation is pretty straightforward. I found an example that uses an extra bean for a Custom Principal. The IdentityMapper class is as follows:
        package nl.darwin-it.saml-example;

        import com.bea.security.saml2.providers.SAML2AttributeInfo;
        import com.bea.security.saml2.providers.SAML2AttributeStatementInfo;
        import com.bea.security.saml2.providers.SAML2IdentityAsserterAttributeMapper;
        import com.bea.security.saml2.providers.SAML2IdentityAsserterNameMapper;
        import com.bea.security.saml2.providers.SAML2NameMapperInfo;

        import java.security.Principal;

        import java.util.ArrayList;
        import java.util.Collection;
        import java.util.logging.Logger;

        import weblogic.logging.LoggingHelper;

        import weblogic.security.service.ContextHandler;


        public class SurfConextSaml2IdentityMapper implements SAML2IdentityAsserterNameMapper,
        SAML2IdentityAsserterAttributeMapper {
        public static final String ATTR_PRINCIPALS = "com.bea.contextelement.saml.AttributePrincipals";
        public static final String ATTR_USERNAME = "urn:mace:dir:attribute-def:uid";

        private Logger lgr = LoggingHelper.getServerLogger();
        private final String className = "SurfConextSaml2IdentityMapper";


        @Override
        public String mapNameInfo(SAML2NameMapperInfo saml2NameMapperInfo,
        ContextHandler contextHandler) {
        final String methodName = className + ".mapNameInfo";
        debugStart(methodName);
        String user = null;

        debug(methodName,
        "saml2NameMapperInfo: " + saml2NameMapperInfo.toString());
        debug(methodName, "contextHandler: " + contextHandler.toString());
        debug(methodName,
        "contextHandler number of elements: " + contextHandler.size());

        // getNames gets a list of ContextElement names that can be requested.
        String[] names = contextHandler.getNames();

        // For each possible element
        for (String element : names) {
        debug(methodName, "ContextHandler element: " + element);
        // If one of those possible elements has the AttributePrinciples
        if (element.equals(ATTR_PRINCIPALS)) {
        // Put the AttributesPrincipals into an ArrayList of CustomPrincipals
        ArrayList<CustomPrincipal> customPrincipals =
        (ArrayList<CustomPrincipal>)contextHandler.getValue(ATTR_PRINCIPALS);
        int i = 0;
        String attr;
        if (customPrincipals != null) {
        // For each AttributePrincipal in the ArrayList
        for (CustomPrincipal customPrincipal : customPrincipals) {
        // Get the Attribute Name and the Attribute Value
        attr = customPrincipal.toString();
        debug(methodName, "Attribute " + i + " Name: " + attr);
        debug(methodName,
        "Attribute " + i + " Value: " + customPrincipal.getCollectionAsString());
        // If the Attribute is "loginAccount"
        if (attr.equals(ATTR_USERNAME)) {
        user = customPrincipal.getCollectionAsString();
        // Remove the "@DNS.DOMAIN.COM" (case insensitive) and set the username to that string
        if (!user.equals("null")) {
        user = user.replaceAll("(?i)\\@CLIENT\\.COMPANY\\.COM", "");
        debug(methodName, "Username (from loginAccount): " + user);
        break;
        }
        }
        i++;
        }
        }

        // For some reason the ArrayList of CustomPrincipals was blank - just set the username to the Subject
        if (user == null || "".equals(user)) {
        user = saml2NameMapperInfo.getName(); // Subject = BRID

        debug(methodName, "Username (from Subject): " + user);
        }

        return user;
        }
        }

        // Just in case AttributePrincipals does not exist
        user = saml2NameMapperInfo.getName(); // Subject = BRID
        debug(methodName, "Username (from Subject): " + user);

        debugEnd(methodName);

        // Set the username to the Subject
        return user;

        // debug(methodName,"com.bea.contextelement.saml.AttributePrincipals: " + arg1.getValue(ATTR_PRINCIPALS));
        // debug(methodName,"com.bea.contextelement.saml.AttributePrincipals CLASS: " + arg1.getValue(ATTR_PRINCIPALS).getClass().getName());


        // debug(methodName,"ArrayList toString: " + arr2.toString());
        // debug(methodName,"Initial size of arr2: " + arr2.size());


        }
        /* public Collection<Object> mapAttributeInfo0(Collection<SAML2AttributeStatementInfo> attrStmtInfos, ContextHandler contextHandler) {
        final String methodName = className+".mapAttributeInfo0";
        if (attrStmtInfos == null || attrStmtInfos.size() == 0) {
        debug(methodName,"CustomIAAttributeMapperImpl: attrStmtInfos has no elements");
        return null;
        }

        Collection<Object> customAttrs = new ArrayList<Object>();

        for (SAML2AttributeStatementInfo stmtInfo : attrStmtInfos) {
        Collection<SAML2AttributeInfo> attrs = stmtInfo.getAttributeInfo();
        if (attrs == null || attrs.size() == 0) {
        debug(methodName,"CustomIAAttributeMapperImpl: no attribute in statement: " + stmtInfo.toString());
        } else {
        for (SAML2AttributeInfo attr : attrs) {
        if (attr.getAttributeName().equals("AttributeWithSingleValue")){
        CustomPrincipal customAttr1 = new CustomPrincipal(attr.getAttributeName(), attr.getAttributeNameFormat(),attr.getAttributeValues());
        customAttrs.add(customAttr1);
        }else{
        String customAttr = new StringBuffer().append(attr.getAttributeName()).append(",").append(attr.getAttributeValues()).toString();
        customAttrs.add(customAttr);
        }
        }
        }
        }
        return customAttrs;
        } */

        public Collection<Principal> mapAttributeInfo(Collection<SAML2AttributeStatementInfo> attrStmtInfos,
        ContextHandler contextHandler) {
        final String methodName = className + ".mapAttributeInfo";
        Collection<Principal> principals = null;
        if (attrStmtInfos == null || attrStmtInfos.size() == 0) {
        debug(methodName, "AttrStmtInfos has no elements");
        } else {
        principals = new ArrayList<Principal>();
        for (SAML2AttributeStatementInfo stmtInfo : attrStmtInfos) {
        Collection<SAML2AttributeInfo> attrs = stmtInfo.getAttributeInfo();
        if (attrs == null || attrs.size() == 0) {
        debug(methodName,
        "No attribute in statement: " + stmtInfo.toString());
        } else {
        for (SAML2AttributeInfo attr : attrs) {
        CustomPrincipal principal =
        new CustomPrincipal(attr.getAttributeName(),
        attr.getAttributeValues());
        /* new CustomPrincipal(attr.getAttributeName(),
        attr.getAttributeNameFormat(),
        attr.getAttributeValues()); */
        debug(methodName, "Add principal: " + principal.toString());
        principals.add(principal);
        }
        }
        }
        }
        return principals;
        }

        private void debug(String methodName, String msg) {
        lgr.fine(methodName + ": " + msg);
        }

        private void debugStart(String methodName) {
        debug(methodName, "Start");
        }

        private void debugEnd(String methodName) {
        debug(methodName, "End");
        }

        }
        The commented method ‘public Collection<Object> mapAttributeInfo0’ is left in the source as an example method. The CustomPrincipal bean:
        package nl.darwin-it.saml-example;

        import java.util.Collection;
        import java.util.Iterator;

        import weblogic.security.principal.WLSAbstractPrincipal;
        import weblogic.security.spi.WLSUser;


        public class CustomPrincipal extends WLSAbstractPrincipal implements WLSUser{

        private String commonName;
        private Collection collection;
        public CustomPrincipal(String name, Collection collection) {
        super();
        // Feed the WLSAbstractPrincipal.name. Mandatory
        this.setName(name);
        this.setCommonName(name);
        this.setCollection(collection);
        }

        public CustomPrincipal() {
        super();
        }

        public CustomPrincipal(String commonName) {
        super();
        this.setName(commonName);
        this.setCommonName(commonName);
        }

        public void setCommonName(String commonName) {
        // Feed the WLSAbstractPrincipal.name. Mandatory
        super.setName(commonName);
        this.commonName = commonName;
        System.out.println("Attribute: " + this.getName());
        // System.out.println("Custom Principle commonName is " + this.commonName);
        }

        public Collection getCollection() {
        return collection;
        }

        public String getCollectionAsString() {
        String collasstr;
        if(collection != null && collection.size()>0){
        for (Iterator iterator = collection.iterator(); iterator.hasNext();) {
        collasstr = (String) iterator.next();
        return collasstr;
        }
        }
        return "null";
        }

        public void setCollection(Collection collection) {
        this.collection = collection;
        // System.out.println("set collection in CustomPrinciple!");
        if(collection != null && collection.size()>0){
        for (Iterator iterator = collection.iterator(); iterator.hasNext();) {
        final String value = (String) iterator.next();
        System.out.println("Attribute Value: " + value);
        }
        }
        }

        @Override
        public int hashCode() {
        final int prime = 31;
        int result = super.hashCode();
        result = prime * result + ((collection == null) ? 0 : collection.hashCode());
        result = prime * result + ((commonName == null) ? 0 : commonName.hashCode());
        return result;
        }

        @Override
        public boolean equals(Object obj) {
        if (this == obj)
        return true;
        if (!super.equals(obj))
        return false;
        if (getClass() != obj.getClass())
        return false;
        CustomPrincipal other = (CustomPrincipal) obj;
        if (collection == null) {
        if (other.collection != null)
        return false;
        } else if (!collection.equals(other.collection))
        return false;
        if (commonName == null) {
        if (other.commonName != null)
        return false;
        } else if (!commonName.equals(other.commonName))
        return false;
        return true;
        }

        }
Package the classes as a Java archive (jar) and place it in a folder on the weblogic server, for instance $DOMAIN_HOME/lib. Although $DOMAIN_HOME/lib is on the classpath for many usages, for this usage the jar file is not picked up by the class loaders, probably due to the class-loader hierarchy. To have the jar file (SurfConextSamlIdentityMapper.jar) on the system classpath, add the complete path to the jar file to the classpath on the Startup tab of both the AdminServer and the Managed Server. The AdminServer is needed here because the class is configured through the Realm, and during that configuration the existence of the class is checked. Apparently it is also required to add weblogic.jar before SurfConextSamlIdentityMapper.jar in the startup classpath. Then restart the AdminServer as well as the managed servers.
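A minimal sketch of the compile and package step (the middleware home, source layout and target folders are assumptions for this example; adjust them to your installation):

$ export WL_JAR=$MW_HOME/wlserver_10.3/server/lib/weblogic.jar
$ mkdir -p classes
$ javac -cp $WL_JAR -d classes $(find src -name '*.java')
$ jar cf SurfConextSamlIdentityMapper.jar -C classes .
$ cp SurfConextSamlIdentityMapper.jar $DOMAIN_HOME/lib/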
Configure the Identity Name Mapper
Now the Identity Name Mapper class can be configured:
        1. In Admin Console navigate to the myrealm Security Realm and select the “Providers–>Authentication
        2. Select the SurfConextIdentityAsserter SAML2_IdentityAsserter and navigate to the “Management” tab:
        3. Edit the created SSO Identity Provider Partner “SAML_SSO_SurfConext”.

          Provide the following settings:
           - Identity Provider Name Mapper Class Name: nl.darwin-it.saml-example.SurfConextSaml2IdentityMapper
Test the application
At this point the application can be tested. Browse to the application from the externally connected PC using the remote URL, for instance: https://www.hatseflats.com:7777/YourApplication-URI. If all is well, the browser is redirected to SurfConext's Where-Are-You-From (WAYF) page. Choose the following provider:

Connect as ‘student1’ with password ‘student1’ (or one of the other test credentials like student2, student3; see https://wiki.surfnet.nl/display/surfconextdev/Test+and+Guest+Identity+Providers). After a successful logon, the browser should be redirected to the application. The chosen credential should of course be known as a userid in the application.
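You can also verify the redirect from the command line before testing in a browser; a rough check (using the example URL from above) is that an unauthenticated request is answered with a redirect towards the SurfConext/SAML endpoint:

$ curl -k -I https://www.hatseflats.com:7777/YourApplication-URI
# Expect an HTTP 302 whose Location header points at the identity provider.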
Conclusion
This is one of the bigger stories on this blog; I actually edited the configuration document as a blog entry. I hope you'll find it useful. With this blog you have a complete how-to for the Service Provider part of a Service Provider initiated SSO setup.

SAML2 seemed complicated to me at first. And under the covers it still might be. But it turns out that Weblogic 11g has a great implementation for it that is neatly configurable. It's a little pity that you need a mapper class for the identity mapping; it would be nice if you could simply configure which attribute value is returned as the userid. But the mapper class is not that complicated.

        OBIEE Security: Repositories and Three Layers of Security

        This blog series reviewing OBIEE security has to this point identified how users are defined and authenticated within WebLogic, the major security concerns with WebLogic and how application roles are defined and mapped to LDAP groups within Enterprise Manager. We will now review OBIEE authorization, how OBIEE determines what data users can see after they login. 

        The OBIEE Repository is comprised of three layers. A very simplistic summary is below:

        • Physical layer: Defines all database or data source connections (user id and passwords are entered and stored here), the physical table and columns, primary and foreign key relationships.  
• Business Model Mapping layer (BMM): Referencing the physical layer, this is where logical structures are built and aggregation rules are defined. The BMM is really the heart of an OBIEE application.
        • Presentation layer:  Referencing the BMM, this layer presents the tables and columns to end users. For example, remove unwanted columns or rename awkwardly named columns.
        Object and Data Level Security

        Object (Physical layer) and Data (BMM) level security is defined within the identity manager in the Repository. Object security can be set to either allow or deny access to a physical table or column. Data security allows rules to be applied to logical tables or columns (BMM layer). These rules can use static values as well as session variables.
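As an illustration, a data filter on a logical table can reference a session variable so that each user only sees rows for his or her own region. The expression below is a made-up example (the business model, column and variable names are not from the original post):

"Sales"."Fact Revenue"."Region" = VALUEOF(NQ_SESSION."USER_REGION")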

        Navigation:  Open identity manager within the RPD -> select user or role -> click on permissions

(Screenshots: Identity Manager, Data Filter, Object Filter, Presentation Layer Security Rule)

        If you have questions, please contact us at info@integrigy.com

         -Michael Miller, CISSP-ISSMP

Tags: Reference, Oracle Business Intelligence (OBIEE), Security Resource
        Categories: APPS Blogs, Security Blogs

        Big Data Oracle NoSQL in No Time - Increasing Throughput Read/Write Part 7

        Senthil Rajendran - Fri, 2014-04-11 04:12
        Big Data Oracle NoSQL in No Time - Increasing Throughput Read/Write Part 7

        Index
        Big Data Oracle NoSQL in No Time - Getting Started Part 1
        Big Data Oracle NoSQL in No Time - Startup & Shutdown Part 2
        Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3
        Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4
        Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5
        Big Data Oracle NoSQL in No Time - Smoke Testing Part 6

        Let us expand our environment.
If your NoSQL store has a write bottleneck, then adding a storage node would help.
If your NoSQL store has a read bottleneck, then increasing the replication factor would help.

Steps to make 3x4 (to increase the read throughput, by raising the replication factor)

        kv-> plan deploy-sn -dc dc1 -port 5300 -wait -host server4
        kv-> plan change-parameters -service sn4 -wait -params capacity=3
        kv-> topology clone -current -name 3x4
        kv-> topology change-repfactor -name 3x4 -pool AllStorageNodes -rf 4 -dc dc1
        kv-> topology preview -name 3x4
        kv-> plan deploy-topology -name 3x4 -wait



Steps to make 4x4 (to increase the write throughput, by redistributing over more shards)

        kv-> plan change-parameters -service sn1 -wait -params capacity=4
        kv-> plan change-parameters -service sn2 -wait -params capacity=4
        kv-> plan change-parameters -service sn3 -wait -params capacity=4
        kv-> plan change-parameters -service sn4 -wait -params capacity=4
        kv-> topology clone -current -name 4x4
        kv-> topology redistribute -name 4x4 -pool AllStorageNodes
        kv-> topology preview -name 4x4
        kv-> plan deploy-topology -name 4x4 -wait
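After the deploy completes, you can verify the result the same way as in the smoke test of Part 6: 'show topology' in the admin CLI summarises the shards and replicas, and ping should report every replication node as RUNNING. The host and port below are the ones used throughout this series:

kv-> show topology

$ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1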



        Install latest patch of APEX 4.2 (4.2.5)

        Dimitri Gielis - Fri, 2014-04-11 02:34
A few days ago Oracle brought out a new patch set for APEX 4.2; this will be the last release of this build, as the next version of APEX will be 5.0.
        If you already have APEX 4.2.x installed you can download a patch from support.oracle.com, the patch number is 17966818.
        If you have an earlier version of APEX you can download the full version of APEX and install that.
This patch set is no different from the others; it includes some bug fixes, updates to the packaged apps, and the introduction of some new apps. You can find the full patch set notes here.
        Installing the patch in my APEX 4.2.4 environment took less than 15 minutes and everything went fine. 
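To double-check that the instance is on the new version after patching, you can query the APEX release view from SQL*Plus (a quick sanity check; it should report a 4.2.5.x version):

SQL> select version_no from apex_release;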

I recommend everybody move to the latest version, as this is the final build of APEX 4.2.

Update 16-APR-2014: we actually hit one issue, which was fixed by Oracle today, so I would install this additional patch too. On support.oracle.com, search for Patch 18609856: APEX_WEB_SERVICE.CLOBBASE642BLOB CONVERTS INCORRECTLY.
        Categories: Development

        Big Data Oracle NoSQL in No Time - Smoke Testing Part 6

        Senthil Rajendran - Fri, 2014-04-11 00:07
        Big Data Oracle NoSQL in No Time - Smoke Testing Part 6

        Index
        Big Data Oracle NoSQL in No Time - Getting Started Part 1
        Big Data Oracle NoSQL in No Time - Startup & Shutdown Part 2
        Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3
        Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4
        Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5

Oracle NoSQL can be smoke tested in different ways, but the most common ones are the ping command and a simple Java program.
Customers can design their own smoke-testing programs as needed.

Let us compile the hello-world example from the documentation:
        $ export KVBASE=/oraclenosql/lab
        $ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
        $ cd $KVHOME
        $ javac -cp lib/kvclient.jar:examples examples/hello/*.java
        $ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
        Hello Big Data World!
        $

        With all the three storage nodes up and running the below is the output of ping command and the java program

        $ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
        $ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1
        Pinging components of store mystore based upon topology sequence #67
        mystore comprises 30 partitions and 3 Storage Nodes
        Storage Node [sn1] on server1:5000    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 255 haPort: 5011
                Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5013
                Rep Node [rg2-rn2]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5012
        Storage Node [sn2] on server2:5100    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg3-rn3]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5112
                Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 255 haPort: 5111
                Rep Node [rg2-rn1]      Status: RUNNING,MASTER at sequence number: 135 haPort: 5110
        Storage Node [sn3] on server3:5200    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg3-rn1]      Status: RUNNING,MASTER at sequence number: 135 haPort: 5210
                Rep Node [rg2-rn3]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5212
                Rep Node [rg1-rn3]      Status: RUNNING,REPLICA at sequence number: 255 haPort: 5211
        $
        $ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
        Hello Big Data World!
        $


Let us take down the third storage node. You will see the ping command confirming that the third storage node is unreachable, while the Java program still works fine with the remaining storage nodes.

        $ export KVHOME=$KVBASE//server3/oraclesoftware/kv-2.0.39
        $ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE//server3/storage
        $
        $ export KVHOME=$KVBASE//server1/oraclesoftware/kv-2.0.39
        $ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1
        Pinging components of store mystore based upon topology sequence #67
        mystore comprises 30 partitions and 3 Storage Nodes
        Storage Node [sn1] on server1:5000    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 255 haPort: 5011
                Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 137 haPort: 5013
                Rep Node [rg2-rn2]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5012
        Storage Node [sn2] on server2:5100    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg3-rn3]      Status: RUNNING,MASTER at sequence number: 137 haPort: 5112
                Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 255 haPort: 5111
                Rep Node [rg2-rn1]      Status: RUNNING,MASTER at sequence number: 135 haPort: 5110
        Storage Node [sn3] on server3:5200    Datacenter: datacenter1 [dc1] UNREACHABLE
                Rep Node [rg3-rn1]      Status: UNREACHABLE
                Rep Node [rg2-rn3]      Status: UNREACHABLE
                Rep Node [rg1-rn3]      Status: UNREACHABLE
        $
        $ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
        Hello Big Data World!
        $

Let us take down the second storage node. With this, only one storage node is up and two are down.
It is very clear from the Java program that the NoSQL store can no longer accept writes, because the default commit policy is simple majority, which requires two replicas.

        $ export KVHOME=$KVBASE//server2/oraclesoftware/kv-2.0.39
        $ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE//server2/storage
        $
        $ export KVHOME=$KVBASE//server1/oraclesoftware/kv-2.0.39
        $ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1
        Pinging components of store mystore based upon topology sequence #67
        mystore comprises 30 partitions and 3 Storage Nodes
        Storage Node [sn1] on server1:5000    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 257 haPort: 5011
                Rep Node [rg3-rn2]      Status: RUNNING,UNKNOWN at sequence number: 137 haPort: 5013
                Rep Node [rg2-rn2]      Status: RUNNING,UNKNOWN at sequence number: 135 haPort: 5012
        Storage Node [sn2] on server2:5100    Datacenter: datacenter1 [dc1] UNREACHABLE
                Rep Node [rg3-rn3]      Status: UNREACHABLE
                Rep Node [rg1-rn2]      Status: UNREACHABLE
                Rep Node [rg2-rn1]      Status: UNREACHABLE
        Storage Node [sn3] on server3:5200    Datacenter: datacenter1 [dc1] UNREACHABLE
                Rep Node [rg3-rn1]      Status: UNREACHABLE
                Rep Node [rg2-rn3]      Status: UNREACHABLE
                Rep Node [rg1-rn3]      Status: UNREACHABLE
        $
        $ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
        oracle.kv.DurabilityException: (JE 5.0.74) Commit policy: SIMPLE_MAJORITY required 2 replicas. But none were active with this master. (11.2.2.0.39)
        Fault class name: com.sleepycat.je.rep.InsufficientReplicasException
        Remote stack trace: com.sleepycat.je.rep.InsufficientReplicasException: (JE 5.0.74) Commit policy: SIMPLE_MAJORITY required 2 replicas. But none were active with this master.
        $

By bringing storage nodes 2 and 3 back up, our store is operational again.

        $ export KVHOME=$KVBASE//server3/oraclesoftware/kv-2.0.39
        $ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE//server3/storage &
        $ export KVHOME=$KVBASE//server2/oraclesoftware/kv-2.0.39
        $ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE//server2/storage &

        $ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1
        Pinging components of store mystore based upon topology sequence #67
        mystore comprises 30 partitions and 3 Storage Nodes
        Storage Node [sn1] on server1:5000    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg1-rn1]      Status: RUNNING,REPLICA at sequence number: 265 haPort: 5011
                Rep Node [rg3-rn2]      Status: RUNNING,MASTER at sequence number: 141 haPort: 5013
                Rep Node [rg2-rn2]      Status: RUNNING,REPLICA at sequence number: 141 haPort: 5012
        Storage Node [sn2] on server2:5100    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg3-rn3]      Status: RUNNING,REPLICA at sequence number: 141 haPort: 5112
                Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 265 haPort: 5111
                Rep Node [rg2-rn1]      Status: RUNNING,REPLICA at sequence number: 141 haPort: 5110
        Storage Node [sn3] on server3:5200    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg3-rn1]      Status: RUNNING,REPLICA at sequence number: 141 haPort: 5210
                Rep Node [rg2-rn3]      Status: RUNNING,MASTER at sequence number: 141 haPort: 5212
                Rep Node [rg1-rn3]      Status: RUNNING,MASTER at sequence number: 265 haPort: 5211
        $

        $ export KVHOME=$KVBASE//server1/oraclesoftware/kv-2.0.39
        $ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
        Hello Big Data World!
        $

        The Riley Family, Part III

        Chet Justice - Thu, 2014-04-10 20:44


        That's Mike and Lisa, hanging out at the hospital. Mike's in his awesome cookie monster pajamas and robe...must be nice, right? Oh wait, it's not. You probably remember why he's there, Stage 3 cancer. The joys.

        In October, we helped to send the entire family to Game 5 of the World Series (Cards lost, thanks Red Sox for ruining their night).

        In November I started a GoFundMe campaign, to date, with your help, we've raised $10,999. We've paid over 9 thousand dollars to the Riley family (another check to be cut shortly).

        In December, Mike had surgery. Details can be found here. Shorter: things went fairly well, then they didn't. Mike spent 22 days in the hospital and lost 40 lbs. He missed Christmas and New Year's at home with his family. But, as I've learned over the last 6 months, the Riley family really knows how to take things in stride.

        About 6 weeks ago Mike started round 2 of chemo, he's halfway through that one now. He complains (daily, ugh) about numbness, dizziness, feeling cold (he lives in St. Louis, are you sure it's not the weather?), and priapism (that's a lie...I hope).

        Mike being Mike though, barely a complaint (I'll let you figure out where I'm telling a lie).

        Four weeks ago, a chilly (65) Saturday night, Mike and Lisa call. "Hey, I've got some news for you."

        "Sweet," I think to myself. Gotta be good news.

        "Lisa was just diagnosed with breast cancer."

        WTF?

        ARE YOU KIDDING ME? (Given Mike's gallows humor, it's possible).

        "Nope. Stage 1. Surgery on April 2nd."

        FFS

        (Surgery was last week. It went well. No news on that front yet.)

        Talking to the two of them that evening you would have no idea they BOTH have cancer. Actually, one of my favorite stories of the year...the hashtag for the Riley Family campaign was #fmcuta. Fuck Mike's Cancer (up the ass). I thought that was hilarious, but I didn't think the Rileys would appreciate it. They did. They loved it. I still remember Lisa's laugh when I first suggested it. They've dropped the latest bad news and Lisa is like, "Oh, wait until you hear this. I have a hashtag for you."

        "What is it?" (I'm thinking something very...conservative. Not sure why, I should know better by now).

        #tna

        I think about that for about .06 seconds. Holy shit! Did you just say tna? Like "tits and ass?"

        (sounds of Lisa howling in the background).

        Awesome. See what I mean? Handling it in stride.

        "We're going to need a bigger boat." All I can think about now is, "what can we do now?"

        First, I raised the campaign goal to 50k. This might be ambitious, that's OK, cancer treatments are expensive enough for one person, and 10K (the original amount) was on the low side. So...50K.

        Second, Scott Spendolini created a very cool APEX app, ostensibly called the Riley Support Group (website? gah). It's a calendar/scheduling app that allows friends and family to coordinate things like meals, young human (children) care and other things that most of us probably take for granted. Pretty cool stuff. For instance, Tim Gorman provides pizza on Monday nights (dinner from Pizza Hut...1 - large hand-tossed cheese lovers, 1 - large thin-crispy pepperoni, 1 - 4x pepperoni rolls, 1 - cheesesticks).

        Third. There is no third.

        So many of you have donated your hard earned cash to the Riley family, they are incredibly humbled by, and grateful for, everyone's generosity. They aren't out of the woods yet. Donate more. Please. If you can't donate, see if there's something you can help out with (hit me up for details, Tim lives in CO, he's not really close). If you can't do either of those things, send them your prayers or your good thoughts. Any and all help will be greatly appreciated.
        Categories: BI & Warehousing

        How to install and manage a Kerberos Server

        Yann Neuhaus - Thu, 2014-04-10 19:04


        For some time now, I have been working on how to set up a Single Sign-On (SSO) solution in my company. As a big fan of Open Source solutions, I have obviously proposed the implementation of a solution based on Kerberos. What I mean by that is a solution based on the true Kerberos, i.e. MIT Kerberos. Indeed, Kerberos was originally a research project at the Massachusetts Institute of Technology (MIT) in the early 1980s.

        Before starting this kind of project, it's important to clearly define and have in mind the following points:

        • Architecture of the enterprise
        • Operating systems used by end users
        • Operating systems used by applications which must be kerberized
        • Is it difficult to kerberize these applications?

        The answers to these questions provide insight into which types of solutions are possible. For example, if there are no restrictions on which operating system an end user can use (Windows, one of the various Linux distributions, ...), then the introduction of a solution based on a Linux Kerberos could be a good idea. That's why in this blog, I will first explain how to install a MIT Kerberos Server. In the second part, I will focus on how to manage this Server.

         

        1. Install MIT Kerberos

        It's important to note that the server and the client share the same release and that the MIT Kerberos server can only be installed on a Unix-like operating system. The Mac release has been available as part of Mac OS X since version 10.3 (the current release is Mavericks: 10.9). The Key Distribution Center (KDC) is the Kerberos Server where all identities (users, computers and kerberized applications) will be stored.

        For this installation, let's define the following properties/variables:

        • example.com = the DNS Domain
        • EXAMPLE.COM = the KDC REALM, which should be the DNS Domain in UPPERCASE. In case there is more than one KDC realm, all names must be unique and self-descriptive
        • kdc01oel.example.com = the FQDN of the KDC
        • 192.168.1.2 = the IP of kdc01oel.example.com

        So let's begin the installation. Obviously, the first thing to do is to download the current release of the MIT Kerberos distribution for the target operating system. This can be done at the following URL: http://web.mit.edu/kerberos/dist/index.html. The current Linux release is krb5-1.12.1-signed.tar:

        [root@oel opt]# wget http://web.mit.edu/kerberos/dist/krb5/1.12/krb5-1.12.1-signed.tar
        --2014-04-01 14:00:28--  http://web.mit.edu/kerberos/dist/krb5/1.12/krb5-1.12.1-signed.tar
        Resolving web.mit.edu... 23.58.214.151
        Connecting to web.mit.edu|23.58.214.151|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 11950080 (11M) [application/x-tar]
        Saving to: “krb5-1.12.1-signed.tar”

        100%[===============================================>] 11,950,080  1.52M/s   in 7.3s
        2014-04-01 14:00:38 (1.56 MB/s) - “krb5-1.12.1-signed.tar” saved [11950080/11950080]
        [root@oel opt]# tar -xvf krb5-1.12.1-signed.tar
        krb5-1.12.1.tar.gz
        krb5-1.12.1.tar.gz.asc

        As you can see, this file is signed and you could (should) verify the integrity and identity of the software. This can be done, for example, using GNU Privacy Guard (this needs another file that can be found on the MIT Kerberos download page):

        [root@oel opt]# gpg --verify krb5-1.12.1.tar.gz.asc
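
        Note that this verification only succeeds if the MIT Kerberos signing key is already present in your GPG keyring. If it isn't, something along these lines would import it first (the keyserver and the key ID placeholder below are illustrative assumptions, not taken from the MIT download page):

        [root@oel opt]# gpg --keyserver pgp.mit.edu --recv-keys <MIT-KERBEROS-KEY-ID>
        [root@oel opt]# gpg --verify krb5-1.12.1.tar.gz.asc krb5-1.12.1.tar.gz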

        After that, just extract the MIT Kerberos source code and build it:

        [root@oel opt]# tar -zxf krb5-1.12.1.tar.gz
        [root@oel opt]# cd krb5-1.12.1/src/
        [root@oel src]# ./configure
        ...
        [root@oel src]# yum install *yacc*
        ...
        [root@oel src]# make
        ...
        [root@oel src]# make install
        ...

        At this step, Kerberos should be installed properly and the binaries, libraries and the documentation should be under /usr/local. The default location is sufficient in almost all cases:

        [root@oel src]# krb5-config --all
        Version:     Kerberos 5 release 1.12.1
        Vendor:      Massachusetts Institute of Technology
        Prefix:      /usr/local
        Exec_prefix: /usr/local

        Now that Kerberos is installed properly, the next step is to configure it. This is done through a configuration file named krb5.conf:

        [root@oel src]# vi /etc/krb5.conf

        [libdefaults]
          default_realm = EXAMPLE.COM
          forwardable = true
          proxiable = true

        [realms]
          EXAMPLE.COM = {
            kdc = kdc01oel.example.com:88
            admin_server = kdc01oel.example.com:749
            default_domain = example.com
          }

        [domain_realm]
          .example.com = EXAMPLE.COM
          example.com = EXAMPLE.COM

        [logging]
          kdc = FILE:/var/log/krb5kdc.log
          admin_server = FILE:/var/log/kadmin.log
          default = FILE:/var/log/krb5lib.log

        To avoid hostname resolution issues, the file /etc/hosts must contain the fully qualified domain name of the server as well as the IP address:

        [root@oel src]# vi /etc/hosts

        127.0.0.1         localhost   kdc01oel
        192.168.1.2       kdc01oel.example.com   kdc01oel
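
        A quick sanity check (not part of the original steps, but harmless) is to make sure the name resolves to the expected address:

        [root@oel src]# getent hosts kdc01oel.example.com
        192.168.1.2     kdc01oel.example.com   kdc01oel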

        The next thing to do is to create the realm and the KDC database. Let's begin with the creation of the database parent folder:

        [root@oel src]# cd /usr/local
        [root@oel local]# mkdir /usr/local/var
        [root@oel local]# mkdir /usr/local/var/krb5kdc
        [root@oel local]# chmod 700 /usr/local/var/krb5kdc

        The file krb5.conf (just above) is the generic Kerberos configuration file, but the KDC also has its own configuration file (kdc.conf). Create this file and populate it as follows:

        [root@oel local]# cd /usr/local/var/krb5kdc/
        [root@oel krb5kdc]# vi kdc.conf

        [kdcdefaults]
          kdc_ports = 749,88

        [realms]
          EXAMPLE.COM = {
            database_name = /usr/local/var/krb5kdc/principal
            admin_keytab = /usr/local/var/krb5kdc/kadm5.keytab
            acl_file = /usr/local/var/krb5kdc/kadm5.acl
            key_stash_file = /usr/local/var/krb5kdc/.k5.EXAMPLE.COM
            kdc_ports = 749,88
            max_life = 10h 0m 0s
            max_renewable_life = 7d 0h 0m 0s
          }

        So let's create the Kerberos database using this configuration file:

        [root@oel krb5kdc]# /usr/local/sbin/kdb5_util create -s
        Loading random data
        Initializing database '/usr/local/var/krb5kdc/principal' for realm 'EXAMPLE.COM',
        master key name 'K/M@EXAMPLE.COM'
        You will be prompted for the database Master Password.
        It is important that you NOT FORGET this password.
        Enter KDC database master key:
        Re-enter KDC database master key to verify:
        [root@oel krb5kdc]#

        If there is any error at this point, it is most likely due to a misconfiguration of the /etc/krb5.conf file or because Kerberos can't resolve the hostname (the /etc/hosts file isn't configured properly).

        This finally concludes the first part, about the installation of the MIT Kerberos Server.


        2. Manage the KDC

        For this part, I assume that the KDC is set up according to what I've explained above. In the previous part, I only showed how to install the KDC, but it isn't actually running yet. So the first thing to do is to configure who will be able to connect to the KDC (that means obtaining a ticket) and with which permissions.

        To enter the KDC administration console, use kadmin.local (for the local machine only):

        [root@oel krb5kdc]# /usr/local/sbin/kadmin.local
        Authenticating as principal root/admin@EXAMPLE.COM with password.
        kadmin.local:

        Once in kadmin.local, several commands can be used to manage the KDC. The following command lists them all:

        kadmin.local:  ?
        Available kadmin.local requests:

        add_principal, addprinc, ank
                                 Add principal
        delete_principal, delprinc
                                 Delete principal
        modify_principal, modprinc
                                 Modify principal
        rename_principal, renprinc
                                 Rename principal
        change_password, cpw     Change password
        get_principal, getprinc  Get principal
        list_principals, listprincs, get_principals, getprincs
                                 List principals
        add_policy, addpol       Add policy
        modify_policy, modpol    Modify policy
        delete_policy, delpol    Delete policy
        get_policy, getpol       Get policy
        list_policies, listpols, get_policies, getpols
                                 List policies
        get_privs, getprivs      Get privileges
        ktadd, xst               Add entry(s) to a keytab
        ktremove, ktrem          Remove entry(s) from a keytab
        lock                     Lock database exclusively (use with extreme caution!)
        unlock                   Release exclusive database lock
        purgekeys                Purge previously retained old keys from a principal
        get_strings, getstrs     Show string attributes on a principal
        set_string, setstr       Set a string attribute on a principal
        del_string, delstr       Delete a string attribute on a principal
        list_requests, lr, ?     List available requests.
        quit, exit, q            Exit program.

        So, for example, let's create two principals: one with administrator capabilities (xxx/admin) and another one without:

        kadmin.local:  addprinc mpatou/admin
        WARNING: no policy specified for mpatou/admin@EXAMPLE.COM; defaulting to no policy
        Enter password for principal "mpatou/admin@EXAMPLE.COM":
        Re-enter password for principal "mpatou/admin@EXAMPLE.COM":
        Principal "mpatou/admin@EXAMPLE.COM" created.

        kadmin.local:  addprinc mpatou
        WARNING: no policy specified for mpatou@EXAMPLE.COM; defaulting to no policy
        Enter password for principal "mpatou@EXAMPLE.COM":
        Re-enter password for principal "mpatou@EXAMPLE.COM":
        Principal "mpatou@EXAMPLE.COM" created.

        With a new "listprincs", the two new principals should be displayed, but for now mpatou/admin has no administrative access because this account isn't declared in the access control list. In the ACL file, permissions can be defined by using the following characters:

        • A = Addition of users or policies into the KDC database
        • D = Deletion of users or policies from the KDC database
        • M = Modification of users or policies in the KDC database
        • C = Changing principals' passwords in the KDC database
        • I = Inquiries into the database, to list principal information
        • L = Listing of the principals contained in the database
        • * = Grants the user all of the above permissions

        Moreover, the wildcard (*) can be used to match more than one user. For example, */admin will match all administrative accounts. Let's create the ACL file as defined in the KDC configuration file (kdc.conf):

        kadmin.local:  exit
        [root@oel krb5kdc]# vi /usr/local/var/krb5kdc/kadm5.acl
        */admin@EXAMPLE.COM        ADMCIL
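
        As an additional illustration (the helpdesk principal below is hypothetical and isn't created anywhere in this post), an entry granting only password changes and inquiries would look like this:

        helpdesk/admin@EXAMPLE.COM        CI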

        So there is kadmin.local for local administration, but there is also a remote administration console, which is kadmin. This remote access can't be used for now because it needs a file that doesn't exist yet. This file (a "keytab") stores a principal and an encryption key derived from the principal's password. It can be used to log into Kerberos without being prompted for a password, which is why it is useful for all kerberized applications.

        [root@oel krb5kdc]# /usr/local/sbin/kadmin.local
        Authenticating as principal root/admin@EXAMPLE.COM with password.
        kadmin.local:  ktadd -k /usr/local/var/krb5kdc/kadm5.keytab kadmin/admin kadmin/changepw
        Entry for principal kadmin/admin with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
        Entry for principal kadmin/admin with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
        Entry for principal kadmin/admin with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
        Entry for principal kadmin/admin with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
        Entry for principal kadmin/changepw with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
        Entry for principal kadmin/changepw with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
        Entry for principal kadmin/changepw with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
        Entry for principal kadmin/changepw with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
        kadmin.local:  exit
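
        If you want to double-check what was written to the keytab, klist can read it directly (shown here as a quick sanity check; the exact entries depend on your enabled encryption types):

        [root@oel krb5kdc]# klist -kt /usr/local/var/krb5kdc/kadm5.keytab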

        The location of this kadm5.keytab is also defined in the kdc.conf file. Now the KDC server processes are ready to start:

        [root@oel krb5kdc]# /usr/local/sbin/krb5kdc
        [root@oel krb5kdc]# /usr/local/sbin/kadmind

        If there is no error, then the KDC should be running and ready to reply to any client with a valid principal. The easiest way to test it is to try to obtain a TGT (Ticket Granting Ticket) using the kinit command:

        [root@oel krb5kdc]# cd /usr/local/bin
        [root@oel bin]# klist
        klist: Credentials cache file '/tmp/krb5cc_0' not found
        [root@oel bin]# kinit mpatou
        Password for mpatou@EXAMPLE.COM:
        [root@oel bin]# klist
        Ticket cache: FILE:/tmp/krb5cc_0
        Default principal: mpatou@EXAMPLE.COM

        Valid starting       Expires              Service principal
        04/03/2014 09:54:48  04/03/2014 19:54:48  krbtgt/EXAMPLE.COM@EXAMPLE.COM
            renew until 04/04/2014 09:54:47
        [root@oel bin]# kdestroy
        [root@oel bin]# klist
        klist: Credentials cache file '/tmp/krb5cc_0' not found

        The klist command can be used to list all existing tickets, whereas kdestroy is used to remove them. The KDC is now fully operational and some possible additional steps can be done (e.g. setting up slave KDCs).
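
        Since kadmind is also running, the remote administration console kadmin (as opposed to kadmin.local) should now accept connections as well. A minimal sketch, using the administrative principal created above (the kadmin binary lives under the install prefix, in bin or sbin depending on the release; output abridged):

        [root@oel bin]# kadmin -p mpatou/admin
        Password for mpatou/admin@EXAMPLE.COM:
        kadmin:  listprincs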

        This finally concludes this blog about how to install a MIT Kerberos Server. If you need more information about Kerberos (the MIT, Heimdal or Active Directory implementations), I strongly suggest you read the book "Kerberos: The Definitive Guide" by Jason Garman. This book was for me the best source of knowledge on this subject.

        Oracle Priority Service Infogram for 10-APR-2014

        Oracle Infogram - Thu, 2014-04-10 16:35

        Security
        The Heartbleed vulnerability is causing a major stir. Here are a couple of articles to help clarify what you should do and when:
        From lifehacker: LastPass Now Tells You Which Heartbleed-Affected Passwords to Change.
        and another good article from Mashable: The Heartbleed Hit List: The Passwords You Need to Change Right Now.
        Hadoop
        Using Sqoop for Loading Oracle Data into Hadoop on the BigDataLite VM, from Rittman Mead.
        RDBMS
        From The Oracle Instructor: Initialization Parameter Handling for Pluggable Databases in #Oracle 12c.
        12c New Feature: Limit the PGA, from Peter's DBA Blog.
        From The ORACLE-BASE Blog: Online Move Datafile in Oracle 12c
        Oracle Internals
        From the internals guru Tanel Poder: Oracle X$ tables – Part 1 – Where do they get their data from?
        High Availability
        Improving Performance via Parallelism in Oracle Event Processing Pipelines with High-Availability, from the Oracle A-TEAM Chronicles.
        Linux
        From the dbwhisperer: Multi-threaded Oracle 12c architecture on Linux.
        OIM
        OIM 11g R2 Self Registration with CAPTCHA, from Oracle A-TEAM Chronicles.
        Middleware
        From Proactive Support - WebCenter Content: Free Learning Sessions on Oracle Fusion Middleware.
        SOA
        SOA Governance Through Enterprise Architecture, from the SOA & BPM Partner Community Blog.
        Business
        Presentations are serious business, which is why you have to look serious, but relaxed: 10 Body Language Tips Every Speaker Must Know (Infographic), from Entrepreneur.
        EBS
        Over at the Oracle E-Business Suite Support Blog:
        Should You Apply The R12.1.3+ EBS Wide RPC or Wait for a Payables Specific RPC?
        Learn All About Channel Revenue Management Rebates
        Webcast: E-Business Tax Series, Session 2 – Basic Overview, Regime To Rate Setup & Transactional Flow (US Based Setup) From A Financials Perspective
        Webcast: E-Business Tax Series, Session 1 – Prerequisites for Regime to Rate Flow Creation
        Asset Tracking: How to Capitalize Serialized Normal Items Through Sales Order Shipment
        How Can One Disable Continuous Price Breaks in R12?

        ‘Heartbleed’ (CVE-2014-0160) Vulnerability in OpenSSL

        Oracle Security Team - Thu, 2014-04-10 12:44

        Hi, this is Eric Maurice.

        A vulnerability affecting certain versions of the OpenSSL libraries was recently publicly disclosed.  This vulnerability has received the nickname ‘Heartbleed’ and the CVE identifier CVE-2014-0160. 

        Oracle is investigating the use of the affected OpenSSL libraries in Oracle products and solutions, and will provide mitigation instructions when available for these affected Oracle products. 

        Oracle recommends that customers refer to the 'OpenSSL Security Bug - Heartbleed CVE-2014-0160' page on the Oracle Technology Network (OTN) for information about affected products, availability of fixes and other mitigation instructions.  This page will be periodically updated as Oracle continues its assessment of the situation.   Oracle customers can also open a support ticket with My Oracle Support if they have additional questions or concerns.

         

        For More Information:

        The CVE-2014-0160 page on OTN is located at http://www.oracle.com/technetwork/topics/security/opensslheartbleedcve-2014-0160-2188454.html

        The Heartbleed web site is located at http://heartbleed.com/.  This site is not affiliated with Oracle and provides a list of affected OpenSSL versions.

        The My Oracle Support portal can be accessed by visiting https://support.oracle.com

         

        Addendum: Presentations from the DB Innovation Day Event

        Jean-Philippe Pinte - Thu, 2014-04-10 09:52
        You will find below the presentations given during the Database Innovation Day event:

        How to restrict data coming back from a SOAP Call

        Angelo Santagata - Thu, 2014-04-10 09:51

        In Sales Cloud, a big positive of the SOAP interface is that lots of related data is returned by issuing a single query, including master-detail data (i.e. multiple email addresses in contacts). However, these payloads can be very large: in my system, querying a single person returns 305 lines(!), whereas I only want the first name, last name and party id, which is 3 lines per record.

        Solution

        For each findCriteria element you can add multiple <findAttribute> elements indicating which attributes you want returned. By default, if you provide <findAttribute> entries then only those attributes are returned; this behaviour can be reversed by setting the <excludeAttribute> element to true.


        Example 1: only retrieving PersonLastName, PersonFirstName and PartyId

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/cdm/foundation/parties/personService/applicationModule/types/" xmlns:typ1="http://xmlns.oracle.com/adf/svc/types/">
           <soapenv:Header/>
           <soapenv:Body>
              <typ:findPerson>
                 <typ:findCriteria xsi:type="typ1:FindCriteria" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
                    <typ1:fetchStart>0</typ1:fetchStart>
                    <typ1:fetchSize>100</typ1:fetchSize>
                    <typ1:findAttribute>PersonLastName</typ1:findAttribute>
                    <typ1:findAttribute>PersonFirstName</typ1:findAttribute>
                    <typ1:findAttribute>PartyId</typ1:findAttribute>
                    <typ1:excludeAttribute>false</typ1:excludeAttribute>
                 </typ:findCriteria>
              </typ:findPerson>
           </soapenv:Body>
        </soapenv:Envelope>

        Notes

        findAttribute entries work on the level-1 attributes of that findCriteria; the value can be an attribute or an element.

        If you want to restrict sub-elements, you can use a childFindCriteria for that sub-element and then add findAttribute entries within it.

        Example 2: only retrieving PartyId, and within the Email element only EmailAddress

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/cdm/foundation/parties/personService/applicationModule/types/" xmlns:typ1="http://xmlns.oracle.com/adf/svc/types/">
           <soapenv:Header/>
           <soapenv:Body>
              <typ:findPerson>
                 <typ:findControl>
                    <typ1:retrieveAllTranslations/>
                 </typ:findControl>
                 <typ:findCriteria xsi:type="typ1:FindCriteria" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
                    <typ1:fetchStart>0</typ1:fetchStart>
                    <typ1:fetchSize>100</typ1:fetchSize>
                    <typ1:findAttribute>PartyId</typ1:findAttribute>
                    <typ1:findAttribute>Email</typ1:findAttribute>
                    <typ1:excludeAttribute>false</typ1:excludeAttribute>
                    <typ1:childFindCriteria>
                       <typ1:fetchStart>0</typ1:fetchStart>
                       <typ1:fetchSize>10</typ1:fetchSize>
                       <typ1:findAttribute>EmailAddress</typ1:findAttribute>
                       <typ1:excludeAttribute>false</typ1:excludeAttribute>
                       <typ1:childAttrName>Email</typ1:childAttrName>
                    </typ1:childFindCriteria>
                 </typ:findCriteria>
              </typ:findPerson>
           </soapenv:Body>
        </soapenv:Envelope>

        Notes

        For a childFindCriteria to work you must also query the child element in the parent findCriteria, which is why "Email" is referenced in a findAttribute of the parent findCriteria in Example 2.
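
        As a final illustration of the excludeAttribute behaviour mentioned earlier, the sketch below (not taken from a tested payload) would return everything for the person except the attributes listed (here the Email element), by flipping excludeAttribute to true:

        <typ:findCriteria xsi:type="typ1:FindCriteria" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
           <typ1:fetchStart>0</typ1:fetchStart>
           <typ1:fetchSize>100</typ1:fetchSize>
           <typ1:findAttribute>Email</typ1:findAttribute>
           <typ1:excludeAttribute>true</typ1:excludeAttribute>
        </typ:findCriteria>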

        What Happens in Vegas, Doesn’t Stay in Vegas – Collaborate 14

        Pythian Group - Thu, 2014-04-10 08:04

        IOUG’s Collaborate 14 is star-studded this year, with the Pythian team illuminating various tracks in the presentation rooms. It’s acting like a magnet in the expo halls of The Venetian for data lovers. It’s a kind of rendezvous for those who love their data. So if you want your data to be loved, feel free to drop by Pythian booth 1535.

        Leading from the front is Paul Vallée with an eye-catching title, with real world gems. Then there is Michael Abbey’s rich experience, Marc Fielding’s in-depth technology coverage and Vasu’s forays into Apps Database Administration. There is my humble attempt at Exadata IORM, and Rene’s great helpful tips, and Alex Gorbachev’s mammoth coverage of mammoth data – it’s all there with much more to learn, share and know.

        Vegas Strip is buzzing with the commotion of Oracle. Even the big rollers are turning their necks to see what the fuss is about. Poker faces have broken into amazed grins, and even the weird, kerbside card distribution has stopped. Everybody is focused on the pleasures of Oracle technologies.

        Courtesy of social media, all of this fun isn’t confined to Vegas. You can follow @Pythian on Twitter to know it all, live, and in real time.

        Come Enjoy!

        Categories: DBA Blogs

        Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5

        Senthil Rajendran - Thu, 2014-04-10 04:23
        Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5
        Index
        Big Data Oracle NoSQL in No Time - Getting Started Part 1
        Big Data Oracle NoSQL in No Time - Startup & Shutdown Part 2
        Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3
        Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4

        With the current 3x1 setup, the NoSQL store is write-efficient. In order to make it read-efficient, the replication factor has to be increased, which internally creates more copies of the data to improve read performance.

        In the scenario below, we are going to increase the replication factor of the existing topology from 1 to 3 to make it read-friendly.


        export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
        java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host server1
        kv-> show topology
        store=mystore  numPartitions=30 sequence=60
          dc=[dc1] name=datacenter1 repFactor=1

          sn=[sn1]  dc=dc1 server1:5000 capacity=1 RUNNING
            [rg1-rn1] RUNNING
                  No performance info available
          sn=[sn2]  dc=dc1 server2:5100 capacity=1 RUNNING
            [rg2-rn1] RUNNING
                  No performance info available
          sn=[sn3]  dc=dc1 server3:5200 capacity=1 RUNNING
            [rg3-rn1] RUNNING
                  No performance info available

          shard=[rg1] num partitions=10
            [rg1-rn1] sn=sn1
          shard=[rg2] num partitions=10
            [rg2-rn1] sn=sn2
          shard=[rg3] num partitions=10
            [rg3-rn1] sn=sn3

        kv-> plan change-parameters -service sn1 -wait -params capacity=3
        Executed plan 8, waiting for completion...
        Plan 8 ended successfully
        kv-> plan change-parameters -service sn2 -wait -params capacity=3
        Executed plan 9, waiting for completion...
        Plan 9 ended successfully
        kv-> plan change-parameters -service sn3 -wait -params capacity=3
        Executed plan 10, waiting for completion...
        Plan 10 ended successfully
        kv-> topology clone -current -name 3x3
        Created 3x3
        kv-> topology change-repfactor -name 3x3 -pool AllStorageNodes -rf 3 -dc dc1
        Changed replication factor in 3x3
        kv-> topology preview -name 3x3
        Topology transformation from current deployed topology to 3x3:
        Create 6 RNs

        shard rg1
          2 new RNs : rg1-rn2 rg1-rn3
        shard rg2
          2 new RNs : rg2-rn2 rg2-rn3
        shard rg3
          2 new RNs : rg3-rn2 rg3-rn3

        kv-> plan deploy-topology -name 3x3 -wait
        Executed plan 11, waiting for completion...
        Plan 11 ended successfully
        kv-> show topology
        store=mystore  numPartitions=30 sequence=67
          dc=[dc1] name=datacenter1 repFactor=3

          sn=[sn1]  dc=dc1 server1:5000 capacity=3 RUNNING
            [rg1-rn1] RUNNING
                  No performance info available
            [rg2-rn2] RUNNING
                  No performance info available
            [rg3-rn2] RUNNING
                  No performance info available
          sn=[sn2]  dc=dc1 server2:5100 capacity=3 RUNNING
            [rg1-rn2] RUNNING
                  No performance info available
            [rg2-rn1] RUNNING
                  No performance info available
            [rg3-rn3] RUNNING
                  No performance info available
          sn=[sn3]  dc=dc1 server3:5200 capacity=3 RUNNING
            [rg1-rn3] RUNNING
                  No performance info available
            [rg2-rn3] RUNNING
                  No performance info available
            [rg3-rn1] RUNNING
                  No performance info available

          shard=[rg1] num partitions=10
            [rg1-rn1] sn=sn1
            [rg1-rn2] sn=sn2
            [rg1-rn3] sn=sn3
          shard=[rg2] num partitions=10
            [rg2-rn1] sn=sn2
            [rg2-rn2] sn=sn1
            [rg2-rn3] sn=sn3
          shard=[rg3] num partitions=10
            [rg3-rn1] sn=sn3
            [rg3-rn2] sn=sn1
            [rg3-rn3] sn=sn2

        kv->


        So what have we done?

        plan change-parameters -service sn1 -wait -params capacity=3
        plan change-parameters -service sn2 -wait -params capacity=3
        plan change-parameters -service sn3 -wait -params capacity=3

        We are increasing the capacity from 1 to 3 with the change-parameters command.

        topology clone -current -name 3x3

        We are cloning the current topology under the new name 3x3.

        topology change-repfactor -name 3x3 -pool AllStorageNodes -rf 3 -dc dc1

        We are using the change-repfactor method to modify the replication factor to 3. The replication factor cannot be changed for this topology after executing this command.
        You can use the show topology command to verify that the storage nodes are up and running. Alternatively, use the web interface to check the 3x3 distribution across the storage nodes.
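
        For example, re-running the same ping used earlier should now report nine RUNNING replication nodes spread across the three storage nodes (the exact sequence numbers will differ on your store):

        $ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1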