
Feed aggregator

How to set up passwordless ssh in Exadata using dcli

Alejandro Vargas - Sun, 2014-10-05 02:57

Setting up a passwordless ssh root connection using dcli is fast and simple, and it makes it easy to later run commands on all servers with this utility.


In order to do that you need either DNS resolution to all database and storage nodes, or to have them registered in /etc/hosts.


1) Create a parameter file that contains all the server names you want to reach via dcli. Typically we have a cell_group file for the storage cells, a dbs_group file for the database servers and an all_group file for both.


The parameter files contain only the server names, in short format.


For example, on an Exadata quarter rack, all_group will contain:


dbnode1
dbnode2
cell1
cell2
cell3


2) As the root user, generate the ssh key that will be used for the equivalence:


ssh-keygen -t rsa


3) Distribute the key to all servers:


dcli -g ./all_group -l root -k -s '-o StrictHostKeyChecking=no'


4) Check that the passwordless connection works:


dcli -g all_group -l root hostname 
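If the equivalence is working, each server answers without prompting for a password. dcli prefixes every output line with the name of the server that produced it, so on the hypothetical quarter rack above the check would return something like the following (the example.com domain is only an illustration):


dbnode1: dbnode1.example.com
dbnode2: dbnode2.example.com
cell1: cell1.example.com
cell2: cell2.example.com
cell3: cell3.example.com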



 

Categories: DBA Blogs

Streaming for Hadoop

DBMS2 - Sun, 2014-10-05 02:56

The genesis of this post is that:

  • Hortonworks is trying to revitalize the Apache Storm project, after Storm lost momentum; indeed, Hortonworks is referring to Storm as a component of Hadoop.
  • Cloudera is talking up what I would call its human real-time strategy, which includes but is not limited to Flume, Kafka, and Spark Streaming. Cloudera also sees a few use cases for Storm.
  • This all fits with my view that the Current Hot Subject is human real-time data freshness — for analytics, of course, since we’ve always had low latencies in short-request processing.
  • This also all fits with the importance I place on log analysis.
  • Cloudera reached out to talk to me about all this.

Of course, we should hardly assume that what the Hadoop distro vendors favor will be the be-all and end-all of streaming. But they are likely to at least be influential players in the area.

In the parts of the problem that Cloudera emphasizes, the main tasks that need to be addressed are:

  • Getting data into the plumbing from whatever systems it’s being generated in. This is the province of Flume, one of Cloudera’s earliest projects. I’d add that this is also one of the core competencies of Splunk.
  • Getting data where it needs to go. Flume can do this. Kafka, a publish/subscribe messaging system, can do it in a more general way, because streams are sent to a Kafka broker, which then re-streams them to their ultimate destination.
  • Processing data in flight. Storm can do this. Spark Streaming can do it more easily. Spark Streaming is or soon will be a part of every serious Hadoop distribution. Flume can do some lightweight processing as well.
  • Serving up data for further query. Cloudera would like you to do this via HBase or Impala. But Oracle is a fine choice too, and indeed a popular choice among Cloudera customers.

I guess there’s also a step of receiving data out of the plumbing system. Cloudera and I glossed over that aspect when we talked, but I’ll say:

  • Spark commonly lives over HDFS (Hadoop Distributed File System).
  • Flume feeds HDFS. Flume was also hacked years ago — rah-rah open source! — to feed Kafka instead, and also to be fed by it.

Cloudera has not yet decided whether to make Kafka part of CDH (which stands for Cloudera Distribution yada yada Hadoop). Considerations in that probably include:

  • Kafka has impressive adoption among high-profile internet companies, but not so much among conventional enterprises.
  • Surely not coincidentally, Kafka is missing features in areas such as security (e.g. it lacks Kerberos integration).
  • Kafka lacks cool capabilities to let you configure rather than code, although Cloudera thinks that in some cases you can work around this problem by marrying Kafka and Flume.

I still find it bizarre that a messaging system would be named after an author famous for writing about depressingly inescapable situations. Also, I wish that:

  • Kafka had something to do with transformations.
  • The name Kafka had been used by a commercial software company, which could offer product trials.

Highlights from the Storm vs. Spark Streaming vs. Samza part of my discussion with Cloudera include:

  • Storm has a companion project Trident that makes Storm somewhat easier to program and/or configure. But Trident only has some of the usability advantages of Spark Streaming.
  • Cloudera sees no advantages to Samza, a Kafka companion project, when compared with whichever of Spark Streaming or Storm + Trident is better suited to a particular use case.
  • Cloudera likes the rich set of primitives that Spark Streaming inherits from Spark. Cloudera also notes that, if you learn to program over Spark for any reason, then you will in particular have learned how to program over Spark Streaming.
  • Spark Streaming lets you join Spark Streaming data to other data that Spark can get access to. I agree with Cloudera that this is an important advantage.
  • Cloudera sees Storm’s main advantages as being in latency. If you need 10-200 millisecond latency, Storm can give you that today while Spark Streaming can’t. However, Cloudera notes that to write efficiently to your persistent store — which Cloudera fondly hopes but does not insist will be HBase or Impala — you may need to micro-batch your writes anyway.

Also, Spark Streaming has a major advantage over bare Storm in whether you have to manually configure your topology, but I wasn’t clear as to how far Trident closes that particular gap.

Cloudera and I didn’t particularly talk about data-consuming technologies such as BI, predictive analytics, or analytic applications, but we did review use cases a bit. Nothing too surprising jumped out. Indeed, the discussion reminded me of a 2007 list I did of applications — other than extreme low-latency ones — for CEP (Complex Event Processing).

  • Top-of-mind were things that fit into one or more of the buckets “internet”, “retail”, “recommendation/personalization”, “security” or “anti-fraud”.
  • Transportation/logistics got mentioned, to which I replied that the CEP vendors had all seemed to have one trucking/logistics client each.
  • At least in theory, there are potentially huge future applications in health care.

In general, candidate application areas for streaming-to-Hadoop match those that involve large volumes of machine-generated data.

Edit: Shortly after I posted this, Storm creator Nathan Marz put up a detailed and optimistic post about the history and state of Storm.

Categories: Other

Bash security fix made available for Exadata

Alejandro Vargas - Sun, 2014-10-05 02:29

Before applying the fix, review the complete information about the security fix availability in the MOS document:


 Responses to common Exadata security scan findings (Doc ID 1405320.1)


The security fix is available for download from:


http://public-yum.oracle.com/repo/OracleLinux/OL5/latest/x86_64/getPackage/bash-3.2-33.el5_11.4.x86_64.rpm


The summary installation instructions are as follows:


1) Download bash-3.2-33.el5_11.4.x86_64.rpm from the URL above.


2) Copy bash-3.2-33.el5_11.4.x86_64.rpm into /tmp at both database and storage nodes.
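If the dcli ssh equivalence described in the previous post is already in place, this copy can be done in a single pass from the node where the rpm was downloaded; a sketch, assuming the same all_group file used there (-f names the local file to copy and -d the destination directory on each target server):


dcli -g all_group -l root -f /tmp/bash-3.2-33.el5_11.4.x86_64.rpm -d /tmp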


3) Remove the rpm exadata-sun-computenode-exact:



rpm -e exadata-sun-computenode-exact



4) On the compute nodes, install bash-3.2-33.el5_11.4.x86_64.rpm using this command:



 rpm -Uvh /tmp/bash-3.2-33.el5_11.4.x86_64.rpm



5) On the storage nodes, install bash-3.2-33.el5_11.4.x86_64.rpm using this command:




rpm -Uvh --nodeps /tmp/bash-3.2-33.el5_11.4.x86_64.rpm


6) Remove /tmp/bash-3.2-33.el5_11.4.x86_64.rpm from all nodes
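As a final sanity check, the installed bash release can be compared across all servers in one command; a sketch, again assuming the group file from the passwordless ssh post:


dcli -g all_group -l root "rpm -q bash"


Every node should report the bash-3.2-33.el5_11.4 build after the fix.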


As a side effect of applying this fix, during future upgrades on the database nodes a warning will appear stating:



The "exact package" was not found and it will use minimal instead.


That's a normal and expected message and will not interfere with the upgrade. 







Categories: DBA Blogs

OOW 2014: Day 1

Doug Burns - Sat, 2014-10-04 23:26
Disclosure: I'm attending Openworld at the invitation of the OTN ACE Director program who are paying for my flights, hotel and conference fee. My employer has helpfully let me attend on work time, as well as sending other team mates because they recognise the educational value of attending. Despite that, all of the opinions expressed in these posts are, as usual, all my own.
After the very welcome tradition of breakfast at Lori's Diner, I had time to register and then get myself down to Moscone South for my first session of the day. I'd planned to listen to Paul Vallee's security talk because I'd been unable to register for Gwen Shapira's Analyzing Twitter data with Hadoop session but noticed spare seats as I passed the room, so switched. I love listening to Gwen talk on any subject because her enthusiasm is contagious. A few of the demos went a little wrong but I still got a nice overview of the various components of a Hadoop solution (which is an area I've never really looked at much) so the session flew by. Good stuff.
Next up was Yet-another-Oracle-ACE-Director Arup Nanda's presentation on Demystifying Cache Buffer Chains. The main reason I attended was to see how he presented the subject and I wasn't expecting to learn too much, but it's an important subject, particularly now that I'm working with RAC and consolidated environments more often. CBC latch waits are on my radar once more!
Next up was 12 things about 12c, a session of 12 speakers given 5 minutes to talk about, well, 12c stuff. Debra Lilley organised this and despite all her concerns that she'd expressed leading up to it, it went very smoothly, so hats off to Debra and to the speakers for behaving themselves with the timing! I was particularly concerned that we kicked off with Jonathan Lewis ;-) Big problem with putting him on first - will he actually be able to stay within the time constraints? Because he'll get too excited and want to talk about things in more depth. He did do it, but it was tough as he raced towards the finishing line ;-)
The only thing that bugged me about this was that I hadn't realised it was two session slots (makes complete sense if I'd performed some simple maths!) but it was very annoying when they kicked everyone out of the room at half-time before readmitting them. Yes, there are rules, but this was one of the more stupid. It annoyed me enough that I decided to skip the second half and attend the Enkitec panel session instead.
What an amazing line-up of Exadata geek talent they had on one stage for Expert Oracle Exadata: Then and Now ....
Enkitec Panel

Including most of the authors of the original book as well as the authors who are writing the next edition which should be out before the end of the year.

From left-to-right : Karl Arao, Martin Bach, Kerry Osborne, Andy Colvin, Tanel Poder and Frits Hoogland.

They talked a little about the original version of the book (largely based on V2) and how far Exadata had come since then, but it was a pretty open session with questions shooting around all over the place and great fun. Nice way for me to wrap up my user group conference activities for the day and head out into the sun for Larry's Opening Keynote. 
First we had the traditional vendor blah-blah-blah about stuff I couldn't care less about but, in shocking news, I actually enjoyed it! Maybe it's because it was Intel and so I'm probably more interested in what they're doing, but it was pretty ok stuff. All the keynotes are available online here.
Then it was LarryTime. He seemed on pretty good form by recent standards, although I can summarise it simply as Cloud, Cloud and more Cloud. There's no getting away from the fact that it's been quite the about-turn from him in his attitude towards the Cloud. I did appreciate the "we're only just getting started" message and I suppose I've become inured to how accurate the actual facts are in his presentations and to the attacks on competitors, so I sort of enjoy his keynotes more than most.
At this stage, the jetlag was biting *hard* and I ended up missing yet another ACE dinner but from all the reports I heard it was the best ever by some distance so I was gutted to miss out on it. But when your body is saying sleep whilst you're walking, sometimes you have to listen to it! Then again, when it decides to wake you up again at 2:30, perhaps you should tell it to go and take a running jump!

Upgrading PeopleTools with Zero Downtime (1/3)

Javier Delgado - Sat, 2014-10-04 21:35
A few months ago, BNB concluded a PeopleTools upgrade with a quite curious approach. Our customer, a leading Spanish financial institution, had a PeopleSoft CRM 8.4 installation running under PeopleTools 8.42. Their CRM application was being used to provide support to their 24x7 call centres, and the only reason they had to perform the PeopleTools upgrade was to be able to update their database and WebLogic releases, as the existing ones were already out of support.

Now, the organisation was undergoing a major structural change, so the customer wanted to perform the PeopleTools upgrade with minimal disruption to their activities, as it was difficult at that time to obtain the needed sponsorship from higher managerial levels. In other words, they wanted to perform the upgrade as silently as possible. This translated into two particular requirements:
  • Ability to perform the PeopleTools change with zero downtime, in order to avoid any impact on the users.
  • Ability to gradually move users from the old PeopleTools release to the new one, practically limiting the impact of any product issue related to the upgrade. In case anything failed, they wanted to be able to move the users back to the old release.
Having performed quite a few PeopleTools upgrades in the past, I knew that following the standard procedures would not help us in providing a satisfactory answer to the client. So, after some discussions, the customer agreed to try a non-standard way of upgrading PeopleTools. We agreed to do a prototype, test it and, if everything went well, then move to Production. If it did not work out, we would need to do it in the standard way. As it finally turned out, the suggested approach worked.

I cannot say it would work for any other combination of PeopleTools and application versions, nor for a different customer usage of the application. Anyhow, I thought it might be useful to share it with you, in case any of you can enrich the approach with your feedback. In the next post I will describe the approach, and in the third and final one I will describe the issues we faced during the implementation. So... stay tuned ;).

OOW14 Day 5 - not only Oracle OpenWorld

Yann Neuhaus - Sat, 2014-10-04 11:45

Oracle's OpenWorld has ended. It was the first time I attended this great event and it really is a "great" event:

  • 60000 attendees from 145 countries
  • 500 partners or customers in the exhibit hall
  • 400 demos in the DEMOgrounds
  • 2500 sessions

11g Adaptive Cursor Sharing --- does it work only for SELECT statements ? Using the BIND_AWARE Hint for DML

Hemant K Chitale - Sat, 2014-10-04 08:52
Test run in 11.2.0.2

UPDATE 07-Oct-14 :  I have been able to get the DML statement also to demonstrate Adaptive Cursor Sharing with the "BIND_AWARE" hint as suggested by Stefan Koehler and Dominic Brooks.

Some of you may be familiar with Adaptive Cursor Sharing.

This is an 11g improvement over the "bind peek once and execute repeatedly without evaluating the true cost of execution" behaviour that we see in 10g. Thus, if the predicate column is skewed and the bind value is changed, 10g does not "re-peek" and re-evaluate the execution plan. 11g doesn't "re-peek" at the first execution with a new bind either, but if it finds the true cardinality returned by that execution to be at significant variance with the estimate, it decides to "re-peek" at a subsequent execution. This behaviour is determined by the new attributes "IS_BIND_SENSITIVE" and "IS_BIND_AWARE" for the SQL cursor.

If a column is highly skewed, as determined by the presence of a Histogram, the Optimizer, when parsing an SQL with a bind against that column as a predicate, marks the SQL as BIND_SENSITIVE. If two executions with two different bind values return very different row counts for the predicate, the SQL is marked BIND_AWARE. The Optimizer "re-peeks" the bind and generates a new Child Cursor that is marked as BIND_AWARE.

Here is a demo.


SQL> -- create and populate table
SQL> drop table demo_ACS purge;

Table dropped.

SQL>
SQL> create table demo_ACS
2 as
3 select * from dba_objects
4 where 1=2
5 /

Table created.

SQL>
SQL> -- populate the table
SQL> insert /*+ APPEND */ into demo_ACS
2 select * from dba_objects
3 /

75043 rows created.

SQL>
SQL> -- create index on single column
SQL> create index demo_ACS_ndx
2 on demo_ACS (owner) nologging
3 /

Index created.

SQL>
SQL> select count(distinct(owner))
2 from demo_ACS
3 /

COUNT(DISTINCT(OWNER))
----------------------
42

SQL>
SQL> select owner, count(*)
2 from demo_ACS
3 where owner in ('HEMANT','SYS')
4 group by owner
5 /

OWNER COUNT(*)
-------- ----------
HEMANT 55
SYS 31165

SQL>
SQL> -- create a histogram on the OWNER column
SQL> exec dbms_stats.gather_table_stats('','DEMO_ACS',estimate_percent=>100,method_opt=>'FOR COLUMNS OWNER SIZE 250');

PL/SQL procedure successfully completed.

SQL> select column_name, histogram, num_distinct, num_buckets
2 from user_tab_columns
3 where table_name = 'DEMO_ACS'
4 and column_name = 'OWNER'
5 /

COLUMN_NAME HISTOGRAM NUM_DISTINCT NUM_BUCKETS
------------------------------ --------------- ------------ -----------
OWNER FREQUENCY 42 42

SQL>

So, I now have a table that has very different row counts for 'HEMANT' and 'SYS'. The data is skewed. The Execution Plan for queries on 'HEMANT' would not be optimal for queries on 'SYS'.

Let's see a query executing for 'HEMANT'.

SQL> -- define bind variable
SQL> variable target_owner varchar2(30);
SQL>
SQL> -- setup first SQL for 'HEMANT'
SQL> exec :target_owner := 'HEMANT';

PL/SQL procedure successfully completed.

SQL>
SQL> -- run SQL
SQL> select owner, object_name
2 from demo_ACS
3 where owner = :target_owner
4 /

OWNER OBJECT_NAME
-------- ------------------------------
.....
.....

55 rows selected.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 1820xq3ggh6p6, child number 0
-------------------------------------
select owner, object_name from demo_ACS where owner = :target_owner

Plan hash value: 805812326

--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID| DEMO_ACS | 55 | 3960 | 3 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | DEMO_ACS_NDX | 55 | | 1 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("OWNER"=:TARGET_OWNER)


19 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '1820xq3ggh6p6'
4 order by child_number
5 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
1820xq3ggh6p6 0 805812326 Y N 1 55

SQL> commit;

Commit complete.

SQL>

We see one execution of the SQL Cursor with an Index Range Scan and Plan_Hash_Value 805812326. The SQL is marked BIND_SENSITIVE because of the presence of a Histogram indicating skew.

Now, let's change the bind value from 'HEMANT' to 'SYS' and re-execute exactly the same query.

SQL> -- setup second SQL for 'SYS'
SQL> exec :target_owner := 'SYS';

PL/SQL procedure successfully completed.

SQL>
SQL> -- run SQL
SQL> select owner, object_name
2 from demo_ACS
3 where owner = :target_owner
4 /

OWNER OBJECT_NAME
-------- ------------------------------
.....
.....

31165 rows selected.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 1820xq3ggh6p6, child number 0
-------------------------------------
select owner, object_name from demo_ACS where owner = :target_owner

Plan hash value: 805812326

--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID| DEMO_ACS | 55 | 3960 | 3 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | DEMO_ACS_NDX | 55 | | 1 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("OWNER"=:TARGET_OWNER)


19 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '1820xq3ggh6p6'
4 order by child_number
5 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
1820xq3ggh6p6 0 805812326 Y N 2 31220

SQL> commit;

Commit complete.

SQL>

This time, for 31,165 rows (instead of 55 rows), Oracle has used the same Execution Plan -- the same Plan_Hash_Value and the same expected cardinality of 55 rows. However, the Optimizer is now "aware" that the 55-row Execution Plan actually returned 31,165 rows.

The next execution will see a re-parse because of this awareness.

SQL> -- rerun second SQL
SQL> select owner, object_name
2 from demo_ACS
3 where owner = :target_owner
4 /

OWNER OBJECT_NAME
-------- ------------------------------
.....
.....

31165 rows selected.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 1820xq3ggh6p6, child number 1
-------------------------------------
select owner, object_name from demo_ACS where owner = :target_owner

Plan hash value: 1893049797

------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 299 (100)| |
|* 1 | TABLE ACCESS FULL| DEMO_ACS | 31165 | 2191K| 299 (1)| 00:00:04 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"=:TARGET_OWNER)


18 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '1820xq3ggh6p6'
4 order by child_number
5 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
1820xq3ggh6p6 0 805812326 Y N 2 31220
1820xq3ggh6p6 1 1893049797 Y Y 1 31165

SQL> commit;

Commit complete.

SQL>

Aha ! This time we have a new Plan_Hash_Value (1893049797) for a Full Table Scan, being represented as a new Child Cursor (Child 1) that is now BIND_AWARE.






Now, here's the catch I see.  If I change the "SELECT ....." statement to an "INSERT .... SELECT ....", I do NOT see this behaviour.  I do NOT see the cursor becoming BIND_AWARE as a new Child Cursor.
Thus, the 3rd pass of the "INSERT ..... SELECT ....." (which is the second pass with the Bind Value 'SYS') is correctly BIND_SENSITIVE but not BIND_AWARE. This is what it shows:


SQL> -- rerun second SQL
SQL> insert into target_tbl
2 (
3 select owner, object_name
4 from demo_ACS
5 where owner = :target_owner
6 )
7 /

31165 rows created.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID cqyhjz5a5xyu4, child number 0
-------------------------------------
insert into target_tbl ( select owner, object_name from demo_ACS where
owner = :target_owner )

Plan hash value: 805812326

---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 3 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| DEMO_ACS | 55 | 3960 | 3 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | DEMO_ACS_NDX | 55 | | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("OWNER"=:TARGET_OWNER)


21 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = 'cqyhjz5a5xyu4'
4 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
cqyhjz5a5xyu4 0 805812326 Y N 3 62385

SQL> commit;

Commit complete.

SQL>

Three executions -- one with 'HEMANT' and the second and third with 'SYS' as the Bind Value -- all use the *same* Execution Plan.

So, does this mean that I cannot expect ACS for DML ?


UPDATE 07-Oct-14 :  I have been able to get the DML statement also to demonstrate Adaptive Cursor Sharing with the "BIND_AWARE" hint as suggested by Stefan Koehler and Dominic Brooks.

SQL> -- run SQL
SQL> insert /*+ BIND_AWARE */ into target_tbl
2 (
3 select owner, object_name
4 from demo_ACS
5 where owner = :target_owner
6 )
7 /

55 rows created.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 0cca9xusptauj, child number 0
-------------------------------------
insert /*+ BIND_AWARE */ into target_tbl ( select owner, object_name
from demo_ACS where owner = :target_owner )

Plan hash value: 805812326

---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 3 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| DEMO_ACS | 55 | 3960 | 3 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | DEMO_ACS_NDX | 55 | | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("OWNER"=:TARGET_OWNER)


21 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '0cca9xusptauj'
4 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
0cca9xusptauj 0 805812326 Y Y 1 55

SQL> commit;

Commit complete.

SQL>
SQL> -- setup second SQL for 'SYS'
SQL> exec :target_owner := 'SYS';

PL/SQL procedure successfully completed.

SQL>
SQL> -- run SQL
SQL> insert /*+ BIND_AWARE */ into target_tbl
2 (
3 select owner, object_name
4 from demo_ACS
5 where owner = :target_owner
6 )
7 /

31165 rows created.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 0cca9xusptauj, child number 1
-------------------------------------
insert /*+ BIND_AWARE */ into target_tbl ( select owner, object_name
from demo_ACS where owner = :target_owner )

Plan hash value: 1893049797

-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 299 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
|* 2 | TABLE ACCESS FULL | DEMO_ACS | 31165 | 2191K| 299 (1)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("OWNER"=:TARGET_OWNER)


20 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '0cca9xusptauj'
4 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
0cca9xusptauj 0 805812326 Y Y 1 55
0cca9xusptauj 1 1893049797 Y Y 1 31165

SQL> commit;

Commit complete.

SQL>
SQL> -- rerun second SQL
SQL> insert /*+ BIND_AWARE */ into target_tbl
2 (
3 select owner, object_name
4 from demo_ACS
5 where owner = :target_owner
6 )
7 /

31165 rows created.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 0cca9xusptauj, child number 1
-------------------------------------
insert /*+ BIND_AWARE */ into target_tbl ( select owner, object_name
from demo_ACS where owner = :target_owner )

Plan hash value: 1893049797

-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 299 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
|* 2 | TABLE ACCESS FULL | DEMO_ACS | 31165 | 2191K| 299 (1)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("OWNER"=:TARGET_OWNER)


20 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '0cca9xusptauj'
4 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
0cca9xusptauj 0 805812326 Y Y 1 55
0cca9xusptauj 1 1893049797 Y Y 2 62330

SQL> commit;

Commit complete.

SQL>

However, there is a noticeable difference.  With the BIND_AWARE Hint, the SQL is Bind Aware right from the first execution (for :target_owner='HEMANT').  So, even at the second execution (for the first run of :target_owner='SYS'), it re-peeks and generates a new Execution Plan (the Full Table Scan) for a new Child (Child 1).
.
.
.
Categories: DBA Blogs

News and Updates from Oracle Openworld 2014

Rittman Mead Consulting - Sat, 2014-10-04 08:48

It’s the Saturday after Oracle Openworld 2014, and I’m now home from San Francisco and back in the UK. It’s been a great week as usual, with lots of product announcements and updates to the BI, DW and Big Data products we use on current projects. Here’s my take on what was announced this last week.

New Products Announced

From a BI and DW perspective, the most significant product announcements were around Hadoop and Big Data. Up to this point most parts of an analytics-focused big data project required you to code the solution yourself, with the diagram below showing the typical three steps in a big data project – data ingestion, analysis and sharing the results.


At the moment, all of these steps are typically performed from the command-line using languages such as Python, R, Pig, Hive and so on, with tools like Apache Flume and Apache Sqoop used to bring data into and out of the Hadoop cluster. Under the covers, these tools use technologies such as MapReduce or Spark to do their work, automatically running jobs in parallel across the cluster and making use of the easy scalability of Hadoop and NoSQL databases.

You can also neatly divide the work up on a big data project into two phases; the “discovery” phase typically performed by a data scientist where data is loaded, analysed, correlated and otherwise “understood” to provide the initial insights, and then an “exploitation” phase where we apply governance, provide the output data in a format usable by BI tools and otherwise share the results with the wider corporate audience. The updated Information Management Reference Architecture we collaborated on with Oracle and launched in June this year had distinct discovery and exploitation phases, and the architecture itself made a clear distinction between the Innovation part that enabled the discovery phase of a project and the Execution part that delivered the insights and data in a more governed, production setting.


This was the theme of the product announcements around analytics, BI, data warehousing and big data during Openworld 2014, with Oracle’s Omri Traub in the photo below taking us through Oracle’s big data product strategy. What Oracle are doing here is productising and “democratising” big data, putting it clearly in context of their existing database, engineered systems and BI products and linking them all together into an overall information management architecture and delivery process.


So working through from ingestion through to data analysis, these steps have typically been performed by data scientists using scripting tools and rudimentary data visualisation engines, making them labour-intensive and reliant on a small set of people conversant with these tools and process. Oracle Big Data Discovery is aimed squarely at these steps, and combines Apache Spark-based data preparation and transformation capabilities with an analysis and visualisation engine based on Endeca Server.


Key features of Big Data Discovery include:

  • Ability to analyse, parse, explore and “wrangle” data using graphical tools and a Spark-based transformation engine
  • Create a catalog of the data on your Hadoop cluster, and then search that catalog using Endeca Server search technologies
  • Create recommendations of other datasets that might interest you, based on what you’re looking at now
  • Visualize your datasets to help understand what they contain, and discover new insights

Under the covers it comprises two parts; the data loading, transformation and profiling part that uses Apache Spark to do its work in parallel across all the nodes in the cluster, and the analysis part, which takes data prepared by Apache Spark and loads into the Endeca Server in-memory engine to perform the analysis, aggregation and data visualisation. Unlike the Spark part the Endeca server element runs just on one node and limits the size of the analysis dataset to what can run in-memory in the Endeca Server engine, but in practice you’re going to work with a sample of the data rather than the entire dataset at that stage (in time the assumption is that the Endeca Server engine will be unbundled and run natively on YARN, giving it the same scalability as the Spark-based data ingestion and transformation part). Initially Big Data Discovery will run on-premise with a cloud version later on, and it’s not dependent on Big Data Appliance – expect to see something later this year / early next year.

Another new product that addresses the discovery phase and discovery lab part of a big data project is Oracle Data Enrichment Cloud Service, from the Oracle Data Integration team and designed to complement ODI and Oracle EDQ. Whilst Oracle positioned ODECS as something you’d use as well as Big Data Discovery and typically upstream from BDD, to me there seemed to be a fair bit of overlap between the products, with both tools doing data profiling and transformation but BDD being more focused on the exploration and discovery part, and ODECS being more focused on early-stage data profiling and transformation.


ODECS is clearly more of an ETL tool complement and runs natively in the cloud, right from the start. It’s most probably aimed at customers with their Hadoop dataset already in the cloud, maybe using Amazon Elastic MapReduce or Oracle’s new Hadoop-as-a-Service and has more in common with the old Data Quality Option for Oracle Warehouse Builder than Endeca’s search-first analytic interface. It’s got a very nice interface including a mobile-enabled website and the ability to include and merge in external datasets, including Oracle’s own Data as a Service platform offering. Along with the new Metadata Management tool Oracle also launched at Openworld it’s a great addition to the Oracle Data Integration product suite, but I can’t help thinking that its initial availability only on Oracle’s public cloud platform is going to limit its use with Oracle’s typical customers – we’ll have to just wait and see.

The other major product that addresses big data projects was Oracle Big Data SQL. Partly addressing the discovery phase of big data projects but mostly (to my mind) addressing the exploitation phase, and the execution part of the information management architecture, Big Data SQL gives Oracle Exadata the ability to return data from Hive and NoSQL on the Big Data Appliance as well as data from its normal relational store. I covered Big Data SQL on the blog a few weeks ago and I’ll be posting some more in-depth articles on it next week, but the other main technical innovation with the product is its bringing of Exadata’s SmartScan feature to Hadoop, projecting and filtering data at the Hadoop storage node level and also giving Hadoop the ability to understand regular Oracle SQL, rather than the cut-down version you get with HiveQL.


Where this then leaves us is with the ability to do most of a big data project using (Oracle) tools, bringing big data analysis within reach of organisations with Oracle-style budgets but without access to rare data scientist-type resources. Going back to my diagram earlier, a post-OOW big data project using the new products launched in this last week could look something like this:


Big Data SQL is out now and depends on BDA and Exadata for its use; Big Data Discovery should be out in a few months time, runs on-premise but doesn’t require BDA, whilst ODECS is cloud-only and runs on a BDA in the background. Expect more news and more integration/alignment from the products as 2014 ends and 2015 starts, and we’re looking forward to using them on Oracle-centric Hadoop projects in the near future. 

Product Updates for BI, Data Integration, Exalytics, BI Applications and OBIEE

Other news announced over the week for products we more commonly use on projects include:

Finally, something that we were particularly pleased to see was the updated Oracle Information Management Architecture I mentioned earlier referenced in most of the analytics sessions, with Oracle’s Balaji Yelamanchili for example introducing it in his big data and business analytics general session mid-way through the week. 


We love the way this brings together the big data components and puts them in the context of the wider data warehouse and analytic processes, and compared to a few years ago when Hadoop and big data was considered completely separate to data warehousing and BI and done by staff completely different to the core business analytics team, this new reference architecture puts it squarely within the world of BI and analytics we work in. It also emphasises the new abilities Hadoop, NoSQL databases and big data can bring us – support for wider sets of data sources with dynamic schemas, the ability to economically work with and analyse much larger datasets, and support discovery-type upfront analysis work. Finally, it recognises that to get true value out of the analysis you start on Hadoop, you eventually need to add proper data governance, make the results more widely available using full SQL tools, and use the right tools – relational databases, OLAP servers and the like – to analyse the data once it's in a more structured form.

If you missed our write-up on the updated Information Management Reference Architecture you can read our two-part blog post here and here, read the Oracle white paper, or listen to the podcast with OTN Archbeat’s Bob Rhubart. For now though I’m looking forward to seeing the family after a week and a half away in San Francisco – thanks to OTN and the Oracle ACE Director Program for sponsoring my visit over to SF for Openworld, and we’ll post our conference presentation slides later next week when we’re back in the UK and US offices.

Categories: BI & Warehousing

Error unzipping PeopleSoft Images

Duncan Davies - Fri, 2014-10-03 18:14

The new PUM images are a boon for anyone wanting to get a PeopleSoft instance up and running quickly. Once you’ve downloaded the zip archives, however, you might find that the delivered extraction script doesn’t work by default for everyone.

The line:

unzip HCM-920-UPD-008_OVA_2of11.zip

gives me the following error:

'unzip' is not recognized as an internal or external command

I’m not sure where the unzip utility is supposed to be from, but it’s not delivered as part of Windows 8.1. I typically use the excellent 7zip utility for my zip/archiving needs, so I need to amend the script slightly.

I add the following line near the top:

set PATH=%PATH%;C:\Program Files\7-Zip\

so that I can reference the extraction tool with just the filename. Then I change each archive line to use 7zip instead, thus:

7z e HCM-920-UPD-008_OVA_2of11.zip

PeopleSoft Interaction Hub 9.1/Revision 3 Now Available

PeopleSoft Technology Blog - Fri, 2014-10-03 17:29

PeopleSoft is proud to announce that the PeopleSoft Interaction Hub 9.1/Revision 3 is now Generally Available for new installations. The release is available for download on OSDC or for physical shipment through Customer Care.

Here are a few highlights of Revision 3.  See the Release Value Proposition and Release Notes for more detail on what's in this important release.

Branding

In PeopleTools 8.54, the Branding feature has been migrated from Interaction Hub to PeopleTools along with several enhancements. However, there are still useful branding capabilities provided by the Interaction Hub. In a cluster environment, the Interaction Hub will be used to brand across the cluster. The Interaction Hub provides a new Branding WorkCenter that can be used to create, manage, and assign role-specific themes. We've also delivered an "out-of-the-box" Hub similar to the one that is typically shown in our demo examples.

Fluid UX Support

The 8.54 PeopleTools release represents a landmark for PeopleTools and the PeopleSoft user experience. With this release, PeopleSoft introduces the PeopleSoft Fluid User Interface. The Interaction Hub provides some Fluid content like The Company News (news publications) pagelet.

Interaction Hub Cluster Setup Improvements

The Interaction Hub includes the Unified Navigation WorkCenter, which makes it easier for administrators to install, set up, and monitor the Interaction Hub cluster with other PeopleSoft applications. The Unified Navigation WorkCenter also has a diagnostics page that provides information on the In Network nodes and the SSO setup. This provides a central location for diagnostics and troubleshooting for the PeopleSoft cluster.

Content Management

Content Management is a powerful and popular feature in the Interaction Hub. Revision 3 delivers a new WorkCenter that provides a simple mechanism to create and publish content. The WorkCenter guides the user through the content creation and publishing process. Revision 3 also has an enhancement that enables the creation of Rich Text Editor (RTE) templates.

WCAG 2.0 Support

Enterprises and public sector institutions globally are conforming to accessibility regulations. Revision 3 enhancements make it possible for customers to deliver accessible content in the Interaction Hub. PeopleTools 8.54 makes it possible for PeopleSoft applications to conform to the WCAG 2.0 standards. In Revision 3, the Interaction Hub product conforms to WCAG 2.0 AA standards.

Where to Go For More Information

There is a lot of information available on this new release.  The following documentation is available on the Oracle Technology Network and my.oracle.support:

PeopleSoft Portal Solutions Interaction Hub 9.1 Documentation Home Page.  Pretty much all related documentation is available here.
Here are some links to particularly useful items:
  • Release Notes
  • Revision 3 Installation Home Page
  • Revision 3 Hardware and Software Requirements
  • Revision 3 Upgrade Home Page
  • PeopleTools 8.54 Licensing Notes with the new content

Health care an open target for hackers [VIDEO]

Chris Foot - Fri, 2014-10-03 13:31

Transcript

Think hackers are only after your credit card numbers? Think again.

Hi, welcome to RDX. While the U.S. health care industry is required by law to secure patient information, many organizations are only taking basic protective measures.

According to Reuters, the FBI stated Chinese cybercriminals had broken into a health care organization's database and stolen personal information on about 4.5 million patients. Names, birth dates, policy numbers, billing information and other data can be easily accessed by persistent hackers.

Databases holding this information need to employ active monitoring and automated surveillance tools to ensure unrestricted access isn't allowed. In addition, encrypting patient files is a critical next step.

Thanks for watching. For more security tips, be sure to check in frequently.  

The post Health care an open target for hackers [VIDEO] appeared first on Remote DBA Experts.

An OOW Summary from the ADF and MAF perspective

Shay Shmeltzer - Fri, 2014-10-03 12:39

Another Oracle OpenWorld is behind us, and it was certainly a busy one for us. In case you didn't have a chance to attend, or follow the twitter frenzy during the week, here are the key takeaways that you should be aware of if you are developing with either Oracle ADF or Oracle MAF.

 Oracle Alta UI

We released our design patterns for building modern applications for multiple channels. This includes a new skin and many samples that show you how to create the type of UIs that we are now using for our modern cloud-based interfaces.

All the resources are at http://bit.ly/oraclealta

The nice thing is that you can start using it today in both Oracle ADF Faces and Oracle MAF - just switch the skin to get the basic color scheme. Instructions here.

Note however that Alta is much more than just a color change; if you really want an Alta-type UI you need to start designing your UI differently - take a look at some of the screen samples or our demo application for ideas.

Cloud Based Development

A few weeks before OOW we released our Developer Cloud Service in production, and our booth and sessions showing this were quite popular. For those who are not familiar, the Developer Cloud Service gives you a hosted environment for managing your code life cycle (git version management, Hudson continuous integration, and easy cloud deployment), and it also gives you a way to track your requirements and manage team work.

While this would be relevant to any Java development team, for ADF developers there are specific templates in place to make things even easier.

You can get to experience this in a trial mode by getting a trial Java service account here.

Another developer-oriented cloud service that got a lot of focus this year was the upcoming Oracle Mobile Cloud Service - which includes everything your team will need in order to build mobile backends (APIs, Connectors, Notification, Storage and more). We ran multiple hands-on labs and sessions covering this, and it was featured in many keynotes too.

In the Application development tools general session we also announced that in the future we'll provide a capability called Oracle Mobile Application Accelerator (which we call Oracle MAX for short), which will allow power users to build on-device mobile applications easily through a web interface. The applications will leverage MAF as the framework, and as a MAF developer you'll be able to provide additional templates, components and functionality for those.

Another capability we showed in the same session was a cloud based development environment that we are planning to add to both the Developer Cloud Service and the Mobile Cloud Service - for developers to be able to code in the cloud with the usual functions that you would expect from a modern code editor.


The Developer Community is Alive and Kicking

The ADF and MAF sessions were quite full this year, and additional community activities were successful as well, starting with a set of ADF/MAF sessions by users on Sunday, courtesy of ODTUG and the ADF EMG. In one of those sessions, members of the community announced a new ADF data control for XML. Check out the work they did!

ODTUG also hosted a nice meetup for ADF/MAF developers and announced their upcoming mobile conference in December. Their KScope15 summer conference is also looking for your abstract right now!

Coding Competition

Want to earn some money on the side? Check out the Oracle MAF Developer Challenge - build a mobile app and you can earn prizes ranging from $1,000 to $6,000.

Sessions

With so many events taking place, it is sometimes hard to hit all the sessions that you are interested in. And while the best experience is to be in the room, you might get some mileage from just looking at the slides. You can find the slides for many sessions in the session catalog here, and a list of the ADF/MAF sessions here.

See you next year. 

Categories: Development

LinkedIn Releases College Ranking Service

Michael Feldstein - Fri, 2014-10-03 09:57

I have long thought that LinkedIn has the potential to be one of the most transformative companies in ed tech for one simple reason: They have far more cross-institutional longitudinal outcomes data than anybody else—including government agencies. Just about anybody else who wants access to career path information of graduates across universities would face major privacy and data gathering hurdles. But LinkedIn has somehow convinced hundreds of millions of users to voluntarily enter that information and make it available for public consumption. The company clearly knows this and has been working behind the scenes to make use of this advantage. I have been waiting to see what they will come up with.

I have to say that I’m disappointed with their decision that their first foray would be a college ranking system. While I wouldn’t go so far as to say that these sorts of things have zero utility, they suffer from two big and unavoidable problems. First, like any standardized test—and I mean this explicitly in the academic meaning of the term “test”—they are prone to abuse through oversimplification of their meaning and overemphasis on their significance. (It’s not obvious to me that they would be subject to manipulation by colleges the way other surveys are, given LinkedIn’s ranking method, so at least there’s that.) Second and more importantly, they are not very useful even when designed well and interpreted properly. Many students change their majors and career goals between when they choose their college and when they graduate. According to the National Center for Education Statistics, 80% of undergraduates change their majors at least once, and the average student changes majors three times. Therefore, telling high school students applying to college which school is ranked best for, say, a career in accounting has less potential impact on the students’ long-term success and happiness than one might think.

It would be more interesting and useful to have LinkedIn tackle cross-institutional questions that could help students make better decisions once they are in a particular college. What are the top majors for any given career? For example, if I want to be a bond trader on Wall Street, do I have to major in finance? (My guess is that the answer to this question is “no,” but I would love to see real data on it.) Or how about the other way around: What are the top careers for people in my major? My guess is that LinkedIn wanted to start off with something that (a) they had a lot of data on (which means something coarse-grained) and (b) was relatively simple to correlate. The questions I’m suggesting here would fit that bill while being more useful than a college ranking system (and less likely to generate institutional blow-back).

The post LinkedIn Releases College Ranking Service appeared first on e-Literate.

OCP 12C – Real-Time Database Operation Monitoring

DBA Scripts and Articles - Fri, 2014-10-03 09:34

What is Real-Time Database Operation Monitoring? Real-Time Database Operation Monitoring helps you track the progress of a set of SQL statements and lets you create a report on them. It acts as a superset of existing monitoring components such as ASH and DBMS_MONITOR. You can generate Active Reports which are [...]
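
To make the feature more concrete, here is a minimal sketch (not taken from the article; connection details and the operation name "nightly_load" are hypothetical) of driving a composite database operation from JDBC: DBMS_SQL_MONITOR.BEGIN_OPERATION and END_OPERATION bracket the work, and DBMS_SQLTUNE.REPORT_SQL_MONITOR pulls the consolidated report. Parameter names are quoted from memory of the 12c documentation, so verify them against your release.

import java.sql.CallableStatement;
import java.sql.Clob;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class DbOpMonitoringSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical connection details - adapt to your environment.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/pdb1", "appuser", "secret")) {

            long dbopId;
            // Start tracking everything this session does under one operation name.
            try (CallableStatement begin = con.prepareCall(
                    "begin ? := dbms_sql_monitor.begin_operation(dbop_name => ?, forced_tracking => 'Y'); end;")) {
                begin.registerOutParameter(1, Types.NUMERIC);
                begin.setString(2, "nightly_load");
                begin.execute();
                dbopId = begin.getLong(1);
            }

            // ... run the batch statements to be monitored here ...

            // Close the operation.
            try (CallableStatement end = con.prepareCall(
                    "begin dbms_sql_monitor.end_operation(dbop_name => ?, dbop_eid => ?); end;")) {
                end.setString(1, "nightly_load");
                end.setLong(2, dbopId);
                end.execute();
            }

            // Fetch the consolidated report for the whole operation.
            try (CallableStatement report = con.prepareCall(
                    "begin ? := dbms_sqltune.report_sql_monitor(dbop_name => ?, type => 'TEXT'); end;")) {
                report.registerOutParameter(1, Types.CLOB);
                report.setString(2, "nightly_load");
                report.execute();
                Clob clob = report.getClob(1);
                System.out.println(clob.getSubString(1, (int) clob.length()));
            }
        }
    }
}

Individual long-running statements are still monitored automatically (or on demand with the MONITOR hint); the DBOP calls simply group a whole set of statements under one name so that the report covers the entire batch.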

The post OCP 12C – Real-Time Database Operation Monitoring appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

SQL Patch: Another way to change the plan without changing the code

Yann Neuhaus - Fri, 2014-10-03 09:02

Recently, at a customer site, I faced a performance issue. However, as is often the case, the statement is embedded in the application, so it is not possible to rewrite the query. In this blog post, we'll change the execution plan to solve the problem without changing the code - thanks to SQL Patch.
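
Before looking at the real case, here is a rough sketch of the mechanics, driven from JDBC. The statement, hint and patch name are purely hypothetical, and the interface differs by release (dbms_sqldiag_internal.i_create_patch up to 12.1, the documented dbms_sqldiag.create_sql_patch from 12.2 on), so check the signature in your version's documentation.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class SqlPatchSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical connection details - adapt to your environment.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/pdb1", "system", "secret");
             CallableStatement cs = con.prepareCall(
                     "begin "
                   + "  dbms_sqldiag_internal.i_create_patch( "
                   + "    sql_text  => 'select * from orders where customer_id = :1', "
                   + "    hint_text => 'INDEX(orders orders_cust_idx)', "
                   + "    name      => 'orders_cust_idx_patch'); "
                   + "end;")) {
            // From now on the optimizer applies the hint to that SQL text,
            // without any change to the application code.
            cs.execute();
        }
    }
}

As far as I know, SQL patches (unlike SQL profiles) do not require the Tuning Pack license, which makes them a handy option when the code cannot be changed.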

Log Buffer #391, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-10-03 08:04

Oracle OpenWorld is in full bloom. Enthusiasts of Oracle and MySQL are flocking to extract as much knowledge, news, and fun as possible. SQL Server aficionados are not far behind either.

Oracle:

Frank Nimphius announced the REST support for ADF BC feature at OOW today. This functionality will probably be available in the next JDeveloper 12c update release.

RMAN Enhancements - New Privilege: a new SYSBACKUP privilege is created in Oracle 12c; it allows the grantee to perform BACKUP and RECOVERY operations with RMAN.

Continuing with the objective of separation of duties and least privilege, Oracle 12c introduces new administrative privileges, each intended to accomplish specific duties.

Unified Auditing offers a consolidated approach: all the audit data is consolidated in a single place. Unified Auditing consolidates audit records from the following sources.

SOA Suite 12c – WSM-02141: Unable to connect to the policy access service.

SQL Server:

Data Compression and Snapshot Isolation don't play well together; you may not see a performance benefit.

Tim Smith answers some questions on SQL Server security like: Is It Better To Mask At the Application Level Or The SQL Server Database Level?

Since SQL Server delivered the entire range of window functions, there has been far less justification for using the non-standard ex-Sybase ‘Quirky Update’ tricks to perform the many permutations of running totals in SQL Server.

Easily synchronize live Salesforce data with SQL Server using the Salesforce SSIS DataFlow Tasks.

Change All Computed Columns to Persisted in SQL Server.

MySQL:

Low-concurrency performance for point lookups: MySQL 5.7.5 vs previous releases.

How to get MySQL 5.6 parallel replication and XtraBackup to play nice together.

The InnoDB labs release includes a snapshot of the InnoDB Native Partitioning feature.

Visualizing the impact of ordered vs. random index insertion in InnoDB.

Single thread performance in MySQL 5.7.5 versus older releases via sql-bench.

Categories: DBA Blogs

Virtualbox: only 32bit guests possible even though virtualization enabled in BIOS / Intel Processor Identification Utility shows opposite of BIOS virtualization setting

Dietrich Schroff - Fri, 2014-10-03 03:08
VirtualBox on my Windows 8.1 stopped running 64bit guests a while ago. I did not track down this problem at the time. Now, some months later, I tried again and found some confusing things.
First setting: BIOS virtualization enabled; Intel Processor Identification Utility in 8.1: virtualization disabled.
Second setting: BIOS virtualization disabled; Intel Processor Identification Utility in 8.1: virtualization enabled.
With both settings: VirtualBox runs 32bit guests but no 64bit guests.
 

After some searching, I realized what was happening:
I had added Microsoft's Hyper-V virtualization. With that enabled, Windows 8.1 is no longer a real host; it is just another guest (the most important one) on this computer. So with Hyper-V enabled, I was trying to run VirtualBox inside an already virtualized Windows 8.1.
After that it was easy: just disable Hyper-V on Windows 8.1:


And after a restart of Windows 8.1, I was able to run 64bit guests on VirtualBox again.

Java get Class names from package String in classpath

Yann Neuhaus - Fri, 2014-10-03 01:28

As a Java developer, you are probably used to working with reflection. However, some functionality you need to keep your software architecture flexible is not provided out of the box by the JVM.

In my particular case, I needed to find every class and sub-class inside a package, even when they are spread across several jars.

The internet has lots of solutions, but reaching this goal still remains complicated. After googling, I found a link which provided a partial solution. I would like to thank the website author:

http://www.java2s.com/Code/Java/Reflection/Attemptstolistalltheclassesinthespecifiedpackageasdeterminedbythecontextclassloader.htm

Other solutions suggested deploying external libraries as well, but I was not interested in managing another library in my software just for that purpose.

So the solution was to retrieve all jars from the context class loader and loop over them in order to find the classes we are looking for.

Below is a complete Java class resolving this issue:

 

import java.io.File;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.net.URL;
import java.net.URLClassLoader;
import java.net.URLDecoder;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

/**
 * Lists all the classes of a given package by scanning both the jars and the
 * exploded directories known to the context class loader.
 *
 * @author Philippe Schweitzer dbi services Switzerland
 */
public class ClassFinder {

    public static void main(String[] args) throws ClassNotFoundException {
        List<Class<?>> classes = ClassFinder.getClassesFromPackage("YOUR PACKAGE NAME");

        System.out.println("START ClassList:");
        for (Class<?> c : classes) {
            System.out.println(c.toString());
        }
        System.out.println("END ClassList:");
    }

    /**
     * Attempts to list all the classes in the specified package as determined
     * by the context class loader.
     *
     * @param pckgname the package name to search
     * @return a list of classes that exist within that package
     * @throws ClassNotFoundException if something went wrong
     */
    public static List<Class<?>> getClassesFromPackage(String pckgname) throws ClassNotFoundException {
        List<Class<?>> result = new ArrayList<>();
        List<File> directories = new ArrayList<>();
        Map<File, String> packageNames = new HashMap<>();

        try {
            ClassLoader cld = Thread.currentThread().getContextClassLoader();
            if (cld == null) {
                throw new ClassNotFoundException("Can't get class loader.");
            }

            // Scan every jar known to the context class loader. Note that this cast
            // only works up to Java 8: from Java 9 onwards the application class
            // loader is no longer a URLClassLoader.
            for (URL jarURL : ((URLClassLoader) cld).getURLs()) {
                System.out.println("JAR: " + jarURL.getPath());
                getClassesInSamePackageFromJar(result, pckgname, jarURL.getPath());
            }

            // Scan the exploded directories (e.g. a build output folder) containing the package.
            Enumeration<URL> resources = cld.getResources(pckgname.replace('.', '/'));
            while (resources.hasMoreElements()) {
                String path = resources.nextElement().getPath();
                File directory = new File(URLDecoder.decode(path, "UTF-8"));
                directories.add(directory);
                packageNames.put(directory, pckgname);
            }
        } catch (NullPointerException x) {
            throw new ClassNotFoundException(pckgname + " does not appear to be a valid package (Null pointer exception)");
        } catch (UnsupportedEncodingException encex) {
            throw new ClassNotFoundException(pckgname + " does not appear to be a valid package (Unsupported encoding)");
        } catch (IOException ioex) {
            throw new ClassNotFoundException("IOException was thrown when trying to get all resources for " + pckgname);
        }

        for (File directory : directories) {
            if (directory.exists()) {
                // Every .class file in the directory belongs to the package we are looking for.
                for (String file : directory.list()) {
                    if (file.endsWith(".class")) {
                        try {
                            result.add(Class.forName(packageNames.get(directory) + '.' + file.substring(0, file.length() - 6)));
                        } catch (Throwable e) {
                            // Ignore classes that cannot be loaded (e.g. missing dependencies).
                        }
                    }
                }
            } else {
                throw new ClassNotFoundException(pckgname + " (" + directory.getPath() + ") does not appear to be a valid package");
            }
        }
        return result;
    }

    /**
     * Adds to result all the classes of the given package found in the given jar.
     *
     * @param result      the list the classes are appended to
     * @param packageName the package name to search
     * @param jarPath     the path of the jar file to scan
     */
    private static void getClassesInSamePackageFromJar(List<Class<?>> result, String packageName, String jarPath) {
        String packagePath = packageName.replace('.', '/');
        JarFile jarFile = null;
        try {
            jarFile = new JarFile(jarPath);
            Enumeration<JarEntry> en = jarFile.entries();
            while (en.hasMoreElements()) {
                String entryName = en.nextElement().getName();
                if (entryName != null && entryName.endsWith(".class") && entryName.startsWith(packagePath)) {
                    try {
                        Class<?> entryClass = Class.forName(entryName.substring(0, entryName.length() - 6).replace('/', '.'));
                        if (entryClass != null) {
                            result.add(entryClass);
                        }
                    } catch (Throwable e) {
                        // Do nothing, just continue processing classes.
                    }
                }
            }
        } catch (Exception e) {
            // Not a readable jar (e.g. a plain directory on the classpath): skip it.
        } finally {
            try {
                if (jarFile != null) {
                    jarFile.close();
                }
            } catch (Exception e) {
                // Nothing useful to do if the jar cannot be closed.
            }
        }
    }
}

OOW14 Update: Oracle OpenWorld 2014 comes to an end

Javier Delgado - Fri, 2014-10-03 00:11
Today was the last day of Oracle OpenWorld 2014 in San Francisco. Even though it started a bit later due to yesterday's Appreciation Event, which hosted Aerosmith, Spacehog and Macklemore & Ryan Lewis (which I did not attend, but that's a different story), the day was packed with good sessions. I particularly appreciated the PeopleTools Meet the Experts session, which allowed me to network with Oracle PeopleTools experts and share points of view with other partners and customers.

From a PeopleSoft perspective, the event has produced some news, but actually nothing unexpected or that had not already been rumoured on the internet in recent weeks. Here is a summary of the news that I found most interesting (*):

  • Fluid interface was the hottest topic from a PeopleSoft standpoint. As previously seen on this blog, Oracle announced that the first applications will be available in the coming days.
  • Fluid is initially intended for casual and executive users, but there is a plan to extend it to power users. From my point of view, not only would the interface need to improve a bit in order to achieve that, but development would also need to be simplified somewhat, as currently designing Fluid pages requires more effort than traditional PIA pages.
  • These are some of the features on the roadmap for the Fluid interface: wizards for tile creation, related content, activity guides and a master/detail page template.
  • Oracle has no plans to deliver PeopleSoft 9.3. Still, this does not mean that they will stop investing in PeopleSoft (read more).
  • I was nicely surprised by the interest shown by attendees in the PeopleSoft Test Framework sessions. This tool has been around for a while, but customer adoption has been slow. The new Continuous Delivery Model may bring some interest to this tool, as testing should become more iterative.
  • On the architecture side, the ability to use the new in-memory features of Oracle Database 12c under PeopleTools 8.54 brings unprecedented performance to PeopleSoft environments. Still, you would need to dedicate a minimum of 100 GB of memory to the in-memory part of the database SGA, but if you have the money, it seems worth going for it.


This has been a very interesting and intense week. Now, a few days to rest and return home, and then back to work with some new perspectives and ideas.

(*) Keep in mind Oracle's Safe Harbor statement, which essentially says that what was presented during the sessions does not represent a commitment from Oracle.

REST Support for ADF BC in 12c

Andrejus Baranovski - Thu, 2014-10-02 18:45
Frank Nimphius announced the REST support for ADF BC feature at OOW today. This functionality will probably be available in the next JDeveloper 12c update release.

Once REST is enabled for an Application Module, a new XML definition file and project will be created. Here you can see how the new wizard will look for a REST definition on top of ADF BC: