
Feed aggregator

August 12: Oracle on Oracle HR Cloud―Customer Forum by Oracle’s Anje Dodson

Linda Fishman Hoyle - 7 hours 53 min ago
Join us for an Oracle HCM Cloud Forum on Wednesday, August 12, 2015. Anje Dodson (pictured left), HR Vice President at Oracle, will explain how and why Oracle is moving 126,000 employees from Oracle E-Business Suite on premise to the Oracle HCM Cloud.

During this customer forum call, the host, Linda Fishman, Sr. Director of Oracle HCM Cloud Customer Adoption Strategy, will interview Dodson about the reasons Oracle decided to switch from on premise to the cloud. They will also talk about why the company chose a phased approach for the implementation and where it is in that process. Finally, Dodson will explain the expectations for and benefits of the new modern HR system.

Register now to attend the live forum on Wednesday, August 12, 2015, at 9:00 a.m. Pacific Time / 12:00 p.m. Eastern Time.

permission issue due to one role

Laurent Schneider - Tue, 2015-08-04 09:21

Most permissions issues are due to a missing role or privilege.

But in the following test case you need to revoke the right to get more privileges.


create table tt(x number);
create view v as select * from tt;
create role rw;
grant all on v to rw;

I’ve created a read-write role on a view. The owner of the role is the DBA, but the owner of the view is the application. At the next release, the role may prevent an application upgrade.


SQL> create or replace view v as select * from dual;
ORA-01720: grant option does not exist for 'SYS.DUAL'

OK, if I drop the role, it works:


SQL> drop role rw;
Role dropped.
SQL> create or replace view v as select * from dual;
View created.

It is not always a good thing to grant privileges on a view when you are not the owner of that view.
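
A less drastic alternative, sketched below on the assumption that it is the existing grant on V that triggers the error during the replace, is to revoke just that grant rather than dropping the whole role (app_owner below is a placeholder for the view owner):


revoke all on v from rw;
create or replace view v as select * from dual;
-- re-granting on the new definition would again need the underlying
-- privilege held WITH GRANT OPTION, e.g. as SYS (only if acceptable):
--   grant select on sys.dual to app_owner with grant option;
--   grant select on v to rw;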

Microsoft Edge and PeopleSoft

Duncan Davies - Tue, 2015-08-04 08:00

The Windows 10 upgrade was released late last week, and with it came a new web browser – Microsoft Edge. Formerly codenamed Project Spartan, Edge is the default web browser in Windows 10. Internet Explorer 11 is also included with the new OS, but is basically unchanged from the version of IE11 found in Windows 7 and 8.1.

Although it might be a while until Windows 10 gains widespread enterprise adoption, it’ll likely have reasonably swift uptake in the home so Edge will start becoming an important browser for externally exposed PeopleSoft systems within 6 months or so.

First Impressions of Edge

It’s actually pretty nice. It’s clean, unobtrusive (unlike those Firefox skins) and snappy to use. It doesn’t work for all websites however – some sites give the following:

IE is needed

 

This is controlled by a ‘blacklist’ of sites however, so there’s no need to worry about your PeopleSoft implementation giving this message.

Edge and PeopleSoft

So, does it work with PeopleSoft? The answer is Yes, it certainly seems to. I’ve spent a fair amount of time noodling through some ‘difficult’ pages and they look OK to me. I compared with HCM 9.2 Image 13 – the latest at the time of writing – and both Fluid and Classic UIs look great.

fluid in edge


WordPress 4.2.4

Tim Hall - Tue, 2015-08-04 07:22

By the time you read this, you are probably auto-magically running on WordPress 4.2.4. :)

It’s a security release. You can read about the changes here.

Have a good time sitting back and doing nothing while it takes care of itself! :)

Cheers

Tim…


Blackboard Potential Sale: Market timing, financials, and some thoughts on potential buyers

Michael Feldstein - Tue, 2015-08-04 06:45

By Phil Hill

With Reuters’ story last week that Blackboard is putting itself up for sale through an auction, one question to ask is ‘why now?’. As Michael has pointed out, Blackboard is in the midst of a significant, but incomplete and late, re-architecture of its product line.

Bottom line: If you think that Ultra is all about playing catch-up with Instructure on usability, then the company’s late delivery, functionality gaps, and weird restrictions on where the product can and cannot be run look pretty terrible. But that’s probably not the right way to think about Ultra. The best analogy I can come up with is Apple’s Mac OS X. In both cases, we have a company that is trying to bring a large installed base of customers onto a substantially new architecture and new user experience without sending them running for the hills (or the competitors). This is a really hard challenge.

Market Timing

On the surface, it seems to be a high-risk move to try and sell a company before the changes are solidly in place and customers have demonstrated that they will move to new architecture rather than “running for the hills”.

Assuming that the Reuters story is accurate, I believe the answer to the question on ‘why now’ is that this move is about market timing – Blackboard wants to ride the current ed tech investment wave, and Providence Equity Partners (their owners) believe they can get maximum value now. This consideration trumps the otherwise logical strategy of waiting until more of the risk from the new user experience and cloud platform roll-out is removed by getting real products into a significant number of customers’ hands. VC investment and M&A activity are at high and potentially unsustainable levels. 2U has shown that ed tech companies can go public and be a success. Lynda.com has shown that relatively mature companies can be acquired for very high valuations. Instructure is likely to go public in early 2016. If you want to get a high price, sometimes it’s worth going out on a hot market before addressing most of the re-architecture risk.

Key Financials

There was one piece of information from the Reuters story that has gotten very little attention in follow-up reporting and analysis.

Blackboard has annual earnings before interest, tax, depreciation and amortization of around $200 million, some of the people added.

Two of the people said that Blackboard could fetch a valuation between 14 times to 17 times EBITDA, up to $3.4 billion, based on current multiples of subscription-based software companies.

Moody’s placed Blackboard’s EBITDA at roughly $180 million based on $1.4 billion of total rated debt and a 7.8 debt to EBITDA ratio. I suspect that the $200 million figure is based on forward-looking estimates while the $180 million is actuals. This Moody’s rating does give rough confirmation of the Reuters’ numbers.

For non-startups, one of the most common valuation metrics is Enterprise Value / EBITDA.[1] Based on Blackboard’s earnings reports when it was publicly traded as well as financial analyst estimates, the company’s EBITDA was $96 million in 2010, an estimated $120 million in 2011, and was expected to rise to $154 million in 2012. Current EBITDA of $200 million would indicate a 30% increase in profitability in the past three years. That increase is far less than the 60% EBITDA growth in the previous two years, but it is healthy.
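
As a quick back-of-the-envelope check of my own (nothing here beyond simple arithmetic on the numbers already quoted above):

\begin{align*}
\text{Reuters' upper bound:} &\quad \$200\text{M} \times 17 = \$3.4\text{B}\\
\text{Moody's implied EBITDA:} &\quad \$1.4\text{B} / 7.8 \approx \$180\text{M}\\
\text{EBITDA growth, 2010--2012:} &\quad 154/96 - 1 \approx 60\%\\
\text{EBITDA growth, 2012--present:} &\quad 200/154 - 1 \approx 30\%
\end{align*}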

Blackboard’s revenue in 2010 was $447 million, and the current EBITDA estimates would indicate current revenues between $600 million and $700 million.

Financial or Strategic Buyers

Industry medians for financial buyers (e.g. another private equity firm buying Blackboard from Providence) and subscription-based software companies are over 10x EBITDA (let’s say 10x – 13x, but likely not the 14x – 17x mentioned in the article), even in the ed tech market (the company got just over 12x in 2011). This would lead to an expected valuation of roughly $2 – $2.5 billion, far less than the desired $3 billion but more than the $1.64 billion price paid in 2011. If Blackboard really thinks they can get up to $3 billion based on the Reuters quote above, then I would assume that they are thinking more of a strategic buyer (e.g. another, larger, company buying Blackboard for some strategic asset such as installed customer base or rapidly growing international presence). The challenge is that $3 billion is a big price tag, and there are not that many education-related companies that could afford this purchase. Pearson, Google, Microsoft, and LinkedIn are example companies with some foothold in education and the financial ability to make a purchase (speculation alert, as I am not saying I think any of those companies will make a bid).

To get a sense on the bet that Blackboard might be making, consider the case of Lynda.com being purchased by LinkedIn for $1.5 billion. Founded in 1995, Lynda.com’s 2014 revenues were only $150 million with EBITDA of $7.5 – $15 million. That’s an EBITDA multiple of at least 100x. Only a strategic buyer could justify those numbers.

My suspicion is that Blackboard’s investment bankers have already done their research and have a rough idea of who has enough money and who might make a bid. This also leads to a question of timing.

How Would This Affect Customers?

Unlike 2011, we are not seeing the potential of a publicly-traded company going private, which was a far greater risk for customers worried about change. This time, the relative risk depends on who, if anyone, acquires Blackboard. If Blackboard settles for a far lower price in the auction and goes with a financial buyer, I would assume that the new owners would be betting on the same strategy that the current ownership and management team has in place (remove silos, streamline operations, grow internationally, re-architect product line), but they would plan to subsequently take Blackboard public in a few years.[2]

If, however, Blackboard is able to find a strategic buyer willing to pay the higher price, then the effect on current and prospective customers would largely depend on why that company is making the purchase. This scenario would lead to a greater chance of significant change for customers, whether for good or bad (or both).

It will be interesting to find out if the Reuters story is indeed accurate and if Blackboard does get acquired. We’ll keep watching.

Update 8/4: Clarified language on EBITDA multiples for software industry.

  1. From Wikinvest: EBITDA, which stands for “Earnings Before Interest, Taxes, Depreciation, and Amortization”, is a way of evaluating a company’s profitability which excludes items that make comparisons across companies difficult and which are viewed as not central to the company’s core operations.
  2. Credit to Michael Feldstein for the thought process in this paragraph.

The post Blackboard Potential Sale: Market timing, financials, and some thoughts on potential buyers appeared first on e-Literate.

The Proliferation of I.T. Voodoo…

Tim Hall - Tue, 2015-08-04 05:35

When I say “voodoo” in this context, I’m really talking about bullshit explanations for things based on guesswork, rather than reasoned argument built using facts and investigation.

It’s really easy for voodoo explanations to proliferate when people are starved of facts. There are several ways this can happen, but a couple of them that spring to mind and really piss me off are:

  • A user reports a problem. You fix it, but don’t give a meaningful explanation of what you have done. As a result, the user is left to “make up” an explanation to tell their superiors, which then becomes part of the folklore of that department. When you fix a problem, you need to provide a technical explanation of what you have done for those that can cope with it and a more layman friendly version for those that can’t. If you don’t do this, you are starting to dig a really big hole for yourself. Users will make shit up that will haunt you forever. Next time you want to process that big change it will be blocked because, “Bob says that we always have a problem when you reboot the payroll server on a Tuesday if the parking barrier is locked in an upright position. Today is Tuesday and the parking barrier was locked in an upright position this morning, so we don’t want to risk it!” Once this shit takes hold, there is no going back!
  • A user reports a problem.  You don’t do anything, but it mysteriously “fixes” itself. You need to make sure all parties know you’ve done nothing. You also need to suggest someone actually finds the root cause of the issue, without trying to start a witch hunt. Unless you know why something has happened, people will make up bullshit explanations and they will become department folklore, etc. See previous point.

For so long I.T. has had a poor reputation where user engagement is concerned and it *always* generates more problems for us than it actually does for the users. Get with the flippin’ program!

Cheers

Tim…

PS. Can you tell I’m pissed off about something? :)


Oracle Apps DBA Training : How to become Apps (R12) DBA Part II : Installation

Online Apps DBA - Tue, 2015-08-04 00:12

This is the second part in the series How to become Oracle Apps DBA from our Oracle Apps DBA (R12) Training (next batch starts on 8th Aug and only 3 seats left). In the first part, I mentioned that the first and foremost thing you need to understand is the architecture of Oracle E-Business Suite (WebLogic is introduced in E-Business Suite 12.2, so that is another topic an Apps DBA should learn).

After architecture, you should look at the installation of Oracle Apps (E-Business Suite). Refer to the Oracle E-Business Suite 12.2 installation guide here.

Here are the key points for an Oracle Apps R12 installation:

  • You can use Express Install or Normal Install (Express Install is a quick install with default values for things like usernames, mount points, etc.)
  • You start the R12 installer as the root user on Unix
  • You can do a single-user or multi-user install (in a multi-user install, the Application Tier and Database Tier file systems are owned by different users)
  • From 12.2 onwards you can configure Oracle RAC for the database during Oracle Apps installation (in prior releases, RAC was not possible during R12 installation; you changed the database to RAC after installation)
  • 12.2 uses two application tier file systems (FS1 & FS2) – a run file system and a patch file system – so the space requirement on 12.2 is slightly higher than in previous R12 versions.

 

Attend Oracle Apps DBA (R12 – 12.2) training to learn from experts and find out more about Oracle E-Business Suite 12.2 installation.

 


Get 100 USD off by registering by 4th Aug for the Oracle E-Business Suite Apps (R12.1) DBA Training and use code A1OFF at the time of checkout (we limit seats per batch and only 4 seats are left, so register early to avoid disappointment).

Previous in series
Related Posts for 12.2 New Features
  1. ADOP : Online Patching in Oracle Apps (E-Business Suite) R12 12.2 : Apps DBA’s Must Read
  2. How to become/learn Oracle Apps DBA R12.2 : Part I
  3. Oracle Apps DBA Training : How to become Apps (R12) DBA Part II : Installation

The post Oracle Apps DBA Training : How to become Apps (R12) DBA Part II : Installation appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

Using TAs As Key Component Of Active Learning Transformation at UC Davis

Michael Feldstein - Mon, 2015-08-03 17:07

By Phil Hill

Last week I described how UC Davis is making efforts to personalize one of the most impersonal of learning experiences – large lecture introductory science courses. It is telling that the first changes that they made were not to the lecture itself but to the associated discussion sections led by teaching assistants (TAs). It is well known that much of the instruction in lower division classes at large universities is led not by faculty but by TAs. This situation is often seen as a weakness of the business model of a research university, but it can also be leveraged as an opportunity to lead educational change. Consider this interview with staff from the iAMSTEM group at UC Davis from our e-Literate TV series on personalized learning:


Phil Hill: In biology, the UC Davis team applied the same learning principles but also included personalized learning software in the mix. The Open Learning Initiative, or OLI, is a grant-funded group at Carnegie Mellon University that started in 2001. UC Davis is using the OLI platform within the discussion sections of the redesigned biology course.

Erin Becker: In biology, we started at the level of the teaching assistants who are the ones that the students spend the majority of their face time with. They’re also the ones that we had the most control over because we had kind of more say over what the teaching assistants do than what the instructors do.

We went into the introductory biology course here and did a rigorous practice-based training program where we trained the TAs on various techniques that have evidence of effectiveness in increasing student learning. From that, we then have expanded into working with the instructors in that same biology course.

Phil Hill: If you had to summarize the learning-based changes that have been made, how would you describe them?

Erin Becker: I guess if I had to sum it up, I’d say keeping the students accountable in class. We trained the TAs to (it sounds very simple) call on the students; make the students know that they are responsible for having the knowledge in class time in real time.

Chris Pagilarulo: Come prepared.

The interview then went on to discuss the role of the OLI software in this teaching transformation for TAs. I found it interesting that the TA (Amanda) positioned the software as a feedback mechanism – automatic feedback freeing up TA grading time and immediate feedback for learning.

Amanda Fox: Where now, we have the Open Learning Initiative (OLI) as a pre-lab, and this is a set of somewhere between 15 to maybe 25 questions that they do online on their own. It doesn’t take the time from the TAs to grade that, so that there’s more questions that they can be asked, and be doing on their own time.

Phil Hill: Do you also get feedback from the system?

Amanda Fox: The OLI System?

Phil Hill: Yeah, from OLI?

Amanda Fox: We have a head TA who’s in charge of going over OLI, but I myself don’t do the grading for it, but I do get a readout of what questions they’re having the most difficult time with and what questions they felt comfortable with. Then I look at that. From each week we have a set of pre-discussions due, and I look at the results from that, and then I see the questions that most of my class had a difficulty with, and then I cover that in the next discussion.

I address maybe the top three questions out of fifteen or so that they had problems with, and I go over, “Do you understand why this is the correct answer, and why what you said is not correct?” One thing that’s awesome is that they get immediate response to whether I push a button, and it tells me right there: Did I push the correct button for the correct answer or not?

That immediate feedback, I think, is very helpful because otherwise these students were answering these questions maybe a week before they would turn it in to me. I would go over the answers in discussion, but that lapse in time between when they first thought about the question, and when they get the answer to it—I think it’s really good to have that immediate response between the two.

Software as feedback is only one component of the redesign, however, as a different TA (Guy) described the iAMSTEM training in effective teaching styles.

Guy Shani: One of the biggest concepts that have been emphasized towards us—we have TA meetings for two hours every week before our discussions. And part of that is to go over the material for the week and especially with some of the TAs that are not in the exact field that that week’s discussion is on—we need a little bit of review on that.

But the other part is sort of understanding how we should be approaching teaching because a lot of the time (a lot of discussions), especially early on in lower-division classes, the approach has frequently been just lecture. Basically, standing up with our backs to the class writing up on the board, and that’s not the most effective way to communicate an idea to the students.

So, we go over techniques like cold calling where we actively involve the students. I will say most of a sentence and call on a student at random to complete it, or I will ask a question and involve the students. This gives me both a good way to gauge whether they understand, and it keeps them all on their toes.

This theme of leveraging TAs for change by combining pedagogical training as well as focused software usage for immediate feedback to students is not unique to UC Davis. We saw a similar approach at Arizona State University.

The post Using TAs As Key Component Of Active Learning Transformation at UC Davis appeared first on e-Literate.

OTN Tour of Latin America 2015 : UYOUG, Uruguay – Day 1

Tim Hall - Mon, 2015-08-03 15:53

After 15 hours of sleep I still managed to feel tired. :) I went for breakfast at 6:30, then started to feel a little weird, so I took some headache pills and headed back to bed for an hour before meeting up with Debra and Mike to head down to the venue for the first day of the UYOUG leg of the tour…

The order of events went like this:

  • Pablo Ciccarelo started with an introduction to OTN and the ACE program, which was in Spanish, so I ducked out of it. :)
  • Mike Dietrich speaking about “How Oracle Single/Multitenant will change a DBA’s life”.
  • Me with “Pluggable Databases : What they will break and why you should use them anyway!” There was some crossover with Mike’s session, but we both emphasised different things, which is interesting in itself. :)
  • Debra Lilley with “PaaS4SaaS”. The talk focused on a POC for using PaaS to extend the functionality of Fusion Apps, which is a SaaS product.
  • Me with “Oracle Database Consolidation : It’s not all about Oracle database 12c”. I think this is the least technical talk I’ve ever done and that makes me rather nervous. Technical content and demos are a reassuring safety blanket for me, so having them taken away feels a bit like being naked in public (why am I now thinking of Bjoern?). The session is a beginner session, so I hope people didn’t come expecting something more than I delivered. See, I’m paranoid already!
  • Mike Dietrich on “Simple Minimal Downtime Migration to Oracle 12c using Full Transportable Export/Import”. I think I’ve used every feature discussed in this session, but I’ve never used them all together in this manner. I think I may go back to the drawing board for one of the migrations I’ve got coming up in the next few months.
  • Debra Lilley with “Are cloud apps really ready?”. There was some similarity between the message Debra was putting out here and some of the stuff I spoke about in my final talk.
  • Me with “It’s raining data! Oracle Databases in the Cloud”. This was also not a heavy technical session, but because so few people have experience of running databases in the cloud at the moment, I think it has a wider appeal, so I’m not so paranoid about the limited technical content.

So that was the first day of the UYOUG conference done. Tomorrow is an easy day for me. I’ve got a panel session in the middle of the day, then I’m done. :)

Thanks to everyone who came to my sessions. I hope you found them useful.

Having slept through yesterday’s social event, I will be going out to get some food tonight. They eat really late here, so by the time we get some food I’ll probably be thinking about bed. :)

Cheers

Tim…


Using a Tomcat provided buildpack in Bluemix

Pas Apicella - Mon, 2015-08-03 15:48
By default, if you push a Java application into public Bluemix, you will use the Liberty Java buildpack. If you want to use Tomcat, you can do that as follows.

1. Show the buildpacks available as follows

> cf buildpacks

2. The buildpack which uses Tomcat is as follows

java_buildpack

3. Specify that you would like to use the buildpack when pushing the application, as shown below

cf push pas-props -d mybluemix.net -i 1 -m 256M -b java_buildpack -p props.war

Example:

pas@Pass-MacBook-Pro-2:~/bluemix-apps/simple-java$ cf push pas-props -d mybluemix.net -i 1 -m 256M -b java_buildpack -p props.war
Creating app pas-props in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

Creating route pas-props.mybluemix.net...
OK

Binding pas-props.mybluemix.net to pas-props...
OK

Uploading pas-props...
Uploading app files from: props.war
Uploading 2.9K, 6 files
Done uploading
OK

Starting app pas-props in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
-----> Downloaded app package (4.0K)
-----> Java Buildpack Version: v3.0 | https://github.com/cloudfoundry/java-buildpack.git#3bd15e1
-----> Downloading Open Jdk JRE 1.8.0_51 from https://download.run.pivotal.io/openjdk/lucid/x86_64/openjdk-1.8.0_51.tar.gz (11.6s)
       Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.2s)
-----> Downloading Tomcat Instance 8.0.24 from https://download.run.pivotal.io/tomcat/tomcat-8.0.24.tar.gz (2.4s)
       Expanding Tomcat to .java-buildpack/tomcat (0.1s)
-----> Downloading Tomcat Lifecycle Support 2.4.0_RELEASE from https://download.run.pivotal.io/tomcat-lifecycle-support/tomcat-lifecycle-support-2.4.0_RELEASE.jar (0.1s)
-----> Downloading Tomcat Logging Support 2.4.0_RELEASE from https://download.run.pivotal.io/tomcat-logging-support/tomcat-logging-support-2.4.0_RELEASE.jar (0.4s)
-----> Downloading Tomcat Access Logging Support 2.4.0_RELEASE from https://download.run.pivotal.io/tomcat-access-logging-support/tomcat-access-logging-support-2.4.0_RELEASE.jar (0.4s)

-----> Uploading droplet (50M)

0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App pas-props was started using this command `JAVA_HOME=$PWD/.java-buildpack/open_jdk_jre JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh -Xmx160M -Xms160M -XX:MaxMetaspaceSize=64M -XX:MetaspaceSize=64M -Xss853K -Daccess.logging.enabled=false -Dhttp.port=$PORT" $PWD/.java-buildpack/tomcat/bin/catalina.sh run`

Showing health and status for app pas-props in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

requested state: started
instances: 1/1
usage: 256M x 1 instances
urls: pas-props.mybluemix.net
last uploaded: Mon Aug 3 21:40:36 UTC 2015

     state     since                    cpu    memory           disk           details
#0   running   2015-08-03 02:41:47 PM   0.0%   136.7M of 256M   125.7M of 1G
Categories: Fusion Middleware

History Lesson

Floyd Teter - Mon, 2015-08-03 12:53
I'm a student of history.  There is so much to be learned from it.  Today's lesson comes from NASA and relates directly to enterprise software projects.

From 1992 to 1999, NASA launched 16 major missions under the umbrella of the "Faster, Better, Cheaper" or "FBC" program.  These unmanned space exploration missions included five trips to Mars, one to the Moon, four Earth-orbiting satellites and an asteroid rendezvous.  10 of the 15 missions were great successes, including:
  • The NEAR Earth Asteroid Rendezvous (NEAR)
  • The Pathfinder mission to Mars
  • The Stardust mission that collected, analyzed and returned comet tail particles to Earth
The nine successful FBC missions started with tight budgets, tight scopes, and tight schedules. They all delivered early and under budget.

So long as NASA stuck to the core principles of FBC, the program was a great success:  9 missions successfully executed in seven years.  By comparison, the Cassini mission, while also very successful, took over 15 years to execute.  And all 10 successful missions were completed for a fraction of the cost of the Cassini mission.  The FBC program came to a grinding halt when NASA strayed from the core ideas that made the program work:  the failed Mars Polar Lander and the Mars Climate Orbiter came from the latter part of the FBC program.

Let's look at the core principles that made FBC successful:
  • Do it wrong:  Alexander Laufer and Edward Hoffman explained in a 1998 report entitled "Ninety-Nine Rules for Managing Faster, Better, Cheaper Projects" that in order to do things quickly and right, you have to be willing to do it wrong first.  Experience is the best teacher.
  • Reject good ideas:  NEAR project manager Thomas Coughlin had no shortage of well-meaning good ideas for features, parts and functions to add to the spacecraft.   Essentially all stayed on the cutting room floor.  Reject good ideas and stick to your original scope.
  • Simplify and accelerate everything:  the NEAR project used a 12-line project schedule for the entire mission.  That's right - 12 lines.  Progress reports were limited to three minutes.  If we can build spacecraft with a 12-line project schedule, tell me again why our enterprise project schedules run multiple pages?
  • Limit innovation by keeping it relevant.  While innovation is always cool, it's not relevant if it does not contribute meaningfully to your project's objectives.  Shipping something that actually does something well is much better than shipping something built on the newest technology that is complex to use or fails to perform reliably in a multitude of circumstances.
  • You can't inspect quality into the system.  NASA's failure to stick with this principle led to the poor ending for the FBC program.  To a great degree, the Mars Pathfinder was a success at JPL because the project was so small that it flew under the radar - no significant administrative oversight.  When FBC oversight increased after 1999 at all NASA centers, the successes stopped coming.  You can put the clues together here, can't you?
Do you recognize the themes here?  Simplicity.  Restraint.  Freedom to act and take risks within tight constraints.  The combination led to some elegant and highly successful projects.

And, by the way, the recent New Horizons mission sending us pictures and data from Pluto?  Lots of heritage from FBC.  So, yes, these ideas still work...for missions much more complex than enterprise software.

So, you want to #beat39 with your enterprise software projects?  This history lesson is a great place to start.

PeopleSoft Spotlight Series: Fluid

Jim Marion - Mon, 2015-08-03 11:03

Have you seen the new PeopleTools Spotlight series? I just finished the Developing Fluid Applications lesson. This 1-hour video walks you through creating a fluid component, including some of the special CSS style classes, such as psc_column-2. The example is very well done and should get new fluid developers headed down the right path.

Open last one-note page

Laurent Schneider - Mon, 2015-08-03 10:52

If you have a OneNote document, you may want to automatically go to the last page. This is possible with PowerShell.

First you create a ComObject. There are incredibly many ComObjects that can be manipulated in PowerShell.


$o = New-Object -ComObject OneNote.Application

Now it gets a bit confusing. First you open your document.


[ref]$x = ""
$o.OpenHierarchy("Z:\Reports.one", "", $x, "cftNone")

Now you get the XML


$o.GetHierarchy("", "hsPages", $x)

With the XML, you select the last page. For instance :


$p = (([xml]($x.value)).Notebooks.OpenSections.Section.Page | select -last 1).ID

And from the ID, you generate a URL with GetHyperlinkToObject.


[ref]$h = ""
$o.GetHyperlinkToObject($p,"",$h)

Now we can open the url onenote:///Z:\Reports.one#W31&section-id={12345678-1234-1234-123456789ABC}&page-id={12345678-1234-1234-123456789ABC}&end


start-process $h.value

Provisioning Virtual ASM (vASM)

Steve Karam - Mon, 2015-08-03 06:30

In this post, we’re going to use Delphix to create a virtual ASM diskgroup, and provision a clone of the virtual ASM diskgroup to a target system. I call it vASM, which is pronounced “vawesome.” Let’s make it happen.

She’s always hungry. She always needs to feed. She must eat. – Gollum

Most viewers assume Gollum was talking about Shelob the giant spider here, but I have it on good authority that he was actually talking about Delphix. You see, Delphix (Data tamquam servitium in trinomial nomenclature) is the world’s most voracious datavore. Simply put, Delphix eats all the data.

Now friends of mine will tell you that I absolutely love to cook, and they actively make it a point to never talk to me about cooking because they know I’ll go on like Bubba in Forrest Gump and recite the million ways to make a slab of meat. But if there’s one thing I’ve learned from all my cooking, it’s that it’s fun to feed people who love to eat. With that in mind, I went searching for new recipes that Delphix might like and thought, “what better meal for a ravenous data muncher than an entire volume management system?”

vASM

In normal use, Delphix links to an Oracle database and ingests changes over time by using RMAN “incremental as of SCN” backups, archive logs, and online redo. This creates what we call a compressed, deduped timeline (called a Timeflow) that you can provision as one or more Virtual Databases (VDBs) from any points in time.

However, Delphix has another interesting feature known as AppData, which allows you to link to and provision copies of flat files like unstructured files, scripts, software binaries, code repositories, etc. It uses rsync to build a Timeflow, and allows you to provision one or more vFiles from any points in time. But on top of that (and even cooler in my opinion), you have the ability to create “empty vFiles” which amounts to an empty directory on a system; except that the storage for the directory is served straight from Delphix. And it is this area that serves as an excellent home for ASM.

We’re going to create an ASM diskgroup using Delphix storage, and connect to it with Oracle’s dNFS protocol. Because the ASM storage lives completely on Delphix, it takes advantage of Delphix’s deduplication, compression, snapshots, and provisioning capabilities.

Some of you particularly meticulous (read: cynical) readers may wonder about running ASM over NFS, even with dNFS. I’d direct your attention to this excellent test by Yury Velikanov. Of course, your own testing is always recommended.

I built this with:

  • A Virtual Private Cloud (VPC) in Amazon Web Services
  • Redhat Enterprise Linux 6.5 Source and Target servers
    • Oracle 11.2.0.4 Grid Infrastructure
    • 11.2.0.4 Oracle Enterprise Edition
  • Delphix Engine 4.2.4.0
  • Alchemy
Making a vASM Diskgroup

Before we get started, let’s turn on dNFS while nothing is running. This is as simple as using the following commands on the GRID home:

[oracle@ip-172-31-0-61 lib]$ cd $ORACLE_HOME/rdbms/lib
[oracle@ip-172-31-0-61 lib]$ pwd
/u01/app/oracle/product/11.2.0/grid/rdbms/lib
[oracle@ip-172-31-0-61 lib]$ make -f ins_rdbms.mk dnfs_on
rm -f /u01/app/oracle/product/11.2.0/grid/lib/libodm11.so; cp /u01/app/oracle/product/11.2.0/grid/lib/libnfsodm11.so /u01/app/oracle/product/11.2.0/grid/lib/libodm11.so
[oracle@ip-172-31-0-61 lib]$

Now we can create the empty vFiles area in Delphix. This can be done through the Delphix command line interface, API, or through the GUI. It’s exceedingly simple to do, requiring only a server selection and a path.

Gallery: Create vFiles, Choose Path for vASM disks, Choose Delphix Name, Set Hooks, Finalize

Let’s check our Linux source environment and see the result:

[oracle@ip-172-31-0-61 lib]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            9.8G  5.0G  4.3G  54% /
tmpfs                 7.8G   94M  7.7G   2% /dev/shm
/dev/xvdd              40G   15G   23G  39% /u01
172.31.7.233:/domain0/group-35/appdata_container-32/appdata_timeflow-44/datafile
                       76G     0   76G   0% /delphix/mnt

Now we’ll create a couple ASM disk files that we can add to an ASM diskgroup:

[oracle@ip-172-31-0-61 lib]$ cd /delphix/mnt
[oracle@ip-172-31-0-61 mnt]$ truncate --size 20G disk1
[oracle@ip-172-31-0-61 mnt]$ truncate --size 20G disk2
[oracle@ip-172-31-0-61 mnt]$ ls -ltr
total 1
-rw-r--r--. 1 oracle oinstall 21474836480 Aug  2 19:26 disk1
-rw-r--r--. 1 oracle oinstall 21474836480 Aug  2 19:26 disk2

Usually the “dd if=/dev/zero of=/path/to/file” command is used for this purpose, but I used the “truncate” command. This command quickly creates sparse files that are perfectly suitable.

And we’re ready! Time to create our first vASM diskgroup.

SQL> create diskgroup data
  2  disk '/delphix/mnt/disk*';

Diskgroup created.

SQL> select name, total_mb, free_mb from v$asm_diskgroup;

NAME				 TOTAL_MB    FREE_MB
------------------------------ ---------- ----------
DATA				    40960      40858

SQL> select filename from v$dnfs_files;

FILENAME
--------------------------------------------------------------------------------
/delphix/mnt/disk1
/delphix/mnt/disk2

The diskgroup has been created, and we verified that it is using dNFS. But creating a diskgroup is only 1/4th the battle. Let’s create a database in it. I’ll start with the simplest of pfiles, making use of OMF to get the database up quickly.

[oracle@ip-172-31-0-61 ~]$ cat init.ora
db_name=orcl
db_create_file_dest=+DATA
sga_target=4G
diagnostic_dest='/u01/app/oracle'

And create the database:

SQL> startup nomount pfile='init.ora';
ORACLE instance started.

Total System Global Area 4275781632 bytes
Fixed Size		    2260088 bytes
Variable Size		  838861704 bytes
Database Buffers	 3422552064 bytes
Redo Buffers		   12107776 bytes
SQL> create database;

Database created.

I’ve also run catalog.sql, catproc.sql, and pupbld.sql and created an SPFILE in ASM, but I’ll skip pasting those here for at least some semblance of brevity. You’re welcome. I also created a table called “TEST” that we’ll try to query after the next part.
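
For reference, those skipped steps typically look something like the following – a sketch only, using the standard script locations; the init.ora pointer at the end is illustrative rather than a copy of what was actually run:

SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
SQL> -- pupbld.sql is normally run as SYSTEM
SQL> connect system
SQL> @?/sqlplus/admin/pupbld.sql
SQL> connect / as sysdba
SQL> create spfile='+DATA' from pfile='init.ora';
SQL> -- then point a stub initorcl.ora in $ORACLE_HOME/dbs at the SPFILE, e.g.
SQL> -- SPFILE='+DATA/orcl/spfileorcl.ora'   (illustrative path)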

Cloning our vASM Diskgroup

Let’s recap what we’ve done thus far:

  • Created an empty vFiles area from Delphix on our Source server
  • Created two 20GB “virtual” disk files with the truncate command
  • Created a +DATA ASM diskgroup with the disks
  • Created a database called “orcl” on the +DATA diskgroup

In sum, Delphix has eaten well. Now it’s time for Delphix to do what it does best, which is to provision virtual objects. In this case, we will snapshot the vFiles directory containing our vASM disks, and provision a clone of them to the target server. You can follow along with the gallery images below.

Gallery: Snapshot vASM area, Provision vFiles, Choose the Target and vASM Path, Name the Target for vASM, Set Hooks, Finalize Target

Here’s the vASM location on the target system:

[oracle@ip-172-31-2-237 ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            9.8G  4.1G  5.2G  44% /
tmpfs                 7.8G   92M  7.7G   2% /dev/shm
/dev/xvdd              40G   15G   23G  39% /u01
172.31.7.233:/domain0/group-35/appdata_container-34/appdata_timeflow-46/datafile
                       76G  372M   75G   1% /delphix/mnt

Now we’re talking. Let’s bring up our vASM clone on the target system!

SQL> alter system set asm_diskstring = '/delphix/mnt/disk*';

System altered.

SQL> alter diskgroup data mount;

Diskgroup altered.

SQL> select name, total_mb, free_mb from v$asm_diskgroup;

NAME				 TOTAL_MB    FREE_MB
------------------------------ ---------- ----------
DATA				    40960      39436

But of course, we can’t stop there. Let’s crack it open and access the tasty “orcl” database locked inside. I copied over the “initorcl.ora” file from my source so it knows where to find the SPFILE in ASM. Let’s start it up and verify.

SQL> startup;
ORACLE instance started.

Total System Global Area 4275781632 bytes
Fixed Size		    2260088 bytes
Variable Size		  838861704 bytes
Database Buffers	 3422552064 bytes
Redo Buffers		   12107776 bytes
Database mounted.
Database opened.
SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATA/orcl/datafile/system.259.886707477
+DATA/orcl/datafile/sysaux.260.886707481
+DATA/orcl/datafile/sys_undots.261.886707485

SQL> select * from test;

MESSAGE
--------------------------------------------------------------------------------
WE DID IT!

As you can see, the database came online, the datafiles are located on our virtual ASM diskgroup, and the table I created prior to the clone operation came over with the database inside of ASM. I declare this recipe a resounding success.

Conclusion

A lot happened here. Such is the case with a good recipe. But in the end, my actions were deceptively simple:

  • Create a vFiles area
  • Create disk files and an ASM diskgroup inside of Delphix vFiles
  • Create an Oracle database inside the ASM diskgroup
  • Clone the Delphix vFiles to a target server
  • Bring up vASM and the Oracle database on the target server

With this capability, it’s possible to do some pretty incredible things. We can provision multiple copies of one or more vASM diskgroups to as many systems as we please. What’s more, we can use Delphix’s data controls to rewind vASM diskgroups, refresh them from their source diskgroups, and even create vASM diskgroups from cloned vASM diskgroups. Delphix can also replicate vASM to other Delphix engines so you can provision in other datacenters or cloud platforms. And did I mention it works with RAC? vFiles can be mounted on multiple systems, a feature we use for multi-tier EBS provisioning projects.

But perhaps the best feature is that you can use Delphix’s vASM disks as a failgroup to a production ASM diskgroup. That means that your physical ASM diskgroups (using normal or high redundancy) can be mirrored via Oracle’s built in rebalancing to a vASM failgroup comprised of virtual disks from Delphix. In the event of a disk loss on your source environment, vASM will protect the diskgroup. And you can still provision a copy of the vASM diskgroup to another system and force mount for the same effect we saw earlier.
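
As a sketch of what that might look like – the physical disk paths below are illustrative, not taken from this environment – a normal redundancy diskgroup mirrored between local storage and Delphix-served disks could be created like this:

SQL> create diskgroup data2 normal redundancy
  2  failgroup local disk '/dev/asmdisk1', '/dev/asmdisk2'
  3  failgroup delphix disk '/delphix/mnt/disk1', '/delphix/mnt/disk2';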

There is plenty more to play with and discover here. But we’ll save that for dessert. Delphix is hungry again.

The post Provisioning Virtual ASM (vASM) appeared first on Oracle Alchemist.

Demo data

Jonathan Lewis - Mon, 2015-08-03 06:26

One of the articles I wrote for redgate’s AllthingsOracle site some time ago included a listing of the data distribution for some client data which I had camouflaged. A recent comment on the article asked how I had generated the data – of course the answer was that I hadn’t generated it, but I had done something to take advantage of its existence without revealing the actual values.  This article is just a little note showing what I did; it’s not intended as an elegant and stylish display of perfectly optimised SQL, it’s an example of a quick and dirty one-off  hack that wasn’t (in my case) a disaster to run.

I’ve based the demonstration on the view all_objects. We start with a simple query showing the distribution of the values of column object_type:


break on report
compute sum of count(*) on report

select
        object_type, count(*)
from
        all_objects
group by object_type
order by
        count(*) desc
;

OBJECT_TYPE           COUNT(*)
------------------- ----------
SYNONYM                  30889
JAVA CLASS               26447
...
JAVA RESOURCE              865
TRIGGER                    509
JAVA DATA                  312
...
JAVA SOURCE                  2
DESTINATION                  2
LOB PARTITION                1
EDITION                      1
MATERIALIZED VIEW            1
RULE                         1
                    ----------
sum                      76085

44 rows selected.

Starting from this data set I want 44 randomly generated strings and an easy way to translate the actual object type into one of those strings. There are various ways to do this but the code I hacked out put the original query into an inline view, surrounded it with a query that added a rownum to the result set to give each row a unique id, then used the well-known and much-loved  “connect by level” query against  dual to generate a numbered list of randomly generated strings as an inline view that I could use in a join to do the translation.


execute dbms_random.seed(0)

column random_string format a6

select
        generator.id,
        dbms_random.string('U',6)       random_string,
        sum_view.specifier,
        sum_view.ct                     "COUNT(*)"
from
        (
        select
                rownum  id
                from    dual
                connect by
                        level <= 100
        )       generator,
        (
        select
                rownum          id,
                specifier,
                ct
        from
                (
                select
                        object_type specifier, count(*) ct
                from
                        all_objects
                group by
                        object_type
                order by
                        count(*) desc
                )
        )       sum_view
where
        sum_view.id = generator.id
order by
        ct desc
;

        ID RANDOM SPECIFIER             COUNT(*)
---------- ------ ------------------- ----------
         1 BVGFJB SYNONYM                  30889
         2 LYYVLH JAVA CLASS               26447
...
         9 DNRYKC JAVA RESOURCE              865
        10 BEWPEQ TRIGGER                    509
        11 UMVYVP JAVA DATA                  312
...
        39 EYYFUJ JAVA SOURCE                  2
        40 SVWKRC DESTINATION                  2
        41 CFKBRX LOB PARTITION                1
        42 ZWVEVH EDITION                      1
        43 DDAHZX MATERIALIZED VIEW            1
        44 HFWZBX RULE                         1
                                      ----------
sum                                        76085

44 rows selected.

I’ve selected the id and original value here to show the correspondence, but didn’t need to show them in the original posting. I’ve also left the original (now redundant) “order by” clause in the main inline view, and you’ll notice that even though I needed only 44 distinct strings for the instance I produced the results on I generated 100 values as a safety margin for testing the code on a couple of other versions of Oracle.

A quick check for efficiency – a brief glance at the execution plan, which might have prompted me to add a couple of /*+ no_merge */ hints if they’d been necessary – showed that the work done was basically the work of the original query plus a tiny increment for adding the rownum and doing the “translation join”. Of course, if I’d then wanted to translate the full 76,000 row data set and save it as a table I’d have to join the result set above back to a second copy of all_objects – and it’s translating full data sets, while trying to deal with problems of referential integrity and correlation, where the time disappears when masking data.

It is a minor detail of this code that it produced fixed-length strings (which matched the structure of the original client data). Had I felt the urge I might have used something like: dbms_random.string('U', trunc(dbms_random.value(4,21))) to give me a random distribution of string lengths between 4 and 20. Getting fussier I might have extracted the distinct values for object_type and then generated a random string that matched the length of the value it was due to replace. Fussier still I might have generated the right number of random strings matching the length of the longest value, sorted the original and random values into alphabetical order to align them, then trimmed each random value to the length of the corresponding original value.
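
For example, the length-matching variant needs nothing more than a reference to the original value in the call to dbms_random.string – a sketch:

select
        object_type,
        dbms_random.string('U', length(object_type))    random_string
from    (
        select distinct object_type from all_objects
        );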

It’s extraordinary how complicated it can be to mask data realistically – even when you’re looking at just one column in one table. And here’s a related thought – if an important type of predicate in the original application with the original data is where object_type like ‘PACK%’ how do you ensure that your masked data is consistent with the data that would be returned by this query and how do you determine the value to use instead of “PACK” as the critical input when you run the critical queries against the masked data? (Being privileged may give you part of the answer, but bear in mind that the people doing the testing with that data shouldn’t be able to see the unmasked data or any translation tables.)

 

 

 


We’d Appreciate Your Help

Duncan Davies - Mon, 2015-08-03 05:35

We’d really appreciate your help. But first, a bit of background:

The Partner of the Year awards are an annual ceremony held by the UK Oracle User Group. It allows customers to show appreciation for partners that have provided a good service to them over the previous 12 months. As you would imagine, being voted best by end-users is a wonderful accolade.

If you’re the recipient of any Cedar service – and this can be consultancy, advisory, or even the free PeopleSoft and Fusion Weekly newsletters that we send out – we’d be very, very grateful if you gave 3 minutes of your time to vote for us.

We’re up for 3 awards, scroll down to see why we think we deserve your vote. In case you’re already convinced, here’s the drill:

What we’d like you to do:

1) Click here (opens in new window).

2) Fill in your company name, first name and surname. Then click Next.

3) Enter your email address in both fields, then click Next.

4) Select any checkboxes if you want ‘follow-up communications’ from the UKOUG, or leave all blank, and click Next.

5) Select Cedar Consulting from the drop-down, and click Next.

6) On the PeopleSoft page, select the Gold radio button on the Cedar Consulting row (note, it’s the 3rd column!), then click Next.

7) Repeat by selecting the Gold radio button on the Cedar Consulting row of the Managed Services page, then click Next.

8) Repeat by selecting the Gold radio button on the Cedar Consulting row of the Fusion page, then click Next.

9) Click Submit.

And you’re done. Thank you very much. If you want some gratitude for your 3 minutes of effort drop me an email and I’ll thank you personally!

Why Vote For Us?

PeopleSoft Partner of the Year

This year we have worked with over 40 PeopleSoft clients. We have completed major global implementations, have 15 PS v9.2 upgrade projects either complete or in progress as well as delivering a busy programme of user-focused events.  Our events included the 16th PeopleSoft Executive Dinner, PeopleSoft Optimisation, the PeopleSoft & Oracle Cloud Day plus this year’s Executive Forums.  We also presented at the UKOUG conference and SIG.

Oracle Cloud (Fusion / Taleo) Partner of the Year

Over the last few years, we have developed from a PeopleSoft implementer to a specialist provider of Oracle HR Cloud services. We have a completed Fusion implementation under our belt and are currently implementing multiple Fusion and Taleo projects for different clients.

Managed Service / Outsourcing Partner of the Year

Throughout last year, 20 PeopleSoft customers have trusted us to provide market-leading Managed Services. This makes us the largest PeopleSoft Managed Services provider in the UK. Add to this the ability for us to outsource work to our Cedar India office to deliver for clients at a lower price point and we have a strong set of offerings.

We’ve had great success in this competition in the last couple of years and would value your vote. Thanks for your time.


Data messes

DBMS2 - Mon, 2015-08-03 03:58

A lot of what I hear and talk about boils down to “data is a mess”. Below is a very partial list of examples.

To a first approximation, one would expect operational data to be rather clean. After all, it drives and/or records business transactions. So if something goes awry, the result can be lost money, disappointed customers, or worse, and those are outcomes to be strenuously avoided. Up to a point, that’s indeed true, at least at businesses large enough to be properly automated. (Unlike, for example — :) — mine.)

Even so, operational data has some canonical problems. First, it could be inaccurate; somebody can just misspell or otherwise botch an entry. Further, there are multiple ways data can be unreachable, typically because it’s:

  • Inconsistent, in which case humans might not know how to look it up and database JOINs might fail.
  • Unintegrated, in which case one application might not be able to use data that another happily maintains. (This is the classic data silo problem.)

Inconsistency can take multiple forms, including: 

  • Variant names.
  • Variant spellings.
  • Variant data structures (not to mention datatypes, formats, etc.).

Addressing the first two is the province of master data management (MDM), and also of the same data cleaning technologies that might help with outright errors. Addressing the third is the province of other data integration technology, which also may be what’s needed to break down the barriers between data silos.

So far I’ve been assuming that data is neatly arranged in fields in some kind of database. But suppose it’s in documents or videos or something? Well, then there’s a needed step of data enhancement; even when that’s done, further data integration issues are likely to be present.

All of the above issues occur with analytic data too. In some cases it probably makes sense not to fix them until the data is shipped over for analysis. In other cases, it should be fixed earlier, but isn’t. And in hybrid cases, data is explicitly shipped to an operational data warehouse where the problems are presumably fixed.

Further, some problems are much greater in their analytic guise. Harmonization and integration among data silos are likely to be much more intense. (What is one table for analytic purposes might be many different ones operationally, for reasons that might span geography, time period, or application legacy.) Addressing those issues is the province of data integration technologies old and new. Also, data transformation and enhancement are likely to be much bigger deals in the analytic sphere, in part because of poly-structured internet data. Many Hadoop and now Spark use cases address exactly those needs.

Let’s now consider missing data. In operational cases, there are three main kinds of missing data:

  • Missing values, as a special case of inaccuracy.
  • Data that was only collected over certain time periods, as a special case of changing data structure.
  • Data that hasn’t been derived yet, as the main case of a need for data enhancement.

All of those cases can ripple through to cause analytic headaches. But for certain inherently analytic data sets — e.g. a weblog or similar stream — the problem can be even worse. The data source might stop functioning, or might change the format in which it transmits; but with no immediate operations compromised, it might take a while to even notice. I don’t know of any technology that does a good, simple job of addressing these problems, but I am advising one startup that plans to try.

Further analytics-mainly data messes can be found in three broad areas:

  • Problems caused by new or changing data sources hit much faster in analytics than in operations, because analytics draws on a greater variety of data.
  • Event recognition, in which most of a super-high-volume stream is discarded while the “good stuff” is kept, is more commonly a problem in analytics than in pure operations. (That said, it may arise on the boundary of operations and analytics, namely in “real-time” monitoring.)
  • Analytics has major problems with data scavenger hunts, in which business analysts and data scientists don’t know what data is available for them to examine.

That last area is the domain of a lot of analytics innovation. In particular:

  • It’s central to the dubious Gartner concept of a Logical Data Warehouse, and to the more modest logical data layers I advocate as an alternative.
  • It’s been part of BI since the introduction of Business Objects’ “semantic layer”. (See, for example, my recent post on Zoomdata.)
  • It’s a big part of the story of startups such as Alation or Tamr.
  • In a failed effort, it was part of Greenplum’s pitch some years back, as an aspect of the “enterprise data cloud”.
  • It led to some of the earliest differentiated features at Gooddata.
  • It’s implicit in some BI collaboration stories, in some BI/search integration, and in ClearStory’s “Data You May Like”.

Finally, suppose we return to the case of operational data, assumed to be accurately stored in fielded databases, with sufficient data integration technologies in place. There’s still a whole other kind of possible mess than those I cited above — applications may not be doing a good job of understanding and using it. I could write a whole series of posts on that subject alone … but it’s going slowly. :) So I’ll leave that subject area for another time.

Categories: Other

OTN Tour of Latin America 2015 : The Journey Begins – Arrival at Montevideo, Uruguay

Tim Hall - Mon, 2015-08-03 01:45

After the quick flight to Montevideo, I was met by Edelwisse and Nelson. A couple of minutes later Mike Dietrich arrived. You know, that guy that pretends to understand upgrades! We drove over to the hotel, arriving at about 11:00. Check-in was not until 15:00, so I had to wait a few minutes for them to prep my room. The others were going out to get some food, but I had a hot date with my bed. I got to my room, showered and hit the hay.

I was meant to meet up with the others at about 19:00 to get some food, but I slept through. In fact, I slept until about 04:00 the next day, which was about 15 hours. I think that may be a record… I’m feeling a bit punch-drunk now, but I’m sure once I start moving things will be fine…

Today is the first day of the tour proper. Fingers crossed…

Cheers

Tim…


User Hash Support

Anthony Shorten - Sun, 2015-08-02 21:45

In Oracle Utilities Application Framework V4.x, a new column was added to the user object to add an additional layer of security. This field is a user hash that is generated from the complete user object. The idea behind the hash is that when a user logs in, a hash is calculated for the session and checked against the user record registered in the system. If the generated hash does not match the hash recorded on the user object, then the user object may not be valid, so the user cannot log in.

This hash is there to detect any attempt to alter the user definition using an invalid method. If an alteration was not made using the provided interfaces (the online maintenance or a Web Service), then the record cannot be trusted, so the user cannot use that identity. The idea is that if someone "hacks" the user definition using an invalid method, the user object will become invalid and therefore effectively locked. It protects the integrity of the user definition.

This facility typically causes no issues, but here are a few guidelines for using it appropriately:

  • The user object should only be modified using the online maintenance transaction, F1-LDAP job, user preferences maintenance or a Web Service against the user object. The user hash is regenerated correctly when a valid access method is used.
  • If you are loading new users from a repository, the user hash must be generated. It is recommended to use a Web Services based interface to the user object to load the users to avoid the hash becoming invalid.
  • If a user uses a valid identity and the valid password but gets an Invalid Login message, then it is most likely that the user hash comparison has found an inconsistency. You might want to investigate this before resolving the user hash inconsistency.
  • The user hash is generated using the keystore key used by the product installation. If the keystore or values in the keystore are changed, you will need to regenerate ALL the hash keys.
  • There are two ways of addressing this issue:
    • A valid administrator can edit the individual user object within the product and make a simple change to force the hash key to be regenerated.
    • Regenerate the hash keys globally using the commands outlined in the Security Guide. This should be done if it is a global issue or at least an issue for more than one user.

For more information about this facility and other security facilities, refer to the Security Guide shipped with your product.