Feed aggregator

Upcoming Webcast: Do You Have The Right Directory Services For Cloud Computing?

Mark Wilcox - Sun, 2011-03-20 23:40

I'm giving a new webcast this week about making sure you choose the right directory service for cloud computing:
Webcast Date: Thursday, March 24, 2011
Webcast Time: 10:00 AM Pacific Daylight Time / 1:00 PM Eastern Daylight Time

Please register and attend to learn about the key points you need to keep in mind when choosing a directory service for your cloud initiatives.

Posted via email from Virtual Identity Dialogue

Open cursor paranoia

Rob van Wijk - Thu, 2011-03-17 17:15
Most PL/SQL developers will likely have witnessed this phenomenon several times during their career. But only in other people's code, of course :-). I'm talking about PL/SQL code where every program unit ends like this:

exception
  when others then
    if c%isopen
    then
      close c;
    end if;
    raise;
end;

where lines 3 to 6 are repeated for every cursor in the block above. Proponents of open cursor paranoia...

EJB 3 In Action, 2nd Edition

Debu Panda - Thu, 2011-03-17 00:00
The second edition of EJB 3 In Action was announced recently. Ryan Cuprak joined as a new author of the book. Ryan and Reza are doing most of the work on the book. We have made a lot of changes in the content to include EJB 3.1 and other Java EE features such as CDI.
Here is the table of contents for the book:

Part I: Overview of the EJB landscape
1. What's what in EJB 3.1
2. A first taste of EJB 3

Part II: Working with EJB 3 components
3. Building business logic with session beans
4. Messaging and message-driven beans
5. EJB runtime context, dependency injection, and aspect oriented programming
6. Transactions and security
7. Scheduling and timers in EJB
8. Exposing EJBs as SOAP and REST web services

Part III: Using EJB 3 with JPA and CDI
9. JPA entities
10. Managing entities
11. Using CDI with EJB 3

Part IV: Putting EJB 3 into action
12. Packaging EJB 3 applications
13. EJB 3 testing
14. Designing EJB-based systems
15. EJB performance and scalability
16. EJB 3, Seam, and Spring
17. The future of EJB 3

Appendices
A. RMI primer
B. Migrating from EJB 2.1 to EJB 3
C. Annotations reference
D. Deployment descriptors reference
E. Installing and configuring the Java EE 6 SDK
F. EJB 3 developer certification exam
G. EJB 3 tools support

The book is available through the Manning Early Access Program (MEAP). You can join MEAP and help improve the book.

Web 2.0 Solutions with Oracle WebCenter 11g (book review)

Eduardo Rodrigues - Wed, 2011-03-16 20:16
by Fábio Souza

Hello People! This was supposed to be a post to celebrate the new year but, as you all can notice, things didn't happen the way I was expecting (again haha). Today I will talk...

This is a summary only. Please, visit the blog for full content and more.


Why we do not use PowerConnect to access PeopleSoft Tree

Dylan Wan - Wed, 2011-03-16 12:51

1. It does not allow you to pass parameters to the PeopleSoft connection. This may change later, but it was a big issue when we tried to address customer issues.

2. It requires EFFDT as an option, and it expects people to change the EFFDT using the Mapping Editor. How can a business user do that every month?

3. It asks for a Tree Name. Many PeopleSoft tree structures support multiple trees; a tree is just a header of the hierarchy. Whenever you add a new tree, you need to create a new mapping!

Given these customer demands, it does not make sense to use PowerConnect; all of the requirements above come from customers. We have no choice but to stop using it.

Categories: BI & Warehousing

Oracle JHeadstart 11.1.1.3 Now Available

JHeadstart - Wed, 2011-03-16 05:27

Oracle JHeadstart 11.1.1.3 is now available for download (build 11.1.1.3.35).
This release is compatible with JDeveloper 11 releases 11.1.1.4, as well as 11.1.1.3 and 11.1.1.2.
Customers who own a JHeadstart supplement option license can download it from the Consulting Supplement Option portal.

In addition to many small enhancements and bug fixes, the following features have been added to JHeadstart 11.1.1.3:


  • Support for Dynamic Tabs: There is a new page template inspired by the dynamic tabs functional pattern. Note that this template and its associated managed beans are all included with JHeadstart and can be easily customized. The implementation does not depend on the "Oracle Extended Page Templates" library. Additional JHeadstart-specific features include: automatically marking the current tab dirty, initially displaying one or more tabs, the ability to uniquely identify tabs by a tab unique identifier, and a close-tab icon displayed on the tab itself. When using the item Display Type "groupLink", there is a new allowable value "New Dynamic Tab" for the property Display Linked Group In that can be used with the dynamic tabs template. For example, this allows you to have a first "EmployeeSearch" group that searches employees and shows the result in a table, and then clicking a groupLink edit icon for one employee in the table opens a new dynamic tab showing the linked "CustomerEdit" group for this specific employee. See section 9.3 "Using Dynamic Tabs when Opening a Menu Item" in the JHeadstart Developer's Guide for more information.

    Dynamic Tabs



  • Support for Function Keys: There is a new application level property Enable Function Keys. When this property is checked, JHeadstart will generate context-sensitive function keys. The list of function keys can be seen by pressing Ctrl-K. The actual function keys shown in the list depend on the location of the cursor in the page. The default mapping of function keys to ADF actions is inspired by Oracle Forms function key mapping, but can very easily be changed. Function keys for custom buttons can easily be added as well. See section 11.7 "Using Function Keys" in the JHeadstart Developer's Guide for more information.

    Context-Sensitive Function Keys

  • Support for Popup Regions: There are two new region container layout styles: Modeless Popup Window and Modal Popup Window. With these layout styles, the content of the region container and its child regions is displayed in a popup window. If the Depends on Item(s) property is set on the region container, the item context facet of the depends-on-item is used to launch the region container popup. If the depends-on-item is a button item, the popup will be launched when clicking the button. If the Depends on Item(s) property is not set for the popup region container, an additional button with the label set to the region title is generated to launch the popup. See section 5.10.4 "Generating Content in a Popup Window" in the JHeadstart Developer's Guide for more information.

    Popup Region Container with Method Call Button

  • Support for Dynamic Iterator Bindings: There is a new group property Data collection Expression. In this property you can specify an EL expression that determines at runtime which data collection (view object usage) should be used. This can be useful when reusing the same group taskflow multiple times. For example, the Employees group can be used as a top-level taskflow, and as a group region taskflow under the Departments group. Rather than setting up bind variables and a query where clause, this use case can be implemented much more easily by dynamically switching the view object usage between a top-level employees view object usage and a nested usage under the departments view object usage. The data collection expression can then refer to a taskflow parameter which specifies the actual view object usage name.
  • Support for Custom Toolbar Buttons: There are two new item display types, toolbarButton and groupLinkToolbarButton. Items with these display types are generated into the group-level toolbar for form layouts, and added to the table toolbar for table layouts.

    Custom Iconic Toolbar Button

  • Control over Label Position: There is a new property Label Alignment for groups and item regions that allows you to specify whether labels (prompts) should be positioned at the left of an item, or above the item.
  • Support for Icons: There is a new icon property available at group and item level. At item level, this can be used to generate iconic buttons and group links. At group level, the icon is displayed in the header bar of the group, and in buttons that navigate to the group taskflow.
  • Support for Online Help: At the application level, there is a new property Online Help Provider. Two new properties, Help Text and Instruction Text, have been added to the group, region container, group region, item region and item elements. These two new properties are only used when a help provider is set at the application level. If help text is entered, a help icon will be displayed at the right of the element title, or in the case of an item, at the left of the item prompt. If instruction text is entered, it will be displayed below the element title, or in the case of an item, in a popup when the user clicks in the input component of the item. See section 11.6 "Using Online Help" in the JHeadstart Developer's Guide for more information.

    Online Help

  • Deeplinking from External Source Like E-Mail: You can now launch the application with a specific page (region) being displayed by adding "jhsTaskFlowName" as parameter to the request URL.
    The value of "jhsTaskFlowName" should be set to a valid group name as defined in the application definition. Any other request parameters will be set as taskflow parameters.
    For example the following URL will start the application with the Employees taskflow, displaying employee with employeeid 110:

    http://127.0.0.1:7101/MyJhsTutorial/UIShell?jhsTaskFlowName=Employees&rowKeyValueEmployees=110
  • Ability to Call Business Method from Button: There is a new item level property Method Call where you can select a method from the list of application module methods that have been added to the client interface.
    Using item parameters, you can specify the method arguments. The return value of a method call can easily be displayed in an unbound item using a simple EL expression. See section 6.10.3 "Executing a Button Action" in the JHeadstart Developer's Guide for more information.
  • Requery when Entering Task Flow: The group-level combobox property Requery Condition has a new allowable value "When Entering the Task Flow" to ensure the latest data are queried when the user enters a task flow. As before, you can still enter a custom boolean EL expression in this property as well.
  • Additional Item Properties: There is a new item property "Additional Properties" where you specify additional properties that are added to the ADF Faces
    Component generated for the item. See section 12.4.4 "Adding Custom Properties to a Generated Item" in the JHeadstart Developer's Guide for more information.
  • New Custom Properties: You can now specify 5 custom properties against region containers, item regions and group regions.
  • Easier Taskflow Customization: Most of the content of the bounded taskflow Velocity template for a top group (groupAdfcConfig.vm) has been refactored into separate templates to make customization of generated bounded taskflows easier and faster. Placeholder (empty) templates to easily add custom managed beans, custom taskflow activities and custom control flow rules have been added as well. See section 12.5 "Customizing Task Flows" in the JHeadstart Developer's Guide for more information.
  • Easier File Generation Customization: The fileGenerator.vm now uses logical template names instead of hardcoded template paths to generate files. You can now use the Application Definition Editor to create a custom template for a specific file, just like all other templates. In addition, to prevent generation of a file, you can set its template to default/common/empty.vm; the file generator no longer creates files with empty content. See section 12.6 "Customizing Output of the File Generator" in the JHeadstart Developer's Guide for more information.
  • Better Support for ADF Libraries: A new paragraph (2.4) in the JHeadstart Developer's Guide describes JHeadstart-specific steps to take when using ADF Libraries. In addition, it is now possible to "import" JHeadstart service definitions from other projects that are packaged as ADF Library so you can reference JHeadstart groups in other projects in the JHeadstart Application Definition editor of the project that contains the ADF libraries with JHeadstart-generated content. See section 2.4 "Packaging JHeadstart-Generated ViewController Project as ADF Library" in the JHeadstart Developer's Guide for more information.
  • Use of ADFLogger: The JHeadstart runtime classes now use the ADFLogger instead of Log4j. The ADFLogger integrates nicely with WebLogic, allowing you to configure log levels dynamically at runtime and to monitor log messages from specific threads. To see all JHeadstart debug messages during development, go to the WebLogic log window in JDeveloper, click the "Actions" dropdown list and choose "Configure Oracle Diagnostic Logging". Then add a persistent logger with the name "oracle.jheadstart" and log level "INFO". You can do this while WebLogic is already running.
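As a side note on the deeplinking feature described above, such a request URL can be assembled with standard URL encoding. Here is a minimal Python sketch using the parameter names and tutorial host/path from the example URL; it is illustrative only, not part of JHeadstart itself:

```python
from urllib.parse import urlencode

# Build a JHeadstart deeplink URL: jhsTaskFlowName selects the taskflow to
# display; any other parameters are passed through as taskflow parameters.
base = "http://127.0.0.1:7101/MyJhsTutorial/UIShell"
params = {"jhsTaskFlowName": "Employees", "rowKeyValueEmployees": "110"}
url = f"{base}?{urlencode(params)}"
```

This reproduces the Employees deeplink shown earlier, with employee 110 selected.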

For a complete list of all existing features, use this link.

Categories: Development

ISA Consulting Bought by E&Y

Look Smarter Than You Are - Wed, 2011-03-16 01:52
And so the consulting company acquisitions continue.  I haven't written about this in over a year mostly because these acquisition entries take so many hours to research (cry me a river, Edward), so let's bury the lead by first covering all the major acquisitions that have occurred since my last entry:


November 24, 2009: PWC acquires Paragon
Those in the Oracle EPM areas in Europe & Asia knew of Paragon.  With close to 100 employees, they were a significant player in the UK, Turkey, and Singapore markets.  It's not known how many of Paragon's employees made the transition to PWC, but press releases seem to reflect around 40.


March 29, 2010: Perficient acquires Kerdock
Kerdock was a major, long-standing Oracle BI/EPM vendor dating back to roughly 2002.  Based out of Houston, they had close to 65 employees at their peak.  When they were bought last year by Perficient (a publicly traded company - NASDAQ: PRFT - with about 1,400 employees), they had roughly 45 employees and about $8MM in annual revenue.  They were bought for $6MM (of which $3.4MM was in cash and $2.6MM in PRFT stock).


May 4, 2010: Idhasoft acquires TLC Technologies
TLC is a long-time Oracle EPM partner based out of Pennsylvania.  Though they dated back to the late 90's, they were never that large.  Last year, a controlling interest in TLC was acquired by Idhasoft (through their Prism Informatica subsidiary) for an undisclosed sum.


Edgewater acquires Meridian
If you hadn't heard of Meridian when Edgewater acquired them, you weren't alone.  They were only a few years old (and they were pretty small), but they had begun developing a reputation as a Hyperion Strategic Finance implementer that was able to compete with the focused expertise of BlueStone.  We'll never know if they would have fulfilled that promise of HSF expertise, though, because they were acquired too early on by Edgewater.  They did have several former Alcar executives (the company that became HSF) on their leadership team (including Alcar's former head of services, Ricardo Rasche), so their acquisition was significant.


August 31, 2010: E&Y acquires Global Analytics
Global Analytics, as you may recall, bought Narratus (the former "Data into Action") a couple of years ago, and in 2010, they were gobbled up themselves.  Largely through the strengths of Hyperion installation expert Bill Beach, Global Analytics had developed a reputation in the Hyperion infrastructure world.  For a time, they were one of only 5 companies (interRel was one of the others) with a significant infrastructure practice around Hyperion, which included subcontracting to other, larger global systems integrators.  They had several areas outside of Hyperion, and my guess is that's why E&Y bought them in 2010.  The small size of their Hyperion practice doesn't seem like it would have warranted E&Y's attention.  Though maybe this should have been a predictor of the acquisition of ISA?


October 21, 2010: IBM acquires Clarity
In my opinion, this was the most significant acquisition in the Oracle EPM, Hyperion, and Essbase world in 2010.  Clarity Systems out of Canada (same place my high school girlfriend lived, by the way) was the first substantial partner to build a pre-packaged budgeting solution on top of Essbase that way pre-dated Hyperion Planning.  Originally a consulting partner at Arbor, Clarity turned their spreadsheet-based front-end to Essbase eventually into a full-featured financial planning, consolidation and reporting product.  What was once a fairly pleasant working relationship got contentious for a number of reasons including alleged licensing violations and what later turned into a compete between Clarity and Hyperion's own Planning and Financial Management products.  As Clarity began to score some competitive wins over Hyperion at companies like Southwest Airlines and Alcon Labs, the relationship took a turn for the downright hostile.


Eventually, Clarity started integrating with non-Hyperion products as they continued their expansion.  Interestingly, when IBM bought them last year, IBM made no secret about their intentions to kill off most of the Clarity suite (including the planning and financial consolidation functionality).  This actually makes complete sense since they already have the Cognos and TM1 products doing virtually the same functions.  So why did they acquire them?  Consultant bodies to implement BI/EPM at IBM's consulting clients?  Clarity's client list?  Just to eliminate a competitor?  None of the above.  Apparently, IBM noticed a weakness in their XBRL reporting and one component of Clarity handled this functionality.  Seems like overkill to me, but then I'm not a company the size of IBM.


Throughout 2010: Palladium founders leave to form other firms
As disastrous as the Hyperion/Arbor merger was back in 1998, there are many who feel that the merger of Balanced Scorecard Collaborative, Painted Word, and ThinkFast into Palladium was even worse.  While I'm not one to judge, it has definitely been true that  Palladium has been bleeding talent (in the Hyperion/EPM world, at least) since their founding.  The last 15 months have been particularly harsh with three major group personnel departures:
  • Painted Word executives including Scot MacGillivray, Jim Leavitt, Chris Boulanger, and Peter Graham all left to found Cervello.  All of these people were founders and/or executives at Painted Word when it became part of Palladium.  They stuck it out for a few years and then left as a group to create Cervello which seems to be doing Oracle BI and EPM consulting.  I can't vouch for that personally, because I haven't run into them at all, but their departure from Palladium was definitely a blow.
  • Tom Phelps left Palladium to start up ClearLine Group.  Tom Phelps was the original founder of the company that later became ThinkFast (one of the three components of Palladium).  Tom and his brother, Marty, founded a company that appears to be doing Oracle EPM consulting (but again, like Cervello, I haven't run into them yet).  With Tom Phelps departing and the Painted Word executives departing, the only founders of the component companies that are still part of Palladium are the Balanced Scorecard guys.
  • Palladium Pace team members including Dean Tarpley, Michael Wright, Carolyn Sieben, and a few others left to join Alvarez and Marsal in August 2010.  The Pace product hadn't been selling anywhere near what its creators expected and this was the final nail in the coffin of the product.  While Pace is still mentioned on Palladium's website, it doesn't seem that there's anyone left at Palladium still working on the product.  Palladium had been shopping around for a buyer of their Pace business unit for a while, so it's unclear whether Palladium sold the developers to Alvarez or whether they were simply hired en masse.  Since there wasn't any sort of "predatory work practices" lawsuit, I'm concluding that it was a purchase of the talent and Alvarez didn't want Pace at all.


March 15+ 2011: Ernst & Young acquires ISA
Well, I'd love to point to a press release on this, but there isn't one simply because it's not been announced yet. [Editor's Note: it is now public.  Scroll to the end of the story for more.]  Normally, I wouldn't do a blog entry on this until it was official, but this is the least stealthy acquisition in history.  I have heard about it from no fewer than three sources at three different companies, and since offers have already been extended to the employees that are going to get them at ISA Consulting, the affected people already know.  Keep watching Ernst & Young and ISA's news pages and I'm sure something will be up in the next week or two.


ISA is based out of Pennsylvania and is a very large player in the Oracle BI and EPM space.  Though they do other products, ISA is still considered by many to be a primarily Hyperion partner.  Based on what I've been told, E&Y is acquiring ISA primarily for their consulting expertise.  While they're letting almost all the sales and back office staff go (Mitch Rubin and Cliff Matthews being notable exceptions), most all of the consultants seem to be getting offers to join E&Y.  The partners at ISA do seem to be coming on as either partners or close to it at E&Y.


Even though E&Y is one of the 10 largest privately held companies in the USA, this is a significant acquisition because ISA does appear to have well over 100 people focused around BI, EPM, and data warehousing.  Whether they end up putting ISA in the BI & Data Warehousing group or into financial transformation (or split them between the two), this acquisition will significantly increase the number of individuals in those areas.  If E&Y does manage to hold on to the talent from ISA, they will now be able to compete much more directly with Deloitte on the BI & EPM front.


I haven't heard terms of the acquisition, but since E&Y doesn't need ISA's client list or sales expertise but rather just wants the consulting bodies, the dollars are presumably based on a multiple of EBITDA. Based on other similar deals in the last year, I expect the multiple is 6.5 times 12-month EBITDA (give or take a factor of 1.5).  If anyone knows any different, by all means, either shoot me an e-mail (I'll keep you anonymous) or post it in the comments to this entry.
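The back-of-envelope math here is simple enough to sketch. This is purely illustrative of the multiples the author is speculating about, not actual deal data:

```python
def deal_value_range(ttm_ebitda, multiple=6.5, spread=1.5):
    """Speculative purchase-price range: (multiple - spread) to
    (multiple + spread) times trailing-twelve-month EBITDA."""
    return ((multiple - spread) * ttm_ebitda, (multiple + spread) * ttm_ebitda)
```

For a hypothetical $10MM trailing EBITDA, that would put the price somewhere between $50MM and $80MM.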


Who's Next?
If you go way back to my posting from January 5, 2009, I offered up this list of potential targets for acquisition: 
One could speculate that it might be interRel, PII, Kerdock, Global Analytics, US-Analytics, Analytic Vision, HCG, TopDown, or even the Hyperion arm of Palladium, but it could just as likely be some other tiny Hyperion vendor that's not on anyone's radar screen right now. Heck, it might even expand beyond the consulting world to one of the Hyperion software partners like Applied OLAP or Star Analytics.

I then went on to say that interRel could be removed from the list.  Well, I was right on Kerdock, Global Analytics, and the Hyperion arm of Palladium, so that leaves PII, US-Analytics, Analytic Vision, HCG, TopDown, Applied OLAP, and Star Analytics.  I guess I would add MarketSphere to that list too even though they're obviously in areas beyond Oracle EPM.  While many of these companies are too small to attract the attention of Deloitte, IBM, E&Y, and Oracle, don't be shocked if one or more of them is gobbled up in the next year by an off-shore consulting firm looking to fill in the EPM/BI gaps in their offerings.


It's now almost 2AM and I have to present to the HUG group in Minneapolis in a few hours, so I'm going to post and then sleep.  If I've stated anything incorrectly above, feel free to comment and please assume I wasn't trying to be malicious.  It's just been a long day and this entry (essay?) was almost 1,800 words.


UPDATE April 5, 2011: E&Y Officially Buys ISA Consulting
It took a week into April, but E&Y finalized the ISA deal and announced the deal publicly.  The press release states that ISA had 130 employees (I'd speculated 100+) and financial terms were not disclosed.  Read more about it here.
Categories: BI & Warehousing

Runtime error ORA-01031: insufficient privileges

Rob van Wijk - Tue, 2011-03-15 16:13
After a new version of software was installed in production, the end users reported a runtime error, ORA-01031: insufficient privileges, when selecting from a view. The developers of the code were investigating the problem and, halfway through, they asked me to have a look. I saw a function from schema3, which was used in a view in schema2, which was used by schema1. I had just...

Anonymous exposes sensitive bank emails

Simon Thorpe - Mon, 2011-03-14 03:46

 

As expected for quite a while, emails purporting to reveal alleged naughtiness at a major bank have been released today. A bank spokesman says "We are confident that his extravagant assertions are untrue".

The BBC report concludes...  "Firms are increasingly concerned about the prospect of disgruntled staff taking caches of sensitive e-mails with them when they leave, said Rami Habal, of security firm Proofpoint.

"You can't do anything about people copying the content," he said.

But firms can put measures in place, such as revoking encryption keys, which means stolen e-mails become unreadable, he added."

Actually, there is something you can do to guard against copying. While traditional encryption lets authorised recipients make unprotected copies long before you revoke the keys, Oracle IRM provides encryption AND guards against unprotected copies being made. Recipients can be authorised to save protected copies, and cut-and-paste within the scope of a protected workflow or email thread - but can be prevented from saving unprotected copies or pasting to unprotected files and emails. 

The IRM audit trail would also help track down attempts to open the protected emails and documents by unauthorised individuals within or beyond your perimeter.

 

Personalized News Recommendations

Khanderao Kand - Sun, 2011-03-13 19:53
A couple of days back, on March 10, Barron's reported that "NYTimes.com Adds Recommendation Feature". Back in Nov 2010, the MyBantu-powered "Personalized News Recommendations" was launched for Samachar, the largest news portal about India. This personalized news recommendation, one of the first of its kind, not only increased visitor (reader) traffic to the Samachar site but also resulted in readers spending more time reading these personalized articles.  more ....

Explain this

Oracle WTF - Sat, 2011-03-12 09:04

On the subject of cryptic OTN posts, this one has to get an honorary mention as well:

explain this

hi,

write query to find out order detail of oder_date 2 year before (sorry i forget exact question)

No solutions so far.

Make Me One With Everything

Oracle WTF - Sat, 2011-03-12 08:52

Seen on OTN Forums recently (part of a question entitled "HTML not working in PL/SQL block", so I suppose we were warned):

l_col VARCHAR2(30) := to_number(to_char(to_date('01-feb-2011','dd-mon-yyyy'),'dd'));

So the string '01-feb-2011' becomes first a date, then a string again, then a number, before being assigned to a string variable. Much more interesting than boring old

l_col VARCHAR2(30) := extract (day from date '2011-02-01');

Or even,

l_col VARCHAR2(30) := '1';

IRM Item Codes – what are they for?

Simon Thorpe - Fri, 2011-03-11 07:51

 


A number of colleagues have been asking about IRM item codes recently - what are they for, when are they useful, how can you control them to meet some customer requirements? This is quite a big topic, but this article provides a few answers.

An item code is part of the metadata of every sealed document - unless you define a custom metadata model. The item code is defined when a file is sealed, and usually defaults to a timestamp/filename combination.

This time/name combo tends to make item codes unique for each new document, but actually item codes are not necessarily unique, as will become clear shortly.
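As a rough illustration of that default, a timestamp/filename item code could be generated like so. This is a sketch, not the actual sealing API, and the exact format string is an assumption patterned on the "2011_03_11 13:33:29 Board Minutes.sdocx" example shown later in this post:

```python
from datetime import datetime

def default_item_code(filename, sealed_at=None):
    """Combine a sealing timestamp with the file name, mimicking the
    default item code behaviour described in the post (format assumed)."""
    sealed_at = sealed_at or datetime.now()
    return f"{sealed_at:%Y_%m_%d %H:%M:%S} {filename}"
```

Two files sealed in the same second with the same name would share an item code, which is why uniqueness is a tendency rather than a guarantee.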

In most scenarios, item codes are not relevant to the evaluation of a user's rights - the context name is the critical piece of metadata, as a user typically has a role that grants access to an entire classification of information regardless of item code. This is key to the simplicity and manageability of the Oracle IRM solution.

Item codes are occasionally exposed to users in the UI, but most users probably never notice and never care. Nevertheless, here is one example of where you can see an item code - when you hover the mouse pointer over a sealed file.

As you see, the item code for this freshly created file combines a timestamp with the file name.

But what are item codes for?

The first benefit of item codes is that they enable you to manage exceptions to the policy defined for a context. Thus, I might have access to all oracle - internal files - except for 2011_03_11 13:33:29 Board Minutes.sdocx.

This simple mechanism enables Oracle IRM to provide file-by-file control where appropriate, whilst offering the scalability and manageability of classification-based control for the majority of users and content. You really don't want to be managing each file individually, but never say never.

Item codes can also be used for the opposite effect - to include a file in a user's rights when their role would ordinarily deny access. So, you can assign a role that allows access only to specified item codes. For example, my role might say that I have access to precisely one file - the one shown above.

So how are item codes set?

In the vast majority of scenarios, item codes are set automatically as part of the sealing process. The sealing API uses the timestamp and filename as shown, and the user need not even realise that this has happened. This automatically creates item codes that are for all practical purposes unique - and that are also intelligible to users who might want to refer to them when viewing or assigning rights in the management UI.

It is also possible for suitably authorised users and applications to set the item code manually or programmatically if required.

Setting the item code manually using the IRM Desktop

The manual process is a simple extension of the sealing task. An authorised user can select the Advanced... sealing option, and will see a dialog that offers the option to specify the item code.


 

To see this option, the user's role needs the Set Item Code right - you don't want most users to give any thought at all to item codes, so by default the option is hidden.

Setting the item code programmatically

A more common scenario is that an application controls the item code programmatically. For example, a document management system that seals documents as part of a workflow might set the item code to match the document's unique identifier in its repository. This offers the option to tie IRM rights evaluation directly to the security model defined in the document management system. Again, the sealing application needs to be authorised to Set Item Code.

The Payslip Scenario

To give a concrete example of how item codes might be used in a real world scenario, consider a Human Resources workflow such as payslips. The goal might be to allow the HR team to have access to all payslips, but each employee to have access only to their own payslips.

To enable this, you might have an IRM classification called Payslips. The HR team have a role in the normal way that allows access to all payslips. However, each employee would have an Item Reader role that only allows them to access files that have a particular item code - and that item code might match the employee's payroll number. So, employee number 123123123 would have access to items with that code. This shows why item codes are not necessarily unique - you can deliberately set the same code on many files for ease of administration.

The employees might have the right to unseal or print their payslip, so the solution acts as a secure delivery mechanism that allows payslips to be distributed via corporate email without any fear that they might be accessed by IT administrators, or forwarded accidentally to anyone other than the intended recipient.

All that remains is to ensure that as each user's payslip is sealed, it is assigned the correct item code - something that is easily managed by a simple IRM sealing application. Each month, an employee's payslip is sealed with the same item code, so you do not need to keep amending the list of items that the user has access to - they have access to all documents that carry their employee code.
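The rights evaluation in this scenario can be sketched as a toy model (this is a self-contained simulation of the behaviour described above, not the real IRM API; the role structure shown is an assumption for illustration):

```python
def can_access(role: dict, item_code: str) -> bool:
    """Evaluate whether a role grants access to a sealed file.
    A role either covers the whole classification (the HR case)
    or is limited to a set of item codes (the Item Reader case)."""
    if role.get("all_items"):
        return True
    return item_code in role.get("item_codes", set())

hr_role = {"all_items": True}
employee_role = {"item_codes": {"123123123"}}  # payroll number as item code

print(can_access(hr_role, "123123123"))        # True
print(can_access(employee_role, "123123123"))  # True  (their own payslip)
print(can_access(employee_role, "456456456"))  # False (someone else's)
```

Because every monthly payslip for the same employee carries the same code, the employee's role never needs to be amended.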

 

Hospital fined $1m for Patient Data Breach

Simon Thorpe - Thu, 2011-03-10 22:14

 

As an illustration of the potential cost of accidental breaches, the US Dept of Health and Human Services recently fined a hospital $1m for losing documents relating to some of its patients. Allegedly, the documents were left on the subway by a hospital employee.

For incidents in the UK, several local government bodies have been fined between £60k and £100k. Evidently, the watchdogs are taking an increasingly firm position.

 

GUI or not GUI

alt.oracle - Thu, 2011-03-10 20:01
One of the longest and loudest controversies in the DBA world is that of the graphical user interface vs command line.  Some of the opinions sound like this…

“GUIs are for newbies who don’t know what they’re doing.”
“Why should I learn all the commands – there’s already a tool to do that.”
“GUIs are too slow.”
“Learning the command line takes too long.”
“I don’t need to learn a bunch of commands that I’ll never use – I just want to get my job done.”

My own feelings about this go back to my early days as a DBA.  I had this supervisor who was an absolute wizard when it came to Enterprise Manager.  Now, we’re talking the early OEM that came with Oracle version 8.0, here.  Ancient stuff.  If it could be done with OEM, this guy could “git ‘er done”.  One day tho, some kind of devastating emergency happened.  As a newbie, I wasn’t always trusted to handle the big issues, so I went to the supervisor and told him the situation. 

“Hey boss, we need to do so-and-so.” 
“Oh,” says Boss, “I don’t know how to do that with Enterprise Manager.” 
“Um,” I says, “I don’t think you *can* do that with Enterprise Manager.” 
“Oh,” says Boss, “Then what do we do?”

I remember the look of defeat on his face.  He was a nice guy, he wanted to help, he was responsible to help, but since Oracle hadn’t written that particular ability into his GUI tool, he had no idea as to how to do it.  It made an impression on me.  I decided then and there - that wasn’t going to be me.  I made a commitment that lasted for years – I will not use GUI tools.  No matter how much longer it takes me to do the job, with looking up commands and all, I will abstain from the evil of the GUI.  And so I did.

As a result, I learned the command line.  I REALLY learned the command line.  SQL*Plus was my home.  Not only did I learn a ton of data dictionary views by heart, over time, I sort of developed a “feel” for syntax even if I didn’t know it.  I could kinda intuit what might be in a certain v$ view or I could guess what the columns of a particular dba_* view should be.  It was and is incredibly useful and I don’t regret it.  I wrote and saved my little scripts to do things.  But, over time, I started to look down on my peers who used GUI tools, inwardly thinking they really couldn’t hack it from the command line.  You obviously don’t say something like that, but you joke about it, etc, just to let them know.  It probably didn’t help matters that in the ultimate GUI vs command line deathmatch, Windows vs Linux, I was (and am) squarely on the Linux side.

What started to change me was, ironically, Enterprise Manager.  Although I didn’t use it, I’d kept up with OEM, watching it get, for the most part, better and better.  But when 10g was released, it was like OEM had a bar mitzvah, sweet sixteen and a coming-out party all in one.  Re-christened as Grid/Database Control, you could do dang near EVERYTHING with OEM now.  OEM was finally a comprehensive tool.  It was so comprehensive, that it started to shake my “GUIs are for losers” mentality.  I thought, I could really do some damage with this OEM thing (in a good way).  I started to think in terms of what would be more efficient, OEM or command line, for different situations.  Command line was still winning in my mind, but not by as much as before.

The thing that finally “brought balance to the force” for me was a quote I read by a well-known Oracle consultant/author/blogger guy.  If I said his name, you’d probably recognize it.  I read something of his where he was consulting for a client and said this, almost verbatim, “I knew their DBAs were incompetent because they were using Enterprise Manager.”  Whoa.  Now it’s true that I didn’t want to be like my old boss, unable to do anything without a GUI, but I sure didn’t want to be like this arrogant bastard either.  Besides that, I had seen enough of Grid/Database Control to know that his reasoning was crap.

In the end, the command line versus GUI war boils down to a few principles for me.  A good DBA needs to be efficient.  If you’re more efficient using a GUI than command line, then go for it.  If, on the other hand, the only reason you use a GUI is that you’re just too lazy to learn the commands, then you get what you deserve.    I’m still heavily command line oriented, but, in truth, I know there are instances where it would just be faster to use a GUI tool.  Take, for instance, performance tuning.  Everybody has their own way of doing it, but Grid/Database Control really does a good job of pulling a lot of different metrics together.  It would take a lot of scripts to pull that much information into one place.  It’s not for everyone, but it shouldn’t just be written off without a second thought.  And when you decide which one's "faster", you have to take into consideration the amount of time it took for you to come up with that whiz-bang script of yours.

In the end, I think everyone should aspire to learn how to leverage the command line.  It’s powerful, open ended, versatile and doesn’t tie you down to any particular toolset.  A GUI will always be limited by its programming.  If the programmer didn't dream it, you probably can't do it.  But the point is to get the job done.  If Enterprise Manager helps you bust out your super ninja DBA skillz, I won’t stop you.

And if you're still a hardcore command liner, I'll try to change your mind next time.  What if you could make your own GUI?  Hmm?
Categories: DBA Blogs

Collaborative Filtering Vs Personal Preferences Based Recommendations

Khanderao Kand - Thu, 2011-03-10 04:19
From my other blogs:

http://www.mybantu.com/blog/2011/03/10/collaborative-vs-personalized-recommendations/

Lessons From OpenId, Cardspace and Facebook Connect

Mark Wilcox - Wed, 2011-03-09 23:12

Teach and Listen
(c) denise carbonell

I think Johannes Ernst summarized pretty well what happened in a broad sense in regards to OpenId, Cardspace and Facebook Connect.

However, I'm more interested in the lessons we can take away from this.

First - "Apple Lesson" - If user-centric identity is going to happen, it's going to require not only technology but also a strong marketing campaign. I'm calling this the "Apple Lesson" because it's very similar to how the iPad succeeded where earlier tablets had failed. The iPad is not only a very good technology product but was backed by a very good marketing plan. I know most people do not want to think about marketing here - but the fact is that nobody could really articulate why user-centric identity mattered in a way that the average person cared about.

Second - "Facebook Lesson" - Facebook Connect solves a number of interesting problems in ways that are easy for both consumers and service providers. For a consumer it's simple to log in without any redirects. And while Facebook isn't perfect on privacy - no other major consumer-focused service on the Internet provides as much control over sharing identity information. From a developer perspective it is very easy to implement the SSO and fetch other identity information (if the user has given permission). This could only happen because a major company made a singular commitment to make it happen.

Third - "Developers Lesson" - Facebook's Social Graph API is by far the simplest API for accessing identity information, which is another reason you're seeing such rapid growth in Facebook-enabled websites. With a combination of a URL and JavaScript, the power a single HTML page now gives a developer writing web applications is simply amazing. For example, it doesn't get much simpler than "http://api.facebook.com/mewilcox" for accessing identity. And while I can't yet share too much publicly about the specifics - the Social Graph API had a profound impact on me in designing our next generation APIs.
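A sketch of why that style of API feels so simple: one URL per identity, returning plain JSON. The endpoint follows the post's example, but the field names and the response below are made-up samples for illustration, not live Facebook output:

```python
import json

def profile_url(username: str) -> str:
    # One URL per identity, as in the post's example
    return f"http://api.facebook.com/{username}"

# A made-up sample of the kind of JSON such an endpoint might return
sample_response = '{"id": "12345", "name": "Mark Wilcox", "username": "mewilcox"}'
profile = json.loads(sample_response)

print(profile_url("mewilcox"))  # http://api.facebook.com/mewilcox
print(profile["name"])          # Mark Wilcox
```

No SOAP envelopes, no client libraries, no redirects: a URL and a JSON parser are the whole integration surface.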

Posted via email from Virtual Identity Dialogue

Buzz Around Non-Relational DBs

Khanderao Kand - Tue, 2011-03-08 18:34
Reposting from my other blog http://texploration.wordpress.com/2011/03/09/buzz-around-nonrelational-db/


Last Saturday we (GITPRO – Global Indian Tech Professionals Association) arranged a tech talk on NoSQL (more accurately, non-relational) databases and scaling Hadoop. It was very well attended. During the general introduction session, many attendees mentioned their interest in Hadoop and NoSQL databases. It was nice to see a good-sized crowd sacrificing their Saturday evening to attend this informative session - and more surprising to see that many of them were already users of these technologies.

We at MyBantu are using MongoDB, a document-oriented database. We store XML documents (held internally as BSON in MongoDB), and queries use a scripting language for their conditions. Another alternative in this class is CouchDB, which is more web-like and offers REST-based access. The other famous non-relational (popularly called NoSQL) projects are of course Hadoop and Cassandra, both Apache projects with some very good showcase implementations. However, when Digg recently ran into trouble while using Cassandra, the technology got a bad name that is not entirely deserved. Meanwhile, Hadoop and its database, HBase, are generating more buzz. It was interesting news when Facebook moved its messaging system from Cassandra to HBase - interesting especially because Cassandra originally came from engineers at Facebook, who used it for their Inbox search. There is some interesting work on Hadoop happening at Facebook: they are the original contributors of Hive, a data-manipulation add-on aimed at implementing warehousing on top of Hadoop. While MapReduce databases created much of the buzz around NoSQL, it is interesting that Hive and HBase expose SQL-like interfaces - so when folks say NoSQL, what they really mean is non-relational databases. Another warehousing-related add-on to Hadoop is Apache Pig, which originally came out of Yahoo.
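As a toy illustration of the document-oriented, query-by-example style that makes stores like MongoDB appealing (this is a self-contained simulation, not the pymongo driver, and it handles only exact-match fields, no $-operators):

```python
def matches(document: dict, query: dict) -> bool:
    """MongoDB-style query-by-example: True if every field in
    the query equals the corresponding field in the document."""
    return all(document.get(k) == v for k, v in query.items())

docs = [
    {"_id": 1, "user": "alice", "type": "status"},
    {"_id": 2, "user": "bob", "type": "message"},
]

# Analogous to db.docs.find({"type": "status"}) in the mongo shell
results = [d for d in docs if matches(d, {"type": "status"})]
print([d["_id"] for d in results])  # [1]
```

The point is that documents need no fixed schema: each record carries its own fields, and queries simply pattern-match against whatever is there.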

Anyway, there is interestingly rapid development happening in this space, driven largely by the huge volumes of user-generated data handled by social networking giants like Facebook, Zynga and LinkedIn - but the original credit for these concepts goes to Google, whose BigTable and MapReduce papers started it all.
