Feed aggregator

Debugging Tip with WebLogic and Oracle Virtual Directory

Mark Wilcox - Thu, 2010-10-28 09:01

I helped one of our other teams debug an issue with an app protected with Oracle WebLogic Server and Oracle Virtual Directory (OVD).

You can read more about it here

Posted via email from Virtual Identity Dialogue

Bird's eye view of OAM 11g Install process

Pankaj Chandiramani - Thu, 2010-10-28 01:09


The configuration process requires two steps:
1) Database schema configuration using the Repository Creation Utility (RCU)
2) Product install, configuration and deployment using the WebLogic Configuration Wizard

Database schema configuration
RCU lets customers choose the products for which they want to create database schemas, and creates the schemas once the database details have been provided.

OAM product install, configuration and deployment
OAM 11g installs using Oracle Universal Installer (OUI). The installation process copies all the software bits to the host machine but does not perform product configuration.

OAM 11g is a J2EE application that deploys into a container. The deployment and configuration are handled by the WebLogic Configuration Wizard, using configuration templates provided by each product. Finally, it deploys the product into a new or existing WLS domain.
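
For orientation, the two steps boil down to something like the following (a sketch; paths and home names are illustrative and vary by installation):

# Step 1: create the OAM schemas with RCU
$RCU_HOME/bin/rcu

# Step 2: after the OUI software-only install, run the Configuration Wizard
# to create or extend a WLS domain and deploy OAM into it
$MW_HOME/Oracle_IDM1/common/bin/config.sh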

[Diagram: OAM 11g install process]

Categories: DBA Blogs

Handling Adapter Shutdown Events

Ramkumar Menon - Tue, 2010-10-26 08:22

One of those long-forgotten but useful snippets of information.
You have a BPEL process with an inbound adapter listening to MQ, file, DB or whatever, and you wish to handle adapter shutdown events - i.e. let's say MQ went down and the process goes to the "off" state. If you want to know that such a thing happened, you can use the "fatalErrorFailoverProcess" activationAgent property.

Link: http://download.oracle.com/docs/cd/B14099_19/integrate.1012/b25307/adptr_file.htm#CACGDFAA

The behavior is very similar to how you would define a rejection handler.
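
For reference, the property is set on the activation agent in bpel.xml. A minimal sketch (the partner link and handler process names here are hypothetical; check the linked documentation for the exact value format of the failover process reference):

<activationAgents>
  <activationAgent className="oracle.tip.adapter.fw.agent.jca.JCAActivationAgent"
                   partnerLink="InboundFile">
    <property name="portType">Read_ptt</property>
    <!-- process that takes over when the adapter endpoint hits a fatal error -->
    <property name="fatalErrorFailoverProcess">AdapterErrorHandler</property>
  </activationAgent>
</activationAgents>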

 

CP8 for 10.1.2.3 released

Michael Armstrong-Smith - Tue, 2010-10-26 03:35
Just wanted to let you know that on October 5, 2010, Oracle released CP8 for 10.1.2.3. You will find it on MetaLink as patch number 9694503. Compared to CP7, 10 bugs have been fixed.

So far this cumulative patch has been released for the following platforms:
  • IBM AIX on Power Systems (64-bit)
  • Microsoft Windows 32-bit
  • Linux x86 (works for both 32 bit and 64 bit)
  • Oracle Solaris on SPARC (64-bit)
  • Oracle Solaris on x86 (32-bit)
If you are upgrading to CP8 from any patch level prior to CP4, then JDBC patch p4398431_10105_GENERIC.zip for bug 4398431 (release 10.1.0.5) needs to be installed before you apply CP8.

This patch needs to be applied to all Oracle Homes, i.e. Infrastructure home as well as all related midtier homes.
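
In practice that means running OPatch once per home, along these lines (a sketch; paths and the unzipped directory name are illustrative):

# repeat with ORACLE_HOME pointing at the infrastructure home, then each midtier home
unzip p9694503_*.zip
cd 9694503
export ORACLE_HOME=/u01/app/oracle/infra
$ORACLE_HOME/OPatch/opatch apply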

Bug 4398431 - HANG WHEN RETRIEVING A CONNECTION FROM THE IMPLICIT CONNECTION CACHE

The following posting has been updated:

Cell level write-back via PL/SQL

Keith Laker - Fri, 2010-10-22 10:03
A topic of conversation that regularly comes up when I talk to customers and developers about the OLAP Option is write-back to OLAP cubes. The most frustrating of these conversations usually involves someone saying 'but... the OLAP Option doesn't support write-back'. This is not the case and never has been.

Since the first OLAP Option release in 9i it has always been possible to write back to cubes via the Java OLAP API and OLAP DML. But in recent releases a new PL/SQL package-based API has been developed. My thanks go to the ever-excellent David Greenfield of the Oracle OLAP product development group for bringing this to my attention.

At the simplest level, it is possible to write to a qualified cell:

dbms_cube.build(
'PRICE_COST_CUBE USING (
SET PRICE_COST_CUBE.PRICE["TIME" = ''24'', PRODUCT = ''26''] = 711.61, SOLVE)')

In the example above, a cube solve is executed after the cell write. The objects are referenced by their logical (ie. AWM) names.

This approach is very flexible. For example, you can qualify only some of the dimensions; in this case the assignment applies to all products:

dbms_cube.build(
'PRICE_COST_CUBE USING (
SET PRICE_COST_CUBE.PRICE["TIME" = ''24''] = 711.61, SOLVE)')

You can also skip the aggregation:

dbms_cube.build(
'PRICE_COST_CUBE USING (
SET PRICE_COST_CUBE.PRICE["TIME" = ''24'', PRODUCT = ''26''] = 711.61)')

or run multiple cell updates in one call:

dbms_cube.build(
'PRICE_COST_CUBE USING (
SET PRICE_COST_CUBE.PRICE["TIME" = ''24'', PRODUCT = ''26''] = 711.61,
SET PRICE_COST_CUBE.PRICE["TIME" = ''27'', PRODUCT = ''27''] = 86.82,
SOLVE)');

You can also copy from one measure to another.

dbms_cube.build('UNITS_CUBE USING (SET LOCAL_CUBE.UNITS = UNITS_CUBE.UNITS)');

This will copy everything from the UNITS measure in UNITS_CUBE to the UNITS measure in LOCAL_CUBE. You can put fairly arbitrary expressions on the right-hand side and the code will attempt to loop the appropriate composite. You can also control status.
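
Note that dbms_cube.build is a PL/SQL procedure, so from SQL*Plus a call like the first example above would be wrapped in an anonymous block (a minimal sketch):

BEGIN
  dbms_cube.build(
    'PRICE_COST_CUBE USING (
     SET PRICE_COST_CUBE.PRICE["TIME" = ''24'', PRODUCT = ''26''] = 711.61,
     SOLVE)');
END;
/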


For more details, take a look at the PL/SQL reference documentation.
Categories: BI & Warehousing

ruby-plsql-spec upgraded to use RSpec 2.0

Raimonds Simanovskis - Thu, 2010-10-21 16:00

The initial version of the ruby-plsql-spec gem used RSpec version 1.3. But recently RSpec 2.0 was released, whose API is not compatible with the previous RSpec 1.x API; as a result, the plsql-spec utility failed if just RSpec was upgraded to version 2.0.

Therefore I also updated ruby-plsql-spec to use the latest RSpec 2.0 gem and released ruby-plsql-spec gem version 0.2.1. You can install the latest version with

gem install ruby-plsql-spec
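
For context, a spec file executed by plsql-spec looks something like the following (a sketch; the betwnstr function is a hypothetical example assumed to exist in the connected schema):

# spec/betwnstr_spec.rb
require "spec_helper"

describe "betwnstr function" do
  it "should return the substring between the given positions" do
    plsql.betwnstr("abcdefg", 2, 5).should == "bcde"
  end
end
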
Upgrade from previous version

If you have already installed the initial ruby-plsql-spec version 0.1.0, then you need to update your spec/spec_helper.rb file to use RSpec 2.0. You can do it by running once more

plsql-spec init

which will check which of the current files differ from the latest templates. You need to update just the spec_helper.rb file. When you are prompted to overwrite spec_helper.rb, you can first enter d to see the differences between the current file and the new template. If you have not changed the original spec_helper.rb file, you will see just one difference:

- Spec::Runner.configure do |config|
+ RSpec.configure do |config|

You can then answer y and the file will be updated. When you are prompted to overwrite other files, you can review the changes in the same way and decide whether you want them overwritten or not (e.g. do not overwrite the database.yml file, as it has your specific database connection settings).

HTML output option

In addition, the plsql-spec utility now has an --html option which will generate the test results as an HTML report. This can be useful in text editors where you can define which command-line utility to run when pressing a shortcut key and then display the generated HTML report. If you execute

plsql-spec run --html

then it will generate the HTML report in the test-results.html file. You can override this file name using the --html output_file_name.html option.

Questions or suggestions

If you have any other feature suggestions or questions about ruby-plsql-spec, then please post comments here or report any bugs on the GitHub issues page.

Categories: Development

My upcoming 2-Day Hands-on Seminar in Kuala Lumpur

Kuassi Mensah - Thu, 2010-10-21 12:06
If you can make it to Kuala Lumpur on Dec 20th, you don't want to miss http://ning.it/aDMfVN

Clarifying OVD-AD EUS Password Question

Mark Wilcox - Mon, 2010-10-18 07:48

Got a question from a customer:
"We had a question about one of the attributes added by the schema extension: orclCommonAttribute.


Is the user’s password hash stored in this attribute when using Kerberos authentication (OVD and AD option)? Is the user’s password hash stored in any other AD attributes? Or does this attribute remain empty?"

First, a quick explanation of orclCommonAttribute. If you use EUS with username and password authentication, the database fetches the password hash and compares it locally instead of doing a traditional LDAP bind. One of the reasons it does this is that this way the password is never communicated to the database in clear text. Yes, there are a variety of ways to prevent that (such as an encrypted network connection), but when EUS was first conceived (over a decade ago) those were not as common as they are today.

For most directories this isn't a problem - they store passwords in a hashed format that can be retrieved. Except for Microsoft Active Directory. To work around this problem, OVD-EUS uses a password filter that captures password changes on the domain controller, hashes them and stores them in an extended attribute, orclCommonAttribute.

If a customer chooses to deploy EUS with Kerberos instead, there isn't any reason to deploy the password filter, and thus the attribute wouldn't be populated. Not only that - the database won't even query the directory for the password, since the authentication happens via Kerberos. Instead, EUS just provides user-to-schema mapping and, more importantly, role-to-group mapping.

Posted via email from Virtual Identity Dialogue

Document security in the real world, experience from the field

Simon Thorpe - Mon, 2010-10-18 06:29

I've invited Justin Cross from Brandon Cross Technologies to share some of the experience gained in the industry when implementing IRM solutions. So over to you, Justin...

I began working with IRM at SealedMedia and I have seen it grow and mature through the refinement which only comes from many, many real-world deployments, where we need to apply thoughtful consideration to the protection of real business information against real security risks, while keeping real business users happy and assured that the technology won't get in the way.

I decided to take on the challenge of forming my own company, Brandon Cross Technologies, just as SealedMedia was being acquired by Oracle. As Brandon Cross Technologies I've had the good fortune of working with a number of vendors, including Oracle, to provide the consultancy needed to successfully deploy software - which requires an understanding of how software really gets used in practice, by real people, as well as the technical know-how.

We have recently been working with some of the largest oil & gas and telecom companies, among others, to deploy their IRM solutions to address their concerns regarding the dramatic increase in data security threats.

 

Secure from the inside
Despite the best efforts of virus checkers and firewalls, platform vulnerabilities and malware provide lots of scope for bad guys to punch holes in your defences, disrupt your systems, and steal your data. If you ensure your own business users can only access and use information they legitimately require, while retaining the ability to revoke that access, then any external threat will be no more able to extract information from your organisation than your own people. Information Rights Management therefore enables us to limit the threat from perimeter security breaches, as well as potential misuse of information by legitimate business users.

 

 

User buy-in
As with other security solutions, successful IRM deployments must be simple to use and work without impeding existing business processes. Any solution which slows or limits a business user's ability to do their daily work will be unpopular, but more importantly the user may actually end up putting business information at greater risk by avoiding such systems. In the case of IRM, users may create, request, distribute or keep unprotected files, or use an IRM Context or document classification intended for less sensitive information to avoid the more stringent controls intended by the business.

 

Of course once information is IRM protected it is under the full control of the appropriate information owner; but it does need to be sealed / protected in the first place. Protecting information using IRM needs to be a continual, business-as-usual process. While IRM provides simple tools to protect information, manual protection does involve the user making the decision to protect information as it is created, and being in the habit of doing so. This can be addressed through creation of clear guidelines, policy requirements and training.

 

Integrated solutions
Protecting information using IRM should be performed at the earliest point in the information life cycle. One way to ensure information is appropriately secured using IRM is to automate the protection / sealing process. Oracle IRM has open programmatic interfaces which allow information to be sealed and for rights to be programmatically managed. This allows IRM protection to be integrated with other content management, workflow and security products.

 

For example Oracle IRM can be integrated with SharePoint, ensuring that any documents which are added into a SharePoint site are automatically IRM protected as they are uploaded. Information is then protected in storage, protecting against privileged users with server access, while still allowing documents to be found by keyword search using Oracle's unique search capabilities. Automated protection can therefore allow users to collaborate in the normal way without having to make the conscious decision to protect it first, or even needing to be aware that such a step is necessary. In this way, taking the manual protection step away from users, the level of usage and consistency with which IRM protection is applied can be substantially improved.

Another policy enforcement technology which can be used in conjunction with IRM is DLP (Data Loss Prevention). There are a variety of vendors which provide DLP solutions and, as with IRM, these solutions work in a variety of ways with different features and capabilities. What they do have in common is the ability to monitor the movement of data within your organisation's network, with many also having the ability to control that movement. Some will purely monitor network communications using dedicated network appliances; others monitor file system, device and inter-process communications at the desktop. These capabilities can be used to make sure data does not leave your systems and networks without the necessary IRM protection being applied.

 

Brandon Cross Technologies
Brandon Cross Technologies is based in the UK, but has delivered projects internationally. It believes it is possible to take the pain and uncertainty out of deploying client-server and web based technologies, simply through listening to customers and sharing experience and expertise.

 

http://www.brandoncross.co.uk/
http://www.irmprotection.co.uk/

11g OAM, bind or compare for authentication?

Pankaj Chandiramani - Sun, 2010-10-17 18:27

I had this question in mind when I started looking at 11g OAM:
bind or compare for authentication? And why?

So what I found...
11g OAM uses ldapcompare.
Why? Because an ldapcompare works like an ldapbind as far as the username/password check is concerned, but it doesn't create a session and is thus far more efficient than using ldapbind for a server that has to authenticate multiple users.
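
To illustrate, a compare-based check looks something like this with the OpenLDAP command-line client (a sketch; DNs and credentials are illustrative, and Oracle's own ldapcompare tool takes similar but differently named flags):

ldapcompare -x -H ldap://ldap.example.com:389 \
  -D "cn=oamadmin,cn=users,dc=example,dc=com" -w <admin-password> \
  "uid=jdoe,cn=users,dc=example,dc=com" \
  "userPassword:<password-entered-by-user>"
# prints TRUE on a match, FALSE otherwise - no user session is created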

Categories: DBA Blogs

New Release of Oracle IRM Wrapper version 1.5.0

Simon Thorpe - Thu, 2010-10-14 20:54

The wrapper tool has been updated again - this time to provide an installer script for Linux systems, and to improve compatibility between the IRM Desktop and the wrapper when installed on the same machine.

For further info, see the 1.4.0 announcement.

If you download and experiment with this tool, drop us a line to let us know how you get on.

Auditing IRM Protected Content - updated

Simon Thorpe - Thu, 2010-10-14 20:45

If auditing interests you, please note that I have updated my earlier entry on that topic. The entry now describes how auditing can be applied selectively according to user role, and discusses some of the benefits of Oracle IRM in terms of auditing both online and offline use, maintaining segregation of duties, and providing business-friendly reports.

Netflix in the Cloud

Vikas Jain - Thu, 2010-10-14 15:51
Netflix is adopting the (public) cloud with full force. Check out these few slides on the drivers and their roadmap for such a move. Does this mean that in the future IaaS providers will start to provision NVIDIA/ATI GPU-based machines for faster video codec processing?

AS 11 SOA EDN Log URL

Khanderao Kand - Wed, 2010-10-13 15:41
Since many folks asked about the ability to view posted events, here is the URL to do so:
http://<host>:<port>/soa-infra/events/edn-db-log

How to Programmatically Disclose a Row in ADF Faces Table

JHeadstart - Tue, 2010-10-12 23:53

If you have defined the detailStamp facet on an ADF Faces Table, you can disclose the content of this facet using the expand/collapse icon rendered at the beginning of the row.
Sometimes you want to programmatically disclose this content; for example, when the user adds a new row it is convenient that all input fields in the detailStamp facet are displayed right away.

In ADF 10.1.3, the following code used to work:

RowKeySet rks = table.getDisclosureState();
rks.getKeySet().add(row.getKey());

In ADF 11, the implementation has changed slightly. Since a tree binding is now also used to render plain tables, we need to pass in a list of keys: the key path. Unlike with a tree or treeTable, the path is always one level deep, so we can create a list with just the key of the row we want to disclose:

RowKeySet rks = table.getDisclosedRowKeys();
List keyList = Collections.singletonList(row.getKey());
rks.add(keyList);

Note that when you actually use this code when adding a new row, the primary key must be pre-populated by the model. This is required anyway with the current ADF 11.1.3 release, as is documented in the JDeveloper release notes (JDeveloper 6894412).
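
Put together, the ADF 11 variant might live in a managed bean like this (a sketch; the bean and its table binding are hypothetical):

import java.util.Collections;
import java.util.List;
import org.apache.myfaces.trinidad.model.RowKeySet;
import oracle.adf.view.rich.component.rich.data.RichTable;

public class TableBean {

    // bound to the af:table via binding="#{tableBean.table}" (hypothetical)
    private RichTable table;

    public void setTable(RichTable table) { this.table = table; }
    public RichTable getTable() { return table; }

    // discloses the detailStamp facet of the table's current row
    public void discloseCurrentRow() {
        RowKeySet rks = table.getDisclosedRowKeys();
        // the key path for a plain table is one level deep: just the row key
        List keyList = Collections.singletonList(table.getRowKey());
        rks.add(keyList);
    }
}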

Categories: Development

Quick guide to Oracle IRM 11g: Sample use cases

Simon Thorpe - Tue, 2010-10-12 21:42
Quick guide to Oracle IRM 11g index

If you've been following this guide step by step, you'll now have a fully functional IRM service and a good understanding of how to start creating contexts that match your business needs for securing content. The classification design article in the guide goes over some essential advice on creating your classification model in IRM, and what follows is additional information in the form of common use cases I see a lot in our customers. For each one I'll walk through the important decisions made and the resulting context design, to help you understand how IRM is used in the real world.

Work in progress
Let's look at the use case of a financial reporting process where highly sensitive documents are created by a small group of executives. These work in progress (WIP) documents may change content quickly during review, and it is therefore important that wrong and inaccurate versions of the documents do not end up outside the working group. Once a document is ready for wider review it is secured against another context with a much wider readership. All the unapproved documents are still secured against a context available only to the initial working group. Finally the document is approved to be published and becomes public knowledge, at which time the document may change format, e.g. from a sealed Word document to an unprotected PDF which has no IRM protection at all. This is a nice example of how IRM can protect content through its life.

Financial Reports - Work In Progress (Standard template)

  Role                      Assigned Users & Groups
  Contributor               Finance Executives
  Reviewer                  Company Board
  Reader - No Print         bill.smith@abc-attorneys.com

Financial Reports - Review (Standard template)

  Role                      Assigned Users & Groups
  Contributor               david.lee (VP of Finance), alex.johnson (CFO)
  Reviewer                  Legal Executives, Finance Executives, Company Board,
                            bill.smith@abc-attorneys.com

Financial Reports - Published (Export template)

  Role                      Assigned Users & Groups
  Contributor with export   alex.johnson (CFO)
The first context secures work in progress content. Participants are identified as those who are involved in the creation and review of the information and are given contributor and reviewer roles respectively. Note that in this use case there is an attorney privy to the information who is external to the company. Due to the sensitive nature of the material, this external person has been given very restrictive rights: essentially they can only open the content - no printing, editing etc. The offline period for this role may be a matter of hours, allowing the revocation of access to the documents in a very timely manner.

After several iterations of the report have been created, it needs to be reviewed by a wider audience of executives. At this point David Lee (VP of Finance) or Alex Johnson (CFO) have the authority to reseal the latest revision to the review context. There is therefore a trust relationship between the WIP context and the Review context to allow this information to be reclassified. David and Alex are the only users authorized to perform this task and therefore provide a control point for the reclassification of information. Note also that the external attorney now has the ability to review this reclassified document. The Reviewer role allows them to edit, print and use the clipboard within the bounds of the document. Their access to the previous, more sensitive versions remains unchanged.

One aspect of the Reviewer role is that change tracking in Word is enforced. This means that every change made during the entire review process is tracked. Until Oracle IRM enforced this, change tracking in Word was only useful if you trusted the end user not to switch it off. IRM brings security to this simple functionality and makes it a powerful tool for document review. Imagine if this were a contract negotiation process: you could be assured that every change to the contract had been recorded.

Finally, the last stage of the life cycle for this financial document is the approval of the report to be released to the investors, employees and the public at large. There is one more context, which only the CFO has access to. This context allows for the export of the unprotected document so that it resides outside the realm of IRM security. Such a powerful role is only given to a highly trusted executive, in this example the CFO. Again, IRM still protects all the previous versions of content that contain information not appropriate for public consumption.

All the steps in this use case are easy and familiar for the users. All they are doing is opening, editing and working with Word and Excel documents - activities they are used to performing. They may find it a slight inconvenience to be prevented from printing or from cutting and pasting content into a non-secure location, but overall they require little to no training on how to use IRM content.
Using IRM with a classification model
There are customers with a very mature security strategy, which includes a clearly defined and communicated classification policy implemented with procedures and technology to enforce controls and provide monitoring. When IRM is added to the mix of security technologies, it is common for the customer to ask how to implement their existing security classification system within IRM. When we deployed IRM at Oracle, this was the first point of reference when trying to determine the correct convention for the creation of IRM contexts.

Before we go into the detail of this, it is worth noting that in this use case we are manually recreating elements of an existing security policy inside IRM. There may well be a situation where another product contains all this logic, and replicating the information inside IRM would be redundant and costly. For example, the Oracle Beehive 2.0 platform is integrated with IRM, and in that case IRM doesn't use the built-in context model but simply leverages the existing security model inside Beehive. So it is possible for Oracle IRM to externalize the entire classification system. This, however, requires consulting effort which may or may not be appropriate for the return in automation.

But back on topic: let's look at what a security classification model looks like. A common standard that people work to is the ISO 17799 guidelines, which were the result of a group of organizations documenting their best practice for security classification. Below is an example of the sort of classification system ISO 17799 recommends.

  Level   Class                 Description
  1       Top Secret            Highly sensitive information about strategies, plans, designs, mergers & acquisitions
  2       Highly Confidential   Serious impact if shared internally or made public
  3       Proprietary           Procedures, project plans, specifications and designs for use by authorized personnel
  4       Controlled            For controlled use within the extended enterprise, but not approved for public circulation
  5       Public                Information in the public domain
The sensitivity of information increases as you move from the bottom to the top of this table. Conversely, the amount of information that is classified decreases as you increase the level of classification. This is important because, as you create a model for protecting top secret information, you need more control over who can open the documents and who has the power to assign new rights to people. This increases the administration of the solution, because someone has to make these decisions. Luckily IRM places this control in the hands of the business users, so those managing top secret contexts are the people who are working with the top secret information. A good example: at Oracle we have a single classification across the entire company for controlled information. Everyone in Oracle has access to this, and the provisioning of rights is automatic. However, when IRM is used to protect mergers and acquisitions (M&A) documents in Oracle - very top secret information - a small group of users have access and only one or two people can administer the context. These people, however, are the ones directly involved in the M&A activity.
Public
Looking at each of these, we can determine how IRM might apply. For publicly classified content the answer is immediate and quite obvious: you don't use IRM, because the information has low to zero risk from a security perspective and therefore requires no controls. However, there have been cases where documents were sealed to a public context simply to provide usage statistics.
Controlled
For controlled content there may be strong reasons to leverage IRM security; however, the sensitivity of the information is such that the risks are relatively low. Therefore consider a single company-wide, or at least department-wide, context. This is born from our best practice, which leans towards a simple, wide context model that balances risk against the usability and manageability of the technology. Essentially, controlled information needs some level of security, but it isn't important enough to warrant a fine-grained approach with a high cost of maintenance. Usually every professional member of staff is a contributor to the context, which allows them to create new content, edit, print etc. At a minimum this provides security for content that is accidentally lost or emailed to the wrong person outside the company, and gives a clear indication that the information has some value and should be treated with due care and attention. Yes, everyone keeps the ability to cut and paste information out of the IRM document, but disallowing this at a low level of classification may impact business productivity. If control of the information is that necessary, then it should result in a higher classification.

Business partners are given appropriate roles which allow them to open, print and interact with the content, but not the authority to create controlled information or copy and paste into other documents. For the rare exceptions where you wish to give access to untrusted users, you can create guest roles which are assigned as part of a workflow for requesting exceptions to the rule.
Proprietary
As we move up through the classification policy we find an increasing need for security through finer-grained control. Proprietary information carries a greater risk if exposed outside the company, so the balance of risk and usability requires a finer granularity of access than a single context. Now you have to decide at what level of granularity these contexts are created, and this varies. There are, however, some good common rules. Avoid a general "proprietary" context; this would undermine the value of the classification. Follow a similar pattern to the work-in-progress use case defined above. Be careful not to be too generous in assigning the contributor role; restricting this group guarantees document authenticity. Remember that with IRM you can add or change access rights at any time in the future, so here is a chance to start out with a limited list and grow as the business requires.
Highly Confidential
As we get closer to your organization's most important information, we start to see an increase in the number of contexts needed to provide adequate security. Highly confidential information requires a high level of security, and as such the risk-versus-usability trade-off favors a more granular approach. Here you are identifying explicit business owners of classifications, instead of using groups of users or an automated system for unchecked provisioning of access. Training increases a little here as well because, as you hand these classifications over to the business, the owners need to know how to administer the classification and understand the impact of their assignments of rights. The contexts also become very specific in their naming because, instead of relating to wide groups of data, they now apply to very specific, high-risk information. The right level of granularity and administration is hard to predict, so always start with a few contexts and pilot with a small number of business units with well-defined use cases. You will learn the right approach as you go, and more contexts will emerge over time.
Top Secret
Last, but most definitely not least, the Top Secret contexts. Sometimes these are the first to be created, because they protect the most important documents in the company. These contexts are very controlled and tightly managed. Even the knowledge that they exist can be a security issue, and as such the contexts are not visible to the support help desk. The number of top secret contexts is also typically very small due to the nature of the information: a company will only generate a small number of highly sensitive financial documents, or a few critical documents which contain the secret sauce of the product the company creates. Top secret contexts can also have a short life span, as they sometimes apply to a short-lived, top secret project. Mergers and acquisitions are again a good example: these are often very top secret but also short-lived. L1 classified contexts quite often contain external users - executives from a target acquisition or attorneys from your legal firm - but the sensitivity of the information means external users are closely monitored by the context managers.
Example context map
Typically, mapping a classification policy to IRM requires a business consulting project which asks each element of the business how it uses sensitive information, who should be allowed to open it, and who should manage the access. At the end of this exercise you end up with a context map. This is a simple table which shows the IRM contexts and their relationship to the classification policy. Here is an example table from when we used the technology at SealedMedia, before we were acquired by Oracle.

  Top Secret (L1)         Highly Confidential (L2)    Proprietary (L3)        Controlled (L4)
  Board Communications    Executive WIP               Executive               Company
  Intellectual Property                               Competitive             Security
                          Product Management WIP      Product Management
                          Professional Services WIP   Professional Services
                          Sales WIP                   Sales
                          Marketing WIP               Marketing
                          Finance WIP                 Finance
                          Engineering WIP             Engineering
                                                      External                External
Note the use of the labels L1 through L4 to indicate the level of sensitivity. These would be used as part of the actual context name, e.g. "L1 (Top Secret) Intellectual Property". This serves a few purposes. Firstly, if a user has access to many classifications, they will be listed in order of sensitivity, with the most important at the top, when users are making decisions about the classification of documents. It also makes it very clear how sensitive each classification is. If I attempt to open a document I do not have rights to, the IRM software redirects me to a web page informing me that I don't have access to "L1 (Top Secret) Security"; immediately I understand that I shouldn't be opening this top secret document, because it is classified above my access level. Note that only ongoing contexts are documented in the above map. There may well be a context called "L1 (Top Secret) Smith versus Jones dispute", used to secure information about a highly confidential lawsuit, but such a classification exists for only a short period of time and is therefore created as and when needed. The context map is designed to document classifications which will exist for the ongoing future of the company.
Periodic expiry & version controlThe last example in this set of use cases is when IRM can allow for the periodic expiry of access to information which in turn can also be used to implement security related version control. Consider the situation where your company has some very valuable product roadmap documents which detail information on the next release of your products. This information may have valuable insight to the direction of the company and the disclosure of such information to competitors, the press or just the general public may have a significant impact to your business. However road map information changes often and therefore not only do you need to ensure who has access to it, but ensure that authorized users are access the right versions. Another useful aspect of IRM is that you may wish to review who has access to your product road maps on a annual basis and examine if the rights model you've decided on is still appropriate, e.g. do you still want users to be able to print the documents. IRM can satisfy both of these requirements when you appropriately design the classification model. Consider the context below;

  Context title      2010 L1 (Top Secret) Product Roadmap
  Contributor        VP Product Management
  Item Readers       Trusted users in the company who have been trained on how to deliver product roadmap presentations and messaging
  Context managers   VP of product development and those who approve and verify the training of trusted users
This is a very simple definition of a context but a great demonstration of the powerful capabilities of Oracle IRM. The only person who can create product roadmap documents is the VP, because this person is the last point in the review and approval process and as such has the authority to reseal the final product roadmap document from the work in progress context into this published context. The Item Reader role by default gives no access to anything in the context. So as each person completes the product roadmap training, they are given the Item Reader role, and at the same time you add the specific documents on which they've been trained. There is of course an administrative overhead here: if you have hundreds of users being trained a month, someone has to be administering IRM. Using groups at this point does allow the management to be simplified. You might have a group called "Trained 2010 product roadmap presentation field sales users", and this group has been given the Item Reader role with a document restriction of the current 2010 product roadmap presentation. The management of users who can access these documents is then done in the user directory, for example by managing group membership in Active Directory. A better solution for managing this rights assignment would be to use a provisioning system such as Oracle Identity Manager. Here you can centralize the workflow of users being trained, and then not only give them access to the IRM context but also automate the provisioning to the location where the documents are stored.

Periodic expiry
Because the context name is prepended with the year, in 2011 the owner of this classification needs to review it. This review may decide that users with the "Item Reader" role can be trusted to print the content and that the 2-week offline period is too long and should be reduced to 1 week. The use case may also require that users be trained each year on the presentation of product roadmap information. So a new context, "2011 L1 (Top Secret) Product Roadmap", is created with a blank list of Item Readers, ready for newly trained users to be given access to the new product roadmap. All Item Readers in the 2010 context are then removed, and in one simple action you ensure that nobody can access the old, outdated 2010 information. Because Oracle IRM separates all the access rights from the documents themselves, there is nothing else to do: you remove access at the server, and as the offline periods for these documents expire, so does the access. The advantage of this retirement of access to old content is that if you ever need to access a 2010 product roadmap document in the future, the IRM administrator can simply go back to the old context and give a specific person access.
Version control
With the Item Reader role you are explicitly defining which documents users have access to. Whilst this might incur an administrative cost in maintaining the list, the value from a security perspective is very fine-grained control and high visibility of who can access what. Another benefit is that, because Oracle IRM allows you to change access rights at any time, you can update this list. So imagine you have a group of trained users assigned an Item Reader role that lists version 1 of the product roadmap presentation. After a few months the roadmap changes, as it often does, and a new version 2 is created. Once the new version is available you can remove the group's access to version 1 and add version 2. What does this mean? Now everyone in that group trying to open version 1 is going to get an access-denied message. But this message is in the form of a web status page which you have full control over. You can now modify that status page to provide a link to the new version 2, which they do have the ability to open.

This is incredibly powerful. Not only is IRM providing the means to ensure only authorized users have access to your most sensitive information, but it is ensuring they can only access the latest versions of that information AND allowing you to easily communicate to them where to GET that latest version from.

These are just a few of the many uses for Oracle IRM. If you would like to discuss your own particular use cases and see how Oracle IRM can help, please contact us.

New White Paper: Oracle Unbreakable Enterprise Kernel for Linux

Sergio's Blog - Tue, 2010-10-12 05:18

Today we published a new white paper on the Oracle Unbreakable Enterprise Kernel for Linux. This white paper covers the following topics:

  • What is Oracle's Unbreakable Enterprise Kernel for Linux
  • Installation Requirements
  • Installing From Oracle's Public Yum Server
  • Known Issues with Installing via Yum
  • Installing from Unbreakable Linux Network (ULN)
  • Verifying Kernel Installation
  • Compatibility and Third Party Software
  • Uninstalling Unbreakable Enterprise Kernel
Read the white paper here.
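
As a taste of the yum-based install covered there, the steps look roughly like this (a sketch from memory; repository and package names may differ, so treat the white paper as authoritative):

# Oracle Linux 5 example; names and paths are illustrative
cd /etc/yum.repos.d
wget http://public-yum.oracle.com/public-yum-el5.repo
# set enabled=1 for the UEK channel in the downloaded .repo file, then:
yum install kernel-uek
# reboot and select the new kernel
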
Categories: DBA Blogs

Cloud SSO heating up

Vikas Jain - Thu, 2010-10-07 18:40
In the early part of this decade, SSO vendors (Oblix, Netegrity, Tivoli, etc.) provided solutions that made life simpler and brought efficiencies for both employees and IT by eliminating the need to remember and maintain/reset tens if not hundreds of username/password combinations that allowed employees to access the internal applications needed for their jobs.

In the next wave, these SSO solutions moved into partner- and consumer-facing applications, where federation was brought in to mediate between different security systems, leading to the popularization of the SAML standard.

Fast forward to now: as a new set of applications gets delivered as SaaS, SSO has had to catch up with this new deployment model, and new products and solutions are emerging to solve these challenges.

  • TriCipher (acquired by VMware) - VMware saw this need early on as it tries to deliver the vCloud platform. This piece may also become the security mediator between vCloud deployments and external SaaS/cloud offerings. We will have to watch what VMware does with it.
  • PingIdentity - The PingFederate solution addresses this need. PingIdentity has been a pioneer in the SAML federation space.
  • Symplified - Started by ex-PingIdentity folks, it has quickly earned a name for itself in this space.
  • Vordel - Its Cloud Service Broker provides a solution in this space.
  • Citrix OpenCloud Access - This is the latest addition to this space, available as an optional module for Citrix Netscaler. Announced yesterday at Citrix Synergy (Citrix's annual user conference), this should also help Citrix implicitly sell more of its GoToMeeting product line.
As you can see, the market for Cloud SSO is heating up...

Access Google address book via LDAP using OVD

Vikas Jain - Thu, 2010-10-07 17:45
My colleague Mark Wilcox, who also runs a blog, created an integration between Oracle Virtual Directory (OVD) and the Google address book.
This solves use cases for customers who use Google Apps for business and would also like to use Google as their source of identity, instead of maintaining user profiles in their own LDAP stores. OVD provides a nice virtual LDAP interface on top of this Google identity store. Customers can leverage it for SSO to their enterprise apps using Google identities. Where there's a need to add custom attributes to a user's Google profile, OVD has a provision to allow the addition of such attributes without modifying the schema of the Google identity store (which is inaccessible anyway).

Note that this is different from the SAML federation that Google supports for access to "Google Apps" using enterprise identities that come from enterprise LDAP.

OMBPlus and child types in a mapping or process flow

Klein Denkraam - Thu, 2010-10-07 12:21

Some time ago I needed a script to get a hierarchical list of all mappings and process flows that were tied together. I had some problems finding out the types of the activities in a process flow, because if I encountered a subprocess flow I wanted to recursively go into that subprocess and get all activities there. I could not find any property that contained the needed information, but I found out that you can get a list of all subprocess activities or mappings in a process flow, like this:

OMBRETRIEVE PROCESS_FLOW '$p_procesFlow'  GET SUBPROCESS ACTIVITIES
OMBRETRIEVE PROCESS_FLOW '$p_procesFlow'  GET MAPPING ACTIVITIES

That solved my immediate problem. However, there are many more types that can be used, and it surely was not the most elegant way to do this. But I had no idea how to do it better until I found an undocumented property here. Below is the procedure I created to get the type of an activity or operator.

proc get_typ {p_parentType p_parent p_childType p_child o_typ } {
   upvar $o_typ l_typ
   # STRONG_TYPE_NAME is a dotted name; the token that holds the type sits
   # at a different position for operators than for activities
   switch $p_childType {
      "OPERATOR" { set pos 6 }
      "ACTIVITY" { set pos 5 }
   }
   # retrieve the undocumented STRONG_TYPE_NAME property, split it on '.'
   # and pick out the type token
   set l_typ [lindex \
                [split [OMBRETRIEVE $p_parentType '$p_parent' \
                                $p_childType '$p_child' \
                                GET PROPERTIES (STRONG_TYPE_NAME) ] '.' ] $pos ]
   # translate the STRONG_TYPE_NAME token to the name the OMB language expects
   switch [string toupper $l_typ] {
      "AGGREGATION"             { set l_typ AGGREGATOR }
      "ANYDATACAST"             { set l_typ ANYDATA_CAST }
      "VARIABLES"               { set l_typ CONSTANT }
      "USERTYPES"               { set l_typ CONSTRUCT_OBJECT }
      "PSEUDOCOLUMN"            { set l_typ DATA_GENERATOR }
      "DISTINCT"                { set l_typ DEDUPLICATOR }
      "EXTERNALTABLE"           { set l_typ EXTERNAL_TABLE }
      "FLATFILE"                { set l_typ FLAT_FILE }
      "MAPPINGINPUTPARAMETERS"  { set l_typ INPUT_PARAMETER }
      "JOIN"                    { set l_typ JOINER }
      "KEYLOOKUP"               { set l_typ KEY_LOOKUP }
      "MATERIALIZEDVIEW"        { set l_typ  MATERIALIZED_VIEW }
      "NAMEADDRESS"             { set l_typ NAME_AND_ADDRESS }
      "MAPPINGOUTPUTPARAMETERS" { set l_typ OUTPUT_PARAMETER }
      "SUBMAP"                  { set l_typ PLUGGABLE_MAPPING }
      "POSTMAPTRIGGER"          { set l_typ POSTMAPPING_PROCESS }
      "PREMAPTRIGGER"           { set l_typ PREMAPPING_PROCESS }
      "SETOPERATION"            { set l_typ SET_OPERATION }
      "ORDERBY"                 { set l_typ SORTER }
      "TABLEFUNCTION"           { set l_typ TABLE_FUNCTION }
      "TRANSFORMFUNCTION"       { set l_typ TRANSFORMATION }
      default                   { set l_typ [string toupper $l_typ ] }
   }
}

I created a temporary process flow called TMP and a likewise-named mapping. In both I included all possible activity or operator types - not resulting in a valid object, but that was not the point of the exercise. As shown in the procedure above, the type names retrieved with the STRONG_TYPE_NAME property do not exactly match the names used in the OMB language, so I had to add a translation switch to the procedure. Maybe a bit too straightforward, but it works.
Now I can find the type and use it to generate or alter all kinds of operators or activities.
Below is a simple example showing the possible types.

OMBCC '/<project>/<oracle_module>'
set map TMP
set ops [OMBRETRIEVE MAPPING '$map' GET OPERATORS]
foreach op $ops {
   get_typ MAPPING $map OPERATOR $op typ
   puts "Operator $op is of type $typ"
   puts "OMBRETRIEVE MAPPING '$map' GET $typ OPERATORS"
   OMBRETRIEVE MAPPING '$map' GET $typ OPERATORS
}

OMBCC '/<project>/<processflow_module>/<processflow_package>/'
set pf TMP
set acts [OMBRETRIEVE PROCESS_FLOW '$pf' GET ACTIVITIES]
foreach act $acts {
   get_typ PROCESS_FLOW $pf ACTIVITY $act typ
   puts "Activity $act is of type $typ"
   puts "OMBRETRIEVE PROCESS_FLOW '$pf' GET $typ ACTIVITIES"
   if {$typ == "START" } { puts "START activity is special"
   } else {
      OMBRETRIEVE PROCESS_FLOW '$pf' GET $typ ACTIVITIES
   }
}

And the output is below. I explicitly named the operators/activities after their types to make the translation easy to verify.

Operator AGGREGATOR is of type AGGREGATOR
Operator ANYDATA_CAST is of type ANYDATA_CAST
Operator CONSTANT is of type CONSTANT
Operator CONSTRUCT_OBJECT is of type CONSTRUCT_OBJECT
Operator DATA_GENERATOR is of type DATA_GENERATOR
Operator DEDUPLICATOR is of type DEDUPLICATOR
Operator EXPAND_OBJECT is of type CONSTRUCT_OBJECT
Operator EXPRESSION is of type EXPRESSION
Operator EXTERNAL_TABLE is of type EXTERNAL_TABLE
Operator FILTER is of type FILTER
Operator FLAT_FILE is of type FLAT_FILE
Operator INPUT_PARAMETER is of type INPUT_PARAMETER
Operator JOINER is of type JOINER
Operator KEY_LOOKUP is of type KEY_LOOKUP
Operator LCRSPLITTER is of type LCRSPLITTER
Operator LCR_CAST is of type LCRCAST
Operator MATCHMERGE is of type MATCHMERGE
Operator MATERIALIZED_VIEW_1 is of type MATERIALIZED_VIEW
Operator NAME_AND_ADDRESS is of type NAME_AND_ADDRESS
Operator OUTPUT_PARAMETER is of type OUTPUT_PARAMETER
Operator PIVOT is of type PIVOT
Operator PLUGGABLE_MAPPING is of type PLUGGABLE_MAPPING
Operator POST_MAPPING_PROCESS is of type POSTMAPPING_PROCESS
Operator PRE_MAPPING_PROCESS is of type PREMAPPING_PROCESS
Operator SEQEUNCE is of type SEQUENCE
Operator SET_OPERATION is of type SET_OPERATION
Operator SORTER is of type SORTER
Operator SPLITTER is of type SPLITTER
Operator TABLE_ is of type TABLE
Operator TABLE_FUNCTION is of type TABLE_FUNCTION
Operator TRANSFORMATION is of type TRANSFORMATION
Operator UNPIVOT is of type PIVOT
Operator VARRAY_ITERATOR is of type CONSTRUCT_OBJECT
Operator VIEW_ is of type VIEW
Activity WHILE_LOOP is of type WHILE_LOOP
Activity EMAIL is of type EMAIL
Activity NOTIFICATION is of type NOTIFICATION
Activity END_SUCCESS is of type END_SUCCESS
Activity MAPPING is of type MAPPING
Activity ROUTE is of type ROUTE
Activity OR1 is of type OR
Activity TRANSFORMATION is of type TRANSFORMATION
Activity AND1 is of type AND
Activity END_WARNING is of type END_WARNING
Activity END_LOOP is of type END_LOOP
Activity WAIT is of type WAIT
Activity MANUAL is of type MANUAL
Activity USER_DEFINED is of type USER_DEFINED
Activity START1 is of type START
Activity ASSIGN is of type ASSIGN
Activity END_LOOP_1 is of type END_LOOP
Activity FORK is of type FORK
Activity SET_STATUS is of type SET_STATUS
Activity FILE_EXISTS is of type FILE_EXISTS
Activity END_LOOP_2 is of type END_LOOP
Activity FTP is of type FTP
Activity SQLPLUS is of type SQLPLUS
Activity END_ERROR is of type END_ERROR
Activity FOR_LOOP is of type FOR_LOOP
Activity DATA_AUDITOR_MONITOR is of type DATA_AUDITOR
Activity SUBPROCESS is of type SUBPROCESS
