Feed aggregator

The Unicorn Factor @ The Twitter Conference

Ken Pulverman - Thu, 2009-05-28 01:38
140: The Twitter Conference, a conference focused on the tools and trends of the real-time web, just ended. We were there to see what we could leverage immediately in our marketing efforts. One of the presenters revealed the perfect quote, originally served up as a tweet, to sum up much of the content at events like this: 'All Unicorn, No Cupcake.' We got some Cupcake nonetheless, taking away a few gems.

Here's a fun fact from the conference. Did you know that there are now 10,000 apps leveraging the Twitter API? Lots of these utilities and their founders made an appearance at the show. Very inspirational. Literally one guy, not in a garage but living above the garage, in many cases. The guy next to me today had a very cool venture going that he was driving in his free time when he is not working at a Midwest farm-equipment manufacturer. Seeing the entrepreneurial energy and the great utilities in progress is clearly not the source of my comment above, as I think many of these folks are delivering real Cupcake.

The Unicorn part of the reference relates more to some of the panelist consultants, conference groupies, and technology observers. No point calling him out here, but a well-known social media observer told the audience that people are just going to need to get better at creating filters. Pure Unicorn. This from a guy who has 5 computers to geek out with, Wall Street style, while tracking the social web. Can you imagine Joe Six-Pack trying to promote his lawn fertilizer business with the same rig? Absolutely not.

There was also a ton of navel-gazing musing on the future from the panel jockeys who aren't sure if they put the clothes in the dryer before boarding the flight to Mountain View. The exception to this was Jason Calacanis, who gave an excellent view of the likely future of Twitter and the real-time web, complete with possible screen mockups. He offered real justifications and potential business models for what he proposed. The rule, unfortunately, was the folks from Twitter, who provided just vague notions of where they are headed, demonstrating for all that the business they created has clearly gotten away from them.

In the battle of Cupcake vs. Unicorns, though, Cupcake did win out in the passion and brilliant ideas of the developers who showed their wares. Search, analytics, collaboration, etc. The Unicorns occasionally offered value too, when they pointed to the best of these utilities, like Twitrratr.com, an elegant and simple way to check the sentiment of posts for your brand.

Bottom line: the real-time web is here. Time for all marketers to figure it out. Time spent waiting to see if Twitter is going to make it or come up with a business model is time wasted. The need to make sense of real-time data from customers, employees, friends, and enemies, not to mention your car, house, and maybe even a pet or two, is here whether we like the notion or not. Learning to converse in this new world takes time, so it is best to start now.

I guarantee there will be a real-time web solution without the need for a frequent Fail Whale in our near future. True, the utilities out there are numerous, sometimes shaky betas, and arguably all features rather than complete applications. Having said that, there are a lot of brilliant ideas to get your head around, like location-centric twittering, real-time integrations with enterprise apps, and tweets that are triggered automatically and in turn trigger other social interactions. Whew.

My advice for businesses: chances are your Twitter "phone" is already ringing. Answer it and begin unraveling how to leverage the real-time web for your company.

First assignment: figure out what a hashtag is and peruse the details from this Twitter conference using the tag #140tc. I'll let you apply your own Cupcake filter.

Oracle Enterprise Performance Management Architect (EPMA) – Administering Essbase, Planning and HFM – Setup

Venkat Akrishnan - Wed, 2009-05-27 12:37

One of the major advantages of Oracle acquiring so many companies is the fact that it integrates all the products in some fashion or other, and quickly at that. For example, if you look at the different possible ways of loading data into Essbase, the list of options available is sometimes absolutely daunting (though to an extent it is flexibility as well), especially for users who are new to the product stack. The following list gives all the possible ways that I can think of for loading data into Essbase:

1. Using Essbase Administration Services or EAS
2. Using Essbase Studio (from EPM 11 version)
3. Using Oracle Data Integrator (uses JAPI)
4. Using Hyperion DIM or Data Integration Management
5. Using custom Java, C & VB APIs
6. Using MaxL scripts through a Unix shell or batch script
7. Using Essbase Integration Services (superseded now by Essbase Studio)
8. Using Oracle Hyperion Enterprise Performance Management Architect
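Option 6 is worth a quick sketch, since it is the most scriptable of the lot. A minimal MaxL load script might look like the following; the server name, credentials, and file paths are hypothetical, and it is run through the essmsh shell:

```maxl
/* load_basic.msh - a sketch; run with: essmsh load_basic.msh */
login 'admin' 'password' on 'localhost';

/* load a free-form data file into the Demo.Basic database,
   writing any rejected records to an error file */
import database 'Demo'.'Basic' data
    from data_file '/tmp/basic_data.txt'
    on error write to '/tmp/basic_load.err';

logout;
exit;
```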

I have covered 7 of the above 8 in prior blog entries. Today we shall see another important tool that is quite commonly used for managing the metadata of Oracle EPM products like Planning, Financial Management, and Profitability and Cost Management. This tool, called Enterprise Performance Management Architect or EPMA, provides a uniform platform for metadata management across most of the Hyperion components. Apart from its capabilities, one strange aspect of this product is the fact that it depends on IIS to web-enable itself, so you would need a mandatory Windows server to host it. When I started using it a couple of months back, my initial impressions were far from favorable. I am not sure whether that was because of my environment or because of the product itself. Whenever EPMA was started, it seemed to consume the entire memory (more than the memory occupied by the Oracle SGA and the Essbase kernel). Having said that, its features (and the concept of common metadata management) are really good, and of course with more usage I am starting to like it more and more.

Today we shall see what it takes to set up EPMA to manage Essbase applications/outlines. Just remember that EPMA was never designed to do individual application-level (Essbase, Planning, etc.) administration; that is meant for the respective product administration tools. EPMA is meant for managing metadata like dimensions, hierarchies, and data loads across product sets. So, in effect, it is like ODI or Hyperion DIM but with more product-specific multi-dimensional features thrown in. If you are managing Planning, Essbase, and HFM (or any two of these) in your organization, EPMA could be a good fit when you want a single tool for managing the common dimensions, data loads, etc.

To use EPMA for managing Essbase-specific application outline members, ensure that you are using Essbase in external mode, i.e. Shared Services security should be used. By default, when you install EPM on Windows, Essbase is installed with SSO configured against Shared Services. SSO alone is not sufficient to use EPMA against Essbase; ensure that you convert Essbase from local authentication to external authentication. That is done by externalizing the users from Essbase Administration Services.

Once that is done, the next step is to create a data source for the interface tables using the EPM Configurator.

These interface tables are used for importing the pre-defined common dimension types that are available across EPMA. The idea is to have custom members defined in these interface tables and then import them into EPMA. This step is needed only if you have pre-defined members defined in the interface tables. If you are starting from scratch then this is not needed. Once the data source is created, log into EPMA and go to Application Library.

Then create a new Essbase (ASO) application.

Then edit this ASO application and create the dimensions and the corresponding members as shown below

After this step, go to Shared Services and ensure that the user you have logged in as has the necessary privileges to access Essbase. Basically, one would need application and database administrator privileges.

Then you can deploy this application directly from EPMA to Essbase.

The major drawback is that EPMA does not provide any restructuring options when data is loaded into a deployed cube. Also, one should not change the application directly within Essbase using EAS; the sync between EPMA and Essbase would then be lost, and all changes would be overwritten when the sync is next done from EPMA. Typically EPMA is not recommended for Essbase applications; it is very good for managing Planning and HFM metadata. One can use shared dimensions, wherein the same dimension can be reused across applications. This should give you an idea of how EPMA works. I will go into the details of how one can do data synchronization across applications using EPMA in coming blog entries.


Categories: BI & Warehousing

New job, lots of exciting stuff

Dan Norris - Tue, 2009-05-26 16:31

It’s been a week since I started my new job at Oracle Corporation. I’m a remote worker which means that the first day of work wasn’t the usual event since I just went to my home office and got on a concall with my new manager. After getting connectivity and accounts set up properly, I was able to pretty quickly work through the new hire checklist of forms and mandatory training.

My new Oracle-provided laptop arrived around mid-week and I realized that, at least for now, I’ll have to revert back to using the Windows-based laptop and (hopefully temporarily) put my MacBook Pro on the shelf. Actually, my wife is very excited since she’ll get the MBP to use now and we’ll do the usual “trickle down” to the kids so that the oldest computer in the “fleet” will get ditched.

I tried “upgrading” to a new DSL line (from cable modem) last week too, but that appears to have failed as the DSL modem drops my connection on a regular basis for a few seconds. That’s just long enough to break the VPN connection and make it appear that I’m bouncing off and on instant messaging every hour or so. Annoying.

I was excited to finally get on to some real technical work late last week and got to log in to my first Exadata storage server. The chores were to re-image 4 servers and apply the latest patches to them. The real fun is starting now!

Of course, the most important part of any job (in my opinion) is people. I have had several virtual meetings with my peers and manager so far and, as expected, they're all superb. I'm looking forward to working more with the team and others in our companion teams as the days roll ahead.

Scuba diving pre-ODTUG Kaleidoscope, Monterey, 21-June-2009

Dan Norris - Tue, 2009-05-26 14:28

I’m very pleased to report that I will be able to meet up with ODTUG Kaleidoscope attendees at both the ODTUG Community Service Day (2nd Annual!) and my own scuba dive outing as well. If you can, I’d love for you to attend both events. If you’re not a certified scuba diver, then you can at least participate in the Community Service Day festivities and help out the local area while enjoying some California weather too!

For those certified scuba divers that will (or can) be in the Monterey Bay area on 21-June, I invite you to come diving with me. I’ve arranged some reserved spots on the Beachhopper II dive charter that I’ve dove with before. Brian and Mary Jo (the captain, crew, and bottle washers) are top notch and we had a great time last fall at the first annual pre-OpenWorld scuba event (look for more details on the 2nd annual event later this summer). The boat isn’t huge, but 10 divers is enough for a lot of fun.

The pre-Kaleidoscope dive day is Sunday, 21-June (Father’s Day). The boat will depart the K dock at Monterey Bay harbor at 8am, so load-up is 7:30am. We’ll have a nice morning, drain 2 tanks at some of the best sites you’ll see in northern California (specific sites will be determined that morning by the captain and diver requests), and then motor back to the harbor probably shortly after noon or 1pm. Mary Jo said that she’d also entertain the option of an afternoon 2-tank trip as well, if there is interest (I know I’m interested). Oh, I almost forgot to mention that snacks are provided and they are amazing–made by Mary Jo herself!

The boat costs break down like this:

  • $70 for the boat trip (weights are not included)
  • plus $20 for two tanks of air ($90 total)
  • or $30 for 2 tanks of Nitrox ($100 total)

The charter doesn’t offer gear rental, so we’ll have to pick that up separately. I previously rented from Glenn’s Aquarius 2 which is located pretty close to the harbor and opens at 7am for morning pickup. Their pricing for rental are:

  • Weights only: $8
  • Wetsuit, hood, gloves: $21
  • Full gear (BCD, reg, exposure suit, etc.): $65

We’re less than 1 month away (I just found out I was going to be able to attend last week), so let me know ASAP if you’re interested in diving with us. Once you contact me, I’ll send you the signup instructions. I’m releasing the remaining open seats on 29-May, but there may still be open spots after that, so contact me (comment below, or email) if you’re interested.

As a special treat, Stanley will be joining us for his first scuba dive as well!

Why do I have hundreds of child cursors when cursor_sharing set to similar in 10g

Oracle Optimizer Team - Tue, 2009-05-26 12:04

Recently we received several questions regarding an unusual situation where a SQL statement has hundreds of child cursors. This is in fact the expected behavior when

  1. CURSOR_SHARING is set to similar

  2. Bind peeking is in use

  3. And a histogram is present on the column used in the where clause predicate of the query

You must now be wondering why this is the expected behavior. In order to explain, let's step back and begin with what CURSOR_SHARING actually does. CURSOR_SHARING was introduced to help relieve pressure on the shared pool, specifically the cursor cache, from applications that use literal values rather than bind variables in their SQL statements. It achieves this by replacing the literal values with system-generated bind variables, thus reducing the number of (parent) cursors in the cursor cache. However, there is also a caveat, or additional requirement, on CURSOR_SHARING: the use of system-generated bind variables should not negatively affect the performance of the application. CURSOR_SHARING has three possible values: EXACT, SIMILAR, and FORCE. The table below explains the impact of each setting with regard to the space used in the cursor cache and the query performance.

CURSOR_SHARING value | Space used in shared pool | Query performance
EXACT (no literal replacement) | Worst possible case - each statement issued has its own parent cursor | Best possible case - each statement has its own plan, generated based on the literal value present in the statement
FORCE | Best possible case - only one parent and child cursor for each distinct statement | Potentially the worst case - only one plan will be used for each distinct statement, and all occurrences of that statement will use that plan
SIMILAR without histogram present | Best possible case - only one parent and child cursor for each distinct statement | Potentially the worst case - only one plan will be used for each distinct statement, and all occurrences of that statement will use that plan
SIMILAR with histogram present | Not quite as much space used as with EXACT, but close; instead of each statement having its own parent cursor, each has its own child cursor (which uses less space) | Best possible case - each statement has its own plan, generated based on the literal value present in the statement




In this case the statement with hundreds of children falls into the last category in the above table: CURSOR_SHARING set to SIMILAR and a histogram on the column used in the where clause predicate of the statement. The presence of the histogram tells the optimizer that there is data skew in that column. The data skew means that there could potentially be multiple execution plans for this statement depending on the literal value used. In order to ensure we don't impact the performance of the application, we peek at the bind variable values and create a new child cursor for each distinct value, thus ensuring that each bind variable value gets the optimal execution plan. It's probably easier to understand this by looking at an example. Let's assume there is an employees table with a histogram on the job column and CURSOR_SHARING has been set to SIMILAR. The following query is issued:

select * from employees where job = 'Clerk';

The literal value 'Clerk' will be replaced by a system generated bind variable B1 and a parent cursor will be created as

select * from employees where job = :B1;

The optimizer will peek at the bind variable B1 and use the literal value 'Clerk' to determine the execution plan. 'Clerk' is a popular value in the job column, so a full table scan plan is selected and child cursor C1 is created for this plan. The next time the query is executed, the where clause predicate is job = 'VP', so B1 will be set to 'VP'. This is not a very popular value in the job column, so an index range scan is selected and child cursor C2 is created. The third time the query is executed, the where clause predicate is job = 'Engineer', so B1 is set to 'Engineer'. Again this is a popular value in the job column, so a full table scan plan is selected and a new child cursor C3 is created. And so on, until we have seen all of the distinct values for the job column. If B1 is set to a previously seen value, say 'Clerk', then we reuse child cursor C1.



Value for B1 | Plan used | Cursor number
Clerk | Full table scan | C1
VP | Index range scan | C2
Engineer | Full table scan | C3



As each of these cursors is actually a child cursor and not a new parent cursor, you will still be better off than with CURSOR_SHARING set to EXACT, as a child cursor takes up less space in the cursor cache. A child cursor doesn't contain all of the information stored in a parent cursor; for example, the SQL text is only stored in the parent cursor and not in each child.
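If you want to observe this behavior on your own system, a query along the following lines against V$SQL shows the child-cursor count per parent cursor (the LIKE filter is just an illustration; adjust it to match your own statement text):

```sql
-- Each row in V$SQL is one child cursor; rows sharing a SQL_ID
-- belong to the same parent cursor.
SELECT sql_id, COUNT(*) AS child_cursors
FROM   v$sql
WHERE  sql_text LIKE 'select * from employees%'
GROUP  BY sql_id
ORDER  BY child_cursors DESC;
```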

Now that you know the explanation for all of the child cursors you are seeing, you need to decide if this is a problem for you and, if so, which aspect affects you most: space used in the shared pool, or query performance. If your goal is to guarantee that application performance is not affected by setting CURSOR_SHARING to SIMILAR, then keep the system settings unchanged. If your goal is to reduce the space used in the shared pool, then you can use one of the following solutions, with different scopes:

  1. Individual SQL statements - drop the histograms on the columns used by each of the affected SQL statements

  2. System-wide - set CURSOR_SHARING to FORCE; this will ensure only one child cursor per SQL statement


Both of these solutions require testing to ensure you get the desired effect on your system. Oracle Database 11g provides a much better solution using the Adaptive Cursor Sharing feature. In Oracle Database 11g, all you need to do is set CURSOR_SHARING to FORCE and keep the histograms. With Adaptive Cursor Sharing, the optimizer will create a cursor only when its plan is different from any of the plans used by other child cursors. So in the above example, you will get two child cursors (C1 and C2) instead of 3.
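As a sketch, the two solutions above might look like this in SQL; the HR/EMPLOYEES/JOB names simply continue the example from earlier, so substitute your own schema objects:

```sql
-- Solution 1 (per statement): regather statistics without a histogram
-- on the skewed column. SIZE 1 means one bucket, i.e. no histogram.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'HR',
    tabname    => 'EMPLOYEES',
    method_opt => 'FOR COLUMNS JOB SIZE 1');
END;
/

-- Solution 2 (system-wide): one plan, and one child cursor,
-- per distinct statement.
ALTER SYSTEM SET cursor_sharing = FORCE;
```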

Categories: DBA Blogs, Development

Enterprise Linux 4 Update 8 Released on ULN, public-yum.oracle.com

Sergio's Blog - Tue, 2009-05-26 07:36

We've just released Oracle Enterprise Linux 4, Update 8 on ULN (linux.oracle.com) and on public-yum.oracle.com. DVD ISOs are available to Unbreakable Linux support customers by calling support. They'll be available soon on edelivery.oracle.com/linux.

Categories: DBA Blogs

Oracle BI EE 10.1.3.4.1 – Writebacks to Essbase – Using JAPI and Custom HTML – Part 1

Venkat Akrishnan - Mon, 2009-05-25 14:10

Considering the amount of expectation surrounding the BI EE and Essbase connectivity, I thought it would make a lot of sense to blog about another interesting piece of the Essbase and BI EE integration, i.e. writebacks to Essbase from BI EE. I have seen this question asked by customers/users in quite a few internal presentations that I have been involved in recently. Writebacks to Essbase from BI EE are not supported by default. Having said that, one can create a custom solution to enable a writeback to an Essbase cell. We shall see one approach to doing this today.

As you probably know, writebacks to a relational source from BI EE are supported through custom XML messages. Unfortunately, as of this release, this method cannot be reused for non-relational sources. Basically, our requirement is pretty simple: in a BI EE report (reporting against an Essbase source), the end user should have the ability to enter custom values and update the corresponding intersections back in Essbase. The below screenshot explains the requirement pretty clearly.

The high level architecture diagram to enable these writebacks is given below

The high-level flow is: for every cell update, the end user will enter the new value and click the update button. That will pass the parameters to a JSP page using the HTML form GET method. The JSP will accept the parameters and in turn pass the values to the JAPI, which will then update the Essbase cell. To illustrate this, we shall use the default Demo->Basic cube. Remember that one cannot use the writeback textboxes directly, as currently BI EE does not provide a means of referencing the updated/entered value in a cell outside of the XML template. So, we need to write our own custom HTML to generate the textbox and the update buttons.

Import the Demo Basic cube into the repository and create the BMM and presentation layers by drag and drop. Change the physical and BMM layer aggregations (of all the measures for which you want to enable writebacks) to SUM instead of Aggr_External. The main reason for doing this is to ensure that we can use string manipulation functions like concatenation from Answers. For more details on each of these aggregations check my blog entries here, here, here and here.

Now, let's go to JDeveloper and create a simple JSP page. Use the below code in the JSP. You can customize this to your needs.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<%@ page contentType="text/html;charset=windows-1252"%>
<%@ page import="java.io.*" %>
<%@ page import="java.util.Map" %>
<%@ page import="java.util.Map.Entry" %>
<%@ page import="java.util.jar.Attributes" %>
<%@ page import="java.util.Iterator" %>
<%@ page import="com.essbase.api.base.*" %>
<%@ page import="com.essbase.api.dataquery.*" %>
<%@ page import="com.essbase.api.session.*" %>
<%@ page import="com.essbase.api.datasource.*" %>
<%@ page import="com.essbase.api.domain.*" %>
<%@ page import="com.essbase.api.metadata.*" %>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=windows-1252"/>
    <title>WriteBackEssbase</title>
  </head>
    <body>
		<font size="5"><%="WriteBack Started" %></font>

<%

        String s_userName = "admin";
        String s_password = "password";
        String s_olapSvrName = "localhost";
        String s_provider = "http://localhost:13080/aps/JAPI";

        try
        {
        IEssbase ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
        IEssDomain dom = ess.signOn(s_userName, s_password, false, null, s_provider);
        IEssOlapServer olapSvr = (IEssOlapServer)dom.getOlapServer(s_olapSvrName);
        olapSvr.connect();
        IEssCubeView cv = dom.openCubeView("Data Update Example",s_olapSvrName, "Demo", "Basic");

        String v_Market = request.getParameter("p_Market");
        String v_Product = request.getParameter("p_Product");
        String v_Accounts = request.getParameter("p_Accounts");
        String v_Scenario = request.getParameter("p_Scenario");
        String v_Year = request.getParameter("p_Year");
        String v_Value = request.getParameter("p_Value");

        IEssGridView grid = cv.getGridView();
        grid.setSize(2, 5);
        grid.setValue(0, 1, v_Market);
        grid.setValue(0, 2, v_Product);
        grid.setValue(0, 3, v_Accounts);
        grid.setValue(0, 4, v_Scenario);
        grid.setValue(1, 0, v_Year);

        cv.performOperation(cv.createIEssOpRetrieve());
        System.out.println("\nData Cell at 2nd-row, 2nd-column: " + grid.getValue(1,1).toString());
        System.out.println ("Market: "+v_Market+" Product: "+v_Product+" Accounts: "+v_Accounts+" Scenario: "+v_Scenario+" Year: "+v_Year+" Value: "+v_Value);

        int row = 1, col = 1;
        if (grid.getCellContentType(row, col) ==
                IEssGridView.CELL_CONTENT_TYPE_DOUBLE) {
            IEssValueAny val = grid.getValue(row, col);
            double dblVal = val.getDouble();

            grid.setValue(row, col, Double.valueOf(v_Value).doubleValue());
        } else if (grid.getCellContentType(row, col) ==
                IEssGridView.CELL_CONTENT_TYPE_MISSING) {
            grid.setValue(row, col, Double.valueOf(v_Value).doubleValue());
        }

        IEssOpUpdate opUpd = cv.createIEssOpUpdate();
        cv.performOperation(opUpd);

        }catch (EssException x){
            System.out.println("ERROR: " + x.getMessage());
        }
%>
<font size="5"><%="WriteBack Ended" %></font>
        </body>
</html>
<%
    //response.sendRedirect("http://localhost:9704/analytics");
%>

I will not explain this in detail, as the JSP is self-explanatory. But there is one aspect of writebacks using the JAPI to be aware of: whenever the JAPI is used for writebacks, ensure that you have an IEssGridView, which basically visualizes your output like an Excel add-in grid. The rows and columns are numbered in increasing order starting from zero.
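To make the zero-based addressing concrete, the 2x5 grid that the JSP builds can be pictured as a plain Java array. This is only an illustration of the coordinate scheme, not the Essbase API itself; the member names are the sample values from the test URL:

```java
public class GridLayoutSketch {
    public static void main(String[] args) {
        // Mirror of grid.setSize(2, 5): 2 rows, 5 columns, indexed from zero.
        String[][] grid = new String[2][5];
        grid[0][1] = "Market";   // grid.setValue(0, 1, v_Market)
        grid[0][2] = "Product";  // grid.setValue(0, 2, v_Product)
        grid[0][3] = "Accounts"; // grid.setValue(0, 3, v_Accounts)
        grid[0][4] = "Actual";   // grid.setValue(0, 4, v_Scenario)
        grid[1][0] = "Qtr1";     // grid.setValue(1, 0, v_Year)

        // The cell the JSP reads and updates is (1, 1): second row,
        // second column, just as in an Excel add-in retrieve.
        System.out.println("Data cell coordinates: row=1, col=1");
        System.out.println("Row header: " + grid[1][0]
                + ", column header: " + grid[0][1]);
    }
}
```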

The snippet that actually performs the writeback is the IEssOpUpdate operation in the code above.

Once this is done, the JSP has to be deployed on a web server that is accessible to BI EE. To accomplish that, create a custom WAR profile containing all the dependent jar files as well as the manifest information.

Then deploy this WAR file on the same application server as BI EE (or OC4J).

Once the deployment is done, test the jsp page by passing the url as shown below

http://localhost:9704/WriteBack/WriteBack.jsp?p_Value=2000&p_Market=Market&p_Product=Product&p_Accounts=Accounts&p_Scenario=Actual&p_Year=Qtr1
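The test URL can also be assembled programmatically, which is handy once member names contain spaces or special characters. The sketch below uses the parameter names from the JSP and the default BI EE host/port; the helper class name is made up for illustration:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class WriteBackUrlBuilder {

    // Builds the WriteBack.jsp test URL from a map of request parameters,
    // URL-encoding each key and value.
    static String buildUrl(String base, Map<String, String> params)
            throws UnsupportedEncodingException {
        StringBuilder sb = new StringBuilder(base).append('?');
        boolean first = true;
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (!first) sb.append('&');
            sb.append(URLEncoder.encode(e.getKey(), "UTF-8"))
              .append('=')
              .append(URLEncoder.encode(e.getValue(), "UTF-8"));
            first = false;
        }
        return sb.toString();
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        // LinkedHashMap keeps the parameters in insertion order.
        Map<String, String> p = new LinkedHashMap<String, String>();
        p.put("p_Value", "2000");
        p.put("p_Market", "Market");
        p.put("p_Product", "Product");
        p.put("p_Accounts", "Accounts");
        p.put("p_Scenario", "Actual");
        p.put("p_Year", "Qtr1");
        System.out.println(
            buildUrl("http://localhost:9704/WriteBack/WriteBack.jsp", p));
    }
}
```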

Once this is done, let's create a BI EE report containing all the dimensions. Remember that for writebacks to work in Essbase, we need a value/member from every dimension. Create a custom column and, in that custom column, enter the formula shown below.

'<form name="input" action="http://localhost:9704/WriteBack/WriteBack.jsp">
<input type="text" name="p_Value" size="10"/>
<input type="submit" value="Update" />
<input type="hidden" name="p_Accounts" value="Sales" />
<input type="hidden" name="p_Year" value="'||"Year"."Gen2,Year"||'"/>
<input type="hidden" name="p_Market" value="'||Market."Gen2,Market"||'"/>
<input type="hidden" name="p_Product" value="'||Product."Gen1,Product"||'"/>
<input type="hidden" name="p_Scenario" value="'||Scenario."Gen2,Scenario"||'"/>
</form>'

And change the column format to HTML.

Basically, the formula above creates an HTML input field in the report itself. One parameter, p_Value, is obtained from the value of the textbox, and the remaining hidden form parameters pass the dimension member attributes to the URL.
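As an illustration of what the concatenation in the column formula produces at runtime, the sketch below splices hypothetical member values ("East", "Qtr1" are made up) into the same literal HTML that the formula emits:

```java
public class FormSnippetSketch {

    // Mimics the Answers column formula: literal HTML with the
    // member value from the current report row spliced in.
    static String hiddenField(String name, String memberValue) {
        return "<input type=\"hidden\" name=\"" + name
                + "\" value=\"" + memberValue + "\" />";
    }

    public static void main(String[] args) {
        // Hypothetical member values for one report row.
        System.out.println(hiddenField("p_Market", "East"));
        System.out.println(hiddenField("p_Year", "Qtr1"));
    }
}
```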

If you go to the report now, you can alter the existing values and on submit, these values would be submitted back to Essbase.

The same methodology can be used for writing data back into Oracle OLAP as well. This is one method of doing writebacks. There is another method, wherein one can use the Writeback template itself; I will discuss that in the future. But for now, the above should give an idea of how writebacks can be enabled in BI EE against Essbase data sources. There are quite a few moving parts, but they are required due to the nature of the connectivity as of today.


Categories: BI & Warehousing

VMWare ESXi Hypervisor

Duncan Mein - Fri, 2009-05-22 06:01
Server virtualisation is something I have been using for a few years now, both at work and at home. I mainly use VMWare Fusion on my MacBook Pro and VMWare Workstation / Player on an old Windows laptop. For everyday tasks these products are amazing, as I can run an XP VM on my MacBook Pro for all those times I need to open MS Project or run TOAD.

On my server at home, I didn't want to install a host OS and then install VMWare Workstation / Server to host several Linux VMs. Instead I explored the option of a "Bare Metal" hypervisor from VMWare (ESXi 3.5 Update 3). The hypervisor is a very small Linux-based kernel that runs natively against your server's hardware without the burden of having to install a host OS. From there you can create and manage all your VMs remotely using the VMWare Client tool.

For more information on ESXi, check out VMWare's website here

Reading the documentation suggested that ESXi was very particular about the hardware it supports, and so began the quest to build a "White Box" ESXi server at home. Whilst this is not supported by VMWare, I wanted to share the process and components used in case anyone else is thinking of building their own ESXi server.

Step 1. Download the ESXi 3.5 ISO from the VMWare Website (You will need to register for an account)

Step 2. Create a Bootable USB Stick (yes, you can boot the Hypervisor from a USB stick if your Motherboard supports it) by following this concise guide

Step 3. Plug your USB stick into your server and configure the IP Address, DNS Server, Hostname and Gateway.

Step 4. Open a Browser window on a client machine and navigate to: http://ip address to download the VMWare Client tool.

Step 5. Enjoy using ESXi

My Server Configuration:

CPU: Intel Core i7 920
MOBO: Gigabyte GA-EX58-UD5
Memory: 12GB of Corsair DDR3 XMS3 INTEL I7 PC10666 1333MHZ (3X2GB)
SATA Controller: Sweex PU102
NIC: 3Com 3c90x

The following sites list loads of compatible hardware with notes and issues encountered:

http://ultimatewhitebox.com/index.php

http://www.vm-help.com/esx/esx3.5/Whiteboxes_SATA_Controllers_for_ESX_3.5_3i.htm

With the setup outlined above, I can run 5 Linux machines running Oracle Database, Application Server, APEX and OBIEE without any issue.

If this is something you are interested in evaluating, I can recommend spending a few hundred pounds on the components, as it beats spending thousands on a supported server from HP.

Oracle BI EE 10.1.3.4.1 – Scheduling Essbase/Planning Calculation Scripts – Action Framework & Custom Java Remote Procedure calls

Venkat Akrishnan - Thu, 2009-05-21 11:48

In the blog entry here, I had shown you how to run an Essbase calculation script from the BI EE dashboards. To make it even more complete, let's look at a means of calling a calculation script through BI Scheduler. So, our idea is to use BI EE as a scheduler for Essbase/Planning calculations. There are certain prerequisites for this, which I have covered in my blog entry here. Once the setup is done, open JDeveloper and create a new project. Include schedulerrpccalls.jar, Ess_japi.jar and Ess_es_server.jar in the project properties.

The idea is to write a custom Java program that runs an Essbase calculation script through the Java API. The program is then bundled as a jar file and called from an iBot. Use the code below to log in to Essbase and execute the calculation.

package bischeduler;

import java.io.*;

import com.siebel.analytics.scheduler.javahostrpccalls.SchedulerJavaExtension;
import com.siebel.analytics.scheduler.javahostrpccalls.SchedulerJobInfo;
import com.siebel.analytics.scheduler.javahostrpccalls.SchedulerJobException;

import com.essbase.api.base.*;
import com.essbase.api.dataquery.*;
import com.essbase.api.session.*;
import com.essbase.api.datasource.*;
import com.essbase.api.domain.*;
import com.essbase.api.metadata.*;

public class BIScheduler implements SchedulerJavaExtension{
    public BIScheduler() {   

        String s_userName = "admin";
        String s_password = "password";
        String s_olapSvrName = "localhost";
        String s_provider = "http://localhost:13080/aps/JAPI";

        try
        {
        IEssbase ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
        IEssDomain dom = ess.signOn(s_userName, s_password, false, null, s_provider);
        IEssOlapServer olapSvr = (IEssOlapServer)dom.getOlapServer(s_olapSvrName);
        olapSvr.connect();

        IEssCube cube = olapSvr.getApplication("Global").getCube("Global");
        String maxLstat = "import database 'Global'.'Global' data connect as 'global' identified by 'global' using server rules_file 'UnitsLd' on error abort;";
        //cube.loadData(IEssOlapFileObject.TYPE_RULES,"UnitsLd",0,"SH",false,"global","global");
        cube.calculate(false,"CalcAll");
        //cube.beginDataload("UnitsLd",IEssOlapFileObject.TYPE_RULES,"global","global",0);
        //IEssMaxlSession maxLsess = (IEssMaxlSession)dom.getOlapServer(s_olapSvrName);
        //maxLsess.execute(maxLstat);
        }catch (EssException x){
            System.out.println("ERROR: " + x.getMessage());
        }

    }
    public void run(SchedulerJobInfo jobInfo) throws SchedulerJobException{
        new BIScheduler();
        System.out.println("JobID is:" + jobInfo.jobID());
        System.out.println("Instance ID is:" + jobInfo.instanceID());
        System.out.println("JobInfo to string is:" + jobInfo.toString());
    }
    public void cancel(){}

    public static void main(String[] args) {
        new BIScheduler();

    }
}

Once this is done, create a deployment profile to bundle the code as a jar file. Ensure that the deployment also bundles the 3 jar files mentioned above. Then deploy the jar file.

Now copy the deployed jar file to the {OracleBI}\web\javahost\lib directory (or whatever directory you have set as the lib path in the Java Host config.xml). Then create an iBot and, in the Advanced tab, call the deployed jar file as shown below.

You can chain iBots in such a way that once the Java class completes, another iBot is triggered to deliver a report based on an Essbase cube. This ensures the administrator knows the calculation has run successfully.

Now you should be able to run Essbase calculations from BI Scheduler directly. One can even run ODI packages using BI EE; I will cover that in the future as well. Keep watching this space next week for a method to do writebacks into Essbase. I will present a couple of methods, both of which can be used effectively in a production-like environment.


Categories: BI & Warehousing

Oracle BI EE 10.1.3.4.1 and Essbase Connectivity – Report Based and Essbase based Grand Totals – Answers Based Aggregation

Venkat Akrishnan - Wed, 2009-05-20 05:13

This post here by Christian prodded me to write about another interesting feature of the BI EE and Essbase connectivity. As you probably know, BI EE supports report-based grand totals/sub-totals in a table view. There are 2 types of totals. One is BI Server-based totals, wherein the BI Server does the totalling on a result set. The other is data-source-specific totalling, wherein the query is fired back to the underlying data source to obtain the totals. For example, let's just quickly import the Demo->Basic cube into the repository and build a very simple report as shown below.

As you see, it is a very simple report containing a report based total and a sub-total at the market level. Before going further, lets look at the outline of the Basic cube first. As you see, every dimension top member is set as a stored only member.

If you look at the MDX of the above report, you would notice that 3 MDX queries are fired: one for the base report, one for the grand total and one for the market sub-total.

The aggregation for the Sales measure is Aggr_External in both the physical and logical layer. In Answers, the aggregation is set as Default. Now, let's go to the outline and convert the Year dimension's top member to be a label-only member as shown below.

Now, try running the same report above. You would notice that the report level totals and sub-totals are totally wrong as shown below.

The main reason for this is that we have converted the topmost member of the outline to label-only, so for the Year dimension Essbase always picks the Qtr1 value instead of totalling all the quarters. For report developers this is an absolute nightmare, considering that a report-based total is created under the assumption that the totalling is done on the report and not at the data source level. Now, from Answers, let's change the aggregation of the Sales measure to SUM.

And look at the report.

Basically, a report-level SUM ensures that all the custom aggregations/totalling occurring in a pivot table/table are done at the report level instead of at the Essbase layer. So, by default, ensure that you always have SUM at the report level so that you do not get wrong answers, especially for totals and sub-totals.
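The arithmetic behind the wrong totals can be shown outside Essbase. In the sketch below (the quarter values are made up for illustration), a label-only top member simply echoes its first child, while a report-level SUM totals all the children:

```java
public class LabelOnlyVsSum {
    public static void main(String[] args) {
        // Hypothetical Sales values for Qtr1..Qtr4.
        double[] quarters = {100.0, 120.0, 90.0, 110.0};

        // Label-only top member: Essbase reports the first child's value.
        double labelOnlyYear = quarters[0];

        // Report-level SUM: the BI Server totals the quarters itself.
        double summedYear = 0.0;
        for (double q : quarters) {
            summedYear += q;
        }

        System.out.println("Label-only 'Year' = " + labelOnlyYear); // 100.0
        System.out.println("SUM over quarters = " + summedYear);    // 420.0
    }
}
```

This is why the report total looked wrong: the grand-total query was handed the label-only value (the first quarter) rather than the true yearly total.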

This should have given you an idea of how the aggregations at 3 layers (Physical, BMM and the Answers) can affect a report. I would cover the usage of Report Aggregations across different BMM and Physical Layer aggregations in the future.


Categories: BI & Warehousing

Upgrade Java plug-in (JRE) to the latest certified version

Aviad Elbaz - Wed, 2009-05-20 03:15

If you have already migrated Oracle EBS 11i to the Java JRE, you may want to update EBS to the latest update from time to time. For example, your EBS environment is configured with Java JRE 6 update 5 and you want to upgrade your clients to the latest JRE 6 update 13.

This upgrade process is very simple:

  1. Download the latest Java JRE installation file
    The latest update can be downloaded from here.
    Download the "JRE 6 Update XX" under "Java SE Runtime Environment".
     
  2. Copy the above installation file to the appropriate directory:
    $> cp jre-6uXX-windows-i586-p.exe $COMMON_TOP/util/jinitiator/j2se160XX.exe
    We have to rename the installation file to the format "j2se160XX.exe", where XX indicates the update version.
     
  3. Execute the upgrade script:
    $> cd $FND_TOP/bin
    $> ./txkSetPlugin.sh 160XX

That's all....

Since we upgraded our system to JRE 6 update 13 (2 weeks ago), our users don't complain about mouse focus issues and some of the other Forms freezes they experienced before. So... it was worth it...

If you haven't migrated from Jinitiator to the native Sun Java plug-in yet, it's highly recommended to do so soon, as Jinitiator is going to be desupported.

See the following post for detailed, step by step, migration instructions: Upgrade from Jinitiator 1.3 to Java Plugin 1.6.0.x.

You are welcome to leave a comment.

Aviad

Categories: APPS Blogs

Making problems for myself

Claudia Zeiler - Wed, 2009-05-20 01:23

Playing around with my toy database I asked myself, "What happens if DUAL has more than 1 row?" I found out.


SQL> insert into dual values ('Y');



1 row created.



SQL> select * from dual;

D
-
X


SQL> select count(*) from dual;

  COUNT(*)
----------
         1


I tried it again. Same result. "Oh, I guess I can't insert into DUAL", says I, and I went about my business.


Later I logged on as SCOTT and tried to drop a table. Playing around, I have more EMP tables than employees.



SQL> DROP TABLE EMP4;


ERROR at line 1:

ORA-00604: error occurred at recursive SQL level 1

ORA-01422: exact fetch returns more than requested number of rows



WHAT??!!?


Yes it is there and there is only 1 table called EMP4.


SELECT OWNER, OBJECT_NAME FROM ALL_OBJECTS WHERE OBJECT_NAME = 'EMP4'


OWNER                          OBJECT_NAME
------------------------------ ------------------------------
SCOTT                          EMP4



I looked the matter up at orafaq. and followed the instructions.

SQL> select * from dual;

D
-
X

SQL> create table temp_dual as select * from dual;

Table created.

SQL> select * from temp_dual;

D
-
X
Y
Y

Yes, I plead guilty. I DID succeed in inserting those rows into DUAL.


SQL> delete from dual where dummy = 'Y';

1 row deleted.


Strange. It deleted 1 row even though I had put 2 in.




SQL> drop table temp_dual;

drop table temp_dual

*

ERROR at line 1:

ORA-00604: error occurred at recursive SQL level 1

ORA-01422: exact fetch returns more than requested number of rows



I deleted the second excess row:

SQL> delete from dual where dummy = 'Y';

1 row deleted.


and I had a functioning database back.


SQL> DROP TABLE TEMP_DUAL;

Table dropped.


and then as SCOTT


SQL> DROP TABLE EMP4;

Table dropped.



OK, I get it: Oracle consults DUAL in the drop process. And don't go messing with a database of any importance. But it is odd how the fact that I was succeeding in messing things up was hidden from me. Yes, it told me that I had inserted the row, but then it didn't display it with a select. It was an interesting bit of play.




SURVEY - Reporting from Oracle E-Business Suite

Richard Byrom - Tue, 2009-05-12 11:17

Reporting from Oracle EBS is an important and current issue, so it would be great if you can share your views on what’s good and bad and any tips via the survey that Simon Tomey has set up.

Simon’s going to the UKOUG EBS Financials SIG in May to help with a workshop on this, so it would be tremendous to get your views and have a discussion with some bite. Moreover, I think we will all find the results of interest. If you’re not currently working with an organisation, please complete the survey from the perspective of the organisation that you last worked for.

Take part in the Oracle E-Business Suite Reporting Survey   

Results

Reporting Survey Results for May 2009 UKOUG SIG

What’s Mine Is Mine

Mary Ann Davidson - Mon, 2009-05-11 05:15

The 2009 RSA Conference is over and it was, as always, a good chance to catch up with old friends and new trends. I was on four panels (including the Executive Security Action Forum on the Monday before RSA) and it was a pleasure to be able to discuss interesting issues with esteemed colleagues. One such panel was on the topic of cloud computing security (ours was not the only panel on that topic, needless to say). One of the biggest issues in getting the panel together was manifest at the outset when, like the famous story of 6 blind men and the elephant, everyone had a different “feel” for what cloud computing actually is.

The “what the heck is cloud computing, anyway?” definitional problem is what makes discussions of cloud computing so thorny. Some proponents of cloud computing are almost pantheists in their pronouncements. “The cloud is everything; everything is the cloud. I’m a cloud, you’re a cloud, we’re a cloud, it’s all the cloud; are in you in touch with your inner cloud?” It’s hard to even discuss cloud computing with them because you need to know what faction of the radical cult you are with to understand how they even approach the topic.

One of the reasons it is hard to debunk cloud computing theology is that the term itself is so nebulous. If by cloud computing, one means software as a service, this is nothing new (so what’s all the fuss about?). Almost as long as there have been computers, there have been people paying other people to manage the equipment and software for them using a variety of different business models. When I was in college, students got “cloud services,” in a way. You got so many computer hours at the university computer center. You’d punch out your program on a card deck, drop it off at the university computer center, someone would load the deck, and later you’d stop by, pick up your card deck and your output. Someone else managed running the program for you (loading the deck) and debited your account for the amount of computing time it took to run the program. (I know, I know, given all the power on a mere desktop these days, this reminiscence is the computing equivalent of “I walked 20 miles to school through snow drifts.” But people who remember those days also remember dropping a card deck, which was the equivalent of “the dog ate my homework” if you couldn’t literally get your cards lined up in time to turn your homework in. Ah, the good old days.)

Today, many companies run hosted applications for their customers through a variety of business models. In some cases, the servers and software are managed at a data center the service provider owns and manages (the @myplace model); in other cases, the service provider manages the software remotely, where the servers and software remain at the customer site (the @yourplace model). What both of these models have in common is knowing “what’s mine is mine.” That is, where the servers are located is not as important as the principle that a customer knows where the data is, what is being done to secure it and that “what’s mine is mine.” If you are not actually managing your own data center, you still will do due diligence – and have a well-written contract, with oversight provisions – to ensure that someone else is securing your data to your satisfaction. If it is not done to your satisfaction you either needed to write a better contract or to terminate the service contract you have for cause.

I therefore find some of the pronouncements about cloud computing to be completely ludicrous if you are talking about anything important, because you want to know a) where something is that is of value to you and b) that it is being secured appropriately. “Being secured” is not just a matter of using secure smoke and mirrors – oops, I mean, a secure cloud protocol – but a bunch of things (kind of like the famous newspaper reporting example – who, what, when, how, why and where). Maybe “whatever” also begins with a W, but nobody would accept that as an answer to the question, “It’s 11PM, do you know where your data is and who is accessing it?”

I’ve used the following example before, most recently at the 2009 RSA Conference, but it’s worth repeating here. Suppose you have a daughter named Janie, who is the light of your life. Can you imagine the following conversation when you call her day care provider at 2pm.?

You: “Where is Janie?”
DCP: “Well, we aren’t really sure right now. Janie is off in the day care cloud. Somewhere. But we are sure she’ll be at the door by 5 when you come to pick her up.”

The answer is, you wouldn’t tolerate such “wherever, whatever” answers and you’d yank Janie out of there ASAP. Similarly, if your data is important, you aren’t going to be happy with a “secure like, wherever” cloud protocol.

There is another reason “the cloud is everything, everywhere” mantra is nonsense. The reality is that if the cloud is everything and everywhere, then you have to protect everything, which is simply not possible (basic military strategy 101, courtesy of Frederick II: “He who defends everything defends nothing”). It’s not even worth trying to do that. If everything is in the cloud then one of two things will happen. Either security will have to rise to that digital equivalent of Ft. Knox everywhere: if not all data is gold, some of it is and you have to protect the gold above all else. Or, security devolves to the lowest common denominator, and we are back to little Janie – nobody is going to drop off their precious jewels in some cloud where nobody is sure where they are or how they are being protected. (You might drop off the neighbor’s kid into an insecure day care cloud because he keeps teasing your cat, but not little Janie.)

One of the reasons the grandiose claims about cloud computing don’t sit well is that most people have an intuitive defensiveness about “what’s mine.” You want to know “what’s mine is mine, what’s yours is yours” and most of us don’t have megalomaniacal tendencies to claim what’s yours as mine. Nor frankly, do we generally care about “what’s yours” unless you happen to be a friend or there are commons that affect both of us (e.g., if three houses in the neighborhood get burgled, I’m more likely to join neighborhood watch since what affects my neighbor is likely to affect me, too).

I buy the idea of having someone else manage your applications because I learned at an early age you could pay people to do unpleasant things you don’t want to do for yourself. My mother reminded me of this only last weekend. When I was a 21-year-old ensign stationed in San Diego, my command had a uniform inspection in khakis. I did not like khakis and had not ever had to wear them (the particular shade of khaki the uniforms were made of at that time made everyone look as if he/she had malaria, and the material was a particularly yucky double knit). I was moaning and groaning about having to hem my khaki uniform skirt when my mother reminded me that the Navy Exchange had a tailor shop and they’d probably hem my skirt for a nominal fee (the best five dollars I ever spent, as it happens). If you don’t want to manage your applications (in business parlance, because it is not your “core competence”), you can pay someone else to do it for you. You’re not completely off the hook in that you have to substitute due diligence and contract management skills for hands-on IT skills, but this model works for a lot of people.

What I don’t buy is the idea that – for anything of value – grabbing storage or computing on the fly is a model anybody is going to want to use. A pharmaceutical company running clinical trials isn’t going to store their latest test results “somewhere, out there.” They aren’t necessarily going to rent computing power on the fly, either if the raw data itself is sensitive (how valuable would it be to a competitor to learn that new Killer Drug isn’t doing so well in clinical trials?) You want to know “what’s mine is mine, and is being protected to my verifiable satisfaction.” If it’s not terribly valuable – or, more precisely, if it is not something you mind sharing broadly - then the cloud is fine. A lot of people store their photographs on some web site somewhere which means a) if their hard drive is corrupted, they have a copy somewhere and b) it’s easier to share with lots of people – easier than emailing .JPG files around. I heard one presenter at RSA describe how his company crunched big numbers using “the power of the cloud” but he admitted that the data being crunched was already public data. So, the model that worked was “this is mine; I am happy to share,” or “this is already being shared, and is not really mine.”

Speaking of “what’s mine is mine,” I mentioned in my previous blog entry that I’d had the privilege of testifying in front of Congress in mid-March (the Homeland Security Subcommittee on Emerging Threats, Cybersecurity, Science and Technology). As I only had five minutes for my remarks, I wanted to make a few strong recommendations that I hoped would have an impact. The third of the three recommendations was that the US should invoke the Monroe Doctrine in cyberspace. (One of my co-panelists then started referring to this idea as the Davidson Doctrine, which I certainly cringe at using myself. James Monroe was the president who first invoked the doctrine that bears his name – he gets to have a major doohickey in foreign policy named after him since he was – well, The President. I am clearly not the president or even not a president, unless it is president of the Ketchum, Idaho Maunalua Fan Club.)

For those who have forgotten their history, the Monroe Doctrine – created in 1823 – was a basic enumeration that the United States had a declared sphere of influence in the Western Hemisphere, that further efforts by European governments to colonize or interfere with states in the Western Hemisphere would be viewed by the US as aggressive acts requiring US intervention. The Monroe Doctrine is one of the United States’ oldest foreign policy constructs, it has been invoked multiple times by multiple presidents (well into the 20th century), and has morphed to include other areas of intervention (the so-called Roosevelt Corollary).* In short, the Monroe Doctrine was a declared line in the sand: the United States’ way of saying “what’s mine is mine.”

My principal reason for recommending invocation of the Monroe Doctrine is that we already have foreign powers stealing intellectual property, invading our networks, probing our critical defense systems (and other critical infrastructure systems). Nobody wants to say it, but there is a war going on in cyberspace. Putting it differently, if a hostile foreign power bombed our power plants, would that be considered an act of war? If a group of (non-US) actors systemically denied us the use of critical systems by physically taking control of them, would that be considered an act of war? I am certainly not suggesting that the Monroe Doctrine should govern (if it is invoked in cyberspace) the entire doctrine of cyberwar. But it is the case that before great powers can develop doctrines of cyberwar, they need to declare what is important. “What’s mine is mine: stay out or face the consequences.”

Another incident from the RSA Conference brought this home to me. In the Q and A session after a panel I was on: a woman mentioned she had grown up during the Cold War, when it was obvious who the enemy was. Who, she asked, is the enemy now? My response was, “We aren’t actually allowed to have enemies now. Wanting to annihilate western civilization is a different, equally valid value system that needs to be respected in the interests of diversity.” This sarcastic remark went right over her head for no reason that I can fathom. It is, however, true, that a lot of people don’t want to use the term “enemy” anymore, in part because they don’t even want to acknowledge that we are at war. From what is already public knowledge, we can state honestly that we have numerous enemies attacking our interests in cyber space – from individual actors to criminal organizations to nation states – part of our problem is that because we have not developed a common understanding of what “cyber war” is, we are unable to map these enemies to appropriate responders in the same way we pair street crime up with local cops and attacks on military supply lines with the armed forces.

We need to at least begin to elucidate a larger cyberwar doctrine by declaring a sphere of influence and that messing with that will lead to retribution. Like the Monroe Doctrine, we do not need to publicly elucidate exact responses, but our planning must include specific exercises such as “if A did B, what would our likely response be, where ‘response’ could include signaling and other activities in the non-cyber world?” Nations and others do “signal” each other of intentions, which often allows others to gracefully avoid conflict escalation by reading the signals correctly and backing off.

Slight aside: there are parents more worried about their children’s self esteem than stopping their obnoxious behavior Right This Second. My mother had a great escalation protocol using signaling that I wish all the Gen-Xers, Gen-Yers and Millennials would adopt instead of “we want Johnny to feel good about being a rude brat.” Mom has not had to invoke this on the Davidson kids in several decades because she invoked it so well before we were 10:

Defcon 5 - Child behaves himself or herself
Defcon 4 - The “look” (narrowed eyes, direct eye contact, tense body language)
Defcon 3 - The hiss through clenched teeth
Defcon 2 - “Stop That Right This Minute Or We Are Leaving. I Mean It.”
Defcon 1 - The arm pinch and premise-vacating

This was, my siblings and I can attest to, a well-established escalation protocol with predictable “payoffs” at each level. As a result, we only rarely made it to Defcon 1 (and, in defense of my mother, I richly deserved it when we did).

So, below are some thoughts I wrote up as a later expansion on my remarks to the subcommittee. Invoking the Monroe Doctrine in cyberspace is, I believe, a useful construct for approaching how we think about cybersecurity as the critical national security interest I believe it is.

Applicability of the Monroe Doctrine to Cyberspace

1. The essential truth of invoking a Cyber Monroe Doctrine is that what we are seeing in cyberspace is no different from the kinds of real-world activities and threats our nation (and all nations) have been dealing with for years; we must stop thinking cyberspace falls outside of the existing system of how we currently deal with threats, aggressive acts and appropriate responses.

Referencing the Monroe Doctrine is meant to simplify the debate while highlighting its importance. The Monroe Doctrine became an organizing principle of US foreign policy. Through the concept of the Americas sphere of influence, it publicly identified an area of national interest for the US and clearly indicated a right to defend those interests without limiting the response. Today cyberspace requires such an organizing principle to assist in prioritization of US interest. While cyberspace by its name connotes virtual worlds, we should recall that cyberspace maps to places and physical assets we care about that are clearly within the US government's remit and interest.

Conceptually, how we manage the cyber threat should be no different than how we manage various real-world threats (from domestic crime to global terrorism and acts of aggression by hostile nation-states). Just as the Monroe Doctrine compelled the US government to prioritize intercontinental threats, a Cyber Monroe Doctrine also forces the US government to prioritize: simply put, some cyber-assets are more important than others and we should prioritize protection of them accordingly. We do not treat the robbery of a corner liquor store with the same response (or same responders) as we treat an attempt to release a dirty bomb into a population center, for example. With this approach, policy makers also benefit from existing legal systems and frameworks that ensure actions are appropriate and that protect our civil liberties.

Similarly, not all European incursions into the Western hemisphere have warranted a response under the Monroe Doctrine. For example in 1831, Argentina, which claimed sovereignty over the Falkland Islands, seized three American schooners in a dispute over fishing rights. The US reacted by sending the USS Lexington, whose captain, Silas Duncan, “seized property taken from the American ships, released the American seamen, spiked the fort’s cannon, captured a number of Argentine colonists, and posted a decree that anyone interfering with American fishing rights would be considered a pirate”(The Savage Wars of Peace, Max Boot, page 46).

The territorial dispute ended in 1833 when Great Britain sent a landing party of Royal Marines to seize the Falklands. In this instance the US specifically did not respond by invoking the Monroe Doctrine; the Falklands were deemed of insufficient importance to risk a crisis with London.2. The initial and longstanding value of the Monroe Doctrine was that it sent a signal to foreign powers that the US had a territorial sphere of influence and that incursions would be met with a response. Precisely because we did not specify all possible responses in advance, the Monroe Doctrine proved very flexible (e.g., it was later modified to support other objectives).

It is understandable that the United States would have concerns about ensuring the safety of the 85% of US critical (cyber) infrastructure that is in private hands given that much of this critical infrastructure (if attacked or brought down) has a direct link to the economic well-being of the United States in addition to other damage that might result. That said, declaring a national security interest in such critical infrastructure should not mean militarizing all of it or placing it under military or other governmental control any more than the Monroe Doctrine led to colonization (“planting the flag”) or militarization (military occupation and/or permanent bases) of all of the Western hemisphere. Similarly, the US should not make a cyberspace “land grab” for the Western hemisphere, or even our domestic cyber-infrastructure.

A 21st century Cyber Monroe Doctrine would have the same primary value as the original Monroe Doctrine - a signal to others of our national interests and a readiness to action in defense of those interests. Importantly, any consideration of our cyber interests must be evaluated within the larger view of our national security concerns and our freedoms. For example, it is clear where the defacement of a government website ranks in comparison to a weapons of mass destruction (WMD) attack on a major city. All cyber-risks are not created equal nor should they have a precisely “equal” response.

Another reason to embrace a Cyber Monroe Doctrine (and the innate flexibility it engendered) is the fact that cyberspace represents a potentially “liquid battlefield.” Traditionally, wars have been fought for fixed territory whose battlefields did not dramatically expand overnight (e.g., the attack by Imperial Japan on Pearl Harbor did not overnight morph into an attack on San Francisco, Kansas City and New York City). By contrast, in cyberspace there is no “fixed” territory and thus the boundaries of what is attacked are fluid. For a hostile entity, almost any potential cybertarget is 20 microseconds away.

A Cyber Monroe Doctrine must also accommodate the fundamental architecture of the Internet. Since the value of the Internet is driven by network effects, policies that decrease the value of the Internet through (real or perceived) balkanization will harm all participants. While a Cyber Monroe Doctrine can identify specific critical cyber infrastructure of interest to the U.S., parts of the cyber infrastructure are critical to all global stakeholders. In short, even as the United States may have a cybersphere of influence, there are nonetheless cybercommons. This is all the more true as attacks or attackers move through or use the infrastructure of those cybercommons. Therefore, the US must find mechanisms to be inclusive rather than exclusive when it comes to stewardship and defense of our cybercommons.

3. Placing the critical assets we care about within a framework that maps to existing legal, policy and social structures/institutions is the shortest path to success.

For example, military bases are protected by the military, and a nation-state attack (physical or cyber) against a military base or military cyberassets should fit within a framework that can offer appropriate and proportionate responses (ranging from State Department harassment of the local embassy official, to application of kinetic force). Critical national assets (power plants, financial systems) require similar flexibility, but through engagement of the respective front-line institutions in a manner that permits escalation appropriate to the nature of the attack.

Challenges

There are a number of challenges in applying a Cyber Monroe Doctrine. Below is a representative but by no means exhaustive list of them.

1. Credibility

A deterrence strategy needs teeth in it to be credible. Merely telling attackers “we are drawing a line in the sand, step over it at your peril,” without being able to back it up with an actual and proportionate response is the equivalent of moving the line in the sand repeatedly in an attempt to appear fierce while actually doing nothing. (The Chinese would rightly call such posturers “paper tigers.”) Mere words without at least the possibility of a full range of supporting actions are no deterrent at all. A credible deterrent can be established through non-military options as well - for some, a sharply worded public rebuke may change behavior as much as if we were sending in the Marines.

Because the Monroe Doctrine did not detail all potential responses to provocation in advance, the United States was able to respond as it saw fit to perceived infractions of the Monroe Doctrine on multiple occasions and over much of our history. The response was measured and flexible, but there was a response.

2. Invocation Scenarios

To bolster credibility, the “teeth” part of a cyber doctrine should include a potential escalation framework and some “for instances” in which a Cyber Monroe Doctrine would be invoked. This planning activity can take place in the think tank realm, the cyber exercise realm, or a combination thereof.

We know how to do this. Specifically, military strategists routinely look at possible future war scenarios. In fact, it is not possible to do adequate military planning by waiting for an incident and only then deciding if you have the right tools, war plans, and defense capabilities to meet it, if for no other reason than military training and procurement take years and not days to implement.

Similarly, “changing the battlefield” could be one supporting activity for a Cyber Monroe Doctrine. For example, it has been argued (by Michael Oren in Power, Faith and Fantasy: America in The Middle East 1776 to the Present) that the United States only developed a strong Navy (and the centralized government that enabled it) as a result of the wars against the Barbary pirates. Likewise, the fabric of our military may change, and likely will change, in support of a Cyber Monroe Doctrine, and that could include not only fielding new “troops” – the Marines first made a name for themselves by invading Tripoli – but also new technologies to support a changed mission. One would similarly expect that a Cyber Monroe Doctrine as a policy construct would be supported by specific planning exercises instead of “shoot from the hip” responses.

3. Attribution

A complicating factor in cybersecurity is that an attack - especially if it involves infiltration/exfiltration and not a “frontal assault” (e.g., denial of service attack) - and the perpetrator of it may not be obvious. Thus two of the many challenges of cybersecurity are detecting attacks or breaches in the first place, and attributing them correctly in the second place. No one would want to initiate a response to a cyber attack if one cannot correctly target the adversary. In particular, highly reliable attribution is critical in cyberoffense, since the goal is to take out attackers or stop the attacks, not necessarily to create collateral damage by taking down systems being hijacked by attackers. Notwithstanding this challenge, “just enough attribution” may be sufficient for purposes of “shot over the bow warnings,” even if it would be insufficient for escalated forms of retaliation.
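The “just enough attribution” idea can be pictured as a thresholded decision: the tier of response scales with how confident you are in the attribution. The sketch below is purely illustrative – the class, tiers, and threshold values are my own hypothetical choices, not anyone’s actual doctrine or policy:

```python
# Illustrative sketch only: names, tiers, and thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class Assessment:
    """Result of an attribution effort for a cyber incident."""
    suspected_actor: str
    confidence: float  # 0.0 (no idea) .. 1.0 (certain)


def select_response(assessment: Assessment) -> str:
    """Map attribution confidence to a proportionate response tier.

    "Just enough attribution" supports a shot-over-the-bow warning;
    escalated retaliation demands much higher confidence, since the goal
    is to stop attackers, not to create collateral damage.
    """
    if assessment.confidence >= 0.9:
        return "escalated response (diplomatic, economic, or military options)"
    if assessment.confidence >= 0.5:
        return "shot-over-the-bow warning to " + assessment.suspected_actor
    return "continue monitoring and improving attribution"


print(select_response(Assessment("unknown botnet operator", 0.3)))
# -> continue monitoring and improving attribution
```

The design point is simply that the response ladder is explicit and ordered, so low-confidence attribution can still trigger *some* action without committing to retaliation.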

For example, in cybersecurity circles last year there were a number of discussions about the types of activities that occur when one takes electronic devices overseas (e.g., hard drives being imaged, cell phones being remotely turned on and used as listening devices) and the precautions that one should take to minimize risk. While specific countries were not singled out in one such draft document (outlining the risks and the potential mitigation of those risks), the discussion included whether such warnings should be released in advance of the Beijing Olympics. Some expressed a reluctance to issue such warnings because of the concern that it would cause China to “lose face.”

Ultimately, the concern was rendered moot since Joel Brenner, the National Counterintelligence Executive in the Bush Administration, made the topic public anyway (http://blogs.computerworld.com/slurping_and_other_cyberspying_expected_at_olympics). It seems ludicrous in hindsight that the concern over making a government “feel bad” about activities it was widely acknowledged to be doing should be greater than protecting people who did not know about those risks. (Do we warn people against walking through high-crime areas at night, or are we worried that criminals might be offended if we did so?) Even when we choose to exercise diplomacy instead of countermeasures, diplomacy inevitably includes some element of “you are doing X, we’d prefer that you not do so,” if not an actual “cease and desist” signal.

The difficulty of proper attribution of non-state actors deserves specific attention because of the need for multi-stakeholder cooperation in order to identify and eliminate the threat. When an attacker resides in one location, uses resources distributed around the world, and targets a victim in yet another country, the authorities and individuals responsible for finding out who (or what) is behind the attack may only have portions of the information or resources needed to properly carry out their job. Taking a unilateral approach will at times be simply impossible, and may not offer the quickest path to success. However, working collaboratively with other governments and stakeholders not only builds our collective capacity to defend critical infrastructures around the world, but also ensures that our weakest links do not become havens for cyber criminals or terrorists.

While it can be at times harder in cyberspace to distinguish what kind of foe we face, a Cyber Monroe Doctrine will work best when we can clearly distinguish who is conducting an attack so that we can deliver the appropriate response. This is not an easy task, and will require new skill sets across the entire government to ensure cyber threats are properly categorized.

* The government of the Dominican Republic stopped payment on debts of more than $32 million to various nations, which caused President Theodore Roosevelt to invoke (and expand upon) the Monroe Doctrine to avoid having European powers come to the Western Hemisphere for the purpose of collecting debts. This expansion of the Monroe Doctrine became known as the Roosevelt Corollary.

For More Information

Book of the Week

The Forgotten Man by Amity Shlaes

http://www.amazon.com/Forgotten-Man-History-Great-Depression/dp/0066211700

This is a fascinating economic history of the Depression and why Hoover’s and Roosevelt’s economic policies made the Depression worse – much worse. It’s worth reading for such gems as (quoting philosopher William Graham Sumner): "The type and formula of most schemes of philanthropy or humanitarianism is this: A and B put their heads together to decide what C shall be made to do for D. The radical vice of all these schemes, from a sociological point of view, is that C is not allowed a voice in the matter, and his position, character, and interests, as well as the ultimate effects on society through C's interests, are entirely overlooked. I call C the Forgotten Man." Roosevelt, of course, twisted this to make D the Forgotten Man. Very well written and a reminder of what disastrous government intervention in the economy looks like.

More Useful Hawaiian: Na´u keia mea. Nou kēlā mea. (This is mine. That is yours.)

More on the Monroe Doctrine:

http://en.wikipedia.org/wiki/Monroe_Doctrine

About DEFCON:

http://en.wikipedia.org/wiki/DEF_CON

About William Graham Sumner:

http://en.wikipedia.org/wiki/William_Graham_Sumner

What’s Mine Is Mine

Mary Ann Davidson - Mon, 2009-05-11 05:15

The 2009 RSA Conference is over and it was, as always, a good chance to catch up with old friends and new trends. I was on four panels (including the Executive Security Action Forum on the Monday before RSA) and it was a pleasure to be able to discuss interesting issues with esteemed colleagues. One such panel was on the topic of cloud computing security (ours was not the only panel on that topic, needless to say). One of the biggest issues in getting the panel together was manifest at the outset when, like the famous story of 6 blind men and the elephant, everyone had a different “feel” for what cloud computing actually is.

The “what the heck is cloud computing, anyway?” definitional problem is what makes discussions of cloud computing so thorny. Some proponents of cloud computing are almost pantheists in their pronouncements. “The cloud is everything; everything is the cloud. I’m a cloud, you’re a cloud, we’re a cloud, it’s all the cloud; are you in touch with your inner cloud?” It’s hard to even discuss cloud computing with them because you need to know which faction of the radical cult you are dealing with to understand how they even approach the topic.

One of the reasons it is hard to debunk cloud computing theology is that the term itself is so nebulous. If by cloud computing, one means software as a service, this is nothing new (so what’s all the fuss about?). Almost as long as there have been computers, there have been people paying other people to manage the equipment and software for them using a variety of different business models. When I was in college, students got “cloud services,” in a way. You got so many computer hours at the university computer center. You’d punch out your program on a card deck, drop it off at the university computer center, someone would load the deck, and later you’d stop by, pick up your card deck and your output. Someone else managed running the program for you (loading the deck) and debited your account for the amount of computing time it took to run the program. (I know, I know, given all the power on a mere desktop these days, this reminiscence is the computing equivalent of “I walked 20 miles to school through snow drifts.” But people who remember those days also remember dropping a card deck, which was the equivalent of “the dog ate my homework” if you couldn’t literally get your cards lined up in time to turn your homework in. Ah, the good old days.)

Today, many companies run hosted applications for their customers through a variety of business models. In some cases, the servers and software are managed at a data center the service provider owns and manages (the @myplace model); in other cases, the service provider manages the software remotely, where the servers and software remain at the customer site (the @yourplace model). What both of these models have in common is knowing “what’s mine is mine.” That is, where the servers are located is not as important as the principle that a customer knows where the data is, what is being done to secure it and that “what’s mine is mine.” If you are not actually managing your own data center, you still will do due diligence – and have a well-written contract, with oversight provisions – to ensure that someone else is securing your data to your satisfaction. If it is not done to your satisfaction, you either need to write a better contract or to terminate the service contract for cause.

I therefore find some of the pronouncements about cloud computing to be completely ludicrous if you are talking about anything important, because you want to know a) where something is that is of value to you and b) that it is being secured appropriately. “Being secured” is not just a matter of using secure smoke and mirrors – oops, I mean, a secure cloud protocol – but a bunch of things (kind of like the famous newspaper reporting example – who, what, when, how, why and where). Maybe “whatever” also begins with a W, but nobody would accept that as an answer to the question, “It’s 11PM, do you know where your data is and who is accessing it?”

I’ve used the following example before, most recently at the 2009 RSA Conference, but it’s worth repeating here. Suppose you have a daughter named Janie, who is the light of your life. Can you imagine the following conversation when you call her day care provider at 2 p.m.?

You: “Where is Janie?”
DCP: “Well, we aren’t really sure right now. Janie is off in the day care cloud. Somewhere. But we are sure she’ll be at the door by 5 when you come to pick her up.”

The answer is, you wouldn’t tolerate such “wherever, whatever” answers and you’d yank Janie out of there ASAP. Similarly, if your data is important, you aren’t going to be happy with a “secure like, wherever” cloud protocol.

There is another reason the “cloud is everything, everywhere” mantra is nonsense. The reality is that if the cloud is everything and everywhere, then you have to protect everything, which is simply not possible (basic military strategy 101, courtesy of Frederick II: “He who defends everything defends nothing”). It’s not even worth trying to do that. If everything is in the cloud, then one of two things will happen. Either security will have to rise to the digital equivalent of Ft. Knox everywhere: if not all data is gold, some of it is, and you have to protect the gold above all else. Or security devolves to the lowest common denominator, and we are back to little Janie – nobody is going to drop off their precious jewels in some cloud where nobody is sure where they are or how they are being protected. (You might drop off the neighbor’s kid into an insecure day care cloud because he keeps teasing your cat, but not little Janie.)

One of the reasons the grandiose claims about cloud computing don’t sit well is that most people have an intuitive defensiveness about “what’s mine.” You want to know “what’s mine is mine, what’s yours is yours” and most of us don’t have megalomaniacal tendencies to claim what’s yours as mine. Nor frankly, do we generally care about “what’s yours” unless you happen to be a friend or there are commons that affect both of us (e.g., if three houses in the neighborhood get burgled, I’m more likely to join neighborhood watch since what affects my neighbor is likely to affect me, too).

I buy the idea of having someone else manage your applications because I learned at an early age you could pay people to do unpleasant things you don’t want to do for yourself. My mother reminded me of this only last weekend. When I was a 21-year-old ensign stationed in San Diego, my command had a uniform inspection in khakis. I did not like khakis and had not ever had to wear them (the particular shade of khaki the uniforms were made of at that time made everyone look as if he/she had malaria, and the material was a particularly yucky double knit). I was moaning and groaning about having to hem my khaki uniform skirt when my mother reminded me that the Navy Exchange had a tailor shop and they’d probably hem my skirt for a nominal fee (the best five dollars I ever spent, as it happens). If you don’t want to manage your applications (in business parlance, because it is not your “core competence”), you can pay someone else to do it for you. You’re not completely off the hook in that you have to substitute due diligence and contract management skills for hands-on IT skills, but this model works for a lot of people.

What I don’t buy is the idea that – for anything of value – grabbing storage or computing on the fly is a model anybody is going to want to use. A pharmaceutical company running clinical trials isn’t going to store their latest test results “somewhere, out there.” They aren’t necessarily going to rent computing power on the fly, either, if the raw data itself is sensitive (how valuable would it be to a competitor to learn that new Killer Drug isn’t doing so well in clinical trials?). You want to know “what’s mine is mine, and is being protected to my verifiable satisfaction.” If it’s not terribly valuable – or, more precisely, if it is not something you mind sharing broadly - then the cloud is fine. A lot of people store their photographs on some web site somewhere, which means a) if their hard drive is corrupted, they have a copy somewhere and b) it’s easier to share with lots of people – easier than emailing .JPG files around. I heard one presenter at RSA describe how his company crunched big numbers using “the power of the cloud,” but he admitted that the data being crunched was already public data. So, the model that worked was “this is mine; I am happy to share,” or “this is already being shared, and is not really mine.”

Speaking of “what’s mine is mine,” I mentioned in my previous blog entry that I’d had the privilege of testifying in front of Congress in mid-March (the Homeland Security Subcommittee on Emerging Threats, Cybersecurity, Science and Technology). As I only had five minutes for my remarks, I wanted to make a few strong recommendations that I hoped would have an impact. The third of the three recommendations was that the US should invoke the Monroe Doctrine in cyberspace. (One of my co-panelists then started referring to this idea as the Davidson Doctrine, which I certainly cringe at using myself. James Monroe was the president who first invoked the doctrine that bears his name – he gets to have a major doohickey in foreign policy named after him since he was – well, The President. I am clearly not the president or even a president, unless it is president of the Ketchum, Idaho Maunalua Fan Club.)

For those who have forgotten their history, the Monroe Doctrine – created in 1823 – was a declaration that the United States had a sphere of influence in the Western Hemisphere, and that further efforts by European governments to colonize or interfere with states in the Western Hemisphere would be viewed by the US as aggressive acts requiring US intervention. The Monroe Doctrine is one of the United States’ oldest foreign policy constructs; it has been invoked multiple times by multiple presidents (well into the 20th century), and it has morphed to include other areas of intervention (the so-called Roosevelt Corollary).* In short, the Monroe Doctrine was a declared line in the sand: the United States’ way of saying “what’s mine is mine.”

My principal reason for recommending invocation of the Monroe Doctrine is that we already have foreign powers stealing intellectual property, invading our networks, and probing our critical defense systems (and other critical infrastructure systems). Nobody wants to say it, but there is a war going on in cyberspace. Putting it differently, if a hostile foreign power bombed our power plants, would that be considered an act of war? If a group of (non-US) actors systematically denied us the use of critical systems by physically taking control of them, would that be considered an act of war? I am certainly not suggesting that the Monroe Doctrine should govern (if it is invoked in cyberspace) the entire doctrine of cyberwar. But it is the case that before great powers can develop doctrines of cyberwar, they need to declare what is important. “What’s mine is mine: stay out or face the consequences.”

Another incident from the RSA Conference brought this home to me. In the Q and A session after a panel I was on, a woman mentioned she had grown up during the Cold War, when it was obvious who the enemy was. Who, she asked, is the enemy now? My response was, “We aren’t actually allowed to have enemies now. Wanting to annihilate western civilization is a different, equally valid value system that needs to be respected in the interests of diversity.” This sarcastic remark went right over her head for no reason that I can fathom. It is, however, true that a lot of people don’t want to use the term “enemy” anymore, in part because they don’t even want to acknowledge that we are at war. From what is already public knowledge, we can state honestly that we have numerous enemies attacking our interests in cyberspace – from individual actors to criminal organizations to nation states. Part of our problem is that, because we have not developed a common understanding of what “cyber war” is, we are unable to map these enemies to appropriate responders in the same way we pair street crime up with local cops and attacks on military supply lines with the armed forces.

We need to at least begin to elucidate a larger cyberwar doctrine by declaring a sphere of influence and making clear that incursions into it will lead to retribution. Like the Monroe Doctrine, we do not need to publicly elucidate exact responses, but our planning must include specific exercises such as “if A did B, what would our likely response be?” – where “response” could include signaling and other activities in the non-cyber world. Nations and others do “signal” each other of intentions, which often allows others to gracefully avoid conflict escalation by reading the signals correctly and backing off.

Slight aside: there are parents more worried about their children’s self esteem than stopping their obnoxious behavior Right This Second. My mother had a great escalation protocol using signaling that I wish all the Gen-Xers, Gen-Yers and Millennials would adopt instead of “we want Johnny to feel good about being a rude brat.” Mom has not had to invoke this on the Davidson kids in several decades because she invoked it so well before we were 10:

Defcon 5 - (Child behaves himself or herself)
Defcon 4 - The “look” (narrowed eyes, direct eye contact, tense body language)
Defcon 3 - The hiss through clenched teeth
Defcon 2 - “Stop That Right This Minute Or We Are Leaving. I Mean It.”
Defcon 1 - The arm pinch and premise-vacating

This was, my siblings and I can attest to, a well-established escalation protocol with predictable “payoffs” at each level. As a result, we only rarely made it to Defcon 1 (and, in defense of my mother, I richly deserved it when we did).
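What makes the ladder above work is exactly what makes any escalation protocol work: discrete ordered levels, deterministic transitions, and a floor you cannot escalate past. As a playful sketch (the levels are Mom’s; the code structure and names are mine, purely illustrative):

```python
# A playful, illustrative model of the "Mom protocol" as an ordered
# escalation ladder. Levels come from the post; the code is hypothetical.
from enum import IntEnum


class Defcon(IntEnum):
    """Escalation levels, DEFCON-style: lower number = more severe."""
    BEHAVING = 5       # child behaves himself or herself
    THE_LOOK = 4       # narrowed eyes, direct eye contact, tense body language
    THE_HISS = 3       # the hiss through clenched teeth
    FINAL_WARNING = 2  # "Stop That Right This Minute Or We Are Leaving."
    CONSEQUENCES = 1   # the arm pinch and premise-vacating


def escalate(level: Defcon) -> Defcon:
    """Each ignored signal predictably brings the next, more severe one;
    Defcon 1 is the floor."""
    return Defcon(max(level - 1, Defcon.CONSEQUENCES))


# Predictable "payoffs" at each level make the deterrent credible:
assert escalate(Defcon.THE_LOOK) == Defcon.THE_HISS
assert escalate(Defcon.CONSEQUENCES) == Defcon.CONSEQUENCES
```

The same shape – published levels, predictable consequences, flexibility in when to stop – is what the Cyber Monroe Doctrine argument below asks for at the national scale.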

So, below are some thoughts I wrote up as a later expansion on my remarks to the subcommittee. Invoking the Monroe Doctrine in cyberspace is, I believe, a useful construct for approaching how we think about cybersecurity as the critical national security interest I believe it is.

Applicability of the Monroe Doctrine to Cyberspace

1. The essential truth of invoking a Cyber Monroe Doctrine is that what we are seeing in cyberspace is no different from the kinds of real-world activities and threats our nation (and all nations) have been dealing with for years; we must stop thinking cyberspace falls outside of the existing system of how we currently deal with threats, aggressive acts and appropriate responses.

Referencing the Monroe Doctrine is meant to simplify the debate while highlighting its importance. The Monroe Doctrine became an organizing principle of US foreign policy. Through the concept of a sphere of influence in the Americas, it publicly identified an area of national interest for the US and clearly indicated a right to defend those interests without limiting the response. Today cyberspace requires such an organizing principle to assist in the prioritization of US interests. While cyberspace by its name connotes virtual worlds, we should recall that cyberspace maps to places and physical assets we care about that are clearly within the US government's remit and interest.

Conceptually, how we manage the cyber threat should be no different than how we manage various real-world threats (from domestic crime to global terrorism and acts of aggression by hostile nation-states). Just as the Monroe Doctrine compelled the US government to prioritize intercontinental threats, a Cyber Monroe Doctrine also forces the US government to prioritize: simply put, some cyber-assets are more important than others and we should prioritize protection of them accordingly. We do not treat the robbery of a corner liquor store with the same response (or same responders) as we treat an attempt to release a dirty bomb into a population center, for example. With this approach, policy makers also benefit from existing legal systems and frameworks that ensure actions are appropriate and that protect our civil liberties.

Similarly, not all European incursions into the Western hemisphere have warranted a response under the Monroe Doctrine. For example in 1831, Argentina, which claimed sovereignty over the Falkland Islands, seized three American schooners in a dispute over fishing rights. The US reacted by sending the USS Lexington, whose captain, Silas Duncan, “seized property taken from the American ships, released the American seamen, spiked the fort’s cannon, captured a number of Argentine colonists, and posted a decree that anyone interfering with American fishing rights would be considered a pirate”(The Savage Wars of Peace, Max Boot, page 46).

The territorial dispute ended in 1833 when Great Britain sent a landing party of Royal Marines to seize the Falklands. In this instance the US specifically did not respond by invoking the Monroe Doctrine; the Falklands were deemed of insufficient importance to risk a crisis with London.

2. The initial and longstanding value of the Monroe Doctrine was that it sent a signal to foreign powers that the US had a territorial sphere of influence and that incursions would be met with a response. Precisely because we did not specify all possible responses in advance, the Monroe Doctrine proved very flexible (e.g., it was later modified to support other objectives).

It is understandable that the United States would have concerns about ensuring the safety of the 85% of US critical (cyber) infrastructure that is in private hands given that much of this critical infrastructure (if attacked or brought down) has a direct link to the economic well-being of the United States in addition to other damage that might result. That said, declaring a national security interest in such critical infrastructure should not mean militarizing all of it or placing it under military or other governmental control any more than the Monroe Doctrine led to colonization (“planting the flag”) or militarization (military occupation and/or permanent bases) of all of the Western hemisphere. Similarly, the US should not make a cyberspace “land grab” for the Western hemisphere, or even our domestic cyber-infrastructure.

A 21st century Cyber Monroe Doctrine would have the same primary value as the original Monroe Doctrine - a signal to others of our national interests and a readiness to act in defense of those interests. Importantly, any consideration of our cyber interests must be evaluated within the larger view of our national security concerns and our freedoms. For example, it is clear where the defacement of a government website ranks in comparison to a weapons of mass destruction (WMD) attack on a major city. All cyber-risks are not created equal, nor should they receive a precisely “equal” response.

Another reason to embrace a Cyber Monroe Doctrine (and the innate flexibility it engendered) is the fact that cyberspace represents a potentially “liquid battlefield.” Traditionally, wars have been fought for fixed territory whose battlefields did not dramatically expand overnight (e.g., the attack by Imperial Japan on Pearl Harbor did not overnight morph into an attack on San Francisco, Kansas City and New York City). By contrast, in cyberspace there is no “fixed” territory and thus the boundaries of what is attacked are fluid. For a hostile entity, almost any potential cybertarget is 20 microseconds away.

A Cyber Monroe Doctrine must also accommodate the fundamental architecture of the Internet. Since the value of the Internet is driven by network effects, policies that decrease the value of the Internet through (real or perceived) balkanization will harm all participants. While a Cyber Monroe Doctrine can identify specific critical cyber infrastructure of interest to the U.S., parts of the cyber infrastructure are critical to all global stakeholders. In short, even as the United States may have a cybersphere of influence, there are nonetheless cybercommons. This is all the more true as attacks or attackers move through or use the infrastructure of those cybercommons. Therefore, the US must find mechanisms to be inclusive rather than exclusive when it comes to stewardship and defense of our cybercommons.

3. Placing the critical assets we care about within a framework that maps to existing legal, policy and social structures/institutions is the shortest path to success.

For example, military bases are protected by the military, and a nation-state attack (physical or cyber) against a military base or military cyberassets should fit within a framework that can offer appropriate and proportionate responses (ranging from State Department harassment of the local embassy official, to application of kinetic force). Critical national assets (power plants, financial systems) require similar flexibility, but through engagement of the respective front-line institutions in a manner that permits escalation appropriate to the nature of the attack.

Challenges

There are a number of challenges in applying a Cyber Monroe Doctrine. Below is a representative but by no means exhaustive list of them.

1. Credibility

A deterrence strategy needs teeth in it to be credible. Merely telling attackers “we are drawing a line in the sand, step over it at your peril,” without being able to back it up with an actual and proportionate response, is the equivalent of moving the line in the sand repeatedly in an attempt to appear fierce while actually doing nothing. (The Chinese would rightly call such posturers “paper tigers.”) Mere words without at least the possibility of a full range of supporting actions are no deterrent at all. A credible deterrent can be established through non-military options as well - for some, a sharply worded public rebuke may change behavior as much as sending in the Marines would.

Because the Monroe Doctrine did not detail all potential responses to provocation in advance, the United States was able to respond as it saw fit to perceived infractions of the Monroe Doctrine on multiple occasions and over much of our history. The response was measured and flexible, but there was a response.

2. Invocation Scenarios

To bolster credibility, the “teeth” part of a cyber doctrine should include a potential escalation framework and some “for instances” in which a Cyber Monroe Doctrine would be invoked. This planning activity can take place in the think tank realm, the cyber exercise realm, or a combination thereof.

We know how to do this. Specifically, military strategists routinely look at possible future war scenarios. In fact, it is not possible to do adequate military planning by waiting for an incident and only then deciding if you have the right tools, war plans, and defense capabilities to meet it, if for no other reason than military training and procurement take years and not days to implement.

Similarly, “changing the battlefield” could be one supporting activity for a Cyber Monroe Doctrine. For example, it has been argued (by Michael Oren in Power, Faith, and Fantasy: America in the Middle East, 1776 to the Present) that the United States only developed a strong Navy (and the centralized government that enabled it) as a result of the wars with the Barbary pirates. Likewise, the fabric of our military may change, and likely will change, in support of a Cyber Monroe Doctrine; that could include not only fielding new “troops” – the Marines first made a name for themselves by invading Tripoli – but also new technologies to support a changed mission. One would similarly expect that a Cyber Monroe Doctrine as a policy construct would be supported by specific planning exercises instead of “shoot from the hip” responses.

3. Attribution

A complicating factor in cybersecurity is that an attack - especially if it involves infiltration/exfiltration and not a “frontal assault” (e.g., denial of service attack) - and the perpetrator of it may not be obvious. Thus two of the many challenges of cybersecurity are detecting attacks or breaches in the first place, and attributing them correctly in the second place. No one would want to initiate a response to a cyber attack if one cannot correctly target the adversary. In particular, highly reliable attribution is critical in cyberoffense, since the goal is to take out attackers or stop the attacks, not necessarily to create collateral damage by taking down systems being hijacked by attackers. Notwithstanding this challenge, “just enough attribution” may be sufficient for purposes of “shot over the bow warnings,” even if it would be insufficient for escalated forms of retaliation.

For example, in cybersecurity circles last year there were a number of discussions about the types of activities that occur when one takes electronic devices overseas (e.g., hard drives being imaged, cell phones being remotely turned on and used as listening devices) and the precautions that one should take to minimize risk. While specific countries were not singled out in one such draft document (outlining the risks and the potential mitigation of those risks), the discussion included whether such warnings should be released in advance of the Beijing Olympics. Some expressed a reluctance to issue such warnings because of the concern that it would cause China to “lose face.”

Ultimately, the concern was rendered moot since Joel Brenner, a national counterintelligence executive in the Bush Administration, otherwise made the topic public (http://blogs.computerworld.com/slurping_and_other_cyberspying_expected_at_olympics). It seems ludicrous in hindsight that the concern over making a government “feel bad” about activities that they were widely acknowledged to be doing should be greater than protecting people who did not know about those risks. (Do we warn people against walking through high crime areas at night, or are we worried that criminals might be offended if we did so?) Even when we choose to exercise diplomacy instead of countermeasures, diplomacy inevitably includes some element of “you are doing X, we’d prefer that you not do so,” if not an actual “cease and desist” signal.

The difficulty of proper attribution of non-state actors deserves specific attention because of the need for multi-stakeholder cooperation in order to identify and eliminate the threat. When an attacker resides in one location, uses resources distributed around the world, and targets a victim in yet another country, the authorities and individuals responsible for finding out who (or what) is behind the attack may only have portions of the information or resources needed to properly carry out their job. Taking a unilateral approach will at times be simply impossible, and may not offer the quickest path to success. However, working collaboratively with other governments and stakeholders not only builds our collective capacity to defend critical infrastructures around the world, but also ensures that our weakest links do not become havens for cyber criminals or terrorists.

While it can be at times harder in cyberspace to distinguish what kind of foe we face, a Cyber Monroe Doctrine will work best when we can clearly distinguish who is conducting an attack so that we can deliver the appropriate response. This is not an easy task, and will require new skill sets across the entire government to ensure cyber threats are properly categorized.

* The government of the Dominican Republic stopped payment on debts of more than $32 million to various nations, which caused President Theodore Roosevelt to invoke (and expand upon) the Monroe Doctrine to avoid having European powers come to the Western Hemisphere for the purpose of collecting debts. This expansion of the Monroe Doctrine became known as the Roosevelt Corollary.

For More Information

Book of the Week

The Forgotten Man by Amity Shlaes

http://www.amazon.com/Forgotten-Man-History-Great-Depression/dp/0066211700

This is a fascinating economic history of the Depression and why Hoover’s and Roosevelt’s economic policies made the Depression worse – much worse. It’s worth reading for such gems as (quoting philosopher William Graham Sumner): "The type and formula of most schemes of philanthropy or humanitarianism is this: A and B put their heads together to decide what C shall be made to do for D. The radical vice of all these schemes, from a sociological point of view, is that C is not allowed a voice in the matter, and his position, character, and interests, as well as the ultimate effects on society through C's interests, are entirely overlooked. I call C the Forgotten Man." Roosevelt, of course, twisted this to make D the Forgotten Man. Very well written and a reminder of what disastrous government intervention in the economy looks like.

More Useful Hawaiian:

Naʻu kēia mea. Nou kēlā mea. (This is mine. That is yours.)

More on the Monroe Doctrine:

http://en.wikipedia.org/wiki/Monroe_Doctrine

About DEFCON:

http://en.wikipedia.org/wiki/DEF_CON

About William Graham Sumner:

http://en.wikipedia.org/wiki/William_Graham_Sumner

newspeak strikes again...

Nuno Souto - Thu, 2009-05-07 17:06
Frankly, the amount of hype surrounding the "cloud" thing is reaching the limits of what is acceptable by anyone with half a brain and a working mind! If anything, it is only discrediting the architecture and reducing it to yet another "j2ee" scam. I recently found this in one of the blog feeds: “Private, on-premise clouds are also an option that may lessen security-related concerns.” I beg your pardon?

The Undocumented "/1000" currency formatting function

Oracle WTF - Sun, 2009-05-03 06:32

Forum question:

Hi,

How can I format currency values to shorthand?

i.e. how can I display 12500 as 12.5, 2700 as 2.7, 700 as 0.7 etc?

I have tried using various masks but can't achieve the results I'm looking for.

That's a tough one. How to make 700 into 0.7? Could there be some Oracle feature to help with this?

Two quick replies later:

Thanks for the replies guys

I wasn't aware of the "/1000" feature, but it has done exactly what I need.

Oracle needs to do more to promote these display format features. What else are they hiding? That's what we want to know.
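To spell out the joke for posterity: the “/1000” feature is nothing more than dividing by 1000 and applying an ordinary number format mask. A minimal sketch (the FM9990.0 mask is just one illustrative choice):

```sql
-- Divide by 1000, then format to one decimal place.
-- FM suppresses padding; the 0 before the decimal point keeps
-- a leading zero, so 700 comes out as 0.7 rather than .7.
select to_char(12500/1000, 'FM9990.0') as shorthand from dual;  -- 12.5
select to_char(700/1000,   'FM9990.0') as shorthand from dual;  -- 0.7
```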

Querying v$lock

Jared Still - Wed, 2009-04-29 12:24
There have been a number of scripts made available for querying v$lock to diagnose locking issues.

One example is a script I got long ago from tsawmiller on Oracle-L. The original showlock.sql, or something close to it, is still available at OraFaq.com as showlock.sql.

showlock.sql has morphed over the years to keep up with changing versions of Oracle.

At one time showlock.sql resembled the OH/rdbms/admin/utllockt.sql script, in that it created a temporary table to speed up the results, as the join on v$lock, v$session and dba_waiters was so slow.

That was remedied at one point by the use of the ordered hint. That hint may no longer be necessary, but the script is still fast on all versions of Oracle that I need it on (9i-11g), and I am too lazy to test something that isn't broken.

This script could still be further updated by the use of the v$lock_type view, eliminating the large decode statements in the script. As v$lock_type is not available in 9i, though, I leave the decodes in. When the last 9i database is gone from our environment, however, the script can be shortened considerably.
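For the curious, on 10g and later the hand-maintained decode could indeed be replaced by a join to v$lock_type, which carries NAME and DESCRIPTION columns for every lock type. A sketch of the idea (not a drop-in replacement for the full script below):

```sql
-- 10g+ only: let the data dictionary describe the lock types
-- instead of a hand-maintained decode.
select l.sid,
       l.type       lock_type,
       lt.name,
       lt.description
  from v$lock l
  join v$lock_type lt on lt.type = l.type
  join v$session   s  on s.sid   = l.sid
 where s.type != 'BACKGROUND';
```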

The decode statements were mostly lifted from a script provided by Oracle. MetaLink document (or My Oracle Support now, I guess) # 1020008.6 has a 'fully decoded' locking script that is current through 11g, I believe.

The problem with that script however is that it does not correctly look up the name of the object that is locked.

The reason I have even brought this up is that a bit of my workday yesterday was spent updating the script, and making sure it worked as expected. The COMMAND column was also added. In addition, the outer joins were converted to the much neater ANSI join syntax, and one outer join was eliminated.

Here's the output from a test. It may be easier to read if you cut and paste it into a text editor, as the formatting here doesn't work well for wide output. Better yet, test the script and look at the output for yourself.


       Oracle              Database                                  Lock                  Mode            Mode       OS                 OS
SID Usernam WATR BLKR Object COMMAND Type Lock Description Held Requested Program Process
------ ------- ----- ----- ------------------------- --------------- ---- ---------------- --------------- ---------- ------------------ -------
73 JKSTILL 83 JKSTILL.A SELECT TM DML enqueue lock Exclusive None sqlplus@poirot (TN 21430
83 JKSTILL 73 JKSTILL.A LOCK TABLE TM DML enqueue lock None Exclusive sqlplus@poirot (TN 21455

2 rows selected.




Though utllockt.sql may work well enough, it does have a couple of drawbacks:

1. it does not provide enough information
2. it creates a temporary table.

That second item means that you better be sure to run the script from a session separate from any holding locks. In production that probably does not matter, as that is what would normally be done anyway. During testing, however, it can be a bit frustrating until you realize that the DDL in the script is causing your locks to be released.

What I like about this script is that it shows me what I need to know, and it is very fast.
Of course, now that I have stated that someone will run it on a system where it performs poorly...

For showlock.sql to work, the dba_waiters view must be created.
If this has not already been done, it can be created by logging in as SYSDBA and running the OH/rdbms/admin/catblock.sql script.

Here's how you can easily test showlock.sql:

Session A -
create table a (a integer);
lock table a in exclusive mode;

Session B
lock table a in exclusive mode;

Now either from session A or a new session, run the showlock.sql script.

Here's the script.


-- showlock.sql - show all user locks
--
-- see ML Note 1020008.6 for fully decoded locking script
-- parts of that script do not work correctly
-- (it doesn't find the object that is locked),
-- but the lock types are current
--
-- speeded up greatly by changing order of where clause,
-- jks 04/09/1997 - show lock addresses and lockwait

-- jks 04/09/1997 - outer join on all_objects
-- encountered situation on 7.2
-- where there was a lock with no
-- matching object_id
-- jks 02/24/1999 - join to dba_waiters to show waiters and blockers
-- jkstill 05/22/2006 - revert back to previous version without tmp tables
-- update lock info
-- add lock_description and rearrange output
-- jkstill 04/28/2008 - added command column
-- updated lock types
-- removed one outer join by using inline view on sys.user$
-- jkstill 04/28/2008 - added subquery factoring
-- converted to ANSI joins
-- changed alias for v$lock to l and v$session to s

set trimspool on
ttitle off
set linesize 150
set pagesize 60
column command format a15
column osuser heading 'OS|Username' format a7 truncate
column process heading 'OS|Process' format a7 truncate
column machine heading 'OS|Machine' format a10 truncate
column program heading 'OS|Program' format a18 truncate
column object heading 'Database|Object' format a25 truncate
column lock_type heading 'Lock|Type' format a4 truncate
column lock_description heading 'Lock Description' format a16 truncate
column mode_held heading 'Mode|Held' format a15 truncate
column mode_requested heading 'Mode|Requested' format a10 truncate
column sid heading 'SID' format 999
column username heading 'Oracle|Username' format a7 truncate
column image heading 'Active Image' format a20 truncate
column sid format 99999
col waiting_session head 'WATR' format 9999
col holding_session head 'BLKR' format 9999

with dblocks as (
select /*+ ordered */
l.kaddr,
s.sid,
s.username,
lock_waiter.waiting_session,
lock_blocker.holding_session,
(
select name
from sys.user$
where user# = o.owner#
) ||'.'||o.name
object,
decode(command,
0,'BACKGROUND',
1,'Create Table',
2,'INSERT',
3,'SELECT',
4,'CREATE CLUSTER',
5,'ALTER CLUSTER',
6,'UPDATE',
7,'DELETE',
8,'DROP',
9,'CREATE INDEX',
10,'DROP INDEX',
11,'ALTER INDEX',
12,'DROP TABLE',
13,'CREATE SEQUENCE',
14,'ALTER SEQUENCE',
15,'ALTER TABLE',
16,'DROP SEQUENCE',
17,'GRANT',
18,'REVOKE',
19,'CREATE SYNONYM',
20,'DROP SYNONYM',
21,'CREATE VIEW',
22,'DROP VIEW',
23,'VALIDATE INDEX',
24,'CREATE PROCEDURE',
25,'ALTER PROCEDURE',
26,'LOCK TABLE',
27,'NO OPERATION',
28,'RENAME',
29,'COMMENT',
30,'AUDIT',
31,'NOAUDIT',
32,'CREATE EXTERNAL DATABASE',
33,'DROP EXTERNAL DATABASE',
34,'CREATE DATABASE',
35,'ALTER DATABASE',
36,'CREATE ROLLBACK SEGMENT',
37,'ALTER ROLLBACK SEGMENT',
38,'DROP ROLLBACK SEGMENT',
39,'CREATE TABLESPACE',
40,'ALTER TABLESPACE',
41,'DROP TABLESPACE',
42,'ALTER SESSION',
43,'ALTER USER',
44,'COMMIT',
45,'ROLLBACK',
46,'SAVEPOINT',
47,'PL/SQL EXECUTE',
48,'SET TRANSACTION',
49,'ALTER SYSTEM SWITCH LOG',
50,'EXPLAIN',
51,'CREATE USER',
52,'CREATE ROLE',
53,'DROP USER',
54,'DROP ROLE',
55,'SET ROLE',
56,'CREATE SCHEMA',
57,'CREATE CONTROL FILE',
58,'ALTER TRACING',
59,'CREATE TRIGGER',
60,'ALTER TRIGGER',
61,'DROP TRIGGER',
62,'ANALYZE TABLE',
63,'ANALYZE INDEX',
64,'ANALYZE CLUSTER',
65,'CREATE PROFILE',
66,'DROP PROFILE',
67,'ALTER PROFILE',
68,'DROP PROCEDURE',
69,'DROP PROCEDURE',
70,'ALTER RESOURCE COST',
71,'CREATE SNAPSHOT LOG',
72,'ALTER SNAPSHOT LOG',
73,'DROP SNAPSHOT LOG',
74,'CREATE SNAPSHOT',
75,'ALTER SNAPSHOT',
76,'DROP SNAPSHOT',
79,'ALTER ROLE',
85,'TRUNCATE TABLE',
86,'TRUNCATE CLUSTER',
87,'-',
88,'ALTER VIEW',
89,'-',
90,'-',
91,'CREATE FUNCTION',
92,'ALTER FUNCTION',
93,'DROP FUNCTION',
94,'CREATE PACKAGE',
95,'ALTER PACKAGE',
96,'DROP PACKAGE',
97,'CREATE PACKAGE BODY',
98,'ALTER PACKAGE BODY',
99,'DROP PACKAGE BODY',
command||'-UNKNOWN'
) COMMAND,
-- lock type
-- will always be TM, TX or possibly UL (user supplied) for user locks
l.type lock_type,
decode
(
l.type,
'BL','Buffer hash table instance lock',
'CF',' Control file schema global enqueue lock',
'CI','Cross-instance function invocation instance lock',
'CS','Control file schema global enqueue lock',
'CU','Cursor bind lock',
'DF','Data file instance lock',
'DL','Direct loader parallel index create',
'DM','Mount/startup db primary/secondary instance lock',
'DR','Distributed recovery process lock',
'DX','Distributed transaction entry lock',
'FI','SGA open-file information lock',
'FS','File set lock',
'HW','Space management operations on a specific segment lock',
'IN','Instance number lock',
'IR','Instance recovery serialization global enqueue lock',
'IS','Instance state lock',
'IV','Library cache invalidation instance lock',
'JQ','Job queue lock',
'KK','Thread kick lock',
'LA','Library cache lock instance lock (A=namespace)',
'LB','Library cache lock instance lock (B=namespace)',
'LC','Library cache lock instance lock (C=namespace)',
'LD','Library cache lock instance lock (D=namespace)',
'LE','Library cache lock instance lock (E=namespace)',
'LF','Library cache lock instance lock (F=namespace)',
'LG','Library cache lock instance lock (G=namespace)',
'LH','Library cache lock instance lock (H=namespace)',
'LI','Library cache lock instance lock (I=namespace)',
'LJ','Library cache lock instance lock (J=namespace)',
'LK','Library cache lock instance lock (K=namespace)',
'LL','Library cache lock instance lock (L=namespace)',
'LM','Library cache lock instance lock (M=namespace)',
'LN','Library cache lock instance lock (N=namespace)',
'LO','Library cache lock instance lock (O=namespace)',
'LP','Library cache lock instance lock (P=namespace)',
'LS','Log start/log switch enqueue lock',
'MB','Master buffer hash table instance lock',
'MM','Mount definition global enqueue lock',
'MR','Media recovery lock',
'PA','Library cache pin instance lock (A=namespace)',
'PB','Library cache pin instance lock (B=namespace)',
'PC','Library cache pin instance lock (C=namespace)',
'PD','Library cache pin instance lock (D=namespace)',
'PE','Library cache pin instance lock (E=namespace)',
'PF','Library cache pin instance lock (F=namespace)',
'PF','Password file lock',
'PG','Library cache pin instance lock (G=namespace)',
'PH','Library cache pin instance lock (H=namespace)',
'PI','Library cache pin instance lock (I=namespace)',
'PI','Parallel operation lock',
'PJ','Library cache pin instance lock (J=namespace)',
'PK','Library cache pin instance lock (K=namespace)',
'PL','Library cache pin instance lock (L=namespace)',
'PM','Library cache pin instance lock (M=namespace)',
'PN','Library cache pin instance lock (N=namespace)',
'PO','Library cache pin instance lock (O=namespace)',
'PP','Library cache pin instance lock (P=namespace)',
'PQ','Library cache pin instance lock (Q=namespace)',
'PR','Library cache pin instance lock (R=namespace)',
'PR','Process startup lock',
'PS','Library cache pin instance lock (S=namespace)',
'PS','Parallel operation lock',
'PT','Library cache pin instance lock (T=namespace)',
'PU','Library cache pin instance lock (U=namespace)',
'PV','Library cache pin instance lock (V=namespace)',
'PW','Library cache pin instance lock (W=namespace)',
'PX','Library cache pin instance lock (X=namespace)',
'PY','Library cache pin instance lock (Y=namespace)',
'PZ','Library cache pin instance lock (Z=namespace)',
'QA','Row cache instance lock (A=cache)',
'QB','Row cache instance lock (B=cache)',
'QC','Row cache instance lock (C=cache)',
'QD','Row cache instance lock (D=cache)',
'QE','Row cache instance lock (E=cache)',
'QF','Row cache instance lock (F=cache)',
'QG','Row cache instance lock (G=cache)',
'QH','Row cache instance lock (H=cache)',
'QI','Row cache instance lock (I=cache)',
'QJ','Row cache instance lock (J=cache)',
'QK','Row cache instance lock (K=cache)',
'QL','Row cache instance lock (L=cache)',
'QM','Row cache instance lock (M=cache)',
'QN','Row cache instance lock (N=cache)',
'QO','Row cache instance lock (O=cache)',
'QP','Row cache instance lock (P=cache)',
'QQ','Row cache instance lock (Q=cache)',
'QR','Row cache instance lock (R=cache)',
'QS','Row cache instance lock (S=cache)',
'QT','Row cache instance lock (T=cache)',
'QU','Row cache instance lock (U=cache)',
'QV','Row cache instance lock (V=cache)',
'QW','Row cache instance lock (W=cache)',
'QX','Row cache instance lock (X=cache)',
'QY','Row cache instance lock (Y=cache)',
'QZ','Row cache instance lock (Z=cache)',
'RE','USE_ROW_ENQUEUE enforcement lock',
'RT','Redo thread global enqueue lock',
'RW','Row wait enqueue lock',
'SC','System commit number instance lock',
'SH','System commit number high water mark enqueue lock',
'SM','SMON lock',
'SN','Sequence number instance lock',
'SQ','Sequence number enqueue lock',
'SS','Sort segment lock',
'ST','Space transaction enqueue lock',
'SV','Sequence number value lock',
'TA','Generic enqueue lock',
'TD','DDL enqueue lock',
'TE','Extend-segment enqueue lock',
'TM','DML enqueue lock',
'TO','Temporary Table Object Enqueue',
'TS',decode(l.id2,
0,'Temporary segment enqueue lock (ID2=0)',
1,'New block allocation enqueue lock (ID2=1)',
'UNKNOWN!'
),
'TT','Temporary table enqueue lock',
'TX','Transaction enqueue lock',
'UL','User supplied lock',
'UN','User name lock',
'US','Undo segment DDL lock',
'WL','Being-written redo log instance lock',
'WS','Write-atomic-log-switch global enqueue lock',
'UNKNOWN'
) lock_description,
decode
(
l.lmode,
0, 'None', /* Mon Lock equivalent */
1, 'No Lock', /* N */
2, 'Row-S (SS)', /* L */
3, 'Row-X (SX)', /* R */
4, 'Share', /* S */
5, 'S/Row-X (SSX)', /* C */
6, 'Exclusive', /* X */
to_char(l.lmode)
) mode_held,
decode
(
l.request,
0, 'None', /* Mon Lock equivalent */
1, 'No Lock', /* N */
2, 'Row-S (SS)', /* L */
3, 'Row-X (SX)', /* R */
4, 'Share', /* S */
5, 'S/Row-X (SSX)', /* C */
6, 'Exclusive', /* X */
to_char(l.request)
) mode_requested,
s.osuser,
s.machine,
s.program,
s.process
from
v$lock l
join v$session s on s.sid = l.sid
left outer join sys.dba_waiters lock_blocker on lock_blocker.waiting_session = s.sid
left outer join sys.dba_waiters lock_waiter on lock_waiter.holding_session = s.sid
left outer join sys.obj$ o on o.obj# = l.id1
where s.type != 'BACKGROUND'
)
select
--kaddr,
sid,
username,
waiting_session,
holding_session,
object,
command,
lock_type,
lock_description,
mode_held,
mode_requested,
--osuser,
--machine,
program,
process
from dblocks
order by sid, object
/

Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator