
Feed aggregator

What Does “Backup Restore Throttle Speed” Wait Mean?

Pythian Group - Thu, 2014-12-04 11:29

After migrating a 10g database to 11g, I asked the Application team to start their tests in order to validate that everything was working as expected. I decided to keep an eye on OEM’s Top Activity page while they were running the most important job. I already knew what kind of “colors” I would find because I had checked its behavior in the former version. Suddenly, a strange kind of wait appeared on my screen: it was my first encounter with Backup Restore Throttle Speed.


OEM graph 2

I had never seen this wait before. It was listed in a user’s session, so its name really confused me. No RMAN operations were running at that time. FRA was almost empty. I checked Oracle’s documentation and My Oracle Support. I found nothing but one Community post from 24-SEP-2013 with no conclusions. In the meantime, the job ended and I got confirmation that everything was fine, and even faster than in the old version. Weird, very weird. It was time to review the PL/SQL code.

After reading lots of lines, a function inside the package caught my attention:

Sleep (l_master_rec.QY_FIRST_WAIT_MIN * 60);

Since the job was using a log table to keep track of its execution, I was able to match the wait time with this function pretty quickly. This code was inside the function’s DDL:

for i in 1 .. trunc( seconds_to_sleep/600 )
loop
   sys.DBMS_BACKUP_RESTORE.SLEEP( 600 );
end loop;
sys.DBMS_BACKUP_RESTORE.SLEEP( seconds_to_sleep - trunc(seconds_to_sleep/600)*600 );

Finally I found the reason for this wait (and the explanation for its backup/restore-related name): DBMS_BACKUP_RESTORE.SLEEP. As described in MOS note “How to Suspend Code Execution In a PL/SQL Application (Doc ID 1296382.1)”, the package was used to pause the job’s execution while waiting for another task to finish.
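The code above simply breaks a long pause into 600-second chunks plus a remainder. A rough Python sketch of the same chunking arithmetic (the function name is mine, not from the original package):

```python
def sleep_chunks(seconds_to_sleep, chunk=600):
    """Split a long pause into fixed-size chunks plus a remainder,
    mirroring the PL/SQL loop around DBMS_BACKUP_RESTORE.SLEEP."""
    full = int(seconds_to_sleep // chunk)        # trunc(seconds_to_sleep/600)
    remainder = seconds_to_sleep - full * chunk  # argument of the final SLEEP call
    return [chunk] * full + ([remainder] if remainder else [])

# A 25-minute wait becomes two full chunks and a 300-second remainder.
print(sleep_chunks(25 * 60))  # → [600, 600, 300]
```

Each element of the returned list corresponds to one SLEEP call, which is why the wait shows up repeatedly in an idle-looking user session.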

Lastly, it’s worth noting that OEM did not graph this wait on the 10g database but it was always there.

Categories: DBA Blogs

All You Need, and Ever Wanted to Know About the Dynamic Rolling Year

Rittman Mead Consulting - Thu, 2014-12-04 09:50


Among the many reporting requirements that have comprised my last few engagements has been what, from now on, I’ll be referring to as the ‘Dynamic Rolling Year’. That is, through the utilization of a dashboard prompt, the user is able to pick a year – month combination and the chosen view, in this case a line graph, will dynamically display a year’s worth of figures with the chosen year – month as the most current point. The picture above better demonstrates this feature. You can see that we’ve chosen the ‘201212’ year – month combination and as a result, our view displays revenue data from 201112 through to 201212. Let’s choose another combination to make sure things work properly and to demonstrate the functionality.


From the results in the picture above, it seems we’ve gotten our view to do as expected. We’ve chosen ‘201110’ as our starting year – month combination and the view has changed to display the revenue trend back to ‘201010’. Now how do we do this? A quick web search will offer up myriad complex and time-consuming solutions which, while completely feasible, can be onerous to implement. While it was tempting to reap the benefits of someone else’s perhaps hard-fought solution, I just knew there had to be an easier way. As a note of caution, I’ll preface the next bit by saying that this only works on a dashboard, using a dashboard prompt in tandem with the target analysis. This solution was tested with a report prompt; however, the desired results were not obtained, even when ‘hard coding’ default values for our presentation variables. As well, there are a few assumptions I’m making about what kinds of columns you have in your subject area’s time dimension, namely that you have some sort of Year and Month column within your hierarchy.

1. Establish Your Analysis and Prompt


I’ve brought in the Month (renamed it Year – Month) column from Sample App’s Sample Sales subject area along with the Revenue fact. For the sake of this exercise, we’re going to keep things fairly simple and straightforward. Our Month column exists in the ‘YYYY / MM’ format, so we’re going to have to dice it up a little bit to get it to where we need it to be. To get our dynamic rolling year to work, all we’re going to do is a little math. In a nutshell, we’re going to turn our month into an integer as ‘YYYYMM’ and simply subtract a hundred in a filter using the presentation variables that correspond to our new month column. To make the magic happen, bring in a dummy column from your time dimension, concatenate the year and month together, cast them as a char and then the whole statement as an integer (as seen in the following formula: CAST(substring(“Time”.”T02 Per Name Month”, 1, 4)||substring(“Time”.”T02 Per Name Month”, -2, 2) AS INT) ). Then, apply a ‘between’ filter to this column using two presentation variables.


Don’t worry that we’ve yet to establish them on our dashboard prompt, as that will happen later. In the picture above, I’ve set the filter to be between two presentation variables named ‘YEARMONTH_VAR’. As you can see, I’ve subtracted a hundred since, in this case, my year – month concatenation exists in the ‘YYYYMM’ format. You may alter this number as needed. For example, a client already had this column established in their Time dimension, yet it was in the ‘YYYYMMM’ format with a leading zero before the month. In that case, we had to subtract 1000 in order to establish the rolling year. Moving on… After you’ve set up the analysis, save it off and let’s build the dashboard prompt; the results tab will error out, and this is normal since the prompt’s variables don’t exist yet. We’re going to be essentially doing the same thing here.
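If it helps to see the arithmetic outside OBIEE, here is a small Python sketch of the YYYYMM trick; the function names are purely illustrative, not part of OBIEE:

```python
def rolling_year_bounds(year_month: int, offset: int = 100):
    """Bounds of the dynamic rolling year, given a YYYYMM integer.

    Subtracting 100 works because the month digits stay within 01-12,
    so only the year part is decremented. For a YYYY0MM-style column
    (leading zero before the month), use offset=1000 instead.
    """
    return year_month - offset, year_month

def in_rolling_year(month: int, chosen: int, offset: int = 100) -> bool:
    """The 'between' filter applied to the concatenated year-month column."""
    lo, hi = rolling_year_bounds(chosen, offset)
    return lo <= month <= hi

print(rolling_year_bounds(201212))  # → (201112, 201212)
```

The filter in the analysis does exactly this: it keeps rows whose year-month integer falls between the prompt’s presentation variable minus the offset and the variable itself.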

We need to get our analysis and prompt to work together (like they should) and so we’ll need to, once again, format the year – month column on the prompt the same way we did it on our analysis through the ‘prompt for column’ link. You can simply copy and paste the formula you used in the ‘Edit Formula’ column on your analysis into this screen. Don’t forget to assign a presentation variable to this column with whatever name you used on the filter in your analysis (in this case it was YEARMONTH_VAR).




2. Build the Dashboard


Starting a new dashboard, bring in your prompt and analysis from the Shared Folder. In this exercise, I’ve placed the prompt below the analysis and have done some custom formatting. If you’d like to recreate this example exactly, then in addition to the default table view, you’ll need to create a line graph view that groups by our year – month column and uses the revenue column as the measure. And voila. Save and run the dashboard. Note that if you didn’t set a default value for your dashboard prompt, it will error out before you actually pick and apply a value from the prompt drop-down. Admittedly, the closer your time dimension already is to a concatenated year-month (YYYYMM) column, the easier this method will be to implement. You could even set your default value to an appropriate server/repository variable so that your dashboard opens to the most current and/or a desired year-month combination. However, now that you’ve seen this, hopefully it’s sparked some inspiration into how you might apply this technique to other facets of your work and future reporting.


Categories: BI & Warehousing

Private or Public Training? (and a Discount Code)

Rittman Mead Consulting - Thu, 2014-12-04 08:22

Rittman Mead offers world class training that covers various Business Intelligence, Data Warehousing and Data Integration tools.

Rittman Mead currently offers two types of training, Public and Private – so what is the best option for your organization?

Both Private and Public options are staffed by the same highly qualified, experienced instructors. One thing to note is that our instructors aren’t merely technology instructors; they are real-world consultants.

What does that mean for you? It means questions aren’t answered by the instructor flipping to a slide in a deck and reading off the best practice. You’ll probably hear an answer more like “I was on site with a company a few weeks ago who had the same issue. Let me tell you how we helped that client!”

Let’s start with Public courses. Our offerings include an OBIEE Bootcamp, an ODI Bootcamp, Exalytics for System Administrators, RPD Modeling and our all new OBIEE Front End Development, Design and Best Practices.

Why Public? If you have four or fewer attendees, the public classes are typically the most cost effective choice. You’ll attend the course along with other individuals from various public and private sector organizations. There’s plenty of time to learn the concepts, get your hands dirty by completing labs as well as getting all of your questions answered by our highly skilled instructors.

Now, our Private courses. There are primarily two reasons a company would choose private training: cost and the ability to customize.

Cost: Our Private Training is billed based on the instructor’s time, not the number of attendees. If you have five or more individuals, it is most likely going to be more cost effective for us to come to you. Also, you’ll be covering travel costs for one instructor rather than your entire team.

Customization: Once you select the course that is best for your team, we’ll set up a call with a trainer to review your team’s experience level and your main goals. Based on that conversation, we can tweak the syllabus to make sure the areas where you need extra focus are thoroughly covered.

One thing to note is we do offer a few courses (OWB 11gR2 & ODI 11g) privately that aren’t offered publicly.

We also offer a Custom Course Builder, which allows you to select modules from various training classes and build a customized course for your team – this flexibility is a great reason to select Private Training.

Some of our clients love having their team get away for the week and focus strictly on training while others prefer to have training on site at their offices with the flexibility to cut away for important meetings or calls. Whichever program works best for your team, we have got you covered!

For more information, reach out to us. If you’ve made it all the way to the end of this blog post, congratulations! As a way of saying thanks, you can mention the code “RMTEN” to save 10% on your next private or public course booking!

Categories: BI & Warehousing

Influence execution plan without adding hints

Oracle in Action - Thu, 2014-12-04 04:54

RSS content

We often encounter situations when a SQL statement runs optimally when it is hinted but sub-optimally otherwise. We can use hints to get the desired plan, but it is not desirable to use hints in production code: hints are extra code that must be managed, checked, and controlled with every Oracle patch or upgrade. Moreover, hints freeze the execution plan, so you will not be able to benefit from a possibly better plan in the future.

So how can we make such queries use an optimal plan, without adding hints, until a provably better plan comes along?

Well, the answer is to use SQL Plan Management, which ensures that you get the desirable plan, and that the plan can evolve over time as the optimizer discovers better ones.

To demonstrate the procedure, I have created two tables CUSTOMER and PRODUCT having CUST_ID and PROD_ID respectively as primary keys. PROD_ID column in CUSTOMER table is the foreign key and is indexed.

SQL>conn hr/hr

drop table customer purge;
drop table product purge;

create table product(prod_id number primary key, prod_name char(100));
create table customer(cust_id number primary key, cust_name char(100), prod_id number references product(prod_id));
create index cust_idx on customer(prod_id);

insert into product select rownum, 'prod'||rownum from all_objects;
insert into customer select rownum, 'cust'||rownum, prod_id from product;
update customer set prod_id = 1000 where prod_id > 1000;

exec dbms_stats.gather_table_stats (USER, 'customer', cascade=> true);
exec dbms_stats.gather_table_stats (USER, 'product', cascade=> true);

– First, let’s have a look at the undesirable plan, which does not use the index on the PROD_ID column of the CUSTOMER table.

SQL>conn / as sysdba
    alter system flush shared_pool;

    conn hr/hr

    variable prod_id number
    exec :prod_id := 1000

    select cust_name, prod_name
    from customer c, product p
    where c.prod_id = p.prod_id
    and c.prod_id = :prod_id;

    select * from table (dbms_xplan.display_cursor());

----------------------------------------------------------------------
SQL_ID  b257apghf1a8h, child number 0
select cust_name, prod_name from customer c, product p where c.prod_id
= p.prod_id and c.prod_id = :prod_id

Plan hash value: 3134146364

| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT             |              |       |       |   412 (100)|          |
|   1 |  NESTED LOOPS                |              | 88734 |    17M|   412   (1)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| PRODUCT      |     1 |   106 |     2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | SYS_C0010600 |     1 |       |     1   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS FULL          | CUSTOMER     | 88734 |  9098K|   410   (1)| 00:00:01 |

– Load the undesirable plan into the baseline to establish a SQL plan baseline for this query; the desired plan will be loaded into it later

SQL>variable cnt number
    exec :cnt := dbms_spm.load_plans_from_cursor_cache(sql_id => 'b257apghf1a8h');

    col sql_text for a35 word_wrapped
    col enabled for a15

    select  sql_text, sql_handle, plan_name, enabled 
    from     dba_sql_plan_baselines
    where sql_text like   'select cust_name, prod_name%';

SQL_TEXT                            SQL_HANDLE            PLAN_NAME                      ENABLED
----------------------------------- --------------------- ------------------------------ -------
select cust_name, prod_name         SQL_7d3369334b24a117  SQL_PLAN_7ucv96d5k988rfe19664b YES

– Disable the undesirable plan so that it will not be used

SQL>variable cnt number
    exec :cnt := dbms_spm.alter_sql_plan_baseline (-
    SQL_HANDLE => 'SQL_7d3369334b24a117',-
    PLAN_NAME => 'SQL_PLAN_7ucv96d5k988rfe19664b',-
    ATTRIBUTE_NAME => 'enabled',-
    ATTRIBUTE_VALUE => 'NO');

    col sql_text for a35 word_wrapped
    col enabled for a15

    select  sql_text, sql_handle, plan_name, enabled 
    from   dba_sql_plan_baselines
     where sql_text like   'select cust_name, prod_name%';

SQL_TEXT                            SQL_HANDLE            PLAN_NAME                      ENABLED
----------------------------------- --------------------- ------------------------------ -------
select cust_name, prod_name         SQL_7d3369334b24a117  SQL_PLAN_7ucv96d5k988rfe19664b NO

– Now we use a hint in the above SQL to generate the optimal plan, which uses the index on the PROD_ID column of the CUSTOMER table

SQL>conn hr/hr

variable prod_id number
exec :prod_id := 1000

select /*+ index(c)*/ cust_name, prod_name
from customer c, product p
where c.prod_id = p.prod_id
and c.prod_id = :prod_id;

select * from table (dbms_xplan.display_cursor());

SQL_ID  5x2y12dzacv7w, child number 0
select /*+ index(c)*/ cust_name, prod_name from customer c, product p
where c.prod_id = p.prod_id and c.prod_id = :prod_id

Plan hash value: 4263155932

| Id  | Operation                            | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                     |              |       |       |  1618 (100)|          |
|   1 |  NESTED LOOPS                        |              | 88734 |    17M|  1618   (1)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID        | PRODUCT      |     1 |   106 |    2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN                 | SYS_C0010600 |     1 |       |    1   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS BY INDEX ROWID BATCHED| CUSTOMER     | 88734 |  9098K|  1616   (1)| 00:00:01 |
|   5 |    INDEX FULL SCAN                   | SYS_C0010601 | 89769 |       |  169   (0)| 00:00:01 |

– Now we will load the hinted plan into the baseline –
– Note that we have the SQL_ID and PLAN_HASH_VALUE of the hinted statement and the SQL_HANDLE of the unhinted statement, i.e. we are associating the hinted plan with the unhinted statement.

SQL>variable cnt number
exec :cnt := dbms_spm.load_plans_from_cursor_cache(-
sql_id => '5x2y12dzacv7w',  -
plan_hash_value => 4263155932, -
sql_handle => 'SQL_7d3369334b24a117');

– Verify that there are now two plans loaded for that SQL statement:

  •  The unhinted sub-optimal plan is disabled
  •  The hinted optimal plan is enabled; even though it is for a “different query,” it can work with the earlier unhinted query because the SQL_HANDLE is the same

SQL>col sql_text for a35 word_wrapped
col enabled for a15

select  sql_text, sql_handle, plan_name, enabled from dba_sql_plan_baselines
where sql_text like   'select cust_name, prod_name%';

SQL_TEXT                            SQL_HANDLE            PLAN_NAME                      ENABLED
----------------------------------- --------------------- ------------------------------ -------
select cust_name, prod_name         SQL_7d3369334b24a117  SQL_PLAN_7ucv96d5k988rea320380 YES
select cust_name, prod_name         SQL_7d3369334b24a117  SQL_PLAN_7ucv96d5k988rfe19664b NO

– Verify that the hinted plan is used even though we do not use a hint in the query –
– The note confirms that the baseline has been used for this statement

SQL>variable prod_id number
exec :prod_id := 1000

select cust_name, prod_name
from customer c, product p
where c.prod_id = p.prod_id
and c.prod_id = :prod_id;

select * from table (dbms_xplan.display_cursor());

SQL_ID  b257apghf1a8h, child number 0
select cust_name, prod_name from customer c, product p where c.prod_id
= p.prod_id and c.prod_id = :prod_id

Plan hash value: 4263155932

| Id  | Operation                            | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                     |              |       |       |  1618 (100)|          |
|   1 |  NESTED LOOPS                        |              | 88734 |    17M|  1618   (1)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID        | PRODUCT      |     1 |   106 |    2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN                 | SYS_C0010600 |     1 |       |    1   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS BY INDEX ROWID BATCHED| CUSTOMER     | 88734 |  9098K|  1616   (1)| 00:00:01 |
|   5 |    INDEX FULL SCAN                   | SYS_C0010601 | 89769 |       |  169   (0)| 00:00:01 |

Predicate Information (identified by operation id):

3 - access("P"."PROD_ID"=:PROD_ID)
4 - filter("C"."PROD_ID"=:PROD_ID)

Note
-----
   - SQL plan baseline SQL_PLAN_7ucv96d5k988rea320380 used for this statement

With this baseline solution, you need not employ permanent hints in the production code, and hence there are no upgrade issues. Moreover, the plan will evolve over time as the optimizer discovers better ones.

Note: Using this method, you can swap the plan only for a query which is fundamentally the same, i.e. you should get the desirable plan by adding hints, modifying an optimizer setting, playing around with statistics, etc., and then associate the sub-optimally performing statement with the optimal plan.

I hope this post was useful.

Your comments and suggestions are always welcome!



Related links:

Tuning Index




Categories: DBA Blogs

Updated (XML) Content Section

Marco Gralike - Wed, 2014-12-03 15:34
“Once per year I try to update the “XML Content” page that, in principle, should…

SSL with PKCS12 truststore

Laurent Schneider - Wed, 2014-12-03 14:31

Many many moons ago I vaguely remember having a similar issue with java keystore / truststore and microsoft certificates stores.

When you start using SSL for your listener, you could potentially face a large number of issues among your toolsets. In my opinion, the most disastrous one is that you cannot monitor your database with Enterprise Manager and a TCPS listener. I tried with 10g, 11g, and 12c, and I seriously doubt it will come in 13c, even though a dozen ERs have been filed. The best workaround I found is to use a separate listener to monitor your database and monitor the SSL listener itself with IPC.

Today I had to deal with a driver from Datadirect, which finally works perfectly fine with SSL, but the challenge was to know what to put in the keystore and truststore…

In SQLNET, you use the single-sign-on wallet (cwallet.sso) created by OWM/orapki or ldap.

In Java, by default, you use a Java keystore that you generate with keytool (or even use the default cacerts). There is only a lexical difference between a keystore and a truststore; they could both be the same file. As documented in the JSSE Reference Guide:
A truststore is a keystore that is used when making decisions about what to trust

But for some other tools, the Java keytool won’t do the trick if the truststore cannot be of type JKS.

One common type is PKCS12. This is the ewallet.p12 you get with the Wallet Manager.

E.g. from Java, you would point the truststore at the wallet and set trustStoreType=PKCS12.

To use single sign-on, use trustStoreType=SSO as I wrote there: jdbc-ssl

Other common formats are X509 base64 (PEM) or DER files. The openssl command line allows easy conversion:

openssl pkcs12 -in ewallet.p12 -out file.pem
openssl x509 -outform der -in file.pem -out file.der

or in Windows Explorer, just click on your p12 file and then click on the certificate in the certificate store to export it.
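The PEM-to-DER step above is just a base64 unwrap of the certificate body, which Python’s standard ssl module can also perform. A sketch; the bytes here are placeholders, not a real wallet certificate:

```python
import ssl

def pem_to_der(pem_text: str) -> bytes:
    # Same transformation as: openssl x509 -outform der -in file.pem -out file.der
    return ssl.PEM_cert_to_DER_cert(pem_text)

def der_to_pem(der_bytes: bytes) -> str:
    # The reverse direction: wrap DER bytes in base64 PEM armor
    return ssl.DER_cert_to_PEM_cert(der_bytes)

# Round trip on placeholder bytes; a real certificate would come out of
# `openssl pkcs12 -in ewallet.p12 -out file.pem` as shown above.
payload = b"placeholder-der-bytes"
assert pem_to_der(der_to_pem(payload)) == payload
```

Note this only converts the certificate encoding; extracting keys from a PKCS12 wallet still needs openssl or a dedicated library.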

Is Kuali Guilty of “Open Washing”?

Michael Feldstein - Wed, 2014-12-03 14:05

Phil and I don’t write a whole lot about Student Information Systems (SISs) and the larger Enterprise Resource Planning (ERP) suites that they belong to, not because they’re unimportant but because the stories about them often don’t fit with our particular focus in educational technology. Plus, if I’m being completely honest, I find them to be mostly boring. But the story of what’s going on with Kuali, the open source ERP system for higher education, is interesting and relevant for a couple of reasons. First, while we hear a lot of concern from folks about the costs of textbooks and LMSs, those expenses pale in comparison to what colleges and universities pay in SIS licensing and support costs. A decent-sized university can easily pay a million dollars or more for their SIS, and a big system like UC or CUNY can pay tens or even hundreds of millions. One of the selling points of Kuali has been to lower costs, thus freeing up a lot of that money to meet other needs that are more closely related to the core university mission. So what happens in the SIS space has a potentially huge budgetary impact on teaching and learning. Second, there’s been a lot of conversation about what “open” means in the context of education and, more specifically, when that label is being abused. I am ambivalent about the term “open washing” because it encourages people to flatten various flavors of openness into “real” and “fake” open when, in fact, there are often legitimately different kinds of open with different value (and values). That said, there certainly exist cases where use of the term “open” stretches beyond the bounds of even charitable interpretation.

When Kuali, a Foundation-developed project released under an open source license, decided to switch its model to a company-developed product where most but not all of its code would be released under a different open source license, did they commit an act of “open washing”? Finding the answer to that question turns out to be a lot more complicated than you might expect. As I started to dig into it, I found myself getting deep into details of both open source licensing and cloud computing, since the code that KualiCo will be withholding is related to cloud offerings. It turned out to be way too long and complex to deal with both of those topics in one post, so I am primarily going to follow up on Phil’s earlier posts on licensing here and save the cloud computing part for a future post.

Let’s start by recapping the situation:

  • The Kuali Foundation is a non-profit entity formed by a group of schools that wanted to pool their resources to build a suite of open source components that together could make up an ERP system suitable for colleges and universities.
  • One of the first modules, Kuali Financials, has been successfully installed and run by a small number of schools which have reported dramatic cost savings, both in the obvious licensing but also in the amount of work it took to get the system running.
  • Other elements of the suite have been more problematic. For example, the registrar module, Kuali Student, has been plagued with multi-year delays and members dropping out of the project.
  • Kuali code has historically been released under an open source license called the “Educational Community License (ECL)”, which is a slight variation of the Apache license.
  • The Kuali Foundation announced that ownership of the code would be transferred to a new commercial entity, KualiCo. Their reason for doing so is to accelerate the development of the suite, although they have not been entirely clear about what they think the causes of their development speed problem are and why moving to a commercial structure will solve the problems.
  • KualiCo intends to release most, but not all, of future code under a different open source license, the Affero Gnu Public License (AGPL).
  • Some of the new code to be developed by KualiCo will not be released as open source. So far the only piece they have named in this regard is their multitenant code.
  • These changes happened fairly quickly, driven by the board, without a lot of community discussion.

So is this “open washing”? Is it a betrayal of trust of an open source community, or deceptively claiming to be open and proving to be otherwise? In one sense, the answer to that question depends on the prior expectations of the community members. I would argue that the biggest and clearest act of open washing may have occurred over a decade ago with the creation of the term “community source.” Back in the earlier days of Sakai—another “community source” project—I made a habit of asking participants what they thought “community source” meant. Very often, the answer I got was something along the lines of, “It’s open source, but with more emphasis on building a community.” But that’s not what Brad Wheeler meant at all when he popularized the term. As I have discussed in an earlier post, “community source” was intended to be a consortial model of development with an open source license tacked onto the end product. Far from emphasizing community, it was explicitly designed to maximize the control and autonomy of the executive decision-makers from the consortial partners—and to keep a lid on the decision-making power of other community participants. Remember the motto: “If you’ve got the gold, then you make the rules.” Community source, as defined by those who coined the term, is a consortium with a license. But community source was always marketed as an improvement on open source. “It’s the pub between the cathedral and the bazaar where all the real work gets done,” Brad liked to say.

Different “community source” projects followed the centralized, financially-driven model to different degrees. Kuali, for example, was always explicitly and deliberately more centralized in its decision-making processes than Sakai. As an outside observer of Kuali’s culture and decision-making processes, and as a close reader of Brad Wheeler’s articles and speeches about community source, I can’t say that the move to KualiCo surprised me terribly. Nor can I say that it is inconsistent with what Brad has said all along about how community source works and what it is for. The consortial leaders, whose membership I assume was roughly defined by their financial contributions to the consortium, made a decision that supports what they believe is in their interest. All code that was previously released under an open source license will remain under an open source license. Presumably the Kuali Foundation and KualiCo will be clear going forward about which consortial contributions go toward creating functionality that will be open source or private source in the future. I am not privy to the internal politics of the foundation and therefore am not in a position to say whether some of those who brought the gold were left out of the rule-making process. To the degree that Brad’s golden rule was followed, the move to KualiCo is consistent with the clearly stated (if craftily marketed) philosophy of community source.

The question of how much these changes practically affect open source development of Kuali is a more complicated one to answer. It is worth stating that another tenet of community source was that it was specifically intended to be commercial-friendly, meaning that the consortia tried to avoid licensing or other practices that discouraged the development of a healthy and diverse ecosystem of support vendors. (Remember, community source frames problems with the software ecosystem as procurement problems. As such, its architects are concerned with maintaining a robust range of support contracting options.) Here the balancing act is more delicate. On the one hand, to the degree that the changes give KualiCo advantages over potential competitors, Kuali will be violating the commercial-friendly principle of community source. On the other hand, to the degree that the changes do not give KualiCo advantages over potential competitors, it’s not clear why one would think that KualiCo will be a viable and strong enough company to move development faster than the way it has been until now.

The first thing to point out here is that, while KualiCo has only said so far that it will keep the multitenant code private source, there is nothing to prevent them from keeping more code private in the future. Instructure Canvas, which started out with only the multitenant code as private source, currently has the following list of features that are not in the open source distribution:

  • Multi-tenancy extensions
  • Mobile integration
  • Proprietary SIS integrations
  • Migration tools for commercial LMSs
  • Other minor customizations that only apply to our hosted environment
  • Chat Tool
  • Attendance Tool (Roll Call)

I don’t think there is a clear and specific number of private source features that marks the dividing line between good faith open source practices and “open washing”; nor am I arguing that Instructure is open washing here. Rather, my point is that, once you make the decision to be almost-all-but-not-completely open source, you place your first foot at the top of a slippery slope. By saying that they are comfortable withholding code on any feature for the purposes of making their business viable, KualiCo’s leadership opens the door to private sourcing as much of the code as they need to in order to maintain their competitive advantage.

Then there’s the whole rather arcane but important question about the change in open source licenses. Unlike the ECL license that Kuali has used until now, AGPL is “viral,” meaning that anybody who combines AGPL-licensed code with other code must release that other code under the AGPL as well. Anybody, that is, except for the copyright holder. Open source licenses are copyright licenses. If KualiCo decides to combine open source code to which they own the copyright with private source code, they don’t have to release the private source code under the AGPL. But if a competitor, NOTKualiCo, comes along and combines KualiCo’s AGPL-licensed code with their own proprietary code, then NOTKualiCo has to release their own code under the AGPL. This creates two theoretical problems for NOTKualiCo. First, NOTKualiCo does not have the option of making the code they develop a proprietary advantage over their competitors. They have to give it away. Second, while NOTKualiCo has to share its code with KualiCo, KualiCo doesn’t have the same obligation to NOTKualiCo. So theoretically, it would be very hard for any company to compete on product differentiators when they are building upon AGPL-licensed code owned by another company.

I say “theoretically” because, in practice, it is much more complicated than that. First, there is the question of what it means to “combine” code. The various GPL licenses recognize that some software is designed to work with other software “at arm’s length” and therefore should not be subject to the viral clause. For example, it is permissible under the license to run AGPL applications on a Microsoft Windows or Apple Mac OS X operating system without requiring that those operating systems also be released under the GPL. Some code combinations fall clearly into this category, while others fall clearly into the category of running as part of the original open source program and therefore subject to the viral clause of the GPL. But there’s a vast area in the murky middle. Do tools that use APIs specifically designed for integration fall under the viral clause? It depends on the details of how they integrate as well as who you ask. It doesn’t help that the language used to qualify what counts as “combining” in Gnu’s documentation uses terms that are specific to the C programming language.

KualiCo has said that they will specifically withhold multitenant capabilities from future open source distributions. If competitors developed their own multitenant capabilities, would they be obliged to release that code under the AGPL? Would such code be “combining” with Kuali, or could it be sufficiently arm’s-length that it could be private source? It depends on how it’s developed. Since KualiCo’s CEO is the former CTO of Instructure, let’s assume for the sake of argument that Kuali’s multitenant capabilities will be developed similarly to Canvas’. Zach Wily, Instructure’s Chief Architect, described their multitenant situation to me as follows:

[O]ur code is open-source, but only with single-tenancy. The trick there is that most of our multi-tenancy code is actually in the open source code already! Things like using global identifiers to refer to an object (instead of tenant-local identifiers), database sharding, etc, are all in the code. It’s only a couple relatively thin libraries that help manage it all that are kept as closed source. So really, the open-source version of Canvas is more like a multi-tenant app that is only convenient to run with a single tenant, rather than Canvas being a single-tenant app that we shim to be multi-tenant.

The good news from NOTKualiCo’s (or NOTInstructureCo’s) perspective is that it doesn’t sound like there’s an enormous amount of development required to duplicate that multitenant functionality. Instructure has not gone through contortions to make the development of multitenant code harder for competitors; I will assume here that KualiCo will follow a similar practice. The bad news is that the code would probably have to be released under the AGPL, since it’s a set of libraries that are intended to run as a part of Kuali. That’s far from definite, and it would probably require legal and technical experts evaluating the details to come up with a strong conclusion. But it certainly seems consistent with the guidance provided by the Gnu Foundation.

OK, so how much of a practical difference does this make for NOTKualiCo to be able to compete with KualiCo? Probably not a huge amount, for several reasons. First, we’re not talking about an enormous amount of code here; nor is it likely to be highly differentiated. But also, NOTKualiCo owns the copyright on the libraries that they release. While anybody can adopt them under the AGPL, if KualiCo wanted to incorporate any of NOTKualiCo’s code, then the viral provision would have to apply to KualiCo. The practical net effect is that KualiCo would almost certainly never use NOTKualiCo’s code. A third competitor—call them ALSONOTKualiCo—could come in and use NOTKualiCo’s code without incurring any obligations beyond those that they already assumed by adopting KualiCo’s AGPL code, so there’s a disadvantage there for NOTKualiCo. But overall, I don’t think that withholding multitenant code from KualiCo’s open source releases—assuming that it’s done the way Instructure has done it—is a decisive barrier to entry for competitors. Unfortunately, that may just mean that KualiCo will end up having to withhold other code in order to maintain a sustainable advantage.

So overall, is Kuali guilty of “open washing” or not? I hope this post has helped make clear why I don’t love that term. The answer is complicated and subjective. I believe that “community source” was an overall marketing effort that entailed some open washing, but I also believe that (a) Brad has been pretty clear about what he really meant if you listened closely enough, and (b) not every project that called itself community source followed Brad’s tenets to the same degree or in the same way. I believe that KualiCo’s change in license and withholding of code are a violation of the particular flavor of openness that community source promised, but I’m not sure how much of a practical difference that makes to the degree that one cares about the “commercial friendliness” of the project. Would I participate in work with the Kuali Foundation today if I were determined to work only on projects that are committed to open source principles and methods? No I would not. But I would have given you the same answer a year ago. So, after making you wade through all of those arcane details, I’m sorry to say that my answer to the question of whether Kuali is guilty of open washing is, “I’m not sure that I know what the question means, and I’m not sure how much the answer matters.”

The post Is Kuali Guilty of “Open Washing”? appeared first on e-Literate.

Where Is Talent When Sizing Up The Success of a Business?

Linda Fishman Hoyle - Wed, 2014-12-03 13:51

I like this picture of an animated Mark Hurd (Oracle's CEO) with the holiday baubles hanging in the background. I like it even more that he’s talking to a group of HR executives about the importance of talent in an organization. “I want the best people, I don’t want to lose them, and I want them completely engaged,” said Hurd.

At the London, England conference, Hurd explained the hurdles companies face in the search for good talent. One is an aging workforce that will affect attrition rates when employees start to retire. The other is new college grads who have higher expectations of their career paths and reward systems. Both of these demographic forces will require “HR as a real-time application, on a par with financials” according to Hurd.

Companies that take advantage of the internet, mobile devices, and social networks will be in a much better position than those with outdated systems. Technology, and specifically cloud applications, can help companies standardize best practices across geographies and keep up with the latest features and functionality.

However, the SaaS-based modern HR tools won’t help any company differentiate itself in the marketplace without an earnest pledge to use them to help attract, retain, and motivate the best people for the jobs that need to be done. According to Michael Hickins, who wrote the article in Forbes entitled Oracle CEO Mark Hurd: Make HR A Real-Time Application, a company’s future can hinge on how well it runs the race for talent.

Innovative Work Life Solutions Help Employees Balance Their Professional and Personal Lives

Linda Fishman Hoyle - Wed, 2014-12-03 13:47

A Guest Post by Mark Bennett, Oracle Director of HCM Product Strategy (pictured left)

The best human capital management applications have evolved from just automating standard human resource (HR) processes to become strategic tools for managing talent and analyzing workforce capabilities.

Now, a new wave of innovation is taking HR evolution a step further with work/life applications that are designed to help employees better balance their professional and personal lives. In the following interview, Oracle Director of HCM Product Strategy Mark Bennett explains how Oracle's new Work Life Solutions Cloud offerings can enhance creativity, boost productivity, and help individuals become more effective inside and outside of work.

Q: How do you define work/life solutions?

A: They’re applications and features of products that help individuals better understand various facets of their professional and personal lives. Many people want to find a better balance between these two areas, but they’re finding they don’t always have the information they need to make that happen.

Q: Can you give us some examples of work/life solutions?

A: Oracle Work Life Solutions Cloud offers an extensible set of solutions including My Reputation, a feature that helps employees better understand how they are seen within their social media communities. It can also help them develop their professional brand as it aligns with that of their company to help with their career development and advancement. Another example is My Wellness, a feature that leverages wearable devices that enable people to monitor their activity levels throughout the day and their sleep patterns at night. So for example, people with a low energy level may decide to get more sleep to see if that makes a difference.

Offerings that are based around competition, such as the My Competitions feature of Oracle Human Capital Management Cloud, can help organizations create incentives for employees. Software developers may engage in friendly competition to see who fixes the most bugs in a week. Or someone might organize a contest for workgroup members to come up with ideas about new features for a product redesign project.

In the future, work/life solutions might include taking advantage of the video capabilities in smart phones to provide an easy way to upload clips to ask a question of coworkers, share new research findings, or even to link to short profiles of a person’s appointment contacts—helping individuals connect on a more personal level and gain a better understanding of the context of meetings.

Q: How do organizations benefit from making work/life solutions available to employees?

A: There are three primary ways. First, individuals can better understand their work habits, so, for example, they don’t push themselves so hard they risk burnout or don’t pay enough attention to other facets of their lives.

Second, these solutions send a powerful signal to the workforce by indicating a company’s interest in their employees that goes beyond their professional contributions. Many employees will appreciate that their company is paying attention to their personal as well as professional needs.

Third, there may be a dollars-and-cents payoff. If the information collected from one of the solutions encourages more employees to participate in wellness programs, healthcare costs may start to decrease through rebates from carriers or from reduced claims.

Q: What should employers keep in mind to assure work/life solutions succeed?

A: It’s essential that companies make it clear they are not monitoring any of this data but instead are simply providing tools to help employees gain insights into their lives to help them decide whether they want to make adjustments based on their personal goals. The key is giving employees the tools they need to make their own decisions about their own lives.

Q: How do we learn more about these solutions?

A: Resources for Work Life Solutions include:

December 10: AXA HCM Cloud Customer Forum

Linda Fishman Hoyle - Wed, 2014-12-03 13:37

Join us for an Oracle HCM Cloud Customer Forum on Wednesday, December 10, 2014, at 9:00 a.m. Pacific Time / 12:00 p.m. Eastern Time. 

AXA had multiple core HR systems across the globe, including PeopleSoft and SAP. After much review of the market, AXA determined that a cloud solution would suit its requirements more effectively than going through an expensive upgrade and customization process.

In this Forum, Kwasi Boateng, Head of Group HRIS Development at AXA, will talk about the company's selection process for new HR software, its implementation experience with Oracle HCM Cloud, and the expectations and benefits of its new modern HR system.

AXA is a French multinational company present in 56 countries. With 157,000 employees and distributors, AXA is committed to serving 102 million clients. AXA’s areas of expertise are reflected in a range of products and services adapted to the needs of each client in three major business lines: property-casualty insurance, life and savings, and asset management.

Invite your customers and prospects. You can register now to attend the live Forum on Wednesday, December 10, 2014 at 9:00 a.m. Pacific Time / 12:00 p.m. Eastern Time and learn more from AXA directly.

SAP certification for Oracle's Virtual Compute Appliance X4-2 (VCA X4-2)

Wim Coekaerts - Wed, 2014-12-03 12:17
We have been working with SAP to certify their products based on SAP NetWeaver 7.x (specifically on the following OS versions: Oracle Linux 5, Oracle Linux 6, Oracle Solaris 10, and Oracle Solaris 11) in a Virtual Compute Appliance environment. It is also possible to run 2-tier and 3-tier configurations/installations of Oracle Database and SAP applications on VCA.

For more detail you can go to SAP Note 2052912.

The Virtual Compute Appliance is a great, cost-effective, easy-to-deploy converged infrastructure solution. Installations can be done by the customer; it takes just a few hours to bring up and start using a VCA system and deploy applications. The entire setup is pre-wired, pre-installed, and pre-configured using our best practices. All the software and hardware components in VCA are standard, off-the-shelf, proven products: Oracle Linux for the management node OS, Oracle VM for the compute nodes. Any application or product certified with Oracle VM will work without change or the need for re-certification inside a VCA environment.

It is very exciting to have the SAP certification for VCA published.

Reminder: Fishbowl Solutions Webinar Tomorrow at 1 PM CST

Cole Orndorff

There’s still time to register for the webinar that Fishbowl Solutions and Oracle will be holding tomorrow from 1 PM-2 PM CST! Innovation in Managing the Chaos of Everyday Project Management will feature Fishbowl’s AEC Practice Director Cole Orndorff. Orndorff, who has a great deal of experience with enterprise information portals, said the following about the webinar:

“According to Psychology Today, the average employee can lose up to 40% of their productivity switching from task to task. The number of tasks executed across a disparate set of systems over the lifecycle of a complex project is overwhelming, and in most cases, 20% of each solution is utilized 80% of the time.

I am thrilled to have the opportunity to present on how improving workforce effectiveness can enhance your margins. This can be accomplished by providing a consistent, intuitive user experience across the diverse systems project teams use and by reusing the intellectual assets that already exist in your organization.”

To register for the webinar, visit Oracle’s website. To learn more about Fishbowl’s new Enterprise Information Portal for Project Management, visit our website.

The post Reminder: Fishbowl Solutions Webinar Tomorrow at 1 PM CST appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

From Zero to Hero....In About 2 Hours

Joel Kallman - Wed, 2014-12-03 11:23

This is an example of a real-world problem, an opportunistic one, being solved via a mobile application created with Oracle Application Express.

First, a brief bit of background.  Our son is 9 years old and is in the Cub Scouts.  Cub Scouts in the United States is an organization that is associated with Boy Scouts of America.  It's essentially a club that is geared towards younger boys, and teaches them many valuable skills - hiking, camping out, shooting a bow and arrow, tying different knots, nutrition, etc.  This club has a single fundraiser every year, where the boys go door-to-door selling popcorn, and the proceeds of the popcorn sale fund the activities of the Cub Scouts local group for the next year.  There is a leader who organizes the sale of this popcorn for the local Cub Scout group, and this leader gets the unenvious title of "Popcorn Kernel".  For the past 2 years, I've been the "Popcorn Kernel" for our Cub Scout Pack (60 Scouts).

I was recently at the DOAG Konferenz in Nürnberg, Germany and it wasn't until my flight home that I began to think about how I was going to distribute the 1,000 items to 60 different Scouts.  My flight home from Germany was on a Sunday and I had pre-scheduled the distribution of all of this popcorn to all 60 families on that next day, Monday afternoon.  Jet lag would not be my friend.

The previous year, I had meticulously laid out 60 different orders across a large meeting room and let the parents and Scouts pick it up.  This year, I actually had 4 volunteer helpers, but I had no time.  All I had in my possession was an Excel spreadsheet which was used to tally the orders across all 60 Cub Scouts.   But I knew I could do better than 60 pieces of paper, which was the "solution" last year.

On my flight home, on my iPad, I sketched out the simple 4-page user interface to locate and manage the orders.  As well, I wrote the DDL on my iPad for a single table.  Normally, I would use SQL Developer Data Modeler as my starting point, but this application and design needed to be quick and simple, so a single denormalized table was more than sufficient.
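As a concrete illustration, a single denormalized table of that sort might be sketched as follows. The table, column, and item names here are hypothetical, chosen for illustration; they are not the actual DDL from the application:

```sql
-- Hypothetical sketch of the single denormalized table: one row per Scout,
-- with the popcorn item counts flattened into columns.
create sequence popcorn_orders_seq;

create table popcorn_orders (
  id             number primary key,
  scout_name     varchar2(100) not null,
  den_name       varchar2(50)  not null,   -- e.g. 'Wolf', 'Bear'
  caramel_qty    number default 0,
  butter_qty     number default 0,
  chocolate_qty  number default 0,
  delivered_yn   varchar2(1) default 'N',  -- flipped to 'Y' by the one DML update
  delivered_date date
);

-- Trigger to populate the primary key on insert (pre-identity-column style).
create or replace trigger popcorn_orders_bi
  before insert on popcorn_orders
  for each row
begin
  if :new.id is null then
    :new.id := popcorn_orders_seq.nextval;
  end if;
end;
/
```

Loading the spreadsheet then becomes a straight insert per row, and every page in the application selects from this one table.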

Bright and early on Monday morning, I logged into an existing workspace. I created my single table using the Object Browser, created a trigger on this table, uploaded the spreadsheet data into this table, and then massaged the data using some DML statements in SQL Commands. Now that my table and data were complete, it was now time for my mobile application!

I created a simple Mobile User Interface application with navigation links on the home page.  There are multiple "dens" that make up each group in a Cub Scout Pack, and these were navigation aids as people would come and pick up their popcorn ("Johnny is in the Wolf Den").  These ultimately went to the same report page but with different filters.

Once a list view report was accessed, I showed the Scout's name, the total item count for them, and then via a click, drill down to the actual number of items to be delivered to the Scout.  Once the items were handed over and verified, the user of this application had to click a button to complete the order.  This was the only DML update operation in the entire application.

I also added a couple charts to the starting page, so we could keep track of how many orders for each den had already been delivered and how many were remaining.

I also added a chart page to show how many of each item was remaining, at least according to our records. This enabled us to do a quick "spot check" at any given point in time, and assess if the current inventory we had remaining was also accurately reflected in our system.  It was invaluable!  And remember - this entire application was all on a single table in the Oracle Database.  At one point in time, 8 people were all actively using this system - 5 to do updates and fulfill orders, and the rest to simply view and monitor the progress from their homes.  Concurrency was never even a consideration.  I didn't have to worry about it.
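Because everything lives in one table, an "items remaining" chart boils down to a single aggregate query. Against a hypothetical denormalized table (column and item names assumed for illustration, not the actual schema), it could look like:

```sql
-- Hypothetical: popcorn items still to be handed out, per product,
-- summed over the orders not yet marked as delivered.
select 'Caramel' as item, sum(caramel_qty) as remaining
  from popcorn_orders where delivered_yn = 'N'
union all
select 'Butter', sum(butter_qty)
  from popcorn_orders where delivered_yn = 'N'
union all
select 'Chocolate', sum(chocolate_qty)
  from popcorn_orders where delivered_yn = 'N';
```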

Now some would say that this application:
  • isn't pixel perfect
  • doesn't have offline storage
  • isn't natively running on the device
  • can't capitalize on the native features of the phone
  • doesn't have a badge icon
  • isn't offered in a store

And they would be correct.  But guess what?  None of it mattered.  The application was used by 5 different people, all using different devices, and I didn't care what type of devices they were using.  They all thought it was rocket science.  It looked and felt close enough to a native application that none of them noticed nor cared.  The navigation and display were consistent with what they were accustomed to.  More importantly, it was a vast improvement over the alternative - consisting of either a piece of paper or, worse yet, 5 guys huddling around a single computer looking at a spreadsheet.  And this was something that I was able to produce, starting from nothing to completed solution, in about two hours.  If I hadn't been jet lagged, I might have been able to do it in an hour.

You might read this blog post and chuckle to yourself.  How possibly could this trivial application for popcorn distribution to Cub Scouts relate to a "real" mobile enterprise application?  Actually, it's enormously relevant.

  • For this application, I didn't have to know CSS, HTML or mobile user interfaces.
  • I only needed to know SQL.  I wrote no PL/SQL.  I only wrote a handful of SQL queries for the list views, charts, and the one DML statement to update the row.
  • It was immediately accessible to anyone with a Web browser and a smart phone (i.e., everyone).
  • Concurrency and scalability were never a concern.  This application easily could have been used by 1,000 people and I still would not have had any concern.  I let the Oracle Database do the heavy lifting and put an elegant mobile interface on it with Oracle Application Express.

This was a simple example of an opportunistic application.  It didn't necessarily have to start from a spreadsheet to be opportunistic.  And every enterprise on the planet (including Oracle) has a slew of application problems just like this, which today are going unsolved.  I went from zero to hero to rocket scientist in the span of two hours.  And so can you.

A demo version of this application (with fictitious names) is here.  I left the application as is - imperfect on the report page and the form (I should have used a read-only display).  Try it on your own mobile device.

ORA 700's make me grumpy ( well or at least confused )

Grumpy old DBA - Wed, 2014-12-03 09:16
Geez Louise I guess I coulda/shoulda known about this before now.

At some point the oracle software ( 11.2 ish ? 11.1 ish ? ) now starts getting worried and kind of grumpy.

You apparently can get ORA 700's under certain circumstances.  It's not a huge problem apparently ( yet ) when the software decides to let you know ... ( but it might become one later maybe ? ).

I saw this one on a new test vm I am setting up ( Database using ASM and also OEM ).

ORA 700 [kskvmstatact: excessive swapping observed]

So anyway I started doing some looking around at the memory config stuff after seeing that ( but that's a longer story ).

Categories: DBA Blogs

Oracle Support Advisor Webcasts Series for December

Chris Warticki - Wed, 2014-12-03 08:41

Dear Valued Support Customer,
We are pleased to invite you to our Advisor Webcast series for December 2014. Subject matter experts prepare these presentations and deliver them through WebEx. Topics include information about Oracle support services and products. To learn more about the program or to access archived recordings, please follow the links.

There are currently two types of Advisor Webcasts.
If you prefer to read about current events, Oracle Premier Support News provides you with information, technical content, and technical updates from the various Oracle Support teams. For a full list of Premier Support News, go to My Oracle Support and enter Document ID 222.1 in the Knowledge Base search.

Oracle Support

December Featured Webcasts by Product Area:

  • Database: Oracle Database 12c Patching New Features (December 17)
  • Database: Mutexes in Oracle Database (Mandarin only) (December 18)
  • E-Business Suite: Get Proactive with Doc ID 432.1 (December 9)
  • E-Business Suite: Discrete Costing Functional Changes And Bug Fixes For 12.2.3 And 12.2.4 (December 10)
  • E-Business Suite: Rapid Planning: Enabling Mass Updates to Demand Priorities and Background Processing (December 11)
  • E-Business Suite: Order Management Corrupt Data and Data Fixes (December 16)
  • E-Business Suite: AutoLockbox Validation: Case Studies For Customer Identification & Receipt Application (December 18)
  • E-Business Suite: eAM Mobile App Overview and Product Tour (December 18)
  • Fusion Applications: Want to Learn more about how to Setup and Troubleshoot Email notifications in Fusion Applications? (December 11)
  • Hyperion EPM: Getting Started with Essbase Aggregate Storage Option - ASO 101 (December 10)
  • JD Edwards EnterpriseOne: 2014 Year-End Processing for US Payroll - How To Have A Successful Year-End (December 4)
  • JD Edwards EnterpriseOne: Canadian Year-End Processing for 2014 (December 11)
  • JD Edwards EnterpriseOne: JD Edwards World to EnterpriseOne Migration: Migration Plan and Conversions (December 16)
  • JD Edwards EnterpriseOne: 2014 Year-End ESU Install for W2, T4 and 1099 (December 17)
  • JD Edwards World: 1099 Address Book Setup and Guidelines Refresher 2014 (December 2)
  • JD Edwards World: A/P Ledger Method Refresher 2014 (December 3)
  • JD Edwards World: 1099 G/L Method Refresher 2014 (December 4)
  • JD Edwards World: Preparing for the 2014 W-2 Year-End Processing Season (December 9)
  • JD Edwards World: Reviewing Encumbrance Rollover (P4317) & Program Changes in Release A9.3 Update 1 (December 9)
  • JD Edwards World: Preparing for the 2014 Canadian T4 & Releve Year-End Processing Season (December 10)
  • Middleware: Tuxedo GWTDOMAIN Basic Configuration and Common Issues (Mandarin only) (December 16)
  • Middleware: G1 Garbage Collector Quick Overview (Japanese only) (December 17)
  • PeopleSoft Enterprise: PeopleSoft Payroll for North America 9.2 (Product Image 9): New Capabilities (December 2)
  • PeopleSoft Enterprise: Tax Update 14-F General Information Session (December 9)
  • PeopleSoft Enterprise: PeopleSoft HCM Time & Labor Mobile Application, In-Memory Rules & Troubleshooting (December 10)
  • PeopleSoft Enterprise: PeopleSoft Payroll for North America – 2014 Year-End, Special Topics (December 16)
  • PeopleSoft Enterprise: PeopleSoft Process Scheduler and Report Distribution PeopleTools 8.54 New Features (December 17)

Integrating BPM12c with BAM

Darwin IT - Wed, 2014-12-03 07:21
In BPM12c there is a tight integration with BAM, like there was in 11g. However, BAM is not automatically installed with BPM; you need to install it separately. Having done that, you need to get BPM acquainted with BAM and instruct it to enable process analytics.

For the greater part, the configuration is similar to the 11g story. My former Oracle colleague wrote a blog about it: 'Configuration of BAM and BPM for process analytics'.
There is a small difference, though: in 12c you won't find the mentioned 'DisableActions' property under the '' -> 'BPMNConfig' node. Instead, you have to enable process analytics on the BPM server(s). The 12c docs tell you how: '11.1 Understanding Oracle BAM and BPM Integration'.
Taken from that document here a short step-list:
  1. Log in to the Fusion Middleware Control (http://Adminserver-host:port/em) console. 
  2. In the Target Navigation pane, expand the Weblogic Domain node. 
  3. Select the domain in which the Oracle BAM server is installed. 
  4. Open the MBean Browser by right-clicking on the domain and select System MBean Browser. 
  5. Expand the Application Defined MBeans node. 
  6. Navigate to node -> 'Server: server_name' -> AnalyticsConfig -> analytics.
  7. Set the 'DisableProcessMetrics' property to false (that is, enable process metrics). 
  8. You might want to do the same with the 'DisableMonitorExpress' property.
  9. Click Apply.
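If you prefer scripting over clicking, the same MBean change can also be made from WLST. The fragment below is an untested sketch: the connect URL, credentials, server name, and in particular the exact MBean ObjectName are assumptions; copy the real ObjectName from the System MBean Browser before using it.

```python
# WLST (Jython) sketch - all names below are placeholders, verify before use.
connect('weblogic', '<password>', 't3://adminserver-host:7001')
domainCustom()
cd('oracle.as.soainfra.config')
# Hypothetical ObjectName; take the real one from the System MBean Browser.
cd('oracle.as.soainfra.config:Location=bpm_server1,name=analytics,type=AnalyticsConfig,Application=soa-infra')
set('DisableProcessMetrics', false)
set('DisableMonitorExpress', false)
disconnect()
```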

Deploying Spring Boot Applications to Pivotal Cloud Foundry from STS

Pas Apicella - Wed, 2014-12-03 05:14
The example below shows how to use STS (Spring Tool Suite) to deploy a Spring Boot web application directly from the IDE itself. I created a basic Spring Boot web application using the template engine Thymeleaf. The application isn't that fancy; it simply displays a products page of some mock-up products. This blog entry just shows how you could deploy this to Pivotal Cloud Foundry from the IDE itself.
1. First create a Pivotal Cloud Foundry Server connection. The image below shows the connection and one single application.

2. Right click on your Spring Boot application and select "Configure -> Enable as cloud foundry app"
3. Drag and Drop The project onto the Cloud Foundry Connection.
4. At this point a dialog appears asking for an application name as shown below.

5. Click Next
6. Select deployment options and click Next

7. Bind to existing services if you need to 

8. Click Next
9. Click Finish
At this point it will push the application to your Cloud Foundry Instance
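Outside the IDE, the same deployment can be driven by the cf command line with a manifest. Here is a hypothetical manifest.yml for this app; the memory size, instance count, and jar path are assumptions for illustration, not values taken from the post:

```yaml
# Hypothetical manifest for the demo app; adjust name, memory and path.
applications:
- name: SpringBootWebCloudFoundry
  memory: 512M
  instances: 1
  path: target/springbootweb-0.0.1-SNAPSHOT.jar
```

With a file like this in the project root, a plain `cf push` reproduces what the drag-and-drop in STS does.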

Once complete the Console window in STS will show something as follows
Checking application - SpringBootWebCloudFoundry
Generating application archive
Creating application
Pushing application
Application successfully pushed
Starting and staging application
Got staging request for app with id bb3c63f5-c32d-4e27-a834-04076f2af35a
Updated app with guid bb3c63f5-c32d-4e27-a834-04076f2af35a ({"state"=>"STARTED"})
-----> Downloaded app package (12M)
-----> Java Buildpack Version: v2.4 (offline)
-----> Downloading Open Jdk JRE 1.7.0_60 from (found in cache)
       Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (0.9s)
-----> Downloading Spring Auto Reconfiguration 1.4.0_RELEASE from (found in cache)
-----> Uploading droplet (43M)
Starting app instance (index 0) with guid bb3c63f5-c32d-4e27-a834-04076f2af35a

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v1.1.9.RELEASE)

2014-12-03 11:09:50.434  INFO 32 --- [           main] loudProfileApplicationContextInitializer : Adding 'cloud' to list of active profiles
2014-12-03 11:09:50.447  INFO 32 --- [           main] pertySourceApplicationContextInitializer : Adding 'cloud' PropertySource to ApplicationContext
2014-12-03 11:09:50.497  INFO 32 --- [           main] nfigurationApplicationContextInitializer : Adding cloud service auto-reconfiguration to ApplicationContext
2014-12-03 11:09:50.521  INFO 32 --- [           main] apples.sts.web.Application               : Starting Application on 187dfn5m5ve with PID 32 (/home/vcap/app started by vcap in /home/vcap/app)
2014-12-03 11:09:50.577  INFO 32 --- [           main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@374d2f77: startup date [Wed Dec 03 11:09:50 UTC 2014]; root of context hierarchy
2014-12-03 11:09:50.930  WARN 32 --- [           main] .i.s.PathMatchingResourcePatternResolver : Skipping [/home/vcap/app/.java-buildpack/spring_auto_reconfiguration/spring_auto_reconfiguration-1.4.0_RELEASE.jar] because it does not denote a directory
2014-12-03 11:09:51.600  WARN 32 --- [           main] .i.s.PathMatchingResourcePatternResolver : Skipping [/home/vcap/app/.java-buildpack/spring_auto_reconfiguration/spring_auto_reconfiguration-1.4.0_RELEASE.jar] because it does not denote a directory
2014-12-03 11:09:52.349  INFO 32 --- [           main] urceCloudServiceBeanFactoryPostProcessor : Auto-reconfiguring beans of type javax.sql.DataSource
2014-12-03 11:09:52.358  INFO 32 --- [           main] urceCloudServiceBeanFactoryPostProcessor : No beans of type javax.sql.DataSource found. Skipping auto-reconfiguration.
2014-12-03 11:09:53.109  INFO 32 --- [           main] .t.TomcatEmbeddedServletContainerFactory : Server initialized with port: 61097
2014-12-03 11:09:53.391  INFO 32 --- [           main] o.apache.catalina.core.StandardService   : Starting service Tomcat
2014-12-03 11:09:53.393  INFO 32 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine: Apache Tomcat/7.0.56
2014-12-03 11:09:53.523  INFO 32 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2014-12-03 11:09:53.524  INFO 32 --- [ost-startStop-1] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 2950 ms
2014-12-03 11:09:54.201  INFO 32 --- [ost-startStop-1] o.s.b.c.e.ServletRegistrationBean        : Mapping servlet: 'dispatcherServlet' to [/]
2014-12-03 11:09:54.205  INFO 32 --- [ost-startStop-1] o.s.b.c.embedded.FilterRegistrationBean  : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2014-12-03 11:09:54.521  INFO 32 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2014-12-03 11:09:54.611  INFO 32 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.lang.String apples.sts.web.WelcomeController.welcome(org.springframework.ui.Model)
2014-12-03 11:09:54.612  INFO 32 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/products],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.lang.String apples.sts.web.ProductController.listProducts(org.springframework.ui.Model)
2014-12-03 11:09:54.615  INFO 32 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],methods=[],params=[],headers=[],consumes=[],produces=[text/html],custom=[]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest)
2014-12-03 11:09:54.616  INFO 32 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest)
2014-12-03 11:09:54.640  INFO 32 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2014-12-03 11:09:54.641  INFO 32 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2014-12-03 11:09:55.077  INFO 32 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup
2014-12-03 11:09:55.156  INFO 32 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 61097/http
2014-12-03 11:09:55.167  INFO 32 --- [           main] apples.sts.web.Application               : Started Application in 5.918 seconds (JVM running for 6.712)
You can also view the deployed application details in STS by double clicking on it as shown below.
Categories: Fusion Middleware

VirtualBox and Windows driver verifier

Amardeep Sidhu - Wed, 2014-12-03 04:34

I was troubleshooting some Windows hangs on my Desktop system running Windows 8 and enabled driver verifier. Today when I tried to start VirtualBox it failed with this error message.


Most of the online forums were suggesting a reinstall of VirtualBox to fix the issue. But one of the threads mentioned that it was being caused by Windows Driver Verifier. I disabled it, restarted Windows, and VirtualBox worked like a charm. I didn't have time to do more research as I quickly wanted to test something. Maybe we can exclude some particular drivers from Driver Verifier and VirtualBox can then work.

Categories: BI & Warehousing


Jonathan Lewis - Wed, 2014-12-03 02:24

I have a simple script that creates two identical tables, collects stats (with no histograms) on the pair of them, then executes a join. Here’s the SQL to create the first table:

create table t1
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	trunc(dbms_random.value(0,1000))	n_1000,
	trunc(dbms_random.value(0,750))		n_750,
	trunc(dbms_random.value(0,600))		n_600,
	trunc(dbms_random.value(0,400))		n_400,
	trunc(dbms_random.value(0,90))		n_90,
	trunc(dbms_random.value(0,72))		n_72,
	trunc(dbms_random.value(0,40))		n_40,
	trunc(dbms_random.value(0,3))		n_3
from
	generator	v1,
	generator	v2
where
	rownum <= 1e6
;

-- gather stats: no histograms

The two tables have 1,000,000 rows each and t2 is created from t1 with a simple “create as select”. The columns are all defined to be integers, and the naming convention is simple – n_400 holds 400 distinct values with uniform distribution from 0 – 399, n_750 holds 750 values from 0 – 749, and so on.
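To make the data pattern concrete, here is a small Python sketch (not part of the original post) of what each `n_K` column looks like: integers drawn uniformly from 0 to K-1, so each column has at most K distinct values.

```python
import random

random.seed(42)

def column(num_distinct, rows=100_000):
    # Smaller sample than the script's 1e6 rows, same shape:
    # trunc(dbms_random.value(0, K)) -> uniform integers in 0 .. K-1
    return [int(random.random() * num_distinct) for _ in range(rows)]

n_400 = column(400)
n_750 = column(750)

# n_400 covers the full range 0-399 with uniform frequencies
print(len(set(n_400)), min(n_400), max(n_400))
```

With 100,000 rows and only a few hundred buckets, every value is (all but certainly) present, which is why gathering stats without histograms gives the optimizer clean, uniform column statistics to work with.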

Here’s the simple query:

select
        t1.*, t2.*
from
        t1, t2
where
        t1.n_400 = 0
and     t2.n_72  = t1.n_90
and     t2.n_750 = t1.n_600
and     t2.n_400 = 1
;

Since I’ve created no indexes you might expect the query to do a couple of full tablescans and a hash join to get its result – and you’d be right; but what do you think the predicted cardinality would be ?
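As a back-of-the-envelope check (my addition, not part of the original post), the classic textbook arithmetic – uniform distributions, no histograms, join selectivity of 1/max(ndv) per equality predicate – can be sketched in a few lines. The optimizer's actual arithmetic differs by version, which is precisely the point of what follows.

```python
rows = 1_000_000

# Single-table filter n_400 = constant: selectivity 1/400
card_t1 = rows / 400          # 2,500 rows surviving the filter on t1
card_t2 = rows / 400          # 2,500 rows surviving the filter on t2

# Two equality join predicates; textbook selectivity is
# 1 / max(num_distinct of the two joined columns) for each
sel_join = (1 / max(90, 72)) * (1 / max(600, 750))

est = card_t1 * card_t2 * sel_join
print(round(est))             # roughly 93
```

The tablescan estimates of 2,500 rows match this simple model exactly; the interesting question is how close (or far) each version's join cardinality lands from the naive figure.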

Here are the results from running explain plan on the query and then reporting the execution plan – for three different versions of Oracle:
| Id  | Operation            |  Name       | Rows  | Bytes | Cost (%CPU)|
|   0 | SELECT STATEMENT     |             |    96 |  4992 |  1230  (10)|
|*  1 |  HASH JOIN           |             |    96 |  4992 |  1230  (10)|
|*  2 |   TABLE ACCESS FULL  | T1          |  2500 | 65000 |   617  (11)|
|*  3 |   TABLE ACCESS FULL  | T2          |  2500 | 65000 |   613  (10)|

Predicate Information (identified by operation id):
   1 - access("T2"."N_750"="T1"."N_600" AND "T2"."N_72"="T1"."N_90")
   2 - filter("T1"."N_400"=0)
   3 - filter("T2"."N_400"=1)

| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |   116 |  6032 |  1229  (10)| 00:00:07 |
|*  1 |  HASH JOIN         |      |   116 |  6032 |  1229  (10)| 00:00:07 |
|*  2 |   TABLE ACCESS FULL| T1   |  2500 | 65000 |   616  (11)| 00:00:04 |
|*  3 |   TABLE ACCESS FULL| T2   |  2500 | 65000 |   612  (10)| 00:00:04 |

Predicate Information (identified by operation id):
   1 - access("T2"."N_750"="T1"."N_600" AND "T2"."N_72"="T1"."N_90")
   2 - filter("T1"."N_400"=0)
   3 - filter("T2"."N_400"=1)

| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |  2554 |   139K|  1225  (10)| 00:00:07 |
|*  1 |  HASH JOIN         |      |  2554 |   139K|  1225  (10)| 00:00:07 |
|*  2 |   TABLE ACCESS FULL| T1   |  2500 | 70000 |   612  (10)| 00:00:04 |
|*  3 |   TABLE ACCESS FULL| T2   |  2500 | 70000 |   612  (10)| 00:00:04 |

Predicate Information (identified by operation id):
   1 - access("T2"."N_72"="T1"."N_90" AND "T2"."N_750"="T1"."N_600")
   2 - filter("T1"."N_400"=0)
   3 - filter("T2"."N_400"=1)

The change shown in the third plan (I didn’t check whether the same figure also appears in any later releases) is particularly worrying. When you see a simple query like this change its cardinality estimate on an upgrade you can be fairly confident that some of your more complex queries will change their plans – even if there are no clever new optimizer transformations coming into play.

I’ll write up an explanation of how the optimizer has produced three different estimates some time over the next couple of weeks; but if you want an earlier answer this is one of the things I’ll be covering in my presentation on calculating selectivity at “Super Sunday” at UKOUG Tech 14.