Feed aggregator

Using Agile Practices to Create an Agile Presentation

Cary Millsap - Fri, 2011-06-17 13:25
What’s the best way to make a presentation on Agile practices? Practice Agile practices.

You could write a presentation “big bang” style, delivering version 1.0 in front of your big audience of 200+ people at Kscope 2011 before anybody has seen it. Of course, if you do it that way, you build a lot of risk into your product. But what else can you do?

You can execute the Agile practices of releasing early and often, allowing the reception of your product to guide its design. Whenever you find an aspect of your product that doesn’t get the enthusiastic reception you had hoped for, you fix it for the next release.

That’s one of the reasons that my release schedule for “My Case for Agile Methods” includes a little online webinar hosted by Red Gate Software next week. My release schedule is actually a lot more complicated than just one little pre-ODTUG webinar:

2011-04-15  Show key conceptual graphics to son (age 13)
2011-04-29  Review #1 of paper with employee #1
2011-05-18  Review #2 of paper with customer
2011-05-14  Review #3 of paper with employee #1
2011-05-18  Review #4 of paper with employee #2
2011-05-26  Review #5 of paper with employee #3
2011-06-01  Submit paper to ODTUG web site
2011-06-02  Review #6 of paper with employee #1
2011-06-06  Review #7 of paper with employee #3
2011-06-10  Submit revised paper to ODTUG web site
2011-06-13  Present “My Case for Agile Methods” to twelve people in an on-site customer meeting
2011-06-22  Present “My Case for Agile Methods” in an online webinar hosted by Red Gate Software
2011-06-27  Present “My Case for Agile Methods” at ODTUG Kscope 2011 in Long Beach, California
(By the way, the vast majority of the work here is done in Pages, not Keynote. I do my thinking in a word processor, not in an operating system for slide projectors.)

Two Agile practices are key to everything I’ve ever done well: incremental design and rapid iteration. Release early, release often, and incorporate what you learn from real world use back into the product. The magic comes from learning how to choose wisely in two dimensions:
  1. Which feature do you include next?
  2. To whom do you release next?
The key is to show your work to other people. Yes, there’s tremendous value in practicing a presentation, but practicing without an audience merely reinforces; it doesn’t inform. What you need while you design something is information—specifically, you need the kind of information called feedback. Some of the feedback I receive generates some pretty energetic arguing. I need that to fortify my understanding of my own arguments so that I’ll be more likely to survive a good Q&A session on stage.

To lots of people who have seen teams run projects into the ground using what they call “Agile,” the word “Agile” has become a synonym for sloppy, irresponsible work habits. When you hear me talk about Agile, you’ll hear about practices that are highly disciplined and that actually require a lot of focus, dedication, commitment, practice, and plain old hard work to execute.

Agile, to me, is about injecting discipline into a process that is inevitably rife with unpredictable change.

Bug 11858963: optimization goes wrong with FIRST_ROWS_K (11g)?

Charles Schultz - Fri, 2011-06-17 08:07
At the beginning of March, I noticed some very odd things in a 10053 trace of a problem query I was working on. I also made some comments on Kerry Osborne's blog related to this matter. Oracle Support turned this into a new bug (11858963), unfortunately an aberration of Fix 4887636. I was told that this bug will not be fixed in 11gR1 (as 11.1.0.7 is the terminal release), but that the fix will be included in future 11gR2 patches.

If you have access to SRs, you can follow the history in SR 3-314198695. For those who cannot, here is a short summary.

We had a query that suffered severe performance degradation after upgrading from 10.2.0.4 to 11.1.0.7. I attempted to use SQLT but initially ran into problems with the different versions of SQLT, so I did the next best thing and looked at the 10053 traces directly. After a bit of digging, I noticed several cases where the estimated cardinality was completely off. For example:


First K Rows: non adjusted N = 1916086.00, sq fil. factor = 1.000000
First K Rows: K = 10.00, N = 1916086.00
First K Rows: old pf = 0.1443463, new pf = 0.0000052
Access path analysis for FRRGRNL
***************************************
SINGLE TABLE ACCESS PATH (First K Rows)
Single Table Cardinality Estimation for FRRGRNL[FRRGRNL] 
Table: FRRGRNL Alias: FRRGRNL
Card: Original: 10.000000 Rounded: 10 Computed: 10.00 Non Adjusted: 10.00



So, the idea behind FIRST_ROWS_K is that you want the entire query to be optimized (Jonathan Lewis would spell it with an "s") for the retrieval of the first K rows. Makes sense, sounds like a good idea. The problem I had with this initial finding is that every single rowsource was being reduced to having a cardinality of K. That is just wrong. Why is it wrong? Let's say you have a table with, um, 1916086 rows. Would you want the optimizer to pretend it has 10 rows and make it the driver of a Nested Loop? Not me. Or likewise, would you want the optimizer to think "Hey, look at that, 10 rows, I'll use an index lookup". Why would you want FIRST_ROWS_K to completely obliterate ALL your cardinalities?
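For context, here is a sketch of how first-K-rows optimization is typically requested; the ORDER BY column name below is illustrative, not taken from the trace:

```sql
-- Session-wide: ask the CBO to optimize every query for the
-- retrieval of the first 10 rows
ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;

-- Per statement: the FIRST_ROWS(n) hint requests the same behavior
SELECT /*+ FIRST_ROWS(10) */ *
  FROM frrgrnl
 ORDER BY some_column;
```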

I realize I am exposing some of my naivete above. Mauro, my Support Analyst, corrected some of my false thinking with the following statement:

The tables are scaled under First K Rows during the different calculations (before the final join order is identified) but I cannot explain any further how / when / why.
Keep in mind that the CBO tweak -> cost -> decide (CBQT is an example)
Unfortunately we cannot discuss the CBO algorithms / behavior in more detail, they are internal materials.
Regarding the plans yes, they are different, the "bad plan" is generated with FIRST_ROWS_10 in 11g
The "good" plan is generated in 10.2.0.4 (no matter which optimizer_mode you specify, FIRST_ROWS_10 is ignored because of the limitation) or in 11g when you disable 4887636 (that basically reverts the optimizer_mode to ALL_ROWS).
Basically the good plan has never been generated under FIRST_ROWS_10 since because of 4887636 FIRST_ROWS_10 has never been used before



I still need to wrap my head around "the limitation" in 10.2.0.4 and how we never used FIRST_ROWS_K for this particular query, but I believe that is exactly what Fix 4887636 was supposed to be addressing.

Here are some of the technical details from Bug 11858963:

]]potential performance degradation in fkr mode
]]with fix to bug4887636 enabled, if top query block
]]has single row aggregation
REDISCOVERY INFORMATION:
fkr mode, top query block contains blocking construct (i.e, single row aggregation). Plan improves with 4887636 turned off
WORKAROUND:
_fix_control='4887636:off'
I assume fkr mode is FIRST_ROWS_K, shortened to F(irst)KR(ows). The term "blocking construct" is most interesting - why would a single row aggregation be labeled as a "blocking construct"?

Also, this was my first introduction to turning a specific fix off. That in itself is kinda cool.
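For reference, the workaround above can be applied at the session level, or scoped to a single statement with the OPT_PARAM hint; as with any underscore parameter, this should only be done under Oracle Support's guidance:

```sql
-- Disable fix 4887636 for the current session only
ALTER SESSION SET "_fix_control" = '4887636:OFF';

-- Or for a single statement
SELECT /*+ OPT_PARAM('_fix_control' '4887636:off') */ *
  FROM dual;
```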

Fine tuning your logical to physical DB transform

Susan Duncan - Thu, 2011-06-16 07:23
When you use JDeveloper's class modeler to design your logical database, you can run the DB Transform to get a physical DB model. But have you ever been frustrated by the limitations of the transform? For instance, there was no way to control the -
  • name to be used for the physical table
  • attribute to be used as the PK column
  • datatype and settings to be applied to a specific attribute
  • foreign key or intersection column names
  • different datatypes to be created for different database types
In JDeveloper 11.1.2, additional fine-grained and reusable transform rules are available to you through a UML Profile. This is a UML2 concept, but for database designers using JDeveloper it means you can set many properties on the logical model that are used when the transform to the physical model(s) is run.

There is a section in Part 2 of the tutorial Using Logical Models in UML for Database Development that details using the DB Profile, so I will not repeat the whole story here, but just give you a flavour of the capabilities. Once you've applied the profile to the UML package of your class diagram, you can set stereotypes against any element in your diagram. In the image below you are looking at the stereotype properties of the empno attribute - the attribute that will become a column in the database. Note that this column is to be used as the primary key on the table. Also note the list of Oracle and non-Oracle datatypes. Here you can specify exactly how empno should be transformed when multiple physical databases may be required. If the property for any of these is not set, the default transform will be applied.

You could say that having this ability as part of the class model bridges the gap between this model (being used as a logical DB model) and the transformed physical database model - it provides some form of relative model capability.



Looking at another new feature in 11.1.2.0, the extract on the right shows other elements on the class diagram. I've created a primitive type (String25Type) and, using the stereotype, have specified how this type should be transformed. Now I can use that type on different classes in my diagram and the transformer will use it as necessary.

There are many other ways to fine-tune your transform; run through the tutorial and try them for yourself.

Hudson and me!

Susan Duncan - Thu, 2011-06-16 05:21
Over the past months I've been working more and more with Hudson, the continuous integration server. If you're familiar with Hudson then no doubt you are familiar with the changes that faced it in that time. If you're not - well, it's a long, well-documented story in the press and I will not bore you with it here!

But the most important thing is that Hudson is a great (free) continuous integration tool that continues to grow in popularity and status. Oracle became its supporter, taking over from Sun's original open source project. As well as its community of users and developers, Oracle has a full-time team working on it, including me as Product Manager, and it recently started the process of moving to the Eclipse Foundation as a top-level project.

Internally we use Hudson across the organization for all manner of build and test jobs and I know that many of you do too.

In JDeveloper 11.1.2 we've added new features to Team Productivity Center (TPC) to integrate Hudson (or CruiseControl) build/test results into the IDE and relate those to code check-ins via TPC work items. You can see a quick demo of that here.

If you use Hudson I'd like to hear from you - in fact, I'd like to hear from you anyway! So please contact me at the usual Oracle address. Other ways to keep up with Hudson are through its mailing lists, wiki and of course twitter - @hudsonci

IRM Hotfolder update - seal docs automatically

Simon Thorpe - Tue, 2011-06-14 03:09

Another update of the IRM Hotfolder tool was announced a few days ago - 3.2.0.

The main enhancement this time is to preserve timestamps, ownership, and file system permissions during the automated sealing process. Earlier versions would create sealed files with timestamps reflecting the time of sealing, ownership attributed to the wrapper utility, and so on. This version lets you preserve the properties the file had prior to sealing.

The documentation has also been updated to clarify the permissions needed to use the utility.

For those who aren't familiar with the IRM Hotfolder, it is a simple utility that uses IRM APIs to seal and unseal files automatically by monitoring file system folders, WebDAV folders, SharePoint folders, application output folders, and so on.

Native String Aggregation in 11gR2

Duncan Mein - Tue, 2011-06-14 03:07
A fairly recent requirement meant that we had to send a bulk email to all users of each department from within our APEX application. We have 5000 records in our users table, and the last thing we wanted to do was send 5000 distinct emails (one email per user), both for performance reasons and to be kind to the mail queue / server.

In essence, I wanted to perform a type of string aggregation where I could group by department and produce a comma-delimited string of the email addresses of all users within that department. With a firm understanding of the requirement, so began the hunt for a solution. Depending on which version of the database you are running, the desired result can be achieved in a couple of ways.

Firstly, the example objects.

CREATE TABLE app_user
(id NUMBER
,dept VARCHAR2 (255)
,username VARCHAR2(255)
,email VARCHAR2(255)
);

INSERT INTO app_user (id, dept, username, email)
VALUES (1,'IT','FRED','fred@mycompany.com');

INSERT INTO app_user (id, dept, username, email)
VALUES (2,'IT','JOE','joe@mycompany.com');

INSERT INTO app_user (id, dept, username, email)
VALUES (3,'SALES','GILL','gill@mycompany.com');

INSERT INTO app_user (id, dept, username, email)
VALUES (4,'HR','EMILY','emily@mycompany.com');

INSERT INTO app_user (id, dept, username, email)
VALUES (5,'HR','BILL','bill@mycompany.com');

INSERT INTO app_user (id, dept, username, email)
VALUES (6,'HR','GUS','gus@mycompany.com');

COMMIT;

If you are using 11gR2, you can use the new LISTAGG function as follows to perform your string aggregation natively:

SELECT dept
,LISTAGG(email,',') WITHIN GROUP (ORDER BY dept) email_list
FROM app_user
GROUP BY dept;

DEPT EMAIL_LIST
-----------------------------------------------------------------
HR emily@mycompany.com,bill@mycompany.com,gus@mycompany.com
IT fred@mycompany.com,joe@mycompany.com
SALES gill@mycompany.com

If you are running a release earlier than 11gR2, you can achieve the same result using XMLAGG as follows:

SELECT au.dept
,LTRIM
(EXTRACT
(XMLAGG
(XMLELEMENT
("EMAIL",',' || email)),'/EMAIL/text()'), ','
) email_list
FROM app_user au
GROUP BY au.dept;

DEPT EMAIL_LIST
-----------------------------------------------------------------
HR emily@mycompany.com,bill@mycompany.com,gus@mycompany.com
IT fred@mycompany.com,joe@mycompany.com
SALES gill@mycompany.com

The introduction of native string aggregation in 11gR2 is a real bonus, and the function has already proved hugely useful within our applications.

Back from Tallinn, Estonia

Rob van Wijk - Sat, 2011-06-11 16:22
This morning I arrived back from a trip to Tallinn. Oracle Estonia had given me the opportunity to present my SQL Masterclass seminar at their training center in Tallinn on Thursday and Friday. Thank you to all those who spent two days listening to me. Here is a short story about my trip, including some photos. I arrived at Tallinn airport around 1PM local time on Wednesday.

Clouds Leak - IRM protects

Simon Thorpe - Sat, 2011-06-11 06:46

In a recent report, security professionals reported two leading fears relating to cloud services:

"Exposure of confidential or sensitive information to unauthorised systems or personnel"

"Confidential or sensitive data loss or leakage"

These fears are compounded by the fact that business users frequently sign themselves up to cloud services independently of whatever arrangements are made by corporate IT. Users are making personal choices to use the cloud as a convenient place to store and share files - and they are doing this for business information as well as personal files. In my own role, I was recently invited by a partner to review a sensitive business document using Google Docs. I just checked, and the file is still there weeks after the end of that particular project - because users don't often tidy up after themselves.

So, the cloud gives us new, seductively simple ways to scatter information around, and our choices are governed by convenience rather than compliance. And not all cloud services are equal when it comes to protecting data. Only a few weeks ago, it was reported that one popular service had amended its privacy assurance from "Nobody can see your private files..." to "Other [service] users cannot...", and that administrators were "prohibited" from accessing files - rather than "prevented". This story demonstrates that security pros are right to worry about exposure to unauthorised systems and personnel.

Added to this, the recent Sony incident highlights how lazy we are when picking passwords, and that services do not always protect passwords anything like as well as they should. Reportedly, millions of passwords were stored as plain text, and analysis shows that users favoured very simple passwords and used the same password for multiple services. No great surprise, but worrying to a security professional who knows that users are just as inconsiderate when using the cloud for collaboration.

No wonder then that security professionals put the loss or exposure of sensitive information firmly at the top of their list of concerns. They are faced with a triple-whammy - distribution without control, administration with inadequate safeguards, and authentication with weak password policy. A compliance nightmare.

So why not block users from using such services? Well, you can try, but from the users' perspective convenience trumps compliance, and where there's a will there's a way. Blocking technologies find it really difficult to cover all the options, and users can be very inventive at bypassing blocks. In any case, users are making these choices because it makes them more productive, so the real goal, arguably, is to find a safe way to let people make these choices rather than maintain the pretence that you can stop them.

The relevance of IRM is clear. Users might adopt such services, but sealed files remain encrypted no matter where they are stored and no matter what mechanism is used to upload and download them. Cloud administrators have no more access to them than if they found them on a lost USB device. Further, a hacker might steal or crack your cloud passwords, but that has no bearing on your IRM service password, which is firmly under the control of corporate policy. And if policy changes such that users no longer have rights to the files they uploaded, those files become inaccessible to them regardless of location. You can tidy up even if users do not.

Finally, the IRM audit trail can give insights into the locations where files are being stored.

So, IRM provides an effective safety net for your sensitive corporate information - an enabler that mitigates risks that are otherwise really hard to deal with.

EPM, CPM, EIM and other confusing acronyms – making sense out of Oracle’s data management offerings

Andrews Consulting - Thu, 2011-06-09 10:41
Oracle proclaims itself to be the leader in the Enterprise Performance Management (EPM) market – an assertion that is hard to dispute given the imprecise way in which that acronym is used. Like all of its competitors, Oracle has its own unique vocabulary and associated definitions of the jargon and acronyms it uses. For those […]
Categories: APPS Blogs

A Right Pig's Ear of a Circular Reference

Duncan Mein - Wed, 2011-06-08 16:31
If you have ever used a self-referencing table in Oracle to store hierarchical data (e.g. an organisation's structure), you will undoubtedly have used CONNECT BY PRIOR to build your results tree. This is something we use on pretty much every project, as the organisation is very hierarchy-based.

Recently, the support cell sent over the details of a call they had received, asking me to take a look. Reading through the call, I noticed that the following Oracle error message was logged:

"ORA-01436: CONNECT BY loop in user data"

A quick look at the explanation of ORA-01436 made it clear that there was a circular reference in the organisation table, i.e. ORG_UNIT1 was the parent of ORG_UNIT2 and ORG_UNIT2 was the parent of ORG_UNIT1 - each was both child and parent of the other. This was quickly resolved by adding application- and server-side validation to prevent it from recurring.
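As a sketch of what the server-side validation might include (constraint name hypothetical): a check constraint rules out the degenerate one-row cycle, but mutual parent relationships like the ORG_UNIT example still need a trigger or a periodic check:

```sql
-- Stop a row from being its own parent; cycles spanning two or
-- more rows (ORG_UNIT1 <-> ORG_UNIT2) need procedural validation
ALTER TABLE organisations
  ADD CONSTRAINT org_no_self_parent
  CHECK (parent_org_id IS NULL OR parent_org_id <> org_id);
```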

The outcome of this fix was a useful script that I keep to identify if there are any circular references within a self referencing table. The example below shows this script in action:
CREATE TABLE ORGANISATIONS (ORG_ID NUMBER
, PARENT_ORG_ID NUMBER
, NAME VARCHAR2(100));

INSERT INTO ORGANISATIONS VALUES (1, NULL, 'HQ');
INSERT INTO ORGANISATIONS VALUES (2, 1, 'SALES');
INSERT INTO ORGANISATIONS VALUES (3, 1, 'RESEARCH');
INSERT INTO ORGANISATIONS VALUES (4, 1, 'IT');
INSERT INTO ORGANISATIONS VALUES (5, 2, 'EUROPE');
INSERT INTO ORGANISATIONS VALUES (6, 2, 'ASIA');
INSERT INTO ORGANISATIONS VALUES (7, 2, 'AMERICAS');

COMMIT;

A quick tree walk query shows the visual representation of the hierarchy with no errors:
SELECT LPAD ('*', LEVEL, '*') || name tree
,LEVEL lev
FROM organisations
START WITH org_id = 1
CONNECT BY PRIOR org_id = parent_org_id;

TREE LEV
---------------------
*HQ 1
**SALES 2
***EUROPE 3
***ASIA 3
***AMERICAS 3
**RESEARCH 2
**IT 2

Now let's create a circular reference so that EUROPE is the parent of SALES:
UPDATE ORGANISATIONS SET parent_org_id = 5 WHERE NAME = 'SALES';
COMMIT;

Re-running the query from the very top of the tree completes, but gives an incorrect and incomplete result set:
TREE         LEV
---------------------
*HQ 1
**RESEARCH 2
**IT 2

If you are running Oracle Database 9i or earlier, this Experts Exchange article provides a nice procedural solution, and a quick modification to the PL/SQL gave me exactly the information I needed:
SET serveroutput on size 20000

DECLARE
   l_n NUMBER;
BEGIN
   FOR rec IN (SELECT org_id FROM organisations)
   LOOP
      BEGIN
         -- Walk the tree from each row; ORA-01436 is raised if the
         -- row is part of a CONNECT BY loop
         SELECT COUNT (*)
           INTO l_n
           FROM (SELECT LPAD ('*', LEVEL, '*') || name tree
                      , LEVEL lev
                   FROM organisations
                  START WITH org_id = rec.org_id
                CONNECT BY PRIOR org_id = parent_org_id);
      EXCEPTION
         WHEN OTHERS
         THEN
            IF SQLCODE = -1436
            THEN
               DBMS_OUTPUT.put_line
                  (rec.org_id || ' is part of a Circular Reference');
            ELSE
               RAISE; -- don't silently swallow unexpected errors
            END IF;
      END;
   END LOOP;
END;
/
As Buzz Killington pointed out in the comments section, Oracle Database 10g onwards introduced CONNECT BY NOCYCLE, which instructs Oracle to return rows even if they are involved in a self-referencing loop. When used with the CONNECT_BY_ISCYCLE pseudocolumn, you can easily identify erroneous relationships via SQL without the need to switch to PL/SQL. An example of this can be seen by executing the following query:

WITH my_data AS
(  SELECT org.parent_org_id
         ,org.org_id
         ,org.name org_name
     FROM organisations org
)
SELECT SYS_CONNECT_BY_PATH (org_name, '/') tree
      ,parent_org_id
      ,org_id
      ,CONNECT_BY_ISCYCLE err
  FROM my_data
CONNECT BY NOCYCLE PRIOR org_id = parent_org_id
ORDER BY 4 DESC;

TREE PARENT_ORG_ID ORG_ID ERR
------------------------------------------------
/SALES/EUROPE 2 5 1
/EUROPE/SALES 5 2 1
/EUROPE 2 5 0
/EUROPE/SALES/ASIA 2 6 0
/EUROPE/SALES/AMERICAS 2 7 0
/ASIA 2 6 0
/AMERICAS 2 7 0
/SALES 5 2 0
/SALES/ASIA 2 6 0
/SALES/AMERICAS 2 7 0
/HQ 1 0
/HQ/RESEARCH 1 3 0
/HQ/IT 1 4 0
/RESEARCH 1 3 0
/IT 1 4 0

Any value > 0 in the ERR column indicates that the row is involved in a self-referencing join. This is a much neater and more performant way to achieve the desired result. Thanks to Buzz for pointing out a much better alternative to the original PL/SQL routine.

Pen Test Tool for APEX

Duncan Mein - Wed, 2011-06-08 07:13
Just a quick plug for a cool penetration test tool that we have been using on-site for a few months now. The application is called the Application Express Security Console, developed by a company called Recx Ltd.

This can be used to identify areas of your APEX applications that are vulnerable to SQL injection, XSS, inadequate access control, and so on. It kindly suggests ways in which each vulnerability can be addressed as well.

We have now built the use of this tool into our formal release process, and it has definitely proved value for money for our organisation.

Why KScope?

Cary Millsap - Fri, 2011-06-03 10:09
Early this year, my friend Mike Riley from ODTUG asked me to write a little essay in response to the question, “Why Kscope?” that he could post on the ODTUG blog. He agreed that cross-posting would help the group reach more people, so I’ve reproduced my response to that question here. I hope to see you at Kscope11 in Long Beach June 26–30. If you develop applications for Oracle systems, you need to be there.

MR: Why KScope?

CM: Most people in the Oracle world who know my name probably think of me as a database administrator. In my heart, I am a software designer and developer. Before my career with Oracle, I worked in the semiconductor industry as a language designer. I wrote compilers for a living. Designing and writing software has always been my professional true love. I’ve never strayed too far away from it; I’ve always found a reason to write software, no matter what my job has been. [Ed: Examples include the Oracle*APS suite and a compiler design project he did for Great West Life in the 1990s, the queueing theory models he worked on in the late 1990s, the Method R Profiler software (Cary wrote all the XSLT code), and finally today, he spends about half of his time designing and writing the MR Tools suite.]

My career as an Oracle performance specialist is really a natural extension of my software development background. It is still really weird to me that in the Oracle market, performance is regarded as a job done primarily by operations people instead of by development people. Developers control at least 90% of the leverage over how fast an application will be able to run. I think that performance became a DBA responsibility in the formative years of our Oracle world because so many early Oracle projects had DBA teams but no professional development teams.

Most of those big projects were people implementing big off-the-shelf applications like Oracle Financial and Manufacturing Applications (which grew into the Oracle E-Business Suite). The only developers that most of those implementation teams had were what I would call nonprofessional developers. Now, I don’t mean people who were in any way unprofessional. I mean they were predominantly businesspeople who had never been educated as software developers, but who’d been told that of course anybody could write computer programs in this new “fourth-generation language” called SQL.

Just about any time you implement a vendor’s highly customizable new application with 20,000+ database objects underneath it, you’re going to run into performance problems. Someone had to attend to those problems, and the DBAs and sysadmins were the only technical people anywhere near the project who could do it. Those DBAs and Oracle sysadmins were also the people who organized the early Oracle conferences, and I think this is where the topic of “performance tuning” became embedded into the DBA track.

The resulting problem that I still see today is that the topic became dominated by “tips and techniques”—lists of tricks that operational people could try to maybe make their systems go a little bit faster. The word “tuning” says it all. I almost never use the word except facetiously, because it’s a cheap imitation of what systems really need, which is performance optimization, which is what designers and developers of software are supposed to do. Even the evolution of Oracle tools for the performance analyst mirrors this post-production tips-and-techniques “tuning” mentality. That’s why most performance management tools you see today are predominantly oriented toward viewing performance from a system resource perspective (the DBA’s perspective), rather than the code path perspective (the developer’s perspective).

The whole key to performance is the application design and development team, especially when you realize that the performance of an application is not just its code path speed, but its overall interaction with the person using it. So many of the performance problems that I’ve found are caused by applications that are just stupid in how they’re designed to interact with me. For example, if you’ve seen my “Messed-up apps” presentation before, you might remember the self-service bus ticket kiosk that made me wait for over a minute while the application tallied the more-than-2,000 different bus trips for which I might want to buy a ticket. That’s an app with a broken specification. There’s nothing that a run-time operations team can do to make that application any fun to use (short of sending it back for redesign).

My goal as a software designer is not just to make software that runs quickly. My goal is also to make applications that are delightful to use. It’s the difference between an application that you use because you must and one that feels like it’s a necessary part of who you are. Making software like that is the kind of thing that a designer learns from studying Don Norman, Edward Tufte, Christopher Alexander, and Jonathan Ive. It’s a level of performance that just isn’t on the menu for operational run-time support staff to even think about, because it’s beyond their control.

So: why Kscope? The ODTUG conferences are the best places I can go in the Oracle market where I can be with people who think and talk about these things. …Or for that matter, who understand that these ideas even exist and deserve to be studied. KScope is just the right place for me to be.

Recent conference presentations

Raimonds Simanovskis - Thu, 2011-06-02 16:00

Recently I have not posted anything new, as I was busy with some new projects, and during May I attended several conferences, at some of which I also gave presentations. Here I will post the slides from those conferences. If you are interested in any of these topics, ask me to come to you as well and talk about them :)

Agile Riga Day

In March I spoke at Agile Riga Day (organized by Agile Latvia) about my experience and recommendations on how to adopt Agile practices in an iterative style.

RailsConf

In May I travelled to RailsConf in Baltimore, where I hosted the traditional Rails on Oracle Birds of a Feather session and gave an overview of how to contribute to the ActiveRecord Oracle enhanced adapter.

TAPOST

Then I participated in our local Theory and Practice of Software Testing conference, where I promoted the use of Ruby as a test scripting language.

RailsWayCon

And lastly, I participated in the Euruko and RailsWayCon conferences in Berlin. My first RailsWayCon presentation was about multidimensional data analysis with JRuby and the mondrian-olap gem. I also published the mondrian-olap demo project that I used during the presentation.

My second RailsWayCon presentation was about CoffeeScript, Backbone.js, and Jasmine, which I have recently been using to build rich web user interfaces. This was quite a successful presentation: there were many questions, and many participants were encouraged to try out CoffeeScript and Backbone.js. I also published the demo application that I used for code samples during the presentation.

Next conferences

Now I will take a rest from conferences for some time :) But then I will attend FrozenRails in Helsinki, and I will present at Oracle OpenWorld in San Francisco. See you there!

Categories: Development

Opensource WebRTC for Browser 2 Browser communication Coming up

Khanderao Kand - Thu, 2011-06-02 13:56
WebRTC, a new open source project for browser-to-browser communication, is making some progress. http://sites.google.com/site/webrtc/ has been launched, and the project will go through W3C standardization. It may become part of HTML5. As far as I know, Google may adopt it, and many browsers would support it. Browser-to-browser communication without any server involved could replace traditional P2P communication, including chat and talk.

Here is WebRTC architecture diagram:



For developers : http://sites.google.com/site/webrtc/reference
However, it is not ready yet. The current demo still needs a demo server, but serverless operation should arrive soon.

If you want to join the effort: http://sites.google.com/site/webrtc/build

Growing Risks: Mobiles, Clouds, and Social Media

Simon Thorpe - Thu, 2011-06-02 07:05

The International Information Systems Security Certification Consortium, Inc., (ISC)²®, has just published a report conducted on its behalf by Frost & Sullivan.

The report highlights three growing trends that security professionals are, or should be, worried about - mobile device proliferation, cloud computing, and social media.

Mobile devices are highlighted because survey respondents ranked them second in terms of threat (behind application vulnerabilities). Frost & Sullivan comment that "With so many mobile devices in the enterprise, defending corporate data from leaks either intentionally or via loss or theft of a device is challenging." Most respondents reported that they have policies and technologies in place, with rights management being reported as part of the technology mix.

Cloud computing was ranked considerably lower by respondents, but Frost & Sullivan highlighted it as a growing concern for which the security professionals consistently cited the need for more training and awareness.

The security professionals also reported that their two most feared cloud-related threats are:

  • "Exposure of confidential or sensitive information to unauthorised systems or personnel"
  • "Confidential or sensitive data loss or leakage"

These two concerns were ranked head and shoulders above access controls, cyber attacks, disruptions to operations, and concerns about compliance audits and forensic reporting.

Rather contrarily, the third trend is highlighted because respondents reported that it is not a major concern. Frost & Sullivan observe that many security professionals appear to be under-estimating the risks of social computing, with 28% of respondents saying that they impose no restrictions at all on the use of social media, and most imposing few restrictions.

So, interesting reading although no great surprises - and reason enough for me to write three pieces on what Oracle IRM brings to the party for each of these three challenging trends.

A comment on mobile device proliferation is already available here.

A comment on cloud adoption is available here.

KEEP Pool

Jeff Hunter - Wed, 2011-06-01 17:11
I have a persnickety problem with a particular table being aged out of the buffer cache.  I have one query that runs on a defined basis.  Sometimes when it is run, it does a lot of physical reads for this table.  Other times, it does no physical reads. So I decided to play around with a different buffer pool and let this table age out of cache on its own terms rather than competing in the…

Oracle Certification Group on Linkedin Crosses Ten Thousand Members

OCP Advisor - Wed, 2011-06-01 00:52
On May 23, 2011, the Oracle Certification Group on LinkedIn crossed 10,000 members. The group consists of certified professionals, certification candidates, and top resourcing and HR executives from all around the world. Oracle certified professionals hold Oracle Expert, Specialist, Associate, Professional, or Master credentials.

To count down the journey to 10,000 members, a contest was held to guess the date on which the group would cross the magic figure. Four group members guessed within one day of the actual date and will be awarded $25 Amazon gift certificates. The winners are as follows:
  • Pratik Kange
  • Prahlad Sharma
  • Agha Jameel
  • Joan Marc Amenós Villanueva
Congratulations everyone!

Working with SSL certificates on Oracle Enterprise Gateway or OWSM

Marc Kelderman - Wed, 2011-06-01 00:43
Working with SSL certificates is not straightforward. Applying a new certificate on a server for outgoing messages is no walk in the park, and installing a client certificate on top of an SSL configuration is not easy either.

In this article I want to share some useful commands for creating SSL connections on the Oracle application server, that is, WebLogic Server. They can also be applied to other application servers.

To create an outgoing SSL connection, you need the public certificate of the external party you want to connect to. This can be obtained via your browser: enter https://servername:443/query/end/point?WSDL in the address bar.

Click the icon in the location bar to show the certificate. You can then export this public certificate to a ".cer" file, which you will need to apply on your application server.
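If you prefer the command line over the browser, openssl can fetch the certificate too. The following is a minimal sketch; to keep it self-contained it stands up a throwaway local TLS server with a self-signed certificate, but against a real endpoint you would replace localhost:14433 with servername:443 (all file names here are placeholders):

```shell
# Create a throwaway key and self-signed certificate for the local test server
openssl req -x509 -newkey rsa:2048 -nodes -keyout local.key \
  -out local.pem -days 1 -subj "/CN=localhost" 2>/dev/null

# Stand in for the remote party: a local TLS server on port 14433
openssl s_server -accept 14433 -cert local.pem -key local.key -quiet &
SERVER_PID=$!
sleep 1

# The actual fetch: extract the certificate the server presents into a .cer file
openssl s_client -connect localhost:14433 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > server.cer

kill $SERVER_PID 2>/dev/null
```

The resulting server.cer file can then be imported into the keystore with the keytool commands shown below.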





On the application server, in my example a Java application server such as WebLogic, the public certificate must be loaded into the "keystore". The keystore is a file that contains all the public certificates your application server uses to connect to secure sites. To manage your keystore, use the following commands:

Notes: 
  • By default in Java, the keystore is named 'cacerts' and has the default password 'changeit'.
  • The cacerts file is located in your $JAVA_HOME/jre/lib/security directory.
  • Make a copy of your cacerts file before making any changes.

List all the public certificates:

keytool -list -v -keystore ./cacerts -storepass changeit
keytool -list -v -alias www.thawte.com -keystore ./cacerts -storepass changeit

Delete a public certificate based on an alias:

keytool -delete -alias www.thawte.com -keystore ./cacerts -storepass changeit

Add a public certificate with an alias:

keytool -import -alias www.thawte.com -keystore ./cacerts -file public_thawte_com.cer -storepass changeit

Add a public certificate with an alias and trust all the CA's:

keytool -import -v -trustcacerts -alias staatdernederlandenrootca -file staatdernederlandenrootca.crt -keystore ./cacerts -storepass changeit

Export a public certificate from the keystore:

keytool -export -alias www.thawte.com -keystore ./cacerts -file public_thawte_com.cer -storepass changeit

Certificates come in different formats: p7b, p12, pem, and cer. Each format has its own purpose. In general, a p7b file contains only the public certificate, while a p12 file contains both the public certificate and the private key. The p12 file is used for exchanging client certificates.

To convert between file formats for your keystore, use OpenSSL. It is the best tool for the job and is available on any platform. The tool is command-line based, but various GUI tools are also available.

Converting a p7b file to p12 format:
openssl pkcs7 -print_certs -in vijfhuizen.com.p7b > vijfhuizen.com.cer

Change the vijfhuizen.com.cer file: remove any chain certificates:
-----BEGIN CERTIFICATE-----
d2bmW4werweNSIdV7qXEntvJILc519AHJJDePHrT9SjavljmK0lTRfM1awv5n4355
HUsvvi3c0AEsjypd3bIcm4fXY6IF34cuRVpb++fzASVO8Bwl3LOE9PqnHr9zIRtlv
....
MIIsE2zCCA8OgAwIBAgIEATFjtjANBgkqhkiG9w0BAQUFADBZMQswCQYDVQQGwwJO
Rmh3IrH60ylbuqmeGRnJM8qYBHzVyOWAT2ruVhNKMcXD+TnUEU2QZDfmcnNKOIM
-----END CERTIFICATE-----

openssl pkcs12 -export -in vijfhuizen.com.cer -inkey vijfhuizen.com.private.key -out vijfhuizen.com.p12 -name vijfhuizen.com.name

Convert PEM format in to DER format:
openssl x509 -in vijfhuizen.com.pem -inform PEM -out vijfhuizen.com.crt -outform DER
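As a quick sanity check of the conversion above, here is a small sketch that round-trips a throwaway self-signed certificate through the PEM and DER formats (the demo.* file names are placeholders):

```shell
# Generate a throwaway key and self-signed certificate to work with
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key \
  -out demo.pem -days 1 -subj "/CN=demo.example.com" 2>/dev/null

# PEM -> DER, as shown above
openssl x509 -in demo.pem -inform PEM -out demo.crt -outform DER

# DER -> PEM, to confirm the round trip preserves the certificate
openssl x509 -in demo.crt -inform DER -out demo-roundtrip.pem -outform PEM
```

Inspecting demo-roundtrip.pem with `openssl x509 -noout -subject` should show the same subject as the original, confirming that only the encoding changed.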

After you have created your SSL certificates, keys, and keystores, you will want to test whether the SSL configuration is valid. Here is a nice way to do so:

#!/bin/bash
export ORACLE_HOME=/opt/weblogic/Middleware
export PATH=.:$PATH:$ORACLE_HOME/jdk/bin

EXEC_DIR=`dirname $0`
STOR_DIR=$ORACLE_HOME/jdk/jre/lib/security

java -cp $EXEC_DIR -Djavax.net.ssl.trustStore=$STOR_DIR/cacerts \
  -Djavax.net.ssl.trustStorePassword=changeit -Djavax.net.debug=ssl,handshake \
  -Djavax.net.ssl.keyStore=$STOR_DIR/vijfhuizen.com.p12 -Djavax.net.ssl.keyStoreType=pkcs12 \
  -Djavax.net.ssl.keyStorePassword=changeit Client https://www.thawte.com/roots

The Client class can be downloaded here.

If the test is not working, you could get an error such as:

"unable to find valid certification path to requested target"

This is due to the fact that the certificate chain in your keystore is incomplete, or the certificate is not present at all. A very cool solution is described here. This tool automatically downloads the public certificate from the website and loads it into a copy of your existing keystore (cacerts), saved as a file named jssecacerts. The Java code for this tool is here.

The only thing you have to do is replace the existing keystore with this jssecacerts file, or export the public certificate from it based on its alias and import that into your existing keystore.

Purple Sweaters and Oracle 11g

alt.oracle - Tue, 2011-05-31 09:15

If you're geeky enough like me, you get a little excited whenever Oracle puts out a new version. Most of us have wondered at one time or another what it would be like to be able to beta-test the new Oracle version before it comes out. You may read the pre-release articles about the new features, if nothing else to keep ahead of the technology curve. Like geeky me, you may download the new version on the same day it comes out – maybe to play with it a little or just to see how it's different. But as intrigued as I get when new versions come out, I'm generally a little scared too. What the hell did they do now, I wonder. Grabbing the newest version of Oracle when it comes out is a little like getting a Christmas present from that family member who always buys you something you didn't ask for. You're glad to get a gift, but you pray to God that it's not another purple sweater. And Oracle's version history is littered with plenty of purple sweaters.

Sometimes I think back longingly to the days of Oracle 8i. Despite being the first version with that silly single-letter suffix thing (Oracle 8i – the "i" is for INTERNET DATABASE!), it was streamlined, compact, and just worked well. Then 9i came out - with its 700+ new features. Instead of fitting on a single CD, the 9i install now needed three CDs, which either meant you were dealing with a lot of CD swapping or pushing three times the amount of data over the network just to do an install. And that would've been fine if the new stuff was good stuff. Unfortunately, all that extra cruft was stuff you almost certainly didn't need. Did you really need to install Oracle's application server with every one of your database installs? Did you really need an HTTP server? Oh, and that wasn't even the best part. With all that extra crap came... wait for it... SECURITY HOLES! Oracle 9i was the version where Oracle started to get creamed in the tech press about its glaring security risks. Which means if you installed Oracle 9i using the click next... click next... click next... method, you might as well leave the doors to your company unlocked.

To Oracle's credit, they listened. I remember going to several pre-release seminars before Oracle 10g came out. Oracle made a big deal about how they put it together. In a revolutionary move, Oracle actually asked DBAs what they liked and didn't like about the Oracle database. DBAs said it was too big, took too long to install and had too much junk. Oracle responded. Version 10g had plenty of new features, but a lot of them were actually useful. And in a move that must be a first in the history of software, 10g was actually smaller than 9i – going back to one CD instead of three. Security was tighter. It installed quickly. All in all, a really forward-thinking move on Oracle's part, if you could ignore that dumb "g is for grid" thing.

Well, like I said, whoever thought up the approach to 10g obviously got fired, because now we have 11g. Before I go too far, yes, I know 11g has some good new features, although a quick list of the useful ones doesn't exactly spring to mind. But, in a total reversal of the slim and trim approach of 10g, version 11g has now become an even bigger, more unwieldy behemoth than 9i. A shining example of software crafted by suits instead of engineers. With 11g, you now get to drag 2GBs worth of crap from server to server in a vain attempt to do a simple database install. In fairness, you can separate out the "database" directory after you download the entire mess, but still... that leaves about 1.5GB of purple sweaters.

Every software company deals with bloat - how do you sell the next version? I get that. And Oracle has bought half the planet and needs to integrate those acquisitions across the board. Yep – I got it. But I also know that the RDBMS is Oracle’s flagship product. The company that produced 11g is the same company that was smart enough to ask DBAs what they should put in 10g. 10g was an incredibly successful version for Oracle – why screw with that?

I mentioned last time that, as great as Automatic Storage Management (ASM) is, Oracle had managed to screw it up in 11g. Here’s why. After telling you last time that ASM was so good that it should be used in single-instance systems as well as RAC, Oracle has gone and screwed me over. In 11gR2, ASM is now bundled with the “grid infrastructure” – the set of components used to run Real Application Clusters. Does that mean that you can’t use ASM with a single-instance database? Nope, but it makes it incredibly inconvenient. If you wanted to standardize on ASM across your database environments, you’d have to install the entire grid infrastructure on every one of your servers. If you manage 5 databases, it’s not too big a deal. If you manage 500, it's a much bigger deal. So c'mon Oracle – when you make good tools, make it easy for us to use them. This is incredibly discouraging.

On an unrelated positive note, I'm pleased to note that alt.oracle has been picked up by the Oracle News Aggregator at http://orana.info, which is just about the biggest Oracle blog aggregator in the universe. So thanks to Eddie Awad and the fine folks at OraNA.
Categories: DBA Blogs

Simple IRM Demonstration

Simon Thorpe - Mon, 2011-05-30 21:31

The demo server has recently been retired after many years of faithful service. Please contact your local Oracle representative if you would like a demo, or see the demos on the Oracle IRM YouTube channel.
