The need to move on
So I pack a bag
And move on"
Can't believe Bowie has taken that final train.
David Bowie's music has been part of my life pretty much since I started listening to pop music seriously. Lodger was the first Bowie album I listened to all the way through. It's probably his most under-appreciated album. It's funny to think that back then in 1979 Bowie was dismissed as past it, a boring old fart who should be swept aside by the vital surge of post-punk bands. Because those bands were raised on Ziggy, they were taught to dance by the Thin White Duke and they learnt that moodiness from listening to Low in darkened bedrooms too many times.
Even if you don't listen to Bowie, probably your favourite bands did. If they style their hair or wear make up, they listened to Bowie. If they play synths they listened to Bowie. If they make dance music for awkward white boys at indie discos they listened to Bowie. If they lurk in shadows smoking cigarettes in their videos they listened to Bowie. That's a large part of his legacy.
The other thing about Bowie is that his back catalogue has something for pretty much everybody. People who loved Ziggy Stardust might loathe the plastic soul phase. Hardly anybody gets Hunky Dory, but for some fans it's their favourite album. My favourite is the first side of "Heroes" and the second side of Low, but that whole stretch from Young Americans to Lodger is a seam of sustained musical invention unparalleled by any other pop act. (Judicious picking of collaborators is an art in itself.)
Of course, there was a long fallow period. Tin Machine weren't as bad as we thought at the time, but the drum'n'bass was too 'Dad dancing at a wedding reception' for comfort. So it was a relief when he finally started producing decent albums again. Heathen has some lovely moments. The Next Day was something of a return to form (although a bit too long to be a classic). Then there's Blackstar.
It's almost as though Bowie hung on just long enough that Blackstar would be reviewed as his latest album, rather than his last one. The four and five star reviews were earned through merit rather than the mawkishness which would have accompanied a posthumous release. And it really is pretty good. When I first heard the title track it sounded like Bowie was taking a cue from Scott Walker's latter period: edgy, experimental and deliberately designed not to be a fan-pleaser. But, unlike Walker, Bowie can't do wilfully unlistenable. Even in the midst of all that drone and skronk there are tunes. He can't help himself, his pop sensibility is too strong. Which is why I've already listened to Blackstar more times than I've listened to Bish Bosch.
So, farewell David Bowie. We're all going to miss you terribly. "May God's love be with you."
Here it is:
RMOUG - Denver, CO
One of the larger conferences, the year usually starts out in Denver for me, where crowds are always large and appreciative. RMOUG has some of the most dedicated volunteers and puts on a great conference year after year.
GAOUG - Atlanta, GA
This will be my first time at GAOUG, and I'm excited to help them get their annual conference started. Lots of familiar faces will be in attendance. At only $150, it's worth checking out if you're within driving distance of Atlanta.
OCOJ - Williamsburg, VA (submitted)
This will (hopefully) also be my first Oracle Conference on the James. Held in historic Williamsburg, OCOJ is also a steal at just $99.
UTOUG - Salt Lake City, UT
I'll head back out west to Utah for UTOUG. Always good to catch up with the local Oracle community in Utah each year. Plus, I make my annual SLC brewery tour while there.
GLOC - Cleveland, OH (submitted)
Steadily growing in popularity, the folks at GLOC put on an excellent conference. Waiting to hear back on whether my sessions got accepted.
KSCOPE - Chicago, IL
Like everyone, I'm looking forward to one of the best annual technical conferences that I've regularly attended. In addition to the traditional APEX content, there are a few surprises planned this year!
ECO - Raleigh/Durham, NC (planning on submitting)
ECO - formerly VOUG - is also growing in numbers each year. There's a lot of tech in the RDU area, and many of the talented locals present here. Bonus: a Jeff Smith-curated brewery/bar tour the night before.
OOW - San Francisco, CA (planning on submitting)
As always, the conference year typically ends with the biggest one - Oracle Open World. While there's not as much APEX content as there once was, it's always been more focused on the marketing side of technology, which is good to hear every now and then.
User group conferences are one of the best types of training available, especially since they typically cost just a couple hundred dollars. I encourage you to check out one near you. Smaller groups are also great places to get an opportunity to present. In addition to annual conferences, many smaller groups meet monthly or quarterly and are always on the lookout for new content.
Happy New Year all!!!
To start this year's blogging I thought I'd highlight a nice and easy quick win on getting started with Oracle Java Cloud Service. This link https://www.youtube.com/watch?v=5DHsE2x5mks takes you to a YouTube video which I found very clear and easy to follow.
Last year we got a kitten. Little Cleo was full of the joys of…well…being a cat. Then, one day, she just disappeared.
Several months later, having given up hope of ever seeing her again, we adopted Nutmeg.
Then, just before Christmas, Cleo suddenly re-appeared.
It’s a complete mystery as to where she had been for the last year and she has not condescended to comment on the matter.
The end result is that we are now a two cat family.
This brings with it certain complications.
When they aren’t studiously ignoring each other, the cats sit there giving each other hard stares for hours on end.
I think there may be some tension over exactly just who owns that fluffy ball.
To ensure that our sleep is not disturbed by these two fighting like cats in a sack, it’s necessary to ensure that they are in separate rooms before we retire for the evening.
As a result we’ve become rather expert at the art of Cat Herding, which largely consists of bribery with cat-nip, among other things.
Whilst acquiring a reputation as a “dealer” among the feline population of Milton Keynes, I have had cause to reflect on the similarity of our new hobby with the trials and tribulations of persuading DBMS_METADATA.GET_DDL that you do actually have permissions to see the source code you’ve asked for.
This is regularly a fairly tortuous process. In what follows I will be examining just why SELECT_CATALOG_ROLE is DBMS_METADATA cat-nip…and why SELECT ANY DICTIONARY isn’t.
I’ll also look at how you can stop chasing your tail and get this function to behave itself when invoked from within a stored program unit…
According to the Oracle Documentation, the SELECT ANY DICTIONARY privilege allows you to :
“Query any data dictionary object in the SYS schema. This privilege lets you selectively override the default FALSE setting of the O7_DICTIONARY_ACCESSIBILITY initialization parameter.”
Some explanation is required here. Prior to Oracle 8, a user with the SELECT ANY TABLE privilege had access to any table in the database. From 8 onwards, this privilege was restricted to exclude the Data Dictionary. This is where SELECT ANY DICTIONARY came in.
Before we go any further, let’s just make sure that the O7_DICTIONARY_ACCESSIBILITY parameter is not set to TRUE…
select value
from v$parameter
where name = 'O7_DICTIONARY_ACCESSIBILITY'
/

VALUE
--------------------------------------------------------------------------------
FALSE
Now, to see the effect of this privilege on DBMS_METADATA.GET_DDL, let’s create a user :
grant create session, select any dictionary to cleo
    identified by password_you_can_type_with_paws
/
If we now connect to the database as cleo, we can see that she has permissions to query the DBA_SOURCE view, among other things, and therefore to retrieve the source for a procedure in the HR schema :
set heading off
set pages 500
set lines 130

select text
from dba_source
where owner = 'HR'
and name = 'ADD_JOB_HISTORY'
and type = 'PROCEDURE'
order by line
/
Sure enough, we get the expected output :
PROCEDURE add_job_history
  ( p_emp_id          job_history.employee_id%type
  , p_start_date      job_history.start_date%type
  , p_end_date        job_history.end_date%type
  , p_job_id          job_history.job_id%type
  , p_department_id   job_history.department_id%type
  )
IS
BEGIN
  INSERT INTO job_history (employee_id, start_date, end_date,
                           job_id, department_id)
    VALUES(p_emp_id, p_start_date, p_end_date, p_job_id, p_department_id);
END add_job_history;

13 rows selected.

SQL>
By default, EXECUTE on the DBMS_METADATA package is granted to PUBLIC. So, invoking the GET_DDL function for the same program unit should return the DDL statement required to re-create it. We already know we have access to the source so this should be no problem, right ?
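If you want to confirm that PUBLIC grant for yourself, a query along these lines will show it. This is just a sanity check, assuming you have access to the DBA_TAB_PRIVS view (less privileged users can substitute ALL_TAB_PRIVS) :

```sql
-- Confirm that EXECUTE on DBMS_METADATA is granted to PUBLIC.
-- Requires access to DBA_TAB_PRIVS (or substitute ALL_TAB_PRIVS).
select grantee, privilege
from dba_tab_privs
where owner = 'SYS'
and table_name = 'DBMS_METADATA'
/
```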
set long 5000

select dbms_metadata.get_ddl(
    schema => 'HR',
    name => 'ADD_JOB_HISTORY',
    object_type => 'PROCEDURE')
from dual
/

ERROR:
ORA-31603: object "ADD_JOB_HISTORY" of type PROCEDURE not found in schema "HR"
ORA-06512: at "SYS.DBMS_METADATA", line 4018
ORA-06512: at "SYS.DBMS_METADATA", line 5843
ORA-06512: at line 1

no rows selected

SQL>
Has Oracle forgotten about this procedure ? Maybe it’s rolled under the sofa ?
Either way, further investigation would seem to be in order…
Like a cat, DBMS_METADATA.GET_DDL can appear to be, to put it delicately, rather independently minded. It certainly doesn’t always do what it’s told.
To try to determine what’s happening, we could do some tracing and look through the recursive statements to see precisely which one is causing the error. However, there is a quicker way.
Let’s start by looking at the comments in the DBMS_METADATA package header :
-- SECURITY
-- This package is owned by SYS with execute access granted to PUBLIC.
-- It runs with invokers rights, i.e., with the security profile of
-- the caller. It calls DBMS_METADATA_INT to perform privileged
-- functions.
-- The object views defined in catmeta.sql implement the package's security
-- policy via the WHERE clause on the public views which include syntax to
-- control user access to metadata: if the current user is SYS or has
-- SELECT_CATALOG_ROLE, then all objects are visible; otherwise, only
-- objects in the schema of the current user are visible.
This points us in the direction of the file…
Sure enough, when we get to line 10209, things become a bit clearer…
-- base view for procedures, functions, packages and package bodies
create or replace force view ku$_base_proc_view of ku$_proc_t
  with object identifier (obj_num) as
  select '1','1',
         oo.obj#, oo.type#, value(o),
         sys.dbms_metadata_util.get_source_lines(oo.name,oo.obj#,oo.type#)
  from   sys.ku$_edition_schemaobj_view o, sys.ku$_edition_obj_view oo
  where  (oo.type# = 7 or oo.type# = 8 or oo.type# = 9 or oo.type# = 11)
         and oo.obj# = o.obj_num
         and oo.linkname is NULL
         AND (SYS_CONTEXT('USERENV','CURRENT_USERID') IN (o.owner_num, 0) OR
              EXISTS ( SELECT * FROM sys.session_roles
                       WHERE role='SELECT_CATALOG_ROLE' ))
/
Unless you are either the owner of the stored program unit you’re trying to retrieve, or connected as USER_ID 0 ( i.e. SYS), the only way that you’re going to get anything back from a query on this view is if you have been granted a role called SELECT_CATALOG_ROLE.
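As a quick sanity check (not part of the original demo), you can list the roles that are active in your current session. SELECT_CATALOG_ROLE must appear here for the KU$ base views to return rows for objects you don’t own :

```sql
-- List the roles currently enabled in this session.
-- SELECT_CATALOG_ROLE must be among them for DBMS_METADATA.GET_DDL
-- to see other schemas' objects.
select role
from session_roles
order by role
/
```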
To verify this, let’s create another user…
grant create session to nutmeg identified by must_have_catnip
/

grant select_catalog_role to nutmeg
/
Once again, we have access to the DBA_SOURCE view…
set heading off
set pages 500
set lines 130

select text
from dba_source
where owner = 'HR'
and name = 'ADD_JOB_HISTORY'
and type = 'PROCEDURE'
order by line
/

PROCEDURE add_job_history
  ( p_emp_id          job_history.employee_id%type
  , p_start_date      job_history.start_date%type
  , p_end_date        job_history.end_date%type
  , p_job_id          job_history.job_id%type
  , p_department_id   job_history.department_id%type
  )
IS
BEGIN
  INSERT INTO job_history (employee_id, start_date, end_date,
                           job_id, department_id)
    VALUES(p_emp_id, p_start_date, p_end_date, p_job_id, p_department_id);
END add_job_history;

13 rows selected.

SQL>
…however, we can now also use DBMS_METADATA.GET_DDL …
set long 5000

select dbms_metadata.get_ddl(
    schema => 'HR',
    name => 'ADD_JOB_HISTORY',
    object_type => 'PROCEDURE')
from dual
/

CREATE OR REPLACE PROCEDURE "HR"."ADD_JOB_HISTORY"
  ( p_emp_id          job_history.employee_id%type
  , p_start_date      job_history.start_date%type
  , p_end_date        job_history.end_date%type
  , p_job_id          job_history.job_id%type
  , p_department_id   job_history.department_id%type
  )
IS
BEGIN
  INSERT INTO job_history (employee_id, start_date, end_date,
                           job_id, department_id)
    VALUES(p_emp_id, p_start_date, p_end_date, p_job_id, p_department_id);
END add_job_history;

SQL>
It seems that DBMS_METADATA has retrieved its fluffy ball from under the sofa.

DBMS_METADATA.GET_DDL in a Stored Program Unit
The fact that SELECT_CATALOG_ROLE is a role rather than a system privilege does tend to make life interesting if you put calls to DBMS_METADATA.GET_DDL into a stored program unit.
To demonstrate, let’s create a function that does just that (in the nutmeg schema – i.e. a schema that does not have the role granted to it) :
alter session set current_schema = nutmeg
/

create or replace function get_ddl(
    i_owner in varchar2,
    i_name in varchar2,
    i_type in varchar2)
    return clob
is
begin
    dbms_output.put_line('Listing active session roles...');
    for r_role in (select role from sys.session_roles) loop
        dbms_output.put_line('Role : '||r_role.role);
    end loop;
    dbms_output.put_line('Roles listed');
    return dbms_metadata.get_ddl(
        schema => i_owner,
        name => i_name,
        object_type => i_type);
end;
/
This should work fine for a user with the role granted, shouldn’t it ?
Let’s test it (once again, connected as cleo)…
set serveroutput on size unlimited

select get_ddl('HR', 'ADD_JOB_HISTORY', 'PROCEDURE')
from dual
/

ERROR:
ORA-31603: object "ADD_JOB_HISTORY" of type PROCEDURE not found in schema "HR"
ORA-06512: at "SYS.DBMS_METADATA", line 4018
ORA-06512: at "SYS.DBMS_METADATA", line 5843
ORA-06512: at "NUTMEG.GET_DDL", line 10

no rows selected

Listing active session roles...
Roles listed

SQL>
Because we’re calling the DBMS_METADATA.GET_DDL function from inside a stored program unit, the role is disabled.
In order for this to work we need to make the stored program unit invoker’s rights…
alter session set current_schema = nutmeg
/

create or replace function get_ddl(
    i_owner in varchar2,
    i_name in varchar2,
    i_type in varchar2)
    return clob
    authid current_user
is
begin
    dbms_output.put_line('Listing active session roles...');
    for r_role in (select role from sys.session_roles) loop
        dbms_output.put_line('Role : '||r_role.role);
    end loop;
    dbms_output.put_line('Roles listed');
    return dbms_metadata.get_ddl(
        schema => i_owner,
        name => i_name,
        object_type => i_type);
end;
/
Now, when we invoke this function (as cleo once again), the role is still applicable…
set serveroutput on size unlimited
set long 5000
set heading off

select get_ddl('HR', 'ADD_JOB_HISTORY', 'PROCEDURE')
from dual
/

CREATE OR REPLACE PROCEDURE "HR"."ADD_JOB_HISTORY"
  ( p_emp_id          job_history.employee_id%type
  , p_start_date      job_history.start_date%type
  , p_end_date        job_history.end_date%type
  , p_job_id          job_history.job_id%type
  , p_department_id   job_history.department_id%type
  )
IS
BEGIN
  INSERT INTO job_history (employee_id, start_date, end_date,
                           job_id, department_id)
    VALUES(p_emp_id, p_start_date, p_end_date, p_job_id, p_department_id);
END add_job_history;

Listing active session roles...
Role : SELECT_CATALOG_ROLE
Role : HS_ADMIN_SELECT_ROLE
Roles listed

SQL>
It would seem, then, that only a user granted SELECT_CATALOG_ROLE can usefully call DBMS_METADATA.GET_DDL from a stored program unit to retrieve DDL for objects they do not own.
Something you might want to consider at this point, is that SELECT_CATALOG_ROLE is itself granted the HS_ADMIN_SELECT_ROLE role. Between them, these roles have SELECT access on quite a large number of SYS objects :
select count(*)
from dba_tab_privs
where grantee in ('SELECT_CATALOG_ROLE', 'HS_ADMIN_SELECT_ROLE')
/

  COUNT(*)
----------
      2207

SQL>
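The nested grant itself can be seen in DBA_ROLE_PRIVS, where a role granted to another role appears with the parent role as the grantee. Again, this is a hedged example assuming you can query that view :

```sql
-- Show roles granted to SELECT_CATALOG_ROLE itself.
-- HS_ADMIN_SELECT_ROLE should appear among them.
select granted_role
from dba_role_privs
where grantee = 'SELECT_CATALOG_ROLE'
/
```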
At this point, you may well ask if there is any way for users to utilise our function without having this role granted.
After all, a common approach to application security is to grant users execute access on a stored program unit without giving them any visibility of the underlying tables and views.
Well, there is…

Doing without the SELECT_CATALOG_ROLE
Before we go any further I think I should point out that there are several issues with creating objects in the SYS schema.
The objects in the schema are effectively part of the Oracle software. The schema can be seen as being analogous to root on a *nix system.
There are various admonitions against performing DDL in the SYS schema.
As with most “golden rules” however, there is at least one exception – in this case, the Password Verify Function springs to mind.
I suppose the best reason for avoiding this sort of thing is that it would only take one moment of inattention to cause potentially irreparable damage to your Oracle installation. Even with a backup, re-installing Oracle is no trivial matter. Therefore, I strongly suggest that you consider carefully whether the benefits of the approach I’m about to take here outweigh the issues of granting SELECT_CATALOG_ROLE.
So then, as SYS…
create function get_ddl(
    i_owner in varchar2,
    i_name in varchar2,
    i_type in varchar2)
    return clob
is
begin
    dbms_output.put_line('Listing active session roles...');
    for r_role in (select role from sys.session_roles) loop
        dbms_output.put_line('Role : '||r_role.role);
    end loop;
    dbms_output.put_line('Roles listed');
    dbms_output.put_line('Userid is : '||sys_context('userenv', 'current_userid'));
    return dbms_metadata.get_ddl(
        schema => i_owner,
        name => i_name,
        object_type => i_type);
end;
/

grant execute on get_ddl to cleo
/
Note that we’re using CREATE rather than CREATE OR REPLACE to ensure that we don’t accidentally overwrite anything.
Now, when we call this function as cleo…
set serveroutput on
set heading off
set lines 130
set long 5000

select sys.get_ddl('HR', 'ADD_JOB_HISTORY', 'PROCEDURE')
from dual
/

CREATE OR REPLACE PROCEDURE "HR"."ADD_JOB_HISTORY"
  ( p_emp_id          job_history.employee_id%type
  , p_start_date      job_history.start_date%type
  , p_end_date        job_history.end_date%type
  , p_job_id          job_history.job_id%type
  , p_department_id   job_history.department_id%type
  )
IS
BEGIN
  INSERT INTO job_history (employee_id, start_date, end_date,
                           job_id, department_id)
    VALUES(p_emp_id, p_start_date, p_end_date, p_job_id, p_department_id);
END add_job_history;

Listing active session roles...
Roles listed
Userid is : 0

SQL>
The fact that we don’t have the role becomes irrelevant because the function is running with definer’s rights (i.e. as SYS, which has a CURRENT_USERID of 0).

Conclusions
The security model implemented for DBMS_METADATA is quite unusual for Oracle supplied PL/SQL packages.
As we have seen, this package, and especially its GET_DDL function, often behaves in unexpected (and possibly entertaining) ways.
Here’s where I’ll hang out in the following months:
26-28 January 2016: BIWA Summit 2016 in Redwood Shores, CA
- Keynote: The Place of SQL in the Hybrid World
- Presentation: Real-Time SQL Monitoring in Oracle Database 12c
10-11 February 2016: RMOUG Training Days in Denver, CO
- Presentation: Real-Time SQL Monitoring in Oracle Database 12c
- Presentation: Connecting Hadoop and Oracle
25 February 2016: Yorkshire Database (YoDB) in Leeds, UK
- Presentation: Connecting Hadoop and Oracle
6-10 March 2016: Hotsos Symposium, Dallas, TX
- Presentation: Real-Time SQL Monitoring in Oracle Database 12c
- Hacking session: Modern Linux Tools for Oracle Performance Diagnosis
10-14 April 2016: IOUG Collaborate, Las Vegas, NV
- Beer session: Not speaking myself but planning to hang out on the first couple of conference days, drink beer and attend Gluent colleague Maxym Kharchenko’s presentations
24-26 April 2016: Enkitec E4, Barcelona, Spain
- Half-day seminar: The New Hybrid World: Oracle + Hadoop
- The abstract submission is still open until 24 January 2016
18-19 May 2016: Great Lakes Oracle Conference (GLOC) in Cleveland, OH
- I plan to submit abstracts (and hope to get some accepted :)
- The abstract submission is still open until 1st February 2016
2-3 June 2016: AMIS 25 – Beyond the Horizon near Leiden, Netherlands
- This AMIS 25th anniversary event will take place in a pretty cool location – an old military airport hangar (and abstract submission is still open :)
- Update: I unfortunately had to cancel my speaking plans at the AMIS event
5-7 June 2016: Enkitec E4, Dallas, TX
- Half-day seminar: The New Hybrid World: Oracle + Hadoop
- The abstract submission is still open until 24 January 2016
As you can see, I have changed my “I don’t want to travel anymore” policy ;-)
NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)
Here’s the story.
When you type, you’re inputting data into a machine. I know you like feeling like you’re in charge, but really you’re not in charge of all the rules you have to follow while you’re inputting your data. Other people—like the designers of the machine you’re using—have made certain rules that you have to live by. For example, if you’re using a QWERTY keyboard, then the ‘A’ key is in a certain location on the keyboard, and whether it makes any sense to you or not, the ‘B’ key is way over there, not next to the ‘A’ key like you might have expected when you first started learning how to type. If you want a ‘B’ to appear in the input, then you have to reach over there and push the ‘B’ key on the keyboard.
In addition to the rules imposed upon you by the designers of the machine you’re using, you follow other rules, too. If you’re writing a computer program, then you have to follow the syntax rules of the language you’re using. There are alphabet and spelling and grammar rules for writing in German, and different ones for English. There are typographical rules for writing for The New Yorker, and different ones for the American Mathematical Society.
A lot of people who are over about 40 years old today learned to type on an actual typewriter. A typewriter is a machine that used rods and springs and other mechanical elements to press metal dies with backwards letter shapes engraved onto them through an inked ribbon onto a piece of paper. Some of the rules that governed the data input experience on typewriters included:
- You had to learn where the keys were on the keyboard.
- You had to learn how to physically return the carriage at the end of a line.
- You had to learn your project’s rules of spelling.
- You had to learn your project’s rules of grammar.
- You had to learn your project’s rules of typography.
On your typewriter, you might not have realized it, but you did adhere to some typography rules. They might have included:
- Use two carriage returns after a paragraph.
- Type two spaces after a sentence-ending period.
- Type two spaces after a colon.
- Use two consecutive hyphens to represent an em dash.
- Make paragraphs no more than 80 characters wide.
- Never use a carriage return between “Mr.” and the proper name that follows, or between a number and its unit.
- Double-space all paragraph text.
Most people who didn’t write for different publishers got by just fine on the one set of typography rules they learned in high school. To them, it looked like there were only a few simple rules, and only one set of them. Most people had never even heard of a lot of the rules they should have been following, like rules about widows and orphans.
In the early 1980s, I began using computers for most of my work. I can remember learning how to use word processing programs like WordStar and Sprint. The rules were a lot more complicated with word processors. Now there were rules about “control keys” like ^X and ^Y, and there were no-break spaces and styles and leading and kerning and ligatures and all sorts of new things I had never had to think about before. A word processor was much more powerful than a typewriter. If you did it right, typesetting could make your work look like a real book. But word processors revealed that typesetting was way more complicated than just typing.
Doing your own typesetting can be kind of like doing your own oil changes. Most people prefer to just put gas in the tank and not think too much about the esoteric features of their car (like their tires or their turn signal indicators). Most people who went from typewriters to word processors just wanted to type like they always had, using the good old two or three rules of typography that had long ago been inserted into their brains by their high school teachers and then cemented by decades of repetition.
Donald Knuth published The TeXBook in 1984. I think I bought it about ten minutes after it was published. Oh, I loved that book. Using TeX was my first real exposure to the world of actual professional-grade typography, and I have enjoyed thinking about typography ever since. I practice typography every day that I use Keynote or Pages or InDesign to do my work.
Many people don’t realize it, but when you type input into programs like Microsoft Word, you should follow typography rules including these:
- Never enter a blank line (edit your paragraph’s style to manipulate its spacing).
- Use a single space after a sentence-ending period (the typesetter software you’re using will make the amount of space look right as it composes the paragraph).
- Use a non-breaking space after a non-sentence-ending period (so the typesetter software won’t break “Mr. Harkey” across lines).
- Use a non-breaking space between a number and its unit (so the typesetter software won’t break “8 oz” across lines).
- Use an en dash—not a hyphen—to specify ranges of numbers (like “3–8”).
- Use an em dash—not a pair of hyphens—when you need an em dash (like in this sentence).
- Use proper quotation marks, like “this” and ‘this’ (or even « this »).
So, it’s always funny to me when people get into heated arguments on Facebook about using one space or two after a period. It’s the tiniest little tip of the typography iceberg, but it opens the conversation about typography, for which I’m glad. In these discussions, two questions come up repeatedly: “When did the rule change? Why?”
Well, the rule never did change. The next time I type on an actual typewriter, I will use two spaces after each sentence-ending period. I will also use two spaces when I create a Courier font court document or something that I want to look like it was created in the 1930s. But when I work on my book in Adobe InDesign, I’ll use one space. When I use my iPhone, I’ll tap in two spaces at the end of a sentence, because it automatically replaces them with a period and a single space. I adapt to the rules that govern the situation I’m in.
It’s not that the rules have changed. It’s that the set of rules was always a lot bigger than most people ever knew.
I will be attending the 2016 Oracle Utilities America’s Product Development Customer Advisory Board, which is in Phoenix, Arizona from 29 February to 3 March. This year we are running a dedicated technical stream highlighting specific technical features, as well as a Technical Q&A Panel to answer technical questions and discuss our directions.
The sessions will be focused on technical aspects of the solution and be a combination of presentations on topics, live demonstrations and question/answer sessions with product experts. The planned sessions this year are:
Security Features and Functions
In this session the new and improved security features of the Oracle Utilities Application Framework will be discussed including integrations to various security technologies to understand and take advantage of the advanced security solutions now available.
Web Services Integration
In this session the new Inbound Web Services, Message Driven beans and REST functionality of the Oracle Utilities Application Framework are highlighted to understand the integration capabilities for implementation. This session will include integrations to SOA products.
Information Lifecycle Management
In this session the new Information Lifecycle Management solution will be outlined and discussed to highlight the capabilities, implementation strategies and techniques for reducing storage costs whilst retaining data for business purposes.
Managing your Utilities environment using Oracle Enterprise Manager

In this session the techniques and capabilities of reducing your IT management costs using Oracle Enterprise Manager are outlined, including using the base capabilities of the console and using the various packs available to augment the solution, including the Oracle Application Management Pack for Oracle Utilities.
Technical Cloud Solutions
In this session the technical architecture of Oracle Cloud offerings for Software As A Service (SaaS) and Platform As A Service (PaaS) will be discussed. This session highlights all the technology used in the solution as well as the architecture of those solutions.
Oracle Utilities Framework Roadmap
In this session the roadmap of the Oracle Utilities Application Framework will be outlined.
Oracle Utilities Testing Solutions
In this session the new Oracle Application Testing Suite based testing accelerators for Oracle Utilities products will be outlined and demonstrated for quick adoption of automated testing. The solution includes Functional/Regression Testing, Performance/Load Testing and Testing Management.
Mobile Framework Overview
In this session the planned Mobile Server integration architecture and technology will be highlighted to allow connected and disconnected mobile clients for Oracle Utilities products.
In this session the short, medium and long term technology strategy will be discussed to outline the technology directions and integrations for the Oracle Utilities Application Framework in future releases. There will be a Q&A session in this session as well to discuss technology options.
Technical Implementation Q&A Panel
This session will be a generic panel session where product managers and product developers are available for customer and partner questions and discussions on technical aspects of implementations.
I will be available for all these sessions with other product managers and will also be attending the Customer User Group meetings after the CAB has completed. These sessions are designed for technical personnel rather than business personnel.
I look forward to seeing you at the CAB. For those in APAC, I am also attending the APAC CAB in Melbourne (my home town) in mid-February 2016 with a subset of these sessions.
Today, I wrote on our Oracle APEX Columbus Meetup board a short summary of the information we reviewed last night. For those people who've been doing APEX for years, none of this is going to be new. But the information I posted may be especially helpful to those who are very new to APEX, or even curious about APEX. I decided to simply share it again here, in the hopes that someone else just as new as Shannon will find this useful.
We discussed a few things last night and I wished to summarize them here:
1) There are ways to remain connected to the APEX community via Social media:
Twitter: The hashtag for Oracle Application Express is #orclapex. Most everyone who attended last night is on Twitter. You can follow many of us. I’m at @joelkallman. The APEX news is at @oracleapexnews. If you don’t know anyone on Twitter, just do a Twitter search for #orclapex.
I’ll be honest - almost everyone in the APEX community is heavily engaged on Twitter, a lot less on LinkedIn, and almost never on Facebook.
2) You should get registered on https://apex.world
It’s the APEX Community site, written by others in the APEX community (outside of Oracle). There are jobs, plug-ins, open source, twitter feeds, news, and more. You should also get registered on Slack, because apex.world is also integrated with Slack. Follow the instructions on apex.world to get a Slack invitation. It’s worth it.
3) I spoke of some upcoming conferences
a) There is an upcoming conference in May in Cleveland, the Great Lakes Oracle Conference. Not only will Jason Straub and I be there, doing a couple sessions (about what’s coming in APEX 5.1), but we’re also doing a pre-conference workshop. There will be other non-Oracle people there presenting on APEX. You should think about presenting at this conference, and you can submit your abstracts until February. As I tried to convey to attendees last night, don’t think that you have to submit the most exotic, obtuse topic possible. How you’re using APEX, the challenges you’ve encountered and how you worked around them, may be a very useful topic. The conference committee wants to expand their APEX offerings, and I think those of us in Ohio should help them. https://www.neooug.org/gloc/
b) In June, in Chicago, is the Oracle Development Tools User Group (ODTUG) annual Kscope conference. This is the place to be on the planet if you do any APEX whatsoever. Just in the APEX track alone, there will be 46 sessions over 5 days. On the Sunday before the conference starts, there will be the Sunday Symposium, which will be exclusively from the Oracle APEX product development team. From a global perspective, this is the place to be for APEX. It’s highly technical, and attendees and speakers from around the world assemble here. http://kscope16.com
4) How to get started, especially for someone who is new. I offered a couple suggestions:
a) Go to https://apex.oracle.com, and scroll down to the "Learn More" section, where there are links to documentation, tutorials, videos, hands-on-labs, etc.
b) An Oracle employee mentioned that he took the APEX training class on Udemy, and for 7 hours of training, he thought it was pretty good. I can't vouch for the training, and this isn't an official recommendation, but he thought it was worth his time and money. He also said that while it's priced at $25, they often run specials for as low as $10. https://www.udemy.com/create-web-apps-with-apex-5/
5) Lastly, I showed Oracle’s community site for APEX, https://apex.oracle.com/community
I showed the numerous customer quotes we’ve received, and I put another plea out to attendees that, if you’re using APEX, please consider going through your management chain to get approvals for a quote. At least ask. There is no huge legal process involved, approvals can all be done via email. The hard part is taking time out of your day job and pursuing this at your employer (or customer). It will be a huge benefit to the entire APEX community.
P.S. I never showed it last night, but ODTUG also has a nice community site for APEX, at http://odtug.com/apex
The December 2015 My Oracle Support Release added Customer User Administrator (CUA) functionality to the Oracle Cloud Support portal. The change makes it much easier for customers using the Cloud Portal (including many Oracle MICROS and Oracle Hospitality customers) to manage user access and Support Identifier (SI) tasks.
If you are a CUA, you will be notified of current administrative tasks in the notification area in the top area of the Cloud Portal page. You can also click your name, then the Administration sub-link, and you'll be taken to the CUA information view in the Cloud Portal.
Review the video found in "Cloud Support Portal - How to Use the CUA Functionality" (Document 2072499.1).
You can also review existing trainings on how to perform common CUA tasks by visiting "Customer User Administrator (CUA) Roles and Responsibilities" (Document 1544004.2).
The Cloud Portal allows you to verify user approval actions and review Support Identifier notices. However, Cloud Portal does not fully support managing Support Identifier assignments and renewals. To perform these actions you will need to continue using the standard My Oracle Support portal for the time being. Review the video training linked above for more details.
In the last week or so, we placed an updated version of the Next Generation Outline Extractor on our website. This version provides support for some updated Essbase versions, including 22.214.171.124.002, 126.96.36.199.003, and 188.8.131.52.005. More importantly, it addresses a bug where alias names were improperly associated with parent members when using the MaxL extraction source. This bug was reported to us by a number of users and we are glad we were able to address it. Here is a list of the issues that were addressed:
2015.11.23 - Issue 1401 - Resolved an issue where only one alias table is exported when using MaxL as the extract source.
2015.11.23 - Issue 1402 - Resolved an issue where extracts using MaxL input and having members specified with Unicode may print incorrect characters in the output.
2015.11.23 - Issue 1403 - Resolved an issue where aliases and UDAs may have been improperly placed on parent members.
At my current assignment I needed to write an UPDATE statement to copy data from one table to another. Quite a simple task, I would say, but an error, or actually a constraint violation, sent me the wrong way in finding my solution.
Suppose I have two tables:
CREATE TABLE a
( mykey     NUMBER
, thisvalue VARCHAR2(20) NOT NULL
)
/

CREATE TABLE b
( mykey     NUMBER
, thatvalue VARCHAR2(20) NOT NULL
)
/
and some data in them:
INSERT INTO a (mykey, thisvalue) VALUES (1, 'Larry');
INSERT INTO a (mykey, thisvalue) VALUES (2, 'Bryn');
INSERT INTO a (mykey, thisvalue) VALUES (3, 'Steven');
INSERT INTO a (mykey, thisvalue) VALUES (4, 'Patrick');
INSERT INTO b (mykey, thatvalue) VALUES (1, 'Larry Ellison');
INSERT INTO b (mykey, thatvalue) VALUES (2, 'Bryn Llewellyn');
INSERT INTO b (mykey, thatvalue) VALUES (3, 'Steven Feuerstein');
COMMIT
/
Now I want to update the values in table a with the values of table b. My first idea was to write a statement like this:
UPDATE a
SET a.thisvalue = (SELECT b.thatvalue
                   FROM b
                   WHERE b.mykey = a.mykey)
/
but this statement led to the following error:
ORA-01407: cannot update ("DEMO"."A"."THISVALUE") to NULL
No problem, I thought, if the new value is somehow NULL, then just use the old value:
UPDATE a
SET a.thisvalue = (SELECT NVL(b.thatvalue, a.thisvalue)
                   FROM b
                   WHERE b.mykey = a.mykey)
/
but this still resulted in the same error:
ORA-01407: cannot update ("DEMO"."A"."THISVALUE") to NULL
Then it dawned on me. For mykey = 4 there is no matching row in table b, so the subquery returns no row at all, which evaluates to a NULL value. That is also why the NVL did not help: it never fires, because there is no row for it to operate on.
The error given is absolutely correct, but it sent me the wrong way in finding a solution. Had I gotten a NO_DATA_FOUND error, I would have known right away what the problem was.
The solution was pretty easy, just update the rows that have a matching row in table B:
UPDATE a
SET a.thisvalue = (SELECT b.thatvalue
                   FROM b
                   WHERE b.mykey = a.mykey)
WHERE EXISTS (SELECT b.thatvalue
              FROM b
              WHERE b.mykey = a.mykey)
/
Another solution might be using the MERGE statement:
MERGE INTO a
USING (SELECT b.mykey, b.thatvalue FROM b) b
ON (a.mykey = b.mykey)
WHEN MATCHED THEN
  UPDATE SET a.thisvalue = b.thatvalue
/
If the subselect returns more than one row you get the equivalent of the TOO_MANY_ROWS exception (ORA-01427), but if the subselect returns no rows you don’t get a NO_DATA_FOUND (or equivalent) exception; the result is simply NULL.
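To make this failure mode concrete, here is a minimal sketch (in Python rather than SQL, purely as an illustration; the dictionaries stand in for the tables above) of why the NVL variant still fails and what the WHERE EXISTS fix changes:

```python
# A scalar subquery over zero rows yields NULL (None here), so an NVL
# *inside* the subquery never fires for unmatched keys.
a = {1: "Larry", 2: "Bryn", 3: "Steven", 4: "Patrick"}
b = {1: "Larry Ellison", 2: "Bryn Llewellyn", 3: "Steven Feuerstein"}

def scalar_subquery(key):
    """Mimics (SELECT b.thatvalue FROM b WHERE b.mykey = key):
    returns None when no row matches."""
    rows = [v for k, v in b.items() if k == key]
    return rows[0] if rows else None

# Naive UPDATE: mykey = 4 gets None, the analogue of ORA-01407.
naive = {k: scalar_subquery(k) for k in a}

# UPDATE ... WHERE EXISTS: only touch rows with a matching row in b.
fixed = {k: (scalar_subquery(k) if k in b else v) for k, v in a.items()}
```

The key point is that the guard has to live outside the subquery: once the subquery itself produces no row, nothing inside it can rescue the value.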
Yesterday I enhanced the script to also do an automatic install of Java and to adapt the environment settings so that the JDeveloper-specific settings are placed in C:\Data\JDeveloper\SOA.
To run the script you should have the following files placed in one folder (for instance c:\temp\soabpm):
The JDK installer (jdk-8u65-windows-x64.exe) can be downloaded here. If you use another version, adapt the script below: change JAVA_HOME to the place where it should be installed, and change the reference to the installer (jdk-8u65-windows-x64.exe) to your version. The script below ignores any other installation of Java (either another update of JDK 8 or earlier versions).
Using the 'setx' command (see docs) the JAVA_HOME is explicitly set, and later on also the variables JDEV_USER_DIR_SOA and JDEV_USER_HOME_SOA, pointing to C:\Data\JDeveloper\SOA. I picked the jdev.boot from an earlier install and adapted the ide.user.dir.var variable as follows:
# The ide.user.dir.var specifies the name of the environment variable
# that points to the root directory for user files. The system and
# mywork directories will be created there. If not defined, the IDE
# product will use its base directory as the user directory.
ide.user.dir.var = JDEV_USER_HOME_SOA,JDEV_USER_DIR_SOA
A copy of this jdev.boot file I placed in the same folder as the script. The script sets FMW_HOME to 'C:\oracle\JDeveloper\12210_BPMQS'. After the install of the JDeveloper SOA/BPM QuickStart, it copies this jdev.boot file to the right place in the FMW_HOME (%FMW_HOME%\jdeveloper\jdev\bin).
Make sure that you have the installer jar files (fmw_184.108.40.206.0_bpm_quickstart.jar and fmw_220.127.116.11.0_bpm_quickstart.jar) unzipped in the same folder, since you will have downloaded them as zip files. You can also use this script to do a complete silent install of the JDeveloper Studio Edition, but then use the generic installer downloaded here (Oracle JDeveloper 12c 18.104.22.168.0 Generic/Others (2.1GB)).
The installBpmQS.bat script is as follows.
set JAVA_HOME=c:\Program Files\Java\jdk1.8.0_65
rem FMW_HOME is where the BPM QuickStart will be installed
set FMW_HOME=c:\oracle\JDeveloper\12210_BPMQS
echo setx -m JAVA_HOME "%JAVA_HOME%"
setx -m JAVA_HOME "%JAVA_HOME%"

rem Check Java: install the JDK silently if JAVA_HOME does not exist yet
if exist "%JAVA_HOME%" goto :JAVA_HOME_EXISTS
echo Install %JAVA_HOME%
jdk-8u65-windows-x64.exe /s INSTALLDIR="%JAVA_HOME%"
goto :CHECK_BPMQS
:JAVA_HOME_EXISTS
echo %JAVA_HOME% exists

:CHECK_BPMQS
rem Check BPM 12.2 QS: install silently if FMW_HOME does not exist yet
if exist "%FMW_HOME%" goto :BPMQS_HOME_EXISTS
echo Install %FMW_HOME%
rem echo "Current dir2:" %~dp0
echo "Silent install BPM QuickStart, using response file:" %cd%\bpmqs1221_silentInstall.rsp
"%JAVA_HOME%\bin\java.exe" -jar fmw_22.214.171.124.0_bpm_quickstart.jar -silent -responseFile %cd%\bpmqs1221_silentInstall.rsp -nowait
goto :SET_ENV
:BPMQS_HOME_EXISTS
echo %FMW_HOME% exists

:SET_ENV
rem Create the JDeveloper working directory if it is not there yet
if not exist c:\Data mkdir c:\Data
if not exist c:\Data\JDeveloper mkdir c:\Data\JDeveloper
if not exist c:\Data\JDeveloper\SOA mkdir c:\Data\JDeveloper\SOA

rem set JDEV_USER_DIR_SOA and JDEV_USER_HOME_SOA as 'c:\Data\JDeveloper\SOA'
echo set JDEV_USER_DIR_SOA and JDEV_USER_HOME_SOA as c:\Data\JDeveloper\SOA
setx -m JDEV_USER_DIR_SOA c:\Data\JDeveloper\SOA
setx -m JDEV_USER_HOME_SOA c:\Data\JDeveloper\SOA

rem Copy the adapted jdev.boot into the fresh install
echo copy jdev.boot to "%FMW_HOME%\jdeveloper\jdev\bin"
copy jdev.boot "%FMW_HOME%\jdeveloper\jdev\bin" /Y
For completeness, the content of the response file is:

[ENGINE]

#DO NOT CHANGE THIS.
Response File Version=126.96.36.199.0

#Set this to true if you wish to skip software updates

#If the Software updates are already downloaded and available on your local system, then specify the path to the directory where these patches are available and set SPECIFY_DOWNLOAD_LOCATION to true

#The oracle home location. This can be an existing Oracle Home or a new Oracle Home

Save this as bpmqs1221_silentInstall.rsp. If you use this for the installation of JDeveloper Studio Edition or any other version of JDeveloper, then adapt the name (also in installBpmQS.bat above) and adapt the FMW_HOME/ORACLE_HOME in both installBpmQS.bat and the response file.

I could have enhanced this script to put the possible points to change in variables, but I think with my elaboration you should get the point...
One of the nice features of Oracle Cloud is that it comes with some additional command-line tooling in the form of the dbaascli command.
When you run this command (while logged into your DBaaS instance) you’ll see
[oracle@DEMO ~]# dbaascli
DBAAS CLI version 1.0.0
DBAAS>
This dumps you into an interactive shell; typing ‘help’ gives us a bit more info.
DBAAS>help
Help for dbaascli

dbaascli is a command line interface for different tools to be used with Oracle Cloud DB.
This command line interface supports logging, command history and autocomplete.
Logging is done on a session level. Each time dbaascli is run the last log is overwritten.
To access command history use the up/down arrow keys as you would normally do in a terminal.
Autocomplete uses the Tab key. Autocomplete will complete to the longest common string in case there is not an exact match.
To obtain help on a command use the help keyword at the begining or end.
Autocomplete is supported if the first word is the "help" keyword.

Special commands are available. The list of special commands is:
-H -HELP ? - Get dbaascli help
CLEAR - Clear the screen
HELP - Get dbaascli help
HIST - Print command line history
HISTORY - Print command line history
LIST - List available commands
Q - Exit Command Line Interface
QUIT - Exit Command Line Interface
DBAAS>
Note – you can also run the command non-interactively by specifying the options on the command-line.
So, let’s list the commands available.
DBAAS>list
Very Long Text, press q to quit
Available commands:
database bounce
database changepassword
database start
database status
database stop
dataguard failover
dataguard reinstate
dataguard status
dataguard switchover
dbpatchm
dv off
dv off cdb
dv off pdb
dv on
dv on cdb
dv on pdb
gg setup
gg status
glassfish start
glassfish status
glassfish stop
listener bounce
listener start
listener status
listener stop
mrec
netsec config
netsec config encryption
netsec config integrity
netsec deconfig
netsec deconfig encryption
netsec deconfig integrity
netsec status
netsec status encryption
netsec status integrity
ore setup
orec
tde config
tde rotate
tde rotate masterkey
tde status
tfa install
tfa uninstall
DBAAS>
Most of these commands are wrappers around other commands but it does give you one “go-to” centralised command which is consistent across all your DBaaS instances rather than trying to remember each command (and perhaps having to deal with different configurations).
So let’s try a few commands, this time I’ll run them non-interactively (type ‘quit’ to exit the interactive version).

Database Commands
Let’s check the database status
[oracle@DEMO ~]$ dbaascli database status
DBAAS CLI version 1.0.0
Executing command database status
Database Status:
Database is open
Database name: ORCL
Oracle Database 12c EE Extreme Perf Release 188.8.131.52.0 - 64bit Production
PL/SQL Release 184.108.40.206.0 - Production
CORE 220.127.116.11.0 Production
TNS for Linux: Version 18.104.22.168.0 - Production
NLSRTL Version 22.214.171.124.0 - Production
You could leverage this output in a monitoring script, perhaps grepping for the phrase ‘Database is open’ to check for problems.
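A monitoring check along those lines might look like the following sketch. Since dbaascli only exists on a DBaaS instance, a captured sample of its output is used here; in real use the text would come from running the command via subprocess:

```python
# Sketch of a monitoring check on dbaascli output. In real use the text
# would come from something like:
#   subprocess.run(["dbaascli", "database", "status"], capture_output=True)
sample_output = """DBAAS CLI version 1.0.0
Executing command database status
Database Status:
Database is open
Database name: ORCL"""

def database_is_open(status_text: str) -> bool:
    # The phrase 'Database is open' is the success marker in the output.
    return "Database is open" in status_text

print("OK" if database_is_open(sample_output) else "ALERT: database not open")
```

The same pattern works for any of the status subcommands: capture the output, look for the known success marker, and alert when it is missing.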
We could also do the commands
dbaascli database bounce
dbaascli database start
dbaascli database stop

Glassfish Commands
Let's check Glassfish
[oracle@DEMO ~]$ dbaascli glassfish status
DBAAS CLI version 1.0.0
Executing command glassfish status
Unable to run command. This command should be run as user: root. Currently: oracle
Hmmm, OK, so depending on the command we need to run it as a different user. Let's switch to root and try it again.
[root@DEMO ~]# dbaascli glassfish status
DBAAS CLI version 1.0.0
Executing command glassfish status
Calculating the status and metrics:
Authentication failed for user: admin with password from password file: /root/statuspwd.txt (Usually, this means invalid user name and/or password)
Authentication failed for user: admin with password from password file: /root/statuspwd.txt (Usually, this means invalid user name and/or password)
domain1 running
Command list-domains executed successfully.
Command list-http-listeners failed.
Command uptime failed.
Notice the error message stating it’s reading the /root/statuspwd.txt file to get the password. Once I entered the password there and re-ran it, the command worked (I’ll omit the output here as it’s quite lengthy).

Listener Commands
Let’s check the listener
[root@DEMO ~]# dbaascli listener status
DBAAS CLI version 1.0.0
Executing command listener status
Invalid parameter(s)
Command Error
Command: listener status
Listener Status
Ok, this is getting annoying, maybe I need to be the oracle user for this one?
[root@DEMO ~]# su - oracle
[oracle@DEMO ~]$ dbaascli listener status
DBAAS CLI version 1.0.0
Executing command listener status
Starting listener
LSNRCTL for Linux: Version 126.96.36.199.0 - Production on 13-MAY-2016 13:15:50
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=DEMO.compute-ae.oraclecloud.internal)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 188.8.131.52.0 - Production
Start Date                4-JAN-2015 23:52:34
Uptime                    1 days 13 hr. 23 min. 15 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/DEMO/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=DEMO.compute-ae.oraclecloud.internal)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=DEMO.compute-ae.oraclecloud.internal)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/product/12.1.0/dbhome_1/admin/ORCL/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "ORCL.ae.oraclecloud.internal" has 1 instance(s).
Instance "ORCL", status READY, has 1 handler(s) for this service...
Service "ORCLXDB.ae.oraclecloud.internal" has 1 instance(s).
Instance "ORCL", status READY, has 1 handler(s) for this service...
Service "pdb1.ae.oraclecloud.internal" has 1 instance(s).
Instance "ORCL", status READY, has 1 handler(s) for this service...
The command completed successfully
So that worked. Obviously the output is quite lengthy, but again you could see how this simple command could be used in a monitoring script.

Command Specific Help
If you’re unsure what a particular command might do, rather than risking running it, you can get some help on each command. For example what does that mrec command do?
[oracle@DEMO ~]$ dbaascli mrec help
DBAAS CLI version 1.0.0
Executing command mrec help
Command: mrec
Help for MREC tool.
Parameters:
--args
Format: Not defined
Required: NO
Usage:
mrec --args -oss_cfgfile <cfgfile> -old_hostname <vm_name>
mrec --args -oss_cfgfile <cfgfile> -old_hostname <vm_name> -sid <oracle_sid>
Where:
-help = Print this help.
-oss_cfgfile = full path to OSS config file.
-old_hostname = Name of the host to recover.
-sid = ORACLE SID of the DB instance to recover. Default is orcl
Examples:
mrec --args -oss_cfgfile /var/opt/oracle/mrec/oss.cfg -old_hostname vmprod01
mrec --args -oss_cfgfile /var/opt/oracle/mrec/oss.cfg -old_hostname vmprod01 -sid orcl
Ok, so it doesn’t tell me what it actually does (I had to search the documentation to figure out MREC = Media Recovery), but it does at least tell you what parameters it expects.
More parts to follow.
We are pleased to announce that a new version of the Oracle Application Management Pack for Oracle Utilities has been released to support the new release of Oracle Enterprise Manager 13c. We are excited to offer this new pack which now supports the new features of Oracle Enterprise Manager including:
- The user interface has been updated to reflect the new Alta look and feel implemented by Oracle Enterprise Manager
- The Always On feature is now supported, which is used by Oracle Enterprise Manager to drastically reduce downtime during Oracle Enterprise Manager or pack maintenance
- The System Broadcast feature is now supported allowing broadcast across all Oracle Enterprise Manager users
- Support for Brownouts is now included, where non-scheduled outages are calculated separately for Service Level Agreement checking
- and many more...
The functionality of the pack is the same as the latest release of the pack for Oracle Enterprise Manager 12c, for backward compatibility reasons. This pack requires Oracle Enterprise Manager 13c. The new version of the pack is available from Self Update within Oracle Enterprise Manager 13c and from the Oracle Software Delivery Cloud.
A new release of the pack is also scheduled in the near future with additional functionality to fully exploit additional new and exciting features of Oracle Enterprise Manager 13c. For more information about Oracle Enterprise Manager 13c refer to the EM blog post.
One of the most common questions I get from partners is support for features that are typically in the infrastructure. The main issue here is that some partners confuse what is in the product and what is in the infrastructure and the implementation solution. Let me explain.
The Oracle Utilities Application Framework based products are applications housed within J2EE infrastructure (such as Oracle WebLogic and in some cases IBM WebSphere) and for batch, housed in a runtime version of Oracle Coherence.
Now, there is a degree of separation between the product and the infrastructure. Each has distinct roles, and those roles only overlap across what we call touchpoints between the product and the infrastructure. Another complication that comes into play is the role of the solution, which is the particular configuration of the product and the infrastructure to suit a particular need.
When I was considering writing this article to highlight the differences between product, infrastructure and solutions, I bounced around a few ways of describing it, but I found the best way is in the form of a common example.
Let's use the example of security authentication (aka "who are you?"). This is essentially the feature of securing and identifying the user when connecting to the product. The most common example of this is known as challenge and response (or more commonly userid and password).
In terms of the roles security authentication is described as follows in terms of product, infrastructure and solution:
- The product does not store the userid and password itself. That does not make sense in the context of an enterprise application, as security is typically enterprise-wide for efficiency reasons, not specific to a particular product. This is delegated to the J2EE container (Oracle WebLogic/IBM WebSphere), which authenticates the user. The product relies on the container to pass or fail an authentication attempt.
- The J2EE container, which is part of the infrastructure, supports various security repositories and standards via security connectors. For example, if you have a corporate security server that holds users and passwords then you can connect it via LDAP to the container to now implement a common identity store. The J2EE container supports a wide range of adapters and in the case of Oracle WebLogic you can implement multiples for different parts of your business. An example of this is where you can separate administration accounts from common users using different identity stores.
- A solution for the product is a distinct configuration of the J2EE container with appropriately configured security connectors. This can also mean that you externalize this function even further by implementing an Identity Management solution such as Oracle Identity Management Suite.
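As an illustrative sketch of the delegation described in the points above (not product code; all the class and variable names here are hypothetical), the pattern is simply that the application holds no credentials and forwards every authentication attempt to whatever identity store the container is configured with:

```python
# Illustrative sketch of delegated authentication; names are hypothetical.
class IdentityStore:
    """Stands in for whatever the container is wired to (LDAP server, etc.)."""
    def __init__(self, users):
        self._users = users  # user -> password, held by the store, not the app

    def authenticate(self, user, password):
        return self._users.get(user) == password

class Application:
    """The product: it never stores credentials, it only delegates."""
    def __init__(self, container_store):
        self._store = container_store

    def login(self, user, password):
        # Pass or fail comes entirely from the container's identity store.
        return self._store.authenticate(user, password)

corporate_ldap = IdentityStore({"alice": "s3cret"})
app = Application(corporate_ldap)
```

Swapping in a different identity store changes nothing in the application, which is exactly the point of pushing authentication down to the infrastructure.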
As you see in the example, there are distinct differences between the product, solution and infrastructure. You can apply the same logic to a wide range of implementation aspects needed to be considered.
Now, let's focus on a particular issue using the example above: where should users be able to change their password?
- The product does not have inbuilt password change functionality, because in a solution context it makes no sense, which is why we do not supply one. That does not mean you cannot add this functionality to the menu as a common function.
- The product is always connected to a security repository via the J2EE container (even the default one shipped with the J2EE container). The password change function is at the infrastructure level not the product level.
- Typically you can change passwords from external sources, which is much more logical. Let's take the common example of reusing the same security repository for LAN login and the product (via a common LDAP source, with or without SSO). In this example, the LAN login typically allows you to change your password, which would then apply to all connected applications. It makes no sense to duplicate that functionality in the product. And why would you let the product change a security repository?
The above example brings the discussion into sharp focus.
Now, how do I deal with these situations? I call it "What would product <blank> do in this situation?", where <blank> is your favorite desktop application. I usually use Office as an example (not a great example, but something most people understand). You would not expect Word or its equivalent to have a password maintenance function; it does not make sense. Word, in this example, uses the features of the operating system for all sorts of functions like printing, scanning etc. The application does not have all these functions inbuilt (otherwise it would not really be a word processor).
Hope this clarifies the situation.