Feed aggregator

Oracle 12c Limitations to RESOURCE, SELECT ANY DICTIONARY

ContractOracle - Thu, 2013-06-27 01:38
Oracle 12c has implemented a few improvements to the existing system privileges.  

Dictionary tables containing password hashes (DEFAULT_PWD$, ENC$, LINK$, USER$, USER_HISTORY$, XS$VERIFIERS) are no longer included in the SELECT ANY DICTIONARY system privilege.  This makes it safer to give developers access to dictionary tables for tuning and debugging, without giving them the chance to run brute-force attacks against the password hashes.

UNLIMITED TABLESPACE is no longer included in the RESOURCE role. This should reduce the number of times developers create segments in the SYSTEM tablespace.

SQL> grant select any dictionary to god;

Grant succeeded.

SQL> grant resource to god;

Grant succeeded.

SQL> connect god/god@T12P1
Connected.
SQL> select password from user$;
select password from user$
                     *
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> select default_tablespace from dba_users where username = 'GOD';

DEFAULT_TABLESPACE
------------------------------
USERS

SQL> create table test(id integer) tablespace system;

Table created.

SQL> insert into test values (1);
insert into test values (1)
            *
ERROR at line 1:
ORA-01950: no privileges on tablespace 'SYSTEM'

Categories: DBA Blogs

Oracle 12c New Features - Data Redaction via DBMS_REDACT

ContractOracle - Wed, 2013-06-26 21:55
Oracle 12c has added functionality to redact data in specific columns depending on a policy configured by the DBMS_REDACT package.  

First create table EMPLOYEE in schema TEST and grant select to GOD and NOBODY.

SQL> connect test/test@T12P1
Connected.
SQL> create table employee (emp_id integer primary key, emp_name varchar2(10), salary number);

Table created.

SQL> grant select on employee to god;

Grant succeeded.

SQL> grant select on employee to nobody;

Grant succeeded.

SQL> insert into employee (emp_id, emp_name, salary) values (1,'John',100000);

1 row created.

SQL> insert into employee (emp_id, emp_name, salary) values (2,'Ben',80000);

1 row created.

SQL> commit;

Commit complete.

Now we create the data redaction policy using DBMS_REDACT.ADD_POLICY.  Only someone logged in as GOD should be allowed to see the values stored in TEST.EMPLOYEE.SALARY.  In this case we are using function type FULL which obscures all data, but other options include PARTIAL, RANDOM, REGEXP.

SQL> connect system/password@T12P1
Connected.
SQL> BEGIN
  2  DBMS_REDACT.add_policy(object_schema => 'TEST'
  3  ,object_name => 'EMPLOYEE'
  4  ,policy_name => 'Salary Redaction'
  5  ,expression => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') != ''GOD'' OR SYS_CONTEXT(''USERENV'',''SESSION_USER'') IS NULL'
  6  ,column_name => 'SALARY'
  7  ,function_type => dbms_redact.FULL
  8  );
  9  END;
 10  /

PL/SQL procedure successfully completed.

When we are logged in as nobody the salary column is redacted.

SQL> connect nobody/nobody@T12P1
Connected.
SQL> select * from test.employee;

              EMP_ID EMP_NAME                 SALARY
-------------------- ---------- --------------------
                   1 John                          0
                   2 Ben                           0

But when we are logged in as god we can see the data.

SQL> connect god/god@T12P1
Connected.
SQL> select * from test.employee;

              EMP_ID EMP_NAME                 SALARY
-------------------- ---------- --------------------
                   1 John                     100000
                   2 Ben                       80000
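
The FULL function type used above obscures the whole value. As a sketch of finer-grained masking (illustrative only; a table can carry only one redaction policy at a time, so the FULL policy above would first have to be removed with DBMS_REDACT.DROP_POLICY, and the function_parameters format should be checked against the DBMS_REDACT documentation), a PARTIAL policy on the same column might look like this:

BEGIN
DBMS_REDACT.add_policy(object_schema => 'TEST'
,object_name => 'EMPLOYEE'
,policy_name => 'Salary Partial Redaction'
,expression => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') != ''GOD'''
,column_name => 'SALARY'
,function_type => dbms_redact.PARTIAL
-- for NUMBER columns the format is: replacement digit, start position, end position
,function_parameters => '9,1,5'
);
END;
/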

Note that redaction policies do NOT apply to users with the EXEMPT REDACTION POLICY system privilege, which by default is granted via EXP_FULL_DATABASE and DBA roles.  This means that by default DBAs will still have access to redacted data.

SQL> connect system/password@T12P1
Connected.
SQL> select * from test.employee;

              EMP_ID EMP_NAME                 SALARY
-------------------- ---------- --------------------
                   1 John                     100000
                   2 Ben                       80000

SQL> select grantee, privilege from dba_sys_privs where privilege = 'EXEMPT REDACTION POLICY';

GRANTEE
-----------------------------------------------------------------
EXP_FULL_DATABASE

SQL> select grantee from dba_role_privs where granted_role = 'EXP_FULL_DATABASE';

GRANTEE
-----------------------------------------------------------------
SYS
DATAPUMP_EXP_FULL_DATABASE
DBA
DATAPUMP_IMP_FULL_DATABASE







Categories: DBA Blogs

Oracle database 12c

Amardeep Sidhu - Wed, 2013-06-26 20:58

So there is a new toy in the market for database geeks: Oracle has released database 12c. Every social platform is abuzz with 12c activity, so I thought that I should also complete the ritual ;)

In this post Aman has already summed up many important links.

Maria Colgan has posted some useful links here.

And here is a link to a slidedeck about Upgrading and Migrating to 12c.

Happy 12c’ing !

Categories: BI & Warehousing

Oracle 12c New Features - In Database Row Archiving

ContractOracle - Wed, 2013-06-26 20:57
Oracle 12c has a new feature called In-Database Row Archiving.  Instead of deleting old records, they can now be marked as archived: invisible to running applications, but still present in the original table in case they need to be restored at a later date.  This should reduce the need for DBAs to restore old backups, and should keep auditors happy.

First create a test table.

SQL> create table employee (emp_id integer primary key, emp_name varchar2(10), archive_date date);

Table created.

SQL> insert into employee (emp_id, emp_name) values (1,'John');

1 row created.

SQL> insert into employee (emp_id, emp_name) values (2,'Ben');

1 row created.

SQL> commit;

Commit complete.

Now enable archival for the table.

SQL> alter table employee row archival;

Table altered.

We can see from hidden attribute ORA_ARCHIVE_STATE=0 that the records are not archived.

SQL> select * from employee;

EMP_ID EMP_NAME   ARCHIVE_D
------ ---------- ---------
     1 John
     2 Ben

SQL> select emp_id, emp_name, ora_archive_state from employee;

EMP_ID EMP_NAME   ORA_ARCHIVE_STATE
------ ---------- -----------------------------------------------
     1 John       0
     2 Ben        0

Now we want to archive the record for employee Ben (he resigned).

SQL> update employee
set ora_archive_state=dbms_ilm.archivestatename(1), archive_date=sysdate
where emp_id=2; 

1 row updated.

SQL> commit;

Commit complete.

The record for Ben is no longer visible to normal select operations.

SQL> select * from employee;

EMP_ID EMP_NAME   ARCHIVE_D
------ ---------- ---------
     1 John

But if we set row archive visibility=all then we can see that the record still exists in the table.  

SQL> alter session set row archival visibility = all;

Session altered.

SQL> select emp_id, emp_name, ora_archive_state from employee;

EMP_ID EMP_NAME   ORA_ARCHIVE_STATE
------ ---------- -----------------------------------------------
     1 John       0
     2 Ben        1


SQL> alter session set row archival visibility = active;

Session altered.

And while the record is invisible to the application, it is still considered by constraints.

SQL> select * from employee;

EMP_ID EMP_NAME   ARCHIVE_D
------ ---------- ---------
     1 John

SQL> insert into employee (emp_id, emp_name) values (2,'Ben');
insert into employee (emp_id, emp_name) values (2,'Ben')
*
ERROR at line 1:
ORA-00001: unique constraint (TEST.SYS_C009859) violated

And if we re-hire Ben, the record can be brought back from archive.

SQL> alter session set row archival visibility = all;

Session altered.

SQL> update employee
set ora_archive_state=dbms_ilm.archivestatename(0), archive_date=null
where emp_id=2; 

1 row updated.

SQL> commit;

Commit complete.

SQL> alter session set row archival visibility = active;

Session altered.

SQL> select * from employee;

EMP_ID EMP_NAME   ARCHIVE_D
------ ---------- ---------
     1 John
     2 Ben
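
If archiving is later no longer wanted for the table, my understanding is that it can be switched off again, which also drops the hidden ORA_ARCHIVE_STATE column (a sketch; the exact behaviour is worth verifying against the documentation):

alter table employee no row archival;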


Categories: DBA Blogs

Oracle 12c New Features - Partial Indexing on Partitioned Tables

ContractOracle - Wed, 2013-06-26 03:42
Oracle 12c allows DBAs to set specific table partitions "INDEXING OFF" and create indexes with "INDEXING PARTIAL".  This means that index partitions won't be created for the specified table partitions.  This allows DBAs to have multiple indexing strategies for different partitions, or to rapidly create indexes on specific partitions.

Example :- create a partitioned table :-

SQL> CREATE TABLE test_range
(id  NUMBER(5),
att1 char(1),
att2 char(1),
att3 char(1))
PARTITION BY RANGE(id)
(
PARTITION id_10 VALUES LESS THAN(10),
PARTITION id_20 VALUES LESS THAN(20)
);

Table created.

Create a local index.  By default it will create index partitions for every table partition.

SQL> create index test_index1 on test_range(att1) local;

Index created.

Now set indexing off for one partition, and create a second index with partial indexing enabled.

SQL> alter table test_range modify partition id_20 indexing off;

Table altered.

SQL> create index test_index2 on test_range(att2) local indexing partial;

Index created.

Insert to create the segments.

SQL> insert into test_range values (1,'a','a','a');

1 row created.

SQL> insert into test_range values (11,'b','b','b');

1 row created.

SQL> commit;

Commit complete.

And we can see that for index TEST_INDEX2 with PARTIAL INDEXING set, no segment was created for table partition ID_20 with INDEXING OFF.

SQL> select index_name, partition_name, segment_created from dba_ind_partitions where index_owner = 'TEST';

INDEX_NAME
--------------------------------------------------------------------------------
PARTITION_NAME
--------------------------------------------------------------------------------
SEGMENT_CREATED
-------------------------
TEST_INDEX1
ID_10
YES

TEST_INDEX1
ID_20
YES

TEST_INDEX2
ID_10
YES

TEST_INDEX2
ID_20
NO

SQL> select segment_name, segment_type, count(*) from dba_segments where owner = 'TEST' group by segment_name, segment_type;

SEGMENT_NAME
--------------------------------------------------------------------------------
SEGMENT_TYPE         COUNT(*)
------------------ ----------
TEST_RANGE
TABLE PARTITION             2

TEST_INDEX1
INDEX PARTITION             2

TEST_INDEX2
INDEX PARTITION             1
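
Going the other way, if partition ID_20 later needs indexing again, switching indexing back on does not build the missing index partition by itself; a rebuild is needed. A sketch (assuming the local index partition inherits the table partition name, which is the default; worth checking against the documentation):

alter table test_range modify partition id_20 indexing on;
alter index test_index2 rebuild partition id_20;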


Categories: DBA Blogs

What's in Oracle Database 12c for Java?

Kuassi Mensah - Wed, 2013-06-26 00:05
Looking to exploit the new Oracle Multitenant Container Database with Java?
Have you ever needed to deploy Java applications with tens of thousands of concurrent users? If you are a Java architect, Java designer or wannabe looking to exploit new Oracle database 12c enhancements in the areas of performance, scalability, availability, security and manageability/ease-of-use, this is the paper for you.

Have you ever paid twice for the same flight ticket, the same article, or your taxes? Have you ever wanted the system to just deal with database failure rather than ask you to restart your transaction from the start? Upon database outages (hardware, software, network, or storage failure), four problems confront applications: hangs, error handling, determining the outcome of in-flight work (i.e., the last COMMIT), and the resubmission of in-flight work.
If you are an application developer, database or system administrator, integrator, or ISV looking to better exploit Oracle RAC and Active Data Guard to achieve maximum application availability, this is the paper for you (although with a Java focus).

Oracle Database 12c is 'out' - some pleasant surprises

Hans Forbrich - Tue, 2013-06-25 21:18
Without fanfare, other than a "didyaknow" on Oracle-L, Oracle 12c for Linux and Solaris became available on Oracle's edelivery site.

Some time later, the docs showed up at http://www.oracle.com/pls/db121/homepage and via http://docs.oracle.com (but not yet on tahiti.oracle.com) and on OTN's http://download.oracle.com

My FIRST look at the documentation these days is at the Licensing guide.  For me, that's even more important than the New Features, the Concepts (especially the What's New chapter) and the Administrator's Guide.

Wheeeeeeeeeeee!

Personal Edition For Linux!!!  It's here!

Finally.

THANK YOU ORACLE.   And, since I was able to peek at the product early, that is the first of MANY thanks.

 -------

Update:  One of the Oracle-L people pointed out that the Personal Edition for Linux has been available for 11.2 as well.  They snuck that in - but I am still very thankful.

Personal Edition is for developers who need ALL the features and options (except RAC and OEM packs), as well as support, at a very low price.  The limitation - it's a One Named User Only license.
Categories: DBA Blogs

Oracle Database 12c

Antonio Romero - Tue, 2013-06-25 17:30

Exciting day today as Oracle Database 12c is released. You can find lots of information on the release on OTN here. With this release comes another milestone on Oracle's Data Integration roadmap - OWB is no longer shipped with the database. You will notice that the OWB documentation is no longer included with the Oracle Database documentation, you can compare and contrast the 11.2 and 12.1 documentation below.

OWB 11gR2 is still supported with Oracle Database 12c; you will need 11.2.0.3 plus at least CP2, which has been certified with Oracle Database 12c. The 11.2.0.4 release will wrap this into one install.

Starting Again: A Year in Review

Sue Harper - Tue, 2013-06-25 05:50
It's June, it's ODTUG time of year and it's New Orleans... Well, that's true for many old friends and colleagues.  I'm still in London writing in the watery sunshine that should be summer.  

ODTUG feels a little like a milestone for me.  It was a conference I always loved, and I made many good friends there over the years.  Two years ago at ODTUG in LA, I announced my departure from Oracle after 19 years, and in my blog post Updates, News and Beta Software later that year I said that I'd "decided to “hang up my boots” and try something new."  This entailed taking a two-month break and then starting a new job in the City.  It was an interesting switch: still working with the Oracle database, but introducing new graphical software to a new customer base, I joined a team in the City and the craziness that is the daily rat run into London.  Working in London was great; one can never tire of the view of St Paul's, nor of the buzz that you feel in the City.  
The truth was that I wanted a break, a real break. Like so many, I'd started working the moment I got my degree and hadn't stopped.  Actually, I started working while finishing my degree, so I have 25 years under my belt, and I really felt I was needed somewhere else. This was a head/heart decision. In my head, the right thing to do was stay in a good job; the economic climate says that, everyone looking for work says that, and the job market is just not the place to mess with at the moment. (Besides... as some have pointed out... women over a certain age aren't employable...!) As a Christian, the heart decision was different. The heart decision was to get more involved in my local community, to help others more, to take a step back and let the right thing happen. Besides, I think I have 20 more years of work left in me!
So that's what I did:  Just before ODTUG last year, and with the blessing of my manager at Microgen, I took a deep breath and decided to take a year off, with the view to doing those things I'd wanted to do and get involved.  I had no plans,  except to let things evolve and to be back earning money by August 2013. 

If you know me, I'm a frantic list maker and planner, so the lists went off the scale. Needless to say, I have not accomplished all of the hundreds of tasks I'd set out to do, like learning to speak Italian or French or German or even Hindi (one would have been good), but there are a few things I've had fun doing. I should mention that at no point have I been bored, every day is packed (when is there time for work?), but I've never quite shaken the advice from the head and hoping the heart will triumph!   

Things started slowly by paring down the world I live in; plans for buying a new house went by the way, I sold my car and did lots of runs to charity shops and free cycle and even filled a skip!  (Very freeing stuff!) For soul food and revival I went to the Art Academy in London and did a Summer School drawing class, following that with a class working with green wood at the Good Life centre.  This was such fun that I enrolled on a 12 week carpentry course, even more fun and I still have all my fingers.

My other half and I switched to another church, as part of a church plant, when our then-current church sent a vicar and a small group of the congregation to start a new church in a multi-cultural community.  Here they need "all hands on deck" to get the church and community up and running, and we're loving it.  I now read with a group of 6 and 7 year olds at a local school and am involved in other aspects of the church community.  Bentley, my beautiful mutt, is now approved as a Pets As Therapy dog, although we have yet to do our first visit.
I have been framing my own photographs for sale on and off for a while and have taken that a step further and now have a shed that you can loosely describe as a workshop, where I'm framing pieces for others.  I love to work with the wood and enjoy making bespoke items to suit the artwork.

Well, short of giving you my life history here, the final step is finding a new job, and one that will fit in with some of the voluntary work I'm involved with.  So, ...I've decided to go it on my own.  Some of the small steps have been taken and I'm working on a website, with help, using Joomla! Is there no getting away from technology? Once that's up and running I'll pop back here and let you know.

If you're reading this while at ODTUG - have fun.  If you're not, stick it in your diary for next year - I can really recommend it, assuming you like technology.

Bye for now
Sue

KScope 13 - Monday

Brent Martin - Tue, 2013-06-25 05:39

This week I'm in the Big Easy hanging out at ODTUG’s Kscope 13 conference. I have heard this is the best conference for Hyperion and Oracle Business Intelligence so my expectations are high. I’m not as strong in either topic as I am with PeopleSoft but there’s nothing like a good conference to get up to speed.


So here's a quick run down of the sessions I was able to attend today along with a quick synopsis:


Balance Sheet Reporting in HFM 6/24/13


Alexandre Seran


Balance sheet analysis is often done outside of HFM in Excel.  CTA is often done in the same way.  This is a problem because of tight timelines and accuracy concerns.


OK You can’t automate everything. There will always be entries that have to be manually made in the financial statements. But this doesn’t necessarily mean Excel. Alexandre presented a methodology to capture the manual adjustments (and supporting documentation) in Webforms and/or Smartview and create proof of CTA, cash flow statements for each entity, and a detailed balance sheet movement report all within the boundaries of HFM.


This session went pretty deep into the business requirements of balance sheet analysis and reporting which was mostly over my head (I admit), but I left with an understanding that a good HFM architecture and design lays the foundation for streamlined automated reporting at month end, and without that foundation you may find yourself with a highly manual reporting process at month end.


Using OBIEE to Retrieve Essbase Data


Edward Roske and Tracy McMullen


Oracle has done a lot of work to integrate Essbase into everything.  OBI is no exception.  But integrating a relational tool like OBI with an OLAP tool like Essbase requires some planning and effort.  This session did a great job laying out what’s required to make this integration happen and boiling it down into 7 steps.  Ok the 7 steps shouldn’t be taken TOO literally but definitely download this presentation or check out my favorite Hyperion Planning authors’ blog(s) if this is something you’re interested in.


Introduction to the Essbase Java API


Tim Tow


Tim is an admitted Essbase geek and self-taught Java programmer.  Not too shabby for someone with an accounting background.  This session focused on the Essbase Java API and what you can do with it.  It included specific programming examples and use cases.  While I personally think he spent way too much time on the ins and outs of Java programming (what is a class, what makes a good IDE, etc), the session provided a detailed view of what you can do with the Java API.  It’s good to know we’re not limited to .NET and Visual Basic to working programmatically with Essbase anymore.


Automating Hyperion Reporting for the Rest of Your Organization


Jim Wilking


This session turned out to be about how to use delivered Smartview VBA functions to generate advanced Smartview reports.  Topics included how to add prompts to a report, how to refresh specific sections of the report based on those prompts, and so on.  And if you have a lot of reports to run, Jim showed how that could be automated as well.


VBA in Excel always demos well (especially to accountants!) and there's nothing like creating custom automation for a specific task to improve efficiencies.  But as far as I could tell, the functionality in this presentation could also be met with Hyperion Financial Reporting.  Jim did point out that there are disadvantages to a heavy dependence on VBA for generating your monthly reporting package: having to maintain a set of reference values in Excel and having to keep advanced VBA skills on your team were a couple he called out.  From experience I know what a headache it can be to upgrade when you have a lot of custom VBA code to re-test, so I would add that to the list.


Summary


All in all it was a good day and I definitely left with a lot to think about.  Looking forward to day 2 tomorrow!


 

Google reader is dead, long live RSS

Mathias Magnusson - Tue, 2013-06-25 04:00

So calling Google Reader dead may be a bit premature, but Google's announcement of their intention to kill their baby all but killed it. On Monday it is RIP for the reader. I've gone through disbelief, to mourning the loss, to a search for a replacement that best emulates what Google Reader does.

In the end I decided not to just replace it with a new tool providing the exact same features under a different name.

If I had to find a different tool, I might as well try to find something better. The fact is that although Google Reader was one of my favourite Google tools, I did not use it directly. Yes, it is a bit odd that I'm writing a post about replacing a tool I didn't use, and when I say that I didn't use it, I use the term "did not" very recklessly: the fact is that I could not read all the blogs I follow without it. Google Reader had turned into an infrastructure piece for me. I used it to collect posts and keep track of what I had read, but my tool of choice for reading was Reeder. Unless you are familiar with it, that may seem like I just referred to Google Reader again, but it is spelled differently: Reeder is a wonderful tool for reading blogs which uses Google Reader to manage the blogs and to track which posts are read and which are not.

If Reeder is such a wonderful app, why not keep using it? At first I thought of abandoning it as Google Reader was going away, but it turns out Reeder will continue to work (or so they claim). It supports Feedbin, and possibly even Feedly in the future. There has not been much talk about it on their own website or even on their Twitter account: lots of fans asking what is going on, and precious little from @reederapp themselves.

I started thinking of how I consume text these days. I do still surf on a computer, but it is when looking for something. I prefer reading on my iPad. My workflow has actually changed such that when I find a great article or blog post, I flip it so I later can read it in Flipboard on the iPad. If the article is worth saving for the future after having read it I add it to Evernote. As that is my workflow for things I do not have to read right now, why should my workflow for reading blogs be any different? After all, I read them when time allows.

After that epiphany I took a look at Flipboard, wondering if I could get a similar tool for reading blogs on the iPad. It turns out I can: they have seized the opportunity Google created, and you can now read your blogs in the tool we all love. Getting my blogs into Flipboard has made them seem so much more enjoyable. It integrates with Google Reader, so it imports all the blogs you have there and lets you follow the same blogs.

The one thing that can be confusing is that after opting to read your blogs in Flipboard, you will want to go into the settings for Google Reader in Flipboard and turn on indicating the number of unread items, marking posts you read in Flipboard as read, and showing only unread posts. I have them all set to ON and it works very well with the way I want to read my blogs.

For me, reading blogs on the iPad is so much more enjoyable than trying to catch up in the browser on the computer. The latter never happens, and I used to fall far behind and have to set aside a few hours to catch up; since moving my blog reading to Flipboard I have stayed current on all the blogs I read.

If you haven’t found a way to read your blogs yet that you really love, give Flipboard a try. I think you’re gonna love it.


Let me SLEEP!

Andrew Clarke - Mon, 2013-06-24 12:11
DBMS_LOCK is a slightly obscure built-in package. It provides components with which we can build our own locking schemes. Its obscurity stems from the default access on the package, which is restricted to its owner SYS and the other power user accounts, because implementing your own locking strategy is a good way to wreck a system unless you really know what you're doing. Besides, Oracle's existing functionality is such that there is almost no need to build something extra (especially since 11g finally made the SELECT ... FOR UPDATE SKIP LOCKED syntax legal). So it's just fine that DBMS_LOCK is private to SYS. Except ...

... except that one of the sub-programs in the package is SLEEP(). And SLEEP() is highly useful. Most PL/SQL applications of any sophistication need the ability to pause processing for a short while, either for a fixed time or perhaps while polling for a specific event. So it is normal for PL/SQL applications to need access to DBMS_LOCK.SLEEP().

Commonly this access is granted at the package level, that is, grant execute on dbms_lock to joe_dev. Truth be told, there's not much harm in that. The privilege is granted to a named account, and if somebody uses the access to implement a roll-your-own locking strategy which brings Production to its knees, well, the DBAs know who to look for.

But we can employ a schema instead. The chief virtue of a schema is managing rights on objects. So let's create a schema for mediating access to powerful SYS privileges:

create user sys_utils identified by &pw
temporary tablespace temp
/
grant create procedure, create view, create type to sys_utils
/

Note that SYS_UTILS does not get the create session privilege. Hence nobody can connect to the account, a sensible precaution for a user with potentially damaging privileges. Why bar connection in all databases and not just Production? The lessons of history tell us that developers will assume they can do in Production anything they can do in Development, and write their code accordingly.

Anyway, as well as granting privileges, the DBA user will need to build SYS_UTILS' objects on its behalf:
grant execute on dbms_lock to sys_utils
/
create or replace procedure sys_utils.sleep
( i_seconds in number)
as
begin
dbms_lock.sleep(i_seconds);
end sleep;
/
create public synonym sleep for sys_utils.sleep
/
grant execute on sys_utils.sleep to joe_dev
/
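
With the synonym and grant in place, JOE_DEV can pause a session without any direct privilege on DBMS_LOCK, for example:

begin
  sleep(5);  -- suspend this session for five seconds
end;
/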

I think it's a good idea to be proactive about creating an account like this; granting it some obviously useful privileges before developers ask for them, simply because some developers won't ask. The forums occasionally throw up extremely expensive PL/SQL loops whose sole purpose is to burn CPU cycles or wacky DBMS_JOB routines which run every second. These WTFs have their genesis in ignorance of, or lack of access to, DBMS_LOCK.SLEEP().

Oracle 10g - a time traveller's tale

Andrew Clarke - Mon, 2013-06-24 11:03
Time travel sucks, especially going back in time. Nobody takes a bath, there are no anaesthetics and you can't get a decent wi-fi signal anywhere. As for killing your own grandfather, forget about it.

The same is true for going back in database versions. In 2009 I had gone straight from an Oracle 9i project to an Oracle 11g one. So when I eventually found myself on a 10g project it was rather disorientating. I would keep reaching for tools which weren't in the toolbox: LISTAGG(), preprocessor scripts for external tables, generalized invocation for objects.

I had missed out on 10g while it was shiny and new, and now it just seemed limited. Take partitioning: Oracle 10g supported exactly the same composite partitioning methods as 9i, just Range-Hash and Range-List, whereas 11g is full of wonders like Interval-Range, Hash-Hash and the one I needed, List-List.

Faking a List-List composite partitioning scheme in 10g

Consider this example of a table with a (slightly forced) need for composite List-List partitioning. It is part of an engineering stock control system, in which PRODUCTS are grouped into LINES (Ships, Cars, Planes) and COMPONENTS are grouped into CATEGORIES (frame, interior fittings, software, etc.). We need an intersection table which links components to products.

There are hundreds of thousands of components and tens of thousands of products, but we are almost always only interested in components for a single category within a single product line (or product), so composite partitioning on (product_line, component_category) is a good scheme. In 11g the List-List method works just fine:
SQL> create table product_components
2 (product_line varchar2(10) not null
3 , product_id number not null
4 , component_category varchar2(10) not null
5 , component_id number not null
6 , constraint pc_pk primary key (product_id, component_id )
7 , constraint pc_prd_fk foreign key (product_id )
8 references products (product_id)
9 , constraint pc_com_fk foreign key (component_id )
10 references components (component_id)
11 )
12 partition by range(product_line) subpartition by list(component_category)
13 subpartition template
14 (subpartition sbody values ('BODY')
15 , subpartition sint values ('INT')
16 , subpartition selectr values ('ELECTR')
17 , subpartition ssoft values ('SOFT')
18 )
19 (partition pship values ('SHIP')
20 , partition pcar values ('CAR')
21 , partition pplane values ('PLANE')
22 )
23 /

Table created.

SQL>

But in 10g the same statement hurls ORA-00922: missing or invalid option. The workaround is a bit of a nasty hack: replace the first List with a Range, producing a legitimate Range-List composite:
SQL> create table product_components
2 (product_line varchar2(10) not null
3 , product_id number not null
4 , component_category varchar2(10) not null
5 , component_id number not null
6 , constraint pc_pk primary key (product_id, component_id )
7 , constraint pc_prd_fk foreign key (product_id )
8 references products (product_id)
9 , constraint pc_com_fk foreign key (component_id )
10 references components (component_id)
11 )
12 partition by range(product_line) subpartition by list(component_category)
13 subpartition template
14 (subpartition sbody values ('BODY')
15 , subpartition sint values ('INT')
16 , subpartition selectr values ('ELECTR')
17 , subpartition ssoft values ('SOFT')
18 )
19 (partition pcar values less than ('CAS')
20 , partition pplane values less than ('PLANF')
21 , partition pship values less than ('SHIQ')
22 )
23 /

Table created.

SQL>

Note the wacky spellings, which ensure that 'CAR' ends up in the right partition. We also have to re-order the partition clause so that the partition bounds don't raise an ORA-14037 exception. And we are left with the possibility that a rogue typo might slip records into the wrong partition, so we really ought to have a foreign key constraint on the product_line column:

alter table product_components add constraint pc_prdl_fk foreign key (product_line)
references product_lines (line_code)
/
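With that constraint in place a typo is rejected outright rather than quietly filed in the wrong partition. A hypothetical illustration (the values, and the assumption that product_lines has no 'CAT' row, are mine):

```sql
-- 'CAT' collates above the 'CAS' bound, so without the foreign key it
-- would silently land in the PPLANE partition. With the constraint the
-- insert fails with ORA-02291 (parent key not found) instead.
insert into product_components
  (product_line, product_id, component_category, component_id)
values
  ('CAT', 1001, 'BODY', 2001);
```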

I described this as a nasty hack. It is not really that nasty, in fact it actually works very well in daily processing. But managing the table is less intuitive. Say we want to manufacture another line, rockets. We cannot just add a new partition:

SQL> alter table product_components
2 add partition prock values less than ('ROCKEU')
3 /
add partition prock values less than ('ROCKEU')
*
ERROR at line 2:
ORA-14074: partition bound must collate higher than that of the last partition


SQL>

Instead we have to split the PSHIP partition in two:

SQL> alter table product_components split partition pship
2 at ('ROCKEU')
3 into (partition prock, partition pship)
4 /

Table altered.

SQL>
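We can sanity-check the outcome in the data dictionary (assuming the table is in the current schema):

```sql
-- PROCK should now appear before PSHIP, with high value 'ROCKEU'.
select partition_name, high_value
from   user_tab_partitions
where  table_name = 'PRODUCT_COMPONENTS'
order  by partition_position;
```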

The other snag is that once we do get back to the future, i.e. upgrade to 11g, it's a bit of a chore to convert the table to a proper List-List scheme. Probably too much of a chore to be worth the effort. Even with a time machine there are only so many hours in the day.
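For the record, one way to do that conversion, sketched under my own assumptions (a straight insert-select and a rename; the constraints, indexes and grants would have to be carried across as well, and DBMS_REDEFINITION would be the online alternative):

```sql
-- Once on 11g: build a true List-List copy, move the data, swap names.
create table product_components_ll
( product_line       varchar2(10) not null
, product_id         number       not null
, component_category varchar2(10) not null
, component_id       number       not null
)
partition by list(product_line) subpartition by list(component_category)
subpartition template
( subpartition sbody   values ('BODY')
, subpartition sint    values ('INT')
, subpartition selectr values ('ELECTR')
, subpartition ssoft   values ('SOFT')
)
( partition pship  values ('SHIP')
, partition pcar   values ('CAR')
, partition pplane values ('PLANE')
);

insert /*+ append */ into product_components_ll
select * from product_components;
commit;

drop table product_components;
rename product_components_ll to product_components;
```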

Kscope - Oracle Business Analytics Strategy & New Features

Look Smarter Than You Are - Sun, 2013-06-23 11:34
"Business Analytics is a key strategic priority for Oracle."
                 - Paul Rodwick
I'm sitting in the Kscope13 BI Symposium listening to keynote speaker Paul Rodwick, VP of Oracle BI Product Management. Paul was rather interesting despite his flight having landed in New Orleans at 4AM.  On 3-4 hours' sleep, Paul reviewed Oracle's Business Analytics strategy.  It's surprising to me how little Oracle's EPM/BI architecture has changed over the last 5 years (other than the renaming to "Oracle Business Analytics").  This is a good thing.
Why?  Because over the last 5 years, the architecture has gone from a products-integrating-is-a-theoretically-good-idea-so-let's-put-it-on-a-slide-cross-our-fingers-and-see-what-happens approach to an actual integrated solution that uses the various products in the Oracle Business Analytics line together, with each product doing a key part.  Instead of "Essbase or OBIEE or an application?" it's "Essbase as the cube platform, OBIEE as the front-end, and applications for needs that are often common across multiple companies."

So now that Oracle has gotten the basics out of the way, they're looking to expand their Business Analytics offerings.  Their key focuses for the immediate future are big data, mobile, in-memory computing, and cloud-based analytics.  The last two really speak to the technology of deployment (in-memory and cloud).  Big data seems to be one of those things that everyone is talking about and no one's quite sure what to do with for the moment, but mobile is on everyone's minds and people are actually doing something about it.  To meet that immediate mobile need, Oracle is releasing new functionality in every release or patch of its mobile analytics products.  For instance, OBIEE 11.1.1.7 now has a full mobile security toolkit (available on OTN) for companies that want greater security than native Apple iOS provides.

Paul discussed some of the key features in the 11.1.1.7 release (including Smart View as the primary Office front-end for BI going forward).  He mentioned that the bundled patch for OBIEE 11.1.1.7 will be out in a few weeks, so prep yourself for 11.1.1.7.1.  He also talked about some recent improvements to Endeca in version 3.0 of that product.  While I love Endeca's extremely powerful ability to discover information in unstructured data, right now most companies are still focused on analyzing their structured information.  Unstructured analysis is definitely coming: it's just only being deployed by a handful of leading-edge companies at the moment.
Where Are They Going?The key releases we should see in the next 9-12 months will revolve around these themes:
  • Visual analysis.  They're trying to make the analysis more intuitive because the majority of users don't spend their day being analysts: they want the system to help them find issues quickly so they can make better business decisions faster.
  • Mobile Analytics. Oracle is planning to create a BI Mobile "Applications Designer" that will allow developers to make HTML5 applications purpose-built for mobile deployment.  They will also continue to improve the mobile applications with every version, but they didn't go into specifics beyond more HTML5 deployment.
  • Exalytics.  They promised a new Exalytics announcement in the near future.  I'm presuming this refers to the new Exalytics X3-4 version that's mentioned on the June 4 Oracle Engineered System Price List (page 5).  I expect this will be detailed more during Steve Liebermensch's session later this week.
  • Cloud analytics.  Oracle is making a huge investment in the cloud and it looks like there will be more and more applications in Oracle Business Analytics that run in the cloud.  This makes it a lot easier for customers to get immediate ROI from a BI implementation without huge server investments.
  • Big data.  Part of Oracle's strategy in this area is to tie any data in any source behind the scenes into Oracle BI.  Data agnostic, in other words.
  • Predictive analytics.  Paul didn't really talk to this one other than to tease that they do have dedicated resources to expanding the Predictive Analytics capabilities of Oracle BI Foundation Suite.  There is some P.A. functionality in Hyperion Planning, Crystal Ball, and Hyperion Strategic Finance and that sounds like it will be expanded into the BI layer in future releases.
The one thing that's really apparent from Paul's session is that Business Analytics is now a $1+ billion portion of Oracle revenue... and they're treating it as such in terms of research and development.  It's a fast-growing space and Oracle seems determined to maintain their market share in overall Business Analytics.

I hope to blog later in the week if any new announcements come out.  Coming to you from Kscope13, this is your humble reporter, Edward Roske.
Categories: BI & Warehousing

Where's SCOTT?

Andrew Clarke - Fri, 2013-06-21 10:53
The database on the Developer Days Database App VBox appliance doesn't have the SCOTT schema. This is fair enough, as the sample schemas aren't included by default any more (for security reasons, obviously). I know the official sample schemas used in the documentation - HR, OE, and so on - are more realistic and useful for developing prototypes. But nothing beats the simplicity of SCOTT.EMP for explaining something in an online forum.

So, where is the script for building the SCOTT schema?

Back in the day it was part of the SQL*Plus build: $ORACLE_HOME/sqlplus/demo/demobld.sql (or something, I'm doing this from memory). But in 11gR2 there are no demo scripts in the sqlplus sub-directory. This was also probably the case in 10g but I never had occasion to look for it on that platform. Anyway, in 11gR2 its location is $ORACLE_HOME/rdbms/admin/utlsampl.sql.
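To rebuild SCOTT from that script, a quick sketch (utlsampl.sql drops any existing SCOTT user and issues its own CONNECT at the end, so run it as SYSDBA only where that is safe; adjust the path if your install keeps the script elsewhere):

```sql
-- In SQL*Plus the ? shorthand expands to $ORACLE_HOME.
connect / as sysdba
@?/rdbms/admin/utlsampl.sql

-- Afterwards the classic demo account is back:
connect scott/tiger
select ename, job from emp where deptno = 10;
```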

Presumably Oracle have obscured it because they want everybody to stop using SCOTT and standardise on the modern samples. But this schema is part of the Oracle heritage. It should have Grade II listed status.

Too much chrome

Mathias Magnusson - Wed, 2013-06-19 07:00

You know how it is, when you have that feeling. You are on top of your game. You have a few quick brush strokes to add to a system to make it more dynamic. You have all the small needed changes in your head and you know it is just gonna work. You sit down for some quality time with your computer and the application builder. It’s gonna be fun and you will get to bask in the glow of your success before the day is over. Yup, that is the feeling I’m talking about.

Then there is that other feeling, the one we want to avoid. You know the kind. When you have an easy fix to do. It all goes well until it doesn’t. And then, nothing. You turn the knob this way, nothing. You turn the knob that way, nothing. You turn it up, you turn it down. No matter what, the same freaking result. And it is the wrong result. Time goes by; what was embarrassing after 15 minutes is annoyingly embarrassing after a couple of hours. Yes, that is the feeling I’m referring to.

Worst of all is of course when the first feeling turns into the second feeling. That is an afternoon that is sure to suck, and the more it sucks the more you get annoyed that you cannot see what is sure to be a very obvious mistake.

A day a few weeks ago I had this happen to me for the umpteenth time. Not with the same issue, of course, but with one of those ridiculous things that just throw you out of your flow. What was a magical afternoon turned into one where I felt like a complete beginner.

It all started very innocently, with me needing to add a table where I could store the locations different tabs should point to. So I had an application item, I had a process that would populate it on new instances (sessions), and I had referenced that application item in the link target. The target returned was set to be “127.0.0.1:8080”. A target as good as any…

It didn’t show up in the link from the tab. It was, however, available, so it could be displayed in a region on the same page. What the ….

Could it really be that APEX creates the HTML text for the tab before the application process runs? It just doesn’t make sense, and a test of hardcoding the value to “abc” in the application process proved that it did in fact run before.

Could it be that a value passed in from the process was treated differently for tabs than for regions? I doubted it, but facing odd issues one tends to consider all kinds of illogical things. But hardcoding “abc” in the process once again showed that it came through to the tab.

What on earth. A friend tested the same page on his Mac and (fortunately?) got the same exact result. So it wasn’t my browser that was acting up.

Now we were really stretching, and started looking at the page rendered in Safari and Firefox. What on earth? The link showed up just fine in them!

We were considering a bug in APEX. I have read about things they have had to do to make IE, Chrome, Firefox, Safari, and Opera work well with APEX applications. Could there be a bug in how links with dynamic values were treated? It certainly was possible, were it not for the fact that some links showed up just fine: the “abc” one looked like a page on the same IP address as the APEX server when we checked where Chrome wanted to send us. So it really wasn’t reasonable that APEX would have a bug related just to localhost addresses.

After some more coffee it dawned on me: maybe this wasn’t APEX at all. Maybe this was a “feature” of Chrome. A peek at the source for the rendered page found the text “127.0.0.1:8080” in the link attribute for the tab. So APEX rendered it as I expected, but Chrome didn’t honor it as a link.

It turns out that if there is just text, Chrome will assume it is a relative document located downstream from the DocumentRoot. However, if it is not a path that can be parsed as a file system location, Chrome will not allow the link to be used. What was needed was to put http:// in front of the address. That should of course be there for a well-formed URL; I just expected it to show up when hovering over the link anyway, as it does in other browsers.

It would be nice if Chrome, rather than just linking to “blank”, would link to a page somewhere saying that the URL was malformed. However, the solution took seconds to implement while the troubleshooting took WAY longer than it should have. Even worse, two senior troubleshooters were stumped on this for a very long time.

So if you end up with an empty link in your APEX application, or any other web page for that matter, you know that it is most likely the result of having a malformed URL rendered in Chrome.

So yes, I started off with the first feeling, feeling good about myself, and quickly began the downward spiral into the second. That day never really got back up to the first happy feeling again. Happy with having solved the issue, but not nearly feeling on top of my game when the day ended. Oh well, there will be more days to convince myself that I might be on top of my game (at least sometimes).


June 2013 Critical Patch Update for Java SE Released

Oracle Security Team - Tue, 2013-06-18 14:51

Hello, this is Eric Maurice again.

Oracle today released the June 2013 Critical Patch Update for Java SE.  This Critical Patch Update provides 40 new security fixes.  37 of these vulnerabilities are remotely exploitable without authentication.

34 of the fixes brought with this Critical Patch Update address vulnerabilities that only affect client deployments.  The highest CVSS Base Score for these client-only fixes is 10.0. 

4 of the vulnerabilities fixed in this Critical Patch Update can affect client and server deployments.  The most severe of these vulnerabilities has received a CVSS Base Score of 7.5. 

One of the vulnerabilities fixed in this Critical Patch Update affects the Java installer and can only be exploited locally. 

Finally, one of the fixes included in this Critical Patch Update affects the Javadoc tool and the documents it creates.  Some HTML pages that were created by any 1.5 or later versions of the Javadoc tool are vulnerable to frame injection.  This means that this vulnerability (CVE-2013-1571, also known as CERT/CC VU#225657) can only be exploited through Javadoc-generated HTML files hosted on a web server.  If exploited, this vulnerability can result in granting a malicious attacker the ability to inject frames into a vulnerable web page, thus allowing the attacker to direct unsuspecting users to malicious web pages through their web browsers.  This vulnerability has received a CVSS Base Score of 4.3.  With the release of this Critical Patch Update, Oracle has fixed the Javadoc tool so that it doesn’t produce vulnerable pages anymore, and additionally produced a utility, the “Java API Documentation Updater Tool,” to fix previously produced (and vulnerable) HTML files.  More information about this vulnerability is available on the CERT/CC web site at http://www.kb.cert.org/vuls/id/225657. 

Oracle recommends that this Critical Patch Update be applied as soon as possible because it includes fixes for a number of severe vulnerabilities.  Note that the vulnerabilities fixed in this Critical Patch Update affect various components and, as a result, may not affect the security posture of all Java users in the same way. 

Desktop users can leverage the Java Autoupdate or visit Java.com to ensure that they are running the most recent version.  As a reminder, security fixes delivered through the Critical Patch Update for Java SE are cumulative: in other words, running the most recent version of Java provides users with the protection resulting from all previously-released security fixes.


For More Information:

The Advisory for the June 2013 Critical Patch Update for Java is located at http://www.oracle.com/technetwork/topics/security/javacpujun2013-1899847.html

More information about the Javadoc tool is available at http://www.oracle.com/technetwork/java/javase/documentation/index-jsp-135444.html

ODTUG KScope 2013: I'm Speaking

Luc Bors - Tue, 2013-06-18 03:52

error on line 1 at column 1: Document is empty

Dave Best - Mon, 2013-06-17 15:38
I'm not sure why this happens but every now and then the invalidator password gets corrupted.  When that happens, the following error will be seen when you try to access portal: This page contains the following errors:

agent deployment error in EM 12c

Amardeep Sidhu - Sun, 2013-06-16 12:04

Yesterday I was configuring EM 12c for a Sun Super Cluster system. There were a total of 4 LDOMs where I needed to deploy the agent (Setup –> Add targets –> Add targets manually). Out of these 4, everything went fine for 2 LDOMs, but for the other two it failed with an error message. It didn’t give much detail on the EM screen, but rather gave a message to try to secure/start the agent manually. When I tried that, the secure agent part worked fine but the start agent command failed with the following error message:

oracle@app1:~$emctl start agent
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
Starting agent ………………………………………………………. failed.
HTTP Listener failed at Startup
Possible port conflict on port(3872): Retrying the operation…
Failed to start the agent after 1 attempts.  Please check that the port(3872) is available.

I thought that there was something wrong with the port, so I cleaned up the agent installation, made sure the port wasn’t being used, and did the agent deployment again. This time it failed with the same message, but it reported a different port number, i.e. 1830, the agent port:

oracle@app1:~$emctl start agent
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
Starting agent ……………………………………………. failed.
HTTP Listener failed at Startup
Possible port conflict on port(1830): Retrying the operation…
Failed to start the agent after 1 attempts.  Please check that the port(1830) is available.

Again I checked a few things but found nothing wrong. All the LDOMs had a similar configuration, so what worked for the other two should have worked for these two also.

Before starting the installation I had noted the LDOM hostnames and IPs in a notepad file, and had swapped the IPs of two LDOMs (actually these two only). Later I found that and corrected it. While looking at the notepad file it occurred to me that the same mistake could be in /etc/hosts of the server where EM is deployed. Oh boy, that is what it was. While making the entries in /etc/hosts of the EM server, I had copied them from the notepad and the wrong entries got copied: the IPs for these two LDOMs were swapped with each other, and that was causing the whole problem.

I deinstalled the agent, corrected /etc/hosts and tried to deploy again… all worked well!

Categories: BI & Warehousing
