
Feed aggregator

Windows Server 2012 R2: solving .NET Framework 3.5 installation problems

Yann Neuhaus - 15 hours 31 min ago

I faced a problem at a customer site last week when I tried to install the .NET Framework 3.5, a prerequisite for installing SQL Server 2012, on a Windows Server 2012 R2 machine. I opened the Server Manager and navigated to Manage > Add Roles and Features:

 

b2ap3_thumbnail_pic01.jpg

 

I selected the .NET Framework 3.5 Features option:

 

b2ap3_thumbnail_pic02.jpg

 

I specified an alternate source path:

 

b2ap3_thumbnail_pic03.jpg

 

... and surprise! Even though an ISO of Windows Server 2012 R2 was mapped to my D: drive, the installation failed with this strange error: "The source file could not be found..."

After some investigation, I found that this problem is quite common and that Microsoft has published a fix ... which unfortunately did not work for me!
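For reference, the workaround commonly referenced for this error is to point DISM at the sources\sxs folder of the installation media (here, the ISO mounted on D:). A sketch of that command, which in my case still failed:

```
Dism /Online /Enable-Feature /FeatureName:NetFx3 /All /Source:D:\sources\sxs /LimitAccess
```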

I tried the same installation in different ways: command prompt, PowerShell ... but absolutely NO RESULT.

I finally decided to open a PowerShell console to check the Windows Features available on my server with the cmdlet Get-WindowsFeature:

 

b2ap3_thumbnail_Pic1.jpg

 

Strangely, the status of the .NET Framework 3.5 was not showing "Available", but "Removed"!

 

b2ap3_thumbnail_Pic2.jpg

 

So, how do I change this state from removed to available?
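Before changing anything, the state can be confirmed for just this feature. NET-Framework-Core is the name of the .NET Framework 3.5 feature; a quick PowerShell sketch:

```powershell
# Show the install state of the .NET Framework 3.5 feature only
Get-WindowsFeature -Name NET-Framework-Core | Select-Object Name, InstallState
# InstallState is one of Installed, Available or Removed
```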

After some more investigation, and after trying several fixes provided by people who had faced the same problem, I finally found the Standalone Offline Installer tool, which solved my problem by enabling the .NET Framework 3.5 (many thanks to Abbodi1406).

I downloaded this exe file and executed it on my server.

An installer screen appeared:

 

b2ap3_thumbnail_pic4.jpg

 

After clicking on the Next button, a command prompt screen appeared which showed the completion state of the process.

pic6.jpg

 

As soon as the process was finished, I went back to my PowerShell screen to check whether the .NET Framework 3.5 was now available, by running the Get-WindowsFeature cmdlet again:

 

b2ap3_thumbnail_pic10.jpg

 

The .NET Framework 3.5 was now available and I was able to restart the installation process from the beginning: navigating to the Server Manager, selecting the feature concerned and giving the alternate source path.

pic8.jpg

I finally succeeded in installing my .NET Framework 3.5!

I hope that my blog post will help some of you to resolve this installation problem ;-)

How to *really* send a script to the background

Laurent Schneider - Thu, 2014-11-27 10:55

Let’s check this small script

foo.sh


#!/bin/sh
echo foo.1:`date` | tee $HOME/tmp/foo.txt
sleep 3
echo foo.2:`date` | tee -a $HOME/tmp/foo.txt


$ $HOME/tmp/foo.sh
foo.1:Thu Nov 27 17:34:53 CET 2014
foo.2:Thu Nov 27 17:34:56 CET 2014

Very obvious: I write to the console (and, via tee, to a file), wait three seconds, then write again.

Ok, let’s take another script that would call this script in the background using &

bar.sh


#!/bin/sh
echo bar.1:`date`
$HOME/tmp/foo.sh &
echo bar.2:`date`


$ $HOME/tmp/bar.sh
bar.1:Thu Nov 27 17:36:32 CET 2014
bar.2:Thu Nov 27 17:36:32 CET 2014
$ 
foo.1:Thu Nov 27 17:36:32 CET 2014
foo.2:Thu Nov 27 17:36:35 CET 2014

bar is printing the date, calling foo in the background, then printing the date, then it returns to you, and foo is still running.

BUT this is only in a relative background …

Let’s try this


$ time $HOME/tmp/bar.sh > /dev/null

real    0m0.01s
user    0m0.00s
sys     0m0.00s

So you believe it takes no time to run bar?

Let’s try, for instance, over ssh (or cron or whatever)


$ time ssh localhost $HOME/tmp/bar.sh > /dev/null
real    0m3.81s
user    0m0.01s
sys     0m0.01s

Running bar suddenly waits 3 seconds for foo to finish.

To be sure the script is sent to the farthest background, you need to close the file descriptors: stdin, stdout and stderr.

I rewrote it as

baz.sh


#!/bin/sh
echo bar.1:`date`
$HOME/tmp/foo.sh <&- >&- 2>&- &
echo bar.2:`date`


$ time ssh localhost $HOME/tmp/baz.sh >/dev/null
real    0m0.44s
user    0m0.00s
sys     0m0.00s

Now the script baz finishes immediately and does not wait for foo to complete.
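Closing the descriptors can upset programs that later try to write to stdout, so an alternative sketch (my own variant, not from the original post) is to redirect them to /dev/null instead. The backgrounded job then no longer holds the write end of the caller's stdout pipe, and ssh sees EOF immediately:

```shell
#!/bin/sh
# qux.sh - like baz.sh, but redirects the descriptors instead of
# closing them; the inline subshell stands in for calling foo.sh
echo qux.1:`date`
( sleep 3; echo qux.done:`date` ) </dev/null >/dev/null 2>&1 &
echo qux.2:`date`
```

Over ssh this returns as quickly as baz.sh, while the background job keeps running.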

The mess that is fast-refresh join-only Materialized Views

Dominic Brooks - Thu, 2014-11-27 09:23

Every now and then you come across a feature or a combination of features which has turned into such a dog’s dinner that you wonder how many people can possibly be using it.

This week – fast-refresh materialized views.
I would have thought that this was a very commonly used feature.

This is quite a long article so I will do a top-level TOC first

  1. Why am I looking at a fast-refresh, on-commit, join-only materialized view?
  2. Show me it working
  3. What’s it done to my commit time?
  4. How can this be avoided with _mv_refresh_use_stats?
  5. Is it any different if I use MLOG stats?
  6. Why might I not want to mess with _mv_refresh_use_stats?


First, why am I looking at a fast-refresh, on-commit, join-only materialized view?

I have two datasets which produce only a small resultset when they are joined.
But neither filter is, on its own, highly selective.

The model below might not be 100% representative of my real-world situation.
I’ve tried to sanitise and simplify but as a result I’ve then had to bump up the volumes a bit to make the point.

drop table t1;
drop table t2;

create table t1
as
select rownum oid
,      case when rownum <= 1000000 then 'Y' else 'N' end flag
,      rownum oid_t2
,      rpad('X',200,'X') padding
from   dual
connect by rownum <= 10000000
order by dbms_random.value;

create unique index i_t1_1 on t1 (oid);
create index i_t1_2 on t1(flag);
create index i_t1_3 on t1(oid_t2);

create table t2
as
select rownum oid
,      case when rownum >= 999901 then 'Y' else 'N' end flag
,      rpad('X',200,'X') padding
from   dual
connect by rownum <= 10000000
order by dbms_random.value;

create unique index i_t2_1 on t2 (oid);
create index i_t2_2 on t2(flag);

Currently, I can crunch my data in a few seconds:

select *
from   t1
,      t2
where  t1.flag   = 'Y'
and    t1.oid_t2 = t2.oid
and    t2.flag   = 'Y';

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

Gives

-------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |      1 |        |    100 |00:00:15.91 |     606K|    303K|       |       |          |
|*  1 |  HASH JOIN                   |        |      1 |    860K|    100 |00:00:15.91 |     606K|    303K|   238M|  7667K|  292M (0)|
|   2 |   TABLE ACCESS BY INDEX ROWID| T1     |      1 |    860K|   1000K|00:00:01.54 |     303K|      0 |       |       |          |
|*  3 |    INDEX RANGE SCAN          | I_T1_2 |      1 |    860K|   1000K|00:00:00.21 |    1814 |      0 |       |       |          |
|*  4 |   TABLE ACCESS FULL          | T2     |      1 |   8275K|   9000K|00:00:09.78 |     303K|    303K|       |       |          |
------------------------------------------------------------------------------------------------------------------------------------- 

Predicate Information (identified by operation id):
--------------------------------------------------- 

   1 - access("T1"."OID_T2"="T2"."OID")
   3 - access("T1"."FLAG"='Y')
   4 - filter("T2"."FLAG"='Y')

But I want to query these tables quite frequently and preferably I want this driving logic to be subsecond.
And there will be a reasonable number of inserts and updates coming in at various times.

So, one of the better options is a join-only materialized view.


So, let’s get going:

create materialized view log on t1 with rowid;
create materialized view log on t2 with rowid;

create materialized view mv_t1_t2
refresh fast on commit
enable query rewrite
as
select t1.rowid   t1_rowid
,      t1.oid     t1_oid
,      t1.flag    t1_flag
,      t1.oid_t2  t1_oid_t2
,      t1.padding t1_padding
,      t2.rowid   t2_rowid
,      t2.oid     t2_oid
,      t2.flag    t2_flag
,      t2.padding t2_padding
from   t1
,      t2
where  t1.flag   = 'Y'
and    t1.oid_t2 = t2.oid
and    t2.flag   = 'Y';

create index i_mv_t1_t2_1 on mv_t1_t2(t1_rowid);
create index i_mv_t1_t2_2 on mv_t1_t2(t2_rowid);

Tangent alert!
I wasn't going to go off on this tangent because the article will be long enough anyway, but when writing it up a number of questions about "why was this not being rewritten?" came up, so we might as well cover more bases.

So, if we run our SELECT now, what happens?

select *
from   t1
,      t2
where  t1.flag   = 'Y'
and    t1.oid_t2 = t2.oid
and    t2.flag   = 'Y';

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |      1 |        |    100 |00:00:08.22 |     606K|       |       |          |
|*  1 |  HASH JOIN                   |        |      1 |    860K|    100 |00:00:08.22 |     606K|   238M|  7667K|  260M (0)|
|   2 |   TABLE ACCESS BY INDEX ROWID| T1     |      1 |    860K|   1000K|00:00:01.58 |     303K|       |       |          |
|*  3 |    INDEX RANGE SCAN          | I_T1_2 |      1 |    860K|   1000K|00:00:00.21 |    1814 |       |       |          |
|*  4 |   TABLE ACCESS FULL          | T2     |      1 |   8275K|   9000K|00:00:02.69 |     303K|       |       |          |
---------------------------------------------------------------------------------------------------------------------------- 

Predicate Information (identified by operation id):
---------------------------------------------------  

   1 - access("T1"."OID_T2"="T2"."OID")
   3 - access("T1"."FLAG"='Y')
   4 - filter("T2"."FLAG"='Y')

Bother!

We could go digging around but let’s try the easy way out – DBMS_MVIEW.EXPLAIN_REWRITE

CREATE TABLE REWRITE_TABLE(
  statement_id          VARCHAR2(30),   -- id for the query
  mv_owner              VARCHAR2(30),   -- owner of the MV
  mv_name               VARCHAR2(30),   -- name of the MV
  sequence              INTEGER,        -- sequence no of the msg
  query                 VARCHAR2(2000), -- user query
  query_block_no        INTEGER,        -- block no of the current subquery
  rewritten_txt         VARCHAR2(2000), -- rewritten query
  message               VARCHAR2(512),  -- EXPLAIN_REWRITE msg
  pass                  VARCHAR2(3),    -- rewrite pass no
  mv_in_msg             VARCHAR2(30),   -- MV in current message
  measure_in_msg        VARCHAR2(30),   -- Measure in current message
  join_back_tbl         VARCHAR2(30),   -- Join back table in message
  join_back_col         VARCHAR2(30),   -- Join back column in message
  original_cost         INTEGER,        -- Cost of original query
  rewritten_cost        INTEGER,        -- Cost of rewritten query
  flags                 INTEGER,        -- associated flags
  reserved1             INTEGER,        -- currently not used
  reserved2             VARCHAR2(10));  -- currently not used

DECLARE
l_sql CLOB :=
q'{select *
from   t1
,      t2
where  t1.flag   = 'Y'
and    t1.oid_t2 = t2.oid
and    t2.flag   = 'Y'}';
BEGIN
 DBMS_MVIEW.EXPLAIN_REWRITE(l_sql,'MV_T1_T2','mysql');
END;
/

select message from rewrite_table;
MESSAGE
-------------------------------------
QSM-01150: query did not rewrite
QSM-01001: query rewrite not enabled

Yep – easy one.
Let’s set query_rewrite_enabled to true (would be system level if this was not just a test/demo) and repeat.
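(At system level, that would be along the lines of this sketch:)

```sql
-- enable query rewrite instance-wide, persisted to the spfile
alter system set query_rewrite_enabled = true scope = both;
```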

alter session set query_rewrite_enabled = true;
select *
from   t1
,      t2
where  t1.flag   = 'Y'
and    t1.oid_t2 = t2.oid
and    t2.flag   = 'Y';

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
---------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |          |      1 |        |    101 |00:00:00.01 |      16 |
|   1 |  MAT_VIEW REWRITE ACCESS FULL| MV_T1_T2 |      1 |    101 |    101 |00:00:00.01 |      16 |
--------------------------------------------------------------------------------------------------

Great – I have my subsecond response time.

But what about updates to the data?

First of all, for completeness, let’s observe what happens when we make an uncommitted change.

update t1 set flag = 'Y' where oid = 1000001;

1 rows updated
select *
from   t1
,      t2
where  t1.flag   = 'Y'
and    t1.oid_t2 = t2.oid
and    t2.flag   = 'Y';

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |      1 |        |    101 |00:00:08.27 |     606K|       |       |          |
|*  1 |  HASH JOIN                   |        |      1 |    860K|    101 |00:00:08.27 |     606K|   238M|  7667K|  260M (0)|
|   2 |   TABLE ACCESS BY INDEX ROWID| T1     |      1 |    860K|   1000K|00:00:01.63 |     303K|       |       |          |
|*  3 |    INDEX RANGE SCAN          | I_T1_2 |      1 |    860K|   1000K|00:00:00.22 |    1814 |       |       |          |
|*  4 |   TABLE ACCESS FULL          | T2     |      1 |   8275K|   9000K|00:00:02.65 |     303K|       |       |          |
---------------------------------------------------------------------------------------------------------------------------- 

Predicate Information (identified by operation id):
---------------------------------------------------   

   1 - access("T1"."OID_T2"="T2"."OID")
   3 - access("T1"."FLAG"='Y')
   4 - filter("T2"."FLAG"='Y')

I’ve lost my usage of the MV.
This is expected.
Why?
DBMS_MVIEW.EXPLAIN_REWRITE tells us:

MESSAGE
------------------------------------------------------------------------------------------------------------------------
QSM-01150: query did not rewrite
QSM-01279: query rewrite not possible because DML operation occurred on a table referenced by materialized view MV_T1_T2

We can’t use the materialized view because it’s stale.
To use it we have to consider whether we want to run with query_rewrite_integrity set to stale_tolerated.
In this case, not.
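(For reference, that rejected alternative would be set like this, sketched:)

```sql
-- tolerate stale MV data for query rewrite (not wanted here)
alter session set query_rewrite_integrity = stale_tolerated;
```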

If I continue to leave the update uncommitted and check another session, then it continues to use the MV.

But, another tangent!!
In that other session, if I set statistics_level to all, then it won’t use the MV and DBMS_MVIEW.EXPLAIN_REWRITE can’t tell me why.
I’m going to conveniently leave that investigation for another time but just show the evidence:

alter session set statistics_level = all;
alter session set query_rewrite_enabled = true;

select *
from   t1
,      t2
where  t1.flag   = 'Y'
and    t1.oid_t2 = t2.oid
and    t2.flag   = 'Y';

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |      1 |        |    101 |00:00:08.10 |     606K|       |       |          |
|*  1 |  HASH JOIN                   |        |      1 |    860K|    101 |00:00:08.10 |     606K|   238M|  7667K|  260M (0)|
|   2 |   TABLE ACCESS BY INDEX ROWID| T1     |      1 |    860K|   1000K|00:00:01.49 |     303K|       |       |          |
|*  3 |    INDEX RANGE SCAN          | I_T1_2 |      1 |    860K|   1000K|00:00:00.22 |    1814 |       |       |          |
|*  4 |   TABLE ACCESS FULL          | T2     |      1 |   8275K|   9000K|00:00:02.72 |     303K|       |       |          |
----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------                                                                         

   1 - access("T1"."OID_T2"="T2"."OID")
   3 - access("T1"."FLAG"='Y')
   4 - filter("T2"."FLAG"='Y')

But

alter session set statistics_level = typical;
alter session set query_rewrite_enabled = true;

select *
from   t1
,      t2
where  t1.flag   = 'Y'
and    t1.oid_t2 = t2.oid
and    t2.flag   = 'Y';

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
----------------------------------------------------------
| Id  | Operation                    | Name     | E-Rows |
----------------------------------------------------------
|   0 | SELECT STATEMENT             |          |        |
|   1 |  MAT_VIEW REWRITE ACCESS FULL| MV_T1_T2 |    101 |
----------------------------------------------------------   

Note
-----
   - Warning: basic plan statistics not available.....

So… anyway… we have our fast-refresh on commit materialized view.


What happens when we commit my previous change?

set timing on
update t1 set flag = 'Y' where oid = 1000001;

1 rows updated.
Elapsed: 00:00:00.029

commit

committed.
Elapsed: 00:00:02.098

Gadzooks! My COMMIT now takes two seconds!

At this point, if you read a lot of technical articles, you might get lucky and have that vague sense of nagging familiarity about having read about this at some point…

Alberto Dell’Era has an excellent set of articles which helps understand what is going on here and what we might do about it.

If we trace the session, we can find out what is contributing to these unpleasant two seconds.
I'm going to cheat: flush the shared pool, do the update and commit, and check v$sql.
Here is a summary of the significant contributing SQL snippets.
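The check was along these lines (a sketch; the LIKE filters are my own guess at reproducing the listing):

```sql
alter system flush shared_pool;
-- ... perform the update and commit, then:
select sql_id, child_number, plan_hash_value, executions,
       elapsed_time/1000/1000 elapsed_secs, sql_fulltext
from   v$sql
where  sql_text like '%MV_T1_T2%' or sql_text like '%MLOG$_T%'
order  by elapsed_time desc;
```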

SQL_ID        CHILD_NUMBER PLAN_HASH_VALUE EXECUTIONS ELAPSED_TIME/1000/1000 SQL_FULLTEXT
------------- ------------ --------------- ---------- ---------------------- --------------------------------------------------------------------------------
dt1mmbxbp6uk7            1                          1                 1.9356 /* MV_REFRESH (INS) */ INSERT INTO "DOM"."MV_T1_T2" SELECT /*+ NO_MERGE(
5qah9xjyhapg1            0       828077931          1               0.010237 /* MV_REFRESH (DEL) */ DELETE FROM "E668983_DBA"."MV_T1_T2" SNA$ WHERE "T1_ROWID
4agmycb29dd1p            0                          1               0.001747 delete from "DOM"."MLOG$_T2" where xid$$ = :1
f8wzsn9dzcusb            0      1745227344          1               0.000589 delete from "DOM"."MLOG$_T1" where xid$$ = :1
ar6vnyxkss2kq            0       375101422          1               0.000518 select dmltype$$ from "DOM"."MLOG$_T1"  where xid$$ = :1  group by dmlty

A couple of interesting observations.
First is that we can see the XID$$ mechanism which Alberto mentions in his articles linked to above.

Second is that the INSERT is the major contributor to our commit time, followed by the DELETE.
We shall focus on these.

These are the prettified statements:

/* MV_REFRESH (INS) */
INSERT INTO "DOM"."MV_T1_T2"
SELECT /*+ NO_MERGE("JV$") */ "JV$"."RID$","JV$"."OID","JV$"."FLAG","JV$"."OID_T2","JV$"."PADDING","MAS$0".ROWID,"MAS$0"."OID","MAS$0"."FLAG","MAS$0"."PADDING"
FROM   ( SELECT "MAS$"."ROWID" "RID$"  ,  "MAS$".*
         FROM "DOM"."T1" "MAS$"
         WHERE ROWID IN (SELECT  /*+ HASH_SJ */  CHARTOROWID("MAS$"."M_ROW$$") RID$
                         FROM "DOM"."MLOG$_T1" "MAS$"
                         WHERE "MAS$".XID$$ = :1 )) "JV$", "T2" AS OF SNAPSHOT(:B_SCN)  "MAS$0"
WHERE  "JV$"."FLAG"='Y'
AND    "JV$"."OID_T2"="MAS$0"."OID"
AND    "MAS$0"."FLAG"='Y';

With plan:

------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                 |            |      1 |        |      0 |00:00:01.92 |     303K|       |       |          |
|   1 |  LOAD TABLE CONVENTIONAL         |            |      1 |        |      0 |00:00:01.92 |     303K|       |       |          |
|   2 |   NESTED LOOPS                   |            |      1 |        |      1 |00:00:01.92 |     303K|       |       |          |
|   3 |    NESTED LOOPS                  |            |      1 |      1 |      1 |00:00:01.92 |     303K|       |       |          |
|   4 |     VIEW                         |            |      1 |      1 |      1 |00:00:01.92 |     303K|       |       |          |
|*  5 |      HASH JOIN RIGHT SEMI        |            |      1 |      1 |      1 |00:00:01.92 |     303K|   832K|   832K|  387K (0)|
|   6 |       TABLE ACCESS BY INDEX ROWID| MLOG$_T1   |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |          |
|*  7 |        INDEX RANGE SCAN          | I_MLOG$_T1 |      1 |      1 |      1 |00:00:00.01 |       1 |       |       |          |
|   8 |       TABLE ACCESS BY INDEX ROWID| T1         |      1 |    860K|   1000K|00:00:01.58 |     303K|       |       |          |
|*  9 |        INDEX RANGE SCAN          | I_T1_2     |      1 |    860K|   1000K|00:00:00.21 |    1814 |       |       |          |
|* 10 |     INDEX UNIQUE SCAN            | I_T2_1     |      1 |      1 |      1 |00:00:00.01 |       3 |       |       |          |
|* 11 |    TABLE ACCESS BY INDEX ROWID   | T2         |      1 |      1 |      1 |00:00:00.01 |       1 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------ 

Predicate Information (identified by operation id):
--------------------------------------------------- 

   5 - access(ROWID=CHARTOROWID("MAS$"."M_ROW$$"))
   7 - access("MAS$"."XID$$"=:1)
   9 - access("MAS$"."FLAG"='Y')
  10 - access("JV$"."OID_T2"="MAS$0"."OID")
  11 - filter("MAS$0"."FLAG"='Y')
/* MV_REFRESH (DEL) */
DELETE
FROM   "DOM"."MV_T1_T2" SNA$
WHERE  "T1_ROWID" IN (SELECT /*+ NO_MERGE  HASH_SJ  */ *
                      FROM   (SELECT  CHARTOROWID("MAS$"."M_ROW$$") RID$
                              FROM "DOM"."MLOG$_T1" "MAS$"
                              WHERE "MAS$".XID$$ = :1 )MAS$)
-----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT              |              |      1 |        |      0 |00:00:00.01 |       9 |       |       |          |
|   1 |  DELETE                       | MV_T1_T2     |      1 |        |      0 |00:00:00.01 |       9 |       |       |          |
|*  2 |   HASH JOIN SEMI              |              |      1 |    101 |      1 |00:00:00.01 |       3 |   963K|   963K| 1252K (0)|
|   3 |    INDEX FULL SCAN            | I_MV_T1_T2_1 |      1 |    101 |    101 |00:00:00.01 |       1 |       |       |          |
|   4 |    TABLE ACCESS BY INDEX ROWID| MLOG$_T1     |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |          |
|*  5 |     INDEX RANGE SCAN          | I_MLOG$_T1   |      1 |      1 |      1 |00:00:00.01 |       1 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------  

Predicate Information (identified by operation id):
---------------------------------------------------  

   2 - access("T1_ROWID"=CHARTOROWID("MAS$"."M_ROW$$"))
   5 - access("MAS$"."XID$$"=:1)

If you have read Alberto’s blogs then you will notice the problematic HASH JOIN RIGHT SEMI in the INSERT.
And you will also be familiar with a couple of solutions which now present themselves.

It’s great that there are known workarounds to this problem but, as I hope to demonstrate, the fact that there are at least three separate codepaths through the fast refresh mechanism worries me, particularly when we see that at least one of these alternative code paths causes bugs with other variations on the MV refresh mechanism. What a mess!!


The first solution to the problem is the parameter _mv_refresh_use_stats.
Repeat update & commit:

alter session set "_mv_refresh_use_stats" = true;

session SET altered.

set timing on
update t1 set flag = 'Y' where oid = 1000001;

Elapsed: 00:00:00.035

commit;
committed.
Elapsed: 00:00:00.174

That’s better.
What do our INSERT and DELETE statements look like now?
Note the different SQL IDs – these are completely different statements.

SQL_ID        CHILD_NUMBER PLAN_HASH_VALUE EXECUTIONS ELAPSED_TIME/1000/1000 SQL_FULLTEXT
------------- ------------ --------------- ---------- ---------------------- --------------------------------------------------------------------------------
av53npjd112vx            1       374625119          1               0.013106 /* MV_REFRESH (INS) */ INSERT INTO "DOM"."MV_T1_T2" SELECT /*+ NO_MERGE("JV$") *
5a6ugwh5v1j5x            0       466548951          1               0.009088 /* MV_REFRESH (DEL) */ DELETE FROM "DOM"."MV_T1_T2" SNA$ WHERE "T1_ROWID" IN (SE
gpk46p11kbp3d            0      1735585849          1               0.001665 SELECT OLD_NEW$$, COUNT(*)  FROM "DOM"."MLOG$_T1"   WHERE SNAPTIME$$ > :1 AND SN
4agmycb29dd1p            0                          1               0.001615 delete from "DOM"."MLOG$_T2" where xid$$ = :1
f8wzsn9dzcusb            0      1745227344          1               0.000538 delete from "DOM"."MLOG$_T1" where xid$$ = :1
ar6vnyxkss2kq            0       375101422          1               0.000515 select dmltype$$ from "DOM"."MLOG$_T1"  where xid$$ = :1  group by dmltype$$     

For both the INSERT and the DELETE, we’ve lost the HASH_SJ hint and we’ve gained a CARDINALITY hint and a NO_SEMIJOIN.

/* MV_REFRESH (INS) */
INSERT INTO "DOM"."MV_T1_T2"
SELECT /*+ NO_MERGE("JV$") */ "JV$"."RID$","JV$"."OID","JV$"."FLAG","JV$"."OID_T2","JV$"."PADDING","MAS$0".ROWID,"MAS$0"."OID","MAS$0"."FLAG","MAS$0"."PADDING"
FROM ( SELECT "MAS$"."ROWID" "RID$"  ,  "MAS$".*
       FROM   "DOM"."T1" "MAS$"
       WHERE  ROWID IN (SELECT /*+ CARDINALITY(MAS$ 0)  NO_SEMIJOIN */  CHARTOROWID("MAS$"."M_ROW$$") RID$
                        FROM   "DOM"."MLOG$_T1" "MAS$"
                        WHERE  "MAS$".XID$$ = :1 )) "JV$", "T2" AS OF SNAPSHOT(:B_SCN)  "MAS$0"
WHERE "JV$"."FLAG"='Y'
AND   "JV$"."OID_T2"="MAS$0"."OID"
AND    "MAS$0"."FLAG"='Y'

Giving plan:

--------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                   |            |      1 |        |      0 |00:00:00.01 |      15 |       |       |          |
|   1 |  LOAD TABLE CONVENTIONAL           |            |      1 |        |      0 |00:00:00.01 |      15 |       |       |          |
|   2 |   NESTED LOOPS                     |            |      1 |        |      1 |00:00:00.01 |       7 |       |       |          |
|   3 |    NESTED LOOPS                    |            |      1 |      1 |      1 |00:00:00.01 |       6 |       |       |          |
|   4 |     VIEW                           |            |      1 |      1 |      1 |00:00:00.01 |       3 |       |       |          |
|   5 |      NESTED LOOPS                  |            |      1 |      1 |      1 |00:00:00.01 |       3 |       |       |          |
|   6 |       VIEW                         | VW_NSO_1   |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |          |
|   7 |        SORT UNIQUE                 |            |      1 |      1 |      1 |00:00:00.01 |       2 | 73728 | 73728 |          |
|   8 |         TABLE ACCESS BY INDEX ROWID| MLOG$_T1   |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |          |
|*  9 |          INDEX RANGE SCAN          | I_MLOG$_T1 |      1 |      2 |      1 |00:00:00.01 |       1 |       |       |          |
|* 10 |       TABLE ACCESS BY USER ROWID   | T1         |      1 |      1 |      1 |00:00:00.01 |       1 |       |       |          |
|* 11 |     INDEX UNIQUE SCAN              | I_T2_1     |      1 |      1 |      1 |00:00:00.01 |       3 |       |       |          |
|* 12 |    TABLE ACCESS BY INDEX ROWID     | T2         |      1 |      1 |      1 |00:00:00.01 |       1 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   9 - access("MAS$"."XID$$"=:1)
  10 - filter("MAS$"."FLAG"='Y')
  11 - access("JV$"."OID_T2"="MAS$0"."OID")
  12 - filter("MAS$0"."FLAG"='Y')
/* MV_REFRESH (DEL) */
DELETE FROM "DOM"."MV_T1_T2" SNA$
WHERE  "T1_ROWID" IN (SELECT /*+ NO_MERGE  NO_SEMIJOIN  */ *
                      FROM   (SELECT  /*+ CARDINALITY(MAS$ 0) */  CHARTOROWID("MAS$"."M_ROW$$") RID$
                              FROM "DOM"."MLOG$_T1" "MAS$"   WHERE "MAS$".XID$$ = :1 )MAS$)

with plan:

-------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT                |              |      1 |        |      0 |00:00:00.01 |       9 |       |       |          |
|   1 |  DELETE                         | MV_T1_T2     |      1 |        |      0 |00:00:00.01 |       9 |       |       |          |
|   2 |   NESTED LOOPS                  |              |      1 |      1 |      1 |00:00:00.01 |       3 |       |       |          |
|   3 |    VIEW                         | VW_NSO_1     |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |          |
|   4 |     SORT UNIQUE                 |              |      1 |      1 |      1 |00:00:00.01 |       2 | 73728 | 73728 |          |
|   5 |      TABLE ACCESS BY INDEX ROWID| MLOG$_T1     |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |          |
|*  6 |       INDEX RANGE SCAN          | I_MLOG$_T1   |      1 |      2 |      1 |00:00:00.01 |       1 |       |       |          |
|*  7 |    INDEX RANGE SCAN             | I_MV_T1_T2_1 |      1 |      1 |      1 |00:00:00.01 |       1 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("MAS$"."XID$$"=:1)
   7 - access("T1_ROWID"="RID$")

There’s no doubt that this is faster for our specific circumstances.

What about the second workaround available?

Let’s reset “_mv_refresh_use_stats” and look at locking stats on the MV logs.

alter session set "_mv_refresh_use_stats" = false;

session SET altered.

begin
  dbms_stats.gather_table_stats(USER,'MLOG$_T1');
  dbms_stats.gather_table_stats(USER,'MLOG$_T2');
  dbms_stats.lock_table_stats(USER,'MLOG$_T1');
  dbms_stats.lock_table_stats(USER,'MLOG$_T2');
end;
/

anonymous block completed

select table_name, num_rows from user_tables where table_name in ('MLOG$_T1','MLOG$_T2');

TABLE_NAME                     NUM_ROWS
------------------------------ --------
MLOG$_T1                              0
MLOG$_T2                              0

What happens then?

update t1 set flag = 'Y' where oid = 1000001;

Elapsed: 00:00:00.026

commit;

committed.
Elapsed: 00:00:00.169

Very similar to the effects of “_mv_refresh_use_stats”, but exactly the same? No.

SQL_ID        CHILD_NUMBER PLAN_HASH_VALUE EXECUTIONS ELAPSED_TIME/1000/1000 SQL_FULLTEXT
------------- ------------ --------------- ---------- ---------------------- --------------------------------------------------------------------------------
69xq38c4ts0j2            1       190791364          1               0.013053 /* MV_REFRESH (INS) */ INSERT INTO "DOM"."MV_T1_T2" SELECT /*+ NO_MERGE("JV$") *
dzzb43k6h3vhs            0                          1               0.009172 /* MV_REFRESH (DEL) */ DELETE FROM "DOM"."MV_T1_T2" SNA$ WHERE "T1_ROWID" IN (SE
4agmycb29dd1p            0                          1               0.000924 delete from "DOM"."MLOG$_T2" where xid$$ = :1
f8wzsn9dzcusb            0      1745227344          1                0.00064 delete from "DOM"."MLOG$_T1" where xid$$ = :1
ar6vnyxkss2kq            0       375101422          1               0.000632 select dmltype$$ from "DOM"."MLOG$_T1"  where xid$$ = :1  group by dmltype$$

We’ve got completely different SQL IDs again.

In both INSERT and DELETE we’ve lost any hint to do or avoid a semi-join and there’s no CARDINALITY hint.

/* MV_REFRESH (INS) */
INSERT INTO "DOM"."MV_T1_T2"
SELECT /*+ NO_MERGE("JV$") */ "JV$"."RID$","JV$"."OID","JV$"."FLAG","JV$"."OID_T2","JV$"."PADDING","MAS$0".ROWID,"MAS$0"."OID","MAS$0"."FLAG","MAS$0"."PADDING"
FROM ( SELECT "MAS$"."ROWID" "RID$"  ,  "MAS$".*  FROM "DOM"."T1" "MAS$"
       WHERE ROWID IN (SELECT  CHARTOROWID("MAS$"."M_ROW$$") RID$
                       FROM "DOM"."MLOG$_T1" "MAS$"
                       WHERE "MAS$".XID$$ = :1 )) "JV$", "T2" AS OF SNAPSHOT(:B_SCN)  "MAS$0"
WHERE "JV$"."FLAG"='Y' AND "JV$"."OID_T2"="MAS$0"."OID" AND "MAS$0"."FLAG"='Y'
-------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                  |            |      1 |        |      0 |00:00:00.01 |      15 |       |       |          |
|   1 |  LOAD TABLE CONVENTIONAL          |            |      1 |        |      0 |00:00:00.01 |      15 |       |       |          |
|   2 |   NESTED LOOPS                    |            |      1 |        |      1 |00:00:00.01 |       7 |       |       |          |
|   3 |    NESTED LOOPS                   |            |      1 |      1 |      1 |00:00:00.01 |       6 |       |       |          |
|   4 |     VIEW                          |            |      1 |      1 |      1 |00:00:00.01 |       3 |       |       |          |
|   5 |      NESTED LOOPS                 |            |      1 |      1 |      1 |00:00:00.01 |       3 |       |       |          |
|   6 |       SORT UNIQUE                 |            |      1 |      1 |      1 |00:00:00.01 |       2 | 73728 | 73728 |          |
|   7 |        TABLE ACCESS BY INDEX ROWID| MLOG$_T1   |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |          |
|*  8 |         INDEX RANGE SCAN          | I_MLOG$_T1 |      1 |      1 |      1 |00:00:00.01 |       1 |       |       |          |
|*  9 |       TABLE ACCESS BY USER ROWID  | T1         |      1 |      1 |      1 |00:00:00.01 |       1 |       |       |          |
|* 10 |     INDEX UNIQUE SCAN             | I_T2_1     |      1 |      1 |      1 |00:00:00.01 |       3 |       |       |          |
|* 11 |    TABLE ACCESS BY INDEX ROWID    | T2         |      1 |      1 |      1 |00:00:00.01 |       1 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   8 - access("MAS$"."XID$$"=:1)
   9 - filter("MAS$"."FLAG"='Y')
  10 - access("JV$"."OID_T2"="MAS$0"."OID")
  11 - filter("MAS$0"."FLAG"='Y')
DELETE FROM "DOM"."MV_T1_T2" SNA$
WHERE  "T1_ROWID" IN (SELECT /*+ NO_MERGE  */ *
                      FROM (SELECT  CHARTOROWID("MAS$"."M_ROW$$") RID$
                            FROM "DOM"."MLOG$_T1" "MAS$"   WHERE "MAS$".XID$$ = :1 )MAS$)
--------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT              |              |      1 |        |      0 |00:00:00.01 |      13 |
|   1 |  DELETE                       | MV_T1_T2     |      1 |        |      0 |00:00:00.01 |      13 |
|   2 |   NESTED LOOPS SEMI           |              |      1 |    101 |      1 |00:00:00.01 |       5 |
|   3 |    INDEX FULL SCAN            | I_MV_T1_T2_1 |      1 |    101 |    101 |00:00:00.01 |       1 |
|*  4 |    TABLE ACCESS BY INDEX ROWID| MLOG$_T1     |    101 |      1 |      1 |00:00:00.01 |       4 |
|*  5 |     INDEX RANGE SCAN          | I_MLOG$_T1   |    101 |      1 |    101 |00:00:00.01 |       3 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter("T1_ROWID"=CHARTOROWID("MAS$"."M_ROW$$"))
   5 - access("MAS$"."XID$$"=:1)

So, to briefly summarise the above: we have at least three clear code paths through the fast refresh mechanism, which result in completely different internal SQL being generated.
Multiple code paths add complexity when it comes to combining other features, upgrades, testing, etc.

It’s then no great surprise when such multiple codepaths lead to other bugs.

So which approach are we meant to take?

Should we just take the default longer commit on the chin?

I’m not sure.

But I can come up with an illustration of why we should think twice before going down the “_mv_refresh_use_stats” approach.
Firstly, it’s an underscore parameter, so we should really get Oracle Support approval before setting it (which means a fair bit of back and forth until you get someone who knows what you’re going on about).
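For reference, and this is exactly the sort of thing Support will ask about, you can check the current value of the hidden parameter from a SYS session. Note that x$ksppi and x$ksppcv are undocumented fixed tables, so treat this as a diagnostic sketch only:

```sql
-- Read the current session value of a hidden parameter (SYS only; undocumented views)
select i.ksppinm  name,
       v.ksppstvl value
from   x$ksppi  i,
       x$ksppcv v
where  i.indx = v.indx
and    i.ksppinm = '_mv_refresh_use_stats';
```
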


Secondly, we can quickly find some bugs around the usage of this parameter with the newer COMMIT SCN (also covered in Alberto’s blogs above)

drop materialized view log on t1;
drop materialized view log on t2;
create materialized view log on t1 with rowid, commit scn;
create materialized view log on t2 with rowid, commit scn;

drop materialized view mv_t1_t2;
create materialized view mv_t1_t2...

Then go back to our update and commit:

alter session set "_mv_refresh_use_stats" = true;

session SET altered.

update t1 set flag = 'Y' where oid = 1000001;

1 rows updated

commit;

SQL Error: ORA-12008: error in materialized view refresh path
ORA-00904: "SNAPTIME$$": invalid identifier
12008. 00000 -  "error in materialized view refresh path"
*Cause:    Table SNAP$_<mview_name> reads rows from the view
           MVIEW$_<mview_name>, which is a view on the master table
           (the master may be at a remote site).  Any
           error in this path will cause this error at refresh time.
           For fast refreshes, the table <master_owner>.MLOG$_<master>
           is also referenced.
*Action:   Examine the other messages on the stack to find the problem.
           See if the objects SNAP$_<mview_name>, MVIEW$_<mview_name>,
           <mowner>.<master>@<dblink>, <mowner>.MLOG$_<master>@<dblink>
           still exist.

We don’t actually need COMMIT SCN in our ON-COMMIT MV because we’re using the XID; COMMIT SCN is a newer mechanism for FAST REFRESH ON DEMAND.

I know that I don’t need COMMIT SCN in my case above, but the fact that it’s so easy to find an issue means that I’m reluctant to go down the parameter route.
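If you want to check which mechanism an existing MV log uses, the dictionary exposes it from 11.2 onwards. A quick sketch:

```sql
-- COMMIT_SCN_BASED = YES means the log uses COMMIT SCN rather than timestamps
select master, log_table, commit_scn_based
from   user_mview_logs
where  master in ('T1','T2');
```
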

Note that there are quite a few bug notes around this in Oracle Support.
A lot of the advice is conflicting, not least because the behaviour of “_mv_refresh_use_stats” changes mid 10.1 something.

At the moment, I think I’m going to go with the locked stats on the MLOGs but it’s worrying.

What a mess!

References:
Lots of Oracle doc ids…


Integrating Oracle MAF Application With Social Services

Nowadays we widely use social networks every...

Categories: DBA Blogs

Oracle AVDF - Database Firewall Policies

Yann Neuhaus - Thu, 2014-11-27 04:13

The successful deployment of a Database Firewall depends on an effective policy. Oracle AVDF includes preconfigured firewall policies as described in the Firewall Policy page in the Policy tab of the Audit Vault Server console.

These include policies that log all SQL statements, or log only unique SQL statements. In addition, the Database Firewall policy editor enables you to design your own policies quickly and efficiently.

Policy rules can depend on any combination of the SQL statement type, name of the database user, IP address of the database client, operating system user name, client program name, or any exceptions you specify.

 

First policy and global concept

1. Log in to the Audit Vault Server console as an auditor, and click on the Policy tab:

 

001_20141122-190809_1.png

 

2. Under the Policy menu, click Firewall Policy.

3. Click Create Policy.

The Create Policy dialog appears. Select the Database Type from the drop-down list (choice between IBM DB2, Microsoft SQL Server, MySQL, Oracle Database, Sybase ASE, Sybase SQL Anywhere), Enter a Policy Name and Optionally, enter a Description:

 

002.png

 

3. Click on “Create”. The new policy is created, and the policy's Overview page appears:

 

003.png

 

When you create a new policy, or click an existing policy name in the Firewall Policies page, that policy's Overview page appears. This page shows the policy rules that are being applied to the statement types (clusters) being monitored by the Database Firewall, as well as exceptions and other rules that may apply.

The policy's Overview page is divided into these sub-sections:

  • Exception Rules - Lists exceptions you have created. The rules that you have assigned to SQL statement clusters will not apply to these exceptions. You can move the rules up or down in the list. The rules are evaluated in the order listed.
  • Analyzed SQL - Displays the number of SQL statement clusters for which you have defined policy rules, and their policy actions (such as Warn or Block).
  • Novelty Policies (Any) - Lists special policies you have created for specific statement classes and/or specific tables in your secured target databases. If you have identified specific tables in a policy in this section, the policy rule applies if it matches Any of the tables.
  • Novelty Policies (All) - Lists special policies you have created for specific statement classes and/or specific tables in your secured target databases. If you have identified specific tables in a policy in this section, the policy rule applies if it matches All of the tables.
  • Default Rule - Shows the default rule for any statements that are not matched by the rules set for Analyzed SQL clusters, Exceptions, or Novelty Policies.
  • Policy Controls - Lets you configure firewall policy settings, create policy profiles, as well as sets of filters to use in defining profiles and Exception rules.

 

Practical case

Developing a policy is an iterative process of refining and improving it with new data. In order to create a policy, statements first have to be executed on the database. The examples below present some of the possibilities provided by firewall policies.

These examples are based on a very simple context with two schemas/users:

  1. PSI
  2. GRS

PSI schema contains three tables:

  1. PSI.CREDIT_CARD containing Credit Cards numbers
  2. PSI.EMP containing employees’ salaries
  3. PSI.TEST containing one non-sensitive row

 

SQL> select table_name from dba_tables where owner='PSI';

TABLE_NAME
------------------------------
EMP
CREDIT_CARD
TEST

 

Novelty Policy

The table CREDIT_CARD contains credit card numbers and EMP contains employee salaries. These two tables are very sensitive and nobody may look at them:

 

 

                   GRS    PSI
PSI.EMP            NOK    NOK
PSI.CREDIT_CARD    NOK    NOK
PSI.TEST           OK     OK

 

The first step in order to create this policy is to create a novelty rule. Novelty policies specify the action, logging level, and threat severity to use for specific types of statements and/or statements that operate on selected tables. Novelty policies can be used to loosen or tighten your normal policy rules if certain statements are encountered. In our context we want to create a novelty policy that will block all access to these tables:

1. In the Audit Vault Server console, select the “Policy” tab.

2. From the Policy menu, click “Firewall Policy”.

3. Click on the newly created Firewall Policy named “MyPolicy”

4. Click Add Novelty Rule in section Novelty Policy (Any):

5. In the Novelty Policy Details dialog, define the following:

a. Novelty Rule: Enter a name for this rule: MyNR

b. Statement Classes: Select one or more types of statements that SQL statements must match in order to apply this rule. In this example we have to select “Data Manipulation Read Only”

c. Policy Controls: Select the Action, Logging Level, and Threat Severity for this rule from the appropriate drop-down list. In this example we have to select “Block” for action and specify in the substitution field, the statement below:

select 'You do not have access to this table' from dual

6. Affected Tables: Select the table(s) to use for matching statements to this policy. In order to have tables in this list, tables have to be accessed first. If there is no activity on the database the list will be empty. In our specific case we select tables: PSI.EMP and PSI.CREDIT_CARD and we click on “Add Tables”:

 

004.png

 

7. Click on “Create”.

8. Now we can test this policy, which for the moment blocks any user trying to access these two tables. In order to apply it, we have to save the policy by clicking on “Save” and then “Publish”.

9. Click on “Secured Targets”

10. Click on the target where you want to apply the policy

11. Click on Firewall Policy

12. Select the Policy “MyPolicy”

13. Now you can check that the policy is applied by doing a select on this table.

 

C:\Users\administrateur>sqlplus psi/psi@souk
SQL*Plus: Release 12.1.0.2.0 Production on Wed. Nov. 12 13:36:14 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.credit_card;


'YOUDONOTHAVEACCESSTOTHISTABLE'

------------------------------------
You do not have access to this table
SQL>

 

We can execute the same query with user GRS, the result will be the same:

 

C:\Users\administrateur>sqlplus grs/grs@souk
SQL*Plus: Release 12.1.0.2.0 Production on Wed. Nov. 12 13:36:14 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.credit_card;


'YOUDONOTHAVEACCESSTOTHISTABLE'
------------------------------------
You do not have access to this table

SQL>
Exception Rule

The table CREDIT_CARD contains credit card numbers and EMP contains employee salaries. These two tables are still very sensitive, but since PSI has been promoted to Chief Financial Officer he needs access to them. Therefore we will create an exception for him:

 

                   GRS    PSI
PSI.EMP            NOK    OK
PSI.CREDIT_CARD    NOK    OK
PSI.TEST           OK     OK

  1. In order to modify the policy, it must not be in use. Click on Secured Targets, select the target, click on Firewall Policy, and change the policy to “Log All”:

 

005.png

 

2. Now you can make modification to your policy “MyPolicy”.

First of all we need to create a Profile. Within a firewall policy, a profile lets you define a different set of policy rules based on the session data associated with SQL statements. To define the profile, you use the session filters you defined in the Policy Controls section of the firewall policy. These session filters filter SQL statements based on:

  • IP addresses
  • Database user login names
  • Client program names (for example, SQL*Plus)
  • Operating system user names

In this example we will create a profile based on the database user login name. This user will be PSI.

3. Click on Policy

4. Click on Firewall Policy

5. Click on MyPolicy

6. Click on Database User Set

7. Create a new set by clicking on “Create New Set”

8. Enter values for the fields New Set Name and Member:

 

006.png

 

9. Click on “Create Set”

10. Click on “Cancel”

11. Click on “Profiles”

12. Create a new Profile by clicking on “Create New Profile”

13. Enter the Profile Name and select “UsersHavingAccessToMyTable” in the field “DB User Set”:

 

007.png

 

14. Click on “Create Profile”

 

008.png

 

Now we have to create an exception based on this profile. An exception determines the action, logging level, and threat severity to use when certain session data is encountered. For example, an exception could specify rules for statements that originate (or do not originate) from selected client IP addresses or database user names. In this example, the exception will be based on database user name.

Exceptions override all other policy rules. For example, you may want to override the normal policy rules if SQL statements originate from an administrator, or if they originate from anywhere other than a specific IP address.

You can define many exceptions and control the order in which they are evaluated. Each Exception has its own Action, Logging, and Threat Severity settings.

15. Click on policy

16. Click on firewall policy

17. Click on your newly created policy “MyPolicy”

18. Click on “Add Exception”

19. Enter the exception rule name: “ExceptionForPSITable”

20. In DB User Set, select “Include” and choose “UsersHavingAccessToMyTable”

21. In Policy Control click on “Pass”

22. Click on “Create”:

 

009.png

 

23. Click on “Save” and “Publish”

24. Apply this policy to the target

Now, the user PSI can access all his tables, while user GRS has no access to the sensitive tables.

 

C:\Users\administrateur>sqlplus grs/grs@souk
SQL*Plus: Release 12.1.0.2.0 Production on Wed. Nov. 12 14:09:07 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.emp;

'YOUDONOTHAVEACCESSTOTHISTABLE'
------------------------------------

You do not have access to this table

SQL> select * from psi.credit_card;

'YOUDONOTHAVEACCESSTOTHISTABLE'

------------------------------------

You do not have access to this table


SQL> select * from psi.test;

NAME
----------
hello

C:\Users\administrateur>sqlplus psi/psi@souk
SQL*Plus: Release 12.1.0.2.0 Production on Wed. Nov. 12 14:18:54 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.credit_card;

NAME               ID
---------- ----------

Larry     4,8960E+15
John       7,8623E+15

SQL> select * from psi.emp;


NAME             SAL
---------- ----------
Larry         150000
John           80000

SQL> select * from psi.test;

NAME
----------
hello
SQL>
Analyzed SQL

With the exception, we granted the PSI database user access to all his tables. Since PSI didn’t present good results to the shareholders, he has been replaced by a new CFO, who decided that PSI now only has access to credit card numbers and is no longer allowed to run select statements on the employees’ salaries table:

 

 

                   GRS    PSI
PSI.EMP            NOK    NOK
PSI.CREDIT_CARD    NOK    OK
PSI.TEST           OK     OK

  1. First of all we have to remove the exception we created before, by clicking on the exception rule and then on “Delete”

  2. In the policy overview, click on “Modify SQL”, then click on “Change”

  3. Select “Secured Target”, select the profile “ProfileForPrivsUsers”, and set Event Time to the last 24 hours.

  4. Click on “Apply”

  5. Click on column header “User Name”

  6. Select “psi”

  7. A line looking like the one below should appear in the list

  8. Select this line

  9. Click on “Set Policy”

  10. In the Action list click on “Pass”:


    0010.png


  11. If you have a look on the list of SQL, the following statement should appear.


    0013.png

  12. Create a profile for user GRS named “ProfileForStdUsers”, as we did for user PSI. This profile won't be allowed to execute the statement “Select * from psi.credit_card”.

  13. In “Analyze SQL” select profile “ProfileForStdUsers” and filter on GRS user as we did in step 6.

  14. Select the following statement:


    0014.png


  15. Click on “Set Policy”

  16. Select action “Block”

  17. In the field Substitution enter the following: “select 'Your profile does not allow access to this statement' from dual ”

  18. Now your section “Analyzed SQL” should look like the screenshot below:

  19. Save and publish the modifications made to this policy

  20. Apply this policy to your target: click on Secured Targets, then Firewall Policy, and select “MyPolicy” in the list.

Now we can test the access:


C:\Users\administrateur>sqlplus grs/grs@souk
SQL*Plus: Release 12.1.0.2.0 Production on Wed. Nov. 12 16:33:55 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.emp;


'YOUDONOTHAVEACCESSTOTHISTABLE'
------------------------------------
You do not have access to this table

SQL> select * from psi.credit_card;
'YOURPROFILEDOESNOTALLOWACCESSTOTHISSTATEMENT'
----------------------------------------------------
Your profile does not allow access to this statement

SQL> select * from psi.test;

NAME
----------
hello

 

C:\Users\administrateur>sqlplus psi/psi@souk
SQL*Plus: Release 12.1.0.2.0 Production on Wed. Nov. 12 16:35:35 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.emp;

'YOUDONOTHAVEACCESSTOTHISTABLE'
------------------------------------
You do not have access to this table


SQL> select * from psi.credit_card;


NAME               ID
---------- ----------
Larry     4,8960E+15
John       7,8623E+15

SQL> select * from psi.test;

NAME
----------
hello

 

This blog is not intended to be a comprehensive description of all AVDF policy functionalities, but I hope it gave you a good overview of some basic features.

HOWTO: Create a Structured XMLIndex, using the In-Memory Column Store

Marco Gralike - Thu, 2014-11-27 03:41
In Oracle database version 12.1.0.2, Oracle introduced the Oracle In-Memory Database option. It is possible…

White Papers

Oracle in Action - Wed, 2014-11-26 23:44

RSS content

Oracle’s Approach to Performance Tuning Part-I by Darrick Addison

Oracle’s Approach to Performance Tuning Part-II by Darrick Addison

SQL Plan Management in 11g (Oracle White Paper)

SQL Plan Management in 12c (Oracle White Paper)

Adaptive Cursors And SQL Plan Management (Arup Nanda)

Partitioning in 11g (Oracle White paper)

Oracle Database Parallel Execution Fundamentals (Oracle White Paper)

Understanding Parallel Execution Part-I (Randolf Geist)

Understanding Parallel Execution Part-II (Randolf Geist)

Oracle Active Dataguard 11g (Oracle White Paper)

Oracle 11g RAC (Oracle White paper)

Oracle 11gR2 RAC (Oracle White Paper)

Oracle Single Client Access Name (Oracle White Paper)

Oracle RAC One Node (Oracle White Paper)

11g R2 RAC : Architecture, Best Practices And Troubleshooting (Kai Yu)

Automatic Workload Management With Oracle RAC (Oracle White Paper)

RAC Administering Parallel Execution (Riyaz Shamsudeen)

Using RAC Parallel Instance Groups (Chris Lawson)

Oracle 12c RAC (Oracle White paper)

Maximize Availability with Oracle 12c (Oracle White Paper)



Copyright © ORACLE IN ACTION [White Papers], All Right Reserved. 2014.

The post White Papers appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Oracle 12c privilege analysis rocks

Yann Neuhaus - Wed, 2014-11-26 14:18

12c came with a very nice feature: privilege analysis. You don't know which privileges are required? Then just grant DBA, run your application, and check which minimal privileges were actually needed. And today, I've discovered how powerful it is: you can even see privileges used internally, even when they are not used via SQL, and even when they are not documented.

It starts like that, with a question from Vladimir Sitnikov (who publishes very interesting stuff on his Twitter account) in the tone of a challenge:

@FranckPachot Ok, ace. Do you think dbms_utility.get_parameter_value requires special grants (e.g. in current 11gR2)?

— Vladimir Sitnikov (@VladimirSitnikv) November 26, 2014

So I went to the doc, which describes a special security model for some functions, but says nothing about GET_PARAMETER_VALUE.

Then I created a simple user with only CREATE SESSION privilege and got:

SQL> drop user TEST;
User dropped.
SQL> grant create session to TEST identified by TEST;
Grant succeeded.
SQL> connect TEST/TEST
Connected.
SQL> variable i number
SQL> variable s varchar2(1000)
SQL> variable t number
SQL> exec :t:=DBMS_UTILITY.GET_PARAMETER_VALUE('NLS_LENGTH_SEMANTICS',:i,:s);
BEGIN :t:=DBMS_UTILITY.GET_PARAMETER_VALUE('NLS_LENGTH_SEMANTICS',:i,:s); END;
*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_UTILITY", line 140
ORA-06512: at line 1

So, which privileges do you need? Let's try the 12c privilege analysis:

SQL> grant dba to TEST;
Grant succeeded.

SQL> connect / as sysdba
Connected.

SQL> exec dbms_privilege_capture.create_capture (name=>'demo',type =>dbms_privilege_capture.g_role,roles=>role_name_list('DBA'));
PL/SQL procedure successfully completed.

SQL> exec dbms_privilege_capture.enable_capture (name=>'demo');
PL/SQL procedure successfully completed.

SQL> connect TEST/TEST
Connected.

SQL> exec :t:=dbms_utility.get_parameter_value('NLS_LENGTH_SEMANTICS',:i,:s);
PL/SQL procedure successfully completed.

SQL> print s

S
--------------------------------------------------------------
BYTE

SQL> connect / as sysdba
Connected.

SQL> exec dbms_privilege_capture.disable_capture(name=>'demo');
PL/SQL procedure successfully completed.

SQL> exec dbms_privilege_capture.generate_result(name=>'demo');
PL/SQL procedure successfully completed.

SQL> select object_owner,object_name,obj_priv from dba_used_objprivs ;

OBJECT_OWN OBJECT_NAME     OBJ_PRIV
---------- --------------- ----------
SYS        V_$PARAMETER    SELECT

SQL> select path from dba_used_objprivs_path ;

PATH
--------------------------------------------------------------
GRANT_PATH('TEST', 'DBA', 'EXP_FULL_DATABASE', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'EM_EXPRESS_ALL', 'EM_EXPRESS_BASIC', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'DATAPUMP_IMP_FULL_DATABASE', 'EXP_FULL_DATABASE', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'DATAPUMP_EXP_FULL_DATABASE', 'EXP_FULL_DATABASE', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'IMP_FULL_DATABASE', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'DATAPUMP_IMP_FULL_DATABASE', 'IMP_FULL_DATABASE', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'SELECT_CATALOG_ROLE')

SQL> exec dbms_privilege_capture.drop_capture (name=>'demo');
PL/SQL procedure successfully completed.

I've granted the DBA role and run the privilege analysis capture on that role while calling the function. And bingo: you need to be granted SELECT on V_$PARAMETER (which comes into the DBA role through SELECT_CATALOG_ROLE) ... which sounds legitimate, as the goal is to get a parameter value.

But do you know what? DBMS_UTILITY.GET_PARAMETER_VALUE does not execute any select statement. That behavior is documented in the package for other functions, but not for GET_PARAMETER_VALUE:

Rem The dbms_utility package is run-as-caller (psdicd.c) only for
Rem its name_resolve, compile_schema, analyze_schema, wait_on_pending_dml,
Rem and expand_sql_text procedures. This package is not run-as-caller
Rem w.r.t. SQL (psdpgi.c) so that the SQL works correctly (runs as
Rem SYS). The privileges are checked via dbms_ddl.

 

That function calls a C function (KSPGPNICD), so we don't know what happens behind the scenes. If you sql_trace it, you don't see anything about V_$PARAMETER.

But privilege analysis shows the required privileges anyway, and that rocks.
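For completeness (my addition, not in the original post): the same capture also records any system privileges exercised, which you can read after GENERATE_RESULT:

```sql
-- System privileges exercised during the capture window
select username, used_role, sys_priv
from   dba_used_sysprivs
where  capture = 'demo';
```
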

Happy Thanksgiving

Oracle AppsLab - Wed, 2014-11-26 13:55

Editor’s note: Here’s a first post from one of our new team members, Thao Nguyen (@thaobnguyen), who runs our Emerging Interactions team, the Research and Design part of the R, D & D.

That last D is Development if that’s unclear. Anyway, like Thao says, Happy Thanksgiving for those who celebrate it, and for those who don’t enjoy the silence in our absence. To Thao’s question, I’m going with Internet. Yes, it’s a gadget because it’s a series of tubes, not a big truck.

Find the comments to add the gadget for which you are most thankful.

Tomorrow is Thanksgiving and this seems like a good time to put my voice out on The AppsLab (@theappslab). I’m Thao, and my Twitter (@thaobnguyen) tagline is “geek mom.” I’m a person of few words and those two words pretty much summarize my work and home life. I manage The AppsLab researchers and designers. Jake welcomed us to the AppsLab months ago here, so I’m finally saying “Thank you for welcoming us!”

Photo by floodllama on Flickr used under Creative Commons


As we reflect on all the wonderful things in our lives, personal and professional, I sincerely want to say I am very thankful for having the best work family ever. I was deeply reminded of that early this week, when I had a little health scare at work and was surrounded by so much care and support from my co-workers. Enough of the emotional stuff, and onto the fun gadget stuff . . . .

My little health scare led me to a category of devices that hadn’t hit my radar before – potentially life-saving personal medical apps. I’ve been looking at wearables, fitness devices, healthcare apps, and the like for a long time now, but there is a class of medical-grade devices (at least, recommended by my cardiologist) that could be so valuable in my life, as well as to those dear to me . . . AliveCor. It essentially turns your smartphone into an ECG device so you can monitor your heart health anytime and share it with your physician. Sounds so cool!

Back to giving thanks, I’m so thankful for all the technology and gadgets of today – from the iPhone and iPad that let me have a peaceful dinner out with the kids to these medical devices that I’ll be exploring now. I want to leave you with a question: “What gadget are you most thankful for?”

Lunchtime quiz

Jonathan Lewis - Wed, 2014-11-26 06:41

There was a question on OTN a few days ago asking the following question:

Here’s a query that ran okay on 11g, but crashed with Oracle error “ORA-01843: not a valid month” after upgrade to 12c; why ?

The generically correct answer, of course, is that the OP had been lucky (or unlucky, depending on your point of view) on 11g – and I’ll explain that answer in another blog posting.

That isn’t the point of this posting, though. This posting is a test of observation and deduction. One of the respondents in the thread had conveniently supplied a little bit of SQL that I copied and fiddled about with to demonstrate a point regarding CPU costing, but as I did so I thought I’d show you the following and ask a simple question.


drop table T;

Create Table T
As
with
periods as (
                  Select 'January' period, 1 cal  From Dual
        union all Select 'February' period , 2 cal From Dual
        union all Select 'March' period , 3 cal From Dual
        union all Select 'April' period , 4 cal From Dual
        union all Select 'May'  period, 5 cal From Dual
        union all Select 'June' period, 6 cal From Dual
        union all Select 'July' period, 7 cal From Dual
        union all Select 'August' period, 8 cal From Dual
        union all Select 'September' period, 9 cal  From Dual
        union all Select 'October' period, 10 cal From Dual
        union all Select 'November' period, 11 cal From Dual
        Union All Select 'December' Period, 12 Cal From Dual
        Union All Select '13 Series' Period, Null Cal  From Dual
)
Select  Period,Cal
from periods;

prompt  ==================================
prompt  When we invoke below SQL it works.
prompt  ==================================

set autotrace on explain

select *
from    (
        select
                Period,
                Cal,
                to_date(Period || ', ' || 2014,'Month, YYYY') col1 ,
                to_date('November, 2014','Month, YYYY') col2
        From  T
        Where  Cal > 0
        )
;

prompt  ================================================
prompt  But when we add comparison operations , it fails
prompt  ================================================

select *
from    (
        select
                Period,
                Cal,
                to_date(Period || ', ' || 2014,'Month, YYYY')   col1,
                to_date('November, 2014','Month, YYYY')         col2
        From  T
        Where  Cal > 0
        )
where
        col1 >= col2
;

set autotrace off



All I’ve done is create a table then run and generate the execution plans for two queries – with a comment that if you try to run one query it will succeed but if you try to run the other it will fail (and raise ORA-01843). As far as the original supplier was concerned, both queries succeeded in 11g and the failure of the second one appeared only in 12c. In fact, for reasons that I won’t discuss here, it is POSSIBLE for the failure to appear in 11g as well, though not necessarily with this exact data set.

Here’s the COMPLETE output I got from running the code above on an 11.2.0.4 instance:



Table dropped.


Table created.

==================================
When we invoke below SQL it works.
==================================

PERIOD           CAL COL1      COL2
--------- ---------- --------- ---------
January            1 01-JAN-14 01-NOV-14
February           2 01-FEB-14 01-NOV-14
March              3 01-MAR-14 01-NOV-14
April              4 01-APR-14 01-NOV-14
May                5 01-MAY-14 01-NOV-14
June               6 01-JUN-14 01-NOV-14
July               7 01-JUL-14 01-NOV-14
August             8 01-AUG-14 01-NOV-14
September          9 01-SEP-14 01-NOV-14
October           10 01-OCT-14 01-NOV-14
November          11 01-NOV-14 01-NOV-14
December          12 01-DEC-14 01-NOV-14

12 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    12 |   228 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    12 |   228 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("CAL">0)

Note
-----
   - dynamic sampling used for this statement (level=2)

================================================
But when we add comparison operations , it fails
================================================

PERIOD           CAL COL1      COL2
--------- ---------- --------- ---------
November          11 01-NOV-14 01-NOV-14
December          12 01-DEC-14 01-NOV-14

2 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    19 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |     1 |    19 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("CAL">0 AND TO_DATE("PERIOD"||', '||'2014','Month,
              YYYY')>=TO_DATE(' 2014-11-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))


So this is the question. What’s the anomaly in this output ?

Bonus question: What’s the explanation for the anomaly ?


Mandatory Auditing - Oracle 12c Always-On-Auditing

Certainly from an auditing and logging perspective, one of the best new features delivered by Oracle 12c is mandatory auditing of administrative users such as SYSDBA.  This can be described as ‘always-on auditing’.  By default, the following audit-related activities are now mandatorily audited:

  • CREATE AUDIT POLICY
  • ALTER AUDIT POLICY
  • DROP AUDIT POLICY
  • AUDIT
  • NOAUDIT
  • EXECUTE of the DBMS_FGA PL/SQL package
  • EXECUTE of the DBMS_AUDIT_MGMT PL/SQL package
  • All configuration changes that are made to Oracle Database Vault
  • ALTER TABLE attempts on the AUDSYS audit trail table (this table cannot be altered)
  • Top level statements by administrative users SYS, SYSDBA, SYSOPER, SYSASM, SYSBACKUP, SYSDG, and SYSKM, until the database opens.  When the database opens, Oracle Database audits these users using the audit configurations in the system.

The audit activity resulting from mandatory auditing can be found in SYS.UNIFIED_AUDIT_TRAIL. 

Note that when the database is not writable (such as during database mounting), if the database is closed, or if it is read-only, Oracle writes the audit records to external files in the $ORACLE_BASE/audit/$ORACLE_SID directory.
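A quick way to review what mandatory auditing has recorded is to query the unified audit trail directly. A minimal sketch, assuming pure unified auditing mode; the ACTION_NAME filter simply mirrors some of the statements listed above:

```sql
-- Hypothetical sketch: recent audit-administration activity from the
-- unified audit trail, newest first.
SELECT event_timestamp,
       dbusername,
       action_name,
       system_privilege_used,
       sql_text
FROM   sys.unified_audit_trail
WHERE  action_name IN ('CREATE AUDIT POLICY', 'ALTER AUDIT POLICY',
                       'DROP AUDIT POLICY', 'AUDIT', 'NOAUDIT')
ORDER  BY event_timestamp DESC;
```

In mixed mode the same events would also need to be checked in the traditional audit trails.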

Mandatory Auditing

Integrigy Framework Event

  • CREATE AUDIT POLICY
  • ALTER AUDIT POLICY
  • DROP AUDIT POLICY
  • EXECUTE of the DBMS_FGA PL/SQL package
  • EXECUTE of the DBMS_AUDIT_MGMT PL/SQL package
  • All configuration changes that are made to Oracle Database Vault
  • ALTER TABLE attempts on the AUDSYS audit trail table (remember that this table cannot be altered)

E12 - Modify audit and logging

  • Top level statements by the administrative users SYS, SYSDBA, SYSOPER, SYSASM, SYSBACKUP, SYSDG, and SYSKM until the database opens
  • AUDIT
  • NOAUDIT

E11 - Privileged commands

Note: Activity can be found in SYS.UNIFIED_AUDIT_TRAIL when in pure mode, and in the traditional audit trails when in mixed mode.

If you have questions, please contact us at info@integrigy.com

Reference Tags: Auditing, Oracle Database
Categories: APPS Blogs, Security Blogs

Suppressing ADF LOV Like Operator Filtering V2

Andrejus Baranovski - Wed, 2014-11-26 01:53
I had a post about a solution to suppress the ADF LOV Like operator and prevent the LOV popup from loading when the user is typing an existing value - Suppressing ADF LOV Like Operator Filtering. Thanks to a blog reader, one use case was found in which the Like operator was not suppressed properly. This is fixed now, and I would like to post the updated sample application here.

An additional method is overridden in the LOV VO implementation class - applyViewCriteria(ViewCriteria, boolean). Besides the previously overridden method buildViewCriteriaClauses(ViewCriteria), this method provides additional handling for LOV Like operator suppression. Download the sample application - ADFTableApp_v2.zip.

The blog reader reported an issue: when the LOV value was changed to 10, then to 100, and then back to 10, the LOV popup was opened on the last change. This should not happen, as the value 10 exists in the LOV. It seems that when the LOV value is changed back to the same value it held before, ADF BC executes an extra call to the LOV filter through the applyViewCriteria method. This is why this method is overridden as well as buildViewCriteriaClauses.

Both methods are overridden, and a check for the STARTSWITH (Like) operator is implemented:


It works well now. Try to enter value 10, LOV will accept this value without opening LOV popup:


Enter value 100, this value will be accepted as it exists in the LOV list:


Again enter value 10; it will be accepted as it was previously (without the overridden applyViewCriteria method, it would not be):


If I type any value, not available in the LOV list:


As expected LOV popup will be opened and user could select a valid value:

The insanity that is Uber - a $100B company?

FeuerThoughts - Tue, 2014-11-25 20:55
So we've had taxis for years and we know that generally taxi drivers work hard, long hours and make small amounts of money. The cab companies make more, of course, but I don't think there are a whole lot of billionaires in the taxi business.

And now there is Uber. An earlier round of VC $ put its value at $17B. According to Fortune, Uber is now "raising new funding at a valuation of between $35 billion and $40 billion, according to a new report from Bloomberg. This would be one of the richest “venture capital” rounds in history (Facebook still holds the crown), and likely mean that investors expect Uber to eventually go public at a valuation of at least $100 billion."
How are we to make any sense of this? Where would all the money come from to make all these investors (and shareholders) rich?
By cutting out the "middleman" (regulation to ensure safe rides, primarily)? Maybe, but I can't imagine it will generate that much revenue?
By reducing the cost of a ride, compared to a taxi? That's true, apparently, some of the time with Uber, but often it is way MORE expensive - because prices are "market-driven."
By shifting more and more of the costs and risks to the drivers? That's pretty darn likely. Just look at the poor "contractors" who have to pay for their trucks and lease their gear from FedEx. 
By shifting riders from mass transit to Uber (in other words, greatly expanding the "pie" of pay-per-ride)? Again, that seems unlikely.
What am I missing? How could Uber replace an existing business that brings in nowhere near that much money and suddenly be printing the stuff?
Oh, and that's if they don't self-destruct due to the cavalier, arrogant attitudes and actions of their management.
Categories: Development

Cooking The Bird

Floyd Teter - Tue, 2014-11-25 19:08
So this has very little to do with Oracle, but it’s the big thing everyone has been asking me about over the past two or three weeks.  If you’re a vegan or a vegetarian, you may want to stop reading right now.

Thanksgiving is coming up here in the States.  It’s a big deal in terms of remembering what to be thankful for.  It’s also a big deal in terms of cooking, especially cooking turkey.  Y’all asked for it, so here it is: this is my best recipe for cooking a Thanksgiving turkey.  It will make about 18 servings.

Orange Brine
---------------

  • 6 cups water
  • 2 cups kosher or sea salt, or one cup table salt
  • 1 cup white sugar or 2 cups brown sugar
  • 2 oranges, quartered
  • 1 tablespoon of whole cloves
  • 3 bay leaves
  • 2 teaspoons whole peppercorns
  • 1 Whole Turkey, 12 to 14 pounds, thawed
  • 1 Tablespoon Extra Virgin Olive Oil


Maple Glaze
---------------
  • 1/2 cup pure maple syrup
  • 1 small orange, juiced
  • 1 teaspoon of pure vanilla extract


  • In a large saucepan over high heat, bring the water, salt and sugar to a boil.  Be sure you’re stirring to dissolve the salt and sugar.  Once boiling, turn off heat and let cool to room temperature.
  • In a 3-gallon food-safe container (I use a food storage bucket or a camping cooler), combine one gallon of water with the oranges, cloves, bay leaves and peppercorns.  Add the sugar-salt solution and stir.
  • Congrats!  You’ve made the brine!
  • Remove the giblets and neck from the turkey (I keep ‘em around for making gravy).  Remove excess fat and pat the turkey dry with paper towels.
  • Submerge the turkey in the brine.  Top it off with a weight if needed to keep it submerged.  If the turkey is a bit large, add more water.
  • Keep the turkey and brine in the refrigerator for 12 hours.
  • While the turkey is soaking in the brine, make your glaze.  Stir all three ingredients in a small bowl.  Cover and refrigerate until one hour before the turkey is done cooking.  Don’t worry, we’ll use this in a bit.
  • If you’re cooking in a smoker, load the smoker with apple or cherry wood and start your fire.  If you’re grilling, set up your grill for indirect medium heat (google this if you need instructions).  If you’re cooking in the oven, pre-heat to 325 degrees F.
  • Remove the turkey from the brine and pat it dry with paper towels.  Brush all over with a thin coating of the olive oil.  Do not season, as the brine soak took care of that.
  • Put your turkey in a large foil roasting pan.
  • Place the turkey, still in the roasting pan, in your smoker/grill/oven breast side up!!!  Close the lid or door and find something else to do - don’t peek.  If you’re grilling or oven roasting, plan to cook around 13 minutes per pound.  In a smoker, figure it closer to 30 minutes per pound - yeah, that’s a long cook, so plan accordingly.  NOTE:  if you’re grilling or oven roasting, you’ll likely miss out on most of the wood smoke flavor.  Grab some Liquid Smoke in Applewood flavor from the BBQ Sauce section of your local grocery - add in a teaspoon when you’re making the glaze; it’s not the same, but it’ll fool most people.  Just keep in mind that too much will make your turkey taste extremely bitter, so err on the light side.
  • Discard your brine.  You’re all done with it.
  • After two hours, begin basting with a combination of orange juice plus either water or apple juice (not both!); I prefer apple juice - more moistening and leaves no flavor behind, but to each his/her own. Baste every two hours until the glaze is applied.
  • One hour before the turkey is done cooking, remove the glaze from the refrigerator and let sit at room temp.
  • 30 minutes before your turkey is done cooking, remove the turkey from the foil pan.  See all the drippings in your pan?  That’s for the gravy.  Grab the pan and put the turkey breast up directly on the cooking grate.  Close the lid or door and go make your gravy.
  • When you grab the pan and drippings for the gravy, brush the glaze all over your turkey.
  • Your turkey is done when an instant-read thermometer inserted in the thickest part of the thigh reads 180 degrees F.  Don’t have an instant-read thermometer? Stick a toothpick in the thickest part of the thigh without touching the bone; remove the toothpick and inspect the juices running out of the hole; your turkey is done when the juices run clear.
  • When your turkey is done, remove from the heat to a platter and (very important) let it stand for 20 minutes at room temp before carving!
  • But wait, you say, what about stuffing???  Truth is, using this recipe, the inside of the bird will never get hot enough to entirely cook the stuffing.  I cook my stuffing in an aluminum pan on top of the stove…usually add a teaspoon or two of the drippings while it’s cooking.  If I want the bird stuffed, I’ll stuff it while it’s standing after the cooking is done…yeah, I normally don’t do this…never have heard any complaints.
  • One note about something everyone fusses over:  carving.  I use an odd technique taught to me by a professional butcher - it keeps the meat juicier, avoids shredding the meat, and makes the overall presentation much better.  First, remove the entire breast from the bone in one large cut.  Cut across the breast to make crescent-shaped pieces.  Move the pieces to the serving platter as a whole breast, then cut off and add the drumsticks, thighs and wings.

So there ya go!  We'll get back to the Oracle stuff next week.  In the meantime, enjoy and have a happy Thanksgiving!

Coding in PL/SQL in C style, UKOUG, OUG Ireland and more

Pete Finnigan - Tue, 2014-11-25 18:50

My favourite language is hard to pinpoint; is it C or is it PL/SQL? My first language was C and I love the elegance and expression of C. Our product PFCLScan has its main functionality written in C. The....[Read More]

Posted by Pete On 23/07/14 At 08:44 PM

Categories: Security Blogs

Integrating PFCLScan and Creating SQL Reports

Pete Finnigan - Tue, 2014-11-25 18:50

We were asked by a customer whether PFCLScan can generate SQL reports instead of the normal HTML, PDF, MS Word reports so that they could potentially scan all of the databases in their estate and then insert either high level....[Read More]

Posted by Pete On 25/06/14 At 09:41 AM

Categories: Security Blogs

Automatically Add License Protection and Obfuscation to PL/SQL

Pete Finnigan - Tue, 2014-11-25 18:50

Yesterday we released the new version 2.0 of our product PFCLObfuscate . This is a tool that allows you to automatically protect the intellectual property in your PL/SQL code (your design secrets) using obfuscation and now in version 2.0 we....[Read More]

Posted by Pete On 17/04/14 At 03:56 PM

Categories: Security Blogs

Twitter Oracle Security Open Chat Thursday 6th March

Pete Finnigan - Tue, 2014-11-25 18:50

I will be co-chairing/hosting a twitter chat on Thursday 6th March at 7pm UK time with Confio. The details are here . The chat is done over twitter so it is a little like the Oracle security round table sessions....[Read More]

Posted by Pete On 05/03/14 At 10:17 AM

Categories: Security Blogs

PFCLScan Reseller Program

Pete Finnigan - Tue, 2014-11-25 18:50

We are going to start a reseller program for PFCLScan and we have started the planning and recruitment process for this program. I have just posted a short blog on the PFCLScan website titled " PFCLScan Reseller Program ". If....[Read More]

Posted by Pete On 29/10/13 At 01:05 PM

Categories: Security Blogs