Feed aggregator

Need to Know Source of SQL Text

Tom Kyte - Mon, 2016-08-29 18:46
I have a entry in v$sql - SELECT ORG_ENTITY_ID FROM MS_QMS_FLOW_DOWN_ORG_V WHERE ACTIVITY_NAME = :B2 AND UPPER(USER_NAME) = UPPER(:B1 ). Is there a way to find out which Package/Procedure caused this Select statement to run ? PS: I am new to the f...
Categories: DBA Blogs

Now is the Time. This is the Place. ERP Cloud at OpenWorld 2016.

Linda Fishman Hoyle - Mon, 2016-08-29 17:46

A Guest Post by Oracle's Kazim Isfahani, Director, ERP Strategy

The excitement level is building for OpenWorld 2016, September 18 - 22 in San Francisco, CA. CFOs, project leaders, and procurement officers, along with finance and operations leaders, will gather at the premier executive destination to learn more about shifting their business to the cloud to build the business of tomorrow.

Last year, attendance at OpenWorld sessions with an ERP content focus grew 80 percent. One year later, we have twice as many customers and we're offering more than 50 general, conference, and 'Meet the Expert' sessions expressly dedicated to ERP Cloud. These sessions are designed to help ERP Cloud customers and prospects hear from their peers, learn from Oracle's experts, and maximize their time at OpenWorld.

ERP Showcase

New this year is the ERP Showcase which brings together Oracle’s ERP team and partners to help organizations create their own path to the cloud. All track sessions, product demos, theater presentations, networking, and more will be co-located on the third level in Moscone West. This is a unique opportunity to secure all of your ERP Cloud needs from one central location. ERP Showcase sponsors include Cognizant Technology Solutions, Emtec, Forsys, Huron, Infovity, and Prometheus Group.

Customers Sharing Their Journeys

Again this year, we’ll feature a plethora of customers, each willing to share their journey to the cloud. Attendees will have the opportunity to hear from senior executives from companies including Orange, Qualcomm, GE Digital, Profound Medical, City of Detroit, Alex Lee, Skanska, Hillside Family Agencies, Ballard, and the Cerebral Palsy Association of NY. These speakers bring a treasure trove of critical information and proven success recipes that will guide those of you seeking an ERP Cloud journey of your own. The learnings gleaned from these cloud veterans will prove invaluable!

ERP Cloud Execs and Sessions

Key Oracle ERP executives will be on hand, including Rondy Ng, SVP Applications Development; Tom Anthony, VP Procurement Product Strategy; Colleen Baumbach, VP Applications Development; and Terrance Wampler, VP Financials Product Strategy. Each will be showcasing the latest product releases and associated roadmaps across the ERP family.

Here are a few of the more than 50 not-to-be-missed ERP Cloud sessions from the Session Catalog:

  • “How Oracle’s Modern Cloud Applications Can Drive Your Success,” September 19, 11:00 am PT | Room 3008 [GEN7758]
  • “Run Your Modern Business on a Modern Cloud,” September 19, 2:30 pm PT | Room 3008 [GEN7298]
  • “Oracle Procurement—Empowering Modern Procurement,” September 19, 4:30 pm PT | Room 3001 [GEN7254]
  • “Oracle Project Portfolio Management Cloud—Seize the Digital Opportunity,” September 19, 4:30 pm PT | Room 3003 [GEN7279]
  • “Finance Takes Control and Transforms with the Cloud,” September 20, 10:45 am PT | Room 3001 [GEN7299]
  • “Managing Your Revenue by IFRS 15/ASC 606: Oracle Financials Cloud and Oracle E-Business Suite,” September 21, 11:00 am PT | Room 3016 [CON7314]
  • “Oracle Financials Cloud Implementation: A Customer Experience,” September 21, 3:00 pm PT | Room 3001 [CON6064]

 

Meet the Experts

This ever-popular activity is an opportunity for you to meet with the Oracle ERP Cloud experts, the people who design and build the applications. There are three sessions scheduled.

  • “Meet the Experts: Oracle Procurement Cloud” September 20, 11:00 am PT | Room 3001A [MTE7785]
  • “Meet the Experts: Oracle Financials Cloud” September 21, 3:00 pm PT | Room 3001A [MTE7784]
  • “Meet the Experts: Oracle Project Portfolio Management Cloud,” Sep 21, 12:15 pm PT| Room 3001A [MTE7792]

 

These interactive sessions allow for discussions on Oracle Applications strategy and customer specific business and IT strategy. The experts are available to discuss the latest releases and share insights into the best path for your enterprise. For those of you considering ERP Cloud, this is a perfect opportunity to meet with the product development and strategy team, and learn more about the product. Space is limited, so we recommend that you pre-register.

Demo Grounds and More

The Demo Grounds are a great way for you to network and discover everything our partners have to offer. Want all that ERP Cloud has planned in one place? Visit ERP Central @ OpenWorld 2016.

What About After Hours?

After a long day of learning, networking, and sharing, OpenWorld has plenty of fun planned as well.

  • Welcome Reception on Sunday, September 18 at 7:00 pm on the Oracle Cloud Plaza @ Howard Street
  • ERP Central Happy Hour on Tuesday, September 20 at 6:00 pm, on the third floor of Moscone West
  • Billy Joel concert on Wednesday, September 21 at AT&T Park, an exclusive, private event
  • Numerous partner events; visit the partners’ booths on the third level in Moscone West for details.

 

Register Now

Oracle ERP Cloud is top of mind for companies of every size, industry, and geography. The ERP Cloud agenda for this premier event offers you both depth and breadth, plus an opportunity to hear from experienced Oracle ERP Cloud customers and to apply best practices in your own organization.

Filenames in AWR reports

Yann Neuhaus - Mon, 2016-08-29 14:33

If you have read my latest blog posts, you know I've measured IOPS with SLOB to estimate the ACFS overhead on fast storage. This post is about something I learned after wasting an hour on the results.

Here is how I did my tests:

  1. Create a SLOB database in ACFS
  2. Run SLOB PIO tests and tag the AWR report as ‘ACFS’
  3. Move datafile to +DATA
  4. Run SLOB PIO tests and tag the AWR report as ‘ASM’

Of course, I scripted it to run several tests, varying the number of sessions, work unit, etc., while I was doing something more productive.

When done, I had a set of AWR reports, and the first task was to check that they were consistent. They were not: the datafile in the 'File IO Stats' section did not match the tag I had put in the file name. At first I suspected a bug in my script, with bad tagging or a failed datafile move, but reading the alert.log showed that my tagging was good and the filename in the AWR reports was wrong. I finally looked at the AWR views to understand why the filename was wrong, and understood the problem:

SQL> desc DBA_HIST_DATAFILE;
Name Null? Type
----------------------------------------- -------- ----------------------------
DBID NOT NULL NUMBER
FILE# NOT NULL NUMBER
CREATION_CHANGE# NOT NULL NUMBER
FILENAME NOT NULL VARCHAR2(513)
TS# NOT NULL NUMBER
TSNAME VARCHAR2(30)
BLOCK_SIZE NUMBER
CON_DBID NUMBER
CON_ID NUMBER

There's no SNAP_ID: AWR does not store the history of file names. We could suppose that it stores only the latest filename, but then my reports would have been correct, as they were generated immediately after the snapshot. Or that the first name sticks, but I had some reports showing '+DATA'.

Then, I grepped for 'WRH$_DATAFILE' in ORACLE_HOME/rdbms/admin and came upon this:

dbmsawr.sql: -- This routine updates WRH$_DATAFILE rows for the datafile name and
dbmsawr.sql: -- WRH$_DATAFILE with the current information in database.

There is an update_datafile_info procedure in dbms_workload_repository, and its comment says something like:

-- This change will be captured at max after some
-- (generally 50) snapshots. So the AWR and AWR report may be wrong with
-- respect to data file name or tablespace name for that duration.

I love to work with Oracle. All the information is there if you know where to look.

So if you want to rely on the filenames in an AWR report after a move, you should run this procedure before taking the report. And you should take the report before the next datafile move.

Here is the example:

SQL> exec dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.
 
SQL> select file#,filename from DBA_HIST_DATAFILE where file#=6;
 
FILE# FILENAME
---------- --------------------------------------
6 /u01/DEMO/oradata/DEMO14/users01.dbf
 
SQL> select snap_id,file#,filename from DBA_HIST_FILESTATXS where file#=6 order by snap_id fetch first 10 rows only;
 
SNAP_ID FILE# FILENAME
---------- ---------- --------------------------------------
1244 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1245 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1246 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1247 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1248 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1249 6 /u01/DEMO/oradata/DEMO14/users01.dbf
6 rows selected.

My file is users01 and this is what is stored in AWR.

I rename it to users02 (thanks to the 12c online datafile move):

SQL> alter database move datafile '/u01/DEMO/oradata/DEMO14/users01.dbf' to '/u01/DEMO/oradata/DEMO14/users02.dbf';
Database altered.

but AWR is not aware of the change even after a snapshot:

SQL> exec dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.
 
SQL> select file#,filename from DBA_HIST_DATAFILE where file#=6;
 
FILE# FILENAME
---------- --------------------------------------
6 /u01/DEMO/oradata/DEMO14/users01.dbf
 
SQL> select snap_id,file#,filename from DBA_HIST_FILESTATXS where file#=6 order by snap_id fetch first 10 rows only;
 
SNAP_ID FILE# FILENAME
---------- ---------- --------------------------------------
1244 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1245 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1246 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1247 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1248 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1249 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1250 6 /u01/DEMO/oradata/DEMO14/users01.dbf

You have to wait for those 50 snapshots or run the update:

SQL> exec dbms_workload_repository.update_datafile_info;
PL/SQL procedure successfully completed.

SQL> select file#,filename from DBA_HIST_DATAFILE where file#=6;
 
FILE# FILENAME
---------- --------------------------------------
6 /u01/DEMO/oradata/DEMO14/users02.dbf
 
SQL> select snap_id,file#,filename from DBA_HIST_FILESTATXS where file#=6 order by snap_id fetch first 10 rows only;
 
SNAP_ID FILE# FILENAME
---------- ---------- --------------------------------------
1244 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1245 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1246 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1247 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1248 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1249 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1250 6 /u01/DEMO/oradata/DEMO14/users02.dbf

But, as you can see, there is no history of previous names.

Note that if you look at the table behind the view, there’s a SNAP_ID but it’s not part of the primary key. It is used by the purge procedures.
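To see that, a quick query against the underlying table works (it requires access to the SYS-owned AWR tables); this is only a sketch, assuming the columns mirror the DBA_HIST_DATAFILE view shown above plus SNAP_ID:

SQL> select snap_id, file#, filename from sys.wrh$_datafile where file#=6;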

 

Cet article Filenames in AWR reports est apparu en premier sur Blog dbi services.

Letting GoldenGate automatically maintain the insert and update timestamps on the target

Yann Neuhaus - Mon, 2016-08-29 11:18

Today at one of our GoldenGate customers we were faced with the following requirement: for one of the tables on the target we needed two additional columns to hold the timestamps of when the row was inserted or updated. Just to be clear: it was not about getting the timestamps of the inserts or updates as they happened on the source, but as they happen on the target. In this post we'll look at how GoldenGate can be configured to do the work requested.

All the examples below use the well-known scott/tiger schema. There is one extract capturing on the source and one replicat replaying the changes on the target (over SQL*Net, no integrated mode here). This is the current status of the streams:

GGSCI (oelogg1) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXTRSCO     00:00:00      00:00:02    
REPLICAT    RUNNING     REPLSCO     00:00:00      00:00:01    


GGSCI (oelogg1) 2> view params EXTRSCO

EXTRACT extrsco
USERIDALIAS DB1 DOMAIN admin
GETUPDATEBEFORES
REPORT AT 23:40
DDL INCLUDE OPTYPE TRUNCATE OBJNAME ARBOR.*, &
    INCLUDE OPTYPE ALTER OBJNAME ARBOR.*
EXTTRAIL /u01/app/ogg/product/12.1.2.1.9/dirdat/es
TABLE SCOTT.*;

GGSCI (oelogg1) 3> view params REPLSCO

REPLICAT REPLSCO
USERIDALIAS DB2 DOMAIN admin
GETUPDATEBEFORES
GETTRUNCATES
APPLYNOOPUPDATES
DDL INCLUDE OPTYPE TRUNCATE OBJNAME SCOTT.*, &
    INCLUDE OPTYPE ALTER OBJNAME SCOTT.* &
    EXCLUDE ALL INSTR 'CONSTRAINT' &
    EXCLUDE ALL INSTR 'TRIGGER' 
ASSUMETARGETDEFS
DBOPTIONS DEFERREFCONST
MAP SCOTT.*, TARGET SCOTT.*;

Pretty basic, no unusual stuff here. The table we’ll use for the scope of this post is the “project” table which has the following contents in a fresh scott/tiger installation:

SQL> col DESCRIPTION for a50
SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE
---------- -------------------------------------------------- ------------ ------------
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

Of course the table looks the same on the target:

SQL> col DESCRIPTION for a50
SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE
---------- -------------------------------------------------- ------------ ------------
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

To prove that the streams are really working lets add an additional row to the source:

SQL> insert into project values (1005, 'my fun project 1', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

SQL> 

… and then check if the row indeed was replicated to the target:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE
---------- -------------------------------------------------- ------------ ------------
      1005 my fun project 1				      29-AUG-16    28-SEP-16
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

Ok, looks fine, so back to the initial requirement. We need two additional columns on the target:

alter table SCOTT.PROJECT add create_dt timestamp with time zone;
alter table SCOTT.PROJECT add update_dt timestamp with time zone;
alter table SCOTT.PROJECT modify create_dt default to_date('01.01.2000','DD.MM.YYYY');
alter table SCOTT.PROJECT modify update_dt default to_date('01.01.2000','DD.MM.YYYY');

In our case we needed to set a default value, as both columns are not allowed to contain NULL values. For the moment the content of the table on the target is:

SQL> col CREATE_DT for a20
SQL> col UPDATE_DT for a20
SQL> select * from project;

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT	     UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ -------------------- --------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

In real life, when the table to be extended holds millions of rows, the following will probably be too simplistic and you'll need to spend some time thinking about how to organize the updates. For the scope of this post this is fine:

update SCOTT.PROJECT set create_dt = to_date('01.01.2000','DD.MM.YYYY') where create_dt is null;
update SCOTT.PROJECT set update_dt = to_date('01.01.2000','DD.MM.YYYY') where update_dt is null;
commit;
alter table SCOTT.PROJECT modify create_dt not null;
alter table SCOTT.PROJECT modify update_dt not null;

From now on we have identical insert and update timestamps for all of the rows on the target:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01                01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

A final check on the source to be sure that the default values work:

SQL> insert into project values (1006, 'my fun project 2', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

On the target:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

All fine. Time to do the GoldenGate work. Obviously the first step is to stop the replicat:

GGSCI (oelogg1) 2> stop REPLSCO

Sending STOP request to REPLICAT REPLSCO ...
Request processed.

GGSCI (oelogg1) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXTRSCO     00:00:00      00:00:09    
REPLICAT    STOPPED     REPLSCO     00:00:00      00:00:01    

Let's handle the insert case first by adding the following COLMAP clause to the replicat configuration:

REPLICAT REPLSCO
USERIDALIAS DB2 DOMAIN admin
GETUPDATEBEFORES
GETTRUNCATES
APPLYNOOPUPDATES
DISCARDFILE /u01/app/ogg/product/12.1.2.1.9/dirdsc/dscsco.dsc, APPEND, MEGABYTES 10
DDL INCLUDE OPTYPE TRUNCATE OBJNAME SCOTT.*, &
    INCLUDE OPTYPE ALTER OBJNAME SCOTT.* &
    EXCLUDE ALL INSTR 'CONSTRAINT' &
    EXCLUDE ALL INSTR 'TRIGGER' 
ASSUMETARGETDEFS
DBOPTIONS DEFERREFCONST
MAP SCOTT.PROJECT, TARGET SCOTT.PROJECT, COLMAP (usedefaults,
    create_dt = @IF (@STREQ (@GETENV ('GGHEADER', 'OPTYPE'), 'INSERT'), @DATENOW(), @COLSTAT (MISSING)));
MAP SCOTT.*, TARGET SCOTT.*;

Start again:

GGSCI (oelogg1) 6> start REPLSCO

Sending START request to MANAGER ...
REPLICAT REPLSCO starting


GGSCI (oelogg1) 7> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXTRSCO     00:00:00      00:00:01    
REPLICAT    RUNNING     REPLSCO     00:00:00      00:00:00    

Looks good from a configuration perspective. Time to start:

SQL> insert into project values (1007, 'my fun project 3', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

On the target we should now see the exact insert date of the record instead of the default value of the column:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

Perfect, this works. Let's go on with the update case: all we need to do is add the mapping for the additional update column and populate it:

REPLICAT REPLSCO
USERIDALIAS DB2 DOMAIN admin
GETUPDATEBEFORES
GETTRUNCATES
APPLYNOOPUPDATES
DISCARDFILE /u01/app/ogg/product/12.1.2.1.9/dirdsc/dscsco.dsc, APPEND, MEGABYTES 10
DDL INCLUDE OPTYPE TRUNCATE OBJNAME SCOTT.*, &
    INCLUDE OPTYPE ALTER OBJNAME SCOTT.* &
    EXCLUDE ALL INSTR 'CONSTRAINT' &
    EXCLUDE ALL INSTR 'TRIGGER' 
ASSUMETARGETDEFS
DBOPTIONS DEFERREFCONST
MAP SCOTT.PROJECT, TARGET SCOTT.PROJECT, COLMAP (usedefaults,
    create_dt = @IF (@STREQ (@GETENV ('GGHEADER', 'OPTYPE'), 'INSERT'), @DATENOW(), @COLSTAT (MISSING)),
    update_dt = @IF (@VALONEOF (@GETENV ('GGHEADER', 'OPTYPE'), 'UPDATE', 'SQL COMPUPDATE', 'PK UPDATE' ), @DATENOW(), @COLSTAT 
(MISSING)));
MAP SCOTT.*, TARGET SCOTT.*;

Then stop and start the replicat to bring the changes into effect. Let's create a new row just to confirm that this does not have any effect on the insert case:

SQL> insert into project values (1008, 'my fun project 4', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

What we should see on the target are two rows with an exact insert date but a default update date:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1008 my fun project 4				      29-AUG-16    28-SEP-16	29-AUG-16 12.58.40.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

Perfect. What about the update itself?

SQL> update project set DESCRIPTION = upper(description) where PROJECTNO = 1008;

1 row updated.

SQL> commit;

Commit complete.

This should result in one exact update date for my fun project 4:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1008 MY FUN PROJECT 4				      29-AUG-16    28-SEP-16	29-AUG-16 12.58.40.000000 PM +02:00 29-AUG-16 01.04.49.000000 PM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

8 rows selected.

Perfect, this works fine, too. Just to be sure that deletes still work, let's do a final test:

SQL> delete from project where PROJECTNO = 1008;

1 row deleted.

SQL> commit;

Commit complete.

SQL> 

The two additional columns should not prevent GoldenGate from deleting the rows; let's see:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

Fun project 4 is gone and all works as expected. Hope this helps.

 

Cet article Letting GoldenGate automatically maintain the insert and update timestamps on the target est apparu en premier sur Blog dbi services.

networking differences between cloud providers

Pat Shuff - Mon, 2016-08-29 09:00
In this blog entry we are going to perform the simple task of enabling an Apache web server on a Linux server and look at how to do this on the Oracle Cloud, Amazon AWS, and Microsoft Azure. Last week we did this for the Oracle Cloud, but we will quickly review it again. As we go down this path we will look at the different options presented to you as you create a new instance and see how the three cloud vendors diverge in their approach to services. Which version of Linux we select is not critical; we are looking at the cloud tooling and what is required to deploy and secure an instance. Our goals are:
  • Deploy a Linux instance into a cloud service
  • Enable port 22 to allow us to communicate from our desktop into the Linux instance
  • Enable port 80 to allow us to communicate from the public internet into the Linux instance
  • Disable all other services coming into this instance.
  • We will use DHCP initially to get an IP address assigned to us, but look at static IP addresses at the end

Step 1: Deploy a Linux instance into a small compute service. Go with the smallest compute shape to save money, the smallest memory allocation because we don't need much for a test web server, the default network interfaces with an IP address assigned, and the smallest disk you can to speed up the process.

Step 1a - Oracle Public Cloud

We go to the Compute Console and click on Create Instance. This takes us through screens that allow us to select an operating system, core count, and memory size. When we get to the instance configuration we have the option of defining network security rules with a Security List. We can either create a new security list or select an existing one. In the end we will select the default, which allows us to connect to port 22, and modify the security list at a later point. We could have selected the WebServer entry from the Security List because we have done this before, but for this exercise we will select the default and come back later to add another access point. Once we get to the review screen we can create the instance. The only networking question we were asked was which Security List definition we want.

Step 1b - Amazon AWS

We go to the EC2 Console, click on EC2, and then Launch Instance. From the launch screen we select a Linux operating system and start the configuration. Note that the network and subnet menus allow you to deploy your instance into an IP address range. This is different from the Oracle Cloud, where you are assigned a non-routable IP address range based on the server that you are dropped into. Since these are private IP addresses for a single server this is really not a significant issue. We are going to accept the defaults here and configure the ports in a couple of screens. We are going to go with a DHCP public IP address to be able to attach to our web server.

We accept the default storage and configure the ports that we want to open for our instance. We can define a new security group or accept an existing one. For this example we are going to add HTTP port 80, since it is a simple addition at this point, and move forward with this configuration. We could go with a predefined configuration that allows ports 80 and 22, but for this example we will create a new one. We then review and launch the instance.

Step 1c - Microsoft Azure

We go to the Azure Portal and click on Virtual Machine -> Add, which takes us to the Marketplace. From here we type in Linux and pick a Linux operating system to boot from. We are assigned a subnet just like we were with the Oracle Cloud and have the ability to add firewall rules to allow ports 80 and 22 through from the public internet. Once we have this defined we can review and launch our instance.

Step 2: Log into your instance and add the Apache web server. This can easily be done with a single yum install command (the package is named httpd on Oracle Linux/RHEL-style images and apache2 on Debian-style ones). We then edit the default index.html under /var/www so that we can see an answer from the web server.

Step 3: Verify the network security configuration of the instance to make sure that ports 80 and 22 are open.

Step 3a: Oracle Cloud

When we created the instance we went with the default network configuration, which only has port 22 open. We now need to add port 80 as an open inbound port for the public internet. This is done by going to the Compute Instance console and viewing our web server instance. Looking at the instance, we can see that we have the default Security List associated with it. If we already had a rule defined for port 80 we could just click on Add Security List and add the value. We are going to assume that we have not defined a rule and need to do so. We create a new rule that allows HTTP traffic from the public internet to our WebServer security list. We then need to go back, add a new Security List to our instance, and select WebServer, which allows ports 80 and 22.

Step 3b and 3c: AWS and Azure

We really don't need to do anything here because both AWS and Azure gave us the ability to add a port definition in the instance creation menus. Had we selected a predefined security list there would be no step 3 for any of the services.

Surprisingly, we are done. Simple network configuration is simple for all three vendors. The key difference we see is that Amazon and Microsoft give you the ability to define individual port definitions as you create your instance, while Oracle wants you to define this with Security Rules and Security Lists rather than one at a time for each instance. All three platforms allow you to configure firewall rules ahead of time and add those as configurations. In this example we were assuming a first-time experience, which is not the normal way of doing things. The one differentiator that did stand out is that Amazon allows you to pick and choose your subnet assignment; Oracle and Microsoft really don't give you choices and assign you an IP range. All three give you the option of static or dynamic public IP addresses. For our experiment there really isn't much difference in how any of the cloud vendors provision and administer firewall configurations.

ODA X6 database classes and shapes

Yann Neuhaus - Mon, 2016-08-29 08:47

On the Oracle Database Appliance, like on the Oracle public Cloud, you define the CPU capacity with ‘shapes’. On the latest ODA, the X6, we have a new interface to provision a database. Let’s look at the different shapes available.

ODACLI

You can provision a new database with the command line ODACLI which replaces the OAKCLI you used in ODA X5:

[root@odax6m ~]# odacli create-database
Usage: create-database [options]
Options:
* --adminpassword, -m
Password for SYS,SYSTEM and PDB Admin
--cdb, -c
Create Container Database
Default: false
--dbclass, -cl
Database Class OLTP/DSS/IMDB
Default: OLTP
--dbconsole, -co
Enable Database Console
Default: false
--dbhomeid, -dh
Database Home ID (Use Existing DB Home)
* --dbname, -n
Database Name
--dbshape, -s
Database Shape{odb1,odb2,odb3 etc.}
Default: odb1
--dbstorage, -r
Database Storage {ACFS|ASM}
Default: ACFS
--dbtype, -y
Database Type {SI|RAC}
Default: SI
--help, -h
get help
Default: false
--instanceonly, -io
Create Instance Only (For Standby)
Default: false
--json, -j
json output
Default: false
--pdbadmin, -d
Pluggable Database Admin User
Default: pdbadmin
--pdbname, -p
Pluggable Database Name
Default: pdb1
--targetnode, -g
Node Number (for single-instance databases)
Default: 0
--version, -v
Database Version
Default: 12.1.0.2

ODA WebConsole

But the ODA X6 also has a small graphical interface, the web console.

[Screenshot: ODA web console – Create Database dialog]

12c multitenant is the default, but you can choose.

Edition

You don't have a choice when you create the database: you install the ODA with Standard Edition or Enterprise Edition, and you cannot change it afterwards.

Versions

Two database versions are available: 11.2.0.4 and 12.1.0.2

[Screenshot: database version selection]

You choose ODA to get a stable, certified, and supported system, so it makes sense to run only supported versions with the latest PSU. If you have older versions, you must upgrade. Set optimizer_features_enable to the previous version if your application was not tuned for newer versions. Very often, when an ISV does not certify its software, it's because of optimizer regressions. With proper testing and optimizer settings you should be able to upgrade any application without the risk of regression.

Templates

There are four DBCA templates available, covering the combinations of:

  • Standard Edition or Enterprise Edition
  • Multitenant or non-CDB

The main differences between Enterprise Edition and Standard Edition are:
  • Options OMS, SPATIAL, CWMLITE, and DV are installed in Enterprise Edition but not in Standard Edition
  • fast_start_mttr_target=300 in Enterprise Edition (feature not supported in Standard Edition)

The main differences between multitenant and non-CDB:
  • Options JSERVER, IMEDIA, ORACLE_TEXT, and APEX are installed in a CDB and not in a non-CDB
  • maxdatafiles=1024 in CDB (100 in non-CDB)

All templates are configured with filesystem_io_options=setall and use_large_pages=only

The following underscore parameters are set for all ODA templates:
*._datafile_write_errors_crash_instance=FALSE
*._disable_interface_checking=TRUE
*._enable_NUMA_support=FALSE
*._file_size_increase_increment=2143289344
*._gc_policy_time=0
*._gc_undo_affinity=FALSE

Note that both 12c and 11g are available in Enterprise Edition as well as Standard Edition (can even be Standard Edition One for 11g).
Of course, CDB is only for 12c.

Shapes

As in the Oracle Public Cloud, the CPU and Memory comes in shapes:

[Screenshot: shape selection]

The choice is the number of cores. The cores are hyperthreaded, which means that odb1 will have cpu_count=2, and it is set in the spfile. Note that at install time no resource manager plan is active, so instance caging will not occur except during the automatic maintenance window… My recommendation is to set a plan. In 12.1.0.2 Standard Edition the resource manager is implicitly activated.
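As a minimal sketch of that recommendation (plain Oracle instance caging, nothing ODA-specific): activating any resource manager plan turns the cpu_count set by the shape into an enforced CPU limit. The plan name below is just an example.

-- cpu_count is already set by the odb shape (e.g. 2 for odb1)
-- activate a plan so that instance caging is enforced outside the maintenance window
alter system set resource_manager_plan='DEFAULT_PLAN' scope=both;
show parameter cpu_count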

ODA X6-2 processors are Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz. Here is an example of the LIOPS you can reach when running on all the 40 threads of an X6-2M:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 39.9 545.6 0.00 25.43
DB CPU(s): 38.8 530.9 0.00 24.75
Logical read (blocks): 18,494,690.4 252,862,769.1

This is 18 million logical reads per second in this 2-socket (2s10c20t) appliance. Half of that on the X6-2S, which has one socket (1s10c20t).

The core factor for those processors is 0.5, which means that you can run an Enterprise Edition 'odb2' with a single processor license (public price $47,500) and have 4 sessions on CPU, which means you can do nearly 3 million logical reads per second, as here:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 13.80
DB CPU(s): 4.0 54.5 0.00 13.78
Logical read (blocks): 2,901,991.5 39,660,331.1

Those shapes are defined as:

[Screenshot: shape definitions]

Given the high LIOPS and the available memory, this entry-level appliance can be sufficient for most medium OLTP workloads.

Classes

Three classes are defined to derive the database parameters from the shape.

[Screenshot: database class selection]

The SGA/PGA is calculated from the shape memory and a class factor:
OLTP gives 50% to SGA and 25% to PGA, which means that, for example, an odb4 has sga_target=16G and pga_aggregate_target=8G.
DSS gives 25% to SGA and 50% to PGA.

Note that OLTP is the only class available in Standard Edition. This does not mean that you can run only OLTP. You can change the memory settings later (DSS usually needs more PGA than SGA) and you have very good storage bandwidth and IOPS (NVMe access to SSD). This setting is more an indication that most data warehouses need features available only in Enterprise Edition, such as parallel query, partitioning, and bitmap indexes.
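As an illustration only (the values are assumptions for an odb4 with 32GB of shape memory, not a recommendation), re-balancing the memory towards the DSS ratios after the fact could look like this:

-- give a DSS-style workload more PGA than SGA, mirroring the DSS class ratios (25% SGA / 50% PGA)
alter system set sga_target=8G scope=spfile;
alter system set pga_aggregate_target=16G scope=spfile;
-- restart the instance for the new settings to take effect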

ASM or ACFS?

[Screenshot: storage selection – ASM or ACFS]

The template shapes above define a 100GB database. When you create a new database you have the choice to put it directly on +DATA and +RECO, or create a 100GB ADVM volume and ACFS filesystem that will be mounted under /u02/app/oracle/oradata. If you choose ACFS the FRA and REDO will be created under the /u03/app/oracle mount point which is a common ACFS.

The default is ACFS, but you should think about it. For production, the best performance is ASM. You have SSDs to avoid disk latency. You have NVMe to reduce CPU latency. You don't want to add the ACFS layer. The maximum IOPS we observe is 10 times higher with datafiles directly on ASM:

@FranckPachot Don't know? We get 75k IOPS on ACFS and 700k IOPS on pure ASM. /cc @kevinclosson pic.twitter.com/TcLzUsOh0d

— Marco Mischke (@DBAMarco) August 29, 2016

For test databases, where you use snapshot features, especially with multitenant, you may choose ACFS. However, why not create the CDB in ASM and use ACFS for the PDBs you will want to snapshot? No need for that additional layer for the CDB files. Better to isolate the master and clones for a specific environment into its own ACFS.

And anyway, ODA X6-2S and X6-2M are very interesting for Standard Edition, and you cannot use snapshots nor any ACFS features for a database in Standard Edition.

Storage performance is truly amazing. At 100000 IOPS we have 99% of single block reads faster than 256 microseconds and 97% faster than 128 microseconds. At 800000 IOPS here are the figures:

                                                    % of Waits
                                 -----------------------------------------------
                          Total
Event                     Waits   <1ms  <2ms  <4ms  <8ms <16ms <32ms  <=1s   >1s
------------------------- ------ ----- ----- ----- ----- ----- ----- ----- -----
db file parallel read       6.9M   4.0  74.1  21.7    .2    .0    .0    .0
db file sequential read    18.4M  83.1  16.3    .7    .0    .0    .0

So what?

It's nice to have an easy GUI to provision a database on ODA. However, there are some limits to it:

  • Be careful with the defaults. They may not fit what you want. Do you want your databases on ACFS?
  • Not all operations can be done through the GUI: you can create but not delete a database.

But there's more. The performance is there. You can run applications that need high performance.

Do you know any other solution which gives you a fully certified system, installed in a few hours with databases ready? With very good hardware and managed software costs (NUP, Standard Edition in the socket metric, or Enterprise Edition capacity-on-demand in multiples of 1 processor license).
You need high availability? In Standard Edition you cannot use Data Guard. In Standard Edition you can buy Dbvisit Standby, which gives you switchover and failover (Recovery Point Objective of a few minutes) to a second ODA or to a cloud service. Of course, you can build or buy custom scripts to manage a manual standby. However, if you go for ODA you probably appreciate easy and robust software.

 

Cet article ODA X6 database classes and shapes est apparu en premier sur Blog dbi services.

What Does Oracle PARTITION BY Do?

Complete IT Professional - Mon, 2016-08-29 06:00
Have you ever seen the PARTITION BY keyword used in Oracle queries? If so, do you know what it does and how to use it? Learn about the Oracle PARTITION BY keyword and see some examples in this article. What is the Oracle PARTITION BY Keyword? PARTITION BY is a keyword that can be used […]
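The linked article has the details; as a minimal illustration (using the familiar scott schema), PARTITION BY in an analytic function computes an aggregate per group without collapsing the rows:

-- average salary per department, repeated on every row of that department
select deptno, ename, sal,
       avg(sal) over (partition by deptno) as dept_avg_sal
from scott.emp
order by deptno, ename;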
Categories: Development

PeopleSoft User Security

When performing a PeopleSoft security audit, reconciling users should be one of the first tasks. This includes default accounts created through the installation of PeopleSoft as well as user accounts associated with staff, vendors and customers.

The following are several of the topics that Integrigy investigates during our PeopleSoft security configuration assessments - take a look today at your settings:

  • Default accounts - where possible, PeopleSoft default application user accounts with superuser privileges should be removed or have their passwords changed. Carefully consult your documentation, but this is a key task.

Default Oracle PeopleSoft Users

BELHR      JCADMIN1    PSJPN
CAN        NLDHR       PSPOR
CFR        PS          TIME
CNHR       PSCFR       UKHR
ESP        PSDUT       UKNI
FRA        PSESP       USA
FRHR       PSFRA       HSHR
GER        PSGER       WEBGUEST
GRHR       PSINE       WEBMODEL

  • Stale users – users that have not logged on in months or years should be identified and removed. Use the following SQL to locate stale users:
SELECT * FROM SYSADM.PSPTLOGINAUDIT;

To manage accounts, the following navigation can assist. It cannot be mentioned enough: BEFORE you disable or delete any user, TEST in non-production first. A SQL check of the resulting lock flag is sketched after the steps below.

User management:

  1. Select PeopleTools, Security, User Profiles, User Profiles
  2. Select user to disable or delete
  3. If disabling, check Account Locked Out check box
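To double-check the result from the database side, a simple sketch against PSOPRDEFN can help; ACCTLOCK and LASTSIGNONDTTM are standard PeopleTools columns, but verify them against your release before relying on this:

-- ACCTLOCK = 1 means the account is locked; LASTSIGNONDTTM also helps spot stale users
SELECT OPRID, ACCTLOCK, LASTSIGNONDTTM
FROM SYSADM.PSOPRDEFN
ORDER BY LASTSIGNONDTTM;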


 

If you have questions, please contact us at info@integrigy.com

Michael A. Miller, CISSP-ISSMP, CCSP

References

PeopleSoft Security Quick Reference

Auditing, Oracle PeopleSoft
Categories: APPS Blogs, Security Blogs

Performance issue

Tom Kyte - Mon, 2016-08-29 00:26
Hi Team, We have a database in there we have observed every day morning 4 to 5 AM during this one hr application user not be able to insert data into the table. We seen that in application web page where red count high means not be able to insert ...
Categories: DBA Blogs

Intermittent Ora-06502: PL/SQL: numeric or value error - For Update Skip locked

Tom Kyte - Mon, 2016-08-29 00:26
Hi Tom, We are getting Intermittent Ora-06502: PL/SQL: numeric or value error - For Update Skip locked. This error occurs only when application (.Net) calls this SP and it occurs at last pending row from table. But we are able to execute same SP f...
Categories: DBA Blogs

Gather_table_stats when UPDATING rows in a table using rownum as predicate

Tom Kyte - Mon, 2016-08-29 00:26
Hello Tom, I came across the structure where in users creating the table or updating the table uses the predicate as "rownum exec dbms_stats.gather_table_stats(xx,xx)". Can you help us to explain how this predicate structure how it works in Cre...
Categories: DBA Blogs

Are analytic RDBMS and data warehouse appliances obsolete?

Curt Monash - Sun, 2016-08-28 20:28

I used to spend most of my time — blogging and consulting alike — on data warehouse appliances and analytic DBMS. Now I’m barely involved with them. The most obvious reason is that there have been drastic changes in industry structure:

Simply reciting all that, however, begs the question of whether one should still care about analytic RDBMS at all.

My answer, in a nutshell, is:

Analytic RDBMS — whether on premises in software, in the form of data warehouse appliances, or in the cloud – are still great for hard-core business intelligence, where “hard-core” can refer to ad-hoc query complexity, reporting/dashboard concurrency, or both. But they aren’t good for much else.

To see why, let’s start by asking: “With what do you want to integrate your analytic SQL processing?”

  • If you want to integrate with relational OLTP (OnLine Transaction Processing), your OLTP RDBMS vendor surely has a story worth listening to. Memory-centric offerings MemSQL and SAP HANA are also pitched that way.
  • If you want to integrate with your SAP apps in particular, HANA is the obvious choice.
  • If you want to integrate with other work you do in the Amazon cloud, Redshift is worth a look.

Beyond those cases, a big issue is integration with … well, with data integration. Analytic RDBMS got a lot of their workloads from ELT or ETLT, which stand for Extract/(Transform)/Load/Transform. I.e., you’d load data into an efficient analytic RDBMS and then do your transformations, vs. the “traditional” (for about 10-15 years of tradition) approach of doing your transformations in your ETL (Extract/Transform/Load) engine. But in bigger installations, Hadoop often snatches away that part of the workload, even if the rest of the processing remains on a dedicated analytic RDBMS platform such as Teradata’s.

And suppose you want to integrate with more advanced analytics — e.g. statistics, other predictive modeling/machine learning, or graph analytics? Well — and this both surprised and disappointed me — analytic platforms in the RDBMS sense didn’t work out very well. Early Hadoop had its own problems too. But Spark is doing just fine, and seems poised to win.

My technical observations around these trends include:

  • Advanced analytics commonly require flexible, iterative processing.
  • Spark is much better at such processing than earlier Hadoop …
  • … which in turn is better than anything that’s been built into an analytic RDBMS.
  • Open source/open standards and the associated skill sets come into play too. Highly vendor-proprietary DBMS-tied analytic stacks don’t have enough advantages over open ones.
  • Notwithstanding the foregoing, RDBMS-based platforms can still win if a big part of the task lies in fancy SQL.

And finally, if a task is “partly relational”, then Hadoop or Spark often fit both parts.

  • They don't force you into using SQL for everything, nor into putting all your data into relational schemas, and that flexibility can be a huge relief.
  • Even so, almost everybody who uses those uses some SQL, at least for initial data extraction. Those systems are also plenty good enough at SQL for joining data to reference tables, and all that other SQL stuff you’d never want to give up.

But suppose you just want to do business intelligence, which is still almost always done over relational data structures? Analytic RDBMS offer the trade-offs:

  • They generally still provide the best performance or performance/concurrency combination, for the cost, although YMMV (Your Mileage May Vary).
  • One has to load the data in and immediately structure it relationally, which can be an annoying contrast to Hadoop alternatives (database administration can be just-in-time) or to OLTP integration (less or no re-loading).
  • Other integrations, as noted above, can also be weak.

Suppose all that is a good match for your situation. Then you should surely continue using an analytic RDBMS, if you already have one, and perhaps even acquire one if you don’t. But for many other use cases, analytic RDBMS are no longer the best way to go.

Finally, how does the cloud affect all this? Mainly, it brings one more analytic RDBMS competitor into the mix, namely Amazon Redshift. Redshift is a simple system for doing analytic SQL over data that was in or headed to the Amazon cloud anyway. It seems to be quite successful.

Bottom line: Analytic RDBMS are no longer in their youthful prime, but they are healthy contributors in middle age. Mainly, they’re still best-of-breed for supporting demanding BI.


Presenting at Oracle Open World 2016

Galo Balda's Blog - Sun, 2016-08-28 18:51


Just a short post to announce that I'll be presenting a User Group Forum session. Thanks, ODTUG!

SQLcl: A Modern Command Line Interface to the Oracle Database [UGF5641]

“In this session learn about the new Java-based command line interface that takes advantage of Oracle SQL Developer’s scripting engine. It delivers a modern command line interface that is backward compatible with SQL*Plus but also introduces new commands and features that have been missing for a long time. In this session, explore the new inline editing, query history, aliasing, output formatting, DDL generation, and scripting options that set SQLcl apart from its predecessor.”

Sunday, Sep 18, 8:00 a.m. – 8:45 a.m.| Moscone South—302

See you there!


Filed under: Open World, SQLcl Tagged: Open World, SQLcl
Categories: DBA Blogs

Oracle Database Cloud (DBaaS) Performance Consistency - Part 3

Randolf Geist - Sun, 2016-08-28 16:43
This is the third part of this series, comparing the performance consistency of the DBaaS cloud offering with that of a dedicated physical host. This time, instead of burning CPU using logical I/O only (see part 2), the test harness executes a SQL statement that mostly has to perform physical I/O. To be more precise, this round is a read-only test, so it doesn't include any modification of data (except for logging and other background, system-generated write activity, like maintaining ASH/AWR data).

In order to maximize the physical I/O part, the database instance was configured with a minimum-sized buffer cache (16k block size in this case) and the following script was executed as many times as CPUs were available:

declare
  n number;
begin
  -- Endless loop: each iteration runs an index-driven nested loop join that is
  -- dominated by physical I/O, then records a timestamp for that execution.
  loop
    select /*+
              leading(t_o)
              use_nl(t_i)
              index(t_o)
              index(t_i)
           */
           sum(t_i.n)
    into   n
    from
           t_o
         , t_i&tabname t_i
    where
           t_o.id_fk = t_i.id;
    -- One row per completed join execution, per test type and thread
    insert into timings(testtype, thread_id, ts) values ('&testtype', &thread_id, systimestamp);
    commit;
  end loop;
end;
/
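The script logs one row per completed execution into a TIMINGS table that isn't shown in the post; a minimal definition matching the INSERT above might look like this (an assumption, not the author's actual DDL):

create table timings
(
  testtype  varchar2(30),
  thread_id number,
  ts        timestamp
);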

The outer table T_O was created like this:

create table t_o (id primary key, id_fk)
organization index
as
select
       rownum as id
     , case mod(rownum, 2) + 1
       when 1
       then mod(rownum, &tab_size / 20)
       else &tab_size / 10 - mod(rownum, &tab_size / 20) + 1
       end as id_fk
from dual
connect by level <= &tab_size;

Each thread got its own dedicated inner table for the nested loop join, created like this:

create table t_i' || i || ' (id not null, n, filler)
pctfree 99 pctused 1
tablespace &tbs
as
select cast(rownum as integer) as id,
cast(rownum as number) as n,
cast(rpad('x', 200) as varchar2(200)) as filler
from dual
connect by level <= &tab_size / 10;

create index t_i' || i || '_idx on t_i' || i || ' (id, filler) pctfree 99 tablespace &tbs;
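The ' || i || ' fragments show that these statements were built as strings and executed dynamically, once per thread. A minimal sketch of such a wrapper, using the same substitution variables as above plus an assumed &thread_count variable (my reconstruction, not the author's original script):

begin
  for i in 1 .. &thread_count loop
    -- One heap table and one oversized index per thread, as in the fragments above
    execute immediate 'create table t_i' || i || ' (id not null, n, filler)
      pctfree 99 pctused 1 tablespace &tbs
      as select cast(rownum as integer) as id,
                cast(rownum as number) as n,
                cast(rpad(''x'', 200) as varchar2(200)) as filler
         from dual connect by level <= &tab_size / 10';
    execute immediate 'create index t_i' || i || '_idx on t_i' || i ||
                      ' (id, filler) pctfree 99 tablespace &tbs';
  end loop;
end;
/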


Due to the way the data was crafted, and thanks to the nested loop join batching performed by Oracle, this ensured that the sessions were performing "db file parallel read" I/O as much and as hard as possible, i.e. they were submitting multiple I/O requests in a single call and taking advantage of asynchronous I/O where possible.
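One way to verify this on a running test (not shown in the original post; a sketch using the standard dynamic performance views, which requires the corresponding SELECT privileges) is to compare the relevant I/O wait events:

select event, total_waits, round(time_waited_micro / 1e6) as seconds_waited
from   v$system_event
where  event in ('db file parallel read', 'db file sequential read', 'db file scattered read')
order  by time_waited_micro desc;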

When running with 8 threads on those 8 CPUs of the DBaaS service, this resulted in approx. 30,000 IOPS for those 16K blocks (approx. 480 MB/sec throughput). Note that I got the same 30,000 IOPS for 16K blocks on a different instance with just 4 CPUs (2 OCPUs) and 4 threads, so this looks like a common 16K-block IOPS limit for a DBaaS instance, independent of the CPU count and without further tweaking (I didn't attempt to optimize this in any way but used the service as provided/configured by the Oracle Cloud). It also looks more like a throughput limitation at 480 MB/sec than an IOPS limitation: when using a 4K block size I was able to get 60,000 IOPS (middle value), but that rate fluctuated much more (between 20,000 and 90,000), whereas the 30,000 IOPS using the 16K block size was pretty stable.

On the physical host I got approx. half of these IOPS (roughly 15,500) for those 16K blocks (approx. 250 MB/sec throughput), which is very likely the throughput limit of this rather outdated hardware configuration, albeit one using a fairly modern Samsung SSD. It also meant that, running at 4 threads, I got a comparable number of IOPS per thread as in the DBaaS setup. Using a 4K block size the physical host maxed out at a very stable 52,000 IOPS.
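As a quick sanity check of those figures (simple arithmetic, approximating 1 MB as 1,000 KB the way the post's round numbers do):

-- 30,000 IOPS * 16 KB ~ 480 MB/sec (DBaaS); 15,500 IOPS * 16 KB ~ 250 MB/sec (physical host)
select 30000 * 16 / 1000 as dbaas_mb_per_sec,
       15500 * 16 / 1000 as physical_host_mb_per_sec
from   dual;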

The overall results look like this:

DBaaS:

Physical host:
Again the graph is supposed to show how many of the runs deviated by how much from the overall median runtime. Similar to the previous, logical-I/O-only test, the DBaaS service this time shows a significantly different profile, with an even larger spread of deviation of up to almost 9 percent from the median runtime, whereas the physical host only shows significant deviation of up to 2.5 percent.

Again, like in the previous test, the physical host shows more extreme outliers than the DBaaS service.

The same graph on a per-day basis this time shows significant differences between the days for the DBaaS service:

The physical host shows a very common pattern, except for the first day:

Looking at the individual performance of each thread, the DBaaS service shows significant variation in performance per day:

The physical host shows a pretty consistent performance pattern, but interestingly the different threads show different, yet individually consistent, runtimes:
The next test round will include physical writes.

ORACLE_MAINTAINED Set From “_ORACLE_SCRIPT” parameter

Michael Dinh - Sun, 2016-08-28 09:26

A huge thank you to Pete Finnigan for his blog post.

Hacking Oracle 12c COMMON Users

Not only is the implementation of ORACLE_MAINTAINED not ideal, it’s a possible security risk.

My tip to you is to always exit the session before performing any object creation.

As you can see from my pain below, it looks like "_ORACLE_SCRIPT" was not reset when new users were created.

Also ORACLE_MAINTAINED Objects Don’t Export

I learned quite a few new things this weekend and can now lay this to rest.

test:(SYS@test):PRIMARY> alter session set "_ORACLE_SCRIPT"=false;

Session altered.

test:(SYS@test):PRIMARY> create user newdemo identified by demo;

User created.

test:(SYS@test):PRIMARY> select username,oracle_maintained from dba_users order by 2;

USERNAME             O
-------------------- -
TEST1                N
MDINH                N
TESTING              N
DEMO                 N
NEWDEMO              N
GSMUSER              Y
AUDSYS               Y
ANONYMOUS            Y
DBSNMP               Y
XDB                  Y
APPQOSSYS            Y
GSMADMIN_INTERNAL    Y
SYSBACKUP            Y
OUTLN                Y
SYSDG                Y
SYSKM                Y
SYSTEM               Y
XS$NULL              Y
OJVMSYS              Y
ORACLE_OCM           Y
DIP                  Y
SYS                  Y
GSMCATUSER           Y

23 rows selected.

test:(SYS@test):PRIMARY> alter session set "_ORACLE_SCRIPT"=true;

Session altered.

test:(SYS@test):PRIMARY> create user ndemo identified by demo;

User created.

test:(SYS@test):PRIMARY> select username,oracle_maintained from dba_users order by 2;

USERNAME             O
-------------------- -
DEMO                 N
MDINH                N
TEST1                N
NEWDEMO              N
TESTING              N
GSMUSER              Y
SYSKM                Y
XS$NULL              Y
OJVMSYS              Y
APPQOSSYS            Y
ORACLE_OCM           Y
XDB                  Y
DBSNMP               Y
SYSDG                Y
DIP                  Y
OUTLN                Y
ANONYMOUS            Y
SYSBACKUP            Y
AUDSYS               Y
NDEMO                Y
GSMADMIN_INTERNAL    Y
GSMCATUSER           Y
SYS                  Y
SYSTEM               Y

24 rows selected.

test:(SYS@test):PRIMARY>
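A quick way to audit for accounts and objects that ended up flagged as Oracle-maintained (a sketch using the 12c ORACLE_MAINTAINED columns; adapt the filters to your own environment):

-- Users flagged as Oracle-maintained, newest first; anything you created yourself should not be listed here
select username, created
from   dba_users
where  oracle_maintained = 'Y'
order  by created desc;

-- Objects flagged as Oracle-maintained, by owner
select owner, count(*) as object_count
from   dba_objects
where  oracle_maintained = 'Y'
group  by owner
order  by object_count desc;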

Multiple instances of processes resulting in deadlock

Tom Kyte - Sun, 2016-08-28 06:06
Hi Tom, I have gone through the deadlock questions posted on your website. Thanks for the answers you have shared. However, I have a scenario where multiple instances of a process are running and causing a deadlock. 1. Design of proc...
Categories: DBA Blogs

How to Insert Random PKs from Child Tables Into Parent Table's FKs

Tom Kyte - Sun, 2016-08-28 06:06
How can I randomly generate and insert an "x" number of rows into a parent table (x being independent of the size of the parent and children tables), where it (the parent table) might have multiple FK columns from two or more child tables? For ins...
Categories: DBA Blogs

SQL questions involving combinations

Tom Kyte - Sun, 2016-08-28 06:06
Dear Tom... I have two questions that are somewhat related to each other in concept: 1) Consider the following table which represents a state history per each person identified by PERSON_ID: SQL> CREATE TABLE status_history 2 (per...
Categories: DBA Blogs

Links for 2016-08-27 [del.icio.us]

Categories: DBA Blogs
