Feed aggregator

Slow query because the cardinality estimate is wrong for joins on foreign keys

Tom Kyte - Wed, 2016-07-06 14:46
While investigating a very slow query in our OLTP db, I noticed that Oracle severely underestimates the cardinality for joins on foreign keys. The following script replicates the issue. create table A (part number not null, rec number not...
Categories: DBA Blogs

Tune order by clause in query.

Tom Kyte - Wed, 2016-07-06 14:46
Hi Tom, In the below query the order by is taking a lot of time, so I thought of creating a composite index on the columns that are present in the order by clause and forcing that index using a hint. But my problem is, here I have to fetch data of lo...
Categories: DBA Blogs

I need to delete 18000 rows from a table but the where clause condition varies. How to complete this deletion in a simple way

Tom Kyte - Wed, 2016-07-06 14:46
I need to delete 18000 rows from a table but the where clause condition varies for each set of records. A particular where clause condition can delete 3 records. Another particular where clause condition can delete 1 record. I combined both delete stat...
Categories: DBA Blogs

Where are the executed statements stored?

Tom Kyte - Wed, 2016-07-06 14:46
Suppose I execute a PL/SQL block; not all the statements are stored, only the high-load statements. How can I see which of the statements were executed, and where are they stored? In which view/table?
Categories: DBA Blogs

Cloud Raining - Where is Oracle with the Cloud - Is the DB Giant sleeping ?

Tom Kyte - Wed, 2016-07-06 14:46
Hello AskTom Team, I have been working with Oracle Database for over a decade and a half. With the recent shift of companies wanting to put their systems in the cloud rather than on-premises, what is the future of Oracle? I just attended a...
Categories: DBA Blogs

Inner join vs Where

Tom Kyte - Wed, 2016-07-06 14:46
What is the best practice: using "Inner Join" or "Where"? Example A: select DISTINCT(ET.DESCRIPTION) FROM EVENTTYPE ET INNER JOIN EVENTDCO E ON E.EVENTTYPEID = ET.EVENTTYPEID INNER JOIN CONTEXTOPERATION CTX ON E.OPERATIONPK ...
Categories: DBA Blogs

Getting Started with Oracle JET

Shay Shmeltzer - Wed, 2016-07-06 14:31

Last week I did an "Introduction to Oracle JET" session at the KScope16 conference, and I wanted to share the demo I used there with more people.

Specifically, the demo shows how you can adapt the code from the Oracle JET cookbook samples to work in the quick start template project.

In this demo you'll learn how to create your first JET application and build a basic JET page.

Specifically, it walks through each step, from creating the application to building the page.

Hopefully this video can help you build your first Oracle JET page.

Now that you've watched this video, which shows how to use the pre-configured quick start project, you might want to follow up with the video that shows how to work with the base distribution and hook up the JET libraries.

Need more help with Oracle JET? Join the JET community on OTN

Categories: Development

Outer Join with OR and Lateral View Decorrelation

Dominic Brooks - Wed, 2016-07-06 11:33

Use of ANSI SQL is a personal thing.

Historically I have not been a fan apart from where it makes things easier/possible.

This reticence was mainly due to optimizer bugs and limitations in the earlier days.

Recently I have been using it much more because I find that the developers I interact with prefer it / understand it better.

You might/should be aware that Oracle will rewrite ANSI SQL to an Oracle syntax representation, this transformation being listed in the optimizer trace file.

You might/should also be aware that Oracle outer join syntax does not allow OR or IN:

drop table t1;
drop table t2;

create table t1
as
select floor((rownum+1)/2) col1
,      case when mod(rownum,2) = 0 then 1 else 2 end col2
,      10 col3
from   dual
connect by rownum <= 20;

create table t2
as
select rownum col1
,      case when mod(rownum,2) = 0 then 2 else 1 end col3
from   dual
connect by rownum <= 10;

select *
from   t1
,      t2
where  t1.col1 = t2.col1 (+) 
and  ((t1.col2 = 1
and    t2.col3 (+) > t1.col3)
or    (t1.col2 = 2
and    t2.col3 (+) < t1.col3));

ORA-01719: outer join operator (+) not allowed in operand of OR or IN

ANSI SQL remedies this:

alter session set tracefile_identifier = 'domlg1';
alter session set events 'trace[rdbms.SQL_Optimizer.*]';
select *
from   t1
left join t2
on    t1.col1 = t2.col1
and ((t1.col2 = 1
and   t2.col3 > t1.col3)
or   (t1.col2 = 2
and   t2.col3 < t1.col3));

alter session set events 'trace off';
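
If you want to read that transformation yourself, the generated trace file can be located with a quick query, for example (v$diag_info is available from 11g onwards and requires the relevant privileges):

select value
from   v$diag_info
where  name = 'Default Trace File';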

But it comes at a price.

Note the execution plan:

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |    20 |  1300 |    42   (0)| 00:00:01 |
|   1 |  NESTED LOOPS OUTER |      |    20 |  1300 |    42   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL | T1   |    20 |   780 |     2   (0)| 00:00:01 |
|   3 |   VIEW              |      |     1 |    26 |     2   (0)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| T2   |     1 |    26 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter("T1"."COL1"="T2"."COL1" AND ("T1"."COL2"=1 AND
              "T2"."COL3">"T1"."COL3" OR "T1"."COL2"=2 AND "T2"."COL3"<"T1"."COL3"))   

Now, maybe you will have better luck than me, but no matter what I try I cannot change the NESTED LOOPS OUTER operation.
So, if that lateral view involves some full table scans or other significant operations, it might be very expensive as the inner operation of a nested loop, executed once per row from the driving row source.

The reason is in the optimizer trace.

Query after View Removal
******* UNPARSED QUERY IS ********
SELECT "T1."COL1" "COL1", "T1."COL2" "COL2", "T1."COL3" "COL3", "VW_LAT_AE9E49E8"."ITEM_1_0" "COL1", "VW_LAT_AE9E49E8"."ITEM_2_1" "COL3" FROM "DOM"."T1" "T1", LATERAL( (SELECT "T2"."COL1" "ITEM_1_0", "T2"."COL3" "ITEM_2_1" FROM "DOM"."T2" "T2" WHERE "T1"."COL1"="T2"."COL1" AND ("T1"."COL2"=1 AND "T2"."COL3">"T1"."COL3" OR "T1"."COL2"=2 AND "T2"."COL3" < "T1"."COL3"))) (+) "VW_LAT_AE9E49E8"
DCL:Checking validity of lateral view decorrelation SEL$BCD4421C (#1)
DCL: Bypassed: view has non-well-formed predicate
DCL: Failed decorrelation validity for lateral view block SEL$BCD4421C (#1)

The OR prevents the decorrelation, which seems to mean that we're stuck with a NESTED LOOPS plan for now.
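
If the nested loop is too expensive, one possible workaround, where the data allows it, is to split the OR into a UNION ALL of two plain outer joins. This sketch works here only because col2 is always 1 or 2, so the two branches are exhaustive and mutually exclusive; any other col2 value would need a third branch:

select t1.col1, t1.col2, t1.col3, t2.col1 t2_col1, t2.col3 t2_col3
from   t1
left join t2
on     t1.col1 = t2.col1
and    t2.col3 > t1.col3
where  t1.col2 = 1
union all
select t1.col1, t1.col2, t1.col3, t2.col1, t2.col3
from   t1
left join t2
on     t1.col1 = t2.col1
and    t2.col3 < t1.col3
where  t1.col2 = 2;

Each branch now has an ON clause without OR, so it converts to plain Oracle outer join syntax and the optimizer is free to consider, for example, a hash outer join per branch.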

Further Reading on ANSI:
Oracle Optimizer Blog
Jonathan Lewis on ANSI Outer
Jonathan Lewis on ANSI


Centenary

Jonathan Lewis - Wed, 2016-07-06 11:02

I rarely blog about anything non-technical, but after the events last Friday (1st July) I wanted to say something about the pride I shared with several hundred parents around the country as they saw the effect their offspring created through a living memorial of the terrible waste of life that happened a hundred years ago, on 1st July 1916, when some 70,000 soldiers (a very large fraction of them British) were killed or injured on the first day of the battle of the Somme.

While a memorial service was being held at Thiepval – a monument to 72,000 British (Empire) soldiers who died in the battle of the Somme but have no known grave – 1,500 “ghosts of the Somme” were silently wending their way in small groups through the streets, shopping centres, and train stations of cities across the UK, pausing to rest from time to time and occasionally bursting into the song of the trenches: “We’re here because we’re here”.

Each “ghost” represented a specific soldier killed on the first day of the Somme, and if you approached one of them to ask what was going on, their only response was to look you in the eye and hand you a card stating their name, rank, regiment and, where known, the age at which they died.

Although many of the posts and tweets about the event mention the collaboration and assistance of various theatre groups around the country, almost all of the soldiers were simply people who had responded to an advertisement for Project Octagon and had spent time over the previous 5 weekends rehearsing for the event. My son Simon was one of the volunteers who was on the London beat, starting with the morning commuters at Kings Cross then trekking around London all day – in hobnailed leather boots – to end at Waterloo station for the evening commuters.

After hours of walking this was how he appeared at Waterloo at the end of the day:

[photo: Simon in uniform at Waterloo]

Like me he normally sports a beard and moustache but he’d shaved the beard off and trimmed the moustache to the style of an older era. The absent, dazed, look is in character for the part but also, I think, an indication of exhaustion, both physical and emotional. I wasn’t even sure he’d realised I was crouching six feet away to take this photo until I asked him about it the following day. When I showed the picture to my wife it brought tears to her eyes to think that 100 years ago that might have been the last sight of her son she’d see before he went off to die – it’s a sentiment that appeared more than once on Twitter through the day.

Shortly before 6:00 pm several more groups converged on Waterloo for a final tableau giving us a rendition of “We’re here because we’re here” that ended in an agonised scream:

[photo: the final tableau at Waterloo]

It’s a gut-wrenching thought that a group that size would have been killed roughly every 6 minutes, on average, on the first day of the Somme; though, realistically, the entire 1,500 who volunteered for the day would probably have died in the first few minutes of the first wave.

Behind the Scenes

There was no announcement of this living memorial so throughout the day people were asking where the soldiers came from and who had organised the event. Finally, at 7:00 in the evening 1418-Now identified themselves as the commissioning body, with Jeremy Deller as the artist in collaboration with Rufus Norris of the National Theatre.

Like any military operation, though, between the generals at the top and the troops at the bottom there was a pyramid of personnel connecting the big picture to the final detail. Under Jeremy Deller and Rufus Norris there was a handful of key characters without whom the day would have been very different. I can’t tell you who they all were, but I’m proud to say that one of them was my daughter Anna who, along with a colleague, spent a large fraction of the last 16 months in the role of “Lead Costume Supervisor” preparing for the day. Under the pair there were several regional costume supervisors, and each costume supervisor was responsible for several dressers who would have to help the volunteers put on the unfamiliar battledress.

Despite working on the project for 16 months Anna told me very little about what was going on until the day was over, and this is a thumbnail sketch (errors and omissions are my fault) of what she’s told me so far.

Amongst other things she selected a list of names from the soldiers who had died on the first day of battle, recording their rank, regiment, battalion and, where known, age. She then had to find out exactly what kit each battalion would have been wearing on the day, allowing for some of the variation that would have appeared within each battalion and catering for the various ranks; then she had to find a supplier who could make the necessary uniforms in a range of sizes that would allow for the variation in the build of the (as yet unknown, unmeasured) volunteers.

As batches of uniforms arrived each one had to be associated with its (historic) owner and supplied with 200 cards with the owner’s details – and it was really important to ensure that the right name was attached to a uniform before the uniforms could be dispatched around the country. Ideally a uniform would arrive at a location and find a volunteer who was the right size to wear it, with the right apparent age to match the card that went with the uniform; inevitably some uniforms had to be moved around the country to match the available volunteers.

The work didn’t stop with the uniforms being in the right place at the right time, of course. There aren’t many people alive who know how to dress in a British Army uniform from 1916 – so Anna and her colleague had to create a number of videos showing the correct way to wear webbing, how to put on puttees, etc. The other problem with the uniforms was that they were brand new – so they had to be “broken down”. That’s a lot of work when you’ve got 1,500 costumes. In part this was carried out by the volunteers themselves who spent some of their rehearsal time wearing the costumes while doing energetic exercises to wear them in and get them a little grubby and sweaty; but it also meant a lot of work for the dressers who were supplied with videos showing them how to rub (the right sort of) dirt into clothes and how to rough them up and wear them down in the right places with wire brushes etc.

One of the bits of the uniform you probably won’t have seen – or even if you saw it you might not have noticed it – was the T-shirt: the army uniform of the day would have been rather sweaty, itchy and uncomfortable on a hot summer’s day, so the soldiers weren’t wearing the real thing. Anna and her colleague designed a T-shirt that looked like the front of the shirt the troops should have worn under their battledress made of a material that was thinner, softer and much more comfortable than the real thing. In the end the day wasn’t as hot as expected so very few volunteers seemed to unbutton their tops – but if they had done so the T-shirts would have appeared to be the real thing.

Walking the Walk.

Apart from the authenticity of the uniforms, another major feature of the day was the way the ghosts made their way around from place to place silently, in single file, with no apparent reference to maps (or satnav). Every group had a carefully planned route and timetable and two stage managers wearing brightly coloured backpacks so that they could be seen easily by the soldiers but, since one walked 50 metres ahead and one 50 metres behind, were unlikely to be noticed by anyone who wasn’t looking. The stage managers were following carefully planned and timetabled routes, allowing the soldiers to stay in character all the time.

You may have seen pictures of the troops on the various underground trains – that’s just one demonstration of the level of detailed planning that went into the day. With a tight timetable of action and previous communications to station masters and other public officials to ensure that there would be no hold-ups at critical points the lead stage manager could (for example) get to a station guard to warn them of the imminent arrival of a squad, show them the necessary travel cards, and get the gate held open for them. No need for WW1 ghosts to break character by fumbling for electronic travel cards, just a silent parade through an open gate.

Just as Anna was the Lead Costume Supervisor, there was a Lead Stage Manager with a cascade of local route masters beneath her. She was based in Birmingham and was responsible for working out how to make the timetabling and routing possible, using her home town as the test-bed for the approach, then briefing the regional organizers who applied the methods to prepare routes and handle logistics for their own locations.

End Results

To the people of London and Manchester and Belfast and Swansea and Penzance and Shetland and half a dozen places around the UK, it just happened: hundreds of ghosts of the past appeared and wandered among us. The uniforms were “real”, the journeys from place to place were “spontaneous” and “unguided”, and the ghosts were haunting. To most of us “it just happened” but behind the scenes the effort involved in preparation, and the attention to detail was enormous.

Between the “headline” names at the top of the pyramid and the highly visible troops on the ground at the bottom, it took the coordinated efforts of about 500 people to turn a wonderful idea into a reality that moved millions of people in their daily lives.

 

If you want to see more images and comments about the day you can follow the hashtag #wearehere, and there is a collection of Instagram images at http://becausewearehere.co.uk/. And if you’re in the London area on 11th July and want to hear more about the instigation and implementation of the day, there’s an evening event at the National Theatre on Monday 11th July featuring Jenny Waldman, Jeremy Deller and Rufus Norris discussing the event with a BBC correspondent.


OSB 12c Logging part 2

Darwin IT - Wed, 2016-07-06 06:52
Two weeks ago, I wrote about how to set the log level for OSB pipelines (12c) to be able to see the logging of the Log activity in the WebLogic server logs.

Today I found that, for a developer at my customer, the oracle.osb.logging.pipeline logger was missing entirely from the log configuration. So setting the level from EM (Fusion Middleware Control) by following the article above is a little hard.

I could not find out why the logger was missing in that case. But I did find a simple solution.
In paragraph '7.1.4 ODL Log Configuration' of the Administering Oracle Service Bus documentation, I found that you can change the logging via EM, WLST, or the logging.xml file. This file can be found in ${osb.domain.home}/config/fmwconfig/servers/${osbserver.name}, e.g. 'c:\Data\JDeveloper\SOA\system12.2.1.0.42.151011.0031\DefaultDomain\config\fmwconfig\servers\DefaultServer\'.

Go to the end of the file:

...
<logger name='com.sun.xml.ws' level='WARNING' useParentHandlers='true'/>
</loggers>
</logging_configuration>

Copy the last logger and rename it to create an entry for the oracle.osb.logging.pipeline logger:

...
<logger name='com.sun.xml.ws' level='WARNING' useParentHandlers='true'/>
<logger name='oracle.osb.logging.pipeline' level='TRACE:16' />
</loggers>
</logging_configuration>

Set the level and remove the useParentHandlers attribute.
Restart your server and you should find the option in the EM OSB Log Configuration. If you have multiple OSB servers, you will probably need to apply this change on every OSB server, since logging.xml resides in a server-specific sub-folder. I haven't tried adding it on one server to see whether it is automatically propagated to the other servers; that would be a nice experiment.

Always fill in the DNS Hostname Prefix in Oracle Compute Cloud Instance

Always fill in the DNS Hostname Prefix when creating a new instance in Oracle Compute Cloud. What seems an unimportant optional field can make your life harder than it should be,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

ora-01008, what is the bind variable's name?

Tom Kyte - Tue, 2016-07-05 20:26
Good time of day, Tom! I run several SQL statements via the DBMS_SQL package. Each of those statements has a set of bind variables. Is there any feature to get a list of the variables' names for a given SQL statement? For instance, I want to get a list like ':v_name', ':p_result' ...
Categories: DBA Blogs

Generate tree paths for hierarchy

Tom Kyte - Tue, 2016-07-05 20:26
Hello, I have one question which was asked in an interview: to make a tree, when a user inserts a node into the table, its path should get automatically reflected in the table. Table: Tree ---------------------- node(int) parentNode(int) path(...
Categories: DBA Blogs

LEAST AND GREATEST functions

Tom Kyte - Tue, 2016-07-05 20:26
Hello, I am trying to use the below SQL: SELECT least ( DECODE (:VAR1, 9999, NULL, :VAR1), DECODE (:VAR2, 9999, NULL, :VAR2) ) FROM DUAL; VAR1 & VAR2 need to be NUMBERs (not varchar). The above SQL seems to work for all numbers exce...
Categories: DBA Blogs

trigger

Tom Kyte - Tue, 2016-07-05 20:26
Hi, my table has first name, last name and status columns. Now the thing is, I want to change the status to "APPROVED" as soon as I make an entry in last name; if the last name column is empty, the status should default to, let's say, "PENDING". I tried it u...
Categories: DBA Blogs

Compare source and target in a Dbvisit replication

Yann Neuhaus - Tue, 2016-07-05 13:51

You’ve set up a logical replication, and you trust it. But before the target goes into production, it is safer to compare source and target, at least by counting the number of rows.
But tables are continuously changing, so how can you compare? It's not so difficult, thanks to the Dbvisit replicate heartbeat table and Oracle flashback query.

Here is the state of the replication, with activity on the source and real-time replication to the target:
| Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 29 days
MINE IS running. Currently at plog 368 and SCN 6119128 (07/06/2016 04:15:21).
APPLY IS running. Currently at plog 368 and SCN 6119114 (07/06/2016 04:15:19).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS: 100% Mine:961/961 Unrecov:0/0 Applied:961/961 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.ADDRESSES: 100% Mine:961/961 Unrecov:0/0 Applied:961/961 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.CARD_DETAILS: 100% Mine:894/894 Unrecov:0/0 Applied:894/894 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.ORDER_ITEMS: 100% Mine:5955/5955 Unrecov:0/0 Applied:5955/5955 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.ORDERS: 99% Mine:4781/4781 Unrecov:0/0 Applied:4780/4780 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.INVENTORIES: 100% Mine:5825/5825 Unrecov:0/0 Applied:5825/5825 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.LOGON: 99% Mine:6175/6175 Unrecov:0/0 Applied:6173/6173 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
--------------------------------------------------------------------------------------------------------------------------------------------
7 tables listed.

If you want to compare the rows from source and target, you will always see a difference, because modifications on the source arrive on the target a few seconds later.

Source and target SCN

The first thing to do is to determine a consistent point in time where source and target are the same. This point in time exists because the redo log is sequential by nature, and the commits are done in the same order on the target as on the source. This order is visible with the SCN. The only problem is that on a logical replication the SCNs on source and target are completely different, each with a life of its own.
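
Note that you cannot simply map a wall-clock time to an SCN on each side and compare. For illustration:

-- each database maps times to its own SCNs:
select timestamp_to_scn(systimestamp) scn_right_now from dual;
-- run at the same instant on source and target, this returns two
-- unrelated numbers, which is why we need a marker that records the
-- source SCN and is replicated to the target.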

The first step is to determine an SCN from the target and an SCN on the source that show the same state of transactions.

But before that, let’s connect to the target and set the environment:

$ sqlplus /nolog @ compare.sql
 
SQL*Plus: Release 11.2.0.2.0 Production on Tue Jul 5 18:15:34 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
 
SQL> define table_owner=REPOE
SQL> define table_name=ORDERS
SQL>
SQL> connect system/manager@//192.168.56.67/XE
Connected.
SQL> alter session set nls_date_format='DD-MON-YYYY HH24:mi:ss';
Session altered.
SQL> alter session set nls_timestamp_format='DD-MON-YYYY HH24:mi:ss';
Session altered.

My example is on the #repattack environment, with Swingbench running on the source, and I'll compare the ORDERS table.

Heartbeat table

Each Dbvisit replicate configuration comes with a heartbeat table created in the Dbvisit schema on the source and replicated to the target. This table is updated every 10 seconds on the source with the timestamp and SCN. This is a great way to check how the replication is working. Here it will be the way to get the SCN information from the source.

Flashback query

Oracle flashback query offers a nice way to get the commit SCN for the rows updated in the heartbeat table. From the target database, this is the commit SCN for the replication transaction (the APPLY process) and it can be displayed along with the SCN from the source transaction that is recorded in the heartbeat table and replicated to the target.

SQL> column versions_startscn new_value scn_target
SQL> column source_scn new_value scn_source
SQL> column mine_process_name format a12
SQL> column versions_starttime format a21
 
SQL> select mine_process_name,wallclock_date,mine_date,source_scn,mine_scn,versions_startscn,versions_starttime,versions_endscn
from DBVREP.DBRSCOMMON_HEARTBEAT versions between timestamp(sysdate-1/24/60) and sysdate
order by versions_endscn nulls last ;
 
MINE_PROCESS WALLCLOCK_DATE MINE_DATE SOURCE_SCN MINE_SCN VERSIONS_STARTSCN VERSIONS_STARTTIME VERSIONS_ENDSCN
------------ -------------------- -------------------- -------------------- -------------------- -------------------- --------------------- --------------------
MINE 06-JUL-2016 04:14:27 06-JUL-2016 04:14:22 6118717 6118661 4791342
MINE 06-JUL-2016 04:14:37 06-JUL-2016 04:14:31 6118786 6118748 4791342 06-JUL-2016 04:11:29 4791376
MINE 06-JUL-2016 04:14:47 06-JUL-2016 04:14:41 6118855 6118821 4791376 06-JUL-2016 04:11:39 4791410
MINE 06-JUL-2016 04:14:57 06-JUL-2016 04:14:51 6118925 6118888 4791410 06-JUL-2016 04:11:49 4791443
MINE 06-JUL-2016 04:15:07 06-JUL-2016 04:15:01 6119011 6118977 4791443 06-JUL-2016 04:11:59 4791479
MINE 06-JUL-2016 04:15:17 06-JUL-2016 04:15:11 6119091 6119059 4791479 06-JUL-2016 04:12:09 4791515
MINE 06-JUL-2016 04:15:27 06-JUL-2016 04:15:21 6119162 6119128 4791515 06-JUL-2016 04:12:19

This shows that the current version of the heartbeat table on the target was committed at SCN 4791515, and we know that this state matches SCN 6119162 on the source. You can choose any pair you want, but the latest will probably be the fastest to query.

Counting rows on source

I'll use flashback query to count the rows from the source at SCN 6119162. I'm doing it in parallel query, but be careful: when the table has high modification activity, there will be a lot of undo blocks to read.

SQL> connect system/manager@//192.168.56.66/XE
Connected.
SQL> alter session force parallel query parallel 8;
Session altered.
 
SQL> select count(*) from "&table_owner."."&table_name." as of scn &scn_source;
old 1: select count(*) from "&table_owner."."&table_name." as of scn &scn_source
new 1: select count(*) from "REPOE"."ORDERS" as of scn 6119162
 
COUNT(*)
--------------------
775433

Counting rows on target

I'm doing the same from the target, but with the SCN 4791515:
SQL> connect system/manager@//192.168.56.67/XE
Connected.
SQL> alter session force parallel query parallel 8;
Session altered.
 
SQL> select count(*) from "&table_owner."."&table_name." as of scn &scn_target;
old 1: select count(*) from "&table_owner."."&table_name." as of scn &scn_target
new 1: select count(*) from "REPOE"."ORDERS" as of scn 4791515
 
COUNT(*)
--------------------
775433

Good. Same number of rows. This proves that even with constantly-inserted tables we can find a point of comparison, thanks to the Dbvisit heartbeat table and to Oracle flashback query. If you are replicating with another logical replication product, you can simulate the heartbeat table with a job that updates the current SCN into a single-row table, and replicate it. If your target is not Oracle, then there is a good chance that you cannot run that kind of 'as of' query, which means you would need to lock the table on the source for the time of the comparison.
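
Such a home-made heartbeat could look like this minimal sketch, to be run on the source as the table owner (the repadmin.heartbeat names are illustrative, not anything a replication product provides):

-- single-row marker table, to be included in the replication
create table repadmin.heartbeat (id number primary key, source_scn number, updated_at date);
insert into repadmin.heartbeat values (1, null, null);
commit;

-- refresh it every 10 seconds (requires execute on dbms_flashback)
begin
  dbms_scheduler.create_job(
    job_name        => 'HEARTBEAT_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[begin
                            update repadmin.heartbeat
                               set source_scn = dbms_flashback.get_system_change_number,
                                   updated_at = sysdate
                             where id = 1;
                            commit;
                          end;]',
    repeat_interval => 'FREQ=SECONDLY;INTERVAL=10',
    enabled         => true);
end;
/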

ORA_HASH

If you think that counting the rows is not sufficient, you can compare a hash value from the columns. Here is an example.
I get the list of columns, with ORA_HASH() function on it, and sum() between them:

SQL> column columns new_value columns
SQL> select listagg('ORA_HASH('||column_name||')','+') within group (order by column_name) columns
2 from dba_tab_columns where owner='&table_owner.' and table_name='&table_name';
old 2: from dba_tab_columns where owner='&table_owner.' and table_name='&table_name'
new 2: from dba_tab_columns where owner='REPOE' and table_name='ORDERS'
 
COLUMNS
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ORA_HASH(CARD_ID)+ORA_HASH(COST_OF_DELIVERY)+ORA_HASH(CUSTOMER_CLASS)+ORA_HASH(CUSTOMER_ID)+ORA_HASH(DELIVERY_ADDRESS_ID)+ORA_HASH(DELIVERY_TYPE)+ORA_HASH(INVOICE_ADDRESS_ID)+ORA_HASH(ORDER_DATE)+ORA_
HASH(ORDER_ID)+ORA_HASH(ORDER_MODE)+ORA_HASH(ORDER_STATUS)+ORA_HASH(ORDER_TOTAL)+ORA_HASH(PROMOTION_ID)+ORA_HASH(SALES_REP_ID)+ORA_HASH(WAIT_TILL_ALL_AVAILABLE)+ORA_HASH(WAREHOUSE_ID)

With this list defined in a substitution variable, I can compare the sum of hash values:

SQL> select count(*),sum(&columns.) hash from "&table_owner."."&table_name." as of scn &scn_target;
old 1: select count(*),sum(&columns.) hash from "&table_owner."."&table_name." as of scn &scn_target
new 1: select count(*),sum(ORA_HASH(CARD_ID)+ORA_HASH(COST_OF_DELIVERY)+ORA_HASH(CUSTOMER_CLASS)+ORA_HASH(CUSTOMER_ID)+ORA_HASH(DELIVERY_ADDRESS_ID)+ORA_HASH(DELIVERY_TYPE)+ORA_HASH(INVOICE_ADDRESS_ID)+ORA_HASH(ORDER_DATE)+ORA_HASH(ORDER_ID)+ORA_HASH(ORDER_MODE)+ORA_HASH(ORDER_STATUS)+ORA_HASH(ORDER_TOTAL)+ORA_HASH(PROMOTION_ID)+ORA_HASH(SALES_REP_ID)+ORA_HASH(WAIT_TILL_ALL_AVAILABLE)+ORA_HASH(WAREHOUSE_ID)) hash from "REPOE"."ORDERS" as of scn 4791515
 
COUNT(*) HASH
-------------------- --------------------
775433 317531150805040439
 
SQL> connect system/manager@//192.168.56.66/XE
Connected.
SQL> alter session force parallel query parallel 8;
Session altered.
 
SQL> select count(*),sum(&columns.) hash from "&table_owner."."&table_name." as of scn &scn_source;
old 1: select count(*),sum(&columns.) hash from "&table_owner."."&table_name." as of scn &scn_source
new 1: select count(*),sum(ORA_HASH(CARD_ID)+ORA_HASH(COST_OF_DELIVERY)+ORA_HASH(CUSTOMER_CLASS)+ORA_HASH(CUSTOMER_ID)+ORA_HASH(DELIVERY_ADDRESS_ID)+ORA_HASH(DELIVERY_TYPE)+ORA_HASH(INVOICE_ADDRESS_ID)+ORA_HASH(ORDER_DATE)+ORA_HASH(ORDER_ID)+ORA_HASH(ORDER_MODE)+ORA_HASH(ORDER_STATUS)+ORA_HASH(ORDER_TOTAL)+ORA_HASH(PROMOTION_ID)+ORA_HASH(SALES_REP_ID)+ORA_HASH(WAIT_TILL_ALL_AVAILABLE)+ORA_HASH(WAREHOUSE_ID)) hash from "REPOE"."ORDERS" as of scn 6119162
 
COUNT(*) HASH
-------------------- --------------------
775433 317531150805040439

Note that this is only an example. You must adapt for your needs: precision of the comparison and performance.

So what?

Comparing source and target is not a bad idea. With Dbvisit replicate, if you defined the replication properly and did the initial import with the SCN provided by the setup wizard, you should not miss transactions, even when there is a lot of activity on the source, and even without locking the source for the initialisation. But it's always good to compare, especially before the 'Go' decision of a migration done with Dbvisit replicate to lower the downtime (and the stress). Thanks to the heartbeat table and flashback query, a checksum is not too hard to implement.

 

The post Compare source and target in a Dbvisit replication appeared first on Blog dbi services.

Fujitsu and Oracle Team Up to Drive Cloud Computing

Oracle Press Releases - Tue, 2016-07-05 11:41
Press Release
Fujitsu and Oracle Team Up to Drive Cloud Computing Strategic alliance provides robust cloud offering to customers in Japan and their subsidiaries around the world

Tokyo and Redwood Shores, Calif.—Jul 5, 2016

Fujitsu Limited, Oracle Corporation, and Oracle Corporation Japan today announced that they have agreed to form a new strategic alliance to deliver enterprise-grade, world-class cloud services to customers in Japan and their subsidiaries around the world.

In order to take advantage of cloud computing to speed innovation, reduce costs and drive business growth, organizations need IT partners that can deliver the performance, security and management capabilities that are demanded by enterprise workloads. To help organizations in Japan capitalize on this opportunity and confidently move enterprise workloads to the cloud, Oracle Cloud Application and Platform services—such as Oracle Database Cloud Service and Oracle Human Capital Management (HCM) Cloud—will be powered by Fujitsu’s datacenters in Japan. Under the new strategic alliance, Fujitsu will work to drive sales of robust cloud offerings to companies in Japan and their subsidiaries around the world.

By bringing Oracle Cloud Application and Platform services to FUJITSU Cloud Service K5, Fujitsu and Oracle will provide a high-performance cloud environment to meet the IT and business needs of customers. Specifically, Fujitsu will install the Oracle Cloud services in its datacenters in Japan and connect them to its K5 service in order to deliver enterprise-grade cloud services. The first Oracle Cloud Application that will be offered to Fujitsu customers under the joint offering is Oracle HCM Cloud. As part of the agreement, Fujitsu will implement Oracle HCM Cloud to gain unprecedented insight into its workforce throughout the company’s worldwide network of offices.

"We at Fujitsu support the digital transformation of our customers, and aim to contribute to optimized customer systems and business growth with the roll out of our Digital Business Platform MetaArc," said Shingo Kagawa, SEVP, Head of Digital Services Business & CTO, Fujitsu Limited. "In particular, we offer the core cloud service on MetaArc, K5, which addresses systems of engagement (SoE)(*1) and systems of record (SoR)(*2). Oracle is a leader in Japan's database market segment and possesses strong capabilities in the SoR domain. Now, as we look to strengthen MetaArc and K5, taking part in this strategic alliance with Oracle will work to meet the cloud needs of our customers."

“In order to realize the full business potential of cloud computing, organizations need secure, reliable and high-performing cloud solutions,” said Edward Screven, Chief Corporate Architect, Oracle. “For over three decades, Oracle and Fujitsu have worked together using our combined R&D, product depth and global reach to create innovative solutions enabling customers to scale their organizations and achieve a competitive advantage. Oracle’s new strategic alliance with Fujitsu will allow companies in Japan to take advantage of an integrated cloud offering to support their transition to the cloud.”

“We strongly believe this cloud alliance will support Japanese companies to drive digital transformation,” said Hiroshige Sugihara, President and CEO, Oracle Corporation Japan. “This will be a gateway for customers to achieve standardization, modernization, and globalization.  This initiative will differentiate us from other cloud providers by emphasizing real enterprise cloud solutions, while offering Japanese companies access to best of breed technology in the new Cloud era.”

The combination of these innovative solutions, including Oracle Database Cloud Service, Oracle HCM Cloud, and K5, will enable Fujitsu and Oracle to deliver mission-critical systems over a cloud environment within Fujitsu’s datacenters while maintaining the high levels of performance and reliability that had previously been achieved in on-premise environments. Furthermore, with the Oracle Cloud provided from Fujitsu’s state-of-the-art datacenters, which boast a high level of capabilities in Japan, customers using K5 or Fujitsu’s hosting services will have access to these valuable cloud services.

Contact Info
Fujitsu Limited
Public and Investor Relations Division
Candice van der Laan
Oracle
+1.650.464.3186
candice.van.der.laan@oracle.com
Junko Ishikawa, Norihito Yachita
Oracle Japan
pr-room_jp@oracle.com
Notes

1. Systems-of-Engagement (SoE)
Systems that implement digital transformations, including business-process transformation and new-business development.

2. Systems-of-Record (SoR)
Existing systems that record company data and perform business processes.

About Fujitsu

Fujitsu is the leading Japanese information and communication technology (ICT) company, offering a full range of technology products, solutions, and services. Approximately 156,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited (TSE: 6702) reported consolidated revenues of 4.7 trillion yen (US$41 billion) for the fiscal year ended March 31, 2016. For more information, please see www.fujitsu.com.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

About Oracle Japan

Oracle Corporation Japan was established in 1985 as Oracle Corporation’s subsidiary in Japan. With the goal of becoming the number one cloud company, it provides a comprehensive and fully integrated stack of cloud applications and cloud platforms, a suite of products to generate valuable information from big data, and a wide variety of services to support the use of these products. It was listed on the first section of the Tokyo Stock Exchange in 2000 (Company code: 4716). Visit oracle.com/jp.

Fujitsu and Oracle Alliance History

Since entering into a database OEM contract in 1989, the two companies have been providing customers with optimal solutions. Currently, as an Oracle Partner Network (OPN) Diamond level partner, Fujitsu is providing system integration services worldwide. In addition, in the SPARC/Solaris server business, Fujitsu entered into a sales contract with Sun Microsystems in 1983 and a development agreement for SPARC chips in 1988, and further strengthened the relationship with Sun Microsystems through a Solaris OEM contract in 1993. Since Oracle's subsequent acquisition of Sun Microsystems, the two companies have maintained a close, collaborative relationship to the present day.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates in the US and other countries. Other names may be trademarks of their respective owners. This press release is solely for the purpose of providing information and does not constitute an implied contract.

All company or product names mentioned herein are trademarks or registered trademarks of their respective owners. Information provided in this press release is accurate at time of publication and is subject to change without advance notice.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


Hollander Adds New Channel, Goes Direct to Consumers Fast With Oracle Commerce Cloud

Linda Fishman Hoyle - Tue, 2016-07-05 11:27

Headquartered in Boca Raton, Florida, Hollander Sleep Products is the largest bed pillow and mattress pad manufacturer in North America. The company also has major distribution agreements across North America with brands including Beautyrest, Ralph Lauren, and Nautica, among others.

Hollander has been very successful in the B2B space, manufacturing and marketing its in-house brand Live Comfortably. It also has done very well distributing other companies’ products to retailers. But, as you know, retail and distributor businesses have been under great pressure in recent years. So in 2015, the company looked to expand its business model and take its in-house brand Live Comfortably direct to consumers, but it had no direct, consumer retail/commerce experience. To accomplish its goal, Hollander teamed up with Oracle and put Oracle Commerce Cloud to work. Listen to the story from these two Hollander executives:

ODTUG Kscope16

Oracle AppsLab - Tue, 2016-07-05 10:05

Just like last year, a few members (@jkuramot, @noelportugal, @YuhuaXie, Tony and myself) of @theappslab attended Kscope16 to run a Scavenger Hunt, speak and enjoy one of the premier events for Oracle developers. It was held in Chicago this time around, and here are my impressions.

Lori and Tony Blues

Since our Scavenger Hunt was quite a success the previous year, we were asked to run it again to spice up the conference a bit. This is the 4th time we have run the Scavenger Hunt (if you want to learn more about the game itself, check out Noel’s post on the mechanics) and by now it runs like a well-oiled machine. The competition was even fiercer than last year, with a DJI Phantom at stake, but in the end @alanarentsen prevailed; congratulations to Alan. @briandavidhorn was the runner-up and walked away with an Amazon Echo, and in 3rd place, @GoonerShre got a Raspberry Pi for his efforts.


Sam Hetchler and Noel engage in a very deep IoT discussion.

There were also consolation prizes for the next 12 places; they each got both a Google Chromecast and a Tile. All in all, it was another very successful run of the Scavenger Hunt, with over 170 participants and a lot of buzz surrounding the game. Here’s a quote from one of the players:

“I would not have known so many things, and tried them out, if there were not a Scavenger Hunt. It is great.”

Better than Cats. We haven’t decided yet if we are running the Scavenger Hunt again next year, if we do, it will probably be in a different format; our brains are already racing.

Our team also had a few sessions: Noel talked broadly about OAUX, and I gave a presentation about Developer Experience, or DX. As is always the case at Kscope, the sessions are pretty much bi-directional, with the audience participating as you deliver your presentation. Some great questions were asked during my talk, and I was even able to record a few requirements for API Maker, a tool we are building for DX.

Judging by the participation of the attendees, there seems to be a lot of enthusiasm in the developer community for both API Maker and 1CSS, another tool we are creating for DX.  As a result of the session, we have picked up a few contacts within Oracle which we will explore further to push these tools and get them out sooner rather than later.

In addition to all those activities, Raymond ran a preview of an IoT workshop we plan to replicate at OpenWorld and JavaOne this year. I won’t give away too much, but it involves a custom PCB.


The makings of our IoT workshop, coming to OOW and J1.

Unfortunately, my schedule (Scavenger Hunt, presentation) didn’t really allow me to attend any sessions but other members of our team attended a few, so I will let them talk about that. I did, however, get a chance to play some video games.


I really, really like video games.

And have some fun, as is customary at Kscope.


A traditional Chicago dog.

Cheers,

Mark.


PeopleSoft Logging and Auditing

Logging and auditing are among the pillars of PeopleSoft security. Both application and database auditing are required. Logging and auditing support a trust-but-verify approach, which is often deemed necessary to secure the activities of privileged system and database administrators.

While both the application and the database offer sophisticated auditing solutions, one key feature Integrigy always recommends is to ensure that EnableDBMonitoring is enabled within the psappsrv.cfg file. It is set by default, but we at times find it disabled.

When enabled, EnableDBMonitoring allows PeopleSoft application auditing to bridge into database auditing. It does this by populating the Oracle CLIENT_INFO variable with the PeopleSoft User Id, IP address and program name. With Oracle RDBMS auditing enabled, anything written to CLIENT_INFO is also written into the database audit logs.

In other words, with both database auditing and EnableDBMonitoring enabled, you can report on which user updated what and when, not just that the PeopleSoft application (the ‘Access ID’) issued an update statement.
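
As a quick sanity check that this bridge is in place, you can look at what the application server sessions are writing to CLIENT_INFO. A minimal, illustrative query (the program filter may need adjusting for your platform and PeopleTools release):

-- expect CLIENT_INFO to carry the PeopleSoft User Id, IP address and program name
select sid, username, program, client_info
from   v$session
where  program like '%PSAPPSRV%'
and    client_info is not null;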

We commonly use the graphics below to review Integrigy’s approach to PeopleSoft logging and auditing.

If you have questions, please contact us at info@integrigy.com

Michael A. Miller, CISSP-ISSMP, CCSP

References

PeopleSoft Database Security

PeopleSoft Security Quick Reference

Categories: APPS Blogs, Security Blogs
