
Feed aggregator

Keep your Database Tidy – making sure a file exists before DBMS_DATAPUMP makes a mess

The Anti-Kyte - Mon, 2015-07-06 13:06

There are times when I wonder whether DBMS_DATAPUMP isn’t modelled on your average teenager’s bedroom floor.
If you’ve ever tried to start an import by specifying a file that doesn’t exist ( or that DBMS_DATAPUMP can’t see) you’ll know what I mean.
The job fails, which is fair enough. However, DBMS_DATAPUMP then goes into a huff and refuses to “clean up its room”.
Deb has suggested that this sort of thing is also applicable to husbands.
Not that I have any idea of whose husband she’s talking about.
Anyway, you may consider it preferable to check that the export file you want to import from actually exists in the appropriate directory before risking the wrath of the temperamental datapump API.
This apparently simple check can get a bit interesting, especially if you’re on a Linux server…

For what follows, I’ll be using the DATA_PUMP_DIR directory object. To check where this is pointing to…

select directory_path
from dba_directories
where directory_name = 'DATA_PUMP_DIR'
/

DIRECTORY_PATH
--------------------------------------------------------------------------------
/u01/app/oracle/admin/XE/dpdump/

The owner of the functions I’ll be creating will need to have READ privileges granted directly to them on this directory…

select privilege
from user_tab_privs
where table_name = 'DATA_PUMP_DIR'
/

PRIVILEGE
----------------------------------------
READ

SQL> 

If the user does not have this privilege then you can grant it (connecting as SYSDBA) with the following:

grant read on directory data_pump_dir to user
/

…where user is the name of the schema in which you are going to create the function.

Now to create a file in this directory so that we can test for its existence…

sudo su oracle
[sudo] password for mike: 

touch /u01/app/oracle/admin/XE/dpdump/test.txt
ls -l  /u01/app/oracle/admin/XE/dpdump/test.txt
-rw-r--r-- 1 oracle dba 0 Jul  3 18:08 /u01/app/oracle/admin/XE/dpdump/test.txt

In order to check for the existence of this file from within PL/SQL, we have a couple of options…

UTL_FILE

The UTL_FILE.FGETATTR procedure retrieves details of a file, including whether or not it exists…

set serveroutput on size unlimited
declare

    l_filename varchar2(4000) := 'test.txt';
    l_exists boolean;
    l_length number;
    l_bsize number;
begin
    utl_file.fgetattr
    (
        location => 'DATA_PUMP_DIR',
        filename => l_filename, 
        fexists => l_exists,
        file_length => l_length,
        block_size => l_bsize
    );
    if l_exists then
        dbms_output.put_line( l_filename ||' exists in DATA_PUMP_DIR : ');
        dbms_output.put_line( 'Length : '||l_length);
        dbms_output.put_line( 'Block Size : '||l_bsize);
    else
        dbms_output.put_line('File does not exist in DATA_PUMP_DIR');
    end if;
end;
/

Run this and we get :

test.txt exists in DATA_PUMP_DIR :
Length : 0
Block Size : 4096

PL/SQL procedure successfully completed.

That’s handy. Let’s put it into a function…

create or replace function file_exists_fn
(
    i_dir in all_directories.directory_name%type,
    i_filename in varchar2
)
    return varchar2
is

    l_exists boolean;
    l_length number;
    l_block_size number;
    
    l_return varchar2(4000);
    
begin
    utl_file.fgetattr
    (
        location => upper(i_dir),
        filename => i_filename,
        fexists => l_exists,
        file_length => l_length,
        block_size => l_block_size
    );
    if l_exists then
        l_return := i_filename||' in '||upper(i_dir)||' - Length : '||l_length||' - Block Size : '||l_block_size;
    else
        l_return := i_filename||' does not exist in '||upper(i_dir);
    end if;
    
    return l_return;
end;
/

Now let’s see what happens with a Symbolic Link…

touch /home/mike/symlink.txt

sudo su oracle
[sudo] password for mike: 

ln -s /home/mike/symlink.txt /u01/app/oracle/admin/XE/dpdump/symlink.txt
ls -l /u01/app/oracle/admin/XE/dpdump/symlink.txt
lrwxrwxrwx 1 oracle dba 22 Jul  3 18:29 /u01/app/oracle/admin/XE/dpdump/symlink.txt -> /home/mike/symlink.txt

If we now call our function to find symlink.txt in DATA_PUMP_DIR…

select file_exists_fn('DATA_PUMP_DIR', 'symlink.txt')
from dual
/

FILE_EXISTS_FN('DATA_PUMP_DIR','SYMLINK.TXT')
--------------------------------------------------------------------------------
symlink.txt does not exist in DATA_PUMP_DIR

SQL> 

It is at this point that I realise that I really should have read the manual, which states that, for UTL_FILE : “neither hard nor symbolic links are supported.”

So, if we’re to handle links, a different approach is required…

The DBMS_LOB approach

The DBMS_LOB package has a FILEEXISTS function which looks like it could come in handy here…

set serveroutput on size unlimited
declare

    l_filename varchar2(4000) := 'symlink.txt';
    l_loc bfile;
begin
    l_loc := bfilename('DATA_PUMP_DIR', l_filename);
    if dbms_lob.fileexists(l_loc) = 1 then
        dbms_output.put_line( l_filename||' exists');
    else
        dbms_output.put_line('File not found');
    end if;
end;
/

symlink.txt exists 

PL/SQL procedure successfully completed.

That’s better. After amending the function…

create or replace function file_exists_fn
(
    i_dir in all_directories.directory_name%type,
    i_filename in varchar2
)
    return varchar2
is

    l_loc bfile;
    l_return varchar2(4000);

begin
    l_loc := bfilename(upper(i_dir), i_filename);
    if dbms_lob.fileexists(l_loc) = 1 then
        l_return :=  i_filename||' exists in '||upper(i_dir);
    else
        l_return := 'File '||i_filename||' not found';
    end if;
    return l_return;
end;
/

…we can see that this also works just fine for conventional files…

select file_exists_fn('DATA_PUMP_DIR', 'test.txt')
from dual
/

FILE_EXISTS_FN('DATA_PUMP_DIR','TEST.TXT')
--------------------------------------------------------------------------------
test.txt exists in DATA_PUMP_DIR

SQL> 

Let’s check that it works for hard links as well…

touch /home/mike/hardlink.txt
chmod a+rw /home/mike/hardlink.txt
sudo su oracle
[sudo] password for mike: 

cd /u01/app/oracle/admin/XE/dpdump/
ln /home/mike/hardlink.txt hardlink.txt
ls -l hardlink.txt
-rw-rw-rw- 2 mike mike 0 Jul  3 18:50 hardlink.txt

And the test….

select file_exists_fn('DATA_PUMP_DIR', 'hardlink.txt')
from dual
/

FILE_EXISTS_FN('DATA_PUMP_DIR','HARDLINK.TXT')
--------------------------------------------------------------------------------
hardlink.txt exists in DATA_PUMP_DIR

SQL> 

So, if you want to minimise the prospect of muttering “I’m not your mother, you know!” to your database, then the DBMS_LOB approach would seem to be the way to go.
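
By way of illustration, this is roughly how such a check might sit in front of the DBMS_DATAPUMP call itself. This is only a sketch – the dump file name (my_export.dmp) and job parameters are made up for the example, and a real import would need proper error handling plus a WAIT_FOR_JOB or monitoring loop :

declare
    l_handle number;
    l_loc bfile := bfilename('DATA_PUMP_DIR', 'my_export.dmp'); -- illustrative file name
begin
    -- check the file is visible before letting DBMS_DATAPUMP make a mess...
    if dbms_lob.fileexists(l_loc) != 1 then
        raise_application_error(-20001, 'my_export.dmp not found in DATA_PUMP_DIR - not starting the import');
    end if;

    l_handle := dbms_datapump.open( operation => 'IMPORT', job_mode => 'SCHEMA');
    dbms_datapump.add_file
    (
        handle => l_handle,
        filename => 'my_export.dmp',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_dump_file
    );
    dbms_datapump.start_job(l_handle);
    dbms_datapump.detach(l_handle);
end;
/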


Filed under: Linux, Oracle, PL/SQL Tagged: dbms_lob.fileexists, hard link, symbolic link, utl_file.fgetattr

Kscope15 Impressions

Oracle AppsLab - Mon, 2015-07-06 10:34

As per Jake’s post, we got to spend a few days in Florida to support the Scavenger Hunt that we created for the Kscope15 conference.  Since it ran pretty smoothly, we were able to attend a few sessions and mingle with the attendees and speakers. Here are my impressions of the event.


This was my first time at Kscope.  Jake hyped it up as a not-to-miss conference for Oracle developers and, despite my high expectations of the event, it did not disappoint.  The actual conference started Sunday but we arrived Saturday to set up everything for the Scavenger Hunt, dot a few i’s and cross some t’s.

We also ran a quick training session for the organizers helping with the administration of the Scavenger Hunt, and later that night we started actually registering players for the hunt.  We signed up about 100 people on the first evening.  Registration continued Sunday morning and we picked up about 50 more players for a grand total of 150, not bad for our first Scavenger Hunt.


The number of sessions was a bit overwhelming so I decided to focus on the Database Development and Application Express tracks and picked a few sessions from them.  The first one I attended was called “JSON and Oracle: A Powerful Combination” where Dan McGhan (@dmcghan) from Oracle explained how to produce JSON from data in an Oracle Database, how to consume JSON in the Oracle Database and even how to use it in queries.

It turns out that Oracle 12.1.0.2 has some new, really cool features to work with JSON so be sure to check those out.  Interestingly, our Scavenger Hunt backend is using some of these techniques, and we got some great tips from Dan on how to improve what we were doing. So thanks for that Dan!
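
I didn’t keep Dan’s actual examples, but just to give a flavour of the sort of thing 12.1.0.2 lets you do with JSON (the table and data here are made up purely for illustration) :

-- store JSON in an ordinary VARCHAR2 column, validated by an IS JSON check constraint
create table t_orders ( doc varchar2(4000) check ( doc is json ) );

insert into t_orders values ( '{"customer":"Acme","items":[{"sku":"A1","qty":2}]}' );

-- pull scalar values and fragments back out with JSON_VALUE and JSON_QUERY
select json_value(doc, '$.customer') as customer,
       json_query(doc, '$.items')    as items
from   t_orders;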

Next I went to “A Primer on Web Components in APEX” presented by my countryman Dimitri Gielis (@dgielis).  In this session, Dimitri demonstrated how you can easily integrate Web Components into an APEX application.  He showed an impressive demo of a camera component that took a picture right from the web application and stored it in the database.  He also demoed a component that integrated voice control into an APEX application; this allowed him to “ask” the database for a row, which it would then retrieve and show on the screen. Very cool stuff.

That night also featured the infamous “APEX Open Mic” where anybody can walk up to the mic and get five minutes to show off what they’ve built with APEX, no judging, no winners or losers, just sharing with the community. And I must say, some really impressive applications were shown, not the least of which was one by Ed Jones (@edhjones) from Oracle, who managed to create a Minecraft-like game based on Oracle Social Network (OSN) data where treasure chests in the game represent OSN conversations. Opening the chest opens the conversation in OSN. Be sure to check out his video!

The next day, I attended two more sessions, one by our very own Noel Portugal (@noelportugal) and our Group Vice President, Jeremy Ashley (@jrwashley). I am sure they will tell you all about it through this channel or another, so I am leaving that one for them.


The other session was called “An Introduction to JavaScript Apps on the Oracle Database,” presented by Dan McGhan.  Dan demonstrated how you can use Node.js to enhance your APEX application with, among other things, WebSocket functionality, something not natively offered by APEX.  Here I also learned that Oracle 12c has a feature that allows you to “listen” for particular changes in the database and then broadcast these changes to interested parties (Node.js and then WebSockets in this case); this is for sure something that we are going to be using in some of our future demos.

The third day was hands-on day and I attended two more sessions, first “Intro to Oracle REST Data Services” by Kris Rice (@krisrice) from Oracle, and then “Soup-to-Nuts of Building APEX Applications” by David Peake (@orcl_dpeake) from Oracle.

In the first one we were introduced to ORDS, a feature that allows you to create REST services straight on top of the Oracle Database, no middle tier required!  I’ve seen this before in MySQL, but I did not know you could also do this with an Oracle DB. Again, this is a super powerful feature that we will be using for sure in future projects.

The second, two-hour, session was a walkthrough of a full-fledged APEX application from start to finish by the always entertaining David Peake.  I must admit that by that time I was pretty much done, and I left the session halfway through building my application. However, Raymond (@yuhuaxie) managed to sit through the whole thing so maybe he can give some comments on this session.

All I can say is that APEX 5.0 was extremely easy to get started with and build a nice Web Application with.

And that was KScope15 in a nutshell for me.  It was an awesome, exhausting experience, and I hope I can be there again in 2016.

Cheers,

Mark.

Don’t call it test

Laurent Schneider - Mon, 2015-07-06 09:00

There are quite a few names to avoid in your scripts. Even if they are not reserved words, keep away!

I’ll start with test


cd $HOME/bin
vi test
  echo hello world
chmod +x test
./test
  hello world

The problem is that it may break your other scripts


$ ssh localhost test 1 = 2 && echo WHAT???
hello world
WHAT???

And it may break sooner or later, depending on your OS / version / default shell / default path / others.

There are quite a few filenames you should not use, like test, date, time, hostname, mail, view, touch, sort and make. The command type lists some of those as reserved word, shell builtin, tracked alias, shell keyword. But again it is not consistent over Unix flavors.


$ uname -sr; type date
SunOS 5.10
date is /usr/bin/date
$ uname -sr; type date
Linux 2.6
date is a tracked alias for /bin/date

Your sysadmin may also alias things for colors and safety in the common profile: for instance vi, ls, rm. But if it annoys you, then use \vi instead of vi.

So, What Kind of Person Are You?

Pythian Group - Mon, 2015-07-06 08:48

It’s my not-so-humble opinion that it takes a special kind of ‘someone’ to work in a successful and innovative collective such as Pythian.  We’re a diverse team of thought-leaders, technology forecasters, technical prodigies and individual contributors.  When we look for people to join our global company we’re looking for people who want to see that their work really matters…that they matter.  We have truly discerning tastes when it comes to who gets to have “Pythian” in their email signature – you have to love data and value what it does for people.

Oh.  And you have to like people (we’re funny like that).

Our global team is brimming with top talent dedicated to building something larger than themselves.  We haven’t gotten this far by playing it safe.  We play it smart.  We’re strategic.  We have a vision to be the most trusted and admired technology services organization in the world….

….And we’re looking for fantastic people to join us.   In order to take your career to the next level at Pythian, you have to be able to:

Lend a hand – There are times when it’s easier to allocate blame but in the end, the problem still exists.  It may not be ‘your fault’ but it can become everyone’s problem.  In situations where there isn’t a lot of time for advanced planning it’s the people who take steps towards a solution that will make the greatest (and often most favorable) impact.

Play to your strengths – Maybe you’re a whiz with numbers or an I.T. genius.  Perhaps your level of organization is outstanding or you have incredible leadership skills. Play to what energizes you.  Cultivate it, infuse your work with it and success will follow.

Lean into the unknown – Opportunity is often found in the things we never knew existed.  Many of the talented people that I’ve come to know at Pythian can dive fearlessly into projects and own them.   If they don’t know how to do something, they learn it and they learn how to do it well.  That’s just the way it’s done.

Embrace diversity – We believe that every employee that works with us is deserving of dignity and respect.

Be approachable – Typically there’s a good mix of personalities in any successful company.  While introverts seem to be at the helm of hands-on I.T. work, extroverts also contribute significantly to getting things done.  Regardless of which way your personality leans, always be approachable.  A positive disposition is often contagious.

Put your best face forward – Remember that the skill and professionalism that you demonstrate every day will inevitably become your business card.  Maya Angelou once said, “People will forget what you said, people will forget what you did, but people will never forget how you made them feel.”

Do you think you can picture yourself here? Discover more about what it’s like to be part of the Pythian team.  You just might be our kind of people!

The post So, What Kind of Person Are You? appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Possible Truncation Attack? logged in #em12c nodemanager.log file

DBASolved - Mon, 2015-07-06 08:38

Recently I’ve come across issues with restarting Oracle Enterprise Manager and seeing messages in the nodemanager.log. The message that I’m referring to is (followed by a Java stack trace):

<Jul 2, 2015 11:46:11 PM> <WARNING> Uncaught exception in server handlerjavax.net.ssl.SSLException: 
Inbound closed before receiving peer's close_notify: possible truncation attack? 
javax.net.ssl.SSLException: 
Inbound closed before receiving peer's close_notify: possible truncation attack?

What is interesting about this message is the panic that may come over a person when they see the word “attack”. The first time I saw this message, I was working on a client site and I was distressed because I was worried about an “attack” on EM. After some investigation, I found that this message is a bit misleading. So, what was the cause of the message?

The “possible truncation attack” is reported because the IP address of the host where the OMS runs has changed. Here in my test environment, I recently upgraded my wireless router, which affected my whole network. The router upgrade changed all the addresses on the network. When OEM was initially installed the host had an address of 192.168.65.67; after the upgrade the address changed to 192.168.65.7. Not a big deal; how to fix it, though?

In the case of my test lab, I needed to change the /etc/hosts file to ensure that the correct IP address was picked up. In the enterprise, your local DNS needs to be updated along with the /etc/hosts file. On startup, OEM will consult DNS and /etc/hosts when trying to resolve the host name to an IP address; the order of preference can be changed in /etc/nsswitch.conf as well.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

SQL vs. PL/SQL

Jonathan Lewis - Mon, 2015-07-06 03:23

Which piece of code will be faster (clue – the table in question has no indexes):

Option 1 – pure SQL


update join1 set
        data = data||'#'
where   key_1=500
and     key_2=23851
and     key_3=57012
and     key_4=521
and     key_6=1
and     key_7=23352
;

Option 2 – a silly PL/SQL row by row approach:


declare
        type rowid_type is table of urowid index by binary_integer;
        tab_rowid           rowid_type;  

        lv_rows_updated     number :=0;  

        cursor my_cursor is
                select  rowid rid
                from    join1
                where   key_1=500
                and     key_2=23851
                and     key_3=57012
                and     key_4=521
                and     key_6=1
                and     key_7=23352
        ;

begin
        open my_cursor;

        -- We know that the number of rows to be updated is very small
        fetch my_cursor bulk collect into tab_rowid limit 10000;

        forall lv_row in tab_rowid.first .. tab_rowid.last
             update join1 set data = data||'#' where  rowid = tab_rowid(lv_row);

        lv_rows_updated := sql%rowcount;
        close my_cursor;
end;
/

It’s a trick question, of course, and although the automatic response from any DBA-type is likely to be “the SQL”, the correct answer is (as so often) “it depends”.

This question appeared as a problem on the OTN database forum a few days ago. In its original form it asked why a select statement should be much faster than a select for update or an update – even though the volume identified and updated was very small (just one row in 100M). The note then went on to show that using PL/SQL to select the rowids of the target rows then doing the bulk update by rowid was faster than the basic SQL update. The answer didn’t spring to mind immediately; but fortunately someone asked for some run-time statistics (v$sesstat) and the supplied statistics told me what was going on.

Conveniently the OP gave us the code to recreate the test case – all 100M rows of it; I cut this back to 16M rows (ca. 1.5GB of disc space), and then ran the tests with my db_cache_size set to 256MB (another clue). I got similar results to the OP – not so dramatic, but the PL/SQL ran faster than the SQL and the difference was due to an obvious change in the CPU usage.

If you haven’t guessed from the clue in the 256MB db_cache_size (which means the table is more than 5 times the size of the cache), the answer is “serial direct path reads”. For a sufficiently large table (and that’s not easy to define – start here and follow a few links) it’s fairly common knowledge that from 11g a tablescan can use a serial direct path read, and that’s what the PL/SQL was doing to select the required rowids. However, here’s a detail that’s not often mentioned: an update has to take place in public where everyone can see it, so when Oracle executed the simple SQL update or select for update statement it had to scan the table through the buffer cache. Pulling all those blocks into the buffer cache, grabbing latches to link them to the right cache buffers chains, pinning them, then unpinning them uses a lot of CPU – which isn’t needed for the direct path read. The PL/SQL with its pure select used far less CPU than the basic SQL with its update/select for update, and because the OP had a very high-powered machine with plenty of CPU and loads of (disc-)caching effects all over the place the difference in CPU time was extremely visible as a fraction of the total DB time.
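
If you want to check for this effect in your own tests, a couple of session statistics make the difference fairly obvious. As a rough sketch (statistic names as they appear in v$statname on recent versions), run something like this from the test session before and after each statement and compare the deltas:

select
        sn.name, ms.value
from
        v$mystat        ms,
        v$statname      sn
where
        ms.statistic# = sn.statistic#
and     sn.name in (
                'physical reads direct',
                'session logical reads',
                'CPU used by this session'
        )
order by
        sn.name
;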

This was, inevitably, a very special case where a little detail became a significant fraction of the workload. The OP scanned 100M rows to update 1 row (in 7 – 13 seconds!). This doesn’t sound like a strategy you would normally want to adopt for frequent use; and for occasional use we might be happy to use the slower (13 second) approach to avoid the coding requirement of the fast (7 second) solution.

Footnote:

It’s worth pointing out that the PL/SQL strategy is not safe. In the few seconds between the select statement starting and the row being identified and updated by rowid it’s possible that another session could have updated (or deleted) the row. In the former case the update statement is now updating a row which doesn’t match the specification; in the latter case the code will raise an exception.

We can make the PL/SQL safer by including the original predicates in the update statement (sketched below) – but that still leaves the question of what the code should do if the select statement finds a row and the update fails to update it. Should it, perhaps, assume that there is still a row in the table that needs an update and re-run (using up all the time you saved by adopting a PL/SQL solution)?
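
As a sketch (not tested against the original 100M-row data set), the “safer” version of the anonymous block might look something like this – repeating the original predicates in the update and then comparing sql%rowcount with the number of rowids fetched:

declare
        type rowid_type is table of urowid index by binary_integer;
        tab_rowid           rowid_type;

        lv_rows_updated     number := 0;

        cursor my_cursor is
                select  rowid rid
                from    join1
                where   key_1=500
                and     key_2=23851
                and     key_3=57012
                and     key_4=521
                and     key_6=1
                and     key_7=23352
        ;

begin
        open my_cursor;
        fetch my_cursor bulk collect into tab_rowid limit 10000;
        close my_cursor;

        if tab_rowid.count > 0 then
                -- repeat the original predicates so that a row changed by another
                -- session between the select and the update is simply skipped
                forall lv_row in tab_rowid.first .. tab_rowid.last
                        update  join1
                        set     data = data||'#'
                        where   rowid = tab_rowid(lv_row)
                        and     key_1=500
                        and     key_2=23851
                        and     key_3=57012
                        and     key_4=521
                        and     key_6=1
                        and     key_7=23352;

                lv_rows_updated := sql%rowcount;
        end if;

        if lv_rows_updated != tab_rowid.count then
                -- a row was changed or deleted by another session in the gap -
                -- whether to re-run or just report it is the open question above
                dbms_output.put_line('Fetched '||tab_rowid.count||' rowids, updated '||lv_rows_updated||' rows');
        end if;
end;
/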

 

 

 


Why A Brand New Index Might Benefit From An Immediate Coalesce (One Slip)

Richard Foote - Mon, 2015-07-06 01:58
A recent question on the OTN Forums Reg: Index – Gathering Statistics vs. Rebuild got me thinking on a scenario not unlike the one raised in the question where a newly populated index might immediately benefit from a coalesce. I’ve previously discussed some of the pertinent concepts such as how index rebuilds can make indexes bigger, not smaller […]
Categories: DBA Blogs

Transport Tablespace using RMAN Backupsets in #Oracle 12c

The Oracle Instructor - Mon, 2015-07-06 01:29

Using backupsets for Transportable Tablespaces reduces the volume of data you need to ship to the destination database. See how that works:

RMAN TTS on the source database

The tablespace is made READ ONLY before the new BACKUP FOR TRANSPORT command is done. At this point, you can also convert the platform and the endian format if required. Then on the destination site:

RMAN TTS on the destination database

The FOREIGN keyword indicates that this doesn’t use a backup taken at the destination. Practical example:

 

[oracle@uhesse ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jul 6 08:36:30 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select * from v$tablespace;

       TS# NAME                           INC BIG FLA ENC     CON_ID
---------- ------------------------------ --- --- --- --- ----------
         0 SYSTEM                         YES NO  YES              0
         1 SYSAUX                         YES NO  YES              0
         2 UNDOTBS1                       YES NO  YES              0
         3 TEMP                           NO  NO  YES              0
         4 USERS                          YES NO  YES              0
         5 TBS1                           YES NO  YES              0

6 rows selected.

SQL> select table_name,owner from dba_tables where tablespace_name='TBS1';

TABLE_NAME
-------------------- 
OWNER
--------------------
T
ADAM


SQL> alter tablespace tbs1 read only;

Tablespace altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@uhesse ~]$ rman target sys/oracle@prima

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Jul 6 08:37:28 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PRIMA (DBID=2113606181)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name PRIMA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/prima/system01.dbf
2    244      SYSAUX               NO      /u01/app/oracle/oradata/prima/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/prima/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/prima/users01.dbf
5    100      TBS1                 NO      /u01/app/oracle/oradata/prima/tbs1.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/prima/temp01.dbt

RMAN> host 'mkdir /tmp/stage';

host command complete

RMAN> configure device type disk backup type to compressed backupset;

old RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
new RMAN configuration parameters are successfully stored

RMAN> backup for transport format '/tmp/stage/tbs1.bkset'
      datapump format '/tmp/stage/tbs1.dmp'
      tablespace tbs1;

Starting backup at 06-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=40 device type=DISK
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces...
   EXPDP> Starting "SYS"."TRANSPORT_EXP_PRIMA_yvym":
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Master table "SYS"."TRANSPORT_EXP_PRIMA_yvym" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TRANSPORT_EXP_PRIMA_yvym is:
   EXPDP>   /u01/app/oracle/product/12.1.0/dbhome_1/dbs/backup_tts_PRIMA_25997.dmp
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace TBS1:
   EXPDP>   /u01/app/oracle/oradata/prima/tbs1.dbf
   EXPDP> Job "SYS"."TRANSPORT_EXP_PRIMA_yvym" successfully completed at Mon Jul 6 08:39:50 2015 elapsed 0 00:00:26
Export completed

channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/app/oracle/oradata/prima/tbs1.dbf
channel ORA_DISK_1: starting piece 1 at 06-JUL-15
channel ORA_DISK_1: finished piece 1 at 06-JUL-15
piece handle=/tmp/stage/tbs1.bkset tag=TAG20150706T083917 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
input Data Pump dump file=/u01/app/oracle/product/12.1.0/dbhome_1/dbs/backup_tts_PRIMA_25997.dmp
channel ORA_DISK_1: starting piece 1 at 06-JUL-15
channel ORA_DISK_1: finished piece 1 at 06-JUL-15
piece handle=/tmp/stage/tbs1.dmp tag=TAG20150706T083917 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 06-JUL-15

RMAN> alter tablespace tbs1 read write;

Statement processed

RMAN> exit


Recovery Manager complete.
[oracle@uhesse ~]$ ls -rtl /tmp/stage
total 5608
-rw-r-----. 1 oracle oinstall 5578752 Jul  6 08:39 tbs1.bkset
-rw-r-----. 1 oracle oinstall  163840 Jul  6 08:39 tbs1.dmp
[oracle@uhesse ~]$ rman target sys/oracle@sekunda

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Jul 6 08:40:49 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: SEKUNDA (DBID=3356258651)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name SEKUNDA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/sekunda/system01.dbf
2    249      SYSAUX               NO      /u01/app/oracle/oradata/sekunda/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/sekunda/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/sekunda/users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/sekunda/temp01.dbt

RMAN> restore foreign tablespace tbs1
      format '/u01/app/oracle/oradata/sekunda/tbs1.dbf'
      from backupset '/tmp/stage/tbs1.bkset'
      dump file from backupset '/tmp/stage/tbs1.dmp';

Starting restore at 06-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=37 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring all files in foreign tablespace TBS1
channel ORA_DISK_1: reading from backup piece /tmp/stage/tbs1.bkset
channel ORA_DISK_1: restoring foreign file 5 to /u01/app/oracle/oradata/sekunda/tbs1.dbf
channel ORA_DISK_1: foreign piece handle=/tmp/stage/tbs1.bkset
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring Data Pump dump file to /u01/app/oracle/product/12.1.0/dbhome_1/dbs/backup_tts_SEKUNDA_85631.dmp
channel ORA_DISK_1: reading from backup piece /tmp/stage/tbs1.dmp
channel ORA_DISK_1: foreign piece handle=/tmp/stage/tbs1.dmp
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

Performing import of metadata...
   IMPDP> Master table "SYS"."TSPITR_IMP_SEKUNDA_ppol" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_SEKUNDA_ppol":
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   IMPDP> Job "SYS"."TSPITR_IMP_SEKUNDA_ppol" successfully completed at Mon Jul 6 08:42:51 2015 elapsed 0 00:00:20
Import completed

Finished restore at 06-JUL-15

RMAN> report schema;

Report of database schema for database with db_unique_name SEKUNDA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/sekunda/system01.dbf
2    249      SYSAUX               NO      /u01/app/oracle/oradata/sekunda/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/sekunda/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/sekunda/users01.dbf
5    100      TBS1                 NO      /u01/app/oracle/oradata/sekunda/tbs1.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/sekunda/temp01.dbt

RMAN> alter tablespace tbs1 read write;

Statement processed

RMAN> select count(*) from adam.t;

  COUNT(*)
----------
   1000000

Hope you find it useful :-)


Tagged: 12c New Features, PracticalGuide, RMAN
Categories: DBA Blogs

Upgrade Cloud Control 12cR4 to 12cR5

Tim Hall - Mon, 2015-07-06 00:32

A couple of weeks ago I wrote a post about doing a Cloud Control 12cR5 installation and said I would be testing the upgrade from 12cR4. I’ve now done that.

The upgrade documentation is quite extensive and the prerequisites are going to be different depending on the database and cloud control versions you are starting with, so this is in no way a “recommended” way to do the upgrade. Each one will need to be approached on a case-by-case basis. It’s just meant to give a flavour of what you have to do.

Suffice to say, it worked fine for me. :)

Cheers

Tim…


Avoiding the Inbound Monkey

Anthony Shorten - Sun, 2015-07-05 23:32

This is the first in a series of articles on implementation advice to optimize the designs and configuration of Oracle Utilities products.

Early in my career my mentor at the time suggested that I expand my knowledge outside the technical area. The idea is that non-technical techniques would augment my technical knowledge. He suggested a series of books and articles that would expand my thinking. Today I treasure those books and articles and regularly reread them to reinforce my skills.

Recently I was chatting to customers about optimizing their interface designs using a technique typically called “Responsibility Led Design”. The principle is basically that each participant in an interface has distinct responsibilities for the data interchanged, and it is important to make sure designs take this into account. This reminded me of one of my favorite books, titled “The One Minute Manager Meets The Monkey” by Ken Blanchard, William Oncken Jr. and Hal Burrows. I even have a copy of the audio version, which is both informative and very entertaining. The book was based on a very popular Harvard Business Review article entitled “Management Time: Who’s Got The Monkey?” and expands on that original idea.

To paraphrase the article, a monkey is a task that is not your responsibility but that somehow gets assigned to you. The term for this is the “monkey jumping on your back”, or simply having a “monkey on your back”. This epitomizes the concept of responsibility.

So what has this got to do with design, or even Oracle Utilities products, you might ask?

One of the key designs for any implementation is sending data INTO the Oracle Utilities products. These are inbound interfaces, for obvious reasons. In every interface there is a source application and a target application. The responsibility of the source application is to send valid data to the target application for processing. Now, one of the problems I see with implementations is when the source application sends invalid data to the target. There are two choices in this case:

  • Send back the invalid request - This means that if the data transferred from the source is invalid for the target, then the target should reject the data and ask the source to resend. Most implementations use various techniques to achieve this. This keeps the target clean of invalid data and ensures the source corrects their data before sending it off again. This is what I call correct behavior.
  • Accept the invalid request (usually in a staging area) and correct it within the target for reprocessing - This means the data is accepted by the target, regardless of the error, and corrected within the target application to complete the processing.

More and more I am seeing implementations adopting the latter design philosophy. This is not efficient, as the responsibility for data cleansing (the monkey in this context) has jumped onto the back of the target application. At this point, the source application has no responsibility for cleaning its own data and no real incentive to ever send clean data to the target, as the target now has the monkey firmly on its back. This has consequences for the target application, as it can increase the resource usage (human and non-human) needed to correct data errors from the source application. Some of the customers I chatted to found that, while the volume of these types of transactions was initially low, the same errors kept being sent, and over time the cumulative effect of the data cleansing on the target started to get out of control. Typically, at this point, customers ask for advice to try and reduce the impact.

In an Oracle Utilities product world, this exhibits itself as a large number of interface To Dos to manage, as well as staging records and additional storage to manage. The latter is quite important, as implementations typically forget to remove completed transactions from the staging area once they have been corrected and applied. The products ship special purge jobs to remove completed staged transactions, and we recently added ILM support for staging records.

My advice to these customers are:

  • Make sure that you assign invalid transactions back to the source application. This will ensure the source application maximizes the quality of its data and also, hopefully, prevents common transaction errors from recurring. In other words, the monkey does not jump from the source to the target.
  • If you choose to let the monkey jump on the target's back, for any reason, then use staging tables and make sure they are cleaned up to minimize the impact. Monitor the error rates and ensure the source application is informed so that the error trends can be corrected.

In short, avoid the monkey in your inbound transactions. This will make sure the responsibilities stay where they belong and that the resources you allocate to both your source and target applications are used in an efficient manner.

Installation steps of JDK 7 for Linux for Oracle Fusion Middleware

Online Apps DBA - Sun, 2015-07-05 14:40

This post covers the procedure to install the Java Development Kit (JDK) for 64-bit RPM-based Linux platforms, such as Red Hat and SuSE, using an RPM. This post is from our Oracle Fusion Middleware (FMW) or Oracle Access Manager (OAM) training, where we provide a dedicated machine for trainees to practice on, but the same steps apply if you need to install a similar setup on your local machine (we use Oracle VirtualBox with Oracle Linux 5.5).

You must be logged in as the root user to perform this installation (the assumption is that you are installing the JDK on 64-bit Linux).

1. Download the JDK software from here (jdk-7u60-linux-x64.rpm)


2. The installation process should be carried out with the “root” user.

su - root

When prompted for the password, enter the root password.

3. Navigate to the directory where your JDK software is downloaded

cd /stage/oracle/jdk

4. Install the package using the command: rpm -ivh <package_name>

rpm -ivh  jdk-7u60-linux-x64.rpm

Note: This step will install JDK 1.7 under /usr/java/jdk1.7.0_60

5. To verify the version of java, navigate to the directory /usr/java/jdk1.7.0_60/bin and check javac and java versions. The version should be the latest installed JDK version.

Note: JDK is default installed under the directory /usr/java/jdk1.7.0_60

cd /usr/java/jdk1.7.0_60/bin

./java -version


 

6. Delete the .rpm file if you want to save disk space.

7. Exit the root shell. No need to reboot.

 

Reference:

 

If you are part of our training program and have not yet registered for the closed Facebook Group, then send a request and post any technical queries there.

 

 

 


The post Installation steps of JDK 7 for Linux for Oracle Fusion Middleware appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

RESULT_CACHE hint expiration options

Yann Neuhaus - Sun, 2015-07-05 14:22

The result cache is very nice when base tables are static: you get the result without any buffer gets, without any joins or sorts. But as soon as any DML occurs on one of the tables the result is based on, the result cache is invalidated. And cache misses are quite expensive. You can think of it as a materialized view in memory. But with materialized views, you can accept seeing stale results in order to avoid frequent refreshes. Let's see how we can control stale results with undocumented options.
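
Before playing with the hints, it's worth checking that the result cache is actually enabled and sized on your instance. A quick sanity check (the output will obviously differ from instance to instance):

select dbms_result_cache.status from dual;

select name, value
from   v$result_cache_statistics
where  name in ('Block Size (Bytes)','Block Count Maximum','Create Count Success','Invalidation Count');

show parameter result_cache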

Test case

I'm creating a basic table.

22:30:44 SQL> create table DEMO as select rownum id from xmltable('1 to 100000');

Table created.

22:30:44 SQL> exec if dbms_result_cache.flush then dbms_output.put_line('Flushed.'); end if;

PL/SQL procedure successfully completed.

RESULT_CACHE

On that table, I'll do a simple select with the RESULT_CACHE hint.

22:30:44 SQL> set autotrace on
22:30:44 SQL> select /*+ result_cache */ count(*) from DEMO;

  COUNT(*)
----------
    100000


Execution Plan
----------------------------------------------------------
Plan hash value: 2180342005

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |    46   (0)| 00:00:01 |
|   1 |  RESULT CACHE       | 2x5f91pfn5p6882f6szxj50jwf |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| DEMO                       |   100K|    46   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(DEMO.DEMO); attributes=(single-row); name="select /*+ result_cache */ count(*) from DEMO"


Statistics
----------------------------------------------------------
          5  recursive calls
          0  db block gets
        159  consistent gets
        153  physical reads
          0  redo size
        358  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

We have read the 150 blocks of the DEMO table. The result cache has been loaded. Here are the objects and dependency:
22:30:44 SQL> set autotrace off
22:30:44 SQL> select * from v$result_cache_dependency;

 RESULT_ID  DEPEND_ID  OBJECT_NO     CON_ID
---------- ---------- ---------- ----------
         1          0     112463          0

22:30:44 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Dependency Published DEMO.DEMO                      DEMO.DEMO                      0
  1 Result     Published select /*+ result_cache */ cou 2x5f91pfn5p6882f6szxj50jwf     0

We can read it as: the query result (id=1) depends on the table (id=0).

If I run it another time:

22:30:45 SQL> select /*+ result_cache */ count(*) from DEMO;

  COUNT(*)
----------
    100000


Execution Plan
----------------------------------------------------------
Plan hash value: 2180342005

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |    46   (0)| 00:00:01 |
|   1 |  RESULT CACHE       | 2x5f91pfn5p6882f6szxj50jwf |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| DEMO                       |   100K|    46   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(DEMO.DEMO); attributes=(single-row); name="select /*+ result_cache */ count(*) from DEMO"


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        358  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

No block get at all. The result was in cache.

dependencies

If we do any kind of DML on the tables the result cache depends on:

22:30:45 SQL> delete from DEMO where null is not null;

0 rows deleted.

22:30:45 SQL> commit;

Commit complete.

Then the cache is invalidated:
22:30:45 SQL> select * from v$result_cache_dependency;

no rows selected

22:30:45 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Dependency Published DEMO.DEMO                      DEMO.DEMO                      1
  1 Result     Invalid   select /*+ result_cache */ cou 2x5f91pfn5p6882f6szxj50jwf     0

and another run will have to load the cache again:
22:30:45 SQL> set autotrace on
22:30:45 SQL> select /*+ result_cache */ count(*) from DEMO;

  COUNT(*)
----------
    100000


Execution Plan
----------------------------------------------------------
Plan hash value: 2180342005

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |    46   (0)| 00:00:01 |
|   1 |  RESULT CACHE       | 2x5f91pfn5p6882f6szxj50jwf |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| DEMO                       |   100K|    46   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(DEMO.DEMO); attributes=(single-row); name="select /*+ result_cache */ count(*) from DEMO"


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        157  consistent gets
          0  physical reads
          0  redo size
        358  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

22:30:46 SQL> set autotrace off
22:30:46 SQL> select * from v$result_cache_dependency;

 RESULT_ID  DEPEND_ID  OBJECT_NO     CON_ID
---------- ---------- ---------- ----------
         2          0     112463          0

22:30:46 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Dependency Published DEMO.DEMO                      DEMO.DEMO                      1
  1 Result     Invalid   select /*+ result_cache */ cou 2x5f91pfn5p6882f6szxj50jwf     0
  2 Result     Published select /*+ result_cache */ cou 2x5f91pfn5p6882f6szxj50jwf     0

This is the only documented behaviour: the cache is invalidated if and only if there has been some DML on the tables it depends on.

RESULT_CACHE(SHELFLIFE=seconds)


I flush the cache and run the same with the undocumented result cache hint option: SHELFLIFE=10 seconds:

22:30:46 SQL> exec if dbms_result_cache.flush then dbms_output.put_line('Flushed.'); end if;

PL/SQL procedure successfully completed.

22:30:46 SQL>
22:30:46 SQL> set autotrace on
22:30:46 SQL> select /*+ result_cache(shelflife=10) */ count(*) from DEMO;

  COUNT(*)
----------
    100000


Execution Plan
----------------------------------------------------------
Plan hash value: 2180342005

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |    46   (0)| 00:00:01 |
|   1 |  RESULT CACHE       | 2x5f91pfn5p6882f6szxj50jwf |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| DEMO                       |   100K|    46   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(DEMO.DEMO); attributes=(single-row, shelflife=10); name="select /*+ result_cache(shelflife=10) */ count(*) from DEMO"

We see the 'shelflife' attribute, but the dependency is the same as without the option:
22:30:46 SQL> set autotrace off
22:30:46 SQL> select * from v$result_cache_dependency;

 RESULT_ID  DEPEND_ID  OBJECT_NO     CON_ID
---------- ---------- ---------- ----------
         1          0     112463          0

22:30:46 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Dependency Published DEMO.DEMO                      DEMO.DEMO                      0
  1 Result     Published select /*+ result_cache(shelfl 2x5f91pfn5p6882f6szxj50jwf     0
I run the query again 5 seconds later:
22:30:51 SQL> set autotrace on
22:30:51 SQL> select /*+ result_cache(shelflife=10) */ count(*) from DEMO;

  COUNT(*)
----------
    100000


Execution Plan
----------------------------------------------------------
Plan hash value: 2180342005

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |    46   (0)| 00:00:01 |
|   1 |  RESULT CACHE       | 2x5f91pfn5p6882f6szxj50jwf |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| DEMO                       |   100K|    46   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(DEMO.DEMO); attributes=(single-row, shelflife=10); name="select /*+ result_cache(shelflife=10) */ count(*) from DEMO"


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        358  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

22:30:51 SQL> set autotrace off
22:30:51 SQL> select * from v$result_cache_dependency;

 RESULT_ID  DEPEND_ID  OBJECT_NO     CON_ID
---------- ---------- ---------- ----------
         1          0     112463          0

22:30:51 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Dependency Published DEMO.DEMO                      DEMO.DEMO                      0
  1 Result     Published select /*+ result_cache(shelfl 2x5f91pfn5p6882f6szxj50jwf     0


Nothing special here. No DML has occurred, so the result is still valid ('Published').

expiration


But let's wait 5 more seconds and run it again:

22:30:56 SQL> select * from v$result_cache_dependency;

 RESULT_ID  DEPEND_ID  OBJECT_NO     CON_ID
---------- ---------- ---------- ----------
         1          0     112463          0

22:30:56 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Dependency Published DEMO.DEMO                      DEMO.DEMO                      0
  1 Result     Expired   select /*+ result_cache(shelfl 2x5f91pfn5p6882f6szxj50jwf     0

The status has changed: it's now EXPIRED because the 10-second shelflife has passed since the cache was loaded.
22:30:56 SQL>
22:30:56 SQL> set autotrace on
22:30:56 SQL> select /*+ result_cache(shelflife=10) */ count(*) from DEMO;

  COUNT(*)
----------
    100000


Execution Plan
----------------------------------------------------------
Plan hash value: 2180342005

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |    46   (0)| 00:00:01 |
|   1 |  RESULT CACHE       | 2x5f91pfn5p6882f6szxj50jwf |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| DEMO                       |   100K|    46   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(DEMO.DEMO); attributes=(single-row, shelflife=10); name="select /*+ result_cache(shelflife=10) */ count(*) from DEMO"


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        157  consistent gets
          0  physical reads
          0  redo size
        358  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Here I have a cache miss even though no DML occurred on the tables it depends on.
22:30:56 SQL> select * from v$result_cache_dependency;

 RESULT_ID  DEPEND_ID  OBJECT_NO     CON_ID
---------- ---------- ---------- ----------
         2          0     112463          0

22:30:56 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Dependency Published DEMO.DEMO                      DEMO.DEMO                      0
  1 Result     Invalid   select /*+ result_cache(shelfl 2x5f91pfn5p6882f6szxj50jwf     0
  2 Result     Published select /*+ result_cache(shelfl 2x5f91pfn5p6882f6szxj50jwf     0

When I run a query using an expired result cache, that cache entry becomes invalid, the query is fully run (157 consistent gets) and a new result cache entry is populated.
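If you want to keep an eye on this while testing, a simple aggregate over V$RESULT_CACHE_OBJECTS (a sketch, using the same view queried throughout this post) shows how many entries are currently in each state:

select type, status, count(*) cnt
from   v$result_cache_objects
group  by type, status
order  by type, status;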

Note that in addition to the shelflife expiration, the dependency on DML is still invalidating the result cache:

22:30:56 SQL> delete from DEMO where null is not null;

0 rows deleted.

22:30:56 SQL> commit;

Commit complete.

22:30:56 SQL> select * from v$result_cache_dependency;

no rows selected

22:30:56 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Dependency Published DEMO.DEMO                      DEMO.DEMO                      1
  1 Result     Invalid   select /*+ result_cache(shelfl 2x5f91pfn5p6882f6szxj50jwf     0
  2 Result     Invalid   select /*+ result_cache(shelfl 2x5f91pfn5p6882f6szxj50jwf     0
So with SHELFLIFE we have two reasons for invalidations: dependency and expiration.

RESULT_CACHE(SNAPSHOT=seconds)


There is another undocumented option for the result cache hint, SNAPSHOT, which also takes a number of seconds. Let's try it:

22:30:56 SQL> exec if dbms_result_cache.flush then dbms_output.put_line('Flushed.'); end if;

PL/SQL procedure successfully completed.

22:30:56 SQL>
22:30:56 SQL> set autotrace on
22:30:56 SQL> select /*+ result_cache(snapshot=10) */ count(*) from DEMO;

  COUNT(*)
----------
    100000


Execution Plan
----------------------------------------------------------
Plan hash value: 2180342005

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |    46   (0)| 00:00:01 |
|   1 |  RESULT CACHE       | 2x5f91pfn5p6882f6szxj50jwf |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| DEMO                       |   100K|    46   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; attributes=(single-row, snapshot=10); name="select /*+ result_cache(snapshot=10) */ count(*) from DEMO"


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        157  consistent gets
          0  physical reads
          0  redo size
        358  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

22:30:56 SQL> set autotrace off
22:30:56 SQL> select * from v$result_cache_dependency;

no rows selected

22:30:56 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Result     Published select /*+ result_cache(snapsh 2x5f91pfn5p6882f6szxj50jwf     0

The cache has been populated here, but there is no dependency. Let's try DML on the base table:
22:30:56 SQL> delete from DEMO where null is not null;

0 rows deleted.

22:30:56 SQL> commit;

Commit complete.

22:30:56 SQL> select * from v$result_cache_dependency;

no rows selected

22:30:56 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Result     Published select /*+ result_cache(snapsh 2x5f91pfn5p6882f6szxj50jwf     0

A SNAPSHOT result cache is not invalidated by DML on base tables.

But if we wait 10 seconds:

22:31:06 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Result     Expired   select /*+ result_cache(snapsh 2x5f91pfn5p6882f6szxj50jwf     0

The status is Expired, as with SHELFLIFE. The next execution will invalidate it.

Conclusion

There are two reasons to invalidate a result cache entry: DML on a dependency, or expiration after a number of seconds. And we can use any combination of them with these undocumented hints. Oracle itself uses them internally: SNAPSHOT is used by Active Dynamic Sampling, where stale results are acceptable for one hour, and SHELFLIFE is used on dictionary views based on X$ tables, where dependencies cannot be tracked.
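As a quick way to look for that internal usage on your own instance, you can search the cached statement text. This is only a sketch: the 'DS_SVC' pattern below is my assumption about how the internal dynamic sampling statements are tagged, so adjust it for your version:

select id, status, substr(name,1,60) name
from   v$result_cache_objects
where  type = 'Result'
and    name like '%DS_SVC%';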

I hope it will get documented in future releases. There are some cases where we can accept stale results in order to get better performance. We already do that with materialized views, so why not do it with result cache?
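To make the materialized view comparison concrete, here is a minimal sketch of the traditional way of accepting stale results: a scheduled complete refresh instead of a per-query shelflife (the object name and the 10 second interval are illustrative):

create materialized view demo_count_mv
  build immediate
  refresh complete
  start with sysdate next sysdate + 10/86400  -- complete refresh every 10 seconds
as
  select count(*) cnt from DEMO;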

RESULT_CACHE: run-time dependency tracking

XTended Oracle SQL - Sun, 2015-07-05 13:20

As you know, since 11.2 the "relies_on" clause has been deprecated and Oracle now tracks dependencies at run time.
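For reference, this is roughly what the deprecated syntax looked like, with the dependency list declared by hand on a result-cached PL/SQL function. This is a sketch only: the function name is illustrative, it references the test tables created below, and since 11.2 the clause is parsed but ignored:

create or replace function f_with_relies_on(p_tab varchar2) return varchar2
   result_cache relies_on (a, b)  -- explicit dependency list, ignored since 11.2
as
   res varchar2(30);
begin
   execute immediate 'select '''||p_tab||''' from '||p_tab||' where rownum=1' into res;
   return res;
end;
/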
Test function and tables

create or replace function f_without_deps(p_tab varchar2) return varchar2
as
   res varchar2(30);
begin
   execute immediate 'select '''||p_tab||''' from '||p_tab||' where rownum=1' into res;
   return res;
end;
/
create table a as select 'a' a from dual;
create table b as select 'b' b from dual;
create view v_ab as select a,b from a,b;

And it works fine with normal tables:
v_ab

SQL> exec :p_tab:='v_ab';

PL/SQL procedure successfully completed.

SQL> call DBMS_RESULT_CACHE.flush();

Call completed.

SQL> select/*+ result_cache */ f_without_deps(:p_tab) result from dual;

RESULT
----------
v_ab

1 row selected.

SQL> select name,bucket_no, id,type,status,pin_count,scan_count,invalidations from v$result_cache_objects o order by type,id;

NAME                                                               BUCKET_NO  ID TYPE       STATUS     PIN_COUNT SCAN_COUNT INVALID
----------------------------------------------------------------- ---------- --- ---------- --------- ---------- ---------- -------
XTENDER.F_WITHOUT_DEPS                                                  1579   0 Dependency Published          0          0       0
XTENDER.V_AB                                                            3127   2 Dependency Published          0          0       0
XTENDER.B                                                                778   3 Dependency Published          0          0       0
XTENDER.A                                                                464   4 Dependency Published          0          0       0
select/*+ result_cache */ f_without_deps(:p_tab) result from dual       1749   1 Result     Published          0          0       0

But don’t forget that the result_cache also caches such functions together with objects that usually should not be cached, and those objects will not be listed in the result_cache dependencies list:
v$database

SQL> exec :p_tab:='v$database';

PL/SQL procedure successfully completed.

SQL> call DBMS_RESULT_CACHE.flush();

Call completed.

SQL> select/*+ result_cache */ f_without_deps(:p_tab) result from dual;

RESULT
----------
v$database

1 row selected.

SQL> select name,bucket_no, id,type,status,pin_count,scan_count,invalidations from v$result_cache_objects o order by type,id;

NAME                                                               BUCKET_NO  ID TYPE       STATUS     PIN_COUNT SCAN_COUNT INVALID
----------------------------------------------------------------- ---------- --- ---------- --------- ---------- ---------- -------
XTENDER.F_WITHOUT_DEPS                                                   772   0 Dependency Published          0          0       0
PUBLIC.V$DATABASE                                                       1363   2 Dependency Published          0          0       0
select/*+ result_cache */ f_without_deps(:p_tab) result from dual       2283   1 Result     Published          0          0       0

3 rows selected.

As you can see, there is only a dependency on the public synonym V$DATABASE, but not on the real base fixed X$ tables.
SYS.OBJ$

SQL> exec :p_tab:='sys.obj$';

PL/SQL procedure successfully completed.

SQL> call DBMS_RESULT_CACHE.flush();

Call completed.

SQL> select/*+ result_cache */ f_without_deps(:p_tab) result from dual;

RESULT
----------
sys.obj$

1 row selected.

SQL> select name,bucket_no, id,type,status,pin_count,scan_count,invalidations from v$result_cache_objects o order by type,id;

NAME                                                               BUCKET_NO ID TYPE       STATUS     PIN_COUNT SCAN_COUNT INVALID
----------------------------------------------------------------- ---------- -- ---------- --------- ---------- ---------- -------
XTENDER.F_WITHOUT_DEPS                                                  3922  0 Dependency Published          0          0       0
select/*+ result_cache */ f_without_deps(:p_tab) result from dual       3753  1 Result     Published          0          0       0

2 rows selected.

The results were cached and the dependencies do not include the system objects.
We can easily check that queries on any table in the SYS schema, or queries using sysdate, systimestamp, current_date, current_timestamp or dbms_random, will not be cached:
SYS tables

SQL> select/*+ result_cache */ current_scn result from v$database;

    RESULT
----------
##########

1 row selected.

SQL> select name,bucket_no, id,type,status,pin_count,scan_count,invalidations from v$result_cache_objects o order by type,id;

no rows selected

SQL> explain plan for select/*+ result_cache */ * from sys.obj$;

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------
Plan hash value: 2311451600

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 87256 |  7328K|   296   (1)| 00:00:04 |
|   1 |  TABLE ACCESS FULL| OBJ$ | 87256 |  7328K|   296   (1)| 00:00:04 |
--------------------------------------------------------------------------

Note that there is no “RESULT CACHE” line.
And even if we create our own tables in the SYS schema (don’t do it :)), they will not be cached:
SYS.V_AB

SYS> create table a as select 'a' a from dual;
SYS> create table b as select 'b' b from dual;
SYS> create view v_ab as select a,b from a,b;
SYS> grant select on v_ab to xtender;

XTENDER> explain plan for select/*+ result_cache */ * from sys.v_ab;
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------
Plan hash value: 215283502

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |     1 |     6 |     4   (0)| 00:00:01 |
|   1 |  MERGE JOIN CARTESIAN|      |     1 |     6 |     4   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL  | A    |     1 |     3 |     2   (0)| 00:00:01 |
|   3 |   BUFFER SORT        |      |     1 |     3 |     2   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL | B    |     1 |     3 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)

But sys_context and userenv will be cached successfully:
sys_context

SQL> explain plan for select/*+ result_cache */ sys_context('userenv','os_user')  from dual;

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------
Plan hash value: 1388734953

---------------------------------------------------------------------------------------
| Id  | Operation        | Name                       | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                            |     1 |     2   (0)| 00:00:01 |
|   1 |  RESULT CACHE    | 267m2hcwj08nq5kwxcb0nb2ka8 |       |            |          |
|   2 |   FAST DUAL      |                            |     1 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; attributes=(single-row); parameters=(sys_context);
        name="select/*+ result_cache */ sys_context('userenv','os_user')  from dual"

14 rows selected.

userenv

SQL> explain plan for select/*+ result_cache */ userenv('instance')  from dual;

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------
Plan hash value: 1388734953

---------------------------------------------------------------------------------------
| Id  | Operation        | Name                       | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                            |     1 |     2   (0)| 00:00:01 |
|   1 |  RESULT CACHE    | dxzj3fks1sqfy35shbbst4332h |       |            |          |
|   2 |   FAST DUAL      |                            |     1 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; attributes=(single-row); parameters=(sys_context); 
       name="select/*+ result_cache */ userenv('instance')  from dual"


Categories: Development

A function gets called twice if the result_cache is used

XTended Oracle SQL - Sun, 2015-07-05 07:37

Recently I showed a simple example of how the result_cache works with non-deterministic functions and observed strange behaviour: the function gets fired once in a normal query, but twice with the result_cache hint.
Moreover, only the third and subsequent query executions return the same cached result as the second execution.
I didn’t want to investigate this behaviour, simply because 1) we should not cache the results of non-deterministic functions and 2) it doesn’t matter if we use deterministic functions.
But later I was asked to explain it, so this post is just a short description with a test case.

Look at the simple function that just returns random values:

create or replace function f_nondeterministic(p int:=100) return int
as
  res number;
begin
   res:=round(dbms_random.value(0,p));
   return res;
end;
SQL> exec dbms_result_cache.flush;
PL/SQL procedure successfully completed.

SQL> select/*+ result_cache */ f_nondeterministic(1000) nondeter from dual;

  NONDETER
----------
       481

SQL> select/*+ result_cache */ f_nondeterministic(1000) nondeter from dual;

  NONDETER
----------
       689

SQL> select/*+ result_cache */ f_nondeterministic(1000) nondeter from dual;

  NONDETER
----------
       689

with result_cache_statistics

SQL> exec dbms_result_cache.flush;

PL/SQL procedure successfully completed.

SQL> select/*+ result_cache */ f_nondeterministic(1000) nondeter from dual;

  NONDETER
----------
       481

SQL> select name,value from v$result_cache_statistics where name in ( 'Create Count Success', 'Find Count');

NAME                                               VALUE
-------------------------------------------------- ----------
Create Count Success                               1
Find Count                                         0

SQL> select/*+ result_cache */ f_nondeterministic(1000) nondeter from dual;

  NONDETER
----------
       689

SQL> select name,value from v$result_cache_statistics where name in ( 'Create Count Success', 'Find Count');

NAME                                               VALUE
-------------------------------------------------- ----------
Create Count Success                               1
Find Count                                         1

SQL> select/*+ result_cache */ f_nondeterministic(1000) nondeter from dual;

  NONDETER
----------
       689

SQL> select name,value from v$result_cache_statistics where name in ( 'Create Count Success', 'Find Count');

NAME                                               VALUE
-------------------------------------------------- ----------
Create Count Success                               1
Find Count                                         2

SQL> select name,bucket_no, id,type,status,pin_count,scan_count,invalidations from v$result_cache_objects o;

NAME                                                BUCKET_NO         ID TYPE       STATUS     PIN_COUNT SCAN_COUNT INVALIDATIONS
-------------------------------------------------- ---------- ---------- ---------- --------- ---------- ---------- -------------
XTENDER.F_NONDETERMINISTIC                                552          0 Dependency Published          0          0             0
select/*+ result_cache */ f_nondeterministic(1000)       2102          1 Result     Published          0          2             0
 nondeter from dual

As you can see, the second execution returns a different result than the first one.
If we change this function:

create or replace function f_nondeterministic(p int:=100) return int
as
  res number;
begin
   res:=round(dbms_random.value(0,p));
   dbms_output.put_line('fired! ('||res||')');
   return res;
end;

and repeat this test-case:

SQL> select/*+ result_cache */ f_nondeterministic(1000) nondeter from dual;

  NONDETER
----------
       943    -- << (2)

1 row selected.

fired! (607)    -- << (1)
fired! (943)    -- << (2)
SQL> /

  NONDETER
----------
       607    -- << (1)

1 row selected.

SQL> /

  NONDETER
----------
       607    -- << (1)

1 row selected.

SQL> /

  NONDETER
----------
       607    -- << (1)

1 row selected.

we will see that there were 2 function executions: the result of the first one (607) was cached, while the result of the second one (943) was returned to the client!
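For contrast, here is a sketch illustrating point 2 from the introduction (this deterministic function is not part of the original test case): the extra execution is harmless here, because every call with the same argument returns the same value.

create or replace function f_deterministic(p int) return int
   deterministic
as
begin
   return mod(p, 100);
end;
/
select/*+ result_cache */ f_deterministic(1000) deter from dual;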

Categories: Development

Flipkart, ecommerce, machine learning, and free advice

Abhinav Agarwal - Sat, 2015-07-04 12:08
I wrote about the obsession of Flipkart (and Myntra) with "mobile-only" without even having an iPad-optimized app! I also talked about the stunning advances being made in voice-search by using machine learning, cognitive learning, natural language processing, even as voice-based search capabilities of e-commerce companies - including Amazon - remain abysmal. Finally, I also included several use-cases that these companies need to work on incorporating into their capabilities.

That piece, Flipkart, Focus and Free Advice, appeared in DNA on June 27th, 2015.


My earlier pieces on the same topic:
  1. Flipkart vs Amazon: Beware the Whispering Death - 20th April '15 (blog, dna)
  2. Mobile Apps: There’s Something (Profitable) About Your Privacy - 18th April '15  (blog, dna)
  3. Mobile advertising and how the numbers game can be misleading - 14th April '15  (blog, dna)
  4. Is Flipkart losing focus - 12th April '15  (blog, dna)
Flipkart, Focus, and Free Advice – Shipping Charges Also Waived!
What is one to make of a statement like this - “India is not mobile-first, but mobile-only country[1]”? Especially so if it is from the co-founder of the largest ecommerce company in India, and it turns out the company does not even have an app for the Apple iPad?

I have written at length on the distractions that seem to have been plaguing Flipkart and why it cannot afford to drop its guard in this fiercely contested space[2] - especially in light of all the noise surrounding its mobile ambitions. Somewhat paradoxically, this post is about offering advice to Flipkart that calls for some diversification!

As a logical next step, I wanted to take a look at Flipkart’s mobile apps – both on the iOS and Android platforms – to see how well they were executing on their very bold ambitions. As an aside, I also wanted to see if these (and competitive) mobile apps were leveraging all the computing power now available on tap inside these tiny devices. After all, apart from the recent – and amazing – advances Google has made in its voice-based search capabilities[3], there was this stunning demo from Hound[4] that gave a glimpse into the huge advances that voice-recognition, search, and machine-learning technologies have made in the last decade.
The results were, to put it mildly, massively disappointing – which I will describe in some detail.
It should be clear that Amazon and Flipkart and SnapDeal are going to be at each other’s throats in the Indian online retail market. This is one battle from which neither player can walk away. Amazon has lost the China market to Alibaba (“In the first quarter of 2014, Alibaba's e-tailing site had a 48.4 per cent market share against Amazon China's less than 3 per cent.”[5] If that was not enough, Alibaba and Foxconn are in talks with SnapDeal for a rumoured $500 million investment![6]).

Amazon cannot afford to now lose the India market to a local upstart. Flipkart, on the other hand, has even less choice. It plays only in the Indian market. It cannot walk away either; there is no other market for it to walk towards. Its valuations – expected to rise to $15 billion after its next round of funding[7] make it way too costly for it to be acquired – at least profitably so for those funders who have put in hundreds of millions of dollars at these later and higher valuations. Amazon and Flipkart have deep pockets; Flipkart can afford to bleed hundreds of millions of dollars a year even as it grows, while Amazon has conditioned Wall Street to grant it the currency of ultra-high valuations even as it operates on razor-thin margins. It is unlikely that either will be able to deliver a knockout punch to the other anytime soon. This is a fifteen-round slugfest that will be decided by who can keep soaking in the blows and keep standing at the end of the fifteenth round; while they fight, the customer continues to win. Amazon has more diversity in its portfolio of business divisions than does Flipkart – ecommerce, cloud computing services, streaming audio and video, MRO and industrial supplies, smartphones, tablets, and more. While these divisions may at times face off against each other in expectedly healthy and sometimes unhealthy rivalry, they still form a formidable front against the competition. To quote these immortal lines from the Mahabharata, “we may be five against a hundred, but against a common enemy we are a hundred and five.”

So what does Flipkart do? Three things, to begin with.

First, it needs to get serious about software.
When you have a web site that offers millions of products from tens of thousands of resellers to millions of customers that reside in tens of thousands of cities and towns and villages, you need to make sure that your customers are seeing the products that are of most relevance to them, and which they are most likely to buy. If that problem looks like a nail to you, specifically a large-scale optimization problem with a huge number of decision variables, then large-scale computing and regression modelling are the hammer. You need to be applying this hammer to the almost infinite number of nails in front of you, all day and all night long. This is what enables you to present an ever-relevant basket of products to your customers, which keeps them engaged when on your site, and which hopefully makes them buy more often than not. Flipkart needs to take a close, long, hard look at its search capabilities – about which I will talk later in this post – and its suggestions engine, because both are very subpar at this point. If it’s any consolation, while Amazon is certainly better in the search department, its capabilities in this area are nothing great either, yet. Where Amazon scores over its competitors – every single one of them - is its huge and ever-growing corpus of customer reviews. Flipkart probably recognizes the importance of this corpus of customer reviews, but has run into rough weather over the expected problem of fake reviews[8].

For inspiration on where the trifecta of search, machine learning, and e-commerce could venture – with Big Data in tow - one can turn to the story of how the popular American TV game show “Jeopardy” became the battleground for IBM researchers to build upon their experience with Deep Blue (the computer that had beaten world chess champion Gary Kasparov in 1997[9]) and to build a computer that would defeat the reigning champion of Jeopardy. That happened in February 2011, after four years of work led by IBM researcher David Ferucci and “about twenty researchers”[10].
This required advances in machine learning and other esoteric concepts like LAT (Lexical Answer Type), IDF (Inverse Document Frequency), temporal and even geospatial reasoning.[11] A new suite of software and platforms, built on a concept called genetic programming (“a technique inspired by biological evolution”) has started to make its way into mainstream commercial applications.  The algorithm here “begins by randomly combining various mathematical building blocks into equations and then testing to see how well the equations fit the data. Equations that fail the test are discarded, while those that show promise are retained and recombined in new ways so that the system ultimately converges on an accurate mathematical model.”[12] What this essentially means is going beyond keyword search-based correlations and moving to more semantic-oriented searches that combine machine learning with natural language processing. This in turn requires serious software brains (smart programmers using and refining the right algorithms and models) and muscle (massive learning and training sets in the hundreds of gigabytes running on clusters of tens of thousands of nodes).
If Flipkart is serious about the mobile ad business (about which I have expressed my reservations), even then it needs to get to the holy grail of deep-learning in ad-tech – “Inferring Without Interfering” with the customer’s intent[13]. In any event, this policy will only stand Flipkart in good stead. If they are already doing so, then good, but the proof is not in the pudding as much as in the eating of the pudding.

A critical differentiator in the coming times is not, I repeat, not, going to be driven by slick UIs or gimmicks on mobile apps like “shake to see offers”, but by offering truly intelligent and immersive experiences that are feasible even today. Advances in machine learning, and capabilities such as voice, video, location, and more, when used in tandem will power the next set of innovations. Rather than stick to the tried and tested and old way of making users search using simple keywords and correlations and prior history, e-tailers need to make the shopping experience more intelligent.

Appendix 2 and 3 outline possible use-cases. It should be clear that both Flipkart and Amazon have a long, long way to go before realizing anything close to the vision outlined, but without such advances, competitors like Google will find the wedge they need to prise open this market for themselves.

Second, Flipkart (or even Amazon for that matter, or SnapDeal, or whichever competitor you happen to care about, though in this case the admonition is more targeted at Flipkart in light of its mobile-only pronouncements) needs to get serious about the mobile platform.

Browse to either Flipkart or Myntra’s websites from a browser on an iPad and you are asked to use their app instead. Would you believe it if I told you Flipkart does not have an iPad app (as of 15th June 2015)? No? Go check for yourself – I did! Ditto for Myntra (the online fashion retailer Flipkart acquired in 2014)! See Appendix 1 for what I found when I downloaded their apps on my iPad tablet. This would be comedically farcical if serious money weren’t riding on such decisions.

Third, Flipkart needs to get into the cloud business.

Yes, I am serious.

Let’s look at the competition – Amazon. It is the 800 pound gorilla in the cloud computing industry, where its offering goes under the umbrella of AWS (Amazon Web Services) and offers almost everything you could think of under the cloud – platform, infrastructure, software, database, email, storage, even machine learning, and much more. How gorilla-ish? “AWS offers five times the utilized compute capacity of the other 14 cloud providers in the Gartner Magic Quadrant. Combined.[14]” Since 2005, Amazon has spent “roughly $12 billion” on its infrastructure[15]. It competes with the likes of Microsoft and Google in this space. Yet, Amazon’s cloud revenues are estimated to be “30 times bigger than Microsoft’s”.[16]

And yet I argue that Flipkart should get into the cloud business. As I wrote last year[17], Flipkart had to invest substantially (per my estimates, more than one hundred crore rupees, or somewhere in the vicinity of $15-$20 million dollars – which is not chump change) to build its capacity to stand up to the traffic it expected for its “Big Billion Day”. This is in addition to the regular additions it must be making to its computing infrastructure. All this is not surprising, given that the retail business is prone to lumpiness in traffic – a disproportionate amount of traffic is concentrated around sale events, or holidays.

For example, while Amazon reportedly had ten million Prime subscribers in March 2013, it reported that over 10 million “tried Prime for the first time” over the holidays in 2014 (traditionally the period between Thanksgiving and Christmas).[18] To prevent web sites from keeling over under the crush of holiday traffic, companies invest substantially, in advance, to make sure the web site keeps chugging along. The flip side is that for those periods when traffic is more average and a fraction of peak traffic, all those thousands of computers, the hundreds of gigabytes of memory, terabytes of disk space, and gobs of network bandwidth capacity are lying idle – depreciating away, obsolescing away.

Amazon realized this a decade ago and started building a rental model around its excess capacity – this was the genesis behind Amazon Web Services. There is no reason for Flipkart to not do the same. What works for Amazon has worked quite well for Flipkart[19]. If it spins off its entire e-commerce infrastructure into a separate entity, it can palm off much of the capital costs of its computing infrastructure onto the cloud computing subsidiary, substantially improving its balance sheet in the process. You could argue this is nothing but an accounting gimmick, and I am not going to argue with that aspect of the decision - there would be undeniable and real benefits to this decision, and it’s childish to expect a business to be run on utopian principles. As things stand, the state government of Telangana is already assiduously wooing Amazon to invest in an AWS centre in the state[20]. Once operating on Indian soil, Amazon will be able to meet legal requirements that require certain categories of data to remain within the national borders.

Any industry so heavily influenced and shaped by technology as the e-commerce industry would do well to listen to the winds of change. If unheard and unheeded, these winds of change turn into gale storms of disruption that blow away incumbents faster than you can imagine. “Mobile-only” is a useful-enough mantra, but translating that into an “app-only” sermon hints at myopic thinking – a troubling sign for sure. It turns out that Google “secretly” acquired a company that specializes in “streaming native mobile apps”. Is this a shape of the things to come? How will this transform the world of mobile apps, or even the mobile landscape in general? Time will tell, but “lock-in” may well be a wise strategy for your customers, but a terrible one to apply to yourself.[21].

Appendix 1 - App-solutely Serious about Apps?
Fire up your favourite mobile browser on an Apple iPad and browse to Myntra’s website (that would be www.myntra.com). You are greeted with a message to vamoose to their mobile app, because after all, Myntra is all about mobility – social mobility in fashion, and mobile devices when speaking more literally.
Figure 1 - Myntra web site on tablet browser
Incredulity hits you in the face when you realize that (on the Apple App Store) the Myntra app is “optimized for iPhone 5, iPhone 6 and iPhone 6 Plus”, but not the iPad. Yes, you read that right – the web site that tells you you have to use its mobile app, and its mobile app only, on an iPad does not have an app optimized for the iPad.
Figure 2 - Myntra app details on the Apple App Store
I am, however, somewhat of a cynical person. I tried searching for the keyword “myntra” on the Apple App Store. The only filter applied was to look for “iPad Only” apps. Here are the beatific search results. India gave the world the concept of zero, and the search results page gave one practical application of that elegant mathematical concept.
Figure 3 - Search results for "iPad Only" apps on the Apple AppStore for "myntra"
So where was that Myntra app hiding? I changed the filter to “iPhone Only”, and true-enough, there was that Myntra app.
Figure 4 - Myntra app on the Apple App Store
In case you are wondering how that was even possible, know that most apps created for the iPhone (or iPod Touch) can run on an iPad without any modifications – all that is required is for you to keep this in mind when compiling the app. Apple calls this a “Universal app”[22].

Now that can’t be so bad, right? After all, the app is available on the iPhone and the iPad, so where and what is the grouse? I will come to that in just a bit, but take a look at what the Myntra app looks like when run on the iPad.
Figure 5 - Myntra app running on an iPad
This is how the app runs inside an iPad. You have the option of tapping the “2x” button, after which the app uses the full screen, but by scaling everything to twice its size. There is no other intelligence here being applied – like changing the icons, or the text, or adding more features. This is iOS doing what little work you see.
Why this arouses incredulity is due to the stunning dissonance one experiences – between the statements of the Myntra (and Flipkart) executives going to town about a “mobile-only” world[23] on the one hand and the reality of a missing-in-action iPad-optimized app on the other. Yes, one could make the argument that Apple commanded a stunningly low single-digit share of 7% of the tablet market in India[24], but to make this argument is to negate your very philosophy of a “mobile-only” world. Mobile includes smartphones, tablets, phablets, wearables (for which Flipkart does have an app![25]), smart-TVs, and even embedded devices.
Flipkart’s mobile web site works - at least for now - on the iPad (though it does not on a smartphone – you have no option but to use their app), but the story is not much different there. No iPad-optimized app, but a smartphone app that does duty on the iPad by virtue of it being a “Universal” app.
Figure 6 - Flipkart shopping app in the Apple App Store
Figure 7 - Flipkart shopping app on the Apple iPad
It’s not as if Amazon’s iPad app is much better. Yes, they do have an iPad app, but it looks more like a hybrid app – a native shell with an embedded browser snuck in, and very little by way of any tablet optimizations.
Figure 8 - Amazon app for the iPad
Appendix 2 – Natural Speech Searches
Mobile shopping apps like Flipkart and Amazon provide you the option of inputting your search query via voice (more because of the support the underlying mobile OS provides), but that forces you to say out loud what you have typed – keywords, and nothing more.
Unlike the stunning Hound demo or the capabilities of Google Now[26], e-tailers have yet to leave the stone age in search capabilities. While Hound can understand and answer (correctly) queries like “Show me hotels in Seattle for Friday, staying one night” and then support refinements to the query like “Show only the ones costing less than $300” or “Show only the ones that have three or four or five stars that are pet friendly, that have a gym and a pool, within 4.5 miles of the Space Needle”[27], and Google Now can understand foreign accents (like my Indian English accent) and parse phrases like “ghat”, “jyotirling” and more, a relatively simple phrase like - “What are the best sellers in fiction” – leads to disappointment on both Amazon and Flipkart’s mobile apps.
Figure 9 - Search results in the Amazon app
And to be clear, what was presented was not the bestsellers list, because the bestseller list looked like this:
Figure 10 - Non-fiction bestsellers in books as shown on the Amazon app
I tried another search – “Suggest books for children”. I don’t know what to call the search results, but one with “*divorce* your Child” as the first result is surreal.
Figure 11 - Search results on Amazon app
To complete my brief experiment on Amazon, I tried “Show me best sellers in electronics”. That also did not yield any relevant results.
Figure 12 - Search results in the Amazon app
Flipkart is not much better, and at this point we are really looking at rock-bottom as the baseline. Even a marginal improvement would be welcome here. Sadly, not the case. Though, Flipkart does separate each word out and allow you to delete any one word to refine your search. Given the abysmal quality of search results, it is somewhat of a zero-divide-by-zero case, resulting in only infinite misery trying to find the right combination of keywords that will yield the desired results.
Figure 13 - Search results on the Flipkart app
Does the Myntra app fare any better? Predictably, it doesn’t. If semantic search in the e-commerce space was a problem that had been cracked by either Flipkart or Myntra, it would have been shared across both platforms by now.
Figure 14 - Search results in the Myntra app
Even Google, with its oft-stated e-commerce ambitions[28],[29] , and the eye-popping advances that it has made with its voice-based search (Siri from Apple and lately Cortana from Microsoft also deserve to be included, but neither company seems to be quite interested in e-commerce at the scale of Amazon, yet) left me disappointed with a simple search like – “what are the fiction best sellers in India”.
Figure 15 - Search results in the Google app
Appendix 3
What do I have in mind with respect to the kinds of queries that Flipkart (or Amazon) should be trying to enable? Without any further context, I present the following examples:
One:
(this is a comparatively simpler form of the semantic search capabilities I propose)
Me: show me the best sellers in non-fiction
App: [displays a list of book best sellers in non-fiction] [Optionally, excludes or places the ones I have bought at the bottom of the list; or marks them differently and provides me with an option of reading them online – assuming I had purchased an e-book version]
Me: show me only those books that have been published in the last three months;
App: [filters the previous set of search results to show only those non-fiction best sellers that have been published in the last three months]
Me: also include books that were on the bestseller list this year
App: [adds books that were in the top 10/20 bestsellers list in 2015 but have now dropped out of the rankings]
Me: cancel the last search, and show me those books that are also available as e-books, and then sort them by price
App: [displays a list of book best sellers in non-fiction, filtered by those available on the Kindle, and sorts by price, ascending]
Me: send me free e-book samples of the first five books from this list and remind me in one week whether I want to purchase them.
App: [downloads free samples of the first five books to my e-book app] [creates a reminder to remind me in one week]


Two:
(this is a more social and more nuanced form of the semantic search outlined above)
Me: show me a list of LED TVs
App: [displays a list of the bestselling LED TVs]
Me: show me LED TVs that are HD, 40 inches or larger, cost no more than Rs 60,000, and can be delivered in the next three days.
App: [displays a list of TVs matching the criteria, and adds – “there are only three TVs that match your search criteria, so I have changed the price to Rs 70,000, which has resulted in five more search results. Say “cancel” to undo.”]
Me: Which among these would be most relevant to me?
App: [displays the list sorted based on popularity in my postal code] [offers to show the list sorted on TVs sold in the last three months to the housing community I live in – or the company I work at – or based on people with my profile of educational qualifications or marital/family status – based on privacy settings of course]
Me: summarize the most useful reviews for the first TVs, and keep each under two minutes.
App: [summarizes the most useful reviews and then reads out a software-generated summary, in less than two minutes. Also sends a text summary to my WhatsApp or email]
Far-distant utopia? Naah, I don’t think so. This is within the realm of the possible, and I expect to see this become reality in the next two years. Today, however, we are some ways off from the innovations where online shopping will become a truly immersive, interactive experience akin to having a natural conversation with an incredibly knowledgeable yet infinitely patient salesperson.

Three:
(ratcheting things up one more notch)
Me: (standing amidst the ruins of Hampi) Suggest some good books about this place.
App: [suggests bestsellers or highest-rated books on three categories: coffee-table books on Hampi; history of Hampi and the Vijayanagar Empire; historical fiction books set in the fifteenth/sixteenth century Vijaynagara Empire]
Me: Also suggest something on the significance of this chariot temple
App: …

Four:
App: [reminds me that I have a party at my house this weekend where four families are coming over]
Me: I need some snacks and also suggest some recent action movies to rent
App: [suggests food-items to order and shows a list of the five top grossing movies of the year in the “Action” genre and shows options: buy, rent (really?), stream]
Me: place the first, third, and fifth items in the shopping cart, record this and deliver to my wife. Then rent to stream the third movie in HD format on Saturday evening.
App: [places these items in the shopping cart, records a 15 second video and pings the spouse via a notification/alert to view the video. It also places an order for the selected movie]

Disclaimer: views expressed are personal.

 References:
[1] "India is not mobile-first, but mobile-only country: Sachin Bansal, Flipkart's founder and Mukesh Bansal, Myntra's CEO - timesofindia-economictimes", http://articles.economictimes.indiatimes.com/2015-05-13/news/62124447_1_myntra-sachin-bansal-ceo-mukesh-bansal
[2] See http://www.dnaindia.com/analysis/standpoint-flipkart-vs-amazon-beware-the-whispering-death-2079185 and http://www.dnaindia.com/analysis/standpoint-why-flipkart-seems-to-be-losing-focus-2076806
[3] "Google Launches Custom Voice Actions For Third Party Apps", http://searchengineland.com/google-launches-custom-voice-actions-for-third-party-apps-220148
[4] "After Nine Years of Secret Development, Hound Voice Search App Has a Dazzling Demo | Re/code", http://recode.net/2015/06/02/after-nine-years-of-secret-development-hound-voice-search-app-has-a-dazzling-demo/
[5] "A missed opportunity in China has Amazon founder Jeff Bezos backing his India venture", http://indiatoday.intoday.in/story/amazon-jeff-bezos-india-business-venture-flipkart-snapdeal/1/393933.html
[6] "Alibaba, Foxconn in Talks to Invest $500 Million in India’s Snapdeal - India Real Time - WSJ", http://blogs.wsj.com/indiarealtime/2015/06/16/alibaba-foxconn-in-talks-to-invest-500-million-in-indias-snapdeal/
[7] "Flipkart set to raise up to $800 million - Livemint", http://www.livemint.com/Companies/j2B9ax1SNS4JrDdJAU19sO/Flipkart-set-to-raise-up-to-800-mn.html
[8] See "How e-retailers such as Flipkart, Amazon are keeping fake products at bay - timesofindia-economictimes", http://articles.economictimes.indiatimes.com/2015-01-08/news/57791521_1_amazon-india-sellers-mystery-shoppers, "Who Reviews the Reviewers? How India's Online Businesses Are Fighting Fake Reviews | NDTV Gadgets", http://gadgets.ndtv.com/internet/features/who-reviews-the-reviewers-how-indias-online-business-are-fighting-fake-reviews-697112, and "How genuine are product reviews on FlipKart? - Quora", http://www.quora.com/How-genuine-are-product-reviews-on-FlipKart
[9] "Deep Blue (chess computer) - Wikipedia, the free encyclopedia", https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)
[10] "Rise of the Robots: Technology and the Threat of a Jobless Future”, by Martin Ford, Jeff Cummings, ISBN 9781480574779, http://www.amazon.com/Rise-Robots-Technology-Threat-Jobless/dp/1480574775
[11] Ibid and "The AI Behind Watson — The Technical Article", http://www.aaai.org/Magazine/Watson/watson.php
[12] "Rise of the Robots: Technology and the Threat of a Jobless Future”
[13] Gregory Piatetsky on Twitter: "The #DeepLearning future of ads: "Inferring Without Interfering with what moves the Customer most" - J. Kobelius http://t.co/zh96DC6DDG" https://twitter.com/kdnuggets/status/610848069672927232
[14] "Gartner: AWS Now Five Times The Size Of Other Cloud Vendors Combined - ReadWrite", http://readwrite.com/2013/08/21/gartner-aws-now-5-times-the-size-of-other-cloud-vendors-combined
[15] Ibid.
[16] "How much bigger is Amazon’s cloud vs. Microsoft and Google?", http://www.networkworld.com/article/2837910/public-cloud/how-much-bigger-is-amazon-s-cloud-vs-microsoft-and-google.html
[17] "A Billion Dollar Sale, And A Few Questions", http://www.dnaindia.com/analysis/standpoint-a-billion-dollar-sale-and-a-few-questions-2047853
[18] "Amazon added 10M new Prime subscribers over the holidays, could make up to $1B in annual revenue | VentureBeat | Business | by Harrison Weber", http://venturebeat.com/2014/12/26/amazon-made-nearly-1b-from-new-prime-subscriptions-over-the-holidays/
[19] See http://www.dnaindia.com/analysis/standpoint-flipkart-vs-amazon-beware-the-whispering-death-2079185
[20] "Amazon likely to bring Web Services to Telangana | Business Standard News", http://www.business-standard.com/article/companies/amazon-likely-to-bring-web-services-to-telangana-115061000659_1.html
[21] "Report: Last Year Google Secretly Acquired Agawi, A Specialist In Streaming Native Mobile Apps | TechCrunch", http://techcrunch.com/2015/06/18/report-last-year-google-secretly-acquired-agawi-a-specialist-in-streaming-native-mobile-apps/
[22] "Start Developing iOS Apps Today: Tutorial: Basics", https://developer.apple.com/library/ios/referencelibrary/GettingStarted/RoadMapiOS/FirstTutorial.html
[23] http://articles.economictimes.indiatimes.com/2015-05-13/news/62124447_1_myntra-sachin-bansal-ceo-mukesh-bansal
[24] "Samsung Tops Indian Tablet Market Share, Followed By Micromax, iBall", http://trak.in/tags/business/2014/11/28/indian-tablet-market-share-growth/
[25] "Flipkart launches an app for Android Wear sporting wearables - Tech2", http://tech.firstpost.com/news-analysis/flipkart-launches-an-app-for-android-wear-sporting-wearables-240070.html
[26] See this for a comparison between Hound and Google Now - "Here’s how Hound beta compares to Google Now (Video) | 9to5Google", http://9to5google.com/2015/06/05/hound-beta-vs-google-now-video/
[27] "After Nine Years of Secret Development, Hound Voice Search App Has a Dazzling Demo | Re/code", http://recode.net/2015/06/02/after-nine-years-of-secret-development-hound-voice-search-app-has-a-dazzling-demo/
[28] "Google Preps Shopping Site to Challenge Amazon - WSJ", http://www.wsj.com/articles/google-preps-shopping-site-to-challenge-amazon-1418673413
[29] "Google Finds Partners To Help It Compete With Amazon - Forbes", http://www.forbes.com/sites/benkepes/2015/04/13/google-finds-partners-to-help-it-compete-with-amazon/



© 2015, Abhinav Agarwal. All rights reserved.

Oracle Process Cloud Service - Consuming ADF BC REST Service in Web Form

Andrejus Baranovski - Sat, 2015-07-04 08:57
With the introduction of Oracle Process Cloud Service (https://cloud.oracle.com/process) there is an option to run your business process in the cloud. You can implement very similar things to what is possible with BPM 12c in JDeveloper, but only in the cloud. There is no option to implement the human task UI with ADF; it must be done with Web Forms (a lightweight UI forms framework). This is a disadvantage, as it requires externalising complex business logic and accessing data collections through REST calls, instead of processing them locally in an ADF extension. However, this is how it is currently implemented in the BPM cloud version.

This is how it looks - Web Form editor in Process Cloud Service:


You have an option to select specific types from the component palette. There are types such as money, email, phone and text. Components come with built-in validation - checks for a valid email address, etc. The user who implements a form needs to drag and drop components one by one (generally this works OK, but it sometimes gets stuck and a page refresh is required). A Properties section is available to enter details for the selected component.

The cool part about it - we can define business logic rules. For example, to show/hide a customer field based on trip type, etc. One of the rules below shows how a REST service can be called inline and the REST response applied to a drop-down field. It is pretty simple to fetch data from an external REST service and use it for a Web Form component: just invoke the REST service and assign the collection to a component that supports an array of data:


If the REST call requires parameters to be passed through the URL, this is also possible; see the example below - a list of cities, based on the selected country in the Web Form:


The Web Form works pretty well in Preview mode; it even calls REST services and shows real data. Validation messages for built-in checks are displayed pretty nicely. Here I'm selecting a country from the list populated by the REST service:


Based on the selection in the Country drop-down, a filtered list of Cities becomes available (another REST call):


Good news - most of the Oracle Process Cloud interface itself is implemented with ADF. Here you can see a section listing all Web Forms available for the current process:


I have implemented the REST service for the Countries and Cities lists in ADF BC. There are two custom methods in the AM implementation class, to fetch the data and transform it into the list format:


REST resources are defined through annotations and are configured to produce JSON data; this allows Oracle Process Cloud to parse such data automatically:


If you want to check how the REST service is implemented in ADF BC, you can run a query in Postman to retrieve a list of all countries from the HR schema:


Another query to retrieve a list of cities, by country:
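For reference, the data these two resources return can be reproduced with plain SQL against the standard HR schema. This is only a sketch of what the underlying ADF BC view objects would query; the bind variable name is illustrative and the actual sample application may differ:

-- All countries, used to populate the Country drop-down
select country_id, country_name
from   hr.countries
order  by country_name;

-- Cities for one selected country
select distinct city
from   hr.locations
where  country_id = :country_id
order  by city;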


Download the sample application with the ADF BC REST service - RestADFBCApp.zip.

RMAN -- 4 : Recovering from an Incomplete Restore

Hemant K Chitale - Fri, 2015-07-03 23:22
What do you do if a RESTORE fails mid-way ?  Do you need to rerun the whole restore ?  If it is a very large database, it could take [many ?] hours.

RMAN is "smart" enough to detect datafiles that have been restored and not re-attempt a restore.

UPDATE 08-Jul-15 :  For a database with Oracle Managed Files  (OMF), please see Oracle Support Note Doc ID 1621319.1  {Thus, see my next blog post on Recovering from an Incomplete Restore with OMF Files}

This example below is for non-OMF files.

Here, I begin a database restore.

 
RMAN> restore controlfile from autobackup;

Starting restore at 04-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=19 device type=DISK

recovery area destination: /NEW_FS/oracle/FRA
database name (or database unique name) used for search: HEMANTDB
channel ORA_DISK_1: AUTOBACKUP /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp found in the recovery area
AUTOBACKUP search with format "%F" not attempted because DBID was not set
channel ORA_DISK_1: restoring control file from AUTOBACKUP /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp
channel ORA_DISK_1: control file restore from AUTOBACKUP complete
output file name=/home/oracle/app/oracle/oradata/orcl/control01.ctl
output file name=/home/oracle/app/oracle/flash_recovery_area/orcl/control02.ctl
Finished restore at 04-JUL-15

RMAN>
RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN>
RMAN> restore database;

Starting restore at 04-JUL-15
Starting implicit crosscheck backup at 04-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=19 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK
Crosschecked 8 objects
Finished implicit crosscheck backup at 04-JUL-15

Starting implicit crosscheck copy at 04-JUL-15
using channel ORA_DISK_1
using channel ORA_DISK_2
Finished implicit crosscheck copy at 04-JUL-15

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp

using channel ORA_DISK_1
using channel ORA_DISK_2

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00004 to /home/oracle/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqsccg_.bkp
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqsccg_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:02:34
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00003 to /home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqwt4s_.bkp
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqwt4s_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:35
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00002 to /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
ORA-01092: ORACLE instance terminated. Disconnection forced
ORACLE error from target database:
ORA-03135: connection lost contact
Process ID: 3777
Session ID: 1 Serial number: 9

[oracle@localhost ~]$

After restoring a few datafiles, the restore failed when the connection to the database was lost (the server or the database instance had crashed). Since the controlfile had already been restored, I can bring the database up in MOUNT mode and then re-attempt a RESTORE DATABASE.
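
The restart itself is not shown in the transcript that follows; a rough sketch of that step, connecting locally as SYSDBA, would be:

# Bring the crashed instance back up in MOUNT mode before retrying the restore.
rman target / <<'EOF'
STARTUP MOUNT;
EOF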

[oracle@localhost ~]$ rman target sys/oracle@orcl

Recovery Manager: Release 11.2.0.2.0 - Production on Sat Jul 4 12:56:41 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN>
RMAN> restore database;

Starting restore at 04-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK

skipping datafile 3; already restored to file /home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
skipping datafile 4; already restored to file /home/oracle/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00002 to /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:02:36
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00010 to /home/oracle/app/oracle/oradata/orcl/APEX_2614203650434107.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxovh_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxovh_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /home/oracle/app/oracle/oradata/orcl/example01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxjv6_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxjv6_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:04:02
Finished restore at 04-JUL-15

RMAN>

RMAN detects that datafiles 3 (undotbs01.dbf) and 4 (users01.dbf) have already been restored. If you look at the previous RESTORE run, you can see that these were restored by channel ORA_DISK_2. Channel ORA_DISK_1 had started restoring system01.dbf but had not completed it when the restore crashed, so the restore of datafile 1 (system01.dbf) had to be redone.

(Another thing to note: Oracle doesn't necessarily restore datafiles in the order of file_id (file#). There really is no ORDER BY for a RESTORE.)

RMAN> recover database;

Starting recover at 04-JUL-15
using channel ORA_DISK_1
using channel ORA_DISK_2

starting media recovery

archived log for thread 1 with sequence 628 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc
archived log for thread 1 with sequence 629 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc
archived log for thread 1 with sequence 630 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc
archived log for thread 1 with sequence 631 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc
archived log for thread 1 with sequence 632 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc
archived log for thread 1 with sequence 633 is already on disk as file /home/oracle/app/oracle/oradata/orcl/redo03.log
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc thread=1 sequence=628
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc thread=1 sequence=629
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc thread=1 sequence=630
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc thread=1 sequence=631
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc thread=1 sequence=632
archived log file name=/home/oracle/app/oracle/oradata/orcl/redo03.log thread=1 sequence=633
media recovery complete, elapsed time: 00:00:02
Finished recover at 04-JUL-15

RMAN>
RMAN> alter database open resetlogs;

database opened

RMAN>


UPDATE 07-Jul-15 :  Also see my earlier (year 2012) post "Datafiles not Restored -- using V$DATAFILE and V$DATAFILE_HEADER", which also shows retrying a RESTORE DATABASE after a datafile failed to restore. There, a single file in a BackupSet failed to restore and Oracle did not continue with the other datafiles in that BackupSet. I could either fix the error and retry the entire BackupSet (RESTORE DATABASE would have identified the right BackupSet containing those files) or, as I did in that scenario, individually restore datafiles from the BackupSet.
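
If you want to check for yourself which datafiles are still in trouble before re-running the RESTORE, the V$DATAFILE_HEADER view mentioned above can be queried while the database is MOUNTed. A minimal sketch (the view and columns are standard, but treat the predicate as illustrative rather than a complete check):

# Sketch: list datafiles whose headers report an error (typically not yet
# restored) or that are still fuzzy (restored but still needing recovery).
sqlplus -s / as sysdba <<'EOF'
set linesize 200 pagesize 100
column name format a60
select file#, status, error, fuzzy, checkpoint_time, name
from   v$datafile_header
where  error is not null
   or  fuzzy = 'YES'
order  by file#;
EOF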

It can be a good idea to have your database backup consist of multiple BackupSets, using either multiple CHANNELs or FILESPERSET during the BACKUP.
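
For example, a backup along the following lines spreads the datafiles over several BackupSets, so that a problem with one piece does not force everything to be restored again. This is only a sketch -- the channel count and the FILESPERSET value are arbitrary choices, not recommendations:

# Sketch: two disk channels and a small FILESPERSET produce multiple,
# independently restorable backupsets.
rman target / <<'EOF'
RUN {
  ALLOCATE CHANNEL d1 DEVICE TYPE DISK;
  ALLOCATE CHANNEL d2 DEVICE TYPE DISK;
  BACKUP DATABASE FILESPERSET 4 PLUS ARCHIVELOG;
}
EOF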


You could also note, as an aside, that Log Sequence 633 was still in an online redo log file (redo03.log). RMAN automatically verifies that the online redo log files designated by the controlfile are present and uses them for recovery.

.
.
.

Categories: DBA Blogs

Log Buffer #430: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-07-03 12:56

This Log Buffer Edition cuts through the crowd and picks some of the outstanding blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Continuous Delivery (CD) is a software engineering approach in which teams keep producing valuable software in short cycles and ensure that the software can be reliably released at any time.
  • Query existing HBase tables with SQL using Apache Phoenix.
  • Even though WebLogic with Active GridLink is Oracle’s suggested approach to deploy Java applications that use Oracle Real Application Clusters (RAC), there might be scenarios in which you can’t make that choice (e.g. certification issues, licensing, library dependencies, etc.).
  • OSB & MTOM: When to use Include Binary Data by Reference or Value.
  • Ever used SoapUI to test services on multiple environments? Then you probably ran into the chore of constantly changing the endpoints to the hosts of the particular environment: development, test, acceptance, production (although I expect you wouldn’t use SoapUI against a production environment). This is not that hard if you have only one service endpoint in the project.

SQL Server:

  • Using DAX to create SSRS reports: The Basics.
  • Getting to know your customers better – cohort analysis and RFM segmentation in R.
  • Using the T-SQL PERCENTILE Analytic Functions in SQL Server 2000, 2005 and 2008.
  • Schema-Based Access Control for SQL Server Databases.
  • How to Fix a Corrupt MSDB SQL Server Database.

MySQL:

  • MySQL Enterprise Audit – parsing audit information from log files, inserting into a MySQL table.
  • Proposal to deprecate MySQL INTEGER display width and ZEROFILL.
  • Using Cgroups to Limit MySQL and MongoDB memory usage.
  • Slave election is a popular HA architecture; the first MySQL/MariaDB toolkit to manage switchover and failover correctly was introduced by Yoshinori Matsunobu in MHA.
  • Setting up environments, starting processes, and monitoring these processes on multiple machines can be time consuming and error prone.

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

The post Log Buffer #430: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

WebLogic Server (FMW) : Generating Thread Dumps using OS commands

Online Apps DBA - Fri, 2015-07-03 03:33

This post comes from our Oracle Fusion Middleware Training, where we cover Oracle WebLogic Server on Day 1. One of the performance issues commonly encountered in poorly written applications (or on a not-so-performant Fusion Middleware infrastructure) is Stuck Threads.

A Stuck Thread in WebLogic Server is a thread that has been processing the same request for a very long time, i.e. longer than the configurable Stuck Thread Max Time in WebLogic.

Thread dumps are diagnostic information used to analyse and troubleshoot performance-related issues such as server hangs, deadlocks, and slow, idle or stuck applications.

How to generate Thread dumps?
In this post, I will walk you through the steps to generate Thread dumps of a server using operating system (O.S.) commands.

1. Start the server from the command-line script (using nohup). Let us take a managed server as an example for which we need to generate the thread dumps, so start it using the script as shown below.
cd $DOMAIN_HOME/bin
nohup ./startManagedWebLogic.sh <Server_name> &

2. Now identify the PID (java Process ID) for the managed server using the below command:
ps auxwww | grep -i java | grep -i <server_name> (This command is for Solaris; a Linux variant is sketched after step 4.)

3. Now run the below command to create the thread dump.
kill -3 <PID>

(This sends the SIGQUIT signal to the JVM process. The signal causes the Java Virtual Machine to write a thread dump, i.e. a stack trace of every thread, without terminating the process.)

This command writes the thread dump to the nohup.out file (in the directory from which we started the managed server).

4. Open the nohup.out file to see the generated thread dump:
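
Putting steps 2 to 4 together, a rough Linux equivalent of the whole sequence looks like the sketch below. The server name is hypothetical, and the exact grep pattern and thread-dump marker text can vary between JVM versions:

# Sketch (Linux): find the managed server's JVM PID, request a thread dump,
# then locate the most recent dump in nohup.out.
SERVER_NAME=MS1      # hypothetical managed server name
PID=$(ps -ef | grep -i java | grep -i "$SERVER_NAME" | grep -v grep | awk '{print $2}')
kill -3 "$PID"       # SIGQUIT: the JVM writes a thread dump and keeps running
sleep 2
# HotSpot marks each dump with a "Full thread dump" header line.
grep -n "Full thread dump" nohup.out | tail -1
less +G nohup.out    # jump to the end of the file to read the latest dump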



The post WebLogic Server (FMW) : Generating Thread Dumps using OS commands appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

Testing the just released PostgreSQL 9.5 Alpha in a docker container

Yann Neuhaus - Fri, 2015-07-03 00:15

On the 2nd of July the PostgreSQL Global Development Group released an alpha version of the upcoming PostgreSQL 9.5. The same day, Josh Berkus, one of the PostgreSQL core team members, released a docker image for testing this alpha release. It has never been that easy to get started with PostgreSQL or to test new features.
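
To give an idea of how little is involved, the docker workflow looks roughly like the sketch below. The image name is a placeholder (use the one from Josh Berkus' announcement), and the container name and password are arbitrary; whether the image honours POSTGRES_PASSWORD is an assumption:

# Sketch: pull the image, start a container, then connect with psql.
IMAGE=example/postgres95-alpha        # placeholder -- not a real image name
docker pull "$IMAGE"
docker run -d --name pg95alpha -e POSTGRES_PASSWORD=postgres "$IMAGE"
# Open a psql session inside the container to try out the 9.5 features.
docker exec -it pg95alpha psql -U postgres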