
DBA Blogs

RMAN -- 4b : Recovering from an Incomplete Restore with OMF Files

Hemant K Chitale - Wed, 2015-07-08 07:51
Following up on my previous post (which had the datafiles as non-OMF), here is a case with OMF files.

SQL> select file_name from dba_data_files
2 where tablespace_name = 'HEMANT';

FILE_NAME
--------------------------------------------------------------------------------
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst84r1w_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst850ts_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst85312_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst85njw_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst85qsq_.dbf

SQL>
SQL> !rm /home/oracle/app/oracle/oradata/HEMANTDB/datafile/*hemant*dbf

SQL> shutdown immediate;
ORA-01116: error in opening database file 6
ORA-01110: data file 6: '/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst84r1w_.dbf'
ORA-27041: unable to open file
Linux Error: 2: No such file or directory
Additional information: 3
SQL> shutdown abort;
ORACLE instance shut down.
SQL>

I have removed the datafiles for a tablespace. Note that the datafiles are all OMF.  I then attempt to restore the tablespace.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Wed Jul 8 21:15:21 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysdba
Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 390073016 bytes
Database Buffers 58720256 bytes
Redo Buffers 6008832 bytes
Database mounted.
SQL> select file# from v$datafile
2 where ts# = (select ts# from v$tablespace where name = 'HEMANT')
3 order by 1;

FILE#
----------
6
7
8
9
11

SQL>
SQL> alter database datafile 6 offline;

Database altered.

SQL> alter database datafile 7 offline;

Database altered.

SQL> alter database datafile 8 offline;

Database altered.

SQL> alter database datafile 9 offline;

Database altered.

SQL> alter database datafile 11 offline;

Database altered.

SQL> alter database open;

Database altered.

SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Wed Jul 8 21:22:02 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN>
RMAN> restore tablespace HEMANT;

Starting restore at 08-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=36 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00007 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szss_.dbf
channel ORA_DISK_1: restoring datafile 00009 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szxb_.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58p_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58p_.bkp tag=TAG20150708T211100
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00006 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szv5_.dbf
channel ORA_DISK_1: restoring datafile 00008 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szwh_.dbf
channel ORA_DISK_1: restoring datafile 00011 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8t089_.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58n_.bkp
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
ORA-01092: ORACLE instance terminated. Disconnection forced
ORACLE error from target database:
ORA-03135: connection lost contact
Process ID: 3615
Session ID: 29 Serial number: 21

[oracle@localhost ~]$

Once again, the database has crashed in the midst of the RESTORE. Let's check the datafile names.

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Wed Jul 8 21:25:12 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SQL> set pages60
SQL> select file#, name from v$datafile where file# in (6,7,8,9,11) order by 1;

FILE#
----------
NAME
--------------------------------------------------------------------------------
6
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szv5_.dbf

7
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

8
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szwh_.dbf

9
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf

11
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8t089_.dbf


SQL>
SQL> select file#, name from v$datafile_header where file# in (6,7,8,9,11) order by 1;

FILE#
----------
NAME
--------------------------------------------------------------------------------
6


7
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

8


9
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf

11



SQL>

[To understand why I queried both V$DATAFILE and V$DATAFILE_HEADER, see my previous post "Datafiles not Restored  --  using V$DATAFILE and V$DATAFILE_HEADER".]

So, datafiles 7 and 9 have been restored. We can see that in the RESTORE log as well: the backup set ("backup piece 1") containing datafiles 7 and 9 was the only one whose restore completed. Note the datafile names, though. Datafiles 7 and 9 are now named differently from before: earlier they were "%bst85%" files, now they are "%bst90%" files.

So, if we want to re-run the restore, we can use SET NEWNAME for datafiles 7 and 9 so that Oracle can recognise them, under their new OMF names, as already restored.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Wed Jul 8 21:32:12 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> run
2> {set newname for datafile 7 to '/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf';
3> set newname for datafile 9 to '/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf';
4> restore tablespace HEMANT;}

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 08-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK

skipping datafile 7; already restored to file /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf
skipping datafile 9; already restored to file /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00006 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szv5_.dbf
channel ORA_DISK_1: restoring datafile 00008 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szwh_.dbf
channel ORA_DISK_1: restoring datafile 00011 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8t089_.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58n_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58n_.bkp tag=TAG20150708T211100
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 08-JUL-15

RMAN>

YES! Datafiles 7 and 9 were identified as "already restored".
Let's re-check the datafiles and then RECOVER them.

RMAN> exit


Recovery Manager complete.
[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Wed Jul 8 21:37:29 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>
SQL> select name from v$datafile
2 where ts#=(select ts# from v$tablespace where name = 'HEMANT')
3 minus
4 select name from v$datafile_header
5 where ts#=(select ts# from v$tablespace where name = 'HEMANT')
6 /

no rows selected

SQL>
SQL> select * from v$datafile_header where name is null;

no rows selected

SQL>
SQL> recover datafile 6;
Media recovery complete.
SQL> recover datafile 7;
Media recovery complete.
SQL> recover datafile 8;
Media recovery complete.
SQL> recover datafile 9;
Media recovery complete.
SQL> recover datafile 11;
Media recovery complete.
SQL> alter tablespace HEMANT online;
alter tablespace HEMANT online
*
ERROR at line 1:
ORA-01109: database not open


SQL> alter database open;

Database altered.

SQL> alter tablespace HEMANT online;

Tablespace altered.

SQL>
SQL> select owner, segment_name, bytes/1048576 from dba_segments where tablespace_name = 'HEMANT';

OWNER
------------------------------
SEGMENT_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
HEMANT
LARGE_TABLE
272


SQL> select count(*) from hemant.large_table;

COUNT(*)
----------
2404256

SQL>

Yes, I have verified that all the datafiles have been restored, brought the tablespace online, and queried the data in it.

SQL> set pages60
SQL> select file_id, file_name from dba_data_files where tablespace_name = 'HEMANT';

FILE_ID
----------
FILE_NAME
--------------------------------------------------------------------------------
6
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf

7
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

8
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf

9
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf

11
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf


SQL>

And, yes, the datafile names (now %bst90% and %bst9o4%) are different from what they were earlier (%bst84% and %bst85%).

(Reference :  See Oracle Support Note Doc ID 1621319.1)
.
.
.

Categories: DBA Blogs

Become an #Oracle Certified Expert for Data Guard!

The Oracle Instructor - Wed, 2015-07-08 04:33

It is with great pride that I can announce a new certification being available – Oracle Database 12c: Data Guard Administration.

We wanted this for years and finally got it now, after having put much effort and expertise into the development of the exam. It is presently in beta and offered with a discount. Come and get it!


Tagged: Data Guard, Oracle Certification
Categories: DBA Blogs

Pythian Introduces Cassandra Consulting & Operational Support

Pythian Group - Tue, 2015-07-07 06:00

Over the past ten years, the technology industry has begun to adopt a whole new way of thinking about the database and data storage. This is largely the result of the fast moving, high-volume and non-traditional types of data that are being generated to support both internal business processes and real-time web applications. Using a NoSQL database enables businesses to quickly and cost-effectively process large amounts of data, whether the data is hosted in the enterprise or on the cloud.

Pythian’s new Cassandra services address the needs of customers deploying Apache Cassandra. Pythian offers highly knowledgeable and experienced Cassandra experts who can guide you to success with your Cassandra deployment by filling critical skills and capacity gaps, getting your Cassandra instance up and running quickly, and ensuring that it performs optimally as you move forward.

If you’re thinking of implementing Cassandra, or already have, watch our webinar, Getting Started With Cassandra, which covers key topics for starting out, such as when you should use Cassandra, potential challenges, real world Cassandra applications and benefits, and more.

Learn more about Pythian’s Cassandra Services.

The post Pythian Introduces Cassandra Consulting & Operational Support appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

So, What Kind of Person Are You?

Pythian Group - Mon, 2015-07-06 08:48

It’s my not-so-humble opinion that it takes a special kind of ‘someone’ to work in a successful and innovative collective such as Pythian.  We’re a diverse team of thought-leaders, technology forecasters, technical prodigies and individual contributors.  When we look for people to join our global company we’re looking for people who want to see that their work really matters…that they matter.  We have truly discerning tastes when it comes to who gets to have “Pythian” in their email signature – you have to love data and value what it does for people.

Oh.  And you have to like people (we’re funny like that).

Our global team is brimming with top talent dedicated to building something larger than themselves.  We haven’t gotten this far by playing it safe.  We play it smart.  We’re strategic.  We have a vision to be the most trusted and admired technology services organization in the world….

….And we’re looking for fantastic people to join us.   In order to take your career to the next level at Pythian, you have to be able to:

Lend a hand – There are times when it’s easier to allocate blame but in the end, the problem still exists.  It may not be ‘your fault’ but it can become everyone’s problem.  In situations where there isn’t a lot of time for advanced planning it’s the people who take steps towards a solution that will make the greatest (and often most favorable) impact.

Play to your strengths – Maybe you’re a whiz with numbers or an I.T. genius.  Perhaps your level of organization is outstanding or you have incredible leadership skills. Play to what energizes you.  Cultivate it, infuse your work with it and success will follow.

Lean into the unknown – Opportunity is often found in the things we never knew existed.  Many of the talented people that I’ve come to know at Pythian can dive fearlessly into projects and own them.   If they don’t know how to do something, they learn it and they learn how to do it well.  That’s just the way it’s done.

Embrace diversity – We believe that every employee that works with us is deserving of dignity and respect.

Be approachable – Typically there’s a good mix of personalities in any successful company.  While introverts seem to be at the helm of hands-on I.T. work, extroverts also contribute significantly to getting things done.  Regardless of which way your personality leans, always be approachable.  A positive disposition is often contagious.

Put your best face forward – Remember that the skill and professionalism that you demonstrate every day will inevitably become your business card.  Maya Angelou once said, “People will forget what you said, people will forget what you did, but people will never forget how you made them feel.”

Do you think you can picture yourself here? Discover more about what it’s like to be part of the Pythian team.  You just might be our kind of people!

The post So, What Kind of Person Are You? appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Possible Truncation Attack? logged in #em12c nodemanager.log file

DBASolved - Mon, 2015-07-06 08:38

Recently I’ve come across issues with restarting Oracle Enterprise Manager and seeing messages in the nodemanager.log. The message that I’m referring to is the following, accompanied by a Java stack trace:

<Jul 2, 2015 11:46:11 PM> <WARNING> Uncaught exception in server handlerjavax.net.ssl.SSLException: 
Inbound closed before receiving peer's close_notify: possible truncation attack? 
javax.net.ssl.SSLException: 
Inbound closed before receiving peer's close_notify: possible truncation attack?

What is interesting about this message is the panic that may come over a person on seeing the word “attack”. The first time I saw this message, I was working on a client site and was distressed, worried about an “attack” on EM. After some investigation, it turned out that the message is a bit misleading. So, what was the cause of it?

The “possible truncation attack” arises when the IP address of the host where the OMS runs has changed. Here in my test environment, I recently upgraded my wireless router, which affected my whole network: the router upgrade changed all the addresses on the network. When OEM was initially installed, the host had an address of 192.168.65.67; after the upgrade, the address changed to 192.168.65.7. Not a big deal; but how to fix it?

In the case of my test lab, I needed to change the /etc/hosts file to ensure that the correct IP address was picked up. In the enterprise, your local DNS needs to be updated along with the /etc/hosts file. On startup, OEM will consult DNS and /etc/hosts when resolving the host name to an IP address; the order of preference between the two is controlled by the hosts line in /etc/nsswitch.conf.
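As a sketch, assuming the OMS host is registered under the hypothetical name oemhost.example.com, the /etc/hosts entry simply needs to carry the new address:

# /etc/hosts -- map the OMS host name to its new, post-upgrade address
192.168.65.7    oemhost.example.com    oemhost

After correcting the entry (and DNS, where applicable), restart the OMS and agent so they pick up the new resolution.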

Enjoy!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

Why A Brand New Index Might Benefit From An Immediate Coalesce (One Slip)

Richard Foote - Mon, 2015-07-06 01:58
A recent question on the OTN Forums, Reg: Index – Gathering Statistics vs. Rebuild, got me thinking about a scenario not unlike the one raised in the question, where a newly populated index might immediately benefit from a coalesce. I’ve previously discussed some of the pertinent concepts, such as how index rebuilds can make indexes bigger, not smaller […]
Categories: DBA Blogs

Transport Tablespace using RMAN Backupsets in #Oracle 12c

The Oracle Instructor - Mon, 2015-07-06 01:29

Using backupsets for Transportable Tablespaces reduces the volume of data you need to ship to the destination database. See how that works:

[Screenshot: RMAN TTS on the source database]

The tablespace is made READ ONLY before the new BACKUP FOR TRANSPORT command is done. At this point, you can also convert the platform and the endian format if required. Then on the destination site:

[Screenshot: RMAN TTS on the destination database]

The FOREIGN keyword indicates that this doesn’t use a backup taken at the destination. Practical example:

 

[oracle@uhesse ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jul 6 08:36:30 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select * from v$tablespace;

       TS# NAME                           INC BIG FLA ENC     CON_ID
---------- ------------------------------ --- --- --- --- ----------
         0 SYSTEM                         YES NO  YES              0
         1 SYSAUX                         YES NO  YES              0
         2 UNDOTBS1                       YES NO  YES              0
         3 TEMP                           NO  NO  YES              0
         4 USERS                          YES NO  YES              0
         5 TBS1                           YES NO  YES              0

6 rows selected.

SQL> select table_name,owner from dba_tables where tablespace_name='TBS1';

TABLE_NAME
-------------------- 
OWNER
--------------------
T
ADAM


SQL> alter tablespace tbs1 read only;

Tablespace altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@uhesse ~]$ rman target sys/oracle@prima

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Jul 6 08:37:28 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PRIMA (DBID=2113606181)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name PRIMA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/prima/system01.dbf
2    244      SYSAUX               NO      /u01/app/oracle/oradata/prima/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/prima/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/prima/users01.dbf
5    100      TBS1                 NO      /u01/app/oracle/oradata/prima/tbs1.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/prima/temp01.dbt

RMAN> host 'mkdir /tmp/stage';

host command complete

RMAN> configure device type disk backup type to compressed backupset;

old RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
new RMAN configuration parameters are successfully stored

RMAN> backup for transport format '/tmp/stage/tbs1.bkset'
      datapump format '/tmp/stage/tbs1.dmp'
      tablespace tbs1;

Starting backup at 06-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=40 device type=DISK
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces...
   EXPDP> Starting "SYS"."TRANSPORT_EXP_PRIMA_yvym":
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Master table "SYS"."TRANSPORT_EXP_PRIMA_yvym" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TRANSPORT_EXP_PRIMA_yvym is:
   EXPDP>   /u01/app/oracle/product/12.1.0/dbhome_1/dbs/backup_tts_PRIMA_25997.dmp
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace TBS1:
   EXPDP>   /u01/app/oracle/oradata/prima/tbs1.dbf
   EXPDP> Job "SYS"."TRANSPORT_EXP_PRIMA_yvym" successfully completed at Mon Jul 6 08:39:50 2015 elapsed 0 00:00:26
Export completed

channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/app/oracle/oradata/prima/tbs1.dbf
channel ORA_DISK_1: starting piece 1 at 06-JUL-15
channel ORA_DISK_1: finished piece 1 at 06-JUL-15
piece handle=/tmp/stage/tbs1.bkset tag=TAG20150706T083917 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
input Data Pump dump file=/u01/app/oracle/product/12.1.0/dbhome_1/dbs/backup_tts_PRIMA_25997.dmp
channel ORA_DISK_1: starting piece 1 at 06-JUL-15
channel ORA_DISK_1: finished piece 1 at 06-JUL-15
piece handle=/tmp/stage/tbs1.dmp tag=TAG20150706T083917 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 06-JUL-15

RMAN> alter tablespace tbs1 read write;

Statement processed

RMAN> exit


Recovery Manager complete.
[oracle@uhesse ~]$ ls -rtl /tmp/stage
total 5608
-rw-r-----. 1 oracle oinstall 5578752 Jul  6 08:39 tbs1.bkset
-rw-r-----. 1 oracle oinstall  163840 Jul  6 08:39 tbs1.dmp
[oracle@uhesse ~]$ rman target sys/oracle@sekunda

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Jul 6 08:40:49 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: SEKUNDA (DBID=3356258651)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name SEKUNDA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/sekunda/system01.dbf
2    249      SYSAUX               NO      /u01/app/oracle/oradata/sekunda/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/sekunda/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/sekunda/users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/sekunda/temp01.dbt

RMAN> restore foreign tablespace tbs1
      format '/u01/app/oracle/oradata/sekunda/tbs1.dbf'
      from backupset '/tmp/stage/tbs1.bkset'
      dump file from backupset '/tmp/stage/tbs1.dmp';

Starting restore at 06-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=37 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring all files in foreign tablespace TBS1
channel ORA_DISK_1: reading from backup piece /tmp/stage/tbs1.bkset
channel ORA_DISK_1: restoring foreign file 5 to /u01/app/oracle/oradata/sekunda/tbs1.dbf
channel ORA_DISK_1: foreign piece handle=/tmp/stage/tbs1.bkset
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring Data Pump dump file to /u01/app/oracle/product/12.1.0/dbhome_1/dbs/backup_tts_SEKUNDA_85631.dmp
channel ORA_DISK_1: reading from backup piece /tmp/stage/tbs1.dmp
channel ORA_DISK_1: foreign piece handle=/tmp/stage/tbs1.dmp
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

Performing import of metadata...
   IMPDP> Master table "SYS"."TSPITR_IMP_SEKUNDA_ppol" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_SEKUNDA_ppol":
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   IMPDP> Job "SYS"."TSPITR_IMP_SEKUNDA_ppol" successfully completed at Mon Jul 6 08:42:51 2015 elapsed 0 00:00:20
Import completed

Finished restore at 06-JUL-15

RMAN> report schema;

Report of database schema for database with db_unique_name SEKUNDA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/sekunda/system01.dbf
2    249      SYSAUX               NO      /u01/app/oracle/oradata/sekunda/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/sekunda/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/sekunda/users01.dbf
5    100      TBS1                 NO      /u01/app/oracle/oradata/sekunda/tbs1.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/sekunda/temp01.dbt

RMAN> alter tablespace tbs1 read write;

Statement processed

RMAN> select count(*) from adam.t;

  COUNT(*)
----------
   1000000

Hope you find it useful :-)


Tagged: 12c New Features, PracticalGuide, RMAN
Categories: DBA Blogs

RMAN -- 4 : Recovering from an Incomplete Restore

Hemant K Chitale - Fri, 2015-07-03 23:22
What do you do if a RESTORE fails mid-way? Do you need to rerun the whole restore? If it is a very large database, it could take [many?] hours.

RMAN is "smart" enough to detect datafiles that have been restored and not re-attempt a restore.

UPDATE 08-Jul-15 :  For a database with Oracle Managed Files  (OMF), please see Oracle Support Note Doc ID 1621319.1  {Thus, see my next blog post on Recovering from an Incomplete Restore with OMF Files}

This example below is for non-OMF files.

Here, I begin a database restore.

 
RMAN> restore controlfile from autobackup;

Starting restore at 04-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=19 device type=DISK

recovery area destination: /NEW_FS/oracle/FRA
database name (or database unique name) used for search: HEMANTDB
channel ORA_DISK_1: AUTOBACKUP /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp found in the recovery area
AUTOBACKUP search with format "%F" not attempted because DBID was not set
channel ORA_DISK_1: restoring control file from AUTOBACKUP /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp
channel ORA_DISK_1: control file restore from AUTOBACKUP complete
output file name=/home/oracle/app/oracle/oradata/orcl/control01.ctl
output file name=/home/oracle/app/oracle/flash_recovery_area/orcl/control02.ctl
Finished restore at 04-JUL-15

RMAN>
RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN>
RMAN> restore database;

Starting restore at 04-JUL-15
Starting implicit crosscheck backup at 04-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=19 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK
Crosschecked 8 objects
Finished implicit crosscheck backup at 04-JUL-15

Starting implicit crosscheck copy at 04-JUL-15
using channel ORA_DISK_1
using channel ORA_DISK_2
Finished implicit crosscheck copy at 04-JUL-15

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp

using channel ORA_DISK_1
using channel ORA_DISK_2

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00004 to /home/oracle/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqsccg_.bkp
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqsccg_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:02:34
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00003 to /home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqwt4s_.bkp
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqwt4s_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:35
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00002 to /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
ORA-01092: ORACLE instance terminated. Disconnection forced
ORACLE error from target database:
ORA-03135: connection lost contact
Process ID: 3777
Session ID: 1 Serial number: 9

[oracle@localhost ~]$

After having restored a few datafiles, the restore failed when the connection to the database was lost (the server or the database instance has crashed). Since the controlfile has been restored, I can bring up the database in MOUNT mode and then re-attempt a RESTORE DATABASE.

[oracle@localhost ~]$ rman target sys/oracle@orcl

Recovery Manager: Release 11.2.0.2.0 - Production on Sat Jul 4 12:56:41 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN>
RMAN> restore database;

Starting restore at 04-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK

skipping datafile 3; already restored to file /home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
skipping datafile 4; already restored to file /home/oracle/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00002 to /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:02:36
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00010 to /home/oracle/app/oracle/oradata/orcl/APEX_2614203650434107.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxovh_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxovh_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /home/oracle/app/oracle/oradata/orcl/example01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxjv6_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxjv6_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:04:02
Finished restore at 04-JUL-15

RMAN>

RMAN detects that datafiles 3 (undotbs01.dbf) and 4 (users01.dbf) had already been restored.
 If you look at the previous RESTORE run, you can see that these were restored by Channel ORA_DISK_2. The first channel ORA_DISK_1 had started restoring system01.dbf but hadn't completed restoring the datafile when the restore crashed. That restore of datafile 1 (system01.dbf) had to be redone.

(Another thing to note: Oracle doesn't necessarily restore datafiles in the order of file_id (file#)! There really is no ORDER BY for a RESTORE.)

RMAN> recover database;

Starting recover at 04-JUL-15
using channel ORA_DISK_1
using channel ORA_DISK_2

starting media recovery

archived log for thread 1 with sequence 628 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc
archived log for thread 1 with sequence 629 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc
archived log for thread 1 with sequence 630 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc
archived log for thread 1 with sequence 631 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc
archived log for thread 1 with sequence 632 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc
archived log for thread 1 with sequence 633 is already on disk as file /home/oracle/app/oracle/oradata/orcl/redo03.log
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc thread=1 sequence=628
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc thread=1 sequence=629
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc thread=1 sequence=630
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc thread=1 sequence=631
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc thread=1 sequence=632
archived log file name=/home/oracle/app/oracle/oradata/orcl/redo03.log thread=1 sequence=633
media recovery complete, elapsed time: 00:00:02
Finished recover at 04-JUL-15

RMAN>
RMAN> alter database open resetlogs;

database opened

RMAN>


UPDATE 07-Jul-15 :  Also see my earlier (year 2012) post "Datafiles not Restored -- using V$DATAFILE and V$DATAFILE_HEADER"  which also shows retrying a RESTORE DATABASE after a failure of restoring a datafile.  There, a single file in a BackupSet failed to restore.  Oracle didn't continue and try the other datafiles in that BackupSet.  I could either fix the error and retry the entire BackupSet (RESTORE DATABASE would have identified the right BackupSet containing those files) OR I could, as I did in that scenario, individually restore DataFiles from the BackupSet.

It can be a good idea to have your database backup consist of multiple BackupSets, using either multiple CHANNELs or FILESPERSET during the BACKUP.
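For instance, a backup along these lines (a sketch, not taken from this scenario) spreads the datafiles over several BackupSets, so a failed restore forfeits at most a few files' worth of work:

RMAN> run
2> {allocate channel c1 device type disk;
3> allocate channel c2 device type disk;
4> backup database filesperset 2;}

With FILESPERSET 2, each BackupSet holds at most two datafiles, and the two channels produce separate BackupSets in parallel.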


You could also note, as an aside, that Log Sequence 633 was an online redo log file. RMAN automatically verifies that the online redo log files designated by the controlfile are present and uses them.
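If you want to confirm which sequences are still sitting in the online redo logs rather than in the archived logs, a quick query like this sketch works while the database is mounted:

SQL> select group#, sequence#, archived, status from v$log;

Sequence 633 above would presumably have shown up with ARCHIVED = NO and STATUS = CURRENT before the recovery.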

.
.
.

Categories: DBA Blogs

Log Buffer #430: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-07-03 12:56

This Log Buffer Edition cuts through the crowd and picks some of the outstanding blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Continuous Delivery (CD) is a software engineering approach in which teams keep producing valuable software in short cycles and ensure that the software can be reliably released at any time.
  • Query existing HBase tables with SQL using Apache Phoenix.
  • Even though WebLogic with Active GridLink is Oracle’s suggested approach to deploy Java applications that use Oracle Real Application Clusters (RAC), there might be scenarios in which you can’t make that choice (e.g. certification issues, licensing, library dependency, etc.).
  • OSB & MTOM: When to use Include Binary Data by Reference or Value.
  • Ever used SoapUI to test services on multiple environments? Then you probably ran into the chore of constantly changing the endpoints to the hosts of the particular environment: development, test, acceptance, production (although I expect you wouldn’t use SoapUI against a prod-env). This is not that hard if you have only one service endpoint in the project.

SQL Server:

  • Using DAX to create SSRS reports: The Basics.
  • Getting to know your customers better – cohort analysis and RFM segmentation in R.
  • Using the T-SQL PERCENTILE Analytic Functions in SQL Server 2000, 2005 and 2008.
  • Schema-Based Access Control for SQL Server Databases.
  • How to Fix a Corrupt MSDB SQL Server Database.

MySQL:

  • MySQL Enterprise Audit – parsing audit information from log files, inserting into a MySQL table.
  • Proposal to deprecate MySQL INTEGER display width and ZEROFILL.
  • Using Cgroups to Limit MySQL and MongoDB memory usage.
  • Slave election is a popular HA architecture; the first MySQL/MariaDB toolkit to manage switchover and failover in a correct way was introduced by Yoshinori Matsunobu in MHA.
  • Setting up environments, starting processes, and monitoring these processes on multiple machines can be time consuming and error prone.

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

The post Log Buffer #430: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Query existing HBase tables with SQL using Apache Phoenix

Kubilay Çilkara - Thu, 2015-07-02 13:25
Spending a bit more time with Apache Phoenix and reading my previous post again, I realised that you can use it to query existing HBase tables. That is, NOT tables created using Apache Phoenix, but tables native to HBase - the columnar NoSQL database in Hadoop.

I think this is cool as it gives you the ability to use SQL on an HBase table.

To test this, let's say you login to HBase and you create an HBase table like this:

> create 'table2', {NAME=>'cf1', VERSIONS => 5}

table2 is a simple table in HBase with one column family, cf1. Now let's put some data into this HBase table.

> put 'table2', 'row1', 'cf1:column1', 'Hello SQL!'

Then maybe add another row:

> put 'table2', 'row4', 'cf1:column1', 'London'

Now, in Phoenix, all you have to do is create a database view for this table and query it with SQL. The view will be read-only. How cool is that? You don't even need to physically create the table, move the data to Phoenix, or convert it; a database view is sufficient, and via Phoenix you can query the HBase table with SQL.

In Phoenix you create the view for table2 using the same name. As you can see below, the DDL used to create the view is case sensitive: if you created your HBase table name in lower case, you will have to put the name between double quotes.

So login to Phoenix and create the "table2" view like this:

> create view "table2" ( pk VARCHAR PRIMARY KEY, "cf1"."column1" VARCHAR );

And here is how you then query it in Phoenix:


[Screenshot: SQL Query on Phoenix]
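The query in the screenshot amounted to something like this sketch (assuming the "table2" view created above):

> select * from "table2";

which would return the two rows inserted earlier: row1 with 'Hello SQL!' and row4 with 'London'.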
Tremendous potential here: imagine all those existing HBase tables which you can now query with SQL. What's more, you can point your Business Intelligence tools, reporting tools, and other SQL-based tools at HBase and query it as if it were just another SQL database.

A solution worth investigating further? It definitely got me blogging in the evenings again.

To find out more about Apache Phoenix visit their project page https://phoenix.apache.org/



Categories: DBA Blogs

Table Recovery in #Oracle 12c

The Oracle Instructor - Thu, 2015-07-02 03:18

You can now restore single tables from backup! It is a simple command, although it triggers a lot of work by RMAN behind the scenes. See it as an enhancement over a ‘normal’ Point In Time Recovery:

[Diagram: Point In Time Recovery]

After a full restore from a sufficiently old backup, archived logs are applied forward in time until just before the logical error. Then a new incarnation comes up (with RESETLOGS) and the whole database is as it was at that point in time.
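A conventional whole-database PITR would look roughly like this sketch (the timestamp and syntax are assumed here, not taken from a real session):

RMAN> run
2> {set until time "to_date('2015-07-02 09:33:00','yyyy-mm-dd hh24:mi:ss')";
3> restore database;
4> recover database;}
RMAN> alter database open resetlogs;

But what if it is only a dropped table that needs to be recovered? Enter the 12c New Feature: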

[Diagram: Table Recovery]

Above is what RMAN does upon Table Recovery. The restore is done to the auxiliary destination while the database keeps running as it is. The new incarnation exists only temporarily, just long enough to export the dropped table from it; afterwards, it is removed. RMAN then imports the table back into the still-running database – unless you say otherwise with the NOTABLEIMPORT clause. So the system goes through a huge effort in spite of the simple RMAN command:

 

SQL> select count(*) from sales;

  COUNT(*)
----------
  10000000

SQL> select sysdate from dual;

SYSDATE
-------------------
2015-07-02 09:33:37

SQL> drop table sales purge;

Table dropped.

Oops – that was a mistake! And I can’t simply say flashback table sales to before drop, because of the purge.
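Had the table been dropped without the PURGE clause, a simple flashback (sketched here; it is not part of the original session) would have sufficed:

SQL> flashback table adam.sales to before drop;

With PURGE, though, the recycle bin cannot help. RMAN to the rescue!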

[oracle@uhesse ~]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Jul 2 09:34:35 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PRIMA (DBID=2113606181)

RMAN> list backup of database;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
1       Full    142.13M    DISK        00:01:45     2015-07-01 17:50:32
        BP Key: 1   Status: AVAILABLE  Compressed: YES  Tag: TAG20150701T174847
        Piece Name: /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp
  List of Datafiles in backup set 1
  File LV Type Ckp SCN    Ckp Time            Name
  ---- -- ---- ---------- ------------------- ----
  1       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/system01.dbf
  2       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/sysaux01.dbf
  3       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/undotbs01.dbf
  4       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/users01.dbf

RMAN> host 'mkdir /tmp/auxi';

host command complete

RMAN> recover table adam.sales until time '2015-07-02 09:33:00' auxiliary destination '/tmp/auxi';

Starting recover at 2015-07-02 09:35:54
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1

Creating automatic instance, with SID='tDtf'

initialization parameters used for automatic instance:
db_name=PRIMA
db_unique_name=tDtf_pitr_PRIMA
compatible=12.1.0.2
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=1512M
processes=200
db_create_file_dest=/tmp/auxi
log_archive_dest_1='location=/tmp/auxi'
#No auxiliary parameter file used


starting up automatic instance PRIMA

Oracle instance started

Total System Global Area    1593835520 bytes

Fixed Size                     2924880 bytes
Variable Size                402656944 bytes
Database Buffers            1174405120 bytes
Redo Buffers                  13848576 bytes
Automatic instance created

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# restore the controlfile
restore clone controlfile;

# mount the controlfile
sql clone 'alter database mount clone database';

# archive current online log
sql 'alter system archive log current';
}
executing Memory Script

executing command: SET until clause

Starting restore at 2015-07-02 09:36:21
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=3 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_ncsnf_TAG20150701T174847_bs832pht_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fra/PRIMA/backupset/2015_07_01/o1_mf_ncsnf_TAG20150701T174847_bs832pht_.bkp tag=TAG20150701T174847
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl
Finished restore at 2015-07-02 09:36:23

sql statement: alter database mount clone database

sql statement: alter system archive log current

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  2 to new;
set newname for clone tempfile  1 to new;
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 3, 2;

switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /tmp/auxi/PRIMA/datafile/o1_mf_temp_%u_.tmp in control file

Starting restore at 2015-07-02 09:36:32
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /tmp/auxi/PRIMA/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /tmp/auxi/PRIMA/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00002 to /tmp/auxi/PRIMA/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp tag=TAG20150701T174847
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:36
Finished restore at 2015-07-02 09:37:08

datafile 1 switched to datafile copy
input datafile copy RECID=4 STAMP=883993028 file name=/tmp/auxi/PRIMA/datafile/o1_mf_system_bs9tj1fk_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=5 STAMP=883993028 file name=/tmp/auxi/PRIMA/datafile/o1_mf_undotbs1_bs9tj1hw_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=6 STAMP=883993028 file name=/tmp/auxi/PRIMA/datafile/o1_mf_sysaux_bs9tj1jd_.dbf

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
# recover and open database read only
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "SYSAUX";
sql clone 'alter database open read only';
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  1 online

sql statement: alter database datafile  3 online

sql statement: alter database datafile  2 online

Starting recover at 2015-07-02 09:37:09
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 13 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc
archived log for thread 1 with sequence 14 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc
archived log for thread 1 with sequence 15 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc
archived log for thread 1 with sequence 16 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc
archived log for thread 1 with sequence 17 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc
archived log for thread 1 with sequence 18 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc
archived log for thread 1 with sequence 19 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc
archived log for thread 1 with sequence 20 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc
archived log for thread 1 with sequence 21 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc
archived log for thread 1 with sequence 22 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc
archived log for thread 1 with sequence 23 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc
archived log for thread 1 with sequence 24 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc
archived log for thread 1 with sequence 25 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc
archived log for thread 1 with sequence 26 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc
archived log for thread 1 with sequence 27 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc
archived log for thread 1 with sequence 28 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc
archived log for thread 1 with sequence 29 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc
archived log for thread 1 with sequence 30 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc
archived log for thread 1 with sequence 31 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc
archived log for thread 1 with sequence 32 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc
archived log for thread 1 with sequence 33 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc
archived log for thread 1 with sequence 34 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc
archived log for thread 1 with sequence 35 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc
archived log for thread 1 with sequence 36 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc
archived log for thread 1 with sequence 37 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc
archived log for thread 1 with sequence 38 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc
archived log for thread 1 with sequence 39 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc thread=1 sequence=13
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc thread=1 sequence=14
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc thread=1 sequence=15
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc thread=1 sequence=16
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc thread=1 sequence=17
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc thread=1 sequence=18
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc thread=1 sequence=19
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc thread=1 sequence=20
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc thread=1 sequence=21
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc thread=1 sequence=22
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc thread=1 sequence=23
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc thread=1 sequence=24
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc thread=1 sequence=25
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc thread=1 sequence=26
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc thread=1 sequence=27
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc thread=1 sequence=28
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc thread=1 sequence=29
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc thread=1 sequence=30
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc thread=1 sequence=31
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc thread=1 sequence=32
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc thread=1 sequence=33
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc thread=1 sequence=34
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc thread=1 sequence=35
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc thread=1 sequence=36
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc thread=1 sequence=37
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc thread=1 sequence=38
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc thread=1 sequence=39
media recovery complete, elapsed time: 00:01:00
Finished recover at 2015-07-02 09:38:11

sql statement: alter database open read only

contents of Memory Script:
{
   sql clone "create spfile from memory";
   shutdown clone immediate;
   startup clone nomount;
   sql clone "alter system set  control_files =
  ''/tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl'' comment=
 ''RMAN set'' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
# mount database
sql clone 'alter database mount clone database';
}
executing Memory Script

sql statement: create spfile from memory

database closed
database dismounted
Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    1593835520 bytes

Fixed Size                     2924880 bytes
Variable Size                419434160 bytes
Database Buffers            1157627904 bytes
Redo Buffers                  13848576 bytes

sql statement: alter system set  control_files =   ''/tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl'' comment= ''RMAN set'' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    1593835520 bytes

Fixed Size                     2924880 bytes
Variable Size                419434160 bytes
Database Buffers            1157627904 bytes
Redo Buffers                  13848576 bytes

sql statement: alter database mount clone database

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# set destinations for recovery set and auxiliary set datafiles
set newname for datafile  4 to new;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  4;

switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

Starting restore at 2015-07-02 09:39:11
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=12 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00004 to /tmp/auxi/TDTF_PITR_PRIMA/datafile/o1_mf_users_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp tag=TAG20150701T174847
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:35
Finished restore at 2015-07-02 09:39:47

datafile 4 switched to datafile copy
input datafile copy RECID=8 STAMP=883993187 file name=/tmp/auxi/TDTF_PITR_PRIMA/datafile/o1_mf_users_bs9to0k1_.dbf

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# online the datafiles restored or switched
sql clone "alter database datafile  4 online";
# recover and open resetlogs
recover clone database tablespace  "USERS", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  4 online

Starting recover at 2015-07-02 09:39:47
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 13 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc
archived log for thread 1 with sequence 14 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc
archived log for thread 1 with sequence 15 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc
archived log for thread 1 with sequence 16 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc
archived log for thread 1 with sequence 17 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc
archived log for thread 1 with sequence 18 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc
archived log for thread 1 with sequence 19 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc
archived log for thread 1 with sequence 20 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc
archived log for thread 1 with sequence 21 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc
archived log for thread 1 with sequence 22 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc
archived log for thread 1 with sequence 23 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc
archived log for thread 1 with sequence 24 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc
archived log for thread 1 with sequence 25 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc
archived log for thread 1 with sequence 26 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc
archived log for thread 1 with sequence 27 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc
archived log for thread 1 with sequence 28 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc
archived log for thread 1 with sequence 29 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc
archived log for thread 1 with sequence 30 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc
archived log for thread 1 with sequence 31 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc
archived log for thread 1 with sequence 32 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc
archived log for thread 1 with sequence 33 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc
archived log for thread 1 with sequence 34 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc
archived log for thread 1 with sequence 35 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc
archived log for thread 1 with sequence 36 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc
archived log for thread 1 with sequence 37 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc
archived log for thread 1 with sequence 38 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc
archived log for thread 1 with sequence 39 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc thread=1 sequence=13
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc thread=1 sequence=14
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc thread=1 sequence=15
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc thread=1 sequence=16
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc thread=1 sequence=17
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc thread=1 sequence=18
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc thread=1 sequence=19
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc thread=1 sequence=20
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc thread=1 sequence=21
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc thread=1 sequence=22
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc thread=1 sequence=23
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc thread=1 sequence=24
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc thread=1 sequence=25
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc thread=1 sequence=26
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc thread=1 sequence=27
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc thread=1 sequence=28
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc thread=1 sequence=29
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc thread=1 sequence=30
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc thread=1 sequence=31
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc thread=1 sequence=32
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc thread=1 sequence=33
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc thread=1 sequence=34
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc thread=1 sequence=35
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc thread=1 sequence=36
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc thread=1 sequence=37
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc thread=1 sequence=38
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc thread=1 sequence=39
media recovery complete, elapsed time: 00:01:15
Finished recover at 2015-07-02 09:41:03

database opened

contents of Memory Script:
{
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/tmp/auxi''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/tmp/auxi''";
}
executing Memory Script

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/tmp/auxi''

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/tmp/auxi''

Performing export of tables...
   EXPDP> Starting "SYS"."TSPITR_EXP_tDtf_lwFD":
   EXPDP> Estimate in progress using BLOCKS method...
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   EXPDP> Total estimation using BLOCKS method: 600 MB
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   EXPDP> . . exported "ADAM"."SALES"                              510.9 MB 10000000 rows
   EXPDP> Master table "SYS"."TSPITR_EXP_tDtf_lwFD" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_tDtf_lwFD is:
   EXPDP>   /tmp/auxi/tspitr_tDtf_59906.dmp
   EXPDP> Job "SYS"."TSPITR_EXP_tDtf_lwFD" successfully completed at Thu Jul 2 09:42:53 2015 elapsed 0 00:01:06
Export completed


contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables...
   IMPDP> Master table "SYS"."TSPITR_IMP_tDtf_uink" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_tDtf_uink":
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   IMPDP> . . imported "ADAM"."SALES"                              510.9 MB 10000000 rows
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   IMPDP> Job "SYS"."TSPITR_IMP_tDtf_uink" successfully completed at Thu Jul 2 09:54:13 2015 elapsed 0 00:11:12
Import completed


Removing automatic instance
Automatic instance removed
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_temp_bs9tm7pz_.tmp deleted
auxiliary instance file /tmp/auxi/TDTF_PITR_PRIMA/onlinelog/o1_mf_2_bs9trods_.log deleted
auxiliary instance file /tmp/auxi/TDTF_PITR_PRIMA/onlinelog/o1_mf_1_bs9trjw6_.log deleted
auxiliary instance file /tmp/auxi/TDTF_PITR_PRIMA/datafile/o1_mf_users_bs9to0k1_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_sysaux_bs9tj1jd_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_undotbs1_bs9tj1hw_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_system_bs9tj1fk_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl deleted
auxiliary instance file tspitr_tDtf_59906.dmp deleted
Finished recover at 2015-07-02 09:54:16

See how much work was done by RMAN here? But now, life is good again:

SQL> select count(*) from adam.sales;

  COUNT(*)
----------
  10000000

You say that you could have done that yourself even before 12c? Yes, you’re right: It’s not magic, it’s just more comfortable now ;-)
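For the record, everything above is driven by a single command. The exact statement is not shown in this excerpt, but with the table, point in time and auxiliary destination taken from the log, the 12c syntax would look roughly like this (a sketch only; it assumes NLS_DATE_FORMAT matches the date string, and that the original table has been dropped or a REMAP TABLE clause is added):

RMAN> recover table ADAM.SALES until time "2015-07-02 09:33:00" auxiliary destination '/tmp/auxi';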


Tagged: Backup & Recovery, PracticalGuide, RMAN
Categories: DBA Blogs

Apache Phoenix, SQL is getting closer to Big Data

Kubilay Çilkara - Tue, 2015-06-30 15:50


Here is a post about another project in the Big Data world which, like Apache Hive from my previous post, enables you to do SQL on Big Data. It is called Apache Phoenix.

Phoenix is a bit different, and a bit closer to my heart too: as I read the documentation on Apache Phoenix, the words 'algebra' and 'relational algebra' came up a few times, and that can mean only one thing, SQL! The use of the word algebra in the docs gave me a lot of confidence. SQL has closure and is based on a database systems model that has its roots in logic and maths, especially a subset of algebra, set theory.

Apache Phoenix was developed at Salesforce and is now one of the popular projects at Apache. It is a SQL skin on top of HBase, the columnar (NoSQL) database of the Hadoop ecosystem, which is capable of storing very large tables and querying them via 'scans'. HBase is part of the Hadoop ecosystem and usually uses HDFS as its file system. On the client side, Apache Phoenix provides a JDBC driver.

In the race to bring the easiest-to-use tools to Big Data, I think Apache Phoenix is very close. It is the SQL we have known and used since the 1970s. The Apache Phoenix team seems committed to introducing all of the missing parts of SQL, including transaction processing with different isolation levels, making Phoenix a fully operational relational database layer on HBase. Have a look at their roadmap. The amount of current and planned future SQL compatibility is remarkable, and it makes me take them really seriously.
  • Transactions
  • Cost-based Query Optimization! (Wow)
  • Joins
  • OLAP
  • Subqueries
  • Striving for full SQL-92 compliance
In addition to all this, it is also possible to turn an existing HBase table into an Apache Phoenix table using CREATE TABLE or even CREATE VIEW, the DDL statements that we know. How handy is that? Suddenly you can SQL-enable your existing HBase database!
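As a sketch of what that looks like – the table and column family names below are hypothetical – mapping an existing HBase table into Phoenix with CREATE VIEW could be done like this:

CREATE VIEW "my_hbase_table" (
  pk VARCHAR PRIMARY KEY,
  "cf1"."col1" VARCHAR
);

After that, a SELECT against the view scans the underlying HBase table directly.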
How to install and use Phoenix

The SQL skin can be installed into an existing Hadoop HBase installation very quickly. All you need to do is download and extract the tarball. You can set up a standalone Hadoop environment (have a look at my previous blog post for that), then install HBase and Apache Phoenix.
Once the Apache Phoenix software is installed, you can start it and query it with SQL like this.

From within the bin/ directory of the Phoenix install directory, run:

$ ./sqlline.py localhost

That will bring you to the Phoenix prompt:


0: jdbc:phoenix:localhost> select * from mytable;
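From there it is ordinary SQL. A minimal sketch (assuming a table named mytable does not exist yet; note that Phoenix uses UPSERT rather than INSERT for writes):

0: jdbc:phoenix:localhost> CREATE TABLE mytable (id BIGINT PRIMARY KEY, name VARCHAR);
0: jdbc:phoenix:localhost> UPSERT INTO mytable VALUES (1, 'hello phoenix');
0: jdbc:phoenix:localhost> SELECT * FROM mytable;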

Categories: DBA Blogs

Prepare for the Leap Second

Pythian Group - Mon, 2015-06-29 10:37

Catch up on how to handle the Leap Second and whether you’re ready for it with our previous updates on the impacts it will have on Cassandra and Linux.

 

Background Information

A leap second will be inserted at the end of June 30, 2015 at 23:59:60 UTC. 

There is a small difference between atomic clock time and astronomical time (which is based on the rotation of the Earth), because the Earth's rotation is slowing down.

To keep the two aligned, one second will be added to the atomic clock – a leap second – so that both clocks stay synchronized. This will happen at June 30th – July 1st midnight UTC (not in local time; UTC is the same as the GMT time zone). After 23 hours 59 minutes 59 seconds, the time will become 23 hours 59 minutes 60 seconds.

Since this system of correction was implemented in 1972, 26 such leap seconds have been inserted. The most recent one happened on June 30, 2012 at 23:59:60 UTC.

Unlike daylight saving time, which shifts the time zone offset and does not alter the underlying UTC clock on which servers work, a leap second is an actual change in the UTC time value. Usually, UTC time is continuous and predictable; the leap second breaks this continuity and therefore has to be addressed.

 

What You Need to Know – Summary

The June 2015 leap second event is the addition of one second to the atomic clock on June 30, 2015. Pythian has researched the implications that the upcoming leap second insertion may have and presents the relevant information to its clients and the wider community.

At the operating system level:

  • Windows and AIX servers are not affected by this issue.
  • Linux servers using NTP (network time protocol) may be affected, potentially causing error messages, server hangs or 100% CPU utilization. A series of patches and workarounds is available, depending upon the needs of the components running on the Linux server (a quick check for an armed leap second is sketched right after this list).
  • HP-UX servers have NTP patches released in Q2 2015.
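On a Linux server running ntpd, one quick way to see whether a leap second insertion is armed is to query the daemon's leap indicator – a sketch only, as output details vary by distribution and NTP version; a value of leap=01 means an insertion is pending at the end of the current day:

$ ntpq -crv | grep -o 'leap=[0-9]*'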

For databases and other software components:

  • Java programs are at risk of generating endless error loops, spiking CPU utilization. Patches are available.
  • Databases generally obtain time-stamps from the server OS, so those running on Linux have potential issues. For most, there are no additional corrections necessary.
  • Oracle databases have minimal additional risk. Oracle clustered environments and Java-based administration tools should be reviewed and corrective actions taken.
  • Microsoft SQL Server databases have no risk but may expose minor application issues on data granularity and error handling.
  • Open source databases should be reviewed for Java risks. Updated kernels are available.
  • Cisco UCS environments should be reviewed. Patches are available.

Symptoms may persist for up to a day before and after the leap second event itself, as server NTP service updates are provided.

For all environments, a complete assessment and planning for your systems should be performed. The Pythian team would be pleased to help you perform this assessment and complete the planning necessary to ensure your systems can handle the leap second event in 2015. Get started by reviewing the full Leap Second Report.

 

The post Prepare for the Leap Second appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Multisection Backup for Image Copies

The Oracle Instructor - Mon, 2015-06-29 10:30

A nice Oracle Database 12c New Feature enhances the multisection backup introduced in 11g: you can now use it for image copies too!

Multisection Backup with Image Copy

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name PRIMA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/prima/system01.dbf
2    235      SYSAUX               NO      /u01/app/oracle/oradata/prima/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/prima/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/prima/users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/prima/temp01.dbt

RMAN> configure device type disk parallelism 2;

new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored

RMAN> backup section size 301m as copy datafile 4;

Starting backup at 29-JUN-15
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/u01/app/oracle/oradata/prima/users01.dbf
backing up blocks 1 through 38528
channel ORA_DISK_2: starting datafile copy
input datafile file number=00004 name=/u01/app/oracle/oradata/prima/users01.dbf
backing up blocks 38529 through 77056
output file name=/u02/fra/PRIMA/datafile/o1_mf_users_bs2v934z_.dbf tag=TAG20150629T180658
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
output file name=/u02/fra/PRIMA/datafile/o1_mf_users_bs2v934z_.dbf tag=TAG20150629T180658
channel ORA_DISK_2: datafile copy complete, elapsed time: 00:00:15
Finished backup at 29-JUN-15

RMAN> host 'ls -rtl /u02/fra/PRIMA/datafile/';

total 616468
-rw-r-----. 1 oracle oinstall 631250944 Jun 29 18:07 o1_mf_users_bs2v934z_.dbf
host command complete

Backup sets are used by default now, notably also by the DUPLICATE DATABASE command – which in the end produces image copies, of course.
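The enhancement also generalizes beyond a single datafile. As a sketch (pick a section size that suits your datafile sizes), a multisection image copy of the entire database would look like this, mirroring the command used above:

RMAN> backup section size 301m as copy database;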


Tagged: 12c New Features, Backup & Recovery, PracticalGuide, RMAN
Categories: DBA Blogs

Quiz Time. Why Do Deletes Cause An Index To Grow ? (Solution)

Richard Foote - Mon, 2015-06-29 00:27
OK, time to reveal how a couple of simple deletes can cause an index to double in size. If we go back and look at the tree dump before the delete operation: ----- begin tree dump branch: 0x180050b 25167115 (0: nrow: 19, level: 1) leaf: 0x180050c 25167116 (-1: row:540.540 avs:4) leaf: 0x180050d 25167117 (0: row:533.533 […]
Categories: DBA Blogs

Log Buffer #429: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-06-26 06:47

This Log Buffer Edition gathers a wide sample of blogs and then purifies the best ones from Oracle, SQL Server and MySQL.

Oracle:

  • If you take a look at the “alter user” command in the old 9i documentation, you’ll see this: DEFAULT ROLE Clause.
  • There’s been an interesting recent discussion on the OTN Database forum regarding “Index blank blocks after a large update that was rolled back.”
  • 12c Parallel Execution New Features: 1 SLAVE distribution
  • Index Tree Dumps in Oracle 12c Database (New Age)
  • Is it possible to cause tables to be stale with only tiny amounts of change?

SQL Server:

  • Making Data Analytics Simpler: SQL Server and R
  • Challenges with integrating MySQL data in an ETL regime and the amazing FMTONLY trick!
  • Azure Stream Analytics aims to extract knowledge structures from continuous ordered streams of data by real-time analysis.
  • Grant User Access to All SQL Server Databases
  • SQL SERVER – How Do We Find Deadlocks?

MySQL:

  • Efficient Use of Indexes in MySQL
  • SHOW ENGINE INNODB MUTEX is back!
  • Business-critical MySQL with DR in vCloud Air
  • Become a MySQL DBA blog series – Common operations – Schema Changes.
  • Building a Better CREATE USER Command

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

The post Log Buffer #429: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Hive (HiveQL) SQL for Hadoop Big Data

Kubilay Çilkara - Thu, 2015-06-25 13:30


In this post I will share my experience with an Apache Hadoop component called Hive, which enables you to do SQL on an Apache Hadoop Big Data cluster.

Being a great fan of SQL and relational databases, this was my opportunity to set up a mechanism where I could transfer some (a lot of) data from a relational database into Hadoop and query it with SQL. Not a very difficult thing to do these days; it is actually very easy with Apache Hive!

All you need is access to a Hadoop cluster which has the Hive module installed. You can provision a Hadoop cluster yourself by downloading and installing it in pseudo-distributed mode on your own PC, or you can run one in the cloud with Amazon AWS EMR in a pay-as-you-go fashion.

There are many ways of doing this; just Google it and you will be surprised how easy it is – easier than it sounds. Below are the links for installing it on your own PC (Linux). Just download and install Apache Hadoop and Hive from Apache Hadoop Downloads.

You will need to download and install 3 things from the above link.

  • Hadoop (HDFS and Big Data Framework, the cluster)
  • Hive (data warehouse module)
  • Sqoop (data importer)
You will also need to put the JDBC connector of the database (Oracle, MySQL...) you want to extract data from into the */lib folder of your Sqoop installation. For example, the MySQL JDBC connector can be downloaded from here. Don't expect loads of tinkering when installing Apache Hadoop, Hive or Sqoop: just unzipping binary extracts and a few line changes in some config files, that's all. It is not a big deal, and it is free. There are tons of tutorials on the internet for this; here is one I used from another blogger, bogotobogo.


What is Hive?

Hive is Big Data SQL, the data warehouse in Hadoop. You can create tables, indexes, partitioned tables, external tables and views, just like in a relational database data warehouse. You can run SQL to do joins and to query the Hive tables in parallel using the MapReduce framework. It is actually quite fun to see your SQL queries translated to MapReduce jobs and run in parallel, like the parallel SQL queries we run on Oracle EE data warehouses and other databases. :0) The syntax looks very much like MySQL's SQL syntax.
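As a sketch of what such DDL looks like – the table name, columns and HDFS location below are hypothetical – a partitioned external Hive table could be declared like this:

CREATE EXTERNAL TABLE sales_ext (
  id BIGINT,
  amount DOUBLE
)
PARTITIONED BY (sale_date STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/hive/warehouse/sales_ext';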

Hive is NOT an OLTP transactional database: it does not have INSERT, UPDATE and DELETE transactions like OLTP databases, and it does not conform to ANSI SQL or the ACID properties of transactions.


Direct insert into Hive with Apache Sqoop:
After you have installed Hadoop, have Hive set up and are able to log in to it, you can use Sqoop – the data importer of Hadoop – as in the following command to directly import a table from MySQL via JDBC into Hive using MapReduce.
$ sqoop import --connect jdbc:mysql://mydatabasename --username kubilay -P --table mytablename --hive-import --hive-drop-import-delims --hive-database dbadb --num-mappers 16 --split-by id
Sqoop import options explained:
  • -P will prompt for the password
  • --hive-import makes Sqoop import the data straight into a Hive table, which it creates for you
  • --hive-drop-import-delims drops \n, \r and \01 from string fields when importing to Hive
  • --hive-database tells it which Hive database to import into; otherwise the data goes to the default database
  • --num-mappers is the number of parallel map tasks to run, like parallel processes/threads in SQL
  • --split-by is the column of the table used to split the work units, like a partitioning key in database partitioning
The above command will import whichever MySQL table you give in place of mytablename, from the MySQL database you specify, into Hive using MapReduce.

Once you have imported the table, you can log in to Hive and run SQL against it like in any relational database. On a properly configured system you can log in to Hive just by calling hive from the command line like this:

$ hive
hive> 
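Once at the prompt, the usual statements work. A minimal sketch, using the mytablename table imported above – note that the count is executed as a MapReduce job:

hive> show tables;
hive> select count(*) from mytablename;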


More Commands to list jobs:

A couple of other commands I found useful while experimenting with this:

List running Hadoop jobs

hadoop job -list

Kill running Hadoop jobs

hadoop job -kill job_1234567891011_1234

List particular table directories in HDFS

hadoop fs -ls mytablename



Categories: DBA Blogs

Quiz Time. Why Do Deletes Cause An Index To Grow ? (Up The Hill Backwards)

Richard Foote - Thu, 2015-06-25 01:02
OK, time for a little quiz. One of the things I’ve seen at a number of sites is the almost fanatical drive to make indexes as small as possible because indexes that are larger than necessary both waste storage and hurt performance. Or so the theory goes …   :) In many cases, this drives DBAs to […]
Categories: DBA Blogs

Empty Leaf Blocks After Rollback Part II (Editions of You)

Richard Foote - Wed, 2015-06-24 01:35
In my last post, I discussed how both 1/2 empty and totally empty leaf blocks can be generated by rolling back a bulk update operation. An important point I made within the comments of the previous post is that almost the exact scenario would have taken place had the transaction committed rather than rolled back. A […]
Categories: DBA Blogs