
DBA Blogs

Read Oracle Linux 7 Beta 1

Surachart Opun - 7 hours 24 min ago
This post about Oracle Linux 7 (Beta 1) might be a little late, as I just came back from long holidays in Thailand, but there is plenty that is interesting to learn in OL7. Users can download it using an OTN account.
Download.
Release Note.

After installing it, I tested a few things.
[root@ol7beta ~]# cat /etc/oracle-release
Oracle Linux Everything release 7.0 Beta
[root@ol7beta ~]# uname -r
3.8.13-31.el7uek.x86_64
Users can choose to start with RHCK or UEK3.
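As a sketch of how that kernel choice can be made the default after installation (assuming the stock GRUB 2 setup on OL7; the entry indexes and titles vary per install):

```shell
# List the boot entries GRUB 2 knows about (the UEK and RHCK kernels)
awk -F\' '/^menuentry/ {print $2}' /etc/grub2.cfg

# Make entry 0 the default, then confirm the saved setting
grub2-set-default 0
grub2-editenv list
```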
[root@ol7beta ~]#
Oracle Linux 7 provides the temporary file system (tmpfs), which is configured in volatile memory and whose contents do not persist after a system reboot.
[root@ol7beta ~]# df
Filesystem          1K-blocks   Used Available Use% Mounted on
/dev/mapper/ol-root  49747968 962512  48785456   2% /
devtmpfs               886508      0    886508   0% /dev
tmpfs                  893876      0    893876   0% /dev/shm
tmpfs                  893876   2212    891664   1% /run
tmpfs                  893876      0    893876   0% /sys/fs/cgroup
/dev/sda1              487652  91380    366576  20% /boot
[root@ol7beta ~]# systemctl status  tmp.mount
tmp.mount - Temporary Directory
   Loaded: loaded (/usr/lib/systemd/system/tmp.mount; disabled)
   Active: inactive (dead)
    Where: /tmp
     What: tmpfs
     Docs: man:hier(7)
           http://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
[root@ol7beta ~]# systemctl start  tmp.mount
[root@ol7beta ~]# systemctl status  tmp.mount
tmp.mount - Temporary Directory
   Loaded: loaded (/usr/lib/systemd/system/tmp.mount; disabled)
   Active: active (mounted) since Wed 2014-04-16 05:33:32 ICT; 1s ago
    Where: /tmp
     What: tmpfs
     Docs: man:hier(7)
           http://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
  Process: 16209 ExecMount=/bin/mount tmpfs /tmp -t tmpfs -o mode=1777,strictatime (code=exited, status=0/SUCCESS)
Apr 16 05:33:32 ol7beta systemd[1]: Mounting Temporary Directory...
Apr 16 05:33:32 ol7beta systemd[1]: tmp.mount: Directory /tmp to mount over...y.
Apr 16 05:33:32 ol7beta systemd[1]: Mounted Temporary Directory.
Hint: Some lines were ellipsized, use -l to show in full.
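Note that starting tmp.mount by hand lasts only until reboot; to have systemd mount /tmp as tmpfs at every boot you would enable the unit (a sketch; assumes the unit is not masked):

```shell
# Make the tmpfs /tmp mount permanent across reboots
systemctl enable tmp.mount

# Verify it is scheduled to start at boot
systemctl is-enabled tmp.mount
```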
[root@ol7beta ~]# df
Filesystem          1K-blocks   Used Available Use% Mounted on
/dev/mapper/ol-root  49747968 962344  48785624   2% /
devtmpfs               886508      0    886508   0% /dev
tmpfs                  893876      0    893876   0% /dev/shm
tmpfs                  893876   2292    891584   1% /run
tmpfs                  893876      0    893876   0% /sys/fs/cgroup
/dev/sda1              487652  91380    366576  20% /boot
tmpfs                  893876      0    893876   0% /tmp
Note: after installation, the "ifconfig" command was not found.
[root@ol7beta ~]# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: p2p1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:48:ff:7f brd ff:ff:ff:ff:ff:ff
    inet 192.168.111.20/24 scope global p2p1
    inet6 fe80::a00:27ff:fe48:ff7f/64 scope link
       valid_lft forever preferred_lft forever
The output format differs from what the old ifconfig command showed.
So I installed the ifconfig command and tested it.
[root@ol7beta ~]# rpm -qa |grep createrepo
[root@ol7beta ~]# mount /dev/cdrom /mnt
mount: /dev/sr0 is write-protected, mounting read-only
[root@ol7beta ~]# cd /mnt/Packages/
[root@ol7beta Packages]# rpm -ivh createrepo-0.9.9-21.el7.noarch.rpm
warning: createrepo-0.9.9-21.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
error: Failed dependencies:
        deltarpm is needed by createrepo-0.9.9-21.el7.noarch
        libxml2-python is needed by createrepo-0.9.9-21.el7.noarch
        python-deltarpm is needed by createrepo-0.9.9-21.el7.noarch
[root@ol7beta Packages]# cd /mnt
[root@ol7beta mnt]# createrepo .
-bash: createrepo: command not found
[root@ol7beta mnt]#
[root@ol7beta mnt]#
[root@ol7beta mnt]#  cd /mnt/Packages/
[root@ol7beta Packages]# rpm -ivh createrepo-0.9.9-21.el7.noarch.rpm deltarpm-3.6-1.el7.x86_64.rpm  libxml2-python-2.9.1-2.0.1.el7.x86_64.rpm  python-deltarpm-3.6-1.el7.x86_64.rpm
warning: createrepo-0.9.9-21.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:deltarpm-3.6-1.el7               ################################# [ 25%]
   2:python-deltarpm-3.6-1.el7        ################################# [ 50%]
   3:libxml2-python-2.9.1-2.0.1.el7   ################################# [ 75%]
   4:createrepo-0.9.9-21.el7          ################################# [100%]
[root@ol7beta Packages]# cd /mnt
[root@ol7beta mnt]# yum clean all
There are no enabled repos.
 Run "yum repolist all" to see the repos you have.
 You can enable repos with yum-config-manager --enable
[root@ol7beta mnt]# yum repolist all
repolist: 0
[root@ol7beta mnt]# vi /etc/yum.repos.d/iso.repo
[root@ol7beta mnt]# cat /etc/yum.repos.d/iso.repo
[local]
name=Local CD Repo
baseurl=file:///mnt
gpgcheck=1
gpgkey=file:///mnt/RPM-GPG-KEY
[root@ol7beta mnt]# yum clean all
Cleaning repos: local
Cleaning up everything
[root@ol7beta mnt]# yum repolist all
local                                                                                                                                            | 3.6 kB  00:00:00
(1/2): local/group_gz                                                                                                                            | 112 kB  00:00:00
(2/2): local/primary_db                                                                                                                          | 4.0 MB  00:00:00
repo id                                                                   repo name                                                                       status
local                                                                     Local CD Repo                                                                   enabled: 4,628
repolist: 4,628
[root@ol7beta mnt]# yum provides */ifconfig
local/filelists_db                                                                                                                               | 3.5 MB  00:00:00
net-tools-2.0-0.13.20131004git.el7.x86_64 : Basic networking tools
Repo        : local
Matched from:
Filename    : /sbin/ifconfig
[root@ol7beta mnt]# yum install net-tools-2.0-0.13.20131004git.el7.x86_64
Resolving Dependencies
--> Running transaction check
---> Package net-tools.x86_64 0:2.0-0.13.20131004git.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
========================================================================================================================================================================
 Package                               Arch                               Version                                               Repository                         Size
========================================================================================================================================================================
Installing:
 net-tools                             x86_64                             2.0-0.13.20131004git.el7                              local                             303 k
Transaction Summary
========================================================================================================================================================================
Install  1 Package
Total download size: 303 k
Installed size: 917 k
Is this ok [y/d/N]: y
Downloading packages:
warning: /mnt/Packages/net-tools-2.0-0.13.20131004git.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Public key for net-tools-2.0-0.13.20131004git.el7.x86_64.rpm is not installed
Retrieving key from file:///mnt/RPM-GPG-KEY
Importing GPG key 0xEC551F03:
 Userid     : "Oracle OSS group (Open Source Software group) "
 Fingerprint: 4214 4123 fecf c55b 9086 313d 72f9 7b74 ec55 1f03
 From       : /mnt/RPM-GPG-KEY
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : net-tools-2.0-0.13.20131004git.el7.x86_64                                                                                                            1/1
  Verifying  : net-tools-2.0-0.13.20131004git.el7.x86_64                                                                                                            1/1
Installed:
  net-tools.x86_64 0:2.0-0.13.20131004git.el7
Complete!
[root@ol7beta mnt]# ifconfig -a
lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
p2p1: flags=4163  mtu 1500
        inet 192.168.111.20  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::a00:27ff:fe48:ff7f  prefixlen 64  scopeid 0x20
        ether 08:00:27:48:ff:7f  txqueuelen 1000  (Ethernet)
        RX packets 4847  bytes 541675 (528.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3591  bytes 1145806 (1.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Users must also get to know the "systemctl" command.
[root@ol7beta ~]#
[root@ol7beta ~]# type systemctl
systemctl is /usr/bin/systemctl
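For anyone coming from OL6, the old SysV habits map onto systemctl roughly like this (a quick sketch; sshd is just an example unit name):

```shell
systemctl start sshd        # was: service sshd start
systemctl stop sshd         # was: service sshd stop
systemctl status sshd       # was: service sshd status
systemctl enable sshd       # was: chkconfig sshd on
systemctl disable sshd      # was: chkconfig sshd off
systemctl list-unit-files --type=service   # was: chkconfig --list
```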
Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Using test prep software to prepare for 12c OCP upgrade exam

Bobby Durrett's DBA Blog - 17 hours 23 min ago

I got the newly available Kaplan test prep software for the Oracle 12c OCP upgrade exam.

I took the test in certification mode when I was tired at the end of the day one day last week and got 44% right – fail!  I usually wait until I get all the questions right before taking the real test, so I have a ways to go.

The practice test software has been useful in terms of showing me things I didn’t study very well, or at all.  I’m expecting to significantly improve my correct answer percentage on my next pass.

I’m a little nervous though because it seems that the real test involves some questions that are generic database questions and I don’t think that the test prep software includes that section.  If you look at the list of topics they have a  section called “Key DBA Skills”.  I’d hope that after 19 years as an Oracle DBA I’d have some skills, but there are plenty of things I don’t do every day, such as setting up ASM.  I guess I’ll just have to bone up on the key areas of pre-12c that I don’t use all the time and hope I’m not surprised.

Anyway, I’m at 44%, but hoping to make some strides in the next few weeks.

- Bobby

 

Categories: DBA Blogs

Supercharge your Applications with Oracle WebLogic

Every enterprise uses an application server, but the question is why they need one. The answer is that they need to deliver applications and software to just about any device...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Why is Affinity Mask Negative in sp_configure?

Pythian Group - Tue, 2014-04-15 07:56

While looking at a SQL Server health report, I found the affinity mask parameter in the sp_configure output showing a negative value.

name                                minimum     maximum     config_value run_value
----------------------------------- ----------- ----------- ------------ -----------
affinity mask                       -2147483648 2147483647  -1066394617  -1066394617

Affinity mask is a SQL Server configuration option which is used to assign processors to specific threads for improved performance. To know more about affinity mask, read this. Usually, the value for affinity mask is a positive integer (decimal format) in sp_configure. The article in previous link shows an example of binary bit mask and corresponding decimal value to be set in sp_configure.

 

I was curious to find out why the value of affinity mask could be negative, since according to BOL http://technet.microsoft.com/en-us/library/ms187104(v=sql.105).aspx:

 

The values for affinity mask are as follows:

  • A one-byte affinity mask covers up to 8 CPUs in a multiprocessor computer.
  • A two-byte affinity mask covers up to 16 CPUs in a multiprocessor computer.
  • A three-byte affinity mask covers up to 24 CPUs in a multiprocessor computer.
  • A four-byte affinity mask covers up to 32 CPUs in a multiprocessor computer.
  • To cover more than 32 CPUs, configure a four-byte affinity mask for the first 32 CPUs and up to a four-byte affinity64 mask for the remaining CPUs.
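As a quick illustration of how such a bit mask maps to CPUs (a sketch using shell arithmetic; bit N set means CPU N is affinitized):

```shell
# Bind SQL Server to the first four CPUs (0-3): binary 00001111
mask=$((2#00001111))
echo "$mask"              # prints 15, the decimal value you would set in sp_configure
printf '0x%X\n' "$mask"   # prints 0xF
```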

 

Time to unfold the mystery. Windows Server 2008 R2 supports more than 64 logical processors. From the ERRORLOG, I see there are 40 logical processors on the server:

 

2014-03-31 18:18:18.18 Server      Detected 40 CPUs. This is an informational message; no user action is required.

 

Further, going down in the ERRORLOG I see this server has four NUMA nodes configured.

 

Processor affinity turned on: node 0, processor mask 0x0000000000001c00.

Processor affinity turned on: node 1, processor mask 0x0000000000000007.

Processor affinity turned on: node 2, processor mask 0x0000000000700000.

Processor affinity turned on: node 3, processor mask 0x00000001c0000000.

:

Node configuration: node 0: CPU mask: 0x00000000000ffc00:0 Active CPU mask: 0x0000000000001c00:0.

Node configuration: node 1: CPU mask: 0x00000000000003ff:0 Active CPU mask: 0x0000000000000007:0.

Node configuration: node 2: CPU mask: 0x000000003ff00000:0 Active CPU mask: 0x0000000000700000:0.

Node configuration: node 3: CPU mask: 0x000000ffc0000000:0 Active CPU mask: 0x00000001c0000000:0. 

 

These were hard NUMA nodes; no soft NUMA nodes were configured on the server (no related registry keys exist).

 

An important thing to note is that the affinity mask value for sp_configure ranges from -2147483648 to 2147483647, i.e. 2147483648 + 2147483647 + 1 = 4294967296 = 2^32 values, the range of the int data type. Hence the affinity mask value from sp_configure is not sufficient to hold more than 64 CPUs. To deal with this, ALTER SERVER CONFIGURATION was introduced in SQL Server 2008 R2 to support and set the processor affinity for more than 64 CPUs. However, the value of affinity mask in sp_configure, in such cases, is still an *adjusted* value, which we are going to work out below.

 

Let me paste the snippet from ERRORLOG again:

 

Processor affinity turned on: node 0, processor mask 0x0000000000001c00.

Processor affinity turned on: node 1, processor mask 0x0000000000000007.

Processor affinity turned on: node 2, processor mask 0x0000000000700000.

Processor affinity turned on: node 3, processor mask 0x00000001c0000000.

 

As it says, the values above are the processor masks, i.e. the processor affinity or affinity mask. These values correspond to online_scheduler_mask in sys.dm_os_nodes, which makes up the ultimate value for affinity mask in sp_configure. Ideally, affinity mask should be the sum of these values. Let’s add these hexadecimal values using the Windows calculator (choose Programmer from the View menu):

 

  0x0000000000001c00

+ 0x0000000000000007

+ 0x0000000000700000

+ 0x00000001c0000000

--------------------

= 0x00000001C0701C07

 

= 7523539975 (decimal)

 

So, affinity mask in sp_configure should have been equal to 7523539975. Since this number is greater than the limit of 2^32, i.e. 4294967296, we see an *adjusted* value (apparently a negative value). The reason I call it an *adjusted* value is that the sum of the processor mask values (in decimal) has the int range, 4294967296, subtracted from it repeatedly until it fits within the int range. Here is an example which explains the theory:

 

7523539975 - 4294967296 - 4294967296 = -1066394617 = the negative value seen in sp_configure

name                                minimum     maximum     config_value run_value
----------------------------------- ----------- ----------- ------------ -----------
affinity mask                       -2147483648 2147483647  -1066394617  -1066394617

That explains why affinity mask shows up as a negative number in sp_configure.
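The same folding can be reproduced with a few lines of shell arithmetic (a sketch; the hex values are the per-node processor masks from the ERRORLOG above):

```shell
# Sum the per-node processor masks (hex) from the ERRORLOG
sum=$((0x0000000000001c00 + 0x0000000000000007 \
     + 0x0000000000700000 + 0x00000001c0000000))
printf '0x%X = %d\n' "$sum" "$sum"   # 0x1C0701C07 = 7523539975

# Fold the sum back into the signed 32-bit range reported by sp_configure
range=4294967296   # 2^32, the range of the int data type
while [ "$sum" -gt 2147483647 ]; do
  sum=$((sum - range))
done
echo "$sum"                          # -1066394617
```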

 

To make the calculation easier, I wrote a small script to find out the sp_configure equivalent value of affinity mask in the case of NUMA nodes:


-- Find out the sp_configure equivalent value of affinity mask in case of NUMA nodes
--------------------------------------------------------------------------------------
BEGIN
    DECLARE @real_value bigint;               -- to hold the sum of online_scheduler_mask
    DECLARE @range_value bigint = 4294967296; -- range of the int data type, i.e. 2^32
    DECLARE @config_value int = 0;            -- affinity_mask as seen in sp_configure; set later

    -- Fetch the sum of online_scheduler_mask, excluding node id 64, i.e. the hidden scheduler
    SET @real_value = (SELECT SUM(online_scheduler_mask)
                       FROM sys.dm_os_nodes
                       WHERE memory_node_id <> 64);

    -- Calculate the value for affinity_mask as seen in sp_configure
    WHILE (@real_value > 2147483647)
    BEGIN
        SET @real_value = @real_value - @range_value;
    END;

    SET @config_value = @real_value;

    PRINT 'The current config_value for affinity_mask parameter in sp_configure is: ' + CAST(@config_value AS varchar);
END;

This script will give the current config value for SQL Server in any case: NUMA nodes, more than 64 processors, SQL Server 2008 R2, and so on.

 

I hope this post helps if you were as puzzled as I was to see the negative number in sp_configure.

 

Stay tuned!

Categories: DBA Blogs

From Las Vegas to Ottawa

Pakistan's First Oracle Blog - Sun, 2014-04-13 05:27
After a very engaging session at Collaborate14 in sunny Las Vegas amidst the desert of Nevada, I just arrived in not-so-bitterly-cold Ottawa, the capital of Canada. Looking forward to meeting various Pythian colleagues and hanging out with the friends I cherish most.

My Exadata IORM session went well. There has been lots of follow-up discussion, and questions are still pouring in. I promise I will answer them as soon as I return to Australia in a couple of weeks. That reminds me of my flight from one corner of the globe to the other; I still need to learn how to sleep like a baby during flights. Any ideas?

Ottawa reminds me of Australian capital Canberra. It's quite a change after neon-city Vegas. Where Vegas was bathing in lights, simmering with shows, bubbling with bars, swarming with party-goers, and rattling with Casinos; Ottawa is laid-back, quiet, peaceful, and small. Restaurants and cafes look cool. Ottawa River is mostly still frozen and mounds of snow are evident along the road sides with leafless trees.

But spring is here, and things look all set to rock.
Categories: DBA Blogs

adding NOT NULL columns to an existing table ... implications make me grumpy

Grumpy old DBA - Sat, 2014-04-12 07:56
This is DBA basics 101 in the Oracle world, but also something that we grumpy DBA types forget from time to time.  We have an existing table in a schema that is populated with data, something like this:


create table dbaperf.has_data ( column_one varchar2(10) not null, column_two number(10) not null);

insert into dbaperf.has_data(column_one, column_two) values('First row',13);
insert into dbaperf.has_data(column_one, column_two) values('Another',42);
commit;

Now you need to add another column that is also NOT NULL.  Chris Date was not happy that vendor implementations of the relational model allow nullable columns.  Be aware of any potential NULL values in rows and handle them carefully (IS NULL / IS NOT NULL) to avoid messing up results.

But anyhow we are going to add in a new column that is NOT NULL.

How easy that is to do against an Oracle table depends on whether one is also supplying a DEFAULT value for the new column.  If you do not supply a DEFAULT value, what happens here?

 alter table dbaperf.has_data add ( column_three char(1) NOT NULL );

You get: ORA-01758: table must be empty to add mandatory (NOT NULL) column

To get around that you have to do this in three steps:
  • Add in the new column
  • Populate all the new columns with a value ( data migration )
  • Make the column NOT NULL
alter table dbaperf.has_data add ( column_three char(1) );

update dbaperf.has_data set column_three = 'X' where column_one = 'First row';
update dbaperf.has_data set column_three = 'X' where column_one = 'Another';

alter table dbaperf.has_data modify ( column_three NOT NULL );

Things get easier if you do this with a DEFAULT clause on the new column.  The problem, of course, is that while some columns have a reasonable default value, others may never get agreement on one.  A min or max type column can probably take an easy default; others, not so much.

alter table dbaperf.has_data add ( column_four number(21,2) default 0 NOT NULL );

All of this discussion sidesteps the implications of adding a new column to a large existing table or partitioned table and fragging up the blocks ... that is a little beyond 101 for now.
Categories: DBA Blogs

Moving away from wordpress

Lutz Hartmann - Fri, 2014-04-11 13:55

I am sick of this advertisement on my site.

Therefore, I am about to move most of my posts to

http://sysdba.ch/index.php/postlist

 

Thanks for following my blog for so long.

Lutz Hartmann


Categories: DBA Blogs

Oracle RMAN Restore to the Same Machine as the Original Database

Pythian Group - Fri, 2014-04-11 07:52

Among the most critical but often most neglected database administration tasks is testing restore from backup. But sometimes, you don’t have a test system handy, and need to test the restore on the same host as the source database. In such situations, the biggest fear is overwriting the original database. Here is a simple procedure you can follow, which will not overwrite the source.

  1. Add an entry to the oratab for the new instance, and source the new environment:
    oracle$ cat >> /etc/oratab <<EOF
    > foo:/u02/app/oracle/product/11.2.0/dbhome_1:N
    > EOF
    
    oracle$ . oraenv
    ORACLE_SID[oracle]? foo
    The Oracle base remains unchanged with value /u02/app/oracle
  2. Create a pfile and spfile with a minimum set of parameters for the new instance. In this case the source database is named ‘orcl’ and the new database will have a DB unique name of ‘foo’. This example will write all files to the +data ASM diskgroup, under directories for ‘foo’. You could use a filesystem directory as the destination as well. Just make sure you have enough space wherever you plan to write:
    oracle$ cat > $ORACLE_HOME/dbs/initfoo.ora <<EOF
    > db_name=orcl
    > db_unique_name=foo
    > db_create_file_dest=+data
    > EOF
    
    oracle$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Wed Apr 9 15:35:00 2014
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to an idle instance.
    
    SQL> create spfile from pfile;
    File created.
    
    SQL> exit
    Disconnected
  3. Now, using the backup pieces from your most recent backup, try restoring the controlfile only. Start with the most recently written backup piece, since RMAN writes the controlfile at the end of the backup. It may fail once or twice, but keep trying backup pieces until you find the controlfile:
    oracle$ ls -lt /mnt/bkup
    total 13041104
    -rwxrwxrwx 1 root root      44544 Apr  4 09:32 0lp4sghk_1_1
    -rwxrwxrwx 1 root root   10059776 Apr  4 09:32 0kp4sghi_1_1
    -rwxrwxrwx 1 root root 2857394176 Apr  4 09:32 0jp4sgfr_1_1
    -rwxrwxrwx 1 root root 3785719808 Apr  4 09:31 0ip4sgch_1_1
    -rwxrwxrwx 1 root root 6697222144 Apr  4 09:29 0hp4sg98_1_1
    -rwxrwxrwx 1 root root    3647488 Apr  4 09:28 0gp4sg97_1_1
    
    $ rman target /
    Recovery Manager: Release 11.2.0.3.0 - Production on Wed Apr 9 15:37:10 2014
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    connected to target database (not started)
    
    RMAN> startup nomount;
    Oracle instance started
    Total System Global Area     238034944 bytes
    Fixed Size                     2227136 bytes
    Variable Size                180356160 bytes
    Database Buffers              50331648 bytes
    Redo Buffers                   5120000 bytes
    
    RMAN> restore controlfile from '/mnt/bkup/0lp4sghk_1_1';
    Starting restore at 09-APR-14
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=1 device type=DISK
    channel ORA_DISK_1: restoring control file
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 04/09/2014 15:42:10
    ORA-19870: error while restoring backup piece /mnt/bkup/0lp4sghk_1_1
    ORA-19626: backup set type is archived log - can not be processed by this conversation
    
    RMAN> restore controlfile from '/mnt/bkup/0kp4sghi_1_1';
    Starting restore at 09-APR-14
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=19 device type=DISK
    channel ORA_DISK_1: restoring control file
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
    output file name=+DATA/foo/controlfile/current.348.844443549
    Finished restore at 09-APR-14

    As you can see above, RMAN will report the path and name of the controlfile that it restores. Use that path and name below:

    RMAN> sql "alter system set
    2>  control_files=''+DATA/foo/controlfile/current.348.844443549''
    3>  scope=spfile";
    
    sql statement: alter system set 
    control_files=''+DATA/foo/controlfile/current.348.844443549'' 
    scope=spfile
  4. Mount the database with the newly restored controlfile, and perform a restore to the new location. The ‘set newname’ command changes the location that RMAN writes the files to, pointing them at the db_create_file_dest of the new instance. The ‘switch database’ command updates the controlfile to reflect the new file locations. When the restore is complete, use media recovery to apply the archived redo logs.
    RMAN> startup force mount
    Oracle instance started
    database mounted
    Total System Global Area     238034944 bytes
    Fixed Size                     2227136 bytes
    Variable Size                180356160 bytes
    Database Buffers              50331648 bytes
    Redo Buffers                   5120000 bytes
    
    RMAN> run {
    2> set newname for database to new;
    3> restore database;
    4> }
    
    executing command: SET NEWNAME
    Starting restore at 09-APR-14
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=23 device type=DISK
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00002 to +data
    channel ORA_DISK_1: reading from backup piece /mnt/bkup/0hp4sg98_1_1
    channel ORA_DISK_1: piece handle=/mnt/bkup/0hp4sg98_1_1 tag=TAG20140404T092808
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:01:35
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00001 to +data
    channel ORA_DISK_1: restoring datafile 00004 to +data
    channel ORA_DISK_1: restoring datafile 00005 to +data
    channel ORA_DISK_1: reading from backup piece /mnt/bkup/0ip4sgch_1_1
    channel ORA_DISK_1: piece handle=/mnt/bkup/0ip4sgch_1_1 tag=TAG20140404T092808
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:01:05
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00003 to +data
    channel ORA_DISK_1: reading from backup piece /mnt/bkup/0jp4sgfr_1_1
    channel ORA_DISK_1: piece handle=/mnt/bkup/0jp4sgfr_1_1 tag=TAG20140404T092808
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:55
    Finished restore at 09-APR-14
    
    RMAN> switch database to copy;
    
    datafile 1 switched to datafile copy "+DATA/foo/datafile/system.338.844531637"
    datafile 2 switched to datafile copy "+DATA/foo/datafile/sysaux.352.844531541"
    datafile 3 switched to datafile copy "+DATA/foo/datafile/undotbs1.347.844531691"
    datafile 4 switched to datafile copy "+DATA/foo/datafile/users.350.844531637"
    datafile 5 switched to datafile copy "+DATA/foo/datafile/soe.329.844531637"
    
    RMAN> recover database;
    
    Starting recover at 09-APR-14
    using channel ORA_DISK_1
    starting media recovery
    archived log file name=/u02/app/oracle/product/11.2.0/dbhome_1/dbs/arch1_25_841917031.dbf thread=1 sequence=25
    archived log file name=/u02/app/oracle/product/11.2.0/dbhome_1/dbs/arch1_26_841917031.dbf thread=1 sequence=26
    archived log file name=/u02/app/oracle/product/11.2.0/dbhome_1/dbs/arch1_27_841917031.dbf thread=1 sequence=27
    media recovery complete, elapsed time: 00:00:01
    Finished recover at 09-APR-14
    
    RMAN> exit
    
    Recovery Manager complete.
  5. Before opening the database, we need to update the file locations for the online redologs, tempfiles, and block change tracking file (if in use) so that we don’t step on the files belonging to the source database. The scripts below generate the ‘alter database rename file’ commands to do this. Just spool to a file or paste the results at the SQL*Plus prompt to make the necessary changes.
    SQL> set pages 1000
    SQL> set lines 150
    SQL> select 'alter database rename file'||chr(10)||
    2  ''''||member||''' to'||chr(10)||
    3  ''''||replace(member,'orcl','foo')||''';' stmt
    4  from gv$logfile;
    
    STMT
    -----------------------------------------------
    alter database rename file
    '+DATA/orcl/onlinelog/group_3.266.812880531' to
    '+DATA/foo/onlinelog/group_3.266.812880531';
    
    alter database rename file
    '+DATA/orcl/onlinelog/group_3.267.812880533' to
    '+DATA/foo/onlinelog/group_3.267.812880533';
    
    alter database rename file
    '+DATA/orcl/onlinelog/group_2.264.812880527' to
    '+DATA/foo/onlinelog/group_2.264.812880527';
    
    alter database rename file
    '+DATA/orcl/onlinelog/group_2.265.812880529' to
    '+DATA/foo/onlinelog/group_2.265.812880529';
    
    alter database rename file
    '+DATA/orcl/onlinelog/group_1.262.812880521' to
    '+DATA/foo/onlinelog/group_1.262.812880521';
    
    alter database rename file
    '+DATA/orcl/onlinelog/group_1.263.812880523' to
    '+DATA/foo/onlinelog/group_1.263.812880523';
    
    SQL> alter database rename file
    2  '+DATA/orcl/onlinelog/group_3.266.812880531' to
    3  '+DATA/foo/onlinelog/group_3.266.812880531';
    Database altered.
    
    SQL> alter database rename file
    2  '+DATA/orcl/onlinelog/group_3.267.812880533' to
    3  '+DATA/foo/onlinelog/group_3.267.812880533';
    Database altered.
    
    SQL> alter database rename file
    2  '+DATA/orcl/onlinelog/group_2.264.812880527' to
    3  '+DATA/foo/onlinelog/group_2.264.812880527';
    Database altered.
    
    SQL> alter database rename file
    2  '+DATA/orcl/onlinelog/group_2.265.812880529' to
    3  '+DATA/foo/onlinelog/group_2.265.812880529';
    Database altered.
    
    SQL> alter database rename file
    2  '+DATA/orcl/onlinelog/group_1.262.812880521' to
    3  '+DATA/foo/onlinelog/group_1.262.812880521';
    Database altered.
    
    SQL> alter database rename file
    2  '+DATA/orcl/onlinelog/group_1.263.812880523' to
    3  '+DATA/foo/onlinelog/group_1.263.812880523';
    Database altered.
    
    
    SQL> select 'alter database rename file'||chr(10)||
    2  ''''||name||''' to'||chr(10)||
    3  ''''||replace(name,'orcl','foo')||''';' stmt
    4  from gv$tempfile;
    
    STMT
    -----------------------------------------------
    alter database rename file
    '+DATA/orcl/tempfile/temp.268.812880537' to
    '+DATA/foo/tempfile/temp.268.812880537';
    
    SQL> alter database rename file
    2  '+DATA/orcl/tempfile/temp.268.812880537' to
    3  '+DATA/foo/tempfile/temp.268.812880537';
    Database altered.
    
    
    SQL> select 'alter database rename file'||chr(10)||
    2  ''''||filename||''' to'||chr(10)||
    3  ''''||replace(filename,'orcl','foo')||''';' stmt
    4  from v$block_change_tracking;
    
    STMT
    -----------------------------------------------
    alter database rename file
    '+DATA/orcl/changetracking/ctf.303.782571145' to
    '+DATA/foo/changetracking/ctf.303.782571145';
    
    SQL> alter database rename file
    2  '+DATA/orcl/changetracking/ctf.303.782571145' to
    3  '+DATA/foo/changetracking/ctf.303.782571145';
    Database altered.
    
  6. Finally, open the database with the resetlogs option:
    SQL> alter database open resetlogs;
    
    Database altered.
Categories: DBA Blogs

The Growing Trend Toward Data Infrastructure Outsourcing

Pythian Group - Fri, 2014-04-11 07:45

Today’s blog post is the first of three in a series dedicated to data infrastructure outsourcing, with excerpts from our latest white paper.

Despite the strong push to outsource corporate functions that began more than two decades ago, many IT shops have been hesitant to outsource their data management requirements.

Generally, the more mission-critical the data, the more organizations have felt compelled to control it, assigning that responsibility to their brightest and most trusted in-house database experts. The reasoning has been that with greater control comes greater security.

That mindset is rapidly changing. Macroeconomic trends are putting mounting pressure on organizations to rethink the last bastion of IT in-housing—data infrastructure management—and instead look for flexible, cost-effective outsourcing solutions that can help them improve operational efficiency, optimize performance, and increase overall productivity.

This trend toward outsourcing to increase productivity, and not simply reduce costs, is supported by a recent Forrester Research report that highlights the key reasons companies are looking for outside help: too many new technologies and data sources, and difficulty finding people with the skills and experience to optimize and manage them.

To learn how to develop a data infrastructure sourcing strategy, download the rest of our white paper, Data Infrastructure Outsourcing.

Categories: DBA Blogs

What Happens in Vegas, Doesn’t Stay in Vegas – Collaborate 14

Pythian Group - Thu, 2014-04-10 08:04

IOUG’s Collaborate 14 is star-studded this year, with the Pythian team illuminating various tracks in the presentation rooms and acting like a magnet for data lovers in the expo halls of The Venetian. It’s a kind of rendezvous for those who love their data. So if you want your data to be loved, feel free to drop by Pythian booth 1535.

Leading from the front is Paul Vallée with an eye-catching title and real-world gems. Then there is Michael Abbey’s rich experience, Marc Fielding’s in-depth technology coverage, and Vasu’s forays into Apps Database Administration. There is my humble attempt at Exadata IORM, Rene’s great helpful tips, and Alex Gorbachev’s mammoth coverage of mammoth data – it’s all there, with much more to learn, share, and know.

Vegas Strip is buzzing with the commotion of Oracle. Even the big rollers are turning their necks to see what the fuss is about. Poker faces have broken into amazed grins, and even the weird, kerbside card distribution has stopped. Everybody is focused on the pleasures of Oracle technologies.

Courtesy of social media, all of this fun isn’t confined to Vegas. You can follow @Pythian on Twitter to know it all, live, and in real time.

Come Enjoy!

Categories: DBA Blogs

Paul Vallée’s Interview with Oracle Profit Magazine

Pythian Group - Wed, 2014-04-09 23:00

Aaron Lazenby, editor at Oracle’s Profit Magazine, interviewed Pythian founder Paul Vallée this week to discuss the growing risk of internal threats to IT.

“What we need to create is complete accountability for everything that happens around a data center, and that’s where our industry is not up to snuff right now. We tend to think that if you secure access to the perimeter of the data center, then what happens in the meeting inside can be unsupervised. But that’s not good enough,” says Paul.

The interview, Inside Job, is a preview of Paul’s Collaborate ’14 session taking place later today in Las Vegas. If you’re at Collaborate, make sure you don’t miss Paul’s presentation Thou Shalt Not Steal: Securing Your Infrastructure in the Age of Snowden. The presentation begins at 4:15 PM Pacific at the Venetian, Level 3 – Murano 3306.

What are your thoughts? How else can organizations mitigate the risk of internal threats? Comment below.

Categories: DBA Blogs

SQL Developer’s Interface for GIT: Interacting with a GitHub Repository Part 1

Galo Balda's Blog - Wed, 2014-04-09 22:45

In my previous post, I showed how to clone a GitHub repository using SQL Developer. In this post I’m going to show how to synchronize the local repository with the remote one after the remote gets modified.

Here I use GitHub to commit a file called sp_test_git.pls.  You can create files by clicking on the icon the red arrow is pointing to.

new_file

The content of the file is a PL/SQL procedure that prints a message.

file_content

At this point, the remote repository and the local repository are out of sync. The first thing that you may want to do before modifying any repository is to make sure that you have the most current version of it, so that it includes the changes made by other developers. Let’s synchronize remote and local.

Make sure you open the Versions window: go to the main menu and click Team -> Versions.

versions

Open the Local branch and click on master, then go to the main menu and click Team -> Git -> Fetch to open the “Fetch from Git” wizard. Fetching copies changes from the remote repository into your local system without modifying any of your current branches. Once you have fetched the changes, you can merge them into your branches or simply view them. We can see the changes in the Branch Compare window by going to the main menu and clicking Team -> Git -> Branch Compare.

branch_compare

Branch Compare shows that sp_test_git.pls has been fetched from the remote master branch. We can right-click on this entry and select Compare to see the differences.

compare

The window on the left displays the content of the fetched file, and the window on the right displays the content of the same file in the local repository. In this case the right window is empty because this is a brand new file that doesn’t exist locally. Let’s accept the changes and merge them into the local repository: go to the Branch Compare window, right-click on the entry, select Merge, and click the “Ok” button.

merge

Now the changes should have been applied to the local repository.

local_update

We can go to the path where the local repository is located and confirm that sp_test_git.pls is there.


Filed under: Source Control, SQL Developer Tagged: Source Control, SQL Developer
Categories: DBA Blogs

On Error Messages

Chen Shapira - Wed, 2014-04-09 13:01

Here’s a pet peeve of mine: Customers who don’t read the error messages. The usual symptom is a belief that there is just one error, “Doesn’t work”, and that all forms of “doesn’t work” are the same. So if you tried something, got an error, then changed something and you are still getting an error, nothing changed.

I hope everyone who reads this blog understands why this behavior makes any troubleshooting nearly impossible. So I won’t bother to explain why I find this so annoying and so self-defeating. Instead, I’ll explain what we, as developers, can do to improve the situation a bit. (OMG, did I just refer to myself as a developer? I do write code that is then used by customers, so I may as well take responsibility for it.)

Here’s what I see as main reasons people don’t read error messages:

  1. The error message is so long that they don’t know where to start reading. Errors with multiple Java stack dumps are especially fun. Stack traces are useful only to people who look at the code, so while it’s important to capture them (for support), in most cases your users don’t need to see all that very specific information.
  2. Many different errors lead to the same message. The error message simply doesn’t indicate what the error may be, because it can be one of many different things. I think Kerberos is the worst offender here: so many failures look identical. If this happens often enough, you tune out the error message.
  3. The error is so technical and cryptic that it gives you no clue where to start troubleshooting. “Table not Found” is clear. “Call to localhost failed on local exception” is not.

I spend a lot of time explaining to my customers “When <app X> says <this> it means that <misconfiguration> happened and you should <solution>”.

To get users to read error messages, I think error messages should be:

  1. Short. Single line or less.
  2. Clear. As much as possible, explain what went wrong in terms your users should understand.
  3. Actionable. There should be one or two actions that the user should take to either resolve the issue or gather enough information to deduce what happened.
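As a rough illustration of those three properties, here is a minimal Python sketch; the setting name, exception class, and wording are hypothetical, not from any particular product:

```python
class ConfigError(Exception):
    """Raised when required configuration is missing or invalid."""


def load_broker_host(config):
    """Return the broker host, failing with a short, clear, actionable message."""
    try:
        return config["broker_host"]
    except KeyError:
        # One line, plain terms, and a concrete next step for the user.
        # The stack trace can still go to a log file for support.
        raise ConfigError(
            "Missing setting 'broker_host': add it to the config file "
            "or set the BROKER_HOST environment variable."
        ) from None
```

The user-facing message stays on a single line and tells the user what to do, while the full traceback remains available to whoever reads the logs.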

I think Oracle are doing a pretty good job of it. Every one of their errors has an ID number, a short description, an explanation and a proposed solution. See here for example: http://docs.oracle.com/cd/B28359_01/server.111/b28278/e2100.htm#ORA-02140

If we don’t make our errors short, clear and actionable – we shouldn’t be surprised when our users simply ignore them and then complain that our app is impossible to use (or worse – don’t complain, but also don’t use our app).


Categories: DBA Blogs

LVC Producers at #Oracle University

The Oracle Instructor - Tue, 2014-04-08 11:18

LVC stands for Live Virtual Class – this is what we call our courses delivered interactively over the internet. At Oracle University, we have a fine crew of people who make sure that the attendees (as well as the instructor, sometimes) are not impacted by technical problems. These can be, for example, connectivity issues, browser incompatibilities, questions about how to deal with the WebEx learning platform, or about which way to access the remote lab environment. All that and more is handled by LVC producers, so that the instructor can focus on the educational matters. I really appreciate this separation of duties, because I find it already demanding enough to deliver high-quality Oracle Technology classes!

Many of the LVC producers work from Bucharest, and they kindly invited me to visit them at their workplace today. I gladly accepted, and we had the nicest chat up on the 6th floor – it was so cool to meet in person these guys who have supported me so many times already! As you can see, this is a bright bunch :-)

LVC Producers from Bucharest


Tagged: LVC
Categories: DBA Blogs

4 Things Every CMO Should Do Before Approaching IT About Big Data

Pythian Group - Tue, 2014-04-08 10:42

Read the full article, 4 Things Every CMO Should Do Before Approaching IT About Big Data.

“You’ve likely heard the whispers (or shouts) about Big Data’s potential, how it’s the holy grail of marketing—and it can be. But to uncover this information and take action on it, marketing needs to partner closely with all departments, especially with IT,” says Samer Forzley, VP of Marketing at Pythian.

“IT can go off and develop as many Big Data initiatives as it wants, but without the necessary insights from the marketing team, those projects will never translate into sales. But if each plays to its strengths, with CMOs entering the Big Data conversation with logistics upfront, then IT’s structural knowhow can bring that solution to fruition.”

Categories: DBA Blogs

RMAN Infatuation?

Pythian Group - Tue, 2014-04-08 09:59

Lately, I am becoming infatuated with RMAN again.

Have you ever run “restore database preview”?

Are you curious about how the recovery SCNs are determined?

Media recovery start SCN is 1515046
Recovery must be done beyond SCN 1515051 to clear datafile fuzziness

If you are, then I will demonstrate this for you.

RMAN LEVEL0 backup and restore database preview summary:

RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
366     B  A  A DISK        07-APR-2014 14:11:32 1       1       YES        ARCHIVELOG
367     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
368     B  0  A DISK        07-APR-2014 14:11:35 1       1       YES        LEVEL0
369     B  0  A DISK        07-APR-2014 14:11:37 1       1       YES        LEVEL0
370     B  0  A DISK        07-APR-2014 14:11:44 1       1       YES        LEVEL0
371     B  0  A DISK        07-APR-2014 14:11:44 1       1       YES        LEVEL0
372     B  0  A DISK        07-APR-2014 14:11:52 1       1       YES        LEVEL0
373     B  A  A DISK        07-APR-2014 14:11:55 1       1       YES        ARCHIVELOG
374     B  F  A DISK        07-APR-2014 14:11:58 1       1       NO         TAG20140407T141156

RMAN> restore database preview summary;

Starting restore at 07-APR-2014 14:13:04
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=107 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=23 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=108 device type=DISK

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
371     B  0  A DISK        07-APR-2014 14:11:38 1       1       YES        LEVEL0
370     B  0  A DISK        07-APR-2014 14:11:39 1       1       YES        LEVEL0
367     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
368     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
369     B  0  A DISK        07-APR-2014 14:11:37 1       1       YES        LEVEL0
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
373     B  A  A DISK        07-APR-2014 14:11:55 1       1       YES        ARCHIVELOG
Media recovery start SCN is 1524017
Recovery must be done beyond SCN 1524025 to clear datafile fuzziness
Finished restore at 07-APR-2014 14:13:05

RMAN>

Query database to determine recovery SCN:

ARROW:(SYS@db01):PRIMARY> r
  1  SELECT
  2    MIN(checkpoint_change#) start_scn,
  3    GREATEST(MAX(checkpoint_change#),MAX(absolute_fuzzy_change#)) beyond_scn
  4  FROM v$backup_datafile
  5  WHERE incremental_level=(SELECT MAX(incremental_level) FROM v$backup_datafile WHERE incremental_level>=0)
  6*

 START_SCN BEYOND_SCN
---------- ----------
   1524017    1524025

ARROW:(SYS@db01):PRIMARY>

RMAN LEVEL0 and LEVEL1 backup and restore database preview summary:

RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
366     B  A  A DISK        07-APR-2014 14:11:32 1       1       YES        ARCHIVELOG
367     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
368     B  0  A DISK        07-APR-2014 14:11:35 1       1       YES        LEVEL0
369     B  0  A DISK        07-APR-2014 14:11:37 1       1       YES        LEVEL0
370     B  0  A DISK        07-APR-2014 14:11:44 1       1       YES        LEVEL0
371     B  0  A DISK        07-APR-2014 14:11:44 1       1       YES        LEVEL0
372     B  0  A DISK        07-APR-2014 14:11:52 1       1       YES        LEVEL0
373     B  A  A DISK        07-APR-2014 14:11:55 1       1       YES        ARCHIVELOG
374     B  F  A DISK        07-APR-2014 14:11:58 1       1       NO         TAG20140407T141156
375     B  A  A DISK        07-APR-2014 14:14:37 1       1       YES        ARCHIVELOG
376     B  1  A DISK        07-APR-2014 14:14:40 1       1       YES        LEVEL1
377     B  1  A DISK        07-APR-2014 14:14:40 1       1       YES        LEVEL1
378     B  1  A DISK        07-APR-2014 14:14:41 1       1       YES        LEVEL1
379     B  1  A DISK        07-APR-2014 14:14:42 1       1       YES        LEVEL1
380     B  1  A DISK        07-APR-2014 14:14:42 1       1       YES        LEVEL1
381     B  1  A DISK        07-APR-2014 14:14:45 1       1       YES        LEVEL1
382     B  A  A DISK        07-APR-2014 14:14:47 1       1       YES        ARCHIVELOG
383     B  F  A DISK        07-APR-2014 14:14:51 1       1       NO         TAG20140407T141448

RMAN> restore database preview summary;

Starting restore at 07-APR-2014 14:15:59
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=107 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=23 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=108 device type=DISK

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
371     B  0  A DISK        07-APR-2014 14:11:38 1       1       YES        LEVEL0
376     B  1  A DISK        07-APR-2014 14:14:39 1       1       YES        LEVEL1
370     B  0  A DISK        07-APR-2014 14:11:39 1       1       YES        LEVEL0
377     B  1  A DISK        07-APR-2014 14:14:40 1       1       YES        LEVEL1
367     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
378     B  1  A DISK        07-APR-2014 14:14:41 1       1       YES        LEVEL1
368     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
380     B  1  A DISK        07-APR-2014 14:14:41 1       1       YES        LEVEL1
369     B  0  A DISK        07-APR-2014 14:11:37 1       1       YES        LEVEL0
379     B  1  A DISK        07-APR-2014 14:14:41 1       1       YES        LEVEL1
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
382     B  A  A DISK        07-APR-2014 14:14:47 1       1       YES        ARCHIVELOG
Media recovery start SCN is 1524335
Recovery must be done beyond SCN 1524339 to clear datafile fuzziness
Finished restore at 07-APR-2014 14:16:00

RMAN>

Query database to determine recovery SCN:

ARROW:(SYS@db01):PRIMARY> r
  1  SELECT
  2    MIN(checkpoint_change#) start_scn,
  3    GREATEST(MAX(checkpoint_change#),MAX(absolute_fuzzy_change#)) beyond_scn
  4  FROM v$backup_datafile
  5  WHERE incremental_level=(SELECT MAX(incremental_level) FROM v$backup_datafile WHERE incremental_level>=0)
  6*

 START_SCN BEYOND_SCN
---------- ----------
   1524335    1524339

ARROW:(SYS@db01):PRIMARY>

Why is all of this important?

It allows you to automate the process of validating backups without having to actually run “restore database preview”.
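The logic of that query can also be scripted outside the database; here is a minimal Python sketch of the same calculation, with hypothetical dictionaries standing in for rows of v$backup_datafile (field names are simplified, since # is not a legal identifier):

```python
def recovery_scns(backup_datafiles):
    """Mirror the v$backup_datafile query: find the highest incremental level,
    then take MIN(checkpoint_change#) as the media recovery start SCN and
    GREATEST(MAX(checkpoint_change#), MAX(absolute_fuzzy_change#)) as the SCN
    recovery must pass to clear datafile fuzziness."""
    # MAX(incremental_level) over all backup datafile records with level >= 0
    max_level = max(r["incremental_level"] for r in backup_datafiles
                    if r["incremental_level"] >= 0)
    # Restrict to the records at that level, like the WHERE clause does
    rows = [r for r in backup_datafiles if r["incremental_level"] == max_level]
    start_scn = min(r["checkpoint_change"] for r in rows)
    beyond_scn = max(max(r["checkpoint_change"] for r in rows),
                     max(r["absolute_fuzzy_change"] for r in rows))
    return start_scn, beyond_scn
```

Feed it rows fetched by the query above and it returns the same start/beyond pair that “restore database preview” reports.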

Tested on 11.2.0.4 database.

Categories: DBA Blogs

Partner Webcast – Oracle SuperCluster Product Family: Technology Overview

When you’re under pressure to deliver more—more performance, more capacity, and more business value—you need systems that offer seamless integration. Oracle SuperCluster T5-8 and the...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Monitor Oracle Golden Gate from SQL

DBASolved - Mon, 2014-04-07 08:41

One of my presentations at Collaborate 14 this year revolves around the many different ways to monitor Oracle Golden Gate. As I was putting the presentation together, I listed out the different ways of monitoring; I have already covered a few of them in earlier posts. What I want to show you here is how to execute a simple “info all” command and see the results from SQL*Plus or SQL Developer using SQL.

First, a script (shell, Perl, etc.) needs to be written that captures the output of the “info all” command to a text file. In this case, I’m going to write the text file to /tmp since I’m on Linux.


#!/usr/bin/perl -w
#
#Author: Bobby Curtis, Oracle ACE
#Copyright: 2014
#Title: gg_monitor_sqldev.pl
#
use strict;
use warnings;

#Static Variables

my $gghome = "/oracle/app/product/12.1.2/oggcore_1";
my $outfile = "/tmp/gg_process_sqldev.txt";

#Program
my @buf = `$gghome/ggsci << EOF
info all
EOF`;

open (GGPROC, ">$outfile") or die "Unable to open file";
foreach (@buf)
{
    if (/MANAGER/||/JAGENT/||/EXTRACT/||/REPLICAT/)
    {
        no warnings 'uninitialized';
        chomp;
        my ($program, $status, $group, $lagatchkpt, $timesincechkpt) = split(" ");

        if ($group eq "")
        {
            $group = $program;
        }
        if ($lagatchkpt eq "" || $timesincechkpt eq "")
        {
            $lagatchkpt = "00:00:00";
            $timesincechkpt = "00:00:00";
        }
        print GGPROC "$program|$status|$group|$lagatchkpt|$timesincechkpt\n";
    }
}
close (GGPROC);

Next, the text file needs to be placed into a table that can be read by SQL*Plus or SQL Developer. External tables are great for this.


create directory TEMP as '/tmp';
grant read on directory TEMP to PUBLIC;

drop table ggate.os_process_mon;

create table ggate.os_process_mon
(
process char(15),
status char(15),
ggroup char(15),
lagatchk char(15),
timelastchk char(15)
)
organization external
(type oracle_loader
default directory TEMP
access parameters
(
RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY '|'
    MISSING FIELD VALUES ARE NULL
(
            process char(15),
            status char(15),
            ggroup char(15),
            lagatchk char(15),
            timelastchk char(15)
         )
    )
    location ('gg_process_sqldev.txt')
);

select * from ggate.os_process_mon;
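If you ever need the same status outside the database, the pipe-delimited file the Perl script writes can also be parsed directly; a minimal Python sketch (field names follow the script’s print statement, and the sample rows below are illustrative):

```python
def parse_gg_status(lines):
    """Parse rows of the form program|status|group|lag_at_chkpt|time_since_chkpt,
    as written by the Perl script above, into a list of dicts."""
    cols = ("program", "status", "group", "lag_at_chkpt", "time_since_chkpt")
    return [dict(zip(cols, line.strip().split("|")))
            for line in lines if line.strip()]
```

This mirrors what the ORACLE_LOADER access parameters do, which is handy for quick checks or for feeding the status into another monitoring tool.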

Lastly, with these two pieces in place, I can now select the status from SQL*Plus or SQL Developer using SQL. Image 1 shows a sample from the testing environment I’m building.

Image 1:
image

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Golden Gate
Categories: DBA Blogs

COLLABORATE minus 3

Pythian Group - Mon, 2014-04-07 07:50

I always like to get to the location for a conference a day in advance so I can

  • Get accustomed to the time change
  • Get a feel for my way around the venue
  • Figure out where my room is
  • Establish a few landmarks so I do not wander aimlessly around the facility and hotel as though every voyage is a new life experience

COLLABORATE officially starts on Tuesday, though there are education sessions all day Monday facilitated by the three main groups responsible for the show – the IOUG, OAUG, and Quest International Users Group. So where did this animal called COLLABORATE come from, one may wonder?

Rewind to about 2004. The three above-mentioned user groups each had their own show. Each reached out to Oracle for logistics and education support, something that the vendor was (and still is) happy to give. It was starting to become obvious that the marketplace upheaval was having a dramatic effect on user group conference attendance. At the same time, Oracle expressed a desire to support fewer shows. You do the math – it only made sense. Why not have a 4-5 day mega conference and work with Oracle on many facets of support? Not only were the attendees of each show being asked to pick one or the other; Oracle was investing a massive number of personnel to support all three shows separately. It was a cumulative decision to amalgamate the shows, and we wondered where it all would start.

With the blessing of the IOUG board I made one of those very first phone calls to one or more people on the OAUG board, and the rest is history. I do not remember who I spoke to first, and there were probably a handful of feelers going out from other places in the IOUG infrastructure to OAUG bigwigs. I spoke to board member Donna Rosentrater (@DRosentrater) and we jammed on what could/should become of a co-operative effort. We chatted a few times, and the interest amongst board members of the IOUG and OAUG reflected cautious optimism that we could pull it off. Each user group had its own revenue stream from separate shows. We needed to embark down a path that would not put these at risk. That is what the brunt of the negotiations centered on, and the work we did together led to the very first COLLABORATE at the Gaylord in Nashville in 2006.

Once the initial framework was established, it was time to turn the discussions over to the professionals. Both groups’ professional resources collaborated (hence the name maybe) and this mega/co-operative show became a reality. COLLABORATE 14 is the 9th show put on by Quest, OAUG, and IOUG. I am not going to say “this year’s show is going to be the best yet” as I believe that implicitly belittles previous successful events. Suffice to say, for what the user community needs from an information-sharing perspective – COLLABORATE is just what the doctor ordered.

Tomorrow’s a day off; I’ll wander aimlessly through Las Vegas, tempted by curios, shops, food emporiums, and just about every other possible temptation one could think of. Sunday starts with a helicopter trip to the Grand Canyon, and I went all out and forked over the extra $50 to sit in the convex bubble beside the pilot. There’s a bazillion vendors poised to whisk one away to the canyon, with a flyover of the Hoover Dam there or on the way back. I chose Papillon and am looking forward to the excitement of the day, which starts at 5:10am with a shuttle to the site where the whirlybird takes off. Talk about taking one’s breath away.

Categories: DBA Blogs