
DBA Blogs

Log Buffer #367, A Carnival of the Vanities for DBAs

Pythian Group - Thu, 2014-04-17 07:47

Log Buffer is globe-trotting this week from end to end. From every nook, it has brought you some sparkling gems of blog posts. Enjoy!

Oracle:

On April 16th, Oracle announced the Oracle Virtual Compute Appliance X4-2.

Do your Cross Currency Receipts fail Create Accounting?

Oracle Solaris 11.2 Launch in NY.

WebCenter Portal 11gR1 dot8 Bundle Patch 3 (11.1.1.8.3) Released.

What do Sigma, a Leadership class and a webcast have in common?

SQL Server:

Stairway to SQL Server Agent – Level 9: Understanding Jobs and Security.

SQL Server Hardware will provide the fundamental knowledge and resources you need to make intelligent decisions about choice, and optimal installation and configuration, of SQL Server hardware, operating system and the SQL Server RDBMS.

SQL Server 2014 In-Memory OLTP Dynamic Management Views.

Why every SQL Server installation should be a cluster.

SQL Server Backup Crib Sheet.

MySQL:

Looking for Slave Consistency: Say Yes to --read-only and No to SUPER and --slave-skip-errors.

More details on disk IO-bound, update only for MongoDB, TokuMX and InnoDB.

Making the MTR rpl suite GTID_MODE Agnostic.

‘Open Source Appreciation Day’ draws OpenStack, MySQL and CentOS faithful.

MongoDB, TokuMX and InnoDB for disk IO-bound, update-only by PK.

Categories: DBA Blogs

How to start the Internet of Things adventure with Java

Looking for new challenges? Want to get back to the roots of Computer Science? How about starting to explore the Internet of Things? No doubt it is one of the fastest growing areas of IT and an...

Categories: DBA Blogs

Webcast: Database Cloning in Minutes using Oracle Enterprise Manager 12c Database as a Service Snap Clone

Pankaj Chandiramani - Thu, 2014-04-17 04:02

Since the demands from the business for IT services are non-stop, creating copies of production databases in order to develop, test and deploy new applications can be labor intensive and time consuming. Users may also need to preserve private copies of the database, so that they can go back to a point prior to when a change was made in order to diagnose potential issues. Using Snap Clone, users can create multiple snapshots of the database and “time travel” across these snapshots to access data from any point in time.

Join us for an in-depth technical webcast and learn how Snap Clone, a capability of the Oracle Cloud Management Pack for Oracle Database, can fundamentally improve the efficiency and agility of administrators and QA engineers while saving CAPEX on storage. Benefits include:



  • Agile provisioning (~ 2 minutes to provision a 1 TB database)

  • Over 90% storage savings

  • Reduced administrative overhead from integrated lifecycle management


Register Now!


April 24 — 10:00 a.m. PT | 1:00 p.m. ET

May 8 — 7:00 a.m. PT | 10:00 a.m. ET | 4:00 p.m. CET

May 22 — 10:00 a.m. PT | 1:00 p.m. ET





Categories: DBA Blogs

Indexing Foreign Key Constraints With Bitmap Indexes (Locked Out)

Richard Foote - Thu, 2014-04-17 01:29
Franck Pachot made a very valid comment in my previous entry on Indexing Foreign Keys (FK) that the use of a Bitmap Index on the FK columns does not avoid the table locks associated with deleting rows from the parent table. Thought I might discuss why this is the case and why only a B-Tree index does […]
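
For context, the standard pattern the post builds on is a conventional B-tree index on the FK column, which lets deletes on the parent avoid the blocking table lock on the child. A minimal sketch (my own illustration, not from the post itself):

create table parent_t (id number primary key);
create table child_t (parent_id number not null references parent_t(id));

-- a B-tree index on the FK column prevents the child table lock
-- when rows are deleted from parent_t
create index child_t_fk_i on child_t (parent_id);

-- per the post, a bitmap index on the same column does NOT provide
-- that protection:
-- create bitmap index child_t_fk_bix on child_t (parent_id);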
Categories: DBA Blogs

SQL Developer’s Interface for GIT: Interacting with a GitHub Repository Part 2

Galo Balda's Blog - Wed, 2014-04-16 17:46

In this post I’m going to show how to synchronize the remote and local repositories after an existing file in the local repository gets modified. What I’ll do is modify the sp_test_git.pls file in our local repository and then push those changes to the remote repository (GitHub).

First, I proceed to open the sp_test_git.pls file using SQL Developer, add another dbms_output line to it and save it. The moment I save the file, the Pending Changes (Git) window gets updated to reflect the change and the icons in the toolbar get enabled.

[Screenshot: modify_file]

Now I can include a comment and then add the file to the staging area by clicking on the Add button located on the Pending Changes (Git) window. Notice how the status changes from “Modified Not Staged” to “Modified Staged”.

[Screenshot: staged_file]

What if I want to compare versions before doing a commit to the local repository? I just have to click on the Compare with Previous Version icon located on the Pending Changes (Git) window.

[Screenshot: compare2]

The panel on the left displays the version stored in the local repository and the panel on the right displays the version in the Staging Area.

The next step is to commit the changes to the local repository. For that I click on the Commit button located on the Pending Changes (Git) window and then I click on the OK button in the Commit window.

[Screenshot: commit]

Now the Branch Compare window displays information telling that remote and local are out of sync.

[Screenshot: branch_compare2]

So the final step is to sync up remote and local by pushing the changes to GitHub. For that I go to the main menu and click Team -> Git -> Push to open the “Push to Git” wizard, where I enter the URL for the remote repository plus the user name and password to complete the operation. Then I go to GitHub to confirm the changes have been applied.

[Screenshot: updated_github]
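
For reference, the same stage-commit-push round trip could be done from a command line; a rough equivalent, assuming the remote is registered as origin and we are working on the master branch:

git add sp_test_git.pls
git commit -m "Add another dbms_output line"
git push origin master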


Filed under: GIT, SQL Developer, Version Control Tagged: GIT, SQL Developer, Version Control

Categories: DBA Blogs

ORA-00600 [3631] recovering pluggable database after flashback database in Oracle 12c

Bobby Durrett's DBA Blog - Wed, 2014-04-16 15:44

I was trying to recreate the scenario where a 12c container database is flashed back to an SCN before the point that I had recovered a pluggable database to using point-in-time recovery.
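
For the record, the sequence I was attempting looks roughly like this (a sketch; the SCN values and auxiliary destination are made up for illustration). The recover of the PDB after the flashback is the step that failed:

RMAN> alter pluggable database pdborcl close;
RMAN> run {
2>   set until scn 1437250;
3>   restore pluggable database pdborcl;
4>   recover pluggable database pdborcl auxiliary destination '/u01/aux';
5> }
RMAN> alter pluggable database pdborcl open resetlogs;
RMAN> shutdown immediate
RMAN> startup mount
RMAN> flashback database to scn 1437000;
RMAN> alter database open resetlogs;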

I got this ugly ORA-00600:

RMAN> recover pluggable database pdborcl;

Starting recover at 16-APR-14
using channel ORA_DISK_1

starting media recovery
media recovery failed
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 04/16/2014 06:07:40
ORA-00283: recovery session canceled due to errors
RMAN-11003: failure during parse/execution of SQL statement: alter database recover if needed
 datafile 32 , 33 , 34 , 35
ORA-00283: recovery session canceled due to errors
ORA-00600: internal error code, arguments: [3631], [32], [4096], [4210689], [], [], [], [], [], [], [], []

I think the above error message stems from this bug:

Bug 14536110  ORA-600 [ktfaput: wrong pdb] / crash using PDB and FDA

There may have been some clever way to recover from this, but I ended up just deleting and recreating the CDB through DBCA, which was good experience playing with DBCA in Oracle 12c. I’m trying to learn 12c, but I have a feeling that I have hit a bug that keeps me from testing this flashback database plus point-in-time recovery of a pluggable database scenario. I wonder if I should patch? I think that Oracle has included a fix for this bug in a patch set. It could be good 12c experience to apply a patch set.

- Bobby

Categories: DBA Blogs

Monitoring Oracle Golden Gate from SQL Developer

DBASolved - Wed, 2014-04-16 07:16

Last week I was at Collaborate 14 speaking in two sessions; one of the sessions I had done a couple of times before. The other session was about the different ways of monitoring Oracle GoldenGate (if you are curious about the presentation, it can be found here). While at the conference I ran the idea of monitoring GoldenGate from SQL Developer by a few peers, and there seems to be interest. As for Oracle, this approach to monitoring GoldenGate is not on Oracle’s road map for SQL Developer.

To achieve this goal, XML extensions within SQL Developer are needed. Using XML extensions, I’ve been able to turn my SQL-based monitoring of GoldenGate into a working extension. The extension is not perfect and continues to need some work. As you can see in Image 1, I can get the status of a GoldenGate process and associated stats.

[Image 1: GoldenGate process status and stats in SQL Developer]

The SQL Developer extension for Oracle GoldenGate is available for whoever would like to use it and extend on it.  This extension is included with my other GoldenGate monitoring scripts located here and on my scripts page.

Note: at some point, I will hopefully get this extension uploaded to a GitHub repository for community digestion.

This extension is meant to help DBAs monitor their GoldenGate environments without the need to go directly to the server. For now, it just gives up/down status and operation stats. Hopefully, as it matures (as I and others work on it), it will become a robust extension for all monitoring with Oracle GoldenGate.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Golden Gate, Replication
Categories: DBA Blogs

Read Oracle Linux 7 Beta 1

Surachart Opun - Wed, 2014-04-16 03:53
It might be too late to be posting about Oracle Linux 7 (Beta 1); I just came back from long holidays in Thailand. I think it's very interesting to learn something new in OL7. Users can download it using an OTN account.
Download.
Release Note.

After installing it, I tested a few things.
[root@ol7beta ~]# cat /etc/oracle-release
Oracle Linux Everything release 7.0 Beta
[root@ol7beta ~]# uname -r
3.8.13-31.el7uek.x86_64
Users can choose to start with RHCK or UEK3.

Oracle Linux 7 provides the temporary file system (tmpfs), which is configured in volatile memory and whose contents do not persist after a system reboot.
[root@ol7beta ~]# df
Filesystem          1K-blocks   Used Available Use% Mounted on
/dev/mapper/ol-root  49747968 962512  48785456   2% /
devtmpfs               886508      0    886508   0% /dev
tmpfs                  893876      0    893876   0% /dev/shm
tmpfs                  893876   2212    891664   1% /run
tmpfs                  893876      0    893876   0% /sys/fs/cgroup
/dev/sda1              487652  91380    366576  20% /boot
[root@ol7beta ~]# systemctl status  tmp.mount
tmp.mount - Temporary Directory
   Loaded: loaded (/usr/lib/systemd/system/tmp.mount; disabled)
   Active: inactive (dead)
    Where: /tmp
     What: tmpfs
     Docs: man:hier(7)
           http://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
[root@ol7beta ~]# systemctl start  tmp.mount
[root@ol7beta ~]# systemctl status  tmp.mount
tmp.mount - Temporary Directory
   Loaded: loaded (/usr/lib/systemd/system/tmp.mount; disabled)
   Active: active (mounted) since Wed 2014-04-16 05:33:32 ICT; 1s ago
    Where: /tmp
     What: tmpfs
     Docs: man:hier(7)
           http://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
  Process: 16209 ExecMount=/bin/mount tmpfs /tmp -t tmpfs -o mode=1777,strictatime (code=exited, status=0/SUCCESS)
Apr 16 05:33:32 ol7beta systemd[1]: Mounting Temporary Directory...
Apr 16 05:33:32 ol7beta systemd[1]: tmp.mount: Directory /tmp to mount over...y.
Apr 16 05:33:32 ol7beta systemd[1]: Mounted Temporary Directory.
Hint: Some lines were ellipsized, use -l to show in full.
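Note that the unit is still marked "disabled" above; presumably, to keep /tmp on tmpfs across reboots, the unit would also need to be enabled (this step is my addition, not part of the original test):
[root@ol7beta ~]# systemctl enable tmp.mount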
[root@ol7beta ~]# df
Filesystem          1K-blocks   Used Available Use% Mounted on
/dev/mapper/ol-root  49747968 962344  48785624   2% /
devtmpfs               886508      0    886508   0% /dev
tmpfs                  893876      0    893876   0% /dev/shm
tmpfs                  893876   2292    891584   1% /run
tmpfs                  893876      0    893876   0% /sys/fs/cgroup
/dev/sda1              487652  91380    366576  20% /boot
tmpfs                  893876      0    893876   0% /tmp

Note: after installation, the "ifconfig" command was not found.
[root@ol7beta ~]# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: p2p1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:48:ff:7f brd ff:ff:ff:ff:ff:ff
    inet 192.168.111.20/24 scope global p2p1
    inet6 fe80::a00:27ff:fe48:ff7f/64 scope link
       valid_lft forever preferred_lft forever

The output of the ifconfig command has changed format.
So I installed the ifconfig command and tested it.
[root@ol7beta ~]# rpm -qa |grep createrepo
[root@ol7beta ~]# mount /dev/cdrom /mnt
mount: /dev/sr0 is write-protected, mounting read-only
[root@ol7beta ~]# cd /mnt/Packages/
[root@ol7beta Packages]# rpm -ivh createrepo-0.9.9-21.el7.noarch.rpm
warning: createrepo-0.9.9-21.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
error: Failed dependencies:
        deltarpm is needed by createrepo-0.9.9-21.el7.noarch
        libxml2-python is needed by createrepo-0.9.9-21.el7.noarch
        python-deltarpm is needed by createrepo-0.9.9-21.el7.noarch
[root@ol7beta Packages]# cd /mnt
[root@ol7beta mnt]# createrepo .
-bash: createrepo: command not found
[root@ol7beta mnt]#
[root@ol7beta mnt]#
[root@ol7beta mnt]#  cd /mnt/Packages/
[root@ol7beta Packages]# rpm -ivh createrepo-0.9.9-21.el7.noarch.rpm deltarpm-3.6-1.el7.x86_64.rpm  libxml2-python-2.9.1-2.0.1.el7.x86_64.rpm  python-deltarpm-3.6-1.el7.x86_64.rpm
warning: createrepo-0.9.9-21.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:deltarpm-3.6-1.el7               ################################# [ 25%]
   2:python-deltarpm-3.6-1.el7        ################################# [ 50%]
   3:libxml2-python-2.9.1-2.0.1.el7   ################################# [ 75%]
   4:createrepo-0.9.9-21.el7          ################################# [100%]
[root@ol7beta Packages]# cd /mnt
[root@ol7beta mnt]# yum clean all
There are no enabled repos.
 Run "yum repolist all" to see the repos you have.
 You can enable repos with yum-config-manager --enable
[root@ol7beta mnt]# yum repolist all
repolist: 0
[root@ol7beta mnt]# vi /etc/yum.repos.d/iso.repo
[root@ol7beta mnt]# cat /etc/yum.repos.d/iso.repo
[local]
name=Local CD Repo
baseurl=file:///mnt
gpgcheck=1
gpgkey=file:///mnt/RPM-GPG-KEY
[root@ol7beta mnt]# yum clean all
Cleaning repos: local
Cleaning up everything
[root@ol7beta mnt]# yum repolist all
local                                                                                                                                            | 3.6 kB  00:00:00
(1/2): local/group_gz                                                                                                                            | 112 kB  00:00:00
(2/2): local/primary_db                                                                                                                          | 4.0 MB  00:00:00
repo id                                                                   repo name                                                                       status
local                                                                     Local CD Repo                                                                   enabled: 4,628
repolist: 4,628
[root@ol7beta mnt]# yum provides */ifconfig
local/filelists_db                                                                                                                               | 3.5 MB  00:00:00
net-tools-2.0-0.13.20131004git.el7.x86_64 : Basic networking tools
Repo        : local
Matched from:
Filename    : /sbin/ifconfig
[root@ol7beta mnt]# yum install net-tools-2.0-0.13.20131004git.el7.x86_64
Resolving Dependencies
--> Running transaction check
---> Package net-tools.x86_64 0:2.0-0.13.20131004git.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
========================================================================================================================================================================
 Package                               Arch                               Version                                               Repository                         Size
========================================================================================================================================================================
Installing:
 net-tools                             x86_64                             2.0-0.13.20131004git.el7                              local                             303 k
Transaction Summary
========================================================================================================================================================================
Install  1 Package
Total download size: 303 k
Installed size: 917 k
Is this ok [y/d/N]: y
Downloading packages:
warning: /mnt/Packages/net-tools-2.0-0.13.20131004git.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Public key for net-tools-2.0-0.13.20131004git.el7.x86_64.rpm is not installed
Retrieving key from file:///mnt/RPM-GPG-KEY
Importing GPG key 0xEC551F03:
 Userid     : "Oracle OSS group (Open Source Software group) "
 Fingerprint: 4214 4123 fecf c55b 9086 313d 72f9 7b74 ec55 1f03
 From       : /mnt/RPM-GPG-KEY
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : net-tools-2.0-0.13.20131004git.el7.x86_64                                                                                                            1/1
  Verifying  : net-tools-2.0-0.13.20131004git.el7.x86_64                                                                                                            1/1
Installed:
  net-tools.x86_64 0:2.0-0.13.20131004git.el7
Complete!
[root@ol7beta mnt]# ifconfig -a
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 223  bytes 293120 (286.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 223  bytes 293120 (286.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
p2p1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.111.20  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::a00:27ff:fe48:ff7f  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:48:ff:7f  txqueuelen 1000  (Ethernet)
        RX packets 35846  bytes 6586576 (6.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 32375  bytes 5390303 (5.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Users must also get to know the "systemctl" command.
[root@ol7beta ~]# type systemctl
systemctl is /usr/bin/systemctl
Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Using test prep software to prepare for 12c OCP upgrade exam

Bobby Durrett's DBA Blog - Tue, 2014-04-15 17:54

I got the newly available Kaplan test prep software for the Oracle 12c OCP upgrade exam.

I took the test in certification mode when I was tired at the end of the day last week and got 44% right – fail! I usually wait until I get all the questions right before taking the real test, so I have a ways to go.

The practice test software has been useful in terms of showing me things I didn’t study very well or at all. I’m expecting to significantly improve my correct-answer percentage on my next pass.

I’m a little nervous, though, because it seems that the real test involves some questions that are generic database questions, and I don’t think the test prep software includes that section. If you look at the list of topics, there is a section called “Key DBA Skills”. I’d hope that after 19 years as an Oracle DBA I’d have some skills, but there are plenty of things I don’t do every day, such as setting up ASM. I guess I’ll just have to bone up on the key areas of pre-12c that I don’t use all the time and hope I’m not surprised.

Anyway, I’m at 44% but hoping to make some strides in the next few weeks.

- Bobby

 

Categories: DBA Blogs

Far Sync (Oracle 12c New Feature)

DBA Scripts and Articles - Tue, 2014-04-15 13:50

Oracle Far Sync is an Oracle 12c new feature for Oracle Data Guard. This feature is meant to resolve the performance problems induced by network latency when you maintain a standby database geographically distant from the primary database. In this type of situation you sometimes have to make a compromise between performance and data loss. [...]
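
In outline, a far sync instance sits close to the primary, receives redo synchronously, and relays it asynchronously to the distant standby. A hedged sketch of the moving parts (instance and service names fs1 and stby1 are hypothetical; consult the Data Guard documentation for a complete setup):

-- On the primary, create the far sync instance controlfile:
ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/fs1.ctl';

-- The primary ships redo synchronously to the nearby far sync instance fs1:
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=fs1 SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=fs1';

-- On the far sync instance, redo is forwarded asynchronously to the remote standby stby1:
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=stby1 ASYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=stby1';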

The post Far Sync (Oracle 12c New Feature) appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Supercharge your Applications with Oracle WebLogic

All enterprises are using an application server, but the question is why they need an application server. The answer is that they need to deliver applications and software to just about any device...

Categories: DBA Blogs

Why is Affinity Mask Negative in sp_configure?

Pythian Group - Tue, 2014-04-15 07:56

While looking at a SQL Server health report, I found the affinity mask parameter in the sp_configure output showing a negative value.

name                                minimum     maximum     config_value run_value
----------------------------------- ----------- ----------- ------------ -----------
affinity mask                       -2147483648 2147483647  -1066394617  -1066394617

Affinity mask is a SQL Server configuration option which is used to assign processors to specific threads for improved performance. To know more about affinity mask, read this. Usually, the value for affinity mask is a positive integer (in decimal format) in sp_configure. The article in the previous link shows an example of a binary bit mask and the corresponding decimal value to be set in sp_configure.

 

I was curious to find out why the value of affinity mask could be negative, because according to BOL http://technet.microsoft.com/en-us/library/ms187104(v=sql.105).aspx

 

The values for affinity mask are as follows:

  • A one-byte affinity mask covers up to 8 CPUs in a multiprocessor computer.

  • A two-byte affinity mask covers up to 16 CPUs in a multiprocessor computer.

  • A three-byte affinity mask covers up to 24 CPUs in a multiprocessor computer.

  • A four-byte affinity mask covers up to 32 CPUs in a multiprocessor computer.

  • To cover more than 32 CPUs, configure a four-byte affinity mask for the first 32 CPUs and up to a four-byte affinity64 mask for the remaining CPUs.

 

Time to unfold the mystery. Windows Server 2008 R2 supports more than 64 logical processors. From the ERRORLOG, I can see there are 40 logical processors on the server:

 

2014-03-31 18:18:18.18 Server      Detected 40 CPUs. This is an informational message; no user action is required.

 

Further down in the ERRORLOG, I see this server has four NUMA nodes configured.

 

Processor affinity turned on: node 0, processor mask 0x0000000000001c00.

Processor affinity turned on: node 1, processor mask 0x0000000000000007.

Processor affinity turned on: node 2, processor mask 0x0000000000700000.

Processor affinity turned on: node 3, processor mask 0x00000001c0000000.

:

Node configuration: node 0: CPU mask: 0x00000000000ffc00:0 Active CPU mask: 0x0000000000001c00:0.

Node configuration: node 1: CPU mask: 0x00000000000003ff:0 Active CPU mask: 0x0000000000000007:0.

Node configuration: node 2: CPU mask: 0x000000003ff00000:0 Active CPU mask: 0x0000000000700000:0.

Node configuration: node 3: CPU mask: 0x000000ffc0000000:0 Active CPU mask: 0x00000001c0000000:0.

 

These were hard NUMA nodes. No soft NUMA was configured on the server (no related registry keys exist).

 

An important thing to note is that the affinity mask value for sp_configure ranges from -2147483648 to 2147483647; that is 2147483648 + 2147483647 + 1 = 4294967296 = 2^32 = the range of the int data type. Hence the affinity mask value from sp_configure is not sufficient to hold more than 64 CPUs. To deal with this, ALTER SERVER CONFIGURATION was introduced in SQL Server 2008 R2 to support setting processor affinity for more than 64 CPUs. However, the value of affinity mask in sp_configure, in such cases, is still an *adjusted* value, which we are going to work out below.

 

Let me paste the snippet from the ERRORLOG again:

 

Processor affinity turned on: node 0, processor mask 0x0000000000001c00.

Processor affinity turned on: node 1, processor mask 0x0000000000000007.

Processor affinity turned on: node 2, processor mask 0x0000000000700000.

Processor affinity turned on: node 3, processor mask 0x00000001c0000000.

 

As it says, the underlined values above are the processor masks, i.e. the processor affinity or affinity mask. These values correspond to online_scheduler_mask in sys.dm_os_nodes, which makes up the ultimate value for affinity mask in sp_configure. Ideally, affinity mask should be the sum of these values. Let’s add these hexadecimal values using the Windows calculator (choose Programmer from the View menu):

 

  0x0000000000001c00
+ 0x0000000000000007
+ 0x0000000000700000
+ 0x00000001c0000000
--------------------
= 0x00000001C0701C07
= 7523539975 (decimal)

 

So, affinity mask in sp_configure should have been equal to 7523539975. Since this number is greater than the limit of 2^32, i.e. 4294967296, we see an *adjusted* value (apparently a negative value). The reason I call it an *adjusted* value is that the sum of the processor mask values (in decimal) is adjusted: the int range, i.e. 4294967296, is subtracted from it repeatedly until the result fits within the range of int. Here’s the calculation, which explains the theory:

 

7523539975 - 4294967296 - 4294967296 = -1066394617 = the negative value seen in sp_configure

name                                minimum     maximum     config_value run_value
----------------------------------- ----------- ----------- ------------ -----------
affinity mask                       -2147483648 2147483647  -1066394617  -1066394617

That explains why affinity mask shows up as a negative number in sp_configure.
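
The same arithmetic can be double-checked in T-SQL, using the node masks from the ERRORLOG above:

SELECT CAST(0x0000000000001C00 AS bigint)
     + CAST(0x0000000000000007 AS bigint)
     + CAST(0x0000000000700000 AS bigint)
     + CAST(0x00000001C0000000 AS bigint) AS summed_mask,      -- 7523539975
       7523539975 - 4294967296 - 4294967296 AS adjusted_value; -- -1066394617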

 

To make the calculation easier, I wrote a small script to find out the sp_configure equivalent value of affinity mask in the case of NUMA nodes:


-- Find out the sp_configure equivalent value of affinity mask in case of NUMA nodes
--------------------------------------------------------------------------------------
BEGIN
    DECLARE @real_value bigint;                -- holds the sum of online_scheduler_mask
    DECLARE @range_value bigint = 4294967296;  -- range of the int datatype, i.e. 2^32
    DECLARE @config_value int = 0;             -- affinity mask value as shown in sp_configure; set below

    -- Fetch the sum of online scheduler masks, excluding node id 64, i.e. the hidden scheduler
    SET @real_value = (SELECT SUM(online_scheduler_mask)
                       FROM sys.dm_os_nodes
                       WHERE memory_node_id <> 64);

    -- Adjust the sum down into the int range, as sp_configure does
    WHILE (@real_value > 2147483647)
    BEGIN
        SET @real_value = @real_value - @range_value;
    END;

    -- Copy the value for affinity mask as seen in sp_configure
    SET @config_value = @real_value;

    PRINT 'The current config_value for the affinity mask parameter in sp_configure is: ' + CAST(@config_value AS varchar);
END;

This script will give the current config value in any of these cases: NUMA nodes, more than 64 processors, SQL Server 2008 R2.

 

Hope this post helps you if you were as puzzled as I was on seeing the negative number in sp_configure.

 

Stay tuned!

Categories: DBA Blogs

adding NOT NULL columns to an existing table ... implications make me grumpy

Grumpy old DBA - Sat, 2014-04-12 07:56
This is DBA basics 101 in the Oracle world, but it is also something that we grumpy DBA types forget from time to time. We have an existing table in a schema that is populated with data. Something like this, say:


create table dbaperf.has_data ( column_one varchar2(10) not null, column_two number(10) not null);

insert into dbaperf.has_data(column_one, column_two) values('First row',13);
insert into dbaperf.has_data(column_one, column_two) values('Another',42);
commit;

Now you need to add another column that is also NOT NULL. Chris Date is not happy that the vendor implementations of the relational model allow null columns. Be aware of any potential NULL columns in rows and handle them carefully ( IS null / IS not null ) to avoid messing up results.

But anyhow we are going to add in a new column that is NOT NULL.

How easy that is to do against an Oracle table depends on whether one is also supplying a DEFAULT value for the new column. If you do not supply a DEFAULT value, what happens here?

 alter table dbaperf.has_data add ( column_three char(1) NOT NULL );

You get: ORA-01758: table must be empty to add mandatory (NOT NULL) column

To get around that you have to do this in three steps:
  • Add in the new column
  • Populate all the new columns with a value ( data migration )
  • Make the column NOT NULL
alter table dbaperf.has_data add ( column_three char(1) );

update dbaperf.has_data set column_three = 'X' where column_one = 'First row';
update dbaperf.has_data set column_three = 'X' where column_one = 'Another';

alter table dbaperf.has_data modify ( column_three NOT NULL );

Things get easier if you do this with a DEFAULT clause on the new column. The problem, of course, is that some columns have a reasonable default value while others may never get agreement on a default value. A min or max type column can probably have an easy default; others, not so much.

alter table dbaperf.has_data add ( column_four number(21,2) default 0 NOT NULL );
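
A quick query against the example table (illustrative output) confirms that the existing rows picked up the default:

select column_one, column_three, column_four from dbaperf.has_data;

COLUMN_ONE COLUMN_THREE COLUMN_FOUR
---------- ------------ -----------
First row  X                      0
Another    X                      0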

All of this discussion sidesteps the implications of adding a new column to a large existing table or partitioned table and fragging up the blocks ... that is a little beyond 101 for now.
Categories: DBA Blogs

Moving away from wordpress

Lutz Hartmann - Fri, 2014-04-11 13:55

I am sick of the advertisements on my site.

Therefore I am about to move most of my posts to

http://sysdba.ch/index.php/postlist

 

Thanks for following my blog for so long.

Lutz Hartmann


Categories: DBA Blogs

Oracle RMAN Restore to the Same Machine as the Original Database

Pythian Group - Fri, 2014-04-11 07:52

Among the most critical, but often most neglected, database administration tasks is testing restores from backup. But sometimes you don’t have a test system handy and need to test the restore on the same host as the source database. In such situations, the biggest fear is overwriting the original database. Here is a simple procedure you can follow which will not overwrite the source.

  1. Add an entry to the oratab for the new instance, and source the new environment:
    oracle$ cat >> /etc/oratab <<EOF
    > foo:/u02/app/oracle/product/11.2.0/dbhome_1:N
    > EOF
    
    oracle$ . oraenv
    ORACLE_SID[oracle]? foo
    The Oracle base remains unchanged with value /u02/app/oracle
  2. Create a pfile and spfile with a minimum set of parameters for the new instance. In this case the source database is named ‘orcl’ and the new database will have a DB unique name of ‘foo’. This example will write all files to the +data ASM diskgroup, under directories for ‘foo’. You could use a filesystem directory as the destination as well. Just make sure you have enough space wherever you plan to write:
    oracle$ cat > $ORACLE_HOME/dbs/initfoo.ora <<EOF
    > db_name=orcl
    > db_unique_name=foo
    > db_create_file_dest=+data
    > EOF
    
    oracle$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Wed Apr 9 15:35:00 2014
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to an idle instance.
    
    SQL> create spfile from pfile;
    File created.
    
    SQL> exit
    Disconnected
  3. Now, using the backup pieces from your most recent backup, try restoring the controlfile only. Start with the most recently written backup piece, since RMAN writes the controlfile at the end of the backup. It may fail once or twice, but keep trying backup pieces until you find the controlfile:
    oracle$ ls -lt /mnt/bkup
    total 13041104
    -rwxrwxrwx 1 root root      44544 Apr  4 09:32 0lp4sghk_1_1
    -rwxrwxrwx 1 root root   10059776 Apr  4 09:32 0kp4sghi_1_1
    -rwxrwxrwx 1 root root 2857394176 Apr  4 09:32 0jp4sgfr_1_1
    -rwxrwxrwx 1 root root 3785719808 Apr  4 09:31 0ip4sgch_1_1
    -rwxrwxrwx 1 root root 6697222144 Apr  4 09:29 0hp4sg98_1_1
    -rwxrwxrwx 1 root root    3647488 Apr  4 09:28 0gp4sg97_1_1
    
    $ rman target /
    Recovery Manager: Release 11.2.0.3.0 - Production on Wed Apr 9 15:37:10 2014
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    connected to target database (not started)
    
    RMAN> startup nomount;
    Oracle instance started
    Total System Global Area     238034944 bytes
    Fixed Size                     2227136 bytes
    Variable Size                180356160 bytes
    Database Buffers              50331648 bytes
    Redo Buffers                   5120000 bytes
    
    RMAN> restore controlfile from '/mnt/bkup/0lp4sghk_1_1';
    Starting restore at 09-APR-14
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=1 device type=DISK
    channel ORA_DISK_1: restoring control file
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 04/09/2014 15:42:10
    ORA-19870: error while restoring backup piece /mnt/bkup/0lp4sghk_1_1
    ORA-19626: backup set type is archived log - can not be processed by this conversation
    
    RMAN> restore controlfile from '/mnt/bkup/0kp4sghi_1_1';
    Starting restore at 09-APR-14
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=19 device type=DISK
    channel ORA_DISK_1: restoring control file
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
    output file name=+DATA/foo/controlfile/current.348.844443549
    Finished restore at 09-APR-14

    As you can see above, RMAN will report the path and name of the controlfile that it restores. Use that path and name below:

    RMAN> sql "alter system set
    2>  control_files=''+DATA/foo/controlfile/current.348.844443549''
    3>  scope=spfile";
    
    sql statement: alter system set 
    control_files=''+DATA/foo/controlfile/current.348.844443549'' 
    scope=spfile
  4. Mount the database with the newly restored controlfile, and perform a restore to the new location. The ‘set newname’ command changes the location RMAN will write the files to: the db_create_file_dest of the new instance. The ‘switch database’ command updates the controlfile to reflect the new file locations. When the restore is complete, use media recovery to apply the archived redo logs.
    RMAN> startup force mount
    Oracle instance started
    database mounted
    Total System Global Area     238034944 bytes
    Fixed Size                     2227136 bytes
    Variable Size                180356160 bytes
    Database Buffers              50331648 bytes
    Redo Buffers                   5120000 bytes
    
    RMAN> run {
    2> set newname for database to new;
    3> restore database;
    4> }
    
    executing command: SET NEWNAME
    Starting restore at 09-APR-14
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=23 device type=DISK
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00002 to +data
    channel ORA_DISK_1: reading from backup piece /mnt/bkup/0hp4sg98_1_1
    channel ORA_DISK_1: piece handle=/mnt/bkup/0hp4sg98_1_1 tag=TAG20140404T092808
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:01:35
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00001 to +data
    channel ORA_DISK_1: restoring datafile 00004 to +data
    channel ORA_DISK_1: restoring datafile 00005 to +data
    channel ORA_DISK_1: reading from backup piece /mnt/bkup/0ip4sgch_1_1
    channel ORA_DISK_1: piece handle=/mnt/bkup/0ip4sgch_1_1 tag=TAG20140404T092808
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:01:05
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00003 to +data
    channel ORA_DISK_1: reading from backup piece /mnt/bkup/0jp4sgfr_1_1
    channel ORA_DISK_1: piece handle=/mnt/bkup/0jp4sgfr_1_1 tag=TAG20140404T092808
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:55
    Finished restore at 09-APR-14
    
    RMAN> switch database to copy;
    
    datafile 1 switched to datafile copy "+DATA/foo/datafile/system.338.844531637"
    datafile 2 switched to datafile copy "+DATA/foo/datafile/sysaux.352.844531541"
    datafile 3 switched to datafile copy "+DATA/foo/datafile/undotbs1.347.844531691"
    datafile 4 switched to datafile copy "+DATA/foo/datafile/users.350.844531637"
    datafile 5 switched to datafile copy "+DATA/foo/datafile/soe.329.844531637"
    
    RMAN> recover database;
    
    Starting recover at 09-APR-14
    using channel ORA_DISK_1
    starting media recovery
    archived log file name=/u02/app/oracle/product/11.2.0/dbhome_1/dbs/arch1_25_841917031.dbf thread=1 sequence=25
    archived log file name=/u02/app/oracle/product/11.2.0/dbhome_1/dbs/arch1_26_841917031.dbf thread=1 sequence=26
    archived log file name=/u02/app/oracle/product/11.2.0/dbhome_1/dbs/arch1_27_841917031.dbf thread=1 sequence=27
    media recovery complete, elapsed time: 00:00:01
    Finished recover at 09-APR-14
    
    RMAN> exit
    
    Recovery Manager complete.
  5. Before opening the database, we need to re-create the controlfile so that we don’t step on any files belonging to the source database. The first step is to generate a “create controlfile” script, and to locate the trace file where it was written:
    $ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Wed Apr 16 10:56:28 2014
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    
    SQL> alter database backup controlfile to trace;
    Database altered.
    
    SQL> select tracefile
      2  from v$session s,
      3       v$process p
      4  where s.paddr = p.addr
      5  and s.audsid = sys_context('USERENV', 'SESSIONID');
    TRACEFILE
    ----------------------------------------------------------
    /u02/app/oracle/diag/rdbms/foo/foo/trace/foo_ora_19168.trc
    
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition
  6. Next, we need to edit the controlfile creation script so that all we have left is the “create controlfile … resetlogs” statement, and so that all file paths to the original database are removed or changed to reference the db_unique_name of the test database. Below is a pipeline of clumsy sed commands I created that produces a script called create_foo_controlfile.sql. It should take care of most permutations of these trace controlfile scripts.
    $ sed -n '/CREATE.* RESETLOGS/,$p' /u02/app/oracle/diag/rdbms/foo/foo/trace/foo_ora_18387.trc | \
    > sed '/.*;/q' | \
    > sed 's/\(GROUP...\).*\( SIZE\)/\1\2/' | \
    > sed 's/orcl/foo/g' | \
    > sed 's/($//' | \
    > sed 's/[\)] SIZE/SIZE/' | \
    > grep -v "^    '" > create_foo_controlfile.sql

    If it doesn’t work for you, just edit the script from your trace file, so that you end up with something like this:

    CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS  ARCHIVELOG
        MAXLOGFILES 16
        MAXLOGMEMBERS 3
        MAXDATAFILES 100
        MAXINSTANCES 8
        MAXLOGHISTORY 292
    LOGFILE
      GROUP 1 
      SIZE 50M BLOCKSIZE 512,
      GROUP 2 
      SIZE 50M BLOCKSIZE 512,
      GROUP 3 
      SIZE 50M BLOCKSIZE 512
    -- STANDBY LOGFILE
    DATAFILE
      '+DATA/foo/datafile/system.338.845027673',
      '+DATA/foo/datafile/sysaux.347.845027547',
      '+DATA/foo/datafile/undotbs1.352.845027747',
      '+DATA/foo/datafile/users.329.845027673',
      '+DATA/foo/datafile/soe.350.845027673'
    CHARACTER SET WE8MSWIN1252
    ;
  7. The next step is to use the above script to open the database with the resetlogs option on a new OMF controlfile:
    $ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Wed Apr 16 10:56:28 2014
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    
    SQL> alter system reset control_files scope=spfile;
    System altered.
    
    SQL> startup force nomount
    ORACLE instance started.
    
    Total System Global Area  238034944 bytes
    Fixed Size                  2227136 bytes
    Variable Size             180356160 bytes
    Database Buffers           50331648 bytes
    Redo Buffers                5120000 bytes
    
    SQL> @create_foo_controlfile
    Control file created.
    
    SQL> select value from v$parameter where name = 'control_files';
    VALUE
    -------------------------------------------
    +DATA/foo/controlfile/current.265.845031651
    
    SQL> alter database open resetlogs;
    Database altered.
  8. Last but not least, don’t forget to provide a tempfile or two to the temporary tablespaces:
    SQL> alter tablespace temp
      2  add tempfile size 5G;
    Tablespace altered.
Categories: DBA Blogs

The Growing Trend Toward Data Infrastructure Outsourcing

Pythian Group - Fri, 2014-04-11 07:45

Today’s blog post is the first of three in a series dedicated to data infrastructure outsourcing, with excerpts from our latest white paper.

Despite the strong push to outsource corporate functions that began more than two decades ago, many IT shops have been hesitant to outsource their data management requirements.

Generally, the more mission-critical the data, the more organizations have felt compelled to control it, assigning that responsibility to their brightest and most trusted in-house database experts. The reasoning has been that with greater control comes greater security.

That mindset is rapidly changing. Macroeconomic trends are putting mounting pressure on organizations to rethink the last bastion of IT in-housing—data infrastructure management—and instead look for flexible, cost-effective outsourcing solutions that can help them improve operational efficiency, optimize performance, and increase overall productivity.

This trend toward outsourcing to increase productivity, and not simply reduce costs, is supported by a recent Forrester Research report that highlights the key reasons companies are looking for outside help: too many new technologies and data sources, and difficulty finding people with the skills and experience to optimize and manage them.

To learn how to develop a data infrastructure sourcing strategy, download the rest of our white paper, Data Infrastructure Outsourcing.

Categories: DBA Blogs

What Happens in Vegas, Doesn’t Stay in Vegas – Collaborate 14

Pythian Group - Thu, 2014-04-10 08:04

IOUG’s Collaborate 14 is star-studded this year, with the Pythian team illuminating various tracks in the presentation rooms and acting like a magnet in the expo halls of The Venetian for data lovers. It’s a kind of rendezvous for those who love their data. So if you want your data to be loved, feel free to drop by Pythian booth 1535.

Leading from the front is Paul Vallée with an eye-catching title and real-world gems. Then there is Michael Abbey’s rich experience, Marc Fielding’s in-depth technology coverage, and Vasu’s forays into Apps Database Administration. There is my humble attempt at Exadata IORM, Rene’s great helpful tips, and Alex Gorbachev’s mammoth coverage of mammoth data. It’s all there, with much more to learn, share and know.

The Vegas Strip is buzzing with the commotion of Oracle. Even the high rollers are turning their necks to see what the fuss is about. Poker faces have broken into amazed grins, and even the weird kerbside card distribution has stopped. Everybody is focused on the pleasures of Oracle technologies.

Courtesy of social media, all of this fun isn’t confined to Vegas. You can follow @Pythian on Twitter to keep up with it all, live and in real time.

Come Enjoy!

Categories: DBA Blogs

Paul Vallée’s Interview with Oracle Profit Magazine

Pythian Group - Wed, 2014-04-09 23:00

Aaron Lazenby, editor of Oracle’s Profit Magazine, interviewed Pythian founder Paul Vallée this week to discuss the growing risk of internal threats to IT.

“What we need to create is complete accountability for everything that happens around a data center, and that’s where our industry is not up to snuff right now. We tend to think that if you secure access to the perimeter of the data center, then what happens in the meeting inside can be unsupervised. But that’s not good enough,” says Paul.

The interview, Inside Job, is a preview of Paul’s Collaborate ’14 session taking place later today in Las Vegas. If you’re at Collaborate, make sure you don’t miss Paul’s presentation, Thou Shalt Not Steal: Securing Your Infrastructure in the Age of Snowden. The presentation begins at 4:15 PM Pacific at The Venetian, Level 3 – Murano 3306.

What are your thoughts? How else can organizations mitigate the risk of internal threats? Comment below.

Categories: DBA Blogs

SQL Developer’s Interface for GIT: Interacting with a GitHub Repository Part 1

Galo Balda's Blog - Wed, 2014-04-09 22:45

In my previous post, I showed how to clone a GitHub repository using SQL Developer. In this post I’m going to show how to synchronize the remote and local repositories after the remote gets modified.

Here I use GitHub to commit a file called sp_test_git.pls. You can create files by clicking on the icon the red arrow is pointing to.

[Screenshot: new_file]

The content of the file is a PL/SQL procedure that prints a message.

[Screenshot: file_content]
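
For illustration, the procedure body might look something like this (my reconstruction; the exact message in the screenshot may differ):

create or replace procedure sp_test_git
as
begin
  dbms_output.put_line('Testing the SQL Developer interface for Git');
end;
/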

At this point, the remote repository and the local repository are out of sync. The first thing that you may want to do before modifying any repository, is to make sure that you have the most current version of it so that it includes the changes made by other developers. Let’s synchronize remote and local.

Make sure you open the Versions window. Go to the main menu and click Team -> Versions.

[Screenshot: versions]

Open the Local branch and click on master, then go to the main menu and click Team -> Git -> Fetch to open the “Fetch from Git” wizard. Fetching a repository copies changes from the remote repository into your local system, without modifying any of your current branches. Once you have fetched the changes, you can merge them into your branches or simply view them. We can see the changes in the Branch Compare window by going to the main menu and clicking Team -> Git -> Branch Compare.

[Screenshot: branch_compare]

Branch Compare shows that sp_test_git.pls has been fetched from the remote master branch. We can right-click on this entry and select Compare to see the differences.

[Screenshot: compare]

The window on the left displays the content of the fetched file and the window on the right displays the content of the same file in the local repository. In this case the right window is empty because this is a brand new file that doesn’t exist locally. Let’s accept the changes and merge them into the local repository: we go to the Branch Compare window, right-click on the entry, select Merge and click on the “Ok” button.

[Screenshot: merge]

Now the changes should have been applied to the local repository.

[Screenshot: local_update]

We can go to the path where the local repository is located and confirm that sp_test_git.pls is there.
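
As an aside, the same fetch, compare and merge cycle can be reproduced from a command line, assuming the remote is registered as origin:

git fetch origin
git diff master origin/master
git merge origin/master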

 

 


Filed under: GIT, SQL Developer, Version Control Tagged: GIT, SQL Developer, Version Control
Categories: DBA Blogs