DBA Blogs

Oracle announces the Unbreakable DB Appliance

Freek D’Hooge - Wed, 2011-09-21 12:33

More than 10 years after Oracle’s first appliance attempt with Raw Iron, and 3 years after the release of Exadata, Oracle has now announced the Unbreakable DB Appliance.

This “cluster in a box” consists of a 4 RU chassis housing 2 server nodes with 96 GB of memory per node, 12 TB of raw shared disk storage (24 disks) and 292 GB of flash disk.
The two server nodes have a total of 24 CPU cores, but cores can be disabled.
This allows for sub-capacity licensing of the software (with a minimum of 4 cores).

On the software side, the appliance runs Oracle Linux, 11gR2 Grid Infrastructure and 11gR2 database software. Databases on this appliance can run as single instance, RAC or RAC One Node.
Oracle Enterprise Manager is also part of the software stack.

Oracle claims one-button installation and patching of the software stack.
The appliance also has “phone home” functionality, which automatically creates a service request when a problem is detected.

The hardware list price is $50,000 (regardless of how many cores you activate), and standard database licensing applies to the software.
This means that existing CPU licences can be transferred to this appliance.

Oracle positions this system below the Exadata quarter rack, and it is also worth mentioning that this appliance is not expandable.

So much for the product launch information.

Some questions / remarks I have:

  • According to the presentation, the hardware price remains the same regardless of how many cores you activate (namely $50,000).
    In my opinion, this means that no one will buy this appliance just to activate 4 cores.
    There are much cheaper solutions when you only need a low number of cores (certainly when you consider that most companies already have a SAN which can be used for their Oracle databases).
  • There are 24 disks in the appliance, which seems low (certainly compared to the 24 CPU cores).
    However, keep in mind that this storage is dedicated and probably (I don’t have confirmation on this) capable of ASM intelligent data placement and command queuing.
    SAN vendors normally use an estimate of 180 IOPS per SAN disk. Oracle, however, uses an estimate of 300 IOPS per cell disk for Exadata, and tests done by Glenn Fawcett show that they can actually perform even better (around 400 IOPS).
    http://glennfawcett.wordpress.com/2011/05/10/exadata-drives-exceed-the-laws-of-physics-asm-with-intelligent-placement-improves-iops/
    Using the 300 IOPS figure, the 24 disks deliver roughly 24 x 300 = 7,200 IOPS, the equivalent of about 40 SAN disks at 180 IOPS each (disks that may not be used by any other application, so in reality even more SAN disks), which already looks very different.
    Now, I’m still unsure how it will perform with write-intensive databases (OLTP or DWH), certainly when several databases are consolidated on this appliance.
    As this appliance is not expandable, the number of disks may be a weak point compared to the number of CPU cores.
    I’m hoping that someone like Kevin Closson (poke poke) will be able to shed some light on this, as my knowledge in this area is rather limited :-)
  • In the presentation it was mentioned that the flash storage is used for the redo logs, but it is unclear whether it could also be used to store datafiles or serve as a cache (as with the Exadata Smart Flash Cache).

As with many things, the proof of the pudding is in the eating, so I’m looking forward to some benchmarks and presentations by real-world customers.
And if anyone from Oracle is reading this, you may always send me a demo machine so I can do some testing on my own  ;-))

Update 20:12: fixed wrong memory specification


Categories: DBA Blogs

SRVCTL For RAC

Ayyu's Blog - Wed, 2011-09-07 10:22
Categories: DBA Blogs

RAC investigations part I

Freek D’Hooge - Wed, 2011-08-10 17:06

Environment description

2-node RAC with Oracle 11.2.0.2.2
Oracle Linux 5.6 with the Unbreakable Enterprise Kernel (2.6.32-100.36.1.el5uek)

Conducted tests

test_srv is a service that has the instances running on node1 and node2 as preferred instances.
On node1 the service was manually stopped.
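
Presumably the stop was done with something along these lines (my assumption; the exact command was not captured):

[grid@node1 ~]$ srvctl stop service -d mydb -s test_srv -i mydb1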

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=OFFLINE
STATE=OFFLINE

CARDINALITY_ID=2
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node2

Issue a “shutdown abort” on the instance running on node2
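
As will turn out to matter later in this post, the shutdown abort was issued from SQL*Plus rather than through srvctl, so along these lines:

[orauser@node2 ~]$ export ORACLE_SID=mydb2
[orauser@node2 ~]$ sqlplus / as sysdba
SQL> shutdown abort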

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=OFFLINE
STATE=OFFLINE

CARDINALITY_ID=2
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node1

start the instance again

[grid@node1 ~]$ srvctl start instance -d mydb -i mydb2

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node2

CARDINALITY_ID=2
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node1

The service is now running on both instances, although before the crash the service was set offline on node1.

Same test, but this time the service is stopped on all instances

[grid@node1 ~]$ srvctl stop service -d mydb -s test_srv

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=OFFLINE
STATE=OFFLINE

CARDINALITY_ID=2
DEGREE_ID=1
TARGET=OFFLINE
STATE=OFFLINE

[grid@node1 ~]$ srvctl stop instance -d mydb -i mydb2 -o abort

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=OFFLINE
STATE=OFFLINE

CARDINALITY_ID=2
DEGREE_ID=1
TARGET=OFFLINE
STATE=OFFLINE

This time the service stays offline on both instances.
But what happens if we start the instance again:

[grid@node1 ~]$ srvctl start instance -d mydb -i mydb2

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node2

CARDINALITY_ID=2
DEGREE_ID=1
TARGET=OFFLINE
STATE=OFFLINE

Now the service has started again on the restarted instance.
The explanation is that the service has an automatic management policy, so it is started together with the instance, which is why it came up on the restarted node.
The failover itself seems to be expected behaviour, as it matches what would happen with a preferred / available configuration.

For the third test, we will reconfigure the service to have a preferred and an available instance:

[grid@node1 ~]$ srvctl stop service -d mydb -s test_srv
[grid@node1 ~]$ srvctl modify service -d mydb -s test_srv -n -i mydb2 -a mydb1

[grid@node1 ~]$ srvctl config service -d mydb -s test_srv
Service name: test_srv
Service is enabled
Server pool: mydb_test_srv
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: mydb2
Available instances: mydb1

[grid@node1 ~]$ srvctl start service -d mydb -s test_srv -i mydb2
[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node2

The service is running on its preferred instance, which we will now crash:

[grid@node1 ~]$ srvctl stop instance -d mydb -i mydb2 -o abort

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=OFFLINE

Hmm, I actually expected a relocation here…
As I have other services with a preferred / available configuration, I know this service should fail over.

[grid@node1 ~]$ srvctl status service -d mydb -s test_srv
Service test_srv is not running.

[grid@node1 ~]$ srvctl config service -d mydb -s test_srv
Service name: test_srv
Service is enabled
Server pool: mydb_test_srv
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: mydb2
Available instances: mydb1

[grid@node1 ~]$ srvctl status database -d mydb
Instance mydb1 is running on node node1
Instance mydb2 is not running on node node2

I could find no clues in the various cluster log files as to why the relocation did not occur.
More testing will be necessary.
Also note that the output of crsctl status resource does not say on which node or instance the service is expected to be online.
By using the -v flag, however, we can see the LAST_SERVER attribute:

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -v
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
LAST_SERVER=node2
STATE=OFFLINE
TARGET=ONLINE
CARDINALITY_ID=1
CREATION_SEED=137
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.mydb.test_srv.svc 1 1
INCARNATION=5
LAST_RESTART=08/10/2011 16:32:53
LAST_STATE_CHANGE=08/10/2011 16:34:03
STATE_DETAILS=
INTERNAL_STATE=STABLE

After starting the instance again, the service was available again:

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node2

A second run of this test gave the same result.
Manually relocating the service did work though:

[grid@node1 ~]$ srvctl relocate service -d mydb -s test_srv -i mydb1 -t mydb2
[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node2

What if I removed the service and recreated it directly as preferred / available:

[grid@node1 ~]$ srvctl stop service -d mydb -s test_srv

[grid@node1 ~]$ srvctl remove service -d mydb -s test_srv

[grid@node1 ~]$ srvctl add service -d mydb -s test_srv -r mydb2 -a mydb1 -y AUTOMATIC -P BASIC -e SELECT
PRCD-1026 : Failed to create service test_srv for database mydb
PRKH-1014 : Current user grid is not the same as oracle owner orauser of oracle home /opt/oracle/orauser/product/11.2.0.2/dbhome_1.

Could the user who creates or modifies the service make a difference?
Let us test it:

[grid@node1 ~]$ su - orauser
Password:

[orauser@node1 ~]$ srvctl add service -d mydb -s test_srv -r mydb1,mydb2 -y AUTOMATIC -P BASIC -e SELECT

[orauser@node1 ~]$ srvctl config service -d mydb -s test_srv
Service name: test_srv
Service is enabled
Server pool: mydb_test_srv
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SELECT
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: mydb1,mydb2
Available instances:

[orauser@node1 ~]$ /opt/grid/11.2.0.2/bin/crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=OFFLINE
STATE=OFFLINE

CARDINALITY_ID=2
DEGREE_ID=1
TARGET=OFFLINE
STATE=OFFLINE

Now modify it:

[orauser@node1 ~]$ srvctl modify service -d mydb -s test_srv -n -i mydb2 -a mydb1

[orauser@node1 ~]$ srvctl config service -d mydb -s test_srv
Service name: test_srv
Service is enabled
Server pool: mydb_test_srv
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SELECT
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: mydb2
Available instances: mydb1

[orauser@node1 ~]$ srvctl start service -d mydb -s test_srv -i mydb2

[orauser@node1 ~]$ /opt/grid/11.2.0.2/bin/crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node2

[orauser@node1 ~]$ srvctl stop instance -d mydb -i mydb2 -o abort

[orauser@node1 ~]$ /opt/grid/11.2.0.2/bin/crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=OFFLINE

Nope, the user modifying the service has nothing to do with it.
I also tested the scenario where I directly created a preferred / available service, but in that case the failover did not work either.
After some more testing, though, I found the reason.
During the first test I had shut down the instance via SQL*Plus, not via srvctl. And the other services I talked about had failed over during that test (I never did a failback).
After doing the shutdown abort via SQL*Plus again, the failover worked again.

[orauser@node1 ~]$ /opt/grid/11.2.0.2/bin/crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node2

[orauser@node2 ~]$ export ORACLE_SID=mydb2
[orauser@node2 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Wed Aug 10 18:28:29 2011

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> shutdown abort
ORACLE instance shut down.

[orauser@node1 ~]$ /opt/grid/11.2.0.2/bin/crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node1

SQL> startup
ORACLE instance started.

Total System Global Area 3140026368 bytes
Fixed Size                  2230600 bytes
Variable Size            1526728376 bytes
Database Buffers         1593835520 bytes
Redo Buffers               17231872 bytes
Database mounted.
Database opened.

[orauser@node1 ~]$ /opt/grid/11.2.0.2/bin/crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node1

As expected, starting the instance again did not trigger a failback of the service.

The question now is whether the failover not happening when the shutdown is issued via srvctl is expected behaviour or not.
To find out, one would probably have to open a service request, answer a couple of questions that have nothing to do with the issue, escalate, and still wait several months.
Do I sound bitter now?

Conclusion:

  • When restarting an instance, an offline service that has this instance listed as a preferred instance will be started (management policy = automatic).
  • When an instance on which a service was running fails, the service is started on at least one other preferred instance.
  • The service will remain running on that instance, even when the original instance is started again (in which case the service will run on both instances).
  • When a service has a preferred / available configuration, the service will fail over to the available instance, but it will not fail back afterwards.
  • Failover in a preferred / available configuration does not happen when the instance is stopped via “srvctl stop instance -d <db_unique_name> -i <instance_name> -o abort”.

Questions remaining:

  • What if there were more than 2 nodes, with a service that has all three or more instances listed as preferred, but is currently running on only one node?
    If the instance on which that service is running fails, would the service then be started on all preferred instances or on only one of them?
  • What if, in the above case, the service was running on 2 nodes?
    Would it still be started on the other nodes?
  • And what if one of the instances was configured as available instead of preferred? Would the service still be started on the preferred instances, on the available instance, or on both?
  • And last but not least, is the srvctl shutdown behaviour a bug or not?

It would be neat if someone with access to a RAC of 3 or more nodes could run the above tests and send me the results :-)
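
For reference, the starting point for such a test could be a service created along these lines (a sketch only, untested here; the third instance name is assumed):

[orauser@node1 ~]$ srvctl add service -d mydb -s test_srv -r mydb1,mydb2,mydb3 -y AUTOMATIC -P BASIC -e SELECT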

Update 13/08/2011:
Amar Lettat, one of my colleagues at Uptime, has pointed me to MOS note 1324574.1 – “11gR2 RAC Service Not Failing Over To Other Node When Instance Is Shut Down”.
This note clearly states that the service not failing over when the instance is shut down with srvctl is expected behaviour in 11.2.
It also points to the Oracle documentation, where this behaviour is documented as well.
So not a bug, just a well-documented change in behaviour.


Categories: DBA Blogs

Configuring FTP on Exadata

Alejandro Vargas - Sun, 2011-07-24 23:30

Exadata is installed with the minimum set of RPMs required to make it work as a database server.
In many cases you will need to install the RPMs required for specific functions, such as FTP, yourself.

Exadata is installed with either Oracle Enterprise Linux or Solaris Express. These instructions match the Linux distribution, and can be used on any RH-compatible Linux, not only OEL on Exadata.

You can find the RPMs on the Oracle Enterprise Linux distribution disk, downloadable from edelivery.oracle.com.

Install the Following RPMs:

[root@exand02 rpms]# ls
ftp-0.17-35.el5.x86_64.rpm    pam-rpms                           vsftpd-2.0.5-16.el5_4.1.x86_64.rpm
lftp-3.7.11-4.el5.x86_64.rpm  tftp-server-0.49-2.0.1.x86_64.rpm

The Command to Install

[root@exand02 rpms]# rpm -Uivh vsftpd-2.0.5-16.el5_4.1.x86_64.rpm ftp-0.17-35.el5.x86_64.rpm lftp-3.7.11-4.el5.x86_64.rpm

Start Service vsftpd

[root@exand02 rpms]# service vsftpd start
Starting vsftpd for vsftpd: [ OK ]
[root@exand02 rpms]# service vsftpd status
vsftpd (pid 9274) is running...

Configure Automatic vsftp Start

[root@exand02 rpms]# chkconfig vsftpd on

[root@exand02 rpms]# chkconfig --list | grep vsftpd
vsftpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off

echo "service vsftpd status" >> /etc/rc.local

[root@exand02 rpms]# tail -2 /etc/rc.local
########### END DO NOT REMOVE Added by Oracle Exadata ###########
service vsftpd start

Edit /etc/vsftpd.conf

Set the following parameters on vsftpd.conf

#anonymous_enable=YES (changed to NO to allow Exadata users to ftp)
anonymous_enable=NO

#userlist_enable=YES (changed to NO to allow Exadata users to ftp)
userlist_enable=NO
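
After changing vsftpd.conf, restart the service so that the new settings take effect (this step is implied between the edit above and the test below):

[root@exand02 rpms]# service vsftpd restart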

Test

[root@exand02 vsftpd]# ftp exand02

Connected to exand02 (10.25.104.130).
220 (vsFTPd 2.0.5)
Name (exand02:root): oracle
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.

ftp> pwd
257 "/home/oracle"

ftp> ls
227 Entering Passive Mode (10,25,104,130,85,192)
150 Here comes the directory listing.
drwxr-xr-x 3 1001 500 4096 May 20 19:47 local
drwxr----- 3 1001 500 4096 May 03 12:20 oradiag_oracle
-rw-r--r-- 1 1001 500 1020 Jun 01 14:41 ~oraclec
226 Directory send OK.

ftp> bye
221 Goodbye.

Categories: DBA Blogs

11.2.0.2 Creating a Standby or a Clone Database using Rman Duplicate From Active Database

Alejandro Vargas - Sun, 2011-07-24 03:20

There are a few things in 11.2.0.2 that you need to take into account when creating a standby database or a clone database from an active database using the RMAN DUPLICATE DATABASE command.

Points 2, 9 and 10 of this document contain the details I’ve found I needed to change in order to get the clone or duplicate for standby running smoothly on this release:

The steps required to complete the task are:

1) Create a parameter file for the clone or standby

2) Add the following 3 parameters to the standby or clone database parameter file, even if the paths are identical on both the source and the standby or clone servers:

*._compression_compatibility='11.2.0'
*.db_file_name_convert=('trg_path','aux_path'…)
*.log_file_name_convert=('trg_path','aux_path'…)

3) Copy the password file from the source database to the clone or standby $ORACLE_HOME/database folder (on Windows) or $ORACLE_HOME/dbs (on Linux/Unix).
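
For example (host name and paths are placeholders; on Linux the password file is $ORACLE_HOME/dbs/orapw<SID>):

[oracle@prodhost ~]$ scp $ORACLE_HOME/dbs/orapwPROD standbyhost:/u01/app/oracle/product/11.2.0.2/dbhome_1/dbs/orapwPRODCL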

4) Configure the network on the clone or standby server so that you can connect to the database

5) Startup nomount the clone or standby database
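
A minimal sketch of this step, assuming the parameter file from step 1 (SID and file path are placeholders):

[oracle@standbyhost ~]$ export ORACLE_SID=prodcl
[oracle@standbyhost ~]$ sqlplus / as sysdba
SQL> startup nomount pfile='/u01/app/oracle/product/11.2.0.2/dbhome_1/dbs/initprodcl.ora'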

6) Add a tnsnames.ora entry for the clone or standby on your source database server
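
A minimal entry might look like this (host name and port are placeholders; the (UR=A) clause allows connecting through the listener while the instance is still in NOMOUNT):

PRODCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standbyhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = prodcl)
      (UR = A)
    )
  )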

7) Check that you can connect to the clone or standby database as sysdba

8) On the Primary server connect to the source database and clone or standby database (auxiliary) using RMAN

>rman target / auxiliary sys/@prodcl

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Jul 24 15:49:25 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: PROD (DBID=8114723455)
connected to auxiliary database: PRODCL (not mounted)

9) Use the following syntax to duplicate to a clone (allocate as many channels as the number of CPUs you can dedicate):

run {
allocate channel c1 type disk;
allocate auxiliary channel cr1 type disk;
duplicate target database to 'prodcl' from active database nofilenamecheck;
}

10) Use the following syntax to duplicate for standby (allocate as many channels as the number of CPUs you can dedicate):

run {
allocate channel c1 type disk;
allocate auxiliary channel cr1 type disk;
duplicate target database for standby from active database nofilenamecheck;
}

Categories: DBA Blogs

Blogging at Pythian

Jared Still - Fri, 2011-07-15 16:11
Since I started working for Pythian at the beginning of the year, I have started to blog there as well.

First post is today:  Applying External Timing Data to Untimed Events

I may still post here from time to time. Work at Pythian is quite enjoyable, but it is always so busy that there is less time for blogging. At least for me, anyway, as I have non-Oracle interests to attend to as well.
Categories: DBA Blogs

Purple Sweaters and Oracle 11g

alt.oracle - Tue, 2011-05-31 09:15

If you're geeky enough like me, you get a little excited whenever Oracle puts out a new version. Most of us have wondered at one time or another what it would be like to be able to beta-test the new Oracle version before it comes out. You may read the pre-release articles about the new features, if nothing else to keep ahead of the technology curve. Like geeky me, you may download the new version on the same day it comes out – maybe to play with it a little or just to see how it's different. But as intrigued as I get when new versions come out, I'm generally a little scared too. What the hell did they do now, I wonder. Grabbing the newest version of Oracle when it comes out is a little like getting a Christmas present from that family member who always buys you something you didn't ask for. You're glad to get a gift, but you pray to God that it's not another purple sweater. And Oracle's version history is littered with plenty of purple sweaters.

Sometimes I think back longingly to the days of Oracle 8i. Despite being the first version with that silly single-letter suffix thing (Oracle 8i – the "i" is for INTERNET DATABASE!), it was streamlined, compact, and just worked well. Then 9i came out - with its 700+ new features. Instead of fitting on a single CD, the 9i install now needed three CDs, which either meant you were dealing with a lot of CD swapping or pushing three times the amount of data over the network just to do an install. And that would've been fine if the new stuff was good stuff. Unfortunately, all that extra cruft was stuff you almost certainly didn't need. Did you really need to install Oracle's application server with every one of your database installs? Did you really need an HTTP server? Oh, and that wasn't even the best part. With all that extra crap came... wait for it... SECURITY HOLES! Oracle 9i was the version where Oracle started to get creamed in the tech press about its glaring security risks. Which means if you installed Oracle 9i using the click next... click next... click next... method, you might as well leave the doors to your company unlocked.

To Oracle's credit, they listened. I remember going to several pre-release seminars before Oracle 10g came out. Oracle made a big deal about how they put it together. In a revolutionary move, Oracle actually asked DBAs what they liked and didn't like about the Oracle database. DBAs said it was too big, took too long to install and had too much junk. Oracle responded. Version 10g had plenty of new features, but a lot of them were actually useful. And in a move that must be a first in the history of software, 10g was actually smaller than 9i – going back to one CD instead of three. Security was tighter. It installed quickly. All in all, a really forward-thinking move on Oracle's part, if you could ignore that dumb "g is for grid" thing.

Well, like I said, whoever thought up the approach to 10g obviously got fired, because now we have 11g. Before I go too far, yes, I know 11g has some good new features, although a quick list of the useful ones doesn't exactly spring to mind. But, in a total reversal of the slim and trim approach of 10g, version 11g has now become an even bigger, more unwieldy behemoth than 9i. A shining example of software crafted by suits instead of engineers. With 11g, you now get to drag 2GBs worth of crap from server to server in a vain attempt to do a simple database install. In fairness, you can separate out the "database" directory after you download the entire mess, but still... that leaves about 1.5GB of purple sweaters.

Every software company deals with bloat - how do you sell the next version? I get that. And Oracle has bought half the planet and needs to integrate those acquisitions across the board. Yep – I got it. But I also know that the RDBMS is Oracle’s flagship product. The company that produced 11g is the same company that was smart enough to ask DBAs what they should put in 10g. 10g was an incredibly successful version for Oracle – why screw with that?

I mentioned last time that, as great as Automatic Storage Management (ASM) is, Oracle had managed to screw it up in 11g. Here’s why. After telling you last time that ASM was so good that it should be used in single-instance systems as well as RAC, Oracle has gone and screwed me over. In 11gR2, ASM is now bundled with the “grid infrastructure” – the set of components used to run Real Application Clusters. Does that mean that you can’t use ASM with a single-instance database? Nope, but it makes it incredibly inconvenient. If you wanted to standardize on ASM across your database environments, you’d have to install the entire grid infrastructure on every one of your servers. If you manage 5 databases, it’s not too big a deal. If you manage 500, it's a much bigger deal. So c'mon Oracle – when you make good tools, make it easy for us to use them. This is incredibly discouraging.

On an unrelated positive note, I'm pleased to note that alt.oracle has been picked up by the Oracle News Aggregator at http://orana.info, which is just about the biggest Oracle blog aggregator in the universe. So thanks to Eddie Awad and the fine folks at OraNA.
Categories: DBA Blogs

ASM – It's not just for RAC anymore

alt.oracle - Tue, 2011-05-10 21:43

I'm super critical of Oracle when they screw stuff up or try to push technology in a direction that's bad for DBAs. You'll be hearing some rants about it in upcoming posts. But I also think that Oracle is a company that is actually good for the direction that technology is heading, unlike some companies whose names begin with "Micro" and end with "soft". Yes, they're a vast, stone-hearted corporation that would sell their grandmothers to raise their stock price. So is every other technology company – get used to it. But when they do something right, I'll be fair and sing their praises. Once every version or so, Oracle does something that really changes the game for DBAs. In version 8 it was RMAN. In 9i it was locally managed tablespaces. In 10g, it's definitely ASM - Automatic Storage Management. Yeah, I know this is kinda old news - ASM has been out for a good long while. What surprises me, though, is how many DBAs think that ASM is only useful for RAC architectures. "I don't run RAC, why would I need ASM?"

When ASM came out, it both intrigued and terrified me. The claim that it could produce I/O performance almost on par with raw devices without all the grief that comes with using them was exciting. But the idea of putting your production data on a completely new way of structuring files was pretty scary. I trust filesystems like UFS and ext2/3 (maybe even NTFS a little, but don't quote me) because they've stood the test of time. If there's one thing a DBA shouldn't screw around with, it's the way that the bits that represent your company's data are written to disk. I'm skeptical of any new way to store Oracle data on disk, since I'm the loser that has to recover the data if everything goes south. So I entered into my new relationship with ASM the way you should – with a whole lot of testing.

I originally moved to ASM out of sheer necessity. I was running RAC and using a woeful product called OCFS – Oracle Clustered Filesystem – to store the data. Performance was bad, weird crashes happened when there was heavy I/O contention, it wasn't pretty. Nice try, Oracle. It's cool that it was an open source project, but eventually it became clear that Oracle was pushing toward ASM as their clustered filesystem of choice. To make a long story short, we tested the crap out of it and ASM came through with flying colors. Performance was outstanding and the servers used a lot less CPU, since ASM bypasses that pesky little filesystem cache thing. In the end, we moved our single instance databases to ASM as well and saw similar results. It's true that, since you give Oracle control of how reads and writes are done, ASM is a very effective global filesystem for RAC. But the real strength of ASM is in the fact that it's a filesystem built specifically for Oracle databases. You don't use it to store all your stolen mp3 files (unless you're storing them as blobs in the database, wink), you use it for Oracle datafiles. You give Oracle control of some raw partitions and let it go. And it does a good job. Once you go ASM, you never go back.
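
If you have never seen it, handing a set of raw partitions to ASM really is about this simple (a sketch only: the disk group name and device paths are made up, and the command runs in the ASM instance):

SQL> create diskgroup data normal redundancy disk '/dev/raw/raw1', '/dev/raw/raw2';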

I'm not going to do a sell job on the features of ASM, since I don't work for the sales department at Oracle. Really, the positives for ASM boil down to three key features. 1) It bypasses the filesystem cache, thus going through fewer layers in the read/write process. This increases performance in essentially the same way that raw devices do. 2) It works constantly to eliminate hot spots in your Oracle data. This is something that your typical filesystem doesn't do, since it takes an intimate knowledge of how the particular application (in this case Oracle) is going to use the blocks on disk. Typical filesystems are designed to work equally well with all sorts of applications, while ASM is specialized for Oracle. 3) It works (with Oracle) as a global filesystem. In clustered systems, your filesystem is crucial. It has to be "globally aware" that two processes from different machines might try to modify the same block of data at the same time. That means that global filesystems need to have a "traffic cop" layer of abstraction that prevents data integrity violations. Normally this layer would impact performance to a certain degree. But ASM gives control to Oracle, which has a streamlined set of rules about what process can access a certain block and prevents this performance loss.

So consider using ASM. Even if you don't run RAC, benefits #1 and #2 make it worth your while. Our DBA team has been using it religiously on both RAC and non-RAC systems for years without any problems.
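
Day to day it is easy to live with, too. The asmcmd utility (available since 10g) lets you browse ASM storage much like a regular filesystem (the disk group names below are made up):

$ asmcmd
ASMCMD> ls
DATA/
FRA/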

Of course, we're talking about Oracle here, so leave it to them to take the wonderful thing that is ASM and screw it up. Next time I'll tell you how they did just that in version 11g.
Categories: DBA Blogs

RAC, ASM and Linux Forum, May 18, 2011: EXADATA Production Customers updates

Alejandro Vargas - Sun, 2011-05-08 08:11

Exadata is changing the world of Database Performance, on this forum we will have two EXADATA Production Customers updates.

Turkcell, a telecom with 75 million customers, will be represented by Ferhat Sengonul, Senior OS Administrator, DBA and Data Warehouse Project Leader, who led their Exadata implementation and its scale-out to the current 3 full Exadata racks with 24 database nodes.

Ferhat will present his experience with a very large data warehouse on Exadata, including online high-performance reporting, VLDB backup and recovery best practices, and the upgrade from a traditional 11-rack data warehouse (1 Sun M9000 Sparc 7; 10 storage racks, 250 TB uncompressed) to a full Exadata rack and then to multiple racks. We will also hear about his project to consolidate all data warehouse databases on Exadata.

Golden Pages, the first consolidation-on-Exadata project implemented in Israel, will be presented by Shimi Nahum, Senior DBA and Exadata Project Leader. Shimi will tell us about the challenges the Exadata environment presented to him as a DBA and how he faced them, and about the impact of using Oracle Exadata to consolidate multiple customer databases, including Siebel and ERP databases.

A practical dive into the technology will be presented by Oracle's Ophir Manor, who is responsible for the several POCs being run by different Israeli customers.

And finally, I will talk about experiences from the field, installing and implementing Exadata at different customers around the world.

Exadata is radically changing the rules and expectations a DBA can have of an Oracle database; these first-hand experiences promise to make this one of the most interesting conferences in Israel this year.

The conference will be held on May 18 at the Sharon Conference Center 09, starting at 14:00.

REGISTRATION: ILOUG RAC, ASM and Linux Forum Registration

SCHEDULE:

14:00 – 14:30 Registration

14:30 – 14:40 Welcome

14:40 – 15:25 Shimi Nahum, Dapei Zahab, Senior Oracle DBA, responsible for the Exadata project.

The first production Oracle Exadata in Israel, challenges for the DBA, speedup impact of Exadata on the end Customer

15:25 – 16:15 Ferhat Sengönül, Senior OS Administrator and DBA at Turkcell, responsible for the DW project

A very large Data-Warehouse in Exadata, the migration process, backup and recovery strategies, scaling up from 1 Exadata rack to 3

16:15 – 16:30 Refreshments Break

16:30 – 17:15 Ofir Manor, Oracle Senior Sales Consultant and Exadata Expert.

Preparing the IT infrastructure for Exadata. Lifetime maintenance procedures.

17:15 – 17:45 Alejandro Vargas, Oracle Principal Support Consultant and EMEA Exadata Core Team Member.

Inside the Oracle Database Machine, secrets about the configuration, install and support procedures

17:45 – 18:15 Questions and Answers
Categories: DBA Blogs

Surrender (a little) to the Dark Side

alt.oracle - Wed, 2011-04-06 22:05

When I was a freshman in college, I, like many, was bouncing back and forth on what my major should be. I was leaning heavily toward electrical engineering, but my long standing love of computers had me seriously considering Comp Sci as well. I decided to take a couple of introductory Comp Sci classes to see if I liked them. So I tried taking the Introduction to Programming course and lab during my first semester. While I imagine that today they use some cool and zippy language, back then they used Fortran, a programming language that only a mother language could love. The class was fine, but throughout the course, I began to have visions of myself growing old sitting in front of a room-sized mainframe typing in endless subroutines using indecipherable languages. As the old joke goes, "a computer without COBOL and Fortran is like a piece of chocolate cake without ketchup and mustard." That's a bit of an exaggeration, but let's face it, back then, being a computer professional was a lot different than it is today. So I chose a different path, but wound up in computers anyway. Fancy that.

Even though I chose to turn away from the "Dark Side" of development and became a DBA (i.e. Jedi Master), I've always regretted it a little bit. Why? Because programming is fun. But let's make a distinction here between programming and software development. Programming is cool, creative and useful. Software development is an everlasting grind of hateful user requirements, rigid coding standards and endless revisions because your functional analyst wants the company logo moved three pixels to the left of where it is on the company website.

True story here. During my incredibly short stint as a sort-of web developer, I was assigned to revise the page on a company's website that had the pictures and biographies of the CEO and all his lackeys. The page was fine, but then I got a request that came from the CEO - his picture on the page needed to be bigger than everyone else's. Why? Well, he's the CEO, that's why – he's better than everyone else. So I did it and moved the page elements around to allow for the bigger picture. Soon after, I started getting requests to put in bigger pictures of the lackeys, as well. Why? Well, they're important too! So I did that. Then the CEO was pissed so he ordered an even BIGGER picture and a longer, more flowery bio. Then the lackeys... well you get the idea. It was the Cold War all over again. So I'm making a distinction here between writing actual programs that do something as opposed to a dog and pony show for a bunch of suits.

The IT world is so specialized anymore that we DBAs don't get to sling code on a regular basis, unless it's maybe PL/SQL or some shell scripts. A lot of DBAs are missing out on the fun. Maybe you've gotten the chance to debug some Perl or Python. That stuff is good too, but there's a whole world of cool, useful tools that have yet to be coded, because YOU haven't coded them.

We talked last time about GUIs and the bias against them. My main problem with GUIs is that they can only do what they're programmed to do. But what if you could make your own GUI that would do whatever you wanted? Well, "I could never do a GUI" you say. "There's all the drawing objects at the right pixel coordinates," etc, etc. Nope. I haven't had to do stuff like that since the days of my Commodore 64. Modern software is mostly based on libraries of code that some other poor shmuck has already done. You don't really need to "draw" a window – you just find out what the command is to say, "Hey – put a window on the screen." The libraries for windows, dialog boxes, dropdowns, etc, have probably already been written for your language of choice. If they haven't, well, you're probably writing in Fortran. Shame on you.

I'm not saying it's easy, but it's also not as hard as you think. A few years back, I stumbled on some example code on a website that let you make simple GUIs in Tcl/Tk. Tcl is a language, by the way. Tk is a set of extensions that lets you make pretty GUI-type stuff. I typed the commands into my Linux console and, voila – pretty windows and clicky boxes. A light clicked on somewhere in a my head and I figured out the general idea of how this worked. All you're really doing is making function calls. We DBAs know how to do this. If you do a SELECT AVG(SALARY) FROM EMP, you're just passing in the values from the SALARY column of the EMP table and the AVG function spits out the results. Using GUI libraries in some languages isn't that much more complicated. It's all pushing and pulling the data you want in and out of these functions/subroutines.

A while back I wrote a program in Perl that works as a GUI interface to Data Pump. Not a CGI that runs from a webpage (although that's cool too), but a real, bona fide, run-on-your-desktop GUI. Yes, it took a while – I don't have a degree in Comp Sci and all my experience in coding is self-taught. It's probably moderately useful, but more than anything else it was COOL. It's hard to match the satisfaction of creating your own useful tool, whether it's a script or a GUI, that solves a problem. You're not gonna program the next sequel to Doom (that's a video game), but you can still do cool stuff. So don't sell yourself short – dive in and learn something new. Give in to the Dark Side a little. Yoda won't mind.
Categories: DBA Blogs

GUI or not GUI

alt.oracle - Thu, 2011-03-10 20:01
One of the longest and loudest controversies in the DBA world is that of the graphical user interface vs command line.  Some of the opinions sound like this…

“GUIs are for newbies who don’t know what they’re doing.”
“Why should I learn all the commands – there’s already a tool to do that.”
“GUIs are too slow.”
“Learning the command line takes too long.”
“I don’t need to learn a bunch of commands that I’ll never use – I just want to get my job done.”

My own feelings about this go back to my early days as a DBA.  I had this supervisor who was an absolute wizard when it came to Enterprise Manager.  Now, we’re talking the early OEM that came with Oracle version 8.0, here.  Ancient stuff.  If it could be done with OEM, this guy could “git ‘er done”.  One day tho, some kind of devastating emergency happened.  As a newbie, I wasn’t always trusted to handle the big issues, so I went to the supervisor and told him the situation. 

“Hey boss, we need to do so-and-so.” 
“Oh,” says Boss, “I don’t know how to do that with Enterprise Manager.” 
“Um,” I says, “I don’t think you *can* do that with Enterprise Manager.” 
“Oh,” says Boss, “Then what do we do?”

I remember the look of defeat on his face.  He was a nice guy, he wanted to help, he was responsible to help, but since Oracle hadn’t written that particular ability into his GUI tool, he had no idea as to how to do it.  It made an impression on me.  I decided then and there - that wasn’t going to be me.  I made a commitment that lasted for years – I will not use GUI tools.  No matter how much longer it takes me to do the job, with looking up commands and all, I will abstain from the evil of the GUI.  And so I did.

As a result, I learned the command line.  I REALLY learned the command line.  SQL*Plus was my home.  Not only did I learn a ton of data dictionary views by heart, over time, I sort of developed a “feel” for syntax even if I didn’t know it.  I could kinda intuit what might be in a certain v$ view or I could guess what the columns of a particular dba_* view should be.  It was and is incredibly useful and I don’t regret it.  I wrote and saved my little scripts to do things.  But, over time, I started to look down on my peers who used GUI tools, inwardly thinking they really couldn’t hack it from the command line.  You obviously don’t say something like that, but you joke about it, etc, just to let them know.  It probably didn’t help matters that in the ultimate GUI vs command line deathmatch, Windows vs Linux, I was (and am) squarely on the Linux side.

What started to change me was, ironically, Enterprise Manager.  Although I didn’t use it, I’d kept up with OEM, watching it get, for the most part, better and better.  But when 10g was released, it was like OEM had a bar mitzvah, sweet sixteen and a coming-out party all in one.  Re-christened as Grid/Database Control, you could do dang near EVERYTHING with OEM now.  OEM was finally a comprehensive tool.  It was so comprehensive, that it started to shake my “GUIs are for losers” mentality.  I thought, I could really do some damage with this OEM thing (in a good way).  I started to think in terms of what would be more efficient, OEM or command line, for different situations.  Command line was still winning in my mind, but not by as much as before.

The thing that finally “brought balance to the force” for me was a quote I read by a well-known Oracle consultant/author/blogger guy.  If I said his name, you’d probably recognize it.  I read something of his where he was consulting for a client and said this, almost verbatim, “I knew their DBAs were incompetent because they were using Enterprise Manager.”  Whoa.  Now it’s true that I didn’t want to be like my old boss, unable to do anything without a GUI, but I sure didn’t want to be like this arrogant bastard either.  Besides that, I had seen enough of Grid/Database Control to know that his reasoning was crap.

In the end, the command line versus GUI war boils down to a few principles for me.  A good DBA needs to be efficient.  If you’re more efficient using a GUI than command line, then go for it.  If, on the other hand, the only reason you use a GUI is that you’re just too lazy to learn the commands, then you get what you deserve.    I’m still heavily command line oriented, but, in truth, I know there are instances where it would just be faster to use a GUI tool.  Take, for instance, performance tuning.  Everybody has their own way of doing it, but Grid/Database Control really does a good job of pulling a lot of different metrics together.  It would take a lot of scripts to pull that much information into one place.  It’s not for everyone, but it shouldn’t just be written off without a second thought.  And when you decide which one's "faster", you have to take into consideration the amount of time it took for you to come up with that whiz-bang script of yours.

In the end, I think everyone should aspire to learn how to leverage the command line.  It’s powerful, open ended, versatile and doesn’t tie you down to any particular toolset.  A GUI will always be limited by its programming.  If the programmer didn't dream it, you probably can't do it.  But the point is to get the job done.  If Enterprise Manager helps you bust out your super ninja DBA skillz, I won’t stop you.

And if you're still a hardcore command liner, I'll try to change your mind next time.  What if you could make your own GUI?  Hmm?
Categories: DBA Blogs

Oracle Linux 5.6 is out

Sergio's Blog - Fri, 2011-01-28 06:14

I wrote a quick summary of the Oracle Linux 5.6 release over on our Linux blog. For what it's worth, 5.6 is on public-yum.oracle.com as well.

Categories: DBA Blogs

Everybody needs a spare database

alt.oracle - Sun, 2011-01-23 17:29

I've gotten a little preachy in this blog lately, so I thought this time I'd give you something useful. Have you ever wished you had a quicky little set of database tables so you could do some generally wacky stuff that would likely get you fired if you did it on your production database? I thought so. In the past, the only way to do something like this was to build another database somewhere. Of course, where to put it? Some of us weirdos have machines at home where we build databases, do virtual machines or stuff like that. Guilty. But not everyone wants to tie up their home machine with the multi-gigabyte behemoth that Oracle 11g has become. Well, have I got a deal for you.

Oracle provides a nifty little free service to show off their Oracle Application Express product (APEX), which I'm not sure has been as popular as they'd like it to be. You can register at their site and get your own little workspace that will allow you to play around with Oracle a little.

Here's how it works.

  • Go to http://apex.oracle.com and click the link to "Sign Up"
  • Click through the "next" buttons, giving Oracle your name and email address. Give them a real one since they'll send the verification link to it.
  • Provide a name for your workspace and a schema name for your database objects
  • Next you have to give a reason for requesting an account. Now, I don't know if anyone actually reads these or not, but you'd probably be better off if you didn't put something like "That dork from alt.oracle said it would be cool." Try "Evaluation purposes" instead.
  • Next, you type in your little verification thing with the goofy letters and click "Submit Request"
  • After a bit, you'll hopefully get an email back saying "click this link to verify, etc".
  • Lastly, you'll get another email with your login.

Then you can login and poke around. Truthfully, you can do a lot of stuff on your new personal Apex. I'm not super familiar with it yet, but it looks like you can...

  • Create your own tables, indexes, constraints, sequences, etc
  • Run SQL statements and scripts
  • Build PL/SQL objects
  • Build your own webby-type applications with the GUI "Application Builder"
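
For instance, the SQL Workshop will happily run something like this (a throwaway example; table and column names are made up):

create table test_stuff (id number primary key, note varchar2(100));
insert into test_stuff values (1, 'hello from apex');
select * from test_stuff;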

I'm not sure yet if you can build web apps that you and others could access from a browser without going through the whole Apex frontend, but if so, that would be uber-cool. One word of warning however. FOR THE LOVE OF ALL THAT IS HOLY, DON'T PUT ANY REAL DATA IN THIS CRAZY THING! I have no idea as to how secure it is – it's only for evaluation purposes, so DON'T DO IT.

You can't do a lot of administration-type stuff with your own personal Apex. If you're looking to mess with parameter files and flash recovery areas, it's time to bust out a virtual machine. But it is nice to have a place where you could try some SQL stuff without fear of a pink-slip visit from HR. So go get your account and do some crazy, webby SQL stuff. And, finally, FOR THE LOVE OF ALL THAT IS HOLY, DON'T PUT ANY REAL DATA IN THIS CRAZY THING!
Categories: DBA Blogs

Oooohhh... shiny!

alt.oracle - Tue, 2011-01-18 22:52
I went to last year's Oracle Open World. I'd always wanted to go, but having been a consultant for so many years, those opportunities don't always come your way. In my experience, companies will spring for their own employees to go to Open World, but "no way" to that lousy, overpaid consultant who probably won't even be here next week. That leaves it up to the consulting company, whose take on things is usually, "If you don't already know everything they're talking about at Open World, then why did we hire you? Get back to work!" But since I work for a good consulting company, they offered me the chance to go.

Open World is a blast. If you're a geeky Oracle person like me, it's a complete nerd-o-gasm. First of all, Oracle's always announcing the "next big thing" – this year, it was the Oracle Linux kernel (perhaps the subject of a future post) and the latest in Exadata. Then you have your session speakers, most of which are pretty good. The technology booths full of people trying to sell you stuff are always cool. Of course, best of all is all the free swag you get. I came home with more techie junk than you can shake a datafile at. Let me tell you, it takes some mad ninja skilz to nab 11 t-shirts from Open World and get them home. I had to throw away all my underwear just to get them to fit in my luggage (don't ask me how the flight home was...).

Of course, the real focus of any Open World is same as that of a lot of the software industry – better, faster, stronger, more. Newer and shinier. What you have isn't what you need. I can't fault them for this – they need to keep selling stuff to compete and to stay in business, and innovation is a huge part of what we do. Progress is good. But sometimes a DBA needs to distinguish between something that represents progress and something that represents a big ol' pile of shiny.

I talked last time about how being a good DBA means having a healthy dose of skepticism. That has to apply to "new feature-itis" too. Part of what I do involves evaluating new technologies. Not only do I have to evaluate the tech to verify that it does what it says it does, I need to assess that its benefits are worth the time, risks and cost of adopting it. As an evaluator, there's an implied trust with my employers that if I recommend a shiny, new feature, it's because it will benefit their interests – not necessarily mine. I haven't seen it often, but I can remember working with more than one DBA who didn't share my take on this. I've come to expect non-technical people to fall into the whole "Look! Shiny!" thing when it comes to new tech. But some technical folks in positions of authority see new tech as way to 1) pad their resume ("why yes I've worked with feature X, I helped bring it into my last company"), or 2) make them indispensable, since they adopted it and are the only ones who understand it. When I was a newbie DBA, I knew a senior DBA who did just that - repeatedly. Everybody could see it, but nobody was in a position to do anything about it. Then, he left and the rest of us were put in the position of having to support this big, expensive, shiny nightmare.

Flying along the bleeding edge can be a bumpy ride. Resist the urge to pad your resume at the expense of your employer. Otherwise, your big ol' pile of shiny might become a big ol' pile of something else.
Categories: DBA Blogs
