DBA Blogs

How to Calculate TCP Socket Buffer Sizes for Data Guard Environments

Alejandro Vargas - Wed, 2010-05-19 05:31

The MAA best practices contain an example of how to calculate the optimal TCP socket buffer sizes, which is quite important for very busy Data Guard environments. The document Formula to Calculate TCP Socket Buffer Sizes.pdf contains an example of using the instructions provided in the best practices document.

In order to execute the calculation you need to know the bandwidth of your network interface, which will usually be 1 Gb; in my example it is a 10 Gb network. You also need the round trip time (RTT), the time it takes a packet to travel to the other end of the network and come back; in my example it was provided by the network administrator and was 3 ms (3/1000 of a second).
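As a rough sketch of the arithmetic, the buffer size is derived from the bandwidth-delay product (BDP). The factor of 3 below is the commonly cited MAA recommendation; treat the exact multiplier as an assumption and confirm it against the referenced document:

```python
# Socket buffer sizing sketch: buffer = 3 x bandwidth-delay product (BDP).
# The factor of 3 is the commonly cited MAA recommendation; verify it
# against the best practices document before applying it.
bandwidth_bps = 10 * 10**9   # 10 Gb/s network, in bits per second
rtt_sec = 0.003              # 3 ms round trip time

bdp_bytes = bandwidth_bps / 8 * rtt_sec   # bandwidth-delay product, in bytes
buffer_bytes = 3 * bdp_bytes              # suggested TCP socket buffer size
print(f"BDP = {bdp_bytes:,.0f} bytes, buffer = {buffer_bytes:,.0f} bytes")
```

For the 10 Gb / 3 ms example this yields a BDP of 3,750,000 bytes and a suggested buffer of 11,250,000 bytes, roughly 11 MB.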

Categories: DBA Blogs

Impact of Truncate or Drop Table When Flashback Database is Enabled

Alejandro Vargas - Wed, 2010-05-19 04:51

Recently I was working on a VLDB, implementing a disaster recovery environment configured with Data Guard physical standby and fast-start failover.

One of the questions that came up was about the overhead of truncating and dropping tables. There are daily jobs on the database that truncate extremely large partitions, and as note 565535.1 explains, we knew there is an overhead for these operations.

But the information in the note was not clear enough. With the additional information I got from senior Oracle colleagues I compiled the document "Impact of Truncate or Drop Table When Flashback Database is Enabled", which further explains the case.

Categories: DBA Blogs

Oracle Database 11g Underground Advice for Database Administrators, by April C. Sims

Alejandro Vargas - Wed, 2010-05-19 04:34

Recently I received a request to review the book "Oracle Database 11g Underground Advice for Database Administrators" by April C. Sims.

I was happy to have the opportunity to learn some details about the author; she is an active contributor to the Oracle DBA community through her blog "Oracle High Availability".

The book is a serious and interesting work; I think it provides a good study and reference guide for DBAs who want to understand and implement highly available environments.

She starts by walking through the more general aspects and skills required of a DBA, and then goes on to explain the steps required to implement Data Guard, use RMAN, upgrade to 11g, etc.

Categories: DBA Blogs

The Oracle Enterprise Linux Software and Hardware Ecosystem

Sergio's Blog - Wed, 2010-05-19 00:43

It's been nearly four years since we launched the Unbreakable Linux support program and with it the free Oracle Enterprise Linux software. Since then, we've built up an extensive ecosystem of hardware and software partners. Oracle works directly with these vendors to ensure joint customers can run Oracle Enterprise Linux.

As Oracle Enterprise Linux is fully--both source and binary--compatible with Red Hat Enterprise Linux (RHEL), there is minimal work involved for software and hardware vendors to test their products with it. We develop our software on Oracle Enterprise Linux and perform full certification testing on Oracle Enterprise Linux as well. Due to the compatibility between Oracle Enterprise Linux and RHEL, Oracle also certifies its software for use on RHEL, without any additional testing.

Oracle Enterprise Linux tracks RHEL by publishing freely downloadable installation media on edelivery.oracle.com/linux and updates, bug fixes and security errata on Unbreakable Linux Network (ULN). At the same time, Oracle's Linux kernel team is shaping the future of enterprise Linux distributions by developing technologies and features that matter to customers who deploy Linux in the data center, including file systems, memory management, high performance computing, data integrity and virtualization. All this work is contributed to the Linux and Xen communities.

The list below is a sample of the partners who have certified their products with Oracle Enterprise Linux. If you're interested in certifying your software or hardware with Oracle Enterprise Linux, please contact us via oelhelp_ww@oracle.com.

  • Chip Manufacturers

  • Server vendors

  • Storage Systems, Volume Management and File Systems

  • Networking: Switches, Host Bus Adapters (HBAs), Converged Network Adapters (CNAs), InfiniBand

  • SOA and Middleware

  • Backup, Recovery & Replication

  • Data Center Automation

  • Clustering & High Availability

  • Virtualization Platforms and Cloud Providers

  • Security Management
Categories: DBA Blogs

The Next RAC, ASM and Linux Forum. May 4, 2010 Beit HP Raanana

Alejandro Vargas - Tue, 2010-04-27 18:39

The next RAC, ASM and Linux forum will take place next week; there is still time to register: Israel Oracle Users Group RAC, ASM and Linux Forum

This time we will have a panel formed by Principal Oracle Advanced Customer Services Engineers and RAC experts Galit Elad and Nickita Chernovski, and Senior Oracle Advanced Customer Services Engineers and RAC experts Roy Burstein and Dorit Noga.

They will address the subject: five years of experience with RAC at Israeli customers, and the lessons learned. It is a wonderful opportunity to meet the people who are present at most major implementations and who have helped solve all the major issues of recent years.

In addition we will have two very interesting Customer Presentations:

Visa Cal DBA Team Leader Harel Safra will talk about their experience with scalability using standard Linux servers for their mission-critical data warehouse.

Bank Discount Infrastructure DBA Uril Levin, who is in charge of the bank's backup and recovery project, will speak about their corporate backup solution using RMAN; it includes an end-to-end solution for VLDBs and mission-critical databases, and is one of the most interesting RMAN implementations in Israel.

This time I will not be able to attend myself as I am abroad on business; Galit Elad will greet you and lead the meeting.

I'm sure you will enjoy a very, very interesting meeting.

Beit HP is located at 9 Dafna Street, Raanana

Best Regards

Alejandro


Categories: DBA Blogs

Certain things in this world

Virag Sharma - Tue, 2010-04-13 09:03

"In this world nothing can be said to be certain but death and taxes."
--Benjamin Franklin

Categories: DBA Blogs

Oracle Enterprise Linux 5.5 Installation Media Now Available

Sergio's Blog - Tue, 2010-04-13 00:10
OEL 5.5 was released on ULN on April 5, 2010 and on public-yum.oracle.com shortly after. Now you can also download the installation DVD or CDs from edelivery.oracle.com/linux.
Categories: DBA Blogs

OCFS2 Now Certified for E-Business Suite Release 12 Application Tiers

Sergio's Blog - Wed, 2010-03-24 09:30

Steven Chan writes that OCFS2 is now certified for use as a clustered filesystem for sharing files between all of your E-Business Suite application tier servers.  OCFS2 (Oracle Cluster File System 2) is a free, open source, general-purpose, extent-based clustered file system which Oracle developed and contributed to the Linux community.  It was accepted into Linux kernel 2.6.16.

OCFS2 is included in Oracle Enterprise Linux (OEL) and supported under Unbreakable Linux support.

Categories: DBA Blogs

Cloning A Database On The Same Server Using Rman Duplicate From Active Database

Alejandro Vargas - Sat, 2010-03-20 01:16

To clone a database using RMAN we used to require an existing RMAN backup; on 11g we can clone databases using the "from active database" option.

In this case we do not require an existing backup; the active datafiles are used as the source for the clone.

In order to clone with the source database open, it must be in archivelog mode. Otherwise we can make the clone with the source database mounted, as shown in this example.


These are the steps required to complete the clone:


  1. Configure The Network

  2. Create A Password File For The New Database

  3. Create An Init.Ora For The New Database

  4. Create The Admin Directory For The New Database

  5. Shutdown And Startup Mount The Source Database

  6. Startup Nomount The New Database

  7. Connect To The Target (Source) And Auxiliary (New Clone) Databases Using RMAN

  8. Execute The Duplicate Command

  9. Remove The Old Pfile

  10. Check The New Database

A step-by-step example is provided in this file: rman-duplicate-from-active-database.pdf
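For reference, the duplicate command at the heart of step 8 looks like this (a minimal sketch only: the passwords, connect strings and database name are placeholders, and the file name conversion parameters are assumed to have been set in the new init.ora created in step 3):

```text
$ORACLE_HOME/bin/rman target sys/password@sourcedb auxiliary sys/password@clonedb

RMAN> duplicate target database to clonedb from active database;
```

The full set of commands, including the network and init.ora setup, is in the step-by-step example file.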

Categories: DBA Blogs

Who's using a database link?

Jared Still - Fri, 2010-03-05 12:41
Every once in a while it is useful to find out which sessions are using a database link in an Oracle database. It's one of those things that you may not need very often, but when you do need it, it is usually rather important.

Unfortunately for those of us charged with the care and feeding of the Oracle RDBMS, this information is not terribly easy to track down.

Some years ago, when I first had need to determine which sessions were at each end of a database link, I found a script that was supplied courtesy of Mark Bobak. When asked, Mark said he got the script from Tom Kyte. Yong Huang includes this script on his website, and notes that Mark further attributed authorship in Metalink forum thread 524821.994. Yong has informed me that this note is no longer available. I have found the script in other locations as well, such as Dan Morgan's website. So now you know the script's provenance.

Here's the script, complete with comments.  Following the script is an example of usage.


-- who is querying via dblink?
-- Courtesy of Tom Kyte, via Mark Bobak
-- this script can be used at both ends of the database link
-- to match up which session on the remote database started
-- the local transaction
-- the GTXID will match for those sessions
-- just run the script on both databases

Select /*+ ORDERED */
substr(s.ksusemnm,1,10)||'-'|| substr(s.ksusepid,1,10) "ORIGIN",
substr(g.K2GTITID_ORA,1,35) "GTXID",
substr(s.indx,1,4)||'.'|| substr(s.ksuseser,1,5) "LSESSION" ,
s2.username,
substr(
decode(bitand(ksuseidl,11),
1,'ACTIVE',
0, decode( bitand(ksuseflg,4096) , 0,'INACTIVE','CACHED'),
2,'SNIPED',
3,'SNIPED',
'KILLED'
),1,1
) "S",
substr(w.event,1,10) "WAITING"
from x$k2gte g, x$ktcxb t, x$ksuse s, v$session_wait w, v$session s2
where g.K2GTDXCB =t.ktcxbxba
and g.K2GTDSES=t.ktcxbses
and s.addr=g.K2GTDSES
and w.sid=s.indx
and s2.sid = w.sid


Now let's take a look at the results of the script.

Logging on to DB1 as system, create a database link to db2 using the SCOTT account:

create database link scott_link connect to scott identified by "tiger" using 'db2';

Make sure it works:

system@db1 SQL> select sysdate from dual@scott_link;

SYSDATE
-------------------
03/05/2010 10:13:00

1 row selected.


Now logon to DB1 as sysdba and run who_dblink.sql:

sys@db1 SQL> @who_dblink

ORIGIN GTXID LSESSION USERNAME S WAITING
--------------------- ----------------------------------- ---------- ---------- - ----------
oraserver.-21901 DB1.d6d6d69e.3.16.7190 500.15059 SYSTEM I SQL*Net me

1 row selected.

Now do the same on DB2:

sys@db2 SQL> @who_dblink

ORIGIN GTXID LSESSION USERNAME S WAITING
--------------------- ----------------------------------- ---------- ---------- - ----------
ordevdb01.-21903 DB1.d6d6d69e.3.16.7190 138.28152 SCOTT I SQL*Net me

1 row selected.

How do you identify the session on the database where the database link connection was initiated?

Notice that the output from DB1 shows the PID in the ORIGIN column.  In this case it is 21901.

Running the following SQL on DB1 we can identify the session from which the SYSTEM user initiated the database link connection:

select
b.username,
b.sid SID,
b.serial# SERIAL#,
b.osuser,
b.process
from v$session b,
v$process d,
v$sess_io e,
v$timer
where
b.sid = e.sid
and b.paddr = d.addr
and b.username is not null
-- the commented-out filter below shows all sessions,
-- whether they have done IO or not; uncomment it to
-- restrict the output to sessions that have done IO
--and (e.consistent_Gets + e.block_Gets) > 0
-- uncomment to see only your own session
--and userenv('SESSIONID') = b.audsid
order by
b.username,
b.osuser,
b.serial#,
d.spid,
b.process,
b.status,
e.consistent_Gets,
e.block_Gets,
e.Physical_reads;


USERNAME      SID  SERIAL# OSUSER               PROCESS
---------- ------ -------- -------------------- ------------------------
SYS           507    12708 oracle               22917
SYSTEM        500    15059 oracle               21901

2 rows selected.

The session created using the SCOTT account via the database link can also be seen using the same query on DB2 and looking for the session with a PID of 21903:

USERNAME    SID SERIAL# OSUSER  PROCESS
---------- ---- ------- ------- ------------
SCOTT       138   28152 oracle  21903
SYS         147   33860 oracle  22991
            146   40204 oracle  24096

3 rows selected.
Categories: DBA Blogs

Treasure Trove of Oracle Security Documents

Jared Still - Tue, 2010-03-02 11:32
This morning I stumbled across a veritable treasure trove of Oracle Security documents at the web site of the Defense Information Systems Agency.

I've only spent a few minutes glancing through a small sampling of them, and they appear to be fairly comprehensive. When you consider that these are documents used to audit sensitive database installations, it makes a lot of sense that they would cross all the t's and dot all the i's.

The index of documents can be found here: DISA IT Security Documents

As you can see for yourself, there are documents covering security concerns for a number of other systems.
Categories: DBA Blogs

Compression for tables with more than 250 columns

Alejandro Vargas - Sun, 2010-02-28 19:15

Tables with more than 255 columns cannot be compressed; this restriction remains in place even in 11g R2.

In the 11g R2 SQL Language Reference manual, page 16-36, we can read:

Restrictions on Table Compression

* COMPRESS FOR OLTP and COMPRESS BASIC are not supported for tables with more than 255 columns.

This is a serious limitation, especially for telecoms, where CDR tables can have a number of columns way over 255.

The available workaround:


  • Split the table into two sub-tables:

  • create table A as select pk, field 1 to 150 from origtable

  • create table B as select pk, field 151 to 300 from origtable

  • Each one will have fewer than 255 columns.

  • They will be joined by the primary key.

  • The table will be accessed using a view that has all the columns of the original table:

  • create view origtable as select a.pk, a.field 1 to 150, b.field 151 to 300 from a, b where a.pk=b.pk
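The workaround can be illustrated end to end with a toy example (sketched here in Python with SQLite purely for illustration; the table and column names are made up, and the real objects would of course be created in Oracle):

```python
import sqlite3

# Two narrow tables share a primary key; a view joins them back into the
# original wide row layout, so applications can keep querying one name.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE cdr_a (pk INTEGER PRIMARY KEY, col_1 TEXT, col_2 TEXT);
    CREATE TABLE cdr_b (pk INTEGER PRIMARY KEY, col_151 TEXT, col_152 TEXT);
    CREATE VIEW cdr AS
        SELECT a.pk, a.col_1, a.col_2, b.col_151, b.col_152
        FROM cdr_a a JOIN cdr_b b ON a.pk = b.pk;
    INSERT INTO cdr_a VALUES (1, 'caller', 'callee');
    INSERT INTO cdr_b VALUES (1, 'cell', 'duration');
""")
row = con.execute("SELECT * FROM cdr WHERE pk = 1").fetchone()
print(row)  # -> (1, 'caller', 'callee', 'cell', 'duration')
```

Queries against the view see the full row, while each underlying table stays under the column limit and can be compressed.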

Categories: DBA Blogs

Cool but unknown RMAN feature

Jared Still - Fri, 2010-02-19 11:53
Unknown to me anyway until just this week.

Some time ago I read a post about RMAN on Oracle-L that detailed what seemed like a very good idea.

The poster's RMAN scripts were written so that the only connection while making backups was a local one using the control file only for the RMAN repository.
rman target sys/manager nocatalog

After the backups were made, a connection was made to the RMAN catalog and a SYNC command was issued.

The reason for this was that if the catalog was unavailable for some reason, the backups would still succeed, which would not be the case with this command:


rman target sys/manager catalog rman/password@rcat

This week I found out this is not true.

Possibly this is news to no one but me, but I'm sharing anyway. :)

Last week I cloned an apps system and created a new OID database on a server. I remembered to do nearly everything, but I did forget to set up TNS so that the catalog database could be found.

After setting up the backups via NetBackup, the logs showed that there was an error condition, but the backup obviously succeeded:

archive log filename=/u01/oracle/oradata/oiddev/archive/oiddev_arch_1_294_709899427.dbf recid=232 stamp=710999909
deleted archive log
archive log filename=/u01/oracle/oradata/oiddev/archive/oiddev_arch_1_295_709899427.dbf recid=233 stamp=710999910
Deleted 11 objects


Starting backup at 16-FEB-10
released channel: ORA_DISK_1
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=369 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: VERITAS NetBackup for Oracle - Release 6.0 (2008081305)
channel ORA_SBT_TAPE_1: starting full datafile backupset
channel ORA_SBT_TAPE_1: specifying datafile(s) in backupset
including current controlfile in backupset
channel ORA_SBT_TAPE_1: starting piece 1 at 16-FEB-10
channel ORA_SBT_TAPE_1: finished piece 1 at 16-FEB-10
piece handle=OIDDEV_T20100216_ctl_s73_p1_t711086776 comment=API Version 2.0,MMS Version 5.0.0.0
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:45
Finished backup at 16-FEB-10

Starting Control File and SPFILE Autobackup at 16-FEB-10
piece handle=c-3982952863-20100216-02 comment=API Version 2.0,MMS Version 5.0.0.0
Finished Control File and SPFILE Autobackup at 16-FEB-10

RMAN> RMAN>

Recovery Manager complete.

Script /usr/openv/netbackup/scripts/oiddev/oracle_db_rman.sh
==== ended in error on Tue Feb 16 04:07:59 PST 2010 ====


That seemed rather strange, and it was happening in both of the new databases.
The key to this was to look at the top of the log file, where I found the following:

BACKUP_MODE: lvl_0
BACKUP_TYPE: INCREMENTAL LEVEL=0
ORACLE_SID : oiddev
PWD_SID : oiddev
ORACLE_HOME: /u01/oracle/oas
PATH: /sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin

Recovery Manager: Release 10.1.0.5.0 - Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.


RMAN;
connected to target database: OIDDEV (DBID=3982952863)

RMAN;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-04004: error from recovery catalog database: ORA-12154: TNS:could not resolve the connect identifier specified

RMAN;
Starting backup at 16-FEB-10
using target database controlfile instead of recovery catalogallocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=369 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: VERITAS NetBackup for Oracle - Release 6.0 (2008081305)
channel ORA_SBT_TAPE_1: starting incremental level 0 datafile backupset

Notice the line near the bottom of the displayed output?

The one that says "using target database controlfile instead of recovery catalog" ?

RMAN will go ahead with the backup of the database even though the connection to the catalog database failed.  This apparently only works when running in a scripted environment, as when I tried connecting on the command line RMAN would simply exit when the connection to the catalog could not be made.

The RMAN scripts are being run on a linux server in the following format:

$OH/bin/rman target sys/manager catalog rman/password@rcat <<-EOF >> $LOGFILE

rman commands go here

EOF


This was quite interesting to discover, and may be old news to many of you, but it was new to me.

This is not exactly a new feature either; one of the databases being backed up is 9.2.0.6. And of course there is now no need to update the backup scripts.
Categories: DBA Blogs

RAC, ASM and Linux Forum, December 15, 13:30 - 18:00 Beit HP Raanana

Alejandro Vargas - Sun, 2009-12-13 03:04

It's time for our second RAC, ASM and Linux forum of 2009, in Tel Aviv!

I would like to warmly invite you to our next RAC, ASM and Linux forum to be held at Beit HP in Raanana, on December 15.

You can register on the Israel Oracle User Group site.

At this forum we will have an 11g R2 Technology Update and two very interesting Customer Presentations, about minimum downtime VLDB Migration to RAC on Linux and about Creating and Managing RPMs for Oracle Silent Install on Linux.

Panel on Grid Infrastructure, ASM, Clusterware and RAC 11g R2, Technology Update

Annie Flint, Principal Member of Technical Staff, Oracle Corporation
Ofir Manor, Senior Sales Consultant, Oracle Israel
Alejandro Vargas, Principal Support Consultant, Oracle Advanced Customer Services

In the few months since the last forum in June many things have happened: 11g Release 2 is already in production, and it brought a revolution in terms of performance and deep changes to the world of ASM, Oracle Clusterware and RAC.

Exadata Release 2 was released, opening the way for OLTP databases based on the new Oracle-Sun Database Machine.

In this seminar we will review the new face of ASM, Oracle Clusterware and RAC in 11g Release 2, and we will comment on some of the incredible performance enhancements of the new version.

Migration of a VLDB to RAC 11g with Minimal Downtime

Dotan Mor,
Senior DBA
Pelephone DBA Team

Dotan will tell us the whole story of migrating an 8 TB data warehouse, with near zero downtime, from Linux 3 with OCFS2 to Linux 5 with CRS, RAC and ASM 11g, and InfiniBand; and how his customer still cannot believe the incredible improvements they got in performance.

He will also tell us about the several problems faced on the way to this big success, and how all of them were overcome.

Packaging Application and Database Together On Linux for Super-Silent-Installs

Liron Amitzi,
VP Professional Services
Brillix

Liron will explain to us how to build a Linux RPM that contains the whole set of files required to deploy a complete application easily and quickly, from the database to the last application executable.

See you there!

Best Regards,

Alejandro

Categories: DBA Blogs

Check IO Scripts

Alejandro Vargas - Wed, 2009-12-09 19:32

These scripts are very useful to check throughput.

The original version can be found in My Oracle Support note 422414.1, by Luca Canali.

set lines 250 pages 50000

alter session set nls_date_format='dd-mm-yyyy hh24:mi';

col Phys_Read_Total_Bps for 999999999999
col Phys_Write_Total_Bps for 999999999999
col Redo_Bytes_per_sec for 999999999999
col Phys_Read_IOPS for 999999999999
col Phys_write_IOPS for 999999999999
col Phys_redo_IOPS for 999999999999
col OS_LOad for 999999999999
col DB_CPU_Usage_per_sec for 999999999999
col Host_CPU_util for 999999999999
col Network_bytes_per_sec for 999999999999
col Phys_IO_Tot_MBps for 999999999999
col Phys_IOPS_Tot for 999999999999

spool io_max_checkup.log

select min(begin_time), max(end_time),
sum(case metric_name when 'Physical Read Total Bytes Per Sec' then maxval end) Phys_Read_Tot_Bps,
sum(case metric_name when 'Physical Write Total Bytes Per Sec' then maxval end) Phys_Write_Tot_Bps,
sum(case metric_name when 'Redo Generated Per Sec' then maxval end) Redo_Bytes_per_sec,
sum(case metric_name when 'Physical Read Total IO Requests Per Sec' then maxval end) Phys_Read_IOPS,
sum(case metric_name when 'Physical Write Total IO Requests Per Sec' then maxval end) Phys_write_IOPS,
sum(case metric_name when 'Redo Writes Per Sec' then maxval end) Phys_redo_IOPS,
sum(case metric_name when 'Current OS Load' then maxval end) OS_LOad,
sum(case metric_name when 'CPU Usage Per Sec' then maxval end) DB_CPU_Usage_per_sec,
sum(case metric_name when 'Host CPU Utilization (%)' then maxval end) Host_CPU_util, --NOTE 100% = 1 loaded RAC node
sum(case metric_name when 'Network Traffic Volume Per Sec' then maxval end) Network_bytes_per_sec,
snap_id
from dba_hist_sysmetric_summary
group by snap_id
order by snap_id;

spool off

spool io_maxtot_summary.log

select min(begin_time), max(end_time),
sum(case metric_name when 'Physical Read Total Bytes Per Sec' then maxval end)/1024/1024 +
sum(case metric_name when 'Physical Write Total Bytes Per Sec' then maxval end)/1024/1024 +
sum(case metric_name when 'Redo Generated Per Sec' then maxval end)/1024/1024 Phys_IO_Tot_MBps,
sum(case metric_name when 'Physical Read Total IO Requests Per Sec' then maxval end) +
sum(case metric_name when 'Physical Write Total IO Requests Per Sec' then maxval end) +
sum(case metric_name when 'Redo Writes Per Sec' then maxval end) Phys_IOPS_Tot,
sum(case metric_name when 'Current OS Load' then maxval end) OS_LOad,
sum(case metric_name when 'CPU Usage Per Sec' then maxval end) DB_CPU_Usage_per_sec,
sum(case metric_name when 'Host CPU Utilization (%)' then maxval end) Host_CPU_util, --NOTE 100% = 1 loaded RAC node
sum(case metric_name when 'Network Traffic Volume Per Sec' then maxval end) Network_bytes_per_sec,
snap_id
from dba_hist_sysmetric_summary
group by snap_id
order by snap_id;

spool off

Categories: DBA Blogs

Using OCFS2 as a Generic Cluster File System?

Sergio's Blog - Wed, 2009-12-09 00:56
If you use OCFS2 for purposes other than running Oracle software, please leave a comment or drop me a line: sergio[dot]leunissen[at]oracle[dot]com. I'm curious how you use OCFS2, how big your filesystems are, how many nodes are in the cluster, what you like about OCFS2, etc.
Categories: DBA Blogs
