Sorry for the inconvenience, but I have made a drastic move to merge the individual websites I maintain into one. Over the next couple of weeks, hopefully sooner, everything (websites and blogs) will be merged into a single website: www.jameskoopmann.com. This will let me communicate through a single site and hopefully make things easier on both you and me. You will quickly notice that I have taken a huge leap of faith and am using a blogging interface for this. So come stop by, take a look around, and let me know what you think. Just remember I am only now beginning the transition, so be a bit patient. But I DO want your comments.
Please change bookmarks, RSS feeds, etc., as thecheapdba.com, blog.thecheapdba.com, and blog.jameskoopmann.com will be going away soon.
To better model the physical aspects of an Oracle database, it is advantageous to test a disk configuration before actually installing an Oracle database on top of it. For this reason I suggest you take a look at Oracle's ORION tool to help benchmark your storage architecture. Proper benchmarking can be the difference between the same hardware delivering poor or excellent performance. Through the use of Oracle's ORION workload tool, database architects can develop a workload that mimics and stresses a storage array in the same manner as the planned application with an Oracle backend database. Because the ORION tool does not require a running Oracle database, multiple configurations can be tested so that an optimal storage configuration can be found while still providing for reliability, stability, and scalability.
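To give a feel for how ORION is driven, here is a minimal sketch. The raw device paths and disk count are hypothetical, and the script only echoes the ORION command rather than running it; point the .lun file at your own raw volumes before launching a real test.

```shell
# Hypothetical raw volumes to exercise -- substitute your own device paths.
cat > mytest.lun <<'EOF'
/dev/raw/raw1
/dev/raw/raw2
EOF

# "-run simple" measures small random I/O (IOPS) and large sequential
# I/O (MBPS) in separate passes; "-testname" must match the .lun prefix.
CMD="./orion -run simple -testname mytest -num_disks 2"
echo "$CMD"
```

ORION writes its results to summary and data files named after the test, which you can then compare across candidate storage configurations.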
Take a look at this introductory article: Measuring Disk I/O - Oracle's ORION Tool
How do you know if the disks you will be using from ANY particular vendor can muster up the IOPS and MBPS required to satisfy your current or future workloads?
In the article Measuring Disk I/O, we took a quick look at the amount of IOPS and MBPS (the workload) that our Oracle database is generating. These numbers are very important when we start to look at our system for available throughput, especially out to the disk subsystem. Why are these numbers important? As a very simple example, with no real meaning behind the numbers, suppose that after running the scripts from that article you find out that your database is requesting, and getting, 100,000 IOPS. If your disk subsystem has 1,000 disks and every disk is participating in satisfying those 100,000 IOPS, you could roughly say that each disk is performing about 100 IOPS. You then have to ask yourself the following questions:
Is this good on a per disk basis?
Do I have room to grow if my throughput were to double or triple?
How much breathing room do I really have?
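The back-of-the-envelope math above can be sketched as follows; the total IOPS, disk count, and assumed per-disk ceiling are all hypothetical figures, so plug in numbers measured on your own system.

```shell
# Hypothetical workload figures -- replace with your measured values.
TOTAL_IOPS=100000
NUM_DISKS=1000
PER_DISK_MAX=180   # assumed IOPS ceiling for a single drive

PER_DISK=$((TOTAL_IOPS / NUM_DISKS))
HEADROOM=$((100 * (PER_DISK_MAX - PER_DISK) / PER_DISK_MAX))
echo "each disk: ${PER_DISK} IOPS, headroom: ${HEADROOM}%"
```

With these assumed numbers, each disk is serving 100 IOPS with roughly 44% headroom left, so a doubling of the workload would already push past the assumed per-disk ceiling.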
So let’s take a quick dive into these questions from a vendor’s perspective in the article: Measuring Disk I/O - A Vendor View
Do you know how your disk subsystem is actually performing?
I have just posted an article on www.thecheapdba.com that will take a look at extracting some I/O statistics so that you can monitor and determine just how well your disks are doing within Oracle.
How can I separate Oracle I/O to maximize performance?
Should I separate data files from index files?
Should I separate redo logs?
These questions, AND many more, seem to flood our minds as database administrators. They are easy to answer with generalities, but in practice it can be very difficult to reach a conclusion unless we take a look at how our disk subsystem is actually performing.
Take a look at this article for some guidance: Measuring Disk I/O
Now, before I get too many comments on why this won’t work, let me just say that this really is a cheap man’s archive log mover. AND it does assume that you know your archive log process very well, along with the number of logs defined within your database. Setting the KEEPLOGS parameter inside this script is VERY CRITICAL. Setting this variable to a number higher than the number of redo logs in your database ensures it will never move a log file that is still being written to. This script will move archive logs from one directory to another. It does this based on reverse order of archive log creation and then, depending on how many logs to keep, will skip the first number of logs defined by KEEPLOGS and move the rest.
@ECHO OFF
TITLE move_archive_logs.bat
REM move_archive_logs.bat
REM =====================
REM This script will move archive logs from one directory to another.
REM It does this based on reverse order of archive log creation and
REM then, depending on how many logs to keep, will skip the first
REM number of logs defined by KEEPLOGS and then move the rest.
REM
REM It is advisable to set KEEPLOGS greater than the number of logs
REM defined, allowing time for them to write out to disk.
REM
DATE /T
TIME /T
SET VERSION_STRING=1.0
SET FROMDIR=\\Server_A\SHARE_C\Archive
SET TODIR=\\Server_B\SHARE_BACKUP\Archive
SET KEEPLOGS=6
SET KEEPDAYS=0
SET c=1
DIR %FROMDIR% /O-D /B > begdir.lst
FOR /F %%I IN (begdir.lst) DO call :MOVELOGFILE %%I
DIR %FROMDIR% /O-D /B > enddir.lst
goto :EOF

:MOVELOGFILE
IF %c% GTR %KEEPLOGS% (
  ECHO %FROMDIR%\%1
  COPY %FROMDIR%\%1 %TODIR%\%1
  IF EXIST %TODIR%\%1 (
    ERASE %FROMDIR%\%1
  )
)
SET /a c=%c%+1
goto :EOF
A lot of the AWR reports ask, before spitting out their output, for the number of days back you would like to go before entering your beginning and ending snapshot IDs. When they report on all the snapshot IDs, they have left out, I think, one very important piece of information that just might cloud our judgment when selecting the proper snapshot range: whether, during the snapshot period, there has been a bounce of the database, resetting the statistics to zero.
This script I have here is very similar to the ones in the AWR reports ($ORACLE_HOME/rdbms/admin/awr*) but also shows when the database has been bounced during a snapshot period. We can do this with the following SQL, which joins the DBA_HIST_DATABASE_INSTANCE and DBA_HIST_SNAPSHOT views, showing historical information on the snapshots in the Workload Repository. We obviously need to join these views on the dbid, instance_number, and, the important part, startup_time. We also make sure that we only bring back snapshots newer than the number of days back specified by the user, by comparing against the time of the actual snapshot (end_interval_time). Please note that this script will output a status of ‘**db restart**’ for those times when the database was down and unavailable. This is very important, as it shows us the times Oracle was not collecting statistics (the database was down) and, more importantly, when the statistic counters were zeroed. We can report on a bounce condition when the startup_time and begin_interval_time are the same.
prompt
prompt
prompt Enter the number of days to look for snapshot IDs
prompt ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
select dhdi.instance_name,
       dhdi.db_name,
       dhs.snap_id,
       to_char(dhs.begin_interval_time,'MM/DD/YYYY:HH24:MI') begin_snap_time,
       to_char(dhs.end_interval_time,'MM/DD/YYYY:HH24:MI')   end_snap_time,
       decode(dhs.startup_time,dhs.begin_interval_time,'**db restart**',null) db_bounce
  from dba_hist_snapshot          dhs,
       dba_hist_database_instance dhdi
 where dhdi.dbid            = dhs.dbid
   and dhdi.instance_number = dhs.instance_number
   and dhdi.startup_time    = dhs.startup_time
   and dhs.end_interval_time >= sysdate - &&num_days_back
 order by db_name, instance_name, snap_id;
Glad to be back. It HAS been a while, and hopefully you will forgive thecheapdba for staying away.
BUT this post, I am sure you will like. It is an installation guide for Oracle 11g on Linux CentOS-5.
As the installation is quite lengthy, I will just provide you with a link to the main, new and “improved” website location.
So just visit http://www.thecheapdba.com/articles/Install_Oracle11gCentOS5.htm to take a look at it!
Cheers, and it won’t be this long in between posts again.
The numbers are compelling: Oracle's share is 23.2%. A cluster of five other vendors have between 9% and 14% each. The rest is spread broadly, with each vendor commanding 2% or less. Oracle's share grew 23.3%, compared to growth of just under 12% for the sector as a whole.
I am glad to see this for a bunch of reasons. As Vice President of Embedded Technology at Oracle, I take a personal interest, of course. Oracle Berkeley DB, which Oracle acquired with Sleepycat in 2006, is aimed squarely at the embedded space. I have long maintained that embedded opportunities represent a significant source of new revenue and growth. Computers have escaped the data center, and special-purpose systems are getting deployed in living rooms, in the walls of buildings and in shirt pockets. There is an enormous amount of data travelling over networks and touching these systems.
The key to our success in the embedded space has been to assemble a family of products that address a wide range of requirements. A manufacturer building mobile telephone handsets needs to store crucial information reliably. So does a vendor building an optical network switch, and an ISV developing high-performance equity trading systems for financial markets. The three have very different requirements, though, and it's unrealistic to expect any single product to satisfy all of them.
All of our database products -- Oracle Database, Oracle TimesTen, Oracle Berkeley DB and Oracle Lite -- can be embedded in partner systems and deployed invisibly to end users. All contributed to our number one ranking by IDC.
It's not just the technology that has made us successful, though. The people who choose and deploy embedded databases are software developers. In the enterprise, we generally talk to DBAs and CIOs, but in the embedded world, we talk to architects and CTOs. Those conversations are different, and we have had to develop new expertise and new strategies as we have pursued embedded customers. Over the past several years, we've concentrated on building the technical, support and sales expertise necessary to win embedded business in countries around the globe. IDC's vendor share numbers suggest that we're doing okay.
Congratulations to Oracle's Embedded Global business team, and to the product development and support groups for all four products! This is a tremendous accomplishment.