FAQ: dba
Date: Mon, 23 May 1994 00:00:15 GMT
Message-ID: <1994May23.000215.11620_at_arbi.Informatik.Uni-Oldenburg.DE>
Here's a repost of the 5 DBA FAQs to date. Note:
- The "Become user" FAQ can be modified to use the "prompt" command instead of the "select from dual" command, thus reducing the number of database hits. However, there's a slight problem with this, since in Unix, the "$" of an OPS$ username causes problems. I haven't completed the workaround yet.
- The first FAQ, on backups, was rewritten and appeared in the Summer 93 issue of Oracle magazine, who now owns the copyright.
- I cannot mail these out to everyone, partly because my node does not recognize all the return mail paths.
- The second, third, and fourth FAQs reference material from previous published articles, again copyrighted by Oracle magazine. If you want to reprint them, I suggest that you reprint the article-format versions that appeared in that magazine. Contact them for copyright information.
- This material may be freely distributed by Tony Jambu and the Oracle group list server. It may not be published by the Midwest Oracle Users Group (aka TUSC).
-kml.
kml_at_cellar.org (Scott/Tiger)  "Damn the electric fence!"

DBA FAQ
(Frequently Asked Questions for/from Database Administrators)
by Kevin M. Loney, aka kml_at_cellar.org (username "Scott Tiger")
First in a Series.
Modification history
24-SEP-92  Original version, by Kevin M. Loney.
07-OCT-92  Modified to include NOTE about O/S-specific conditions that
           affect whether or not an entire block image is written out per
           transaction during hot backup mode (see note 3 of the 'Backup
           Strategy for Archivelogs' section). Note came from
           gpongrac_at_us.oracle.com.
Part 1. Backup Strategy
This is several pages long, you may want to just print it out and read it offline.
Q: What backup strategy should I use to best protect myself
against media and node failures? How should I structure my backup procedures to minimize the vulnerabilities they cause? And how can I have them check for both physical and logical errors?
A:
I will assume throughout this discussion that the reader has at
least a passing knowledge of the backup options available in Oracle.
The recovery options and procedures are well documented, both in the
DBA Guide and in the online RECOVERY.DOC file in the RDBMS directory.
The procedures described here will allow the user to best exploit
the tools available to him/her.
There are three valid ways of backing up an Oracle (V6-7) database. They are:
- Shut the database down and back up all tablespace, log, and control files.
- While the database is up, export the entire database.
- While the database is up, running in ARCHIVELOG mode, set the tablespaces into backup state and back up their files, then get the control files and the archived redo logs.
I'll be integrating all three of these. Even if you are already familiar with them, read the sections below.
1. The Cold Backup.
Once a week, shut down the database and perform a cold backup by going disk by disk and backing up all files in the [oracle] directories. When complete, restart the database. Why perform a shutdown/startup? Well, it's good to make sure the database can be restarted. There are certain parameters that are only checked on startup; errors with rollback segments in particular may only show up during startup operations.

2. The Export.
A full system export picks up information that user exports do not; this includes third-party indexes and grants. Why perform an export at all? Well, regular backups check that the database files are physically acceptable; export checks that they are logically readable. [Aside: This does not imply that they are logically importable; corrupt records may be exported into the dump file, preventing import. See my paper, #111, "Editing Corrupted Export Files (When Import Fails)" in the proceedings from the 1992 IOUW.]
For example, export will detect data structure errors in tables (these are particularly prevalent in tables containing LONG datatypes). Export is of limited use for full-system backups; in order to recreate from an export file, you'd have to re-create the database from scratch first. However, it is worthwhile to periodically validate every record in the database via export. The dump file can then be used to retrieve particular tables/users if needed.
Immediately following the export, run scripts to map the tablespaces to owners, and owners to tablespaces. This will allow you to see who owns what where. In the event of a tablespace loss, you would then be able to quickly determine what users/systems will be affected. Here are two scripts I use:
rem user object locations
set pagesize 60
break on owner on tablespace_name
column owner format A20
column tablespace_name format A32
column objects format A20
select substr(owner,1,20) owner,
substr(tablespace_name,1,32) tablespace_name, count(*)||' tables' objects
from sys.dba_tables
group by
substr(owner,1,20), substr(tablespace_name,1,32)
union
select substr(owner,1,20) owner,
substr(tablespace_name,1,32) tablespace_name, count(*)||' indexes' objects
from sys.dba_indexes
group by
substr(owner,1,20), substr(tablespace_name,1,32)
spool user_locs.lst
/
spool off
clear columns
clear breaks
rem ts_users.sql
rem user list by tablespace for export reference
column tablespace_name format A32
column owner format A20
column objects format A20
break on tablespace_name on owner
set pagesize 60
select
substr(tablespace_name,1,32) tablespace_name, substr(owner,1,20) owner, count(*)||' tables' objects
from sys.dba_tables
group by
substr(tablespace_name,1,32), substr(owner,1,20)
union
select
substr(tablespace_name,1,32) tablespace_name, substr(owner,1,20) owner, count(*)||' indexes' objects
from sys.dba_indexes
group by
substr(tablespace_name,1,32), substr(owner,1,20)
spool ts_users.lst
/
spool off
These will create the tablespace/user mapping files.
3. The Hot Backup.
The ARCHIVELOG function that came in OracleV6 opened up a whole new set of possible backup options, but also caused a lot of heartache until they got it working right.
Good parts about Archivelog:
- It gives you 24-hour availability.
- It gives you point-in-time recovery.
- It allows you to restore without losing any data at all.
- It lets you keep the database up, preserving the data in your SGA.
Bad parts about Archivelog:
- Administration can be difficult.
- Repetitive failed load attempts can create massive logs.
- When it gets stuck, the database comes to a halt.
Implementing Archivelog:
To start using Archivelog, simply alter your startup command file:
connect internal;
startup mount <SID> exclusive;
alter database archivelog;
archive log start;
alter database open;
and startup the database. From then on, the database will be in Archivelog mode until you turn it off (alter database noarchivelog). Note that a failed transaction is still a transaction; Archivelog keeps track of rollback segment extents, so rolled back inserts or deletes affect it just like completed transactions. Turn it off (shutdown, restart with noarchivelog mode) before doing big loads from SQL*Loader.
When you start archiving, an archived log will be written every time a previously written redo log file is about to be overwritten. Archived logs are written to the directory indicated by the log_archive_dest parameter in your init.ora file. They will all be the same size as your redo logs (in V6; V7 can have variably sized archive logs). They will increase in number until they run out of space on their destination device. At that point the database will freeze until you clear more space for them in the log_archive_dest location. SO, have a second location ready to receive them.
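To size the log_archive_dest area between backups, a rough back-of-the-envelope estimate helps. The figures below are assumptions for illustration only; measure your own log switch rate from the alert log.

```python
# Rough estimate of archive destination space needed between backups.
# In V6, archived logs are the same size as the online redo logs, so
# space needed = log size * switch rate * hours of coverage.
def archive_space_needed(redo_log_mb, switches_per_hour, hours_between_backups):
    return redo_log_mb * switches_per_hour * hours_between_backups

# e.g. 5 MB redo logs, 4 switches/hour, 24 hours between hot backups:
print(archive_space_needed(5, 4, 24))  # 480 (MB)
```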
Backup strategy for Archivelogs:
- Do each tablespace one at a time. That is, rather than setting them all offline, then backing them up, then setting them back online, do them each separately. You don't want to risk having a system crash while the entire database is in begin backup state; recovery is a mess. Minimize your window of vulnerability by having only one tablespace in backup state at any one time.
- Before you backup the control file, force an archive log switch. This will update the header information in the control file.
- Don't do it during user activity. When in backup state, a tablespace's activity is still written to the archive logs. However, it's written block-by-block rather than byte-by-byte. So changing one record in a tablespace that's being backed up will result in that record's entire block being written to the archive area. NOTE: This is correct only for those platforms where the physical sector size is less than the Oracle logical block size. On systems where the physical disk transfer size is equal to the Oracle block size, then we do not incur the penalty of having to log the entire block. This is true for MVS, VM, and perhaps other systems.
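The block-versus-byte penalty can be quantified roughly. The 2 KB Oracle block size below is an assumption (a common V6 default); substitute your own block size.

```python
# Sketch of the hot-backup logging penalty described above: while a
# tablespace is in backup state, changing one record causes its whole
# Oracle block to be logged, not just the changed bytes.
def redo_amplification(change_bytes, oracle_block_bytes=2048):
    return oracle_block_bytes / change_bytes

# An 80-byte record change logs a full 2048-byte block:
print(redo_amplification(80))  # 25.6 times the redo of the change itself
```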
Sample Archive log command file for VMS:
$ dup = "backup/ignore=(noback,interl,label)/log"
$ sqldba
CONNECT INTERNAL
alter tablespace system begin backup;
exit
$ dup u01:[oracle]ora_system.dbs tape1:ora_system.bck/sav
$ sqldba
CONNECT INTERNAL
alter tablespace system end backup;
alter tablespace appl1 begin backup;
exit
$ dup u02:[oracle]appl1.dbs tape1:appl1.bck/sav
$ sqldba
CONNECT INTERNAL
alter tablespace appl1 end backup;
exit
$!
$! get archive logs
$ rename/log u03:[oracle.arcs]*.arc *.arclogs
$ rename/log u04:[oracle.arcs2]*.arc *.arclogs   !secondary arcs dir
$ sqldba
CONNECT INTERNAL
alter system switch logfile;
exit
$ dup u03:[oracle.arcs]*.arclogs,u04:[oracle.arcs2]*.arclogs tape1:logs.bck/sav
$ del/log u03:[oracle.arcs]*.arclogs;0
$ del/log u04:[oracle.arcs2]*.arclogs;0
$!
$! get control file
$ sqldba
CONNECT INTERNAL
alter database backup controlfile to 'u01:[oracle]control.bkp' reuse;
exit
$ dup u01:[oracle]control.bkp tape1:control.bck/sav
Note: The "alter system switch logfile" command is all but undocumented (see pg 3-15 of the DBA guide, which refers you to a nonexistent cross-reference). It will NOT show up in the alert log. Don't be alarmed by that; it does actually work.
Integrating the three methods.
Shutdown once a week. If possible, run a full export immediately prior to the shutdown. If this is not possible because of time constraints, perform the export on the previous night. While down, perform a cold backup of all Oracle-related files, then restart. On all other nights, perform a hot backup while the database is up using the methodology described above.
If at all possible in your O/S, it is also a good idea to shadow the disks on which your realtime and archived redo logs reside.
Kevin Loney (kml_at_cellar.org) regularly writes the User Help column for Oracle magazine. In his spare time, he is an Oracle DBA in southeast PA. Comments, questions, and job offers are always welcome.
DBA FAQ
(Frequently Asked Questions for/from Database Administrators)
by Kevin M. Loney, aka kml_at_cellar.org (username "Scott Tiger")
Second in a Series.
Part 2. Database Monitoring
This is several pages long, you may want to just print it out and read it offline.
NOTE: Portions of this article previously appeared in "The One-Minute
DBA", in Oracle magazine, Winter 1992. Therefore, do not attempt to re-publish this material without the express written consent of Oracle Magazine. For a free subscription, contact your Oracle sales rep or call 415-506-7000. ************
The scripts in this FAQ are V6 specific.
Since posting my request for DBA FAQ topics and ideas, I have received a number of responses detailing individuals' monitoring scripts. I have tried to integrate the best features of each in the process of writing this FAQ. Special thanks to Andy Finkenstadt (andy_at_homebase.vistachrome.com).
Q: How can I best monitor my database? In particular, how can I track changes in space usage, fragmentation, and rollback segments?
A: Perform daily monitoring sessions that write their results to tables. Then write scripts that report on changes in the statistics gathered. Examples are shown below.
Monitoring files
This may seem like an odd place to start, but it really is significant. This will give you a quick way of determining which files are on which device (which one needs to know when a disk goes down). It also helps to get a quick report on the increase in the total disk space allocated to the database over time.
Create a table to store the data:
create table file_check
(TS          CHAR(30) not null,
 FILE_NAME   CHAR(80),
 BLOCKS      NUMBER not null,
 DISKBLOCKS  NUMBER,
 CHECK_DATE  DATE);
Then, on a daily basis, write file spec information from dba_data_files to that table. Reports on this data will be vital if you ever have to try to take files offline while the database is mounted but not open. A sample insert script:
INSERT INTO file_check
SELECT TABLESPACE_NAME ts,
       FILE_NAME,
       BLOCKS,
       BLOCKS*4 DISKBLOCKS,   /*VMS!*/
       TO_CHAR(sysdate,'DD-MON-YY') check_date
FROM SYS.DBA_DATA_FILES;
Note that the ratio of Oracle block size to VMS block size is 4:1. Be sure to replace the '4' above with your operating system's value.
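The conversion behind the DISKBLOCKS column can be sketched generically. The 2048-byte Oracle block and 512-byte OS block below are the VMS assumption; replace both with your platform's values.

```python
# Convert Oracle blocks to OS disk blocks.  The multiplier is
# (Oracle block size / OS block size); 2048/512 = 4 on VMS.
def os_blocks(oracle_blocks, oracle_block_size=2048, os_block_size=512):
    return oracle_blocks * (oracle_block_size // os_block_size)

print(os_blocks(1000))  # 4000 OS blocks for 1000 Oracle blocks
```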
Monitoring Space, part 1 - The Big Picture
This section calls for the creation of another table. This will be used to store the results of space monitoring queries and will be used in later sections to generate trend reports.
Create a table to store the data:
create table space_check
(TS                CHAR(30),
 NUM_EXTS          NUMBER,
 MAX_EXTS          NUMBER,
 ORABLOCKS_FREE    NUMBER,
 DISKBLOCKS_FREE   NUMBER,
 PERCENTFREE       NUMBER,
 DISKBLOCKS        NUMBER,
 CHECK_DATE        DATE);
Now, write a script that populates the space_check table once a day. Use a batch job to automate the entry.
CREATE VIEW BLOCK_TOTAL (TABLESPACE_NAME, TBLOCKS) AS
SELECT TABLESPACE_NAME, SUM(BLOCKS)
FROM SYS.DBA_DATA_FILES
GROUP BY TABLESPACE_NAME;

INSERT INTO space_check
SELECT BT.TABLESPACE_NAME ts,
       COUNT(DFS.BLOCKS) NUM_EXTS,
       MAX(DFS.BLOCKS) MAX_EXTS,
       SUM(DFS.BLOCKS) ORABLOCKS_free,
       SUM(DFS.BLOCKS)*4 DISKBLOCKS_free,
       (SUM(DFS.BLOCKS)/BT.TBLOCKS)*100 PERCENTFREE,
       BT.TBLOCKS*4 DISKBLOCKS,
       TO_CHAR(sysdate,'DD-MON-YY') check_date
FROM SYS.DBA_FREE_SPACE DFS, BLOCK_TOTAL BT
WHERE DFS.TABLESPACE_NAME = BT.TABLESPACE_NAME
GROUP BY BT.TABLESPACE_NAME, BT.TBLOCKS;
Again, note that a 4:1 ratio of OS block size to db block size is assumed. To report from these tables for any given date:
SELECT TS, FILE_NAME, BLOCKS, DISKBLOCKS
FROM FILE_CHECK
WHERE CHECK_DATE = to_date('&&CHECK_DATE')
ORDER BY TS, FILE_NAME;

SELECT TS, NUM_EXTS, MAX_EXTS, ORABLOCKS_FREE,
       round(100*MAX_EXTS/ORABLOCKS_FREE,2) max_pct,
       DISKBLOCKS_FREE, round(PERCENTFREE,2), DISKBLOCKS
FROM SPACE_CHECK
WHERE CHECK_DATE = to_date('&&CHECK_DATE')
ORDER BY TS;
Explanation of the resulting report:
TS = tablespace
Num_Exts = The total number of free extents in the tablespace
Max_Exts = The maximum size, in Oracle blocks, of the largest free extent.
ORAblocks_free = The total amount of free space in the tablespace, in
oracle blocks.
Max_Pct = The size of the largest free extent as a percentage of all the
free space in that tablespace.
Diskblocks_free = The free space in the tablespace, in OS blocks.
Percentfree = The percent of the tablespace that has not been allocated
by any segments.
Diskblocks = The total number of diskblocks allocated to this tablespace.
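For instance, the Max_Pct column is computed as shown below; the input values are invented for illustration.

```python
# Max_Pct: the largest free extent as a percentage of all free space
# in the tablespace.  Mirrors round(100*MAX_EXTS/ORABLOCKS_FREE,2)
# from the report query above.
def max_pct(max_exts, orablocks_free):
    return round(100 * max_exts / orablocks_free, 2)

# One 300-block free extent out of 400 free blocks total means the
# free space is fairly contiguous:
print(max_pct(300, 400))  # 75.0
```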
Monitoring Space, part 2 - Trend Analysis
Now that this information is stored in the database, we can have the database monitor itself for changes. First, we can print out all values; in the next section we will only print those that have changed a certain percentage.
First, to monitor changes in the Percentfree value:
column week4 format 999
column week3 format 999
column week2 format 999
column week1 format 999
column today format 999
column change format 999
ttitle center 'Changes by week in tablespace PercentFree values' skip 2
select ts tablespace_name,
       max(decode(check_date,to_char(sysdate-28,'DD-MON-YY'),
           round(percentfree),0)) week1,
       max(decode(check_date,to_char(sysdate-21,'DD-MON-YY'),
           round(percentfree),0)) week2,
       max(decode(check_date,to_char(sysdate-14,'DD-MON-YY'),
           round(percentfree),0)) week3,
       max(decode(check_date,to_char(sysdate-7,'DD-MON-YY'),
           round(percentfree),0)) week4,
       max(decode(check_date,to_char(sysdate,'DD-MON-YY'),
           round(percentfree),0)) today,
       max(decode(check_date,to_char(sysdate,'DD-MON-YY'),
           round(percentfree),0)) -
       max(decode(check_date,to_char(sysdate-28,'DD-MON-YY'),
           round(percentfree),0)) change
from space_check
group by ts
order by decode(
       max(decode(check_date,to_char(sysdate,'DD-MON-YY'),
           round(percentfree),0)) -
       max(decode(check_date,to_char(sysdate-28,'DD-MON-YY'),
           round(percentfree),0)),0,9999,
       max(decode(check_date,to_char(sysdate,'DD-MON-YY'),
           round(percentfree),0)) -
       max(decode(check_date,to_char(sysdate-28,'DD-MON-YY'),
           round(percentfree),0))),
       max(decode(check_date,to_char(sysdate,'DD-MON-YY'),
           round(percentfree),0));
I realize that looks a little convoluted. What it says is that for each tablespace, the database is to print out the values from today, a week ago, two weeks ago, three weeks ago, four weeks ago, and the change from the fourth week ago to today. The order by clause forces it to print the tablespaces with the lowest percentfree at the top of the report.
For num_exts, we'd like to reverse the order of the report, so that
the most fragmented will appear at the top:
column week4 format 999
column week3 format 999
column week2 format 999
column week1 format 999
column today format 999
column change format 999
ttitle center 'Changes by week in tablespace Num_Extents values' skip 2
select ts tablespace_name,
       max(decode(check_date,to_char(sysdate-28,'DD-MON-YY'),
           num_exts,0)) week1,
       max(decode(check_date,to_char(sysdate-21,'DD-MON-YY'),
           num_exts,0)) week2,
       max(decode(check_date,to_char(sysdate-14,'DD-MON-YY'),
           num_exts,0)) week3,
       max(decode(check_date,to_char(sysdate-7,'DD-MON-YY'),
           num_exts,0)) week4,
       max(decode(check_date,to_char(sysdate,'DD-MON-YY'),
           num_exts,0)) today,
       max(decode(check_date,to_char(sysdate,'DD-MON-YY'),
           num_exts,0)) -
       max(decode(check_date,to_char(sysdate-28,'DD-MON-YY'),
           num_exts,0)) change
from space_check
group by ts
order by decode(
       max(decode(check_date,to_char(sysdate,'DD-MON-YY'),
           num_exts,0)) -
       max(decode(check_date,to_char(sysdate-28,'DD-MON-YY'),
           num_exts,0)),0,-9999,
       max(decode(check_date,to_char(sysdate,'DD-MON-YY'),
           num_exts,0)) -
       max(decode(check_date,to_char(sysdate-28,'DD-MON-YY'),
           num_exts,0))) DESC,
       max(decode(check_date,to_char(sysdate,'DD-MON-YY'),
           num_exts,0)) desc, 1;
The same can be done for any of the other columns. Note the differing order by clauses for ascending (like percentfree) or descending (like num_exts) change reports.
Monitoring Space - part 3: Automatic alerts
As part of the process that inserts records into Space_check and File_check on a daily basis, you should also kick off a job that checks them for warning conditions. The following script is a modified version of the script above that prints out percentfree values. It only prints out those tablespaces whose free space has dropped 5 percent in four weeks. Note that the 5 percent is relative to the starting value (i.e., a drop from 20 to 19 percentfree will be printed out) while the printed values will probably show a smaller drop. This keeps tablespaces from failing to show up as they diminish in size.
column week4 format 999
column week3 format 999
column week2 format 999
column week1 format 999
column today format 999
column change format 999
ttitle center 'Tablespaces whose PCTF values have changed 5 pct this month'
select ts tablespace_name,
       max(decode(check_date,to_char(sysdate-28,'DD-MON-YY'),
           round(percentfree),0)) week1,
       max(decode(check_date,to_char(sysdate-21,'DD-MON-YY'),
           round(percentfree),0)) week2,
       max(decode(check_date,to_char(sysdate-14,'DD-MON-YY'),
           round(percentfree),0)) week3,
       max(decode(check_date,to_char(sysdate-7,'DD-MON-YY'),
           round(percentfree),0)) week4,
       max(decode(check_date,to_char(sysdate,'DD-MON-YY'),
           round(percentfree),0)) today,
       max(decode(check_date,to_char(sysdate,'DD-MON-YY'),
           round(percentfree),0)) -
       max(decode(check_date,to_char(sysdate-28,'DD-MON-YY'),
           round(percentfree),0)) change
from space_check
group by ts
having
       abs((max(decode(check_date,to_char(sysdate-28,'DD-MON-YY'),
           round(percentfree),0)) -
       max(decode(check_date,to_char(sysdate,'DD-MON-YY'),
           round(percentfree),0))) /
       max(decode(check_date,to_char(sysdate-28,'DD-MON-YY'),
           round(percentfree),1))) > 0.05
order by decode(
       max(decode(check_date,to_char(sysdate,'DD-MON-YY'),
           round(percentfree),0)) -
       max(decode(check_date,to_char(sysdate-28,'DD-MON-YY'),
           round(percentfree),0)),0,9999,
       max(decode(check_date,to_char(sysdate,'DD-MON-YY'),
           round(percentfree),0)) -
       max(decode(check_date,to_char(sysdate-28,'DD-MON-YY'),
           round(percentfree),0))),
       max(decode(check_date,to_char(sysdate,'DD-MON-YY'),
           round(percentfree),0));
Note that the percent change detected can be easily changed. Have this script generate a listing file and mail it to your account every day before you get in. That way, in the first minute you're at work you'll have a warning of the most common space problems that may come your way.
Sample output:
TABLESPACE_NAME                WEEK1 WEEK2 WEEK3 WEEK4 TODAY CHANGE
------------------------------ ----- ----- ----- ----- ----- ------
PRODUCTION                        79    63    62    62    61    -18
PRODUCTION_I                      90    74    73    73    73    -17
APPL2_DATA                        63    61    60    59    58     -5
APPL2_INDEX                       54    53    52    52    51     -3
TEST_DATA                         72    72    71    68    78      6
Plotting the Results in SQL*Plus
SQL*Plus can be used to generate some simple line graphs. The following script plots the percentfree value as a function of week. A few things to note: the WW to_char function marks the week beginning with the first day of the year, thus, there are 53 weeks of the year and they start on a different day every year. Secondly, I print only the minimum percentfree value for a week, since this is the most critical value. If you leave SPACE at 1 in SQL*Plus, you'll need a printer that supports 209 characters. If you add a SET SPACE 0 command, you can get by with a 132 column printer.
First, create a view that summarizes the values by week:
create view BY_WEEKS as
select ts,
       to_char(check_date,'YYYY') yr,
       to_char(check_date,'WW') wk,
       max(ROUND(percentfree)) pct,
       MIN(ROUND(PERCENTFREE)) minpct
from space_check
group by ts, to_char(check_date,'YYYY'), to_char(check_date,'WW');
Then select the values. Most of the repetitive lines are removed for brevity.
set linesize 240
set pagesize 200
COLUMN WK FORMAT A3
SET FEEDBACK OFF
SET VERIFY OFF
ttitle center 'Min Percentfree value by week for &&ts' skip 2
select
wk,
decode(minpct,0,'o',null) zchk0,
decode(minpct,1,'o',null) chk1,
decode(minpct,2,'o',null) chk2,
decode(minpct,3,'o',null) chk3,
decode(minpct,4,'o',null) chk4,
decode(minpct,5,'o',null) chk5,
decode(minpct,6,'o',null) chk6,
decode(minpct,7,'o',null) chk7,
decode(minpct,8,'o',null) chk8,
decode(minpct,9,'o',null) chk9,
decode(minpct,10,'o',null) tchk10,
etc.
decode(minpct,98,'o',null) chk98,
decode(minpct,99,'o',null) chk99,
decode(minpct,100,'o',null) hchk100
from by_weeks
where ts = upper('&&ts')
and yr = '&&year';
Spool the result to a file. I change the first character of the column alias every 10 characters to give me a marker to go by when reading the graph.
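Each row of the graph query emits at most one non-null column, so the 'o' lands at a horizontal offset equal to that week's minimum percentfree. A hypothetical Python equivalent of one output row:

```python
# Sketch of one line of the SQL*Plus graph: the 'o' sits at a column
# offset equal to minpct.  plot_row is a hypothetical helper, not one
# of the scripts above.
def plot_row(week, minpct, width=101):
    return week + ' ' + ''.join('o' if i == minpct else ' ' for i in range(width))

# Week 07, minimum percentfree of 25:
print(plot_row('07', 25).rstrip())
```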
Monitoring Rollback Segments
The following script will display the current status of the rollback segments in the database. I have to confess that I don't store this in a database table, mostly because its values are so subject to change by an outside force (me), and the trends, while possibly helpful for detecting rollback segments approaching the OS limit on the number of extents, are not consistent enough on my database to be useful.
SELECT SUBSTR(DS.SEGMENT_NAME,1,22) R_SEGMENT,
       SUBSTR(DS.TABLESPACE_NAME,1,20) TABLESPACE,
       DS.BLOCKS, DS.EXTENTS, DRS.STATUS
FROM DBA_SEGMENTS DS, DBA_ROLLBACK_SEGS DRS
WHERE DS.SEGMENT_NAME = DRS.SEGMENT_NAME
ORDER BY 1;
Future FAQ postings will explore further database statistics that can be monitored, such as incremental hit ratios, dictionary cache hits, and bstat/estat statistics.
Feedback is always welcome and appreciated.
Kevin Loney (kml_at_cellar.org) regularly writes the User Help column for Oracle magazine. In his spare time, he is an Oracle DBA in southeast PA. Comments, questions, and job offers are always welcome.
DBA FAQ
(Frequently Asked Questions for/from Database Administrators)
by Kevin M. Loney, aka kml_at_cellar.org (username "Scott Tiger")
Third in a Series.
Part 3. Rollback Segment Management
This is several pages long, you may want to just print it out and read it offline.
Q: How can I best manage my rollback segments? How can I drop
public rollback segments? How can I tell how much rollback space a transaction takes?
Monitoring Rollback Segments
In DBA FAQ, Part 2, I gave this script that allows you to quickly see the status of your rollback segments:
SELECT SUBSTR(DS.SEGMENT_NAME,1,22) R_SEGMENT,
       SUBSTR(DS.TABLESPACE_NAME,1,20) TABLESPACE,
       DS.BLOCKS, DS.EXTENTS, DRS.STATUS
FROM DBA_SEGMENTS DS, DBA_ROLLBACK_SEGS DRS
WHERE DS.SEGMENT_NAME = DRS.SEGMENT_NAME
ORDER BY 1;
How do you use this information?
Well, for starters understand that fragmentation in rollback segments is not a bad thing. Each new transaction in a rollback segment will start in a new extent of that segment. I have found that the best setup for managing them is one in which every extent is the exact same size (PCTINCREASE 0). This allows the rollback segments in the tablespace to reuse dropped extents while limiting their maximum size.
In VMS, no Oracle object can take more than 121 extents. So if my rollback segments are created with an initial extent of 1 Meg, and a next 1M, PCTINCREASE 0, then anytime a transaction or group of transactions tries to exceed 121 Meg, the offending statement will fail. Thus, the script above will show you two important things: the number of extents, and the amount of space allocated.
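That ceiling translates directly into a maximum rollback segment size when all extents are the same size. The 1 MB extent and 121-extent limit below are just the VMS example from above.

```python
# With PCTINCREASE 0, every extent is the same size, so the maximum
# rollback segment size is simply extent size times the OS extent
# ceiling (121 on VMS, per the text above).
def max_rollback_mb(extent_mb, max_extents=121):
    return extent_mb * max_extents

print(max_rollback_mb(1))  # 121: transactions needing more MB will fail
```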
Ideally:
1. each transaction will fit in one extent of a rollback segment.
2. all non-system rollback segments will be located outside of the
SYSTEM tablespace
3. all rollback segments will be of equal size
NOTE: #2 seems hard to figure out at first. After all, you have to have a second rollback segment in SYSTEM before you can write data to non-SYSTEM rollback segments. However, once you have done that, you can create a tablespace called ROLLBACKS (eg) and create rollback segments there. Once you have done this, you no longer need the non-SYSTEM rollback segment in the SYSTEM tablespace. No other objects should be created in the ROLLBACKS tablespace.
NOTE: #3 is important because it means that any given transaction, when running, will have a good chance of running without dynamically extending the rollback segment it resides in (can you say "recursive calls"?), improving its performance and its chance of succeeding.
Monitoring Rollback Area used by Transactions
This topic has been covered in several Oracle technical bulletins and a few IOUG papers, so I'll present it very briefly. Any transaction's rollback area can be measured by monitoring the changes in the system statistics tables during its run. This means that a valid reading requires that nothing else be running during the time the test is taking place.
All you have to do is store the result of the following query immediately before and after running your test:
SELECT SUM(WRITES) FROM V$ROLLSTAT;

When the transaction has completed, and you have the beginning and ending values, the size of the rollback information generated can be calculated:

ENDING_WRITES - BEGINNING_WRITES - 54 = ROLLBACK INFO GENERATED

NOTE: The "54" is the rollback overhead that corresponds to the amount of overhead required by a table insert operation. This assumes that you're storing these values in tables. If you're just tracking them on paper, then ignore this factor.
This will tell you how much rollback space you need to handle this transaction; knowing how many transactions will be running at once will give you an idea of how much space will be needed in your ROLLBACKS tablespace.
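The arithmetic above, restated as a small sketch; the write counts are invented for illustration.

```python
# Rollback generated by a test transaction, per the formula above.
# The 54-unit overhead applies only when the before/after readings
# are themselves recorded via table inserts.
def rollback_generated(begin_writes, end_writes, stored_in_table=True):
    overhead = 54 if stored_in_table else 0
    return end_writes - begin_writes - overhead

print(rollback_generated(10000, 12554))  # 2500
```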
Dropping Public Rollback Segments
Private rollback segments are very easy to manage. Whatever is in init.ora is in use. Whatever isn't in init.ora can be dropped. Public rollback segments, however, are more trouble to deal with.
In order to drop a public rollback segment, you must first convince Oracle that it doesn't need it. To do this, rig init.ora so that Oracle starts up using just the SYSTEM rollback segment. Add the following lines to init.ora (if there aren't already entries for these variables):
transactions = 20
transactions_per_rollback_segment = 20
and start the database. You will now be able to drop the public rollback segments. Do yourself a favor and change them to private ones. When you're done with your work, change your init.ora back to its earlier values.
In their paper "Managing Rollback Segments" (IOUW, 1992), Mark Ramacher and Harmeet Bharara give a script that can be used to identify all users with active transactions and the rollback segment each is using. The version that appears in that article, however, is wrong. Here is the corrected version:
SELECT R.NAME ROLLBACK_SEG,
       L.PID ORACLE_PID,
       P.SPID SYSTEM_PID,
       NVL(P.USERNAME,'NO TRANSACTION') TRANSACTION,
       P.TERMINAL
FROM V$LOCK L, V$PROCESS P, V$ROLLNAME R
WHERE L.PID = P.PID (+)
AND TRUNC(L.ID1(+)/65536) = R.USN
AND L.TYPE(+) = 'TX'
AND L.LMODE(+) = 6
ORDER BY R.NAME;
Note that the column inside the Trunc command is id1, not idL.
Sample output:
ROLLBACK_SEG                   ORACLE_PID SYSTEM_PI TRANSACTION    TERMINAL
------------------------------ ---------- --------- -------------- --------
R_SEG1                                              NO TRANSACTION
R_SEG2                                              NO TRANSACTION
R_SEG3                                              NO TRANSACTION
R_SEG4                                 14 23C09442  SMITHJ         VTA3905:
R_SEG5                                 15 23C0A8D3  IDL1           VTA3992:
R_SEG6                                 22 23C07727  SCOTT          VTA3869:
SYSTEM                                              NO TRANSACTION
This shows that only three users are actively writing to the rollback segments, and there is no contention for the segment headers.
If there were more than one user using a rollback segment, there would be multiple records for that rollback segment. I'd recommend adding a break on the rollback_seg column to make this stand out.
MISCELLANY
Starting in Version 6.0.33, you can specify which rollback segment your transaction is to use via the <set transaction use rollback segment> command. Note, however, that this command does not guarantee that yours will be the only transaction in that rollback segment.
In Version 7, you will be able to specify an OPTIMAL size for each rollback segment. Once the rollback segment exceeds this size, it will shrink back to it as soon as its current transactions have completed.
DBA FAQ Information:
Part 1 - Backup Strategies
Part 2 - Monitoring the Database
Part 3 - Rollback Segments
Kevin Loney (kml_at_cellar.org) regularly writes the User Help column for Oracle magazine. In his spare time, he is an Oracle DBA in southeast PA. Comments and questions are always welcome.
DBA FAQ
(Frequently Asked Questions for/from Database Administrators)
by Kevin M. Loney, aka kml_at_cellar.org (username "Scott Tiger")
Fourth in a Series.
30-NOV-92 Initial posting. KML.
Part 4. Database Administration, Bits and Pieces
This is several pages long, you may want to just print it out and read it offline.
The scripts in this FAQ are V6 specific.
Unlike the other FAQ's, this isn't geared toward a specific topic. It contains tips and scripts for a variety of areas. These may not be published without prior approval of the author. Even then, they may not be published in the Midwest Oracle Users Group newsletter.
Topics:
- Monitoring the Dictionary Cache usage.
- "Mapping" tablespace space usage
- Displaying "widow" grants
- Automating cleanup of widow grants.
1. Monitoring the Dictionary Cache usage
In V6, information about the database - such as table names, grants, and usernames - is stored in the dictionary cache. When a user performs a query against a table, the cache is searched to see if that table is already there. If it is not, the descriptive information about that table is loaded into the cache (this is called a recursive call). That information stays in the cache until the cache runs out of space. The least recently used entry is then moved out of the cache to make room for new entries.
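The least-recently-used eviction described above can be sketched in a few lines. This is purely illustrative (the real cache lives inside the Oracle kernel and isn't user-visible); the Python class, its table names, and the cache size are all made up for the example:

```python
from collections import OrderedDict

class DictCache:
    """Toy model of a V6 dictionary cache: fixed size, LRU eviction."""
    def __init__(self, count):
        self.count = count            # stands in for the dc_ init.ora setting
        self.entries = OrderedDict()
        self.gets = self.getmisses = 0

    def lookup(self, name):
        self.gets += 1
        if name in self.entries:              # hit: refresh recency
            self.entries.move_to_end(name)
        else:                                 # miss: the "recursive call"
            self.getmisses += 1
            if len(self.entries) >= self.count:
                self.entries.popitem(last=False)   # evict least recently used
            self.entries[name] = True

cache = DictCache(count=2)
for t in ["EMP", "DEPT", "EMP", "BONUS", "DEPT"]:
    cache.lookup(t)
# DEPT was evicted when BONUS arrived, so the final DEPT lookup misses again
print(cache.gets, cache.getmisses)   # 5 4
```

Note how a cache sized too small (count=2 here) turns repeat lookups into misses - the same effect the gets/getmisses query below is meant to expose.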
The sizes of these caches are determined by the 'dc_' init.ora parameters.
The v$rowcache table can be used to determine how many queries scored
'hits' against the cache (found the sought entry in the cache), or 'missed'
(performed a recursive call). Current usage and the dc setting ('count')
can also be queried:
SET LINESIZE 80
SET PAGESIZE 60
SELECT PARAMETER, GETS, GETMISSES, USAGE, COUNT
FROM V$ROWCACHE
ORDER BY 1;
Part of a sample output:
PARAMETER                              GETS  GETMISSES      USAGE      COUNT
-------------------------------- ---------- ---------- ---------- ----------
dc_column_grants                         18          6          6        100
dc_columns                           575005      14571       9579      14000
dc_constraint_defs                        0          0          1        100
dc_constraints                            2          2          2        500
dc_files                               6466         53         53         64
dc_free_extents                         511        211        211        600
dc_indexes                           119282        741        784       2000
dc_object_ids                          7281         35         11       2000
dc_objects                           113726       1943       1974       3000
So for the dc_objects cache, the init.ora parameter setting is 3000, of which 1974 entries are currently being used. There have been 113,726 requests ('gets') against this cache, and all but 1,943 of them were hits.
Note that the caches are reset as part of database startups, so systems which are kept up for long periods should have higher cache settings.
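The GETS and GETMISSES columns reduce to a simple hit ratio. A sketch of the arithmetic (Python here for illustration; the rows are hand-copied from the sample output above, and the formatting is my own):

```python
# (parameter, gets, getmisses) rows, as returned by the v$rowcache query
rows = [
    ("dc_columns", 575005, 14571),
    ("dc_indexes", 119282,   741),
    ("dc_objects", 113726,  1943),
]

ratios = {}
for parameter, gets, getmisses in rows:
    hits = gets - getmisses          # every get that was not a recursive call
    ratios[parameter] = round(100.0 * hits / gets, 2)
    print("%-12s %6.2f%% hit ratio" % (parameter, ratios[parameter]))
```

A cache whose ratio stays low long after startup is a candidate for a larger dc_ setting.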
2. "Mapping" tablespace space usage
Space in a tablespace is either used or free; free space is tracked in the dba_free_space table, while used space is tracked in dba_extents. By combining the two tables in one query, you can quickly see just how badly fragmented your tablespace is. This also allows you to determine how contiguous the free space extents are.
SET PAGESIZE 60
SET LINESIZE 132
SET VERIFY OFF
SPOOL MAP
SELECT 'FREE SPACE' OWNER,' ' OBJECT,FILE_ID, BLOCK_ID, BLOCKS
FROM DBA_FREE_SPACE WHERE TABLESPACE_NAME = '&&TS'
UNION
SELECT SUBSTR(OWNER,1,20), SUBSTR(SEGMENT_NAME,1,32),FILE_ID,BLOCK_ID,BLOCKS
FROM DBA_EXTENTS
WHERE TABLESPACE_NAME = '&&TS'
ORDER BY 3,4;
SPOOL OFF
UNDEFINE TS
Partial Sample Output:
(This output has been trimmed to 80 chars by shortening the Object field)
OWNER                OBJECT                  FILE_ID   BLOCK_ID     BLOCKS
-------------------- -------------------- ---------- ---------- ----------
ABC_MGR              ABC                           3          2       5120
ABC_MGR              ABC_PK                        3       5122         70
ABC_MGR              ABC_DOC_NO                    3       5192         97
FREE SPACE                                         3       5289          7
ABC_MGR              ABC_CHILDREN_UPDATE           3       5296         36
ABC_MGR              PARENT                        3       5332         10
ABC_MGR              ABC_CORPORATE_SOURCE          3       5342         18
ABC_MGR              ABC_FILE_COLOR_FOR_S          3       5360          3
ABC_MGR              ABC_LOC                       3       5363       2145
FREE SPACE                                         3       7508        384
FREE SPACE                                         3       7892        384
FREE SPACE                                         3       8276        576
FREE SPACE                                         3       8852        385
FREE SPACE                                         3       9237        385
ABC_MGR              ABC_LOC                       3       9622         10
FREE SPACE                                         3       9632         84
ABC_MGR              ABC_WORD_INDEX                4          2         39
ABC_MGR              ABC_WORD_INDEX                4         41         39
FREE SPACE                                         4         80          3
ABC_MGR              ABC_WORD_INDEX                4         83         59
ABC_MGR              ABC_INDEX                     4        142         14
ABC_MGR              ABC_INDEX                     4        156         14
ABC_MGR              ABC_INDEX                     4        170         21
ABC_MGR              ABC_INDEX                     4        191         32
ABC_MGR              ABC_WORD_INDEX                4        223         89
ABC_MGR              ABC_PK                        4        312         70
FREE SPACE                                         4        382         35
ABC_MGR              ABC_DOC_NO                    4        417         97
FREE SPACE                                         4        514         57
FREE SPACE                                         4        571      76230
Let's take a look at this. In file 3, there are 5 contiguous free extents; these could be coalesced into 1 by creating an object of exactly their combined size (I prefer to first create objects of the size of all larger free chunks). Also, the ABC_WORD_INDEX is fragmented, with extents in several locations. Exporting the ABC_MGR user, rebuilding the tablespace, and re-importing the user will reorder the extent allocation. Using the COMPRESS=Y flag on export will cause all allocated space to be compressed into one allocated extent.
Note that the block_id+blocks can be used to determine if the two free extents are contiguous. This is the fact exploited in Kathy Jou's article from the 1991 IOUG, available via the RTSS as bulletin 100954.708. I am working on a more robust version of this for a future article.
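The block_id arithmetic is easy to sketch: two free extents in the same file are contiguous exactly when the first extent's block_id plus its block count lands on the second extent's block_id. A toy version of the check (Python for illustration; the extent values are copied from the file 3 free extents in the sample map above):

```python
# (file_id, block_id, blocks) of the FREE SPACE rows for file 3 above
free = [(3, 5289, 7), (3, 7508, 384), (3, 7892, 384),
        (3, 8276, 576), (3, 8852, 385), (3, 9237, 385), (3, 9632, 84)]

def coalesce(extents):
    """Merge adjacent free extents (same file, back-to-back blocks)."""
    merged = []
    for file_id, block_id, blocks in sorted(extents):
        if merged and merged[-1][0] == file_id and \
           merged[-1][1] + merged[-1][2] == block_id:
            merged[-1][2] += blocks          # extend the previous chunk
        else:
            merged.append([file_id, block_id, blocks])
    return merged

print(coalesce(free))
# the five chunks from block 7508 onward collapse into one 2114-block extent
```

This is the same test you would apply by eye to the map output: 7508+384 = 7892, 7892+384 = 8276, and so on, while 9622 ends just short of the free extent at 9632, so that one stays separate.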
3. Displaying "widow" grants
When an account is deleted (ie, connect privilege is revoked), it retains any grants made to it. Those grants can only be revoked by the grantor. (Ideally, only the owner of a table should make any grants on it.) Thus, "widow" grants are left hanging out there in the database - they will be reactivated as soon as the account is re-granted connect privilege. Plug this security hole.
To list existing "widow" grants, run the following script. *NOTE: I reference a table called DROPS. This is a denormalized table of all dis-connected users, created because selecting from dba_users was too slow.*
create table drops
as select username from dba_users where connect_priv = '0';
rem widow_grants.sql
set termout off
set echo off
set feedback off
set newpage 0
set pagesize 60
set linesize 132
break on grantee skip 1 on owner on grantor
spool widows.lis
select grantee, owner, grantor, table_name,
       select_priv, update_priv, insert_priv, delete_priv
from dba_tab_grants
where grantee in (select username from drops)
order by 1,2,3,4;
spool off
This will show the privileges owned by any account that no longer has connect privilege.
4. Automating cleanup of widow grants
Which brings us to the logical question: how can I clean up those grants? Well, you need to remember that each distinct grantor must do the revoking.
Run the following under each grantor that showed up in the prior listing:
set termout off
set echo off
set feedback off
set newpage 0
set pagesize 0
spool cleanup.sql
select 'revoke all on '||table_name||' from '||grantee||';'
from user_tab_grants_made
where grantee in
(select username from drops);
spool off
@cleanup
host delete cleanup.sql;*
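The spool trick above - running a query whose output is itself a SQL script - is worth seeing in the abstract. A minimal sketch of the same code generation (Python stand-in; the grantee/table rows are hypothetical sample data standing in for user_tab_grants_made):

```python
# grantee/table pairs such as the widow-grants query would return
# (hypothetical data; in the real script these come from user_tab_grants_made)
widow_grants = [("SMITHJ", "EMP"), ("SMITHJ", "DEPT"), ("JONESK", "BONUS")]

# build one revoke statement per row, exactly as the spooled file would hold
cleanup = ["revoke all on %s from %s;" % (table, grantee)
           for grantee, table in widow_grants]

for stmt in cleanup:
    print(stmt)
```

The SQL*Plus version does the concatenation with `||` and writes the result via `spool`; the principle - data in, executable statements out - is identical.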
5. One last thing...
Since it's denormalized,
DROP TABLE DROPS;
In the next FAQ:
How to "become" another user in V6.
Feedback is always welcome and appreciated.
Kevin Loney (kml_at_cellar.org) regularly writes the User Help column for Oracle magazine. In his spare time, he is an Oracle DBA in southeast PA. Comments and questions are always welcome.
DBA FAQ
(Frequently Asked Questions for/from Database Administrators)
by Kevin M. Loney, aka kml_at_cellar.org (username "Scott Tiger")
Fifth in a Series.
30-NOV-92 Initial posting. KML.
Part 5. How to Become another user in V6
This is several pages long, you may want to just print it out and read it offline. The scripts in this FAQ are V6 specific.
This article may not be published without prior approval of the author.
How to become another user in Oracle V6.
What you'll need:
A DBA account
A "become" script
A "reset" script
I can't help you get the DBA account, but I can provide the two scripts. Not only that, but I can do it in such a way that it won't show up in the dba_audit_dba table.
This method makes use of the password assignment techniques used by export/import. I've also used it to lock people out of accounts while keeping the account open (connect='1') so that its grants won't get cleaned out (see FAQ 4).
First, create a script that will reset the user to his present encrypted password; then revise his password to a word of your choosing. You'll have to remember to reset his password when you're done.
BECOME.SQL:
rem become.sql
rem K Loney, 9/10/92.
rem resets user's password to 'reset' temporarily
rem to change it back, run sys$login:reset_{username}.sql
set pagesize 0
set feedback off
set verify off
set echo off
set termout off
spool sys$login:reset_&&1..sql
select 'alter user &&1 identified by values '||''''||
password||''''||';'
from dba_users where username = upper('&&1');
select 'host delete sys$login:reset_&&1..sql;*' from dual;
select 'exit' from dual;
spool off
alter user &&1 identified by newpassword;
exit
That's it. The first part builds an alter user string (which won't show up in dba_audit_dba, at least not in 6.0.33.1), and stores it in a file (reset_username). Then the account's password is changed - in this case, to 'newpassword'. You can now log in as that user, using 'newpassword' as the account password. When you're done, run the reset_username script you created. It'll delete itself when it's complete.
Feedback is always welcome and appreciated.
Kevin Loney (kml_at_cellar.org) regularly writes the User Help column for Oracle magazine. In his spare time, he is an Oracle DBA in southeast PA. Comments and questions are always welcome.
TUSC is to ETHICS as OIL is to _____.
Follow the bouncing copyright infringement!
Sep 92 - The first DBA FAQ, on backup strategies, written by me,
shows up in comp.databases.oracle.
Oct 92 - Tom Sheridan of TUSC (a consulting company that pretty
much runs the MidWest Oracle Users Group) contacts me to see if
they can run the backups FAQ in their mag. I assent, on the
condition that I get editing approval, as the original FAQ was not
written for printed publication. I get no response.
Fall 92 - Vol 3 No. 4 of the MOUG newsletter comes out. It includes
the backups article, pretty much as it was originally written,
with no editing. A friend sends me a copy. I opt to leave it out
of the "Publications" list in my resume.
Summer 93 - ORACLE magazine's summer issue features the backups
article that I have re-written for publication. ORACLE owns the
copyright to that article.
Fall 93 - The Abstracts catalog for the IOUW shows up. Guess who's
giving a talk? Richard Niemic of TUSC, president of the MOUG.
Guess what the topic is: "WHICH BACKUP PLAN WILL SAVE ME?" The
abstract: "Many DBAs and developers are not familiar with all of
the possibilities that are available concerning data recovery.
This presentation will cover all types of database backups. The
primary focus will be on: export and import; imcremental export
and import; archiving; image backups; and hot and cold backup.
The time and storage costs of each topic will be discussed."
GEEZ, RICH, WHERE DID YOU EVER GET THE IDEA????!!!!
(Wouldn't you know it, I'm presenting at the same time. Would
somebody mind showing up and asking Rich what he knows about
copyright infringement and plagiarism?)
Received on Mon May 23 1994 - 02:00:15 CEST