Charles Schultz

Tales of a Sojourner in the land of Oracle

Learning about parameters

Mon, 2009-09-21 09:06
We are kicking off an effort to analyze and hopefully standardize parameters across some 200 databases. Fortunately, we have a history of parameters, made possible by an in-house job that scrapes them from each database every night.

So this post is more about my own little path of discovery; I am sure most of you know this already.

Lesson 1: Pay attention to what the documentation says
For instance, db_cache_size has this little blurb in the 10gR2 docs:
The value must be at least 4M * number of cpus * granule size (smaller values are automatically rounded up to this value).
Let us assume the documentation means the pure number 4*1024*1024, as opposed to 4 megabytes; 4 megabytes * granule size (which is also in bytes) would give you bytes squared, which makes no sense. We will also assume the granule size = 8KB but, to keep the resulting numbers realistic, treat it as just 8 bytes. That first 4M is really throwing things off: 4M * 8 bytes = 33,554,432 bytes, or 32 megabytes per CPU.

So let's say we have a generic setting of 200M (bear with me). If you upgrade your host from 5 CPUs (32MB per CPU * 5 = 160MB) to 10 CPUs (160MB * 2 = 320MB), the setting for db_cache_size will automatically jump up to 320MB, despite your explicit setting of 200M. If you had budgeted your databases to max out memory, you might want to be a tad careful. =)
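
If you want to see what your instance is actually working with, a quick check along these lines (10g, run as a DBA) shows the granule size and the effective cache size next to the parameter setting:

-- What granule size is the instance using, and what did the
-- buffer cache actually end up as? (v$sgainfo exists from 10g on)
select name, bytes
  from v$sgainfo
 where name in ('Granule Size', 'Buffer Cache Size');

-- Compare against the explicit setting:
select name, value, isdefault
  from v$parameter
 where name = 'db_cache_size';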

Lesson 2: Comments in the init file are stored in x$ksppcv.ksppstcmnt (and thus v$parameter.UPDATE_COMMENT)
Of course, one has to put the comments in the right place. We pretty much discovered this by accident: if you put a # followed by text after the parameter=value, you are all set. And if you ever look at spfiles, they do the same thing. Just something we had missed for a very long time. =) We have found this very handy for identifying parameters that deviate from a given "standard" or are set as a result of a bug. Or an upgrade (which is really just a massive batch of bug fixes, plus some new bug introductions, right? *grin*).
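
A minimal sketch of what we mean (the value and bug number here are made up for illustration; the comment must sit on the same line as the parameter):

# init.ora fragment
db_cache_size=200M  # raised from default for bug 1234567, 2009-09-01

And to surface the comments later across the fleet:

select name, value, update_comment
  from v$parameter
 where update_comment is not null;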

Lesson 3: Undocumented/unpublished x$ tables really suck
I really wish Oracle would document the x$ tables for us. So I am looking at X$KSPPI.ksppity; all the online blogs/white papers/articles that I can find decode this value into 4 groups (boolean, string, number, file). But wait: in 10gR2 I have 5 distinct types (1-4, 6). Type 6 seems to correspond to "Big Integer", that little convention of using [K|M|G] after a number as shorthand for its size. Not sure why 5 was skipped - we do not have any values for 5.
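
Something along these lines buckets the types (run as SYS, since x$ tables are not visible otherwise; the decode mirrors the commonly published mapping plus the type 6 observation above):

select ksppity,
       decode(ksppity, 1, 'boolean',
                       2, 'string',
                       3, 'number',
                       4, 'file',
                       6, 'big integer',
                          'unknown') as type_name,
       count(*) as cnt
  from x$ksppi
 group by ksppity
 order by ksppity;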

Overdramatic? Saying that such a small thing sucks? This is merely the tip of the iceberg - there is so much out there that is not available to the general public, and the experts end up making guesses (really good and quite often very accurate guesses, but guesses nonetheless).


Well that is it for now. Still exploring, learning... seeing what's out there.

Concepts Guide: 6/27 - Dependencies Among Schema Objects

Fri, 2009-09-11 15:25
Almost nice to have a short chapter. I cheated and peeked ahead, and saw that Chapter 7 is only 6 pages; how the heck did they squeeze the Data Dictionary into 6 pages? Guess we will have to wait and see.

I like how Oracle automatically attempts to recompile dependent objects if the referenced object is changed - the discussion in this chapter highlights Oracle's infrastructure (although it does not detail it) and how it all happens like black magic. =) Here is a quick example:

SQL > create table t1 (a char(1));

Table created.

SQL > create view v1 as select * from t1;

View created.

SQL > create view v2 as select * from v1;

View created.

SQL > select object_name, status from dba_objects where object_name like '__';

OBJECT_NAME STATUS
------------------------------ -------
V1 VALID
V2 VALID
T1 VALID

SQL > alter table t1 add (b char(1));

Table altered.

SQL > select object_name, status from dba_objects where object_name like '__';

OBJECT_NAME STATUS
------------------------------ -------
V1 INVALID
V2 INVALID
T1 VALID

SQL > select * from v2;

no rows selected

SQL > select object_name, status from dba_objects where object_name like '__';

OBJECT_NAME STATUS
------------------------------ -------
V1 VALID
V2 VALID
T1 VALID


Note how V2 depends on V1, which depends on T1. Even though the text is not clear on whether or not a recursive recompilation will occur, we can see that in practice it does. Cool stuff.

The notes on page 6 (Object Name resolution) are key to remember. It seems easy sometimes to forget the order of precedence:
  1. own schema
  2. public synonym
  3. schema matches fully-qualified object owner

There is an important ramification to this:
Because of how Oracle resolves references, it is possible for an object to depend on the nonexistence of other objects. This situation occurs when the dependent object uses a reference that would be interpreted differently were another object present.

For instance, it is possible that V1 requires T1 to not exist in the current schema, so that the name resolves to a synonym instead.
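
A contrived sketch of how that can happen (schema and object names are made up):

-- V1 is originally built against a public synonym:
create public synonym t1 for app_owner.t1;
create view v1 as select * from t1;   -- "t1" resolves to the public synonym

-- Creating a local T1 changes what "t1" resolves to (own schema wins),
-- so Oracle must invalidate V1 even though nothing it referenced was dropped:
create table t1 (a char(1));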

Right below that is a section on how SQL statements also maintain a dependency on the referenced object. Because Oracle (as of 11gR1) does not fully support surgically removing individual sql cursors, one workaround is to touch an object the cursor depends on, which immediately invalidates any associated cursors. Great if you want a new query plan because something changed and you do not want to redo the stats. Well... kinda great; not too many shops allow the DBA to make changes to a table on the fly in Production. =)
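
A sketch of that workaround (the role name is made up; a grant is one of the gentler DDL-ish operations that does the trick):

-- Any DDL against T1, even a grant, invalidates cursors that reference it,
-- forcing a hard parse (and a chance at a new plan) on the next execution:
grant select on t1 to some_role;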

The section on remote dependencies was slightly interesting - I did not realize that Oracle checked not only on timestamps, but also on signatures. When you think about it, it is amazing how much Oracle does behind the scenes.

RAC Attack!

Wed, 2009-09-09 14:01
Jeremy Schneider graced us with RAC Attack last week - it was quite awesome! Jeremy brings such a wealth of knowledge and passion for the technology that oftentimes I found myself hard pressed to keep the workshop moving along. As I was the "organizer" person, I felt some responsibility in that direction.

It also opened my eyes on several fronts. This is the first time I have helped to facilitate such a workshop, and there were a number of interesting obstacles, logistical and technological. Jeremy handled it all with his usual easy manner and we got it all worked out quite well. For instance, the hard drives of the individual computers were just a tad too small to accommodate all the jumpstart VM images that Jeremy likes to deploy; as a result, we ended up hosting files on various computers and mapping network drives. Not the quickest thing in the world, but hey, it worked. Also, again from the perspective of a facilitator, I found it challenging to address the numerous questions that folks had from time to time, which gave me a greater respect for those who do this kind of thing on a regular basis. Not only did Jeremy answer questions, but he also took advantage of several opportunities to delve into the deeper details of "how things work".

In retrospect, we are faced with the ubiquitous puzzle of how to address different styles of learning. For those, like me, who crave the hands-on aspect, this workshop is excellent! For those who need more lecture, this lab was a bit of a wake-up call. *grin* Actually, if only we had more time, we could certainly have entertained more dialogue; RAC is rich with controversy. =)

Jeremy was also able to spill the beans a little on Oracle 11gR2, since someone decided to release the Linux version the Tuesday before the workshop began. So we were treated to a few sneak peeks and tidbits. Good stuff.

Personally, I was challenged to discover new ways to do these kinds of labs/workshops. I heard a lot of positive feedback about the wide variety of skill sets and job roles in the class, but as a result, the various backgrounds required different levels of "background information". Going forward, I would try to break the labs into more modular components (as opposed to a totally open lab time) and precede each lab with some solid instruction. What Jeremy did was great for DBAs, but we had some folks who needed a bit more hand-holding. That is just the nature of the beast. The good news is that Jeremy equipped us to do exactly that - we can now hold our own lab and choose any pace we want. I am hoping to pursue this a little and get others involved, especially in terms of discussing how we as an organization want to tackle overlapping job roles in regards to cluster and storage management.

The virtualization aspect was also very nice. I think it gave us a glimpse into what we can do with virtualized resources, something we can definitely utilize more fully for future labs and group sessions.

Thanks, Jeremy!

Concepts Guide: 5/27 - Schema Objects

Wed, 2009-06-24 15:16
I found that reading the Guide is quite hard if you are already tired. ;-)

As always, I like pictures. Figure 5-1 on page 5-3 does the topic justice, although it would make its point more clearly by labelling the schemas.

Was not aware of intra-block chaining (p. 5-5) - interesting concept. Especially since it does not affect performance (it does not increase the number of IO calls).

Figure 5-3 is good in that it really helps to see the various pieces of a block and the row, including the headers.

As much as I hate how nulls are handled in Oracle, the one good thing is that Oracle does not even record information for trailing null columns (ie, no column length is stored in the block). Except, of course, if you have LONG data - DO NOT USE LONG! =)

I was not aware how table compression actually worked. Now that I know a little more about it, I am surprised the guide did not mention any limitations. For example, if a block has 100% unique data (uncompressible), would the symbol table still be built and populated? If not, what is the cut-off? At what point does Oracle deem compression worthwhile, pragmatically?
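
Lacking answers from the guide, a little experiment seems to be the only way to find out; something along these lines (table names made up; CTAS is a direct load, so COMPRESS actually kicks in):

create table t_plain as
  select mod(rownum, 10) as n
    from dual connect by level <= 100000;

create table t_comp compress as
  select * from t_plain;

-- Compare segment sizes to see whether the compression paid off:
select segment_name, bytes
  from user_segments
 where segment_name in ('T_PLAIN', 'T_COMP');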

I have never seen a practical use for nested tables, but I'll keep my eyes open. I jumped to 27-7 as referenced just to see what it said. I still have never seen a practical use for nested tables.

The fact that sessions are "bound" to temp tables was new to me; I did not realize you could not do DDL on a temp table if it is already bound to a session. Kinda makes sense, though. I wonder why they do not simply call it "locking"? =) After all, that is how normal tables are handled.
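
A quick sketch of the "binding" in action (names made up):

create global temporary table gtt1 (a number) on commit preserve rows;
insert into gtt1 values (1);
commit;

alter table gtt1 add (b number);
-- fails with ORA-14450: attempt to access a transactional temp table already in use

truncate table gtt1;              -- releases the binding (or end the session)
alter table gtt1 add (b number);  -- now succeeds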

Object Views really confuse me, not being familiar with the concept. And this being the Concepts Guide, I found that the short little blurb did not really help me much at all.

I also did not realize that one could put declarative constraints on views; an interesting way to propagate metadata information for restricted environments.

The short paragraph on Materialized View Logs did not do the concept any justice. I get the impression that either space and/or time was restrained when this section was done. =)

The intro section to Dimensions left my head whirling. I am not a Warehousing guy by any means; while I appreciate the extra background and the quasi-example, I find that it dives too deep too quickly for me. And using an example of a year being tied to a year-row is just the most absurd thing I have ever heard. Why not a practical, real-life example that "anyone" can grasp?

Good discussion for sequences; I like how the good is balanced with the bad - the "Caution" is stronger than I expected, but I think it is very important and am glad to see that the authors made it stand out.

Nice long section on indexes. *grin* I always find it strange to see references in the 10g documentation to "analyze table", when most of the time I believe they really mean collecting stats, for which we are told to use dbms_stats instead. The intro to deterministic (user-defined) indexes was quite interesting. I would personally consider this an example of "black magic" in Oracle. Another one of those cases where there is a strong lack of practical examples.

Figure 5-7 starts out looking like a great pictorial example; however, I found it to be quite confusing. Actually, what I really want to see is how an index is built, starting with one row. At the very least, it would be helpful to augment the figure with text explaining the function of the values in the branch blocks. That said, excellent information on how searches are mathematically bounded (big-O notation).

Excellent piece on bitmap indexes; great examples, wonderful discourse. I appreciate the balanced approach to addressing the pros and cons of bitmap indexes, which may at the outset seem to be the panacea of query performance issues. The sidebar on cardinality was very well done as well.

The section on Index-organized tables was also quite interesting; however, I wonder: if they are so highly recommended for OLTP applications, why are they not more popular?
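
Syntactically, at least, the barrier to entry is tiny; a minimal sketch (names made up):

create table orders_iot (
  order_id  number,
  status    varchar2(10),
  constraint orders_iot_pk primary key (order_id)
) organization index;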

Application Domain indexes, and the Oracle Data Cartridge in general, are another area of black magic that I am afraid to ever go back to. I dabbled in it once when attempting to define custom statistics for a function and never looked back. =) I am sure they have their place on some "True Expert"'s shelf, but not here....

Like IOTs, the Concepts Guide does a good job selling Clusters and Hash Clusters as beneficial methods, but I do not see many folks using them in Real Life. Is it merely the learning curve that keeps the standard DBA away from these features? We have a lot of third-party apps; shall we assume that the vendors simply do not have the expertise to utilize these fun but advanced toys?

Interesting stuff nonetheless.

Fusion Tables

Fri, 2009-06-12 13:24
So I admit it, I read slashdot (who doesn't?? *grin*). While there are some topics I really do not care about, for some reason "Oracle" in a headline always grabs me. =) And I am not opposed to Oracle-bashing, because I do a fair share myself.


I love how folks at Google Labs come up with all this crazy stuff. And not just GL, but Apple and lots of other places as well. The way technology moves is absolutely spellbinding, and I mean that in the most literal sense possible. *grin*

What I hate is techno-marketing gibberish:
"So now we have an n-cube, a four-dimensional space, and in that space we can now do new kinds of queries which create new kinds of products and new market opportunities"
Ok so I can grapple with n-cube or 4-space. Show me a query that can create a new kind of product. Heck, show me a query that can make an old product! Create new market opportunities?!? Come on, everything in the galaxy is a market opportunity. You couldn't hit a house fly with a query. And I mean that in the most literal sense. *wink*
