Feed aggregator

4.0 New Features Application

David Peake - Wed, 2011-01-05 10:20

Application Express 4.0 introduced many new features. You can run the sample application here or download the application and install it into your own workspace here.

Explorations in Component Interface: PeopleCode

Digital Eagle - Wed, 2011-01-05 07:36
This page has moved.  Please update your links:
http://psst0101.digitaleagle.net/2011/01/10/tip-comparing-trace-files/

This is a continuation of the following posts:

Now, we need to write some code to use the Component Interface.  Let’s use the Application Engine program from this step by step post.

Adding a Log Step to the Program

Before we dive into the PeopleCode, we need to add a second step to the program.  This step will come in handy for testing how the program works.  It will simply log a message to the standard out file.

I used this SQL to find a message catalog that would work easily:

SELECT * FROM PSMSGCATDEFN
WHERE MESSAGE_TEXT = '%1'

Then, I added a new step.  First, you select Step 01.  Then, you can either use the menu Insert > Step/Action or the toolbar icon.  When you get the new step, change the action from SQL to Log Message.  Then, set the Message Set to 94 and the Number to 1.  For the parameters, enter something like “Process Complete”.

[Screenshot: Selection_814]

Generating a PeopleCode CI Template

Now, we are ready to begin adding PeopleCode to Step 01.  You can open the PeopleCode by double clicking anywhere in the gray of the PeopleCode action.  If you have been following other tutorials, you probably have something in that program.  For simplicity’s sake, let’s just delete that and start from scratch.  This tutorial will assume you have a blank program at this point.

One of the cool things about Component Interfaces is that you can have Application Designer generate a template PeopleCode program.  If you have the component interface in the project, you can just drag it from the Development tab of the project workspace and drop it into the program, and a template is generated in the program.  You will want a blank program here because it generates a lot of code.  Note that it really is just a template: you have to make some changes before you can even save it.

[Screenshot: Selection_815]

You can edit this template if you want, but it will probably be easier to delete it and just type in the code that I have written below.

Writing the Program

First, we’ll start with some variables.  The &fileLog variable is used to write a separate log file with output information.  The &oSession variable stores your session, &oBlgPersDtaCi stores the reference to the component interface, and &emplid stores the ID we want to update.

Local File &fileLog;
Local ApiObject &oSession, &oBlgPersDtaCi;
Local string &emplid;

Then, we need this function to display any error messages.  It basically just loops through the session’s message collection and prints out any messages that were triggered since the last call.

Function errorHandler()
   Local ApiObject &oPSMessageCollection, &oPSMessage;
   Local number &i;
   Local string &sErrMsgSetNum, &sErrMsgNum, &sErrMsgText, &sErrType;
      &oPSMessageCollection = &oSession.PSMessages;
   For &i = 1 To &oPSMessageCollection.Count
      &oPSMessage = &oPSMessageCollection.Item(&i);
      &sErrMsgSetNum = &oPSMessage.MessageSetNumber;
      &sErrMsgNum = &oPSMessage.MessageNumber;
      &sErrMsgText = &oPSMessage.Text;
      &fileLog.WriteLine(&sErrType | " (" | &sErrMsgSetNum | "," | &sErrMsgNum | ") - " | &sErrMsgText);
   End-For;
   rem ***** Delete the Messages from the collection *****;
   &oPSMessageCollection.DeleteAll();
End-Function;

Then, we’ll wrap the update in a function so we can call it multiple times.  The &emplid parameter is the ID we want to update, and we’ll use the &makeError parameter later.  We put everything in a try block so we can handle errors gracefully.

Function updateCI(&emplid As string, &makeError As boolean)
   try

Then, we need to tell the tools what to do with the messages.  We will tell it to use the collection only in an effort to handle the messages gracefully:

      rem ***** Set the PeopleSoft Session Error Message Mode *****;
      rem ***** 0 - None *****;
      rem ***** 1 - PSMessage Collection only (default) *****;
      rem ***** 2 - Message Box only *****;
      rem ***** 3 - Both collection and message box *****;
      &oSession.PSMessagesMode = 1;      

Then, we need to initialize the component interface.  This is simulating the user clicking the link on the menu to go into the component.

      rem ***** Get the Component Interface *****;
      &oBlgPersDtaCi = &oSession.GetCompIntfc(CompIntfc.BLG_PERS_DTA_CI);
      If &oBlgPersDtaCi = Null Then
         errorHandler();
         throw CreateException(0, 0, "GetCompIntfc failed");
      End-If;

Then, we need to set some modes.  Setting InteractiveMode to true is like turning off deferred processing.  GetHistoryItems is like checking the Include History box on the search page.  EditHistoryItems is like selecting Correction mode on the search page.

      rem ***** Set the Component Interface Mode *****;
      &oBlgPersDtaCi.InteractiveMode = True;
      &oBlgPersDtaCi.GetHistoryItems = True;
      &oBlgPersDtaCi.EditHistoryItems = True;

Then, we update the Empl ID field on the search page:

      rem ***** Set Component Interface Get/Create Keys *****;
      &oBlgPersDtaCi.EMPLID = &emplid;

Then, we "click" the search button by calling Get().  If this fails, it means no rows were returned for the key we supplied.

      rem ***** Execute Get *****;
      If Not &oBlgPersDtaCi.Get() Then
         rem ***** No rows exist for the specified keys.*****;
         errorHandler();
         throw CreateException(0, 0, "Get failed");
      End-If;

Then, we should be on the page where we can make changes.  To keep things simple, we are going to work with a single field on the page — the birth state.  First, we print the current value to the log.  Then, we will either set it to XX to generate an invalid-value error, or we will toggle the state between FL and TX.  Finally, we call the error handler to see if we generated any errors.

      rem ***** Begin: Get/Set Component Interface Properties *****;
      rem ***** Get/Set Level 0 Field Properties *****;
      &fileLog.WriteLine("&oBlgPersDtaCi.BIRTHSTATE = " | &oBlgPersDtaCi.BIRTHSTATE);
      If &makeError Then;
         &oBlgPersDtaCi.BIRTHSTATE = "XX";
      Else;
         If &oBlgPersDtaCi.BIRTHSTATE = "TX" Then;
            &oBlgPersDtaCi.BIRTHSTATE = "FL";
         Else;
            &oBlgPersDtaCi.BIRTHSTATE = "TX";
         End-If;
      End-If;
      errorHandler();
      rem ***** End: Get/Set Component Interface Properties *****;

Then, we save the component:

      rem ***** Execute Save *****;
      If Not &oBlgPersDtaCi.Save() Then;
         errorHandler();
         throw CreateException(0, 0, "Save failed");
      End-If;

Then, we cancel.  This is like going back to the search page.

      rem ***** Execute Cancel *****;
      If Not &oBlgPersDtaCi.Cancel() Then;
         errorHandler();
         throw CreateException(0, 0, "Cancel failed");
      End-If;

Then, we finish up with the catch block, end the try, and close the function.

   catch Exception &ex      
      rem Handle the exception;
      &fileLog.WriteLine(&ex.ToString());
   end-try;
   End-Function;

Now, this is the start of the main part of the program.  We will set our Empl ID, open the log file and get the current session:

&emplid = "KU0001";
rem ***** Set the Log File *****;
&fileLog = GetFile("C:\temp\BLG_PERS_DTA_CI.log", "w", "a", %FilePath_Absolute);
&fileLog.WriteLine("Begin");
rem ***** Get current PeopleSoft Session *****;
&oSession = %Session;

Then, we call our function to use the CI:

updateCI(&emplid, False);

Finally, we wrap up the log file.

&fileLog.WriteLine("End");
&fileLog.Close();

Full Program

And, here is the full program:

Local File &fileLog;
Local ApiObject &oSession, &oBlgPersDtaCi;
Local string &emplid;

Function errorHandler()
   Local ApiObject &oPSMessageCollection, &oPSMessage;
   Local number &i;
   Local string &sErrMsgSetNum, &sErrMsgNum, &sErrMsgText, &sErrType;
   &oPSMessageCollection = &oSession.PSMessages;
   For &i = 1 To &oPSMessageCollection.Count
      &oPSMessage = &oPSMessageCollection.Item(&i);
      &sErrMsgSetNum = &oPSMessage.MessageSetNumber;
      &sErrMsgNum = &oPSMessage.MessageNumber;
      &sErrMsgText = &oPSMessage.Text;
      &fileLog.WriteLine(&sErrType | " (" | &sErrMsgSetNum | "," | &sErrMsgNum | ") - " | &sErrMsgText);
   End-For;
   rem ***** Delete the Messages from the collection *****;
   &oPSMessageCollection.DeleteAll();
End-Function;

Function updateCI(&emplid As string, &makeError As boolean)
   try
      rem ***** Set the PeopleSoft Session Error Message Mode *****;
      rem ***** 0 - None *****;
      rem ***** 1 - PSMessage Collection only (default) *****;
      rem ***** 2 - Message Box only *****;
      rem ***** 3 - Both collection and message box *****;
      &oSession.PSMessagesMode = 1;
      rem ***** Get the Component Interface *****;
      &oBlgPersDtaCi = &oSession.GetCompIntfc(CompIntfc.BLG_PERS_DTA_CI);
      If &oBlgPersDtaCi = Null Then
         errorHandler();
         throw CreateException(0, 0, "GetCompIntfc failed");
      End-If;
      rem ***** Set the Component Interface Mode *****;
      &oBlgPersDtaCi.InteractiveMode = True;
      &oBlgPersDtaCi.GetHistoryItems = True;
      &oBlgPersDtaCi.EditHistoryItems = True;
      rem ***** Set Component Interface Get/Create Keys *****;
      &oBlgPersDtaCi.EMPLID = &emplid;
      rem ***** Execute Get *****;
      If Not &oBlgPersDtaCi.Get() Then
         rem ***** No rows exist for the specified keys.*****;
         errorHandler();
         throw CreateException(0, 0, "Get failed");
      End-If;
      rem ***** Begin: Get/Set Component Interface Properties *****;
      rem ***** Get/Set Level 0 Field Properties *****;
      &fileLog.WriteLine("&oBlgPersDtaCi.BIRTHSTATE = " | &oBlgPersDtaCi.BIRTHSTATE);
      If &makeError Then;
         &oBlgPersDtaCi.BIRTHSTATE = "XX";
      Else;
         If &oBlgPersDtaCi.BIRTHSTATE = "TX" Then;
            &oBlgPersDtaCi.BIRTHSTATE = "FL";
         Else;
            &oBlgPersDtaCi.BIRTHSTATE = "TX";
         End-If;
      End-If;
      errorHandler();
      rem ***** End: Get/Set Component Interface Properties *****;
      rem ***** Execute Save *****;
      If Not &oBlgPersDtaCi.Save() Then;
         errorHandler();
         throw CreateException(0, 0, "Save failed");
      End-If;
      rem ***** Execute Cancel *****;
      If Not &oBlgPersDtaCi.Cancel() Then;
         errorHandler();
         throw CreateException(0, 0, "Cancel failed");
      End-If;
   catch Exception &ex
      rem Handle the exception;
      &fileLog.WriteLine(&ex.ToString());
   end-try;
End-Function;

&emplid = "KU0001";
rem ***** Set the Log File *****;
&fileLog = GetFile("C:\temp\BLG_PERS_DTA_CI.log", "w", "a", %FilePath_Absolute);
&fileLog.WriteLine("Begin");
rem ***** Get current PeopleSoft Session *****;
&oSession = %Session;
updateCI(&emplid, False);
&fileLog.WriteLine("End");
&fileLog.Close();

The Result

See the App Engine test post for instructions on how to run the program.  If it worked, you should get this in the log file:

[Screenshot: Selection_816]

The standard out of the program should look like this:

[Screenshot: Selection_817]

And, the page should have changed the birth state:

[Screenshot: Selection_818]


We don't need no steennkking recoveries!

alt.oracle - Tue, 2011-01-04 22:24

Since this is a new blog, let's start with something basic - backups. Everybody knows you do need those 'steenking backups'. You know it, the rest of your team knows it, even the suits know it (they read it in a Delta Airlines inflight magazine). But there are a couple of problems I see with backups these days. The first lies with the DBA and, sadly, it can get your ass fired.

Yes, you did a nice and proper RMAN backup of your database. You did the right syntax, you reviewed the log, you even did a 'report need backup' command and made sure it came back clean. The real question is: if you lose everything, do you know how to put it back together? In my observation, it's utterly confounding how few DBAs today know how to do a recovery. Because let's face it - doing a backup is a deceptively simple process. You start RMAN and type 'backup database'. You can make it more complicated than that, but it doesn't always have to be. Backup is clean, orderly and comfortable. Recovery is messy, complicated and scary if you don't know what you're doing. Ask yourself these questions.

  • You lose a datafile. Do you know how to do a complete recovery without restoring the whole database? Do you know how to do it while the database is online?
  • You lose an entire database – control files, redo logs and datafiles. Can you do a complete recovery from that? Try it – it's an enlightening exercise.
  • A brilliant developer drops the most important table in your production database. Can you do a point-in-time incomplete recovery to roll the database forward to the point right before the table was dropped?
  • You lose an entire database, including the archivelogs. Have you tried the process of pulling the last backup off of tape and then doing a restore?


The list goes on. So how do you learn to do this stuff? You learn by doing it. I tell students in my classes that if you want to know how to do recoveries, break stuff and then see if you can fix it. Literally. Build a database that no one else is using. Then, delete the system datafile and try to recover. Delete two redo logs and see how far you can get. Delete the system datafile AND the control files and see what you can do. It's one of the most enlightening experiences a DBA can go through. You'll learn what really makes a database tick. Consider this scenario – your mission critical, never-goes-down, life-blood of the company database experiences a media failure. The suits are surrounding you in your cube, watching your every move, questioning your every decision, telling you how much every moment of downtime is costing them, while sweat pours off your face onto your shaking hands as you try to think of the next command to type. I've seen it happen before.
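
If you have never walked through even the simplest case, it is worth seeing how short the happy path is. The following is only a sketch of recovering a single lost datafile while the rest of the database stays open, typed at an RMAN prompt; the datafile number is made up, and the exact steps vary with your version and backup strategy:

# hypothetical example: datafile 7 has been lost or corrupted
RMAN> sql 'alter database datafile 7 offline';

# pull the file back from the most recent backup, then apply redo
RMAN> restore datafile 7;
RMAN> recover datafile 7;

# put it back in service once media recovery completes
RMAN> sql 'alter database datafile 7 online';

The point is not the syntax; the point is to have typed it at least once before the day the suits are standing behind you.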

Several years ago, I worked for a company that had a division that decided they needed their own personal DBA – the "regular" DBA group wasn't giving them enough TLC, I guess. They hired a guy who claimed to have extensive knowledge of Oracle version 9iR2 (way back then, yeah). He started on a Monday. That same day, the server on which his database lived had a drive controller error that wrote corrupt data across the entire RAID array. Time for recovery! Unfortunately, new guy (who was a really nice fellow) didn't have a clue as to what to do, and, worse, he didn't know some basic 9i stuff. He didn't know what an spfile was and he kept trying to "connect internal". Long story short, new guy started on a Monday and was gone by Wednesday.

Spare yourself the agony. Practice, practice, practice. Test, test and test again. We'll talk about the second problem next time. Until then, go break something – and try to fix it.
Categories: DBA Blogs

Component Interfaces Explorations: Security

Digital Eagle - Tue, 2011-01-04 21:54
This page has moved.  Please update your links:
http://psst0101.digitaleagle.net/2011/01/10/tip-comparing-trace-files/

This is part two of the explorations in component interfaces.  Before we can do anything with the Component Interface that we created in part one, we have to make sure that we have security to it.  Maybe at some point we can explore what happens when we don’t have security, but for now let’s fix it before it breaks.

So, if you haven’t already created the component interface, head over to part one first and build the CI.  Then, you can set up the security.

The first thing you must know is which permission list you need to attach the component interface to.  Normally, your security team would decide this.  You could create one permission list for all of the component interfaces in the system and grant that to everyone — after all, the user can’t use the component interface unless they also have access to the process or page that uses it.  Or, you could attach the component interface to the same permission list that gives the user access to the corresponding page or process.

In this case, we are just going to pick any one of the permission lists we have access to.  This is an experiment anyway, not a production thing.

First, open the User Profile for the Operator ID you use to connect — PeopleTools > Security > User Profiles > User Profiles.  Go to the Roles tab, and pick any one of the Roles.  I am picking the PeopleSoft User role:

[Screenshot: Selection_810]

Clicking the View Definition link will take you to the Role.  Then, click on the Permission Lists page and choose one of the permission lists in the list.  In my case, there is only one, so I clicked the View Definition on that one:

[Screenshot: Selection_811]

Now that you are on the Permission List, you need to go to the Component Interfaces tab/page.  Then, add the component interface name that you created in part one.

[Screenshot: Selection_812]

If you save at this point, you haven’t done anything.  PeopleTools will remove it from the permission list because you haven’t given it any access yet.  Make sure that you click the Edit link and click the Full Access button:

[Screenshot: Selection_813]

Finally, make sure that you click OK and then Save to save the permission list.  You may want to re-open that permission list and make sure that it saved the change.

Now you have security.  Stay tuned for writing some code to make it work…


Y2K10 PeopleSoft Style

Digital Eagle - Tue, 2011-01-04 21:09
This page has moved.  Please update your links:
http://psst0101.digitaleagle.net/2011/01/10/tip-comparing-trace-files/

I was surprised to find today that many of the PeopleTools items on the menu were missing.  Then, someone pointed me to article 1183084.1 on Oracle Support.

The problem is that many of the content references have an Expiration Date or Valid To date of 12/31/2010.  If you have some patience, you can go through Portal Structure and Content and update each one.  But PeopleSoft has attached an Application Engine program to the article that will do it automatically.

Running the program is pretty straightforward.  The only thing I had to do was adjust for the last updated user.  The program only updates content references that were last updated by PPLSOFT, but most of my content references were marked with PS.  You could update the PSPRSMDEFN table manually in the database, but instead I changed all of the program's references to " = 'PPLSOFT'" to " IN ('PPLSOFT', 'PS')".
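
If you want to check first which operator IDs actually own your content references before deciding whether to modify the program, a quick read-only count against the portal registry table works; LASTUPDOPRID is the standard PeopleTools last-updated-by audit field:

SELECT LASTUPDOPRID, COUNT(*)
  FROM PSPRSMDEFN
 GROUP BY LASTUPDOPRID;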


Explorations in Component Interface

Digital Eagle - Tue, 2011-01-04 16:46

This is part one of a multi-part series exploring some quirks in using Component Interfaces with Application Engine programs.  If nothing else, hopefully, these will give new developers some insight into how to use a Component Interface.  My goal is to expose a bug in the Application Engine tool that maybe Oracle will see and fix.

This first part will simply walk you through creating a Component Interface.  A Component Interface is really just a map that associates the fields on the screen (or, really, in the component’s buffer) with API properties that can be accessed from code.

First, we create a new definition in Application Designer.  You can either use the Ctrl + N keyboard shortcut or the File > New menu.  Choose Component Interface from the list:

[Screenshot: Selection_804]

Next, have no fear — you will see the Open dialog, which makes it look like you are opening a component.  Really, Application Designer is just asking you which component you want to map.  In this example, we will use the “PERSONAL_DATA” component, which is the Modify a Person screen (Workforce Administration > Personal Information > Modify a Person):

[Screenshot: Selection_805]

Next, Application Designer asks you if you want to default the properties.  I almost always say yes to this question because it makes Application Designer do all the work of generating the map for you.  The properties will be given names based on their field names in the buffer:

[Screenshot: Selection_806]

Now, you should have a new component interface generated for you.  Notice that the left side is the Component Structure.  It is the same as the Structure tab on the Component itself.  The right side is the map of record/field to property name.  In this screenshot, I have the component open in the background and I drew a line to show how the structure is the same.  Then, I drew a line from the structure to the property generated for one of the fields:

[Screenshot: Selection_807]

Finally, save the component interface.  You can either use the Ctrl + S keyboard shortcut, or you can use the File > Save menu.  I gave it the name BLG_PERS_DTA_CI.

[Screenshot: Selection_808]

While you're at it, you may also want to add it to the project.  You can use the F7 keyboard shortcut or the Insert > Current Definition Into Project menu.

This concludes creating the Component Interface.  Please stay tuned for the next steps …


Theorems Regarding Business Logic

Kenneth Downs - Tue, 2011-01-04 16:33

In yesterday's Rigorous Definition of Business Logic, we saw that business logic can be defined in four orders:

  • First Order Business Logic is entities and attributes that users (or other agents) can save, and the security rules that govern read/write access to the entities and attributes.
  • Second Order Business Logic is entities and attributes derived by rules and formulas, such as calculated values and history tables.
  • Third Order Business Logic is non-algorithmic compound operations (no conditional logic or looping is required in expressing the solution), such as a month-end batch billing or, for the old-timers out there, a year-end general ledger roll-up.
  • Fourth Order Business Logic is algorithmic compound operations. These occur when the action of one step affects the input to future steps. One example is ERP Allocation.

A Case Study

The best way to see if these have any value is to cook up some theorems and examine them with an example. We will take a vastly simplified time billing system, in which employees enter time which is billed once/month to customers. We'll work out some details a little below.

Theorem 1: 1st and 2nd Order, Analysis

The first theorem we can derive from these definitions is that we should look at First and Second Order Schemas together during analysis. This is because:

  • First Order Business Logic is about entities and attributes
  • Second Order Business Logic is about entities and attributes
  • Second Order Business Logic is about values generated from First Order values and, possibly, other Second Order values
  • Therefore, Second Order values are always expressed ultimately in terms of First Order values
  • Therefore, they should be analyzed together

To give the devil his due, ORM does this easily, because it ignores so much database theory (paying a large price in performance for doing so) and considers an entire row, with its first order and second order values together, as being part of one class. This is likely the foundation for the claims of ORM users that they experience productivity gains when using ORM. Since I usually do nothing but bash ORM, I hope this statement will be taken as utterly sincere.

Going the other way, database theorists and evangelists who adhere to full normalization can hobble an analysis effort by refusing to consider 2nd order because those values denormalize the database, so sometimes the worst of my own crowd will prevent analysis by trying to keep these out of the conversation. So, assuming I have not pissed off my own friends, let's keep going.

So let's look at our case study of the time billing system. By theorem 1, our analysis of entities and attributes should include both 1st and 2nd order schema, something like this:

 
 INVOICES
-----------
 invoiceid      2nd Order, a generated unique value
 date           2nd Order if always takes date of batch run
 customer       2nd Order, a consequence of this being an
                           aggregation of INVOICE_LINES
 total_amount   2nd Order, a sum from INVOICE_LINES
               
 INVOICE_LINES
---------------
 invoiceid      2nd order, copied from INVOICES
 customer         +-  All three are 2nd order, a consequence
 employee         |   of this being an aggregation of
 activity         +-  employee time entries
 rate           2nd order, taken from ACTIVITIES table
                           (not depicted)
 hours          2nd order, summed from time entries
 amount         2nd order, rate * hours
 
 TIME_ENTRIES
--------------
 employeeid     2nd order, assuming system forces this
                    value to be the employee making
                    the entry
 date           1st order, entered by employee
 customer       1st order, entered by employee
 activity       1st order, entered by employee
 hours          1st order, entered by employee

Now, considering how much of that is 2nd order, which is almost all of it, the theorem is not only supported by the definition, but ought to line up squarely with our experience. Who would want to try to analyze this and claim that all the 2nd order stuff should not be there?

Theorem 2: 1st and 2nd Order, Implementation

The second theorem we can derive from these definitions is that First and Second Order Business logic require separate implementation techniques. This is because:

  • First Order Business Logic is about user-supplied values
  • Second Order Business Logic is about generated values
  • Therefore, unlike things cannot be implemented with like tools.

Going back to the time entry example, let's zoom in on the lowest table, the TIME_ENTRIES. The employee entering her time must supply customer, date, activity, and hours, while the system forces the value of employeeid. This means that customer and activity must be validated in their respective tables, and hours must be checked for something like <= 24. But for employeeid the system provides the value out of its context. So the two kinds of values are processed in very unlike ways. It seems reasonable that our code would be simpler if it did not try to force both kinds of values down the same validation pipe.
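
As a sketch of how the two kinds of values separate in practice (using the TIME_ENTRIES columns from the schema above, with the date column renamed entry_date since DATE is a reserved word, and with the lookup tables assumed to exist), the First Order rules can be stated declaratively, while the Second Order employeeid is supplied by the system rather than validated like user input:

CREATE TABLE time_entries (
   employeeid  VARCHAR2(10) NOT NULL,   -- 2nd order: supplied by the system, not the user
   entry_date  DATE         NOT NULL,   -- 1st order: entered by the employee
   customer    VARCHAR2(10) NOT NULL REFERENCES customers (customer),
   activity    VARCHAR2(10) NOT NULL REFERENCES activities (activity),
   hours       NUMBER(4,2)  NOT NULL CHECK (hours > 0 AND hours <= 24)
);

Everything user-supplied is validated by type, constraint, or foreign key; the one system-supplied column is simply excluded from that pipe.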

Theorem 3: 2nd and 3rd Order, Conservation of Action

This theorem states that the sum of Second and Third Order Business Logic is fixed:

  • Second Order Business Logic is about generating entities and attributes by rules or formulas
  • Third Order Business Logic is coded compound creation of entities and attributes
  • Given that a particular set of requirements resolves to a finite set of actions that generate entities and values, then
  • The sum of Second Order and Third Order Business Logic is fixed.

In plain English, this means that the more Business Logic you can implement through 2nd Order declarative rules and formulas, the fewer processing routines you have to code. Or, if you prefer, the more processes you code, the fewer declarative rules about entities and attributes you will have.

This theorem may be hard to compare to experience for verification because most of us are so used to thinking of batch billing as a process that we cannot imagine it being implemented any other way: how exactly am I supposed to implement batch billing declaratively?

Let's go back to the schema above, where we can realize upon examination that the entirety of the batch billing "process" has been detailed in the 2nd Order Schema. If we could somehow add these facts to our CREATE TABLE commands the way we add keys, types, and constraints, batch billing would occur without the batch part.

Consider this. Imagine that a user enters a TIME_ENTRY. The system checks for a matching EMPLOYEE/CUSTOMER/ACTIVITY row in INVOICE_DETAIL, and when it finds the row it updates the totals. But if it does not find one then it creates one! Creation of the INVOICE_DETAIL record causes the system to check for the existence of an invoice for that customer, and when it does not find one it creates it and initializes the totals. Subsequent time entries not only update the INVOICE_DETAIL rows but the INVOICE rows as well. If this were happening, there would be no batch billing at the end of the month because the invoices would all be sitting there ready to go when the last time entry was made.
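
Purely as an illustration of that flow (this is not the author's Triangulum mechanism, just an ordinary trigger written against the INVOICE_LINES/INVOICES names from the schema above, with rates, month boundaries, invoice numbering and concurrency all ignored):

CREATE OR REPLACE TRIGGER trg_time_entry_rollup
AFTER INSERT ON time_entries
FOR EACH ROW
BEGIN
   -- Keep the matching invoice line current; create it if it does not exist yet.
   MERGE INTO invoice_lines l
   USING (SELECT :new.customer   AS customer,
                 :new.employeeid AS employee,
                 :new.activity   AS activity,
                 :new.hours      AS hours
            FROM dual) s
      ON (l.customer = s.customer
          AND l.employee = s.employee
          AND l.activity = s.activity)
   WHEN MATCHED THEN
      UPDATE SET l.hours = l.hours + s.hours
   WHEN NOT MATCHED THEN
      INSERT (customer, employee, activity, hours)
      VALUES (s.customer, s.employee, s.activity, s.hours);
   -- An analogous MERGE into INVOICES would keep the header row and its
   -- total_amount current, so nothing is left to run in a month-end batch.
END;
/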

By the way, I coded something that does this in a pretty straightforward way a few years ago, meaning you could skip the batch billing process and add a few details to a schema that would cause the database to behave exactly as described above. Although the format for specifying these extra features was easy enough (so it seemed to me as the author), it seemed the conceptual shift in thinking that it required of people was far larger than I initially and naively imagined. Nevertheless, I toil forward, and that is the core idea behind my Triangulum project.

Observation: There Will Be Code

This is not so much a theorem as an observation. This observation is that if your application requires Fourth Order Business Logic then somebody is going to code something somewhere.

An anonymous reader pointed out in the comments to Part 2 that Oracle's MODEL clause may work in some cases. I would assume so, but I would also assume that reality can create complicated Fourth Order cases faster than SQL can evolve. Maybe.

But anyway, the real observation here is that no modern language, either app level or SQL flavor, can express an algorithm declaratively. In other words, no combination of keys, constraints, calculations and derivations, and no known combination of advanced SQL functions and clauses will express an ERP Allocation routine or a Magazine Regulation routine. So you have to code it. This may not always be true, but I think it is true now.

This is in contrast to the example given in the previous section about the fixed total of 2nd and 3rd Order Logic. Unlike that example, you cannot provide enough 2nd order wizardry to eliminate fourth order. (well ok maybe you can, but I haven't figured it out yet myself and have never heard that anybody else is even trying. The trick would be to have a table that you truncate and insert a single row into, a trigger would fire that would know how to generate the next INSERT, generating a cascade. Of course, since this happens in a transaction, if you end up generating 100,000 inserts this might be a bad idea ha ha.)

Theorem 5: Second Order Tools Reduce Code

This theorem rests on the acceptance of an observation, that using meta-data repositories, or data dictionaries, is easier than coding. If that does not hold true, then this theorem does not hold true. But if that observation (my own observation, admittedly) does hold true, then:

  • By Theorem 3, the sum of 2nd and 3rd order logic is fixed
  • By observation, using meta-data that manages schema requires less time than coding,
  • By Theorem 1, 2nd order is analyzed and specified as schema
  • Then it is desirable to specify as much business logic as possible as 2nd order schema, reducing and possibly eliminating manual coding of Third Order programs.

Again we go back to the batch billing example. Is it possible to convert it all to 2nd Order as described above? Well, yes it is, because I've done it. The trick is an extremely counter-intuitive modification to a foreign key: instead of failing, it generates the parent row that would let the key succeed. To find out more about this, check out Triangulum (not ready for prime time as of this writing).

Conclusions

The major conclusion in all of this is that analysis and design should begin with First and Second Order Business Logic, which means working out schemas, both the user-supplied values and the system-supplied values.

When that is done, what we often call "processes" are layered on top of this.

Tomorrow we will see part 4 of 4, examining the business logic layer, asking, is it possible to create a pure business logic layer that gathers all business logic unto itself?

Categories: Development

Oracle enhanced adapter 1.3.2 is released

Raimonds Simanovskis - Tue, 2011-01-04 16:00

I just released Oracle enhanced adapter version 1.3.2 with the latest bug fixes and enhancements.

Bug fixes and improvements

Main fixes and improvements are the following:

  • The previous version, 1.3.1, checked whether the TNS_NAME environment variable was set and only then used the database connection parameter provided in database.yml as a TNS connection alias; otherwise it defaulted to a connection to localhost with the provided database name. This was causing issues in many setups.
    Therefore this has now been simplified: if you provide only the database parameter in database.yml, it will by default be used as a TNS connection alias or TNS connection string.
  • A numeric username and/or password in database.yml will be automatically converted to a string (previously you needed to quote them using "...").
  • Database connection pool and JNDI connections are now better supported for JRuby on Tomcat and JBoss application servers.
  • NLS connection parameters are supported via environment variables or in database.yml. For example, if you need NLS_DATE_FORMAT in your database session to be DD-MON-YYYY, then either specify nls_date_format: DD-MON-YYYY in database.yml for the particular database connection or set ENV['NLS_DATE_FORMAT'] = 'DD-MON-YYYY' in e.g. config/initializers/oracle.rb (see the sketch after this list). You can see the list of all NLS parameters in the source code.
    It should only be necessary to specify these NLS session parameters if you work with an existing legacy database which has, for example, stored procedures that require particular NLS settings. If this is a new database just for Rails purposes then there is no need to change any settings.
  • If you have defined foreign key constraints then they are now correctly dumped in db/schema.rb after all table definitions. Previously they were dumped right after the corresponding table, which sometimes meant the schema could not be recreated from the schema dump because it tried to load a constraint referencing a table that had not yet been defined.
  • If you are using the NCHAR and NVARCHAR2 data types then NCHAR and NVARCHAR2 values are now correctly quoted with N'...' in SQL statements.
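
To make the NLS item above concrete, here is roughly what the two options described there look like; the values are just the DD-MON-YYYY example from the text, the file paths are the usual Rails conventions, and PROD is a placeholder TNS alias:

# config/initializers/oracle.rb - the environment variable option
ENV['NLS_DATE_FORMAT'] = 'DD-MON-YYYY'

# Alternatively, the same setting as a per-connection key in config/database.yml:
#
#   production:
#     adapter: oracle_enhanced
#     database: PROD
#     nls_date_format: DD-MON-YYYY
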
Upcoming changes in Rails 3.1

Meanwhile the Oracle enhanced adapter has been updated to pass all ActiveRecord unit tests on the Rails development master branch and has also been updated according to Arel changes. The Arel library is responsible for all SQL statement generation in Rails 3.0.

Rails 3.0.3 uses Arel version 2.0, which was a full rewrite of Arel 1.0 (used in the initial Rails 3.0 version). As a result of this rewrite it is much faster, and ActiveRecord in Rails 3.0.3 is now already a little bit faster than ActiveRecord in Rails 2.3.

There are several improvements in the Rails master branch, planned for the Rails 3.1 version, which are already supported by the Oracle enhanced adapter. One improvement is that ActiveRecord will support prepared statement caching (initially for standard simple queries like find by primary key), which will reduce SQL statement parsing time and memory usage (and probably Oracle DBAs will complain less about Rails dynamic SQL generation :)). The other improvement is that ActiveRecord will correctly load included associations with more than 1000 records (which currently fail with an ORA-01795 error).

But I will write more about these improvements later, when Rails 3.1 is released :)

Install

As always you can install Oracle enhanced adapter on any Ruby platform (Ruby 1.8.7 or Ruby 1.9.2 or JRuby 1.5) with

gem install activerecord-oracle_enhanced-adapter

If you have any questions please use the discussion group, report issues at GitHub, or post comments here. And the best way to contribute is to fix an issue or create an enhancement and send me a pull request at GitHub.

Categories: Development

Database smart flash cache wait events

Yasin Baskan - Tue, 2011-01-04 03:12
When you enable the database smart flash cache and start using it, you will see new wait events related to it. These events help you find out whether a problem is related to the flash cache or not.

The ones I have faced so far are "db flash cache single block physical read", "db flash cache multiblock physical read" and "write complete waits: flash cache". These are from an 11.2.0.1 database using the F5100 flash array as the database smart flash cache.

db flash cache single block physical read

"db flash cache single block physical read" is the flash cache equivalent of "db file sequential read". Read waits from the flash cache are not accounted for in the "db file sequential read" event and have their own wait event. The following is from an AWR report of 30 mins from a database using the database smart flash cache.


Top 5 Timed Foreground Events

    Event                                       | Waits      | Time(s) | Avg wait (ms) | % DB time | Wait Class
    --------------------------------------------+------------+---------+---------------+-----------+---------------
    db file sequential read                     |    649,576 |  15,469 |            24 |     47.33 | User I/O
    DB CPU                                      |            |  10,451 |               |     31.98 |
    db flash cache single block physical read   | 11,217,303 |   5,044 |             0 |     15.43 | User I/O
    buffer busy waits                           |     71,323 |   1,845 |            26 |      5.65 | Concurrency
    log file switch (checkpoint incomplete)     |      2,305 |     251 |           109 |      0.77 | Configuration

    There are over 11 million "db flash cache single block physical read" waits which took about 0.44ms on average (AWR reports it as 0ms). "db file sequential read" waits are over 600,000. This means we had a high flash cache hit ratio, most of the reads were coming from the flash cache, not the disks.

    This is the wait event histogram from the same AWR report.




    Wait Event Histogram (values are % of waits in each bucket)

    Event                                       | Total Waits | <1ms | <2ms | <4ms | <8ms | <16ms | <32ms | <=1s | >1s
    --------------------------------------------+-------------+------+------+------+------+-------+-------+------+-----
    db flash cache single block physical read   | 11.2M       | 99.0 |  .9  |  .1  |  .0  |  .0   |  .0   |  .0  |


    99% of all flash cache single block reads were under 1ms, none of them are over 4ms.
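
    Outside of AWR, the same distribution can be pulled at any time from V$EVENT_HISTOGRAM, which is cumulative since instance startup. A simple query like this shows the buckets for the flash cache read events:

    SELECT event, wait_time_milli, wait_count
      FROM v$event_histogram
     WHERE event LIKE 'db flash cache%'
     ORDER BY event, wait_time_milli;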

    db flash cache multiblock physical read


    "db flash cache multiblock physical read" is the flash cache equivalent of "db file scattered read". It is the event we see when we are reading multiple blocks from the flash cache. The AWR report I am using in this post does not contain many multiblock operations but here is the event from the foreground wait events section.


    Event                                     | Waits | %Time-outs | Total Wait Time (s) | Avg wait (ms) | Waits/txn | % DB time
    ------------------------------------------+-------+------------+---------------------+---------------+-----------+----------
    db flash cache multiblock physical read   | 1,048 | 0          | 1                   | 1             | 0.00      | 0.00


    We had 1048 waits of 1ms on average.



     
    Wait Event Histogram (values are % of waits in each bucket)

    Event                                     | Total Waits | <1ms | <2ms | <4ms | <8ms | <16ms | <32ms | <=1s | >1s
    ------------------------------------------+-------------+------+------+------+------+-------+-------+------+-----
    db flash cache multiblock physical read   | 1171        | 83.2 | 12.4 | 3.2  |  .9  |  .3   |       |      |




    The wait event histogram shows that again most of the waits are below 1ms with some being up to 16ms.

    write complete waits: flash cache

    This is the wait event we see when DBWR is trying to write a block from the buffer cache to the flash cache and a session wants to modify that block. The session waits on this event and goes on with the update after DBWR finishes its job. You can find a good explanation and a good example of this in Guy Harrison's post. Here is the event and its histogram in the same AWR report I have.


    Event                               | Waits | %Time-outs | Total Wait Time (s) | Avg wait (ms) | Waits/txn | % DB time
    ------------------------------------+-------+------------+---------------------+---------------+-----------+----------
    write complete waits: flash cache   | 345   | 0          | 1                   | 4             | 0.00      | 0.00





    Wait Event Histogram (values are % of waits in each bucket)

    Event                               | Total Waits | <1ms | <2ms | <4ms | <8ms | <16ms | <32ms | <=1s | >1s
    ------------------------------------+-------------+------+------+------+------+-------+-------+------+-----
    write complete waits: flash cache   | 345         | 22.9 | 23.2 | 27.2 | 19.7 |  5.5  |  .3   | 1.2  |


    I had 345 waits in 30 minutes with an average time of 4ms. But the important thing is that the contribution of this event to DB time was only 1 second in a 30-minute workload.

    You can see that this event starts climbing up in some situations, especially if you are having a problem with your flash cache device and writes to it start to take longer.

    Here is one case where poor DBWR processes were struggling to write to the flash cache. Database Control was showing high waits in the "Configuration" class, and those waits were "write complete waits", "free buffer waits" and "write complete waits: flash cache". This was all because my flash cache device was a single conventional HDD, not even an SSD. After changing the db_flash_cache_file parameter to use the F5100 array, this picture was not seen anymore.
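
    For reference, the flash cache itself is configured with just two initialization parameters, and the cumulative picture of these wait events is always available in V$SYSTEM_EVENT. The path and size below are placeholders, not a recommendation:

    -- point the flash cache at your flash device/file and size it (instance restart required)
    ALTER SYSTEM SET db_flash_cache_file = '/dev/raw/flash01' SCOPE = SPFILE;
    ALTER SYSTEM SET db_flash_cache_size = 64G SCOPE = SPFILE;

    -- cumulative flash cache waits since instance startup
    SELECT event, total_waits,
           ROUND(time_waited_micro / total_waits / 1000, 2) AS avg_ms
      FROM v$system_event
     WHERE event LIKE '%flash cache%'
     ORDER BY time_waited_micro DESC;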



    Moving OVD from Test to Production

    Mark Wilcox - Tue, 2011-01-04 03:00

    A customer asked support how to move a test OVD server to production.

    There are a couple of ways to do this.

    One way is to clone the environment:
    http://download.oracle.com/docs/cd/E15523_01/core.1111/e10105/testprod.htm#CH...

    Another way - which is particularly useful if you want to push configuration from a parent OVD server to children in a cluster:
    http://download.oracle.com/docs/cd/E14571_01/oid.1111/e10046/basic_server_set...
    Note if you use the second option and you have any data in a Local Store Adapter - you may also need to use the oidcmprec tool to synchronize that data:
    http://download.oracle.com/docs/cd/E14571_01/oid.1111/e10029/replic_mng_mon.h...


    Oem Agent not starting anymore..

    Bas Klaassen - Mon, 2011-01-03 04:42
    This morning I noticed a database down in our OEM application.  I also received an email from the incident, so I decided to look into this problem.  I found a solution very quickly.  It seemed the dbsnmp password of the target database was not valid anymore.  So I changed the password in OEM, configured the target again and the problem was solved... But then the agent stopped running and was producing al…
    Categories: APPS Blogs

    2010 in review (the easy way…)

    Klein Denkraam - Mon, 2011-01-03 04:19

    The stats helper monkeys at WordPress.com mulled over how this blog did in 2010, and here’s a high level summary of its overall blog health:

    Healthy blog!

    The Blog-Health-o-Meter™ reads Fresher than ever.

    Crunchy numbers

    Featured image

    A Boeing 747-400 passenger jet can hold 416 passengers. This blog was viewed about 8,100 times in 2010. That’s about 19 full 747s.

     

    In 2010, there were 4 new posts, growing the total archive of this blog to 58 posts. There were 3 pictures uploaded, taking up a total of 4mb.

    The busiest day of the year was May 28th with 72 views. The most popular post that day was MDL import/export.

    Where did they come from?

    The top referring sites in 2010 were itnewscast.com, google.com, google.co.in, Private networks, and decipherinfosys.wordpress.com.

    Some visitors came searching, mostly for ventoux, oracle dedicated vs shared, mont ventoux, rpe-02062, and oracle shared server mode.

    Attractions in 2010

    These are the posts and pages that got the most views in 2010.

    1. MDL import/export (May 2010)
    2. Just for myself (Shared server vs Dedicated) (October 2007, 4 comments)
    3. Change OWB location details (October 2009, 2 comments)
    4. OWB Process Flows that keep saying they are running (April 2009, 8 comments)
    5. Shrink OWB runtime (April 2008)


    2010 in review

    Digital Eagle - Sun, 2011-01-02 19:57
    This page has moved.  Please update your links:
    http://psst0101.digitaleagle.net/2011/01/02/2010-in-review/

    The stats helper monkeys at WordPress.com mulled over how this blog did in 2010, and here’s a high level summary of its overall blog health:

    Healthy blog!

    The Blog-Health-o-Meter™ reads Wow.

    Crunchy numbers

    Featured image

    A helper monkey made this abstract painting, inspired by your stats.

    About 3 million people visit the Taj Mahal every year. This blog was viewed about 55,000 times in 2010. If it were the Taj Mahal, it would take about 7 days for that many people to see it.

     

    In 2010, there were 26 new posts, growing the total archive of this blog to 158 posts.

    The busiest day of the year was September 22nd with 375 views. The most popular post that day was Step By Step.

    Where did they come from?

    The top referring sites in 2010 were peoplesoft.wikidot.com, jjmpsj.blogspot.com, psinstall.blogspot.com, google.com, and google.co.in.

    Some visitors came searching, mostly for ora-01502, peopletools tables, windows file lock viewer, yum samba, and peoplesoft installation step by step.

    Attractions in 2010

    These are the posts and pages that got the most views in 2010.

    1. Step By Step (September 2008, 11 comments)
    2. PeopleTools Tables (September 2007, 29 comments)
    3. ORA-01502: Indexes in unusable state (February 2008, 6 comments)
    4. Server Administration Tip: Logging Into the Weblogic Console (March 2007, 1 comment)
    5. Step by Step: Yum + Samba (December 2008)


    Business Logic: From Working Definition to Rigorous Definition

    Kenneth Downs - Sun, 2011-01-02 12:05

    This is part 2 of a 4-part mini-series that began before the holidays with A Working Definition of Business Logic. Today we proceed to a rigorous definition, tomorrow we will see some theorems, and the series will wrap up with a post on the "business layer."

    In the first post, the working definition said that business logic includes at least:

    • The Schema
    • Calculations
    • Processes

    None of these was very rigorously defined, kind of a "I'll know it when I see it" type of thing, and we did not talk at all about security. Now the task becomes tightening this up into a rigorous definition.

    Similar Reading

    Toon Koppelaars has some excellent material along these same lines, and a good place to start is his Helsinki Declaration (IT Version). The articles have a different focus than this series, so they make great contrasting reading. I consider my time spent reading through it very well spent.

    Definitions, Proofs, and Experience

    What I propose below is a definition in four parts. As definitions, they are not supposed to prove anything, but they are definitely supposed to ring true to the experience of any developer who has created or worked on a non-trivial business application. This effort would be a success if we reach some consensus that "at least it's all in there", even if we go on to argue bitterly about which components should be included in which layers.

    Also, while I claim the definitions below are rigorous, they are not yet formal. My instinct is that formal definitions can be developed using First Order Logic, which would allow the theorems we will see tomorrow to move from "yeah that sounds about right" to being formally provable.

    As for their practical benefit, inasmuch as "the truth shall make you free", we ought to be able to improve our architectures if we can settle at very least what we are talking about when we use the vague term "business logic."

    The Whole Picture

    What we commonly call "business logic", by which we vaguely mean, "That stuff I have to code up", can in fact be rigorously defined as having four parts, which I believe are best termed orders, as there is a definite precedence to their discovery, analysis and implementation.

    • First Order: Schema
    • Second Order: Derivations
    • Third Order: Non-algorithmic compound operations
    • Fourth Order: Algorithmic compound operations

    Now we examine each order in detail.

    A Word About Schema and NoSQL

    Even "schema-less" databases have a schema, they simply do not enforce it in the database server. Consider: an eCommerce site using MongoDB is not going to be tracking the local zoo's animal feeding schedule, because that is out of scope. No, the code is limited to dealing with orders, order lines, customers, items and stuff like that.

    It is in the very act of expressing scope as "the data values we will handle" that a schema is developed. This holds true regardless of whether the datastore will be a filesystem, an RDBMS, a new NoSQL database, or anything else.

    Because all applications have a schema, whether the database server enforces it or whether the application enforces it, we need a vocabulary to discuss the schema. Here we have an embarrassment of choices: we can talk about entities and attributes, classes and properties, documents and values, or columns and tables. The choice of "entities and attributes" is likely best because it is as close as possible to an implementation-agnostic language.

    First Order Business Logic: Schema

    We can define schema, including security, as:

    that body of entities and their attributes whose relationships and values will be managed by the application stack, including the authorization of roles to read or write to entities and properties.

    Schema in this definition does not include derived values of any kind or the processes that may operate on the schema values, those are higher order of business logic. This means that the schema actually defines the entire body of values that the application will accept from outside sources (users and other programs) and commit to the datastore. Restating again into even more practical terms, the schema is the stuff users can save themselves.

    With all of that said, let's enumerate the properties of a schema.

    Type is required for every attribute.

    Constraints are limits to the values allowed for an attribute beyond its type. We may have a discount percent that may not exceed 1.0 or 100%.

    Entity Integrity is usually thought of in terms of primary keys and the vague statement "you can't have duplicates." We cannot have a list of US States where "NY" is listed 4 times.

    Referential Integrity means that when one entity links or refers to another entity, it must always refer to an existing entity. We cannot have some script kiddie flooding our site with sales of items "EAT_ME" and "F***_YOU", because those are not valid items.

    The general term 'validation' is not included because any particular validation rule is a combination of any or all of type, constraints, and integrity rules.

    Second Order Business Logic: Derived Values

    When we speak of derived values, we usually mean calculated values, but some derivations are not arithmetic, so the more general term "derived" is better. Derivations are:

    A complete entity or an attribute of an entity generated from other entities or attributes according to a formula or rule.

    The definition is sufficiently general that a "formula or rule" can include conditional logic.

    Simple arithmetic derived values include things like calculating price * qty, or summing an order total.
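
    For the arithmetic case, this is exactly the kind of derivation that can be declared instead of coded. A minimal sketch using an Oracle 11g virtual column (the table here is a throwaway example, not part of any schema discussed elsewhere in this series):

    CREATE TABLE order_lines (
       orderid  NUMBER       NOT NULL,
       item     VARCHAR2(20) NOT NULL,
       price    NUMBER(10,2) NOT NULL,
       qty      NUMBER(6)    NOT NULL,
       -- Second Order: generated from First Order values, never supplied by a user
       amount   GENERATED ALWAYS AS (price * qty) VIRTUAL
    );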

    Simple non-arithmetic derivations include things like fetching the price of an item to use on an order line. The price in the order is defined as being a copy of the item's price at the time of purchase.

    An example of a complete entity being derived is a history table that tracks changes in some other table. This can also be implemented in NoSQL as a set of documents tracking the changes to some original document.

    Security also applies to generated values only insofar as who can see them. But security is not an issue for writing these values because by definition they are generated from formulas and rules, and so no outside user can ever attempt to explicitly specify the value of a derived entity or property.

    One final point about Second Order Business Logic is that it can be expressed declaratively, if we have the tools, which we do not, at least not in common use. I wrote one myself some years ago and am re-releasing it as Triangulum, but that is a post for another day.

    Sorting out First and Second Order

    The definitions of First and Second Order Business Logic have the advantage of being agnostic to what kind of datastore you are using, and being agnostic to whether or not the derived values are materialized. (In relational terms, derivations are almost always denormalizing if materialized, so in a fully normalized database they will not be there, and you have to go through the application to get them.)

    Nevertheless, these two definitions can right off bring some confusion to the term "schema." Example: a history table is absolutely in a database schema, but I have called First Order Business Logic "schema" and Second Order Business Logic is, well, something else. The best solution here is to simply use the terms First Order Schema and Second Order Schema. An order_lines table is First Order schema, and the table holding its history is Second Order Schema.

    The now ubiquitous auto-incremented surrogate primary keys pose another stumbling block. Because they are used so often (and so often because of seriously faulty reasoning, see A Sane Approach To Choosing Primary Keys) they would automatically be considered schema -- one of the very basic values of a sales order, check, etc. But they are system-generated so they must be Second Order, no? Isn't the orderid a very basic part of the schema and therefore First Order? No. In fact, by these definitions, very little if any of an order header is First Order, the tiny fragments that are first order might be the shipping address, the user's choice of shipping method, and payment details provided by the user. The other information that is system-generated, like Date, OrderId, and order total are all Second Order.

    Third Order Business Logic

    Before defining Third Order Business Logic I would like to offer a simple example: Batch Billing. A consulting company bills by the hour. Employees enter time tickets throughout the day. At the end of the month the billing agent runs a program that, in SQL terms:

    • Inserts a row into INVOICES for each customer with any time entries
    • Inserts a row into INVOICE_LINES that aggregates the time for each employee/customer combination.

    This example ought to make clear what I mean by defining Third Order Business Logic as:

    A Non algorithmic compound operation.

    The "non-algorithmic" part comes from the fact that none of the individual documents, an INVOICE row and its INVOICE_LINES, is dependent on any other. There is no case in which the invoice for one customer will influence the value of the invoice for another. You do not need an algorithm to do the job, just one or more steps that may have to go in a certain order.

    Put another way, it is a one-pass set-oriented operation. The fact that it must be executed in two steps is an artifact of how database servers deal with referential integrity, which is that you need the headers before you can put in the detail. In fact, when using a NoSQL database, it may be possible to insert the complete set of documents in one command, since the lines can be nested directly into the invoices.

    Put yet a third way, in more practical terms, there is no conditional or looping logic required to specify the operation. This does not mean there will be no looping logic in the final implementation, because performance concerns and locking concerns may cause it to be implemented with 'chunking' or other strategies, but the important point is that the specification does not include loops or step-wise operations because the individual invoices are all functionally independent of each other.
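
    To see the "steps without loops" point in SQL terms, a sketch of the whole specification might look like the two statements below. The billed flag, the omitted invoice id, and the exact column lists are placeholders, and rates are ignored:

    -- Step 1: one INVOICES header per customer that has unbilled time
    -- (assume invoiceid is assigned by a sequence via trigger, so it is omitted here)
    INSERT INTO invoices (invoice_date, customer)
    SELECT DISTINCT TRUNC(SYSDATE), customer
      FROM time_entries
     WHERE billed = 'N';

    -- Step 2: one INVOICE_LINES row per employee/customer combination
    -- (the join back to the new invoice headers is omitted for brevity)
    INSERT INTO invoice_lines (customer, employee, hours)
    SELECT customer, employeeid, SUM(hours)
      FROM time_entries
     WHERE billed = 'N'
     GROUP BY customer, employeeid;

    Neither statement's result depends on any other customer's rows, which is exactly what keeps this Third rather than Fourth Order.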

    I do not want to get side-tracked here, but I have had a working hypothesis in my mind for almost 7 years that Third Order Business Logic, even before I called it that, is an artifact, which appears necessary because of the limitations of our tools. In future posts I would like to show how a fully developed understanding and implementation of Second Order Business Logic can dissolve many cases of Third Order.

    Fourth Order Business Logic

    We now come to the upper bound of complexity for business logic, Fourth Order, which we label "algorithmic compound operations", and define a particular Fourth Order Business Logic process as:

    Any operation where it is possible or certain that there will be at least two steps, X and Y, such that the result of Step X modifies the inputs available to Step Y.

    In comparison to Third Order:

    • In Third Order the results are independent of one another, in Fourth Order they are not.
    • In Third Order no conditional or branching is required to express the solution, while in Fourth Order conditional, looping, or branching logic will be present in the expression of the solution.

    Let's look at the example of ERP Allocation. In the interest of brevity, I am going to skip most of the explanation of the ERP Allocation algorithm and stick to this basic review: a company has a list of sales orders (demand) and a list of purchase orders (supply). Sales orders come in through EDI, and at least once/day the purchasing department must match supply to demand to find out what they need to order. Here is an unrealistically simple example of the supply and demand they might be facing:

      *** DEMAND ***          *** SUPPLY ***
    
        DATE    | QTY           DATE    | QTY
    ------------+-----      ------------+----- 
      3/ 1/2011 |  5          3/ 1/2011 |  3
      3/15/2011 | 15          3/ 3/2011 |  6
      4/ 1/2011 | 10          3/15/2011 | 20
      4/ 3/2011 |  7   
    

    The desired output of the ERP Allocation might look like this:

     *** DEMAND ***      *** SUPPLY ****
        DATE    | QTY |  DATE_IN   | QTY  | FINAL 
    ------------+-----+------------+------+-------
      3/ 1/2011 |  5  |  3/ 1/2011 |  3   |  no
                      |  3/ 3/2011 |  2   | Yes 
      3/15/2011 | 15  |  3/ 3/2011 |  4   |  no
                      |  3/15/2011 | 11   | Yes
      4/ 1/2011 | 10  |  3/15/2011 |  9   |  no
      4/ 3/2011 |  7  |    null    | null |  no
    

    From this the purchasing agents know that the Sales Order that ships on 3/1 will be two days late, and the Sales Orders that will ship on 4/1 and 4/3 cannot be filled completely. They have to order more stuff.

    Now for the killer question: Can the desired output be generated in a single SQL query? The answer is no, not even with Common Table Expressions or other recursive constructs. The reason is that each match-up of a purchase order to a sales order modifies the supply available to the next sales order. Or, to use the definition of Fourth Order Business Logic, each iteration will consume some supply and so will affect the inputs available to the next step.

    We can see this most clearly if we look at some pseudo-code:

    for each sales order by date {
       while sales order demand not met {
          get earliest purchase order w/qty avail > 0
             break if none
          make entry in matching table
          // This is the write operation that 
          // means we have Fourth Order Business Logic
          reduce available qty of purchase order
       }
       break if no more purchase orders
    }
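
    For illustration, here is one way the allocation loop could be written in PL/SQL. The table and column names (SALES_ORDERS(order_date, qty), PURCHASE_ORDERS(po_date, qty_avail), ALLOCATIONS) are hypothetical, and this is only a sketch of the procedural shape, not an ERP implementation:

    declare
      l_needed number;
      l_take   number;
    begin
      for d in (select order_date, qty
                  from sales_orders
                 order by order_date) loop
        l_needed := d.qty;
        -- the inner cursor is re-opened for every demand row, so it sees the
        -- qty_avail values left over by the previous iterations
        for s in (select rowid rid, po_date, qty_avail
                    from purchase_orders
                   where qty_avail > 0
                   order by po_date) loop
          exit when l_needed = 0;
          l_take := least(l_needed, s.qty_avail);
          insert into allocations (demand_date, supply_date, qty)
          values (d.order_date, s.po_date, l_take);
          -- this write is what makes the logic Fourth Order: it changes the
          -- supply available to every later step
          update purchase_orders
             set qty_avail = qty_avail - l_take
           where rowid = s.rid;
          l_needed := l_needed - l_take;
        end loop;
      end loop;
    end;
    /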
    
    Conclusions

    As stated in the beginning, it is my belief that these four orders should "ring true" with any developer who has experience with non-trivial business applications. Though we may dispute terminology and argue over edge cases, the recognition and naming of the Four Orders should be of immediate benefit during analysis, design, coding, and refactoring. They rigorously establish both the minimum and maximum bounds of complexity while also filling in the two kinds of actions we all take between those bounds. They are datamodel agnostic, and even agnostic to implementation strategies within data models (like the normalize/denormalize debate in relational).

    But their true power is in providing a framework of thought for the process of synthesizing requirements into a specification and from there an implementation.

    Tomorrow we will see some theorems that we can derive from these definitions.

    Categories: Development

    SQL Features Tutorials: Unicode (New SQL Snippets Tutorial)

    Joe Fuda - Fri, 2010-12-31 18:00

    A new section on Unicode has been added to SQL Snippets. Topics covered include

    • a practical overview of Unicode
    • Unicode support in Oracle
    • working with Unicode data in UNIX and Windows XP operating systems
    • viewing Unicode data in Excel 2007, Firefox, Internet Explorer, Notepad, PuTTY, SQL Developer, and SQL*Plus
    • transferring Unicode data with WinSCP and ftp.

    • ...

    Sizing the SGA or the buffer cache when enabling the database flash cache

    Yasin Baskan - Thu, 2010-12-30 07:32
    The documentation about using the database flash cache feature recommends increasing db_cache_size or sga_target or memory_target, whichever you are using, to account for the metadata for the blocks kept in the flash cache. This is because for each block in the flash cache, some metadata is kept in the buffer cache. The recommendation is to add 100 bytes for a single instance database and 200 bytes for RAC, multiplied by the number of blocks that can fit in the flash cache. So you need to calculate how many blocks the flash cache can keep (by dividing db_flash_cache_size in bytes by the block size) and increase the buffer cache accordingly.
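
    For the example in this post (a 20g flash cache and an 8K block size), the documented figure can be turned into numbers directly in SQL*Plus; the query below is just that arithmetic, nothing more:

    select (20 * 1024 * 1024 * 1024) / 8192       as flash_blocks,
           100 * (20 * 1024 * 1024 * 1024) / 8192 as metadata_bytes
      from dual;

    That gives 2,621,440 blocks and roughly 250m of metadata at the documented 100 bytes per block; as we will see below, the overhead observed in practice is somewhat higher.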

    If you do not do this and the buffer cache size is too low, it is automatically increased to cater for this metadata. Either way, unless you increase your buffer cache size you will have less space for actual data blocks.

    I happened to learn about this adjustment by chance, and it gave me the opportunity to calculate exactly how much space the flash cache metadata needs.

    This is on a single instance 11.2.0.1 database on an M4000 server running Solaris 10.

    I start with db_cache_size=320m (I am not using sga_target or memory_target because I want to control the size of the buffer cache explicitly) and db_flash_cache_size=20g. The instance starts up without any errors or warnings, but the alert log shows:


    The value of parameter db_cache_size is below the required minimum
    The new value is 4MB multiplied by the number of cpus plus the memory required for the L2 cache.
    WARNING: More than half the granules are being allocated to the L2 cache when db_cache_size is set to 335544320. Either decrease the size of L2 cache, or increase the pool size to
    671088640

    If you look at db_cache_size at this point it shows 448m: the database automatically increased it from 320m to 448m. It also warns me that most of this space will be used for the flash cache metadata. This is a server with 32 CPUs (cores, actually), so multiplying by 4m gives 128m, which is the space that will be used for actual data blocks. The remaining 320m will be used for the flash cache metadata. I have 20g of flash cache and my block size is 8K, which means 2,621,440 blocks can fit in there. To see how much space the metadata for one block needs, I take the 320m reserved for metadata, convert it to bytes, and divide by the number of blocks: 320*1024*1024/2621440, which gives me 128 bytes.

    The documentation states 100 bytes for a single instance database but it is actually a little bit higher.

    Another case to verify. This time I start with db_cache_size=448m and db_flash_cache_size=60g. Similar messages are written to the alert log again.


    The value of parameter db_cache_size is below the required minimum
    The new value is 4MB multiplied by the number of cpus plus the memory required for the L2 cache.
    WARNING: More than half the granules are being allocated to the L2 cache when db_cache_size is set to 469762048. Either decrease the size of L2 cache, or increase the pool size to
    2013265920

    When I look at db_cache_size now I see that it is increased to 1088m.

    Of the 1088m buffer cache, again 128m will be used for data blocks, the remaining 960m is for the flash cache metadata. 60g of flash can hold 7,864,320 blocks, doing the math again tells me that the metadata for a single block is again 128 bytes.
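
    Both cases can be sanity-checked with a line of SQL arithmetic (the numbers are the metadata sizes and block counts worked out above):

    select 320 * 1024 * 1024 / 2621440 as case1_bytes_per_block,
           960 * 1024 * 1024 / 7864320 as case2_bytes_per_block
      from dual;

    Both expressions come out to 128 bytes.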

    If you are starting with a small buffer cache, remember to check the alert log and the current size of the buffer cache. If the buffer cache is already large and you do not see any adjustment, be aware that 128 bytes of it will be used as metadata for each block in the flash cache, which means less memory for actual data blocks. It is good practice to calculate this need beforehand and size the buffer cache accordingly.

    Using an ASM diskgroup as flash cache

    Yasin Baskan - Wed, 2010-12-29 14:04
    We have been testing the F5100 flash array in our humble lab (borrowed that term from a colleague, he knows who he is). There are two ways to use it, one is to place your datafiles on it, the other is to use it as the database flash cache.

    The database flash cache feature came with 11.2 and is a way to extend the SGA. It is not the same thing as the flash cache in Exadata; read this post by Kevin Closson to find out what the difference is. F5100 is one of the products you can use as the flash cache, the other is the F20 card.

    It is possible (and may be the best option) to use ASM to configure the flash cache. The documentation states that you can use a file name or an ASM diskgroup name for the db_flash_cache_file parameter, which is the parameter that enables the flash cache feature.

    So, how do we do this?

    The first step is creating an ASM diskgroup on the flash devices. In my test, for simplicity I use one flash device which is /dev/rdsk/c1t9d0s0. This is just one flash module (of size 24GB) from F5100. The process for creating the diskgroup is no different than creating a diskgroup on conventional disks, just make sure your asm_diskstring includes the flash device path. By using asmca I created a diskgroup named FLASH using external redundancy on this single flash device.
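
    If you prefer SQL over asmca, the equivalent diskgroup can also be created from a SQL*Plus session connected to the ASM instance; this is just a sketch using the same single device:

    create diskgroup flash external redundancy
      disk '/dev/rdsk/c1t9d0s0';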

    Then following the documentation I set the parameters to enable the flash cache.


    SQL> alter system set db_flash_cache_size=20G scope=spfile;


    System altered.

    SQL> alter system set db_flash_cache_file='+FLASH' scope=spfile;

    System altered.

    Now time to restart to make the new parameters effective.

    SQL> startup force;
    ORACLE instance started.

    Total System Global Area 2606465024 bytes
    Fixed Size                  2150840 bytes
    Variable Size            2113932872 bytes
    Database Buffers          469762048 bytes
    Redo Buffers               20619264 bytes
    Database mounted.
    ORA-03113: end-of-file on communication channel
    Process ID: 17560
    Session ID: 2471 Serial number: 3

    The instance fails to start. Looking at the alert log file we see this.

    Errors in file /u01/app/oracle/diag/rdbms/flash/flash/trace/flash_dbw0_18053.trc:
    ORA-01565: error in identifying file '+FLASH'
    ORA-17503: ksfdopn:2 Failed to open file +FLASH
    ORA-15045: ASM file name '+FLASH' is not in reference form
    DBW0 (ospid: 18053): terminating the instance due to error 1565

    So it seems that we need to specify the actual file name, not the name of the diskgroup. Let's see what we have in the diskgroup FLASH using asmcmd.

    ASMCMD> pwd
    +FLASH/FLASH/FLASHFILE
    ASMCMD> ls
    bufpool#2147472020.256.739041793

    Even though we got an error when starting the instance, a file was created under +FLASH/FLASH/FLASHFILE. The first FLASH in the path is the diskgroup name, the second is my database name.

    What we need to do is set this file as the db_flash_cache_file parameter to be able to start the instance with flash cache enabled. I edit my text init.ora file and change the parameter.

    db_flash_cache_file='+FLASH/FLASH/FLASHFILE/bufpool#2147472020.256.739041793'

    Now we can start the instance.

    SQL> create spfile from pfile='/tmp/x.ora';

    File created.

    SQL> startup
    ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
    ORACLE instance started.

    Total System Global Area 2606465024 bytes
    Fixed Size                  2150840 bytes
    Variable Size            2113932872 bytes
    Database Buffers          469762048 bytes
    Redo Buffers               20619264 bytes
    Database mounted.
    Database opened.
    SQL> show parameter flash_cache

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    db_flash_cache_file                  string      +FLASH/flash/flashfile/bufpool
                                                     #2147472020.256.739041793
    db_flash_cache_size                  big integer 20G

    The instance is up and I can see that the flash cache is enabled and using the correct ASM diskgroup.

    The documentation states that we can use the diskgroup name but as we saw it needs some correction.

    A problem and a lesson

    The path I followed to this point is a little embarrassing and teaches a lesson about setting parameters.

    At first I used db_flash_cache_file='FLASH' assuming it would use the ASM diskgroup. After restarting the instance I immediately started a workload on the database to see the effects of the flash cache. Here is what I saw in the Enterprise Manager performance page.


    The waits of class "Configuration" were holding back the database. When I clicked on the "Configuration" link I saw this.



    The system was mostly waiting on "free buffer waits", "write complete waits" and "write complete waits: flash cache". This is because of DBWR: someone has to write the blocks from the buffer cache to the flash when the buffer cache is full, and that process is DBWR. Since I love using the DTrace toolkit I used the iosnoop script in that toolkit to find out what DBWR was doing. iosnoop can show you which file a process is reading from or writing to the most. So I ran iosnoop for one of the DBWR processes.

    bash-3.00# ./iosnoop -p 16631
      UID   PID D     BLOCK   SIZE       COMM PATHNAME
      101 16631 W    175311   8704     oracle
      101 16631 W 246883104   8192     oracle /u01/app/oracle/product/11.2.0/dbhome_1/dbs/FLASH
      101 16631 W 246883104   8192     oracle /u01/app/oracle/product/11.2.0/dbhome_1/dbs/FLASH
      101 16631 W    175328   8704     oracle
      101 16631 W 246883120   8192     oracle /u01/app/oracle/product/11.2.0/dbhome_1/dbs/FLASH
      101 16631 W 246883120   8192     oracle /u01/app/oracle/product/11.2.0/dbhome_1/dbs/FLASH
      101 16631 W    175345   8704     oracle
      101 16631 W 246883136   8192     oracle /u01/app/oracle/product/11.2.0/dbhome_1/dbs/FLASH
      101 16631 W 246883136   8192     oracle /u01/app/oracle/product/11.2.0/dbhome_1/dbs/FLASH

    The file it is trying to write to is $ORACLE_HOME/dbs/FLASH and since that directory is on a slow local disk it is taking a long time and causing the waits.

    If I had looked at the db_flash_cache_file parameter after restarting the database to see if it was set correctly, I would have seen this before starting the workload. So, once more, the lesson is to check that a parameter was set the way you intended before taking further action.

    What does ISDEFAULT mean in v$parameter?

    Yasin Baskan - Mon, 2010-12-27 03:32
    There is a common misunderstanding among DBAs about the column ISDEFAULT in the view v$parameter. Some think that when this column is TRUE it means the current value of the parameter is the default value. This leads to wrong conclusions and sometimes wrong settings, even in production environments.

    The documentation says this about this column:

    "Indicates whether the parameter is set to the default value (TRUE) or the parameter value was specified in the parameter file (FALSE)"

    This explanation is not a clear one and different people may understand different things from it.

    This column is dependent on the setting in the parameter file when the instance is started. It does not show if the current value is the default or not. It only shows if the parameter is set in the parameter file or not. When it is TRUE it means that you did not set this in the parameter file when starting the instance. When it is FALSE it means this parameter was set in the parameter file when starting the instance.

    Here is a real-life case about this. While changing a few parameters, an Exadata DBA accidentally sets cell_offload_processing to FALSE with an alter system command. When he tries to revert the settings he looks at v$parameter.ISDEFAULT to find out if cell_offload_processing=FALSE is the default setting. He sees that ISDEFAULT=TRUE and arrives at the wrong conclusion that cell_offload_processing=FALSE is the default value, so he leaves the parameter that way. This disables all Exadata storage offloading and may cause query times to go through the roof.

    Let's look at this with an example on 11.2.0.2. This is from a database created with default parameters.


    SQL> select name,VALUE,ISDEFAULT,ISMODIFIED from v$parameter
      2  where name='cell_offload_processing';


    NAME                           VALUE                          ISDEFAULT ISMODIFIED
    ------------------------------ ------------------------------ --------- ----------
    cell_offload_processing        TRUE                           TRUE      FALSE

    We see that the parameter is set to TRUE, ISMODIFIED is false meaning we did not modify the parameter and ISDEFAULT is TRUE meaning we did not set this parameter in the parameter file.

    Now let's change it.


    SQL> alter system set cell_offload_processing=FALSE;


    System altered.


    SQL> select name,VALUE,ISDEFAULT,ISMODIFIED from v$parameter
      2  where name='cell_offload_processing';


    NAME                           VALUE                          ISDEFAULT ISMODIFIED
    ------------------------------ ------------------------------ --------- ----------
    cell_offload_processing        FALSE                          TRUE      SYSTEM_MOD

    After we set it to FALSE we see that ISMODIFIED reflects the change, but ISDEFAULT is still TRUE. If a DBA concludes from this that FALSE is the default value and takes action based on that, the result will be wrong. Since we did not set this parameter in the parameter file when starting the instance, ISDEFAULT still shows TRUE.

    Let's do a restart and look at v$parameter again.


    SQL> startup force;
    ORACLE instance started.


    Total System Global Area  839282688 bytes
    Fixed Size                  2231128 bytes
    Variable Size             583009448 bytes
    Database Buffers          251658240 bytes
    Redo Buffers                2383872 bytes
    Database mounted.
    Database opened.
    SQL> select name,VALUE,ISDEFAULT,ISMODIFIED from v$parameter
      2  where name='cell_offload_processing';


    NAME                           VALUE                          ISDEFAULT ISMODIFIED
    ------------------------------ ------------------------------ --------- ----------
    cell_offload_processing        FALSE                          FALSE     FALSE

    Now we see that ISDEFAULT is FALSE because the parameter was in the parameter file.

    What if we set the parameter to the default value?


    SQL> alter system set cell_offload_processing=TRUE;


    System altered.


    SQL> select name,VALUE,ISDEFAULT,ISMODIFIED from v$parameter
      2  where name='cell_offload_processing';


    NAME                           VALUE                          ISDEFAULT ISMODIFIED
    ------------------------------ ------------------------------ --------- ----------
    cell_offload_processing        TRUE                           FALSE     SYSTEM_MOD

    As you see even if the parameter has the default value now, ISDEFAULT still shows FALSE.

    Again let's restart and look again.


    SQL> startup force;
    ORACLE instance started.


    Total System Global Area  839282688 bytes
    Fixed Size                  2231128 bytes
    Variable Size             583009448 bytes
    Database Buffers          251658240 bytes
    Redo Buffers                2383872 bytes
    Database mounted.
    Database opened.
    SQL> select name,VALUE,ISDEFAULT,ISMODIFIED from v$parameter
      2  where name='cell_offload_processing';


    NAME                           VALUE                          ISDEFAULT ISMODIFIED
    ------------------------------ ------------------------------ --------- ----------
    cell_offload_processing        TRUE                           FALSE     FALSE

    The parameter has the default value and ISDEFAULT is still FALSE after a restart. This is again because even if we set the parameter to the default value it was in the parameter file anyway.

    What's the correct way to fix this? If we reset the parameter instead of setting it explicitly, it reverts to the default value.


    SQL> alter system reset cell_offload_processing;


    System altered.


    SQL> select name,VALUE,ISDEFAULT,ISMODIFIED from v$parameter
      2  where name='cell_offload_processing';


    NAME                           VALUE                          ISDEFAULT ISMODIFIED
    ------------------------------ ------------------------------ --------- ----------
    cell_offload_processing        TRUE                           FALSE     FALSE


    SQL> startup force;
    ORACLE instance started.


    Total System Global Area  839282688 bytes
    Fixed Size                  2231128 bytes
    Variable Size             583009448 bytes
    Database Buffers          251658240 bytes
    Redo Buffers                2383872 bytes
    Database mounted.
    Database opened.
    SQL> select name,VALUE,ISDEFAULT,ISMODIFIED from v$parameter
      2  where name='cell_offload_processing';


    NAME                           VALUE                          ISDEFAULT ISMODIFIED
    ------------------------------ ------------------------------ --------- ----------
    cell_offload_processing        TRUE                           TRUE      FALSE

    Now we are back to the defaults as the reset command removed the parameter from the parameter file.

    So, do not count on the ISDEFAULT column when you are trying to find if the current value of a parameter is the default value or not. The documentation is the most reliable source to find out the default values of the parameters.
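
    If what you really want to know is whether a parameter is physically present in the spfile, v$spparameter is a more direct check than ISDEFAULT; here is a quick sketch (the ISSPECIFIED column shows whether the parameter is written in the spfile):

    select name, value, isspecified
      from v$spparameter
     where name = 'cell_offload_processing';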

    Creating listeners on the private network interfaces of RAC nodes or use that Exadata Infiniband network

    Yasin Baskan - Fri, 2010-12-24 07:23
    In general usage, everyone configures tns listeners on the public interfaces of RAC nodes so that clients can connect through the public network. Conventionally the private network interfaces are used for the interconnect traffic so most people do not open them to the application because the application traffic may interfere with the interconnect messaging.

    But what if we have a high-speed, high-bandwidth interconnect network that some applications could also use for fast communication with the nodes? Can we create tns listeners on the private interfaces, and if so, how? This is especially relevant for Exadata, where we have an Infiniband private network that is used for the RAC interconnect and for the communication between the storage nodes and the database nodes. Since it is a high-speed, low-latency network, application servers or ETL servers can connect to that network and use it for fast data transfer.

    To do this we need to create tns listeners on the private interfaces so that clients can connect through sqlnet. The following steps show how I did this. I did this configuration on a half rack Database Machine, but since I do not have access to that now, the example here is based on a test Virtualbox system running 11.2.0.2 RAC, so there is no Infiniband here; the steps are the same for Exadata. Also, DNS is not used: all addresses are defined in the /etc/hosts files of the nodes.

    Here is my /etc/hosts file for this setup.


    127.0.0.1               localhost.localdomain localhost
    ::1             localhost6.localdomain6 localhost6


    # Public
    192.168.56.2   rac1.localdomain        rac1
    192.168.56.3   rac2.localdomain        rac2
    # Private
    192.168.0.101   rac1-priv.localdomain   rac1-priv
    192.168.0.102   rac2-priv.localdomain   rac2-priv
    # Virtual
    192.168.56.4   rac1-vip.localdomain    rac1-vip
    192.168.56.5   rac2-vip.localdomain    rac2-vip
    # SCAN
    192.168.56.6   rac-scan.localdomain rac-scan

    I have two nodes, rac1 and rac2. The private network is 192.168.0 and the public network is 192.168.56. After the default installation I have the default listeners created on the public interfaces, there is only one SCAN address as this is a test setup.

    Assume I have an ETL server that is connected to the private network which needs to connect to the database through the private interface. What I need is a listener per node that is listening on the private IP addresses.

    If you start netca (from the grid home, because that is where the listeners run in 11.2) and try to create the listeners, you will see that you will not be able to select the private network.


    It will show only the public network because this selection is based on your virtual IP definitions. Since I do not have any VIPs on the private network I do not see it.

    So the first step is to create VIPs on the private interfaces. I start by adding the new VIPs to the /etc/hosts files. These lines should be added to both nodes' /etc/hosts file.


    # Private VIPs
    192.168.0.103   rac1-ib.localdomain rac1-ib
    192.168.0.104   rac2-ib.localdomain rac2-ib


    With root I run "srvctl add vip" from the grid home to create the VIPs.


    [root@rac1 ~]# cd /u01/app/11.2.0/grid/bin/
    [root@rac1 bin]# ./srvctl add vip -n rac1 -A 192.168.0.103/255.255.255.0/eth1 -k 2
    [root@rac1 bin]# ./srvctl add vip -n rac2 -A 192.168.0.104/255.255.255.0/eth1 -k 2

    I created a VIP for each node on the eth1 interface which is the private interface and I have specified "-k 2" to indicate that the network number is 2. You can use "srvctl add vip -h" to see what options you have.

    At this step if you look at the "ifconfig -a" output you will not see the new VIPs up because we have not started them up yet. Let's start them now.

    [root@rac1 bin]# ./srvctl start vip -i rac1-ib

    Now you will see the new IP up in the "ifconfig -a" output. This is the related line.


    eth1:2    Link encap:Ethernet  HWaddr 08:00:27:8B:69:FB
              inet addr:192.168.0.103  Bcast:192.168.0.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

    We need to start the new VIP for the second node also. Go to the second node and repeat the same for that.

    [root@rac2 bin]# ./srvctl start vip -i rac2-ib

    We can now start netca and we see the new subnet there. You can create the listener by selecting the new subnet. I named mine LISTENER_IB and chose 1522 as the port number.


    After netca completes here is what I see on rac1.


    [oracle@rac1 ~]$ lsnrctl status listener_ib


    LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 24-DEC-2010 13:36:20


    Copyright (c) 1991, 2010, Oracle.  All rights reserved.


    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_IB)))
    STATUS of the LISTENER
    ------------------------
    Alias                     LISTENER_IB
    Version                   TNSLSNR for Linux: Version 11.2.0.2.0 - Production
    Start Date                24-DEC-2010 13:34:46
    Uptime                    0 days 0 hr. 1 min. 35 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Parameter File   /u01/app/11.2.0/grid/network/admin/listener.ora
    Listener Log File         /u01/app/oracle/diag/tnslsnr/rac1/listener_ib/alert/log.xml
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_IB)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.103)(PORT=1522)))
    The listener supports no services
    The command completed successfully

    The listener is up and running but it does not support any services yet. Here are my init.ora parameters related to the listeners.

    SQL> show parameter listen


    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    listener_networks                    string
    local_listener                       string      (DESCRIPTION=(ADDRESS_LIST=(AD
                                                     DRESS=(PROTOCOL=TCP)(HOST=192.
                                                     168.56.4)(PORT=1521))))
    remote_listener                      string      rac-scan:1521

    I could change the local_listener parameter and add this listener so that my database registers with it. After that, since the default installation sets the SCAN listener as the remote_listener, SCAN would be able to direct connections to this listener as well. But there is a problem with this. What happens if SCAN directs the connection from the ETL server to the public interface instead of the private one? Or vice versa, what happens if it directs connections from the public network clients to the private interface? Users will get errors because they cannot reach the private network, and the ETL server will get errors because it cannot reach the public network.

    The correct way to register my database to the listeners is to use the listener_networks parameter. listener_networks is a new 11.2 parameter and serves the purpose of cross-registration when you have listeners on multiple networks. Basically with it you can say, "register my local listeners on the public interfaces to the SCAN listener, register my local listeners on the private interface to each other".

    This way clients using SCAN will connect to the public interfaces and clients using the private network will connect to the private interfaces. Let's do it now.

    So as not to clutter the listener parameters with long tns descriptions, let's add the definitions to the tnsnames.ora file and use the names instead. I add these lines to the tnsnames.ora file residing in the database home on both nodes. Remember to change the hostnames to rac2 for the entries other than ORCL_IB when editing the file on rac2.


    ORCL_IB =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-ib)(PORT = 1522))
        (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-ib)(PORT = 1522))
      )


    ORCL_PUBLIC_LOCAL=
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
      )


    ORCL_PRIVATE_LOCAL=
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-ib)(PORT = 1522))
      )

    ORCL_PUBLIC_LOCAL will be used as the local listener for the public network, and ORCL_PRIVATE_LOCAL as the local listener for the private network. ORCL_IB will be used as the remote listener for the private network so that the local private listeners can register with each other, and the SCAN address will be used as the remote listener for the public network.

    Now time to change the parameters.

    SQL> alter system set listener_networks='((name=public_network)(local_listener=orcl_public_local)(remote_listener=rac-scan:1521))','((name=priv_network)(local_listener=orcl_private_local)(remote_listener=orcl_ib))';

    System altered.

    SQL> alter system set remote_listener='';

    System altered.

    SQL> show parameter listen

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    listener_networks                    string      ((name=public_network)(local_l
                                                     istener=orcl_public_local)(rem
                                                     ote_listener=rac-scan:1521)),
                                                     ((name=priv_network)(local_lis
                                                     tener=orcl_private_local)(remo
                                                     te_listener=orcl_ib))
    local_listener                       string      (DESCRIPTION=(ADDRESS_LIST=(AD
                                                     DRESS=(PROTOCOL=TCP)(HOST=192.
                                                     168.56.4)(PORT=1521))))
    remote_listener                      string

    Now let's see what is registered to the new listener listener_ib.

    [oracle@rac1 ~]$ lsnrctl status listener_ib

    LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 24-DEC-2010 15:03:17

    Copyright (c) 1991, 2010, Oracle.  All rights reserved.

    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_IB)))
    STATUS of the LISTENER
    ------------------------
    Alias                     LISTENER_IB
    Version                   TNSLSNR for Linux: Version 11.2.0.2.0 - Production
    Start Date                24-DEC-2010 13:34:46
    Uptime                    0 days 1 hr. 28 min. 32 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Parameter File   /u01/app/11.2.0/grid/network/admin/listener.ora
    Listener Log File         /u01/app/oracle/diag/tnslsnr/rac1/listener_ib/alert/log.xml
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_IB)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.103)(PORT=1522)))
    Services Summary...
    Service "orcl" has 2 instance(s).
      Instance "orcl1", status READY, has 2 handler(s) for this service...
      Instance "orcl2", status READY, has 1 handler(s) for this service...
    The command completed successfully

    Both instances are registered and can be served by this listener. Also have a look at the scan listener to make sure we did not make a mess.

    [oracle@rac1 ~]$ lsnrctl status listener_scan1

    LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 24-DEC-2010 15:04:47

    Copyright (c) 1991, 2010, Oracle.  All rights reserved.

    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
    STATUS of the LISTENER
    ------------------------
    Alias                     LISTENER_SCAN1
    Version                   TNSLSNR for Linux: Version 11.2.0.2.0 - Production
    Start Date                24-DEC-2010 12:06:02
    Uptime                    0 days 2 hr. 58 min. 45 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Parameter File   /u01/app/11.2.0/grid/network/admin/listener.ora
    Listener Log File         /u01/app/11.2.0/grid/log/diag/tnslsnr/rac1/listener_scan1/alert/log.xml
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.6)(PORT=1521)))
    Services Summary...
    Service "orcl" has 2 instance(s).
      Instance "orcl1", status READY, has 1 handler(s) for this service...
      Instance "orcl2", status READY, has 1 handler(s) for this service...
    The command completed successfully

    It is OK too.

    Now we can use this tnsnames entry to connect through the private network.

    ORCL_ETL =
      (DESCRIPTION =
        (LOAD_BALANCE=on)
        (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-ib)(PORT = 1522))
        (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-ib)(PORT = 1522))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = orcl)
        )
      )

    It took quite a while for me to get this in writing. If you use this procedure and run into problems, or if you think there are points that are not explained clearly, let me know so I can edit the post; that way it will be more useful to others who may need it.

    Oracle IRM Desktop update

    Simon Thorpe - Thu, 2010-12-23 04:30

    Just in time for Christmas, we have made a fresh IRM Desktop build available with a number of valuable enhancements:

    • Office 2010 support
    • Adobe Reader X support
    • Enhanced compatibility with SharePoint
    • Ability to enable the Sealed Email for Lotus Notes integration during IRM Desktop installation

    The kit is currently available as a patch that you can access by logging in to My Oracle Support and looking for patch 9165540. The patch enables you to download a package containing all 27 language variants of the IRM Desktop. We will be making the kit available from OTN as soon as possible, at which time you will be able to pick a particular language if preferred.
