DBA Blogs

Everybody needs a spare database

alt.oracle - Sun, 2011-01-23 17:29

I've gotten a little preachy in this blog lately, so I thought this time I'd give you something useful. Have you ever wished you had a quicky little set of database tables so you could do some generally wacky stuff that would likely get you fired if you did it on your production database? I thought so. In the past, the only way to do something like this was to build another database somewhere. Of course, where to put it? Some of us weirdos have machines at home where we build databases, do virtual machines or stuff like that. Guilty. But not everyone wants to tie up their home machine with the multi-gigabyte behemoth that Oracle 11g has become. Well, have I got a deal for you.

Oracle provides a nifty little free service to show off their Oracle Application Express product (APEX), which I'm not sure has been as popular as they'd like it to be. You can register at their site and get your own little workspace that will allow you to play around with Oracle a little.

Here's how it works.

  • Go to http://apex.oracle.com and click the link to "Sign Up"
  • Click through the "next" buttons, giving Oracle your name and email address. Give them a real one since they'll send the verification link to it.
  • Provide a name for your workspace and a schema name for your database objects
  • Next you have to give a reason for requesting an account. Now, I don't know if anyone actually reads these or not, but you'd probably be better off if you didn't put something like "That dork from alt.oracle said it would be cool." Try "Evaluation purposes" instead.
  • Next, you type in your little verification thing with the goofy letters and click "Submit Request"
  • After a bit, you'll hopefully get an email back saying "click this link to verify, etc".
  • Lastly, you'll get another email with your login.

Then you can login and poke around. Truthfully, you can do a lot of stuff on your new personal Apex. I'm not super familiar with it yet, but it looks like you can...

  • Create your own tables, indexes, constraints, sequences, etc
  • Run SQL statements and scripts
  • Build PL/SQL objects
  • Build your own webby-type applications with the GUI "Application Builder"

I'm not sure yet if you can build web apps that you and others could access from a browser without going through the whole Apex frontend, but if so, that would be uber-cool. One word of warning however. FOR THE LOVE OF ALL THAT IS HOLY, DON'T PUT ANY REAL DATA IN THIS CRAZY THING! I have no idea as to how secure it is – it's only for evaluation purposes, so DON'T DO IT.

You can't do a lot of administration-type stuff with your own personal Apex. If you're looking to mess with parameter files and flash recovery areas, it's time to bust out a virtual machine. But it is nice to have a place where you could try some SQL stuff without fear of a pink-slip visit from HR. So go get your account and do some crazy, webby SQL stuff. And, finally, FOR THE LOVE OF ALL THAT IS HOLY, DON'T PUT ANY REAL DATA IN THIS CRAZY THING!
Categories: DBA Blogs

Oooohhh... shiny!

alt.oracle - Tue, 2011-01-18 22:52
I went to last year's Oracle Open World. I'd always wanted to go, but having been a consultant for so many years, those opportunities don't always come your way. In my experience, companies will spring for their own employees to go to Open World, but "no way" to that lousy, overpaid consultant who probably won't even be here next week. That leaves it up to the consulting company, whose take on things is usually, "If you don't already know everything they're talking about at Open World, then why did we hire you? Get back to work!" But since I work for a good consulting company, they offered me the chance to go.

Open World is a blast. If you're a geeky Oracle person like me, it's a complete nerd-o-gasm. First of all, Oracle's always announcing the "next big thing" – this year, it was the Oracle Linux kernel (perhaps the subject of a future post) and the latest in Exadata. Then you have your session speakers, most of which are pretty good. The technology booths full of people trying to sell you stuff are always cool. Of course, best of all is all the free swag you get. I came home with more techie junk than you can shake a datafile at. Let me tell you, it takes some mad ninja skilz to nab 11 t-shirts from Open World and get them home. I had to throw away all my underwear just to get them to fit in my luggage (don't ask me how the flight home was...).

Of course, the real focus of any Open World is same as that of a lot of the software industry – better, faster, stronger, more. Newer and shinier. What you have isn't what you need. I can't fault them for this – they need to keep selling stuff to compete and to stay in business, and innovation is a huge part of what we do. Progress is good. But sometimes a DBA needs to distinguish between something that represents progress and something that represents a big ol' pile of shiny.

I talked last time about how being a good DBA means having a healthy dose of skepticism. That has to apply to "new feature-itis" too. Part of what I do involves evaluating new technologies. Not only do I have to evaluate the tech to verify that it does what it says it does, I need to assess that its benefits are worth the time, risks and cost of adopting it. As an evaluator, there's an implied trust with my employers that if I recommend a shiny, new feature, it's because it will benefit their interests – not necessarily mine. I haven't seen it often, but I can remember working with more than one DBA who didn't share my take on this. I've come to expect non-technical people to fall into the whole "Look! Shiny!" thing when it comes to new tech. But some technical folks in positions of authority see new tech as a way to 1) pad their resume ("why yes I've worked with feature X, I helped bring it into my last company"), or 2) make them indispensable, since they adopted it and are the only ones who understand it. When I was a newbie DBA, I knew a senior DBA who did just that - repeatedly. Everybody could see it, but nobody was in a position to do anything about it. Then, he left and the rest of us were put in the position of having to support this big, expensive, shiny nightmare.

Flying along the bleeding edge can be a bumpy ride. Resist the urge to pad your resume at the expense of your employer. Otherwise, your big ol' pile of shiny might become a big ol' pile of something else.
Categories: DBA Blogs

Magical Snapshotty-things

alt.oracle - Thu, 2011-01-13 21:50

I spent some time with a storage vendor recently. Vendors kill me. No matter what you say, they still cannot for the life of them understand why you are not using their product. And if you are, they are mystified by the fact that you're not using every last bell and whistle. In this case, the vendor was questioning why we weren't using their magical-snapshotty backup solution. Basically the way their backup feature works (similar to most snapshotty type of features) is that when a backup occurs, only the deltas are written out. Then, pointers/vectors (vectors sounds cooler) act as reference points to the delta blocks. If a recovery has to occur, the product is smart enough to piece the data back together from the base image and the deltas. The upshot of stuff like this is that recoveries are blazingly fast and the amount of data written is extremely small.

Very attractive - too good to be true right? Maybe a little - which takes me to my conversation with the vendor and my point about the inability of vendors to see past their product.

Me: So, your solution doesn't actually copy the data anywhere, except for the deltas?
Them: Yes, that makes it extremely fast and it uses tiny amounts of space.
Me: Okay, but that means there's not a complete copy of the data on a physically separate part of the SAN?
Them: Yes, and it's extremely fast by the way.
Me: Um, yeah. So what if something radical happens? What if you lose more disks in the RAID group than you have parity disks?
Them: --Laughs--. That never happens.
Me: Really? I've seen RAID5 groups where two disks failed simultaneously.
Them: No way. Really?
Me: Yep. I've seen it happen three different times.
Them: --dumbfounded look. crickets chirping--
Me: So, you're willing to sign a form that guarantees that your storage system will never have a failure of that nature?
Them: --exasperated look-- Well, we can't really do that.
Me: Hmm. That's a shame.

In the end, they were probably frustrated with me, and I didn't intend to make them mad, but sometimes a DBA has to call BS on things. There's nothing wrong with their product. It's a very good product and we may end up making use of it in some way. The problem is that they're proceeding from a false assumption: namely, that unlikely failures are impossible failures. They're not.

In my last post, I mentioned that I would talk about the second common problem I see in the DBA world with backups, and that is "shortcuts" – ways to make things better, faster, stronger that ultimately leave you with a noose around your neck. The skeptic in me says, if it sounds too good to be true, it probably is – or at least there are probably some strings attached. If these guys were selling a magical-performance-tuney thing, it would be different. But as a DBA, you need to understand that there is no area where your fannie is on the line more than the recoverability of the data. If you lose data and can't recover - it's gone - and you may be too.

With all apologies to Harry Potter, the trouble with magic is that it isn't real. Database administration isn't an art – it's a hard, cold science. In the end, there aren't many shortcuts to doing your job. If you're going to use a magical backup solution, you have to be dead sure 1) that you know the exact process as to how you're going to magically recover that data and 2) that you've taken every eventuality into consideration.

So in the end, problem #2 is similar to problem #1. Test things and make sure you know what you're doing. If red flags go up, stop and think. I don't want to see you in the unemployment line.
Categories: DBA Blogs

We don't need no steennkking recoveries!

alt.oracle - Tue, 2011-01-04 22:24

Since this is a new blog, let's start with something basic - backups. Everybody knows you do need those 'steenking backups'. You know it, the rest of your team knows it, even the suits know it (they read it in a Delta Airlines inflight magazine). But there are a couple of problems I see with backups these days. The first lies with the DBA and, sadly, it can get your ass fired.

Yes, you did a nice and proper RMAN backup of your database. You used the right syntax, you reviewed the log, you even did a 'report need backup' command and made sure it came back clean. The real question is: if you lose everything, do you know how to put it back together? In my observation, it's utterly confounding how few DBAs today know how to do a recovery. Because let's face it - doing a backup is a deceptively simple process. You start RMAN and type 'backup database'. You can make it more complicated than that, but it doesn't always have to be. Backup is clean, orderly and comfortable. Recovery is messy, complicated and scary if you don't know what you're doing. Ask yourself these questions.

  • You lose a datafile. Do you know how to do a complete recovery without restoring the whole database? Do you know how to do it while the database is online?
  • You lose an entire database – control files, redo logs and datafiles. Can you do a complete recovery from that? Try it – it's an enlightening exercise.
  • A brilliant developer drops the most important table in your production database. Can you do a point-in-time incomplete recovery to roll the database forward to the point right before the table was dropped?
  • You lose an entire database, including the archivelogs. Have you tried the process of pulling the last backup off of tape and then doing a restore?
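
For the first scenario, here is a minimal sketch of a complete recovery of one lost datafile while the database stays open. File number 4 is illustrative, and a usable RMAN backup plus the needed archivelogs are assumed:

```shell
# Hedged sketch: recover a single lost datafile with the database still open.
# Datafile 4 is illustrative; a current backup and archivelogs are assumed.
rman target / <<'EOF'
sql 'alter database datafile 4 offline';
restore datafile 4;
recover datafile 4;
sql 'alter database datafile 4 online';
EOF
```

Only the affected file goes offline; the rest of the database stays available to users while RMAN restores the file and applies redo to it.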


The list goes on. So how do you learn to do this stuff? You learn by doing it. I tell students in my classes that if you want to know how to do recoveries, break stuff and then see if you can fix it. Literally. Build a database that no one else is using. Then, delete the system datafile and try to recover. Delete two redo logs and see how far you can get. Delete the system datafile AND the control files and see what you can do. It's one of the most enlightening experiences a DBA can go through. You'll learn what really makes a database tick. Consider this scenario – your mission critical, never-goes-down, life-blood of the company database experiences a media failure. The suits are surrounding you in your cube, watching your every move, questioning your every decision, telling you how much every moment of downtime is costing them, while sweat pours off your face onto your shaking hands as you try to think of the next command to type. I've seen it happen before.

Several years ago, I worked for a company that had a division that decided they needed their own personal DBA – the "regular" DBA group wasn't giving them enough TLC, I guess. They hired a guy who claimed to have extensive knowledge of Oracle version 9iR2 (way back then, yeah). He started on a Monday. That same day, the server on which his database lived had a drive controller error that wrote corrupt data across the entire RAID array. Time for recovery! Unfortunately, new guy (who was a really nice fellow) didn't have a clue as to what to do, and, worse, he didn't know some basic 9i stuff. He didn't know what an spfile was and he kept trying to "connect internal". Long story short, new guy started on a Monday and was gone by Wednesday.

Spare yourself the agony. Practice, practice, practice. Test, test and test again. We'll talk about the second problem next time. Until then, go break something – and try to fix it.
Categories: DBA Blogs

alt.oracle – the blog for the rest of us

alt.oracle - Sat, 2010-12-18 22:29

Welcome to alt.oracle - the Oracle blog of Steve Ries. What can you expect to find here? Well, this blog is designed to be different. I've read a lot of people who blog about Oracle and, while they're fine, they tend to fall into two categories. One, Oracle "business types" telling you about SOA, blah, blah, E-business, blah, blah, cloud computing, blah. The second are those that continually write articles on obscure, esoteric technical solutions that you may have need of once in your lifetime if someone held a gun to your head and said... "Okay punk. Write a concrete metaclass in Ruby that will take all the data in your database, convert it to the Japanese 16-bit Shifted Japanese Industrial Standard characterset, put it back in sideways and cause all your reverse key indexes to spontaneously combust. And for God's sake use the declarative style!!" That stuff is all well and good, but I'm an Oracle DBA and have been one for a long time. If I were going to read an Oracle blog on a regular basis (which is my hope for this blog), I'd want it to be a little more relevant to my daily life. And a little bit more entertaining.

If you're wondering about me, I've been a DBA for about 13 years. I hold five Oracle certifications and have been an Oracle instructor, training students in the Oracle certification tracks for about 6 years. I currently consult for a large governmental body best known for its guns and large tanks. I've specialized in designing high performance Oracle RAC systems all the way back to when it was called Oracle Parallel Server. But beyond that, I'm a geek. A pure Star Trek watching, video game playing, Linux compiling, unapologetic geek. I'm crazy enough to think that what I do for a living is so cool that if they asked me nicely, I just might do it for free. That being said, there are certainly parts of the job that I don't like. While I've learned to love good manager-types (they're rare and worth their weight in gold), the bad ones make my skin crawl. And sometimes I think that if someone uses another cool buzzword like "ROI" one more time, I'll go into an epileptic fit. So I like to complain about and make some good natured fun of "the suits" as they're often called. Heck - we all like doing that behind their backs. That's why Dilbert is so popular.

However cold and hard my computerized heart may be, I have a soft spot for Oracle newbies - people who are just trying hard to "get it done" while the suits are standing over their shoulders telling them to "make it faster!". So expect some helpful hints and stories along the way.  Of course, I take no responsibility if any of my "helpful hints" turn out to break your database.  Always test stuff somewhere safe first.

If you're wondering about the name "alt.oracle", it comes from the Usenet newsgroup alt.* hierarchy that goes all the way back to the "Great Renaming" of 1987 (I'm not making that up - look it up). I'm old enough to have actually used the Internet before the World Wide Web came around, back in the days of Gopher and WAIS, and Usenet was one of my favorite browsing spots. The groups in the alt.* hierarchy ranged from the extremely weird to the naughty, naughty, but tended to fall under the idea that they were "alternative to the mainstream". Taking the aforementioned goals into consideration, I thought it an appropriate name for this blog.

So maybe you get the idea by now. Sometimes informative, often opinionated, but always interesting. If you're an everyday, lunch pail, trying-to-get-it-done DBA, I think you'll like it here. If you're a “suit”, it would probably be better if you just left now...
Categories: DBA Blogs

OAM-OIM integration (both 11g)

Pankaj Chandiramani - Tue, 2010-12-14 17:58

Next up, I will be covering OAM-OIM integration for 11g.

Why do we need integration?
Oracle Identity Manager (OIM) is an identity administration solution that enables management of users and organizational identities, along with their associated attributes.

Oracle Access Manager (OAM) is an access management solution that facilitates authentication of resource-related accounts for users and organizations, and allows users and organizations to access their accounts by authorizing them.

OIM and OAM are integrated together to provide a complete identity and access solution.

I will start with the architecture and then cover the steps involved in the integration in a coming post...

Categories: DBA Blogs

Upgrade 10g Osso to 11g OAM (Part 2)

Pankaj Chandiramani - Sun, 2010-12-12 16:29

This is part 2 of http://blogs.oracle.com/pankaj/2010/11/upgrade_10g_osso_to_11g_oam.html

In the last post we saw the overview of upgrading OSSO to OAM 11g. Now for some more details on the same.

As we are using the co-existence feature, we have to install the OAM server and upgrade the existing OSSO 10g servers to the OAM servers.

OAM Upgrade Steps Overview
Pre-req: You already have OAM 11g installed
Upgrade step 1: Configure the user store and make it primary
Upgrade step 2: Create the policy domain (this is done by the UA automatically)
Upgrade step 3: Migrate partners (this is done by running the Upgrade Assistant)
Verify the successful upgrade

[Figure: OAM_Upgrade_from_osso.JPG (OAM upgrade from OSSO)]

Details on the UA step:
The existing OSSO 10g servers are upgraded to OAM by running the UA script, which copies over all the partner app details from OSSO to OAM 11g. The script is named run_ua.sh; it will ask you to input the Policies.properties file from the $OH/sso/config folder of OSSO 10g, as well as other variables like the db password.
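
As a rough sketch only (the working directory and the exact prompts here are assumptions based on the description above, not the authoritative procedure):

```shell
# Hypothetical run of the Upgrade Assistant script described above.
# The location of run_ua.sh and the prompt wording are assumptions.
./run_ua.sh
# When prompted, supply:
#   - the 10g Policies.properties file, e.g. $OH/sso/config/Policies.properties
#   - the database password and the other requested variables
```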

Some pointers


  • Upgrading OSSO to OAM 11g by default enables the co-existence mode on the OAM server
  • Front-end the OAM server with the same load balancer that fronts the OSSO 10g servers
  • Now the OAM and OSSO 10g servers are working in co-exist mode
  • OAM 11g is made to understand the 10g OSSO token format and session handling capabilities so as to co-exist with the 10g OSSO servers

How to test?

Try to access the partner applications and verify that single sign-on works. Also, verify that the user does not have to log in again if already authenticated by either the OAM or OSSO 10g server.

Screenshots and troubleshooting tips to follow...

Categories: DBA Blogs

Live Webcast: Eliminate Silent Data Corruption with Oracle Linux

Sergio's Blog - Fri, 2010-12-03 03:49

On Thursday 16 December at 9:00 am Pacific, Martin Petersen, Linux Kernel Developer at Oracle, and I are hosting a live webcast covering:

  • The impact of data corruption to the business environment
  • How Oracle's Unbreakable Enterprise Kernel reduces the potential for incorrect data to be written to disk
  • Data integrity features that decrease application and database errors and system down-time
Sign up here.

Categories: DBA Blogs

You can always learn something new.

Jared Still - Sat, 2010-11-27 13:57
It’s high time for this blog to come off hiatus.  I really don’t know why I let it go so long; just pre-occupied with work and extracurricular activities, I guess.

One of those activities was to contribute two chapters to a new book from Apress, Pro Oracle SQL.  Though it was only two chapters, it did consume a significant amount of time.  Some folks seem to be able to bang out well written prose and code with seemingly little effort.  It seems that I labor over it more than most, at least it feels that way at times.

On to something new.  Not really new, but it was new to me the other day.  Or if it was not new, I had completely forgotten about it.

It has to do with the innocuous date formats used with to_date().  I ran into some unexpected behavior from to_date() while running one of the scripts used for the aforementioned book.
When logging into a database, part of my normal login includes setting the nls_date_format for my session:

 alter session set nls_date_format='mm/dd/yyyy hh24:mi:ss'  

The main purpose of doing so is so that DBA scripts that include dates will display in my preferred format without the need to use to_date(to_char()) to display the preferred format while preserving the data type.

When writing scripts that may be used by others, or any circumstance where I cannot rely on a setting for nls_date_format, I will use to_char() and to_date() with format masks to ensure the script will run without error.

When developing scripts for use in published writing, I normally do not set nls_date_format for my sessions, but this time I had forgotten to disable it.

So, when double checking the scripts to be included with the book chapters, I was rather surprised to see that one of them did not work.

 SQL> l  
1 select
2 to_date('01/01/2011 12:00:00','mm/dd/yyyy hh24:mi:ss')
3 , to_date('01/01/2011')
4* from dual;
SQL>
, to_date('01/01/2011')
*
ERROR at line 3:
ORA-01843: not a valid month

The SQL session I was checking it from was connected to a completely new and different database, set up just for the purpose of verifying that the scripts all worked as I expected, but one script failed on the to_date().  At first I thought it was just due to not having a format mask specified in the second to_date(), but then immediately wondered why the script had always worked previously. You can probably guess why, though at first I did not understand what was occurring.

The new environment was not setting nls_date_format upon login.  I had inadvertently set up my initial test environment, where the scripts were developed, with nls_date_format=’mm/dd/yyyy hh24:mi:ss’.

What surprised me was that to_date(‘01/01/2011’) had worked properly without a specific date format mask, and with a date format that did not match the nls_date_format.

The “new” bit is that as long as the date format corresponds to part of the session nls_date_format setting, the conversion will work.

So, with nls_date_format set to ‘mm/dd/yyyy hh24:mi:ss’, we should expect to_date(‘01/01/2011’) to succeed.

This can easily be tested by setting a more restrictive nls_date_format, and then attempting to use to_date() without a format mask.

 SQL> alter session set nls_date_format = 'mm/dd/yyyy';  
Session altered.
SQL> select to_date('01/01/2011 12:00') from dual;
select to_date('01/01/2011 12:00') from dual
*
ERROR at line 1:
ORA-01830: date format picture ends before converting entire input string

When I saw that error message, I then understood what was happening. to_date() could be used without a format mask, as long as the date corresponded to a portion of the nls_date_format.  When the specified date exceeded what could be represented by the nls_date_format, an ORA-1830 error would be raised.

In this sense it is much like number formats.  I was a little surprised that I didn’t already know this, or had forgotten it so completely.

But, here’s the real surprise.  The following to_date calls will also be correctly translated by nls_date_format.

 SQL> select to_date('Jan-01 2011') from dual;  
TO_DATE('JAN-012011
-------------------
01/01/2011 00:00:00
1 row selected.

SQL> select to_date('Jan-01 2011 12:00:00') from dual;
TO_DATE('JAN-012011
-------------------
01/01/2011 12:00:00
1 row selected.

This was quite unexpected.  It also is not new.  I tested it on various Oracle versions going back to 9.2.0.8, and it worked the same way on all of them.

There’s always something to learn when working with complex pieces of software such as Oracle, even something as seemingly simple as formatting dates.
Categories: DBA Blogs

How to Install the Oracle-Validated rpm Using a Local Repository

Alejandro Vargas - Tue, 2010-11-23 23:20

One of the steps required to install Oracle on Linux is to install all the Linux packages (rpms) needed by Oracle, along with their dependencies. That is followed by creating the Oracle user account and groups and setting up the kernel parameters.

All of these tasks can be automated by installing a single rpm that is distributed by Oracle: the Oracle-Validated rpm.

The Oracle Enterprise Linux 5.5 distribution disk includes the Oracle-Validated rpm as well as the ASMLib-related rpms.

The rpm can be installed as part of the Linux install process, as explained in Sergio Leunissen's post from 2009.

Another option, if you have your server connected to the Internet, is to run the install using yum; it will install the oracle-validated rpm and download all required dependencies.

Yet another option, if you don't have access to the Internet, is to set up a local or NFS-mounted repository that contains all the rpms included on the Linux distribution disk.

In this post I'm including an example of the steps required to set up a local rpm repository and install the Oracle-Validated rpm and its dependencies from it:

How to Install The Oracle-Validated rpm From a Local Repository
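
The linked document has the authoritative steps; as a hedged sketch (the mount point, paths and repo id below are assumptions), the local repository setup might look like this:

```shell
# Hedged sketch: build a local yum repository from the OEL 5.5 media and
# install oracle-validated from it. Mount point and paths are illustrative.
mount /dev/cdrom /media/oel55
createrepo /media/oel55/Server            # index the rpms on the disk

cat > /etc/yum.repos.d/oel55-local.repo <<'EOF'
[oel55-local]
name=OEL 5.5 local media
baseurl=file:///media/oel55/Server
enabled=1
gpgcheck=0
EOF

yum install -y oracle-validated           # dependencies resolve locally
```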

Categories: DBA Blogs

Using 11g RMAN Duplicate Command to Create a Physical Standby Database Over the Network

Alejandro Vargas - Sun, 2010-11-21 01:59

This post contains a quick, step-by-step walk through the procedure of creating a physical standby database using the RMAN DUPLICATE command, without using any backup.

Setting up a physical standby database is a simple operation when the required infrastructure is ready.

We need two servers, a network connecting them, and storage attached to the servers that is proportional to the database size, plus extra space for archive logs and backups.

The Oracle Home is installed on both servers at the same patch level; you may also use cloning to install the RDBMS home on the standby server.

You can find the details of the test on this document: how to create a physical standby database using Rman Duplicate command.pdf
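
The linked PDF has the full procedure; its heart is 11g active duplication, which, as a hedged sketch (the net service names are illustrative, and the auxiliary instance is assumed to be started NOMOUNT with a minimal pfile), looks something like this:

```shell
# Hedged sketch: create the standby over the network, no backup involved.
# 'prod' and 'stby' are illustrative net service names.
rman target sys@prod auxiliary sys@stby <<'EOF'
duplicate target database
  for standby
  from active database
  dorecover
  nofilenamecheck;
EOF
```

DORECOVER applies the redo shipped during the copy, so the standby comes out ready to start managed recovery.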

Categories: DBA Blogs

Upgrade 10g Osso to 11g OAM

Pankaj Chandiramani - Sat, 2010-11-20 18:08
As described earlier, OAM 11g is a replacement for 10g OSSO in the IDM suite. OAM 11g has a new feature called co-existence, whereby you can do a stepped replacement of the OSSO environment, i.e., a phased migration approach where only one 10g OSSO instance is migrated to 11g OAM at a time.

To be frank, this is a really cool feature, especially when you want to upgrade production and don't want downtime during migration. I will be walking through the step-by-step details on the same, but before that, here is an overview.

Typical OSSO Server Deployment Topology
A Cluster of 10g SSO Servers Front-ended by a Load Balancer (LBR)

Flow

User accesses a protected resource; the agent intercepts it and redirects to the LBR
-> The LBR routes the request to one of the SSO servers in the cluster
-> The SSO server authenticates and sets an SSO_ID cookie containing the session state.

10g Osso Topology
[Figure: git1.JPG (10g OSSO topology)]

Now, with co-existence, we can use a phased migration approach and replace the OSSO servers with OAM 11g servers one by one. The cluster will then have both 10g SSO servers and 11g OAM server(s), until all the servers are upgraded.

OSSO 10g - OAM 11g Co-existence
[Figure: git2.JPG (OSSO 10g and OAM 11g co-existence)]

So what's the problem?
  • The 10g SSO server sets an SSO_ID cookie
  • The 11g OAM server sets an OAM_ID cookie
  • They don't understand each other's cookies, and don't honor sessions created by each other
  • Single sign-on wouldn't work

Solution: the 11g OAM server should also
  • Understand the 10g SSO cookie
  • Create/update the 10g SSO cookie
To be continued...
Categories: DBA Blogs

Oracle RDBMS Home Install Using Cloning

Alejandro Vargas - Wed, 2010-11-17 00:40

Using a standard Oracle Home that is updated with the latest patches as the source to install new Oracle Homes can save a lot of time, compared to installing the same Oracle Home plus patches from scratch.

The procedure to clone an Oracle Home is simple and is well documented in a set of My Oracle Support documents, listed by release in Document 1154613.1.

In this post I'm providing a step-by-step example of cloning an 11g R2 Home: How to clone a 11g R2 Oracle Home

This is a nice-to-have solution if you need to make multiple installs on many servers. You do one install plus patches, then move that copy over to all the other servers.
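
The linked example and Document 1154613.1 are the authoritative references; as a hedged sketch (all paths and the home name below are assumptions):

```shell
# Hedged sketch of cloning an 11gR2 home; paths and names are illustrative.
# 1. Archive the patched source home and copy it to the target server:
tar -czf dbhome_1.tgz -C /u01/app/oracle/product/11.2.0 dbhome_1
scp dbhome_1.tgz target:/u01/app/oracle/product/11.2.0/

# 2. On the target, unpack and run the clone script shipped with the home:
tar -xzf dbhome_1.tgz
perl dbhome_1/clone/bin/clone.pl \
  ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 \
  ORACLE_HOME_NAME=OraDb11g_clone \
  ORACLE_BASE=/u01/app/oracle
```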

Categories: DBA Blogs

Data Guard for Manual Failover, Step by Step

Alejandro Vargas - Sat, 2010-11-13 17:14

In this post I'm showing the steps used to implement a manual failover scenario. My customer did not want to enable fast-start failover, but rather to leave the decision to fail over, in case of a major crash, to the management team.

In the example I'm providing here, I configured Flashback Database with a one-hour retention time so that the OS team has that window to solve any issues on the primary. If they succeed in solving the problem in that time, the old primary can easily be reinstated as the new standby; otherwise it will need to be recreated from a backup taken from the new primary.

All details of this experience can be found in this document: "Step by Step Configuration of a Physical Standby Database for Manual Failover"
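
The one-hour window mentioned above comes down to two settings on the primary; a minimal sketch (values illustrative; archivelog mode and a flash recovery area are assumed):

```shell
# Hedged sketch: enable Flashback Database with a one-hour retention target,
# so a crashed primary can later be flashed back and reinstated as a standby.
sqlplus / as sysdba <<'EOF'
alter system set db_flashback_retention_target=60 scope=both;
alter database flashback on;
EOF
```

The retention target is in minutes, so 60 matches the one-hour window described above.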

Categories: DBA Blogs

Why should I upgrade to OAM 11g?

Pankaj Chandiramani - Tue, 2010-11-02 00:37

One of my readers has some questions:

1) I use OSSO; what is OAM? Why should I upgrade to OAM 11g?
2) Can I upgrade from OAM 10g to OAM 11g?

So here are the details.
1) You should look to upgrade to OAM 11g, as OSSO will not be supported after 2011 (or will need extended support). OAM 11g is the supported product that will replace OSSO in Fusion Middleware. So if you are running OSSO 10g, you should start looking at upgrade options.

2) The current rollout of 11g OAM is intended mostly for OSSO customers. So you can wait for further announcements or follow up with the PM team.

Categories: DBA Blogs
