Feed aggregator

Agile PLM 9.2.2.1 – Part III – Application Node Installation

Aviad Elbaz - Wed, 2008-01-09 08:48
This is the 3rd (and last..) post about Oracle Agile 9.2.2.1 installation.
In this post we will walk through the Agile application node installation step by step, including all required Agile application components.

Previous related posts:
- Agile PLM 9.2.2.1 – Part I
- Agile PLM 9.2.2.1 – Part II – Database Node Installation


The Agile application node installation consists of the following steps:
- Complete all application installation prerequisites
- Oracle Application Server 10.1.2.0.2 Installation
- Oracle Applications Server Patch
- Agile Application Installation
- Agile Viewer Installation
- Deploy Agile application on Oracle Application Server
- Verify Installation
- Configure IIS as a Proxy Server for Agile PLM
- Configure File Manager with IIS
- Verify File Manager installation


Prerequisites

1) Copy the Platform directory from Disk2 to Disk1, at the same level as setup.exe
2) Make sure Microsoft IIS (Internet Information Services) is installed on this box.

*** It is important to install MS IIS before proceeding with the Oracle AS 10.1.2.0.2 installation; otherwise you might run into a port conflict between IIS and Oracle AS 10.1.2.0.2.


Oracle Application Server 10.1.2.0.2 Installation

1) Run installer - setup.exe (from Disk1 directory)


2) Oracle Home destination:
a. Name: oracleas1
b. Path: d:\OraHome_1


3) Language: Choose the appropriate languages.


4) Check the Administrative privileges


5) At the Select Configuration Options window, leave only the upper 2 options checked


6) Port configuration: Automatic


7) Fill in the Instance name and ias_admin password:


8) Install


9) Exit


10) Shutdown Oracle AS:
a. Open a cmd window
b. cd d:\OraHome_1\bin
c. emctl stop iasconsole
d. opmnctl stopall


Oracle Applications Server Patch Installation

1) Open a cmd window
a. set ORACLE_HOME=d:\OraHome_1
b. cd [Installation Dir]\Windows\patches\oas101202\OPatch
c. opatch apply d:\[Installation Dir]\Windows\patches\oas101202\OPatch\3992805


2) Type "Y" (for : Is this system ready …?)



Agile Application Installation

1) From Disk1: cd [Installation Dir]\Windows
2) Execute setup.exe


3) Accept the license agreement
4) Enter license & username


5) Select Applications Server + File Manager + Web Proxies


6) Location to install Agile application: D:\agile\Agile9221


7) Select Oracle Application Server 10g (10.1.2.0.2)


8) Select Standalone Installation


9) Enter Oracle Application Server Home directory: d:\OraHome_1


10) Click on Use Existing


11) Choose: No, use a Database for authentication


12) Hostname: agileapp.[domain]


13) Web Server information: agileapp.[domain]:80


14) Agile viewer information: agileapp.[domain]:5099


15) Database details:
a. Agile Database Host Name: agiledb
b. Agile Database Port: 1521
c. Agile Database SID: agile9
d. Agile Database User: agile


16) Virtual path: Agile


17) At File Manager User Authentication window select: Use Internal user account


18) File Manager Virtual Path: Filemgr


19) Agile File Manager window: agileapp.[domain]:80


20) Agile File Manager Storage Location: e:\agile\agile9221\files


21) Select to create product icons in a new Program Group called: Agile


22) Install…


23) Restart the system


Agile Viewer Installation

1) From Agile Viewer installation directory execute: setup_win.exe


2) Accept the license agreement
3) Enter User name and License key
4) Check the Agile Viewer only


5) Select New Install


6) Location: d:\Agile\Agile9221


7) Select Regular Agile Viewer


8) Enter hostname & port: agileapp.[domain]:5099


9) Done



Deploy Agile application on Oracle Application Server

1) cd d:\OraHome_1\opmn\bin
a. Stop all Oracle AS processes - opmnctl stopall
b. Start all Oracle AS processes - opmnctl startall
c. cd d:\agile\agile9221\agileDomain\bin
d. Execute command: DeployAgile


2) Verify deployment
a. cd d:\OraHome_1\dcm\bin
b. dcmctl listapplications



Verify Installation

1) Open the following URL in a browser: http://agileapp:7777/Agile/PLMServlet
2) Connect with the admin user


Configure IIS as a Proxy Server for Agile PLM

1) Navigate to: Control Panel -> Administrative tools -> Internet Information Services (IIS) Manager


2) Right click on “Default Web Site” (under Web Sites) -> properties
3) Select the “Home Directory” tab
4) In the “Execute permissions” list, select “scripts and executables”


5) Select the “ISAPI Filters” tab -> add
a. Filter Name: oproxy
b. Executable: D:\Agile\Agile9221\AgileProxies\oracle_proxy.dll


6) Right click on Default Web Site-> new -> Virtual directory


7) Alias: oproxy


8) Path: d:\agile\agile9221\AgileProxies


9) Check the read and execute options


10) Finish.
11) Navigate to “Web Service Extensions” -> select “All Unknown ISAPI Extensions” and click “Allow”


12) Navigate to: Control Panel -> Administrative tools -> Services
13) Restart the “IIS Admin Service”
14) Run in browser: http://agileapp/Agile/PLMServlet (without port 7777)
15) Log on with the admin user to verify the IIS configuration.


Configure File Manager with IIS

1) Edit d:\agile\agile9221\Tomcat\conf\server.xml
2) Look for the port in the following comment:
<!-- Define a Coyote/JK2 AJP 1.3 Connector on port 8009 -->

3) Edit the file jk2.properties -> set channelSocket.port=8009 (the port found in the previous step)
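For reference, the two settings that must agree look roughly like this; this is a sketch (the Connector attributes and port in your server.xml may differ), the point is only that channelSocket.port in jk2.properties must match the AJP connector port defined in server.xml:

```
<!-- Tomcat\conf\server.xml: the AJP connector Tomcat listens on -->
<!-- Define a Coyote/JK2 AJP 1.3 Connector on port 8009 -->
<Connector port="8009" enableLookups="false" protocol="AJP/1.3" />

# Tomcat\conf\jk2.properties: the IIS redirector must use the same port
channelSocket.port=8009
```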
4) Navigate to: Control Panel -> Administrative tools -> IIS Manager
5) Go to the ISAPI Filters tab -> add
a. name: Jakarta IIS Connector
b. Executable: D:\Agile\Agile9221\AgileProxies\isapi_redirect.dll
6) Right click on default web site-> new -> Virtual directory
a. Alias : Jakarta
b. Path: d:\agile\agile9221\agileproxies
7) Check the read and executable options
8) Restart IIS Admin Service again.


Verify File Manager installation

1) Start the Tomcat server: d:\agile\agile9221\tomcat\bin\catalina start
2) Open the following url in browser to check Java installation on client: http://agileapp/JavaClient/start.html


3) To use the Agile Java client, Java JRE 1.5.x must be installed
4) Open the following url again: http://agileapp/JavaClient/start.html
5) Click on Launch
6) Login with admin user.
7) Navigate to: Server setting -> locations
Verify all locations (especially under the File Manager tab)


Now that the Agile application node is installed, the Agile system is ready for use.
If you have an initial dump file, you can import it now with the agile9imp.bat script.

For more information:
Installing Agile PLM for OAS

You are welcome to leave a comment for any issue or additional information.

Aviad

Categories: APPS Blogs

Introduction to Simple Oracle Auditing

Ayyappa Yelburgi - Wed, 2008-01-09 06:20
Introduction

This article will introduce the reader to the basics of auditing an Oracle database. Oracle's RDBMS is a functionally rich product and there are a number of auditing alternatives available to the reader. Because auditing Oracle is such a huge subject, doing all of it justice would take an entire book, so this paper will cover the basics of why, when and how to conduct an audit.

Introduction to Simple Oracle Auditing

Ayyu's Blog - Wed, 2008-01-09 06:20
Categories: DBA Blogs

BEA mashup platform - Genesis

Rakesh Saha - Tue, 2008-01-08 17:43

Monitors in Server Manager

Mark Vakoc - Tue, 2008-01-08 17:42

Monitors


Most of my posts thus far have been about installation, troubleshooting, and other server manager basics. Today begins a series of posts outlining the new or enhanced capabilities provided by SM.


Monitors are the mechanism by which administrators can be alerted through e-mail when an event of interest occurs. Much of this functionality is a direct carryover from that provided by the SAW SMC infrastructure in previous tools releases with some significant enhancements to boot.


As you may be aware by now, Server Manager is a complete replacement for SAW and SMC. Among other benefits, such as deployment and configuration management, we wanted to enhance the functionality provided by the SAW application and make it easier to use.


While evaluating the SMC monitoring capabilities we identified the need to improve it in the following ways:

* Simplify the setup required to monitor servers and configure events of interest

* Enhance the monitored events to include some key items of interest, such as a user being unable to login to the E1 HTML Server

* Permit configuration of the hours in which alert e-mails should be sent for sites that make use of multiple administrators that are responsible for particular times of the week

* Maintain a history of past events and record the e-mail messages that were sent


We also changed the mechanism by which the events of interest are obtained. Beginning with 8.97 our server products contain an embedded variant of the management agent that provides Server Manager with the runtime information about the servers. Using this mechanism to obtain events provides two primary benefits: many of the events are reported to SM immediately upon occurring, and events can be obtained from clustered or multi-JVM configurations for our web-based products.


Currently monitoring is supported for our enterprise server and HTML server products only.


To get started, select the monitors link from the quicklinks section. Note you must be signed into the management console as the jde_admin user, or another user that has been granted the 'monitorConfig' permission, to make changes to the monitoring configuration.


SMTP Configuration


The first step is to configure the SMTP mail server that will be used by Server Manager to send emails. Simply supply the mail server name, the TCP/IP port to use, and the sender email to use as the 'from' address. Some SMTP servers may require the sender email to be from the same domain the mail server is configured to use. Note: SMTP servers that require authentication to send emails are not currently supported.


After making any changes you may supply an email address to test the settings. Server Manager will send an email to the supplied address to ensure the mail server configuration is correct.


Getting Started


The next step is to create a new monitor. You may have as many monitors as you wish. For example you may wish to create multiple monitors that listen for different events and each have different email recipients. Enter a name for the new monitor and select the 'Create' button. You will be redirected to a page used to configure the newly created monitor.


The first option in the general settings controls how often the monitor should poll for events. Some events will be detected immediately; when they occur a notification is sent to the management console and then to each running monitor. If such an event is enabled for a particular monitor an email will be sent immediately. Other events are polled on a periodic basis; for example, checking the free disk space on an enterprise server occurs on this periodic poll. You can change the frequency at which the monitor will check for these events.



Checking for monitored events is a low impact activity. That said, if you have a large number of monitors it may be advisable to increase this interval from the default of 30 seconds.


Secondly you may configure whether this monitor should be automatically started when the management console application is started. Regardless of this selection an authorized user may start and stop monitors at any time using the previous page.


Instance Selection


The next step involves selecting the managed instances that this monitor should observe. Simply move the desired instances from the available options list to the selected options. Note that any changes made here on a running monitor will take effect immediately; the monitor need not be restarted.



Event Selection


Now that we have selected which managed instances we should monitor we now need to select which events we wish to observe. You do so by simply selecting the events of interest in the next section of the page. Each event has a help box next to it describing what the event is and when it may occur.



Some events may have threshold values that allow you to define a limit that, once reached, will trigger an email notification. The example below shows the limits for simultaneous users.



Once a threshold limit is reached on an enabled event an email notification will be sent. Notifications will not be resent unless the threshold goes higher. Consider the simultaneous users event: if we set the threshold to 50 we would receive a notification once 50 users are on at the same time. If two users sign off and two new users sign back on, we are back at 50 simultaneous users; an email will not be sent, since an email for 50 users has already been sent. If another user signs on, so we are at 51 sessions, an email will be sent; we have gone higher than the highest threshold reached.
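The resend rule described above can be sketched in a few lines. This is an illustrative Python sketch of the described behavior, not Server Manager's actual implementation; the class and method names are invented:

```python
class ThresholdMonitor:
    """Alert only when a metric exceeds the highest value already alerted on."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.highest_alerted = None  # highest value an email was sent for

    def observe(self, value):
        """Return True if this observation should trigger an email."""
        if value < self.threshold:
            return False
        if self.highest_alerted is None or value > self.highest_alerted:
            self.highest_alerted = value
            return True
        return False  # already alerted at this level or higher

monitor = ThresholdMonitor(50)
print(monitor.observe(49))  # False: below threshold
print(monitor.observe(50))  # True: threshold reached, email sent
print(monitor.observe(50))  # False: users cycled, still at 50, no resend
print(monitor.observe(51))  # True: higher than the highest alerted value
```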


I won't go into what all the events are in this post; they are documented with online help within the application.


Notification Hours


For a particular monitor you may specify which hours in the day and which days of the week email notifications should be sent. This may be helpful for those who administer in shifts. Those interested in events on weekdays may be different from those interested in weekend events, for example.



When you create a new monitor the default will be to enable notifications for all hours of all days. You can change this by modifying the times for each day using 24 hour notation. To disable events for an entire day simply set the start time and end time to both be 00:00.
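The per-day window check, including the "00:00 to 00:00 disables the day" convention, can be sketched like this. This is an illustrative Python sketch; the data layout and function name are invented, not Server Manager's:

```python
from datetime import datetime

# One (start, end) window per weekday, Monday=0 .. Sunday=6.
# Days not listed fall back to the default (all hours enabled).
windows = {
    0: ("08:00", "17:00"),  # Monday: day shift only
    5: ("00:00", "00:00"),  # Saturday: disabled entirely
}

def notifications_enabled(when, windows, default=("00:00", "24:00")):
    """Return True if an email may be sent at the given datetime."""
    start, end = windows.get(when.weekday(), default)
    if start == end == "00:00":
        return False  # start == end == 00:00 disables the whole day
    hhmm = when.strftime("%H:%M")
    return start <= hhmm < end  # lexicographic compare works for HH:MM

print(notifications_enabled(datetime(2008, 1, 7, 9, 30), windows))   # Monday 09:30 -> True
print(notifications_enabled(datetime(2008, 1, 7, 18, 0), windows))   # Monday 18:00 -> False
print(notifications_enabled(datetime(2008, 1, 12, 12, 0), windows))  # Saturday -> False
```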


The management console will use the clock and time zone information provided by the JVM on which it runs. That is, the times should be interpreted as the times known to the management console machine.


Email Recipients


Finally we specify the email recipients that should receive notifications. You may add as many recipients as you wish. Any changes made to this list will take effect immediately; you need not restart the monitor.



Emails are sent individually to each recipient defined for a monitor using the from address configured previously. The subject and content of the email will contain details of the event. The mail format is plain text and is suitable for email, pager, and SMS mailboxes.



If an email could not be sent for any reason the failure will be recorded in the monitor's history, as discussed below.

Monitor History


Server Manager maintains a history for each monitor. Each start of a monitor will be listed in the monitor history.



You may view the history of a particular monitor to see all the events that occurred and the emails sent by clicking the appropriate icon in the grid row.



Each event that occurred will be listed along with the same type of information that was contained in the email sent. A grid will contain a listing for each email recipient of the monitor, showing whether the email was sent successfully, was not sent because it was outside the configured notification hours, or failed to send for some reason such as an invalid recipient.


You may delete the monitor history if you no longer wish to view it. You may not delete the history for an actively running monitor.


Cloning Monitors


We have made it easy to clone an existing monitor. Simply select the corresponding icon in the 'Create Duplicate' column in the list of available monitors.



All the settings, selected managed instances, events, notification hours, and email recipients from the selected monitor will be copied to a new monitor definition. This makes setting up monitors for shifts much easier; the events and other setup need not be configured multiple times.


Summary


Hopefully you see that setting up and using monitors in Server Manager is much easier than previous solutions and the added events make administering your E1 servers much easier. Dig in, play with monitors, and enjoy.


UPDATE: I think the issue with missing images has been resolved.

10 Scripts Every DBA Should Have

Ayyappa Yelburgi - Tue, 2008-01-08 06:06
I. Display the Current Archivelog Status:
ARCHIVE LOG LIST;

II. Creating a Control File Trace File:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

III. Tablespace Free Extents and Free Space:
column Tablespace_Name format A20
column Pct_Free format 999.99
select Tablespace_Name, Max_Blocks, Count_Blocks, Sum_Free_Blocks,
       100*Sum_Free_Blocks/Sum_Alloc_Blocks AS Pct_Free
from (select Tablespace_Name, SUM(Blocks)

10 Scripts Every DBA Should Have

Ayyu's Blog - Tue, 2008-01-08 06:06
Categories: DBA Blogs

Hail the Champions

Fadi Hasweh - Tue, 2008-01-08 01:55
I participated recently in a post from our famous blogger, OCP Advisor; it is a nice blog that helps the Apps community with info about Apps certification.
I am trying to be active again.
You can check his post here.

Good luck with your certification
Fadi

Fix for Rails 2.0 on Oracle with database session store

Raimonds Simanovskis - Mon, 2008-01-07 16:00

As I started to explore Rails 2.0 I tried to migrate one application to Rails 2.0 which is using Oracle as a database. Here are some initial tips for Rails 2.0 on Oracle that I found out.

The Oracle adapter is no longer included in Rails 2.0, so you need to install it separately. It is also not yet available on gems.rubyforge.org, therefore you need to install it with:

sudo gem install activerecord-oracle-adapter --source http://gems.rubyonrails.org

The next issue that you will get is the error message “select_rows is an abstract method”. You can find more information about it in this ticket. As suggested, I fixed this issue with the following Oracle adapter patch that I call from the environment.rb file:

# Monkey-patch the Oracle adapter to provide the missing select_rows method
module ActiveRecord
  module ConnectionAdapters
    class OracleAdapter
      def select_rows(sql, name = nil)
        result = select(sql, name)
        result.map { |v| v.values }
      end
    end
  end
end

And then I faced very strange behaviour: my Rails application was not working with the database session store – no session data was saved. When I changed the session store to cookies then everything worked fine.

When I continued my investigation I found out that for each new session a new row was created in the “sessions” table, but no session data was saved in the “data” column. As the “data” column is a text field, which translates to the CLOB data type in Oracle, it is not changed by the Oracle adapter in INSERT or UPDATE statements but by a special “write_lobs” after_save callback (this is done because Oracle has a limitation that literal constants in SQL statements cannot exceed 4000 characters, and therefore such a hack with an after_save callback is necessary). And then I found that the class CGI::Session::ActiveRecordStore::Session (which is responsible for the database session store) does not have this write_lobs after_save filter. Why so?

As I understand it now, in Rails 2.0 the ActiveRecord class definition sequence has changed: at first the CGI::Session::ActiveRecordStore::Session class is defined, which inherits from ActiveRecord::Base, and only afterwards is the OracleAdapter loaded, which adds the write_lobs callback to ActiveRecord::Base; at that point it does not add the callback to the already defined Session class. As in Rails 1.2 the OracleAdapter was loaded together with ActiveRecord, before the Session class definition, there was no such issue.

So currently I solved this issue with simple patch in environment.rb file:

class CGI::Session::ActiveRecordStore::Session 
  after_save :write_lobs
end

Of course it would be nicer to force that OracleAdapter is loaded before CGI::Session::ActiveRecordStore::Session definition (when ActionPack is loaded). If somebody knows how to do that please write a comment :)

Categories: Development

Where has all my memory gone ?

Christian Bilien - Sun, 2008-01-06 14:09
A while ago, I came across an interesting case of memory starvation on an Oracle DB server running Solaris 8 that was for once not directly related to the SGA or the PGA. The problem showed up from a user perspective as temporary “hangs” that only seemed to happen at a specific time of the […]
Categories: DBA Blogs

Happy New Year 2008

Peter Khos - Fri, 2008-01-04 22:57
I hope that 2007 has been good to you all, both in your professional and personal lives. 2007 has been eventful with lots of stuff happening. We are now almost just 2 years away from the 2010 Olympics (Feb 2010) and construction of the various venues and transportation systems is chugging along at full steam. In my own suburb, Richmond, we have the Canada Line (a light rail system linking the

Unconventional Oracle Installs, part One

Moans Nogood - Wed, 2008-01-02 18:24
You have to watch this:

http://www.youtube.com/watch?v=CHzV4LZnvHc

We'll follow it up with a few other initiatives in order to help the big companies bring down the time spent to install Oracle from, say, 50 hours to one or two.

Perrow and Normal Accidents

Moans Nogood - Wed, 2008-01-02 18:21
While reading the book 'Deep Survival' (most kindly given to me at the UKOUG conference in Birmingham by Sir Graham Wood of Oracle after the fire in my house) I happened on a description on page 107 of a book called 'Normal Accidents' by a fellow named Perrow (get it? per row - a perfect name for database nerds).

Perrow's thesis is that in any tightly coupled system - in which unexpected interactions can happen - accidents WILL happen, and they're NORMAL.

Also, he states that technological steps taken to remedy this will just make matters worse.

Perrow and IT systems
=====================
I have freely translated Perrow's thoughts into the following:

IT systems are tightly coupled. A change - a patch, a new application, or an upgrade - to a layer in the stack can cause accidents to happen, because they generate unexpected interactions between the components of the system.

This is normal and expected behaviour, and any technological gear added to the technology stack in order to minimize this risk will make the system more complex and therefore more prone to new accidents.

For instance, I find that two of the most complexing things you can do to an IT system are clusters and SANs.

These impressive technologies are always added in order to make systems more available and guard against unexpected accidents.

Hence, they will, in and by themselves, guarantee other normal accidents to happen to the system.

Complexing and de-complexing IT systems
=======================================
So you could say that it's a question of complexing or de-complexing IT systems.

I have found four situations that can complex IT systems (I'm being a bit ironic here):

1. To cover yourself (politics).
2. Exploration.
3. SMS decisions.
4. Architects.

1. Reason One: To cover yourself (politics)
===========================================
You might want to complex systems in order to satisfy various parties that you depend on or who insist on buying certain things they've heard about at vendor gatherings:

"Yes, we've done everything humanly possible, including buying state-of-the-art technology from leading vendors and asking independent experts to verify our setup".

This is known as CYB (Cover Your Behind).

2. Reason Two: Exploration
==========================
Ah, the urge to explore unknown territories and boldly go where no man has ever gone before...

Because you can.

The heightened awareness thus enabled might be A Good Thing for your system and your customers.

It could also create situations that you and others find way too interesting.

Reason Two is often done by men, because we love to do stupid or dangerous things.

3. Reason Three: SMS decisions
==============================
A third reason for complexing IT systems could be pure ignorance in what is commonly referred to as Suit Meets Suit (SMS) decisions - where a person of power from the vendor side with no technical insight talks to a person of power from the customer side with no technical insight.

These SMS situations tend to cause considerable increases in the GNP of any country involved (just like road accidents and fires) because of all the - mostly unnecessary - work that follows.

The costs to humans, systems and users can be enormous. Economists tend to love it.

4. Reason Four: Architects
==========================
A fourth reason for complexing IT systems can be architects. Don't get me wrong: There are many good IT architects. The very best ones, though, tend not to call themselves architects.

One of my dear friends once stated that an architect is often a developer that can't be used as a developer any more. Very funny.

However, what I have witnessed myself is that the combination of getting further away from the technical reality and getting closer to the management levels (the C class, as it were) tend to make some architects less good at making architectural decisions after a while.

That's where the vendors get their chance of selling the latest and greatest and thus complexing new and upcoming systems.


Summary: The end of reasoning
=============================
Four reasons must be enough. There are probably more, but I cannot think of them right now.

Anyway, imagine what savings in costs and worries you can obtain by moving just a notch down that steep slope of complexity in your system.

You might be able to de-complex your system to a degree where it becomes
absolutely rock solid and enormously available.

That should be our goal in the years to come: To help our customers de-complex their systems, while of course trying everything we can to support those who chose to complex theirs.

Two new angles on tuning/optimising Oracle

Moans Nogood - Wed, 2008-01-02 18:00
Now and then some new angles and thoughts emerge in a field where a lot of people think there's not much new to be said.

Two examples:

1. James Morle told me a while ago, that he thinks all performance problems relate to skew, to latency, or to both. It's brilliant, I think. I hope James will one day write about it. He's a damn fine writer when he gets down to it.

2. This one from Dan Fink. Impressive piece, I think. Enjoy it.

http://optimaldba.blogspot.com/2007/12/how-useful-is-wait-interface.html

When I emailed Dan and told him I admired his angle on this, he responded:

"I think it is a matter of keeping an open mind and knowing that you have friends and colleagues who are open to new ideas. Support is absolutely critical, even when you don't necessarily agree with what is being said. That keeps the flow of information open.

I shall never forget walking into a conference room. In big letters on one of the whiteboards were the words "THINK OUTSIDE THE BOX". For emphasis...someone had drawn a nice large box around them! "

I like that one :-)).

Using NFS partitions on AIX

Mark Vakoc - Wed, 2008-01-02 13:29
Unless you are running an E1 enterprise server on an NFS partition on the AIX platform, you can probably skip this posting.

Still here? Ok. This post outlines a potential problem with changing the tools release of an enterprise server when it is running on an NFS partition on AIX. It pertains to that combination only.

The AIX operating system has a feature that keeps shared libraries in memory even when the program that loads them terminates. Subsequent loads of that or any other program using the same library would be faster because the library is already in memory.

This behavior can cause some problems when the shared library is located on an NFS partition. Consider the case when Server Manager is performing a tools change for an enterprise server. The management agent will 1) stop the enterprise server, 2) delete the existing tools release, 3) extract and replace it with the new tools release.

So where's the problem? After stopping the enterprise server the E1 shared libraries may still be cached by AIX even though no active processes are using them, and AIX maintains an open file handle to each cached library. On UNIX-based platforms you are able to delete a file that is open by another process; although it will immediately disappear from the file system directory listings, it will not actually be removed until the last handle to that file is closed. This behavior is implemented within the filesystem.

The remote nature of the NFS file system requires a special implementation. When an open file is deleted on an NFS partition it will appear as a .nfs#### file in the same directory, where #### is a randomly assigned number. This file cannot be removed directly; it will disappear as soon as the last process holding the originally deleted file closes its handle.

So what does this have to do with E1 and Server Manager? The second step of performing a tools release change involves deleting the existing tools release. The caching of the shared libraries, and thus the presence of the .nfs#### files in the $EVRHOME/system/lib directories, will prevent the removal of the system directory. This will cause the tools release change to fail, and the previous tools release will be restored. Even root cannot delete these .nfs files directly.

What can be done is to stop the enterprise server using Server Manager then sign on as root and run the command 'slibclean'. This will instruct AIX to unload/uncache any shared libraries that are no longer being used by an active process. You may then change the tools release using Server Manager without any issue.

Solaris Express on a Toshiba Satellite Pro A200

Hampus Linden - Tue, 2008-01-01 05:25
I bought myself one of those cheap laptops the other month. I needed a small machine for testing and since laptops are just as cheap (if not cheaper) as desktops these days I got a laptop.
The machine came with Vista but I wanted to triple boot Vista, Ubuntu and Solaris Express Community Edition.

  • Use diskmgmt.msc in Vista to shrink the partition the machine came with, Windows can do this natively so there is no need to use Partition Magic or similar tools. Create at least three new partitions. One for Solaris, one for Linux and one for Linux swap.
  • Secondly install Solaris: boot off the CD and go through the basic installer. The widescreen resolution worked out of the box (as usual). Do a full install; spending time "fixing" a smaller installer is just annoying. Solaris will install its grub boot loader on both the MBR and the superblock (on the Solaris partition). It probably makes sense to leave a large slice unused so it can be used with ZFS after the installation is done.
  • Install Ubuntu. Nothing simpler than that.
  • Edit Ubuntu's grub menu config (/boot/grub/menu.lst) to include Solaris. Simply point it to the Solaris partition (hd0,2 for me). Add these lines at the end of the file.
    title Solaris
    root (hd0,2)
    chainloader
Done!

I had to install the gani NIC driver in Solaris to get the Ethernet card working and the Open Sound System sound card driver to get sound working.
The Atheros WiFi card is supposed to be supported but I couldn't get it to work, even after adding the pci device alias to the driver. I'll post an update if I get it to work.

Google - just another big, dumb, brutal organisation?

Moans Nogood - Mon, 2007-12-31 04:42
I found this article in The Economist interesting:

http://economist.com/business/displaystory.cfm?story_id=10328123

There's some truth there, I think. Google is buying stuff (like blogger), is making pirate copies (sorry: clones) of other companies' software and in general trying to be as dominant and brutal as Microsoft, IBM, Oracle and the others. Yawn.

What the Hell happened to "Don't be evil"? Why did Google sell out to the Chinese horror regime?

They're just after the money and the happiness of shareholders. Boring stuff.

Mogens

R12 Global Deployment functionality

RameshKumar Shanmugam - Sat, 2007-12-29 17:04
It is a common operational process in any industry to move people around: transferring employees on a temporary basis for a particular project or assignment, or transferring them permanently to a different country.

This functionality was available in 11i, but HR professionals had to perform it manually: if cross business group access is enabled, they can update the organization and location in the assignment form; alternatively, they can terminate the employee and rehire them in the new business group.

In R12 this has become standard functionality in the Manager Self Service responsibility, under the 'Transfer' function.

Manager Self Service > Transfer



Select the employee you want to transfer and follow the wizard, which takes you through the complete process: salary change, new direct reports, location change, time card approver, work schedule, etc. Finally you will reach a review summary page where you can review the transfer and submit it for approval.

Note: if you are an Oracle Payroll customer, you need to take the necessary payroll taxation actions when changing the work location.
Try this out!
Categories: APPS Blogs

Oracle 11g NF Database Replay

Virag Sharma - Thu, 2007-12-27 22:10

Oracle 11g New Feature Database Replay

“Simulating production load is not possible.” You might have heard these words.

On one project, management has wanted for the last two years to migrate from a UNIX system to a Linux (RAC) system, but they are still testing because they are not sure whether the Linux boxes will be able to handle the load. They have put a lot of effort and time into load testing, functional testing, etc., but still have not gained confidence.

Using this 11g feature, they will be able to migrate to Linux with full confidence, knowing how their system will behave after the migration/upgrade.

As per the datasheet on OTN:

Database Replay workload capture of external clients is performed at the database server level. Therefore, Database Replay can be used to assess the impact of any system change below the database tier, such as:

  • Database upgrades, patches, parameter, schema changes, etc.
  • Configuration changes such as conversion from a single instance to RAC etc.
  • Storage, network, interconnect changes
  • Operating system, hardware migrations, patches, upgrades, parameter changes

DB Replay does this by capturing a workload on the production system with negligible performance overhead (my observation is 2-5% more CPU usage) and replaying it on a test system with the exact timing, concurrency, and transaction characteristics of the original workload. This makes possible a complete assessment of the impact of the change, including undesired results such as new contention points or performance regressions. Extensive analysis and reporting (AWR, ADDM and DB Replay reports) is provided to help identify potential problems, such as new errors encountered and performance divergences. The ability to accurately capture the production workload results in significant cost and time savings, since it completely eliminates the need to develop simulation workloads or scripts. As a result, realistic testing of even complex applications, which previously took several months using load simulation tools/scripts, can now be accomplished in at most a few days with Database Replay, and with minimal effort. Thus, using Database Replay, businesses can incur much lower costs and yet have a high degree of confidence in the overall success of the system change, significantly reducing production deployment risk.

Steps for Database Replay

  1. Workload Capture

Database calls are tracked and stored in binary files, called capture files, on the file system. These files contain all the information about each call needed for replay, such as the SQL text, bind values, wall clock time, SCN, etc.

1) Back up the production database #

2) Add/remove filters (optional)
By default, all user sessions are recorded during workload capture. You can use workload filters to specify which user sessions to include in or exclude from the workload. Inclusion filters enable you to specify user sessions that will be captured in the workload. This is useful if you want to capture only a subset of the database workload.
For example, suppose we don't want to capture load for the SCOTT user:

BEGIN
  DBMS_WORKLOAD_CAPTURE.ADD_FILTER (
    fname      => 'user_scott',
    fattribute => 'USER',
    fvalue     => 'SCOTT');
END;
/

Here the filter name is "user_scott" (a user-defined name).
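
If a filter is no longer needed, it can be removed again by name. A minimal sketch, reusing the user-defined filter name from the example above:

BEGIN
  -- drop the previously added inclusion/exclusion filter
  DBMS_WORKLOAD_CAPTURE.DELETE_FILTER (fname => 'user_scott');
END;
/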

3) Create a directory and make sure it has enough space

CREATE OR REPLACE DIRECTORY db_replay_dir
AS '/u04/oraout/test/db-replay-capture';

Remember, in the case of Oracle RAC, the directory must be on a shared disk; otherwise you will get the following error:

SQL> l
  1  BEGIN
  2    DBMS_WORKLOAD_CAPTURE.start_capture (name => 'capture_testing',
  3                                         dir  => 'DB_REPLAY_DIR');
  4* END;

SQL> /
BEGIN
*
ERROR at line 1:
ORA-15505: cannot start workload capture because instance 2 encountered errors
while accessing directory "/u04/oraout/test/db-replay-capture"
ORA-06512: at "SYS.DBMS_WORKLOAD_CAPTURE", line 799
ORA-06512: at line 2



4) Capture workload

BEGIN
  DBMS_WORKLOAD_CAPTURE.start_capture (
    name     => 'capture_testing',
    dir      => 'DB_REPLAY_DIR',
    duration => NULL);
END;
/

duration => NULL means it will capture load until we stop it manually with the SQL command shown below. duration is an optional input specifying the capture duration in seconds; the default is NULL.
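
Alternatively, a fixed capture window can be given so the capture stops on its own. A sketch (the name "capture_10min" is just an illustrative label):

BEGIN
  DBMS_WORKLOAD_CAPTURE.start_capture (
    name     => 'capture_10min',
    dir      => 'DB_REPLAY_DIR',
    duration => 600);  -- capture stops automatically after 600 seconds
END;
/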

5) Finish capture

BEGIN
  DBMS_WORKLOAD_CAPTURE.finish_capture;
END;
/

# Take a backup of production before the load capture, so that we can restore the database to the test environment and run the replay from the same SCN of the database, minimizing data divergence.
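
One way to take that backup (a sketch, assuming RMAN with a configured default backup destination; adjust for your environment):

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

This backup is what gets restored on the test system in the replay phase below.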

Note, as per the Oracle datasheet:

"The workload that has been captured on Oracle Database release 10.2.0.4 and higher can also be replayed on Oracle Database 11g release." So I think it simply means the new 10.2.0.4 patch set will support the capture process. Does that mean the current patch set (10.2.0.3) does not support load capture?

  2. Workload Processing

Once the workload has been captured, the information in the capture files has to be processed, preferably on the test system, because this is a very resource-intensive job. This processing transforms the captured data and creates all the metadata needed for replaying the workload.

exec DBMS_WORKLOAD_REPLAY.process_capture('DB_REPLAY_DIR');

  3. Workload Replay

1) Restore the database backup taken in step one to the test system and start the database

2) Initialize

BEGIN
  DBMS_WORKLOAD_REPLAY.initialize_replay (
    replay_name => 'TEST_REPLAY',
    replay_dir  => 'DB_REPLAY_DIR');
END;
/

3) Prepare

exec DBMS_WORKLOAD_REPLAY.prepare_replay(synchronization => TRUE)

4) Start the replay clients
First, run wrc in calibrate mode to estimate how many client processes are needed:

$ wrc mode=calibrate replaydir=/u03/oradata/test/db-replay-capture

Workload Replay Client: Release 11.1.0.6.0 - Production on Wed Dec 26 00:31:41 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Report for Workload in: /u03/oradata/test/db-replay-capture
-----------------------

Recommendation:
Consider using at least 1 clients divided among 1 CPU(s).

Workload Characteristics:
- max concurrency: 1 sessions
- total number of sessions: 7

Assumptions:
- 1 client process per 50 concurrent sessions
- 4 client process per CPU
- think time scale = 100
- connect time scale = 100
- synchronization = TRUE




Then start the client(s) in replay mode:

$ wrc system/pass mode=replay replaydir=/u03/oradata/test/db-replay-capture

Workload Replay Client: Release 11.1.0.6.0 - Production on Wed Dec 26 00:31:52 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.

Wait for the replay to start (00:31:52)

5) Start Replay

BEGIN
DBMS_WORKLOAD_REPLAY.start_replay;
END;
/



$ wrc system/pass mode=replay replaydir=/u03/oradata/test/db-replay-capture

Workload Replay Client: Release 11.1.0.6.0 - Production on Wed Dec 26 00:31:52 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.

Wait for the replay to start (00:31:52)
Replay started (00:33:32)
Replay finished (00:42:52)



  4. Analysis and Reporting

Generate the AWR, ADDM and DB Replay reports and compare them with the data gathered on production for the same time period in which the load was captured. For the Database Replay report, run the following commands:

SQL> COLUMN name FORMAT A20
SQL> SELECT id, name FROM dba_workload_replays;

ID NAME
---------- --------------------
1 TEST_REPLAY

DECLARE
  v_report CLOB;
BEGIN
  v_report := DBMS_WORKLOAD_REPLAY.report(
    replay_id => 1,
    format    => DBMS_WORKLOAD_CAPTURE.TYPE_HTML);
  dbms_output.put_line(v_report);
END;
/
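
A matching report can also be generated for the capture side. A sketch, assuming the capture id is 1 (look it up in DBA_WORKLOAD_CAPTURES):

SET SERVEROUTPUT ON
DECLARE
  v_report CLOB;
BEGIN
  -- text-format summary of the captured workload
  v_report := DBMS_WORKLOAD_CAPTURE.report(
    capture_id => 1,
    format     => DBMS_WORKLOAD_CAPTURE.TYPE_TEXT);
  dbms_output.put_line(v_report);
END;
/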


For sample report [ Click Here]



Reference
Chapter 22 Database Replay
Categories: DBA Blogs
