Feed aggregator

Cloud to Ground Mashup Webinar

Jim Marion - Mon, 2016-10-24 23:45

At 11:00 AM Pacific on Tuesday, October 25th (tomorrow), I have the privilege of talking about Cloud and on-premise (ground) integration. Whether cloud to cloud, cloud to ground, or ground to ground, integration is probably one of the most difficult aspects of any implementation. Integration comes in two flavors:

  • Back-end
  • Front-end

Back-end integration is the most common. It involves moving data between two systems, either for processing or to support a common user experience.

Front-end integration is about combining the user experience of two separate applications to create a common user experience. I often find that I can eliminate some of the back-end integrations if I can appropriately mashup front-end applications. In this webinar you will learn enterprise mashup strategies that allow you to present a seamless user experience to your users across cloud and ground applications. No modifications. Just tailoring and configuration.

Oracle Scheduler Integration Whitepaper available

Anthony Shorten - Mon, 2016-10-24 18:23

As part of Oracle Utilities Application Framework V4. and above, a new API has been released that allows customers and partners to schedule and execute Oracle Utilities jobs using the DBMS_SCHEDULER package (Oracle Scheduler), which is part of the Oracle Database (all editions). This API allows product jobs to be controlled and monitored within the Oracle Scheduler so that they can be managed individually or as part of a schedule and/or job chain.

Note: It is highly recommended that the Oracle Scheduler objects be housed in an Oracle Database 12c database for maximum efficiency. 

This has a few advantages:

  • Low Cost - The Oracle Scheduler is part of the Oracle Database license (all editions) so there is no additional license cost for existing instances.
  • Simple but powerful - The Oracle Scheduler has simple concepts, which make it easy to implement, but do not be fooled by its simplicity. It has optional advanced facilities such as resource profiling and load balancing for enterprise-wide scheduling and resource management.
  • Local or Enterprise - There are many ways to implement Oracle Scheduler to allow it to just manage product jobs or become an enterprise wide scheduler. It supports remote job execution using the Oracle Scheduler Agent which can be enabled as part of the Oracle Client installation. One of the prerequisites of the Oracle Utilities product installation is the installation of the Oracle Client so this just adds the agent to the install. Once the agent is installed it is registered as a target with the Oracle Scheduler to execute jobs on that remote resource.
  • Mix and Match - The Oracle Scheduler can execute a wide range of job types so that you can mix non-product jobs with product jobs in schedules and/or chains.
  • Scheduling Engine is very flexible - The calendaring aspect of the scheduling engine is very flexible, with overlaps supported as well as exclusions (for example, to prevent jobs from running on public holidays).
  • Multiple Management Interfaces - The Oracle Utilities products do not include a management interface for the Oracle Scheduler as there are numerous ways the Oracle Scheduler objects can be maintained including command line, Oracle SQL Developer and Oracle Enterprise Manager (base install no pack needed).
  • Email Notification - Individual jobs can send status via email based upon specific conditions. The format of the email is now part of the job definition, which means it can be customized far more easily.
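To make the above concrete, here is a minimal sketch of registering a job directly with DBMS_SCHEDULER. The job, procedure, and schedule names are hypothetical stand-ins; the product-specific API is described in the whitepaper itself.

```sql
-- Hypothetical example: schedule a nightly batch run via DBMS_SCHEDULER.
-- FW_NIGHTLY_BILLING and APP.SUBMIT_BATCH are made-up names.
begin
  dbms_scheduler.create_job(
    job_name        => 'FW_NIGHTLY_BILLING',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'APP.SUBMIT_BATCH',
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',  -- calendaring syntax
    enabled         => TRUE
  );
end;
/
```

The repeat_interval calendaring syntax also supports exclusion schedules, which is how cases like the public-holiday exclusion mentioned above are handled.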

Before using the Oracle Scheduler, it is highly recommended that you read the Scheduler documentation provided with the database.

We have published a new whitepaper which outlines the API as well as some general advice on how to implement the Oracle Scheduler with Oracle Utilities products. It is available from My Oracle Support at Batch Scheduler Integration for Oracle Utilities Application Framework (Doc id: 2196486.1).

ASH script to show query run times

Bobby Durrett's DBA Blog - Mon, 2016-10-24 17:02

I ran into a situation last week where a developer complained that a query sometimes ran for 3 or more seconds but normally runs in much less than 1 second. I had just been to a local AZORA user group meeting where Tim Gorman talked about using ASH to diagnose issues, so Tim's talk motivated me to find some clever way to use ASH.

I had three pairs of start and stop dates and times to work with, each about 3 to 4 seconds apart. I started looking at DBA_HIST_ACTIVE_SESS_HISTORY for the time period, or even a larger 11-second time period that bracketed the interval, but I did not get any rows back for the first two intervals and only one row for the third. I knew that the V$ version of ASH samples every 1 second, so it might catch these 3-second queries, but the queries in question had run the day before. Something Tim said in the user group meeting made me think about using the V$ view anyway: on inactive development databases the in-memory V$ ASH data can hang around for a few days. Sure enough, I was able to find some information in one of the given time periods.

But then I had to find the one slow execution of the query, because there were multiple executions at the same time. I found that grouping by SQL_EXEC_ID would let me see each execution of the query by itself. So I developed this query to show how long each execution ran:

select
  SQL_EXEC_ID,
  to_char(SQL_EXEC_START,'YYYY-MM-DD HH24:MI:SS') sql_start,
  to_char(min(sample_time),'YYYY-MM-DD HH24:MI:SS') first_sample,
  to_char(max(sample_time),'YYYY-MM-DD HH24:MI:SS') last_sample,
  max(sample_time)-min(sample_time) elapsed_seconds
from v$active_session_history
where sample_time between to_date('20-OCT-2016 17:00:00','DD-MON-YYYY HH24:MI:SS')
  and to_date('20-OCT-2016 17:30:00','DD-MON-YYYY HH24:MI:SS')
  and sql_id = '...'
group by SQL_EXEC_ID, SQL_EXEC_START
order by SQL_EXEC_START;

Here are a few rows from the output from around the time of the first interval that I was looking at:

SQL_EXEC_ID SQL_START           FIRST_SAMPLE        LAST_SAMPLE         ELAPSED_SECONDS
----------- ------------------- ------------------- ------------------- -----------------------
   16785284 2016-10-20 17:05:24 2016-10-20 17:05:25 2016-10-20 17:05:25 +000000000 00:00:00.000
   16785285 2016-10-20 17:05:25 2016-10-20 17:05:25 2016-10-20 17:05:25 +000000000 00:00:00.000
   16785380 2016-10-20 17:05:31 2016-10-20 17:05:31 2016-10-20 17:05:34 +000000000 00:00:03.000
   16785692 2016-10-20 17:05:51 2016-10-20 17:05:52 2016-10-20 17:05:53 +000000000 00:00:01.000
   16785772 2016-10-20 17:05:54 2016-10-20 17:05:55 2016-10-20 17:05:55 +000000000 00:00:00.000
   16785852 2016-10-20 17:05:59 2016-10-20 17:06:01 2016-10-20 17:06:01 +000000000 00:00:00.000
   16785940 2016-10-20 17:06:07 2016-10-20 17:06:08 2016-10-20 17:06:08 +000000000 00:00:00.000

The third row down lined up well with the interval in question. So, I was able to use ASH to show that the query ran for 3 seconds within the database. Also, each line was a wait on db file sequential read. This led me to look at the execution plan and to check the index and partitioning to look for ways to improve the query’s performance.
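As a hedged follow-up (not part of the original post), once the slow execution is isolated, the individual ASH samples for it, including the wait event, can be pulled from the same view. Here, 16785380 is the SQL_EXEC_ID of the 3-second execution in the output above; the sql_id is left elided as before:

```sql
-- list the one-second ASH samples for the slow execution
select to_char(sample_time,'YYYY-MM-DD HH24:MI:SS') sample_time,
       session_state,
       event
from v$active_session_history
where sql_exec_id = 16785380
  and sql_id = '...'   -- same sql_id as the earlier query
order by sample_time;
```

With three samples all waiting on db file sequential read, the single-block I/O pattern points at index access, which is what motivated the execution-plan review.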



Categories: DBA Blogs

temp table and third party

Tom Kyte - Mon, 2016-10-24 11:46
Hi, I was using Sybase and connecting to it with a third-party tool (SAP BusinessObjects Desktop Intelligence), but now we have decided to use Oracle instead of Sybase. I found a problem: most scripts which I ran in Sybase have temp tables like that --------...
Categories: DBA Blogs

Dynamically Logging parameters of procedure or function

Tom Kyte - Mon, 2016-10-24 11:46
Hi Tom. Is there any way to log the parameters of functions and procedures inside package or as standalone objects without having to write every parameter name and its value. I think of something generic where I can put into procedure/function ...
Categories: DBA Blogs

Generating rownumbers without rownum,row_number(),sum() over and rowid in oracle

Tom Kyte - Mon, 2016-10-24 11:46
Recently one of my friends faced a question in an interview: generate row numbers without using rownum or the Row_number() function. He had suggested doing a running sum of 1's, but that was not the right solution as the table contained duplicat...
Categories: DBA Blogs

Automatic Selection of Quality Limited Data (row) for Export

Tom Kyte - Mon, 2016-10-24 11:46
I would like to use an existing functionality or function or by what other means for automatic selection of quality data for all tables in the tablespace/schema that follows through the foreign keys relation until the end of the chain (the most compl...
Categories: DBA Blogs

extracting particular pattern from data using sql

Tom Kyte - Mon, 2016-10-24 11:46
Dear Tom, Need to extract particular pattern as example. select ('|100|BXX656|:20:200100O0012|:32A:010607USD6025,10|:50:XYZ LABORATORIES PVT LTD|B/2 TESTCHAMBERS 22 B DESAI RD|CBD-26| -||||||||||||||:59:W.S.A. TEXT SZCZESIUL|USA|||:71A...
Categories: DBA Blogs

How to Maximize Performance for Date Logic Queries when Base Tables Only Contain Year and Month Columns

Tom Kyte - Mon, 2016-10-24 11:46
Hi Tom, <b>It's my first question!</b> Also, I love this forum--as a newbie to Oracle, Ask Tom is invaluable :) In our organization, we have many base tables that are built with YEAR and MONTH number type columns that serve as part of a composi...
Categories: DBA Blogs

Oracle Job and Email notification

Tom Kyte - Mon, 2016-10-24 11:46
Hi, I have a Oracle Chain, which calls multiple Jobs and internally Job calls SP to update a table Flag. Oracle Chain ------------- BEGIN DBMS_SCHEDULER.CREATE_CHAIN ( CHAIN_NAME => 'CCR_CHAIN', RULE_SET_NAME => ...
Categories: DBA Blogs

Archive log list Generating high in my Production server

Tom Kyte - Mon, 2016-10-24 11:46
Hi Tom, my Production server is generating archive log files of more than 200GB per day. Usually we take a manual backup (OS-level file move) of these archive logs to another drive for emergency purposes, and delete them the next day, and monthly onc...
Categories: DBA Blogs

Enabling the Mobile Workforce with Cloud Content and Experience - Part 2

WebCenter Team - Mon, 2016-10-24 08:08

Author: Mark Paterson, Director, Oracle Documents Cloud Service Product Management

Hope you are finding this series of quick tips on using key features of the Oracle Documents Cloud Service mobile app to drive effective mobile collaboration helpful. In my first post, we covered the steps for mobile editing of files. In this post, we are covering the latest updates in Oracle Documents Cloud Service (Oracle Documents) that make it super simple to upload all the great assets you create or have on your mobile devices directly into Oracle Documents.

Need a way to get files, photos, or videos stored within Oracle Documents while on the move? Oracle Documents Cloud Service’s mobile app has been enhanced to further simplify file uploads.

  1. If you haven’t already, start off by installing Oracle Documents mobile app on your iPhone, iPad, or Android device. Log on to Oracle Documents Cloud Service and you’re ready to go - use it anywhere, anytime and from any of your mobile devices. It’s designed to be intuitive, familiar to your mobile habits - swipe to navigate and tap to open folders and files. The app guides you through what to do.

  2. Mobile devices make it super easy to take pictures and record videos so it makes sense to upload them to Oracle Documents directly from your mobile device as well. You can open an app like the Photos app and initiate the upload right from there.

  3. For example, you can share an entire moment with Oracle Documents. Find the moment you need to upload and simply tap on “Share”, and then select “Share this moment”. You can then select “Oracle Documents” from the list of apps. You can pick the account and folder where you wish to store the photos, even update filenames or provide file descriptions, then tap “Upload” and they all get uploaded in one go.

  4. You can initiate a similar flow from within Oracle Documents. Find the folder where you want to store your assets and tap on “+” and choose Upload “Media” or “Files”. From here, you can gather a set of files to upload:
    • Tap on the camera icon to take a picture or to select photos to upload
    • Tap on the microphone icon to record a voice note
    • Tap on the file upload icon to select files from other 3rd party applications

  5. Once you have your list of files you can update file names or provide file descriptions as needed and when you are ready tap on “Add” and they all get uploaded into your folder.

  6. If you are an Android user, the same upload flows are available. You can, for example, initiate photo uploads right from within Google Photos.


Check out our latest "How To" video to see it in action:

You can always find the Oracle Documents mobile apps in the App stores.


Now that you know how easy it is to store them in Oracle Documents, keep on taking those photos and recording those videos. In my next post I am going to move away from more traditional content subjects and talk about sites, and a new feature we’ve added to make it easier for you as a mobile user to access your sites right from our apps.

Don’t have Oracle Documents Cloud Service yet? You can start a free trial immediately. Visit cloud.oracle.com/documents to get started on your free trial today.


Enhancement Request for SQL Developer for users of Logger

Jeff Kemp - Mon, 2016-10-24 07:57

Juergen Schuster, who has been enthusiastically trying OraOpenSource Logger, raised an idea for the debug/instrumentation library requesting the addition of a standard synonym “l” for the package. The motive behind this request was to allow our PL/SQL code to remain easy to read, in spite of all the calls to logger sprinkled throughout that are needed for effective debugging and instrumentation.
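For context, the requested synonym would have been a one-liner, shortening every call site. A hedged sketch of the idea (the log text and scope value are hypothetical; logger.log is the real Logger entry point):

```sql
-- the requested shorthand for the logger package
create synonym l for logger;

-- calls then shorten from logger.log(...) to:
begin
  l.log('processing started', 'my_pkg.my_proc');
end;
/
```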

In the judgement of some (myself included) the addition of the synonym to the standard package would run the risk of causing clashes on some people’s systems; and ensuring that Logger is installable on all systems “out of the box” should, I think, take precedence.

However, the readability of code is still an issue; so it was with that in mind that I suggested that perhaps an enhancement of our favourite development IDE would go a long way to improving the situation.

Therefore, I have raised the following enhancement request at the SQL Developer Exchange:

Logger: show/hide or dim (highlight) debug/instrumentation code

“The oracle open source Logger instrumentation library is gaining popularity and it would be great to build some specific support for it into SQL Developer, whether as a plugin or builtin. To enhance code readability, it would be helpful for PL/SQL developers to be able to hide/show, or dim (e.g. grey highlight) any code calling their preferred debug/instrumentation library (e.g. Logger).

“One way I expect this might work is that the Code Editor would be given a configurable list of oracle object identifiers (e.g. “logger”, “logger_logs”); any PL/SQL declarations or lines of code containing references to these objects would be greyed out, or be able to be rolled up (with something like the +/- gutter buttons).”

Mockup #1 (alternative syntax highlighting option):


Mockup #2 (identifier slugs in header bar to show/hide, with icons in the gutter showing where lines have been hidden):


“Gold-plated” Option: add an option to the SQL Editor’s right-click context menu – on any identifier, select “Hide all lines with reference to this” and it adds the identifier to the list of things that are hidden!

If you like the idea (or at least agree with the motive behind it) please vote for it.

Filed under: Tools Tagged: enhancement, Logger, oracle-sql-developer

Anniversary OICA

Jonathan Lewis - Mon, 2016-10-24 07:00

Happy anniversary to me!

On this day 10 years ago I published the first article in my blog. It was about the parameter optimizer_index_cost_adj (hence OICA), a parameter that has been a  source of many performance problems and baffled DBAs over the years and, if you read my first blog posting and follow the links, a parameter that should almost certainly be left untouched.

It seems appropriate to mention it today because I recently found a blog posting (dated 3rd May 2013) on the official Oracle Blogs where the director for Primavera advises setting this parameter to 1 (and the optimizer_index_caching parameter to 90) for the Primavera P6 OLTP (PMDB) database. The recommendation is followed by a fairly typical “don’t blame me” warning, viz: “As with any changes that affect query optimization, it is paramount to TEST, TEST and TEST again. At least these settings are easily adjusted or change back to the original value”.

Here’s a thought, though: setting the optimizer_index_cost_adj to the extreme value 1 is a catastrophic change so don’t suggest it unless you are extremely confident that it’s almost certain to be the right thing to do. If you’re confident that it’s a good idea to reduce the parameter to a much smaller value than the default then suggest a range of values that varies from “ideal if it works, but high risk” to “low risk and mostly helpful”. Maybe a suggestion like: “Primavera P6 OLTP (PMDB) tends to work best with this parameter set to a value in the range of 1 to 15” would be a more appropriate comment from someone in a position of authority.

Here’s another thought: if you work for Oracle you could always contact the optimizer group to present them with your argument for the strategy and see what they think about it. Then you can include their opinion when you offer your suggestion.

For what it’s worth, here’s my opinion: as a general rule you shouldn’t be working around performance issues by fiddling with the optimizer_index_cost_adj; as a specific directive, do not set it to 1. If you want to encourage Oracle to be enthusiastic about indexes in general then adjust the system statistics (preferably with a degree of truth). If you need to persuade Oracle that particular indexes are highly desirable then you can use dbms_stats.set_index_stats() to adjust the clustering_factor (and avg_data_blocks_per_key) of those indexes. If you are running or later then you can use dbms_stats.set_table_prefs() to set the “table_cached_blocks” parameter for tables where you think Oracle should be particularly keen on using indexes but isn’t; and if your queries are suffering from bad cardinality estimates because of a pattern of multi-column filter predicates, create some column group (extended) statistics.
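As a hedged illustration of the set_table_prefs() alternative (the schema and table names are hypothetical; the preference name and call signature are real):

```sql
-- Hypothetical example: tell the optimizer that index access on
-- APP.ORDERS tends to revisit cached table blocks, instead of
-- touching optimizer_index_cost_adj globally.
begin
  dbms_stats.set_table_prefs(
    ownname => 'APP',
    tabname => 'ORDERS',
    pname   => 'TABLE_CACHED_BLOCKS',
    pvalue  => '16'
  );
  -- the preference takes effect at the next statistics gathering
  dbms_stats.gather_table_stats('APP', 'ORDERS');
end;
/
```

The point of this approach is that it is targeted: only the named table's clustering_factor calculation is affected, rather than the cost of every index for every query.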

Why am I so firmly set against setting the optimizer_index_cost_adj to 1 ? Because it doesn’t tell Oracle to “use indexes instead of doing tablescans”, it tells Oracle that every index is just about as good as every other index for almost any query. Here’s a pdf file of an article (formerly published on DBAZine and then on my old website) I wrote over twelve years ago explaining the issue. Various links in the article no longer work, and the data pattern was generated to display the problem in 8i and 9i and you would need to modify the data to display the same effect in newer versions of Oracle – but the principle remains the same.

If you would like to see a slightly newer example of how the parameter causes problems, here’s a thread dated April 2012 from the OTN database forum where a SYS-recursive query caused a performance problem because the parameter was set to 1.


SQL Server 2016: Distributed availability groups and Cross Cluster migration

Yann Neuhaus - Mon, 2016-10-24 06:43

How do you migrate an environment that includes availability groups from one Windows Failover Cluster to another? This scenario is definitely uncommon and requires good preparation. How to achieve this task depends mainly on your context. Indeed, we may use plenty of scenarios according to the architecture in place as well as the customer constraints, in terms of maximum downtime allowed for example. Among all possible scenarios, there is a process called the “cross-cluster migration scenario” that involves two Windows Failover Clusters side by side. In this blog post, I would like to focus on it, and I will then show improvements in this field with SQL Server 2016.

Just to be clear: for the first scenario my intention is not to detail all the steps required to achieve a cross-cluster migration. You may refer to the whitepaper written by Microsoft here. To be honest, my feeling is that the documentation could be improved because it sometimes lacks specificity, but it has the great merit of existing. I remember navigating through this document, going down and back up to the beginning several times using the bookmarks, in order to be confident in my understanding of the whole process. Anyway, let’s apply this procedure to a real customer case.

In my experience, customers rarely have only one availability group installed in their environment. Most of the time, we encounter customer cases which include several SQL Server instances and replicas on the same cluster node. I will demonstrate it with the following infrastructure (similar to many customer shops):


blog 107 - 0 - distributed ag cross cluster migration use case

We have 3 availability groups with the first two ones hosted on the same SQL14 instance and the last one on the SQL142 instance. The availability group architecture runs on the top of a Windows failover cluster (WSFC) – WIN201216CLUST – that includes two cluster nodes and a file share witness (FSW) not reported in the above picture. So a pretty common scenario at customer shops as I said previously. Without going into details of customer cases, the idea was to migrate all the current physical environment from the first datacenter (subnet to the second datacenter on a virtual environment (subnet As an aside, note that my customer subnets are not exactly the same and he used a different set of IP ranges but anyway it will help to set the scene.

So basically, according to the Microsoft documentation, the migration process is divided into 4 main steps:

  • Preparation of the new WSFC environment (no downtime)
  • Preparation of the migration plan (no downtime).
  • Data migration (no downtime)
  • Resource migration (downtime)

Probably the main advantage of using this procedure is the short outage, which occurs only during the last step (resource migration). This makes the preparation steps more comfortable, because they do not require downtime. Migrating data between two replicas is generally an important part of the migration process in terms of time, and this way we are able to prepare the migration between the two WSFCs smoothly.


Preparation of the new WSFC

A few points in the process caught my attention. The first one concerns the preparation of the new WSFC environment (first step). Basically, we have to prepare the target environment that will host our existing availability groups, and Microsoft warns us about the number of nodes (temporary or not) with regard to the overlapping among availability groups and migration batches. During this preparation step we also have to set the corresponding cluster registry permissions to allow correct switching of the cluster context from the newly installed replicas on the remote WSFC. At first glance I wondered why we have to perform such an operation, but the answer became obvious when I tried to switch the cluster context from my new SQL Server instance and faced the following error message:



Msg 19411, Level 16, State 1, Line 1
The specified Windows Server Failover Clustering (WSFC) cluster,  ‘WIN201216CLUST.dbi-services.test’, is not ready to become the cluster context of AlwaysOn Availability Groups (Windows error code: 5).

The possible reason could be that the specified WSFC cluster is not up or that a security permissions issue was encountered. Fix the cause of the failure, and retry your ALTER SERVER CONFIGURATION SET HADR CLUSTER CONTEXT = ‘remote_wsfc_cluster_name’ command.

It seems that my SQL Server instance was trying, unsuccessfully, to read information from the registry hive of the remote WSFC in order to get a picture of the global configuration. As an aside, until the cluster context is switched, we are not able to add the new replicas to the existing availability group. If we put a procmon trace (from Sysinternals) on the primary cluster node, we may notice that executing the above command from the remote SQL Server instance implies reading the local HKLM\Cluster hive.
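For reference, both the context switch and the subsequent replica addition are short T-SQL statements. This is only a hedged sketch: the replica name NEWSQL1 and its endpoint URL are hypothetical stand-ins, while the cluster name comes from the error message above.

```sql
-- On the new replica: point it at the remote WSFC metadata
ALTER SERVER CONFIGURATION
SET HADR CLUSTER CONTEXT = 'WIN201216CLUST.dbi-services.test';

-- On the current primary: add the new replica to the existing AG
ALTER AVAILABILITY GROUP [DUMMY]
ADD REPLICA ON 'NEWSQL1' WITH (
    ENDPOINT_URL      = 'TCP://newsql1.dbi-services.test:5022',
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    FAILOVER_MODE     = MANUAL
);
```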

Well, after fixing the cluster permission issues by using the PowerShell script provided by Microsoft, we may add the concerned replicas to our existing AG configuration. The operation must be applied on all the replicas from the same WSFC. Following the Microsoft documentation, I then added two replicas in synchronous replication mode and another one in asynchronous mode. A quick look at the concerned DMVs confirms that everything is OK:

SELECT
	g.name as ag_name,
	r.replica_server_name as replica_name,
	rs.role_desc AS [role],
	rs.connected_state_desc as connection_state,
	rs.synchronization_health_desc as sync_state
FROM sys.dm_hadr_availability_replica_states as rs
JOIN sys.availability_groups as g
	on g.group_id = rs.group_id
JOIN sys.availability_replicas as r
	on r.replica_id = rs.replica_id;


blog 107 - 1 - distributed ag cross cluster migration 1


SELECT
	g.name as ag_name,
	r.replica_server_name as replica_name,
	DB_NAME(drs.database_id) AS [db_name],
	drs.database_state_desc as db_state,
	drs.synchronization_health_desc as sync_health,
	drs.synchronization_state_desc as sync_state
FROM sys.dm_hadr_database_replica_states as drs
JOIN sys.availability_groups as g
	on g.group_id = drs.group_id
JOIN sys.availability_replicas as r
	on r.replica_id = drs.replica_id
ORDER BY r.replica_server_name;


blog 107 - 2 - distributed ag cross cluster migration 2

If you are curious like me, you may wonder how SQL Server deals with the two remote replicas. A quick look at the registry doesn’t give us any clue. But what about capturing the registry values that changed after adding the new replicas? Regshot was a good tool to use in my case to track changes between two registry snapshots:

blog 107 - 3 - distributed ag cross cluster migration 3

This is probably not an exhaustive list of added or modified keys but this output provides a lot of useful information to understand what’s happening to the cluster registry hive when adding remote replicas. The concerned resource is identified by the id 9bb8b518-2d1a-4705-a378-86f282d387da which corresponds to my DUMMY availability group. It makes sense to notice some changes at this level.

blog 107 - 4 - distributed ag cross cluster migration 4

I may formulate some assumptions here. Those registry changes are necessary to represent the complete picture of the new configuration (with ID 8B62AACE-6AFC-49B7-9369-590D5E832ED6). If we refer to the SQL Server error log we may identify easily each replica server name by its corresponding hexadecimal value.


Resource migration

However, resource migration is a critical part of the migration process because it introduces downtime. The downtime duration depends mainly on the number of items to migrate. Migrating an availability group includes bringing the availability group offline and dropping the corresponding listener at the source, then recreating the availability group configuration at the target. In other words, the more items you have to migrate, the longer this migration step might take.

We are also concerned by the availability groups’ topology and migration batches. Indeed, according to the Microsoft documentation, we may not switch back the HADR context of the targeted SQL Server instance until we have migrated all the related availability groups, which prevents using the new replicas as functional replicas in the meantime. To understand the importance of migration batches, think about the following scenario: the HADR context to the remote cluster is enabled and you have just finished migrating the first availability group. You then switch the HADR context back to local, but you forgot to migrate the second availability group. At this point, reverting the HADR context to the remote cluster is not possible because the concerned replica is no longer eligible.

Assuming I used a minimal configuration that includes only two target replicas as shown in the first picture (WSFC at the destination), I have at least to group DUMMY and DUMMY2 availability groups in one batch. DUMMY3 availability group may be migrated as a separate batch.

So basically, steps to perform the resources migration are as follows:

  • Stop application connectivity on each availability group
  • Bring offline each availability group (ALTER AVAILABILTY GROUP OFFLINE) and drop the corresponding listener
  • Set the HADR context to local for each cluster node
  • Recreate each availability group and the corresponding listener with the new configuration
  • Validate application connectivity
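In T-SQL terms, the offline, listener, and context steps above boil down to a few statements. This is a hedged sketch: the listener name DUMMY-LSN is a hypothetical stand-in, while the AG name comes from the scenario above.

```sql
-- Step 2: on the old primary, take the AG offline and drop its listener
ALTER AVAILABILITY GROUP [DUMMY] OFFLINE;
ALTER AVAILABILITY GROUP [DUMMY] REMOVE LISTENER 'DUMMY-LSN';

-- Step 3: on each target SQL Server instance, revert the HADR context
ALTER SERVER CONFIGURATION SET HADR CLUSTER CONTEXT = LOCAL;
```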

The migration script provided by Microsoft helps a lot with generating the availability group definitions, but we face some restrictions. Indeed, the script does not retain the exact configuration of the old environment, including, for example, the previous replication mode or the backup policy. This is expected behavior and, according to the Microsoft documentation, it is up to you to reproduce the existing configuration at the destination.

Finally, after performing the last migration step, here is the resulting configuration for my DUMMY availability group:

blog 107 - 5 - distributed ag cross cluster migration 5

blog 107 - 6 - distributed ag cross cluster migration 6


So what about SQL Server 2016 in the context of cross-cluster migration?

Starting with SQL Server 2016, distributed availability groups are probably the way to go. I already introduced this feature in a previous blog and I would like to show you how interesting it is in this context. Well, let’s first go back to the initial context. We basically have to perform the same steps as in the first scenario, but distributed availability groups drastically reduce the complexity of the entire process.

  • Preparation of the new WSFC environment (no downtime) – we no longer need to grant the SQL Server service account permissions on the cluster registry hive, nor to switch the HADR context to the remote cluster
  • Preparation of the new availability groups on the destination WSFC, including the corresponding listeners (no downtime) – we no longer need to take the migration batches into account in the migration plan
  • Data migration between the availability groups (no downtime)
  • Application redirection to the new availability groups (downtime) – we no longer need to switch the HADR context back to the local nodes, nor to recreate the availability group configurations at this point

In short, the migration of availability groups across WSFCs with SQL Server 2016 requires less effort and a shorter downtime.

Here is the new scenario after preparing the new WSFC environment and setting up data migration between the availability groups on both sides:

[Image: blog 107 - 7 - distributed ag cross cluster migration 7]


During the first phase, I prepared a new Windows Failover Cluster, WIN2012162CLUST, which hosts the empty availability groups DUMMY10, DUMMY20 and DUMMY30. Those availability groups act as new containers when implementing the distributed availability groups (respectively TEMPDB_DUMMY_AG, TEMPDB_DUMMY2AG and TEMPDB_DUMMY3AG). You may notice that I configured ASYNC replication mode between the local and remote availability groups, but in our context synchronous replication remains a viable option.
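A distributed availability group like the ones described above can be created along these lines. This is a hedged sketch of the documented syntax: the listener URLs and DNS suffix are examples, not the exact values of this environment.

```sql
-- From the primary replica of the local availability group (DUMMY):
CREATE AVAILABILITY GROUP [TEMPDB_DUMMY_AG]
WITH (DISTRIBUTED)
AVAILABILITY GROUP ON
    N'DUMMY' WITH (
        LISTENER_URL = N'TCP://lst-dummy.dbi-services.test:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SEEDING_MODE = AUTOMATIC),
    N'DUMMY10' WITH (
        LISTENER_URL = N'TCP://lst-dummy10.dbi-services.test:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SEEDING_MODE = AUTOMATIC);

-- Then, from the primary replica of the remote (empty) availability group:
-- ALTER AVAILABILITY GROUP [TEMPDB_DUMMY_AG]
-- JOIN AVAILABILITY GROUP ON ...same two AG definitions as above...;
```

With SEEDING_MODE = AUTOMATIC, the databases are streamed to the remote availability group without any manual backup/restore, which is what makes the data-migration phase downtime-free.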


[Image: blog 107 - 8 - distributed ag cross cluster migration 8]


A quick look at sys.dm_hadr_database_replica_states confirms that all the distributed availability groups are working well (TEMPDB_DUMMY3_AG is not included in the picture below).

[Image: blog 107 - 9 - distributed ag cross cluster migration 9]

At this step, the availability groups (DUMMY10 and DUMMY20) on the remote WSFC cannot be accessed and are used only as standbys waiting to be switched to new primaries.
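The check shown above can be reproduced with a query along these lines, a hedged sketch built on the documented DMVs and catalog views:

```sql
-- Synchronization state of every database replica that belongs to a
-- distributed availability group
SELECT ag.name AS group_name,
       ag.is_distributed,
       ar.replica_server_name,
       drs.database_id,
       drs.synchronization_state_desc,
       drs.last_hardened_lsn
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_groups  AS ag ON drs.group_id   = ag.group_id
JOIN sys.availability_replicas AS ar ON drs.replica_id = ar.replica_id
WHERE ag.is_distributed = 1;
```

A SYNCHRONIZING state with a steadily advancing last_hardened_lsn is what you would expect here, since the distributed availability groups were created with asynchronous commit.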

For the last step (resources migration), here are the new steps we have to execute:

  • Failover all the availability groups context to the remote WSFC by using distributed availability group capabilities
  • Redirect applications to point to the new listener

And that’s all!

Regarding the last point, we may use different approaches and here is mine. We don’t want to modify the application connection strings, as that may lead to extra steps on the application side. The listeners previously created with the remote availability groups may be considered technical listeners whose only purpose is to achieve the cross-cluster migration through distributed availability groups. Once the migration is done, we may reuse the old listener names in the new environment and achieve an almost transparent application redirection this way.

Another important thing I have to cover is the behavior of the distributed availability group failover process. After digging into several failover tests, I noticed that triggering a failover from a distributed availability group will move the primary replica of each secondary availability group to the PRIMARY role but, unfortunately, it will not automatically switch the primary of each primary availability group to the SECONDARY role as expected. This situation may lead to a split-brain scenario in the worst case. The quick workaround consists in forcing the primary availability group to the SECONDARY role before initiating the failover process. Let me demonstrate a little bit:

/* =========    FROM THE OLD PRIMARIES (SPLIT BRAIN SCENARIO)    ========= */

-- illustrative sketch: demote the primary replica of each old primary
-- availability group to the SECONDARY role (AG names as used in this post)
USE master;

ALTER AVAILABILITY GROUP [TEMPDB_DUMMY_AG] SET (ROLE = SECONDARY);
ALTER AVAILABILITY GROUP [TEMPDB_DUMMY2AG] SET (ROLE = SECONDARY);
The new situation is as follows:

blog 107 - 10 - distributed ag cross cluster migration 10

Let’s try to connect to the old primary availability group. Accessing the DUMMY database is no longer permitted, as expected:

Msg 976, Level 14, State 1, Line 1
The target database, ‘DUMMY’, is participating in an availability group and is currently
not accessible for queries. Either data movement is suspended or the availability replica is not
enabled for read access. To allow read-only access to this and other databases in the availability
group, enable read access to one or more secondary availability replicas in the group. 
For more information, see the ALTER AVAILABILITY GROUP statement in SQL Server Books Online.

We now have to get back the old listener names for the new primary availability groups. In my case, I decided to drop the old availability groups in order to completely remove the availability group configuration from the old primary WSFC and to make the concerned databases definitively inaccessible as well (RESTORING state).



-- illustrative sketch: drop the old availability groups on the old primary
-- WSFC; the concerned databases are left in RESTORING state
USE master;

DROP AVAILABILITY GROUP [DUMMY];
DROP AVAILABILITY GROUP [DUMMY2];
Finally, I may drop the technical listeners and recreate the application listeners by using the following script:

/* =========    ADD APPLICATION LISTENERS WITH NEW CONFIG    ========= */

-- illustrative sketch (listener names and IP address are examples)
USE master;

ALTER AVAILABILITY GROUP [DUMMY10] REMOVE LISTENER N'lst-dummy10';
ALTER AVAILABILITY GROUP [DUMMY10]
ADD LISTENER N'lst-dummy' (WITH IP ((N'192.168.5.121', N'255.255.255.0')), PORT = 1433);
Et voilà!


Final thoughts

Cross-cluster migration is definitely a complex process regardless of the SQL Server 2016 improvements. However, as we’ve seen in this blog post, the latest SQL Server version reduces the overall complexity of the different migration steps. Personally, as a DBA, I don’t like using custom registry modifications that directly impact the WSFC level (required in the first migration model) because they may introduce some anxiety and unexpected events. SQL Server 2016 provides a more secure way through distributed availability groups and keeps all migration steps at the SQL Server level, which makes me more confident in the migration process.

Happy cross-cluster migration!





The post SQL Server 2016: Distributed availability groups and Cross Cluster migration appeared first on Blog dbi services.

A Guide to the Oracle Data Types

Complete IT Professional - Mon, 2016-10-24 06:00
In this article, I explain what the different data types are in Oracle database and everything you need to know about them. What You’ll Learn about Oracle SQL Data Types There are many data types in the Oracle database. There are character, number, date, and other data types. When you create a table, you need to […]
Categories: Development

New OA Framework 12.2.6 Update Now Available

Steven Chan - Mon, 2016-10-24 02:04

Web-based content in Oracle E-Business Suite 12 runs on the Oracle Application Framework (OAF or "OA Framework") user interface libraries and infrastructure.  Since the release of Oracle E-Business Suite 12.2 in 2013, we have released several cumulative updates to Oracle Application Framework to fix performance, security, and stability issues. 

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Pack. "Cumulative" means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for EBS 12.2.x is now available:

Where is the documentation for this update?

Instructions for installing this OAF Release Update Pack are here:

Who should apply this patch?

All EBS 12.2.6 users should apply this patch.  Future OAF patches for EBS 12.2.6 will require this patch as a prerequisite. 

I'm on an earlier EBS release.  Can I apply this patch?

Yes. This patch can be applied to all EBS 12.2.3 to 12.2.5 environments that have AD/TXK Delta 8 and ATG Delta 6 installed.

What's new in this update?

This bundle patch is cumulative: it includes all fixes released in previous EBS 12.2.x OAF bundle patches.

This latest bundle patch includes fixes for the following critical issues, along with security fixes:

  • Table scroll navigation issue when navigating to previous records
  • Javadoc for the Layered Layout component
  • Reset icon in the Personalization workbench intermittently failing to load
  • UI distortion on the page after adding records to a table in the last range
  • Worklist font issue in Tree mode on the Homepage
  • KFF combinations update event was not getting processed for a KFF inside a table
  • Profile option name for profile FND_SHOW_INSTANCE_NAME changed to "FND: Show Instance Name"

Related Articles

Categories: APPS Blogs

Documentum story – Attempt to fetch object with handle 3c failed

Yann Neuhaus - Mon, 2016-10-24 02:00

Some time ago, I was preparing an upgrade of a Content Server and everything was working fine. Just before starting, I checked our monitoring interface for this environment to crosscheck, and I saw the following alerts coming from the log files of the docbases installed on this CS:

2016-04-05T10:47:01.411651      21425[21425]    0000000000000000        [DM_OBJ_MGR_E_FETCH_FAIL]error:  "attempt to fetch object with handle 3c3f245a60000210 failed"


As you might know, an object with an ID starting with “3c” represents the dm_docbase_config object and therefore I was a little bit afraid when I saw this alert. So first thing, I opened an iapi session to see if the docbase was responding:

[dmadmin@content_server_01 log]$ iapi DOCBASE1 -Udmadmin -Pxxx

        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2015
        All rights reserved.
        Client Library Release 7.2.0050.0084

Connecting to Server using docbase DOCBASE1
[DM_SESSION_I_SESSION_START]info:  "Session 013f245a800d7441 started for user dmadmin."

Connected to Documentum Server running Release 7.2.0050.0214  Linux64.Oracle
Session id is s0
API> retrieve,c,dm_docbase_config
API> exit


Ok so apparently the docbase is working properly: we can access D2 and DA, idql is working, and even the dm_docbase_config object can be retrieved and seen, but the exact same error still shows up in the log file every 5 minutes. So I took a closer look at the error message itself, because I didn't want to scroll through hundreds of lines for maybe nothing. Searching the log file for the object ID wouldn't really help to find the root cause, and the same applies to the DM_OBJ_MGR_E_FETCH_FAIL error: both would just print the exact same error again and again with different timestamps. While thinking about that, I actually realised that the 2 or 3 error lines I was able to see on my screen were strictly identical, except for the timestamps, and that includes the process ID throwing this error (the second column of the log file).


With this new information, I tried to find all log entries related to this process ID:

[dmadmin@content_server_01 log]$ grep "21425\[21425\]" $DOCUMENTUM/dba/log/DOCBASE1.log | more
2016-04-04T06:58:09.315163      21425[21425]    0000000000000000        [DM_SERVER_I_START_SERVER]info:  "Docbase DOCBASE1 attempting to open"
2016-04-04T06:58:09.315282      21425[21425]    0000000000000000        [DM_SERVER_I_START_KEY_STORAGE_MODE]info:  "Docbase DOCBASE1 is using database for cryptographic key storage"
2016-04-04T06:58:09.315316      21425[21425]    0000000000000000        [DM_SERVER_I_START_SERVER]info:  "Docbase DOCBASE1 process identity: user(dmadmin)"
2016-04-04T06:58:11.400017      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Post Upgrade Processing."
2016-04-04T06:58:11.401753      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Base Types."
2016-04-04T06:58:11.404252      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmRecovery."
2016-04-04T06:58:11.412344      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmACL."
2016-04-04T06:58:11.438249      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmDocbaseIdMap."
2016-04-04T06:58:11.447435      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Error log streams."
2016-04-04T06:58:11.447915      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmUser."
2016-04-04T06:58:11.464912      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmGroup."
2016-04-04T06:58:11.480200      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmSysObject."
2016-04-04T06:58:11.515201      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmExprCode."
2016-04-04T06:58:11.524604      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmKey."
2016-04-04T06:58:11.533883      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmValueAssist."
2016-04-04T06:58:11.541708      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmValueList."
2016-04-04T06:58:11.551492      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmValueQuery."
2016-04-04T06:58:11.559569      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmValueFunc."
2016-04-04T06:58:11.565830      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmExpression."
2016-04-04T06:58:11.594764      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmLiteralExpr."
2016-04-04T06:58:11.603279      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmBuiltinExpr."
2016-04-04T06:58:11.625736      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmFuncExpr."
2016-04-04T06:58:11.636930      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmCondExpr."
2016-04-04T06:58:11.663622      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmCondIDExpr."
2016-04-04T06:58:11.707363      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmDDInfo."
2016-04-04T06:58:11.766883      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmScopeConfig."
2016-04-04T06:58:11.843335      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmDisplayConfig."
2016-04-04T06:58:11.854414      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmNLSDDInfo."
2016-04-04T06:58:11.878566      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmDomain."
2016-04-04T06:58:11.903844      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmAggrDomain."
2016-04-04T06:58:11.929480      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmMountPoint."
2016-04-04T06:58:11.957705      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmLocation."
2016-04-04T06:58:12.020403      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Server Configuration."
2016-04-04T06:58:12.135418      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmPolicy."
2016-04-04T06:58:12.166923      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmDDCommonInfo."
2016-04-04T06:58:12.196057      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmDDAttrInfo."
2016-04-04T06:58:12.238040      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmDDTypeInfo."
2016-04-04T06:58:12.269202      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmRelation."
2016-04-04T06:58:12.354573      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmForeignKey."
2016-04-04T06:58:12.387309      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmEvent."
2016-04-04T06:58:12.403895      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize DQL."
2016-04-04T06:58:12.405622      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmFolder."
2016-04-04T06:58:12.433583      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmDocument."
2016-04-04T06:58:12.480234      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Plugin Type."
2016-04-04T06:58:12.490196      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmNote."
2016-04-04T06:58:12.518305      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmComposite."
2016-04-04T06:58:12.529351      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmStorage."
2016-04-04T06:58:12.539944      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Common area."
2016-04-04T06:58:12.612097      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize File Store."
2016-04-04T06:58:12.675604      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Optical Store."
2016-04-04T06:58:12.717573      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Linked Stores."
2016-04-04T06:58:12.803227      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Distributed Stores."
2016-04-04T06:58:13.102632      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Blob Stores."
2016-04-04T06:58:13.170074      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize External Store."
2016-04-04T06:58:13.242012      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize External File Store."
2016-04-04T06:58:13.305767      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize External URL."
2016-04-04T06:58:13.363407      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize External Free."
2016-04-04T06:58:13.429547      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmContent."
2016-04-04T06:58:13.461400      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmiSubContent."
2016-04-04T06:58:13.508588      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmOutputDevice."
2016-04-04T06:58:13.630872      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmMethod."
2016-04-04T06:58:13.689265      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmRouter."
2016-04-04T06:58:13.733289      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmFederation."
2016-04-04T06:58:13.807554      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmAliasSet."
2016-04-04T06:58:13.871634      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Content Addressable Storage."
2016-04-04T06:58:13.924874      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Formats."
2016-04-04T06:58:13.995154      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Convert."
2016-04-04T06:58:13.998050      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Indices."
2016-04-04T06:58:14.025587      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Dump Files."
2016-04-04T06:58:14.107689      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Load Files."
2016-04-04T06:58:14.176232      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize In Box."
2016-04-04T06:58:14.225954      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Distributed References."
2016-04-04T06:58:14.292782      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Client Registrations."
2016-04-04T06:58:14.319699      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Client Rights."
2016-04-04T06:58:14.330420      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Client Rights Domain."
2016-04-04T06:58:14.356866      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Business Activities."
2016-04-04T06:58:14.411545      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Business Processes."
2016-04-04T06:58:14.444264      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Packages."
2016-04-04T06:58:14.493478      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Workitems."
2016-04-04T06:58:14.521836      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Workflows."
2016-04-04T06:58:14.559605      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Audit Trail."
2016-04-04T06:58:14.639303      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Clean Old Links."
2016-04-04T06:58:14.640511      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Compute Internal Type Tag Cache."
2016-04-04T06:58:14.696040      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize LastActionProcs."
2016-04-04T06:58:14.696473      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize User Types."
2016-04-04T06:58:14.696828      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Start Up - Phase 1."
2016-04-04T06:58:15.186812      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Start Up - Phase 2."
2016-04-04T06:58:15.666149      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Crypto Objects."
2016-04-04T06:58:15.677119      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Queue Views."
2016-04-04T06:58:15.678343      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Port Info Views."
2016-04-04T06:58:15.679532      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Global Cache Finalization."
2016-04-04T06:58:16.939100      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize National Language Character Translation."
2016-04-04T06:58:16.941815      21425[21425]    0000000000000000        [DM_CHARTRANS_I_TRANSLATOR_OPENED]info:  "Translator in directory ($DOCUMENTUM/product/7.2/install/external_apps/nls_chartrans) was added succesfully initialized.  Translator specifics: (Chararacter Translator: , Client Locale: (Windows :(4099), Version: 4.0), CharSet: ISO_8859-1, Language: English_US, UTC Offset: 0, Date Format:%2.2d/%2.2d/%2.2d %2.2d:%2.2d:%2.2d, Java Locale:en, Server Locale: (Linux :(8201), Version: 2.4), CharSet: UTF-8, Language: English_US, UTC Offset: 0, Date Format:%2.2d/%2.2d/%2.2d %2.2d:%2.2d:%2.2d, Java Locale:en, Shared Library: $DOCUMENTUM/product/7.2/install/external_apps/nls_chartrans/unitrans.so)"
2016-04-04T06:58:16.942377      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize LDAP setup."
2016-04-04T06:58:16.961767      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Distributed change-checking."
2016-04-04T06:58:17.022448      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Fulltext Plugin and Configuration."
2016-04-04T06:58:17.113147      21425[21425]    0000000000000000        [DM_FULLTEXT_T_QUERY_PLUGIN_VERSION]info:  "Loaded FT Query Plugin: $DOCUMENTUM/product/7.2/bin/libDsearchQueryPlugin.so, API Interface version: 1.0, Build number: HEAD; Sep 14 2015 07:48:06, FT Engine version: xPlore version 1.5.0020.0048"
2016-04-04T06:58:17.122313      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Distributed Content."
2016-04-04T06:58:17.125027      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Distributed Content Map."
2016-04-04T06:58:17.125467      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Distributed Content Digital Signatures."
2016-04-04T06:58:17.621156      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Acs Config List."
2016-04-04T06:58:17.621570      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmLiteSysObject."
2016-04-04T06:58:17.623010      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize dmBatchManager."
2016-04-04T06:58:17.624369      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Partition Scheme."
2016-04-04T06:58:17.627552      21425[21425]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Authentication Plugins."
2016-04-04T06:58:17.631207      21425[21425]    0000000000000000        [DM_SESSION_I_AUTH_PLUGIN_LOADED]info:  "Loaded Authentication Plugin with code 'dm_krb' ($DOCUMENTUM/dba/auth/libkerberos.so)."
2016-04-04T06:58:17.631480      21425[21425]    0000000000000000        [DM_SESSION_I_AUTH_PLUGIN_LOAD_INIT]info:  "Authentication plugin ( 'dm_krb' ) was disabled. This is expected if no keytab file(s) at location ($DOCUMENTUM/dba/auth/kerberos).Please refer the content server installation guide."
2016-04-04T06:58:17.638885      21425[21425]    0000000000000000        [DM_SERVER_I_START_SERVER]info:  "Docbase DOCBASE1 opened"
2016-04-04T06:58:17.639005      21425[21425]    0000000000000000        [DM_SERVER_I_SERVER]info:  "Setting exception handlers to catch all interrupts"
2016-04-04T06:58:17.639043      21425[21425]    0000000000000000        [DM_SERVER_I_START]info:  "Starting server using service name:  DOCBASE1"
2016-04-04T06:58:17.810637      21425[21425]    0000000000000000        [DM_SERVER_I_LAUNCH_MTHDSVR]info:  "Launching Method Server succeeded."
2016-04-04T06:58:17.818319      21425[21425]    0000000000000000        [DM_SERVER_I_LISTENING]info:  "The server is listening on network address (Service Name: DOCBASE1_s, Host Name: content_server_01 :V4 IP)"
2016-04-04T06:58:17.821615      21425[21425]    0000000000000000        [DM_SERVER_I_IPV6_DISABLED]info:  "The server can not listen on IPv6 address because the operating system does not support IPv6"
2016-04-04T06:58:19.301490      21425[21425]    0000000000000000        [DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent master (pid : 21612, session 013f245a80000007) is started sucessfully."
2016-04-04T06:58:19.302601      21425[21425]    0000000000000000        [DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 21613, session 013f245a8000000a) is started sucessfully."
2016-04-04T06:58:20.304937      21425[21425]    0000000000000000        [DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 21626, session 013f245a8000000b) is started sucessfully."
2016-04-04T06:58:21.307256      21425[21425]    0000000000000000        [DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 21639, session 013f245a8000000c) is started sucessfully."
2016-04-04T06:58:22.307448      21425[21425]    0000000000000000        [DM_SERVER_I_START]info:  "Sending Initial Docbroker check-point "
2016-04-04T06:58:22.325337      21425[21425]    0000000000000000        [DM_MQ_I_DAEMON_START]info:  "Message queue daemon (pid : 21655, session 013f245a80000456) is started sucessfully."


These are the first two pages I got as a result, and that's actually when I realised that the process I was talking about above was in fact the main docbase process itself. So I displayed the third page of the more command and got the following:

2016-04-05T10:04:28.305238      21425[21425]    0000000000000000        [DM_OBJ_MGR_E_CURSOR_FAIL]error:  "In operation Exec an attempt to create cursor failed; query was: 'SELECT * FROM DM_DOCBASE_CONFIG_RV dm_dbalias_B , DM_DOCBASE_CONFIG_SV dm_dbalias_C  WHERE (dm_dbalias_C.R_OBJECT_ID=:dmb_handle AND dm_dbalias_C.R_OBJECT_ID=dm_dbalias_B.R_OBJECT_ID) ORDER BY dm_dbalias_B.R_OBJECT_ID,dm_dbalias_B.I_POSITION'; error from database system was: ORA-03114: not connected to ORACLE"
2016-04-05T10:04:28.305385      21425[21425]    0000000000000000        [DM_OBJ_MGR_E_FETCH_FAIL]error:  "attempt to fetch object with handle 3c3f245a60000210 failed"
2016-04-05T10:04:28.317505      21425[21425]    0000000000000000        [DM_SESSION_I_RETRYING_DATABASE_CONNECTION]info:  "The following error was encountered trying to get a database connection:  ORA-12541: TNS:no listener
2016-04-05T10:04:28.317591      21425[21425]    013f245a80000002        [DM_SESSION_I_RETRYING_DATABASE_CONNECTION]info:  "The following error was encountered trying to get a database connection:  ORA-12541: TNS:no listener
2016-04-05T10:04:58.329725      21425[21425]    0000000000000000        [DM_SESSION_I_RETRYING_DATABASE_CONNECTION]info:  "The following error was encountered trying to get a database connection:  ORA-12541: TNS:no listener
2016-04-05T10:04:58.329884      21425[21425]    013f245a80000002        [DM_SESSION_I_RETRYING_DATABASE_CONNECTION]info:  "The following error was encountered trying to get a database connection:  ORA-12541: TNS:no listener
2016-04-05T10:05:28.339052      21425[21425]    0000000000000000        [DM_SESSION_I_RETRYING_DATABASE_CONNECTION]info:  "The following error was encountered trying to get a database connection:  ORA-12541: TNS:no listener
2016-04-05T10:05:28.339143      21425[21425]    013f245a80000002        [DM_SESSION_I_RETRYING_DATABASE_CONNECTION]info:  "The following error was encountered trying to get a database connection:  ORA-12541: TNS:no listener
2016-04-05T10:05:49.077076      21425[21425]    0000000000000000        [DM_SESSION_I_RETRYING_DATABASE_CONNECTION]info:  "The following error was encountered trying to get a database connection:  ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
2016-04-05T10:05:49.077163      21425[21425]    013f245a80000002        [DM_SESSION_I_RETRYING_DATABASE_CONNECTION]info:  "The following error was encountered trying to get a database connection:  ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
2016-04-05T10:06:23.461495      21425[21425]    013f245a80000002        [DM_SESSION_W_RESTART_AGENT_EXEC]warning:  "The agent exec program has stopped running.  It will be restarted."
2016-04-05T10:06:48.830854      21425[21425]    0000000000000000        [DM_OBJ_MGR_E_FETCH_FAIL]error:  "attempt to fetch object with handle 3c3f245a60000210 failed"
2016-04-05T10:11:52.533340      21425[21425]    0000000000000000        [DM_OBJ_MGR_E_FETCH_FAIL]error:  "attempt to fetch object with handle 3c3f245a60000210 failed"
2016-04-05T10:16:52.574766      21425[21425]    0000000000000000        [DM_OBJ_MGR_E_FETCH_FAIL]error:  "attempt to fetch object with handle 3c3f245a60000210 failed"
2016-04-05T10:21:52.546389      21425[21425]    0000000000000000        [DM_OBJ_MGR_E_FETCH_FAIL]error:  "attempt to fetch object with handle 3c3f245a60000210 failed"
2016-04-05T10:26:52.499108      21425[21425]    0000000000000000        [DM_OBJ_MGR_E_FETCH_FAIL]error:  "attempt to fetch object with handle 3c3f245a60000210 failed"
2016-04-05T10:31:52.232095      21425[21425]    0000000000000000        [DM_OBJ_MGR_E_FETCH_FAIL]error:  "attempt to fetch object with handle 3c3f245a60000210 failed"
2016-04-05T10:36:57.700202      21425[21425]    0000000000000000        [DM_OBJ_MGR_E_FETCH_FAIL]error:  "attempt to fetch object with handle 3c3f245a60000210 failed"
2016-04-05T10:42:01.198050      21425[21425]    0000000000000000        [DM_OBJ_MGR_E_FETCH_FAIL]error:  "attempt to fetch object with handle 3c3f245a60000210 failed"
2016-04-05T10:47:01.411651      21425[21425]    0000000000000000        [DM_OBJ_MGR_E_FETCH_FAIL]error:  "attempt to fetch object with handle 3c3f245a60000210 failed"
2016-04-05T10:52:02.242612      21425[21425]    0000000000000000        [DM_OBJ_MGR_E_FETCH_FAIL]error:  "attempt to fetch object with handle 3c3f245a60000210 failed"
2016-04-05T10:57:11.886518      21425[21425]    0000000000000000        [DM_OBJ_MGR_E_FETCH_FAIL]error:  "attempt to fetch object with handle 3c3f245a60000210 failed"
2016-04-05T11:02:13.133405      21425[21425]    0000000000000000        [DM_OBJ_MGR_E_FETCH_FAIL]error:  "attempt to fetch object with handle 3c3f245a60000210 failed"
2016-04-05T11:07:15.364236      21425[21425]    0000000000000000        [DM_OBJ_MGR_E_FETCH_FAIL]error:  "attempt to fetch object with handle 3c3f245a60000210 failed"


As you can see above, the first error occurred at "2016-04-05T10:04:28.305385", which is only 0.000147 s (about 0.15 ms) after an error raised while executing a SQL query on the database for the Exec operation, so the two must be linked. The database errors stopped one minute after the first one, so the DB was available again; I quickly verified that using sqlplus.
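A quick way to confirm that the failure really does recur on a roughly 5-minute cycle is to pull the timestamps out of the log and diff consecutive DM_OBJ_MGR_E_FETCH_FAIL entries. Here is a minimal Python sketch; the two sample lines are copied from the excerpt above, and against a live system you would read the docbase log file instead:

```python
from datetime import datetime

# Two sample lines in the same format as the docbase log excerpt above
# (session ids and the object handle are taken from the excerpt, not a live system).
log_lines = [
    '2016-04-05T10:06:48.830854      21425[21425]    0000000000000000        '
    '[DM_OBJ_MGR_E_FETCH_FAIL]error:  "attempt to fetch object with handle 3c3f245a60000210 failed"',
    '2016-04-05T10:11:52.533340      21425[21425]    0000000000000000        '
    '[DM_OBJ_MGR_E_FETCH_FAIL]error:  "attempt to fetch object with handle 3c3f245a60000210 failed"',
]

def fetch_fail_intervals(lines):
    """Return the gaps, in seconds, between consecutive DM_OBJ_MGR_E_FETCH_FAIL entries."""
    stamps = [
        datetime.fromisoformat(line.split()[0])  # first field is the ISO 8601 timestamp
        for line in lines
        if "DM_OBJ_MGR_E_FETCH_FAIL" in line
    ]
    return [
        (later - earlier).total_seconds()
        for earlier, later in zip(stamps, stamps[1:])
    ]

intervals = fetch_fail_intervals(log_lines)
print(intervals)  # roughly 300 seconds, i.e. the ~5-minute cycle seen in the log
```

The same loop over the full log file would also make it easy to check whether the interval drifts over time, which can hint at whether the errors come from a polling job or from retries.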


We opened an SR on the EMC website to work on this with them but, as far as I know, no solution was found. The only workaround we found is a simple restart of the docbase: that cleans these errors out of the docbase log file and they do not appear afterwards. It is pretty annoying, however, to restart a docbase that is actually working properly (jobs are running, DFC clients are OK, and so on) just because one error message printed every five minutes is flooding our monitoring tool.


The problem with this error is that it can happen frequently if the network is not really reliable, if the database listener is not always responsive, or if anything else prevents Documentum from reaching the database while it is working with the dm_docbase_config objects. Something we have not tried yet is re-initializing the Content Server, to see if that restores a clean log file. I will try that the next time this issue occurs!

Edit: Re-initializing the Content Server doesn't help :(.


The article Documentum story – Attempt to fetch object with handle 3c failed appeared first on the dbi services blog.

Architecture Guidelines - Same Domain Issues

Anthony Shorten - Sun, 2016-10-23 21:32

After a long leave of absence to battle cancer, I am back, and the first article I wanted to publish is about some architectural principles that may help in planning your production environments.

Recently I was asked by a product partner whether it is possible to house more than one Oracle Utilities product, together with other Oracle products, on the same machine, in the same WebLogic domain and in the same Oracle database. The partner wanted to reduce hardware costs by combining installations. This is technically possible (to varying extents) but not necessarily practical in certain situations, such as production. As one of my mentors once told me, "just because something is possible does not mean it is practical".

Let me clarify the situation. We are talking about multiple products on the same WebLogic domain on the same non-virtualized hardware sharing the database via different schemas. That means non-virtualized sharing of CPU, memory and disk. 

Let me explain why housing multiple products in the same domain and/or same hardware is not necessarily a good idea:

  • Resource profiles - Each product typically has a different resource profile in terms of CPU, memory and disk usage. Placing multiple products in this situation forces you to compromise on the shared settings to accommodate all of the products. For example, if the products share a database instance, the instance-level parameters represent a compromise across the products, which may not be optimal for any individual product.
  • Scalability issues - By limiting your architecture to specific hardware you are constrained in any possible future expansion. As your transaction volumes grow, you need to scale and you do not want to limit your solutions.
  • Incompatibilities - Whilst the Oracle Utilities products are designed to interact at the platform level, not all products are compatible when sharing resources. Let me explain with an example. Over the last few releases we have been replacing our internal technology with Oracle technology. One of the things we replaced was the Multi-Purpose Listener (MPL), superseded by the Oracle Service Bus to provide industry-level integration possibilities. Now, it is not possible to house Oracle Service Bus within the same domain as Oracle Utilities products. This is not a design flaw but intentional: a single instance of Oracle Service Bus can be shared across products and scaled separately. Oracle Service Bus is only compatible with Oracle SOA Suite, as it builds domain-level configuration that should not be compromised by sharing the domain with other products.

There is a better approach to this issue:

  • Virtualization - A virtualization technology can address both the separation of resources and scalability. It allows many configuration combinations while letting you allocate resources to match each product's profile and scale as your business changes over time.
  • Clustering and server separation - Oracle Utilities products can live in the same WebLogic domain, but there are guidelines to make this work appropriately. For example, each product should have its own cluster and/or servers within the domain, which allows individual product configuration and optimization. Remember to put non-Oracle Utilities products such as Oracle SOA Suite and Oracle Service Bus in their own domains, as they are typically shared enterprise-wide and have their own pre-optimized domain setups.

This is a first in a series of articles on architecture I hope to impart over the next few weeks.

Speaking at APAC OTN TOUR 2016 in Wellington, New Zealand

Pakistan's First Oracle Blog - Sun, 2016-10-23 19:44
The APAC OTN Tour 2016 will run from October 26th until November 11th, visiting 4 countries and 7 cities in the Asia-Pacific region.

I will be speaking at the APAC OTN Tour 2016 in Wellington, New Zealand on 26th October on a topic that is very near and dear to me: Exadata and Cloud.

My session is 12c Multi-Tenancy and Exadata IORM: An Ideal Cloud Based Resource Management with Fahd Mirza

Hope to see you there!

Categories: DBA Blogs

