
DBA Blogs

Log Buffer #445: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-10-16 13:45

This Log Buffer edition works its way through some of the coolest blog posts from Oracle, SQL Server and MySQL of the past week.

Oracle:

  • What if I’m a developer or QA person using a copy of a production database to do my work? What if my copy of production is now out of date and I want to refresh my data with the data as it is on production?
  • Direct path and buffered reads again.
  • Copy Data Management for Oracle Database with EMC AppSync and XtremIO.
  • Little things worth knowing: automatic generation of extended statistics in 12c.
  • Any time you execute more than one SQL statement in a PL/SQL procedure, the results of executing that procedure may not be self-consistent unless you have explicitly locked your session SCN to a fixed value!

SQL Server:

  • An Introduction to the SQLCMD Mode in SSMS (SQL Spackle).
  • Describes the idle connection resiliency feature, which allows ODBC and SqlClient data access applications to maintain their connections to SQL Server 2014 or an Azure SQL Database.
  • How to Upgrade SQL Server.
  • Integration Services Logging Levels in SQL Server 2016.
  • Understanding the OVER clause.

MySQL:

  • Proxy Protocol and Percona XtraDB Cluster: A Quick Guide.
  • Do not run those commands with MariaDB GTIDs.
  • What If You Can’t Trace End-to-End?
  • Storing UUID Values in MySQL Tables.
  • How to Deploy and Manage MaxScale using ClusterControl.


Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs


Got Published in AUSOUG's Foresight Online Spring 2015

Pakistan's First Oracle Blog - Tue, 2015-10-13 00:47
AUSOUG's Foresight Online Spring 2015 Edition is the premier publication of the Australian Oracle User Group.

Following are highlights of this edition:

  • President's Message
  • DBA Article: Automated Testing of Oracle BPM Suite 12c Processes with SOAP UI - Peter Kemp, State Revenue Office, Victoria
  • DBA Article: Best Practices for Oracle on Pure Storage
  • Apps Article: Performance Review Data Capture - Brad Sayer, More4Apps
  • DBA / Dev Article: Database Developers – Feeling Left Out of Agile? - D Nowrood, Dell Software
  • Apps Article:  Cost-effective alternative to Oracle Discoverer and BI Suite - Wilhelm Hamman, Excel4apps
  • DBA Article: DBA101 - characterset corruption - Paul Guerin, HP
  • Quick Tips 1: Five Reasons to Upgrade to APEX 5.0 - Scott Wesley, Sage Computing Services
  • Quick Tips 2: Last Successful login time in 12c - Fahd Mirza Chughtai, The Pythian Group
Categories: DBA Blogs

Sharding in Oracle 12c Database

Pakistan's First Oracle Blog - Mon, 2015-10-12 21:22
Sharding is still pretty much an alien, or at least pretty new, concept for Oracle DBAs. In the realm of big data, though, the term is used quite extensively.

What is Sharding in simple words:

Sharding is partitioning. Horizontal partitioning, to be exact.

Sharding means partitioning a table’s rows on the basis of some criteria and storing those partitioned rows (i.e. the shards) on different database servers. These database servers are cheap, commodity-grade servers.

The benefits include smaller data sets to manage, smaller backups, faster reads, and faster response times for queries.

Just like the existing partitioning options in the Oracle database, there are generally three kinds of sharding:

  • Range Sharding
  • List Sharding
  • Hash Sharding
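As an analogy, familiar single-database range partitioning looks like this (the table and column names are purely illustrative):

CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE
)
PARTITION BY RANGE (order_date)
(
  PARTITION p2014 VALUES LESS THAN (TO_DATE('2015-01-01','YYYY-MM-DD')),
  PARTITION p2015 VALUES LESS THAN (TO_DATE('2016-01-01','YYYY-MM-DD'))
);

Sharding applies the same idea horizontally: each partition-like slice (shard) lives on its own server instead of inside one database.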

The news out there on social media is that the next version of Oracle Database 12c is coming up with a sharding option. That is pretty exciting; let's see what they come up with in this regard.

Categories: DBA Blogs

Trace Files -- 3 : Tracing for specific SQLs

Hemant K Chitale - Sun, 2015-10-11 04:00
11g allows definition of tracing by SQL_ID as well.

Here is an example.

Given a particular SQL that has been executed in the past, which we've identified as:

SQL> select sql_id, sql_text, executions from v$sql where sql_id='06d4jjswswagq';

SQL_ID        SQL_TEXT                                                                              EXECUTIONS
------------- ------------------------------------------------------------------------------------- ----------
06d4jjswswagq select department_id, sum(salary) from hr.employees group by department_id order by 1          1


We could use either ALTER SESSION (from the same session) or ALTER SYSTEM (from another session, to trace all sessions) to enable tracing specifically for this SQL alone.

SQL> connect system/oracle
SQL> alter system set events 'sql_trace [sql:06d4jjswswagq] wait=true, plan_stat=all_executions';

System altered.


(Note: the options for "plan_stat" are "never", "first_execution", and "all_executions". This allows us to capture execution plan statistics.)
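The session-scoped alternative mentioned above uses the same event string, issued from the session itself:

SQL> alter session set events 'sql_trace [sql:06d4jjswswagq] wait=true, plan_stat=all_executions';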
Once I have enabled SQL-specific tracing, it is not limited to a session but can run across all sessions that execute the SQL.  Even if I execute other SQLs from the same session that executed this SQL, the other SQLs are *not* traced.

Thus, I started another session that executed :

SQL> select department_id, sum(salary) from hr.employees group by department_id order by 1;

DEPARTMENT_ID SUM(SALARY)
------------- -----------
           10        4400
           20       19000
           30       24900
           40        6500
           50      156400
           60       28800
           70       10000
           80      304500
           90       58000
          100       51608
          110       20308

DEPARTMENT_ID SUM(SALARY)
------------- -----------
                     7000

12 rows selected.

SQL> select count(*) from hr.employees;


SQL> select count(*) from hr.departments;


The trace file only captured the target SQL. The other two SQLs were *not* in the trace file.  Tracing is not bound to a session, so if you have multiple sessions executing the target SQL, each session creates a trace file.

Tracing is disabled with :

SQL> alter system set events 'sql_trace [sql:06d4jjswswagq] off';

System altered.


Thus, just as in the previous post where I demonstrated tracing by module and action, we can enable tracing for a specific SQL.

Categories: DBA Blogs

Log Buffer #444: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-10-09 13:53

This Log Buffer Edition covers blog posts about Oracle, SQL Server and MySQL from the past week.

Oracle:

  • Oracle Utilities Application Framework V4 includes a new help engine and changes to the organization of help.
  • A very simple Oracle package for HTTPS and HTTP.
  • Oracle ZFS Storage Appliance surpasses $1B in Revenue (Oct 6th).
  • Tim spotted a problem with the PDB Logging Clause.
  • How to Pass Arguments to OS Shell Script from Oracle Database.

SQL Server:

  • How efficient is your covered index?
  • Storing Passwords in a Secure Way in a SQL Server Database.
  • There is plenty that is novel and perhaps foreign to a new R user, but it’s no reason to abandon your hard-earned SQL skills.
  • SSIS Design Pattern – Staging Fixed Width Flat Files.
  • Shorten Audits, Tighten Security with Minion Enterprise.

MySQL:

  • Confusion and problems with lost+found directory in MySQL/Galera cluster configuration.
  • Simplifying Docker Interactions with BASH Aliases.
  • Getting started MySQL Group Replication on Ubuntu with MySQL Sandbox.
  • MySQL lost “AUTO_INCREMENT” after a long time.
  • Using Apache Spark and MySQL for Data Analysis.


Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs

My Sales Journey: #6

Pythian Group - Fri, 2015-10-09 13:43


Last week, I delved into the mind of our executive tier to pinpoint how to make outreach appealing to that group. Although most companies want to speak to these decision makers, it is an uphill battle to get the ear of a VP or executive. Sometimes we must reach the people who are talking to their bosses each day, and these are the managers in the trenches, living with the problems we want to solve.
So today, let’s step inside the mind of a manager:

Managers maintain the integrity and continual functioning of mission-critical operations. They evaluate and make recommendations regarding new technologies, tools and techniques. They keep a head count and know where the gaps exist in their teams. Managers are responsible for their team’s performance while also being expected to do more with less.

They are also the people who will most likely be willing to listen if you have a solution to their problems. Tell them how you or your company has solved similar problems for others. Give them proof.

Assure them of their importance. Make seeking your help a positive experience for them and their team. Managers need to know that you are not taking their team’s jobs, or their own. As a managed services firm, we like our clients to know that we can augment and work as an intimate extension of their in-house team.

If you should be so lucky that a forward-thinking manager wants to introduce you to their executive team, give them credit. There are tons of people who are too afraid to do things differently and are resistant to change, so when someone takes that leap for you, do not leave them behind.

You can work either up the food chain or down as a sales professional. Know that catching the big fish is hard but rewarding, and most times you will need to get creative with your hook. Isn’t that what sales is all about, in a nutshell? Personally, I do not have a preference, but I am also new to the game.

I’d love to hear your thoughts about your sales process: who do you talk to first, what gets you a higher response rate, and do you go after the big fish or work your way up the ladder? Leave me a message!

Categories: DBA Blogs

When it comes to Black Friday’s online rush, even we Brits don’t want to queue

Pythian Group - Fri, 2015-10-09 08:10

Black Friday is a pretty new concept in the UK. Traditionally an American post-Thanksgiving shopping day, it has recently gained popularity in the UK. It really took off last year and defined the start of what I have heard called the Golden Quarter – from November through to the end of January – when retailers make 40-50 percent of their annual sales.

With anything new, people take a while to find their feet and will try out new things. I hope that one of the technologies trialled widely on e-commerce sites during this period last year isn’t used again this year: the queuing system.

The idea was sound: instead of having too many customers hitting a site and causing poor performance for everyone, a number were put into a virtual waiting room and allowed onto the site in order. This meant that, once in, everyone could shop quickly without any risk of a crash from overload.

But in practice, it seemed to customers as if the site wasn’t working. The Press reported that sites had “crashed” anyway and the user experience was awful. You might queue to get into Harrods on the first day of a Sale, or you might queue online to get a ticket to the last One Direction concert, but with plenty of choices available, users simply hopped elsewhere.

To me, the most frustrating thing was that this seemed like a lazy solution. It is not difficult to build your e-commerce site for the peaks you should be expecting.

Why not ensure you can spin up extra capacity with a cloud provider, if needed? And why not take the time to configure the database structure? This would mean that, for those few days when 1000 people a minute are wanting to see the shoes you have in stock, they all can. Easily and without delay.

Building an e-commerce site – or indeed any application – to be scalable to handle peak traffic should be a high priority for your developers, DBAs, and sys admins. With the range of available cloud technologies, database technologies, and automation tools there is no excuse.

Let’s hope that for Black Friday 2015, for once the UK is queue free!

By the way, if you need a hand with any of the areas discussed above, please do get in touch. As the majority of Pythian’s retail clients are in the US, we’ve had many years of practice, and we ensure our clients’ peak demands are handled smoothly.

Categories: DBA Blogs


SQL Server Fast Food

Pythian Group - Thu, 2015-10-08 13:55


An environment with a high number of databases, on one server or many, can make something as simple as adding a user account time consuming. You have the option of using the GUI in SQL Server Management Studio (SSMS), which I can see doing in a rush to get something in place for 8 or 10 databases. You could also do this with a bit of typing, using T-SQL and a cursor or that famed, undocumented procedure sp_MSForeachdb.

I recently had a request from a customer that fell into the above scenario, and since I used PowerShell to handle the request, I just wanted to show how I went about getting it done. I think this is a situation where either T-SQL or PowerShell would work; I just picked the one I wanted to use.

Breaking this down, these are the basic steps I had to perform:

  1. Check for the login
  2. Create user
  3. Create role
  4. Assign INSERT and UPDATE to the role
  5. Add the user to the database role

All in all that is not too much, if you understand how PowerShell and SMO work for you. If you are not familiar with PowerShell, you can reference the recent series I published on the Pillars of PowerShell, which should help you get started. When I was learning PowerShell I always found I learned best by reading through other folks’ scripts to find out how stuff was done. You can find the full script at the end of this post if you want to just skip right to it; I won’t be offended.

One thing I always find useful with SMO is remembering that MSDN documents everything for the namespace Microsoft.SqlServer.Management.Smo. If you spend the time to review it and at least get familiar with how the documentation is laid out, using SMO and finding answers for things becomes much easier.


The Bun

As always the first step is going to be to create the object for the instance or server:

$s = New-Object Microsoft.SqlServer.Management.Smo.Server $server

For the task of verifying that the login exists, I utilized one of the common methods available with a string type: Contains(). Now, you generally use the Get-Member cmdlet to find the various methods available for an object, but this particular one does not show up if you run: $s.Logins | Get-Member. There is a set of methods that follows each type of value (e.g. string, integer, date, etc.), and the Contains() method is one that comes with the string type. There are two ways I have found to discover these kinds of methods:

  1. Pass the value type to Get-Member [e.g. “A string” | Get-Member]
  2. Use tab completion [e.g. Type out “$s.Logins.” with the period on the end, and then just start hitting the tab key]
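As an illustration only (the login name below is a made-up placeholder; the full script on GitHub is the authoritative version), the check might look something like:

$loginName = 'MyAppLogin'
if (!$s.Logins.Contains($loginName)) {
    # warn if the login is missing on this instance
    Write-Warning "Login $loginName does not exist on $($s.Name)"
}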

If you want a bit of an exercise, you can see if you can add in code to actually create the login if it does not exist. I was only working with one server in this case, so I did not bother adding it this time around.

Being that I need to add these objects to each database I start out by getting the collection of databases on the instance:

$dbList = $s.Databases

From there I am simply going to iterate over each database, which will be stored in the variable $d.
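A skeleton of that loop (just the control flow; the body is filled in through the rest of the post, and the accessibility check is explained in the next section):

foreach ($d in $dbList) {
    if (!$d.IsAccessible) { continue }   # skip offline or restoring databases
    # create the user, role, and permissions here
}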


The Meat

The first thing I want to do is verify the database is online and accessible, so each database (e.g. $d) has a property called “IsAccessible” that simply returns true or false. The equivalent of this in T-SQL would be checking the value of the status column in sys.databases. One shortcut you will see in PowerShell at times is the use of an exclamation point ( ! ) before an object in the if statement; this basically tells it to check for false to be returned:

if (!$d.isAccessible) {…}
#equates to:
if ($d.isAccessible -eq $false) {…}

Now that I know the database is online, I need to create and modify some objects in the database. When dealing with objects such as user accounts, roles, tables, etc. in a database, in PowerShell these are going to be classes under the SMO namespace. So in this script I am going to use the User and DatabaseRole classes:

Under the User and DatabaseRole classes you will see the constructors section, which shows what is needed to create the object. So, for example, digging into the documentation for the DatabaseRole constructor, I see it takes two parameters:

  1. a Microsoft.SqlServer.Management.Smo.Database object
  2. a string value for what you want to call the role.

The $d variable is my database object, so that is covered, and the function I wrote passes the database role name in via $roleName:

$r = New-Object Microsoft.SqlServer.Management.Smo.DatabaseRole($d,$roleName)

I continued through the article for the DatabaseRole class, and in the Properties list I see that some have a description of “Gets the…” and some have “Gets or sets…”. This basically means “Gets the…” = a read-only property, and “Gets or sets” = a property that can be read or modified. When you use CREATE ROLE via T-SQL, you have to provide the name of the role and the owner of that role. I passed the name of the role when creating the database role object ($r), so I just need to set the owner and then call the method to actually create it:

$r.Owner = 'dbo'
$r.Create()   # creates the role in the database
The Ingredients

The only thing I needed to do in this situation was set INSERT and UPDATE permissions, at the schema level, to handle the client’s requirements. Assigning permissions in SMO took me a bit to figure out; it was actually the majority of the time spent writing this script. There are two additional classes I need to handle setting permissions on a schema: Schema and ObjectPermissionSet.

I create the object for the schema according to the documented constructor. Within each class that deals with objects in a database that can be given access, you should find a Grant() method, and in my case what I need is Grant(ObjectPermissionSet, String[ ]). The first parameter is an object that contains the permissions I want to assign to this role (this is where the second class above came into play), and the second parameter is the name of the grantee, the role in my case.

The properties for the ObjectPermissionSet class are the permissions I can assign via SMO to an object in a database, and simply setting them to true will assign that permission:

$dboSchema = New-Object Microsoft.SqlServer.Management.Smo.Schema($d,'dbo')
$perms = New-Object Microsoft.SqlServer.Management.Smo.ObjectPermissionSet
$perms.Insert = $true
$perms.Update = $true
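The grant itself then ties the two objects together; a minimal sketch using the objects built above:

# Grant the INSERT/UPDATE permission set on the dbo schema to the role
$dboSchema.Grant($perms, $r.Name)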

Then, to finish it off, the last line in the script just adds the user as a member of the database role we created. You can find the full script below for your pleasure. Enjoy!


Full Script

To ensure the script is readable, and save space, I published this script to my public GitHub repository. You can view or download the full script from here.


Discover more about our expertise in SQL Server.

Categories: DBA Blogs

Issues with Plan Cache Reuse & Row Goal Optimization

Pythian Group - Thu, 2015-10-08 13:11


I am presenting here on behalf of my colleague Fabiano Amorim (he is busy resolving other exciting performance issues… :-D ).

Fabiano had an interesting case with one of our customers that is very common in SQL Server.

The case is about a performance issue caused by two optimizer decisions not working well together: plan cache reuse and row goal optimization.


Problem Description

Let’s review the following query:

select top 1 col_date from tab1
where col1 = 10
and col2 = 1
and col3 = 1
order by col_date asc


Table tab1 has two indexes:

  1. ix1 (col1, col_date, col2) include(col3)
  2. ix2 (col1, col2, col3) include(col_date)


The Query Optimizer (QO) has two query plan options:

  1. select -> top -> filter -> index seek (ix1): read the ordered index ix1 b-tree, seeking on “col1 = 10”, and apply the residual predicates (filter) “col2 = 1 and col3 = 1”. After reading just 1 row (TOP 1) the execution is finished: since the index is ordered by col1, col_date, the first col_date returned is already the TOP 1 ASC according to the index order.
  2. select -> top N sort -> index seek (ix2): read the covering index ix2 b-tree (notice it has all the needed columns), seeking on “col1 = 10 and col2 = 1 and col3 = 1”, get col_date from the index leaf level (included column), use the “top N sort” algorithm to sort and keep only the TOP 1 row, and finish execution.

The problem is that, if the QO chooses the first option, this will only be good for highly selective predicates.
For instance, let’s suppose that “col1 = 10” returns 5 rows; remember that index ix1 is ordered by col1, col_date, col2:


col1 | col2 | col3 | col_date
10   | 4    | 4    | 2015-12-01
10   | 3    | 3    | 2015-12-02
10   | 1    | 1    | 2015-12-03
10   | 5    | 5    | 2015-12-04
10   | 2    | 2    | 2015-12-05


After seeking the index, SQL will need to apply the residual predicate (“col2 = 1 and col3 = 1”) until it reaches the “row goal”: the TOP iterator is asking for just one row, and in this case the third row will match the predicate, so SQL Server will return the first row that matches the residual predicate.

So, in this case it has to read only 3 rows. So far so good…

Now, let’s suppose SQL created that plan, and is now going to reuse it for a new value in the col1 filter:


select top 1 col_date from tab1
where col1 = 99
and col2 = 1
and col3 = 1
order by col_date asc


What if the seek (“col1 = 99”) returns 2 million rows? Now this plan is not so good, since it will need to apply the predicate to many rows before it finds a match:


col1 | col2 | col3 | col_date
99   | 2    | 2    | 2015-12-01
99   | 2    | 2    | 2015-12-02
…after a couple of million rows…
99   | 1    | 1    | 2015-12-03
99   | 2    | 2    | 2015-12-04
99   | 2    | 2    | 2015-12-05


In this case, using the second option is better: just seek the b-tree for all values (col1 = 99 and col2 = 1 and col3 = 1), and this will return 1 row… the TOP N sort will do almost nothing and execution will finish quickly.

Here is the problem: most of the time, SQL knows whether to use option 1 or option 2 based on the parameter values. But if it is reusing the plan from cache, the optimization path may already be set improperly, resulting in the known issue called “parameter sniffing” (reuse of a plan that is wrong for the specific set of rows)… That means the row goal optimization should not be used when there is a covering index.

Unfortunately, by default the QO “thinks” this is cheaper than “seek + top N sort”… Of course, it all depends on the distribution of the data… So, in a nutshell, the QO chooses the row goal optimization where it should not be used, and therefore we should pay extra attention to these kinds of plans…


Possible Solutions

There are many alternatives to fix it.

Some examples:

  1. Force the index (index=ix2); a sketch follows this list.
  2. Use OPTION(RECOMPILE); also shown below.
  3. Drop the index ix1 and define ix2 as unique (this tells the QO that only 1 row will be returned).
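Sketching the first two options in T-SQL, using the tab1/ix2 names from the example above:

-- Option 1: force the covering index with a table hint
select top 1 col_date
from tab1 with (index(ix2))
where col1 = 99 and col2 = 1 and col3 = 1
order by col_date asc;

-- Option 2: compile a fresh plan for each execution's parameter values
select top 1 col_date
from tab1
where col1 = 99 and col2 = 1 and col3 = 1
order by col_date asc
option (recompile);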

Each one of the above has advantages and disadvantages.

We also need to ensure that statistics are up to date!




Discover our expertise in SQL Server. 

Categories: DBA Blogs

Simplifying Docker Interactions with BASH Aliases

Pythian Group - Thu, 2015-10-08 12:21
Landing a Docker Whale

Docker has been consuming my life in the last few weeks. I have half a dozen projects in progress that use containers in some fashion, including my Visualizing MySQL’s Performance Schema project.

Since I prefer to work from a Mac laptop, I have to utilize a Linux Virtual Machine (VM) which runs the Docker daemon. Luckily, Docker Machine makes this a very simple process.

However, interacting with both Docker and Docker Machine does introduce some additional commands that I would rather simplify for the repeatable use cases I’ve come across. With BASH aliases, this is not a problem.

Is My Docker Environment Set Up?

When working with Docker through Docker Machine, you first have to set up your environment with various DOCKER_* variables, such as these:

View the code on Gist.

The first alias is an easy way to check that the Docker environment is set up.

View the code on Gist.

Now, all I have to type is de, and I get the Docker environment output:

View the code on Gist.

Setting up My Docker Environment

But how do you set up the environment with Docker Machine? The docker-machine command provides the details:

View the code on Gist.

Notice that the comments indicate you have to run the command through eval to get the terminal set up correctly. I don’t want to type that out each time I open a new terminal.

The docker-machine command requires the name of the VM to set up as an argument, so I’ve created a function to accept the argument:

View the code on Gist.

Each time I open a terminal I can setup the environment:

View the code on Gist.

If you only use one Docker VM for local development, you can hardcode its name in the command so the Docker environment is set up automatically whenever a new terminal is created.

Cleaning Out Docker Images

The last helpful alias I have comes from building and re-building containers, which leaves old images behind on my VM.

View the code on Gist.

The docker-clean command cleans up all dangling images:

View the code on Gist.

And running the docker-clean command yields:

View the code on Gist.

I put all of these aliases and functions together in my ~/.bash_profile* script, which is executed anytime I open a terminal window:

View the code on Gist.
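Since the Gist embeds don’t render in this feed, here is a minimal sketch of what the pieces described above might look like; the exact bodies are assumptions based on the prose, not the author’s actual Gists:

# Show the DOCKER_* variables to check the environment is set up
alias de='env | grep DOCKER_'

# Set up the environment for a named Docker Machine VM (defaults to "default")
docker-env() {
  eval "$(docker-machine env "${1:-default}")"
}

# Remove dangling images left over from builds and re-builds
alias docker-clean='docker rmi $(docker images --filter "dangling=true" -q)'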

*Note: Instead of putting these aliases and functions in ~/.bash_profile, other distributions would look for them in ~/.bashrc or ~/.bash_aliases to ensure they are available for all types of interactive shells.

If you have any other commands to simplify Docker interactions, please share them in the comments!


Discover more about our expertise with DevOps.

Categories: DBA Blogs

Partners Guide to Oracle Cloud - The Oracle Cloud Playbooks

OPN has published the Oracle Cloud Platform Strategic Partner Playbook for a while now, designed exclusively for partners. This was created in close partnership with Product Marketing, Product...

We share our skills to maximize your revenue!
Categories: DBA Blogs

My Delphix presentation at OakTable World

Bobby Durrett's DBA Blog - Wed, 2015-10-07 17:52

It is official: I will be doing my Delphix presentation at OakTable World during the Oracle OpenWorld conference at the end of this month. My talk is at 9 am on Tuesday, October 27.

I will describe our journey as a new Delphix customer with its ups and downs. I tried to have the spirit of a user group talk where you get a real person’s experience that you might not get from a more sales oriented vendor presentation.

Kyle Hailey, an OakTable member and Delphix employee, will host my talk. I have been very impressed by Kyle’s technical knowledge, and he will be with me to answer questions about Delphix that I cannot answer. I think it will be a good combination of my real-world user experience and his depth of technical background in Delphix and Oracle performance tuning.

If you are going to OpenWorld and want to know more about Delphix, come check it out. Also, feel free to email me or post comments here if you have any questions about what the talk will cover.


Categories: DBA Blogs

Partner Webcast – Oracle Mobile Cloud Service: Gates to Enterprise Mobility for Your Business

Nowadays, mobility has definitely disrupted business models. Mobile-first companies that are using the context of mobile to create unique applications are creating new business models, disrupting and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Partner Webcast – Rapid Digital Transformation with Oracle Process Cloud

Today, IT is heavily optimized to develop and manage long-running, durable applications with evolutionary change, while current demand calls for the creation of disposable applications and fast frequency...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle E-Business Suite: Virtual Host Names

Pythian Group - Tue, 2015-10-06 07:56

The ability to use virtual host names with Oracle E-Business Suite is one of the features I have been waiting on for a long time. When I finally saw a post on Steven Chan’s blog about it, I was very excited. But when I finally got to review the MOS note “Configuring Oracle E-Business Suite Release 12.x Using Logical Host Names”, I was left disappointed.

In my opinion, the main advantage of using virtual host names shows in a DR failover scenario. By using virtual hostnames we can set up the servers in both the primary datacenter and the secondary datacenter to use the same virtual hostname, even though their physical hostnames are different. This virtual hostname setup helps when we fail over services and databases to the secondary datacenter, as we don’t have to reconfigure the application to use new physical hostnames. Currently, when we install E-Business Suite to use a virtual hostname, the Concurrent Managers don’t work, as they internally use the physical hostname to communicate.

The new MOS note describes this very feature of using virtual hostnames with Oracle E-Business Suite. But why am I disappointed? Because it leaves a very important use case out. In most cases where virtual hostnames are used, the servers keep their different physical hostnames; i.e., if you run the hostname or uname commands you will still see the actual physical hostname, and the virtual hostname is present only in DNS and the hosts file. This scenario is not covered by the MOS note. Instead, the MOS note asks us to reconfigure the server with the virtual hostname, such that typing the hostname or uname command shows the virtual hostname instead of the physical hostname.
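To illustrate the scenario the note skips (the hostnames and addresses below are made up for illustration), the shared virtual name lives only in DNS or the hosts file, while each server keeps its own physical name:

# /etc/hosts on the primary server (hostname/uname still report ebsprd01)
10.10.1.15   ebsapp.example.com   ebsapp

# /etc/hosts on the DR server (hostname/uname still report ebsdr01)
10.20.1.15   ebsapp.example.com   ebsapp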

I believe the need to reconfigure the server to use a virtual hostname defeats the main purpose of setting up virtual hostnames, making this MOS note useless :(

Thus, I will keep on waiting for this out-of-the-box feature. I currently have a custom in-house method for using virtual hostnames with E-Business Suite that I will blog about in the future.


Discover more about our expertise with Oracle.

Categories: DBA Blogs