
DBA Blogs

What happens to the Standby when you move a datafile on the Primary?

The Oracle Instructor - Fri, 2015-05-08 03:50

In 12c, we have introduced online datafile movement as a new feature. Does that impact an existing standby database? I was asked that yesterday during an online webinar. My answer was that I expected no impact at all on the standby database, since redo apply doesn’t care about the physical placement of the datafile on the primary. But I also added that this was just an educated guess, because I hadn’t tested it yet. Now I have:

You know, I like to practice what I preach: Don’t believe it, test it! :-)
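For reference, the online move on the primary is a single statement in 12c, and the standby is easy to check afterwards. A minimal sketch of such a test, with hypothetical file paths:

-- On the primary: move the datafile online (paths are made up)
ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/prima/users01.dbf'
  TO '/u02/app/oracle/oradata/prima/users01.dbf';

-- On the standby: redo apply keeps running and the standby copy stays where it is
SELECT name FROM v$datafile WHERE name LIKE '%users01%';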


Tagged: 12c New Features, Data Guard, Moving a datafile in a Data Guard environment
Categories: DBA Blogs

Digital Transformation Partner Forum Budapest 28-29 April 2015 - Wrap Up

Digital disruption is influencing every business! The digital disruption wave is prevalent in all our partner and customer discussions today. Hence Oracle has set the digital...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Enterprise Pack for Eclipse (OEPE) in an existing Eclipse installation

Oracle's developer tools strategy is to offer the best possible developer tool choices to support diverse needs. When it comes to Java IDEs, while JDeveloper is Oracle’s own Java IDE,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Finding the Oracle Database Appliance Plug-in within #em12c

DBASolved - Wed, 2015-05-06 08:26

The Oracle Database Appliance (ODA) has been around for a few years now. It is a great, compact, and powerful machine for running a two-node Oracle Real Application Clusters (RAC) setup. Adoption of the ODA has mostly been seen in medium-sized organizations that need a workhorse but cannot afford the sticker price of an Oracle Exadata.

Just like all the appliances that Oracle puts out, there is a need to monitor these appliances from top to bottom. This is achieved by using Oracle Enterprise Manager 12c Plug-ins. Recently, Oracle let it be known that the ODA Plug-in has been released; however, it is not easily found by searching online. Hence this blog post…. :)

To find the ODA Plug-in, you basically need to download it from the Self-Update area inside Oracle Enterprise Manager 12c. In order to do this, you first need to set your MOS credentials.

Using Setup -> My Oracle Support -> Set Credentials

Once your MOS credentials are set, you can go to the Self-Update page and update the plug-ins for your Oracle Enterprise Manager (Setup -> Extensibility -> Self-Update).

From the Self-Update page, click the Check Updates button.

After clicking the Check Updates button, Oracle Enterprise Manager will kick off a job to update all the plug-ins in the software library. Once the job completes, you can look at the status of the job and see that the Oracle Database Appliance plug-in was downloaded successfully.

Now that the plug-in has been downloaded, you can go back to the Plug-in Page and deploy the plug-in to the agents that are running on the ODA targets (Setup -> Extensibility -> Plug-ins).

Listed under the Engineered Systems plug-ins, you will now see version 12.1.0.1.0 of the Oracle Database Appliance plug-in.

Now that the plug-in has been downloaded, it can be deployed to the required targets and configured (more on this later, hopefully).
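If you prefer scripting over clicking, the same download-and-deploy flow can be driven with EMCLI. A rough sketch using the standard EM12c verbs; the agent name is a placeholder, and the exact ODA plug-in id must be looked up first:

emcli login -username=sysman
emcli sync
emcli list_plugins_on_server
# agent name and plug-in id below are placeholders
emcli deploy_plugin_on_agent -agent_names="odanode1.example.com:3872" -plugin=<oda_plugin_id>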

Enjoy!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

OpenTSDB and Google Cloud Bigtable

Pythian Group - Wed, 2015-05-06 02:15

Data comes in different shapes. One of these shapes is called a time series. A time series is basically a sequence of data points recorded over time. If, for example, you measure the height of the tide every hour for 24 hours, then you will end up with a time series of 24 data points. Each data point will consist of the tide height in meters and the hour it was recorded at.

Time series are very powerful data abstractions. There are a lot of processes around us that can be described by a simple measurement and the point in time the measurement was taken at. You can discover patterns in your website users’ behavior by measuring the number of unique visitors every couple of minutes. This time series will help you discover trends that depend on the time of day, day of the week, seasonal trends, etc. Monitoring a server’s health by recording metrics like CPU utilization, memory usage and active transactions in a database at a frequent interval is an approach that all DBAs and sysadmins are very familiar with. The real power of time series is in providing a simple mechanism for different types of aggregations and analytics. It is easy to find, for example, minimum and maximum values over a given period of time, or to calculate averages, sums and other statistics.

Building a scalable and reliable database for time series data has been a goal of companies and engineers for quite some time. With ever-increasing volumes of both human- and machine-generated data, the need for such systems is becoming more and more apparent.

OpenTSDB and HBase

There are different database systems that support time series data. Some of them (like Oracle) provide functionality to work with time series that is built on top of their existing relational storage. There are also some specialized solutions like InfluxDB.

OpenTSDB is somewhere in between these two approaches: it relies on HBase to provide scalable and reliable storage, but implements its own logic layer for storing and retrieving data on top of it.

OpenTSDB consists of a tsd process that handles all read/write requests to HBase and several protocols to interact with tsd. OpenTSDB can accept requests over Telnet or HTTP APIs, or you can use existing tools like tcollector to publish metrics to OpenTSDB.
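To give a flavor of those interfaces, here is what writing and then reading back a single data point can look like; the tsd host name, metric and tag are made-up examples:

# write one data point over the Telnet-style API: put <metric> <timestamp> <value> <tags>
echo "put sys.cpu.user 1430870400 42.5 host=web01" | nc -w 1 tsd.example.com 4242

# read it back, averaged over the last hour, via the HTTP API
curl 'http://tsd.example.com:4242/api/query?start=1h-ago&m=avg:sys.cpu.user{host=web01}'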

OpenTSDB relies on the scalability and performance properties of HBase to be able to handle high volumes of incoming metrics. Some of the largest OpenTSDB/HBase installations span dozens of servers and process ~280k writes per second (numbers from http://www.slideshare.net/HBaseCon/ecosystem-session-6).

A lot of different tools complete the OpenTSDB ecosystem, from various metrics collectors to GUIs. This makes OpenTSDB one of the most popular ways to handle large volumes of time series information, and one of the major HBase use cases as well. The main challenge with this configuration is that you will need to host your own (potentially very large) HBase cluster and deal with all the related issues, from hardware procurement to resource management, dealing with Java garbage collection, etc.

OpenTSDB and Google Cloud Bigtable

If you trace HBase’s ancestry you will soon find out that it all started when Google published a paper on a scalable data store called Bigtable. Google has been using Bigtable internally for more than a decade as the back end for its web index, Google Earth and other projects. The publication of the paper initiated the creation of Apache HBase and Apache Cassandra, both very successful open source projects.

The latest release of Bigtable as a publicly available Google Cloud service gives you instant access to all the engineering effort that was put into Bigtable at Google over the years. Essentially, you are getting a flexible, robust HBase-like database that lacks some of the inherited HBase issues, like Java GC stalls. And it’s completely managed, meaning you don’t have to worry about provisioning hardware, handling failures, software installs, etc.

What does it mean for OpenTSDB and time series databases in general? Well, since HBase is built on the Bigtable foundation, it is actually API compatible with Google Cloud Bigtable. This means that applications that work with HBase can be switched to work with Bigtable with minimal effort. Be aware of some of the existing limitations, though. Pythian engineers are working on integrating OpenTSDB to work with Google Cloud Bigtable instead of HBase, and we hope to be able to share the results with the community shortly. Having Bigtable as a back end for OpenTSDB opens a lot of opportunities. It will provide you with a managed cloud-based time series database, which can be scaled on demand and doesn’t require much maintenance effort.
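To illustrate how small the switch can be, pointing a stock HBase-API application at Bigtable is largely a configuration exercise. The hbase-site.xml sketch below uses property names from the early Cloud Bigtable client as I recall them; treat them as assumptions and verify against the current documentation:

<!-- hbase-site.xml: route the standard HBase client API to Cloud Bigtable -->
<!-- property names are assumptions; project/cluster/zone values are placeholders -->
<configuration>
  <property>
    <name>hbase.client.connection.impl</name>
    <value>com.google.cloud.bigtable.hbase1_0.BigtableConnection</value>
  </property>
  <property>
    <name>google.bigtable.project.id</name>
    <value>my-project</value>
  </property>
  <property>
    <name>google.bigtable.cluster.name</name>
    <value>my-cluster</value>
  </property>
  <property>
    <name>google.bigtable.zone.name</name>
    <value>us-central1-b</value>
  </property>
</configuration>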

There are some challenges that we have to deal with, especially around the client that OpenTSDB uses to connect to HBase. OpenTSDB uses its own implementation of an HBase client, called AsyncHBase. It is compatible at the wire protocol level with HBase 0.98, but uses a custom async Java library to allow for asynchronous interaction with HBase. This custom implementation allows OpenTSDB to perform HBase operations much faster than the standard HBase client does.

While HBase API 1.0.0 introduced some asynchronous behavior with BufferedMutator, it is not a trivial task to replace AsyncHBase with the standard HBase client, because AsyncHBase is tightly coupled with the rest of the OpenTSDB code. Pythian engineers are trying out several ideas on how to make the transition to the standard client look seamless from an OpenTSDB perspective. Once we have the standard HBase client working, connecting OpenTSDB to Bigtable should be simple.

Stay tuned.

Categories: DBA Blogs

db file parallel read on Linux and HP-UX

Bobby Durrett's DBA Blog - Tue, 2015-05-05 13:39

In my previous post I described how I could not explain why I got better db file parallel read wait times in a test on Linux than I got running the same test on HP-UX.  I have discovered that the Linux wait times were better because Linux cached the data in the filesystem cache and HP-UX did not.

Neither system used direct I/O for the tests so both could cache data in the filesystem cache.  Evidently Linux does this faster than HP-UX.  I figured this out by repeatedly running the query flushing the buffer cache before each run.  Flushing the buffer cache prevented the table and index from being cached within the database.  On Linux the query ran for the same amount of time for all 5 executions.  On HP-UX it ran much faster after running it for the first time.  Apparently Linux cached the table and index before the first run and HP-UX cached them after the first run.

Here is how I ran the query:

alter system flush buffer_cache;

select /*+ index(test testi) */ sum(blocks) from test;

alter system flush buffer_cache;

select /*+ index(test testi) */ sum(blocks) from test;

alter system flush buffer_cache;

select /*+ index(test testi) */ sum(blocks) from test;

alter system flush buffer_cache;

select /*+ index(test testi) */ sum(blocks) from test;

alter system flush buffer_cache;

select /*+ index(test testi) */ sum(blocks) from test;

Here are the elapsed times for the query on Linux:

Elapsed: 00:00:09.16
Elapsed: 00:00:09.17
Elapsed: 00:00:09.28
Elapsed: 00:00:09.18
Elapsed: 00:00:09.20

Here is the same thing on HP-UX:

Elapsed: 00:01:03.27
Elapsed: 00:00:19.23
Elapsed: 00:00:19.28
Elapsed: 00:00:19.35
Elapsed: 00:00:19.43

It’s not surprising that the HP-UX times with the data cached are twice that of Linux.  An earlier post found the processor that I am evaluating on Linux was about twice as fast as the one I’m using on HP-UX.

Just to double-check that the caching was really at the filesystem level I turned direct I/O on for the Linux system using this parameter:

alter system set filesystemio_options=DIRECTIO scope=spfile;

I ran the test again after bouncing the database to make the parameter take effect and the run times were comparable to the slow first run on HP-UX:

Elapsed: 00:01:12.03
Elapsed: 00:01:06.69
Elapsed: 00:01:12.98
Elapsed: 00:01:10.14
Elapsed: 00:01:07.21

So, it seems that without the filesystem cache this query takes about 1 minute to run on either system.  With caching the query runs under 20 seconds on both systems.
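A quick way to corroborate that the Linux speed-up comes from the filesystem (page) cache is to watch the "cached" figure around a run; a rough sketch, where the script name is hypothetical:

free -m                                # note the "cached" column first
sqlplus -s "/ as sysdba" @testquery    # flush the buffer cache and run the query
free -m                                # a cold page cache grows by roughly the table and index size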

In some ways I think that these results are not important.  Who cares if Linux caches things on the first attempt and HP-UX on the second?

The lesson I get from this test is that HP-UX and Linux are different in subtle ways and that when we migrate a database from HP-UX to Linux we may see performance differences that we do not expect.

Here is a zip of my script and its logs: zip

– Bobby

Categories: DBA Blogs

Status Of My SlideShare Material

Hemant K Chitale - Tue, 2015-05-05 09:30
My SlideShare Material has had 7,390 views to date.


.
.
.
Categories: DBA Blogs

Links for 2015-05-04 [del.icio.us]

Categories: DBA Blogs

OEM 12c Silent Installation

Pythian Group - Mon, 2015-05-04 13:17

“What’s for lunch today?”, said the newly born, ready-to-run Red Hat 6.4 server.

“Well, I have an outstanding 3-course meal of OEM12c installation.
For the appetizer, a light and crispy ASM 12c;
DB 12c with patching for the main; and for dessert, to cover everything up, OEM 12c setup and configuration”, replied the DBA, who was really happy to prepare such a great meal for his new friend.

“Ok, let’s start cooking, it won’t take long”, said the DBA and took all his cookware (software), prepared ingredients (disk devices) and got the grid infrastructure cooked:

./runInstaller -silent \
-responseFile /home/oracle/install/grid/response/grid_install.rsp -showProgress \
INVENTORY_LOCATION=/u01/app/oraInventory \
SELECTED_LANGUAGES=en \
oracle.install.option=HA_CONFIG \
ORACLE_BASE=/u01/app/oracle \
ORACLE_HOME=/u01/app/oracle/product/12.1.0/grid \
oracle.install.asm.OSDBA=dba \
oracle.install.asm.OSASM=dba \
oracle.install.crs.config.storageOption=LOCAL_ASM_STORAGE \
oracle.install.asm.SYSASMPassword=sys_pwd \
oracle.install.asm.diskGroup.name=DATA \
oracle.install.asm.diskGroup.redundancy=EXTERNAL \
oracle.install.asm.diskGroup.AUSize=4 \
oracle.install.asm.diskGroup.disks=/dev/asm-disk1 \
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asm* \
oracle.install.asm.monitorPassword=sys_pwd \
oracle.install.config.managementOption=NONE

And added some crumbs:

/u01/app/oracle/product/12.1.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/tmp/asm.rsp
where /tmp/asm.rsp had:
oracle.assistants.asm|S_ASMPASSWORD=sys_pwd
oracle.assistants.asm|S_ASMMONITORPASSWORD=sys_pwd

“It was a great starter”, said the server finishing the first dish,

“I am getting even more hungry. What’s for the main?”.

“Oh, you will love it! It is Database 12c. It is one of these new meals and it is already very popular”, answered the DBA enthusiastically and continued cooking.

“Looking forward to trying it”, the server decided to have a nap until the dish was ready.

“You asked, you got it”, and the DBA gave the server the dish he had never tried:

./runInstaller -silent -showProgress \
-responseFile /home/oracle/install/database/response/db_install.rsp \
oracle.install.option=INSTALL_DB_SWONLY \
ORACLE_BASE=/u01/app/oracle \
ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 \
oracle.install.db.InstallEdition=EE oracle.install.db.DBA_GROUP=dba \
oracle.install.db.BACKUPDBA_GROUP=dba \
oracle.install.db.DGDBA_GROUP=dba \
oracle.install.db.KMDBA_GROUP=dba \
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
DECLINE_SECURITY_UPDATES=true

The topping ingredient was of course a brand new database:

./dbca -silent -createDatabase -gdbName em12 \
-templateName General_Purpose.dbc \
-emConfiguration none \
-sysPassword sys_pwd \
-systemPassword sys_pwd \
-storageType ASM \
-asmsnmpPassword sys_pwd \
-diskGroupName DATA \
-redoLogFileSize 100 \
-initParams log_buffer=10485760,processes=500,\
session_cached_cursors=300,db_securefile=PERMITTED \
-totalMemory 2048

“Delicious! That’s what I dreamt of! Where did you find it?”, the server could not hide his admiration.

“Well, you have not tried dessert yet. When you have it, you will forget all those dishes that you had before.”

“Hmm, you intrigue me. Definitely I will have it!”

“Anything for you, my friend”, and the DBA cooked his famous, rich and delicious dessert:

./runInstaller -silent \
-responseFile /home/oracle/install/em/response/new_install.rsp \
-staticPortsIniFile /tmp/ports.ini \
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
DECLINE_SECURITY_UPDATES=true \
ORACLE_MIDDLEWARE_HOME_LOCATION=/u01/em12 \
AGENT_BASE_DIR=/u01/agent12c \
WLS_ADMIN_SERVER_USERNAME=weblogic \
WLS_ADMIN_SERVER_PASSWORD=Sun03day03 \
WLS_ADMIN_SERVER_CONFIRM_PASSWORD=Sun03day03 \
NODE_MANAGER_PASSWORD=Sun03day03 \
NODE_MANAGER_CONFIRM_PASSWORD=Sun03day03 \
ORACLE_INSTANCE_HOME_LOCATION=/u01/gc_inst \
CONFIGURE_ORACLE_SOFTWARE_LIBRARY=true \
SOFTWARE_LIBRARY_LOCATION=/u01/sw_lib \
DATABASE_HOSTNAME=oem12c.home \
LISTENER_PORT=1521 \
SERVICENAME_OR_SID=em12 \
SYS_PASSWORD=sys_pwd \
SYSMAN_PASSWORD=Sun03day03 \
SYSMAN_CONFIRM_PASSWORD=Sun03day03 \
DEPLOYMENT_SIZE="SMALL" \
MANAGEMENT_TABLESPACE_LOCATION="+DATA" \
CONFIGURATION_DATA_TABLESPACE_LOCATION="+DATA" \
JVM_DIAGNOSTICS_TABLESPACE_LOCATION="+DATA" \
AGENT_REGISTRATION_PASSWORD=Sun03day03 \
AGENT_REGISTRATION_CONFIRM_PASSWORD=Sun03day03

“You made my day!” exclaimed the server when nothing was left on his plate.

“Anytime my friend!” smiled the DBA in response.

He was as happy as any chef whose cooking had gone as planned and whose final product was just as the recipe had said.

Have a good day!

Categories: DBA Blogs

Log Buffer #421: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-05-04 11:29

As always, this fresh Log Buffer Edition shares some of the unusual yet innovative and information-rich blog posts from across the realms of Oracle, SQL Server and MySQL.

Oracle:

  • A developer reported problems when running a CREATE OR REPLACE TYPE statement in a development database. It was failing with an ORA-00604 followed by an ORA-00001. These messages could be seen again and again in the alert log.

  • Few Random Solaris Commands : intrstat, croinfo, dlstat, fmstat for Oracle DBA
  • When to use Oracle Database In-Memory?
  • Oracle Linux and Oracle VM at EMCWorld 2015
  • SQLcl connections – Lazy mans SQL*Net completion

SQL Server:

  • SQL Server expert Wayne Sheffield looks into the new T-SQL analytic functions coming in SQL Server 2012.
  • The difference between the CONCAT function and the STUFF function lies in the fact that CONCAT allows you to append a string value at the end of another string value, whereas STUFF allows you to insert or replace a string value into or in between another string value.
  • After examining the SQLServerCentral servers using the sp_Blitz™ script, Steve Jones now looks at how we will use the script moving forward.
  • Big data applications are not usually considered mission-critical: while they support sales and marketing decisions, they do not significantly affect core operations such as customer accounts, orders, inventory, and shipping. Why, then, are major IT organizations moving quickly to incorporate big data in their disaster recovery plans?
  • There are no more excuses for not having baseline data. This article introduces a comprehensive Free Baseline Collector Solution.

MySQL:

  • MariaDB 5.5.43 now available
  • Testing MySQL with “read-only” filesystem
  • There are tools like pt-kill from the Percona Toolkit that may print/kill long-running transactions on MariaDB, MySQL or Percona Server instances, but a lot of backup scripts are just some simple bash lines.
  • Optimizer hints in MySQL 5.7.7 – The missed manual
  • Going beyond 1.3 MILLION SQL Queries/second
Categories: DBA Blogs

Quick Tip : Oracle User Ulimit Doesn’t Reflect Value on /etc/security/limits.conf

Pythian Group - Mon, 2015-05-04 11:21

So the other day I was trying to do a fresh installation of Oracle EM12cR4 in a local VM, and since I was doing it with DB 12c, I decided to use the Oracle preinstall RPM to ease the installation of the OMS repository database. Also, I was doing both the repository and the EM12c OMS install in the same VM; that is important to know.

[root@em12cr4 ~]# yum install oracle-rdbms-server-12cR1-preinstall -y

I was able to install the DB without any issues, but when I was trying to do the installation of EM12cR4, an error in the pre-requisites popped up:

WARNING: Limit of open file descriptors is found to be 1024.

For proper functioning of OMS, please set “ulimit -n” to be at least 4096.

And when I checked the soft limit on open file descriptors for the oracle user, it was indeed set to 1024:

oracle@em12cr4.localdomain [emrep] ulimit -n
1024

So if you have been working with Oracle DBs for a while, you know that this has to be checked and modified in /etc/security/limits.conf, but to my surprise the limit had already been set correctly for the oracle user, to at least 4096:

[root@em12cr4 ~]# cat /etc/security/limits.conf | grep -v "#" | grep  nofile
oracle   soft   nofile   4096
oracle   hard   nofile   65536

So my next train of thought was to verify the user's bash profile settings, since ulimits set there can override limits.conf, but again to my surprise there was nothing in there, and that is where I was perplexed:

[oracle@em12cr4 ~]# cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
	. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH

So what I did next was open a root terminal and do a trace of the login of the Oracle user:

[root@em12cr4 ~]# strace -o loglimit su - oracle

And in another terminal I verified what the user was reading regarding limits, and this is where I hit the jackpot. I was able to see here that it was reading pam_limits.so and /etc/security/limits.conf as it should, but it was also reading another configuration file called oracle-rdbms-server-12cR1-preinstall.conf (does this look familiar to you? :) ), and as you can see, RLIMIT_NOFILE was being set to 1024:

[root@em12cr4 ~]# grep "limit" loglimit
getrlimit(RLIMIT_STACK, {rlim_cur=10240*1024, rlim_max=32768*1024}) = 0
open("/lib64/security/pam_limits.so", O_RDONLY) = 6
...
open("/etc/security/limits.conf", O_RDONLY) = 3
read(3, "# /etc/security/limits.conf\n#\n#E"..., 4096) = 2011
open("/etc/security/limits.d", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
open("/etc/security/limits.d/90-nproc.conf", O_RDONLY) = 3
read(3, "# Default limit for number of us"..., 4096) = 208
open("/etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf", O_RDONLY) = 3
setrlimit(RLIMIT_STACK, {rlim_cur=10240*1024, rlim_max=32768*1024}) = 0
setrlimit(RLIMIT_NPROC, {rlim_cur=16*1024, rlim_max=16*1024}) = 0
setrlimit(RLIMIT_NOFILE, {rlim_cur=1024, rlim_max=64*1024}) = 0

So I went ahead and checked the file /etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf and evidently, that is where the limit was set to 1024, so the only thing I did was change the value there to 4096:

[root@em12cr4 ~]# cat /etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf | grep -v "#" | grep nofile
oracle   soft   nofile    1024
oracle   hard   nofile    65536
[root@em12cr4 ~]# vi /etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf
[root@em12cr4 ~]# cat /etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf | grep -v "#" | grep nofile
oracle   soft   nofile    4096
oracle   hard   nofile    65536

Once I made that change and logged out and back in, I saw the values that I had set in the first place in /etc/security/limits.conf, and I was able to proceed with the installation of EM12cR4:

oracle@em12cr4.localdomain [emrep] ulimit -n
4096

Conclusion

So when you install the oracle-rdbms-server-12cR1-preinstall RPM, be aware that whenever you change user limits, there might be another configuration file under /etc/security/limits.d/ setting values other than the ones you configured in /etc/security/limits.conf.
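A one-liner saves the strace next time; it checks every place a nofile limit for a user can come from:

grep -r "nofile" /etc/security/limits.conf /etc/security/limits.d/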

Note: This was originally published at rene-ace.com

Categories: DBA Blogs

Do you encounter problems with JD Edwards 9.1 Standalone DEMO Installation?

JD Edwards 9.1 Standalone DEMO gives you an insight into JD Edwards EnterpriseOne, an integrated applications suite of comprehensive enterprise resource planning software that combines business...

We share our skills to maximize your revenue!
Categories: DBA Blogs

I love Live Demos – how about you?

The Oracle Instructor - Mon, 2015-05-04 05:59

Tired of boring slide-shows? Join me for free to see Oracle core technology live in action!

Live demonstrations have always been a key part of my classes, because I consider them one of the best ways to teach.

This is your opportunity to have a glimpse into my classroom and watch a demo just as I have delivered it there.

Apparently, not many speakers are keen to do things live, so the term Demonar (Demonstration + Seminar) was just waiting for me to invent it :-)

A positive effect on your attitude towards LVCs and Oracle University Streams with its live webinars is intended as well, since the setting and platform there are very similar.


Categories: DBA Blogs

Webcast - Oracle Database Backup Service

As Oracle continues the Oracle Cloud expansion, it helps organizations more rapidly adopt and utilize hybrid cloud solutions, which can securely and seamlessly integrate public cloud solutions...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Added a page about my LVC schedule

The Oracle Instructor - Sun, 2015-05-03 06:49

I often get asked by customers about my schedule, so they can book a class with me. This page now shows my scheduled Live Virtual Classes. I deliver most of my public classes in that format, and you can attend from all over the world :-)


Categories: DBA Blogs

db file parallel read faster on Linux than HP-UX?

Bobby Durrett's DBA Blog - Fri, 2015-05-01 16:39

I am still working on comparing performance between an HP-UX blade and a Linux virtual machine and I have a strange result.  I tried to come up with a simple example that would do a lot of single block I/O.  The test runs faster on my Linux system than my HP-UX system and I’m not sure why.  All of the parameters are the same, except the ones that contain the system name and filesystem names.  Both systems are 11.2.0.3.  The dramatic difference in run times corresponds to an equally dramatic difference in db file parallel read wait times.

I created a table called TEST and populated it with data and added an index called TESTI. I ran this query to generate a lot of single block I/O:

select /*+ index(test testi) */ sum(blocks) from test;

Here is the result on HP:

SUM(BLOCKS)
-----------
 1485406208

Elapsed: 00:01:28.38

Statistics
----------------------------------------------------------
          9  recursive calls
          0  db block gets
    3289143  consistent gets
     125896  physical reads
      86864  redo size
        216  bytes sent via SQL*Net to client
        248  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

select EVENT,TOTAL_WAITS,TIME_WAITED,AVERAGE_WAIT
  2  FROM V$SESSION_EVENT a
  3  WHERE a.SID= :monitored_sid
  4  order by time_waited desc;

EVENT                          TOTAL_WAITS TIME_WAITED AVERAGE_WAIT
------------------------------ ----------- ----------- ------------
db file parallel read                 4096        6760         1.65
db file sequential read              14526         236          .02
events in waitclass Other                1          28        28.49
SQL*Net message from client             19           5          .28
db file scattered read                   5           3          .65
SQL*Net message to client               20           0            0
Disk file operations I/O                 1           0          .01

Here is the same thing on Linux:

SUM(BLOCKS)
-----------
  958103552

Elapsed: 00:00:09.01

Statistics
----------------------------------------------------------
          9  recursive calls
          0  db block gets
    3289130  consistent gets
     125872  physical reads
      77244  redo size
        353  bytes sent via SQL*Net to client
        360  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

select EVENT,TOTAL_WAITS,TIME_WAITED,AVERAGE_WAIT
  2  FROM V$SESSION_EVENT a
  3  WHERE a.SID= :monitored_sid
  4  order by time_waited desc;

EVENT                          TOTAL_WAITS TIME_WAITED AVERAGE_WAIT
------------------------------ ----------- ----------- ------------
db file parallel read                 4096          55          .01
events in waitclass Other                1          17        16.72
db file sequential read              14498          11            0
SQL*Net message from client             19           6          .31
db file scattered read                  15           0            0
SQL*Net message to client               20           0            0
Disk file operations I/O                 1           0            0

Something doesn’t seem right.  Surely there is some caching somewhere.  Is it really possible that the Linux version runs in 9 seconds while the HP-UX one runs in a minute and a half?  Is it really true that db file parallel read averages 1.65 hundredths of a second on HP-UX and 0.01 hundredths of a second on Linux?

I’m still working on this but thought I would share the result since it is so strange.

Here is a zip of my scripts and their logs if you want to check them out: zip

– Bobby

p.s. Here are some possibly relevant parameters, the same on both systems:

compatible                   11.2.0.0.0
cpu_count                    4
db_block_size                8192
db_cache_size                512M
db_writer_processes          2
disk_asynch_io               FALSE
dispatchers                  (PROTOCOL=TCP)(DISPATCHERS=32)
filesystemio_options         ASYNCH
large_pool_size              32M
log_buffer                   2097152
max_shared_servers           12
pga_aggregate_target         5871947670
sga_max_size                 3G
sga_target                   3G
shared_pool_size             1G
shared_servers               12
star_transformation_enabled  FALSE
Categories: DBA Blogs

Parallel Execution -- 5 Parallel INSERT

Hemant K Chitale - Fri, 2015-05-01 09:57
Oracle permits Parallel DML (as with Parallel Query, this requires the Enterprise Edition).

Unlike Parallel Query, Parallel DML is *not* enabled by default.  You must explicitly enable it with an ALTER SESSION ENABLE PARALLEL DML.

The most common usage is Parallel INSERT.

Parallel Insert uses PX servers to execute the Insert.  Ideally, it makes sense to use Parallel Query to drive the Parallel Insert.  Each PX server doing the Insert executes a Direct Path Insert --- it allocates one or more extents to itself and inserts rows into that extent.  Effectively, the Parallel Insert creates a temporary segment.  When the whole Insert is successful, these extents of the temporary segment are merged into the target table (and the temporary segment ceases to exist).

Note that there are four consequences of this behaviour :

(a) Any empty or usable blocks in the existing extents are NOT used for the new rows.  The table *always* grows in allocated space even if there are empty blocks.

(b) Depending on the number of PX servers used, this method allocates more new extents than would a normal (Serial) Insert.

(c) The rows inserted are not visible even to the session that executed the Insert until and unless it issues a COMMIT.  Actually, the session cannot even re-query the same table without a COMMIT, even if the query would hit only pre-existing rows.  (This does not prevent the session from querying some other table before the COMMIT.)

(d) The Direct Path Insert does not require large Undo space.  It does not record all the rowids in Undo.  It only needs to track the temporary segment and extents to be discarded should a ROLLBACK be issued.  So, it uses minimal Undo space.


[oracle@localhost ~]$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Fri May 1 22:46:46 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: hemant/hemant

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

HEMANT>alter session enable parallel dml;

Session altered.

HEMANT>create table another_large_table as select * from large_table where 1=2;

Table created.

HEMANT>insert /*+ PARALLEL */
2 into another_large_table
3 select /*+ PARALLEL */ *
4 from large_table;

4802944 rows created.

HEMANT>!ps -ef |grep ora_p0
oracle 3637 1 0 22:47 ? 00:00:00 ora_p000_orcl
oracle 3639 1 0 22:47 ? 00:00:00 ora_p001_orcl
oracle 3641 1 0 22:47 ? 00:00:00 ora_p002_orcl
oracle 3643 1 0 22:47 ? 00:00:00 ora_p003_orcl
oracle 3680 3618 0 22:50 pts/1 00:00:00 /bin/bash -c ps -ef |grep ora_p0
oracle 3682 3680 0 22:50 pts/1 00:00:00 grep ora_p0

HEMANT>
HEMANT>select count(*) from another_large_table;
select count(*) from another_large_table
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


HEMANT>


So, we see that 4 PX servers were used. We also see that the session cannot re-query the table.
What evidence do we have of the temporary segment and extents ?

SYS>select owner, segment_name, tablespace_name, extents, bytes/1048576
2 from dba_segments
3 where segment_type = 'TEMPORARY'
4 /

OWNER SEGMENT_NAME TABLESPACE_NAME EXTENTS BYTES/1048576
------------ ------------ --------------- ---------- -------------
HEMANT 11.54579 HEMANT 141 536.9375

SYS>

HEMANT>commit;

Commit complete.

HEMANT>

SYS>/

no rows selected

SYS>

The temporary segment no longer exists after the inserting session issues a COMMIT.  The extents of the temporary segment have been merged into the target table segment.

SYS>select owner, segment_name, tablespace_name, extents, bytes/1048576
2 from dba_segments
3 where segment_name = 'ANOTHER_LARGE_TABLE'
4 /

OWNER SEGMENT_NAME TABLESPACE_NAME EXTENTS BYTES/1048576
------------ ---------------------- --------------- ---------- -------------
HEMANT ANOTHER_LARGE_TABLE HEMANT 142 537

SYS>

Now, let's see if another Parallel Insert would be able to reuse usable table blocks.  We DELETE (*not* TRUNCATE !) the rows in the table and re-attempt a Parallel Insert.

HEMANT>delete another_large_table;

4802944 rows deleted.

HEMANT>commit;

Commit complete.

HEMANT>

SYS>select owner, segment_name, tablespace_name, extents, bytes/1048576
2 from dba_segments
3 where segment_name = 'ANOTHER_LARGE_TABLE';

OWNER SEGMENT_NAME TABLESPACE_NAME EXTENTS BYTES/1048576
------------ ---------------------- --------------- ---------- -------------
HEMANT ANOTHER_LARGE_TABLE HEMANT 142 537

SYS>

HEMANT>insert /*+ PARALLEL */
2 into another_large_table
3 select /*+ PARALLEL */ *
4 from large_table;

4802944 rows created.

HEMANT>

SYS>l
1 select owner, segment_name, tablespace_name, extents, bytes/1048576
2 from dba_segments
3* where segment_name = 'ANOTHER_LARGE_TABLE'
SYS>/

OWNER SEGMENT_NAME TABLESPACE_NAME EXTENTS BYTES/1048576
------------ ---------------------- --------------- ---------- -------------
HEMANT ANOTHER_LARGE_TABLE HEMANT 142 537

SYS>

HEMANT>commit;

Commit complete.

HEMANT>

SYS>l
1 select owner, segment_name, tablespace_name, extents, bytes/1048576
2 from dba_segments
3* where segment_name = 'ANOTHER_LARGE_TABLE'
SYS>
SYS>/

OWNER SEGMENT_NAME TABLESPACE_NAME EXTENTS BYTES/1048576
------------ ---------------------- --------------- ---------- -------------
HEMANT ANOTHER_LARGE_TABLE HEMANT 281 1073.9375

SYS>

We see that the inserted rows took another 139 extents and did NOT reuse any of the existing blocks even though they were all candidates for new rows.

This is something you must be extremely careful about!!  A Parallel Insert will "grow" the table by allocating new extents, ignoring all usable blocks in the table.  The only exception is if you have TRUNCATEd the target table.

HEMANT>truncate table another_large_table reuse storage;

Table truncated.

HEMANT>insert /*+ PARALLEL */
2 into another_large_table
3 select /*+ PARALLEL */ *
4 from large_table;

4802944 rows created.

HEMANT>

SYS>select s.username, s.sql_id, t.used_ublk
2 from v$session s, v$transaction t
3 where s.taddr=t.addr
4 /

USERNAME SQL_ID USED_UBLK
------------------------------ ------------- ----------
HEMANT 8g72bx3jy79gy 1

SYS>select sql_fulltext
2 from v$sqlstats
3 where sql_id = '8g72bx3jy79gy';

SQL_FULLTEXT
--------------------------------------------------------------------------------
insert /*+ PARALLEL */
into another_large_table
select /*+ PARALLEL */ *
from la


SYS>

Note how the 4.8-million-row Insert used only 1 Undo block.
HEMANT>commit;

Commit complete.

HEMANT>

SYS>select s.username, s.sql_id, t.used_ublk
2 from v$session s, v$transaction t
3 where s.taddr=t.addr
4 /

no rows selected

SYS>
SYS>select owner, segment_name, tablespace_name, extents, bytes/1048576
2 from dba_segments
3 where segment_name = 'ANOTHER_LARGE_TABLE'
4 /

OWNER SEGMENT_NAME TABLESPACE_NAME EXTENTS BYTES/1048576
------------ ---------------------- --------------- ---------- -------------
HEMANT ANOTHER_LARGE_TABLE HEMANT 140 537

SYS>

The TRUNCATE allowed the next Parallel Insert to reuse the extents.

.
.
.

Categories: DBA Blogs

Oracle Business Intelligence Security Considerations

As with many other systems, security is an important consideration in developing and administering an Oracle Business Intelligence System. When Oracle Business Intelligence is deployed on production...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Links for 2015-04-30 [del.icio.us]

  • www.slideshare.net
    via Blogs.Oracle.com/IMC - Slideshows by User: oracle_imc_team http://www.slideshare.net/
Categories: DBA Blogs