DBA Blogs

Executing a .sql file in a PL/SQL procedure

Tom Kyte - Mon, 2017-01-09 13:46
Hi, I am trying to execute a set of insert statements in a PL/SQL block. However, these insert scripts vary from time to time, hence I would like to save them in some location as a .sql file and execute them from PL/SQL. So I would like to know if it is poss...
Categories: DBA Blogs

issue with the usage of the oracle external table

Tom Kyte - Mon, 2017-01-09 13:46
Hi Tom, I have an Oracle external table in my database and I am running my database on a Linux operating system. I have a directory where I will be placing the (.csv) file and loading it into the external table. But there is an issue with the externa...
Categories: DBA Blogs

Using Oracle Big Data Lite Virtual Machine with Ravello trial

Big Data Lite Virtual Machine is the most straightforward way to start learning about the Oracle Big Data stack, or even to develop demos/proofs of concept on top of it (don't forget however that BD Lite...

Categories: DBA Blogs

V$RMAN_BACKUP_JOB_DETAILS, a caveat

Hemant K Chitale - Sun, 2017-01-08 23:46
Building on a previous blog post (you could read it before or after this post), here's a quick demo of a caveat or quirk with V$RMAN_BACKUP_JOB_DETAILS.

This is in 11.2.0.4 :

[oracle@ora11204 Desktop]$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Mon Jan 9 13:39:44 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1362461976)

RMAN> backup as compressed backupset
2> incremental level 1 database
3> plus archivelog ;


Starting backup at 09-JAN-17
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=153 device type=DISK
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=108 RECID=1245 STAMP=928093294
input archived log thread=1 sequence=109 RECID=1246 STAMP=928093719
input archived log thread=1 sequence=110 RECID=1247 STAMP=928093722
input archived log thread=1 sequence=111 RECID=1248 STAMP=928093724
...
...
...
input archived log thread=1 sequence=163 RECID=1318 STAMP=929802055
input archived log thread=1 sequence=164 RECID=1319 STAMP=932823436
input archived log thread=1 sequence=165 RECID=1320 STAMP=932823439
input archived log thread=1 sequence=166 RECID=1321 STAMP=932823606
channel ORA_DISK_1: starting piece 1 at 09-JAN-17
channel ORA_DISK_1: finished piece 1 at 09-JAN-17
piece handle=/u02/FRA/ORCL/backupset/2017_01_09/o1_mf_annnn_TAG20170109T134007_d768kr8l_.bkp tag=TAG20170109T134007 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:15
Finished backup at 09-JAN-17

Starting backup at 09-JAN-17
using channel ORA_DISK_1
no parent backup or copy of datafile 2 found
no parent backup or copy of datafile 1 found
no parent backup or copy of datafile 3 found
no parent backup or copy of datafile 4 found
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=/u01/app/oracle/oradata/ORCL/sysaux.dbf
input datafile file number=00001 name=/u01/app/oracle/oradata/ORCL/system.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ORCL/undotbs1.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ORCL/users.dbf
channel ORA_DISK_1: starting piece 1 at 09-JAN-17
channel ORA_DISK_1: finished piece 1 at 09-JAN-17
piece handle=/u02/FRA/ORCL/backupset/2017_01_09/o1_mf_nnnd0_TAG20170109T134123_d768n3v5_.bkp tag=TAG20170109T134123 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting compressed incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00011 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_hemant_cdgb60g4_.dbf
input datafile file number=00012 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_users_c552qnsh_.dbf
input datafile file number=00013 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_intermed_c552qpc7_.dbf
channel ORA_DISK_1: starting piece 1 at 09-JAN-17
channel ORA_DISK_1: finished piece 1 at 09-JAN-17
piece handle=/u02/FRA/ORCL/backupset/2017_01_09/o1_mf_nnnd1_TAG20170109T134123_d768ojmm_.bkp tag=TAG20170109T134123 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
Finished backup at 09-JAN-17

Starting backup at 09-JAN-17
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=167 RECID=1322 STAMP=932823743
channel ORA_DISK_1: starting piece 1 at 09-JAN-17
channel ORA_DISK_1: finished piece 1 at 09-JAN-17
piece handle=/u02/FRA/ORCL/backupset/2017_01_09/o1_mf_annnn_TAG20170109T134223_d768ozv9_.bkp tag=TAG20170109T134223 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 09-JAN-17

Starting Control File and SPFILE Autobackup at 09-JAN-17
piece handle=/u02/FRA/ORCL/autobackup/2017_01_09/o1_mf_s_932823745_d768p1jo_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 09-JAN-17

RMAN>


What does V$RMAN_BACKUP_JOB_DETAILS tell us ?

SQL> l
1 select to_char(start_time,'DD-MON HH24:MI') Start_At,
2 to_char(end_time,'DD-MON HH24:MI') End_At,
3 input_bytes/1048576 Input_MB, output_bytes/1048576 Output_MB,
4 input_type, status
5 from v$rman_backup_job_details
6 where start_time > trunc(sysdate)
7* order by start_time
SQL> /

START_AT END_AT INPUT_MB OUTPUT_MB INPUT_TYPE
--------------------- --------------------- ---------- ---------- -------------
STATUS
-----------------------
09-JAN 13:40 09-JAN 13:42 2917.06055 491.563477 DB INCR
COMPLETED


SQL>


The view does NOT show how much of the input/output was for ArchiveLogs.  It clubs the ArchiveLogs and the controlfile autobackup under the single entry for "DB INCR".  Anyone reading this row from V$RMAN_BACKUP_JOB_DETAILS would NOT know whether ArchiveLogs had been backed up, and would NOT know whether a controlfile/spfile autobackup was created.
Furthermore, if there were a failure (e.g. only the last ArchiveLog backupset failed ?), would you be able to identify what has successfully been backed up ?  Also see my previous blog post.
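
If you need that breakdown, one option is to drill down from the job row into the individual backup sets.  A minimal sketch (assuming your version exposes the SESSION_KEY / SESSION_RECID / SESSION_STAMP join columns and the CONTROLFILE_INCLUDED column in V$BACKUP_SET_DETAILS; verify the columns against your data dictionary):

select bsd.backup_type, bsd.incremental_level, bsd.controlfile_included,
       to_char(bsd.completion_time,'DD-MON HH24:MI') completed_at,
       bsd.output_bytes/1048576 output_mb
from   v$backup_set_details bsd, v$rman_backup_job_details j
where  bsd.session_key   = j.session_key
and    bsd.session_recid = j.session_recid
and    bsd.session_stamp = j.session_stamp
and    j.start_time > trunc(sysdate)
order  by bsd.completion_time;

BACKUP_TYPE distinguishes datafile, incremental and archived log backup sets, so the ArchiveLog and controlfile/spfile pieces become visible again.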
.
.
.


Categories: DBA Blogs

error in packages

Tom Kyte - Sun, 2017-01-08 19:26
I have a table and packages like this, but I am getting an error while executing it, so please help me: CREATE TABLE LOGGING_DATA_HDR ( LOG_ID NUMBER, TABLE_NAME VARCHAR2 (30 CHAR) NOT NULL, PK_DATA VARCHAR2 (500 BYTE...
Categories: DBA Blogs

Need to Know the process & result of my question

Tom Kyte - Sun, 2017-01-08 19:26
The emp and dept tables are present for reference. SQL> select * from dept; DEPTNO DNAME LOC ---------- -------------- ------------- 10 ACCOUNTING NEW YORK 20 RESEARCH DALLAS 30 SALES CHICAGO ...
Categories: DBA Blogs

A select query uses a full table scan. How to force the query to use an index scan option.

Tom Kyte - Sun, 2017-01-08 19:26
A select query is using a full table scan. The execution takes too long. How can I force the query to use an index scan?
Categories: DBA Blogs

How do I call a MySQL stored procedure from Oracle using an HS link?

Tom Kyte - Sun, 2017-01-08 19:26
How do I call a MySQL stored procedure from Oracle using an HS link? The HS link is correct and I can get the tables' data just by adding '@' at the end of the table name, but I can't call the SP the same way! Help please.
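
One commonly used approach with a Heterogeneous Services / Database Gateway link (not necessarily the answer given in the thread above) is to ship the CALL statement through DBMS_HS_PASSTHROUGH. A minimal sketch, with a hypothetical link name mysql_link and procedure my_proc:

declare
  l_rows integer;
begin
  -- send the statement as-is to MySQL over the HS database link (hypothetical names)
  l_rows := dbms_hs_passthrough.execute_immediate@mysql_link('CALL mydb.my_proc(42)');
  commit;
end;
/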
Categories: DBA Blogs

AZORA Meeting January 26th

Bobby Durrett's DBA Blog - Fri, 2017-01-06 15:40

The next Arizona Oracle User Group meeting is January 26th in Scottsdale. Sign up on this meetup link.

Copied from invitation:

When:  January 26, 2017 (Thursday)
12:30 pm – 4:00 pm

Where:  Republic Services
1st Floor Training Rooms
14400 N 87th St (AZ101 & Raintree)
Scottsdale, AZ

Agenda:

12:30 – 1:00 Registration and Pizza

1:00-1:10  Welcome

1:10-2:10  Presentations

Room 1    Biju Thomas – OneNeck IT Solutions
(Oracle ACE Director)
“Oracle Database 12c New Features for 11gR2 DBA”

Room 2   Charles Kim – Viscosity North America
(Oracle ACE Director)
“Bullet Proof Your Data Guard Environment”

2:10-2:25  Break – Coffee & Cookies

2:25-3:25  Presentations

Room 1   Biju Thomas – OneNeck IT Solutions
(Oracle ACE Director)
“Introduction to Oracle Databases in the Cloud”

Room 2  Jerry Ward – Viscosity North America
“Building Faceted Search Navigation in APEX
with Oracle JET and  PL/SQL Pipelines”

3:25-3:30    Wrap Up and Closing

Categories: DBA Blogs

Improved VirtualBox Network

Bobby Durrett's DBA Blog - Fri, 2017-01-06 09:33

Before I left for vacation I noticed this blog post about setting up a network on VirtualBox: “CREATING A TEST LAB USING VIRTUALBOX / NAT NETWORKING”. Now that I am back in the office I thought I would try to apply some of the things that I learned from the blog post to my existing VirtualBox network. I’m pretty happy with the results so I thought I would document my experience here.

My situation differed slightly from the situation in the original post:

  1. I’m using VirtualBox 5.1.12 instead of 5.0.4
  2. I’m using Oracle Enterprise Linux 7.3 instead of CentOs 6
  3. I have 6 existing VMs instead of starting from scratch

I was cautious at first about trying the things recommended in the post because I didn’t want to mess up my existing VMs. These are all just test environments, but it took some work to set them up. But I wasn’t happy with the way the network was set up on my VMs, so that motivated me to try something different. My biggest problem with my network was that I couldn’t SSH into my VMs when I worked from home using my work VPN connection.

After re-configuring my network I now have a static IP address for each VM within its own NAT network, and I have port forwarding set up so that I can SSH into each one even when I’m working from home using VPN into my company’s network. I also have port forwarding so that I can log in to an Oracle database on one of the VMs from my laptop even when using VPN.

The original blog post has most of the details but I thought that I would document a few minor differences that I came across to be helpful.

The default network name was NatNetwork instead of LocalNat.
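
For reference, this kind of NAT network and its port-forwarding rules can also be set up from the command line with VBoxManage; a rough sketch (the guest IP 10.0.2.10 and the host ports 2201/11521 are made-up examples, not my actual values):

VBoxManage natnetwork add --netname NatNetwork --network "10.0.2.0/24" --enable
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "ssh-vm1:tcp:[]:2201:[10.0.2.10]:22"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "ora-vm1:tcp:[]:11521:[10.0.2.10]:1521"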

In port forwarding I forwarded a port to port 1521 on the VM that has an Oracle database. I had to disable and stop the firewall on that VM so that I could connect to the Oracle database.
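
On OEL 7.3 that firewall is firewalld, so this amounts to something like the following, run as root on the VM (alternatively you could leave the firewall running and just open port 1521):

systemctl stop firewalld
systemctl disable firewalld
# or, less drastic: open only the listener port
firewall-cmd --permanent --add-port=1521/tcp
firewall-cmd --reload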

I went to /etc/sysconfig/network-scripts and looked at the configuration file and the format wasn’t the same as in the original post. This is probably because it is OEL 7.3 instead of CentOs 6. Instead of editing the file manually I ended up using the graphical network configuration app so that I knew I would edit the configuration correctly.

Later, after I wrote the first draft of this blog post I realized that I wanted to set up the network so that it automatically appends a domain name for our internal systems. I.e. if my standard domain is mycompany.com and I want to connect to mymachine.mycompany.com it would be nice to just connect to mymachine without fully qualifying the name with the domain. I ended up manually editing the file /etc/sysconfig/network-scripts/ifcfg-enp0s3 after googling around on the Internet to see how to add this. I added a DOMAIN line like this:

DOMAIN="mydomain1.com mydomain2.com"
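
For context, a static-IP ifcfg-enp0s3 typically ends up looking roughly like this (a sketch with made-up addresses, only to show where the DOMAIN line sits among the static-IP settings):

BOOTPROTO=none
ONBOOT=yes
IPADDR=10.0.2.10
PREFIX=24
GATEWAY=10.0.2.1
DNS1=192.168.1.1
DOMAIN="mydomain1.com mydomain2.com"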

I created two text files to keep track of two things about my network configuration:

  1. The ports that VirtualBox forwards
  2. The ip address for each vm

I had to get the DNS server ip addresses from my laptop’s network settings.

Anyway, I don’t have much to add to the original post except just to point out these slight differences and my experience. It was super helpful to switch to this NAT network with port forwarding approach.

Bobby

P.S. After writing the first draft of this blog post I did some more testing. There is one thing about this configuration that I don’t like. It seems that anyone on my company’s network can SSH into one of my VMs if they use the forwarded port number and my laptop’s IP address.  I tried this from one of our Unix servers and it worked disturbingly well, even over the VPN. I think that the original blog post wanted this behavior, but I was just trying to connect from my laptop to my VMs when I was using VPN. Still, it might come in handy to SSH into my VMs from other computers on our network. At least it isn’t opening up every port on my VM – only the ones I have set up with port forwarding. Also, I usually only bring these VMs up for short periods of time, so they won’t be available for people to hack into except for short, unpredictable intervals when I’m actively using them.

Categories: DBA Blogs

External File with CHARACTERSET WE8MSWIN1252 does not create a .bad file (added additional information at the end)

Tom Kyte - Thu, 2017-01-05 00:06
So, I queried the following to (hopefully) answer what you've asked for - SELECT PRODUCT, VERSION FROM SYS.PRODUCT_COMPONENT_VERSION; NLSRTL 11.2.0.3.0 Oracle Database 11g Enterprise Edition 11.2.0.3.0 PL/...
Categories: DBA Blogs

Deadlock - Missing FK index, but no update to parent PK value

Tom Kyte - Thu, 2017-01-05 00:06
Hi Tom, Thank you for the wealth of knowledge on this site. I have read many, many threads on this site describing situations similar to what I am about to describe. It's possible as well I am completely off here and this isn't related to FK at all....
Categories: DBA Blogs

Partner Webcast - Oracle Enterprise IaaS Works the Way Enterprise IT Works

As the demand for cloud grows, Oracle has different ways of assisting customers in lifting and shifting their existing applications from on-premises to the cloud, or building custom applications on the...

Categories: DBA Blogs

Oracle Massive Open Online Course: Streamline Continuous Deployment and Delivery Using Oracle's DevOps Tools and Solutions

We’re excited to announce a Massive Open Online Course (MOOC) covering DevOps, one of the most popular trends in the industry. As many analysts have pointed out, 2016 was the year...

Categories: DBA Blogs

Problem with multiconsumer AQ.

Tom Kyte - Tue, 2017-01-03 11:26
I'm trying to broadcast a message using dbms_aq.enqueue with a queue set up for multiple consumers. The consumers on the queue come and go, so building up a recipient_list through various SQL on the raw queue information is not an option. I'm stuc...
Categories: DBA Blogs

12cR1 RAC Posts -- 2 : Convert AdminManaged DB to PolicyManaged

Hemant K Chitale - Fri, 2016-12-30 22:14
I have an AdminManaged Database in my (2node) RAC Cluster.

How do I convert it to PolicyManaged ?

(Yes, let me admit :  It makes no sense to have PolicyManaged on a 2node Cluster.  But since I can't create an 8 or 16 node Cluster (with multiple databases to boot ?!), let me demonstrate with a 2node  Cluster.  The principle remains the same).

First, I show the configuration of the database in the Cluster :
[oracle@collabn1 ~]$ srvctl status database -d RAC
Instance RAC1 is running on node collabn1
Instance RAC2 is running on node collabn2
[oracle@collabn1 ~]$ srvctl config database -d RAC
Database unique name: RAC
Database name: RAC
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/spfileRAC1.ora
Password file: +DATA/RAC/PASSWORD/pwdrac.277.931824933
Domain: racattack
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: FRA,DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances: RAC1,RAC2
Configured nodes: collabn1,collabn2
Database is administrator managed
[oracle@collabn1 ~]$


This is the current definition of server pool(s) :
[oracle@collabn1 ~]$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names: collabn1,collabn2
[oracle@collabn1 ~]$


So, we see that I don't have any custom Server Pools defined. Only the default FREE and GENERIC (for AdminManaged database(s)) pools exist.
I now proceed to remove the database from the configuration.
[oracle@collabn1 ~]$ srvctl stop database -d RAC
[oracle@collabn1 ~]$ srvctl remove database -d RAC
Remove the database RAC? (y/[n]) y
[oracle@collabn1 ~]$


I now create a new (custom) Server Pool (called "MyPool").
[oracle@collabn1 ~]$ srvctl add srvpool -serverpool MyPool -importance 100 -min 1 -max 2 \
> -servers "collabn1,collabn2" -verbose
[oracle@collabn1 ~]$ srvctl config srvpool -serverpool MyPool
Server pool name: MyPool
Importance: 100, Min: 1, Max: 2
Category:
Candidate server names: collabn1,collabn2
[oracle@collabn1 ~]$


So, now with an "up to 2 nodes" Server Pool, I add my database to it.
[oracle@collabn1 ~]$ srvctl add database -d RAC -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1 \
> -serverpool MyPool -verbose
[oracle@collabn1 ~]$ srvctl config database -d RAC
Database unique name: RAC
Database name:
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile:
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: MyPool
Disk Groups:
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances:
Configured nodes:
Database is policy managed
[oracle@collabn1 ~]$


This shows that RAC is now a PolicyManaged database in the "MyPool" Server Pool !
Can I now start the database and check on the instance(s) ?
[oracle@collabn1 ~]$ srvctl start database -d RAC
PRCR-1079 : Failed to start resource ora.rac.db
CRS-5017: The resource action "ora.rac.db start" encountered the following error:
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/12.1.0/dbhome_1/dbs/initRAC_2.ora'
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/collabn2/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.rac.db' on 'collabn2' failed
CRS-5017: The resource action "ora.rac.db start" encountered the following error:
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/12.1.0/dbhome_1/dbs/initRAC_1.ora'
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/collabn1/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.rac.db' on 'collabn1' failed
CRS-2632: There are no more servers to try to place resource 'ora.rac.db' on that would satisfy its placement policy
[oracle@collabn1 ~]$


Yes, of course. I need to create initRAC_1.ora and initRAC_2.ora.
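A minimal sketch of what each pfile can contain (the SPFILE location is the one reported in the earlier srvctl config output; adjust for your environment):
[oracle@collabn1 ~]$ cat $ORACLE_HOME/dbs/initRAC_1.ora
SPFILE='+DATA/spfileRAC1.ora'
[oracle@collabn2 ~]$ cat $ORACLE_HOME/dbs/initRAC_2.ora
SPFILE='+DATA/spfileRAC1.ora'
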
After creating the new parameter files (pointing to the SPFILE in the ASM Diskgroup), I try again.
[oracle@collabn1 ~]$ srvctl start database -d RAC
[oracle@collabn1 ~]$ srvctl status database -d RAC
Instance RAC_1 is running on node collabn1
Instance RAC_2 is running on node collabn2
[oracle@collabn1 ~]$ ps -ef |grep smon
oracle 3447 1 0 11:37 ? 00:00:00 asm_smon_+ASM1
root 3605 1 0 11:37 ? 00:00:11 /u01/app/12.1.0/grid/bin/osysmond.bin
oracle 4203 1 0 11:38 ? 00:00:00 mdb_smon_-MGMTDB
oracle 22882 1 0 12:08 ? 00:00:00 ora_smon_RAC_1
oracle 23422 12657 0 12:10 pts/0 00:00:00 grep smon
[oracle@collabn1 ~]$
[oracle@collabn2 ~]$ ps -ef |grep smon
oracle 3495 1 0 11:41 ? 00:00:00 asm_smon_+ASM2
root 3593 1 0 11:41 ? 00:00:09 /u01/app/12.1.0/grid/bin/osysmond.bin
oracle 15973 1 0 12:08 ? 00:00:00 ora_smon_RAC_2
oracle 16647 4582 0 12:10 pts/0 00:00:00 grep smon
[oracle@collabn2 ~]$
[oracle@collabn1 ~]$ srvctl config database -d RAC
Database unique name: RAC
Database name:
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile:
Password file:
Domain: racattack
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: MyPool
Disk Groups: DATA,FRA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances:
Configured nodes:
Database is policy managed
[oracle@collabn1 ~]$


Yes, I now have instances RAC_1 and RAC_2 (instead of RAC1 and RAC2) running. If I had multiple (4 or more ?!) nodes (and a Server Pool configuration to match), there would be no guarantee that RAC_1 starts on collabn1 (and RAC_2 on collabn2).  These are "floating" instances that can start on any node in the Cluster.


(UPDATE : It seems that when I shut down a node, a PolicyManaged Instance gets a SHUTDOWN ABORT, unlike an AdminManaged Instance which gets a SHUTDOWN NORMAL ?)

.
.
.

Categories: DBA Blogs

Investigating IO performance on Amazon EC2

Pythian Group - Fri, 2016-12-30 14:23

I published a blog post called “Investigating IO Performance on Amazon RDS for Oracle” recently, and soon after posting it I received several questions asking if IO worked the same way on EC2 instances. My immediate thought was that it did, mostly because RDS for Oracle is basically an EC2 instance with Oracle Database on top of it, where the configuration is fully managed by Amazon. But as always, it’s better to test than assume, so here we go!

Although the testing was done by running a workload in an Oracle database, the results will apply to any other type of workload because the performance characteristics purely depend on the type of instance, type of EBS volume and the size of the IO requests, and it doesn’t matter how the data is processed after it’s retrieved from the storage.

The Testing

The testing was done exactly the same way as described in the previous blog post; the only difference was that I had to create the Oracle database manually myself. I used a 12.1.0.2.0 Enterprise Edition database and ASM, and the EBS volume was used as an ASM disk.

Measuring the IO performance

On RDS we had the nice Enhanced Monitoring, which I set up with a refresh interval of a few seconds and used to collect performance statistics quickly. For EC2 (specifically for EBS volumes) there is no such thing as Enhanced Monitoring, so I needed to use the standard CloudWatch monitoring with its minimum refresh rate of 5 minutes (very inconvenient, because a single test case would have to run for 5 to 10 minutes to collect reliable data). This was not acceptable, so I looked for alternatives and found that iostat displayed the same values the monitoring graphs did:

[root@ip-172-31-21-241 ~]# iostat 5 500
...
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.32    0.00    0.11    0.11    0.43   99.04

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
xvda              0.00         0.00         0.00          0          0
xvdf              0.40         0.00         8.00          0         40
xvdg           3060.00     24496.00        14.40     122480         72
...

The “tps” column shows IOs per second, and “kB_read/s” + “kB_wrtn/s” allowed me to calculate the throughput (I actually ended up using just kB_read/s, as my workload was 100% read-only and the values in kB_wrtn/s were tiny).
iostat is even more convenient to use than Enhanced Monitoring; it didn’t take long to see the first benefit of EC2 over RDS!

The Results

It was no surprise that the outcome of testing on EC2 was quite similar to the results from testing on RDS.

Provisioned IOPS Storage on an EC2 Instance

As on RDS, the testing was done on db.m4.4xlarge with 100G of io1 with 1000 provisioned IO/s. The results are also very similar; the only notable difference that I could observe (although I can’t explain it, and I’m not sure whether there is a pattern in it or not, as I didn’t do too many tests) was the fact that the throughput for 64K-96K IOs didn’t reach the same level as 128K+ IOs.

Provisioned IOPS Throughput (on EC2)

Provisioned IOPS Measured IO/s (on EC2)

These results confirm that (the same as with RDS), there are several sizes of physical IOs: (8), 16, 32, 64 and 128, and starting with 64K, the performance is throughput-bound, but with IOs of smaller size, it’s IOPS-bound.

General Purpose Storage on an EC2 Instance

The testing with General Purpose SSDs (100G with 300 baseline IOPS) didn’t provide any surprises, and the results were exactly the same as for RDS.
The only difference in the graphs is the “burst performance” measures for IOs of different sizes, which I’ve added to outline how “bursting” improves both IO/s and throughput.

General Purpose SSD Throughput (on EC2)

General Purpose SSD Measured IO/s (on EC2)

These results also confirm that (the same as with RDS), there are several sizes of physical IOs: 16, 32, 64 and 128, and starting with 32K, the performance is throughput-bound, but with IOs of smaller size, it’s IOPS-bound.

Additional Flexibility with EC2: Using Multiple gp2 Volumes

In contrast to RDS, I can configure my storage and instance more freely, so instead of having just a single gp2 volume attached to the instance I added five 1G-sized (yes, tiny) volumes to the +DATA disk group. The minimum IOPS for a gp2 volume is 100, so my 5 volumes gave a cumulative 500 baseline IOPS. As ASM was used, the IOs were +/- evenly distributed between the volumes.
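
A minimal sketch of how the extra volumes can be added to the disk group (the device names match the xvd* devices visible in the iostat output below; the exact device paths and permissions depend on how the disks are presented to ASM):

ALTER DISKGROUP DATA ADD DISK
  '/dev/xvdg', '/dev/xvdi', '/dev/xvdj', '/dev/xvdk', '/dev/xvdl';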

I didn’t do very thorough testing, but I still noticed a few things.
Take a look at this iostat output from testing done with 8K reads (this is burst performance):

[root@ip-172-31-21-241 ~]# iostat 5 500
Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
xvda              0.00         0.00         0.00          0          0
xvdf              3.40         0.00        16.80          0         84
xvdg           1203.40      9632.00         1.60      48160          8
xvdi           1199.60      9596.80         0.00      47984          0
xvdj           1211.60      9691.20         3.20      48456         16
xvdk           1208.60      9670.40         0.00      48352          0
xvdl           1203.00      9625.60         3.20      48128         16

 

  • Bursting performance applies to each volume separately. It should allow getting up to 3000 IOPS per volume, but I reached only ~1200 per volume, with a cumulative throughput of 48214 KB/s (not even close to the limit). So there’s some other limit or threshold that applies to this configuration (and it’s not the CPU). But look! I got 6024 IO/s of burst performance, which is quite remarkable for just 5G.
  • As I was not hitting the maximum 3000 bursting IOPS per volume, the burst credit was running out much more slowly. If it normally lasts ~40 minutes at 3000 IOPS, it lasts ~3 times longer at ~1200 IOPS, which would allow running at better performance for longer (i.e. if one used 5x2G volumes instead of a 1x10G volume).

This iostat output is from testing done with 1M reads (this is burst performance):

[root@ip-172-31-21-241 ~]# iostat 5 500
Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
xvda              0.00         0.00         0.00          0          0
xvdf              3.40         0.00        16.80          0         84
xvdg            384.40     48820.80         0.80     244104          4
xvdi            385.80     49155.20         0.00     245776          0
xvdj            385.00     49014.40         6.40     245072         32
xvdk            386.80     49225.60         0.00     246128          0
xvdl            385.00     48897.60         6.40     244488         32

 

  • The cumulative throughput is 245111 KB/s, which is very close to the throughput limit of the instance. I wasn’t able to reach such throughput on a single gp2 volume, where the maximum I observed was just 133824 KB/s; even the 163840 KB/s throughput limit of a single gp2 volume was exceeded here. It appears that configuring multiple volumes allows reaching the instance throughput limit, which was not possible with a single volume.

I didn’t run any non-burst tests as it required too much time (2 hours of waiting to exhaust the burst credits).

Database with a 32K Block Size

We have observed that starting with 32K block reads the EBS volume becomes throughput-bound, not IOPS-bound. Obviously I wanted to see how it performed if the database was created with a 32K block size.
I ran a few very simple tests using IOs of one data block (32K) on these two configurations (a sketch of the relevant init parameter follows the list):

  1. db.m4.4xlarge with 100G / 1000 PIOPS (io1)
  2. db.m4.4xlarge with 20G / 100 IOPS (gp2)
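
For completeness, the block size has to be chosen when the database is created; a minimal, hypothetical parameter-file excerpt for such a database would include:

*.db_block_size=32768    # must be set before CREATE DATABASE, cannot be changed afterwards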

There were no surprises on the Provisioned IOPS storage: I got the 1000 IOPS that were provisioned (actually slightly better, 1020 IO/s), and the throughput was 32576.00 KB/s.
On General Purpose SSD the story was different. We know that starting from 32K-sized IOs the performance becomes throughput-bound, and it was confirmed here too:

  • During the burst period I measured up to 4180 IO/s at 133779 KB/s, which was 4 times faster than Provisioned SSD.
  • During the non-burst period I measured up to 764 IO/s at 24748 KB/s of throughput, which is somewhat slower than Provisioned SSD. Also, 24748 KB/s was slower than the throughput I measured on a 100G gp2 volume (we already know that the non-burst throughput limit for gp2 depends on the size of the disk). If I used a 100G gp2 volume, I’d get 2359 IO/s at 75433 KB/s (this is from the graph above), which is also better than what one can get from a Provisioned SSD volume, and costs less.

Conclusions

Most of the conclusions were already outlined in the previous blog post, and they also apply to the EC2 instances when a single EBS volume is used for storage.

On the other hand, the EC2 instance allows system administrators and DBAs (or should I say “Cloud Administrators”) to work around some of the limitations by changing the “variables” that can’t be altered on RDS, such as the block size of the database (which is 8K on RDS) and the number of EBS volumes behind the RDS configuration. Using a 32K block size for a database residing on a General Purpose volume allows bypassing the IOPS limitation completely, so that only the throughput limits stay in effect. However, if a 32K block size is not an option (as for Oracle e-Business Suite), then the IOPS and throughput can still be maximized by using a configuration of multiple gp2 volumes.

After having done all these tests, I think the only reason for using RDS instead of EC2 is the database management that is provided by Amazon. If that is something very critical for your requirements, it’s the way to go. If it’s not something you require, then EC2 can be configured to perform better for the same price, but you need to think about its maintenance yourself.

Categories: DBA Blogs

Datapump in parallel with a shell script

DBA Scripts and Articles - Thu, 2016-12-29 14:23

I recently came across an issue while exporting a huge partitioned table for a data migration. The export took ages without any obvious reason. After some research I found that the parallelism wasn’t working; I could only see one Data Pump worker at a time. Then I found this note explaining the behaviour (Doc ID 1467662.1) … Continue reading Datapump in parallel with a shell script
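
The general idea (a rough sketch, not the script from the post; the table, partition names and connect string are placeholders) is to run one expdp session per partition in the background, so that several Data Pump jobs are active at the same time:

#!/bin/bash
# export each partition of SCOTT.BIG_TABLE in its own Data Pump job (placeholder names)
for P in P2016Q1 P2016Q2 P2016Q3 P2016Q4; do
  expdp system/"${SYSTEM_PWD}"@ORCL \
        tables=SCOTT.BIG_TABLE:${P} \
        directory=DATA_PUMP_DIR \
        dumpfile=big_table_${P}.dmp \
        logfile=big_table_${P}.log &
done
wait    # wait for all background exports to complete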

The post Datapump in parallel with a shell script appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Links for 2016-12-27 [del.icio.us]

Categories: DBA Blogs
