Feed aggregator

Investigating IO performance on Amazon EC2

Pythian Group - Fri, 2016-12-30 14:23

I recently published a blog post called "Investigating IO Performance on Amazon RDS for Oracle", and soon after posting it I received several questions asking if IO worked the same way on EC2 instances. My immediate thought was that it did, mostly because RDS for Oracle is basically an EC2 instance with Oracle Database on top, where the configuration is fully managed by Amazon. But as always, it's better to test than to assume, so here we go!

Although the testing was done by running a workload in an Oracle database, the results apply to any other type of workload, because the performance characteristics depend purely on the instance type, the EBS volume type and the size of the IO requests; it doesn't matter how the data is processed after it's retrieved from storage.

The Testing

The testing was done exactly the same way as described in the previous blog post; the only difference was that I had to create the Oracle database manually myself. I used Oracle Database 12.1.0.2.0 Enterprise Edition with ASM, and the EBS volume was used as an ASM disk.

Measuring the IO performance

On RDS we had the nice Enhanced Monitoring, which I had set up with a refresh interval of a few seconds and used to collect performance statistics quickly. For EC2 (specifically, for EBS volumes) there is no such thing as Enhanced Monitoring, so I was left with standard CloudWatch monitoring and its minimum refresh rate of 5 minutes (very inconvenient, because a single test case would have to run for 5 to 10 minutes to collect reliable data). This was not acceptable, so I looked for alternatives and found that iostat displayed the same values the monitoring graphs did:

[root@ip-172-31-21-241 ~]# iostat 5 500
...
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.32    0.00    0.11    0.11    0.43   99.04

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
xvda              0.00         0.00         0.00          0          0
xvdf              0.40         0.00         8.00          0         40
xvdg           3060.00     24496.00        14.40     122480         72
...

The “tps” column showed IOs per second, and “kB_read/s” + “kB_wrtn/s” allowed me to calculate the throughput (I actually ended up using just kB_read/s, as my workload was 100% read-only and the kB_wrtn/s values were tiny).
iostat turned out to be even more convenient than Enhanced Monitoring; it didn't take long to find the first benefit of EC2 over RDS!
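As a quick sanity check of how these columns translate into the metrics I needed (using the xvdg line above, so the figures are the displayed ones, not new measurements):

throughput  = kB_read/s + kB_wrtn/s = 24496.00 + 14.40  ≈ 24510 KB/s
avg IO size = kB_read/s / tps       = 24496.00 / 3060   ≈ 8 KB  (an 8K-read workload)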

The Results

Unsurprisingly, the outcome of testing on EC2 was quite similar to the results from testing on RDS.

Provisioned IOPS Storage on an EC2 Instance

As on RDS, the testing was done on an m4.4xlarge instance with 100G of io1 storage provisioned with 1000 IO/s. The results, too, are very similar. The only notable difference I could observe (although I can't explain it, and I'm not sure whether there is a pattern in it, as I didn't do too many tests) was that the throughput for 64K-96K IOs didn't reach the same level as for 128K+ IOs.

Provisioned IOPS Throughput (on EC2)

Provisioned IOPS Measured IO/s (on EC2)

These results confirm that, as with RDS, there are several sizes of physical IOs: (8), 16, 32, 64 and 128; starting with 64K the performance is throughput-bound, while with smaller IOs it's IOPS-bound.
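As a rough illustration of the two regimes (the 1000 IO/s is the provisioned figure from this test; the throughput cap is a round number picked purely for illustration):

16K IOs, IOPS-bound:         1000 IO/s x 16 KB = 16000 KB/s   (well below the throughput cap)
128K IOs, throughput-bound:  64000 KB/s cap / 128 KB = 500 IO/s  (the provisioned 1000 IO/s is no longer reachable)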

General Purpose Storage on an EC2 Instance

The testing with General Purpose SSDs (100G with 300 baseline IOPS) didn't provide any surprises, and the results were exactly the same as for RDS.
The only difference in the graphs is the "burst performance" measures for IOs of different sizes, which I added to show how bursting improves both IO/s and throughput.

General Purpose SSD Throughput (on EC2)

General Purpose SSD Measured IO/s (on EC2)

These results also confirm that, as with RDS, there are several sizes of physical IOs: 16, 32, 64 and 128; starting with 32K the performance is throughput-bound, while with smaller IOs it's IOPS-bound.

Additional Flexibility with EC2: Using Multiple gp2 Volumes

Unlike RDS, EC2 lets me configure the storage and the instance more freely, so instead of a single gp2 volume I added five 1G-sized (yes, tiny) volumes to the +DATA disk group. The minimum IOPS for a gp2 volume is 100, so my 5 volumes gave a cumulative 500 baseline IOPS. As ASM was used, the IOs were more or less evenly distributed between the volumes.
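For reference, here is a minimal sketch of how such volumes could be added to the disk group in ASM. The device names are taken from the iostat output below, while the exact paths and disk discovery setup (ASMLib, udev) are assumptions and will vary:

-- on the ASM instance, as SYSASM (sketch only, not the exact commands from the test)
ALTER DISKGROUP DATA ADD DISK
  '/dev/xvdg', '/dev/xvdi', '/dev/xvdj', '/dev/xvdk', '/dev/xvdl';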

I didn't do very thorough testing, but I still noticed a few things.
Take a look at these iostat outputs from testing done with 8K reads (this is burst performance):

[root@ip-172-31-21-241 ~]# iostat 5 500
Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
xvda              0.00         0.00         0.00          0          0
xvdf              3.40         0.00        16.80          0         84
xvdg           1203.40      9632.00         1.60      48160          8
xvdi           1199.60      9596.80         0.00      47984          0
xvdj           1211.60      9691.20         3.20      48456         16
xvdk           1208.60      9670.40         0.00      48352          0
xvdl           1203.00      9625.60         3.20      48128         16

  • Bursting performance applies to each volume separately. It should allow up to 3000 IOPS per volume, but I reached only ~1200 per volume, with a cumulative throughput of 48214 KB/s (not even close to the limit). So some other limit or threshold applies to this configuration (and it's not the CPU). But look! I got 6024 IO/s of burst performance, which is quite remarkable for just 5G of storage.
  • As I was not hitting the maximum 3000 bursting IOPS per volume, the burst credit ran out much more slowly. If it normally lasts ~40 minutes at 3000 IOPS, it lasts roughly 3 times longer at ~1200 IOPS, which allows running at higher performance for longer (e.g. if one used 5x2G volumes instead of a single 10G volume); see the rough math below.
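A back-of-the-envelope check of that estimate, using the 5.4-million-credit gp2 burst bucket documented by AWS and ignoring the credit refill (which makes the real durations somewhat longer):

at 3000 IO/s: 5,400,000 / 3000 = 1800 s ≈ 30 min   (roughly the "~40 minutes" above)
at 1200 IO/s: 5,400,000 / 1200 = 4500 s = 75 min   (about 2.5-3 times longer)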

This iostat output is from testing done with 1M reads (this is burst performance):

[root@ip-172-31-21-241 ~]# iostat 5 500
Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
xvda              0.00         0.00         0.00          0          0
xvdf              3.40         0.00        16.80          0         84
xvdg            384.40     48820.80         0.80     244104          4
xvdi            385.80     49155.20         0.00     245776          0
xvdj            385.00     49014.40         6.40     245072         32
xvdk            386.80     49225.60         0.00     246128          0
xvdl            385.00     48897.60         6.40     244488         32

  • The cumulative throughput is 245111 KB/s, which is very close to the throughput limit of the instance. I was never able to reach such throughput on a single gp2 volume, where the maximum I observed was just 133824 KB/s; the multi-volume configuration also went past the 163840 KB/s throughput limit of a single gp2 volume. It appears that configuring multiple volumes allows reaching the instance throughput limit, which was not possible with a single volume.

I didn't run any non-burst tests, as that required too much time (2 hours of waiting to exhaust the burst credits).

Database with a 32K Block Size

We have observed that starting with 32K block reads, the EBS volume becomes throughput-bound rather than IOPS-bound. Obviously, I wanted to see how it performed if the database was created with a 32K block size.
I ran a few very simple tests using single-data-block-sized IOs (32K) on these two configurations (a sketch of the 32K setup follows the list):

  1. m4.4xlarge with 100G / 1000 PIOPS (io1)
  2. m4.4xlarge with 20G / 100 IOPS (gp2)
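The post doesn't show how the 32K database was set up. As a hedged sketch, the usual Oracle options are either creating the whole database with db_block_size=32768, or keeping an 8K database and adding a 32K buffer cache plus a 32K tablespace (the cache size and tablespace name below are illustrative):

-- sketch only: a 32K cache must exist before a 32K tablespace can be created
ALTER SYSTEM SET db_32k_cache_size = 512M SCOPE = BOTH;

CREATE TABLESPACE data32k
  DATAFILE '+DATA' SIZE 10G
  BLOCKSIZE 32K;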

There were no surprises on the Provisioned IOPS storage: I got the 1000 IOPS that were provisioned (actually slightly better - 1020 IO/s), and the throughput was 32576 KB/s.
On General Purpose SSD, the story was different. We know that starting from 32K-sized IOs the performance becomes throughput-bound, and that was confirmed here too:

  • During the burst period I measured up to 4180 IO/s at 133779 KB/s, which was 4 times faster than the Provisioned IOPS storage.
  • During the non-burst period I measured up to 764 IO/s at 24748 KB/s of throughput, which is somewhat slower than the Provisioned IOPS storage. It was also slower than the throughput I had measured on a 100G gp2 volume (we already know that the non-burst throughput limit of gp2 depends on the size of the disk). With a 100G gp2 volume, I'd get 2359 IO/s at 75433 KB/s (this is from the graph above), which is also better than what one can get from a Provisioned IOPS volume, and costs less.

Conclusions

Most of the conclusions were already outlined in the previous blog post, and they also apply to the EC2 instances when a single EBS volume is used for storage.

On the other hand, an EC2 instance allows system administrators and DBAs (or should I say "Cloud Administrators") to work around some of the limitations by changing the "variables" that can't be altered on RDS - such as the block size of the database (which is 8K on RDS) and the number of EBS volumes behind the configuration. Using a 32K block size for a database residing on a General Purpose volume bypasses the IOPS limitation completely, leaving only the throughput limits in effect. However, if a 32K block size is not an option (as for Oracle E-Business Suite), IOPS and throughput can still be maximized by using a configuration of multiple gp2 volumes.

Having done all these tests, I think the only reason for using RDS instead of EC2 is the database management provided by Amazon. If that is critical for your requirements, it's the way to go. If it's not something you require, EC2 can be configured to perform better for the same price, but you need to take care of its maintenance yourself.

Categories: DBA Blogs

Oracle INSTRB, INSTRC, INSTR2, and INSTR4 Function with Examples

Complete IT Professional - Fri, 2016-12-30 05:00
In this article, I’ll explain what all of the variations of the INSTR function (INSTRB, INSTRC, INSTR2 and INSTR4) do, and show you some examples. Purpose of the Oracle INSTR2, INSTR4, INSTRB, and INSTRC Functions These INSTR functions are variations of the basic INSTR function, which searches inside a string for a substring, and returns […]
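The full article sits behind the link; as a quick hedged illustration of how the byte variant differs from the character variant (this assumes an AL32UTF8 database, where 'ä' occupies two bytes):

SELECT INSTR ('Lächeln', 'l') AS char_pos,  -- 6: 'l' is the 6th character
       INSTRB('Lächeln', 'l') AS byte_pos   -- 7: 'ä' takes 2 bytes, so 'l' is the 7th byte
FROM   dual;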
Categories: Development

SLOB 2.3 Data Loading Failed? Here’s a Quick Diagnosis Tip.

Kevin Closson - Thu, 2016-12-29 15:06

The upcoming SLOB 2.4 release will bring improved data loading error handling. While still using SLOB 2.3, users can suffer data loading failures that may appear, on the surface, to be difficult to diagnose.

Before I continue, I should point out that the most common data loading failure with SLOB in pre-2.4 releases is the concurrent data loading phase running out of sort space in TEMP. To that end, here is an example of a SLOB 2.3 data loading failure due to a shortage of TEMP space. Please note the grep command (in Figure 2 below) that one should use to begin diagnosing any SLOB data loading failure:

[Screenshot: SLOB 2.3 data loading failure due to the TEMP space shortage]

Figure 1

And now, the grep command:

[Screenshot: the grep command used to begin diagnosis]

Figure 2

Filed under: oracle Tagged: Exadata, Oracle Performance, Random I/O, SLOB, SLOB Testing, Xeon E7 Performance

Datapump in parallel with a shell script

DBA Scripts and Articles - Thu, 2016-12-29 14:23

I recently came across an issue while exporting a huge partitioned table for a data migration. The export took ages without any obvious reason. After some research I found that the parallelism wasn't working: I could only see one Datapump worker at a time. Then I found the note explaining the behaviour (Doc ID 1467662.1) … Continue reading Datapump in parallel with a shell script

The post Datapump in parallel with a shell script appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

NetBeans Git Client for JET Versioning and Oracle Developer Cloud Service

Andrejus Baranovski - Thu, 2016-12-29 13:31
I should say I'm happy with how the NetBeans Git client works. It offers good performance and resolves conflicts pretty well.

It shows a list of pending changes and also changes the colour of changed file names:


Changes can be committed into the local repository through an informative wizard:


File changes are displayed in a very clear way that makes them easy to understand:


All local changes can be pushed to Oracle Developer Cloud Service Git (or another repository):


Select the branch to push your work to:


A notification about the recent update is displayed in the Developer Cloud Service console:


We can track changes, create merge requests and merge into trunk in Developer Cloud Service:

Oracle Linux 12cR2 Preinstall Package

Tim Hall - Thu, 2016-12-29 07:21

I was doing some clean Oracle Linux 6 and 7 installs today and look what I noticed in the yum repository for OL6.

Available Packages
oracle-database-server-12cR2-preinstall.x86_64 1.0-1.el6 public_ol6_latest
oracle-rdbms-server-11gR2-preinstall.x86_64 1.0-13.el6 public_ol6_latest
oracle-rdbms-server-12cR1-preinstall.x86_64 1.0-14.el6 public_ol6_latest

And for OL7.

Available Packages
oracle-database-server-12cR2-preinstall.x86_64 1.0-2.el7 ol7_latest
oracle-rdbms-server-11gR2-preinstall.x86_64 1.0-4.el7 ol7_latest
oracle-rdbms-server-12cR1-preinstall.x86_64 1.0-5.el7 ol7_latest

A quick look at the repos shows it first turned up in September. I'm surprised I've not noticed it before now…

Cheers

Tim…


Oracle LENGTH2, LENGTH4, LENGTHB, and LENGTHC Function with Examples

Complete IT Professional - Thu, 2016-12-29 05:00
In this article, we’ll look at the variations of the Oracle LENGTH function – LENGTH2, LENGTH4, LENGTHB, and LENGTHC. Purpose of the Oracle LENGTH2, LENGTH4, LENGTHB, and LENGTHC Function The purpose of these LENGTH function variants is the same as the basic LENGTH function – to find the length of a specified string. However, the […]
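Again, the full article is behind the link; a quick hedged illustration of the character/byte difference (assuming an AL32UTF8 database, where 'ä' occupies two bytes):

SELECT LENGTH ('Lächeln') AS char_len,  -- 7 characters
       LENGTHB('Lächeln') AS byte_len   -- 8 bytes in AL32UTF8
FROM   dual;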
Categories: Development

Links for 2016-12-27 [del.icio.us]

Categories: DBA Blogs

GNW05 – Extending Databases with Hadoop video (plus GNW06 dates)

Tanel Poder - Tue, 2016-12-27 18:02

In case you missed this webinar, here’s a 1.5h holiday video about how Gluent “turbocharges” your databases with the power of Hadoop – all this without rewriting your applications :-)

Also, you can already sign up for the next webinar here:

  • GNW06 – Modernizing Enterprise Data Architecture with Gluent, Cloud and Hadoop
  • January 17 @ 12:00pm-1:00pm CST
  • Register here.

See you soon!

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Monitoring long running operations

DBA Scripts and Articles - Tue, 2016-12-27 11:03

It can be quite useful to be able to monitor long-running operations on your database; the view v$session_longops can be used to estimate the time needed for certain operations to finish. The original post includes a script to monitor an RMAN backup and a sample result, but the code blocks didn't survive the feed … Continue reading Monitoring long running operations
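As a hedged sketch of the kind of v$session_longops query the post describes (the columns are from the documented view; the RMAN filter is an assumption, since the original scripts didn't come through):

SELECT sid, serial#, opname, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 1) AS pct_done,
       time_remaining
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar < totalwork
AND    opname LIKE 'RMAN%';  -- drop this predicate to see all long-running operations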

The post Monitoring long running operations appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

ADF BC REST 12.2.1.2 Custom Method JDeveloper Workaround

Andrejus Baranovski - Tue, 2016-12-27 04:19
Some of you who try to implement a custom method with ADF BC REST may face a JDeveloper 12.2.1.2 wizard issue. The JDeveloper 12.2.1.2 wizard refuses to register the ADF BC REST custom method, but it works perfectly at ADF runtime. It seems to be a JDeveloper 12.2.1.1 - 12.2.1.2 bug. The workaround is to modify the REST service configuration manually and include the custom method binding.

The sample application is available on GitHub (jetcrud). It implements a custom method, testCall, in the VO implementation class:


The method is exposed through the client interface:


Now, if you go to the REST service definition and try to enable this method for the REST interface, JDeveloper reports an error:


Something goes wrong in the RSTCustomMethodTab class:


The workaround is to add the method call into the REST service definition manually. I recommend doing this outside of JDeveloper, as it hangs; change the definition in an external editor. Here is an example of a custom method entry:



If you look at the JDeveloper wizard for the REST definition, it still shows the method unchecked, but you can ignore that:


To execute the custom method through a REST call, make sure to use POST and specify the method name along with the parameters in the REST request body:


Make sure not to forget the action Content-Type:


Check the documentation section 22.13.5 Executing a Custom Action for more info.
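Since the screenshots don't survive the feed, here is a rough, hedged illustration of such a request, based on the ADF BC REST documentation. The resource path and the parameter structure are assumptions; only the Content-Type value is documented verbatim:

POST /rest/Departments HTTP/1.1
Content-Type: application/vnd.oracle.adf.action+json

{
  "name": "testCall",
  "parameters": [ { "testParam": "some value" } ]
}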

DOAG.tv Interviews (German)

Randolf Geist - Tue, 2016-12-27 03:00
In recent weeks, two interviews have been published that DOAG conducted with me at the annual DOAG Conference in Nuremberg.

The first dates back to the DOAG Conference 2015 and relates to my talk there about the new Parallel Execution features of Oracle 12c:

DOAG.tv Interview 2015

The second is from this year's DOAG Conference and relates to my performance tests of the Oracle Database Cloud and the corresponding talk:

DOAG.tv Interview 2016

Each interview is only a few minutes long, so it touches on its topic in just a few key points.

Big Data SQL 3.1 is Now Available!

Oracle Big Data SQL 3.1 is Now Available! We are excited to announce that Oracle Big Data SQL 3.1 is now available. Big Data SQL 3.1 is another major milestone as we continue to expand...

Categories: DBA Blogs

Links for 2016-12-26 [del.icio.us]

Categories: DBA Blogs

Success Story: Version Control for PL/SQL

Gerger Consulting - Mon, 2016-12-26 15:43
It’s been a little over three months since we released Gitora 2.0 and the first success stories have started to emerge. Here is one of them:


Rhenus Logistics, the leading logistics company from Germany, uses Gitora to manage their Oracle Database.
Problem
Rhenus IT uses both Java and PL/SQL to serve their users and customers. They have a team of about 10 PL/SQL developers. The team manages more than 20,000 database packages, views, functions, procedures, object types and triggers spread over 30+ database schemas.
Rhenus IT wanted to move to a continuous delivery environment in which they can be more agile and deliver solutions to the business faster. Managing the PL/SQL code was the hardest piece of the puzzle.
Solution
After experimenting with other solutions in the market, Rhenus decided to move forward with Gitora.


Gitora enabled Rhenus Developers to:
  • Use Git, the prominent open source version control system used by millions of developers.
  • Move their database code between development and various staging databases automatically.
  • Move code between source and target databases very fast, because Gitora executes only the differences between the source and target databases, without first comparing the code bases in both (which can be very time-consuming).
  • Enforce check-in, check-out of database objects at the database level.
  • Automate build process for the database code using Gitora API’s.
  • Implement an affordable continuous delivery solution compared to alternatives.
Michiel Arentsen, the System Architect who implemented the solution at Rhenus, has started an excellent blog in which he writes about his Gitora implementation. We highly recommend you check it out. Below is the list of blog posts he wrote, which should be very useful to anyone who is currently implementing Gitora at his/her company:
Categories: Development

PL/SQL Objects for JSON in Oracle 12cR2

Tim Hall - Mon, 2016-12-26 13:37

I’ve been playing around with some more of the new JSON features in Oracle Database 12c Release 2 (12.2).

The first thing I tried was the new PL/SQL support for the JSON functions and conditions that were introduced for SQL in 12.1. That was all pretty obvious.

Next I moved on to the new PL/SQL objects for JSON. These are essentially a replacement for APEX_JSON as far as generation and parsing of JSON data are concerned. If I’m honest I was kind-of confused by this stuff at first for a couple of reasons.

  • If you are coming to it with an APEX_JSON mindset, it's easy to miss the point. Once you “empty your cup”, it's pretty straightforward.
  • The documentation is pretty terrible at the moment. There are lots of mistakes. I tweeted about this the other day and some folks from the Oracle documentation team got back to me about it. I gave them some examples of the problems, so hopefully it will get cleaned up soon!
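As a minimal sketch of what the new object types look like in use (this relies on the documented 12.2 JSON_OBJECT_T API; the sample document is made up):

DECLARE
  jo JSON_OBJECT_T;
BEGIN
  -- parse an incoming document, then read and modify it
  jo := JSON_OBJECT_T.parse('{"name":"banana","quantity":10}');
  jo.put('price', 4.99);
  DBMS_OUTPUT.put_line(jo.get_string('name'));  -- banana
  DBMS_OUTPUT.put_line(jo.to_string);           -- the whole document, including price
END;
/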

I was originally intending to write a single article covering both of these new JSON features, but it got clumsy, so I separated them.

The second one isn’t much more than a glorified links page at the moment, but as I cover the remaining functionality it will either expand or contain more links depending on the nature of the new material. Big stuff will go in a separate article. Small stuff will be included in this one.

I also added a new section to this recent ORDS article, giving an example of handling the JSON payload using the new object types.

I’ve only scratched the surface of this stuff, so I’ll probably revisit the articles several times as I become more confident with it.

Cheers

Tim…

PS. Remember, you can practice a lot of this 12.2 stuff for free at https://livesql.oracle.com .


ADF REST Framework Version 2 (and later) - 12.2.1.2

Andrejus Baranovski - Mon, 2016-12-26 06:40
While building our new Oracle Cloud application with ADF BC REST and JET, I discovered an unannounced feature in ADF BC REST 12.2.1.2. Starting from 12.2.1.2, ADF BC REST offers runtime versions. This is configurable in the adf-config.xml file or can be provided through a REST request header. ADF 12.2.1.2 supports versions 1, 2 and 3. Version 2 offers better query support, while version 3 provides a better response format for hierarchical data - 16.5.2 What You May Need to Know About Versioning the ADF REST Framework.

You can specify the version in adf-config.xml, as per the documentation:


Version 2 offers more advanced support for data queries. Besides the query-by-example syntax of version 1, we can use an advanced query syntax - 22.5.4 Filtering a Resource Collection with a Query Parameter. For example, the like operator wasn't supported in version 1:


It is supported in version 2. I can specify version 2 directly in the REST request header, as in the example below:


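For reference, a hedged sketch of what such a request looks like (the resource path is an assumption; the header name is from the ADF REST documentation):

GET /rest/Departments HTTP/1.1
REST-Framework-Version: 2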
Download ADF BC REST sample from GitHub repository - jetcrud.

A Guide to the Oracle TRUNCATE TABLE Statement

Complete IT Professional - Mon, 2016-12-26 05:00
The Oracle TRUNCATE TABLE statement is a useful statement for removing data in Oracle SQL. I’ll explain what the statement does and show you some examples in this article. What Does the Oracle TRUNCATE TABLE Statement Do? The Oracle TRUNCATE statement, or TRUNCATE TABLE statement, removes all data from a table. It’s similar to the […]
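The full article is behind the link; the basic form is simply (the table name is illustrative):

TRUNCATE TABLE employees;
-- unlike DELETE, this is DDL: it deallocates the table's segments,
-- cannot be rolled back, and fires no DML triggers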
Categories: Development

Oracle JDeveloper (SOA and BPM) 12c (12.2.1.2.0) - Download Temporarily Not Available

Andrejus Baranovski - Mon, 2016-12-26 04:40
If you try to download JDeveloper (as well as SOA Suite or BPM Suite) from OTN, you will see a message in the OTN download section - "This page is temporarily not available we'll be back soon".

You should not worry. As per Shay Shmeltzer's answer on the OTN Forum: "We discovered an issue with the installer - we are working to fix this. Once we have the updated installer we'll update the forum and the pages." Read more here.

There is a solution: if you urgently need to download JDeveloper, go to Oracle Software Delivery Cloud and download it from there.

Links for 2016-12-25 [del.icio.us]

Categories: DBA Blogs
