RE: Moving to flash storage

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Fri, 18 May 2018 08:01:47 -0400
Message-ID: <05c701d3eea0$17ccc470$47664d50$_at_rsiz.com>



You wrote: "RDBMS systems are usually I/O bound, not CPU bound."  

I doubt that is true, and I will explain why:  

Back when, as my friend Kevin likes to note, it was time to "party like it's 1999," Terascape freely distributed Dipstick (you know, like checking your oil) to the self-identified Oracle customers who thought they had a disk i/o problem.

Before Oracle provided segment-level i/o statistics, some clever work was involved in producing Dipstick, but that is another story.

Of that self-identified pool, almost exactly 25% actually had a disk i/o problem. (I have now narrowed the problem to disk alone, but I believe that is fair in context.)

This does NOT include I/O from memory to CPU, which is often the pacing resource determining the maximum throughput of an RDBMS system. Its "market share" as the pacing resource grows as persistent storage gets faster: seek times become proportional to address calculations instead of mechanical head movements, and data transfer rates approach the speed of the communications wire stack instead of being limited by rotational speed times data density.

I have not re-measured or re-surveyed, but I suspect that number has stayed about constant.

Of that 25%, many were limited by batch write rates on finely honed SQL capturing the results of things like monthly rollovers and/or daily transaction processing mixed with queries. The big thing SSD does for that system profile is remove (most of) the need to separate the write destinations from the read sources. (Rollover jobs do both, which is hilarious if the source data and the result data go to the same spinning device set, injecting an oscillation of competing seeks if the job is not bound up as a buffer grind before it starts producing result rows.)

IF your system is like the above, moving to SSD is a HUGE win, and a lot less work than segregating your batch i/o segments so that they own their i/o stack when they are in play, and scheduling batch jobs to not interfere with each other.

If you've got a lot of code that is presently bound moving data between RAM and the CPU, flash won't help you much.

Then again, it might not actually cost that much extra today. And putting it all on media where seek times just don't matter that much should serve to focus future tuning on reducing the LIOs driven by the code (if that is possible); a sketch of where to start looking follows.
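As a first pass at that LIO hunt, something like the following against v$sql points at the heaviest statements. A sketch only: the counters are cumulative since each cursor was loaded, so read it as a hint, not a verdict, and the top-10 cutoff is arbitrary.

    SELECT *
      FROM (SELECT sql_id,
                   buffer_gets,                                    -- total LIOs
                   executions,
                   ROUND(buffer_gets / NULLIF(executions, 0)) AS gets_per_exec
              FROM v$sql
             ORDER BY buffer_gets DESC)
     WHERE ROWNUM <= 10;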

If folks really understood how to determine whether they are actually disk i/o bound, that would be a very interesting survey to re-do, with a note about whether each system was on SSD or spinners.
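For what it's worth, one rough way to make that determination on any reasonably recent Oracle release (a sketch, not a methodology) is to compare cumulative DB CPU time against cumulative User I/O wait time:

    -- Both figures are cumulative since instance startup; deltas over an
    -- AWR or Statspack interval are more meaningful, but the comparison
    -- is the point: only if User I/O wait dwarfs DB CPU are you plausibly
    -- disk i/o bound.
    SELECT stat_name, ROUND(value / 1e6) AS seconds
      FROM v$sys_time_model
     WHERE stat_name IN ('DB time', 'DB CPU');

    SELECT ROUND(SUM(time_waited_micro) / 1e6) AS user_io_wait_seconds
      FROM v$system_event
     WHERE wait_class = 'User I/O';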

My view of the world is skewed by the fact I am usually only asked to look at systems that are under severe stress, often involving code being ported.  

Folks involved with storage systems are likely approached by people predominantly interested in improving persistent i/o speed.

mwf

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Mladen Gogala
Sent: Friday, May 18, 2018 12:08 AM
To: oracle-l_at_freelists.org
Subject: Re: Moving to flash storage      

On 05/17/2018 10:15 AM, Matt Adams wrote:

It looks like we might be migrating the big production databases (2 of them, for a combined 40 TB or so) to a NetApp 8080 flash storage device. These databases have thousands of concurrent connections and turn over between 10 GB and 30 GB of redo per hour.

I seem to vaguely recall a message or two here on the list over the last couple of years regarding things to watch out for when migrating to flash storage.

Well, the characteristics of flash storage are very different from the characteristics of spinning disks. The first important difference is that the gap between sequential access and random access is much smaller, so make sure to gather new system statistics. Also, flash disks are much more expensive than rotational disks, so much so that the Advanced Compression option suddenly starts making sense; the "compress for all operations" clause can really save you some space and money. The benefits of index scans are much smaller with flash storage. Also, in my experience, using a larger block size like 16k can make some difference for some kinds of flash memory. Talk to NetApp and ask them for their recommendations; reading bigger blocks in bursts can speed things up. As for transfer rates, you want 32 Gb/sec fibre channel adapters; it doesn't get any faster than that. RDBMS systems are usually I/O bound, not CPU bound, and flash is a much faster variety of I/O.
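If it helps, a sketch of the two concrete steps above, gathering workload system statistics and enabling OLTP compression. The table name is hypothetical, and Advanced Compression is separately licensed:

    -- Gather workload system statistics over a representative window so the
    -- optimizer picks up the much flatter random-vs-sequential cost profile
    -- of flash:
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS('START');
    -- ... run a representative workload for a while, then:
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS('STOP');

    -- OLTP table compression: "compress for all operations" in 11gR1,
    -- COMPRESS FOR OLTP in 11gR2, ROW STORE COMPRESS ADVANCED in 12c+.
    -- The table name is just an example:
    ALTER TABLE orders MOVE COMPRESS FOR OLTP;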

--

Mladen Gogala
Database Consultant
Tel: (347) 321-1217

--

http://www.freelists.org/webpage/oracle-l
