Re: Why use Oracle In-Memory database from another perspective

From: Noons <wizofoz2k_at_yahoo.com.au>
Date: Wed, 4 Oct 2017 10:58:31 +1100
Message-ID: <or187f$u1g$1_at_dont-email.me>


On 4/10/2017 5:06 _at_wiz, richard.rankin_at_ieee.org wrote:
> On Sunday, September 17, 2017 at 9:46:54 PM UTC-5, richar...@yahoo.com.hk wrote:

>> Why use Oracle In-Memory database from another perspective
>> A lot of people are talking about why or why not to use the Oracle In-Memory database option in their applications, and most of them focus too much on the size of the database or on whether it is an OLAP application. It seems that small and medium-sized databases are not suitable for the Oracle In-Memory option. But if your OLTP databases are suffering from performance bottlenecks and you are looking for solutions, I think the Oracle In-Memory database option should be on your list, especially when you are planning to upgrade your hardware.
>> https://tosska.com/use-oracle-memory-database-another-perspective/


>
> It's always useful to pin data and code in memory. There are quite a few caching options, plus the option to put tables and other objects in memory. An in-memory database lets you move an entire subset of the total system functionality into another database and keep it in memory. That requires a lot of memory, but memory is not all that expensive these days. So the question may be: move a big chunk of your functionality into an entirely different in-memory database, or be more selective and pin objects, cache complex query results. You may have to make these decisions when building the physical database, trying different options to see which works best - and possibly even in production, moving some, or a large portion, of the heavily accessed objects into memory as you find the need (for speed).
>
> However, a no-brainer is to buy an Exadata machine with its caches and its varying disk speeds (7.2K rpm, 15K rpm, SSDs) and let the system decide where stuff will be (although you're not giving up control completely). The Exadata will move things around to accommodate changes in access patterns. There are a number of options, and you can decide what to do based on a priori knowledge and make changes later based on numbers you gather. The hardest to change later is the in-memory database: you either build it that way or not, and changing it involves a complete redesign. Moving to an Exadata system is as painful as moving any system to a new box. With a variety of individual options you can add, remove and modify them as you go. I'm not going to make a decision for you. That's your job.
>

I think there is some confusion in this thread. Exadata does a lot of caching.
Caches and cache management - in many, many forms and shapes - are a completely different subject from the "in-memory" option, which is separately costed and therefore needs to be gauged as to its appropriateness.

Besides the beneficial impact classic caching has on I/O overhead, once a page/block of data is in memory it still needs to be decoded/interpreted to find the actual data.

By this I mean: rows have to be located in the block (address arithmetic) and within each row data has to be interpreted into a format that the CPU hardware can understand. For example: dates and numbers in Oracle are stored in a format that has nothing to do with the capabilities of the usual Intel-type CPUs.
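To make that concrete, here is a minimal sketch (in Python, purely illustrative - this is not Oracle code, and it only handles positive integers) of decoding Oracle's internal NUMBER representation, which stores values as a base-100 exponent byte plus excess-1 base-100 mantissa digits. None of this is anything an Intel CPU can compare natively; every value has to be converted first:

```python
def decode_positive_oracle_number(raw: bytes) -> int:
    """Decode a positive integer from Oracle's internal NUMBER bytes.

    Simplified sketch: real NUMBERs also encode negatives, fractions
    and NULLs, which are ignored here.
    """
    exponent = raw[0] - 193            # base-100 exponent of the leading digit
    digits = [b - 1 for b in raw[1:]]  # mantissa digits are stored excess-1
    return sum(d * 100 ** (exponent - i) for i, d in enumerate(digits))

# 123 is stored internally as C2 02 18: exponent 1, base-100 digits 1 and 23
print(decode_positive_oracle_number(bytes([0xC2, 0x02, 0x18])))  # → 123
```

The point is not the arithmetic itself but that this conversion work has to happen for every value touched, row by row.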

If a search is being made in a range of cached blocks for a specific value, then ALL data has to be decoded/interpreted before it can be compared. And that uses up CPU!

One of the functions of the In-Memory option is precisely to do this decoding "in advance" and cache the results in a format (columnar, but I won't go into the details of why it is called that) that the CPU will have no trouble searching later on. As a result, subsequent linear searches are greatly sped up in terms of CPU usage.
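The effect is easy to demonstrate outside Oracle. The sketch below (Python with NumPy; the row count, column name and predicate are made up for illustration) contrasts a "row format" scan, where each value must be picked out of its row before comparison, with a scan over one contiguous column of machine-native integers:

```python
import numpy as np

# "Row format": each value must be located inside its row structure and
# interpreted before the CPU can compare it.
rows = [{"id": i, "amount": i % 500} for i in range(100_000)]
row_hits = sum(1 for r in rows if r["amount"] > 490)

# "Columnar format": one contiguous array of machine-native integers
# that the CPU (and its SIMD units) can scan directly.
amounts = np.fromiter((r["amount"] for r in rows), dtype=np.int64)
col_hits = int((amounts > 490).sum())

print(row_hits, col_hits)  # same answer, very different CPU cost
```

Both scans return the same count; the columnar one simply hands the CPU data it can compare without any per-row interpretation, which is the essence of what the In-Memory column store pre-computes.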

Exadata of course has fantastic I/O capabilities, although modern SAN and fast CPU hardware have made that advantage a lot more relative. But, for example, Exadata does not have the hardware data replication capabilities of modern SAN gear. And no, Dataguard is NOT the same as SAN-based replication. Neither is RAC, for that matter!
