Feed aggregator

Index column order and Partitioned table

Tom Kyte - Sun, 2017-02-19 19:06
We have a table with a few hundred million rows. We perform INSERT, DELETE and SEARCH operations on this table. Column ID is unique and column CODE has very few distinct values (let's say 100 distinct values). So we partition the table by LIST on the CODE column...
Categories: DBA Blogs

How to speed up slow unicode migration of a table with xmltype columns

XTended Oracle SQL - Sun, 2017-02-19 18:46

Recently I had an issue with a slow unicode migration of a database upgraded from 10g to 12.1.0.2. The main problem was a table with an XMLTYPE column: we spent about 4 hours on this table (~17GB) during the test migration, while all other tables (~190GB) migrated in just about 20 minutes.
We used DMU (Database Migration Assistant for Unicode), and the root cause of the problem was the update statement generated by DMU for this table:

update  /*+ PARALLEL(A,16)*/ "RRR"."T_XMLDATA" A  set A."SYS_NC00011$" = SYS_OP_CSCONV(A."SYS_NC00011$", 'AL16UTF16')

“SYS_NC00011$” is the internal hidden CLOB column used to store the XMLTYPE data. As you can see, DMU added a PARALLEL hint, but although Oracle can use parallel DML for XMLType since 12.1.0.1, we couldn’t use it here because of its limitations:

Changes in Oracle Database 12c Release 1 (12.1.0.1) for Oracle XML DB

Parallel DML Support for XMLType
Support for parallel DML has been improved for XMLType storage model binary XML using SecureFiles LOBs. The performance and scalability have been improved for both CREATE TABLE AS SELECT and INSERT AS SELECT.

Restrictions on Parallel DML

Parallel DML can be done on tables with LOB columns provided the table is partitioned. However, intra-partition parallelism is not supported.

For non-partitioned tables with LOB columns, parallel INSERT operations are supported provided that the LOB columns are declared as SecureFiles LOBs. Parallel UPDATE, DELETE, and MERGE operations on such tables are not supported.

By the way, Oracle didn’t support parallel DML for XMLType at all in previous releases:

No Parallel DML for XMLType – DML operations on XMLType data are always performed in serial. Parallel DML is not supported for XMLType. (Parallel query and DDL are supported for XMLType.)

So I had to use manual parallelization:
1. Monitor the “Convert application tables” step through “View Table Conversion progress” and press the “Stop” button during the conversion of this table.
2. Create a table with the ROWIDs of this table, split into 16 groups:

create table tmp_rids as 
select rowid rid, ntile(16)over(order by rowid) grp 
from t_xmldata;
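
As a quick sanity check (not part of the original steps), you can verify that the ntile split is roughly even:

select grp, count(*) cnt
from tmp_rids
group by grp
order by grp;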

3. Execute

ALTER SYSTEM SET EVENTS '22838 TRACE NAME CONTEXT LEVEL 1,FOREVER'; 

to avoid “ORA-22839: Direct updates on SYS_NC columns are disallowed”
4. Start 16 sessions, each of which updates its own part:

update t_xmldata A 
set A."SYS_NC00011$" = SYS_OP_CSCONV(A."SYS_NC00011$", 'AL16UTF16') 
where rowid in (select rid from tmp_rids where grp=&grp);
commit;
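
For step 4, here is a minimal sketch of how the 16 sessions could be launched from a shell. It assumes a hypothetical worker script update_part.sql containing the UPDATE and COMMIT above (with &1 as the group number and an EXIT at the end), and placeholder credentials:

#!/bin/bash
# spawn 16 background SQL*Plus sessions, one per ROWID group
for grp in $(seq 1 16); do
  sqlplus -s user/password @update_part.sql "$grp" > conv_grp_"$grp".log 2>&1 &
done
wait   # wait until all 16 update sessions have finished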

5. Disable event 22838:

ALTER SYSTEM SET EVENTS '22838 TRACE NAME CONTEXT OFF'; 

6. Open the “View Table Conversion progress” window, click on this table and change the “Retry” option to “Skip” for the update step.

This simple method made the unicode migration about 16 times faster.

Categories: Development

Data Pump or Data Pain Part04 – The End

Michael Dinh - Sun, 2017-02-19 13:10

Please take a look at my notes below:
DataPump: Schema/DB Migration Notes


Public Cloud consequences with an Oracle environment

Amis Blog - Sun, 2017-02-19 10:23

The title suggests a negative statement about using a Public Cloud. Well, it isn’t. I’m convinced the Cloud is the next BIG thing, with huge advantages for businesses. But companies should be aware of what they choose. A lot of providers, including Oracle, are pushing us to the cloud, Public, Private or Hybrid, and make us believe that a public cloud will be an inevitable extension to our on-premises environment. Moving your WebLogic and database environment, including data, from on-premises to the public cloud and back is no problem, or will be no problem in the future. But what kind of hassle do you have to cope with, technically and business-wise?

The list of implications and consequences in this post is not exhaustive, it intends to give you an idea of the technical and non-technical consequences of moving to and from the cloud.

Public Cloud

The title of this blog post mentions the ‘Public Cloud’. What I mean here are the Oracle-authorized Public Clouds: Amazon, Azure and of course the Oracle Cloud – more specifically, the PaaS and IaaS environments. What were the big differences again, and why am I not talking about consequences with a Private Cloud?

I like to think of a Private Cloud as a better version of the average on-premises environment, with at least some of these characteristics of a Public Cloud:

– Self-serviced

– Shared services

– Standardized

– Virtual

– Metering, automatic cost allocation and chargeback

So on-premises may be a Private Cloud, but most of the time it’s not the same. One – not mentioned yet – characteristic is very important: you (your IT-department) should be in control of the environment regarding versions, patches, Service Level Agreements etc.

 

A Public Cloud has at least the following characteristics next to the ones mentioned above:

– Shared resources with other companies

– Available through the internet

And: no department of your company is in full control of versions, patches, Service Level Agreements etc.

 

So the scope of this article will be the Public Cloud.

Version differences

When deploying Oracle software in the Public Cloud, you are depending on the versions and patches offered by the cloud provider. E.g. Oracle announced that it will deploy the most recent patches and releases in the Oracle Public Cloud first, and only afterwards will these releases and patches become available for on-premises environments.

So when you have development and test environments in the Public Cloud, and production on-premises or in a private cloud, you must be fully aware of the version differences regarding your life cycle management.


License differences

When deploying your software in an IaaS environment it’s possible to ‘Bring Your Own License’. In the Oracle Cloud it’s also possible to be charged on a ‘metered’ or ‘unmetered’ subscription.

Databases in the Oracle Public Cloud are fully installed with all options, and it’s possible and sometimes inevitable to use them. E.g. any extra tablespace you create in the Oracle Cloud is encrypted by default. So when moving such a database to on-premises, you must own the Security Option license of the Oracle Database Enterprise Edition to be compliant.

And when moving a Pluggable Database from the (Oracle) Cloud to on-premises, into a Container Database that already holds one Pluggable Database, you are running into the Multitenancy option.

A quick test of creating two tablespaces in the Oracle Cloud, one with defaults and one explicitly with TDE, results in two tablespaces that both use TDE.
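
A rough sketch of that test (the tablespace names are made up here; on-premises the explicit ENCRYPTION clause would additionally require a configured keystore):

create tablespace ts_plain datafile size 100m;

create tablespace ts_tde datafile size 100m
  encryption using 'AES256' default storage (encrypt);

select tablespace_name, encrypted
from   dba_tablespaces
where  tablespace_name in ('TS_PLAIN','TS_TDE');
-- in the Oracle Public Cloud both report ENCRYPTED = YES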


Centralized administration

It would be ideal to be able to manage your databases in the Private and Public Cloud from one central management point. When using the Oracle management solution, this would be Oracle Enterprise Manager Cloud Control. It’s relatively simple to deploy agents on-premises and in the Oracle Cloud (PaaS and IaaS). But unfortunately it’s not supported to deploy databases or middleware in Azure or Amazon AWS with Oracle Enterprise Manager; you will get an error message instead.

 


It’s technically possible to overcome this, but not supported at the moment. So with Oracle Enterprise Manager you basically have two options: Oracle Public Cloud or the Cloud@Customer option.

Standards

Deployment of Oracle PaaS software in the cloud is automated according to the standards dictated by the provider. It would be very convenient if these standards were the same as for the on-premises software, but that’s not always the case. At least, not in a PaaS environment in the Public Cloud.

In an IaaS environment you generally have almost full control over the deployments. Almost? Yes, you will still be relying on the version and configuration of the Operating System. To also overcome this, you have to choose a bare metal option in the Public Cloud.

 

Azure / Amazon

Although the clouds of Microsoft Azure and Amazon are, for some, way ahead of the Oracle Cloud, the fact is that for Oracle the Oracle Cloud prevails.

As already said, it’s not possible to manage the Oracle software in the Azure / Amazon public cloud with an on-premise Oracle Enterprise Manager. Amazon does support an agent for Enterprise Manager on-premise, but not the one you actually want to make your life easier.

You’re depending on the breadcrumbs Oracle is willing to share with the other cloud providers.

 

SLA

The business would like to have a Service Level Agreement with the IT department, but the IT department could be relying on the SLA of the Public Cloud provider. And that could be a mismatch. For example, I had a trial agreement with Microsoft Azure for my tests a while ago. All was working fine, but suddenly Azure had a dispute with Oracle (I think) and I got an error message.


You don’t want this in a real environment.

Security

There is, and has been, a distrust of security in the Public Cloud. I believe the security in the mentioned Public Clouds is generally better than, or at least equal to, what the average company wishes for.

Nevertheless, you may have to cope with different security baselines and changing security roles in your company in a so-called ‘hybrid infrastructure’.

 

Roles – Cloud Administrator

As already mentioned, there will be new roles (and more likely new jobs) to manage the Oracle software in the Public Cloud: managing subscriptions, interoperability, compliance, etc. New processes, new management, new challenges.

 

Data Sovereignty

For a lot of companies it’s important to know where the data geographically resides, because this data is subject to the laws of the country in which it is located. What is the roadmap of these Public Cloud providers?

 

Latency

Separating the database and middleware will cause latency when one of them is in the Public Cloud. Oracle has two solutions for that:

– Cloud@Customer: bring the Public Cloud to on-premises with a separate machine, managed by Oracle.

– Connect the Internet backbone to your company. Equinix now provides access through a direct connect or Equinix Cloud Exchange to Oracle Cloud in five (5) metros in the US, Europe, and Australia. Enterprise customers with a presence in these Equinix IBX data centers can leverage Oracle’s FastConnect giving them a high-performance private access to Oracle Cloud.

 

 

Resources:

Private vs hybrid: http://www.stratoscale.com/blog/cloud/private-cloud-vs-public-hybrid-or-multi/?utm_source=twitter&utm_medium=social&utm_campaign=blog_recycle

Time to embrace the cloud: https://software.dell.com/community/b/en/posts/dbas-embrace-the-cloud


Node.js application using SSE (Server Sent Events) to push updates (read from Kafka Topic) to simple HTML client application

Amis Blog - Sun, 2017-02-19 06:33

This article describes a simple Node.js application that uses Server Sent Events (SSE) technology to push updates to a simple HTML client, served through the Express framework. The updates originate from messages consumed from a Kafka Topic. Although the approach outlined in this article stands on its own, and does not even depend on Apache Kafka, it also forms the next step in a series of articles that describe a Kafka Streams application that processes messages on a Kafka Topic – deriving running Top N ranks – and produces them to another Kafka Topic. The Node.js application in this current article consumes the Top N messages and pushes them to the HTML client.

The simple story told by this article: a Node.js application consumes messages from a Kafka Topic and pushes each update as an SSE event to an HTML client in the browser.

The complete picture also includes the prequel discussed in https://technology.amis.nl/2017/02/12/apache-kafka-streams-running-top-n-grouped-by-dimension-from-and-to-kafka-topic/ – a Kafka Streams application that derives the running Top N and produces it to the Kafka Topic consumed here.

 

Sources are found on GitHub: https://github.com/lucasjellema/kafka-streams-running-topN/tree/master/kafka-node-express-topN-sse

Topics discussed in this article

Browser, HTML & JavaScript

  • Programmatically add HTML elements
  • Add row to an HTML table and cells to a table row
  • Set Id attribute on HTML elements
  • Loop over all elements in an array using for .. of
  • Subscribe to a SSE server
  • Process an incoming SSE message (onMessage, process JSON)
  • Formatting (large) numeric values in JavaScript strings
  • Concatenating Strings in JavaScript

Node & Express (server side JavaScript)

  • Consume message from Kafka Topic
  • Serve static HTML documents using Express
  • Expose API through Express that allows SSE clients to register for server sent events
  • Push JSON messages to all SSE clients
  • Execute a function periodically, based on an interval using a Node Time (setInterval)

Browser – Client Side – HTML & JavaScript

The client side of the implementation is a simple HTML document (index.html) with embedded JavaScript. In a real application, the JavaScript should ideally be imported from a separate JavaScript library. In the <script> tag in the <head> of the document is the JavaScript statement that registers the browser as a SSE subscriber:

var source = new EventSource("../topn/updates");

The SSE server is located at a path /topn/updates relative to the path where the index.html document was loaded (http://host:port/index.html – downloaded from the public sub directory in the Node application where static resources are located and served from). Requests to this URL path are handled through the Express framework in the Node application.

On this EventSource object, a message handler is created – with the function to be invoked whenever an SSE event is received on this source:

source.onmessage = function(event) { … }

The content of the function is fairly straightforward: the JSON payload of the event is parsed. It contains the name of a continent and an array with the current top 3 countries by size in that continent. Based on this information, the function locates the continent row in the table with top3 records (if it does not yet exist, the row is created). The top3 in the SSE event is written to the innerHTML property of the second table cell in the continent’s table row.

 


<!DOCTYPE html>
<html>
  <head>
    <title>Continent and Country Overview</title>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <script>
	  // assume that API service is published on same server that is the server of this HTML file
    // send event to SSE stream 
    /* "{\"nrs\":[{\"code\":\"FR\",\"name\":\"France\",\"population\":66836154,\"size\":643801,\"continent\":\"Europe\"},{\"code\":\"DE\",\"name\":\"Germany\",\"population\":80722792,\"size\":357022,\"continent\":\"Europe\"},{\"code\":\"FI\",\"name\":\"Finland\",\"population\":5498211,\"size\":338145,\"continent\":\"Europe\"},null]}"
update all Sse Client with message {"nrs":[{"code":"FR","name":"France","population":66836154,"size":643801,"continent":"Europe"},{"code":"DE","name":"Germany","population":80722792,"size":357022,"continent":"Europe"},{"code":"FI","name":"Finland","population":5498211,"size":338145,"continent":"Europe"},null]}
*/
      var source = new EventSource("../topn/updates");
      source.onmessage = function(event) {
        var top3 = JSON.parse(event.data);
        if (top3.continent) {
        var nrs = top3.nrs;
         var trID = "continent_"+top3.continent;
         // find row in table with id equal to continent
         var tr = document.getElementById(trID);
         // if not found, then add a row
         if (!tr) {
           // table does not yet have a row for the continent, then add it 
           // find table with continents
           var tbl = document.getElementById("continentsTbl");
           // Create an empty <tr> element and add it to the 1st position of the table:
           tr = tbl.insertRow(1);
           tr.setAttribute("id", trID, 0);
           // Insert new cells (<td> elements) at the 1st and 2nd position of the "new" <tr> element:
           var cell1 = tr.insertCell(0);
           cell1.setAttribute("id",trID+"Continent",0);
           var cell2 = tr.insertCell(1);
           cell2.setAttribute("id",trID+"Top3",0);
           // Add some text to the new cells:
           cell1.innerHTML = top3.continent;
         }// tr not found
         var top3Cell = document.getElementById(trID+"Top3");
         var list = "<ol>";
         for (country of top3.nrs) {
            if (country) {
                list= list.concat( "<li>",country.name," (size ",country.size.toLocaleString(),")","</li>");
            }
         }//for
         list= list+ "</ol>";
         top3Cell.innerHTML = list;    
        }// if continent    
      };//onMessage
    </script>    
  </head>
  <body>
    <div id="loading">
      <h2>Please wait...</h2>
    </div>
    <div id="result">
      <table id="continentsTbl">
        <tr><td>Continent</td><td>Top 3 Countries by Size</td></tr>
      </table>
    </div>
  </body>
</html>

 

Node Application – Server Side – JavaScript using Express framework

The server side in this article consists of a simple Node application that leverages the Express module as well as the kafka-node module. A simple, generic SSE library is used – in the file sse.js. It exports the Connection object – which represents the SSE channel to a single client – and the Topic object, which manages a collection of Connections (all SSE consumers around a specific subject). When the underlying connection of a Connection ends (on close), the Connection is removed from the collection.

"use strict";

console.log("loading sse.js");

// ... with this middleware:
function sseMiddleware(req, res, next) {
    console.log(" sseMiddleware is activated with "+ req+" res: "+res);
    res.sseConnection = new Connection(res);
    console.log(" res has now connection  res: "+res.sseConnection );
    next();
}
exports.sseMiddleware = sseMiddleware;
/**
 * A Connection is a simple SSE manager for 1 client.
 */
var Connection = (function () {
    function Connection(res) {
          console.log(" sseMiddleware construct connection for response ");
  
        this.res = res;
    }
    Connection.prototype.setup = function () {
        console.log("set up SSE stream for response");
        this.res.writeHead(200, {
            'Content-Type': 'text/event-stream',
            'Cache-Control': 'no-cache',
            'Connection': 'keep-alive'
        });
    };
    Connection.prototype.send = function (data) {
        console.log("send event to SSE stream "+JSON.stringify(data));
        this.res.write("data: " + JSON.stringify(data) + "\n\n");
    };
    return Connection;
}());

exports.Connection = Connection;
/**
 * A Topic handles a bundle of connections with cleanup after lost connection.
 */
var Topic = (function () {
    function Topic() {
          console.log(" constructor for Topic");
  
        this.connections = [];
    }
    Topic.prototype.add = function (conn) {
        var connections = this.connections;
        connections.push(conn);
        console.log('New client connected, now: ', connections.length);
        conn.res.on('close', function () {
            var i = connections.indexOf(conn);
            if (i >= 0) {
                connections.splice(i, 1);
            }
            console.log('Client disconnected, now: ', connections.length);
        });
    };
    Topic.prototype.forEach = function (cb) {
        this.connections.forEach(cb);
    };
    return Topic;
}());
exports.Topic = Topic;

The main application – in the file topNreport.js – does a few things:

  • it serves static HTML resources in the public subdirectory (which only contains the index.html document)
  • it implements the /topn/updates API where clients can register for SSE updates (that are collected in the sseClients Topic)
  • it consumes messages from the Kafka Topic Top3CountrySizePerContinent and pushes each received message as SSE event to all SSE clients
  • it schedules a function for periodic execution (once every 10 seconds at the moment); whenever the function executes, it sends a heartbeat event to all SSE clients

 

/*
This program serves a static HTML file (through the Express framework on top of Node). The browser that loads this HTML document registers itself as an SSE client with this program.

This program consumes Kafka messages from topic Top3CountrySizePerContinent to which the Running Top3 (size of countries by continent) is produced.

This program reports to all its SSE clients the latest update (or potentially a periodic top 3 of largest countries per continent, with a configurable interval)

 
*/

var express = require('express')
  , http = require('http')
  , sseMW = require('./sse');

var kafka = require('kafka-node')
var Consumer = kafka.Consumer
var client = new kafka.Client("ubuntu:2181/")
var countriesTopic = "Top3CountrySizePerContinent";

var app = express();
var server = http.createServer(app);

var PORT = process.env.PORT || 3000;
var APP_VERSION = '0.9';

server.listen(PORT, function () {
  console.log('Server running, version '+APP_VERSION+', Express is listening... at '+PORT+" ");
});

 // Realtime updates
var sseClients = new sseMW.Topic();


app.use(express.static(__dirname + '/public'))
app.get('/about', function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.write("Version "+APP_VERSION+". No Data Requested, so none is returned");
    res.write("Supported URLs:");
    res.write("/public , /public/index.html ");
    res.write("incoming headers" + JSON.stringify(req.headers)); 
    res.end();
});
//configure sseMW.sseMiddleware as function to get a stab at incoming requests, in this case by adding a Connection property to the request
app.use(sseMW.sseMiddleware)

// initial registration of SSE Client Connection 
app.get('/topn/updates', function(req,res){
    var sseConnection = res.sseConnection;
    sseConnection.setup();
    sseClients.add(sseConnection);
} );


var m;
//send message to all registered SSE clients
updateSseClients = function(message) {
    var msg = message;
    this.m=message;
    sseClients.forEach( 
      function(sseConnection) {
        sseConnection.send(this.m); 
      }
      , this // this second argument to forEach is the thisArg (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach) 
    ); //forEach
}// updateSseClients

// send a heartbeat signal to all SSE clients, once every interval seconds (or every 3 seconds if no interval is specified)
initHeartbeat = function(interval) {
    setInterval(function()  {
        var msg = {"label":"The latest", "time":new Date()}; 
        updateSseClients( JSON.stringify(msg));
      }//interval function
    , interval?interval*1000:3000
    ); // setInterval 
}//initHeartbeat

// initialize heartbeat at 10 second interval
initHeartbeat(10); 


// Configure Kafka Consumer for Kafka Top3 Topic and handle Kafka message (by calling updateSseClients)
var consumer = new Consumer(
  client,
  [],
  {fromOffset: true}
);

consumer.on('message', function (message) {
  handleCountryMessage(message);
});

consumer.addTopics([
  { topic: countriesTopic, partition: 0, offset: 0}
], () => console.log("topic "+countriesTopic+" added to consumer for listening"));

function handleCountryMessage(countryMessage) {
    var top3 = JSON.parse(countryMessage.value);
    var continent = new Buffer(countryMessage.key).toString('ascii');
    top3.continent = continent;
    updateSseClients( top3);
}// handleCountryMessage

Running the application

In order to run the application, the Node application that publishes the basic country records to a Kafka Topic is started first.


Then the Kafka Streams Java application that derives the Top 3 per continent and produces it to a Kafka Topic is started.


And finally the Node application that consumes from the Top3 Topic and pushes SSE events to the browser clients is run.
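
As a minimal sketch, assuming Node.js and npm are installed and Kafka/ZooKeeper are reachable as configured in topNreport.js, this last step boils down to something like:

npm install express kafka-node
node topNreport.js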


After a little wait, the browser starts displaying the top 3 lists, based on the output from the Kafka Streams application.

When all messages from the countries2.csv input file have been processed, the browser shows the final top 3 countries for each continent.

This is the result of all the individual top3 messages pushed as SSE events from the Node application to the browser client. Only a single browser client was used here; however, many browsers could have connected to the same Node server and received the same SSE events simultaneously.


Masking bind values with Oracle Transparent Sensitive Data Protection

Tom Kyte - Sun, 2017-02-19 00:46
Hi, I'm trying to use the Oracle Transparent Sensitive Data Protection feature on a database ver. 12.1.0.2.0. The plan is to use the predefined REDACT_AUDIT policy in order to hide bind variables that refer sensitive columns. I've followed the i...
Categories: DBA Blogs

Data to be displayed grouped by start and end date

Tom Kyte - Sun, 2017-02-19 00:46
I have a table t: create table t (employee varchar2(10), job varchar2(5), start_date date, end_date date, FTE int); with this data in it: insert into t values (1111,1,19-May-2008,18-May-2010,100); insert into t v...
Categories: DBA Blogs

Regarding expressions around columns in SQL

Tom Kyte - Sun, 2017-02-19 00:46
Hi Tom, i havae a query that i given below SELECT 2 tab_type , ID,LVNAME,LV_NAME,APPL_FROM,GRANT_FROM,GRANT_TO, APPLICABLE,LWP,LVSTR,TYPE,EFFECT_ON,AGNST_FROM, HALFPAID,UNPAID,QTRPAID,0,IS_PRORATA,SYSDATE FROM UHO.LVOPEN W...
Categories: DBA Blogs

Shifting / Replacing RAC nodes

Tom Kyte - Sun, 2017-02-19 00:46
Greetings, Hope you are doing well. I wanted to ask a short question/advice for one of our issues. We are replacing some nodes in a production rac cluster with newer ones. Is adding/deleting nodes the best way to go about? Or are there any ...
Categories: DBA Blogs

Updating Materialized Views - Unable to render MATERIALIZED VIEW DDL ...with DBMS_METADATA attempting internal generator

Tom Kyte - Sun, 2017-02-19 00:46
I know it was not efficient, but due to my privileges on an Oracle database I had tablespace on, I had been running an SSIS package to drop and recreate about 20 tables daily. Tables used for reporting queries. It took 2 hours. I was running SSIS f...
Categories: DBA Blogs

Parse a valid WHERE clause syntax using PL/SQL

Tom Kyte - Sun, 2017-02-19 00:46
I have a predicate column in a table which stores a text which must be a valid WHERE clause SEQ PREDICATE --- --------- 1 EMPNO in ( select mgr from emp) AND ( COMISSION is null or SAL > 100 ) 2 RTRIM(LAST_NAME) LIKE '%KYTE%' How to ...
Categories: DBA Blogs

Script to get OMS ,Agent and DB Plugin Patch details

Arun Bavera - Sat, 2017-02-18 18:17

 

GETTING OMS PATCH DETAILS

export OMS_HOME=/u01/middleware/oms
export ORACLE_HOME=$OMS_HOME
export PLUGIN_HOME=/u01/middleware/plugins/oracle.sysman.db.oms.plugin_12.1.0.8.0/

DT=$(date '+%m%d%Y')

LOG=/tmp/EM_Patch_details_${DT}.txt

echo "1.Getting the OMS system patches" >${LOG}
echo "=====================================================================================" >>${LOG}

$ORACLE_HOME/OPatch/opatch lsinventory -details >>${LOG}

echo "" >>${LOG}
echo "======================================================================================" >>${LOG}
echo "2.Getting the OMS DB PLUGIN patches" >>${LOG}
echo "======================================================================================" >>${LOG}
$ORACLE_HOME/OPatch/opatch lsinventory -oh $PLUGIN_HOME -details >>${LOG}

 

GETTING AGENT PATCH DETAILS

export AGENT_BASE=/u01/app/oracle/product/agent12c
export AGENT_HOME=${AGENT_BASE}/core/12.1.0.5.0
export ORACLE_HOME=${AGENT_HOME}
export PLUGIN_HOME=${AGENT_BASE}/plugins/oracle.sysman.db.agent.plugin_12.1.0.8.0/

DT=$(date '+%m%d%Y')

LOG=/tmp/AGENT_Patch_details_${DT}.txt

echo "1.Getting the AGENT system patches" >${LOG}
echo "=====================================================================================" >>${LOG}

$ORACLE_HOME/OPatch/opatch lsinventory -details >>${LOG}

echo "" >>${LOG}
echo "======================================================================================" >>${LOG}
echo "2.Getting the AGENT DB PLUGIN patches" >>${LOG}
echo "======================================================================================" >>${LOG}
$ORACLE_HOME/OPatch/opatch lsinventory -oh $PLUGIN_HOME -details >>${LOG}

Categories: Development

Log File Switches Twice as Often as ARCHIVE_LAG_TARGET

Tom Kyte - Sat, 2017-02-18 06:26
I have an odd issue occurring on my primary pre-production database server. In both my production and pre-production environments, we have the ARCHIVE_LAG_TARGET parameter set to force a log switch at regular intervals. I know this is not a modern-...
Categories: DBA Blogs

UTL_MAIL.SEND issue with Senders email domain

Tom Kyte - Sat, 2017-02-18 06:26
I am not getting any error in the application when I use sender's email domain as @yahoo.com and email is not delivered to recipient. recipient will get email If I use sender's domain as @gmail.com or @abc.com or any other valid domains. So how to tr...
Categories: DBA Blogs

after update count of executed updates

Tom Kyte - Sat, 2017-02-18 06:26
I have a script executed in SQL*PLUS command line that should give out the summ of all real updated rows from the WHERE clause after all update executes simiilar as on execute update command directely in SQL*Plus with "1 row updated". But the fol...
Categories: DBA Blogs

View peformance

Tom Kyte - Sat, 2017-02-18 06:26
I have composite partitioned FACT table that list-range partitioned and it currently houses 1756515660 rows. In order to incorporate dynamic partitioning/sub-partitioning, we want to switch to list-list style of table. Given the large size of the t...
Categories: DBA Blogs

The Great Wall

Tim Hall - Sat, 2017-02-18 04:34

I went to see The Great Wall last night.

 

The critics gave it a hard time. The reviews aren’t great. It’s a really cool film!

OK, it doesn’t have the best story. The character development isn’t great. The progression of Matt Damon’s character is quite clumsy. I see the flaws in all that, but my gosh what visuals!

If you are going to watch this film I would suggest watching it on the biggest screen you can find. I watched it on an IMAX 3D and it was amazing. The use of colour was out of this world. Visually it was like all the best bits of House of Flying Daggers and Hero, mixed with a bit of video game feel and a real-life version of animation. It was just spectacular.

As I mentioned, Matt Damon’s character was not my favourite. In a way I would have preferred him not to be in it. I know what he was meant to represent in the film, but I found it a distraction. The real lead character was Tian Jing (Commander Lin Mae), who is fearless, beautiful and totally kick-ass as leader of the Crane Corps. Watching the Crane Corps on a big screen, especially on 3D, is enough to make you lose your dinner! According to IMDB she is in the new Kong film too. I hope they’ve given her a decent role, not just some crappy love interest.

China has some seriously spectacular scenery. If you’ve watched any of the Chinese epics that have been released over the last 15 years you will know what I mean.

So disconnect your brain, sit back and bask in all visual glory!

Cheers

Tim…


Linux: keyring-password for wlan needed / legitimate wlan login

Dietrich Schroff - Sat, 2017-02-18 03:20
Each time I resume my notebook after a suspend-to-disk cycle, the WLAN connection hangs with the following message:

 (System policy prevents editing of network settings for all users / Enter password for default keyring to unlock)
I tried to fix this via network-manager by ticking the checkbox "Alle Benutzer dürfen dieses Netzwerk verwenden" ("All users may use this network"), but this did not help at all.
The point is that after resuming, my login keyring asks for password authentication again. But this behaviour can easily be changed with the tool seahorse.
(Installation on Debian with: apt-get install seahorse)

After starting seahorse, choose "File -> New" and you will get a dialog for creating a new password keyring (Passwort-Schlüsselbund). Choose a name and do not enter a password (this is not really secure, but if you configure your screensaver to lock your screen with a password, this should not be a problem). Confirm the empty password and you are done.

At the end you have to delete the old keyring (the one with the password). This is done by right-clicking the appropriate item.

After all these steps you should not see this authentication popup anymore.

12cR2 real-time materialized view (on query computation)

Yann Neuhaus - Fri, 2017-02-17 15:49

Materialized views are a very old feature (you may remember that they were called snapshots a long time ago). A materialized view has all the advantages of a view: you can define any select statement that joins, filters, aggregates, and see it as one table. It has all the advantages of a table: it is stored in one segment, can be indexed, partitioned, have constraints, be compressed, etc. It resembles an index in that it stores data redundantly, in a different physical way, focused more on the way the data will be queried than on the way it is entered. Like indexes, materialized views can be used transparently (with query rewrite), but unlike indexes, they are not maintained synchronously and have to be refreshed. They also share some advantages with replication because the changes done on the source tables can be captured into materialized view logs, so that the refresh can be incremental (fast refresh).
Oracle Database 12.2 goes a step further and is able to deliver fresh results even when the materialized view is stale. This is an amazing feature called real-time materialized view, which does on-query computation of the fresh result from the stale one, joined with the materialized view log.

I create my DEMO table on the Oracle Exadata Express Cloud Service:

SQL> create table DEMO (id primary key,a,b) as select rownum,round(log(10,rownum)) a, rownum b from xmltable('1 to 100000');
Table created.

I plan to create a materialized view that aggregates the count and sum of B grouped by A, and DBMS_MVIEW can tell me what I need in order to be able to fast refresh it.

Explain Materialized View

The goal is to have real-time materialized view with frequent refreshes, which means that we need fast refresh to be possible after any kind of modification.


SQL> exec dbms_mview.explain_mview('select a,count(b),sum(b),count(*) from DEMO group by a');
PL/SQL procedure successfully completed.
 
SQL> select distinct capability_name||' '||msgtxt||' '||related_text from mv_capabilities_table where capability_name like 'REFRESH_FAST%' and possible='N';
 
CAPABILITY_NAME||''||MSGTXT||''||RELATED_TEXT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
REFRESH_FAST_AFTER_ONETAB_DML COUNT(*) is not present in the select list
REFRESH_FAST
REFRESH_FAST_AFTER_INSERT the detail table does not have a materialized view log PDB_ADMIN.DEMO
REFRESH_FAST_AFTER_ANY_DML see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
REFRESH_FAST_PCT PCT is not possible on any of the detail tables in the materialized view
REFRESH_FAST_AFTER_ONETAB_DML see the reason why REFRESH_FAST_AFTER_INSERT is disabled

Here is what I have to do in order to have a materialized view that can be fast refreshed: add COUNT(*) to the select list, and create a materialized view log.

Materialized view log


SQL> create materialized view log on DEMO;
Materialized view log created.

Let’s check if it is ok now, with the additional count(*):

SQL> delete from mv_capabilities_table;
15 rows deleted.
 
SQL> exec dbms_mview.explain_mview('select a,count(b),sum(b),count(*) from DEMO group by a');
PL/SQL procedure successfully completed.
 
SQL> select distinct capability_name||' '||msgtxt||' '||related_text from mv_capabilities_table where capability_name like 'REFRESH_FAST%' and possible='N';
 
CAPABILITY_NAME||''||MSGTXT||''||RELATED_TEXT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
REFRESH_FAST
REFRESH_FAST_AFTER_ANY_DML see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
REFRESH_FAST_AFTER_INSERT mv log must have ROWID PDB_ADMIN.DEMO
REFRESH_FAST_AFTER_INSERT mv log must have new values PDB_ADMIN.DEMO
REFRESH_FAST_AFTER_INSERT mv log does not have all necessary columns PDB_ADMIN.DEMO
REFRESH_FAST_PCT PCT is not possible on any of the detail tables in the materialized view
REFRESH_FAST_AFTER_ONETAB_DML see the reason why REFRESH_FAST_AFTER_INSERT is disabled

I must add ROWID, used columns and NEW VALUES


SQL> drop materialized view log on DEMO;
Materialized view log dropped.
 
SQL> create materialized view log on DEMO with sequence, rowid (a,b) including new values;
Materialized view log created.

You can see that I’ve added the sequence, that was not mentioned by the explain_mview. I’ll come back on that later and probably in another post.


SQL> delete from mv_capabilities_table;
16 rows deleted.
SQL> exec dbms_mview.explain_mview('select a,count(b),sum(b),count(*) from DEMO group by a');
PL/SQL procedure successfully completed.
SQL> select distinct capability_name||' '||msgtxt||' '||related_text from mv_capabilities_table where capability_name like 'REFRESH_FAST%' and possible='N';
 
CAPABILITY_NAME||''||MSGTXT||''||RELATED_TEXT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
REFRESH_FAST_PCT PCT is not possible on any of the detail tables in the materialized view

Ok, now I’m ready to create the materialized view. The only remaining message is about PCT (Partition Change Tracking), which applies only to partitioned tables.


SQL> create materialized view DEMO_MV refresh fast on demand as select a,count(b),sum(b),count(*) from DEMO group by a;
Materialized view created.

Aggregate query on the source table

I’m running a simple query that can get its result from the source table or from the materialized view


SQL> select sum(b) from DEMO where a=3;
 
SUM(B)
----------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID brdc1qcbc2npk, child number 0
-------------------------------------
select sum(b) from DEMO where a=3
 
Plan hash value: 2180342005
 
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 262 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 262 |
|* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 262 |
---------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("A"=3)
filter("A"=3)

The query has read the source table. I need to enable query rewrite so that the CBO can transparently transform it into a query on the materialized view.

Query Rewrite


SQL> alter materialized view DEMO_MV enable query rewrite;
Materialized view altered.

I also need the query_rewrite_integrity to be set. It is by default:

SQL> show parameter query_rewrite
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
query_rewrite_enabled string TRUE
query_rewrite_integrity string enforced

Now, the rewrite can occur:

SQL> select sum(b) from DEMO where a=3;
 
SUM(B)
----------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID brdc1qcbc2npk, child number 0
-------------------------------------
select sum(b) from DEMO where a=3
 
Plan hash value: 2792196921
 
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 9 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 9 |
|* 2 | MAT_VIEW REWRITE ACCESS STORAGE FULL| DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 |
-----------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("DEMO_MV"."A"=3)
filter("DEMO_MV"."A"=3)

This query is optimized: 9 blocks read from the materialized view instead of 262 ones from the source table.

You can note that it’s not a new child cursor: the previous cursor has been invalidated when I altered the materialized view.

This rewrite can occur only because the materialized view has been refreshed and the source table had no modifications on it.

Stale MVIEW

Let’s do some DML on the source table.


SQL> insert into DEMO values(0,0,0);
1 row created.

and query again


SQL> select sum(b) from DEMO where a=3;
 
SUM(B)
----------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID brdc1qcbc2npk, child number 1
-------------------------------------
select sum(b) from DEMO where a=3
 
Plan hash value: 2180342005
 
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 270 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 270 |
|* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 270 |
---------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("A"=3)
filter("A"=3)

Now, the materialized view is stale. We cannot get the same result from it, so the rewrite didn’t happen.

You can see that I have a new child cursor. The previous one cannot be shared because it was valid only for a non-stale materialized view.

Stale tolerated

If I want to keep using the materialized view, I have the option to accept stale results:


SQL> alter session set query_rewrite_integrity=stale_tolerated;
Session altered.

Now, the rewrite can occur even when the source table has changed since the last refresh.


SQL> select sum(b) from DEMO where a=3;
 
SUM(B)
----------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID brdc1qcbc2npk, child number 2
-------------------------------------
select sum(b) from DEMO where a=3
 
Plan hash value: 2792196921
 
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 9 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 9 |
|* 2 | MAT_VIEW REWRITE ACCESS STORAGE FULL| DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 |
-----------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("DEMO_MV"."A"=3)
filter("DEMO_MV"."A"=3)

Of course, here you can’t see that the result is stale, because I inserted a row with value 0, which does not change the sum. Let’s count the rows instead, which is something that is also aggregated in my materialized view. I have the option to disable the rewrite and query the source table:


SQL> select /*+ no_rewrite */ count(b) from DEMO;
 
COUNT(B)
----------
100001

This is the accurate result, but it comes with a full scan of the source table.

The rewrite can also be forced by hint (because it is a cost-based decision):


SQL> select /*+ rewrite */ count(b) from DEMO;
 
COUNT(B)
----------
100000

Stale result here: I don’t see the latest modifications.

Frequent refresh

In order to limit the gap between fresh data and stale result, you can refresh the materialized view frequently. It’s not too expensive thanks to the materialized view log: fast refresh is incremental.

Here I don’t want stale result:

SQL> alter session set query_rewrite_integrity=enforced;
Session altered.

and I refresh the materialized view


SQL> exec dbms_mview.refresh('DEMO_MV','f');
PL/SQL procedure successfully completed.

Then I can expect, until the next updates, to get results from the materialized view.


SQL> select sum(b) from DEMO where a=3;
 
SUM(B)
----------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID brdc1qcbc2npk, child number 1
-------------------------------------
select sum(b) from DEMO where a=3
 
Plan hash value: 2180342005
 
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 270 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 270 |
|* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 270 |
---------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("A"=3)
filter("A"=3)

Unfortunately I re-used the same cursor here. When you refresh, the cursors are not invalidated.

I’m running another statement now to get it parsed again:

SQL> select sum(b) this_is_another_cursor from DEMO where a=3;
 
THIS_IS_ANOTHER_CURSOR
----------------------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 27xfg0qjcf7ff, child number 0
-------------------------------------
select sum(b) this_is_another_cursor from DEMO where a=3
 
Plan hash value: 2792196921
 
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 9 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 9 |
|* 2 | MAT_VIEW REWRITE ACCESS STORAGE FULL| DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 |
-----------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("DEMO_MV"."A"=3)
filter("DEMO_MV"."A"=3)

So, we now read the materialized view, but this will last only while there are no updates on the table. So the idea is to trigger a refresh as soon as there are modifications. Ideally it should work like indexes, which are maintained automatically. But indexes are much simpler: there is only a value-to-rowid mapping entry to maintain, and rowids do not change. Materialized views have joins and aggregates, and contain all columns.

Refresh on commit

So the idea is to defer the maintenance of the materialized view to commit time. This is the latest point where we are required to do it as we want other sessions to never see stale results. And materialized view logs are there to store the incremental changes even if the transaction is very long. Of course, we need to be aware of it because in general the commit is an immediate and simple operation.

Let’s define the materialized view to refresh on commit instead of on-demand


SQL> alter materialized view DEMO_MV refresh on commit;
Materialized view altered.

I do some modifications


SQL> delete from DEMO where id=0;
1 row deleted.

And I run my query


SQL> select sum(b) this_is_a_third_cursor from DEMO where a=3;
 
THIS_IS_A_THIRD_CURSOR
----------------------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 5dfs068dgbwvd, child number 0
-------------------------------------
select sum(b) this_is_a_third_cursor from DEMO where a=3
 
Plan hash value: 2180342005
 
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 270 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 270 |
|* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 270 |
---------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("A"=3)
filter("A"=3)

Rewrite cannot happen here because the materialized view is stale: I didn’t commit yet. Of course, other sessions can still use the materialized view because they must not see my modification.


SQL> commit;
Commit complete.

The commit has triggered the fast refresh of the materialized view


SQL> select sum(b) this_is_a_fourth_cursor from DEMO where a=3;
 
THIS_IS_A_FOURTH_CURSOR
-----------------------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 0075r0yzqt90a, child number 0
-------------------------------------
select sum(b) this_is_a_fourth_cursor from DEMO where a=3
 
Plan hash value: 2792196921
 
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 9 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 9 |
|* 2 | MAT_VIEW REWRITE ACCESS STORAGE FULL| DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 |
-----------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("DEMO_MV"."A"=3)
filter("DEMO_MV"."A"=3)

With on-commit refresh, the materialized view is never stale. The problem is that it can slow down the transactions: in addition to filling the materialized view logs, the commit has the overhead of applying them. In 12.1 this was the only way to have queries on the materialized view always return fresh results. But there’s something new in 12.2.

Real-time materialized views

Even when the materialized view is stale, we can get a fresh result without querying the source tables: we have the stale values in the materialized view and we have all the changes logged in the materialized view log. Easy or not, merging the two can be computed on the fly to get a fresh result. We still need fast refresh, but we don’t need refresh on commit anymore:


SQL> alter materialized view DEMO_MV refresh on demand;
Materialized view altered.

And in order to use this new feature we have to enable it at materialized view level:


SQL> alter materialized view DEMO_MV enable on query computation;
Materialized view altered.

Then let the magic happen:


SQL> select sum(b) from DEMO where a=3;
 
SUM(B)
----------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID brdc1qcbc2npk, child number 0
-------------------------------------
select sum(b) from DEMO where a=3
 
Plan hash value: 2792196921
 
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 9 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 9 |
|* 2 | MAT_VIEW REWRITE ACCESS STORAGE FULL| DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 |
-----------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("DEMO_MV"."A"=3)
filter("DEMO_MV"."A"=3)

Here my materialized view is not stale, so nothing special happened. Now a modification:

SQL> insert into DEMO values(0,0,0);
1 row created.

and…

SQL> select sum(b) try_again from DEMO where a=3;
 
TRY_AGAIN
----------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID dtmhccwr0v7r5, child number 0
-------------------------------------
select sum(b) try_again from DEMO where a=3
 
Plan hash value: 2180342005
 
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 270 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 270 |
|* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 270 |
---------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("A"=3)
filter("A"=3)

Still no magic here. For the session that did the modifications, it seems that query rewrite cannot happen: all changes are in the materialized view log, but applying the uncommitted ones for my own session seems to be impossible here. Well, let’s commit my changes.


SQL> commit;
Commit complete.

and see the magic:


SQL> select sum(b) try_again from DEMO where a=3;

SUM(B)
----------
4950617

SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID dtmhccwr0v7r5, child number 0
-------------------------------------
select sum(b) try_again from DEMO where a=3

Plan hash value: 2180342005

---------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 270 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 270 |
|* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 270 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - storage("A"=3)
filter("A"=3)

Oh… that’s my previous cursor. No invalidation occurs. I have to parse a different statement.
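As a side note, one way to confirm that the same child cursor was reused is to look at the shared pool. A quick sketch, with the sql_id taken from the output above:

-- Same sql_id and child number: the cursor was not invalidated by the commit
select sql_id, child_number, executions, plan_hash_value
from   v$sql
where  sql_id = 'dtmhccwr0v7r5';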


SQL> select sum(b) here_I_am from DEMO where a=3;
 
HERE_I_AM
----------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 34fqrktpthuk7, child number 1
-------------------------------------
select sum(b) here_I_am from DEMO where a=3
 
Plan hash value: 1240257898
 
-----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 25 | | | |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 25 | | | |
| 2 | VIEW | | 1 | 705 | 1 |00:00:00.01 | 25 | | | |
| 3 | UNION-ALL | | 1 | | 1 |00:00:00.01 | 25 | | | |
|* 4 | FILTER | | 1 | | 1 |00:00:00.01 | 16 | | | |
|* 5 | HASH JOIN OUTER | | 1 | 100 | 1 |00:00:00.01 | 16 | 3843K| 3843K| 1699K (0)|
|* 6 | MAT_VIEW ACCESS STORAGE FULL | DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 | 1025K| 1025K| |
| 7 | VIEW | | 1 | 100 | 1 |00:00:00.01 | 7 | | | |
| 8 | HASH GROUP BY | | 1 | | 1 |00:00:00.01 | 7 | 1956K| 1956K| 2324K (0)|
| 9 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 7 | | | |
| 10 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 7 | | | |
|* 11 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 7 | | | |
| 12 | WINDOW SORT | | 1 | 1 | 1 |00:00:00.01 | 7 | 2048 | 2048 | 2048 (0)|
|* 13 | TABLE ACCESS STORAGE FULL | MLOG$_DEMO | 1 | 1 | 1 |00:00:00.01 | 7 | 1025K| 1025K| |
| 14 | VIEW | | 1 | 605 | 0 |00:00:00.01 | 9 | | | |
| 15 | UNION-ALL | | 1 | | 0 |00:00:00.01 | 9 | | | |
|* 16 | FILTER | | 1 | | 0 |00:00:00.01 | 0 | | | |
| 17 | NESTED LOOPS OUTER | | 1 | 600 | 0 |00:00:00.01 | 0 | | | |
| 18 | VIEW | | 1 | 100 | 0 |00:00:00.01 | 0 | | | |
|* 19 | FILTER | | 1 | | 0 |00:00:00.01 | 0 | | | |
| 20 | HASH GROUP BY | | 1 | | 0 |00:00:00.01 | 0 | 2982K| 2982K| |
|* 21 | VIEW | | 1 | 1 | 0 |00:00:00.01 | 0 | | | |
| 22 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 0 | | | |
|* 23 | VIEW | | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
| 24 | WINDOW SORT | | 0 | 1 | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
|* 25 | TABLE ACCESS STORAGE FULL| MLOG$_DEMO | 0 | 1 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
|* 26 | INDEX UNIQUE SCAN | I_SNAP$_DEMO_MV | 0 | 6 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
|* 27 | HASH JOIN | | 1 | 5 | 0 |00:00:00.01 | 9 | 3043K| 3043K| 1702K (0)|
|* 28 | MAT_VIEW ACCESS STORAGE FULL | DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 | 1025K| 1025K| |
| 29 | VIEW | | 1 | 100 | 1 |00:00:00.01 | 0 | | | |
| 30 | HASH GROUP BY | | 1 | | 1 |00:00:00.01 | 0 | 1956K| 1956K| 2319K (0)|
| 31 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 0 | | | |
| 32 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 0 | | | |
|* 33 | VIEW | | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
| 34 | WINDOW SORT | | 0 | 1 | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
|* 35 | TABLE ACCESS STORAGE FULL | MLOG$_DEMO | 0 | 1 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
-----------------------------------------------------------------------------------------------------------------------------------------------------------

We got it. All the magic. The materialized view is read. The materialized view log is read. But we don't need the source tables. All this is merged by outer join and union all. The plan is harder to read, but it requires only 25 logical reads to get fresh results instead of 270 from the source table. The bigger the tables and the more complex the query, the more benefit you get, as long as you don't have too many changes since the last refresh. And all this without any overhead on other transactions' commits. That's the beauty of 12cR2 Enterprise Edition. Can you imagine having to code this yourself? For any query? For any modification on the source tables?
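To make the mechanism more concrete, here is a heavily simplified, hand-written approximation of what this merge computes for our aggregate. It only handles inserts and ignores deletes, updates, group elimination and the SNAPTIME$$ filtering on the last refresh time; the DEMO_MV and MLOG$_DEMO column names are taken from the outputs in this post. It is an illustration of the idea, not the actual internal transformation.

-- Insert-only approximation of on-query computation (illustration only)
select coalesce(mv.a, d.a)                            as a,
       nvl(mv."SUM(B)", 0)   + nvl(d.delta_sum_b, 0)  as sum_b,
       nvl(mv."COUNT(*)", 0) + nvl(d.delta_cnt, 0)    as cnt
from   DEMO_MV mv
full outer join (
         -- aggregate the changes recorded in the log since the last refresh
         select a, sum(b) delta_sum_b, count(*) delta_cnt
         from   MLOG$_DEMO
         where  dmltype$$ = 'I'
         group by a
       ) d
  on d.a = mv.a;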

FRESH_MV

This was query rewrite: query the source table and have the CBO transform the query to query the materialized view instead (given that the CBO costing estimates that it is cheaper). But you can also query the materialized view directly and ask for a fresh result by joining the materialized view log to the stale result. And this can be used in Standard Edition too (only query rewrite is limited to Enterprise Edition). On-query computation when querying the materialized view is enabled by the FRESH_MV hint:


SQL> select /*+ fresh_mv */ * from DEMO_MV;
 
A COUNT(B) SUM(B) COUNT(*)
---------- ---------- ---------- ----------
5 68378 4500058747 68378
2 285 49590 285
3 2846 4950617 2846
1 28 490 28
4 28460 494990550 28460
0 4 6 4
 
6 rows selected.
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last +alias'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID gyar0v20qcksu, child number 0
-------------------------------------
select /*+ fresh_mv */ * from DEMO_MV
 
Plan hash value: 2169890143
 
----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 6 |00:00:00.01 | 12 | | | |
| 1 | VIEW | | 1 | 730 | 6 |00:00:00.01 | 12 | | | |
| 2 | UNION-ALL | | 1 | | 6 |00:00:00.01 | 12 | | | |
|* 3 | VIEW | VW_FOJ_0 | 1 | 100 | 5 |00:00:00.01 | 9 | | | |
|* 4 | HASH JOIN FULL OUTER | | 1 | 100 | 6 |00:00:00.01 | 9 | 2897K| 2897K| 3217K (0)|
| 5 | VIEW | | 1 | 6 | 6 |00:00:00.01 | 9 | | | |
| 6 | MAT_VIEW ACCESS STORAGE FULL | DEMO_MV | 1 | 6 | 6 |00:00:00.01 | 9 | 1025K| 1025K| |
| 7 | VIEW | | 1 | 100 | 1 |00:00:00.01 | 0 | | | |
| 8 | HASH GROUP BY | | 1 | | 1 |00:00:00.01 | 0 | 1956K| 1956K| 2268K (0)|
| 9 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 0 | | | |
| 10 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 0 | | | |
|* 11 | VIEW | | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
| 12 | WINDOW SORT | | 0 | 1 | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
|* 13 | TABLE ACCESS STORAGE FULL | MLOG$_DEMO | 0 | 1 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
| 14 | VIEW | | 1 | 630 | 1 |00:00:00.01 | 3 | | | |
| 15 | UNION-ALL | | 1 | | 1 |00:00:00.01 | 3 | | | |
|* 16 | FILTER | | 1 | | 0 |00:00:00.01 | 1 | | | |
| 17 | NESTED LOOPS OUTER | | 1 | 600 | 1 |00:00:00.01 | 1 | | | |
| 18 | VIEW | | 1 | 100 | 1 |00:00:00.01 | 0 | | | |
|* 19 | FILTER | | 1 | | 1 |00:00:00.01 | 0 | | | |
| 20 | HASH GROUP BY | | 1 | | 1 |00:00:00.01 | 0 | 1956K| 1956K| 2304K (0)|
| 21 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 0 | | | |
| 22 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 0 | | | |
|* 23 | VIEW | | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
| 24 | WINDOW SORT | | 0 | 1 | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
|* 25 | TABLE ACCESS STORAGE FULL| MLOG$_DEMO | 0 | 1 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
|* 26 | INDEX UNIQUE SCAN | I_SNAP$_DEMO_MV | 1 | 6 | 1 |00:00:00.01 | 1 | 1025K| 1025K| |
| 27 | MERGE JOIN | | 1 | 30 | 1 |00:00:00.01 | 2 | | | |
| 28 | MAT_VIEW ACCESS BY INDEX ROWID | DEMO_MV | 1 | 6 | 6 |00:00:00.01 | 2 | | | |
| 29 | INDEX FULL SCAN | I_SNAP$_DEMO_MV | 1 | 6 | 6 |00:00:00.01 | 1 | 1025K| 1025K| |
|* 30 | FILTER | | 6 | | 1 |00:00:00.01 | 0 | | | |
|* 31 | SORT JOIN | | 6 | 100 | 1 |00:00:00.01 | 0 | 2048 | 2048 | 2048 (0)|
| 32 | VIEW | | 1 | 100 | 1 |00:00:00.01 | 0 | | | |
| 33 | SORT GROUP BY | | 1 | | 1 |00:00:00.01 | 0 | 2048 | 2048 | 2048 (0)|
| 34 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 0 | | | |
| 35 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 0 | | | |
|* 36 | VIEW | | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
| 37 | WINDOW SORT | | 0 | 1 | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
|* 38 | TABLE ACCESS STORAGE FULL| MLOG$_DEMO | 0 | 1 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
----------------------------------------------------------------------------------------------------------------------------------------------------------

Have you seen that we need even fewer logical reads (12) than before (25)? There is an optimization here with RESULT CACHE. You get this when you have the sequence in the materialized view log, and you can see that the sequence is used in the predicates:


Predicate Information (identified by operation id):
---------------------------------------------------
 
3 - filter("AV$0"."OJ_MARK" IS NULL)
4 - access(SYS_OP_MAP_NONNULL("SNA$0"."A")=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
11 - filter((("MAS$"."OLD_NEW$$"='N' AND "MAS$"."SEQ$$"="MAS$"."MAXSEQ$$") OR (INTERNAL_FUNCTION("MAS$"."OLD_NEW$$") AND
"MAS$"."SEQ$$"="MAS$"."MINSEQ$$")))
13 - storage("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))
filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))
16 - filter(CASE WHEN ROWID IS NOT NULL THEN 1 ELSE NULL END IS NULL)
19 - filter(SUM(1)>0)
23 - filter((("MAS$"."OLD_NEW$$"='N' AND "MAS$"."SEQ$$"="MAS$"."MAXSEQ$$") OR (INTERNAL_FUNCTION("MAS$"."OLD_NEW$$") AND
"MAS$"."SEQ$$"="MAS$"."MINSEQ$$")))
25 - storage("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))
filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))
26 - access("DEMO_MV"."SYS_NC00005$"=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
30 - filter("DEMO_MV"."COUNT(*)"+"AV$0"."D0">0)
31 - access("DEMO_MV"."SYS_NC00005$"=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
filter("DEMO_MV"."SYS_NC00005$"=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
36 - filter((("MAS$"."OLD_NEW$$"='N' AND "MAS$"."SEQ$$"="MAS$"."MAXSEQ$$") OR (INTERNAL_FUNCTION("MAS$"."OLD_NEW$$") AND
"MAS$"."SEQ$$"="MAS$"."MINSEQ$$")))
38 - storage("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))
filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))

Of course, you also see a predicate with the staleness timestamp (here 2017-02-16 20:31:08) of the materialized view.
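That timestamp is simply the last refresh time of the materialized view, which can be cross-checked in the dictionary (a sketch):

-- Cross-check the staleness timestamp against the last refresh
select mview_name, staleness, last_refresh_date
from   user_mviews
where  mview_name = 'DEMO_MV';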

This result cache is interesting because the materialized view log is read several times in the execution plan and this is a way to actually read it only once.

SQL> select type,column_count,row_count,cache_id,name from v$result_cache_objects;
 
TYPE COLUMN_COUNT ROW_COUNT CACHE_ID NAME
---------- ------------ ---------- ------------------------------ ------------------------------
Dependency 0 0 PDB_ADMIN.MLOG$_DEMO PDB_ADMIN.MLOG$_DEMO
Result 7 1 6jf9k1y2wt8xc5b00gv9px6ww0 DMLTYPES:MLOG$_DEMO

The result cache has a dependency on the materialized view log, to be aware of additional changes, and when tracing the transformed query we can see that this result cache has a session lifetime: /*+ RESULT_CACHE(LIFETIME=SESSION, NAME="DMLTYPES:MLOG$_DEMO") */. Note that I included the sequence in the materialized view log, but this is not required. I'll show in a future post that the execution plan is different then, and does not use the result cache.

So what?

This is an amazing feature. You can optimize your queries transparently by creating materialized views, get fresh results, and minimize the refresh overhead. Depending on the size of the tables and the rate of modifications, you can choose the right refresh frequency with the goal of limiting the materialized view log entries to apply on each query. You have real-time results and bulk refresh at the same time. Oracle Database has always been a database for mixed workloads, where readers don't block writers. And once again we have a feature to optimize queries by pre-calculating them, with minimal impact on the source.
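Putting it together, enabling this on an existing fast-refreshable materialized view boils down to a few statements. A sketch, using the names from this post (query rewrite is assumed to be enabled on the view for the transparent rewrite shown above):

-- Recap of the moving parts (sketch)
alter materialized view DEMO_MV refresh on demand;            -- drop the on-commit overhead
alter materialized view DEMO_MV enable on query computation;  -- real-time results (12.2)
alter materialized view DEMO_MV enable query rewrite;         -- transparent rewrite (Enterprise Edition)

-- Bulk refresh at the frequency you choose, to keep the log to apply small
exec dbms_mview.refresh('DEMO_MV', method => 'F');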

It is transparent, but after this first test, I have a few questions that arise and that I'll try to answer in future posts: Is it always better to have the sequence in the materialized view log? Is the default result cache size still sufficient? How can it use a timestamp with only one-second precision and not an SCN? What happens at the winter Daylight Saving Time clock change? Can we get query rewrite when our own transaction has made the modifications? Do we need to invalidate cursors that read the source table? How accurate are the cardinality estimations on the very volatile materialized view? When the full materialized view log is read, can it trigger a complete refresh?

 

This article 12cR2 real-time materialized view (on query computation) appeared first on Blog dbi services.

Review at amazon: Projektbegleitendes Projektmanagement

Dietrich Schroff - Fri, 2017-02-17 14:06
Today I finished the book "Projektbegleitendes Projektmanagement":


The author has created quite a good reference for project and quality management. If you need a short summary of ISO 9001 or best practices for quality assurance, you will find it within a few pages.
I would like to draw particular attention to the following part: chapters 9 to 11 contain many checklists, tables of contents for some useful documents, and so on...
From my point of view, this alone is already sufficient reason to purchase the book.


If you are interested, take a look at my review at amazon.de. Like all my reviews, it is written in German. ;-)
