Feed aggregator

Sequence Contention

Tom Kyte - Tue, 2017-02-21 07:26
I need a query to find sequence contention in Oracle. Can you please provide it? Thanks in advance. Sam
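(Not part of the original question, but a hedged sketch of a common starting point: look for sessions waiting on the SQ enqueue. It assumes you can query GV$SESSION and DBA_OBJECTS and that P2 of the 'enq: SQ - contention' wait carries the sequence's OBJECT_ID.)

-- sketch: sessions currently waiting on sequence (SQ) enqueue contention
SELECT s.inst_id, s.sid, s.serial#, o.owner, o.object_name AS sequence_name, s.seconds_in_wait
FROM   gv$session s
       JOIN dba_objects o ON o.object_id = s.p2
WHERE  s.event = 'enq: SQ - contention';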
Categories: DBA Blogs

Advantages and disadvantages of using Shareplex vs Dataguard for disaster recovery

Tom Kyte - Tue, 2017-02-21 07:26
Hi AskTom, Just need some advice on the setup of our database environment. We will have production instances A and B. A is live for external business and B is an exact copy of A but is used as operational data storage (B is a live data warehouse environment)....
Categories: DBA Blogs

Latches and mutex

Tom Kyte - Tue, 2017-02-21 07:26
Dear Team, Can you please show a few demonstrations of latch and mutex types? Also, can we take any preliminary action to avoid latch and mutex contention in the future? I want to know the exact logic behind this. Thanks, Pradeep
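(Not part of the original question, but a minimal sketch of where such a demonstration could start, assuming you can query the V$ views; adjust for your version and privileges.)

-- sketch: top latches by sleeps
SELECT name, gets, misses, sleeps
FROM   v$latch
WHERE  misses > 0
ORDER BY sleeps DESC
FETCH FIRST 10 ROWS ONLY;

-- sketch: top mutex waits by sleeps
SELECT mutex_type, location, sleeps, wait_time
FROM   v$mutex_sleep
ORDER BY sleeps DESC
FETCH FIRST 10 ROWS ONLY;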
Categories: DBA Blogs

Webcast: "Simplified and Touch-Friendly User Interface in EBS"

Steven Chan - Tue, 2017-02-21 02:05

Oracle University has a wealth of free webcasts for Oracle E-Business Suite. If you're looking for the latest updates on our rapidly-evolving OA Framework (OAF) user interface capabilities, see:

Senthilkumar Ramalingam, Group Manager Product Development, shares the latest on the Oracle Applications Framework (OAF), which includes the Oracle Alta UI, a redesigned home page, and a set of new components optimized for display on mobile devices such as tablets. Oracle Alta UI is the next-generation user interface standard from Oracle, offering a modern and compelling UI for both cloud and on-premises applications. In addition, the OAF UI components offer several touch-friendly gestures for common actions, for a smarter and more efficient end-user experience. The session also covers major UI enhancements in components such as tables, search, and attachments. Come see the new components, new gesture-based touch interactions, and a modernized UI that completely transform the Oracle E-Business Suite end-user experience. This material was presented at Oracle OpenWorld 2016.

Categories: APPS Blogs

APEX 5.1 New Features - New Date

Denes Kubicek - Tue, 2017-02-21 00:33
APEX 5.1 is finally here. Oracle Application Express gets better with every release and is captivating more and more developers worldwide. It is easy to build both simple and complex applications on top of the Oracle stack. It is even genuinely fun!

With APEX 5.1, Interactive Grids have arrived as an important new feature. We have waited a long time for a modern way to implement Excel-like functionality on a web page with APEX. Now the time has finally come :) . Even master-detail-detail-detail-... relationships can be implemented; our users will be delighted.

Beyond that, there are important new features in many other areas that make our lives easier. At the same time, there are a few important things to watch out for so that the upgrade runs smoothly.

In this course you will get to know the new functionality of Oracle Application Express 5.1, and in particular how to use it successfully in practice.

Learn from and discuss with the world-renowned Oracle APEX experts:

Denes Kubicek, with many years of project experience in Oracle and APEX on multi-site applications, winner of Oracle Magazine's "Oracle APEX Developer of the Year 2008" award, an Oracle ACE Director, and well known in the Oracle APEX OTN forum for his sample application, and

Dietmar Aust, an experienced Oracle consultant specializing in Oracle APEX, an Oracle ACE, active in the OTN forums on APEX and Oracle XE, and a regular presenter at the relevant Oracle conferences (DOAG, ODTUG, Oracle Open World).



Categories: Development

AMIS Tools Showroom – The Sequel – Thursday, 16 March 2017

Amis Blog - Mon, 2017-02-20 23:25

Thursday, 16 March

17:00-21:00

AMIS, Nieuwegein

Register via: bu.om@amis.nl

On Thursday, 16 March the second AMIS Tools Showroom session takes place. The first session was on 13 December: in short and very short presentations and demos, 16 AMIS colleagues showed all kinds of handy tools and utilities. The emphasis in that session was on tools for monitoring, communication and collaboration.

In this second session we go looking for yet another collection of tools. This invitation therefore covers two aspects:

· Would you like to be there on 16 March to see tools presented by your peers?

· Do you have a tool you would like to present during this session? Think, for example, of tools for web conferencing & video streaming, screen cams, text editing, chat, image editing, data visualization, document sharing and voice recognition. Or other tools, apps and plugins that you find useful in your work and would like to show your colleagues, in a short presentation (5-15 min), preferably with a demo.

Please use the following form to indicate which tools are interesting to you and which tool you would like to present on: https://docs.google.com/forms/d/e/1FAIpQLSdNPwUACXxWaZGfs911UraVFQp5aWqeJVEx0xrSRFQTcYnYXA/viewform .
Based on the results of this survey we can put together the agenda for this session.

The post AMIS Tools Showroom – The Sequel – Thursday, 16 March 2017 appeared first on the AMIS Oracle and Java Blog.

Database Star Academy Membership is Now Open

Complete IT Professional - Mon, 2017-02-20 21:15
The Database Star Academy membership is now open! Here’s what you need to know. What Is the Database Star Academy Membership? It’s a membership site with a monthly fee that gives you access to many different online video courses and PDF guides related to Oracle database development. What’s Included? As part of your membership, you […]
Categories: Development

Using bitmap indexes in OLTP database with mostly inserts

Tom Kyte - Mon, 2017-02-20 13:06
Hi, We have a table for logging metadata about processed messages in a production system. The table has approx 32M rows today and 25 columns. The total number of rows is expected to be around 100M in the future. When the processing of a...
Categories: DBA Blogs

Trigger based on set of data

Tom Kyte - Mon, 2017-02-20 13:06
Hi All, I have a scenario where I want to create a trigger which will generate a flat file whenever a set of data, such as department numbers (10, 20, 30, 40, 50, ...), changes (insert/update) on a particular date (sysdate).
Categories: DBA Blogs

How to fetch up to 5MB of text/string/data from a larger CLOB in Oracle

Tom Kyte - Mon, 2017-02-20 13:06
Hi Oracle, CREATE TABLE XMLISSUE ( xmltablecolumn clob ); I created the table as shown above. With the code below I have inserted some data into the CLOB: DECLARE XMLCLO...
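(Not part of the original question, but a hedged sketch of one way to read roughly the first 5 MB of a larger CLOB in chunks with DBMS_LOB.SUBSTR; it assumes the XMLISSUE table above, a single row of interest and a single-byte character set.)

DECLARE
  l_src    CLOB;
  l_chunk  VARCHAR2(32767);
  l_out    CLOB;
  l_limit  PLS_INTEGER := 5 * 1024 * 1024;  -- ~5 MB worth of characters (assumption)
  l_offset PLS_INTEGER := 1;
  l_amount PLS_INTEGER := 32767;            -- reduce for multibyte character sets
BEGIN
  SELECT xmltablecolumn INTO l_src FROM xmlissue WHERE ROWNUM = 1;
  DBMS_LOB.CREATETEMPORARY(l_out, TRUE);
  WHILE l_offset <= LEAST(l_limit, DBMS_LOB.GETLENGTH(l_src)) LOOP
    l_chunk := DBMS_LOB.SUBSTR(l_src, l_amount, l_offset);
    DBMS_LOB.WRITEAPPEND(l_out, LENGTH(l_chunk), l_chunk);
    l_offset := l_offset + l_amount;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Characters copied: ' || DBMS_LOB.GETLENGTH(l_out));
END;
/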
Categories: DBA Blogs

when I drop a plsql function, 1 view goes invalid, 1 view and 1 procedure remain valid

Tom Kyte - Mon, 2017-02-20 13:06
We recently upgraded to database version 12c... We found an anomaly that we can't explain and are wondering if this is a bug or a misunderstanding of new functionality within 12c. We can reproduce this issue on demand with the following example....
Categories: DBA Blogs

use of Block change tracking file for offline incremental backup

Tom Kyte - Mon, 2017-02-20 13:06
Hi, if block change tracking is ENABLED, does Oracle use this file when we take an offline incremental backup (in mount mode)? As the DB is not open, I think it cannot access the change tracking file; I believe the feature is only...
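(Not part of the original question, but a hedged sketch of how to verify this yourself, assuming the standard dynamic performance views: take the incremental backup while the database is mounted and then check whether RMAN recorded the change tracking file as used.)

-- is block change tracking enabled, and where is the file?
SELECT status, filename FROM v$block_change_tracking;

-- after the incremental backup in MOUNT mode: was change tracking actually used?
SELECT file#, incremental_level, used_change_tracking, completion_time
FROM   v$backup_datafile
ORDER BY completion_time DESC;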
Categories: DBA Blogs

Converting a column from one data type to another in PostgreSQL

Yann Neuhaus - Mon, 2017-02-20 11:15

Last week at a customer the following question came up: We have a varchar2 column in Oracle that contains a date but actually is stored as a string. When we want to convert this column to a real date data type in Oracle we can use dbms_redefinition. How can we do that in PostgreSQL? The first answer that came to my mind is: Add a new column (of data type date) and populate it with the converted string from the source column, drop the source column and rename the new column to the name of the dropped column. This will certainly work, but it is not the most convenient way: What happens if the application somehow depends on the order of the columns? That shouldn't be the case if the application is written well, but you never know.
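For reference, a minimal sketch of that manual approach (assuming the same t1 table and the 'DDMMYYYY' format used in the examples below) would look like this:

-- sketch of the add/convert/drop/rename approach described above
alter table t1 add column a_new date;
update t1 set a_new = to_date(a,'DDMMYYYY');
alter table t1 drop column a;
alter table t1 rename column a_new to a;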

To start with, let's generate some test data:

drop table if exists t1;
create table t1 ( a varchar(20) );
insert into t1 (a) values ('01012017');
insert into t1 (a) values ('02012017');
insert into t1 (a) values ('03012017');
insert into t1 (a) values ('04012017');
insert into t1 (a) values ('05012017');
insert into t1 (a) values ('06012017');
insert into t1 (a) values ('07012017');
insert into t1 (a) values ('08012017');
insert into t1 (a) values ('09012017');
insert into t1 (a) values ('10012017');
insert into t1 (a) values ('11012017');
(postgres@[local]:5440) [postgres] > select * from t1;
    a     
----------
 01012017
 02012017
 03012017
 04012017
 05012017
 06012017
 07012017
 08012017
 09012017
 10012017
 11012017
(11 rows)

We now have a varchar column holding the date values as a string. When you look at the “alter table” command in PostgreSQL you'll notice something like this:

where action is one of:

    ADD [ COLUMN ] [ IF NOT EXISTS ] column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ]
    DROP [ COLUMN ] [ IF EXISTS ] column_name [ RESTRICT | CASCADE ]
    ALTER [ COLUMN ] column_name [ SET DATA ] TYPE data_type [ COLLATE collation ] [ USING expression ]

What will really help here is the “using” keyword because you can do things like this:

(postgres@[local]:5440) [postgres] > alter table t1 alter column a type date using to_date(a,'DDMMYYYY');
ALTER TABLE
(postgres@[local]:5440) [postgres] > \d t1
     Table "public.t1"
 Column | Type | Modifiers 
--------+------+-----------
 a      | date | 

That’s really cool: You can pass a function (this mean a self written function as well) to the alter command to specify on how you want to do the conversion.

When there is an index on the column, what happens to the index?

(postgres@[local]:5440) [postgres] > \d t1
             Table "public.t1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | character varying(20) | 
Indexes:
    "i1" btree (a)
(postgres@[local]:5440) [postgres] > alter table t1 alter column a type date using to_date(a,'DDMMYYYY');
ALTER TABLE
Time: 5.931 ms
(postgres@[local]:5440) [postgres] > \d t1
     Table "public.t1"
 Column | Type | Modifiers 
--------+------+-----------
 a      | date | 
Indexes:
    "i1" btree (a)
(postgres@[local]:5440) [postgres] > select indisvalid,indisready,indislive from pg_index where indexrelid = 'i1'::regclass; 
 indisvalid | indisready | indislive 
------------+------------+-----------
 t          | t          | t
(1 row)

Looks fine as well. Let's do a quick test to see if the index is really usable:

Time: 0.453 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 11
Time: 2.373 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 22
Time: 39.653 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 44
Time: 1.110 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 88
Time: 1.072 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 176
Time: 1.455 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 352
Time: 1.432 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 704
Time: 3.344 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 1408
Time: 20.972 ms
(postgres@[local]:5440) [postgres] > explain analyze select a from t1 where a = to_date('01012017','dd.mm.yyyy');
                                                 QUERY PLAN                                                 
------------------------------------------------------------------------------------------------------------
 Index Only Scan using i1 on t1  (cost=0.27..8.29 rows=1 width=4) (actual time=0.012..0.012 rows=0 loops=1)
   Index Cond: (a = to_date('01012017'::text, 'dd.mm.yyyy'::text))
   Heap Fetches: 0
 Planning time: 0.062 ms
 Execution time: 0.029 ms
(5 rows)

Perfect. The remaining question is: Does this operation block others from reading the table while it is being executed? Let's generate some more data to make the select operation a bit longer and then “watch” the statement in a separate session while we execute the conversion:

Time: 0.453 ms
drop table if exists t1;
create table t1 ( a varchar(20) );
insert into t1 (a) values ('01012017');
insert into t1 (a) values ('02012017');
insert into t1 (a) values ('03012017');
insert into t1 (a) values ('04012017');
insert into t1 (a) values ('05012017');
insert into t1 (a) values ('06012017');
insert into t1 (a) values ('07012017');
insert into t1 (a) values ('08012017');
insert into t1 (a) values ('09012017');
insert into t1 (a) values ('10012017');
insert into t1 (a) values ('11012017');
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 11
-- and so on and so on ...
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 360448

Then, in session 1 I am doing this:

Time: 0.453 ms
(postgres@[local]:5440) [postgres] > select count(*) from t1 where a = '01012017';
 count  
--------
 131072
(1 row)
(postgres@[local]:5440) [postgres] > \watch 0.1

In session 2 I am doing the conversion:

Time: 0.453 ms
(postgres@[local]:5440) [postgres] > alter table t1 alter column a type date using to_date(a,'DDMMYYYY');
ALTER TABLE

In session 1 you’ll notice that the statement is blocked and finally you get this:

Time: 0.453 ms
ERROR:  date/time field value out of range: "01012017"
LINE 1: select count(*) from t1 where a = '01012017';
                                          ^
HINT:  Perhaps you need a different "datestyle" setting.

Conclusion: Converting a column from one data type to another is nothing you want to do while your system is live. When you can afford some downtime, the “using” syntax is what you want to use, and it works quite fast. Hope this helps …

 

The post Converting a column from one data type to another in PostgreSQL appeared first on Blog dbi services.

OBIEE 12c - Regression Tester for Application Roles and Mappings

Rittman Mead Consulting - Mon, 2017-02-20 10:08

Allow me to introduce you to the Application Role Mapping validation script for OBIEE 12c. With this bit of code, we can utilize OBIEE's runcat command script to export our application role and permissions mapping information from multiple environments, and ensure their consistency. The picture below is an example of what you'll see as the final product. While it does not show you a side by side comparison of both environments, what it does do is display those objects in your lower environment that contain inconsistent application role or permissions mappings vs the target environment, and their respective configuration. The items in the path column then point you to that object in your lower environment catalog for further examination.

[screenshot: security audit results]

Our script only requires a few easy steps to generate a web-based view of any inconsistencies in application role and permissions mappings between your tested environments. Like the Baseline Validation Tool (BVT), this script compares environments, but it goes one step further and performs a fine-grained examination, and resulting view, of application role and permissions mappings. The BVT only catches that something is different about the object, as indicated by the Name column value, and tells you where to look.

[screenshot: BVT audit output]

While I'll be sure to go into more detail later, the first picture above shows us that we have a number of application role and/or permissions mappings that exist in the lower environment but do not exist in the target OBIEE environment. Curious? Let's jump right into it.


Overview


The security audit is essentially a 3-step process, and was designed to be really accessible and simple to use. It breaks out like this:

  • Run the security_audit.py script in both OBIEE12c environments (being the lower environment that possesses the proper app role/permission mappings and the target environment).
  • security_audit.py will generate a .csv file in each environment.
  • Move the CSV from the target environment into the directory where you've got the CSV in the lower environment. After you've got the files moved, you'll run security_compare.py and simply pass in the locations of your lower environment CSV, and then that of the target environment. Lastly, a browser will pop up, giving you an immediate view of any inconsistencies that exist between your two OBIEE 12c instances.

Let's take a look at the process in a bit more detail!


Step by Step


Run security_audit.py in Lower Environment

First, let's make sure we've got a few libraries installed that we'll need to run our code. I recommend using pip for this. You'll need to install pandas and flask if you have not done so already. Simply navigate to the security_audit directory you got from GitHub and then from the command line run:

python3 security_audit.py

The script is smart enough to figure out which kind of OS it's in. Then, if your DOMAIN_HOME variable is set correctly (probably something like ORACLE_HOME/user_projects/domains/bi), the runcat command will run, exporting a CSV that contains the information we need for the next script, which does the actual crunching between environments. In Windows, the default output location is C:, in Linux, the /tmp directory.

[screenshot: script run]

The tool will prompt you to enter your DOMAIN_HOME, should you not have one set in your environment.

[screenshot: DOMAIN_HOME prompt]



Run security_audit.py in Target Environment

Next, in our target environment, being the OBIEE 12c instance we want to make sure contains the same app role mappings as in our lower environment, run the security_audit.py script once again, following the same steps as outlined above. Rename the CSV to something different than the file that was written in your lower environment, as we're going to need to put both of these guys in the same directory.

[screenshot: renamed mapping files]

So on that note, after you've renamed your security_mappings CSVs, move them to the same directory on your lower environment. We're simply renaming them so we don't clobber one version or the other, and for easy reference as to which file belongs to its corresponding environment.



Run security_compare.py in Lower Environment

At this point you should have two security_mappings.csv files (although their names should now be different) in your lower environment. We're going to need them as inputs for the next part of the regression testing process. This next bit of code simply ensures consistency between the two environments. Once run, it will instantly display any catalog objects in your lower environment that contain any disparities in the way their application roles or permissions are mapped when compared against those in your target environment. As of this blog, there really is no good way to do this with any native OBIEE tool, aside from running the runcat reports yourself and doing the crunching. So let's do it!

Open a command prompt in your environment, taking note of where your two CSV files are located. In this example, we'll be using a Windows command prompt, with our files located directly off of the C:\ drive.

In your command prompt, navigate to the location of your .py files. On that note make sure you do not separate these from the other files in the security_audit directory. Flask will need the other files to render the resulting webpage. Back to it. In my example below, I've navigated to the security_audit directory, and then run the following:

[screenshot: running security_compare.py]

And then watch the magic happen! Make sure you have pop-ups enabled if you're having trouble rendering the page. The script will auto-magically figure out the host name for your environment and run it there.

[screenshot: permissions list]

  1. Owner - this is the owner of the catalog object that is showing the variance in permissions assignment under:

  2. Name - this is the name of the object, as it is displayed in the catalog.

  3. Path - this is the path to the object in the web catalog

  4. ACL - these are the detailed permissions mappings based on each entity assigned to the particular object. There is an accompanying key to the left hand side to help you out.

  5. Permissions - detail level permissions mapped to each object by owner entity.


Summary


Having any trouble running the script? Please get in touch! I would also love to hear some feedback on how it might have helped you perform one task or the other. Feel free to use the comments section below for this or to report any issues.

Categories: BI & Warehousing

Node.js application writing to MongoDB – Kafka Streams findings read from Kafka Topic written to MongoDB from Node

Amis Blog - Mon, 2017-02-20 08:35

MongoDB is a popular, lightweight, highly scalable, very fast and easy to use NoSQL document database. Written in C++, working with JSON documents (stored in the binary format BSON), and processing JavaScript commands using the V8 engine, MongoDB easily ties in with many different languages and platforms, one of which is Node.JS. In this article, I describe first of all how a very simple interaction between Node.JS and MongoDB can be implemented.

 

[screenshot]

Then I do something a little more challenging: the Node.JS application consumes messages from an Apache Kafka topic and writes these messages to a MongoDB database collection, to make the results available for many clients to read and query. Finally I will show a little analytical query against the MongoDB collection, to retrieve some information we would not have been able to get from the plain Kafka Topic (although with Kafka Streams it just may be possible as well).

You will see the Mongo DB driver for Node.JS in action, as well as the kafka-node driver for Apache Kafka from Node.JS. All resources are in the GitHub Repo: https://github.com/lucasjellema/kafka-streams-running-topN.

Prerequisites

Node.JS is installed, as is MongoDB.

Run the MongoDB server. On Windows, the command is mongod, optionally followed by the dbpath parameter to specify in which directory the data files are to be stored

mongod --dbpath c:\node\nodetest1\data\

For the part where messages are consumed from a Kafka Topic, a running Apache Kafka Cluster is  available – as described in more detail in several previous articles such as https://technology.amis.nl/2017/02/13/kafka-streams-and-nodejs-consuming-and-periodically-reporting-in-node-js-on-the-results-from-a-kafka-streams-streaming-analytics-application/.

 

Getting Started

Start a new Node application, using npm init.

Into this application, install the npm packages kafka-node and mongodb:

npm install mongodb --save

npm install kafka-node --save

This installs the two Node modules with their dependencies and adds them to the package.json

 

First Node Program – for Creating and Updating Two Static Documents

This simple Node.JS program uses the mongodb driver for Node, connects to a MongoDB server running locally and a database called test. It then tries to update two documents in the top3 collection in the test database; if a document does not yet exist (based on the key, which is the continent property) it is created. When the application is done running, two documents exist (and have their lastModified property set if they were updated).

var MongoClient = require('mongodb').MongoClient;
var assert = require('assert');

// connect string for mongodb server running locally, connecting to a database called test
var url = 'mongodb://127.0.0.1:27017/test';

MongoClient.connect(url, function(err, db) {
  assert.equal(null, err);
  console.log("Connected correctly to server.");
   var doc = {
        "continent" : "Europe",
         "nrs" : [ {"name":"Belgium"}, {"name":"Luxemburg"}]
      };
   var doc2 = {
        "continent" : "Asia",
         "nrs" : [ {"name":"China"}, {"name":"India"}]
      };
  insertDocument(db,doc, function() {
    console.log("returned from processing doc "+doc.continent);  
    insertDocument(db,doc2, function() {
      console.log("returned from processing doc "+doc2.continent);          
      db.close();
      console.log("Connection to database is closed. Two documents should exist, either just created or updated. ");
      console.log("From the MongoDB shell: db.top3.find() should list the documents. ");
    });
  });
});

var insertDocument = function(db, doc, callback) {
   // first try to update; if a document could be updated, we're done 
   console.log("Processing doc for "+doc.continent);
   updateTop3ForContinent( db, doc, function (results) {      
       if (!results || results.result.n == 0) {
          // the document was not updated so presumably it does not exist; let's insert it  
          db.collection('top3').insertOne( 
                doc
              , function(err, result) {
                   assert.equal(err, null);
                   callback();
                }
              );   
       }//if
       else {
         callback();
       }
 }); //updateTop3ForContinent
}; //insertDocument

var updateTop3ForContinent = function(db, top3 , callback) {
   db.collection('top3').updateOne(
      { "continent" : top3.continent },
      {
        $set: { "nrs": top3.nrs },
        $currentDate: { "lastModified": true }
      }, function(err, results) {
      //console.log(results);
      callback(results);
   });
};

The console output from the Node application:

[screenshot]

The output on the MongoDB Shell:

[screenshot]

Note: I have used db.top3.find() three times: before running the Node application, after it has run once and after it has run a second time. Note that after the second time, the lastModified property was added.

Second Node Program – Consume messages from Kafka Topic and Update MongoDB accordingly

This application registers as Kafka Consumer on the Topic Top3CountrySizePerContinent. Each message that is produced to that topic is consumed by the Node application and handled by function handleCountryMessage. This function parses the JSON message received from Kafka, adds a property continent derived from the key of the Kafka message, and calls the insertDocument function. This function attempts to update a record in the MongoDB collection top3 that has the same continent property value as the document passed in as parameter. If the update succeeds, the handling of the Kafka message is complete and the MongoDB collection  contains the most recent standings produced by the Kafka Streams application. If the update fails, presumably that happens because there is no record yet for the current continent. In that case, a new document is inserted for the continent.

[screenshot]

/*
This program connects to MongoDB (using the mongodb module )
This program consumes Kafka messages from topic Top3CountrySizePerContinent to which the Running Top3 (size of countries by continent) is produced.

This program records each latest update of the top 3 largest countries for a continent in MongoDB. If a document does not yet exist for a continent (based on the key which is the continent property) it is inserted.

The program ensures that the MongoDB /test/top3 collection contains the latest Top 3 for each continent at any point in time.

*/

var MongoClient = require('mongodb').MongoClient;
var assert = require('assert');

var kafka = require('kafka-node')
var Consumer = kafka.Consumer
var client = new kafka.Client("ubuntu:2181/")
var countriesTopic = "Top3CountrySizePerContinent";


// connect string for mongodb server running locally, connecting to a database called test
var url = 'mongodb://127.0.0.1:27017/test';
var mongodb;

MongoClient.connect(url, function(err, db) {
  assert.equal(null, err);
  console.log("Connected correctly to MongoDB server.");
  mongodb = db;
});

var insertDocument = function(db, doc, callback) {
   // first try to update; if a document could be updated, we're done 
   updateTop3ForContinent( db, doc, function (results) {      
       if (!results || results.result.n == 0) {
          // the document was not updated so presumably it does not exist; let's insert it  
          db.collection('top3').insertOne( 
                doc
              , function(err, result) {
                   assert.equal(err, null);
                   console.log("Inserted doc for "+doc.continent);
                   callback();
                }
              );   
       }//if
       else {
         console.log("Updated doc for "+doc.continent);
         callback();
       }
 }); //updateTop3ForContinent
}; //insertDocument

var updateTop3ForContinent = function(db, top3 , callback) {
   db.collection('top3').updateOne(
      { "continent" : top3.continent },
      {
        $set: { "nrs": top3.nrs },
        $currentDate: { "lastModified": true }
      }, function(err, results) {
      //console.log(results);
      callback(results);
   });
};

// Configure Kafka Consumer for Kafka Top3 Topic and handle Kafka message (by calling updateSseClients)
var consumer = new Consumer(
  client,
  [],
  {fromOffset: true}
);

consumer.on('message', function (message) {
  handleCountryMessage(message);
});

consumer.addTopics([
  { topic: countriesTopic, partition: 0, offset: 0}
], () => console.log("topic "+countriesTopic+" added to consumer for listening"));

function handleCountryMessage(countryMessage) {
    var top3 = JSON.parse(countryMessage.value);
    var continent = new Buffer(countryMessage.key).toString('ascii');
    top3.continent = continent;
    // insert or update the top3 in the MongoDB server
    insertDocument(mongodb,top3, function() {
      console.log("Top3 recorded in MongoDB for "+top3.continent);  
    });

}// handleCountryMessage

Running the application produces the following output.

Producing Countries:

[screenshot]

Producing Streaming Analysis – Running Top 3 per Continent:

[screenshot]

Processing Kafka Messages:

[screenshot]

Resulting MongoDB collection:

[screenshot]

And after a little while, here is the latest situation for Europe and Asia in the MongoDB collection :

[screenshot]

Resulting from processing the latest Kafka Stream result messages:

[screenshot]

 

 

Querying the MongoDB Collection

The current set of top3 documents – one for each continent – stored in MongoDB can be queried, using MongoDB find and aggregation facilities.

One query we can perform is to retrieve the top 5 largest countries in the world. Here is the query that gives us that insight. First it creates a single record per country (using $unwind to flatten the nrs array in each top3 document). The countries are then sorted by the size of each country (descending) and the first 5 of the sort result are retained. These five are then projected into a nicer looking output document that only contains continent, country and area fields.


db.top3.aggregate([ 
   {$project: {nrs:1}}
  ,{$unwind:'$nrs'}
  , {$sort: {"nrs.size":-1}}
  , {$limit:5}
  , {$project: {continent:'$nrs.continent', country:'$nrs.name', area:'$nrs.size' }}
])

[screenshot]

(And because no continent has its number 3 country in the top 4 of this list, we can be sure that this top 5 is the actual top 5 of the world)

 

Resources

A very good read – although a little out of date – is this tutorial on 1st and 2nd steps with Node and Mongodb: http://cwbuecheler.com/web/tutorials/2013/node-express-mongo/ 

MongoDB Driver for Node.js in the official MongoDB documentation: https://docs.mongodb.com/getting-started/node/client/ 

Kafka Connect for MongoDB – YouTube intro – https://www.youtube.com/watch?v=AF9WyW4npwY 

Combining MongoDB and Apache Kafka – with a Java application talking and listening to both: https://www.mongodb.com/blog/post/mongodb-and-data-streaming-implementing-a-mongodb-kafka-consumer 

Tutorials Point MongoDB tutorials – https://www.tutorialspoint.com/mongodb/mongodb_sort_record.htm 

Data Aggregation with Node.JS driver for MongoDB – https://docs.mongodb.com/getting-started/node/aggregation/

The post Node.js application writing to MongoDB – Kafka Streams findings read from Kafka Topic written to MongoDB from Node appeared first on AMIS Oracle and Java Blog.

Why Focused Learning Is Better For Your Career

Complete IT Professional - Mon, 2017-02-20 05:00
In this article, I’ll explain what I mean by “focused learning” and why it’s a good idea for your career. What is Focused Learning? When we want to learn something, we tend to research on it. We look for websites, articles, videos on the topic. Or we enrol in a university course, if it’s career […]
Categories: Development

12cR1 RAC Posts -- 6 : Running the Cluster Verification Utility

Hemant K Chitale - Mon, 2017-02-20 03:45
With a successful RAC Cluster in place, here is a run of the Cluster Verification Utility:

[oracle@collabn1 ~]$ cluvfy stage -post crsinst -n collabn1,collabn2

Performing post-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "collabn1"


Checking user equivalence...
User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "192.168.78.0"
Node connectivity passed for subnet "192.168.78.0" with node(s) collabn2,collabn1
TCP connectivity check passed for subnet "192.168.78.0"


Check: Node connectivity using interfaces on subnet "172.16.100.0"
Node connectivity passed for subnet "172.16.100.0" with node(s) collabn1,collabn2
TCP connectivity check passed for subnet "172.16.100.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.78.0".
Subnet mask consistency check passed for subnet "172.16.100.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "172.16.100.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "172.16.100.0" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.

Checking whether the ASM filter driver is active and consistent on all nodes
ASM filter driver library is not installed on any of the cluster nodes.
ASM filter driver configuration was found consistent across all the cluster nodes.
Time zone consistency check passed

Checking Cluster manager integrity...


Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.

Cluster manager integrity check passed


UDev attributes check for OCR locations started...
UDev attributes check passed for OCR locations


UDev attributes check for Voting Disk locations started...
UDev attributes check passed for Voting Disk locations

Default user file creation mask check passed

Checking cluster integrity...


Cluster integrity check passed


Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations


Checking daemon liveness...
Liveness check passed for "CRS daemon"

Checking OCR config file "/etc/oracle/ocr.loc"...

OCR config file "/etc/oracle/ocr.loc" check successful


Disk group for ocr location "+OCRVOTE/collabn-cluster/OCRFILE/registry.255.934671619" is available on all the nodes


Checking OCR backup location "/u01/app/12.1.0/grid/cdata/collabn-cluster"

OCR backup location "/u01/app/12.1.0/grid/cdata/collabn-cluster" check passed

NOTE:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.

OCR integrity check passed

Checking CRS integrity...

Clusterware version consistency passed.

CRS integrity check passed

Checking node application existence...

Checking existence of VIP node application (required)
VIP node application check passed

Checking existence of NETWORK node application (required)
NETWORK node application check passed

Checking existence of ONS node application (optional)
ONS node application check passed


Checking Single Client Access Name (SCAN)...

Checking TCP connectivity to SCAN listeners...
TCP connectivity to SCAN listeners exists on all cluster nodes

Checking name resolution setup for "collabn-cluster-scan.racattack"...

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Checking SCAN IP addresses...
Check of SCAN IP addresses passed

Verification of SCAN VIP and listener setup passed

Checking OLR integrity...
Check of existence of OLR configuration file "/etc/oracle/olr.loc" passed
Check of attributes of OLR configuration file "/etc/oracle/olr.loc" passed

WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.

OLR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed

User "oracle" is not part of "root" group. Check passed
Oracle Clusterware is installed on all nodes.
CTSS resource check passed
Query of CTSS for time offset passed

CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed


Oracle Cluster Time Synchronization Services check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Post-check for cluster services setup was successful.
[oracle@collabn1 ~]$
[oracle@collabn1 ~]$ su
Password:
[root@collabn1 oracle]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1688
Available space (kbytes) : 407880
ID : 827167720
Device/File Name : +OCRVOTE
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@collabn1 oracle]#
[root@collabn1 oracle]# ocrcheck -local
Status of Oracle Local Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1028
Available space (kbytes) : 408540
ID : 1014277103
Device/File Name : /u01/app/12.1.0/grid/cdata/collabn1.olr
Device/File integrity check succeeded

Local registry integrity check succeeded

Logical corruption check succeeded

[root@collabn1 oracle]#



[root@collabn2 oracle]# ocrcheck -local
Status of Oracle Local Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 796
Available space (kbytes) : 408772
ID : 1896774486
Device/File Name : /u01/app/12.1.0/grid/cdata/collabn2.olr
Device/File integrity check succeeded

Local registry integrity check succeeded

Logical corruption check succeeded

[root@collabn2 oracle]#


The "cluvfy stage -post crsinst" command as the Grid Infrastructure user checks the status of all components after the installation and configuration of the Cluster.  Note : This does not and cannot check any database (RAC or non-RAC) that is created.

The "ocrcheck" commands by root are to check the integrity of the Cluster Registry and Local Registry.  (You would run "ocrcheck -local" on each node of the Cluster).

.
.
.

Categories: DBA Blogs

Links for 2017-02-19 [del.icio.us]

Categories: DBA Blogs

RMAN unused block compression against TABLE drop vs TABLE with lob column compression.

Tom Kyte - Sun, 2017-02-19 19:06
Hi Tom, I am just trying to understand RMAN unused block compression against TABLE with LOB column compression vs TABLE drop. Consider LOB data for column 'DATA' in a table 'EMPLOYEES' under SCOTT schema. - lob segment SELECT SEGMENT_NAME FR...
Categories: DBA Blogs

Index column order and Partitioned table

Tom Kyte - Sun, 2017-02-19 19:06
We have a table with a few hundred million rows. We perform INSERT, DELETE and SEARCH operations on this table. Column ID is unique and column CODE has very few distinct values (let's say 100 distinct values). So we partition the table by LIST on the CODE column...
Categories: DBA Blogs
