Feed aggregator

Download all directly and indirectly required JAR files using Maven install dependency:copy-dependencies

Amis Blog - Thu, 2017-02-09 00:33

My challenge is simple: I am creating a small Java application – single class with main method – that has many direct and indirect dependencies. In order to run my simple class locally I need to:

  • code the Java Class
  • compile the class
  • run the class

In order to compile the class, all directly referenced classes from supporting libraries should be available. To run the class, all indirectly invoked classes should be available as well. That means that in addition to the .class file that is the result of compiling my Java code, I need a large number of JAR files.

Maven is a great mechanism for describing the dependencies of a project. With a few simple XML elements, I can indicate which libraries my application has a direct dependency on. The Maven pom.xml file is where these dependencies are described. Maven uses these dependencies during compilation – to have all directly depended-on classes available for the compiler.

In order to help out with all runtime dependencies, Maven can also download the JAR files for all direct and even indirect dependencies. Take the dependencies in this pom.xml file (for a Java application that will work with Kafka Streams):

 

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>nl.amis.streams.countries</groupId>
  <artifactId>Country-Events-Analyzer</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>Country-Events-Analyzer</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-streams -->  
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-streams</artifactId>
        <version>0.10.0.0</version>    
    </dependency>
    <dependency>    
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>0.10.0.0</version>    
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.7.4</version>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>3.8.1</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.rocksdb</groupId>
        <artifactId>rocksdbjni</artifactId>
        <version>4.9.0</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.1</version>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

The number of JAR files required to eventually run the generated class is substantial. Finding all these JAR files manually is not easy: it may not be obvious which files are required, the files may not be easy to locate, and the indirect dependencies (stemming from the JAR files that the application directly depends on) are almost impossible to determine by hand.
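Maven can reveal this dependency graph itself. As a quick check (not part of the original post), the dependency plugin's tree goal prints every direct and transitive dependency declared in the pom.xml:

mvn dependency:tree

The output lists, for each direct dependency such as org.apache.kafka:kafka-streams, the transitive dependencies it drags in.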

Using a simple Maven instruction, all JAR files are gathered and copied to a designated directory. Before the operation, the project's target directory is empty.

The statement to use is:

mvn install dependency:copy-dependencies

This will instruct Maven to analyze the pom.xml file, find the direct dependencies, locate the associated JAR files, determine the indirect dependencies for each of these direct dependencies, and process those in the same way, recursively.
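By default, the copy-dependencies goal writes to target/dependency. If the JAR files should end up somewhere else, the goal can be configured in the pom.xml – a sketch, where the outputDirectory value is just an example:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <configuration>
        <!-- default is ${project.build.directory}/dependency -->
        <outputDirectory>${project.build.directory}/lib</outputDirectory>
    </configuration>
</plugin>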

After a few dozen seconds, the operation completes.

The JAR files have been downloaded to the target/dependency directory.

I can now run my simple application using the following command, which adds all JAR files to the classpath for the JVM:

java -cp target/Country-Events-Analyzer-1.0-SNAPSHOT.jar;target/dependency/* nl.amis.streams.countries.App

Note: on Linux, the semicolon should be a colon: java -cp target/Country-Events-Analyzer-1.0-SNAPSHOT.jar:target/dependency/* nl.amis.streams.countries.App

Note: the Maven dependencies for specific projects and libraries can be explored in MVNRepository, such as https://mvnrepository.com/artifact/org.apache.kafka/kafka-streams/0.10.0.0 for Kafka Streams.


NodeJS – Publish messages to Apache Kafka Topic with random delays to generate sample events based on records in CSV file

Amis Blog - Wed, 2017-02-08 23:59

In a recent article I described how to implement a simple Node.js program that reads and processes records from a delimiter-separated file. That is a stepping stone on the way to my real goal: publish a load of messages on a Kafka Topic, based on records in a file, and semi-randomly spread over time.

In this article I will use the stepping stone and extend it:

  • read all records from the CSV file into an in-memory array
  • create a Kafka Client and Producer using the Node module kafka-node
  • process one record at a time, and when done schedule the next cycle using setTimeout with a random delay
  • turn each parsed record into an object and publish the JSON stringified representation to the Kafka Topic


The steps:

1. npm init kafka-node-countries

2. npm install csv-parse --save

3. npm install kafka-node --save

4. Implement KafkaCountryProducer.js

 

/*
This program reads and parses all lines from the csv file countries2.csv into an array (countriesArray) of arrays; each nested array represents a country.
The initial file read is synchronous. The country records are kept in memory.
After the initial read is performed, a function is invoked to publish a message to Kafka for the first country in the array. This function then uses a timeout with a random delay
to schedule itself to process the next country record in the same way. Depending on how the delays pan out, this program will publish country messages to Kafka roughly every 3 seconds for about 10 minutes.
*/

var fs = require('fs');
var parse = require('csv-parse');

// Kafka configuration
var kafka = require('kafka-node');
var Producer = kafka.Producer;
var KeyedMessage = kafka.KeyedMessage;
// instantiate the client with a host:port connect string for the ZooKeeper instance of the Kafka cluster
var client = new kafka.Client("ubuntu:2181/");

// name of the topic to produce to
var countriesTopic = "countries";

var producer = new Producer(client);
var countryProducerReady = false;

producer.on('ready', function () {
    console.log("Producer for countries is ready");
    countryProducerReady = true;
});
 
producer.on('error', function (err) {
  console.error("Problem with producing Kafka message " + err);
});


var inputFile = 'countries2.csv';
var averageDelay = 3000;  // in milliseconds
var spreadInDelay = 2000; // in milliseconds

var countriesArray;

var parser = parse({delimiter: ';'}, function (err, data) {
    countriesArray = data;
    // when all countries are available, process the first one
    // note: array element at index 0 contains the row of headers that we should skip
    handleCountry(1);
});

// read the inputFile, feed the contents to the parser
fs.createReadStream(inputFile).pipe(parser);

// handle the current country record
function handleCountry(currentCountry) {
    // stop scheduling once all records have been processed
    if (currentCountry >= countriesArray.length) return;
    var line = countriesArray[currentCountry];
    var country = { "name" : line[0]
                  , "code" : line[1]
                  , "continent" : line[2]
                  , "population" : line[4]
                  , "size" : line[5]
                  };
    console.log(JSON.stringify(country));
    // produce country message to Kafka
    produceCountryMessage(country);
    // schedule this function to process the next country after a random delay of averageDelay plus or minus at most half of spreadInDelay
    var delay = averageDelay + (Math.random() - 0.5) * spreadInDelay;
    // note: use bind to pass in the value for the input parameter currentCountry
    setTimeout(handleCountry.bind(null, currentCountry + 1), delay);
}//handleCountry

function produceCountryMessage(country) {
    var countryKM = new KeyedMessage(country.code, JSON.stringify(country));
    var payloads = [
        { topic: countriesTopic, messages: countryKM, partition: 0 }
    ];
    if (countryProducerReady) {
        producer.send(payloads, function (err, data) {
            console.log(data);
        });
    } else {
        // the exception handling can be improved, for example schedule this message to be tried again later on
        console.error("Sorry, CountryProducer is not ready yet; failed to produce message to Kafka.");
    }
}//produceCountryMessage

5. Run node KafkaCountryProducer.js


NodeJS – reading and processing a delimiter separated file (csv)

Amis Blog - Wed, 2017-02-08 23:34

Frequently, there is a need to read data from a file, process it and route it onwards. In my case, the objective was to produce messages on a Kafka Topic. However, regardless of the objective, the basic steps of reading the file and processing its contents are often required. In this article I show the very basic steps with Node.js and the Node module csv-parse.

1. npm init process-csv

Enter a small number of details in the command-line dialog.

2. npm install csv-parse --save

This will install the Node module csv-parse. This module provides processing of delimiter-separated files.


This also extends the generated package.json file with a reference to csv-parse:

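The dependencies entry in package.json then looks roughly like this (the version number is whatever npm resolved at install time; the one below is just an example):

{
  "name": "process-csv",
  "version": "1.0.0",
  "dependencies": {
    "csv-parse": "^1.1.9"
  }
}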

3. Implement file processFile.js

The logic to read records from a csv file and do something (write to console) with each record is very straightforward. In this example, I will read data from the file countries2.csv, a file with records for all countries in the world (courtesy of https://restcountries.eu/).

The fields are semicolon-separated; the records are each on a new line.
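To give an impression of the format, the first lines of countries2.csv look roughly like this (illustrative values; note that the code below uses columns 0, 1, 2, 4 and 5):

name;code;continent;capital;population;size
Belgium;BE;Europe;Brussels;11007020;30528
Canada;CA;Americas;Ottawa;35749600;9984670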

 

/*
This program reads and parses all lines from the csv file countries2.csv into an array (countriesArray) of arrays; each nested array represents a country.
The initial file read is synchronous. The country records are kept in memory.
*/

var fs = require('fs');
var parse = require('csv-parse');

var inputFile='countries2.csv';
console.log("Processing Countries file");

var parser = parse({delimiter: ';'}, function (err, data) {
    // when all countries are available, process them
    // note: array element at index 0 contains the row of headers, so skip it
    data.slice(1).forEach(function(line) {
      // create country object out of parsed fields
      var country = { "name" : line[0]
                    , "code" : line[1]
                    , "continent" : line[2]
                    , "population" : line[4]
                    , "size" : line[5]
                    };
      console.log(JSON.stringify(country));
    });
});

// read the inputFile, feed the contents to the parser
fs.createReadStream(inputFile).pipe(parser);

 

4. Run the file with node processFile.js.


Steps to Recreate Central Inventory in Real Applications Clusters (Doc ID 413939.1)

Michael Dinh - Wed, 2017-02-08 21:13

$ echo $ORACLE_HOME

/u01/app/oracle/product/12.1.0/db_1

$ $ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME

Oracle Interim Patch Installer version 12.1.0.1.3
Copyright (c) 2017, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/12.1.0/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/db_1/oraInst.loc
OPatch version    : 12.1.0.1.3
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0/db_1/cfgtoollogs/opatch/opatch2017-02-08_15-56-03PM_1.log

List of Homes on this system:

Inventory load failed... OPatch cannot load inventory for the given Oracle Home.
Possible causes are:
   Oracle Home dir. path does not exist in Central Inventory
   Oracle Home is a symbolic link
   Oracle Home inventory is corrupted
LsInventorySession failed: OracleHomeInventory gets null oracleHomeInfo

OPatch failed with error code 73

This happened due to an error during install – an oraInventory mismatch.

$ cat /etc/oraInst.loc
inst_group=oinstall
inventory_loc=/u01/app/oraInventory

$ cd /u01/software/database
$ export DISTRIB=`pwd`
$ ./runInstaller -silent -showProgress -waitforcompletion -force -ignorePrereq -responseFile $DISTRIB/response/db_install.rsp \
> oracle.install.option=INSTALL_DB_SWONLY \
> UNIX_GROUP_NAME=oinstall \
> INVENTORY_LOCATION=/u01/app/oracle/oraInventory \

Back up the oraInventory on both nodes, then attach the homes.
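A plain copy of the inventory directory (the path comes from /etc/oraInst.loc above) is sufficient as a backup; for example, on each node:

$ cp -rp /u01/app/oraInventory /u01/app/oraInventory.bak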

$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -silent -ignoreSysPrereqs -attachHome \
ORACLE_HOME="/u02/app/12.1.0/grid" ORACLE_HOME_NAME="OraGI12Home1" \
LOCAL_NODE="node01" CLUSTER_NODES="{node01,node02}" CRS=true
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16383 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'AttachHome' was successful.

$ ./runInstaller -silent -ignoreSysPrereqs -attachHome \
ORACLE_HOME="/u01/app/oracle/product/12.1.0/db_1" ORACLE_HOME_NAME="OraDB12Home1" \
LOCAL_NODE="node01" CLUSTER_NODES="{node01,node02}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16383 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'AttachHome' was successful.

SQL Server AlwaysOn – Distributed availability groups, read-only with round-robin capabilities

Yann Neuhaus - Wed, 2017-02-08 14:04

 

This blog post comes from a very interesting discussion with one of my friends about the read-only capabilities of secondary replicas in the context of distributed availability groups. Distributed availability groups are primarily designed to address D/R scenarios and some types of migration scenario as well. I already discussed one possible migration scenario here. However, we may also take advantage of secondary replicas as read-only replicas in reporting scenarios (obviously after assessing whether the cost is worth it). In addition, if you plan to scale out with secondary replicas (even with asynchronous replication), you may consider using distributed availability groups and the cascading feature, which reduces network bandwidth overhead, especially if your cross-datacenter link is not designed to handle a heavy replication workload. Considering this last scenario, my friend’s (Sarah Bessard) motivation was to assess distributed availability groups as a replacement for SQL Server replication.

As a reminder, SQL Server 2016 provides a new round-robin feature for secondary read-only replicas, and extending it by including additional replicas from another availability group seems like a good idea. But here things become more complicated: transparent redirection and round-robin sound promising, but let’s see if they still work when a distributed availability group comes into play.

Let’s have a demo from my lab environment. For the moment there are two separate availability groups, each running on top of its own Windows Failover Cluster – respectively AdvGrp and AdvGrpDR.

 

[Figure: architecture – two separate availability groups, AdvGrp and AdvGrpDR, each on its own WSFC]

At this stage, we will focus only on my second availability group, AdvGrpDR. Firstly, I configured read-only routes for my 4 replicas; here is the result:

SELECT 
	r.replica_server_name,
	r.read_only_routing_url,
	g.name AS group_name
FROM 
	sys.availability_replicas AS r
JOIN 
	sys.availability_groups AS g ON r.group_id = g.group_id
WHERE 
	g.name = N'AdvGrpDR'
ORDER BY 
	r.replica_server_name;

select 
	r.replica_server_name AS primary_replica,
	r.read_only_routing_url,
	rl.routing_priority,
	r2.replica_server_name AS read_only_secondary_replica,
	r2.secondary_role_allow_connections_desc,
	g.name AS availability_group
FROM 
	sys.availability_read_only_routing_lists AS rl
JOIN 
	sys.availability_replicas AS r ON rl.replica_id = r.replica_id
JOIN 
	sys.availability_replicas AS r2 ON rl.read_only_replica_id = r2.replica_id
JOIN 
	sys.availability_groups AS g ON g.group_id =  r.group_id
WHERE 
	g.name = N'AdvGrpDR'
ORDER BY 
	primary_replica, availability_group, routing_priority;
GO

 

[Figure: read-only routing configuration of the AdvGrpDR replicas]

Read-only routing URLs and preferred replicas are defined for all the replicas. I defined a round-robin configuration for replicas WIN20161SQL16\SQL16 through WIN20163SQL16\SQL16, whereas the last one is configured with a preference order (WIN20163SQL16\SQL16 first, and WIN20164SQL16\SQL16 if the former is not available).
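For reference, such routes are defined with ALTER AVAILABILITY GROUP … MODIFY REPLICA. A sketch of the round-robin part (the routing URL and port are examples; since SQL Server 2016, replicas grouped in nested parentheses in the routing list are load-balanced in a round-robin fashion):

ALTER AVAILABILITY GROUP [AdvGrpDR]
MODIFY REPLICA ON 'WIN20161SQL16\SQL16'
WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'tcp://WIN20161SQL16.dbi-services.test:1433'));

ALTER AVAILABILITY GROUP [AdvGrpDR]
MODIFY REPLICA ON 'WIN20161SQL16\SQL16'
WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST =
    (('WIN20161SQL16\SQL16','WIN20162SQL16\SQL16','WIN20163SQL16\SQL16'), 'WIN20164SQL16\SQL16')));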

After configuring the read-only routes, I checked whether round-robin comes into play before implementing my distributed availability group. Before running the test I also implemented a special extended event session that includes the read-only route events, as follows:

CREATE EVENT SESSION [alwayson_ro] 
ON SERVER 
ADD EVENT sqlserver.hadr_evaluate_readonly_routing_info,
ADD EVENT sqlserver.read_only_route_complete,
ADD EVENT sqlserver.read_only_route_fail
ADD TARGET package0.event_file ( SET filename=N'alwayson_ro' ),
ADD TARGET package0.ring_buffer;

 

My test consisted of a basic command based on SQLCMD and the special -K READONLY parameter, as follows:
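Something along these lines (the listener name comes from the scripts below; the database name is just a placeholder):

sqlcmd -S lst-advdrgrp.dbi-services.test -d demodb -K READONLY -Q "SELECT @@SERVERNAME"

Running it several times in a row should show the connection landing on the read-only secondaries in turn.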

[Figure: SQLCMD read-only test – connections are distributed over the secondary replicas]

According to the above output, the read-only routing behaves as configured. We may also double-check by looking at the extended event output:

[Figure: extended event output showing the read-only route events]

But now let’s perform the same test after implementing my distributed availability group. The script I used was as follows:

:CONNECT WIN20161SQL16\SQL16

USE [master];
GO

-- Primary cluster
CREATE AVAILABILITY GROUP [AdvDistGrp]
WITH (DISTRIBUTED)
AVAILABILITY GROUP ON
'AdvGrp'
WITH
(
    LISTENER_URL = 'tcp://lst-advgrp.dbi-services.test:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL,
    SEEDING_MODE = AUTOMATIC
),
'AdvGrpDR'
WITH
(
    LISTENER_URL = 'tcp://lst-advdrgrp.dbi-services.test:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL,
    SEEDING_MODE = AUTOMATIC
);
GO

:CONNECT WIN20163SQL16\SQL16

USE [master];
GO

-- secondary cluster
ALTER AVAILABILITY GROUP [AdvDistGrp]
JOIN
AVAILABILITY GROUP ON
'AdvGrp'
WITH
(
    LISTENER_URL = 'tcp://lst-advgrp.dbi-services.test:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL,
    SEEDING_MODE = AUTOMATIC
),
'AdvGrpDR'
WITH
(
    LISTENER_URL = 'tcp://lst-advdrgrp.dbi-services.test:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL,
    SEEDING_MODE = AUTOMATIC
);
GO

 

[Figure: architecture – the distributed availability group AdvDistGrp spanning AdvGrp and AdvGrpDR]

Performing the previous test after applying the new configuration gives me a different result this time.

[Figure: SQLCMD read-only test after creating the distributed availability group]

It seems that round-robin is no longer performed, although I used the same read-only route configuration. In the same way, a look at the extended event output gave me no results. Transparent redirection and round-robin from the listener did not come into play this time.

Let’s perform a last test, which consists of moving the AdvGrpDR availability group to another replica, to confirm that transparent redirection does not work as we might expect:

:CONNECT WIN20164SQL16\SQL16

ALTER AVAILABILITY GROUP AdvGrpDR FAILOVER;

 

[Figure: SQLCMD read-only test after failing the AdvGrpDR availability group over]

Same output as previously. The AdvGrpDR availability group has moved from the WIN20163SQL16\SQL16 replica to the WIN20164SQL16\SQL16 replica, and the connection reached the newly defined primary of the second availability group (a secondary from the distributed availability group perspective), meaning we are not redirected to one of the defined secondaries.

At this stage, it seems that we will have to implement our own load-balancing component – whatever it is – in order to benefit from all the secondary replicas and read-only features on the second availability group. Maybe this is a feature Microsoft will consider as an improvement in the future.

Happy high availability moment!

Oracle Public Cloud: 2 OCPU for 1 proc. license

Yann Neuhaus - Wed, 2017-02-08 11:40

I’ve blogged recently about the Oracle Core Factor in the clouds. In order to optimize your Oracle licenses, you need to choose the instance type that can run faster on fewer cores. In a previous blog post, I tried to show how complex this can be, comparing the same workload (cached SLOB) on different instances of the same cloud provider (Amazon). I did that on instances with 2 virtual cores, covered by 2 Oracle Database processor licenses. Here I’m doing the same on the Oracle Public Cloud where, with the same number of licenses, you can run on 4 hyper-threaded cores.

Trial IaaS

I’m running with the 30-day trial subscription. I did several tests because the results were not consistent at first. I had some runs where it seemed that I was not running at full CPU. What I know is that CPU resources are guaranteed on the Oracle Public Cloud, but maybe that’s not the case on the trial, or I was working during a maintenance window, or…

Well, I finally got consistent results, and I ran the following test on the IaaS (Cloud Compute Service) to do something similar to what I did on AWS, with the Bring Your Own License idea.

In the Oracle Public Cloud, you can run 2 cores per Oracle processor licence. This means that with 2 processor licences, I can run an instance shape with 4 OCPUs. This shape is called 'OC5'. Here it is:

[oracle@a9f97f ~]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
Stepping: 2
CPU MHz: 2294.938
BogoMIPS: 4589.87
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 46080K
NUMA node0 CPU(s): 0-7
 
[oracle@a9f97f ~]$ cat /proc/cpuinfo | tail -26
processor : 7
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
stepping : 2
microcode : 0x36
cpu MHz : 2294.938
cache size : 46080 KB
physical id : 0
siblings : 8
core id : 7
cpu cores : 8
apicid : 14
initial apicid : 14
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm xsaveopt fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid
bogomips : 4589.87
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

And here are the results:


Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 30.2 0.00 5.48
DB CPU(s): 1.0 30.1 0.00 5.47
Logical read (blocks): 884,286.7 26,660,977.4
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 25.0 0.00 9.53
DB CPU(s): 2.0 25.0 0.00 9.53
Logical read (blocks): 1,598,987.2 20,034,377.0
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 40.9 0.00 9.29
DB CPU(s): 3.0 40.9 0.00 9.28
Logical read (blocks): 2,195,570.8 29,999,381.1
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 42.9 0.00 14.46
DB CPU(s): 4.0 42.8 0.00 14.45
Logical read (blocks): 2,873,420.5 30,846,373.9
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 5.0 51.7 0.00 15.16
DB CPU(s): 5.0 51.7 0.00 15.15
Logical read (blocks): 3,520,059.0 36,487,232.0
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 6.0 81.8 0.00 17.15
DB CPU(s): 6.0 81.8 0.00 17.14
Logical read (blocks): 4,155,985.6 56,787,765.6
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 7.0 65.6 0.00 17.65
DB CPU(s): 7.0 65.5 0.00 17.62
Logical read (blocks): 4,638,929.5 43,572,740.0
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 8.0 92.3 0.00 19.20
DB CPU(s): 8.0 92.1 0.00 19.16
Logical read (blocks): 5,153,440.6 59,631,848.6
 

This is really good. This is x2.8 more LIOPS than the maximum I had on AWS EC2. A x2 factor is expected because I have x2 the vCPUs here. But the CPU is also faster. So, two conclusions here:

  • There is no technical reason behind the rejection of the core factor on Amazon EC2. It is only a marketing decision.
  • For the same Oracle Database cost, Oracle Cloud outperforms Amazon EC2 because it is cheaper (not to mention the discounts you will get if you go to Oracle Cloud).
So what?

This is not a benchmark. The LIOPS may depend a lot on your application’s behaviour, and CPU is not the only resource to take care of. But for sure, the Oracle Public Cloud IaaS is fast and costs less when used for Oracle products, because of the rules on the core factor. Those rules are for information only, though. Check your contract for the legal details.

 


How to Upgrade an Oracle-based Application without Downtime

Gerger Consulting - Wed, 2017-02-08 11:32
One of the most common reasons IT departments avoid database development is the belief that an application upgrade in the database causes downtime. However, nothing could be further from the truth. On the contrary, Oracle Database provides one of the most bulletproof ways to upgrade an application without any downtime: Edition-Based Redefinition (EBR).

EBR is a powerful and fascinating feature of Oracle (added in version 11.2) that enables application upgrades with zero downtime, while the application is actively used and operational.
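To give a flavour of how this looks (a minimal sketch; the edition, schema and procedure names are made up):

-- create a new edition for the next application version
CREATE EDITION app_v2 AS CHILD OF ora$base;
GRANT USE ON EDITION app_v2 TO app_owner;

-- sessions that switch to the new edition see the new code...
ALTER SESSION SET EDITION = app_v2;
CREATE OR REPLACE PROCEDURE calc_bonus AS
BEGIN
  NULL; -- new implementation goes here
END;
/
-- ...while sessions still on ora$base keep running the old version, with no downtime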

Attend the free webinar by Oren Nakdimon on February 16th to learn how to use EBR, see many live examples, and get tips from real-life experience in a production site using EBR extensively.

The webinar is free but space is limited.

Sign up for the free webinar.


Categories: Development

February 15: Hillside Family of Agencies―Oracle HCM Cloud Customer Forum

Linda Fishman Hoyle - Wed, 2017-02-08 10:02

Join us for an Oracle HCM Cloud Customer call on Wednesday, February 15, 2017, at 9:00 a.m. PDT.

Carolyn Kenny, Director of Information Services at Hillside Family of Agencies, will discuss why Hillside decided to move its on-premises Oracle E-Business Suite HR and ERP systems to Oracle HCM Cloud and Oracle ERP Cloud.

Hillside Family of Agencies is using the following Oracle Cloud products: Core HR, Payroll, Benefits and Absence Management, and Oracle ERP Cloud. The company is implementing in phases. Phase 1 included Core HR, HR Analytics, Recruiting, Social Sourcing, and Benefits. Phase 2 includes Performance and Goal Management, Career and Succession Planning, and Learning and Development.

Register now to attend the live forum and learn more about Hillside Family of Agencies’ experience with Oracle HCM Cloud.

IntraSee: All Aboard the Cloud Train

WebCenter Team - Wed, 2017-02-08 09:27

Authored by: Paul Isherwood. CEO & Co-Founder, IntraSee 

As one era ends, another begins. As client-server eventually succumbed to the ascendancy of the Internet and web-based systems, so too will on-premise solutions fade into history as the Cloud becomes the new normal. For many organizations there will be concern about making this transition. The comfort that people feel with what is known is hard to let go of, especially when what is new does not have a clearly defined path to adoption.

At IntraSee we believe in clarity of thought, which means providing clear direction on what can be a confusing subject. In that spirit, we have identified a number of offerings that will help you get to your final destination painlessly. We’ve grouped these into use-cases we believe are highly applicable to many organizations currently on the PeopleSoft platform.

  • Use-Case 1: I am using the PeopleSoft Interaction Hub as an HR or Campus portal, how do I provide the same kind of functionality in the Oracle Cloud?
  • Use-Case 2: I am using the PeopleSoft Interaction Hub to house all my content, policies and procedures. I have thousands of HTML objects and images, plus thousands of pdf files and Word docs. How do I move them into the Oracle Cloud so they complement HCM or Student Cloud? And how do I manage them once they are there?
  • Use-Case 3: I’ve created a number of bolt-ons in PeopleTools that I know won’t be available in the HCM Cloud. Is there some way I can rebuild them using Oracle’s Cloud tools? It’s not an option for us just to drop them. 
Read more about these Use-Cases in depth in Paul's original post here.

Outsourcing Inc. Standardizes on Oracle Identity Cloud Service

Oracle Press Releases - Wed, 2017-02-08 07:00
Press Release
Outsourcing Inc. Standardizes on Oracle Identity Cloud Service Selects solution that enhances security while not compromising on ease-of-use

Redwood Shores, Calif.—Feb 8, 2017

Oracle today announced that Outsourcing Inc., a leading provider of outsourcing services for manufacturing companies, has selected Oracle Identity Cloud Service, a next-generation security and identity management cloud platform designed to be an integral part of the enterprise security fabric.

Outsourcing is experiencing rapid growth as it addresses the changing needs of its customer base. Its sales for the period ending December 31, 2015 reached a record high of 80.8 billion yen, a 36 percent year-over-year increase. Currently, the company focuses on key industries such as IT, construction, and healthcare. It has invested 43 billion yen in mergers and acquisitions and has 31 subsidiaries in Japan and 54 subsidiaries worldwide.

In order to support its expanding global workforce, Outsourcing required a technology solution that would provide best-in-class security for employees without compromising user experience. Additionally, the company needed a solution that could work across the multiple cloud services and on-premises applications used by the group’s companies in Japan and overseas. Outsourcing needed a solution that would integrate with Oracle Documents Cloud Service so it could promptly operate with Oracle’s SaaS applications, applications built on the Oracle Cloud Platform, and third-party cloud services.

Oracle Identity Cloud Service will provide Outsourcing’s employees with single sign-on authentication that will allow them to access documents via Oracle Documents Cloud. This will improve the user experience, streamline operational management, and enhance security. It will also build the technical foundation for user ID management and authentication in the cloud. Outsourcing also plans to develop a collaboration with custom applications running on Oracle’s IaaS and PaaS, and to establish a common ID and access management platform within group companies while sequentially deploying it to Oracle’s SaaS applications and third-party services.

"Outsourcing needed to establish an agile, secure system environment because of its expanding business through mergers & acquisitions (M&A), diversifying target industries, and growing domestic and overseas networks,” said Kinji Manabe, General Manager in Business Management Department, Outsourcing, Inc. “Oracle has a proven record of providing the best-in-class management solutions, and we are convinced that the Oracle Identity Cloud will be the foundation for the future growth of Outsourcing."

Contact Info
Sarah Fraser
Oracle
+1.650.743.0660
sarah.fraser@oracle.com
Norihito Yachita
Oracle Japan
+81.3.6834.4835
norihito.yachita@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Disclaimer

This document is for informational purposes only and may not be incorporated into a contract or agreement. 


runcluvfy.sh -pre crsinst NTP failed PRVF-07590 PRVG-01017

Michael Dinh - Wed, 2017-02-08 06:56

12c (12.1.0.2.0) RAC Oracle Linux Server release 7.3
/u01/software/grid/runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

Starting Clock synchronization checks using Network Time Protocol(NTP)...

Checking existence of NTP configuration file "/etc/ntp.conf" across nodes
  Node Name                             File exists?            
  ------------------------------------  ------------------------
  node02                                yes                     
  node01                                yes                     
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP configuration file "/etc/ntp.conf" existence check passed

Checking daemon liveness...

Check: Liveness for "ntpd"
  Node Name                             Running?                
  ------------------------------------  ------------------------
  node02                                no                      
  node01                                yes                     
PRVF-7590 : "ntpd" is not running on node "node02"
PRVG-1017 : NTP configuration file is present on nodes "node02" on which NTP daemon or service was not running
Result: Clock synchronization check using Network Time Protocol(NTP) failed

NTP was indeed running on both nodes.
The issue is that /var/run/ntpd.pid did not exist on the failed node:
NTP had been started with incorrect options there.

GOOD:

# cat /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# systemctl status ntpd.service
ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-02-03 20:37:18 CST; 3 days ago
 Main PID: 22517 (ntpd)
   CGroup: /system.slice/ntpd.service
           /usr/sbin/ntpd -u ntp:ntp -x -u ntp:ntp -p /var/run/ntpd.pid

# ll /var/run/ntpd.*
-rw-r--r-- 1 root root 5 Feb  3 20:37 /var/run/ntpd.pid

BAD:

# cat /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# systemctl status ntpd.service
ntpd.service - Network Time Service           
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-02-03 18:10:23 CST; 3 days ago
 Main PID: 22403 (ntpd)
   CGroup: /system.slice/ntpd.service
           /usr/sbin/ntpd -u ntp:ntp -g           

# ll /var/run/ntpd.*
ls: cannot access /var/run/ntpd.*: No such file or directory

SOLUTION:

Restart ntpd on the failed node so that the daemon picks up the options from /etc/sysconfig/ntpd, including the -p /var/run/ntpd.pid PID file option.
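A minimal sketch of the fix and verification on the failed node:

# systemctl restart ntpd.service
# ll /var/run/ntpd.pid

After the restart, the PID file exists again and runcluvfy.sh passes the clock synchronization check.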

ORA-06512 At Line Solution

Complete IT Professional - Wed, 2017-02-08 05:00
Did you get an ORA-06512 error when running an SQL query? Learn what this error is and how to resolve it in this article. ORA-06512 Cause The error message you get will look similar to this: ORA-06512: at line n. Where n is a line number. This error message is a generic PL/SQL error message […]
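A minimal illustration (not from the article itself): ORA-06512 simply points at the PL/SQL line where the real error, reported just above it, occurred.

SQL> DECLARE
  2    v NUMBER;
  3  BEGIN
  4    v := 1 / 0;   -- the actual problem
  5  END;
  6  /
DECLARE
*
ERROR at line 1:
ORA-01476: divisor is equal to zero
ORA-06512: at line 4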
Categories: Development

Oracle 12c – RMAN list failure does not show any failure even if there is one

Yann Neuhaus - Wed, 2017-02-08 04:11

Relying too much on the RMAN Data Recovery Advisor is not always the best idea. In a lot of situations it tells you the right things; however, sometimes it does not tell you the optimal things, and sometimes RMAN list failure does not show any failure at all, even if there is one.

So … let’s quickly simulate the loss of a datafile during the normal runtime of the database. The result is a clear error message which says that datafile 5 is missing.

SQL> select count(*) from hr.employees;
select count(*) from hr.employees
                        *
ERROR at line 1:
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/u01/oradata/DBTEST1/hrDBTEST01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

Of course, the error message is immediately reflected in the alert.log as well, where it clearly says that Oracle is unable to open file number 5.

Errors in file /u00/app/oracle/diag/rdbms/dbtest1/DBTEST1/trace/DBTEST1_smon_17115.trc:
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/u01/oradata/DBTEST1/hrDBTEST01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory

Only the RMAN Data Recovery Advisor does not know what is going on.

RMAN> list failure;

using target database control file instead of recovery catalog
Database Role: PRIMARY

no failures found that match specification

Of course, I could shut down the DB and then start it up again, which would trigger a health check, but shutting down an instance is not always so easy on production systems, especially when only one datafile is missing, all others are available, and only a part of the application is affected.

The solution to that issue is to run a manual health check. Quite a lot of health checks can be run manually, as shown in the following documentation.

https://docs.oracle.com/database/121/ADMIN/diag.htm#ADMIN11269

I start with the DB Structure Integrity Check. This check verifies the integrity of database files and reports failures if these files are inaccessible, corrupt or inconsistent.

SQL> begin
  2  dbms_hm.run_check ('DB Structure Integrity Check','Williams Check 00000001');
  3  end;
  4  /

PL/SQL procedure successfully completed.

After running the health check, Oracle finds the failure, and in the alert.log you will see an entry like the following:

Checker run found 1 new persistent data failures

If you want to take a look at what exactly the health check found, you can invoke ADRCI and execute the "show hm_run" command.

oracle@vmoratest1:/oracle/workshop/bombs/ [DBTEST1] adrci

ADRCI: Release 12.1.0.2.0 - Production on Tue Feb 7 16:02:21 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u00/app/oracle"
adrci> show homes
ADR Homes:
diag/clients/user_oracle/host_1833655127_82
diag/tnslsnr/vmoratest1/listener
diag/rdbms/cdb1p/CDB1P
diag/rdbms/dbtest1/DBTEST1
diag/rdbms/rcat/RCAT

adrci> set home diag/rdbms/dbtest1/DBTEST1

adrci> show hm_run

ADR Home = /u00/app/oracle/diag/rdbms/dbtest1/DBTEST1:
*************************************************************************

...
...

**********************************************************
HM RUN RECORD 9
**********************************************************
   RUN_ID                        206
   RUN_NAME                      Williams Check 00000001
   CHECK_NAME                    DB Structure Integrity Check
   NAME_ID                       2
   MODE                          0
   START_TIME                    2017-02-07 16:03:44.431601 +01:00
   RESUME_TIME                   <NULL>
   END_TIME                      2017-02-07 16:03:44.478127 +01:00
   MODIFIED_TIME                 2017-02-07 16:03:44.478127 +01:00
   TIMEOUT                       0
   FLAGS                         0
   STATUS                        5
   SRC_INCIDENT_ID               0
   NUM_INCIDENTS                 0
   ERR_NUMBER                    0
   REPORT_FILE                   <NULL>
9 rows fetched

adrci>

However, if you take a look at the HM run report, it gives you an error.

adrci> show report hm_run 'Williams Check 00000001'
DIA-48415: Syntax error found in string [show report hm_run 'Williams Check 00000001'] at column [44]

This is not a bug. The HM run name must consist of alphanumeric characters and underscores only. So … better not to use spaces in your run names. The following would have been better:

SQL> begin
  2  dbms_hm.run_check ('DB Structure Integrity Check','WilliamsCheck');
  3  end;
  4  /

PL/SQL procedure successfully completed.

In case the "adrci show report hm_run" command does not work for you, it is not the end of the story. We can still look at the v$hm_finding view.

SQL> select RUN_ID, TIME_DETECTED, STATUS, DESCRIPTION, DAMAGE_DESCRIPTION from v$hm_finding where run_id = '206';

RUN_ID TIME_DETECTED                STATUS       DESCRIPTION                                  DAMAGE_DESCRIPTION
------ ---------------------------- ------------ -------------------------------------------- --------------------------------------------
   206 07-FEB-17 04.03.44.475000 PM OPEN         Datafile 5: '/u01/oradata/DBTEST1/hrDBTEST01 Some objects in tablespace HR might be unava
                                                 .dbf' is missing                             ilable

Now let’s check the RMAN “list failure” again.

RMAN> list failure;

Database Role: PRIMARY

List of Database Failures
=========================

Failure ID Priority Status    Time Detected        Summary
---------- -------- --------- -------------------- -------
2          HIGH     OPEN      07-FEB-2017 15:39:38 One or more non-system datafiles are missing


RMAN> advise failure;
...
Automated Repair Options
========================
Option Repair Description
------ ------------------
1      Restore and recover datafile 5
  Strategy: The repair includes complete media recovery with no data loss
  Repair script: /u00/app/oracle/diag/rdbms/dbtest1/DBTEST1/hm/reco_668410907.hm

  
RMAN> repair failure preview;

Strategy: The repair includes complete media recovery with no data loss
Repair script: /u00/app/oracle/diag/rdbms/dbtest1/DBTEST1/hm/reco_668410907.hm

contents of repair script:
   # restore and recover datafile
   sql 'alter database datafile 5 offline';
   restore ( datafile 5 );
   recover datafile 5;
   sql 'alter database datafile 5 online';
Conclusion

The Oracle Data Recovery Advisor is quite good, but sometimes you need to push it in the right direction. Besides that, take care with the naming convention that you use for your health check runs. ;-)

 


Database 12.1 Extended Support Fee Waived through July 2019

Steven Chan - Wed, 2017-02-08 02:06

Oracle's Lifetime Support policy has three phases:  Premier Support, Extended Support, and Sustaining Support.  For details about coverage during each phase, see:

You can purchase a support plan for your licensed products to obtain Premier Support.  There is an additional fee for Extended Support. 

Premier Support for Database 12.1 runs through July 31, 2018. Extended Support for Database 12.1 runs through July 31, 2021. The Extended Support fee for Oracle Database 12.1 has been waived through July 31, 2019. See:

Related Articles


Categories: APPS Blogs

Changing the label of an item in Oracle APEX dynamically

Dimitri Gielis - Tue, 2017-02-07 16:37

Today I got the question how to change the label of an item in Oracle Application Express (APEX) based on some condition. I have actually had this requirement myself a couple of times, so maybe other people have too.

Here’s an example; whenever we change the Source Item, we want the label of the Affected Item to change:

 after change of the source item, the label of the affected item changes

The first thing that comes to mind (if you already know a little bit of APEX) is to use a Dynamic Action: on change of the Source Item we will fire (in this example, only when the value is A):

Dynamic Action in APEX

Now which action should we use when the dynamic action fires?

Default possibility of actions

Set Value will typically set the value of an item, but what about the label?
If I don’t find the option, I typically look for a plugin or write some code myself. In this case I wrote a bit of JavaScript, for example:

var newLabel = 'My new label for ' + $v('P2_SOURCE_ITEM');
$('#'+$(this.affectedElements).attr('id')+'_LABEL').html(newLabel);

This will set the label to "My new label for " followed by the value of the item, at least if you select in Affected Elements the item that needs the label change.

Whenever I think about writing custom code, my mind says “you should create a plugin for that”.
So I actually started to write an Oracle APEX Plug-in called “Set Label” (https://github.com/dgielis/orclapex-plugin-set-label)

While I was trying out the plugin and writing up the things I needed to do, something happened in my mind. I had missed the obvious; it suddenly occurred to me that there’s a much simpler solution to this…

You can actually use the Set Value action… just append _LABEL to your item name in the Affected Elements, that’s it.

Use the Set Value dynamic action but add _LABEL to change the label of the item

Here’s the result: the label of the affected item changes as soon as the source item changes.

Sometimes developing is much simpler than initially thought; you just have to see it :)

Categories: Development

DBMS_PARALLEL_EXECUTE getting chunks to work in special order

Tom Kyte - Tue, 2017-02-07 13:46
Hi Tom, I'm using the DBMS_PARALLEL_EXECUTE package to run my PL/SQL procedure's work in parallel. The chunks are generated by my own SQL on a table which contains a numeric field "priority", like this: v_sql := ' select rowid, rowid ...
Categories: DBA Blogs

Real time scenarios

Tom Kyte - Tue, 2017-02-07 13:46
friends, I am searching for complex real-time scenarios and solutions on PL/SQL and SQL. Please provide reference documentation links if any; this would be a great help. Thanks, vin
Categories: DBA Blogs

Flush buffer cache and shared pool

Tom Kyte - Tue, 2017-02-07 13:46
Hi Tom, We have an application performing many inserts and updates from many machines. At peak time, we may have the application running on 300 machines performing inserts and updates. Once in a while, we saw some active sessions blocking other sess...
Categories: DBA Blogs

Question on ORA-12899: value too large

Tom Kyte - Tue, 2017-02-07 13:46
Hi Tom, We have a migration project from Sybase SQL Anywhere to Oracle, and there is an issue we still have no perfect solution for. In Sybase, when inserting/updating a target column, the source string will be auto-truncated if the length is more t...
Categories: DBA Blogs

right way to grant permissions to developers

Tom Kyte - Tue, 2017-02-07 13:46
hello, i am a junior dba with quite little knowledge about oracle. lately i wanted to make users for the plsql developers who work on the production database with the same user, bill (which has the DBA role). i wanted to make them their own users for security p...
Categories: DBA Blogs
