Feed aggregator

How to compare two tables of data?

Tom Kyte - Fri, 2016-09-23 04:46
Hi Tom, I have two tables of values (dept1 and dept2). How do I compare all the data in these two tables? It will return true if both tables contain the same data and false otherwise... Another thing is that I do not know how to use CREATE OPE...
Categories: DBA Blogs

Java Connection Pooling with Oracle VPD

Tom Kyte - Fri, 2016-09-23 04:46
Hi Tom, We have a 3-tier application that is built on Java and Oracle. In our application, we extensively make use of Oracle VPD policies for setting contexts and managing the data. Now, we are building in Java something on top of Oracle. We hit ...
Categories: DBA Blogs

PlSQL- Bulk Collect and Update (Better Approach)

Tom Kyte - Fri, 2016-09-23 04:46
Hi Tom, I am looking for a better coding approach than what I have in my current system. I have two tables, dog_owner (16 Million Records) and dog_owner_stage (8 Million Records). In the current process, I usually insert based on a common owner_accou...
Categories: DBA Blogs

Performance issue in CLOB\BLOB data migration

Tom Kyte - Fri, 2016-09-23 04:46
(did not get any answer for https://asktom.oracle.com/pls/apex/f?p=100:24:0::NO::P24_ID:9531842300346462307 ) Hello Tom, First of all, I would like to thank you for your immense support on Database issues. It helps us a lot!! Question: M...
Categories: DBA Blogs

Compile_Error when refreshing a Materialized View from a procedure

Tom Kyte - Fri, 2016-09-23 04:46
We have Materialized Views which reference tables in other schemas. We can refresh/compile the Materialized Views from the command line; however, when we refresh/compile the Materialized View from within a procedure, the job immediately aborts with...
Categories: DBA Blogs

Practise question

Tom Kyte - Fri, 2016-09-23 04:46
Hi, I was practicing some questions on a SQL challenge about DML operations using multiple tables and got a doubt. CREATE TABLE plch_departments ( department_id INTEGER PRIMARY KEY, department_name VARCHAR2(30) ) / CREATE TABLE p...
Categories: DBA Blogs

Optimiser Trace

Tom Kyte - Fri, 2016-09-23 04:46
Hi Tom, A tricky question: recently we upgraded our systems to 11.2.0.4 and started to observe some queries taking much longer (from minutes to 10+ hours). On analysing the traces / explain plans we found the access path had changed from the previous one, so...
Categories: DBA Blogs

Converting an EE DB 12.1.0.2.0 to SE DB 12.1.0.1.0 - Version - empty table

Tom Kyte - Fri, 2016-09-23 04:46
Hi, I'm trying to convert EE 12.1.0.2.0 to SE 12.1.0.1.0 with expdp/impdp. I've found an example of converting EE 11.2.0.1 to SE 11.2.0.4 where it is stated: "During import to standard edition we must use keyword VERSION=11.1 to be able to import e...
Categories: DBA Blogs

Oracle Open World 2016 – Day 4 and 5

Yann Neuhaus - Fri, 2016-09-23 03:01

At the end of Oracle Open World, here is my last BLOG concerning OOW 2016, covering days 4 and 5:

Wednesday was the day of the Party: Oracle’s appreciation event, a concert with Gwen Stefani and Sting at AT&T Park (stadium of the San Francisco baseball team, the Giants). It was a great event with awesome musicians.

Before the party I visited the session “Oracle Active Data Guard: Power, Speed, Ease and Protection” provided by Larry M. Carpenter, the grandfather of Data Guard. Here are a couple of nice new features of (Active) Data Guard in 12cR2:

  • Multi-Instance Redo Apply in RAC: Up until now the managed recovery processes (MRP) could only run on a single node of the standby RAC site and hence limited redo apply to the CPU and IO power of that one node. In the new release a coordinator process can distribute the redo data to MRP processes on all nodes of the RAC cluster. This is called Multi-Instance Redo Apply and is configured as follows:
    Without broker: recover managed standby database disconnect using instances 4;
    With broker: Through the ‘ApplyInstances’ property.
    Caveats in the first 12cR2 release:
    Using Multi-Instance Redo Apply disallows the use of the new feature In-Memory Column Store on Active Data Guard.
    RMAN block change tracking file is disabled.

 

  • Data Guard Broker enhancement for Multitenant: As Redo is generated at Container (CDB) level, switchover and failover will also happen for the whole CDB. In 12cR2 there is a new command when using the Data Guard Broker to migrate or fail over a Pluggable DB (PDB) to another CDB on the same server:

    MIGRATE PLUGGABLE DATABASE PDBx TO CONTAINER CDB2 USING PDBx.xml CONNECT AS sys/mypassword@CDB2;

    The role of the CDB I am connected to determines whether a PDB is migrated to another CDB or failed over to another CDB. So if e.g. a PDB has a failure on the primary site, then I can “failover” its standby equivalent to another primary CDB on the standby machine and hence make the “standby PDB” a “primary PDB”. This works best when having 2 CDBs in 2 sites that replicate in opposite directions: CDB1 at site A replicates to CDB1 at site B; CDB2 at site B replicates to CDB2 at site A. So let’s assume PDBx in primary CDB1 fails at site A. You can then migrate PDBx at site B to primary CDB2. PDBx at site A in CDB1 will be dropped automatically, but CDB2 at site A needs to be manually updated with the new data files of PDBx.

 

  • Use In-Memory Column Store on an Active Data Guard DB: As mentioned in my previous BLOG, in 12cR2 In-Memory can be used on an Active Data Guard Instance.
    Restrictions for In-Memory on Active Data Guard:
    In-Memory expressions are captured based on queries executed on the primary only. I.e. the expression statistics store (ESS) is maintained on the primary only.
    Automatic Data Optimization (ADO) policies are triggered only on access recorded on the primary database.
    In-Memory Fast-Start and In-Memory Join-Groups are not supported in an Active Data Guard environment.

 

  • Diagnostics and Tuning for Active Data Guard: The Diagnostics Pack (AWR), the Tuning Pack features and SQL Plan Analyzer are supported in the new release on Active Data Guard.
    AWR: The Active Data Guard DB is registered in an AWR catalog database. From there, remote snapshots can be taken of the Active Data Guard instance and stored in the AWR catalog: dbms_workload_repository.create_remote_snapshot('TYPICAL', ADG-id);
    SQL Tuning Advisor: All SQL Tuning Advisor tasks are executed on the Active Data Guard instance. Necessary write activity is done through a DB-Link on the primary DB.

 

  • Repair blocks from NOLOGGING-operations: Blocks from NOLOGGING operations on primary can now be validated and repaired on Standby with rman commands:

    validate/recover ... nonlogged blocks;

    I.e. the primary DB does not necessarily need to be in FORCE LOGGING mode anymore. If NOLOGGING operations are necessary then they can be repaired on the Standby-DB. Previously complete datafiles had to be restored to repair NOLOGGING operations.
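
    A minimal sketch of how these look in RMAN on the standby (assuming a whole-database validate and repair; scope clauses can be narrowed to datafiles or block ranges):

    validate database nonlogged block;
    recover database nonlogged block;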

On Thursday I visited the panel discussion with the subject “Thinking clearly about Database Application Architecture”. Toon Koppelaars, Connor McDonald, Cary Millsap and Gerald Venzl discussed the correct application architecture for accessing data in an Oracle Database. The discussion was moderated by Bryn Llewellyn. Toon Koppelaars from the Real World Performance team at Oracle explained why the ThickDB approach of writing business logic (which needs data processing) in PLSQL with set or bulk processing is the best method to get a well performing application (see also here). However, today the approach of processing the data in layers outside the DB is preferred (“data to processing” instead of “processing to data”). Unfortunately that results in row by row processing with lots of network roundtrips and higher CPU usage on the DB server, due to the many times the whole stack on the DB server has to be traversed.
It was clear and agreed in the audience that the ThickDB approach (“processing to data”) is correct, but why have developers not changed their behavior in all these years? The opinions on that differed, but critical statements were also expressed that “we as DBAs and DB consultants are part of the problem”, because there is no effort to change anything in the basic education of students so that they better understand the inner workings of a relational database system and the importance of “processing at the data”.

I’ll leave it to the reader to think about that and end my BLOGs about the Oracle Openworld 2016.

 

The post Oracle Open World 2016 – Day 4 and 5 appeared first on Blog dbi services.

Hadoop on IaaS - part 2

Pat Shuff - Fri, 2016-09-23 02:07
Today we are going to get our hands dirty and install a single-instance standalone Hadoop cluster on the Oracle Compute Cloud. This is a continuing series of installing public domain software on Oracle Cloud IaaS. We are going to base our installation on three components. We are using Oracle Linux 6.7 because it is the easiest to install on Oracle Compute Cloud Services. We could have done Ubuntu or SUSE or Fedora and followed some of the tutorials from HortonWorks or Cloudera or the Apache Single Node Cluster guide. Instead we are going old school and installing from the Hadoop home page by downloading a tar ball and configuring the operating system to run a single node cluster.

Step 1:

Install Oracle Linux 6.7 on an Oracle Compute Cloud instance. Note that you can do the same thing by installing on your favorite virtualization engine like VirtualBox, VMWare, HyperV, or any other cloud vendor. The only true dependency is the operating system beyond this point. If you are installing on the Oracle Cloud, go with the OL_67_3GB..... option, go with the smallest instance, delete the boot disk, replace it with a 60 GB disk, rename it and launch. The key reason that we need to delete the boot disk is that by default the 3 GB disk will not take the Hadoop binary. We need to grow it to at least 40 GB. We pad a little bit with a 60 GB disk. If you check the new disk as a boot disk it replaces the default Root disk and allows you to create an instance with a 60 GB disk.

Step 2:

Run yum to update the OS, install wget, and install Java version 1.8. You need to login as opc to the instance so that you can run as root.

Note that we are going to diverge from the Hadoop for Dummies that we referenced yesterday. They suggest attaching to a yum repository and doing an install from the repository for the bigtop package. We don't have that option for Oracle Linux and need to do the install from the binaries by downloading a tar or src image. The bigtop package basically takes the Apache Hadoop bundle and translates it to rpm files for an operating system. Oracle does not provide this as part of the yum repository and Apache does not create one for Oracle Linux or RedHat. We are going to download the tar file from the links provided on the Apache Hadoop homepage; we are following the install instructions for a single node cluster.
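
As a sketch, the Step 2 commands might look like the following (package names are assumptions for Oracle Linux 6.7):

sudo yum update -y
sudo yum install -y wget java-1.8.0-openjdk java-1.8.0-openjdk-devel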

Step 3:

Get the tar.gz file by pulling it from http://apache.osuosl.org/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
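
Using the wget we installed in Step 2, that is:

wget http://apache.osuosl.org/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz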

Step 4: We unpack the tar.gz file with the tar xvzf hadoop-2.7.3.tar.gz command

Step 5:

Next we add the following to the .bashrc file in the home directory to set up some environment variables. JAVA_HOME is /usr because that is where the yum command placed Java. The location of the hadoop code is based on downloading into the opc home directory.

export JAVA_HOME=/usr
export HADOOP_HOME=/home/opc/hadoop-2.7.3
export HADOOP_CONFIG_DIR=/home/opc/hadoop-2.7.3/etc/hadoop
export HADOOP_MAPRED_HOME=/home/opc/hadoop-2.7.3
export HADOOP_COMMON_HOME=/home/opc/hadoop-2.7.3
export HADOOP_HDFS_HOME=/home/opc/hadoop-2.7.3
export YARN_HOME=/home/opc/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin

Step 6

Source the .bashrc to pull in these environment variables
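
That is simply:

source ~/.bashrc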

Step 7 Edit the /etc/hosts file to add namenode to the file.
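
A minimal sketch of the entry (assuming the single node resolves namenode to the loopback address):

127.0.0.1   localhost namenode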

Step 8

Set up ssh so that we can loop back to localhost and launch an agent. I had to edit the authorized_keys to add a return before the new entry. If you don't, the ssh won't work.

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
vi ~/.ssh/authorized_keys
ssh localhost
exit

Step 9 Test the configuration then configure the hadoop file system for single node.

cd $HADOOP_HOME
mkdir input
cp etc/hadoop/*.xml input
./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep input output 'dfs[a-z.]+'
vi etc/hadoop/core-site.xml

When we ran this there were a couple of warnings, which we can ignore. The test should finish without error and generate a long output list. We then edit the core-site.xml file by changing the following lines at the end:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode:8020</value>
  </property>
</configuration>

Step 10

Create the hadoop file system with the command hdfs namenode -format

Step 11

Verify the configuration with the command hdfs getconf -namenodes

Step 12

Start the hadoop file system with the command sbin/start-dfs.sh
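
Putting Steps 10 through 12 together as one sequence (run from $HADOOP_HOME, whose sbin directory comes from the unpacked tar):

hdfs namenode -format        # Step 10: create the hadoop file system
hdfs getconf -namenodes      # Step 11: should echo back namenode
sbin/start-dfs.sh            # Step 12: start the HDFS daemons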

At this point we have the hadoop filesystem up and running. We now need to configure MapReduce and test functionality.

Step 13

Make the HDFS directories required to execute MapReduce jobs with the commands

  hdfs dfs -mkdir /user
  hdfs dfs -mkdir /user/opc
  hdfs dfs -mkdir input
  hdfs dfs -put etc/hadoop/*.xml input

Step 14 Run a MapReduce example and look at the output

  hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep 
    input output 'dfs[a-z.]+'
  hdfs dfs -get output output
  cat output/* output/output/*

Step 15

Create a test program to do a wordcount of two files. This example comes from an Apache MapReduce Tutorial

hdfs dfs -mkdir wordcount
hdfs dfs -mkdir wordcount/input
mkdir ~/wordcount
mkdir ~/wordcount/input
vi ~/wordcount/input/file01
 - add 
Hello World Bye World
vi ~/wordcount/input/file02
- add
Hello Hadoop Goodbye Hadoop
hdfs dfs -put ~/wordcount/input/* wordcount/input
vi ~/wordcount/WordCount.java

Create WordCount.java with the following code

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: tokenizes each input line and emits (word, 1) pairs
  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable>{

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer (also used as combiner): sums the counts for each word
  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Step 16

Compile and run the WordCount.java code

cd ~/wordcount
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.101-3.b13.el6_8.x86_64
export HADOOP_CLASSPATH=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.101-3.b13.el6_8.x86_64/lib/tools.jar
hadoop com.sun.tools.javac.Main WordCount.java
jar cf wc.jar WordCount*.class
hadoop jar wc.jar WordCount wordcount/input wordcount/output
hadoop fs -cat wordcount/output/part-r-00000

At this point we have a working system and can run more MapReduce jobs, look at results, and play around with Big Data foundations.

In summary, this is a relatively complex example. We have moved beyond a simple install of an Apache web server or Tomcat server and editing some files to get results. We have the foundations for a Big Data analytics solution running on the Oracle Compute Cloud Service. The steps to install are very similar to the other installation tutorials that we referenced earlier on Amazon and virtual machines. Oracle Compute is a good foundation for public domain code. Per core, the processors are cheaper than those of other cloud vendors. Networking is non-blocking and higher performance. Storage throughput is faster, optimized for high compute I/O, and tied to the compute engine. Hopefully this tutorial has given you the foundation to start playing with Hadoop on Oracle IaaS.

Links for 2016-09-22 [del.icio.us]

Categories: DBA Blogs

NetFlix – Blockbuster Movies

Marco Gralike - Thu, 2016-09-22 23:08
I mean, in principle it’s not that special, being that the info is already on…

Oracle Open World 2016 from a PeopleSofter point of view: Thursday 22nd and Wrap Up

Javier Delgado - Thu, 2016-09-22 18:33
So Open World 2016 has come to an end. But before the curtains fell, there was still some activity and interesting PeopleSoft sessions to attend.

Reporting and Analytics

My day started with a session named Getting the Most Out of PeopleSoft - Reporting and Analytics [CON7075] with the participation of Matthew Haavisto, Jody Schnell and Ramasimha Rangaraju.

Reporting has evolved a lot in the last few years, and not only in PeopleSoft. Gone are (or should be) the days in which a report meant a PDF or a printout. Today reporting is not only interactive but also actionable. I actually delivered a presentation on this topic back in April 2016 at the PeopleSoft Tour in Madrid. I later recorded it on YouTube, but unfortunately it is only available in Spanish.



PeopleSoft is not an exception to this reporting evolution. Tools like Pivot Grids, actionable charts and Simplified Analytics all point in the same direction. Unfortunately, not all users are ready for this transition; I have heard from many customers that upper management does not want to use a digital device to access the reports, so they still prefer the printed alternatives. And yes, I'm writing this as of September 2016.

Anyway, going back to the session, there were some points that I found particularly interesting:

  • The ability in PeopleTools 8.55 to generate submittable PDFs using BI Publisher. This functionality is particularly useful for government forms, but can also be used to gather and process ad-hoc data from users.
  • Oracle JET has been adopted as the charting engine, giving PeopleSoft a more consistent user experience with other Oracle products. Given the amount of development effort dedicated to Oracle JET charting features, PeopleSoft may benefit quickly and rapidly evolve its charting capabilities.
  • The introduction of Self Service Scheduling simplifies the execution of reports by linking them to pages and hiding the complexity of run controls to users.

Another point I found interesting was the explanation of how macros adversely affect PS/nVision performance, as they require PeopleSoft to execute reports twice, first using the Microsoft-recommended openXML method and then, as the first does not support macros, using the traditional PeopleSoft Excel automation. Interesting to know!



Meet the PeopleTools Experts

The next session was one of my preferred ones, as it consists of several round tables where you can directly talk to the PeopleTools development team. It is also useful to hear the concerns and doubts of customers and partners.

There were plenty of questions about Fluid User Interface, Cloud Architecture, Lifecycle Management and so on. If you ever attend Oracle Open World in the future, I strongly recommend this session.

PeopleTools Product Team Panel Discussion

Just after lunch, the next session was this classic of Oracle Open World. It consists of an open discussion between the PeopleTools Product Team and customers and partners. It is always interesting to attend this type of session and listen to thoughts and ideas from the PeopleSoft community.


Monitoring and Securing PeopleSoft

My last session was Hands-On with PeopleSoft: Develop Practices to Harden and Protect PeopleSoft [CON7074], delivered by Greg Kelly. The presentation was basically built around the Securing Your PeopleSoft Application Environment document available here. I found it really illustrative and, taking into account my rather shallow knowledge of security, rather scary :). Next time I will make sure I prepare myself upfront to take more advantage of Greg's wide knowledge of the area.

Wrap Up

This was an interesting edition of Oracle Open World as far as PeopleSoft is concerned. There were not many announcements made since the last edition. Still, I think the PeopleSoft team at Oracle is doing a great job. This is still a great product indeed.

On the other hand, I have the feeling that PeopleSoft customers are lagging behind in terms of adoption of new features. Personally, I don't think this is because updating PeopleSoft to the latest features is complex. Actually, we can say that with Selective Adoption, DPK and other Lifecycle Management tools it has never been this easy to update. The barrier, in my opinion, is not in the product, but in marketing. All Oracle marketing and sales horsepower has been dedicated exclusively during the last years to their cloud offering. Under these circumstances, it is reasonable to have uncertainties about how wise it is to keep investing in PeopleSoft as opposed to moving to the cloud. And we know uncertainty does not accelerate investment...

From a more personal standpoint, this was a great event in terms of networking. Being able to meet PeopleSoft talents such as Jim Marion, Graham Smith, Sasank Vemana and many others, including the PeopleSoft development team, is always the best way to nurture and trigger new ideas.

Just my two cents. Thanks for following this blog during this event!

PS: I bought Jim Marion's book from the bookshop at Moscone South and have a good deal of fun and learning guaranteed for my flight back to Madrid.



OOW 2016: database new features

Yann Neuhaus - Thu, 2016-09-22 14:20

Here is some information on what was announced or presented at Oracle Open World. The news is being relayed almost everywhere, mostly in English, so here is a summary aimed at French-speaking readers.

Oracle Database 12c Release 2

Let's be clear: the database is not the main topic of Open World. As expected, this is a 'Cloud First' release, and the version made public on Monday is a limited one.
If you have used 'Schema as a Service', it is much the same idea, except that here it is 'PDB as a Service'. With multitenant, consolidation by schema is replaced by consolidation by PDB, which has the advantage of presenting what is virtually a complete database, with its public objects, its multiple schemas, and so on.
So there is no administration access: it is a “managed” service, administered by Oracle.
Multitenant makes it possible to grant DBA rights on a PDB while preventing any interaction with the rest of the system. These are new 12.2 features, among them the “lockdown profiles”, which were developed for this purpose.
The service is called “Exadata Express Cloud Service” because it runs on Exadata (hence HCC compression and soon SmartScan). Most of the options are available as well (In-Memory, Advanced Compression, ...).
“Express” stands for the ease and speed of provisioning: a few minutes. The goal is that a developer can, in 5 minutes, create an easily accessible database service (over encrypted SQL*Net). The idea is that it should be as easy for a developer to create an Oracle database as it is to create Postgres, Cassandra or MongoDB databases.
And of course, if all the options are in there, the developer will use them, and they will then become necessary in production.
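
A minimal sketch of the lockdown profile idea (the profile name is hypothetical; the actual restrictions applied on the service are Oracle's own):

CREATE LOCKDOWN PROFILE pdb_service_profile;
-- e.g. forbid ALTER SYSTEM inside the PDB
ALTER LOCKDOWN PROFILE pdb_service_profile DISABLE STATEMENT = ('ALTER SYSTEM');
-- activate it for the PDB
ALTER SYSTEM SET PDB_LOCKDOWN = pdb_service_profile;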

There will soon be a data center in Europe. For the moment it is only in the USA. The price is attractive (CHF 170 per month) but the database is fairly limited in terms of CPU, storage and memory. It is mainly meant for development and sandboxing.

So for the moment 12c Release 2 is only available in the form of PDBaaS on the Exadata Express Cloud Service:

EXCS

Before the end of the year we should have 12.2 as DBaaS (the non-managed service we currently know on the Oracle PaaS), and the General Availability version will come afterwards, probably in 2017.

 

The post OOW 2016: database new features appeared first on Blog dbi services.

Rimini Street Permanently Enjoined from Illegally Accessing Oracle’s Software and Must Pay Oracle More Than $100 Million

Oracle Press Releases - Thu, 2016-09-22 12:07
Press Release
Rimini Street Permanently Enjoined from Illegally Accessing Oracle’s Software and Must Pay Oracle More Than $100 Million Court Finds Rimini Street Conducted Significant Litigation Misconduct

Redwood Shores, Calif.—Sep 22, 2016

The United States District Court for the District of Nevada granted Oracle’s motion for a permanent injunction against continued copyright infringement by Rimini Street and continued computer access violations by Rimini and its President and CEO, Seth Ravin, in Oracle’s long-running litigation against Rimini and Ravin. The Court also awarded Oracle $46 million in attorneys’ fees and costs against Rimini and Ravin personally on top of the $50 million in damages awarded by the jury last year, including a $14 million award against Ravin personally. Additionally, the Court granted Oracle’s motion for an award of prejudgment interest, which we anticipate will exceed $24 million.

“The Court’s order sends a powerful message to protect innovators,” said Oracle’s General Counsel Dorian Daley.  “This order helps to protect Oracle’s investment in its software to benefit its customers and confirms that Rimini’s infringement was unrepentant. Rimini and Ravin continue to falsely assert that their conduct is permissible. Their customers and prospective customers are on notice of their grave misconduct and the consequences of that misconduct.”

The Court awarded the permanent injunction because “Rimini’s business model was built entirely on its infringement of Oracle’s copyrighted software.” The Court emphasized that “the evidence in [this] action established Rimini’s callous disregard for Oracle’s copyrights and computer systems when it engaged in the infringing conduct.” In awarding Oracle its attorneys’ fees, the Court noted that “Oracle successfully prevailed on its claim for copyright infringement as the jury found that Rimini infringed every one of the 93 separate copyright registrations at issue.” Further, the “$50 million verdict against defendants” was “five times the damages number presented at trial by defendants’ damages expert,” demonstrating Oracle’s “substantial success” against Rimini.

Significantly, the Court found that Rimini’s previous denial of copyright infringement was based on “a conscious disregard for the manner that Rimini used … Oracle’s copyrighted software,” and that Rimini’s “significant litigation misconduct,” including the intentional “destruction of evidence,” warranted the award of fees.

Contact Info
Deborah Hellinger
Oracle
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

OBIEE12c - Upgrading to Version 12.2.1.1

Rittman Mead Consulting - Thu, 2016-09-22 10:36

INTRODUCTION

The new version of OBIEE 12c, 12.2.1.1 to be exact, is out, so let’s talk about it. It’s my intent that after reading this, you can expect some degree of comfort in regards to possibly doing this thing yourself (should you find yourself in just such a circumstance), but if not, feel free to drop us a line or give us a ring. It should be noted that Oracle documentation explicitly indicates that you’re going to need to upgrade to OBIEE version 12.2.1.0, which is to say you’re going to have to bring your 11g instance up to 12c before you can proceed with another upgrade. A colleague here at RM and I recently sat down to give the upgrade process (click there for the Oracle doc) a go on one of our hosted windows servers, and here’s the cut and dry of it. The examples throughout will be referencing both Linux and Windows, so choose how you’d like. Now, if you’ve gone through the 12c install process before, you’ll be plenty familiar with roughly 80% of the steps involved in the upgrade. Just to get this out of the way, no, it’s not a patch (in the sense that you’re actually going through the OBIEE patching process using OPatch). In fact, the process almost exactly mirrors a basic 12c install, with the addition of a few steps that I will make darn sure we cover in their entirety below. Speaking of which, I’m not going to do a play-by-play of the whole thing, but simply highlight those steps that are wholly unfamiliar. To provide some context, let’s go through the bullet points of what we’ll actually be doing during the upgrade.

  1. First, we’ll make sure we have a server appropriate, supported version of java installed (8_77 is the lowest version) and that this guy corresponds to the JAVA_HOME you’ve got set up.

  2. Next, we’ll be running the install for the WebLogic server into a NEW oracle home. That’s right, you heard me. A. new. oracle. home.

  3. After that, we’ll be running a readiness check to make sure our OBIEE bits won’t run into any trouble during the actual upgrade process. This checks all OBIEE components, including those schemas you installed during the initial install process. Make sure to have your application database admin credentials on hand (we’ll talk about what you need below in more detail). The end of this step will actually have us upgrade all those pieces the readiness checker deems worthy of an upgrade.

  4. Next, we’ll reconfigure and upgrade our existing domain by running the RECONFIGURATION WIZARD!!!!! and upgrade assistant, respectively.

  5. Lastly, we’ll start up our services, cross our fingers, hold onto our four leaf clovers, etc.. (just kidding, at least about that last part).

Before we begin, however, let’s check off a few boxes on the ‘must have’ list.

  • Download all the files here, and make sure you get the right versions for whatever kind of server your version of OBIEE hangs out in. The java version will be 8_101 which will work out just fine even though the minimum needed is 8_77.

  • Get those database credentials! If you don’t know, drop everything and figure out how you’re going to access the application database within which the OBIEE 12c schemas were installed. You’ll need the user name/pass for the SYS user (or user with SYS privileges), and the database connection string as well, including the service name, host, and port.

  • Make sure you have enough disk space wherever you’re installing the upgrade. The downloads for the upgrade aren’t small. You should have at least 150GB, on a DEV box, say. You don’t want to have to manage allocating additional space at a time like this, especially if it involves putting in a ticket with IT (wink wink)! Speaking of which, you’ll also need the server credentials for whichever user 12c was installed under. Note that you probably don’t need root if it was a linux machine, however there have been some instances where I’ve needed to have these handy, as there were some file permission issues that required root credentials and were causing errors during an install. You’ll also need the weblogic/obiee admin user (if you changed the name for some reason).

  • Lastly, make sure you’re at least a tad bit familiar with both the path to the oracle and to the domain home.

SETTING UP JAVA

After downloading the version of Java you need, go ahead and update it via the .rpm or .exe, etc… Make sure to update any environment variables you have set up, and to update both the JAVA_HOME variable AND the PATH to reference the new Java location. As stated above, at the time of this blog, the version we used, and that is currently available, is 8_101. During the upgrade process, we got a warning (see below) about our version not being 8_77. If this happens to you, just click Next. Everything will be alright, promise.
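
On Linux, that might look like the following (the JDK path is an assumption; point it at wherever your 8_101 install landed):

export JAVA_HOME=/usr/java/jdk1.8.0_101
export PATH=$JAVA_HOME/bin:$PATH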

Java Version Warning

A NEW HOME FOR ORACLE

Did you click the link to the upgrade doc yet? If not, do so now, as things are about to get kind of crazy. Follow along as we walk through the next steps if you’d like. First, stop services and disable the SSL like it tells you to. Then, start OBIEE services back up and then run the infrastructure jar (java -jar fmw_12.2.1.1.0_infrastructure.jar) for the WebLogic server install. Again, I’m not going to go pic by pic here as you can assume most everything resembles the initial 12c install process, and this part is no different. The one piece of this puzzle we need to focus on is establishing a new oracle home. After skipping those auto updates, move onto step 3 where we are, in fact, going to designate a new oracle home. You’ll see that, after completing the WebLogic install, we’ll have a bunch of updated feature sets, in addition to some new directories in our 12.2.1.1 oracle home. For example, if your original home is something like:

/u01/app/oracle/fmw

change it to:

New Oracle Home

when it asks you to enter a new one.

Breeze through the rest of the steps here, and remember to save that response file!

UPDATING OBIEE

Unzip both of the fmw_12.2.1.1.0_bi_linux64_Disk#_#of2.zip files, making sure that your OBIEE install files are in the same directory. For windows, this will be the executable from the first zip file, and the zip file from the second part of disk 1. Execute the binary (on linux) or .exe, going through the usual motions and then in step 3, enter the NEW oracle home for 12.2.1.1. In the example above, it would be:

/u01/app/oracle/fmw2

for Linux, and likewise, for Windows:

Enter Existing Oracle Home

Again, there isn’t too much to note or trap you here beyond just making sure that you take special care not to enter your original oracle home, but the one you created in the previous section. Proceed through the next steps as usual and remember, save your response file!

UPDATING THE 12C SCHEMAS - USING THE READINESS CHECKER AND UPDATE ASSISTANT

Run the readiness checker from:

NEW_ORACLE_HOME/oracle_common/upgrade/bin/ua -readiness

This next series of steps will take you through all the schemas currently deployed on your application database and confirm that they won’t explode once you take them through the upgrade process. In step 2 of 6, make sure that you’re entering the port for EM/Console (9500 by default). Remember when I said you’re going to need the DB credentials you used to install 12c in the first place? Well here’s where we’re going to use them. The readiness checker will guide you through a bunch of screens that essentially confirms the credentials for each schema installed, and then presents a report detailing which of these will actually get upgraded. That is to say, there are some that won’t be. I really like this new utility as an extra vote of confidence for a process that can admittedly be oftentimes troublesome.

Readiness Checker

Readiness Report

Once you’ve validated that those schemas ready for update, go ahead and stop OBI12c services using the EXISTING oracle home.

Pro tip: they’ve made it super easy to do this now by just pointing your bash_profile to the binaries directory in OBIEE’s bitools folder (ORACLE_HOME/user_projects/domains/bi/bitools/bin). After logging this entry in your profile, you can simply type start.sh or stop.sh to bring everything up or down, not to mention take advantage of the myriad other scripts that are in there. Don't type those paths out every time.
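
For example, with the oracle home from earlier (the path is this post's example; adjust to yours):

# in ~/.bash_profile
export PATH=$PATH:/u01/app/oracle/fmw/user_projects/domains/bi/bitools/bin
# then simply:
stop.sh
start.sh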

I digress… After the services come down, run the upgrade assistant from within the NEW oracle home, as below:

Citing the previous example:

NEW_ORACLE_HOME/oracle_common/upgrade/bin/ua

After bringing up the install dialogue box, move on to step 2, and select the All Schemas Used by a Domain option (as in the example above), unless of course you’d like to hand select which ones you’d like to upgrade. I suppose if you were thinking about scrapping one you had previously installed, then this would be a good option for you. Make sure the domain directory you specify is from your existing/old 12c instance, as below:

Upgrade Assistant-Existing Domain

Move through the next series of steps, which are more or less self explanatory (no tricks here, promise), once again validating connection credentials until you get to step 12. As always, save the response file, select Upgrade, and then watch the magic happen,….hopefully. Congratulations, you’ve just updated your schemas!

Schema Update Protocol Complete

WHO INVITED A WIZARD TO THE PARTY? - RECONFIGURING THE BI DOMAIN AND UPDATING THE BI CONFIGURATION

Like I said before, I won’t be covering every single step of this process i.e, doing the map viewer portion, which means you’ll have to still consult the…oracle, on some of this stuff. That being said, don’t gloss over backing up the map viewer file..you have to do it. This is simply an attempt to help make the upgrade process a little easier to swallow and hopefully make some of the more confusing steps a bit clearer. Moving on. Guess what? It’s time to run another series of dialogue boxes. Beats the heck out of scripting this stuff though, I guess. Open up the RECONFIGURATION WIZARD!!!!! as instructed in the documentation, from the location within your NEW oracle home. The first step will prompt us for the location of the domain we want to upgrade. We want to upgrade our existing 12c domain (the old one). So type that in/browse for it. Right now.

Enter Existing Domain Home

Validate your java version and location in step 3 and then click your way through the next few screens, ensuring that you’ve at least given your stamp of approval on any pre-filled or manually filled entries in each dialogue box. Leave step 7 alone and click Next to get to the screen where we’re actually going to be starting the reconfiguration process. Click through and exit the RECONFIGURATION WIZARD!!!!!

Validate Java

Configuration Celebration

Don’t forget to restore the map viewer config file at this point, and then launch the configuration assistant again, this time selecting the All Configurations Used By a Domain option in step 2. Make sure you’ve entered the location of the existing 12c domain in this step as well, and NOT the one created under the new oracle home.

Enter Proper Domain

Click through the next steps, again, paying close attention to all prompts and the location for the map viewer xml file. Verify in step 7 that the directory locations referenced for both domain and oracle map viewer are for the existing locations and NOT those created by the install of the update.

Correct Location Verification Affirmation

WRAPPING UP AND NOTES

You can now boot up ssl (as below) and then start OBIEE services.

DOMAIN_HOME/bitools/bin/ssl.sh internalssl true

Note: if you have a tnsnames.ora or ldap.ora, place copies under NEW_ORACLE_HOME/network/admin

You can ignore the new oracle home created at this time as, in my opinion, we're going to have to do something similar for any following updates for 12c. What did you think of the upgrade process and did you run into any issues? Thanks so much for reading, and as always, if you find any inconsistencies or errors please let us hear about them!

Categories: BI & Warehousing

use of application error

Tom Kyte - Thu, 2016-09-22 10:26
Hi, I wanted to know: do pragma exception_init and raise_application_error do the same thing? I wanted to know what the difference is. Also, I wanted to know what the use is, if I place my exception declaration in the package specificat...
Categories: DBA Blogs

Intermittent ORA-08103: object no longer exists

Tom Kyte - Thu, 2016-09-22 10:26
In a batch job (in java), I am getting the below error details while reading the data. And this error is intermittent, with one table on the first run; in later runs the table keeps on changing. Before every job run, I will truncate th...
Categories: DBA Blogs

2-level subqueries and a problem with a select function result

Tom Kyte - Thu, 2016-09-22 10:26
Hello everybody, I have 2 problems with 2 queries: 1) The following query <code>with pres as ( select t1.IDPROD AS spe1 , t2.IDPROD AS spe2 ,t1.INDICELIGNEPRESCRIPTION as id_ligne_1,t2.INDICELIGNEPRESCRIPTION as id_ligne_2, ...
Categories: DBA Blogs

SQL Query

Tom Kyte - Thu, 2016-09-22 10:26
Hi Tom, I have a table like this create table temp1 (f_name varchar2(100), f_date date, f_amount integer ); Records in this table will be like this: Ajay 15-JUL-02 500 Bhavani 15-JUL-02 700 Chakri ...
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator