Feed aggregator

How to connect to the database using a Unix terminal

Tom Kyte - Mon, 2016-12-19 04:46
Hi, I wanted to connect to an Oracle database using a Unix terminal. The idea is that I have installed Ubuntu as a virtual machine and I want to run procedures from Ubuntu, with the results shown in the terminal. Is it possible to...
Categories: DBA Blogs

Problem in EXPDP AND IMPDP with virtual column

Tom Kyte - Mon, 2016-12-19 04:46
Hey all, I have created a table with virtual columns like the following: CREATE TABLE employees ( id NUMBER, first_name VARCHAR2(10), last_name VARCHAR2(10), salary NUMBER(9,2), comm1 NUMBER(3), comm2 NUMB...
Categories: DBA Blogs

PLSQL: Best/Alternate way to implement FAST Refresh for better performance

Tom Kyte - Mon, 2016-12-19 04:46
In my present db implementation, my db does not have any data/tables of its own. All the data it gets comes from other sources over dblinks and is used to populate materialized views. These MVs are what my db actually uses to serve customer requests. To implement these ...
Categories: DBA Blogs

format disrupted upon using it

Tom Kyte - Mon, 2016-12-19 04:46
Hi Tom (or Chris or Connor), while I was trying something out I came upon a strange feature when using to_char. I have 2 columns in a with clause, a value and a format. Then I use these 2 in a to_char function. The strange thing is that the for...
Categories: DBA Blogs

ETL Offload with Spark and Amazon EMR - Part 3 - Running pySpark on EMR

Rittman Mead Consulting - Mon, 2016-12-19 03:00

In the previous articles (here, and here) I gave the background to a project we did for a client, exploring the benefits of Spark-based ETL processing running on Amazon's Elastic Map Reduce (EMR) Hadoop platform. The proof of concept we ran was based on a very simple requirement: taking inbound files from a third party, joining them to some reference data, and then making the result available for analysis.

I showed here how I built up the prototype PySpark code on my local machine, using Docker to quickly and easily make available the full development environment needed.

Now it's time to get it running on a proper Hadoop platform. Since the client we were working with already have a big presence on Amazon Web Services (AWS), using Amazon's Hadoop platform made sense. Amazon's Elastic Map Reduce, commonly known as EMR, is a fully configured Hadoop cluster. You can specify the size of the cluster and vary it as you want (hence, "Elastic"). One of the very powerful features of it is that being a cloud service, you can provision it on demand, run your workload, and then shut it down. Instead of having a rack of physical servers running your Hadoop platform, you can instead spin up EMR whenever you want to do some processing - to a size appropriate to the processing required - and only pay for the processing time that you need.

Moving my locally-developed PySpark code to run on EMR should be easy, since they're both running Spark. Should be easy, right? Well, this is where it gets, as we say in the trade, "interesting". Some of my challenges were down to the learning curve of being new to this set of technology. Others, however, I would point to as examples of where the brave new world of Big Data tooling becomes less an exercise in exciting endless possibilities and more one of stubbornly Googling errors caused by JAR clashes and software version mismatches...

Provisioning EMR

Whilst it's possible to make the entire execution of the PySpark job automated (including the provisioning of the EMR cluster itself), to start with I wanted to run it manually to check each step along the way.

To create an EMR cluster, simply log in to the EMR console and click Create.

I used Amazon's EMR distribution, configured for Spark. You can also deploy a MapR-based Hadoop platform, and use the Advanced tab to pick and mix the applications to deploy (such as Spark, Presto, etc).

The number and size of the nodes is configured here (I used the default, 3 machines of m3.xlarge spec), as is the SSH key. The latter is very important to get right, otherwise you won't be able to connect to your cluster over SSH.

Once you click Create cluster, Amazon automagically provisions the underlying EC2 servers, and deploys and configures the software and Hadoop clustering across them. Anyone who's set up a Hadoop cluster will know that a literally one-click deploy of a cluster is a big deal!

If you're going to be connecting to the EMR cluster from your local machine you'll want to modify the security group assigned to it once provisioned and enable access to the necessary ports (e.g. for SSH) from your local IP.

Deploying the code

I developed the ETL code in Jupyter Notebooks, from where it's possible to export it to a variety of formats - including .py Python script. All the comment blocks from the Notebook are carried across as inline code comments.

To transfer the Python code to the EMR cluster master node I initially used scp, simply out of habit. But, a much more appropriate solution soon presented itself - S3! Not only is this a handy way of moving data around, but it comes into its own when we look at automating the EMR execution later on.

To upload a file to S3 you can use the S3 web interface, or a tool such as Cyberduck. Better still, if you like the command line as I do, are the AWS CLI tools. Once installed, you can run this from your local machine:

aws s3 cp Acme.py s3://foobar-bucket/code/Acme.py

You'll see that the syntax is pretty much the same as the Linux cp command, specifying source and then destination. You can do a vast amount of AWS work from this command line tool - including provisioning EMR clusters, as we'll see shortly.

So with the code up on S3, I then SSH'd to the EMR master node (as the hadoop user, not ec2-user), and transferred it locally. One of the nice things about EMR is that it comes with your AWS security automagically configured. Whereas on my local machine I need to configure my AWS credentials in order to use any of the aws commands, on EMR the credentials are there already.

aws s3 cp s3://foobar-bucket/code/Acme.py ~

This copied the Python code down into the home folder of the hadoop user.

Running the code - manually

To invoke the code, simply run:

spark-submit Acme.py

A very useful tool, if you aren't using it already, is GNU screen (or tmux, if that's your thing). GNU screen is installed by default on EMR (as it is on many modern Linux distros). Screen does lots of cool things, but of particular relevance here is that it lets you close your SSH connection whilst keeping your session on the server open and running. You can then reconnect to it at a later time and pick up where you left off. Whilst you're disconnected, your session is still running and the work is still being processed.

From the Spark console you can monitor the execution of the running job, as well as dig into the details of how it undertakes the work. See the EMR cluster home page on AWS for the Spark console URL.

Problems encountered

I've worked in IT for 15 years now (gasp). Never has the phrase "the devil's in the detail" been more applicable than in the fast-moving world of big data tools. It's not surprising, really, given the staggering rate at which code is released, that sometimes it's a bit quirky, or lacking what might be thought of as basic functionality (often in areas such as security). Each of these individual points could, I suppose, be explained away with a bit of RTFM - but the net effect is that what on paper sounds simple took the best part of half a day and a LOT of Googling to resolve.

Bear in mind, this is code that ran just fine previously on my local development environment.

When using SigV4, you must specify a 'host' parameter
boto.s3.connection.HostRequiredError: BotoClientError: When using SigV4, you must specify a 'host' parameter.

To fix, switch

conn_s3 = boto.connect_s3()

to

conn_s3 = boto.connect_s3(host='s3.amazonaws.com')

You can see a list of endpoints here.
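To make the endpoint lookup concrete, here's a minimal sketch; the helper name and the region-to-endpoint table are my own illustrative subset (not part of boto), so check the endpoint list for your actual region:

```python
# Hypothetical helper: map an S3 region to its legacy endpoint hostname,
# which is what boto's connect_s3(host=...) expects.
# The region table is a small illustrative subset, not exhaustive.
S3_ENDPOINTS = {
    'us-east-1': 's3.amazonaws.com',
    'us-west-2': 's3-us-west-2.amazonaws.com',
    'eu-west-1': 's3-eu-west-1.amazonaws.com',
}

def s3_endpoint_for(region):
    """Return the endpoint host to pass to boto.connect_s3(host=...)."""
    try:
        return S3_ENDPOINTS[region]
    except KeyError:
        raise ValueError('Unknown S3 region: %s' % region)

# Usage (requires boto and AWS credentials, shown for illustration only):
# import boto
# conn_s3 = boto.connect_s3(host=s3_endpoint_for('eu-west-1'))
```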

boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request

Make sure you're specifying the correct hostname (see above) for the bucket's region. Determine the bucket's region from the S3 control panel, and then use the endpoint listed here.

Error: Partition column not found in schema

Strike this one off as bad programming on my part; in the step to write the processed file back to S3, I had partitionBy='' in the save function.

This, along with the coalesce (which combined all the partitions down to a single one), was wrong, and both were fixed by amending the save call.

Exception: Python in worker has different version 2.6 than that in driver 2.7, PySpark cannot run with different minor versions

To get the code to work on my local Docker/Jupyter development environment, I set an environment variable as part of the Python code to specify the Python executable:

os.environ['PYSPARK_PYTHON'] = '/usr/bin/python2'

I removed this (along with all the PYSPARK_SUBMIT_ARGS) and the code then ran fine.
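The error itself tells you what PySpark is checking: the major.minor Python version on the driver versus the workers. A toy reproduction of that comparison (my own sketch, not PySpark's actual implementation) shows why 2.6 workers and a 2.7 driver fail while differing patch versions are fine:

```python
# Sketch of the driver/worker Python version check PySpark performs.
# (Illustrative only; not PySpark's real code.)
def versions_compatible(driver_version, worker_version):
    """Compare major.minor only, the granularity PySpark cares about."""
    return driver_version.split('.')[:2] == worker_version.split('.')[:2]

# 2.7 driver vs 2.6 worker: incompatible, as in the error above.
# 2.7.12 driver vs 2.7.5 worker: fine.
```

Pointing PYSPARK_PYTHON at the same interpreter on both sides (or, as above, removing the override so the cluster default applies everywhere) is what keeps the two versions equal.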

Timestamp woes

In my original pySpark code I was letting it infer the schema from the source, which included it determining (correctly) that one of the columns was a timestamp. When it wrote the resulting processed file, it wrote the timestamp in a standard format (YYYY-MM-DD HH24:MI:SS). Redshift (of which more in the next article) was quite happy to process this as a timestamp, because it was one.
Once I moved the pySpark code to EMR, the Spark engine moved from my local 1.6 version to 2.0.0, and the behaviour of the CSV writer changed. Instead of the format before, it switched to writing the timestamp in epoch form, and not just that, but microseconds since epoch. Whilst Redshift could cope with epoch seconds, or milliseconds, it doesn't support microseconds, and the load job failed:

Invalid timestamp format or value [YYYY-MM-DD HH24:MI:SS]

and then

Fails: Epoch time copy out of acceptable range of [-62167219200000, 253402300799999]
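The arithmetic behind that second error is easy to check: a 2016 timestamp expressed in microseconds since epoch is three orders of magnitude beyond Redshift's millisecond-based bounds. A quick illustration in pure Python (the date is mine, picked from the load window):

```python
from datetime import datetime

# Redshift's acceptable epoch range, in milliseconds (from the error message).
REDSHIFT_MIN_MS = -62167219200000
REDSHIFT_MAX_MS = 253402300799999   # roughly year 9999

# A timestamp from the load window, as microseconds since epoch -
# the form Spark 2.0.0's CSV writer produced.
ts = datetime(2016, 12, 17)
epoch_us = int((ts - datetime(1970, 1, 1)).total_seconds() * 1000000)

# Interpreted as milliseconds by Redshift, the value is out of range.
print(epoch_us)                    # 1481932800000000
print(epoch_us > REDSHIFT_MAX_MS)  # True -> the COPY fails
```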

Whilst I did RTFM, it turns out that I read the wrong FM, taking the latest (2.0.1) instead of the version that EMR was running (2.0.0). And whilst 2.0.1 includes support for specifying the output timestampFormat, 2.0.0 doesn't.

In the end I changed the Spark job to not infer the schema, and so treat the timestamp as a string, thus writing it out in the same format. This was a successful workaround here, but if I'd needed to do some timestamp-based processing in the Spark job I'd have had to find another option.


I now had the ETL job running on Spark on EMR, processing multiple files in turn. Timings were approximately five minutes to process five files, half a million rows in total.

One important point to bear in mind through all of this is that I've gone with default settings throughout, and not made any effort to optimise the PySpark code. At this stage, it's simply proving the end-to-end process.

Automating the ETL

Having seen that the Spark job would run successfully manually, I now went to automate it. It's actually very simple to do. When you launch an EMR cluster, or indeed even if it's running, you can add a Step, such as a Spark job. You can also configure EMR to terminate itself once the step is complete.

From the EMR cluster create screen, switch to Advanced. Here you can specify exactly which applications you want deployed - and what steps to run. Remember how we copied the Acme.py code to S3 earlier? Now's when it comes in handy! We simply point EMR at the S3 path and it will run that code for us - no need to do anything else. Once the code's finished executing, the EMR cluster will terminate itself.

After testing out this approach successfully, I took it one step further: command line invocation. AWS makes this ridiculously easy, because from the home page of any EMR cluster (running or not) there is a button you can click which gives you the full command to run to spin up another cluster with the exact same configuration.

This gives us a command like this:

    aws emr create-cluster \
    --termination-protected \
    --applications Name=Hadoop Name=Spark Name=ZooKeeper \
    --tags 'owner=Robin Moffatt' \
    --ec2-attributes '{"KeyName":"Test-Environment","InstanceProfile":"EMR_EC2_DefaultRole","AvailabilityZone":"us-east-1b","EmrManagedSlaveSecurityGroup":"sg-1eccd074","EmrManagedMasterSecurityGroup":"sg-d7cdd1bd"}' \
    --service-role EMR_DefaultRole \
    --enable-debugging \
    --release-label emr-5.0.0 \
    --log-uri 's3n://aws-logs-xxxxxxxxxx-us-east-1/elasticmapreduce/' \
    --steps '[{"Args":["spark-submit","--deploy-mode","cluster","s3://foobar-bucket/code/Acme.py"],"Type":"CUSTOM_JAR","ActionOnFailure":"TERMINATE_CLUSTER","Jar":"command-runner.jar","Properties":"","Name":"Acme"}]' \
    --name 'Rittman Mead Acme PoC' \
    --instance-groups '[{"InstanceCount":1,"InstanceGroupType":"MASTER","InstanceType":"m3.xlarge","Name":"Master instance group - 1"},{"InstanceCount":2,"InstanceGroupType":"CORE","InstanceType":"m3.xlarge","Name":"Core instance group - 2"}]' \
    --region us-east-1

This spins up an EMR cluster, runs the Spark job and waits for it to complete, and then terminates the cluster. Logs written by the Spark job get copied to S3, so that even once the cluster has been shut down, the logs can still be accessed. Separation of compute from storage - it makes a lot of sense. What's the point of having a bunch of idle CPUs sat around just so that I can view the logs at some point if I want to?

The next logical step for this automation would be the automatic invocation of the above process based on the presence of a defined number of files in the S3 bucket. Tools such as Lambda, Data Pipeline, and Simple Workflow Service can all help with this, and with the broader management of ETL and data processing on AWS.

Spot Pricing

You can save further money with AWS by using Spot Pricing for EMR requests. Spot Pricing is used on Amazon's EC2 platform (on which EMR runs) as a way of utilising spare capacity. Instead of paying a fixed (higher) rate for server time, you 'bid' at a (lower) rate, and when demand for capacity drops enough that the spot price meets your bid price, you get your turn on the hardware. If the spot price goes up again, your server gets killed.

Why spot pricing makes sense on EMR particularly is that Hadoop is designed to be fault-tolerant across distributed nodes. Whilst pulling the plug on an old-school database may end in tears, dropping a node from a Hadoop cluster may simply mean a delay in the processing whilst the particular piece of (distributed) work is restarted on another node.


We've developed our simple ETL application, and got it running on Amazon's EMR platform. Whilst we used AWS because it's the client's platform of choice, in general there's no reason we couldn't take it and run it on another Hadoop platform. This could be Oracle's Big Data Cloud Service, Cloudera's CDH running on Oracle's Big Data Appliance, or simply a self-managed Hadoop cluster on commodity hardware.

Processing time was in the region of 30 minutes to process 2M rows across 30 files, and in a separate batch run 3.8 hours to process 283 files of around 25M rows in total.

So far, the data that we've processed is only sat in an S3 bucket up in the cloud.

In the next article we'll look at what the options are for actually analysing the data and running reports against it.

Categories: BI & Warehousing


Yann Neuhaus - Sun, 2016-12-18 03:12

In a previous post I explained how to use transportable tablespaces from a standby database. Here I'm showing an alternative where you can transport from a backup instead of a standby database. RMAN has been able to do this since 10gR2.

Transportable Tablespace is a beautiful feature: the performance of a physical copy and the flexibility of logical export/import. But it has one drawback: the source tablespace must be opened read only while you copy it and export the metadata. This means that you cannot use it from production, for example when moving data to a data warehouse ODS. There's an alternative: restore the tablespace with TSPITR (tablespace point-in-time recovery) into a temporary instance and transport from there.
This is what is automated by RMAN with a simple command: RMAN> TRANSPORT TABLESPACE.


This blog post shows how to do that when you are in the 12c multitenant architecture. Even though 12.2 comes with online PDB clone, you may still want to transport a single tablespace.

You cannot run TRANSPORT TABLESPACE when connected to a PDB. Let’s test it:

RMAN> connect target sys/oracle@//localhost/PDB1
connected to target database: CDB1:PDB1 (DBID=1975603085)

Here are the datafiles:

RMAN> report schema;
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1A
List of Permanent Datafiles
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
9 250 SYSTEM NO /u02/oradata/CDB1A/PDB1/system01.dbf
10 350 SYSAUX NO /u02/oradata/CDB1A/PDB1/sysaux01.dbf
11 520 UNDOTBS1 NO /u02/oradata/CDB1A/PDB1/undotbs01.dbf
12 5 USERS NO /u02/oradata/CDB1A/PDB1/users01.dbf
List of Temporary Files
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
3 20 TEMP 32767 /u02/oradata/CDB1A/PDB1/temp01.dbf

Let’s run the TRANSPORT TABLESPACE command:

RMAN> transport tablespace USERS auxiliary destination '/var/tmp/AUX' tablespace destination '/var/tmp/TTS';
RMAN-05026: warning: presuming following set of tablespaces applies to specified point-in-time
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1
Creating automatic instance, with SID='jlDa'
initialization parameters used for automatic instance:
#No auxiliary parameter file used
starting up automatic instance CDB1
Oracle instance started
Total System Global Area 805306368 bytes
Fixed Size 8793056 bytes
Variable Size 234882080 bytes
Database Buffers 553648128 bytes
Redo Buffers 7983104 bytes
Automatic instance created
Removing automatic instance
shutting down automatic instance
Oracle instance shut down
Automatic instance removed
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of transport tablespace command at 12/17/2016 21:33:14
RMAN-07538: Pluggable Database qualifier not allowed when connected to a Pluggable Database

You get the idea: an auxiliary instance is automatically created, but then the command fails because an internal command cannot be run from a PDB.

Run from CDB

So let’s run it when connected to CDB$ROOT:

echo set on
RMAN> connect target sys/oracle
connected to target database: CDB1 (DBID=894360530)

We can see all pluggable databases and all datafiles:

RMAN> report schema;
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1A
List of Permanent Datafiles
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 800 SYSTEM YES /u02/oradata/CDB1A/system01.dbf
3 480 SYSAUX NO /u02/oradata/CDB1A/sysaux01.dbf
4 65 UNDOTBS1 YES /u02/oradata/CDB1A/undotbs01.dbf
5 250 PDB$SEED:SYSTEM NO /u02/oradata/CDB1A/pdbseed/system01.dbf
6 350 PDB$SEED:SYSAUX NO /u02/oradata/CDB1A/pdbseed/sysaux01.dbf
7 5 USERS NO /u02/oradata/CDB1A/users01.dbf
8 520 PDB$SEED:UNDOTBS1 NO /u02/oradata/CDB1A/pdbseed/undotbs01.dbf
9 250 PDB1:SYSTEM YES /u02/oradata/CDB1A/PDB1/system01.dbf
10 350 PDB1:SYSAUX NO /u02/oradata/CDB1A/PDB1/sysaux01.dbf
11 520 PDB1:UNDOTBS1 YES /u02/oradata/CDB1A/PDB1/undotbs01.dbf
12 5 PDB1:USERS NO /u02/oradata/CDB1A/PDB1/users01.dbf
List of Temporary Files
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 240 TEMP 32767 /u02/oradata/CDB1A/temp01.dbf
2 32 PDB$SEED:TEMP 32767 /u02/oradata/CDB1A/pdbseed/temp012016-08-23_14-12-45-799-PM.dbf
3 20 PDB1:TEMP 32767 /u02/oradata/CDB1A/PDB1/temp01.dbf

We can run the TRANSPORT TABLESPACE command from here, naming the tablespace prefixed with the PDB name: PDB1:USERS.

transport tablespace … auxiliary destination … tablespace destination

The TRANSPORT TABLESPACE command needs a destination where to put the datafiles and the dump file to transport (TABLESPACE DESTINATION), and it also needs an auxiliary destination (AUXILIARY DESTINATION). The latter seems to be mandatory, which differs from PDBPITR, where the FRA is used by default.

RMAN> transport tablespace PDB1:USERS auxiliary destination '/var/tmp/AUX' tablespace destination '/var/tmp/TTS';

And then you will see everything RMAN does for you. I'll show most of the output.


Restoring a tablespace requires applying redo and then rolling back the transactions that were open at that PIT. This is why RMAN has to restore all tablespaces that may contain UNDO:

RMAN-05026: warning: presuming following set of tablespaces applies to specified point-in-time
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace PDB1:SYSTEM
Tablespace UNDOTBS1
Tablespace PDB1:UNDOTBS1

I suppose that when the UNDO_TABLESPACE has changed in the meantime, RMAN cannot guess which tablespace covered the transactions at the requested PIT, but I've seen nothing in the TRANSPORT TABLESPACE syntax to specify it. That's probably for a future post and/or SR.

Auxiliary instance

So RMAN creates an auxiliary instance with some specific parameters to be sure there’s no side effect on the source database (the RMAN target one).

Creating automatic instance, with SID='qnDA'
initialization parameters used for automatic instance:
#No auxiliary parameter file used
starting up automatic instance CDB1
Oracle instance started
Total System Global Area 805306368 bytes
Fixed Size 8793056 bytes
Variable Size 234882080 bytes
Database Buffers 553648128 bytes
Redo Buffers 7983104 bytes
Automatic instance created


The goal is to transport the tablespace, so RMAN checks that they are self-contained:

Running TRANSPORT_SET_CHECK on recovery set tablespaces
TRANSPORT_SET_CHECK completed successfully

and starts the restore of the controlfile and datafiles (the CDB SYSTEM, SYSAUX and UNDO, the PDB SYSTEM, SYSAUX and UNDO, and the tablespaces to transport):

contents of Memory Script:
# set requested point in time
set until scn 1836277;
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log
sql 'alter system archive log current';
executing Memory Script
executing command: SET until clause
Starting restore at 17-DEC-16
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=253 device type=DISK
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/autobackup/2016_12_17/o1_mf_s_930864638_d5c83gxl_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/autobackup/2016_12_17/o1_mf_s_930864638_d5c83gxl_.bkp tag=TAG20161217T213038
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
output file name=/var/tmp/AUX/CDB1A/controlfile/o1_mf_d5c88zp3_.ctl
Finished restore at 17-DEC-16
sql statement: alter database mount clone database
sql statement: alter system archive log current
contents of Memory Script:
# set requested point in time
set until scn 1836277;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile 1 to new;
set newname for clone datafile 9 to new;
set newname for clone datafile 4 to new;
set newname for clone datafile 11 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 10 to new;
set newname for clone tempfile 1 to new;
set newname for clone tempfile 3 to new;
set newname for datafile 12 to
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 1, 9, 4, 11, 3, 10, 12;
switch clone datafile all;
executing Memory Script
executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
renamed tempfile 1 to /var/tmp/AUX/CDB1A/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 3 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_temp_%u_.tmp in control file
Starting restore at 17-DEC-16
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /var/tmp/AUX/CDB1A/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00004 to /var/tmp/AUX/CDB1A/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /var/tmp/AUX/CDB1A/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c83n81_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c83n81_.bkp tag=TAG20161217T213044
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:35
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00009 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00011 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00010 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00012 to /var/tmp/TTS/users01.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c851hh_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c851hh_.bkp tag=TAG20161217T213044
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:25
Finished restore at 17-DEC-16
datafile 1 switched to datafile copy
input datafile copy RECID=11 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_system_d5c8993k_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=12 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_system_d5c8d8ow_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=13 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_undotbs1_d5c8998b_.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=14 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undotbs1_d5c8d8g6_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=15 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_sysaux_d5c8996o_.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=16 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_sysaux_d5c8d8o7_.dbf
datafile 12 switched to datafile copy
input datafile copy RECID=17 STAMP=930865006 file name=/var/tmp/TTS/users01.dbf

You'll notice that SYSTEM, SYSAUX and UNDO are restored to the auxiliary location, but the tablespaces to transport (USERS here) go directly to their destination. If you transport them on the same server, this avoids any further copying.


The recovery continues automatically to the PIT (which you can also specify with an UNTIL clause or a restore point):

contents of Memory Script:
# set requested point in time
set until scn 1836277;
# online the datafiles restored or switched
sql clone "alter database datafile 1 online";
sql clone 'PDB1' "alter database datafile
9 online";
sql clone "alter database datafile 4 online";
sql clone 'PDB1' "alter database datafile
11 online";
sql clone "alter database datafile 3 online";
sql clone 'PDB1' "alter database datafile
10 online";
sql clone 'PDB1' "alter database datafile
12 online";
# recover and open resetlogs
recover clone database tablespace "PDB1":"USERS", "SYSTEM", "PDB1":"SYSTEM", "UNDOTBS1", "PDB1":"UNDOTBS1", "SYSAUX", "PDB1":"SYSAUX" delete archivelog;
alter clone database open resetlogs;
executing Memory Script
executing command: SET until clause
sql statement: alter database datafile 1 online
sql statement: alter database datafile 9 online
sql statement: alter database datafile 4 online
sql statement: alter database datafile 11 online
sql statement: alter database datafile 3 online
sql statement: alter database datafile 10 online
sql statement: alter database datafile 12 online
Starting recover at 17-DEC-16
using channel ORA_AUX_DISK_1
starting media recovery
archived log for thread 1 with sequence 30 is already on disk as file /u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_30_d5c83ll5_.arc
archived log for thread 1 with sequence 31 is already on disk as file /u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_31_d5c8783v_.arc
archived log file name=/u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_30_d5c83ll5_.arc thread=1 sequence=30
archived log file name=/u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_31_d5c8783v_.arc thread=1 sequence=31
media recovery complete, elapsed time: 00:00:02
Finished recover at 17-DEC-16
database opened
contents of Memory Script:
sql clone 'alter pluggable database PDB1 open';
executing Memory Script
sql statement: alter pluggable database PDB1 open

Export TTS

The restored tablespaces can be set read only, which was the goal.

contents of Memory Script:
# make read only the tablespace that will be exported
sql clone 'PDB1' 'alter tablespace
USERS read only';
# create directory for datapump export
sql clone 'PDB1' "create or replace directory
executing Memory Script
sql statement: alter tablespace USERS read only

Now the export of metadata runs (equivalent to expdp transport_tablespace=Y):

sql statement: create or replace directory STREAMS_DIROBJ_DPDIR as ''/var/tmp/TTS''
Performing export of metadata...
EXPDP> Starting "SYS"."TSPITR_EXP_qnDA_urDb":
EXPDP> Master table "SYS"."TSPITR_EXP_qnDA_urDb" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_qnDA_urDb is:
EXPDP> /var/tmp/TTS/dmpfile.dmp
EXPDP> ******************************************************************************
EXPDP> Datafiles required for transportable tablespace USERS:
EXPDP> /var/tmp/TTS/users01.dbf
EXPDP> Job "SYS"."TSPITR_EXP_qnDA_urDb" successfully completed at Sat Dec 17 21:41:06 2016 elapsed 0 00:00:47
Export completed
Not performing table import after point-in-time recovery

The last message suggests that the TRANSPORT TABLESPACE code shares the code path that manages RECOVER TABLE.

Then RMAN lists the commands to run the import (also available in a generated script) and removes the auxiliary instance.


Not everything has been removed:
[oracle@VM117 blogs]$ du -ha /var/tmp/AUX
0 /var/tmp/AUX/CDB1A/controlfile
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_51_d5c8k0oo_.log
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_52_d5c8kcjp_.log
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_53_d5c8kskz_.log
601M /var/tmp/AUX/CDB1A/onlinelog
0 /var/tmp/AUX/CDB1A/datafile
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undo_1_d5c8m1nx_.dbf
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9
1.1G /var/tmp/AUX/CDB1A
1.1G /var/tmp/AUX

Import TTS

At the destination you find the tablespace datafiles, the metadata dump, and a script that can be run to import them:

[oracle@VM117 blogs]$ du -ha /var/tmp/TTS
5.1M /var/tmp/TTS/users01.dbf
132K /var/tmp/TTS/dmpfile.dmp
4.0K /var/tmp/TTS/impscrpt.sql
5.2M /var/tmp/TTS

For this example, I import it on the same server, in a different pluggable database:

SQL> connect / as sysdba
SQL> alter session set container=PDB2;
Session altered.

and simply run the script provided:

SQL> set echo on
SQL> @/var/tmp/TTS/impscrpt.sql
SQL> /*
SQL> The following command may be used to import the tablespaces.
SQL> Substitute values for and .
SQL> impdp directory= dumpfile= 'dmpfile.dmp' transport_datafiles= /var/tmp/TTS/users01.dbf
SQL> */
SQL> --
SQL> --
SQL> --
SQL> --
Directory created.
Directory created.
SQL> /* PL/SQL Script to import the exported tablespaces */
2 --
3 tbs_files dbms_streams_tablespace_adm.file_set;
4 cvt_files dbms_streams_tablespace_adm.file_set;
6 --
7 dump_file dbms_streams_tablespace_adm.file;
8 dp_job_name VARCHAR2(30) := NULL;
10 --
11 ts_names dbms_streams_tablespace_adm.tablespace_set;
13 --
14 dump_file.file_name := 'dmpfile.dmp';
15 dump_file.directory_object := 'STREAMS$DIROBJ$DPDIR';
17 --
18 tbs_files( 1).file_name := 'users01.dbf';
19 tbs_files( 1).directory_object := 'STREAMS$DIROBJ$1';
21 --
22 dbms_streams_tablespace_adm.attach_tablespaces(
23 datapump_job_name => dp_job_name,
24 dump_file => dump_file,
25 tablespace_files => tbs_files,
26 converted_files => cvt_files,
27 tablespace_names => ts_names);
29 --
30 IF ts_names IS NOT NULL AND ts_names.first IS NOT NULL THEN
31 FOR i IN ts_names.first .. ts_names.last LOOP
32 dbms_output.put_line('imported tablespace '|| ts_names(i));
34 END IF;
35 END;
36 /
PL/SQL procedure successfully completed.
SQL> --
Directory dropped.
Directory dropped.
SQL> --------------------------------------------------------------
SQL> -- End of sample PL/SQL script
SQL> --------------------------------------------------------------

Of course, you don't have to use this script: you can run the import yourself with IMPDP:

SQL> alter session set container=pdb2;
Session altered.
SQL> create directory tts as '/var/tmp/TTS';
Directory created.
SQL> host impdp pdbadmin/oracle@//localhost/PDB2 directory=TTS dumpfile='dmpfile.dmp' transport_datafiles=/var/tmp/TTS/users01.dbf

You may also use RMAN CONVERT when transporting to a platform with a different endianness.
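As a sketch, a cross-endianness transport could run RMAN CONVERT on the copied datafile before the import; the source platform name and the converted file name below are example values, not taken from this demo:

```
RMAN> CONVERT DATAFILE '/var/tmp/TTS/users01.dbf'
  FROM PLATFORM 'Linux x86 64-bit'
  FORMAT '/var/tmp/TTS/users01_conv.dbf';
```

The converted file is then the one you reference with transport_datafiles in the import.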

Local Undo

Note that if you run this on the current cloud DBaaS, you will get an error when RMAN opens the PDB in the auxiliary instance, because there's a bug with local undo. Here is the relevant part of the alert.log:

PDB1(3):Opening pdb with no Resource Manager plan active
PDB1(3):Force tablespace UNDO_1 to be encrypted with AES128
PDB1(3):ORA-00060: deadlock resolved; details in file /u01/app/oracle/diag/rdbms/fqkn_pitr_pdb1_cdb1/fqkn/trace/fqkn_ora_26146.trc
PDB1(3):ORA-60 signalled during: CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 188743680 AUTOEXTEND ON NEXT 5242880 MAXSIZE 34359721984 ONLINE...
PDB1(3):Automatic creation of undo tablespace failed with error 604 60
ORA-604 signalled during: alter pluggable database PDB1 open...

I did this demo with LOCAL UNDO OFF.

So what?

You can use Transportable Tablespaces from a database where you cannot set the tablespace read-only. The additional cost is recovering it from a backup, along with SYSTEM, SYSAUX and UNDO. But it is fully automated with only one RMAN command.
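For reference, that one command, using the destinations from this demo, looks roughly like this (a sketch; an UNTIL clause can be added for point-in-time transport, and exact options may vary by version):

```
RMAN> TRANSPORT TABLESPACE users
  TABLESPACE DESTINATION '/var/tmp/TTS'
  AUXILIARY DESTINATION '/var/tmp/AUX';
```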


This article RMAN> TRANSPORT TABLESPACE appeared first on the dbi services blog.

Cloning 12c SE2 Oracle Home for Windows 2012 R2

Michael Dinh - Sun, 2016-12-18 00:13

Process is pretty much similar to *nix environment with a few exceptions.

It was harder than it should have been, since I wanted to perform the task using the CLI rather than the GUI.

This does not cover zipping and unzipping the Oracle Home, and I cannot believe how junky WinZip has become; I have typically been using 7-Zip.




Introduction to Crate.io and CrateDB

DBMS2 - Sat, 2016-12-17 23:27

Crate.io and CrateDB basics include:

  • Crate.io makes CrateDB.
  • CrateDB is a quasi-RDBMS designed to receive sensor data and similar IoT (Internet of Things) inputs.
  • CrateDB’s creators were perhaps a little slow to realize that the “R” part was needed, but are playing catch-up in that regard.
  • Crate.io is an outfit founded by Austrian guys, headquartered in Berlin, that is turning into a San Francisco company.
  • Crate.io says it has 22 employees and 5 paying customers.
  • Crate.io cites bigger numbers than that for confirmed production users, clearly active clusters, and overall product downloads.

In essence, CrateDB is an open source and less mature alternative to MemSQL. The opportunity for MemSQL and CrateDB alike exists in part because analytic RDBMS vendors didn’t close it off.

CrateDB’s not-just-relational story starts:

  • A column can contain ordinary values (of usual-suspect datatypes) or “objects”, …
  • … where “objects” presumably are the kind of nested/hierarchical structures that are common in the NoSQL/internet-backend world, …
  • … except when they’re just BLOBs (Binary Large OBjects).
  • There’s a way to manually define “strict schemas” on the structured objects, and a syntax for navigating their structure in WHERE clauses.
  • There’s also a way to automagically infer “dynamic schemas”, but it’s simplistic enough to be more suitable for development/prototyping than for serious production.

Crate gave an example of data from >800 kinds of sensors being stored together in a single table. This leads to significant complexity in the FROM clauses. But querying the same data in a relational schema would be at least as complicated, and probably worse.
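As a sketch of what such a query could look like, assuming a hypothetical table and object column (CrateDB navigates object structure with bracket notation, as mentioned above):

```sql
-- Hypothetical table: one "readings" table holding many sensor kinds,
-- with a structured object column named payload.
SELECT ts, payload['temperature']
FROM readings
WHERE sensor_type = 'thermostat'
  AND payload['temperature'] > 30;
```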

One key to understanding Crate’s architectural choices is to note that they’re willing to have different latency/consistency standards for:

  • Writes and single-row look-ups.
  • Aggregates and joins.

And so it makes sense that:

  • Data is banged into CrateDB in a NoSQL-ish kind of way as it arrives, with RYW consistency.
  • The indexes needed for SQL functionality are updated in microbatches as soon as possible thereafter. (Think 100 milliseconds as a base case.) Crate.io characterizes the consistency for this part as “eventual”.

CrateDB will never have real multi-statement transactions, but it has simpler levels of isolation that may be called “transactions” in some marketing contexts.

CrateDB technical highlights include:

  • CrateDB records are stored as JSON documents. (Actually, I didn’t ask whether this was true JSON or rather something “JSON-like”.)
    • In the purely relational case, the documents may be regarded as glorified text strings.
    • I got the impression that BLOB storage was somewhat separate from the rest.
  • CrateDB’s sharding story starts with consistent hashing.
    • Shards are physical-only. CrateDB lacks the elasticity-friendly feature of there being many logical shards for each physical shard.
    • However, you can change your shard count, and any future inserts will go into the new set of shards.
  • In line with its two consistency models, CrateDB also has two indexing strategies.
    • Single-row/primary-key lookups have a “forward lookup” index, whatever that is.
    • Tables also have a columnar index.
      • More complex queries and aggregations are commonly done straight against the columnar index, rather than the underlying data.
      • CrateDB’s principal columnar indexing strategy sounds a lot like inverted-list, which in turn is a lot like standard text indexing.
      • Specific datatypes — e.g. geospatial — can be indexed in different ways.
    • The columnar index is shard-specific, and located at the same node as the shard.
    • At least the hotter parts of the columnar index will commonly reside in memory. (I didn’t ask whether this was via straightforward caching or some more careful strategy.)
  • While I didn’t ask about CrateDB’s replication model in detail, I gathered that:
    • Data is written synchronously to all nodes. (That’s sort of implicit in RYW consistency anyway.)
    • Common replication factors are either 1 or 3, depending on considerations such as the value of the data. But as is usual, some tables can be replicated across all nodes.
    • Data can be read from all replicas, for obvious reasons of performance.
  • Where relevant — e.g. the wire protocol or various SQL syntax specifics — CrateDB tends to emulate Postgres.
  • The CrateDB stack includes Elasticsearch and Lucene, both of which make sense in connection with Crate’s text/document orientation.

Crate.io is proud of its distributed/parallel story.

  • Any CrateDB node can plan a query. Necessary metadata for that is replicated across the cluster.
  • Execution starts on a shard-by-shard basis. Data is sorted at each shard before being sent onward.
  • Crate.io encourages you to run Spark and CrateDB on the same nodes.
    • This is supported by parallel Spark-CrateDB integration of the obvious kind.
    • Crate.io notes a happy synergy to this plan, in that Spark stresses CPU while CrateDB is commonly I/O-bound.

The CrateDB-Spark integration was the only support I could find for various marketing claims about combining analytics with data management.

Given how small and young Crate.io is, there are of course many missing features in CrateDB. In particular:

  • A query can only reshuffle data once. Hence, CrateDB isn’t currently well-designed for queries that join more than 2 tables together.
  • The only join strategy currently implemented is nested loop. Others are in the future.
  • CrateDB has most of ANSI SQL 92, but little or nothing specific to SQL 99. In particular, SQL windowing is under development.
  • Geo-distribution is still under development (even though most CrateDB data isn’t actually about people).
  • I imagine CrateDB administrative tools are still rather primitive.

In any case, creating a robust DBMS is an expensive and time-consuming process. Crate has a long road ahead of it.

Categories: Other

JET Application - Generate with Yeoman - Debug in NetBeans

Andrejus Baranovski - Sat, 2016-12-17 13:53
Let's take a look today at how to debug a JET application which was initially generated with Yeoman. We could debug in NetBeans, but by default an application generated with Yeoman is not runnable in NetBeans; we need to add some config files manually, and I will describe how. Also note that a JET application created with NetBeans can't be directly served with grunt from the command line; it also requires manual changes to the config. It would be nice if Oracle made JET applications generated with Yeoman automatically runnable in NetBeans and vice versa.

I will go step by step through the process (first I would recommend to go through JET Getting Started):

1. JET application creation with Yeoman and build with Grunt
2. Manual configuration to be able to open such application in NetBeans
3. JET CSS config to be able to run such application in NetBeans

1. JET application creation with Yeoman and build with Grunt

Run the command yo oraclejet basicjetapp --template=basic. This creates a simple JET application with one module:

Scripts and various modules are being generated. JET content can be located under src folder - generated application structure:

This is the most simple JET application possible, based on basic template. I have added chart into main page (I'm using Atom text editor to edit JavaScript):

Supporting variables for the chart are created in Application Controller module:

Application Controller module is included into JET main module, where bindings are applied based on Application Controller module and JET context is initialized:

You can build a minified JET structure ready for deployment with the Grunt build:release task. Navigate to the application root folder and run it from there: grunt build:release:

This will produce a web folder (the name can be changed) with the minified JET content:

We can run the JET application with Grunt using the serve:release task: grunt serve:release:

JET application is running:

2. Manual configuration to be able to open such application in NetBeans

To debug JET application generated with Yeoman we would need to open it in NetBeans. Unfortunately this is not available by default. NetBeans doesn't recognize JET project and shows disabled icon:

We need to copy manually NetBeans nbproject folder from any other JET application created with NetBeans into our application root folder:

Change project.xml file and update project name property:

Update web context root in project.properties file:

Update application paths in private.xml file:

After these changes, NetBeans can recognize JET application and it can be opened:

JET application generated with Yeoman is successfully opened in NetBeans:

But there is an issue when trying to run the application in NetBeans: it can't find the JET Alta UI CSS. JET runs, but with an ugly look:

3. JET CSS config to be able to run such application in NetBeans

The JET application generated with Yeoman points to a CSS location which doesn't exist in the folder structure:

After we run grunt build:release, the CSS location is updated automatically. This is why it works with grunt serve:release:

Things are a bit different for a JET application created with NetBeans. Such an application does contain the JET Alta UI CSS in the folder originally referenced by the application generated with Yeoman:

I copied this folder into JET application generated with Yeoman:

This time JET application runs and displays as it should in NetBeans:

Don't forget to remove the duplicate JET Alta UI CSS folder (we need it only to run/debug in NetBeans) from the release if you run grunt serve:release:

Download sample JET application from GitHub directory - basicjetapp.

Start to develop in APEX 5.1, you will gain at least an hour a day!

Dimitri Gielis - Sat, 2016-12-17 05:19
Yesterday APEX 5.1 was installed on apex.oracle.com.
This means that you can start developing your apps in APEX 5.1 from now on. Unlike the early adopter releases (apexea.oracle.com), you can develop your apps on apex.oracle.com and later export them and import them into your own environment once the on-premises version of APEX 5.1 is available.

APEX 5.1 is again a major update behind the scenes. Page processing is completely different from before: where previously full page reloads were done, traffic is now much more lightweight and only the necessary data is sent across.

The big features in this new release are the introduction of Interactive Grids, a successor to both Interactive Reports and Tabular Forms, and the integration of Oracle JET, which you see mostly in the data visualisation (charts) part of APEX, though more components will probably follow in future versions. Although those two features address the most common issues we previously had (outdated tabular forms and charts), APEX 5.1 brings much more than that. Equally important for me are the "smaller" improvements which make us even more productive. Below you find some examples...

When creating a new application, the login page immediately looks great:

Previously in APEX 5.0 you had to adapt the login page, see my blog post Pimping the Login Page.

When you want your item to look like this:

APEX 5.1 now has a template option to display the Pre and Post text as a Block:

Or when you want an icon inside your item, there's an Icon CSS Class option selector which shows the gorgeous looking new handcrafted Font APEX icons:

You could do all the item customisations above in APEX 4.2 or 5.0 too, but that would require custom CSS and some code, whereas in APEX 5.1 it's declarative.

And there's so much more: the ability to switch style by user, new packaged apps, warn on unsaved changes, no page reload on submit, etc. These features haven't been talked about much yet, but previously they required a plugin or a lot of custom code, and now they're just there.

So those "smaller" features are actually not so small; they are an enormous timesaver and bring your apps at warp speed to modern, great-looking applications.

In the next blog posts I'll go into more detail on some specific features that will gain you at least an hour a day, but in the meantime, embrace APEX 5.1 and start earning those extra hours :)
Categories: Development

Oracle Security And Merry Xmas And A Happy New Year

Pete Finnigan - Fri, 2016-12-16 21:46

I want to wish all readers of my site and this blog a very happy Christmas and a very prosperous New Year!! It has been some time since my last blog post; that's because we have been incredibly busy on....[Read More]

Posted by Pete On 16/12/16 At 08:54 PM

Categories: Security Blogs

Demantra setup.exe does nothing on Windows 7

Vikram Das - Fri, 2016-12-16 17:38
Last evening, Mukhtiar called me to assist with a Demantra installation on Windows. By the time we got on a screen share, he had already downloaded patch 2195481 and installed JDK 1.8.0_77. However, when he clicked setup.exe, nothing happened. We tried these:

1. Right click on setup.exe > Click Properties > Compatibility Tab > Run this program in compatibility mode for Windows XP
2. Run the program as Administrator
3. Tried changing registry settings as per some Microsoft technotes

However nothing worked.

We located a file called setup.lax, a configuration file for the installer, and switched on stderr=console.

However, it only showed a command window that disappeared in a flash.

We tried uninstalling JDK 1.8.0_77 and installing JDK 1.7

We also tried setting the JAVA_HOME environment variable as described in the Demantra installation guide at https://docs.oracle.com/cd/E26401_01/doc.122/e22516/T573380T574948.htm

JAVA_HOME System Environment Variable
If the client machine does not have JDK version 1.6 or 1.7 installed, the Installer prompts you to download and install it. After installation, set JAVA_HOME to the JDK installation directory (for example, C:\Program Files\Java\jdk1.6.0_16).

Nothing worked.

Mukhtiar located a blog post that advised setting the Java heap size to 512 MB via an environment variable:


The Java Virtual Machine takes two command-line arguments which set the initial and maximum heap sizes: -Xms and -Xmx. You can add a system environment variable named _JAVA_OPTIONS and set the heap size values there.

As soon as this environment variable was set, setup.exe finally launched. All this time it had been erroring out because the default 256 MB was insufficient to start the installer.
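For example, from an elevated Windows command prompt the variable can be set machine-wide like this (the heap values are those suggested by the guide; adjust for your RAM):

```bat
:: New Java processes pick up _JAVA_OPTIONS automatically
setx _JAVA_OPTIONS "-Xms256m -Xmx512m" /M
```

A new command window must be opened for the variable to take effect.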

There was information about the Out of Memory error in the installation guide too, but we had missed it:

Out of Memory Error Message
Verify memory settings:
  1. From the Windows Start menu, choose Settings, Control Panel, and then Java. The Java Control Panel dialog box appears.
  2. Click the Java Tab.
  3. From the Java Applet Runtime Settings section, click View.
  4. Verify that the value of the field Java Runtime Parameters is at least -Xmx256M and is based on the client machine hardware/RAM capacity. Oracle recommends value -Xmx512M for heavy-duty client side use.

Categories: APPS Blogs

Oracle 12cR2 : Partitioning improvements – multi-column list partitioning & read-only partitions

Yann Neuhaus - Fri, 2016-12-16 10:10

In my last blog post I presented auto-list partitioning, a new partitioning functionality coming with 12cR2.
In this one I will introduce two others: multi-column list partitioning and read-only partitions.

Multi-column list partitioning

Image: "Oracle Partitioning in Oracle Database 12c Release 2" – Hermann Bär

With the first release of 12c it wasn't possible to create list-partitioned tables based on a multi-column partition key:
ORA-14304: List partitioning method expects a single partitioning column
But now you can easily implement this functionality:
SQL> CREATE TABLE cars (
car_make VARCHAR2(30),
car_model VARCHAR2(30)
)
PARTITION BY LIST (car_make, car_model)
(PARTITION P1 VALUES ('Ford','Focus'),
PARTITION P2 VALUES (DEFAULT));

Table created.

Check partitions :
SQL> SELECT partition_name, high_value FROM dba_tab_partitions WHERE table_name = 'CARS';

PARTITION_NAME       HIGH_VALUE
-------------------- ------------------------------
P1                   ( 'Ford', 'Focus' )
P2                   DEFAULT

SQL> SELECT partitioning_type, partition_count, partitioning_key_count FROM dba_part_tables WHERE table_name = 'CARS';

PARTITION PARTITION_COUNT PARTITIONING_KEY_COUNT
--------- --------------- ----------------------
LIST                    2                      2

This functionality allows you to use up to 16 columns in the partition key and, as you can see in this example, it's also possible to define a DEFAULT partition.
Multi-column list partitioning is also permitted at the subpartition level.

Read-only partitions

Image: "Oracle Partitioning in Oracle Database 12c Release 2" – Hermann Bär

We all know that Oracle can define a tablespace or a table as read-only. But did you know that with 12cR2 it's now possible to define this attribute at the partition level?
SQL> CREATE TABLE sales (
sales_product VARCHAR2(30),
sales_date DATE
)
PARTITION BY RANGE (sales_date)
(PARTITION P_2013 VALUES LESS THAN (TO_DATE('01-JAN-2014','dd-MON-yyyy')) READ ONLY,
PARTITION P_2016 VALUES LESS THAN (TO_DATE('01-JAN-2017','dd-MON-yyyy')));

Table created.

Data insertion:
SQL> INSERT INTO sales VALUES ('SHIRT', '02-MAY-2013');
INSERT INTO sales VALUES ('SHIRT', '02-MAY-2013')
ERROR at line 1:
ORA-14466: Data in a read-only partition or subpartition cannot be modified.

SQL> INSERT INTO sales VALUES ('SHOES', '29-MAR-2016');

1 row created.

An easy-to-implement and reliable way to protect data from changes inside a table.
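A sketch of how the attribute can be toggled later on an existing partition (using the table from the example above):

```sql
ALTER TABLE sales MODIFY PARTITION p_2016 READ ONLY;
ALTER TABLE sales MODIFY PARTITION p_2016 READ WRITE;
```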


This article Oracle 12cR2 : Partitioning improvements – multi-column list partitioning & read-only partitions appeared first on the dbi services blog.

Oracle SQRT Function with Examples

Complete IT Professional - Fri, 2016-12-16 05:00
In this article, I’ll explain the Oracle SQRT function and show you some examples. Purpose of the Oracle SQRT Function The purpose of the Oracle SQRT function is to find and return the square root of a provided number. The square root of a particular number answers this question: “Which number, when multiplied by itself, […]
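As a quick sketch of typical calls:

```sql
SELECT SQRT(25) FROM dual;  -- 5
SELECT SQRT(2)  FROM dual;  -- 1.41421356...
```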
Categories: Development

Designing and Building Oracle PaaS Cloud Applications: Never the Same Experience Twice

Usable Apps - Fri, 2016-12-16 03:48

By Vikki Lira (@vklira), Oracle Applications User Experience

I was thrilled to recently travel to London, UK, to participate in another Oracle Applications User Experience (OAUX) Building Simplified UIs PaaS4SaaS Workshop. This is an intensive design workshop that involves a partner, their customer’s user representatives and the Oracle UX team acting as design and technical facilitators.

It was my second time attending a workshop like this, and the first thing that struck me was that although the partner and customer involved were completely different, there was a consistency in the manner of how each workshop was conducted that stems from our adherence to OAUX design strategy: Simplicity, Mobility, Extensibility. It was fascinating to see how it was applied to this Oracle Sales Cloud application project.

Workshop kickoff with Ultan and Julian

Ultan O’Broin and Julian Orr kick off the workshop, a collaborative effort between Oracle Applications User Experience, Boxfusion Consulting and Panasonic. (Photo by Vikki Lira)

The OAUX team was delighted to collaborate with Boxfusion Consulting and Panasonic to help design and develop a solution for Oracle Sales Cloud using the Oracle Applications Cloud UX Rapid Development Kit (RDK). This hands-on workshop leveraged collaborative design work and technical explorations done remotely in advance by OAUX, Boxfusion Consulting, and Panasonic.

The Oracle Applications Cloud UX RDK is built on Oracle technologies and based on proven user experience design and development. It leverages the PaaS for SaaS advantage and accelerates the delivery of sleek, modern SaaS integrations and custom PaaS application solutions.

 Design Patterns; Code Samples; Technical and Design Guidance

Offered in the Oracle Applications Cloud UX RDK: design patterns eBook, coded samples, technical eBook, wireframe template.

The first day started with introductions.

The OAUX team was led by Ultan O’Broin (@ultan), Senior Director and Julian Orr (@Orr_UX), Principal User Experience Engineer. Ultan kicked off with an overview of the three-day workshop and shared the OAUX design strategy.

Boxfusion Consulting introduced their team as well. They were thrilled to participate with the OAUX team and assist their customer, Panasonic, on this design and build journey. Luis Figueira, Project Lead for Boxfusion, said that ”starting with the UX part of the application could mean that we were ensuring a big part of the project up front. We were ensuring that the customer had visibility of what the application would look like, thereby reducing the risk later on in terms of expectations that we deliver something that’s (designed) according to what their needs actually are.”

Next up, Panasonic gave an overview of their use case and its current capabilities. The group then analyzed possible flows, users, and goals. The discussion then turned to how Panasonic envisioned what their new application would look like and what the most important elements were.

Built-in Best Practices

Before design sketching/brainstorming started, the group discussed design fundamentals and defined the Jobs To Be Done in order to stay on track throughout the design process. Then it was time to get to work!

Lancy and team in action

Lancy Silveira (@LancyS), Senior User Experience Architect, Alex Sensier, Sr. CX Consultant, Boxfusion Consulting and Luis Figueira, CX Cloud Lead, Boxfusion Consulting discussing the RDK and ADF tools. (Photo by Vikki Lira)

The group broke up into two groups. One group dove into the details of the design and began sketching storyboards. The second group explored the RDK, Oracle ADF tools, how to apply design patterns, and the more technical back end such as options for deployment. After both groups agreed on a design sketch and flow, the team proceeded to use the wireframe template from the RDK to build their prototype.

The Multiplier: Speedy Design, Speedy Transfer

The second day started with a recap of the previous day’s accomplishments, and the group set goals for the day. The group agreed that they would work towards implementing a collaborative work plan for building a deployable solution.

Luis wireframes old school style!

Luis Figueira, Boxfusion Consulting, goes back to the basics—pen and paper—for storyboarding. (Photo by Vikki Lira)

Halfway through the day, the OAUX and the Boxfusion teams checked in with Panasonic to learn about their progress. After reviewing the designs, Daniel Creasey, European Channel Marketing Coordinator, Panasonic said, “It’s really, really good. I’m impressed at the speed of everything in just two days.”

The final day was once again focused on building and reviewing designs, as well as identifying any gaps in expectations. As Rhiannon Ainge, Business Systems and Marketing Manager, Panasonic said, “This project is not only for our internal users, but also our partners and customers. . . . . [I]t is really important that we deliver an excellent user experience.“ From all the hard work that I witnessed, I am confident that is exactly what will be delivered.

Want to Find Out More?

Query to format the output

Tom Kyte - Fri, 2016-12-16 03:26
Hi Team, Hope you are doing well.. I have table like below ACC_NO DATE_OPENED PRODUCT_TYPE =================================== UN1 02-SEP-16 A UN2 02-OCT-16 B UN3 12-DEC-16 C UN4 22...
Categories: DBA Blogs

Inline views

Tom Kyte - Fri, 2016-12-16 03:26
Hi Chris/Connor, I have written below SQL to get Difference of Debit & Credit sum (DB version we are using is 11g). Problem here is I have used two inline views (with similar joins only difference is Bill_Amt_Sign = 1 & Bill_Amt_Sign = 2). Is ...
Categories: DBA Blogs

Sql Tuning

Tom Kyte - Fri, 2016-12-16 03:26
Hi , Below SQL is executed for 4hours in user's db : <code> SELECT * FROM ( SELECT Lovalias.*,rownum Rno FROM (SELECT * FROM ( SELECT A.* FROM SMVW_USER_ACCOUNTS A, STTM_CUST_ACCOUNT B WHERE A.CUST_AC_NO=B.CUST_AC_NO(+)...
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator