Feed aggregator

Success Story: Version Control for PL/SQL

Gerger Consulting - Mon, 2016-12-26 15:43
It’s been a little over three months since we released Gitora 2.0 and the first success stories have started to emerge. Here is one of them:

Rhenus Logistics, a leading logistics company from Germany, uses Gitora to manage their Oracle Database.
Rhenus IT uses both Java and PL/SQL to serve their users and customers. They have a team of about 10 PL/SQL developers. The team manages more than 20,000 database packages, views, functions, procedures, object types and triggers spread over 30+ database schemas.
Rhenus IT wanted to move to a continuous delivery environment in which they can be more agile and deliver solutions to the business faster. Managing the PL/SQL code was the hardest piece of the puzzle.
After experimenting with other solutions in the market, Rhenus decided to move forward with Gitora.

Gitora enabled Rhenus Developers to:
  • Use Git, the prominent open source version control system used by millions of developers.
  • Move their database code between development and various staging databases automatically.
  • Move code between source and target databases very fast, because Gitora executes only the differences between them, without first comparing the code bases in both databases (which can be very time consuming).
  • Enforce check-in, check-out of database objects at the database level.
  • Automate the build process for the database code using the Gitora APIs.
  • Implement an affordable continuous delivery solution compared to alternatives.
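The diff-based deployment in the third bullet can be illustrated with plain git (a conceptual sketch only; the file names are made up, and Gitora's actual deployment mechanism is internal to the product):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo

# Two PL/SQL objects under version control
printf "create or replace procedure p1 as begin null; end;\n" > p1.prc
printf "create or replace procedure p2 as begin null; end;\n" > p2.prc
git add . && git commit -qm "baseline"

# Change only one of them
printf "create or replace procedure p1 as begin dbms_output.put_line(1); end;\n" > p1.prc
git add . && git commit -qm "change p1"

# Only the changed object needs to be executed on the target database
git diff --name-only HEAD~1 HEAD
```

The final command lists just `p1.prc`: only that object would need to be recompiled on the target, with no full schema compare.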
Michiel Arentsen, the System Architect at Rhenus who implemented the solution, has started an excellent blog in which he writes about his Gitora implementation. We highly recommend checking it out. Below is the list of blog posts he wrote, which should be very useful to anyone who is currently implementing Gitora at their company:
Categories: Development

PL/SQL Objects for JSON in Oracle 12cR2

Tim Hall - Mon, 2016-12-26 13:37

I’ve been playing around with some more of the new JSON features in Oracle Database 12c Release 2 (12.2).

The first thing I tried was the new PL/SQL support for the JSON functions and conditions that were introduced for SQL in 12.1. That was all pretty obvious.

Next I moved on to the new PL/SQL objects for JSON. These are essentially a replacement for APEX_JSON as far as generation and parsing of JSON data are concerned. If I’m honest I was kind-of confused by this stuff at first for a couple of reasons.

  • If you are coming to it with an APEX_JSON mindset it’s easy to miss the point. Once you “empty your cup” it’s pretty straightforward.
  • The documentation is pretty terrible at the moment. There are lots of mistakes. I tweeted about this the other day and some folks from the Oracle documentation team got back to me about it. I gave them some examples of the problems, so hopefully it will get cleaned up soon!
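To give a flavour of the new object types, here is a minimal sketch using JSON_OBJECT_T (the JSON content itself is made up for illustration):

```sql
SET SERVEROUTPUT ON

DECLARE
  l_obj JSON_OBJECT_T;
BEGIN
  -- Parse an existing document into an object
  l_obj := JSON_OBJECT_T.parse('{"name":"banana","qty":10}');

  -- Read and modify it through the object API
  DBMS_OUTPUT.put_line('name: ' || l_obj.get_string('name'));
  l_obj.put('qty', 20);

  -- Serialize back to text
  DBMS_OUTPUT.put_line(l_obj.to_string);
END;
/
```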

I was originally intending to write a single article covering both of these new JSON features, but it got clumsy, so I separated them.

The second one isn’t much more than a glorified links page at the moment, but as I cover the remaining functionality it will either expand or contain more links depending on the nature of the new material. Big stuff will go in a separate article. Small stuff will be included in this one.

I also added a new section to this recent ORDS article, giving an example of handling the JSON payload using the new object types.

I’ve only scratched the surface of this stuff, so I’ll probably revisit the articles several times as I become more confident with it.



PS. Remember, you can practice a lot of this 12.2 stuff for free at https://livesql.oracle.com.

PL/SQL Objects for JSON in Oracle 12cR2 was first posted on December 26, 2016 at 8:37 pm.

ADF REST Framework Version 2 (and later)

Andrejus Baranovski - Mon, 2016-12-26 06:40
While building our new Oracle Cloud application with ADF BC REST and JET, I discovered an unannounced feature: ADF BC REST offers runtime framework versions. The version is configurable in the adf-config.xml file or can be provided through a REST request header. ADF supports versions 1, 2 and 3. Version 2 offers better query support, while version 3 provides a better response format for hierarchical data - see 16.5.2 What You May Need to Know About Versioning the ADF REST Framework.

You can specify the version in adf-config.xml, as per the documentation:

Version 2 offers more advanced support for data queries. Besides the query-by-example syntax from version 1, we can use an advanced query syntax - see 22.5.4 Filtering a Resource Collection with a Query Parameter. For example, the like operator wasn't supported in version 1:

It is supported in version 2. I can specify version 2 directly in the REST request header, as in the example below:
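For illustration, such a request could look like this (the resource path and attribute name are made up, and in a real request the q value would be URL-encoded; the version header documented for ADF BC REST is REST-Framework-Version):

```
GET /rest/MyResource?q=FirstName LIKE 'S*' HTTP/1.1
REST-Framework-Version: 2
```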

Download ADF BC REST sample from GitHub repository - jetcrud.

A Guide to the Oracle TRUNCATE TABLE Statement

Complete IT Professional - Mon, 2016-12-26 05:00
The Oracle TRUNCATE TABLE statement is a useful statement for removing data in Oracle SQL. I’ll explain what the statement does and show you some examples in this article. What Does the Oracle TRUNCATE TABLE Statement Do? The Oracle TRUNCATE statement, or TRUNCATE TABLE statement, removes all data from a table. It’s similar to the […]
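As a quick sketch (the table name is an assumption):

```sql
-- TRUNCATE is DDL: it commits implicitly and cannot be rolled back
TRUNCATE TABLE employees;

-- Same end result as, but much faster than, a full delete:
-- DELETE FROM employees;
```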
Categories: Development

Oracle JDeveloper (SOA and BPM) 12c - Download Temporarily Not Available

Andrejus Baranovski - Mon, 2016-12-26 04:40
If you try to download JDeveloper (as well as SOA Suite or BPM Suite) from OTN, you will see a message in the OTN download section: "This page is temporarily not available we'll be back soon".

You should not worry. As per Shay Shmeltzer's answer on the OTN Forum: "We discovered an issue with the installer - we are working to fix this. Once we have the updated installer we'll update the forum and the pages." Read more here.

There is a solution: if you urgently need to download JDeveloper, go to the Oracle Software Delivery Cloud and download it from there.

Links for 2016-12-25 [del.icio.us]

Categories: DBA Blogs

12cR1 RAC Posts -- 1 : Grid Infrastructure Install completed (first cycle)

Hemant K Chitale - Sat, 2016-12-24 09:17
Just as I had posted 11gR2 RAC Posts in 2014  (listed here), I plan to post some 12cR1 RAC (GI, ASM) posts over the next few weeks.

Here's my Grid Infrastructure up and running. (Yes, I used racattack for this first 12cR1 setup.)

[root@collabn1 ~]# crsctl status resource -t
Name  Target  State  Server  State details
Local Resources
ONLINE  ONLINE  collabn1  Volume device /dev/asm/shared-141 is online
ONLINE  ONLINE  collabn2  Volume device /dev/asm/shared-141 is online
ONLINE  ONLINE  collabn1  mounted on /shared,S
ONLINE  ONLINE  collabn2  mounted on /shared,S
Cluster Resources
1  ONLINE  ONLINE  collabn1  172.16.
[root@collabn1 ~]#
[root@collabn1 ~]# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 96fbcb40bfeb4ff7bf18881adcfef149 (/dev/asm-disk1) [DATA]
Located 1 voting disk(s).
[root@collabn1 ~]#
[root@collabn1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1632
Available space (kbytes) : 407936
ID : 827167720
Device/File Name : +DATA
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@collabn1 ~]#
[root@collabn1 ~]# nslookup collabn-cluster-scan

Name: collabn-cluster-scan.racattack
Name: collabn-cluster-scan.racattack
Name: collabn-cluster-scan.racattack

[root@collabn1 ~]#

I hope to run a few cycles of setups, switching to different node names, IPs, DiskGroup names etc. over the next few weeks.

Categories: DBA Blogs



Target=_Blank for Cards report template

Jeff Kemp - Sat, 2016-12-24 02:20

I wanted to use the “Cards” report template for a small report which lists file attachments. When the user clicks on one of the cards, the file should download and open in a new tab/window. Unfortunately, the Cards report template does not include a placeholder for extra attributes on the anchor tag, so it won’t let me add “target=_blank” like I normally would.

One solution is to edit the Cards template to add the extra placeholder; however, this means breaking the subscription to the Universal Theme.

As a workaround for this I’ve added a small bit of javascript to add the attribute after page load, and whenever the region is refreshed.

  • Set report static ID, e.g. “mycardsreport”
  • Add Dynamic Action:
    • Event = After Refresh
    • Selection Type = Region
    • Region = (the region)
  • Add True Action: Execute JavaScript Code
    • Code = $("#mycardsreport a.t-Card-wrap").attr("target","_blank"); (replace the report static ID in the selector)
    • Fire On Page Load = Yes

Note: this code affects all cards in the chosen report.

Filed under: APEX Tagged: APEX, javascript, jQuery, tips-&-tricks

First touch with Oracle Container Cloud Service

Marcelo Ochoa - Fri, 2016-12-23 17:52
Some weeks ago Oracle released Oracle Container Cloud Service, Oracle's offering for running Docker containers.
It basically provides several Linux machines for running Docker containers; one of them is designated as the controller of the cluster services. Here is a simple screenshot of the service view:

By clicking on the Container Console link, you access the manager instance.

The admin password for this console was defined during service creation. Here is the look & feel of the container console:
Basically the steps for using your containers are:

  • Define a service
  • Deploy a service
  • Monitor your running containers

Defining a service is basically a web representation of the docker run command; the supported version is Docker 1.10. Several services are defined as examples, including an Apache/PHP web server, Jenkins and MariaDB, among others.
I tested this functionality by adding a service for running Oracle 12cR1. The image for running Oracle 12c is not in a public repository, so you have to define a private registry for Docker images; remember that you must not push Oracle binaries to a public repository, because that violates the license terms.
So if you follow this guide you can build your own registry server. But this is not enough, because the registry server should be served over https for security reasons; following this guide, you can put an NGinx reverse proxy in front of it with an SSL certificate signed by Let's Encrypt. To get a free SSL certificate, the registry server must be registered in a public DNS server.
Once you have a registry server up, running and accessible over https from the Internet, it can be added in the Registries -> New Registry section. The mandatory entries are:
Email: user@domain
URL: server.domain.com
UserName: user_name
Password: your_password
Description: A description text

The 443 port for the SSL traffic is not required in the URL; it will be translated to, for example, https://server.domain.com:443/v2/. The registry server will ask for an HTTP authenticated user, and the Cloud Service will provide the UserName and Password values.
Once you have a registry server up, running and registered with the Cloud Service, you can define your Docker service. The Service Builder page looks like:
The Builder pane is a graphical representation of the docker run command. The image name (in black) must include the 443 port, for example server.domain.com:443/oracle/database:
The process to build the above image on your local machine is:
$ cd dockerfiles
$ ./buildDockerImage.sh -v -e
$ docker login -u user_name -p user_pwd server.domain.com:443
$ docker tag oracle/database: server.domain.com:443/oracle/database:
$ docker push server.domain.com:443/oracle/database:
Then, when you deploy your service, the Cloud Service will pull the above image from your private registry.
The idea of building an Oracle 12cR1 EE image and testing it on the Oracle Container Cloud Service was to compare the performance against the DBAAS and IAAS tests. The result is:
Max IOPS = 1387654
Max MBPS = 21313
Latency  = 0

Not so bad. Under the hood this Oracle RDBMS is running on Oracle Linux Server release 6.6.89/Docker 1.10, 4 x Intel(R) Xeon(R) CPU, 16Gb RAM; the file system seems to be XFS.

Drawbacks:
- By not providing a private registry where you can build/pull/push your custom images, the usage of the service could be complicated for less experienced Docker users.
- I can't find a way to define a shared filesystem for a given volume. For example, the above deployment puts the Oracle datafiles into an internal container volume; if you stop your deployment, all the data is lost. The only possibility is to pause/unpause the service if you don't want to lose your data. Note: in the Service Editor (Tips & Tricks button) there is an example defining an external volume as /NFS/{{.EnvironmentID}}/{{.DeploymentID}}/{{.ServiceID}}/{{.ServiceSlot}}:/mnt/data, but it didn't work for me.
- You can't modify host OS parameters. For example, if you want to deploy an ElasticSearch cluster it is necessary to set vm.max_map_count=262144 in the /etc/sysctl.conf file, so it is a limited environment even for a simple test case.
- The Docker version is 1.10.3, which means that if you want to deploy Oracle XE it doesn't work, because --shm-size=1g is not supported.
- Sometimes the Container Cloud Console kills my Chrome browser (Linux or Windows version); here is the screenshot. It seems to be a JavaScript problem:
Final thoughts

The Container Cloud Service console is a good abstraction (graphical interface) over the typical Docker command-line services, for example:
Services -> docker run command
Stacks -> docker-compose command, docker-compose.yml (graphical interface)
Hosts -> Bare metal/VM servers
In my personal opinion, if I had to deploy a complex Docker installation, I would deploy a set of Oracle Compute Cloud Service instances running Oracle Linux/Ubuntu with the latest Docker release and the native docker swarm service. Why?

  • It runs faster; see my previous post.
  • I have control of the host parameters (sysctl among others, see the host setup section)
  • I can define shared filesystems on ext4 or NFS partitions
  • I can build my own images using Dockerfile commands
  • I can deploy/manage the infrastructure (nodes/network) using the docker swarm command.
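As a sketch of that alternative, a swarm stack file could declare an NFS-backed named volume so the datafiles survive container removal (the image tag, NFS server address and export path below are assumptions):

```yaml
version: "3"
services:
  db:
    image: server.domain.com:443/oracle/database:12.1.0.2-ee  # hypothetical tag
    deploy:
      replicas: 1
    volumes:
      - oradata:/opt/oracle/oradata  # datafiles on the shared volume

volumes:
  oradata:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=nfs.example.com,rw"
      device: ":/exports/oradata"
```

Deployed with docker stack deploy, the volume is mounted from the NFS server on whichever node ends up running the container.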

Primary Storage, Snapshots, Databases, Backup, and Archival.

Kubilay Çilkara - Fri, 2016-12-23 13:34
Data in the enterprise comes in many forms. Simple flat files, transactional databases, scratch files, complex binary blobs, encrypted files, and whole block devices, and filesystem metadata. Simple flat files, such as documents, images, application and operating system files are by far the easiest to manage. These files can simply be scanned for access time to be sorted and managed for backup and archival. Some systems can even transparently symlink these files to other locations for archival purposes. In general, basic files in this category are opened and closed in rapid succession, and actually rarely change. This makes them ideal for backup as they can be copied as they are, and in the distant past, they were all that there was and that was enough.

Then came multitasking. With the introduction of multiple programs running in a virtual memory space, it became possible that files could be opened by two different applications at once. It also became possible that these locked files could be opened and changed in memory without being synchronized back to disk. So elaborate systems were developed to handle file locks, and buffers that flush their changes back to those files on a periodic or triggered basis. Databases in this space were always open, and could not be backed up as they were. Every transaction was logged to a separate set of files, which could be played back to restore the database to functionality. This is still in use today, as reading the entire database may not be possible, or performant, in a production system. This is called a transaction log. Mail servers, database management systems, and networked applications all had to develop software programming interfaces to back up to a single string of files. Essentially this format is called Tape Archive (tar).

Eventually and quite recently actually, these systems became so large and complex as to require another layer of interface with the whole filesystem, there were certain applications and operating system files that simply were never closed for copy. The concept of Copy on Write was born. The entire filesystem was essentially always closed, and any writes were written as an incremental or completely new file, and the old one was marked for deletion. Filesystems in this modern era progressively implemented more pure copy on write transaction based journaling so files could be assured intact on system failure, and could be read for archival, or multiple application access. Keep in mind this is a one paragraph summation of 25 years of filesystem technology, and not specifically applicable to any single filesystem.

Along with journaling, which allowed a system to retain filesystem integrity, there came an idea that the files could intelligently retain the old copies of these files, and the state of the filesystem itself, as something called a snapshot. All of this stems from the microcosm of databases applied to general filesystems. Again databases still need to be backed up and accessed through controlled methods, but slowly the features of databases find their way into operating systems and filesystems. Modern filesystems use shadow copies and snapshotting to allow rollback of file changes, complete system restore, and undeletion of files as long as the free space hasn’t been reallocated.

Which brings us to my next point, which is the difference between a backup or archive, and a snapshot. A snapshot is a picture of what a disk used to be. This picture is kept on the same disk, and in the event of a physical media failure or overuse of the disk itself, it is totally useless. There needs to be sufficient free space on the disk to hold the old snapshots, and if the disk fails, all is still lost. While media redundancy is easily managed to virtually preclude failure, space considerations, especially in aged or unmanaged filesystems, can easily get out of hand. The effect of a filesystem growing near to capacity is essentially a limitation of usable features. As time moves on, simple file rollback features will lose all effectiveness, and users will have to go to the backup to find replacements.

There are products and systems to automatically compress and move files that are unlikely to be accessed in the near future. These systems usually create a separate filesystem and replace your files with links to that system. This has the net effect of reducing the primary storage footprint and the backup load, and allowing your filesystem to grow effectively forever. In general, this is not as good a thing as it sounds, as the archive storage may still fill up, and you then have an effective filesystem that is larger than the maximum theoretical size, which will have to be forcibly pruned to ever restore properly. Also, if the archive system is not integrated, your backup system will probably be unaware of it. This would mean that the archived data would be lost in the event of a disaster or catastrophe.

Which brings about another point: whatever your backup vendor supports, you are effectively bound to use those products for the life of the backup system. This may be ten or more years and may impact business flexibility. Enterprise backup products can easily cost tens of thousands per year, and however flexible your systems need to be, your backup vendor must be equally flexible.

Long term planning and backup systems go hand in hand. Ideally, you should be shooting for a 7- to 12-year lifespan for these systems. They should be able to scale in features and load for the predicted curve of growth, with a very wide margin for error. Conservatively, you should plan on a 25% data growth rate per year at minimum; generally speaking, 50 to 100% is far more likely. Highly integrated backup systems truly are a requirement of Information Services, and while costly, failure to effectively plan for disaster or catastrophe will lead to an end of business continuity, and likely the continuity of your employment.

Jason Zhang is the product marketing person for Rocket Software's Backup, Storage, and Cloud solutions.
Categories: DBA Blogs

IT-Tage 2016 Informatik aktuell: feedback

Yann Neuhaus - Fri, 2016-12-23 09:07

Today, to finish the year, I post a brief personal impression of the IT-Tage 2016 in Frankfurt at the Hotel Maritim, where I was also a speaker.


I presented 2 sessions on SQL Server: “SQL Server Errorlog Entmystifizierung” & “SQL Server 2016: Neue Sicherheitsfunktionen”.
I wasn’t the only one from dbi services who spoke at that conference:

  • David Barbarin with also 2 sessions: “SQL Server – Locks, latches and spinlocks” & “SQL Server 2016 Availability Group Enhancements”
  • Clemens Bleile with 1 session: “SQL Plan Directives: Neuigkeiten in 12.2. Produktions-Ausführungspläne in Testumgebungen reproduzieren”
  • Philippe Schweitzer with 1 session: “Feasibility study for building a software factory based on GIT repository”
  • Daniel Westermann with 1 session: “Breaking the deadlock: Migrating from proprietary databases to PostgreSQL”

You can already download all the presentations via this link.

After my presentation day, I had the opportunity to go to a very interesting session by Oliver Hock: “Ein Prozess lernt laufen: LEGO-Mindstorms-Steuerung mit BPMN” (a process learns to walk: controlling LEGO Mindstorms with BPMN). With a Lego Mindstorms kit, he showed how to solve a magic cube.


This session is also on YouTube - watch the demo at the end (the last 60 seconds). It was very nice! ;-)

I would like to thank the entire team of Informatik Aktuell, who put together a smooth and interesting event.

I hope I can go again next year, with new sessions, and follow other interesting sessions…

In the evening, you could also enjoy the Christmas Market, which is two metro stops from the hotel.

I wish you a merry Christmas and, as we say in Alsace: “A guetta rutsch ins neja Johr!” (a good slide into the New Year!)


Cet article IT-Tage 2016 Informatik aktuell: feedback est apparu en premier sur Blog dbi services.

Oracle MEDIAN Function with Examples

Complete IT Professional - Fri, 2016-12-23 05:00
In this article, I’ll explain what the Oracle MEDIAN function is, and show you some examples. Purpose of the Oracle MEDIAN Function The MEDIAN function returns the median of the set of provided values. The MEDIAN is the middle value in a set of values. So, for example, you had these five values: 10, 15, […]
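As a quick sketch of the idea (the sample values beyond the excerpt are made up):

```sql
-- Returns 20, the middle of the five ordered values
SELECT MEDIAN(val) AS median_val
FROM (
  SELECT 10 AS val FROM dual UNION ALL
  SELECT 15 FROM dual UNION ALL
  SELECT 20 FROM dual UNION ALL
  SELECT 40 FROM dual UNION ALL
  SELECT 50 FROM dual
);
```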
Categories: Development

Migrating your existing availability group infrastructure to 2016

Yann Neuhaus - Fri, 2016-12-23 04:57

Do you already have a plan to migrate your old SQL Server 2012 availability group infrastructure to Windows 2016 and SQL Server 2016? In a previous post, I talked about distributed availability groups and cross-cluster migration scenarios, but this time the game is not the same, because we want to achieve an in-place upgrade of the existing AG infrastructure. This question will probably become a concern in the next year, and if we take a closer look at the improvements shipped with the new versions of Windows and SQL Server, we will not be disappointed for sure.

Indeed, cluster rolling upgrade is a new feature of Windows Server 2016 which allows us to migrate smoothly (and almost transparently) the WSFC side of the existing database infrastructure. On the other side, upgrading highly available replicas from SQL Server 2012 to SQL Server 2016 is also possible without completely reinstalling the availability group infrastructure. Thus, we may benefit from a temporary, mixed infrastructure on both sides to reduce the outage timeframe of our applications. I can think of some customers where this could be helpful given their business and the corresponding SLAs.

So let’s demonstrate this kind of scenario with a classic customer availability group infrastructure. Most availability groups implemented in my area consist of two replicas, meaning a WSFC with 2 cluster nodes at the low level of the global architecture, as shown below:

blog 111 - 00 - initial infra WSFC2012

So the question is how to achieve the migration of the above infrastructure from the 2012 version to the 2016 version (both Windows and SQL Server) with low downtime? Well, one solution would consist in preparing and adding two temporary extra nodes which would run Windows Server 2016 and SQL Server 2012 in a first step. Unfortunately, we cannot directly mix two different versions of SQL Server in an existing availability group yet. We have to execute an extra step to upgrade, one by one, each replica we want to run on SQL Server 2016. Having two extra nodes will allow us to prepare the migration smoothly without impacting the existing highly available infrastructure.

So let’s begin by adding the 2 extra cluster nodes with the Windows Server 2016 version. Basically, we may use either the GUI or PowerShell cmdlets for that.

The initial scenario (2012 version) is as follows:

blog 111 - 0 - initial config cluster

Let’s add the two extra cluster nodes which run on Windows Server 2016. According to Microsoft technet procedure, the key point is to perform this action from a Windows Server 2016 node exclusively.

The PowerShell cmdlet used is the same as in the previous version, and I executed it for the two additional nodes (WIN20168SQL16 and WIN20169SQL16), which both run Windows Server 2016. Just remember to exclude the two extra nodes from the quorum vote to avoid impacting the existing configuration.
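For reference, these steps could look like this (a sketch using the FailoverClusters cmdlets, run from one of the new Windows Server 2016 nodes):

```powershell
# Add the two temporary 2016 nodes to the existing WSFC
Add-ClusterNode -Name WIN20168SQL16 -NoStorage
Add-ClusterNode -Name WIN20169SQL16 -NoStorage

# Exclude the new nodes from the quorum vote for now
(Get-ClusterNode -Name WIN20168SQL16).NodeWeight = 0
(Get-ClusterNode -Name WIN20169SQL16).NodeWeight = 0
```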

blog 111 - 1 - Add Cluster Node 2016

However, we may notice the new cluster functional level property, as shown below. The value is equal to 8, meaning that the cluster has switched to a temporary/mixed mode, because at the moment we have cluster nodes on both the 2012 and 2016 versions.

blog 111 - 2 - Cluster Functional Level 2016

The transitioned infrastructure now includes 4 nodes. The first 2 nodes run Windows Server 2012, whereas the last 2 nodes run Windows Server 2016. An availability group runs on top of the first 2 nodes, and two additional replicas are ready to be enrolled into the existing infrastructure.

blog 111 - 21 - transitioned WSFC

So now let’s move on to the SQL Server side and add the 2 additional replicas.

The initial AG scenario is as follows:

blog 111 - 3 - Initial AG 2012

After adding the two replicas in asynchronous mode, we get the following picture:

blog 111 - 4 - Add replica 2016 to AG 2012

At this step, we now have to upgrade the newly added replicas to SQL Server 2016. One important thing to keep in mind here is that we absolutely have to prevent a failover to an upgraded replica until all the new secondary replicas have been upgraded. Indeed, according to the Microsoft documentation, an upgraded primary replica can no longer ship logs to any secondary replica whose SQL Server instance has not yet been upgraded to the same version.

So in my case, I first upgraded the WIN20169SQL16\SQL12 replica and then WIN20168SQL16\SQL12, as shown below:

blog 111 - 5- AG config after upgrade node

The new transitioned infrastructure is shown in the picture below:

blog 111 -51- Transitioned infrastructure


The next part of the migration includes a short downtime; the previous steps implied no outage so far.

Basically the next part of the procedure will include the following steps:

  • Choose the next replica (in the 2016 version) that will become the primary and change its replication mode to synchronous in order to prevent losing data
  • Failover the availability group to this replica (at this step old replicas in 2012 version will not be synchronized because the new primary replica may no longer ship logs to them as said previously)

blog 111 - 5- AG health state after upgrade node

  • Change the replication mode of the second 2016 replica to synchronous to match the initial configuration
  • Remove old replicas in 2012 from the availability group

blog 111 -52- Transitioned infrastructure

The T-SQL script was as follows in my case (the availability group name [AG2016] and the old 2012 replica names OLDNODE1\SQL12 and OLDNODE2\SQL12 below are placeholders; replace them with your own):


-- Change temporary replication to synchronous
-- for the next SQL Server 2016 primary replica
USE [master]
GO
ALTER AVAILABILITY GROUP [AG2016]
MODIFY REPLICA ON N'WIN20168SQL16\SQL12'
WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);
GO

-- Initiate failover to the next SQL Server 2016 primary replica
-- (executed on that replica)
USE [master]
GO
ALTER AVAILABILITY GROUP [AG2016] FAILOVER;
GO

-- Change temporary replication to asynchronous for the
-- old replicas (2012)
USE [master]
GO
ALTER AVAILABILITY GROUP [AG2016]
MODIFY REPLICA ON N'OLDNODE1\SQL12'
WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);
ALTER AVAILABILITY GROUP [AG2016]
MODIFY REPLICA ON N'OLDNODE2\SQL12'
WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);
GO

-- Remove old replicas (2012)
USE [master]
GO
ALTER AVAILABILITY GROUP [AG2016] REMOVE REPLICA ON N'OLDNODE1\SQL12';
ALTER AVAILABILITY GROUP [AG2016] REMOVE REPLICA ON N'OLDNODE2\SQL12';
GO

Let’s go back to the cluster side, where the next step will consist in removing the old cluster nodes from the WSFC by using the usual commands such as Remove-ClusterNode.

blog 111 - 6- Remove old cluster nodes

… And we may finally update the cluster functional level to 9 (2016 version). Just be aware that upgrading the cluster functional level to 2016 makes the process irreversible: reverting back to the initial configuration will simply not be possible, or at least it will require extra steps and longer downtime as well.
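This last step can be sketched as (FailoverClusters cmdlets again):

```powershell
# Check the mixed-mode level (8 = Windows Server 2012, 9 = Windows Server 2016)
(Get-Cluster).ClusterFunctionalLevel

# Commit the upgrade - irreversible once run
Update-ClusterFunctionalLevel
```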

blog 111 - 7 - Update cluster functionalLevel 2016

And let’s get the final view of our upgraded availability group dashboard:

blog 111 - 8 - AG 2016 dashboard

The listener stays the same, and the whole operation is almost transparent to applications.


Bottom line

In this blog post we had a glimpse of the new rolling-upgrade capabilities of both Windows 2016 and SQL Server 2016. Of course, reality would probably be a little more complicated once we introduce other parameters such as the customer context, the number of availability groups, the performance impact of adding temporary replicas, external dependencies and so on. But these features seem promising and may be very helpful for future migration scenarios. I’m looking forward to experimenting with them at customer sites!

Happy upgrade!



Cet article Migrating your existing availability group infrastructure to 2016 est apparu en premier sur Blog dbi services.

Links for 2016-12-22 [del.icio.us]

Categories: DBA Blogs

Can I do it with PostgreSQL? – 8 – Transportable tablespaces

Yann Neuhaus - Fri, 2016-12-23 01:47

My colleague Franck posted a comment to one of my last “Can I do it with PostgreSQL” blog posts. The comment he posted is: “Here is an idea for a future ‘Can I do it with PostgreSQL?’. My favorite Oracle feature: transportable tablespaces.” When you’ve read my post about how PostgreSQL implements tablespaces, then you probably already know the answer: No, you cannot do this in PostgreSQL. Having thought about this for some time, I decided to check the Oracle documentation for what transportable tablespaces are good for. I know the basics and how you can do it in Oracle, but better to check twice :)

According to the documentation, the main goal of transportable tablespaces is to transport data. This means you can transport one or more tablespaces (which then become a transportable tablespace set) from one host to another and then “plug” that set into an existing database (as long as the tablespaces are self-contained). One great thing you can do with it, depending on the use case, is that the target database does not need to have the same standard block size as the source database: you can transport a tablespace with a 16k block size to a database with a default block size of 8k. This is another thing you cannot do in PostgreSQL: in PostgreSQL the block size is fixed at configure/compile time. Once you have the compiled binaries, you cannot change it afterwards.

Probably the greatest benefit of transportable tablespaces is that they save you time in moving your data around. You just copy the data files that make up your tablespace(s) and then use expdp/impdp for the metadata, and that's it. When you go for the multi-tenant architecture, you can also use transportable tablespaces to turn a non-CDB into a pluggable database on the target. For a more detailed introduction, check the documentation (linked above).
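As a rough sketch of that workflow (the tablespace name, paths and host name below are made-up placeholders, not taken from any particular system), the Oracle side looks something like:

```shell
# 1. Make the tablespace read only on the source and export its metadata.
sqlplus -s / as sysdba <<'SQL'
alter tablespace users read only;
SQL
expdp system directory=DATA_PUMP_DIR dumpfile=tts_users.dmp \
      transport_tablespaces=users transport_full_check=y

# 2. Copy the datafile(s) plus the metadata dump to the target host.
scp /u01/oradata/DB1/users01.dbf target:/u01/oradata/DB2/
scp /u01/admin/DB1/dpdump/tts_users.dmp target:/u01/admin/DB2/dpdump/

# 3. Plug the tablespace set into the target database.
impdp system directory=DATA_PUMP_DIR dumpfile=tts_users.dmp \
      transport_datafiles='/u01/oradata/DB2/users01.dbf'
```

Once the import completes, the tablespace can be switched back to read write on both sides if required.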

Back to PostgreSQL: What options do you have for transporting your data from one database to another?

The tool of choice for most cases probably is pg_dump. The big advantage of pg_dump is that you can use it over the network and directly write everything you want to export into the target database using a pipe:
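A minimal sketch of that pipe approach (host names and ports here are placeholders):

```shell
# Dump from the source and replay into the target in one step,
# without writing an intermediate file to disk.
pg_dump -h source-host -p 5438 -C test | psql -h target-host -p 5439 postgres
```

The -C flag includes the CREATE DATABASE statement in the dump, which is why we connect to the postgres maintenance database on the target.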


You can even do that using parallel processes when you combine pg_dump with pg_restore although using a pipe is not supported in that case. To demonstrate this we’ll need some sample data (I am using PostgreSQL 9.5.4 as the source):

postgres@pgbox:/home/postgres/ [PG954] psql
psql (9.5.4 dbi services build)
Type "help" for help.

(postgres@[local]:5438) [postgres] > create database test;
(postgres@[local]:5438) [postgres] > \c test
You are now connected to database "test" as user "postgres".
(postgres@[local]:5438) [test] > create table t1 ( a int, b int );
(postgres@[local]:5438) [test] > create table t2 ( a int, b int );
(postgres@[local]:5438) [test] > insert into t1 values (generate_series(1,1000000), generate_series(1,1000000));
INSERT 0 1000000
(postgres@[local]:5438) [test] > insert into t2 select * from t1;
INSERT 0 1000000

Let's say I want to copy the data to a PostgreSQL 9.6.1 instance on the same host. How can I do that? Quite easily:

postgres@pgbox:/home/postgres/ [PG961] pg_dump -V
pg_dump (PostgreSQL) 9.6.1 dbi services build
postgres@pgbox:/home/postgres/ [PG961] mkdir /var/tmp/exp
postgres@pgbox:/home/postgres/ [PG961] pg_dump -h localhost -p 5438 -C -c -F d -j 2 -f /var/tmp/exp/ test
postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p 5439 -d postgres -F d -C /var/tmp/exp/
postgres@pgbox:/home/postgres/ [PG961] psql -c "\l" postgres
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 test      | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
(4 rows)

postgres@pgbox:/home/postgres/ [PG961] psql -c "\d" test
        List of relations
 Schema | Name | Type  |  Owner   
 public | t1   | table | postgres
 public | t2   | table | postgres
(2 rows)

Not a big deal and possible over the network, too.

Another option would be to use pg_basebackup to create a complete new instance from the source (source and target need to be of the same PostgreSQL release in this case). Again this is possible over the network.

You can even use rsync to copy the whole cluster to a new host and then do a second rsync while the source is down for a short time. The second pass copies only the files that changed since the first rsync and will probably be very fast, but you will need a short downtime of the source for it.
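The two-pass pattern described above can be sketched like this (paths and the host name are placeholders):

```shell
# First pass: bulk copy while the source cluster is still running.
rsync -a /u02/pgdata/PG1/ newhost:/u02/pgdata/PG1/

# Short downtime: stop the source cluster cleanly ...
pg_ctl -D /u02/pgdata/PG1 stop -m fast

# ... then a second, much faster pass that only transfers what changed.
rsync -a --delete /u02/pgdata/PG1/ newhost:/u02/pgdata/PG1/
```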

There are other methods for moving your data around in PostgreSQL but the above are the most popular.

Hope this helps …


The article Can I do it with PostgreSQL? – 8 – Transportable tablespaces first appeared on Blog dbi services.

Updated Technical Best Practices

Anthony Shorten - Thu, 2016-12-22 16:20

The Technical Best Practices whitepaper available from My Oracle Support has been updated and published.

The new version has the following changes:

  • The format has been changed to make it more readable for both screen and printing use.
  • Old and outdated information has been removed.
  • Advice for versions of the products not on premier support has been removed. This streamlines the advice.
  • Advice duplicated in the online help or documentation provided with the product has been removed. It is recommended to consult the online help or documentation provided with the product before using the advice in this whitepaper. This applies to all our whitepapers.
  • IBM WebSphere specific advice has been removed, as it is already covered by the product documentation or IBM's own documentation.
  • New and updated advice has been added to the document for new versions (including the latest releases).

Over the next month or two, additional whitepapers will be overhauled and re-released.

The whitepaper is available as Technical Best Practices (Doc Id: 560367.1) from My Oracle Support.

I wish you all a happy holidays and see you in 2017.

Python for Analytics - Exploring Data with Pandas

Rittman Mead Consulting - Thu, 2016-12-22 12:24
A Crack Team!

At Rittman Mead, we're always encouraged to branch out and pursue new skills in the field in an effort to improve our skill sets and become more technically fluent. One of the great things about working here, aside from that, is that while we all have a really solid foundation in Oracle technologies, many of us possess an incredibly diverse range of skills, fostered by years of working in tech-agnostic engagements. It's a great feeling to know that if you ever run up against some sort of bizarre challenge, or have to integrate some piece of arcane tech into an architectural solution, more than likely someone, somewhere within Rittman Mead has had to do it. It is this freedom to pursue, within reason of course, other technical exploits that has nurtured a real spirit of innovation around the company within the past year. Our GitHub is overflowing with open source visualizations and performance monitoring and maintenance scripts that are simply there for the taking! We've put a lot of time into developing this stuff so our clients and partners don't have to.


python logo But I digress. This blog is about Python and, well, I haven't really mentioned it up until this point. It is in this spirit of innovation, learning, and, frankly, automating the boring stuff, that a lot of us have been pursuing automation and analytical endeavors using the Python language. It has been touted by many as THE language for data science, and rightfully so, given its accessibility and healthy selection of libraries perfectly suited to the task, such as NumPy, Seaborn, Pandas, and Matplotlib. In today's exercise, we're going to walk through common data munging, transformation, and visualization tasks using some of these libraries in order to deliver quick insights into a data set that's near and dear to my heart: Game of Thrones battles and character deaths!

GoT Logo

Through this process, we will be creating our own data narrative that will help to expound upon the idle numbers and labels of the data set. We'll see that the process is less a hard-and-fast, rigid set of rules for approaching data exploration, and more akin to solving a crime, clue by clue, letting the data tell the story.


Aside from its myriad community-driven and maintained libraries, the greatest thing, to me anyway, about Python is its relatively low barrier to entry. Even with little to no previous programming skills, an enterprising lad or lady can get up and running, performing basic, functional programming tasks in no time. You'll be amazed at how quickly you'll start coming up with daily tasks that you can throw a bit (or a lot) of Python at. Today, we'll be tackling some tasks like these, common to the everyday processes of data analysis and data science. Utilizing the Pandas library, in addition to a few others, we'll see how we can programmatically go from question to answer in no time, and with most any structured or unstructured data set. The primary audience of this blog will be those with a bit of Python fluency, in addition to those with an interest in data science and analytics. I will be explaining the steps and providing a Jupyter notebook (link here) for those who wish to follow along, or who might need the extra guidance. So don't bail now! Let's get to it. In this instance, we'll be downloading the Game of Thrones data set from kaggle, a great site that provides open data sets and accompanying analysis projects in multiple languages. Download the data set if you'd like to follow along.


stepping stones Let's begin by taking some steps to get our heads on straight and carve out a clear work flow. I find this is really helpful, especially in programming and/or analytical scenarios where one can begin to suffer from "analysis paralysis". So, at a high level, we'll be doing the following:

  1. First, we'll take a cursory look at the Python libraries we'll be incorporating into our data sleuthing exercise, how they're used, and some examples of their output and ideal use cases.

  2. Next we'll use the tools in these libraries to take a deeper dive into our data set and start to construct our initial line of questioning. This is where we get to be a bit creative in coming up with how we're going to wrap our heads around the data, and what kind of questions we're going to throw at it.

  3. We'll then chase down any leads, incorporating additional analyses where necessary, and begin to construct a narrative about our data set. At this point we'll be formulating hypotheses and attempting to construct visualizations that will help us to further or disprove our investigation.


Any great detective must always have with them a toolkit with which to thoroughly examine any crime scene, and that's essentially what we have in the Pandas, Seaborn, and Numpy ("num-pie") libraries for the Python programming language. They provide a set of methods (functions) that can take an input, or a number of inputs, do some magic, and then provide us with lots of really useful information. So let's begin by examining these libraries and what we can do with each.

Pandas and Numpy


Pandas is great at doing a bunch of really common tasks related to data exploration, including, but not limited to, indexing and selection, merging and joining data sets, grouping and aggregations, and visualizing data. This will be the library with which we'll be doing a lot of the heavy lifting. Pandas also provides us with the Dataframe object, which greatly expands on the comparatively more rigid Numpy ndarray object. These 'objects' are simply containers that hold data of some kind and allow us to interact with that data.

Matplotlib and Seaborn


Matplotlib is a robust visualization library built to enable interactive, MATLAB-style plotting on most any platform or back-end. This library, along with Seaborn, should be your go-to for producing super malleable graphs and visualizations. Working alongside matplotlib, seaborn pitches itself as a go-to for statistics-based visualizations, but it supports complex, grid and algorithm based charts as well. Both of these libraries will help us to make quick and insightful decisions about our data, and help us to gather evidence supporting, or disproving, any hypotheses we might form.



Now that we've armed ourselves with the tools we need to thoroughly examine any potential data set we're given, let's do just that! Opening up the Game of Thrones csv files from above, let's first take a look at what kinds of information we have. The basic stats are as follows:


  • Battles - a complete listing of the battles in the book series and their stats! Attacker, defender, army size, you name it, we've got it.

  • Character Deaths - something the series/show is quite known for, who died? This contains some great info, such as allegiance and nobility status.

  • Character Predictions - The more morbid of the lot, this data set lists predictions on which character will die. We won't be using this sheet for our exercise.

A Hypothesis of Thrones

Having just finished the monumental series myself, you could say I'm somewhat of a subject matter expert; we have a situation not unlike the one you might find in any organization. We've got an interested party that wants to look further into some aspect of their data, and we can use our investigatory tool-set to get real results and gain some valuable and informative insights. As subject matter experts though, we should ideally come at our data with at least some semblance of a hypothesis, something we're trying to prove (or, for that matter, disprove) with the data. For the sake of this exercise, and fitting with the theme of the data, I'm going to try to dig up an answer to the following:

Does House Lannister, for as evil and scheming as they are, and as much as they get away with, eventually get what's coming to them?

No lannisters!

As much as I'd like to believe it's true, we're going to need to run the numbers and let our data do the talking.

Importing the Data

You can follow along in the Jupyter notebook here now. Working with our Pandas library, we first need to get our data into some sort of workable object. As we stated before, this is the data frame. It is simply a table-like object that is really good at handling empty values and data of many different types. We can easily perform operations on these objects and visualize them with minimal fuss. So, enough talk. Let's do it!

Working in your favorite IDE (Pycharm is easy to use and comes in a free version), we start a new project, import the libraries we need, and then drop in our first piece of code. This is the section that imports our csv data set and then converts it to a data frame. So, now that we have our object, what do we do with it?
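That first piece of code boils down to a single read_csv call. As a self-contained sketch (the real code would read the kaggle battles.csv file; here we parse the same structure from an inline string so it runs anywhere):

```python
import io

import pandas as pd

# In practice you would read the kaggle file directly, e.g.:
#   battles = pd.read_csv("battles.csv")
# For a self-contained illustration, parse the same structure from a string.
csv_data = io.StringIO(
    "name,year,attacker_1,attacker_size\n"
    "Battle of the Golden Tooth,298,Lannister,15000\n"
    "Battle at the Mummer's Ford,298,Lannister,\n"
)
battles = pd.read_csv(csv_data)

print(battles.shape)   # (rows, columns)
print(battles.dtypes)  # pandas infers types; empty cells become NaN
```

Note how the empty attacker_size cell arrives as NaN rather than breaking the load, which is exactly the kind of messiness Pandas is built to absorb.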


A Graph Has No Name

Now that we have our data frame object, we can begin to throw some code at it, crunch some numbers, and see if, in fact, the Lannisters really did get what was coming to them by the end of book 5. Starting with the battles data set, let's see how they fared in the field through the arc of the story. Did they lose more or fewer troops comparatively? We can do this easily by breaking our data frame into smaller, more manageable chunks, and then graphing these data points accordingly. We are going to use the data set to build a step-by-step set of analyses that examines the Lannister victories and defeats throughout the story.

Battle / Troop Loss Over Time

Did the Lannisters hit a losing streak, or did they do well throughout the story? Did they win or lose more of their battles over time?

  1. Start with new data frame based on house and troop sizes:
    troop sizes

  2. Filter to get new results (Lannisters only):

Lannister Troops

Right away we see we have some data issues; that is, there are some holes in the attacker size column. The good thing is that we can more or less look at this small table and get all the info we need from it right away. The numbers drop significantly through the years, and that's all there really is to it. But was this because they lost more troops, or simply threw fewer at the problem as they began to carve out their claim to the kingdom? This analysis is not very telling; we're going to need to do some digging elsewhere to answer our question. Let's do some comparisons.
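The two steps above can be sketched as follows (a toy data frame stands in for the kaggle file; the column names attacker_1 and attacker_size are modelled on it, and the values are made up):

```python
import pandas as pd

# Toy battles data; column names mirror the kaggle set, values are invented.
battles = pd.DataFrame({
    "year": [298, 298, 299, 299, 300],
    "attacker_1": ["Lannister", "Stark", "Lannister", "Greyjoy", "Lannister"],
    "attacker_size": [15000.0, 18000.0, None, 1000.0, 7500.0],
})

# Step 1: a smaller frame with just the house and troop size columns.
troops = battles[["year", "attacker_1", "attacker_size"]]

# Step 2: filter down to the Lannister attacks only.
lannister = troops[troops["attacker_1"] == "Lannister"]

print(lannister)
print("missing attacker_size values:", lannister["attacker_size"].isna().sum())
```

The isna() count surfaces exactly the kind of hole in attacker_size described above.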

% of Battles Won / Lost

So how did the Lannisters do in the field? Of the 8 battles they fought in, how many did they win? How does this compare with the other armies of the Seven Kingdoms?

As we did before, let's get a new data frame together and then do our grunt work on it to try and answer these questions. Grabbing the columns we need, let's run the numbers on how the Lannisters stack up against the other houses of Westeros in the field.

How many battles did they fight compared to the other Houses?

battles fought

How many did they actually win?


We can see right away that, out of all the battles they fought throughout the series (decidedly more than any other house), they came out on top. Could the Lannisters be the dominating force on the field as well as at court? The Starks are the only house that meets them conflict for conflict, and the Lannisters still reign supreme! Let's take things down to a finer grain and see how those who aligned themselves with the Lion fared compared to those who didn't.
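Those fought-versus-won numbers come from a couple of value_counts calls. A sketch with toy data (attacker_1 and attacker_outcome are column names modelled on the kaggle battles file; the rows here are invented):

```python
import pandas as pd

# Toy data; the real analysis runs the same code on the kaggle frame.
battles = pd.DataFrame({
    "attacker_1": ["Lannister", "Lannister", "Lannister",
                   "Stark", "Stark", "Greyjoy"],
    "attacker_outcome": ["win", "win", "win", "win", "loss", "win"],
})

# Battles fought per attacking house, and battles won per attacking house.
fought = battles["attacker_1"].value_counts()
won = battles.loc[battles["attacker_outcome"] == "win", "attacker_1"].value_counts()

# Stitch the two series together and derive a win percentage.
summary = pd.DataFrame({"fought": fought, "won": won}).fillna(0).astype(int)
summary["win_pct"] = (100 * summary["won"] / summary["fought"]).round(1)
print(summary)
```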

Death by Allegiance

Opening up our character deaths file, right away we see we have some pretty good info here: a laundry list of characters, their death year, and the house, if any, to which they were aligned. Let's start by building a data frame and filtering out those who are unaligned, in the Night's Watch, or Wildlings. We want a comparison between houses, and these groups would just muck up the works. Let's do the numbers. We can then plot this info on a basic bar chart to get a rundown of the massacre.

death by alignment
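The filter-then-count step behind that chart can be sketched like this (toy rows; the column name allegiances is modelled on the kaggle character deaths file):

```python
import pandas as pd

# Toy character deaths; in the real notebook this comes from the csv file.
deaths = pd.DataFrame({
    "name": ["a", "b", "c", "d", "e", "f"],
    "allegiances": ["Lannister", "Stark", "Night's Watch",
                    "Wildling", "None", "Lannister"],
})

# Filter out the groups that would muck up a house-to-house comparison.
exclude = ["Night's Watch", "Wildling", "None"]
aligned = deaths[~deaths["allegiances"].isin(exclude)]

deaths_by_house = aligned["allegiances"].value_counts()
print(deaths_by_house)
# deaths_by_house.plot.bar() would render a chart like the one above.
```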

Things are starting to look up... depending on your point of view, I guess. The Lannisters, for all their dirty business, do in fact seem to lose the most named characters tied to their house. Of these, let's see how many were actually nobility, or rather, the most influential in furthering their cause!

noble deaths to total

It would seem our Lannisters aren't too good at keeping their hands clean and letting those of lesser station do their dirty work for them. Although they have the second most aligned character deaths in the series, roughly 75% of them are noble deaths, meaning that people important to their cause are dying! The only other houses that come close, unfortunately, are the Starks (the Red Wedding, no doubt) and the Greyjoys. What this also means, however, is that our claim is gathering more support: the Lannisters may have climbed the royal ladder, but at what cost?
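That noble-to-total ratio is a one-liner once you have a 0/1 nobility flag, as the kaggle deaths file provides. A sketch with invented rows chosen to reproduce the 75% figure for illustration:

```python
import pandas as pd

# Toy deaths with a nobility flag (1 = noble); values are illustrative only.
deaths = pd.DataFrame({
    "allegiances": ["Lannister"] * 4 + ["Stark"] * 4,
    "nobility":    [1, 1, 1, 0] + [1, 0, 0, 0],
})

# The mean of a 0/1 flag per group is exactly the noble-to-total ratio.
noble_pct = (100 * deaths.groupby("allegiances")["nobility"].mean()).round(1)
print(noble_pct)
```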

Paying Your Debts


We can see from the donut chart above (excuse the repetition of colors) that the Lannisters indeed have one of the highest percent-to-total death figures of all the major houses in the Seven Kingdoms. This actually goes quite a long way toward backing up our hypothesis: of all the named characters in the series, the Lannisters lost the lion's share (pun intended). The disconcerting thing is that they either bring down many others with them, or the other noble houses aren't terribly good at keeping themselves among the living either.


Are these figures, combined with their high ratio of noble deaths, enough to satisfy my desire for vengeance? Did they truly reap what they sowed? I have to say I am ultimately undecided, as, although they did lose a great many, they in turn took a greater number down with them. Despite these losses, any notion of vengeful satisfaction must be tempered by the real and lasting damage they did throughout the span of the books' and show's history. Were you able to come up with any additional evidence for or against my case? Link out and show us! Thanks for reading.

Categories: BI & Warehousing

APEX 5.1 - Client side messaging and apps not using Universal Theme

Anthony Rayner - Thu, 2016-12-22 09:42

In APEX 5.1, we are introducing new client side messaging functionality, which aims to provide more modern, faster messaging across APEX, including the capability to display both error and success messages from the client side, without requiring a full page reload (as is the traditional way in APEX). Messaging in APEX is something we have wanted to modernise for a long time, and in 5.1, precipitated by the introduction of the Interactive Grid and its requirement for such messaging, this comes to fruition. 

This new functionality works well with apps already using the Universal Theme, however apps using older, or custom themes may face some consistency issues in how inline item errors are displayed between pages that display messages the old way and the new way. The purpose of this blog post is to highlight when such inconsistencies may occur, and to provide guidance on how you can go about fixing them.

Note: If you have created your own custom theme, please see Method #2 for instructions on how to update your theme to work with client side messaging.

When can these inconsistencies occur?
Client side messaging for inline item errors will be used in the following scenarios:
  1. To display the error messages if an APEX validation error occurs, and the page attribute Reload on Submit is set to Only for Success (which is the default for new pages, irrespective of theme).

Screenshot showing the new Reload on Submit page level attribute in APEX 5.1The new Reload on Submit page level attribute in APEX 5.1

  2. To display the error messages if a client side validation error occurs (such as Value Required = Yes), the app Compatibility Mode is set to 5.1 (default for new apps, irrespective of theme), and a Submit Page button is used with Execute Validations set to Yes.
Compatibility Mode, available in Application Attributes

So as soon as you start adding new pages to an existing app, for example, you will start to hit these inconsistencies, because existing pages will have Reload on Submit set to Always, and new pages will have it set to Only for Success.

In both of the above scenarios, an inline item error will be shown, but it may not look exactly the same as on other existing pages. This is because the client side messaging logic isn't able to use the corresponding template markup from the theme, due to some missing information (more on that in a moment), and instead relies on some generic fallback logic to display the error. We can see the inconsistency below.

Here is an inline item error, rendered using server-side logic due to the page level attribute Reload on Submit being set to Always (default for existing apps).

Screenshot showing Inline item error display for Theme 21, when the page level attribute Reload on Submit is set to Always

And here is the same error, rendered on the client side due to the page level attribute Reload on Submit being set to Only for Success (default for new pages).

Screenshot showing Inline item display for Theme 21 in APEX 5.1 when the page level attribute Reload on Submit is set to Only for Success

The Known Issues page for 5.1 mentions this issue, including a few suggestions for working around it.

Screenshot showing excerpt from known issues page detailing this issue. To read the full issue please follow the known issues page for 5.1 link, and search for 'client-side messaging issues in apps using old, or custom themes'

Now let's explore each of the suggestions in a bit more detail.

Method #1 - Disallowing client side validation

The first solution is described in the Known Issues as follows:
These differences can be avoided by setting Reload on Submit to Always (please note the Interactive Grid feature requires this to be set to Only for Success), and not using client side validation by either setting app Compatibility Mode to less than 5.1, or setting Execute Validations = No.

Setting Reload on Submit to Always, or disallowing client side validation as described, will of course resolve the inconsistencies, but these are somewhat brute-force fixes, with the negative side-effects that you cannot benefit from the new approach, you are precluded from upgrading your compatibility mode, and, by setting Reload on Submit to Always, you cannot use the Interactive Grid on that page.

This is good for a quick-fix, but does place limitations upon your app, which are far from ideal.

Method #2 - Updating your theme
A better way to resolve this, and one that doesn't limit you in the same way as detailed in Method #1, is to update your themes so that they work better with the new client side messaging. Specifically this involves an update to your theme's label templates. 

The reason that the client side messaging functionality is not able to use the markup from the label template, is because it relies on the following:
  1. The error markup being provided in the Error Template attribute of the label template.
  2. The #ERROR_TEMPLATE# substitution string being included in either the Before Item or After Item attribute of the label template.
Now we know what's required, let's look at an example using the standard Theme 21, and the Optional Label with Help label template. In this template, you will see the Error Display information is provided as follows:

Screenshot showing Theme 21 > Optional Label with Help label template before any changes. Shows an empty Error Template and error markup defined in the On Error Before Label and On Error After Label attributesTheme 21 > Optional Label with Help label template before any changes. Shows an empty Error Template and error markup defined in the On Error Before Label and On Error After Label attributes

Given the aforementioned rules, this clearly won't work: there is no Error Template defined, so the client side will need to resort to its fallback template, causing the possible inconsistencies.

So firstly, we need to move the error related markup into the Error Template attribute. However, this presents a slight problem. The way it is currently, the On Error Before Label markup is rendered exactly as described, before the label markup when an error occurs, and the On Error After Label markup is similarly rendered after the label markup. This means it effectively wraps the label markup in the template, which is very difficult to replicate with the Error Template approach (where the markup is defined in its entirety in a single place, and inserted into the template in a single place). Unfortunately this means we will need to compromise and go with an approach that will look slightly different to how it did prior to 5.1. Importantly though, it will now be consistent between client side and server-side rendered messages, without the downsides of having to resort to Method #1.

So we move the combined error markup into the Error Template, as shown:

Screenshot showing Theme 21 > Optional Label with Help label template after moving the error markup into the Error Template attributeTheme 21 > Optional Label with Help label template after moving the error markup into the Error Template attribute

Note: We removed the line break because we no longer need to force a new line after the label as was the case before, and because the DIV will automatically start on a new line because it is a block-level element, so the error will appear on a new line after its preceding element.
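For illustration only (the class name here is invented, not taken from Theme 21), an Error Template following these rules might look something like:

```html
<div class="fieldError">#ERROR_MESSAGE#</div>
```

At render time, the #ERROR_MESSAGE# substitution is replaced with the actual validation message, and the whole template is injected wherever #ERROR_TEMPLATE# appears in the Before Item or After Item attribute.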

The next part is to add the #ERROR_TEMPLATE# substitution string to the best place in either of the two attributes where it is supported in a label template, either the Before Item or After Item attributes. Let's go with the more common approach of displaying the error after the item, so simply add #ERROR_TEMPLATE# in the After Item attribute, as shown:

Screenshot showing #ERROR_TEMPLATE# substitution string added to the After Item attribute of the label template

You will need to replicate this in the different label templates in your app, and of course different themes will have slightly different implementations for label templates, but if you follow the rules laid out above, it should be a fairly straightforward switch-over.

Screenshot showing Updated appearance following template change with the error displayed below the input, now consistent irrespective messaging typeUpdated appearance following template change, now consistent irrespective messaging type

Method #3 - Migrate to Universal Theme
This will almost certainly result in the most work, but will provide great benefit in the long run, and all these problems will automatically go away.

Screenshot showing Universal Theme Sample Application, available in your workspace via Packaged Apps > SampleUniversal Theme Sample Application, available in your workspace via Packaged Apps > Sample

This will fix these inconsistency issues, and would be the best thing you can do to also benefit from the huge improvements offered by Universal Theme over our legacy themes, but it will most likely take the most time and should be considered as part of a migration project (rather than a simple change), so may of course not be feasible. See our Migration Guide, specifically the section entitled Migrating from Other Themes, for further information.

The new client side messaging in APEX 5.1 offers some great benefits over the server-side approach, but does present some issues with inline errors in apps not using Universal Theme, as detailed here. If you have the time, converting to Universal Theme would definitely be the way to go; there are just such massive improvements to benefit from in terms of capability, accessibility, responsiveness, modern look and feel, and more, but this will of course take the most time and investment. If this is not feasible, we would recommend updating your existing themes as detailed here, to provide a consistent look and feel while still benefiting from other new features in 5.1.
Categories: Development

Restricted sqlplus and sql_script

Laurent Schneider - Thu, 2016-12-22 08:49

Yesterday I wrote about executing SQL scripts from PL/SQL. Connor then posted a 12c new feature that I had overlooked:

@laurentsch Or perhaps just use scheduler to run them, job_type = SQL_SCRIPT

— Connor McDonald (@connor_mc_d) December 21, 2016

If you are still considering the 11g approach with sqlplus, you should read about restricted sqlplus.

If you run

sqlplus -L -s -R 3 scott/tiger@db01

lots of OS commands are disabled:

SQL> get /etc/passwd
SP2-0738: Restricted command "get" not available
SQL> spool /etc/oratab
SP2-0738: Restricted command "spool" not available
SQL> host reboot
SP2-0738: Restricted command "host" not available

There is also the Product User Profile. I never saw any customer using this, and I doubt it is very safe.

You could disable some commands

SQL> insert into system.PRODUCT_USER_PROFILE
  2    (PRODUCT, USERID, ATTRIBUTE, CHAR_VALUE)
  3    values ('SQL*Plus', 'SCOTT', 'ROLLBACK', 'DISABLED');

which prevents SCOTT from doing a rollback:

SQL> rollback;
SP2-0544: Command "rollback" disabled in Product User Profile

but it doesn’t resist Rock & Roll

SQL> roll & rock
Rollback complete.

If you have 12c, go have a look at the doc for SQL_SCRIPT jobs

