Feed aggregator

12cR1 RAC Posts -- 1 : Grid Infrastructure Install completed (first cycle)

Hemant K Chitale - Sat, 2016-12-24 09:17
Just as I had posted 11gR2 RAC Posts in 2014  (listed here), I plan to post some 12cR1 RAC (GI, ASM) posts over the next few weeks.

Here's my Grid Infrastructure up and running.  (Yes, I used racattack for this first 12cR1 setup.)

[root@collabn1 ~]# crsctl status resource -t
--------------------------------------------------------------------------------
Name                 Target  State    Server      State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.SHARED.advm
                     ONLINE  ONLINE   collabn1    Volume device /dev/asm/shared-141 is online,STABLE
                     ONLINE  ONLINE   collabn2    Volume device /dev/asm/shared-141 is online,STABLE
ora.DATA.dg
                     ONLINE  ONLINE   collabn1    STABLE
                     ONLINE  ONLINE   collabn2    STABLE
ora.FRA.dg
                     ONLINE  ONLINE   collabn1    STABLE
                     ONLINE  ONLINE   collabn2    STABLE
ora.LISTENER.lsnr
                     ONLINE  ONLINE   collabn1    STABLE
                     ONLINE  ONLINE   collabn2    STABLE
ora.asm
                     ONLINE  ONLINE   collabn1    Started,STABLE
                     ONLINE  ONLINE   collabn2    Started,STABLE
ora.data.shared.acfs
                     ONLINE  ONLINE   collabn1    mounted on /shared,STABLE
                     ONLINE  ONLINE   collabn2    mounted on /shared,STABLE
ora.net1.network
                     ONLINE  ONLINE   collabn1    STABLE
                     ONLINE  ONLINE   collabn2    STABLE
ora.ons
                     ONLINE  ONLINE   collabn1    STABLE
                     ONLINE  ONLINE   collabn2    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1              ONLINE  ONLINE   collabn2    STABLE
ora.LISTENER_SCAN2.lsnr
      1              ONLINE  ONLINE   collabn1    STABLE
ora.LISTENER_SCAN3.lsnr
      1              ONLINE  ONLINE   collabn1    STABLE
ora.MGMTLSNR
      1              ONLINE  ONLINE   collabn1    169.254.3.70 172.16.100.51,STABLE
ora.collabn1.vip
      1              ONLINE  ONLINE   collabn1    STABLE
ora.collabn2.vip
      1              ONLINE  ONLINE   collabn2    STABLE
ora.cvu
      1              ONLINE  ONLINE   collabn1    STABLE
ora.mgmtdb
      1              ONLINE  ONLINE   collabn1    Open,STABLE
ora.oc4j
      1              ONLINE  ONLINE   collabn1    STABLE
ora.scan1.vip
      1              ONLINE  ONLINE   collabn2    STABLE
ora.scan2.vip
      1              ONLINE  ONLINE   collabn1    STABLE
ora.scan3.vip
      1              ONLINE  ONLINE   collabn1    STABLE
--------------------------------------------------------------------------------
[root@collabn1 ~]#
[root@collabn1 ~]# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 96fbcb40bfeb4ff7bf18881adcfef149 (/dev/asm-disk1) [DATA]
Located 1 voting disk(s).
[root@collabn1 ~]#
[root@collabn1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1632
         Available space (kbytes) :     407936
         ID                       :  827167720
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@collabn1 ~]#
[root@collabn1 ~]# nslookup collabn-cluster-scan
Server: 192.168.78.51
Address: 192.168.78.51#53

Name: collabn-cluster-scan.racattack
Address: 192.168.78.252
Name: collabn-cluster-scan.racattack
Address: 192.168.78.253
Name: collabn-cluster-scan.racattack
Address: 192.168.78.251

[root@collabn1 ~]#
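
A few extra checks, not part of the session above, that I would typically run from the Grid Infrastructure home to confirm the cluster and SCAN configuration (commands only, output omitted):

crsctl check cluster -all        # CSS / CRS / EVM status on every node
olsnodes -n -s                   # node names, numbers and status
srvctl config scan               # SCAN name and its VIP addresses
srvctl status scan_listener      # which node each SCAN listener is running on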


I hope to run a few cycles of setups, switching to different node names, IPs, DiskGroup names, etc., over the next few weeks.
.
.
.

Categories: DBA Blogs

HAPPY HOLIDAYS @OracleIMC



We share our skills to maximize your revenue!
Categories: DBA Blogs

Target=_Blank for Cards report template

Jeff Kemp - Sat, 2016-12-24 02:20

I wanted to use the “Cards” report template for a small report which lists file attachments. When the user clicks on one of the cards, the file should download and open in a new tab/window. Unfortunately, the Cards report template does not include a placeholder for extra attributes for the anchor tag, so it won’t let me add “target=_blank” like I would normally.

One solution is to edit the Cards template to add the extra placeholder; however, this means breaking the subscription from the universal theme.

As a workaround for this I’ve added a small bit of javascript to add the attribute after page load, and whenever the region is refreshed.

  • Set report static ID, e.g. “mycardsreport”
  • Add Dynamic Action:
    • Event = After Refresh
    • Selection Type = Region
    • Region = (the region)
  • Add True Action: Execute JavaScript Code
    • Code = $("#mycardsreport a.t-Card-wrap").attr("target","_blank"); (replace the report static ID in the selector)
    • Fire On Page Load = Yes

Note: this code affects all cards in the chosen report.


Filed under: APEX Tagged: APEX, javascript, jQuery, tips-&-tricks

First touch with Oracle Container Cloud Service

Marcelo Ochoa - Fri, 2016-12-23 17:52
Some weeks ago Oracle released Oracle Container Cloud Service, Oracle's support for Docker containers.
It basically provides several Linux machines to run Docker containers; one of them is designated as the controller of the cluster services. Here is a simple screenshot of the service view:


By clicking on the Container Console link you will access the manager instance.

The admin password for this console was defined during service creation. Here is the look & feel of the container console:
Basically the steps for using your containers are:

  • Define a service
  • Deploy a service
  • Monitor your running containers

Defining a service is basically a web representation of the docker run command; the supported version is Docker 1.10. Several services are defined as examples, including an Apache/PHP web server, Jenkins, and MariaDB, among others.
I tested this functionality by adding a service for running Oracle 12cR1. The image for running Oracle 12c is not available in a public repository, so you have to define a private registry for Docker images; remember that you must not push Oracle binaries to a public repository because that would violate the license terms.
So if you follow this guide you can build your own registry server, but this is not enough because the registry server should be served over HTTPS for security reasons. Following this guide you can put an NGINX reverse proxy with an SSL certificate signed by Let's Encrypt in front of it, but to get a free SSL certificate the registry server must be registered in a public DNS server.
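In outline, and only as a sketch (the registry container name and the local port are placeholders), the private registry side ends up looking something like this:

# run the registry itself, bound to the local interface only
$ docker run -d --name registry --restart=always -p 127.0.0.1:5000:5000 registry:2
# NGINX then listens on https://server.domain.com:443, terminates SSL with the
# Let's Encrypt certificate, adds HTTP basic authentication and proxies /v2/
# requests to http://127.0.0.1:5000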
Once you have a registry server up, running and accessible over the Internet via HTTPS, it can be added in the Registries->New Registry section. The mandatory entries are:
Email: user@domain
URL: server.domain.com
UserName: user_name
Password: your_password
Description: a description text
Specifying port 443 for the SSL traffic is not required; the URL will be translated to, for example, https://server.domain.com:443/v2/. The registry server will ask for an HTTP authenticated user, and the Cloud Service will provide the UserName and Password values.
Once you have a registry server up, running and registered in the Cloud Service, you can define your Docker test. The Service Builder page looks like this:
The Builder pane is a graphical representation of the docker run command; the image name (in black) must include the 443 port, for example server.domain.com:443/oracle/database:12.1.0.2-ee.
The process to build the above image on your local machine is:
$ cd dockerfiles
$ ./buildDockerImage.sh -v 12.1.0.2 -e
$ docker login -u user_name -p user_pwd server.domain.com:443
$ docker tag oracle/database:12.1.0.2-ee server.domain.com:443/oracle/database:12.1.0.2-ee
$ docker push server.domain.com:443/oracle/database:12.1.0.2-ee
Then, when you deploy your service, the Cloud Service will pull the above image from your private registry.
The idea of building an Oracle 12cR1 EE image and testing it using the Oracle Container Cloud Service is to compare the performance against the DBaaS and IaaS tests. The result is:
OCCS
Max IOPS = 1387654
Max MBPS = 21313
Latency  = 0
Not so bad. Under the hood this Oracle RDBMS is running on Oracle Linux Server release 6.6.89 / Docker 1.10, 4 x Intel(R) Xeon(R) CPU, 16 GB RAM; the file system seems to be XFS.

Drawbacks:
- By not providing a private registry where you can build/pull/push your custom images, the usage of the service could be complicated for most inexperienced Docker users.
- I can't find a way to define a shared filesystem for a given volume. For example, the above deployment puts the Oracle datafiles into an internal container volume; if you stop your deployment all the data is lost, and the only possibility is to pause/unpause the service if you do not want to lose your data. Note: in the Service Editor (Tips & Tricks button) there is an example defining an external volume as /NFS/{{.EnvironmentID}}/{{.DeploymentID}}/{{.ServiceID}}/{{.ServiceSlot}}:/mnt/data, but it didn't work for me (see the sketch after this list for what an external volume looks like on a plain Docker host).
- You can't modify host OS parameters, so for example if you want to deploy an ElasticSearch cluster it is necessary to set vm.max_map_count=262144 in the /etc/sysctl.conf file; it is a limited environment even for a simple test case.
- The Docker version is 1.10.3, which means that if you want to deploy Oracle XE it doesn't work, because --shm-size=1g is not supported.
- Sometimes the Container Cloud Console kills my Chrome browser (both the Linux and Windows versions); here is the screenshot, it seems to be a JavaScript problem:
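For comparison, this is roughly what an external volume looks like on a plain Docker host outside OCCS. It is only a sketch: the host path and the oradata location inside the container are assumptions based on the image built above.

# sketch only -- host path and in-container oradata path are assumptions
$ mkdir -p /u02/oradata
$ docker run -d --name ora12c -p 1521:1521 \
    -v /u02/oradata:/opt/oracle/oradata \
    server.domain.com:443/oracle/database:12.1.0.2-ee
# stopping or removing the container no longer loses the datafiles,
# they remain on the host under /u02/oradata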
Final thoughts
The Container Cloud Service console is a good abstraction (graphical interface) over typical Docker command line services, for example:
Services -> docker run command
Stacks -> docker-compose command, docker-compose.yml (graphical interface)
Hosts -> Bare metal/VM servers
In my personal opinion, if I have to deploy a complex Docker installation I'll deploy a set of Oracle Compute Cloud Service instances running Oracle Linux/Ubuntu with the latest Docker release and the native Docker Swarm service. Why?

  • It runs faster, see my previous post.
  • I have control over the host parameters (sysctl among others, see the host setup section)
  • I can define shared filesystems on ext4 or NFS partitions
  • I can build my own images by using Dockerfile commands
  • I can deploy/manage the infrastructure (nodes/network) using docker swarm command.







Primary Storage, Snapshots, Databases, Backup, and Archival.

Kubilay Çilkara - Fri, 2016-12-23 13:34
Data in the enterprise comes in many forms: simple flat files, transactional databases, scratch files, complex binary blobs, encrypted files, whole block devices, and filesystem metadata. Simple flat files, such as documents, images, application and operating system files, are by far the easiest to manage. These files can simply be scanned for access time to be sorted and managed for backup and archival. Some systems can even transparently symlink these files to other locations for archival purposes. In general, basic files in this category are opened and closed in rapid succession, and actually rarely change. This makes them ideal for backup as they can be copied as they are, and in the distant past, they were all that there was and that was enough.

Then came multitasking. With the introduction of multiple programs running in a virtual memory space, it became possible that files could be opened by two different applications at once. It became also possible that these locked files could be opened and changed in memory without being synchronized back to disk. So elaborate systems were developed to handle file locks, and buffers that flush their changes back to those files on a periodic or triggered basis. Databases in this space were always open, and could not be backed up as they were. Every transaction was logged to a separate set of files,  which could be played back to restore the database to functionality. This is still in use today, as reading the entire database may not be possible, or performant in a production system. This is called a transaction log. Mail servers, database management systems, and networked applications all had to develop software programming interfaces to backup to a single string of files. Essentially this format is called Tape Archive (tar.)

Eventually, and quite recently actually, these systems became so large and complex as to require another layer of interface with the whole filesystem; there were certain applications and operating system files that simply were never closed for copy. The concept of Copy on Write was born. The entire filesystem was essentially always closed, and any writes were written as an incremental or completely new file, and the old one was marked for deletion. Filesystems in this modern era progressively implemented more pure copy-on-write, transaction-based journaling so files could be assured intact on system failure, and could be read for archival, or multiple application access. Keep in mind this is a one paragraph summation of 25 years of filesystem technology, and not specifically applicable to any single filesystem.

Along with journaling, which allowed a system to retain filesystem integrity, there came an idea that the files could intelligently retain the old copies of these files, and the state of the filesystem itself, as something called a snapshot. All of this stems from the microcosm of databases applied to general filesystems. Again databases still need to be backed up and accessed through controlled methods, but slowly the features of databases find their way into operating systems and filesystems. Modern filesystems use shadow copies and snapshotting to allow rollback of file changes, complete system restore, and undeletion of files as long as the free space hasn’t been reallocated.

Which brings us to my next point which is the difference between a backup or archive, and a snapshot. A snapshot is a picture of what a disk used to be. This picture is kept on the same disk, and in the event of a physical media failure or overuse of the disk itself, is in totality useless. There needs to be sufficient free space on the disk to hold the old snapshots, and if the disk fails, all is still lost.  As media redundancy is easily managed to virtually preclude failure, space considerations especially in aged or unmanaged filesystems, can easily get out of hand. The effect of a filesystem growing near to capacity is essentially a limitation of usable features. As time moves on, simple file rollback features will lose all effectiveness, and users will have to go to the backup to find replacements.

There are products and systems to automatically compress and move files that are unlikely to be accessed in the near future. These systems usually create a separate filesystem and replace your files with links to that system. This has the net effect of reducing the primary storage footprint, the backup load, and allowing your filesystem to grow effectively forever. In general, this is not such a good thing as it sounds, as the archive storage may still fill up, and you then have an effective filesystem that is larger than the maximum theoretical size, which will have to be forcibly pruned to ever restore properly. Also, your backup system, if the archive system is not integrated, probably will be unaware of the archive system. This would mean that the archived data would be lost in the event of a disaster or catastrophe.

Which brings about another point: whatever your backup vendor supports, you are effectively bound to use those products for the life of the backup system. This may be ten or more years and may impact business flexibility. Enterprise business systems backup products can easily cost dozens of thousands per year, and however flexible your systems need to be, so must your backup vendor provide.


Long term planning and backup systems go hand in hand. Ideally, you should be shooting for a 7 or 12-year lifespan for these systems. They should be able to scale in features and load for the predicted curve of growth with a very wide margin for error. Conservatively, you should plan on a 25% data growth rate per year minimum. Generally speaking, 50 to 100% is far more likely. Highly integrated backup systems truly are a requirement of Information Services, and while costly, failure to effectively plan for disaster or catastrophe will lead to an end of business continuity, and likely the continuity of your employment.

Jason Zhang is the product marketing person for Rocket Software's Backup, Storage, and Cloud solutions.
Categories: DBA Blogs

IT-Tage 2016 Informatik aktuell: feedback

Yann Neuhaus - Fri, 2016-12-23 09:07

Today, to finish the year, I post a brief personal impression of the IT-Tage 2016 in Frankfurt at the Hotel Maritim, where I was also a speaker.


I presented 2 sessions on SQL Server: “SQL Server Errorlog Entmystifizierung” & “SQL Server 2016: Neue Sicherheitsfunktionen”.
I wasn’t the only one from dbi services who spoke at that conference:

  • David Barbarin with also 2 sessions: “SQL Server – Locks, latches and spinlocks” & “SQL Server 2016 Availability Group Enhancements”
  • Clemens Bleile with 1 session: “SQL Plan Directives: Neuigkeiten in 12.2. Produktions-Ausführungspläne in Testumgebungen reproduzieren”
  • Philippe Schweitzer with 1 session: “Feasibility study for building a software factory based on GIT repository”
  • Daniel Westermann with 1 session: “Breaking the deadlock: Migrating from proprietary databases to PostgreSQL”

You can already download all presentations on this link.

After my presentation day, I had the opportunity to go to a very interesting session by Oliver Hock “Ein Prozess lernt laufen: LEGO-Mindstorms-Steuerung mit BPMN”. With a Lego Mindstorm kit, he showed how to solve a magic cube.


This session is also on YouTube; have a look at the demo at the end (the last 60 seconds). It was very nice! ;-)

I would like to thank the entire team of Informatik Aktuell, who put together a smooth and interesting event.

I hope that I can go again next year, present new sessions and follow other interesting sessions…

In the evening, you could also enjoy the Christmas Market, which is two metro stops from the hotel.

I wish you a merry Christmas and, as we say in Alsace: “A guetta rutsch ins neja Johr!” (a good slide into the new year!)

 

Cet article IT-Tage 2016 Informatik aktuell: feedback est apparu en premier sur Blog dbi services.

Oracle MEDIAN Function with Examples

Complete IT Professional - Fri, 2016-12-23 05:00
In this article, I’ll explain what the Oracle MEDIAN function is, and show you some examples. Purpose of the Oracle MEDIAN Function The MEDIAN function returns the median of the set of provided values. The MEDIAN is the middle value in a set of values. So, for example, you had these five values: 10, 15, […]
Categories: Development

Migrating your existing availability group infrastructure to 2016

Yann Neuhaus - Fri, 2016-12-23 04:57

Do you already have a plan to migrate your old SQL Server 2012 availability group infrastructure to Windows Server 2016 and SQL Server 2016? In a previous post, I talked about distributed availability groups and cross-cluster migration scenarios, but this time the game is not the same because we want to achieve an in-place upgrade of the existing AG infrastructure. This question will probably be a concern next year, and if we take a closer look at the improvements shipped with the new versions of Windows and SQL Server, we will certainly not be disappointed.

Indeed, cluster rolling upgrade is a new feature of Windows Server 2016 which allows us to migrate the WSFC side of the existing database infrastructure smoothly (and almost transparently). On the other side, upgrading highly available replicas from SQL Server 2012 to SQL Server 2016 is also possible without completely reinstalling the availability group infrastructure. Thus, we may benefit from a temporary, mixed infrastructure on both sides to reduce the outage timeframe of our applications. I can think of some customers where this could be helpful regarding their business and their corresponding SLAs.

So let’s just demonstrate this kind of scenario with a classic customer availability group infrastructure. Most availability groups implemented in my area consist of two replicas, meaning a WSFC with 2 cluster nodes at the lowest level of the global architecture, as shown below:

blog 111 - 00 - initial infra WSFC2012

So the question is: how do we achieve the migration of the above infrastructure from the 2012 version to the 2016 version (both Windows and SQL Server) with low downtime? Well, one solution would consist of preparing and adding two temporary extra nodes which, in a first step, would run Windows Server 2016 and SQL Server 2012. Unfortunately, we cannot directly mix two different versions of SQL Server in an existing availability group yet; we have to execute an extra step to upgrade, one by one, each replica we want to run on SQL Server 2016. Having two extra nodes allows us to prepare our migration smoothly without impacting the existing highly available infrastructure.

So let’s begin by adding the 2 nodes running Windows Server 2016. Basically, we may use either the GUI or PowerShell cmdlets for that.

The initial scenario (2012 version) is as follows:

blog 111 - 0 - initial config cluster

Let’s add the two extra cluster nodes which run on Windows Server 2016. According to Microsoft technet procedure, the key point is to perform this action from a Windows Server 2016 node exclusively.

The PowerShell cmdlet used is the same as in the previous version, and I executed it for the two additional nodes (WIN20168SQL16 and WIN20169SQL16), which both run on Windows Server 2016. Just remember to exclude the two extra nodes from the quorum vote to avoid impacting the existing configuration.

blog 111 - 1 - Add Cluster Node 2016

However, we may notice the new cluster functional level property, as shown below. The value is equal to 8, meaning that the cluster has switched to a temporary / mixed mode because at the moment we have cluster nodes on both the 2012 and 2016 versions.

blog 111 - 2 - Cluster Functional Level 2016

The transitioned infrastructure now includes 4 nodes. The first 2 nodes run on Windows Server 2012 whereas the last 2 run on Windows Server 2016. An availability group runs on top of the first 2 nodes, and two additional replicas are ready to be enrolled in the existing infrastructure.

blog 111 - 21 - transitioned WSFC

So now let’s move on to the SQL Server side and add the 2 additional replicas.

The initial AG scenario is as follows:

blog 111 - 3 - Initial AG 2012

After adding the two replicas in asynchronous mode, we get the following picture:

blog 111 - 4 - Add replica 2016 to AG 2012

At this step, we now have to upgrade the newly added replicas to SQL Server 2016. One important thing to keep in mind here is that we absolutely have to prevent a failover to an upgraded replica before all the new secondary replicas have been upgraded. Indeed, according to the Microsoft documentation, an upgraded primary replica can no longer ship logs to any secondary replica whose SQL Server instance has not yet been upgraded to the same version.

So in my case, I first upgraded the WIN20169SQL16\SQL12 replica and then WIN20168SQL16\SQL12, as shown below:

blog 111 - 5- AG config after upgrade node

The new transitioned infrastructure is shown in the picture below:

blog 111 -51- Transitioned infrastructure

 

The next part of the migration includes a short downtime. The previous steps did not imply any outage so far.

Basically the next part of the procedure will include the following steps:

  • Choose the next replica in 2016 version that will be involved as primary and change its replication mode to synchronous in order to prevent losing data
  • Failover the availability group to this replica (at this step old replicas in 2012 version will not be synchronized because the new primary replica may no longer ship logs to them as said previously)

blog 111 - 5- AG health state after upgrade node

  • Change the replication mode of the second replica in 2016 to synchronous to meet the initial configuration
  • Remove old replicas in 2012 from the availability group

blog 111 -52- Transitioned infrastructure

The T-SQL script was as follows in my case:

:CONNECT WIN20121SQL16\SQL12

-- Change temporary replication to synchronous
-- for next SQL Server 2016 primary replica 
USE [master]
GO
ALTER AVAILABILITY GROUP [dummygrp]
MODIFY REPLICA ON N'WIN20168SQL16\SQL12' 
WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT)
GO

:CONNECT WIN20168SQL16\SQL12

-- Initiate failover to next SQL Server 2016 primary replica  
USE [master]
GO

ALTER AVAILABILITY GROUP [dummygrp]
FAILOVER;
GO


:CONNECT WIN20168SQL16\SQL12

-- Change replication of the second SQL Server 2016 replica
-- back to synchronous commit with automatic failover
USE [master]
GO

ALTER AVAILABILITY GROUP [dummygrp]
MODIFY REPLICA ON N'WIN20169SQL16\SQL12' 
WITH ( AVAILABILITY_MODE = SYNCHRONOUS_COMMIT );

ALTER AVAILABILITY GROUP [dummygrp]
MODIFY REPLICA ON N'WIN20169SQL16\SQL12' 
WITH ( FAILOVER_MODE = AUTOMATIC );


:CONNECT WIN20168SQL16\SQL12

-- Remove old replicas (2012)
USE [master]
GO

ALTER AVAILABILITY GROUP [dummygrp]
REMOVE REPLICA ON N'WIN20121SQL16\SQL12';
GO

ALTER AVAILABILITY GROUP [dummygrp]
REMOVE REPLICA ON N'WIN20122SQL16\SQL12';
GO

 

Let’s go back to the cluster side; the next step consists of removing the old cluster nodes from the WSFC by using the usual commands such as Remove-ClusterNode.

blog 111 - 6- Remove old cluster nodes

… And we may finally update the cluster functional level to 9 (the 2016 version). Just be aware that upgrading the cluster functional level to 2016 makes the process irreversible, so reverting back to the initial configuration will simply not be possible, or at least it will require extra steps and longer downtime as well.

blog 111 - 7 - Update cluster functionalLevel 2016

And let’s get the final view of our upgraded availability group dashboard:

blog 111 - 8 - AG 2016 dashboard

The listener stays the same, and the change is almost transparent to applications.

 

Bottom line

In this blog post we had a glimpse of the new capabilities of both Windows Server 2016 and SQL Server 2016 in terms of rolling upgrades. Of course, reality will probably be a little more complicated when introducing other parameters such as the customer context, the number of availability groups, the performance impact of adding temporary replicas, external dependencies and so on. But these features seem promising and may be very helpful for future migration scenarios. I’m looking forward to experimenting with them at customer sites!

Happy upgrade!

 

 

Cet article Migrating your existing availability group infrastructure to 2016 est apparu en premier sur Blog dbi services.

Links for 2016-12-22 [del.icio.us]

Categories: DBA Blogs

Can I do it with PostgreSQL? – 8 – Transportable tablespaces

Yann Neuhaus - Fri, 2016-12-23 01:47

My colleague Franck posted a comment to one of my last “Can I do it with PostgreSQL” blog posts. The comment he posted is: “Here is an idea for a future “Can I do it with PostgreSQL?”. My favorite Oracle feature: transportable tablespaces.” When you’ve read my post about how PostgreSQL implements tablespaces then you probably already know the answer: No, you can not do this in PostgreSQL. Having thought about this some time I decided to check the Oracle documentation for what transportable tablespaces are good for. I know the basics and how you can do it in Oracle, but better check twice :)

According to the documentation the main goal of transportable tablespaces is to transport data. This means you can transport one or more tablespaces (which then become a transportable tablespace set) from one host to another and then “plug” that set into an existing database (as long as the tablespaces are self-contained). One great thing you can do with it, depending on the use case, is that the target database does not need to have the same standard block size as the source database. This means you can transport a tablespace with a 16k block size to a database with a default block size of 8k. This is another thing you cannot do in PostgreSQL: in PostgreSQL the block size is configured at configure/compile time. Once you have the compiled binaries you cannot change that afterwards.

Probably the greatest benefit of transportable tablespaces is that it saves you time in moving your data around. You just copy the data files that make up your tablespace(s) and then use expdp/impdp for the meta data, that’s it. When you go for the multi-tenant architecture you can use transportable tablespaces to make a non-CDB a pluggable database on the target, too. For a more detailed introduction you can check the documentation (linked above).

Back to PostgreSQL: What options do you have for transporting your data from one database to another?

The tool of choice for most cases probably is pg_dump. The big advantage of pg_dump is that you can use it over the network and directly write everything you want to export into the target database using a pipe:

pg_dump -C -h [SOURCE_HOST] -U [USER] [DATABASE] | psql -h [TARGETHOST] -U [TARGETUSER] [TARGETDATABASE]

You can even do that using parallel processes when you combine pg_dump with pg_restore although using a pipe is not supported in that case. To demonstrate this we’ll need some sample data (I am using PostgreSQL 9.5.4 as the source):

postgres@pgbox:/home/postgres/ [PG954] psql
psql (9.5.4 dbi services build)
Type "help" for help.

(postgres@[local]:5438) [postgres] > create database test;
CREATE DATABASE
(postgres@[local]:5438) [postgres] > \c test
You are now connected to database "test" as user "postgres".
(postgres@[local]:5438) [test] > create table t1 ( a int, b int );
CREATE TABLE
(postgres@[local]:5438) [test] > create table t2 ( a int, b int );
CREATE TABLE
(postgres@[local]:5438) [test] > insert into t1 values (generate_series(1,1000000), generate_series(1,1000000));
INSERT 0 1000000
(postgres@[local]:5438) [test] > insert into t2 select * from t1;
INSERT 0 1000000

Lets say I want to copy the data to a PostgreSQL 9.6.1 instance on the same host. How can I do that? Quite easy:

postgres@pgbox:/home/postgres/ [PG961] pg_dump -V
pg_dump (PostgreSQL) 9.6.1 dbi services build
postgres@pgbox:/home/postgres/ [PG961] mkdir /var/tmp/exp
postgres@pgbox:/home/postgres/ [PG961] pg_dump -h localhost -p 5438 -C -c -F d -j 2 -f /var/tmp/exp/ test
postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p 5439 -d postgres -F d -C /var/tmp/exp/
postgres@pgbox:/home/postgres/ [PG961] psql -c "\l" postgres
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 test      | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
(4 rows)

postgres@pgbox:/home/postgres/ [PG961] psql -c "\d" test
        List of relations
 Schema | Name | Type  |  Owner   
--------+------+-------+----------
 public | t1   | table | postgres
 public | t2   | table | postgres
(2 rows)

Not a big deal and possible over the network, too.

Another option would be to use pg_basebackup to create a completely new instance from the source (source and target need to be on the same PostgreSQL release in this case). Again, this is possible over the network.
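
A minimal sketch of that approach (host name, port and target directory are made up, and it assumes a user with the REPLICATION role plus a matching pg_hba.conf entry on the source):

# stream a full copy of the source instance into a new data directory
pg_basebackup -h sourcehost -p 5438 -U replicator -D /u02/pgdata/copy -X stream -P
# adjust the port and other settings in postgresql.conf, then start the new instance
pg_ctl -D /u02/pgdata/copy start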

You can even use rsync to copy the whole cluster to a new host and then do a second resync while the source is down for a short time. This will copy only the files that changed since the first rsync and will probably be very fast, but you will need a downtime of the source for the second rsync.
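
A rough sketch of that two-pass rsync approach (data directory and host name are assumptions):

# first pass while the source cluster is still running (the copy is inconsistent, that is fine)
rsync -a /u02/pgdata/PG954/ targethost:/u02/pgdata/PG954/
# stop the source, then a second pass only transfers the files that changed
pg_ctl -D /u02/pgdata/PG954 stop -m fast
rsync -a --delete /u02/pgdata/PG954/ targethost:/u02/pgdata/PG954/
# start the cluster on the target host and you are done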

There are other methods for moving your data around in PostgreSQL but the above are the most popular.

Hope this helps …

 

Cet article Can I do it with PostgreSQL? – 8 – Transportable tablespaces est apparu en premier sur Blog dbi services.

Updated Technical Best Practices

Anthony Shorten - Thu, 2016-12-22 16:20

The Technical Best Practices whitepaper available from My Oracle Support has been updated and published.

The new version has the following changes:

  • The format has been changed to make it more readable for both screen and printing use.
  • Old and outdated information has been removed.
  • Advice for versions of the products not on premier support has been removed. This streamlines the advice.
  • Duplicate advice already included in the online help or documentation provided with the product has been removed. It is recommended to use the online help or documentation provided with the product before using the advice in this whitepaper. This applies to all the whitepapers.
  • IBM WebSphere specific advice has been removed, as it is already covered by the product documentation or IBM documentation.
  • New and updated advice has been added to the document for new versions (including the latest releases).

Over the next month or two, additional whitepapers will be overhauled and re-released.

The whitepaper is available as Technical Best Practices (Doc Id: 560367.1) from My Oracle Support.

I wish you all a happy holidays and see you in 2017.

Python for Analytics - Exploring Data with Pandas

Rittman Mead Consulting - Thu, 2016-12-22 12:24
A Crack Team!

At Rittman Mead, we're always encouraged to branch out and pursue new skills in the field in an effort to improve upon our skill sets, and as a result, become more technically fluent. One of the great things about working here, aside from the previous, is that while we all have a really solid foundation in Oracle technologies, there are many who possess an incredibly diverse range of skills, fostered by years of working in tech-agnostic engagements. It's a great feeling to know that if you ever run up against some sort of bizarre challenge, or have to integrate some piece of arcane tech into an architectural solution, more than likely, someone, somewhere within Rittman Mead has had to do it. It is this freedom to pursue, within reason of course, other technical exploits that has nurtured a real spirit of innovation around the company within the past year. Our GitHub is overflowing with open source visualizations and performance monitoring and maintenance scripts that are simply there for the taking! We've put a lot of time into developing this stuff so our clients and partners don't have to.

Python

But I digress. This blog is about Python, and well, I haven't really mentioned it up until this point. It is in this spirit of innovation, learning, and frankly, automating the boring stuff, however, that a lot of us have been pursuing automation and analytical endeavors using the Python language. It has been touted by many as THE language for data science, and rightfully so, given its accessibility and healthy selection of libraries perfectly suited to the task, such as NumPy, Seaborn, Pandas, Matplotlib. In today's exercise, we're going to walk through common data munging, transformation, and visualization tasks using some of these libraries in order to deliver quick insights into a data set that's near and dear to my heart, Game of Thrones battles and character deaths!


Through this process, we will be creating our own data narrative that will help to expound upon the idle numbers and labels of the data set. We'll see that the process is less a hard and fast, rigid, set of rules for which to approach data exploration, and something more akin to solving a crime, clue by clue, letting the data tell the story.

PYTHON FOR DATA SCIENCE

Aside from its myriad, community driven and maintained libraries, the greatest thing, to me anyway, about Python is its relatively low barrier to entry. Even with little to no previous programming skills, an enterprising lad or lady can get up and running, performing basic, functional programming tasks in no time. You'll be amazed at how quickly you'll start coming up with daily tasks that you can throw a bit (or a lot) of Python at. Today, we'll be tackling some tasks like these, common to the everyday processes of data analysis and data science. Utilizing the Pandas library, in addition to a few others, we'll see how we can programmatically go from question to answer in no time, and with most any structured or unstructured data set. The primary audience of this blog will be those with a bit of Python fluency, in addition to those with an interest in data science and analytics. I will be explaining the steps and providing a Jupyter notebook (link here) for those who wish to follow along, or who might need the extra guidance. So don't bail now! Let's get to it. In this instance, we'll be downloading the Game of Thrones data set from kaggle, a great site that provides open data sets and accompanying analysis projects in multiple languages. Download the data set if you'd like to follow along.

GETTING STARTED

Let's begin by taking some steps to get our heads on straight and carve out a clear work flow. I find this is really helpful, especially in programming and/or analytical scenarios where one can begin to suffer from "analysis paralysis". So, at a high level, we'll be doing the following:

  1. First, we'll take a cursory look at the Python libraries we'll be incorporating into our data sleuthing exercise, how they're used, and some examples of their output and ideal use cases.

  2. Next we'll use the tools in these libraries to take a deeper dive into our data set and start to construct our initial line of questioning. This is where we get to be a bit creative in coming up with how we're going to wrap our heads around the data, and what kind of questions we're going to throw at it.

  3. We'll then chase down any leads, incorporating additional analyses where necessary, and begin to construct a narrative about our data set. At this point we'll be formulating hypotheses and attempting to construct visualizations that will help us to further or disprove our investigation.

PANDAS IN THE JUNGLE

Any great detective must always have with them a toolkit with which to thoroughly examine any crime scene, and that's essentially what we have in the Pandas, Seaborn, and Numpy ("num-pie") libraries for the Python programming language. They provide a set of methods (functions) that can take an input, or a number of inputs, do some magic, and then provide us with lots of really useful information. So let's begin by examining these libraries and what we can do with each.

Pandas and Numpy


Pandas is great at doing a bunch of really common tasks related to data exploration, not limited to, indexing and selection, merging and joining data sets, grouping and aggregations, and visualizing data. This will be the library with which we'll be doing a lot of the heavy lifting. Pandas also provides us with the Dataframe object that greatly expands on the comparatively more rigid Numpy's ndarray object. These 'objects' are simply containers that hold data of some kind, and allow us to interact on that data.

Matplotlib and Seaborn


Matplotlib is a robust visualization library built to enable interactive, MATLAB style plotting on most any platform or back-end. This library, along with Seaborn, should be your go-to for producing super malleable graphs and visualizations. Working alongside matplotlib, seaborn pitches itself as a go-to for statistical based visualizations, but also supports complex, grid and algorithm based charts as well. Both of these libraries will help us to make quick and insightful decisions about our data, and help us to gather evidence further supporting, or disproving and hypotheses we might form.

THE INVESTIGATION


Now that we've armed ourselves with the tools we need to thoroughly examine any potential data set we're given, let's do just that! Opening up the Game of Thrones csv files from above, let's first take a look at what kinds of information we have. The basic stats are as follows:

Synopsis

  • Battles - a complete listing of the battles in the book series and their stats! Attacker, defender, army size, you name it, we've got it.

  • Character Deaths - something the series/show is quite known for, who died? This contains some great info, such as allegiance and nobility status.

  • Character Predictions - The more morbid of the lot, this data set lists predictions on which character will die. We won't be using this sheet for our exercise.

A Hypothesis of Thrones

Having just finished the monumental series myself, you could say that at this point I'm somewhat of a subject matter expert; that at this point, we have a situation not unlike that which you might find in any organization. We've got an interested party that wants to look further into some aspect of their data. We can use our investigatory tool-set to get real results and gain some valuable and informative insights. As subject matter experts though, we should ideally be coming at our data with at least some semblance of a hypothesis, or something that we're trying to prove using our data (or disprove for that matter). For the sake of this exercise, and fitting in with the theme of the data, I'm going to try and dig up an answer to the following:

Does House Lannister, for as evil and scheming as they are, and as much as they get away with, eventually get what's coming to them?

No lannisters!

As much as I'd like to believe it's true, however, we're going to need to run the numbers, and let our data do the talking.

Importing the Data

You can follow along in the Jupyter notebook here now. Working with our Pandas library, we first need to get our data into some sort of workable object. As we stated before, this is the data frame. It is simply a table type object that is really good at handling empty values and data of many different types. We can easily perform operations on these objects and visualize them with minimal fuss. So, enough talk. Let's do it!

Working in your favorite IDE (Pycharm is easy to use and comes in a free version), we start a new project, import the libraries we need, and then drop in our first piece of code. This is the section that imports our csv data set and then converts it to data frame. So, now that we have our object, what do we do with it?

dataframe

A Graph Has No Name

Now that we have our data frame object, we can begin to throw some code at it, crunch some numbers, and see if, in fact, the Lannisters really did get what was coming to them by the end of book 5. Starting with the battles data set, let's see how they fared in the field through the arc of the story. Did they lose more or less troops comparatively? We can do this easily by breaking our data frame into smaller, more manageable chunks, and then graph these data points, accordingly. We are going to use the data set to build a step by step, set of analyses that examines the Lannister victories and defeats throughout the story.

Battle / Troop Loss Over Time

Did the Lannisters hit a losing streak, or did they do well throughout the story? Did they win or lose more of their battles over time?

  1. Start with new data frame based on house and troop sizes:
    troop sizes

  2. Filter to get new results (Lannisters only):

Lannister Troops

Right away we see we have some data issues, that is, there are some holes in the attacker size column. The good thing is that we can more or less look at this small table and get all the info we need from it right away. The numbers drop down significantly through the years, and that's all there really is to it. But was this, in fact, because they lost more troops, or simply threw less at the problem as they began to carve out their claim to the kingdom? This analysis is not very telling. We're going to need to do some digging elsewhere to answer our question. Let's do some comparisons.

% of Battles Won / Lost

So how did the Lannisters do in the field? Of the 8 battles they fought in, how many did they win? How does this compare with the other armies of Seven Kingdoms?

As we did before, lets get a new data frame together, and then do our grunt work on it to try and answer these questions. Grabbing the columns we need, let's run the numbers on how the Lannisters stack up against the other houses of Westeros in the field.

How many battles did they fight compared to the other Houses?

battles fought

How many did they actually win?

won-to-total

We can see right away that, out of all the battles they fought throughout the series (which is decidedly more than the other houses in the series), they came out on top. Could the Lannisters be the dominating force on the field, as well as at court? The Starks are the only house that meets them conflict for conflict, and the Lannisters still reign supreme! Let's take things down to a finer grain and see how those who aligned themselves with the Lion did compared to those who didn't.

Death by Allegiance

Opening up our character deaths file, right away we see we have some pretty good info here. We have a laundry list of characters, their death year, and the house, if any, to which they were aligned. Let's start by building a data frame, and first, filtering out those who are unaligned, in the Night's Watch, or a Wildling. We want to get a comparison between houses, and these groups will just muck up the works. Let's do the numbers. We can now plot this info on a basic bar chart to get a basic rundown of the massacre.

death by alignment

Things are starting to look up...depending on your point of view, I guess. The Lannisters, for all their dirty business, do seem to, in fact, lose the most named characters tied to their house. Of these, let's see how many were actually nobility, or rather, the most influential in furthering their cause!

noble deaths to total

It would seem our Lannisters aren't too good at keeping their hands clean, and letting those of lesser station do their dirty work for them. Although they have the second most aligned character deaths in the series, roughly 75% of them are Noble deaths, meaning that people important to their cause are dying! The only other houses that come close unfortunately, are the Starks (the Red Wedding, no doubt), and the Greyjoys. What this also means, however, is that our claim is gathering more support; the Lannisters may have climbed the royal ladder, but at what cost?

Paying Your Debts

death_prop

We can see from the donut chart above (excuse the repetition of colors) that indeed, the Lannisters have one of the highest % to total death numbers out of all the major houses in the Seven Kingdoms. This actually goes quite a long way in backing up our hypothesis; that of all the named characters in the series, the Lannisters lost the lion's share (pun intended). The disconcerting thing is that they either seem to bring down many others with them, or the other noble houses aren't terribly great at keeping themselves among the living either.

Conclusion

Are these figures, combined with their high number of ranking noble deaths, enough to satisfy my desire for vengeance? Did they truly reap what they have sown? I have to say I am ultimately undecided in the matter, as, although they did lose a great many, they in turn took a greater number down along with them. It seems that despite these losses, any notion of vengeful satisfaction must be tempered by this fact; that although the Lannisters did end up getting hit pretty hard with significant losses, this is bittersweet when compared to the real and lasting damage they did throughout the span of the books' and show's history. Were you able to come up with any additional evidence for or against my case? Link out and show us! Thanks for reading.

Categories: BI & Warehousing

APEX 5.1 - Client side messaging and apps not using Universal Theme

Anthony Rayner - Thu, 2016-12-22 09:42
Introduction

In APEX 5.1, we are introducing new client side messaging functionality, which aims to provide more modern, faster messaging across APEX, including the capability to display both error and success messages from the client side, without requiring a full page reload (as is the traditional way in APEX). Messaging in APEX is something we have wanted to modernise for a long time, and in 5.1, precipitated by the introduction of the Interactive Grid and its requirement for such messaging, this comes to fruition. 


This new functionality works well with apps already using the Universal Theme, however apps using older, or custom themes may face some consistency issues in how inline item errors are displayed between pages that display messages the old way and the new way. The purpose of this blog post is to highlight when such inconsistencies may occur, and to provide guidance on how you can go about fixing them.

Note: If you have created your own custom theme, please see Method #2 for instructions on how to update your theme to work with client side messaging.


When can these inconsistencies occur?
Client side messaging for inline item errors will be used in the following scenarios:
  1. To display the error messages if an APEX validation error occurs, and the page attribute Reload on Submit is set to Only for Success (which is the default for new pages, irrespective of theme).

The new Reload on Submit page level attribute in APEX 5.1

  2. To display the error messages if a client side validation error occurs, such as Value Required = Yes, the app Compatibility Mode is set to 5.1 (default for new apps, irrespective of theme), and a Submit Page button is used with Execute Validations = Yes.
Compatibility Mode, available in Application Attributes

So as soon as you start adding new pages to an existing app for example, you will start to hit these inconsistencies, because existing pages will have Reload on Submit set to Always, and new pages will have this set to Only for Success

In both of the above scenarios, an inline item error will be shown but it may not look exactly the same as on other existing pages. This is because as it is, the client side messaging logic isn't able to use the corresponding template markup from the theme because of some missing information (more on that in a moment), and instead relies on some generic fallback logic to display the error. We can see the inconsistency below. 

Here is an inline item error, rendered using server-side logic due to the page level attribute Reload on Submit being set to Always (default for existing apps).

Screenshot showing Inline item error display for Theme 21, when the page level attribute Reload on Submit is set to Always



And here is the same error, rendered on the client side due to the page level attribute Reload on Submit being set to Only for Success (default for new pages).

Screenshot showing Inline item display for Theme 21 in APEX 5.1 when the page level attribute Reload on Submit is set to Only for Success



The Known Issues page for 5.1 mentions this issue, including a few suggestions for working around it.

Screenshot showing excerpt from known issues page detailing this issue. To read the full issue please follow the known issues page for 5.1 link, and search for 'client-side messaging issues in apps using old, or custom themes'


Now let's explore each of the suggestions in a bit more detail.


Method #1 - Disallowing client side validation

The first solution is described in the Known Issues as follows:
These differences can be avoided by setting Reload on Submit to Always (please note the Interactive Grid feature requires this to be set to Only for Success), and not using client side validation by either setting app Compatibility Mode to less than 5.1, or setting Execute Validations = No.

Setting Reload on Submit to Always, or disallowing client side validation as described, will of course resolve the inconsistencies, but these are somewhat brute-force options, with the negative side-effects that you cannot benefit from the new way, you are precluded from upgrading your Compatibility Mode, and by setting Reload on Submit to Always you cannot use the Interactive Grid on that page.

This is good for a quick-fix, but does place limitations upon your app, which are far from ideal.


Method #2 - Updating your theme
A better way to resolve this, and one that doesn't limit you in the same way as detailed in Method #1, is to update your themes so that they work better with the new client side messaging. Specifically this involves an update to your theme's label templates. 

The reason that the client side messaging functionality is not able to use the markup from the label template, is because it relies on the following:
  1. The error markup being provided in the Error Template attribute of the label template.
  2. The #ERROR_TEMPLATE# substitution string being included in either the Before Item or After Item attribute of the label template.
Now we know what's required, let's look at an example using the standard Theme 21, and the Optional Label with Help label template. In this template, you will see the Error Display information is provided as follows:


Theme 21 > Optional Label with Help label template before any changes, showing an empty Error Template and the error markup defined in the On Error Before Label and On Error After Label attributes

Given the aforementioned rules, this clearly won't work, there is no Error Template defined, so the client side will need to resort to its fallback template, causing the possible inconsistencies.

So firstly, we need to move the error related markup into the Error Template attribute. However, this presents a slight problem, because the way it is currently, the On Error Before Label markup will be rendered exactly as described, before the label markup when an error occurs, and the On Error After Label markup similarly will be rendered after the label markup. This means that it will effectively wrap the label markup in the template, which is very difficult to replicate by switching to the Error Template approach (where the markup is defined in its entirety in a single place, and inserted into the template in a single place). Unfortunately this means we will need to compromise, and go with an approach that will be slightly different to how it looked prior to 5.1, however importantly this will be consistent between client side and server-side rendered messages now, and without the downsides of having to resort to Method #1.

So we move the combined error markup into the Error Template, as shown:


Theme 21 > Optional Label with Help label template after moving the error markup into the Error Template attribute

Note: We removed the line break because we no longer need to force a new line after the label as was the case before, and because the DIV will automatically start on a new line because it is a block-level element, so the error will appear on a new line after its preceding element.

The next part is to add the #ERROR_TEMPLATE# substitution string to the best place in either of the two attributes where it is supported in a label template, either the Before Item or After Item attributes. Let's go with the more common approach of displaying the error after the item, so simply add #ERROR_TEMPLATE# in the After Item attribute, as shown:


Screenshot showing #ERROR_TEMPLATE# substitution string added to the After Item attribute of the label template


You will need to replicate this in the different label templates in your app, and of course different themes will have slightly different implementations for label templates, but if you follow the rules laid out above, then it should be a fairly straight forward switch over. 


Updated appearance following the template change, with the error displayed below the input, now consistent irrespective of messaging type


Method #3 - Migrate to Universal Theme
This will almost certainly result in the most work, but will provide great benefit in the long run, and all these problems will automatically go away.


Screenshot showing the Universal Theme Sample Application, available in your workspace via Packaged Apps > Sample.

This will fix these inconsistency issues and is the best thing you can do to also benefit from the huge improvements offered by Universal Theme over our legacy themes. However, it will most likely take the most time and should be considered as part of a migration project (rather than a simple change), so it may of course not be feasible. See our Migration Guide, specifically the section entitled Migrating from Other Themes, for further information.



Summary
The new client side messaging of APEX 5.1 offers some great benefits over the server-side approach, but it does present some issues with inline errors in apps not using Universal Theme, as detailed here. If you have time, converting to Universal Theme is definitely the way to go; there are massive improvements to benefit from in terms of capability, accessibility, responsiveness, modern look and feel, and more, but this will of course take the most time and investment. If that is not feasible, we recommend updating your existing themes as detailed here, so you can provide a consistent look and feel and still benefit from the other new features in 5.1 as described.
Categories: Development

Restricted sqlplus and sql_script

Laurent Schneider - Thu, 2016-12-22 08:49

Yesterday I wrote about executing a SQL script from PL/SQL. Connor then posted a 12c new feature that I had overlooked:

@laurentsch Or perhaps just use scheduler to run them, job_type = SQL_SCRIPT

— Connor McDonald (@connor_mc_d) December 21, 2016

If you are still considering the 11g approach with sqlplus, you should read about restricted sqlplus

If you run

sqlplus -L -s -R 3 scott/tiger@db01

lots of OS commands are disabled

SQL> get /etc/passwd
SP2-0738: Restricted command "get" not available
SQL> spool /etc/oratab
SP2-0738: Restricted command "spool" not available
SQL> host reboot
SP2-0738: Restricted command "host" not available

There is also a Product User Profile. I have never seen any customer using this, and I doubt it is very safe.

You could disable some commands


SQL> insert into system.PRODUCT_USER_PROFILE
  (PRODUCT,USERID,ATTRIBUTE,CHAR_VALUE)
  values 
  ('SQL*Plus','SCOTT','ROLLBACK','DISABLED');

Which prevents SCOTT from issuing a rollback:


SQL> rollback;
SP2-0544: Command "rollback" disabled in Product User Profile

but it doesn’t resist Rock & Roll


SQL> roll & rock
Rollback complete.

If you have 12c, go have a look at the doc for SQL_SCRIPT jobs
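For reference, here is a minimal sketch of such a job. This is an assumption-laden illustration rather than an example from the documentation: the job name, the script body and the credential names OS_CRED and DB_CRED are all made up, and the exact attribute names should be verified against the DBMS_SCHEDULER doc.

begin
  dbms_scheduler.create_job(
    job_name        => 'DEMO_SQL_SCRIPT_JOB',
    job_type        => 'SQL_SCRIPT',
    -- the job action is the SQL*Plus script to run
    job_action      => 'set pagesize 0' || chr(10) || 'select count(*) from scott.emp;',
    credential_name => 'OS_CRED',   -- OS credential used to spawn sqlplus (assumed to exist)
    enabled         => false);
  -- database credential the script connects with (assumed to exist);
  -- alternatively the script itself can contain a CONNECT command
  dbms_scheduler.set_attribute('DEMO_SQL_SCRIPT_JOB', 'connect_credential', 'DB_CRED');
  dbms_scheduler.enable('DEMO_SQL_SCRIPT_JOB');
end;
/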

Apex 5.1 on Docker

Marcelo Ochoa - Thu, 2016-12-22 06:51
Two days after I published my post on Building a Docker image with Oracle 11gXE and the latest Apex distribution, Oracle released the latest production version, 5.1.
So I just committed a project update on GitHub with the instructions to build a new Docker image using the official Oracle 11g Docker image scripts and the Apex 5.1 installation.
Here is a screenshot of the first run when opening the URL http://localhost:8080/apex/apex_admin:

The admin password defaults to the sys/system password generated during the first boot of the Docker image; see the Docker logs for more details, for example:
ORACLE AUTO GENERATED PASSWORD FOR SYS AND SYSTEM: 80c55053b63dc749
Oracle Apex will immediately prompt you to change this default password; for security reasons admin/admin is obviously not allowed, so we choose, for example, OraPwd$16:
Next, the welcome page is returned:
At the bottom of the welcome page you can check the new version:
And that's all; this is how easily you can provision a new Apex environment with Docker.

Oracle SINH, COSH, and TANH Functions with Examples

Complete IT Professional - Thu, 2016-12-22 05:00
In this article, I’ll cover what the Oracle SINH, COSH, and TANH functions are, and show you some examples. Purpose of the Oracle SINH, COSH, and TANH Functions These three functions are used to calculate the hyperbolic sine, cosine, or tangent of a number: SINH: Calculate the hyperbolic sine of a number COSH: Calculate the […]
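As a quick illustration of usage (my own sketch, not part of the truncated excerpt above), all three functions can be called directly in a SELECT; for an input of 1 the expected results are roughly 1.1752, 1.5431 and 0.7616:

-- hyperbolic sine, cosine and tangent of 1
select sinh(1), cosh(1), tanh(1) from dual;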
Categories: Development

Links for 2016-12-21 [del.icio.us]

Categories: DBA Blogs

Oracle Apex 5.1 is here

Flavio Casetta - Wed, 2016-12-21 16:34
Categories: DBA Blogs

SQL Plan Directives in 12cR2. Behavior is different than in 12cR1.

Yann Neuhaus - Wed, 2016-12-21 12:04

During the Frankfurter IT Tage I gave a presentation about SQL Plan Directives and how to reproduce, in a test environment, a testcase for a plan that used SQL Plan Directives. In that context I did a couple of tests with 12cR2 (12.2.0.1) and wanted to blog about the change in behavior in that release.

Here is my testcase as I initially ran it in 12.1.0.2:

First I created a table with 4 columns (A, B, C and D) and loaded it with 1000 rows:


10:25:27 demo@GEN12102> create table DEMO_TABLE as
10:25:27 2 select mod(rownum,2) a ,mod(rownum,2) b ,mod(rownum,2) c ,mod(rownum,2) d
10:25:27 3 from dual connect by level <=1000;
 
10:25:30 demo@GEN12102> select * from DEMO_TABLE;
A B C D
---------- ---------- ---------- ----------
1 1 1 1
...
0 0 0 0
1 1 1 1
0 0 0 0

I.e. there is a correlation between the columns (all columns have a value 0 or all have a value 1).

Due to the new 12c feature of gathering stats during CTAS and INSERT APPEND into empty tables, the table has stats already:


10:28:29 demo@GEN12102> select column_name,num_distinct,density,last_analyzed from user_tab_col_statistics
10:28:29 2 where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT DENSITY LAST_ANA
------------ ------------ ---------- --------
A 2 .5 10:25:27
B 2 .5 10:25:27
C 2 .5 10:25:27
D 2 .5 10:25:27

Let’s run a query:


10:29:40 demo@GEN12102> select /*+ gather_plan_statistics */ * from DEMO_TABLE
10:29:40 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
...
0 0 0 0
0 0 0 0
 
10:29:40 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 63 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------

The calculated cardinality (E-Rows) is misestimated versus the actual cardinality (A-Rows) by roughly a factor of 8 (63 vs. 500).
Oracle detected that and marked the cursor with IS_REOPTIMIZABLE=Y:


10:32:49 demo@GEN12102> select sql_id,child_number,is_reoptimizable from v$sql
10:32:49 2 where sql_id='gbwsdbt5usvt6';
 
SQL_ID CHILD_NUMBER IS_REOPTIMIZABLE
------------- ------------ ----------------
gbwsdbt5usvt6 0 Y

Executing the SQL again uses statistics feedback:


10:33:42 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------
 
Note
-----
- statistics feedback used for this statement

The computed cardinality is correct now.

After flushing the SQL Plan Directives to the SYSAUX tablespace, I can see them with the internal_state NEW:


10:34:37 demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
10:34:39 demo@GEN12102> select directive_id,type,state,reason,notes,created,last_modified,last_used
10:34:39 2 from dba_sql_plan_directives where directive_id in(
10:34:39 3 select directive_id from dba_sql_plan_dir_objects where owner='DEMO'
10:34:39 4 ) order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ---------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
615512554562037875 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:34:36
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>

Executing a different SQL with the same type of predicate will use the SQL Plan Directive and dynamically gather stats:


10:35:54 demo@GEN12102> select /*+ gather_plan_statistics */ count(*) from DEMO_TABLE
10:35:54 2 where a=0 and b=0 and c=0 and d=0;
 
COUNT(*)
----------
500
 
10:35:54 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement

The internal_state of the directive changed to MISSING_STATS:


10:37:18 demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
10:37:20 demo@GEN12102> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in( select directive_id from dba_sql_plan_dir_objects where owner='DEMO' ) order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ---------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
615512554562037875 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:34:36
<internal_state>MISSING_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>

The SPD is still being used until fresh stats are gathered. Oracle will then create extended (multi-column) statistics:


10:38:27 demo@GEN12102> select /*+ gather_plan_statistics */ distinct a,b,c,d from DEMO_TABLE
10:38:27 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
 
10:38:27 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement
 
10:39:42 demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
10:40:01 demo@GEN12102> select column_name,num_distinct,density,last_analyzed from user_tab_col_statistics where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT DENSITY LAST_ANA
------------ ------------ ---------- --------
A 2 .5 10:25:27
B 2 .5 10:25:27
C 2 .5 10:25:27
D 2 .5 10:25:27
 
10:40:03 demo@GEN12102> exec dbms_stats.gather_table_stats(user,'DEMO_TABLE',options=>'GATHER AUTO',no_invalidate=>false);
 
10:40:05 demo@GEN12102> select column_name,num_distinct,last_analyzed from user_tab_col_statistics where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT LAST_ANA
------------ ------------ --------
A 2 10:40:04
B 2 10:40:04
C 2 10:40:04
D 2 10:40:04
SYS_STSPJNMI 2 10:40:04
 
10:40:05 demo@GEN12102> select extension_name,extension from user_stat_extensions where table_name='DEMO_TABLE';
 
EXTENSION_NAME
---------------
EXTENSION
----------------------------------------------------------------------------------------------------
SYS_STSPJNMIY_S
("A","B","C","D")

Running a SQL again still dynamically gathers stats, but will change the internal state to HAS_STATS:


10:40:37 demo@GEN12102> select /*+ gather_plan_statistics */ distinct b,c,d,a from DEMO_TABLE
10:40:37 2 where a=0 and b=0 and c=0 and d=0;
 
B C D A
---------- ---------- ---------- ----------
0 0 0 0
 
10:40:37 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement
 
demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
demo@GEN12102> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ---------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
615512554562037875 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:34:36
<internal_state>HAS_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
10:40:56 10:40:56

The next execution will use the extended stats instead of the directives:


10:45:10 demo@GEN12102> select /*+ gather_plan_statistics */ distinct a,b,c,d from DEMO_TABLE
10:45:10 2 where a=1 and b=1 and c=1 and d=1;
 
A B C D
---------- ---------- ---------- ----------
1 1 1 1
 
10:45:10 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))

Dropping the extended stats will result in a cardinality misestimate and set the directive to PERMANENT:


10:45:58 demo@GEN12102> exec dbms_stats.drop_extended_stats(user,'DEMO_TABLE','("A","B","C","D")');
 
10:46:19 demo@GEN12102> select /*+ gather_plan_statistics */ distinct a from DEMO_TABLE
10:46:19 2 where a=1 and b=1 and c=1 and d=1;
 
A
----------
1
 
10:46:19 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 62 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
--------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))
 
10:47:10 demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
10:47:10 demo@GEN12102> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ---------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
615512554562037875 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:34:36
<internal_state>PERMANENT</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>

Future executions of such SQL will use the directive again:


10:47:46 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement

That’s how it worked in 12.1.0.2. Here’s the behavior of 12.2.0.1:

First of all I have to enable adaptive statistics and switch on the preference to automatically create extended statistics:


10:49:22 demo@GEN12201> alter system set optimizer_adaptive_statistics=true;
 
10:49:22 demo@GEN12201> exec DBMS_STATS.SET_PARAM ('AUTO_STAT_EXTENSIONS','ON');

Creating the table with the appropriate stats is the same as in 12.1.:


10:50:23 demo@GEN12201> create table DEMO_TABLE as
10:50:23 2 select mod(rownum,2) a ,mod(rownum,2) b ,mod(rownum,2) c ,mod(rownum,2) d
10:50:23 3 from dual connect by level <=1000;
 
10:51:02 demo@GEN12201> select * from DEMO_TABLE;
 
A B C D
---------- ---------- ---------- ----------
1 1 1 1
...
1 1 1 1
0 0 0 0
 
10:51:31 demo@GEN12201> select column_name,num_distinct,density,last_analyzed from user_tab_col_statistics
10:51:31 2 where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT DENSITY LAST_ANA
------------ ------------ ---------- --------
A 2 .5 10:50:23
B 2 .5 10:50:23
C 2 .5 10:50:23
D 2 .5 10:50:23

Let’s run the first query:


10:51:42 demo@GEN12201> select /*+ gather_plan_statistics */ * from DEMO_TABLE
10:51:42 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
..
0 0 0 0
0 0 0 0
 
10:51:42 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 63 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
--------------------------------------------------
 
1 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))

As in 12.1. we have the misestimate in the computed cardinality (E-Rows) versus actual cardinality (A-Rows).

But here comes the first difference from 12.1.: the cursor is not marked as reoptimizable:


10:53:31 demo@GEN12201> select sql_id,child_number,is_reoptimizable from v$sql
10:53:31 2 where sql_id='gbwsdbt5usvt6';
 
SQL_ID CHILD_NUMBER IS_REOPTIMIZABLE
------------- ------------ ----------------
gbwsdbt5usvt6 0 N

I have to execute the SQL a second time to make the cursor IS_REOPTIMIZABLE=Y:


10:54:27 demo@GEN12201> select /*+ gather_plan_statistics */ * from DEMO_TABLE
10:54:27 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
...
0 0 0 0

The computed cardinality is still wrong:


10:54:27 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 63 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))

But the cursor is on IS_REOPTIMIZABLE=Y now:


10:54:27 demo@GEN12201> select sql_id,child_number,is_reoptimizable from v$sql
10:54:27 2 where sql_id='gbwsdbt5usvt6';
 
SQL_ID CHILD_NUMBER IS_REOPTIMIZABLE
------------- ------------ ----------------
gbwsdbt5usvt6 0 Y

However, statistics feedback is NOT USED in 12.2.:


10:56:33 demo@GEN12201> select /*+ gather_plan_statistics */ * from DEMO_TABLE
10:56:33 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
...
0 0 0 0
 
10:56:33 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 63 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
--------------------------------------------------
 
1 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))

I.e. the cardinality is still wrong and the Note saying “statistics feedback used for this statement” is not there.
But I can see a new SPD created:


10:58:37 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
10:58:39 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used
10:58:39 2 from dba_sql_plan_directives where directive_id in(
10:58:39 3 select directive_id from dba_sql_plan_dir_objects where owner='DEMO'
10:58:39 4 ) order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>

And it’s being used:


10:59:08 demo@GEN12201> select /*+ gather_plan_statistics */ count(*) from DEMO_TABLE
10:59:08 2 where a=0 and b=0 and c=0 and d=0;
 
COUNT(*)
----------
500
 
10:59:08 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement

The new feature in 12cR2 is the created Directive of type DYNAMIC_SAMPLING_RESULT:


10:59:31 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
10:59:31 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in( select directive_id from dba_sql_plan_dir_objects where owner='DEMO' ) order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>MISSING_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
 
14490288466401772154 DYNAMIC_SAMPLING_RESULT USABLE VERIFY CARDINALITY ESTIMATE
<spd_note> 10:59:08
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{(DEMO.DEMO_TABLE, num_rows=1000) - (SQL_ID:7fp7c6kcgmzux, T.CARD=500[-2 -2])}
</spd_text>
</spd_note>

I.e. the result of the dynamically sampled data is stored in the repository, together with the SQL_ID 7fp7c6kcgmzux of the stripped dynamic sampling query, so that the result can be reused by other queries in the future. In other words, Oracle just has to generate the SQL_ID of the dynamic sampling query and can use a prior result if it finds it in the repository.
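As a small side note (my own addition, not part of the original trace), these result directives can be isolated with a simple filter on the TYPE column of the same dictionary view queried above:

-- list only the new 12.2 dynamic sampling result directives
select directive_id, state, notes
from   dba_sql_plan_directives
where  type = 'DYNAMIC_SAMPLING_RESULT';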

As the internal state of the directive is MISSING_STATS and the DBMS_STATS preference AUTO_STAT_EXTENSIONS is set to ON, Oracle will create extended stats the next time stats are gathered:


11:02:17 demo@GEN12201> select column_name,num_distinct,density,last_analyzed from user_tab_col_statistics where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT DENSITY LAST_ANA
------------ ------------ ---------- --------
A 2 .5 10:50:23
B 2 .5 10:50:23
C 2 .5 10:50:23
D 2 .5 10:50:23
 
11:04:10 demo@GEN12201> exec dbms_stats.gather_table_stats(user,'DEMO_TABLE',options=>'GATHER AUTO',no_invalidate=>false);
 
11:04:11 demo@GEN12201> select column_name,num_distinct,last_analyzed from user_tab_col_statistics where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT LAST_ANA
------------ ------------ --------
A 2 11:04:10
B 2 11:04:10
C 2 11:04:10
D 2 11:04:10
SYS_STSPJNMI 2 11:04:10
 
11:04:11 demo@GEN12201> select extension_name,extension from user_stat_extensions where table_name='DEMO_TABLE';
 
EXTENSION_NAME
---------------
EXTENSION
----------------------------------------------------------------------------------------------------
SYS_STSPJNMIY_S
("A","B","C","D")

Once a query is executed again, the internal state changes to HAS_STATS (same as in 12.1.):


11:04:33 demo@GEN12201> select /*+ gather_plan_statistics */ distinct b,c,d,a from DEMO_TABLE
11:04:33 2 where a=0 and b=0 and c=0 and d=0;
 
B C D A
---------- ---------- ---------- ----------
0 0 0 0
 
11:04:33 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement
 
11:04:35 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
11:04:35 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>HAS_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
 
14490288466401772154 DYNAMIC_SAMPLING_RESULT USABLE VERIFY CARDINALITY ESTIMATE
<spd_note> 10:59:08
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{(DEMO.DEMO_TABLE, num_rows=1000) - (SQL_ID:7fp7c6kcgmzux, T.CARD=500[-2 -2])}
</spd_text>
</spd_note>

The next query uses the stats instead of the SPD:


11:05:23 demo@GEN12201> select /*+ gather_plan_statistics */ distinct a,b,c,d from DEMO_TABLE
11:05:23 2 where a=1 and b=1 and c=1 and d=1;
 
A B C D
---------- ---------- ---------- ----------
1 1 1 1
 
11:05:23 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))

When dropping the extended stats the estimated cardinality is wrong again:


11:05:49 demo@GEN12201> exec dbms_stats.drop_extended_stats(user,'DEMO_TABLE','("A","B","C","D")');
 
11:05:57 demo@GEN12201> select /*+ gather_plan_statistics */ distinct a from DEMO_TABLE
11:05:57 2 where a=1 and b=1 and c=1 and d=1;
 
A
----------
1
 
11:05:57 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 62 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))

However, the SPD won’t change its state to PERMANENT as it did in 12.1.:


11:06:38 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
11:06:38 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>HAS_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
 
14490288466401772154 DYNAMIC_SAMPLING_RESULT USABLE VERIFY CARDINALITY ESTIMATE
<spd_note> 10:59:08
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{(DEMO.DEMO_TABLE, num_rows=1000) - (SQL_ID:7fp7c6kcgmzux, T.CARD=500[-2 -2])}
</spd_text>
</spd_note>

I.e. future queries will have the misestimate, because the SPD remains on HAS_STATS:


11:07:16 demo@GEN12201> select /*+ gather_plan_statistics */ distinct b from DEMO_TABLE
11:07:16 2 where a=1 and b=1 and c=1 and d=1;
 
B
----------
1
 
11:07:16 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 62 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))

REMARK: Interestingly, the E-Rows value is now 62 instead of the 63 seen before, but I haven’t checked the reason for that yet.


11:07:46 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
11:07:46 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>HAS_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
 
14490288466401772154 DYNAMIC_SAMPLING_RESULT USABLE VERIFY CARDINALITY ESTIMATE
<spd_note> 10:59:08
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{(DEMO.DEMO_TABLE, num_rows=1000) - (SQL_ID:7fp7c6kcgmzux, T.CARD=500[-2 -2])}
</spd_text>
</spd_note>

So in summary there are several changes in 12.2. compared to 12.1.:

1.) Statistics feedback won’t mark the cursor for reoptimization after the first execution. It needs 2 executions with a cardinality misestimate to mark the cursor IS_REOPTIMIZABLE=Y.
2.) If the cursor is marked IS_REOPTIMIZABLE=Y, it won’t be parsed again with additional information from cardinality feedback. In 12.2. only a SQL Plan Directive is created.
3.) When executing a query which can use a SQL Plan Directive for the first time, a new SQL Plan Directive of type DYNAMIC_SAMPLING_RESULT will be created. Future queries can reuse that result.
4.) Dropping automatically created extended stats won’t change the state of the SPD from HAS_STATS to PERMANENT with the next execution of a query (and flushing the SPD).

The bottom line is that even if you enable SPDs in 12.2 like in 12.1 (with optimizer_adaptive_statistics = TRUE and DBMS_STATS.SET_PARAM (‘AUTO_STAT_EXTENSIONS’,’ON’)), the behavior in 12.2 is different. I would describe the change, i.e. using statistics feedback only on the second execution of a query, creating an SPD at that point, and not considering the feedback for the next parse, as a more conservative approach towards more stable plans, being less “adaptive” than in 12.1.

 

The article SQL Plan Directives in 12cR2. Behavior is different than in 12cR1. first appeared on Blog dbi services.

Update Parent account(s) balance using Child Account(s) Balance

Tom Kyte - Wed, 2016-12-21 11:46
I would like to update the balance of the parent account(s) using the sum of the child account(s). Can you please guide me?
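The question does not include table definitions. As a minimal sketch, assuming a hypothetical ACCOUNTS table (ACCOUNT_ID, PARENT_ACCOUNT_ID, BALANCE) where child rows reference their parent via PARENT_ACCOUNT_ID, one way to roll the child balances up into the parent rows is a MERGE against an aggregated subquery:

-- hypothetical table and column names, for illustration only;
-- this rolls up one level of the hierarchy (children into their direct parents)
merge into accounts p
using (
        select parent_account_id, sum(balance) as child_total
        from   accounts
        where  parent_account_id is not null
        group  by parent_account_id
      ) c
on    (p.account_id = c.parent_account_id)
when matched then
  update set p.balance = c.child_total;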
Categories: DBA Blogs
