Feed aggregator

ksplice

Wim Coekaerts - Fri, 2017-02-24 15:36
As many of you probably know by now, a few days ago an old, long-standing Linux bug was reported and fixed. It goes back to kernels as old as 2.6.18, and possibly earlier. The bug was recently fixed; see here.

Now, distribution vendors, including us, have released kernel updates that customers and users can download and install, but as always, a regular kernel upgrade requires a reboot. We have had ksplice as a service for Oracle Linux support customers for quite a few years now, and we also support Ubuntu and Fedora for free for anyone (see here).

One thing that is not often talked about, but that I believe is very powerful and want to point out here, is the following:

Typically the distribution vendors (including us) release an updated kernel that is the 'latest' version with these CVEs fixed, but many customers run older versions of both the distribution and the kernel. We now see some other vendors trying to provide the basics of online patching, but by and large it's based on one-offs for specific kernels. A big part of the ksplice service is the backend infrastructure that lets us easily build updates for literally a few thousand kernels. This gives customers great flexibility: you can be on one of many dot-releases of the OS and still use ksplice. Below is a list of example kernel versions for Oracle Linux that you could be running today and for which we provide ksplice updates, for instance, for this DCCP bug. That's a big difference from what other folks have been trying to mimic now that online patching has become more and more important for availability.

Here is an example: kernel 2.6.32-573.7.1.el6.x86_64 #1 SMP Tue Sep 22 08:34:17 PDT 2015. That's a kernel built back in September 2015, a random 'dot release' I run on one of my machines, and there's a ksplice patch available for these recent CVEs. I don't have to worry about installing the 'latest' kernel, nor about doing a reboot.

# uptrack-upgrade 
The following steps will be taken:
Install [f4muxalm] CVE-2017-6074: Denial-of-service when using IPV6_RECVPKTINFO socket option.
Install [5ncctcgz] CVE-2016-9555: Remote denial-of-service due to SCTP state machine memory corruption.

Go ahead [y/N]? y
Installing [f4muxalm] CVE-2017-6074: Denial-of-service when using IPV6_RECVPKTINFO socket option.
Installing [5ncctcgz] CVE-2016-9555: Remote denial-of-service due to SCTP state machine memory corruption.
Your kernel is fully up to date.
Effective kernel version is 2.6.32-642.15.1.el6

And done, it's that easy. My old 2.6.32-573.7.1 kernel looks like 2.6.32-642.15.1 in terms of critical fixes and CVEs.
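If you want to script the check, the ksplice Uptrack tools also include uptrack-uname, which mirrors uname but reports the effective kernel version (the output below is what I'd expect on this box, shown for illustration):

# uptrack-uname -r
2.6.32-642.15.1.el6.x86_64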

# uptrack-show
Installed updates:
[cct5dnbf] Clear garbage data on the kernel stack when handling signals.
[ektd95cj] Reduce usage of reserved percpu memory.
[uuhgbl3e] Remote denial-of-service in Brocade Ethernet driver.
[kg3f16ii] CVE-2015-7872: Denial-of-service when garbage collecting uninstantiated keyring.
[36ng2h1l] CVE-2015-7613: Privilege escalation in IPC object initialization.
[33jwvtbb] CVE-2015-5307: KVM host denial-of-service in alignment check.
[38gzh9gl] CVE-2015-8104: KVM host denial-of-service in debug exception.
[6wvrdj93] CVE-2015-2925: Privilege escalation in bind mounts inside namespaces.
[1l4i9dfh] CVE-2016-0774: Information leak in the pipe system call on failed atomic read.
[xu4auj49] CVE-2015-5157: Disable modification of LDT by userspace processes.
[554ck5nl] CVE-2015-8767: Denial-of-service in SCTP heartbeat timeout.
[adgeye5p] CVE-2015-8543: Denial-of-service on out of range protocol for raw sockets.
[5ojkw9lv] CVE-2015-7550: Denial-of-service when reading and revoking a key concurrently.
[gfr93o7j] CVE-2015-8324: NULL pointer dereference in ext4 on mount error.
[ft01zrkg] CVE-2013-2015, CVE-2015-7509: Possible privilege escalation when mounting an non-journaled ext4 filesystem.
[87lw5yyy] CVE-2015-8215: Remote denial-of-service of network traffic when changing the MTU.
[2bby9cuy] CVE-2010-5313, CVE-2014-7842: Denial of service in KVM L1 guest from L2 guest.
[orjsp65y] CVE-2015-5156: Denial-of-service in Virtio network device.
[5j4hp0ot] Device Mapper logic error when reloading the block multi-queue.
[a1e5kxp6] CVE-2016-4565: Privilege escalation in Infiniband ioctl.
[gfpg64bh] CVE-2016-5696: Session hijacking in TCP connections.
[b4ljcwin] Message corruption in pseudo terminal output.
[prijjgt5] CVE-2016-4470: Denial-of-service in the keyring subsystem.
[4y2f30ch] CVE-2016-5829: Memory corruption in unknown USB HID devices.
[j1mivn4f] Denial-of-service when resetting a Fibre Channel over Ethernet interface.
[nawv8jdu] CVE-2016-5195: Privilege escalation when handling private mapping copy-on-write.
[97fe0h7s] CVE-2016-1583: Privilege escalation in eCryptfs.
[fdztfgcv] Denial-of-service when sending a TCP reset from the netfilter.
[gm4ldjjf] CVE-2016-6828: Use after free during TCP transmission.
[s8pymcf8] CVE-2016-7117: Denial-of-service in recvmmsg() error handling.
[1ktf7029] CVE-2016-4997, CVE-2016-4998: Privilege escalation in the Netfilter driver.
[f4muxalm] CVE-2017-6074: Denial-of-service when using IPV6_RECVPKTINFO socket option.
[5ncctcgz] CVE-2016-9555: Remote denial-of-service due to SCTP state machine memory corruption.

Effective kernel version is 2.6.32-642.15.1.el6

Here is the list of kernels we build modules for as part of Oracle Linux customers' kernel choices:

oracle-2.6.18-238.0.0.0.1.el5
oracle-2.6.18-238.1.1.0.1.el5
oracle-2.6.18-238.5.1.0.1.el5
oracle-2.6.18-238.9.1.0.1.el5
oracle-2.6.18-238.12.1.0.1.el5
oracle-2.6.18-238.19.1.0.1.el5
oracle-2.6.18-274.0.0.0.1.el5
oracle-2.6.18-274.3.1.0.1.el5
oracle-2.6.18-274.7.1.0.1.el5
oracle-2.6.18-274.12.1.0.1.el5
oracle-2.6.18-274.17.1.0.1.el5
oracle-2.6.18-274.18.1.0.1.el5
oracle-2.6.18-308.0.0.0.1.el5
oracle-2.6.18-308.1.1.0.1.el5
oracle-2.6.18-308.4.1.0.1.el5
oracle-2.6.18-308.8.1.0.1.el5
oracle-2.6.18-308.8.2.0.1.el5
oracle-2.6.18-308.11.1.0.1.el5
oracle-2.6.18-308.13.1.0.1.el5
oracle-2.6.18-308.16.1.0.1.el5
oracle-2.6.18-308.20.1.0.1.el5
oracle-2.6.18-308.24.1.0.1.el5
oracle-2.6.18-348.0.0.0.1.el5
oracle-2.6.18-348.1.1.0.1.el5
oracle-2.6.18-348.2.1.0.1.el5
oracle-2.6.18-348.3.1.0.1.el5
oracle-2.6.18-348.4.1.0.1.el5
oracle-2.6.18-348.6.1.0.1.el5
oracle-2.6.18-348.12.1.0.1.el5
oracle-2.6.18-348.16.1.0.1.el5
oracle-2.6.18-348.18.1.0.1.el5
oracle-2.6.18-371.0.0.0.1.el5
oracle-2.6.18-371.1.2.0.1.el5
oracle-2.6.18-371.3.1.0.1.el5
oracle-2.6.18-371.4.1.0.1.el5
oracle-2.6.18-371.6.1.0.1.el5
oracle-2.6.18-371.8.1.0.1.el5
oracle-2.6.18-371.9.1.0.1.el5
oracle-2.6.18-371.11.1.0.1.el5
oracle-2.6.18-371.12.1.0.1.el5
oracle-2.6.18-398.0.0.0.1.el5
oracle-2.6.18-400.0.0.0.1.el5
oracle-2.6.18-400.1.1.0.1.el5
oracle-2.6.18-402.0.0.0.1.el5
oracle-2.6.18-404.0.0.0.1.el5
oracle-2.6.18-406.0.0.0.1.el5
oracle-2.6.18-407.0.0.0.1.el5
oracle-2.6.18-408.0.0.0.1.el5
oracle-2.6.18-409.0.0.0.1.el5
oracle-2.6.18-410.0.0.0.1.el5
oracle-2.6.18-411.0.0.0.1.el5
oracle-2.6.18-412.0.0.0.1.el5
oracle-2.6.18-416.0.0.0.1.el5
oracle-2.6.18-417.0.0.0.1.el5
oracle-2.6.18-418.0.0.0.1.el5
oracle-2.6.32-642.0.0.0.1.el6
oracle-3.10.0-514.6.1.0.1.el7
oracle-3.10.0-514.6.2.0.1.el7
oracle-uek-2.6.39-100.5.1
oracle-uek-2.6.39-100.6.1
oracle-uek-2.6.39-100.7.1
oracle-uek-2.6.39-100.10.1
oracle-uek-2.6.39-200.24.1
oracle-uek-2.6.39-200.29.1
oracle-uek-2.6.39-200.29.2
oracle-uek-2.6.39-200.29.3
oracle-uek-2.6.39-200.31.1
oracle-uek-2.6.39-200.32.1
oracle-uek-2.6.39-200.33.1
oracle-uek-2.6.39-200.34.1
oracle-uek-2.6.39-300.17.1
oracle-uek-2.6.39-300.17.2
oracle-uek-2.6.39-300.17.3
oracle-uek-2.6.39-300.26.1
oracle-uek-2.6.39-300.28.1
oracle-uek-2.6.39-300.32.4
oracle-uek-2.6.39-400.17.1
oracle-uek-2.6.39-400.17.2
oracle-uek-2.6.39-400.21.1
oracle-uek-2.6.39-400.21.2
oracle-uek-2.6.39-400.23.1
oracle-uek-2.6.39-400.24.1
oracle-uek-2.6.39-400.109.1
oracle-uek-2.6.39-400.109.3
oracle-uek-2.6.39-400.109.4
oracle-uek-2.6.39-400.109.5
oracle-uek-2.6.39-400.109.6
oracle-uek-2.6.39-400.209.1
oracle-uek-2.6.39-400.209.2
oracle-uek-2.6.39-400.210.2
oracle-uek-2.6.39-400.211.1
oracle-uek-2.6.39-400.211.2
oracle-uek-2.6.39-400.211.3
oracle-uek-2.6.39-400.212.1
oracle-uek-2.6.39-400.214.1
oracle-uek-2.6.39-400.214.3
oracle-uek-2.6.39-400.214.4
oracle-uek-2.6.39-400.214.5
oracle-uek-2.6.39-400.214.6
oracle-uek-2.6.39-400.215.1
oracle-uek-2.6.39-400.215.2
oracle-uek-2.6.39-400.215.3
oracle-uek-2.6.39-400.215.4
oracle-uek-2.6.39-400.215.6
oracle-uek-2.6.39-400.215.7
oracle-uek-2.6.39-400.215.10
oracle-uek-2.6.39-400.215.11
oracle-uek-2.6.39-400.215.12
oracle-uek-2.6.39-400.215.13
oracle-uek-2.6.39-400.215.14
oracle-uek-2.6.39-400.215.15
oracle-uek-2.6.39-400.243.1
oracle-uek-2.6.39-400.245.1
oracle-uek-2.6.39-400.246.2
oracle-uek-2.6.39-400.247.1
oracle-uek-2.6.39-400.248.3
oracle-uek-2.6.39-400.249.1
oracle-uek-2.6.39-400.249.3
oracle-uek-2.6.39-400.249.4
oracle-uek-2.6.39-400.250.2
oracle-uek-2.6.39-400.250.4
oracle-uek-2.6.39-400.250.5
oracle-uek-2.6.39-400.250.6
oracle-uek-2.6.39-400.250.7
oracle-uek-2.6.39-400.250.9
oracle-uek-2.6.39-400.250.10
oracle-uek-2.6.39-400.250.11
oracle-uek-2.6.39-400.264.1
oracle-uek-2.6.39-400.264.4
oracle-uek-2.6.39-400.264.5
oracle-uek-2.6.39-400.264.6
oracle-uek-2.6.39-400.264.13
oracle-uek-2.6.39-400.276.1
oracle-uek-2.6.39-400.277.1
oracle-uek-2.6.39-400.278.1
oracle-uek-2.6.39-400.278.2
oracle-uek-2.6.39-400.278.3
oracle-uek-2.6.39-400.280.1
oracle-uek-2.6.39-400.281.1
oracle-uek-2.6.39-400.282.1
oracle-uek-2.6.39-400.283.1
oracle-uek-2.6.39-400.283.2
oracle-uek-2.6.39-400.284.1
oracle-uek-2.6.39-400.284.2
oracle-uek-2.6.39-400.286.2
oracle-uek-2.6.39-400.286.3
oracle-uek-2.6.39-400.290.1
oracle-uek-2.6.39-400.290.2
oracle-uek-2.6.39-400.293.1
oracle-uek-2.6.39-400.293.2
oracle-uek-2.6.39-400.294.1
oracle-uek-2.6.39-400.294.2
oracle-uek-2.6.39-400.128.21
oracle-uek-3.8.13-16
oracle-uek-3.8.13-16.1.1
oracle-uek-3.8.13-16.2.1
oracle-uek-3.8.13-16.2.2
oracle-uek-3.8.13-16.2.3
oracle-uek-3.8.13-16.3.1
oracle-uek-3.8.13-26
oracle-uek-3.8.13-26.1.1
oracle-uek-3.8.13-26.2.1
oracle-uek-3.8.13-26.2.2
oracle-uek-3.8.13-26.2.3
oracle-uek-3.8.13-26.2.4
oracle-uek-3.8.13-35
oracle-uek-3.8.13-35.1.1
oracle-uek-3.8.13-35.1.2
oracle-uek-3.8.13-35.1.3
oracle-uek-3.8.13-35.3.1
oracle-uek-3.8.13-35.3.2
oracle-uek-3.8.13-35.3.3
oracle-uek-3.8.13-35.3.4
oracle-uek-3.8.13-35.3.5
oracle-uek-3.8.13-44
oracle-uek-3.8.13-44.1.1
oracle-uek-3.8.13-44.1.3
oracle-uek-3.8.13-44.1.4
oracle-uek-3.8.13-44.1.5
oracle-uek-3.8.13-55
oracle-uek-3.8.13-55.1.1
oracle-uek-3.8.13-55.1.2
oracle-uek-3.8.13-55.1.5
oracle-uek-3.8.13-55.1.6
oracle-uek-3.8.13-55.1.8
oracle-uek-3.8.13-55.2.1
oracle-uek-3.8.13-68
oracle-uek-3.8.13-68.1.2
oracle-uek-3.8.13-68.1.3
oracle-uek-3.8.13-68.2.2
oracle-uek-3.8.13-68.2.2.1
oracle-uek-3.8.13-68.2.2.2
oracle-uek-3.8.13-68.3.1
oracle-uek-3.8.13-68.3.2
oracle-uek-3.8.13-68.3.3
oracle-uek-3.8.13-68.3.4
oracle-uek-3.8.13-68.3.5
oracle-uek-3.8.13-98
oracle-uek-3.8.13-98.1.1
oracle-uek-3.8.13-98.1.2
oracle-uek-3.8.13-98.2.1
oracle-uek-3.8.13-98.2.2
oracle-uek-3.8.13-98.4.1
oracle-uek-3.8.13-98.5.2
oracle-uek-3.8.13-98.6.1
oracle-uek-3.8.13-98.7.1
oracle-uek-3.8.13-98.8.1
oracle-uek-3.8.13-118
oracle-uek-3.8.13-118.2.1
oracle-uek-3.8.13-118.2.2
oracle-uek-3.8.13-118.2.4
oracle-uek-3.8.13-118.2.5
oracle-uek-3.8.13-118.3.1
oracle-uek-3.8.13-118.3.2
oracle-uek-3.8.13-118.4.1
oracle-uek-3.8.13-118.4.2
oracle-uek-3.8.13-118.6.1
oracle-uek-3.8.13-118.6.2
oracle-uek-3.8.13-118.7.1
oracle-uek-3.8.13-118.8.1
oracle-uek-3.8.13-118.9.1
oracle-uek-3.8.13-118.9.2
oracle-uek-3.8.13-118.10.2
oracle-uek-3.8.13-118.11.2
oracle-uek-3.8.13-118.13.2
oracle-uek-3.8.13-118.13.3
oracle-uek-3.8.13-118.14.1
oracle-uek-3.8.13-118.14.2
oracle-uek-3.8.13-118.15.1
oracle-uek-3.8.13-118.15.2
oracle-uek-3.8.13-118.15.3
oracle-uek-3.8.13-118.16.2
oracle-uek-3.8.13-118.16.3
oracle-uek-4.1.12-32
oracle-uek-4.1.12-32.1.2
oracle-uek-4.1.12-32.1.3
oracle-uek-4.1.12-32.2.1
oracle-uek-4.1.12-32.2.3
oracle-uek-4.1.12-37.2.1
oracle-uek-4.1.12-37.2.2
oracle-uek-4.1.12-37.3.1
oracle-uek-4.1.12-37.4.1
oracle-uek-4.1.12-37.5.1
oracle-uek-4.1.12-37.6.1
oracle-uek-4.1.12-37.6.2
oracle-uek-4.1.12-37.6.3
oracle-uek-4.1.12-61.1.6
oracle-uek-4.1.12-61.1.9
oracle-uek-4.1.12-61.1.10
oracle-uek-4.1.12-61.1.13
oracle-uek-4.1.12-61.1.14
oracle-uek-4.1.12-61.1.16
oracle-uek-4.1.12-61.1.17
oracle-uek-4.1.12-61.1.18
oracle-uek-4.1.12-61.1.19
oracle-uek-4.1.12-61.1.21
oracle-uek-4.1.12-61.1.22
oracle-uek-4.1.12-61.1.23
oracle-uek-4.1.12-61.1.24
oracle-uek-4.1.12-61.1.25
oracle-uek-4.1.12-61.1.27
rhel-2.6.32-71.el6
rhel-2.6.32-71.7.1.el6
rhel-2.6.32-71.14.1.el6
rhel-2.6.32-71.18.1.el6
rhel-2.6.32-71.18.2.el6
rhel-2.6.32-71.24.1.el6
rhel-2.6.32-71.29.1.el6
rhel-2.6.32-131.0.15.el6
rhel-2.6.32-131.2.1.el6
rhel-2.6.32-131.4.1.el6
rhel-2.6.32-131.6.1.el6
rhel-2.6.32-131.12.1.el6
rhel-2.6.32-131.17.1.el6
rhel-2.6.32-131.21.1.el6
rhel-2.6.32-220.el6
rhel-2.6.32-220.2.1.el6
rhel-2.6.32-220.4.1.el6
rhel-2.6.32-220.4.2.el6
rhel-2.6.32-220.7.1.el6
rhel-2.6.32-220.13.1.el6
rhel-2.6.32-220.17.1.el6
rhel-2.6.32-220.23.1.el6
rhel-2.6.32-279.el6
rhel-2.6.32-279.1.1.el6
rhel-2.6.32-279.2.1.el6
rhel-2.6.32-279.5.1.el6
rhel-2.6.32-279.5.2.el6
rhel-2.6.32-279.9.1.el6
rhel-2.6.32-279.11.1.el6
rhel-2.6.32-279.14.1.el6
rhel-2.6.32-279.19.1.el6
rhel-2.6.32-279.22.1.el6
rhel-2.6.32-358.el6
rhel-2.6.32-358.0.1.el6
rhel-2.6.32-358.2.1.el6
rhel-2.6.32-358.6.1.el6
rhel-2.6.32-358.6.2.el6
rhel-2.6.32-358.6.2.el6.x86_64.crt1
rhel-2.6.32-358.11.1.el6
rhel-2.6.32-358.14.1.el6
rhel-2.6.32-358.18.1.el6
rhel-2.6.32-358.23.2.el6
rhel-2.6.32-431.el6
rhel-2.6.32-431.1.2.el6
rhel-2.6.32-431.3.1.el6
rhel-2.6.32-431.5.1.el6
rhel-2.6.32-431.11.2.el6
rhel-2.6.32-431.17.1.el6
rhel-2.6.32-431.20.3.el6
rhel-2.6.32-431.20.5.el6
rhel-2.6.32-431.23.3.el6
rhel-2.6.32-431.29.2.el6
rhel-2.6.32-504.el6
rhel-2.6.32-504.1.3.el6
rhel-2.6.32-504.3.3.el6
rhel-2.6.32-504.8.1.el6
rhel-2.6.32-504.12.2.el6
rhel-2.6.32-504.16.2.el6
rhel-2.6.32-504.23.4.el6
rhel-2.6.32-504.30.3.el6
rhel-2.6.32-573.el6
rhel-2.6.32-573.1.1.el6
rhel-2.6.32-573.3.1.el6
rhel-2.6.32-573.7.1.el6
rhel-2.6.32-573.8.1.el6
rhel-2.6.32-573.12.1.el6
rhel-2.6.32-573.18.1.el6
rhel-2.6.32-573.22.1.el6
rhel-2.6.32-573.26.1.el6
rhel-2.6.32-642.el6
rhel-2.6.32-642.1.1.el6
rhel-2.6.32-642.3.1.el6
rhel-2.6.32-642.4.2.el6
rhel-2.6.32-642.6.1.el6
rhel-2.6.32-642.6.2.el6
rhel-2.6.32-642.11.1.el6
rhel-2.6.32-642.13.1.el6
rhel-2.6.32-642.13.2.el6
rhel-3.10.0-123.el7
rhel-3.10.0-123.1.2.el7
rhel-3.10.0-123.4.2.el7
rhel-3.10.0-123.4.4.el7
rhel-3.10.0-123.6.3.el7
rhel-3.10.0-123.8.1.el7
rhel-3.10.0-123.9.2.el7
rhel-3.10.0-123.9.3.el7
rhel-3.10.0-123.13.1.el7
rhel-3.10.0-123.13.2.el7
rhel-3.10.0-123.20.1.el7
rhel-3.10.0-229.el7
rhel-3.10.0-229.1.2.el7
rhel-3.10.0-229.4.2.el7
rhel-3.10.0-229.7.2.el7
rhel-3.10.0-229.11.1.el7
rhel-3.10.0-229.14.1.el7
rhel-3.10.0-229.20.1.el6.x86_64.knl2
rhel-3.10.0-229.20.1.el7
rhel-3.10.0-327.el7
rhel-3.10.0-327.3.1.el7
rhel-3.10.0-327.4.4.el7
rhel-3.10.0-327.4.5.el7
rhel-3.10.0-327.10.1.el7
rhel-3.10.0-327.13.1.el7
rhel-3.10.0-327.18.2.el7
rhel-3.10.0-327.22.2.el7
rhel-3.10.0-327.28.2.el7
rhel-3.10.0-327.28.3.el7
rhel-3.10.0-327.36.1.el7
rhel-3.10.0-327.36.2.el7
rhel-3.10.0-327.36.3.el7
rhel-3.10.0-514.el7
rhel-3.10.0-514.2.2.el7
rhel-3.10.0-514.6.1.el7
rhel-3.10.0-514.6.2.el7
rhel-2.6.18-92.1.10.el5
rhel-2.6.18-92.1.13.el5
rhel-2.6.18-92.1.17.el5
rhel-2.6.18-92.1.18.el5
rhel-2.6.18-92.1.22.el5
rhel-2.6.18-128.el5
rhel-2.6.18-128.1.1.el5
rhel-2.6.18-128.1.6.el5
rhel-2.6.18-128.1.10.el5
rhel-2.6.18-128.1.14.el5
rhel-2.6.18-128.1.16.el5
rhel-2.6.18-128.2.1.el5
rhel-2.6.18-128.4.1.el5
rhel-2.6.18-128.7.1.el5
rhel-2.6.18-149.el5
rhel-2.6.18-164.el5
rhel-2.6.18-164.2.1.el5
rhel-2.6.18-164.6.1.el5
rhel-2.6.18-164.9.1.el5
rhel-2.6.18-164.10.1.el5
rhel-2.6.18-164.11.1.el5
rhel-2.6.18-164.15.1.el5
rhel-2.6.18-194.el5
rhel-2.6.18-194.3.1.el5
rhel-2.6.18-194.8.1.el5
rhel-2.6.18-194.11.1.el5
rhel-2.6.18-194.11.3.el5
rhel-2.6.18-194.11.4.el5
rhel-2.6.18-194.17.1.el5
rhel-2.6.18-194.17.4.el5
rhel-2.6.18-194.26.1.el5
rhel-2.6.18-194.32.1.el5
rhel-2.6.18-238.el5
rhel-2.6.18-238.1.1.el5
rhel-2.6.18-238.5.1.el5
rhel-2.6.18-238.9.1.el5
rhel-2.6.18-238.12.1.el5
rhel-2.6.18-238.19.1.el5
rhel-2.6.18-274.el5
rhel-2.6.18-274.3.1.el5
rhel-2.6.18-274.7.1.el5
rhel-2.6.18-274.12.1.el5
rhel-2.6.18-274.17.1.el5
rhel-2.6.18-274.18.1.el5
rhel-2.6.18-308.el5
rhel-2.6.18-308.1.1.el5
rhel-2.6.18-308.4.1.el5
rhel-2.6.18-308.8.1.el5
rhel-2.6.18-308.8.2.el5
rhel-2.6.18-308.11.1.el5
rhel-2.6.18-308.13.1.el5
rhel-2.6.18-308.16.1.el5
rhel-2.6.18-308.20.1.el5
rhel-2.6.18-308.24.1.el5
rhel-2.6.18-348.el5
rhel-2.6.18-348.1.1.el5
rhel-2.6.18-348.2.1.el5
rhel-2.6.18-348.3.1.el5
rhel-2.6.18-348.4.1.el5
rhel-2.6.18-348.6.1.el5
rhel-2.6.18-348.12.1.el5
rhel-2.6.18-348.16.1.el5
rhel-2.6.18-348.18.1.el5
rhel-2.6.18-371.el5
rhel-2.6.18-371.1.2.el5
rhel-2.6.18-371.3.1.el5
rhel-2.6.18-371.4.1.el5
rhel-2.6.18-371.6.1.el5
rhel-2.6.18-371.8.1.el5
rhel-2.6.18-371.9.1.el5
rhel-2.6.18-371.11.1.el5
rhel-2.6.18-371.12.1.el5
rhel-2.6.18-398.el5
rhel-2.6.18-400.el5
rhel-2.6.18-400.1.1.el5
rhel-2.6.18-402.el5
rhel-2.6.18-404.el5
rhel-2.6.18-406.el5
rhel-2.6.18-407.el5
rhel-2.6.18-408.el5
rhel-2.6.18-409.el5
rhel-2.6.18-410.el5
rhel-2.6.18-411.el5
rhel-2.6.18-412.el5
rhel-2.6.18-416.el5
rhel-2.6.18-417.el5
rhel-2.6.18-418.el5

Compare that to kpatch, kGraft, and the like.

Spotlight Series Video on Elasticsearch and PeopleSoft

PeopleSoft Technology Blog - Fri, 2017-02-24 09:58

We've just posted a new Spotlight Series video on using Elasticsearch with PeopleSoft.  This video is the first in a series, and covers the following topics:

  • What is Elasticsearch and why did PeopleSoft move to it for our search engine?
  • What releases of PeopleTools and PeopleSoft applications are needed to deploy Elasticsearch?
  • How do customers obtain it?  Are there any licensing implications?
  • What are the advantages of Elasticsearch?

In addition to these points, the main purpose of the video is to provide an overview of the process for deploying Elasticsearch with your PeopleSoft environments.  We cover the overall process, and describe the scenarios that may apply to your enterprise.

We will be releasing additional videos on Elasticsearch with PeopleSoft in short order.  These videos will cover the following topics in greater detail:

  • Transitioning to Elasticsearch from Oracle's Secure Enterprise Search
  • Initial installation of Elasticsearch
  • Administration and maintenance of Elasticsearch

All these videos will be posted on the PeopleSoft YouTube channel.

Data Preparation for "Analyse Elections with Oracle Data Visualisation Desktop"

Rittman Mead Consulting - Fri, 2017-02-24 09:13

In the previous post I wrote about some interesting insights we can get from the elections data. This post concentrates on the data preparation for it. Like the first one, it isn't meant as a guide; it aims to show some data manipulation things an end user can do without an IT department and without coding. The data preparation is not absolutely needed for understanding the visual part, but it is still a piece of the story, so it was extracted into a separate post.

The Data

From the logical point of view I have two datasets.

  1. Voting results by election commissions: number of votes for every political party and a lot of technical measures like number of registered voters, number of good and damaged voting ballots and so on.
  2. Turnout figures at given times throughout the day.

From the physical point of view, both datasets are big files (about 500 MB combined), each line of which is a single record. These records represent information at various levels of aggregation. I will use this fact to show some of the Data flows features (great introduction from Francesco Tisiot).

Generally, each record is a JSON document which looks like this:

{  
 [...]
 "Data":{  
  "1 | Number of voters included in the voters list at the time of the end of voting":"102",
  "2 | The number of ballots received by the precinct election commission":"102",
  "3 | The number of ballots issued to voters who voted early":"0",
  [...]
  "19 | 1 Russian political party 'Rodina'":"0",
  "20 | 2 Political party 'COMMUNISTS COMMUNIST PARTY OF RUSSIA'":"3",
  "21 | 3 Political party 'Russian Party of Pensioners for Justice'":"0",
  [...]
  },
  "Loc":[  
   "Nizhny Novgorod Region",
   "129 Nizhny Novgorod Region - Nizhny Novgorod",
   "28 Krasnooktyabrskaya",
   "PEC #1148"
   ]
}

You can find a full sample of the JSON here.

The Data part is a set of 32 measures, and Loc is an array of one to four elements. The more detailed the record, the more elements in the Loc array.

The whole votes dataset is slightly smaller than 100,000 records. As I said, records have various levels of aggregation, and this sample shows the most detailed data (its Loc has four elements). Nizhny Novgorod Region is obviously the region; 129 Nizhny Novgorod Region - Nizhny Novgorod is a concatenation of the district electoral commission number (129), the region name (Nizhny Novgorod Region) and the district electoral commission name (Nizhny Novgorod). 28 Krasnooktyabrskaya is the number and name of a territorial election commission, and PEC #1148 is the precinct electoral commission. For the most aggregated records, Loc looks like:

"Loc":[  
      "DEC #129"
   ]

This is the aggregated data for district election commission #129.

Unfortunately, DVD doesn't work with the JSON format directly. One possibility is to use Apache Drill (for more information see previous posts by Francesco Tisiot and Robin Moffatt). Apache Drill is a supported option for Oracle DV, but it isn't an end-user tool, and this story is not about building a complete and reusable system. It is about the storytelling. So I did a one-time manual conversion using Excel magic and got a plain table.
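For reference, had I gone the Drill route, a flattening query over the raw JSON would look roughly like this (the file path is made up, and the REPEATED_COUNT filter for PEC-level records is my assumption):

SELECT t.Loc[0] AS region,
       t.Loc[3] AS pec,
       t.`Data` AS measures
FROM   dfs.`/tmp/protocols.json` t
WHERE  REPEATED_COUNT(t.Loc) = 4;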

The second part of the data is the turnout. In JSON form it's basically the same: one record per line, a Data part with four values and a Loc part based on the same principles. The Data part is significantly smaller and has only four values: the turnout at four fixed moments. But here I need it in a more dimensional form; I want time as a dimension rather than pivoted into columns.

In my opinion, pivot/unpivot is one more highly desired feature DVD needs. We can live without it in tools like OBIEE because its models aren't built on rapidly changing user-generated data structures, but for desktop data exploration tools like DVD powerful data transformation capabilities are crucial.

But while it isn't implemented, I did the unpivot in Excel and got this.
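For those who think in SQL, the reshaping I did by hand is a classic unpivot; a conceptual sketch with invented column names:

SELECT loc, time_of_day, turnout
FROM   turnout_wide
UNPIVOT (turnout FOR time_of_day
         IN (t_10 AS '10:00', t_12 AS '12:00', t_15 AS '15:00', t_18 AS '18:00'));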

It wasn't obvious at the beginning of the data preparation stage that I would need it in this particular form, but the good news is that both visualisation and data manipulation live in one product, and in reality I was able to build charts, understand that I needed more data/attributes/etc., add them and go back to the analysis. Very smooth and clean.

I Thought We Weren’t Supposed to Use Excel?

While DVD's Data flows feature is powerful and very useful, there are some data manipulation tasks that Excel simply does more easily or better than DVD. I hope that Oracle will improve Data flows over time, but right now it's not perfect.

I used Excel for the following:

  1. Splitting long strings into fields. I could do it using locate+substring+left+right functions in DVD, but that would produce unreadable code. Excel's Text to Columns tool is the weapon of choice so far.
  2. UNION. When I need to combine two datasets into one, Excel's copy and paste is beyond competition. Adding set operations (not only union) would greatly improve Oracle's tool.
  3. A few technical tasks like translating the data. Initially, the data was not in English and I had to translate it. I could use either Excel or DVD for this task, but chose Excel in order to simplify the post. If I had done it in DVD, it would have been just a few more joins. I can't say which is simpler, so no clear winner here.

Note: I'm talking about static data sets. In the case of changing data, that may not be true.

Data Transformations in DVD

While I was able to (or rather had to) do some of the data preparation in Excel, some tasks are better and easier done in DVD using the relatively new Data flows feature. I will start with building a clean dictionary of electoral commissions.

When I finished my Excel part, I got this:

It isn't so bad, but I can make it better in a few minutes. I want to rename Location1 to Subject, split Location2 into DEC #129 and Nizhny Novgorod, remove the leading digits from Location3 and rename it to TEC name, and also rename Location4 to PEC #.

Here is the data flow which does it. It may not look like the simplest thing in the world, but I spent about as much time building it as you spent reading what it does.

The second thing I want to do is split my wide file with 32 measures into two: one for party results and another for the other, let's say "technical", measures like the number of voters, ballots and so on. It's not absolutely necessary, but it will please me and anyone else who sees it. Both data flows for this task use the Commissions dictionary I made earlier and the same source file. I filter the source to get only the most detailed data, join with commissions, select only the interesting columns and save the result. Simple and straightforward. The only difference is the sets of selected columns.

I could start building visualisations with this data, but I'd prefer to calculate some relative measures first. Absolute numbers won't help me because every commission has its own number of voters. We can't compare them directly, but relative measures will do the trick.
The data flow for these calculations is the simplest of all.
The only not immediately obvious part is why I placed two consecutive Add Columns steps. The first one calculates a Total votes column as the sum of votes for all parties, and the second one uses this column to calculate a percentage.
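In SQL terms, the reason for the two steps is simply that the percentage references a column which must already exist; a conceptual sketch with invented party columns:

WITH with_total AS (
  SELECT v.*, party_a + party_b + party_c AS total_votes  -- first Add Columns step
  FROM   votes v
)
SELECT w.*, ROUND(100 * party_a / total_votes, 2) AS party_a_pct  -- second step
FROM   with_total w;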

And it is really cool how we can use the results of different flows. I took Protocols, made Commissions, joined Protocols and Commissions to get the Votes data, and later made calculations based on the Votes data. Very cool.

And here I want to show how the autocomplete feature in the formulas editor works (because I really like it). I hope to see it in OBIEE one day.

I have one more data flow for the turnout data, but it simply joins the source file with the dictionary of commissions.

Summary

I believe Oracle DVD is a very powerful tool for a data analyst. While many users prefer to stick with Excel exclusively, DVD can significantly increase their data preparation and transformation capabilities. Even now, at the very beginning of its journey, DVD makes many data manipulation operations fast and easy. It is obviously worth giving it a try and including it in a data analysis process. Look at how the data from this post may be used for analysis and for making interesting conclusions: Analyse Elections with Oracle Data Visualisation Desktop.

Categories: BI & Warehousing

IMPDP over network link ORA-3916 Schema XXXXX not found error

Tom Kyte - Fri, 2017-02-24 08:46
Hi there. I'm trying to migrate database objects between 2 instances via network link. Source database is version 11.2.0.3. Target database is version 12.1.0.2. On both systems exists a schema named TAGDATI with DATAPUMP_EXP_FULL_DATABASE and DATAP...
Categories: DBA Blogs

Alternative to Oracle Wrap Utility

Tom Kyte - Fri, 2017-02-24 08:46
Hello, We have requirement to release our product at customer location(usually we just deliver processed data to the customer) and allow them to use it for sometime before they are satisfied, hence to secure code/algorithm we need to wrap the code...
Categories: DBA Blogs

Dynamic SQL to execute a function

Tom Kyte - Fri, 2017-02-24 08:46
Hi, I'm trying to write a function which will accept an arithmetic operation (example: mod(100,10)) as input and the result will be from the input arithmetic operation ----- declare l_retval varchar2(200); ...
Categories: DBA Blogs

'Order by' within or outside View, gives same execution plan and results

Tom Kyte - Fri, 2017-02-24 08:46
==================================================== -- v_product view definition create or replace view v_product as select <<column_names>> from t_product order by col_1, col_2; --Below is the existing code running since many years in product...
Categories: DBA Blogs

Calling SQL file from batch file with return vairables

Tom Kyte - Fri, 2017-02-24 08:46
Hi.. I have to write a batch file which will call a sql file by passing one parameter. The sql file has to select two columns from a table and return them to the batch file and the batch file should display it on the screen. Can I get some inpu...
Categories: DBA Blogs

Clob and NClob

Tom Kyte - Fri, 2017-02-24 08:46
Sir, I want to know when I would use an NClob and cannot use a Clob? The character set of the database is now US7ASCII. I tested using some Chinese and Japanese characters. Looks like I can get back what I inserted in the Clob column, but for the NClob col...
Categories: DBA Blogs

Access updated column names in DDL alter trigger

Tom Kyte - Fri, 2017-02-24 08:46
I have a trigger create or replace TRIGGER audit_trigger_update AFTER ALTER OR CREATE ON SCHEMA WHEN (ORA_DICT_OBJ_TYPE = 'TABLE') BEGIN create_audit_trigger(ORA_DICT_OBJ_NAME); END audit_trigger_update; with calling procedure to re...
Categories: DBA Blogs

INTERVAL PARTITIONING with custom TABLESPACE names

Tom Kyte - Fri, 2017-02-24 08:46
We have a range partitioned table on daily basis. Every year, we are splitting the MAX partition into 365* days partitions such as: ALTER TABLE txn SPLIT PARTITION p_txn_max AT (to_date('01/02/2017','MM/DD/YYYY')) INTO (PARTITION ...
Categories: DBA Blogs

Update query using case and joins

Tom Kyte - Fri, 2017-02-24 08:46
<code>Table 1: select * from FV Table 2: select * From DB_FV_W UPDATE DB_FV_W Set FV_02 = (CASE WHEN db.FV_02 IS NULL THEN '0' Else fv.MD END) FROM DB_FV_W d LEFT OUTER JOIN FV f On...
Categories: DBA Blogs

Executing dynamic sql with

Tom Kyte - Fri, 2017-02-24 08:46
Hi Tom, I have a requirement wherein a sql saved in a table column (clob type) is executed in plsql . The results will then be used for further processing . SQLs to be executed will be selected based on some criteria. eg : Assume the table n...
Categories: DBA Blogs

Dump Oracle data into a delimited ascii file with PL/SQL

Amis Blog - Fri, 2017-02-24 08:30

This is how I dump data from an Oracle Database (tested on 8i, 9i, 10g, 11g and 12c) to a delimited ascii file:

SQL*Plus: Release 12.1.0.2.0 Production on Fri Feb 24 13:55:47 2017
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production

SQL> set timing on
SQL> select Dump_Delimited('select * from all_objects', 'all_objects.csv') nr_rows from dual;

   NR_ROWS
----------
     97116

Elapsed: 00:00:11.87
SQL> ! cat /u01/etl/report/all_objects_readme.txt


  *********************************************************************  
  Record Layout of file /u01/etl/report/all_objects.csv
  *********************************************************************  


  Column                          Sequence  MaxLength  Datatype  
  ------------------------------  --------  ---------  ----------  

  OWNER                           1         128        VARCHAR2                 
  OBJECT_NAME                     2         128        VARCHAR2                 
  SUBOBJECT_NAME                  3         128        VARCHAR2                 
  OBJECT_ID                       4         24         NUMBER                   
  DATA_OBJECT_ID                  5         24         NUMBER                   
  OBJECT_TYPE                     6         23         VARCHAR2                 
  CREATED                         7         20         DATE                     
  LAST_DDL_TIME                   8         20         DATE                     
  TIMESTAMP                       9         19         VARCHAR2                 
  STATUS                          10        7          VARCHAR2                 
  TEMPORARY                       11        1          VARCHAR2                 
  GENERATED                       12        1          VARCHAR2                 
  SECONDARY                       13        1          VARCHAR2                 
  NAMESPACE                       14        24         NUMBER                   
  EDITION_NAME                    15        128        VARCHAR2                 
  SHARING                         16        13         VARCHAR2                 
  EDITIONABLE                     17        1          VARCHAR2                 
  ORACLE_MAINTAINED               18        1          VARCHAR2                 


  ----------------------------------  
  Generated:     24-02-2017 13:56:50
  Generated by:  ETL
  Columns Count: 18
  Records Count: 97116
  Delimiter: ][
  Row Delimiter: ]
  ----------------------------------  

SQL> 

Next to the query and the generated filename, the Dump_Delimited function takes another 6 parameters, each one with a default value. Check out the PL/SQL, and BTW… the basics for this code come from Tom Kyte.

SET DEFINE OFF;
CREATE OR REPLACE DIRECTORY ETL_UNLOAD_DIR AS '/u01/etl/report';
GRANT READ, WRITE ON DIRECTORY ETL_UNLOAD_DIR TO ETL;

CREATE OR REPLACE FUNCTION Dump_Delimited
   ( P_query                IN VARCHAR2
   , P_filename             IN VARCHAR2
   , P_column_delimiter     IN VARCHAR2    := ']['
   , P_row_delimiter        IN VARCHAR2    := ']'
   , P_comment              IN VARCHAR2    := NULL
   , P_write_rec_layout     IN PLS_INTEGER := 1
   , P_dir                  IN VARCHAR2    := 'ETL_UNLOAD_DIR'
   , P_nr_is_pos_integer    IN PLS_INTEGER := 0 )
RETURN PLS_INTEGER
 IS
    filehandle             UTL_FILE.FILE_TYPE;
    filehandle_rc          UTL_FILE.FILE_TYPE;

    v_user_name            VARCHAR2(100);
    v_file_name_full       VARCHAR2(200);
    v_dir                  VARCHAR2(200);
    v_total_length         PLS_INTEGER := 0;
    v_startpos             PLS_INTEGER := 0;
    v_datatype             VARCHAR2(30);
    v_delimiter            VARCHAR2(10):= P_column_delimiter;
    v_rowdelimiter         VARCHAR2(10):= P_row_delimiter;

    v_cursorid             PLS_INTEGER := DBMS_SQL.OPEN_CURSOR;
    v_columnvalue          VARCHAR2(4000);
    v_ignore               PLS_INTEGER;
    v_colcount             PLS_INTEGER := 0;
    v_newline              VARCHAR2(32676);
    v_desc_cols_table      DBMS_SQL.DESC_TAB;
    v_dateformat           NLS_SESSION_PARAMETERS.VALUE%TYPE;
    v_stat                 VARCHAR2(1000);
    counter                PLS_INTEGER := 0;
BEGIN

    SELECT directory_path
      INTO v_dir 
    FROM DBA_DIRECTORIES
    WHERE directory_name = P_dir;
    v_file_name_full  := v_dir||'/'||P_filename;

    SELECT VALUE
      INTO v_dateformat
    FROM NLS_SESSION_PARAMETERS
    WHERE parameter = 'NLS_DATE_FORMAT';

    /* Use a date format that includes the time. */
    v_stat := 'alter session set nls_date_format=''dd-mm-yyyy hh24:mi:ss'' ';
    EXECUTE IMMEDIATE v_stat;

    filehandle := UTL_FILE.FOPEN( P_dir, P_filename, 'w', 32000 );

    /* Parse the input query so we can describe it. */
    DBMS_SQL.PARSE(  v_cursorid,  P_query, dbms_sql.native );

    /* Now, describe the outputs of the query. */
    DBMS_SQL.DESCRIBE_COLUMNS( v_cursorid, v_colcount, v_desc_cols_table );

    /* For each column, we need to define it, to tell the database
     * what we will fetch into. In this case, all data is going
     * to be fetched into a single varchar2(4000) variable.
     *
     * We will also adjust the max width of each column. 
     */
IF P_write_rec_layout = 1 THEN

   filehandle_rc := UTL_FILE.FOPEN(P_dir, SUBSTR(P_filename,1, INSTR(P_filename,'.',-1)-1)||'_readme.txt', 'w');

--Start Header
    v_newline := CHR(10)||CHR(10)||'  *********************************************************************  ';
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
       v_newline := '  Record Layout of file '||v_file_name_full;
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
    v_newline := '  *********************************************************************  '||CHR(10)||CHR(10);
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
    v_newline := '  Column                          Sequence  MaxLength  Datatype  ';
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
    v_newline := '  ------------------------------  --------  ---------  ----------  '||CHR(10);
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
--End Header

--Start Body
    FOR i IN 1 .. v_colcount
    LOOP
       DBMS_SQL.DEFINE_COLUMN( v_cursorid, i, v_columnvalue, 4000 );
       SELECT DECODE( v_desc_cols_table(i).col_type,  2, DECODE(v_desc_cols_table(i).col_precision,0,v_desc_cols_table(i).col_max_len,v_desc_cols_table(i).col_precision)+DECODE(P_nr_is_pos_integer,1,0,2)
                                                   , 12, 20, v_desc_cols_table(i).col_max_len )
         INTO v_desc_cols_table(i).col_max_len
       FROM dual;

       SELECT DECODE( TO_CHAR(v_desc_cols_table(i).col_type), '1'  , 'VARCHAR2'
                                                            , '2'  , 'NUMBER'
                                                            , '8'  , 'LONG'
                                                            , '11' , 'ROWID'
                                                            , '12' , 'DATE'
                                                            , '96' , 'CHAR'
                                                            , '108', 'USER_DEFINED_TYPE', TO_CHAR(v_desc_cols_table(i).col_type) )
         INTO v_datatype
       FROM DUAL;

       v_newline := RPAD('  '||v_desc_cols_table(i).col_name,34)||RPAD(i,10)||RPAD(v_desc_cols_table(i).col_max_len,11)||RPAD(v_datatype,25);
    UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
    END LOOP;
--End Body

ELSE

    FOR i IN 1 .. v_colcount LOOP
       DBMS_SQL.DEFINE_COLUMN( v_cursorid, i, v_columnvalue, 4000 );
       SELECT DECODE( v_desc_cols_table(i).col_type,  2, DECODE(v_desc_cols_table(i).col_precision,0,v_desc_cols_table(i).col_max_len,v_desc_cols_table(i).col_precision)+DECODE(P_nr_is_pos_integer,1,0,2)
                                                   , 12, 20, v_desc_cols_table(i).col_max_len )
         INTO v_desc_cols_table(i).col_max_len
       FROM dual;
     END LOOP;

END IF;

    v_ignore := DBMS_SQL.EXECUTE(v_cursorid);

     WHILE ( DBMS_SQL.FETCH_ROWS(v_cursorid) > 0 )
     LOOP
        /* Build up a big output line. This is more efficient than
         * calling UTL_FILE.PUT inside the loop.
         */
        v_newline := NULL;
        FOR i IN 1 .. v_colcount LOOP
            DBMS_SQL.COLUMN_VALUE( v_cursorid, i, v_columnvalue );
            if i = 1 then
              v_newline := v_newline||v_columnvalue;
            else
              v_newline := v_newline||v_delimiter||v_columnvalue;
            end if;              
        END LOOP;

        /* Now print out that line and increment a counter. */
        UTL_FILE.PUT_LINE( filehandle, v_newline||v_rowdelimiter );
        counter := counter+1;
    END LOOP;

IF P_write_rec_layout = 1 THEN

--Start Footer
    v_newline := CHR(10)||CHR(10)||'  ----------------------------------  ';
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
       v_newline := '  Generated:     '||SYSDATE;
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
       v_newline := '  Generated by:  '||USER;
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
       v_newline := '  Columns Count: '||v_colcount;
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
       v_newline := '  Records Count: '||counter;
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
       v_newline := '  Delimiter: '||v_delimiter;
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
       v_newline := '  Row Delimiter: '||v_rowdelimiter;
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
    v_newline := '  ----------------------------------  '||CHR(10)||CHR(10);
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
--End Footer

--Start Commment
    v_newline := '  '||P_comment;
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
--End Commment

UTL_FILE.FCLOSE(filehandle_rc);

END IF;

    /* Free up resources. */
    DBMS_SQL.CLOSE_CURSOR(v_cursorid);
    UTL_FILE.FCLOSE( filehandle );

    /* Reset the date format ... and return. */
    v_stat := 'alter session set nls_date_format=''' || v_dateformat || ''' ';
    EXECUTE IMMEDIATE v_stat;

    RETURN counter;
EXCEPTION
    WHEN OTHERS THEN
        DBMS_SQL.CLOSE_CURSOR( v_cursorid );
        EXECUTE IMMEDIATE 'alter session set nls_date_format=''' || v_dateformat || ''' ';
        RETURN counter;

END Dump_Delimited;
/

SHOW ERRORS;
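A quick usage sketch with some of the optional parameters (the query, filename and delimiter here are just examples; named notation keeps the call readable):

SET SERVEROUTPUT ON
DECLARE
  v_rows PLS_INTEGER;
BEGIN
  v_rows := Dump_Delimited(
              P_query            => 'select owner, object_name from all_objects',
              P_filename         => 'objects.csv',
              P_column_delimiter => ';',
              P_write_rec_layout => 0 );  -- skip the *_readme.txt record layout
  DBMS_OUTPUT.PUT_LINE(v_rows || ' rows written');
END;
/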

The post Dump Oracle data into a delimited ascii file with PL/SQL appeared first on AMIS Oracle and Java Blog.

Oracle E-Business Suite Mobile and Web Services Security - What You Need To Know

Securing packaged software such as the Oracle E-Business Suite presents different challenges than securing bespoke custom software. Unlike custom software, both the structure and the security vulnerabilities of the Oracle E-Business Suite are well known and documented, not only to users but also to threat actors.  To begin an attack, little probing and/or reconnaissance is needed because threat actors know exactly what to target and what to expect.  This also makes the Oracle E-Business Suite, like other ERP platforms, vulnerable to automated attacks. Threat actors only need to compromise one publicly facing URL or web service, which, given the size and complexity of the Oracle E-Business Suite, makes securing it a somewhat daunting task.

Starting with version 12.1 and continuing with 12.2, the Oracle E-Business Suite delivers a considerable amount of new web services and Mobile functionality as standard core functionality.  Much, if not most, of this new Mobile and web services functionality replicates functionality previously available only through the traditional user interface forms and/or public interfaces, and these new web services can be easily deployed on the Internet through a DMZ node.  The security implication of 12.2's increased web services capabilities is that the Oracle E-Business Suite's attack surface has grown and is harder to defend.

This blog series summarizes the new Mobile and web services functionality and reviews their security features before recommending best practices for using them securely.

If you have any questions, please contact us at info@integrigy.com

-Michael Miller, CISSP-ISSMP, CCSP, CCSK

Web Services
Categories: APPS Blogs, Security Blogs

ORA-06550 line N column N Solution

Complete IT Professional - Fri, 2017-02-24 05:00
Are you getting the ORA-06550 error when running an SQL statement? Learn the cause of this error and the solution in this article. Demonstration of the Error: to demonstrate this error, I can run this code:

CREATE OR REPLACE PROCEDURE testProcedure AS
  textValue VARCHAR2(3);
BEGIN
  textValue := someOtherValue;
END;

If I compile this […]
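For context (my gloss, not part of the excerpt): the assignment references someOtherValue, an identifier that was never declared, so compilation fails and ORA-06550 reports the offending line and column. Referencing a declared value fixes it:

CREATE OR REPLACE PROCEDURE testProcedure AS
  textValue VARCHAR2(3);
BEGIN
  textValue := 'abc';  -- a known value instead of the undeclared identifier
END;
/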
Categories: Development

Oracle 12c – Recreating a Controlfile in a Data Guard environment with noresetlogs

Yann Neuhaus - Fri, 2017-02-24 04:44

Sometimes you might run into situations where the controlfile does not represent the backups and archivelogs correctly, because of a mismatch between control_file_record_keep_time and the RMAN retention. The controlfile has non-circular and circular records. Non-circular records are e.g. database information, redo threads, datafiles and so on. These non-circular records don't age out; however, they can be reused, e.g. when a tablespace is dropped. The circular records are e.g. the log history, archived logs, backup sets, datafile copies and so on. These records can age out. So, when you have a control_file_record_keep_time of 7 days and an RMAN recovery window of 14 days, you obviously have a mismatch here. In 11gR2, Oracle stores 37 different record types in the control file, which can be checked with:

SELECT type FROM v$controlfile_record_section ORDER BY 1;

12cR1 stores 41 different record types; the AUXILIARY DATAFILE COPY, MULTI INSTANCE REDO APPLY, PDB RECORD and PDBINC RECORD types were added. In 12cR2 there are even more: the TABLESPACE KEY HISTORY record type was added, so you end up with 42 different record types in 12cR2.
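The same view also shows how full each section is, which makes it easy to spot the reuse pressure described below (a small sketch, not from the original post):

SELECT type, records_total, records_used
FROM   v$controlfile_record_section
WHERE  type IN ('BACKUP SET', 'BACKUP PIECE', 'ARCHIVED LOG');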

If RMAN needs to add a new backup set or archive log record to the control file, any records that have expired as per the control_file_record_keep_time parameter are overwritten. But coming back to my issue: my controlfile is out of sync with the recovery catalog, and in some situations you can't correct it anymore, even with delete force commands or the like, and you end up with an error like the following:

ORA-19633: control file record 8857 is out of sync with recovery catalog

There might be other solutions to fix it, but I want to have a clean control file, so I am recreating it manually. However, I don't want to open the DB with resetlogs.

The high-level steps to get this done are:

  • Disable everything that might interfere with your action e.g. Fast Start Failover, Broker and so on
  • Adjust your control_file_record_keep_time to a higher value
  • Create the controlfile to trace
  • Unregister from RMAN catalog
  • Shutdown immediate and re-create the controlfile
  • Re-catalog your backups and archivelogs (see the sketch after this list)
  • Re-register into the RMAN catalog
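For the re-catalog step, here is a minimal sketch of the idea (standard RMAN commands; whether SBT pieces need extra handling depends on your media manager):

RMAN> CATALOG RECOVERY AREA NOPROMPT;
RMAN> CATALOG START WITH '/u03/fast_recovery_area/DBIT121_SITE1/archivelog' NOPROMPT;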

Ok, let’s get started and disable fast start failover first. We don’t want that the observer to kick in and do any nasty stuff during my action.

DGMGRL> show configuration;

Configuration - DBIT121

  Protection Mode: MaxAvailability
  Members:
  DBIT121_SITE1 - Primary database
    DBIT121_SITE2 - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 2 seconds ago)

DGMGRL> disable fast_start failover;
Disabled.

As a next step, I increase control_file_record_keep_time to a much higher value. The formula is usually CONTROL_FILE_RECORD_KEEP_TIME = retention period + level 0 backup interval + 1. Meaning that with a retention period of 24 days and a weekly level 0 backup, it would be at least 24+7+1, so 32. But I don't care whether my controlfile is 20MB or 30MB in size, so I set it directly to 72 days.

-- Primary

SQL> alter system set control_file_record_keep_time=72;

System altered.

-- Standby

SQL> alter system set control_file_record_keep_time=72;

System altered.

The next important step is to create a trace of the controlfile, which can be adjusted manually later on, depending on your needs. Beforehand, I specify a tracefile identifier, so that I can easily spot my trace file in the DIAG destination.

SQL> alter session set tracefile_identifier='control';

Session altered.

SQL> alter database backup controlfile to trace noresetlogs;

Database altered.

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit121_site1/DBIT121/trace/ [DBIT121] ls -rlt | grep control
-rw-r----- 1 oracle oinstall     101 Feb 24 09:10 DBIT121_ora_25050_control.trm
-rw-r----- 1 oracle oinstall    9398 Feb 24 09:10 DBIT121_ora_25050_control.trc

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit121_site1/DBIT121/trace/ [DBIT121] mv DBIT121_ora_25050_control.trc /u01/app/oracle/admin/DBIT121/create/recreate_controlfile.sql

Let’s take a look at the control file trace which was created. It contains nearly everything that we need. Some parts might have to be adjusted, and some parts do not work at all or have to be done in a different way, but we will see later. But in general it is a very good starting point to get the job done.

oracle@dbidg01:/u01/app/oracle/admin/DBIT121/create/ [DBIT121] cat recreate_controlfile.sql
-- The following are current System-scope REDO Log Archival related
-- parameters and can be included in the database initialization file.
--
-- LOG_ARCHIVE_DEST=''
-- LOG_ARCHIVE_DUPLEX_DEST=''
--
-- LOG_ARCHIVE_FORMAT=%t_%s_%r.dbf
--
-- DB_UNIQUE_NAME="DBIT121_SITE1"
--
-- LOG_ARCHIVE_CONFIG='SEND, RECEIVE'
-- LOG_ARCHIVE_CONFIG='DG_CONFIG=("DBIT121_SITE2")'
-- LOG_ARCHIVE_MAX_PROCESSES=4
-- STANDBY_FILE_MANAGEMENT=AUTO
-- STANDBY_ARCHIVE_DEST=?/dbs/arch
-- FAL_CLIENT=''
-- FAL_SERVER=DBIT121_SITE2
--
-- LOG_ARCHIVE_DEST_2='SERVICE=DBIT121_SITE2'
-- LOG_ARCHIVE_DEST_2='OPTIONAL REOPEN=300 NODELAY'
-- LOG_ARCHIVE_DEST_2='LGWR AFFIRM NOVERIFY ASYNC=0'
-- LOG_ARCHIVE_DEST_2='REGISTER NOALTERNATE NODEPENDENCY'
-- LOG_ARCHIVE_DEST_2='NOMAX_FAILURE NOQUOTA_SIZE NOQUOTA_USED'
-- LOG_ARCHIVE_DEST_2='DB_UNIQUE_NAME=DBIT121_SITE2'
-- LOG_ARCHIVE_DEST_2='VALID_FOR=(STANDBY_LOGFILE,ONLINE_LOGFILES)'
-- LOG_ARCHIVE_DEST_STATE_2=ENABLE
--
-- LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
-- LOG_ARCHIVE_DEST_1='OPTIONAL REOPEN=300 NODELAY'
-- LOG_ARCHIVE_DEST_1='ARCH NOAFFIRM NOVERIFY SYNC'
-- LOG_ARCHIVE_DEST_1='REGISTER NOALTERNATE NODEPENDENCY'
-- LOG_ARCHIVE_DEST_1='NOMAX_FAILURE NOQUOTA_SIZE NOQUOTA_USED NODB_UNIQUE_NAME'
-- LOG_ARCHIVE_DEST_1='VALID_FOR=(PRIMARY_ROLE,ONLINE_LOGFILES)'
-- LOG_ARCHIVE_DEST_STATE_1=ENABLE
--
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- Additional logs may be required for media recovery of offline
-- Use this only if the current versions of all online logs are
-- available.
-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
--  ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "DBIT121" NORESETLOGS FORCE LOGGING ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 (
    '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_1_d4fpnop9_.log',
    '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_1_d4fpnq4o_.log'
  ) SIZE 50M BLOCKSIZE 512,
  GROUP 2 (
    '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_2_d4fpo42k_.log',
    '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_2_d4fpo43q_.log'
  ) SIZE 50M BLOCKSIZE 512,
  GROUP 3 (
    '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_3_d4fppn86_.log',
    '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_3_d4fppngb_.log'
  ) SIZE 50M BLOCKSIZE 512
-- STANDBY LOGFILE
--   GROUP 4 (
--     '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log',
--     '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t89m_.log'
--   ) SIZE 50M BLOCKSIZE 512,
--   GROUP 5 (
--     '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_5_dbx3tj3b_.log',
--     '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_5_dbx3tj8m_.log'
--   ) SIZE 50M BLOCKSIZE 512,
--   GROUP 6 (
--     '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_6_dbx3tp52_.log',
--     '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_6_dbx3tpb4_.log'
--   ) SIZE 50M BLOCKSIZE 512,
--   GROUP 7 (
--     '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_7_dbx3twdq_.log',
--     '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_7_dbx3twkt_.log'
--   ) SIZE 50M BLOCKSIZE 512
DATAFILE
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_system_d4fjt03j_.dbf',
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_sysaux_d4fjrlvs_.dbf',
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_undotbs1_d4fjvtd1_.dbf',
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_example_d4fjz1fz_.dbf',
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_users_d4fjvqb1_.dbf'
CHARACTER SET AL32UTF8
;
-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO RECOVERY WINDOW OF 14 DAYS');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('BACKUP OPTIMIZATION','ON');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ARCHIVELOG DELETION POLICY','TO SHIPPED TO ALL STANDBY BACKED UP 1 TIMES TO DISK');
-- Configure RMAN configuration record 4
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET');
-- Configure RMAN configuration record 5
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RMAN OUTPUT','TO KEEP FOR 32 DAYS');
-- Configure RMAN configuration record 6
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DB_UNIQUE_NAME','''DBIT121_SITE1'' CONNECT IDENTIFIER  ''DBIT121_SITE1''');
-- Configure RMAN configuration record 7
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DB_UNIQUE_NAME','''DBIT121_SITE2'' CONNECT IDENTIFIER  ''DBIT121_SITE2''');
-- Configure RMAN configuration record 8
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','DEVICE TYPE ''SBT_TAPE'' PARMS  ''SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/u99/backup/DBIT121)''');
-- Configure RMAN configuration record 9
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','''SBT_TAPE'' PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET');
-- Configure RMAN configuration record 10
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEFAULT DEVICE TYPE TO','DISK');
-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE '/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_1_%u_.arc';
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE
-- Block change tracking was enabled, so re-enable it now.
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
USING FILE '/u02/oradata/DBIT121_SITE1/changetracking/o1_mf_dbx3wgqg_.chg' REUSE;
-- All logs need archiving and a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;
-- Database can now be opened normally.
ALTER DATABASE OPEN;
-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_temp_d4fjxn8l_.tmp'
     SIZE 206569472  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M;
-- End of tempfile additions.
--
--
--
----------------------------------------------------------
-- The following script can be used on the standby database
-- to re-populate entries for a standby controlfile created
-- on the primary and copied to the standby site.
----------------------------------------------------------
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log'
 SIZE 50M BLOCKSIZE 512 REUSE;
ALTER DATABASE ADD STANDBY LOGFILE MEMBER '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t89m_.log'
                                       TO '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log';
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_5_dbx3tj3b_.log'
 SIZE 50M BLOCKSIZE 512 REUSE;
ALTER DATABASE ADD STANDBY LOGFILE MEMBER '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_5_dbx3tj8m_.log'
                                       TO '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_5_dbx3tj3b_.log';
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_6_dbx3tp52_.log'
 SIZE 50M BLOCKSIZE 512 REUSE;
ALTER DATABASE ADD STANDBY LOGFILE MEMBER '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_6_dbx3tpb4_.log'
                                       TO '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_6_dbx3tp52_.log';
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_7_dbx3twdq_.log'
 SIZE 50M BLOCKSIZE 512 REUSE;
ALTER DATABASE ADD STANDBY LOGFILE MEMBER '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_7_dbx3twkt_.log'
                                       TO '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_7_dbx3twdq_.log';

I am also stopping the broker to avoid any side effects, and afterwards I unregister the database from the RMAN catalog. I will re-register it later on, so that the catalog contains clean entries.

-- primary

SQL> alter system set dg_broker_start=false;

System altered.

oracle@dbidg01:/home/oracle/ [DBIT121] rman target sys/manager catalog rman/rman@rman

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Feb 24 09:16:17 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBIT121 (DBID=172831209)
connected to recovery catalog database
recovery catalog schema release 12.02.00.01. is newer than RMAN release

RMAN> unregister database;

database name is "DBIT121" and DBID is 172831209

Do you really want to unregister the database (enter YES or NO)? YES
database unregistered from the recovery catalog

RMAN>

The next step is very important. We need to shut down the DB cleanly, either with NORMAL or IMMEDIATE. Afterwards, I keep the current controlfiles by renaming them; you never know, it is always good to have another fallback.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

oracle@dbidg01:/home/oracle/ [DBIT121] cd /u02/oradata/DBIT121_SITE1/controlfile/
oracle@dbidg01:/u02/oradata/DBIT121_SITE1/controlfile/ [DBIT121] mv o1_mf_d4fjws55_.ctl o1_mf_d4fjws55_.ctl.old
oracle@dbidg01:/u02/oradata/DBIT121_SITE1/controlfile/ [DBIT121] cd /u03/fast_recovery_area/DBIT121_SITE1/controlfile/
oracle@dbidg01:/u03/fast_recovery_area/DBIT121_SITE1/controlfile/ [DBIT121] mv o1_mf_d4fjwsgr_.ctl o1_mf_d4fjwsgr_.ctl.old

Now we can start up in nomount mode and recreate our controlfile from scratch. It is very important that you specify REUSE and NORESETLOGS here.

SQL> startup nomount
ORACLE instance started.

Total System Global Area 1325400064 bytes
Fixed Size                  2924112 bytes
Variable Size             436208048 bytes
Database Buffers          872415232 bytes
Redo Buffers               13852672 bytes

SQL> CREATE CONTROLFILE REUSE DATABASE "DBIT121" NORESETLOGS FORCE LOGGING ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 (
    '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_1_d4fpnop9_.log',
    '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_1_d4fpnq4o_.log'
  ) SIZE 50M BLOCKSIZE 512,
  GROUP 2 (
    '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_2_d4fpo42k_.log',
    '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_2_d4fpo43q_.log'
  ) SIZE 50M BLOCKSIZE 512,
  GROUP 3 (
    '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_3_d4fppn86_.log',
    '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_3_d4fppngb_.log'
  ) SIZE 50M BLOCKSIZE 512
DATAFILE
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_system_d4fjt03j_.dbf',
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_sysaux_d4fjrlvs_.dbf',
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_undotbs1_d4fjvtd1_.dbf',
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_example_d4fjz1fz_.dbf',
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_users_d4fjvqb1_.dbf'
CHARACTER SET AL32UTF8
;

Control file created.

SQL>

Now we can configure the RMAN persistent settings like retention and so on.

-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO RECOVERY WINDOW OF 14 DAYS');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('BACKUP OPTIMIZATION','ON');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ARCHIVELOG DELETION POLICY','TO SHIPPED TO ALL STANDBY BACKED UP 1 TIMES TO DISK');
-- Configure RMAN configuration record 4
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET');
-- Configure RMAN configuration record 5
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RMAN OUTPUT','TO KEEP FOR 32 DAYS');
-- Configure RMAN configuration record 6
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DB_UNIQUE_NAME','''DBIT121_SITE1'' CONNECT IDENTIFIER  ''DBIT121_SITE1''');
-- Configure RMAN configuration record 7
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DB_UNIQUE_NAME','''DBIT121_SITE2'' CONNECT IDENTIFIER  ''DBIT121_SITE2''');
-- Configure RMAN configuration record 8
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','DEVICE TYPE ''SBT_TAPE'' PARMS  ''SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/u99/backup/DBIT121)''');
-- Configure RMAN configuration record 9
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','''SBT_TAPE'' PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET');
-- Configure RMAN configuration record 10
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEFAULT DEVICE TYPE TO','DISK');

The next step is to re-create the incarnation table. This might fail with a recursive SQL error (ORA-00604) if you use the SQL provided in the trace file. Just use REGISTER PHYSICAL LOGFILE instead of REGISTER LOGFILE and then it works.

-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE '/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_1_%u_.arc';

SQL> ALTER DATABASE REGISTER LOGFILE '/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_142_dbzv31hq_.arc';
ALTER DATABASE REGISTER LOGFILE '/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_142_dbzv31hq_.arc'
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level


SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE '/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_142_dbzv31hq_.arc';

Database altered.

Because I have shut down the database cleanly, there is no need to do any recovery, and I can continue to enable block change tracking, open the database, and add my tempfile back to the database.

SQL> recover database;
ORA-00283: recovery session canceled due to errors
ORA-00264: no recovery required

SQL> ALTER SYSTEM ARCHIVE LOG ALL;

System altered.

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u02/oradata/DBIT121_SITE1/changetracking/o1_mf_dbx3wgqg_.chg' REUSE;

Database altered.

SQL> ALTER DATABASE OPEN;

Database altered.

SQL>

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_temp_d4fjxn8l_.tmp'
  2  SIZE 206569472  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M;

Tablespace altered.

Regarding the standby redo logs, the easiest approach is to remove the old ones and simply recreate them afterwards, because you can’t add them back as long as they have Oracle Managed Files names.

SQL> select * from v$standby_log;

no rows selected

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log' SIZE 50M BLOCKSIZE 512 REUSE;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log' SIZE 50M BLOCKSIZE 512 REUSE
*
ERROR at line 1:
ORA-01276: Cannot add file
/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log.  File has an Oracle
Managed Files file name.

-- delete standby redo logs

oracle@dbidg01:/u01/app/oracle/admin/DBIT121/create/ [DBIT121] cd /u02/oradata/DBIT121_SITE1/onlinelog/
oracle@dbidg01:/u02/oradata/DBIT121_SITE1/onlinelog/ [DBIT121] ls -l
total 358428
-rw-r----- 1 oracle oinstall 52429312 Feb 24 09:42 o1_mf_1_d4fpnop9_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 24 09:42 o1_mf_2_d4fpo42k_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 24 09:47 o1_mf_3_d4fppn86_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_4_dbx3t840_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_5_dbx3tj3b_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_6_dbx3tp52_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_7_dbx3twdq_.log
oracle@dbidg01:/u02/oradata/DBIT121_SITE1/onlinelog/ [DBIT121] rm o1_mf_4_dbx3t840_.log o1_mf_5_dbx3tj3b_.log o1_mf_6_dbx3tp52_.log o1_mf_7_dbx3twdq_.log
oracle@dbidg01:/u02/oradata/DBIT121_SITE1/onlinelog/ [DBIT121] cd /u03/fast_recovery_area/DBIT121_SITE1/onlinelog/
oracle@dbidg01:/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/ [DBIT121] ls -l
total 358428
-rw-r----- 1 oracle oinstall 52429312 Feb 24 09:42 o1_mf_1_d4fpnq4o_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 24 09:42 o1_mf_2_d4fpo43q_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 24 09:47 o1_mf_3_d4fppngb_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_4_dbx3t89m_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_5_dbx3tj8m_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_6_dbx3tpb4_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_7_dbx3twkt_.log
oracle@dbidg01:/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/ [DBIT121] rm o1_mf_4_dbx3t89m_.log o1_mf_5_dbx3tj8m_.log o1_mf_6_dbx3tpb4_.log o1_mf_7_dbx3twkt_.log

-- recreate standby redo logs

SQL> alter database add STANDBY LOGFILE THREAD 1 GROUP 4 SIZE 50M BLOCKSIZE 512;

Database altered.

SQL> alter database add STANDBY LOGFILE THREAD 1 GROUP 5 SIZE 50M BLOCKSIZE 512;

Database altered.

SQL> alter database add STANDBY LOGFILE THREAD 1 GROUP 6 SIZE 50M BLOCKSIZE 512;

Database altered.

SQL> alter database add STANDBY LOGFILE THREAD 1 GROUP 7 SIZE 50M BLOCKSIZE 512;

Database altered.

Don’t forget to enable Flashback Database as well; it is required for fast-start failover, which we re-enable below.

SQL> alter database flashback on;

Database altered.

Now we need to catalog all our backups and archivelogs again.

oracle@dbidg01:/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/ [DBIT121] rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Feb 24 09:50:16 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBIT121 (DBID=172831209)

RMAN> catalog recovery area;

using target database control file instead of recovery catalog
searching for all files in the recovery area

List of Files Unknown to the Database
=====================================
File Name: /u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_140_dbzswh06_.arc
File Name: /u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_141_dbzsxpv5_.arc
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbx641px_.flb
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbx642pf_.flb
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dby398lz_.flb
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbymcg20_.flb
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbyyg1r0_.flb
File Name: /u03/fast_recovery_area/DBIT121_SITE1/controlfile/o1_mf_d4fjwsgr_.ctl.old
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090854_dbx64pz6_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090854_dbx64q0b_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnndf_TAG20170223T090856_dbx64s0z_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnndf_TAG20170223T090856_dbx64s3n_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnsnf_TAG20170223T090856_dbx65kmx_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_ncnnf_TAG20170223T090856_dbx65lnt_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090923_dbx65mto_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpypdc_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpypfp_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpysqh_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnndf_TAG20170224T080812_dbzpyy2f_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnndf_TAG20170224T080812_dbzpyy56_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnsnf_TAG20170224T080812_dbzpzqnz_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_ncnnf_TAG20170224T080812_dbzpzqop_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080841_dbzpzskt_.bkp

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_140_dbzswh06_.arc
File Name: /u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_141_dbzsxpv5_.arc
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090854_dbx64pz6_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090854_dbx64q0b_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnndf_TAG20170223T090856_dbx64s0z_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnndf_TAG20170223T090856_dbx64s3n_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnsnf_TAG20170223T090856_dbx65kmx_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_ncnnf_TAG20170223T090856_dbx65lnt_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090923_dbx65mto_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpypdc_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpypfp_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpysqh_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnndf_TAG20170224T080812_dbzpyy2f_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnndf_TAG20170224T080812_dbzpyy56_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnsnf_TAG20170224T080812_dbzpzqnz_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_ncnnf_TAG20170224T080812_dbzpzqop_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080841_dbzpzskt_.bkp

List of Files Which Were Not Cataloged
=======================================
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbx641px_.flb
  RMAN-07529: Reason: catalog is not supported for this file type
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbx642pf_.flb
  RMAN-07529: Reason: catalog is not supported for this file type
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dby398lz_.flb
  RMAN-07529: Reason: catalog is not supported for this file type
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbymcg20_.flb
  RMAN-07529: Reason: catalog is not supported for this file type
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbyyg1r0_.flb
  RMAN-07529: Reason: catalog is not supported for this file type
File Name: /u03/fast_recovery_area/DBIT121_SITE1/controlfile/o1_mf_d4fjwsgr_.ctl.old
  RMAN-07519: Reason: Error while cataloging. See alert.log.

List of files in Recovery Area not managed by the database
==========================================================
File Name: /u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_4_dbzwt72f_.log
  RMAN-07527: Reason: File was not created using DB_RECOVERY_FILE_DEST initialization parameter
File Name: /u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_5_dbzwtgl3_.log
  RMAN-07527: Reason: File was not created using DB_RECOVERY_FILE_DEST initialization parameter
File Name: /u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_6_dbzwtn04_.log
  RMAN-07527: Reason: File was not created using DB_RECOVERY_FILE_DEST initialization parameter
File Name: /u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_7_dbzwtvc7_.log
  RMAN-07527: Reason: File was not created using DB_RECOVERY_FILE_DEST initialization parameter

number of files not managed by recovery area is 4, totaling 200.00MB

RMAN>

We are nearly done. We just need to enable the Data Guard broker again and enable fast-start failover.

SQL> alter system set dg_broker_start=true;

System altered.

SQL> alter system archive log current;

System altered.

DGMGRL> enable fast_start failover;
Enabled.
DGMGRL> show configuration;

Configuration - DBIT121

 Protection Mode: MaxAvailability
 Members:
 DBIT121_SITE1 - Primary database
 DBIT121_SITE2 - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS (status updated 21 seconds ago)

DGMGRL> validate database 'DBIT121_SITE2';

 Database Role: Physical standby database
 Primary Database: DBIT121_SITE1

 Ready for Switchover: Yes
 Ready for Failover: Yes (Primary Running)

Re-register the database into the RMAN catalog.

oracle@dbidg01:/home/oracle/ [DBIT121] rman target sys/manager catalog rman/rman@rman

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Feb 24 09:57:34 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBIT121 (DBID=172831209)
connected to recovery catalog database
recovery catalog schema release 12.02.00.01. is newer than RMAN release

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

RMAN>

And that’s it. If your standby controlfile is not correct either (it was not in my case), you can now simply create a new standby controlfile on the primary and move it to the standby, as documented in the MOS note "Steps to recreate a Physical Standby Controlfile" (Doc ID 459411.1).
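
For reference, here is a minimal sketch of that procedure, assuming locations like the ones used in this post (paths are examples, not the exact commands from the note):

-- On the primary: create a fresh standby controlfile
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/standby.ctl';
-- Copy /tmp/standby.ctl to the standby host, shut down the standby instance,
-- replace the standby controlfile copies with this file, then:
STARTUP MOUNT;
-- and restart managed recovery afterwards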

Conclusion

As a last resort, it is sometimes necessary to recreate the controlfile manually, but as long as you have all the online redo logs and your datafiles in place, you can do it with NORESETLOGS. And take care that your CONTROL_FILE_RECORD_KEEP_TIME is always higher than your RMAN retention, so that backup records do not age out of the controlfile.
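
A quick sanity check for that last point, assuming the 14-day recovery window configured above (the value 21 is just an example):

SQL> show parameter control_file_record_keep_time
-- should be greater than the RMAN recovery window; if it is not:
SQL> alter system set control_file_record_keep_time=21 scope=both;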

 

The article Oracle 12c – Recreating a Controlfile in a Data Guard environment with noresetlogs appeared first on Blog dbi services.

Analysing Elections Data with Oracle Data Visualisation Desktop

Rittman Mead Consulting - Fri, 2017-02-24 04:38

Disclaimer #1 This post is not about politics. Its dataset is about politics, but that's a coincidence. It could be about immunisation or transport or anything else. If you are strictly against any politics, here is a link to watch cute kittens.

Introduction

Let's pretend that I'm an analyst who got a supposedly interesting data set. Now I want to understand whether the data is actually interesting or total rubbish. Having been in IT for some time, I could use tools and technologies which a typical end user can’t access. But this time I pretend I’m an ordinary analyst who has only the data and desktop tools. And my task is to do some research and tell whether there are anomalies in the data or everything looks the way it is supposed to.
The main tool for this work is obviously Oracle Data Visualisation Desktop (DVD). And, as a supplementary tool, I use Excel. This post is not a guide to any particular DVD feature. It won’t give step-by-step instructions or anything like that. The main idea is to show how we can use Oracle Data Visualisation for the analysis of a real dataset: not simply to show that we can build bar charts, pie charts and other fancy charts, but to show how we can get insights from the data.

The Data

I should say a few words about the data. It is the official result of the Russian State Duma (parliament) elections in 2016. Half of the Duma was elected by party lists, and for this post I took that data. I should confess that I cheated a little and decided not to spend my time downloading and parsing the data piece by piece from the official site, and took a prepared set instead.

From a bird's-eye view I have the following data:

  1. Voting results by election commissions: number of votes for every political party and a lot of technical measures like number of registered voters, number of good and damaged voting ballots and so on.
  2. Turnout figures at given times throughout the day.

From a more technical point of view, the data was stored in two big files with multiple JSON documents in each. As the data preparation part is big enough, it was extracted to another post. This one concentrates purely on visualisation, and the second one is about data flows and a comparison to Excel.

Analysing the Data

I did some cleaning, refining and enriching of the data, and it's time to use it. I started with a standard Stacked bar chart. It shows the percentages of parties by region and, in addition, the width of the bars shows Total votes. The chart is sorted by ascending Total votes.

What can we say here?

Before I start talking I need a lawyer and a disclaimer #2:

Disclaimer #2 Some of the results may be interpreted in different ways. Some of them may be not so pleasant for some people. But I'm not a court and this post is only a data visualisation exercise. Therefore I'm not accusing anyone of committing any crimes. I will make some conclusions because of rules of the genre, but they should be treated as hypotheses only.

That’s not a proven charge (see disclaimer #2), but for me these regions look a bit suspicious. Their results are very uncommon. The ruling United Russia party (orange bars) got an extremely high result in these few regions. This may be a sign of some kind of interference with the election process there. But of course, other explanations (including a measure incorrectly selected for sorting) exist.

Just for reference, so we don’t forget the names: Tatarstan, Kemerovo, Dagestan, Chechnya and Mordovia. There are a few more regions with similar results, but their number of voters is lower, so I don’t show them here.

At this point I'm starting to suspect something. But I need more visuals to support my position, and my next hypothesis is that in these regions ballots were somehow added to voting boxes (or protocols were changed, which is basically the same). From a data visualisation point of view, that would mean these regions show a higher turnout (because of the added ballots) along with a higher United Russia result.

To check this I need one more measure - Turnout, %. It shows how many of the registered voters actually voted. I can create this field without leaving visualisation mode. Cool.
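
The exact calculated-field expression is only visible in the screenshot; expressed as a rough SQL sketch (table and column names are made up for illustration), the measure boils down to:

-- turnout per precinct commission: share of registered voters who actually voted
SELECT pec_no,
       100 * ballots_cast / registered_voters AS turnout_pct
  FROM precinct_results;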

Note: this formula may not be absolutely correct, but it works well for demonstration purposes.

In order to visualise this hypothesis, I built a Scatter chart. Its horizontal axis is Turnout, % and its vertical axis is United Russia, %. And I added a trend line to make things more obvious. Colour density shows Total votes.

I think my hypothesis just got strong support. As usual, it is not an absolutely impossible situation. But it's hard to explain why the more people come to voting stations, the higher one party's result is. I'd expect either a high result that doesn't depend on the turnout (more or less like a Uniform distribution) or at least a few exceptions with high turnout and a low result.

I consider this result strange because in real life I'd expect that a higher turnout means more opposition voters (a passive group indeed) coming to voting stations. But that's only my opinion, and a highly arguable one, I should agree. What I really want to show here is that these charts highlight an oddity that should be investigated and may or may not have a rational explanation.

And who are our heroes? Let’s zoom in on the chart …and we see the same regions.

But maybe other parties can show the same result? We can build the same Scatter chart for every party, or we can visualise them all at once with a usual Line chart. Here I’ve plotted the percentage of the vote won by each party (Y-axis) against the overall turnout % (X-axis).
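
In SQL terms (again with illustrative table and column names), this chart answers a query like the one below, one line per party:

-- vote share of each party within each 1% turnout bin
SELECT ROUND(turnout_pct)                        AS turnout_bin,
       party,
       100 * SUM(party_votes) / SUM(total_votes) AS vote_share_pct
  FROM precinct_party_results
 GROUP BY ROUND(turnout_pct), party
 ORDER BY turnout_bin, party;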

United Russia is the only party that increases with turnout.

So far all my charts were about relative measures; it's time to check some absolute values. Here is a Bar chart which shows the number of precinct commissions by result. I'd expect to see something close to a normal distribution - a bell-shaped chart with the maximum around 54% (the average turnout). Now, look at the real chart (bin size = 1.0%). Its maximum is at 36-37%.

Normal distribution

In probability theory, the normal (or Gaussian) distribution is a very common continuous probability distribution. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known.

Strictly speaking, all numbers I'm showing here are discrete and I should say Binomial distribution rather than Normal, but right now, for my purposes, the difference is not that big.
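
For reference, the de Moivre-Laplace theorem is what makes the two interchangeable here: for a large number of voters n and probability p, the binomial probabilities are well approximated by a normal density:

P(X = k) = \binom{n}{k} p^k (1-p)^{n-k} \approx \frac{1}{\sqrt{2\pi n p (1-p)}}\, \exp\!\left(-\frac{(k - np)^2}{2 n p (1-p)}\right)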

I'm definitely not Carl Gauss (and not even one of his best students) and you may ignore my opinion, but I expected something more like this:

And I don't have the slightest idea how it is possible that the most "popular" turnout is 100%.

What if we look at the same chart in more detail? The previous one was grouped into 1% bins, and this one has 0.1% bins. And I had to add a "turnout not equal to 100%" filter: even with the smaller bin size, the last bar stays almost the same, ~3K commissions, and it is so much bigger than the others that the chart doesn't show anything without the filter.
What can we see here? Well, people aren’t really good at generating random numbers. It's perfectly OK to have some outliers on the chart. But hey, it's not normal to have them mostly at round values. That looks like someone was trying to fit the result to some plan.

Here is my favourite part of the chart. I marked 1% intervals, and you can see that a round turnout value is always more probable than its closest non-round neighbours. No exceptions. A possible explanation is that the number of commissions with results that high is relatively low, so even the slightest manipulation is clearly visible.
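
The round-value effect is easy to double-check outside DVD too; here is a sketch of the 0.1% binning in SQL (same illustrative names as before):

-- number of commissions per 0.1% turnout bin, excluding the huge 100% bar;
-- spikes at whole-number bins (60.0, 61.0, ...) are the anomaly to look for
SELECT ROUND(turnout_pct, 1) AS turnout_bin,
       COUNT(*)              AS commissions
  FROM precinct_results
 WHERE turnout_pct < 100
 GROUP BY ROUND(turnout_pct, 1)
 ORDER BY turnout_bin;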

But wait. What about that 64.3 percent? It’s not round, but it is a big outlier. Let’s take a closer look at this value and check whether there is anything interesting or whether it is a normal situation. Here are a few interesting visualisations for it.

The first one is a Tree Diagram. It shows all existing combinations of district and precinct commissions by region for the filtered data (turnout = 64.3%). And in order to demonstrate how it works for this case, I made an animation. Most of the regions have a few commissions with 64.3% turnout. But the Saratov region beats them all.

This visualisation has a serious flaw: the end user has to scroll it (for this set of data, I mean) and can miss the point. Another visualisation can improve the situation. A Network diagram doesn't need scrolling.

It looks good and shows exactly the same thing. But for this chart we must ensure that every data point is unique, which is not true in my case. Different precinct commissions have the same numbers, so I had to create a unique field first (DEC #||PEC #). It's easy to forget this and get unpredictable or even misleading results.
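
A one-line illustration of that derived key (hypothetical column names; adding a separator avoids accidental collisions such as 1||23 vs 12||3):

-- build a key that is unique across all regions before feeding the chart
SELECT dec_no || '-' || pec_no AS commission_key, turnout_pct
  FROM precinct_results;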

Or if you prefer more traditional charts, here is a Sunburst for you. Its sector size shows Total votes and the colour shows the PEC count. It gives a good representation of how uncommon Saratov's result is.

And the last picture for the same topic is the boring, never-old, classic Bar chart.

Considering all these charts, I'd say that the almost exclusive concentration of commissions with 64.3% turnout in Saratov doesn't look normal to me. It's pretty weird that sibling commissions show exactly the same figures.

A few more diagrams which could work well are Sankey and Parallel coordinates; unfortunately, they are less informative because of the high number of precinct commissions. But if the number were lower, I'd consider them too.

All previous charts are based on the voting data. But I have one more dataset - the official turnout. Let's check if we can find anything interesting there. Unfortunately, a significant part of the commissions doesn't have official data, and sometimes I may use formulas that are not exactly the same as the official ones, so the numbers may differ slightly from what I got from the protocols data.

The first chart shows the number of commissions (vertical axis) by the official turnout (horizontal axis). Colour shows the moment in time. Strictly speaking, I shouldn't have used continuous line charts for discrete values, but coloured overlapping bars don't give as clear a picture.

Except for the 100% tail, everything is more or less natural. The graph shape looks more like a Gamma distribution than a Normal one, but I didn't test it.

What picture do I have for various regions?
Moscow city is known for a relatively high number of poll watchers, and we may expect cleaner data there. Ignoring the long tail, these look normal (or binomial, if you want to be precise).

The Saratov region. The one with the 64.3% turnout. Look at these peaks. Do they look natural to you?
Do you remember Tatarstan (the hero at the beginning of this story)? Here it is. I don't know how anyone can explain how this is possible (without manual adjustment of the results, I mean).

Summary

This post shows how we can use Oracle DVD for the visualisation of a real data set. And I hope I was able to convince you that this tool can be useful and can give you really interesting insights. Of course, visualisation alone doesn't answer all questions. And this time it was actually less about answers and more about questions. It helps to ask the right questions.

More reading on the topic: 1, 2 (Russian language). If you can read Russian, there you will find more visualisations, discussions and interesting information. And this article is about the elections in 2011. Its indisputable advantage is that it is in English.

Categories: BI & Warehousing

EBS Sessions at Alliance 2017

Steven Chan - Fri, 2017-02-24 02:05

Senior EBS Development staff (including me) are presenting at the Higher Education User Group (HEUG) Alliance 2017 conference in Las Vegas from February 27 to March 2.  This is your opportunity to get the latest updates on our E-Business Suite roadmap.  Check out the Alliance session catalog for a complete list of all Oracle EBS Development sessions.

We are also hosting a Meet The Experts roundtable where you can get your questions answered directly by EBS Development senior staff (including me):

  • Wednesday, March 1 at 3:30 PM
  • Third Floor Pre-function area in Conference Center, Table 6



Categories: APPS Blogs
