Feed aggregator

NEW OTN Virtual Technology Summit Sessions Coming!

OTN TechBlog - Mon, 2016-02-08 12:36

Join us for free hands-on learning with Oracle and community experts! The Oracle Technology Network invites you to attend one of our next Virtual Technology Summits on March 8th, March 15th and April 5th. Oracle ACEs, Java Champions and Oracle product experts will share their insights and expertise through hands-on labs (HOL), highly technical presentations and demos. This interactive, online event offers four technical tracks:

Database: The database track provides the latest updates and in-depth topics covering Oracle Database 12c Advanced Options, new-generation application development with JSON, Node.js and Oracle Database Cloud, as well as sessions dedicated to the most recent capabilities of MySQL, benefiting both Oracle and MySQL DBAs and developers.

Middleware: The middleware track is aimed at developers focused on gaining new skills and expertise in emerging technology areas such as the Internet of Things (IoT), Mobile and PaaS. This track also provides the latest updates on Oracle WebLogic 12.2.1 and Java EE.
Java: In this track, we will show you improvements to the Java platform and APIs. You’ll also learn how the Java language enables you to develop innovative applications using microservices and parallel programming, integrate with other languages and tools, and gain insight into the APIs that will substantially boost your productivity.

System: Designed for system administrators, this track covers best practices for implementing, optimizing, and securing your operating system, management tools, and hardware. In addition, we will also discuss best practices for storage, SPARC, and software development.

Register Today -

March 8, 2016 - 9:30 a.m. to 1:30 p.m. PT / 12:30 p.m. to 4:30 p.m. ET / 3:30 p.m. to 7:30 p.m. BRT

March 15, 2016 - 9:30 a.m. to 1:30 p.m. IST / 12:00 p.m. to 4:00 p.m. SGT / 3:00 p.m. to 7:00 p.m. AEDT

April 5, 2016 - 3:30 p.m. to 7:30 p.m. BRT / 9:30 a.m. to 1:00 p.m. GMT (UK) / 10:30 a.m. to 2:00 p.m. CET

Resolving Hardware Issues with a Kernel Upgrade in Linux Mint

The Anti-Kyte - Sun, 2016-02-07 11:40

One evening recently, whilst climbing the wooden hills with netbook in hand, I encountered a cat who had decided that halfway up the stairs was a perfect place to catch forty winks.
One startled moggy later, I had become the owner of what I can only describe as…an ex-netbook.

Now, finally, I’ve managed to get a replacement (netbook, not cat).

As usual when I get a new machine, the first thing I did was to replace Windows with Linux Mint…with the immediate result being that the wireless card stopped working.

The solution? Don’t (kernel) panic, kernel upgrade!

Support for most of the hardware out there is included in the Linux Kernel. The kernel is enhanced and released every few months. However, distributions, such as Mint, tend to stick on one kernel version for a while in order to provide a stable base on which to develop.
This means that, if Linux is not playing nicely with your Wireless card/web-cam/any other aspect of your machine’s hardware, a kernel upgrade may resolve your problem.
Obviously it’s always good to do a bit of checking to see if this might be the case.
It’s also good to have a way of putting things back as they were should the change we’re making not have the desired effect.

What I’m going to cover here is the specific issue I encountered with my new Netbook and the steps I took to figure out what kernel version might fix the problem.
I’ll then detail the kernel upgrade itself.

Machine details

The machine in question is an Acer TravelMate-B116.
It has an 11.6 inch screen, 4GB RAM and a 500GB HDD.
For the purposes of the steps that follow, I was able to connect to the internet via a wired connection to my router. Well, up until I got the wireless working.
The Linux OS I’m using is Linux Mint 17.3 Cinnamon.
Note that I have disabled UEFI and am booting the machine in Legacy mode.

Standard Warning – have a backup handy!

In my particular circumstances, I was trying to configure a new machine. If it all went wrong, I could simply re-install Mint and be back where I started.
If you have stuff on your machine that you don’t want to lose, it’s probably a good idea to back it up onto separate media (e.g. a USB stick).
Additionally, if you are not presented with a grub menu when you boot your machine, you might consider running the boot-repair tool.
This will ensure that you can choose which kernel to boot if you have more than one installed (which will be the case once you’ve done the kernel upgrade).

It is possible that upgrading the kernel may cause issues with some of the hardware that is working fine with the kernel you currently have installed, so it’s probably wise to be prepared.

Identifying the card

The first step then, is to identify exactly which wireless network card is in the machine.
From a terminal window, run:

lspci

In my case, the output was:


00:00.0 Host bridge: Intel Corporation Device 2280 (rev 21)
00:02.0 VGA compatible controller: Intel Corporation Device 22b1 (rev 21)
00:0b.0 Signal processing controller: Intel Corporation Device 22dc (rev 21)
00:13.0 SATA controller: Intel Corporation Device 22a3 (rev 21)
00:14.0 USB controller: Intel Corporation Device 22b5 (rev 21)
00:1a.0 Encryption controller: Intel Corporation Device 2298 (rev 21)
00:1b.0 Audio device: Intel Corporation Device 2284 (rev 21)
00:1c.0 PCI bridge: Intel Corporation Device 22c8 (rev 21)
00:1c.2 PCI bridge: Intel Corporation Device 22cc (rev 21)
00:1c.3 PCI bridge: Intel Corporation Device 22ce (rev 21)
00:1f.0 ISA bridge: Intel Corporation Device 229c (rev 21)
00:1f.3 SMBus: Intel Corporation Device 2292 (rev 21)
02:00.0 Network controller: Intel Corporation Device 3165 (rev 81)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)

It looks like the penultimate entry is our wireless card.
It is possible to identify the card by using “Intel Corporation Device 3165” as a search term. However, we may be able to get the name of the card directly by running:

lspci -vq |grep -i wireless -B 1 -A 4

In my case, this returns :

02:00.0 Network controller: Intel Corporation Wireless 3165 (rev 81)
	Subsystem: Intel Corporation Dual Band Wireless AC 3165
	Flags: bus master, fast devsel, latency 0, IRQ 200
	Memory at 91100000 (64-bit, non-prefetchable) [size=8K]
	Capabilities: <access denied>
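Incidentally, the -B 1 and -A 4 flags simply ask grep for one line of context before each match and four lines after it, which is what pulls in the Subsystem line carrying the card’s full name. A minimal illustration on canned text (sample lines, not real lspci output):

```shell
# -B 1 prints one line of context before each match,
# -A 4 prints four lines after it.
out=$(printf '%s\n' 'line before' 'Wireless 3165' 'after 1' 'after 2' 'after 3' 'after 4' 'after 5' |
      grep -i wireless -B 1 -A 4)
echo "$out"
```

Six lines come back: the line before the match, the match, and the four lines after it; ‘after 5’ is outside the context window and is dropped.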

Further digging around reveals that, according to Intel, this card is supported in Linux starting at kernel version 4.2.

Now, which version of the kernel are we actually running?

Identifying the current kernel version and packages

This is relatively simple. In the terminal, just type:

uname -r

On Mint 17.3, the output is:

3.19.0-32-generic
At this point, we know that an upgrade to the kernel may well solve our wireless problem. The question now is: which packages do we need to install to effect the upgrade?

If you look in the repositories, there appear to be at least two distinct versions of kernel packages, the generic and something called low-latency.
In order to be confident of which packages we want to get, it’s probably a good idea to work out what we have now.
This can be achieved by searching the installed packages for the version number of the current kernel.
We can do this in the terminal:

dpkg --list |grep 3.19.0-32 |awk '{print $2}'

In my case, this returned :


As an alternative, you could use the graphical Synaptic Package Manager.
You can start this from the menu ( Administration/Synaptic Package Manager).
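Whichever route you take, it’s worth noting what the terminal version does: dpkg --list produces tabular output and awk '{print $2}' keeps only the package-name column. A sketch against a couple of canned dpkg lines (sample data, not my actual output):

```shell
# dpkg --list output is tabular: status flag, package name, version, ...
# awk '{print $2}' keeps only the package-name column. The two lines
# in the here-document are canned sample data, not real dpkg output.
pkgs=$(awk '/3\.19\.0-32/ {print $2}' <<'EOF'
ii  linux-headers-3.19.0-32          3.19.0-32.37  all    Header files
ii  linux-image-3.19.0-32-generic    3.19.0-32.37  amd64  Linux kernel image
EOF
)
echo "$pkgs"
```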


Now that we know what we’ve got, the next step is to find the kernel version that we need…

Getting the new kernel packages

It may well be the case that the kernel version you’re after has already been added to the distro’s repository.
To see if this is the case, use Synaptic Package Manager to search as follows :

Start Synaptic Package Manager from the System Menu.
You will be prompted for your password.

Click the Status button and select Not Installed


In the Quick filter bar, enter the text : linux-headers-4.2*-generic


This should give you a list of any kernel 4.2 versions available in the repository.

If, as I did, you find the version you’re looking for, you need to select the packages that are equivalent to the ones you already have installed on your system.
Incidentally, there are a number of 4.2 kernel versions available, so I decided to go for the latest.
In my case then, I want to install :

  • linux-headers-4.2.0-25
  • linux-headers-4.2.0-25-generic
  • linux-image-4.2.0-25-generic
  • linux-image-extra-4.2.0-25-generic

NOTE – If you don’t find the kernel version you are looking for, you can always download the packages directly using these instructions.

Assuming we have found the version we want, we need to now search for the relevant packages.
In the Quick filter field in Synaptic, change the search string to : linux-*4.2.0-25

To mark the packages for installation, right-click each one in turn and select Mark for Installation.


Once you’ve selected them all, hit the Apply button.
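If you prefer the terminal to Synaptic, the same installation can be sketched with apt-get. This is only a sketch: the VER value assumes the 4.2.0-25 build found in the repository search above, so adjust it to whatever version you actually found.

```shell
# Command-line alternative to Synaptic (sketch). VER assumes the
# 4.2.0-25 kernel build found in the repository search above.
VER=4.2.0-25
PKGS="linux-headers-${VER} linux-headers-${VER}-generic \
linux-image-${VER}-generic linux-image-extra-${VER}-generic"
echo sudo apt-get install ${PKGS}   # drop the 'echo' to actually install
```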

Once the installation is completed, you need to re-start your computer.

On re-start, you should find that the Grub menu has an entry for Advanced Options.
If you select this, you’ll see that you have a list of kernels to choose to boot into.
This comes in handy if you want to go back to running the previous kernel version.

For now though, we’ll boot into the kernel we’ve just installed.
We can confirm that the installation has been successful, once the machine starts, by opening a Terminal and running :

uname -r

If all has gone to plan, we should now see…

4.2.0-25-generic
Even better in my case, my wireless card has now been recognised.
Opening the systray icon, I can enable wireless and connect to my router.

Backing out of the Kernel Upgrade

If you find that the effects of the kernel upgrade are undesirable, you can always go back to the kernel you started with.
If at all possible, I’d recommend starting Mint using the old kernel before doing this.

If you’re running on the kernel for which you are deleting the packages, you may get some alarming warnings. However, once you re-start, you should be back to your original kernel version.

The command, then, is:

sudo apt-get remove linux-headers-4.2* linux-image-4.2*

…where 4.2 is the version of the kernel you want to remove.
Run this and the output looks like this…

The following packages will be REMOVED
  linux-headers-4.2.0-25 linux-headers-4.2.0-25-generic
  linux-image-4.2.0-25-generic linux-image-extra-4.2.0-25-generic
0 to upgrade, 0 to newly install, 5 to remove and 7 not to upgrade.
After this operation, 294 MB disk space will be freed.
Do you want to continue? [Y/n]

Once the packages have been removed, the old kernel will be in use on the next re-boot.
After re-starting, you can check this with :

uname -r

Thankfully, these steps proved unnecessary in my case and the kernel upgrade has saved me from hardware cat-astrophe.

Filed under: Linux, Mint Tagged: Acer TravelMate-B116, apt-get remove, dpkg, Intel Corporation Dual Band Wireless AC 3165, kernel upgrade, lspci, synaptic package manager, uname -r

Big Nodes, Concurrent Parallel Execution And High System/Kernel Time

Randolf Geist - Sat, 2016-02-06 17:47
The following is probably only relevant for customers that run Oracle on big servers with lots of cores in single instance mode (this specific problem doesn't reproduce in a RAC environment; see below for an explanation why), like one of my clients that makes use of the Exadata Xn-8 servers, for example an X4-8 with 120 cores / 240 CPUs per node (the problem also reproduced on older and smaller boxes with 64 cores / 128 CPUs per node).

They recently came up with a re-write of a core application functionality. Part of this code started the same code path for different data sets, potentially several times concurrently, ending up with many sessions making use of Parallel Execution. In addition, a significant part of the queries used by this code made questionable use of Parallel Execution, in the sense that queries of very short duration were parallelized, ending up with several Parallel Execution starts per second. You could see this pattern in the AWR reports, showing several "DFO trees" parallelized per second on average over an hour period:

When the new code was tested with production-like data volumes and patterns, in the beginning the CPU profile of such a big node (running in single instance mode) looked like this, when nothing else was running on that box:

As you can see, the node was completely CPU bound, spending most of the time in System/Kernel time. The AWR reports showed some pretty unusual PX wait events as significant:

"PX Deq: Slave Session Stats" shouldn't be a relevant wait event since it is about the PX slaves at the end of a PX execution passing an array of session statistics to the PX coordinator for aggregating the statistics on coordinator level. So obviously something was slowing down this PX communication significantly (and the excessive usage of Parallel Execution was required to see this happen).

Also some of the more complex Parallel Execution queries performing many joins and ending up with a significant number of data redistributions ran as if in slow motion, although claiming to spend 100% of their time on CPU; according to Active Session History, almost 90% of that time was spent on the redistribution operations:

SQL statement execution ASH summary: ~98% of samples on CPU, of which ~86-87% were on the redistribution operations.

Running the same query with the same execution plan on the same data and the same box during idle times showed almost 20 times better performance, with less than 40% of the time spent on redistribution:

SQL statement execution ASH summary: ~96% of samples on CPU, with only ~37-38% on the redistribution operations.

So it looked like those queries ran into some kind of contention that wasn't instrumented in Oracle but happened outside, at O/S level, showing up as CPU kernel time, similar to what could be seen in previous versions of Oracle when spinning on mutexes.

Reducing the excessive usage of Parallel Execution showed a significant reduction in CPU time, but still the high System/Kernel time was rather questionable:

So the big question was: where was that time being spent in the kernel? Seeing this might give further clues.

Running "perf top" on the node during such a run showed this profile:

  PerfTop:  129074 irqs/sec  kernel:76.4%  exact:  0.0% [1000Hz cycles],  (all, 128 CPUs)

             samples  pcnt function                 DSO
             _______ _____ ________________________ ___________________________________________________________

          1889395.00 67.8% __ticket_spin_lock       /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
            27746.00  1.0% ktime_get                /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
            24622.00  0.9% weighted_cpuload         /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
            23169.00  0.8% find_busiest_group       /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
            17243.00  0.6% pfrfd1_init_locals       /data/oracle/XXXXXXX/product/
            16961.00  0.6% sxorchk                  /data/oracle/XXXXXXX/product/
            15434.00  0.6% kafger                   /data/oracle/XXXXXXX/product/
            11531.00  0.4% try_atomic_semop         /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
            11006.00  0.4% __intel_new_memcpy       /data/oracle/XXXXXXX/product/
            10557.00  0.4% kaf_typed_stuff          /data/oracle/XXXXXXX/product/
            10380.00  0.4% idle_cpu                 /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             9977.00  0.4% kxfqfprFastPackRow       /data/oracle/XXXXXXX/product/
             9070.00  0.3% pfrinstr_INHFA1          /data/oracle/XXXXXXX/product/
             8905.00  0.3% kcbgtcr                  /data/oracle/XXXXXXX/product/
             8757.00  0.3% ktime_get_update_offsets /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             8641.00  0.3% kgxSharedExamine         /data/oracle/XXXXXXX/product/
             7487.00  0.3% update_queue             /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             7233.00  0.3% kxhrPack                 /data/oracle/XXXXXXX/product/
             6809.00  0.2% rworofprFastUnpackRow    /data/oracle/XXXXXXX/product/
             6581.00  0.2% ksliwat                  /data/oracle/XXXXXXX/product/
             6242.00  0.2% kdiss_fetch              /data/oracle/XXXXXXX/product/
             6126.00  0.2% audit_filter_syscall     /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             5860.00  0.2% cpumask_next_and         /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             5618.00  0.2% kaf4reasrp1km            /data/oracle/XXXXXXX/product/
             5482.00  0.2% kaf4reasrp0km            /data/oracle/XXXXXXX/product/
             5314.00  0.2% kopp2upic                /data/oracle/XXXXXXX/product/
             5129.00  0.2% find_next_bit            /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             4991.00  0.2% kdstf01001000000km       /data/oracle/XXXXXXX/product/
             4842.00  0.2% ktrgcm                   /data/oracle/XXXXXXX/product/
             4762.00  0.2% evadcd                   /data/oracle/XXXXXXX/product/
             4580.00  0.2% kdiss_mf_sc              /data/oracle/XXXXXXX/product/

Running "perf" on a number of Parallel Slaves being busy on CPU showed this profile:

     0.36%     ora_xxxx  [kernel.kallsyms]             [k] __ticket_spin_lock

                --- __ticket_spin_lock
                  |--99.98%-- _raw_spin_lock
                  |          |          
                  |          |--100.00%-- ipc_lock
                  |          |          ipc_lock_check
                  |          |          |          
                  |          |          |--99.83%-- semctl_main
                  |          |          |          sys_semctl
                  |          |          |          system_call
                  |          |          |          __semctl
                  |          |          |          |          
                  |          |          |           --100.00%-- skgpwpost
                  |          |          |                     kslpsprns
                  |          |          |                     kskpthr
                  |          |          |                     ksl_post_proc
                  |          |          |                     kxfprienq
                  |          |          |                     kxfpqrenq
                  |          |          |                     |          
                  |          |          |                     |--98.41%-- kxfqeqb
                  |          |          |                     |          kxfqfprFastPackRow
                  |          |          |                     |          kxfqenq
                  |          |          |                     |          qertqoRop
                  |          |          |                     |          kdstf01001010000100km
                  |          |          |                     |          kdsttgr
                  |          |          |                     |          qertbFetch
                  |          |          |                     |          qergiFetch
                  |          |          |                     |          rwsfcd
                  |          |          |                     |          qertqoFetch
                  |          |          |                     |          qerpxSlaveFetch
                  |          |          |                     |          qerpxFetch
                  |          |          |                     |          opiexe
                  |          |          |                     |          kpoal8

Running "strace" on those Parallel Slaves showed this:

semctl(1347842, 397, SETVAL, 0x1)       = 0
semctl(1347842, 388, SETVAL, 0x1)       = 0
semctl(1347842, 347, SETVAL, 0x1)       = 0
semctl(1347842, 394, SETVAL, 0x1)       = 0
semctl(1347842, 393, SETVAL, 0x1)       = 0
semctl(1347842, 392, SETVAL, 0x1)       = 0
semctl(1347842, 383, SETVAL, 0x1)       = 0
semctl(1347842, 406, SETVAL, 0x1)       = 0
semctl(1347842, 389, SETVAL, 0x1)       = 0
semctl(1347842, 380, SETVAL, 0x1)       = 0
semctl(1347842, 395, SETVAL, 0x1)       = 0
semctl(1347842, 386, SETVAL, 0x1)       = 0
semctl(1347842, 385, SETVAL, 0x1)       = 0
semctl(1347842, 384, SETVAL, 0x1)       = 0
semctl(1347842, 375, SETVAL, 0x1)       = 0
semctl(1347842, 398, SETVAL, 0x1)       = 0
semctl(1347842, 381, SETVAL, 0x1)       = 0
semctl(1347842, 372, SETVAL, 0x1)       = 0
semctl(1347842, 387, SETVAL, 0x1)       = 0
semctl(1347842, 378, SETVAL, 0x1)       = 0
semctl(1347842, 377, SETVAL, 0x1)       = 0
semctl(1347842, 376, SETVAL, 0x1)       = 0
semctl(1347842, 367, SETVAL, 0x1)       = 0
semctl(1347842, 390, SETVAL, 0x1)       = 0
semctl(1347842, 373, SETVAL, 0x1)       = 0
semctl(1347842, 332, SETVAL, 0x1)       = 0
semctl(1347842, 379, SETVAL, 0x1)       = 0
semctl(1347842, 346, SETVAL, 0x1)       = 0
semctl(1347842, 369, SETVAL, 0x1)       = 0
semctl(1347842, 368, SETVAL, 0x1)       = 0
semctl(1347842, 359, SETVAL, 0x1)       = 0
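A quick way to confirm from the strace output that every call hits the same semaphore set is to aggregate the calls by their first argument (the semid). A sketch with awk, using a few canned strace lines in place of a real capture:

```shell
# Count semctl calls per semaphore-set id (the first argument).
# The here-document lines stand in for a real strace capture.
counts=$(awk -F'[(,]' '/^semctl/ {n[$2]++} END {for (id in n) print id, n[id]}' <<'EOF'
semctl(1347842, 397, SETVAL, 0x1)       = 0
semctl(1347842, 388, SETVAL, 0x1)       = 0
semctl(1347842, 347, SETVAL, 0x1)       = 0
EOF
)
echo "$counts"
```

With a real capture the output would show one line per semid; here all three calls share semid 1347842.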

So the conclusion was: a lot of CPU time is spent spinning on the spin lock (critical code section), caused by calls to "semctl" (semaphores), which are part of the PX code path and reach the spin lock via "ipc_lock" -> "_raw_spin_lock". "strace" shows that all of the calls to "semctl" use the same semaphore set (the first parameter), which explains the contention on that particular semaphore set (and indicates that the locking granule is the semaphore set, not the individual semaphore).

Based on the "perf" results an Oracle engineer found a suitable (though unfortunately unpublished and closed) bug from 2013 that describes three different ways to address the problem:

- Run with "cluster_database" = true: This takes a different code path which simply reduces the number of semaphore calls by two orders of magnitude. We tested this approach and it showed immediate relief on kernel time; that is the explanation why this specific issue doesn't reproduce in a RAC environment.

- Run with different "kernel.sem" settings: The Exadata boxes came with the following predefined semaphore configuration:

kernel.sem = 2048 262144 256 256

"ipcs" showed the following semaphore arrays with this configuration when starting the Oracle instance:

------ Semaphore Arrays --------
key        semid      owner     perms      nsems    
0xd87a8934 12941057   oracle    640        1502     
0xd87a8935 12973826   oracle    640        1502     
0xd87a8936 12006595   oracle    640        1502    

By reducing the number of semaphores per set and increasing the number of sets, like this:

kernel.sem = 100 262144 256 4096

the following "ipcs" output could be seen:

------ Semaphore Arrays --------
key        semid      owner     perms      nsems    
0xd87a8934 13137665   oracle    640        93       
0xd87a8935 13170434   oracle    640        93       
0xd87a8936 13203203   oracle    640        93       
0xd87a8937 13235972   oracle    640        93       
0xd87a8938 13268741   oracle    640        93       
0xd87a8939 13301510   oracle    640        93       
0xd87a893a 13334279   oracle    640        93       
0xd87a893b 13367048   oracle    640        93       
0xd87a893c 13399817   oracle    640        93       
0xd87a893d 13432586   oracle    640        93       
0xd87a893e 13465355   oracle    640        93       
0xd87a893f 13498124   oracle    640        93       
0xd87a8940 13530893   oracle    640        93       
0xd87a8941 13563662   oracle    640        93       
0xd87a8942 13596431   oracle    640        93       
0xd87a8943 13629200   oracle    640        93       
0xd87a8944 13661969   oracle    640        93       
0xd87a8945 13694738   oracle    640        93       
0xd87a8946 13727507   oracle    640        93       
0xd87a8947 13760276   oracle    640        93       
0xd87a8948 13793045   oracle    640        93       
0xd87a8949 13825814   oracle    640        93       
0xd87a894a 13858583   oracle    640        93       
0xd87a894b 13891352   oracle    640        93       
0xd87a894c 13924121   oracle    640        93       
0xd87a894d 13956890   oracle    640        93       
0xd87a894e 13989659   oracle    640        93       
0xd87a894f 14022428   oracle    640        93       
0xd87a8950 14055197   oracle    640        93       
0xd87a8951 14087966   oracle    640        93       
0xd87a8952 14120735   oracle    640        93       
0xd87a8953 14153504   oracle    640        93       
0xd87a8954 14186273   oracle    640        93       
0xd87a8955 14219042   oracle    640        93

So Oracle now allocated a lot more sets with fewer semaphores per set. We tested this configuration instead of using "cluster_database = TRUE" and got the same low kernel CPU times.
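For reference, the four "kernel.sem" fields are, in order, SEMMSL (max semaphores per set), SEMMNS (system-wide max semaphores), SEMOPM (max operations per semop call) and SEMMNI (max number of sets), so the change above lowers SEMMSL from 2048 to 100 and raises SEMMNI from 256 to 4096. The live values can be read back like this:

```shell
# The four kernel.sem fields, in order: SEMMSL SEMMNS SEMOPM SEMMNI.
read SEMMSL SEMMNS SEMOPM SEMMNI < /proc/sys/kernel/sem
echo "semaphores per set (SEMMSL): ${SEMMSL}"
echo "number of sets     (SEMMNI): ${SEMMNI}"
```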

- The bug describes a third option for fixing this, which has the advantage that the host configuration doesn't need to be changed and the configuration can be done per instance: there is an undocumented parameter "_sem_per_sem_id" that defines the upper limit of semaphores to allocate per set. By setting this parameter to a comparable value like 100 or 128, the net result ought to be the same (Oracle allocates more sets with fewer semaphores per set), but we haven't tested this option.

So the bottom line was this: certain usage patterns of the Oracle instance lead to contention on spin locks at Linux O/S level if Oracle runs in single instance mode with the semaphore settings recommended so far, which result in all semaphore calls going to the same semaphore set. By having Oracle allocate more semaphore sets, the calls are spread over more sets, significantly reducing the contention.

There is probably some internal note available at Oracle indicating that the default semaphore settings recommended for big nodes are not optimal for running in single instance mode under certain circumstances, but I don't know whether a definitive, official guide is available yet.

This is the CPU profile of exactly the same test workload as before using the changed "kernel.sem" settings:

Also in the AWR report the unusual PX related wait events went away and performance improved significantly, in particular also for those complex queries mentioned above.

PaaS4SaaS Developers' Code Is Always 'On': OAUX is on OTN and GitHub

Usable Apps - Sat, 2016-02-06 09:35

Boom! That's the sound of thunder rolling as PaaS and SaaS developers work as fast as lightning in the cloud. The cloud has changed customer expectations about applications too; if users don’t like their user experience (UX), or they don’t get it fast, they’ll go elsewhere.

PaaS4SaaS developers know their code is always 'on'.

But you can easily accelerate the development of your PaaS4SaaS solutions with a killer UX by downloading the AppsCloudUIKit software, part of the Cloud UX simplified UI Rapid Development Kit (RDK) for Release 10 PaaS4SaaS solutions, from the Oracle Technology Network (OTN) or from GitHub.

The Oracle Applications User Experience (OAUX) team's Oracle Cloud UX RDK works with Oracle JDeveloper, and the kit downloads include a developer eBook that explains the technical requirements and how to build a complete SaaS or PaaS solution in a matter of hours.

Build a simplified UI with the RDK

The AppsCloudUIKit software part of our partner training kit is on OTN and GitHub and is supported by video and eBook guidance.

Build a simplified UI developer eBook

The developer eBook is part of the AppsCloudUIKit downloads on OTN and GitHub.

For the complete developer experience, check out the cool Oracle Usable Apps YouTube channel videos from our own dev and design experts on how to design and build your own simplified UI for SaaS using PaaS.

Enjoy. Check in with us on any questions relating to versions or requirements. Share your thoughts in the comments after you've used the complete RDK and stay tuned for more information. It's an ongoing story...


Security Alert CVE-2016-0603 Released

Oracle Security Team - Fri, 2016-02-05 14:42

Oracle just released Security Alert CVE-2016-0603 to address a vulnerability that can be exploited when installing Java 6, 7 or 8 on the Windows platform. This vulnerability has received a CVSS Base Score of 7.6.

To be successfully exploited, this vulnerability requires that an unsuspecting user be tricked into visiting a malicious web site and download files to the user's system before installing Java 6, 7 or 8. Though considered relatively complex to exploit, this vulnerability may result, if successfully exploited, in a complete compromise of the unsuspecting user’s system.

Because the exposure exists only during the installation process, users need not upgrade existing Java installations to address the vulnerability. However, Java users who have downloaded any old version of Java prior to 6u113, 7u97 or 8u73, should discard these old downloads and replace them with 6u113, 7u97 or 8u73 or later.

As a reminder, Oracle recommends that Java home users visit Java.com to ensure that they are running the most recent version of Java SE and that all older versions of Java SE have been completely removed. Oracle further advises against downloading Java from sites other than Java.com as these sites may be malicious.

For more information, the advisory for Security Alert CVE-2016-0603 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2016-0603-2874360.html


PeopleTools CPU analysis and supported versions of PeopleTools (update for January 2016 CPU)

PeopleSoft Technology Blog - Fri, 2016-02-05 14:30

Questions often arise on the PeopleTools versions for which Critical Patch Updates have been published, or if a particular PeopleTools version is supported. 

The attached page shows the patch number matrix for PeopleTools versions associated with a particular CPU publication. This information will help you decide which CPU to apply and when to consider upgrading to a more current release.

The link in "CPU Date" goes to the landing page for CPU advisories; the link in the individual date, e.g. Apr-10, goes to the advisory for that date.

The page also shows the CVEs addressed in the CPU, a synopsis of each issue and the Common Vulnerability Scoring System (CVSS) value.

To find more details on any CVE, simply replace the CVE number in the sample URL below.


Common Vulnerability Scoring System Version 2 Calculator


This page shows the components of the CVSS score

Example CVSS response policy http://www.first.org/_assets/cvss/cvss-based-patch-policy.pdf

All the details in this page are available on My Oracle Support and public sites.

The RED column indicates the last patch for a PeopleTools version and effectively the last support date for that version.

Applications Unlimited support does NOT apply to PeopleTools versions.

How to get nfs info on 1000 or many hosts using Oracle Enterprise Manager

Arun Bavera - Fri, 2016-02-05 11:27
There was a requirement to get nfs info on all the hosts.
Here is the way to get it:

Create an OS job in EM12c with the following text and execute it on all hosts of interest, assuming you have a common shared mount on all of them.
Otherwise you can create a Metric Extension to collect this info and query the repository (via Configuration Manager or directly) to get it.
 { echo; hostname -f; echo '====================================='; nfsstat -m; echo '====================================='; } >> /nfs_software/nfs_info_PROD.txt

Categories: Development

Storage difference between 2 identical Exa boxes. How and why?

Syed Jaffar - Thu, 2016-02-04 04:58
We noticed around a 1.6TB storage difference between two Eighth Rack (1/8) Exadata boxes while configuring Data Guard, and wondered what went wrong. The Exa box configured for DR was around 1.6TB short compared to the other Exa box. We verified the LUN, physical disk and griddisk status on a cell, which showed active/online status. The tricky part on Exadata is that everything has to be active/online across all cell storage servers. We then figured out that the grid disk status on the 3rd cell storage server was inactive. After making the grid disks active on the 3rd cell server, everything became normal; that is, the missing 1.6TB of space appeared.
When you work with Exadata, verify all cell storage servers to confirm an issue, rather than querying things over just one cell server.
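As a sketch, the cross-cell check can be run from a compute node with dcli (cell_group and the celladmin user are the conventional examples, not from the original post; adjust for your environment). It is guarded here so it degrades outside Exadata:

```shell
# Hypothetical sketch: list grid disk status on every cell, not just one.
CMD='cellcli -e list griddisk attributes name,status,asmmodestatus'
if command -v dcli >/dev/null 2>&1; then
  dcli -g cell_group -l celladmin "$CMD"   # run across all cells in the group file
else
  echo "dcli not found (not an Exadata compute node); would run: $CMD"
fi
```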

Fluid Header and Navigation is the New Standard

PeopleSoft Technology Blog - Tue, 2016-02-02 16:15
Beginning with PeopleTools 8.55, PeopleSoft 9.2 applications will have a Fluid header on their classic pages that matches the fluid pages.  This unifies the user experience of classic pages with newer fluid pages and applications.  With the fluid user interface, user navigation is more seamless and intuitive.  Using fluid homepages, tiles, global search, related actions, and the new fluid Navigation Collection feature, users can more easily navigate to the information most important to them.  Refer to the PeopleSoft Fluid User Interface and Navigation Standards White Paper (Document ID 2063602.1) for more information on design best practices for navigation within PeopleSoft applications.

Part of the change that makes Fluid the default is the replacement of the drop-down menu navigation.  In most cases, customers will want their users to simply use the Nav Bar in place of any classic menu navigation.  However, if there is a special circumstance where customers want to maintain the classic menus, they can do so.  There are two ways of displaying the classic menus:

Method 1 – Switch back to the default tangerine or alt-tang theme

1. Go to PeopleTools >> Portal >> Branding >> Branding System Options;
2. Change the system default theme back to default tangerine or alt-tang;
3. Sign out and sign in again to see the changes.

Method 2 – Unhide the drop down menu in default fluid theme

1. Go to PeopleTools >> Portal >> Branding >> Define Headers and Footers;
2. Search and open the DEFAULT_HEADER_FLUID header definition;
3. Copy the following styles into the “Style Definitions” field at bottom of the page, and then save;
.desktopFluidHdr .ptdropdownmenu {
    display: block;
}

4. Sign out and sign in again to see the changes.

We encourage customers to stick with Fluid navigation as the standard.  It's simply better and more intuitive. 

how to install powershell active directory module

Matt Penny - Tue, 2016-02-02 13:23

install: en_windows_7_professional_with_sp1_vl_build_x64_dvd_u_677791.iso

dism /online /enable-feature /featurename:RemoteServerAdministrationTools-Roles-AD
dism /online /enable-feature /featurename:RemoteServerAdministrationTools-Roles-AD-Powershell

Categories: DBA Blogs

BPM/SOA 12c: Symbolic Filebased MDS in Integrated Weblogic

Darwin IT - Tue, 2016-02-02 05:44
In BPM/SOA projects, we use the MDS all the time, for sharing xsd's and wsdl's between projects.

Since 12cR1 (12.1.3) we have had the QuickStart installers for SOA and BPM, which allow you to create an Integrated Weblogic domain to use for SOASuite and/or BPMSuite.

In most projects we have the contents of the MDS in subversion and of course a check out of that in a local svn working copy.

My whitepaper mentioned in this blog entry describes how you can use the mds in a SOA Suite project from 11g onwards.

But how do you use the MDS in your integrated Weblogic? I would expect that somehow, 'magically', the integrated Weblogic would 'know' of the MDS references that I have in the adf-config.xml file in my SOA/BPM application. But unfortunately it doesn't: that file is only used at design/compile time.

Now you could just deploy/sync your MDS to your integrated Weblogic as you would to your test/production server, as you did on 11g.

But I wouldn't write this blog-entry if I did not find a cool trick: symbolic links, even on Windows.

As denoted by the JDEV_USER_DIR variable (see also this blog entry), your DefaultDomain would be in 'c:\Data\JDeveloper\SOA\system12.\DefaultDomain' or 'c:\Users\MAG\AppData\Roaming\JDeveloper\system12.\DefaultDomain' (on Windows).

Within the domain folder you'll find the following folder structure: 'store\gmds\mds-soa\soa-infra'.
 This is apparently the folder that is used for the MDS by SOA and BPM Suite. Within there you'll find the folders:
  • deployed-composites
  • soa
In there you can create a symbolic link (in Windows, a Junction) named 'apps', pointing to the folder in your svn working copy that holds the 'oramds://apps'-related content. In Windows this is done like:
C:\...\DefaultDomain\store\gmds\mds-soa\soa-infra>mklink /J apps y:\Generiek\MDS\trunk\SOA\soa-infra\apps
The /J makes it a 'hard symbolic link' or a 'Junction'. Under Linux you would use 'ln -s ...'.
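The same trick on Linux can be sketched with throwaway paths (these demo paths are made up; the real target would be your svn working copy and the real link location your DefaultDomain's soa-infra folder):

```shell
# Hypothetical demo paths; substitute your own working copy and domain folders.
WC=/tmp/mds-demo/svn-wc/apps
SOAINFRA=/tmp/mds-demo/DefaultDomain/store/gmds/mds-soa/soa-infra
mkdir -p "$WC" "$SOAINFRA"
touch "$WC/example.xsd"            # stand-in for your MDS content
ln -sfn "$WC" "$SOAINFRA/apps"     # equivalent of: mklink /J apps <target>
ls "$SOAINFRA/apps"                # lists the working-copy contents via the link
```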

You'll get a response like:
C:\...\DefaultDomain\store\gmds\mds-soa\soa-infra>Junction created for apps <<===>> y:\Generiek\MDS\trunk\SOA\soa-infra\apps
When you perform a dir you'll see:

Volume in drive C is System
Volume Serial Number is E257-B299

Directory of c:\Data\JDeveloper\SOA\system12.\DefaultDomain\store\gmds\mds-soa\soa-infra

02-02-2016 12:06 <DIR> .
02-02-2016 12:06 <DIR> ..
02-02-2016 12:06 <JUNCTION> apps [y:\Generiek\MDS\trunk\SOA\soa-infra\apps]
02-02-2016 12:07 <DIR> deployed-composites
02-02-2016 11:23 <DIR> soa
0 File(s) 0 bytes
5 Dir(s) 18.475.872.256 bytes free
You can just CD to the apps folder and do a DIR there; it will then list the contents of the svn working copy folder of your MDS, but from within your Default Domain.

Just refire your Integrated Domain's DefaultServer and you should be able to deploy your composites that depend on the MDS.

Pareto Rocks!

Floyd Teter - Mon, 2016-02-01 17:55
I'm a big fan of Vilfredo Pareto's work.  He observed the world around him and developed some very simple concepts to explain what he observed.  Pareto was ahead of his time.

Some of Dr. Pareto's work is based on the Pareto Principle:  the idea that 80% of effects come from 20% of causes.  In the real world, we continually see examples of the Pareto Principle.

I've been conducting one of my informal surveys lately...talking to lots of partners, customers and industry analysts about their experiences in implementing SaaS and the way it fits their business.  And I've found that, almost unanimously, the experience falls in line with the Pareto Principle.  Some sources vary the numbers a bit, but it generally plays out as follows:

  • Using the same SaaS footprint, 60% of any SaaS configuration is the same across all industries.  The configuration values and the data values may be different, but the overall scheme is the same.
  • Add another 20% for SaaS customers within the same vertical (healthcare, retail, higher education, public sector, etc.).
  • Only about 20% of the configuration, business processes, and reporting/business intelligence is unique for the same SaaS footprint in the same industry sector between one customer and another.
Many of the customers I've spoken to in this context immediately place the qualifier: "but our business is different."  And they're right. In fact, for the sake of profitability and survival, their business must be different.  Every business needs differentiators.  But it's different within the scope of that 20% mentioned above.  That other 80% is common with everyone in their business sector.  And, when questioned, most customers agree with that idea.

This is what makes the business processes baked into SaaS so important; any business wants to burn their calories of effort on the differentiators rather than the processes that simply represent "the cost of being in business."  SaaS offers the opportunity to standardize the common 80%, allowing customers to focus their efforts on the unique 20%.  Pareto had it right.

Multisessioning with Python

Gary Myers - Sun, 2016-01-31 00:27
I'll admit that I pretty much constantly have at least one window either open into SQL*Plus or at the command line ready to run a deployment script through it. But there are times when it is worth taking a step beyond.

One problem with the architecture of most SQL clients is that they connect to a database, send off a SQL statement, and do nothing until the database responds with an answer. That's a great model when it takes no more than a second or two to get the response. It is cumbersome when the statement can take minutes to complete. Complex clients, like SQL Developer, allow the user to have multiple sessions open, even against a single schema if you use "unshared" worksheets. But they don't co-ordinate those sessions in any way.

Recently I needed to run a task in a number of schemas. We're all nicely packaged up and all I needed to do was execute a procedure in each of the schemas and we can do that from a master schema with appropriate grants. However the tasks would take several minutes for each schema, and we had dozens of schemas to process. Running them consecutively in a single stream would have taken many hours and we also didn't want to set them all off at once through the job scheduler due to the workload. Ideally we wanted a few running concurrently, and when one finished another would start. I haven't found an easy way to do that in the database scheduler.

Python, on the other hand, makes it so darn simple.
[Credit to Stackoverflow, of course]

proc connects to the database, executes the procedure (in this demo just setting the client info, with a delay so you can see it), and returns.
strs is a collection of parameters.
pool tells it how many concurrent operations to run. It then maps the strings to the pool, so A, B and C will start; then, as they finish, D, E, F and G will be processed as threads become available.

In my case the collection was a list of the schema names, and the statement was more like 'begin ' + arg + '.task; end;'.


import cx_Oracle, time
from multiprocessing.dummy import Pool as ThreadPool

# Global variables

db    = 'host:port/service'
user  = 'scott'
pwd   = 'tiger'

def proc(arg):
   con = cx_Oracle.connect(user + '/' + pwd + '@' + db)
   cur = con.cursor()
   cur.execute('begin sys.dbms_application_info.set_client_info(:info); end;',{'info':arg})
   time.sleep(10)   # delay so the session is visible before it disconnects
   cur.close()
   con.close()

strs = [
  'A',  'B',  'C',  'D',  'E',  'F',  'G'
]

# Make the Pool of workers
pool = ThreadPool(3)
# Pass the elements of the array to the procedure using the pool
#  In this case no values are returned so the result is a dummy
results = pool.map(proc, strs)
# Close the pool and wait for the work to finish
pool.close()
pool.join()
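For a quick local try-out of the same pool pattern without a database, the procedure call can be stubbed out (the names here are illustrative, not from the original script):

```python
import time
from multiprocessing.dummy import Pool as ThreadPool  # thread-based, not process-based

def task(name):
    # Stand-in for the slow database procedure call.
    time.sleep(0.1)
    return name.lower()

schemas = ['A', 'B', 'C', 'D', 'E', 'F', 'G']

pool = ThreadPool(3)               # at most three tasks run concurrently
results = pool.map(task, schemas)  # blocks until all seven have finished
pool.close()
pool.join()
print(results)                     # ['a', 'b', 'c', 'd', 'e', 'f', 'g']
```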

PS. In this case, I used cx_Oracle as the glue between Python and the database.
The pyOraGeek blog is a good starting point for that.

If/when I get around to blogging again, I'll discuss jaydebeapi / jpype as an alternative. In short, cx_Oracle goes through the OCI client (eg Instant Client) and jaydebeapi takes the JVM / JDBC route.

using powershell’s help system to stash your tips and tricks in ‘about_’ topics

Matt Penny - Sat, 2016-01-30 15:55

There are a bunch of bits of syntax which I struggle to remember.

I’m not always online when I’m using my laptop, but I always have a Powershell window open.

This is a possibly not-best-practice way of using Powershell’s wonderful help system to store bits of reference material.

The problem

I’m moving a WordPress blog to Hugo, which uses Markdown, but I’m struggling to remember the Markdown syntax. It’s not difficult, but I’m getting old and I get confused with Twiki syntax.

In any case this ‘technique’ could be used for anything.

I could equally well just store the content in a big text file, and select-string it….but this is more fun :)

The content

In this instance I only need a few lines as an aide-memoire:

    ## The second largest heading (an <h2> tag)
    > Blockquotes
    *italic* or _italic_
    **bold** or __bold__
    * Item (no spaces before the *) or
    - Item (no spaces before the -)
    1. Item 1
      1. Furthermore, ...
    2. Item 2
    `monospace` (backticks)
    ```` begin/end code block
    [A link!](http://mattypenny.net).
Create a module

The module path is given by:

$env:PSModulePath
Mine is:

C:\Users\matty\Documents\WindowsPowerShell\Modules;C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\PowerShell\Modules\

Pick one of these folders to create your module in, and do this:

mkdir C:\Users\matty\Documents\WindowsPowerShell\Modules\QuickReference

Then create a dummy Powershell module file in the folder

notepad C:\Users\matty\Documents\WindowsPowerShell\Modules\QuickReference\QuickReference.psm1

The content of the module file is throwaway:

function dummy {write-output "This is a dummy"}
Create the help file(s)

Create a language-specific folder for the help files

mkdir C:\Users\matty\Documents\WindowsPowerShell\Modules\QuickReference\en-US\

Edit a file called about_<Topic>.help.txt:

notepad C:\Users\mpenny2\Documents\WindowsPowerShell\Modules\QuickReference\en-US\about_Markdown.help.txt

My content looked like this:


    Syntax for Markdown 


    ## The second largest heading (an <h2> tag)
    > Blockquotes
    *italic* or _italic_
    **bold** or __bold__
    * Item (no spaces before the *) or
    - Item (no spaces before the -)
    1. Item 1
      1. Furthermore, ...
    2. Item 2
    `monospace` (backticks)
    ```` begin/end code block
    [A link!](http://mattypenny.net).
Use the help

I can now do this (I’ll import the module in my $profile):

PS C:\Windows> import-module QuickReference

Then I can access my Markdown help from within PowerShell:

PS C:\Windows> help Markdown

    Syntax for Markdown 


    ## The second largest heading (an <h2> tag)
    > Blockquotes
    *italic* or _italic_
    **bold** or __bold__
    * Item (no spaces before the *) or
    - Item (no spaces before the -)
    1. Item 1
      1. Furthermore, ...
    2. Item 2
    `monospace` (backticks)
    ```` begin/end code block
    [A link!](http://mattypenny.net).

Categories: DBA Blogs

Oracle Database 12c Features Now Available on apex.oracle.com

Joel Kallman - Sat, 2016-01-30 06:42
As a lot of people know, apex.oracle.com is the customer evaluation instance of Oracle Application Express (APEX).  It's a place where anyone on the planet can sign up for a workspace and "kick the tires" of APEX.  After a brief signup process, in a matter of minutes you have access to a slice of an Oracle Database, Oracle REST Data Services, and Oracle Application Express, all easily accessed through your Web browser.

apex.oracle.com has been running Oracle Database 12c for a while now.  But a lot of the 12c-specific developer features weren't available, simply because the database initialization parameter COMPATIBLE wasn't set high enough.  If you've ever tried to use one of these features in SQL on apex.oracle.com, you may have run into the dreaded ORA-00406.  But as of today (January 30, 2016), that's changed.  You can now make full use of the 12c-specific features on apex.oracle.com.  Even if you don't care about APEX, you can still sign up on apex.oracle.com and kick the tires of Oracle Database 12c.

What are some things you can do now on apex.oracle.com? You can use IDENTITY columns.  You can generate a default value from a sequence.  You can specify a default value for explicit NULL columns.  And much more.
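For instance, those features can be exercised with SQL along these lines (the table and sequence names are made up for illustration):

```sql
CREATE SEQUENCE demo_seq;

CREATE TABLE demo (
  id   NUMBER GENERATED ALWAYS AS IDENTITY,      -- identity column
  code NUMBER DEFAULT demo_seq.NEXTVAL,          -- default generated from a sequence
  name VARCHAR2(50) DEFAULT ON NULL 'unknown'    -- default applied even on explicit NULL
);
```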

You might wonder what's taken so long, and let's just say that sometimes it takes a while to move a change like this through the machinery that is Oracle.

P.S.  I've made the request to update MAX_STRING_SIZE to EXTENDED, so you can define column datatypes up to VARCHAR2(32767).  Until this is implemented, you're limited to VARCHAR2(4000).

What PeopleSoft content was popular in 2015?

Duncan Davies - Thu, 2016-01-28 17:48

The ‘Year in Blogging’ reports have come through so I can see what posts and newsletter items garnered the most views.

PeopleSoft Tipster Blog

So, according to the summary, this blog was visited 130,000 times during the year, an average of ~350/day with the busiest day being just over double that at 749 visitors. About 50% of the traffic is from the US, 15% from India, and 5% from the UK and Canada.

Amazingly, the most viewed post was one written prior to 2015, about PeopleSoft Entity Relationship Diagrams. The most popular post that was actually authored last year was The Future of PeopleSoft video with Marc Weintraub, followed by PeopleSoft and Taleo integration, the Faster Download of PeopleSoft Images and the profile of Graham Smith and how he works.

The PeopleSoft Weekly Newsletter

The PSW newsletter seems to go from strength to strength. During 2015 the subscriber base rose from 919 to 1,104 which is an approx 20% increase. The ‘open rate’ sits around 40% for any one issue (against an industry average of 17%) with the US accounting for 55% of readers, the UK 15% and India 10%.

The top articles in terms of clicks were:

  1. Gartner’s Report on Oracle’s Commitment to PeopleSoft (263 clicks)
  2. Randy ‘Remote PS Admin’ on Forcing Cache Clears (198)
  3. PeopleSoft Planned Features and Enhancements (180)
  4. 5 Life Lessons I Learned at PeopleSoft (167)
  5. Dan Sticka on stopping writing Record Field PeopleCode (166)
  6. Greg Kelly’s Security Checklist from Alliance (155)
  7. Virginia Ebbeck’s list of PeopleSoft Links (145)
  8. Greg Wendt of Grey Heller on the PS Token Vulnerability (142)
  9. Dennis Howlett on the Oracle vs Rimini St court battle (142)
  10. Wade Coombs on PeopleSoft File Attachments (140)
  11. I’m Graham Smith and this is How I Work (139)
  12. Graham’s PeopleSoft Ping Survey (135)
  13. How to write an efficient PeopleCode (134)
  14. Mohit Jain on Tracing in PeopleSoft (131)
  15. The 4 types of PeopleSoft Testing (130)
  16. PS Admin.io on Cobol (127)
  17. Matthew Haavisto on the Cost of PeopleSoft vs SaaS (124)
  18. The PeopleSoft Spotlight Series (119)
  19. Prashant Tyagi on PeopleSoft Single Signon (118)
  20. Adding Watermarks to PeopleSoft Fields (116)



Sending notifications from Oracle Enterprise Manager to VictorOps

Don Seiler - Thu, 2016-01-28 11:55
We use VictorOps for our paging/notification system, and we're pretty happy with it so far. On the DBA team, we've just been using a simple email gateway to send notifications from Oracle Enterprise Manager (EM) to VictorOps. Even then, we can only send the initial notification and not really send an automated recovery without more hacking than it's worth. Not a big deal, but it would be nice to have some more functionality.

So yesterday I decided I'd just sort it all out since VictorOps has a nice REST API and Enterprise Manager has a nice OS script notification method framework. The initial result can be found on my github: entmgr_to_victorops.sh.

It doesn't do anything fancy, but it will handle the messages sent by your notification rules and pass them on to VictorOps. It keys on the incident ID to track which events it is sending follow-up (i.e. RECOVERY) messages for.

Please do let me know if you have any bugs, requests, suggestions for it.

Many thanks to Sentry Data Systems (my employer) for allowing me to share this code. It isn't mind-blowing stuff but should save you a few hours of banging your head against a wall.
Categories: DBA Blogs

Stinkin' Badges

Scott Spendolini - Thu, 2016-01-28 07:55
Ever since APEX 5, the poor Navigation Bar has taken a back seat to the Navigation Menu, and for good reason: the Navigation Menu offers a much more intuitive and flexible way to provide site-wide navigation that looks great, is responsive and just plain works. However, the Navigation Bar can and does still serve a purpose. Most applications still use it to display the Logout link and perhaps the name of the currently signed-on user. Some applications also use it to provide a link to a user's profile or something similar.

Another use for the Navigation Bar is to present simple metrics via badges. You've seen them before: the little red numbered icons that hover in the upper-right corner of an iPhone or Mac application, indicating that there's something that needs attention. Whether you consider them annoying or helpful, truth be told, they are a simple, minimalistic way to convey that something needs attention.

Fortunately, adding a badge to a Navigation Bar entry in the Universal Theme in APEX 5 is tremendously simple. In fact, it's almost too simple! Here's what you need to do:
First, navigate to the Shared Components of your application and select Navigation Bar List. From there, click Desktop Navigation Bar. There will likely only be one entry there: Log Out.


Click Create List Entry to get started. Give the new entry a List Entry Label and make sure that the sequence number is lower than the Log Out link. This will ensure that your badged item displays to the left of the Log Out link. Optionally add a Target page. Ideally, this will be a modal page that will pop open from any page. This page can show the summary of whatever the badge is conveying. Next, scroll down to the User Defined Attributes section. Enter the value that you want the badge to display in the first (1.) field. Ideally, you should use an Application or Page Item here with this notation: &ITEM_NAME. But for simplicity's sake, it's OK to enter a value outright.
Run your application, and have a look:


Not bad for almost no work. But we can make it a little better. You can control the color of the badge with a single line of CSS, which can easily be dropped into the Custom CSS section of Theme Roller. Since most badges are red, let's make ours red as well. Run your application, open Theme Roller, and scroll to the bottom of the options. Expand the Custom CSS region and enter the following text:

.t-Button--navBar .t-Button-badge { background-color: red;}

Save your customizations, and note that the badge should now be red:


Repeat for each metric that you want to display in your Navigation Bar.

Up in the JCS Clouds !!

Tim Dexter - Wed, 2016-01-27 04:05
Hello Friends,

Oracle BI Publisher has been in the cloud for quite some time, as part of Fusion Applications and a few other Oracle product offerings. We now announce certification of BI Publisher on Java Cloud Service!!

BI Publisher on JCS

Oracle Java Cloud Service (JCS) is part of the platform service offerings in Oracle Cloud. Powered by Oracle WebLogic Server, it provides a platform on top of Oracle's enterprise-grade cloud infrastructure for developing and deploying new or existing Java EE applications. Check for more details on JCS here. On that page, under "Perform Advanced Tasks", you can find a link to "Leverage your on-premise licenses". That page cites all the products certified for Java Cloud Service, and we can now see BI Publisher listed as one of the certified products using Fusion Middleware.

How to Install BI Publisher on JCS?

Here are the steps to install BI Publisher on JCS. The certification supports the Virtual Image option only.

Step 1: Create DBaaS Instance

Step 2: Create JCS Instance

To create an Oracle Java Cloud Service instance, use the REST API for Oracle Java Cloud Service; do not use the wizard in the GUI. The wizard does not offer an option to specify the MWHOME partition size, whereas the REST API does. The default size created by the wizard is generally insufficient for BI Publisher deployments.

The detailed instructions to install JCS instance are available in the Oracle By Example Tutorial under "Setting up your environment", "Creating an Oracle Java Cloud Service instance".

Step 3:  Install and Configure BI Publisher

  1. Set up RCU on DBaaS
    • Copy RCU
    • Run RCU
  2. Install BI Publisher in JCS instance
    • Copy BI Installer in JCS instance
    • Run Installer
    • Use Software Only Install
  3. Configure BI Publisher
    • Extend Weblogic Domain
    • Configure Policy Store
    • Configure JMS
    • Configure Security

You can follow the detailed installation instructions as documented in "Oracle By Example" tutorial. 

Minimum Cloud Compute and Storage Requirements:

  1. Oracle Java Cloud Service: 1 OCPU, 7.5 GB Memory, 62 GB Storage
    • To install Weblogic instance
    • To Install BI Publisher
    • To set Temp File Directory in BI Publisher
  2. Oracle Database Cloud Service: 1 OCPU, 7.5 GB Memory, 90 GB Storage
    • To install RCU
    • To use DBaaS as a data source
  3. Oracle IaaS (Compute & Storage): (Optional - Depends on sizing requirements)
    • To Enable Local & Cloud Storage option in DBaaS (Used with Full Tooling option)

So now you can use your on-premise license to host BI Publisher standalone on Java Cloud Service for all your highly formatted, pixel-perfect enterprise reports for your cloud-based applications. Have a great day!!

Categories: BI & Warehousing

Adding community based Plugins to the CF CLI Tool

Pas Apicella - Tue, 2016-01-26 17:02
I needed a community based plugin recently and this is how you would add it to your CF CLI interface.

1. Add Community based REPO as shown below

$ cf add-plugin-repo community http://plugins.cfapps.io/

2. Check available plugins from REPO added above

pasapicella@Pas-MacBook-Pro:~/ibm$ cf repo-plugins community
Getting plugins from all repositories ...

Repository: CF-Community
name                      version   description
Download Droplet          1.0.0     Download droplets to your local machine
Firehose Plugin           0.8.0     This plugin allows you to connect to the firehose (CF admins only)
doctor                    1.0.1     doctor scans your deployed applications, routes and services for anomalies and reports any issues found. (CLI v6.7.0+)
manifest-generator        1.0.0     Help you to generate a manifest from 0 (CLI v6.7.0+)
Diego-Enabler             1.0.1     Enable/Disable Diego support for an app (CLI v6.13.0+)

3. Install plugin as shown below

pasapicella@Pas-MacBook-Pro:~/ibm/$ cf install-plugin "Live Stats" -r community

**Attention: Plugins are binaries written by potentially untrusted authors. Install and use plugins at your own risk.**

Do you want to install the plugin Live Stats? (y or n)> y
Looking up 'Live Stats' from repository 'community'
7874156 bytes downloaded...
Installing plugin /var/folders/rj/5r89y5nd6pd4c9hwkbvdp_1w0000gn/T/cf-plugin-stats...
Plugin Live Stats v0.0.0 successfully installed.

4. View plugin commands

pasapicella@Pas-MacBook-Pro:~/ibm/$ cf plugins
Listing Installed Plugins...

Plugin Name       Version   Command Name                                           Command Help
IBM-Containers    0.8.788   ic                                                     IBM Containers plug-in

Live Stats        N/A       live-stats                                             Show browser based stats
active-deploy     0.1.22    active-deploy-service-info                             Reports version information about the CLI and Active Deploy service. It also reports the cloud back ends enabled by the Active Deploy service instance.

Categories: Fusion Middleware


Subscribe to Oracle FAQ aggregator