Feed aggregator

fix for ‘Cannot convert value “System.Xml.XmlElement” to type “System.Xml.XmlDocument”’

Matt Penny - Mon, 2016-02-08 13:51
Quick fix

Change:

Param( [xml]$WordpressPostAsXml

to

Param( [system.xml.xmlelement]$WordpressPostAsXml
Slightly More Detail

I’m in the process of coding a PowerShell module to convert an XML file containing the content of my old WordPress site into a set of Markdown files for Hugo.

I got the error:

get-wpHugoFileName : Cannot process argument transformation on parameter 'WordpressPostAsXml'. Cannot convert value "System.Xml.XmlElement" to 
type "System.Xml.XmlDocument". Error: "The specified node cannot be inserted as the valid child of this node, because the specified node is the 
wrong type."
At C:\users\matt\Documents\WindowsPowershell\functions\function-get-WordpressContent.ps1:43 char:63
+ ... goFileName = get-wpHugoFileName -WordPressPost $WordPressPost -Conten ...
+                                                    ~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidData: (:) [get-wpHugoFileName], ParameterBindingArgumentTransformationException
    + FullyQualifiedErrorId : ParameterArgumentTransformationError,get-wpHugoFileName

In summary, there are three relevant functions:

  • convert-wpPostToHugo – receives an exported WordPress site in XML format. Calls get-wpMatchingWordpressPosts to extract the specified bits, then get-wpHugoFileName for each post
  • get-wpMatchingWordpressPosts – extracts the xml nodes for posts which match a specified string
  • get-wpHugoFileName – called for each XML ‘node’. Returns a file name for the markdown content

convert-wpPostToHugo:

function  convert-wpPostToHugo{ 

  [CmdletBinding()]
  Param( [xml][Alias ("xml")]$WordpressXML = "$wp_xml",
         [string][Alias ("string")]$PostString = "ramone" ,
         [string][Alias ("f")]$ContentFolder = "c:\temp"
) 
  write-debug "$(get-date -format 'hh:mm:ss.ffff') Function beg: $([string]$MyInvocation.Line) "

  $MatchingWordPressPosts = get-wpMatchingWordpressPosts -WordPressXml $WordPressXML -PostString $Poststring

  foreach ($WordPressPost in $MatchingWordPressPosts)
  {    
    [String]$HugoFileName = get-wpHugoFileName -WordPressPost $WordPressPost -ContentFolder $contentFolder
    write-verbose "`$HugoFileName: $HugoFileName"
    # ... rest of the loop (writing the Markdown content) omitted in the original excerpt
  }
}

get-wpMatchingWordpressPosts:

function get-wpMatchingWordpressPosts { 
  [CmdletBinding()]
  Param( [xml][Alias ("xml")]$WordpressXML = "$wp_xml",
         [string][Alias ("string")]$PostString = "ramone"         ) 

  write-debug "$(get-date -format 'hh:mm:ss.ffff') Function beg: $([string]$MyInvocation.Line) "

  $Nodes = select-xml -xml $WordpressXML -xpath "//channel/item" | select -expandproperty node | where-object title -like "*$PostString*"

  return $nodes
}

get-wpHugoFileName:

function get-wpHugoFileName { 

  [CmdletBinding()]
  Param( [xml][Alias ("x")]$WordpressPostAsXml,
         [string][Alias ("f")]$ContentFolder = "c:\temp" )

  write-debug "$(get-date -format 'hh:mm:ss.ffff') Function beg: $([string]$MyInvocation.Line) "
  # ... rest of the function omitted in the original excerpt
}

The fix was to change the type of the $WordpressPostAsXml parameter from [xml] to [system.xml.xmlelement]:
get-wpHugoFileName:

function get-wpHugoFileName { 

  [CmdletBinding()]
  Param( [system.xml.xmlelement][Alias ("x")]$WordpressPostAsXml,
         [string][Alias ("f")]$ContentFolder = "c:\temp" )

  write-debug "$(get-date -format 'hh:mm:ss.ffff') Function beg: $([string]$MyInvocation.Line) "
  # ... rest of the function omitted in the original excerpt
}

This makes sense… the value returned from get-wpMatchingWordpressPosts is no longer a valid XML document – it is just an element of an XML document.
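You can see the type mismatch for yourself with a minimal sketch (the inline XML here is made up, not the actual WordPress export):

[xml]$Doc = '<channel><item><title>ramones gig review</title></item></channel>'
$Node = Select-Xml -Xml $Doc -XPath '//channel/item' | Select-Object -ExpandProperty Node
$Doc.GetType().FullName   # System.Xml.XmlDocument
$Node.GetType().FullName  # System.Xml.XmlElement - an element can't be converted back to [xml]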


Categories: DBA Blogs

JRE 1.8.0_73/74 Certified with Oracle E-Business Suite

Steven Chan - Mon, 2016-02-08 13:07


Java Runtime Environment 1.8.0_73 (a.k.a. JRE 8u73-b2) and JRE 1.8.0_74 (8u74-b2) and later updates on the JRE 8 codeline are now certified with Oracle E-Business Suite 12.1 and 12.2 for Windows desktop clients.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.

What's new in this release?

Oracle now releases a Critical Patch Update (CPU) at the same time as the corresponding Patch Set Update (PSU) release for Java SE 8.

  • CPU Release:  JRE 1.8.0_73
  • PSU Release:  JRE 1.8.0_74

Oracle recommends that Oracle E-Business Suite customers use the CPU release (JRE 1.8.0_73) and only upgrade to the PSU release (1.8.0_74) if they require a specific bug fix.  For further information and bug fix details see Java CPU and PSU Releases Explained.

Internet Explorer users are advised to upgrade to JRE 1.8.0_74 rather than JRE 1.8.0_73 due to a known performance issue.

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Recommended Browser documentation for your EBS release for details.

Where are the official patch requirements documented?

All patches required for ensuring full compatibility of the E-Business Suite with JRE 8 are documented in these Notes:

For EBS 12.1 & 12.2

EBS + Discoverer 11g Users

This JRE release is certified for Discoverer 11g in E-Business Suite environments with the following minimum requirements:

Implications of Java 6 and 7 End of Public Updates for EBS Users

The Oracle Java SE Support Roadmap and Oracle Lifetime Support Policy for Oracle Fusion Middleware documents explain the dates and policies governing Oracle's Java Support.  The client-side Java technology (Java Runtime Environment / JRE) is now referred to as Java SE Deployment Technology in these documents.

Starting with Java 7, Extended Support is not available for Java SE Deployment Technology.  It is more important than ever for you to stay current with new JRE versions.

If you are currently running JRE 6 on your EBS desktops:

  • You can continue to do so until the end of Java SE 6 Deployment Technology Extended Support in June 2017
  • You can obtain JRE 6 updates from My Oracle Support.  See:

If you are currently running JRE 7 on your EBS desktops:

  • You can continue to do so until the end of Java SE 7 Deployment Technology Premier Support in July 2016
  • You can obtain JRE 7 updates from My Oracle Support.  See:

If you are currently running JRE 8 on your EBS desktops:

Will EBS users be forced to upgrade to JRE 8 for Windows desktop clients?

No.

This upgrade is highly recommended but remains optional while Java 6 and 7 are covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JRE 6 and 7 desktop clients. Note that enabling JRE Auto-Update has different impacts depending on the JRE release currently installed, even though ongoing support for JRE 6 and 7 remains available for EBS customers; see the next section below.

Impact of enabling JRE Auto-Update

Java Auto-Update is a feature that keeps desktops up-to-date with the latest Java release.  The Java Auto-Update feature connects to java.com at a scheduled time and checks to see if there is an update available.

Enabling the JRE Auto-Update feature on desktops with JRE 6 installed will have no effect.

With the release of the January Critical Patch Update, the Java Auto-Update mechanism will automatically update JRE 7 plug-ins to JRE 8.

Enabling the JRE Auto-Update feature on desktops with JRE 8 installed will apply JRE 8 updates.

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

What do Mac users need?

JRE 8 is certified for Mac OS X 10.8 (Mountain Lion), 10.9 (Mavericks), and 10.10 (Yosemite) desktops.  For details, see:

Will EBS users be forced to upgrade to JDK 8 for EBS application tier servers?

No.

JRE is used for desktop clients.  JDK is used for application tier servers.

JRE 8 desktop clients can connect to EBS environments running JDK 6 or 7.

JDK 8 is not certified with the E-Business Suite.  EBS customers should continue to run EBS servers on JDK 6 or 7.

Known Issues

Internet Explorer Performance Issue

Launching JRE 1.8.0_73 through Internet Explorer will have a delay of around 20 seconds before the applet starts to load (Java Console will come up if enabled).

This issue is fixed in JRE 1.8.0_74. Internet Explorer users are recommended to move up to this version of JRE 8.

Form Focus Issue

Clicking outside the frame during forms launch may cause a loss of focus when running with JRE 8 and can occur in all Oracle E-Business Suite releases. To fix this issue, apply the following patch:

References

Related Articles
Categories: APPS Blogs

NEW OTN Virtual Technology Summit Sessions Coming!

OTN TechBlog - Mon, 2016-02-08 12:36

Join us for free Hands-On Learning with Oracle and Community Experts! The Oracle Technology Network invites you to attend one of our next Virtual Technology Summits on March 8th, March 15th and April 5th. Hear Oracle ACEs, Java Champions and Oracle Product Experts share their insights and expertise through Hands-on-Labs (HOL), highly technical presentations and demos. This interactive, online event offers four technical tracks:

Database: The database track provides latest updates and in-depth topics covering Oracle Database 12c Advanced Options, new generation application development with JSON, Node.js and Oracle Database Cloud, as well as sessions dedicated to the most recent capabilities of MySQL, benefiting both Oracle and MySQL DBAs and Developers.

Middleware: The middleware track is aimed at developers focused on gaining new skills and expertise in emerging technology areas such as the Internet of Things (IoT), Mobile and PaaS. This track also provides the latest updates on Oracle WebLogic 12.2.1 and Java EE.

Java: In this track, we will show you improvements to the Java platform and APIs. You’ll also learn how the Java language enables you to develop innovative applications using microservices and parallel programming, integrate with other languages and tools, and gain insight into the APIs that will substantially boost your productivity.

System: Designed for system administrators, this track covers best practices for implementing, optimizing, and securing your operating system, management tools, and hardware. In addition, we will also discuss best practices for Storage, SPARC, and software development.

Register Today -

March 8th, 2016 - 9:30am to 1:30pm PT / 12:30pm to 4:30pm ET / 3:30pm to 7:30pm BRT

March 15, 2016 - 9:30 a.m. to 1:30 p.m. IST / 12:00 p.m. to 4:00 p.m. SGT  / 3:00 p.m. to 7:00 p.m. AEDT

April 5, 2016 - 3:30 p.m. to 7:30 p.m. BRT  / 09:30 - 13:00 GMT (UK)  / 10:30 - 14:00 CET

Resolving Hardware Issues with a Kernel Upgrade in Linux Mint

The Anti-Kyte - Sun, 2016-02-07 11:40

One evening recently, whilst climbing the wooden hills with netbook in hand, I encountered a cat who had decided that halfway up the stairs was a perfect place to catch forty winks.
One startled moggy later, I had become the owner of what I can only describe as…an ex-netbook.

Now, finally, I’ve managed to get a replacement (netbook, not cat).

As usual when I get a new machine, the first thing I did was to replace Windows with Linux Mint…with the immediate result being that the wireless card stopped working.

The solution ? Don’t (kernel) panic, kernel upgrade !

Support for most of the hardware out there is included in the Linux Kernel. The kernel is enhanced and released every few months. However, distributions, such as Mint, tend to stick on one kernel version for a while in order to provide a stable base on which to develop.
This means that, if Linux is not playing nicely with your Wireless card/web-cam/any other aspect of your machine’s hardware, a kernel upgrade may resolve your problem.
Obviously it’s always good to do a bit of checking to see if this might be the case.
It’s also good to have a way of putting things back as they were should the change we’re making not have the desired effect.

What I’m going to cover here is the specific issue I encountered with my new Netbook and the steps I took to figure out what kernel version might fix the problem.
I’ll then detail the kernel upgrade itself.

Machine details

The machine in question is an Acer TravelMate-B116.
It has an 11.6 inch screen, 4GB RAM and a 500GB HDD.
For the purposes of the steps that follow, I was able to connect to the internet via a wired connection to my router. Well, up until I got the wireless working.
The Linux OS I’m using is Linux Mint 17.3 Cinnamon.
Note that I have disabled UEFI and am booting the machine in Legacy mode.

Standard Warning – have a backup handy !

In my particular circumstances, I was trying to configure a new machine. If it all went wrong, I could simply re-install Mint and be back where I started.
If you have stuff on your machine that you don’t want to lose, it’s probably a good idea to back it up onto separate media ( e.g. a USB stick).
Additionally, if you are not presented with a grub menu when you boot your machine, you may consider running the boot-repair tool.
This will ensure that you have the option of which kernel to use if you have more than one to choose from ( which will be the case once you’ve done the kernel upgrade).

It is possible that upgrading the kernel may cause issues with some of the hardware that is working fine with the kernel you currently have installed, so it’s probably wise to be prepared.

Identifying the card

The first step then, is to identify exactly which wireless network card is in the machine.
From a terminal window …

lspci

00:00.0 Host bridge: Intel Corporation Device 2280 (rev 21)
00:02.0 VGA compatible controller: Intel Corporation Device 22b1 (rev 21)
00:0b.0 Signal processing controller: Intel Corporation Device 22dc (rev 21)
00:13.0 SATA controller: Intel Corporation Device 22a3 (rev 21)
00:14.0 USB controller: Intel Corporation Device 22b5 (rev 21)
00:1a.0 Encryption controller: Intel Corporation Device 2298 (rev 21)
00:1b.0 Audio device: Intel Corporation Device 2284 (rev 21)
00:1c.0 PCI bridge: Intel Corporation Device 22c8 (rev 21)
00:1c.2 PCI bridge: Intel Corporation Device 22cc (rev 21)
00:1c.3 PCI bridge: Intel Corporation Device 22ce (rev 21)
00:1f.0 ISA bridge: Intel Corporation Device 229c (rev 21)
00:1f.3 SMBus: Intel Corporation Device 2292 (rev 21)
02:00.0 Network controller: Intel Corporation Device 3165 (rev 81)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)

It looks like the penultimate entry is our wireless card.
It is possible to get details of the card by using “Intel Corporation Device 3165” as a search term. However, we may be able to get the name of the card by running:

lspci -vq |grep -i wireless -B 1 -A 4

In my case, this returns :

02:00.0 Network controller: Intel Corporation Wireless 3165 (rev 81)
	Subsystem: Intel Corporation Dual Band Wireless AC 3165
	Flags: bus master, fast devsel, latency 0, IRQ 200
	Memory at 91100000 (64-bit, non-prefetchable) [size=8K]
	Capabilities: <access denied>

Further digging around reveals that, according to Intel, this card is supported in Linux starting at kernel version 4.2.

Now, which version of the Kernel are we actually running ?

Identifying the current kernel version and packages

This is relatively simple. In the Terminal just type :

uname -r

On Mint 17.3, the output is :

3.19.0-32-generic

At this point, we now know that an upgrade to the kernel may well solve our wireless problem. The question now is, which packages do we need to install to effect the upgrade ?

If you look in the repositories, there appear to be at least two distinct versions of kernel packages, the generic and something called low-latency.
In order to be confident of which packages we want to get, it’s probably a good idea to work out what we have now.
This can be achieved by searching the installed packages for the version number of the current kernel.
We can do this in the terminal :

dpkg --list |grep 3.19.0-32 |awk '{print $2}'

In my case, this returned :

linux-headers-3.19.0-32
linux-headers-3.19.0-32-generic
linux-image-3.19.0-32-generic
linux-image-extra-3.19.0-32-generic
linux-kernel-generic

As an alternative, you could use the graphical Synaptic Package Manager.
You can start this from the menu ( Administration/Synaptic Package Manager).


Now we know what we’ve got, the next step is to find the kernel version that we need…

Getting the new kernel packages

It may well be the case that the kernel version you’re after has already been added to the distro’s repository.
To see if this is the case, use Synaptic Package Manager to search as follows :

Start Synaptic Package Manager from the System Menu.
You will be prompted for your password.

Click the Status button and select Not Installed


In the Quick filter bar, enter the text : linux-headers-4.2*-generic


This should give you a list of any kernel 4.2 versions available in the repository.

If, as I did, you find the version you’re looking for, you need to select the packages that are equivalent to the ones you already have installed on your system.
Incidentally, there are a number of 4.2 kernel versions available, so I decided to go for the latest.
In my case then, I want to install :

  • linux-headers-4.2.0-25
  • linux-headers-4.2.0-25-generic
  • linux-image-4.2.0-25-generic
  • linux-image-extra-4.2.0-25-generic

NOTE – If you don’t find the kernel version you are looking for, you can always download the packages directly using these instructions.

Assuming we have found the version we want, we need to now search for the relevant packages.
In the Quick filter field in Synaptic, change the search string to : linux-*4.2.0-25

To mark the packages for installation, right-click each one in turn and select Mark for Installation.


Once you’ve selected them all, hit the Apply button.

Once the installation is completed, you need to re-start your computer.

On re-start, you should find that the Grub menu has an entry for Advanced Options.
If you select this, you’ll see that you have a list of kernels to choose to boot into.
This comes in handy if you want to go back to running the previous kernel version.

For now though, we’ll boot into the kernel we’ve just installed.
We can confirm that the installation has been successful, once the machine starts, by opening a Terminal and running :

uname -r

If all has gone to plan, we should now see…

4.2.0-25-generic

Even better in my case, my wireless card has now been recognised.
Opening the systray icon, I can enable wireless and connect to my router.

Backing out of the Kernel Upgrade

If you find that the effects of the kernel upgrade are undesirable, you can always go back to the kernel you started with.
If at all possible, I’d recommend starting Mint using the old kernel before doing this.

If you’re running on the kernel for which you are deleting the packages, you may get some alarming warnings. However, once you re-start, you should be back to your original kernel version.

The command then, is :

sudo apt-get remove linux-headers-4.2* linux-image-4.2*

…where 4.2 is the version of the kernel you want to remove.
Run this and the output looks like this…

The following packages will be REMOVED
  linux-headers-4.2.0-25 linux-headers-4.2.0-25-generic
  linux-image-4.2.0-25-generic linux-image-extra-4.2.0-25-generic
  linux-signed-image-4.2.0-25-generic
0 to upgrade, 0 to newly install, 5 to remove and 7 not to upgrade.
After this operation, 294 MB disk space will be freed.
Do you want to continue? [Y/n]

Once the packages have been removed, the old kernel will be in use on the next re-boot.
After re-starting, you can check this with :

uname -r

Thankfully, these steps proved unnecessary in my case and the kernel upgrade has saved me from hardware cat-astrophe.


Filed under: Linux, Mint Tagged: Acer TravelMate-B116, apt-get remove, dpkg, Intel Corporation Dual Band Wireless AC 3165, kernel upgrade, lspci, synaptic package manager, uname -r

Big Nodes, Concurrent Parallel Execution And High System/Kernel Time

Randolf Geist - Sat, 2016-02-06 17:47
The following is probably only relevant for customers that run Oracle on big servers with lots of cores in single instance mode (this specific problem doesn't reproduce in a RAC environment; see below for an explanation why), like one of my clients that makes use of the Exadata Xn-8 servers, for example an X4-8 with 120 cores / 240 CPUs per node (the problem also reproduced on older and smaller boxes with 64 cores / 128 CPUs per node).

They recently came up with a re-write of a core application functionality. Part of this code started the same code path for different data sets, potentially several times concurrently, ending up with many sessions making use of Parallel Execution. In addition, a significant part of the queries used by this code made questionable use of Parallel Execution, in the sense that queries of very short duration used Parallel Execution, hence ending up with several Parallel Execution starts per second. You could see this pattern in the AWR reports, which showed several "DFO trees" parallelized per second on average over an hour period:



When the new code was tested with production-like data volumes and patterns, in the beginning the CPU profile of such a big node (running in single instance mode) looked like this, when nothing else was running on that box:



As you can see, the node was completely CPU bound, spending most of the time in System/Kernel time. The AWR reports showed some pretty unusual PX wait events as significant:



"PX Deq: Slave Session Stats" shouldn't be a relevant wait event since it is about the PX slaves at the end of a PX execution passing an array of session statistics to the PX coordinator for aggregating the statistics on coordinator level. So obviously something was slowing down this PX communication significantly (and the excessive usage of Parallel Execution was required to see this happen).

Also some of the more complex Parallel Execution queries performing many joins and ending up with a significant number of data redistributions ran like in slow motion, although claiming to spend 100% of their time on CPU, but according to Active Session History almost 90% of that time was spent on the redistribution operations:

SQL statement execution ASH Summary
-----------------------------------------------

              |               |               |
              |PX SEND/RECEIVE|PX SEND/RECEIVE|
PERCENTAGE_CPU|        PERCENT|    CPU PERCENT|
--------------|---------------|---------------|
            98|             86|             87|


Running the same query with the same execution plan on the same data and the same box during idle times showed an almost 20 times better performance, and less than 40% time spent on redistribution:

SQL statement execution ASH Summary
-----------------------------------------------

              |               |               |
              |PX SEND/RECEIVE|PX SEND/RECEIVE|
PERCENTAGE_CPU|        PERCENT|    CPU PERCENT|
--------------|---------------|---------------|
            96|             38|             37|

So it looked like those queries ran into some kind of contention that wasn't instrumented in Oracle but happened outside on O/S level, showing up as CPU Kernel time - similar to what could be seen in previous versions of Oracle when spinning on mutexes.

Reducing the excessive usage of Parallel Execution showed a significant reduction in CPU time, but still the high System/Kernel time was rather questionable:



So the big question was - where was that time spent in the kernel to see whether this gives further clues.

Analysis
Running "perf top" on the node during such a run showed this profile:

  PerfTop:  129074 irqs/sec  kernel:76.4%  exact:  0.0% [1000Hz cycles],  (all, 128 CPUs)
-------------------------------------------------------------------------------------------------------------------

             samples  pcnt function                 DSO
             _______ _____ ________________________ ___________________________________________________________

          1889395.00 67.8% __ticket_spin_lock       /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
            27746.00  1.0% ktime_get                /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
            24622.00  0.9% weighted_cpuload         /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
            23169.00  0.8% find_busiest_group       /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
            17243.00  0.6% pfrfd1_init_locals       /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
            16961.00  0.6% sxorchk                  /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
            15434.00  0.6% kafger                   /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
            11531.00  0.4% try_atomic_semop         /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
            11006.00  0.4% __intel_new_memcpy       /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
            10557.00  0.4% kaf_typed_stuff          /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
            10380.00  0.4% idle_cpu                 /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             9977.00  0.4% kxfqfprFastPackRow       /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             9070.00  0.3% pfrinstr_INHFA1          /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             8905.00  0.3% kcbgtcr                  /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             8757.00  0.3% ktime_get_update_offsets /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             8641.00  0.3% kgxSharedExamine         /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             7487.00  0.3% update_queue             /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             7233.00  0.3% kxhrPack                 /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             6809.00  0.2% rworofprFastUnpackRow    /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             6581.00  0.2% ksliwat                  /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             6242.00  0.2% kdiss_fetch              /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             6126.00  0.2% audit_filter_syscall     /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             5860.00  0.2% cpumask_next_and         /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             5618.00  0.2% kaf4reasrp1km            /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             5482.00  0.2% kaf4reasrp0km            /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             5314.00  0.2% kopp2upic                /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             5129.00  0.2% find_next_bit            /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             4991.00  0.2% kdstf01001000000km       /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             4842.00  0.2% ktrgcm                   /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             4762.00  0.2% evadcd                   /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             4580.00  0.2% kdiss_mf_sc              /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle

Running "perf" on a number of Parallel Slaves being busy on CPU showed this profile:

     0.36%     ora_xxxx  [kernel.kallsyms]             [k] 

__ticket_spin_lock
               |
               --- __ticket_spin_lock
                  |          
                  |--99.98%-- _raw_spin_lock
                  |          |          
                  |          |--100.00%-- ipc_lock
                  |          |          ipc_lock_check
                  |          |          |          
                  |          |          |--99.83%-- semctl_main
                  |          |          |          sys_semctl
                  |          |          |          system_call
                  |          |          |          __semctl
                  |          |          |          |          
                  |          |          |           --100.00%-- skgpwpost
                  |          |          |                     kslpsprns
                  |          |          |                     kskpthr
                  |          |          |                     ksl_post_proc
                  |          |          |                     kxfprienq
                  |          |          |                     kxfpqrenq
                  |          |          |                     |          
                  |          |          |                     |--98.41%-- kxfqeqb
                  |          |          |                     |          kxfqfprFastPackRow
                  |          |          |                     |          kxfqenq
                  |          |          |                     |          qertqoRop
                  |          |          |                     |          kdstf01001010000100km
                  |          |          |                     |          kdsttgr
                  |          |          |                     |          qertbFetch
                  |          |          |                     |          qergiFetch
                  |          |          |                     |          rwsfcd
                  |          |          |                     |          qertqoFetch
                  |          |          |                     |          qerpxSlaveFetch
                  |          |          |                     |          qerpxFetch
                  |          |          |                     |          opiexe
                  |          |          |                     |          kpoal8

Running "strace" on those Parallel Slaves showed this:

.
.
.
semctl(1347842, 397, SETVAL, 0x1)       = 0
semctl(1347842, 388, SETVAL, 0x1)       = 0
semctl(1347842, 347, SETVAL, 0x1)       = 0
semctl(1347842, 394, SETVAL, 0x1)       = 0
semctl(1347842, 393, SETVAL, 0x1)       = 0
semctl(1347842, 392, SETVAL, 0x1)       = 0
semctl(1347842, 383, SETVAL, 0x1)       = 0
semctl(1347842, 406, SETVAL, 0x1)       = 0
semctl(1347842, 389, SETVAL, 0x1)       = 0
semctl(1347842, 380, SETVAL, 0x1)       = 0
semctl(1347842, 395, SETVAL, 0x1)       = 0
semctl(1347842, 386, SETVAL, 0x1)       = 0
semctl(1347842, 385, SETVAL, 0x1)       = 0
semctl(1347842, 384, SETVAL, 0x1)       = 0
semctl(1347842, 375, SETVAL, 0x1)       = 0
semctl(1347842, 398, SETVAL, 0x1)       = 0
semctl(1347842, 381, SETVAL, 0x1)       = 0
semctl(1347842, 372, SETVAL, 0x1)       = 0
semctl(1347842, 387, SETVAL, 0x1)       = 0
semctl(1347842, 378, SETVAL, 0x1)       = 0
semctl(1347842, 377, SETVAL, 0x1)       = 0
semctl(1347842, 376, SETVAL, 0x1)       = 0
semctl(1347842, 367, SETVAL, 0x1)       = 0
semctl(1347842, 390, SETVAL, 0x1)       = 0
semctl(1347842, 373, SETVAL, 0x1)       = 0
semctl(1347842, 332, SETVAL, 0x1)       = 0
semctl(1347842, 379, SETVAL, 0x1)       = 0
semctl(1347842, 346, SETVAL, 0x1)       = 0
semctl(1347842, 369, SETVAL, 0x1)       = 0
semctl(1347842, 368, SETVAL, 0x1)       = 0
semctl(1347842, 359, SETVAL, 0x1)       = 0
.
.
.

So the conclusion was: a lot of CPU time is spent spinning on the "spin lock" (critical code section), caused by calls to "semctl" (semaphores), which are part of the PX code path and come from "ipc_lock" -> "_raw_spin_lock". "strace" shows that all of the calls to "semctl" use the same semaphore set (the first parameter), which explains the contention on that particular semaphore set (and indicates that the locking granule is the semaphore set, not the individual semaphore).

Solution
Based on the "perf" results an Oracle engineer found a suitable, unfortunately unpublished and closed bug from 2013 for 12.1.0.2 that comes up with three different ways how to address the problem:

- Run with "cluster_database" = true: This will take a different code path which simply reduces the number of semaphore calls by two orders of magnitude. We tested this approach and it showed immediate relief on kernel time - that is the explanation why in a RAC environment this specific issue doesn't reproduce.

- Run with different "kernel.sem" settings: The Exadata boxes came with the following predefined semaphore configuration:

kernel.sem = 2048 262144 256 256
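For reference, the four values of "kernel.sem" map to the following kernel limits, in this order:

kernel.sem = SEMMSL SEMMNS SEMOPM SEMMNI
# SEMMSL: maximum number of semaphores per semaphore set
# SEMMNS: maximum number of semaphores system-wide
# SEMOPM: maximum number of operations per semop() call
# SEMMNI: maximum number of semaphore sets system-wide

So lowering the first value (semaphores per set) while raising the last (number of sets) is what produces the more numerous, smaller sets shown below.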

"ipcs" showed the following semaphore arrays with this configuration when starting the Oracle instance:

------ Semaphore Arrays --------
key        semid      owner     perms      nsems    
.
.
.
0xd87a8934 12941057   oracle    640        1502     
0xd87a8935 12973826   oracle    640        1502     
0xd87a8936 12006595   oracle    640        1502    

By reducing the number of semaphores per set and increasing the number of sets, like this:

kernel.sem = 100 262144 256 4096

the following "ipcs" output could be seen:

------ Semaphore Arrays --------
key        semid      owner     perms      nsems    
.
.
.
0xd87a8934 13137665   oracle    640        93       
0xd87a8935 13170434   oracle    640        93       
0xd87a8936 13203203   oracle    640        93       
0xd87a8937 13235972   oracle    640        93       
0xd87a8938 13268741   oracle    640        93       
0xd87a8939 13301510   oracle    640        93       
0xd87a893a 13334279   oracle    640        93       
0xd87a893b 13367048   oracle    640        93       
0xd87a893c 13399817   oracle    640        93       
0xd87a893d 13432586   oracle    640        93       
0xd87a893e 13465355   oracle    640        93       
0xd87a893f 13498124   oracle    640        93       
0xd87a8940 13530893   oracle    640        93       
0xd87a8941 13563662   oracle    640        93       
0xd87a8942 13596431   oracle    640        93       
0xd87a8943 13629200   oracle    640        93       
0xd87a8944 13661969   oracle    640        93       
0xd87a8945 13694738   oracle    640        93       
0xd87a8946 13727507   oracle    640        93       
0xd87a8947 13760276   oracle    640        93       
0xd87a8948 13793045   oracle    640        93       
0xd87a8949 13825814   oracle    640        93       
0xd87a894a 13858583   oracle    640        93       
0xd87a894b 13891352   oracle    640        93       
0xd87a894c 13924121   oracle    640        93       
0xd87a894d 13956890   oracle    640        93       
0xd87a894e 13989659   oracle    640        93       
0xd87a894f 14022428   oracle    640        93       
0xd87a8950 14055197   oracle    640        93       
0xd87a8951 14087966   oracle    640        93       
0xd87a8952 14120735   oracle    640        93       
0xd87a8953 14153504   oracle    640        93       
0xd87a8954 14186273   oracle    640        93       
0xd87a8955 14219042   oracle    640        93


So Oracle now allocated a lot more sets with fewer semaphores per set. We tested this configuration instead of using "cluster_database = TRUE" and got the same low kernel CPU times.

- The bug describes a third option for how to fix this, which has the advantage that the host configuration doesn't need to be changed and the configuration can be done per instance: there is an undocumented parameter "_sem_per_sem_id" that defines the upper limit of semaphores to allocate per set. By setting this parameter to a comparable value like 100 or 128 the net result ought to be the same - Oracle allocates more sets with fewer semaphores per set - but we haven't tested this option.
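As a sketch only (untested here, as the author notes, and the parameter is undocumented, so check with Oracle Support before using it), the per-instance change might look like this:

alter system set "_sem_per_sem_id" = 128 scope = spfile;
-- semaphores are allocated at instance startup, so a restart is required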

Conclusion
So the bottom line was this: certain usage patterns of the Oracle instance lead to contention on spin locks at Linux O/S level if Oracle runs in single instance mode with the semaphore settings recommended so far, because those settings result in all semaphore calls going for the same semaphore set. By having Oracle allocate more semaphore sets, the calls were spread over more sets, hence significantly reducing the contention.

There is probably some internal note available at Oracle that indicates that the default semaphore settings recommended for big nodes are not optimal for running single instance mode under certain circumstances, but I don't know if there is a definitive, official guide available yet.

This is the CPU profile of exactly the same test workload as before using the changed "kernel.sem" settings:



Also in the AWR report the unusual PX related wait events went away and performance improved significantly, in particular also for those complex queries mentioned above.

PaaS4SaaS Developers' Code Is Always 'On': OAUX is on OTN and GitHub

Usable Apps - Sat, 2016-02-06 09:35

Boom! That's the sound of thunder rolling as PaaS and SaaS developers work as fast as lightning in the cloud. The cloud has changed customer expectations about applications too; if they don’t like their user experience (UX) or they don’t get it fast, they’ll go elsewhere.

PaaS4SaaS developers know their code is always 'on'.

But you can easily accelerate the development of your PaaS4SaaS solutions with a killer UX by downloading the AppsCloudUIKit software, part of the Cloud UX simplified UI Rapid Development Kit (RDK) for Release 10 PaaS4SaaS solutions, from the Oracle Technology Network (OTN) or from GitHub.

The Oracle Applications User Experience (OAUX) team's Oracle Cloud UX RDK works with Oracle JDeveloper 11.1.1.9.0, 12.1.3.0.0 and 12.2.1.0.0. The kit downloads include a developer eBook that explains the technical requirements and how to build a complete SaaS or PaaS solution in a matter of hours.

Build a simplified UI with the RDK

The AppsCloudUIKit software part of our partner training kit is on OTN and GitHub and is supported by video and eBook guidance.

Build a simplified UI developer eBook

The developer eBook is part of the AppsCloudUIKit downloads on OTN and GitHub.

For the complete developer experience fast, check out the cool Oracle Usable Apps channel YouTube videos from our own dev and design experts on how to design and build your own simplified UI for SaaS using PaaS.

Enjoy. Check in with us on any questions relating to versions or requirements. Share your thoughts in the comments after you've used the complete RDK and stay tuned for more information. It's an ongoing story...

Downloads 

Security Alert CVE-2016-0603 Released

Oracle Security Team - Fri, 2016-02-05 14:42

Oracle just released Security Alert CVE-2016-0603 to address a vulnerability that can be exploited when installing Java 6, 7 or 8 on the Windows platform. This vulnerability has received a CVSS Base Score of 7.6.

To be successfully exploited, this vulnerability requires that an unsuspecting user be tricked into visiting a malicious web site and download files to the user's system before installing Java 6, 7 or 8. Though considered relatively complex to exploit, this vulnerability may result, if successfully exploited, in a complete compromise of the unsuspecting user’s system.

Because the exposure exists only during the installation process, users need not upgrade existing Java installations to address the vulnerability. However, Java users who have downloaded any old version of Java prior to 6u113, 7u97 or 8u73, should discard these old downloads and replace them with 6u113, 7u97 or 8u73 or later.

As a reminder, Oracle recommends that Java home users visit Java.com to ensure that they are running the most recent version of Java SE and that all older versions of Java SE have been completely removed. Oracle further advises against downloading Java from sites other than Java.com as these sites may be malicious.

For more information, the advisory for Security Alert CVE-2016-0603 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2016-0603-2874360.html

 

PeopleTools CPU analysis and supported versions of PeopleTools (update for January 2016 CPU)

PeopleSoft Technology Blog - Fri, 2016-02-05 14:30

Questions often arise on the PeopleTools versions for which Critical Patch Updates have been published, or whether a particular PeopleTools version is supported. 

The attached page shows the patch number matrix for PeopleTools versions associated with a particular CPU publication. This information will help you decide which CPU to apply and when to consider upgrading to a more current release.

The link in "CPU Date" goes to the landing page for CPU advisories, the link in the individual date, e.g. Apr-10, goes to the advisory for that date.

The page also shows the CVEs addressed in the CPU, a synopsis of each issue and the Common Vulnerability Scoring System (CVSS) value.

To find more details on any CVE, simply replace the CVE number in the sample URL below.

http://www.cvedetails.com/cve/CVE-2010-2377

Common Vulnerability Scoring System Version 2 Calculator

http://nvd.nist.gov/cvss.cfm?calculator&adv&version=2

This page shows the components of the CVSS score.

Example CVSS response policy http://www.first.org/_assets/cvss/cvss-based-patch-policy.pdf

All the details in this page are available on My Oracle Support and public sites.

The RED column indicates the last patch for a PeopleTools version and effectively the last support date for that version.

Applications Unlimited support does NOT apply to PeopleTools versions.

How to get nfs info on 1000 or many hosts using Oracle Enterprise Manager

Arun Bavera - Fri, 2016-02-05 11:27
There was a requirement to get nfs info on all the hosts.
Here is the way to get it:

Create an OS job in EM12c with the following text and execute it on all hosts of interest, assuming you have a common shared mount on all these hosts.
Otherwise you can create a Metric Extension to collect this info and query the repository (via Configuration Manager or directly) to get it.
 echo -e `echo '\n';hostname --l;echo '\n=====================================\n';nfsstat -m;echo '\n=====================================\n';exit 0` >> /nfs_software/nfs_info_PROD.txt
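The nested backtick/echo construction works, but an equivalent, more readable form is a simple command group (a sketch; the output path is the author's example, and hostname --long prints the FQDN on most Linux distributions):

{
  echo
  hostname --long
  echo '====================================='
  nfsstat -m
  echo '====================================='
} >> /nfs_software/nfs_info_PROD.txt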



Categories: Development

General troubleshooting lessons from recent Delphix issue

Bobby Durrett's DBA Blog - Fri, 2016-02-05 11:25

Delphix support helped me resolve an issue yesterday and the experience gave me the idea of writing this post about several general computer issue troubleshooting tips that I have learned down through the years. Never mind that I ignored these lessons during this particular problem. This is more of a “do as I say” and not a “do as I do” story. Actually, sometimes I remember these lessons. I didn’t do so well this week. But the several mistakes that I made resolving this recent Delphix issue motivate me to write this post and, if nothing else, remind myself of the lessons I’ve learned in the past about how to resolve a computer problem.

Don’t panic!

I’m reminded of the friendly advice on the cover of the Hitchhiker’s Guide to the Galaxy: “Don’t panic!”. So, yesterday it was 4:30 pm. I had rebooted the Delphix virtual machine and then in a panic had the Unix team reboot the HP Unix target server. But, still I could not bring up any of the Delphix VDBs.  We had people coming over to our house for dinner that night and I was starting to worry that I would be working on this issue all night. I ended up getting out of the office by 5:30 pm and had a great dinner with friends. What was I so stressed about? Even the times that I have been up all night it didn’t kill me. Usually the all night issues lead to me learning things anyway.

Trust support

The primary mistake that I made was to get my mind fixed on a solution to the problem instead of working with Delphix support and trusting them to guide us to the solution. We had a number of system issues due to a recent network issue and I got my mind set on the idea that my Delphix issue was due to some network hangup. I feel sorry for our network team because it seems like the first thought people have any time there is some issue is that it is a “network issue”. I should know better. How many times have I been working on issues when everyone says it is a “database issue” and I’m annoyed because I know that the issue is somewhere else and they are not believing me when I point to things outside the database. Anyway, I opened a case with Delphix on Monday when I couldn’t get a VDB to come down. It just hung for 5 minutes until it gave me an error. I assumed that it was a network hangup and got fixated on rebooting the Delphix VM. Ack! Ultimately, I ended up working with two helpful and capable people in Delphix support and they resolved the issue which was not what I thought at all. There are times to disagree with support and push for your own solution but I did this too early in this case and I was dead wrong.

Keep it simple

I’ve heard people refer to Occam’s razor which I translate in computer terms to mean “look for simple problems first”. Instead of fixing my mind on some vague network issue where the hardware is not working properly, how about assuming that all the hardware and software is working normally and then thinking about what problems might cause my symptoms? I can’t remember how many times this has bit me. There is almost always some simple explanation.  In this case I had made a change to a Unix shell script that runs when someone logs in as the oracle user. This caused Delphix to no longer be able to do anything with the VDBs on that server. Oops! It was a simple blunder, no big deal. But I’m kicking myself for not first thinking about a simple problem like a script change instead of focusing on something more exotic.

What changed?

I found myself saying the same dumb thing that I’ve heard people say to me all the time: nothing changed. In this case I said something like “this has worked fine for 3 years now and nothing has changed”. The long-suffering and patient Delphix support folks never called me on this, but I was dead wrong. Something had to have changed for something that was working to stop working. I should have spent time looking at the various parts of our Delphix setup to see if anything had changed before I contacted support. All I had to do was see the timestamp on our login script and I would see that something had recently changed.

Understand how it all works

I think my Delphix skills are a little rusty. We just started a new expansion project to add new database sources to Delphix. It has been a couple of years since I’ve done any heavy configuration and troubleshooting. But I used to have a better feel for how all the pieces fit together. I should have thought about what must have gone on behind the scenes when I asked Delphix to stop a VDB and it hung for 5 minutes. What steps was it doing? Where in the process could the breakdown be occurring? Delphix support did follow this type of reasoning to find the issue. They manually tried some of the steps that the Delphix software would do automatically until they found the problem. If I stopped to think about the pieces of the process I could have done the same. This has been a powerful approach to solving problems all through my career. I think about resolving PeopleSoft issues. It just helps to understand how things work. For example, if you understand how the PeopleSoft login process works you can debug login issues by checking each step of the process for possible issues. The same is true for Oracle logins from clients. In general, the more you understand all the pieces of a computer system, down to the transistors on the chips, the better chance you have of visualizing where the problem might be.

Well, I can’t think of any other pearls of wisdom from this experience but I thought I would write these down while it was on my mind. Plus, I go on call Monday morning so I need to keep these in mind as I resolve any upcoming issues. Thanks to Delphix support for their good work on this issue.

Categories: DBA Blogs

Node-oracledb: Avoiding "ORA-01000: maximum open cursors exceeded"

Christopher Jones - Fri, 2016-02-05 05:45

Developers starting out with Node have to get to grips with the 'different' programming style of JavaScript that seems to cause methods to be called when least expected! While you are still in the initial hacking-around-with-node-oracledb phase you may sometimes encounter the error ORA-01000: maximum open cursors exceeded. A cursor is "a handle for the session-specific private SQL area that holds a parsed SQL statement and other processing information".

Here are things to do when you see an ORA-1000:

  • Avoid having too many incompletely processed statements open at one time:

    • Close ResultSets before releasing the connection.

    • If cursors are opened with dbms_sql.open_cursor() in a PL/SQL block, close them before the block returns - except for REF CURSORS being passed back to node-oracledb. (And if a future node-oracledb version supports Oracle Database 12c Implicit Result Sets, these cursors should likewise not be closed in the PL/SQL block)

    • Make sure your application is handling connections and statements in the order you expect.

  • Choose the appropriate Statement Cache size. Node-oracledb has a statement cache per connection. When node-oracledb internally releases a statement it will be put into the statement cache of that connection, but its cursor will remain open. This makes statement re-execution very efficient.

    The cache size is settable with the stmtCacheSize attribute. The appropriate statement cache size you choose will depend on your knowledge of the locality of the statements, and of the resources available to the application: are statements re-executed; will they still be in the cache when they get executed; how many statements do you want to be cached? In rare cases when statements are not re-executed, or are likely not to be in the cache, you might even want to disable the cache to eliminate its management overheads.

    Incorrectly sizing the statement cache will reduce application efficiency. Luckily with Oracle 12.1, the cache can be automatically tuned using an oraaccess.xml file.

    More information on node-oracledb statement caching is here.

  • Don't forget to use bind variables otherwise each variant of the statement will have its own statement cache entry and cursor. With appropriate binding, only one entry and cursor will be needed.

  • Set the database's open_cursors parameter appropriately. This parameter specifies the maximum number of cursors that each "session" (i.e. each node-oracledb connection) can use. When a connection exceeds the value, the ORA-1000 error is thrown. Documentation on open_cursors is here.

    Along with a cursor per entry in the connection's statement cache, any new statements that a connection is currently executing, or ResultSets that haven't been released (in neither situation are these yet cached), will also consume a cursor. Make sure that open_cursors is large enough to accommodate the maximum open cursors any connection may have. The upper bound required is stmtCacheSize + the maximum number of executing statements in a connection.

    Remember this is all per connection. Also cache management happens when statements are internally released. The majority of your connections may use less than open_cursors cursors, but if one connection is at the limit and it then tries to execute a new statement, that connection will get ORA-1000: maximum open cursors exceeded.
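To make the first points above concrete, here is a minimal callback-style sketch using the node-oracledb ResultSet API (the credentials, connect string, table and column names are placeholders):

var oracledb = require('oracledb');

oracledb.getConnection(
  { user: 'scott', password: 'tiger', connectString: 'localhost/orcl' },  // placeholders
  function (err, connection) {
    if (err) { console.error(err); return; }
    connection.execute(
      'SELECT employee_id, last_name FROM employees WHERE department_id = :did',  // bind variable
      [80],                 // binding means one cached statement (and cursor) serves all :did values
      { resultSet: true },  // fetch via a ResultSet
      function (err, result) {
        if (err) { console.error(err); return; }
        result.resultSet.getRows(100, function (err, rows) {
          console.log(rows);
          // Close the ResultSet before releasing the connection, otherwise
          // its cursor stays open and counts against open_cursors
          result.resultSet.close(function (err) {
            connection.release(function (err) {});
          });
        });
      });
  });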

Storage difference between 2 identical Exa boxes. How and why?

Syed Jaffar - Thu, 2016-02-04 04:58
We noticed around a 1.6TB storage difference between two Eighth (1/8) Exadata boxes while configuring Data Guard, and wondered what went wrong. The Exa box configured for DR was around 1.6TB short compared to the other Exa box. We verified the LUN, physical disk and grid disk status on a cell, which showed active/online status. The tricky part on Exadata is that everything has to be active/online across all cell storage servers. We then figured out that the grid disk status on the 3rd cell storage server was inactive. After making the grid disks active on the 3rd cell server, everything became normal, I mean, the missing 1.6TB of space appeared.
When you work with Exadata, you need to verify all cell storage servers to confirm an issue, rather than querying just one cell server.
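A quick way to perform that check across every cell at once is dcli plus CellCLI (a sketch, assuming the usual cell_group file listing your storage servers):

dcli -g cell_group -l celladmin cellcli -e "list griddisk attributes name, status, asmmodestatus"

# and on the affected cell, to bring the grid disks back online:
cellcli -e "alter griddisk all active"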

CPQ Cloud Support Resources

Chris Warticki - Wed, 2016-02-03 09:48

First and ALWAYS – the #1 investment is made in the PRODUCT, PRODUCT, PRODUCT.

Remain a student of the product.

1. CPQ Cloud PRODUCT Information Page

2. CPQ Cloud Learning Center

3. Get trained on the PRODUCT – CPQ Cloud Learning Subscription

4. Oracle Learning Library

a. Cloud Library

My Oracle Support CPQ Cloud Support Center 

CPQ Newsletter

Personalize My Oracle Support Experience

1. Setup Proactive Alerts and Notifications

2. Customize your MOS Dashboard

3. Remain in the Know – Subscribe to Cloud and SaaS, Newsletters

Collaborate. Communicate. Connect

1. Oracle Mobile App – News, Events, Mobile MOS, Videos etc

2. Applications Customer Connect

3. My Oracle Support Community

a. CPQ (BigMachines) Community

SOCIAL Circles of Influence

1. Oracle CPQ Cloud

2. Oracle Cloud Zone

3. Oracle Cloud Marketplace

4. Cloud Café (Podcasts)

5. CPQ Blog

6. Oracle Cloud Solutions Blog

Engage with Oracle Support

1. Upload ALL reports if logging a Service Request

2. Leverage Oracle Collaborative Support (web conferencing)

3. Better Yet – Record your issue and upload it (why wait for a scheduled web conference?)

4. Request Management Attention as necessary

Agile Development with Oracle Developer Cloud Service and JDeveloper 12.2.1

Shay Shmeltzer - Tue, 2016-02-02 18:49

I blogged in the past about using Oracle Developer Cloud Service (DevCS) together with JDeveloper/ADF to manage your code and automate your builds.

Since I wrote those blog entries, we released a new version of JDeveloper (12.2.1) that added deeper integration with the Developer Cloud Service functionality for tracking tasks/issues. In parallel, Developer Cloud Service also added various features; one of the new areas covered is managing sprints and agile development processes.

I thought it might be interesting to show some of the new features of both products working together.

In the video below you'll see how to:

  • Connect to DevCS and its projects from inside JDeveloper
  • Leverage the Team view in JDeveloper (tasks, builds, and code repositories)
  • Interact with Tasks/Issues in JDeveloper
  • Handle Git transactions
  • Associate code commits with specific tasks
  • Monitor team activity in the Team Dashboard
  • Create Agile boards and manage sprints in Developer Cloud Service

One other interesting feature I'm not showing above is the ability for team members to review your code before it is merged into your main code line.

If you want to try Developer Cloud Service out, just get a trial account of the Oracle Java Cloud Service - and you'll get an instance of the Developer Cloud Service that you can use to test this new way of working. 

Categories: Development

Fluid Header and Navigation is the New Standard

PeopleSoft Technology Blog - Tue, 2016-02-02 16:15
Beginning with PeopleTools 8.55, PeopleSoft 9.2 applications will have a Fluid header on their classic pages that matches the fluid pages.  This unifies the user experience of classic pages with newer fluid pages and applications.  With the fluid user interface, user navigation is more seamless and intuitive.  Using fluid homepages, tiles, global search, related actions, and the new fluid Navigation Collection feature, users can more easily navigate to the information most important to them.  Refer to the PeopleSoft Fluid User Interface and Navigation Standards White Paper (Document ID 2063602.1) for more information on design best practices for navigation within PeopleSoft applications.

Part of this change that makes Fluid the default is the replacement of the drop down menu navigation.  In most cases, customers will want their users to simply use the Nav Bar in place of any classic menu navigation.  However, if there is a special circumstance where customers want to maintain the classic menus, they can do so.  There are two ways of displaying the classic menus:

Method 1 – Switch back to the default tangerine or alt-tang theme

1. Go to PeopleTools >> Portal >> Branding >> Branding System Options;
2. Change the system default theme back to default tangerine or alt-tang;
3. Sign out and sign in again to see the changes.

Method 2 – Unhide the drop-down menu in the default fluid theme

1. Go to PeopleTools >> Portal >> Branding >> Define Headers and Footers;
2. Search and open the DEFAULT_HEADER_FLUID header definition;
3. Copy the following styles into the “Style Definitions” field at the bottom of the page, and then save:
.desktopFluidHdr .ptdropdownmenu {
    display: block;
}

4. Sign out and sign in again to see the changes.

We encourage customers to stick with Fluid navigation as the standard.  It's simply better and more intuitive. 

how to install powershell active directory module

Matt Penny - Tue, 2016-02-02 13:23

install: en_windows_7_professional_with_sp1_vl_build_x64_dvd_u_677791.iso

dism /online /enable-feature /featurename:RemoteServerAdministrationTools-Roles-AD
dism /online /enable-feature /featurename:RemoteServerAdministrationTools-Roles-AD-Powershell
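
To check the module actually arrived, something like this should work (a quick sketch using standard cmdlets; nothing here is specific to this Windows build):

Get-Module -ListAvailable ActiveDirectory
Import-Module ActiveDirectory
Get-Command -Module ActiveDirectory | Measure-Object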

Categories: DBA Blogs

BPM/SOA 12c: Symbolic Filebased MDS in Integrated Weblogic

Darwin IT - Tue, 2016-02-02 05:44
In BPM/SOA projects, we use the MDS all the time, for sharing xsd's and wsdl's between projects.

Since 12cR1 (12.1.3) we have had the QuickStart installers for SOA and BPM, which allow you to create an Integrated Weblogic domain to use for SOA Suite and/or BPM Suite.

In most projects we have the contents of the MDS in subversion, and of course a checkout of that in a local svn working copy.

My whitepaper mentioned in this blog entry describes how you can use the MDS in a SOA Suite project from 11g onwards.

But how do you use the MDS in your integrated weblogic? I would expect that somehow, 'magically', the integrated weblogic would 'know' of the MDS references that I have in the adf-config.xml file in my SOA/BPM Application. But unfortunately it doesn't. Those are only used at design/compile time.

Now you could just deploy/sync your MDS to your integrated weblogic, as you would to your test/production server, and as you did on 11g.

But I wouldn't write this blog-entry if I hadn't found a cool trick: symbolic links, even on Windows.

As denoted by the JDEV_USER_DIR variable (see also this blog entry), your DefaultDomain would be in 'c:\Data\JDeveloper\SOA\system12.2.1.0.42.151011.0031\DefaultDomain' or 'c:\Users\MAG\AppData\Roaming\JDeveloper\system12.2.1.0.42.151011.0031\DefaultDomain' (on Windows).

Within the Domain folder you'll find the following folder structure: 'store\gmds\mds-soa\soa-infra'. This is apparently the folder that is used for the MDS for SOA and BPM Suite. In there you'll find the folders:
  • deployed-composites
  • soa
In there you can create a symbolic link (in Windows, a junction) named 'apps', pointing to the folder in your svn working copy that holds the 'oramds://apps'-related content. In Windows this is done like:
C:\...\DefaultDomain\store\gmds\mds-soa\soa-infra>mklink /J apps y:\Generiek\MDS\trunk\SOA\soa-infra\apps
The /J makes it a 'hard symbolic link' or a 'Junction'. Under Linux you would use 'ln -s ...'.

You'll get a response like:
C:\...\DefaultDomain\store\gmds\mds-soa\soa-infra>Junction created for apps <<===>> y:\Generiek\MDS\trunk\SOA\soa-infra\apps
When you perform a dir you'll see:

c:\Data\JDeveloper\SOA\system12.2.1.0.42.151011.0031\DefaultDomain\store\gmds\mds-soa\soa-infra>dir
Volume in drive C is System
Volume Serial Number is E257-B299

Directory of c:\Data\JDeveloper\SOA\system12.2.1.0.42.151011.0031\DefaultDomain\store\gmds\mds-soa\soa-infra

02-02-2016 12:06 <DIR> .
02-02-2016 12:06 <DIR> ..
02-02-2016 12:06 <JUNCTION> apps [y:\Generiek\MDS\trunk\SOA\soa-infra\apps]
02-02-2016 12:07 <DIR> deployed-composites
02-02-2016 11:23 <DIR> soa
0 File(s) 0 bytes
5 Dir(s) 18.475.872.256 bytes free
You can just CD into the apps folder and do a DIR there; it will then list the contents of the svn working copy folder of your MDS, but from within your DefaultDomain.
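
If you prefer PowerShell over cmd for this, New-Item can create the same junction (a sketch, assuming PowerShell 5.0 or later; run it from the same soa-infra folder as the mklink example):

New-Item -ItemType Junction -Path apps -Target y:\Generiek\MDS\trunk\SOA\soa-infra\apps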

Just refire your Integrated Domain's DefaultServer and you should be able to deploy your composites that depend on the MDS.

Pareto Rocks!

Floyd Teter - Mon, 2016-02-01 17:55
I'm a big fan of Vilfredo Pareto's work.  He observed the world around him and developed some very simple concepts to explain what he observed.  Pareto was ahead of his time.

Some of Dr. Pareto's work is based on the Pareto Principle:  the idea that 80% of effects come from 20% of causes.  In the real world, we continually see examples of the Pareto Principle.

I've been conducting one of my informal surveys lately...talking to lots of partners, customers and industry analysts about their experiences in implementing SaaS and the way it fits their business.  And I've found that, almost unanimously, the experience falls in line with the Pareto Principle.  Some sources vary the numbers a bit, but it generally plays out as follows:

  • Using the same SaaS footprint, 60% of any SaaS configuration is the same across all industries.  The configuration values and the data values may be different, but the overall scheme is the same.
  • Add another 20% that is common across SaaS customers within the same vertical (healthcare, retail, higher education, public sector, etc.).
  • Only about 20% of the configuration, business processes, and reporting/business intelligence is unique for the same SaaS footprint in the same industry sector between one customer and another.
Many of the customers I've spoken to in this context immediately place the qualifier: "but our business is different."  And they're right. In fact, for the sake of profitability and survival, their business must be different.  Every business needs differentiators.  But it's different within the scope of that 20% mentioned above.  That other 80% is common with everyone in their business sector.  And, when questioned, most customers agree with that idea.

This is what makes the business processes baked into SaaS so important; any business wants to burn their calories of effort on the differentiators rather than the processes that simply represent "the cost of being in business."  SaaS offers the opportunity to standardize the common 80%, allowing customers to focus their efforts on the unique 20%.  Pareto had it right.






Multisessioning with Python

Gary Myers - Sun, 2016-01-31 00:27
I'll admit that I pretty constantly have at least one window open into SQL*Plus, or a command line ready to run a deployment script. But there are times when it is worth taking a step beyond.

One problem with the architecture of most SQL clients is that they connect to a database, send off a SQL statement, and do nothing until the database responds with an answer. That's a great model when it takes no more than a second or two to get the response. It is cumbersome when the statement can take minutes to complete. Complex clients, like SQL Developer, allow the user to have multiple sessions open, even against a single schema if you use "unshared" worksheets. But they don't co-ordinate those sessions in any way.

Recently I needed to run a task in a number of schemas. We're all nicely packaged up and all I needed to do was execute a procedure in each of the schemas and we can do that from a master schema with appropriate grants. However the tasks would take several minutes for each schema, and we had dozens of schemas to process. Running them consecutively in a single stream would have taken many hours and we also didn't want to set them all off at once through the job scheduler due to the workload. Ideally we wanted a few running concurrently, and when one finished another would start. I haven't found an easy way to do that in the database scheduler.

Python, on the other hand, makes it so darn simple.
[Credit to Stackoverflow, of course]

proc connects to the database, executes the procedure (in this demo it just sets the client info, with a delay so you can see it), and returns.
strs is a collection of parameters.
pool tells it how many concurrent operations to run. The strings are then mapped onto the pool, so A, B and C will start first, and as each finishes, D, E, F and G will be processed as threads become available.

In my case the collection was a list of the schema names, and the statement was more like 'begin ' + arg + '.task; end;'

#!/usr/bin/python

import cx_Oracle
import time
from multiprocessing.dummy import Pool as ThreadPool

"""
Global variables
"""

db   = 'host:port/service'
user = 'scott'
pwd  = 'tiger'

def proc(arg):
    # Each worker thread gets its own connection and database session
    con = cx_Oracle.connect(user + '/' + pwd + '@' + db)
    cur = con.cursor()
    cur.execute('begin sys.dbms_application_info.set_client_info(:info); end;',
                {'info': arg})
    time.sleep(10)
    cur.close()
    con.close()
    return

strs = ['A', 'B', 'C', 'D', 'E', 'F', 'G']

# Make the pool of workers (three concurrent sessions)
pool = ThreadPool(3)
# Pass the elements of the array to the procedure using the pool.
# In this case no values are returned, so 'results' is a dummy.
results = pool.map(proc, strs)
# Close the pool and wait for the work to finish
pool.close()
pool.join()

PS. In this case, I used cx_Oracle as the glue between Python and the database.
The pyOraGeek blog is a good starting point for that.

If/when I get around to blogging again, I'll discuss jaydebeapi / jpype as an alternative. In short, cx_Oracle goes through the OCI client (eg Instant Client) and jaydebeapi takes the JVM / JDBC route.



using powershell’s help system to stash your tips and tricks in ‘about_’ topics

Matt Penny - Sat, 2016-01-30 15:55

There are a bunch of bits of syntax which I struggle to remember.

I’m not always online when I’m using my laptop, but I always have a Powershell window open.

This is a possibly not-best-practice way of using Powershell’s wonderful help system to store bits of reference material.

The problem

I’m moving a WordPress blog to Hugo, which uses Markdown, but I’m struggling to remember the Markdown syntax. It’s not difficult, but I’m getting old and I get confused with Twiki syntax.

In any case this ‘technique’ could be used for anything.

I could equally well just store the content in a big text file and select-string it... but this is more fun :)
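
For the record, the select-string version would be something like this (a throwaway sketch; the notes file name is made up):

Select-String -Path C:\Users\matty\Documents\markdown-notes.txt -Pattern 'bold'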

The content

In this instance I only need a few lines as an aide-memoire:

    ## The second largest heading (an <h2> tag)
    > Blockquotes
    *italic* or _italic_
    **bold** or __bold__
    * Item (no spaces before the *) or
    - Item (no spaces before the -)
    1. Item 1
      1. Furthermore, ...
    2. Item 2
    `monospace` (backticks)
    ```` begin/end code block
    [A link!](http://mattypenny.net).

create a module

The module path is given by:

$env:PSModulePath

Mine is:

C:\Users\matty\Documents\WindowsPowerShell\Modules;C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\PowerShell\Modules\
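
To see those folders one per line, rather than picking through the semicolons, splitting the variable works (standard operator, nothing module-specific):

$env:PSModulePath -split ';'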

Pick one of these folders to create your module in, and do this:

mkdir C:\Users\matty\Documents\WindowsPowerShell\Modules\QuickReference

Then create a dummy Powershell module file in the folder

notepad C:\Users\matty\Documents\WindowsPowerShell\Modules\QuickReference\QuickReference.psm1

The content of the module file is throwaway:

function dummy {write-output "This is a dummy"}

create the help file(s)

Create a language-specific folder for the help files

mkdir C:\Users\matty\Documents\WindowsPowerShell\Modules\QuickReference\en-US\

Edit a file called about_<Topic>.help.txt:

notepad C:\Users\mpenny2\Documents\WindowsPowerShell\Modules\QuickReference\en-US\about_Markdown.help.txt

My content looked like this:

TOPIC
    about_Markdown

SHORT DESCRIPTION
    Syntax for Markdown 

LONG DESCRIPTION

    ## The second largest heading (an <h2> tag)
    > Blockquotes
    *italic* or _italic_
    **bold** or __bold__
    * Item (no spaces before the *) or
    - Item (no spaces before the -)
    1. Item 1
      1. Furthermore, ...
    2. Item 2
    `monospace` (backticks)
    ```` begin/end code block
    [A link!](http://mattypenny.net).

Use the help

I can now do this (I’ll import the module in my $profile):

PS C:\Windows> import-module QuickReference

Then I can access my Markdown help from within Powershell:

PS C:\Windows> help Markdown
TOPIC
    about_Markdown

SHORT DESCRIPTION
    Syntax for Markdown 

LONG DESCRIPTION

    ## The second largest heading (an <h2> tag)
    > Blockquotes
    *italic* or _italic_
    **bold** or __bold__
    * Item (no spaces before the *) or
    - Item (no spaces before the -)
    1. Item 1
      1. Furthermore, ...
    2. Item 2
    `monospace` (backticks)
    ```` begin/end code block
    [A link!](http://mattypenny.net).
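
And if I forget what I've called a topic, Get-Help will happily list every about_ topic on the machine, home-made ones included (standard behaviour):

Get-Help about_*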

Categories: DBA Blogs
