Feed aggregator

LittleArduinoProjects#173 Mini 64-LED Cube

Paul Gallagher - 11 hours 58 min ago
LED cubes were a "thing" a few years back maybe ... but I've never built one. Time to fix that...

Here's my "mini" 4x4x4 cube - 3cm per side with 3mm clear blue LEDs. Pretty compact, and delivers nice effects. The clear blue LEDs work really well - very bright, even when driven with minimal current.

It's encased in a Ferrero Rocher cube box. During the build, that raised some challenges - most of the effort in building the project concerned squeezing all the electronics into the space in the lid (which becomes the base of the cube). Not quite as small as HariFun's "World's Tiniest RGB LED Cube" but about as small as you can get without resorting to SMD LEDs!

All notes, schematics and code are on GitHub as usual. Here's a quick demo:

Retrogaming on an Arduino/OLED "console"

Paul Gallagher - 12 hours 4 min ago
(blogarhythm ~ invaders must die - The Prodigy)
Tiny 128x64 monochrome OLED screens are cheap and easy to come by, and quite popular for adding visual display to a microcontroller project.

My first experiments in driving them with raw SPI commands had me feeling distinctly old school, as the last time I remember programming a bitmap screen display was probably about 30 years ago!

So while in a retro mood, what better than to attempt an arcade classic? At first I wasn't sure it was going to be possible to make a playable game due to the limited Arduino memory and the relatively slow screen communication protocol.

But after a few tweaks of the low-level SPI implementation, I surprised myself at how well it can run. There were even enough clock cycles left to throw in a soundtrack and effects.

Here's a quick video on YouTube of the latest version. ArdWinVaders! .. in full lo-rez monochrome glory, packed into 14kb and speeding along at 8MHz.



Full source and schematics are in the LittleArduinoProjects collection on GitHub.

The Fretboard - a multi-project build status monitor

Paul Gallagher - 12 hours 5 min ago
(blogarhythm ~ Diablo - Don't Fret)

The Fretboard is a pretty simple Arduino project that visualizes the build status of up to 24 projects with an addressable LED array. The latest incarnation of the project is housed in an old classical guitar … hence the name ;-)

All the code and design details for The Fretboard are open-source and available at fretboard.tardate.com. Feel free to fork or borrow any ideas for your own build. If you build anything similar, I'd love to hear about it.

College Scorecard: ED quietly adds in 700 missing colleges

Michael Feldstein - 14 hours 28 min ago

By Phil Hill

It’s worth giving credit where credit is due, and the US Department of Education (ED) has fixed a problem that Russ Poulin and I pointed out where they had previously left ~700 colleges out of the College Scorecard.

When the College Scorecard was announced, Russ noticed a handful of missing schools. When I did the whole data OCD thing, I discovered that more than 700 2-year institutions were missing, including nearly 1-in-4 community colleges. Eventually we published an article in the Washington Post describing this (and other) problems.

The missing community colleges were excluded on purely statistical grounds. If the college granted more certificates (official awards of less than a degree) than degrees in a year, then they were excluded as they were not “primarily degree-granting” institutions. We label this the “Brian Criterion” after the person authoring two discussion board posts that explained this undocumented filter.

This was a statistical decision, made because certificate-heavy schools affect graduation-rate calculations, but it leaves the student wondering why so many colleges cannot be found. Consider Front Range Community College in Colorado, with 1,673 associate’s degrees granted in 2012-13. Because it also awarded 1,771 certificates, the Scorecard filters it out of the consumer website.

Largely due to their community-serving mission, community colleges and other two-year institutions were primarily affected. By our calculations, approximately one in three two-year colleges were excluded (more than 700), including approximately one in four community colleges (more than 250).

It is ironic that the most-penalized institutions were community colleges and those innovating with interim certificates and stackable credentials in particular; indeed, the White House has been explicitly promoting both of these groups.

We never heard from the ED officially but had some backchannel communications from others that there were some fixes being considered.

On Wednesday I got a message from the infamous Brian on a Stack Exchange thread letting me know that ED had changed their approach.

The Department recently added institutions to the consumer site such that institutions that predominantly award certificates (PREDDEG=1) are included IF the highest degree is at least an Associate’s (HIGHDEG>=2 ) AND the institution offers an associate’s or bachelor’s degree (CIPxxASSOC>0 OR CIPxxBACHL>0)

In English, this means that the ED took out their artificial criterion and fixed this issue. Colleges that award degrees no longer get excluded because they award even more certificates.

Poulin Hill College Scorecard Graphic Updated

It was a little tricky verifying the fix, as they have also changed how the College Scorecard classifies schools. Previously they let the user filter on associate’s programs, leading to institutions that predominantly award associate’s degrees. Now the Scorecard will show you all institutions that award associate’s degrees. So the checksum activity must be done at a higher level. Lo and behold, the count of public institutions in the Scorecard approximately matches the count from IPEDS. I also did spot checks on a dozen institutions that had previously been missing, and they are now in the Scorecard.

The other issues in the Washington Post article remain, but this headline problem has been fixed, albeit very quietly. I cannot find any announcement from ED, just this one line in their release notes:

Update national statistics to include certificate schools

So consider this blog post as the official ED press release, I guess. Thanks for fixing.

The post College Scorecard: ED quietly adds in 700 missing colleges appeared first on e-Literate.

Big Nodes, Concurrent Parallel Execution And High System/Kernel Time

Randolf Geist - 16 hours 4 min ago
The following is probably only relevant for customers that run Oracle on big servers with lots of cores in single instance mode (this specific problem doesn't reproduce in a RAC environment; see below for an explanation why), like one of my clients that makes use of the Exadata Xn-8 servers, for example an X4-8 with 120 cores / 240 CPUs per node (but it also reproduced on older and smaller boxes with 64 cores / 128 CPUs per node).

They recently came up with a re-write of a core piece of application functionality. Part of this code started the same code path for different data sets, potentially several times concurrently, ending up with many sessions making use of Parallel Execution. In addition, a significant part of the queries used by this code made questionable use of Parallel Execution, in the sense that queries of very short duration ran in parallel, ending up with several Parallel Execution starts per second. You could see this pattern in the AWR reports, which showed several "DFO trees" parallelized per second on average over an hour period:



When the new code was tested with production-like data volumes and patterns, the CPU profile of such a big node (running in single instance mode) initially looked like this, with nothing else running on that box:



As you can see, the node was completely CPU bound, spending most of the time in System/Kernel time. The AWR reports showed some pretty unusual PX wait events as significant:



"PX Deq: Slave Session Stats" shouldn't be a relevant wait event since it is about the PX slaves at the end of a PX execution passing an array of session statistics to the PX coordinator for aggregating the statistics on coordinator level. So obviously something was slowing down this PX communication significantly (and the excessive usage of Parallel Execution was required to see this happen).

Also, some of the more complex Parallel Execution queries performing many joins and ending up with a significant number of data redistributions ran as if in slow motion, although they claimed to spend 100% of their time on CPU; according to Active Session History, almost 90% of that time was spent on the redistribution operations:

SQL statement execution ASH Summary
-----------------------------------------------

              |               |               |
              |PX SEND/RECEIVE|PX SEND/RECEIVE|
PERCENTAGE_CPU|        PERCENT|    CPU PERCENT|
--------------|---------------|---------------|
            98|             86|             87|


Running the same query with the same execution plan on the same data and the same box during idle times showed almost 20 times better performance, and less than 40% of the time spent on redistribution:

SQL statement execution ASH Summary
-----------------------------------------------

              |               |               |
              |PX SEND/RECEIVE|PX SEND/RECEIVE|
PERCENTAGE_CPU|        PERCENT|    CPU PERCENT|
--------------|---------------|---------------|
            96|             38|             37|

So it looked like those queries ran into some kind of contention that wasn't instrumented in Oracle but happened outside on O/S level, showing up as CPU Kernel time - similar to what could be seen in previous versions of Oracle when spinning on mutexes.
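
As a side note, summaries like the two above can be produced from Active Session History. The following is only a rough sketch of such a query; it assumes the SQL_ID of the statement is known (passed in as the &sql_id substitution variable), and the exact percentage definitions may differ from the script actually used here:

-- rough sketch: share of samples on CPU, share of samples on PX SEND/RECEIVE
-- plan operations, and share of the CPU samples that fall on PX SEND/RECEIVE
select
        round(100 * sum(case when session_state = 'ON CPU'
                             then 1 else 0 end) / count(*))                as percentage_cpu,
        round(100 * sum(case when sql_plan_operation in ('PX SEND', 'PX RECEIVE')
                             then 1 else 0 end) / count(*))                as px_send_receive_percent,
        round(100 * sum(case when sql_plan_operation in ('PX SEND', 'PX RECEIVE')
                              and session_state = 'ON CPU'
                             then 1 else 0 end)
                  / nullif(sum(case when session_state = 'ON CPU'
                                    then 1 else 0 end), 0))                as px_send_receive_cpu_percent
from    v$active_session_history
where   sql_id = '&sql_id'
;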

Reducing the excessive usage of Parallel Execution showed a significant reduction in CPU time, but still the high System/Kernel time was rather questionable:



So the big question was: where was that time spent in the kernel? Answering that might give further clues.

Analysis
Running "perf top" on the node during such a run showed this profile:

  PerfTop:  129074 irqs/sec  kernel:76.4%  exact:  0.0% [1000Hz cycles],  (all, 128 CPUs)
-------------------------------------------------------------------------------------------------------------------

             samples  pcnt function                 DSO
             _______ _____ ________________________ ___________________________________________________________

          1889395.00 67.8% __ticket_spin_lock       /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
            27746.00  1.0% ktime_get                /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
            24622.00  0.9% weighted_cpuload         /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
            23169.00  0.8% find_busiest_group       /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
            17243.00  0.6% pfrfd1_init_locals       /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
            16961.00  0.6% sxorchk                  /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
            15434.00  0.6% kafger                   /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
            11531.00  0.4% try_atomic_semop         /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
            11006.00  0.4% __intel_new_memcpy       /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
            10557.00  0.4% kaf_typed_stuff          /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
            10380.00  0.4% idle_cpu                 /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             9977.00  0.4% kxfqfprFastPackRow       /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             9070.00  0.3% pfrinstr_INHFA1          /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             8905.00  0.3% kcbgtcr                  /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             8757.00  0.3% ktime_get_update_offsets /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             8641.00  0.3% kgxSharedExamine         /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             7487.00  0.3% update_queue             /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             7233.00  0.3% kxhrPack                 /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             6809.00  0.2% rworofprFastUnpackRow    /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             6581.00  0.2% ksliwat                  /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             6242.00  0.2% kdiss_fetch              /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             6126.00  0.2% audit_filter_syscall     /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             5860.00  0.2% cpumask_next_and         /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             5618.00  0.2% kaf4reasrp1km            /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             5482.00  0.2% kaf4reasrp0km            /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             5314.00  0.2% kopp2upic                /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             5129.00  0.2% find_next_bit            /usr/lib/debug/lib/modules/2.6.39-400.128.17.el5uek/vmlinux
             4991.00  0.2% kdstf01001000000km       /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             4842.00  0.2% ktrgcm                   /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             4762.00  0.2% evadcd                   /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle
             4580.00  0.2% kdiss_mf_sc              /data/oracle/XXXXXXX/product/11.2.0.4/bin/oracle

Running "perf" on a number of Parallel Slaves being busy on CPU showed this profile:

     0.36%     ora_xxxx  [kernel.kallsyms]             [k] __ticket_spin_lock
               |
               --- __ticket_spin_lock
                  |          
                  |--99.98%-- _raw_spin_lock
                  |          |          
                  |          |--100.00%-- ipc_lock
                  |          |          ipc_lock_check
                  |          |          |          
                  |          |          |--99.83%-- semctl_main
                  |          |          |          sys_semctl
                  |          |          |          system_call
                  |          |          |          __semctl
                  |          |          |          |          
                  |          |          |           --100.00%-- skgpwpost
                  |          |          |                     kslpsprns
                  |          |          |                     kskpthr
                  |          |          |                     ksl_post_proc
                  |          |          |                     kxfprienq
                  |          |          |                     kxfpqrenq
                  |          |          |                     |          
                  |          |          |                     |--98.41%-- kxfqeqb
                  |          |          |                     |          kxfqfprFastPackRow
                  |          |          |                     |          kxfqenq
                  |          |          |                     |          qertqoRop
                  |          |          |                     |          kdstf01001010000100km
                  |          |          |                     |          kdsttgr
                  |          |          |                     |          qertbFetch
                  |          |          |                     |          qergiFetch
                  |          |          |                     |          rwsfcd
                  |          |          |                     |          qertqoFetch
                  |          |          |                     |          qerpxSlaveFetch
                  |          |          |                     |          qerpxFetch
                  |          |          |                     |          opiexe
                  |          |          |                     |          kpoal8

Running "strace" on those Parallel Slaves showed this:

.
.
.
semctl(1347842, 397, SETVAL, 0x1)       = 0
semctl(1347842, 388, SETVAL, 0x1)       = 0
semctl(1347842, 347, SETVAL, 0x1)       = 0
semctl(1347842, 394, SETVAL, 0x1)       = 0
semctl(1347842, 393, SETVAL, 0x1)       = 0
semctl(1347842, 392, SETVAL, 0x1)       = 0
semctl(1347842, 383, SETVAL, 0x1)       = 0
semctl(1347842, 406, SETVAL, 0x1)       = 0
semctl(1347842, 389, SETVAL, 0x1)       = 0
semctl(1347842, 380, SETVAL, 0x1)       = 0
semctl(1347842, 395, SETVAL, 0x1)       = 0
semctl(1347842, 386, SETVAL, 0x1)       = 0
semctl(1347842, 385, SETVAL, 0x1)       = 0
semctl(1347842, 384, SETVAL, 0x1)       = 0
semctl(1347842, 375, SETVAL, 0x1)       = 0
semctl(1347842, 398, SETVAL, 0x1)       = 0
semctl(1347842, 381, SETVAL, 0x1)       = 0
semctl(1347842, 372, SETVAL, 0x1)       = 0
semctl(1347842, 387, SETVAL, 0x1)       = 0
semctl(1347842, 378, SETVAL, 0x1)       = 0
semctl(1347842, 377, SETVAL, 0x1)       = 0
semctl(1347842, 376, SETVAL, 0x1)       = 0
semctl(1347842, 367, SETVAL, 0x1)       = 0
semctl(1347842, 390, SETVAL, 0x1)       = 0
semctl(1347842, 373, SETVAL, 0x1)       = 0
semctl(1347842, 332, SETVAL, 0x1)       = 0
semctl(1347842, 379, SETVAL, 0x1)       = 0
semctl(1347842, 346, SETVAL, 0x1)       = 0
semctl(1347842, 369, SETVAL, 0x1)       = 0
semctl(1347842, 368, SETVAL, 0x1)       = 0
semctl(1347842, 359, SETVAL, 0x1)       = 0
.
.
.

So the conclusion was: A lot of CPU time is spent spinning on the "spin lock" (critical code section) - caused by calls to "semctl" (semaphores), which are part of the PX code path and come from "ipc_lock"->"raw_lock". "strace" shows that all of the calls to "semctl" make use of the same semaphore set (first parameter), which explains the contention on that particular semaphore set (indicating that the locking granule is the semaphore set, not the semaphore).

Solution
Based on the "perf" results an Oracle engineer found a suitable, unfortunately unpublished and closed, bug from 2013 for 12.1.0.2 that comes up with three different ways to address the problem:

- Run with "cluster_database" = true: This will take a different code path which simply reduces the number of semaphore calls by two orders of magnitude. We tested this approach and it showed immediate relief on kernel time - which explains why this specific issue doesn't reproduce in a RAC environment.
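
For illustration only, the parameter change would look like this (it is a static parameter, so it needs an spfile change and an instance restart; whether running a single instance this way is appropriate should be confirmed with Oracle Support):

alter system set cluster_database = true scope = spfile;
-- restart the instance for the change to take effect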

- Run with different "kernel.sem" settings: The Exadata boxes came with the following predefined semaphore configuration:

kernel.sem = 2048 262144 256 256

"ipcs" showed the following semaphore arrays with this configuration when starting the Oracle instance:

------ Semaphore Arrays --------
key        semid      owner     perms      nsems    
.
.
.
0xd87a8934 12941057   oracle    640        1502     
0xd87a8935 12973826   oracle    640        1502     
0xd87a8936 12006595   oracle    640        1502    

By reducing the number of semaphores per set and increasing the number of sets, like this:

kernel.sem = 100 262144 256 4096

the following "ipcs" output could be seen:

------ Semaphore Arrays --------
key        semid      owner     perms      nsems    
.
.
.
0xd87a8934 13137665   oracle    640        93       
0xd87a8935 13170434   oracle    640        93       
0xd87a8936 13203203   oracle    640        93       
0xd87a8937 13235972   oracle    640        93       
0xd87a8938 13268741   oracle    640        93       
0xd87a8939 13301510   oracle    640        93       
0xd87a893a 13334279   oracle    640        93       
0xd87a893b 13367048   oracle    640        93       
0xd87a893c 13399817   oracle    640        93       
0xd87a893d 13432586   oracle    640        93       
0xd87a893e 13465355   oracle    640        93       
0xd87a893f 13498124   oracle    640        93       
0xd87a8940 13530893   oracle    640        93       
0xd87a8941 13563662   oracle    640        93       
0xd87a8942 13596431   oracle    640        93       
0xd87a8943 13629200   oracle    640        93       
0xd87a8944 13661969   oracle    640        93       
0xd87a8945 13694738   oracle    640        93       
0xd87a8946 13727507   oracle    640        93       
0xd87a8947 13760276   oracle    640        93       
0xd87a8948 13793045   oracle    640        93       
0xd87a8949 13825814   oracle    640        93       
0xd87a894a 13858583   oracle    640        93       
0xd87a894b 13891352   oracle    640        93       
0xd87a894c 13924121   oracle    640        93       
0xd87a894d 13956890   oracle    640        93       
0xd87a894e 13989659   oracle    640        93       
0xd87a894f 14022428   oracle    640        93       
0xd87a8950 14055197   oracle    640        93       
0xd87a8951 14087966   oracle    640        93       
0xd87a8952 14120735   oracle    640        93       
0xd87a8953 14153504   oracle    640        93       
0xd87a8954 14186273   oracle    640        93       
0xd87a8955 14219042   oracle    640        93


So Oracle now allocated a lot more sets with fewer semaphores per set. We tested this configuration instead of using "cluster_database = TRUE" and got the same low kernel CPU times.

- The bug comes up with a third option for fixing this, which has the advantage that the host configuration doesn't need to be changed and the configuration can be done per instance: there is an undocumented parameter "_sem_per_sem_id" that defines the upper limit of semaphores to allocate per set. By setting this parameter to a comparable value like 100 or 128 the net result ought to be the same - Oracle allocates more sets with fewer semaphores per set - but we haven't tested this option.
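
Just to sketch what that per-instance change would look like (untested here as well; underscore parameters should only be set in agreement with Oracle Support, and the instance has to be restarted so the semaphore sets are re-allocated):

-- example value only; the bug suggests something in the region of 100 to 128
alter system set "_sem_per_sem_id" = 128 scope = spfile;
-- restart the instance so the semaphore sets are allocated with the new layout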

Conclusion
So the bottom line was this: certain usage patterns of the Oracle instance lead to contention on spin locks at Linux O/S level if Oracle runs in single instance mode with the semaphore settings recommended so far, because all semaphore calls then go against the same semaphore set. By having Oracle allocate more semaphore sets, the calls were spread over more sets, significantly reducing the contention.

There is probably some internal note available at Oracle that indicates that the default semaphore settings recommended for big nodes are not optimal for running single instance mode under certain circumstances, but I don't know if there is a definitive, official guide available yet.

This is the CPU profile of exactly the same test workload as before using the changed "kernel.sem" settings:



Also, in the AWR report the unusual PX related wait events went away and performance improved significantly, in particular for those complex queries mentioned above.

Oracle JET and WebSocket Integration for Live Data

Andrejus Baranovski - Sat, 2016-02-06 09:57
I was researching how to plug WebSocket into JET. I would like to share my findings and explain how it works. It is amazing how scalable and quick it is to send a JSON message through a WebSocket channel and render it in JET. Luckily, the WebSocket client is implemented in JavaScript, which integrates perfectly with Oracle JET (also based on JavaScript).

Watch the recorded demo, where I'm sending updated data as a JSON message through the WebSocket. The UI chart is rendered with JET. See how nicely JET re-draws the chart based on the newly received data (watch at the 6th second):


Download the sample application - JETWebSocket.zip. It contains both the WebSocket application (implemented with JDeveloper) and the JET application (implemented with NetBeans).

On the WebSocket server side, I'm listening for DB changes (Database Change Notification listener implementation). Each time a DB change happens, all changes are collected into one collection and sent to the clients through the WebSocket:


Changes are collected into an array. The WebSocket message must be encoded as a JSON array, which makes it easier to read on the client. Multiple JSON objects are added to the JSON array:


The message is sent to all connected WebSocket clients:


All this happens on the server side. Now let's take a look at the client side. We should open the WebSocket connection when JET is loading. I have included the WebSocket client JavaScript code from a separate file, websocket.js. JET renders the chart component:


Each time a WebSocket message is received by the client, the onMessage method is invoked. This method is responsible for parsing the JSON message and passing it to a function in the JET context, where the chart data update happens:


The WebSocket is opened when the document is rendered. In the next step, the JET View Model is created along with the bindings:


The viewModel variable is accessible from the WebSocket onMessage method because it is defined outside of the JET function:


The updateChart method, invoked from onMessage, applies the received changes to the chart data. JET re-draws the chart automatically because the changes are applied to an observable array. This is really cool:


See how it works. The page is loaded with the JET chart component, and the WebSocket is opened:


Go to the DB, update a few records, and commit the changes:


JET receives the changes through the WebSocket, and the chart is updated instantly:

PaaS4SaaS Developers' Code Is Always 'On': OAUX is on OTN and GitHub

Usable Apps - Sat, 2016-02-06 09:35

Boom! That's the sound of thunder as SaaS and PaaS developers work like lightning in the cloud. The cloud has changed customer expectations about applications; if they don’t like their user experience (UX) or they don’t get it fast, they’ll go elsewhere.

PaaS4SaaS developers know their code is always 'on'.

But you can easily accelerate the development of your PaaS4SaaS solutions with a killer UX by downloading the AppsCloudUIKit software, part of the Cloud UX simplified UI Rapid Development Kit (RDK) for Release 10 PaaS4SaaS solutions, from the Oracle Technology Network (OTN) or from GitHub.

The Oracle Applications User Experience (OAUX) team's Oracle Cloud RDK is certified with 11g and 12c, and the download includes a developer eBook that quickly explains how to build a complete SaaS or PaaS solution in a matter of hours.

Build a simplified UI with the RDK

Build a simplified UI developer eBook

To get the complete developer experience fast, check out the cool Oracle Usable Apps channel YouTube videos from our own dev and design experts on how to design and build your own simplified UI for SaaS using PaaS.

Enjoy. Do share your thoughts in the comments after you've used the complete RDK.

Security Alert CVE-2016-0603 Released

Oracle Security Team - Fri, 2016-02-05 14:42

Oracle just released Security Alert CVE-2016-0603 to address a vulnerability that can be exploited when installing Java 6, 7 or 8 on the Windows platform. This vulnerability has received a CVSS Base Score of 7.6.

To be successfully exploited, this vulnerability requires that an unsuspecting user be tricked into visiting a malicious web site and download files to the user's system before installing Java 6, 7 or 8. Though considered relatively complex to exploit, this vulnerability may result, if successfully exploited, in a complete compromise of the unsuspecting user’s system.

Because the exposure exists only during the installation process, users need not upgrade existing Java installations to address the vulnerability. However, Java users who have downloaded any old version of Java prior to 6u113, 7u97 or 8u73 should discard these old downloads and replace them with 6u113, 7u97 or 8u73 or later.

As a reminder, Oracle recommends that Java home users visit Java.com to ensure that they are running the most recent version of Java SE and that all older versions of Java SE have been completely removed. Oracle further advises against downloading Java from sites other than Java.com as these sites may be malicious.

For more information, the advisory for Security Alert CVE-2016-0603 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2016-0603-2874360.html

 

PeopleTools CPU analysis and supported versions of PeopleTools (update for January 2016 CPU)

PeopleSoft Technology Blog - Fri, 2016-02-05 14:30

Questions often arise about the PeopleTools versions for which Critical Patch Updates have been published, or whether a particular PeopleTools version is supported.

The attached page shows the patch number matrix for PeopleTools versions associated with a particular CPU publication. This information will help you decide which CPU to apply and when to consider upgrading to a more current release.

The link in "CPU Date" goes to the landing page for CPU advisories; the link in the individual date, e.g. Apr-10, goes to the advisory for that date.

The page also shows the CVEs addressed in the CPU, a synopsis of the issue, and the Common Vulnerability Scoring System (CVSS) value.

To find more details on any CVE, simply replace the CVE number in the sample URL below.

http://www.cvedetails.com/cve/CVE-2010-2377

Common Vulnerability Scoring System Version 2 Calculator

http://nvd.nist.gov/cvss.cfm?calculator&adv&version=2

This page shows the components of the CVSS score

Example CVSS response policy http://www.first.org/_assets/cvss/cvss-based-patch-policy.pdf

All the details in this page are available on My Oracle Support and public sites.

The RED column indicates the last patch for a PeopleTools version and effectively the last support date for that version.

Applications Unlimited support does NOT apply to PeopleTools versions.

SQL On The Edge #8 – SQL Server Mobile Report Publisher

Pythian Group - Fri, 2016-02-05 14:07

One of the cool things about SQL Server is that it comes bundled with all the Business Intelligence services with the core database engine license. Reporting Services (which includes the Mobile Report Publisher), Analysis Services, and Integration Services are all integrated, and are ready to get going as soon as you install SQL Server. This has made it not only cost-efficient for many organizations to deploy BI, but it has also contributed to a wide adoption among the SQL Server customer base.

 

What is the Mobile Report Publisher?

Currently in preview, the Mobile Report Publisher is a new report and dashboard editor that publishes reports to Reporting Services, and it’s part of the bigger road map that Microsoft has for their Business Intelligence On-Premises story. We all know that in the cloud, Power BI has been getting a large amount of investment, but with on-premises there was a big gap that was getting wider and wider, until now.

With this upcoming SQL 2016 release, the Microsoft team is focusing on bringing Reporting Services into the future as a one-stop integrated solution for BI deployment so that cloud BI or a competitor’s product (*cough*Tableau*cough) are not the only modern alternatives.

This Reporting Services refactor is the biggest change made to the product since SQL Server 2005 was released over 10 years ago. Leveraging the best parts of the Datazen acquisition, the Microsoft team is looking to provide a cohesive BI story that integrates web-based and mobile reports, Report Builder reports, and Power View style of modern visuals.


How is it different?

You’re probably used to working with SSRS Report Builder and are wondering what the idea is with Mobile Report Publisher. The demo below will make it very clear, but let’s just say that reports have now been split into two types:

1. Paginated reports: this is the ‘legacy’ style report that is built by Report Builder. It looks more flat, has the same controls as before, and is fully compatible with all the existing reports you have already deployed.

2. Mobile reports: Even though the name says ‘mobile’ these reports work just as well on desktop, tablet and mobile. If you’re familiar with web design, then the appropriate term would be that these reports are “responsive”. They can be done once, and will display nicely across devices. This is the new experience that the Mobile Report Publisher is targeting.

 

Where do we go from here?

Right now the current version is SQL Server 2016 CTP 3.2. We still have several different versions that will be released before SQL 2016 goes RTM later this year. Currently, a lot of the planned functionality is not entirely there for the main SSRS interface, and you will be asked to switch to the "classic" view often.

The Mobile Report Publisher experience is also very much targeted towards creating a visualization, and there’s no data modeling to be found. You pretty much need to have your datasets pre-made and in the format that the tool will expect, or you won’t get very far. Hopefully, at some point the team will add modeling capabilities like the robust experience we already have in the Power BI Desktop tool.

If you want to practice and get your feet wet with this release, you can do it now, for free, by visiting SQL Server 2016 CTP 3.2. You can also preview the Mobile Report Publisher. Like I said before, this is a big departure, so most people will want to train up to be ready when the new versions go live.

 

Demo

In the demo below I’m going to walk through the new SSRS interface preview in CTP 3.2, and then walk through creating a quick dashboard for both desktop and mobile consumption using the publisher. Let’s check it out!

Discover more about our expertise with SQL Server in the Cloud.

Categories: DBA Blogs

How to get nfs info on 1000 or many hosts using Oracle Enterprise Manager

Arun Bavera - Fri, 2016-02-05 11:27
There was a requirement to get nfs info on all the hosts.
Here is the way to get it:

Create an OS job in EM12c with the following text and execute it on all the hosts of interest, assuming you have a common shared mount on all these hosts.
Otherwise, you can create a Metric Extension to collect this info and query the repository (via Configuration Manager or directly) to get it.
 echo -e `echo '\n';hostname --l;echo '\n=====================================\n';nfsstat -m;echo '\n=====================================\n';exit 0` >> /nfs_software/nfs_info_PROD.txt



Categories: Development

Parallel DML

Jonathan Lewis - Fri, 2016-02-05 07:02

A recent posting on OTN presented a performance anomaly when comparing a parallel “insert /*+ append */” with a parallel “create table as select”.  The CTAS statement took about 4 minutes, the insert about 45 minutes. Since the process of getting the data into the data blocks would be the same in both cases something was clearly not working properly. Following Occam’s razor, the first check had to be the execution plans – when two statements that “ought” to do the same amount of work take very different times it’s probably something to do with the execution plans – so here are the two statements with their plans:

First the insert, which took 45 minutes:

insert  /*+ append parallel(a,16) */ into    
        dg.tiz_irdm_g02_cc  a
select
        /*+ parallel (a,16) parallel (b,16) */ 
        *
from    tgarstg.tst_irdm_g02_f01 a, 
        tgarstg.tst_irdm_g02_f02 b
where   a.ip_id = b.ip_id
;

------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                 |                  |    13M|    36G|       |   127K  (1)| 00:00:05 |        |      |            |
|   1 |  LOAD AS SELECT                  | TIZ_IRDM_G02_CC  |       |       |       |            |          |        |      |            |
|   2 |   PX COORDINATOR                 |                  |       |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)           | :TQ10002         |    13M|    36G|       |   127K  (1)| 00:00:05 |  Q1,02 | P->S | QC (RAND)  |
|*  4 |     HASH JOIN BUFFERED           |                  |    13M|    36G|   921M|   127K  (1)| 00:00:05 |  Q1,02 | PCWP |            |
|   5 |      PX RECEIVE                  |                  |    13M|    14G|       |  5732   (5)| 00:00:01 |  Q1,02 | PCWP |            |
|   6 |       PX SEND HASH               | :TQ10000         |    13M|    14G|       |  5732   (5)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   7 |        PX BLOCK ITERATOR         |                  |    13M|    14G|       |  5732   (5)| 00:00:01 |  Q1,00 | PCWC |            |
|   8 |         TABLE ACCESS STORAGE FULL| TST_IRDM_G02_F02 |    13M|    14G|       |  5732   (5)| 00:00:01 |  Q1,00 | PCWP |            |
|   9 |      PX RECEIVE                  |                  |    13M|    21G|       | 18353   (3)| 00:00:01 |  Q1,02 | PCWP |            |
|  10 |       PX SEND HASH               | :TQ10001         |    13M|    21G|       | 18353   (3)| 00:00:01 |  Q1,01 | P->P | HASH       |
|  11 |        PX BLOCK ITERATOR         |                  |    13M|    21G|       | 18353   (3)| 00:00:01 |  Q1,01 | PCWC |            |
|  12 |         TABLE ACCESS STORAGE FULL| TST_IRDM_G02_F01 |    13M|    21G|       | 18353   (3)| 00:00:01 |  Q1,01 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------------------------

And here’s the ‘create table’ at 4:00 minutes:

create table dg.tiz_irdm_g02_cc 
nologging 
parallel 16 
compress for query high 
as
select
        /*+ parallel (a,16) parallel (b,16) */ 
        *
from    tgarstg.tst_irdm_g02_f01 a , 
        tgarstg.tst_irdm_g02_f02 b 
where
        a.ip_id = b.ip_id

------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |                  |    13M|    36G|       |   397K  (1)| 00:00:14 |        |      |            |
|   1 |  PX COORDINATOR                  |                  |       |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)            | :TQ10002         |    13M|    36G|       |   255K  (1)| 00:00:09 |  Q1,02 | P->S | QC (RAND)  |
|   3 |    LOAD AS SELECT                | TIZ_IRDM_G02_CC  |       |       |       |            |          |  Q1,02 | PCWP |            |
|*  4 |     HASH JOIN                    |                  |    13M|    36G|  1842M|   255K  (1)| 00:00:09 |  Q1,02 | PCWP |            |
|   5 |      PX RECEIVE                  |                  |    13M|    14G|       | 11465   (5)| 00:00:01 |  Q1,02 | PCWP |            |
|   6 |       PX SEND HASH               | :TQ10000         |    13M|    14G|       | 11465   (5)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   7 |        PX BLOCK ITERATOR         |                  |    13M|    14G|       | 11465   (5)| 00:00:01 |  Q1,00 | PCWC |            |
|   8 |         TABLE ACCESS STORAGE FULL| TST_IRDM_G02_F02 |    13M|    14G|       | 11465   (5)| 00:00:01 |  Q1,00 | PCWP |            |
|   9 |      PX RECEIVE                  |                  |    13M|    21G|       | 36706   (3)| 00:00:02 |  Q1,02 | PCWP |            |
|  10 |       PX SEND HASH               | :TQ10001         |    13M|    21G|       | 36706   (3)| 00:00:02 |  Q1,01 | P->P | HASH       |
|  11 |        PX BLOCK ITERATOR         |                  |    13M|    21G|       | 36706   (3)| 00:00:02 |  Q1,01 | PCWC |            |
|  12 |         TABLE ACCESS STORAGE FULL| TST_IRDM_G02_F01 |    13M|    21G|       | 36706   (3)| 00:00:02 |  Q1,01 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------------------------

As you can see, the statements are supposed to operate with degree of parallelism 16, and we were assured that the pre-existing table had been declared as nologging with the same level of compression as that given in the CTAS so, assuming the queries did run with the degree expected, they should take virtually the same amount of time.

But there’s an important clue in the plan about why there was a difference, and why the difference could be so great. The first statement is DML, the second is DDL. Parallel DDL is automatically enabled, parallel DML has to be enabled explicitly otherwise the select will run in parallel but the insert will be serialized. Look at operations 1 – 4 of the insert – the query co-ordinator does the “load as select” of the rowsource sent to it by the parallel execution slaves. Not only does this mean that one process (rather than 16) does the insert, you also have all the extra time for all the messaging and the hash join (at line 4) has to be buffered – which means a HUGE amount of data could have been dumped to disc by each slave prior to the join actually taking place and then been read back from disc, joined, and forwarded.
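
For reference, parallel DML is switched on at session level; a minimal sketch using the insert from above (a direct-path insert also needs a commit before the session can query the table):

alter session enable parallel dml;

insert  /*+ append parallel(a,16) */ into
        dg.tiz_irdm_g02_cc  a
select
        /*+ parallel (a,16) parallel (b,16) */
        *
from    tgarstg.tst_irdm_g02_f01 a,
        tgarstg.tst_irdm_g02_f02 b
where   a.ip_id = b.ip_id
;

commit;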

Note that the hash join in the CTAS is not buffered – each slave does the join as the data arrives and writes the result directly to its local segment. Basically the insert could be doing something like twice the I/O of the CTAS (and this is Exadata, so reads from temp can be MUCH slower than the tablescans that supply the data to be joined).

So the OP checked, and found that (although he thought he had enabled parallel DML) he hadn’t actually done so. And after enabling parallel DML the timing was … just as bad. Ooops!! Something else must have gone wrong. Here’s the plan after enabling parallel DML:


--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                   |                  |    13M|    36G|       |   127K  (1)| 00:00:05 |        |      |            |
|   1 |  PX COORDINATOR                    |                  |       |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)              | :TQ10003         |    13M|    36G|       |   127K  (1)| 00:00:05 |  Q1,03 | P->S | QC (RAND)  |
|   3 |    LOAD AS SELECT                  | TIZ_IRDM_G02_CC  |       |       |       |            |          |  Q1,03 | PCWP |            |
|   4 |     PX RECEIVE                     |                  |    13M|    36G|       |   127K  (1)| 00:00:05 |  Q1,03 | PCWP |            |
|   5 |      PX SEND RANDOM LOCAL          | :TQ10002         |    13M|    36G|       |   127K  (1)| 00:00:05 |  Q1,02 | P->P | RANDOM LOCA|
|*  6 |       HASH JOIN BUFFERED           |                  |    13M|    36G|   921M|   127K  (1)| 00:00:05 |  Q1,02 | PCWP |            |
|   7 |        PX RECEIVE                  |                  |    13M|    14G|       |  5732   (5)| 00:00:01 |  Q1,02 | PCWP |            |
|   8 |         PX SEND HASH               | :TQ10000         |    13M|    14G|       |  5732   (5)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   9 |          PX BLOCK ITERATOR         |                  |    13M|    14G|       |  5732   (5)| 00:00:01 |  Q1,00 | PCWC |            |
|  10 |           TABLE ACCESS STORAGE FULL| TST_IRDM_G02_F02 |    13M|    14G|       |  5732   (5)| 00:00:01 |  Q1,00 | PCWP |            |
|  11 |        PX RECEIVE                  |                  |    13M|    21G|       | 18353   (3)| 00:00:01 |  Q1,02 | PCWP |            |
|  12 |         PX SEND HASH               | :TQ10001         |    13M|    21G|       | 18353   (3)| 00:00:01 |  Q1,01 | P->P | HASH       |
|  13 |          PX BLOCK ITERATOR         |                  |    13M|    21G|       | 18353   (3)| 00:00:01 |  Q1,01 | PCWC |            |
|  14 |           TABLE ACCESS STORAGE FULL| TST_IRDM_G02_F01 |    13M|    21G|       | 18353   (3)| 00:00:01 |  Q1,01 | PCWP |            |
--------------------------------------------------------------------------------------------------------------------------------------------

As you can see, line 3 has the LOAD AS SELECT after which the slaves message the query co-ordinator – so the DML certainly was parallel even though it wasn’t any faster. But why is the hash join (line 6) still buffered, and why is there an extra data flow (lines 5 and 4 – PX SEND RANDOM LOCAL / PX RECEIVE)? The hash join has to be buffered because of that extra data flow (which suggests that the buffering and messaging could still be the big problem) – but WHY is the data flow there at all? It shouldn’t be.

At this point I remembered that the first message in the thread had mentioned testing partitioned tables as well as non-partitioned tables – and if you do a parallel insert to a partitioned table, the data is going to be spread across several partitions, and the number of partitions is not a good match for the degree of parallelism, then you’re likely to get an extra stage of data distribution as Oracle tries to share the data and the partitions as efficiently as possible across slaves. One of the possible distribution methods is “local random” – which is fairly likely to appear if the number of slaves is larger than the number of partitions. This behaviour can be modified with the newer “single distribution” version of the pq_distribute hint. So I asked the OP if their latest test was on a partitioned table, and suggested they insert the hint /*+ pq_distribute(a none) */ just after the parallel hint.
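
In other words, the suggestion was to modify the original insert along these lines (a sketch based on the statement above; the hint references the alias of the table being loaded):

insert  /*+ append parallel(a,16) pq_distribute(a none) */ into
        dg.tiz_irdm_g02_cc  a
select
        /*+ parallel (a,16) parallel (b,16) */
        *
from    tgarstg.tst_irdm_g02_f01 a,
        tgarstg.tst_irdm_g02_f02 b
where   a.ip_id = b.ip_id
;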

The answer was yes, and the hint had the effect of dropping the run time down to 7 minutes – still not as good as the CTAS, but then the CTAS wasn’t creating a partitioned table so it’s still not a completely fair test. Here’s the (start of the) final plan:

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                   |                  |    13M|    36G|       |   127K  (1)| 00:00:05 |        |      |            |
|   1 |  PX COORDINATOR                    |                  |       |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)              | :TQ10002         |    13M|    36G|       |   127K  (1)| 00:00:05 |  Q1,02 | P->S | QC (RAND)  |
|   3 |    LOAD AS SELECT                  | TIZ_IRDM_G02_CC  |       |       |       |            |          |  Q1,02 | PCWP |            |
|*  4 |     HASH JOIN                      |                  |    13M|    36G|   921M|   127K  (1)| 00:00:05 |  Q1,02 | PCWP |            |

As you can see, we have a hash join that is NOT buffered; we don’t have a third distribution, and the slaves do the data load and then message the query co-ordinator.

It would be interesting to know if there was a significant skew in the data volumes that went into each partition of the partitioned table, and check where the time was spent for both the partitioned insert and the non-partitioned CTAS (and compare with a non-partitioned insert) – but real-world DBAs don’t necessarily have all the time for investigations that I do.

My reference: parallel_dml.sql


Node-oracledb: Avoiding "ORA-01000: maximum open cursors exceeded"

Christopher Jones - Fri, 2016-02-05 05:45

Developers starting out with Node have to get to grips with the 'different' programming style of JavaScript that seems to cause methods to be called when least expected! While you are still in the initial hacking-around-with-node-oracledb phase you may sometimes encounter the error ORA-01000: maximum open cursors exceeded.

Here are some things to do about it:

  • Avoid having too many incompletely processed statements open at one time:

    • Close ResultSets before releasing the connection.

    • If cursors are opened with dbms_sql.open_cursor() in a PL/SQL block, close them before the block returns - except for REF CURSORS being passed back to node-oracledb. (And if a future node-oracledb version supports Oracle Database 12c Implicit Result Sets, these cursors should likewise not be closed in the PL/SQL block)

    • Make sure your application is handling connections and statements in the order you expect.

  • Choose the appropriate Statement Cache size. Node-oracledb has a statement cache per connection. When node-oracledb internally releases a statement it will be put into the statement cache of that connection, but its cursor will remain open. This makes statement re-execution very efficient.

    The cache size is settable with the stmtCacheSize attribute. The appropriate statement cache size you choose will depend on your knowledge of the locality of the statements, and of the resources available to the application: are statements re-executed; will they still be in the cache when they get executed; how many statements do you want to be cached? In rare cases when statements are not re-executed, or are likely not to be in the cache, you might even want to disable the cache to eliminate its management overheads.

    Incorrectly sizing the statement cache will reduce application efficiency. Luckily with Oracle 12.1, the cache can be automatically tuned using an oraaccess.xml file.

    More information on node-oracledb statement caching is here.

  • Don't forget to use bind variables otherwise each variant of the statement will have its own statement cache entry and cursor. With appropriate binding, only one entry and cursor will be needed.

  • Set the database's open_cursors parameter appropriately. This parameter specifies the maximum number of cursors that each "session" (i.e. each node-oracledb connection) can use. When a connection exceeds the value, the ORA-1000 error is thrown. Documentation on open_cursors is here.

    Along with a cursor per entry in the connection's statement cache, any new statements that a connection is currently executing, or ResultSets that haven't been released (in neither situation are these yet cached), will also consume a cursor. Make sure that open_cursors is large enough to accommodate the maximum open cursors any connection may have. The upper bound required is stmtCacheSize + the maximum number of executing statements in a connection.

    Remember this is all per connection. Also cache management happens when statements are internally released. The majority of your connections may use less than open_cursors cursors, but if one connection is at the limit and it then tries to execute a new statement, that connection will get ORA-1000: maximum open cursors exceeded.
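
On the database side, a quick SQL sketch for checking the current limit, seeing the most cursors any session currently holds open, and raising the limit (the value 1000 is only an example):

select value from v$parameter where name = 'open_cursors';

select  max(a.value) as highest_open_cursors
from    v$sesstat a, v$statname b
where   b.name = 'opened cursors current'
and     a.statistic# = b.statistic#;

alter system set open_cursors = 1000 scope = both;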

OGh DBA / SQL Celebration Day 2016 Keynote Speaker

Marco Gralike - Fri, 2016-02-05 05:24
I am very happy and honored to announce that Mr. Graham Wood (Oracle US) agreed…

Do you even [ Agile | DevOps ] bruh?

Tim Hall - Fri, 2016-02-05 05:01

It seems I can’t turn around without getting myself involved in some discussion about Agile or DevOps these days.


I agree with many of the concepts and the aims of Agile, DevOps, Continuous Delivery etc. I find it hard to believe anyone wouldn’t see value in what they are trying to promote. As always, it is how people interpret and implement them that makes all the difference.

It’s just like religion. They all seem to be pretty sound at heart, but let a few lunatics and fundamentalists loose on them and next thing you know…

Things like Agile and DevOps have arisen to address perceived problems. If your organisation doesn’t suffer from those problems, you may not need to consider them, or you may already be doing something like them without knowing you are. :)

Your company can be agile, without following Scrum or Kanban. You will inevitably have arrived at similar patterns I guess. Likewise, your streamlining of process, automation of testing and deployment, good communication between silos (if present) may leave you wondering what all the DevOps fuss is about.

I am both a fan and hater of Agile and DevOps. I’m a fan of what they are able to achieve when used correctly. I’m a hater of all the bullshit that surrounds them!

Rant over. :)

Cheers

Tim…


Upgrade Oracle Apps (EBS) to 12.2 ? ORA-01804: failure to initialize timezone information – issue while running AutoConfig

Online Apps DBA - Fri, 2016-02-05 03:23
This entry is part 6 of 6 in the series Oracle EBS 12.2 Upgrade

This post covers an issue running AutoConfig on the DB tier after upgrading the database, reported in our Oracle EBS Upgrade R12.2 training (the next batch starts on 20th Feb and only limited seats are available; we limit the number of trainees to 15, and we cover the architecture, an overview of R12.2 and major features in upgrading to R12.2, the different upgrade paths available to R12.2, best practices for the R12.2 upgrade, how to minimize downtime for the R12.2 upgrade, and difficulties/issues while upgrading to R12.2).

One of the trainees from our previous batch encountered the issue "ORA-01804: failure to initialize timezone information" in the post-database-upgrade (11.2.0.4) step while running AutoConfig.

Note: There are a total of 9 stages in the 12.2 upgrade, and the database upgrade to 11gR2 or 12c is one of the steps.

Issue:

1. Running AutoConfig on the database tier as

$ORACLE_HOME/appsutil/bin/adconfig.sh contextfile=$ORACLE_HOME/appsutil/PROD1211_iamdemo01.xml


Error messages in the AutoConfig log: /u01/oracle/PROD1211_11204/db/tech_st/11.2.0/appsutil/log/PROD1211_iamdemo01/12180900/adconfig.log

SQLPLUS Executable : /u01/oracle/PROD1211_11204/db/tech_st/11.2.0/bin/sqlplus

ERROR:
ORA-01804: failure to initialize timezone information

SP2-0152: ORACLE may not be functioning properly
adcrobj.sh exiting with status 1
ERRORCODE = 1 ERRORCODE_END

Fix:

Apply patch 7651166 (as per the README instructions) to fix the above issue and run AutoConfig again as

$ORACLE_HOME/appsutil/bin/adconfig.sh contextfile=$ORACLE_HOME/appsutil/PROD1211_iamdemo01.xml

You should see a message “AutoConfig Completed Successfully”


References:

  • EBS 12.1.1: Autoconfig Fails While Running Scripts “afdbprf.sh” and “adcrobj.sh” with 11GR2 Database (Doc ID 1336807.1)
  • AutoConfig On Db Tier Fails With Error – SP2-1503: Unable to initialize Oracle call interface (Doc ID 1187616.1)

If you want to learn more about the Oracle EBS upgrade to R12.2, then click the button below and register for our Oracle Upgrade 12.2 training (the next batch starts on 20th February, 2016).

Note: We are so confident in our workshops that we provide a 100% money-back guarantee; in the unlikely case that you are not happy after the first session, just drop us a mail before the second session and we'll refund the full amount.

Oracle E-Business Suite Upgrade to R12.2 Training

Live Instructor led Online sessions with Hands-on Lab Exercises, Dedicated Machines to Practice and Recorded sessions of the Training

Click here to learn more with limited time discounts

Stay Tuned for more Information on Oracle Apps 12.2 Upgrade!!

The post Upgrade Oracle Apps (EBS) to 12.2 ? ORA-01804: failure to initialize timezone information – issue while running AutoConfig appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Put Glance on It

Oracle AppsLab - Fri, 2016-02-05 02:59

Because I live in Portland, I’m often asked if “Portlandia” is accurate.

It is, mostly, and so it seems appropriate to channel an early episode to talk about Glance, our wearables framework.

Actually, Glance has grown beyond wearables to support cars and other devices, the latest of which is Noel’s (@noelportugal) gadget du jour, the LaMetric Time (@smartatoms).

Insert mildly amusing video here.

And of course Noel had to push Glance notifications to LaMetric, because Noel. Pics, it happened.


The text is truncated, and I tried to take a video of the scrolling notification, but it goes a bit fast for the camera. Beyond just the concept, we’ll have to break up the notification to fit LaMetric’s model better, but this was only a few minutes of effort from Noel. I know, because he was sitting next to me while he coded it.

In case you need a refresher, here’s Glance on a bunch of other things.


I didn’t have a separate camera so I couldn’t show the Android notification.


We haven’t updated the framework for them, but if you recall, Glance also worked on Google Glass and Pebble in its 1.0 version.



Network multicast support on Azure

Pythian Group - Thu, 2016-02-04 15:07

 

Today I would like to talk about multicast support on Azure, and how to make it work. While it’s not the most commonly required feature in a virtual environment, some applications nevertheless require multicast support on the network. The perfect example is Oracle RAC, where multicast is required starting from version 11.2.0.2. In Oracle RAC, multicast is used for highly available IP (HAIP) on the interconnect. If you’re thinking about building a training environment with Oracle RAC on Azure, you will need multicast support.

How can we check whether multicast works on our network? First, you can check whether it’s supported by your kernel using the netstat utility.

[root@oradb5 ~]# netstat -g | grep mcast
lo 1 all-systems.mcast.net
eth0 1 all-systems.mcast.net
eth1 1 all-systems.mcast.net

You can see that all my interfaces are ready for multicast support. That’s fine, but how can we check whether it actually works on our network? We can use either the iperf utility or a Perl script created by Oracle. You can download the script from Oracle Support (if you have an account) from the note “How to Validate Network and Name Resolution Setup for the Clusterware and RAC (Doc ID 1054902.1)”.
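
If you prefer iperf to the Oracle script, a rough multicast check can be sketched as below. This is only a sketch assuming the classic iperf2 UDP flags; the multicast group, TTL and duration are arbitrary choices:

# On the receiving node: join multicast group 230.0.1.0 and listen for UDP traffic
iperf -s -u -B 230.0.1.0 -i 1
# On the sending node: send UDP traffic to the same group with a TTL of 3 for 5 seconds
iperf -c 230.0.1.0 -u -T 3 -t 5 -i 1

If the receiver reports incoming datagrams, multicast traffic is passing between the nodes.
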
Here is my setup: I have two A3-size Azure VMs with Oracle Linux 6, each with two network interfaces. The VM hostnames are oradb5 and oradb6. You can check out my blog post on how to make an Azure VM with two network interfaces here. The second interface, eth1, is the one where we are going to enable multicast.

I ran the mcasttest.pl script and saw the following:

[oracle@oradb5 mcasttest]$ ./mcasttest.pl -n oradb5,oradb6 -i eth1
########### Setup for node oradb5 ##########
Checking node access 'oradb5'
Checking node login 'oradb5'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb5'
Distributing mcast2 binary to node 'oradb5'
########### Setup for node oradb6 ##########
Checking node access 'oradb6'
Checking node login 'oradb6'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb6'
Distributing mcast2 binary to node 'oradb6'
########### testing Multicast on all nodes ##########

Test for Multicast address 230.0.1.0

Nov 24 15:05:23 | Multicast Failed for eth1 using address 230.0.1.0:42000

Test for Multicast address 224.0.0.251

Nov 24 15:05:53 | Multicast Failed for eth1 using address 224.0.0.251:42001
[oracle@oradb5 mcasttest]$

The output clearly tells us that we don’t have multicast support for either the 230.0.1.0 or the 224.0.0.251 multicast address.

What does the Virtual Network FAQ for Azure tell us about it?
Here is the answer:

Do VNets support multicast or broadcast?
No. We do not support multicast or broadcast.
What protocols can I use within VNets?
You can use standard IP-based protocols within VNets. However, multicast, broadcast, IP-in-IP encapsulated packets and Generic Routing Encapsulation (GRE) packets are blocked within VNets. Standard protocols that work include:
* TCP
* UDP
* ICMP

So, we need a workaround. Luckily, we have one. Some time ago, while discussing RAC on Amazon AWS, I was pointed to an article written by my former colleague Jeremiah Wilton, in which he described how to work around the same problem on Amazon. You can read the article here. I decided to give it a try and see whether it works on Azure.

We are going to use n2n, a peer-to-peer VPN provided by ntop.
They have mentioned that development of the product has been put on hold, but the tool is still widely used and provides an acceptable solution to our problem. I used Stuart Buckell’s article on how to set it up, and it worked for me.
We could just use an already precompiled package, but compiling the utility from source gives us the opportunity to disable encryption and compression, or change any other options.

Here is what I’ve done.
Installed the kernel headers to be able to compile n2n:

[root@oradb5 n2n_v2]# yum install kernel-headers
ol6_UEK_latest | 1.2 kB 00:00
ol6_u4_base | 1.4 kB 00:00
ol6_u4_base/primary | 2.7 MB 00:00
ol6_u4_base 8396/8396
Setting up Install Process
Resolving Dependencies
....

Installed subversion utility:

[root@oradb5 /]# yum install subversion.x86_64
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package subversion.x86_64 0:1.6.11-15.el6_7 will be installed
.............

Downloaded the sources using svn:

[root@oradb5 /]# svn co https://svn.ntop.org/svn/ntop/trunk/n2n
Error validating server certificate for 'https://svn.ntop.org:443':
- The certificate hostname does not match.
Certificate information:
- Hostname: shop.ntop.org
- Valid: from Sun, 15 Nov 2015 00:00:00 GMT until Wed, 14 Nov 2018 23:59:59 GMT
- Issuer: COMODO CA Limited, Salford, Greater Manchester, GB
- Fingerprint: fb:a6:ff:a7:58:f3:9d:54:24:45:e5:a0:c4:04:18:d5:58:91:e0:34
(R)eject, accept (t)emporarily or accept (p)ermanently? p
A n2n/n2n_v1
A n2n/n2n_v1/lzodefs.h
A n2n/n2n_v1/README
...............

Disabled encryption and compression using the article mentioned above.
Changed directory to n2n/n2n_v2 and compiled it:

[root@oradb5 n2n_v2]# make
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c n2n.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c n2n_keyfile.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c wire.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c minilzo.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c twofish.c
..............................

Copied the files to the /usr/sbin directory on both of my servers (oradb5 and oradb6):

[root@oradb5 n2n_v2]# cp supernode /usr/sbin/
[root@oradb5 n2n_v2]# cp edge /usr/sbin/

Started the supernode daemon on the first node. We only need it running on one machine, and it can even be a totally different machine. I am using port 1200 for it:

[root@oradb5 ~]# supernode -l 1200
[root@oradb5 ~]#

Started the edge on both servers. On oradb5 I am creating a sub-interface with IP 192.168.1.1 and providing some parameters:
-E – Accept multicast MAC addresses (default=drop).
-r – Enable packet forwarding through n2n community.
-c – n2n community name the edge belongs to.
-l – our supernode address:port.

[root@oradb5 ~]# edge -l 10.0.2.11:1200 -c RAC -a 192.168.1.1 -E -r

[root@oradb6 ~]# edge -l 10.0.2.11:1200 -c RAC -a 192.168.1.2 -E -r

So we now have an edge0 interface on both nodes and can use it for connections that require multicast:

[root@oradb5 ~]# ifconfig edge0
edge0 Link encap:Ethernet HWaddr 52:CD:8E:20:3D:E5
inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::50cd:8eff:fe20:3de5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:592 (592.0 b)

[root@oradb5 ~]#

On the second box:

[root@oradb6 ~]# ifconfig edge0
edge0 Link encap:Ethernet HWaddr 7E:B1:F1:41:7B:B7
inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::7cb1:f1ff:fe41:7bb7/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:362 (362.0 b)

[root@oradb6 ~]#

Now we can run our multicast test again for the edge0 interface and see how it works:

[oracle@oradb5 ~]$ cd mcasttest/
[oracle@oradb5 mcasttest]$ ./mcasttest.pl -n oradb5,oradb6 -i edge0
########### Setup for node oradb5 ##########
Checking node access 'oradb5'
Checking node login 'oradb5'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb5'
Distributing mcast2 binary to node 'oradb5'
########### Setup for node oradb6 ##########
Checking node access 'oradb6'
Checking node login 'oradb6'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb6'
Distributing mcast2 binary to node 'oradb6'
########### testing Multicast on all nodes ##########

Test for Multicast address 230.0.1.0

Nov 24 16:22:12 | Multicast Succeeded for edge0 using address 230.0.1.0:42000

Test for Multicast address 224.0.0.251

Nov 24 16:22:13 | Multicast Succeeded for edge0 using address 224.0.0.251:42001
[oracle@oradb5 mcasttest]$

As you can see, the test has completed successfully. So, the edge0 interface can now be used for any connections requiring multicast support.
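
For example, once Grid Infrastructure is in place, the new interface could be registered as the RAC private interconnect roughly as follows. This is only a sketch: it assumes the 192.168.1.0 subnet used above and the standard oifcfg syntax, and the exact steps for RAC on Azure are the subject of the next article:

# Register edge0 as the cluster interconnect for the 192.168.1.0 subnet (run from the Grid home as the grid owner)
oifcfg setif -global edge0/192.168.1.0:cluster_interconnect
# Verify the registered interfaces
oifcfg getif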

In my next article, I will show you how to create an Oracle RAC on Azure using the multicast-enabled interface and shared storage.

Categories: DBA Blogs

Come Visit the OAUX Gadget Lab

Oracle AppsLab - Thu, 2016-02-04 11:28

In September 2014, Oracle Applications User Experience (@usableapps) opened a brand new lab that showcases Oracle’s Cloud Applications, specifically the many innovations that our organization has made to and around Cloud Applications in the past handful of years.

We call it the Cloud User Experience Lab, or affectionately, the Cloud Lab.

Our team has several projects featured in the Cloud Lab, and many of our team members have presented our work to customers, prospects, partners, analysts, internal groups, press, media and even schools and Girl and Boy Scout troops.

In 2015, the Cloud Lab hosted more than 200 such tours, actually quite a bit more, but I don’t have the exact number in front of me.

Beyond the numbers, Jeremy (@jrwashely), our group vice president, has been asked to replicate the Cloud Lab in other places, on the HQ campus and abroad at other Oracle campuses.

People really like it.

In October 2015, we opened an adjoining space to the Cloud Lab that extends the experience to include more hands-on projects. We call this new lab the Gadget Lab, and it features many more of our projects, including several you’ve seen here.

In the Gadget Lab, we’re hoping to get people excited about the possible and give them a glimpse of what our team does because saying “we focus on emerging technologies” isn’t nearly as descriptive as showing our work.


So, the next time you’re at Oracle HQ, sign up for a tour of the Cloud and Gadget Labs and let us know what you think.

An UNDO in a PDB in Oracle 12c?

Pythian Group - Thu, 2016-02-04 10:08

 

According to the Oracle 12cR1 documentation and concepts, it is 100% clear that there can be only one UNDO tablespace in a multitenant architecture and it is at CDB level; thus, a PDB cannot have any UNDO tablespace.

Are we really sure about that? Let’s test it!

First, we need a PDB with few tablespaces:

 

FRED_PDB> select NAME, OPEN_MODE, CON_ID from v$pdbs ;

NAME OPEN_MODE CON_ID
-------------------------------------------------- ---------- ----------
FRED_PDB READ WRITE 4

FRED_PDB> select tablespace_name from dba_tablespaces ;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS
TBS_DATA

5 rows selected.

FRED_PDB> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
FRED_PDB>

 

There we have an UNDO tablespace named UNDOTBS1 at CDB level and no UNDO tablespace at PDB level. Let’s try to create one:

FRED_PDB> create undo tablespace MY_PDB_UNDO ;

Tablespace created.

FRED_PDB>

 

It worked! Is the Oracle documentation wrong? Let’s verify this weird successful UNDO tablespace creation:

FRED_PDB> select tablespace_name from dba_tablespaces where tablespace_name like '%UNDO%' ;

no rows selected

FRED_PDB> select tablespace_name from dba_tablespaces ;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS
TBS_DATA

5 rows selected.

FRED_PDB>

 

No UNDO tablespace has in fact been created, even though no error message was raised by Oracle. Digging into the documentation, this is not a bug but a feature. Indeed, it is well specified that:

When the current container is a PDB, an attempt to create an undo tablespace fails without returning an error.

 

Please note that this is the behavior of the 12cR1 release; from my side, I think that this is a “not yet” feature and we should see some real UNDO tablespaces in PDBs in the next release(s)!
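
One way to see where undo really lives is to query the container-wide dictionary from the root. This is only a sketch, assuming SYSDBA access in the root container and the standard CDB_TABLESPACES view; based on the documentation above, you would expect the undo tablespace to be reported at the CDB level only:

# Run from the CDB root; lists undo tablespaces across all open containers
sqlplus / as sysdba <<'EOF'
select con_id, tablespace_name from cdb_tablespaces where contents = 'UNDO';
EOF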

Discover more about our expertise in Oracle

Categories: DBA Blogs