Feed aggregator

Death and taxes - and Oracle 11gR2?

Andrew Clarke - Thu, 2015-12-31 02:48
Oracle Premier Support for 11gR2 Database expired this time last year. However, Oracle announced they would waive the fees for Extended Support for 2015. This was supposed to give 11gR2 customers an additional twelve months to migrate to 12c. So, twelve months on, how many of those laggards are still on 11gR2? My entirely unscientific guess is: most of them. Why else would Oracle announce the extension of the Extended Support fee waiver until May 2017?

But 11gR2's continued longevity should not be a surprise.

For a start, it is a really good product. It is fully-featured and extremely robust. It offers pretty much everything an organization might want from a database. Basically it's the Windows XP of RDBMS.

The marketing of 12c has compounded this. It has focused on the "big ticket" features of 12c: Cloud, Multi-tenancy and In-Memory Database. That is fair enough, except that these are all chargeable extras. So getting any actual benefit from upgrading to 12c requires laying out additional license fees, which is not a popular message these days.

And then there's Big Data. The hype has swept up lots of organizations who are now convinced they should be replacing their databases with Hadoop. They have heard the siren song of free software and vendor independence. In reality, most enterprises' core business rests on structured data for which they need an RDBMS, and their use cases for Big Data are marginal. But right now, it seems easier to make a business case for the shiny new toys than for spending more on the existing estate.

So how can Oracle shift organizations onto 12c? They need to offer compelling positive reasons, not just the fear of losing Support. My suggestion would be to make a couple of the Options part of the core product. For instance, including Partitioning and In-Memory Database in the base license would make Oracle 12c a much more interesting proposition for many organizations.

12c Parallel Execution New Features: Parallel FILTER Subquery Evaluation - Part 3: The Optimizer And Distribution Methods

Randolf Geist - Wed, 2015-12-30 13:45
As mentioned in the first and second part of this instalment, the different distribution methods available to the new parallel FILTER are selected automatically by the optimizer. In this last post of the series I want to focus on that optimizer behaviour.

It looks like there are two new optimizer-related parameters that control the behaviour of the new feature: "_px_filter_parallelized" is the overall switch to enable/disable the new parallel FILTER capability and defaults to "true" in 12c, while "_px_filter_skew_handling" influences how the optimizer determines the distribution method. The parameter name suggests that it somehow has to do with some kind of "skew" - note that the internal parameter that handles the new automatic join skew handling is called "_px_join_skew_handling", rather similar in name.
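
For reference, if you want to check the current settings of these two hidden parameters, one common way is to query the X$ views as SYS - just a quick sketch, not part of the original post, with the usual caveats about underscore parameters:

select p.ksppinm as name, v.ksppstvl as value, p.ksppdesc as description
  from x$ksppi p, x$ksppsv v
 where p.indx = v.indx
   and p.ksppinm in ('_px_filter_parallelized', '_px_filter_skew_handling');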

But even after playing around with the feature for quite a while I couldn't come up with a good test case where the optimizer chose a different distribution method based on the typical data distribution skew patterns - that is, where the expression used for the FILTER lookup had some values that were much more popular than others. So I got in touch with Yasin Baskan, product manager for Parallel Execution at Oracle, and asked what kind of skew is meant here and when a difference in behaviour should show up.

As it turns out, "skew" means something different in this context. When the mentioned parameter "_px_filter_skew_handling" is set to "true" (the default value in 12c) the optimizer will choose a distribution method based on the size of the object driving the filter. According to my tests this effectively means: if the object is so small that only one granule (usually 13 blocks) per PX slave can be assigned, the optimizer will automatically use a HASH distribution; otherwise - if the object is larger than this threshold - no re-distribution will be selected. I wasn't able to come up with an example where the optimizer automatically picks the other available distribution method, which is RANDOM / ROUND-ROBIN (see the previous post). To demonstrate the point, here is a small example:

create table t2 as select * from dba_objects where rownum <= 90000;

exec dbms_stats.gather_table_stats(null, 't2')

create table t3 as select * from dba_objects where rownum <= 90000;

exec dbms_stats.gather_table_stats(null, 't3')

explain plan for
select /*+ monitor
parallel(4)
--opt_param('_px_filter_skew_handling' 'false')
*/ count(*) from
t3 t
--(select /*+ no_merge */ a.* from t3 a) t
--(select a.* from t3 a, t3 b where a.object_id = b.object_id) t
where exists (select /*+ no_unnest */ 1 from t2 where t.object_id=t2.object_id);
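
The plan output shown in the listings below can be displayed with the usual DBMS_XPLAN call after each EXPLAIN PLAN - the call isn't part of the original listings, so treat it as the assumed display step:

select * from table(dbms_xplan.display);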

-- Default plan, no redistribution before parallel FILTER
-----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 5 | 32M (1)| 00:21:13 | | | |
| 1 | SORT AGGREGATE | | 1 | 5 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 5 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 5 | | | Q1,00 | PCWP | |
|* 5 | FILTER | | | | | | Q1,00 | PCWC | |
| 6 | PX BLOCK ITERATOR | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | PCWC | |
| 7 | TABLE ACCESS FULL| T3 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | PCWP | |
|* 8 | TABLE ACCESS FULL | T2 | 1 | 5 | 412 (1)| 00:00:01 | | | |
-----------------------------------------------------------------------------------------------------------------

exec dbms_stats.set_table_stats(null, 't3', numblks => 52)

-- Setting stats of T3 to 52 (13 * DOP) blocks or smaller - HASH distribution will be used, 53 blocks or greater => no redistribution
-------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 5 | 32M (1)| 00:21:13 | | | |
| 1 | SORT AGGREGATE | | 1 | 5 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10001 | 1 | 5 | | | Q1,01 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 5 | | | Q1,01 | PCWP | |
|* 5 | FILTER | | | | | | Q1,01 | PCWP | |
| 6 | PX RECEIVE | | 90000 | 439K| 5 (20)| 00:00:01 | Q1,01 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | 90000 | 439K| 5 (20)| 00:00:01 | Q1,00 | P->P | HASH |
| 8 | PX BLOCK ITERATOR | | 90000 | 439K| 5 (20)| 00:00:01 | Q1,00 | PCWC | |
| 9 | TABLE ACCESS FULL| T3 | 90000 | 439K| 5 (20)| 00:00:01 | Q1,00 | PCWP | |
|* 10 | TABLE ACCESS FULL | T2 | 1 | 5 | 412 (1)| 00:00:01 | | | |
-------------------------------------------------------------------------------------------------------------------
So this example shows that the HASH distribution will be used by the optimizer if the object T3 driving the FILTER operation is 52 blocks or smaller, which corresponds to 13 blocks per PX slave at a degree of 4.
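
As a cross-check of the threshold (based on the comment above: 53 blocks or greater means no re-distribution), bumping the faked statistics by a single block should flip the plan back to the default variant without re-distribution - a sketch only, result not shown here:

exec dbms_stats.set_table_stats(null, 't3', numblks => 53)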

Now I find this behaviour pretty hard to explain - usually you wouldn't want to use Parallel Execution on such a small object anyway. But things become even worse: not only is the "skew" handling based on object size questionable in my view, the behaviour can also become a potential threat if the row source driving the FILTER operator is no longer a plain table but the result of a more complex operation, which can be as simple as a join or a non-mergeable view:

-- Resetting stats to true size of table - this would mean no redistribution at a DOP of 4, see above
exec dbms_stats.gather_table_stats(null, 't3')

explain plan for
select /*+ monitor
parallel(4)
--opt_param('_px_filter_skew_handling' 'false')
*/ count(*) from
--t3 t
(select /*+ no_merge */ a.* from t3 a) t
--(select a.* from t3 a, t3 b where a.object_id = b.object_id) t
where exists (select /*+ no_unnest */ 1 from t2 where t.object_id=t2.object_id);

-- But simply using a NO_MERGE hint on the select from the simple T3 row source results in an unnecessary HASH re-distribution
--------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 9755K (1)| 00:06:22 | | | |
| 1 | SORT AGGREGATE | | 1 | 13 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10001 | 1 | 13 | | | Q1,01 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 13 | | | Q1,01 | PCWP | |
|* 5 | FILTER | | | | | | Q1,01 | PCWP | |
| 6 | PX RECEIVE | | 90000 | 1142K| 114 (0)| 00:00:01 | Q1,01 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | 90000 | 1142K| 114 (0)| 00:00:01 | Q1,00 | P->P | HASH |
| 8 | VIEW | | 90000 | 1142K| 114 (0)| 00:00:01 | Q1,00 | PCWP | |
| 9 | PX BLOCK ITERATOR | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | PCWC | |
| 10 | TABLE ACCESS FULL| T3 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | PCWP | |
|* 11 | TABLE ACCESS FULL | T2 | 1 | 5 | 114 (0)| 00:00:01 | | | |
--------------------------------------------------------------------------------------------------------------------

explain plan for
select /*+ monitor
parallel(4)
--opt_param('_px_filter_skew_handling' 'false')
*/ count(*) from
--t3 t
--(select /*+ no_merge */ a.* from t3 a) t
(select a.* from t3 a, t3 b where a.object_id = b.object_id) t
where exists (select /*+ no_unnest */ 1 from t2 where t.object_id=t2.object_id);

-- If we use a simple join as driving row source again a HASH re-distribution before the FILTER gets added
-- As a result the dreaded HASH JOIN BUFFERED will be used instead of the plain HASH JOIN
-------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 10 | 32M (1)| 00:21:13 | | | |
| 1 | SORT AGGREGATE | | 1 | 10 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10003 | 1 | 10 | | | Q1,03 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 10 | | | Q1,03 | PCWP | |
|* 5 | FILTER | | | | | | Q1,03 | PCWP | |
| 6 | PX RECEIVE | | 90000 | 878K| 229 (1)| 00:00:01 | Q1,03 | PCWP | |
| 7 | PX SEND HASH | :TQ10002 | 90000 | 878K| 229 (1)| 00:00:01 | Q1,02 | P->P | HASH |
|* 8 | HASH JOIN BUFFERED | | 90000 | 878K| 229 (1)| 00:00:01 | Q1,02 | PCWP | |
| 9 | PX RECEIVE | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,02 | PCWP | |
| 10 | PX SEND HYBRID HASH | :TQ10000 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | P->P | HYBRID HASH|
| 11 | STATISTICS COLLECTOR | | | | | | Q1,00 | PCWC | |
| 12 | PX BLOCK ITERATOR | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | PCWC | |
| 13 | TABLE ACCESS FULL | T3 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | PCWP | |
| 14 | PX RECEIVE | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,02 | PCWP | |
| 15 | PX SEND HYBRID HASH | :TQ10001 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,01 | P->P | HYBRID HASH|
| 16 | PX BLOCK ITERATOR | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,01 | PCWC | |
| 17 | TABLE ACCESS FULL | T3 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,01 | PCWP | |
|* 18 | TABLE ACCESS FULL | T2 | 1 | 5 | 412 (1)| 00:00:01 | | | |
-------------------------------------------------------------------------------------------------------------------------

explain plan for
select /*+ monitor
parallel(4)
opt_param('_px_filter_skew_handling' 'false')
*/ count(*) from
--t3 t
--(select /*+ no_merge */ a.* from t3 a) t
(select a.* from t3 a, t3 b where a.object_id = b.object_id) t
where exists (select /*+ no_unnest */ 1 from t2 where t.object_id=t2.object_id);

-- Disabling the FILTER skew handling behaviour means no re-distribution before the FILTER, and hence no HASH JOIN BUFFERED
-----------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 10 | 32M (1)| 00:21:13 | | | |
| 1 | SORT AGGREGATE | | 1 | 10 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10002 | 1 | 10 | | | Q1,02 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 10 | | | Q1,02 | PCWP | |
|* 5 | FILTER | | | | | | Q1,02 | PCWC | |
|* 6 | HASH JOIN | | 90000 | 878K| 229 (1)| 00:00:01 | Q1,02 | PCWP | |
| 7 | PX RECEIVE | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,02 | PCWP | |
| 8 | PX SEND HYBRID HASH | :TQ10000 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | P->P | HYBRID HASH|
| 9 | STATISTICS COLLECTOR | | | | | | Q1,00 | PCWC | |
| 10 | PX BLOCK ITERATOR | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | PCWC | |
| 11 | TABLE ACCESS FULL | T3 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | PCWP | |
| 12 | PX RECEIVE | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,02 | PCWP | |
| 13 | PX SEND HYBRID HASH | :TQ10001 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,01 | P->P | HYBRID HASH|
| 14 | PX BLOCK ITERATOR | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,01 | PCWC | |
| 15 | TABLE ACCESS FULL | T3 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,01 | PCWP | |
|* 16 | TABLE ACCESS FULL | T2 | 1 | 5 | 412 (1)| 00:00:01 | | | |
-----------------------------------------------------------------------------------------------------------------------
So it seems that if the row source driving the parallel FILTER operator is complex (by complex I simply mean not a plain table), the optimizer will always add a HASH distribution before the FILTER, unconditionally. It is obvious that such a re-distribution adds overhead - it requires resources to perform. What is even worse is that, in general, the rule is: the more redistributions, the more likely the dreaded buffering will be added to the execution plan, as can be seen from the example above, where the HASH JOIN turns into a HASH JOIN BUFFERED due to the HASH distribution the optimizer adds by default after the join and before the FILTER. By disabling the filter "skew" handling this - in my opinion unnecessary - redistribution doesn't show up, and hence the HASH JOIN without buffering can be used in this example.

Summary
The new parallel FILTER operator comes with different distribution methods available to the optimizer. However, at present the way the optimizer determines automatically if and how to re-distribute the data seems to be questionable to me.

The skew handling is based on the size of the driving object - for very small objects a re-distribution gets added before the FILTER. For row sources driving the filter that are not simple tables, the skew handling seems to add a re-distribution unconditionally.

For the reasons outlined, at present I would recommend considering disabling the filter skew handling by setting the parameter "_px_filter_skew_handling" to "false" - of course not without getting the blessing from Oracle Support before doing so. This should minimise the number of re-distributions added to an execution plan. Losing the capability of handling the "skew" caused by very small objects is, in my opinion, negligible in most cases.
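
For completeness, disabling it could look like this - either per statement via the OPT_PARAM hint already used in the test queries above, or at session level (a sketch only, and as said, only with Oracle Support's blessing):

-- per statement
select /*+ parallel(4)
           opt_param('_px_filter_skew_handling' 'false')
       */ count(*)
  from t3 t
 where exists (select /*+ no_unnest */ 1 from t2 where t.object_id = t2.object_id);

-- per session
alter session set "_px_filter_skew_handling" = false;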

New Version Of XPLAN_ASH Utility

Randolf Geist - Mon, 2015-12-28 13:35
A new version 4.22 of the XPLAN_ASH utility is available for download.

As usual the latest version can be downloaded here.

This version primarily addresses an issue with 12c: if the HIST mode is used to pull ASH information from AWR in 12c, it turns out that Oracle forgot to add the new "DELTA_READ_MEM_BYTES" column to DBA_HIST_ACTIVE_SESS_HISTORY - although it got officially added to V$ACTIVE_SESSION_HISTORY in 12c. So I had to implement several additional if/then/else constructs in the script to handle this inconsistency. It's the first time that the HIST view doesn't seem to reflect all columns of the V$ view - very likely an oversight rather than by design, I assume.

Apart from that the I/O figures (Read Bytes / Write Bytes etc.) in the "Activity Timeline" make more sense for those cases where a process hasn't been sampled for several sample points (see below for more details).
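
To illustrate the idea behind the changed calculation: the delta I/O figures of a sample now get related to the DELTA_TIME (in microseconds) they cover, rather than being attributed as a total to the single sample point. A minimal, simplified sketch of that per-second normalisation against V$ACTIVE_SESSION_HISTORY - not the actual XPLAN_ASH code - could look like this:

select sample_time
     , round(delta_read_io_bytes  / nullif(delta_time / 1e6, 0)) as read_bytes_per_sec
     , round(delta_write_io_bytes / nullif(delta_time / 1e6, 0)) as write_bytes_per_sec
  from v$active_session_history
 where sql_id = '&sql_id';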

Also, in case an execution plan could not be found, a corresponding message now makes it more obvious that you might be able to pull the execution plan from AWR by using a different ASH mode (MIXED / HIST).

Here are the notes from the change log:

- Fixed a funny bug: in 12c Oracle has forgotten to add the DELTA_READ_MEM_BYTES column to DBA_HIST_ACTIVE_SESS_HISTORY, so in HIST mode on 12c prior XPLAN_ASH versions could error out with an invalid column name

- Changed the way the I/O figures are treated in the "Activity Timeline based on ASH". Now the I/O per second is spread over the (previous) samples covered by DELTA_TIME. This should give a smoother representation of the I/O performed, much closer to what you see in Real-Time SQL Monitoring reports. The difference to prior versions is only visible in cases where a session wasn't sampled for quite a while and hence has a DELTA_TIME spanning multiple previous sample points. This also means that the I/O related columns in the "Activity Timeline based on ASH" now show only the PER SECOND values, no longer the totals as in prior versions

- Added a SET NULL "" in the configuration and initialization section for SQL*Plus environments that use a non-default SET NULL setting. This screwed up some internal switches so that XPLAN_ASH, for example, thought it was running in a S-ASH repository

- Added a note to the end of the output if no execution plan could be found and the script falls back to retrieving plan operation details from ASH. Also added a note to use the MIXED or HIST ASH source option if no execution plan could be found in CURR mode, i.e. the execution plan has been purged from the Shared Pool in the meantime

- Cloned the "cleanup" section from the end to the beginning of the script to ensure no current SQL*Plus environment settings influence the script execution. This is particularly relevant if the script execution gets cancelled before the final cleanup section is reached or some other, previous scripts left a mess behind

Selfies. Social. And Style: Smartwatch UX Trends

Usable Apps - Sun, 2015-12-27 16:58

From Antiques to Apple

“I don’t own a watch myself”, a great parting shot by Kevin of Timepiece Antique Clocks in the Liberties, Dublin.

I had popped in one rainy day in November to discover more about clock making and to get an old school perspective on smartwatches. Kevin’s comment made sense. “Why would he need to own a watch?” I asked myself, surrounded by so many wonderful clocks from across the ages, all keeping perfect time.

This made me consider what might influence people to use smartwatches. Such devices offer more than just telling the time.

Timepiece Antiques in Dublin

From antiques to Apple: UX research in the Liberties, Dublin

2015 was very much the year of the smartwatch. The arrival of the Apple Watch earlier in 2015 sparked much press excitement and Usable Apps covered the enterprise user experience (UX) angle with two much-read blog pieces featuring our Group Vice President, Jeremy Ashley (@jrwashley).

Although the Apple Watch retains that initial consumer excitement (at the last count about 7 million units have shipped), we need to bear in mind that the Oracle Applications User Experience cloud strategy is not about one device. The Glance UX framework runs just as well on Pebble and Android Wear devices, for example.

 Exciting Offerings in 2015

It's not all about the face. Two exciting devices came my way in 2015 for evaluation against the cloud user experience: The Basis (left) and Vector Watch. 

Overall, the interest in wearable tech and what it can do for the enterprise is stronger than ever. Here's my (non-Oracle endorsed) take on what's going to be hot and why in 2016 for smartwatch UX.

Trending Beyond Trendy 

There were two devices that came my way in 2015 for evaluation that for me captured happening trends in smartwatch user experience. 

First there was the Basis Peak (now just Basis). I covered elsewhere my travails in setting up the Basis and how my perseverance eventually paid off.

 The Ultimate Fitness and Sleep Tracker

Basis: The ultimate fitness and sleep tracker. Quantified self heaven for those non-fans of Microsoft Excel and notebooks. Looks great too! 

Not only does the Basis look good, but its fitness functionality, range of activity and sleep monitoring "habits," data gathering, and visualizations suited my busy work/life balance and thrilled me. Over the year, the Basis added new features that reflected a more personal quantified self angle (urging users to take a "selfie") and then acknowledged that fitness fans might be social creatures (or at least in need of friends) by prompting them to share their achievements, or "bragging rights," to put it the modern way.

Bragging Rights notification on Basis

Your bragging rights are about to peak: Notifications on Basis (middle) 

Second there was the Vector Watch, which came to me by way of a visit to Oracle EPC in Bucharest. I was given a device to evaluate.

A British design, with development and product operations in Bucharest and Palo Alto too, the Vector looks awesome. The sophisticated, stylish appearance of the watch screams class and quality. It is easily worn by the most fashionable people around and yet packs a mighty user experience.  

 Style and function together

Vector Watch: Fit executive meets fashion 

I simply love the sleek, subtle, How To Spend It positioning, the range of customized watch faces, notifications integration, activity monitoring capability, and the analytics of the mobile app that it connects with via Bluetooth. Having to charge the watch battery only 12 times (or fewer) each year means one less strand to deal with in my traveling Kabelsalat.

The Vector Watch affordance for notifications is a little quirky, and sure it’s not the Garmin or Suunto that official race pacers or the hardcore fitness types will rely on, and maybe the watch itself could be a little slimmer. But it’s an emerging story, and overall this is the kind of device for me, attracting positive comments from admirers (of the watch, not me) worldwide, from San Francisco to Florence, mostly on its classy looks alone.

I'm so there with the whole #fitexecutive thing.

Perhaps the Vector Watch exposes that qualitative self to match the quantified self needs of our well-being that the Basis delivers on. Regardless, the Vector Watch tells us that wearable tech is coming of age in the fashion sense. Wearable tech has to. These are deeply personal devices, and as such, continue the evolution of wristwatches looking good and functioning well while matching the user's world and responding to what's hot in fashion.

Heck, we are now even seeing the re-emergence of pocket watches as tailoring adapts and facilitates their use. Tech innovation keeps time and keeps up, too, and so we have Kickstarter wearable tech solutions for pocket watches appearing, designed for the Apple Watch.

The Three "Fs"

Form and function is a mix that doesn't always quite gel. Sometimes compromises must be made trying to make great-looking, yet useful, personal technology. Such decisions can shape product adoption. The history of watch making tells us that.

Whereas the “F” of the smartwatch era of 2014–2015 was “Fitness,” it’s now apparent that the “F” that UX pros need to empathize with in 2016 will be "Fashion." Fashionable technology (#fashtech) in the cloud, the device's overall style and emotional pull, will be as powerful a driver of adoption as the mere outer form and the inner functionality of the watch.

The Beauty of Our UX Strategy 

The Oracle Applications Cloud UX strategy—device neutral that it is—is aware of such trends, ahead of them even.

The design and delivery of beautiful things has always been at the heart of Jeremy Ashley’s group. Watching people use those beautiful things in a satisfied way and hearing them talk passionately about them is a story that every enterprise UX designer and developer wants the bragging rights to.

So, what will we see on the runway from Usable Apps in 2016 in this regard?

Stay tuned, fashtechistas!

Getting one of your Five-a-Day – connecting Remotely to a Raspberry Pi from Linux Mint

The Anti-Kyte - Sat, 2015-12-26 12:42

It’s Christmas. To mark the occasion, my son bought me a top-of-the-range computer…

pi_board

Christmas has come early ! Er, hang on…

Yes, a Raspberry Pi 2 b-spec, complete with 900 MHz Quad-core ARM processor and 1 GB RAM.

Getting it up and running was a bit more of a challenge than I had anticipated.
The Pi uses HDMI for Video output and my ageing monitor is not equipped for HDMI…

tv

The best program on TV – NOOBS doing its thing.

In the end, I had to “borrow” the TV.
This arrangement was, of necessity, extremely temporary. The TV had to be back in its usual place ready for The Strictly-TOWIE-Dancing-Get-Me-Out-Of-Here Christmas Special, on pain of pain.
Therefore, my first Pi project was to connect to it remotely from another machine, namely, my Linux Mint Laptop.
This will enable me to run the Pi headless (i.e. without a monitor/keyboard/mouse attached to it).

I’m going to cover two different methods of connecting to the Pi.
The first is using ssh to connect to the command line.
The second is to connect remotely to the Raspbian desktop itself.

Just to avoid any confusion, I will be referring to the Raspberry Pi as “the Pi” and the machine I’m connecting from as “Mint”.

About the Environment

The Pi

The Pi I’m using for this is running the Raspbian Jessie OS.
It is set to start the desktop on boot.
The Pi is up and the desktop is running.
The Pi is connected to the router via a network cable.

The Mint Machine

The version of Mint I’m running on is 17.2.

The Network

I’m running on a standard home network with all devices connecting to a router.
The router assigns IP addresses to each connected machine dynamically via DHCP.
Apart from the router itself, no device on the network has a fixed IP address.

SSH – Setup

We need to perform these steps on the Pi.

First of all, we need to know what the machine name of the Pi is.

As the devices connecting to the network are dynamically allocated an IP address, it’s simpler to address a specific machine by name.

So, on the Pi, open a terminal and type :

uname -n

This returns the name of the computer. In my case :

raspberrypi

The next thing we need to do is to make sure that the Pi will accept connections via SSH.

On the Raspbian Desktop, click on the Menu and select Preferences/Raspberry Pi Configuration

prefs_menu

Let’s face it, this is the closest I’m going to get to fresh fruit and veg for the next couple of weeks.

Next click on the Interfaces tab and make sure that ssh is enabled.

config_window

Once the steps have been completed, we’re ready to test…

SSH – from the remote machine

We want to connect to the Pi from a remote machine as a user that exists on the Pi.
Note that this user does not need to exist on the remote machine.

When we run the ssh command, we need to specify the user we’re connecting as, and the name of the machine we’re connecting to (i.e. the Raspberry Pi itself).

I’m going to connect as the user pi. We’ve already found out that the name of the Raspberry Pi is “raspberrypi”.

So, I just need to open a terminal window on my remote machine and type :

ssh pi@raspberrypi

The first time you ssh to another computer on your network, you’ll get this warning :

The authenticity of host 'raspberrypi (192.168.1.144)' can't be established.
ECDSA key fingerprint is 03:72:d9:84:58:c8:a6:cc:37:bc:c3:47:8f:1c:90:e0.
Are you sure you want to continue connecting (yes/no)? 

Type “yes” and Hit Return…

Warning: Permanently added 'raspberrypi,192.168.1.144' (ECDSA) to the list of known hosts.
pi@raspberrypi's password:

Enter the password and…

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.

To confirm that we’re now connected to the pi…

uname -a
Linux raspberrypi 4.1.13-v7+ #826 SMP PREEMPT Fri Nov 13 20:19:03 GMT 2015 armv7l GNU/Linux

Whilst ssh is handy if you just need command line access to the Pi, if you want access to the Desktop, you’ll need to try something a bit different.

Using RDP to run the Pi Desktop Remotely

Now, there is more than one way to do this. I’ve chosen to use RDP as it’s quite simple to setup.

Installing xrdp on the Pi

To start with, we need to install the xrdp package on the Pi. At this point, you can either do this on the machine itself ( by opening a Terminal window), or connect via ssh.
Either way, the command you need to enter is :

sudo apt-get install xrdp

You will be prompted for your password and should then get some output that looks like this :

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  vnc4server x11-apps x11-session-utils xbase-clients xbitmaps xfonts-base
Suggested packages:
  vnc-java mesa-utils x11-xfs-utils
The following NEW packages will be installed:
  vnc4server x11-apps x11-session-utils xbase-clients xbitmaps xfonts-base xrdp
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 8,468 kB of archives.
After this operation, 17.1 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://mirrordirector.raspbian.org/raspbian/ jessie/main x11-apps armhf 7.7+4 [529 kB]
Get:2 http://mirrordirector.raspbian.org/raspbian/ jessie/main x11-session-utils armhf 7.7+1 [60.1 kB]
Get:3 http://mirrordirector.raspbian.org/raspbian/ jessie/main xbase-clients all 1:7.7+7 [36.7 kB]
Get:4 http://mirrordirector.raspbian.org/raspbian/ jessie/main vnc4server armhf 4.1.1+X4.3.0-37.6 [1,434 kB]
Get:5 http://mirrordirector.raspbian.org/raspbian/ jessie/main xbitmaps all 1.1.1-2 [32.1 kB]
Get:6 http://mirrordirector.raspbian.org/raspbian/ jessie/main xfonts-base all 1:1.0.3 [6,181 kB]
Get:7 http://mirrordirector.raspbian.org/raspbian/ jessie/main xrdp armhf 0.6.1-2 [195 kB]
Fetched 8,468 kB in 35s (236 kB/s)
Selecting previously unselected package x11-apps.
(Reading database ... 123536 files and directories currently installed.)
Preparing to unpack .../x11-apps_7.7+4_armhf.deb ...
Unpacking x11-apps (7.7+4) ...
Selecting previously unselected package x11-session-utils.
Preparing to unpack .../x11-session-utils_7.7+1_armhf.deb ...
Unpacking x11-session-utils (7.7+1) ...
Selecting previously unselected package xbase-clients.
Preparing to unpack .../xbase-clients_1%3a7.7+7_all.deb ...
Unpacking xbase-clients (1:7.7+7) ...
Selecting previously unselected package vnc4server.
Preparing to unpack .../vnc4server_4.1.1+X4.3.0-37.6_armhf.deb ...
Unpacking vnc4server (4.1.1+X4.3.0-37.6) ...
Selecting previously unselected package xbitmaps.
Preparing to unpack .../xbitmaps_1.1.1-2_all.deb ...
Unpacking xbitmaps (1.1.1-2) ...
Selecting previously unselected package xfonts-base.
Preparing to unpack .../xfonts-base_1%3a1.0.3_all.deb ...
Unpacking xfonts-base (1:1.0.3) ...
Selecting previously unselected package xrdp.
Preparing to unpack .../xrdp_0.6.1-2_armhf.deb ...
Unpacking xrdp (0.6.1-2) ...
Processing triggers for man-db (2.7.0.2-5) ...
Processing triggers for fontconfig (2.11.0-6.3) ...
Processing triggers for systemd (215-17+deb8u2) ...
Setting up x11-apps (7.7+4) ...
Setting up x11-session-utils (7.7+1) ...
Setting up xbase-clients (1:7.7+7) ...
Setting up vnc4server (4.1.1+X4.3.0-37.6) ...
update-alternatives: using /usr/bin/vnc4server to provide /usr/bin/vncserver (vncserver) in auto mode
update-alternatives: using /usr/bin/Xvnc4 to provide /usr/bin/Xvnc (Xvnc) in auto mode
update-alternatives: using /usr/bin/x0vnc4server to provide /usr/bin/x0vncserver (x0vncserver) in auto mode
update-alternatives: using /usr/bin/vnc4passwd to provide /usr/bin/vncpasswd (vncpasswd) in auto mode
update-alternatives: using /usr/bin/vnc4config to provide /usr/bin/vncconfig (vncconfig) in auto mode
Setting up xbitmaps (1.1.1-2) ...
Setting up xfonts-base (1:1.0.3) ...
Setting up xrdp (0.6.1-2) ...
Processing triggers for systemd (215-17+deb8u2) ...

Once that little lot has scrolled up your screen, you can exit the session ( just type “exit”).
There are some guides which suggest that you need to re-boot the Pi at this point. I found that this was not necessary. However, if things don’t quite work as described from this point on, it may be worth doing this. After all, “Have you tried turning it off and on again ?” is a cliche for a reason !

It’s probably worth mentioning that, at this point, you should be able to connect from any Windows ( Windows 7 or above) remote machine using the built-in Remote Desktop app.
That’s not what we’re after though. Oh no. We want to be able to do this from Mint…

Installing rdesktop on Linux Mint

Back on Mint, open a Terminal window and…

sudo apt-get install rdesktop

Once again you should be prompted for your password ( remember this is for your user on the Mint machine, not on the Pi). You should then see something like …

[sudo] password for mike:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
  pcscd
The following NEW packages will be installed
  rdesktop
0 to upgrade, 1 to newly install, 0 to remove and 83 not to upgrade.
Need to get 139 kB of archives.
After this operation, 427 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ trusty/universe rdesktop amd64 1.7.1-1ubuntu2 [139 kB]
Fetched 139 kB in 10s (12.6 kB/s)
Selecting previously unselected package rdesktop.
(Reading database ... 192214 files and directories currently installed.)
Preparing to unpack .../rdesktop_1.7.1-1ubuntu2_amd64.deb ...
Unpacking rdesktop (1.7.1-1ubuntu2) ...
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
Setting up rdesktop (1.7.1-1ubuntu2) ...

Again, I found that no re-boot was required on the Mint machine. Once again, it might be different for you.

Either way, we should now be able to prove that the Pi will run happily without all of those wires plugged into it…

Connecting via rdesktop

…but I’d recommend keeping it all plugged in until you’ve tested first.

To connect from Mint, open a Terminal window and type :

rdesktop raspberrypi -u username_on_pi -p password -g 90%

…where username_on_pi is the name of the user on the Pi and password is the password for that user.

The -g switch tells rdesktop what size to create the window ( in our case, 90% of the screen size on our main machine).

You should now see (after a short interlude whilst the desktop is rendered)….

rdp_on_pi

Alternatively, if you want to run in a full screen, you can use the -f switch for rdesktop instead – i.e. :

rdesktop raspberrypi -u pi -p raspberry -f

Once you’re in full screen mode, you can toggle between full screen and window mode at any point by pressing CTRL+ALT+ENTER.
It’s worth noting that if you do this and then minimize the rdesktop window, when you maximize it again the desktop can appear to be blank and/or the re-drawing might not be complete. I’m not sure why this is, or what the fix is.

One other point to note : it’s more secure to run rdesktop without specifying the password. That way, you’ll be prompted for it when you connect.
So, if you run…

rdesktop raspberrypi -u pi -g 90%

…you will be presented with

pi_login

A Python Program to automate connection

To save us typing in the rdesktop command each time we want to connect to the Pi, we could write a simple bash script to automate our rdesktop command. However, in the circumstances, Python seems a more appropriate medium…

#!/usr/bin/env python

import sys, subprocess

def main():
    # Make sure that we've got a single argument passed in
    # NOTE - the first argument is the name of this program.
    if len(sys.argv) != 2 :
        sys.exit('Usage : %s screen_size as percentage between 25 and 100' % sys.argv[0])

    if is_valid_screen_size(sys.argv[1]) :
        # Pass in the screen size to the function to build the rdesktop command
        command = build_rdp_command(sys.argv[1])
    else :
        sys.exit('Usage : %s screen_size as percentage between 25 and 100' % sys.argv[0])

    try :
        # Run the command...
        status = subprocess.call(command, shell=True)
    except OSError as e :
        print >> sys.stderr, 'Error : ', e
    sys.exit( status)

def is_valid_screen_size( input_size) :
    # Validate the screen size.
    # Return True if it's a valid value

    # Make sure requested size is an integer
    try :
        int( input_size)
    except ValueError :
        return False
    # Now make sure it's not ridiculously small...or over 100
    if int( input_size) < 25 or int(input_size) > 100 :
        return False
    else :
        return True

def build_rdp_command(screen_size):
    # Return the appropriate rdesktop command

    # Initialize "constants" to use in the rdesktop command.
    PI_NAME = 'raspberrypi'
    PI_USER = 'pi'

    command_str = "rdesktop " + PI_NAME + " -u " + PI_USER
    # screen_size arrives as a string from the command line, so compare as an integer
    if int(screen_size) == 100 :
        # Full screen
        command_str = command_str + " -f "
    else :
        # Specify the percentage
        command_str = command_str + " -g " + screen_size +"%"
    return command_str    

if __name__ == "__main__" :
    main()

The program is saved as rdp_to_pi.py.
At the time of writing, the default Python version on Mint (17.2) is 2.7.6 ( although you can invoke a Python 3.4.3 interpreter by typing python3 at the prompt).
Therefore, this program is written for Python v2.

The first line of the program tells Linux to use the Python interpreter when this program is executed.

The program then does the following :

  • validates that it’s been passed a sensible argument value on the command line for the screen size percentage
  • builds the appropriate rdesktop command line using “constant” values for the machine name of the Pi and the name of the user to connect as
  • executes the command

To run the program, you first need to set the appropriate file permissions…

chmod u+x rdp_to_pi.py

…and then run it using a single “.” followed by a “/” ( not the usual “. ./” to run a bash script)…

./rdp_to_pi.py 90

Keyboard Mapping Issues

There are some other issues as well, most notably, the Pi seems to have forgotten where it is and has somehow adopted settings for a US keyboard.
If you want to test this and happen to have a UK keyboard, try opening a text editor and typing any of the following :

  • ” – will print @
  • @ – will print “
  • £ – will print #
  • | – will print ???
  • ~ – will print |

I’ve tried various things to fix this, but to no avail.
Despite the fact that both the Pi and the remote machine are configured with a UK keyboard, rdesktop seems to ignore this and insist on using US keyboard mappings.
I suspect that this is something to do with an X-Server configuration setting somewhere but I just can’t figure out where.

You may have more luck using this link as a starting point.

If anyone does have a solution to this, please let me know.

For now though, I’ve achieved my project goals :

  • get the telly back in place before there’s trouble
  • allow me to play with my Raspberry Pi whilst the festive extravaganza plays out on said TV

That should keep me quiet for a bit.


Filed under: Linux, Mint Tagged: python, python main function, python subprocess module, python sys module, python ValueError exception, Raspberry Pi, SSH, xrdp

Nativity

Greg Pavlik - Thu, 2015-12-24 09:24
Shadows flicker against the wall
within the cave it is perpetually night
(I find my vision gets dimmer with age
- when we are alleged to see more sharply -
in the low light of a single candle flame
it is getting much harder to read
year by year)
there is a form I barely am able to perceive.
I wonder if it is better here than the open air
where my eyes would surely be closed against the sun
where all forms find their origin in the one.

2015

LEVERAGE GEOGRAPHICALLY-DISTRIBUTED DEVELOPMENT

Kubilay Çilkara - Wed, 2015-12-23 14:44
As technology advances at warp speed, there are certain tech methodologies that will go by the wayside to make room for more advanced and efficient versions; and how development projects are managed is a case in point.  Companies in every industrialized nation of the world are embracing Geographically-Distributed Development, or GDD, which has embedded itself as an impressive and proof-positive IT strategy model.  Outdated procedures that have been utilized for the administration of virtually any type of development project have been limited to one or several building sites.  That was then; this is now.

Let’s take a look at the advantages that GDD offers:

  • decreased labor expenses
  • increased availability of skilled resources
  • reduced time-to-market, with round-the-clock flexible staffing

The beauty of GDD is that it allows enterprises, regardless of location, to respond to changes in business circumstances as they happen.  Any feedback can be presented, instantaneously, within a global framework.

In order for GDD to achieve its vast benefit potential, major barriers that might impede an enterprise’s success must be reduced to a minimum or entirely eliminated within the GDD strategy.  It is crucial that increased expenses associated with communication and coordination logistics, which occur on an international level within a globally-distributed market, be uncovered and targeted.  If communication and coordination-specific expenses are allowed to flourish, the very benefits of GDD can be sorely compromised.  Various challenges must be reckoned with: 1) cultural variances, 2) language differences and 3) inaccessibility to time-sensitive information.  These can all jeopardize the progress of distributed projects.

GDD is oblivious to location. It is an IT strategy model without borders.  This allows development team members to work collectively and cohesively within a city, across state lines or beyond continents.  A site or sites might be engaged with one particular software-development venture while one or more outsourcing companies work, simultaneously, towards the projected goal.  Outsourcing companies would contribute their efforts and expertise, like a fine-tuned engine, within the software’s project-development cycle.  Optimized efficiency and cost savings, via structured and coordinated local or global team-work, become refreshingly realized.

With GDD, thorough and clear communication is established between all team members and project coordination.  Business demands incorporate global-sourcing, service-oriented architecture, new compliance regulations, new development methodologies, reduced release cycles and broadened application lifetimes.  Because of this, highly-effective, unencumbered communication is mission-critical; and a necessity arises that begs for a solution that has the power to:
  • Provide management visibility of all change activities among distributed development teams
  • Integrate and automate current change processes and best practices within the enterprise
  • Organize the distribution of dependent change components among platforms and teams
  • Protect intellectual property
  • Track and authenticate Service Level Agreements (SLAs)

Engaging an organization to efficiently manage and significantly optimize communication among all stakeholders in the change process is a priceless component of an Application Lifecycle Management (ALM) solution.  Multiple GDD locales present inherent challenges:  language and cultural divides, varying software-development methods, change-management protocol, security employment, adherence to industry mandates and client business requisites.  The good news is the ALM solution tackles these hurdles with ease!

Provide Management Visibility of all Change Activities among Distributed Development Teams

When a centralized repository allows for the viewing of all the activities, communications and artifacts that could be impacted by the change process, you have beauty in motion; and this is what ALM does.  Via ALM, users have the luxury of effortlessly viewing project endeavors by each developer, development group or project team--irrespective of location, platform and development setting.  This type of amenity becomes especially striking when one begins to compare this model-type with other distributed environments where work-in-progress is not visible across teams due to niche teams employing their own code repositories.

ALM provides the opportunity for development managers to not only track, but validate a project’s standing.  A project’s status can be verified which helps to guarantee the completion of tasks.  User-friendly dashboards will alert management if vital processes indicate signs of sluggishness or inefficiency.

ALM ensures that the overall development objectives will be met on a consistent basis.  This is accomplished through the seamless coordination between both remote and local development activities.  The ALM-accumulated data plays a crucial role with boosting project management, status tracking, traceability, and resource distribution.  Development procedures can be continually improved upon, thanks to generated reports that allow for process metrics to be collected and assessed.  Also, ALM allows regulatory and best-practices compliance to be effortlessly monitored and evaluated.  Compliance deals with structuring the applicable processes and creating the necessary reports.  ALM executes compliance strategy and offers visibility to the needed historical information, regardless of users’ geographic locations.

Integrate and Automate Current Change Processes and Best Practices within the Enterprise

In a perfect world, each and every facet of a company’s application development would be super easy; and with ALM it is.  By way of ALM, companies can establish defined, repeatable, measurable and traceable processes based on best practices, with absolute perfection.  User-friendly point-and-click set-up functions enable one to create a collection of authorized processes that automate task assignments and the movement of application artifacts.

ALM permits the streamlining of change management by means of its simplicity when dealing with changes and the necessary proceedings.  This in turn means changes can be analyzed and prioritized.  The approval management functions demand that official authorizations must be secured before any changes are permitted to go forth.  ALM’s automated logging functions totally un-complicate the tracking of software changes.  This is huge since changes can be tracked from the time a request is received up to the time when a solution is submitted to production.

Every member that is part of the global development team would be duly notified regarding any required assignments as well as any happenings that would have an impact on their efforts.

Organize the Distribution of Dependent Change Components among Teams and Platforms

It’s no secret that when there are changes within just one system of a cohesive enterprise, those changes can impact other systems.  ALM offers multi-platform support which ensures that modifications made on disparate platforms, by way of geographically-dispersed teams, can be navigated through the application lifecycle jointly.  A Bill of Materials Process, or BOMP, serves as an on-board feature that permits users to create file portfolios that incorporate characteristics from various platforms.  This means those portfolios can travel through the lifecycle as a unit.  Additionally, some ALM solutions absolutely ensure that the parts within the assemblies are positioned with the suitable platforms at each stage of the lifecycle.

Protect Intellectual Property

An ALM solution is the perfect component that allows for access and function control over all managed artifacts.  Managers are in a position to easily determine and authorize any access to vital intellectual property due to ALM functioning on a role-based control system.  The role-based structure means administrative operations are streamlined which permits any system administrator to bypass assigning individual rights to a single user.  Additionally, a role-based system delivers a clearly-defined synopsis of access rights between groups of individuals.

Track and Authenticate Service Level Agreements

The overall project plan, absolutely, must remain on schedule while administering accountability for established deliveries; and this can be easily realized through ALM’s ability to track and authenticate tasks and processes.  The ALM solution caters to satisfying Service Level Agreement (SLA) requirements within an outsourcing contract.  As a result, project management is enhanced by ensuring performance of specific tasks.  Optimizing the user’s ability to track emphasized achievements is made possible due to the consistency between tasks that have been assigned to developers and tasks that are part of the project plan.   Invaluable ALM-generated reports will track response and resolution times; and service-level workflows automate service processes and offer flexibility.  This translates into an acceleration of processes to the respective resources to meet project deadlines.  The ability to track performance against service-level agreements is made possible due to the availability of reports and dashboards that are at one’s fingertips.

Enhance Your Geographically-Distributed Development

As stated, ALM is beauty in motion; and aside from promoting perfected levels of communication and coordination, it utilizes management strategies designed to plow through any obstructions that have the potential to compromise success.  ALM’s centralized repository is purposed to present multiple ideas, designs, dialogue, requirements, tasks and much more to team members who would require or desire instant access to data.  Development procedures and tasks can be precisely and efficiently automated and managed due to ALM’s cohesive workflow capabilities.  Vital intellectual property, all of it, is embedded and safeguarded in a central repository.  Due to this caliber of reinforced protection, loss and unauthorized access are ruled out.  When remote software development is in sync with local development, project management becomes seamless, highly-coordinated and error-free.  Integration of the monitoring, tracking and auditing of reports and dashboards means management can successfully satisfy project deadlines.  It would behoove any enterprise that wishes to reap the rewards of GDD to fully embrace ALM as its solution; it is truly mission-critical.

Application Lifecycle Management Solutions

Application lifecycle management programs are able to easily deliver a plethora of enterprise software modification and configuration management facilities.  ALM solutions have the ability to support the needs of multiple groups of geographically-distributed developers.  Business process automation services, designated to automate and enforce on-going service delivery processes throughout enterprise organizations, are a vital component of ALM solutions.  Within those groups of geographically-distributed developers, the product continues to reveal the magnitude of its talents since it: targets permission-based assignment and enforcement services; caters to role-based interfaces which allow support for developers, software engineers, project managers, IT specialists, etc.; delivers enterprise application inventory-management services; oversees and coordinates large software inventories and configurations; guards user access; manages differing versions of application code; supports the existence of concurrent development projects; and coordinates a diversity of release management facilities.

Mike Miranda is a writer covering topics ranging from legacy modernization to application lifecycle management, data management, big data and more.



Categories: DBA Blogs

Update multiple columns

Bar Solutions - Wed, 2015-12-23 05:03


This is something I knew somewhere in the back of my head, but had forgotten about until now.

When you want to update multiple columns in a single SQL statement based on a sub query you can of course duplicate this query for every column you want to update. But this violates the SPOD (Single Point Of Definition) ‘rule’.

As an example I have the following requirement:

Add two columns to the EMP table containing the name and job of the manager and fill these columns with the right values.

First of all I need to add the columns to the table, which is easy:

alter table emp add (mgr_name varchar2(10)
                    ,mgr_job varchar2(9)
                    )

Then comes the ‘tricky’ part. I can of course fill up these columns in separate statements, like this:

update emp e
   set e.mgr_name = (select m.ename
                       from emp m
                      where 1=1
                        and m.empno = e.mgr)
 where 1=1
   and e.mgr is not null
/
update emp e
   set e.mgr_job = (select m.job
                       from emp m
                      where 1=1
                        and m.empno = e.mgr)
 where 1=1
   and e.mgr is not null
/

But this implies two roundtrips from the client to the database. These statements can be combined into a single one:

update emp e
   set e.mgr_name = (select m.ename
                       from emp m
                      where 1=1
                        and m.empno = e.mgr)
     , e.mgr_job = (select m.job
                       from emp m
                      where 1=1
                        and m.empno = e.mgr)
 where 1=1
   and e.mgr is not null
/

But there is an easier, and more elegant way to do this:

update emp e
   set (e.mgr_name, e.mgr_job) = (select m.ename, m.job
                                   from emp m
                                  where 1=1
                                    and m.empno = e.mgr)
 where 1=1
   and e.mgr is not null
/
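
Either way, a quick check (using the standard SCOTT demo data the example is based on) confirms that both columns have been filled from a single sub query:

select e.empno, e.ename, e.mgr, e.mgr_name, e.mgr_job
  from emp e
 where e.mgr is not null
/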

This is of course a pretty simple example, but you can imagine what would happen if you want to update more columns, create a more complex sub query or, worse, make modifications to the predicates. You are more than likely going to forget one or more of the sub queries, giving you an undesired result.

update December 25th 2015: Find a demonstration script on LiveSQL

Calling all Apps DBAs doing 11i to R12.x upgrades

Vikram Das - Tue, 2015-12-22 09:02
At this time of the year during holidays, the Apps DBA community is busy doing upgrades as longer downtimes are possible.  In case you are facing any issues, please feel free to write to me at my email: oracleappstechnology@gmail.com .  I will be glad to hear from you and help you.
Categories: APPS Blogs

Remove Blanks from Formated Date Column

Denes Kubicek - Tue, 2015-12-22 02:31
This small example shows how powerful jQuery is and how easy it is to use in APEX. In this case, I needed to correct the Oracle formatting, which adds blanks to the name of the day before the comma. Oracle probably RPADs the day names to the longest one (Wednesday). Those show up in APEX as a single blank - see the left column. The only good way to work around that annoying side effect is to use jQuery and basically one line of code.


Categories: Development

Simplified Analytics: Putting Power in the Hands of End Users

PeopleSoft Technology Blog - Mon, 2015-12-21 16:19

PeopleTools 8.55 is an important release with lots of exciting features.  We recently held a Partner Summit, where PeopleSoft Development and Strategy met with many of our implementation partners to describe and discuss the new capabilities of our products and the value they provide to customers.  One feature that garnered quite a bit of interest is our new Simplified Analytics feature.

Simplified Analytics is important because it puts reporting and analytic power in the hands of end users.  It enables users to generate ad hoc analytics in the context of transactions.  This can be tremendously valuable for a myriad of business processes.  For example, when a manager wants to give an employee a raise, they may want to analyze how that employee is currently compensated compared to peers in the same role and performance class.  Or suppose a person is working with suppliers and they want to determine which suppliers to use.  They may want to analyze suppliers’ performance, price, etc.  Simplified analytics can be valuable in so many transactions that the list is almost endless.  And again, end users can employ this feature without having to turn to technical experts to generate reports for them.

Let’s look at how this would work from the end user perspective for one of the use cases I mentioned above:  I am a manager who is planning to give one of my employees a raise, and I need additional information to determine the appropriate amount.

As a manager, I use Related Actions to initiate a Salary Change for Rosanna, one of my subordinates.

This takes me to the transaction where I can provide the change, but I’m not sure how much to give Rosanna, so I create a new analytic in the related information section on the right side of the page.

This initiates the simple process for generating a Simplified Analytic.  Note that the analytic is contextually bound, so the results are generated for me as the user of the transaction in this context.  

It’s quite easy for me to create the analytic I want for my purpose with this guided process.  First, I’ll give my new analytic a name and determine how I want the data visualized.

In this case I’ve chosen a chart to render the output.  Next I’ll choose the data that I want.  Notice that the Manager name is already determined.  (Betty is the person I logged in as.) 

I can select or deselect fields until I have what I want.  

When I save the Analytic, it is presented to me using the visualization I chose earlier; in this case, a chart.

Note that I have row and column data, and other data is represented as facets.  Facets can be selected to filter the results, or I can drag data items to or from the Row and Column areas to slice and dice the analytic any way I wish.  It’s flexible and powerful.

In this case, I’ve dragged the Department field to the Row area, which changes the Analytic and gives me a new perspective on the information.

Now when I save the analytic, it appears in the related information area so I can run this any time I’m in this transaction.  The data is fresh and secure. 

If I’ve created an analytic that I want to use frequently and have at hand regardless of context, I can easily create a tile for it and add it to any of my home pages. 

Now the analytic appears as a tile, and I can run it easily any time from my home page. 

As you can see, creating simplified analytics is easy and fast for end users.  It is wonderfully useful, because users can get the information they need when they need it, without having to seek assistance from IT or reporting experts.

Our applications teams will be delivering some simplified analytics with our products, so you will see some when you simply take upcoming PeopleSoft Update Images.  However, simplified analytics are easy to create, so you can provide them to your users wherever you see a need.  

Look for more posts on this topic.  We’ll also cover how to publish reports to make them available to other users, and we’ll show you how to set up simplified analytics to make them available for your end users. 

Virtual Technology Summit Presentation PDF's Now Available

OTN TechBlog - Mon, 2015-12-21 12:24

It's almost time for the OTN team to take a break for the Holidays, but not just yet.  We were busy elves last week and posted all the presentations for the videos in each VTS Replay Group as an additional resource.  Check them out, along with this week's highlighted sessions, below!

Automate Your Oracle Solaris 11 and Linux Deployments with Puppet - By Bjoern Rost 

Puppet is a popular open source configuration management tool that is used by many organizations to automate the setup and configuration of servers and virtual machines. Solaris 11.2 includes native support for Puppet and extends the resources that can be managed to Solaris-specific things like zones and ZFS. This presentation gives system administrators who are new to Puppet an introduction and a way to get started with automating the configuration of Oracle Linux and Oracle Solaris systems, discusses how Puppet integrates with version control and other projects, and looks at the Solaris-specific resource types.

In this video you will learn how to use Oracle Mobile Cloud Service to enable your enterprise applications and existing services for simpler, faster, more transparent and more secure mobile access. You'll learn how to build a robust Mobile Backend as a Service (MBaaS) using Oracle Mobile Suite, and how to customize that MBaaS using one of many thousands of available Node.js modules.

Using Oracle SQL Developer and Oracle REST Data Services to Enable the Oracle Cloud - By Jeff Smith, Principal Product Director, Oracle SQL Developer Product

This session presents the latest capabilities in Oracle's popular SQL Developer and ORDS tools for database application development and deployment in the cloud. See how to clone and migrate data between Oracle cloud and on-premise implementations and other powerful techniques for developing applications for the cloud, in the cloud.

Taking healthcare as an example, this session will demonstrate using a mobile phone and a smart watch in combination with a Java-based gateway, iBeacons and other sensors to monitor the activity of elderly people. With the help of an IoT cloud service this data can be analyzed to detect situations that might be critical (illness, a bone fracture, etc.). If such a case is detected, the cloud service can trigger enterprise applications. With this approach it might be possible to connect such a system to existing healthcare applications. This session will give you an idea of how you can combine existing technologies to do something useful and help elderly people in case of an emergency.

Brazil

Greg Pavlik - Sun, 2015-12-20 23:11
Blown away to get my purple belt in Brazilian Jiu Jitsu from 10th Planet black belt Alex Canders.


ORAMEX Tech Day 2015 Guadalajara: Global Cloud UX Goes Local

Usable Apps - Sun, 2015-12-20 22:59

You asked. We came. We were already there.

In November, the Oracle México Development Center (MDC) in Guadalajara hosted the ORAMEX Tech Day 2015 event. This great location gave the Grupo de Usuarios Oracle de México (the Oracle User Group in México) (@oramexico) access to the very strong technical community in the region, and attendees from Guadalajara and surrounding cities such as Colima, León, and Morelia heard MDC General Manager Erik Peterson kick off the proceedings with a timely keynote on the important role that MDC (now celebrating its 5th year) plays in delivering Oracle products and services.

Erik Peterson delivers the MDC keynote at the ORAMEX Tech Day.

Naturally, Tech Day was also a perfect opportunity for the Oracle Applications User Experience (UX) team to support our ORAMEX friends with the latest and greatest UX outreach.

Oracle Applications UX staffers at ORAMEX Tech Day (Left to right): Sarahi Mireles, Tim Dubois, Rafael (Rafa) Belloni, and Noel Portugal (image courtesy of ORAMEX) 

UX team members from the U.S., Senior Manager UX Product and Program Management Tim Dubois (@timdubis) and Senior Manager Emerging Tech Development Noel Portugal (@noelportugal), joined local staffers Senior UX Design Developer Rafa Belloni (@rafabelloni) and UX Developer Sarahi Mireles (@sarahimireles) to demo the latest UX technical resources for the community, to bring everyone up to speed on the latest UX cloud strategy and messages, and to take the pulse of the local market for our cloud UX and innovation enablement and outreach.

Tim and Sarahi demoed the latest from the Release 10 Simplified UI PaaS4SaaS Rapid Development Kit (RDK) and Rafa and Noel showed off cool Internet of Things proof of concept innovations; all seamlessly part of the same Oracle UX cloud strategy.

Sarahi leading and winning with the RDK 

Tim and Sarahi provided a dual-language (English and Spanish), real-time exploration of what the RDK is, why you need it, what it contains, and how to get started.

The long view: Tim explains that the RDK is part of an overall enablement strategy for the Oracle Cloud UX: Simple to use, simple to build, simple to sell solutions. 

You can get started by grabbing the technology-neutral Simplified UI UX Design Patterns for Release 10 eBook. It's free. And watch out for updates to the RDK on the "Build a Simplified UI" page on the Usable Apps website. Bookmark it now!

Your FREE Simplified UI UX Design Patterns eBook for PaaS and SaaS is now available

ORAMEX Tech Day 2015 was a great success, representing an opportunity for OAUX to collaborate with, and enable, a local technical community and Oracle User Group, to demonstrate, in practical ways, our commitment to bringing that must-have cloud UX message and resources to partners and customers worldwide, and of course, to show examples of the awesome role the MDC UX team plays within Oracle.

Where will we go next? I wonder…

What's next? Stay tuned to the Usable Apps website for event details and how you can participate in our outreach and follow us on Twitter at @usableapps for up-to-the-minute happenings!

Special thanks goes to Plinio Arbizu (@parbizu) and Rolando Carrasco (@borland_c) and to the rest of the ORAMEX team for inviting us and for organizing such a great event.

WS-Policy Support for IWS

Anthony Shorten - Sun, 2015-12-20 20:43

One of the major advantages of Inbound Web Services (IWS) is the support for security based around the WS-Policy standard. By supporting WS-Policy it is now possible to support a wide range of transport and message security standards within each individual web service, and to attach multiple policies. This allows maximum flexibility for interfacing to Oracle Utilities products using the WS-Policy support provided by Oracle WebLogic and IBM WebSphere (for selected products). It means the web service client calling our Inbound Web Services must comply with at least one of the WS-Policy directives attached to the service.

The support for WS-Policy is implemented in a number of areas:

  • It is possible to attach custom WS-Policy compliant policies directly to the Inbound Web Service as an annotation. The Oracle Utilities product supplies an optional default annotation to implement backward compatibility with XML Application Integration (XAI), which allows customers using XAI to move to Inbound Web Services. Oracle recommends not attaching policies within the Inbound Web Services definition, as that can reduce the flexibility of your interfaces.
  • It is possible to attach policies within the J2EE Web Application service AFTER deployment to individual web services. This information is retained across deployments using a deployment file that is generated by the container at deployment time. In Oracle WebLogic this is contained in the deployment plan generated by the deployment activity, which automatically reapplies the policies during each redeployment. For Oracle WebLogic, a large number of WS-Policy files are supported for message and transport security.
  • For Oracle WebLogic EE customers, it is also possible to use Oracle Web Services Manager to attach additional WS-Policy security policies supported by that product. Again, this is done at deployment time. The advantage of Oracle Web Services Manager is that it reuses all of the policies supplied with Oracle WebLogic, adds advanced policies, and also adds access control rules that you can attach to an Inbound Web Service to control when, where, and by whom it is used.

The bottom line is that you can use any policy (supplied with the J2EE container or custom) that is supported by the J2EE container. You cannot introduce a policy that is not compatible with the container itself as we delegate security to the container.

The only thing we do not support at present is applying a WS-Policy to part of a message or at the operation level. The WS-Policy is applied across the whole web service.

IT Tage 2015 - "Analysing and troubleshooting Parallel Execution" presentation material

Randolf Geist - Sun, 2015-12-20 12:24
Thanks to all attending my presentation "Analysing and troubleshooting Parallel Execution" at the IT Tage conference 2015 in Frankfurt, Germany. You can download the presentation material here in PowerPoint or PDF format, as well as check the Slideshare upload.

Note that the PowerPoint format adds value in the sense that many of the slides come with additional explanations in the notes section.

If you are interested in more details, I recommend visiting this post, which links to many other posts describing the different new features in greater depth.

Join OTN team @ Oracle CloudWorld Developer January 19th..in NYC!

OTN TechBlog - Fri, 2015-12-18 12:10

The OTN team is excited to see you at the Oracle CloudWorld Developer event in New York, January 19, 2016. Join us to learn how you can leverage Oracle Cloud Platform technologies for the complete development lifecycle! Explore Oracle PaaS, which enables you to deliver better-quality code with the agility you need to meet today's IT challenges.

  

Hear from top technology experts about how the cloud will transform your development organization through sessions spanning four tracks: Database, DevOps, Enterprise Integration, and Mobile. Attend this free event to enjoy networking opportunities with your peers, product demos, and break-out sessions, and hear from Chris Tonas, Vice President of Mobility and Development Tools at Oracle, on best practices for development in the cloud.

Who should attend?
  • Developers 
  • Architects 
  • Database Administrators 
  • System Administrators 
  • Project Managers 
  • Students

11i pre-upgrade data fix script ap_wrg_11i_chrg_alloc_fix.sql runs very slow

Vikram Das - Wed, 2015-12-16 20:51
We are currently upgrading one of our ERP instances from 11.5.10.2 to R12.2.5.  One of the pre-upgrade steps is to execute the data fix script ap_wrg_11i_chrg_alloc_fix.sql.  However, this script has been running very, very slowly. After 4 weeks of monitoring, logging SRs with Oracle, escalating, etc., we started a group chat today with our internal experts.  We had Ali, Germaine, Aditya, Mukhtiar, Martha Gomez and Zoltan.  I also invited our top-notch EBS Techstack expert John Felix. After doing an explain plan on the SQL and looking at the updates being done by the query, I predicted that it would take 65 days to complete.

John pointed out that the query was using the index AP_INVOICE_DISTRIBUTIONS_N4, which had a very high cost.  We used a SQL profile that replaced AP_INVOICE_DISTRIBUTIONS_N4 with AP_INVOICE_DISTRIBUTIONS_U1.  The query started running faster and my new prediction was that it would complete in 5.45 days.
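The post does not show how the profile was created; one common approach (a sketch only - the SQL_ID substitution variable, query block name, alias and profile name below are hypothetical placeholders that must be taken from the real execution plan) is DBMS_SQLTUNE.IMPORT_SQL_PROFILE with an INDEX hint that pins AP_INVOICE_DISTRIBUTIONS_U1:

declare
  -- Hypothetical hint text: the query block name and alias must match the real plan outline.
  l_hints sys.sqlprof_attr := sys.sqlprof_attr(
            'INDEX(@"UPD$1" "E"@"UPD$1" "AP_INVOICE_DISTRIBUTIONS_U1")');
  l_sql   clob;
begin
  -- Pull the exact text of the slow statement from the shared pool.
  select sql_fulltext
    into l_sql
    from v$sqlarea
   where sql_id = '&slow_sql_id';   -- placeholder SQL_ID

  dbms_sqltune.import_sql_profile(
    sql_text    => l_sql,
    profile     => l_hints,
    name        => 'AP_WRG_FIX_PROFILE',   -- arbitrary profile name
    force_match => true);
end;
/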

John mentioned that now another select statement was using the same index AP_INVOICE_DISTRIBUTIONS_N4 that had a very high cost.

After discussing among ourselves, we decided to drop the index, run the script and re-create the index. Aditya saved the definition of the index and dropped it.
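The capture step itself is not shown here; it boils down to a DBMS_METADATA call like the one below (a sketch - the SET commands are just one way to spool the full DDL without truncation):

set long 1000000 pagesize 0 linesize 200
select dbms_metadata.get_ddl('INDEX','AP_INVOICE_DISTRIBUTIONS_N4','AP') from dual;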

DBMS_METADATA.GET_DDL('INDEX','AP_INVOICE_DISTRIBUTIONS_N4','AP')
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

  CREATE INDEX "AP"."AP_INVOICE_DISTRIBUTIONS_N4" ON "AP"."AP_INVOICE_DISTRIBUTIONS_ALL" ("ACCOUNTING_DATE")
  PCTFREE 10 INITRANS 11 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "APPS_TS_TX_IDX"

1 row selected.

SQL> drop index AP.AP_INVOICE_DISTRIBUTIONS_N4;

Index dropped.

The updates started happening blazing fast.  The whole thing got done in 39 minutes and we saw the much awaited:

SQL> set time on
16:34:16 SQL> @ap_wrg_11i_chrg_alloc_fix.sql
Enter value for resp_name: Payables Manager
Enter value for usr_name: 123456
-------------------------------------------------------------------------------
/erp11i/applcsf/temp/9570496-fix-16:34:40.html is the log file created
-------------------------------------------------------------------------------

PL/SQL procedure successfully completed.

17:13:36 SQL>
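With the fix script done, the last step of the workaround is to re-create the dropped index from the DDL saved earlier. A sketch is below; it mirrors the captured definition (the STORAGE clause is omitted for brevity), while the PARALLEL build and NOPARALLEL reset are assumptions to speed up the rebuild, not part of the saved DDL:

CREATE INDEX "AP"."AP_INVOICE_DISTRIBUTIONS_N4" ON "AP"."AP_INVOICE_DISTRIBUTIONS_ALL" ("ACCOUNTING_DATE")
  TABLESPACE "APPS_TS_TX_IDX"
  COMPUTE STATISTICS
  PARALLEL 8;                                                -- assumption: parallel build to shorten the rebuild
ALTER INDEX "AP"."AP_INVOICE_DISTRIBUTIONS_N4" NOPARALLEL;   -- assumption: reset the degree after the build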

From 65 days to 5.45 days to 39 minutes.  Remarkable.  Thank you, John, for your correct diagnosis and solution.
Categories: APPS Blogs

Yes, there is MORE...OTN VTS Replay Content to binge on!

OTN TechBlog - Tue, 2015-12-15 14:50

As a reminder, the Virtual Technology Summit content consists of highly technical demos, presentations and hands-on labs (HOL) prepared by both internal and external Oracle product experts.  Below are the latest highlighted sessions from the replay library groups on community.oracle.com.

Now onto the bingeing.... 

Pi on Wheels, Make Your Own Robot 

By Michael Hoffer, computer scientist, Goethe-Center for Scientific Computing in Frankfurt

The Pi on Wheels is an affordable open source DIY robot that is ideal for learning Java-related technologies in the context of the Internet of Things. In this session we will talk about how 3D printing works and how it can be utilized to build robots. The most fascinating aspect of 3D printing is that it is astonishingly easy to customize the robot. It allows you to build something completely new and different. We provide a Java based IDE that allows you to control and program the robot. In addition to that it can be used to programmatically design 3D geometries.


By Alex Barclay, Principal Product Manager, Solaris and Systems Security, Oracle

Learn about the security threats to your public and private cloud and gain insight into how the Oracle Security Architecture helps reduce risk. This webcast will provide detailed information on the top 20 cloud security threats and how different parts of the Oracle systems stack help eliminate each threat.

By Christian Shay, Principal Product Director, Oracle

This session explores .NET coding and tuning best practices to achieve faster data access performance. It presents techniques and trade-offs for optimizing connection pooling, caching, data fetching and updating, statement batching, and Oracle datatype usage. We will also explore using Oracle Performance Analyzer from Visual Studio to tune a .NET application's use of the Oracle Database end to end.

By Kent Graziano, Oracle ACE Director and Principal of Data Warrior

Oracle SQL Developer Data Modeler (SDDM) has been around for a few years now and is up to version 4.1. It really is an industrial strength data modeling tool that can be used for any data modeling task you need to tackle. This presentation will demonstrate at least five features, tips, and tricks that I rely on to make me more efficient (and agile) in developing my models.

Get SDDM installed on your device and bring it to the session so you can follow along.

By Shukie Ganguly

WebLogic Server 12.1.3 provides support for innovative APIs and productive tools for application development, including APIs for JAX-RS 2.0, JSON Processing (JSR 353), WebSocket (JSR 356), and JPA 2.1. This session from the July 2015 OTN Virtual Technology Summit provides an overview of each of these APIs, and then demonstrates how you can use these capabilities to simplify the development of server applications accessed by "rich" clients using lightweight web-based protocols such as REST and WebSocket.



More On Wearable Tech

Floyd Teter - Mon, 2015-12-14 14:04
I've been going through an amazing experience over the past month plus...purchased and began wearing an Apple iWatch.  Never really thought I would do so...kind of did it on the spur of the moment.  Plus a little push from my team lead at Oracle, who wears one and loves it.

Even after a month of wearing the iWatch, I can't really point at one particular feature that makes it worthwhile.  It's really more a collection of little things that add up to big value.

One example:  I have a reputation far and wide for being late for meetings (could be a Freudian thing, considering how much I detest meetings).  5 minutes before a meeting begins, my iWatch starts to vibrate on my wrist like a nano-jackhammer.  My punctuality for meetings is much improved now, much to the joy of my managers, peers and customers.

Another example:  I can take a phone call wherever I am, distraction free.  That's right, calling Dick Tracy.  Driving, taking a walk, whatever...we can talk now.

Notifications are wonderfully designed...much better than the iPhone or the iPad or the Mac or whatever.  I've actually turned on Notifications again, because they notify without being intrusive or distracting.

A few other dandies as well: getting through the security line with the iWatch is a bit quicker than with the iPhone, and the much-improved implementation of Siri makes voice dictation for texting something I can now use reliably.

So it's improved my productivity... not so much by hitting a home run in any particular area, but through lots of incremental little improvements.  Pretty cool wearable tech.
