
Feed aggregator

KeePass 2.30

Tim Hall - Mon, 2015-08-17 00:50

KeePass 2.30 was released about a week ago. This passed me by as I was distracted with the whole tour thing. :)

The downloads and changelog are in the usual places.

You can read how I use KeePass (Windows & Linux) and KeePassX2 (Mac) here.

Cheers

Tim…

KeePass 2.30 was first posted on August 17, 2015 at 7:50 am.

My Glamorous Life : Just so you don’t misunderstand…

Tim Hall - Sun, 2015-08-16 10:10

If you’ve subscribed to my YouTube channel, you will have noticed me posting some videos with the title “My Glamorous Life : …“.

I had several distinct plans for this trip:

  • Do the OTN tour itself. That is of course the real reason we are doing this!
  • Collect some video footage of the conferences so I could produce a little montage for each, just to help me remember it. I’ll do that when I get home and can sift through the footage to see if any is usable. Fingers crossed.
  • Film Machu Picchu. I kind-of failed there because I got ill, but I do have this little montage of the journey.
  • Document how boring, tedious and stressful the logistics of doing these tours really is.

I started on that last task with the footage of Charles de Gaulle airport and Buenos Aires airport, which I think pretty much summed up how dull travelling is. It’s not a criticism of the airports themselves. Just that most of your time on these tours is spent sitting in airports, planes and taxis, and sleeping in hotels. There is very little time actually in each country.

After those first two videos, I went a bit off the plan and started to film the hotel rooms, which are actually rather glamorous really, at least to me anyway. Added to that, we were rushing around airports so much I kept forgetting to film them. So this series that was meant to convince you how bad travelling can be, now looks more like two weeks in the life of a budget Kim Kardashian.

That makes me a little nervous, as I don’t want people to get the wrong message about what we are doing here. Just to clear things up, here are a few things to keep in mind:

  • We use Oracle approved hotels, typically with an Oracle discount, unless we can get it cheaper than the corporate rate. In most cases, this discount makes them a similar price to staying in a Travelodge in London. So despite how cool some of these places look, they are really rather cheap. If you booked them yourself they are crazily expensive, but with the corporate discount, they are a bargain.
  • Several people on the tour travel for work and have airline and hotel status, allowing them to sign mere mortals like me into executive lounges to get freebies, like breakfast and evening meals, which means I’m not having to pay for them myself. Without this, the tour would be even more expensive as we can’t claim those expenses back.
  • All sightseeing discussed is naturally at our own expense. We (Debra really) arranged flight times to maximise the time we spent in cities, so we could fit in the odd tour, but if we had gone for midday flights we would have seen pretty much nothing of any of the cities, as it was conference-fly-conference-fly pretty much all the way through.
  • Since this tour finished in Peru, Debra and I decided to tag on an extra couple of days to go and see Machu Picchu. All flights, transport, hotels etc. during this time came out of our own pockets.
  • During my trip home from Peru I spent the day in a hotel because of a long layover (14 hours) and upgraded my flight home to business class. These costs came out of my own pocket. They are not paid for by the ACE Program.

I guess I’m getting a bit paranoid now, but it does make me nervous to think I might be giving people the wrong impression about these tours. They are bloody hard work. Anything else you can fit in around them is a bonus, but certainly not the main focus.

Anyway, enough of my paranoid wittering. I’m off to eat more food in an airport executive lounge, which I paid for myself. :)

Cheers

Tim…

My Glamorous Life : Just so you don’t misunderstand… was first posted on August 16, 2015 at 5:10 pm.

Lima to Amsterdam

Tim Hall - Sun, 2015-08-16 08:59

I left the hotel a little late, but the airport was literally across the road, so it was no big deal. Having a business class ticket meant I checked in immediately (+1) and even had time to hit the lounge (+2). High class swanky time, and without needing to be signed in for once. :)

Boarding the flight was pretty straightforward. Once again, the business class ticket gives priority boarding (+3), without me having to tag along with Debra.

The KLM flight from Lima, Peru to Amsterdam, Netherlands was about 12 hours and 30 minutes, but it was a great flight. Upgrading to business class was a great move. I find it really hard to sleep in an upright position, so being able to lie flat is awesome (+4). I was in a seat with nobody either side of me, so I felt really isolated, which made sleeping even easier. These long flights are so much better if you can get some sleep!

Aside from sleeping, I watched:

  • Wild Card : Not too bad. I like quite a few of the films Jason Statham has been in. Even the bad ones. :)
  • Seventh Son : Typical fantasy stuff. Witches, dragons and slayers etc. Quite good, but Jeff Bridges’ voice annoyed me.
  • The Big Lebowski : Seeing Jeff Bridges in the previous film made me want to re-watch this film, where his voice does not annoy me. :)
  • The Amityville Horror : Slept through a lot of it. I’ve seen it before. It’s an OK remake I guess.
  • The Green Lantern : OK. I know it is a pretty poor film, but I just scanned through to find clips that looked cool. :)

The staff were really pleasant and helpful. All in all a very good experience and well worth the money in my opinion.

On arriving in Amsterdam, I headed over to the lounge to see if I could get in. I’m not sure how other lounges work, but KLM allow you in on arrival as well as departure (+5), which is awesome, because I’m stuck here for about 6 hours in total. If I had spent 14 hours in Lima airport and 12.5 hours in economy, I would be feeling totally psycho by now. As it is, I’m feeling pretty good. Hopefully, by the time I get home I will be tired enough to sleep and I can wake up and go to work as normal tomorrow…

So for me, that was +5 for the flight upgrade. Thanks KLM! I could get addicted to this, and very poor. :)

I’ll write a wrap-up post when I get home… :)

Cheers

Tim…

PS. I’ve also got some quick montage videos of the conferences to edit when I get home, provided the footage I’ve got works OK…

Lima to Amsterdam was first posted on August 16, 2015 at 3:59 pm.

Parallel Projection

Randolf Geist - Sun, 2015-08-16 08:09
A recent case at a client reminded me of something that isn't really new but not so well known - Oracle by default performs evaluation at the latest possible point in the execution plan.

So if you happen to have expressions in the projection of a simple SQL statement that runs parallel, it might be counter-intuitive that by default Oracle won't evaluate the projection in the Parallel Slaves but in the Query Coordinator - even if it were technically possible - because the latest possible point is the SELECT operation with ID = 0 of the plan, which is always performed by the Query Coordinator.

Of course, if you make use of expressions that can't be evaluated in parallel or aren't implemented for parallel evaluation, then there is no other choice than doing this in the Query Coordinator.

The specific case in question was a generic export functionality that allowed exporting report results to some CSV or Excel like format, and some of these reports had a lot of rows and complex - in that case CPU intensive - expressions in their projection clause.

When looking at the run time profile of such an export query it became obvious that although it was a (very simple) parallel plan, all of the time was spent in the Query Coordinator, effectively turning this at runtime into a serial execution.

This effect can be reproduced very easily:

create table t_1
compress
as
select /*+ use_nl(a b) */
rownum as id
, rpad('x', 100) as filler
from
(select /*+ cardinality(1e5) */ * from dual
connect by
level <= 1e5) a,
(select /*+ cardinality(20) */ * from dual connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_1', method_opt=>'for all columns size 1')

alter table t_1 parallel cache;

-- Run some CPU intensive expressions in the projection
-- of a simple parallel Full Table Scan
set echo on timing on time on

set autotrace traceonly statistics

set arraysize 500

select
regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as some_cpu_intensive_exp1
, regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') as some_cpu_intensive_exp2
, regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') as some_cpu_intensive_exp3
from t_1
;

-- The plan is clearly parallel
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2000K| 192M| 221 (1)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM)| :TQ10000 | 2000K| 192M| 221 (1)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 3 | PX BLOCK ITERATOR | | 2000K| 192M| 221 (1)| 00:00:01 | Q1,00 | PCWC | |
| 4 | TABLE ACCESS FULL| T_1 | 2000K| 192M| 221 (1)| 00:00:01 | Q1,00 | PCWP | |
--------------------------------------------------------------------------------------------------------------

-- But the runtime profile looks more serial
-- although the Parallel Slaves get used to run the Full Table Scan
-- All time spent in the operation ID = 0
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Pid | Operation | Name | Execs | A-Rows| ReadB | ReadReq | Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH | Parallel Execution Skew ASH | Activity Graph ASH | Top 5 Activity ASH |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | SELECT STATEMENT | | 5 | 2000K | | | 3 | 136 | 120 | #################### | 1:sqlplus.exe(120)[2000K],P008(0)[0],P009(0)[0],P00A(0)[0],P00B(0)[0] | ################################ | @@@@@@@@@@@@@@@@@@@ ( 98%) | ON CPU(120) |
| 1 | 0 | PX COORDINATOR | | 5 | 2000K | | | 119 | 1 | 1 | # | 1:sqlplus.exe(1)[2000K],P008(0)[0],P009(0)[0],P00A(0)[0],P00B(0)[0] | | ( .8%) | ON CPU(1) |
| 2 | 1 | PX SEND QC (RANDOM)| :TQ10000 | 4 | 2000K | | | 66 | 11 | 2 | ## | 2:P00B(1)[508K],P00A(1)[490K],P008(0)[505K],P009(0)[497K],sqlplus.exe(0)[0] | | (1.6%) | PX qref latch(2) |
| 3 | 2 | PX BLOCK ITERATOR | | 4 | 2000K | | | | | | | 0:P00B(0)[508K],P008(0)[505K],P009(0)[497K],P00A(0)[490K],sqlplus.exe(0)[0] | | | |
|* 4 | 3 | TABLE ACCESS FULL| T_1 | 52 | 2000K | 23M | 74 | | | | | 0:P00B(0)[508K],P008(0)[505K],P009(0)[497K],P00A(0)[490K],sqlplus.exe(0)[0] | | | |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
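As an aside, the runtime profiles shown above appear to come from an ASH-based profiling tool rather than a plain DBMS_XPLAN call. A hedged alternative, if you just want to check for the same symptom on your own system and are licensed for the Diagnostics and Tuning Packs, is Real-Time SQL Monitoring, which gives a comparable per-operation, per-PX-server activity breakdown:

-- Hypothetical sketch: report the monitored execution of the parallel query
-- via its SQL_ID - in the problematic case most of the activity shows up
-- against the SELECT STATEMENT / PX COORDINATOR lines
select dbms_sqltune.report_sql_monitor(
           sql_id       => '&sql_id',
           type         => 'TEXT',
           report_level => 'ALL'
       ) as report
from dual;
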
Fortunately there is a simple and straightforward way to make use of the Parallel Slaves for evaluation of projection expressions that can be evaluated in parallel - simply add a suitable NO_MERGE hint for the query block whose projection you want evaluated in the Parallel Slaves.

If you don't want side effects on the overall plan shape from not merging views, you could always wrap the original query in an outer SELECT and not merge the now inner query block. There seems to be a rule that the projection of a view always gets evaluated at the VIEW operator, and if we check the execution plan we can see that the VIEW operator is marked parallel:

set echo on timing on time on

set autotrace traceonly statistics

set arraysize 500

select /*+ no_merge(x) */ * from (
select
regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as some_cpu_intensive_exp1
, regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') as some_cpu_intensive_exp2
, regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') as some_cpu_intensive_exp3
from t_1
) x
;

-- View operator is marked parallel
-- This is where the projection clause of the VIEW will be evaluated
---------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2000K| 11G| 221 (1)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | 2000K| 11G| 221 (1)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 3 | VIEW | | 2000K| 11G| 221 (1)| 00:00:01 | Q1,00 | PCWP | |
| 4 | PX BLOCK ITERATOR | | 2000K| 192M| 221 (1)| 00:00:01 | Q1,00 | PCWC | |
| 5 | TABLE ACCESS FULL| T_1 | 2000K| 192M| 221 (1)| 00:00:01 | Q1,00 | PCWP | |
---------------------------------------------------------------------------------------------------------------

-- Runtime profile now shows effective usage of Parallel Slaves
-- for doing the CPU intensive work
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Pid | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH | Parallel Execution Skew ASH| Activity Graph ASH | Top 5 Activity ASH |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | SELECT STATEMENT | | 5 | 2000K | | | | | 0:sqlplus.exe(0)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] | | | |
| 1 | 0 | PX COORDINATOR | | 5 | 2000K | 17 | 63 | 10 | # ## # #### | 1:sqlplus.exe(10)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] | #### | * (5.6%) | resmgr:cpu quantum(10) |
| 2 | 1 | PX SEND QC (RANDOM) | :TQ10000 | 4 | 2000K | 5 | 61 | 10 | ## # ## ## ## # | 3:P002(5)[544K],P001(4)[487K],P000(1)[535K],P003(0)[434K],sqlplus.exe(0)[0] | # | (5.6%) | ON CPU(7),resmgr:cpu quantum(3) |
| 3 | 2 | VIEW | | 4 | 2000K | 2 | 82 | 69 | #################### | 4:P003(42)[434K],P001(35)[487K],P000(26)[535K],P002(22)[544K],sqlplus.exe(0)[0] | ############ | @@@@@@@@@@@@@@@@@@@ ( 70%) | ON CPU(125) |
| 4 | 3 | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P002(0)[544K],P000(0)[535K],P001(0)[487K],P003(0)[434K],sqlplus.exe(0)[0] | | | |
|* 5 | 4 | TABLE ACCESS FULL| T_1 | 52 | 2000K | 3 | 78 | 29 | ###### ####### # ### | 4:P000(11)[535K],P002(8)[544K],P001(8)[487K],P003(7)[434K],sqlplus.exe(0)[0] | ### | ***** ( 19%) | resmgr:cpu quantum(30),ON CPU(4) |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
At runtime the duration of the query now gets reduced significantly and we can see the Parallel Slaves getting used when the VIEW operator gets evaluated. Although the overall CPU time used is similar to the previous example, the duration of the query execution is less, since this CPU time is now spent in parallel in the slaves instead of in the Query Coordinator.

Summary
By default Oracle performs evaluation at the latest possible point of the execution plan. Sometimes you can improve runtime by actively influencing when the projection will be evaluated by preventing view merging and introducing a VIEW operator that will be used to evaluate the projection clause.

The optimizer so far doesn't seem to incorporate such possibilities in its evaluations of possible plan shapes, so this is something you need to do manually up to and including Oracle 12c (version 12.1.0.2 as of time of writing this).

Workplace Visibility

OracleApps Epicenter - Sat, 2015-08-15 17:51
What is visibility? The Merriam-Webster dictionary defines visibility as “clarity of vision or the quality or state of being known to the public.” But from an organizational perspective, workforce visibility means much more. Not only is workforce visibility the ability to see the breadth, depth and make-up of your organization, it also extends to your […]
Categories: APPS Blogs

EBS R12 (12.1) installation failed at Post Install Checks on Linux : libdb.so.2

Online Apps DBA - Sat, 2015-08-15 16:47

 

This post is from our Oracle Apps DBA Training where trainees install Oracle E-Business Suite (R12) on our servers remotely.

We use Oracle Linux 5.5 to install Oracle Apps, and on installing version 12.1.1 the installation completes, but the post-install check fails with “Post validation checks failed for HTTP Server”.

If you get an error like the above, then:

1. Check the OHS logs under OPMN (OHS is managed by OPMN)

In the log $LOG_HOME/ora/10.1.3/opmn/HTTP_Server~1.log (where LOG_HOME is $ORACLE_BASE/inst/apps/$SID_[hostname]/logs)

_______
/u01/oracle/PRD1211/inst/apps/PRD1211_iamdemo07/ora/10.1.3/Apache/Apache/bin/apachectl startssl: execing httpd
/u01/oracle/PRD1211/apps/tech_st/10.1.3/Apache/Apache/bin/httpd: error while loading shared libraries: libdb.so.2: cannot open shared object file: No such file or directory

______

 

We looked at Bala’s blog and:

1. Installed gdbm using yum:

yum install gdbm

2. Created a symbolic link for libdb.so.2 pointing to libgdbm.so.2:

ln -s /usr/lib/libgdbm.so.2 /usr/lib/libdb.so.2

3. Started OHS (set the environment and then use opmnctl to start OHS):

. $ORACLE_BASE/apps/apps_st/appl/APPS[SID]_[hostname.env]

cd $ADMIN_SCRIPTS_HOME

adopmnctl.sh startall

4. Click Retry on the post-install validation step


 

Related

  • My Oracle Support note 784162.1: OHS 10.1.3 Fails to Start on Linux – “apachectl startssl .. error loading shared libraries: libdb.so.2”

 

Register before 24th August for the next Oracle Apps DBA (R12) training batch and get 300 USD off.

The post EBS R12 (12.1) installation failed at Post Install Checks on Linux : libdb.so.2 appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

Cusco to Lima

Tim Hall - Sat, 2015-08-15 14:40

It was a 3:30 start, which after broken sleep and the events of the day before had me a little worried. We got a taxi to the airport in Cusco, which is the coldest airport I have ever experienced. After checking in, we headed to the departure gate, which was also freezing. The departure gate was interesting. The lady brought her own laptop, microphone and speaker to make the announcements. :)

We got on to the coldest plane I’ve ever been on. I don’t remember seeing people on a plane in coats and woolly hats before. :) After a quick flight we got to Lima airport, where I said goodbye to Debra, who is flying back to Northern Ireland, via Miami and London.

Having a 14 hour layover in Lima, I decided to check in to a hotel at the airport and sleep for a while. I also upgraded my flight home to a business class flight. The combination of the Machu Picchu trip, airport hotel and business class flight home have added up to quite a lot of money, but if I get home in a reasonable state, it will be worth it. :)

Cheers

Tim…

Cusco to Lima was first posted on August 15, 2015 at 9:40 pm.

Machu Picchu

Tim Hall - Sat, 2015-08-15 14:03

At about 04:00 we were queuing for the bus ride to the base of Machu Picchu. I started to feel a bit ill again. A little after 05:00 we were on the bus driving up to the base of Machu Picchu. It took about 30 mins to get there, most of which I spent trying not to puke.

I was very disappointed with the entrance to Machu Picchu. It felt like the entrance to a theme park. There was even Machu Picchu WiFi. We were there to witness wonder and spectacle, but seemed to be getting Disneyland. After being on the verge all morning, I puked and felt much better.

When we eventually got through the turnstiles, we started to walk up the hill. The stairs are quite steep, but nothing I would be worried about if it weren’t for the altitude. It makes it feel like hard work, so you have to take it slow. I used the turns to my advantage and mostly hid the fact I was repeatedly throwing up. After a few minutes we got to the area that overlooks the former residential area of Machu Picchu. If you’ve ever seen a picture of Machu Picchu, chances are you’ve seen the one taken from this spot. A few levels up and we were at the guardhouse, which gives an even better view. I puked there too. :) For me, this was all I wanted to do as far as Machu Picchu was concerned. I wanted to stand there and see this for myself. Everything else was a bonus. People visit several times and spend days there. This was really all I wanted. :)

After that we walked down towards the residential area. At that point, I really felt like I was done for. I told Debra to carry on and I walked down to the entrance to look for medical attention. I finally got to see the medic, and puke in her bin a few times. She injected me with a concoction of anti-nausea and electrolytes and left me to sleep for a while. By the time Debra returned I was feeling much better. Interestingly, it was nothing to do with the altitude. My blood O2 was fine. It was pretty similar to what happened to me in India. I’m starting to think it’s nausea caused by a type of migraine, induced by lack of sleep.

Anyway, after my rather brief visit to Machu Picchu, we were heading down the mountain in the bus. We got some food and chilled out before boarding the train to take us back to Cusco and the rest of our luggage.

The train journey back took about 3.5 hours. Lots of great sights, only marred by some intensely annoying children, who were complaining about being bored. Why do adults drag children along to this stuff? They don’t enjoy it and ruin it for everyone else!

Back at Cusco, it was a quick taxi ride to the hotel, where I puked and went to bed. We were hoping to have a brief look at Cusco, but it gets dark so early in Peru, there really wasn’t time.

I would like to say I got a good night’s sleep, but the hotel we stayed at was so noisy. I woke several times in the night because of fireworks, music and general noise in the town, which made the 03:30 start the next day even harder to cope with.

Now I know this all sounds really negative and bad, but it was worth it. Machu Picchu is one of those places I always hoped to see before I died. The fact it nearly killed me in the process is beside the point. :) I’m pretty sure if I hadn’t been so beaten up by two weeks of travelling and presenting it would have been a breeze. Part of me thinks it would be nice to go back and see it again, but part of me thinks I’ve done all I wanted to do. It is a very expensive experience, but worth it in my opinion.

I wasn’t really in a fit state to take photos, but fortunately Debra was and she let me have a copy of them, which you can see here. I especially like the ones of me looking dreadful. :)

Cheers

Tim…

Update 1: I think it is great how much work they are doing to preserve the Machu Picchu site, but the amount of rebuilding is a bit of a concern. At the moment, about 30% of the site has been rebuilt and the work is continuing. If too much is done, it ceases to be an ancient site and becomes a modern site in the style of an ancient one. They need to tread very carefully, or risk taking the final step and completing the transition to Disneyland!

Update 2: At no point did I see Pikachu! Apparently, Machu Picchu and Pikachu are not the same thing. Who’da thunk it?

Machu Picchu was first posted on August 15, 2015 at 9:03 pm.

Lima to Cusco to Machu Picchu

Tim Hall - Sat, 2015-08-15 13:21

With the tour over, Debra and I had arranged to spend a couple of days visiting Machu Picchu, before heading home.

We woke up early on Friday to get a flight from Lima to Cusco. We arrived at the airport in plenty of time, got to our gate and saw a list of delayed and cancelled flights to Cusco. The weather was too bad in Cusco for flights to take off and land. Luckily, after a while the weather apparently cleared in Cusco, allowing us to take a flight which arrived about 1 hour late.

We had arranged to drop our luggage off at a hotel in Cusco a day early, then continue on to Machu Picchu. The taxi ride to the hotel was interesting. Cusco has some very narrow streets that are barely wide enough to get a car through. It was quite hairy at times. We eventually got there, dropped our bags off and continued in the taxi to Ollantaytambo, which took about 90 mins. This allowed us to briefly see some of the sacred valley up close. During the drive I had a funny turn, which I put down to the high altitude. Debra said I looked green. By the time we got to Ollantaytambo and got some food I was feeling better.

While we were waiting for the train, I noticed the arrivals/departures screen on the wall had a session of TOAD running, doing some queries. By the time we had cameras ready, it was gone and the announcements screen was back. Debra went on the hunt and found a lady in an office that confirmed they (PeruRail) were using Oracle. :) We got on the Vistadome train, which has lots of extra windows, including in the roof, which is essential if you want a good view of the mountains around you. The train has a rather narrow gauge, which is a little disconcerting at first. The train takes you to Aguas Calientes, now known as Machu Picchu Pueblo, which is the best place to stay if you plan an early visit to Machu Picchu.

Just a quick word of warning, I did not like Machu Picchu Pueblo at all. It is a great setting in the mountains with the river running through, but it is one giant tourist centre, full to the brim with restaurants, markets and tourist shops. Many of the write-ups about Machu Picchu talk about it being ruined by tourists. This town proves the point! We bought our bus tickets for the next day, grabbed some food and headed to bed for an early start.

Cheers

Tim…

Update: Here is a quick montage of the journey to Machu Picchu.

Lima to Cusco to Machu Picchu was first posted on August 15, 2015 at 8:21 pm.

IFTTT Easy Button

Oracle AppsLab - Sat, 2015-08-15 10:00


The Amazon Dash button, it’s all the buzz lately. Regardless of whether you think it is the greatest invention or just a passing fad, it is a nice little IoT device. There is already work underway to try and make it work with custom code.

There are a couple of crowdfunding projects (flic and btn) that are attempting to create custom IoT buttons as well. But these often come with a high price tag (around $100).

This is where the up-and-coming ESP8266 MCU can shine. For under $3 you can have a Wi-Fi chip plus a programmable micro-controller. You just need to add a cheap button (like the Staples Easy Button, for around $7). Add the good ol’ IFTTT Maker Channel and you will be set to go with your custom IoT button for about $10.


Check my hackster.io project (https://www.hackster.io/noelportugal/esp8266-ifttt-easy-button) to learn how to make your own.


What’s that Skippy ? Mike’s doing too much typing to Instrument his PL/SQL code ?

The Anti-Kyte - Sat, 2015-08-15 06:53

Australian readers will be pleased to note that, despite the Antipodean flavour of this post, there will be no mention of The Ashes. It is a well known fact that Kangaroos are not interested in cricket.

My brother used to run a motorcycling school. One of his teaching techniques, out on the road was to say things like “What’s that Skippy ? Mike’s left his indicator on after the turn ?”
This is in reference to Skippy the Bush Kangaroo – a children’s TV program about the adventures of the eponymous hero with an uncanny knack of communicating life-threatening situations to humans, simply by means of a few tongue-clicking sounds.
My son spent quite a bit of time with his Uncle Steve.
Uncle Steve had quite a bit of influence on said child.
As a result, I’d often be on the receiving end of the distilled wisdom of Skippy…
“What’s that Skippy ? Dad’s left his keys on the table ?”
“What’s that Skippy ? Dad’s left the eight-ball over the pocket ?”
“What’s that Skippy ? Pocket money should be going up in line with inflation ?”

Over the years, this began to seep into my internal monologue… “What’s that Skippy ? I’ve forgotten to close the cursor ?”
It is with thanks to “Uncle Steve” and the help of a know-it-all marsupial with a unique linguistic talent that I will be looking at logging in PL/SQL applications and ways of…well…doing less typing to achieve the same level of instrumentation.
Specifically, what we’ll cover is :

  • Why logging in PL/SQL is special
  • Logging error messages by magic
  • Using OWA_UTIL.WHO_CALLED_ME
  • Using PL/SCOPE to figure out where you are
  • A neater way to log runtime parameter values
  • A logging package that incorporates these techniques

The ultimate combination of all of these changes may well not be ideal in every situation.
However, Skippy has tried to reduce the instrumentation code required to an absolute minimum. After all, kangaroos don’t like typing.

Like most of the Oracle world, Skippy and myself are still on 11gR2.
The sunny uplands of 12c remain, for the moment, the province of messing about in VirtualBox.
Therefore, we won’t be covering any of the 12c utilities ( e.g. UTL_CALL_STACK) here.

What’s that Skippy ? Oh yes, if you are considering a replacement for your existing logging sub-system, or even planning one from scratch, then you might want to check out the OraOpenSource Logger.

The Logging Application

Typically, a PL/SQL logging application will consist of a table that looks something like this :

create table application_message_logs
(
    log_id number not null,
    log_ts timestamp not null,
    username varchar2(30) not null,
    session_id number not null,
    program_unit varchar2(30),
    sub_program varchar2(30),
    location number,
    message_type varchar2(1) not null,
    message varchar2(4000) not null
)
/

…complete with a sequence to generate the LOG_ID values…

create sequence aml_log_id_seq
/

NOTE – I’ve kept things relatively simple here. Of course, there may be instances where you want to log a message of more than 4000 characters ( maybe when debugging a dynamic SQL statement for example). For such eventualities you’d probably have an “overflow” column for a continuation of the message.
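A minimal sketch of what that could look like (the column and the wider message parameter are hypothetical, not part of the table and package used in the rest of this post) :

-- Optional overflow column, only populated when a message exceeds 4000 characters.
-- The logging procedure's i_message parameter would need widening (e.g. to a CLOB)
-- for callers to be able to pass the longer message in.
alter table application_message_logs add (message_overflow clob);

-- The insert would then do something along the lines of :
--   message          => substr(i_message, 1, 4000),
--   message_overflow => case when length(i_message) > 4000 then substr(i_message, 4001) end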

The logging package would probably resemble this :

create or replace package log_pkg
as

    --
    -- Set the logging level
    --
    -- E(rror) = just log error messages
    -- I(nfo) = log Error and Information messages
    -- D(ebug) = log everything
    
    g_log_level varchar2(1) := 'E';
    
    procedure write_pr
    (
        i_program_unit in application_message_logs.program_unit%type default null,
        i_sub_program in application_message_logs.sub_program%type default null,
        i_location in application_message_logs.location%type,
        i_message_type in application_message_logs.message_type%type,
        i_message in application_message_logs.message%type
    );
end log_pkg;
/

create or replace package body log_pkg
as

    function log_level_to_num_fn( i_level in varchar2)
        return pls_integer
    is
        --
        -- Private function to convert the log level or message type
        -- from a character to a number
        --
    begin
        return
        case i_level
            when 'E' then 1
            when 'I' then 2
            when 'D' then 3
        end;
    end log_level_to_num_fn;
    
    procedure write_pr
    (
        i_program_unit in application_message_logs.program_unit%type default null,
        i_sub_program in application_message_logs.sub_program%type default null,
        i_location in application_message_logs.location%type,
        i_message_type in application_message_logs.message_type%type,
        i_message in application_message_logs.message%type
    )
    is
    --
    -- Check the message_type against the current log level setting
    -- (g_log_level) and if, appropriate, write the message to the log table
    --
        pragma autonomous_transaction;
    begin
        if log_level_to_num_fn( nvl(i_message_type, 'E')) > log_level_to_num_fn( g_log_level)
        then
            --
            -- Nothing to do
            --
            return;
        end if;
        insert into application_message_logs
        (
            log_id, log_ts, username, session_id,
            program_unit, sub_program, location,
            message_type, message
        )
        values
        (
            aml_log_id_seq.nextval, systimestamp, user, sys_context('userenv', 'sessionid'),
            i_program_unit, i_sub_program, i_location,
            i_message_type, substr(i_message, 1, 4000)
        );
        commit;
    exception
        when others then
            rollback;
    end write_pr;

end log_pkg;
/

Once again, for the sake of simplicity, I’ve used a package variable to control the logging level rather than something a bit more elaborate.

What’s that Skippy ? A programmer’s trapped in the Standards Document ?

Before we go any further, it’s worth stopping to consider this package.
As well as using an Autonomous Transaction, it also employs another much-despised feature of PL/SQL – the WHEN-OTHERS exception without re-raising the error.

In other contexts, use of these features would be a sure-fire way of inviting the opprobrium of the QA department.
Here, however, both are entirely reasonable, not to say necessary.

If you’re adding a log record in a job that’s part way through a transaction, you definitely want to save the log record, without affecting that ongoing transaction. Indeed, even if that transaction is subsequently rolled back, you want to keep the log record.
Therefore, the autonomous transaction is, in this context, entirely appropriate.

The WHEN-OTHERS with no re-raise is probably a bit more contentious at first sight. However, consider that the package’s one procedure is used to log any messages, not simply errors.
Say, for the sake of argument that the logging table is on a different tablespace from your application tables.
There might be an occasion where an insert into this table would cause an error due to the tablespace being full.
Do you really want your batch data load to fail because there has been an unexpected error logging a message to say the job has started ?
If the answer is no, then such errors are by definition non-fatal. Therefore, the lack of a re-raise is sensible in this context.

It’s not just the code itself that is somewhat out of the ordinary when it comes to logging.
Consider the following code, in which the logging package is called.
Incidentally, this isn’t the actual algorithm used by the UK Met Office to produce a weather forecast…although sometimes I wonder…

create or replace package forecast_pkg
as
    procedure tomorrow_pr
    (
        i_forecast_date date,
        i_detail in varchar2,
        i_just_guess in boolean default true,
        o_forecast out varchar2
    );
end forecast_pkg;
/

create or replace package body forecast_pkg
as
    procedure tomorrow_pr
    (
        i_forecast_date date,
        i_detail in varchar2,
        i_just_guess in boolean default true,
        o_forecast out varchar2
    )
    is
        lc_proc_name constant application_message_logs.sub_program%type := 'TOMORROW_PR';
        l_params application_message_logs.message%type;
        l_loc pls_integer;
        l_forecast varchar2(10);
    begin
        -- record the fact that we're starting...
        log_pkg.write_pr
        (
            i_program_unit => $$plsql_unit, 
            i_sub_program => lc_proc_name, 
            i_location => $$plsql_line, 
            i_message_type => 'I', 
            i_message => 'Starting forecast for tomorrow'
        );
        -- ...and the parameter values we've been passed
        l_params := 'i_forecast_date = '||to_char(i_forecast_date, sys_context('userenv', 'nls_date_format'))||', '
            ||' i_detail = '||i_detail||', '
            ||' i_just_guess = '||case i_just_guess when true then 'TRUE' else 'FALSE' end;
        log_pkg.write_pr
        ( 
            i_program_unit => $$plsql_unit, 
            i_sub_program => lc_proc_name, 
            i_location => $$plsql_line, 
            i_message_type => 'I', 
            i_message => substr(l_params,1,4000)
        );
        l_loc := $$plsql_line;
        --
        -- Do some weather forecasting here... and throw in a debug message as well..
        --
        log_pkg.write_pr
        (
            i_program_unit => $$plsql_unit, 
            i_sub_program => lc_proc_name, 
            i_location => $$plsql_line, 
            i_message_type => 'D', 
            i_message => 'Running complicated algorithm to get forecast'
        );
        if floor( dbms_random.value(1,3)) = 1 then
            o_forecast := 'SUNNY';
        else
            o_forecast := 'SOGGY';
        end if;
        -- then...
        --
        log_pkg.write_pr
        (
            i_program_unit => $$plsql_unit, 
            i_sub_program => lc_proc_name, 
            i_location => $$plsql_line, 
            i_message_type => 'I', 
            i_message => 'Forecast completed outlook is '||o_forecast
        );
    exception when others then
        log_pkg.write_pr
        ( 
            i_program_unit => $$plsql_unit, 
            i_sub_program => lc_proc_name, 
            i_location => l_loc, 
            i_message_type => 'E', 
            i_message => sqlerrm||chr(10)||dbms_utility.format_error_backtrace
        );
        -- What's that Skippy ? Oh, yes...
        raise;
    end tomorrow_pr;
end forecast_pkg;
/

Aside from the fact that the input parameters are there purely for the purposes of demonstration, we have three distinct logging scenarios in this package :

  • Recording Parameter values
  • recording where we are in the procedure at various points
  • Logging an error from an exception handler

If we set the log level to Debug and run the procedure…

set serveroutput on size unlimited
declare
    l_forecast varchar2(100);
begin
    log_pkg.g_log_level := 'D';
    forecast_pkg.tomorrow_pr
    (
        i_forecast_date => to_date('12052015', 'DDMMYYYY'),
        i_detail => 'Some random value',
        i_just_guess => false,
        o_forecast => l_forecast
    );
    dbms_output.put_line('The forecast is '||l_forecast);
end;
/

The forecast is SOGGY

PL/SQL procedure successfully completed.

SQL> 

…we then get the following entries in our log table :

select program_unit, sub_program, location,
    message_type, message
from application_message_logs
order by log_id
/

PROGRAM_UNIT	SUB_PROGRAM	LOCATION MESSA MESSAGE
--------------- --------------- -------- ----- --------------------------------------------------------------------------------
FORECAST_PKG	TOMORROW_PR	      21 I     Starting forecast for tomorrow
FORECAST_PKG	TOMORROW_PR	      33 I     i_forecast_date = 12-MAY-15 i_detail = Some random value i_just_guess = FALSE
FORECAST_PKG	TOMORROW_PR	      45 D     Running complicated algorithm to get forecast
FORECAST_PKG	TOMORROW_PR	      60 I     Forecast completed outlook is SOGGY

Now, the coding standards being applied here include :

  • DATE and BOOLEAN IN parameter values must be explicitly converted to VARCHAR2 prior to logging
  • Calls to stored program units must pass parameters by reference (one parameter per line)
  • The naming convention is that packages are suffixed pkg and procedures suffixed pr

Whilst, of themselves, these standards are fairly reasonable, it does add up to an awful lot of typing to do some fairly standard instrumentation.
This in turn serves to make it that bit harder to spot the actual application code.

So, how can we make logging more readable and less of an overhead in terms of the amount of code required, whilst still capturing the same level of information ?

Well, for a start, we apply some common sense in lieu of standards.

To digress for a moment, I’m firmly of the opinion that standards documents should always contain a get-out-clause for situations such as this.

Firstly, we don’t really need those suffixes on the logging package and procedure names.
If you see a call to logs.write in a package, you can be reasonably sure that it’s a call to a packaged procedure that’s writing to a logging table. OK, it could be a type method, but either way, it’s clear what’s going on.
In case you’re wondering, log is a reserved word which is why I’m proposing to call the package logs.

The next thing we can do is to re-order the parameters to be better able to pass arguments by position rather than by reference.
We can do this by putting the optional parameters (i.e. those with default values) last.
We can also apply a reasonable default value for the message type.

Therefore, we could change our procedure’s signature to this :

procedure write
(
    i_message in message_logs.message%type,
    i_location in message_logs.location%type,
    i_program_unit in message_logs.program_unit%type default null,
    i_sub_program in message_logs.sub_program%type default null,
    i_message_type in message_logs.message_type%type default 'E'
);

…which means that the call to it could be simplified (at minimum) to something like …

logs.write( 'Logging a message', $$plsql_line);

It’s a start, I suppose. However, such a call isn’t going to result in a particularly useful log record. We wouldn’t know the program unit or the sub program name.
It would be good if we could get rid of the need for some of these parameters altogether but still be able to record the relevant information.

It would be even better if we could find a way not to have to type in the code to format the error stack in every exception handler in the application.

What’s that Skippy? Do I want to see a magic trick ?

Logging the Error Stack – the lazy way

Skippy’s knocked up a quick example :

create or replace procedure nothing_up_my_sleeves
as
begin
    dbms_output.put_line(sqlerrm||chr(10)||dbms_utility.format_error_backtrace);
end;
/

On the face of it, this doesn’t look that impressive. However, there’s more to SQLERRM and SQLCODE than meets the eye.

When we invoke this procedure…

set serveroutput on size unlimited
begin
    raise_application_error(-20000, q'[What's that Skippy ?]');
exception
    when others then
        nothing_up_my_sleeves;
        -- for demonstration purposes only - no raise here.
end;
/

ORA-20000: What's that Skippy ?
ORA-06512: at line 2



PL/SQL procedure successfully completed.

SQL> 

All of which means that, with the appropriate changes to the logging procedure, we could just use the following code in our exception handlers :

exception when some_error then
    logs.err;
    raise;

But hang on, that still won’t give us the name of the program or the location. How can we get this information without passing it in from the calling program ?

What’s that Skippy? Why don’t we use OWA_UTIL.WHO_CALLED_ME ?

OWA_UTIL.WHO_CALLED_ME provides information about the program unit from which the current program has been called. This includes both the name of the program unit and its current line number.
This procedure takes no in parameters and populates four out parameters :

  • owner – the owner of the calling program unit
  • name – the name of the calling program unit
  • lineno – the line number within the program unit where the call was made
  • caller_t – the type of program unit that made the call

To demonstrate :

create or replace function get_caller_fn
    return varchar2 
is
    l_owner varchar2(30);
    l_name varchar2(30);
    l_line number;
    l_type varchar2(30);
begin
    owa_util.who_called_me
    (
        owner => l_owner,
        name => l_name,
        lineno => l_line,
        caller_t => l_type
    );
    
    return 'Called from line '||l_line||' of a program of type '||l_type;
end;
/

If I now call this from an anonymous block…

set serveroutput on size unlimited
begin
    dbms_output.put_line(get_caller_fn);
end;
/

Called from line 2 of a program of type ANONYMOUS BLOCK

PL/SQL procedure successfully completed.

SQL> 

One point to bear in mind is that, when called from a package, the procedure returns the package name in the name out parameter, rather than the name of the package member. This is something we’ll come back to in a moment.
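To illustrate the point, here’s a quick sketch (the names are made up for this demo and aren’t part of the logging application) in which a thin wrapper around WHO_CALLED_ME returns the name out parameter, and reports the package rather than the packaged procedure it was called from :

create or replace function get_caller_name_fn
    return varchar2
is
    l_owner varchar2(30);
    l_name varchar2(30);
    l_line number;
    l_type varchar2(30);
begin
    owa_util.who_called_me
    (
        owner => l_owner,
        name => l_name,
        lineno => l_line,
        caller_t => l_type
    );
    return l_name;
end get_caller_name_fn;
/

create or replace package skippy_demo_pkg
as
    procedure member_pr;
end skippy_demo_pkg;
/

create or replace package body skippy_demo_pkg
as
    procedure member_pr
    is
    begin
        -- outputs SKIPPY_DEMO_PKG rather than MEMBER_PR
        dbms_output.put_line( get_caller_name_fn);
    end member_pr;
end skippy_demo_pkg;
/

set serveroutput on size unlimited
exec skippy_demo_pkg.member_pr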

In the meantime, however, we can now eliminate the need for both the program name and the location parameters. The signature of the write procedure now looks like this :

procedure write
(
    i_message in message_logs.message%type,
    i_sub_program in message_logs.sub_program%type default null,
    i_message_type in message_logs.message_type%type default 'E'
);

One thing to be aware of : this procedure will return details of its immediate caller, even if that caller is another member of the same package. Therefore, the call to it needs to go in-line in our logs.write and logs.err procedures. This will enable us to capture the details of the application program unit we’re logging from.
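As a rough sketch of the idea (illustrative only – the finished package towards the end of this post routes this through a private process_log_pr procedure, which is where the actual insert happens) :

procedure err
is
    l_owner varchar2(30);
    l_name varchar2(30);
    l_line number;
    l_type varchar2(30);
begin
    --
    -- This call has to live in err itself. WHO_CALLED_ME reports on the caller
    -- of the current subprogram, so pushing it down into a shared private helper
    -- would simply report the logging package rather than the application code
    -- we want to capture.
    --
    owa_util.who_called_me
    (
        owner => l_owner,
        name => l_name,
        lineno => l_line,
        caller_t => l_type
    );
    process_log_pr
    (
        i_owner => l_owner,
        i_name => l_name,
        i_type => l_type,
        i_line => l_line,
        i_message => sqlerrm||chr(10)||dbms_utility.format_error_backtrace,
        i_message_type => 'E'
    );
end err;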

At this point, you may consider that the logging of the sub-program name is superfluous. After all, we know the line number from which the logging call originated so it should be simple enough to figure out which package member it came from.
To do this programmatically would no doubt require interrogation of the _SOURCE views, together with the application of some clever regular expressions.
On the other hand, you could avoid the top row of your keyboard…

PL/SCOPE

In case you are unfamiliar with it, PL/SCOPE is, to quote the documentation
“… a compiler driven tool that collects data about identifiers in PL/SQL source code at program-unit compilation time…”

In short, when you set the plscope_settings parameter appropriately and then compile a program unit, information about the program unit is written to the _IDENTIFIERS views.

Now, you may have heard that there were some issues with PL/SCOPE in 11.1.
It looks like 11gR2 doesn’t have this problem, as the STANDARD and DBMS_STANDARD packages are compiled with PL/SCOPE enabled.
To verify this, run the following :

select distinct object_name, object_type
from dba_identifiers
where owner = 'SYS'
and object_name in ('STANDARD', 'DBMS_STANDARD');

OBJECT_NAME		       OBJECT_TYPE
------------------------------ -------------
DBMS_STANDARD		       PACKAGE
STANDARD		       PACKAGE

In any event, we do not need to touch these packages. For our present purposes, we’re only interested in our application code.
However, it’s probably worth reviewing the aforementioned documentation before deciding to enable PL/SCOPE across your entire database.

So, if we now enable PL/SCOPE in our current session…

alter session set plscope_settings = 'identifiers:all'
/

…and recompile our package…

alter package forecast_pkg compile
/
alter package forecast_pkg compile body
/

…we should now have some information about the package in USER_IDENTIFIERS…

select name, type, usage,
    line, usage_id, usage_context_id
from user_identifiers
where object_name = 'FORECAST_PKG'
and object_type = 'PACKAGE BODY'
and usage = 'DEFINITION'
order by line
/

NAME			       TYPE		  USAGE 	    LINE   USAGE_ID USAGE_CONTEXT_ID
------------------------------ ------------------ ----------- ---------- ---------- ----------------
FORECAST_PKG		       PACKAGE		  DEFINITION	       1	  1		   0
TOMORROW_PR		       PROCEDURE	  DEFINITION	       3	  2		   1

We can see from this that the FORECAST_PKG package body has a single top-level procedure.
We can tell that TOMORROW_PR is a top-level procedure because the usage_context_id is the same value as the usage_id for the package definition ( i.e. 1).
We can also see that the definition of this procedure begins on line 3 of the package body.
As we’ll already have a line number in the logging procedure, all we’ll need to do is to work backwards and find the last top-level definition prior to the call to the logging procedure. This query should do the trick :

with sub_progs as
(
    select name, line
    from user_identifiers
    where object_name = 'FORECAST_PKG'
    and object_type = 'PACKAGE BODY'
    and type in ('FUNCTION', 'PROCEDURE')
    and usage = 'DEFINITION'
    and usage_context_id = 1
)
select name 
from sub_progs
where line = (select max(line) from sub_progs where line < 17)
/

NAME
------------------------------
TOMORROW_PR

SQL> 

What’s that Skippy ? Oh, if you want to find out how much space is being taken up to store all of this additional metadata, you can run the following query :

select space_usage_kbytes 
from v$sysaux_occupants
where occupant_name = 'PL/SCOPE'
/

By setting aside our coding standards and getting Oracle to do more of the work, we’ve got almost all of the components we need to build our logging procedure with a more streamlined interface.
There is, however, one issue that we still need to address.

Logging parameter values

It is possible to find the parameters defined for a given program unit, or sub-program by means of the _ARGUMENTS views.
However, getting the runtime values of those parameters is something that’s only really practical from within the program unit being executed.
Additionally, given that we want to log the runtime parameter values as a string, we’re faced with the issue of conversion.
Implicit conversion from NUMBER to VARCHAR2 is probably not an issue in this instance. However, DATE and BOOLEAN values present something more of a challenge.

The aforementioned OraOpenSource Logger has an elegant approach to this problem and it’s from that, that I have taken my lead here.

The approach will be to have some overloaded procedures in the logging package which handle the various datatype conversions and then append the resulting string to the VARCHAR2 value passed in as an in-out parameter.
These can then be called from the program unit we’re logging from.
Once all of the parameters have been processed, the final string can be logged in the usual way.

It’s probably easier to see what I mean with a quick demo :

create or replace package convert_pkg
as
    -- Overload for varchar values
    procedure add_param
    (
        i_name in varchar2,
        i_value in varchar2,
        io_list in out varchar2
    );
    
    -- Overload for date values
    procedure add_param
    (
        i_name in varchar2,
        i_value in date,
        io_list in out varchar2
    );
    
    -- Overload for boolean values
    procedure add_param
    (
        i_name in varchar2,
        i_value in boolean,
        io_list in out varchar2
    );
end convert_pkg;
/

create or replace package body convert_pkg
as
    procedure add_param
    (
        i_name in varchar2,
        i_value in varchar2,
        io_list in out varchar2
    )
    is
    begin
        if io_list is not null then
            io_list := io_list||', ';
        end if;
        dbms_output.put_line('Varchar');
        io_list := io_list||' '||i_name||' => '||i_value;
    end add_param;
    
    procedure add_param
    (
        i_name in varchar2,
        i_value in date,
        io_list in out varchar2
    )
    is
    begin
        dbms_output.put_line('Date');
        if io_list is not null then
            io_list := io_list||', ';
        end if;
        io_list := io_list||' '||i_name||' => '
            ||to_char(i_value, sys_context('userenv', 'nls_date_format'));
    end add_param;
    
    procedure add_param
    (
        i_name in varchar2,
        i_value in boolean,
        io_list in out varchar2
    )
    is
    begin
        dbms_output.put_line('Boolean');
        if io_list is not null then
            io_list := io_list||', ';
        end if;
        io_list := io_list||' '||i_name||' => '
        ||case i_value when true then 'TRUE' when false then 'FALSE' end;
    end add_param;
end convert_pkg;
/

If we run the following script to test this :

set serveroutput on size unlimited
declare
    l_paramlist varchar2(4000);
begin
    convert_pkg.add_param( 'i_char_param', 'MIKE', l_paramlist);
    convert_pkg.add_param( 'i_date_param', sysdate, l_paramlist);
    convert_pkg.add_param( 'i_bool_param', true, l_paramlist);
    
    dbms_output.put_line(l_paramlist);
end;
/

Varchar
Date
Boolean
i_char_param => MIKE,  i_date_param => 14-AUG-15,  i_bool_param => TRUE

PL/SQL procedure successfully completed.


What’s that Skippy ? If you pass in a null value for the parameter, you will get an error :

set serveroutput on size unlimited
declare
    l_paramlist varchar2(4000);
begin
    convert_pkg.add_param( 'i_null_param', null, l_paramlist);

    dbms_output.put_line(l_paramlist);
end;
/

    convert_pkg.add_param( 'i_null_param', null, l_paramlist);
    *
ERROR at line 4:
ORA-06550: line 4, column 5:
PLS-00307: too many declarations of 'ADD_PARAM' match this call
ORA-06550: line 4, column 5:
PL/SQL: Statement ignored

As NULL is a valid value for a VARCHAR2, a DATE and a BOOLEAN, Oracle gets confused about which overload to use.

Therefore, you may want to weigh the convenience of this approach against the fact that you need to remember to handle potential null values in your parameters before logging them.
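One simple workaround (a sketch, not something from the packages above) is to give the argument an explicit datatype at the call site – for example by passing a typed variable that happens to be NULL rather than a bare NULL literal :

set serveroutput on size unlimited
declare
    l_paramlist varchar2(4000);
    l_null_value varchar2(1); -- deliberately left NULL, but explicitly a VARCHAR2
begin
    convert_pkg.add_param( 'i_null_param', l_null_value, l_paramlist);
    dbms_output.put_line(l_paramlist);
end;
/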

In our headlong rush to minimize typing in our application code, Skippy and I have decided to accept this responsibility. As a result we now have…

The new improved logging package
create or replace package logs
as
    --
    -- Set the logging level
    --
    -- E(rror) = just log error messages 
    -- I(nfo) = log Error and Information messages
    -- D(ebug) = log everything
    -- Note that we also have a message_type of P(arameter). 
    -- Messages of this type will be treated as being the same level as
    -- E(rror) - i.e. they will always be logged.
    
    g_log_level varchar2(1) := 'E'; 
    
    -- Parameter conversion procedures - overloaded for VARCHAR2, DATE and BOOLEAN
    procedure add_param
    (
        i_name in varchar2,
        i_value in varchar2,
        io_list in out varchar2
    );
    
    procedure add_param
    (
        i_name in varchar2,
        i_value in date,
        io_list in out varchar2
    );

    procedure add_param
    (
        i_name in varchar2,
        i_value in boolean,
        io_list in out varchar2
    );

    -- Mainly for P(arameter), I(nformation) and D(ebug) messages but also possible
    -- to log E(rror) messages if the message is something other than the error stack
    procedure write
    (
        i_message in application_message_logs.message%type,
        i_message_type in application_message_logs.message_type%type default 'E' 
    );
    
    -- Just log the error stack - i.e.
    -- sqlerrm||chr(10)||dbms_utility.format_error_backtrace
    procedure err;
end logs;
/

create or replace package body logs
as

    function log_level_to_num_fn( i_level in varchar2)
        return pls_integer
    is
    --
    -- Private function to convert the log level or message type
    -- from a character to a number
    --
    begin
        return
        case i_level
            when 'E' then 1
            when 'P' then 1
            when 'I' then 2
            when 'D' then 3
        end;
    end log_level_to_num_fn;

        
    procedure add_param
    (
        i_name in varchar2,
        i_value in varchar2,
        io_list in out varchar2
    )
    is
    -- Overload for VARCHAR2 parameter values
    begin
        if io_list is not null then
            io_list := io_list||' , ';
        end if;
        io_list := io_list||i_name||' => '||i_value;
    end add_param;
    
    procedure add_param
    (
        i_name in varchar2,
        i_value in date,
        io_list in out varchar2
    )
    is
    -- Overload for DATE parameter values
    begin
        if io_list is not null then
            io_list := io_list||' , ';
        end if;
        io_list := io_list||i_name||' => '
            ||to_char( i_value, sys_context( 'userenv', 'nls_date_format'));
    end add_param;
    
    procedure add_param
    (
        i_name in varchar2,
        i_value in boolean,
        io_list in out varchar2
    )
    is
    -- Overload for BOOLEAN parameter values
    begin
        if io_list is not null then
            io_list := io_list||' , ';
        end if;
        io_list := io_list||i_name||' => '||case when i_value then 'TRUE' else 'FALSE' end;
    end add_param;
    
    procedure process_log_pr
    (
        i_owner in varchar2,
        i_name in varchar2,
        i_type in varchar2,
        i_line in number,
        i_message in application_message_logs.message%type,
        i_message_type in application_message_logs.message_type%type
    )
    is
    --
    -- Private procedure to process the log record
    -- Called from the write and err procedures.
    --
    
        l_sub_program application_message_logs.sub_program%type;
        
        pragma autonomous_transaction;
    
    begin
    
        if i_type = 'PACKAGE BODY' then
            -- find the sub-program name
            -- Do this in a nested block just in case the package in question
            -- has not been compiled with PL/SCOPE enabled
            begin
                with sub_progs as
                (
                    select name, line
                    from dba_identifiers
                    where type in ('FUNCTION', 'PROCEDURE')
                    and usage = 'DEFINITION'
                    and usage_context_id = 1
                    and  owner = i_owner
                    and object_name = i_name
                    and object_type = i_type
                )
                select name
                into l_sub_program
                from sub_progs
                where line =
                (
                    select max(line)
                    from sub_progs
                    where line < i_line
                );
            exception 
                when no_data_found then
                    -- Calling package was not compiled with PL/SCOPE enabled
                    l_sub_program := null;
            end;
        end if;

        insert into application_message_logs
        (
            log_id, log_ts, username, session_id,
            program_unit, sub_program, location,
            message_type, message
        )
        values
        (
            aml_log_id_seq.nextval, systimestamp, user, sys_context('userenv', 'sessionid'),
            i_name, l_sub_program, i_line,
            i_message_type, substr( i_message, 1, 4000)
        );
        commit;
    exception 
        when others then
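            -- Deliberately swallow any failure here : logging runs in its own
            -- autonomous transaction, so a problem writing the log record
            -- should never break the business transaction that called us.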
            rollback;
    end process_log_pr;



    procedure write
    (
        i_message in application_message_logs.message%type,
        i_message_type in application_message_logs.message_type%type default 'E'
    )
    is
        --
        -- Main procedure for general messages ( usually non-error).
        -- Check that the message is of a level that needs to be recorded based on
        -- the current g_log_level setting. If so, find out where the call originates
        -- from
        --
        
        l_owner varchar2(30);
        l_name varchar2(30);
        l_line number;
        l_type varchar2(30);
        
        l_sub_program application_message_logs.sub_program%type;
        
    begin

        if log_level_to_num_fn( nvl(i_message_type, 'E')) > log_level_to_num_fn( g_log_level)
        then
            -- Don't need to log this message at the current logging level
            return;
        end if;
        
        -- What's that Skippy ?
        -- This call needs to be in-line. If we move it to a function and call
        -- it from here, then it'll just return details of the current package ?
        owa_util.who_called_me
        (
            owner => l_owner,
            name => l_name,
            lineno => l_line,
            caller_t => l_type
        );
        
        process_log_pr
        (
            i_owner => l_owner,
            i_name => l_name,
            i_type => l_type,
            i_line => l_line,
            i_message_type => i_message_type,
            i_message => i_message
        );
    exception when others then
        -- If you're going to break a taboo, do it properly !
        null;
    end write;
    
    procedure err 
    is
    --
    -- Retrieve the error stack, get the details of the caller and
    -- then pass it for logging.
    --
        l_message application_message_logs.message%type;
        
        l_owner varchar2(30);
        l_name varchar2(30);
        l_line number;
        l_type varchar2(30);
        
        l_sub_program application_message_logs.sub_program%type;

    begin
        l_message := sqlerrm|| chr(10) ||dbms_utility.format_error_backtrace;
        
        -- As per Skippy - this call needs to be in-line (see write procedure above)
        owa_util.who_called_me
        (
            owner => l_owner,
            name => l_name,
            lineno => l_line,
            caller_t => l_type
        );

        process_log_pr
        (
            i_owner => l_owner,
            i_name => l_name,
            i_type => l_type,
            i_line => l_line,
            i_message_type => 'E',
            i_message => l_message
        );
    exception
        when others then
            -- And again, just in case QA aren't annoyed enough by this point
            null;
    end err;   
end logs;
/
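
Remember that the sub-program lookup in process_log_pr reads DBA_IDENTIFIERS, so it will only find anything if the logs owner can see that view and the calling package was compiled with PL/SCOPE identifier collection switched on. A minimal sketch of the latter (the package name here is just a stand-in for whichever application package does the calling) :

alter session set plscope_settings = 'IDENTIFIERS:ALL';

alter package my_application_pkg compile body;

-- sanity check - the sub-program definitions should now show up
select name, type, line
from user_identifiers
where object_name = 'MY_APPLICATION_PKG'
and object_type = 'PACKAGE BODY'
and usage = 'DEFINITION'
/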

Just a minute. That package looks quite a bit bigger than the one we started with.
On the plus side, however, we’ve got all that code in one place.
As a result of this, our application code is somewhat less cluttered :

create or replace package body forecast_pkg
as
    procedure tomorrow_pr
    (
        i_forecast_date date,
        i_detail in varchar2,
        i_just_guess in boolean default true,
        o_forecast out varchar2
    )
    is
        l_params application_message_logs.message%type;

        l_forecast varchar2(10);
    begin
        -- Have randomly decided that all parameters need to be not null
        -- or the procedure will error.
        -- This is purely for cosmetic purposes...
        if i_forecast_date is null or i_detail is null or i_just_guess is null
        then
            raise_application_error( -20000, 'Missing mandatory parameters.');
        end if;
        -- record the fact that we're starting...
        logs.write('Starting forecast for tomorrow', 'I');
        
        -- ...and the parameter values we've been passed, which we now know, conveniently are not null
        
        logs.add_param('i_forecast_date', i_forecast_date, l_params);
        logs.add_param('i_detail', i_detail, l_params);
        logs.add_param('i_just_guess', i_just_guess, l_params);
        logs.write(l_params, 'P');
        --
        -- Do some weather forecasting here... and throw in a debug message as well..
        --
        logs.write('Running complicated algorithm to get forecast', 'D');
        if floor( dbms_random.value(1,3)) = 1 then
            o_forecast := 'SUNNY';
        else
            o_forecast := 'SOGGY';
        end if;
        -- then...
        --
        logs.write('Forecast completed outlook is '||o_forecast, 'I');
    exception when others then
        logs.err;
        raise;
    end tomorrow_pr;
end forecast_pkg;
/

If we run this now…

set serveroutput on size unlimited
declare
    l_forecast varchar2(100);
begin
    logs.g_log_level := 'D';
    forecast_pkg.tomorrow_pr
    (
        i_forecast_date => to_date('12052015', 'DDMMYYYY'),
        i_detail => 'Some random value',
        i_just_guess => false,
        o_forecast => l_forecast
    );
    dbms_output.put_line('The forecast is '||l_forecast);
end;
/

The forecast is SOGGY

PL/SQL procedure successfully completed.

SQL>

…we still get the same amount of information in the log table…

select program_unit, sub_program, location,
    message_type, message
from application_message_logs
order by log_id
/

PROGRAM_UNIT	SUB_PROGRAM	LOCATION MESSA MESSAGE
--------------- --------------- -------- ----- --------------------------------------------------------------------------------
FORECAST_PKG	TOMORROW_PR	      23 I     Starting forecast for tomorrow
FORECAST_PKG	TOMORROW_PR	      30 P     i_forecast_date => 12-MAY-15 , i_detail => Some random value , i_just_guess => FALSE
FORECAST_PKG	TOMORROW_PR	      34 D     Running complicated algorithm to get forecast
FORECAST_PKG	TOMORROW_PR	      42 I     Forecast completed outlook is SOGGY

SQL>
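
Incidentally, the I(nfo) and D(ebug) rows are only there because we set logs.g_log_level to 'D' before the run. To see the filtering in action, here's a quick sketch (clearing down the table first, if you don't mind losing the rows above) - leave the level at its default of 'E' and only the P(arameter) message should survive, P being treated at the same level as E :

truncate table application_message_logs;

declare
    l_forecast varchar2(100);
begin
    -- logs.g_log_level left at its default of 'E'
    forecast_pkg.tomorrow_pr
    (
        i_forecast_date => to_date('12052015', 'DDMMYYYY'),
        i_detail => 'Some random value',
        i_just_guess => false,
        o_forecast => l_forecast
    );
end;
/

select message_type, message
from application_message_logs
order by log_id
/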

We can also record the error stack without having to build it and pass it in explicitly every time we call the logs package…

set serveroutput on size unlimited
declare
    l_forecast varchar2(100);
begin
    logs.g_log_level := 'D';
    forecast_pkg.tomorrow_pr
    (
        i_forecast_date => to_date('12052015', 'DDMMYYYY'),
        i_detail => 'Some random value',
        i_just_guess => null,
        o_forecast => l_forecast
    );
    dbms_output.put_line('The forecast is '||l_forecast);
end;
/

declare
*
ERROR at line 1:
ORA-20000: Missing mandatory parameters.
ORA-06512: at "MIKE.FORECAST_PKG", line 45
ORA-06512: at line 5

Sure enough, in the log table :

select program_unit, sub_program, location,
    message_type, message
from application_message_logs
order by log_id
/

PROGRAM_UNIT	SUB_PROGRAM	LOCATION MESSA MESSAGE
--------------- --------------- -------- ----- --------------------------------------------------------------------------------
FORECAST_PKG	TOMORROW_PR	      44 E     ORA-20000: Missing mandatory parameters.
					       ORA-06512: at "MIKE.FORECAST_PKG", line 20

Whilst you may well be skeptical about the abilities of a kangaroo in the matter of PL/SQL programming, there may be one or two techniques here that you’ll find useful.


Filed under: Oracle, PL/SQL Tagged: dba_identifiers, logging parameter values, overloaded procedures, owa_util.who_called_me, pl/scope, PL/SQL instrumentation, PLS-00307: too many declarations of x match this call, plscope_settings

IntelliJ IDEA 14.1.4 adds Spring Initializr

Pas Apicella - Fri, 2015-08-14 20:57
Just upgraded to IntelliJ IDEA 14.1.4 and found that the Spring Initializr web page for quickly creating Spring Boot applications has been added to the New Project dialog. The web site I normally use to start new Spring Boot applications, shown below, is now part of IntelliJ IDEA, which is great.

http://start.spring.io/

Some screen shots of this.





http://feeds.feedburner.com/TheBlasFromPas
Categories: Fusion Middleware

What Kids Tell Us about Touch and Voice

Oracle AppsLab - Fri, 2015-08-14 12:28

Recently, my four year-old daughter and her little bestie were fiddling with someone’s iPhone. I’m not sure which parent had sacrificed the device for our collective sanity.

Anyway, they were talking to Siri. Her bestie was putting Siri through its paces, and my daughter asked for a joke, because that’s her main question for Alexa, a.k.a. the Amazon Echo.


Siri failed at that, and my daughter remarked something like “Our Siri knows the weather too.”

Thus began an interesting comparison of what Siri and “our Siri” i.e. the Echo can do, a pretty typical four year-old topping contest. You know, mine’s better, no mine is, and so forth.

After resolving that argument, I thought about how natural it was for them to talk to devices, something that I’ve never really liked to do, although I do find talking to Alexa more natural than talking to Google Now or Siri.

I’m reminded of a post, which I cannot find, Paul (@ppedrazzi) wrote many years ago about how easily a young child, possibly one of his daughters, picked up and used an iPhone. This was in 2008 or 2009, early days for the iPhone, and the child was probably two, maybe three, years old. Wish I could find that post.

From what I recall, Paul mused on how natural touch was as an input mechanism for humans, as displayed by how a child could easily pick up and use an iPhone. I’ve seen the same with my daughter, who has been using iOS on one device or another since she was much younger.

I’m observing that speech as equally natural to her.

Kids provide great anecdotal research for me because they’re not biased by what they already know about technology.

When I use something like gesture or voice control, I can’t help but compare it to what I know already, i.e. keyboard, mouse, which colors my impressions.

Watching kids use touch and voice input, the interactions seem very natural.

This is obvious stuff that’s been known forever, but it took how long for someone, Apple, to get touch right? Voice is in an earlier phase, advancing, but not completely natural.

One point Noel (@noelportugal) makes about voice input is that having a wake word, e.g. “Alexa” or “OK Google,” is awkward, but given privacy concerns, this is the best solution for the moment. Noel wants to customize that wake word, but that’s only incrementally better.

When commanding the Amazon Echo, it’s not very natural to say “Alexa” and pause to ensure she’s listening. My daughter tends to blurt out a full sentence without the pause, “Alexa tell us a joke” which sometimes works.

That pause creates awkward usability, at least I think it does.

Since its release, Noel has led the charge for Amazon Echo research, testing and hacking (lots of hacking) on our team, and we’ve got some pretty cool projects brewing to test our theories. I’ve been using it around my home for a while, and I’m liking it a lot, especially the regular updates Amazon pushes to enhance it, e.g. IFTTT integration, smart home control, Google Calendar integration, reordering items from Amazon and a lot more.

Amazon is expanding its voice investment too, providing Alexa as a service, VaaS or AVS as they call it.

I fully believe the not-so-distant future will feature touch and speech, and maybe gestures, at the glance and scan layers of interaction, with the old school keyboard and mouse for heavy duty commit interactions.

Quick review, glance, scan, commit is our strategic design philosophy. Check out Ultan (@ultan) explaining it if you need a refresher.

So, what do you think? Thank you Captain Obvious, or pump the brakes Jake?

Find the comments.

VirtualBox 5.0.2

Tim Hall - Fri, 2015-08-14 11:29

VirtualBox 5.0.2 has been released. It’s the first maintenance release for the 5.0 version.

Downloads and changelog in the usual places.

Cheers

Tim…

VirtualBox 5.0.2 was first posted on August 14, 2015 at 6:29 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Limit length of listagg

Mike Moore - Fri, 2015-08-14 11:23
SQL> select student_name, course_id from studentx order by student_name

STUDENT_NAME COURSE_ID
------------ ---------
Chris Jones  A102     
Chris Jones  C102     
Chris Jones  C102     
Chris Jones  A102     
Chris Jones  A103     
Chris Jones  A103     
Joe Rogers   B103     
Joe Rogers   A222     
Joe Rogers   A222     
Kathy Smith  B102     
Kathy Smith  A102     
Kathy Smith  A103     
Kathy Smith  B102     
Kathy Smith  A103     
Kathy Smith  A102     
Mark Robert  B103     

16 rows selected.
SQL> WITH x AS
        (SELECT student_name,
                course_id,
                ROW_NUMBER () OVER (PARTITION BY student_name ORDER BY 1) AS grouprownum
           FROM studentx)
  SELECT student_name,
         LISTAGG (CASE WHEN grouprownum < 5 THEN course_id ELSE NULL END, ',')
            WITHIN GROUP (ORDER BY student_name)
            courses
    FROM x
GROUP BY student_name

STUDENT_NAME
------------
COURSES                                                                         
--------------------------------------------------------------------------------
Chris Jones 
A102,A102,C102,C102                                                             
                                                                                
Joe Rogers  
A222,A222,B103                                                                  
                                                                                
Kathy Smith 
A102,A103,B102,B102                                                             
                                                                                
Mark Robert 
B103                                                                            
                                                                                

4 rows selected.
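
The CASE inside the LISTAGG is what keeps the aggregated string short - anything beyond the fourth course for a student becomes NULL, and LISTAGG simply ignores NULLs. A possible refinement (a sketch against the same studentx table) is to flag the groups that have been truncated :

WITH x AS
    (SELECT student_name,
            course_id,
            ROW_NUMBER () OVER (PARTITION BY student_name ORDER BY course_id) AS grouprownum,
            COUNT (*) OVER (PARTITION BY student_name) AS group_total
       FROM studentx)
  SELECT student_name,
         LISTAGG (CASE WHEN grouprownum < 5 THEN course_id END, ',')
            WITHIN GROUP (ORDER BY course_id)
         || CASE WHEN MAX (group_total) > 4 THEN ',...' END AS courses
    FROM x
GROUP BY student_name
/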

Log Buffer #436: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-08-14 08:00

This Log Buffer Edition covers the top blog posts of the week from the Oracle, SQL Server and MySQL arenas.

Oracle:

  • Momentum and activity regarding the Data Act is gathering steam, and off to a great start too. The Data Act directs the Office of Management and Budget (OMB) and the Department of the Treasury (Treasury) to establish government-wide financial reporting data standards by May 2015.
  • RMS has a number of async queues for processing new item location, store add, warehouse add, item and po induction. We have seen rows stuck in the queues and needed to release the stuck AQ Jobs.
  • We have a number of updates to partitioned tables that are run from within pl/sql blocks which have either an execute immediate ‘alter session enable parallel dml’ or execute immediate ‘alter session force parallel dml’ in the same pl/sql block. It appears that the alter session is not having any effect as we are ending up with non-parallel plans.
  • Commerce Cloud, a new flexible and scalable SaaS solution built for the Oracle Public Cloud, adds a key new piece to the rich Oracle Customer Experience (CX) applications portfolio. Built with the latest commerce technology, Oracle Commerce Cloud is designed to ignite business innovation and rapid growth, while simplifying IT management and reducing costs.
  • Have you used R12: Master Data Fix Diagnostic to Validate Data Related to Purchase Orders and Requisitions?

SQL Server:

  • SQL Server 2016 Community Technology Preview 2.2 is available
  • What is Database Lifecycle Management (DLM)?
  • SSIS Catalog – Path to backup file could not be determined
  • SQL SERVER – Unable to Bring SQL Cluster Resource Online – Online Pending and then Failed
  • Snapshot Isolation Level and Concurrent Modification Collisions – On Disk and In Memory OLTP

MySQL:

  • A Better Approach to all MySQL Regression, Stress & Feature Testing: Random Coverage Testing & SQL Interleaving.
  • What is MySQL Package Verification? Package verification (Pkgver for short) refers to black box testing of MySQL packages across all supported platforms and across different MySQL versions. In Pkgver, packages are tested in order to ensure that the basic user experience is as it should be, focusing on installation, initial startup and rudimentary functionality.
  • With the rise of agile development methodologies, more and more systems and applications are built in series of iterations. This is true for the database schema as well, as it has to evolve together with the application. Unfortunately, schema changes and databases do not play well together.
  • MySQL replication is a process that allows you to easily maintain multiple copies of MySQL data by having them copied automatically from a master to a slave database.
  • In Case You Missed It – Breaking Databases – Keeping your Ruby on Rails ORM under Control.

The post Log Buffer #436: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

OAM PS3 State-of-the-art

Frank van Bortel - Fri, 2015-08-14 06:25
An attempt to run OAM 11G Release 2 PS3 on Oracle Linux 6.7, WLS 12C, RDBMS 12C. Install Linux: pretty straightforward. Used Oracle 6.7, as 7 is not certified. Create a 200MB /boot, and an LVM for /, both ext4. Install just the server. Deselect *all* options, just the X system and X legacy support (the OUI needs it). Some 566 packages will get installed. Make sure it boots, and the network starts. Frank

WebLogic Server 12.1.3 Developer Zip - Update 3 Posted

Steve Button - Thu, 2015-08-13 17:48
An update has just been posted on OTN for the WebLogic Server 12.1.3 Developer Zip distribution.

WebLogic Server 12.1.3 Developer Zip Update 3 is built with the fixes from the WebLogic Server 12.1.3.0.4 Patch Set Update, providing developers with access to the latest set of fixes available in the corresponding production release.

See the download page for access to the update:

http://www.oracle.com/technetwork/middleware/weblogic/downloads/wls-for-dev-1703574.html

http://download.oracle.com/otn/nt/middleware/12c/wls/1213/wls1213_dev_update3.zip

The Update 3 README provides details of what has been included:

http://download.oracle.com/otn/nt/middleware/12c/wls/1213/README_WIN_UP3.txt


Thoughts on Google Cloud Dataflow

Pythian Group - Thu, 2015-08-13 15:20

Google Cloud Dataflow is a data processing tool developed by Google that runs in the cloud. Dataflow is an easy to use, flexible tool that delivers completely automated scaling. It is deeply tied to the Google cloud infrastructure, making it a very powerful for projects running in Google Cloud.

Dataflow is an attractive resource management and job monitoring tool because it automatically manages all of the Google Cloud resources, including creating and tearing down  Google Compute Engine resources, communicating with Google Cloud Storage, working with Google Cloud Pub/Sub, aggregating logs, etc.

Cloud Dataflow has the following major components:

SDK – The Dataflow SDK provides a programming model that simplifies/abstracts out the processing of large amounts of data. Dataflow only provides a Java SDK at the moment, which is a barrier for non-Java programmers. More on the programming model later.

Google Cloud Platform Managed Services – This is one of my favourite features in Dataflow. Dataflow manages and ties together components, such as Google Compute Engine, spins up and tears down VMs, manages BigQuery, aggregates logs, etc.

These two components can be used together to create jobs.

Being programmatic, Dataflow is extremely flexible. It works well for both batch and streaming jobs. Dataflow excels at high-volume computations and provides a unified programming model, which is very efficient and rather simple considering how powerful it is.

The Dataflow programming model simplifies the mechanics of large-scale data processing and abstracts out a lot of the lower level tasks, such as cluster management, adding more nodes, etc. It lets you focus on the logical aspect of your pipeline and not worry about how the job will run.

The Dataflow pipeline consists of four major abstractions:

  • Pipelines – A pipeline represents a complete process on a dataset or datasets. The data could be brought in from external data sources. It could then have a series of transformation operations, such as filter, joins, aggregation, etc., applied to the data to give it meaning and to achieve its desired form. This data could be then written to a sink. The sink could be within the Google Cloud platform or external. The sink could even be the same as the data source.
  • PCollections – PCollections are datasets in the pipeline. PCollections could represent datasets of any size. These datasets could be bounded (fixed size – such as national census data) or unbounded (such as a Twitter feed or data from weather sensors). PCollections are the input and output of every transform operation.
  • Transforms – Transforms are the data processing steps in the pipeline. Transforms take one or more PCollections, apply some transform operations to those collections, and then output to a PCollection.
  • I/O Sinks and Sources – The Source and Sink APIs provide functions to read data into and out of collections. The sources act as the roots of the pipeline and the sinks are the endpoints of the pipeline. Dataflow has a set of built-in sinks and sources, but it is also possible to write your own for custom data sources.

Dataflow is also planning to add integration with Apache Flink and Apache Spark. Adding Spark and Flink integration would be a huge feature, since it would open up the possibility of using MLlib, Spark SQL, and Flink’s machine-learning capabilities.

One of the use cases we explored was to create a pipeline that ingests streaming data from several POS systems using Dataflow’s streaming APIs. This data can then be joined with customer profile data that is ingested incrementally on a daily basis from a relational database. We can then run some filtering and aggregation operations on this data. Using the sink for BigQuery, we can insert the data into BigQuery and then run queries. What makes this so attractive is that in this whole process of ingesting vast amounts of streaming data, there was no need to set up clusters or networks or install software, etc. We stayed focused on the data processing and the logic that went into it.

To summarize, Dataflow is the only data processing tool that completely manages the lower-level infrastructure. This removes several API calls for monitoring the load, spinning up and tearing down VMs, aggregating logs, etc., and lets you focus on the logic of the task at hand. The abstractions are very easy to understand and work with, and the Dataflow API also provides a good set of built-in transform operations for tasks such as filtering, joining, grouping, and aggregation. Dataflow integrates really well with all components in the Google Cloud Platform; however, Dataflow does not have SDKs in any language besides Java, which is somewhat restrictive.

The post Thoughts on Google Cloud Dataflow appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Reuters: Instructure has filed for IPO later this year

Michael Feldstein - Thu, 2015-08-13 10:29

By Phil Hill

Reuters is on a breaking news roll lately with ed tech. This time it is about Instructure filing for an initial public offering (IPO).

Instructure is planning an initial public offering later this year that could value the education software company at $500 million to $800 million, according to people familiar with the matter.

Instructure, based in Salt Lake City, has hired Morgan Stanley (MS.N) and Goldman Sachs (GS.N) to help prepare for the IPO, which has been filed confidentially, the people said. They requested anonymity because the news of the IPO was not public.

Under the Jumpstart Our Business Startups Act, new companies that generate less than $1 billion in revenue can file for IPOs with the U.S. Securities and Exchange Commission without immediately disclosing details publicly.

Instructure has long stated its plans to eventually IPO, so the main question has been one of timing. Now we know that it is late 2015 (assuming the Reuters story is correct, though they have been quite accurate with similar stories).

Michael and I have written recently about Instructure’s strong performance, including this note about expanding markets and their consistent growth in higher ed, K-12 and potentially corporate learning.

InstructureCon 2015 Growth Slide

Taken together, what we see is a company with a fairly straightforward strategy. Pick a market where the company can introduce a learning platform that is far simpler and more elegant than the status quo, then just deliver and go for happy customers. Don’t expand beyond your core competency, don’t add parallel product lines, don’t over-complicate the product, don’t rely on corporate M&A. Where you have problems, address the gap. Rinse. Repeat.

Instructure has now solidified their dominance in US higher ed (having the most new client wins), they have hit their stride with K-12, and they are just starting with corporate learning. What’s next? I would assume international education markets, where Instructure has already started to make inroads in the UK and a few other locations.

The other pattern we see is that the company focuses on the mainstream from a technology adoption perspective. That doesn’t mean that they don’t want to serve early adopters with Canvas or Bridge, but Instructure more than any other LMS company knows how to say ‘No’. They don’t add features or change designs unless the result will help the mainstream adoption – which is primarily instructors. Of course students care, but they don’t choose whether to use an LMS for their course – faculty and teachers do. For education markets, the ability to satisfy early adopters rests heavily on the Canvas LTI-enabled integrations and acceptance of external application usage; this is in contrast to primarily relying on having all the features in one system.

Combine this news with that of Blackboard being up for sale and changes in Moodle’s approach, and you have some big moves in the LMS market that should have long-term impacts on institutional decision-making. Watch this space for more coverage.

The post Reuters: Instructure has filed for IPO later this year appeared first on e-Literate.