
Feed aggregator

Keep ready to test the final EA of APEX 5.0

Dimitri Gielis - Tue, 2015-01-13 13:04
Oracle is gearing up to release APEX 5.0... the final early adopter release (EA3) will be released soon. Over 6000 people participated in APEX EA2...

Here's the email of Joel Kallman:

As EA3 will be very close to the final release of APEX 5.0, many more people will probably join EA3, so be ready for it! I look forward to seeing what color schemes people will create with Theme Roller and what they make the Universal Theme look like. Fun guaranteed :)

Categories: Development

What Is Wrong With Thanet?

Pete Scott - Tue, 2015-01-13 12:25
Well that title can be taken many ways. It could be a plaintive “get your act together, Thanet!” or perhaps an appraisal of the issues that make Thanet a bit of a mess. I suppose it’s up to you to decide which. Historically, Thanet is a real place, it was an island sitting on the […]

Dear Julia: SmartWatch Habits and Preferences

Oracle AppsLab - Tue, 2015-01-13 11:40

Julia’s recent post about her experiences with the Samsung Gear watches triggered a lively conversation here at the AppsLab. I’m going to share my response here and sprinkle in some of Julia’s replies.  I’ll also make a separate post about the interesting paper she referenced.

Dear Julia,

You embraced the idea of the smart watch as a fully functional replacement for the smart phone (nicely captured by your Fred Flintstone image). I am on the other end of the spectrum. I like my Pebble precisely because it is so simple and limited.

I wonder if gender-typical fashion and habit is a partial factor here. One reason I prefer my phone to my watch is that I always keep my phone in my hip pocket and can reliably pull it out in less than two seconds. My attitude might change if I had to fish around for it in a purse which may or may not be close at hand.

Julia’s response:

I don’t do much on the watch either. I use it on the go to:

  • read and send SMS
  • make and receive a call
  • read email headlines
  • receive alerts when meetings start
  • take small notes

and with Gear Live:

  • get driving directions
  • ask for factoids

I have two modes to my typical day. One is when I am moving around with hands busy. Second is when I have 5+ minutes of still time with my hands free. In the first mode I would prefer to use a watch instead of a phone. In the second mode I would prefer to use a tablet or a desktop instead of a phone. I understand that some people find it useful to have just one device – the phone – for both modes. From Raymond’s description of Gear S, it sounds like reading on a watch is also okay.

Another possible differentiator, correlated with gender, is finger size. For delicate tasks I sometimes ask my wife for help. Her small, nimble fingers can do some things more easily than my big man paws. Thus I am wary of depending too heavily on interactions with the small screen of a watch. Pinch-zooming a map is delightful on a phone but almost impossible on a watch. Even pushing a virtual button is awkward because my finger obscures almost the entire surface of the watch. I am comfortable swiping the surface of the watch, and tapping one or two button targets on it, but not much more. For this reason I actually prefer the analog side buttons of the Pebble.

Julia’s response:

Gear has a very usable interface. It is controlled by tap, swipe, a single analog button, and voice. Pinch-zoom of images was enabled on the old Gear, but there were no interactions that depended on pinch-zoom.

How comfortable are you talking to your watch in public? I have become a big fan of dictation, and do ask Siri questions from time to time, but generally only when I am alone (in my car, on a walk, or after everyone else has gone to bed). I am a bit self-conscious about talking to gadgets in public spaces. When other people do it near me I sometimes wonder if they are talking to me or are crazy, which is distracting or alarming, so I don’t want to commit the same offense.

I can still remember watching Noel talking to his Google Glass at a meeting we were in. He stood in a corner of the room, facing the wall, so that other people wouldn’t be distracted or think he was talking to them. An interesting adaptation to this problem, but I’m not sure I want a world in which people are literally driven into corners.

Julia’s Response:

I am not at all comfortable talking to my watch. We should teach lipreading to our devices (wouldn’t that be a good kickstarter project?) But I would speak to the watch out of safety or convenience. Speaking to a watch is not as bad as to glasses. I am holding the watch to my mouth, looking at it, and, in case of Gear Live, first say “Okay, Google.” I don’t think many think I am talking to them. I must say most look at me with curiosity and, yes, admiration.

What acrobatics did you have to go through to use your watch as a camera? Did you take it off your wrist? Or were you able to simultaneously point your watch at your subject while watching the image on the watch? Did tapping the watch to take the photo jiggle the camera? Using the watch to take pictures of wine bottles and books and what-not is a compelling use case but often means that you have to use your non-watch hand to hold the object. If you ever expand your evaluation, I would love it if you could have someone else video you (with their smart watch?) as you take photos of wine bottles and children with your watch.

Julia’s Response:

No acrobatics at all. The camera was positioned at the right place. As a piece of industrial design it looked awful. My husband called it the “carbuncle” (I suspect it might be the true reason for the camera’s disappearance in Gear Live). But it worked great. See my reflection in the mirror as I was taking the picture below? No acrobatics. The screen of the watch worked well as a viewfinder. I didn’t have to hold these “objects” in my hands. Tapping didn’t jiggle the screen.


Thanks again for a thought-provoking post, Julia.  I am also not sure how typical I am. But clearly there is a spectrum of how much smart watch interaction people are comfortable with.

John

Securing Big Data Part 6 - Classifying risk

Steve Jones - Tue, 2015-01-13 09:00
So now that your Information Governance groups consider Information Security to be important, you have to think about how they should be classifying the risk. There are docs out there on some of these which talk about frameworks. British Columbia's government has one, for instance, that talks about High, Medium and Low risk, but for me that really misses the point and over-simplifies the
Categories: Fusion Middleware

The RedstoneXperience

WebCenter Team - Tue, 2015-01-13 07:29


Redstone Content Solutions Guest Blog Post

At Redstone Content Solutions, our #1 priority is enabling your business with WebCenter.   

We continually strive to earn your trust and believe WebCenter initiatives are most successful when a strong working relationship exists.
Redstone has developed a hybrid project methodology that incorporates Agile principles, years of WebCenter experience, and client feedback relating to their own best practices, governance guidelines and mandates.  We call this the RedstoneXperience.
Understanding that no two engagements are identical, Agile & Scrum aspects of the RedstoneXperience enable our team to proactively identify changing requirements and analyze remaining tasks. 
Our team utilizes Scrum, a pathway centered on the empirical process control model, exercising control through frequent inspection and adaptation.
RedstoneXperience provides working solutions and visible progress on a frequent basis so that stakeholders are better equipped to make informed decisions.   
We are committed to your success, and have found that employing the RedstoneXperience improves project outcome, accelerates solution understanding, and shortens the learning curve to self-sufficiency.

To learn more about the RedstoneXperience, please follow the links below. 

Our Process. Project Phases. Team Members. Management Tools.

Redstone Content Solutions…We Deliver

Two Changes in PeopleTools Requirements

Duncan Davies - Tue, 2015-01-13 07:00

Oracle have just announced two changes to what they require customers to be running on.

PeopleTools 8.53 Patch 10 or above for PUM Patches

If you’re on PeopleSoft v9.2 and using the Update Images to select the patches to apply then Oracle ‘strongly advises’ customers to be on the .10 patch of PeopleTools 8.53 or higher.

From Oracle:

FSCM Update Image 9.2.010 and higher, HCM Update Image 9.2.009 and higher, and ELM Update 9.2.006 and higher all need PeopleTools 8.53.10 for many of the updates and fixes to be applied. Failure to update your PeopleTools patch level to PeopleTools 8.53.10 or higher will result in the inability to take these updates and fixes. It may also inhibit you from applying critical maintenance in the future.

New PeopleTools Requirements for PeopleSoft Interaction Hub

Oracle also announced that they’re changing the support policy for Interaction Hub and PeopleTools. Basically, if you use Interaction Hub you must upgrade to a PeopleTools release no later than 24 months after that PeopleTools release becomes generally available.

It was originally a little confusingly worded, but there’s now an example that made it clearer for me: For example, PeopleTools 8.53 was released in February 2013. Therefore, customers who use Interaction Hub will be required to upgrade to PeopleTools 8.53 (or newer, such as PeopleTools 8.54) no later than February 2015 (24 months after the General Availability date of PeopleTools 8.53). As of February 2015, product maintenance and new features may require PeopleTools 8.53. I suspect that this is going to impact quite a few customers. Full details here:

JDeveloper 12c: Resolving 'MDS-01368: Variable "oracle.home" used in configuration document is not defined' error

Darwin IT - Tue, 2015-01-13 04:02
 When I started up my Integrated Domain I encountered loads of errors in the log regarding the starting of my ADF application:

Caused By: oracle.mds.config.MDSConfigurationException: MDS-01330: Unable to load the MDS configuration document.
MDS-01329: Unable to load element "persistence-config".
MDS-01370: MetadataStore configuration for metadata-store-usage "mstore-usage_1" is invalid.
MDS-01368: Variable "oracle.home" used in the configuration document is not defined as a system property or environment variable.


I found several questions about this on the Oracle community, but no answers. Yet the answer turns out to be quite simple.

It is apparently caused by the adf-config.xml in the workspace, where you'll find a snippet like:

<metadata-store-usage id="mstore-usage_1">
  <metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
    <property name="metadata-path" value="${oracle.home}/integration"/>
    <property name="partition-name" value="seed"/>
  </metadata-store>
</metadata-store-usage>

Here you see that in the metadata-path a reference to the oracle.home property is made.

It turns out that Oracle neglected to add this property to the startup script of the Integrated WebLogic default domain.

So open the setDomainEnv.cmd (Windows) or setDomainEnv.sh (Linux) script. Under Windows, for the DefaultDomain of the Integrated WebLogic, it is found in your roaming application data, similar to: c:\Users\martien\AppData\Roaming\JDeveloper\system12.\DefaultDomain\bin\

Then find the setting for EXTRA_JAVA_PROPERTIES and add the property -Doracle.home=%SOA_ORACLE_HOME%:

set EXTRA_JAVA_PROPERTIES=%EXTRA_JAVA_PROPERTIES% -Dsoa.archives.dir=%SOA_ORACLE_HOME%\soa -Doracle.home=%SOA_ORACLE_HOME% -Dsoa.instance.home=%DOMAIN_HOME% -Dtangosol.coherence.log=jdk -Dweblogic.transaction.blocking.commit=true -Dweblogic.transaction.blocking.rollback=true -Dweblogic.security.SSL.trustedCAKeyStore=%WL_HOME%\server\lib\DemoTrust.jks -Doracle.xml.schema.Ignore_Duplicate_Components=true -Doracle.xdkjava.compatibility.version=11.1.1 -Doracle.soa.compatibility.version=11.1.1

Restart your domain and it should start up smoothly, at least regarding this issue. It will probably save you some startup time too.

Build failed: where's my bc4j.xcfg?

Darwin IT - Tue, 2015-01-13 02:30

Yesterday and this morning I lost a lot of time with build failures while trying to build our ADF HumanTask forms. We used an ADF library for, amongst others, XML DataControls and reusable page components.

Trying to build the EAR file, and with it that ADF library, I got an error stating 'Unable to copy to output directory ... <default-package>/common/bc4j.xcfg not found'. Indeed, in that default ADF BC folder the bc4j.xcfg is not available, nor anywhere in our whole project or workspace.

After a while of searching with our friend Google, I found this post by Andrejus Baranovski.

Apparently in JDeveloper 11gR2 Oracle introduced a caching mechanism for the IDE, probably some paging mechanism where JDeveloper pages files from memory to disk. To be honest, it's actually one of the behaviours I did not like in Eclipse; I liked the 'what you see is what you edit' approach of JDeveloper.

And since I'm mostly a SOA and BPM developer, I was stuck on JDeveloper 11gR1 and never got to R2. But now in 12c I ran into this behaviour of JDeveloper. Anyway, JDeveloper does this caching on Application level, thus on a per-application basis. So go to Application Properties:
Then go to the node IDE Performance Cache:
There you'll find the default location of your cache. In that folder you'll find a .data subfolder, containing caching data for all or some of your application projects. Close JDeveloper (since under Windows it locks those files) and clear that folder.
Since this folder is, by default, a subfolder of the application, you should either make sure your version control ignores it, or override the location with a path outside of your Subversion working copy.

SQL Server 2014: FCIs, availability groups, and TCP port conflict issues

Yann Neuhaus - Mon, 2015-01-12 23:26

After giving my session about SQL Server AlwaysOn and availability groups at the last French event “Les journées SQL Server 2014”, I had several questions concerning the port conflict issues, particularly the differences that exist between FCIs and availability groups (AAGs) on this subject.

In fact, in both cases we may have port conflicts depending on which components are installed on each cluster node. Fundamentally, FCIs and AAGs are both cluster-based features, but each of them uses the WSFC differently: SQL Server FCIs are “cluster-aware” services, while AAGs use standalone instances by default (using clustered instances with AAGs is possible, but this scenario is relatively uncommon and it doesn't change the story in any way).

First of all, my thinking is based on the following question: Why does having an availability group listener on the same TCP port as a SQL Server instance (but in a different process) cause a conflict, whereas having two SQL Server FCIs on the same port works fine?

Let’s begin with the SQL Server FCIs. When you install two SQL Server FCIs (on the same WSFC), you can configure the same listen port for both instances and it works perfectly, right? Why? The main reason is that each SQL Server FCI has its own dedicated virtual IP address, and as you know, a process can open a socket to a particular IP address on a specific port. However, two or more processes that attempt to open a socket on the same specific port and the same IP address will run into a conflict. For instance, in my case I have two SQL Server FCIs - SQLCLUST-01\SQL01 and SQLCLUST-02\SQL02 - that listen on the same TCP port number: 1490. Here is the netstat -ano command output:




Notice that each SQL Server process listens on its own IP address, and only on that one. We can confirm this by taking a look at each SQL Server error log.








Now let’s continue with the availability groups. The story is not the same, because in most scenarios we use standalone instances, and by default they listen on all available IP addresses. In my case, this time I have two standalone instances – MSSQLSERVER (default) and APP – that listen on TCP ports 1433 and 1438 respectively. By looking at the netstat -ano output we can see that each process listens on all available IP addresses (LocalAddress =




We can also verify the SQL Server error log of each standalone instance (default and APP)








At this point I am sure you are beginning to understand the issue you may have with availability groups and listeners. Let’s try to create a listener for an availability group with the default instance (MSSQLSERVER). My default instances on each cluster node listen on port 1433, whereas the APP instances listen on port 1438, as shown in the picture above. If I attempt to create my listener LST-DUMMY on port 1433, it will succeed because my availability group and my default instance are in the same process.




Notice that the listener LST-DUMMY listens on the same port as the default instance, and both are in the same process (PID = 1416). Of course, if I try to change the TCP port number of my listener to 1438, SQL Server will raise the well-known error message with id 19486.




Msg 19486, Level 16, State 1, Line 3 The configuration changes to the availability group listener were completed, but the TCP provider of the instance of SQL Server failed to listen on the specified port [LST-DUMMY:1438]. This TCP port is already in use. Reconfigure the availability group listener, specifying an available TCP port. For information about altering an availability group listener, see the "ALTER AVAILABILITY GROUP (Transact-SQL)" topic in SQL Server Books Online.


The explanation becomes obvious now. Indeed, the SQL Server instance APP listens on TCP port 1438 on all available IP addresses (including the IP address of the listener LST-DUMMY).




You don't trust me? Well, I can prove it by connecting directly to the SQL Server named instance APP with the IP address of the listener LST-DUMMY - - and the TCP port of the named instance – 1438 -




To summarize:

  • Having several SQL Server FCIs that listen on the same port is not a problem, because each can open a socket on its own distinct IP address. However, you can face port conflicts if you also have a standalone instance installed on one of the cluster nodes.
  • Having an availability group listener that listens on the same TCP port as a standalone instance in the same process will not result in a TCP port conflict.
  • Having an availability group listener that listens on the same TCP port as a standalone instance in a different process will result in a TCP port conflict. In this case each SQL Server process will attempt to open a socket on the same TCP port and the same IP address.
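These three cases come down to ordinary socket binding rules that you can reproduce outside SQL Server. Below is a purely illustrative Python sketch (not SQL Server itself): once one process's socket holds a port on the wildcard address, a second bind to that port on a specific IP fails with EADDRINUSE, which is exactly the listener-versus-standalone-instance conflict described above.

```python
import errno
import socket

# A "standalone instance": listens on the wildcard address (,
# i.e. on every IP of the machine, on some port.
instance = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
instance.bind(("", 0))          # port 0 = let the OS pick a free port
port = instance.getsockname()[1]
instance.listen(5)

# A "listener" in another process: tries to bind the same port on a
# specific IP ( here). The wildcard bind already covers that
# address, so the bind is refused, just like error 19486 above.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    listener.bind(("", port))
    conflict = False
except OSError as exc:
    conflict = (exc.errno == errno.EADDRINUSE)

print("port conflict:", conflict)   # port conflict: True
instance.close()
listener.close()
```

Two sockets bound to two *different* specific IPs on the same port would coexist happily, which is why the FCI scenario (one virtual IP per instance) works.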

Hope it helps!

Oracle Maven Repository is Live with WebLogic Server Artifacts

Steve Button - Mon, 2015-01-12 22:17
Oracle Maven Repository 
Very. Exciting. News.

The Oracle Maven Repository has just gone live and is now available for public access, loaded with WebLogic Server artifacts from the 12.1.2 and 12.1.3 releases. 

Want free and easy access to WebLogic Server APIs, libraries and plugins - just have at it!

We are looking to hire someone who loves working with open source developer communities.

Christopher Jones - Mon, 2015-01-12 16:37

We are looking to hire someone who loves working with open source developer communities.

The ideal candidate would have experience with the Oracle Database and two or more open source developer environments. The successful candidate would join a team of people responsible for helping database app developers be more productive while using prominent open source development tools. That person would also help to represent the needs of the community back to Oracle development. If you are a technical person who would like to make a difference, please check out this link.

Using Shuttles in a many-to-many relationship (Form)

Dimitri Gielis - Mon, 2015-01-12 16:36
In the previous post I showed some options for representing a many-to-many table relationship in a report using the LISTAGG Oracle function.

In this post we will edit a record and see how we can represent the data in a Form and save the data back to the different tables.

First I create a Form on the main table (customers) by just following the wizards.

Next I'll add a Shuttle item to the page: P2_PRODUCT_IDS
The SQL statement for the LOV (List of Values) looks like this; just like in a select list, the shuttle is able to show the name but store the id. Note there's no where clause in the statement, as we want to show all possible products on the left-hand side of the shuttle.

Finally, for the item source value we can't use a Database Column, as the data is in a different table, so we enter a select statement to get all the product ids for that customer. Note that the Source Type needs to be set to SQL Query (return colon separated value), so it can return one or more product ids.
The selected products will be shown on the right in the shuttle.

Here's what the Form looks like when we select John Dulles, who's interested in two products (Jacket, Business Shirt):

When we move Products from left to right (or the other way) and hit Apply Changes, we need to store those values.

Add a new Process after the built-in Process and call it Save Products, with this code:

There are many ways to store the values, but let me walk you through this one.
We first store the selected products in an array (l_vc_arr2), which is what apex_util.string_to_table does.

Next we delete all (possible) records that are not selected. You could remove the last line in the where clause so that all products for that customer are deleted (if you add all the selected ones again later), but if you're auditing that table, your information would not be correct, as that person might not have actually deleted them.

I added some debug info in the process too.

Finally we loop through the array and check if the record already exists in our table; if it doesn't, we add it. Again, you could skip the lookup if you drop all records in the delete statement and simply add all selected ones again.
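The bookkeeping just described (split the colon-separated shuttle value, delete what's no longer selected, insert what's new) is easy to sketch outside the database. The Python below is purely illustrative; the function name and data are made up, and the real process does this in PL/SQL with apex_util.string_to_table:

```python
def plan_shuttle_save(stored_ids, shuttle_value):
    """Mimic the Save Products bookkeeping: the shuttle posts its selection
    as a colon-separated string (what apex_util.string_to_table splits into
    l_vc_arr2); we delete unselected rows and insert only the new ones."""
    selected = [int(v) for v in shuttle_value.split(":") if v]
    to_delete = [pid for pid in stored_ids if pid not in selected]   # rows no longer selected
    to_insert = [pid for pid in selected if pid not in stored_ids]   # newly selected rows
    return to_delete, to_insert

# Customer currently linked to products 1 and 2; user now selects 2, 3 and 4.
deletes, inserts = plan_shuttle_save([1, 2], "2:3:4")
print(deletes, inserts)   # [1] [3, 4]
```

Keeping the delete and insert sets minimal, rather than deleting everything and re-inserting, is what preserves meaningful audit trails on the join table.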

I typically put a condition on this process so it does not run when the request is Delete.

In the online example (click the edit icon in the report) I dropped the Create and Delete buttons in the Form, but if you keep them and want everything to work, there are two more things you have to do:

-) For the Create: in the "Automatic Row Processing (DML)" Process (of the Customer table) you need to specify P2_ID in the "Return Key Into Item" field, so the next process (the one you see above) has a value for P2_ID (= customer id).

-) For the Delete: you need to add another process before the "Automatic Row Processing (DML)", so the child records get deleted first, before the automatic row process deletes the customer.

In the next post I'll give an example of working with this data in a Master-Detail form.
Categories: Development

Do The LGWRs Always Sleep For The Full Three Seconds?

Do Oracle Database LGWRs (10g, 11g, 12c) Always Sleep For The Full Three Seconds?
Back in June I wrote (and included a video) about the Oracle Database log writers' "3 second sleep rule." That's the rule we were all taught by our instructors when we started learning about Oracle, yet never really knew if it was true. In that post, I demonstrated that Oracle Database log writer background processes are normally put to sleep for three seconds.

In this post, I want to answer a related but different question.

Do Oracle Database log writer background processes ALWAYS sleep for the full three seconds? Our initial response would likely be, "Of course not! Because what if a foreground process commits during the three second sleep? The log writer(s) must wake up." That would make sense.

But, is this really true and what else could we learn by digging into this? I created an experiment to check this out, and that is what this post is all about.

The Experiment
In my June post I demonstrated the Three Second Rule. You will see this again below. But in this experiment we are looking for a situation where one of the 12c log writers wakes BEFORE its three second sleep completes.

You can download the experimental script I detail below HERE.

This is really tricky to demonstrate because of all the processes involved. There is the Oracle foreground process, and in 12c there are multiple log writer background processes. Because this experiment follows a timeline, I needed to gather the process activity data and then somehow merge it all together in a way that we humans can understand.

What I did was an operating system trace ( strace ) of each process ( strace -p $lgwr ) with the timestamp option ( strace -p $lgwr -tt ), sending each process's output to a separate file ( strace -p $lgwr -tt -o lgwr.txt ). This was done for all four processes, and of course I needed to start the scripts running in the background. Shown directly below are the log writer strace details.

lgwr=`ps -eaf | grep $sid | grep lgwr | awk '{print $2}'`
lg00=`ps -eaf | grep $sid | grep lg00 | awk '{print $2}'`
lg01=`ps -eaf | grep $sid | grep lg01 | awk '{print $2}'`

echo "lgwr=$lgwr lg00=$lg00 lg01=$lg01"

strace -p $lgwr -tt -o lgwr.str &
strace -p $lg00 -tt -o lg00.str &
strace -p $lg01 -tt -o lg01.str &

Once the log writers were being traced, I connected to sqlplus and launched the script below in the background as well.

drop table bogus;
create table bogus as select * from dba_objects where object_id in (83395,176271,176279,176280);
select * from bogus;
exec dbms_lock.sleep(2.1);

exec dbms_lock.sleep(2.2);
exec dbms_lock.sleep(2.3);
update bogus set object_name='83395' where object_id=83395;
exec dbms_lock.sleep(3.1);
update bogus set object_name='176271' where object_id=176271;
exec dbms_lock.sleep(3.2);
update bogus set object_name='176279' where object_id=176279;
exec dbms_lock.sleep(3.3);
update bogus set object_name='176280' where object_id=176280;
exec dbms_lock.sleep(3.4);
exec dbms_lock.sleep(3.5);
update bogus set object_name='89567' where object_id=89567;
exec dbms_lock.sleep(3.6);
exec dbms_lock.sleep(3.7);

Once the sqlplus session was connected,

sqlplus system/manager @/tmp/runit.bogus &
sleep 2

I grabbed its OS process id and started an OS trace on it as well:

svpr=`ps -eaf | grep -v grep | grep oracle$sid | awk '{print $2}' `
echo "svpr=$svpr"

strace -p $svpr -tt -o svpr.str &

Then I slept for 30 seconds and killed the tracing processes (not the log writers!):

sleep 30

for pid in `ps -eaf | grep -v grep | grep strace | awk '{print $2}'`
do
  echo "killing pid $pid"
  kill -2 $pid
done

Then I merged the trace files, sorted them by time, removed the entries I didn't want to see, and put the results into a final "clean" file.

rm -f $merge
for fn in lgwr lg00 lg01 svpr
do
  cat ${fn}.str | awk -v FN=$fn '{print $1 " " FN " " $2 " " $3 " " $4 " " $5 " " $6 " " $7 " " $8 " " $9}' >> $merge
done

ls -ltr $merge
cat $merge | sort > /tmp/final.bogus

cat /tmp/final.bogus | grep -v times | grep -v getrusage | grep -v "svpr lseek" | grep -v clock_gettime | grep -v gettimeofday | grep -v "svpr read" | grep -v "svpr write" > /tmp/final.bogus.clean

The amazing thing is... this actually worked! Here is the output below:

19:11:41.981934 svpr semtimedop(7503875, {{34, -1, 0}}, 1, {2, 200000000}) =
19:11:42.859905 lg01 semtimedop(7503875, {{19, -1, 0}}, 1, {3, 0}) =
19:11:43.986421 lgwr semtimedop(7503875, {{16, -1, 0}}, 1, {3, 0}) =
19:11:44.186404 svpr semtimedop(7503875, {{34, -1, 0}}, 1, {2, 300000000}) =
19:11:44.982768 lg00 semtimedop(7503875, {{18, -1, 0}}, 1, {3, 0}) =
19:11:45.860871 lg01 semtimedop(7503875, {{19, -1, 0}}, 1, {3, 0}) =
19:11:46.499014 svpr semtimedop(7503875, {{34, -1, 0}}, 1, {3, 100000000}) =
19:11:46.989885 lgwr semtimedop(7503875, {{16, -1, 0}}, 1, {3, 0}) =
19:11:47.983782 lg00 semtimedop(7503875, {{18, -1, 0}}, 1, {3, 0}) =
19:11:48.861837 lg01 semtimedop(7503875, {{19, -1, 0}}, 1, {3, 0}) =
19:11:49.608154 svpr semtimedop(7503875, {{34, -1, 0}}, 1, {3, 200000000}) =
19:11:49.993520 lgwr semtimedop(7503875, {{16, -1, 0}}, 1, {3, 0}) =
19:11:50.984737 lg00 semtimedop(7503875, {{18, -1, 0}}, 1, {3, 0}) =
19:11:51.862921 lg01 semtimedop(7503875, {{19, -1, 0}}, 1, {3, 0}) =
19:11:52.817751 svpr semtimedop(7503875, {{34, -1, 0}}, 1, {3, 300000000}) =
19:11:52.997116 lgwr semtimedop(7503875, {{16, -1, 0}}, 1, {3, 0}) =
19:11:53.985784 lg00 semtimedop(7503875, {{18, -1, 0}}, 1, {3, 0}) =
19:11:54.863809 lg01 semtimedop(7503875, {{19, -1, 0}}, 1, {3, 0}) =
19:11:55.998974 lgwr open("/proc/41955/stat", O_RDONLY) = 19
19:11:55.999029 lgwr read(19, "41955 (ora_pmon_prod35) S 1 4195"..., 999) =
19:11:55.999075 lgwr close(19) = 0
19:11:55.999746 lgwr semtimedop(7503875, {{16, -1, 0}}, 1, {3, 0}) =
19:11:56.127326 svpr semtimedop(7503875, {{34, -1, 0}}, 1, {3, 400000000}) =
19:11:56.986935 lg00 semtimedop(7503875, {{18, -1, 0}}, 1, {3, 0}) =
19:11:57.864930 lg01 semtimedop(7503875, {{19, -1, 0}}, 1, {3, 0}) =
19:11:59.003212 lgwr semtimedop(7503875, {{16, -1, 0}}, 1, {3, 0}) =
19:11:59.531161 svpr semctl(7503875, 16, SETVAL, 0x7fff00000001) = 0
19:11:59.531544 lgwr semctl(7503875, 18, SETVAL, 0x7fff00000001) = 0
19:11:59.532204 lg00 pwrite(256, "\1\"\0\0\311\21\0\0\354\277\0\0\20\200{\356\220\6\0\0\r\0\0\0\367^K\5\1\0\0\0"..., 2048, 2331136) = 2048
19:11:59.532317 lgwr semtimedop(7503875, {{16, -1, 0}}, 1, {2, 480000000}) =
19:11:59.532680 svpr semtimedop(7503875, {{34, -1, 0}}, 1, {0, 100000000}) =
19:11:59.537202 lg00 semctl(7503875, 34, SETVAL, 0x7fff00000001) = 0
19:11:59.537263 lg00 semctl(7503875, 16, SETVAL, 0x7fff00000001) = 0
19:11:59.537350 lg00 semtimedop(7503875, {{18, -1, 0}}, 1, {3, 0}) =
19:11:59.538483 lgwr semtimedop(7503875, {{16, -1, 0}}, 1, {2, 470000000}) =
19:11:59.540574 svpr semtimedop(7503875, {{34, -1, 0}}, 1, {3, 500000000}) =
19:12:00.865928 lg01 semtimedop(7503875, {{19, -1, 0}}, 1, {3, 0}) =
19:12:02.011876 lgwr semtimedop(7503875, {{16, -1, 0}}, 1, {3, 0}) =
19:12:02.537887 lg00 semtimedop(7503875, {{18, -1, 0}}, 1, {3, 0}) =
19:12:03.050381 svpr semtimedop(7503875, {{34, -1, 0}}, 1, {3, 600000000}) =
19:12:03.866796 lg01 semtimedop(7503875, {{19, -1, 0}}, 1, {3, 0}) =
19:12:05.014819 lgwr semtimedop(7503875, {{16, -1, 0}}, 1, {3, 0}) =
19:12:05.538797 lg00 semtimedop(7503875, {{18, -1, 0}}, 1, {3, 0}) =
19:12:06.657075 svpr semtimedop(7503875, {{34, -1, 0}}, 1, {3, 700000000}) =
19:12:06.867922 lg01 semtimedop(7503875, {{19, -1, 0}}, 1, {3, 0}) =
19:12:08.017814 lgwr semtimedop(7503875, {{16, -1, 0}}, 1, {3, 0}) =
19:12:08.539750 lg00 semtimedop(7503875, {{18, -1, 0}}, 1, {3, 0}) =
19:12:09.868825 lg01 semtimedop(7503875, {{19, -1, 0}}, 1, {3, 0}

There is a lot of detail in the above output. I'm only going to make a few comments that pertain to the objectives of this post.

Oracle is using the semaphore call semtimedop to sleep. The beauty of this call is that it allows the process to be woken (that is, signaled) by another process! Keep that in mind as you follow the timeline.
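The pattern (sleep with a timeout, but wake immediately when another process posts the semaphore) can be imitated with a Python threading.Event. This is only an analogy for semtimedop, not the actual system call:

```python
import threading
import time

redo_posted = threading.Event()   # stands in for the semaphore lgwr waits on
wake_info = {}

def log_writer():
    # Like semtimedop(..., {3, 0}): sleep up to 3 seconds, but return
    # early if the "foreground" posts (signals) the semaphore first.
    start = time.monotonic()
    signaled = redo_posted.wait(timeout=3.0)
    wake_info["signaled"] = signaled
    wake_info["slept"] = time.monotonic() - start

lgwr = threading.Thread(target=log_writer)
lgwr.start()

time.sleep(0.5)        # foreground does some work...
redo_posted.set()      # ...then commits: signal the log writer early
lgwr.join()

print(wake_info["signaled"], wake_info["slept"])  # woke well before 3 seconds
```

If redo_posted.set() is never called, wait() returns False after the full 3 seconds, which is the uninterrupted "3 second sleep" case.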

Here we go:

19:11:41.981934. Notice the server process' "2, 2" and later the "2, 3", "3, 1" and "3, 2"? These are the result of the dbms_lock.sleep commands contained in the sqlplus script!

19:11:42.859905. Notice lg01 and the other log writer background processes always have a "3, 0" semtimedop call? That is their "3 second sleep."

Look at the first few lgwr entries. I've listed them here:


Notice anything strange about the above times? They are all just about 3 seconds apart from each other. That's the 3 second sleep in action. But that's not the focus of this post, so let's move on.

Read this slowly: I want to focus on just one part of the output, shown below. Notice the server process is sleeping for 3.4 seconds. If you look at the sqlplus script (near the top of this post), immediately after the 3.4 second sleep the server process issues a commit. Because the 3.4 second sleep starts at 19:11:56.1, I'm expecting to see some log writer activity 3.4 seconds later, at around 19:11:59.5. This could occur in the middle of a log writer's 3 second sleep, which means we will likely see a log writer kick into action before its 3 second sleep completes! Let's take a look.

19:11:56.127326 svpr semtimedop(7503875, {{34, -1, 0}}, 1, {3, 400000000}) =
19:11:56.986935 lg00 semtimedop(7503875, {{18, -1, 0}}, 1, {3, 0}) =
19:11:57.864930 lg01 semtimedop(7503875, {{19, -1, 0}}, 1, {3, 0}) =
19:11:59.003212 lgwr semtimedop(7503875, {{16, -1, 0}}, 1, {3, 0}) =
19:11:59.531161 svpr semctl(7503875, 16, SETVAL, 0x7fff00000001) = 0
19:11:59.531544 lgwr semctl(7503875, 18, SETVAL, 0x7fff00000001) = 0
19:11:59.532204 lg00 pwrite(256, "\1\"\0\0\311\21\0\0\354\277\0\0\20\200{\356\220\6\0\0\r\0\0\0\367^K\5\1\0\0\0"..., 2048, 2331136) = 2048
19:11:59.532317 lgwr semtimedop(7503875, {{16, -1, 0}}, 1, {2, 480000000})

We can see the server process' 3.4 second sleep starting at 19:11:56.1, and we can see the sleep end and the server process' next command begin at the expected time of 19:11:59.5. Next in the trace file output is the result of the commit: the commit wakes both the lgwr and lg00 background processes.

But notice the lgwr background process started one of its 3 second sleeps at 19:11:59.0, which means it doesn't want to wake until 19:12:02.0. But look at when the lgwr process actually woke up: at 19:11:59.5, which is clearly before the expected time of 19:12:02.0. What you just noticed was the lgwr background process being signaled to wake up before its three second sleep completed.

But why did the lgwr need to be woken up? Because the server process' redo must be immediately written.

But it gets even better, because the lgwr background process doesn't do the redo write! The lgwr process signals the lg00 process to do the write, which we can see occurs at time 19:11:59.5. Wow. Amazing!

What We Can Learn From This
Personally, I love these kinds of postings because we can see Oracle in action, demonstrating what we believe to be true. So what does all this actually demonstrate? Here's a list:

  1. We can see the 12c log writers involved, not only lgwr.
  2. All log writer background processes initiate a sleep for the default of three seconds. I have seen situations where it is not three seconds, but the default appears to be three seconds.
  3. The server process signals the lgwr process to write immediately after a commit is issued.
  4. The server process signals the lgwr process to write using a semaphore.
  5. The log writers (starting in 12c) can signal each other using semaphores. We saw lgwr signal the lg00 background process to write.
  6. The server process was performing updates over a 10+ second period, yet its redo was not written to disk until it committed. This demonstrates that ALL redo is not flushed every three seconds. (This is probably not what you learned... unless you joined one of my Oracle Performance Firefighting classes.)
  7. The log writers, while normally put to sleep for three seconds, can be woken mid-sleep for an urgent task (like writing committed data to an online redo log).

I hope you enjoyed this post!

Thanks for reading,

Categories: DBA Blogs

Setup Streams Performance Advisor (UTL_SPADV) for #GoldenGate

DBASolved - Mon, 2015-01-12 14:47

With Oracle “merging” Oracle GoldenGate into Oracle Streams (or vice versa), capturing statistics on the integrated extract (capture) or integrated replicat (apply) will be needed.  In order to do this, the Streams Performance Advisor (UTL_SPADV) can be used.  Before using the Streams Performance Advisor, it needs to be configured under the Streams Administrator, i.e. the Oracle GoldenGate user.  In my test lab, I use a user named GGATE for all my Oracle GoldenGate work.

Configure user for UTL_SPADV:

The Oracle user (GGATE) needs to be granted privileges to run the performance advisor.  This is done by granting permissions through DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE.

SQL> conn ggate/test123
SQL> exec dbms_streams_auth.grant_admin_privilege('GGATE');

Install performance advisor:

After granting the required permissions to the Oracle user, the UTL_SPADV package can be installed.

SQL> conn ggate/test123
SQL> @?/rdbms/admin/utlspadv.sql

Gather statistics:

Now that the UTL_SPADV package has been installed, it can be used from SQL*Plus to gather statistics on the integrated extract/replicat.

SQL> conn ggate/test123
SQL> exec utl_spadv.collect_stats;

Note: This will take some time to run.  From my tests, it appears to complete as my test sessions disconnect.  
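If the default collection time is too long for a quick check, COLLECT_STATS can be given explicit parameters. As an assumption to verify against the UTL_SPADV documentation for your release, the procedure is generally described with interval (seconds between snapshots) and num_runs (number of snapshots) parameters, with defaults of 60 and 10, which would explain the long runtime. A shorter collection might look like:

```sql
SQL> conn ggate/test123
SQL> -- interval and num_runs are assumed parameter names; check
SQL> -- $ORACLE_HOME/rdbms/admin/utlspadv.sql or the docs for your release.
SQL> exec utl_spadv.collect_stats(interval => 30, num_runs => 5);
```

This would gather five snapshots, 30 seconds apart, instead of the full default run.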

Display statistics:

Once the statistics have been gathered, they can be displayed using the SHOW_STATS procedure.

SQL> conn ggate/test123
SQL> set serveroutput on size 50000
SQL> exec utl_spadv.show_stats;

Statistics Output:

The output is displayed through SQL*Plus in intervals of one minute.  Before the statistics start, the advisor provides a legend at the top to help decipher the output.


<statistics>= <capture> [ <queue> <psender> <preceiver> <queue> ] <apply>

<capture>   = '|<C>' <name> <msgs captured/sec> <msgs enqueued/sec> <latency>
              'LMR' <idl%> <flwctrl%> <topevt%> <topevt>
              'LMP' (<parallelism>) <idl%> <flwctrl%> <topevt%> <topevt>
              'LMB' <idl%> <flwctrl%> <topevt%> <topevt>
              'CAP' <idl%> <flwctrl%> <topevt%> <topevt>
              'CAP+PS' <msgs sent/sec> <bytes sent/sec> <latency> <idl%>
              <flwctrl%> <topevt%> <topevt>

<apply>     = '|<A>' <name> <msgs applied/sec> <txns applied/sec> <latency>
              'PS+PR' <idl%> <flwctrl%> <topevt%> <topevt>
              'APR' <idl%> <flwctrl%> <topevt%> <topevt>
              'APC' <idl%> <flwctrl%> <topevt%> <topevt>
              'APS' (<parallelism>) <idl%> <flwctrl%> <topevt%> <topevt>

<queue>     = '|<Q>' <name> <msgs enqueued/sec> <msgs spilled/sec> <msgs in

<psender>   = '|<PS>' <name> <msgs sent/sec> <bytes sent/sec> <latency> <idl%>
              <flwctrl%> <topevt%> <topevt>

<preceiver> = '|<PR>' <name> <idl%> <flwctrl%> <topevt%> <topevt>

<bottleneck>= '|<B>' <name> <sub_name> <sessionid> <serial#> <topevt%> <topevt>

<msgs in

PATH 1 RUN_ID 1 RUN_TIME 2015-JAN-12 15:17:31 CCA Y
| OGG$CAP_EXTI 31 31 0 LMR 99.7% 0% 0.3% "" LMP (2) 199.7% 0% 0.3% "" LMB
99.3% 0% 0.3% "" CAP 99.7% 0% 0.3% "" | "GGATE"."OGG$Q_EXTI" 0.01 0.01 0

PATH 1 RUN_ID 2 RUN_TIME 2015-JAN-12 15:18:32 CCA Y
| OGG$CAP_EXTI 37 33 1 LMR 98.4% 0% 1.6% "" LMP (2) 198.4% 0% 1.6% "" LMB
98.4% 0% 1.6% "" CAP 100% 0% 0% "" | "GGATE"."OGG$Q_EXTI" 0.01 0.01 0 |

If you want to find out more about how to decipher these statistics, the legend is located at the top of the output.


Filed under: Golden Gate, Performance
Categories: DBA Blogs

An Interaction Designer’s Perspective: Samsung Gear vs. Samsung Gear Live

Oracle AppsLab - Mon, 2015-01-12 11:52

Editor’s note: In January of 2014, our team held a wearables summit of sorts, test-driving five popular watches, fitness bands and head-mounted displays to collect experiential evidence of each form factor, initial experience, device software and ecosystem and development capabilities.

Julia drew the original Samsung Galaxy Gear smartwatch, and she’s been using it ever since. A few months ago, she began using the new Android Wear hotness, the Samsung Gear Live, which several of us have.

What follows are Julia’s impressions and opinions of the two watches. Enjoy.

Original Galaxy Gear versus Gear Live

When I had to keep track of time, I used to wear my Skagen watch, and I loved my little Skagen. Last year it ran out of battery. Coincidentally, it happened when Thao (@thaobnguyen) ordered the then just-released Samsung Galaxy Gear for me to “test.”

Life is busy, and it took me some ten months to get a new battery for my Skagen.

In the meantime, I wore the Gear. When I got my Skagen back, I had a “Lucy of Prince Caspian” moment. I felt my watch was bewitched – I couldn’t talk to it (I tried), and it couldn’t talk back to me. Mute and dumb. That’s how I realized I am hooked on smart watches.

Back to Narnia, Lucy Pevensie tries to wake up a lethargic tree that forgot how to speak. Skagen watch doesn’t speak to me either.

This is just a preface; the write-up is about the original Gear versus the Gear Live, which I’ve been testing for a few months. In a nutshell, I have mixed feelings about the Gear Live. Though there are some improvements over the original watch, I find many setbacks.



Left, original Gear; right, Gear Live. Note the minimalistic typography of the original Gear versus the decorative typography of Android Wear.

The original Samsung Galaxy Gear featured clean, bold typography. I could read a notification at a glance, even when driving. In the Gear Live, the minimalistic typography of the Samsung Gear was replaced by the smaller fonts and decorative backgrounds of Android Wear. Not only are those decorations useless, they make the watch unusable in the situations where it could have been most helpful. (And yes, I understand Samsung had to showcase the impressive display.)



Left, original Gear; right, Gear Live.
I can take a call AND talk on the original Gear. With the Gear Live I can take the call, but then, unless I am connected to car speakers, I need to pick up the phone to talk.

Getting calls on the Gear in awkward situations was my main use of it. As clunky as the placement of the speaker and mic was on the original Gear, I was still able to take calls safely while driving, or while walking with my hands full. The Gear Live has no speaker. It can initiate a call hands-free, but what is the use if I still need to get to my phone to speak?



Left, original Gear, right, Gear Live, which has no camera.

Location, voice-to-text, AND image-to-text are the three most logical input methods for a watch. I got very used to taking image notes with the original Gear. Did you know that Evernote can search for text in images? For me, the flagship demo application of the original Gear was Vivino. With Vivino, one can take a picture of a wine label at a store with the watch camera, and get the rating/pricing back on the watch. This application was a great demonstration of smart watch retail potential. The Gear Live has no camera, dismissing all such use cases.


Vivino application on original Gear, no longer supported.
Point a watch camera to a label, take a picture and submit to Vivino server, receive wine rating on the watch.

Google Speech Recognition


Google Speech Recognition is superbly usable technology, way beyond S-Voice or Siri. Big Data in real action! Voice Search, Voice Commands, and dictation work fabulously. The only issue I found is with recognizing email contacts from speech.

Smart Watch

Google Voice Search makes the Smart Watch smart. It brings the knowledge base of the world – the Internet – to the tip of your tongue, and it is MAGIC!




Google Now

I must confess I am annoyed by Google Now cards. I know it tries really hard, but the recommendations are wrong about 50% of the time. The other 49% of the time they are irrelevant. Given that, I feel that Now should stick to the back rows. Instead, it puts itself on center stage. Lesson learned – for a smart watch, the precision/recall balance needs to be skewed heavily toward precision.

Google Now on Gear Live Ah? I am at home, silly!


These opinions are my own. At least half of my day is spent on the go – driving kids around, in classrooms or lessons, and doing family errands. I rarely have idle hands or idle time.

You’ll be the judge of whether I am an atypical user. In addition, I do not subscribe to the school of thought that a smart watch is a phone satellite and a fetish. I believe it can be a useful gadget, way beyond that.

Yes, it is a given that no one will use the watch to write or read a novel, not even a long email. Apart from that, I don’t see why a good smart watch cannot do all a person on the go needs to do, replacing the phone and giving us back our other hand.

Therefore, I feel that a good smart watch should aspire to:

  • work at a glance
  • be precise
  • be hands-free
  • be self-contained


If that is your typical day, then this is your gadget.


Last Thought: Smart Watch and IoT

Last but not least, I believe that a smart watch naturally lends itself to becoming a universal remote control for all IoT “smart things” – it can be your ID, it can sense “smart things,” it can output small chunks of information as voice or text, and it can take commands. As you walk next to (your) refrigerator, the refrigerator can remind you via your watch to buy more milk, and you can adjust the refrigerator’s temperature via the watch. This assumes that a “smart thing” can beam a description of all the knobs and buttons you need to control it.


I am surprised there is not much written on that, but here is a very good paper (pdf): “User Interfaces for Smart Things: A Generative Approach with Semantic Interaction Descriptions,” Simon Mayer, Andreas Tschofen, Anind K. Dey, and Friedemann Mattern, Institute for Pervasive Computing, ETH Zurich, and HCI Institute, Carnegie Mellon University, April 4, 2014.

SOUG-Romand: Performance day on May 21

Yann Neuhaus - Mon, 2015-01-12 10:19

Good news for French speakers: the SOUG-R is more and more active.

On May 21, 2015, a performance day is being organized in Lausanne.

How to create a pub/sub application with MongoDB ? Introduction

Tugdual Grall - Mon, 2015-01-12 09:30
In this article we will see how to create a pub/sub application (messaging, chat, notification), fully based on MongoDB (without any message broker like RabbitMQ, JMS, ...). So, what needs to be done to achieve such a thing? An application “publishes” a message; in our case, we simply save a document into MongoDB. Another application, or thread, subscribes to these events and will receive them. Tugdual Grall

The greatest cybersecurity concerns of the new year [VIDEO]

Chris Foot - Mon, 2015-01-12 09:29


Hi, welcome to RDX! While cybersecurity experts may not have a crystal ball to tell them which threats will impact companies the most, it’s still important to prepare for the future.

So, what does the average data breach look like in 2015? More people are expected to use mobile payment solutions and other similar systems this year. As a result, it’s likely that cybercriminals will use any tactics at their disposal to infiltrate this technology and the protocols associated with it.

Forbes noted that experts also acknowledged how bugs in old open source software pose a threat to companies. One example of such a threat was the Heartbleed bug that was discovered in 2014.

Ultimately, using database security monitoring to ensure all back-end systems are protected and accounted for is a step organizations shouldn’t ignore. In many cases, this can be the last line of defense.

Thanks for watching! Visit us next time for more security news and tips.

The post The greatest cybersecurity concerns of the new year [VIDEO] appeared first on Remote DBA Experts.