Feed aggregator

Agile and UX can mix

Rob Baillie - Thu, 2013-08-29 05:19
User experience design is an agile developer's worst nightmare. You want to make a change to a system, so you research. You collect usage stats, you analyse hotspots, you review, you examine user journeys, you review, you look at drop off rates, you review. Once you've got enough data you start to design. You paper prototype, run through with users, create wireframes, run through with users, build prototypes, run through with users, do spoken journey and video analysis, iterate, iterate, iterate, until finally you have a design.

Then you get the developers to build it, exactly as you designed it.

Agile development, on the other hand, is a user experience expert's worst nightmare. You want to make a change to a system, so you decide what's the most important bit, and you design and build that - don't worry how it fits into the bigger picture, show it to the users, move on to the next bit, iterate, iterate, iterate, until finally you have a system.

Then you get the user experience expert to fix all the clumsy workflows.

The two approaches are fundamentally opposed.

Aren't they?

Well, of course, I'm exaggerating for comic effect, but these impressions are only exaggerations - they're not complete fabrications.

If you look at what's going on, both approaches have the same underlying principle - your users don't know what they want until they see something. Only then do they have something to test their ideas against.  Both sides agree: the earlier you get something tangible in front of users, the more appropriate and successful the solution will be.

The only real difference between the two approaches as described is the balance between scope of design and fullness of implementation. The UX side favours maximum scope of design and minimal implementation; the agile side favours minimal scope of design and maximum implementation.

The trick is to acknowledge this difference and bring the two closer together, or mitigate the risks those differences bring.

Or, to put it another way, the main problem you have with combining these two approaches is the lead-up time before development starts.

In the agile world some people would like to think that developing based on a whim is a great way to work, but the reality is different. Every story that is developed will have gone through some phase of analysis, even in the lightest of light-touch processes. Not least, someone has decided that a problem needs fixing.  Even in the most agile of teams there needs to be some due diligence and prioritisation.

This happens not just at the small scale, but also when deciding which overarching areas of functionality to change. In some organisations there will be a project (not a dirty word), in some a phase, in others a sprint. Whatever it's called, it'll be a consistent set of stories that builds up to a fairly large-scale change in the system. This will have gone through some kind of appraisal process, and rightly so.

Whilst I don't particularly believe in business cases, I do believe in due diligence.

It is in this phase, the research, appraisal and problem definition stage, that UX research can start without having a significant impact on the start-up time. Statistics can be gathered and evidence amassed to describe the problem that needs to be addressed. This can form a critical part of the argument to start work.

In fact, this research can become part of the business-as-usual activities of the team and can be used to discover issues that need to be addressed. This can be as "big process" as you want it to be, just as long as you are willing, and have the resources, to pick up the problems that you find, and you have the agility to react to clear findings as quickly as possible. Basically, you need to avoid the situation where you know there's a problem but you can't start to fix it because your process states you need to finish your two-month research phase.

When you are in this discovery phase there's nothing wrong with starting to feel out some possible solutions. Ideas that can be used to illustrate the problem and the potential benefits of addressing it. Just as long as the techniques you use do not result in high cost and (to reiterate) a lack of ability to react quickly.

Whilst I think it's OK to use whatever techniques work for you, for me the key to keeping the reaction time down is to keep it lightweight.  That is, make sure you're always doing enough to find out what you need to know, but not so much that it takes you a long time to reach conclusions and start to address them. User surveys, spoken narrative and video recordings can all be done remotely, can be done at any time, and once you're in the routine of doing them they needn't be expensive.  Be aware that large sample sets might improve the accuracy of your research, but they also slow you down.  Keep the groups small and focused - appropriate to the size of the team you have to analyse and react to the data. Done right, these groups can be used to continually scrutinise your system and uncover problems.

Once those problems are found, the same evidence can be used to guide potential solutions. Produce some quick lo-fi designs, present them to another (or the same, if you are so inclined) small group and wireframe the best ones to include in your argument to proceed.  I honestly believe that once you're in the habit, this whole process can be implemented in two or three weeks.

Having got the go-ahead, you have a coherent picture of the problem and a solid starting point from which to commence the full-blown design work.  You can then move into a short, sharp and probably seriously intense design phase.

At all points, the design that you're coming up with is, of course, important. However, it's vital that you don't underestimate the value of the thinking process that goes into the design. Keep earlier iterations of the design, keep notes on why the design changed. This forms a reference document that you can use to remind yourself of the reasoning behind your design. This needn't be a huge formal tome; it could be as simple as comments in your wireframes, but an aide mémoire for the rationale behind where you are today is important.
In this short sharp design phase you need to make sure that you get to an initial conclusion quickly and that you bear in mind that this will almost certainly not be the design that you actually end up with.  This initial design is primarily used to illustrate the problem and the current thinking on the solution to the developers. It is absolutely not a final reference document.

As soon as you become wedded to a design, you lose the ability to be agile. Almost by definition, an agile project will not deliver exactly the functionality it set out to deliver. Recognise this and ensure that you do the level of design appropriate to bring the project to life, and no more.

When the development starts, the UX design work doesn't stop. This is where the ultimate design work begins - the point at which the two approaches start to meld.

As the developers start to produce work, the UX expert gains the richest material they could ask for - a real system. It is quite amazing how quickly an agile project can produce a working system that you are able to put in front of users, and there's nothing quite like a real system for investigating system design.

It's not that the wireframes are no longer of use. In fact, early on, the wireframes remain a vital, and probably the only, coherent view of the system, and these should evolve as the project develops.  As elements in the system get built and more rigidly set, the wireframes are updated to reflect them. As new problems and opportunities are discovered, the wireframes are used to explore them.

This process moves along in parallel to the BA work that's taking place on the project. As the customer team splits and prioritises the work, the UX expert turns their attention to the detail of their immediate problems, hand in hand with the BAs. The design that's produced is then used to explain the proposed solutions to the development team and act as a useful piece of reference material.

At this point the developers will often have strong opinions on the design of the solution, and these should obviously be heard. The advantage the design team now have is that they have a body of research and previous design directions to draw on, and a coherent complete picture against which these ideas (and often criticisms) can be scrutinised.  It's not that the design is complete, or final, it's that a valuable body of work has just been done, which can be drawn upon in order to produce the solution.

As you get towards the end of the project, more and more of the wireframe represents the final product.  At this point functionality can be removed from the wireframe in line with what's expected to be built.  In fact, this is true all the way through the project, it's just that people become more acutely aware of it towards the end.

This is a useful means of testing the minimum viable product. It allows you to check with the customer team how much can be taken away before you have a system that could not be released: a crucial tool in a truly agile project.  If you don't have the wireframes to show people, the description of functionality that's going to be in or out can be open to interpretation - which means it's open to misunderstanding.
Conclusion
It takes work to bring a UX expert into an agile project, and it takes awareness and honesty to ensure that you're not introducing a big-up-front design process that reduces your ability to react.

However, by keeping in mind some core principles - that you need to be both able and willing to throw work away, that you should not become wedded to a design early on, that you listen to feedback and react, and that you keep your level of work and techniques fit for the just-in-time problem that you need to solve right now - you can add four huge advantages to your project.

  • A coherent view and design that bind the disparate elements together into a complete system.
  • Expert techniques and knowledge that allow you to discover the right problems to fix with greater accuracy.
  • Design practices and investigative processes that allow you to test potential solutions earlier in the project (i.e. with less cost) than would otherwise be possible, helping ensure you do the right things at the right time.
  • Extremely expressive communication tools that allow you to describe the system you're going to deliver as that understanding changes through the project.

Do it right and you can do all this and still be agile.

Find Your Brilliance

Tim Tow - Wed, 2013-08-28 16:33
I’d like to interrupt our regularly scheduled programming to tell you about one of my personal highlights of Kscope13, which was held back in June in New Orleans. Every year ODTUG announces who the keynote speaker will be well in advance of the conference. Most of the time, the speaker is a person of note; interesting, relevant, sometimes even inspiring. And then there are those times when, frankly, I’m not particularly interested in them or what they have to say. But this year was different. At the end of January, ODTUG announced that the Kscope13 keynote speaker would be Doc Hendley.

Who is Doc Hendley, you ask? Well, from my perspective, Doc Hendley is one of the most inspiring and truly extraordinary individuals I’ve ever come across. And after meeting him and having the privilege of spending time with him in New Orleans, I’m proud and truly humbled to be able to call this man a friend. He is truly extraordinary, which is ironic when you consider that Doc thinks of himself as “just an ordinary, regular, everyday guy.”

Let me tell you, Doc is anything but. This is the story of how a boy who grew up in Greensboro, North Carolina saved thousands of lives all the way across the globe and in the process, proved to himself and everyone else that one person – even an ordinary regular everyday person - can do something extraordinary.

Doc was “just a bartender” and musician who worked and played in nightclubs in Raleigh, NC. In fact, bartending was the only job he’d ever had. But in his own words, he was “dying to make a difference in this world.” In 2003, standing behind the bar, he heard that polluted water kills more children globally than HIV/AIDS, malaria, and tuberculosis combined, yet at that time no one was aware of this crisis.

So what did Doc do? In his words, "He got angry, he got pissed off, he took action." And he did it the only way he knew how. He tapped into the "marginalized people in his community, the bar crowd, the regulars"  – the people that everyone else said were too ordinary - to create Wine to Water, an organization that would take him to the site of the greatest humanitarian disaster in the world – Darfur, Sudan, and eventually to 9 other countries. Doc lived in Darfur for a year, and taught the locals how to clean their water and utilize their own resources to keep it clean.

Ordinary guy? I don’t think so.

I watched his TEDx talk on YouTube before going to Kscope13. I was so moved by what he’d done, so overwhelmed, and so energized, that I made everyone in my company watch it before the conference. I wanted every person who worked for me to hear what Doc had to say, and to understand how we all can change the world if we try.  I love Doc's commitment to his cause and I hope we remain friends for a long time to come.


I know my commitments don't allow me to travel the world helping others like Doc does on a regular basis, but that doesn't mean I can't help.  We decided that Applied OLAP could help support the efforts of Wine to Water and so I presented Doc with a $5,000 check as our small contribution. During his keynote speech, Doc demonstrated, again, how one person, one donation, can change the world. I’m pledging to find a way to make a difference in the world too.

After all, I’m a regular ordinary every day kind of guy too.

Categories: BI & Warehousing

Database 11.2.0.4 Patchset Released

Antonio Romero - Wed, 2013-08-28 14:06

The 11.2.0.4 database patchset was released today; checking twitterland, you can see the news is already out. Tom Kyte tweeted 'look what slipped into 11.2.0.4', amongst others. There will also be a standalone OWB 11.2.0.4 image based on the database 11.2.0.4 components coming soon, or so I am told.

How to Configure The SSL Certificate For Oracle Warehouse Builder Repository Browser

Antonio Romero - Tue, 2013-08-27 22:09

The Repository Browser is a browser-based tool that generates reports from data stored in Oracle Warehouse Builder (OWB) repositories. It uses OC4J as its web server. Users need to use HTTPS (HTTP on top of the SSL/TLS protocol) to access the web interface.

If the Repository Browser Listener is running on a computer named owb_server, then typing the following address will start the Repository Browser:

   https://owb_server:8999/owbb/RABLogin.uix?mode=design

   or

   https://owb_server:8999/owbb/RABLogin.uix?mode=runtime


On the server side, an SSL certificate for the browser is required. Users can create one themselves.

First, users can use the JRE's "keytool" utility to generate a keystore; name it keystore.jks.

For example: keytool -genkey -keyalg RSA -alias mykey -keystore keystore.jks -validity 2000 -storepass welcome1

Pay attention to the password of the store: it needs to be the same as the credentials of keystoreadmin in the file called "system-jazn-data.xml".


If the passwords are not the same, an error message like "Keystore was tampered with, or password was incorrect" will be generated.
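As a quick sanity check (a sketch, assuming the keystore.jks generated above), you can verify which password the keystore accepts before wiring it into the configuration:

keytool -list -keystore keystore.jks -storepass welcome1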


In order to change the credentials, there are two files you can edit; a sketch of how they fit together follows the list.


  • http-web-site.xml: It is in %OWB_HOME%/j2ee/config. The password is stored as clear text in http-web-site.xml; users can change it to match the password they used to generate the keystore. For security reasons, if users don't want to store clear text, they can use the pointer (->keystoreadmin) to point to another file, named system-jazn-data.xml.


  • system-jazn-data.xml: Users can find "system-jazn-data.xml" in %OWB_HOME%/j2ee/config. There is an entry in it called "keystoreadmin"; the password stored in this file is encrypted, and the pointer mentioned above points here. To change the password, edit "system-jazn-data.xml" and change the "<credentials>" value of the "keystoreadmin" entry. Add "!" in front of your password; for example, if you want to change the password to welcome, change it to <credentials>!welcome</credentials>.
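To see how the pieces fit together, here is a rough sketch of the relevant fragment of http-web-site.xml (the exact attributes depend on your OC4J version; the keystore path here is hypothetical, and keystore-password can hold either the clear-text password or the ->keystoreadmin pointer):

<web-site port="8999" secure="true" ...>
    <ssl-config keystore="../config/keystore.jks" keystore-password="->keystoreadmin" />
</web-site>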

The next time OC4J reads "system-jazn-data.xml", it will rewrite the file with all passwords obfuscated and unreadable. (So your clear text "!welcome" will become an encrypted password, something like '{903}dnHlnv/Mp892K8ySQan+zGTlvUDeFYyW'.)

Data Pump 12c – Pumping Data with the LOGTIME Parameter

alt.oracle - Tue, 2013-08-27 09:38
Since its release, Oracle Data Pump has been a worthy successor to the traditional exp/imp tools.  However, one area lacking with Data Pump has been something as simple as the ability to identify how long each step of a Data Pump job actually takes.  The log will show start time at the top of the log and end time at the bottom, but the time of execution for each step is a mystery.  Oracle 12c solves this problem with the LOGTIME parameter, which adds a timestamp to the execution of each step of the Data Pump job.  Here’s what it looks like without the parameter.

/home/oracle:test1:expdp altdotoracle/altdotoracle \
> directory=data_pump_dir dumpfile=expdp.dmp \
> tables=employee

Export: Release 12.1.0.1.0 - Production on Tue Aug 13 09:32:38 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "ALTDOTORACLE"."SYS_EXPORT_TABLE_01":  altdotoracle/******** directory=data_pump_dir dumpfile=expdp.dmp tables=employee
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "ALTDOTORACLE"."EMPLOYEE"                   10.93 KB      16 rows
Master table "ALTDOTORACLE"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ALTDOTORACLE.SYS_EXPORT_TABLE_01 is:
  /oracle/base/admin/test1/dpdump/expdp.dmp
Job "ALTDOTORACLE"."SYS_EXPORT_TABLE_01" successfully completed at Tue Aug 13 09:32:51 2013 elapsed 0 00:00:11

With the LOGTIME parameter, each step is prefixed with a timestamp, indicating the start time for each event that is processed.

/home/oracle:test1:expdp altdotoracle/altdotoracle \
> directory=data_pump_dir dumpfile=expdp.dmp \
> tables=employee LOGTIME=ALL

Export: Release 12.1.0.1.0 - Production on Tue Aug 13 09:34:54 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
13-AUG-13 09:34:56.757: Starting "ALTDOTORACLE"."SYS_EXPORT_TABLE_01":  altdotoracle/******** directory=data_pump_dir dumpfile=expdp.dmp tables=employee LOGTIME=ALL
13-AUG-13 09:34:57.019: Estimate in progress using BLOCKS method...
13-AUG-13 09:34:57.364: Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
13-AUG-13 09:34:57.396: Total estimation using BLOCKS method: 64 KB
13-AUG-13 09:34:57.742: Processing object type TABLE_EXPORT/TABLE/TABLE
13-AUG-13 09:34:57.894: Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
13-AUG-13 09:34:57.964: Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
13-AUG-13 09:35:04.853: . . exported "ALTDOTORACLE"."EMPLOYEE"   10.93 KB      16 rows
13-AUG-13 09:35:05.123: Master table "ALTDOTORACLE"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
13-AUG-13 09:35:05.127: ******************************************************************************
13-AUG-13 09:35:05.128: Dump file set for ALTDOTORACLE.SYS_EXPORT_TABLE_01 is:
13-AUG-13 09:35:05.131:   /oracle/base/admin/test1/dpdump/expdp.dmp
13-AUG-13 09:35:05.134: Job "ALTDOTORACLE"."SYS_EXPORT_TABLE_01" successfully completed at Tue Aug 13 09:35:05 2013 elapsed 0 00:00:09

The parameter works similarly with Data Pump Import.  Note that, although it is documented, the LOGTIME parameter is not described when you run an expdp help=y or impdp help=y command.
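For the import side, a sketch using the dump file produced above (table_exists_action=replace is added only to make the example rerunnable):

/home/oracle:test1:impdp altdotoracle/altdotoracle \
> directory=data_pump_dir dumpfile=expdp.dmp \
> tables=employee LOGTIME=ALL table_exists_action=replace

The documented values are NONE (the default), STATUS (timestamps on status messages only), LOGFILE (timestamps on log file messages only) and ALL (both).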

Categories: DBA Blogs

DBA or Developer?

Chet Justice - Mon, 2013-08-26 16:01
I've always considered myself a developer and a LOWER(DBA). I may have recovered perhaps one database and that was just a sandbox, nothing production worthy. I've built out instances for development and testing and I've installed the software a few hundred times, at least. I've done DBA-like duties, but I just don't think of myself that way. I'm a power developer maybe? Whatevs.

I'm sure it would be nearly impossible to come up with One True Definition of The DBA ™. So I won't.

I've read that Tom Kyte does not consider himself a DBA, but I'm not sure most people know that. From Mr. Kyte himself:



At the same conference, I asked Cary Millsap the same question:



I read Cary for years and always assumed he was a DBA. I mean, have you read his papers? Have you read Optimizing Oracle Performance? Performance? That's what DBAs do (or so I used to think)!

It was only after working with him at #kscope11 on the Building Better Software track that I learned otherwise.

Perhaps I'll make this a standard interview question in the future...

Semi-related discussions:

1. Application Developers vs. Database Developers
2. Application Developers vs. Database Developers: Part II
Categories: BI & Warehousing

IPython at Ohio LinuxFest 2013

Catherine Devlin - Mon, 2013-08-26 04:56

Are you signed up yet for Ohio LinuxFest on Sep. 13-15? I'll be there to present

IPython for non-Pythonistas

Break out of your (bash) shell! IPython and the IPython Notebook have swept over the Python programming community, but they're not just for Python programmers - they make for high-powered shell replacements even with little to no Python knowledge. They'll also let you document your work and collaborate with others like never before. Find out how these beautiful tools can improve your daily Linux work!

At PyOhio, I argued that all Python programmers need IPython. At OLF, I'll make the case that non-Pythonistas need IPython, too. Perhaps my next talk will be "Even Your Cat Needs IPython".

Also at OLF, look for PyOhio's booth for info on next year's PyOhio, other Python events around the region, and general Python love!

Configuring ODBC to MySQL from Oracle

Barry McGillin - Sun, 2013-08-25 03:14
Sometimes people want to connect to MySQL from Oracle and copy table data between the databases.  You can do that with Oracle Heterogeneous Services via ODBC.  This post will show how to create an ODBC connection to your MySQL database, which is the first part of this.

For my example, I'm using unixODBC, and it's on the Oracle public yum repository.
[root@localEL5 ~]$ yum install unixODBC
Loaded plugins: security
Setting up Install Process
Resolving Dependencies
> Running transaction check
> Processing Dependency: libboundparam.so.1 for package: unixODBC-devel
> Processing Dependency: libesoobS.so.1 for package: unixODBC-devel
> Processing Dependency: libgtrtst.so.1 for package: unixODBC-devel
> Processing Dependency: libmimerS.so.1 for package: unixODBC-devel
> Processing Dependency: libnn.so.1 for package: unixODBC-devel
> Processing Dependency: libodbc.so.1 for package: unixODBC
.....
> Running transaction check
> Package unixODBC-devel.i386 0:2.2.11-10.el5 set to be updated
> Package unixODBC-libs.i386 0:2.2.11-10.el5 set to be updated
> Finished Dependency Resolution

Dependencies Resolved

================================================================================
Package Arch Version Repository Size
================================================================================
Updating:
unixODBC i386 2.2.11-10.el5 el5_latest 290 k
Installing for dependencies:
unixODBC-libs i386 2.2.11-10.el5 el5_latest 551 k
Updating for dependencies:
unixODBC-devel i386 2.2.11-10.el5 el5_latest 738 k

Transaction Summary
================================================================================
Install 1 Package(s)
Upgrade 2 Package(s)

Total download size: 1.5 M
Is this ok [y/N]: y
Downloading Packages:
(1/3): unixODBC-2.2.11-10.el5.i386.rpm | 290 kB 00:02
(2/3): unixODBC-libs-2.2.11-10.el5.i386.rpm | 551 kB 00:04
(3/3): unixODBC-devel-2.2.11-10.el5.i386.rpm | 738 kB 00:17
--------------------------------------------------------------------------------
Total 60 kB/s | 1.5 MB 00:26
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : unixODBC-libs 1/5
warning: /etc/odbcinst.ini created as /etc/odbcinst.ini.rpmnew
Updating : unixODBC 2/5
Updating : unixODBC-devel 3/5
Cleanup : unixODBC 4/5
Cleanup : unixODBC-devel 5/5

Dependency Installed:
unixODBC-libs.i386 0:2.2.11-10.el5

Updated:
unixODBC.i386 0:2.2.11-10.el5

Dependency Updated:
unixODBC-devel.i386 0:2.2.11-10.el5

Complete!
[root@localEL5 ~]$


Now make sure odbc connector is installed for MySQL. Again, we're using our friend yum to provide it


[root@localEL5 ~]$ yum install mysql-connector-odbc
Loaded plugins: security
Setting up Install Process
Resolving Dependencies
There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them.
The program yum-complete-transaction is found in the yum-utils package.
> Running transaction check
...
> Finished Dependency Resolution

Dependencies Resolved

=================================================================================================
Package Arch Version Repository Size
=================================================================================================
Installing:
mysql-connector-odbc i386 3.51.26r1127-2.el5 el5_latest 159 k
Installing for dependencies:
libtool-ltdl i386 1.5.22-7.el5_4 el5_latest 37 k

Transaction Summary
=================================================================================================
Install 2 Package(s)
Upgrade 0 Package(s)

Total download size: 196 k
Is this ok [y/N]: y
Downloading Packages:
(1/2): libtool-ltdl-1.5.22-7.el5_4.i386.rpm | 37 kB 00:04
(2/2): mysql-connector-odbc-3.51.26r1127-2.el5.i386.rpm | 159 kB 00:01
-------------------------------------------------------------------------------------------------
Total 21 kB/s | 196 kB 00:09
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : libtool-ltdl 1/2
Installing : mysql-connector-odbc 2/2

Installed:
mysql-connector-odbc.i386 0:3.51.26r1127-2.el5

Dependency Installed:
libtool-ltdl.i386 0:1.5.22-7.el5_4

Complete!
[root@localEL5 ~]$

Now let's check driver locations and DSNs. First, we can check the installed drivers in the file /etc/odbcinst.ini

# driver definitinions
#
#

# Included in the unixODBC package
[PostgreSQL]
Description = ODBC for PostgreSQL
Driver = /usr/lib/libodbcpsql.so
Setup = /usr/lib/libodbcpsqlS.so
FileUsage = 1


# Driver from the MyODBC package
# Setup from the unixODBC package
[MySQL]
Description = ODBC for MySQL
Driver = /usr/lib/libmyodbc.so
Setup = /usr/lib/libodbcmyS.so
FileUsage = 1

Then, we can specify a DSN to connect with in /etc/odbc.ini (be careful here: the option names are case sensitive).

[sakila-connector]
driver=MySQL
Database=sakila
Socket=/var/lib/mysql/mysql.sock
User=root
Password=oracle
Finally, we can check the DSN defined above.  We'll use isql from the unixODBC package here.
[oracle@Unknown-08:00:27:c8:2a:1c lib]$ isql -v sakila-connector
+---------------------------------------+
| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |
+---------------------------------------+
Cool. When we get this, we are connected via ODBC to the DSN.  We can prove it's working by running show tables or something similar.
NB: If you get an error at this stage asking for libraries, it's likely you specified your drivers incorrectly in odbcinst.ini.
Now we have this working, we can set up HS on the Oracle side; a rough sketch of that follows.
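Just to sketch where this goes next (file names, the gateway SID and the link name here are hypothetical; the dg4odbc gateway is one way to do it - check the Heterogeneous Services documentation for your version):

$ORACLE_HOME/hs/admin/initmysql.ora:
HS_FDS_CONNECT_INFO = sakila-connector
HS_FDS_SHLIB = /usr/lib/libodbc.so

Then add a SID_DESC for mysql with PROGRAM=dg4odbc to listener.ora, create a tnsnames.ora entry containing (HS=OK), and finally:

SQL> CREATE DATABASE LINK sakila CONNECT TO "root" IDENTIFIED BY "oracle" USING 'mysql';
SQL> SELECT COUNT(*) FROM "actor"@sakila;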

People, Performance and Perception

TalentedApps - Sun, 2013-08-25 01:04

A commonly heard complaint: “My manager is a control freak and practises micro-management. He asks for suggestions but provides them all himself. He doesn’t believe in imperfection and tries to fit us into his unrealistic expectations.”

The perceiver believes that it is the reality, but he might just be focusing on the aspects which reinforce his existing beliefs.  Your friends, co-workers or society at large don’t know “the real you”; they know you as they perceive you, because they always have a perception about you.  But when most of the decision makers in an organization share the same perception about someone, then it doesn’t really matter what the reality is, as perception becomes the reality.

It’s hard for an organization to get the required contribution from an individual who is not able to accomplish his own goals within the organization, and in order to achieve your purpose you need to know how people perceive you. Any undesired perception about you, your products or your services will not go away if you deny its existence.

It’s not possible for you to communicate with each perceiver to explain who you really are, in case you are not happy about their perception. But before you make any effort to change perception, you need to understand three critical factors which contribute to the perception formation process. They are:

  1. Your performance: How you perform in a given context contributes majorly towards perception about you. Initial performance is a foundation stone in this process and often takes a lot of effort to change, in case you want to change it later, for better.
  2. Your competitors’ performance: Comparison and competition are unavoidable, and if you are afraid of them, then you really fear your own incompetence.  You need to know what your competitors are doing before they walk over you.
  3. Perceivers’ viewpoint:  You are dealing with humans and they are prone to mistakes and prejudices. It can go in your favor or against you. These people can be your customers, key influencers in your organization or anyone who is a stakeholder in your current or future endeavors.

You may want to manage one or more of these factors depending on your power of influence, but at a minimum you should always be in a position to improve your own performance.

If you don’t want to be defined by what you are not, if you want to feed your opportunities and starve your problems, you need to take charge of changing the perception about you, and the time is now. But what if you have already tried your best and are fully convinced that the perception about you is unchangeable? You are neither the first nor the last person to feel this; recharge your batteries and hit the trail again. New jobs are waiting to be done, new teams are waiting to be led and new ideas are waiting to be born…

Photo Credit: Unknown


Tagged: perception, performance

Configuring MySQL on EL5, Setting Passwords and Network Access

Barry McGillin - Sat, 2013-08-24 10:35
I find myself installing and running MySQL, in different versions, in different places, for different reasons, all the time (well, often enough to do it and not remember the little things that cost time when setting up). It's with that in mind that I'm making notes for myself, and for you as well, to help you along.

We use MySQL a lot with Oracle SQLDeveloper and many use SQLDeveloper to co-exist between MySQL and Oracle.

For most versions of Oracle Linux, we will install MySQL server from the yum repository. If you don't have one set up, you can configure one under /etc/yum.repos.d.  These notes for yum are a reference (blatant copy) from the Oracle Linux Admin guide:
  1. As root, change directory to /etc/yum.repos.d.
    # cd /etc/yum.repos.d
  2. Use the wget utility to download the repository configuration file that is appropriate for your system.
    # wget http://public-yum.oracle.com/public-yum-release.repo
    For Oracle Linux 6, enter:
    # wget http://public-yum.oracle.com/public-yum-ol6.repo
    The /etc/yum.repos.d directory is updated with the repository configuration file, in this example, public-yum-ol6.repo.
  3. You can enable or disable repositories in the file by setting the value of the enabled directive to 1 or 0 as required.
Now we are ready to install MySQL. If you haven't used yum before, play with some of the options to list packages and switch repos as you need them.  It's a great tool, saving us all lots of time with dependencies.

root@localEl5# yum install mysql-server
You can see if it's installed by doing
root@localEl5> yum list mysql-server
Loaded plugins: security
el5_latest | 1.4 kB 00:00
Installed Packages
mysql-server.i386 5.0.95-5.el5_9 installed
root@localEl5>
You can then start it with
root@localEL5> /etc/init.d/mysqld start
and check it's running by
root@localEL5> /etc/init.d/mysqld status

mysqld (pid 31298) is running...
In general, you can start mysql on the server without a server password in order to set one up for yourself. My one caveat here is that all this is for development, folks; someone with a security hat on will complain (bitterly).  I'm going to show you how to clear down all permissions so you can connect from any machine.
root@localEL5> /etc/init.d/mysqld stop
root@localEL5> /etc/init.d/mysqld status
root@localEL5> mysqld_safe --skip-grant-tables &
mysql -uroot
Now we are logged into mysql as root with no passwords.  We can check what users are here and what permissions they have. In this case, I have:
mysql> select user, host, password from user; 
+-------+--------------+-------------------------------------------+
| user | host | password |
+-------+--------------+-------------------------------------------+
| root | localhost | *2447D497B9A6A15F2776055CB2D1E9F86758182F |
| root | 192.168.1.201| *2447D497B9A6A15F2776055CB2D1E9F86758182F |
| barry | localhost | *2447D497B9A6A15F2776055CB2D1E9F86758182F |
+-------+--------------+-------------------------------------------+

The first thing I want to do is to remove duplicate entries for my user
mysql> delete from user where user='root' and host ='192.168.1.201';
Now we have:
+-------+--------------+-------------------------------------------+
| user | host | password |
+-------+--------------+-------------------------------------------+
| root | localhost | *2447D497B9A6A15F2776055CB2D1E9F86758182F |
| barry | localhost | *2447D497B9A6A15F2776055CB2D1E9F86758182F |
+-------+--------------+-------------------------------------------+
Next, I want to update the hosts to any host, which is '%' in MySQL:

 mysql> update user set host='%';

which now gives me

+-------+------+-------------------------------------------+
| user | host | password |
+-------+------+-------------------------------------------+
| root | % | *2447D497B9A6A15F2776055CB2D1E9F86758182F |
| barry | % | *2447D497B9A6A15F2776055CB2D1E9F86758182F |
+-------+------+-------------------------------------------+
2 rows in set (0.00 sec)


Now, if you want to change your passwords, make sure you do that now.  If you are on 5.1 and over, secure_auth is on and old passwords are off by default. In my version, 5.0, I need to set these to get new-style passwords and secure_auth, which is the default on all MySQL clients now.  This is done in /etc/my.cnf, followed by a restart of mysql:

old_passwords=0
secure-auth=1
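A minimal sketch of where these settings live, assuming the options belong in the [mysqld] section of /etc/my.cnf, followed by the restart:

[mysqld]
old_passwords=0
secure-auth=1

root@localEL5> /etc/init.d/mysqld restart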

mysql> update user set password=PASSWORD('oracle') where user='root';

Then flush privileges and exit:

mysql> flush privileges;

Lastly, I like my prompts to be informative, so you can also set this in your profile to set up your prompt:

export MYSQL_PS1="\u@\h [\d] > "

It'll give a prompt like this when I log in with

root@localEl5> mysql -uroot -poracle -Dmysql

giving this prompt in mysql

root@localEL5 [mysql] >

Now you are all set to connect from SQL Developer to this instance.  We can also install the sample databases from http://dev.mysql.com/doc/index-other.html

Custom Jersey WADL generation

Edwin Biemond - Thu, 2013-08-22 14:46
I had a situation where the auto-generated WADL did not match my REST services. The first difference was that the response was presented as an object instead of a collection of objects, and the second was that it could not handle JSONWithPadding as a response.  Because I use this WADL in my REST client generation, I needed to fix these issues. Luckily for me, Jersey JAX-RS allows us to provide
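Jersey 1.x exposes this through a pluggable WadlGeneratorConfig. A minimal sketch, assuming Jersey 1.x (the documentation stream name is hypothetical):

import java.util.List;
import com.sun.jersey.api.wadl.config.WadlGeneratorConfig;
import com.sun.jersey.api.wadl.config.WadlGeneratorDescription;
import com.sun.jersey.server.wadl.generators.WadlGeneratorApplicationDoc;

public class MyWadlGeneratorConfig extends WadlGeneratorConfig {
    @Override
    public List<WadlGeneratorDescription> configure() {
        // chain a generator that merges extra documentation into the generated WADL
        return generator(WadlGeneratorApplicationDoc.class)
                .prop("applicationDocsStream", "application-doc.xml")
                .descriptions();
    }
}

The class is then registered via the com.sun.jersey.config.property.WadlGeneratorConfig init-param of the Jersey servlet.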

All you need is Log, All you need is Log, ..., Log is all you need.

Marcelo Ochoa - Thu, 2013-08-22 11:44
Paraphrasing the Beatles' song, I am starting this post by saying that logging is one of the most important features that developers, sysadmins and DBAs need to successfully develop, deploy, test and tune Java applications inside the RDBMS.
Inside the OJVM this feature is even more important because there is no visual feedback when you are running Java code.
Starting with 10g, I have been using this feature as explained in my post Getting logging entering, exiting and throwing work at Oracle JVM, which according to Google Analytics is one of my most visited pages, but most of the tweaks require uploading new classes and configuring them with certain knowledge.
Starting with 12c, the logging feature is directly integrated into the OJVM, including for PDBs.
This means that by simply loading a logging.properties file into a user's schema, you can control the logging facilities for Java code running with that effective user (Oracle JVM performs specific steps to configure logging options).
For example, if I have a class loaded into SCOTT's schema and this class runs with effective user SCOTT, to activate logging for that class and user you have to execute:
loadjava -u scott/tiger@mydb -v javavm/lib/logging.properties
where the logging.properties file includes something like:
$ cat javavm/lib/logging.properties
# Limit the message that are printed on the console to INFO and above.
java.util.logging.ConsoleHandler.level = ALL
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
# Scotas OLS
com.scotas.level = INFO
org.apache.level = INFO
Remember that to use logging inside the OJVM, the SCOTT user should have SYS:java.util.logging.LoggingPermission = control; to grant that, connect as SYS and execute:
$ sqlplus "sys/pwd@mydb as sysdba"
SQL> exec dbms_java.grant_permission( 'SCOTT', 'SYS:java.util.logging.LoggingPermission', 'control', '');
SQL> commit;
Your application can also control logging levels at run time; the configuration above is static, defined when the OJVM starts executing the Java code for a specific DB session.
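For instance, a minimal sketch of changing a level from your own code with plain java.util.logging (the logger name is just an example; this needs the LoggingPermission grant shown above):

import java.util.logging.Level;
import java.util.logging.Logger;

public class LogControl {
    public static void enableDebug() {
        // raise the verbosity of the com.scotas hierarchy at run time
        Logger.getLogger("com.scotas").setLevel(Level.FINE);
    }
}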
One example of this is the Scotas OLS web page for controlling logging properties at run time (the old logging configuration servlet of Apache Solr).
As shown in the screenshot, the logging level for each hierarchy is defined with a default setting inherited from the logging.properties file, but we can change that value to any other value using the form.
Finally, where do my logs go?
Independent of whether you are running your Java application in a traditional RDBMS installation or in a new CDB/PDB configuration, log information will go to the .trc file associated with the session. This is at:
$ORACLE_BASE/diag/rdbms/$ORACLE_SID/$ORACLE_SID/trace
The .trc file is named using the convention $ORACLE_SID_jnnn_$PID.trc, for example:
$ cat orclc_j020_11761.trc
Trace file /u01/app/oracle/diag/rdbms/orclc/orclc/trace/orclc_j020_11761.trc
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/12.1.0.1.0/dbhome_1
System name: Linux
Node name: localhost
Release: 3.2.0-52-generic
Version: #78-Ubuntu SMP Fri Jul 26 16:21:44 UTC 2013
Machine: x86_64
Instance name: orclc
Redo thread mounted by this instance: 1
Oracle process number: 7
Unix process pid: 11761, image: oracle@pocho (J020)

*** 2013-08-22 11:29:41.805
*** SESSION ID:(387.581) 2013-08-22 11:29:41.805
*** CLIENT ID:() 2013-08-22 11:29:41.805
*** SERVICE NAME:(SYS$USERS) 2013-08-22 11:29:41.805
*** MODULE NAME:(DBMS_SCHEDULER) 2013-08-22 11:29:41.805
*** ACTION NAME:(SOLRSERVLETSERV#1) 2013-08-22 11:29:41.805
*** CONTAINER ID:(3) 2013-08-22 11:29:41.805
.....
*** 2013-08-22 11:52:37.581
INFO: Unset log level on 'root'.
Aug 22, 2013 11:52:37 AM com.scotas.solr.servlet.LogLevelSelection doPost
INFO: Set 'com.scotas' to INFO level.
It is important to note that even if you have multiple PDBs, all log files will go to the trace directory of the CDB, because all PDBs re-use the process infrastructure of the CDB.
I will post another blog entry about the impact of the process infra-structure of the Multitenant Environment and the OJVM applications.

An efficient way to do massive inserts with Oracle JDBC

Marcelo Ochoa - Wed, 2013-08-21 10:34
  I was exchanging emails with my friend Kuassi Mensah (@kmensah) to see how to use JDBC for what is known as array DML.
  To a DBA, this is typically known in the form:
INSERT INTO TEST_SOURCE_BIG_A
        SELECT a.line a_line, a.name a_name, a.owner a_owner, a.text a_text, a.type a_type
           FROM TEST_SOURCE_BIG a;
  If the two tables have the same structure, the RDBMS realizes this and puts together batches of between 140 and 170 rows each.
  This will cause any associated index (such as Scotas OLS) to receive a call to the function:
FUNCTION ODCIIndexInsert (
   ODCIIndexInfo ia,
   ridlist ODCIRidList,
   newvallist varray_of_column_type,
   ODCIEnv env)
RETURN NUMBER
  where the ridlist argument is an array with all the ROWIDs.
  The point is: how can I exploit that functionality from JDBC, or how do I do massive inserts efficiently from Java (very important if your app is inserting tweets or FB comments)?
  The tip is: the setExecuteBatch method of OraclePreparedStatement.
  Broadly speaking, the code should be of the form:
        PreparedStatement ps =
            conn.prepareStatement("insert into test_source_big_a values (?,?,?,?,?)");
        // Change batch size for this statement to 500
        ((OraclePreparedStatement) ps).setExecuteBatch(500);
        for (int i = 0; i <= 500; i++) {
            ps.setInt(1, i);
            ps.setString(2, "name-" + i);
            ps.setString(3, "owner-" + i);
            ps.setString(4, "this is a long test using sendBatch - statement" + i);
            ps.setString(5, "type-" + i);
            ps.executeUpdate(); // JDBC queues this for later execution
        }
        ((OraclePreparedStatement) ps).sendBatch(); // JDBC sends the queued requests
        conn.commit();
  Calling that piece of code generates 6 batches of rows:
       100 + (4 x 93) + 98
   which, from the point of view of the index implementation, will be much more efficient: instead of enqueuing 500 messages in the AQ, it is just going to enqueue 6!
   If you send more rows than the batch parameter (500), JDBC automatically sends them without waiting for sendBatch or commit.
   A parameter Kuassi recommended I consider is the SDU (session data unit), which is defined at the SQL*Net level to avoid SQL*Net packet fragmentation.
  Note that this is not only much more efficient from the point of view of the inserted tables, but also with respect to network use, since it generates far fewer round-trips between the app side and the RDBMS.
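  For comparison, a sketch of the same loop using standard JDBC batching (addBatch/executeBatch), which is portable across drivers and achieves a similar reduction in round-trips:

        PreparedStatement ps =
            conn.prepareStatement("insert into test_source_big_a values (?,?,?,?,?)");
        for (int i = 0; i <= 500; i++) {
            ps.setInt(1, i);
            ps.setString(2, "name-" + i);
            ps.setString(3, "owner-" + i);
            ps.setString(4, "this is a long test using addBatch - statement" + i);
            ps.setString(5, "type-" + i);
            ps.addBatch();             // queue the row on the client side
            if (i > 0 && i % 500 == 0) {
                ps.executeBatch();     // flush a full batch to the server
            }
        }
        ps.executeBatch();             // flush the remaining rows
        conn.commit();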

12C ADG Enhancements

Fairlie Rego - Tue, 2013-08-20 22:27
  •          Explain plan now works on Standby
SQL> select database_role from v$database;
DATABASE_ROLE
----------------
PHYSICAL STANDBY

SQL> select open_mode from v$database;
OPEN_MODE
--------------------
READ ONLY WITH APPLY

SQL> select * from v$version;

BANNER                                                                               CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production              0
PL/SQL Release 12.1.0.1.0 - Production                                                    0
CORE    12.1.0.1.0      Production                                                        0
TNS for Linux: Version 12.1.0.1.0 - Production                                            0
NLSRTL Version 12.1.0.1.0 - Production                                                    0

SQL> explain plan for select * from dual;

Explained.

SQL> @?/rdbms/admin/utlxpls

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 272002086

----------------------------------------------------------------------------------
| Id  | Operation                 | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |      |     1 |     2 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS STORAGE FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------

  •         Data files can be moved in ADG mode. 

In 11g this could not be done; a shutdown of the standby was needed if you had to move datafiles as part of an ILM process while running an Active Data Guard configuration.

In 11.2.0.3
SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ ONLY WITH APPLY

SQL> select thread#,sequence# from v$managed_standby where process like 'MRP%';

   THREAD#  SEQUENCE#
---------- ----------
         2       1652

SQL>  select tablespace_name from dba_tablespaces where status='READ ONLY';

TABLESPACE_NAME
------------------------------
TEST_2009Q1

SQL> select file_name from dba_data_files where tablespace_name='TEST_2009Q1';

FILE_NAME
-------------------------------------------------------------------------------------------------------------------------------------------------
+DATA/FOOdr/datafile/TEST_2009Q1.510.dbf

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

[oracle@exadb03:FOODR1]/home/oracle => rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Tue Aug 13 11:08:52 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: FOO (DBID=2633655758)

RMAN> backup as copy tablespace TEST_2009Q1 format='+RECO';

Starting backup at 13-AUG-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=4906 instance=FOODR1 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00640 name=+DATA/FOOdr/datafile/TEST_2009Q1.510.dbf
output file name=+RECO/FOOdr/datafile/TEST_2009Q1.18424.823345739 tag=TAG20130813T110857 RECID=1539 STAMP=823345738
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 13-AUG-13

SQL> recover managed standby database cancel;
Media recovery complete.
SQL>

RMAN> switch tablespace TEST_2009Q1 to copy;

using target database control file instead of recovery catalog
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of switch to copy command at 08/13/2013 11:11:17
RMAN-06572: database is open and datafile 640 is not offline

RMAN>


At this stage you need to shut down the standby, restart it in mount mode, and then run the switch to copy command.

This obviously impacts service if you have database services which run only on the standby and *DO NOT* run on the primary.

The 12c feature ALTER DATABASE MOVE DATAFILE solves this problem; details are in the documentation.
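As an illustration of the shape of the new command (using the datafile from the example above; the documented syntax also accepts KEEP and REUSE options):

SQL> ALTER DATABASE MOVE DATAFILE '+DATA/FOOdr/datafile/TEST_2009Q1.510.dbf'
       TO '+RECO/FOOdr/datafile/TEST_2009Q1.dbf';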

Python Workshop for Women Indy #2 and CMH #2 coming up!

Catherine Devlin - Mon, 2013-08-19 14:59

The Midwest Python Workshop for women and their friends is back! We've got new workshops scheduled, ready to take new batches of students:

Indianapolis Python Workshop, Sep. 27-28, 2013; sponsored by Six Feet Up and hosted at Launch Fishers

Columbus Python Workshop, Oct. 18-19, 2013; sponsored by LeadingEdje and hosted at The Forge by Pillar

The Workshop is a free, friendly, hands-on introduction to computer programming using Python. Women of all ages and backgrounds are the primary target (but you can bring a male participant as your guest).

Please spread the word!

Deprecated database init. parameters in 12c

Syed Jaffar - Sun, 2013-08-18 01:39
Here is a very short blog post about the deprecated database initialization parameters in 12c.  It sometimes becomes essential to be familiar with the database features and init parameters that are either desupported or deprecated in a new release of Oracle Database before proceeding with an upgrade. This is because some legacy application upgrade decisions depend, directly or indirectly, on those changes.
To obtain a list of deprecated database parameter in 12c, use the following SQL statement:

SQL> SELECT name from v$parameter WHERE isdeprecated = 'TRUE' ORDER BY name;
 
Here is the list:
 
active_instance_count
background_dump_dest
buffer_pool_keep
buffer_pool_recycle
commit_write
cursor_space_for_time
fast_start_io_target
global_context_pool_size
instance_groups
lock_name_space
log_archive_local_first
log_archive_start
max_enabled_roles
parallel_automatic_tuning
parallel_io_cap_enabled
parallel_server
parallel_server_instances
plsql_v2_compatibility
remote_os_authent
resource_manager_cpu_allocation
sec_case_sensitive_logon
serial_reuse
sql_trace
standby_archive_dest
user_dump_dest
 
I am a little bit surprised to see the sec_case_sensitive_logon parameter in the list.

Session based sequences in 12c.

Gary Myers - Sat, 2013-08-17 21:40
Catching up on some blogs, and I saw this 12c gem by Syed Jaffar Hussain.

Sequences are great. But sometimes they're just a little more unique than you actually need. Sort of like GUIDs. Start churning through them and they're quickly nine or ten digits. There's nothing wrong with long keys from a technical perspective as the extra byte or two of storage is rarely significant. But they can be greedy from a screen layout point of view (especially in tabular forms). And there's a greater chance of getting digits mixed up in email or conversations about them.

If you are doing a nightly or weekly load, it can be nice to load up your half-a-million rows with a 'batch id' and a 'sequence in batch'. Session based sequences are a nice concept that give you scope for keeping those values in a smaller range.

Another trick for reducing the size of IDs is to convert them into hex. You'll probably only save one character there though, but you do get a mix of alphabetic and numeric characters. TO_CHAR and TO_NUMBER are both happy to use 'XXXXXXXX' format masks to convert between decimal and hex.
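A quick sketch of both ideas (the sequence name is arbitrary; SESSION is the 12c clause that scopes the sequence state to your session):

SQL> CREATE SEQUENCE batch_seq SESSION;
SQL> SELECT batch_seq.NEXTVAL FROM dual;
SQL> SELECT TO_CHAR(1234567890, 'XXXXXXXX') AS hex_id FROM dual;   -- 499602D2
SQL> SELECT TO_NUMBER('499602D2', 'XXXXXXXX') AS dec_id FROM dual; -- 1234567890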


Oracle SOA Suite 11g Performance Tuning Cookbook

Antony Reynolds - Tue, 2013-08-13 17:39

Just received this to review.

It’s a Java World

The first chapter identifies tools and methods to identify performance bottlenecks, generally covering low-level JVM and database issues.  Useful material, but not really SOA specific, and I think the authors missed the opportunity to share the knowledge they obviously have of how to relate these low-level JVM measurements to SOA causes.

Chapter 2 uses the EMC Hyperic tool to monitor SOA Suite, and so this chapter may be of limited use to many readers.  Many, but not all, of the recipes could have been accomplished using the FMW Control that ships with SOA Suite and is included in its license.  One of the recipes uses DMS, which is the built-in FMW monitoring system built by Oracle before the acquisition of BEA.  Again, this seems to be more about Hyperic than SOA Suite.

Chapter 3 covers performance testing using Apache JMeter.  Like the previous chapters, there is very little specific to SOA Suite; indeed, in my experience many SOA Suite implementations do not have a web service to initiate composites, relying instead on adapters.

Chapter 4 covers JVM memory management, this is another good general Java section but has little SOA specifics in it.

Chapter 5 is yet more Java tuning, in this case generic garbage collection tuning.  Like the earlier chapters, good material but not very SOA specific.  I can’t help feeling that the authors could have made more connections with SOA Suite specifics in their recipes.

Chapter 6 is called platform tuning, but it could have been titled miscellaneous tuning.  This includes a number of Linux optimizations, WebLogic optimizations and JVM optimizations.  I am not sure that yet another explanation of how to create a boot.properties file was needed.

Chapter 7 homes in on JMS & JDBC tuning in WebLogic.

SOA at Last

Chapter 8 finally turns to SOA specifics. Unfortunately, the description of what dispatcher invoke threads do is misleading: they only control the number of threads retrieving messages from the request queue; synchronous web service calls do not use the request queue and hence do not use these threads.  Several of the recipes in this chapter do more than alter the performance characteristics, they also alter the semantics of the BPEL engine (such as “Changing a BPEL process to be transient”), and I wish there was more discussion of the impacts of these in the chapter.  I didn’t see any reference to the impact on recoverability of processes when turning on in-memory message delivery.  That said, the recipes do cover a lot of useful optimizations, and if used judiciously they will cause a boost in performance.

Chapter 9 covers optimizing the Mediator, primarily tweaking Mediator threading.  The descriptions of the impacts of changes in this chapter are very good, and give some helpful indications of whether they will apply to your environment.

Chapter 10 touches very lightly on Rules and Human Workflow, this chapter would have benefited from more recipes.  The two recipes for Rules do offer very valuable advice.  The two workflow recipes seem less valuable.

Chapter 11 takes us into the area where the greatest performance optimizations are to be found, the SOA composite itself.  Seven generally useful recipes are provided, and I would have liked to see more in this chapter, perhaps at the expense of some of the Java tuning in the first half of the book.  I have to say that I do not agree with the “Designing BPEL processes to reduce persistence” recipe; there are better, more maintainable and faster ways to deal with this.  The other recipes provide valuable ideas that may help the performance of your composites.

Chapter 12 promises “High Performance Configuration”.  Three of the recipes, on creating a cluster, configuring an HTTP plug-in and setting up distributed queues, are covered better in the Oracle documentation, particularly the Enterprise Deployment Guide.  There are, however, some good suggestions in the recipe about deploying on virtualized environments; I wish they had said more about this.  The recipe on JMS bridges is also a very valuable one that people should be aware of.

The Good, the Bad, and the Ugly

A lot of the recipes are really just trivial variations on other recipes, for example they have one recipe on “Increasing the JVM heap size” and another on “Setting Xmx and Xms to the same value”.

Although the book spends a lot of time on Java tuning, that of itself is reasonable, as a lot of SOA performance tuning is tweaking JVM and WLS parameters.  I would have found it more useful if the dots were connected to relate the Java/WLS tuning sections to specific SOA use cases.

As the authors say when talking about adapter tuning “The preceding sets of recipes are the basics … available in Oracle SOA Suite. There are many other properties that can be tuned, but their effectiveness is situational, so we have chosen to focus on the ones that we feel give improvement for the most projects.”.  They have made a good start, and maybe in a 12c version of the book they can provide more SOA specific information in their Java tuning sections.

Add the book to your library; you are almost certain to find useful ideas in it. But make sure you understand the implications of the changes you are making: the authors do not always spell out the impact on the semantics of your composites.

A sample chapter is available on the Packt Web Site.
