We are now accepting speaker abstracts for GLOC 2014.
It is the standard speaking gig ... free conference attendance for speakers and a one-hour time slot.
Keynotes on Tuesday by Tom Kyte and Steven Feuerstein ... workshops on Monday by Carlos Sierra and Scott Spendolini (maybe a 3rd or 4th workshop?).
This year we are working on a stronger Apps track ... expecting the usual high quality sessions for DBA / Developer / DW&BI tracks.
The call for abstracts will be open for a long time ... this is just the first notification that it is now available.
I hope to see you at GLOC 2014!
Thanks John Hurley aka @GrumpyOldDBA aka NEOOUG President
Yesterday I was trying to figure out if any queries on a particular production database were using subpartition statistics on a certain table. We are having trouble getting the statistics gathering job to finish gathering stats on all the subpartitions of this table in the window of time we have given the stats job. My thought was that we may not even need stats on the subpartitions, so I wanted to find queries that would prove me wrong.
My understanding of Oracle optimizer statistics is that there are three levels – table or global, partition, and subpartition. The table I am working on is partitioned by range and subpartitioned by list. So, I think that the levels are used in these conditions:
- Global or table: Range that crosses partition boundaries
- Partition: Range is within one partition but specifies more than one list value
- Subpartition: Range is within one partition and specifies one list value
The table I was working on was partitioned by week and subpartitioned by location, so a query that specified a particular week and an individual location should use the subpartition stats.
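To make that layout concrete, here is a minimal sketch of the kind of range-list table described, using the renamed column names that appear later in this post; the partition and subpartition names and values are hypothetical, not the real production definition:

-- Hypothetical range-list table: range partitioned by week, list subpartitioned by location
CREATE TABLE mytable (
  range_column DATE,          -- range partition key (week)
  list_column  NUMBER,        -- list subpartition key (location)
  other_data   VARCHAR2(100)
)
PARTITION BY RANGE (range_column)
SUBPARTITION BY LIST (list_column)
SUBPARTITION TEMPLATE (
  SUBPARTITION loc_1234 VALUES (1234),
  SUBPARTITION loc_5678 VALUES (5678)
)
(
  PARTITION week_20130609 VALUES LESS THAN (TO_DATE('20130616','YYYYMMDD')),
  PARTITION week_20130616 VALUES LESS THAN (TO_DATE('20130623','YYYYMMDD'))
);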
So, I did some experimentation and came up with this query:
select p.PLAN_HASH_VALUE, min(p.sql_id), count(*)
from DBA_HIST_SQL_PLAN p
where
p.OBJECT_OWNER='MYOWNER' and
p.OBJECT_NAME='MYTABLE' and
p.partition_start=p.partition_stop and
substr(p.partition_start,1,1) in ('0','1','2','3','4','5','6','7','8','9') and
p.sql_id in (select sql_id from DBA_HIST_SQLSTAT)
group by p.PLAN_HASH_VALUE
order by p.PLAN_HASH_VALUE;
I’ve replaced the real owner and table name with MYOWNER and MYTABLE. The point of this query is to find the distinct plans that use subpartition statistics and one sql query as an example of each plan. There were multiple queries with the same plans but slightly different constants in their where clause so I just needed one example of each.
In my experimentation I found that plans that had the same numbers for the partition stop and start were the plans that used subpartition stats. I’m not sure about the plans that don’t have numbers in their partition start and stop columns.
Here is what the output looks like:
PLAN_HASH_VALUE MIN(P.SQL_ID)   COUNT(*)
--------------- ------------- ----------
      151462653 fugdxj00cnwxt          1
      488358452 21kr79rst8663          2
      634063666 5fp4rnzgw6gvc          1
     1266515004 98zbx8gw95zf8          2
     1397966543 37gaxy58sr1np          2
     1468891601 5fp4rnzgw6gvc          1
     1681407819 001aysuwx1ba4        230
     1736890182 64tmnnap05m6b          2
     2242394890 2tp8jx3un534j          1
     2243586448 9fcd80ms6h7j4          2
     2418902214 64tmnnap05m6b          1
     2464907982 5fp4rnzgw6gvc          1
     3840767159 05u7fy79g0jgr        143
     4097240051 5mjgz2v8a3p6h          1
This is the output on our real system. Once I got this list I built a script to dump out all of these plans and the one sql_id for each:
select * from table(DBMS_XPLAN.DISPLAY_AWR('fugdxj00cnwxt',151462653,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('21kr79rst8663',488358452,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('5fp4rnzgw6gvc',634063666,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('98zbx8gw95zf8',1266515004,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('37gaxy58sr1np',1397966543,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('5fp4rnzgw6gvc',1468891601,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('001aysuwx1ba4',1681407819,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('64tmnnap05m6b',1736890182,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('2tp8jx3un534j',2242394890,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('9fcd80ms6h7j4',2243586448,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('64tmnnap05m6b',2418902214,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('5fp4rnzgw6gvc',2464907982,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('05u7fy79g0jgr',3840767159,NULL,'ALL'));
select * from table(DBMS_XPLAN.DISPLAY_AWR('5mjgz2v8a3p6h',4097240051,NULL,'ALL'));
Here is an edited-down output of just the relevant part of the first plan:
Plan hash value: 151462653

--------------------------------------------------------------------
| Id  | Operation                   | Name    | Pstart | Pstop  |
--------------------------------------------------------------------
|  31 |  TABLE ACCESS STORAGE FULL  | MYTABLE |  41017 |  41017 |
--------------------------------------------------------------------
This query had conditions in its where clause like this:
LIST_COLUMN = 1234 AND (RANGE_COLUMN BETWEEN TO_DATE('20130609000000','YYYYMMDDHH24MISS') AND TO_DATE('20130615000000','YYYYMMDDHH24MISS'))
I’ve renamed the real column used for the list subpartitioning to LIST_COLUMN and the real column used for the range partitioning to RANGE_COLUMN.
One interesting thing I realized was that, since we are on an Exadata system and there are no visible indexes on the subpartitioned table, the subpartition stats aren’t being used to determine whether the query will use an index scan or a full scan. But they are used in these queries to determine the number of rows the full scan will return, so that could impact the plan.
I’m thinking of using table preferences to just turn off the subpartition stats gathering using a call like this:
begin
  DBMS_STATS.SET_TABLE_PREFS(
    'MYOWNER',
    'MYTABLE',
    pname  => 'GRANULARITY',
    pvalue => 'GLOBAL AND PARTITION');
end;
/
As it is, the table has 40,000 subpartitions and the daily stats job isn’t finishing anyway, so regardless of the queries that use the subpartition stats I think we should set the preference. Maybe we just leave dynamic sampling to handle the queries that actually use the one subpartition’s stats, or have some application job gather stats on the one subpartition when it is initially loaded. It is a work in progress, but I thought I would share what I’ve been doing.
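If we do go with an application job that gathers stats on just the newly loaded subpartition, it could look something like this; it is only a sketch, and the subpartition name is hypothetical (a real job would look up the name for the week and location it just loaded):

begin
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'MYOWNER',
    tabname     => 'MYTABLE',
    partname    => 'WEEK_20130609_LOC_1234',  -- the one subpartition just loaded (hypothetical name)
    granularity => 'SUBPARTITION');
end;
/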
By now this is already not the most current version; JasperReports has moved on to 5.5.0. But since I created the 5.2.0 package a while ago, I might as well post the instructions here. The 5.5.0 version will follow.
Here you go ...
This is an updated version of the original instructions found here: http://daust.blogspot.de/2013/01/upgrading-jasperreports-libraries-to-501.html
Step 1: Download the library files for 5.2.0
You can download the files here:
Step 2: Shutdown the Apache Tomcat J2EE server
Step 3: Remove the existing JasperReports libraries from your existing installation
Typically, after you have installed a previous version of the JasperReportsIntegration toolkit on your Apache Tomcat J2EE server (for example, version 4.7.0 of JasperReports), the library files will be located in the directory $CATALINA_HOME/webapps/JasperReportsIntegration/WEB-INF/lib, where $CATALINA_HOME represents the path to your Tomcat installation.
You have to remove these libraries first. In this directory you should find two removal scripts: _jasper-reports-delete-libs-4.7.0.sh and _jasper-reports-delete-libs-4.7.0.cmd, for *nix and Windows respectively. On *nix systems you have to make the script executable first, e.g.: chmod u+x _jasper-reports-delete-libs-4.7.0.sh. Then you can call it and it will remove all files for version 4.7.0. It will NOT remove the JasperReportsIntegration file itself, nor any other libraries which YOU might have placed there deliberately.
You can always find the required removal scripts here: http://www.opal-consulting.de/downloads/free_tools/JasperReportsLibraries/ .
Whenever I release another package, the removal scripts for that package will be shipped as well.
Step 4: Install the new 5.2.0 libraries
Now you can just copy the new libraries from JasperReportsLibraries-5.2.0.zip into $CATALINA_HOME/webapps/JasperReportsIntegration/WEB-INF/lib.
Step 5: Start the Apache Tomcat J2EE server again
Now your system should be upgraded to the most current JasperReports 5.2.0!
Just drop me a note when you need updated libraries for a later JasperReports version, etc. I have scripts in place to create a new package of the libraries.
Here you can find the notes from my upgrade (4.5.0 => 4.8.0) on Linux to illustrate the process; I hope it makes sense:
** download the libraries from:
** to /home/jasper/JasperReportsLibraries
** unzip them
unzip JasperReportsLibraries-4.8.0.zip -d JasperReportsLibraries-4.8.0
** stop tomcat server
** remove libraries of current jasper reports release
chmod +x _jasper-reports-delete-libs-4.5.0.sh
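** run the removal script (this step is implied by the chmod above; same script name)
./_jasper-reports-delete-libs-4.5.0.sh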
** copy libraries of the new release to the WEB-INF/lib directory
cp /home/jasper/JasperReportsLibraries/JasperReportsLibraries-4.8.0/* /home/jasper/tomcat/webapps/JasperReportsIntegration/WEB-INF/lib
** restart tomcat
Cloud computing is allowing big data enterprises to transform how they manage stores of information. Rather than relying on legacy solutions, such as on-premises datacenters, to house digital assets, businesses are now capable of purchasing scalable memory. With this strategy, it's easier for IT managers to upgrade storage swiftly.
More options means more solutions
In addition to these strategies, InformationWeek reported that the recent surge of cloud services has provided enterprises with new options for crafting unique and efficient methods of governing their data. For example, information can be partitioned into different sections of the storage space according to how often it is accessed.
As the cloud enables employees to sync with data at any time, as long as they are utilizing an Internet-optimized device, the introduction of a cloud-based infrastructure could potentially make certain layers of information more heavily trafficked. According to the source, however, enterprises can now leverage flash storage arrays, which will provide organizations with an alternative storage functionality to handle increases in data access. All other information that typically has a much lower traffic rate can still be housed traditionally.
Hybrid cloud options, such as the example listed above, enable IT managers to retain more control over their sensitive data. Converged cloud infrastructures, for instance, allow decision-makers to leverage more applications that can be deployed across digital architectures. These strategies, such as database administration services, provide organizations with fully operational cloud solutions that can be configured to support the overall needs of the company. Remote DBA services also make categorizing data simpler and faster, which allows tech teams to focus their energies on maintaining the daily operations of the company.
Transforming the business landscape
The Guardian noted that cloud computing has become more than a trend in the enterprise landscape. More companies are integrating a cloud option into their existing infrastructures, making it easier for them to augment the speed and agility of their services. Additionally, as the cloud advances, more options are becoming available for IT managers that will allow companies to maintain a stronger edge over their competition.
Decision-makers should be considering how the cloud can transform their operations without costing more in legacy upgrades.
RDX offers a full suite of cloud migration and administrative services that can be tailored to meet any customer's needs. To learn more about our full suite of cloud migration and support services, please visit our Cloud DBA Service page or contact us.
If you are using a desktop PC running Windows but spend your life connecting to UNIX and Linux servers, like most DBAs and sysadmins, you really need this in your life! It’s so much better than anything I’ve ever used before. Even those really expensive desktop X emulators (you know who you are)! What’s more, it’s a self-contained .exe, so no need for installation. Just unzip and go. Perfect on a memory stick!
Tim…
I tried executing top, and it too ended with a segmentation fault some of the time. At the times it did start, it would give a flashing display.
I checked the server memory and it had 73 GB free memory.
Eventually I opened up the adpreclone.pl script in vi to study what it was doing. At one place, I noticed that it was looking for the environment variable $USER. When I checked, on the server, echo $USER, did not return anything. I checked on other working servers and echo $USER returned the unix applmgr username.
So I manually set the USER environment variable and re-ran the clone preparation.
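Something along these lines, assuming the applmgr account mentioned above is the OS user running the clone (the exact value is whatever echo $USER returns on the working servers):

export USER=applmgr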
perl adpreclone.pl appsTier
I entered the apps password, and it worked fine.
This was a new server, so it seems the unix team missed something during the build. I looked at /etc/profile but wasn't able to find out why the USER environment variable was not set. I'll update this post when I discover the reason.
by Steven Feuerstein, 2013
I should not be here.
I should not be.
Too many humans
devouring the world.
I would leave
to save a single tree
or to ensure
a fern leaf unfurled.
But I'll stay, instead,
and switch allegiance
from a genocidal species
to my planet, my home:
my home of surpassing beauty.
If I stay, instead,
and save a single tree
and help tree ferns
unfurl their leaves,
perhaps I could at least
Out of these, I would recommend the Console (from browser, batch), EMCLI (command line, batch) and PULL (command line) methods.
Refer: Console method (from browser, batch): http://docs.oracle.com/cd/E24628_01/install.121/e22624/install_agent.htm#CACJEFJI
EMCLI (command line, batch) (new from 12.1.0.3): http://www.oracle.com/technetwork/oem/install-upgrade/em-12103-agent-deployment-1967206.pdf
All other methods: http://docs.oracle.com/cd/E24628_01/install.121/e24089/install_agent_usng_rsp.htm
You can contribute your EM 12c content on the above site.
A Guest Post by Esteban Kolsky, industry influencer.

The great experiment continues. We are exploring and establishing the right questions to ask when undertaking a customer experience initiative.
So far we have discussed who owns the customer experience and the cultural (people) aspects of deploying a customer experience initiative. In this third post I’m going to talk about processes. (This sponsored research investigation into customer experience is brought to you via my good friends at Oracle.)
The format of this exercise is to pose four questions around the topics of people, process, and technology and explore the implications of each question. The questions, and more importantly your answers, should give you sufficient information to launch your customer experience initiative—or at least to build a framework towards it.
First, are your processes well documented?
I have very interesting discussions with clients when I ask this question. Of course, the initial answer is always yes—followed by something like “we spent x amount of time doing a BPO project and it is all documented.” My follow up question is always—are your processes updated? Most organizations fail to implement some sort of technology or workflow that will allow them to continuously update the documentation (or in some cases, even find the documentation after the initial project). Any minor change—a compliance requirement, a change in organization hierarchies, a departure of a staff member in some cases—can change the process. Even if the changes are slight, they can accumulate over time and translate into large changes.
Action Items: 1) Ensure that documentation exists and can be easily found, 2) Keep the documentation updated, and 3) Make sure the process changes you make can be introduced into the existing documentation.
Second, do you have flexible processes?
Most everyone believes their processes are flexible. After all, nearly all processes have been modified numerous times since their inception—and that clearly denotes their flexibility—right? Yes, to a certain extent. However, a large number of processes are inflexible because of their dependency on a specific person, channel, location, or even a system or solution (called external dependencies). This is not about dependencies between processes (I cover that below). This is about processes that have external dependencies and no inter-process dependencies. In reality, not having interaction with other processes is what makes a process less flexible, because some of the work and information is likely (or at least possibly) going to be repeated across different processes. Therefore, having rigid processes that cannot be replaced or integrated with new processes can create major problems.
Action Item: First, understand the dependencies between processes and external factors, such as people, solutions, and technologies—anything that is not a process. Second, find a way to replace the dependency with one that is more sensible. Then the process can be flexible enough to be extended or modified.
Third, do you understand the dependencies between processes?
There is no process that exists by itself, independent of anything else. As a matter of fact, the entire concept of creating and building processes exists because of their interdependencies—to ensure that the actions executed by those processes are leveraged in one part of the organization and then generate a result in another part of the organization. The question to ask is: are the dependencies between processes well documented so if (more likely, when) you make a change to a process, you know what other processes are affected and can quickly implement necessary changes?
Action Item: Ensure the interdependencies between processes are documented and that the documentation processes can accommodate new and different relationships between them.
Fourth, do you have processes for changing processes?
This might seem like a redundant question to ask; however, it is actually one of the most important aspects of changing processes—and all due to a single reason. When you change a process once, you will need to change it again. The benefits from a single change compound over time as more changes are made (from single adjustments to entire end-to-end reengineering). This is why you need to make sure you have a process in place to make the changes: nothing is once-and-done when it comes to processes. Furthermore, changes done for the purpose of customer experience are always going to have a shorter life than any other change, mainly because the customers’ needs and wants will change constantly. Other variables, such as channels, resolutions, or compliance, also will change, and there is going to be a need to revamp them often.
Action Item: If you don’t have a process for changing processes, that should be your first stop in the adoption of customer experience.
These are the basic questions you will need to ask yourself to undertake the process changes necessary to adopt a customer experience initiative. Of course, the project becomes more complex when you weave these answers with your answers from the previous post (changes in culture to deliver better customer experiences). And that needs to be done before you undertake the final set of questions (coming in the next post) on technology use for customer experience.
Until then, I would love to hear from you. Are these questions representative of the changes in process that you have experienced when undertaking a customer experience initiative? Is your experience different? What am I missing?
Note: You can respond via my blog or scroll down and post a comment.
Busy times lately here at the ‘Lab. We’ve grown from a small band of three to six in the past six weeks.
Joining our happy little crew are Osvaldo, whom we were lucky to find on our adventure to Mexico, Raymond, a friend of Anthony’s from Taleo, and Tony, whom I’ve known for many years.
Our ‘Lab veterans have been road warriors lately. Earlier this week, Anthony spoke at the OTN China Tour in Beijing, showing off the Glass concept app he built, as well as the Leap Motion-controlled robotic arms he and Noel hacked together right before OpenWorld.
Noel was in Mexico, and soon, he’ll be heading to UKOUG Tech 13 to speak. His session is called “Oracle Fusion & Cloud Applications: A Platform for Building New User Experiences” at the happy hour friendly time of 17:45 on Tuesday, December 3.
If you’re attending Tech 13, drop by and say hi, or just look for Noel. He’ll be hanging around the show all week.
Anyway, I have a backlog of posts, just not a backlog of time to push them. Stay tuned.
We all love a good commandline utility. It gives us that warm feeling of control and puts hairs on our chests. Either that, or it means we can script the heck out of a system, automate many processes, and concentrate on more important matters.
However, some of the OBIEE commandline utilities can’t be used in Production environments at many sites because they need the credentials for OBIEE stored in a plain-text file. Passwords in plain-text are bad, mmmm’kay?
Two utilities in particular that it is a shame can’t be scripted up and deployed in Production because of this limitation are the Presentation Services Catalog Manager and the Presentation Services Replication Agent. Both perform very useful purposes, and what I want to share here is a way of invoking them more securely.

Caveat
IANAC : I Am Not A Cryptographer! Nor am I a trained security professional. Always consult a security expert for the final word on security matters.
The rationale behind developing the method described below is that some sites will have a “No Plaintext Passwords” policy which flat out prevents the use of these OBIEE utilities. However, at the same sites the use of SSH keys to enable one server to connect to another automatically is permitted. On that basis, key-based encryption for the OBIEE credentials may be considered an acceptable risk. As per Culp’s 9th law of security administration, it’s all about striking the balance between enabling functionality and mitigating risk.
The method described below is, I believe, a bit more secure than plaintext credentials, but it is not totally secure. It uses key-based encryption to secure the previously plaintext credentials that the OBI utility requires. This is one step better than plaintext alone, but it is still not perfect. If an attacker gained access to the machine they could still decrypt the file, because the key is held on the machine without a passphrase to protect it. The risk here is that we are using security by obscurity (because the OBIEE credentials are in an encrypted file it appears secure, even though the key is held locally), and like the emperor’s new clothes, if someone takes the time to look closely enough there is still a security vulnerability.
My final point on this caveat is that you should always bear in mind that if an attacker gains access to your OBIEE machine then they will almost certainly be able to do whatever they want regardless, including decrypting the weblogic superuser credentials or resetting them to a password of their own choosing.

Overview
Two new shiny tools I’ve acquired recently and am going to put to use here are GnuPG (gpg) and mkfifo. GPG provides key-based encryption and decryption and is available by default on common Linux distributions, including Oracle Linux. mkfifo is also commonly available and is a utility that creates named pipes, enabling two unrelated processes to communicate. For a detailed description and advanced usage of named pipes, see here.
This is a one-time setup activity. We create a key in gpg, and then encrypt the plain text credentials file using it. The first step is to create a gpg key, using gpg --gen-key. You need to specify a “Real name” to associate with the key; I just used “obiee”. Make sure you don’t specify a passphrase (otherwise you’ll be back in the position of passing plain text credentials around when you use this script).
$ gpg --gen-key
gpg (GnuPG) 1.4.5; Copyright (C) 2006 Free Software Foundation, Inc.
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions. See the file COPYING for details.

Please select what kind of key you want:
   (1) DSA and Elgamal (default)
   (2) DSA (sign only)
   (5) RSA (sign only)
Your selection?
DSA keypair will have 1024 bits.
ELG-E keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 2048
[...]
Real name: obiee
Email address:
Comment:
[...]
You don't want a passphrase - this is probably a *bad* idea!
I will do it anyway. You can change your passphrase at any time,
using this program with the option "--edit-key".
[...]
gpg: key 94DF4ABA marked as ultimately trusted
public and secret key created and signed.
Once this is done, you can encrypt the credentials file you need for the utility. For example, the Catalog Manager credentials file has the format:
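Something along these lines, for illustration only (the property names are what I would expect for runcat.sh online credentials and may differ in your version; the values are the sample weblogic/Password01 credentials used later in this post):

login=weblogic
pwd=Password01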
To encrypt it use
gpg --recipient obiee --output saw_creds.gpg --encrypt saw_creds.txt
Now remove the plaintext password file.
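For example, assuming the filename used above:

rm saw_creds.txt

(or shred -u saw_creds.txt if you also want to overwrite the file before unlinking it).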
Using the secure credentials file
Once we have our encrypted credentials file we need a way of using it with the utility it is intended for. The main thing we’re doing is making sure we don’t expose the plaintext contents. We do this using the named pipes method:
In this example I am going to show how to use the secure credentials file with runcat.sh, the Catalog Manager utility, to purge the Presentation Services cache. However, it should work absolutely fine with any utility that expects credentials passed to it in a file (or stdin).
There is a three-step process:

1. Create a named pipe with mkfifo. This appears on a disk listing with the p bit to indicate that it is a pipe. Access to it can be controlled by the same chmod process as a regular file. With a pipe, a process can request to consume from it, and anything that is passed to it by another process will go straight to the consuming process, in a FIFO fashion. What we’re doing through the use of a named pipe is ensuring that the plain text credentials are not visible in a plain text file on the disk.

2. Invoke the OBIEE utility that we want to run. Where it expects the plaintext credentials file, we pass it the named pipe. The important bit here is that the utility will wait until it receives the input from the named pipe – so we call the utility with an ampersand so that it returns control whilst still running in the background.

3. Use gpg to decrypt the credentials file, and pass the decrypted contents to the named pipe. The OBIEE utility is already running and listening on the named pipe, so will receive (and remove from the pipe) the credentials as soon as they are passed from gpg.
The script that will do this is as follows:
# Change folder to where we're invoking the utility from
cd $FMW_HOME/instances/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/catalogmanager

# Create a named pipe
mkfifo cred_pipe

# Let's make sure only we can access it
chmod 600 cred_pipe

# Invoke Catalog Manager. Because we're using a named pipe, it's actually going
# to sit and wait until it gets input on the pipe, so we need to put the
# ampersand in there so that it returns control to the script
./runcat.sh -cmd clearQueryCache -online http://localhost:9704/analytics/saw.dll -credentials cred_pipe &

# Decrypt the credentials and send them to the named pipe
gpg --quiet --recipient obiee --decrypt saw_creds.gpg > cred_pipe

# Remove the named pipe
rm cred_pipe
Depending on the utility that you are invoking, you may need to customise this script. For example, if the utility reads the credentials file multiple times then the named pipes method will fail after the first read. Your options would be to write the credentials into the pipe multiple times (possibly a bit hacky), or to land the plaintext credentials to disk and delete them after the utility completes (which could be less secure if the delete doesn’t get invoked).

Using a secure credentials file for command line arguments
Whilst the sticking point that triggered this article was utilities requiring whole files with credentials in them, it is also common to see command line utilities that want a password passed as an argument. For example, nqcmd:
nqcmd -d AnalyticsWeb -u weblogic -p Password01 -s myscript.lsql
Let’s assume we’ve created an encrypted file containing “Password01” (using the gpg --encrypt method shown above) and saved it as password.gpg.
To invoke the utility and pass across the decrypted password, there’s no need for named pipes. Instead we can just use a normal (“unnamed”) pipe to send the output straight from gpg to the target utility (nqcmd in this example), via xargs:
gpg --batch --quiet --recipient obiee --decrypt ~/password.gpg | xargs -I GPGOUT nqcmd -d AnalyticsWeb -u weblogic -p GPGOUT -s input.lsql
xargs has a --interactive option that makes it a lot easier when developing piped commands such as the above.
Because there is no passphrase on the gpg key, a user who obtained access to the server would still be able to decrypt the credentials file. In many ways this is the same situation that would arise if a server was configured to use ssh-key authentication to carry out tasks or transfer files on another server.

Uses
Here are some of the utilities that the above now enables us to run more securely:
- nqcmd is a mainstay of my OBIEE toolkit, being useful for performance testing, regression testing, aggregate building, and more. Using the method above, it’s now easy to avoid storing a plaintext password in a script that calls it.
- Keeping the Presentation Catalog in sync on an OBIEE warm standby server, using Presentation Services Replication
- Purging the Presentation Services Cache from the command line (with Catalog Manager, per the above example)
- SampleApp comes with four excellent utilities that Oracle have provided; however, all but one of them by default require plaintext credentials. If you’ve not looked at the utilities closely yet, you should! You can see them in action in SampleApp itself, or get an idea of what they do by looking at the SampleApp User Guide pages 14–17 or watching the YouTube video.
If in a JDeveloper application you point to artifacts in a file-based MDS residing in a Windows folder, for example "d:\projects\MDS", then JDeveloper will create a new entry in the adf-config.xml file (which you can find under Application Resources -> Descriptors -> ADF META-INF) pointing to the absolute location of that folder.
The problem with this is that if you have colleagues who use Linux instead of Windows, it's not possible to make it work for both by defining some relative location (as you could do with ant), like "../../projects/MDS". So now what?
As in most cases, the solution is so simple that I successfully missed it many times: use an environment variable! What worked was creating an environment variable with name MDS_HOME, and after that I could use it like this:
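The snippet below is only an illustrative sketch of what that usage might look like inside adf-config.xml, based on the usual file-based MDS store configuration; the element names and ids in your generated file may differ:

<metadata-store-usage id="mstore-usage_1">
  <!-- file-based MDS store; the path now comes from the MDS_HOME environment variable -->
  <metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
    <property name="metadata-path" value="${MDS_HOME}"/>
  </metadata-store>
</metadata-store-usage>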
Problem solved! You can create such an environment variable on Windows as well as on Linux.
I have not yet fully tested whether this works for everything, like deploying with ant, but if it doesn't, I expect you can easily fix that by modifying the ant-sca-compile.xml file in your JDeveloper installation folder and adding a property "mds_home" there as well.
Effective measurements are required to judge the success of any activity. The quality of support the DBA team provides should be reviewed on a regular basis. Customer surveys allow business and application development units to provide feedback on the quality and timeliness of DBA support activities. The survey also allows your customers to measure how well they feel you are meeting your internal Service Level Agreements.
As a remote services provider, we are judged daily on our ability to meet our external service level agreements. Our customers are a tough bunch, and we are OK with that. They have entrusted their most valuable corporate data assets to us. A responsibility we do NOT take lightly. Regularly scheduled surveys can also provide benefits to internal DBA units. You will never know how good of a job your team is doing until you ask.
Each group that you support has their own set of value drivers. You have to understand what they want. Database administrators have a highly visible role in every organization. You can take advantage of that role to be viewed as someone who is a key player. Someone who is focused on providing high-quality support to all internal customers. That and it makes your job easier when you know what your customers expect and the criteria they use to evaluate the services you provide.

Survey Tools
Searching the web will produce a wealth of products that can be used to create, distribute and process customer surveys. The costs range from free to expensive. Google provides a very robust survey product that allows you to generate survey questions with check boxes and free-form input, and to generate charts and graphs based on customer feedback.

Survey Questions
It should be obvious that the set of survey questions sent to your customers will be critical to the success of the survey. The key is to use open-ended questions that foster candid and honest customer opinions. The questions must be phrased in a way that allows the service provider to generate the appropriate conclusions from customer responses. The number of questions should be kept to a minimum. If they’re not, many respondents will not take the time to complete the survey.
Keep the survey questions short. Phrase them to be to the point and in the shortest way possible. Each question should have a purpose. Review each question before distributing and determine how your unit will process the possible results that could be returned. If you are struggling with how you would process and/or respond to a survey response, review the question to determine if it is worded correctly. Leading the customer to respond one way or another should be avoided at all costs.
All of RDX’s questions have two parts. The first part is ranked by a numerical factor. This is followed by a free-form section that allows the customer to tell us why they assigned that ranking in their response.
Since we support so many different customers, let me give you some examples of the questions that RDX asks its customers:
- General Support – The intent of the first question is to obtain a general, high-level understanding of how we are doing.
- Individual Support Teams – Many of our customers receive support from multiple product teams at RDX. It allows us to drill down into a specific team’s performance.
- Responsiveness – Are we responding quickly enough to their requests? Do we complete tasks when they are needed?
- What services does RDX excel at providing?
- Which services can RDX improve?
- Would you recommend RDX to another customer?
- Current Issues – Does the customer have any current issues that need to be addressed?
- Additional Information – What other questions should we be asking?
RDX’s standard is to respond to all surveys within 8 hours of receiving them. RDX will schedule follow-up meetings and set deadlines for all action items. This is the most critical aspect of the survey. Quick follow-ups and action items with deadlines tell your customers their concerns are important to your organization.
If you provide services to internal customers, here are a few sample questions to start you on your way:
- How would you rate the turnaround times for DBA unit work requests?
- How would you evaluate the DBA unit’s responsiveness to questions?
- How would you evaluate the DBA unit’s responsiveness to requests for assistance?
- Please rank the quality of communications the DBA unit provides.
- Please rank the overall quality of work the DBA unit provides.
- What are your top three technical challenges that you face?
- What are the top three non-technical challenges?
- Please list your current priorities. Rank them in order of importance.
- List the most important services the DBA unit provides. Rank them in order of importance.
- What support services does the DBA unit do a good job of providing?
- What support services should the DBA unit improve?
- What additional services would you like the DBA unit to provide?
As we learned in this blog, effective measurements are required to judge the success of any activity. The quality of support you provide needs to be reviewed on a regular basis. These questions allow your customers to provide you with important feedback on the quality of your support. You can then “tune and tweak” your services accordingly. Meetings can be held with your internal customers to discuss their reviews. DBA team members participating in the reviews must be prepared to respond to criticism in a professional manner.
As a lot of you know, there are many benefits that come with managing content properly and efficiently across the organization, but the true measure of success is demonstrated in real-life business scenarios. How an organization manages content has a direct impact on business efficiency, employee productivity, IT infrastructure complexity, and certainly the bottom line.
With that in mind, we have compiled a collection of Oracle WebCenter Content customers to showcase the specific benefits that each of them has realized. While every vendor in the ECM space talks about the value that can be realized from effective content and information management, the proof ultimately comes from existing customers and the real-life value they are realizing. This most recent compilation of customer success stories includes testimonials and examples from Toyo Engineering, Texas A&M University, Schneider National, Mortenson Construction and others.
Click this link to take a look at these various customer case studies and see for yourself how Oracle WebCenter Content can make a real difference. We think you will gain insights into how you can do the same within your organization and realize new successes of your own.
Enterprises that are considering building their digital architectures on the cloud have the benefit of a rapidly evolving, competitive market. Because these storage technologies are constantly improving, cloud providers are having to produce unique innovations in order to stand out. For IT teams, this means that there are always new options to choose from to construct an infrastructure that is suitable to the company's goals and operational needs.
Is PaaS the new standard in cloud computing?
Platform as a service (PaaS) has become a widespread resource that gives enterprises an easy and agile cloud-based strategy. Rather than requiring IT teams to craft an entire environment from scratch, PaaS is typically ready to deploy immediately. Additionally, by making use of the flexibility of the cloud, this strategy makes it easier for decision-makers to leverage cloud-based applications.
Database administration, for instance, is a cloud-based application that assists big data corporations with their information categorization. The services provided by remote DBA experts connect to the company's PaaS solution, enabling the third-party source access to assist with general IT maintenance and security.
According to InfoWorld, IDC recently reported that PaaS is growing. By 2017, this strategy is expected to reach $14 billion in the global IT marketplace, which places the compound annual growth rate (CAGR) at 30 percent, a stark contrast to the 4 percent growth of overall tech spending. The source noted that competition is the driving force behind the predictions.
Knowing what the options are
As more industries adopt PaaS, its functionality continues to change to suit different business strategies. Smart Data Collective noted that it's important for IT managers to be aware of the trajectory of these evolutions. For instance, PaaS has become a valuable option for most companies, but cloud services can be deployed within several different digital infrastructures, such as virtualization and data centers. It's important, therefore, that decision-makers have considered all of their options before committing to one solution.
The flexibility of the cloud and the widespread use of the Internet in most computing resources has made it easier for IT managers to weave together different strategies. Whether it's PaaS, a hybrid strategy or something else altogether, cloud services are providing businesses with seemingly limitless opportunities for customization.
RDX's business intelligence and big data experts assist customers in leveraging data contained in large data stores. For more information, please visit our Business Intelligence and Predictive Analytics pages or contact us.
Selecting a cloud provider can be confusing for decision-makers, especially now that the services have expanded to encompass a wide scope of features. As the cloud industry expands, however, businesses can use innovative new cloud broker companies to determine which digital infrastructure will be perfect for their operations.
Using cloud brokers to find the right service
According to Tech Republic, these services have been created out of a need for a simplified approach to finding and leveraging the best cloud provider. IT managers have several responsibilities for keeping their organizations' daily operations functioning, and because the cloud continues to expand with new technologies and features, it's important that tech teams aren't having to spend unnecessary amounts of time looking for the best cloud for the job.
These services are not necessarily all-inclusive, however. Rather than providing corporations with a comprehensive list of cloud providers and cloud application companies, brokers typically organize the apps and cloud services they represent into an easy-to-use online catalog. As such, it's important for decision-makers who are using this strategy to be mindful of the fact that if they cannot find the right cloud provider with a broker, it does not necessarily mean that a suitable solution doesn't exist.
Building onto the cloud
The source noted that by using a cloud broker's services, end-users will also obtain a simple management portal for their new digital architectures. This can make accounting for budgetary expenses easier, as this strategy consolidates multiple cloud-based options into a structured payment option. Once a cloud option has been decided upon, decision-makers are capable of utilizing its flexibility to add more applications and continue the customization process.
For example, after a cloud has been deployed, it's easier to take on additional services. Database administration, for example, enables tech teams to outsource some of their data management requirements to remote DBA experts who will assist with categorizing information and provide extra security.
Enterprises that have not considered the cloud have more reasons to make the transition, now that the cloud market is evolving rapidly. Decision-makers should consider utilizing the above-mentioned strategy to enhance their own digital infrastructures.
RDX offers a full suite of cloud migration and administrative services that can be tailored to meet any customer's needs. To learn more about our full suite of cloud migration and support services, please visit our Cloud DBA Service page or contact us.
Find the study here: http://info.enterprisedb.com/gartner_operational_database_management_systems_2013.html