Feed aggregator

Tracefile Automation – Simplify Your Troubleshooting Tasks

Pythian Group - Thu, 2016-05-26 10:06

Here’s a common Oracle troubleshooting scenario when a SQL statement needs tuning and/or troubleshooting:

  • log on to dev server
  • connect to database (on a different server)
  • run the SQL statement with 10046 tracing enabled
  • ssh to the database server
  • copy the trace file back to work environment
  • process the trace file.

 

All of this takes time. Granted, not a great deal of time, but tuning is an iterative process and so these steps will be performed multiple times. Not only are these steps a productivity killer, but they are repetitive and annoying. No one wants to keep running the same manual command over and over.

This task is ripe for some simple automation.

If both the client and database servers are some form of Unix, automating these tasks is straightforward.

Please note that these scripts require an 11g or later version of the Oracle database. These scripts are dependent on the v$diag_info view to retrieve the tracefile name. While these scripts could be made to work on 10g databases, that is left as an exercise for the reader.
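For 10g, one possible approach (a minimal, hypothetical sketch that is not part of the demo scripts) is to build the tracefile name from user_dump_dest, the instance name, and the server process ID; the format below assumes a dedicated server connection on Linux and that tracefile_identifier has already been set to MYTRACEFILE:

-- hypothetical 10g variant: derive the tracefile name without v$diag_info
col tracefile new_value tracefile noprint

select p.value || '/' || i.instance_name || '_ora_' || pr.spid || '_MYTRACEFILE.trc' tracefile
  from v$parameter p, v$instance i, v$process pr, v$session s
 where p.name = 'user_dump_dest'
   and s.audsid = sys_context('userenv','sessionid')
   and pr.addr = s.paddr;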

Step by Step

The process can be simplified by breaking it down into steps.

 

1. Reconnect

The first step is to create a new connection each time the SQL is executed. Doing so ensures the database session gets a new tracefile, as we want each execution to be isolated.

-- reconnect.sql

connect jkstill/XXXX@oravm

 

2. Get the Tracefile hostname, owner and filename

Oracle provides all the information needed.

In addition the script will set the 10046 event, run the SQL of interest and then disable the 10046 event.

Following is a snippet from the tracefile_identifier_demo.sql script.

 


-- column variables to capture host, owner and tracefile name
col tracehost new_value tracehost noprint
col traceowner new_value traceowner noprint
col tracefile new_value tracefile noprint

set term off head off feed off

-- get oracle owner
select username traceowner from v$process where pname = 'PMON';

-- get host name
select host_name tracehost from v$instance;

-- set tracefile identifier
alter session set tracefile_identifier = 'MYTRACEFILE';

select value tracefile from v$diag_info where name = 'Default Trace File';

set term on head on feed on

-- do your tracing here
alter session set events '10046 trace name context forever, level 12';

-- run your SQL here
@@sql2trace

alter session set events '10046 trace name context off';

 

In this case sql2trace.sql is a simple SELECT from a test table. All of the scripts used here appear on GitHub, as mentioned at the end of this article.
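For reference, sql2trace.sql could be as simple as the query whose output appears in the demo run below; this is only a plausible sketch, not necessarily the exact script from the repository:

-- sql2trace.sql (illustrative sketch; the real script is in the GitHub repository)
select owner, object_name, object_id, object_type, created
from dba_objects
where object_type = 'OPERATOR'
order by owner, object_name;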

 

3. Process the Tracefile

Now that the tracefile has been created, it is time to retrieve it.

The following script scp.sql is called from tracefile_identifier_demo.sql.

 


col scp_src new_value scp_src noprint
col scp_target new_value scp_target noprint

set term off feed off verify off echo off

select '&&1' scp_src from dual;
select '&&2' scp_target from dual;

set feed on term on verify on

--disconnect

host scp &&scp_src &&scp_target

Following is an example putting it all together in tracefile_identifier_demo.sql.

 

SQL> @tracefile_identifier_demo
Connected.

1 row selected.


PRODUCT                        VERSION              STATUS
------------------------------ -------------------- --------------------
NLSRTL                         12.1.0.2.0           Production
Oracle Database 12c Enterprise 12.1.0.2.0           64bit Production
 Edition

PL/SQL                         12.1.0.2.0           Production
TNS for Linux:                 12.1.0.2.0           Production

Data Base
------------------------------
P1.JKS.COM

INSTANCE_NAME        HOST_NAME                      CURRDATE
-------------------- ------------------------------ ----------------------
js122a1              ora12c102rac01.jks.com         2016-05-23 16:38:11

STARTUP
--------------------
04/02/2016 11:22:12


Session altered.

Elapsed: 00:00:00.00

OWNER        OBJECT NAME                     OBJECT_ID OBJECT_TYPE             CREATED
------------ ------------------------------ ---------- ----------------------- -------------------
SYS          OLAP_EXPRESSION                     18200 OPERATOR                2016-01-07 21:46:54
SYS          OLAP_EXPRESSION_BOOL                18206 OPERATOR                2016-01-07 21:46:54
SYS          OLAP_EXPRESSION_DATE                18204 OPERATOR                2016-01-07 21:46:54
SYS          OLAP_EXPRESSION_TEXT                18202 OPERATOR                2016-01-07 21:46:54
SYS          XMLSEQUENCE                          6379 OPERATOR                2016-01-07 21:41:25
SYS          XQSEQUENCE                           6380 OPERATOR                2016-01-07 21:41:25
SYS          XQWINDOWSEQUENCE                     6393 OPERATOR                2016-01-07 21:41:25

7 rows selected.

Elapsed: 00:00:00.00

Session altered.

Elapsed: 00:00:00.00

js122a1_ora_1725_MYTRACEFILE.trc                                                                                                                                                            100% 3014     2.9KB/s   00:00

SQL> host ls -l js122a1_ora_1725_MYTRACEFILE.trc
-rw-r----- 1 jkstill dba 3014 May 23 16:38 js122a1_ora_1725_MYTRACEFILE.trc

But Wait, There’s More!

This demo shows you how to automate the retrieval of the trace file. But why stop there?  The processing of the file can be modified as well.

Really, it isn’t even necessary to copy the trace file over, as its contents can be retrieved and piped to your favorite command. The script mrskew.sql, for instance, uses ssh to cat the tracefile and then pipe the contents to the Method R utility, mrskew. Note: mrskew is a commercial utility, not open source software.

 

-- mrskew.sql

col ssh_target new_value ssh_target noprint
col scp_filename new_value scp_filename noprint

set term off feed off verify off echo off

select '&&1' ssh_target from dual;
select '&&2' scp_filename from dual;

set feed on term on verify on

--disconnect

host ssh &&ssh_target 'cat &&scp_filename' | mrskew

 

Following is another execution of tracefile_identifier_demo.sql, this time piping output to mrskew. Only the final part of the output is shown below.

 

...

Elapsed: 00:00:00.01

CALL-NAME                    DURATION       %  CALLS      MEAN       MIN       MAX
---------------------------  --------  ------  -----  --------  --------  --------
PARSE                        0.002000   33.1%      2  0.001000  0.000000  0.002000
db file sequential read      0.001211   20.0%      5  0.000242  0.000056  0.000342
FETCH                        0.001000   16.5%      1  0.001000  0.001000  0.001000
gc cr grant 2-way            0.000999   16.5%      1  0.000999  0.000999  0.000999
SQL*Net message from client  0.000817   13.5%      2  0.000409  0.000254  0.000563
Disk file operations I/O     0.000018    0.3%      2  0.000009  0.000002  0.000016
SQL*Net message to client    0.000002    0.0%      2  0.000001  0.000001  0.000001
CLOSE                        0.000000    0.0%      2  0.000000  0.000000  0.000000
EXEC                         0.000000    0.0%      2  0.000000  0.000000  0.000000
---------------------------  --------  ------  -----  --------  --------  --------
TOTAL (9)                    0.006047  100.0%     19  0.000318  0.000000  0.002000

Now we can see where all the db time was consumed for this SQL statement, and there was no need to copy the trace file to the current working directory. The same can be done for tkprof and other operations. Please see the plan.sql and tkprof.sql scripts in the GitHub repository.
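As an illustration of the same pattern, a tkprof variant could pull the remote tracefile to a local temporary file before formatting it. The following is only a hedged sketch (the temporary file name and sort options are illustrative); see the repository for the actual plan.sql and tkprof.sql scripts.

-- tkprof sketch (illustrative only)

col ssh_target new_value ssh_target noprint
col scp_filename new_value scp_filename noprint

set term off feed off verify off echo off

select '&&1' ssh_target from dual;
select '&&2' scp_filename from dual;

set feed on term on verify on

-- pull the remote tracefile to a local temp file, then format it with tkprof
host ssh &&ssh_target 'cat &&scp_filename' > /tmp/remote_trace.trc
host tkprof /tmp/remote_trace.trc /tmp/remote_trace.prf sort=exeela,fchela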

 

Wrapping It Up

A little bit of automation goes a long way. Commands that are repetitive, boring and error prone can easily be automated.  Both your productivity and your mood will soar when little steps like these are used to improve your workflow.

All files for this demo can be found at https://github.com/jkstill/tracefile_identifier

Categories: DBA Blogs

On Demand Webcast: Driving a Connected, More Productive Next-Gen Workplace

WebCenter Team - Thu, 2016-05-26 09:25
Over the past two decades technological change has transformed business processes. Among the biggest changes is mobile ubiquity.

Finding ways to engage with customers in a mobile world is becoming increasingly challenging for outdated systems.

Discover how a cloud platform can unite content, social, and mobile engagement with simplified line of business automation, as well as microsites and communities for an integrated next-generation workplace in this on demand webcast. View today!

Using Enterprise Manager to manage cloud services

Pat Shuff - Thu, 2016-05-26 08:18
Yesterday we talked about the virtues of Enterprise Manager. To be honest, the specific monitoring tool is not important, but the fact that you have one is. One of the virtues that VMware touts for vSphere is that you can manage instances on your server as well as instances in vCloud. This is something worth playing with: having the same tool manage your on premise instances and your instances in the cloud is powerful. Unfortunately, vCloud only lets you allocate virtual machines and the storage associated with them, so you only have an IaaS option of compute. You can't allocate just storage. You can't deploy a database server unless you already have a database deployed that you want to clone. You need to start with an operating system and build from there. There are benefits of PaaS and SaaS that you will never see in the vCloud implementation.

Oracle Enterprise Manager provides the same universal management interface for on premise and in cloud services. Amazon falls short on this. First, they don't have on premise instances, so their tools don't monitor anything in your data center, only in their cloud. Microsoft has monitoring tools with plugins for looking at Azure services. It is important to note that you need a gateway server in the Azure cloud to aggregate the data, ship the telemetry back, and report it in the monitoring tool. There is a good blog detailing the cost of IaaS monitoring in Azure. The blog points out that the outbound data transfer for monitoring can cost up to $17/month/server, so this is not something that comes for free.

Today we are going to look at using Enterprise Manager as a management tool for on premise systems, the Oracle Public Cloud, Amazon AWS, and Microsoft Azure. We are going to cheat a little and use a VirtualBox instance of Enterprise Manager 13c. We are not going to go through the installation process. The books and blogs that we referenced yesterday detail how to do this. Fortunately, the VirtualBox instance is available from edelivery.oracle.com. We are not going to use this instance, though; we are going to use an instance, available only internally to Oracle, for demo purposes. The key difference between the two systems is that the edelivery instance is 21 GB in size for download and expands to provide an OEM 13c instance for testing, while the internal system (retriever.us.oracle.com) has a 12c and 11g database installed and is 39.5 GB (expanded to almost 90 GB when uncompressed). Given the size of the instance, I really can't provide external access to it. You can recreate this by downloading the edelivery system, installing an 11g database instance, installing a 12c database instance, and configuring OEM to include data from those instances to replicate the screen shots that we are including.

If we look at the details on the virtual box instance we notice that we need at least 2 cores and 10 GB of memory to run this instance. The system is unusable at 8 GB of RAM. We really should bump this up to 12 GB of RAM but given that it is for demo purposes and for training it is ok if it runs a little slow. If we were running this in production it is recommended to grow this to 4 cores and 16 GB of memory and also recommended that you not use a downloaded VirtualBox instance for production but install from scratch.

The key thing we are going to do is walk through what it takes to add a monitoring agent onto the service that we are trying to monitor and manage. If we look at the architecture of Enterprise Manager we notice that there are three key components: the Oracle Management Repository (OMR), the Oracle Management Service (OMS), and the Oracle Management Agent (OMA). The OMR is basically a database that keeps a history of all telemetry actions as well as reports and analytics for the systems being monitored. The OMS is the heart of Enterprise Manager and runs on a WebLogic server. The code is written in Java and presents the primary user interface to the administrators as well as being the gateway between the OMR and the agents or OMAs. The agents are installed on the target systems and collect operating system data, database data, WebLogic data, and all other log data to ship back to the OMR for analysis by the users.

It is important to note at this point that most PaaS and SaaS providers do not allow you to install an Enterprise Manager agent, or any other management agent, on their instances. They want to manage the services for you and force you to use their tools to manage their instance. Salesforce, for example, only gives you access to your customer relationship data. You can export your contact lists to a csv file to back up your data, but you can't correlate the contact list to the documents that you have shared with these users. Amazon RDS does not provide file system access, system access to the database, or access to the operating system, so you cannot install the management agent. You must use their tools to monitor the services provided on their sites. Unfortunately, this prevents you from looking at important things like workload repository reports or sql tuning guides to see if something is running slow or waiting on a lock. Your only choice is to deploy the desired PaaS or SaaS as a manual or bundled install on IaaS, forcing you to manage things like backups and patching on your own.

The first thing that we need to do in Enterprise Manager is to log in and click on the Setup button on the top right. We need to define named credentials since we are going to connect to the cloud service using public and private ssh keys. We need to follow the Security pull down to Named Credentials.

We click on the Create icon in the top left and add credentials with public and private keys. If we don't have an ssh key to access the service, we can generate one using ssh-keygen, which creates a public and private key pair, and upload the public key using the SSH Access pull down in the hamburger menu. Once we upload the ssh key we can use ssh -i keyname.ppk opc@ip_address to reach our database server, as sketched below. We will use this keyname.ppk to connect with Enterprise Manager and have all telemetry traffic transferred via the ssh protocol.
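As a rough illustration (the key file name below is only an example, not the one used in the demo), generating and using such a key from a shell might look like this:

# generate a public/private key pair (file name is illustrative)
ssh-keygen -t rsa -b 2048 -f ~/.ssh/oraclecloud

# upload ~/.ssh/oraclecloud.pub through the SSH Access menu, then connect as opc
ssh -i ~/.ssh/oraclecloud opc@ip_address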

Once we have valid credentials in the cloud account we can create the ssh access through Enterprise Manager. To do this we go to Setup at the top right, then Security, Named Credentials. We then click on the Create button in the middle left to start entering data about the credentials. The name in the screen shot below failed because it begins with a number, so we switched it to ssh2017 since 2017ssh failed the naming convention. We are trying to use host access via ssh, which is done with pull down menu definitions. The system defaults to host access, but we need to change the scope from host to global, which does not tie our credentials to one ip address. We upload our public and private key and associate them with the opc user since that user has sudo rights. We can verify the credentials by looking at the bottom of the list. This should allow us to access our cloud host via ssh and deploy an agent to our cloud target.

Note that we created two credentials because we had a step fail later. We created credentials for the opc user and for the oracle user. The opc credentials are called ssh2017 as shown in the screen shots. The oracle credentials are called oracle2017 and are not shown. The same steps are used; only the username and the name of the credentials change.

If we want to install the management agent onto our instance we need to know the ip address of the service that we are going to monitor as well as an account that can sudo to root or run elevated admin services. We go to the Enterprise Manager splash screen, login, select the Setup button in the top right and drill down to Add Target and Add Target Manually. This takes us to the Add Target screen where we can Install Agent on Host. To get rid of the warnings, we added our cloud target ip address to the /etc/hosts file and used a fully qualified and short name associated with the ip address. We probably did not add the right external dns name but it works with Enterprise Manager. When we add the host we use the fully qualified host name. We can find this by logging into the cloud target and looking at the /etc/hosts file on that server. This gives us the local ip address and a fully qualified host name. Once we have this we can enter a directory to upload the agent software to. We had to create an agent directory under the /u01/app/oracle directory. We select the oracle2017 credentials (the screen shots use ssh2017 but this generates an error later) we defined in the previous step and start uploading the agent software and configuring the host as a target.

Note that we could have entered the ip address rather than going through adding the ip address to /etc/hosts. We would have received a warning with the ip address.

When we first tried this we got an error during the initialization phase that opc did not own the /u01/app/oracle directory, so we had to create an agent directory and change its ownership. Fortunately, we could easily resubmit and enter a new directory without having to reenter all of the other information. The deployment takes a while because Enterprise Manager needs to upload the agent binaries, extract them, and install them. The process is updated with status so that you can see the progress and restart when errors happen. When we changed the ownership, the installation failed at a later step stating that opc did not have permission to add the agent to the inventory. We corrected this by installing as oracle and setting the /u01/app/oracle/agent directory to be owned by oracle.
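The preparation described above boils down to a few commands; the address, host names, and group below are placeholders rather than the exact values used in the demo:

# on the OEM host: map the cloud target ip address to a fully qualified and short name
echo "129.152.0.10  dbtarget.compute.oraclecloud.internal  dbtarget" >> /etc/hosts

# on the cloud target: create the agent directory and make oracle the owner
mkdir -p /u01/app/oracle/agent
chown oracle:oinstall /u01/app/oracle/agent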

When we commit the ip address or host name as well as the ssh credentials, we can track progress as the management server deploys the agent. We get to a point where we note that the oracle user does not have ssh capabilities and we will need to run some stuff manually from the opc account.

At this point we should have an enterprise manager connection to a cloud host. To get this working from my VirtualBox behind my AT&T Uverse wireless router I first had to configure a route on my broadband connection and set the ip address of the Enterprise Manager VirtualBox image to a static ip address. This allows the cloud instance to talk back to the OMS and store data in the OMR.

The next step is to discover the database instances. This is done by going through a guided discovery on the host that we just provisioned. It took a few minutes to sync up with the OMS but we could verify this with the emctl status agent command on the target host. We add the target manually using the guided discovery and select database services to look for on the target.
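On the target host the agent check is a one-liner; the agent home path below is an assumption based on the upload directory used earlier:

# run as the oracle user on the cloud target (agent home path is an assumption)
/u01/app/oracle/agent/agent_inst/bin/emctl status agent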

At this point we should have a database, listener, and host connected to our single pane of management glass. We should see a local database (em12c) and a cloud based database (prs12cHP). We can look at the host characteristics as well as dive into sql monitoring, database performance, and database management tasks like backup and restore options or adding users to the repository. We could add a Java Cloud Service as well, link these two systems together, trace a web page request down to a sql read, and look at what the longest latency component is. We can figure out if the network, java memory allocation, or database disk is causing the slowest response. We can also look at sql tuning recommendations to get suggestions on changing our sql code or execution plans using the AWR reports and sql tuning utilities in Enterprise Manager.

In summary, we can connect to an on premise server as well as a cloud server. We can't connect to an Amazon RDS instance because we don't get file system level access to push a client to, or a root user to change the agent permissions. We do get this with IaaS on Oracle, Compute servers on Azure, and EC2 on Amazon. We also get this with PaaS on Oracle and potentially even Force.com from Salesforce. No one gives you this ability with SaaS. It is assumed that you will take the SaaS solution as is and not need to look under the covers. Having a single pane of glass for monitoring and provisioning services is important. The tool should do more than tell you how full a disk is or how much of a cpu is loaded or available. It should dive into the application and let you look at where bottlenecks are and help troubleshoot issues. We could spend weeks diving into Enterprise Manager and the different management packs, but we are on a journey to look at PaaS options from Amazon, Microsoft, and Oracle.

How to clear MDS Cache

Darwin IT - Thu, 2016-05-26 07:06
In the answer to a question on community.oracle.com, I found the following great tip: http://www.soatutor.com/2014/09/clear-mds-cache.html.

In 12cR2 this looks like:
1. Start the System MBean Browser; in the domain, pull down the WebLogic Domain menu and choose 'System MBean Browser':


2. Browse for the Application Defined Beans:



3. Expand it and navigate to oracle.mds.lcm, choose server (AdminServer or SOAServer1)
 4. Navigate to the node Application em (AdminServer) or soa-infra (SOAServer) -> MDSAppRuntime -> MDSAppRuntime. Click on the tab Operations, and then on clearCache


5. Click on Invoke:


Then the following confirmation is shown:


By the way, a redeploy of the SOA Composite that calls the WSDL or other artefact often helps. But unfortunately not always.





My "Must See" ADF/MAF Sessions at KScope 16

Scott Spendolini - Thu, 2016-05-26 07:00
Yes, you read that right - it's not a typo, nor did one of my kids or wife gain access to my laptop.  It's part of a "blog hop" - where a number of experts made recommendations about KScope sessions that are "must attend" and are not in their core technology.  I picked ADF/MAF, as I don't have any practical experience in either technology, but they are at least similar enough that I would not be totally lost.

In any case, the following sessions in the ADF/MAF track are worth checking out at Kscope 16 this year:

How to Use Oracle ALTA UI to Create a Smashing UI for Web and Mobile
Luc Bors, eProseed NL
When: Jun 28, 2016, Session 12, 4:45 pm - 5:45 pm

I've always liked UI, and Oracle ALTA is a new set of templates that we'll be seeing quite a bit of across a number of new technologies.

Three's Company: Going Mobile with Oracle APEX, Oracle MAF, and Oracle MCS
Frederic Desbiens, Oracle Corporation
When: Jun 27, 2016, Session 6, 4:30 pm - 5:30 pm

I'll admit - anytime there's a comparison of APEX and other similar technologies, it's always interesting to witness the discussion.  If nothing else, there will be a good healthy debate as a result of this session!

Introduction to Oracle JET: JavaScript Extension Toolkit
Shay Shmeltzer, Oracle Corporation
When: Jun 28, 2016, Session 7, 8:30 am - 9:30 am

Oracle JET is a lot more than just charts, and there's a lot of momentum behind this technology.  I'm very interested to learn more and perhaps even see a thing or two that you can do with it, as well as the various integration points that are possible with other technologies.

Build a Mobile App in 60 Minutes with MAF
John King , King Training Resources
When: Jun 27, 2016, Session 5, 3:15 pm - 4:15 pm

Native mobile applications are something that APEX doesn't do, so it would be nice to see how this would be possible, should the need ever arise.

Thanks for attending this ODTUG blog hop! Looking for some other juicy cross-track sessions to make your Kscope16 experience more educational? Check out the following session recommendations from fellow experts!

Oracle CURRENT_TIMESTAMP Function with Examples

Complete IT Professional - Thu, 2016-05-26 06:00
The CURRENT_TIMESTAMP function is one of the many time and date-related functions in Oracle. Learn how to use it and see some examples in this article. Purpose of the Oracle CURRENT_TIMESTAMP Function The Oracle CURRENT_TIMESTAMP function will display the current date and time. It’s displayed in the session time zone, which is the timezone of […]
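As a quick illustration (not taken from the full article), the call itself is trivial:

-- returns the current date and time in the session time zone; output format will vary
alter session set time_zone = 'US/Pacific';
select current_timestamp from dual;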
Categories: Development

Storing Date Values As Characters (What’s Really Happening)

Richard Foote - Thu, 2016-05-26 03:00
For something that’s generally considered an extremely bad idea, I’ve lost count of the number of times I’ve come across applications that insist on storing date values as characters within the database. We’ve all seen them … I recently got called in to assist a customer who was having issues with a POC in relation […]
Categories: DBA Blogs

The river floes break in spring...

Greg Pavlik - Wed, 2016-05-25 19:37

Alexander Blok
 The river floes break in spring...
March 1902
translation by Greg Pavlik 


The river floes break in spring,
And for the dead I feel no sorrow -
Toward new summits I am rising,
Forgetting crevasses of past striving,
I see the blue horizon of tomorrow.

What regret, in fire and smoke,
What agony of Aaron’s rod,
With each hour, with each stroke -
Or instead - the heavens’ gift stoked,
From the Bush of Moses, the Mother of God!

Original:

Весна в реке ломает льдины,
И милых мертвых мне не жаль:
Преодолев мои вершины,
Забыл я зимние теснины
И вижу голубую даль.

Что сожалеть в дыму пожара,
Что сокрушаться у креста,
Когда всечасно жду удара
Или божественного дара
Из Моисеева куста!
 
 Март 1902

Please, use HTTPS for your APEX apps

Dimitri Gielis - Wed, 2016-05-25 17:07
Why use HTTPS?

When you Google this question you get many different answers, but this answer of Google Developers answers it for me in short (click the link for more details):
  • HTTPS protects the integrity of your website/APEX app
  • HTTPS protects the privacy and security of your users
  • HTTPS is the future of the web; many new technologies only work with https (for example Service Workers; you can read more about Service Workers and APEX in my presentation)
Industry going to HTTPS

Websites used to have an HTTP portion and an HTTPS portion, which became active when you logged in to the site, but nowadays everything is under HTTPS. Google will actually rank your site higher when it's using HTTPS. Look at the sites you visit; many of them now use HTTPS as the default.

HTTPS on localhost

If you're developing locally, you don't really need HTTPS on localhost, but I still like to have that.
Here are the steps I followed in Chrome on my Mac (OS X) to get the nice green lock when developing locally (this also works with APEX Front-End Boost):
  • In the address bar, click the little lock with the X. This will bring up a small information screen. Click the button that says "Certificate Information."
  • Click and drag the certificate image to your desktop. 
  • Double-click it. This will bring up the Keychain Access utility. Enter your password to unlock it.
  • Be sure you add the certificate to the System keychain, NOT the login keychain. 
  • After it has been added, double-click it. 
  • Expand the "Trust" section. "When using this certificate," set to "Always Trust"
  • Close Keychain Access and restart Chrome, and your self-signed certificate should be recognized now by the browser.
HTTPS on your own server

For years I've been using SSL certificates ordered from GoDaddy, but depending on the certificate you get, it might not be that cheap. The APEX R&D website uses a multi-site certificate - the same certificate is used for the APEX Office Print website.

But there's some good news... you can get SSL for free too (and it's very easy to do!), thanks to Let's Encrypt. I used Let's Encrypt to protect the Euro2016challenge.eu APEX app/website, for example.
Here's the Getting Started Guide from Let's Encrypt. This is the command I used (after installing the package):

./letsencrypt-auto certonly --webroot -w /var/www/euro2016 -d euro2016challenge.eu -d www.euro2016challenge.eu
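One thing to keep in mind: Let's Encrypt certificates expire after 90 days, so they need to be renewed periodically. Assuming the same client install, a renewal can be as simple as the following (and can be scheduled via cron):

./letsencrypt-auto renew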


If you're not yet on https with your APEX app/site, I would definitely recommend looking into it :)

Categories: Development

Two New Oracle Security Presentations Available

Pete Finnigan - Wed, 2016-05-25 16:50

I attended the UKOUG conference last week, Monday to Wednesday, in Birmingham. This is the first time in three years that it has been back at the ICC in the center of Birmingham. The last two years have seen the....[Read More]

Posted by Pete On 14/12/15 At 08:54 PM

Categories: Security Blogs

Log Buffer #475: A Carnival of the Vanities for DBAs

Pythian Group - Wed, 2016-05-25 14:32

This Log Buffer Edition goes through various blogs, and selects some of the top posts from Oracle, SQL Server and MySQL.

Oracle:

MOS Note:136697.1 – New HCHECK.SQL for Oracle Database 12c

ORAchk / EXAchk questions.

Cloud control won’t start!

ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices.

ORA-56841: Master Diskmon cannot connect to a CELL.

Oracle BITAND Function with Examples.

 

SQL Server:

Natively Compiled Stored Procedures: What they are all about

Considerations around validation errors 41305 and 41325 on memory optimized tables with foreign keys

Taking Azure SQL Data Warehouse for a Test-Drive.

Persistent PowerShell: The PowerShell Profile.

SQL Server Always On Endpoint Encryption Algorithm Compatibility Error.

 

MySQL:

Fixing MySQL scalability problems with ProxySQL or thread pool.

Installing a Web, Email & MySQL Database Cluster on Debian 8.4 Jessie with ISPConfig 3.1

Planets9s – Download the new ClusterControl 1.3 for MySQL, MongoDB & PostgreSQL

AWS Aurora Benchmark – Choose the right tool for the job

Where is the MySQL 5.7 root password?

Categories: DBA Blogs

#IHateYourFace, a BizIntel Love Story

Rittman Mead Consulting - Wed, 2016-05-25 13:53
#IHateYourFace, a BizIntel Love Story

Sometimes love isn't fair

A teenager, just emerging from an awkward growing phase, tends to spew nonsensical permutations of varying cringe-worthy affections toward their love target. The data points behind the intent are solid. Let's go Boolean now:

Intent of good = 1 ;

attracted to target = 1;

ability to provide for target = 1;

time available  = 1;

prior or current committed relationship = 0.

So here we are with all the solid intentions in the world and the variables all align to an optimal outcome.  Except one thing:

love target hates your face = 1;

Exit(0)

Assuming that this is a hierarchical value chain with face hating as the parent, we are at an impasse.

The above is not intended as an affront to teenagers spitting game at their prospective boo.  It is though, an analogy to the fantastic time and effort spent in integrating, transforming and presenting Business Intelligence to the enterprise with all of the proper best practices and technology.  Only to have the user subtly tell us that the information is “great but… I don’t know what I am looking at and I was looking for it to be in 3D” (AKA, I hate your face).  There have been some great books out there about visualization and we have read a LOT of them.  Best practices aside, visualization is about taste.  The love target above may well have a completely different opinion of facial beauty than Mr. or Ms. hottie-mc-toddy that sits next to you in your advanced Kafka course (seriously, it might happen).  Love at Kafka training happens, I swear it does, really and when it does, it involves unicorns and ends of rainbows.

So now I have to figure out my end user’s tastes in visualization?

Yep, accept and move on.  

You wouldn't saddle up with a love interest if you hated their face. Why would an end user sign up for something they don't want to use? Of course, you, being the BI practitioner that you are, know your BI stack up and down and are well aware of all of the available visualization options. These end users don't know what they are talking about…right?

—they don’t have to.  

Lots of the big BI players have been losing ground in the industry of viz because new competitors are building attractive viz capabilities.  The new guys in BI are going to the end users with something the end users want…pretty and flexible visualizations.  Both the viz provider and end user need not worry themselves with data integrity, governance, or how they got the data to start.

Welcome to Enterprise BI 2.0

What if I could use my existing enterprise platform and allow users to integrate more data sources, mash them up, and then take an almost storytelling approach to BI? You can, and the next releases from the big BI vendors are all focused on that. Clearly at Rittman Mead we have an affinity for OBIEE, and with 12c the dream of viz has become an out-of-the-box reality. Data visualization is core to the new functionality, with more on the way. So my advice is to upgrade to 12c ASAP to get your new face on.

Rittman Mead has less invasive facelift-versus-replace options with viz for 12c and 11g. The Visual Plugin Pack (VPP) extends the native charting in OBI into dynamic, attractive, Hollywood-level viz. We also have a User Engagement offering that comes with that visual facelift all the girls at the salon are talking about. Shoot me an email if you are interested in our streamlined OBI 12c upgrade service or VPP. Jason.davis@rittmanmead.com

Categories: BI & Warehousing

SharePlex Replication Between Two Instances On The Same Host

Pythian Group - Wed, 2016-05-25 10:12

Several days ago I was asked a question about SharePlex and had to verify the behaviour before providing the answer. I had one Linux VM with two databases and needed to set up replication between them. One of them (orcl) was a target 12c EE database while the second one was a source 10g SE. Accordingly, I had to use two different versions of SharePlex to mimic the behaviour in question, which meant two different SharePlex instances talking to each other on the same host machine. This works well for, as an example, GoldenGate, where you just set up different ports for the manager processes. However, in SharePlex all the instances participating in a configuration should use the same port. The SharePlex documentation states:
“Important! The SharePlex port number must be the same one on all machines in the replication configuration so that they can communicate through TCP/IP connections.”

Of course you cannot use the same port on the same network interface for two independent SharePlex processes working from different homes. In addition, SharePlex does not ask you for a hostname either during installation or when you start it.
In my case I had two interfaces in different subnets on my VM, and I could use them for my replication. But how do you tell SharePlex to use one or the other? The answer was simple. You use the "SP_SYS_HOST_NAME" parameter as an environment variable in your shell. Here is how I've done that.
I added two new hostnames for those interfaces to my /etc/hosts file to be used for my SharePlex instances:

[root@sandbox ~]$ cat /etc/hosts | grep splex
10.177.130.58	splexhost
172.16.128.10	splexstor
[root@sandbox ~]$ 

I unzipped SharePlex 8.6.3 for Oracle 10 and prepared the installation. By default the installer will pick up the hostname automatically and use it during installation.
Here is my hostname, and it is not what I want to use for my first SharePlex instance:

[oracle@sandbox ~]$ hostname
sandbox.localdomain
[oracle@sandbox ~]$ ping sandbox.localdomain
PING sandbox.localdomain (127.0.0.1) 56(84) bytes of data.
64 bytes from sandbox.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from sandbox.localdomain (127.0.0.1): icmp_seq=2 ttl=64 time=0.061 ms
^C
--- sandbox.localdomain ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1468ms
rtt min/avg/max/mdev = 0.058/0.059/0.061/0.007 ms
[oracle@sandbox ~]$

What you need to do is set the environment variable SP_SYS_HOST_NAME. I wanted the first SharePlex instance listening and working on hostname splexhost using the default port 2100. I also had to define the proper Oracle home and Oracle database SID during installation. I used the standard Oracle utility oraenv to set up the Oracle environment variables. Here is how I started the installation for the source:

[oracle@sandbox ~]$ export SP_SYS_HOST_NAME=splexhost
[oracle@sandbox ~]$ . oraenv
ORACLE_SID = [test] ?
The /u01/app/oracle/product/10.2.0/sehome_1/bin/orabase binary does not exist
You can set ORACLE_BASE manually if it is required.
[oracle@sandbox ~]$ cd /u01/distr/SharePlex
[oracle@sandbox SharePlex]$ ll
total 96736
-rwxr-xr-x. 1 oracle oinstall 99056391 Jan 11 21:56 SharePlex-8.6.3-b171-oracle100-rh-40-amd64-m64.tpm
[oracle@sandbox SharePlex]$ ./SharePlex-8.6.3-b171-oracle100-rh-40-amd64-m64.tpm
Unpacking ..................................................................
  ..........................................................................

During the installation I provided directories for program and variable files, the port number, and information about ORACLE_SID and ORACLE_HOME. Everything went smoothly. I finished the installation by running the "ora_setup" utility, which creates the necessary schema and objects in the source database.
For the target I used the same strategy, setting up the Oracle variables with "oraenv" and explicitly exporting SP_SYS_HOST_NAME=splexstor. The installation completed without any problems and I got two SharePlex instances installed in different directories.
To start a SharePlex home for a certain IP address and interface, you need to explicitly set SP_SYS_HOST_NAME to an appropriate value. Let's see how it's done for the source.

[oracle@sandbox ~]$ export SP_SYS_HOST_NAME=splexhost
[oracle@sandbox ~]$ . oraenv
ORACLE_SID = [test] ?
The /u01/app/oracle/product/10.2.0/sehome_1/bin/orabase binary does not exist
You can set ORACLE_BASE manually if it is required.
[oracle@sandbox ~]$ cd /u01/sp10/bin
[oracle@sandbox bin]$ telnet splexhost 2100
Trying 10.177.130.58...
telnet: connect to address 10.177.130.58: Connection refused

[oracle@sandbox bin]$ nohup /u01/sp10/bin/sp_cop -usp10 &
[1] 2023
[oracle@sandbox bin]$ nohup: ignoring input and appending output to `nohup.out'

[oracle@sandbox bin]$ telnet splexhost 2100
Trying 10.177.130.58...
Connected to splexhost.
Escape character is '^]'.
^]
telnet> q
Connection closed.
[oracle@sandbox bin]$

For target you have to adjust your environment variables accordingly and do the same.

[oracle@sandbox ~]$ export SP_SYS_HOST_NAME=splexstor
[oracle@sandbox ~]$ . oraenv
ORACLE_SID = [test] ? orcl
The Oracle base has been set to /u01/app/oracle
[oracle@sandbox ~]$cd /u01/sp12/bin
[oracle@sandbox bin]$ nohup /u01/sp12/bin/sp_cop -usp12 &
[1] 2066
[oracle@sandbox bin]$ nohup: ignoring input and appending output to `nohup.out'

[oracle@sandbox bin]$ 

As a result we have two SharePlex instances running on the same host and talking to each other. Now we can create a sample replication. In the database schema "splex" created by the "ora_setup" utility we have the DEMO_SRC and DEMO_DEST tables. SharePlex uses those tables for a demo replication. We can use them too and see how it works in our case. We can either modify the default pre-created sample configuration "ORA_config" or create a new one. I've created a new config "sample" on my source SharePlex and activated it:

sp_ctrl (splexhost:2100)> list config

File   Name                                         State       Datasource
--------------------------------------------------  ----------  ---------------
ORA_config                                          Inactive    o.SOURCE_SID
Last Modified At: 17-May-16 11:17    Size: 151

sp_ctrl (splexhost:2100)> create config sample

The command opens the default editor and you can write your configuration.
Here is what I put in my "sample" config.

datasource:o.test

#source tables      target tables           routing map

splex.demo_src      splex.demo_dest             splexstor@o.orcl

Now we can activate config.

sp_ctrl (splexhost:2100)> activate config sample

sp_ctrl (splexhost:2100)> list config

File   Name                                         State       Datasource
--------------------------------------------------  ----------  ---------------
ORA_config                                          Inactive    o.SOURCE_SID
Last Modified At: 17-May-16 11:17    Size: 151

sample                                              Active      o.test
Last Modified At: 17-May-16 11:30    Size: 134     Internal Name: .conf.1

sp_ctrl (splexhost:2100)>

Now we can see all the processes running:

On the source:

sp_ctrl (splexhost:2100)> lstatus

Detailed Status for splexhost
Process          State                             PID     Running   Since
---------------  ------------------------------  --------  --------------------
Cop              Running                             2023  17-May-16 11:24:39
Capture          Running                             2250  17-May-16 11:30:53
  Data/Host:   o.test
Read             Running                             2279  17-May-16 11:30:53
  Data/Host:   o.test
Export           Running                             2304  17-May-16 11:30:56
  Data/Host:   splexstor
  Queue Name:  splexhost
Cmd & Ctrl       Running                             2581  17-May-16 11:40:39
  Data/Host:   splexhost

On the target:

sp_ctrl (splexstor:2100)> lstatus

Detailed Status for splexstor
Process          State                             PID     Running   Since
---------------  ------------------------------  --------  --------------------
Cop              Running                             2066  17-May-16 11:26:23
Import           Running                             2305  17-May-16 11:30:56
  Data/Host:   splexhost
  Queue Name:  splexhost
Post             Running                             2306  17-May-16 11:30:56
  Data/Host:   o.test-o.orcl
  Queue Name:  splexhost
Cmd & Ctrl       Running                             2533  17-May-16 11:38:18
  Data/Host:   splexstor

Let's insert a row on the source:

test>  insert into splex.demo_src values ('JIM', '8001 Irvine Center Drive', '949-754-8000');

1 row created.

test> commit;

Commit complete.

test>

And we can see the row was successfully replicated to target:

orcl> select * from splex.demo_dest;

NAME							     ADDRESS							  PHONE#
------------------------------------------------------------ ------------------------------------------------------------ ------------
JIM							     8001 Irvine Center Drive					  949-754-8000

orcl>

As you can see, we were able to use one box to replicate data between two different databases, using two different SharePlex installations. The idea was simple and clear from the start, but I couldn't find enough information in the installation guide for the SP_SYS_HOST_NAME parameter except documentation about configuring it for a cluster installation. I expected the parameter to be covered in the reference section of the documentation, but in reality it was not.
It would be even better if we could place the parameter inside the "paramdb" and not think about setting environment variables, but unfortunately setting the parameter there didn't work for me. Even with the parameter there, you would still need to set the environment variable SP_SYS_HOST_NAME=your_host_name for a non-default hostname. I hope this article helps somebody save a bit of time.

Categories: DBA Blogs

Maker Faire 2016

Oracle AppsLab - Wed, 2016-05-25 09:13

Another year, another amazing time at the Maker Faire.

I've attended my fair share of Maker Faires over the years, so the pyrotechnic sculptures, 3D printing masterpieces, and handmade artisan marketplaces were no particular surprise. But somehow, every time I come around to the San Mateo fairgrounds, the Faire can't help but be aggressively fresh, crazy, and novel. This year, a host of new and intriguing trends kept me on my toes as I ventured through the greatest show and tell on Earth.

Young makers came out in full force this year. Elementary school maker clubs showed off their circuit projects, middle schoolers explained how they built their little robots, and high school STEM programs presented their battle robots. It's pleasing to see how Maker education has blossomed these past years, and how products and startups like LittleBits and Adafruit have made major concepts in electronics and programming so simple and inexpensive that any kid can pick them up and start exploring. Also wonderful is seeing young teams traveling out to the Bay Area from Texas, Oregon, and all these other states, a testament to the growth of the Maker movement beyond Silicon Valley.

Robot battle demo for kids

Robot battle demo for kids

Speaking of young makers' participation, Arduino creator Massimo Banzi talked about Arduino as an education tool for kids to play and tinker with, even though he never planned to make kids' toys in his early years. The maker movement has engaged curious minds of all ages to start playing with electronics, making robots, and learning the new language of programming.

While the maker movement has made things very accessible to individuals, the essence of creation and innovation has also impacted the large enterprise. On the "Maker Pro" stage, our GVP, Jeremy Ashley (@jrwashley), talked about new trends in large enterprise application design and how the OAUX group is driving the change to make simpler, but more effective and more engaging, enterprise applications.

Jeremy talks on Maker Pro stage

Jeremy talks on Maker Pro stage

Drones were also a trending topic this year, with a massive Drone Racing tent set up with events going on the whole weekend. Everything was being explored – new shapes for efficient and quick flight; new widgets and drone attachment modules; new methods of interaction with the drone. One team had developed a smart glove that responded to gyroscopic motion and gestures to control the flight of a quadcopter, and had the machine dance around him – an interesting and novel marriage of wearable tech and flight.

Flight of the bumble drone

Flight of the bumble drone

Personally, I’ve got a soft spot for art and whimsy, and the Faire had whimsy by the gallon. The artistry of the creators around the country and globe can’t be overestimated.

Lotus boat

Lotus boat

Blueberry muffin

Blueberry muffin

Flying dragon blowing fire

Flying dragon blowing fire

Maker Faire never disappoints. We brought friends along who had never been to a Faire, and it’s always fun to watch them get blown off their feet literally and figuratively the first time a flamethrower blasts open from the monolithic Crucible. Or their grins of delight when they see a cupcake shaped racecar zoom past them… and another… and another. Or the spark of amazement when they witness some demo that’s out of any realm of imagination.

Electrifying experience

Electrifying experience

Possibly Related Posts:

Managing servers and instances in the cloud

Pat Shuff - Wed, 2016-05-25 07:59
Managing servers and instances has been an ongoing issue since the introduction of the first computer. Recently, with the advent of virtualization, the idea of a management console to control which processors are running which services and what storage is allocated to which operating system has gained popularity. Many people are familiar with VMware vSphere, where you get a view of the processors. We get a view of a server and can see the virtual images deployed on it. We can see how well the resources (memory, cpu, and disk) are being utilized. We can allocate more or fewer resources, since this is a dynamic allocation, and make sure that we are not over allocating resources and wasting them or under allocating them and causing applications to run slower.

In this example we can see that we have two processors, 2 GB of memory, and just under 300 GB of disk on this computer. We have five virtual machines running on this computer and can dive into each one and look at what operating system is installed and how the limited resources are allocated and utilized. What we can't see is what applications are installed and how the applications are running. For example, is the Windows Home Server 2011 running an Apache Web Server, and how many hits did the web server get in the past four days? Monitoring tools beg the question of what you are monitoring. If you are managing limited resources and making sure that you have not over or under allocated services, tools like vSphere are excellent. Unfortunately, you will need other tools to dive deeper into each layer of the stack. EMC, for example, has a storage manager that lets you look not only at a logical unit level but also at a controller and disk level. It understands VMware and lets you look at how disks are related to virtualization engines and how they are consuming resources.

Again, this is a very good tool to look at how well a disk is performing, how well data is laid out across spindles, and how well data is being transmitted between disk and server. We can see hot spots. We can see disks that are over and under utilized. We can manage a scarce resource and make sure that it is properly utilized.

When we talk about monitoring we need to shift our thought process. Yes, it is important to manage compute, memory, and storage resources but it is also important to realize that these resources are commodities. If we run low, we get more. If we use too much we are wasting resources. We should be able to automate allocation of resources and size up or size down resources without manual monitoring. What we are really interested in is how well is our company running. If we are a university we might be interested in the latency of delivering online video classes. We might be interested in how many classes are being added to a student schedule during registration. If we are a ticket retailer we might be interested in how many tickets were requested and paid for on a minute by minute basis. Note that we are not talking about how well a disk drive is allocated or if we have enough processors allocated to a virtual machine, we are talking in term of business terms. We are looking at tying revenue generating services back to computer resources and trying to figure out what is causing a problem. In the online video classroom example, we might have our processors allocated properly, storage tuned to the last IOP, and memory allocated to buffer data and reduce disk reads. If we are on the same network as the athletic department and our basketball team made it to the elite eight during March madness and the athletic department live streams the game on the same network as our classroom servers our classes will be offline due to demand to watch the basketball game. Tools from EMC and VMware will show that everything is working fine and life is good. Meanwhile the help desk is getting calls from students off campus that can't access their assignments during midterms and their Thursday class is not available. What we need is a monitoring system that can look at systems and incorporate more than just processor and disk. What we need is a tool that can look at systems and services and not just resources. We would like to look at the video distribution system and be able to dive into the disk, network, or processor and see what the bottleneck is and fix it quickly.

Oracle released a tool years ago called Enterprise Manager. The tool started out as a database monitoring tool that allowed you to dive into sql calls and figure out why it was taking longer than necessary. With acquisitions of companies like BEA and Sun Microsystems the tool expanded to look at how Java was performing inside a WebLogic server and how disk drives were performing that were serving up requests for the database and WebLogic server. Acquisitions of companies like JD Edwards and PeopleSoft drove the monitoring tools in the opposite direction and screens showing how many purchase orders were being processed on an hourly basis were suddenly available. You could look at what was the bottleneck in closing your books for the end of month reconciliation. Was it a manual process waiting on a report to drop into a directory or was it a sql statement that was taking minutes rather than seconds to complete? You could start looking at a process like purchase orders and dive into a database to see if a table was reaching storage limits as well as figure out that someone recently patched the database which caused an index to not look at a new column that was created and searches are now going against this column so select statements are doing a full table scan rather than using an index to report answers quicker. Adding more storage in this case will be a waste of time. Yes, we are running out of storage on a table but the real issue is we need to re-index the database or execute a new sql execution plan. Below is a screen shot of how well a database is performing with links to look at all the sub-components of the database.

Books have been written on Enterprise Manager. We are not going to cover everything in this blog to make you an expert on the subject.

There are also a number of blogs related to Enterprise Manager

This is a partial list of blogs returned by a Google search. I am sure I missed a few. Note that the list of books and blogs is not a short list. There are classes offered by Oracle University that you can take virtually or in a classroom (both cost money).

The way that Oracle Enterprise Manager is paid for is simple. The base system is free and you pay for the options that you want to use. Unfortunately, the Technology Price Guide is not very clear as to what is and is not Enterprise Manager and what is an option on the database. For example, on page 7, most of the management packs are listed. If you want diagnostics for the database you will need to license your database at $7,500 per processor and not Enterprise Manager. You can license at $150 per named user but the licensing metrics for your database need to match the licensing for you management pack. You could have a two processor license for production and a 25 named user license for development and testing so you will need to blend these licenses into Enterprise Manager with the management packs. Diagnostics is specifically confusing because you enable or disable this feature in Enterprise Manager and not in the database. The telemetry data is being collected for the database but the reporting on the results of the analysis is not being done in the database. You could turn on the reporting in Enterprise Manager without involving the DBA thus incurring an additional license fee that you had not paid for. There is no license key or email that is sent to Oracle saying that you enabled the license it is a simple checkbox in Enterprise Manager that says turn on diagnostic reporting. In recent versions a warning screen pops up telling you that this is not a free feature. In OEM 10g the feature was turned on by default and you had to turn it off. This has changed in recent releases. If you try to turn this feature on when connecting to an Enterprise Edition in the Oracle Public Cloud you will get a feature not available message. You need to go with High Performance or Extreme Performance edition of the database to get the diagnostics enabled.

There are also management packs for Oracle Applications and the pricing for these products can be found in the Oracle Applications Price List. You need to search for the word "packs" to find the price of the management packs in this list. You can get a list of all the management packs from the Oracle Tech Network page for Enterprise Manager

It is important to note that the Enterprise Manager that runs in your data center monitoring your servers and Oracle hardware and software products is the same tool that you can use to monitor and manage PaaS and IaaS resources in the Oracle Public Cloud. You can connect to the instance in the cloud using ssh and read the telemetry from the cloud instance as if it were installed on one of your servers. You can use extensions to the latest version of Enterprise Manager, 13c, to clone a pluggable database instance from your on site installation to a cloud instance.

You can also setup reporting and self service requests to have end users ask for a new service to be provisioned either on site or in the cloud. Below is a screen shot of how to do this for a database. We could do something similar for a WebLogic server, an Apache Web server, a PeopleSoft instance for dev/test, or any layer of the Oracle stack.

In summary, selection of a management tool is important. Tools are good to understand and use properly. At some point you need to step back and ask: what are the questions that I need answers to? Am I diving too deep, trying to optimize something that is not worth that level of analysis? Could I automate this and not have to monitor it at all? If I run out of processing power, does it make sense to automatically scale up the number of processors? Should I scale out by spinning up more web servers? Do I need to re-architect my network topology to isolate disk traffic from client traffic? If I generate a report, who will consume the results? Is the report for someone in IT? Purchasing? The process owner? Is it a technology or a financial report? Products like Enterprise Manager allow you to generate all of these reports using different management extensions. My suggestion is to look at some of the introductory videos on the Oracle Tech Network to get an introduction to the problem that you are trying to solve, then figure out how much it will cost to measure what is important to you.

Changing the page title in Fluid at run-time

Javier Delgado - Wed, 2016-05-25 07:26
One of our customers asked us to implement nested landing pages, in which some tiles would open a second landing page, and eventually a third one, and so on. Let me illustrate the use case with some screenshots (I apologise as they are in Spanish, but it should be useful anyway). This is how the main landing page would look:


By clicking the "Formación y Desarrollo" tile, a new landing page will be displayed:


And eventually, you can click a tile in this landing page which opens a third one. Let's pick "Formación Acceso al Puesto" for instance:


Unfortunately, the customer could not take advantage of the Master-Detail Framework, as they are on PeopleTools 8.54 and this functionality is only available in release 8.55 (which is more complex to upgrade to, as Crystal Reports are no longer supported).

So, we decided to build a custom component for our nested landing pages. The component would be called again and again with different URL parameters, in such a way that the history could be maintained and the user could go back to the previous step instead of going all the way back to the top landing page.

As we were reusing the same component, we needed to adjust the page title in PeopleCode. This involved not only updating the page title itself, but also making sure the back button showed the title of the previous page.


Changing the Navigation Bar Title
By default, the title is set to the component label in the menu. Luckily, there is a good number of examples in the standard functionality in which the title is set at run time, so this one was not particularly difficult to implement. The code that does the trick is the following:

PTLAYOUT.PAGETITLE_GROUPBOX.Label = &title;

This code needs to be placed in the page activate event. If placed anywhere else, the standard PT_HEADERPAGE Activate code will override the title back to the default one.

Another option is to create a custom header page and add it to the component, but at least from the back button functionality point of view, it did not seem an easy solution.
Changing the Back Button Title
This one was trickier. PeopleSoft maintains a navigation history stack in Javascript which is populated with the default page title at load time using the following Javascript call:

AddToHistory('Cns Navgrppage', '', '', 'CNS_NAVGRPPAGE', 1, 27);

So, in order to keep the right page title in the navigation stack, we needed to update it. Fortunately, there is another Javascript function provided by PeopleSoft called UpdateHistory. The PeopleCode function AddOnLoadScript is particularly helpful when trying to run Javascript functions after the page is loaded. This is the way we implemented the call also in the page activate PeopleCode event:

AddOnLoadScript("UpdateHistory('" | &title | "', undefined, undefined, undefined, 1);");

CBO++

Jonathan Lewis - Wed, 2016-05-25 07:23

While browsing the web recently for articles on the HyperLogLog algorithm that Oracle uses for some of its approximate functions, I came upon a blog post written in Jan 2014 with the title Use Subqueries to Count Distinct 50X Faster. There are various ways that subqueries can be used to rewrite queries for improved performance, but when the title caught my eye I couldn’t think of a way in which they could improve “count distinct”.  It turned out that the word “subquery” was being used (quite correctly) in the sense of “inline view” while my mind had immediately turned to subqueries in the select list or where clause.

The article started by pointing out that if you have a query that does a join then aggregates the result you might be able to improve performance by finding a way of rewriting the query to aggregate before doing the join. (See this note from 2008). The article then went one step further to optimise a “count distinct” by wrapping a “select count” around a “select distinct” inline view as follows:

Original
--------
  select
    dashboard_id,
    count(distinct user_id) as ct
  from time_on_site_logs 
  group by dashboard_id

Rewrite
-------
select 
    inline.dashboard_id, 
    count(1) as ct
  from (
    select distinct dashboard_id, user_id
    from time_on_site_logs
  ) as inline
  group by inline.dashboard_id

(I’ve reproduced only the central part of the query being examined and I’ve changed the name of the inline view to eliminate the potential visual confusion due to the word “distinct” appearing in its name in the original).

The article was written using the Postgres dialect of SQL, with the comment that the technique was universal; and this brings me to the point of the post. The technique can be applied to Oracle’s dialect of SQL. Both ideas are good ideas whose effectiveness depends on the data patterns, data volume, and (potentially) indexing; but you may not need to rewrite the code because the optimizer is programmed to know that the ideas are good and it can transform your query to the appropriate form internally. The “place group by” transformation appeared in 11.1.0.6 in 2007, and the “transform distinct aggregation” appeared in 11.2.0.1 in 2009.

Here’s a little demo of Oracle handling a variation of the query I’ve shown above:


rem     Script: transform_distinct_agg.sql
rem     Dated:  May 2016
rem     Author: J.P.Lewis

create table t1 nologging 
as 
select  * 
from    all_objects 
where   rownum <= 60000
;
execute dbms_stats.gather_table_stats(user,'t1', method_opt=>'for all columns size 1')

alter session set statistics_level = all;

select owner, count(distinct object_type) from t1 group by owner;
select * from table(dbms_xplan.display_cursor(null,null,'allstats last outline'));

prompt  ===============
prompt  Rewritten query
prompt  ===============

select  owner, count(1)
from    (
         select distinct owner, object_type
         from   t1
        ) distinct_types
group by
        owner
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last outline'));

Here are the two execution plans, pulled from memory – with the outline and some other peripheral lines deleted:


-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |           |      1 |        |      5 |00:00:00.23 |     865 |       |       |          |
|   1 |  HASH GROUP BY       |           |      1 |      5 |      5 |00:00:00.23 |     865 |  1452K|  1452K|  728K (0)|
|   2 |   VIEW               | VM_NWVW_1 |      1 |     78 |     30 |00:00:00.23 |     865 |       |       |          |
|   3 |    HASH GROUP BY     |           |      1 |     78 |     30 |00:00:00.23 |     865 |  4588K|  1708K| 2497K (0)|
|   4 |     TABLE ACCESS FULL| T1        |      1 |  60000 |  60000 |00:00:00.12 |     865 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------

===============
Rewritten query
===============

------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |      5 |00:00:00.23 |     865 |       |       |          |
|   1 |  HASH GROUP BY       |      |      1 |      5 |      5 |00:00:00.23 |     865 |  1452K|  1452K|  735K (0)|
|   2 |   VIEW               |      |      1 |     78 |     30 |00:00:00.23 |     865 |       |       |          |
|   3 |    HASH UNIQUE       |      |      1 |     78 |     30 |00:00:00.23 |     865 |  4588K|  1708K| 1345K (0)|
|   4 |     TABLE ACCESS FULL| T1   |      1 |  60000 |  60000 |00:00:00.12 |     865 |       |       |          |
------------------------------------------------------------------------------------------------------------------

Apart from the change from “HASH UNIQUE” to “HASH GROUP BY” the two plans are the same, using the same resources – the UNIQUE being a special case of the algorithm for the GROUP BY. Here (with some cosmetic editing) is the SQL of the “unparsed query” taken from the 10053 (CBO) trace file – notice how similar it is to the text suggested by the original article, in particular the inline view to get the distinct list of owner and object_type (using a group by with no aggregated columns, rather than a distinct):

SELECT 
        VM_NWVW_1.$vm_col_2 OWNER,
        COUNT(VM_NWVW_1.$vm_col_1) COUNT(DISTINCTOBJECT_TYPE)
FROM    (
                SELECT
                        T1.OBJECT_TYPE $vm_col_1,
                        T1.OWNER $vm_col_2
                FROM    TEST_USER.T1 T1
                GROUP BY 
                        T1.OWNER,T1.OBJECT_TYPE
        ) VM_NWVW_1
GROUP BY
        VM_NWVW_1.$vm_col_2
;

The Oracle optimizer is pretty good at finding efficient transformations for the query you wrote so, rather than rewriting a query (with the option for making a mistake as you do so), you may only need to add a couple of hints to generate a suitable SQL Plan Baseline that you can attach to the original query.

Footnote:

Sometimes the optimizer will decide not to transform when it should, or decide to transform when it shouldn’t, so it’s nice to know that there are hints to block transformations – here’s the effect of adding /*+ qb_name(main) no_transform_distinct_agg(main) */ to my query:


----------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      5 |00:00:00.25 |     865 |       |       |          |
|   1 |  SORT GROUP BY     |      |      1 |      5 |      5 |00:00:00.25 |     865 |  4096 |  4096 | 4096  (0)|
|   2 |   TABLE ACCESS FULL| T1   |      1 |  60000 |  60000 |00:00:00.12 |     865 |       |       |          |
----------------------------------------------------------------------------------------------------------------

The interesting thing to note here is that even though the query took a little longer to complete, the amount of memory allocated to run the query was only 4K compared to the 2M needed by the transformed query (in this example both workareas would have been in existence at the same time – that won’t be true of every query using multiple workareas). This isn’t significant in this trivial case, but it demonstrates the point that sometimes there is no single best path – you can choose the path that protects the resource that’s under most pressure.


Automatic install of SOA Suite and Service Bus 12cR2.

Darwin IT - Wed, 2016-05-25 05:41
Lately I worked on a set of scripts to automatically install WebLogic Infrastructure, SOA/BPM Suite, Service Bus, etc. Since I implemented a reworked set yesterday at another customer, it might be nice to describe them here.

The scripts help in installing the software and creating the repository. I started to create a script for creating the domain, but don't have it working yet. A good starting point would be this blog of Edwin Biemond for the 12cR1 (12.1.3) version. If I manage to get it working for 12c in relation to my other scripts, I will get back to it. Probably a nice reference would also be this description of Lucas Jellema (also 12.1.3).

To create the scripts I followed the Enterprise Deployment Guide for SOA Suite 12c, Install tasks documentation. To administer your different environments (dev, test, acc, prod) of the Fusion Middleware, the Enterprise Deployment Workbook might come in handy. And then there is Installing and Configuring Oracle SOA Suite and Oracle Business Process Management.

The scripts are based on my earlier work on the automatic install of the quickstarts under Linux.

By the way: for these scripts I use shell (bash) under Linux. But since the response files contain values that you'd probably want to derive from properties (I would), I should rework those using something like awk/sed (which I don't know well) or ANT (which I do know, but which requires an ANT installation). Maybe in a next phase; a minimal sed sketch of the idea is shown below.
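To give an idea of what such a rework could look like, here is a minimal sed sketch. It is not part of the original script set: the script name applyProperties.sh is my own invention, and it simply assumes that every response file contains a single ORACLE_HOME= line that should follow the FMW_HOME set in fmw12c_env.sh.

#!/bin/bash
# applyProperties.sh - hypothetical helper, not part of the original scripts.
# Rewrites the ORACLE_HOME line in each response file so it matches FMW_HOME.
. $PWD/fmw12c_env.sh
for rsp in ../OracleFMW12cInfrastructure/*.rsp ../SOASuiteAndBPM/*.rsp ../ServiceBus/*.rsp ../ManagedFileTransfer/*.rsp
do
echo "Setting ORACLE_HOME=$FMW_HOME in $rsp"
# -i edits the file in place; the pattern replaces the whole ORACLE_HOME= line
sed -i "s|^ORACLE_HOME=.*|ORACLE_HOME=$FMW_HOME|" "$rsp"
done

Run it from the scripts folder, just like the install scripts, so the relative paths resolve.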

For this installation we need the following downloads, from edelivery:
Product                           Jar File                            Zip File       Note
Fusion Middleware Infrastructure  fmw_12.2.1.0.0_infrastructure.jar   V78156-01.zip  OracleFMW12cInfrastructure
SOA & BPM Suite                   fmw_12.2.1.0.0_soa.jar              V78169-01.zip  SOASuiteAndBPM
Service Bus                       fmw_12.2.1.0.0_osb.jar              V78173-01.zip  ServiceBus
Managed File Transfer             fmw_12.2.1.0.0_mft.jar              V78174-01.zip  ManagedFileTransfer
The scripts and software are placed in a folder structure containing the following sub-folders (folder name, followed by its contents):
Java
  • Java jdk u74+ rpm: jdk-8u74-linux-x64.rpm
ManagedFileTransfer
  • V78174-01.zip
  • fmw_12.2.1.0.0_mft.rsp
OracleFMW12cInfrastructure
  • V78156-01.zip
  • fmw_12.2.1.0.0_infrastructure.rsp
rcu
  • rcuSOAPasswords.txt
  • rcuSOA.rsp
scripts
  • fmw12c_env.sh
  • install.sh
  • installFMW.sh
  • installJava.sh
  • installMFT.sh
  • installSB.sh
  • installSOA.sh
  • rcuSOA.sh
ServiceBus
  • V78173-01.zip
  • fmw_12.2.1.0.0_osb.rsp
SOASuiteAndBPM
  • V78169-01.zip
  • fmw_12.2.1.0.0_soa.rsp

The scripts and response (.rsp) files I'll explain below. In each product subfolder there is the downloaded zip file (containing the installation jar file) and the accompanying response file. In the scripts folder there are the product installation scripts and the master script install.sh. So create a folder structure as above and place the downloaded products and the provided scripts in the appropriate folders.
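If you want to script that step as well, a one-liner can create the empty structure; this is just a convenience suggestion, not part of the original post, and it assumes you start in the top-level folder of the set:

mkdir -p Java ManagedFileTransfer OracleFMW12cInfrastructure rcu scripts ServiceBus SOASuiteAndBPM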

So here we go.

Setting the environment
First I need a fmw12c_env.sh script to set some basic environment variables and especially the location of the FMW_HOME, where the software is going to be installed:
#!/bin/bash
echo set Fusion MiddleWare 12cR2 environment
export JAVA_HOME=/usr/java/jdk1.8.0_74
export FMW_HOME=/u01/app/oracle/FMW12210
export SOA_HOME=$FMW_HOME/soa
export OSB_HOME=$FMW_HOME/osb
export MFT_HOME=$FMW_HOME/mft


Adapt the location of the FMW_HOME and possibly the (desired or current) location of your JAVA_HOME. The other 'homes' are relative to the FMW_HOME: these are the locations within the FMW_HOME where the products are installed (in 11g these were Oracle_SOA1 or Oracle_OSB1).
Install Java
For the 12cR2 products we need a Java 8 installation. Of course preferably the latest version, but at least above Update 65. I used update 74, but you can change it to a later update. The script for the installation is as follows:
#!/bin/bash
. $PWD/fmw12c_env.sh
export JAVA_INSTALL_HOME=$PWD/../Java
export JAVA_INSTALL_RPM=jdk-8u74-linux-x64.rpm
#
echo JAVA_HOME=$JAVA_HOME
if [ ! -d "$JAVA_HOME" ]; then
# Install jdk
echo Install jdk 1.8
sudo rpm -ihv $JAVA_INSTALL_HOME/$JAVA_INSTALL_RPM
else
echo jdk 1.8 already installed
fi
Save it as installJava.sh under scripts.

Update the JAVA_INSTALL_RPM according to the downloaded rpm as placed in the Java subfolder. Again, adapt the JAVA_HOME in fmw12c_env.sh accordingly. What this script does is check whether the folder denoted by JAVA_HOME exists. If it does not, the denoted version apparently is not installed, so the script installs it.

Sudo grants to the oracle user
To be able to run the script above (since it uses rpm via sudo) we need to adapt the sudoers file.

Log on as root via the command:
[oracle@darlin-vce-db ~]$ su -

Password:

Last login: Fri Feb 26 06:44:05 EST 2016 on pts/0

Edit the sudoers file:
[root@darlin-vce-db ~]# vi /etc/sudoers

Uncomment the lines for the Cmnd_Alias entries SOFTWARE and SERVICES (remove the hash ’#’ at the beginning of the line):
## Installation and management of software

Cmnd_Alias SOFTWARE = /bin/rpm, /usr/bin/up2date, /usr/bin/yum

## Services

Cmnd_Alias SERVICES = /sbin/service, /sbin/chkconfig

And add the following two lines at the end of the file:
## Extra rights for oracle to do for instance rpm without password.

oracle ALL= NOPASSWD: SERVICES, SOFTWARE

Save the file (use an exclamation mark in the ‘:wq!’ command, since sudoers is read-only). After this you can run installJava.sh.
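Before running installJava.sh you may want to verify that the passwordless grant actually works. A quick illustrative check (not part of the original scripts) could look like this:

# -n makes sudo fail instead of prompting, so a sudoers misconfiguration
# shows up here rather than halfway through the install script.
sudo -n /bin/rpm --version && echo "passwordless rpm via sudo works" || echo "sudo still asks for a password, re-check /etc/sudoers"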
Install Infrastructure
First we need to install the Fusion Middleware Infrastructure. This is a WebLogic Server delivery that includes an RCU for the infrastructure schemas in the database. You can't use the 'vanilla' delivery of WebLogic Server; you'll need this one.

The install script is as follows:
#!/bin/bash
. $PWD/fmw12c_env.sh
#
export FMW_INSTALL_HOME=$PWD/../OracleFMW12cInfrastructure
export FMW_INSTALL_JAR=fmw_12.2.1.0.0_infrastructure.jar
export FMW_INSTALL_RSP=fmw_12.2.1.0.0_infrastructure.rsp
export FMW_INSTALL_ZIP=V78156-01.zip
#
# Fusion Middlware Infrastucture
if [ ! -d "$FMW_HOME" ]; then
#Unzip FMW
if [ ! -f "$FMW_INSTALL_HOME/$FMW_INSTALL_JAR" ]; then
if [ -f "$FMW_INSTALL_HOME/$FMW_INSTALL_ZIP" ]; then
echo Unzip $FMW_INSTALL_HOME/$FMW_INSTALL_ZIP to $FMW_INSTALL_HOME/$FMW_INSTALL_JAR
unzip $FMW_INSTALL_HOME/$FMW_INSTALL_ZIP -d $FMW_INSTALL_HOME
else
echo $FMW_INSTALL_HOME/$FMW_INSTALL_ZIP does not exist
fi
else
echo $FMW_INSTALL_JAR already unzipped.
fi
if [ -f "$FMW_INSTALL_HOME/$FMW_INSTALL_JAR" ]; then
echo Install Fusion Middleware Infrastucture 12cR2
$JAVA_HOME/bin/java -jar $FMW_INSTALL_HOME/$FMW_INSTALL_JAR -silent -responseFile $FMW_INSTALL_HOME/$FMW_INSTALL_RSP
else
echo $FMW_INSTALL_JAR not available!
fi
else
echo $FMW_HOME available: Fusion Middleware 12c Infrastucture already installed.
fi

Save it as installFMW.sh under scripts. As in installJava.sh, this script checks whether the FMW_HOME already exists. If not, it checks for the availability of the installer jar; if the jar is missing, it checks for the zip file that should contain it and unzips it. If the zip file does not exist either, it stops with a message. You can unzip the zip file before starting the scripts, because the jar is the primary requirement, and you can keep the jar file for subsequent installations on other servers. It would be handy to put this on a shared staging repository folder.

If in the end the jar file exists, the script starts the installer with java from the JAVA_HOME and performs a silent install using a response file. This is a file that is recorded at the end of a manual installation session and contains the choices made in the Oracle Universal Installer wizard. It is placed together with the zip file in the product folder.
It looks as follows:
[ENGINE]

#DO NOT CHANGE THIS.
Response File Version=1.0.0.0.0

[GENERIC]

#Set this to true if you wish to skip software updates
DECLINE_AUTO_UPDATES=true

#
MOS_USERNAME=

#
MOS_PASSWORD=<SECURE VALUE>

#If the Software updates are already downloaded and available on your local system, then specify the path to the directory where these patches are available and set SPECIFY_DOWNLOAD_LOCATION to true
AUTO_UPDATES_LOCATION=

#
SOFTWARE_UPDATES_PROXY_SERVER=

#
SOFTWARE_UPDATES_PROXY_PORT=

#
SOFTWARE_UPDATES_PROXY_USER=

#
SOFTWARE_UPDATES_PROXY_PASSWORD=<SECURE VALUE>

#The oracle home location. This can be an existing Oracle Home or a new Oracle Home
ORACLE_HOME=/u01/app/oracle/FMW12210

#Set this variable value to the Installation Type selected. e.g. Fusion Middleware Infrastructure, Fusion Middleware Infrastructure With Examples.
INSTALL_TYPE=Fusion Middleware Infrastructure

#Provide the My Oracle Support Username. If you wish to ignore Oracle Configuration Manager configuration provide empty string for user name.
MYORACLESUPPORT_USERNAME=

#Provide the My Oracle Support Password
MYORACLESUPPORT_PASSWORD=<SECURE VALUE>

#Set this to true if you wish to decline the security updates. Setting this to true and providing empty string for My Oracle Support username will ignore the Oracle Configuration Manager configuration
DECLINE_SECURITY_UPDATES=true

#Set this to true if My Oracle Support Password is specified
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false

#Provide the Proxy Host
PROXY_HOST=

#Provide the Proxy Port
PROXY_PORT=

#Provide the Proxy Username
PROXY_USER=

#Provide the Proxy Password
PROXY_PWD=<SECURE VALUE>

#Type String (URL format) Indicates the OCM Repeater URL which should be of the format [scheme[Http/Https]]://[repeater host]:[repeater port]
COLLECTOR_SUPPORTHUB_URL=



Save it as fmw_12.2.1.0.0_infrastructure.rsp under OracleFMW12cInfrastructure.

If you choose to use another FMW_HOME than the one suggested, you'll need to change the ORACLE_HOME variable in the file accordingly. This is one of the elements that I want to have replaced automatically using a property, based on the FMW_HOME env variable.

Install SOA and BPM Suite
The script for the installation of the SOA and BPM software is more or less the same as the one for the FMW Infrastructure:

#!/bin/bash
. $PWD/fmw12c_env.sh
#
export SOA_INSTALL_HOME=$PWD/../SOASuiteAndBPM
export SOA_INSTALL_JAR=fmw_12.2.1.0.0_soa.jar
export SOA_INSTALL_RSP=fmw_12.2.1.0.0_soa.rsp
export SOA_INSTALL_ZIP=V78169-01.zip
#
# SOA and BPM Suite 12c
if [[ -d "$FMW_HOME" && ! -d "$SOA_HOME" ]]; then
#
#Unzip SOA&BPM
if [ ! -f "$SOA_INSTALL_HOME/$SOA_INSTALL_JAR" ]; then
if [ -f "$SOA_INSTALL_HOME/$SOA_INSTALL_ZIP" ]; then
echo Unzip $SOA_INSTALL_HOME/$SOA_INSTALL_ZIP to $SOA_INSTALL_HOME/$SOA_INSTALL_JAR
unzip $SOA_INSTALL_HOME/$SOA_INSTALL_ZIP -d $SOA_INSTALL_HOME
else
echo $SOA_INSTALL_HOME/$SOA_INSTALL_ZIP does not exist!
fi
else
echo $SOA_INSTALL_JAR already unzipped
fi
if [ -f "$SOA_INSTALL_HOME/$SOA_INSTALL_JAR" ]; then
echo Install SOA and BPM Suite 12cR2
$JAVA_HOME/bin/java -jar $SOA_INSTALL_HOME/$SOA_INSTALL_JAR -silent -responseFile $SOA_INSTALL_HOME/$SOA_INSTALL_RSP
else
echo $SOA_INSTALL_JAR not available!.
fi
else
if [ ! -d "$FMW_HOME" ]; then
echo $FMW_HOME not available: First install Fusion Middlware Infrastucture
fi
if [ -d "$SOA_HOME" ]; then
echo $SOA_HOME available: SOA Already installed
fi
fi

Save it as installSOA.sh under scripts.
This installs the software for both SOA and BPM. The choice to include BPM or not is made at creation of the domain, or you can adapt the INSTALL_TYPE element in the response file below. This one uses BPM; if you change it to SOA (I haven't got the actual value at hand, but assume it would be SOA), I assume the BPM software is omitted.

As in the FMW infrastructure installation we need a response file:
[ENGINE]

#DO NOT CHANGE THIS.
Response File Version=1.0.0.0.0

[GENERIC]

#Set this to true if you wish to skip software updates
DECLINE_AUTO_UPDATES=true

#
MOS_USERNAME=

#
MOS_PASSWORD=<SECURE VALUE>

#If the Software updates are already downloaded and available on your local system, then specify the path to the directory where these patches are available and set SPECIFY_DOWNLOAD_LOCATION to true
AUTO_UPDATES_LOCATION=

#
SOFTWARE_UPDATES_PROXY_SERVER=

#
SOFTWARE_UPDATES_PROXY_PORT=

#
SOFTWARE_UPDATES_PROXY_USER=

#
SOFTWARE_UPDATES_PROXY_PASSWORD=<SECURE VALUE>

#The oracle home location. This can be an existing Oracle Home or a new Oracle Home
ORACLE_HOME=/u01/app/oracle/FMW12210

#Set this variable value to the Installation Type selected. e.g. SOA Suite, BPM.
INSTALL_TYPE=BPM



Save it as fmw_12.2.1.0.0_soa.rsp under SOASuiteAndBPM.
This one is a little smaller than the FMW Infrastructure one. And again the ORACLE_HOME should be adapted in case you choose to use another FMW_HOME location.
Install Service Bus
The script for the installation of the Service Bus software is more or less the same as the SOA and BPM one:

#!/bin/bash
. $PWD/fmw12c_env.sh
#
export OSB_INSTALL_HOME=$PWD/../ServiceBus
export OSB_INSTALL_JAR=fmw_12.2.1.0.0_osb.jar
export OSB_INSTALL_RSP=fmw_12.2.1.0.0_osb.rsp
export OSB_INSTALL_ZIP=V78173-01.zip
#
# ServiceBus 12c
if [[ -d "$FMW_HOME" && ! -d "$OSB_HOME/bin" ]]; then
#
#Unzip ServiceBus
if [ ! -f "$OSB_INSTALL_HOME/$OSB_INSTALL_JAR" ]; then
if [ -f "$OSB_INSTALL_HOME/$OSB_INSTALL_ZIP" ]; then
echo Unzip $OSB_INSTALL_HOME/$OSB_INSTALL_ZIP to $OSB_INSTALL_HOME/$OSB_INSTALL_JAR
unzip $OSB_INSTALL_HOME/$OSB_INSTALL_ZIP -d $OSB_INSTALL_HOME
else
echo $OSB_INSTALL_HOME/$OSB_INSTALL_ZIP does not exist!
fi
else
echo $OSB_INSTALL_JAR already unzipped
fi
if [ -f "$OSB_INSTALL_HOME/$OSB_INSTALL_JAR" ]; then
echo Install ServiceBus 12cR2
$JAVA_HOME/bin/java -jar $OSB_INSTALL_HOME/$OSB_INSTALL_JAR -silent -responseFile $OSB_INSTALL_HOME/$OSB_INSTALL_RSP
else
echo $OSB_INSTALL_JAR not available!
fi
else
if [ ! -d "$FMW_HOME" ]; then
echo $FMW_HOME not available: First install Fusion Middlware Infrastucture
fi
if [ -d "$OSB_HOME" ]; then
echo $OSB_HOME available: ServiceBus Already installed
fi
fi

Save it as installSB.sh under scripts.
This installs the Service Bus software.
As in the FMW infrastructure installation we need a response file:
[ENGINE]

#DO NOT CHANGE THIS.
Response File Version=1.0.0.0.0

[GENERIC]

#Set this to true if you wish to skip software updates
DECLINE_AUTO_UPDATES=true

#
MOS_USERNAME=

#
MOS_PASSWORD=<SECURE VALUE>

#If the Software updates are already downloaded and available on your local system, then specify the path to the directory where these patches are available and set SPECIFY_DOWNLOAD_LOCATION to true
AUTO_UPDATES_LOCATION=

#
SOFTWARE_UPDATES_PROXY_SERVER=

#
SOFTWARE_UPDATES_PROXY_PORT=

#
SOFTWARE_UPDATES_PROXY_USER=

#
SOFTWARE_UPDATES_PROXY_PASSWORD=<SECURE VALUE>

#The oracle home location. This can be an existing Oracle Home or a new Oracle Home
ORACLE_HOME=/u01/app/oracle/FMW12210

#Set this variable value to the Installation Type selected. e.g. Service Bus.
INSTALL_TYPE=Service Bus



Save it as fmw_12.2.1.0.0_osb.rsp under ServiceBus.
This one is a little smaller than the FMW Infrastructure one. And again the ORACLE_HOME should be adapted in case you choose to use another FMW_HOME location.
Install Managed File Transfer
The script for the installation of the Managed File Transfer software is again more or less the same as the SOA and BPM one:
#!/bin/bash
. $PWD/fmw12c_env.sh
#
export MFT_INSTALL_HOME=$PWD/../ManagedFileTransfer
export MFT_INSTALL_JAR=fmw_12.2.1.0.0_mft.jar
export MFT_INSTALL_RSP=fmw_12.2.1.0.0_mft.rsp
export MFT_INSTALL_ZIP=V78174-01.zip
#
# MFT 12c
if [[ -d "$FMW_HOME" && ! -d "$MFT_HOME/bin" ]]; then
#
#Unzip MFT
if [ ! -f "$MFT_INSTALL_HOME/$MFT_INSTALL_JAR" ]; then
if [ -f "$MFT_INSTALL_HOME/$MFT_INSTALL_ZIP" ]; then
echo Unzip $MFT_INSTALL_HOME/$MFT_INSTALL_ZIP to $MFT_INSTALL_HOME/$MFT_INSTALL_JAR
unzip $MFT_INSTALL_HOME/$MFT_INSTALL_ZIP -d $MFT_INSTALL_HOME
else
echo $MFT_INSTALL_HOME/$MFT_INSTALL_ZIP does not exist!
fi
else
echo $MFT_INSTALL_JAR already unzipped
fi
if [ -f "$MFT_INSTALL_HOME/$MFT_INSTALL_JAR" ]; then
echo Install MFT 12cR2
$JAVA_HOME/bin/java -jar $MFT_INSTALL_HOME/$MFT_INSTALL_JAR -silent -responseFile $MFT_INSTALL_HOME/$MFT_INSTALL_RSP
else
echo $MFT_INSTALL_JAR not available!
fi
else
if [ ! -d "$FMW_HOME" ]; then
echo $FMW_HOME not available: First install Fusion Middlware Infrastucture
fi
if [ -d "$MFT_HOME" ]; then
echo $MFT_HOME available: MFT Already installed
fi
fi

Save it as installMFT.sh under scripts.
Again we need a response file:
[ENGINE]

#DO NOT CHANGE THIS.
Response File Version=1.0.0.0.0

[GENERIC]

#Set this to true if you wish to skip software updates
DECLINE_AUTO_UPDATES=true

#
MOS_USERNAME=

#
MOS_PASSWORD=<SECURE VALUE>

#If the Software updates are already downloaded and available on your local system, then specify the path to the directory where these patches are available and set SPECIFY_DOWNLOAD_LOCATION to true
AUTO_UPDATES_LOCATION=

#
SOFTWARE_UPDATES_PROXY_SERVER=

#
SOFTWARE_UPDATES_PROXY_PORT=

#
SOFTWARE_UPDATES_PROXY_USER=

#
SOFTWARE_UPDATES_PROXY_PASSWORD=<SECURE VALUE>

#The oracle home location. This can be an existing Oracle Home or a new Oracle Home
ORACLE_HOME=/u01/app/oracle/FMW12210



Save it as fmw_12.2.1.0.0_mft.rsp under ManagedFileTransfer.
And again the ORACLE_HOME should be adapted in case you choose to use another FMW_HOME location.
Install the lot
You could run the scripts above one by one, or have them called by a master script:
#!/bin/bash
echo _______________________________________________________________________________
echo Java SDK 8
./installJava.sh
echo
echo _______________________________________________________________________________
echo Fusion Middleware Infrastructure
./installFMW.sh
echo
echo _______________________________________________________________________________
echo "SOA & BPM Suite"
./installSOA.sh
echo
echo _______________________________________________________________________________
echo ServiceBus
./installSB.sh
echo
echo _______________________________________________________________________________
echo Managed File Transfer
./installMFT.sh

Save it as install.sh under scripts.
Repository Creation
When the software is installed, it's time to create the repository. This requires:
  • a database, for instance an 11g XE, 11gR2 latest or 12c
  • Sys password
Where in 11g you had a separate repository creation utility in a giant installer (as big as a SOA Suite installation), in 12c the RCU comes in parts per product. However, in the end it is one utility that 'grows' with each added product.

The command-line interface of the RCU and its options are described here. It turns out (though it is not described there) that the RCU also supports a response file.

The rcu install script is as follows:
#!/bin/bash
. $PWD/fmw12c_env.sh
echo Run rcu for SOA Infrastucture
export RCU_INSTALL_HOME=$PWD/../rcu
export RCU_SOA_RSP=rcuSOA.rsp
export RCU_SOA_PWD=rcuSOAPasswords.txt
#export RCU_SOA_PWD=rcuSOAPasswords-same.txt
$FMW_HOME/oracle_common/bin/rcu -silent -responseFile $RCU_INSTALL_HOME/$RCU_SOA_RSP -f < $RCU_INSTALL_HOME/$RCU_SOA_PWD

Save it as rcuSOA.sh under scripts.

This script uses both a response file and a password file.
The response file is as follows:
#RCU Operation - createRepository, generateScript, dataLoad, dropRepository, consolidate, generateConsolidateScript, consolidateSyn, dropConsolidatedSchema, reconsolidate
operation=createRepository

#Enter the database connection details in the supported format. Database Connect String. This can be specified in the following format - For Oracle Database: host:port:SID OR host:port/service , For SQLServer, IBM DB2, MySQL and JavaDB Database: Server name/host:port:databaseName. For RAC database, specify VIP name or one of the Node name as Host name.For SCAN enabled RAC database, specify SCAN host as Host name.
connectString=darlin-vce-db:1521:PDBORCL

#Database Type - [ORACLE|SQLSERVER|IBMDB2|EBR|MYSQL] - default is ORACLE
databaseType=ORACLE

#Database User
dbUser=sys

#Database Role - sysdba or Normal
dbRole=SYSDBA

#This is applicable only for database type - EBR
#edition=

#Prefix to be used for the schema. This is optional for non-prefixable components.
schemaPrefix=DEV

#List of components separated by comma. Remove the components which are not needed.
componentList=UCSUMS,MDS,WLS,STB,OPSS,IAU,IAU_APPEND,IAU_VIEWER,SOAINFRA,ESS,MFT

#Specify whether dependent components of the given componentList have to be selected. true | false - default is false
#selectDependentsForComponents=false

#If below property is set to true, then all the schemas specified will be set to the same password.
useSamePasswordForAllSchemaUsers=false

#This allows user to skip cleanup on failure. yes | no. Default is no.
#skipCleanupOnFailure=no

#Yes | No - default is Yes. This is applicable only for database type - SQLSERVER.
#unicodeSupport=no

#Location of ComponentInfo xml file - optional.
#compInfoXMLLocation=

#Location of Storage xml file - optional
#storageXMLLocation=

#Tablespace name for the component. Tablespace should already exist if this option is used.
#tablespace=

#Temp tablespace name for the component. Temp Tablespace should already exist if this option is used.
#tempTablespace=

#Absolute path of Wallet directory. If wallet is not provided, passwords will be prompted.
#walletDir=

#true | false - default is false. RCU will create encrypted tablespace if TDE is enabled in the database.
#encryptTablespace=false

#true | false - default is false. RCU will create datafiles using Oracle-Managed Files (OMF) naming format if value set to true.
#honorOMF=false

#Variable required for component SOAINFRA. Database Profile (SMALL/MED/LARGE)
SOA_PROFILE_TYPE=SMALL

#Variable required for component SOAINFRA. Healthcare Integration(YES/NO)
HEALTHCARE_INTEGRATION=NO


Regarding the elements you want to fill using properties, this one is the largest. The most important ones are:
  • connectString=darlin-vce-db:1521:PDBORCL
  • databaseType=ORACLE
  • dbUser=sys
  • dbRole=SYSDBA
  • schemaPrefix=DEV
  • componentList=UCSUMS,MDS,WLS,STB,OPSS,IAU,IAU_APPEND,IAU_VIEWER,SOAINFRA,ESS,MFT
  • useSamePasswordForAllSchemaUsers=false

I think properties like connectString, databaseType, dbUser and dbRole speak more or less for themselves. The property 'schemaPrefix' needs to be adapted according to the target environment. This can be something like DEV, TST, ACC or PRD. Or SOAO, SOAT, SOAA, SOAP (the last one is funny...).

Then the component list. For SOA and MFT there are several required components. These can be found here in the 12.1.3 docs. For 12.2.1 the list of component IDs can be found here. Unfortunately there you can't find the requirements in the same detail as in 12.1.3.

Then there is a password file. If you set useSamePasswordForAllSchemaUsers to true, you need only two passwords: the sys password and the generic schema password. If, as in this example, the value is false, you need to specify a password for each schema. The password file I use looks like this:

welcome1
DEV_UMS
DEV_MDS
DEV_WLS
DEV_WLS_RUNTIME
DEV_STB
DEV_OPSS
DEV_IAU
DEV_IAU_APPEND
DEV_IAU_VIEWER
DEV_SOAINFRA
DEV_ESS
DEV_MFT

The first password in the list is the system password; then the passwords are listed in the order of the components. A few remarks (a small generator sketch follows this list):
  • The component UCSUMS (User Messaging Services) results in a schema DEV_UMS (provided that the schemaPrefix = DEV).
  • I use passwords here that equal the schema names. You probably would not do that in acceptance and/or production, but maybe you do in dev and test. In the example it is handy to see which password goes where.
  • The component WLS needs two passwords, since it results in two schemas: DEV_WLS and DEV_WLS_RUNTIME. This is not documented (I could not find it) and it took me considerable time, since after DEV_WLS the passwords did not match and the utility complained about a missing password. Looking at a manually created repository I found that it also created DEV_WLS_RUNTIME.
  • For Managed File Transfer (MFT), Enterprise Scheduler Service (ESS) is also needed, as well as the prerequisites for SOAINFRA.
  • SOAINFRA is needed for both SOA & BPM and Service Bus. So even if you only install Service Bus, you need to install SOAINFRA.
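Because the order of the passwords has to match the component list exactly, you could generate the password file instead of maintaining it by hand. The sketch below is my own addition, not part of the original scripts: it assumes the dev/test convention of password = schema name used above, hard-codes the schema list that belongs to this component list, and invents the name genRcuPasswords.sh.

#!/bin/bash
# genRcuPasswords.sh - hypothetical helper; assumes password == schema name (dev/test only).
# First line is the sys password, then one password per schema in component order.
SYS_PWD=welcome1
PREFIX=${1:-DEV}
SCHEMAS="UMS MDS WLS WLS_RUNTIME STB OPSS IAU IAU_APPEND IAU_VIEWER SOAINFRA ESS MFT"
{
echo $SYS_PWD
for SCHEMA in $SCHEMAS
do
echo ${PREFIX}_${SCHEMA}
done
} > ../rcu/rcuSOAPasswords.txt

Calling it as ./genRcuPasswords.sh TST would produce the same file with TST_ prefixes, which is handy when rolling out the repository per environment.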

Conclusion
As said, these scripts help in installing the software and creating the repository. They use shell scripts, but it should not be too hard to translate them to ANT or other tooling like Ansible or Puppet if you're into one of those. To me it would be nice finger practice to translate them to ANT to be able to dynamically adapt the response files. I'll probably do that in the near future. And it would be a nice learning path to implement this in Ansible or Puppet.

But first, for me it would be a challenge to create a domain script in WLST. So hopefully I get to write about that soon.

A Business Intelligence Love Story: #IHateYourFace

Rittman Mead Consulting - Wed, 2016-05-25 05:00

Sometimes love isn’t fair

A teenager, just emerging from an awkward growing phase, tends to spew nonsensical permutations of varyingly cringe-worthy affections toward their love target. The data points behind the intent are solid. Let’s go Boolean now:

Intent of good = 1 ;

attracted to target = 1;

ability to provide for target = 1;

time available = 1;

prior or current committed relationship = 0.

So here we are with all the solid intentions in the world and the variables all align to an optimal outcome. Except one thing:

love target hates your face = 1;

Exit(0)

Assuming that this is a hierarchical value chain with face hating as the parent, we are at an impasse.

The above is not intended as an affront to teenagers spitting game at their prospective boo. It is, though, an analogy to the fantastic time and effort spent in integrating, transforming and presenting Business Intelligence to the enterprise with all of the proper best practices and technology, only to have the user subtly tell us that the information is “great but… I don’t know what I am looking at and I was looking for it to be in 3D” (AKA, I hate your face). There have been some great books out there about visualization and we have read a LOT of them. Best practices aside, visualization is about taste. The love target above may well have a completely different opinion of facial beauty than Mr. or Ms. hottie-mc-toddy that sits next to you in your advanced Kafka course (seriously, it might happen). Love at Kafka training happens, I swear it does, really, and when it does, it involves unicorns and ends of rainbows.

So now I have to figure out my end user’s tastes in visualization?

Yep, accept and move on.

You wouldn’t saddle up with a love interest if you hated their face. Why would an end user sign up for something they don’t want to use? Of course, you, being the BI practitioner that you are, know your BI stack up and down and are well aware of all of the available visualization options. These end users don’t know what they are talking about…right?

—they don’t have to.

Lots of the big BI players have been losing ground in the industry of viz because new competitors are building attractive viz capabilities. The new guys in BI are going to the end users with something the end users want…pretty and flexible visualizations. Both the viz provider and end user need not worry themselves with data integrity, governance, or how they got the data to start.

Welcome to Enterprise BI 2.0

What if I could use my existing enterprise platform and allow users to integrate more data sources, mash it up and then take an almost storytelling approach to BI? You can, and the next releases from the big BI vendors are all focused on that. Clearly at Rittman Mead we have an affinity for OBIEE, and with 12c the dream of viz has become an out-of-the-box reality. Data visualization is core to the new functionality, with more on the way. So my advice is to upgrade to 12c ASAP to get your new face on.

Rittman Mead has less invasive facelift-versus-replace options with viz for 12c and 11g. The Visual Plug-in Pack (VPP) extends the native charting in OBI into dynamic, attractive, Hollywood-level viz. We also have a User Engagement offering that comes with that visual facelift all the girls at the salon are talking about. Shoot me an email if you are interested in our streamlined OBI 12c upgrade service or VPP. Jason.davis@rittmanmead.com

The post A Business Intelligence Love Story: #IHateYourFace appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Formspider is coming to US

Gerger Consulting - Wed, 2016-05-25 02:56
Dear Oracle Developers,
We are opening a branch in the US! As part of this effort, I will be in the US between June 13 and July 4 to meet with organizations and developers who are interested in Formspider.
Formspider offers interesting opportunities for freelancers, consulting firms, IT departments and ISVs.
Please get in touch to find out how Formspider can help you grow your business, meet your goals and make it easier to develop cross-platform applications.
I’ll be happy to arrange a meeting to talk to you.
Kind Regards,
Yalim K. Gerger
Founder
Categories: Development
