Feed aggregator

Embedded mode limitations for Production systems

Anthony Shorten - Thu, 2016-05-26 20:33

In most implementations of Oracle Utilities products the installer creates an embedded mode installation. This is called embedded because the domain configuration is embedded in the application, which is ideal for demonstration and development environments where the default setup is sufficient for those types of activities.

Over time, though, customers and partners will want to use more and more of the Oracle WebLogic domain facilities, including advanced setups such as multiple servers, clusters, and advanced security configurations. Here are a few important things to remember about embedded mode:

  • The embedded mode domain setup is fixed: a single server houses both the product and the administration server, with the internal basic security setup. In non-production this is reasonable, as the requirements for the environment are simple.
  • The domain file (config.xml) is generated by the product from a template and assumes an embedded-only setup.
  • When implementations have additional requirements for the domain, there are three alternatives:
    • Make the changes in the domain from the administration console and then convert the new config.xml generated by the console into a custom template. This is necessary because whenever Oracle delivers ANY patch or upgrade (or whenever you make configuration changes) you need to run initialSetup[.sh] to add the patch, upgrade or configuration to the product, and that step resets the file back to the factory-provided template unless you are using a custom template. Basically, if you decide to use this option and do not implement a custom template, you will lose your changes each time.
    • In later versions of OUAF we introduced user exits, which allow implementations to add to the configuration using XML snippets. User exits are sprinkled throughout the configuration files to allow extensions, but they do require you to understand the configuration file being manipulated. Using this method, you make the change to the domain, examine the resulting change to the domain file, decide which user exit is available to reflect that change, and add the relevant XML snippet. Again, you must understand the configuration file to make sure you do not corrupt the domain.
    • The easiest option is to migrate to native mode. This basically removes the embedded nature of the domain and houses it within Oracle WebLogic. This is explained in the Native Installation whitepaper (Doc Id: 1544969.1) available from My Oracle Support.

Native installations allow you to use the full facilities of Oracle WebLogic without the restrictions of embedded mode. The advantages of native installations are the following:

  • The domain can be setup according to your company standards.
  • You can implement clusters and multiple servers, including dynamic clustering.
  • You can use the security features of Oracle WebLogic to implement complex security setups including SSO solutions.
  • You can lay out the architecture according to your volumes so that you can manage within your SLAs.
  • You can implement JDBC connection pooling, Work Managers, advanced diagnostics etc.

Oracle recommends that native installations be used for environments where you need to take advantage of the domain facilities. Embedded mode should only be used within the restrictions it imposes.

Crowdsourced software development experiment

Dimitri Gielis - Thu, 2016-05-26 16:30
A few days ago I got an email about an experiment in programming with the crowd.

I hadn't really heard about it before, but found it an interesting thought. In this experiment people perform microtasks (tasks of about 10 minutes) as members of the crowd. People don't know each other, but collaborate together. The system distributes the work and supplies instructions. The challenge is in creating quality code that meets the specifications.

Job is still searching for people to be part of the experiment, so I thought I'd put it on my blog. In case you're interested, you'll find more details below, including how to contact him.



Categories: Development

Understanding MySQL Fabric Faulty Server Detection

Pythian Group - Thu, 2016-05-26 13:39

A while ago I found myself analyzing a MySQL Fabric installation to understand why a group member was occasionally being marked as FAULTY even when the server was up and running and no failures were observed.
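A listing like the one below can typically be pulled with the mysqlfabric command-line client; the group name in this sketch is just a placeholder:

mysqlfabric group lookup_servers my_group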

         
                         server_uuid     address  status       mode weight
------------------------------------ ----------- ------- ---------- ------
ab0b0653-6121-11c5-55a0-007543445454 mysql1:3306 PRIMARY READ_WRITE    1.0
f34dd331-2432-11f4-a2d3-006754678533 mysql2:3306 FAULTY  READ_ONLY     1.0

 

Upon reviewing mysqlfabric logs, I found the following warnings were being logged from time to time:

[WARNING] 1442221217.920115 - FailureDetector(xc_grp_1) - Server (f34dd331-2432-11f4-a2d3-006754678533) in group (xc_grp_1) is unreachable

 

Since I was not clear under which circumstances a server is marked as FAULTY, I decided to review MySQL Fabric code (Python) to better understand the process.

The module responsible for printing this message is failure_detection.py, and more specifically the _run method of the FailureDetector class. This method loops through every server in a group and attempts a connection to the MySQL instance running on that node. The MySQLServer.Is_alive method (mysql/fabric/server.py) is called for this purpose.

Before reviewing the failure detection process, we first need to know that there are four MySQL fabric parameters that will affect when a server is considered unstable or faulty:

DETECTION_TIMEOUT
DETECTION_INTERVAL
DETECTIONS
FAILOVER_INTERVAL

 

Based on the above variables, the logic followed by FailureDetector._run() to mark a server as FAULTY is the following:

1) Every {DETECTION_INTERVAL/DETECTIONS} seconds, a connection to each server in the group is attempted, with a timeout equal to DETECTION_TIMEOUT

2) If DETECTION_TIMEOUT is exceeded, the observed message is logged and a counter is incremented

3) When this counter reaches DETECTIONS, the server is marked as “unstable”, and if the last master change happened more than FAILOVER_INTERVAL seconds ago, the server is marked as FAULTY

With a better understanding of the logic followed by MySQL fabric to detect faulty nodes, I went to the configuration file to check the existing values for each of the parameters:

DETECTIONS=3
DETECTION_TIMEOUT=1
FAILOVER_INTERVAL=0
DETECTION_INTERVAL=6

From the values above we can see that each server in the group is polled every 2 seconds (DETECTION_INTERVAL/DETECTIONS = 6/3) and that the monitored server must respond within one second for the test to be considered successful.

On high concurrency nodes, or nodes under heavy load, a high polling frequency combined with tight timeouts could cause the servers to be marked as FAULTY just because the connection attempt would not be completed or processed (in the case of high connection rates or saturated network interfaces) before the timeout occurs.

Also, with FAILOVER_INTERVAL reduced to 0, a server can be marked as FAULTY even if a failover has just occurred.

A less aggressive configuration would be more appropriate for heavily loaded environments:

DETECTIONS=3 (leave as is)
DETECTION_TIMEOUT=5 to 8
FAILOVER_INTERVAL=1200
DETECTION_INTERVAL=45 to 60
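As a sketch, these parameters would typically be set in the failure-tracking section of the Fabric configuration file; the section name and lower-case keys below are assumptions based on a typical fabric.cfg, so check your own file before editing:

# fabric.cfg (excerpt) -- assumed section and key names, adjust to your installation
[failure_tracking]
detections         = 3
detection_timeout  = 5
detection_interval = 60
failover_interval  = 1200

After changing the file, restart the MySQL Fabric process so the new values take effect.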

Conclusion

As with any other database clustering solution that relies on a database connection to test node status, situations where the database server would take longer to respond should also be considered. The polling frequency should be adjusted so the detection window is within an acceptable range, but the rate of monitoring connections generated is also kept to the minimum. Check timeouts should also be adjusted to avoid false positives caused by the server not being able to respond in a timely manner.

 

Categories: DBA Blogs

INSERT ALL caveat

Dominic Brooks - Thu, 2016-05-26 11:33

Why you might want to think twice about using INSERT ALL.

One of those things I knew and then forgot.

So, let’s say you’ve got three tables or a partitioned table or something like that.

Let’s use regional tables for simplicity.

drop table t1_r1;
drop table t1_r2;
drop table t1_r3;

create table t1_r1
(col1 varchar2(2) not null
,col2 number not null
,check( col1 in ('R1')));

create table t1_r2
(col1 varchar2(2) not null
,col2 number not null
,check( col1 in ('R2')));

create table t1_r3
(col1 varchar2(2) not null
,col2 number not null
,check( col1 in ('R3')));

insert into t1_r1 values ('R1',1);
insert into t1_r2 values ('R2',1);
insert into t1_r3 values ('R3',1);

commit;

And you want a routine that will insert into one of those tables depending on region.

And you’re a simple fellow, so you go with an IF statement:

create or replace procedure p1 (
  col1 in varchar2, 
  col2 in number
)
as
begin
  if col1 = 'R1'
  then
      insert into t1_r1 values(col1,col2);
  elsif col1 = 'R2'
  then
      insert into t1_r2 values(col1,col2);
  else 
      insert into t1_r3 values(col1,col2);
  end if;
end p1;
/

Procedure P1 compiled

And then in the same session you run this uncommitted:

exec p1('R1',2);

PL/SQL procedure successfully completed.

And then in another session you decide to truncate table T1_R3:

truncate table t1_r3;

Table T1_R3 truncated.

No problem.
None was expected.

However…

Let’s say that we decide to tidy up that procedure and get rid of some of the repetition by using an INSERT ALL statement.
I will use a standalone sql statement just to demonstrate a further minor aspect rather than using a procedure with a bound parameter.

insert all
when col1 = 'R1' then into t1_r1
when col1 = 'R2' then into t1_r2
when col1 = 'R3' then into t1_r3
select 'R1' col1,2 col2
from dual;

1 row inserted.

Let’s revisit the truncate:

truncate table t1_r3;

SQL Error: ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
00054. 00000 -  "resource busy and acquire with NOWAIT specified or timeout expired"
*Cause:    Interested resource is busy.
*Action:   Retry if necessary or increase timeout.

TM share locks from the INSERT ALL on all three possible targets prevent the TRUNCATE.

So, a simple/simplistic illustration of why you might want to think twice about whether INSERT ALL is the best feature for your use case, based on a real-life problem.


Tracefile Automation – Simplify Your Troubleshooting Tasks

Pythian Group - Thu, 2016-05-26 10:06

Here’s a common Oracle troubleshooting scenario when a SQL statement needs tuning and/or troubleshooting:

  • log on to dev server
  • connect to database (on a different server)
  • run the SQL statement with 10046 tracing enabled
  • ssh to the database server
  • copy the trace file back to work environment
  • process the trace file.

 

All of this takes time. Granted, not a great deal of time, but tuning is an iterative process and so these steps will be performed multiple times. Not only are these steps a productivity killer, but they are repetitive and annoying. No one wants to keep running the same manual command over and over.

This task is ripe for some simple automation.

If both the client and database servers are some form of Unix, automating these tasks is straightforward.

Please note that these scripts require an 11g or later version of the Oracle database. These scripts are dependent on the v$diag_info view to retrieve the tracefile name. While these scripts could be made to work on 10g databases, that is left as an exercise for the reader.

Step by Step

To simplify the process it can be broken down into steps.

 

1. Reconnect

The first step is to create a new connection each time the SQL is executed. Doing so ensures the database session gets a new tracefile, as we want each execution to be isolated.

-- reconnect.sql

connect jkstill/XXXX@oravm

 

2. Get the Tracefile hostname, owner and filename

Oracle provides all the information needed.

In addition the script will set the 10046 event, run the SQL of interest and then disable the 10046 event.

Following is a snippet from the tracefile_identifier_demo.sql script.

 


-- column variables to capture host, owner and tracefile name
col tracehost new_value tracehost noprint
col traceowner new_value traceowner noprint
col tracefile new_value tracefile noprint

set term off head off feed off

-- get oracle owner
select username traceowner from v$process where pname = 'PMON';

-- get host name
select host_name tracehost from v$instance;

-- set tracefile identifier
alter session set tracefile_identifier = 'MYTRACEFILE';

select value tracefile from v$diag_info where name = 'Default Trace File';

set term on head on feed on

-- do your tracing here
alter session set events '10046 trace name context forever, level 12';

-- run your SQL here
@@sql2trace

alter session set events '10046 trace name context off';

 

In this case sql2trace.sql is a simple SELECT from a test table.  All of the scripts used here appear in Github as mentioned at the end of this article.

 

3. Process the Tracefile

Now that the tracefile has been created, it is time to retrieve it.

The following script scp.sql is called from tracefile_identifier_demo.sql.

 


col scp_src new_value scp_src noprint
col scp_target new_value scp_target noprint

set term off feed off verify off echo off

select '&&1' scp_src from dual;
select '&&2' scp_target from dual;

set feed on term on verify on

--disconnect

host scp &&scp_src &&scp_target

Following is an example putting it all together in tracefile_identifier_demo.sql.

 

SQL> @tracefile_identifier_demo
Connected.

1 row selected.


PRODUCT                        VERSION              STATUS
------------------------------ -------------------- --------------------
NLSRTL                         12.1.0.2.0           Production
Oracle Database 12c Enterprise 12.1.0.2.0           64bit Production
 Edition

PL/SQL                         12.1.0.2.0           Production
TNS for Linux:                 12.1.0.2.0           Production

Data Base
------------------------------
P1.JKS.COM

INSTANCE_NAME        HOST_NAME                      CURRDATE
-------------------- ------------------------------ ----------------------
js122a1              ora12c102rac01.jks.com         2016-05-23 16:38:11

STARTUP
--------------------
04/02/2016 11:22:12


Session altered.

Elapsed: 00:00:00.00

OWNER        OBJECT NAME                     OBJECT_ID OBJECT_TYPE             CREATED
------------ ------------------------------ ---------- ----------------------- -------------------
SYS          OLAP_EXPRESSION                     18200 OPERATOR                2016-01-07 21:46:54
SYS          OLAP_EXPRESSION_BOOL                18206 OPERATOR                2016-01-07 21:46:54
SYS          OLAP_EXPRESSION_DATE                18204 OPERATOR                2016-01-07 21:46:54
SYS          OLAP_EXPRESSION_TEXT                18202 OPERATOR                2016-01-07 21:46:54
SYS          XMLSEQUENCE                          6379 OPERATOR                2016-01-07 21:41:25
SYS          XQSEQUENCE                           6380 OPERATOR                2016-01-07 21:41:25
SYS          XQWINDOWSEQUENCE                     6393 OPERATOR                2016-01-07 21:41:25

7 rows selected.

Elapsed: 00:00:00.00

Session altered.

Elapsed: 00:00:00.00

js122a1_ora_1725_MYTRACEFILE.trc                                                                                                                                                            100% 3014     2.9KB/s   00:00

SQL> host ls -l js122a1_ora_1725_MYTRACEFILE.trc
-rw-r----- 1 jkstill dba 3014 May 23 16:38 js122a1_ora_1725_MYTRACEFILE.trc

But Wait, There’s More!

This demo shows you how to automate the retrieval of the trace file. But why stop there?  The processing of the file can be modified as well.

Really, it isn’t even necessary to copy the trace file over, as its contents can be retrieved and piped to your favorite command.  The script mrskew.sql, for instance, uses ssh to cat the tracefile and then pipes the contents to the Method R utility, mrskew.  Note: mrskew is a commercial utility, not open source software.

 

-- mrskew.sql

col ssh_target new_value ssh_target noprint
col scp_filename new_value scp_filename noprint

set term off feed off verify off echo off

select '&&1' ssh_target from dual;
select '&&2' scp_filename from dual;

set feed on term on verify on

--disconnect

host ssh &&ssh_target 'cat &&scp_filename' | mrskew

 

Following is another execution of tracefile_identifier_demo.sql, this time piping the output to mrskew. Only the final part of the output is shown below.

 

...

Elapsed: 00:00:00.01

CALL-NAME                    DURATION       %  CALLS      MEAN       MIN       MAX
---------------------------  --------  ------  -----  --------  --------  --------
PARSE                        0.002000   33.1%      2  0.001000  0.000000  0.002000
db file sequential read      0.001211   20.0%      5  0.000242  0.000056  0.000342
FETCH                        0.001000   16.5%      1  0.001000  0.001000  0.001000
gc cr grant 2-way            0.000999   16.5%      1  0.000999  0.000999  0.000999
SQL*Net message from client  0.000817   13.5%      2  0.000409  0.000254  0.000563
Disk file operations I/O     0.000018    0.3%      2  0.000009  0.000002  0.000016
SQL*Net message to client    0.000002    0.0%      2  0.000001  0.000001  0.000001
CLOSE                        0.000000    0.0%      2  0.000000  0.000000  0.000000
EXEC                         0.000000    0.0%      2  0.000000  0.000000  0.000000
---------------------------  --------  ------  -----  --------  --------  --------
TOTAL (9)                    0.006047  100.0%     19  0.000318  0.000000  0.002000

Now we can see where all the db time was consumed for this SQL statement, and there was no need to copy the trace file to the current working directory. The same can be done for tkprof and other operations.  Please see the plan.sql and tkprof.sql scripts in the Github repository.

 

Wrapping It Up

A little bit of automation goes a long way. Commands that are repetitive, boring and error prone can easily be automated.  Both your productivity and your mood will soar when little steps like these are used to improve your workflow.

All files for this demo can be found at https://github.com/jkstill/tracefile_identifier

Categories: DBA Blogs

On Demand Webcast: Driving a Connected, More Productive Next-Gen Workplace

WebCenter Team - Thu, 2016-05-26 09:25
Over the past two decades technological change has transformed business processes. Among the biggest changes is mobile ubiquity.

Finding ways to engage with customers in a mobile world is becoming increasingly challenging for outdated systems.

In this on-demand webcast, discover how a cloud platform can unite content, social, and mobile engagement with simplified line-of-business automation, as well as microsites and communities, for an integrated next-generation workplace. View today!

Using Enterprise Manager to manage cloud services

Pat Shuff - Thu, 2016-05-26 08:18
Yesterday we talked about the virtues of Enterprise Manager. To be honest, the type of monitoring tool is not important, but the fact that you have one is. One of the virtues that VMWare touts for VSphere is that you can manage instances on your server as well as instances in VCloud. This is something worth playing with. Having the same tool manage your on premise instances as well as your instances in the cloud is powerful. Unfortunately, VCloud only allows you to allocate virtual machines and the storage associated with them, so you only have an IaaS option of compute. You can't allocate just storage. You can't deploy a database server unless you have a database deployed that you want to clone. You need to start with an operating system and build from there. There are benefits of PaaS and SaaS that you will never see in the VCloud implementation.

Oracle Enterprise Manager provides the same universal management interface for on premise and in cloud services. Amazon falls short on this. First, they don't have on premise instances, so their tools don't monitor anything in your data center, only in their cloud. Microsoft has monitoring tools with plugins for looking at Azure services. It is important to note that you need a gateway server in the Azure cloud to aggregate the data, ship the telemetry data back, and report it in the monitoring tool. There is a good blog detailing the cost of IaaS monitoring in Azure. The blog points out that the outbound data transfer for monitoring can cost up to $17/month/server, so this is not something that comes for free.

Today we are going to look at using Enterprise Manager as a management tool for on premise systems, the Oracle Public Cloud, Amazon AWS, and Microsoft Azure. We are going to cheat a little and use a VirtualBox instance of Enterprise Manager 13c. We are not going to go through the installation process; the books and blogs that we referenced yesterday detail how to do this. Fortunately, the VirtualBox instance is available from edelivery.oracle.com. We are not going to use this instance but are instead going to use an instance, for demo purposes only, that is available internally at Oracle. The key difference between the two systems is that the edelivery instance is 21 GB in size for download and expands to provide an OEM 13c instance for testing, while the internal system (retriever.us.oracle.com) has a 12c and an 11g database installed and is 39.5 GB (expanding to almost 90 GB when uncompressed). Given the size of the instance I really can't provide external access to it. You can recreate this setup by downloading the edelivery system, installing an 11g database instance, installing a 12c database instance, and configuring OEM to include data from those instances to replicate the screen shots that we are including.

If we look at the details on the virtual box instance we notice that we need at least 2 cores and 10 GB of memory to run this instance. The system is unusable at 8 GB of RAM. We really should bump this up to 12 GB of RAM but given that it is for demo purposes and for training it is ok if it runs a little slow. If we were running this in production it is recommended to grow this to 4 cores and 16 GB of memory and also recommended that you not use a downloaded VirtualBox instance for production but install from scratch.

The key things that we are going to do are walk through what it takes to add a monitoring agent onto the service that we are trying to monitor and manage. If we look at the architecture of Enterprise Manager we notice that there are three key components; the Oracle Management Repository (OMR), the Oracle Management Service (OMS), and the Oracle Management Agent (OMA). The OMR is basically a database that keeps a history of all telemetry actions as well as reports and analytics for the systems being monitored. The OMS is the heart of Enterprise Manager and runs on a WebLogic server. The code is written in Java and presents the primary user interface to the administrators as well as being the gateway between the OMR and the agents or OMAs. The agents are installed on the target systems and collect operating system data, database data, weblogic data, and all other log data to ship back to the OMR for analysis by the users.

It is important to note at this point that most PaaS and SaaS providers do not allow you to install an Enterprise Manager Agent or any other management agent on their instances. They want to manage the services for you and force you to use their tools to manage their instance. SalesForce, for example, only gives you access to your customer relationship data. You can export your contact lists to a csv file to back up your data but you can't correlate the contact list to the documents that you have shared with these users. Amazon RDS does not provide file system access, system access to the database, or access to the operating system, so you can't install the management agent. You must use their tools to monitor services provided on their sites. Unfortunately, this inhibits you from looking at important things like workload repository reports or sql tuning guides to see if something is running slow or waiting on a lock. Your only choice is to deploy the desired PaaS or SaaS as a manual or bundled install on IaaS, forcing you to manually manage things like backups and patching on your own.

The first thing that we need to do in Enterprise Manager is to log in and click on the Setup button on the top right. We need to define named credentials since we are going to connect to the cloud service using public and private ssh keys. We need to follow the Security pull down to Named Credentials.

We click on the Create icon in the top left and add credentials with public and private keys. If we don't have an ssh key to access the service we can generate an ssh key using ssh-keygen which generates a public and private key and upload the key using the SSH Access pull down in the hamburger menu. Once we upload the ssh key we can use ssh -i keyname.ppk opc@ip_address for our database server. We will use this keyname.ppk to connect with Enterprise Manager and have all telemetry traffic transferred via the ssh protocol.
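As a rough sketch (the key file name and target address below are placeholders), generating and testing the key pair from a shell looks something like this:

# Generate a public/private key pair for the cloud account (file name is a placeholder)
ssh-keygen -t rsa -b 2048 -f ~/.ssh/oemcloud_key

# After uploading the public key to the cloud service, test access as the opc user
ssh -i ~/.ssh/oemcloud_key opc@ip_address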

Once we have valid credentials in the cloud account we can create the ssh access through Enterprise Manager. To do this we go to Setup at the top right, then Security, Named Credentials. We then click on the Create button in the middle left to start entering data about the credentials. The name in the screen shot below failed because it begins with a number, so we switched it to ssh2017 since 2017ssh failed the naming convention. We are trying to use host access via ssh, which is done with pull down menu definitions. The system defaults to host access but we need to change the scope from host to global, which does not tie our credentials to one ip address. We upload our public and private key as well as associate this with the opc user since that user has sudo rights. We can verify the credentials by looking at the bottom of the list. This should allow us to access our cloud host via ssh and deploy an agent to our cloud target.

Note that we created two credentials because we had a step fail later. We created credentials for the opc user and for the oracle user. The opc credentials are called ssh2017 as shown in the screen shots. The oracle credentials are called oracle2017 and are not shown. The same steps are used; only the username and the name of the credentials change.

If we want to install the management agent onto our instance we need to know the ip address of the service that we are going to monitor as well as an account that can sudo to root or run elevated admin services. We go to the Enterprise Manager splash screen, login, select the Setup button in the top right and drill down to Add Target and Add Target Manually. This takes us to the Add Target screen where we can Install Agent on Host. To get rid of the warnings, we added our cloud target ip address to the /etc/hosts file and used a fully qualified and short name associated with the ip address. We probably did not add the right external dns name but it works with Enterprise Manager. When we add the host we use the fully qualified host name. We can find this by logging into the cloud target and looking at the /etc/hosts file on that server. This gives us the local ip address and a fully qualified host name. Once we have this we can enter a directory to upload the agent software to. We had to create an agent directory under the /u01/app/oracle directory. We select the oracle2017 credentials (the screen shots use ssh2017 but this generates an error later) we defined in the previous step and start uploading the agent software and configuring the host as a target.
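For reference, the /etc/hosts entry on the Enterprise Manager host ends up looking something like the line below; the address and names are placeholders for your own cloud target:

# /etc/hosts on the Enterprise Manager host -- placeholder values
129.xxx.xxx.xxx   mycloudtarget.mydomain.com   mycloudtarget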

Note that we could have entered the ip address rather than going through adding the ip address to /etc/hosts. We would have received a warning with the ip address.

When we first tried this we got an error during the initialization phase that opc did not own the /u01/app/oracle directory and had to create an agent directory and change ownership. Fortunately, we could easily resubmit and enter a new directory without having to reenter all of the other information. The deployment takes a while because Enterprise Manager needs to upload the agent binaries, extract, and install them. The process is updated with status so that you can see the progress and restart when errors happen. When we changed the ownership, the installation failed at a later step stating the opc did not have permission to add the agent to the inventory. We corrected this by installing as oracle and setting the /u01/app/oracle/agent directory to be owned by oracle.
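In shell terms, the workaround described above boils down to something like this on the cloud target (the oinstall group is an assumption; use whatever group owns your Oracle software):

# Run from the opc account, which has sudo rights
sudo mkdir -p /u01/app/oracle/agent
sudo chown oracle:oinstall /u01/app/oracle/agent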

When we commit the ip address or host name as well as the ssh credentials, we can track progress as the management server deploys the agent. We get to a point where we note that the oracle user does not have ssh capabilities and we will need to run some stuff manually from the opc account.

At this point we should have an enterprise manager connection to a cloud host. To get this working from my VirtualBox behind my AT&T Uverse wireless router I first had to configure a route on my broadband connection and set the ip address of the Enterprise Manager VirtualBox image to a static ip address. This allows the cloud instance to talk back to the OMS and store data in the OMR.

The next step is to discover the database instances. This is done by going through a guided discovery on the host that we just provisioned. It took a few minutes to sync up with the OMS but we could verify this with the emctl status agent command on the target host. We add the target manually using the guided discovery and select database services to look for on the target.
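For reference, the agent check on the target host mentioned above looks roughly like the following; the agent home path is an assumption based on the directory chosen during deployment:

# Run as oracle on the cloud target
/u01/app/oracle/agent/agent_inst/bin/emctl status agent

# Optionally push the latest metrics to the OMS instead of waiting for the next upload cycle
/u01/app/oracle/agent/agent_inst/bin/emctl upload agent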

At this point we should have a database, listener, and host connected to our single pane of management glass. We should see a local database (em12c) and a cloud based database (prs12cHP). We can look at the host characteristics as well as dive into sql monitoring, database performance, and database management like backup and restore options or adding users to the repository. We could add a Java Cloud Service as well, link these two systems together, and trace a web page request down to a sql read to look at what the longest latency component is. We can figure out if the network, java memory allocation, or database disk is causing the slowest response. We can also look at sql tuning recommendations to get suggestions on changing our sql code or execution plans using the AWR report and sql tuning utilities in Enterprise Manager.

In summary, we can connect to an on premise server as well as a cloud server. We can't connect to an Amazon RDS instance because we don't get file system level access to push a client to, or a root user to change the agent permissions. We do get this with IaaS on Oracle, Compute servers on Azure, and EC2 on Amazon. We also get this with PaaS on Oracle and potentially even Force.com from SalesForce. No one gives you this ability with SaaS. It is assumed that you will take the SaaS solution as is and not need to look under the covers. Having a single pane of glass for monitoring and provisioning services is important. The tool should do more than tell you how full a disk is or how much of a cpu is loaded or available. It should dive into the application and let you look at where bottlenecks are and help troubleshoot issues. We could spend weeks diving into Enterprise Manager and the different management packs but we are on a journey to look at PaaS options from Amazon, Microsoft, and Oracle.

How to clear MDS Cache

Darwin IT - Thu, 2016-05-26 07:06
In answering a question on community.oracle.com, I found the following great tip: http://www.soatutor.com/2014/09/clear-mds-cache.html.

In 12cR2 this looks like:
1. Start the System MBean Browser: in the Domain, pull down the WebLogic Domain menu and choose 'System MBean Browser':


2. Browse for the Application Defined Beans:



3. Expand it and navigate to oracle.mds.lcm, then choose the server (AdminServer or SOAServer1)
4. Navigate to the node Application em (AdminServer) or soa-infra (SOAServer) -> MDSAppRuntime -> MDSAppRuntime. Click on the Operations tab, and then on clearCache


5. Click on Invoke:


Then the following confirmation is shown:


By the way, often a redeploy of the SOA Composite that calls the WSDL or other artefact helps. But unfortunately not always.





My "Must See" ADF/MAF Sessions at KScope 16

Scott Spendolini - Thu, 2016-05-26 07:00
Yes, you read that right - it's not a typo, nor did one of my kids or wife gain access to my laptop.  It's part of a "blog hop" - where a number of experts made recommendations about KScope sessions that are "must attend" and are not in their core technology.  I picked ADF/MAF, as I don't have any practical experience in either technology, but they are at least similar enough that I would not be totally lost.

In any case, the following sessions in the ADF/MAF track are worth checking out at Kscope 16 this year:

How to Use Oracle ALTA UI to Create a Smashing UI for Web and Mobile
Luc Bors, eProseed NL
When: Jun 28, 2016, Session 12, 4:45 pm - 5:45 pm

I've always liked UI, and Oracle ALTA is a new set of templates that we'll be seeing quite a bit of across a number of new technologies.

Three's Company: Going Mobile with Oracle APEX, Oracle MAF, and Oracle MCS
Frederic Desbiens , Oracle Corporation
When: Jun 27, 2016, Session 6, 4:30 pm - 5:30 pm

I'll admit - anytime there's a comparison of APEX and other similar technologies, it's always interesting to witness the discussion.  If nothing else, there will be a good healthy debate as a result of this session!

Introduction to Oracle JET: JavaScript Extension Toolkit
Shay Shmeltzer, Oracle Corporation
When: Jun 28, 2016, Session 7, 8:30 am - 9:30 am

Oracle JET is a lot more than just charts, and there's a lot of momentum behind this technology.  I'm very interested to learn more and perhaps even see a thing or two that you can do with it, as well as the various integration points that are possible with other technologies.

Build a Mobile App in 60 Minutes with MAF
John King , King Training Resources
When: Jun 27, 2016, Session 5, 3:15 pm - 4:15 pm

Native mobile applications are something that APEX doesn't do, so it would be nice to see how this would be possible, should the need ever arise.

Thanks for attending this ODTUG blog hop! Looking for some other juicy cross-track sessions to make your Kscope16 experience more educational? Check out the following session recommendations from fellow experts!

Oracle CURRENT_TIMESTAMP Function with Examples

Complete IT Professional - Thu, 2016-05-26 06:00
The CURRENT_TIMESTAMP function is one of the many time and date-related functions in Oracle. Learn how to use it and see some examples in this article. Purpose of the Oracle CURRENT_TIMESTAMP Function The Oracle CURRENT_TIMESTAMP function will display the current date and time. It’s displayed in the session time zone, which is the timezone of […]
Categories: Development

Storing Date Values As Characters (What’s Really Happening)

Richard Foote - Thu, 2016-05-26 03:00
For something that’s generally considered an extremely bad idea, I’ve lost count of the number of times I’ve come across applications that insist on storing date values as characters within the database. We’ve all seen them … I recently got called in to assist a customer who was having issues with a POC in relation […]
Categories: DBA Blogs

The river floes break in spring...

Greg Pavlik - Wed, 2016-05-25 19:37

Alexander Blok
 The river floes break in spring...
March 1902
translation by Greg Pavlik 


The river floes break in spring,
And for the dead I feel no sorrow -
Toward new summits I am rising,
Forgetting crevasses of past striving,
I see the blue horizon of tomorrow.

What regret, in fire and smoke,
What agony of Aaron’s rod,
With each hour, with each stroke -
Or instead - the heavens’ gift stoked,
From the Bush of Moses, the Mother of God!

Original:

Весна в реке ломает льдины,
И милых мертвых мне не жаль:
Преодолев мои вершины,
Забыл я зимние теснины
И вижу голубую даль.

Что сожалеть в дыму пожара,
Что сокрушаться у креста,
Когда всечасно жду удара
Или божественного дара
Из Моисеева куста!
 
 Март 1902

Please, use HTTPS for your APEX apps

Dimitri Gielis - Wed, 2016-05-25 17:07
Why use HTTPS?

When you Google this question you get many different answers, but this answer of Google Developers answers it for me in short (click the link for more details):
  • HTTPS protects the integrity of your website/APEX app
  • HTTPS protects the privacy and security of your users
  • HTTPS is the future of the web; many new technologies only work with https (for example Service Workers; you can read more about Service Workers and APEX in my presentation)
Industry going to HTTPS

Websites used to have an HTTP portion and an HTTPS portion, which became active when you logged in to the site, but nowadays everything is under HTTPS. Google will actually rank your site higher when it's using HTTPS. Look at the sites you visit; many of them now use HTTPS as a default.

HTTPS on localhost

If you're developing locally, you don't really need HTTPS on localhost, but I still like to have it.
Here are the steps I followed in Chrome on my Mac (OSX) to get the nice green lock when developing locally (this also works with APEX Front-End Boost):
  • In the address bar, click the little lock with the X. This will bring up a small information screen. Click the button that says "Certificate Information."
  • Click and drag the certificate image to your desktop. 
  • Double-click it. This will bring up the Keychain Access utility. Enter your password to unlock it.
  • Be sure you add the certificate to the System keychain, NOT the login keychain. 
  • After it has been added, double-click it. 
  • Expand the "Trust" section. "When using this certificate," set to "Always Trust"
  • Close Keychain Access and restart Chrome, and your self-signed certificate should be recognized now by the browser.
HTTPS on your own server

For years I've been using SSL certificates ordered from Godaddy, but depending on the certificate you get, it might not be that cheap. The APEX R&D website uses a multi-site certificate - the same certificate is used for the APEX Office Print website.

But there's some good news... you can get SSL for free too (and it's very easy to do!), thanks to Letsencrypt. I used Letsencrypt to protect the Euro2016challenge.eu APEX app/website for example.
Here's the Getting Started Guide from Let's Encrypt. This is the command I used (after installing the package):

./letsencrypt-auto certonly --webroot -w /var/www/euro2016 -d euro2016challenge.eu -d www.euro2016challenge.eu
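Keep in mind that Let's Encrypt certificates are valid for 90 days, so they need to be renewed. A minimal sketch, assuming the same client as above (the renew subcommand depends on the client version you installed):

./letsencrypt-auto renew

You can schedule this in a cron job so the renewal happens automatically.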


If you're not yet on https with your APEX app/site, I would definitely recommend looking into it :)

Categories: Development

Two New Oracle Security Presentations Available

Pete Finnigan - Wed, 2016-05-25 16:50

I attended the UKOUG conference last week Monday to Wednesday in Birmingham. This is the first year for three years that it has been back at the ICC in the center of Birmingham. The last two years have seen the....[Read More]

Posted by Pete On 14/12/15 At 08:54 PM

Categories: Security Blogs

Log Buffer #475: A Carnival of the Vanities for DBAs

Pythian Group - Wed, 2016-05-25 14:32

This Log Buffer Edition goes through various blogs, and selects some of the top posts from Oracle, SQL Server and MySQL.

Oracle:

MOS Note:136697.1 – New HCHECK.SQL for Oracle Database 12c

ORAchk / EXAchk questions.

Cloud control won’t start!

ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices.

ORA-56841: Master Diskmon cannot connect to a CELL.

Oracle BITAND Function with Examples.

 

SQL Server:

Natively Compiled Stored Procedures: What they are all about

Considerations around validation errors 41305 and 41325 on memory optimized tables with foreign keys

Taking Azure SQL Data Warehouse for a Test-Drive.

Persistent PowerShell: The PowerShell Profile.

SQL Server Always On Endpoint Encryption Algorithm Compatibility Error.

 

MySQL:

Fixing MySQL scalability problems with ProxySQL or thread pool.

Installing a Web, Email & MySQL Database Cluster on Debian 8.4 Jessie with ISPConfig 3.1

Planets9s – Download the new ClusterControl 1.3 for MySQL, MongoDB & PostgreSQL

AWS Aurora Benchmark – Choose the right tool for the job

Where is the MySQL 5.7 root password?

Categories: DBA Blogs

#IHateYourFace, a BizIntel Love Story

Rittman Mead Consulting - Wed, 2016-05-25 13:53
#IHateYourFace, a BizIntel Love Story

Sometimes love isn't fair

A teenager, just emerging from an awkward growing phase, tends to spew nonsensical permutations of varying cringe-worthy affections toward their love target.  The data points behind the intent are solid.  Let’s go Boolean now:

Intent of good = 1 ;

attracted to target = 1;

ability to provide for target = 1;

time available  = 1;

prior or current committed relationship = 0.

So here we are with all the solid intentions in the world and the variables all align to an optimal outcome.  Except one thing:

love target hates your face = 1;

Exit(0)

Assuming that this is a hierarchical value chain with face hating as the parent, we are at an impasse.

The above is not intended as an affront to teenagers spitting game at their prospective boo.  It is though, an analogy to the fantastic time and effort spent in integrating, transforming and presenting Business Intelligence to the enterprise with all of the proper best practices and technology.  Only to have the user subtly tell us that the information is “great but… I don’t know what I am looking at and I was looking for it to be in 3D” (AKA, I hate your face).  There have been some great books out there about visualization and we have read a LOT of them.  Best practices aside, visualization is about taste.  The love target above may well have a completely different opinion of facial beauty than Mr. or Ms. hottie-mc-toddy that sits next to you in your advanced Kafka course (seriously, it might happen).  Love at Kafka training happens, I swear it does, really and when it does, it involves unicorns and ends of rainbows.

So now I have to figure out my end user’s tastes in visualization?

Yep, accept and move on.  

You wouldn’t saddle up with a love interest if you hated their face.  Why would an end user sign up for something they don’t want to use?  Of course, you, being the BI practitioner that you are, know your BI stack up and down and are well aware of all of the available visualization options.  These end users don’t know what they are talking about…right?

—they don’t have to.  

Lots of the big BI players have been losing ground in the industry of viz because new competitors are building attractive viz capabilities.  The new guys in BI are going to the end users with something the end users want…pretty and flexible visualizations.  Both the viz provider and end user need not worry themselves with data integrity, governance, or how they got the data to start.

Welcome to Enterprise BI 2.0

What if I could use my existing enterprise platform and allow users to integrate more data sources, mash them up and then take an almost storytelling approach to BI?  You can, and the next releases from the big BI vendors are all focused on that.  Clearly at Rittman Mead we have an affinity for OBIEE, and with 12c the dream of viz has become an out-of-the-box reality.  Data visualization is core to new functionality with more on the way.  So my advice is to upgrade to 12c ASAP to get your new face on.

Rittman Mead has less invasive facelift-versus-replace options for viz on 12c and 11g.  The Visual Plug in Pack (VPP) extends the native charting in OBI into dynamic, attractive, Hollywood-level viz.  We also have a User Engagement offering that comes with the visual facelift all the girls at the salon are talking about.  Shoot me an email if you are interested in our streamlined OBI 12c upgrade service or VPP: Jason.davis@rittmanmead.com

Categories: BI & Warehousing

SharePlex Replication Between Two Instances On The Same Host

Pythian Group - Wed, 2016-05-25 10:12

Several days ago I was asked a question about SharePlex and had to verify the behaviour before providing the answer. I had one Linux VM with two databases and needed to set up replication between them. One of them (orcl) was a target 12c EE database while the second one was a source 10g SE. Accordingly, I had to use two different versions of SharePlex to mimic the behaviour in question, which meant two different SharePlex instances talking to each other on the same host machine. This works well with, for example, GoldenGate, where you just set up different ports for the manager processes. However, in SharePlex all the instances participating in a configuration should use the same port. The SharePlex documentation states:
“Important! The SharePlex port number must be the same one on all machines in the replication configuration so that they can communicate through TCP/IP connections.”

Of course you cannot use the same port on the same network interface for two independent SharePlex processes working from different homes. In addition, SharePlex does not ask you for a hostname either during installation or when you start it.
In my case I had 2 interfaces in different subnets on my VM, and I could use them for my replication. But how could I tell SharePlex to use one or the other? The answer was simple: use the “SP_SYS_HOST_NAME” parameter as an environment variable in your shell. Here is how I did that.
I added two new hostnames for those interfaces to my /etc/hosts file to be used for my SharePlex instances:

[root@sandbox ~]$ cat /etc/hosts | grep splex
10.177.130.58	splexhost
172.16.128.10	splexstor
[root@sandbox ~]$ 

I unzipped SharePlex 8.6.3 for Oracle 10 and prepared installation. By default the installer will pick up the hostname automatically and use it during installation.
Here is my hostname and it is not what I want to use for my first SharePlex instance:

[oracle@sandbox ~]$ hostname
sandbox.localdomain
[oracle@sandbox ~]$ ping sandbox.localdomain
PING sandbox.localdomain (127.0.0.1) 56(84) bytes of data.
64 bytes from sandbox.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from sandbox.localdomain (127.0.0.1): icmp_seq=2 ttl=64 time=0.061 ms
^C
--- sandbox.localdomain ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1468ms
rtt min/avg/max/mdev = 0.058/0.059/0.061/0.007 ms
[oracle@sandbox ~]$

What you need to do is set the environment variable SP_SYS_HOST_NAME. I wanted the first SharePlex instance listening and working on hostname splexhost using the default port 2100. I also had to define the proper Oracle home and Oracle database SID during installation. I used the standard Oracle utility oraenv to set up the Oracle variables. Here is how I started the installation for the source:

[oracle@sandbox ~]$ export SP_SYS_HOST_NAME=splexhost
[oracle@sandbox ~]$ . oraenv
ORACLE_SID = [test] ?
The /u01/app/oracle/product/10.2.0/sehome_1/bin/orabase binary does not exist
You can set ORACLE_BASE manually if it is required.
[oracle@sandbox ~]$ cd /u01/distr/SharePlex
[oracle@sandbox SharePlex]$ ll
total 96736
-rwxr-xr-x. 1 oracle oinstall 99056391 Jan 11 21:56 SharePlex-8.6.3-b171-oracle100-rh-40-amd64-m64.tpm
[oracle@sandbox SharePlex]$ ./SharePlex-8.6.3-b171-oracle100-rh-40-amd64-m64.tpm
Unpacking ..................................................................
  ..........................................................................

During the installation I provided the directories for program and variable files, the port number, and information about ORACLE_SID and ORACLE_HOME. Everything went smoothly. I finished the installation by running the “ora_setup” utility, which creates the necessary schema and objects in the source database.
For the target I used the same strategy, setting up the Oracle variables with “oraenv” and explicitly exporting SP_SYS_HOST_NAME=splexstor. The installation completed without any problems and I got two SharePlex instances installed in different directories.
To start a SharePlex home for a certain IP address and interface you need to explicitly set SP_SYS_HOST_NAME to the appropriate value. Let’s see how it’s done for the source.

[oracle@sandbox ~]$ export SP_SYS_HOST_NAME=splexhost
[oracle@sandbox ~]$ . oraenv
ORACLE_SID = [test] ?
The /u01/app/oracle/product/10.2.0/sehome_1/bin/orabase binary does not exist
You can set ORACLE_BASE manually if it is required.
[oracle@sandbox ~]$ cd /u01/sp10/bin
[oracle@sandbox bin]$ telnet splexhost 2100
Trying 10.177.130.58...
telnet: connect to address 10.177.130.58: Connection refused

[oracle@sandbox bin]$ nohup /u01/sp10/bin/sp_cop -usp10 &
[1] 2023
[oracle@sandbox bin]$ nohup: ignoring input and appending output to `nohup.out'

[oracle@sandbox bin]$ telnet splexhost 2100
Trying 10.177.130.58...
Connected to splexhost.
Escape character is '^]'.
^]
telnet> q
Connection closed.
[oracle@sandbox bin]$

For target you have to adjust your environment variables accordingly and do the same.

[oracle@sandbox ~]$ export SP_SYS_HOST_NAME=splexstor
[oracle@sandbox ~]$ . oraenv
ORACLE_SID = [test] ? orcl
The Oracle base has been set to /u01/app/oracle
[oracle@sandbox ~]$cd /u01/sp12/bin
[oracle@sandbox bin]$ nohup /u01/sp12/bin/sp_cop -usp12 &
[1] 2066
[oracle@sandbox bin]$ nohup: ignoring input and appending output to `nohup.out'

[oracle@sandbox bin]$ 

As a result we have two SharePlex instances running on the same host and talking to each other. Now we can create a sample replication. In the database schema “splex”, created by the “ora_setup” utility, we have the DEMO_SRC and DEMO_DST tables. SharePlex uses those tables for demo replication. We can use them too and see how it works in our case. We can either modify the default pre-created sample configuration “ORA_config” or create a new one. I created a new config “sample” on my source SharePlex and activated it:

sp_ctrl (splexhost:2100)> list config

File   Name                                         State       Datasource
--------------------------------------------------  ----------  ---------------
ORA_config                                          Inactive    o.SOURCE_SID
Last Modified At: 17-May-16 11:17    Size: 151

sp_ctrl (splexhost:2100)> create config sample

The command opens the default editor so you can write your configuration.
Here is what I put in my “sample” config.

datasource:o.test

#source tables      target tables           routing map

splex.demo_src      splex.demo_dest             splexstor@o.orcl

Now we can activate config.

sp_ctrl (splexhost:2100)> activate config sample

sp_ctrl (splexhost:2100)> list config

File   Name                                         State       Datasource
--------------------------------------------------  ----------  ---------------
ORA_config                                          Inactive    o.SOURCE_SID
Last Modified At: 17-May-16 11:17    Size: 151

sample                                              Active      o.test
Last Modified At: 17-May-16 11:30    Size: 134     Internal Name: .conf.1

sp_ctrl (splexhost:2100)>

Now we can see all the processes running :

On the source:

sp_ctrl (splexhost:2100)> lstatus

Detailed Status for splexhost
Process          State                             PID     Running   Since
---------------  ------------------------------  --------  --------------------
Cop              Running                             2023  17-May-16 11:24:39
Capture          Running                             2250  17-May-16 11:30:53
  Data/Host:   o.test
Read             Running                             2279  17-May-16 11:30:53
  Data/Host:   o.test
Export           Running                             2304  17-May-16 11:30:56
  Data/Host:   splexstor
  Queue Name:  splexhost
Cmd & Ctrl       Running                             2581  17-May-16 11:40:39
  Data/Host:   splexhost

On the target:

sp_ctrl (splexstor:2100)> lstatus

Detailed Status for splexstor
Process          State                             PID     Running   Since
---------------  ------------------------------  --------  --------------------
Cop              Running                             2066  17-May-16 11:26:23
Import           Running                             2305  17-May-16 11:30:56
  Data/Host:   splexhost
  Queue Name:  splexhost
Post             Running                             2306  17-May-16 11:30:56
  Data/Host:   o.test-o.orcl
  Queue Name:  splexhost
Cmd & Ctrl       Running                             2533  17-May-16 11:38:18
  Data/Host:   splexstor

Let’s insert a row on the source :

test>  insert into splex.demo_src values ('JIM', '8001 Irvine Center Drive', '949-754-8000');

1 row created.

test> commit;

Commit complete.

test>

And we can see the row was successfully replicated to target:

orcl> select * from splex.demo_dest;

NAME							     ADDRESS							  PHONE#
------------------------------------------------------------ ------------------------------------------------------------ ------------
JIM							     8001 Irvine Center Drive					  949-754-8000

orcl>

As you can see, we were able to use one box to replicate data between two different databases, using two different SharePlex installations. The idea was simple and clear from the start, but I couldn’t find enough information about the SP_SYS_HOST_NAME parameter in the installation guide beyond configuring it for a cluster installation, and the reference section of the documentation did not really cover it either.
It would be even better if we could set the parameter inside SharePlex itself and not have to think about environment variables, but unfortunately setting the parameter in the “paramdb” didn’t work for me. Even with the parameter there, you still need to set the environment variable SP_SYS_HOST_NAME=your_host_name for a non-default hostname. I hope this article may help somebody save a bit of time.

Categories: DBA Blogs

Maker Faire 2016

Oracle AppsLab - Wed, 2016-05-25 09:13

Another year, another amazing time at the Maker Faire.

I’ve attended my fair share of Maker Faires over the years, so the pyrotechnic sculptures, 3D printing masterpieces, and handmade artisan marketplaces were of no particular surprise. But somehow, every time I come around to the San Mateo fairgrounds, the Faire can’t help but be aggressively fresh, crazy, and novel. This year, a host of new and intriguing trends kept me on my toes as I ventured through the greatest show and tell on Earth.

Young makers came out in full force this year. Elementary school maker clubs showed off their circuit projects, middle schoolers explained how they built their little robots, and high school STEM programs presented their battle robots. It’s pleasing to see how Maker education has blossomed these past years, and how products and startups like LittleBits and Adafruit have made major concepts in electronics and programming so simple and inexpensive that any kid can pick them up and start exploring. Also wonderful is seeing young teams traveling out to the Bay Area from Texas, Oregon, and other states, a testament to the growth of the Maker movement beyond Silicon Valley.

Robot battle demo for kids

Speaking of young makers’ participation, Arduino creator Massimo Banzi talked about Arduino as an education tool for kids to play and tinker with, even though he never planned to make kids’ toys in his early years. The maker movement has awakened curious minds of all ages to start playing with electronics, making robots, and learning programming as a new language.

While the maker movement has made things very accessible to individuals, the spirit of creation and innovation has also had an impact on the large enterprise. On the “Maker Pro” stage, our GVP, Jeremy Ashley (@jrwashley), talked about new trends in large enterprise application design, and how the OAUX group is driving the change to make simpler, but more effective and more engaging, enterprise applications.

Jeremy talks on Maker Pro stage

Drones were also a trending topic this year, with a massive Drone Racing tent set up with events going on the whole weekend. Everything was being explored – new shapes for efficient and quick flight; new widgets and drone attachment modules; new methods of interaction with the drone. One team had developed a smart glove that responded to gyroscopic motion and gestures to control the flight of a quadcopter, and had the machine dance around him – an interesting and novel marriage of wearable tech and flight.

Flight of the bumble drone

Personally, I've got a soft spot for art and whimsy, and the Faire had whimsy by the gallon. The artistry of the creators from around the country and the globe can't be overstated.

Lotus boat

Blueberry muffin

Flying dragon blowing fire

Maker Faire never disappoints. We brought along friends who had never been to a Faire, and it's always fun to watch them get blown off their feet, literally and figuratively, the first time a flamethrower blasts out of the monolithic Crucible. Or to see their grins of delight when a cupcake-shaped racecar zooms past them… and another… and another. Or the spark of amazement when they witness some demo that's beyond any realm of imagination.

Electrifying experience


Managing servers and instances in the cloud

Pat Shuff - Wed, 2016-05-25 07:59
Managing servers and instances has been an ongoing issue since the introduction of the first computer. Recently, with the advent of virtualization, the idea of a management console to control which processors are running which services and what storage is allocated to which operating system has gained popularity. Many people are familiar with VMware vSphere, where you get a view of a server and can see the virtual images deployed on it. We can see how well the resources (memory, cpu, and disk) are being utilized. We can allocate more or fewer resources, since this is a dynamic allocation, and make sure that we are not over-allocating resources and wasting them or under-allocating them and causing applications to run slower.

In this example we can see that we have two processors, 2 GB of memory, and just under 300 GB of disk on this computer. We have five virtual machines running on this computer and can dive into each one to look at which operating system is installed and how the limited resources are allocated and utilized. What we can't see is what applications are installed and how those applications are running. For example, is the Windows Home Server 2011 machine running an Apache web server, and how many hits did the web server get in the past four days? Monitoring tools beg the question of what you are monitoring. If you are managing limited resources and making sure that you have not over- or under-allocated them, tools like vSphere are excellent. Unfortunately, you will need other tools to dive deeper. EMC, for example, has a storage manager that lets you look not only at the logical unit level but also at the controller and disk level. It understands VMware and lets you look at how disks are related to virtualization engines and how they are consuming resources.

Again, this is a very good tool for looking at how well a disk is performing, how well data is laid out across spindles, and how efficiently data is being transmitted between disk and server. We can see hot spots. We can see disks that are over- and under-utilized. We can manage a scarce resource and make sure that it is properly utilized.

When we talk about monitoring we need to shift our thought process. Yes, it is important to manage compute, memory, and storage resources, but it is also important to realize that these resources are commodities. If we run low, we get more. If we use too much, we are wasting resources. We should be able to automate allocation of resources and size them up or down without manual monitoring. What we are really interested in is how well our company is running. If we are a university, we might be interested in the latency of delivering online video classes. We might be interested in how many classes are being added to a student schedule during registration. If we are a ticket retailer, we might be interested in how many tickets were requested and paid for on a minute by minute basis. Note that we are not talking about how well a disk drive is allocated or whether we have enough processors allocated to a virtual machine; we are talking in business terms. We are tying revenue generating services back to computer resources and trying to figure out what is causing a problem.

In the online video classroom example, we might have our processors allocated properly, storage tuned to the last IOP, and memory allocated to buffer data and reduce disk reads. But if we are on the same network as the athletic department, our basketball team makes it to the elite eight during March Madness, and the athletic department live streams the game on the same network as our classroom servers, our classes will be offline due to the demand to watch the basketball game. Tools from EMC and VMware will show that everything is working fine and life is good. Meanwhile, the help desk is getting calls from students off campus who can't access their assignments during midterms and whose Thursday class is not available. What we need is a monitoring system that looks at systems and services, not just processor and disk resources. We would like to look at the video distribution system, be able to dive into the disk, network, or processor, see where the bottleneck is, and fix it quickly.

Oracle released a tool years ago called Enterprise Manager. The tool started out as a database monitoring tool that allowed you to dive into SQL calls and figure out why a statement was taking longer than necessary. With acquisitions of companies like BEA and Sun Microsystems, the tool expanded to look at how Java was performing inside a WebLogic server and how the disk drives serving up requests for the database and WebLogic server were performing. Acquisitions of companies like JD Edwards and PeopleSoft drove the monitoring tools in the opposite direction, and screens showing how many purchase orders were being processed on an hourly basis were suddenly available. You could look at what the bottleneck was in closing your books for the end of month reconciliation. Was it a manual process waiting on a report to drop into a directory, or was it a SQL statement that was taking minutes rather than seconds to complete? You could start with a process like purchase orders and dive into the database to see if a table was reaching its storage limits, or figure out that someone recently patched the database, which caused an index not to cover a newly created column; searches now go against that column, so select statements are doing a full table scan rather than using an index to return answers more quickly. Adding more storage in this case is a waste of time. Yes, we are running out of storage on a table, but the real issue is that we need to re-index the database or use a new SQL execution plan. Below is a screen shot of how well a database is performing, with links to look at all the sub-components of the database.

Books have been written on Enterprise Manager. We are not going to cover everything in this blog or try to make you an expert on the subject.

There are also a number of blogs related to Enterprise Manager.

This is a partial list of blogs returned by a Google search. I am sure I missed a few. Note that the list of books and blogs is not a short list. There are classes offered by Oracle University that you can take virtually or in a classroom (both cost money).

The way that Oracle Enterprise Manager is paid for is simple. The base system is free and you pay for the options that you want to use. Unfortunately, the Technology Price Guide is not very clear about what is and is not Enterprise Manager and what is an option on the database. For example, on page 7, most of the management packs are listed. If you want diagnostics for the database you will need to license your database, not Enterprise Manager, at $7,500 per processor. You can also license at $150 per named user, but the licensing metric for your database needs to match the licensing for your management pack. You could have a two processor license for production and a 25 named user license for development and testing, so you will need to blend these licenses into Enterprise Manager with the management packs. Diagnostics is particularly confusing because you enable or disable this feature in Enterprise Manager and not in the database. The telemetry data is being collected for the database, but the reporting on the results of the analysis is not being done in the database. You could turn on the reporting in Enterprise Manager without involving the DBA, thus incurring an additional license fee that you had not paid for. There is no license key or email sent to Oracle saying that you enabled the license; it is a simple checkbox in Enterprise Manager that turns on diagnostic reporting. In recent versions a warning screen pops up telling you that this is not a free feature. In OEM 10g the feature was turned on by default and you had to turn it off; this has changed in recent releases. If you try to turn this feature on when connecting to an Enterprise Edition database in the Oracle Public Cloud you will get a feature not available message. You need to go with the High Performance or Extreme Performance edition of the database to get diagnostics enabled.

There are also management packs for Oracle Applications, and the pricing for these products can be found in the Oracle Applications Price List. You need to search for the word "packs" to find the price of the management packs in this list. You can get a list of all the management packs from the Oracle Tech Network page for Enterprise Manager.

It is important to note that the Enterprise Manager that runs in your data center monitoring your servers and Oracle hardware and software products is the same tool that you can use to monitor and manage PaaS and IaaS resources in the Oracle Public Cloud. You can connect to the instance in the cloud using ssh and read the telemetry from the cloud instance as if it were installed on one of your servers. You can use extensions to the latest version of Enterprise Manager, 13c, to clone a pluggable database instance from your on site installation to a cloud instance.

You can also set up reporting and self-service requests so that end users can ask for a new service to be provisioned either on site or in the cloud. Below is a screen shot of how to do this for a database. We could do something similar for a WebLogic server, an Apache web server, a PeopleSoft instance for dev/test, or any layer of the Oracle stack.

In summary, selection of a management tool is important. Tools are good to understand and use properly. At some point you need to step back and ask what questions you need answers to. Am I diving too deep, trying to optimize something that is not worth deep analysis? Could I automate this and not have to monitor it at all? If I run out of processing power, does it make sense to automatically scale up the number of processors? Should I scale out by spinning up more web servers? Do I need to re-architect my network topology to isolate disk traffic from client traffic? If I generate a report, who will consume the results? Is the report for someone in IT? Purchasing? The process owner? Is it a technology report or a financial report? Products like Enterprise Manager allow you to generate all of these reports using different management extensions. My suggestion is to look at some of the introductory videos on the Oracle Tech Network to get an introduction to the problem that you are trying to solve, then figure out how much it will cost to measure what is important to you.

Changing the page title in Fluid at run-time

Javier Delgado - Wed, 2016-05-25 07:26
One of our customers asked us to implement nested landing pages, in which some tiles would open a second landing page, and eventually a third one, and so on. Let me illustrate the use case with some screenshots (I apologise that they are in Spanish, but they should be useful anyway). This is how the main landing page would look:


By clicking the "Formación y Desarrollo" tile, a new landing page will be displayed:


And eventually, you can click a tile in this landing page which opens a third one. Let's pick "Formación Acceso al Puesto" for instance:


Unfortunately, the customer could not take advantage of the Master-Detail Framework, as they are on PeopleTools 8.54 and this functionality is only available in release 8.55 (which is more complex to upgrade to, as Crystal Reports is no longer supported).

So, we decided to build a custom component for our nested landing pages. The component would be called again and again with different URL parameters, in such a way that the history could be maintained and the user could go back to the previous step instead of going all the way back to the top landing page.

As we were reusing the same component, we needed to adjust the page title in PeopleCode. This involved not only updating the page title itself, but also making sure the back button showed the title of the previous page.
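Although this post does not show how the component works out which landing page level it is rendering, a minimal sketch of the idea, placed in the page Activate event, could look like the following. The URL parameter name, record, and field names here are purely hypothetical; they illustrate the pattern, not the customer's actual design.

/* Sketch only: the same component is opened repeatedly with a different
   URL parameter identifying the landing page level to render.
   LANDING_ID, PS_Z_LANDING_PAGES and DESCR are hypothetical names. */
Local string &landingId, &title;

&landingId = %Request.GetParameter("LANDING_ID");
If None(&landingId) Then
   &landingId = "ROOT"; /* default to the top-level landing page */
End-If;

/* Derive the page title for this level from a setup record. */
SQLExec("SELECT DESCR FROM PS_Z_LANDING_PAGES WHERE LANDING_ID = :1", &landingId, &title);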


Changing the Navigation Bar Title
By default, the title is set to the component label in the menu. Luckily, there is a good number of examples in the standard functionality in which the title is set at run time, so this one was not particularly difficult to implement. The code that does the trick is the following:

PTLAYOUT.PAGETITLE_GROUPBOX.Label = &title;

This code needs to be placed in the page Activate event. If placed anywhere else, the standard PT_HEADERPAGE Activate code will reset the title back to the default one.

Another option is to create a custom header page and add it to the component, but at least from the back button functionality point of view, it did not seem an easy solution.
Changing the Back Button Title
This one was trickier. PeopleSoft maintains a navigation history stack in JavaScript, which is populated with the default page title at load time using the following JavaScript call:

AddToHistory('Cns Navgrppage', '', '', 'CNS_NAVGRPPAGE', 1, 27);

So, in order to keep the right page title in the navigation stack, we needed to update it. Fortunately, there is another JavaScript function provided by PeopleSoft called UpdateHistory. The PeopleCode function AddOnLoadScript is particularly helpful when trying to run JavaScript functions after the page is loaded. This is how we implemented the call, again in the page Activate PeopleCode event:

AddOnLoadScript("UpdateHistory('" | &title | "', undefined, undefined, undefined, 1);");
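Putting both pieces together, everything can live in the page Activate event. The sketch below is only an illustration of how the two calls might be combined, with single quotes in &title escaped so the generated JavaScript string literal is not broken; the escaping step is an assumption of mine, not something described in the original implementation.

/* Sketch only: combining both title updates in the page Activate event.
   &title is assumed to have been derived earlier, e.g. from a URL parameter. */
Local string &jsTitle;

/* Update the navigation bar title. */
PTLAYOUT.PAGETITLE_GROUPBOX.Label = &title;

/* Escape single quotes so the title cannot break the JavaScript literal. */
&jsTitle = Substitute(&title, "'", "\'");

/* Refresh the history entry that was pushed onto the stack at load time. */
AddOnLoadScript("UpdateHistory('" | &jsTitle | "', undefined, undefined, undefined, 1);");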








