
Feed aggregator

At last APEX_PAGE.GET_URL!

Tony Andrews - 6 hours 38 min ago
I have just discovered (thanks to a tweet by @jeffreykemp) that APEX 5.0 now includes a function called APEX_PAGE.GET_URL. About time! I've been using a home-made version for years, so if I want to create a URL that redirects back to the same page with a request I can just write:

    return my_apex_utils.fp (p_request=>'MYREQUEST');

One difference is that the one I use has a…
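
For reference, a minimal sketch of the new call (APEX 5.0; when the other parameters are omitted it builds a URL for the current application, page and session):

declare
  l_url varchar2(4000);
begin
  -- same idea as the my_apex_utils.fp call above: a URL back to the
  -- current page carrying the MYREQUEST request
  l_url := apex_page.get_url( p_request => 'MYREQUEST' );
end;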

SQL Developer connection to DBaaS

Pat Shuff - 9 hours 22 min ago
Today we are going to connect to our database using SQL Developer. We could connect using sqlplus with a remote command, but instead we are going to use a graphical tool to connect to our database in the cloud. It is important to note that this is the same tool that is used to connect to our on premise database. We can execute SQL commands, look at the status of the database, clone pluggable databases from one service to another, and generally manipulate and manage the database with command-line features or wizards.

SQL Developer is a free integrated development environment that simplifies the development and management of Oracle Database in both traditional and Cloud deployments. SQL Developer offers complete end-to-end development of your PL/SQL applications, a worksheet for running queries and scripts, a DBA console for managing the database, a reports interface, a complete data modeling solution, and a migration platform for moving your 3rd party databases to Oracle. There are a few books that have been written about this product.

As well as books, there are several blogs worth looking at.

We are not going to dive deep into SQL Developer but rather introduce a couple of concepts for monitoring our database in the cloud. We are running version 4.1.3 on a Windows desktop. We actually are cheating a little bit and running it on a Windows 2012 Server that is provisioned into IaaS in the Oracle Cloud. It makes a good scratch space for demos and development hands on labs. When we connect we can connect to the public ip address of our database on port 1521 or we can create an ssh tunnel and connect to localhost on port 1521. We will first connect via an ssh tunnel. To start, we need to log into our database service and figure out what the ip address is for the system we provisioned. For our system we notice that the ip address is 129.152.150.120.

We are going to connect first with ip tunneling through putty. We launch putty, enter the ip address and the ssh keys, and open up port 1521 as a tunnel. We open a connection, and all connections to port 1521 on localhost will be forwarded to our cloud service at the ip address specified. Note that this solution works if we have one database that we are connecting to. If we have two database instances in the cloud we will need to map a different local port number to port 1521, or open up the ports to the internet, which we will talk about later. We need to keep this shell active and open, but we can iconify the window.
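
For reference, the same tunnel can be built without putty; a minimal sketch using OpenSSH, where privateKey is a placeholder for the private key file paired with the key uploaded at provisioning time:

# forward local port 1521 to the listener on the database service,
# authenticating as opc with the private key for the service
ssh -i privateKey -N -L 1521:localhost:1521 opc@129.152.150.120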

In SQL Developer we can now create a new connection to our database. This is done by clicking on the green plus sign in the top right of the screen. This opens a dialog window to define the connection to the database. We will call this connection prs12cHP, which is the name of our service in the cloud. We are going to connect as sys, so we need to select the advanced connection to connect as sysdba. It is important to note that you cannot do this with Amazon RDS if you provision an Oracle database in the Amazon PaaS. Amazon does not allow you to log in as sys or system and does not give you sysdba privileges. If you want sysdba access you will need to deploy Oracle into Amazon EC2 to get it. Once we define our connection to localhost, port 1521, sys as sysdba, and an SID of ORCL, we can test the connection and accept it once the test is successful. Note that we can execute commands in the right window and look at things like what version of the database we are running. In this example we are running the High Performance Edition, so we can use the diagnostics and tuning extensions from SQL Developer.
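
The connection can also be sanity-checked from the command line through the same tunnel. A sketch only, assuming ORCL is registered as a service name (the dialog above used it as the SID):

# run on the desktop where the tunnel from the previous step is active
sqlplus sys@//localhost:1521/ORCL as sysdba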

There is a new DBA feature in the latest release of SQL Developer. We can launch a navigation menu to add our cloud database by going to the View ... DBA option at the top of the screen. This gives us another green plus sign so that we can add the database and expose typical management views and functions. Two things of note here are simple exposure to pluggable databases and a clone option associated with that exposure.
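
The clone option ultimately comes down to a CREATE PLUGGABLE DATABASE statement; a rough sketch of the SQL involved, where the PDB names and file paths are placeholders and, on 12.1, the source PDB must be open read only:

-- clone an existing PDB within the same container database
create pluggable database PDB2 from PDB1
  file_name_convert = ('/u02/app/oracle/oradata/ORCL/PDB1/',
                       '/u02/app/oracle/oradata/ORCL/PDB2/');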

We can do other things like look at backup jobs, look at tablespace allocation and location, and look at which users are authorized and active. This is not a replacement for Enterprise Manager because it looks at immediate rather than historical data.

Now that we have connected through a tunnel, let's look at another option. We can open up port 1521 on the database service and connect straight to the ip address. This method is not recommended because, with a demo or evaluation account, it opens up your database to all ip addresses on the internet. You can whitelist ip addresses, use a VPN, or limit by subnet which systems the listener answers. This is done through the compute service management interface under the networking tab. We need to enable the dblistener rule for our database service. Once we do this we can connect SQL Developer to the database using the ip address of the database service. We might need to do this if we are connecting to multiple cloud servers and don't want to create a tunnel for each of them.

In summary, we have connected to our database service using SQL Developer. This is the same tool that we use to connect to databases in our data center. We can connect the same way that we normally do via an ip address, or tunnel to keep the server in the cloud a little more secure. We noted the differences between the Amazon RDS and Oracle DBaaS options and provided a workaround with EC2 or Azure Compute as an alternative. It is important to remember the differences between PaaS features and IaaS features when it comes time to calculate the cost of services. PaaS gives you expanded features like automated backup and size up/down, which we will look at next week.

Embedded mode limitations for Production systems

Anthony Shorten - 14 hours 56 min ago

In most implementations of Oracle Utilities products the installer creates an embedded mode installation. This is called embedded because the domain configuration is embedded in the application, which is ideal for demonstration and development environments since the default setup is enough for those types of activities.

Over time, though, customers and partners will want to use more and more of the Oracle WebLogic domain facilities, including advanced setups like multiple servers, clusters, and advanced security configurations. Here are a few important things to remember about embedded mode:

  • The embedded mode domain setup is fixed: a single server houses both the product and the administration server, with the internal basic security setup. In non-production this is reasonable, as the requirements for the environment are simple.
  • The domain file (config.xml) is generated by the product from a template that assumes an embedded-only installation.
  • When implementations need additional requirements within the domain there are three alternatives:
    • Make the changes in the domain from the administration console and then adopt the new config.xml generated by the console as a custom template. This is necessary because whenever Oracle delivers ANY patch or upgrade (or whenever you make configuration changes), initialSetup[.sh] must be run to add the patch, upgrade or configuration to the product, and that resets the file back to the factory-provided template unless you are using a custom template. Basically, if you decide to use this option and do not implement a custom template, you will lose your changes each time.
    • In later versions of OUAF we introduced user exits, which allow implementations to add to the configuration using XML snippets. This does require you to understand the configuration file being manipulated; we have sprinkled user exits all over the configuration files to allow extensions. Using this method, you make the changes to the domain, examine the changes to the domain file, decide which user exit is available to reflect that change, and add the relevant XML snippet. Again, you must understand the configuration file to make sure you do not corrupt the domain.
    • The easiest option is to migrate to native mode. This basically removes the embedded nature of the domain and houses it within Oracle WebLogic. This is explained in the Native Installation whitepaper (Doc Id: 1544969.1) available from My Oracle Support.

Native installations allow you to use the full facilities of Oracle WebLogic without the restrictions of embedded mode. The advantages of native installations are the following:

  • The domain can be setup according to your company standards.
  • You can implement clusters and multiple servers, including dynamic clustering.
  • You can use the security features of Oracle WebLogic to implement complex security setups including SSO solutions.
  • You can lay out the architecture according to your volumes to manage within your SLAs.
  • You can implement JDBC connection pooling, Work Managers, advanced diagnostics etc.

Oracle recommends that native installations be used for environments where you need to take advantage of the domain facilities. Embedded mode should only be used within the restrictions it poses.

Comparing Fully-Online vs Mixed-Course Enrollment Data

Michael Feldstein - 15 hours 11 min ago

By Phil Hill

Mike Caulfield wrote a post yesterday about a new Blackboard report on design findings regarding online students. The focus of Mike’s post was that people often assume that the norm for an “online” student is taking all courses online, when in fact it is more common for students to take some courses online and some face-to-face (what Mike calls “Mixed-Course”). This distinction is important:

What shocks some people reading this, I think, is that online students would have face-to-face courses as a comparison or option, or that they would be consciously choosing between online and face-to-face in the course of a semester. But this is the norm now at state universities and community colleges; it’s only a secret to people not in those sorts of environments.

I agree that this is an important topic, so let’s look at the data in more detail (and for those wondering if you can connect to Tableau Public to edit online data visualizations while flying, the answer is yes if you’re patient). The best source of information is the IPEDS database, with the most recent data for Fall 2014. WCET put out an excellent report analyzing the distance education data in IPEDS – more on that later. Both IPEDS and WCET use the term “Some but not all” in the same manner as “Mixed-Course”. Based on overall data combining undergraduate and graduate students, Mike is right that more students are Mixed-Course than they are Fully-Online – 2.926 million vs. 2.824 million.

Fully Online vs Mix-and-match table

But this comparison varies by sector and by degree type as you can see below.

Fully-Online vs Mixed-Course

Some notes:

  • For graduate students in all sectors, Fully-Online is more common than Mixed-Course;
  • For the private not-for-profit and for-profit sectors, Fully-Online is more common than Mixed-Course;
  • The predominant case where Mixed-Course is most common is for public institutions – both 4-year and 2-year – for undergraduate degrees (as Mike pointed out); and
  • Due to the large size of those two public sectors – combined they represent 75% of all enrollments in the US – the overall numbers slightly favor Mixed-Course.

What about growth measures? IPEDS only began collecting this level of distance education enrollment data for Fall 2012, and I have worked with WCET to show some data issues in IPEDS, particularly for Fall 2012, so be aware of some noise in these measurements. However, the WCET report shows overall growth trends, and Fully-Online (exclusively DE) is actually growing slightly faster than Mixed-Course – 225 thousand vs. 178 thousand. Note that WCET combined public 4-year and public 2-year into one bucket.

Some but not all growth

Exclusive DE growth

If you’d like to explore the data in more detail, see this interactive chart.

The post Comparing Fully-Online vs Mixed-Course Enrollment Data appeared first on e-Literate.

Crowdsourced software development experiment

Dimitri Gielis - Thu, 2016-05-26 15:30
A few days ago I got an email about an experiment in how to program with the crowd.
I hadn't really heard about it before, but found it an interesting idea. In this experiment people will perform microtasks (10-minute tasks) as members of the crowd. People don't know each other, but will collaborate. The system distributes the work and supplies instructions. The challenge is in creating quality code that meets the specifications.
Job is still searching for some people to be part of the experiment, so I thought I'd put it on my blog; in case you're interested, you'll find more details below, including how to contact him.


Categories: Development

Understanding MySQL Fabric Faulty Server Detection

Pythian Group - Thu, 2016-05-26 12:39

A while ago I found myself analyzing a MySQL Fabric installation to understand why a group member was occasionally being marked as FAULTY even when the server was up and running and no failures were observed.

         
                         server_uuid     address  status       mode weight
------------------------------------ ----------- ------- ---------- ------
ab0b0653-6121-11c5-55a0-007543445454 mysql1:3306 PRIMARY READ_WRITE    1.0
f34dd331-2432-11f4-a2d3-006754678533 mysql2:3306 FAULTY  READ_ONLY     1.0

 

Upon reviewing mysqlfabric logs, I found the following warnings were being logged from time to time:

[WARNING] 1442221217.920115 - FailureDetector(xc_grp_1) - Server (f34dd331-2432-11f4-a2d3-006754678533) in group (xc_grp_1) is unreachable

 

Since I was not clear under which circumstances a server is marked as FAULTY, I decided to review MySQL Fabric code (Python) to better understand the process.

The module responsible for printing this message is failure_detection.py and, more specifically, the _run method belonging to the FailureDetector class. This method loops through every server in a group and attempts a connection to the MySQL instance running on that node. The MySQLServer.is_alive method (mysql/fabric/server.py) is called for this purpose.

Before reviewing the failure detection process, we first need to know that there are four MySQL fabric parameters that will affect when a server is considered unstable or faulty:

DETECTION_TIMEOUT
DETECTION_INTERVAL
DETECTIONS
FAILOVER_INTERVAL

 

Based on the above variables, the logic followed by FailureDetector._run() to mark a server as FAULTY is the following:

1) Every {DETECTION_INTERVAL/DETECTIONS} seconds, a connection against each server in the group is attempted with a timeout equal to DETECTION_TIMEOUT

2) If DETECTION_TIMEOUT is exceeded, the observed message is logged and a counter incremented

3) When this counter reaches DETECTIONS, the server is marked as “unstable”, and if the last time the master changed was more than FAILOVER_INTERVAL seconds ago, the server is marked as FAULTY (a simplified sketch of this logic follows)
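
To make the interaction of these four parameters easier to see, here is a simplified Python sketch of that loop. It is not the actual Fabric code; the server objects, their is_alive() method and the status attribute are stand-ins for what mysql/fabric implements:

import time

DETECTION_INTERVAL = 6      # seconds between full detection rounds
DETECTIONS = 3              # failed checks before a server is considered unstable
DETECTION_TIMEOUT = 1       # seconds allowed for each connection attempt
FAILOVER_INTERVAL = 0       # minimum seconds since the last master change

def failure_detector(group, last_master_change):
    # group: list of server objects; last_master_change: epoch seconds of last promotion
    failures = dict((server, 0) for server in group)
    while True:
        for server in group:
            # stand-in for MySQLServer.is_alive() with DETECTION_TIMEOUT applied
            if server.is_alive(timeout=DETECTION_TIMEOUT):
                failures[server] = 0
            else:
                failures[server] += 1          # the logged warning corresponds to this point
                if failures[server] >= DETECTIONS:
                    # unstable; promote to FAULTY only if the master has been stable long enough
                    if time.time() - last_master_change > FAILOVER_INTERVAL:
                        server.status = 'FAULTY'
        time.sleep(float(DETECTION_INTERVAL) / DETECTIONS)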

With a better understanding of the logic followed by MySQL fabric to detect faulty nodes, I went to the configuration file to check the existing values for each of the parameters:

DETECTIONS=3
DETECTION_TIMEOUT=1
FAILOVER_INTERVAL=0
DETECTION_INTERVAL=6

From the values above we can notice that each group will be polled every 2 seconds (DETECTION_INTERVAL/DETECTIONS) and that the monitored server should respond within a second for the test to be considered successful.

On high concurrency nodes, or nodes under heavy load, a high polling frequency combined with tight timeouts could cause the servers to be marked as FAULTY just because the connection attempt would not be completed or processed (in the case of high connection rates or saturated network interfaces) before the timeout occurs.

Also, having FAILOVER_INTERVAL reduced to 0 will cause the server to be marked as FAULTY even if a failover has just occurred.

A less aggressive configuration would be more appropriate for heavily loaded environments:

DETECTIONS=3 -> It's Ok
DETECTION_TIMEOUT -> 5 TO 8
FAILOVER_INTERVAL -> 1200
DETECTION_INTERVAL -> 45 TO 60

Conclusion

As with any other database clustering solution that relies on a database connection to test node status, situations where the database server would take longer to respond should also be considered. The polling frequency should be adjusted so the detection window is within an acceptable range, but the rate of monitoring connections generated is also kept to the minimum. Check timeouts should also be adjusted to avoid false positives caused by the server not being able to respond in a timely manner.

 

Categories: DBA Blogs

INSERT ALL caveat

Dominic Brooks - Thu, 2016-05-26 10:33

Why you might want to think twice about using INSERT ALL.

One of those things I knew and then forgot.

So, let’s say you’ve got three tables or a partitioned table or something like that.

Let’s use regional tables for simplicity.

drop table t1_r1;
drop table t1_r2;
drop table t1_r3;

create table t1_r1
(col1 varchar2(2) not null
,col2 number not null
,check( col1 in ('R1')));

create table t1_r2
(col1 varchar2(2) not null
,col2 number not null
,check( col1 in ('R2')));

create table t1_r3
(col1 varchar2(2) not null
,col2 number not null
,check( col1 in ('R3')));

insert into t1_r1 values ('R1',1);
insert into t1_r2 values ('R2',1);
insert into t1_r3 values ('R3',1);

commit;

And you want a routine that will insert into one of those tables depending on region.

And you’re a simple fellow, so you go with an IF statement:

create or replace procedure p1 (
  col1 in varchar2, 
  col2 in number
)
as
begin
  if col1 = 'R1'
  then
      insert into t1_r1 values(col1,col2);
  elsif col1 = 'R2'
  then
      insert into t1_r2 values(col1,col2);
  else 
      insert into t1_r3 values(col1,col2);
  end if;
end p1;
/

Procedure P1 compiled

And then in the same session you run this uncommitted:

exec p1('R1',2);

PL/SQL procedure successfully completed.

And then in another session you decide to truncate table T1_R3:

truncate table t1_r3;

Table T1_R3 truncated.

No problem.
None was expected.

However…

Let’s say that we decide to tidy up that procedure and get rid of some of the repetition by using an INSERT ALL statement.
I will use a standalone sql statement just to demonstrate a further minor aspect rather than using a procedure with a bound parameter.

insert all
when col1 = 'R1' then into t1_r1
when col1 = 'R2' then into t1_r2
when col1 = 'R3' then into t1_r3
select 'R1' col1,2 col2
from dual;

1 row inserted.

Let’s revisit the truncate:

truncate table t1_r3;

SQL Error: ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
00054. 00000 -  "resource busy and acquire with NOWAIT specified or timeout expired"
*Cause:    Interested resource is busy.
*Action:   Retry if necessary or increase timeout.

TM share locks from the INSERT ALL on all three possible targets prevent the TRUNCATE.
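
If you want to see those table-level locks for yourself, run something like the following from a third session while the INSERT ALL is still uncommitted (a simple join of V$LOCK to DBA_OBJECTS; it needs the usual SELECT privileges on those views):

select l.sid, o.object_name, l.lmode
from   v$lock l
       join dba_objects o
       on   o.object_id = l.id1
where  l.type = 'TM'
and    o.object_name in ('T1_R1','T1_R2','T1_R3');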

So, a simple/simplistic illustration of why you might want to think twice about whether INSERT ALL is the best feature for your use case, based on a real life problem.


Tracefile Automation – Simplify Your Troubleshooting Tasks

Pythian Group - Thu, 2016-05-26 09:06

Here’s a common Oracle troubleshooting scenario when a SQL statement needs tuning and/or troubleshooting:

  • log on to dev server
  • connect to database (on a different server)
  • run the SQL statement with 10046 tracing enabled
  • ssh to the database server
  • copy the trace file back to work environment
  • process the trace file.

 

All of this takes time. Granted, not a great deal of time, but tuning is an iterative process and so these steps will be performed multiple times. Not only are these steps a productivity killer, but they are repetitive and annoying. No one wants to keep running the same manual command over and over.

This task is ripe for some simple automation.

If both the client and database servers are some form of Unix, automating these tasks is straightforward.

Please note that these scripts require an 11g or later version of the Oracle database. These scripts are dependent on the v$diag_info view to retrieve the tracefile name. While these scripts could be made to work on 10g databases, that is left as an exercise for the reader.

Step by Step

To simplify the process it can be broken down into steps.

 

1. Reconnect

The first step is to create a new connection each time the SQL is executed. Doing so ensures the database session gets a new tracefile, as we want each execution to be isolated.

-- reconnect.sql

connect jkstill/XXXX@oravm

 

2. Get the Tracefile hostname, owner and filename

Oracle provides all the information needed.

In addition the script will set the 10046 event, run the SQL of interest and then disable the 10046 event.

Following is a snippet from the tracefile_identifier_demo.sql script.

 


-- column variables to capture host, owner and tracefile name
col tracehost new_value tracehost noprint
col traceowner new_value traceowner noprint
col tracefile new_value tracefile noprint

set term off head off feed off

-- get oracle owner
select username traceowner from v$process where pname = 'PMON';

-- get host name
select host_name tracehost from v$instance;

-- set tracefile identifier
alter session set tracefile_identifier = 'MYTRACEFILE';

select value tracefile from v$diag_info where name = 'Default Trace File';

set term on head on feed on

-- do your tracing here
alter session set events '10046 trace name context forever, level 12';

-- run your SQL here
@@sql2trace

alter session set events '10046 trace name context off';

 

In this case sql2trace.sql is a simple SELECT from a test table.  All of the scripts used here appear in Github as mentioned at the end of this article.

 

3. Process the Tracefile

Now that the tracefile has been created, it is time to retrieve it.

The following script scp.sql is called from tracefile_identifier_demo.sql.

 


col scp_src new_value scp_src noprint
col scp_target new_value scp_target noprint

set term off feed off verify off echo off

select '&&1' scp_src from dual;
select '&&2' scp_target from dual;

set feed on term on verify on

--disconnect

host scp &&scp_src &&scp_target

Following is an example putting it all together in tracefile_identifier_demo.sql.

 

SQL> @tracefile_identifier_demo
Connected.

1 row selected.


PRODUCT                        VERSION              STATUS
------------------------------ -------------------- --------------------
NLSRTL                         12.1.0.2.0           Production
Oracle Database 12c Enterprise 12.1.0.2.0           64bit Production
 Edition

PL/SQL                         12.1.0.2.0           Production
TNS for Linux:                 12.1.0.2.0           Production

Data Base
------------------------------
P1.JKS.COM

INSTANCE_NAME        HOST_NAME                      CURRDATE
-------------------- ------------------------------ ----------------------
js122a1              ora12c102rac01.jks.com         2016-05-23 16:38:11

STARTUP
--------------------
04/02/2016 11:22:12


Session altered.

Elapsed: 00:00:00.00

OWNER        OBJECT NAME                     OBJECT_ID OBJECT_TYPE             CREATED
------------ ------------------------------ ---------- ----------------------- -------------------
SYS          OLAP_EXPRESSION                     18200 OPERATOR                2016-01-07 21:46:54
SYS          OLAP_EXPRESSION_BOOL                18206 OPERATOR                2016-01-07 21:46:54
SYS          OLAP_EXPRESSION_DATE                18204 OPERATOR                2016-01-07 21:46:54
SYS          OLAP_EXPRESSION_TEXT                18202 OPERATOR                2016-01-07 21:46:54
SYS          XMLSEQUENCE                          6379 OPERATOR                2016-01-07 21:41:25
SYS          XQSEQUENCE                           6380 OPERATOR                2016-01-07 21:41:25
SYS          XQWINDOWSEQUENCE                     6393 OPERATOR                2016-01-07 21:41:25

7 rows selected.

Elapsed: 00:00:00.00

Session altered.

Elapsed: 00:00:00.00

js122a1_ora_1725_MYTRACEFILE.trc                                                                                                                                                            100% 3014     2.9KB/s   00:00

SQL> host ls -l js122a1_ora_1725_MYTRACEFILE.trc
-rw-r----- 1 jkstill dba 3014 May 23 16:38 js122a1_ora_1725_MYTRACEFILE.trc

But Wait, There’s More!

This demo shows you how to automate the retrieval of the trace file. But why stop there?  The processing of the file can be modified as well.

Really, it isn’t even necessary to copy the trace file over, as the content can be retrieved and piped to your favorite command.  The script mrskew.sql for instance uses ssh to cat the tracefile, and then pipes the contents to the Method R utility, mrskew.  Note: mrskew is a commercial utility, not open source software.

 

-- mrskew.sql

col ssh_target new_value ssh_target noprint
col scp_filename new_value scp_filename noprint

set term off feed off verify off echo off

select '&&1' ssh_target from dual;
select '&&2' scp_filename from dual;

set feed on term on verify on

--disconnect

host ssh &&ssh_target 'cat &&scp_filename' | mrskew

 

Following is another execution of tracefile_identifier_demo.sql, this time piping the output to mrskew. Only the final part of the output is shown below.

 

...

Elapsed: 00:00:00.01

CALL-NAME                    DURATION       %  CALLS      MEAN       MIN       MAX
---------------------------  --------  ------  -----  --------  --------  --------
PARSE                        0.002000   33.1%      2  0.001000  0.000000  0.002000
db file sequential read      0.001211   20.0%      5  0.000242  0.000056  0.000342
FETCH                        0.001000   16.5%      1  0.001000  0.001000  0.001000
gc cr grant 2-way            0.000999   16.5%      1  0.000999  0.000999  0.000999
SQL*Net message from client  0.000817   13.5%      2  0.000409  0.000254  0.000563
Disk file operations I/O     0.000018    0.3%      2  0.000009  0.000002  0.000016
SQL*Net message to client    0.000002    0.0%      2  0.000001  0.000001  0.000001
CLOSE                        0.000000    0.0%      2  0.000000  0.000000  0.000000
EXEC                         0.000000    0.0%      2  0.000000  0.000000  0.000000
---------------------------  --------  ------  -----  --------  --------  --------
TOTAL (9)                    0.006047  100.0%     19  0.000318  0.000000  0.002000

Now we can see where all the db time was consumed for this SQL statement, and there was no need to copy the trace file to the current working directory. The same can be done for tkprof and other operations.  Please see the plan.sql and tkprof.sql scripts in the Github repository.

 

Wrapping It Up

A little bit of automation goes a long way. Commands that are repetitive, boring and error prone can easily be automated.  Both your productivity and your mood will soar when little steps like these are used to improve your workflow.

All files for this demo can be found at https://github.com/jkstill/tracefile_identifier

Categories: DBA Blogs

On Demand Webcast: Driving a Connected, More Productive Next-Gen Workplace

WebCenter Team - Thu, 2016-05-26 08:25
Over the past two decades technological change has transformed business processes. Among the biggest changes is mobile ubiquity.
Finding ways to engage with customers in a mobile world is increasingly challenging for outdated systems.
Discover how a cloud platform can unite content, social, and mobile engagement with simplified line of business automation, as well as microsites and communities for an integrated next-generation workplace in this on demand webcast. View today!

Using Enterprise Manager to manage cloud services

Pat Shuff - Thu, 2016-05-26 07:18
Yesterday we talked about the virtues of Enterprise Manager. To be honest, the specific monitoring tool is not important, but the fact that you have one is. One of the virtues that VMware touts for vSphere is that you can manage instances on your own server as well as instances in vCloud. This is something worth playing with. Having the same tool manage your on premise instances and your instances in the cloud is powerful. Unfortunately, vCloud only allows you to allocate virtual machines and their associated storage, so you only have a compute-only IaaS option. You can't allocate just storage. You can't deploy a database server unless you already have a database deployed that you want to clone. You need to start with an operating system and build from there. There are benefits of PaaS and SaaS that you will never see in the vCloud implementation.

Oracle Enterprise Manager provides the same universal management interface for on premise and in-cloud services. Amazon falls short here. First, they don't have on premise instances, so their tools don't monitor anything in your data center, only in their cloud. Microsoft has monitoring tools with plugins for looking at Azure services. It is important to note that you need a gateway server in the Azure cloud to aggregate the telemetry data, ship it back, and report it in the monitoring tool. There is a good blog detailing the cost of IaaS monitoring in Azure; it points out that the outbound data transfer for monitoring can cost up to $17/month/server, so this is not something that comes for free.

Today we are going to look at using Enterprise Manager as a management tool for on premise systems, the Oracle Public Cloud, Amazon AWS, and Microsoft Azure. We are going to cheat a little and use a VirtualBox instance of Enterprise Manager 13c. We are not going to go through the installation process; the books and blogs that we referenced yesterday detail how to do this. A VirtualBox instance is available from edelivery.oracle.com, but we are not going to use that one; instead we are going to use an instance, available only internally at Oracle, for demo purposes. The key difference between the two systems is that the edelivery instance is a 21 GB download that expands to provide an OEM 13c instance for testing, while the internal system (retriever.us.oracle.com) has a 12c and an 11g database installed and is 39.5 GB (expanding to almost 90 GB when uncompressed). Given the size of the instance I really can't provide external access to it. You can recreate it by downloading the edelivery system, installing an 11g database instance, installing a 12c database instance, and configuring OEM to include data from those instances to replicate the screen shots that we are including.

If we look at the details of the VirtualBox instance we notice that we need at least 2 cores and 10 GB of memory to run it; the system is unusable with 8 GB of RAM. We really should bump this up to 12 GB of RAM, but given that it is for demo and training purposes it is OK if it runs a little slow. If we were running this in production, it would be recommended to grow this to 4 cores and 16 GB of memory, and also recommended that you not use a downloaded VirtualBox instance for production but install from scratch.

The key things that we are going to do are walk through what it takes to add a monitoring agent onto the service that we are trying to monitor and manage. If we look at the architecture of Enterprise Manager we notice that there are three key components: the Oracle Management Repository (OMR), the Oracle Management Service (OMS), and the Oracle Management Agent (OMA). The OMR is basically a database that keeps a history of all telemetry actions as well as reports and analytics for the systems being monitored. The OMS is the heart of Enterprise Manager and runs on a WebLogic server. The code is written in Java and presents the primary user interface to the administrators as well as being the gateway between the OMR and the agents (OMAs). The agents are installed on the target systems and collect operating system data, database data, WebLogic data, and all other log data to ship back to the OMR for analysis by the users.

It is important to note at this point that most PaaS and SaaS providers do not allow you to install an Enterprise Manager agent or any other management agent on their instances. They want to manage the services for you and force you to use their tools to manage their instance. SalesForce, for example, only gives you access to your customer relationship data. You can export your contact lists to a csv file to back up your data, but you can't correlate the contact list to the documents that you have shared with these users. Amazon RDS does not provide file system access, system access to the database, or access to the operating system so that you can install the management agent. You must use their tools to monitor services provided on their sites. Unfortunately, this inhibits you from looking at important things like workload repository reports or SQL tuning guides to see if something is running slow or waiting on a lock. Your only choice is to deploy the desired PaaS or SaaS as a manual or bundled install on IaaS, forcing you to manually manage things like backups and patching on your own.

The first thing that we need to do in Enterprise Manager is to log in and click on the Setup button on the top right. We need to define named credentials since we are going to connect to the cloud service using public and private ssh keys. We need to follow the Security pull down to Named Credentials.

We click on the Create icon in the top left and add credentials with public and private keys. If we don't have an ssh key to access the service, we can generate one using ssh-keygen, which creates a public and private key pair, and upload the public key using the SSH Access pull-down in the hamburger menu. Once we upload the ssh key we can use ssh -i keyname.ppk opc@ip_address to reach our database server. We will use this keyname.ppk to connect with Enterprise Manager and have all telemetry traffic transferred via the ssh protocol.
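
If you do not already have a key pair, a minimal sketch of generating one (file names are placeholders; the .pub half is what gets uploaded through the SSH Access menu):

# generate a 2048-bit rsa key pair with no passphrase
ssh-keygen -t rsa -b 2048 -N '' -f ./oraclecloud_key
# ./oraclecloud_key     -> private key, used with ssh -i and with the named credential
# ./oraclecloud_key.pub -> public key to upload to the cloud service
ssh -i ./oraclecloud_key opc@129.152.150.120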

Once we have valid credentials in the cloud account we can create the ssh access through Enterprise Manager. To do this we go to Setup at the top right, then Security, then Named Credentials. We then click on the Create button in the middle left to start entering data about the credentials. The name in the screen shot below failed because it begins with a number, so we switched it to ssh2017 since 2017ssh failed the naming convention. We are trying to use host access via ssh, which is done with pull-down menu definitions. The system defaults to host access, but we need to change the scope from host to global, which does not tie our credentials to one ip address. We upload our public and private key and associate this with the opc user since that user has sudo rights. We can verify the credentials by looking at the bottom of the list. This should allow us to access our cloud host via ssh and deploy an agent to our cloud target.

Note that we created two credentials because we had a step fail later. We created credentials for the opc user and for the oracle user. The opc credentials are called ssh2017 as shown in the screen shots. The oracle credentials are called oracle2017 and are not shown. The same steps are used; only the username and the name of the credentials change.

If we want to install the management agent onto our instance we need to know the ip address of the service that we are going to monitor as well as an account that can sudo to root or run elevated admin services. We go to the Enterprise Manager splash screen, log in, select the Setup button in the top right, and drill down to Add Target and Add Target Manually. This takes us to the Add Target screen where we can Install Agent on Host. To get rid of the warnings, we added our cloud target ip address to the /etc/hosts file and used a fully qualified and a short name associated with the ip address. We probably did not add the right external dns name but it works with Enterprise Manager. When we add the host we use the fully qualified host name. We can find this by logging into the cloud target and looking at the /etc/hosts file on that server. This gives us the local ip address and a fully qualified host name. Once we have this we can enter a directory to upload the agent software to. We had to create an agent directory under the /u01/app/oracle directory. We select the oracle2017 credentials (the screen shots use ssh2017, but this generates an error later) that we defined in the previous step and start uploading the agent software and configuring the host as a target.

Note that we could have entered the ip address rather than going through adding the ip address to /etc/hosts. We would have received a warning with the ip address.

When we first tried this we got an error during the initialization phase that opc did not own the /u01/app/oracle directory, so we had to create an agent directory and change its ownership. Fortunately, we could easily resubmit and enter a new directory without having to reenter all of the other information. The deployment takes a while because Enterprise Manager needs to upload the agent binaries, extract them, and install them. The process is updated with status so that you can see the progress and restart when errors happen. After we changed the ownership, the installation failed at a later step stating that opc did not have permission to add the agent to the inventory. We corrected this by installing as oracle and setting the /u01/app/oracle/agent directory to be owned by oracle.

When we commit the ip address or host name as well as the ssh credentials, we can track progress as the management server deploys the agent. We get to a point where we note that the oracle user does not have ssh capabilities and we will need to run some stuff manually from the opc account.

At this point we should have an enterprise manager connection to a cloud host. To get this working from my VirtualBox behind my AT&T Uverse wireless router I first had to configure a route on my broadband connection and set the ip address of the Enterprise Manager VirtualBox image to a static ip address. This allows the cloud instance to talk back to the OMS and store data in the OMR.

The next step is to discover the database instances. This is done by going through a guided discovery on the host that we just provisioned. It took a few minutes to sync up with the OMS but we could verify this with the emctl status agent command on the target host. We add the target manually using the guided discovery and select database services to look for on the target.
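
The agent status check is just an emctl call on the target; a sketch, assuming the agent landed under the /u01/app/oracle/agent directory created earlier (the agent_inst path may differ in your install):

# run as the oracle user on the cloud database server
/u01/app/oracle/agent/agent_inst/bin/emctl status agent
# push telemetry to the OMS immediately instead of waiting for the next upload
/u01/app/oracle/agent/agent_inst/bin/emctl upload agent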

At this point we should have a database, listener, and host connected to our single pane of management glass. We should see a local database (em12c) and a cloud based database (prs12cHP). We can look at the host characteristics as well as dive into SQL monitoring, database performance, and database management like backup and restore options or adding users to the repository. We could add a Java Cloud Service as well, link these two systems together, and trace a web page request down to a SQL read to see which component has the longest latency. We can figure out whether the network, Java memory allocation, or database disk is causing the slowest response. We can also look at SQL tuning recommendations to get suggestions on changing our SQL code or execution plans using the AWR reports and SQL tuning utilities in Enterprise Manager.

In summary, we can connect to an on premise server as well as a cloud server. We can't connect to an Amazon RDS instance because we don't get file system level access to push a client to, or a root user to change the agent permissions. We do get this with IaaS on Oracle, Compute servers on Azure, and EC2 on Amazon. We also get this with PaaS on Oracle and potentially even Force.com from SalesForce. No one gives you this ability with SaaS. It is assumed that you will take the SaaS solution as is and not need to look under the covers. Having a single pane of glass for monitoring and provisioning services is important. The tool should do more than tell you how full a disk is or how much of a cpu is loaded or available. It should dive into the application and let you look at where bottlenecks are and help troubleshoot issues. We could spend weeks diving into Enterprise Manager and the different management packs, but we are on a journey to look at PaaS options from Amazon, Microsoft, and Oracle.

How to clear MDS Cache

Darwin IT - Thu, 2016-05-26 06:06
In the answer to a question on community.oracle.com, I found the following great tip: http://www.soatutor.com/2014/09/clear-mds-cache.html.

In 12cR2 this looks like:
1. Start the System MBean Browser: in the domain, pull down the WebLogic Domain menu and choose 'System MBean Browser'.
2. Browse for the Application Defined MBeans.
3. Expand it and navigate to oracle.mds.lcm, then choose the server (AdminServer or SOAServer1).
4. Navigate to the node Application em (AdminServer) or soa-infra (SOAServer) -> MDSAppRuntime -> MDSAppRuntime. Click on the Operations tab, and then on clearCache.
5. Click on Invoke.

A confirmation is then shown.
By the way, often a redeploy of the SOA Composite that calls the WSDL or other artefact helps. But unfortunately not always.





My "Must See" ADF/MAF Sessions at KScope 16

Scott Spendolini - Thu, 2016-05-26 06:00
Yes, you read that right - it's not a typo, nor did one of my kids or wife gain access to my laptop.  It's part of a "blog hop" - where a number of experts made recommendations about KScope sessions that are "must attend" and are not in their core technology.  I picked ADF/MAF, as I don't have any practical experience in either technology, but they are at least similar enough that I would not be totally lost.

In any case, the following sessions in the ADF/MAF track are worth checking out at Kscope 16 this year:

How to Use Oracle ALTA UI to Create a Smashing UI for Web and Mobile
Luc Bors, eProseed NL
When: Jun 28, 2016, Session 12, 4:45 pm - 5:45 pm

I've always liked UI, and Oracle ALTA is a new set of templates that we'll be seeing quite a bit of across a number of new technologies.

Three's Company: Going Mobile with Oracle APEX, Oracle MAF, and Oracle MCS
Frederic Desbiens , Oracle Corporation
When: Jun 27, 2016, Session 6, 4:30 pm - 5:30 pm

I'll admit - anytime there's a comparison of APEX and other similar technologies, it's always interesting to witness the discussion.  If nothing else, there will be a good healthy debate as a result of this session!

Introduction to Oracle JET: JavaScript Extension Toolkit
Shay Shmeltzer, Oracle Corporation
When: Jun 28, 2016, Session 7, 8:30 am - 9:30 am

Oracle JET is a lot more than just charts, and there's a lot of momentum behind this technology.  I'm very interested to learn more and perhaps even see a thing or two that you can do with it, as well as the various integration points that are possible with other technologies.

Build a Mobile App in 60 Minutes with MAF
John King , King Training Resources
When: Jun 27, 2016, Session 5, 3:15 pm - 4:15 pm

Native mobile applications are something that APEX doesn't do, so it would be nice to see how this would be possible, should the need ever arise.

Thanks for attending this ODTUG blog hop! Looking for some other juicy cross-track sessions to make your Kscope16 experience more educational? Check out the following session recommendations from fellow experts!

Storing Date Values As Characters (What’s Really Happening)

Richard Foote - Thu, 2016-05-26 02:00
For something that’s generally considered an extremely bad idea, I’ve lost count of the number of times I’ve come across applications that insist on storing date values as characters within the database. We’ve all seen them … I recently got called in to assist a customer who was having issues with a POC in relation […]
Categories: DBA Blogs

The river floes break in spring...

Greg Pavlik - Wed, 2016-05-25 18:37

Alexander Blok, The river floes break in spring... (March 1902)
Translation by Greg Pavlik

The river floes break in spring,
And for the dead I feel no sorrow -
Toward new summits I am rising,
Forgetting crevasses of past striving,
I see the blue horizon of tomorrow.

What regret, in fire and smoke,
What agony of Aaron’s rod,
With each hour, with each stroke -
Or instead - the heavens’ gift stoked,
From the Bush of Moses, the Mother of God!

Original:

Весна в реке ломает льдины,
И милых мертвых мне не жаль:
Преодолев мои вершины,
Забыл я зимние теснины
И вижу голубую даль.

Что сожалеть в дыму пожара,
Что сокрушаться у креста,
Когда всечасно жду удара
Или божественного дара
Из Моисеева куста!

Март 1902

Please, use HTTPS for your APEX apps

Dimitri Gielis - Wed, 2016-05-25 16:07
Why use HTTPS?

When you Google this question you get many different answers, but this answer from Google Developers sums it up for me (click the link for more details):
  • HTTPS protects the integrity of your website/APEX app
  • HTTPS protects the privacy and security of your users
  • HTTPS is the future of the web; many new technologies only work with https (for example Service Workers; you can read more about Service Workers and APEX in my presentation)
Industry going to HTTPS

Websites used to have an HTTP portion and an HTTPS portion, which became active when you logged in to the site, but nowadays everything is under HTTPS. Google will actually rank your site higher when it's using HTTPS. Look at the sites you visit; many of them now use HTTPS by default.

HTTPS on localhost

If you're developing locally, you don't really need HTTPS on localhost, but I still like to have it.
Here are the steps I followed in Chrome on my Mac (OS X) to get the nice green lock when developing locally (this also works with APEX Front-End Boost); a note on creating the self-signed certificate itself follows the list:
  • In the address bar, click the little lock with the X. This will bring up a small information screen. Click the button that says "Certificate Information."
  • Click and drag the certificate image to your desktop. 
  • Double-click it. This will bring up the Keychain Access utility. Enter your password to unlock it.
  • Be sure you add the certificate to the System keychain, NOT the login keychain. 
  • After it has been added, double-click it. 
  • Expand the "Trust" section. "When using this certificate," set to "Always Trust"
  • Close Keychain Access and restart Chrome, and your self-signed certificate should be recognized now by the browser.
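
The steps above assume a self-signed certificate for localhost already exists; if you need to create one yourself, a typical openssl invocation looks something like this (file names and validity period are only examples):

# create a self-signed certificate for localhost, valid for one year
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=localhost" \
        -keyout localhost.key -out localhost.crt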
HTTPS on your own server

For years I've been using SSL certificates ordered from GoDaddy, but depending on the certificate you get, it might not be that cheap. The APEX R&D website uses a multi-site certificate - the same certificate is used for the APEX Office Print website.

But there's some good news... you can get SSL for free too (and it's very easy to do!), thanks to Let's Encrypt. I used Let's Encrypt to protect the Euro2016challenge.eu APEX app/website, for example.
Here's the Getting Started Guide from Let's Encrypt. This is the command I used (after installing the package):

./letsencrypt-auto certonly --webroot -w /var/www/euro2016 -d euro2016challenge.eu -d www.euro2016challenge.eu
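
Let's Encrypt certificates expire after 90 days, so it is worth scheduling a renewal check; a sketch of a crontab entry, where the install path is simply wherever you cloned the client:

# check every Monday at 03:00 and renew anything close to expiry
0 3 * * 1 /opt/letsencrypt/letsencrypt-auto renew --quiet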


If you're not yet on https with your APEX app/site, I would definitely recommend looking into it :)

Categories: Development

Are Zero Days or Bugs Fixed by CPU The Worst?

Pete Finnigan - Wed, 2016-05-25 15:50

I spoke yesterday about compartmentalising Oracle Security and one element that comes out of this is the need to consider what you are trying to achieve; secure actual data and also secure the platform. In general applying security patches will....[Read More]

Posted by Pete On 25/05/16 At 12:51 PM

Categories: Security Blogs

Compartmentalised Oracle Security

Pete Finnigan - Wed, 2016-05-25 15:50

I have been teaching security classes about Oracle Security for many years and they are very popular and I teach many classes per year around the world; mostly in the UK and EEC but I also venture to the Middle....[Read More]

Posted by Pete On 24/05/16 At 12:43 PM

Categories: Security Blogs

New Oracle Security Paper on Non-Production and Delphix

Pete Finnigan - Wed, 2016-05-25 15:50

I was asked by Delphix earlier this year to review their product with a particular focus on Oracle security of course. I wrote two papers; the first about Data Masking and Delphix and the second about securing data in non-production....[Read More]

Posted by Pete On 23/05/16 At 11:23 AM

Categories: Security Blogs

Oracle Security And Delphix Paper and Video Available

Pete Finnigan - Wed, 2016-05-25 15:50

I did a webinar with Delphix on 30th March 2016 on USA time. This was a very good session with some great questions at the end from the attendees. I did a talk on Oracle Security in general, securing non-production....[Read More]

Posted by Pete On 01/04/16 At 03:43 PM

Categories: Security Blogs

3 Days of Oracle Security Training In York, UK

Pete Finnigan - Wed, 2016-05-25 15:50

I have just updated the public Oracle Security training dates on our Oracle Security training page to remove the public trainings that have already taken place this year and to add a new training in York for 2016. After the....[Read More]

Posted by Pete On 31/03/16 At 01:53 PM

Categories: Security Blogs