
Feed aggregator

Adversarial analytics and other topics

DBMS2 - 3 hours 7 min ago

Five years ago, in a taxonomy of analytic business benefits, I wrote:

A large fraction of all analytic efforts ultimately serve one or more of three purposes:

  • Marketing
  • Problem and anomaly detection and diagnosis
  • Planning and optimization

That continues to be true today. Now let’s add a bit of spin.

1. A large fraction of analytics is adversarial. In particular:

  • Many of the analytics companies I talk with tell me that they have important use cases in security, anti-fraud or both.
  • Click fraud steals a large fraction of the revenue in online advertising and other promotion. Combating it is a major application need.
  • Spam is another huge, ongoing fight.
    • When Google et al. fight web spammers — which is of course a great part of what web search engine developers do — they’re engaged in adversarial information retrieval.
    • Blog comment spam is still a problem, even though the vast majority of instances can now be caught.
    • Ditto for email.
  • There’s an adversarial aspect to algorithmic trading. You’re trying to beat other investors. What’s more, they’re trying to identify your trading activity, so you’re trying to obscure it. Etc.
  • Unfortunately, unfree countries can deploy analytics to identify attempts to evade censorship. I plan to post much more on that point soon.
  • Similarly, de-anonymization can be adversarial.
  • Analytics supporting national security often have an adversarial aspect.
  • Banks deploy analytics to combat money-laundering.

Adversarial analytics are inherently difficult, because your adversary actively wants you to get the wrong answer. Approaches to overcome the difficulties include:

  • Deploying lots of data. Email spam was only defeated by large providers who processed lots of email and hence could see when substantially the same email was sent to many victims at once. (By the way, that’s why “spear-phishing” still works. Malicious email sent to only one or a few victims still can’t be stopped.)
  • Using unusual analytic approaches. For example, graph analytics are used heavily in adversarial situations, even though they have lighter adoption otherwise.
  • Using many analytic tests. For example, Google famously has 100s (at least) of sub-algorithms contributing to its search rankings. The idea here is that even the cleverest adversary might find it hard to perfectly simulate innocent behavior.

2. I was long a skeptic of “real-time” analytics, although I always made exceptions for a few use cases. (Indeed, I actually used a form of real-time business intelligence when I entered the private sector in 1981, namely stock quote machines.) Recently, however, the stuff has gotten more-or-less real. And so, in a post focused on data models, I highlighted some use cases, including:

  • It is increasingly common for predictive decisions to be made at [real-timeish] speeds. (That’s what recommenders and personalizers do.) Ideally, such decisions can be based on fresh and historical data alike.
  • The long-standing desire for business intelligence to operate on super-fresh data is, increasingly, making sense, as we get ever more stuff to monitor. However …
  • … most such analysis should look at historical data as well.
  • Streaming technology is supplying ever more fresh data.

Let’s now tie those comments into the analytic use case trichotomy above. From the standpoint of mainstream (or early-life/future-mainstream) analytic technologies, I think much of the low-latency action is in two areas:

  • Recommenders/personalizers.
  • Monitoring and troubleshooting networked equipment. This is generally an exercise in anomaly detection and interpretation.

Beyond that:

  • At sufficiently large online companies, there’s a role for low-latency marketing decision support.
  • Low-latency marketing-oriented BI can also help highlight system malfunctions.
  • Investments/trading has a huge low-latency aspect, but that’s somewhat apart from the analytic mainstream. (And it doesn’t fit well into my trichotomy anyway.)
  • Also not in the analytic mainstream are the use cases for low-latency (re)planning and optimization.

Related links

My April, 2015 post Which analytic technology problems are important to solve for whom? has a round-up of possibly relevant links.

Categories: Other

resizing database as a service with Oracle

Pat Shuff - 6 hours 16 min ago
Historically, what happens to a database months after deployment has always been a problem. Suppose we purchase a computer and disk storage and deploy a database onto that server. If we oversize the hardware and storage, we waste budget. If we undersize them, we have to purchase a new computer or new storage, get an operating system expert to reconfigure everything on the new server, and get a database administrator to reconfigure the database installation to run on the new server or storage. For example, say we purchased a 1 TB disk drive and allocated it all to /u02 so the database had a ton of space to grow into. We put the DATA area there and put the RECO area into /u03. Then our database service suddenly grows wildly: we have a record number of transactions, we increase the offerings in our product catalog, and our tablespace grows to over 800 GB. Disk performance starts to suffer and we want to grow our 1 TB to 2 TB. To do this we have to shut down our database, shut down the operating system, attach the new disk, format and mount it as /u05, copy the data from /u02 to /u05, remount /u05 as /u02, and reboot the system. Alternatively, we could have backed up the database from /u02 and reformatted /u02 and /u05 together as a logical volume, which would let us grow the disk dynamically, buy a 1 TB drive for /u05 rather than a 2 TB drive, and reduce our cost. Either way, we grew our tablespace by purchasing more hardware and involving an operating system admin and our database administrator, and we were down for a half day or a day while we copied all of our data and modified the disk layout.

Disk vendors attacked this problem early by offering network or fiber attached storage rather than direct attached storage. These systems allow you to add disks dynamically, keeping you from having to go out and purchase new disks for the server itself. You can attach your storage as a logical unit number and add spindles as desired. This now requires you to get a storage admin involved to update your storage layout and grow your logical unit from 1 TB to 2 TB. You then need your operating system admin to grow the file system that is on your /u02 logical unit mount so that your database admin can grow the tablespace beyond the 1 TB boundary. Yes, this solves the problem of having to bring down the server, touch the hardware, and add new cables and spindles to the computer. It allows data centers to be remote and configurations to be done dynamically with remote management tools. It also makes disk failures easier and quicker to address by pushing the problem of monitoring and fixing single-disk issues to the storage admin. It solves a problem, but there are better ways today to address this issue.

With infrastructure as a service we hide these issues by treating storage in the cloud as dynamic storage. With Amazon we can provision our database in EC2 and storage in S3. If we need to grow our S3, we allocate more storage to our bucket and grow the file system in EC2. The database admin then needs to go in and grow the tablespace to fill the new storage area. We got rid of the need for a storage admin, reduced our storage cost, and eliminated a step in our process. We still need an operating system admin to grow the file system and a database admin to grow the tablespace. The same is true if we use Azure compute or Oracle IaaS.

Let's go through how to attach and grow storage on a generic compute instance. We have a CentOS image running in IaaS on the Oracle Cloud. We can see that the instance has 9 GB allocated to it for the root operating system. We would like to add a 20 GB disk, then grow that disk to 40 GB as a second test. At first we notice that our instance is provisioned and we see the 9 GB disk labeled CentOS7 allocated to our instance as /dev/xvdb. We then create a root partition /dev/xvdb1, provision an operating system onto it using the xfs file system, and mount it as the root filesystem.

To add a 20 GB disk, we go into the Compute management screen, and create a new storage volume. This is easy because we just create a new volume and allocate 20 GB to it.

Given that this disk is relatively small, we don't have to wait long and can then attach it to our CentOS7 instance by clicking on the hamburger menu to the right of our new 20 GB disk and attaching it to our CentOS7 instance.

It is important to note that we did not need to reboot the instance; the disk simply appears as /dev/xvdc. We can then partition the disk with fdisk, create a file system with mkfs, and mount the disk by creating a new /u02 mount point and mounting /dev/xvdc1 on /u02.
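A minimal sketch of those steps, assuming the device names from this example (run as root or via sudo):

sudo fdisk /dev/xvdc              # create a single partition, /dev/xvdc1
sudo mkfs -t ext4 /dev/xvdc1      # the file system choice matters; see the note below
sudo mkdir /u02
sudo mount /dev/xvdc1 /u02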

The real exercise here is to grow this 20 GB mounted disk to 40 GB. We can go into the Volume storage and Update the storage to a larger size. This is simple and does not require a reboot or much work. We go to the Storage console, Update the disk, grow it to 40 GB, and go back to the operating system and notice that our 20 GB disk is now 40 GB. We can create a new partition /dev/xvdc2 and allocate it to our storage.

Note that we selected poorly when we made our file system selection. We chose to lay out an ext3 file system onto our /dev/xvdc1 partition. We can't grow the ext3 filesystem. We should have selected ext4. We did this on purpose to prove a point: the file system selection is critical, and if you make the wrong choice there is no turning back. The only way to correct this is to take a backup of our /u02 mount and restore it onto a newly formatted ext4 partition. We also made a second wrong choice by laying the file system directly on the raw partition. We really should have created a logical volume from this disk and put the file system on the logical volume. That would allow us to take our new /dev/xvdc2, create a new physical volume from it, add it to our volume group, and grow the ext4 file system. Again, we did this on purpose to prove a point: you need to plan for expansion when you first lay out a system. To solve this problem we need to unmount the /u02 disk, delete the /dev/xvdc1 and /dev/xvdc2 partitions, create a physical volume with the logical volume manager, create a logical volume, and lay an ext4 file system onto this new volume. We then restore our data from the backup and can grow the volume much more easily in the future. We are not going to go through these steps in detail, because the point of the exercise is to show that this is much easier with platform as a service than on infrastructure as a service; a rough sketch of the commands is included below for reference.
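For reference, a hedged sketch of the logical volume approach described above (the volume group and logical volume names are made up for illustration):

# initial layout: put the file system on a logical volume rather than the raw partition
sudo pvcreate /dev/xvdc1
sudo vgcreate datavg /dev/xvdc1
sudo lvcreate -l 100%FREE -n datalv datavg
sudo mkfs -t ext4 /dev/datavg/datalv
sudo mount /dev/datavg/datalv /u02

# later, after the cloud volume has been grown and /dev/xvdc2 created
sudo pvcreate /dev/xvdc2
sudo vgextend datavg /dev/xvdc2
sudo lvextend -l +100%FREE /dev/datavg/datalv
sudo resize2fs /dev/datavg/datalv    # an ext4 file system can be grown while mounted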

If we look at a database as a service disk layout we notice that we have /dev/xvdc1 as /u01 which represents the ORACLE_HOME, /dev/mapper/dataVolGroup-lvol0 as /u02 which represents the tablespace area for the database, /dev/mapper/fraVolGroup-lvol0 which represents the fast recovery area (where RMAN dumps backups), and /dev/mapper/redoVolGroup-lvol0 which represents the redo log area (where Data Guard dumps the transaction logs). The file systems are logical volumes and created by default for us. The file systems are ext4, which can be seen by looking at the /etc/fstab file. If we need to grow the /u02 partition we can do this by using the scale up option for the database. We can add 20 GB and extend the data partition or the fra partition. We also have the option of attaching the storage as /u05 and manually growing partitions as desired. It is important to note that scaling up the database does require a reboot and restart of the database. When we try to scale up this database instance we get a warning that there is a Java service that depends upon the database and it must be stopped before we can add the desired storage.

In summary, we can use IaaS to host a database. It does get rid of the need for a storage administrator. It does not get rid of the need for an operating system administrator. We still have to know the file system and operating system commands. If we use PaaS to host a database, we can add storage as a database administrator and do not need to mess with logical volume or file system commands. We can grow the file system and add table extents quickly and easily. If we undersize our storage, correcting the mistake is much easier than it was years ago. We don't need to overpurchase storage anymore because we can allocate it on demand and pay for it as we use it. We can remove a headache that has been an issue for years: we no longer need to triple our storage estimates, and can instead go with realistic estimates and control our budget better and more easily.

Storing Date Values As Characters Part II (A Better Future)

Richard Foote - 13 hours 47 min ago
In the previous post, I discussed how storing date values within a character data type is a really really bad idea and illustrated how the CBO can easily get its costings totally wrong as a result. A function-based date index helped the CBO get the correct costings and protect the integrity of the date data. During […]
Categories: DBA Blogs

Using Oracle on OS X? Instant Client 12.1 is here

Christopher Jones - Sun, 2016-05-29 06:45

Oracle Instant Client 12.1 for OS X was just released and is now available for free download from OTN for 32-bit and 64-bit applications.

Instant Client provides libraries and tools for connecting to Oracle Database. Among other uses, languages such as C, Python, PHP, Ruby, Perl and Node.js can use Instant Client for database connectivity.

In addition to Oracle 12.1 client features like auto-tuning, this release adds an ODBC driver.

The install instructions have been updated to reflect the resolution of the linking issues caused by the OS X El Capitan SIP changes, which ignore DYLD_LIBRARY_PATH in sub-processes. The ~/lib location required for Instant Client 11.2 on El Capitan is no longer needed with Instant Client 12.1. Note that if you are creating your own apps, you should link with -rpath.
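As a hedged sketch (the unzip location, source file, and output name below are assumptions; adjust them to wherever you extracted the Basic and SDK packages), building an OCI program with an embedded rpath might look like this:

cc -o myapp myapp.c \
   -I$HOME/instantclient_12_1/sdk/include \
   -L$HOME/instantclient_12_1 -lclntsh \
   -Wl,-rpath,$HOME/instantclient_12_1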

This release of Instant Client supports Mavericks, Yosemite, and El Capitan. Applications can connect to Oracle Database 10.2 or more recent. You should continue using the older 11.2 client if you need to connect to Oracle Database 9.2.

If you are interested in running Oracle Database itself on OS X, see my earlier post The Easiest Way to Install Oracle Database on Mac OS X.

Oracle JET Handling ADF BC 12.2.1 REST Validation Event

Andrejus Baranovski - Sat, 2016-05-28 10:32
I already had a post about how to handle ADF BC validation messages in Oracle JET - Handling ADF BC 12.2.1 REST Validation in Oracle JET. There is an improvement I would like to share. JET submits data to ADF BC REST, where validation happens. From the JET perspective the data is correct; however, it may fail validation in ADF BC, and we need to reset the values back to the originals in JET. Previously I was re-executing the entire JET collection; the drawback of this approach was that the currently selected row was lost and control returned to the first page of the table configured with pagination.

With the improved solution, the current row remains selected, even if ADF BC returns a failure for the validation event, as you can see in the example below:


The callback method for the JET model save API is set to parse ADF BC validation messages and, at the end, to refresh the current row model. The current row is refreshed by re-fetching its data; only that one row's data is re-fetched from REST. Previously the entire collection was refreshed, causing the current row to reset:


Download the sample application - JETCRUDApp_v9.zip (you must run the ADF BC REST application in JDEV 12.2.1 and JET in NetBeans 8).

Oracle Partners Cloud Connection: Are you connected?

OPN Cloud Connection was launched back at Oracle OpenWorld 2014 as a new community for partners to engage, explore, and learn about Oracle Cloud business opportunities. Since then it...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Encrypting sensitive data in puppet using hiera-eyaml

Pythian Group - Fri, 2016-05-27 12:36

Puppet manifests can hold a lot of sensitive information. Sensitive information like passwords or certificates is used in the configuration of many applications. Exposing it in a puppet manifest is not ideal and may conflict with an organization's compliance policies. That is why data separation is a very important aspect of secure puppet code.

Hiera is a pluggable hierarchical database. Hiera can help by keeping data out of puppet manifests. Puppet classes can look for data in Hiera, and Hiera will search its hierarchy and return the first instance of the value it finds.

Although Hiera is able to provide data separation, it cannot ensure security of sensitive information. Anyone with access to the Hiera data store will be able to see the data.

Enter Hiera-eyaml. Hiera-eyaml is a backend for Hiera that provides per-value encryption of sensitive data within yaml files to be used by Puppet.

The hunner/hiera puppet module (https://forge.puppetlabs.com/hunner/hiera) can be used to manage Hiera with eyaml support.

The module class can be used as below:

modules/profile/manifests/hieraconf.pp

class profile::hieraconf {
  # hiera configuration
  class { 'hiera':
    hierarchy => [
      '%{environment}/%{calling_class}',
      '%{environment}',
      '%{fqdn}',
      'common',
      'accounts',
      'dev',
    ],
  }
}

The /etc/puppet/hiera.yaml file would look like the following after the puppet run:

/etc/puppet/hiera.yaml

# managed by puppet
:backends:
  - yaml
:logger: console
:hierarchy:
  - "%{environment}/%{calling_class}"
  - "%{environment}"
  - "%{fqdn}"
  - common
  - accounts
  - dev
:yaml:
  :datadir: /etc/puppet/hieradata

Moving data to Hiera
In the following example, the diamond collector for MongoDB has data such as hosts, user, and password. The collector is only enabled for the grafana.pythian.com host.

modules/profile/manifests/diamond_coll.pp
[..]
diamond::collector { 'MongoDBCollector':
  options => {
    enabled => $fqdn ? { /grafana.pythian.com/ => True, default => false },
    hosts   => 'abc.pythian.com,xyz.pythian.com',
    user    => 'grafana',
    passwd  => 'xxxx',
  }
}

To move the data to Hiera, the create_resources function can be used in the manifest.

modules/profile/manifests/diamond_coll.pp

class profile::diamond_coll {
  [..]
  $mycollectors = hiera('diamond::collectors', {})
  create_resources('diamond::collector', $mycollectors)
  [..]
}

Then a new yaml file can be created, and the diamond::collectors code for MongoDBCollector can be abstracted as below:

hieradata/grafana.pythian.com.yaml

diamond::collectors:
  MongoDBCollector:
    options:
      enabled: True
      hosts: abc.pythian.com,xyz.pythian.com
      user: grafana
      passwd: xxxx

Moving data to Hiera-eyaml
The Hiera puppet code can be changed to the following to enable eyaml:

class profile::hieraconf {
  # hiera configuration
  class { 'hiera':
    hierarchy => [
      '%{environment}/%{calling_class}',
      '%{environment}',
      '%{fqdn}',
      'common',
      'accounts',
      'dev',
    ],
    eyaml           => true,
    eyaml_datadir   => '/etc/puppet/hieradata',
    eyaml_extension => 'eyaml',
  }
}

This will add the eyaml backend to puppet after a puppet run on the puppet server. The puppet module does the following to achieve this:

1. The hiera-eyaml gem will be installed.
2. The following keys will be created for hiera-eyaml using 'eyaml createkeys':

/etc/puppet/keys/private_key.pkcs7.pem
/etc/puppet/keys/public_key.pkcs7.pem

3. /etc/puppet/hiera.yaml will be updated.

The /etc/puppet/hiera.yaml file would look like the following after the puppet run:

/etc/puppet/hiera.yaml

# managed by puppet
:backends:
  - eyaml
  - yaml
:logger: console
:hierarchy:
  - "%{environment}/%{calling_class}"
  - "%{environment}"
  - "%{fqdn}"
  - common
  - accounts
  - dev
:yaml:
  :datadir: /etc/puppet/hieradata
:eyaml:
  :datadir: /etc/puppet/hieradata
  :extension: eyaml
  :pkcs7_private_key: /etc/puppet/keys/private_key.pkcs7.pem
  :pkcs7_public_key: /etc/puppet/keys/public_key.pkcs7.pem

The puppetmaster needs to be restarted after this, as changes to hiera.yaml require a restart to take effect.

Using the eyaml command line

Eyaml commands need to be run from a directory that contains the keys directory (in this example, /etc/puppet). The following command can be used to encrypt a password. The command gives us two output options, string and block.

# eyaml encrypt -p
Enter password: ****
string: ENC[PKCS7,MIIBeQYJKoZIhvcN[..]Fg3jAmdlCLbQ] OR
block: >
ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYCAQAxggEhMIIBHQIBADAFMAACAQEw
DQYJKoZIhvcNAQEBBQAEggEAQMo0dyWRmBC30TVDVxEOoClgUsxtzDSXmrJL
pz3ydhvG0Ll96L6F2WbclmGtpaqksbpc98p08Ri8seNmSp4sIoZWaZs6B2Jk
BLOIJBZfSIcHH8tAdeV4gBS1067OD7I+ucpCnjUDTjYAp+PdupyeaBzkoWMA
X/TrmBXW39ndAATsgeymwwG/EcvaAMKm4b4qH0sqIKAWj2qeVJTlrBbkuqCj
qjOO/kc47gKlCarJkeJH/rtErpjJ0Et+6SJdbDxeSbJ2QhieXKGAj/ERCoqh
hXJiOofFuAloAAUfWUfPKnSZQEhHCPDkeyhgDHwc8akWjC4l0eeorZgLPcs1
1oQJqTA8BgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBC+JDHdj2M2++mFu+pv
ORXmgBA/Ng596hsGFg3jAmdlCLbQ]

To decrypt, the following command can be used:

# eyaml decrypt -s 'ENC[PKCS7,MIIBeQYJKoZIhvcN[..]Fg3jAmdlCLbQ]'
test

The encrypted string or block can be used in Hiera. Using our previous example, the hiera file would look like the following. We also have to rename the file from .yaml to .eyaml.

hieradata/grafana.pythian.com.eyaml

diamond::collectors:
  MongoDBCollector:
    options:
      enabled: True
      hosts: abc.pythian.com,xyz.pythian.com
      user: grafana
      passwd: ENC[PKCS7,MIIBeQYJKoZIhvcN[..]Fg3jAmdlCLbQ]

Encrypting certificates
The following is a standard file resource used to copy an SSL certificate.

environments/production/manifests/logstash.pythian.com.pp

file { '/etc/logstash/certs/logstash-forwarder.crt':
  ensure => present,
  mode   => '0644',
  owner  => 'root',
  group  => 'root',
  source => 'puppet:///modules/logstash/logstash-forwarder.crt',
}

The file resource can be moved to hiera using hiera_hash.

environments/production/manifests/logstash.pythian.com.pp

$fileslog = hiera_hash('fileslog')
create_resources(file, $fileslog)

The data can be added to a yaml file.

hieradata/common.yaml

---
files:
  '/etc/logstash-forwarder/certs/logstash-forwarder.crt':
    ensure : present
    mode   : '0644'
    owner  : 'root'
    group  : 'root'
    source : 'puppet:///modules/logstash/logstash-forwarder.key'

To encrypt the file, the following command can be used:

# eyaml encrypt -f modules/logstash/files/logstash-forwarder.crt

The returned string value can then be added using the content parameter of the file resource.

hieradata/common.eyaml
[..]
files:
  '/etc/logstash-forwarder/certs/logstash-forwarder.crt':
    ensure  : present
    mode    : '0644'
    owner   : 'root'
    group   : 'root'
    content : 'ENC[PKCS7,MIIB+wYJKoZI[..]C609Oc2QUvxARaw==]'

The above examples cover encrypting strings and files, which constitute most of the sensitive data used in puppet code. Incorporating hiera-eyaml into the puppet workflow will help ensure the compliance and security of sensitive data.

Categories: DBA Blogs

At last APEX_PAGE.GET_URL!

Tony Andrews - Fri, 2016-05-27 03:51
I have just discovered (thanks to a tweet by @jeffreykemp) that in APEX 5.0 there is now a function called APEX_PAGE.GET_URL. About time! I've been using this home-made version for years. So if I want to create a URL that redirects back to the same page with a request I can just write: return my_apex_utils.fp (p_request=>'MYREQUEST'); One difference is that the one I use has a …

http://tonyandrews.blogspot.com/2016/05/at-last-apexpagegeturl.html

SQL Developer connection to DBaaS

Pat Shuff - Fri, 2016-05-27 01:07
Today we are going to connect to our database using SQL Developer. We could connect using sqlplus with a remote command but instead we are going to use a graphical tool to connect to our database in the cloud. It is important to note that this is the same tool that is used to connect to our on premise database. We can execute sql commands, look at the status of the database, clone pluggable databases from one service to another, and generally manipulate and manage the database with command line features or wizards.

SQL Developer is a free integrated development environment that simplifies the development and management of Oracle Database in both traditional and Cloud deployments. SQL Developer offers complete end-to-end development of your PL/SQL applications, a worksheet for running queries and scripts, a DBA console for managing the database, a reports interface, a complete data modeling solution, and a migration platform for moving your 3rd party databases to Oracle. There are a few books that have been written about this product, as well as blogs, that I suggest looking at.

We are not going to dive deep into SQL Developer but rather introduce a couple of concepts for monitoring our database in the cloud. We are running version 4.1.3 on a Windows desktop. We actually are cheating a little bit and running it on a Windows 2012 Server that is provisioned into IaaS in the Oracle Cloud. It makes a good scratch space for demos and development hands on labs. When we connect we can connect to the public ip address of our database on port 1521 or we can create an ssh tunnel and connect to localhost on port 1521. We will first connect via an ssh tunnel. To start, we need to log into our database service and figure out what the ip address is for the system we provisioned. For our system we notice that the ip address is 129.152.150.120.

We are going to first connect with ip tunneling through putty. We launch putty and enter the ip address, the ssh keys, and open up port 1521 as a tunnel. We open a connection and all connections to port 1521 on localhost will be connected to our cloud service at the ip address specified. Note that this solution works if we have one database that we are connecting to. If we have two database instances in the cloud we will need to map a different port number on localhost to port 1521 or open up the ports to the internet which we will talk about later. We need to keep this shell active and open but we can iconify the window.
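For readers not on Windows, a hedged OpenSSH equivalent of that putty tunnel (the key file name is a placeholder for your OpenSSH-format private key) would look something like:

ssh -i ~/.ssh/cloud_key -N -L 1521:localhost:1521 opc@129.152.150.120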

In SQL Developer we can now create a new connection to our database. This is done by clicking on the green plus sign in the top right of the screen. This opens a dialog window to define the connection to the database. We will call this connection prs12cHP which is the name of our service in the cloud. We are going to connect as sys so we need to select the advanced connection to connect as sysdba. It is important to note that you cannot do this with Amazon RDS if you provision an Oracle database in the Amazon PaaS. Amazon does not allow you to login as sys or system and does not give you sysdba privileges. If you want sysdba access you will need to deploy Oracle into Amazon EC2 to get access. Once we define our connection to localhost, port 1521, sys as sysdba, and a SID of ORCL we can test our interface and accept the connection once it is successful. Note that we can execute commands in the right window and look at things like what version of the database we are running. In this example we are running the High Performance Edition so we can use diag and tuning extensions from SQL Developer.

There is a new DBA feature in the latest release of SQL Developer. We can launch a navigation menu to add our cloud database by going to the View ... DBA option at the top of the screen. This gives us another green plus sign so that we can add the database and expose typical management views and functions. Two things of note here are simple exposure of pluggable databases as well as a clone option associated with that exposure.

We can do other things like look at backup jobs, look at table space allocation and location, look at users that are authorized and active. This is not a replacement for Enterprise Manager because it is looking at immediate and not historic data.

Now that we have connected through a tunnel, let's look at another option. We can open up port 1521 on the database service and connect straight to the ip address. This method is not recommended because it opens up your database to all ip addresses on the internet if you are using a demo or evaluation account. You can whitelist ip addresses, use a VPN, or limit by subnet the systems that it answers. This is done through the compute service management interface under the networking tab. We need to enable the dblistener for our database service. Once we do this we can connect SQL Developer to the database using the ip address of the database service. We might need to do this if we are connecting to multiple cloud servers and don't want to create a tunnel for each of them.

In summary, we have connected to our database service using SQL Developer. This is the same tool that we use to connect to databases in our data center. We can connect the same way that we normally do via an ip address or tunnel to keep the server in the cloud a little more secure. We noted the differences between the Amazon RDS and Oracle DBaaS options and provided a workaround with EC2 or Azure Compute as an alternative. It is important to remember the differences between PaaS features and IaaS features when it comes time to calculating the cost of services. PaaS gives you expanded features like automated backup and size up/down which we will look at next week.

Links for 2016-05-26 [del.icio.us]

Categories: DBA Blogs

Embedded mode limitations for Production systems

Anthony Shorten - Thu, 2016-05-26 19:33

In most implementations of Oracle Utilities products the installer creates an embedded mode installation. It is called embedded because the domain configuration is embedded in the application, which is ideal for demonstration and development environments, as the default setup is enough for those types of activities.

Over time, though, customers and partners will want to use more and more of the Oracle WebLogic domain facilities, including advanced setups like multiple servers, clusters, advanced security, etc. Here are a few important things to remember about embedded mode:

  • The embedded mode domain setup is fixed: a single server houses the product and the administration server, with the internal basic security setup. In non-production this is reasonable, as the requirements for the environment are simple.
  • The domain file (config.xml) is generated by the product, using a template, assuming it is embedded only.
  • When implementations need additional requirements within the domain there are three alternatives:
    • Make the changes in the domain from the administration console and then convert the new config.xml generated by the console into a custom template. This needs to be done because, whenever Oracle delivers ANY patches or upgrades (or whenever you make configuration changes), you need to run initialSetup[.sh] to add the patch, upgrade or configuration to the product. This resets the file back to the factory-provided template unless you are using a custom template. Basically, if you decide to use this option and do not implement a custom template, you will lose your changes each time.
    • In later versions of OUAF we introduced user exits. These allow implementations to add to the configuration using XML snippets. It does require you to understand the configuration file that is being manipulated and we have sprinkled user exits all over the configuration files to allow extensions. Using this method means that you make changes to the domain using the configuration files, examine the changes to the domain file and then decide which user exit is available to reflect that change and add the relevant XML snippet. Again you must understand the configuration file to make sure you do not corrupt the domain.
    • The easiest option is to migrate to native mode. This basically removes the embedded nature of the domain and houses it within Oracle Weblogic. This is explained in Native Installation whitepaper (Doc Id: 1544969.1) available from My Oracle Support.

Native installation allows you to use the full facilities within Oracle WebLogic without the restrictions of embedded mode. The advantages of native installations are the following:

  • The domain can be setup according to your company standards.
  • You can implement clusters and multiple servers, including dynamic clustering.
  • You can use the security features of Oracle WebLogic to implement complex security setups including SSO solutions.
  • You can lay out the architecture according to your volumes to manage within your SLAs.
  • You can implement JDBC connection pooling, Work Managers, advanced diagnostics etc.

Oracle recommends that native installations be used for environments where you need to take advantage of the domain facilities. Embedded mode should only be used within the restrictions it poses.

Crowdsourced software development experiment

Dimitri Gielis - Thu, 2016-05-26 15:30
A few days ago I got an email about an experiment in programming with the crowd.
I hadn't really heard about it before, but found it an interesting idea. In this experiment people will perform microtasks (10-minute tasks) as members of the crowd. People don't know each other, but will collaborate together. The system distributes the work and supplies instructions. The challenge is in creating quality code that meets the specifications.
Job is still searching for some people to be part of the experiment, so I thought I'd put it on my blog; in case you're interested, you'll find more details below and how to contact him.


Categories: Development

Understanding MySQL Fabric Faulty Server Detection

Pythian Group - Thu, 2016-05-26 12:39

A while ago I found myself analyzing a MySQL Fabric installation to understand why a group member was occasionally being marked as FAULTY even when the server was up and running and no failures were observed.

         
                         server_uuid     address  status       mode weight
------------------------------------ ----------- ------- ---------- ------
ab0b0653-6121-11c5-55a0-007543445454 mysql1:3306 PRIMARY READ_WRITE    1.0
f34dd331-2432-11f4-a2d3-006754678533 mysql2:3306 FAULTY  READ_ONLY     1.0
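A listing like the one above can be produced from the Fabric command line, assuming the mysqlfabric client is configured to reach the Fabric node (the group name is the one from this post):

mysqlfabric group lookup_servers xc_grp_1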

 

Upon reviewing mysqlfabric logs, I found the following warnings were being logged from time to time:

[WARNING] 1442221217.920115 - FailureDetector(xc_grp_1) - Server (f34dd331-2432-11f4-a2d3-006754678533) in group (xc_grp_1) is unreachable

 

Since I was not clear under which circumstances a server is marked as FAULTY, I decided to review MySQL Fabric code (Python) to better understand the process.

The module responsible for printing this message is failure_detection.py and, more specifically, the _run method belonging to the FailureDetector class. This method will loop through every server in a group and attempt a connection to the MySQL instance running on that node. The MySQLServer.is_alive method (mysql/fabric/server.py) is called for this purpose.

Before reviewing the failure detection process, we first need to know that there are four MySQL fabric parameters that will affect when a server is considered unstable or faulty:

DETECTION_TIMEOUT
DETECTION_INTERVAL
DETECTIONS
FAILOVER_INTERVAL

 

Based on the above variables, the logic followed by FailureDetector._run() to mark a server as FAULTY is the following:

1) Every {DETECTION_INTERVAL/DETECTIONS} seconds, a connection against each server in the group is attempted with a timeout equal to DETECTION_TIMEOUT

2) If DETECTION_TIMEOUT is exceeded, the observed message is logged and a counter incremented

3) When this counter reaches DETECTIONS, the server is marked as “unstable” and if the last time the master changed was greater than FAILOVER_INTERVAL ago, the server is marked as FAULTY

With a better understanding of the logic followed by MySQL fabric to detect faulty nodes, I went to the configuration file to check the existing values for each of the parameters:

DETECTIONS=3
DETECTION_TIMEOUT=1
FAILOVER_INTERVAL=0
DETECTION_INTERVAL=6

From the values above we can notice that each group will be polled every 2 seconds (DETECTION_INTERVAL/DETECTIONS) and that the monitored server should respond within a second for the test to be considered successful.

On high concurrency nodes, or nodes under heavy load, a high polling frequency combined with tight timeouts could cause the servers to be marked as FAULTY just because the connection attempt would not be completed or processed (in the case of high connection rates or saturated network interfaces) before the timeout occurs.

Also, having FAILOVER_INTERVAL reduced to 0 will cause the server to be marked as FAULTY even if a failover has just occurred.

A less aggressive configuration would be more appropriate for a heavily loaded environment:

DETECTIONS=3 -> It's Ok
DETECTION_TIMEOUT -> 5 TO 8
FAILOVER_INTERVAL -> 1200
DETECTION_INTERVAL -> 45 TO 60

Conclusion

As with any other database clustering solution that relies on a database connection to test node status, situations where the database server would take longer to respond should also be considered. The polling frequency should be adjusted so the detection window is within an acceptable range, but the rate of monitoring connections generated is also kept to the minimum. Check timeouts should also be adjusted to avoid false positives caused by the server not being able to respond in a timely manner.

 

Categories: DBA Blogs

INSERT ALL caveat

Dominic Brooks - Thu, 2016-05-26 10:33

Why you might want to think twice about using INSERT ALL.

One of those things I knew and then forgot.

So, let’s say you’ve got three tables or a partitioned table or something like that.

Let’s use regional tables for simplicity.

drop table t1_r1;
drop table t1_r2;
drop table t1_r3;

create table t1_r1
(col1 varchar2(2) not null
,col2 number not null
,check( col1 in ('R1')));

create table t1_r2
(col1 varchar2(2) not null
,col2 number not null
,check( col1 in ('R2')));

create table t1_r3
(col1 varchar2(2) not null
,col2 number not null
,check( col1 in ('R3')));

insert into t1_r1 values ('R1',1);
insert into t1_r2 values ('R2',1);
insert into t1_r3 values ('R3',1);

commit;

And you want a routine that will insert into one of those tables depending on region.

And you’re a simple fellow, so you go with an IF statement:

create or replace procedure p1 (
  col1 in varchar2, 
  col2 in number
)
as
begin
  if col1 = 'R1'
  then
      insert into t1_r1 values(col1,col2);
  elsif col1 = 'R2'
  then
      insert into t1_r2 values(col1,col2);
  else 
      insert into t1_r3 values(col1,col2);
  end if;
end p1;
/

Procedure P1 compiled

And then in the same session you run this uncommitted:

exec p1('R1',2);

PL/SQL procedure successfully completed.

And then in another session you decide to truncate table T1_R3:

truncate table t1_r3;

Table T1_R3 truncated.

No problem.
None was expected.

However…

Let’s say that we decide to tidy up that procedure and get rid of some of the repetition by using an INSERT ALL statement.
I will use a standalone sql statement just to demonstrate a further minor aspect rather than using a procedure with a bound parameter.

insert all
when col1 = 'R1' then into t1_r1
when col1 = 'R2' then into t1_r2
when col1 = 'R3' then into t1_r3
select 'R1' col1,2 col2
from dual;

1 row inserted.

Let’s revisit the truncate:

truncate table t1_r3;

SQL Error: ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
00054. 00000 -  "resource busy and acquire with NOWAIT specified or timeout expired"
*Cause:    Interested resource is busy.
*Action:   Retry if necessary or increase timeout.

TM share locks from the INSERT ALL on all three possible targets prevent the TRUNCATE.

So, a simple/simplistic illustration of why you might want to think twice about whether INSERT ALL is the best feature for your use case, based on a real life problem.


Tracefile Automation – Simplify Your Troubleshooting Tasks

Pythian Group - Thu, 2016-05-26 09:06

Here’s a common Oracle troubleshooting scenario when a SQL statement needs tuning and/or troubleshooting:

  • log on to dev server
  • connect to database (on a different server)
  • run the SQL statement with 10046 tracing enabled
  • ssh to the database server
  • copy the trace file back to work environment
  • process the trace file.

 

All of this takes time. Granted, not a great deal of time, but tuning is an iterative process and so these steps will be performed multiple times. Not only are these steps a productivity killer, but they are repetitive and annoying. No one wants to keep running the same manual command over and over.

This task is ripe for some simple automation.

If both the client and database servers are some form of Unix, automating these tasks is straightforward.

Please note that these scripts require an 11g or later version of the Oracle database. These scripts are dependent on the v$diag_info view to retrieve the tracefile name. While these scripts could be made to work on 10g databases, that is left as an exercise for the reader.

Step by Step

To simplify the process it can be broken down into steps.

 

1. Reconnect

The first step is to create a new connection each time the SQL is executed. Doing so ensures the database session gets a new tracefile, as we want each execution to be isolated.

-- reconnect.sql

connect jkstill/XXXX@oravm

 

2. Get the Tracefile hostname, owner and filename

Oracle provides all the information needed.

In addition the script will set the 10046 event, run the SQL of interest and then disable the 10046 event.

Following is a snippet from the tracefile_identifier_demo.sql script.

 


-- column variables to capture host, owner and tracefile name
col tracehost new_value tracehost noprint
col traceowner new_value traceowner noprint
col tracefile new_value tracefile noprint

set term off head off feed off

-- get oracle owner
select username traceowner from v$process where pname = 'PMON';

-- get host name
select host_name tracehost from v$instance;

-- set tracefile identifier
alter session set tracefile_identifier = 'MYTRACEFILE';

select value tracefile from v$diag_info where name = 'Default Trace File';

set term on head on feed on

-- do your tracing here
alter session set events '10046 trace name context forever, level 12';

-- run your SQL here
@@sql2trace

alter session set events '10046 trace name context off';

 

In this case sql2trace.sql is a simple SELECT from a test table.  All of the scripts used here appear in Github as mentioned at the end of this article.

 

3. Process the Tracefile

Now that the tracefile has been created, it is time to retrieve it.

The following script scp.sql is called from tracefile_identifier_demo.sql.

 


col scp_src new_value scp_src noprint
col scp_target new_value scp_target noprint

set term off feed off verify off echo off

select '&&1' scp_src from dual;
select '&&2' scp_target from dual;

set feed on term on verify on

--disconnect

host scp &&scp_src &&scp_target

Following is an example putting it all together in tracefile_identifier_demo.sql.

 

SQL> @tracefile_identifier_demo
Connected.

1 row selected.


PRODUCT                        VERSION              STATUS
------------------------------ -------------------- --------------------
NLSRTL                         12.1.0.2.0           Production
Oracle Database 12c Enterprise 12.1.0.2.0           64bit Production
 Edition

PL/SQL                         12.1.0.2.0           Production
TNS for Linux:                 12.1.0.2.0           Production

Data Base
------------------------------
P1.JKS.COM

INSTANCE_NAME        HOST_NAME                      CURRDATE
-------------------- ------------------------------ ----------------------
js122a1              ora12c102rac01.jks.com         2016-05-23 16:38:11

STARTUP
--------------------
04/02/2016 11:22:12


Session altered.

Elapsed: 00:00:00.00

OWNER        OBJECT NAME                     OBJECT_ID OBJECT_TYPE             CREATED
------------ ------------------------------ ---------- ----------------------- -------------------
SYS          OLAP_EXPRESSION                     18200 OPERATOR                2016-01-07 21:46:54
SYS          OLAP_EXPRESSION_BOOL                18206 OPERATOR                2016-01-07 21:46:54
SYS          OLAP_EXPRESSION_DATE                18204 OPERATOR                2016-01-07 21:46:54
SYS          OLAP_EXPRESSION_TEXT                18202 OPERATOR                2016-01-07 21:46:54
SYS          XMLSEQUENCE                          6379 OPERATOR                2016-01-07 21:41:25
SYS          XQSEQUENCE                           6380 OPERATOR                2016-01-07 21:41:25
SYS          XQWINDOWSEQUENCE                     6393 OPERATOR                2016-01-07 21:41:25

7 rows selected.

Elapsed: 00:00:00.00

Session altered.

Elapsed: 00:00:00.00

js122a1_ora_1725_MYTRACEFILE.trc                                                                                                                                                            100% 3014     2.9KB/s   00:00

SQL> host ls -l js122a1_ora_1725_MYTRACEFILE.trc
-rw-r----- 1 jkstill dba 3014 May 23 16:38 js122a1_ora_1725_MYTRACEFILE.trc

But Wait, There’s More!

This demo shows you how to automate the retrieval of the trace file. But why stop there?  The processing of the file can be modified as well.

Really, it isn’t even necessary to copy the script over, as the content can be retrieved and piped to your favorite command.  The script mrskew.sql for instance uses ssh to cat the tracefile, and then pipe the contents to the Method R utility, mrskew.  Note: mrskew is a commercial utility, not open source software.

 

-- mrskew.sql

col ssh_target new_value ssh_target noprint
col scp_filename new_value scp_filename noprint

set term off feed off verify off echo off

select '&&1' ssh_target from dual;
select '&&2' scp_filename from dual;

set feed on term on verify on

--disconnect

host ssh &&ssh_target 'cat &&scp_filename' | mrskew

 

Following is another execution of tracefile_identifier_demo.sql, this time piping output to mrskew. Only the final part of the output is shown below.

 

...

Elapsed: 00:00:00.01

CALL-NAME                    DURATION       %  CALLS      MEAN       MIN       MAX
---------------------------  --------  ------  -----  --------  --------  --------
PARSE                        0.002000   33.1%      2  0.001000  0.000000  0.002000
db file sequential read      0.001211   20.0%      5  0.000242  0.000056  0.000342
FETCH                        0.001000   16.5%      1  0.001000  0.001000  0.001000
gc cr grant 2-way            0.000999   16.5%      1  0.000999  0.000999  0.000999
SQL*Net message from client  0.000817   13.5%      2  0.000409  0.000254  0.000563
Disk file operations I/O     0.000018    0.3%      2  0.000009  0.000002  0.000016
SQL*Net message to client    0.000002    0.0%      2  0.000001  0.000001  0.000001
CLOSE                        0.000000    0.0%      2  0.000000  0.000000  0.000000
EXEC                         0.000000    0.0%      2  0.000000  0.000000  0.000000
---------------------------  --------  ------  -----  --------  --------  --------
TOTAL (9)                    0.006047  100.0%     19  0.000318  0.000000  0.002000

Now we can see where all the db time was consumed for this SQL statement, and there was no need to copy the trace file to the current working directory. The same can be done for tkprof and other operations.  Please see the plan.sql and tkprof.sql scripts in the Github repository.

 

Wrapping It Up

A little bit of automation goes a long way. Commands that are repetitive, boring and error prone can easily be automated.  Both your productivity and your mood will soar when little steps like these are used to improve your workflow.

All files for this demo can be found at https://github.com/jkstill/tracefile_identifier

Categories: DBA Blogs

On Demand Webcast: Driving a Connected, More Productive Next-Gen Workplace

WebCenter Team - Thu, 2016-05-26 08:25
Over the past two decades technological change has transformed business processes. Among the biggest changes is mobile ubiquity.
Finding ways to engage with customers in a mobile world is becoming increasingly challenging for outdated systems.
Discover how a cloud platform can unite content, social, and mobile engagement with simplified line of business automation, as well as microsites and communities for an integrated next-generation workplace in this on demand webcast. View today!

Using Enterprise Manager to manage cloud services

Pat Shuff - Thu, 2016-05-26 07:18
Yesterday we talked about the virtues of Enterprise Manager. To be honest, the specific monitoring tool is not important, but the fact that you have one is. One of the virtues that VMware touts for vSphere is that you can manage instances on your server as well as instances in vCloud. This is something worth playing with. Having the same tool manage your on premise instances and your instances in the cloud has power. Unfortunately, vCloud only allows you to allocate virtual machines and the storage associated with them, so you only have an IaaS option of compute. You can't allocate just storage. You can't deploy a database server unless you have a database deployed that you want to clone. You need to start with an operating system and build from there. There are benefits of PaaS and SaaS that you will never see in the vCloud implementation.

Oracle Enterprise Manager provides the same universal management interface for on premise and in-cloud services. Amazon falls short on this. First, they don't have on premise instances, so the tools that they have don't monitor anything in your data center, only in their cloud. Microsoft has monitoring tools with plugins for looking at Azure services. It is important to note that you need a gateway server in the Azure cloud to aggregate the data, ship the telemetry data back, and report it in the monitoring tool. There is a good blog detailing the cost of IaaS monitoring in Azure. The blog points out that the outbound data transfer for monitoring can cost up to $17/month/server, so this is not something that comes for free.

Today we are going to look at using Enterprise Manager as a management tool for on premise systems, the Oracle Public Cloud, Amazon AWS, and Microsoft Azure. We are going to cheat a little and use a VirtualBox instance of Enterprise Manager 13c. We are not going to go through the installation process. The books and blogs that we referenced yesterday detail how to do this. Unfortunately, the VirtualBox instance is available from edelivery.oracle.com. We are not going to use this instance but are going to use an instance for demo purposes only available internal to Oracle. The key difference between the two systems is that the edelivery instance is 21 GB in size for download and expands to provide an OEM 13c instance for testing while the internal system (retriever.us.oracle.com) has a 12c and 11g database installed and is 39.5 GB (expanded to almost 90 GB when uncompressed). Given the size of the instance I really can't provide external access to this instance. You can recreate this by downloading the edelivery system, installing an 11g database instance, installing a 12c database instance, and configuring OEM to include data from those instances to replicate the screen shots that we are including.

If we look at the details on the virtual box instance we notice that we need at least 2 cores and 10 GB of memory to run this instance. The system is unusable at 8 GB of RAM. We really should bump this up to 12 GB of RAM but given that it is for demo purposes and for training it is ok if it runs a little slow. If we were running this in production it is recommended to grow this to 4 cores and 16 GB of memory and also recommended that you not use a downloaded VirtualBox instance for production but install from scratch.

The key thing that we are going to do is walk through what it takes to add a monitoring agent onto the service that we are trying to monitor and manage. If we look at the architecture of Enterprise Manager we notice that there are three key components: the Oracle Management Repository (OMR), the Oracle Management Service (OMS), and the Oracle Management Agent (OMA). The OMR is basically a database that keeps a history of all telemetry actions as well as reports and analytics for the systems being monitored. The OMS is the heart of Enterprise Manager and runs on a WebLogic server. The code is written in Java and presents the primary user interface to the administrators as well as being the gateway between the OMR and the agents or OMAs. The agents are installed on the target systems and collect operating system data, database data, WebLogic data, and all other log data to ship back to the OMR for analysis by the users.

It is important to note at this point that most PaaS and SaaS providers do not allow you to install an Enterprise Manager Agent or any other management agent on their instances. They want to manage the services for you and force you to use their tools to manage their instance. SalesForce, for example, only gives you access to your customer relationship data. You can export your contact lists to a csv file to back up your data but you can't correlate the contact list to the documents that you have shared with these users. Amazon RDS does not provide file system access, system-level access to the database, or access to the operating system, so you cannot install the management agent. You must use their tools to monitor services provided on their sites. Unfortunately, this inhibits you from looking at important things like workload repository reports or sql tuning guides to see if something is running slow or waiting on a lock. Your only choice is to deploy the desired PaaS or SaaS as a manual or bundled install on IaaS, forcing you to manually manage things like backups and patching on your own.

The first thing that we need to do in Enterprise Manager is to log in and click on the Setup button on the top right. We need to define named credentials since we are going to connect to the cloud service using public and private ssh keys. We need to follow the Security pull down to Named Credentials.

We click on the Create icon in the top left and add credentials with public and private keys. If we don't have an ssh key to access the service we can generate an ssh key using ssh-keygen which generates a public and private key and upload the key using the SSH Access pull down in the hamburger menu. Once we upload the ssh key we can use ssh -i keyname.ppk opc@ip_address for our database server. We will use this keyname.ppk to connect with Enterprise Manager and have all telemetry traffic transferred via the ssh protocol.
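A hedged sketch of that key setup and access check (the key file name and <cloud_ip> are placeholders; upload ~/.ssh/oemcloud.pub through the SSH Access menu first):

ssh-keygen -t rsa -b 2048 -f ~/.ssh/oemcloud      # creates oemcloud and oemcloud.pub
ssh -i ~/.ssh/oemcloud opc@<cloud_ip> hostname    # confirm key-based access as opc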

Once we have the credentials valid in the cloud account we can create the ssh access through Enterprise Manager. To do this we go to Setup at the top right, Security, Named Credentials. We then click on the Create button in the middle left to start entering data about the credentials. The name in the screen shot below failed because it begins with a number, so we switched it to ssh2017 since 2017ssh failed the naming convention. We are trying to use host access via ssh, which is done with pull down menu definitions. The system defaults to host access but we need to change from host to global, which does not tie our credentials to one ip address. We upload our public and private key as well as associate this with the opc user since that user has sudo rights. We can verify the credentials by looking at the bottom of the list. This should allow us to access our cloud host via ssh and deploy an agent to our cloud target.

Note that we created two credentials because we had a step fail later. We created credentials for the opc user and for the oracle user. The opc credentials are called ssh2017 as shown in the screen shots. The oracle credentials are called oracle2017 and are not shown. The same steps are used just the username is changed as well as the name of the credentials.

If we want to install the management agent onto our instance we need to know the ip address of the service that we are going to monitor as well as an account that can sudo to root or run elevated admin services. We go to the Enterprise Manager splash screen, login, select the Setup button in the top right and drill down to Add Target and Add Target Manually. This takes us to the Add Target screen where we can Install Agent on Host. To get rid of the warnings, we added our cloud target ip address to the /etc/hosts file and used a fully qualified and short name associated with the ip address. We probably did not add the right external dns name but it works with Enterprise Manager. When we add the host we use the fully qualified host name. We can find this by logging into the cloud target and looking at the /etc/hosts file on that server. This gives us the local ip address and a fully qualified host name. Once we have this we can enter a directory to upload the agent software to. We had to create an agent directory under the /u01/app/oracle directory. We select the oracle2017 credentials (the screen shots use ssh2017 but this generates an error later) we defined in the previous step and start uploading the agent software and configuring the host as a target.

Note that we could have entered the ip address directly rather than adding it to /etc/hosts; we would simply have received a warning about the ip address.

When we first tried this we got an error during the initialization phase because opc did not own the /u01/app/oracle directory, so we had to create an agent directory and change its ownership. Fortunately, we could easily resubmit and enter a new directory without having to reenter all of the other information. The deployment takes a while because Enterprise Manager needs to upload the agent binaries, then extract and install them. The process reports its status so that you can see the progress and restart when errors happen. After we changed the ownership, the installation failed at a later step, stating that opc did not have permission to add the agent to the inventory. We corrected this by installing as oracle and setting the /u01/app/oracle/agent directory to be owned by oracle, as in the sketch below.
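A sketch of that ownership fix, run over ssh as the opc user; the oinstall group is an assumption based on a default Oracle installation and may differ on your instance.

# Give the oracle user ownership of the agent staging directory so the
# agent can be registered in the inventory during deployment.
ssh -i ~/.ssh/cloudkey opc@203.0.113.10 \
  "sudo chown -R oracle:oinstall /u01/app/oracle/agent"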

When we submit the ip address or host name along with the ssh credentials, we can track progress as the management server deploys the agent. At one point it notes that the oracle user does not have ssh access, so some of the final steps have to be run manually from the opc account.

At this point we should have an Enterprise Manager connection to a cloud host. To get this working from my VirtualBox image behind my AT&T Uverse wireless router, I first had to configure a route on my broadband connection and give the Enterprise Manager VirtualBox image a static ip address. This allows the cloud instance to talk back to the OMS (Oracle Management Service) and store data in the OMR (Oracle Management Repository).
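Assuming the Enterprise Manager VirtualBox image runs Oracle Linux with a bridged eth0 interface, the static address can be set in the usual ifcfg file; the addresses below are examples only and should match your router's subnet. The router also needs to forward the OMS upload port to this address (the port in use is shown by emctl status oms -details on the OMS host).

# /etc/sysconfig/network-scripts/ifcfg-eth0 on the Enterprise Manager VM
# (example addresses only).
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.50
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1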

The next step is to discover the database instances. This is done by going through a guided discovery on the host that we just provisioned. It took a few minutes for the agent to sync up with the OMS, but we could verify this with the emctl status agent command on the target host, as shown below. We then add the target manually using the guided discovery and select database services as what to look for on the target.
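Before starting discovery, the agent's connection to the OMS can be checked from the target host; the agent home below is an assumption and should be replaced with the directory chosen during deployment.

# On the cloud target, as the oracle user that owns the agent.
export AGENT_HOME=/u01/app/oracle/agent/agent_inst   # assumed path; adjust to your install
$AGENT_HOME/bin/emctl status agent     # shows agent health and the last successful upload
$AGENT_HOME/bin/emctl upload agent     # forces an immediate upload if the status looks stale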

At this point we should have a database, listener, and host connected to our single pane of management glass. We should see a local database (em12c) and a cloud based database (prs12cHP). We can look at the host characteristics as well as dive into SQL monitoring, database performance, and database management tasks like backup and restore options or adding users to the repository. We could also add a Java Cloud Service, link the two systems together, and trace a web page request down to a SQL read to see which component contributes the longest latency. We can figure out whether the network, Java memory allocation, or database disk is causing the slowest response. We can also look at SQL tuning recommendations to get suggestions on changing our SQL code or execution plans, using the AWR report and SQL tuning utilities in Enterprise Manager.

In summary, we can connect to an on-premises server as well as a cloud server. We can't connect to an Amazon RDS instance because we don't get file system level access to push an agent to, or a root user to change the agent permissions. We do get this with IaaS on Oracle, Compute servers on Azure, and EC2 on Amazon. We also get it with PaaS on Oracle and potentially even with Force.com from SalesForce. No one gives you this ability with SaaS; it is assumed that you will take the SaaS solution as is and not need to look under the covers. Having a single pane of glass for monitoring and provisioning services is important. The tool should do more than tell you how full a disk is or how much cpu is loaded or available. It should dive into the application, show you where the bottlenecks are, and help troubleshoot issues. We could spend weeks diving into Enterprise Manager and the different management packs, but we are on a journey to look at PaaS options from Amazon, Microsoft, and Oracle.

How to clear MDS Cache

Darwin IT - Thu, 2016-05-26 06:06
In an answer to a question on community.oracle.com, I found the following great tip: http://www.soatutor.com/2014/09/clear-mds-cache.html.

In 12cR2 this looks like:
1. Start the System MBean Browser: in the Domain, pull down the Weblogic Domain menu and choose 'System MBean Browser'.
2. Browse for the Application Defined MBeans.
3. Expand it and navigate to oracle.mds.lcm, then choose the server (AdminServer or SOAServer1).
4. Navigate to the node Application em (AdminServer) or soa-infra (SOAServer) -> MDSAppRuntime -> MDSAppRuntime. Click on the Operations tab, and then on clearCache.
5. Click on Invoke. A confirmation is then shown.
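The same clearCache operation can also be scripted through WLST instead of clicked through in the console. The sketch below drives WLST from the shell; the middleware home, the connection URL, the credentials, and the exact MBean ObjectName are assumptions, and the real ObjectName can be read from the ls() output after changing into oracle.mds.lcm.

# A hedged, scripted alternative to the console steps above.
/u01/app/oracle/middleware/oracle_common/common/bin/wlst.sh <<'EOF'
connect('weblogic', 'welcome1', 't3://adminhost.example.com:7001')  # example credentials and URL
custom()                                   # switch to the custom MBean tree
cd('oracle.mds.lcm')
ls()                                       # lists the MDSAppRuntime MBeans; copy the exact name
cd('oracle.mds.lcm:name=MDSAppRuntime,type=MDSAppRuntime,Application=soa-infra,Location=soa_server1')
import jarray
from java.lang import Object, String
invoke('clearCache', jarray.array([], Object), jarray.array([], String))
disconnect()
exit()
EOF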
By the way, often a redeploy of the SOA Composite that calls the WSDL or other artefact helps. But unfortunately not always.

My "Must See" ADF/MAF Sessions at KScope 16

Scott Spendolini - Thu, 2016-05-26 06:00
Yes, you read that right - it's not a typo, nor did one of my kids or wife gain access to my laptop.  It's part of a "blog hop" - where a number of experts made recommendations about KScope sessions that are "must attend" and are not in their core technology.  I picked ADF/MAF, as I don't have any practical experience in either technology, but they are at least similar enough that I would not be totally lost.

In any case, the following sessions in the ADF/MAF track are worth checking out at Kscope 16 this year:

How to Use Oracle ALTA UI to Create a Smashing UI for Web and Mobile
Luc Bors, eProseed NL
When: Jun 28, 2016, Session 12, 4:45 pm - 5:45 pm

I've always liked UI, and Oracle ALTA is a new set of templates that we'll be seeing quite a bit of across a number of new technologies.

Three's Company: Going Mobile with Oracle APEX, Oracle MAF, and Oracle MCS
Frederic Desbiens, Oracle Corporation
When: Jun 27, 2016, Session 6, 4:30 pm - 5:30 pm

I'll admit - anytime there's a comparison of APEX and other similar technologies, it's always interesting to witness the discussion.  If nothing else, there will be a good healthy debate as a result of this session!

Introduction to Oracle JET: JavaScript Extension Toolkit
Shay Shmeltzer, Oracle Corporation
When: Jun 28, 2016, Session 7, 8:30 am - 9:30 am

Oracle JET is a lot more than just charts, and there's a lot of momentum behind this technology.  I'm very interested to learn more and perhaps even see a thing or two that you can do with it, as well as the various integration points that are possible with other technologies.

Build a Mobile App in 60 Minutes with MAF
John King , King Training Resources
When: Jun 27, 2016, Session 5, 3:15 pm - 4:15 pm

Native mobile applications are something that APEX doesn't do, so it would be nice to see how this would be possible, should the need ever arise.

Thanks for attending this ODTUG blog hop! Looking for some other juicy cross-track sessions to make your Kscope16 experience more educational? Check out the following session recommendations from fellow experts!